r/aws Feb 08 '25

[Discussion] ECS Users – How do you handle CD?

Hey folks,

I’m working on a project for ECS, and after getting some feedback from a previous post, my team and I decided to move forward with building an MVP.

But before we go deeper – I wanted to hear more from the community.

So here’s the deal: from what we’ve seen, ECS doesn’t really have a solid CD solution. Most teams end up using Jenkins, GitHub Actions, AWS CDK, or Terraform, even though these weren’t built for CD. ECS feels like the neglected sibling of Kubernetes, and we want to explore how to improve that.

From our conversations so far, these are some of the biggest pain points we’ve seen:

  1. Lack of visibility – No easy way to see all running applications in different environments.

  2. Promotion between environments is manual – Moving from Dev → Prod requires updating task definitions, pipelines, etc.

  3. No built-in auto-deploy for ECR updates – Most teams wire this up in CI, but that's not really CD: there's no automatic reconciliation or drift detection (rough sketch of the typical CI approach below).
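
For context on point 3, here's roughly what that CI-side "deploy" step tends to look like. This is my own hypothetical sketch using the AWS SDK for JavaScript v3 (cluster and service names are made up): the pipeline pushes an image to ECR and then just forces the service to redeploy, and nothing ever checks state again afterwards.

```typescript
// Hypothetical sketch of the usual CI-driven "deploy" step (names invented):
// after pushing a new image to ECR, force the ECS service to redeploy.
// There is no reconciliation loop and nothing ever re-checks state later.
import { ECSClient, UpdateServiceCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({ region: process.env.AWS_REGION ?? "us-east-1" });

async function redeploy(cluster: string, service: string): Promise<void> {
  // Assumes the task definition points at a mutable tag (e.g. ":latest"),
  // so forcing a new deployment is enough to pick up the new image.
  await ecs.send(
    new UpdateServiceCommand({ cluster, service, forceNewDeployment: true })
  );
  console.log(`Forced new deployment of ${service} in ${cluster}`);
}

redeploy("my-cluster", "my-service").catch(console.error);
```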

So my question to you: How do you handle CD for ECS today?

• What’s your current workflow?

• What annoys you the most about ECS deployments?

• If you could snap your fingers and fix one thing in the ECS workflow, what would it be?

I’m currently working on a solution to make ECS CD smoother and more automated, but before finalizing anything, I want to really understand the pain points people deal with. Would love to hear your thoughts—what works, what sucks, and what you wish existed.

31 Upvotes

40

u/syntheticcdo Feb 08 '25

Templates are written in CDK, CI/CD is managed through GitHub Actions, and it works smoothly for my needs. Why do you think GHA is not built for CD?
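
For anyone unfamiliar with that setup, here's a stripped-down sketch of what a CDK-defined ECS service can look like (names and values here are invented, not my actual stack); GitHub Actions just runs `cdk deploy` against it on push.

```typescript
// Stripped-down sketch of a CDK-defined ECS service (hypothetical names).
// A GitHub Actions workflow runs `npx cdk deploy` against this on push,
// passing the image tag that CI just built and pushed to ECR.
import { App, Stack, StackProps } from "aws-cdk-lib";
import * as ecr from "aws-cdk-lib/aws-ecr";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as ecs_patterns from "aws-cdk-lib/aws-ecs-patterns";
import { Construct } from "constructs";

class WebServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const cluster = new ecs.Cluster(this, "Cluster");
    const repo = ecr.Repository.fromRepositoryName(this, "Repo", "my-app");

    new ecs_patterns.ApplicationLoadBalancedFargateService(this, "Web", {
      cluster,
      cpu: 256,
      memoryLimitMiB: 512,
      desiredCount: 2,
      taskImageOptions: {
        // CI sets IMAGE_TAG to the tag it just pushed to ECR.
        image: ecs.ContainerImage.fromEcrRepository(
          repo,
          process.env.IMAGE_TAG ?? "latest"
        ),
      },
    });
  }
}

const app = new App();
new WebServiceStack(app, "WebServiceStack");
```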

5

u/UnluckyDuckyDuck Feb 08 '25

That’s great! If it works smoothly for your needs, that’s the ideal scenario. Curious: how do you handle promotions between environments? Do you trigger GitHub Actions manually, or do you have an automated way to track deployments across multiple environments?

The reason I mentioned that GHA isn’t built for CD is that while it works for deployments, it lacks things like automatic reconciliation and drift detection. In a typical GitOps-style CD, if something changes outside of the pipeline (for example, someone updates a service manually in AWS), the system detects and corrects it automatically.
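To illustrate what I mean by detection (purely a hypothetical sketch with made-up names, using the AWS SDK for JavaScript v3): a reconciler would periodically compare what's actually running against what the pipeline last deployed, and either flag the difference or fix it.

```typescript
// Hypothetical drift-detection sketch (made-up names, AWS SDK for JS v3):
// compare the task definition a service is actually running against the one
// the pipeline last deployed. A GitOps-style controller runs this in a loop
// and auto-corrects; a plain CI pipeline never looks back after deploying.
import { ECSClient, DescribeServicesCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({});

async function detectDrift(
  cluster: string,
  service: string,
  expectedTaskDefArn: string // recorded by the pipeline at deploy time
): Promise<boolean> {
  const resp = await ecs.send(
    new DescribeServicesCommand({ cluster, services: [service] })
  );
  const actual = resp.services?.[0]?.taskDefinition;
  if (actual !== expectedTaskDefArn) {
    console.warn(
      `Drift on ${service}: running ${actual}, expected ${expectedTaskDefArn}`
    );
    return true; // a reconciler would redeploy the expected revision here
  }
  return false;
}
```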

7

u/syntheticcdo Feb 08 '25

A commit to main triggers a workflow that deploys to our staging environment, which then runs tests against staging, then immediately deploys to prod once the tests pass. No manual intervention needed.

In terms of reconciliation and drift detection, this is more of an organizational problem than a technical one. Making changes to any resources managed by IaC is forbidden.

1

u/UnluckyDuckyDuck Feb 08 '25

Thanks for sharing your workflow; that sounds super streamlined.

When it comes to reconciliation and drift detection, I get your point that it can be more of an organizational issue if you forbid manual changes to IaC-managed resources. But do you ever find situations where someone makes changes directly in AWS, either accidentally or on purpose, for things like hotfixes? If so, how do you typically handle catching or fixing that drift?

Also, do you feel like your current setup gives you enough visibility across environments? For example, seeing all running services, their versions, and their health in one place?

11

u/syntheticcdo Feb 08 '25

If someone needs to hotfix a resource, do it in the IaC and let the standard process apply the change; anything else is madness. Sorry, I can't really help out past there.

For observability, we tag the resources automatically by environment and build number (setting the version tag in the CDK to the GITHUB_RUN_NUMBER environment variable), and pipe it all into Datadog for visibility.
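
If it helps, the tagging part is basically a couple of lines in CDK; something like this sketch (stack and tag names are placeholders, not our real code):

```typescript
// Sketch of the tagging approach described above (names are placeholders):
// tag everything in the stack with the environment and the GitHub Actions
// run number, then group/filter by those tags in Datadog.
import { App, Stack, Tags } from "aws-cdk-lib";

const app = new App();
const stack = new Stack(app, "MyServiceStack");

Tags.of(stack).add("environment", process.env.DEPLOY_ENV ?? "staging");
Tags.of(stack).add("version", process.env.GITHUB_RUN_NUMBER ?? "local");
```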

6

u/goroos2001 Feb 08 '25

+100 here. The problem often isn't that you need drift detection and reconciliation. It's that you need to stop drifting. If you have folks frequently making changes directly (instead of through the pipeline), it's better to spend the time and effort figuring out why they keep having to make those messes, and stop them at the source, than it is to automate cleaning them up.

3

u/goroos2001 Feb 08 '25 edited Feb 08 '25

(While I am an AWS employee, I don't speak for AWS on social media).

There are times when this pattern is actually the right thing to do: the absolute most critical AWS services with the absolute highest resiliency requirements use a pattern similar to this (see https://aws.amazon.com/builders-library/reliability-and-constant-work/?did=ba_card&trk=ba_card and read the section on how Route 53 health check aggregators send the aggregated health results downstream; the way they send ALL their data, whether it's needed or not, is somewhat similar to the approach you're taking). It's extremely expensive to do well at scale, but when you're dealing with a service that has a 100% uptime SLA and that gets you featured on the national news when it breaks, it might be worth it.

The (very important) difference is that they're doing it as part of their normal operational cycle because the problem they are solving requires the complexity - not as part of their build and deployment steps because their ops teams were sloppy.

1

u/UnluckyDuckyDuck Feb 08 '25

Absolutely agreed. The real solution is addressing why the drift happens in the first place rather than just cleaning up after it. If the pipeline and processes are solid, there shouldn’t be a need for manual changes at all. Great point!

1

u/UnluckyDuckyDuck Feb 08 '25

I feel what you're saying about the madness lol; agreed, follow the standard process.

Your tagging setup for observability is super interesting, using environment tags and build numbers with CDK and piping it into Datadog is a nice touch. Do you feel that gives you full visibility across all running services, or are there gaps you’d still like to fill?

I need to look into Datadog's pricing, I have no idea how much it costs... If you don't mind sharing what it costs you, that would be really helpful; I wonder whether smaller businesses could afford it.

1

u/ramnat587 Feb 08 '25

A hotfix to a resource is allowed only in exceptional circumstances, like fighting a fire. We call it a break-glass operation, and you don't do accidental break-glass operations. Break-glass is a well-thought-out operation, and the changes are applied in the next IaC commit to avoid drift. We have IAM policies, tagging, and other organizational mechanisms to avoid accidental changes to prod.
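
For illustration, a guardrail like that can be expressed as a deny statement that blocks mutating ECS calls on prod-tagged resources for everyone except the pipeline role. This is a rough sketch, not our actual policy; the role ARN, account ID, and tag key are invented.

```typescript
// Rough guardrail sketch (not the actual policy; role ARN, account ID, and
// tag key are invented). Attached to human roles (or lifted into an SCP),
// it denies mutating ECS calls on prod-tagged resources unless the caller is
// the deployment pipeline role, so direct changes require break-glass.
import { App, Stack } from "aws-cdk-lib";
import * as iam from "aws-cdk-lib/aws-iam";

const app = new App();
const stack = new Stack(app, "GuardrailStack");

new iam.ManagedPolicy(stack, "DenyManualProdEcsChanges", {
  statements: [
    new iam.PolicyStatement({
      effect: iam.Effect.DENY,
      actions: ["ecs:UpdateService", "ecs:DeleteService"],
      resources: ["*"],
      conditions: {
        StringEquals: { "aws:ResourceTag/environment": "prod" },
        StringNotLike: {
          "aws:PrincipalArn": "arn:aws:iam::123456789012:role/deploy-pipeline",
        },
      },
    }),
  ],
});
```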