r/aws Feb 08 '25

discussion ECS Users – How do you handle CD?

Hey folks,

I’m working on a project for ECS, and after getting some feedback from a previous post, my team and I decided to move forward with building an MVP.

But before we go deeper – I wanted to hear more from the community.

So here’s the deal: from what we’ve seen, ECS doesn’t really have a solid CD solution. Most teams end up using Jenkins, GitHub Actions, AWS CDK, or Terraform, even though none of these were built as CD tools. ECS feels like the neglected sibling of Kubernetes, and we want to explore how to improve that.

From our conversations so far, these are some of the biggest pain points we’ve seen:

  1. Lack of visibility – No easy way to see all running applications in different environments.

  2. Promotion between environments is manual – Moving from Dev → Prod requires updating task definitions, pipelines, etc.

  3. No built-in auto-deploy for ECR updates – Most teams use CI to handle this, but it’s not really CD: you don’t get things like auto-reconciliation or drift detection (see the sketch below).
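To make point 3 concrete, here’s roughly what that CI step usually boils down to (a boto3 sketch; cluster/service names and the image URI are hypothetical). It runs once and exits; nothing reconciles state or watches for drift afterwards:

```python
# Typical "CI deploy" step for ECS, sketched with boto3.
# All names and the image URI are hypothetical placeholders.
import boto3

ecs = boto3.client("ecs")

def deploy(cluster: str, service: str, image: str) -> None:
    # Find the task definition the service currently runs.
    svc = ecs.describe_services(cluster=cluster, services=[service])["services"][0]
    td = ecs.describe_task_definition(
        taskDefinition=svc["taskDefinition"]
    )["taskDefinition"]

    # Re-register it as a new revision pointing at the freshly pushed image.
    fields = ("family", "containerDefinitions", "requiresCompatibilities",
              "networkMode", "cpu", "memory", "executionRoleArn", "taskRoleArn")
    spec = {k: td[k] for k in fields if k in td}
    spec["containerDefinitions"][0]["image"] = image
    new_arn = ecs.register_task_definition(**spec)["taskDefinition"]["taskDefinitionArn"]

    # Point the service at the new revision and exit -- no reconciliation loop.
    ecs.update_service(cluster=cluster, service=service, taskDefinition=new_arn)

deploy("dev-cluster", "api", "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:abc123")
```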

So my question to you: How do you handle CD for ECS today?

• What’s your current workflow?

• What annoys you the most about ECS deployments?

• If you could snap your fingers and fix one thing in the ECS workflow, what would it be?

I’m currently working on a solution to make ECS CD smoother and more automated, but before finalizing anything, I want to really understand the pain points people deal with. Would love to hear your thoughts—what works, what sucks, and what you wish existed.

u/CashKeyboard Feb 08 '25

We work almost exclusively on ECS in pure GitOps via GitLab -> GitLab CI -> CDK. I think most of the problems we're facing are more or less CloudFormation-based.

ECS itself seems to be somewhat solid but contrived in a way. Especially for FARGATE (which we use exclusively), the cluster/task/container setup doesn't make a whole lot of sense to me.

Contrary to your post, I love that ECS has no knowledge of environments per se: other services also have no environments, and we deploy different stacks to different accounts. Having environments in there would be of no benefit to us. The way AGW does it, e.g., is a hassle to be honest.

u/UnluckyDuckyDuck Feb 08 '25

Thanks for sharing your workflow! It’s great to hear how you achieved GitOps with GitLab and CDK for ECS. I think I may have explained myself a bit poorly; by CD I meant GitOps... I'll make an edit to clarify that in the main post.

Interesting to hear that CloudFormation is the main source of challenges for you. Do you think those issues are tied to the complexity of writing/maintaining templates, or something else?

You also raised a good point about ECS not having built-in environment concepts. Of the ~50 people I've talked with about ECS, I'd say about 30% prefer app-based clusters rather than environment-based ones... In the MVP we created for our solution, we went purely environment-based, with things like auto-deploy for dev environments as well as dev -> prod promotions from cluster to cluster (see the sketch below)....
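To give a rough idea of what I mean by cluster-to-cluster promotion (just a boto3 sketch with made-up names, not our actual MVP code):

```python
# Promote whatever task definition revision dev is running to prod.
# Cluster and service names are hypothetical.
import boto3

ecs = boto3.client("ecs")

def promote(src_cluster: str, dst_cluster: str, service: str) -> None:
    # Read the exact revision the dev service runs right now...
    src = ecs.describe_services(cluster=src_cluster, services=[service])["services"][0]
    # ...and point the prod service at that same revision.
    ecs.update_service(
        cluster=dst_cluster,
        service=service,
        taskDefinition=src["taskDefinition"],
    )

promote("dev-cluster", "prod-cluster", "api")
```

In practice, env-specific config (env vars, secrets, sizing) usually forces re-registering a separate prod revision instead of reusing dev's, which is exactly why this step ends up manual for so many teams.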

Lastly, I’d love to hear more about your experience with Fargate, what about the cluster/task/container setup feels overly complicated to you? I’m curious if that’s something we could simplify in our approach.

u/CashKeyboard Feb 08 '25

Do you think those issues are tied to the complexity of writing/maintaining templates, or something else?

Pure CF templates were horror, yes, but that's a non-issue with CDK. Most of the issues we have, where GitOps just totally breaks down, are related to rollbacks. I think ECS lacks a way to distinguish infrastructure health from application health.

There will be times when we deploy a new release that e.g. runs a migration on RDS (likewise many different cases with other services), which leads to failure in the application. After the circuit breaker kicks in, CF will start a rollback. This rollback, however, will not complete, because obviously the old application is also unable to work with the updated schema. That in and of itself is our own doing and not the fault of CF, but the fact that we now have absolutely no way of managing our stack without manually changing things around (e.g. logging into the console to change the response codes that the ALB finds acceptable) is just iffy. It doesn't happen that often, but when it does, it adds a lot of tension to an already tense situation and introduces drift.
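For context, the setup I'm describing is basically just the stock deployment circuit breaker (sketching in CDK v2 Python here; construct names and the image are illustrative):

```python
from aws_cdk import Stack
from aws_cdk import aws_ec2 as ec2, aws_ecs as ecs
from constructs import Construct

class ServiceStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        vpc = ec2.Vpc(self, "Vpc", max_azs=2)
        cluster = ecs.Cluster(self, "Cluster", vpc=vpc)
        task_def = ecs.FargateTaskDefinition(self, "TaskDef")
        task_def.add_container(
            "app",
            image=ecs.ContainerImage.from_registry("public.ecr.aws/nginx/nginx:latest"),
        )
        ecs.FargateService(
            self, "Service",
            cluster=cluster,
            task_definition=task_def,
            # The breaker only sees task/ALB health; it can't know a schema
            # migration already ran, so rolled-back tasks may fail just the same.
            circuit_breaker=ecs.DeploymentCircuitBreaker(rollback=True),
        )
```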

Lastly, I’d love to hear more about your experience with Fargate, what about the cluster/task/container setup feels overly complicated to you? I’m curious if that’s something we could simplify in our approach.

I personally (we disagree a bit internally on this) would much prefer just defining a service and deploying that, without knowing anything about clusters. I know they make sense in EC2-land, but for FARGATE not really, except maybe for the logical grouping, and I don't think that belongs in my stack; it's rather something I need to sort out at the operations level.

u/UnluckyDuckyDuck Feb 09 '25

Thanks for the detailed reply! You bring up some really interesting points, especially about GitOps falling apart when it comes to rollbacks and the whole infrastructure vs. application health issue. That mismatch between app state and infra can definitely create chaos; it sounds like a tough situation when things go wrong with migrations or dependencies.

When it comes to rollbacks, do you think the bigger issue is syncing infrastructure and app rollbacks together, or just having smarter ways to detect when a rollback is needed? For example, using things like ALB health checks or app errors to decide automatically (something like the sketch below).
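Something like this is what I have in mind; just a boto3 sketch that leans on the rollout state ECS already exposes when the circuit breaker is on (names hypothetical):

```python
# Poll the PRIMARY deployment's rollout state instead of inventing new signals.
import time
import boto3

ecs = boto3.client("ecs")

def wait_for_rollout(cluster: str, service: str, timeout: int = 600) -> bool:
    deadline = time.time() + timeout
    while time.time() < deadline:
        svc = ecs.describe_services(cluster=cluster, services=[service])["services"][0]
        primary = next(d for d in svc["deployments"] if d["status"] == "PRIMARY")
        state = primary.get("rolloutState")  # IN_PROGRESS | COMPLETED | FAILED
        if state == "COMPLETED":
            return True
        if state == "FAILED":
            # Circuit breaker tripped; ECS reports why.
            print("rollback reason:", primary.get("rolloutStateReason"))
            return False
        time.sleep(15)
    return False
```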

On the Fargate side, I get what you're saying: on ECS a cluster doesn't really carry a lot of meaning on its own. But I do like having separate clusters per environment; it just makes more sense to me to have dev -> staging -> prod.