
Questions tagged with Amazon Elastic Container Service


How to deploy an ECS service with a task definition that has two images using blue/green deployment?

I configured CodePipeline with CodeBuild and ECS blue/green as the deploy action provider for my ECS service. In my buildspec.yml I create an imageDetail.json like this:

```
{"ImageURI": "imageid"}
```

This setup worked fine while my task definition had only one image. Now my task definition has two images, one of which depends on the other, so I changed my buildspec.yml to create an imageDetail.json like this:

```
[{"ImageURI":"image1"}, {"ImageURI":"image2"}]
```

When configuring the pipeline with CodeBuild and ECS blue/green deploy using this new task definition and the two-image imageDetail.json, it throws the following error: "Exception while trying to parse the image URI file from the artifact: BuildArtifact."

I then tried the same setup with ECS (rolling update) as the action provider instead of ECS blue/green, and it worked. With ECS (rolling update) as the action provider I needed to create an imagedefinitions.json instead of an imageDetail.json. The imagedefinitions.json created in buildspec.yml looks like this:

```
[{"name":"name1","imageUri":"image1"}, {"name":"name2","imageUri":"image2"}]
```

However, I want to use ECS blue/green as the action provider, which requires creating an imageDetail.json in the buildspec.yml file. So, can I create an imageDetail.json with two images, as in imagedefinitions.json? I also asked the same question here: https://stackoverflow.com/questions/73947923/how-to-deploy-an-ecs-service-with-a-task-definition-that-has-2-images-with-blue
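For reference, the two artifact formats differ in shape, not just in field names, which appears to be what the parser is complaining about. A minimal sketch of writing both files (container names and image URIs are placeholders, not real values):

```python
import json

# Format consumed by the "Amazon ECS" (rolling update) deploy action:
# a JSON *list* with one entry per container in the task definition.
image_definitions = [
    {"name": "name1", "imageUri": "image1"},  # placeholder container name / image URI
    {"name": "name2", "imageUri": "image2"},
]
with open("imagedefinitions.json", "w") as f:
    json.dump(image_definitions, f)

# Format consumed by the "Amazon ECS (Blue/Green)" (CodeDeploy) action:
# a single JSON *object* with one ImageURI -- which is why a two-element
# list in imageDetail.json fails to parse in that action.
image_detail = {"ImageURI": "image1"}
with open("imageDetail.json", "w") as f:
    json.dump(image_detail, f)
```

As far as I can tell from the documented action configuration, the blue/green action handles multiple images through multiple input artifacts (each with its own single-object imageDetail.json) mapped to `<IMAGE1_NAME>`-style placeholders in the task definition template, rather than through a multi-element imageDetail.json, but I have not confirmed this end to end.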
0
answers
0
votes
13
views
asked 3 days ago

Problem with Application Load Balancer rules: health check only responds on the default rule

Hi everyone, I have 3 microservices running on an **ECS cluster**. Each microservice is launched by a **Fargate task** and runs in its own Docker container.

* *Microservice A* responds on port 8083.
* *Microservice B* responds on port 8084.
* *Microservice C* responds on port 8085.

My configuration consists of two public subnets, two private subnets, an internet gateway and a NAT gateway, as well as two security groups, one for the Fargate services and one for the ALB. On the security groups I have enabled inbound traffic on all ports.

I have defined a listener for the ALB that responds on port 80 and wrote some path-based rules to route requests to the appropriate target group (*every target group is a Target type*):

![Enter image description here](/media/postImages/original/IM8oFOWQXjQEuDjdKe3PeGgw)

Only the health check of the target group behind the default rule responds (though I suspect it all happens randomly), and consequently only the service reachable on port 8083 works:

![Enter image description here](/media/postImages/original/IMtOk5-EqJRrmxLa49ium6hg)

The remaining target groups are **unreachable**. What I notice is that in the "*Registered targets*" section the assigned IP addresses change continuously. For example:

![Enter image description here](/media/postImages/original/IMkdJ_RNqsTJazJ3J8j4foqw)

![Enter image description here](/media/postImages/original/IMCm7LLgy1QJKk0JsLC3XlGg)

But every registered IP generates a timeout. It can happen quite randomly that a certain IP address is registered correctly. These are the ECS configurations of one of the unresponsive services:

![Enter image description here](/media/postImages/original/IMOdt86JdpS_2paN_elspK5g)

What is the problem and how can I solve it? Thank you.

**UPDATE 1**

I tried to add a new instance for microservice A. For the new IP (10.0.0.137) the health check is not responding. After a few minutes, a new IP (10.0.0.151) is provisioned and registered correctly:

![Enter image description here](/media/postImages/original/IMUcZubrfCRrGo-fpqYAvSJQ)

**UPDATE 2**

This is really strange behavior. **All services are now connected correctly**, after several hours of failed attempts. It looks like an IP address assignment problem: before finding a working address, AWS makes several attempts with different IP addresses until it randomly finds one that works. These are the CIDRs of my subnets:

* private_subnets = ["10.0.0.128/28", "10.0.0.144/28"]
* public_subnets = ["10.0.0.0/28", "10.0.0.16/28"]

While these are the IPs that connected successfully:

1. 10.0.0.136 (microservice A, instance 1)
2. 10.0.0.151 (microservice A, instance 2)
3. 10.0.0.153 (microservice A, instance 3)
4. 10.0.0.152 (microservice B)
5. 10.0.0.142 (microservice C)
3
answers
0
votes
40
views
asked 4 days ago

Automatically stop CodeDeploy ECS Blue/Green deployment on unhealthy containers

We are writing a CI/CD setup in which we remotely trigger a CodePipeline pipeline that fetches its task definition and appspec.yaml from S3 and includes a CodeDeploy ECS Blue/Green step for updating an ECS service. Images are pushed to ECR, also remotely. This setup works, and if the to-be-deployed application is well configured and not faulty, the deployment succeeds in under 5 minutes.

However, if the application does not pass health checks, or the task definition is broken, CodeDeploy will continuously re-deploy this revision during its "Install" step without end, creating tens of stopped tasks in the ECS service. According to some, this should time out after an hour, but we have not tested this.

What we would like to achieve is automatic stopping and rollback of these failing deployments. Ideally, CodeDeploy should try only once to deploy the application and, if that fails, immediately cancel the deployment and thus the pipeline run. According to the AWS documentation, no options for this exist in CodeDeploy or in the appspec.yaml that we upload to S3, so we are unsure how to configure this, if it is possible at all. We have two desired scenarios in mind:

1. After one health check failure, the deployment stops and rolls back;
2. The deployment times out after a period shorter than one hour, ideally under 10 minutes.

We currently have no alarms attached to the CodeDeploy deployment group, but it was my understanding that these alarms only trigger before the installation step, to verify that the deployment can proceed, rather than running alongside the deployment.

In short: how would we configure either of these scenarios, or at least prevent CodeDeploy from endlessly deploying replacement task sets?
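To frame the question more concretely: the rollback and alarm settings live on the deployment group, not in the appspec. A sketch of the configuration we believe is relevant, applied via boto3 (application, group, and alarm names are placeholders; we have not verified that this actually cuts the retry loop short):

```python
import json

# Placeholder names -- substitute your own application / deployment group.
APP_NAME = "my-ecs-app"
GROUP_NAME = "my-ecs-deployment-group"

# Roll back automatically when the deployment fails outright or when a
# watched CloudWatch alarm fires during the deployment.
auto_rollback = {
    "enabled": True,
    "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
}
alarm_config = {
    "enabled": True,
    "alarms": [{"name": "my-service-unhealthy-hosts"}],  # placeholder alarm name
}

# With boto3 this would be applied as:
#   import boto3
#   codedeploy = boto3.client("codedeploy")
#   codedeploy.update_deployment_group(
#       applicationName=APP_NAME,
#       currentDeploymentGroupName=GROUP_NAME,
#       autoRollbackConfiguration=auto_rollback,
#       alarmConfiguration=alarm_config,
#   )
print(json.dumps({"autoRollbackConfiguration": auto_rollback,
                  "alarmConfiguration": alarm_config}, indent=2))
```

What remains unclear to us is whether the endless Install-phase retries ever count as a `DEPLOYMENT_FAILURE` before the one-hour timeout, or whether only an alarm-based stop can interrupt them sooner; that is the crux of the question.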
0
answers
0
votes
19
views
asked 8 days ago