Unanswered Questions tagged with Amazon Elastic Container Service

Bug Report: EventBridge Schedules Console does not set "launchType" for ECS RunTask, breaks Assign Public IP

**Issue Description:** When creating a schedule in the EventBridge Schedules console that uses ECS RunTask as its target, the schedule fails to include `launchType` in its `requestParameters` even when it is set. This breaks the ability to enable Assign Public IP, producing the error "Assign public IP is not supported for this launch type." Likely related to this issue: https://repost.aws/questions/QU7GVF66EhSjuLp04GafMKGQ/event-bridge-scheduler-fails-on-ecs-run-task-with-fargate-launch-type

**Steps to Reproduce:**

1. Create a new schedule in the EventBridge Schedules console.
2. Choose ECS RunTask as the target.
3. Under the RunTask configuration section, set Compute Options > Launch type to `FARGATE`.
4. Set Configure Network Configuration > Auto-assign Public IP to `ENABLED`.

RunTask will fail, with logs in CloudTrail similar to the following:

```
"errorCode": "InvalidParameterException",
"errorMessage": "Assign public IP is not supported for this launch type.",
"requestParameters": {
    "cluster": "arn:aws:ecs:XXXXX:XXXXX:cluster/XXXX",
    "count": 1,
    "enableECSManagedTags": true,
    "enableExecuteCommand": false,
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "assignPublicIp": "ENABLED",
            "subnets": ["subnet-XXXXX"]
        }
    },
    "overrides": {},
    "placementConstraints": [],
    "placementStrategy": [],
    "platformVersion": "1.4.0",
    "startedBy": "chronos-schedule/XXXXX",
    "tags": [],
    "taskDefinition": "arn:aws:ecs:XXXXX:XXXXX:task-definition/XXXXX:X"
},
```

Notably, `launchType` is missing from `requestParameters` even though `assignPublicIp` was successfully set.

**Workaround:** I was able to finish my desired test by [manually creating a schedule using the AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/ecs/run-task.html). After that, I had no further issues implementing the request in the actual Lambda function being developed. However, this issue was an obstacle to testing and debugging.
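For anyone blocked the same way, here is a minimal sketch of the create-it-yourself workaround using boto3 instead of the console (all names, ARNs, and the schedule expression below are placeholders, not values from this report). Because `LaunchType` is passed explicitly, `AssignPublicIp: ENABLED` should be accepted:

```python
# Hedged sketch: create the schedule via the SDK so launchType is set
# explicitly. Every ARN/name here is a placeholder to fill in.
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="my-fargate-schedule",          # hypothetical schedule name
    ScheduleExpression="rate(1 hour)",   # hypothetical schedule
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:ecs:REGION:ACCOUNT:cluster/CLUSTER",    # placeholder
        "RoleArn": "arn:aws:iam::ACCOUNT:role/SCHEDULER_ROLE",  # placeholder
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:REGION:ACCOUNT:task-definition/FAMILY:1",
            "LaunchType": "FARGATE",  # set explicitly, unlike the console
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-XXXXX"],
                    # Valid now, because the launch type is FARGATE:
                    "AssignPublicIp": "ENABLED",
                }
            },
        },
    },
)
```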
0 answers · 0 votes · 2 views · asked 21 minutes ago

ECS Fargate and ALB fail the health check with a timeout (Node.js app)

I have been trying for two days to run a Node.js application on ECS Fargate connected to a load balancer, but it does not pass the health check. I use CDK to create the infrastructure, and I already run other applications with the same setup that work perfectly. The Docker image works perfectly locally with docker-compose. I am not convinced the problem is at the security group or infrastructure level; I think it is more likely on the application side. Maybe you have an idea of where to check. The application is not developed by me and uses Express as its server, listening on port 9000.

IMPORTANT: I created a small test application with the same port and health check endpoint, and it worked perfectly (the health check passed). I also increased the ECS idle timeout and similar settings, but nothing changed.

Looking at the task logs (see image), I see that the application returns a 200 status on the health check endpoint; however, if I call the load balancer at its address, the call ends in a 504 timeout. From an EC2 machine I tried to connect to the task on its private IP address: telnet seems to work, but a curl to the health check endpoint times out. Sometimes, if I call the load balancer several times (perhaps while the task is starting), the application responds (i.e., I see that the endpoint works), but immediately afterwards it times out again.

Any idea?

![Enter image description here](/media/postImages/original/IMBRZWTodnTqu_cmYco5cULQ)
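Not an answer, but a small diagnostic sketch that may help narrow this down: asking the target group directly why the ALB considers the target unhealthy distinguishes a connection timeout from a wrong response code or a port mismatch. The target group ARN below is a placeholder.

```python
# Hedged diagnostic sketch: print each target's health state and the
# ALB's stated reason (e.g. Target.Timeout vs Target.ResponseCodeMismatch).
import boto3

elbv2 = boto3.client("elbv2")

resp = elbv2.describe_target_health(
    TargetGroupArn="arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/NAME/ID"  # placeholder
)
for desc in resp["TargetHealthDescriptions"]:
    health = desc["TargetHealth"]
    print(
        desc["Target"]["Id"],
        health["State"],
        health.get("Reason"),
        health.get("Description"),
    )
```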
0 answers · 0 votes · 5 views · asked 5 hours ago

Updating a service seems to keep the old instance up without displaying it

**Setup**

I am hosting a multi-container application built around a Mosquitto MQTT broker. The service sits behind a Network Load Balancer configured with an EIP to provide a static IP address. Inside the service, a component cyclically posts messages at a fixed time interval, e.g., 5 seconds. The interval can be configured via a file that is pulled from S3 as soon as the container starts, via a startup script containing the lines:

```
aws s3api get-object \
    --bucket test-bucket \
    --key mosquitto.conf \
    /mosquitto/config/mosquitto.conf
```

To my understanding, this should ensure that every newly started container instance loads the latest config.

Multiple users can connect to this application. For testing purposes, I used the command line tool mosquitto_sub, which emulates a subscriber; this subscriber runs on my local machine.

The app shall be updated whenever a new config file is uploaded to S3. Therefore, I wrote a Lambda function, executed on PUT and POST operations, that is successfully triggered with the following code:

```
import boto3

def lambda_handler(event, context):
    client = boto3.client('ecs')
    client.update_service(
        cluster='test-cluster',
        service='test-backend',
        forceNewDeployment=True
    )
```

As expected, in the service dashboard I can see that a second task is fired up as soon as I upload the file. After a few minutes, the old task has been replaced by the new one, which runs fine.

**Problem Formulation**

Before the update, the subscriber displays the received messages at the configured 5-second interval. The new config file reduces the time between messages to 1 second, i.e., I would expect to get messages more frequently. However, the subscriber continues at the 5-second interval. After stopping and restarting the subscriber, I get messages at the 1-second interval, i.e., with the new config value. Connecting a second subscriber also shows that a new subscriber gets messages with the new config, while the older subscriber still runs with the old config.

This does not make sense to me, as the service dashboard clearly shows that only a single task is running. But it seems the old task is still up; it just isn't shown. I would like to know how to ensure that all subscribers are connected to the instance with the latest config as soon as it is up.

Thanks and best wishes, Sebastian
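A plausible explanation (an assumption based on the symptoms, not confirmed from the post): the old subscribers hold long-lived TCP connections through the NLB to the old task, and NLB target groups by default do not terminate established connections when a target deregisters, so existing flows keep reaching the draining task while new connections go to the new one. A minimal sketch that enables connection termination on deregistration; the target group ARN is a placeholder:

```python
# Hedged sketch: force established NLB connections to close when the old
# task deregisters, so MQTT subscribers reconnect to the new task.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/NAME/ID",  # placeholder
    Attributes=[
        # Terminate established connections once the target deregisters.
        {"Key": "deregistration_delay.connection_termination.enabled", "Value": "true"},
        # Optionally shorten the drain window (default is 300 seconds).
        {"Key": "deregistration_delay.timeout_seconds", "Value": "30"},
    ],
)
```

With this attribute set, subscribers should be disconnected when the old task deregisters and, on reconnecting through the static EIP, reach the new task with the fresh config.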
0 answers · 0 votes · 9 views · asked 11 hours ago

How to deploy an ECS service with a task definition that has two images using blue/green deployment?

I configured CodePipeline with CodeBuild and ECS blue/green as the deploy action provider to deploy my ECS service. In my buildspec.yml I created an imageDetail.json like this:

```
{"ImageURI": "imageid"}
```

This setup was working fine when my task definition had only one image. Now my task definition has two images, where one image depends on the other, so I changed my buildspec.yml to create an imageDetail.json like this:

```
[{"ImageURI": "image1"}, {"ImageURI": "image2"}]
```

When configuring the pipeline with CodeBuild and ECS blue/green deploy using this new task definition and the two-image imageDetail.json, it throws the following error: "Exception while trying to parse the image URI file from the artifact: BuildArtifact."

I then tried the same setup with ECS (rolling update) as the action provider instead of ECS blue/green, and it worked. With ECS (rolling update) I needed to create an imagedefinitions.json instead of an imageDetail.json. The imagedefinitions.json created in buildspec.yml looks like this:

```
[{"name": "name1", "imageUri": "image1"}, {"name": "name2", "imageUri": "image2"}]
```

However, I want to use ECS blue/green as the action provider, which requires creating an imageDetail.json in the buildspec.yml file. So, can I create an imageDetail.json with two images, like in imagedefinitions.json? I also asked the same question here: https://stackoverflow.com/questions/73947923/how-to-deploy-an-ecs-service-with-a-task-definition-that-has-2-images-with-blue
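As far as I know (worth verifying against the CodePipeline action reference), imageDetail.json supports only a single `ImageURI`, so a two-element array cannot be parsed. The CodeDeployToECS action instead accepts up to four image input artifacts, each carrying its own single-image imageDetail.json, mapped onto placeholders in taskdef.json. A sketch of the action configuration, with hypothetical application, deployment group, and artifact names:

```
{
    "ApplicationName": "my-app",
    "DeploymentGroupName": "my-dg",
    "TaskDefinitionTemplateArtifact": "BuildArtifact",
    "TaskDefinitionTemplatePath": "taskdef.json",
    "AppSpecTemplateArtifact": "BuildArtifact",
    "AppSpecTemplatePath": "appspec.yaml",
    "Image1ArtifactName": "ImageArtifact1",
    "Image1ContainerName": "IMAGE1_NAME",
    "Image2ArtifactName": "ImageArtifact2",
    "Image2ContainerName": "IMAGE2_NAME"
}
```

Under this layout, taskdef.json would reference `<IMAGE1_NAME>` and `<IMAGE2_NAME>` as the image values for the two containers, and the buildspec would emit two secondary artifacts, each containing its own one-image imageDetail.json.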
0 answers · 0 votes · 32 views · dmr1725 · asked 2 months ago

Automatically stop CodeDeploy ECS Blue/Green deployment on unhealthy containers

We are writing a CI/CD setup in which we remotely trigger a CodePipeline pipeline that fetches its task definition and appspec.yaml from S3 and includes a CodeDeploy ECS Blue/Green step for updating an ECS service. Images are also pushed to ECR remotely. This setup works, and if the to-be-deployed application is not faulty and is well configured, the deployment succeeds in under 5 minutes.

However, if the application does not pass health checks, or the task definition is broken, CodeDeploy will endlessly re-deploy that revision during its "Install" step, creating tens of stopped tasks in the ECS service. According to some, this should time out after an hour, but we have not tested this.

What we would like to achieve is automatic stopping and rollback of these failing deployments. Ideally, CodeDeploy should try only once to deploy the application and, if that fails, immediately cancel the deployment and thus the pipeline run. According to the AWS documentation, no options for this exist in CodeDeploy or in the appspec.yaml that we upload to S3, so we are unsure how to configure this, if it is possible at all. We had two scenarios in mind:

1. After one health check failure, the deployment stops and rolls back;
2. The deployment times out after a period shorter than one hour, ideally under 10 minutes.

We currently have no alarms attached to the CodeDeploy deployment group, but it was my understanding that these alarms only trigger before the installation step, to verify that the deployment can proceed, rather than running alongside the deployment.

In short: how would we configure either of those scenarios, or at least prevent CodeDeploy from endlessly deploying replacement task sets?
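One avenue worth testing (a sketch, not a verified fix): to my knowledge, CodeDeploy does evaluate a deployment group's CloudWatch alarms while the deployment is running, not only before it, so attaching an alarm on the service's unhealthy or stopped tasks, combined with auto-rollback, can break the retry loop. The application, deployment group, and alarm names below are hypothetical:

```python
# Hedged sketch: enable auto-rollback on failure and on alarm for an
# existing ECS blue/green deployment group. If the attached alarm fires
# mid-deployment, CodeDeploy stops and rolls back instead of retrying.
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.update_deployment_group(
    applicationName="my-ecs-app",            # hypothetical
    currentDeploymentGroupName="my-ecs-dg",  # hypothetical
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
    },
    alarmConfiguration={
        "enabled": True,
        # e.g. a CloudWatch alarm on the service's RunningTaskCount or
        # on the target group's UnHealthyHostCount (hypothetical name):
        "alarms": [{"name": "unhealthy-tasks-alarm"}],
    },
)
```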
0 answers · 0 votes · 23 views · asked 2 months ago