ECS Container and Docker


Hey Guys

I have been stuck on this for the longest time now: I am deploying Python code as a dockerized container.

I am doing all this in Python CDK.

Here is how I am creating the cluster:

        vpc_test = _ec2.Vpc.from_lookup(self, "VPC",
                                        vpc_id="vpc-64c37b02"
                                        )
        # Setting up the container to run the job
        cluster = _ecs.Cluster(self, "ClusterToGetFile",
                               vpc=vpc_test
                               )
        task_definition = _ecs.FargateTaskDefinition(self, "TaskDefinition",
                                                     cpu=2048,
                                                     memory_limit_mib=4096
                                                     )
        task_definition.add_container("getFileTask",
                                      image=_ecs.ContainerImage.from_asset(directory="assets",
                                                                           file="Dockerfile-ecs-file-download"))

Here is the Dockerfile (Dockerfile-ecs-file-download):

FROM python:3.9
WORKDIR /usr/app/src
COPY marketo-ecs-get-file/get_ecs_file_marketo.py ./
COPY marketo-ecs-get-file/requirements.txt ./
COPY common_functions ./
RUN pip3 install -r requirements.txt --no-cache-dir
CMD ["python", "./get_ecs_file_marketo.py"]

All I am trying to do, to begin with, is to deploy and run the task manually.

All I have in the get_ecs_file_marketo.py file is:

import logging
logging.info("ECS Container has started.")

However, when I deploy the task, I get this error:

Stopped reason
Essential container in task exited

I am not able to figure out what I am doing incorrectly.

Appreciate any help or directions here.

Regards Tanmay

Asked 2 years ago · Viewed 1,116 times
2 Answers

I realized that the container was running; however, the logs were not being sent to CloudWatch by default. I had to add the logging option in add_container to send the logs to CloudWatch.
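
For reference, a minimal sketch of what that logging option might look like, assuming aws_cdk.aws_ecs is imported as _ecs as in the question (the stream prefix is just a placeholder):

        # Sketch only: route the container's stdout/stderr to CloudWatch Logs
        # via the awslogs driver ("getFileTask" stream prefix is a placeholder).
        task_definition.add_container("getFileTask",
                                      image=_ecs.ContainerImage.from_asset(directory="assets",
                                                                           file="Dockerfile-ecs-file-download"),
                                      logging=_ecs.LogDrivers.aws_logs(stream_prefix="getFileTask"))

With the awslogs driver in place, whatever the script writes to stdout/stderr ends up in the CloudWatch log group CDK creates for the container. Note as well that Python's root logger only emits WARNING and above by default, so a logging.basicConfig(level=logging.INFO) call (or a plain print) may be needed before the logging.info message shows up at all.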

Answered 2 years ago

At the risk of not strictly answering the question (I don't like CDK too much and prefer Troposphere): in case the problem comes from the way you defined the ECS service/task definitions, see if you can get somewhere with AWS Copilot / ECS Compose-X. I would also highly recommend improving your Dockerfile for security reasons, e.g. so it does not run as the root user.

Essential container in task exited will come with a return code if the container started at all and then failed (anything other than 0).

Now, if you intend to run the task once (and exit 0 is expected), you probably don't want an ECS service; create a scheduled task instead, which starts the containers based on triggers (time, events, etc.) and is not expected to stay alive the whole time the way a service is.

Also note, for future work: if you have only one container in the task definition, that container is "essential", which means it has to be up and running / healthy. If you have more than one container in the task definition, some can be expected to run and exit (SUCCESS expects return 0, otherwise any return code will do), others to be healthy, and some others just to be RUNNING.
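
To illustrate that last point, here is a rough sketch of my own (container names and images are placeholders, not code from the question) of a Fargate task definition that mixes one essential container with a helper that is allowed to run and exit:

        # Sketch only: one essential container plus a non-essential helper that may exit.
        multi_task = _ecs.FargateTaskDefinition(self, "MultiContainerTask",
                                                cpu=2048,
                                                memory_limit_mib=4096)
        multi_task.add_container("main-worker",
                                 image=_ecs.ContainerImage.from_registry("public.ecr.aws/docker/library/python:3.9"),
                                 essential=True)    # the task stops when this container stops
        multi_task.add_container("one-shot-helper",
                                 image=_ecs.ContainerImage.from_registry("public.ecr.aws/docker/library/busybox:latest"),
                                 essential=False)   # may run to completion and exit without stopping the task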

Answered 2 years ago
  • Hi John, thanks for the feedback.

    My plan is to use ECS RunTask as part of a Step Function. So when a Lambda is processing some data and needs to do a data pull, it will call RunTask for this container (see the sketch below).

    Ideally I would have wanted the Lambda in the Step Function to do it all; however, the time limits cause the Lambda to die before the complete file is downloaded and pushed to S3. This will be a regular exercise, but the Lambda upfront is still required, as the job needs to enqueue a request and keep checking the file status before the ECS container can go and start downloading the file.
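
For that pattern, here is a rough sketch of my own (construct IDs are placeholders; it assumes the cluster and task_definition variables from the question are in scope) of wiring an EcsRunTask state into a Step Functions state machine with CDK:

        # Sketch only: run the Fargate task from a Step Functions state machine.
        # The imports normally live at module level next to the other aws_cdk imports.
        from aws_cdk import aws_stepfunctions as _sfn
        from aws_cdk import aws_stepfunctions_tasks as _sfn_tasks

        # RUN_JOB makes the state wait until the container stops before moving on.
        run_get_file_task = _sfn_tasks.EcsRunTask(self, "RunGetFileTask",
                                                  cluster=cluster,
                                                  task_definition=task_definition,
                                                  launch_target=_sfn_tasks.EcsFargateLaunchTarget(),
                                                  integration_pattern=_sfn.IntegrationPattern.RUN_JOB)

        state_machine = _sfn.StateMachine(self, "GetFileStateMachine",
                                          definition=run_get_file_task)

A Lambda task state that enqueues the request and polls the file status can then be chained in front of this state in the same definition, so the long-running download itself never has to fit inside the Lambda time limit.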
