ECS Fargate Blue/Green Deployment with EFS not working


I have a CodePipeline that deploys to ECS. My task definition uses EFS access points. The first run of the pipeline works and everything runs well, i.e., the ECS task can connect to EFS and the application runs. The problem starts at the CodeDeploy stage when I rerun the pipeline (with the same task definition that worked the first time). New tasks are created in my ECS service, but they die right away with exit code 0 without logging anything. It only says "Essential container in task exited". I'm a little new to ECS, so am I missing any steps, or is this just how EFS works (i.e., you can't connect to the same access point from two different containers)?

1 Answer

Hello.

Is your ECS task definition set to mount EFS?
It is possible to mount using an EFS access point from multiple clients.
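Note that when IAM authorization is enabled on the EFS volume, the task's IAM role also needs EFS client permissions. Since your first task mounts fine, this is probably already in place, but for reference, a minimal policy sketch (the region, account id, file-system id, and access-point id are placeholders) looks like:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:<region>:<account-id>:file-system/<my_efs_id>",
            "Condition": {
                "StringEquals": {
                    "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:<region>:<account-id>:access-point/<my_access_point_id>"
                }
            }
        }
    ]
}
```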
Please check the following document for general troubleshooting steps when EFS cannot be mounted:
https://repost.aws/knowledge-center/fargate-unable-to-mount-efs
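In addition to that document, one quick check is whether every Availability Zone the service launches tasks into has an available EFS mount target, since a Fargate task placed in an AZ without one cannot mount the file system. A minimal sketch, assuming the AWS CLI and jq are available (the AZ names and subnet AZs below are example values, and since the real call needs credentials, the check runs against a trimmed sample response):

```shell
# With credentials configured, the real data would come from:
#   aws efs describe-mount-targets --file-system-id <my_efs_id>
# Here the same check runs against a trimmed sample response.
cat <<'EOF' > /tmp/mount-targets.json
{"MountTargets": [
  {"MountTargetId": "fsmt-aaa", "AvailabilityZoneName": "us-east-1a", "LifeCycleState": "available"},
  {"MountTargetId": "fsmt-bbb", "AvailabilityZoneName": "us-east-1b", "LifeCycleState": "available"}
]}
EOF

# AZs backing the service's subnets (example values)
service_azs="us-east-1a us-east-1b us-east-1c"

# Flag any AZ that lacks an available mount target
for az in $service_azs; do
  if ! jq -e --arg az "$az" \
      '.MountTargets[] | select(.AvailabilityZoneName == $az and .LifeCycleState == "available")' \
      /tmp/mount-targets.json > /dev/null; then
    echo "No available mount target in $az"
  fi
done
# prints: No available mount target in us-east-1c
```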

EXPERT
answered 7 months ago
  • Yes, it can mount on the first run of the pipeline. But if I rerun the pipeline, I think it cannot mount again.

  • How do you set up the task definition?

  • I have this inside the container definition:

    "mountPoints": [
                    {
                        "sourceVolume": "<volume_name>",
                        "containerPath": "/<path>",
                        "readOnly": false
                    }
                ]
    

    and this is what my volumes section looks like:

    "volumes": [
            {
                "name": "<same_volume_name_from_above>",
                "efsVolumeConfiguration": {
                    "fileSystemId": "<my_efs_id>",
                    "rootDirectory": "/",
                    "transitEncryption": "ENABLED",
                    "authorizationConfig": {
                        "accessPointId": "<my_access_point_id>",
                        "iam": "ENABLED"
                    }
                }
            }
        ]
    

    I assume it wouldn't be a connection or permission issue, as it works perfectly on the first run of the pipeline. Another thing I noticed: if I manually kill the task that is running fine, the ECS service will create a new task, and that task will work. It's as if the first task that connects to EFS works perfectly, while newer tasks, whether they are replacing it or replicas from autoscaling, won't. So it's an "only one working task at a time" scenario, which is really weird.
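One way to dig further is to ask ECS for the exact stop reason and per-container exit details of the failing tasks. A minimal sketch, assuming the AWS CLI and jq (the real calls need credentials, so the jq filter below is shown against a trimmed sample response modeled on the symptom described above):

```shell
# With credentials configured, fetch the real data with:
#   aws ecs list-tasks --cluster <cluster> --desired-status STOPPED
#   aws ecs describe-tasks --cluster <cluster> --tasks <task-arn>
# Below, the filter runs against a trimmed sample response.
cat <<'EOF' > /tmp/stopped-tasks.json
{"tasks": [
  {"stoppedReason": "Essential container in task exited",
   "containers": [{"name": "app", "exitCode": 0}]}
]}
EOF

# Show the stop reason plus each container's name and exit code
jq '.tasks[] | {stoppedReason, containers: [.containers[] | {name, exitCode}]}' /tmp/stopped-tasks.json
```

A stoppedReason mentioning a resource initialization or EFS mount failure points at networking or permissions, whereas a clean exit code 0 with no mount error suggests the container's own entrypoint exited.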

  • Can you confirm that there are no differences in the task definitions?

  • Yes, it's the same task definition, same revision number. All I'm trying to do is rerun the pipeline.
