How to perform CodePipeline ECS deployment based on Git tag

0

Hi fellow AWS humans,

I am running an ECS application that is automatically built and deployed using CodeCommit, CodePipeline, and ECR. The infrastructure is managed with Terraform. My setup is fairly comparable to this tutorial here: https://devops-ecs-fargate.workshop.aws/en/1-introduction.html

The current CI/CD workflow is as follows:

  1. Git push to CodeCommit repo main branch
  2. CodePipeline builds a container image and pushes it to the ECR registry
  3. Deploy the most recently built container to ECS and update the service

This is fine for very simple setups, and I'm OK with doing trunk-based development (which, according to this blog post, is the suggested way of working with CodePipeline: https://aws.amazon.com/blogs/devops/multi-branch-codepipeline-strategy-with-event-driven-architecture/). However, I don't want the most recent build to be pushed straight to production. What I'd like to achieve is a two-step CI/CD process (2 pipelines, 2 separate target environments):

  1. Git push to CodeCommit repo main branch
  2. CodePipeline builds a container image and pushes it to the ECR registry
  3. The most recently built container is deployed in the ECS dev environment
  4. Tagging a specific commit (using git tag) will trigger a separate CodePipeline
  5. The pipeline triggered in step 4 deploys the associated container to the production environment

It seems that the only way to use CodePipeline's built-in features for deployment is to specify a fixed branch name from which all commits will trigger a new build/deployment; I see no way of specifying a Git tag (and no way of specifying any wildcards either). This blog post (https://aws.amazon.com/blogs/devops/adding-custom-logic-to-aws-codepipeline-with-aws-lambda-and-amazon-cloudwatch-events/) suggests that this shortcoming can be worked around using a Lambda function and CloudWatch Events.
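To make this concrete, here is a rough sketch of what I imagine such a Lambda handler could look like (the pipeline name and the version- tag prefix are made up, and I'm assuming the shape of the CodeCommit repository state change event):

    import boto3

    codepipeline = boto3.client("codepipeline")

    PROD_PIPELINE = "my-app-prod-pipeline"  # hypothetical pipeline name

    def handler(event, context):
        """Invoked by a CloudWatch Events / EventBridge rule on CodeCommit
        repository state changes; starts the production pipeline only when
        a matching release tag was created or updated."""
        detail = event.get("detail", {})
        is_tag = detail.get("referenceType") == "tag"
        is_release = detail.get("referenceName", "").startswith("version-")
        if is_tag and is_release:
            codepipeline.start_pipeline_execution(name=PROD_PIPELINE)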

My questions are:

  • Is there any way to achieve the illustrated CI/CD setup with AWS CodePipeline?
  • If so: what would be a best practice for implementing it?

Thanks for any pointers and your help!

Kind regards and big thanks,

Maik

4 Answers
2
Accepted Answer

Another approach is extending your pipeline to first deploy to a dev/staging environment, then adding a manual approval action before deploying to production. This allows you to deploy all changes to dev/staging for testing, then deploy only selected builds to production. The manual approval action pauses the pipeline until a reviewer with the necessary IAM access approves the change. Executions that are rejected, or not approved within seven days, are marked as Failed.

This approach is detailed in Lab 2 of the CI/CD for ECS Workshop, specifically in the Extend Pipeline steps. This workshop also includes advanced deployment methods like blue/green and canary deployments.
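If you ever want to act on the approval step programmatically rather than through the console, a minimal boto3 sketch could look like the following; the pipeline, stage, and action names are hypothetical and need to match your pipeline definition:

    import boto3

    codepipeline = boto3.client("codepipeline")

    # Hypothetical names; adjust to match your pipeline definition.
    PIPELINE = "my-app-pipeline"
    STAGE = "ApproveProd"
    ACTION = "ManualApproval"

    def approve_pending(summary):
        """Approve the manual approval action if it is currently waiting."""
        state = codepipeline.get_pipeline_state(name=PIPELINE)
        for stage in state["stageStates"]:
            if stage["stageName"] != STAGE:
                continue
            for action in stage["actionStates"]:
                # For a pending approval, the token is exposed in the
                # action's latest execution state.
                token = action.get("latestExecution", {}).get("token")
                if action["actionName"] == ACTION and token:
                    codepipeline.put_approval_result(
                        pipelineName=PIPELINE,
                        stageName=STAGE,
                        actionName=ACTION,
                        result={"summary": summary, "status": "Approved"},
                        token=token,
                    )

    approve_pending("Verified on staging, releasing to production.")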

AWS
Noah_L
answered 2 years ago
AWS EXPERT reviewed 2 years ago
  • Thanks Noah! The workshop's Lab 2 "Blue/Green deployment to production" step with a manual approval sounds like an approach worth trying! Thanks a lot for pointing me to it!

1

Using a branch named after the version or environment is better than using a tag: you can then apply break fixes to the deployed version, or continue delivering important feature development requested by the business, after it has been promoted from dev to the higher environments (say SIT, UAT, or production). If you only want two environments, you can limit yourself to two pipelines.

So you can have two or three pipelines corresponding to each of the environments. The important thing is taking care of back-merging pull requests (PRs) raised against the production environment into the lower environments. All of this is an elaborate form of release management in big enterprises.

You could also follow a blue/green or canary deployment approach, where two production-like environments are always available and, after testing, the latest version is swapped in for the older one.

These URLs describe the approach for ECS and EKS: https://ecsworkshop.com/blue_green_deployments/

https://catalog.us-east-1.prod.workshops.aws/v2/workshops/2175d94a-cd79-4ed2-8e7e-1f0dd1956a3a/en-US/

https://docs.aws.amazon.com/AmazonECS/latest/userguide/deployment-type-bluegreen.html

And finally, here are all the change events that can start a pipeline: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-about-starting.html

However, none of this precludes a completely different, customized way of triggering pipelines by tags; using branches would simply be the better option.

AWS
answered 2 years ago
  • Thanks for your insights Madhav. I agree that branch-based deployments are preferable to a tag-based approach, but to be honest, it feels a bit hacky to back-merge PRs between three permanent branches that correspond to the respective environments. The approach introduced by Noah (a manual approval pipeline step which triggers a blue/green deployment to a separate stage, see https://catalog.us-east-1.prod.workshops.aws/v2/workshops/869f7eee-d3a2-490b-bf9a-ac90a8fb2d36/en-US/4-basic/lab2-bluegreen/00-overview) seems to be the more "idiomatic" way of solving things with CodePipeline. Plus, by implementing it this way I can get away with a single (trunk) branch that triggers deployments to 2 separate environments.

    Thanks a lot for your thoughts though!

0

A way to trigger the pipeline based on a Git tag is to use an EventBridge rule (instead of the default polling of the Git repository). The EventBridge rule specification, shown here as a CloudFormation snippet (the resource name and target role are placeholders), would be something like this:

      TagTriggerRule:
        Type: AWS::Events::Rule
        Properties:
          EventPattern:
            source:
              - aws.codecommit
            detail-type:
              - CodeCommit Repository State Change
            detail:
              event:
                - referenceCreated
                - referenceUpdated
              referenceType:
                - tag
              referenceName:
                - prefix: version-
          Targets:
            - Id: codepipeline
              Arn: <Code Pipeline ARN>
              RoleArn: <IAM role ARN allowed to call codepipeline:StartPipelineExecution>

And in your CodePipeline source action configuration, disable the default polling:

          Configuration:
            RepositoryName: ...
            BranchName: ...
            PollForSourceChanges: false

AWS
jputro
answered 2 years ago
0

I've been through the same effort. Yes, it's painful: we are using Git Flow while AWS recommends trunk-based development, but every company has a different approach and you can't enforce one over the other.

How do we use Git Flow?

  1. We create a release branch, e.g. release/1.5.0.
  2. The release branch is first deployed to staging. This is where we do the tricky part: the AWS CodePipeline already exists, but we have disabled automatic deployment and instead initiate the process via a Lambda function to which we pass the target branch and target environment, e.g. release/1.5.0 => staging. The Lambda first overwrites the branch of the pipeline's source action and then starts a release of the pipeline (see the sketch after this list).
  3. Once approved on staging, the same release branch release/1.5.0 is deployed to UAT, e.g. release/1.5.0 => uat. The same Lambda is used here as well.
  4. Once both environments are approved, we deploy it to production, e.g. release/1.5.0 => prod.
  5. Once the release to production is approved and looks good, the release branch release/1.5.0 is merged into the main branch and tagged, e.g. v1.5.0. The reason we do this is that you may hit issues during the deployment to production; in order to quickly revert, you can deploy the same main branch to prod using the same Lambda, e.g. main => prod.
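For reference, here is a minimal sketch of such a Lambda. It assumes a single CodeCommit source action per pipeline; the event fields pipeline and branch are our own convention, not an AWS event shape:

    import boto3

    codepipeline = boto3.client("codepipeline")

    def handler(event, context):
        # Our own invocation convention, e.g.
        # {"pipeline": "app-staging", "branch": "release/1.5.0"}
        pipeline_name = event["pipeline"]
        branch = event["branch"]

        # Fetch the current pipeline definition.
        definition = codepipeline.get_pipeline(name=pipeline_name)["pipeline"]

        # Overwrite the branch on the CodeCommit source action(s).
        for stage in definition["stages"]:
            for action in stage["actions"]:
                if action["actionTypeId"]["category"] == "Source":
                    action["configuration"]["BranchName"] = branch

        # Persist the change, then start a new release.
        codepipeline.update_pipeline(pipeline=definition)
        codepipeline.start_pipeline_execution(name=pipeline_name)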

This still leaves you with a problem when you want to revert to a specific version tag, because I haven't found out whether it's even possible to deploy a tag instead of a release branch. The process should remain manual, driven by a release manager; this is the usual process in Git Flow based startups, since trunk-based development isn't for startups, as they definitely need some level of supervision.

Many companies follow the same process, but they usually do it via Jenkins, since Jenkins has a lot more flexibility than AWS CodePipeline and AWS CodeBuild. The easiest way to do all of this in Jenkins is a parameterized build whose parameters are the branch name and the target environment. If AWS offered the same feature, it would make things a lot easier for us to handle.

answered 6 months ago
