Using CodeCatalyst to deploy Terraform Infrastructure as Code

13 minute read
Content level: Advanced

This article will guide you through setting up a CodeCatalyst workflow that can be used to deploy Terraform. Terraform is a tool used to write, plan, and deploy Infrastructure as Code (IaC) to an AWS account.

Introduction

Amazon CodeCatalyst is a unified development service that includes source repositories, CI/CD workflows, issue management, development environments, and other functional areas.

Terraform, a HashiCorp tool, deploys and configures infrastructure resources, especially in cloud environments. This article explains how to use CodeCatalyst workflows for Terraform deployments. Workflows monitor code repositories and trigger automated processes for building, validating, and deploying resources. In this article, we'll deploy a simple lambda function with a function URL for direct invocation, along with an IAM role for the function to run as and a CloudWatch log group to capture its logs.

Prerequisites

  • A CodeCatalyst space with an associated Builder ID, configured with a billing account.
  • Within that space, a project associated with an AWS account that has the default role CodeCatalyst deploys, plus the additional permissions described below.
  • An S3 bucket and DynamoDB table in that AWS account for Terraform state management, as described in the Terraform documentation (see the sketch below for one way to create them).
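
If you don't already have a state bucket and lock table, the sketch below shows one way to create them with CloudFormation. The resource names here are illustrative placeholders you should change; the LockID hash key is the attribute Terraform's S3 backend requires for DynamoDB state locking.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal Terraform state backend (illustrative, adjust names before use)
Resources:
  TerraformStateBucket:
    Type: AWS::S3::Bucket
    Properties:
      # S3 bucket names are globally unique; replace with your own
      BucketName: my-terraform-state-bucket-example
      VersioningConfiguration:
        Status: Enabled
  TerraformLockTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: terraform-lock
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: LockID   # key name the Terraform S3 backend expects
          AttributeType: S
      KeySchema:
        - AttributeName: LockID
          KeyType: HASH
```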

CodeCatalyst Deployment Role

Since CodeCatalyst sits outside our AWS accounts, we need to tell CodeCatalyst about our accounts, as documented here, by adding an AWS account to our space; this includes setting up a role within our accounts that CodeCatalyst can use. Whilst some default IAM permissions are added, we will need to add some specific permissions for the purposes of this article. First, we will add the following policy to allow Terraform to manage its state file, replacing the placeholder values with the appropriate ARNs:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "_insert_state_bucket_arn_" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject" ], "Resource": [ "_insert_state_bucket_arn_", “_insert_state_bucket_arn_/*” ] }, { "Effect": "Allow", "Action": [ "dynamodb:DescribeTable", "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem" ], "Resource": "_insert_state_dynamodb_table_arn_" } ] }

Additionally, we must grant Terraform the necessary permissions to manage the resources it deploys. To achieve this, we will add the following policy:

{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "iam:ListInstanceProfilesForRole", "iam:PassRole", "iam:ListAttachedRolePolicies", "iam:ListRolePolicies" ], "Resource": "arn:aws:iam::*:role/*" }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": [ "logs:ListTagsLogGroup", "lambda:CreateFunction", "lambda:TagResource", "logs:DescribeLogGroups", "lambda:ListFunctions", "logs:DeleteLogGroup", "lambda:GetFunction", "lambda:ListAliases", "logs:PutRetentionPolicy", "logs:CreateLogGroup", "lambda:CreateAlias" ], "Resource": "*" } ] }

Adding a CodeCatalyst Repository

We will now add a repository which we will use to store some Terraform code to deploy a lambda. We will then add a workflow to deploy the lambda to an AWS account.


  1. Log in to CodeCatalyst via https://codecatalyst.aws and select Source Repositories from the project summary.


  2. Click on Add repository and select Create repository from the dropdown.
  3. On the Create source repository screen, enter the name demoCode in the Name field and click the Create button.
  4. This will create a new repository containing only a template README.md file.

Adding Source Code For A Lambda Function

Now that we have a repository, we will add files describing the infrastructure we're going to deploy. First, we'll create some source code for a Python lambda function.

  5. Click on the **Create file** button in the Files section of the **repository** screen.


  6. In the File name field, enter source/lambda_function.py to create a new Python file.


  7. Add the following Python code into the editing area:

```python
import json

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps({
            'message': 'Hello world'
        })
    }
```

  8. To save the file, click the Commit button at the top of the editor, which will open the commit form.


  9. Click the Commit button at the bottom of the form to commit the change to the repository.

Adding Terraform To The Repository

We now have code that we can use to create a lambda function. Next, we'll add some Terraform code to deploy the lambda.

  10. In the sidebar on the left, ensure the Code dropdown is expanded, and click on Source Repositories. Click on demoCode in the list of repositories.
  11. Click on the Create file button again in the Files section of the repository screen.
  12. In the File name field, enter terraform/demo.tf to create a new Terraform file.
  13. Add the following Terraform code into the editing area, replacing eu-west-1 in the provider configuration with your preferred region. The backend block configures Terraform to store its state file in an S3 bucket, and the remaining resources define the lambda and its supporting infrastructure.

```hcl
# Configure the AWS provider, ready to store the state file in S3
provider "aws" {
  region = "eu-west-1"
}

terraform {
  backend "s3" {}
}

# Create an IAM role we can use to execute the lambda
resource "aws_iam_role" "lambda_execution_role" {
  name_prefix = "codecatalyst-demo-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      },
    ]
  })
}

# Assign the basic lambda execution policy to the role
resource "aws_iam_role_policy_attachment" "lambda_role_basic_policy_attachment" {
  role       = aws_iam_role.lambda_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

# Generate a zip file containing our lambda source
data "archive_file" "lambda_function_zip" {
  type        = "zip"
  source_file = "../source/lambda_function.py"
  output_path = "../source/lambda_function.zip"
}

# Generate a random suffix for our lambda function name
resource "random_id" "id" {
  byte_length = 8
}

# Create the lambda function
resource "aws_lambda_function" "lambda_function" {
  function_name    = "test-lambda-function-${random_id.id.hex}"
  filename         = data.archive_file.lambda_function_zip.output_path
  source_code_hash = data.archive_file.lambda_function_zip.output_base64sha256
  role             = aws_iam_role.lambda_execution_role.arn
  description      = "Demo Lambda"
  handler          = "lambda_function.lambda_handler"
  runtime          = "python3.9"
}

# Create a CloudWatch Log Group for the lambda, with a short retention period
resource "aws_cloudwatch_log_group" "lambda_log_group" {
  name              = "/aws/lambda/${aws_lambda_function.lambda_function.function_name}"
  retention_in_days = 1
}

# Add a URL to the lambda, and output it so we can invoke the function
resource "aws_lambda_function_url" "lambda_url" {
  function_name      = aws_lambda_function.lambda_function.function_name
  authorization_type = "NONE"
}

output "lambda_url_url" {
  value = aws_lambda_function_url.lambda_url.function_url
}
```

  14. To save the file, click the Commit button at the top of the editor, and once in the Commit file form, click the Commit button.

CodeCatalyst Workflows

As mentioned earlier, workflows are the backbone of CI/CD deployments within CodeCatalyst. These workflows are stored as YAML files within the .codecatalyst/workflows folder in your repository. CodeCatalyst offers three approaches for creating and modifying workflows:

  1. YAML editor: work directly with the YAML version of the workflow in the workflow editor.
  2. Visual editor: a graphical interface with dialog boxes to define each action.
  3. Direct file editing: edit the workflow YAML file in the repository like any other source file.

Creating a Workflow

We now have Terraform and Python code stored in our repository. Next, we will add a workflow to deploy this to our AWS account.

  15. In the sidebar on the left, ensure the CI/CD dropdown is expanded, and click on Workflows, then click on Create workflow.
  16. On the Create workflow screen, click the Create button to create a workflow in our repository on the main branch.

  17. This will place us in the workflow editor.
  18. Replace the workflow name on line 1 with deployTerraform.
  19. The remainder of the generated YAML defines what will trigger our workflow, in this case a push (i.e., a commit) to the main branch, as shown in the sketch below.
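
For reference, after renaming, the top of the workflow file should look roughly like the following (a sketch; the exact boilerplate CodeCatalyst generates may differ slightly):

```yaml
Name: deployTerraform
SchemaVersion: "1.0"

# Run the workflow on every push (commit) to the main branch
Triggers:
  - Type: PUSH
    Branches:
      - main
```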

Add a workflow action to validate the Terraform Code

To start the workflow, we're first going to use the visual mode to add an action which we will use to check our Terraform.

  20. Click on the Visual button in the center of the workflow editor, and then click on the +Actions button at the top left of the editor.

  21. Choose the Test action block from the list of actions by clicking on the + in the appropriate action block.

  22. Click on the Configuration tab in the action, then on the pen icon next to the Action name. Replace the default name with Run_tfsec and click on the tick icon.
  23. Scroll down to the Shell Commands section in the Configuration tab, and replace the example commands with the following:

```yaml
- Run: |
    echo "Installing tfsec"
    wget https://github.com/aquasecurity/tfsec/releases/download/v1.28.1/tfsec-linux-amd64 -O tfsec
    chmod +x ./tfsec
- Run: echo "Setup report folder"
- Run: mkdir reports
- Run: |
    cd terraform
    echo "Run tfsec"
    ../tfsec . --format sarif > ../reports/tfsec.sarif; true
```

These commands will: i. download a tool called tfsec, which scans Terraform for security issues; ii. make the utility executable; iii. create a reports folder; and iv. run tfsec, writing a SARIF report to the reports folder, ready to be picked up by the Outputs section. The trailing `; true` ensures the step succeeds even when tfsec reports findings, so the workflow continues and the report is still processed.
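
tfsec is only one kind of check. Terraform's own built-in checks could be added as a further Run block in the same action; the following is a minimal sketch (the placement is illustrative, and the action would also need the Terraform binary installed, as shown in the deploy action later):

```yaml
- Run: |
    cd terraform
    # Fail if files aren't canonically formatted
    terraform fmt -check -recursive
    # Initialise providers without configuring the remote backend
    terraform init -backend=false
    # Statically validate the configuration
    terraform validate -no-color
```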

  24. Finally, to complete the action, we need to tell it how to process the report generated by the commands above. Click on the Outputs tab.

  25. Disable the automatic report discovery and click on **Add report** in the **Manually configure reports** section.
  26. Change the report name to **tfsec_results** and change the Report type dropdown to select Software composition analysis.

  27. Click on the Include/exclude paths section to reveal the Include paths, and change this to reports/tfsec.sarif.
  28. To ensure we have set up this action correctly, click on the Validate button in the editor title bar and confirm that the workflow is valid, then click on the X at the top of the action section to close the first action. If you see any errors, check that the items are configured as described above. The YAML generated for this action's Outputs should resemble the sketch below.
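
For reference, the Outputs section the visual editor generates for this configuration should look roughly like the following (a sketch based on the CodeCatalyst report schema; SARIFSCA is the format identifier for SARIF software composition analysis reports, and the success criteria block reflects the default of zero allowed high-severity findings):

```yaml
Outputs:
  AutoDiscoverReports:
    Enabled: false
    ReportNamePrefix: rpt
  Reports:
    tfsec_results:
      Format: SARIFSCA
      IncludePaths:
        - reports/tfsec.sarif
      SuccessCriteria:
        Vulnerabilities:
          Severity: HIGH
          Number: 0
```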

Add a workflow action to deploy the Terraform Code

Now that we've added an action to validate our code, we'll add an action to download Terraform and deploy our Terraform IaC. This time, we'll use the YAML option within the editor.

  29. Click on the YAML button at the top of the editor.
  30. Below the previous action, paste the following code:

```yaml
  Run_terraform:
    Identifier: aws/build@v1
    Outputs:
      AutoDiscoverReports:
        Enabled: false
        ReportNamePrefix: rpt
    Compute:
      Type: EC2
    Environment:
      Connections:
        - Role: CodeCatalystPreviewDevelopmentAdministrator-kxxcnd
          Name: LabAccount
      Name: LabAccount
    DependsOn:
      - Run_tfsec
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Installing Terraform"
        - Run: |
            sudo yum install -y yum-utils shadow-utils
            sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
            sudo yum -y --nogpgcheck install terraform
        - Run: ls -lR ..
        - Run: echo "Initialising Terraform"
        - Run: |
            cd terraform
            terraform init -backend-config="bucket=_insert_state_s3_bucket_name_" -backend-config="key=demoTerraform" -backend-config="dynamodb_table=_insert_dynamodb_table_name_" -backend-config="region=_insert_deployment_region_"
        - Run: echo "Run terraform plan"
        - Run: terraform plan -no-color
        - Run: echo "Run terraform apply"
        - Run: terraform apply --auto-approve
```

  31. This creates an action called Run_terraform. Again, this action contains several sections:
     a. Identifier describes the type of action we will run, which in this case is a build step.
     b. Environment describes the AWS account connection we have configured in the project. Ensure that you replace the role and connection names with those matching your configuration, and that the specified CodeCatalyst IAM role contains the permissions described earlier.
     c. DependsOn ensures that this action will only run once the Run_tfsec action has completed successfully.
     d. Configuration describes the commands we will run in this action: i. install Terraform via yum; ii. initialise the Terraform configuration to download the required providers, also passing the backend configuration, specifically the S3 bucket, key name, and DynamoDB table used to manage the state file (replace the _insert_..._ placeholders with values appropriate to your setup); iii. run terraform plan to check that our Terraform code is valid and to generate a list of the resources that will be deployed; iv. run terraform apply to deploy those resources to our AWS account.
  32. Again, we want to ensure our workflow is valid, so click the Validate button at the top of the editor and check that we get confirmation of a valid workflow.
  33. Once we're sure we have a valid definition, click the Commit button, enter an appropriate message in the Commit message field, and press the Commit button to store our new workflow in the repository.

Monitoring a Workflow Run

We configured the workflow to trigger when we commit to the repository, so the steps above should have triggered a run of our new workflow.

  34. In the workflow screen, click on Runs.
  35. We should now see the workflow run in progress.

  36. Click on the Run ID value, and you'll be taken to a screen where you can monitor the progress of the workflow. Depending on how quickly you click through, the Status field is likely to show either In progress or Succeeded. The screen also shows the ID of the commit that triggered the run, when the run started, and how long it took to complete. The status screen also shows the steps of the workflow, along with an indication of whether each step is queued, in progress, failed, or completed.

  37. After a few minutes, the run should complete, hopefully successfully. If we click on the **Run_terraform** step, we can review its progress.

  38. In this output, we can see each of the steps in this particular action. Scroll to the terraform apply --auto-approve step and click on the expand icon next to it to see the output from that step.

  39. The log shows the output from our Terraform deployment, finishing with a URL which gives access to the lambda. If you paste the URL into a browser, you should see the {"message": "Hello world"} response returned by the lambda.

Reviewing the Workflow Validation

When we defined the workflow, we configured the first action to validate our Terraform code. We can review the output of this validation as part of the workflow.

  40. Click on the Reports tab on the workflow status screen.
  41. We should see the tfsec_results report that we defined in the first action; if we click on the report name, the details of the validation will be displayed.

  42. In this case, we can see that the report has met the success criteria of 0 high vulnerabilities, but we do have 2 informational warnings. Click on the Results tab to see further information on these items.

Conclusion

In this article, we've set up a repository within CodeCatalyst, added Python lambda code and Terraform Infrastructure as Code (IaC) files, and built a structured workflow that validates and deploys that infrastructure. Though our example is deliberately simple, it forms the foundation for a more extensive CI/CD pipeline. You could extend it by adding unit tests for the lambda function, as in the sketch below, or by deploying through staging environments before production. In summary, you've acquired the essential skills to establish a robust workflow that can grow in complexity as your needs do.
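
As one example of such an extension, here is a hedged sketch of an additional test action that runs pytest against the lambda source (the action name, test layout, and report path are hypothetical, and you would add a DependsOn entry to sequence it within the workflow):

```yaml
  Run_unit_tests:
    Identifier: aws/managed-test@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: pip install pytest
        - Run: |
            # Hypothetical layout: tests live alongside the lambda source;
            # emit a JUnit report that can be surfaced via the Outputs section
            mkdir -p reports
            pytest source/ --junitxml=reports/pytest.xml
```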

Co-published by: Simon Hanmer and Sundaresh R Iyer
