How to override resourceRequirements with aws_cloudwatch_event_target?

I have an AWS Batch job defined in Terraform that works fine in AWS. I am creating several override rules with cron expressions so the job is invoked at specific times, using aws_cloudwatch_event_rule and aws_cloudwatch_event_target. Most of the functionality works: the rule triggers with the right name, ARN, etc., and even the right command. My Terraform code looks like:

resource "aws_cloudwatch_event_rule" "batchjob" {
  name       = "foo"
  is_enabled = true
  # runs on the first of every month at 15:00 UTC 
  # https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
  schedule_expression = "cron(0 15 1 * ? *)"
}

resource "aws_cloudwatch_event_target" "batchjob_target" {
  rule     = aws_cloudwatch_event_rule.batchjob.name
  arn      = "<queue arn>"
  role_arn = "<role arn>"

  batch_target {
    job_name       = "foo"
    job_definition = aws_batch_job_definition.job_definition.arn
  }

  input = jsonencode({
    # https://docs.aws.amazon.com/batch/latest/APIReference/API_ContainerOverrides.html
    ContainerOverrides = {
      Command = [
        "/bin/java",
        "-jar",
        "/batch-jobs.jar",
        "foobar"
      ]
      # VCPU and MEMORY are linked: https://docs.aws.amazon.com/batch/latest/APIReference/API_ResourceRequirement.html
      # https://repost.aws/questions/QUP1kkGbDrT1uC4jTW6DXZ4A/set-cpu-and-memory-requirements-for-a-fargate-aws-batch-job-from-an-aws-cloudwatch-event
      ResourceRequirements = [
        {
          Type  = "VCPU"
          Value = "4"
        },
        {
          Type  = "MEMORY"
          Value = "8192"
        }
      ]
    }
  })
}

I have confirmed that ContainerOverrides and Command must be Pascal case for this to work. If I use camel case, it doesn't see the overrides.

I was able to get it to work via the AWS CLI with the following command:

AWS_PROFILE=dev aws batch submit-job \
        --job-name "foobar" \
        --job-queue "arn:aws:batch:us-west-2:<queue>" \
        --job-definition "arn:aws:batch:us-west-2:<definition>" \
        --container-overrides '{"command": ["/bin/java","-jar","/batch-jobs.jar","foobar"], "resourceRequirements": [{"value": "4", "type": "VCPU"}, {"value": "8192", "type": "MEMORY"}]}'

But I cannot figure out the proper syntax in Terraform to override the resource requirements. What am I missing?

1 Answer
Accepted Answer

Hi Eric, thanks for reaching out!

Looking into this, I was able to confirm internally that the ResourceRequirements parameters under ContainerOverrides are not processed when configuring a Batch job as an EventBridge rule target. The parameters under Command will be accepted, but ResourceRequirements do not get passed from the EventBridge target to the Batch job. I can also confirm that there is a feature request in place to allow for this, but I'm unable to provide an ETA on when this may be available.

However, I can recommend that instead of using an EventBridge rule with a cron expression, you can accomplish your use case using the newer EventBridge Scheduler feature. This feature has Terraform support and will allow you to schedule direct Batch SubmitJob calls with all the parameters you wish to pass, including ContainerOverrides parameters with ResourceRequirements fields.
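As a rough sketch of what that could look like in Terraform, the aws_scheduler_schedule resource can invoke the Batch SubmitJob API directly via a Scheduler "universal target". This is only an illustration based on your question, not tested config: the schedule name, role ARN, queue ARN, and job names below are placeholders or carried over from your snippets, and the role must allow batch:SubmitJob.

```hcl
# Sketch only: ARNs, names, and the IAM role are placeholders.
resource "aws_scheduler_schedule" "batchjob" {
  name                = "foo"
  schedule_expression = "cron(0 15 1 * ? *)"

  flexible_time_window {
    mode = "OFF"
  }

  target {
    # EventBridge Scheduler "universal target" for the Batch SubmitJob API call
    arn      = "arn:aws:scheduler:::aws-sdk:batch:submitJob"
    role_arn = "<role arn>" # must be assumable by scheduler.amazonaws.com and allow batch:SubmitJob

    # Parameters for the SubmitJob call, in the Pascal-case form Scheduler expects
    input = jsonencode({
      JobName       = "foobar"
      JobQueue      = "<queue arn>"
      JobDefinition = aws_batch_job_definition.job_definition.arn
      ContainerOverrides = {
        Command = ["/bin/java", "-jar", "/batch-jobs.jar", "foobar"]
        ResourceRequirements = [
          { Type = "VCPU", Value = "4" },
          { Type = "MEMORY", Value = "8192" }
        ]
      }
    })
  }
}
```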

I tested this feature with VCPU and MEMORY ResourceRequirements with a Scheduler schedule in the console and passed the following JSON for the API call:

{
	"JobDefinition": "arn:aws:batch:us-west-1:123456789012:<definition>",
	"JobName": "MyData",
	"JobQueue": "arn:aws:batch:us-west-1:123456789012:<queue>",
	"ContainerOverrides": {
		"ResourceRequirements": [{
				"Type": "VCPU",
				"Value": "4"
			},
			{
				"Type": "MEMORY",
				"Value": "8192"
			}
		]
	}
}

The job was successfully started using the VCPU and MEMORY requirements specified in my ContainerOverrides fields.

I hope this information was helpful!

AWS
SUPPORT ENGINEER
answered 3 years ago
EXPERT
reviewed 2 years ago
  • Justin, thank you very much for all the info. I will get with our Cloud team and see if we can start using Scheduler. That definitely seems like the better way to go!
