
Questions tagged with Amazon EventBridge



Unable to create new OpsItems from EventBridge when using Input Transformer for deduplication and adding category and severity values

Apologies to all for the duplicate post. I created my login under the wrong account when I initially posted this question.

I'm able to generate a new OpsItem for any EC2, SecurityGroup, or VPC configuration change using an EventBridge rule with the following event pattern:

```
{
  "source": ["aws.config"],
  "detail-type": ["Config Configuration Item Change"],
  "detail": {
    "messageType": ["ConfigurationItemChangeNotification"],
    "configurationItem": {
      "resourceType": ["AWS::EC2::Instance", "AWS::EC2::SecurityGroup", "AWS::EC2::VPC"]
    }
  }
}
```

The rule and target work great when using Matched event for the Input, but I noticed that launching one EC2 instance using the AWS wizard creates at least three OpsItems, one for each resourceType. Therefore I'd like to implement a deduplication string to cut the number of OpsItems generated down to one if possible, and I'd also like to attach a category and severity to the new OpsItem. I'm trying to use an Input Transformer as recommended by the AWS documentation, but even the simplest of Input Transformers, when applied, prevents any new OpsItems from being generated. When I've tested, I've also ensured that all previous OpsItems were resolved. Can anyone tell me what might be blocking the creation of any new OpsItems when using this Input Transformer configuration? Here's what I have configured now.

Input path:

```
{
  "awsAccountId": "$.detail.configurationItem.awsAccountId",
  "awsRegion": "$.detail.configurationItem.awsRegion",
  "configurationItemCaptureTime": "$.detail.configurationItem.configurationItemCaptureTime",
  "detail-type": "$.detail-type",
  "messageType": "$.detail.messageType",
  "notificationCreationTime": "$.detail.notificationCreationTime",
  "region": "$.region",
  "resourceId": "$.detail.configurationItem.resourceId",
  "resourceType": "$.detail.configurationItem.resourceType",
  "resources": "$.resources",
  "source": "$.source",
  "time": "$.time"
}
```

Input template:

```
{
  "awsAccountId": "<awsAccountId>",
  "awsRegion": "<awsRegion>",
  "configurationItemCaptureTime": "<configurationItemCaptureTime>",
  "resourceId": "<resourceId>",
  "resourceType": "<resourceType>",
  "title": "Template under ConfigDrift-EC2-Dedup4",
  "description": "Configuration Drift Detected.",
  "category": "Security",
  "severity": "3",
  "origination": "EventBridge Rule - ConfigDrift-EC2-Dedup",
  "detail-type": "<detail-type>",
  "source": "<source>",
  "time": "<time>",
  "region": "<region>",
  "resources": "<resources>",
  "messageType": "<messageType>",
  "notificationCreationTime": "<notificationCreationTime>",
  "operationalData": {
    "/aws/dedup": {
      "type": "SearchableString",
      "value": "{\"dedupString\":\"ConfigurationItemChangeNotification\"}"
    }
  }
}
```

Output when using the AWS-supplied sample event called "Config Configuration Item Change":

```
{
  "awsAccountId": "123456789012",
  "awsRegion": "us-east-1",
  "configurationItemCaptureTime": "2022-03-16T01:10:50.837Z",
  "resourceId": "fs-01f0d526165b57f95",
  "resourceType": "AWS::EFS::FileSystem",
  "title": "Template under ConfigDrift-EC2-Dedup4",
  "description": "Configuration Drift Detected.",
  "category": "Security",
  "severity": "3",
  "origination": "EventBridge Rule - ConfigDrift-EC2-Dedup",
  "detail-type": "Config Configuration Item Change",
  "source": "aws.config",
  "time": "2022-03-16T01:10:51Z",
  "region": "us-east-1",
  "resources": "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-01f0d526165b57f95",
  "messageType": "ConfigurationItemChangeNotification",
  "notificationCreationTime": "2022-03-16T01:10:51.976Z",
  "operationalData": {
    "/aws/dedup": {
      "type": "SearchableString",
      "value": "{"dedupString":"ConfigurationItemChangeNotification"}"
    }
  }
}
```
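One thing that stands out in the rendered output: the `operationalData` value has lost the escaping of its inner quotes, which makes the whole document invalid JSON, and an invalid payload would plausibly be rejected by the OpsItems target. A quick check (an editor's illustrative snippet, not part of the original post):

```python
import json

# The "/aws/dedup" fragment as it appears in the rendered output above:
# the inner quotes around dedupString are no longer escaped.
rendered = '{"value": "{"dedupString":"ConfigurationItemChangeNotification"}"}'

# The same fragment with the escapes preserved, as written in the input template.
escaped = '{"value": "{\\"dedupString\\":\\"ConfigurationItemChangeNotification\\"}"}'

try:
    json.loads(rendered)
    valid_rendered = True
except json.JSONDecodeError:
    valid_rendered = False

valid_escaped = isinstance(json.loads(escaped), dict)

print(valid_rendered, valid_escaped)  # False True
```

If the target really does receive the un-escaped form, that alone would explain why no OpsItems are created once the transformer is attached.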
1 answer · 0 votes · 2 views · AWS-User-1369b · asked 9 days ago


How to set up EventBridge Api Destination/Connection to Google Cloud?

I am setting up a project which will publish events to EventBridge. One of the targets of these events will be an HTTP API in Google Cloud. I know EventBridge supports API Destinations, and I am trying to set that up to send these events. I have been unable to get the connection to the Google API working and could use some suggestions.

I am trying to use OAuth credentials from the Google account to create an API Destination/Connection. So far, the Connection is always marked as "Deauthorized", and I have not been able to find any details or debug information about the connection attempt that fails.

I created credentials in the Google account and downloaded the credentials JSON file. Setting up the connection in the AWS console, I used the "client_id" property from the JSON file as the "Client ID" field for the connection. I think one issue may be the "Client secret" value. I was surprised that the "private_key" property in the Google JSON file looks like:

```
"private_key": "-----BEGIN PRIVATE KEY-----\n<lots of Base64 and several newlines>\n-----END PRIVATE KEY-----\n"
```

I tried using the value between the BEGIN PRIVATE KEY and END PRIVATE KEY tags, but AWS rejected that, saying it was too long. I tried a single value from between newlines, which I was allowed to save, but which doesn't work. I have also tried setting this value to the "private_key_id", which also doesn't work (I didn't really expect it to, but it was worth a shot).

There are also options to send OAuth Http Parameters and Invocation Http Parameters. I've tried adding key="scopes" and value="https://www.googleapis.com/auth/cloud-platform" for both parameters.

Has anyone had luck setting up a Connection like this to a Google account? I've also looked at Google's Workload Identity Federation, but it appears there isn't a way to use that in a no-code case like EventBridge API Destinations.
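One detail worth noting (an editor's observation, not a confirmed diagnosis): a Google credentials file containing a `private_key` block is a *service-account* key, while an OAuth client-credentials connection expects a `client_id`/`client_secret` pair, which lives in a different download (under a `web` or `installed` block). A sketch distinguishing the two shapes, with placeholder values:

```python
# Hypothetical field layouts of the two Google credential file types.
service_account_key = {
    "type": "service_account",
    "client_id": "1234567890",            # placeholder
    "private_key_id": "abcdef",           # placeholder
    "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
}

oauth_client = {
    "web": {
        "client_id": "1234567890.apps.googleusercontent.com",  # placeholder
        "client_secret": "GOCSPX-placeholder",
        "token_uri": "https://oauth2.googleapis.com/token",
    }
}

def looks_like_oauth_client(creds: dict) -> bool:
    """A connection's Client ID / Client secret fields only make sense for
    credentials that actually carry a client_secret."""
    block = creds.get("web") or creds.get("installed") or creds
    return "client_secret" in block

print(looks_like_oauth_client(service_account_key),
      looks_like_oauth_client(oauth_client))  # False True
```

If the downloaded file has a `private_key` but no `client_secret`, there is nothing in it that would fit the connection's "Client secret" field, which would be consistent with the "Deauthorized" state described above.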
1 answer · 0 votes · 4 views · ScottD · asked 12 days ago

Lambda Events not triggering EventBridge destination

I am using the Amazon Selling Partner API (SP-API) and am trying to set up a Pub/Sub-like system for receiving customer orders etc. The Notifications API in SP-API sends notifications of different types in two different ways depending on which event you are using: some are sent directly to EventBridge and others are sent to SQS. https://developer-docs.amazon.com/sp-api/docs/notifications-api-v1-use-case-guide#section-notification-workflows

I have correctly set up the notifications that are sent directly to EventBridge, but am struggling to get the SQS notifications working. I want all notifications to be sent to my own endpoint.

For the SQS model, I am receiving notifications in SQS, which is set as a trigger for a Lambda function (this part works). The destination for this function is set as another EventBridge bus (this is the part that doesn't work). This gives the architecture as: `SQS => Lambda => eventBridge => my endpoint`

Why is Lambda not triggering my EventBridge destination in order to send the notifications?

**Execution Role Policies:**

* Lambda
  1. AWSLambdaBasicExecutionRole
  2. AmazonSQSFullAccess
  3. AmazonEventBridgeFullAccess
  4. AWSLambda_FullAccess
* EventBridge
  1. Amazon_EventBridge_Invoke_Api_Destination
  2. AmazonEventBridgeFullAccess
  3. AWSLambda_FullAccess

**EventBridge Event Pattern:** `{"source": ["aws.lambda"]}`

**Execution Role Trusted Entities:**

* EventBridge Role `"Service": [ "events.amazonaws.com", "lambda.amazonaws.com", "sqs.amazonaws.com" ]`
* Lambda Role `"Service": [ "lambda.amazonaws.com", "events.amazonaws.com", "sqs.amazonaws.com" ]`

**Lambda Code:**

```
exports.handler = function (event, context, callback) {
  console.log("Received event: ", event);
  // Don't wait for the event loop to drain before completing the invocation.
  context.callbackWaitsForEmptyEventLoop = false;
  callback(null, event);
  return { statusCode: 200 };
};
```
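For context on the `Lambda => eventBridge` hop above: an alternative to relying on a Lambda destination is to publish the event explicitly from the handler with `PutEvents`. The sketch below only builds the entry payload locally (the source, detail-type, and bus name are hypothetical placeholders, not values from the post); in a real handler a list of such entries would be passed to `boto3.client("events").put_events(Entries=[...])`:

```python
import json
from datetime import datetime, timezone

def build_put_events_entry(sqs_record: dict, bus_name: str = "default") -> dict:
    """Build a single EventBridge PutEvents entry from an SQS record (sketch)."""
    return {
        "EventBusName": bus_name,
        "Source": "custom.sp-api.notifications",  # placeholder custom source
        "DetailType": "SPAPINotification",        # placeholder detail-type
        "Detail": sqs_record.get("body", "{}"),   # SQS message body as event detail
        "Time": datetime.now(timezone.utc),
    }

record = {"body": json.dumps({"notificationType": "ORDER_CHANGE"})}
entry = build_put_events_entry(record)
print(entry["Source"])  # custom.sp-api.notifications
```

Note that an explicitly published custom source like this would not match the rule pattern `{"source": ["aws.lambda"]}`; the rule would need to match the custom source instead.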
1 answer · 0 votes · 3 views · AWS-User-7055818 · asked 13 days ago

EventBridge API Destinations - Created Auth0 tokens are already expired

I think there is an issue with how auth tokens are being handled/supplied. When my event bus receives an event and my rule passes the event to my API Destination, my API rejects the communication with a 403. Looking at the bearer token JWT, the token creation date (the `iat` value) is set to the time the API Destination & connection was authorized, even hours later. Bearer tokens are meant to be short-lived, and I would expect the API Destination to request a new bearer token each time it is invoked.

Example:

1. I created an API Destination with a valid connection on Friday Apr 1 at 7am.
2. My bearer tokens have a 60 min TTL.
3. My event bus receives a valid event on Friday Apr 1 at 7:30am.
4. A rule sends the event to my API Destination, which uses its token to send the event to my API, and it succeeds.
5. My event bus receives another valid event on Friday Apr 1 at 8:30am.
6. A rule sends the event to my API Destination, which uses its token to send the event to my API, and it fails.

For steps 4 & 6 above the token is identical. I would have expected the API Destination to call the auth URL with its credentials to get a new bearer token. From what I can tell, the JWT created time will always be this date/time, and I have been completely unable to get a valid, unexpired JWT created any time after an hour from launching the API Destination. Two supporting images [here](https://photos.app.goo.gl/nkKJG6YpKuE31fQq5)
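The behavior described (a cached token reused past its TTL) can be confirmed by decoding the bearer token's claims without verifying the signature. A hedged helper for that, exercised here against a synthetic token rather than a real one:

```python
import base64
import json
import time

def jwt_claims(token: str) -> dict:
    """Decode the claims segment of a JWT without verifying the signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def is_expired(claims: dict, now: int) -> bool:
    return claims.get("exp", 0) <= now

# Build a fake token with a 60-minute TTL that was issued 90 minutes "ago",
# mirroring the timeline in the post (authorized at 7am, invoked at 8:30am).
now = int(time.time())
claims = {"iat": now - 90 * 60, "exp": now - 30 * 60}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"header.{segment}.signature"

print(is_expired(jwt_claims(token), now))  # True: the cached token has lapsed
```

Running this against the actual token captured from a failed 403 call would show whether `exp` had indeed passed at invocation time.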
1 answer · 1 vote · 6 views · AWS-User-3833226 · asked a month ago

Can't get EventBridge rule to create a message in SQS

I am trying to set up the [AWS node termination handler](https://github.com/aws/aws-node-termination-handler), and am running into an issue where the EventBridge rule is invoked, but no messages are showing up in the SQS queue. I have tested that the termination handler is able to communicate with the SQS queue. I have also tested spinning instances up and down, and I see the rule invocations for the EventBridge rules. However, there are no messages appearing in the queue...

NOTE: I tried adding a photo here from CloudWatch showing rule invocations but no messages appearing in the queue; it seems like pictures are not supported here yet...

Below are my configs for this:

SQS policy:

```hcl
resource "aws_sqs_queue_policy" "termination_handler_queue_policy" {
  queue_url = module.termination_handler_queue.sqs_queue_id

  policy = jsonencode({
    "Version" : "2012-10-17",
    "Id" : "sqspolicy",
    "Statement" : [
      {
        "Sid" : "TermEventsToHandlerQueue",
        "Effect" : "Allow",
        "Principal" : {
          "Service" : ["events.amazonaws.com", "sqs.amazonaws.com"]
        },
        "Action" : "sqs:*",
        "Resource" : "${module.termination_handler_queue.sqs_queue_name}",
        "Condition" : {
          "ArnEquals" : {
            "aws:SourceArn" : [
              "arn:aws:events:us-east-2:${local.account_id}:rule/node-termination-asg-lifecycle-rule",
              "arn:aws:events:us-east-2:${local.account_id}:rule/node-termination-ec2-status-rule",
              "arn:aws:events:us-east-2:${local.account_id}:rule/node-termination-ec2-spot-interruption-rule",
              "arn:aws:events:us-east-2:${local.account_id}:rule/node-termination-ec2-rebalance-rule"
            ]
          }
        }
      }
    ]
  })
}
```

EventBridge config:

```hcl
module "termination_handler_eventbridge" {
  source  = "terraform-aws-modules/eventbridge/aws"
  version = "~> 1.14.0"

  create_bus = false

  rules = {
    node-termination-asg-lifecycle = {
      description = "Capture eks asg lifecycle events."
      event_pattern = jsonencode({
        "source" : ["aws.autoscaling"],
        "detail-type" : [
          "EC2 Instance Launch Successful",
          "EC2 Instance Terminate Successful",
          "EC2 Instance Launch Unsuccessful",
          "EC2 Instance Terminate Unsuccessful",
          "EC2 Instance-launch Lifecycle Action",
          "EC2 Instance-terminate Lifecycle Action"
        ],
        "detail" : {
          "AutoScalingGroupName" : ["eks-Group_A", "eks-Group_B"]
        }
      })
      enabled = true
    }
    node-termination-ec2-status = {
      description = "Capture ec2 status events"
      event_pattern = jsonencode({
        "source" : ["aws.ec2"],
        "detail-type" : ["EC2 Instance State-change Notification"]
      })
      enabled = true
    }
    node-termination-ec2-spot-interruption = {
      description = "Capture spot interruption events"
      event_pattern = jsonencode({
        "source" : ["aws.ec2"],
        "detail-type" : ["EC2 Spot Instance Interruption Warning"]
      })
      enabled = true
    }
    node-termination-ec2-rebalance = {
      description = "Capture ec2 rebalance events"
      event_pattern = jsonencode({
        "source" : ["aws.ec2"],
        "detail-type" : ["EC2 Instance Rebalance Recommendation"]
      })
      enabled = true
    }
  }

  targets = {
    node-termination-asg-lifecycle = [
      {
        name = "termination_handler-sqs-life"
        arn  = module.termination_handler_queue.sqs_queue_arn
      },
    ]
    node-termination-ec2-status = [
      {
        name = "termination_handler-sqs-status"
        arn  = module.termination_handler_queue.sqs_queue_arn
      },
    ]
    node-termination-ec2-spot-interruption = [
      {
        name = "termination_handler-sqs-int"
        arn  = module.termination_handler_queue.sqs_queue_arn
      },
    ]
    node-termination-ec2-rebalance = [
      {
        name = "termination_handler-sqs-rebalance"
        arn  = module.termination_handler_queue.sqs_queue_arn
      },
    ]
  }

  tags = {
    Name    = "node-termination-handler-bus"
    Service = "aws-node-termination-handler"
  }
}
```
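One detail worth double-checking in the SQS policy above (an editor's assumption, not a confirmed diagnosis): `Resource` is set to `sqs_queue_name`, but an SQS queue policy's `Resource` must be the queue *ARN*. A minimal sketch of that shape:

```hcl
# Sketch: Resource should be the queue ARN (sqs_queue_arn), not the queue name.
resource "aws_sqs_queue_policy" "termination_handler_queue_policy" {
  queue_url = module.termination_handler_queue.sqs_queue_id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "TermEventsToHandlerQueue"
      Effect    = "Allow"
      Principal = { Service = "events.amazonaws.com" }
      Action    = "sqs:SendMessage"
      Resource  = module.termination_handler_queue.sqs_queue_arn
    }]
  })
}
```

With a non-matching `Resource`, EventBridge's `SendMessage` would be denied, which would match the observed symptom of rule invocations with no queued messages.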
1 answer · 0 votes · 5 views · AWS-User-1197982 · asked a month ago

How can I do Distributed Transaction with EventBridge?

I'm using the following scenario to explain the problem. I have an ecommerce app which allows customers to sign up and get an immediate coupon to use in the application. I want to use **EventBridge** and a few other resources like a Microsoft SQL database and Lambdas. The coupon is retrieved from a third-party API which exists outside of AWS.

The event flow is: Customer --- *sends web form data* --> EventBridge Bus --> Lambda -- *create customer in SQL DB* -- *get a coupon from third-party API* -- *sends customer created successfully event* --> EventBridge Bus

Creating the customer in the SQL DB and getting the coupon from the third-party API should happen in a single transaction. There is a good chance that either can fail due to a network error or whatever information the customer provides. Even if the customer has provided the correct data and a new customer is created in the SQL DB, the third-party API call can fail. The two operations should be committed only if both succeed.

Does EventBridge provide distributed transactions through its .NET SDK? In the above example, if the third-party call fails, the data created in the SQL database for the customer should be rolled back, and the message sent back to the queue so it can be retried later. I'm looking for something similar to [TransactionScope](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample06_Transactions.md) that is available in Azure. If that is not available, how can I achieve a distributed transaction with EventBridge, other AWS resources, and third-party services which have a greater chance of failure, as a unit?
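EventBridge itself has no transaction API, so the pattern usually suggested for this kind of flow is a saga with compensating actions. A toy sketch of that idea, with in-memory stand-ins for the SQL DB and the third-party coupon API (all names here are hypothetical):

```python
from typing import Optional

class CouponApiError(Exception):
    pass

def create_customer(db: dict, customer_id: str) -> None:
    db[customer_id] = {"status": "pending"}

def delete_customer(db: dict, customer_id: str) -> None:
    db.pop(customer_id, None)  # compensating action for create_customer

def get_coupon(fail: bool) -> str:
    if fail:
        raise CouponApiError("third-party API unreachable")
    return "WELCOME10"

def sign_up(db: dict, customer_id: str, coupon_fails: bool) -> Optional[str]:
    create_customer(db, customer_id)
    try:
        coupon = get_coupon(fail=coupon_fails)
    except CouponApiError:
        delete_customer(db, customer_id)  # roll back the local write
        return None                       # caller can retry / redrive the event
    db[customer_id] = {"status": "active", "coupon": coupon}
    return coupon

db: dict = {}
print(sign_up(db, "c1", coupon_fails=True), db)   # None {} (rolled back)
print(sign_up(db, "c2", coupon_fails=False), db)
```

In an EventBridge setup, the retry would come from the rule's retry policy or a dead-letter queue rather than "putting the message back" manually; the compensating delete plays the role of the rollback.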
3 answers · 0 votes · 8 views · wonderful world · asked a month ago

How to invoke a private REST API (created with AWS Gateway) endpoint from an EventBusRule?

I have set up the following workflow:

- a private REST API with resources `/POST/event` and `/POST/process`
- a `VPCLink` to an `NLB` (which points to an `ALB` pointing to a microservice running on `EKS`)
- a `VPC endpoint` with DNS name `vpce-<id>-<id>.execute-api.eu-central-1.vpce.amazonaws.com` with `Private DNS enabled`
- an EventBridge `EventBus` with a rule that has two targets: one `API Destination` for debugging/testing and one `AWS Service` which points to my private REST API on the resource `/POST/process`
- all required `Resource Policies` and `Roles`
- all resources are defined within the same AWS account

The **designed** workflow is as follows:

- invoke `POST/event` on the VPC endpoint (any other invocation is prohibited by the `Resource Policy`) with an `event` payload
- the API puts the `event` payload on the `EventBus`
- the `EventBusRule` is triggered and sends the `event` payload to the `POST/process` endpoint on the private REST API
- the `POST/process` endpoint proxies the payload to a microservice running on EKS (via `VPCLink` > `NLB` > `ALB` > `k8s Service`)

**What does work** so far:

- invoking `POST/event` on the VPC endpoint
- putting the `event` payload on the `EventBus`
- forwarding the `event` payload to the `API Destination` set up for testing/debugging (it's a temporary endpoint on https://webhook.site)
- testing the `POST/event` and `POST/process` integrations in the AWS Console (the latter is verified by checking that the `event` payload reaches the microservice on EKS successfully)

That is, all individual steps in the workflow seem to work, and all permissions seem to be set properly. **What does not work** is invoking the `POST/process` endpoint from the `EventBusRule`, i.e. invoking `POST/event` does not invoke `POST/process` via the `EventBus`, _although_ the `EventBusRule` was triggered.

So my **question** is: **How do I invoke a private REST API endpoint from an EventBusRule?**

**What I have already tried:**

- changing the order of the `EventBusRule targets`
- creating a Route 53 record pointing to the `VPC endpoint` and treating it as an (external) `API Destination`
- allowing access from _anywhere_ by _anyone_ to the REST API (temporarily only, of course)

**Remark on the design:** I create _two_ endpoints (one for receiving an `event`, one for processing it) with an EventBus in between because:

- I have to expect a delay of several minutes between the `Event Creation/Notification` and the successful `Event Processing`
- I expect several hundred `event sources`, which are different AWS and Azure accounts
- I want to keep track of all events that _reach_ our API and of their successful _processing_ in one central EventBus, and _not_ inside each AWS account where the event originates
- I want to keep track of each _failed_ event processing in the same central EventBus with only one central DeadLetterQueue
1 answer · 0 votes · 5 views · ernst_vonoelsen · asked 2 months ago

Set cpu and memory requirements for a Fargate AWS Batch job from an AWS Cloudwatch event

I am trying to automate Fargate AWS Batch jobs by means of AWS CloudWatch Events. So far, so good. I am trying to run the same job definition with different configurations. I am able to set the Batch job as a CloudWatch event target, and I have learned how to use the Constant (JSON text) configuration to set a parameter of the job. Thus, I can set the name parameter successfully and the job runs. However, I am not able to also set the memory and cpu settings in the CloudWatch event. I would like to use a larger machine for a bigger port such as Singapore, without changing the job definition. As it stands, the job still uses the default vcpu and memory settings of the job definition.

```
{
  "Parameters": {"name": "wilhelmshaven"},
  "ContainerOverrides": {
    "Command": ["upload_to_day.py", "-port_name", "Ref::name"],
    "resourceRequirements": [
      {"type": "MEMORY", "value": "4096"},
      {"type": "VCPU", "value": "2"}
    ]
  }
}
```

Does anyone know how to set the Constant (JSON text) configuration or input transformer correctly?

Edit: If I try the same thing using the AWS CLI, I can achieve what I would like to do.

```
aws batch submit-job \
    --job-name "run-wilhelmshaven" \
    --job-queue "arn:aws:batch:eu-central-1:123666072061:job-queue/upload-raw-to-day-vtexplorer" \
    --job-definition "arn:aws:batch:eu-central-1:123666072061:job-definition/upload-to-day:2" \
    --container-overrides '{"command": ["upload_to_day.py", "-port_name", "wilhelmshaven"], "resourceRequirements": [{"value": "2", "type": "VCPU"}, {"value": "4096", "type": "MEMORY"}]}'
```
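Whatever field layout the Constant (JSON text) box ultimately expects (the open question here), the pasted text must at least be valid JSON whose overrides mirror the working CLI call above. A quick local sanity check of that shape (an editor's snippet, nothing AWS-specific executes):

```python
import json

# The Constant (JSON text) payload from the post, as a Python structure.
payload = {
    "Parameters": {"name": "wilhelmshaven"},
    "ContainerOverrides": {
        "Command": ["upload_to_day.py", "-port_name", "Ref::name"],
        "resourceRequirements": [
            {"type": "MEMORY", "value": "4096"},
            {"type": "VCPU", "value": "2"},
        ],
    },
}

text = json.dumps(payload)   # what gets pasted into the console box
parsed = json.loads(text)    # what EventBridge would hand to Batch SubmitJob
mem = next(r["value"]
           for r in parsed["ContainerOverrides"]["resourceRequirements"]
           if r["type"] == "MEMORY")
print(mem)  # 4096
```

This only rules out malformed JSON; whether the Batch target honours `resourceRequirements` from the constant input is exactly what the question asks.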
1 answer · 0 votes · 3 views · AWS-User-6786633 · asked 2 months ago

Eventbridge Bugs

Hey, since I started logging my API calls I realized there might be some bugs in EventBridge scheduled HTTP API invocations. In general, the APIs don't get invoked when they should be, and often they get invoked twice instead of once. For example, my EventBridge cron expression is:

10 8-18 ? \* 2-6 \*

i.e. at minute 10 of every hour from 8 to 18, on any day of the month, in any month, on days 2 to 6 of the week (which ends up being Monday to Friday), in any year. Now my API logs for this look like this:

- 2022-03-14 13:30:08.963665 +00:00
- 2022-03-14 13:34:02.215564 +00:00
- 2022-03-14 13:38:29.776793 +00:00
- 2022-03-14 13:43:29.320522 +00:00
- 2022-03-14 13:46:57.916126 +00:00
- 2022-03-14 13:51:55.419461 +00:00
- 2022-03-14 13:56:47.090243 +00:00
- 2022-03-14 14:00:41.538169 +00:00
- 2022-03-14 14:06:09.226878 +00:00
- 2022-03-14 14:09:23.691206 +00:00
- 2022-03-14 14:10:22.682902 +00:00
- 2022-03-14 14:11:25.746832 +00:00
- 2022-03-14 14:13:23.880627 +00:00
- 2022-03-14 14:15:23.361370 +00:00
- 2022-03-14 14:16:43.967781 +00:00
- 2022-03-14 14:19:01.799442 +00:00
- 2022-03-14 14:24:30.163322 +00:00
- 2022-03-14 14:25:09.470810 +00:00
- 2022-03-14 14:27:52.924405 +00:00
- 2022-03-14 14:33:32.610750 +00:00
- 2022-03-14 14:33:44.313787 +00:00

It gets invoked many more times!!! Day of the week, month etc. seem to work, and the header shows AWS EventBridge, "user-agent": "Amazon/EventBridge/ApiDestinations", so it definitely gets invoked by EventBridge. What is going on??
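To see whether the logged timestamps could come from the rule at all, here is a small matcher for `10 8-18 ? * 2-6 *` (AWS cron day-of-week is 1 = Sunday through 7 = Saturday, so 2-6 is Monday to Friday). This is an editor's helper for checking the logs, not an official parser:

```python
from datetime import datetime

def matches_rule(dt: datetime) -> bool:
    """True if dt (assumed UTC) matches cron(10 8-18 ? * 2-6 *).
    Python's isoweekday() is 1=Monday..7=Sunday, so Monday-Friday is 1..5."""
    return (dt.minute == 10
            and 8 <= dt.hour <= 18
            and 1 <= dt.isoweekday() <= 5)

# 2022-03-14 was a Monday; of these logged times only 14:10 lands on minute 10.
logged = [datetime(2022, 3, 14, 13, 30, 8),
          datetime(2022, 3, 14, 13, 34, 2),
          datetime(2022, 3, 14, 14, 10, 22)]
print([matches_rule(t) for t in logged])  # [False, False, True]
```

Almost none of the logged invocations fall on minute 10, which suggests the extra calls are not coming from this schedule as written, and points toward a second trigger (another rule, a retry policy, or a different caller) rather than the cron expression itself.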
2 answers · 0 votes · 8 views · AWS-User-8763443 · asked 2 months ago

CannotPullContainerError in public VPC/Subnet. What am I missing/doing wrong?

I have created a brand new AWS account (just to troubleshoot this issue) and the default VPC and subnets of every region in this account are left pristine and unmodified. Here's the default VPC in `us-east-1`:

```
$ aws ec2 describe-vpcs
{
    "Vpcs": [
        {
            "CidrBlock": "172.31.0.0/16",
            "DhcpOptionsId": "dopt-095a7873b289557a1",
            "State": "available",
            "VpcId": "vpc-08ba51697a37c5ad9",
            "OwnerId": "...",
            "InstanceTenancy": "default",
            "CidrBlockAssociationSet": [
                {
                    "AssociationId": "vpc-cidr-assoc-0dba5df7b176877b7",
                    "CidrBlock": "172.31.0.0/16",
                    "CidrBlockState": {
                        "State": "associated"
                    }
                }
            ],
            "IsDefault": true
        }
    ]
}
```

Here's the route table for this VPC:

```
$ aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-08ba51697a37c5ad9
{
    "RouteTables": [
        {
            "Associations": [
                {
                    "Main": true,
                    "RouteTableAssociationId": "rtbassoc-08e6f9833f341f6c4",
                    "RouteTableId": "rtb-000d61d5d0236d276",
                    "AssociationState": {
                        "State": "associated"
                    }
                }
            ],
            "PropagatingVgws": [],
            "RouteTableId": "rtb-000d61d5d0236d276",
            "Routes": [
                {
                    "DestinationCidrBlock": "172.31.0.0/16",
                    "GatewayId": "local",
                    "Origin": "CreateRouteTable",
                    "State": "active"
                },
                {
                    "DestinationCidrBlock": "0.0.0.0/0",
                    "GatewayId": "igw-0b7ed209f5cd38fa6",
                    "Origin": "CreateRoute",
                    "State": "active"
                }
            ],
            "Tags": [],
            "VpcId": "vpc-08ba51697a37c5ad9",
            "OwnerId": "..."
        }
    ]
}
```

As you can see, the second route permits egress to the internet:

```
{
    "DestinationCidrBlock": "0.0.0.0/0",
    "GatewayId": "igw-0b7ed209f5cd38fa6",
    "Origin": "CreateRoute",
    "State": "active"
}
```

So I assume that if I deploy an ECS Fargate task in this VPC, it should be able to pull `amazoncorretto:17-alpine3.15` from `docker.io`. Despite that, whenever I deploy my CloudFormation stack, ECS fails to run the scheduled task as it cannot fetch the image from DockerHub and prints this error:

> CannotPullContainerError:
> inspect image has been retried 5 time(s):
> failed to resolve ref "docker.io/library/amazoncorretto:17-alpine3.15":
> failed to do request: Head https://registry-1.docker.io/v2/library/amazoncorretto/manifests/17-alpine3.15: dial ...

Here's my CloudFormation template (I have intentionally given wide-open permissions to all the roles involved to make sure this issue is not due to insufficient IAM permissions):

```
AWSTemplateFormatVersion: "2010-09-09"
Description: ECS Cron Task

Parameters:
  AppName:
    Type: String
    Default: CronTask
  AppImage:
    Type: String
    Default: amazoncorretto:17-alpine3.15
  AppLogGroup:
    Type: String
    Default: ECS
  AppLogPrefix:
    Type: String
    Default: CronTask
  ScheduledTaskSubnets:
    Type: List<AWS::EC2::Subnet::Id>
    Default: "subnet-0031a6eaf7e52173c, subnet-01950a0d2d1e04dc1, subnet-0a1aa70f0421e2025, subnet-036abb95995a86c73, subnet-0f8b5043babfb9a7e, subnet-07cb2210ce2d5bb8f"

Resources:
  Cluster:
    Type: AWS::ECS::Cluster

  TaskRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
      Policies:
        - PolicyName: AdminAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Action: "*"
                Effect: Allow
                Resource: "*"

  TaskExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
      Policies:
        - PolicyName: AdminAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Action: "*"
                Effect: Allow
                Resource: "*"

  TaskScheduleRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: events.amazonaws.com
      Path: /
      Policies:
        - PolicyName: AdminAccess
          PolicyDocument:
            Statement:
              - Action: "*"
                Effect: Allow
                Resource: "*"

  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Cpu: 256
      Memory: 512
      NetworkMode: awsvpc
      TaskRoleArn: !Ref TaskRole
      ExecutionRoleArn: !Ref TaskExecutionRole
      Family: !Ref AppName
      RequiresCompatibilities:
        - FARGATE
      ContainerDefinitions:
        - Name: !Ref AppName
          Image: !Ref AppImage
          Command: ["java", "--version"]
          Essential: true
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-create-group: true
              awslogs-group: !Ref AppLogGroup
              awslogs-region: !Ref "AWS::Region"
              awslogs-stream-prefix: !Ref AppLogPrefix

  TaskSchedule:
    Type: AWS::Events::Rule
    DependsOn:
      - TaskScheduleRole
      - DeadLetterQueue
    Properties:
      Description: Trigger the task once every minute
      ScheduleExpression: cron(0/1 * * * ? *)
      State: ENABLED
      Targets:
        - Arn: !GetAtt Cluster.Arn
          Id: ClusterTarget
          RoleArn: !GetAtt TaskScheduleRole.Arn
          DeadLetterConfig:
            Arn: !GetAtt DeadLetterQueue.Arn
          EcsParameters:
            LaunchType: FARGATE
            TaskCount: 1
            TaskDefinitionArn: !Ref TaskDefinition
            NetworkConfiguration:
              AwsVpcConfiguration:
                Subnets: !Ref ScheduledTaskSubnets

  DeadLetterQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: "CronTaskDeadLetterQueue"

  DeadLetterQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref DeadLetterQueue
      PolicyDocument:
        Statement:
          - Action: "*"
            Effect: Allow
            Resource: "*"
```

What am I missing here? Why, despite running the task in a public subnet/VPC, can AWS not pull the image from `docker.io`? Is something missing in my `TaskSchedule` resource?

```
TaskSchedule:
  Type: AWS::Events::Rule
  ...
  Properties:
    ...
    Targets:
      - ...
        EcsParameters:
          LaunchType: FARGATE
          TaskCount: 1
          TaskDefinitionArn: !Ref TaskDefinition
          NetworkConfiguration:
            AwsVpcConfiguration:
              Subnets: !Ref ScheduledTaskSubnets
```

Thanks in advance.
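One candidate explanation worth checking (an editor's guess, not confirmed by the post): a Fargate task in a public subnet can only reach the internet gateway if the task itself has a public IP, and `AssignPublicIp` in the events-rule network configuration defaults to `DISABLED`. The fragment below sketches that one-line change to the `TaskSchedule` target:

```yaml
          EcsParameters:
            LaunchType: FARGATE
            TaskCount: 1
            TaskDefinitionArn: !Ref TaskDefinition
            NetworkConfiguration:
              AwsVpcConfiguration:
                # Default is DISABLED; without a public IP the task has no
                # route to registry-1.docker.io through the internet gateway.
                AssignPublicIp: ENABLED
                Subnets: !Ref ScheduledTaskSubnets
```

The symptom fits: the `dial` failure on `registry-1.docker.io` is a connectivity error, not an authorization one.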
1 answer · 0 votes · 10 views · MobyDick · asked 3 months ago

Updating EventBridge Target in a separate CloudFormation script

Hello! I have two separate serverless.yml projects that have to interact with an EventBridge rule, and I'm struggling to figure out how to update the resources section in both to support this. Here's an example of what I mean:

**EntryPoint Lambda**: This Lambda gets the request from the front end and then submits a message to the EventBridge rule we have set up. The EventBridge rule is set up in the serverless.yml file like so:

```
resources:
  Resources:
    RegisterEventBridge:
      Type: "AWS::Events::Rule"
      Properties:
        Name: "RegisterCloseEventBridge"
        Description: "Event rule for Closing Register event."
        State: "ENABLED"
        EventPattern:
          source:
            - "register.close.${self:provider.stage}"
```

**Consumer**: This Lambda function gets triggered by an SQS message. The SQS queue is a target for the EventBridge rule, so it seems like the queue itself should be created in this service. Below are the resources:

```
resources:
  Resources:
    ClosingRegisterQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "closing-register-sqs-${self:provider.stage}"
    RegisterEventBridge:
      Type: "AWS::Events::Rule"
      Properties:
        Name: "RegisterCloseEventBridge"
        EventPattern:
          source:
            - "register.close.${self:provider.stage}"
        Targets:
          - Arn: !GetAtt ClosingRegisterQueue.Arn
            Id: "SA"
```

But obviously this does not work, because the RegisterEventBridge rule already exists in the previous stack. Is there any way I can simply import it into this stack for this purpose?
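One way to avoid declaring the rule twice (a sketch, assuming both stacks deploy to the same account and region and can use CloudFormation exports): export the queue ARN from the Consumer stack and attach the target in the stack that already owns the rule, via `Fn::ImportValue`:

```yaml
# Consumer stack: keep only the queue, and export its ARN.
resources:
  Resources:
    ClosingRegisterQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "closing-register-sqs-${self:provider.stage}"
  Outputs:
    ClosingRegisterQueueArn:
      Value: !GetAtt ClosingRegisterQueue.Arn
      Export:
        Name: "closing-register-queue-arn-${self:provider.stage}"

# EntryPoint stack: the single rule definition gains the target by import.
#   Targets:
#     - Arn: !ImportValue closing-register-queue-arn-${self:provider.stage}
#       Id: "SA"
```

The export name here (`closing-register-queue-arn-...`) is hypothetical; the trade-off is a deploy-order dependency, since the Consumer stack must exist before the EntryPoint stack can import the value.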
1
answers
0
votes
6
views
KenHarris627
asked 3 months ago

Cloudtrail event notifications

Hello, we have configured a Control Tower landing zone and enrolled tens of accounts in our organization. We would like to monitor some of the actions (ConsoleLogin, SwitchRole, CreateUser, CreatePolicy, CreateRole, PutGroupPolicy, ...) across all accounts in the organization and be notified via Slack or PagerDuty when an action occurs. Is there any out-of-the-box solution or recommended approach? I am considering two approaches:

1. **Listen to the CloudTrail S3 logs bucket.** Create an account which will have read-only access to the CloudTrail logs S3 bucket in the Log Archive account. A Lambda function will be triggered on new records in the bucket. It will download the files from S3 and parse the events. A huge disadvantage is that it will have to parse all CloudTrail entries, which could be expensive and inefficient.

2. **Aggregate events using EventBridge buses.** Create a dedicated "Audit Notifications" account containing an EventBridge event bus that aggregates matched events from all other accounts. In the "Audit Notifications" account, an event rule with a Lambda target forwards the matched events to Slack/PagerDuty/... An event rule forwarding matched events to the event-bus target in "Audit Notifications" will be deployed into each governed region in each member account. This is similar to what is described in https://aws.amazon.com/premiumsupport/knowledge-center/root-user-account-eventbridge-rule/

I favor the second approach, but maybe there are some other options. Thanks
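For approach 2, the member-account rule's event pattern might look roughly like this (a sketch only; the exact `detail-type` values and event names should be checked against real events, since `ConsoleLogin` arrives as a console sign-in event while the others arrive as API calls via CloudTrail):

```json
{
  "detail-type": ["AWS API Call via CloudTrail", "AWS Console Sign In via CloudTrail"],
  "detail": {
    "eventName": ["ConsoleLogin", "SwitchRole", "CreateUser", "CreatePolicy", "CreateRole", "PutGroupPolicy"]
  }
}
```

The matched events would then be forwarded to the central bus in the "Audit Notifications" account via an event-bus target, as in the linked knowledge-center article.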
1
answers
0
votes
11
views
Martin Halamicek
asked 3 months ago

RegEx for ScheduleExpression that can be used in CloudFormation Template

I was writing a CF template today that creates a periodic EventBridge rule. One of the parameters my template accepts is therefore the cron expression for setting the ScheduleExpression on this rule. I struggled for a while getting the expression right for my own use case, and wanted to add an AllowedPattern rule to my parameter. I went looking for a RegEx that I could use, and had no luck. There is a Gist, not updated in a few years, out there that presents this monster:

```
"^(rate\\(((1 (hour|minute|day))|(\\d+ (hours|minutes|days)))\\))|(cron\\(\\s*($|#|\\w+\\s*=|(\\?|\\*|(?:[0-5]?\\d)(?:(?:-|\/|\\,)(?:[0-5]?\\d))?(?:,(?:[0-5]?\\d)(?:(?:-|\/|\\,)(?:[0-5]?\\d))?)*)\\s+(\\?|\\*|(?:[0-5]?\\d)(?:(?:-|\/|\\,)(?:[0-5]?\\d))?(?:,(?:[0-5]?\\d)(?:(?:-|\/|\\,)(?:[0-5]?\\d))?)*)\\s+(\\?|\\*|(?:[01]?\\d|2[0-3])(?:(?:-|\/|\\,)(?:[01]?\\d|2[0-3]))?(?:,(?:[01]?\\d|2[0-3])(?:(?:-|\/|\\,)(?:[01]?\\d|2[0-3]))?)*)\\s+(\\?|\\*|(?:0?[1-9]|[12]\\d|3[01])(?:(?:-|\/|\\,)(?:0?[1-9]|[12]\\d|3[01]))?(?:,(?:0?[1-9]|[12]\\d|3[01])(?:(?:-|\/|\\,)(?:0?[1-9]|[12]\\d|3[01]))?)*)\\s+(\\?|\\*|(?:[1-9]|1[012])(?:(?:-|\/|\\,)(?:[1-9]|1[012]))?(?:L|W)?(?:,(?:[1-9]|1[012])(?:(?:-|\/|\\,)(?:[1-9]|1[012]))?(?:L|W)?)*|\\?|\\*|(?:JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC)(?:(?:-)(?:JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC))?(?:,(?:JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC)(?:(?:-)(?:JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC))?)*)\\s+(\\?|\\*|(?:[0-6])(?:(?:-|\/|\\,|#)(?:[0-6]))?(?:L)?(?:,(?:[0-6])(?:(?:-|\/|\\,|#)(?:[0-6]))?(?:L)?)*|\\?|\\*|(?:MON|TUE|WED|THU|FRI|SAT|SUN)(?:(?:-)(?:MON|TUE|WED|THU|FRI|SAT|SUN))?(?:,(?:MON|TUE|WED|THU|FRI|SAT|SUN)(?:(?:-)(?:MON|TUE|WED|THU|FRI|SAT|SUN))?)*)(|\\s)+(\\?|\\*|(?:|\\d{4})(?:(?:-|\/|\\,)(?:|\\d{4}))?(?:,(?:|\\d{4})(?:(?:-|\/|\\,)(?:|\\d{4}))?)*))\\))$"
```

However, when I tried to use this in a CF template...

```
Parameters:
  Schedule: {
    Type: String,
    Description: "AWS Schedule Expression representing how often the Lambda will run. Default is once per week, Sunday at noon.",
    AllowedPattern: "^(rate\\(((1 (hour|minute|day))|(\\d+ (hours|minutes|days)))\\))|(cron\\(\\s*($|#|\\w+\\s*=|(\... ...?)*))\\))$"
  }
```

This failed and said my YAML was invalid. The column specified pointed to the cause being the escaped forward slashes, i.e. "\/". I have read in some places that "\/" is not a valid escape sequence in the version of YAML that CF uses. Is this true, and does anyone have a regex for ScheduleExpressions that works in a YAML CF template? The Gist I got the above RegEx from is here: https://gist.github.com/andrew-templeton/aca7fc6c166e9b8a46aa
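For reference, a sketch of the usual workaround (this describes general YAML behavior, not the Gist specifically): YAML double-quoted scalars do not define `\/` as an escape sequence, while JSON does, which is why the pattern fails YAML parsing. Since `/` has no special meaning in the regex either, removing the backslash works, as does switching to a single-quoted YAML scalar, where backslashes are literal. The pattern below is a deliberately simplified placeholder, not the full Gist expression:

```yaml
Parameters:
  Schedule:
    Type: String
    Description: >-
      AWS Schedule Expression representing how often the Lambda will run.
      Default is once per week, Sunday at noon.
    # Single-quoted scalar: backslashes are literal, so no YAML escape issues.
    AllowedPattern: '^(rate\((\d+ (minute|minutes|hour|hours|day|days))\)|cron\(.*\))$'
    Default: cron(0 12 ? * SUN *)
```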
1
answers
0
votes
4
views
AWS-User-1848579
asked 3 months ago

How to include context variables in AWS API-EventBridge Integration Detail body

I have an ApiGatewayv2 EventBridge-PutEvents integration running. The POST body is sent into EventBridge as the event Detail. The integration request parameters are as below:

```
{
  'Source': 'com.me.api',
  'DetailType': 'ApiEvent',
  'Detail': $request.body,
  'EventBusName': BUS_NAME,
}
```

If I POST {"foo": "bar"} to the API endpoint, I end up with an event in my bus with {"foo": "bar"} as the `Detail`. So far, all straightforward. I have also enabled an authorizer on the API GW, and I want to pass the context from that authorizer into the event Detail as well. I'm testing with an IAM authorizer for now, but would like to use Cognito. I can update the request parameters to change the DetailType to `$context.identity.caller`, and I get my Access Key in the DetailType of the event. This isn't what I want, but it shows that I **can** access these [context variables][1] in my integration. What I want is for that to be in the Detail, though - not the DetailType. So, when I POST {"foo": "bar"} to my API GW I want an event with Detail:

```
{
  "body": {"foo": "bar"},
  "auth": {"user": "AAA1111ACCESSKEY"}
}
```

But I can't use anything other than `$request.body` as the Detail value in the integration's request parameters. If I use a Detail like `{"body": $request.body}` I get an error on saving the integration - `Invalid selection expression specified: Validation Result: warnings : [], errors : [Invalid source: {"body": $request.body} specified for destination: Detail]` - and I've tried that with a stringified {\"body\": $request.body} as well. How can I wrap the POST data in a key like "body" within the event's Detail, and add key/values from the context variables I get from the authorizer? Or is there some other way to inject the authorizer context into my event? Should I just give up and use a Lambda to access the ApiGw context and run PutEvents? Seems defeatist!

[1]: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-logging-variables.html
0
answers
0
votes
4
views
AWS-User-6724726
asked 3 months ago
1
answers
0
votes
9
views
Neonglb
asked 4 months ago

S3 Event Bridge events have null values for VersionId. Is this a bug?

When working with Lambda Functions to handle EventBridge events from an S3 bucket with versioning enabled, I find that the VersionId field of the AWS Event object always shows a null value instead of the true value. For example, here is the JSON AWSEvent that uses the aws.s3@ObjectDeleted schema. This JSON was the event payload that went to my Lambda Function when I deleted an object from a bucket that had versioning enabled. Note that $.object.versionId is null, but when I look in the bucket, I see unique Version ID values for both the original cat pic "BeardCat.jpg" and its delete marker. Also, I found the same problem in the AWSEvent JSON for an aws.s3@ObjectCreated event, too. There should have been a non-null VersionId in the ObjectCreated event and the ObjectDeleted event. Have I found a bug? Note: Where you see 'xxxx' or 'XXXXXXXXX' I was simply redacting AWS Account numbers and S3 bucket names for privacy reasons.

```
{
  detail: class ObjectDeleted {
    bucket: class Bucket {
      name: tails-dev-images-xxxx
    }
    object: class Object {
      etag: d41d8cd98f00b204e9800998ecf8427e
      key: BeardCat.jpg
      sequencer: 0061CDD784B140A4CB
      versionId: null
    }
    deletionType: null
    reason: DeleteObject
    requestId: null
    requester: XXXXXXXXX
    sourceIpAddress: null
    version: 0
  }
  detailType: null
  resources: [arn:aws:s3:::tails-dev-images-xxxx]
  id: 82b7602e-a2fe-cffb-67c8-73b4c8753f5f
  source: aws.s3
  time: Thu Dec 30 16:00:04 UTC 2021
  region: us-east-2
  version: 0
  account: XXXXXXXXXX
}
```
2
answers
0
votes
7
views
TheSpunicorn
asked 5 months ago

IAM Role for Event Bridge

Hi, I am trying to trigger a Run Command document on a bunch of EC2 instances when a parameter in Parameter Store is updated. The rule gets triggered as expected, but I can see from the events in CloudWatch that all invocations fail. I'm a bit lost as to how to troubleshoot it, as there don't seem to be any logs available in EventBridge. I'm thinking it might be to do with the IAM role used for the targets. If you set up the targets manually through the EventBridge console this role can be created automatically; however, I am required to create all infra via Terraform, so I need to create and assign the role separately. Documentation on the role requirements is a bit thin on the ground, but this is what I have so far:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "ssm:SendCommand",
      "Resource": "arn:aws:ec2:eu-west-2:xxxxxxxxxxxx:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/os_type": "*"
        }
      }
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "ssm:SendCommand",
      "Resource": "arn:aws:ssm:eu-west-2::document/AmazonCloudWatch-ManageAgent"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "ssm:GetParameter",
      "Resource": "arn:aws:ssm:eu-west-2:xxxxxxxxxxxx:parameter/cloud-watch-config-linux"
    }
  ]
}
```

with `events.amazonaws.com` being able to assume the role. Any suggestions on how to troubleshoot this further, or advice on the IAM role permissions required, would be much appreciated. Many thanks.
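One detail worth checking (offered as a sketch, not a confirmed diagnosis): the `StringEquals` condition operator treats `*` as a literal asterisk, so the first statement only matches instances whose `os_type` tag is literally `*`. Wildcard matching on a tag value requires `StringLike`:

```json
{
  "Effect": "Allow",
  "Action": "ssm:SendCommand",
  "Resource": "arn:aws:ec2:eu-west-2:xxxxxxxxxxxx:instance/*",
  "Condition": {
    "StringLike": {
      "ec2:ResourceTag/os_type": "*"
    }
  }
}
```

If the intent is to allow any tag value at all, the condition block can simply be dropped.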
3
answers
0
votes
15
views
AWS-User-9542792
asked 5 months ago

EventBus Rule Target ECS Fargate Task - Unable to invoke set version

When building a rule targeting a specific ECS task definition revision (not the latest), we're observing that the rule fails to be invoked. Let me provide some scenarios:

1. In the EventBridge -> Events -> Rules -> Add Target UI, define everything about your ECS task. Do not set the task definition revision.
   * Verify your event invokes successfully.
   * Pull up the JSON from the AWS CLI for reference: `aws events list-targets-by-rule --rule rule-name-here --event-bus-name bus-name-here`
   * Edit your rule/target, using "Configure task definition revision and task count" to set the Revision to the latest version of your task.
   * Observe whether your rule invokes successfully or not. From my tests, it will fail. You can see this in the rule's CloudWatch "Monitoring" view by observing Invocations and FailedInvocations.
   * Pull up the JSON from the AWS CLI again. You will notice the working version does not have the revision appended to the end of the ARN (i.e. works: "arn::aws::task-definition" vs. non-working: "arn::aws::task-definition:16").

2. In CloudFormation, build your CF template with the appropriate settings so it can be matched/compared with #1. Example (with lots of actual values replaced):

```
Targets:
  - Arn: !GetAtt ClusterArn.Value
    RoleArn: !GetAtt RoleArn.Value
    Id: project-name-here
    EcsParameters:
      TaskCount: 1
      TaskDefinitionArn: !GetAtt RoleArn.Value
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsVpcConfiguration:
          AssignPublicIp: DISABLED
          SecurityGroups:
            Fn::Split:
              - ","
              - Fn::ImportValue: !Sub ${EnvironmentName}:sec-groups
          Subnets:
            Fn::Split:
              - ","
              - Fn::ImportValue: !Sub ${EnvironmentName}:subnets
```

If you attempt this CF template, it will build the stack successfully when given a valid ARN for the ECS task definition (and the rule invoking this target will fail). If you try to provide the task definition ARN without the revision, that's not a valid ARN, so CF will fail during stack creation.

Let me know if more information is required to test this scenario in other environments, but we have validated on our end that it is not working as expected. Any help/guidance would be greatly appreciated!

Edited by: rsNate on Jun 29, 2021 2:32 PM
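One thing that may be worth ruling out (an assumption to verify, not a confirmed cause): the rule's `RoleArn` needs `ecs:RunTask` permission on the exact task definition ARN the target references, and an IAM policy written against the unversioned ARN does not match a revision-suffixed one. A policy statement covering every revision would look something like this (region, account ID, and family name are placeholders):

```json
{
  "Effect": "Allow",
  "Action": "ecs:RunTask",
  "Resource": "arn:aws:ecs:us-east-1:123456789012:task-definition/project-name-here:*"
}
```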
2
answers
0
votes
0
views
rsNate
asked a year ago