Questions tagged with Amazon EventBridge
So I have a Lambda function that's the following:
```
import urllib3

def lambda_handler(event, context):
    http = urllib3.PoolManager()
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": "{\"message\": \"Hello from Lambda!\"}",
        "test": event
    }
```
If I test the function with some event JSON, it returns what I want. But when I run the Lambda through the REST API I created, I either get a 502 error when "Use Lambda Proxy integration" is enabled, or, when it's disabled, `event` is always empty. I have tried to set up the resource path the following way:
https://something.execute-api.eu-north-1.amazonaws.com/test/test/{id}
Where you enter an {id}, for example, and I can catch it in the event. But how can I fix this so that `event` is actually populated with the request information?
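For reference, with Lambda proxy integration enabled, API Gateway passes the `{id}` from the URL above under `event["pathParameters"]`, and the handler has to return a response in the proxy format. This is a minimal sketch of that shape (not the original code); note that API Gateway is strict about the response keys, so an extra top-level key like `"test"` can itself cause a 502 "malformed Lambda proxy response":

```python
import json

def lambda_handler(event, context):
    # With proxy integration, path parameters arrive under "pathParameters".
    # event.get(...) guards against test invocations that omit the key.
    path_params = event.get("pathParameters") or {}
    item_id = path_params.get("id")

    # Proxy integration expects only statusCode/headers/body(/isBase64Encoded);
    # anything the caller needs must go inside the JSON-encoded body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello from Lambda!", "id": item_id}),
    }
```

Testing it locally with a simulated proxy event (`{"pathParameters": {"id": "42"}}`) is a quick way to check the shape before deploying.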
If you feel like I'm missing some information that might be important, just ask; I don't know what could be useful.
Hello everyone,
I am facing an odd situation here.
For a few days now, some events have been fired twice on the same bus (default). They are exactly the same: same content and same id.
They therefore trigger some Lambdas twice, messing with our event processing.
I thought that was supposed to be impossible.
I assume that if there are two logs in events/debug, there were two events fired.
Look at the photo: you can see the same id in the JSON at the same time.

If you have any idea what could cause this, please share.
Thanks for your help.
EDIT 1: The events are generated by a Lambda using the AWS SDK for Node.js and the putEvents method.
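Worth noting in this context: EventBridge documents at-least-once delivery, so targets can occasionally see duplicates even for a single putEvents call. A common mitigation is to make the consuming Lambda idempotent by tracking recently seen event ids. A minimal Python sketch (the in-memory set is purely illustrative; a real handler would use something durable such as a DynamoDB conditional write, since Lambda containers are recycled):

```python
# Illustrative in-memory dedup store; replace with a durable store
# (e.g. DynamoDB conditional put) in a real deployment.
_seen_ids = set()

def handle_event(event):
    """Process an EventBridge event at most once per event id."""
    event_id = event["id"]
    if event_id in _seen_ids:
        # Duplicate delivery of an already-processed event: skip it.
        return "duplicate-skipped"
    _seen_ids.add(event_id)
    # ... actual business logic would run here ...
    return "processed"
```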
Hi all,
as a **Security Requirement** we need to set up a **notification system** using **SNS** to notify our **Security Team** when someone accesses an AWS account using a specific SSO permission set (for example, **AdministratorAccess**), as shown in the image below:

I'm trying to set up a simple **EventBridge rule** based on the **IAM Identity Center** **Federate** event in **CloudTrail**, with an **SNS topic** as the target, but I can't get it working.
**CloudTrail Event** :
```
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "Unknown",
        "principalId": "xxxx-43ce-996a-0530772c083a",
        "accountId": "xxxxxxxxxxx",
        "userName": "userName"
    },
    "eventTime": "2023-03-23T00:07:29Z",
    "eventSource": "sso.amazonaws.com",
    "eventName": "Federate",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "1.1.1.1",
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/111.0",
    "requestParameters": null,
    "responseElements": null,
    "requestID": "c99b-48ea-a9e4-fc2194bc0f27",
    "eventID": "415e-b57e-99764a0f0fdf",
    "readOnly": false,
    "eventType": "AwsServiceEvent",
    "managementEvent": true,
    "recipientAccountId": "xxxxxxxxxx",
    "serviceEventDetails": {
        "role_name": "AWSAdministratorAccess",
        "account_id": "xxxxxxxx"
    },
    "eventCategory": "Management"
}
```
**EventBridge Event Pattern** is the Following :
```
{
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["sso.amazonaws.com"],
        "eventName": ["Federate"]
    }
}
```
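One thing worth checking: the sample event above has `"eventType": "AwsServiceEvent"`, while the pattern matches the detail-type used for API calls. EventBridge wraps service events with the detail-type `AWS Service Event via CloudTrail`, so a pattern along these lines may be worth testing (the `role_name` constraint is optional and shown only to match the specific permission set; verify the detail-type against a sample event delivered to the bus):

```
{
    "detail-type": ["AWS Service Event via CloudTrail"],
    "detail": {
        "eventSource": ["sso.amazonaws.com"],
        "eventName": ["Federate"],
        "serviceEventDetails": {
            "role_name": ["AWSAdministratorAccess"]
        }
    }
}
```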
Could anyone help me get this working?
Thanks in advance.
Hi, we would like to trigger events when an instance (EC2 or on-prem) is registered in Systems Manager; an example would be triggering an instance-tagging mechanism so that tags are applied as soon as the instance is registered.
I can see a PutInventory event appear in CloudTrail on registration, and then recur every 12 hours thereafter, but we don't want anything that cyclical. We are considering an Association configured without a schedule, but that triggers RunCommand on the instance, which is unnecessary data transit across the WAN. It would be good if the event could trigger EventBridge, which then triggers a Step Function, but we're looking for the best trigger.
Does anyone have any suggestions on the best trigger for this?
Use case: new documents are added through a web application to S3 on an ongoing basis. I am trying to build a document search for the documents stored in S3 that can surface uploads in near real time. Does Kendra sync its data source with the index based on an event trigger?
I set up the resources to trigger a Glue job through EventBridge, but when I tested it in the console, Invocations == FailedInvocations == TriggeredRules == 1.
What can I do to fix it?
```
######### AWS Glue Workflow ############
# Create a Glue workflow that triggers the Glue job
resource "aws_glue_workflow" "example_glue_workflow" {
  name        = "example_glue_workflow"
  description = "Glue workflow that triggers the example_glue_job"
}

resource "aws_glue_trigger" "example_glue_trigger" {
  name          = "example_glue_trigger"
  workflow_name = aws_glue_workflow.example_glue_workflow.name
  type          = "EVENT"

  actions {
    job_name = aws_glue_job.example_glue_job.name
  }
}

######### AWS EventBridge ##############
resource "aws_cloudwatch_event_rule" "example_etl_trigger" {
  name        = "example_etl_trigger"
  description = "Trigger Glue job when a request is made to the API endpoint"

  event_pattern = jsonencode({
    "source" : ["example_api"]
  })
}

resource "aws_cloudwatch_event_target" "glue_job_target" {
  rule      = aws_cloudwatch_event_rule.example_etl_trigger.name
  target_id = "example_event_target"
  arn       = aws_glue_workflow.example_glue_workflow.arn
  role_arn  = local.example_role_arn
}
```
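One common cause of `FailedInvocations` with a Glue workflow target is the target role: the role passed in `role_arn` must be allowed to call `glue:notifyEvent` on the workflow, which is how EventBridge starts event-triggered workflows. A policy sketch for that role (the region and account placeholders are illustrative):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "glue:notifyEvent",
            "Resource": "arn:aws:glue:<region>:<account-id>:workflow/example_glue_workflow"
        }
    ]
}
```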
I checked the rule itself and the event scheduler. Just as a test, I had it running every five minutes. However, when I checked the Lambda's CloudWatch log group, there was nothing. The rule state is enabled, and the trigger shows up on the Lambda. In the CloudWatch metrics there is actually data, and it reports success, so I don't think it's a permissions issue. Also, when I tried to add the rule's ARN in the Configuration > Permissions tab, it reported an error because the ARN contains a forward slash (it only seems to accept backslashes).
I'm using the code from this guide: https://aws.amazon.com/blogs/security/how-to-monitor-expirations-of-imported-certificates-in-aws-certificate-manager-acm/ so I don't think it's the Lambda itself. I also put a print statement in lambda_handler just to see if anything outputs. Nothing does.
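To isolate the scheduling side from the function code, a throwaway handler like this sketch (not the blog post's code) writes one line per invocation to the function's log group, which confirms whether the rule is actually invoking the function at all:

```python
import json

def lambda_handler(event, context):
    # Any print/log line lands in the function's CloudWatch log group,
    # so a single scheduled invocation is enough to prove the trigger fires.
    print("invoked with event:", json.dumps(event))
    return {"ok": True}
```

If even this logs nothing while the rule's metrics report success, the gap is between EventBridge and the function (resource policy or target configuration) rather than in the handler code.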
How do I connect an EventBridge bus directly to an EventBridge Pipe as a source? So: EventBridge Bus -> EventBridge Pipe -> Enrichment (Lambda) -> Pipe Target Event Pattern -> Target (Lambda). As far as I can tell from the documentation and console, I can only select streaming services as Pipe sources. Is this a permanent limitation?
The scenario I wanted to implement was my EventBridge bus events being enriched with feature-flag detail, pre-populated based on identity and detail-type, to discourage target services from making tightly coupled calls to the feature-flag service. EventBridge Pipes sounded like the best idea, as no code would have to be written to plumb messages along the pipeline, just the Lambda code to enrich messages.
One possible workaround I was planning to try: EventBridge Bus -> Rule Event Pattern (*) -> Lambda Target (enriches events based on data from a DynamoDB table with a cache), which then pushes events to a second EventBridge Bus -> Rule Event Pattern(s) -> Target(s).
Would love expert suggestions for alternatives, or word that this is a planned feature change.
Thanks
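For the workaround described above, the enrichment Lambda's core logic could look like this sketch. The flag table and key scheme are hypothetical stand-ins for the DynamoDB lookup with a cache, and the actual publish to the second bus (the SDK's PutEvents call) is noted but omitted to keep the sketch self-contained:

```python
# Hypothetical flag table standing in for a cached DynamoDB lookup,
# keyed by (source, detail-type) as described in the scenario.
FEATURE_FLAGS = {
    ("service-a", "OrderCreated"): {"new_checkout": True},
}

def enrich(event):
    """Attach feature-flag detail to an event based on source and detail-type."""
    key = (event.get("source"), event.get("detail-type"))
    enriched = dict(event)
    enriched["detail"] = {
        **event.get("detail", {}),
        "featureFlags": FEATURE_FLAGS.get(key, {}),
    }
    # A real handler would now publish `enriched` to the second bus via the
    # AWS SDK's PutEvents (omitted here so the sketch runs without AWS access).
    return enriched
```

Keeping the lookup behind a pure function like this also makes the enrichment step easy to unit-test without any AWS dependencies.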
Hello,
I'm trying to use EventBridge to schedule Batch submissions. However, I'm getting this error:
```
"User: arn:aws:sts::[account ID]:assumed-role/[IAM Batch invoker role] is not authorized to perform: batch:SubmitJob on resource: arn:aws:batch:[account ID]:job-definition/[job definition name]"
```
The invoker role's permissions are as follows:
```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "batch:SubmitJob",
            "Resource": [
                "arn:aws:batch:[account ID]:job-definition/[job definition name]:*",
                "arn:aws:batch:[account ID]:job/[job name]",
                "arn:aws:batch:[account ID]:job-queue/[job queue name]"
            ]
        }
    ]
}
```
For whatever reason, the rules work fine if I list the most recent job revision as the rule's target (i.e., arn:aws:batch:[account ID]:job-definition/[job definition name]:235). However, if I don't list the most recent revision number, I get the above error. My team updates this job definition frequently and I'm trying to make several rules like this, so manually changing the revision number every time isn't a good option. The rules also work if I just use "Resource": "*" for permissions, but this security policy is unacceptably broad for my organization. Is there a way I can get rules like this to work without listing the revision number?
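One detail in the error message may explain this: it cites the job definition ARN *without* a revision suffix, while the policy only allows `...:job-definition/[job definition name]:*`, which requires one. When no revision is given, the authorization check apparently runs against the unversioned ARN, so adding that form alongside the versioned one may be enough (a sketch, with the placeholders kept as in the question):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "batch:SubmitJob",
            "Resource": [
                "arn:aws:batch:[account ID]:job-definition/[job definition name]",
                "arn:aws:batch:[account ID]:job-definition/[job definition name]:*",
                "arn:aws:batch:[account ID]:job/[job name]",
                "arn:aws:batch:[account ID]:job-queue/[job queue name]"
            ]
        }
    ]
}
```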
Following [a tutorial example](https://docs.aws.amazon.com/batch/latest/userguide/batch-cwe-target.html) it is possible to configure an eventbridge rule to trigger an AWS Batch job.
It is even possible to set other container override configurations like `Environment` as a list of `Name`/`Value` objects in the request JSON. After starting these attribute names with uppercase, you can see them in CloudTrail in the SubmitJob request as lowercase values that match the [job definition](https://docs.aws.amazon.com/batch/latest/APIReference/API_JobDefinition.html).
The issue is that the jobs started by EventBridge do not contain any tags. **How can I get tags configured on the AWS Batch jobs started by EventBridge?**
Random facts:
- The submission role has `batch:TagResource` permission
- Using `Tags` or `tags` as an attribute in the transformer next to `ContainerOverrides` does not work
- Tags have been tried both as `[{"Name": "tag_name", "Value": "tag value"}]` and as `{"tag_name": "tag value"}`
- In CloudTrail I can see the SubmitJob coming from events.amazonaws.com, and the request parameters include `jobName`, `containerOverrides`, `jobQueue` and `jobDefinition`
- The Batch Job Definition does have tags configured
I am trying to create a rule in EventBridge to trigger a workflow when a file with a specific format is uploaded under the desired prefix of an S3 bucket.
```
{
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": ["my-bucket"],
            "key": [{
                "prefix": "folder1/folder2"
            }],
            "FileName": [{ "suffix": ".xlsx" }]
        }
    }
}
```
When I upload files, say to s3://my-bucket/folder1/folder2/folder3/test.xlsx, the Glue workflow is not triggered.
Can someone help me fix this event pattern so it triggers the workflow for a specific file type?
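For what it's worth, `FileName` does not appear to be a field CloudTrail records for `PutObject`; the full object key, file name included, is carried in `key`, so the `FileName` constraint above can never match. Moving the suffix filter onto `key` gives a pattern like this (note this version drops the prefix constraint, since applying both `prefix` and `suffix` to the same field may require wildcard matching or a downstream check; worth verifying against the EventBridge sandbox):

```
{
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": ["my-bucket"],
            "key": [{ "suffix": ".xlsx" }]
        }
    }
}
```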
Is there a way to configure an API destination connection not to be deauthorized on a 401 error?
Currently I am using EventBridge to schedule custom API endpoint calls; these tasks can be scheduled to run several days out. In the meantime, if a different task runs and gets an unauthorized response, the connection becomes deauthorized and no future calls are executed.
I don't want to create a different connection for each rule, as all my tasks run against the same API destination, just with different payloads.