Questions tagged with Amazon EventBridge
FlexMatch sends various notifications described [here](https://docs.aws.amazon.com/gamelift/latest/flexmatchguide/match-events.html). I have a service getting these messages via SQS polling. The event payloads in the messages are serialized to JSON. Are there models defined for these events in the Java AWS SDK 2.0? I sure cannot find them.
Are you supposed to roll your own models to deserialize and work with these events?
I have an EventBridge rule with an SQS target, and the Lambda function that puts the event on the bus is configured to use X-Ray (traces lead up to EventBridge in X-Ray, so this is working fine).
In the SQS messages (received with a ReceiveMessageCommand) there is no AWSTraceHeader attribute, so I cannot continue the trace downstream.
I have added an identical rule with a Lambda target with tracing, to test whether the trace is propagated correctly, and this is the case: I have a Lambda node linked after the Events node in the service map.
I read that EventBridge should propagate trace headers to SQS targets, as mentioned here:
https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-eventbridge-now-supports-propagation-of-x-ray-trace-context/?nc1=h_ls
Is this actually the case? If so, is there anything I am missing for this to work?
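My understanding is that the trace context should arrive on the message as the `AWSTraceHeader` system attribute, which has to be requested explicitly when receiving. A minimal boto3 sketch of such a receive call (the queue URL is a placeholder):
```
import boto3

sqs = boto3.client("sqs")

# AWSTraceHeader is a message *system* attribute, so it is only returned
# if it is asked for explicitly (or via "All"). Queue URL is a placeholder.
resp = sqs.receive_message(
    QueueUrl="https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue",
    AttributeNames=["AWSTraceHeader"],
    MaxNumberOfMessages=1,
    WaitTimeSeconds=10,
)
for msg in resp.get("Messages", []):
    print(msg.get("Attributes", {}).get("AWSTraceHeader"))
```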
I am sharing my release notification on Slack via SNS, EventBridge and Lambda functions. I have accomplished this successfully, but we do not get the commit message, the PR link, or the PR details for the PR that triggered the deploy. Is this already possible in some way? Is there a feature in progress? Or is it not possible?
Also, my repository is in BitBucket.
Hello,
We're using EventBridge to trigger an application API on a periodic basis. The API is public facing and hosted on our EC2 instances, which sit behind a load balancer.
It had been working fine up until the day the domain name's SSL certificate expired. However, after the SSL certificate was renewed and re-imported via Certificate Manager, the error still persists (from the dead-letter queue):
| Name | Value |
| --- | --- |
| ERROR_CODE | SDK_CLIENT_ERROR |
| ERROR_MESSAGE | Unable to invoke ApiDestination endpoint: API destination endpoint cannot be reached. |
| RULE_ARN | ... |
| TARGET_ARN | ... |
The Connection is Authorized, and the API Destination has been checked and works when called from a browser. Everything seems to be in order. I have tried re-creating the Connection, the API Destination and the rule, but I am hitting the same error.
What might cause such an "SDK_CLIENT_ERROR"?
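For reference, the connection and API destination state can also be checked via the API; a boto3 sketch (the connection and destination names are placeholders):
```
import boto3

events = boto3.client("events")

# Placeholder names; prints the current state and, for the connection,
# any state reason EventBridge reports.
conn = events.describe_connection(Name="my-api-connection")
dest = events.describe_api_destination(Name="my-api-destination")
print(conn["ConnectionState"], conn.get("StateReason"))
print(dest["ApiDestinationState"], dest["InvocationEndpoint"])
```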
Thanks in advance.
SCENARIO:
I have a CloudWatch alarm action that triggers an SNS topic.
The alarm metric is configured to filter CRITICAL events in a Lambda log group.
The Lambda (invoked every 15 minutes) checks for CloudFormation stacks in 'error' states and logs a critical event for each stack in an error state.
```
Logs::MetricFilter
  FilterPattern: '{$.level="CRITICAL"}'
  MetricValue: 1

CloudWatch::Alarm
  AlarmActions: Send to SNS Topic
  Period: 600
  TreatMissingData: notBreaching
  ComparisonOperator: GreaterThanOrEqualToThreshold
  Threshold: 1
  EvaluationPeriods: 1
  Statistic: Maximum
```
The CloudWatch alarm works as expected when one stack is in the error state:
* Picks the CRITICAL event
* ALARM changes state to 'In Alarm'
* SNS Topic triggered
CHALLENGE:
If any other stack goes into error (say, 15 minutes later) while the initial stack is still in error, the alarm doesn't act on it, i.e. it doesn't trigger the SNS topic again.
I understand from research that this is normal behavior because *" If your metric value is still in breach of your threshold, the alarm will remain in the ALARM state until it no longer breaches the threshold."*
I have also tested and confirmed this: I used boto3's **set_alarm_state** to set the alarm back to OK, invoked the Lambda manually, the alarm state changed back to 'In alarm', and the SNS topic triggered.
QUESTION:
Is there any other suitable configuration or logic I can use to trigger the SNS topic for every stack in the error state?
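For reference, the boto3 reset I used in that test looked roughly like this (the alarm name is a placeholder):
```
import boto3

cloudwatch = boto3.client("cloudwatch")

# Force the alarm back to OK after handling a notification, so the next
# CRITICAL data point causes a fresh OK -> ALARM transition and the SNS
# topic fires again. The alarm name below is a placeholder.
cloudwatch.set_alarm_state(
    AlarmName="stack-error-critical-alarm",
    StateValue="OK",
    StateReason="Reset after notification so later stack errors re-trigger the alarm",
)
```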
Dear Community,
Please imagine the following scenario:
* I have multiple long-running computation tasks. I'm planning to package them as container images and use ECS tasks to run them.
* I'm planning to have a serverless part for administering the tasks.
Once a computation task starts, it takes its input data from an SQS queue and begins its computation. All results also end up in an SQS queue, for storage. So far, so good.
Now the tricky bit: The computation task needs some human input in the middle of its computation, based on intermediate results.
Simplified, the task says: "I have the intermediate result of 42, should I resume with route A or route B?". Saving the state and resuming in a different container (based on A or B) is not an option; it just takes too long. Instead I would like to have a serverless input form that sends the human input (A or B) to this specific container. What is the best way of doing this?
My idea so far:
Each container creates its own SQS queue and includes the URL in its intermediate result message. But this might result in many queues, and potentially abandoned queues should a container crash. There must be a better way to communicate with a single container. I have seen ECS Exec, but that seems more built for debugging purposes.
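A rough sketch of that idea, assuming the container runs Python with boto3 (the results queue URL and queue names are placeholders):
```
import json
import uuid

import boto3

sqs = boto3.client("sqs")
RESULTS_QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/results"  # placeholder

# Each task creates its own short-lived reply queue...
reply_url = sqs.create_queue(QueueName=f"task-reply-{uuid.uuid4()}")["QueueUrl"]

# ...and advertises it in the intermediate-result message.
sqs.send_message(
    QueueUrl=RESULTS_QUEUE_URL,
    MessageBody=json.dumps({"intermediate_result": 42, "reply_queue_url": reply_url}),
)

# Block until the human decision (route A or B) arrives on the reply queue.
decision = None
while decision is None:
    resp = sqs.receive_message(QueueUrl=reply_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        decision = json.loads(msg["Body"])["route"]

sqs.delete_queue(QueueUrl=reply_url)
```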
I have an EventBridge rule with a cron schedule to run once a day. When I enable the rule, it keeps firing the event and invoking the API Gateway. I tried disabling the rule, but checking the API Gateway logs I keep seeing executions.
Is this expected behavior for a rule?
Hopefully not an indication of a bug....
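For reference, a quick way to sanity-check the rule's schedule and state via the API (boto3 sketch; the rule name is a placeholder):
```
import boto3

events = boto3.client("events")

# A once-a-day schedule should look like e.g. cron(0 12 * * ? *),
# and State should be DISABLED after disabling the rule.
rule = events.describe_rule(Name="my-daily-rule")  # placeholder name
print(rule["State"], rule.get("ScheduleExpression"))
```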
I've got my Storage Gateway sending EventBridge events that then trigger a Step Function. For debugging, I also have the same filtered event going to SNS to email me, plus a separate EventBridge rule which does not filter the events and just emails me everything the file gateway sends.
This setup is working well for using the upload-complete event to trigger my Step Function, and I have verified it is working on a mix of uploaded data.
HOWEVER, I used the Storage Gateway to upload 3 very large files (200 GB+ each) to the S3 bucket and noticed these did NOT trigger the upload-complete Step Function and I did NOT receive event emails (i.e. the Storage Gateway did not send any events; because the 2nd rule is unfiltered and forwards every event, I know the gateway didn't send one). I'm seeing no further upload activity in the Storage Gateway metrics, and the objects appear in S3 at their full size, so the uploads have clearly completed.
It's clear the file gateway uploaded these very large files as multipart uploads, and I'm wondering if this is the reason I didn't receive any event emails (or why the Step Function wasn't triggered). I highly doubt that is the desired behaviour, as I expect 'upload complete' to apply to all objects, not just ones uploaded whole. I was hoping the Storage Gateway events would be 'more reliable' than S3-based event triggering, as AWS's own blog post about file upload events suggested.
Can you confirm whether this is actually the expected behaviour, or whether I have to employ a work-around using S3 EventBridge events (multipart upload complete?), as sketched below?
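The work-around I have in mind would be a rule on the native S3 "Object Created" EventBridge event, which, as far as I understand, is also emitted when a multipart upload completes (with a "CompleteMultipartUpload" reason). A rough boto3 sketch, assuming EventBridge notifications are enabled on the bucket (rule and bucket names are placeholders):
```
import json

import boto3

events = boto3.client("events")

# Match S3 "Object Created" events for the gateway's bucket; these cover
# multipart uploads as well ("detail.reason": "CompleteMultipartUpload").
# Rule and bucket names are placeholders.
events.put_rule(
    Name="s3-object-created-fallback",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["my-file-gateway-bucket"]}},
    }),
)
```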
Thanks in advance,
Owen.
I defined an EventBridge Rule which targets a Systems Manager Automation I wrote. The Automation runs a Document that starts like this:
```
schemaVersion: '0.3'
assumeRole: "{{ AutomationAssumeRole }}"
parameters:
  AutomationAssumeRole:
    type: String
    description: "(Required)"
    default: ""
```
For the EventBridge Rule, I define the Execution Role to be an IAM Role I created. I also configure an Input Transformer with some data about the event that triggered the Rule.
I want to pass in the Execution Role into the Automation's "AutomationAssumeRole" parameter.
If I manually trigger the Automation in the UI and select the Role from the dropdown, the Automation execution works as expected. I have not been able to figure out the right way to define the Input Transformer to have the EventBridge Rule trigger the Automation and populate the parameter "AutomationAssumeRole".
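For illustration, this is roughly what I have been trying when configuring the target via the API (a boto3 sketch; the rule name, document name and ARNs are placeholders, and the event fields I map with the Input Transformer are omitted here to focus on AutomationAssumeRole):
```
import boto3

events = boto3.client("events")

events.put_targets(
    Rule="my-rule",  # placeholder
    Targets=[{
        "Id": "ssm-automation",
        # Automation document target (placeholder ARN)
        "Arn": "arn:aws:ssm:eu-west-1:123456789012:automation-definition/MyAutomationDoc:$DEFAULT",
        # Execution role the Rule uses to start the Automation (placeholder ARN)
        "RoleArn": "arn:aws:iam::123456789012:role/MyEventBridgeExecutionRole",
        "InputTransformer": {
            # Automation parameters are passed as lists of strings; here the
            # execution role ARN is inserted into the template as a literal.
            "InputTemplate": '{"AutomationAssumeRole": ["arn:aws:iam::123456789012:role/MyEventBridgeExecutionRole"]}',
        },
    }],
)
```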
I am trying to create an alarm on a metric filter, evaluated every 2 hours. I need it for stopping MediaLive channels that are running but not receiving streams.
What I need is a query for channels in a RUNNING state where the SET event for the alert was received X minutes ago and no CLEARED event has been received since (consecutive order).
Something like this: { ($.detail-type= "MediaLive Channel Alert") && ($.detail.alert_type= "RTMP Stream Not Found") && ($.detail.alarm_state= "SET") && ($.resources[0]= "")}
Here is the setup:
* BUCKET_1 - target of a DMS replication task with an on-prem source endpoint and a table preparation mode of "DROP_AND_CREATE".
* BUCKET_2 - synced by Lambda from events in BUCKET_1, and the source endpoint for a migration task to an Aurora RDS instance.
BUCKET_1 has Lambda triggers defined for the events below (in order to copy and delete objects in BUCKET_2):
* s3:ObjectCreated:*
* s3:ObjectRemoved:*
The goal is to keep BUCKET_2 in perfect sync with BUCKET_1.
Recently, we have found that the ObjectRemoved* and ObjectCreated* events are not always delivered in chronological order. I found documentation stating that the order in which S3 event notifications are delivered to Lambda is not guaranteed. This leaves a situation where files in BUCKET_2 can be deleted right after creation (the create and delete arrive out of order).
I have been researching workarounds. One would be to look up the last modified time of the object when the event is ObjectRemoved*, and if it is within 2 minutes (or some reasonable time frame), not delete it.
The other option would be to create a CloudWatch Events rule like the one below and bind it to a Lambda that would check whether the task's event ID is 'DMS-EVENT-0069' and then clean up all associated "dbo" files in BUCKET_2:
```
{
"source": [
"aws.dms"
],
"detail-type": [
"DMS Replication State Change"
]
}
```
My concern with the above is whether there will be enough lag time between DMS-EVENT-0069 and the start of data transfer to allow emptying BUCKET_2 of all contents.
We will have up to 450 tasks and 300 buckets supporting the replication of 150 databases, so I am looking for a best practice solution to ensure that BUCKET_1 and BUCKET_2 are in perfect sync. This is critical for replication.
Perhaps there are better options to ensure two buckets are in sync?
**UPDATE**: Not wanting to persist sequencers, due to the lack of persistent storage in our solution, I am leaning toward the following solution (this will only work if the ObjectCreated* event is fired after the object has been created and the ObjectRemoved* event is fired after the object has been deleted). There will be no other processes touching these objects, just DMS and the Lambda.
```
import boto3
from botocore.exceptions import ClientError
from datetime import datetime, timezone

s3 = boto3.client("s3")
# ObjectRemoved* event raised on BUCKET_1 during a DROP_AND_CREATE full load
def handle_object_removed(key, bucket_2):
    try:
        head = s3.head_object(Bucket=bucket_2, Key=key)  # same-named object in BUCKET_2?
    except ClientError:
        return  # no object with the same name in BUCKET_2, nothing to delete
    if (datetime.now(timezone.utc) - head["LastModified"]).total_seconds() > 120:
        s3.delete_object(Bucket=bucket_2, Key=key)
    # else: object was created by the same migration task instance, leave it there
```
Hi!
In our company, we provide customers with a backoffice where they can perform multiple actions (executing payments, refunds, etc.). We need to log these actions in chronological order so we can check what each user does.
I've been looking at EventBridge: each time a user does something, we would send an event to EventBridge, and this would trigger a Lambda that stores the event information in DynamoDB, for example.
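For illustration, the publish call we have in mind would look roughly like this (a boto3 sketch; the bus name, source, detail-type and payload are placeholders):
```
import json

import boto3

events = boto3.client("events")

# One entry per backoffice action; all names/fields below are placeholders.
events.put_events(Entries=[{
    "EventBusName": "backoffice-audit",
    "Source": "backoffice",
    "DetailType": "UserAction",
    "Detail": json.dumps({
        "userId": "user-123",
        "action": "refund",
        "timestamp": "2023-01-01T12:00:00Z",
    }),
}])
```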
But we have also thought of using SQS, which would also trigger that same Lambda.
Which service would be more suitable for sending all these customer actions? We could also send them directly to the Lambda, so we are not sure what the main advantage of using EventBridge or SQS would be.
As the events won't ever be modified, would DynamoDB be the best option for storing them?
Thank you for your help!