- Use LocalStack for EventBridge Emulation

LocalStack is a great tool for emulating AWS services locally, including EventBridge. You can:

- Export your EventBridge events to a file (e.g., in JSON format).
- Use LocalStack to replay these events during testing.
- Bake the exported events into your Docker image by copying the JSON file into the image and configuring LocalStack to load them at runtime.

```dockerfile
FROM localstack/localstack
COPY events.json /etc/localstack/events.json
CMD ["localstack", "start"]
```

You can then configure LocalStack to load the events from events.json during startup.
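A minimal sketch of such a replay script, assuming LocalStack's default edge endpoint at http://localhost:4566, a bus named my-event-bus, and an events.json holding a list of entries in PutEvents form (all assumptions):

```python
# replay_events.py -- hypothetical helper; endpoint, bus name and file layout are assumptions.
import json

import boto3

events = boto3.client(
    "events",
    endpoint_url="http://localhost:4566",  # LocalStack edge endpoint (assumption)
    region_name="us-east-1",
    aws_access_key_id="test",              # LocalStack accepts dummy credentials
    aws_secret_access_key="test",
)

with open("events.json") as fh:
    recorded = json.load(fh)

# PutEvents accepts at most 10 entries per call, so send in small batches.
for i in range(0, len(recorded), 10):
    batch = [
        {
            "Source": e["Source"],
            "DetailType": e["DetailType"],
            "Detail": e["Detail"] if isinstance(e["Detail"], str) else json.dumps(e["Detail"]),
            "EventBusName": "my-event-bus",  # hypothetical bus name
        }
        for e in recorded[i : i + 10]
    ]
    events.put_events(Entries=batch)
```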
- Export Events to S3 and Use a Pre-Baked File

- Export your EventBridge events to an S3 bucket using an AWS Lambda function or EventBridge rule.
- Download the events as a JSON file and bake it into your Docker image.
- During testing, load the events from the baked file into your testing framework.

```dockerfile
FROM python:3.9
WORKDIR /app
COPY events.json test_runner.py /app/
CMD ["python", "test_runner.py"]
```

Your test_runner.py script can load the events from events.json and simulate them.
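A minimal sketch of what test_runner.py could look like, assuming the exported events form a JSON list and that the code under test exposes a process_event(event) function in a module named handler (both assumptions):

```python
# test_runner.py -- hypothetical sketch; the handler module and function are assumptions.
import json
import sys

from handler import process_event  # hypothetical code under test


def main():
    with open("events.json") as fh:
        events = json.load(fh)

    failures = 0
    for event in events:
        try:
            # Feed each recorded EventBridge event to the code under test.
            process_event(event)
        except Exception as exc:
            failures += 1
            print(f"event {event.get('id', '<no id>')} failed: {exc}")

    print(f"replayed {len(events)} events, {failures} failures")
    sys.exit(1 if failures else 0)


if __name__ == "__main__":
    main()
```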
- Use AWS CLI for Replay

If you want to avoid temporary infrastructure, you can use the AWS CLI to fetch and replay events directly in your Dockerfile:

```dockerfile
RUN aws events list-events --event-bus-name my-event-bus > /app/events.json
```

This approach avoids creating temporary SQS targets but requires AWS CLI credentials during the build process.
- Pre-Process Events Outside Docker

Instead of handling everything in the Dockerfile, pre-process the events outside Docker:

- Use a script to fetch and filter EventBridge events.
- Save the processed events as a file.
- Include the file in your Docker image.

This keeps your Dockerfile simpler and avoids the need for temporary infrastructure.
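A minimal sketch of such a pre-processing script, assuming a raw dump already exists in raw_events.json (for example from an SQS target or an S3 export) and that filtering on the source field is enough (both assumptions):

```python
# filter_events.py -- hypothetical pre-processing step run outside Docker.
# The input file, source value and output path are all assumptions.
import json

SOURCE_OF_INTEREST = "my.application"  # hypothetical event source


def main():
    with open("raw_events.json") as fh:
        raw = json.load(fh)

    # Keep only the events relevant to the tests and drop everything else.
    filtered = [e for e in raw if e.get("source") == SOURCE_OF_INTEREST]

    with open("events.json", "w") as fh:
        json.dump(filtered, fh, indent=2)

    print(f"kept {len(filtered)} of {len(raw)} events")


if __name__ == "__main__":
    main()
```

The resulting events.json is the file the Dockerfiles above COPY into the image.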
- Use EventBridge Archive and Replay

If you need to replay past events, consider using EventBridge's Archive and Replay feature. You can archive events and replay them during testing. While this requires AWS infrastructure, it simplifies the process of fetching and replaying events.
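A minimal sketch of kicking off a replay with boto3's StartReplay API, assuming an archive named my-archive already exists and that the ARNs, account ID, and one-hour window are placeholders to substitute (all assumptions):

```python
# start_replay.py -- hypothetical sketch; ARNs, names and time window are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

events = boto3.client("events")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)  # replay the last hour of archived events

events.start_replay(
    ReplayName="test-replay-" + end.strftime("%Y%m%d%H%M%S"),
    EventSourceArn="arn:aws:events:us-east-1:123456789012:archive/my-archive",
    EventStartTime=start,
    EventEndTime=end,
    Destination={
        # Replay back onto the bus so existing rules and targets fire again.
        "Arn": "arn:aws:events:us-east-1:123456789012:event-bus/my-event-bus",
    },
)
```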
You can try a different approach: use EventBridge to send all events to a Kinesis data stream. The event pattern will be:

```json
{ "source": [""] }
```

For the target, choose a Kinesis data stream and choose PutRecord for the operation, then enable the rule.

You also need to update the IAM role to give it access to Kinesis:

```json
{
  "Effect": "Allow",
  "Action": "kinesis:PutRecord*",
  "Resource": "arn:aws:kinesis:{AWS::Region}:{AWS::Account-id}:stream/my-kinesis-stream"
}
```

Then, from your Docker container, you can consume the data from Kinesis.
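A minimal sketch of the consumer side, assuming a single-shard stream named my-kinesis-stream and that reading from TRIM_HORIZON (the oldest retained record) is acceptable for the tests (both assumptions):

```python
# consume_events.py -- hypothetical consumer running inside the container.
# Stream name, single shard and TRIM_HORIZON start are assumptions.
import json
import time

import boto3

kinesis = boto3.client("kinesis")
STREAM = "my-kinesis-stream"

shard_id = kinesis.describe_stream(StreamName=STREAM)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM,
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

while iterator:
    resp = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in resp["Records"]:
        event = json.loads(record["Data"])  # EventBridge delivers the event as JSON
        print(event.get("detail-type"), event.get("source"))
    iterator = resp.get("NextShardIterator")
    if not resp["Records"]:
        time.sleep(1)  # avoid hammering the API when the stream is idle
```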
Why is this better? If your Docker container is down, you will miss any events delivered while it was offline when consuming directly from EventBridge. With Kinesis, the records are retained, so you can still read the earlier events once the container is back up.
Interesting idea, making Kinesis a more permanent part of the infrastructure, but I don't like the permanent hourly charges, which would apply even when the tests are not running. I'm thinking a similar but cheaper idea would be to add a Lambda that permanently dumps a copy of the archive into S3.

Accepting this answer. It's not ideal, but it's close enough if I delete the Kinesis stream after the replay is complete. Also, the other answer is factually incorrect (it contains an AI hallucination of a non-existent CLI command).
There is no such thing as `aws events list-events`.
... search results for aws-guru are not relevant to event-bridge.