1 Answer
As you can see here, one of the results a Lambda transformation function can return is `Dropped`, so your function can decide to drop records, and those records will not make it to the destination. You can't route conditionally, sending some records to the original destination and others to a different one. You can use dynamic partitioning, but that only works for an S3 destination.
That said, due to the stateless nature of Lambda, implementing throttling is challenging. Also, S3 can easily absorb any load you send it, so I am not sure why you would want to drop records.
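For illustration, here is a minimal sketch of a Firehose transformation handler that returns `Dropped` for some records. The filter condition (a hypothetical `level` field equal to `DEBUG` in the JSON payload) is just an example; substitute your own logic:

```python
import base64
import json

def handler(event, context):
    """Firehose data-transformation handler that drops some records.

    Each input record carries base64-encoded data; each output record
    must echo the recordId and set result to Ok, Dropped, or
    ProcessingFailed.
    """
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"])
        try:
            log = json.loads(payload)
        except ValueError:
            log = {}
        if log.get("level") == "DEBUG":  # example condition only
            # 'Dropped' tells Firehose to discard the record entirely.
            output.append({"recordId": record["recordId"],
                           "result": "Dropped"})
        else:
            output.append({"recordId": record["recordId"],
                           "result": "Ok",
                           # pass the payload through unchanged
                           "data": record["data"]})
    return {"records": output}
```

Dropped records still count as successfully processed, so they are not retried and never reach the destination.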
TYVM for the answer and info.
Downstream we are ingesting data into Splunk, and when a change is made we get a large volume of data that pushes us past our license, so I was looking for a way on the AWS side to handle that sudden increase in log volume.
Would there be a way for a Lambda function to keep a count of events per minute (or some other time range) and drop events once that threshold is exceeded?
As I said, Lambda is stateless, so you will need some external store to keep that count. DynamoDB is a good example of such an external store.
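A rough sketch of the counting logic, assuming a per-minute bucket and a fixed threshold. A plain dict stands in for the external store here so the logic is runnable locally; in production you would replace the read-and-increment with an atomic DynamoDB `UpdateItem` (`ADD` expression), as noted in the comments:

```python
import time

class RateLimiter:
    """Count events per minute bucket; refuse events past a threshold.

    Lambda is stateless, so the real counter must live in an external
    store such as DynamoDB. The dict used here is only a local stand-in.
    """

    def __init__(self, store, threshold):
        self.store = store          # maps minute-bucket key -> count
        self.threshold = threshold  # max events allowed per minute

    def allow(self, now=None):
        ts = now if now is not None else time.time()
        key = f"events#{int(ts // 60)}"
        # With DynamoDB this would be a single atomic call, e.g.
        # UpdateExpression="ADD cnt :one", ReturnValues="UPDATED_NEW".
        count = self.store.get(key, 0) + 1
        self.store[key] = count
        return count <= self.threshold
```

In the transformation function you would call `allow()` per record and return `Dropped` when it comes back false. Note the dict-based version is not safe across concurrent Lambda invocations; the atomic DynamoDB increment is what makes the real thing work.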
Thx for the follow up - greatly appreciated