There are no restrictions on the number of Lambda triggers themselves.
However, each trigger you add appends a statement to the function's resource-based policy, and you may eventually reach that policy's limit.
The "Function resource-based policy" can only be up to 20 KB in size, as described in the following document.
This means that if a large number of triggers are set, the 20 KB limit may be reached.
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
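If you want to check how close a function already is to this limit, one option (a minimal boto3 sketch; the function name is a placeholder) is to fetch the policy and measure its size:

```python
import boto3

lambda_client = boto3.client("lambda")

# Fetch the function's resource-based policy as a JSON string
# and report its size against the 20 KB cap.
response = lambda_client.get_policy(FunctionName="my-function")
policy_bytes = len(response["Policy"].encode("utf-8"))
print(f"Resource-based policy size: {policy_bytes / 1024:.1f} KB of 20 KB")
```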
So another possible approach is to use an SNS topic instead of configuring the S3 event triggers directly on Lambda.
With an SNS topic in between, the function's resource-based policy only needs to allow access from SNS, so even if multiple S3 buckets are configured, the limit will not be exceeded.
SNS topics can specify Lambda as a target.
https://docs.aws.amazon.com/sns/latest/dg/sns-event-destinations.html
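For illustration, here is a minimal boto3 sketch of that wiring; all names and ARNs are placeholders, and it assumes the SNS topic policy already allows the buckets to publish:

```python
import boto3

sns = boto3.client("sns")
s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

topic_arn = "arn:aws:sns:us-east-1:123456789012:s3-events"
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-function"

# One-time: allow SNS to invoke the function. This is the single
# resource-based policy statement that covers every bucket.
lambda_client.add_permission(
    FunctionName="my-function",
    StatementId="AllowSNSInvoke",
    Action="lambda:InvokeFunction",
    Principal="sns.amazonaws.com",
    SourceArn=topic_arn,
)

# Subscribe the function to the topic.
sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)

# Per bucket: publish object events to the topic instead of to Lambda.
for bucket in ["bucket-1", "bucket-2"]:
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration={
            "TopicConfigurations": [
                {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*"]}
            ]
        },
    )
```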
There is no limit on the number of triggers for a Lambda function. However, you could consider using EventBridge rules as well.
When you say S3 bucket trigger, do you mean that data events inside that bucket would trigger the Lambda function, or just the creation of the bucket itself? I hope you understand the implications this has for Lambda function concurrency, as it is quite possible that at a later point your function's concurrency becomes a bottleneck.
Hope this helps.
Abhishek
Sorry for not being clear. I meant S3 usage (PUT, COPY, DELETE...), not the creation of the bucket. Could you point me to a reference to better understand the concurrency bottleneck problem?
Yes, absolutely. I'd suggest using the EventBridge rule option instead of the S3 trigger, as adding an S3 event trigger to a Lambda function is a manual process, whereas EventBridge rules can be added to Lambda via either CloudFormation or the console. Imagine all these S3 buckets start sending events to your Lambda function; the function may not be able to process all those events if its concurrency limit is reached or if invocations run for a while. See Scaling concurrency and Managing concurrency for more details.
Sending the data events of all these buckets to a single Lambda function may not be a good idea if you expect the number of such events to be high, unless you implement some fan-out mechanism, such as adding an extra hop between the S3 event and Lambda with SQS or SNS (a sketch follows below).
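For example, a minimal boto3 sketch of the SQS hop; the queue and function names are placeholders, and the queue policy must already allow S3 to send messages:

```python
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

queue_arn = "arn:aws:sqs:us-east-1:123456789012:s3-events"

# The bucket sends object events to the queue rather than
# invoking Lambda directly.
s3.put_bucket_notification_configuration(
    Bucket="bucket-1",
    NotificationConfiguration={
        "QueueConfigurations": [
            {"QueueArn": queue_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)

# Lambda polls the queue; batching smooths out bursts so the
# function's concurrency is not consumed by every upload individually.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="my-function",
    BatchSize=10,
)
```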
Comment here if you have additional questions on this; I can share how to add an event rule to a Lambda function via CloudFormation.
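In the meantime, here is a rough boto3 sketch of the EventBridge wiring (a CloudFormation template would declare the equivalent resources; all names are placeholders):

```python
import json
import boto3

s3 = boto3.client("s3")
events = boto3.client("events")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-function"

# Enable EventBridge delivery on each bucket; this adds nothing
# to the Lambda function's resource-based policy.
s3.put_bucket_notification_configuration(
    Bucket="bucket-1",
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)

# One rule matching object-created events from any enabled bucket.
rule = events.put_rule(
    Name="s3-object-created",
    EventPattern=json.dumps(
        {"source": ["aws.s3"], "detail-type": ["Object Created"]}
    ),
)

events.put_targets(
    Rule="s3-object-created",
    Targets=[{"Id": "lambda-target", "Arn": function_arn}],
)

# A single resource-based policy statement allows the rule to invoke
# the function, no matter how many buckets feed the rule.
lambda_client.add_permission(
    FunctionName="my-function",
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
```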
Is there any information on how many triggers I would need to add to reach the 20 KB limit? Also, do SNS topics allow concurrent invocation if I have multiple objects being added to multiple buckets at the same time?
Also, would it be possible to implement something like this: https://stackoverflow.com/questions/46603701/the-final-policy-size-20539-is-bigger-than-the-limit-20480
So I could maybe allow all buckets with a certain prefix?
In my environment, I have three access permission statements, which together come to about 1.6 KB, i.e. roughly 0.53 KB per statement. A simple calculation therefore suggests that about 36 triggers could be configured.
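As a back-of-the-envelope check (assuming every trigger statement is about the same size as the ones in my policy):

```python
# Rough estimate: 3 statements in my policy take ~1.6 KB in total.
per_statement_kb = 1.6 / 3   # ~0.53 KB per trigger statement
limit_kb = 20
print(int(limit_kb / per_statement_kb))  # -> 37, so roughly 36 triggers with headroom
```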
Yes, they are invoked at the same time.
I was a little curious, so I added a "Function resource-based policy" with a wildcard bucket ARN and verified it. With that policy, Lambda was not triggered when a file was created in S3. In other words, you cannot set a prefix to allow all buckets; the ARN of the S3 bucket had to be set exactly.
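For reference, here is a minimal boto3 sketch of the kind of per-bucket statement that did work; the names and account ID are placeholders, and each such statement consumes part of the 20 KB budget:

```python
import boto3

lambda_client = boto3.client("lambda")

# Grant S3 permission to invoke the function for one specific bucket.
lambda_client.add_permission(
    FunctionName="my-function",
    StatementId="AllowBucket1",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::bucket-1",  # must match the bucket ARN exactly
    SourceAccount="123456789012",       # guards against cross-account spoofing
)
```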
Could you technically then set "ArnLike": { "AWS:SourceArn": "arn:aws:s3:::*" }, and would that work for all S3 buckets, no exceptions?