Lambda trigger amount limit


Is there any limit to the number of triggers we can set on a single Lambda function?

  • In my case, I want to be able to assign an indefinite number of S3 bucket (PUT) triggers to a single Lambda function. Every new project will have a new S3 bucket, and I want to add that new bucket's PUT event as a trigger to the existing Lambda function. Will there be a limit to the number of buckets I can add?
asked a year ago · 1,825 views
2 Answers
Accepted Answer

There is no restriction on the number of Lambda triggers as such.
However, each trigger you add enlarges the function's resource-based policy, and that policy is limited to 20 KB, as described in the following document.
So if a large number of triggers are set, the 20 KB limit may eventually be reached.
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html

Another possible approach is to route the S3 events through an SNS topic instead of triggering Lambda from S3 directly.
With an SNS topic in between, the function's resource-based policy only needs to allow access from SNS, so the limit is not exceeded no matter how many buckets are configured.
SNS topics can specify Lambda as a target.
https://docs.aws.amazon.com/sns/latest/dg/sns-event-destinations.html
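
As a minimal illustration of that fan-in pattern, a boto3 sketch might look like the following. All names, the region, and the account ID are placeholders rather than values from this thread, and error handling is omitted: a single SNS topic sits between the buckets and the function, so only one statement (allowing SNS) ever needs to live in the Lambda resource-based policy.

    import json
    import boto3

    # Placeholder values -- substitute your own.
    ACCOUNT_ID = "123456789012"
    REGION = "ap-northeast-1"
    FUNCTION_NAME = "my-handler"      # hypothetical existing Lambda function
    BUCKET = "project-42-bucket"      # hypothetical new project bucket

    sns = boto3.client("sns", region_name=REGION)
    s3 = boto3.client("s3", region_name=REGION)
    lam = boto3.client("lambda", region_name=REGION)

    function_arn = f"arn:aws:lambda:{REGION}:{ACCOUNT_ID}:function:{FUNCTION_NAME}"

    # 1. One topic for all buckets (create_topic is idempotent by name).
    topic_arn = sns.create_topic(Name="s3-put-events")["TopicArn"]

    # 2. Allow any bucket in this account to publish to the topic.
    sns.set_topic_attributes(
        TopicArn=topic_arn,
        AttributeName="Policy",
        AttributeValue=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "s3.amazonaws.com"},
                "Action": "SNS:Publish",
                "Resource": topic_arn,
                "Condition": {"StringEquals": {"AWS:SourceAccount": ACCOUNT_ID}},
            }],
        }),
    )

    # 3. Subscribe the function to the topic; this is the only trigger
    #    that ever touches the Lambda resource-based policy.
    sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)
    lam.add_permission(
        FunctionName=FUNCTION_NAME,
        StatementId="allow-sns-topic",
        Action="lambda:InvokeFunction",
        Principal="sns.amazonaws.com",
        SourceArn=topic_arn,
    )

    # 4. For every new project bucket, point its PUT notifications at the topic.
    s3.put_bucket_notification_configuration(
        Bucket=BUCKET,
        NotificationConfiguration={
            "TopicConfigurations": [{
                "TopicArn": topic_arn,
                "Events": ["s3:ObjectCreated:Put"],
            }]
        },
    )

Only step 4 needs to be repeated per project, and it touches the bucket, not the Lambda policy.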

EXPERT
answered a year ago
  • Is there any information on how many triggers I would need to add to reach the 20 KB limit? Also, do SNS topics allow concurrent invocation if I have multiple objects being added to multiple buckets at the same time?

  • Also, would it be possible to implement something like this: https://stackoverflow.com/questions/46603701/the-final-policy-size-20539-is-bigger-than-the-limit-20480

    So I could maybe allow all buckets with a certain prefix?

  • In my environment, three access permissions come to about 1.6 KB, so a simple calculation suggests roughly 36 triggers could be configured before hitting the 20 KB limit.

    Also, do SNS topics allow concurrent invocation if I have multiple objects being added to multiple buckets at the same time?

    Yes, they are invoked concurrently.

  • Also, would it be possible to implement something like this: https://stackoverflow.com/questions/46603701/the-final-policy-size-20539-is-bigger-than-the-limit-20480 So I could maybe allow all buckets with a certain prefix?

    I was a little curious, so I added the following "Function resource-based policy" and verified it. With this policy in place, Lambda was not triggered when a file was created in S3. In other words, you cannot use a prefix to allow all buckets; the ARN of the S3 bucket has to be set exactly (see the per-bucket sketch after this thread).

    {
      "Version": "2012-10-17",
      "Id": "default",
      "Statement": [
        {
          "Sid": "test",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "lambda:InvokeFunction",
          "Resource": "arn:aws:lambda:ap-northeast-1:<AWS Account ID>:function:test",
          "Condition": {
            "StringEquals": {
              "AWS:SourceAccount": "<AWS Account ID>"
            },
            "ArnLike": {
              "AWS:SourceArn": "arn:aws:s3:::kobayashi-*"
            }
          }
        }
      ]
    }
    
  • Could you technically then set:

    "ArnLike": { "AWS:SourceArn": "arn:aws:s3:::*" } and would that work for all S3 buckets, no exception?


There is no limit on the number of triggers on a Lambda function. However, you could also consider using EventBridge rules.

When you say S3 bucket trigger, do you mean a data event inside that bucket would trigger the Lambda, or the creation of the bucket itself? Either way, be aware of the implications for Lambda function concurrency: it is quite possible that, at some later point, your function's concurrency becomes a bottleneck.

Hope this helps.

Abhishek

EXPERT
answered a year ago
  • Sorry for not being clear. I meant S3 usage events (PUT, COPY, DELETE...), not the creation of the bucket. Could you point me to a reference to better understand the concurrency bottleneck problem?

  • Yes, absolutely. I'd suggest using the EventBridge rule option instead of the S3 trigger: adding an S3 event trigger to a Lambda function is a manual process, whereas EventBridge rules can be attached to Lambda via CloudFormation as well as the console. Imagine all these S3 buckets start sending events to your Lambda function; the function may not be able to process all those events once its concurrency limit is reached or if its invocations run for a while. See Scaling concurrency and Managing concurrency for more details.

    Putting the data events of all these buckets onto a single Lambda function may not be a good idea if you expect the number of such events to be high, unless you implement some fan-out mechanism, such as an extra hop between the S3 event and Lambda (SQS or SNS).

    Comment here if you have additional questions; I can show how to add an event rule to a Lambda function via CloudFormation. A rough SDK sketch of the same wiring follows below.
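
For illustration, here is a rough boto3 sketch of that EventBridge wiring, an SDK equivalent of the CloudFormation approach mentioned above; the function ARN, rule name, and bucket name are placeholders. The bucket forwards events to EventBridge, and a single rule, not the function's resource-based policy, decides which events invoke the function:

    import json
    import boto3

    # Placeholders -- substitute your own values.
    FUNCTION_ARN = "arn:aws:lambda:ap-northeast-1:123456789012:function:my-handler"
    BUCKET = "project-42-bucket"      # hypothetical project bucket

    s3 = boto3.client("s3")
    events = boto3.client("events")
    lam = boto3.client("lambda")

    # 1. Turn on EventBridge notifications for each new bucket.
    s3.put_bucket_notification_configuration(
        Bucket=BUCKET,
        NotificationConfiguration={"EventBridgeConfiguration": {}},
    )

    # 2. One rule matches object-created events from any bucket
    #    (narrow the pattern with "detail.bucket.name" if needed).
    rule_arn = events.put_rule(
        Name="s3-object-created",
        EventPattern=json.dumps({
            "source": ["aws.s3"],
            "detail-type": ["Object Created"],
        }),
    )["RuleArn"]

    events.put_targets(
        Rule="s3-object-created",
        Targets=[{"Id": "lambda-target", "Arn": FUNCTION_ARN}],
    )

    # 3. A single statement allowing EventBridge, regardless of bucket count.
    lam.add_permission(
        FunctionName=FUNCTION_ARN,
        StatementId="allow-eventbridge-rule",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule_arn,
    )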
