1 Answer
Hi,
you are probably hitting the S3 limit of 5,500 GET/HEAD requests per second per prefix in your use case: each of your 2,000 Lambdas reads fast since everything runs inside the AWS cloud, so each Lambda can trigger multiple reads per second and together they reach the limit.
To overcome this limit, I would suggest placing some form of cache between S3 and your Lambdas (such as Amazon MemoryDB for Redis): the first Lambda that gets the trigger reads the file from S3 and writes it into the cache, and all the other Lambdas then read the file from the cache. Of course, you need some form of semaphore (via ad hoc Redis primitives) to make sure that only one Lambda reads from S3 while the others wait until the cache is populated before reading from it. A sketch of this pattern follows below.
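As an illustration, here is a minimal sketch of that cache-aside pattern, using a Redis `SET NX` key as the semaphore. It assumes a Python Lambda with boto3 and redis-py available; the bucket, object key, endpoint, and TTL values are placeholders, not values from your setup.

```python
import time

import boto3
import redis

# Placeholder names -- replace with your own bucket, key, and endpoint.
BUCKET = "my-bucket"
OBJECT_KEY = "data/input.json"
CACHE_KEY = "s3cache:data/input.json"
LOCK_KEY = "s3lock:data/input.json"

s3 = boto3.client("s3")
cache = redis.Redis(host="my-memorydb-endpoint", port=6379, ssl=True)


def get_object_cached() -> bytes:
    """Cache-aside read: serve from Redis, fall back to S3 under a lock."""
    body = cache.get(CACHE_KEY)
    if body is not None:
        return body

    # SET with nx=True acts as the semaphore: only one Lambda wins the
    # lock and performs the S3 GET; the lock expires after 30 s in case
    # that Lambda dies before populating the cache.
    if cache.set(LOCK_KEY, "1", nx=True, ex=30):
        body = s3.get_object(Bucket=BUCKET, Key=OBJECT_KEY)["Body"].read()
        cache.set(CACHE_KEY, body, ex=300)  # cache for 5 minutes
        return body

    # Another Lambda holds the lock: poll briefly until the cache fills.
    for _ in range(50):
        time.sleep(0.1)
        body = cache.get(CACHE_KEY)
        if body is not None:
            return body

    # Last resort: read S3 directly rather than fail.
    return s3.get_object(Bucket=BUCKET, Key=OBJECT_KEY)["Body"].read()
```

With this in place, 2,000 concurrent invocations result in roughly one S3 GET instead of 2,000, and the rest are served from the cache.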
Best,
Didier
Hi Didier, thanks for the reply.
But I am a bit confused by the "each Lambda triggers multiple reads per second and you reach the limit" part, since each Lambda gets a single object, so 2,000 Lambdas would only send 2,000 GET requests even if they were all sent at the same time. Could you please explain how the limit might be reached?
On the Redis part, I guess higher pricing would be involved. I am looking for solutions that can be done with S3 itself, as I would like to keep all components of my application serverless. I looked into EFS, but it also has higher pricing and it's not serverless.