Processing millions of files in S3 is exactly what Step Functions Distributed Map was built for.
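If you take this route, the Distributed Map state can list the bucket itself through an S3 ItemReader and fan out to your processing Lambda, so no driver script is needed. A minimal boto3 sketch, where the bucket, function ARN, and execution role are placeholders you would replace with your own:

```python
import json
import boto3

# Placeholder names -- substitute your own bucket, function, and role.
BUCKET = "my-bucket"
PROCESSOR_LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:process-file"
STATES_ROLE_ARN = "arn:aws:iam::123456789012:role/stepfunctions-exec-role"

# Distributed Map definition: the ItemReader lists the bucket directly,
# and each listed object is passed as input to the processing Lambda.
definition = {
    "StartAt": "ProcessAllFiles",
    "States": {
        "ProcessAllFiles": {
            "Type": "Map",
            "ItemReader": {
                "Resource": "arn:aws:states:::s3:listObjectsV2",
                "Parameters": {"Bucket": BUCKET},
            },
            "ItemProcessor": {
                "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
                "StartAt": "ProcessFile",
                "States": {
                    "ProcessFile": {
                        "Type": "Task",
                        "Resource": PROCESSOR_LAMBDA_ARN,
                        "End": True,
                    }
                },
            },
            "MaxConcurrency": 100,  # cap on items processed in parallel
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="process-existing-files",
    definition=json.dumps(definition),
    roleArn=STATES_ROLE_ARN,
)
```

`MaxConcurrency` caps how many items are processed in parallel, which in turn bounds concurrent invocations of the processing Lambda.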
I would advise a two-part solution for this scenario.

First, for automating the processing of these files in the future, you can set up a trigger for your Lambda so that it is invoked every time a new file is uploaded to the S3 bucket. One of the simpler approaches would be to use EventBridge, as shown in this example: https://serverlessland.com/patterns/s3-eventbridge. Alternatively, depending on how frequently these files will be uploaded, a solution using SQS to trigger the Lambda may be preferred to avoid any issues with invocation limits (e.g. https://serverlessland.com/patterns/s3-sqs-lambda).

As for processing the existing files, there are again multiple possible solutions, and which is optimal will depend on the details of the processing:

- One solution, as you mention, is a separate "parent" Lambda that traverses the bucket and invokes the processing Lambda on each file while managing concurrency. A drawback to this approach is that the parent Lambda may run into the 15-minute execution time limit. Similarly, a script with the same logic could be run on an EC2 instance or your local machine as a one-time process, which sidesteps that limit (see the first sketch below).
- One final approach could be to first implement the future automation using the SQS solution, then write a separate Lambda that traverses the bucket and, rather than invoking the processing Lambda directly, places a message in the SQS queue for each existing object (second sketch below). This removes the need to manage Lambda concurrency from the script, though the Lambda execution timeout should still be considered.
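A minimal sketch of the traverse-and-invoke approach using boto3, assuming hypothetical bucket and function names. Asynchronous "Event" invocations queue the work rather than waiting for each call, so parallelism is best capped with reserved concurrency on the processing function:

```python
import json
import boto3

# Placeholder names -- substitute your bucket and processing function.
BUCKET = "my-bucket"
PROCESSOR_FUNCTION = "process-file"

s3 = boto3.client("s3")
lam = boto3.client("lambda")

def invoke_per_object():
    """Walk the bucket and asynchronously invoke the processor once per object."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            # InvocationType="Event" returns immediately; cap how many copies
            # run at once via reserved concurrency on the processing function.
            lam.invoke(
                FunctionName=PROCESSOR_FUNCTION,
                InvocationType="Event",
                Payload=json.dumps({"bucket": BUCKET, "key": obj["Key"]}),
            )

if __name__ == "__main__":
    invoke_per_object()
```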
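And a sketch of the backfill-to-SQS variant, again with placeholder names. Note that if your queue consumer parses standard S3 event notifications, you would wrap each message body in that shape instead of the simple one used here:

```python
import json
import boto3

# Placeholder names -- substitute your bucket and queue URL.
BUCKET = "my-bucket"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/file-processing-queue"

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def enqueue_existing_objects():
    """Paginate through the bucket and enqueue one message per object."""
    paginator = s3.get_paginator("list_objects_v2")
    batch = []
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            batch.append({
                "Id": str(len(batch)),  # unique within each batch
                "MessageBody": json.dumps({"bucket": BUCKET, "key": obj["Key"]}),
            })
            # SQS batch sends are capped at 10 messages per call.
            if len(batch) == 10:
                sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=batch)
                batch = []
    if batch:  # flush any remaining partial batch
        sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=batch)

if __name__ == "__main__":
    enqueue_existing_objects()
```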