1 Answer
I would look into Lambda functions. Use two S3 buckets: one for the large files and one for the small files. The first function is triggered by object-created events on the first bucket; it reads the large file, splits it into multiple smaller files, and saves those in the second bucket. The second function is triggered by the second bucket and runs the analysis on each small file.
This assumes that a large file fits within a function's memory and storage limits, and that splitting a large file and analyzing a small file each complete within Lambda's 15-minute timeout.
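A minimal sketch of the splitting step inside the first function. The names and the chunk size are illustrative; in the real handler, the S3 event supplies the bucket and key, and boto3 `get_object`/`put_object` calls replace the in-memory I/O shown in the comments.

```python
def split_into_chunks(lines, lines_per_chunk):
    """Yield successive chunks of at most lines_per_chunk lines."""
    for start in range(0, len(lines), lines_per_chunk):
        yield lines[start:start + lines_per_chunk]

def handler_sketch(large_file_text, lines_per_chunk=100_000):
    # In the real Lambda:
    #   body = s3.get_object(Bucket=large_bucket, Key=key)["Body"].read().decode()
    lines = large_file_text.splitlines()
    parts = []
    for i, chunk in enumerate(split_into_chunks(lines, lines_per_chunk)):
        part = "\n".join(chunk)
        # In the real Lambda:
        #   s3.put_object(Bucket=small_bucket, Key=f"{key}/part-{i}", Body=part)
        parts.append(part)
    return parts
```

Each write to the second bucket fires the analysis function once per small file, so the work fans out automatically without any orchestration code.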