If you create an EC2 instance with enough memory, it should be possible to copy the files onto the instance and compress them into a single file. However, if speed is the goal, then parallelizing the compression of sets of the files would probably be faster, and your ECS approach (perhaps with smaller chunks and more containers) would work well.
If this is an ongoing process, then perhaps a Lambda function could be used to compress all new files and transfer them directly?
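As a rough sketch of the parallel idea above (assuming Python with boto3; the bucket names, prefix layout, chunk size, and worker count are all placeholders), each worker compresses one set of keys into its own archive part, which is roughly what each ECS container or Lambda invocation would do:

```python
import io
import zipfile
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3")
SOURCE_BUCKET = "my-source-bucket"   # hypothetical source bucket
DEST_BUCKET = "my-archive-bucket"    # hypothetical destination bucket


def compress_chunk(chunk_id, keys):
    """Download one set of keys and upload them as a single zip part."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
        for key in keys:
            body = s3.get_object(Bucket=SOURCE_BUCKET, Key=key)["Body"].read()
            zf.writestr(key, body)
    buffer.seek(0)
    s3.upload_fileobj(buffer, DEST_BUCKET, f"archives/part-{chunk_id}.zip")


def compress_in_parallel(keys, chunk_size=500, workers=8):
    """Split the key list into chunks and compress the chunks concurrently."""
    chunks = [keys[i:i + chunk_size] for i in range(0, len(keys), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for i, chunk in enumerate(chunks):
            pool.submit(compress_chunk, i, chunk)
```

This produces one zip per chunk rather than a single archive, which is the trade-off of the parallel approach; the chunk size would need to be tuned to the memory available on each container.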
Hi,
For your question: Is there a way by which we can generate the zip of the whole data in one go? Currently there is no S3-provided functionality to do this. It must be handled by retrieving the objects from S3 individually and creating a ZIP archive yourself. If you want to do it within the AWS Cloud, you could, for example, use Lambda (if the work fits within the timeout), ECS, EC2, or AWS Batch.
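A minimal sketch of that pattern, assuming Python with boto3 and placeholder bucket names and prefix: list the objects, write each one into a single archive spooled to local disk (so only one object is held in memory at a time), then upload the result. The same logic could run on EC2, ECS, AWS Batch, or Lambda if the archive fits in local storage and the timeout.

```python
import zipfile

import boto3

s3 = boto3.client("s3")
SOURCE_BUCKET = "my-source-bucket"   # hypothetical source bucket
DEST_BUCKET = "my-archive-bucket"    # hypothetical destination bucket
PREFIX = "exports/2024/"             # hypothetical prefix to archive

# Stream every object under the prefix into one zip file on local disk.
with zipfile.ZipFile("/tmp/archive.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=SOURCE_BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=SOURCE_BUCKET, Key=obj["Key"])["Body"]
            zf.writestr(obj["Key"], body.read())

# Upload the finished archive back to S3 (boto3 uses multipart upload as needed).
s3.upload_file("/tmp/archive.zip", DEST_BUCKET, "archives/full-export.zip")
```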
For your question: How do we then stream this large zip file from our destination S3 to users' local? It is possible to download large files from Amazon S3 in a browser by using the AWS SDK. Please refer to these articles for background and examples: https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-browser-examples.html and https://docs.aws.amazon.com/AmazonS3/latest/userguide/example_s3_Scenario_UsingLargeFiles_section.html
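The linked articles show the AWS SDK for JavaScript running in the browser. As an alternative sketch (not what the articles describe, and with a placeholder bucket and key), the backend could instead hand the browser a presigned URL so the download goes directly from S3 without passing through your servers:

```python
import boto3

s3 = boto3.client("s3")

# Presigned URL the browser can use to download the archive, valid for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-archive-bucket", "Key": "archives/full-export.zip"},
    ExpiresIn=3600,
)
print(url)
```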
Thanks
Also, to fetch the zip to the client's local machine, I believe we can use the S3 Transfer Manager in the AWS SDK. But any idea how much data can be transferred in one go using the Transfer Manager?
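For reference, the boto3 counterpart to the SDK's transfer manager is the transfer configuration passed to `download_file`; a minimal sketch with placeholder names is below. Because the transfer manager splits large transfers into parts, the practical ceiling is the S3 object itself (objects can be up to 5 TB), subject to local disk space and bandwidth.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Multipart download: fetch the object in 64 MB parts, up to 10 in parallel.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)

s3.download_file(
    "my-archive-bucket",           # hypothetical bucket
    "archives/full-export.zip",    # hypothetical key
    "full-export.zip",             # local destination file
    Config=config,
)
```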