Reliably archiving many S3 files


I would like to add many (hundreds of thousands) of small S3 objects to a single archive elsewhere on S3. It doesn't necessarily need to be fast, but it does need to be reliable. I can stream data through an archiver and back to S3 in a single Lambda at a small scale, but since I need to process every single object, at full scale it's a lot to ask of a single Lambda.

Could I, for instance, use Step Functions to run archiving Lambdas against subsets of the files and perform a multipart upload into a single combined archive? Are there any better ways to achieve this sort of thing?
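For reference, here is a minimal sketch of what one of those per-subset archiving Lambdas could look like, assuming a coordinating state machine has already called create_multipart_upload and passes the upload ID, a part number, and a list of keys in the event. The event shape and field names are hypothetical, not an established API:

```python
import io
import tarfile
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Archive one subset of keys and upload it as one part of a shared
    multipart upload. Expected (hypothetical) event fields:
    source_bucket, dest_bucket, dest_key, upload_id, part_number, keys."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for key in event["keys"]:
            # Stream each source object straight into the tar archive.
            obj = s3.get_object(Bucket=event["source_bucket"], Key=key)
            info = tarfile.TarInfo(name=key)
            info.size = obj["ContentLength"]
            tar.addfile(info, obj["Body"])

    buf.seek(0)
    resp = s3.upload_part(
        Bucket=event["dest_bucket"],
        Key=event["dest_key"],
        PartNumber=event["part_number"],
        UploadId=event["upload_id"],
        Body=buf,
    )
    # The state machine collects these to call complete_multipart_upload.
    return {"PartNumber": event["part_number"], "ETag": resp["ETag"]}
```

Note that every multipart part except the last must be at least 5 MiB, so the key subsets would have to be sized accordingly when the Map state fans out. After all branches return, the state machine calls complete_multipart_upload with the collected PartNumber/ETag pairs. Because each part is a self-contained tar stream, the combined object can be read with `tar --ignore-zeros`.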

1 Answer

If you can leverage something like Firehose in your application, you can stream the files into a data lake on S3 and then archive them to Glacier with lifecycle policies.
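If that route fits, the Glacier transition is a one-time configuration on the destination bucket. A minimal sketch with boto3, where the bucket name, prefix, and 30-day timing are illustrative placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the archive/ prefix to Glacier after 30 days.
# Bucket name, prefix, and day count are placeholders, not recommendations.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-data-lake-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)
```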

AWS EXPERT
Rob_H
answered 2 years ago
