How can I optimize performance when uploading large files to S3 across multiple AWS Regions?

We need the ability to upload large files—between 500 MB and 1 GB each—to Amazon S3 in under a minute. The files are already compressed and come from our application's end users, who are located across the continental United States.

We've already looked at using multiple AWS Regions so that each upload goes to the Region closest to the end user, but that didn't improve performance enough. We also tried activating S3 Transfer Acceleration, but in several locations it was slower than uploading over the public internet directly.

What's the best way to optimize performance when uploading large files to S3 across multiple Regions?

1 Answer

Accepted Answer

To upload a 1 GB file to S3 in less than 1 minute, your application's end users need to sustain an average upload rate of about 17 MB per second. Assuming that criterion can be met for your use case, you'll need a transfer method that reaches that rate.
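As a quick sanity check (a worked example using decimal units, where 1 GB = 1000 MB), the required rate can be derived directly:

```python
import math

# Target: a 1 GB (decimal, 1000 MB) file uploaded in 60 seconds.
file_size_mb = 1000
target_seconds = 60

# Average rate needed, rounded up to the next whole MB/s.
required_rate = math.ceil(file_size_mb / target_seconds)
print(required_rate)  # 17 MB per second
```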

When you upload large files to S3, it's a best practice to use multipart uploads. A multipart upload breaks a large file into smaller parts that upload in parallel, which increases throughput. For more information, see How can I optimize performance when I upload large files to Amazon S3?
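As an illustration only (a minimal sketch assuming the boto3 SDK; the bucket, key, and tuning values below are placeholders, not part of the answer), a multipart upload with concurrent parts might look like:

```python
def upload_multipart(path, bucket, key, part_size_mb=30, max_threads=34):
    """Upload one file to S3 as a multipart upload with concurrent parts.

    Hypothetical helper: the part size and thread count are illustrative
    defaults taken from the sizing example in this answer.
    """
    import boto3  # assumed available: pip install boto3
    from boto3.s3.transfer import TransferConfig

    config = TransferConfig(
        multipart_threshold=8 * 1024 * 1024,          # use multipart above 8 MB
        multipart_chunksize=part_size_mb * 1024 * 1024,  # size of each part
        max_concurrency=max_threads,                  # parts uploaded in parallel
        use_threads=True,
    )
    s3 = boto3.client("s3")
    s3.upload_file(path, bucket, key, Config=config)
```

The managed `upload_file` transfer handles splitting the file, retrying failed parts, and completing the multipart upload for you.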

Note: A multipart upload can use at most 10,000 distinct parts per file. Be sure that the chunk size you set balances the part size against the number of parts. For example, if the file size is 1 GB and your average upload rate is 500 KB per second for each TCP connection, then you'd want at least 34 concurrent parts with a part size of 30 MB or smaller.

AWS
Marco_L
Answered 3 years ago
