S3 performance problems: How can I achieve an S3 request rate above the limit of 3,500 PUT per second with multiple prefixes?


According to the official documentation, I tried to scale write operations by writing to multiple prefixes, but I still hit the same limits (3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second). The official documentation seems misleading or incorrect.

Q: How can I achieve an S3 request rate above the limit of 3,500 PUT requests per second? Or, how can I get the prefixes partitioned?

PS: I found a solution in the link, but how do I "gradually scale up" as described there?

Asked 2 years ago · 393 views

1 Answer

Throughput on S3 is based on the prefix of the object. The prefix is the initial part of the object key, so for an object with the key upload/data/incoming/file.txt, the prefix is upload/data/incoming/. Each prefix can receive up to the limits you list above, so to increase the overall throughput of the bucket, make sure you spread objects across unique prefixes.

One thing to remember is that whilst we often think of the components in the prefix as folders, they are not really folders, so upload is a different prefix from upload/data.
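As an illustration, here is a minimal sketch of fanning PUTs out over several distinct prefixes so that each prefix has its own request-rate budget. It assumes boto3, and the bucket name, prefix scheme, and worker count are hypothetical placeholders, not values from your setup:

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3")

BUCKET = "my-example-bucket"   # hypothetical bucket name
NUM_PREFIXES = 8               # spread writes over 8 distinct prefixes

def put_object(i: int, body: bytes) -> None:
    # Derive the prefix from the item index so writes fan out across
    # NUM_PREFIXES distinct prefixes rather than hitting a single one.
    prefix = f"upload/shard-{i % NUM_PREFIXES}/"
    key = f"{prefix}{uuid.uuid4()}.dat"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)

# Upload a batch of objects in parallel, distributed over the prefixes.
with ThreadPoolExecutor(max_workers=32) as pool:
    for i in range(1000):
        pool.submit(put_object, i, b"example payload")
```

Note that spreading keys across prefixes only raises the ceiling; S3 partitions the key space gradually in response to sustained load, so the higher aggregate rate is not available instantly. Ramping the request rate up over time and retrying any 503 Slow Down responses is what "gradually scale up" refers to.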

Answered 1 year ago
