How many requests are needed to replicate a bucket?


Hi, I need to store 20 TB of data in a bucket using the S3 Standard storage class. After that I want to replicate it to a second bucket, and I am trying to calculate the cost of the replication. There will of course be the cost of storing the data in the second bucket, but there is also a charge for requests. I wonder how many requests are needed to replicate the data? It will probably also depend on the chunk size of the multipart upload used during replication. Can this chunk size be configured when using the S3 console?

asked 10 months ago · 250 views
2 Answers

Hi there.

You are right about the storage costs on both buckets. There are also costs associated with data transfer, which depend on how far apart the buckets are (i.e., different AZs or Regions), as well as the replication PUT requests. To get the latest pricing information, please see the Amazon S3 Replication pricing page. You can also see an example of calculating the replication cost in this blog post.

I hope this helps.

AWS
EXPERT
answered 10 months ago
  • I plan to have both buckets in the same Region, so no transfer costs. The example from https://aws.amazon.com/blogs/storage/monitor-data-transfer-costs-related-to-amazon-s3-replication/ says:

    Source S3 bucket (N. Virginia): 100 GB
    Destination Region: US West (N. California)
    Number of replication PUT requests at destination: 100

    "S3 Standard storage cost for source: 100 GB * $0.023 = $2.30
    S3 Standard storage cost for replicated data at destination: 100 GB * $0.023 = $2.30
    Data transfer: 100 GB * $0.02 (per GB data transferred) = $2.00
    Price per PUT request: $0.005 (per 1,000 requests) / 1,000 = $0.000005
    Replication PUT requests: 100 * $0.000005 = $0.0005
    Total: $2.30 + $2.30 + $2.00 + $0.0005 = $6.6005"

    But why are there 100 PUT requests at the destination? Can it be another value?

  • It is not clear from the example why it is 100. That said, it is not a random number. It may be that the 100 GB of storage consisted of 100 files of 1 GB each, so replicating each file took one PUT operation, for a total of 100.

    The example simply states the aspects to consider when reviewing the price for replication. You may need to review your bucket objects and applications to try to determine the number of files, size, and use pattern to estimate the charges.

  • We can guess that maybe there were 100 files of 1 GB each, but it is just guessing. What about big files? AWS has a multipart upload feature, and the AWS CLI has an 8 MB default chunk size. If that default were used for large files when replication is configured in the S3 console, it could generate a huge number of PUT requests and huge costs (see the sketch after this thread).

  • Correct. As mentioned, the point of the example was to show all the elements to consider in replication costs. I would recommend running a test with representative file sizes, some small and some large, to see how it behaves and whether it meets your needs.
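A rough back-of-the-envelope sketch in Python of the arithmetic discussed in this thread. This is a minimal illustration, not an official calculator: the prices are the ones quoted in the blog example (verify them against the current S3 pricing page), and the assumption that each multipart part would cost one replication PUT is exactly the commenter's unconfirmed worry, not documented behavior.

```python
import math

# Assumed prices, taken from the blog example above
# (verify against the current Amazon S3 pricing page).
STORAGE_PER_GB_MONTH = 0.023   # S3 Standard, USD per GB-month
PUT_PRICE = 0.005 / 1000       # USD per replication PUT request

def puts_one_per_object(n_objects):
    """Reading of the blog example: one replication PUT per object."""
    return n_objects

def puts_multipart(n_objects, object_size_mb, part_size_mb):
    """Pessimistic reading: one PUT per multipart part."""
    return n_objects * math.ceil(object_size_mb / part_size_mb)

n_objects, object_size_mb = 100, 1024   # 100 objects x 1 GB = 100 GB

for label, puts in [
    ("one PUT per object", puts_one_per_object(n_objects)),
    ("8 MB multipart parts", puts_multipart(n_objects, object_size_mb, 8)),
]:
    print(f"{label}: {puts} PUTs -> ${puts * PUT_PRICE:.4f}")

total_gb = n_objects * object_size_mb / 1024
print(f"storage (source + destination): "
      f"${2 * total_gb * STORAGE_PER_GB_MONTH:.2f}/month")
```

Even under the pessimistic multipart reading (12,800 PUTs, about $0.064), the request charges stay small next to the roughly $4.60/month of storage in this 100 GB example, and both scale proportionally for 20 TB.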


Hi. What about replicating only a very small, representative amount of data first and reviewing the resulting charges? It would be a short run and should give reasonable accuracy.
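A minimal sketch of such a test using boto3. The bucket names and prefix are hypothetical, a replication rule from the source to the destination bucket is assumed to already be in place, and the actual request charges would then be read from the billing console or Cost Explorer.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names; replication from SOURCE to DEST must already be configured.
SOURCE, DEST, PREFIX = "my-source-bucket", "my-dest-bucket", "repl-test/"

# Upload a handful of small test objects to the source bucket.
for i in range(5):
    s3.put_object(Bucket=SOURCE, Key=f"{PREFIX}obj-{i}", Body=b"x" * 1024)

# Later, check per-object replication status on the source side
# (e.g. PENDING, COMPLETED, or FAILED for objects covered by a rule)...
for i in range(5):
    head = s3.head_object(Bucket=SOURCE, Key=f"{PREFIX}obj-{i}")
    print(f"obj-{i}: ReplicationStatus={head.get('ReplicationStatus')}")

# ...and confirm the copies arrived in the destination bucket.
resp = s3.list_objects_v2(Bucket=DEST, Prefix=PREFIX)
print("objects replicated:", resp.get("KeyCount", 0))
```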

EXPERT
answered 10 months ago
