2 Answers
Hi there.
You are right about the storage costs on both buckets. There are also costs for data transfer, which depend on how far apart the buckets are (i.e., different AZ or Region), and for the replication PUT requests. To get the latest pricing information, please see the Amazon S3 Replication Pricing page. You can also see an example of calculating the replication cost in this blog post.
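For illustration, here is a minimal sketch of how those components add up. The prices are placeholders taken from the example in the blog post linked above; check the S3 pricing page for your Region and storage class before relying on them.

```python
# Rough S3 replication cost sketch (illustrative prices only; check the
# Amazon S3 pricing page for your Region and storage class).
STORAGE_PER_GB = 0.023   # S3 Standard, USD per GB-month (assumed)
TRANSFER_PER_GB = 0.02   # cross-Region data transfer, USD per GB (assumed)
PUT_PER_1000 = 0.005     # PUT request price, USD per 1,000 requests (assumed)

def replication_cost(data_gb, put_requests, cross_region=True):
    """Storage in both buckets + optional transfer + replication PUT requests."""
    storage_source = data_gb * STORAGE_PER_GB
    storage_destination = data_gb * STORAGE_PER_GB
    transfer = data_gb * TRANSFER_PER_GB if cross_region else 0.0
    puts = put_requests * PUT_PER_1000 / 1000
    return storage_source + storage_destination + transfer + puts

# The blog-post example: 100 GB, 100 PUT requests, cross-Region.
print(replication_cost(100, 100))                        # ~6.6005
# Same-Region replication removes the data transfer line item.
print(replication_cost(100, 100, cross_region=False))    # ~4.6005
```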
I hope this helps.
Hi. What about replicating only a very small amount of data and measuring the resulting costs? It would be a short run and should give reasonable accuracy.
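One way to keep such a test small, as a sketch: scope the replication rule to a test prefix so only a handful of objects replicate. The bucket names, role ARN, and prefix below are placeholders; both buckets need versioning enabled and the role must have replication permissions.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names: adjust the bucket, destination ARN, role, and prefix.
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/my-replication-role",
        "Rules": [
            {
                "ID": "small-test-run",
                "Priority": 1,
                "Status": "Enabled",
                # Only objects under this prefix replicate, keeping the test cheap.
                "Filter": {"Prefix": "replication-test/"},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
            }
        ],
    },
)
```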
I plan to have both buckets in the same region, so no transfer costs. The example from https://aws.amazon.com/blogs/storage/monitor-data-transfer-costs-related-to-amazon-s3-replication/ says:

Source S3 bucket (N. Virginia): 100 GB
Destination Region: US West (N. California)
Number of replication PUT requests at destination: 100

"S3 Standard storage cost for source: 100 GB * $0.023 = $2.30
S3 Standard storage cost for replicated data at destination: 100 GB * $0.023 = $2.30
Data transfer: 100 GB * $0.02 (per GB data transferred) = $2.00
Price per PUT request: $0.005 (per 1,000 requests) / 1,000 = $0.000005
Replication PUT requests: 100 * $0.000005 = $0.0005
Total: $2.30 + $2.30 + $2.00 + $0.0005 = $6.6005"

But why are there 100 PUT requests at the destination? Can it be another value?
It is not clear from the example why 100. That said, it is not a random number. It may be that the 100 GB of storage consisted of 100 files of 1 GB each, so replicating each file took one PUT operation, for a total of 100.
The example simply states the aspects to consider when reviewing the price for replication. You may need to review your bucket objects and applications to determine the number of files, their sizes, and the usage pattern in order to estimate the charges.
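As a starting point for that review, a small sketch like the following counts the objects and total size under a bucket or prefix (names are placeholders). If each object replicates with a single PUT, as assumed in the example above, the object count approximates the number of replication PUT requests for the existing data.

```python
import boto3

def summarize_bucket(bucket, prefix=""):
    """Count objects and total size under a bucket/prefix (placeholder names)."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    count, total_bytes = 0, 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            count += 1
            total_bytes += obj["Size"]
    return count, total_bytes

count, total_bytes = summarize_bucket("my-source-bucket")
print(f"{count} objects, {total_bytes / 1024**3:.2f} GiB")
# If each object replicates with one PUT, 'count' approximates the
# number of replication PUT requests for this data.
```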
We can guess that maybe there were 100 files of 1 GB each, but it is just guessing. What about big files? AWS has a multipart upload feature, and the AWS CLI uses an 8 MB default chunk size. If that default is used for large files when replication is configured in the S3 console, it could generate a huge number of PUT requests and huge costs.
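As a rough illustration of that concern, here is a sketch that assumes, as the comment does, one PUT-equivalent request per 8 MB part; the part size and request price are assumptions, and actual replication request counts may differ.

```python
import math

PART_SIZE_MB = 8          # assumed part size, matching the AWS CLI default
PUT_PER_1000 = 0.005      # assumed PUT request price, USD per 1,000 requests

def estimated_put_cost(file_sizes_mb):
    """Worst-case request count/cost if every part of every file costs one PUT."""
    parts = sum(math.ceil(size / PART_SIZE_MB) for size in file_sizes_mb)
    return parts, parts * PUT_PER_1000 / 1000

# Example: 1,000 files of 1 GB each, uploaded in 8 MB parts.
parts, cost = estimated_put_cost([1024] * 1000)
print(parts, cost)  # 128,000 parts, ~$0.64 in requests under these assumptions
# Compare with one PUT per whole object: 1,000 requests, ~$0.005.
```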
Correct. As mentioned, the point of the example was to show all the elements to consider in replication costs. I would recommend running a test with your typical file sizes, some small and some large, to see how it behaves and whether it meets your needs.