Hi,
I use S3 pretty extensively, but I'm not employed by Amazon, so take this for what it's worth.
That volume of HEAD requests per hour will be totally fine; it won't come anywhere close to stressing the service.
There isn't really a better way to check for the existence of an arbitrary key. However, if you have many keys to check at once and they share a path-like structure, you can do better by executing a list request on the common prefix and checking the contents, as in the sketch below.
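If it helps, here's a minimal sketch of both approaches using boto3. The bucket, key, and prefix names are placeholders, and this is just one reasonable way to wire it up, not the only one:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def key_exists(bucket: str, key: str) -> bool:
    """Check whether a single key exists with a HEAD request."""
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError as e:
        # HEAD responses have no body, so a missing key surfaces
        # as a bare "404" error code rather than "NoSuchKey".
        if e.response["Error"]["Code"] == "404":
            return False
        raise  # anything else (403, throttling, ...) should surface

def existing_keys_under_prefix(bucket: str, prefix: str) -> set[str]:
    """Collect every key under a common prefix in one paginated listing,
    so many existence checks become a single set lookup."""
    keys: set[str] = set()
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        keys.update(obj["Key"] for obj in page.get("Contents", []))
    return keys
```

With the prefix version, one LIST call per 1,000 keys replaces one HEAD per key, which is the win when your keys cluster under a common path.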
Keep in mind that S3 may not be immediately consistent. You should read the docs to fully understand the impact on your particular use case, but a couple of things stand out:
- if you do a GET/HEAD on a key before uploading, then PUT, then GET/HEAD again, that final response is eventually consistent, i.e. it is not guaranteed to report that the object exists
- LIST requests are eventually consistent
Given that, and given the number of objects you expect to check, I would recommend keeping it simple and just doing a HEAD, maybe with time-based retries to smooth over the eventual-consistency issue (a rough sketch follows). If you make the wrong call, you simply re-upload a duplicate; that's not data loss, just a little extra cost to you, so if it happens once in a while it shouldn't be a big deal.
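As a rough illustration of the time-based retry idea, reusing the `key_exists` helper from the sketch above. The attempt count and delay are arbitrary placeholders you'd tune for your workload:

```python
import time

def key_exists_with_retries(bucket: str, key: str,
                            attempts: int = 3, delay: float = 1.0) -> bool:
    """Retry the HEAD a few times with a fixed delay so a just-written
    object has a chance to become visible; treat the key as absent
    only after every attempt returns 404."""
    for attempt in range(attempts):
        if key_exists(bucket, key):
            return True
        if attempt < attempts - 1:
            time.sleep(delay)
    return False
```

The trade-off is latency on the not-found path: a true miss now costs `attempts * delay` seconds, which is fine for a batch job but worth tightening for anything interactive.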
Hope this helps!
Thank you. I agree that keeping it simple is the right way to start. Your suggestions are very much appreciated.