When the upload buffer fills up completely, the gateway can enter a state where it has to verify what is in the upload buffer and what has not yet been uploaded from the cache disk.
From the sounds of this, it would seem you are writing faster than the data can be uploaded. When the upload buffer was added, was the sizing formula used? https://docs.aws.amazon.com/storagegateway/latest/userguide/ManagingLocalStorage-common.html#CachedLocalDiskUploadBufferSizing-common
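As a rough sketch of the sizing calculation described in that doc (all numbers below are hypothetical examples, not this gateway's actual figures), the buffer needs to absorb the gap between how fast backups write and how fast the gateway can upload, for the length of the backup window:

```python
def upload_buffer_gb(app_mb_s: float,
                     network_mb_s: float,
                     compression_factor: float,
                     write_duration_s: float) -> float:
    """Estimate required upload buffer in GB:
    (application throughput - network throughput to AWS)
    * compression factor * duration of writes."""
    backlog_mb_s = max(app_mb_s - network_mb_s, 0.0)
    return backlog_mb_s * compression_factor * write_duration_s / 1000

# Hypothetical example: backups write at 120 MB/s, the uplink sustains
# 15 MB/s, data compresses roughly 2:1 (factor 0.5), 8-hour backup window.
print(f"{upload_buffer_gb(120, 15, 0.5, 8 * 3600):.0f} GB")  # ~1512 GB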
If you can PM me your gateway ID and region I can look into some metrics on my side as well to see if I can find any bottlenecks.
Thank you,
Brian C
We've had the maximum 2 TB upload buffer configured from the beginning.
I can't find the AWS document, but I remember seeing that each tape job is limited to 30 Mbit/s and the entire Storage Gateway is limited to 120 Mbit/s. So I don't understand why we can't get more than 30 Mbit/s when two tape jobs are running in parallel.
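A quick sanity check of the expected ceiling, assuming the per-job and per-gateway figures recalled above (AWS notes below that these were never hard service limits):

```python
# Figures are the poster's recollection, not documented limits.
per_job_mbit = 30
per_gateway_mbit = 120
parallel_jobs = 2

expected = min(parallel_jobs * per_job_mbit, per_gateway_mbit)
print(f"expected ceiling: {expected} Mbit/s")  # 60 Mbit/s, not the 30 observed
```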
I'll PM you our gateway info. Thanks.
We no longer have the network limits doc you mentioned, because those limits were never a hard limit of the service. The previously published figures reflected the maximum performance we saw in our own testing; since then, scaling improvements have been made that invalidated that information.
Hi,
Just to update: we've replaced our ISP, and now the entire bandwidth is consumed by the Storage Gateway. The previous ISP was most likely throttling uploads to AWS.
Thanks for the help.