2 Answers
I don't know if this is the only solution, but I ended up setting the throughput target to a value closer to my actual network bandwidth (home internet) instead of the 20 Gbps used in the AWS example:
```java
s3AsyncClient = S3AsyncClient.crtBuilder()
        .credentialsProvider(DefaultCredentialsProvider.create())
        .region(Region.US_EAST_1)
        .targetThroughputInGbps(0.2) // <== lowered from the example's 20 Gbps
        .minimumPartSizeInBytes(8 * MB)
        .build();
```
The upload completed in 43 minutes for a 3 GB file.
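As a rough sanity check (my own arithmetic, not part of the original answer): 3 GB in 43 minutes works out to roughly 9 Mbps, well under the 0.2 Gbps target, so the target behaves as a ceiling rather than a guarantee.

```java
public class ThroughputCheck {
    /** Achieved throughput in Gbps for a transfer of gigabytes GB in the given minutes. */
    static double achievedGbps(double gigabytes, double minutes) {
        return (gigabytes * 8) / (minutes * 60); // bytes -> bits, minutes -> seconds
    }

    public static void main(String[] args) {
        // 3 GB uploaded in 43 minutes, the figures reported above
        System.out.printf("~%.4f Gbps (~%.1f Mbps)%n",
                achievedGbps(3, 43), achievedGbps(3, 43) * 1000);
    }
}
```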
Did you see this documentation? Hope it helps.
answered 3 years ago

Unfortunately, I think that link is for the Java SDK 1.x. My original upload program used the 1.x version, but the upload would get stuck at 99%. Based on https://stackoverflow.com/questions/65207720/multipart-upload-using-aws-java-sdk-hangs-at-99, I looked for the Java SDK 2.x flavor of TransferManager; an example is at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/transfer-manager.html. The documentation for the .targetThroughputInGbps() method says the default is 10, and with that default the upload hung. The documentation also says the target should be set to the maximum network bandwidth of the host, so I set it to 0.2. It has managed to run for more than 8 minutes so far, though it is progressing more slowly. I will continue to monitor.
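For reference, here is a minimal sketch of the SDK 2.x S3TransferManager upload described above. The bucket name, key, and file path are placeholders, and it assumes the `software.amazon.awssdk:s3-transfer-manager` dependency (with the CRT artifact) and valid AWS credentials are available:

```java
import java.nio.file.Paths;

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedFileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;

public class UploadExample {
    private static final long MB = 1024 * 1024;

    public static void main(String[] args) {
        // CRT-based client with the throughput target capped near the host's bandwidth
        S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder()
                .credentialsProvider(DefaultCredentialsProvider.create())
                .region(Region.US_EAST_1)
                .targetThroughputInGbps(0.2)
                .minimumPartSizeInBytes(8 * MB)
                .build();

        try (S3TransferManager transferManager = S3TransferManager.builder()
                .s3Client(s3AsyncClient)
                .build()) {
            UploadFileRequest request = UploadFileRequest.builder()
                    .putObjectRequest(b -> b.bucket("my-bucket").key("my-key")) // placeholders
                    .source(Paths.get("/path/to/3gb-file"))                     // placeholder
                    .build();
            // Blocks until the multipart upload completes
            CompletedFileUpload result =
                    transferManager.uploadFile(request).completionFuture().join();
            System.out.println("ETag: " + result.response().eTag());
        }
    }
}
```

The try-with-resources block closes the transfer manager (and its worker threads) once the upload's completion future resolves.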