Upload last part to Glacier confusion

Hi everyone, I'm new to Glacier and AWS in general. I'm trying to save a large zipped (46 GB) MySQL dump to Glacier from the Linux command line via the aws command. From the documentation, I gathered that I need to split the file into equal parts and initiate a multipart upload, which I've done. After calculating the start and end bytes of every part, I managed to transmit all of the parts successfully except the last one. When I try to upload that part with this command:

aws glacier upload-multipart-part --upload-id $UPLOADID --body xbt --range 'bytes 48318382080-48702934664/*' --account-id - --vault-name foobar

It throws this error: An error occurred (InvalidParameterValueException) when calling the UploadMultipartPart operation: Content-Range: bytes 48318382080-48702934664/* is incompatible with Content-Length: 384552584
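
A side note on the numbers: the range in that header spans one byte more than the part itself, because Glacier's Content-Range offsets are inclusive, so the last byte of an N-byte part starting at offset S is S + N - 1. A quick sketch of the arithmetic, purely illustrative, using the values from the command and error above:

START=48318382080
END=48702934664
SIZE=384552584                                    # Content-Length reported in the error
echo $(( END - START + 1 ))                       # 384552585 -- one byte more than SIZE
echo "bytes ${START}-$(( START + SIZE - 1 ))/*"   # bytes 48318382080-48702934663/*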

Unfortunately, I was not able to find anything in the official documentation on how to tell AWS that this is the last part, which, logically, cannot be the same size as all of the others. Does anyone know how I can upload this final, smaller part via the aws command? I'm running Ubuntu Server 20.04.

Cheers, Marc

Asked 2 years ago · 306 views
1 Answer

There are two types of AWS "Glacier" services.

The first type is Amazon Glacier, which uses 'vaults' and 'archives'. It's a very slow and difficult service to use and is best avoided.

The second type is the "Glacier" storage classes available in Amazon S3. This is where you store data as normal in an S3 bucket, but you can change the Storage Class on objects to options like S3 Glacier Instant Retrieval and S3 Glacier Deep Archive. These are much easier to use since Amazon S3 has a nice interface and plenty of tools know how to use it. Plus, using Glacier storage classes in S3 is actually cheaper than storing directly into Glacier!
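
If you would rather have S3 move objects for you, a lifecycle rule can transition everything in a bucket to Deep Archive after a set number of days. A minimal sketch, where the bucket name, rule ID, and one-day timing are placeholders:

aws s3api put-bucket-lifecycle-configuration --bucket bucketname --lifecycle-configuration '{
  "Rules": [{
    "ID": "to-deep-archive",
    "Status": "Enabled",
    "Filter": {},
    "Transitions": [{"Days": 1, "StorageClass": "DEEP_ARCHIVE"}]
  }]
}'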

I can see from your sample command that you are using the Amazon Glacier service, so I recommend switching to S3 instead. When uploading an object, you can specify its storage class directly like this:

aws s3 cp foo.txt s3://bucketname/foo.txt --storage-class DEEP_ARCHIVE

The AWS CLI handles multipart uploads automatically for you.
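
If you want control over how the CLI splits large files, the multipart threshold and chunk size are configurable; the 64MB values below are just examples:

aws configure set default.s3.multipart_threshold 64MB
aws configure set default.s3.multipart_chunksize 64MB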

Answered 2 years ago
  • John Rotenstein, can you please elaborate on S3-to-Glacier being cheaper than going directly to Glacier? I thought the whole point of Glacier Deep Archive is that it costs less, and even spending one day in S3 is going to add up if you are uploading large archives every day.
