
S3 mv and rename does not work for large files


As the S3 Management Console does not have an option to move files to folders, I try to use the

s3cmd -r mv s3://bucket/file1 s3://bucket/folder1/file1

to move a file to another folder. The command returns immediately (with exit code 1) and no output. This problem occurs only for large files (mine is 12.3 GB); the command works as expected for small files.

A similar problem occurs for large files in the S3 Management Console when I try to simply rename a file. It either fails or takes too much time (several minutes).

What is the problem with "large" files?

Although the original command does not display any error, I ran the same command with the debug (-d) option and saw the following message:
DEBUG: ErrorXML: Message: 'The specified copy source is larger than the maximum allowable size for a copy source: 5368709120'
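The 5368709120-byte figure in that error is exactly 5 GiB, the maximum source size for a single server-side copy request; larger objects have to be copied part by part with ranged multipart-copy requests. A minimal sketch of the part-range arithmetic (the 1 GiB part size and the rounded 12.3 GB object size are illustrative assumptions, not anything the tooling mandates):

```python
GIB = 1024 ** 3
COPY_OBJECT_LIMIT = 5 * GIB   # 5368709120 bytes -- matches the error message

def part_ranges(object_size, part_size=1 * GIB):
    """Byte ranges (first, last) for a ranged multipart copy, one per part."""
    ranges = []
    start = 0
    while start < object_size:
        end = min(start + part_size, object_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

size = 12_300_000_000                # roughly the 12.3 GB object in question
assert size > COPY_OBJECT_LIMIT     # so a single-request copy is rejected
ranges = part_ranges(size)
print(len(ranges), "parts; last range:", ranges[-1])
```

Tools that succeed on large objects (such as `aws s3 cp`) do so by issuing this kind of multipart copy automatically instead of a single copy request.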

asked 5 years ago · 236 views
6 Answers
Accepted Answer

Well, both AWS CLI commands cp and mv use the underlying COPY API (a server-side copy operation) when both source and destination are S3 buckets, so you don't have to worry about the cost of passing the data over the network.

And if you run aws s3 rm immediately or shortly after aws s3 cp, the additional storage cost would be small, unless you have a huge amount of data (tens or hundreds of TBs or more).

But this gets me to a better question:
I'm not sure whether you tried aws s3 mv, and if so, did it fail, and how?
(Note that, if I recall correctly, for debug logging you have to run aws --debug s3 mv ...)

answered 5 years ago

I guess S3 is not suitable for files > 5 GB.

answered 5 years ago

I'd suggest using the AWS CLI (I've used it a lot for tasks like this one)

and running these two commands:
aws s3 cp s3://bucket/file1 s3://bucket/folder1/file1 --metadata-directive COPY
aws s3 rm s3://bucket/file1

aws s3 cp can even copy recursively between 'folders' in a single bucket, or between buckets
(also, aws s3 sync is worth checking)
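As an aside, S3 'folders' are just key prefixes on flat object keys, so a recursive move amounts to copying every matching key to a rewritten key and deleting the original. A hypothetical sketch of the key rewriting (the key names are made up for illustration):

```python
def moved_key(key, src_prefix, dst_prefix):
    """Destination key for an object 'moved' from src_prefix to dst_prefix."""
    # S3 has no rename primitive: the object is copied to this new key,
    # then the old key is deleted.
    assert key.startswith(src_prefix)
    return dst_prefix + key[len(src_prefix):]

print(moved_key("file1", "", "folder1/"))                # folder1/file1
print(moved_key("logs/2017/a.gz", "logs/", "archive/"))  # archive/2017/a.gz
```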

for more examples, you can check:
aws s3 cp help
aws s3 rm help

UPD: regarding the large size, I'd also pay attention to the aws s3 cp option --expected-size:

--expected-size (string) This argument specifies the expected size of a stream in terms of bytes. Note that this argument is needed only when a stream is being uploaded to s3 and the size is larger than 5GB. Failure to include this argument under these conditions may result in a failed upload due to too many parts in upload.
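The "too many parts" failure mentioned there comes from the 10,000-part cap on a multipart upload: when the size of a stream isn't known up front, the tooling can't pick a part size that keeps the upload under the cap. A rough sketch of that arithmetic (the 5 MiB floor is the documented minimum part size; the 100 GB figure is an illustrative assumption):

```python
import math

MAX_PARTS = 10_000          # cap on parts in one S3 multipart upload
MIN_PART = 5 * 1024 ** 2    # 5 MiB documented minimum part size

def min_part_size(expected_size):
    """Smallest part size that keeps an upload of expected_size within MAX_PARTS."""
    return max(MIN_PART, math.ceil(expected_size / MAX_PARTS))

print(min_part_size(100 * 10**9))  # a 100 GB stream needs parts of at least 10 MB
```

With a typical default chunk size of 8 MiB, anything much over roughly 80 GB would exceed 10,000 parts, which is why --expected-size matters for large streamed uploads.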

hope this helps

Edited by: vit on Nov 23, 2017 4:17 PM

answered 5 years ago

Thank you, vit, for the suggestion. But I think cp and rm would be expensive operations for large files. Also, since mv has no --expected-size option, it is still of no use. :(

answered 5 years ago

I am using s3cmd. The latest version (2.0.1) does not provide an --expected-size option for the cp subcommand, so I was not able to test it from the CLI.

However, I did the copy (and delete) operation you suggested from the S3 Management Console. As expected, the "copy" operation took several minutes to complete. That time is similar to a simple file "rename", so I guess any file management operation creates a new object in the bucket (and then deletes the old one).

Thank you for your help.

answered 5 years ago

Glad to hear you solved the issue.

Just wanted to add that s3cmd is not part of the AWS CLI, so it looks like we were talking about different command-line tools.

And yes, pretty much everything you can do in the AWS web console, you can do using the AWS CLI (or even more), perhaps with some exceptions.

answered 5 years ago
