Why can't I copy an object between two Amazon S3 buckets?


I want to copy an object from one Amazon Simple Storage Service (Amazon S3) bucket to another, but I can’t.

Resolution

Bucket policies and IAM policies

To copy an object between buckets, make sure that you configured the correct permissions. To copy an object between buckets in the same AWS account, use AWS Identity and Access Management (IAM) policies to set permissions. To copy an object between buckets in different accounts, you must set permissions on both the relevant IAM policies and bucket policies.

Note: For instructions on how to modify a bucket policy, see Adding a bucket policy by using the Amazon S3 console. For instructions on how to modify the permissions for an IAM user, see Changing permissions for an IAM user. For instructions on how to modify the permissions for an IAM role, see Modifying a role.

Confirm these required permissions:

  • At minimum, your IAM identity (user or role) must have permissions to the s3:ListBucket and s3:GetObject actions on the source bucket. If the buckets are in the same account, then set these permissions with your IAM identity's policies or the S3 bucket policy. If the buckets are in different accounts, then set these permissions with both the bucket policy and your IAM identity's policies.
  • At minimum, your IAM identity must have permissions to the s3:ListBucket and s3:PutObject actions on the destination bucket. If the buckets are in the same account, then set these permissions with your IAM identity's policies or the S3 bucket policy. If the buckets are in different accounts, then set these permissions with both the bucket policy and your IAM identity's policies.
  • Review the relevant bucket policies and IAM policies to confirm that there are no explicit deny statements that conflict with the permissions that you need. An explicit deny statement overrides an allow statement.
  • For specific operations, confirm that your IAM identity has permissions to all the necessary actions within the operation. For example, to run the command aws s3 cp, you need permission to s3:GetObject and s3:PutObject. To run the command aws s3 cp with the --recursive option, you need permission to s3:GetObject, s3:PutObject, and s3:ListBucket. To run the command aws s3 sync, you need permission to s3:GetObject, s3:PutObject, and s3:ListBucket.
    Note: If you use the AssumeRole API operation to access Amazon S3, verify that you properly configured the trust relationship.
  • For version-specific operations, confirm that your IAM identity has permissions to version-specific actions. For example, to copy a specific version of an object, you need the permission for s3:GetObjectVersion and s3:GetObject.
  • To copy objects that have object tags, your IAM identity must have s3:GetObjectTagging and s3:PutObjectTagging permissions. You must have s3:GetObjectTagging permission for the source object and s3:PutObjectTagging permission for objects in the destination bucket.
  • Review the relevant bucket policies and IAM policies to verify that the Resource element has the correct path. For bucket-level permissions, the Resource element must point to a bucket. For object-level permissions, the Resource element must point to an object or objects.
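For example, a minimal sketch of an IAM policy that grants the copy permissions described above might look like the following. The bucket names are placeholders for illustration only:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SourceBucketRead",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-SOURCE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-SOURCE-BUCKET/*"
      ]
    },
    {
      "Sid": "DestinationBucketWrite",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-DESTINATION-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-DESTINATION-BUCKET/*"
      ]
    }
  ]
}
```

Note that each statement pairs the bucket ARN (for s3:ListBucket) with the object ARN pattern (for s3:GetObject or s3:PutObject), because bucket-level and object-level actions require different Resource paths.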

For example, a policy statement for a bucket-level action such as s3:ListBucket must specify a bucket in the Resource element:

"Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET"

A policy statement for object-level actions like s3:GetObject or s3:PutObject must specify an object or objects in the Resource element:

"Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
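When a policy must cover both bucket-level and object-level actions, a single statement can list both ARN forms in the Resource element. For example:

```json
{
  "Effect": "Allow",
  "Action": ["s3:ListBucket", "s3:GetObject"],
  "Resource": [
    "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
    "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
  ]
}
```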

Object ownership

If the bucket policies have the correct permissions and you still can't copy an object between buckets, then check which account owns the object. The bucket policy applies only to objects that belong to the bucket owner. An object that belongs to a different account might have conflicting permissions on its access control list (ACL).

Note: The object ownership and ACL issue typically occurs when you copy AWS service logs across accounts. Examples of service logs include AWS CloudTrail logs and Elastic Load Balancing access logs.

To find the account that owns an object, follow these steps:

  1. Open the Amazon S3 console.
  2. Navigate to the object that you can't copy between buckets.
  3. Choose the object's Permissions tab.
  4. Review the values under Access for object owner and Access for other AWS accounts:
  • If the object is owned by your account, then the Canonical ID under Access for object owner contains (Your AWS account).
  • If the object is owned by another account and you can access the object, then these are true:
    The Canonical ID under Access for object owner contains (External account).
    The Canonical ID under Access for other AWS accounts contains (Your AWS account).
  • If the object is owned by another account and you can't access the object, then this is true:
    Canonical ID fields for both Access for object owner and Access for other AWS accounts are empty.
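As an alternative to the console, you can inspect an object's owner and ACL grants from the AWS CLI. This is a sketch with placeholder bucket and key names, and it requires credentials that can read the object's ACL:

```shell
# Show the owner and the ACL grants for a specific object.
# The "Owner" field in the response identifies the account that owns the object.
aws s3api get-object-acl \
    --bucket DOC-EXAMPLE-BUCKET \
    --key example-object.txt
```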

If the object that you can't copy between buckets is owned by another account, then the object owner can complete one of these options:

  • The object owner can grant the bucket owner full control of the object. After the bucket owner owns the object, the bucket policy applies to the object.
  • The object owner can keep ownership of the object, but they must change the ACL to the settings that you need for your use case.
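For the first option, the object owner might run a command similar to this sketch (bucket and key names are placeholders):

```shell
# Run as the object owner: grant the bucket owner full control of the object.
aws s3api put-object-acl \
    --bucket DOC-EXAMPLE-BUCKET \
    --key example-object.txt \
    --acl bucket-owner-full-control
```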

AWS KMS encryption

An object might be encrypted with an AWS Key Management Service (AWS KMS) key. In this case, confirm that your IAM identity has the correct permissions to the key. If your IAM identity and AWS KMS key belong to the same account, then confirm that your key policy grants the required AWS KMS permissions.

If your IAM identity and AWS KMS key belong to different accounts, then confirm that both the key and IAM policies grant the required permissions.

For example, if you copy objects between two buckets (and each bucket has its own key), then the IAM identity must specify these permissions:

  • kms:Decrypt permissions, referencing the first KMS key
  • kms:GenerateDataKey and kms:Decrypt permissions, referencing the second KMS key
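A sketch of the corresponding IAM policy statements, with placeholder account ID, Region, and key IDs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DecryptWithSourceKey",
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/source-key-id"
    },
    {
      "Sid": "EncryptWithDestinationKey",
      "Effect": "Allow",
      "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/destination-key-id"
    }
  ]
}
```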

For more information, see Using key policies in AWS KMS and Actions, resources, and condition keys for AWS Key Management Service.

Amazon S3 Glacier storage classes

You can't copy an object directly from the Amazon S3 Glacier storage classes. You must first restore the object before you can copy it. For instructions, see Restoring an archived object.
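A sketch of the restore workflow from the AWS CLI, with placeholder names and a 10-day restore window chosen for illustration:

```shell
# Request a temporary restore of the archived object for 10 days.
aws s3api restore-object \
    --bucket DOC-EXAMPLE-BUCKET \
    --key example-object.txt \
    --restore-request Days=10

# Check the restore status; the "Restore" field in the response
# shows whether the restore is still in progress or complete.
aws s3api head-object \
    --bucket DOC-EXAMPLE-BUCKET \
    --key example-object.txt
```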

Requester Pays activated on bucket

If the source or destination bucket has Requester Pays activated and you want to access the bucket from another account, then check your request. Make sure that your request includes the correct Requester Pays parameter:

  • For AWS Command Line Interface (AWS CLI) commands, include the --request-payer option.
    Note: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.
  • For GET, HEAD, and POST requests, include the x-amz-request-payer: requester header.
  • For signed URLs, include x-amz-request-payer=requester.
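For example, a copy from a Requester Pays bucket with the AWS CLI might look like this sketch (bucket and key names are placeholders):

```shell
# The --request-payer option acknowledges that the requesting account
# is billed for the request and the data transfer.
aws s3 cp s3://DOC-EXAMPLE-BUCKET/example-object.txt . --request-payer requester
```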

AWS Organizations service control policy

If you use AWS Organizations, then check the service control policies to verify that they allow access to Amazon S3.

For example, this policy results in a 403 Forbidden error when you try to access Amazon S3. This is because it explicitly denies access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
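Conversely, the service control policies that apply to your account must allow the Amazon S3 actions that you need. As a minimal sketch, an SCP that permits all Amazon S3 actions might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```

Keep in mind that SCPs set permission guardrails for the accounts in your organization; they don't grant access by themselves, so your IAM and bucket policies must still allow the actions.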

For more information on the features of AWS Organizations, see Activating all features in your organization.

Cross-Region request issues with VPC endpoints for Amazon S3

VPC endpoints for Amazon S3 don't support requests across different AWS Regions. For example, suppose that you have an Amazon Elastic Compute Cloud (Amazon EC2) instance in Region A. This instance has a virtual private cloud (VPC) endpoint configured in its associated route table. The EC2 instance can't copy an object from Region B to a bucket in Region A. Instead, you receive an error message similar to this example:

"An error occurred (AccessDenied) when calling the CopyObject operation: VPC endpoints do not support cross-region requests"

To troubleshoot this cross-Region request issue, try these methods:

  • Remove the VPC endpoint from the route table. If you remove the VPC endpoint, then the instance must be able to connect to the internet instead.
  • Run the copy command from another instance that doesn't use the VPC endpoint. Or, run the copy command from an instance that's in neither Region A nor Region B.
  • If you must use the VPC endpoint, first send a GET request to copy the object from the source bucket to the EC2 instance. Then, send a PUT request to copy the object from the EC2 instance to the destination bucket.
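The GET-then-PUT workaround can be sketched with the AWS CLI as follows. The bucket names, key, local path, and Regions are placeholders; substitute Region B for the source and Region A for the destination:

```shell
# Step 1: GET - download the object from the source bucket in Region B
# to local storage on the EC2 instance.
aws s3 cp s3://DOC-EXAMPLE-SOURCE-BUCKET/example-object.txt /tmp/example-object.txt \
    --region us-west-2

# Step 2: PUT - upload the object from the instance to the destination
# bucket in Region A. Each request now targets a single Region.
aws s3 cp /tmp/example-object.txt s3://DOC-EXAMPLE-DESTINATION-BUCKET/example-object.txt \
    --region us-east-1
```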

Related information

Copying objects

How do I troubleshoot 403 Access Denied errors from Amazon S3?

AWS OFFICIAL
Updated a year ago

1 Comment

FYI: I was dealing with the cross-Region S3 copy error that occurred in my Lambda function after I enabled an S3 gateway endpoint.

An error occurred (AccessDenied) when calling the CopyObject operation: VPC endpoints do not support cross-region requests

In my case, the Lambda function must run inside the VPC, and I didn't want to modify the code to use Lambda as a proxy for the copy operation, so I focused on the option of removing the VPC endpoint from the route table.

It eventually worked, but I had to remove the S3 gateway endpoint not only from the Lambda function's private subnet route table, but also from the NAT gateway's subnet route table. In my setup, traffic from Lambda to the public internet flows like this: Lambda (in a private subnet) -> NAT gateway (in a public subnet) -> internet gateway. When I removed the S3 gateway endpoint from only the Lambda private subnet route table, the copy still failed, because the route table of the public subnet where the NAT gateway resides still had a route to the S3 gateway endpoint.

Alex
replied 13 days ago