Resolution
To troubleshoot job failures, review the job's details or request a completion report. After you determine the cause and resolve the issue, resubmit the S3 Batch Operations job.
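For example, to review a job's status and failure reasons with the AWS CLI, you can run a describe-job command similar to the following. The account ID, job ID, and Region are placeholders:

aws s3control describe-job \
    --account-id 111122223333 \
    --job-id 00e123a4-c0d8-41f4-a0eb-EXAMPLE11112 \
    --region us-west-2

The Job section of the output includes a FailureReasons list with the failure code and failure reason for the job.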
Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshooting errors for the AWS CLI. Also, make sure that you're using the most recent AWS CLI version.
The manifest .csv or .json file format is incorrect
Amazon S3 Batch Operations supports .csv and .json inventory manifest files. If you don't correctly format the manifest file, then you must create a new batch job in Amazon S3 and specify the correct format.
When you're specifying the manifest, take the following actions:
- For the Amazon S3 Inventory report, use a .csv-formatted report and specify the manifest.json file that's associated with the inventory report.
- For .csv files, include the bucket name and object key in each row in the manifest file. You can also include the object version. If you include version IDs in the manifest, then you must specify IDs for all objects.
Note: You must URL-encode object keys.
- If the objects are in a versioned bucket, then you must specify the version IDs for the objects, as shown in the example after this list. Otherwise, the batch job fails, or Amazon S3 might apply the operation to the wrong version of an object.
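For example, a .csv manifest for a versioned bucket might look similar to the following. The bucket name, object keys, and version IDs are placeholders, and the space in the second object key is URL-encoded:

my-batch-bucket,object001.txt,EXAMPLEVERSIONID1111111111
my-batch-bucket,images/photo%20002.jpg,EXAMPLEVERSIONID2222222222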
The manifest file specifies multiple bucket names, or contains multiple header rows
If the objects that the manifest file lists aren't all in the same bucket, then you receive the following error message:
"Reasons for failure: Cannot have more than 1 bucket per Job. JOB_ID"
Make sure that your manifest file specifies only one bucket name and doesn't contain header rows.
Example of a manifest file that incorrectly contains a header row:
bucket,key
my-batch-bucket,object001.txt
my-batch-bucket,object002.txt
my-batch-bucket,object003.txt
my-batch-bucket,object004.txt
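A corrected version of the same manifest removes the header row and keeps a single bucket name:

my-batch-bucket,object001.txt
my-batch-bucket,object002.txt
my-batch-bucket,object003.txt
my-batch-bucket,object004.txt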
The IAM role doesn't have permission to read the manifest file
If the AWS Identity and Access Management (IAM) role doesn't have permission to read the manifest, then you receive one of the following errors:
"Reason for failure Reading the manifest is forbidden: AccessDenied" from the AWS CLI.
-or-
"Warning: Unable to get the manifest object's ETag. Specify a different object to continue" from the Amazon S3 console.
The IAM role that creates the S3 Batch Operations job must have s3:GetObject permission for the manifest file. Check the manifest object's metadata for ownership mismatches with the bucket's S3 Object Ownership settings. Also, check whether the manifest file is encrypted with an unsupported AWS Key Management Service (AWS KMS) key.
Note: S3 Batch Operations supports .csv inventory reports that are AWS KMS encrypted. S3 Batch Operations doesn't support .csv manifest files that are AWS KMS encrypted. For more information, see Configuring inventory by using the S3 console.
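To check whether a .csv manifest file is AWS KMS encrypted, you can run a head-object command similar to the following. The bucket name and object key are placeholders:

aws s3api head-object \
    --bucket amzn-s3-demo-manifest-bucket \
    --key manifest.csv

If the output contains "ServerSideEncryption": "aws:kms", then the manifest file is AWS KMS encrypted, and S3 Batch Operations can't read it as a .csv manifest.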
The batch job is in a different Region
S3 Batch Operations copy jobs must be in the same AWS Region as the destination bucket. For example, if the destination bucket is in the us-west-2 Region, then select us-west-2 as the Region when you create the batch job.
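For example, a copy job for a destination bucket in us-west-2 might be created with a create-job command similar to the following. The Region is set to us-west-2, and the account ID, bucket names, ETag, and role ARN are placeholders:

aws s3control create-job \
    --region us-west-2 \
    --account-id 111122223333 \
    --operation '{"S3PutObjectCopy":{"TargetResource":"arn:aws:s3:::amzn-s3-demo-destination-bucket"}}' \
    --manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::amzn-s3-demo-manifest-bucket/manifest.csv","ETag":"EXAMPLEETAG"}}' \
    --report '{"Bucket":"arn:aws:s3:::amzn-s3-demo-report-bucket","Format":"Report_CSV_20180820","Enabled":true,"Prefix":"batch-reports","ReportScope":"AllTasks"}' \
    --priority 10 \
    --role-arn arn:aws:iam::111122223333:role/batch-operations-role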
The target bucket for your S3 Inventory report is missing
There must be an existing target bucket for the manifest or completion report that S3 Batch Operations generates. The target bucket's policy must also allow the s3:PutObject action. If the job delivers the report to another AWS account, then confirm that the target bucket's policy allows the IAM role to perform the s3:PutObject action.
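For the cross-account case, the target bucket's policy might include a statement similar to the following. The role ARN and bucket name are placeholders:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/batch-operations-role"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-report-bucket/*"
        }
    ]
}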
The IAM role's trust policy is missing
The trust policy for an IAM role defines the required conditions for other principals to assume the role. To allow the S3 Batch Operations principal to assume the IAM role, attach a trust policy to the role.
Example policy:
{ "Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "batchoperations.s3.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Note: Make sure that you specify an IAM role and not an IAM user.
The IAM role is missing the permissions to create a batch job
To create an S3 Batch Operations job, grant the IAM role the s3:CreateJob permission. The entity that creates the job must also have iam:PassRole permission to pass the IAM role that you specify for the batch job. For more information, see IAM JSON policy elements: Resource.
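For example, a minimal identity-based policy for the entity that creates the job might look similar to the following. The account ID and role name are placeholders:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:CreateJob",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/batch-operations-role"
        }
    ]
}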
The IAM role is missing the permissions to perform batch job operations
Make sure that you grant the IAM role the correct permissions to perform specific operations in a batch job.
Example IAM policy with required permissions for the copy operation:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectTagging"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::{{DestinationBucket}}/*"
},
{
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectTagging",
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::{{SourceBucket}}",
"arn:aws:s3:::{{SourceBucket}}/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::{{ManifestBucket}}/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::{{ReportBucket}}/*"
]
}
]
}
The Organizations SCP denies access
If you use AWS Organizations, then confirm that no service control policy (SCP) contains a Deny statement that blocks access to Amazon S3. Otherwise, you might receive an Access Denied error when you create a batch job.
Example SCP that explicitly denies all S3 actions:
{
"Version": "2012-10-17",
"Statement": [
{
"Principal": "*",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "*"
}
]
}
To apply a restrictive policy, add the IAM role that S3 Batch Operations assumes to the allow list.
Example restrictive policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Principal": "*",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "*",
"Condition": {
"StringNotLike": {
"aws:userId": [
"AROAEXAMPLEID:*",
"AIDAEXAMPLEID",
"111111111111"
]
}
}
}
]
}
The version ID for an object is missing in the manifest
If a Batch Operations job finds an object in the manifest that has an empty version ID field, then you receive the following error:
"Error: BUCKET_NAME,prefix/file_name,failed,400,InvalidRequest,Task failed due to missing VersionId"
If the manifest format uses version IDs during the operation, then the version ID field can't be an empty string. Instead, the version ID field must be a null string. To resolve this issue, convert the empty version IDs to null strings.
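For example, for an object that doesn't have a version ID, the manifest row must use the string null instead of an empty field. The bucket and object names are placeholders:

Incorrect: my-batch-bucket,object001.txt,
Correct: my-batch-bucket,object001.txt,null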
Note: Only the operation on the affected object fails, not the entire batch job.
Amazon S3 doesn't deliver the job report because you use Object Lock retention
When you configure S3 Object Lock retention on a destination bucket in either governance mode or compliance mode, you receive the following error:
"Error: Reasons for failure. The job report could not be written to your bucket. Check your permissions."
Amazon S3 doesn't support Object Lock for destination buckets with retention mode configurations. When you configure a retention mode, the bucket is write-once-read-many (WORM) protected. To resolve this issue, choose a destination bucket that doesn't have Object Lock retention.
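To check whether a bucket has an Object Lock retention mode configured, you can run a command similar to the following. The bucket name is a placeholder:

aws s3api get-object-lock-configuration \
    --bucket amzn-s3-demo-destination-bucket

If the output includes a DefaultRetention rule with GOVERNANCE or COMPLIANCE mode, then choose a different bucket for the job report.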
Note: Only the completion report fails, not the job. The job completes successfully, and all objects are processed.
The ETag versions don't match
If the ETag value that you specify in the Amazon S3 console or AWS CLI doesn't match the ETag of the manifest object in the bucket, then you receive the following error:
"Error reading the manifest. Caused by: ETag mismatch. Expected ETag: 69f52a4e9f797e987155d9c8f5880897"
When you create the Batch Operations job, you specify the manifest object key, ETag, and optional version ID. Make sure that the ETag value matches the ETag of the manifest object's latest version in the S3 bucket. On the Amazon S3 console's Batch Operations tab, check the Manifest object ETag in the manifest file properties. In the AWS CLI, check the ETag value that the manifest specification passes.
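To get the current ETag of the manifest object with the AWS CLI, you can run a command similar to the following and then use that value in the job's manifest specification. The bucket name and object key are placeholders:

aws s3api head-object \
    --bucket amzn-s3-demo-manifest-bucket \
    --key manifest.csv \
    --query ETag \
    --output text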
HTTP 500 and 503 errors
When you receive a 500 Internal Error status code, Amazon S3 can't process the request. You receive a 503 Slow Down status code when the request rate to your S3 bucket is high. It's a best practice to build retry logic into applications that make requests to Amazon S3. To resolve this issue, see How do I troubleshoot an HTTP 500 or 503 error from Amazon S3?
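If the requests come from the AWS CLI, one way to add retries is to set the retry mode and maximum attempts in the AWS CLI config file (~/.aws/config). The following values are examples:

[default]
retry_mode = adaptive
max_attempts = 10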