
Server-side encryption settings - API Response UnknownError

0

I'm having a problem accessing an image saved in the bucket; the error response is shown in the first screenshot (Error opening link).

In the S3 settings, the API returns an unknown error, as shown in the second screenshot (Error Settings S3).

I didn't change anything, and new images work correctly.

asked a year ago · 237 views
4 Answers
1
Accepted Answer

It sounds like you've used server-side encryption with a customer-provided key, known as SSE-C for short (for example, in CLI commands).

In SSE-C, encryption and decryption are done on the server side by S3, but S3 doesn't store the key. You have to store the encryption key and provide the actual key (not just its ID) in GetObject and PutObject requests, so that S3 is able to use the key to decrypt existing objects or encrypt new ones. It's explained in more detail in this documentation section: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html. When you send a GetObject request without specifying an encryption key, S3 will give you an error message like the one you're receiving, because it can't decrypt the object without knowledge of the key.
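To make the mechanics concrete, here is a minimal Python sketch of the three SSE-C request headers that documentation describes: the client sends the algorithm name, the base64-encoded 256-bit key, and a base64-encoded MD5 digest of the key with every GetObject or PutObject. The helper name is hypothetical; an SDK such as boto3 computes these headers for you when you pass the SSECustomerKey parameter.

```python
import base64
import hashlib


def sse_c_headers(key: bytes) -> dict:
    """Build the SSE-C headers S3 expects on every GetObject/PutObject.

    S3 never stores the key, so these headers must accompany each request.
    """
    if len(key) != 32:  # SSE-C requires a 256-bit (32-byte) key
        raise ValueError("SSE-C key must be exactly 32 bytes")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        # The raw key, base64-encoded for transport:
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        # An MD5 digest of the key, also base64-encoded, used by S3 as an
        # integrity check that the key arrived intact:
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }


headers = sse_c_headers(b"12345678901234567890123456789012")
```

If a request arrives without these headers, S3 has no way to derive the key and can only return an error, which is what the console's plain GetObject is running into here.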

In most cases, you'd use SSE-S3 or SSE-KMS for server-side encryption. In both cases, S3 has direct access to the keys stored in AWS's systems, so you don't have to store the keys or include them in requests. With SSE-C, you are responsible for storing the key in a place of your own choosing and for providing it to S3 whenever you expect it to perform a related read or write operation.

Note also that SSE-C didn't need to be configured in the bucket's settings in order for this to have happened. Encryption settings can be specified for every single upload (PutObject) operation separately in the request headers. If an application developer, for example, has specified an encryption key for S3 to use with the x-amz-server-side-encryption-customer-key header in a PutObject operation, it would have overridden the bucket-level settings that you see in AWS Config, CloudTrail logs, and the bucket's properties: https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#AmazonS3-PutObject-request-header-SSECustomerKey

You can prevent developers and applications from overriding the desired encryption and other settings with bucket policies and/or identity-based policies. The section "Requiring and restricting SSE-C" in the documentation article on SSE-C linked above first shows an example of a policy statement that requires SSE-C to be used, followed by a second example that prevents SSE-C from being used.
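As an illustration of the second kind of statement, here is a sketch of a bucket policy that denies any PutObject request carrying the SSE-C algorithm header (the bucket name is a placeholder; the Null condition matches requests where the header is present):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenySSECUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption-customer-algorithm": "false"
        }
      }
    }
  ]
}
```

With this in place, uploads fall back to the bucket's default encryption (SSE-S3 or SSE-KMS), and nobody can slip SSE-C objects into the bucket again.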

EXPERT
answered a year ago
EXPERT
reviewed a year ago
  • SSE-S3 is used in this case, so I do not need to store the private key. I checked the link you sent, but I didn't find an answer; I'll keep looking.

  • @PriorizaTec It sounds almost certain that the objects are encrypted with SSE-C. It isn't specified in the bucket settings, so you won't find it there. It's specified in the individual PutObject requests made by applications or the CLI. Could you tell us more about how the objects prior to Aug 23 were placed in the bucket? Did you upload them, or was it done by an application, someone else, or by using AWS CLI commands, such as "aws s3 cp" or "aws s3 sync"?

  • I spoke to my superior, and it seems that we were hacked and these files ended up encrypted with the old key. Before this happened, it was all SSE-S3.

0

To validate if the issue is caused by old images using a different server-side encryption method, follow these steps:

Check AWS Config History:

Review the AWS Config history to determine if there have been any changes to the S3 bucket's configuration, particularly related to server-side encryption settings.

Verify Access to the KMS Key:

Ensure that the user or role you are using has the necessary permissions to access the KMS key associated with the old images. This includes permissions such as kms:Decrypt, along with the relevant S3 permissions.

Check S3 Access Logs:

Review the S3 access logs to identify the last time the old image or object was successfully retrieved. Pay attention to which user or role accessed it and the context in which it was accessed. This information can help pinpoint potential changes or access issues.

By following these steps, you can identify the root cause of the issue and determine whether it is related to encryption settings or access permissions.
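As an additional check from code: when a plain GetObject or HeadObject hits an SSE-C object without the customer key, S3 rejects the request with an HTTP 400. A small helper can flag that pattern in a boto3-style ClientError response dict. This is a sketch; the error code and message shown are assumptions about S3's typical response, so compare them against your own error output.

```python
def suggests_sse_c(error_response: dict) -> bool:
    """Heuristic: HTTP 400 with code "InvalidRequest" on a plain
    GetObject/HeadObject usually means the object is SSE-C encrypted
    and no customer key was supplied in the request."""
    status = error_response.get("ResponseMetadata", {}).get("HTTPStatusCode")
    code = error_response.get("Error", {}).get("Code")
    return status == 400 and code == "InvalidRequest"


# Constructed example of the response shape boto3 exposes as
# ClientError.response (message wording is an assumption):
sample = {
    "Error": {
        "Code": "InvalidRequest",
        "Message": "The object was stored using a form of Server Side "
                   "Encryption. The correct parameters must be provided "
                   "to retrieve the object.",
    },
    "ResponseMetadata": {"HTTPStatusCode": 400},
}
```

A 403 AccessDenied, by contrast, would point toward the permission issues described above rather than a missing SSE-C key.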

answered a year ago
  • There has been no change in the S3 policy.

    The access keys are correct, as I can access the newly saved documents.

    The new documents are accessible; only those prior to August 23rd have this problem.

0

I just quickly tested in my environment, and indeed, uploading an object with SSE-C encryption causes the console to show exactly the same UnknownError messages for both the server-side encryption setting and the additional checksums when viewing the object's properties, as in your screenshots.

If you have the encryption key, you can make copies of the old objects, specifying the current encryption key for the source, and not specifying SSE-C for the destination. That will cause the destination objects to obey the bucket's default encryption setting but allow S3 to read the source objects using the old encryption key you specify.

EXPERT
answered a year ago
  • I understand. Is there any documentation on how to copy the files?

0

If you have the current encryption key, you can copy the files with the AWS CLI using a command like the one below (you can also run it in CloudShell). In this example, objects with the prefix source-folder/ are copied to the prefix destination-folder/. The encryption key for the source objects in this example is 12345678901234567890123456789012. The destination objects will be encrypted according to the bucket's default encryption setting, because no override is specified in the command:

aws s3 cp s3://my-bucket-name/source-folder/ s3://my-bucket-name/destination-folder/ --sse-c-copy-source AES256 --sse-c-copy-source-key 12345678901234567890123456789012 --recursive

The command above will copy all the files in the source folder recursively. You can also copy objects one by one by dropping the --recursive parameter and specifying exact object URLs:

aws s3 cp s3://my-bucket-name/source-folder/my-file-1.jpeg s3://my-bucket-name/destination-folder/my-file-1.jpeg --sse-c-copy-source AES256 --sse-c-copy-source-key 12345678901234567890123456789012

Note that you can specify the same source and destination to overwrite the objects, but if versioning isn't enabled for the bucket and anything goes wrong, the data will be lost forever. It's safer to copy them to a different prefix/folder, as in the above example, and to verify the correctness of the result, before copying the successfully decrypted files to the original location.

The example commands will fail to copy objects if the wrong encryption key is provided, so you can safely test different encryption keys, if there's any uncertainty as to which encryption key was used for each object. Only those objects will be copied for which the encryption key matches the one used originally.

EXPERT
answered a year ago
