Hello,
You should have a look at this thread, it looks like the same question: https://repost.aws/questions/QU3h2LzJr0TGufxMbYkcw_EQ/accessdenied-in-old-files-of-bucket#ANtdLSGXf6QOulO2dc1eNPUg
Hope this helps :)
Hi,
Based on https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3)
as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023,
all new object uploads to Amazon S3 are automatically encrypted at no additional cost
and with no impact on performance. The automatic encryption status for S3 bucket default
encryption configuration and for new object uploads is available in AWS CloudTrail logs, S3
Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API
response header in the AWS Command Line Interface and AWS SDKs.
Given this, it seems that you have different forms of encryption applied to your old vs. new objects.
So, best to re-encrypt everything with the same key using the default key described in the note above.
The simplest way to re-encrypt all objects properly is to use S3 Batch Operations: see https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/ for precise guidance on how to do it.
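Before (or after) re-encrypting, you can check which encryption an individual object currently uses. A minimal sketch, with placeholder bucket and key names:

```shell
# Placeholder bucket/key names -- substitute your own.
# SSE-S3 objects report "AES256" and SSE-KMS objects report "aws:kms";
# SSE-C objects instead return an InvalidRequest error, because S3
# cannot describe them without the customer-provided key.
aws s3api head-object \
    --bucket my-example-bucket \
    --key 'images/photo.jpg' \
    --query ServerSideEncryption
```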
Best,
Didier
The issue you're experiencing is likely related to changes in Amazon S3's default encryption settings. Starting January 5, 2023, Amazon S3 automatically applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket. This change affects new object uploads but does not retroactively encrypt existing objects.
The error message you're seeing suggests that your existing objects are not encrypted, while your bucket now has a default encryption setting. This mismatch is causing access issues for the older, unencrypted objects.
To resolve this issue, you have a few options:
- Update your bucket policy to allow access to both encrypted and unencrypted objects. This would be a temporary solution while you work on encrypting your existing data.
- Use S3 Batch Operations to encrypt your existing objects. This will apply the same encryption method (SSE-S3) to all your existing objects, bringing them in line with your new uploads.
- If you need a different type of encryption for your objects (such as SSE-KMS), you can set this as the default encryption for your bucket and then use S3 Batch Operations to apply this encryption to your existing objects.
- Temporarily disable the default encryption on your bucket, copy all existing objects in place within the same bucket (which will encrypt them with the current default settings), and then re-enable default encryption.
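The in-place copy in the last option can be sketched with the AWS CLI; the bucket name below is a placeholder, and it's worth testing on a single prefix first, since the copy rewrites every object:

```shell
# Placeholder bucket name -- substitute your own.
# Copying each object onto itself forces S3 to rewrite it, applying
# the requested encryption (SSE-S3 via --sse AES256) to the copy.
aws s3 cp s3://my-example-bucket/ s3://my-example-bucket/ \
    --recursive \
    --sse AES256
```

Note that this rewrite changes the objects' last-modified timestamps and, on a versioned bucket, creates new object versions.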
Remember, when you're working with encrypted objects, ensure that your applications and users have the necessary permissions to decrypt the objects when accessing them.
If you continue to face issues after trying these solutions, it would be advisable to contact AWS Support for more specific guidance tailored to your account and bucket configuration.
Sources
Using server-side encryption with Amazon S3 managed keys (SSE-S3) - Amazon Simple Storage Service
Protecting data with server-side encryption - Amazon Simple Storage Service
Encryption best practices for Amazon S3 - AWS Prescriptive Guidance
That error generally means that the object (file) is encrypted server-side but using an encryption key that was provided by the client when uploading/copying the file to its current location, rather than S3 generating an encryption key. This method of encrypting data on the server side using customer-provided keys is called SSE-C for short.
The reason you're only getting technical-looking error messages when trying to view the encryption settings in the console or to download the object is that S3 doesn't possess the encryption key. When you make any request to access the object's contents without providing the key in the request, S3 will fail to fulfil it due to the missing key.
When this happens unexpectedly, it usually means, I'm sorry to have to say, that your AWS account may have been compromised, typically by an IAM user's static access key having been leaked, and a bad actor having used it to encrypt all the objects in your bucket, using a key only in the possession of that bad actor. This is a well-known pattern for ransomware actors, because IAM users with static access keys often have excessive permissions, and encrypting the data inside an S3 bucket can be done at extremely high speed due to the enormous scale of S3.
There's another such case discussed here, with a remark from the account owner that they were indeed hacked in just that manner: https://repost.aws/questions/QU1kmi1sKGST2TYrOM5yZMJw
and a more recent discussion without final confirmation but strong indications of the same phenomenon here: https://repost.aws/questions/QUhePzmNG-RPSvboO8u7mkuw/files-across-all-s3-buckets-unaccessible-even-from-gui
Thank you all. I followed up on the instruction mentioned in this thread to identify whether the files are KMS or S3 encrypted: https://repost.aws/questions/QU3h2LzJr0TGufxMbYkcw_EQ/accessdenied-in-old-files-of-bucket#ANtdLSGXf6QOulO2dc1eNPUg
I took a single image file hosted in this bucket and I consistently get this in the AWS CLI (in the command I added the '+' separator because the original image file name contains whitespaces):
**ALL** my files in this bucket were uploaded after August 2024 and have not been modified since then (I upload jpg image files), and still I can see that **ALL** files were last updated on December 15, 2024.
What can still be the reason and how can I overcome this instead of replacing (re-uploading) all the files? :(
Thank you!
In the aws s3api head-object command, the spaces in the key shouldn't be URL-encoded; instead, the unescaped key should be surrounded by single quotes:
aws s3api head-object --bucket vrevit-image-bucket --key 'f8../Floor 4B 1.jpg'
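To illustrate why the quoting matters (a pure-shell demonstration, no AWS access needed): without the quotes, the shell word-splits the key, so the CLI would receive only the first fragment as the --key value.

```shell
# Count how many arguments the shell actually passes along.
count_args() { echo $#; }

key='Floor 4B 1.jpg'
count_args $key     # unquoted: word splitting produces 3 arguments
count_args "$key"   # quoted: the key remains a single argument
```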
As you mentioned, the object in your screenshot and all the other objects you can't access were modified on December 15th. Unless one of your colleagues, for example, has intentionally encrypted them with a customer-provided encryption key, it's almost certain that you've been hacked: a malicious outsider has copied your original objects, which were likely SSE-S3-encrypted (with S3-managed encryption keys), making the copies SSE-C-encrypted (with keys known only to the outsider). The fresh December 15 timestamps you're seeing indicate the moment the encrypted copies were created.
I suggest you first check if there are IAM users in your account that have static access keys. If any do, disable those access keys. If there are any newly created IAM users that you don't recognise, disable their access keys and console login. It's very typical for attackers to create new IAM users so that they could maintain a foothold over your environment in case their initial access path (the key that was likely compromised first) gets cut off.
You should also check if any new IAM roles have been created in your account or if the S3 bucket policies or other resource-based policies of services you're using (such as your S3 bucket policies) have been modified. They could've been changed to allow the attacker to access your environment with credentials from a different AWS account. Also check if the trusted AWS account IDs (12-digit identifiers) listed for IAM roles contain any account IDs you don't recognise.
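The IAM checks above can be sketched with the CLI; the user name and access key ID below are placeholders:

```shell
# List all IAM users, then the static access keys of a suspect user.
aws iam list-users --query 'Users[].UserName'
aws iam list-access-keys --user-name suspect-user

# Check when a key was last used before deciding to disable it.
aws iam get-access-key-last-used --access-key-id AKIAEXAMPLEKEYID

# Deactivate a compromised or unrecognised access key.
aws iam update-access-key \
    --user-name suspect-user \
    --access-key-id AKIAEXAMPLEKEYID \
    --status Inactive
```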
There are more instructions for regaining control after account compromise in this support article: https://repost.aws/knowledge-center/potential-account-compromise
Once the access keys and potential new users have been deactivated, I suggest you check if versioning is enabled for your S3 bucket. If it is, open a "folder" containing any of the inaccessible files and switch on the "Show versions" switch at the top of the object list. This will show if the earlier versions from August might still be there. Naturally, if the leaked credentials had the permission also to delete earlier object versions, a typical attacker would've deleted them after producing the encrypted versions. If the earlier versions are there, you should try opening a few of them to verify they are accessible, after which you could either create copies of them to supersede the encrypted versions, or you could delete the encrypted versions, leaving the original, working ones behind.
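If versioning was enabled, the version check and recovery can also be done from the CLI. A sketch with placeholder bucket, key, and version ID:

```shell
# Inspect the version history of one affected object.
aws s3api list-object-versions \
    --bucket my-example-bucket \
    --prefix 'images/photo.jpg'

# Deleting the current (SSE-C-encrypted) version by its VersionId
# makes the previous, readable version the latest one again.
aws s3api delete-object \
    --bucket my-example-bucket \
    --key 'images/photo.jpg' \
    --version-id EXAMPLEVERSIONID
```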
If there are no earlier versions or if you have the original files still available outside S3, then I suggest you first deactivate the attacker's access to your environment as per the above instructions regarding IAM users and roles, and then reupload your files to your S3 bucket.
You can quickly reproduce the issue as follows.
First, open CloudShell in the region where your bucket resides. In the CloudShell environment, create a dummy test file named testfile.txt with some non-empty content.
Upload the file to your S3 bucket with this command, with BUCKETNAME replaced with your bucket name. The file will be uploaded SSE-C-encrypted with "00000000001111111111222222222233" as the encryption key.
aws s3api put-object \
    --bucket BUCKETNAME \
    --key _test/testfile.txt \
    --body testfile.txt \
    --sse-customer-algorithm AES256 \
    --sse-customer-key 00000000001111111111222222222233
Try to download the file in the regular manner, without supplying the encryption key. This command will fail with the same error message you saw earlier: An error occurred (InvalidRequest) when calling the GetObject operation: The object was stored using a form of Server Side Encryption. The correct parameters must be provided to retrieve the object.
aws s3api get-object \
    --bucket BUCKETNAME \
    --key _test/testfile.txt \
    testfile-downloaded.txt
You can also confirm in the graphical S3 console that the object _test/testfile.txt behaves the same way as the real objects that are inaccessible.
However, when you give otherwise the same download command but additionally include the encryption key for the test object, the download succeeds, and you'll find testfile-downloaded.txt in your working directory in CloudShell, with the same contents as the file you uploaded:
aws s3api get-object \
    --bucket BUCKETNAME \
    --key _test/testfile.txt \
    testfile-downloaded.txt \
    --sse-customer-algorithm AES256 \
    --sse-customer-key 00000000001111111111222222222233
SSE-S3 won't cause the error message in the question. The error message refers to SSE-C, which is server-side encryption with a customer-provided key. That's what the "correct parameters must be provided" part of the error refers to.