Questions tagged with Amazon Simple Storage Service
Is it still required to use a FIPS endpoint when using TLS 1.2 communication with S3?
Some documentation says a FIPS endpoint should be used when FIPS requirements apply, but there is little documentation on how to enable this through the Java SDK when connecting to S3. So here are my questions: 1. We can confirm that TLS v1.2 is used when connecting to S3; do I still need to use a FIPS endpoint? Besides the SSL connection, what exactly does the FIPS endpoint do? Check the TLS version? 2. Is there any detailed documentation on using a FIPS endpoint with the Java S3 SDK 1.x? Thanks!
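For illustration, a minimal sketch of explicitly targeting an S3 FIPS endpoint by overriding the endpoint URL, shown here with boto3 (the Java SDK 1.x supports the same approach via `AmazonS3ClientBuilder.withEndpointConfiguration`); the region and bucket name are placeholders:

```python
import boto3

# Sketch: explicitly target the S3 FIPS endpoint for a region.
# FIPS endpoint hostnames follow the pattern s3-fips.<region>.amazonaws.com.
# The region and bucket below are placeholders, not from the question.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    endpoint_url="https://s3-fips.us-east-1.amazonaws.com",
)

# The FIPS endpoint terminates TLS using FIPS 140-validated cryptographic
# modules on the service side; it does not by itself enforce a TLS version.
response = s3.list_objects_v2(Bucket="example-bucket")
print(response.get("KeyCount", 0))
```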
Receiving S3 503 Slow Down responses
We have an application that stores a large volume of data in one S3 bucket. Lately, we have started receiving 503 Slow Down error responses. Reading the docs, it seems the error is related to the prefixes and partitions that S3 creates internally based on the folder structure. Our current structure is /uploads/receipt_image_v2/<UUID>/<FILENAME> ... I wonder if we should do something different? We have millions of UUIDs and 1-2 images per UUID in the S3 bucket. I can't find any way to see how many partitions there are for our S3 bucket.
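As a hedged sketch of the usual first-line mitigation (client-side retries with backoff), here is how the SDK's adaptive retry mode can be enabled so that 503 Slow Down responses are retried rather than surfaced immediately; the bucket, key, and retry values are illustrative:

```python
import boto3
from botocore.config import Config

# Sketch: adaptive retry mode adds client-side rate limiting and retries
# throttling responses such as 503 Slow Down. Values are illustrative.
config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
s3 = boto3.client("s3", config=config)

# Requests through this client are retried with backoff on 503 Slow Down,
# spreading the request rate instead of failing outright.
s3.put_object(
    Bucket="example-bucket",
    Key="uploads/receipt_image_v2/123e4567-e89b-12d3-a456-426614174000/receipt.jpg",
    Body=b"...",
)
```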
S3 – file extension and metadata for compressed files
I store various files in an S3 bucket which I'd like to compress, some using Gzip and some using Brotli. For the Gzip case, I set `Content-Encoding` to `gzip`, and for the Brotli case, I set it to `br`. The files have the corresponding suffixes, i.e. `.gz` for a Gzip-compressed file and `.br` for a Brotli-compressed file. The problem is that when I download the files using the Amazon S3 console, both types of files are correctly decompressed, but only the Gzip-compressed files have their suffix removed. E.g. when I download `file1.json.gz` (which has `Content-Type` set to `application/json` and `Content-Encoding` set to `gzip`), it gets decompressed and saved as `file1.json`. However, when I download `file2.json.br` (with `Content-Type` set to `application/json` and `Content-Encoding` set to `br`), the file gets decompressed but another `.json` suffix is added, so the file is saved as `file2.json.json`. I also tried setting `Content-Disposition` to `attachment; filename="file2.json"`, but this doesn't help. So, I have a couple of questions:

- What's the correct way to store the compressed files in S3 to achieve consistent handling? According to the [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_RequestSyntax) API, it seems that `Content-Encoding` is what specifies that a file has been compressed using a specific algorithm and needs to be decompressed when accessed by a client, so the file extension (e.g. `.br`) appears unnecessary. However, some services, e.g. [Athena](https://docs.aws.amazon.com/athena/latest/ug/compression-formats.html), explicitly state that the files need the proper extension to be treated as compressed files.
- Are Gzip-compressed files handled differently than other types (e.g. Brotli)? If so, why, and is it the browser or S3 that initiates this different handling?
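For context, a minimal sketch of how the two objects described above might be uploaded (the bucket name and body are placeholders):

```python
import gzip

import boto3

s3 = boto3.client("s3")
payload = b'{"example": true}'  # placeholder JSON body

# Gzip case: .gz suffix, Content-Type application/json, Content-Encoding gzip.
s3.put_object(
    Bucket="example-bucket",
    Key="file1.json.gz",
    Body=gzip.compress(payload),
    ContentType="application/json",
    ContentEncoding="gzip",
)

# Brotli case: same metadata pattern with "br". The actual Brotli
# compression is omitted here; only the metadata matters for this sketch.
s3.put_object(
    Bucket="example-bucket",
    Key="file2.json.br",
    Body=payload,  # would be brotli.compress(payload) with the brotli package
    ContentType="application/json",
    ContentEncoding="br",
)
```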
How to get metadata of a file from FSx to S3
I have created an FSx file system whose data repository is S3, and mounted it on an EC2 machine. Whenever I create/modify/delete a file, it gets synced to S3. Now, how can I find out who created/modified the file on EC2? One way could be by SSHing into the EC2 instance, but can S3 know this metadata?
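To illustrate what S3 itself records for a synced object, a sketch using `head_object` (the bucket and key are placeholders); S3 keeps timestamps and user-defined metadata, but nothing that identifies the OS user on the EC2 instance:

```python
import boto3

s3 = boto3.client("s3")

# Sketch: inspect the metadata S3 keeps for an object synced from FSx.
head = s3.head_object(Bucket="example-bucket", Key="path/to/file.txt")

print(head["LastModified"])   # when the object was last written to S3
print(head.get("Metadata"))   # user-defined x-amz-meta-* metadata, if any
# Nothing here identifies which EC2 OS user created/modified the file;
# that information would have to be captured on the instance itself.
```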
Move objects in Glacier Deep Archive from one account to another while preserving storage class
I'm looking for the best approach to moving data in S3 Glacier Deep Archive from one account to another while preserving the current storage class. Any solutions I've come across thus far seem to indicate that all data must be retrieved from Deep Archive before copying, which feels redundant and costly given that the data should ultimately remain in Glacier Deep Archive in the destination account. What is the least costly approach to doing this? Can it be done without first performing (e.g.) a Bulk Retrieval on the original data and then subsequently moving it back to Glacier Deep Archive in the destination account?
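For reference, a sketch of the restore-then-copy flow the question hopes to avoid, with the copy landing directly in DEEP_ARCHIVE in the destination bucket so the object never dwells in a more expensive class (bucket and key names are placeholders, and cross-account permissions are assumed to be in place):

```python
import boto3

s3 = boto3.client("s3")

# Step 1: initiate a Bulk restore of the archived object (temporary copy).
s3.restore_object(
    Bucket="source-bucket",
    Key="archive/data.bin",
    RestoreRequest={"Days": 2, "GlacierJobParameters": {"Tier": "Bulk"}},
)

# Step 2 (after the restore completes, which takes hours for Deep Archive):
# copy to the destination bucket, writing straight back to DEEP_ARCHIVE.
s3.copy_object(
    Bucket="destination-bucket",  # owned by the destination account
    Key="archive/data.bin",
    CopySource={"Bucket": "source-bucket", "Key": "archive/data.bin"},
    StorageClass="DEEP_ARCHIVE",
)
```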
S3 list_object_versions documentation is wrong
This documentation says you need read access to the bucket, but you also need the `s3:ListBucketVersions` permission. Could someone from AWS please correct this? Here's the documentation that's wrong: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.list_object_versions
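For context, a minimal sketch of the call in question; in practice it fails with AccessDenied unless the caller's policy also grants `s3:ListBucketVersions` on the bucket (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# This call requires s3:ListBucketVersions on the bucket, not just
# generic read access as the linked documentation suggests.
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket="example-bucket"):
    for version in page.get("Versions", []):
        print(version["Key"], version["VersionId"])
```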
Restored Glacier object cannot be copied
I need advice on why an object that has been successfully restored from Glacier can't be copied or transitioned back to an S3 storage class. I'm not expecting to have to download and re-upload the object, as I didn't have to do this last time. ![Snippet from S3 console](/media/postImages/original/IM1k0y_y_tRE-wDqYm5tjr9Q) I need to restore objects to trigger sync to a second AWS region and change the storage class to Glacier Deep Archive.
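A sketch of the sequence being attempted, for reference (bucket and key are placeholders): the `Restore` header returned by `head_object` should report `ongoing-request="false"` before a copy onto the same key can change the storage class:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "example-bucket", "path/to/object"  # placeholders

# A restored object exposes a Restore header; copies only succeed once
# the restore has finished (ongoing-request="false").
head = s3.head_object(Bucket=bucket, Key=key)
print(head.get("Restore"))  # e.g. 'ongoing-request="false", expiry-date=...'

# Copy the object onto itself to transition it to another storage class.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    StorageClass="DEEP_ARCHIVE",
    MetadataDirective="COPY",
)
```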
Reading multiple CSV files from an S3 bucket whose names start with a specific string in AWS Glue
Hello all, I have multiple CSV files in an S3 bucket, all with the same schema, and the names of all these CSV files start with the string "DUP". I want to build an AWS Glue job that can read all the files whose names start with "DUP" from the S3 bucket. I have created a crawler that extracts the schema of these files and stores it in the Glue catalog. Is there any component available in Glue that I can use to read all these files, process them one by one, and store the processed files in another folder of the S3 bucket? I want a single Glue job that can do all of that. Any answer or suggestion will be highly appreciated, thank you.
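One possible shape for such a job, as a minimal sketch: a single Glue (PySpark) script that reads only the keys starting with `DUP` via a path wildcard and writes the processed output to another prefix (the bucket and prefixes are placeholders):

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Wildcard path: only CSV files whose names start with "DUP" are read.
# Bucket and prefixes are placeholders, not taken from the question.
df = spark.read.option("header", "true").csv("s3://example-bucket/input/DUP*")

# ... transformation logic would go here ...

df.write.mode("overwrite").option("header", "true").csv(
    "s3://example-bucket/processed/"
)

job.commit()
```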
S3 backup error in AWS Backup
I am facing an error when trying to back up an S3 bucket to AWS Backup. I have checked everything but get the following error: "Unable to perform events:ListRules on AwsBackupManagedRule. The backup job failed to create a recovery point for your resource arn:aws:s3:::hppp due to missing permissions on role arn:aws:iam::676968646773:role/service-role/AWSBackupDefaultServiceRole." Please help me out with this; if possible, provide steps to troubleshoot it. Thanks!
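As a hedged sketch of one way to grant the missing permission (the role name comes from the error message; the inline policy name is made up for illustration):

```python
import json

import boto3

iam = boto3.client("iam")

# Grant the action named in the error to the backup service role.
# The policy name is arbitrary; the role name is from the error message.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "events:ListRules", "Resource": "*"}
    ],
}

iam.put_role_policy(
    RoleName="AWSBackupDefaultServiceRole",
    PolicyName="AllowEventsListRules",
    PolicyDocument=json.dumps(policy),
)
```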