Questions tagged with Amazon Simple Storage Service


S3Transfer class ProgressPercentage writes frequently

I'm uploading a large file and have the transfer configured to use 1 GB parts. I recently added the ProgressPercentage class to S3Transfer, and it's writing much more frequently than I expect. I would expect it to write once per part, so why is there only ~262 KB between successive values of what it's uploading? I'm using the standard class posted here, modified only in how it writes: https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3/s3/transfer.html

EDIT: I modified my script to only print on whole percentages, so it prints at 0.00, 1.00, etc. Still, why is it doing this?

    2022-11-21 07:14:42,774 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3580887040 / 92093203968.0 (3.89%)
    2022-11-21 07:14:42,852 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3581149184 / 92093203968.0 (3.89%)
    2022-11-21 07:14:42,930 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3581411328 / 92093203968.0 (3.89%)
    2022-11-21 07:14:42,972 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3581673472 / 92093203968.0 (3.89%)
    2022-11-21 07:14:43,179 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3581935616 / 92093203968.0 (3.89%)
    2022-11-21 07:14:43,240 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3582197760 / 92093203968.0 (3.89%)
    2022-11-21 07:14:43,334 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3582459904 / 92093203968.0 (3.89%)
    2022-11-21 07:14:43,428 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3582722048 / 92093203968.0 (3.89%)
    2022-11-21 07:14:43,459 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3582984192 / 92093203968.0 (3.89%)
    2022-11-21 07:14:43,475 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3583246336 / 92093203968.0 (3.89%)
    2022-11-21 07:14:43,662 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3583508480 / 92093203968.0 (3.89%)
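For context when reading this question: boto3 invokes the progress callback once per chunk it reads from the file, and `TransferConfig`'s `io_chunksize` defaults to 262144 bytes, which matches the spacing between the values in the log above. A minimal sketch of a callback throttled to whole-percent changes, along the lines the asker describes (the injected `logger` is illustrative):

```python
import os
import threading


class ThrottledProgressPercentage:
    """Progress callback that logs only when the whole-percent value changes.

    boto3 calls the callback once per io_chunksize read (262144 bytes by
    default), not once per multipart part, so an unthrottled callback
    logs very frequently on large files.
    """

    def __init__(self, filename, logger):
        self._filename = filename
        self._size = float(os.path.getsize(filename))
        self._seen_so_far = 0
        self._last_whole_percent = -1
        self._lock = threading.Lock()  # callbacks can arrive from several threads
        self._logger = logger

    def __call__(self, bytes_amount):
        with self._lock:
            self._seen_so_far += bytes_amount
            percent = int(self._seen_so_far / self._size * 100)
            if percent != self._last_whole_percent:
                self._last_whole_percent = percent
                self._logger.critical(
                    "Progress for %s %d / %s (%d%%)",
                    self._filename, self._seen_so_far, self._size, percent,
                )
```

An instance of this class is then passed as the progress callback to `upload_file`, in place of the stock `ProgressPercentage` class from the boto3 documentation.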
2 answers · 0 votes · 15 views · asked 18 days ago

How do you change S3 Content-Type metadata through the API?

[This Document](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html) makes it clear that you can change the metadata of an object, and through the console it works fine. I can't figure out how you're supposed to do it through the API.

I have a problem that has been asked before: CloudFront sourced by an S3 bucket, and if the Content-Type is wrong it does the wrong thing. When I upload an xyz.js file, the content gets labeled as 'text/plain'. I'm uploading through a variety of clients, including plain old s3cmd. I can specify the type on the s3cmd command line, but I don't particularly want to; I'm trying to make this easy for people who are not me.

What I'm trying to do is an S3-to-Lambda notification, in two steps:

1. When I receive the S3CreateEvent, execute a HeadObjectCommand on that bucket/key to get the current metadata.
2. In that response, look at GetObjectCommandOutput.Metadata and see if it has a key "content-type". If so, does it match? If it does not match, do a CopyObjectCommand where I set both ContentType and Metadata: { "content-type": "text/javascript" }.

The trouble is, I never find an existing content-type in Metadata using the keys "Content-Type", "content-type", or "contentType". I guess I could just do the CopyObjectCommand every time, but it seems better to check first and avoid any kind of recursion. It's not clear to me whether CopyObjectCommand triggers another notification, but a test left me believing it does.

It's still weird to me that when you upload a .js file the default content-type seems to be text/plain. In fact, the S3 console shows it in two places: Type: js, and in the metadata it shows "System Generated, Content-Type, text/plain". If I use `aws s3 cp junk2.js s3://mybucketname --content-type="text/javascript"` it does the correct thing.

This problem is much discussed on Stack Overflow, but it's mostly just workarounds; there isn't a clear answer.
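Worth noting when reading this question: Content-Type is *system* metadata, so HeadObject returns it as a top-level `ContentType` field rather than inside `Metadata`, which carries only user-defined `x-amz-meta-*` keys. That is why the lookup described above always comes back empty. A minimal sketch of the check-then-copy flow in Python/boto3 (the asker uses the JavaScript SDK; the function name and type values here are illustrative):

```python
import boto3

s3 = boto3.client("s3")


def fix_content_type(bucket, key, desired="text/javascript"):
    """Check an object's Content-Type and rewrite it in place if wrong."""
    head = s3.head_object(Bucket=bucket, Key=key)

    # Content-Type lives in the top-level ContentType field, not in Metadata.
    if head["ContentType"] == desired:
        return  # already correct; skip the copy

    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        ContentType=desired,
        Metadata=head.get("Metadata", {}),  # preserve existing user metadata
        MetadataDirective="REPLACE",        # required to change metadata on copy
    )
```

Copying an object onto itself does emit another ObjectCreated notification (as the asker's test suggested), so the early return above is what keeps a Lambda wired to that event from looping.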
1 answer · 0 votes · 24 views · wz2b · asked 20 days ago

S3 – file extension and metadata for compressed files

I store various files in an S3 bucket which I'd like to compress, some using Gzip and some using Brotli. For the Gzip case I set `Content-Encoding` to `gzip`, and for the Brotli case I set it to `br`. The files have the corresponding suffixes, i.e. `.gz` for a Gzip-compressed file and `.br` for a Brotli-compressed file.

The problem is that when I download the files using the Amazon S3 console, both types are correctly decompressed, but only the Gzip-compressed files have their suffix removed. E.g. when I download `file1.json.gz` (which has `Content-Type` set to `application/json` and `Content-Encoding` set to `gzip`), it gets decompressed and saved as `file1.json`. However, when I download `file2.json.br` (with `Content-Type` set to `application/json` and `Content-Encoding` set to `br`), the file gets decompressed but another `.json` suffix is added, so the file is saved as `file2.json.json`. I also tried setting `Content-Disposition` to `attachment; filename="file2.json"`, but this doesn't help.

So, I have a couple of questions:

- What's the correct way to store compressed files in S3 to achieve consistent handling? According to the [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_RequestSyntax) API, it seems that `Content-Encoding` is what specifies that a file has been compressed with a particular algorithm and needs to be decompressed when accessed by the client, so the file extension (e.g. `.br`) shouldn't be needed. However, some services, e.g. [Athena](https://docs.aws.amazon.com/athena/latest/ug/compression-formats.html), explicitly state that they need files to have the proper extension to be treated as compressed.
- Are Gzip-compressed files handled differently than other types (e.g. Brotli)? And if so, why, and is it the browser or S3 that initiates this different handling?
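For reference, the three headers discussed above can all be set in a single `PutObject` call. A minimal boto3 sketch reproducing the Brotli setup described (bucket and file names are illustrative; this shows how the metadata is attached, not a resolution of the console's renaming behavior):

```python
import boto3

s3 = boto3.client("s3")

# Upload a Brotli-compressed JSON file with the headers described above.
with open("file2.json.br", "rb") as body:
    s3.put_object(
        Bucket="my-bucket",
        Key="file2.json.br",
        Body=body,
        ContentType="application/json",   # type of the *decompressed* payload
        ContentEncoding="br",             # tells clients to Brotli-decompress
        ContentDisposition='attachment; filename="file2.json"',
    )
```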
0 answers · 0 votes · 16 views · asked 24 days ago