Questions tagged with Amazon Simple Storage Service


Policy applied to the organization trail logs bucket created by AWS Control Tower

Hello,

We just set up AWS Control Tower for our organization. Everything ran smoothly, but we noticed a strange policy applied by Control Tower to the bucket responsible for aggregating CloudTrail trails from our whole organization. This bucket lives in the Log Archive account of the Control Tower architecture. The policy is:

```
{
  "Sid": "AWSBucketDeliveryForOrganizationTrail",
  "Effect": "Allow",
  "Principal": {
    "Service": "cloudtrail.amazonaws.com"
  },
  "Action": "s3:PutObject",
  "Resource": [
    "arn:aws:s3:::CLOUDTRAIL_BUCKET/ORGANIZATION_ID/AWSLogs/ORGANIZATION_ID/*"
  ]
}
```

This policy allows the `cloudtrail` service to push objects to the given path. Out of curiosity, we configured a CloudTrail trail in an unrelated AWS account (by unrelated I mean an AWS account that does not belong to our AWS organization) to push data to this S3 path, and it worked. Is there any reason this policy has no `Condition` field restricting access to accounts that belong to the organization, such as:

```
"Condition": {
  "StringEquals": {
    "aws:PrincipalOrgID": ["ORGANIZATION_ID"]
  }
}
```

Our Control Tower landing zone version is 3.0. This version switched to an organization-based trail instead of account-based trails, so I think this policy has existed since that version. I know there are some not easily guessable variables in the process (like the Org ID and the bucket name), but as a compliance tool, Control Tower should restrict access to the organization itself, since the bucket is restricted to it by design.

Thanks for your time
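For anyone wanting to experiment, here is a minimal boto3 sketch of scoping that delivery statement down; the bucket name and trail ARN are hypothetical placeholders. One caveat: `aws:PrincipalOrgID` keys off the calling principal, and organization-trail deliveries arrive as the `cloudtrail.amazonaws.com` service principal rather than a principal inside your organization, so the CloudTrail documentation scopes the service principal with `aws:SourceArn` (or `aws:SourceAccount`) instead. Also note that Control Tower manages this bucket's policy, so manual edits may be reported as drift or reverted.

```python
import json

import boto3

# Sketch only: bucket name and trail ARN are hypothetical placeholders.
BUCKET = "CLOUDTRAIL_BUCKET"
TRAIL_ARN = "arn:aws:cloudtrail:eu-west-1:111111111111:trail/aws-controltower-BaselineCloudTrail"

scoped_statement = {
    "Sid": "AWSBucketDeliveryForOrganizationTrail",
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "s3:PutObject",
    "Resource": f"arn:aws:s3:::{BUCKET}/*",
    # Deliveries come from the CloudTrail service principal, so the
    # CloudTrail docs scope them with aws:SourceArn rather than
    # aws:PrincipalOrgID (which keys off principals inside your org).
    "Condition": {"StringEquals": {"aws:SourceArn": TRAIL_ARN}},
}

s3 = boto3.client("s3")
policy = json.loads(s3.get_bucket_policy(Bucket=BUCKET)["Policy"])
# Swap the existing delivery statement for the scoped version.
policy["Statement"] = [
    scoped_statement if s.get("Sid") == "AWSBucketDeliveryForOrganizationTrail" else s
    for s in policy["Statement"]
]
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```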
0 answers · 1 vote · 38 views · asked 14 days ago

S3 Bucket Object Lock - Deleting an object version with no retention settings requires 'BypassGovernanceRetention' permissions

**Scenario:**

- An S3 bucket has Object Lock enabled.
- Default retention is, and always has been, disabled.
- An S3 object in the bucket has multiple versions.
- Object Lock (legal hold and retention) is disabled for all versions of the object.
- Object Lock (legal hold and retention) settings have never been enabled for the object or any of its previous versions.

**Issue:** An IAM user with the 'DeleteObjectVersion' permission receives 'access denied' when attempting to delete a version of the object. The delete succeeds when 'BypassGovernanceRetention' is additionally allowed for the same user.

**Question:** Is this the expected behavior? It seems like a bug to me. I understood the purpose of 'BypassGovernanceRetention' to be allowing changes to objects where governance-mode retention is enabled, but it appears 'BypassGovernanceRetention' is required to delete a version in the bucket even when the version does not have governance mode enabled. I can find no reference in the documentation for this behavior.

I have confirmed this behavior occurs only for objects in buckets where Object Lock is enabled. For objects in buckets with versioning only (Object Lock disabled), the behavior is as expected: only the 'DeleteObjectVersion' permission is required to delete object versions.

Please advise.

Regards,
Jason
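For reference, a minimal boto3 sketch of the two calls being compared, with hypothetical bucket, key, and version ID; the second call asserts the bypass, which also requires the s3:BypassGovernanceRetention permission:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
# Hypothetical names for illustration.
bucket = "my-object-lock-bucket"
key = "example-object"
version_id = "EXAMPLE_VERSION_ID"

# Plain version delete: needs s3:DeleteObjectVersion. On the Object Lock
# bucket this reportedly returns AccessDenied even though the version has
# no retention or legal hold.
try:
    s3.delete_object(Bucket=bucket, Key=key, VersionId=version_id)
except ClientError as err:
    print("Plain delete failed:", err.response["Error"]["Code"])

# Same delete with the governance bypass asserted: additionally needs
# s3:BypassGovernanceRetention, and this one succeeds.
s3.delete_object(
    Bucket=bucket,
    Key=key,
    VersionId=version_id,
    BypassGovernanceRetention=True,
)
```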
1 answer · 0 votes · 33 views · asked 15 days ago

S3Transfer ProgressPercentage callback writes more frequently than expected

I'm uploading a large file, configured to use 1 GB parts for the upload. I recently added the ProgressPercentage class to S3Transfer, and it's writing much more frequently than I expect; I would expect it to write once per part. Why is there about 262 KB between successive uploaded-byte values? I'm using the standard class posted at https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3/s3/transfer.html, modified only in how it writes.

EDIT: I changed my script to print only on even percentages (0.00, 1.00, etc.). Still, why is it doing this?

```
2022-11-21 07:14:42,774 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3580887040 / 92093203968.0 (3.89%)
2022-11-21 07:14:42,852 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3581149184 / 92093203968.0 (3.89%)
2022-11-21 07:14:42,930 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3581411328 / 92093203968.0 (3.89%)
2022-11-21 07:14:42,972 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3581673472 / 92093203968.0 (3.89%)
2022-11-21 07:14:43,179 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3581935616 / 92093203968.0 (3.89%)
2022-11-21 07:14:43,240 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3582197760 / 92093203968.0 (3.89%)
2022-11-21 07:14:43,334 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3582459904 / 92093203968.0 (3.89%)
2022-11-21 07:14:43,428 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3582722048 / 92093203968.0 (3.89%)
2022-11-21 07:14:43,459 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3582984192 / 92093203968.0 (3.89%)
2022-11-21 07:14:43,475 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3583246336 / 92093203968.0 (3.89%)
2022-11-21 07:14:43,662 - aws_pc_backup - [CRITICAL] - Progress for C:\\Temp_Download\\C_VOL-b031.spf 3583508480 / 92093203968.0 (3.89%)
```
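Worth noting: the gaps between successive byte counts in the log are exactly 262,144 bytes (256 KiB), which lines up with boto3's default TransferConfig io_chunksize. The transfer manager invokes the callback once per chunk it reads from the file, not once per multipart part, so frequent callbacks are expected. A sketch of a callback that only emits a line when a new whole percent is crossed, assuming a logger configured like the one in the output above:

```python
import os
import threading


class QuietProgressPercentage:
    """Progress callback that logs only when a whole percent is crossed.

    boto3 invokes the callback for each chunk of bytes read (262,144
    bytes by default), regardless of the multipart part size, so the raw
    callback fires thousands of times for a large file.
    """

    def __init__(self, filename, logger):
        self._filename = filename
        self._size = float(os.path.getsize(filename))
        self._seen_so_far = 0
        self._last_whole_pct = -1
        self._lock = threading.Lock()
        self._logger = logger

    def __call__(self, bytes_amount):
        with self._lock:
            self._seen_so_far += bytes_amount
            pct = int(self._seen_so_far / self._size * 100)
            if pct > self._last_whole_pct:  # only log on a new whole percent
                self._last_whole_pct = pct
                self._logger.critical(
                    "Progress for %s  %d / %d (%d%%)",
                    self._filename, self._seen_so_far, int(self._size), pct,
                )
```

Raising io_chunksize in TransferConfig would also thin out the callbacks, at the cost of larger reads per chunk.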
2 answers · 0 votes · 14 views · asked 15 days ago

How do you change S3 Content-Type metadata through the API?

[This document](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html) makes it clear that you can change the metadata of an object. Through the console it works fine; I can't figure out how you're supposed to do it through the API.

I have a problem that has been asked before: CloudFront sourced by an S3 bucket, and if the Content-Type is wrong it does the wrong thing. When I upload an xyz.js file it labels the content as 'text/plain'. I'm doing this through a variety of clients, including plain old s3cmd. I can specify the type on the s3cmd command line, but I don't particularly want to; I'm trying to make this easy for people who are not me.

What I'm trying to do is an S3-to-Lambda notification, in two steps:

1. When I receive the S3CreateEvent, execute a HeadObjectCommand on that bucket/key to get the current metadata.
2. In that response, look at GetObjectCommandOutput.Metadata and see if it has a key "content-type". If so, does it match? If it does not match, do a CopyObjectCommand, setting both ContentType and Metadata: { "content-type": "text/javascript" }.

The trouble is, I never find an existing content-type in Metadata using the keys "Content-Type", "content-type", or "contentType". I guess I could just do the CopyObjectCommand every time, but it seems better to check first and avoid any kind of recursion. It's not clear to me whether CopyObjectCommand triggers another notification, but a test left me believing it does.

It's still weird to me that when you upload a .js file the default content-type seems to be text/plain. In fact, the S3 console shows it in two places: Type: js, and in the metadata it shows "System Generated, Content-Type, text/plain". If I use `aws s3 cp junk2.js s3://mybucketname --content-type="text/javascript"` it does the correct thing.

This problem is much discussed on Stack Overflow, but it's mostly just workarounds; there isn't a clear answer.
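The detail that may be tripping this up: Content-Type is system metadata, so it comes back as the top-level ContentType field of the head response, not as an entry in the Metadata map, which only carries user-defined x-amz-meta-* keys (the same split applies to HeadObjectCommandOutput.ContentType in the JavaScript SDK). A boto3 sketch of the check-then-copy flow, with hypothetical bucket and key names:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "mybucketname", "junk2.js"  # hypothetical names

head = s3.head_object(Bucket=bucket, Key=key)
# Content-Type is system metadata: a top-level response field, not an
# entry in head["Metadata"] (that dict holds only x-amz-meta-* keys).
current_type = head.get("ContentType")

if current_type != "text/javascript":
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        ContentType="text/javascript",
        # REPLACE is required to change metadata on a self-copy; carry
        # the user metadata over explicitly or it is dropped.
        Metadata=head.get("Metadata", {}),
        MetadataDirective="REPLACE",
    )
```

The copy does emit an s3:ObjectCreated:Copy notification, so the equality check above is what keeps the Lambda from looping: the second invocation sees the corrected ContentType and does nothing.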
1 answer · 0 votes · 24 views · wz2b · asked 18 days ago