Questions tagged with Amazon Simple Storage Service

Browse through the questions and answers listed below or filter and sort to narrow down your results.

Hi, I have Administrator access and full S3 access as well. I was updating an S3 bucket policy and I think I mistakenly applied a Deny to all principals. After that, even though I have all the admin access, I am not able to access that particular S3 bucket and hence cannot reconfigure the bucket policy. This seems like a bug in AWS; please clarify.
1
answers
0
votes
18
views
asked 10 days ago
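This is documented behavior rather than a bug: an explicit Deny in a bucket policy overrides identity-based Allow permissions, including administrator policies, but the account's root user can still delete the bucket policy. A minimal sketch of the recovery step, assuming a hypothetical bucket name `my-locked-bucket`:

```
import boto3

# Must be run with the account root user's credentials: an explicit Deny
# in the bucket policy blocks every IAM principal, but the root user can
# still remove the policy itself.
s3 = boto3.client("s3")
s3.delete_bucket_policy(Bucket="my-locked-bucket")
```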
Hello, I've followed [these instructions](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.SimpleDistribution.html) to create a publicly accessible S3 bucket, with CloudFront connected to it. I can access all S3 objects in my browser, but the CloudFront URL always returns "Access Denied". I'm confused because my S3 bucket is publicly available, and I'm able to access the bucket objects. What could be causing this error?
1
answers
0
votes
32
views
logan_b
asked 10 days ago
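Two common causes of this symptom are requesting the distribution's root URL without a default root object configured, and a bucket policy that grants public readers but never grants CloudFront itself access when origin access control (OAC) is enabled. If OAC is in use, the bucket needs a statement like the following; a minimal sketch where the bucket name, account ID, and distribution ID are hypothetical placeholders:

```
import json
import boto3

# Hypothetical names: substitute your bucket, account ID, and distribution ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipal",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {"StringEquals": {
            "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }},
    }],
}
s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```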
CloudFront automatically added these bucket policy rules, but now my IAM user and production IAM role can't access the bucket to perform a HeadObject operation. How do I modify this to allow object access for my server? I also already tried using `Principal`; it didn't work. The IAM user has full access to S3.

```
{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "Server access",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name",
                "arn:aws:s3:::bucket-name/*"
            ],
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:iam::1234567890:user/dev"
                }
            }
        },
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::1234567890:distribution/asdf"
                }
            }
        }
    ]
}
```
1
answers
0
votes
57
views
ACW
asked 10 days ago
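A likely issue with the first statement above: the `AWS:SourceArn` condition key is populated for requests made by AWS services on your behalf, not for requests signed directly by an IAM user, so that condition never matches and the statement grants nothing to the user. A hedged sketch of the statement rewritten to name the (hypothetical) user ARN directly as the principal:

```
import json
import boto3

# Sketch using the question's hypothetical names: the IAM user ARN belongs
# in Principal; AWS:SourceArn only matches AWS service-originated requests.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Server access",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::1234567890:user/dev"},
            "Action": ["s3:GetObject", "s3:ListBucket", "s3:DeleteObject"],
            "Resource": [
                "arn:aws:s3:::bucket-name",
                "arn:aws:s3:::bucket-name/*",
            ],
        },
        # ... keep the AllowCloudFrontServicePrincipal statement unchanged ...
    ],
}
boto3.client("s3").put_bucket_policy(
    Bucket="bucket-name", Policy=json.dumps(policy)
)
```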
I have a bucket in account A. This bucket is configured to block all public access, and to allow GetObject for requests from CloudFront with "aws:ResourceOrgID" matching my org ID. In account B (inside my organisation) I can create an OriginAccessControl, and using this and the console I can manually add the URL bucketname.s3.region.amazonaws.com as an S3 origin; using this OriginAccessControl, I can access the files through CloudFront. (I have also verified that this is not possible from an account outside my org, so I believe my bucket policy is OK.) Trying to configure the exact same origin using CloudFormation leads to the following error: Resource handler returned message: "Access denied for operation 'Access Denied. (Service: CloudFront, Status Code: 403, Request ID: .... I assume that CloudFormation is either trying to verify in the background that the bucket is accessible, or is trying to change the bucket permissions. Unfortunately, changing the bucket policy action to * for resources BucketName and BucketName/* doesn't help. What could CloudFormation be doing that the console doesn't, which causes this failure?
5
answers
0
votes
47
views
James
asked 10 days ago
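One way to narrow down the difference is to reproduce the console's OAC creation through the API and compare the resulting configuration against what the CloudFormation template produces. A sketch with hypothetical names, mirroring the console step described above:

```
import boto3

cf = boto3.client("cloudfront")

# Hypothetical OAC name; "always"/"sigv4" matches the console's defaults
# for an S3 origin access control.
resp = cf.create_origin_access_control(
    OriginAccessControlConfig={
        "Name": "example-oac",
        "OriginAccessControlOriginType": "s3",
        "SigningBehavior": "always",
        "SigningProtocol": "sigv4",
    }
)
# Compare this config with the one your CloudFormation stack creates.
print(resp["OriginAccessControl"]["Id"])
```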
Use case: new documents are added to S3 on an ongoing basis through a web application. I am trying to build a document search over the documents stored in S3 that can surface uploaded documents in near real time. Does Kendra sync a data source with its index based on an event trigger?
1
answers
0
votes
8
views
asked 10 days ago
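Kendra data sources sync on a schedule by default, but a sync job can also be started programmatically, so an S3 event notification can drive near-real-time indexing. A minimal sketch of a Lambda handler doing this (the index and data source IDs are hypothetical, and the function would be wired to the bucket's `s3:ObjectCreated:*` events):

```
import boto3

kendra = boto3.client("kendra")

# Hypothetical IDs for the Kendra index and its S3 data source.
INDEX_ID = "example-index-id"
DATA_SOURCE_ID = "example-data-source-id"

def lambda_handler(event, context):
    # Kick off an on-demand sync whenever a new object lands in the bucket.
    resp = kendra.start_data_source_sync_job(
        Id=DATA_SOURCE_ID, IndexId=INDEX_ID
    )
    return {"ExecutionId": resp["ExecutionId"]}
```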
I am getting a NoSuchUpload error when uploading a part of a multipart upload via a PUT request to a presigned URL. The error message says, "The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed." The upload ID is not invalid, as I have verified it via the list-multipart-uploads command on the AWS CLI, and I have neither aborted nor completed the upload. I am getting the following error on the multipart upload:

```
<?xml version="1.0"?>
<Error>
  <Code>NoSuchUpload</Code>
  <Message>The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.</Message>
  <UploadId>1kPnI95Sy9xim3DzudwQ4Yno1wIrKT.Lv.wzZ6wqXTM792QfKYZZLavSWOrQxCAgc9mj3E09Nos2xJu_YvaRzAIjD4sx6hO1pOoBNWvzfoFf_Tabbt9d62ebjrKgHHfN</UploadId>
  <RequestId>BNK1E884Y30TM5MF</RequestId>
  <HostId>Iz2brROW9q4ym9UnxLZwoBZp+Af8KkXmFfTm2C86tRHIW1r5w/LWAKU0wSg2bQS4c5K0Xo/yL1A=</HostId>
</Error>
```

I am trying to upload a file to an S3 bucket via the multipart upload method, using the boto3 Python SDK. I generated the upload_id for a 20 MB file with key `<uuid4>/files/test-user-data/<uuid4>_0001.mp4` using the `create_multipart_upload` method. Then I generated a presigned URL for each 5 MB chunk of the file as follows:

```
params = {'Bucket': <bucket_name>, 'Key': <key>, 'UploadId': <upload_id>, 'PartNumber': <chunk_id>}
s3_client.generate_presigned_url(ClientMethod='upload_part', Params=params, ExpiresIn=3600)
```

I got the following presigned URL:

```
https://<bucket_name>.s3.amazonaws.com/3be4b390-f01c-4cfb-bac0-ecf1534a335a/files/3be4b390-f01c-4cfb-bac0-ecf1534a335a/files/test-user-data/725f5643-6dc8-4d48-ad7b-d73479aa5752_25bceabd-343e-4d9c-82ab-0577dc551a69_0001.mp4?uploadId=1kPnI95Sy9xim3DzudwQ4Yno1wIrKT.Lv.wzZ6wqXTM792QfKYZZLavSWOrQxCAgc9mj3E09Nos2xJu_YvaRzAIjD4sx6hO1pOoBNWvzfoFf_Tabbt9d62ebjrKgHHfN&partNumber=1&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATTAUJC2AI5OKS5GA%2F20221119%2Fap-south-1%2Fs3%2Faws4_request&X-Amz-Date=20221119T041617Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=aa26c2263d36fd3e47e05077fa32c4631cefc42c0b7c009341f9b52804cbc97e
```

Then I sent a PUT request to the presigned URL as follows:

```
s3_response = requests.put(url=<presigned_url>, files={'file': <chunk>})
```

Here `chunk` is a bytes object. I expected the file to upload successfully. Initially I suspected that I might be sending an incorrect upload_id, as per the error message, but I ruled out that possibility after writing an automated test case.
1
answers
0
votes
12
views
asked 11 days ago
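Two details in the question above stand out. First, `requests.put(..., files=...)` encodes the body as multipart/form-data rather than raw bytes; the raw part data should go in `data=`. Second, the key inside the quoted presigned URL contains the `<uuid>/files/` prefix twice, while the stated key template contains it once, which suggests the key signed for `upload_part` differs from the key passed to `create_multipart_upload`; an upload ID is only valid for the exact bucket and key it was created with, and a mismatch produces exactly this NoSuchUpload error. A minimal sketch of the intended flow, with hypothetical names:

```
import boto3
import requests

def read_chunks(path, size=5 * 1024 * 1024):
    # Yield 5 MB chunks (S3's minimum part size, except for the last part).
    with open(path, "rb") as f:
        while chunk := f.read(size):
            yield chunk

s3 = boto3.client("s3")
bucket, key = "example-bucket", "files/test-user-data/example_0001.mp4"

# 1. Start the multipart upload; the upload ID is bound to (bucket, key).
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

parts = []
for part_number, chunk in enumerate(read_chunks("example_0001.mp4"), start=1):
    # 2. Presign upload_part with the *same* bucket and key as above.
    url = s3.generate_presigned_url(
        ClientMethod="upload_part",
        Params={"Bucket": bucket, "Key": key,
                "UploadId": upload_id, "PartNumber": part_number},
        ExpiresIn=3600,
    )
    # 3. Send the raw bytes; files= would wrap them in multipart/form-data.
    resp = requests.put(url, data=chunk)
    parts.append({"PartNumber": part_number, "ETag": resp.headers["ETag"]})

# 4. Finish the upload.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)
```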
The version of the replication instance is 3.4.7. It was working fine, but after maintenance the error below occurred and the task stopped. I wonder what the cause of the error could be, and also where the file under /rdsdbdata/data/tasks/ is created.

https://aws.amazon.com/premiumsupport/knowledge-center/dms-task-successful-no-data-s3/?nc1=h_ls

I referred to the document at the link above, but its example covers downloading a file, so I'm not sure it fits my case.

```
[SOURCE_CAPTURE ]W: Got 7 headers at error exception (anw_retry_strategy.cpp:91)
[SOURCE_CAPTURE ]W: header: 'connection'='close' (anw_retry_strategy.cpp:94)
[SOURCE_CAPTURE ]W: header: 'server'='AmazonS3' (anw_retry_strategy.cpp:94)
[SOURCE_CAPTURE ]W: header: 'transfer-encoding'='chunked'
[SOURCE_CAPTURE ]E: ExpiredToken: Unable to parse ExceptionName: ExpiredToken Message: The provided token has expired.
[SOURCE_CAPTURE ]E: Failed to upload file '/rdsdbdata/data/tasks/...' to bucket <bucket> as 'filename', status = 4
```
0
answers
0
votes
23
views
asked 11 days ago
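The ExpiredToken error above suggests the temporary credentials DMS assumed via the endpoint's service access role expired and were not refreshed after the maintenance window. One low-risk check is to re-test the endpoint connection and confirm the role still yields valid credentials; a sketch with hypothetical ARNs:

```
import boto3

dms = boto3.client("dms")

# Hypothetical ARNs: re-test the S3 endpoint from the replication instance
# to confirm the service access role still produces valid credentials.
endpoint_arn = "arn:aws:dms:ap-northeast-1:111122223333:endpoint:EXAMPLE"
dms.test_connection(
    ReplicationInstanceArn="arn:aws:dms:ap-northeast-1:111122223333:rep:EXAMPLE",
    EndpointArn=endpoint_arn,
)
conns = dms.describe_connections(
    Filters=[{"Name": "endpoint-arn", "Values": [endpoint_arn]}]
)
print(conns["Connections"][0]["Status"])
```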
I am serving images from S3 and want to migrate to CloudFront. The S3 bucket is ACL-enabled. Some files are public (ACL: public-read) and some are private, so they are accessed as follows (public files don't require a signature):

* public -> https://xxx.s3.ap-northeast-1.amazonaws.com/public.jpg
* private -> https://xxx.s3.ap-northeast-1.amazonaws.com/private.jpg?AWSAccessKeyId=…&Signature=…&Expires=…

But when I set up CloudFront for this S3 bucket:

1. If I don't restrict viewer access (in the behavior settings), both public and private files can be accessed without a signature.
2. If I restrict viewer access using the key pair, then both types require a signature in the URL.

Is it possible to set this up the way S3 does, that is, to require a signature based on the ACL of each object in S3?
2
answers
0
votes
12
views
asked 11 days ago
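CloudFront does not consult S3 object ACLs when deciding whether to require a signed URL; restricted viewer access is configured per cache behavior. A common pattern is therefore to keep public and private objects under different path prefixes and restrict only the private behavior. A sketch of signing a URL for such a restricted behavior, where the key-pair ID, private-key file, and distribution domain are hypothetical:

```
import datetime

import rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Sign with the private key matching the CloudFront public key.
    with open("private_key.pem", "rb") as f:
        key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, key, "SHA-1")

# Hypothetical key-pair ID and domain; only /private/* sits behind a
# restricted cache behavior, so /public/* stays signature-free.
signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/private.jpg",
    date_less_than=datetime.datetime(2024, 1, 1),
)
print(url)
```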
I have an S3 bucket that I have been using to serve static web pages for a couple of years. I finally decided to get a domain to make it easier to share the location. Following the documentation, I tried to create a simple record:

Record Type: A
Value/Route traffic to: Alias to S3 website endpoint
Region: US East (Ohio) [us-east-2]

It should then show me available S3 endpoints, but it says "No resources found". The static site is https://kghhome.s3.us-east-2.amazonaws.com/index.html. What I have tried so far:

- Entering variations of the S3 address in the search bar.
- Logging off and back in again.
- Waiting 48 hours in case the database mapping the endpoint and user was slow to update.
- Logging off and back in a second time.

The next thing I can think of to try is to rebuild the static website in another bucket, but I'm hoping there is something a little less obnoxious to try first. Thanks, Kai
1
answers
0
votes
14
views
kaigh
asked 11 days ago
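The Route 53 alias target list only shows S3 *website* endpoints, which require static website hosting to be enabled on a bucket whose name exactly matches the record name (for example, a bucket literally named `example.com` for an A record on example.com). The URL quoted above is the REST endpoint, so this may be the gap rather than anything account-related. A hedged sketch of enabling website hosting on such a bucket (`example.com` is a hypothetical placeholder for the actual domain):

```
import boto3

s3 = boto3.client("s3")
# Hypothetical bucket: it must be named exactly like the DNS record,
# e.g. "example.com" for an A-record alias on example.com.
s3.put_bucket_website(
    Bucket="example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```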
How can I download data from an S3 bucket that someone has shared with me into my own S3 bucket? I have been given an access key ID and a secret access key by the person who shared the bucket.
1
answers
0
votes
19
views
asked 11 days ago
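A minimal sketch of reading the shared bucket with the provided credentials; the bucket name is hypothetical, and hard-coding keys is shown only for illustration (environment variables or an `aws configure --profile` profile are safer):

```
import boto3

# Hypothetical names; prefer AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# environment variables or a named CLI profile over hard-coded keys.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
)
for obj in s3.list_objects_v2(Bucket="shared-bucket").get("Contents", []):
    # Download each object to the current directory.
    s3.download_file("shared-bucket", obj["Key"], obj["Key"].replace("/", "_"))
```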
I am using an MDR service called Adlumin that consumes CloudWatch log streams created by my Org CloudTrail log. Part of that requirement is that my log files use SSE-KMS encryption, which is not the case by default for Control Tower. I would like to enable it, but while my management account owns the CloudTrail, my logging account owns the S3 bucket. So when I attempt to update that setting in my CloudTrail, it lets me know that I "don't have adequate permissions in S3 to perform this operation." My questions: Will updating this setting for my S3 bucket be blocked by any Control Tower guardrails? What kind of policies would I need to establish on my bucket (and in IAM?) to give my management account access to update this configuration for my logging account's S3 bucket?
1
answers
0
votes
35
views
asked 11 days ago
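For SSE-KMS on a CloudTrail trail, the KMS key's policy must allow the CloudTrail service principal to generate data keys, and the bucket policy in the logging account would additionally need to permit the management account's changes. A minimal sketch of the key-policy statement, written as a Python dict with a hypothetical account ID:

```
import json

# Hedged sketch of the KMS key-policy statement CloudTrail needs
# (111122223333 is a hypothetical account ID); attach this statement
# to the customer-managed key used for the trail.
cloudtrail_statement = {
    "Sid": "AllowCloudTrailEncrypt",
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "kms:GenerateDataKey*",
    "Resource": "*",
    "Condition": {"StringLike": {
        "kms:EncryptionContext:aws:cloudtrail:arn":
            "arn:aws:cloudtrail:*:111122223333:trail/*"
    }},
}
print(json.dumps(cloudtrail_statement, indent=2))
```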
https://aws.amazon.com/cn/blogs/machine-learning/analyze-us-census-data-for-population-segmentation-using-amazon-sagemaker/#Comments — the US census data from this official sample documentation cannot be downloaded. I ran `aws s3 ls s3://aws-ml-blog-sagemaker-census-segmentation` and got "An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied". My IAM account already has the AmazonS3FullAccess permission.
1
answers
0
votes
14
views
asked 12 days ago
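`AmazonS3FullAccess` only controls what your own IAM identity is allowed to request; access to a bucket owned by another account also depends on that bucket's policy, so the owner may simply have closed public access. If the bucket were still publicly readable, an unsigned request would succeed; a quick check sketched in Python:

```
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Try an anonymous (unsigned) request: if this also returns AccessDenied,
# the bucket owner is no longer granting public read access, and no IAM
# policy on your side can change that.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(Bucket="aws-ml-blog-sagemaker-census-segmentation")
for obj in resp.get("Contents", []):
    print(obj["Key"])
```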