Questions tagged with Amazon Simple Storage Service


Hi, I'm wondering why I get a 403 error when I try to upload an image from Ubuntu. The following is a screenshot of the error message: ![403 error message](/media/postImages/original/IMXVXZvrNyQrGh9h9Fa939zQ) However, when I run the same code on macOS everything works fine and I can upload the image to the bucket. Does anyone know why? It's quite urgent!
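The question doesn't include the upload code itself, so purely for illustration, here is a minimal upload sketch assuming Python with boto3; the profile, region, bucket, and key names are placeholders. Making the credentials and region explicit like this and comparing them between the two machines is often a quick way to narrow down a 403.

```
import boto3

# Explicit session so the credentials and region actually in use are unambiguous;
# on the failing machine these may differ from the working one.
session = boto3.Session(
    profile_name="default",     # assumed credentials profile
    region_name="us-east-1",    # assumed bucket region
)
s3 = session.client("s3")

s3.upload_file("image.png", "my-bucket", "uploads/image.png")  # placeholder names
```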
1 answer · 0 votes · 25 views · asked 18 days ago
Hello, I am running a job to apply an ETL to a semicolon-separated CSV on S3. However, when I read the file using the AWS Glue DynamicFrame feature and call any method like `printSchema` or `toDF`, I get the following error:

```
py4j.protocol.Py4JJavaError: An error occurred while calling o77.schema.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1) (52bff5da55da executor driver): com.amazonaws.services.glue.util.FatalException: Unable to parse file: s3://my-bucket/my-file.csv
```

I have already verified the encoding; it is UTF-8, so that should not be the problem. When I read the CSV using `spark.read.csv` it works fine, and the crawlers can also recognize the schema. The data has some special characters that shouldn't be there, and removing them is part of the ETL I am looking to perform. Neither the `from_catalog` nor the `from_options` function from AWS Glue works; the problem is the same whether I run the job locally in Docker or in Glue Studio. My data has a date folder partition, so I would prefer to avoid reading it directly with Spark and to take advantage of the Glue Data Catalog as well. Thanks in advance.
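For reference, a minimal sketch of the kind of `from_options` read described above, assuming a PySpark Glue job; the S3 path and header option are placeholders, and the separator is passed explicitly as a format option.

```
from awsglue.context import GlueContext
from pyspark.context import SparkContext

# Placeholder setup; in a real Glue job this comes from the job boilerplate.
glue_context = GlueContext(SparkContext.getOrCreate())

# Read the semicolon-separated CSV with explicit format options.
dyf = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://my-bucket/my-prefix/"]},  # hypothetical path
    format="csv",
    format_options={"separator": ";", "withHeader": True},
)

dyf.printSchema()
```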
1 answer · 0 votes · 39 views · Aftu · asked 18 days ago
Is it possible to have a secure (HTTPS) site hosted on S3 without CloudFront? I am using AWS GovCloud, and CloudFront is not a service available to me, nor can I use the commercially available CloudFront. I need to find other methods that allow me to have an HTTPS site that can reach out to an authenticator. What services can I use to accomplish this? Do I use a VPN?
1 answer · 0 votes · 70 views · asked 19 days ago
Hello, I am a basic user of Amazon S3. I know what a "bucket" is and how to use the privacy settings, but that's it; I prefer not to get into the advanced uses, and I won't be using it for anything other than personal use. What are the steps to back up my MacBook (about 600 GB of data) to my Amazon S3 account? I could just pay Apple for one of their backup plans, but I already have an Amazon S3 account (and it seems more secure and cheaper), so why not use S3? I might back up my computer twice monthly. I would like a simple process, but I don't want to buy third-party software or plugins to do it. Any helpful link to a help file with steps, or any advice, is much appreciated. T
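Not part of the question, but for illustration: one possible scripted approach, assuming Python with boto3 is acceptable, credentials are already configured, and the folder and bucket names below are placeholders. (The AWS CLI's `aws s3 sync` command does essentially the same thing without writing any code.)

```
import pathlib
import boto3

s3 = boto3.client("s3")
backup_root = pathlib.Path.home() / "Documents"  # hypothetical folder to back up
bucket = "my-backup-bucket"                      # hypothetical bucket name

# Walk the folder and upload each file under a matching key.
for path in backup_root.rglob("*"):
    if path.is_file():
        key = str(path.relative_to(backup_root))
        s3.upload_file(str(path), bucket, key)
        print(f"uploaded {key}")
```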
3 answers · 0 votes · 27 views · asked 19 days ago
I'm using the following code (excerpt) in my backend to create a presigned URL to an mp3 file within an AWS S3 bucket:

```
const s3Client = new S3Client({
  credentials: {
    accessKeyId: "AAAAAAAAAAAAAAAAAAAA",
    secretAccessKey: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
  }
});

const getObjectCommand = new GetObjectCommand({
  Bucket: "my-bucket-name",
  Key: "file-to-be-used"
});

// Generate a signed URL for the GET request
const url = await getSignedUrl(s3Client, getObjectCommand, { expiresIn: 300 }).then(data => {
  // ... do some stuff and return the url ...
});
```

### Postman

Calling this code in Postman works as expected and the presigned URL returned from my backend works **immediately** after getting the response in Postman (e.g. by just clicking on it).

### Browser

In my browser, however, I get `403 - Access forbidden` errors when using the URL, e.g. for either setting it as the source for an `HTMLAudioElement` or getting it via a `fetch` request.

### Confusing behaviour (in the browser)

When I wait for about 15 seconds before using/accessing the returned URL, then it works as expected (just clicking on it opens in the browser, setting it as source for the `HTMLAudioElement` also works).

### Screen Recording

The screen recording from the browser shows how the request returns a presigned URL. It also shows the 403 errors (one when trying to set the URL as source for an `HTMLAudioElement`, the second when trying to fetch the presigned URL from S3). You can also see that clicking on the URL at first leads to 403 errors (in the new browser tab that opens) twice. When clicking on the **same** URL for the third time it works (also works when just waiting for about 15 seconds as described above). As I can't embed animated GIFs, here is the corresponding [link](https://i.stack.imgur.com/qDctg.gif).

- I do **not** get any CORS related errors (and I've set up CORS with `AllowedHeaders` and `AllowedOrigins` set to `*` each)
- In Postman the returned URL works immediately
- The URL I try to access is exactly the same (as I'm using my own backend) for Postman and within the browser

Any ideas what might be wrong?

## Update

I've also tried with another bucket in the standard region (us-east-1), same results: Screen recording #2 ([link](https://i.stack.imgur.com/em1M0.gif))

### The request and headers in Postman

As it was sent by Postman:

`https://presigned-url-test-tim.s3.us-east-1.amazonaws.com/4f65f31b-7d63-4be0-a289-1c3a8e056ea4.mp3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIASAQ3L4XYIZI4YD73%2F20230311%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230311T220846Z&X-Amz-Expires=300&X-Amz-Signature=b3c081018ea7e9f7979a65267f66bfdcf2f185d2f824bd86fa9ae9141e8ba627&X-Amz-SignedHeaders=host&x-id=GetObject`

and again for better readability:

```
https://presigned-url-test-tim.s3.us-east-1.amazonaws.com/4f65f31b-7d63-4be0-a289-1c3a8e056ea4.mp3?
X-Amz-Algorithm=AWS4-HMAC-SHA256&
X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&
X-Amz-Credential=AKIASAQ3L4XYIZI4YD73%2F20230311%2Fus-east-1%2Fs3%2Faws4_request&
X-Amz-Date=20230311T220846Z&
X-Amz-Expires=300&
X-Amz-Signature=b3c081018ea7e9f7979a65267f66bfdcf2f185d2f824bd86fa9ae9141e8ba627&
X-Amz-SignedHeaders=host&
x-id=GetObject
```

Additionally, Postman shows the following info under `Request headers` in the Console log:

```
User-Agent: PostmanRuntime/7.31.1
Accept: */*
Postman-Token: 4364aac9-1334-4434-b9a9-0b792dbde7b5
Host: presigned-url-test-tim.s3.us-east-1.amazonaws.com
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
```

### The request headers in the browser

As it was sent by the browser:

`https://presigned-url-test-tim.s3.us-east-1.amazonaws.com/4f65f31b-7d63-4be0-a289-1c3a8e056ea4.mp3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIASAQ3L4XYIZI4YD73/20230311/us-east-1/s3/aws4_request&X-Amz-Date=20230311T220557Z&X-Amz-Expires=300&X-Amz-Signature=e58bdbb1af3d4d47e1ec7c2aeaab3f77dd3a1fd7987831f1d0e6367a359637ef&X-Amz-SignedHeaders=host&x-id=GetObject`

and again for better readability:

```
https://presigned-url-test-tim.s3.us-east-1.amazonaws.com/4f65f31b-7d63-4be0-a289-1c3a8e056ea4.mp3?
X-Amz-Algorithm=AWS4-HMAC-SHA256&
X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&
X-Amz-Credential=AKIASAQ3L4XYIZI4YD73/20230311/us-east-1/s3/aws4_request&
X-Amz-Date=20230311T220557Z&
X-Amz-Expires=300&
X-Amz-Signature=e58bdbb1af3d4d47e1ec7c2aeaab3f77dd3a1fd7987831f1d0e6367a359637ef&
X-Amz-SignedHeaders=host&
x-id=GetObject
```
1 answer · 0 votes · 21 views · asked 20 days ago
Hi, All my files are saved in Glacier Deep Archive. How come I am still getting charged monthly for Simple Storage Service? Thank you.
2 answers · 0 votes · 27 views · asked 20 days ago
Yesterday we wanted to store our Network Load Balancer access logs in an S3 bucket, so, following the [docs](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-access-logs.html), we copied and edited the policy. But after we pasted and saved it, the NLB didn't have permission to use the bucket. We noticed that `"aws:SourceAccount": ["0123456789"]` kept getting saved as `"aws:SourceAccount": "0123456789"`, even when we updated the policy using the AWS CLI (e.g. `aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json`). Is this a bug in the API that is preventing us from using this as we want? Any help would be greatly appreciated.
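As a side note that isn't from the question: in IAM policy JSON, a single-element array and a plain string are generally interchangeable for condition values, so the rewritten form should behave the same. A minimal sketch for applying the policy and reading back what S3 actually stored, assuming Python with boto3; the statement, bucket, and account number are placeholders modelled loosely on the linked docs.

```
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical policy; the real statement should come from the NLB access-log docs.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowNLBLogDelivery",
        "Effect": "Allow",
        "Principal": {"Service": "delivery.logs.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-bucket/AWSLogs/0123456789/*",
        "Condition": {"StringEquals": {"aws:SourceAccount": ["0123456789"]}},
    }],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))

# Read back the stored policy to see how S3 normalized the condition value.
print(s3.get_bucket_policy(Bucket="my-bucket")["Policy"])
```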
2 answers · 0 votes · 35 views · Rocky · asked 22 days ago
I have been trying to create a secure website with a domain name registered in Route 53. I requested a public certificate so that the Amazon CloudFront distribution requires HTTPS. I created two buckets in S3 and selected Block all public access. I followed the instructions in "Configuring Amazon Route 53 to route traffic to a CloudFront distribution" to create a CloudFront distribution. I created an OAC and copied the policy into the bucket policy. I created an alias record that points to my CloudFront distribution. I still can't access the website. If Block all public access is turned on for a bucket used for a static website, can the website be accessed by routing traffic to a CloudFront distribution?
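For context (not from the question): a minimal sketch of the shape of bucket policy an OAC setup typically uses, assuming Python with boto3; the bucket name, account ID, and distribution ARN are placeholders.

```
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical values; replace with the real bucket name and distribution ARN.
bucket = "my-static-site-bucket"
distribution_arn = "arn:aws:cloudfront::111122223333:distribution/EXAMPLEID"

# Allow only the CloudFront service principal, scoped to this distribution,
# to read objects; the bucket can keep Block Public Access enabled.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {"StringEquals": {"AWS:SourceArn": distribution_arn}},
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```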
4 answers · 0 votes · 80 views · asked 22 days ago
We have 2 AWS accounts and we are sending huge amounts of data from the primary account to the secondary account (**data is being transferred from EC2 to an S3 bucket in the secondary account**), so we are incurring large data transfer charges. We are looking for a solution where we can transfer the data from the primary account to the secondary account without going over the internet. I was thinking about a VPC endpoint, but we are not sure whether this can work across AWS accounts.
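Not from the question, but for illustration: a minimal sketch of creating a gateway VPC endpoint for S3 in the sending account, assuming Python with boto3; the region, VPC ID, and route table ID are placeholders, and whether this fits the cross-account setup is exactly what the question is asking.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Create a gateway endpoint so S3 traffic from the VPC stays on the AWS network.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # hypothetical route table
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```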
2 answers · 0 votes · 47 views · asked 23 days ago
Hello, I made a storage class change from S3 Standard to S3 Glacier Flexible Retrieval on a bucket and got a message that it completed successfully, but if I go back into the bucket (Actions - Edit storage class), it is still marked S3 Standard. Is this normal? How can I tell whether the storage class has really been changed? Thank you. All the best.
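Not from the question, but one way to check: the storage class in S3 is recorded per object rather than per bucket, so listing the objects shows what each one is actually stored as. A minimal sketch assuming Python with boto3 and a placeholder bucket name.

```
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # hypothetical bucket name

# List the objects and print the storage class each one currently has.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj.get("StorageClass", "STANDARD"))
```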
1 answer · 0 votes · 25 views · asked 23 days ago
Since AWS now applies SSE to all new object uploads to S3 buckets (as of 1/5/23), how should this impact testing of S3 encryption via the CLI, such as using `get-bucket-encryption`? https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html If an S3 bucket was previously unencrypted, it should now show up in our CLI results as having SSE, correct? Mainly, my question is: if an existing S3 bucket just sat there with no actions occurring, would the SSE automatically trigger, and would any CLI output therefore reflect this new SSE status? Or is it possible the CLI would incorrectly show the bucket as unencrypted until some kind of put or get action was run on the S3 bucket? In some earlier testing of the S3 CLI dated no **earlier** than 1/26, the results included a lot of unencrypted buckets. However, since everything now has SSE because of this change from AWS, we randomly selected 2 buckets shown as not encrypted and re-ran the CLI, and now the CLI output indicates that they have SSE. I am just not sure what happened here.
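For reference, a minimal sketch of checking both the bucket-level default-encryption configuration and the encryption actually recorded on an individual object, assuming Python with boto3; the bucket and key names are placeholders. The two settings are reported separately, which may be relevant when CLI results look inconsistent.

```
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "my-bucket", "some-object.txt"  # hypothetical names

# Default encryption configured on the bucket (applies to new uploads).
try:
    cfg = s3.get_bucket_encryption(Bucket=bucket)
    print(cfg["ServerSideEncryptionConfiguration"]["Rules"])
except ClientError as err:
    print("No default encryption configuration:", err.response["Error"]["Code"])

# Encryption actually applied to one existing object.
head = s3.head_object(Bucket=bucket, Key=key)
print(head.get("ServerSideEncryption"))
```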
1 answer · 0 votes · 19 views · asked 23 days ago
I am trying to create a rule in EventBridge to trigger a workflow when a file with a specific format is uploaded under the desired prefix of an S3 bucket.

```
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"],
    "requestParameters": {
      "bucketName": ["my-bucket"],
      "key": [{ "prefix": "folder1/folder2" }],
      "FileName": [{ "suffix": ".xlsx" }]
    }
  }
}
```

When I upload files, say to s3://my-bucket/folder1/folder2/folder3/test.xlsx, the Glue workflow is not triggered. Can someone help me with this event pattern so it triggers the workflow for a specific file type?
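Not from the question, but for illustration: the CloudTrail `PutObject` event records the object path in `key`, so a suffix filter probably needs to go there rather than on a `FileName` field. A hypothetical sketch of registering such a pattern, assuming Python with boto3; the rule and bucket names are placeholders.

```
import json
import boto3

events = boto3.client("events")

# Hypothetical pattern that filters on the object key's suffix.
# Note: several matchers listed for one field are OR-ed, not AND-ed, so this
# sketch keeps only the suffix filter; combining it with the prefix filter
# in a single list would broaden the match rather than narrow it.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": ["my-bucket"],
            "key": [{"suffix": ".xlsx"}],
        },
    },
}

events.put_rule(
    Name="xlsx-upload-rule",  # hypothetical rule name
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)
```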
2 answers · 0 votes · 44 views · asked 23 days ago