Questions tagged with Amazon Simple Storage Service

Bug: S3 bucket static website hosting requires an index document value, even if it's just one space (when set in the management console)

The AWS S3 service allows turning on [static website hosting](https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html) on an S3 bucket. In the AWS CloudFormation [user guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-websiteconfiguration.html#cfn-s3-websiteconfiguration-indexdocument), the `IndexDocument` field is specified as optional (`Required: No`). However, in the [management console](https://s3.console.aws.amazon.com/s3/), when configuring an S3 bucket for static website hosting, the "Error document" field is marked "optional" while "Index document" is not, and trying to save the changes with that field left blank fails (it is highlighted with "Must be at least 1 characters long."). Entering a single space for "Index document", however, is accepted without complaint. The console requirement is in line with what the [S3 user guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/IndexDocumentSupport.html) says:

> When you enable website hosting, you must also configure and upload an index document.

### Steps to reproduce

- View a specific S3 bucket in the [S3 management console](https://s3.console.aws.amazon.com/s3/)
- In the "Properties" tab, scroll down to "Static website hosting" and click "Edit"
- Under "Static website hosting", select "Enable"
- Leave "Index document" blank
- Click "Save changes". The "Index document" field will be highlighted in red with "Must be at least 1 characters long."
- Enter one empty space in the "Index document" field
- Click "Save changes". The changes will now be saved.

### Expected results

"Index document" should be optional.
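For reference, a minimal boto3 sketch of the same configuration done through the API instead of the console; `my-example-bucket`, `index.html`, and `error.html` are placeholder names, not taken from the original report. As far as I can tell, the `PutBucketWebsite` API likewise expects either an `IndexDocument` or a `RedirectAllRequestsTo` block, which may explain the console behavior:

```python
import boto3

s3 = boto3.client("s3")

# Enable static website hosting with an explicit index document.
# Bucket and document names below are placeholders.
s3.put_bucket_website(
    Bucket="my-example-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},  # this one is optional
    },
)
```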
0 answers · 0 votes · 12 views · asked a month ago

S3 bucket per tenant approach. Can I assign different IAM roles for different users in the Amplify project?

Please let me know if this is a valid approach or if I am missing something fundamental.

**Requirement**:
- I need to restrict each tenant's users from accessing other tenants' S3 files
- and be able to measure each tenant's space usage in S3.

**Solution I intend to implement**: Upon user sign-up, we check whether it is a sign-up by invitation to an already existing tenant space or a new registration:
- if it's a new tenant, we register them in a custom DynamoDB table and create an S3 bucket for them
- if it's a new user in an existing tenant, we only add them to the IAM role that can access the tenant's S3 bucket

**Details**: I am currently using a Cognito custom attribute to store the tenant ID (it's configured so it cannot be changed by the user itself) and I am struggling to figure out how to affect the role mapping in the Cognito identity pool to implement the above logic. Please give me directions to dig further, or advice on the overall approach in general.

Some of the ideas are taken from this article: https://medium.com/@dantasfiles/multi-tenant-aws-amplify-method-2-cognito-groups-38b40ace2e9e. It also suggests using Cognito dynamic groups to distinguish tenants, which seems to resolve the S3 issue as well, but with dynamic groups sync events won't work, right?

> Known limitation: Real-time subscriptions are not supported for dynamic group authorization.

https://docs.amplify.aws/cli/graphql/authorization-rules/#user-group-based-data-access

There is also this question, https://repost.aws/questions/QUW1WibDWjQd2rOll4mDiPMA, which suggests using a Lambda and presigned S3 URLs to regulate access to S3 files based on the tenant logic.
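Regarding the Lambda plus presigned URL idea mentioned at the end, here is a minimal sketch of what that could look like, assuming objects live under a per-tenant key prefix in a shared bucket; the bucket name, prefix layout, and function name are hypothetical and not part of the original post:

```python
import boto3

s3 = boto3.client("s3")

def presign_tenant_object(tenant_id: str, key: str, bucket: str = "tenant-files") -> str:
    """Return a short-lived GET URL scoped to the caller's tenant prefix.

    tenant_id should come from the verified Cognito token (e.g. the custom
    tenant attribute), never from client-supplied input.
    """
    # Reject keys that try to escape the tenant prefix.
    if key.startswith("/") or ".." in key:
        raise ValueError("invalid object key")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": f"{tenant_id}/{key}"},
        ExpiresIn=300,  # URL expires after 5 minutes
    )
```

A bucket-per-tenant variant would instead look up the tenant's bucket name from the DynamoDB tenant record and pass it as `bucket`.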
1 answer · 0 votes · 64 views · Arsen · asked a month ago

Loading data from an RDS PostgreSQL database to an S3 bucket, but getting an "Unable to execute HTTP request" error.

Hi all. I created a Glue job that extracts data from an RDS PostgreSQL database and loads it to an S3 bucket. I used a crawler to create the schema of the RDS PostgreSQL source database. But when I run the job, it keeps running for almost one hour and then fails with the following error. I have created a role with AWSGlueRole access attached, which I am using to run the Glue job. I have given permissions in Lake Formation and also added an inline policy for access to the S3 bucket. Any solution will be highly appreciated. A screenshot of the error is also attached.
22/10/31 13:55:48 ERROR GlueExceptionAnalysisListener: [Glue Exception Analysis] { "Event": "GlueExceptionAnalysisTaskFailed", "Timestamp": 1667224548950, "Failure Reason": "Unable to execute HTTP request: Connect to my-first-bucket-0.s3.ap-northeast-1.amazonaws.com:443 [my-first-bucket-0.s3.ap-northeast-1.amazonaws.com/52.219.196.98, my-first-bucket-0.s3.ap-northeast-1.amazonaws.com/52.219.8.63, my-first-bucket-0.s3.ap-northeast-1.amazonaws.com/52.219.4.75, my-first-bucket-0.s3.ap-northeast-1.amazonaws.com/52.219.16.187, my-first-bucket-0.s3.ap-northeast-1.amazonaws.com/52.219.1.27, my-first-bucket-0.s3.ap-northeast-1.amazonaws.com/52.219.152.58] failed: connect timed out", "Stack Trace": [ { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor", "Method Name": "handleRetryableException", "File Name": "AmazonHttpClient.java", "Line Number": 1207 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor", "Method Name": "executeHelper", "File Name": "AmazonHttpClient.java", "Line Number": 1153 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor", "Method Name": "doExecute", "File Name": "AmazonHttpClient.java", "Line Number": 802 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor", "Method Name": "executeWithTimer", "File Name": "AmazonHttpClient.java", "Line Number": 770 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor", "Method Name": "execute", "File Name": "AmazonHttpClient.java", "Line Number": 744 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor", "Method Name": "access$500", "File Name": "AmazonHttpClient.java", "Line Number": 704 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl", "Method Name": "execute", "File Name": "AmazonHttpClient.java", "Line Number": 686 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient", "Method Name": "execute", "File Name": "AmazonHttpClient.java", "Line Number": 550 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient", "Method Name": "execute", "File Name": "AmazonHttpClient.java", "Line Number": 530 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client", "Method Name": "invoke", "File Name": "AmazonS3Client.java", "Line Number": 5140 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client", "Method Name": "invoke", "File Name": "AmazonS3Client.java", "Line Number": 5086 }, { "Declaring Class": 
"com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client", "Method Name": "access$300", "File Name": "AmazonS3Client.java", "Line Number": 394 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client$PutObjectStrategy", "Method Name": "invokeServiceCall", "File Name": "AmazonS3Client.java", "Line Number": 6032 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client", "Method Name": "uploadObject", "File Name": "AmazonS3Client.java", "Line Number": 1812 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client", "Method Name": "putObject", "File Name": "AmazonS3Client.java", "Line Number": 1772 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.s3.lite.call.PutObjectCall", "Method Name": "performCall", "File Name": "PutObjectCall.java", "Line Number": 35 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.s3.lite.call.PutObjectCall", "Method Name": "performCall", "File Name": "PutObjectCall.java", "Line Number": 10 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.s3.lite.call.AbstractUploadingS3Call", "Method Name": "perform", "File Name": "AbstractUploadingS3Call.java", "Line Number": 87 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.s3.lite.executor.GlobalS3Executor", "Method Name": "execute", "File Name": "GlobalS3Executor.java", "Line Number": 114 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.s3.lite.AmazonS3LiteClient", "Method Name": "invoke", "File Name": "AmazonS3LiteClient.java", "Line Number": 191 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.s3.lite.AmazonS3LiteClient", "Method Name": "invoke", "File Name": "AmazonS3LiteClient.java", "Line Number": 186 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.s3.lite.AmazonS3LiteClient", "Method Name": "putObject", "File Name": "AmazonS3LiteClient.java", "Line Number": 107 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore", "Method Name": "storeFile", "File Name": "Jets3tNativeFileSystemStore.java", "Line Number": 152 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream", "Method Name": "uploadSinglePart", "File Name": "MultipartUploadOutputStream.java", "Line Number": 198 }, { "Declaring Class": "com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream", "Method Name": "close", "File Name": "MultipartUploadOutputStream.java", "Line Number": 427 }, { "Declaring Class": "org.apache.hadoop.fs.FSDataOutputStream$PositionCache", "Method Name": "close", "File Name": "FSDataOutputStream.java", "Line Number": 73 }, { "Declaring Class": "org.apache.hadoop.fs.FSDataOutputStream", "Method Name": "close", "File Name": "FSDataOutputStream.java", "Line Number": 102 }, { "Declaring Class": "com.fasterxml.jackson.dataformat.csv.impl.UTF8Writer", "Method Name": "close", "File Name": "UTF8Writer.java", "Line Number": 74 }, { "Declaring Class": "com.fasterxml.jackson.dataformat.csv.impl.CsvEncoder", "Method Name": "close", "File Name": "CsvEncoder.java", "Line Number": 989 }, { "Declaring Class": "com.fasterxml.jackson.dataformat.csv.CsvGenerator", "Method Name": "close", "File Name": "CsvGenerator.java", "Line Number": 479 }, { "Declaring Class": "com.amazonaws.services.glue.writers.JacksonWriter", "Method Name": "done", "File Name": "JacksonWriter.scala", "Line Number": 73 }, { "Declaring Class": "com.amazonaws.services.glue.hadoop.TapeOutputFormat$$anon$1", "Method Name": "close", "File Name": 
"TapeOutputFormat.scala", "Line Number": 217 }, { "Declaring Class": "com.amazonaws.services.glue.sinks.HadoopWriters$", "Method Name": "$anonfun$writeNotPartitioned$3", "File Name": "HadoopWriters.scala", "Line Number": 125 }, { "Declaring Class": "org.apache.spark.util.Utils$", "Method Name": "tryWithSafeFinallyAndFailureCallbacks", "File Name": "Utils.scala", "Line Number": 1495 }, { "Declaring Class": "org.apache.spark.sql.glue.SparkUtility$", "Method Name": "tryWithSafeFinallyAndFailureCallbacks", "File Name": "SparkUtility.scala", "Line Number": 39 }, { "Declaring Class": "com.amazonaws.services.glue.sinks.HadoopWriters$", "Method Name": "writeNotPartitioned", "File Name": "HadoopWriters.scala", "Line Number": 125 }, { "Declaring Class": "com.amazonaws.services.glue.sinks.HadoopWriters$", "Method Name": "$anonfun$doStreamWrite$1", "File Name": "HadoopWriters.scala", "Line Number": 138 }, { "Declaring Class": "com.amazonaws.services.glue.sinks.HadoopWriters$", "Method Name": "$anonfun$doStreamWrite$1$adapted", "File Name": "HadoopWriters.scala", "Line Number": 129 }, { "Declaring Class": "org.apache.spark.scheduler.ResultTask", "Method Name": "runTask", "File Name": "ResultTask.scala", "Line Number": 90 }, { "Declaring Class": "org.apache.spark.scheduler.Task", "Method Name": "run", "File Name": "Task.scala", "Line Number": 131 }, { "Declaring Class": "org.apache.spark.executor.Executor$TaskRunner", "Method Name": "$anonfun$run$3", "File Name": "Executor.scala", "Line Number": 497 }, { "Declaring Class": "org.apache.spark.util.Utils$", "Method Name": "tryWithSafeFinally", "File Name": "Utils.scala", "Line Number": 1439 }, { "Declaring Class": "org.apache.spark.executor.Executor$TaskRunner", "Method Name": "run", "File Name": "Executor.scala", "Line Number": 500 }, { "Declaring Class": "java.util.concurrent.ThreadPoolExecutor", "Method Name": "runWorker", "File Name": "ThreadPoolExecutor.java", "Line Number": 1149 }, { "Declaring Class": "java.util.concurrent.ThreadPoolExecutor$Worker", "Method Name": "run", "File Name": "ThreadPoolExecutor.java", "Line Number": 624 }, { "Declaring Class": "java.lang.Thread", "Method Name": "run", "File Name": "Thread.java", "Line Number": 750 } ], "Task Launch Time": 1667223676080, "Stage ID": 1, "Stage Attempt ID": 0, "Task Type": "ResultTask", "Executor ID": "9", "Task ID": 11 }
1 answer · 0 votes · 17 views · asked a month ago