Questions tagged with Amazon Simple Storage Service
I have an API from which I can retrieve data and upload it to my S3 buckets. I plan to clean this data for missing values, duplicates, and invalid values, and also remove outliers. After the data is cleaned, I want to visualize it and expose it as an API so users can access my visualized charts.
I have tried different tools for the cleaning process, such as AWS Glue DataBrew, SageMaker Data Wrangler, and Python (pandas). However, I am unsure of the best approach, especially since I want to automate the entire process so that the cleaning and visualization run as soon as I add a CSV file to the S3 bucket, as in the sketch below.
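One common pattern is an S3 event notification that invokes a Lambda whenever a CSV lands in the bucket; the Lambda then kicks off the cleaning step. A minimal sketch of such a trigger, assuming aws-lambda-go; the cleaning call itself is left as a comment since the tool choice (DataBrew, pandas, etc.) is open:
```
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler runs on every s3:ObjectCreated:* notification from the bucket.
func handler(ctx context.Context, e events.S3Event) error {
	for _, rec := range e.Records {
		bucket := rec.S3.Bucket.Name
		key := rec.S3.Object.Key
		fmt.Printf("new upload: s3://%s/%s\n", bucket, key)
		// Kick off the cleaning step here, e.g. start a DataBrew job run
		// or invoke the pandas-based cleaner, then refresh the visualization.
	}
	return nil
}

func main() {
	lambda.Start(handler)
}
```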
Hi AWS,
Is this workflow architecture possible:
RDS (PostgreSQL) --------------------> Amazon MQ Broker --------------> Lambda Function -----------------------> S3 Bucket
(Data is stored for customers)
The database could be DynamoDB as well. Amazon MQ is used as an event source for the Lambda function; the Lambda sends the request to API Gateway, gets the JSON response, and then sends it on to S3 to be stored as output.
Please suggest.
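This leg of the workflow is a supported pattern. A minimal sketch of the MQ-to-S3 part, assuming the aws-lambda-go ActiveMQ event type and a placeholder bucket; the API Gateway request/response step would slot in between decoding the message and the PutObject call:
```
package main

import (
	"bytes"
	"context"
	"encoding/base64"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

var svc = s3.New(session.Must(session.NewSession()))

// handler consumes messages from the Amazon MQ event source mapping
// and stores each payload in S3.
func handler(ctx context.Context, e events.ActiveMQEvent) error {
	for _, msg := range e.Messages {
		// ActiveMQ payloads arrive base64-encoded.
		body, err := base64.StdEncoding.DecodeString(msg.Data)
		if err != nil {
			return err
		}
		// Call API Gateway here and replace body with the JSON response.
		_, err = svc.PutObject(&s3.PutObjectInput{
			Bucket: aws.String("my-output-bucket"), // placeholder
			Key:    aws.String(fmt.Sprintf("mq-output/%s.json", msg.MessageID)),
			Body:   bytes.NewReader(body),
		})
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	lambda.Start(handler)
}
```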
I want all requests to be redirected to https://www.smyro.com.tr.
My current settings (S3, Route 53, CloudFront) handle the requests as follows:
| Request Url | Redirect Url | Result |
| --- | --- | --- |
| http://smyro.com.tr | https://smyro.com.tr | Bad |
| https://smyro.com.tr | https://smyro.com.tr | Bad |
| http://www.smyro.com.tr | https://www.smyro.com.tr | Good |
| https://www.smyro.com.tr | https://www.smyro.com.tr | Good |
How can you help me with this problem?
Thanks in advance for all answers.
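The failing rows are the apex (non-www) host, which suggests nothing is answering for smyro.com.tr itself. The usual pattern is a second, empty S3 bucket named smyro.com.tr whose website configuration redirects everything to the www host, fronted by a CloudFront distribution that lists the apex as an alternate domain name and redirects HTTP to HTTPS. A minimal sketch of that bucket configuration, assuming aws-sdk-go (the region is a placeholder):
```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("eu-central-1")}))
	svc := s3.New(sess)

	// Configure the apex bucket to redirect every request to the
	// www host over HTTPS; CloudFront and Route 53 alias the apex to it.
	_, err := svc.PutBucketWebsite(&s3.PutBucketWebsiteInput{
		Bucket: aws.String("smyro.com.tr"),
		WebsiteConfiguration: &s3.WebsiteConfiguration{
			RedirectAllRequestsTo: &s3.RedirectAllRequestsTo{
				HostName: aws.String("www.smyro.com.tr"),
				Protocol: aws.String("https"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```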
I enabled CloudTrail to debug some SNS interactions and stored the logs in a new S3 bucket. I also enabled management events as part of this trail.
The next day I got an alert that I was hitting my free tier limit:
* 2000.0 Requests for free for 12 months as part of AWS Free Usage Tier (Global-Requests-Tier1)
It appears that this relates to PUTs to S3 (using object Lambdas?).
I have about 100 events from my SNS and 1900 from AWS management events.
On the pricing page it states:
You can deliver one copy of your ongoing management events to your Amazon Simple Storage Service (S3) bucket for free by creating trails. Limits may apply.
I have stopped management event logging.
Did I misconfigure something?
Do I need to make a separate bucket or trail for management events?
Did I misunderstand the pricing?
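Delivery of one copy of management events is free, but the PUT requests CloudTrail makes to write each log file into the bucket appear to be what counts against Global-Requests-Tier1. For reference, a minimal sketch of turning management events off on a trail while keeping the trail itself, assuming aws-sdk-go and a placeholder trail name:
```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudtrail"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := cloudtrail.New(sess)

	// Keep the trail but stop recording management events.
	_, err := svc.PutEventSelectors(&cloudtrail.PutEventSelectorsInput{
		TrailName: aws.String("my-trail"), // placeholder
		EventSelectors: []*cloudtrail.EventSelector{{
			ReadWriteType:           aws.String("All"),
			IncludeManagementEvents: aws.Bool(false),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```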
Hi,
We are having trouble finding a way to make requests to an S3 Multi-region Access Point (MRAP) using Rails Active Storage (we have no issues using Active Storage to access S3 buckets by their name + region).
Any help is much appreciated.
I have a Lambda that generates an S3 presigned download URL and sends back the presigned URL. These presigned URLs expire after 12 hours; I assume this is due to the expiry of the temporary authentication tokens (I have set the presign expiry to 7 days). I'm wondering how I can increase this from 12 hours to 24 hours.
Ref - https://repost.aws/knowledge-center/presigned-url-s3-bucket-expiration
This is how I'm creating the S3 client in Go.
```
// The session assumes a role, and AssumeRoleDuration caps how long the
// temporary credentials stay valid; a presigned URL signed with them
// can never outlive that window.
awsSession := session.Must(
	session.NewSessionWithOptions(
		session.Options{
			Config:             aws.Config{Region: aws.String("us-west-2")},
			AssumeRoleDuration: sessionExpiry,
		},
	),
)
return &s3Client{
	client: s3.New(awsSession),
}
```
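Per the linked knowledge-center article, a presigned URL is only valid while the credentials that signed it are valid, and an assumed-role session maxes out at 12 hours, so the 7-day presign expiry gets silently capped. One way to reach 24 hours is to sign with long-term IAM user credentials, which SigV4 allows for up to 7 days. A minimal sketch, with placeholder names and keys:
```
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Long-term IAM user credentials (placeholders): unlike role-session
	// credentials, these do not expire, so the URL lives out its full window.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:      aws.String("us-west-2"),
		Credentials: credentials.NewStaticCredentials("AKIA-EXAMPLE", "secret-key", ""),
	}))
	svc := s3.New(sess)

	req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String("my-bucket"), // placeholder
		Key:    aws.String("my-object"), // placeholder
	})
	urlStr, err := req.Presign(24 * time.Hour) // 24 h, no 12 h session cap
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(urlStr)
}
```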
I am the root user on my AWS account.
I have created a new bucket in my S3 account and have unblocked all public access.
I have uploaded an object into that bucket; inside is the index.html file. I click on that, open it, and click on the Object URL link.
The response I get is as follows. How do I resolve this, or do I just wait? Thanks
This XML file does not appear to have any style information associated with it. The document tree is shown below.
```
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>SVGNSSHFS3F1F9C3</RequestId>
  <HostId>ThL853r2VfHWQNBejWNv41mo6wXM3jDHOSTPOJu16Ct5VXrqrE5QsSgYEh14X6pXeyMHWxhH2KM=</HostId>
</Error>
```
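For what it's worth, unblocking public access only removes the guardrail; it does not grant access by itself. If the goal is a publicly readable index.html, the bucket also needs a policy allowing anonymous s3:GetObject. A minimal sketch, with the bucket name as a placeholder:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```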
Hi,
```
$ aws iot create-job-template --job-template-id 'test-job-template' --description 'test job template' --document-source 'https://s3.amazonaws.com/bucketName/iot-jobs-documents/testJobDocument.json'
```
This creates a job template in IoT Core. However, if I navigate to the template from the AWS console and click on the job file link (labelled testJobDocument.json), it tries to take me to `https://eu-north-1.console.aws.amazon.com/s3/object/s3.a?region=eu-north-1&prefix=bucketName/iot-jobs-documents/testJobDocument.json`, i.e. Amazon S3 > Buckets > s3.a > bucketName/iot-jobs-documents/testJobDocument.json, which I don't have permission to access. I'm not sure how or why the extra "s3.a" level appeared.
Is this expected behaviour?
Thanks, Gary
Hey, I am trying to set up cross-Region replication for S3 objects under a particular prefix with KMS enabled. I am getting the error (failure reason) SrcGetObjectNotPermitted. I have granted the s3:GetObjectVersionForReplication action in the IAM policy attached to the replication IAM role.
Can you please help!
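For reference, the source-side statements such a replication role typically needs when the source objects are SSE-KMS encrypted look roughly like the sketch below; the ARNs are placeholders, and the role additionally needs the usual list and replicate permissions on the source and destination buckets:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectVersionForReplication",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectVersionTagging"
      ],
      "Resource": "arn:aws:s3:::source-bucket/prefix/*"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:source-region:account-id:key/source-key-id"
    }
  ]
}
```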
I am logged into the AWS console as an administrator and am trying to use Athena to read files on S3 that I don't allow public access to, but it doesn't work. The bucket policy is as follows, and the workgroup is the primary Athena SQL one.
I have confirmed that the database is the one generated by default, that the Data Lake permissions grant All permissions to the IAM user used to log in to the console, and that I can open and download the S3 bucket's files.
The DDL query for CREATE TABLE, including the S3 LOCATION, succeeds, but when I run the SELECT statement I get:
"Permission denied on s3 path: (s3 url)
This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 444f5547-4c37-4e05-a4a7-d1cd67cb865d"
I think this is probably because the IAM role used by the Athena query I type in the AWS console is different from the IAM user used for login, but I don't know where to look for the Athena IAM user. (The Spark workgroup has an IAM role, but the primary Athena SQL one didn't.)
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "sample",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "IAM user login to console"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "s3 arn",
        "s3 arn/*"
      ]
    }
  ]
}
```
Regarding the answer:
1. About the output folder: the bucket policy has already been set.
2. The Glue Data Catalog policy is configured as follows.
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [ "my Iam user arn" ]
      },
      "Action": "glue:*",
      "Resource": "arn:aws:glue:ap-northeast-1:my id number:*"
    }
  ]
}
```
3. I confirmed that S3 is encrypted with Amazon S3 managed keys (SSE-S3).
Correction: it is actually encrypted with my KMS key, but the user and administrator key policies are attached to my IAM account.
The same error still happens in Athena on the AWS Management Console.
The S3 URL in the error message is the one I wanted to read from S3, not the output folder:
Permission denied on s3 path: (s3 url)
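If the objects are in fact SSE-KMS encrypted, the IAM user running the Athena query also needs decrypt permission on that key. A hypothetical extra IAM policy statement, with a placeholder key ARN:
```
{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:DescribeKey"
  ],
  "Resource": "arn:aws:kms:ap-northeast-1:account-id:key/key-id"
}
```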
Hi! I have a public AWS video file here: https://s3-external-1.amazonaws.com/media.twiliocdn.com/ACb2722d11b73d22b594c81d79aed6b8d2/23ff33d3428202c6a24e7a8c6e5f4140
It only opens on Safari and won't open on Chrome (which I desperately need). I've tried removing all my extensions and using incognito mode and even clearing my cache and cookies but nothing helps. Any ideas?
Somehow the following S3 buckets were created in the AWS account without a region, and we cannot delete or view them:

We can also see them from the CLI by running
```
aws s3 ls
```
but when we try to delete them we get the following:
```
aws s3 rb s3://manccs-sanda-xxxxxxxx
remove_bucket failed: s3://manccs-sanda-xxxxxxxx An error occurred (NoSuchBucket) when calling the DeleteBucket operation: The specified bucket does not exist
```
We are doing this as an account admin and have even tried using the root user, but get the same result. How do we get rid of these buckets?
Note: this is the same issue as [this](https://repost.aws/questions/QUQrSwX2PNSQKDb-y9MTibtA/i-find-a-bucket-without-aws-region-in-my-s-3-but-i-cannot-delete-it), but it has no answer.
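One way to narrow down whether these entries are real buckets or stale listing data is to probe each listed bucket with a HeadBucket call. A diagnostic sketch, assuming aws-sdk-go:
```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	out, err := svc.ListBuckets(&s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	// HeadBucket succeeds only if the bucket actually resolves; a NotFound
	// error here would match the NoSuchBucket seen on DeleteBucket.
	for _, b := range out.Buckets {
		_, err := svc.HeadBucket(&s3.HeadBucketInput{Bucket: b.Name})
		fmt.Printf("%s -> %v\n", *b.Name, err)
	}
}
```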