Questions tagged with Amazon Simple Storage Service
Can we configure mTLS when using the [S3 REST API](https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html)?
From looking at the documentation, I understand that the way to do this would be to put the call behind an API Gateway and have it handle the [mTLS part](https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html).
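If the API Gateway route is the one you end up taking, here is a minimal boto3 sketch of enabling mTLS on a custom domain; the domain name, certificate ARN, and truststore location are placeholders, and the proxy integration in front of S3 still has to be set up separately.
```
import boto3

apigw = boto3.client("apigateway")

# Hypothetical values -- replace with your own domain, ACM certificate,
# and the S3 location of a PEM truststore containing your client CA(s).
response = apigw.create_domain_name(
    domainName="s3-proxy.example.com",
    regionalCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/abc123",
    endpointConfiguration={"types": ["REGIONAL"]},
    securityPolicy="TLS_1_2",
    # Mutual TLS: API Gateway validates client certificates against this truststore.
    mutualTlsAuthentication={"truststoreUri": "s3://my-truststore-bucket/truststore.pem"},
)
print(response["domainName"])
```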
I am trying to find a way to create Athena queries that work with data from AWS Security Hub, such as the findings displayed within it. Athena's input data comes from S3. Is there a way to specify an S3 location that will receive the findings from AWS Security Hub, or is there already a location I should look into? Is there any other way to feed Security Hub information into Athena?
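As an illustration of the Athena side, here is a minimal boto3 sketch that queries a hypothetical table built over findings that have already been exported to S3 (for example via an EventBridge rule feeding a Kinesis Data Firehose delivery stream); the database, table, and bucket names are all assumptions.
```
import boto3

athena = boto3.client("athena")

# Hypothetical names: a Glue/Athena database "security_lake" with a table
# "securityhub_findings" pointing at the S3 prefix where the findings land.
query = """
SELECT title, severity.label AS severity, count(*) AS findings
FROM securityhub_findings
GROUP BY title, severity.label
ORDER BY findings DESC
LIMIT 20
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "security_lake"},
    # Athena writes its query results to this (hypothetical) S3 location.
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/securityhub/"},
)
print(execution["QueryExecutionId"])
```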
Hello,
For a while, I was simply storing the contents of my website in an S3 bucket and could access all pages via their full URLs just fine. I wanted to make my website more secure by adding an SSL certificate, so I created a CloudFront distribution pointing to my S3 bucket.
The site loads just fine, but if the user refreshes the page, they receive an AccessDenied page.
I have a policy on my S3 bucket that restricts access to the Origin Access Identity only, and index.html is set as the default root object.
I don't understand what I am missing.
Any help is much appreciated.
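One pattern that is sometimes relevant to this kind of refresh/AccessDenied behavior is mapping CloudFront 403 responses back to /index.html with a custom error response. The sketch below does that with boto3 against a hypothetical distribution ID; it is only an illustration, not a confirmed fix for the setup described above.
```
import boto3

cloudfront = boto3.client("cloudfront")
distribution_id = "E1234567890ABC"  # hypothetical distribution ID

# Fetch the current configuration together with its ETag (required for updates).
current = cloudfront.get_distribution_config(Id=distribution_id)
config = current["DistributionConfig"]

# Serve /index.html with a 200 whenever the origin returns a 403 for an
# unknown key, so non-root paths survive a browser refresh.
config["CustomErrorResponses"] = {
    "Quantity": 1,
    "Items": [
        {
            "ErrorCode": 403,
            "ResponsePagePath": "/index.html",
            "ResponseCode": "200",
            "ErrorCachingMinTTL": 10,
        }
    ],
}

cloudfront.update_distribution(
    Id=distribution_id,
    DistributionConfig=config,
    IfMatch=current["ETag"],
)
```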
```
{
    "Version": "2012-10-17",
    "Id": "PersonalizeS3BucketAccessPolicy",
    "Statement": [
        {
            "Sid": "PersonalizeS3BucketAccessPolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "personalize.amazonaws.com"
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::fashionrecommendationsystem",
                "arn:aws:s3:::fashionrecommendationsystem/*"
            ]
        }
    ]
}
```
This is the bucket policy I have attached to my S3 bucket.

Hi!
I would like to upload my files (documents, photos, and videos) to S3 Glacier Deep Archive. My question is: do folders count as objects? I want to minimize the cost, which is why I'm asking.
Also, what is the maximum file size I can upload to Deep Archive? I've found mixed information on this. I'd use FastGlacier to manage my vault. I am planning to upload my files bundled in zips.
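For reference, the S3 object size limit is 5 TB, and anything over 5 GB must go through multipart upload (which the high-level SDK transfer helpers handle automatically); prefixes ("folders") are not billed as separate objects unless you explicitly create empty folder-marker objects. Below is a minimal boto3 sketch of uploading a zip straight into the Deep Archive storage class; the bucket and file names are placeholders.
```
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Multipart upload kicks in automatically above this threshold, which is
# what you want for large zip archives.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024)

s3.upload_file(
    Filename="photos-2023.zip",      # local file (placeholder)
    Bucket="my-archive-bucket",      # placeholder bucket name
    Key="archives/photos-2023.zip",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
    Config=config,
)
```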
Template format error: Unresolved resource dependencies [VpcFlowLogBucket] in the Resources block of the template
I am getting the above error in my CloudFormation template when I use conditions while creating resources.
My use case: if the user supplies a specific parameter, I apply a condition to avoid creating an S3 bucket and instead use the existing bucket whose ARN the user provided.
```
AWSTemplateFormatVersion: "2010-09-09"
Description: CloudFormation stack for relaying AWS VPC flow logs for security analysis and storage.
Outputs:
  StackName:
    Description: The name of the stack deployed by this CloudFormation template.
    Value: !Ref "AWS::StackName"
Parameters:
  VpcIds:
    Description: The IDs of the VPCs for which flow logs will be relayed. VPC Flow Logs will be enabled for these VPCs.
    Type: List<AWS::EC2::VPC::Id>
  VpcFlowLogBucketArn:
    Type: String
    Description: (Optional) The ARN of an existing S3 bucket to use for VPC flow logs. If specified, VpcFlowLogDestination will be ignored.
  TrafficType:
    AllowedValues:
      - ACCEPT
      - REJECT
      - ALL
    Default: ALL
    Description: Whether to log only rejected or accepted traffic, or log all traffic. Logging all traffic (default) enables more security outcomes.
    Type: String
  OrgId:
    Description: Your account number.
    Type: Number
  RetentionInDays:
    Description: The number of days to retain AWS VPC Flow Logs in the S3 bucket. This is effectively the size of your recovery window if the flow of logs is interrupted.
    Type: Number
    Default: 3
Conditions:
  HasExpirationInDays: !Not [!Equals [!Ref RetentionInDays, 0]]
  UseExistingS3Bucket: !Equals [!Ref VpcFlowLogBucketArn, ""]
Resources:
  VpcFlowLogBucket:
    Type: "AWS::S3::Bucket"
    Condition: UseExistingS3Bucket
    Properties:
      BucketName: !Join
        - "-"
        - - aarmo-vpc-flow-bucket
          - !Ref OrgId
          - !Ref "AWS::StackName"
          - !Ref "AWS::Region"
      LifecycleConfiguration:
        Rules:
          - ExpirationInDays: !If [HasExpirationInDays, !Ref RetentionInDays, 1]
            Status: !If [HasExpirationInDays, Enabled, Disabled]
      NotificationConfiguration:
        QueueConfigurations:
          - Event: "s3:ObjectCreated:*"
            Queue: !GetAtt [MyQueue, Arn]
    DependsOn:
      - MyQueue
  VpcFlowLogBucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    Condition: UseExistingS3Bucket
    DependsOn:
      - VpcFlowLogBucket
    Properties:
      Bucket: !Ref VpcFlowLogBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement: # https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-s3.html#flow-logs-s3-permissions
          - Sid: AWSLogDeliveryWrite
            Effect: Allow
            Principal:
              Service: "delivery.logs.amazonaws.com"
            Action: "s3:PutObject"
            Resource: !Sub "${VpcFlowLogBucket.Arn}/AWSLogs/${AWS::AccountId}/*"
            Condition:
              StringEquals:
                "s3:x-amz-acl": "bucket-owner-full-control"
          - Sid: AWSLogDeliveryAclCheck
            Effect: Allow
            Principal:
              Service: "delivery.logs.amazonaws.com"
            Action: "s3:GetBucketAcl"
            Resource: !GetAtt "VpcFlowLogBucket.Arn"
  MyQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: "SampleQueue12345128"
  MyQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: sns.amazonaws.com
            Action:
              - sqs:SendMessage
              - sqs:DeleteMessage
              - sqs:ReceiveMessage
            Resource: "*"
      Queues:
        - Ref: MyQueue
```
What is the issue with the above CloudFormation template? I have tried debugging it multiple times but I'm still getting nowhere. Any help would be greatly appreciated!
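One low-effort way to narrow down template errors like this, before touching the resources themselves, is to run the template through the ValidateTemplate API; a minimal boto3 sketch, assuming the template is saved locally as template.yaml:
```
import boto3
from botocore.exceptions import ClientError

cloudformation = boto3.client("cloudformation")

# Load the template from disk (the path is an assumption).
with open("template.yaml") as f:
    template_body = f.read()

try:
    # ValidateTemplate reports unresolved Refs and malformed sections
    # without creating any resources.
    result = cloudformation.validate_template(TemplateBody=template_body)
    print("Template is valid. Parameters:",
          [p["ParameterKey"] for p in result["Parameters"]])
except ClientError as err:
    print("Validation failed:", err.response["Error"]["Message"])
```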
Hello,
After filling in a website form and submitting it, it triggers a 500 error on sendmail.json, and in the headers we have:
General
Response URL: https://xxxxxxxxxxx/sendmail.json
Request method: POST
Status code: 500
etc.
Response headers
age: 498
server: AmazonS3
x-cache: Error from cloudfront
What could be the issue, and how do I fix it, please?
Many thanks in advance
Hi,
I have an S3 bucket to which files are being uploaded by a Kafka sink connector. I am trying to set up a monitoring dashboard for this bucket. However, I could only find the `BucketSizeBytes` and `NumberOfObjects` metrics in CloudWatch, but not request metrics like `BytesUploaded` and `BytesDownloaded`. Metrics seem to be enabled by default, since `BucketSizeBytes` and `NumberOfObjects` are already being recorded. Is there any extra configuration I need to do to get request metrics reported in CloudWatch?
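Unlike the daily storage metrics, S3 request metrics have to be opted into per bucket with a metrics configuration; a minimal boto3 sketch (the bucket name is a placeholder):
```
import boto3

s3 = boto3.client("s3")

# Enabling request metrics (BytesUploaded, BytesDownloaded, AllRequests, ...)
# requires a metrics configuration; storage metrics exist without one.
s3.put_bucket_metrics_configuration(
    Bucket="my-kafka-sink-bucket",                # placeholder bucket name
    Id="EntireBucket",
    MetricsConfiguration={"Id": "EntireBucket"},  # no filter = whole bucket
)
```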
Thank you for your time.
Thank you,
Sreeni
Good afternoon folks,
We have an AWS EC2 instance configured as a Commvault VSA proxy, which needs to read from and write to multiple S3 buckets.
An S3 gateway endpoint has been configured as per best practice, so all communication between the EC2 instance and S3 stays on the AWS network.
We have noticed (and Commvault has confirmed) that EC2 write speeds to S3 appear to be limited to approximately 30 MB/s, compared to read speeds, which fluctuate between 300 MB/s and 800 MB/s.
Commvault have checked over our setup and confirmed that the performance issue is NOT a Commvault issue; it appears to be an S3 bottleneck.
Are there any S3 restrictions in terms of write performance?
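For what it's worth, one common pattern when upload throughput lags far behind download throughput is to increase multipart upload concurrency so several parts are in flight at once. A minimal boto3 sketch of tuning that (the file, bucket, and specific numbers are placeholders, not a confirmed fix for this setup):
```
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split the upload into 64 MB parts and keep 20 parts in flight at once;
# the right numbers depend on instance size and workload (these are guesses).
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=20,
)

s3.upload_file(
    Filename="/data/backup-chunk.bin",    # placeholder local path
    Bucket="my-commvault-target-bucket",  # placeholder bucket name
    Key="backups/backup-chunk.bin",
    Config=config,
)
```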
How can I fetch the S3 bucket name associated with a given RDS DB identifier using Java code?
I converted a CSV (from S3) to Parquet (to S3) using AWS Glue, and the file that was converted to Parquet was named randomly. How do I choose the name of the Parquet file that the CSV is converted to?

When I add data.parquet at the end of the S3 path (in the target) without a '/', AWS Glue creates a subfolder in the bucket named data.parquet instead of using it as the file name, and the new Parquet file is still created with a name like "run-1678983665978-part-block-0-r-00000-snappy.parquet".
Where should I give a name to the Parquet file?
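Glue's Spark-based writers generate the part-file names themselves, so a common workaround is to rename the output object after the job finishes with an S3 copy-and-delete; a minimal boto3 sketch, with the bucket and prefix as placeholders:
```
import boto3

s3 = boto3.client("s3")
bucket = "my-output-bucket"  # placeholder
prefix = "converted/"        # prefix the Glue job wrote to (placeholder)

# Find the part file the Glue job produced under the prefix.
objects = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)["Contents"]
part_key = next(o["Key"] for o in objects if o["Key"].endswith(".parquet"))

# Copy it to the desired name, then remove the original.
s3.copy_object(
    Bucket=bucket,
    CopySource={"Bucket": bucket, "Key": part_key},
    Key=f"{prefix}data.parquet",
)
s3.delete_object(Bucket=bucket, Key=part_key)
```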
The [customize-file-delivery-notifications-using-aws-transfer-family-managed-workflows](https://aws.amazon.com/blogs/storage/customize-file-delivery-notifications-using-aws-transfer-family-managed-workflows/) blog post says that AWS Transfer Family is a secure transfer service that enables you to transfer files into and out of AWS storage services.

Does this mean Transfer Family supports transferring files from S3 to external servers outside of AWS?
To give my use case for better understanding: I need to transfer large files (around 70-80 GB) to an external server using Akamai NetStorage.
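If the external server speaks SFTP, one route that may be worth checking is Transfer Family's SFTP connectors, which push objects from S3 to a remote SFTP endpoint. The sketch below assumes such a connector has already been created and uses placeholder IDs and paths.
```
import boto3

transfer = boto3.client("transfer")

# Assumes an SFTP connector (pointing at the external endpoint) already exists;
# the connector ID, bucket, and key below are placeholders.
response = transfer.start_file_transfer(
    ConnectorId="c-1234567890abcdef0",
    SendFilePaths=["/my-source-bucket/exports/large-file-80gb.bin"],
)
print(response["TransferId"])
```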