Questions tagged with Amazon Simple Storage Service

Assume a user connects via a WebSocket connection to a server, which serves a personalized TypeScript function based on a personalized JSON file. So when a user connects:

- the personalized JSON file is loaded from an S3 bucket (around 60-100 MB per user),
- when the user types, TypeScript/JavaScript/Python code is executed which returns a string reply, and the JSON-like data structure gets updated,
- when the user disconnects, the JSON gets persisted back to the S3 bucket.

In total, you can think of about 10,000 users, so 600 GB in total. The system should:

- spin up fast for a user,
- be very scalable given the number of users (so that we do not waste money), and
- have a global latency of a few tens of ms.

Is that possible? If so, what architecture seems the most fitting?
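
A minimal sketch of the load-on-connect / persist-on-disconnect half of this design, assuming a Python server with boto3 and a hypothetical `user-profiles` bucket (the code-execution and scaling layers are the open part of the question):

```
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "user-profiles"  # hypothetical bucket name

def on_connect(user_id: str) -> dict:
    # Pull the 60-100 MB personalized document when the socket opens.
    obj = s3.get_object(Bucket=BUCKET, Key=f"{user_id}.json")
    return json.loads(obj["Body"].read())

def on_disconnect(user_id: str, state: dict) -> None:
    # Persist the updated document when the socket closes.
    s3.put_object(
        Bucket=BUCKET,
        Key=f"{user_id}.json",
        Body=json.dumps(state).encode("utf-8"),
    )
```

Keeping the document in memory between these two calls is what makes spin-up time interesting: the S3 round trip for 60-100 MB dominates it.
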
1 answer · 0 votes · 13 views · asked 17 hours ago

I can't see a couple of my buckets. I am sure I have not deleted them, and they certainly contain important files. How can I find or recover them?
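
While investigating, a small boto3 sketch may help: it lists every bucket the current credentials can see (a "missing" bucket often lives in another account or behind a console filter) and checks CloudTrail's 90-day event history for `DeleteBucket` events:

```
import boto3

# Buckets are account-wide; if one is missing here, check other accounts.
for bucket in boto3.client("s3").list_buckets()["Buckets"]:
    print(bucket["Name"], bucket["CreationDate"])

# CloudTrail keeps 90 days of management events; look for deletions.
trail = boto3.client("cloudtrail")
resp = trail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "DeleteBucket"}
    ]
)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```
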
1 answer · 0 votes · 18 views · lernky · asked 21 hours ago

Hello, I am new to AWS, have started self-studying, and I have a very simple question. I made a bucket and, in the Permissions tab, I unchecked the Block Public Access items for the bucket. Now I want to make the files inside the bucket public, but I cannot find 'Make public' under the Actions button. I am following a video training that is two years old and does not reflect the newer interface. How can I make my files in the bucket public? Thanks for your help. Maryam S
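
For context, newer buckets have ACLs disabled by default, which hides the ACL-based 'Make public' action; the usual alternative is a bucket policy granting public read. A minimal boto3 sketch with a hypothetical bucket name (Block Public Access must already be disabled, as described above):

```
import json
import boto3

bucket = "my-example-bucket"  # hypothetical; use your bucket name

# Grant anonymous read access to every object in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```
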
1 answer · 0 votes · 12 views · asked a day ago

Can we configure mTLS when using the [S3 REST API](https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html)? From looking at the documentation, I understand that the way to do this would be to put the calls behind an API Gateway service and have it manage the [mTLS part](https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html).
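
For what it's worth, a boto3 sketch of the API Gateway side of that approach, assuming a hypothetical custom domain, ACM certificate, and a PEM truststore of client CA certificates already uploaded to S3 (mTLS on REST APIs is configured on the custom domain name):

```
import boto3

apigw = boto3.client("apigateway")

# All names/ARNs below are hypothetical placeholders.
apigw.create_domain_name(
    domainName="s3-proxy.example.com",
    regionalCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/abcd1234",
    endpointConfiguration={"types": ["REGIONAL"]},
    securityPolicy="TLS_1_2",
    mutualTlsAuthentication={
        "truststoreUri": "s3://my-truststore-bucket/truststore.pem"
    },
)
```

The API behind that domain would then proxy the calls through to S3.
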
0 answers · 0 votes · 7 views · asked a day ago

I am trying to find a way to create Athena queries that handle information from AWS Security Hub, such as the 'Findings' displayed within it. Athena's input data comes from S3. Is there a way to specify a location in S3 that will receive the findings from AWS Security Hub, or is there already a location I should try looking into? Is there any other way to feed Security Hub information into Athena?
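
One commonly described pattern (hypothetical names below) is an EventBridge rule that relays Security Hub findings to a Kinesis Data Firehose delivery stream writing to S3, which Athena can then query. A boto3 sketch of the EventBridge half:

```
import boto3

events = boto3.client("events")

# Match findings as Security Hub imports them.
events.put_rule(
    Name="securityhub-findings-to-s3",
    EventPattern=(
        '{"source": ["aws.securityhub"],'
        ' "detail-type": ["Security Hub Findings - Imported"]}'
    ),
)

# Deliver matched events to a Firehose stream that lands in your Athena bucket.
events.put_targets(
    Rule="securityhub-findings-to-s3",
    Targets=[
        {
            "Id": "firehose-target",
            "Arn": "arn:aws:firehose:us-east-1:111122223333:deliverystream/securityhub-findings",
            "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeToFirehoseRole",
        }
    ],
)
```
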
1 answer · 0 votes · 16 views · asked 2 days ago

Hello, for a while I was simply storing the contents of my website in an S3 bucket and could access all pages via the full URL just fine. I wanted to make my website more secure by adding an SSL certificate, so I created a CloudFront distribution pointing to my S3 bucket. The site loads just fine, but if the user tries to refresh the page, they receive an AccessDenied page. I have a policy on my S3 bucket that restricts access to the Origin Access Identity only, and index.html is set as my default root object. I don't understand what I am missing. Any help is much appreciated.
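
If the site is a single-page app, a frequently cited workaround is a CloudFront custom error response that returns /index.html when S3 answers 403 for a client-side route. A boto3 sketch with a hypothetical distribution ID:

```
import boto3

cf = boto3.client("cloudfront")
dist_id = "E1234EXAMPLE"  # hypothetical distribution ID

resp = cf.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]

# S3 returns 403 (not 404) for unknown keys when access is via OAI only,
# so map 403 back to the SPA entry point.
config["CustomErrorResponses"] = {
    "Quantity": 1,
    "Items": [
        {
            "ErrorCode": 403,
            "ResponsePagePath": "/index.html",
            "ResponseCode": "200",
            "ErrorCachingMinTTL": 10,
        }
    ],
}

cf.update_distribution(Id=dist_id, IfMatch=resp["ETag"], DistributionConfig=config)
```
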
0 answers · 0 votes · 12 views · Maan · asked 2 days ago

```
{
  "Version": "2012-10-17",
  "Id": "PersonalizeS3BucketAccessPolicy",
  "Statement": [
    {
      "Sid": "PersonalizeS3BucketAccessPolicy",
      "Effect": "Allow",
      "Principal": {
        "Service": "personalize.amazonaws.com"
      },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::fashionrecommendationsystem",
        "arn:aws:s3:::fashionrecommendationsystem/*"
      ]
    }
  ]
}
```

This is the bucket policy I have attached to my S3 bucket. ![But it is still giving the error message](/media/postImages/original/IMuVCFuLA3Qpe65lSfVyCkcg)
1 answer · 0 votes · 22 views · asked 2 days ago

Hi! I would like to upload my files (documents, photos and videos) to S3 Deep Archive. My question is: do folders count as objects? I want to minimize the cost; that is why I'm asking. Also, what is the maximum file size I can upload to Deep Archive? I've found mixed information on this matter. I'd use FastGlacier to manage my vault. I am planning to upload my files bulked in zips.
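
As an aside, objects only land in Deep Archive if the storage class is set on each upload. A minimal boto3 sketch with hypothetical bucket and file names:

```
import boto3

# Upload a zip directly into the Glacier Deep Archive storage class.
boto3.client("s3").upload_file(
    "photos-2023.zip",
    "my-archive-bucket",          # hypothetical bucket
    "backups/photos-2023.zip",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)
```
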
1 answer · 0 votes · 19 views · Agnes · asked 2 days ago

Template format error: Unresolved resource dependencies [VpcFlowLogBucket] in the Resources block of the template

I am getting the above error in my CloudFormation template when I use conditions while creating resources. I have a use case where, if the user enters a specific parameter, I apply a condition to avoid creating an S3 bucket and instead use the existing one whose ARN the user provided.

```
AWSTemplateFormatVersion: "2010-09-09"
Description: CloudFormation stack for relaying AWS VPC flow logs for security analysis and storage.
Outputs:
  StackName:
    Description: The name of the stack deployed by this CloudFormation template.
    Value: !Ref "AWS::StackName"
Parameters:
  VpcIds:
    Description: The IDs of the VPCs for which flow logs will be relayed. VPC Flow Logs will be enabled for these VPCs.
    Type: List<AWS::EC2::VPC::Id>
  VpcFlowLogBucketArn:
    Type: String
    Description: (Optional) The ARN of an existing S3 bucket to use for VPC flow logs. If specified, VpcFlowLogDestination will be ignored.
  TrafficType:
    AllowedValues:
      - ACCEPT
      - REJECT
      - ALL
    Default: ALL
    Description: Whether to log only rejected or accepted traffic, or log all traffic. Logging all traffic (default) enables more security outcomes.
    Type: String
  OrgId:
    Description: Your account number.
    Type: Number
  RetentionInDays:
    Description: The number of days to retain AWS VPC Flow Logs in the S3 bucket. This is effectively the size of your recovery window if the flow of logs is interrupted.
    Type: Number
    Default: 3
Conditions:
  HasExpirationInDays: !Not [!Equals [!Ref RetentionInDays, 0]]
  UseExistingS3Bucket: !Equals [!Ref VpcFlowLogBucketArn, ""]
Resources:
  VpcFlowLogBucket:
    Type: "AWS::S3::Bucket"
    Condition: UseExistingS3Bucket
    Properties:
      BucketName: !Join
        - "-"
        - - aarmo-vpc-flow-bucket
          - !Ref OrgId
          - !Ref "AWS::StackName"
          - !Ref "AWS::Region"
      LifecycleConfiguration:
        Rules:
          - ExpirationInDays: !If [HasExpirationInDays, !Ref RetentionInDays, 1]
            Status: !If [HasExpirationInDays, Enabled, Disabled]
      NotificationConfiguration:
        QueueConfigurations:
          - Event: "s3:ObjectCreated:*"
            Queue: !GetAtt [MyQueue, Arn]
    DependsOn:
      - MyQueue
  VpcFlowLogBucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    Condition: UseExistingS3Bucket
    DependsOn:
      - VpcFlowLogBucket
    Properties:
      Bucket: !Ref VpcFlowLogBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-s3.html#flow-logs-s3-permissions
          - Sid: AWSLogDeliveryWrite
            Effect: Allow
            Principal:
              Service: "delivery.logs.amazonaws.com"
            Action: "s3:PutObject"
            Resource: !Sub "${VpcFlowLogBucket.Arn}/AWSLogs/${AWS::AccountId}/*"
            Condition:
              StringEquals:
                "s3:x-amz-acl": "bucket-owner-full-control"
          - Sid: AWSLogDeliveryAclCheck
            Effect: Allow
            Principal:
              Service: "delivery.logs.amazonaws.com"
            Action: "s3:GetBucketAcl"
            Resource: !GetAtt "VpcFlowLogBucket.Arn"
  MyQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: "SampleQueue12345128"
  MyQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: sns.amazonaws.com
            Action:
              - sqs:SendMessage
              - sqs:DeleteMessage
              - sqs:RecieveMessage
            Resource: "*"
      Queues:
        - Ref: MyQueue
```

What is the issue with the above CloudFormation template? I have tried debugging it multiple times but am still getting nowhere. Any help would be greatly appreciated!
0 answers · 0 votes · 21 views · asked 2 days ago

Hello, after filling in a website form and sending it, it triggers a sendmail.json 500 error, and in the headers we have:

General
Response URL: https://xxxxxxxxxxx/sendmail.json
Request method: POST
Status code: 500

Response headers
age: 498
server: AmazonS3
x-cache: Error from cloudfront

What could be the issue and how do I fix it, please? Many thanks in advance.
0 answers · 0 votes · 6 views · Abdel · asked 2 days ago

Hi, I have an S3 bucket to which files are being uploaded by a Kafka sink connector. I am trying to set up a monitoring dashboard for this S3 bucket. However, I could only find the `BucketSizeBytes` and `NumberOfObjects` metrics in CloudWatch, but not request metrics like `BytesUploaded` and `BytesDownloaded`. Metrics seem to be enabled by default, since `BucketSizeBytes` and `NumberOfObjects` are already being recorded. Is there any extra configuration I need to do to get request metrics reported in CloudWatch? Thank you for your time. Sreeni
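
Unlike the daily storage metrics, request metrics such as `BytesUploaded` and `BytesDownloaded` are opt-in per bucket, via a metrics configuration. A minimal boto3 sketch with a hypothetical bucket name:

```
import boto3

# Enable request metrics for the whole bucket (filters can scope by prefix/tag).
boto3.client("s3").put_bucket_metrics_configuration(
    Bucket="my-kafka-sink-bucket",  # hypothetical bucket name
    Id="EntireBucket",
    MetricsConfiguration={"Id": "EntireBucket"},
)
```
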
1 answer · 0 votes · 16 views · Sreeni · asked 2 days ago

Good afternoon folks, we have an AWS EC2 instance configured as a Commvault VSA proxy which needs to read from and write to multiple S3 buckets. An S3 gateway endpoint has been configured as per best practice so that all communications between the EC2 instance and S3 stay on the AWS network. We have noticed (and Commvault has confirmed) that the EC2 write speed to S3 appears to be limited to approx. 30 MB/s, compared to the read speeds, which fluctuate between 300 MB/s and 800 MB/s. Commvault have checked over our setup and confirmed that our performance issue is NOT a Commvault issue; it appears to be an S3 bottleneck. Are there any S3 restrictions in terms of write performance?
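
Not an answer to the cap itself, but single-stream writes to S3 often improve with multipart parallelism. A boto3 sketch that raises upload concurrency, with hypothetical bucket and file names:

```
import boto3
from boto3.s3.transfer import TransferConfig

# Split uploads into 64 MiB parts and push 20 parts in parallel.
config = TransferConfig(
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=20,
)
boto3.client("s3").upload_file(
    "backup.bin", "my-backup-bucket", "backups/backup.bin", Config=config
)
```
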
1 answer · 0 votes · 15 views · asked 3 days ago