Questions tagged with Amazon Simple Storage Service


Browse through the questions and answers listed below or filter and sort to narrow down your results.

Is there any way to automatically move all WorkMail email attachments to an S3 bucket and, of course, get a link to each one? If yes, how can I do that? (A sketch follows below.) Thanks and regards.
1
answers
0
votes
4
views
asked an hour ago
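As far as I know there is no built-in WorkMail-to-S3 attachment export, but a Lambda function invoked by a WorkMail email flow rule can read each message, copy its attachments to a bucket, and produce presigned links. A minimal sketch, assuming an asynchronous flow rule, a hypothetical bucket name, and a Lambda role with `workmailmessageflow:GetRawMessageContent` and `s3:PutObject`:
```
import email
import boto3

s3 = boto3.client("s3")
wm = boto3.client("workmailmessageflow")

BUCKET = "workmail-attachments"   # hypothetical bucket name


def handler(event, context):
    """Asynchronous WorkMail email flow rule hands the Lambda a message ID."""
    msg_id = event["messageId"]
    raw = wm.get_raw_message_content(messageId=msg_id)["messageContent"].read()
    message = email.message_from_bytes(raw)

    links = []
    for part in message.walk():
        filename = part.get_filename()
        if not filename:
            continue  # skip non-attachment MIME parts
        key = f"{msg_id}/{filename}"
        s3.put_object(Bucket=BUCKET, Key=key, Body=part.get_payload(decode=True))
        links.append(
            s3.generate_presigned_url(
                "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=86400
            )
        )
    print(links)  # presigned links valid for 24 hours
```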
Hi AWS, I am trying to impose a condition on the S3 `BucketEncryption` property to choose between a customer managed key (SSE-KMS) and an AWS managed key (SSE-S3). The template is:
```
# version: 1.0
AWSTemplateFormatVersion: "2010-09-09"
Description: Create standardized S3 bucket using CloudFormation Template
Parameters:
  BucketName:
    Type: String
    Description: "Name of the S3 bucket"
  KMSKeyArn:
    Type: String
    Description: "KMS Key Arn to encrypt S3 bucket"
    Default: ""
  SSEAlgorithm:
    Type: String
    Description: "Encryption algorithm for KMS"
    AllowedValues:
      - aws:kms
      - AES256
Conditions:
  KMSKeysProvided: !Not [!Equals [!Ref KMSKeyArn, ""]]
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      BucketName: !Ref BucketName
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - !If
            - KMSKeysProvided
            - ServerSideEncryptionByDefault:
                SSEAlgorithm: !Ref SSEAlgorithm
                KMSMasterKeyID: !Ref KMSKeyArn
              BucketKeyEnabled: true
            - !Ref "AWS::NoValue"
```
When I select `AES256` as the SSEAlgorithm I receive the error **Property ServerSideEncryptionConfiguration cannot be empty**. I know `KMSMasterKeyID` should not be present when the SSEAlgorithm is `AES256`, but I am not sure how to get rid of this error. Please help.
2
answers
0
votes
10
views
asked 6 hours ago
I set up AdministratorAccess for my role; this is a top-level policy that lets the role use all services, in particular AWS Glue. I want to create a crawler to build an ETL pipeline and load data into a database in the AWS Glue Data Catalog, but I am stuck on a 400 access-denied error. I tried many things (a sketch follows below):
- changed the credit card and set it as the default
- added permissions many times, still failed.
0
answers
0
votes
11
views
asked a day ago
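An access-denied error when creating a crawler is often about the crawler's own IAM service role rather than the signed-in user: the crawler assumes a separate role that needs the AWSGlueServiceRole managed policy plus read access to the S3 data path, so AdministratorAccess on the console role alone is not enough. A minimal boto3 sketch under that assumption; every name below is hypothetical:
```
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # region is an assumption

# The crawler runs under its own service role, not the console user's role.
# "AWSGlueServiceRole-demo" is a hypothetical role that would need the
# AWSGlueServiceRole managed policy plus s3:GetObject/ListBucket on the path.
glue.create_crawler(
    Name="demo-crawler",                           # hypothetical name
    Role="AWSGlueServiceRole-demo",                # hypothetical service role
    DatabaseName="demo_catalog_db",                # hypothetical catalog database
    Targets={"S3Targets": [{"Path": "s3://demo-bucket/raw/"}]},  # hypothetical path
)
glue.start_crawler(Name="demo-crawler")
```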
Hi, I'm working on a Django file upload project and this issue keeps popping up:
> An error occurred (SignatureDoesNotMatch) when calling the CreateMultipartUpload operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.
This is my settings.py:
```
AWS_STORAGE_BUCKET_NAME = 'bucket-name'
AWS_S3_REGION_NAME = 'region-name'
```
My .env:
```
AWS_ACCESS_KEY_ID='Access_key_ID'
AWS_SECRET_ACCESS_KEY='Secret_Key_ID'
```
A detailed, step-by-step procedure to solve this issue would be highly helpful, as I'm a complete beginner with AWS (I started this project two days ago). Thank you. (A sketch follows below.)
0
answers
0
votes
13
views
asked a day ago
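SignatureDoesNotMatch on CreateMultipartUpload usually points at the credentials or signing setup rather than Django itself: a mistyped or rotated secret key, stray quotes or whitespace copied into the .env values, or a region/signature-version mismatch. A small boto3 check that takes Django out of the loop, assuming hypothetical bucket and region values:
```
import boto3
from botocore.config import Config

# Hypothetical values; replace with the real bucket name and its actual region.
BUCKET = "bucket-name"
REGION = "eu-west-2"

# Force Signature Version 4 and the bucket's own region.
s3 = boto3.client("s3", region_name=REGION, config=Config(signature_version="s3v4"))

# 1) Confirm which credentials are actually being picked up.
print(boto3.client("sts").get_caller_identity()["Arn"])

# 2) Confirm the bucket's real region matches REGION.
print(s3.get_bucket_location(Bucket=BUCKET))

# 3) Try the same multipart operation that fails through Django.
upload = s3.create_multipart_upload(Bucket=BUCKET, Key="signature-test.txt")
s3.abort_multipart_upload(Bucket=BUCKET, Key="signature-test.txt", UploadId=upload["UploadId"])
print("multipart upload signed correctly")
```
If this script succeeds with the same keys, the problem is likely in how the .env values reach Django (for example, quote characters being read as part of the secret).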
Hi, I'm trying to analyze a multi-page PDF using Textract and the `start_document_analysis` API. I understand that the document I'm analyzing must be present in an S3 bucket. However, when calling this function I receive the following error message:
```
InvalidS3ObjectException: An error occurred (InvalidS3ObjectException) when calling the StartDocumentAnalysis operation: Unable to get object metadata from S3. Check object key, region and/or access permissions.
```
I've verified that the bucket name and key are correct, and the document works in the test console, which leads me to think this is related to permissions. Here is my test script (note: I am running this from my local computer, NOT Lambda):
```
import boto3

session = boto3.Session(profile_name="default")
s3 = session.client("s3")
tx = session.client("textract")

doc = "test.pdf"
bucket = "test"

s3.upload_file(doc, bucket, doc)

resp = tx.start_document_analysis(
    DocumentLocation={
        "S3Object": {
            "Bucket": bucket,
            "Name": doc
        }
    },
    FeatureTypes=["TABLES"]
)
```
How do I configure my bucket to allow access from Textract? Thanks. (A sketch follows below.)
2
answers
0
votes
27
views
danem
asked a day ago
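InvalidS3ObjectException from StartDocumentAnalysis is frequently a region mismatch rather than a bucket policy problem: the Textract client has to be created in the same region as the bucket that holds the object. A hedged variation of the script above that looks up the bucket's region first (profile and names reused from the question; the real bucket name is assumed to differ):
```
import boto3

session = boto3.Session(profile_name="default")
bucket = "test"     # from the question
doc = "test.pdf"

# Find out where the bucket actually lives; us-east-1 is reported as None.
location = session.client("s3").get_bucket_location(Bucket=bucket)
region = location["LocationConstraint"] or "us-east-1"

# Create the Textract client in the bucket's region so it can read the object.
tx = session.client("textract", region_name=region)

resp = tx.start_document_analysis(
    DocumentLocation={"S3Object": {"Bucket": bucket, "Name": doc}},
    FeatureTypes=["TABLES"],
)
print(resp["JobId"])
```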
Hello, I store data in S3 as part of Amazon Data Exchange products. I want to create an API product so that subscribers can pull my data in CSV or JSON format directly from the bucket. The bucket contains multiple CSVs per day. Do I need a Lambda function fronting an API in API Gateway that reads the contents of the files first, or can I simply create an API that parses the files and returns data to the subscribers? I would then package that API up into a "data product". Ideally the data would be returned as JSON by default, but subscribers should also be able to pull it as CSV. I also need to give users the ability to structure a request payload (via SDK, CLI, or some other IDE) in which they specify various fields, date ranges, etc. Thank you. (A sketch follows below.)
0
answers
0
votes
15
views
asked 2 days ago
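One common pattern for this is API Gateway in front of a Lambda function that reads the requested CSV object from S3, applies the caller's filters, and returns JSON or CSV. A minimal proxy-integration sketch; the bucket name, key layout, and query parameters are all assumptions for illustration:
```
import csv
import io
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "my-dataexchange-bucket"   # hypothetical bucket name


def handler(event, context):
    """API Gateway Lambda proxy handler, e.g. GET /data?date=2024-01-01&format=csv"""
    params = event.get("queryStringParameters") or {}
    date = params.get("date", "2024-01-01")   # hypothetical per-day key layout
    fmt = params.get("format", "json")

    obj = s3.get_object(Bucket=BUCKET, Key=f"daily/{date}.csv")
    body = obj["Body"].read().decode("utf-8")

    if fmt == "csv":
        return {"statusCode": 200,
                "headers": {"Content-Type": "text/csv"},
                "body": body}

    rows = list(csv.DictReader(io.StringIO(body)))
    return {"statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(rows)}
```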
I have 5 terabytes of data that I want to back up to S3 Glacier Deep Archive. I hope to never download it; I just want to archive it in the cloud in addition to physical disks. Is there a way to upload it directly via S3? (A sketch follows below.)
1
answers
0
votes
10
views
asked 2 days ago
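Glacier Deep Archive is an S3 storage class, so data can be uploaded straight into it by setting the storage class on the upload; no separate Glacier vault is involved. A minimal boto3 sketch with placeholder names (for 5 TB, `aws s3 sync --storage-class DEEP_ARCHIVE` or a Snowball device may be more practical):
```
import boto3

s3 = boto3.client("s3")

# Placeholder names; upload_file handles multipart upload and retries.
s3.upload_file(
    Filename="backup/archive-part-001.tar",
    Bucket="my-deep-archive-bucket",
    Key="backups/archive-part-001.tar",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)
```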
When I execute the command:
```
aws ec2 export-image --image-id ami-04f516c --disk-image-format vmdk --s3-export-location S3Bucket=ami-export
```
I get the following error:
> An error occurred (InvalidParameter) when calling the ExportImage operation: Insufficient permissions - please verify bucket ownership and write permissions on the bucket. Bucket: ami-export
I couldn't change the permissions. Can someone help me? (A sketch follows below.)
3
answers
0
votes
8
views
asked 2 days ago
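ExportImage needs the destination bucket to exist in the same region as the call, be owned by the calling account, and carry an ACL grant for the VM Import/Export service (the exact grantee is region-specific, so check the EC2 VM Import/Export documentation rather than this sketch). A hedged diagnostic sketch that only inspects ownership, region, and the current ACL:
```
import boto3

bucket = "ami-export"
s3 = boto3.client("s3")

# Who owns the bucket and which grants does it carry today?
acl = s3.get_bucket_acl(Bucket=bucket)
print("Owner:", acl["Owner"])
for grant in acl["Grants"]:
    print("Grant:", grant["Grantee"], grant["Permission"])

# The bucket must be in the same region as the export-image call.
print("Region:", s3.get_bucket_location(Bucket=bucket)["LocationConstraint"] or "us-east-1")

# And the caller must belong to the bucket-owning account.
print("Caller account:", boto3.client("sts").get_caller_identity()["Account"])
```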
Hello, I am transferring Monitron sensor data from an S3 bucket into a self-made app (visualizing vibration and temperature in a different way than the Monitron app does). How can I store the Monitron data for longer? Right now I can only fetch the last 24 hours. Can I extend it to 7 days?
0
answers
0
votes
1
views
asked 2 days ago
Hello all, I've been trying to set up a Multi-Region Access Point (MRAP) for S3. The basic problem is that I have users in Asia whom I'm trying to get better performance for.
The simple part: I've created two buckets and put an HTML file in each of them, named simply us-east-1 and ap-south-1. Initially those were private-access only, but for the purpose of getting anything working they are now public. They are set up in an MRAP which, for now, is not replicating. I set up a CloudFront distribution and pointed it at the MRAP, but I only ever get errors.
https://corridor-ap-south-1.s3.ap-south-1.amazonaws.com/test/region-test/region.html - ap-south-1 html
https://corridor-cdn.s3.amazonaws.com/test/region-test/region.html - us-east-1 html
MRAP alias: mbzcc59bo9dy4.mrap
MRAP access point: https://mbzcc59bo9dy4.mrap.accesspoint.s3-global.amazonaws.com/test/region-test/region.html
The error is: The authorization mechanism you have provided is not supported. Please use Signature Version 4. I hope I'm wrong, but is there really a world where I have to put a signature on every object request? Setting up a Lambda to do this for all my object requests seems like a complete waste of money. (A sketch follows below.)
0
answers
0
votes
11
views
asked 2 days ago
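Requests through a Multi-Region Access Point do have to be signed with Signature Version 4A (the multi-region variant); anonymous access via the MRAP alias is not supported, which is what the error is saying, and as far as I know CloudFront does not sign origin requests that way on its own, hence the Lambda@Edge signing pattern you are worried about. From code, recent boto3 with the AWS CRT installed (`pip install "botocore[crt]"`) signs MRAP requests automatically when the MRAP ARN is used as the bucket. A sketch with a placeholder account ID:
```
import boto3

# Requires the CRT extras for SigV4A signing: pip install "botocore[crt]"
s3 = boto3.client("s3")

# The MRAP is addressed by its ARN; the account ID here is a placeholder.
mrap_arn = "arn:aws:s3::123456789012:accesspoint/mbzcc59bo9dy4.mrap"

obj = s3.get_object(Bucket=mrap_arn, Key="test/region-test/region.html")
print(obj["Body"].read()[:200])
```
For plain public content served through CloudFront, pointing the distribution at the individual bucket endpoints may be the simpler route.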
I tried the command below, but it does not behave the way I want: if the retention tag is already applied it should return the version ID, and if it is not applied it should apply the tag to the S3 object. Please help me write the shell script. (A sketch follows below.)
```
aws s3api get-object-tagging --bucket your-bucket --key your-object-key --query 'TagSet[?Key==`retention` && Value==`10yearsretention` || Value==`6yearsretention`]' >/dev/null 2>> error.log || aws s3api put-object-tagging --bucket your-bucket --key your-object-key --tagging 'TagSet=[{Key=retention,Value=10yearsretention}]' >> error.log
```
The command above is not working properly: the put-object-tagging command works on its own, but the two commands do not work correctly when combined. When I run just the get-object-tagging part, I only get `[]`:
```
aws s3api get-object-tagging --bucket your-bucket --key your-object-key --query 'TagSet[?Key==`retention` && Value==`10yearsretention` || Value==`6yearsretention`]'
```
2
answers
0
votes
16
views
asked 2 days ago
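Part of the problem is that `get-object-tagging` exits with status 0 even when the `--query` filter matches nothing (it just prints `[]`), so the `||` branch after it never fires. If a Python/boto3 script is acceptable instead of shell, a hedged sketch with the same placeholder bucket and key:
```
import boto3

s3 = boto3.client("s3")
bucket = "your-bucket"        # placeholder, as in the question
key = "your-object-key"       # placeholder, as in the question

wanted_values = {"10yearsretention", "6yearsretention"}

resp = s3.get_object_tagging(Bucket=bucket, Key=key)
has_retention = any(
    tag["Key"] == "retention" and tag["Value"] in wanted_values
    for tag in resp["TagSet"]
)

if has_retention:
    # Tag already present; VersionId is returned for versioned buckets.
    print("retention tag already applied, version:", resp.get("VersionId"))
else:
    # NOTE: put_object_tagging replaces the whole tag set, so existing tags
    # are carried over here before the retention tag is appended.
    s3.put_object_tagging(
        Bucket=bucket,
        Key=key,
        Tagging={"TagSet": resp["TagSet"] + [{"Key": "retention", "Value": "10yearsretention"}]},
    )
    print("applied retention tag")
```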
I am facing an issue when calculating the size of an S3 bucket. The bucket metrics show 54 TB, but when I calculate the total size of the bucket through the API/console it shows only 1 TB. We haven't enabled versioning. The cost is also calculated on the basis of 54 TB. Please help me with this. The bucket is used for data lake operations and there are a lot of read/write/delete operations happening over time. (A sketch follows below.)
2
answers
0
votes
18
views
Cfr
asked 2 days ago
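The storage metric counts every billed byte, while summing up a listing only counts current objects, so on a bucket with heavy write/delete churn the usual suspect is incomplete multipart uploads, whose parts are billed but invisible to a normal listing. A hedged sketch that totals those leftover parts (placeholder bucket name); if this is the cause, a lifecycle rule with AbortIncompleteMultipartUpload cleans them up going forward:
```
import boto3

s3 = boto3.client("s3")
bucket = "my-datalake-bucket"   # placeholder bucket name

total_bytes = 0
for page in s3.get_paginator("list_multipart_uploads").paginate(Bucket=bucket):
    for upload in page.get("Uploads", []):
        # Sum the size of every part already uploaded for this abandoned upload.
        parts_pages = s3.get_paginator("list_parts").paginate(
            Bucket=bucket, Key=upload["Key"], UploadId=upload["UploadId"]
        )
        for parts_page in parts_pages:
            total_bytes += sum(p["Size"] for p in parts_page.get("Parts", []))

print(f"Incomplete multipart upload data: {total_bytes / 1024**4:.2f} TiB")
```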