Questions tagged with Amazon Simple Storage Service

RDS Backup & Restore SP Failing with Error - Provided Token Is Expired

We have had a scheduled, daily automatic backup of our SQL Server database on RDS to an S3 bucket for at least the last 6 years, and it had been working fine. Suddenly it stopped working: we don't see any backup in our S3 bucket since 24th March. Upon diagnosing the problem, we realized it has been failing since then. The steps and the error are:

```
-- STEP 1
exec msdb.dbo.rds_restore_database @restore_db_name='RestoreDbFromS3', @s3_arn_to_restore_from='arn:aws:s3:::awsbucketName/SqlServerDb.bak';

-- STEP 2
exec msdb.dbo.rds_task_status @task_id=7;
```

The response indicates an error with the following task description:

```
[2022-05-28 12:51:22.030] Task execution has started.
[2022-05-28 12:51:22.237] Aborted the task because of a task failure or an overlap with your preferred backup window for RDS automated backup.
[2022-05-28 12:51:22.240] Task has been aborted
[2022-05-28 12:51:22.240] The provided token has expired.
```

We have studied a lot to identify the root cause and a solution but could not find anything accurately relevant. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html#SQLServer.Procedural.Importing.Native.Troubleshooting lists troubleshooting options per error response, but it does not include the error response we are getting.

Note: between 25th and 26th March, our AWS instance was suspended for a couple of hours due to delayed payment of the monthly invoice; we restored it quickly. Everything else on the same AWS account has been working fine since then, but we just found out that the DB backup task has been impacted: the last successful backup in the S3 bucket is dated 24th March. We suspect that some token expired upon the account suspension, but we are unable to identify which one and how to restore it. Help, assistance and guidance would be much appreciated.
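For diagnosis, a minimal boto3 sketch (the instance identifier and region are placeholders) that checks whether the DB instance's option group still carries the SQLSERVER_BACKUP_RESTORE option and which IAM role ARN it points at; native backup/restore to S3 depends on that option and its role, so a role or option lost around the suspension window would show up here.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

# Hypothetical instance identifier -- replace with the real one.
instance = rds.describe_db_instances(DBInstanceIdentifier="my-sqlserver-db")["DBInstances"][0]
option_group_name = instance["OptionGroupMemberships"][0]["OptionGroupName"]

options = rds.describe_option_groups(OptionGroupName=option_group_name)["OptionGroupsList"][0]["Options"]
for opt in options:
    if opt["OptionName"] == "SQLSERVER_BACKUP_RESTORE":
        # The option settings include the IAM role ARN used for native backup/restore.
        for setting in opt.get("OptionSettings", []):
            print(setting["Name"], "=", setting["Value"])
```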
answers: 0 · votes: 0 · views: 1 · asked 15 minutes ago

What is the relationship between AWS Config retention period and AWS S3 Lifecycle policy?

I found here: https://aws.amazon.com/blogs/mt/configuration-history-configuration-snapshot-files-aws-config/ that "AWS Config delivers three types of configuration files to the S3 bucket": configuration history (a collection of the configuration items for a given resource over any time period), configuration snapshot, and OversizedChangeNotification.

However, this page: https://docs.aws.amazon.com/ja_jp/config/latest/developerguide/delete-config-data-with-retention-period.html only says that the retention period deletes "ConfigurationItems" (a configuration item represents a point-in-time view of the various attributes of a supported AWS resource that exists in your account).

And in https://docs.aws.amazon.com/config/latest/developerguide/config-concepts.html#config-history: "The components of a configuration item include metadata, attributes, relationships, current configuration, and related events. AWS Config creates a configuration item whenever it detects a change to a resource type that it is recording."

What I am wondering:

- Is ConfigurationItems a subset of the configuration history?
- Is what gets saved to S3 equal to ConfigurationItems? If not, where are ConfigurationItems stored?
- If they are stored in S3, do ConfigurationItems get deleted or become damaged when the S3 objects expire?

I have set the S3 lifecycle to expire objects in 300 days and the AWS Config retention period to 7 years, so I am wondering what the relationship between the two is. Because the S3 lifecycle period is 300 days, will the AWS Config data be deleted after 300 days? Thank you so much!
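To see both settings side by side, a small boto3 sketch (the delivery bucket name is a placeholder). The two act on different stores: the Config retention period applies to the configuration items the Config service itself keeps, while the S3 lifecycle rules apply to the history/snapshot files delivered to the bucket, so they expire independently.

```python
import boto3

config = boto3.client("config")
s3 = boto3.client("s3")

# AWS Config retention period (applies to ConfigurationItems kept by the Config service itself).
for rc in config.describe_retention_configurations()["RetentionConfigurations"]:
    print("Config retention (days):", rc["RetentionPeriodInDays"])

# S3 lifecycle rules on the delivery bucket (apply to the history/snapshot files in S3).
bucket = "my-config-delivery-bucket"  # hypothetical name
for rule in s3.get_bucket_lifecycle_configuration(Bucket=bucket)["Rules"]:
    print("Lifecycle rule:", rule.get("ID"), rule.get("Expiration"))
```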
answers: 1 · votes: 0 · views: 13 · asked 2 days ago

Problem uploading media to AWS S3 with Django Storages / Boto3 (from a website on Lambda)

Hi all! I have a Django website which is deployed on AWS Lambda. All the static/media is stored in an S3 bucket. I managed to serve static files from S3 and it works fine; however, when trying to upload media through the admin (adding an article with a picture attached to it), I get the message "Endpoint request timed out". Here is my AWS and storage configuration:

**ukraine101.aws.utils.py**
```
from storages.backends.s3boto3 import S3Boto3Storage

StaticRootS3BotoStorage = lambda: S3Boto3Storage(location='static')
MediaRootS3BotoStorage = lambda: S3Boto3Storage(location='media')
```

**settings.py**
```
STATICFILES_DIRS = [BASE_DIR / "static"]
STATIC_URL = 'https://<my-bucket-name>.s3.amazonaws.com/'
MEDIA_URL = 'https://<my-bucket-name>.s3.amazonaws.com/media/'
MEDIA_ROOT = MEDIA_URL
DEFAULT_FILE_STORAGE = 'ukraine101.aws.utils.MediaRootS3BotoStorage'
STATICFILES_STORAGE = 'ukraine101.aws.utils.StaticRootS3BotoStorage'
AWS_STORAGE_BUCKET_NAME = '<my-bucket-name>'
AWS_S3_REGION_NAME = 'us-east-1'
AWS_ACCESS_KEY_ID = '<my-key-i-dont-show>'
AWS_SECRET_ACCESS_KEY = '<my-secret-key-i-dont-show>'
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
AWS_S3_VERIFY = True
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATICFILES_LOCATION = 'static'
```

**My Article model:**
```
class Article(models.Model):
    title = models.CharField(max_length=250)
    summary = models.TextField(blank=False, null=False)
    image = models.ImageField(blank=False, null=False, upload_to='articles/')
    text = RichTextField(blank=False, null=False)
    category = models.ForeignKey(Category, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
    featured = models.BooleanField(default=False)
    date_created = models.DateField(auto_now_add=True)
    slug = AutoSlugField(populate_from='title')
    related_book = models.ForeignKey(Book, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)

    def get_absolute_url(self):
        return reverse("articles:article-detail", kwargs={"slug": self.slug})

    def get_comments(self):
        return Comment.objects.filter(article=self.id)

    author = models.ForeignKey(User, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
```

**AWS bucket policy:**
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject", "s3:GetObjectVersion", "s3:GetObjectAcl"],
      "Resource": "arn:aws:s3:::<my-bucket-name>/*"
    }
  ]
}
```

**CORS:**
```
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "POST", "PUT", "HEAD"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
```

**User permissions policies (there are two attached):**

Policy 1:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads", "s3:ListBucketVersions"],
      "Resource": "arn:aws:s3:::<my-bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:*Object*", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload"],
      "Resource": "arn:aws:s3:::<my-bucket-name>/*"
    }
  ]
}
```

Policy 2:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*", "s3-object-lambda:*"],
      "Resource": ["arn:aws:s3:::<my-bucket-name>", "arn:aws:s3:::<my-bucket-name>/*"]
    }
  ]
}
```

Please, if someone knows what can be wrong and why this timeout is happening, help me.
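One way to narrow this down, sketched below with a placeholder bucket and key: a bare boto3 upload from inside the Lambda with short timeouts. If this also hangs, the problem is more likely the VPC networking (a Lambda in a VPC with no NAT gateway or S3 gateway endpoint will stall on calls to S3) than the django-storages configuration.

```python
import boto3
from botocore.config import Config

def lambda_handler(event, context):
    s3 = boto3.client(
        "s3",
        config=Config(connect_timeout=5, read_timeout=10, retries={"max_attempts": 1}),
    )
    # Hypothetical bucket/key, just to see whether S3 is reachable at all from this Lambda.
    s3.put_object(Bucket="<my-bucket-name>", Key="diagnostics/ping.txt", Body=b"hello")
    return {"statusCode": 200, "body": "S3 reachable"}
```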
answers: 1 · votes: 0 · views: 12 · asked 11 days ago

Enabling S3 Encryption-at-rest on a go-forward basis with s3fs

Hi, we have some buckets that have been around for a while (approx. 200GB+ of data) and we want to **turn on** encryption-at-rest using SSE-S3, the most "transparent" option: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html

The S3 buckets are mounted on our Linux VMs using s3fs (https://github.com/s3fs-fuse/s3fs-fuse), which has support for this and seems fairly transparent.

It seems this can only be enabled for files on a go-forward basis, so the older files that already exist will not be encrypted at rest (which is OK, we can backfill them later). Has anybody tried this combination before? If we mount the bucket using s3fs with the `-o use_sse` option, what happens while the files are half-and-half? Will it "just work", i.e. s3fs mounted with `-o use_sse` can handle BOTH the old files (not encrypted at rest) and the newer files (encrypted at rest), and we can then start backfilling the older files as we have time? Or will this fail catastrophically the minute we mount the S3 bucket?

Is the solution instead to just start a new bucket with SSE-S3 enabled and start moving the files over? (We have done this before, by having code in our application check for a file in multiple buckets before giving up.)

Of course we will test all of this; we just wanted to ask in case we are worrying about this too much, and whether it is "no big deal" or "be very careful". Thanks!
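For the backfill mentioned above, a hedged boto3 sketch (the bucket name is a placeholder) that re-writes existing unencrypted objects in place with SSE-S3; objects over 5 GB would need a multipart copy instead.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-existing-bucket"  # hypothetical name

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        head = s3.head_object(Bucket=bucket, Key=obj["Key"])
        if head.get("ServerSideEncryption"):
            continue  # already encrypted at rest, skip
        # An in-place copy re-writes the object with SSE-S3; REPLACE is used so the
        # self-copy is accepted, while metadata and content type are carried over.
        s3.copy_object(
            Bucket=bucket,
            Key=obj["Key"],
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            ServerSideEncryption="AES256",
            MetadataDirective="REPLACE",
            Metadata=head.get("Metadata", {}),
            ContentType=head.get("ContentType", "binary/octet-stream"),
        )
```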
answers: 1 · votes: 0 · views: 17 · asked 12 days ago

Using DMS and SCT for extracting/migrating data from Cassandra to S3

I have a customer who is scoping an architecture using DMS and SCT. I had a few questions I was hoping to get answered:

1. Does AWS DMS support data validation with Cassandra as a source? I don't see it here — https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.DataValidation — but I do see Cassandra listed as a valid source here: https://aws.amazon.com/about-aws/whats-new/2018/09/aws-dms-aws-sct-now-support-the-migration-of-apache-cassandra-databases/
2. Does AWS DMS support ongoing replication with Cassandra as a source? Reading the docs, it looks like if I wanted to extract data from Cassandra and write it to S3 (using DMS), then post-process that data into a different format (like JSON) and write it to a different S3 bucket, I could do so by attaching a Lambda to the S3 event from the DMS extract and drop (a sketch of such a Lambda follows below). Can you confirm my understanding?
3. How is incremental data loaded on an ongoing basis after the initial load from Cassandra (with DMS)? In the docs it looks like it is stored in S3 in CSV form. Does it write one CSV per source table and keep appending or updating the existing CSV? Does it create one CSV per row, per batch, etc.? I'm wondering how the event above would be triggered if I wanted to continuously post-process updates as they come in, in real time, and convert the source data from Cassandra into JSON data I store on S3.
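For question 2, a hedged sketch of such a post-processing Lambda (the output bucket name is a placeholder, and it assumes the DMS CSV files carry a header row): it is triggered by the S3 object-created event, converts the CSV to JSON, and writes the result to a second bucket.

```python
import csv
import io
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "my-postprocessed-bucket"  # hypothetical destination bucket

def lambda_handler(event, context):
    # Triggered by an s3:ObjectCreated:* event on the bucket DMS writes CSVs into.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        rows = list(csv.DictReader(io.StringIO(body)))  # assumes a header row

        out_key = key.rsplit(".", 1)[0] + ".json"
        s3.put_object(
            Bucket=OUTPUT_BUCKET,
            Key=out_key,
            Body=json.dumps(rows).encode("utf-8"),
            ContentType="application/json",
        )
```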
answers: 0 · votes: 0 · views: 9 · asked 16 days ago

`RequestTimeout`s for S3 put requests from a Lambda in a VPC for larger payloads

# Update

I added a VPC gateway endpoint for S3 in the same region (us-east-1) and selected the route table that the Lambda uses. But still, the bug persists. Below are the details of my network configuration; the Lambda is located in the "api" subnet.

## Network Configuration

1 VPC, 4 subnets:

* public — IPv4 CIDR: 10.0.0.0/24, route table: public, Network ACL: public
* private — IPv4 CIDR: 10.0.1.0/24, route table: private, Network ACL: private
* api — IPv4 CIDR: 10.0.4.0/24, route table: api, Network ACL: api
* private2-required — IPv4 CIDR: 10.0.2.0/24, route table: public, Network ACL: -

3 route tables:

* public
  * 10.0.0.0/16 → local
  * 0.0.0.0/0 → igw-xxxxxxx
  * ::/0 → igw-xxxxxxxx
* private
  * 10.0.0.0/16 → local
* api
  * 10.0.0.0/16 → local
  * 0.0.0.0/0 → nat-xxxxxxxx
  * pl-xxxxxxxx → vpce-xxxxxxxx (VPC S3 endpoint)

4 network ACLs:

* public — inbound: all traffic (allow); outbound: all traffic (allow)
* private
  * inbound: 100: PostgreSQL TCP 5432 10.0.0.48/32 (allow); 101: PostgreSQL TCP 5432 10.0.4.0/24 (allow)
  * outbound: 100: Custom TCP 32768-65535 10.0.0.48/32 (allow); 101: Custom TCP 1024-65535 10.0.4.0/24 (allow)
* api — inbound: all traffic (allow); outbound: all traffic (allow)
* \- — inbound: all traffic (allow); outbound: all traffic (allow)

# Update

I increased the timeout of the Lambda to 5 minutes, and the timeout of the PUT request to the S3 bucket to 5 minutes as well. Before this the request itself would time out; now I actually get a response back from S3. It is a 400 Bad Request response with error code `RequestTimeout`, and the message in the payload is "Your socket connection to the server was not read from or written to within the timeout period." This exact same code works 100% of the time for a small payload (on the order of 1 KB), but apparently for payloads on the order of 1 MB it starts breaking. There is no logic in _my code_ that does anything differently based on the size of the payload.

I've read similar issues that suggest the problem is the wrong number of bytes being provided in the "Content-Length" header, but I've never provided a value for that header. Furthermore, the Lambda works flawlessly when executed in my local environment. The problem definitely appears to be a networking one. At first glance it might seem like the Lambda simply can't reach services outside of the VPC, but that's not the case, because the Lambda _does_ work exactly as expected for smaller file sizes (<1 KB). So it's not that it flat out can't communicate with S3. Scratching my head here...

# Original

I use S3 to host images for an application. In my local testing environment the images upload at an acceptable speed. However, when I run the same exact code from an AWS Lambda (in my VPC), the speeds are untenably slow. I've concluded this because I've tested with smaller images (<1 KB) and they work 100% of the time without any changes to the code, while 1 MB payloads fail 98% of the time. I know the request to S3 is the issue because logs made from within the Lambda indicate that execution reaches the upload request but almost never successfully passes it (it times out).
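A small sketch (the bucket name is a placeholder) that uploads generated payloads of both sizes with explicit botocore timeouts and a lowered multipart threshold; comparing how the 1 KB and 1 MB cases behave can help confirm that the stall happens during the data transfer itself rather than during request signing or DNS resolution.

```python
import io
import os

import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

s3 = boto3.client(
    "s3",
    config=Config(connect_timeout=5, read_timeout=30, retries={"max_attempts": 2}),
)

def upload_test(size_bytes: int) -> None:
    payload = io.BytesIO(os.urandom(size_bytes))
    s3.upload_fileobj(
        payload,
        "<my-bucket-name>",  # placeholder bucket
        f"diagnostics/test-{size_bytes}.bin",
        Config=TransferConfig(multipart_threshold=256 * 1024),  # force multipart above 256 KB
    )

upload_test(1024)          # ~1 KB: reportedly succeeds
upload_test(1024 * 1024)   # ~1 MB: reportedly fails with RequestTimeout
```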
answers: 1 · votes: 0 · views: 32 · asked 17 days ago

Help with copying s3 bucket to another location missing objects

Hello all,

Today I was trying to copy a directory from one location to another, using the following command:

```
aws s3 cp s3://bucketname/directory/ s3://bucketname/directory/subdirectory --recursive
```

The copy took overnight to complete because it was 16.4 TB in size, but when I got into work the next day it was done, or at least it had completed. However, when I compare the two locations I get the following:

* bucketname/directory/ — 103,690 objects, 16.4 TB
* bucketname/directory/subdirectory/ — 103,650 objects, 16.4 TB

So there is a 40-object difference between the source location and the destination location. I tried the following command to copy over the files that were missing:

```
aws s3 sync s3://bucketname/directory/ s3://bucket/directory/subdirectory/
```

which returned no results. It sat for maybe 2 minutes or so and then just returned to the next line.

I am at my wits' end trying to copy the missing objects, and my boss thinks that I lost the data, so I need to figure out a way to get the difference between the source and destination copied over. If anyone could help me with this, I would REALLY appreciate it. I am a newbie with AWS, so I may not understand everything that I am told, but I will try anything to get this resolved. I am running all the commands from an EC2 instance that I SSH into and then use the AWS CLI.

Thanks to anyone who might be able to help me. Take care, -Tired & Frustrated :)
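A small boto3 sketch (bucket and prefixes are placeholders matching the paths above) that lists both prefixes and prints the keys present in the source but missing from the destination, so the 40 objects can be identified and copied individually.

```python
import boto3

s3 = boto3.client("s3")
bucket = "bucketname"  # placeholder

def list_keys(prefix: str) -> set:
    keys = set()
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            # Record the path relative to the prefix so the two sides are comparable.
            keys.add(obj["Key"][len(prefix):])
    return keys

source = list_keys("directory/")
dest = list_keys("directory/subdirectory/")

# Keys in the source but not in the destination; the destination lives inside the
# source prefix, so its own keys are filtered out of the comparison.
missing = sorted(k for k in (source - dest) if not k.startswith("subdirectory/"))
for key in missing:
    print(key)
```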
answers: 1 · votes: 0 · views: 6 · asked 18 days ago

Non guessable CloudFront URL

I'm wondering if there's a way to make the S3 path unguessable. Suppose I have an S3 path like this: https://s3-bucket.com/{singer_id}/album/song/song.mp3. This file is served through CloudFront, so the path becomes https://cloundfront-dist-id.com/{singer_id}/album/song/song.mp3?signature=... (I'm using signed URLs).

My question is: is it possible to make /{singer_id}/album/song/song.mp3 unguessable by hashing it, for example with a Lambda or Lambda@Edge function, so the client sees a URL like https://cloundfront-dist-id.com/some_hash?signature= ? Thanks in advance. https://stackoverflow.com/questions/70885356/non-guessable-cloudfront-url

I am also facing this issue. The question may arise why a hash is needed at all, since signed URLs are secure. On my side, I need the S3 path hidden because I am using the same AWS bucket both for retrieving images for internal use without signed URLs and for sharing files with others using signed URLs:

* Internal-use CDN without signed URL (after CNAME): https://data.example.com/{singer_id}/album/song/song.mp3
* Signed URL: https://secured-data.example.com/{singer_id}/album/song/song.mp3?signature=...&Expires=...

Since both use the same AWS bucket, if someone guesses the path embedded in a signed URL they can open the content directly at https://data.example.com/{singer_id}/album/song/song.mp3 and the file opens. In this scenario I want to hide {singer_id}/album/song/song.mp3 behind some new value so the file is served under a new name.
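One possible pattern, shown as a hedged sketch: encrypt the real key into an opaque, URL-safe token when generating links, and have a Lambda@Edge origin-request handler decrypt it back to the S3 key. Both halves are shown in one file for illustration; in practice the secret would have to be shared with the edge function at deploy time, and the cryptography package bundled with it.

```python
# Link generation side (runs in your application backend):
from cryptography.fernet import Fernet

SECRET = Fernet.generate_key()  # in practice, a fixed key stored in Secrets Manager / SSM
fernet = Fernet(SECRET)

def opaque_path(real_key: str) -> str:
    # Produces a URL-safe token such as /gAAAAAB... that reveals nothing about the key.
    return "/" + fernet.encrypt(real_key.encode()).decode()

# Lambda@Edge origin-request handler (deployed in us-east-1, attached to the distribution):
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    token = request["uri"].lstrip("/")
    # Decrypt the opaque token back to the real object key before forwarding to S3.
    request["uri"] = "/" + fernet.decrypt(token.encode()).decode()
    return request
```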
answers: 1 · votes: 0 · views: 9 · asked 18 days ago

Cannot create S3 Backup using AWS Backup

I am trying to make an S3 backup using AWS Backup. The error message I'm getting is (I have deliberately changed the bucket name and account number):

```
Unable to perform s3:PutBucketNotification on my-bucket-name-123

The backup job failed to create a recovery point for your resource arn:aws:s3:::my-bucket-name-123 due to missing permissions on role arn:aws:iam::123456789000:role/service-role/AWSBackupDefaultServiceRole.
```

I have attached the inline policy described in the [documentation](https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html) to AWSBackupDefaultServiceRole (note: the role also contains the AWS managed policy AWSBackupServiceRolePolicyForBackup as well as the following):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3BucketBackupPermissions",
      "Action": ["s3:GetInventoryConfiguration", "s3:PutInventoryConfiguration", "s3:ListBucketVersions", "s3:ListBucket", "s3:GetBucketVersioning", "s3:GetBucketNotification", "s3:PutBucketNotification", "s3:GetBucketLocation", "s3:GetBucketTagging"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*"]
    },
    {
      "Sid": "S3ObjectBackupPermissions",
      "Action": ["s3:GetObjectAcl", "s3:GetObject", "s3:GetObjectVersionTagging", "s3:GetObjectVersionAcl", "s3:GetObjectTagging", "s3:GetObjectVersion"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*/*"]
    },
    {
      "Sid": "S3GlobalPermissions",
      "Action": ["s3:ListAllMyBuckets"],
      "Effect": "Allow",
      "Resource": ["*"]
    },
    {
      "Sid": "KMSBackupPermissions",
      "Action": ["kms:Decrypt", "kms:DescribeKey"],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {"StringLike": {"kms:ViaService": "s3.*.amazonaws.com"}}
    },
    {
      "Sid": "EventsPermissions",
      "Action": ["events:DescribeRule", "events:EnableRule", "events:PutRule", "events:DeleteRule", "events:PutTargets", "events:RemoveTargets", "events:ListTargetsByRule", "events:DisableRule"],
      "Effect": "Allow",
      "Resource": "arn:aws:events:*:*:rule/AwsBackupManagedRule*"
    },
    {
      "Sid": "EventsMetricsGlobalPermissions",
      "Action": ["cloudwatch:GetMetricData", "events:ListRules"],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```

This looks correct to me and should not be giving that error. Is there a bug? Or is there a step that is not described in the documentation? I would really appreciate some help. Many thanks.
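Not from the question, but a quick check worth running: a boto3 sketch that asks the IAM policy simulator what decision the backup role actually gets for s3:PutBucketNotification on the bucket; an explicit deny coming from an SCP or permissions boundary would not appear in the role's own policies yet would explain the error.

```python
import boto3

iam = boto3.client("iam")

response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789000:role/service-role/AWSBackupDefaultServiceRole",
    ActionNames=["s3:PutBucketNotification", "s3:GetBucketNotification"],
    ResourceArns=["arn:aws:s3:::my-bucket-name-123"],
)

for result in response["EvaluationResults"]:
    # EvalDecision is "allowed", "explicitDeny", or "implicitDeny".
    print(result["EvalActionName"], "->", result["EvalDecision"])
```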
answers: 0 · votes: 0 · views: 10 · asked 21 days ago

Failed to convert 'Body' to string S3.InvalidContent arn:aws:states:::aws-sdk:s3:getObject step function

I am a newbie, so pardon my ignorance. I am writing a very simple Step Functions state machine that uses the AWS SDK integration to retrieve a file from S3. Every time I run it, the task that gets the file from S3 fails with an "S3.InvalidContent" error with "Failed to convert 'Body' to string" as the cause.

The full definition of my state machine is:

```
{
  "Comment": "A description of my state machine",
  "StartAt": "GetAudioFile",
  "States": {
    "GetAudioFile": {
      "Type": "Task",
      "Parameters": {
        "Bucket": "11123",
        "Key": "test.wav"
      },
      "Resource": "arn:aws:states:::aws-sdk:s3:getObject",
      "End": true
    }
  }
}
```

The full text of the TaskFailed event is:

```
{
  "resourceType": "aws-sdk:s3",
  "resource": "getObject",
  "error": "S3.InvalidContent",
  "cause": "Failed to convert 'Body' to string"
}
```

The full text of the CloudWatch log entry with the error is:

```
{
  "id": "5",
  "type": "TaskFailed",
  "details": {
    "cause": "Failed to convert 'Body' to string",
    "error": "S3.InvalidContent",
    "resource": "getObject",
    "resourceType": "aws-sdk:s3"
  },
  "previous_event_id": "4",
  "event_timestamp": "1651894187569",
  "execution_arn": "arn:aws:states:us-east-1:601423303632:execution:test:44ae6102-b544-3cfa-e186-181cdf331493"
}
```

1. What am I doing wrong?
2. How do I fix it?
3. What additional information do you need from me?
4. Most importantly, where can I find answers to questions like these so I don't have to post them on re:Post again? (I have spent nearly a day scouring AWS docs and Googling without finding anything.)
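The failure appears to come from the SDK integration trying to serialize the binary `Body` of a .wav object into the state's JSON output. A hedged workaround sketch: replace the direct `aws-sdk:s3:getObject` task with a Lambda task that returns only JSON-friendly data (metadata plus a presigned URL); the event shape used below is an assumption.

```python
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # event is assumed to carry {"Bucket": "...", "Key": "..."} from the state machine input.
    bucket, key = event["Bucket"], event["Key"]
    head = s3.head_object(Bucket=bucket, Key=key)

    # Return metadata plus a time-limited URL instead of the binary body itself,
    # so the state machine only ever handles JSON-serializable data.
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=900
    )
    return {"contentLength": head["ContentLength"], "presignedUrl": url}
```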
answers: 1 · votes: 0 · views: 10 · asked 22 days ago

s3 create Presigned Multipart Upload URL using API

I'm trying to use the AWS S3 API to perform a multipart upload with signed URLs. This will allow us to send a request to the server (which is configured with the correct credentials) and then return a pre-signed URL to the client (which will not have credentials configured). The client should then be able to complete the request, computing subsequent signatures as appropriate.

This appears to be possible as per the AWS S3 documentation: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html

> Signature Calculations for the Authorization Header: Transferring Payload in Multiple Chunks (Chunked Upload) (AWS Signature Version 4) — As described in the Overview, when authenticating requests using the Authorization header, you have an option of uploading the payload in chunks. You can send data in fixed size or variable size chunks. This section describes the signature calculation process in chunked upload, how you create the chunk body, and how the delayed signing works where you first upload the chunk, and send its ...

The main caveat here is that it seems to need the Content-Length up front, but we won't know that value as we'll be streaming it. Is there a way for us to use signed URLs to do a multipart upload without knowing the length of the blob to be uploaded beforehand?
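One approach that avoids knowing the total length up front, sketched with placeholder bucket/key names: start a regular multipart upload server-side, hand the client a presigned URL per part (each part's size is only fixed when that part is sent), and complete the upload once all ETags are collected.

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-upload-bucket", "big-file.bin"  # placeholders

# 1. Server side: start the multipart upload and hand out per-part presigned URLs.
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

def presign_part(part_number: int) -> str:
    return s3.generate_presigned_url(
        "upload_part",
        Params={"Bucket": bucket, "Key": key, "UploadId": upload_id, "PartNumber": part_number},
        ExpiresIn=3600,
    )

# 2. Client side: PUT each chunk (5 MB minimum except the last part) to its presigned URL
#    and collect the ETag response headers.

# 3. Server side: finish once all parts are uploaded.
# parts = [{"ETag": etag_1, "PartNumber": 1}, ...]
# s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
#                              MultipartUpload={"Parts": parts})
```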
answers: 0 · votes: 0 · views: 1 · asked 23 days ago

Sync DynamoDB to S3

What is the best way to sync my DynamoDB tables to S3 so that I can perform serverless 'big data' queries using Athena? The data must be kept in sync without any intervention. The frequency of sync would depend on the cost; ideally daily, but perhaps weekly.

I have had this question for a long time. I will cover what I have considered, and why I don't like the options.

1) AWS Glue Elastic Views. Sounds like it would do the job with no code, but it was announced 18 months ago and there have been no updates since. It's not generally available, and there is no information on when it might be.

2) Use DynamoDB native export following this blog: https://aws.amazon.com/blogs/aws/new-export-amazon-dynamodb-table-data-to-data-lake-amazon-s3/. I actually already use this method for one-off data transfers that I kick off manually and then configure in Athena. I have two issues with this option. The first is that, to my knowledge, the export cannot be scheduled natively; the blog suggests using the CLI to kick off exports, and I assume the writer intends the CLI to run on a cron job somewhere. I don't run any servers for this, though I imagine I could do it via a scheduled Lambda with an SDK (sketched below). The second issue is that the export path in S3 always includes a unique export ID, so I can't point the Athena table at a static location and just switch over to the new data after a scheduled export. Perhaps I could write another Lambda to move the data to a static location after the export has finished, but it seems a shame to have to do so much work, and I've not seen that covered anywhere before.

3) I could use Data Pipeline as described in https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html, but that post is more about backing data up than making it accessible to Athena.

This use case must be so common, and yet none of the ideas I've seen online feel complete. I was wondering if anyone has ideas or experiences that would be useful here?
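For option 2, a hedged sketch of the scheduled-Lambda variant (table ARN, bucket, and schedule are placeholders; the table needs point-in-time recovery enabled). It does not solve the unique-export-ID path issue by itself, so a follow-up step to repoint or move the data would still be needed.

```python
import datetime

import boto3

dynamodb = boto3.client("dynamodb")

# Intended to run on an EventBridge schedule, e.g. cron(0 3 * * ? *) for a daily export.
def lambda_handler(event, context):
    prefix = "exports/" + datetime.date.today().isoformat()
    response = dynamodb.export_table_to_point_in_time(
        TableArn="arn:aws:dynamodb:eu-west-1:123456789012:table/MyTable",  # placeholder ARN
        S3Bucket="my-athena-data-bucket",                                  # placeholder bucket
        S3Prefix=prefix,               # the export still lands under a unique export ID below this prefix
        ExportFormat="DYNAMODB_JSON",  # requires point-in-time recovery enabled on the table
    )
    return {"exportArn": response["ExportDescription"]["ExportArn"]}
```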
answers: 2 · votes: 0 · views: 9 · asked 24 days ago

Accessing S3 with Cognito authentication credentials

I'd like to access S3 using Cognito authentication credentials. S3 is accessed with the SDK, and Cognito is used through Amplify, in an Angular/TypeScript app. I would like to replace the secret key with the Cognito authentication information when creating the S3 client. I want to access S3 as the user I received from Auth.signIn, but the credentials are missing. I need your help.

```
public signIn(user: IUser): Promise<any> {
  return Auth.signIn(user.email, user.password).then((user) => {
    AWS.config.region = 'ap-northeast-2';
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: 'ap-northeast-2:aaaaaaaa-bbbb-dddd-eeee-ffffffff',
    });

    const userSession = Auth.userSession(user);
    const idToken = userSession['__zone_symbol__value']['idToken']['jwtToken'];

    AWS.config.region = 'ap-northeast-2';
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: 'ap-northeast-2:aaaaaaaa-bbbb-dddd-eeee-ffffffff',
      RoleArn: 'arn:aws:iam::111111111111:role/Cognito_role',
      Logins: {
        CognitoIdentityPool: 'ap-northeast-2:aaaaaaaa-bbbb-dddd-eeee-ffffffff',
        idToken: idToken,
      },
    });

    const s3 = new AWS.S3({
      apiVersion: '2012-10-17',
      region: 'ap-northeast-2',
      params: { Bucket: 'Bucketname' },
    });

    s3.config.credentials.sessionToken = user.signInUserSession['accessToken']['jwtToken'];

    s3.listObjects(function (err, data) {
      if (err) {
        return alert('There was an error: ' + err.message);
      } else {
        console.log('***********s3List***********', data);
      }
    });
  });
}
```

Bucket policy:

```
{
  "Version": "2012-10-17",
  "Id": "Policy",
  "Statement": [
    {
      "Sid": "AllowIPmix",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": "arn:aws:s3:::s3name/*"
    }
  ]
}
```

Cognito role policies — AmazonS3FullAccess:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": "*"
    }
  ]
}
```
answers: 0 · votes: 0 · views: 5 · asked 24 days ago

Expired s3 Backup Recovery Point

I configured AWS Backup in CDK to enable continuous backups for S3 buckets with this configuration:

- backup rule: with `enableContinuousBackup: true` and `deleteAfter` 35 days
- backup selection: with the `resources` array containing the ARN of the bucket directly, and roles set up following the AWS docs: https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html

Later I deleted the stack in CDK and, as expected, all the resources were deleted except for the vault, which was orphaned. The problem happens when trying to delete the recovery points inside the vault: I get back the status `Expired` with the message `Insufficient permission to delete recovery point`.

- I am logged in as a user with AdministratorAccess.
- I changed the access policy of the vault to allow anyone to delete the vault / recovery point.
- Even when logged in as the root of the account, I still get the same message.

For reference, this is the AWS managed policy attached to my user: `AdministratorAccess`; it allows 325 of 325 services, obviously including AWS Backup.

Here's the vault access policy that I set:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": [
        "backup:DeleteBackupVault",
        "backup:DeleteBackupVaultAccessPolicy",
        "backup:DeleteRecoveryPoint",
        "backup:StartCopyJob",
        "backup:StartRestoreJob",
        "backup:UpdateRecoveryPointLifecycle"
      ],
      "Resource": "*"
    }
  ]
}
```

Any ideas what I'm missing here?

**Update:**

- A full week after creating the backup recovery point, I am still unable to delete it.
- I tried deleting it from the AWS CLI, but no luck.
- I tried suspending versioning for the bucket in question and tried again, but no luck either.
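A small boto3 sketch (the vault name is a placeholder) that lists the recovery points with their status and attempts deletion through the API, so the exact error returned by `DeleteRecoveryPoint` can be captured rather than the console's generic message.

```python
import boto3

backup = boto3.client("backup")
vault_name = "my-orphaned-vault"  # placeholder

points = backup.list_recovery_points_by_backup_vault(BackupVaultName=vault_name)
for rp in points["RecoveryPoints"]:
    print(rp["RecoveryPointArn"], rp["Status"], rp.get("StatusMessage", ""))
    try:
        backup.delete_recovery_point(
            BackupVaultName=vault_name, RecoveryPointArn=rp["RecoveryPointArn"]
        )
    except Exception as exc:  # surfaces the raw API error, which may differ from the console text
        print("delete failed:", exc)
```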
answers: 0 · votes: 1 · views: 18 · asked a month ago

S3 Static Website Objects 403 Forbidden when Uploaded from Different Account

### Quick Summary

If objects are put into a bucket owned by "Account A" from a different account ("Account B"), you cannot access those files via the S3 static website (HTTP) endpoint from "Account A" (the bucket owner). This is true regardless of the bucket policy granting GetObject on all objects, and regardless of whether the bucket-owner-full-control ACL is enabled on the object.

- Downloading a file from Account A via the S3 API (console/CLI) works fine.
- Downloading a file from Account A via the S3 static website (HTTP) returns HTTP 403 Forbidden if the file was uploaded by Account B. Files uploaded by Account A download fine.
- Disabling object ACLs fixes the problem but is not feasible (explained below).

### OVERVIEW

I have a unique setup where I need to publish files to an S3 bucket from an account that does not own the bucket. The upload actions work fine. My problem is that I cannot access files from the bucket-owner account over the S3 static website *if the files were published from another account* (403 Forbidden response).

**The problem only exists for files pushed to S3 FROM a different account.** Because the issue is limited to those files, the problem seems to be in the Object Ownership / ACL configuration. I've confirmed I can access other files (that weren't uploaded by the other account) in the bucket through the S3 static website endpoint, so I know my bucket policy and VPC endpoint config are correct.

If I completely disable object ACLs, **it works fine**; however, I cannot do that because of two constraints:

- Ansible does not support publishing files to buckets with ACLs disabled. (Disabling ACLs is a relatively new S3 feature and Ansible doesn't support it.)
- The primary utility I'm using to publish files (Aptly) also doesn't support publishing to buckets with ACLs disabled, for the same reason.

Because of these constraints, I must keep object ACLs enabled on the bucket. I've tried both the "Object writer" and "Bucket owner preferred" settings; neither works. All files are uploaded with the `bucket-owner-full-control` object ACL.

SCREENSHOT: https://i.stack.imgur.com/G1FxK.png

As mentioned, disabling ACLs fixes everything, but since my client tools (Ansible and Aptly) cannot upload to S3 without an ACL set, ACLs must remain enabled.

SCREENSHOT: https://i.stack.imgur.com/NcKOd.png

### ENVIRONMENT EXPLAINED

- Bucket `test-bucket-a` is in "Account A"; it's not a "private" bucket, but it does not allow public access. Access is granted via policies (snippet below).
- Bucket objects (files) are pushed to `test-bucket-a` from an "Account B" role.
- Access from "Account B" to put files into the bucket is granted via policies (not shown here). Files upload without issue.
- Objects are given the `bucket-owner-full-control` ACL when uploading.
- I have verified that the ACLs look correct and both "Account A" and "Account B" have object access (screenshot at the bottom of the question).
- I am trying to access the files from the bucket-owner account (Account A) over the S3 static website (HTTP). I can access files that were not uploaded by "Account B", but files uploaded by "Account B" return 403 Forbidden.

I am using a VPC endpoint for access (the files cannot be public facing), and this is added to the bucket policy. All the needed routes and endpoint config are in place. I know my policy config is good because everything works perfectly for files uploaded within the same account, or if I disable object ACLs.

```
{
    "Sid": "AllowGetThroughVPCEndpoint",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::test-bucket-a/*",
    "Condition": {
        "StringEquals": {
            "aws:sourceVpce": "vpce-0bfb94<scrubbed>"
        }
    }
},
```

**Here is an example of how a file is uploaded using Ansible** (reminder: the role doing the uploading is NOT part of the bucket-owner account):

```
- name: "publish gpg pubkey to s3 from Account B"
  aws_s3:
    bucket: "test-bucket-a"
    object: "/files/pubkey.gpg"
    src: "/home/file/pubkey.gpg"
    mode: "put"
    permission: "bucket-owner-full-control"
```

**Some key troubleshooting notes:**

- From "Account A", when logged into the console, **I can download the file.** This is very strange and shows that API requests to GetObject are working. Does the S3 website config follow some different rule structure?
- From "Account A", when accessing the file from the HTTP endpoint (S3 website), it returns **HTTP 403 Forbidden**.
- I have tried deleting and re-uploading the file multiple times.
- I have tried manually setting the object ACL via the AWS CLI (e.g. `aws s3api put-object-acl --acl bucket-owner-full-control ...`).
- When viewing the object ACL, I have confirmed that both "Account A" and "Account B" have access. See the screenshot below; note that it confirms the object owner is an external account.

SCREENSHOT: https://i.stack.imgur.com/TCYvv.png
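One workaround that keeps ACLs enabled, shown as a hedged sketch: have Account A (the bucket owner) copy each affected object onto itself, which makes Account A the object owner so the bucket policy's GetObject grant can take effect for the website's anonymous access. `MetadataDirective` must be REPLACE for a self-copy; metadata and content type are carried over explicitly.

```python
import boto3

# Run with credentials from Account A (the bucket owner).
s3 = boto3.client("s3")
bucket = "test-bucket-a"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        head = s3.head_object(Bucket=bucket, Key=obj["Key"])
        # An in-place copy performed by the bucket owner makes the bucket owner
        # the object owner.
        s3.copy_object(
            Bucket=bucket,
            Key=obj["Key"],
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            MetadataDirective="REPLACE",        # required when copying an object onto itself
            Metadata=head.get("Metadata", {}),  # preserve existing user metadata
            ContentType=head.get("ContentType", "binary/octet-stream"),
        )
```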
answers: 0 · votes: 0 · views: 3 · asked a month ago

AWS Glue not properly crawling s3 bucket populated by "Resource Data Sync" -- specifically, "AWS: InstanceInformation" is not made into a table

I set up an S3 bucket that collects inventory data from multiple AWS accounts using the Systems Manager "Resource Data Sync". I was able to set up the data syncs to feed into the single bucket without issue, and the Glue crawler was created automatically. Now that I'm trying to query the data in Athena, I noticed there is an issue with how the crawler is parsing the data in the bucket.

The folder "AWS:InstanceInformation" is not being turned into a table. Instead, the crawler is turning all of the "region=us-east-1/" and "test.json" sub-items into tables, which are, obviously, not queryable. To illustrate further, each of the following paths is being turned into its own table:

* s3://resource-data-sync-bucket/AWS:InstanceInformation/accountid=12345679012/region=us-east-1
* s3://resource-data-sync-bucket/AWS:InstanceInformation/accountid=12345679012/test.json
* s3://resource-data-sync-bucket/AWS:InstanceInformation/accountid=23456790123/region=us-east-1
* s3://resource-data-sync-bucket/AWS:InstanceInformation/accountid=23456790123/test.json
* s3://resource-data-sync-bucket/AWS:InstanceInformation/accountid=34567901234/region=us-east-1
* s3://resource-data-sync-bucket/AWS:InstanceInformation/accountid=34567901234/test.json

This is ONLY happening with the "AWS:InstanceInformation" folder. All of the other folders (e.g. "AWS:DetailedInstanceInformation") are being properly turned into tables. Since all of this data was populated automatically, I'm assuming we are dealing with a bug? Is there anything I can do to fix this?
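One setting that may be worth trying, shown as a hedged boto3 sketch (the crawler name is hypothetical): the crawler's table-grouping policy. Setting `TableGroupingPolicy` to `CombineCompatibleSchemas` asks the crawler to merge compatible sub-folders into a single table instead of emitting one table per accountid/region path.

```python
import json

import boto3

glue = boto3.client("glue")

# Crawler name is a placeholder; Configuration must be passed as a JSON string.
glue.update_crawler(
    Name="resource-data-sync-crawler",
    Configuration=json.dumps({
        "Version": 1.0,
        "Grouping": {"TableGroupingPolicy": "CombineCompatibleSchemas"},
    }),
)
glue.start_crawler(Name="resource-data-sync-crawler")
```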
answers: 1 · votes: 0 · views: 5 · asked a month ago

Access S3 files from Unity for mobile development

I'm trying to configure the AWS S3 service to download the files contained in a bucket, using Unity for mobile. I downloaded the SDK package and got it installed. From the AWS console I set up an IAM policy and roles for unauthenticated users, created a Cognito identity pool and got its ID, and set up the S3 bucket and its policy using the generator, including **arn:aws:iam::{id}:role/{cognito unauth role}** and the resource **arn:aws:s3:::{bucket name}/***.

In code I set the credentials and region and create the CognitoAWSCredentials (C#):

```C#
_credentials = new CognitoAWSCredentials(IdentityPoolId, _CognitoIdentityRegion);
```

Then I create the client:

```C#
_s3Client = new AmazonS3Client(_credentials, RegionEndpoint.EUCentral1); // the region is the same as in _CognitoIdentityRegion
```

I then try to use the client to get my files (in bucket-name subfolders):

```
private void GetAWSObject(string S3BucketName, string folder, string sampleFileName, IAmazonS3 s3Client)
{
    string message = string.Format("fetching {0} from bucket {1}", sampleFileName, S3BucketName);
    Debug.LogWarning(message);

    s3Client.GetObjectAsync(S3BucketName, folder + "/" + sampleFileName, (responseObj) =>
    {
        var response = responseObj.Response;
        if (response.ResponseStream != null)
        {
            string path = Application.persistentDataPath + "/" + folder + "/" + sampleFileName;
            Debug.LogWarning("\nDownload path AWS: " + path);

            using (var fs = System.IO.File.Create(path))
            {
                byte[] buffer = new byte[81920];
                int count;
                while ((count = response.ResponseStream.Read(buffer, 0, buffer.Length)) != 0)
                    fs.Write(buffer, 0, count);
                fs.Flush();
            }
        }
        else
        {
            Debug.LogWarning("-----> response.ResponseStream is null");
        }
    });
}
```

At this point I cannot debug into the async callback, I don't get any kind of error, no file is downloaded, and I can't even check whether the connection to AWS S3 worked in any part of the script. What am I doing wrong? Thanks a lot for the help!
answers: 0 · votes: 0 · views: 3 · asked 2 months ago

Cannot access encrypted files from RDS in S3 bucket

I export data from an Aurora Postgres instance to S3 via the `aws_s3.query_export_to_s3` function. The destination bucket does not have default encryption enabled. When I try to download one of the files I get the following error:

> The ciphertext refers to a customer mast3r key that does not exist, does not exist in this region, or you are not allowed to access.

Note: I had to change the word mast3r because this forum doesn't allow me to post it as it is a "non-inclusive" word...

The reason seems to be that the files got encrypted with the AWS managed RDS key, which has the following policy:

```
{
  "Version": "2012-10-17",
  "Id": "auto-rds-2",
  "Statement": [
    {
      "Sid": "Allow access through RDS for all principals in the account that are authorized to use RDS",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:CallerAccount": "123456789",
          "kms:ViaService": "rds.eu-central-1.amazonaws.com"
        }
      }
    },
    {
      "Sid": "Allow direct access to key metadata to the account",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789:root" },
      "Action": ["kms:Describe*", "kms:Get*", "kms:List*", "kms:RevokeGrant"],
      "Resource": "*"
    }
  ]
}
```

I assume the access doesn't work because of the `ViaService` condition when trying to decrypt the file via S3. I tried accessing the files with the root user instead of an IAM user, and it works. Is there any way to get access with an IAM user? As far as I know, you cannot modify the policy of an AWS managed key. I also don't understand why the root user can decrypt the file, as the policy doesn't explicitly grant it decrypt permissions other than the permissions that apply when called from RDS.
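A quick check, as a boto3 sketch with a placeholder bucket and key, to confirm which KMS key the exported objects were actually encrypted with; if it is the AWS managed aws/rds key, its policy cannot be edited, so moving the export destination to encryption with a customer managed key you control is the usual direction to explore.

```python
import boto3

s3 = boto3.client("s3")

# Placeholders for the export destination bucket and one exported file.
head = s3.head_object(Bucket="my-export-bucket", Key="exports/part-00000")

# For SSE-KMS objects, these fields identify the key that must be decryptable
# by whoever downloads the object.
print(head.get("ServerSideEncryption"))  # e.g. "aws:kms"
print(head.get("SSEKMSKeyId"))           # ARN of the KMS key used
```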
answers: 1 · votes: 0 · views: 9 · asked 2 months ago

Adding S3 Bucket Policy Cause S3 Replication Failed

Hello, can anyone help me with the case below?

I wanted my bucket to be accessible from specific IPs only, and otherwise denied. I set up the S3 bucket policy as follows:

```
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      ],
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "x.x.x.x" },
        "Bool": { "aws:ViaAWSService": "false" }
      }
    }
  ]
}
```

For S3 replication, I configured the replication rule as per the AWS docs, setting up the following policies and attaching them to an IAM role:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetReplicationConfiguration", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::SourceBucket"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObjectVersionForReplication", "s3:GetObjectVersionAcl", "s3:GetObjectVersionTagging"],
      "Resource": ["arn:aws:s3:::SourceBucket/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ReplicateObject", "s3:ReplicateDelete", "s3:ReplicateTags"],
      "Resource": "arn:aws:s3:::DestinationBucket/*"
    }
  ]
}
```

Without the bucket policy, objects are replicated smoothly. Once I add the bucket policy, replication fails every time. I have no idea why.

Regards, Ohnmar
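One commonly used adjustment, shown as a hedged boto3 sketch (the replication role ARN is a placeholder): add an `aws:PrincipalArn` exception to the Deny statement so requests made by the replication role are not caught by the IP restriction. All condition blocks in a statement must match for the Deny to apply, so this exempts the role while keeping the IP rule for everyone else.

```python
import json

import boto3

s3 = boto3.client("s3")
bucket = "DOC-EXAMPLE-BUCKET"
replication_role = "arn:aws:iam::111122223333:role/my-replication-role"  # placeholder ARN

policy = {
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": "x.x.x.x"},
                "Bool": {"aws:ViaAWSService": "false"},
                # The Deny only applies when the caller is NOT the replication role.
                "ArnNotLike": {"aws:PrincipalArn": replication_role},
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```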
answers: 1 · votes: 0 · views: 5 · asked 2 months ago