Questions in Storage


S3 Get object not working properly in Unity

I am using the AWS SDK for .NET in Unity to download zip files from S3. I implemented the get method following the .NET tutorial at https://docs.aws.amazon.com/AmazonS3/latest/userguide/download-objects.html. But when I call the method with `ReadObjectDataAsync().Wait();`, Unity freezes and crashes, as if it were stuck in an infinite loop. This is my code; the method has a different name but is practically the same:

```
/// <summary>
/// Start is called before the first frame update
/// </summary>
void Start()
{
    customSongsManager = gameObject.GetComponent<CustomSongsManager>();
    GetZip(S3SampleFile).Wait();
}

/// <summary>
/// Get Object from S3 Bucket
/// </summary>
public async Task GetZip(string pFile)
{
    string folder = "Assets/Audio/Custom/";
    try
    {
        GetObjectRequest request = new GetObjectRequest
        {
            BucketName = S3Bucket,
            Key = pFile
        };
        using (GetObjectResponse response = await S3Client.GetObjectAsync(request))
        using (Stream responseStream = response.ResponseStream)
        {
            string title = response.Metadata["x-amz-meta-title"]; // Assume you have "title" as metadata added to the object.
            string contentType = response.Headers["Content-Type"];
            Debug.Log("Object metadata, Title: " + title);
            Debug.Log("Content type: " + contentType);

            if (responseStream != null)
            {
                using (BinaryReader bReader = new BinaryReader(response.ResponseStream))
                {
                    byte[] buffer = bReader.ReadBytes((int)response.ResponseStream.Length);
                    File.WriteAllBytes(folder + S3SampleFile, buffer);
                    Debug.Log("Wrote all bytes");
                    StartCoroutine(customSongsManager.ReadDownloadedSong(folder + S3SampleFile));
                }
            }
        }
    }
    catch (AmazonS3Exception e)
    {
        // If bucket or object does not exist
        Debug.Log("Error encountered ***. Message:" + e.Message + " when reading object");
    }
    catch (Exception e)
    {
        Debug.Log("Unknown encountered on server. Message:" + e.Message + " when reading object");
    }
}
```

The game crashes on this line: `using (GetObjectResponse response = await S3Client.GetObjectAsync(request))`
1 answer · 0 votes · 16 views · asked 3 days ago

Problem uploading media to AWS S3 with Django Storages / Boto3 (from a website on Lambda)

Hi all! I have a Django website deployed on AWS Lambda. All the static/media files are stored in an S3 bucket. I managed to serve static files from S3 and it works fine; however, when trying to upload media through the admin (I was trying to add an article with a picture attached to it), I get a message "Endpoint request timed out". Here is my AWS and storage configuration:

**ukraine101.aws.utils.py**
```
from storages.backends.s3boto3 import S3Boto3Storage

StaticRootS3BotoStorage = lambda: S3Boto3Storage(location='static')
MediaRootS3BotoStorage = lambda: S3Boto3Storage(location='media')
```

**settings.py**
```
STATICFILES_DIRS = [BASE_DIR / "static"]
STATIC_URL = 'https://<my-bucket-name>.s3.amazonaws.com/'
MEDIA_URL = 'https://<my-bucket-name>.s3.amazonaws.com/media/'
MEDIA_ROOT = MEDIA_URL
DEFAULT_FILE_STORAGE = 'ukraine101.aws.utils.MediaRootS3BotoStorage'
STATICFILES_STORAGE = 'ukraine101.aws.utils.StaticRootS3BotoStorage'
AWS_STORAGE_BUCKET_NAME = '<my-bucket-name>'
AWS_S3_REGION_NAME = 'us-east-1'
AWS_ACCESS_KEY_ID = '<my-key-i-dont-show>'
AWS_SECRET_ACCESS_KEY = '<my-secret-key-i-dont-show>'
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
AWS_S3_VERIFY = True
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATICFILES_LOCATION = 'static'
```

**My Article model:**
```
class Article(models.Model):
    title = models.CharField(max_length=250)
    summary = models.TextField(blank=False, null=False)
    image = models.ImageField(blank=False, null=False, upload_to='articles/')
    text = RichTextField(blank=False, null=False)
    category = models.ForeignKey(Category, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
    featured = models.BooleanField(default=False)
    date_created = models.DateField(auto_now_add=True)
    slug = AutoSlugField(populate_from='title')
    related_book = models.ForeignKey(Book, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)

    def get_absolute_url(self):
        return reverse("articles:article-detail", kwargs={"slug": self.slug})

    def get_comments(self):
        return Comment.objects.filter(article=self.id)

    author = models.ForeignKey(User, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
```

**AWS bucket policy:**
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetObjectAcl"
      ],
      "Resource": "arn:aws:s3:::<my-bucket-name>/*"
    }
  ]
}
```

**CORS:**
```
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "POST", "PUT", "HEAD"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
```

**User permissions policies (there are two attached):**

Policy 1:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions"
      ],
      "Resource": "arn:aws:s3:::<my-bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*Object*",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::<my-bucket-name>/*"
    }
  ]
}
```

Policy 2:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*", "s3-object-lambda:*"],
      "Resource": [
        "arn:aws:s3:::<my-bucket-name>",
        "arn:aws:s3:::<my-bucket-name>/*"
      ]
    }
  ]
}
```

Please, if someone knows what can be wrong and why this timeout is happening, help me.
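One thing worth checking when an admin upload like this times out is whether the whole file body passes through API Gateway and the Lambda function, since the API Gateway integration timeout is roughly 30 seconds. A common workaround is to have the browser upload directly to S3 with a presigned POST so the large body never travels through Lambda. A minimal boto3 sketch, with a placeholder bucket and a hypothetical object key (not taken from the question):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Generate a presigned POST the browser can use to upload straight to S3,
# bypassing the API Gateway/Lambda request path and its timeout.
presigned = s3.generate_presigned_post(
    Bucket="<my-bucket-name>",          # placeholder, as in the question
    Key="media/articles/example.jpg",   # hypothetical object key
    Fields={"Content-Type": "image/jpeg"},
    Conditions=[
        {"Content-Type": "image/jpeg"},
        ["content-length-range", 0, 10 * 1024 * 1024],  # cap uploads at 10 MB
    ],
    ExpiresIn=300,  # URL valid for 5 minutes
)

# presigned["url"] and presigned["fields"] are what the client-side form needs.
print(presigned["url"])
print(presigned["fields"])
```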
1 answer · 0 votes · 9 views · asked 4 days ago

Enabling S3 Encryption-at-rest on a go-forward basis with s3fs

Hi,

We have some buckets that have been around for a while (approx. 200 GB+ of data) and we want to **turn on** encryption-at-rest using SSE-S3, the most "transparent" option: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html

The S3 buckets are mounted to our Linux VMs using s3fs (https://github.com/s3fs-fuse/s3fs-fuse), which has support for this and seems fairly transparent.

As far as we can tell, enabling this only applies on a go-forward basis, so older files that already exist will not be encrypted at rest (which is OK, we can backfill them later).

Has anybody tried this combination before? If we mount the bucket using s3fs with the `-o use_sse` option, what will happen while the files are half-and-half? Will it "just work"? That is, with s3fs mounted with `-o use_sse`, will it handle files that are BOTH the old way (not encrypted at rest) and the new way (encrypted at rest), so we can backfill the older files as we have time? Or will this fail catastrophically the minute we mount the S3 bucket?

Is the alternative to just start a new bucket with SSE-S3 enabled and start moving the files over? (We have done this before, by having code in our application check for a file in multiple buckets before giving up.)

Of course we will test all of this; we just wanted to ask in case we are worrying about this too much, and whether this is "no big deal" or "be very careful".

Thanks!
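For the go-forward part, default bucket encryption can be turned on independently of s3fs, so every new PUT is encrypted with SSE-S3 even if the client sends no encryption header. A minimal boto3 sketch (bucket name is a placeholder), shown only as one way to apply and verify the setting, not as a statement about s3fs behavior:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-existing-bucket"  # placeholder

# Enable SSE-S3 (AES256) as the bucket default; new objects are encrypted
# at rest even if the uploading client (e.g. s3fs) sends no SSE header.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Verify the configuration took effect.
resp = s3.get_bucket_encryption(Bucket=bucket)
print(resp["ServerSideEncryptionConfiguration"]["Rules"])
```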
0 answers · 0 votes · 12 views · asked 5 days ago

Using DMS and SCT for extracting/migrating data from Cassandra to S3

IHAC who is doing scoping for an architecture using DMS and SCT. I had a few questions I was hoping you could get answered for me.

1. Does AWS DMS support data validation with Cassandra as a source? I don't see it listed at https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.DataValidation, but I do see Cassandra as a valid source at https://aws.amazon.com/about-aws/whats-new/2018/09/aws-dms-aws-sct-now-support-the-migration-of-apache-cassandra-databases/
2. Does AWS DMS support ongoing replication with Cassandra as a source? Reading the docs, it looks like if I wanted to extract data from Cassandra and write it to S3 (using DMS), then post-process that data into a different format (like JSON) and write it to a different S3 bucket, I could do so by attaching a Lambda to the S3 event from the original DMS extract and drop. Can you confirm my understanding? (A sketch of such a Lambda is included below.)
3. How is incremental data loaded on an ongoing basis after the initial load from Cassandra (with DMS)? From the docs it looks like it is stored in S3 in CSV form. Does it write one CSV per source table and keep appending to or updating the existing CSV? Does it create one CSV per row, per batch, etc.? I'm wondering how the event in question 2 would be triggered if I did want to continuously post-process updates as they come in, in real time, and convert the source data from Cassandra into JSON data I store on S3.
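On question 2, the Lambda-on-S3-event pattern is a reasonable way to post-process DMS output; whether DMS emits one CSV per table or per batch is something to confirm in the DMS docs. A minimal sketch of such a post-processing function, assuming hypothetical source and destination bucket names and that the objects are plain CSV with a header row (the real DMS output format should be checked):

```python
import csv
import io
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
DEST_BUCKET = "my-processed-bucket"  # hypothetical destination bucket


def handler(event, context):
    """Triggered by S3 ObjectCreated events on the DMS target bucket;
    rewrites each CSV object as newline-delimited JSON in another bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        rows = csv.DictReader(io.StringIO(body))  # assumes a header row

        json_lines = "\n".join(json.dumps(row) for row in rows)
        dest_key = key.rsplit(".", 1)[0] + ".json"
        s3.put_object(Bucket=DEST_BUCKET, Key=dest_key, Body=json_lines.encode("utf-8"))
```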
0 answers · 0 votes · 3 views · asked 8 days ago

_temp AWS Lake Formation blueprint pipeline tables appear to an IAM user in the Athena editor although I didn't give this user permission on them

The _temp Lake Formation blueprint pipeline tables appear to an IAM user in the Athena editor, although I didn't give this user permission on them. Below is the policy granted to this IAM user; also, in the Lake Formation permissions I didn't give this user any permissions on the _temp tables:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1652364721496",
      "Action": [
        "athena:BatchGetNamedQuery",
        "athena:BatchGetQueryExecution",
        "athena:GetDataCatalog",
        "athena:GetDatabase",
        "athena:GetNamedQuery",
        "athena:GetPreparedStatement",
        "athena:GetQueryExecution",
        "athena:GetQueryResults",
        "athena:GetQueryResultsStream",
        "athena:GetTableMetadata",
        "athena:GetWorkGroup",
        "athena:ListDataCatalogs",
        "athena:ListDatabases",
        "athena:ListEngineVersions",
        "athena:ListNamedQueries",
        "athena:ListPreparedStatements",
        "athena:ListQueryExecutions",
        "athena:ListTableMetadata",
        "athena:ListTagsForResource",
        "athena:ListWorkGroups",
        "athena:StartQueryExecution",
        "athena:StopQueryExecution"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "glue:GetDatabase",
        "glue:GetDatabases",
        "glue:BatchDeleteTable",
        "glue:GetTable",
        "glue:GetTables",
        "glue:GetPartition",
        "glue:GetPartitions",
        "glue:BatchGetPartition"
      ],
      "Resource": ["*"]
    },
    {
      "Sid": "Stmt1652365282568",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::queryresults-all",
        "arn:aws:s3:::queryresults-all/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["lakeformation:GetDataAccess"],
      "Resource": ["*"]
    }
  ]
}
```
1 answer · 0 votes · 8 views · asked 9 days ago

`RequestTimeout`s for S3 put requests from a Lambda in a VPC for larger payloads

# Update

I added a VPC gateway endpoint for S3 in the same region (US East 1) and selected the route table that the Lambda uses. But the bug still persists. Below are details of my network configuration. The Lambda is located in the "api" subnet.

## Network Configuration

1 VPC

4 subnets:
* public
    * IPv4 CIDR: 10.0.0.0/24
    * route table: public
    * Network ACL: public
* private
    * IPv4 CIDR: 10.0.1.0/24
    * route table: private
    * Network ACL: private
* api
    * IPv4 CIDR: 10.0.4.0/24
    * route table: api
    * Network ACL: api
* private2-required
    * IPv4 CIDR: 10.0.2.0/24
    * route table: public
    * Network ACL: -

3 route tables:
* public
    * Destination: 10.0.0.0/16, Target: local
    * Destination: 0.0.0.0/0, Target: igw-xxxxxxx
    * Destination: ::/0, Target: igw-xxxxxxxx
* private
    * Destination: 10.0.0.0/16, Target: local
* api
    * Destination: 10.0.0.0/16, Target: local
    * Destination: 0.0.0.0/0, Target: nat-xxxxxxxx
    * Destination: pl-xxxxxxxx, Target: vpce-xxxxxxxx (VPC S3 endpoint)

4 network ACLs:
* public
    * inbound rules: All traffic (allow)
    * outbound rules: All traffic (allow)
* private
    * inbound rules:
        * 100: PostgreSQL TCP 5432 10.0.0.48/32 (allow)
        * 101: PostgreSQL TCP 5432 10.0.4.0/24 (allow)
    * outbound rules:
        * 100: Custom TCP TCP 32768-65535 10.0.0.48/32 (allow)
        * 101: Custom TCP TCP 1024-65535 10.0.4.0/24 (allow)
* api
    * inbound rules: All traffic (allow)
    * outbound rules: All traffic (allow)
* \-
    * inbound rules: All traffic (allow)
    * outbound rules: All traffic (allow)

# Update

I increased the timeout of the Lambda to 5 minutes, and the timeout of the PUT request to the S3 bucket to 5 minutes as well. Before this, the request itself would time out; now I'm actually getting a response back from S3. It is a 400 Bad Request response with error code `RequestTimeout`, and the message in the payload is "Your socket connection to the server was not read from or written to within the timeout period." This exact same code works 100% of the time for a small payload (on the order of 1 KB), but for payloads on the order of 1 MB it starts breaking. There is no logic in _my code_ that does anything differently based on the size of the payload.

I've read about similar issues that suggest the problem is a wrong number of bytes in the "Content-Length" header, but I've never provided a value for that header. Furthermore, the Lambda works flawlessly when executed in my local environment. The problem definitely appears to be a networking one. At first glance it might seem like the Lambda simply can't reach services outside of the VPC, but that's not the case, because the Lambda _does_ work exactly as expected for smaller file sizes (<1 KB). So it's not that it flat out can't communicate with S3. Scratching my head here...

# Original

I use S3 to host images for an application. In my local testing environment the images upload at an acceptable speed. However, when I run the same exact code from an AWS Lambda (in my VPC), the speeds are untenably slow. I've concluded this because I've tested with smaller images (<1 KB) and they work 100% of the time without any changes to the code. Then I use 1 MB payloads and they fail 98% of the time. I know the request to S3 is the issue because logs made from within the Lambda indicate that execution reaches the upload request but almost never successfully gets past it (it times out).
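Not an answer, but when debugging this kind of `RequestTimeout` it can help to make the SDK's socket timeouts and retries explicit, so a stalled connection fails fast and retries instead of hanging until the Lambda times out. A minimal boto3 sketch (bucket, key, and payload are placeholders), assuming the upload code is Python; the same idea applies in other SDKs:

```python
import boto3
from botocore.config import Config

# Explicit connect/read timeouts and retries so a stalled socket surfaces
# quickly instead of hanging for the whole Lambda timeout.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    config=Config(
        connect_timeout=5,
        read_timeout=30,
        retries={"max_attempts": 3, "mode": "standard"},
    ),
)


def upload(payload: bytes, bucket: str = "my-image-bucket", key: str = "images/example.jpg"):
    # put_object derives Content-Length from the bytes object itself.
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
```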
1 answer · 0 votes · 32 views · asked 10 days ago

Help with copying s3 bucket to another location missing objects

Hello All,

Today I was trying to copy a directory from one location to another, and was using the following command to execute my copy:

```
aws s3 cp s3://bucketname/directory/ s3://bucketname/directory/subdirectory --recursive
```

The copy took overnight to complete because it was 16.4 TB in size, but when I got into work the next day it was done, or at least it had completed. But when I do a compare between the two locations I get the following:

* bucketname/directory/ : 103,690 objects - 16.4 TB
* bucketname/directory/subdirectory/ : 103,650 objects - 16.4 TB

So there is a 40-object difference between the source location and the destination location. I tried using the following command to copy over the files that were missing:

```
aws s3 sync s3://bucketname/directory/ s3://bucket/directory/subdirectory/
```

which returned no results. It sat for a while, maybe 2 minutes or so, and then just returned to the next line.

I am at my wits' end trying to copy the missing objects, and my boss thinks that I lost the data, so I need to figure out a way to get the difference between the source and destination copied over. If anyone could help me with this, I would REALLY appreciate it. I am a newbie with AWS, so I may not understand everything that I am told, but I will try anything to get this resolved. I am doing all the commands through an EC2 instance that I SSH into, and then use AWS CLI commands.

Thanks to anyone who might be able to help me. Take care,

-Tired & Frustrated :)
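One way to find exactly which 40 objects are missing is to list both prefixes and diff the relative keys, then copy only the missing ones. A minimal boto3 sketch, assuming the same bucket and prefixes as in the question; since the destination prefix lives under the source prefix, the source listing excludes it to avoid comparing the copy with itself:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "bucketname"
SRC = "directory/"
DST = "directory/subdirectory/"


def list_keys(prefix, exclude=None):
    """Return the set of keys under `prefix`, relative to that prefix."""
    keys = set()
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if exclude and key.startswith(exclude):
                continue  # skip the destination copy nested under the source
            keys.add(key[len(prefix):])
    return keys


missing = list_keys(SRC, exclude=DST) - list_keys(DST)
print(f"{len(missing)} objects missing from destination")

for rel_key in sorted(missing):
    # Note: copy_object handles objects up to 5 GB; larger ones need a
    # multipart copy (e.g. the CLI or boto3's managed transfer).
    s3.copy_object(
        Bucket=BUCKET,
        Key=DST + rel_key,
        CopySource={"Bucket": BUCKET, "Key": SRC + rel_key},
    )
```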
1 answer · 0 votes · 4 views · asked 11 days ago

Non guessable CloudFront URL

I'm wondering if there's a way to make an S3 path unguessable. Let's suppose I have an S3 path like this: https://s3-bucket.com/{singer_id}/album/song/song.mp3. This file will be served through CloudFront, so the path will be https://cloundfront-dist-id.com/{singer_id}/album/song/song.mp3?signature=... (I'm using signed URLs).

My question is: is it possible to make the /{singer_id}/album/song/song.mp3 part not guessable, by hashing it with, for example, a Lambda or Lambda@Edge function, so that the client sees a URL like https://cloundfront-dist-id.com/some_hash?signature=... ?

Thanks in advance.

https://stackoverflow.com/questions/70885356/non-guessable-cloudfront-url

I am also facing this issue. The question may arise as to why a hash is needed, since signed URLs are already secure. In my case, I need such a URL with the S3 path hidden. I am using the same S3 bucket both for retrieving images for internal use without signed URLs and for sharing files with others using signed URLs.

Internal use, CDN without signed URL (after CNAME): https://data.example.com/{singer_id}/album/song/song.mp3

Signed URL: https://secured-data.example.com/{singer_id}/album/song/song.mp3?signature=...&Expires=...

Since both use the same S3 bucket, if someone guesses the path from the signed URL they can access the content at https://data.example.com/{singer_id}/album/song/song.mp3 directly, and the file opens. In this scenario, I want to hide {singer_id}/album/song/song.mp3 behind some new value so the file is served under a new name.
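One pattern that fits what is being asked (an assumption on my part, not an established CloudFront feature) is to serve an opaque, HMAC-authenticated token instead of the real path, and have an edge function decode it back to the S3 key when rewriting the origin request. A minimal Python sketch of the encode/decode halves, with a hypothetical shared secret; note that base64 is encoding, not encryption, so if the path itself must stay secret the key would need to be encrypted or mapped through a lookup table instead:

```python
import base64
import hashlib
import hmac

SECRET = b"rotate-me"  # hypothetical shared secret, e.g. kept in function config


def to_opaque(s3_key: str) -> str:
    """Encode the real S3 key as an opaque, non-forgeable path segment."""
    raw = s3_key.encode("utf-8")
    tag = hmac.new(SECRET, raw, hashlib.sha256).digest()[:16]
    return base64.urlsafe_b64encode(tag + raw).decode("ascii").rstrip("=")


def from_opaque(token: str) -> str:
    """Verify and decode the token back into the real S3 key
    (what a Lambda@Edge origin-request handler would do)."""
    padded = token + "=" * (-len(token) % 4)
    blob = base64.urlsafe_b64decode(padded)
    tag, raw = blob[:16], blob[16:]
    expected = hmac.new(SECRET, raw, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("invalid or tampered token")
    return raw.decode("utf-8")


if __name__ == "__main__":
    token = to_opaque("1234/album/song/song.mp3")
    print(token)               # what the client would see in the CloudFront URL
    print(from_opaque(token))  # what the edge function rewrites the origin request to
```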
1 answer · 0 votes · 7 views · asked 11 days ago

Cannot create S3 Backup using AWS Backup

I am trying to make an S3 backup using AWS Backup. The error message I'm getting is (I have deliberately changed the bucket name and account number):

```
Unable to perform s3:PutBucketNotification on my-bucket-name-123
The backup job failed to create a recovery point for your resource arn:aws:s3:::my-bucket-name-123 due to missing permissions on role arn:aws:iam::123456789000:role/service-role/AWSBackupDefaultServiceRole.
```

I have attached the inline policy described in the [documentation](https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html) to AWSBackupDefaultServiceRole (note: the role also contains the AWS managed policy AWSBackupServiceRolePolicyForBackup as well as the following):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3BucketBackupPermissions",
      "Action": [
        "s3:GetInventoryConfiguration",
        "s3:PutInventoryConfiguration",
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetBucketVersioning",
        "s3:GetBucketNotification",
        "s3:PutBucketNotification",
        "s3:GetBucketLocation",
        "s3:GetBucketTagging"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*"]
    },
    {
      "Sid": "S3ObjectBackupPermissions",
      "Action": [
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:GetObjectVersionTagging",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectTagging",
        "s3:GetObjectVersion"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*/*"]
    },
    {
      "Sid": "S3GlobalPermissions",
      "Action": ["s3:ListAllMyBuckets"],
      "Effect": "Allow",
      "Resource": ["*"]
    },
    {
      "Sid": "KMSBackupPermissions",
      "Action": ["kms:Decrypt", "kms:DescribeKey"],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringLike": {"kms:ViaService": "s3.*.amazonaws.com"}
      }
    },
    {
      "Sid": "EventsPermissions",
      "Action": [
        "events:DescribeRule",
        "events:EnableRule",
        "events:PutRule",
        "events:DeleteRule",
        "events:PutTargets",
        "events:RemoveTargets",
        "events:ListTargetsByRule",
        "events:DisableRule"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:events:*:*:rule/AwsBackupManagedRule*"
    },
    {
      "Sid": "EventsMetricsGlobalPermissions",
      "Action": ["cloudwatch:GetMetricData", "events:ListRules"],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```

This looks correct to me, and it should not be giving that error. Is there a bug? Or is there a step that is not described in the documentation? I would really appreciate some help. Many thanks.
0 answers · 0 votes · 4 views · asked 14 days ago

Failed to convert 'Body' to string S3.InvalidContent arn:aws:states:::aws-sdk:s3:getObject step function

I am a newbie, so pardon my ignorance. I am writing a very simple Step Functions state machine that uses the AWS SDK integration to retrieve a file from S3. Every time I run it, the task that gets the file from S3 fails with an "S3.InvalidContent" error with "Failed to convert 'Body' to string" as the cause.

The full definition of my state machine is:

```
{
  "Comment": "A description of my state machine",
  "StartAt": "GetAudioFile",
  "States": {
    "GetAudioFile": {
      "Type": "Task",
      "Parameters": {
        "Bucket": "11123",
        "Key": "test.wav"
      },
      "Resource": "arn:aws:states:::aws-sdk:s3:getObject",
      "End": true
    }
  }
}
```

The full text of the TaskFailed event is:

```
{
  "resourceType": "aws-sdk:s3",
  "resource": "getObject",
  "error": "S3.InvalidContent",
  "cause": "Failed to convert 'Body' to string"
}
```

The full text of the CloudWatch log entry with the error is:

```
{
  "id": "5",
  "type": "TaskFailed",
  "details": {
    "cause": "Failed to convert 'Body' to string",
    "error": "S3.InvalidContent",
    "resource": "getObject",
    "resourceType": "aws-sdk:s3"
  },
  "previous_event_id": "4",
  "event_timestamp": "1651894187569",
  "execution_arn": "arn:aws:states:us-east-1:601423303632:execution:test:44ae6102-b544-3cfa-e186-181cdf331493"
}
```

1. What am I doing wrong?
2. How do I fix it?
3. What additional information do you need from me?
4. Most importantly, where can I find answers to these questions so I don't have to post them on re:Post again? (I have spent nearly a day scouring AWS docs and Googling without finding anything.)
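For what it's worth, the error is consistent with the SDK integration trying to treat a binary .wav body as a string. One common workaround (an assumption on my part, not taken from the question) is to fetch the object in a small Lambda task instead and return something Step Functions can carry as JSON, such as metadata or a presigned URL, rather than the raw bytes. A minimal sketch:

```python
import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Lambda task for Step Functions: returns JSON-safe info about a binary
    S3 object instead of its raw body. Expects {"Bucket": ..., "Key": ...}."""
    bucket, key = event["Bucket"], event["Key"]

    head = s3.head_object(Bucket=bucket, Key=key)

    # A presigned URL lets a later step (or an external consumer) fetch the
    # binary content without pushing the bytes through the state machine.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=900,
    )

    return {
        "bucket": bucket,
        "key": key,
        "contentType": head.get("ContentType"),
        "contentLength": head["ContentLength"],
        "presignedUrl": url,
    }
```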
1 answer · 0 votes · 4 views · asked 15 days ago

s3 create Presigned Multipart Upload URL using API

I'm trying to use the AWS S3 API to perform a multipart upload with signed URLs. This will allow us to send a request to the server (which is configured with the correct credentials) and then return a pre-signed URL to the client (which will not have credentials configured). The client should then be able to complete the request, computing subsequent signatures as appropriate.

This appears to be possible per the AWS S3 documentation on "Signature Calculations for the Authorization Header: Transferring Payload in Multiple Chunks (Chunked Upload) (AWS Signature Version 4)": https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html

> As described in the Overview, when authenticating requests using the Authorization header, you have an option of uploading the payload in chunks. You can send data in fixed size or variable size chunks. This section describes the signature calculation process in chunked upload, how you create the chunk body, and how the delayed signing works where you first upload the chunk, and send its ...

The main caveat here is that it seems to need the Content-Length up front, but we won't know that value as we'll be streaming the data. Is there a way for us to use signed URLs to do a multipart upload without knowing the length of the blob to be uploaded beforehand?
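A pattern that avoids knowing the total length up front (a sketch of one approach, not necessarily what the linked docs page describes) is to use the multipart upload APIs directly: the server creates the upload and presigns an UploadPart URL per part, the client PUTs each part (each part's length is known only when it is sent), and the server completes the upload at the end. A minimal boto3 sketch with placeholder bucket/key names:

```python
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-upload-bucket", "uploads/stream.bin"  # placeholders

# 1. Server side: start the multipart upload.
upload = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)
upload_id = upload["UploadId"]


# 2. Server side: presign a URL for each part number the client asks for.
#    Parts can be requested one at a time as the client streams, so the total
#    size never needs to be known in advance (every part except the last must
#    be at least 5 MB).
def presign_part(part_number: int) -> str:
    return s3.generate_presigned_url(
        "upload_part",
        Params={
            "Bucket": BUCKET,
            "Key": KEY,
            "UploadId": upload_id,
            "PartNumber": part_number,
        },
        ExpiresIn=3600,
    )


# 3. Client side: HTTP PUT each chunk to its presigned URL and record the
#    returned ETag, e.g. parts = [{"ETag": etag, "PartNumber": n}, ...]

# 4. Server side: finish (or abort) the upload once all parts are in.
# s3.complete_multipart_upload(
#     Bucket=BUCKET, Key=KEY, UploadId=upload_id,
#     MultipartUpload={"Parts": parts},
# )
```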
0 answers · 0 votes · 0 views · asked 15 days ago

Sync DynamoDB to S3

What is the best way to sync my DynamoDB tables to S3, so that I can perform serverless 'big data' queries using Athena? The data must be kept in sync without any intervention. The frequency of the sync would depend on the cost; ideally daily, but perhaps weekly.

I have had this question a long time. I will cover what I have considered, and why I don't like the options.

1) AWS Glue Elastic Views. Sounds like this will do the job with no code, but it was announced 18 months ago and there have been no updates since. It's not generally available, and there is no information on when it might be.

2) Use the DynamoDB native export following this blog: https://aws.amazon.com/blogs/aws/new-export-amazon-dynamodb-table-data-to-data-lake-amazon-s3/. I actually already use this method for one-off data transfers that I kick off manually and then configure in Athena. I have two issues with this option. The first is that, to my knowledge, the export cannot be scheduled natively. The blog suggests using the CLI to kick off exports, and I assume the writer intends for the CLI to run on a cron job somewhere; I don't run any servers for this. I imagine I could do it via a scheduled Lambda with an SDK (see the sketch below). The second issue is that the export path in S3 always includes a unique export ID. This means I can't configure the Athena table to point to a static location for the data and just switch over to the new data after a scheduled export. Perhaps I could write another Lambda to move the data to a static location after the export has finished, but it seems a shame to have to do so much work, and I've not seen that covered anywhere before.

3) I can use Data Pipeline as described in https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html. That page is more about backing data up than making it accessible to Athena.

I feel like this use case must be very common, and yet none of the ideas I've seen online are really complete. I was wondering if anyone had any ideas or experiences that would be useful here?
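For option 2, the export can be kicked off from a scheduled Lambda (e.g. an EventBridge cron rule) rather than a server. A minimal boto3 sketch with a placeholder table ARN and bucket; it only starts the export (point-in-time recovery must be enabled on the table), and the "move to a static prefix" step mentioned above would still be separate work:

```python
import boto3

dynamodb = boto3.client("dynamodb")

TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"  # placeholder
EXPORT_BUCKET = "my-analytics-bucket"                                # placeholder


def handler(event, context):
    """Invoked on a schedule (e.g. a daily EventBridge rule); starts a
    point-in-time export of the table to S3 for Athena to query."""
    resp = dynamodb.export_table_to_point_in_time(
        TableArn=TABLE_ARN,
        S3Bucket=EXPORT_BUCKET,
        S3Prefix="dynamodb-exports/mytable",  # a unique export ID still gets appended under this prefix
        ExportFormat="DYNAMODB_JSON",
    )
    return {"exportArn": resp["ExportDescription"]["ExportArn"]}
```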
2 answers · 0 votes · 9 views · asked 16 days ago

I'd like to access S3 using Cognito authentication credentials.

I'd like to make requests to S3 using Cognito credentials. S3 is accessed with the AWS SDK and Cognito via Amplify, in an Angular TypeScript app. I would like to replace the secret key with the Cognito authentication information when creating the S3 client. I want to access S3 with the user I received from Auth.signIn, but the credentials are missing. I need your help.

```
public signIn(user: IUser): Promise<any> {
  return Auth.signIn(user.email, user.password).then((user) => {
    AWS.config.region = 'ap-northeast-2';
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: 'ap-northeast-2:aaaaaaaa-bbbb-dddd-eeee-ffffffff',
    });

    const userSession = Auth.userSession(user);
    const idToken = userSession['__zone_symbol__value']['idToken']['jwtToken'];

    AWS.config.region = 'ap-northeast-2';
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: 'ap-northeast-2:aaaaaaaa-bbbb-dddd-eeee-ffffffff',
      RoleArn: 'arn:aws:iam::111111111111:role/Cognito_role',
      Logins: {
        CognitoIdentityPool: 'ap-northeast-2:aaaaaaaa-bbbb-dddd-eeee-ffffffff',
        idToken: idToken,
      },
    });

    const s3 = new AWS.S3({
      apiVersion: '2012-10-17',
      region: 'ap-northeast-2',
      params: {
        Bucket: 'Bucketname',
      },
    });

    s3.config.credentials.sessionToken = user.signInUserSession['accessToken']['jwtToken'];

    s3.listObjects(function (err, data) {
      if (err) {
        return alert('There was an error: ' + err.message);
      } else {
        console.log('***********s3List***********', data);
      }
    });
  });
}
```

bucket policy

```
{
  "Version": "2012-10-17",
  "Id": "Policy",
  "Statement": [
    {
      "Sid": "AllowIPmix",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": "arn:aws:s3:::s3name/*"
    }
  ]
}
```

cognito Role Policies - AmazonS3FullAccess

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": "*"
    }
  ]
}
```
0 answers · 0 votes · 5 views · asked 16 days ago

AWS Backup DynamoDB billing

Hi everyone! I'd like to better understand the billing composition regarding AWS Backup on DynamoDB resources, since I got an unexpected increase in my bill. I'm aware of AWS Backup pricing itself thanks to the [documentation](https://aws.amazon.com/backup/pricing/). However, when I access the Billing service I notice a large charge for the DynamoDB service: in the section `Amazon DynamoDB USE1-TimedBackupStorage-ByteHrs` the description tells me I'm paying $0.10 per GB-month of storage used for on-demand backups, and that I've used 14,247.295 GB-month (this matches the bill I got). What I don't understand is: **where do all those GB come from?** The latest snapshot size shows just 175.5 GB.

I've configured my backup plan with the following parameters:

```
{
  "ruleName": "hourly-basis",
  "scheduleExpression": "cron(0 * ? * * *)",
  "startWindowMinutes": 60,
  "completionWindowMinutes": 180,
  "lifecycle": {
    "toDeletedAfterDays": 30
  }
}
```

I'm also copying snapshots into a second region, `us-west-2`.

As you can see, I'm running the schedule expression on an hourly basis because of compliance requirements. *Is this enough to justify the high bill?* I'm aware that backups with a low RPO are commonly expensive, but I just want to be sure that this bill is not higher than it should be because of a wrong Backup configuration.

Thanks in advance!
0 answers · 0 votes · 9 views · asked 19 days ago

Expired s3 Backup Recovery Point

I configured AWS Backup in CDK to enable continuous backups for S3 buckets with this configuration:

- backup rule: with `enableContinuousBackup: true` and `deleteAfter` 35 days
- backup selection: with the `resources` array containing the ARN of the bucket directly, and roles set up following the AWS docs: https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html

Later I deleted the stack in CDK and, as expected, all the resources were deleted except for the vault, which was orphaned. The problem happens when trying to delete the recovery points inside the vault: I get back the status `Expired` with the message `Insufficient permission to delete recovery point`.

- I am logged in as a user with AdministratorAccess
- I changed the access policy of the vault to allow anyone to delete the vault / recovery point
- Even when logged in as the root of the account, I still get the same message.

---

- For reference, this is the AWS managed policy attached to my user: `AdministratorAccess`. It allows 325 of 325 services, obviously including AWS Backup.
- Here's the vault access policy that I set:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "backup:DeleteBackupVault",
        "backup:DeleteBackupVaultAccessPolicy",
        "backup:DeleteRecoveryPoint",
        "backup:StartCopyJob",
        "backup:StartRestoreJob",
        "backup:UpdateRecoveryPointLifecycle"
      ],
      "Resource": "*"
    }
  ]
}
```

Any ideas what I'm missing here?

**Update**:

- A full week after creating the backup recovery point, I'm still unable to delete it.
- I tried deleting it from the AWS CLI, but no luck.
- I tried suspending versioning for the bucket in question and tried again, but no luck either.
0 answers · 0 votes · 4 views · asked 24 days ago

S3 Static Website Objects 403 Forbidden when Uploaded from Different Account

### Quick Summary

If objects are put into a bucket owned by "Account A" from a different account ("Account B"), you cannot access those files via the S3 static website (HTTP) from "Account A" (the bucket owner). This is true regardless of the bucket policy granting GetObject on all objects, and regardless of whether the bucket-owner-full-control ACL is set on the object.

- Downloading a file from Account A via the S3 API (console/CLI) works fine.
- Downloading a file from Account A via the S3 static website (HTTP) returns HTTP 403 Forbidden if the file was uploaded by Account B. Files uploaded by Account A download fine.
- Disabling object ACLs fixes the problem but is not feasible (explained below).

### OVERVIEW

I have a unique setup where I need to publish files to an S3 bucket from an account that does not own the bucket. The upload actions work fine. My problem is that I cannot access files from the bucket-owner account over the S3 static website *if the files were published from another account* (403 Forbidden response).

**The problem only exists if the files were pushed to S3 FROM a different account.** Because the issue is only for those files, the problem seems to be in the Object Ownership / ACL configuration. I've confirmed I can access other files (that weren't uploaded by the other account) in the bucket through the S3 static website endpoint, so I know my bucket policy and VPC endpoint config is correct.

If I completely disable object ACLs, **it works fine**; however, I cannot do that because of two issues:

- Ansible does not support publishing files to buckets with ACLs disabled. (Disabling ACLs is a relatively new S3 feature and Ansible doesn't support it.)
- The primary utility I'm using to publish files (Aptly) also doesn't support publishing to buckets with ACLs disabled, for the same reason.

Because of these constraints, I must keep object ACLs enabled on the bucket. I've tried both the "Object writer" and "Bucket owner preferred" settings; neither works. All files are uploaded with the `bucket-owner-full-control` object ACL.

SCREENSHOT: https://i.stack.imgur.com/G1FxK.png

As mentioned, disabling ACLs fixes everything, but since my client tools (Ansible and Aptly) cannot upload to S3 without an ACL set, ACLs must remain enabled.

SCREENSHOT: https://i.stack.imgur.com/NcKOd.png

### ENVIRONMENT EXPLAINED

- Bucket `test-bucket-a` is in "Account A"; it's not a "private" bucket but it does not allow public access. Access is granted via policies (snippet below).
- Bucket objects (files) are pushed to `test-bucket-a` from an "Account B" role.
- Access for "Account B" to put files into the bucket is granted via policies (not shown here). Files upload without issue.
- Objects are given the `bucket-owner-full-control` ACL when uploading.
- I have verified that the ACLs look correct and both "Account A" and "Account B" have object access (screenshot at the bottom of the question).
- I am trying to access the files from the bucket-owner account (Account A) over the S3 static website endpoint (HTTP). I can access files that were not uploaded by "Account B", but files uploaded by "Account B" return 403 Forbidden.

I am using a VPC endpoint for access (files cannot be public facing), and this is added to the bucket policy. All the needed routes and endpoint config are in place. I know my policy config is good because everything works perfectly for files uploaded within the same account, or if I disable object ACLs.

```
{
  "Sid": "AllowGetThroughVPCEndpoint",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::test-bucket-a/*",
  "Condition": {
    "StringEquals": {
      "aws:sourceVpce": "vpce-0bfb94<scrubbed>"
    }
  }
},
```

**Here is an example of how a file is uploaded using Ansible** (reminder: the role doing the uploading is NOT part of the bucket-owner account):

```
- name: "publish gpg pubkey to s3 from Account B"
  aws_s3:
    bucket: "test-bucket-a"
    object: "/files/pubkey.gpg"
    src: "/home/file/pubkey.gpg"
    mode: "put"
    permission: "bucket-owner-full-control"
```

**Some key troubleshooting notes:**

- From "Account A", when logged into the console, **I can download the file.** This is very strange and shows that API requests to GetObject are working. Does the S3 website config follow some different rule structure?
- From "Account A", when accessing the file from the HTTP endpoint (S3 website), it returns **HTTP 403 Forbidden**.
- I have tried deleting and re-uploading the file multiple times.
- I have tried manually setting the object ACL via the AWS CLI (e.g. `aws s3api put-object-acl --acl bucket-owner-full-control ...`).
- When viewing the object ACL, I have confirmed that both "Account A" and "Account B" have access. See the screenshot below; note that it confirms the object owner is an external account.

SCREENSHOT: https://i.stack.imgur.com/TCYvv.png
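Not a fix for the ACL constraint itself, but one workaround sometimes used in this situation (an assumption on my part, not something stated in the question) is to have Account A re-copy each object onto itself after publication, which makes the bucket owner the object owner and typically satisfies the static website endpoint. A minimal boto3 sketch, run with Account A credentials:

```python
import boto3

s3 = boto3.client("s3")  # credentials for Account A, the bucket owner
BUCKET = "test-bucket-a"


def take_ownership(prefix=""):
    """Copy each object onto itself from the bucket-owner account so that
    Account A becomes the object owner (objects over 5 GB would need a
    multipart copy instead)."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            head = s3.head_object(Bucket=BUCKET, Key=key)
            s3.copy_object(
                Bucket=BUCKET,
                Key=key,
                CopySource={"Bucket": BUCKET, "Key": key},
                MetadataDirective="REPLACE",        # required for an in-place copy
                ContentType=head.get("ContentType", "binary/octet-stream"),
                Metadata=head.get("Metadata", {}),  # preserve user metadata
                ACL="bucket-owner-full-control",
            )
            print(f"re-owned {key}")


if __name__ == "__main__":
    take_ownership()
```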
0 answers · 0 votes · 3 views · asked 24 days ago