Questions tagged with Amazon Simple Storage Service


Multipart upload with AWS S3 + checksums

I am trying to implement browser-based multipart upload to an S3 bucket. I need to be able to pause and resume the upload, and I'd also like to generate the checksums automatically while uploading. I have tried several approaches and keep hitting a wall:

* Using the Amplify S3 upload. This works well, but has the caveat that I can't generate the checksums automatically; to generate them I run a Lambda function after the file upload, and for large files that Lambda function times out. I'd also like to avoid this route because I believe it's quite computationally expensive.
* Following https://blog.logrocket.com/multipart-uploads-s3-node-js-react/. This is similar to the above, but when I add the checksum algorithm to the upload-part request I get **checksum type mismatch occurred, expected checksum type sha256, actual checksum type: null**. After a lot of googling, I'm not sure the checksums can be computed when uploading through a presigned URL.
* My current approach is to do away with the presigned URLs and send the chunked data to a Lambda function, which then sends it to the bucket. Since I'm managing everything with Amplify, I run into problems with API Gateway (multipart/form-data). I have set the gateway to accept binary data and followed other fixes I found online, but I'm stuck on **execution failed due to configuration error: unable to transform request**.

How do I fix the above error, and what would be the ideal approach to implement this functionality (multipart file upload that supports resumable uploads and checksum computation)? A sketch of the flow I have in mind follows below.
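For reference, this is a minimal sketch of the flow I have in mind, calling S3 directly from the browser with the AWS SDK for JavaScript v3 rather than through presigned URLs. The bucket, key, region, and credentials are placeholders, and I haven't verified it end to end:

```typescript
// Minimal sketch (not production code): browser multipart upload with per-part
// SHA-256 checksums, calling S3 directly with the AWS SDK v3.
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // credentials e.g. from Amplify/Cognito

const PART_SIZE = 8 * 1024 * 1024; // 8 MiB parts (minimum part size is 5 MiB)

// Base64-encoded SHA-256, the format S3 expects in the ChecksumSHA256 field.
async function sha256Base64(buf: ArrayBuffer): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", buf);
  return btoa(String.fromCharCode(...new Uint8Array(digest)));
}

export async function uploadWithChecksums(file: File, bucket: string, key: string) {
  // 1. Declare the checksum algorithm when the multipart upload is created.
  const { UploadId } = await s3.send(
    new CreateMultipartUploadCommand({
      Bucket: bucket,
      Key: key,
      ChecksumAlgorithm: "SHA256",
    })
  );

  const parts: { PartNumber: number; ETag?: string; ChecksumSHA256: string }[] = [];

  // 2. Upload each part with its own checksum, computed in the browser.
  for (let i = 0, partNumber = 1; i < file.size; i += PART_SIZE, partNumber++) {
    const chunk = await file.slice(i, i + PART_SIZE).arrayBuffer();
    const checksum = await sha256Base64(chunk);
    const res = await s3.send(
      new UploadPartCommand({
        Bucket: bucket,
        Key: key,
        UploadId,
        PartNumber: partNumber,
        Body: new Uint8Array(chunk),
        ChecksumSHA256: checksum,
      })
    );
    parts.push({ PartNumber: partNumber, ETag: res.ETag, ChecksumSHA256: checksum });
  }

  // 3. Complete the upload; the per-part checksums are echoed back here.
  await s3.send(
    new CompleteMultipartUploadCommand({
      Bucket: bucket,
      Key: key,
      UploadId,
      MultipartUpload: { Parts: parts },
    })
  );
}
```

Resuming after a pause would then amount to persisting the `UploadId` and the parts uploaded so far (or recovering them with `ListParts`) and continuing from the next part number.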
0 answers · 0 votes · 25 views · asked a month ago

Mounting a file system in GitHub Actions

I am attempting to shift a workflow into the cloud. To keep costs down I am using GitHub Actions to do the Mac-specific part: building macOS install packages with a tool called autopkg. Autopkg caches the application download and the built package between runs. Unfortunately this cache is too large for GitHub and can include files too big for GitHub Actions. Package building has to happen on a Mac. The next step, uploading the packages to multiple sites and running some Python to process the built packages, can run on a small Linux EC2 instance, so the logical solution seems to be providing a file system from AWS that autopkg can use as a cache and mounting it on every GitHub Actions run. I have been tearing my hair out attempting this with either S3 and s3fs or EFS, and I can't seem to wrap my head around how all the bits hang together. For testing I tried the mount natively on my Mac and in amazonlinux and Debian Docker containers. I'm figuring the solution will be using NFS or efs-utils to mount an EFS volume, but I can't get it working. In a Debian container using efs-utils I got close, but it seems I can't get the DNS name to resolve. The amazonlinux Docker container was too basic to get efs-utils to work. I also got the AWS command line tool installed, but it runs into the same DNS resolution problems. I tried connecting the underlying Mac to an AWS VPN in the same VPC as the file system and still had the same DNS problems. Any help would be appreciated. I've just updated the question with some more of what I have tried, including the mount-by-IP sketch below.
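Since the EFS mount-target DNS names only resolve through the VPC's own DNS, the latest thing I'm experimenting with (assuming the runner can actually reach the VPC over the VPN) is looking up the mount target's IP address with the EFS API and mounting by IP instead of by name. This is only a rough sketch; the file system ID, region, and mount point are placeholders, and the NFS options shown are the Linux ones (macOS's `mount_nfs` takes slightly different flags):

```typescript
// Rough sketch: resolve an EFS mount target IP via the API and mount over NFS
// by IP, sidestepping the VPC-only DNS name. File system ID, region, and mount
// point are placeholders; network reachability into the VPC is assumed.
import { EFSClient, DescribeMountTargetsCommand } from "@aws-sdk/client-efs";
import { execFileSync } from "node:child_process";

const FILE_SYSTEM_ID = "fs-0123456789abcdef0"; // placeholder
const MOUNT_POINT = "/mnt/autopkg-cache";      // placeholder

async function mountEfsByIp() {
  const efs = new EFSClient({ region: "us-east-1" });
  const { MountTargets } = await efs.send(
    new DescribeMountTargetsCommand({ FileSystemId: FILE_SYSTEM_ID })
  );
  const ip = MountTargets?.[0]?.IpAddress;
  if (!ip) throw new Error(`No mount target found for ${FILE_SYSTEM_ID}`);

  execFileSync("mkdir", ["-p", MOUNT_POINT]);
  // Standard Linux NFS options for EFS; macOS mount_nfs needs different flags.
  execFileSync("sudo", [
    "mount", "-t", "nfs",
    "-o", "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2",
    `${ip}:/`,
    MOUNT_POINT,
  ]);
  console.log(`Mounted ${ip}:/ at ${MOUNT_POINT}`);
}

mountEfsByIp().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

If mounting by IP works, that would at least confirm the remaining problem is DNS resolution rather than network reachability.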
0 answers · 0 votes · 12 views · asked a month ago

Using Cognito and CloudFront to control access to user files on S3

Hi, I'm putting together a media viewer website for myself to learn how AWS works. My first step was to host a webpage (index.html) on S3, have this webpage allow image/video uploads to a folder in my bucket using the AWS JavaScript SDK (v2), and have the media viewer on the page access those files directly over HTTP. I have Lambda functions that convert media formats appropriately and hold metadata in DynamoDB that the website can query through the JavaScript SDK. This all works fine. Now I'd like to make it a bit more secure and support users who log in, individual user directories within the bucket, and access control so users can only view their own files. The steps I used to do this were the following:

1. Create a user pool and identity pool in Cognito.
2. Add a Google sign-in button and enable user pool sign-in with it. To do this, Google requires the webpage to be served via HTTPS (not HTTP).
3. Since S3 website hosting can't serve files via HTTPS, I put the S3 bucket behind CloudFront.
4. Modify my bucket to have a user directory with subdirectories for each Cognito identity ID, and modify the access policies so that users can only read/write their individual subdirectory and a subset of DynamoDB based on their identity ID. The webpage uses AWS JavaScript SDK calls to log in with Cognito, upload to S3, and access DynamoDB. It all appears to work well and seems to give me secure per-user access control.
5. Now, the hole: I want the media viewer portion of my app to access the images/media via https:// links, not via the JavaScript SDK. The way it's currently configured, HTTPS access goes through CloudFront, and CloudFront has access to all the files in the S3 bucket. I'm trying to figure out how to make an HTTPS request via CloudFront (along with a Cognito token), have CloudFront inspect the token, determine the identity ID of the user, and only serve content for that user if they are logged in. Does this require Lambda@Edge, or is there an easier way? (A sketch of what I mean follows after this list.) I don't want to use signed URLs, because I anticipate a single user viewing hundreds of URLs at a time (in a gallery view) and figure generating signed URLs would slow things down too much.
6. In the future, I may want to enable sharing of files. Could I enable that by having an entry in DynamoDB for every file and have CloudFront check whether the user is allowed to view the file before serving it? Would this be part of the Lambda@Edge function?

Thanks
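To make step 5 concrete, here is roughly the Lambda@Edge viewer-request check I was imagining. It's only a sketch and not tested: it assumes the Cognito ID token is sent in an `Authorization` header, uses the `aws-jwt-verify` library, and assumes objects are keyed under the token's `sub` claim; in my setup the directories are keyed by the identity pool's identityId, so an extra mapping step (or a custom claim) would be needed.

```typescript
// Sketch of a Lambda@Edge viewer-request handler (not tested): verify a Cognito
// ID token from the Authorization header and only allow requests whose path
// falls under that user's prefix. Assumes objects live under /users/<sub>/...;
// mapping the user-pool `sub` to an identity-pool identityId would need an
// extra lookup (or a custom claim). Pool and client IDs are placeholders.
import { CloudFrontRequestEvent, CloudFrontRequestResult } from "aws-lambda";
import { CognitoJwtVerifier } from "aws-jwt-verify";

// Lambda@Edge cannot use environment variables, so these are hard-coded placeholders.
const verifier = CognitoJwtVerifier.create({
  userPoolId: "us-east-1_EXAMPLE",
  tokenUse: "id",
  clientId: "example-app-client-id",
});

export const handler = async (
  event: CloudFrontRequestEvent
): Promise<CloudFrontRequestResult> => {
  const request = event.Records[0].cf.request;
  const authHeader = request.headers["authorization"]?.[0]?.value ?? "";
  const token = authHeader.replace(/^Bearer\s+/i, "");

  try {
    const payload = await verifier.verify(token);
    // Only serve objects under this user's own prefix.
    if (request.uri.startsWith(`/users/${payload.sub}/`)) {
      return request; // pass the request through to the S3 origin
    }
  } catch {
    // invalid or missing token: fall through to the 403 below
  }

  return {
    status: "403",
    statusDescription: "Forbidden",
    body: "Not authorized for this object",
  };
};
```

An alternative that might avoid the per-URL overhead of signed URLs is CloudFront signed cookies, where a single custom policy can cover a wildcard path such as a user's whole prefix, so one signature authorizes every object in a gallery view.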
0 answers · 0 votes · 31 views · rrrpdx · asked a month ago

Bug: S3 bucket static website hosting requires an index document value, even if it's just one space (when set in the management console)

The AWS S3 service allows turning on [static website hosting](https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html) on an S3 bucket. In the AWS CloudFormation [user guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-websiteconfiguration.html#cfn-s3-websiteconfiguration-indexdocument), the `IndexDocument` field is specified as optional (`Required: No`). However, in the [management console](https://s3.console.aws.amazon.com/s3/), when configuring an S3 bucket for static website hosting, the "Error document" field is marked "optional" while "Index document" is not, and trying to save the changes with that field left blank doesn't work (it is highlighted with "Must be at least 1 characters long."). Entering a single space as the "Index document" value, however, is accepted without complaint. The console requiring a value does match what the [S3 user guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/IndexDocumentSupport.html) says:

> When you enable website hosting, you must also configure and upload an index document.

### Steps to reproduce

- View a specific S3 bucket in the [S3 management console](https://s3.console.aws.amazon.com/s3/)
- In the "Properties" tab, scroll down to "Static website hosting" and click "Edit"
- Under "Static website hosting", select "Enable"
- Leave "Index document" blank
- Click "Save changes" — the "Index document" field will be highlighted in red with "Must be at least 1 characters long."
- Enter one empty space in the "Index document" field
- Click "Save changes" — the changes will now be saved

### Expected results

"Index document" should be optional.
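For what it's worth, the discrepancy may come from the underlying `PutBucketWebsite` API, where a website configuration without an index document appears to be valid only for a redirect-only bucket. This is a hedged sketch with the AWS SDK for JavaScript v3, not something I've confirmed against every validation rule; the bucket names are placeholders:

```typescript
// Sketch (unverified assumption): PutBucketWebsite seems to accept a
// configuration with no IndexDocument when the bucket only redirects, which
// could explain why CloudFormation lists IndexDocument as not strictly required.
// Bucket names are placeholders.
import { S3Client, PutBucketWebsiteCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

async function main() {
  // Redirect-only website configuration: no index document involved.
  await s3.send(
    new PutBucketWebsiteCommand({
      Bucket: "my-redirect-bucket", // placeholder
      WebsiteConfiguration: {
        RedirectAllRequestsTo: { HostName: "example.com", Protocol: "https" },
      },
    })
  );

  // Content-serving configuration: this is the case the console covers, where
  // an index document value is expected (and a single space still passes the
  // "Must be at least 1 characters long." check).
  await s3.send(
    new PutBucketWebsiteCommand({
      Bucket: "my-website-bucket", // placeholder
      WebsiteConfiguration: {
        IndexDocument: { Suffix: "index.html" },
        ErrorDocument: { Key: "error.html" },
      },
    })
  );
}

main().catch(console.error);
```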
0 answers · 0 votes · 13 views · asked a month ago