
Questions tagged with Storage


S3 Get object not working properly in Unity

I am using the AWS SDK for .NET in Unity to download zip files from S3. I implemented the get method just as in this tutorial for .NET: https://docs.aws.amazon.com/AmazonS3/latest/userguide/download-objects.html. But when I call the method with `ReadObjectDataAsync().Wait();`, Unity stops and crashes, as if it were stuck in an infinite loop. This is my code; it has a different name but is practically the same:

```
/// <summary>
/// Start is called before the first frame update
/// </summary>
void Start()
{
    customSongsManager = gameObject.GetComponent<CustomSongsManager>();
    GetZip(S3SampleFile).Wait();
}

/// <summary>
/// Get Object from S3 Bucket
/// </summary>
public async Task GetZip(string pFile)
{
    string folder = "Assets/Audio/Custom/";
    try
    {
        GetObjectRequest request = new GetObjectRequest
        {
            BucketName = S3Bucket,
            Key = pFile
        };

        using (GetObjectResponse response = await S3Client.GetObjectAsync(request))
        using (Stream responseStream = response.ResponseStream)
        {
            string title = response.Metadata["x-amz-meta-title"]; // Assume you have "title" as metadata added to the object.
            string contentType = response.Headers["Content-Type"];
            Debug.Log("Object metadata, Title: " + title);
            Debug.Log("Content type: " + contentType);

            if (responseStream != null)
            {
                using (BinaryReader bReader = new BinaryReader(response.ResponseStream))
                {
                    byte[] buffer = bReader.ReadBytes((int)response.ResponseStream.Length);
                    File.WriteAllBytes(folder + S3SampleFile, buffer);
                    Debug.Log("Wrote all bytes");
                    StartCoroutine(customSongsManager.ReadDownloadedSong(folder + S3SampleFile));
                }
            }
        }
    }
    catch (AmazonS3Exception e)
    {
        // If bucket or object does not exist
        Debug.Log("Error encountered ***. Message:" + e.Message + " when reading object");
    }
    catch (Exception e)
    {
        Debug.Log("Unknown encountered on server. Message:" + e.Message + " when reading object");
    }
}
```

The game crashes on this line:

```
using (GetObjectResponse response = await S3Client.GetObjectAsync(request))
```
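As a sanity check outside Unity, the same object can be fetched with Python (boto3); the bucket and key below are placeholders. If this succeeds, the object and credentials are fine and the freeze likely comes from blocking on the task with `.Wait()` on Unity's main thread rather than from S3 itself.

```python
# Minimal sketch, assuming default AWS credentials; bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.download_file("my-s3-bucket", "sample.zip", "sample.zip")  # writes the object to a local file
print("downloaded OK")
```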
1 answers · 0 votes · 22 views · asked 7 days ago

Help with copying s3 bucket to another location missing objects

Hello All,

Today I was trying to copy a directory from one location to another, using the following command:

```
aws s3 cp s3://bucketname/directory/ s3://bucketname/directory/subdirectory --recursive
```

The copy took overnight to complete because it was 16.4 TB in size, but when I got into work the next day it was done, or at least it had completed. However, when I compare the two locations I get the following:

- bucketname/directory/ — 103,690 objects, 16.4 TB
- bucketname/directory/subdirectory/ — 103,650 objects, 16.4 TB

So there is a 40-object difference between the source location and the destination location. I tried using the following command to copy over the files that were missing:

```
aws s3 sync s3://bucketname/directory/ s3://bucket/directory/subdirectory/
```

which returned no results. It sat for maybe two minutes or so, and then just returned to the next line.

I am at my wit's end trying to copy the missing objects, and my boss thinks that I lost the data, so I need to figure out a way to get the difference between the source and destination copied over. If anyone could help me with this, I would REALLY appreciate it. I am a newbie with AWS, so I may not understand everything that I am told, but I will try anything to get this resolved. I am running all the commands through an EC2 instance that I SSH into, using the AWS CLI.

Thanks to anyone who might be able to help me. Take care,

-Tired & Frustrated :)
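Since `aws s3 sync` reported nothing, it may help to list both prefixes and diff the keys directly. Below is a minimal Python (boto3) sketch of that comparison; the bucket and prefix names are taken from the question and may need adjusting.

```python
# Minimal sketch, assuming boto3 credentials and the bucket/prefixes from the question.
# Because the destination sits inside the source prefix, keys under the destination are
# skipped when listing the source so they are not compared against themselves.
import boto3

s3 = boto3.client("s3")
SRC_PREFIX = "directory/"
DST_PREFIX = "directory/subdirectory/"

def list_keys(bucket, prefix, skip_prefix=None):
    """Return object keys under `prefix`, relative to it, optionally skipping a subtree."""
    keys = set()
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if skip_prefix and obj["Key"].startswith(skip_prefix):
                continue
            keys.add(obj["Key"][len(prefix):])
    return keys

source = list_keys("bucketname", SRC_PREFIX, skip_prefix=DST_PREFIX)
dest = list_keys("bucketname", DST_PREFIX)
for key in sorted(source - dest):
    print(key)  # objects present in the source but missing from the destination
```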
1 answers · 0 votes · 5 views · asked 15 days ago

s3 create Presigned Multipart Upload URL using API

I'm trying to use the AWS S3 API to perform a multipart upload with signed URLs. This will allow us to send a request to the server (which is configured with the correct credentials), and then return a pre-signed URL to the client (which will not have credentials configured). The client should then be able to complete the request, computing subsequent signatures as appropriate.

This appears to be possible as per the AWS S3 documentation: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html

> Signature Calculations for the Authorization Header: Transferring Payload in Multiple Chunks (Chunked Upload) (AWS Signature Version 4)
> As described in the Overview, when authenticating requests using the Authorization header, you have an option of uploading the payload in chunks. You can send data in fixed size or variable size chunks. This section describes the signature calculation process in chunked upload, how you create the chunk body, and how the delayed signing works where you first upload the chunk, and send its ...

The main caveat here is that it seems to need the Content-Length up front, but we won't know that value because we'll be streaming the data. Is there a way for us to use signed URLs to do a multipart upload without knowing the length of the blob to be uploaded beforehand?
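For what it's worth, a common alternative to the chunked-signing flow linked above is to presign individual `UploadPart` requests, which does not need the total length up front; the server can keep issuing part URLs until the client runs out of data. The sketch below shows the server-side calls in Python (boto3), under the assumption that this pattern is acceptable; the bucket, key, and ETag values are placeholders.

```python
# Minimal server-side sketch of presigned multipart upload; bucket/key/ETags are placeholders.
# Each part except the last must be at least 5 MiB.
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "uploads/blob.bin"

# 1. Start the multipart upload on the server and keep the UploadId.
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

# 2. Hand the client one presigned URL per part number, on demand.
def presign_part(part_number, expires=3600):
    return s3.generate_presigned_url(
        "upload_part",
        Params={"Bucket": bucket, "Key": key,
                "UploadId": upload_id, "PartNumber": part_number},
        ExpiresIn=expires,
    )

# 3. The client PUTs each part to its URL and reports back the returned ETag;
#    the server then completes the upload with the collected part list.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": [{"PartNumber": 1, "ETag": "etag-from-part-1"}]},
)
```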
0 answers · 0 votes · 0 views · asked 19 days ago

S3 Static Website Objects 403 Forbidden when Uploaded from Different Account

### Quick Summary

If objects are put into a bucket owned by "Account A" from a different account ("Account B"), you cannot access those files via the S3 static website (HTTP) from "Account A" (the bucket owner). This is true regardless of the bucket policy granting GetObject on all objects, and regardless of whether the bucket-owner-full-control ACL is set on the object.

- Downloading a file from Account A via the S3 API (console/CLI) works fine.
- Downloading a file from Account A via the S3 static website (HTTP) returns HTTP 403 Forbidden if the file was uploaded by Account B. Files uploaded by Account A download fine.
- Disabling object ACLs fixes the problem but is not feasible (explained below).

### OVERVIEW

I have a unique setup where I need to publish files to an S3 bucket from an account that does not own the bucket. The upload actions work fine. My problem is that I cannot access files from the bucket-owner account over the S3 static website *if the files were published from another account* (403 Forbidden response).

**The problem only exists for files pushed to S3 FROM a different account.** Because the issue is limited to those files, the problem seems to be in the Object Ownership / ACL configuration. I've confirmed I can access other files (that weren't uploaded by the other account) in the bucket through the S3 static website endpoint, so I know my bucket policy and VPC endpoint config is correct.

If I completely disable object ACLs **it works fine**, however I cannot do that because of two issues:

- Ansible does not support publishing files to buckets with ACLs disabled. (Disabling ACLs is a relatively new S3 feature and Ansible doesn't support it.)
- The primary utility I'm using to publish files (Aptly) also doesn't support publishing to buckets with ACLs disabled, for the same reason.

Because of these constraints, I must keep object ACLs enabled on the bucket. I've tried both settings, "Object writer" and "Bucket owner preferred"; neither works. All files are uploaded with the `bucket-owner-full-control` object ACL.

SCREENSHOT: https://i.stack.imgur.com/G1FxK.png

As mentioned, disabling ACLs fixes everything, but since my client tools (Ansible and Aptly) cannot upload to S3 without an ACL set, ACLs must remain enabled.

SCREENSHOT: https://i.stack.imgur.com/NcKOd.png

### ENVIRONMENT EXPLAINED

- Bucket `test-bucket-a` is in "Account A". It's not a "private" bucket, but it does not allow public access; access is granted via policies (snippet below).
- Bucket objects (files) are pushed to `test-bucket-a` from an "Account B" role.
- Access from "Account B" to put files into the bucket is granted via policies (not shown here). Files upload without issue.
- Objects are given the `bucket-owner-full-control` ACL when uploading.
- I have verified that the ACLs look correct and both "Account A" and "Account B" have object access (screenshot at the bottom of the question).
- I am trying to access the files from the bucket-owner account (Account A) over the S3 static website (HTTP). I can access files that were not uploaded by "Account B", but files uploaded by "Account B" return 403 Forbidden.

I am using a VPC endpoint for access (files cannot be public facing), and this is added to the bucket policy. All the needed routes and endpoint config are in place. I know my policy config is good because everything works perfectly for files uploaded within the same account, or if I disable object ACLs.

```
{
    "Sid": "AllowGetThroughVPCEndpoint",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::test-bucket-a/*",
    "Condition": {
        "StringEquals": {
            "aws:sourceVpce": "vpce-0bfb94<scrubbed>"
        }
    }
},
```

**Here is an example of how a file is uploaded using Ansible** (reminder: the role doing the uploading is NOT part of the bucket-owner account):

```
- name: "publish gpg pubkey to s3 from Account B"
  aws_s3:
    bucket: "test-bucket-a"
    object: "/files/pubkey.gpg"
    src: "/home/file/pubkey.gpg"
    mode: "put"
    permission: "bucket-owner-full-control"
```

**Some key troubleshooting notes:**

- From "Account A", when logged into the console, **I can download the file.** This is very strange and shows that API requests to GetObject are working. Does the S3 website config follow some different rule structure?
- From "Account A", when accessing the file from the HTTP endpoint (S3 website), it returns **HTTP 403 Forbidden**.
- I have tried deleting and re-uploading the file multiple times.
- I have tried manually setting the object ACL via the AWS CLI (e.g. `aws s3api put-object-acl --acl bucket-owner-full-control ...`).
- When viewing the object ACL, I have confirmed that both "Account A" and "Account B" have access. See the screenshot below; note that it confirms the object owner is an external account.

SCREENSHOT: https://i.stack.imgur.com/TCYvv.png
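A quick way to compare a working object with a 403 object is to dump each object's owner and grants side by side. Below is a minimal Python (boto3) diagnostic sketch, assumed to run with Account A (bucket owner) credentials; the first key is a placeholder for any object that was uploaded by Account A, the second is the key from the Ansible example.

```python
# Minimal diagnostic sketch; key names are placeholders/examples from the question.
import boto3

s3 = boto3.client("s3")  # Account A (bucket owner) credentials assumed
for key in ("files/uploaded-by-account-a.gpg", "files/pubkey.gpg"):
    acl = s3.get_object_acl(Bucket="test-bucket-a", Key=key)
    owner = acl["Owner"].get("DisplayName") or acl["Owner"]["ID"]
    print(key, "owner:", owner)
    for grant in acl["Grants"]:
        grantee = grant["Grantee"].get("ID") or grant["Grantee"].get("URI")
        print("   grant:", grantee, grant["Permission"])
```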
0 answers · 0 votes · 3 views · asked a month ago

Amplify vs S3: Why is this behaviour different?

Hi all, I'm still very much a novice with AWS and I'm following this tutorial to learn more: [Build a Serverless Web Application](https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/). In my own test account I've followed this tutorial without issue. I'm currently trying to apply the same tutorial in a restricted environment where I don't have access to Amplify. I figured I could get away with swapping Amplify in this tutorial for S3, however I'm getting some odd behaviour.

* [Amplify hosted site](https://i.imgur.com/ZfcGqC1.png)
* [S3 hosted site](https://i.imgur.com/cYx8gNN.png)

I'm using this [GitHub repo](https://github.com/alinenaoe/wildrydes-site) website for both. I'm creating the bucket and uploading the files via Terraform:

```
resource "aws_s3_bucket" "mrw-example-bucket" {
  bucket = "mrw-example"
  tags = {
    Name = "mrw-example"
  }
}

resource "aws_s3_bucket_acl" "mrw-example-bucket-acl" {
  bucket = aws_s3_bucket.mrw-example-bucket.id
  acl    = "public-read"
}

resource "aws_s3_bucket_website_configuration" "mrw-example-bucket-website-config" {
  bucket = aws_s3_bucket.mrw-example-bucket.id
  index_document {
    suffix = "index.html"
  }
  error_document {
    key = "error.html"
  }
}

resource "aws_s3_bucket_public_access_block" "mrw-example-bucket-pab" {
  bucket              = aws_s3_bucket.mrw-example-bucket.id
  block_public_acls   = false
  block_public_policy = false
}

resource "aws_s3_bucket_policy" "mrw-example-bucket-policy" {
  bucket = aws_s3_bucket.mrw-example-bucket.id
  policy = file("policy.json")
}

resource "aws_s3_bucket_cors_configuration" "mrw-example-cors-config" {
  bucket = aws_s3_bucket.mrw-example-bucket.id
  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST", "GET"]
    allowed_origins = ["*"]
    max_age_seconds = 3000
  }
}

### WEBSITE FILES ###
locals {
  content_type_map = {
    html = "text/html",
    js   = "application/javascript",
    css  = "text/css",
    svg  = "image/svg+xml",
    jpg  = "image/jpeg",
    ico  = "image/x-icon",
    png  = "image/png",
    gif  = "image/gif",
    pdf  = "application/pdf"
  }
}

resource "aws_s3_object" "mrw-example-bucket-objects" {
  for_each     = fileset("website/", "**/*.*")
  bucket       = aws_s3_bucket.mrw-example-bucket.id
  key          = each.value
  source       = "website/${each.value}"
  etag         = filemd5("website/${each.value}")
  content_type = lookup(local.content_type_map, regex("\\.(?P<extension>[A-Za-z0-9]+)$", each.value).extension, "application/octet-stream")
}
```

My question to you all is: why is the behaviour different? It feels like I've missed something obvious somewhere.
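One low-effort check is to request the same page from both sites and compare the status codes and Content-Type headers, since differences between Amplify hosting and an S3 website endpoint often show up there first. A minimal Python sketch follows; both URLs are placeholders for the real endpoints.

```python
# Minimal sketch; both URLs below are placeholders for the actual Amplify and S3 website endpoints.
import urllib.request
import urllib.error

urls = (
    "https://main.d1234567890.amplifyapp.com/index.html",                 # placeholder Amplify URL
    "http://mrw-example.s3-website-us-east-1.amazonaws.com/index.html",   # placeholder S3 website URL
)
for url in urls:
    try:
        with urllib.request.urlopen(url) as resp:
            print(url, resp.status, resp.headers.get("Content-Type"))
    except urllib.error.HTTPError as err:
        # Error responses (403/404) still carry useful headers.
        print(url, err.code, err.headers.get("Content-Type"))
```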
1 answers · 0 votes · 2 views · asked a month ago

Using aws s3api put-object --sse-customer-key-md5 fails with CLI

I'm trying to use `aws s3api put-object`/`get-object` with server-side encryption with customer-provided keys (SSE-C). I'm using PowerShell, but I don't believe that is the source of my issue.

On the surface, `--sse-customer-key-md5` appears to be a pretty simple input (https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html):

> Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

put-object works when I don't use `--sse-customer-key-md5`:

```
aws s3api put-object `
    --bucket abc `
    --sse-customer-algorithm AES256 `
    --sse-customer-key "testaes256testaes256testaes25612" `
    --region us-east-1 `
    --key test.pdf `
    --body C:\test.pdf

{
    "SSECustomerKeyMD5": "ezatpv/Yg0KkjX+5ZcsxdQ==",
    "SSECustomerAlgorithm": "AES256",
    "ETag": "\"0d44c3df058c4e190bd7b2e6d227be73\""
}
```

I agree with the SSECustomerKeyMD5 result:

```
$key = "testaes256testaes256testaes25612"
$md5 = new-object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
$utf8 = new-object -TypeName System.Text.UTF8Encoding
$hash = $md5.ComputeHash($utf8.GetBytes($key))
$EncodedString = [Convert]::ToBase64String($hash)
Write-Host "Base64 Encoded String: " $EncodedString

Base64 Encoded String:  ezatpv/Yg0KkjX+5ZcsxdQ==
```

Now I resubmit my put request with the `--sse-customer-key-md5` option. Before anyone jumps on the base64 encoding: I've tried submitting the MD5 hash in base64, hexadecimal (with and without delimiters), JSON of the MD5 hash result, and upper-case and lower-case versions of the aforementioned. None work. Has anyone gotten this to work and, if so, what format did you use?

```
aws s3api put-object `
    --bucket abc `
    --sse-customer-algorithm AES256 `
    --sse-customer-key "testaes256testaes256testaes25612" `
    --sse-customer-key-md5 "ezatpv/Yg0KkjX+5ZcsxdQ==" `
    --region us-east-1 `
    --key test.pdf `
    --body C:\test.pdf

aws : At line:1 char:1
+ aws s3api put-object `
+ ~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError

An error occurred (InvalidArgument) when calling the PutObject operation: The calculated MD5 hash of the key did not match the hash that was provided.
```

Thanks
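For comparison, the same SSE-C upload can be done with Python (boto3), which derives the key-MD5 header itself when given `SSECustomerKey`; the hand computation of base64(MD5(key)) is included to match the PowerShell result above. The bucket and file names are taken from the question and may need adjusting.

```python
# Minimal SSE-C sketch; bucket, key, and file path are placeholders/examples from the question.
import base64
import hashlib
import boto3

key_material = "testaes256testaes256testaes25612"
# Same base64(MD5(key)) computation as the PowerShell snippet above:
print(base64.b64encode(hashlib.md5(key_material.encode("utf-8")).digest()).decode())
# -> ezatpv/Yg0KkjX+5ZcsxdQ==

s3 = boto3.client("s3", region_name="us-east-1")
with open("test.pdf", "rb") as body:
    s3.put_object(
        Bucket="abc",
        Key="test.pdf",
        Body=body,
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=key_material,  # boto3 adds the key-MD5 header for you
    )
```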
2 answers · 0 votes · 8 views · asked a month ago

The image cannot be displayed because it contains errors

Hi, I'm working with saving and displaying images in an S3 bucket. I am on a Mac, and the images show fine on the Mac. I am able to upload many images to the bucket and then display them using a pre-signed URL. All good...

But then I have some other varied images, such as .jpg files, that I see fine on the Mac and that seem to upload OK, but that do not display from S3 using a pre-signed URL. When viewed in Safari, Chrome or Firefox on the Mac I get the broken-image symbol. Firefox also says: The image "https://xxxxxxxxxx" cannot be displayed because it contains errors.

Someone suggested that the original file creation might have been strange in some way, and that the Mac might be able to interpret the image even though it does not display correctly when served from S3. Possibly this might be a cross-platform Windows / Mac / Linux image issue?

Test: I took one of the .jpg images that did not show up from S3, opened it in Preview on the Mac and exported it as .jpg under a different name. Then I uploaded this new version, and this did seem to fix the problem because it now displays correctly from S3. However, for what I'm doing I do not want to have to export and re-save every image in order to put it in S3.

Q: Does anybody have any ideas as to why I am getting errors when trying to display some images from S3? Any ideas how to fix this?

Quick update: in the Mac terminal I tried `file -I ~/Desktop/test.jpg` and surprisingly it came back as `image/heic`, even though the file had a .jpg suffix... Any idea how to get S3 to read "heic" files?

Thanks
dave
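Since `file -I` already identified one offender as `image/heic`, a small script can flag such files before upload. Below is a minimal Python sketch that sniffs the file's magic bytes; the path is the one mentioned in the question. Many browsers cannot render HEIC, which would explain why S3 serves the bytes but the page shows a broken image.

```python
# Minimal sketch: detect JPEG vs HEIC/HEIF by magic bytes; the path is an example from the question.
import os

def sniff_image(path):
    with open(path, "rb") as f:
        header = f.read(16)
    if header[:3] == b"\xff\xd8\xff":
        return "jpeg"
    # ISO-BMFF container: bytes 4-7 are "ftyp", bytes 8-11 are the brand.
    if header[4:8] == b"ftyp" and header[8:12] in (b"heic", b"heix", b"mif1", b"msf1"):
        return "heic/heif"
    return "unknown"

print(sniff_image(os.path.expanduser("~/Desktop/test.jpg")))
```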
0 answers · 0 votes · 4 views · asked a month ago

EFS performance/cost optimization

We have a relatively small EFS of about 20 GB in Bursting throughput mode. It was set up about 2 months ago and there were no real performance issues; utilization was always under 2% even under our maximum load (which only lasts a very short period of time).

Yesterday we suddenly noticed that our site was not responding, even though our servers had very minimal CPU load. We then saw that the utilization of the EFS had suddenly gone up to 100%. Digging deeper, it seems we had been slowly and consistently consuming the original 2.3T BurstCreditBalance over the past few weeks, and it hit zero yesterday.

Problems:

1. The EFS monitoring tab provided completely useless information and does NOT even include BurstCreditBalance; we had to find it in CloudWatch ourselves.
2. The throughput-utilization graph is misleading: we were actually slowly using up the credits, but there was no indication of that.
3. We have since switched to Provisioned mode at 10 MB/s in the meantime, as we're not really sure how to work out the correct throughput number for our system. CloudWatch is showing 1-second-average max values of MeteredIOBytes 7.3k, DataReadIOBytes 770k, DataWriteIOBytes 780k.
4. We're seeing BurstCreditBalance build up much more quickly (with 10 MB/s provisioned) than we were using it previously (in Bursting mode). However, when we switched to 2 MB/s provisioned, our system was visibly throttled even though there was 1T of BurstCreditBalance. Why?

Main questions:

1. Based on the CloudWatch metrics, how do we properly choose a provisioned rate that is not excessive but also does not limit our system when it needs to burst?
2. Ideally we'd like to use Bursting mode as that fits our usage pattern better, but with just 20 GB we don't seem to accumulate any BurstCreditBalance.
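For reference, the burst-credit history that the monitoring tab doesn't show can be pulled directly from CloudWatch. A minimal Python (boto3) sketch follows; the file system ID and time window are placeholders.

```python
# Minimal sketch: pull the EFS BurstCreditBalance metric from CloudWatch.
# The file system ID, window, and period are placeholders to adjust.
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)
resp = cw.get_metric_statistics(
    Namespace="AWS/EFS",
    MetricName="BurstCreditBalance",
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-0123456789abcdef0"}],
    StartTime=now - timedelta(days=14),
    EndTime=now,
    Period=3600,
    Statistics=["Minimum"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"])
```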
1 answers · 0 votes · 7 views · asked 2 months ago

Is there a way to identify an EBS Volume inside a Linux EC2 instance using its volume ID ?

We are working on a use case where we need to map the disk label within the instance to the corresponding volume ID in EBS. While performing some validations on some AMIs, we found that there is a difference in behavior between Windows and Linux.

We have observed that the requirement we need is met in the case of Windows (AMI used: Windows_Server-2016-English-Full-Containers-2022.01.19). The following query yields the required result; here the serial number of the disk maps to the EBS volume ID. The device driver for this instance was the AWS PV Storage Host Adapter.

```
PS C:\Users\Administrator> Get-WmiObject Win32_DiskDrive | select-object -property serialnumber,index

serialnumber          index
------------          -----
vol0b44250cf530aa7f3      0
vol0f38be626e3137975      1
vol0bdc570ca980fb5fb      2
```

However, in the case of Linux instances (AMI used: amzn2-ami-kernel-5.10-hvm-2.0.20220121.0-x86_64-gp2) we are seeing that the EBS volume ID is not present within the disk metadata. We checked the following inside Linux:

1. Directories within /dev/disk: for the above AMI, the disk serial number is not exposed in the /dev/disk/by-id directory. In the /dev/disk/by-path directory, there are entries of the form `xen-vbd-51712 -> ../../xvda`. Is it possible to map the string `xen-vbd-51712` to the EBS volume?
2. `udevadm info <disk_label>`: this yields the information attached below, but the volume ID is not present in it.

```
P: /devices/vbd-51712/block/xvda
N: xvda
S: disk/by-path/xen-vbd-51712
S: sda
E: DEVLINKS=/dev/disk/by-path/xen-vbd-51712 /dev/sda
E: DEVNAME=/dev/xvda
E: DEVPATH=/devices/vbd-51712/block/xvda
E: DEVTYPE=disk
E: ID_PART_TABLE_TYPE=gpt
E: ID_PART_TABLE_UUID=08cf25fb-6b18-47c3-b4cb-fea548b3a3a2
E: ID_PATH=xen-vbd-51712
E: ID_PATH_TAG=xen-vbd-51712
E: MAJOR=202
E: MINOR=0
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: USEC_INITIALIZED=34430
```

As per https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html, the device name provided when the EBS volume is attached to the instance is not guaranteed to be the same as the one visible inside the instance:

```
"When you attach a volume to your instance, you include a device name for the volume. This device name is used by Amazon EC2. The block device driver for the instance assigns the actual volume name when mounting the volume, and the name assigned can be different from the name that Amazon EC2 uses"
```

Since our use case can involve frequent addition/removal of EBS volumes on an instance, we want a deterministic method to identify a volume inside a Linux instance. Could you please let us know whether there is a way to relate a disk within the EC2 instance to the corresponding EBS volume ID?
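On the API side, the attachment records already hold the pairing between the device name given at attach time and the volume ID. A minimal Python (boto3) sketch that prints that pairing for one instance follows; the region and instance ID are placeholders. As the quoted documentation notes, the in-guest name may still differ from the attachment device name, so this covers only the EC2/EBS half of the mapping.

```python
# Minimal sketch: list EBS volumes attached to one instance with the device name EC2 recorded.
# Region and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": ["i-0123456789abcdef0"]}]
)
for vol in resp["Volumes"]:
    for att in vol["Attachments"]:
        print(att["Device"], vol["VolumeId"])
```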
1 answers · 0 votes · 97 views · asked 3 months ago

How to access and/or mount Amazon public datasets to EC2

I have an EC2 instance running in us-east-1 that needs to be able to access/manipulate data available in the [KITTI Vision Benchmark public dataset](https://registry.opendata.aws/kitti/). I'd like to make this data available to the instance, but I would also like to be able to reuse it with other instances in the future (more like a mounted-S3 approach).

I understand that I can view the bucket and recursively download the data to a local folder using the AWS CLI from within the instance:

```
aws s3 ls --no-sign-request s3://avg-kitti/
aws s3 sync s3://avg-kitti/ .
aws s3 cp s3://avg-kitti/ . --recursive
```

However, this feels like a brute-force approach and would likely require me to increase my EBS volume size, and it would limit my reuse of this data elsewhere (unless I were to snapshot and reuse). I did find some Stack Overflow solutions that mentioned some of the open datasets being available as [a snapshot you could copy over and attach as a volume](https://opendata.stackexchange.com/questions/12699/how-can-i-download-aws-public-datasets). But the [KITTI Vision Benchmark public dataset](https://registry.opendata.aws/kitti/) appears to be on S3, so I don't think it would have a snapshot the way the EBS-hosted datasets do.

That being said, is there an easier way to copy the public data over to an existing S3 bucket and then mount my instance on that? I have played around with s3fs and feel like that might be my best bet, but I am worried about:

1. the cost of copying/downloading all data from the public bucket to my own;
2. the best approach for reusing this data on other instances;
3. simply not knowing whether there's a better/cheaper way to make this data available without downloading it now, or needing to download it again in the future.
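For quick exploration without credentials, boto3 can make unsigned requests just like the `--no-sign-request` CLI flag. A minimal Python sketch listing the first few objects of the KITTI bucket follows; the bucket name comes from the question, the item count is arbitrary.

```python
# Minimal sketch: anonymous (unsigned) listing of a public bucket, equivalent to --no-sign-request.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
pages = s3.get_paginator("list_objects_v2").paginate(
    Bucket="avg-kitti", PaginationConfig={"MaxItems": 50}
)
for page in pages:
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```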
2 answers · 0 votes · 6 views · asked 4 months ago

Cross Account Copy S3 Objects From Account B to AWS KMS-encrypted bucket in Account A

My Amazon Simple Storage Service (Amazon S3) bucket in Account A is encrypted with an AWS managed AWS Key Management Service (AWS KMS) key. I have created a Lambda function to copy objects from Account B to Account A, whose S3 bucket uses that AWS managed KMS key for server-side encryption. When the function executes and tries to copy objects from Account B to the Account A S3 bucket, I get an Access Denied error.

I came across a Knowledge Center article that covers the same scenario **except for one difference**: it deals with a **customer managed key** used for server-side encryption. Because they are using a customer managed key, they are able to modify the KMS key policy to grant the Lambda function's role ARN permission to the **kms:Decrypt** action. As mentioned earlier, my S3 bucket is encrypted with an AWS managed key, so we can't modify the key policy because it is managed by AWS.

So, my question is: how do we copy objects from S3 buckets in Account B to Account A (with AWS managed KMS encryption enabled)?

Reference links:

* https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-denied-error-s3/
* **Changing a key policy documentation:** https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-how-to-console-policy-view

Thanks in advance.
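For reference, below is a minimal Python (boto3) sketch of the copy call such a Lambda might make from Account A, with placeholder bucket names and key. Whether it succeeds still depends on which KMS keys the roles involved are allowed to use, which is the crux of the question; the destination bucket's default encryption settings apply to the new object.

```python
# Minimal sketch: server-side copy from the Account B bucket into the Account A bucket.
# Bucket names and object key are placeholders; credentials/role assumptions not shown.
import boto3

s3 = boto3.client("s3")
s3.copy_object(
    CopySource={"Bucket": "account-b-source-bucket", "Key": "data/report.csv"},
    Bucket="account-a-destination-bucket",
    Key="data/report.csv",
)
```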
2 answers · 0 votes · 49 views · asked 4 months ago

AWS: s3 bucket policy does not give IAM user access to upload to bucket, throws 403 error

I have an **S3 bucket** that works perfectly with root credentials (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) to upload files to the bucket. I have created an **IAM user**. I tried to give this **IAM user** the privilege of uploading files to this bucket by creating this **policy** and attaching it to that bucket:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::122xxxxxxxx28:user/iam-user-name"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucket-name"
        }
    ]
}
```

However, when I try to upload a file, I get this error:

```
> PUT
> https://bucket-name.s3.region-code.amazonaws.com/images/60ded1353752602bf4b364ee.jpeg?Content-Type=image%2F%2A&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIARZARRPPIBMVEWKUW%2F20220128%2Feu-west-3%2Fs3%2Faws4_request&X-Amz-Date=20220128T123229Z&X-Amz-Expires=300&X-Amz-Signature=dfdc3d92f6e52da5387c113ddd793990d1033fdd7318b42b2573594835c01643&X-Amz-SignedHeaders=host%3Bx-amz-acl&x-amz-acl=public-read
> 403 (Forbidden)
```

This is how the upload works:

1. I generate a presigned-url in the backend:

```js
var getImageSignedUrl = async function (key) {
  return new Promise((resolve, reject) => {
    s3.getSignedUrl(
      "putObject",
      {
        Bucket: AWS_BUCKET_NAME,
        Key: key,
        ContentType: "image/*",
        ACL: "public-read",
        Expires: 300,
      },
      (err, url) => {
        if (err) {
          reject(err);
        } else {
          resolve(url);
        }
      }
    );
  });
};
```

2. Then the file is uploaded in the frontend using that url:

```js
await axios.put(uploadConfig.url, file, {
  headers: {
    "Content-Type": file.type,
    "x-amz-acl": "public-read",
  },
  transformRequest: (data, headers) => {
    delete headers.common["Authorization"];
    return data;
  },
});
```
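For comparison, the backend step can also be expressed with Python (boto3); this is only an equivalent sketch of the Node code above (bucket and key are placeholders), not a fix. Keep in mind that a presigned URL carries the permissions of the credentials that signed it, so both the signing IAM user's policy and the bucket policy are evaluated when the PUT is made.

```python
# Minimal sketch: generate a presigned PUT URL equivalent to the Node getSignedUrl call above.
# Bucket and key are placeholders; the signer's credentials determine what the URL may do.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "bucket-name",
        "Key": "images/60ded1353752602bf4b364ee.jpeg",
        "ContentType": "image/*",
        "ACL": "public-read",
    },
    ExpiresIn=300,
)
print(url)
```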
1 answers · 0 votes · 85 views · asked 4 months ago