How to upload video files using rest API after receiving an "upload URL"
I'm working with ShotGrid (an Autodesk service), which makes it possible to upload media to their S3 buckets. The basic idea: the developer sends a request to ShotGrid for an AWS S3 "upload URL".
ShotGrid's upload documentation explains how to make the request for the "upload URL", and that part seems to work just fine, but there's no documentation explaining how to actually execute the upload after receiving it.
So far I'm getting errors, the most promising of which shows "SignatureDoesNotMatch / The request signature we calculated does not match the signature you provided. Check your key and signing method."
More detail below...
I've tried the following. The request for the 'upload URL' is:
curl -X GET 'https://myshow.shotgrid.autodesk.com/api/v1/entity/Version/{VersionId}/_upload?filename={FileName}' \
-H 'Authorization: Bearer {BearerToken}' \
-H 'Accept: application/json'
The result is:
{
  "UrlRequest": {
    "data": {
      "timestamp": "[timestamp]",
      "upload_type": "Attachment",
      "upload_id": null,
      "storage_service": "s3",
      "original_filename": "[FileName]",
      "multipart_upload": false
    },
    "links": {
      "upload": "https://[s3domain].amazonaws.com/[longstring1]/[longstring2]/[FileName]
        ?X-Amz-Algorithm=[Alg]
        &X-Amz-Credential=[Creds]
        &X-Amz-Date=[Date]
        &X-Amz-Expires=900
        &X-Amz-SignedHeaders=host
        &X-Amz-Security-Token=[Token]
        &X-Amz-Signature=[Signature]",
      "complete_upload": "/api/v1/entity/versions/{VersionId}/_upload"
    }
  }
}
Then the upload request...
curl -X PUT -H 'x-amz-signature=[Signature-See-Above]' -d '@/Volumes/Path/To/Upload/Media' 'https://[uploadUrlFromAbove]'
And get the following error...
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
</Error>
When uploading a file to S3 using a pre-signed URL, there are a few things you must be aware of:
Any header set when creating the URL.
Any metadata set on the S3 object when creating the URL.
These two matter because you must set the matching headers when performing the PUT to the upload URL; whenever they don't match, you get a SignatureDoesNotMatch error. On one hand, any header set when creating the pre-signed URL must be passed to the PUT request exactly as-is. On the other hand, any metadata set on the S3 object when creating the pre-signed URL must be passed in prefixed with x-amz-meta-. For instance, let's say you generate a pre-signed URL to upload a jpg file:
const params = {
  Bucket: 'your-bucket',
  Key: 'your-key',
  ContentType: 'image/jpg'
};
const preSignedUrl = await s3Client.getSignedUrlPromise('putObject', params);
Then you must upload the image and set the Content-Type header:
curl -X PUT -H 'Content-Type: image/jpg' --data-binary '@/Volumes/Path/To/Upload/Image' 'https://[uploadUrlFromAbove]'
In contrast, if you would like to set some custom metadata, like originalFilename, you will create the pre-signed URL as follows:
const params = {
  Bucket: 'your-bucket',
  Key: 'your-key',
  ContentType: 'image/jpg',
  Metadata: {
    originalFilename: 'image.jpg'
  }
};
const preSignedUrl = await s3Client.getSignedUrlPromise('putObject', params);
And you will upload the image as:
curl -X PUT -H 'Content-Type: image/jpg' -H 'x-amz-meta-original-filename: image.jpg' --data-binary '@/Volumes/Path/To/Upload/Image' 'https://[uploadUrlFromAbove]'
IMPORTANT TO NOTE: I intentionally chose a camel-case metadata key to illustrate that it must be written in lower-case, dash-separated form in the header. The metadata will be stored in S3 in that form too!
Anyhow, according to your initial question... it seems ShotGrid's documentation has the answer:
In the response for getting an upload URL one of 3 types upload URLs will be returned. The 3 possible upload URLs are, a URL to the ShotGrid application server, a pre-signed S3 upload URL, or a pre-signed S3 multi-part upload URL (for upload part 1). All 3 behave very similarly, the file data must be the body of a PUT request to the URL and the headers for Content-Type and Content-Size must be set.
Have you tried to set Content-Type and Content-Size headers when performing the PUT request?
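Putting that doc excerpt into practice, here is a minimal sketch. It assumes the pre-signed URL from the response's `links.upload` field is in `$UPLOAD_URL` and the local media path is in `$FILE` (both placeholders), and uses `application/octet-stream` as a stand-in Content-Type; the helper just derives the two required header values from the file itself:

```shell
# Hypothetical helper: derive the headers ShotGrid's docs ask for
# (Content-Type plus the body size) from a local file.
build_upload_headers() {
  file="$1"
  # The size header must match the exact byte count of the body.
  size=$(wc -c < "$file" | tr -d ' ')
  printf 'Content-Type: application/octet-stream\n'
  printf 'Content-Length: %s\n' "$size"
}

# Usage sketch ($UPLOAD_URL and $FILE are placeholders, not real values):
# curl -X PUT \
#   -H "Content-Type: application/octet-stream" \
#   --data-binary "@$FILE" \
#   "$UPLOAD_URL"
```

Note that when curl is given --data-binary it computes Content-Length from the body itself, so an explicit Content-Length header is usually redundant in the curl case.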
UPDATE - possible clue here: The ShotGrid request for the "Upload URL" returns not only the pre-signed URL but a data object. Here's the full return:
{
  "data": {
    "timestamp": "[Timestamp]",
    "upload_type": "Attachment",
    "upload_id": null,
    "storage_service": "s3",
    "original_filename": "[FileName]",
    "multipart_upload": false
  },
  "links": {
    "upload": "[PreSignedUploadUrl]",
    "complete_upload": "[ShotGridPath-ProbablyIrrelevantHere]"
  }
}
I tried taking the data object and converting each element per your suggestion, e.g. `-H 'x-amz-meta-timestamp: timestamp'`, but so far I get the same error. Ideas?
In ShotGrid's documentation, there is information that the upload URL you are getting is an S3 pre-signed URL, so you don't need to add any signature. Please check this page.
Pre-signed URLs are designed to give permission to get or upload a file to S3 bucket without any additional permissions. The URL is valid for a limited amount of time.
It means that you can make a PUT request to this URL and just pass the file.
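To make that concrete: with a pre-signed URL, all of the authentication lives in the query string (X-Amz-Signature, X-Amz-Credential, and so on), so nothing auth-related goes into the request headers. A small, hypothetical helper to inspect the one operational limit, the X-Amz-Expires window (900 seconds in the response above):

```shell
# Hypothetical helper: pull X-Amz-Expires out of a pre-signed URL's query
# string; the PUT must happen within that many seconds of signing.
presigned_expiry_seconds() {
  printf '%s' "$1" | sed -n 's/.*X-Amz-Expires=\([0-9]*\).*/\1/p'
}

# The upload itself is then just the file as the PUT body, with no extra
# signature header (URL shown is a placeholder):
# curl -X PUT --data-binary '@/Path/To/Media/File' "$UPLOAD_URL"
```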
@MG, thank you -- very helpful, though no luck just yet. If I do the following:
curl -X PUT -d '@/Path/To/Media/File' 'http://[region].amazonaws.com/entire/presigned/url?IncludingQueryString'
I get no response and the upload fails silently. If instead I run the same cURL replacing "PUT" with "POST", I get the error "SignatureDoesNotMatch".
Make sure you have been given a role with permission to upload objects to S3. I also recommend using the AWS SDK. Follow the format described in the AWS user guide, then try again.
@AWS-User00319350 - Thanks. I'm not the dev creating the upload URL per se. I request it from ShotGrid (Autodesk) software, which prepares it and sends it back for use. So I think any roles associated with the process would be controlled by ShotGrid/Autodesk. Re "AWS SDK", do you mean AWS's
aws-cli
terminal app? I've used it and it's great, BUT would it be usable in a situation where I don't have direct developer-level access to the S3 bucket?
@cjuega Thank you! That's a lot of detail and it was enormously helpful. A couple of issues in my particular case: the only headers sent when creating the URL were
-H 'Authorization: Bearer {Token}' \
-H 'Accept: application/json'
Sending the token to AWS gets you an error, so I left that out. What made a major difference was including the Content-Type and Content-Length headers. After adding those, the curl call uploaded the file for the first time. Unfortunately, right at the end of the upload I get the error "(56) LibreSSL SSL_read: SSL_ERROR_SYSCALL, errno 54". Trying to work that out now.
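One thing worth checking for an error that only appears at the very end of the transfer: `-d` (used in the earlier curl calls) strips newlines from an @file body, so the bytes actually sent can be fewer than the file's real size; `--data-binary` (or `-T`/`--upload-file`, which also performs a PUT and computes Content-Length for you) sends the file verbatim. A local demonstration of the difference, no network involved:

```shell
# Build a small file whose real size differs from its newline-stripped size.
printf 'line1\nline2\n' > /tmp/demo_body      # 12 bytes on disk

# What curl -d '@file' would send: newlines removed.
d_size=$(tr -d '\n' < /tmp/demo_body | wc -c | tr -d ' ')

# What curl --data-binary '@file' would send: the file byte-for-byte.
bin_size=$(wc -c < /tmp/demo_body | tr -d ' ')

echo "-d would send $d_size bytes; --data-binary would send $bin_size bytes"
# → -d would send 10 bytes; --data-binary would send 12 bytes
```

If the server was told to expect the file's true size, a short body like this can leave the connection hanging or reset at the end, which is consistent with an errno 54 (connection reset) right as the upload finishes.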