1 Answer
Binary files are typically encoded for upload (usually Base64), which increases their size during transfer; that is probably what you're seeing.
Instead, consider an API call that triggers a Lambda function to return a pre-signed URL to the client, which allows the client to upload directly to S3. That way you won't have to deal with the payload limit in API Gateway at all.
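As a quick illustration of the Base64 overhead (every 3 input bytes become 4 output characters, roughly a 33% increase):

```python
import base64

payload = b'\x00' * 3_000_000          # a 3 MB binary payload
encoded = base64.b64encode(payload)    # what actually crosses the wire

print(len(payload))   # 3000000
print(len(encoded))   # 4000000 -- about a third larger
```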
Some example Python code for the Lambda function:
import boto3

s3 = boto3.client('s3')
# s3BucketName, s3ObjectKey, objectContentType and linkExpiryTimeInSeconds
# are placeholders for your own values.
presignedURL = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': s3BucketName,
            'Key': s3ObjectKey,
            'ContentType': objectContentType},
    ExpiresIn=linkExpiryTimeInSeconds)
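On the client side, the file can then be PUT straight to S3 with any HTTP library. A minimal sketch using only the Python standard library (the function names here are mine, not part of the answer above; note the Content-Type header must match the ContentType used when the URL was generated, or S3 rejects the request):

```python
import urllib.request

def build_upload_request(presigned_url, data, content_type):
    """Build the HTTP PUT request that uploads bytes via a presigned URL."""
    return urllib.request.Request(
        presigned_url,
        data=data,
        method='PUT',
        headers={'Content-Type': content_type},
    )

def upload_via_presigned_url(presigned_url, file_path, content_type):
    """PUT a local file directly to S3; S3 returns HTTP 200 on success."""
    with open(file_path, 'rb') as f:
        req = build_upload_request(presigned_url, f.read(), content_type)
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Because the upload goes straight to S3, only the small JSON response carrying the presigned URL ever passes through API Gateway.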