AWS: Multipart upload results in 403 Forbidden error even though single part upload works fine

CONTEXT:
In my app, I have a feature that allows the user to upload a video. I noticed that when users try to upload large videos, the upload sometimes fails.
After doing a bit of research, I found out that for files larger than 100 MB I should use multipart upload.
So I have been following this tutorial to implement multipart upload in my app, and I have reached Stage Three.


PART 1: Previous single part upload works fine
This is the implementation of a single-part upload using pre-signed URLs:

BACKEND

var AWS = require("aws-sdk");
const REGION = "*************************"; //e.g. "us-east-1"
const BUCKET_NAME = "l****************";
AWS.config.update({ region: REGION });

const s3 = new AWS.S3({
  signatureVersion: "v4",
  apiVersion: "2006-03-01",
});

var getVideoSignedUrl = async function (key) {
  return new Promise((resolve, reject) => {
    s3.getSignedUrl(
      "putObject",
      {
        Bucket: BUCKET_NAME,
        Key: key,
        ContentType: "video/*",
        ACL: "public-read",
        Expires: 300,
      },
      (err, url) => {
        if (err) {
          reject(err);
        } else {
          resolve(url);
        }
      }
    );
  });
};

exports.getVideoSignedUrl = getVideoSignedUrl;

FRONTEND

export const getVideoPreSignedUrl = async () =>
  await axios.get("/api/profile/getVideoPreSignedURL");
export const uploadVideoFileToCloud = async (file) => {
  const { data: uploadConfig } = await getVideoPreSignedUrl();

  await axios.put(uploadConfig.url, file, {
    headers: {
      "Content-Type": file.type,
      "x-amz-acl": "public-read",
    },
    transformRequest: (data, headers) => {
      delete headers.common["Authorization"];
      return data;
    },
  });
};

PART 2: Multipart upload, which throws a 403 Forbidden error

BACKEND

var AWS = require("aws-sdk");
const REGION = "***********************"; //e.g. "us-east-1"
const BUCKET_NAME = "************************";
AWS.config.update({ region: REGION });

const s3 = new AWS.S3({
  signatureVersion: "v4",
  apiVersion: "2006-03-01",
});

// ==========================================================
// Replacing getVideoSignedUrl with initiateMultipartUpload
// That would generate a presigned url for every part

const initiateMultipartUpload = async (object_name) => {
  const params = {
    Bucket: BUCKET_NAME,
    Key: object_name,
    ContentType: "video/*",
    ACL: "public-read",
    Expires: 300,
  };

  const res = await s3.createMultipartUpload(params).promise();

  return res.UploadId;
};

const generatePresignedUrlsParts = async (object_name, number_of_parts) => {
  const upload_id = await initiateMultipartUpload(object_name);
  const baseParams = {
    Bucket: BUCKET_NAME,
    Key: object_name,
    UploadId: upload_id,
  };

  const promises = [];

  for (let index = 0; index < number_of_parts; index++) {
    promises.push(
      s3.getSignedUrlPromise("uploadPart", {
        ...baseParams,
        PartNumber: index + 1,
      })
    );
  }

  const res = await Promise.all(promises);

  const signed_urls = {};

  // forEach, not map: we only want the side effect of filling the object
  res.forEach((signed_url, i) => {
    signed_urls[i] = signed_url;
  });
  return signed_urls;
};

exports.initiateMultipartUpload = initiateMultipartUpload;
exports.generatePresignedUrlsParts = generatePresignedUrlsParts;
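
As an aside, the loop that copies the signed URLs into an index-keyed object can be written in one step with `Object.fromEntries`; a minimal sketch with placeholder URLs standing in for the result of `Promise.all(promises)` (no AWS calls involved):

```javascript
// Placeholder data standing in for the resolved array of presigned URLs
const res = [
  "https://example.com/part-1",
  "https://example.com/part-2",
  "https://example.com/part-3",
];

// Build { 0: url, 1: url, ... } without mutating an accumulator object
const signed_urls = Object.fromEntries(res.map((url, i) => [i, url]));

console.log(signed_urls);
```

This is behaviorally equivalent to the loop above; it just avoids using `map` purely for side effects.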

FRONTEND

This is where the error occurs. See `const resParts = await Promise.all(promises)` below.

export const getMultiPartVideoUploadPresignedUrls = async (number_of_parts) => {
  const request_params = {
    params: {
      number_of_parts,
    },
  };
  return await axios.get(
    "/api/profile/get_multi_part_video_upload_presigned_urls",
    request_params
  );
};

// Using multipart upload

export const uploadVideoFileToCloud = async (video_file, dispatch) => {
  // Each chunk is 100 MB
  const FILE_CHUNK_SIZE = 100_000_000;
  let video_size = video_file.size;
  let video_size_in_mb = Math.floor(video_size / 1000000);

  const number_of_parts = Math.floor(video_size_in_mb / 100) + 1;

  const response = await getMultiPartVideoUploadPresignedUrls(number_of_parts);
  const urls = response.data;
  console.log(
    "🚀 ~ file: profileActions.js ~ line 654 ~ uploadParts ~ urls",
    urls
  );
  // async function uploadParts(file: Buffer, urls: Record<number, string>) {
  // const axios = Axios.create()
  // delete axios.defaults.headers.put["Content-Type"];

  const keys = Object.keys(urls);
  const promises = [];

  for (const indexStr of keys) {
    const index = parseInt(indexStr, 10);
    const start = index * FILE_CHUNK_SIZE;
    const end = (index + 1) * FILE_CHUNK_SIZE;
    // Every part except the last is a full chunk; the last part takes
    // whatever remains of the file
    const blob =
      index < keys.length - 1
        ? video_file.slice(start, end)
        : video_file.slice(start);
    console.log(
      "🚀 ~ file: profileActions.js ~ line 691 ~ uploadParts ~ urls[index]",
      urls[index]
    );

    console.log(
      "🚀 ~ file: profileActions.js ~ line 682 ~ uploadParts ~ blob",
      blob
    );
    const upload_params = {
      headers: {
        "Content-Type": video_file.type,
        "x-amz-acl": "public-read",
      },
      transformRequest: (data, headers) => {
        delete headers.common["Authorization"];
        return data;
      },
    };
    const axios_request = axios.put(urls[index], blob, upload_params);
    promises.push(axios_request);
    console.log(
      "🚀 ~ file: profileAction.helper.js ~ line 117 ~ uploadParts ~ promises",
      promises
    );
  }
  // Uploading video parts
  // This throws the 403 forbidden error
  const resParts = await Promise.all(promises);
  // This never gets logged
  console.log(
    "🚀 ~ file: profileAction.helper.js ~ line 124 ~ uploadParts ~ resParts",
    resParts
  );

  // return resParts.map((part, index) => ({
  //   ETag: (part as any).headers.etag,
  //   PartNumber: index + 1
  // }))
};
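
For reference, the part count and slice boundaries can be derived directly from the byte size, which avoids the intermediate megabyte conversion used above (where an exact multiple of 100 MB would produce one extra empty part). A minimal sketch in plain JS, no upload involved; `partCount` and `partRange` are illustrative helper names, not part of the original code:

```javascript
const FILE_CHUNK_SIZE = 100_000_000; // 100 MB per part

// Number of parts needed to cover `size` bytes
const partCount = (size) => Math.ceil(size / FILE_CHUNK_SIZE);

// Byte range [start, end) for part `index` (0-based); Blob.slice()
// clamps `end` past the file size, so the last part is simply shorter
const partRange = (index) => [
  index * FILE_CHUNK_SIZE,
  (index + 1) * FILE_CHUNK_SIZE,
];

console.log(partCount(250_000_000)); // → 3
console.log(partRange(2)); // → [200000000, 300000000]
```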

This is the error that's logged:
PUT 403 forbidden error

PART 3: AWS Bucket & CORS policy:

  1. CORS Policy:

    [
      {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["PUT", "POST", "GET"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
      }
    ]

  2. Bucket policy hasn't been changed since I created the bucket and it's still empty by default:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "Statement1",
          "Principal": {},
          "Effect": "Allow",
          "Action": [],
          "Resource": []
        }
      ]
    }

[Screenshot: current bucket policy]

So maybe I should add something here?
I also have all of these unchecked:
[Screenshot: bucket permissions]

NOTES:

  1. I tested multipart upload with files both smaller and larger than 100 MB, and it always throws the 403 Forbidden error.
  2. I don't understand why I would get a Forbidden error when the single-part upload works just fine. In other words, uploading is allowed, and since both single-part and multipart upload use the same credentials, that Forbidden error should not occur.
  3. I have a piece of code that shows the progress of the upload, and I can see the upload progressing. The error seems to occur AFTER the upload of EACH PART is done:
    [Upload progress screenshot 1]
    [Upload progress screenshot 2]
asked 2 years ago · 2495 views
1 Answer

Hi.

Good question. Multipart upload permissions are a little different from a standard s3:PutObject, and given that your error only happens with multipart upload and not with a standard S3 PutObject, it could be a permission issue.

If using IAM, the following permissions are typically needed:

  • s3:PutObject (Needed for object upload in general)

  • KMS permissions (Needed if the bucket uses SSE-KMS encryption)

  • s3:ListMultipartUploadParts

  • s3:AbortMultipartUpload

  • s3:ListBucketMultipartUploads
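
For example, an IAM policy statement covering those S3 actions might look like the following sketch (the bucket name is a placeholder you would substitute; note the list action applies to the bucket ARN while the object actions apply to the objects):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowMultipartObjectActions",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    },
    {
      "Sid": "AllowListBucketMultipartUploads",
      "Effect": "Allow",
      "Action": "s3:ListBucketMultipartUploads",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME"
    }
  ]
}
```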

From an ACL perspective, you may need to validate that the ACL on the bucket is set up properly (READ on the bucket allows ListBucketMultipartUploads):

https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#acl-access-policy-permission-mapping
https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html

If the permission issue is fixed, that should do the trick.

jsonc
answered 2 years ago
