Account_A is the account with the bucket, and Account_B has a Lambda that generates the presigned URL and passes it to a UI hosted behind API Gateway. From this UI the CSV file is uploaded to S3 using the presigned URL.
There is an IAM role in Account_A that Account_B's Lambda uses to access the bucket. The Lambda assumes this role via STS to get an access key, secret access key, and session token, which are then used to generate the S3 presigned URL.
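The cross-account role in Account_A is set up roughly like this (the Account_B ID and role names below are placeholders, not the real values): a trust policy that lets the Lambda's execution role in Account_B assume it, plus a permissions policy for the bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACCOUNT_B_ID>:role/<lambda-execution-role>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
and attached to the same role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::procurementtest/*"
    }
  ]
}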
I get a 403 error (the browser reports strict-origin-when-cross-origin), and S3 returns the following response:
<Error>
  <Code>AccessDenied</Code>
  <Message>There were headers present in the request which were not signed</Message>
  <HeadersNotSigned>x-amz-date, x-amz-signature, x-amz-expires, x-amz-security-token, x-amz-algorithm, x-amz-credential</HeadersNotSigned>
  <RequestId>7QK2CN3B2AX214J3</RequestId>
  <HostId>kudrqupGdwxAq+nmGSd7EInbJyO0SxAtVdcNM+Q9ACtYy8V1Azc6hJdnNsxtuf8YYfAZjrMcj+I=</HostId>
</Error>
Here is the bucket policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::581338366865:root"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::procurementtest/*"
    }
  ]
}
Here is the CORS policy
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT",
      "GET",
      "HEAD"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
Lambda Code
// AWS SDK v2 clients (the .promise() / getSignedUrlPromise calls are v2 APIs)
const { S3, STS } = require("aws-sdk");
const stsClient = new STS();

getS3Client = async () => {
  // Assume the cross-account role in Account_A to get temporary credentials
  const data = await stsClient
    .assumeRole({
      RoleArn: s3roleAttributes.ROLE_ARN,
      RoleSessionName: roleAttributes.ROLE_SESSION_NAME
    })
    .promise();
  const { AccessKeyId, SecretAccessKey, SessionToken } = data.Credentials;
  // Build an S3 client with the temporary credentials, including the session token
  const s3client = new S3({
    accessKeyId: AccessKeyId,
    secretAccessKey: SecretAccessKey,
    sessionToken: SessionToken,
    region: s3bucketparams.region,
    signatureVersion: s3bucketparams.signatureVersion
  });
  return s3client;
};
const url = await s3client.getSignedUrlPromise("putObject", {
Bucket: s3bucketparams.bucket,
Key: `${filename}`,
ContentType: signedUrlparams.ContentType,
ACL: signedUrlparams.Acl,
Expires: signedUrlparams.signedUrlExpireSeconds,
})
UI Code
uploadToS3UsingPresignedUrl = (
  presignedUrl: string,
  file: File | null,
  s3uploadparams: any
) => {
  if (file === null) return;
  console.log("File inside uploadToS3");
  console.log(file);
  console.log("content", s3uploadparams);
  return axios.put(presignedUrl, file, {
    headers: {
      "Content-Type": "text/csv",
      "X-Amz-Credential": s3uploadparams.x_amz_credential,
      "X-Amz-Algorithm": s3uploadparams.x_amz_algorithm,
      "X-Amz-Date": s3uploadparams.x_amz_date,
      "X-Amz-Signature": s3uploadparams.x_amz_signature,
      "X-Amz-Security-Token": s3uploadparams.x_amz_security_token,
      "X-Amz-Expires": s3uploadparams.x_amz_expires,
      "X-Amz-Acl": "public-read"
    }
  });
};
These values appear in the presigned URL, so I extract them from it and add them as request headers. If I don't add them and just use
axios.put(presignedUrl, file)
I still get an Access Denied, with no other details in the error message.
But those headers aren't signed by the code where you generate the presigned URL - that's why you get the original error. The X-Amz-* values are already carried as query-string parameters of the presigned URL, so sending them again as request headers means S3 sees headers that were never part of the signature. Note also that the credentials you get by assuming the role in the Lambda are temporary - they expire after a short period, possibly before the presigned URL is ever used.

I would go back to basics: from your development machine, create a presigned URL with the minimum components (really just the content type) and get the upload working; then gradually add things; then move it into Lambda. Troubleshoot this like any other problem: start simple and work up to the more complex setup.
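A minimal version of that first experiment could look roughly like the sketch below (run on your dev machine with your normal credentials; the bucket, key, file name, and region here are placeholders):

const { S3 } = require("aws-sdk");
const axios = require("axios");
const fs = require("fs");

async function testMinimalPresignedUpload() {
  // Credentials and region come from the local AWS profile on the dev machine
  const s3 = new S3({ region: "us-east-1", signatureVersion: "v4" });

  // Sign only the essentials: bucket, key, content type, expiry
  const url = await s3.getSignedUrlPromise("putObject", {
    Bucket: "procurementtest",
    Key: "test.csv",
    ContentType: "text/csv",
    Expires: 300
  });

  // Plain PUT: no X-Amz-* headers - they are already query parameters of the URL
  await axios.put(url, fs.readFileSync("test.csv"), {
    headers: { "Content-Type": "text/csv" }
  });
}

Once that works end to end, add the assumed-role credentials and session token, then the ACL, one step at a time; whichever addition breaks the upload is your culprit.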