
Access denied when trying to GET objects uploaded to an S3 bucket via the AWS SDK using CloudFront

0

Using client-s3 SDK signed URLs, I was able to PUT and DELETE objects in my S3 bucket. But when trying to access those same objects with a GET request via CloudFront, S3 denies me access (Access Denied). In the permissions section of the bucket, I set the bucket policy to allow GET requests from my CloudFront distribution. I know it works because when I upload objects via the S3 console, I can use CloudFront signed URLs to GET those objects from S3. But once I programmatically upload an object with client-s3, I am unable to GET it through my CloudFront distribution.

5 Answers
1

Hi there,

Sharing access to objects uploaded to S3

I created an IAM user. And that user is what's used to PUT the objects in S3. So it means the policy actions only apply to that user; therefore, that user is the only one allowed to access those objects.

I might be misinterpreting, but if I understand what you're saying, this is not accurate. Once you write an object into your bucket, there are several ways to grant access to others. A few common examples include:

  1. The entity is an IAM role or user in your account whose IAM policy allows S3 access (e.g. s3:GetObject). Your root user will have full access by default.

  2. If the entity is an anonymous user (any non-AWS authenticated request), the S3 bucket policy or an individual object's ACL allows anonymous reads.

  3. If the entity is an IAM principal from another account, then both the principal's IAM policy and your S3 bucket policy need to grant the principal permission to S3.

  4. Your IAM users or roles can generate an S3 signed URL for a given object (assuming they have this permission via an IAM policy). Anyone with that URL can access your object as long as the URL hasn't expired.

  5. You set up a CloudFront distribution with an S3 bucket as its origin and your bucket configuration is correct. If you're using SSE-S3 encryption, you must also use a CloudFront Origin Access Identity (OAI). If you're using SSE-KMS, you must use Lambda@Edge to modify the request so the object can be decrypted (otherwise, you will get an access denied; see this blog post for detailed instructions).

There are certain scenarios where the above methods won't work (or won't work without additional steps). For example, an explicit Deny in a bucket policy will always overrule an Allow in an IAM policy, and vice versa. Per #5, using SSE adds additional requirements.

Identity used by CloudFront

I am guessing CloudFront is using the root user of my account to access the objects in the bucket, that's why it's getting denied access. So my new question is, how do I make it that CloudFront accesses the bucket as the user I created as opposed to the root user

This is not accurate. When communicating with your S3 bucket, CloudFront cannot use your root identity or any other IAM identity in your account. You should think of CloudFront like any other anonymous user (i.e. does not have an IAM identity), with one key exception: you can optionally create a special type of CloudFront user called an Origin Access Identity (OAI) and attach it to your CloudFront Distribution. Similar to an IAM identity, you can refer to this OAI in your bucket policy and grant CloudFront access to your bucket.
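For illustration, a bucket policy statement that grants such an OAI read access might look like the following. The OAI ID (`E2EXAMPLE`) and bucket name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```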

Why can't CloudFront access objects you create with the client-s3 SDK?

Under the hood, the S3 web console, AWS SDK, and AWS CLI all use the same S3 REST APIs.

Therefore, if objects uploaded from the web console can be retrieved without errors but objects uploaded with the S3 SDK cannot, that leaves one or both of only two possibilities:

  1. When uploading objects, the AWS principal (i.e. IAM user, role, or account root) you're using in the S3 web console is from an AWS account that is not the same as the account you use with the S3 SDK. TL;DR: things get a bit more complicated if Account A uploads objects to Account B's bucket and Account B wants to read them or share them with others outside of Account B. I'm guessing you're doing everything from a single AWS account and, if so, we can rule this out.

  2. While the same API (PutObject) is used in both methods, the parameters issued are different, and you're getting an error because a parameter is either missing or has the wrong value in your SDK call.

#2 is almost certainly your problem, but without performing a detailed comparison of the settings you're choosing in the web console vs. the S3 SDK, I can't say for certain. If this is the cause, the two settings I can think of that might cause a problem are:

  • S3 Object ACL
  • Encryption settings - whether you're using server-side encryption, and which method, can affect whether CloudFront can read from S3.
answered 10 months ago
  • Thanks a lot. I just came to that theory about the identity being the issue because I just couldn't figure out the problem. Yes, I'm doing everything from a single AWS account, so it can't be #1. As for #2, ACLs and server-side encryption are disabled on my bucket. I haven't enabled SSE yet, because I want things to work first. If there's anything you need to help investigate the issue further, I will provide it. Thanks again for your time

0
Accepted Answer

Apparently, the issue was in how I was naming the objects in the bucket. I added a timestamp in ISO 8601 format to each object's name when creating the S3 presigned URL for the PUT request. When CloudFront tries to access the object, some of those reserved characters in the date string get changed along the way, so the key CloudFront requests doesn't exist, and S3 returns Access Denied. Thanks for the help
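For anyone hitting the same failure mode: characters such as ":" and "." in an ISO 8601 timestamp can be percent-encoded by some URL-handling layers and not others, so the key signed for the PUT and the key requested through CloudFront end up differing. A hypothetical fix is to keep reserved characters out of the key entirely; the naming scheme below is an assumption, not the poster's actual code:

```javascript
// Hypothetical key sanitizer: replace ":" and "." in the timestamp
// so the key survives URL handling unchanged on both PUT and GET.
function safeKey(fileName) {
  const stamp = new Date().toISOString().replace(/[:.]/g, "-");
  return `${stamp}-${fileName}`;
}
```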

answered 10 months ago
0

Are you using ACLs in your bucket, by any chance? It sounds as though when you PutObject via the SDK, there is an object ACL being applied which prevents CloudFront from accessing it, despite this being allowed via your Bucket policy. We recommend that you disable ACLs, except in unusual circumstances where you do need to control access to objects individually: https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html

You can disable ACLs via the Object Ownership section on the permissions tab of your bucket, in the console.

answered 10 months ago
  • Agreed, and it should be checked and verified. However, as far as I know, SDKs should not put any ACL on objects unless it's done explicitly, e.g. through setObjectAcl https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-s3-access-permissions.html#set-the-access-control-list-for-an-object

  • ACLs are disabled in my bucket. I use the bucket policy only.

    @Jason_S I don't use S3 pre-signed URLs at all for GET; I am trying to use a CloudFront URL for that functionality. Although prior to deploying a CloudFront distribution, I used different S3 presigned URLs for GET and PUT, and that worked fine

  • Can you please share the code sample where you generated the CloudFront signed URL, including the import statements, if possible. thanks

  • @Jason_S I actually used a hacky solution to do this. I am using the AWS SDK v3 for JavaScript in my project. But at the moment, the SDK v3 doesn't support signed URLs for CloudFront, so I am running a shell command with the AWS CLI to get the signed URL and returning that URL to the client.

    Here's the code:

```typescript
import { Arg, Ctx, Mutation, Resolver, UseMiddleware } from "type-graphql";
import { exec } from "shelljs";
import path from "path";
import dayjs from "dayjs";

@Resolver()
export class FileResolver {
  // Resolver class name assumed; the original snippet showed only the method.
  @Mutation(() => String)
  async getSignedFileFromS3(@Arg("key") fileName: string): Promise<string> {
    const time = dayjs().add(60, "second").unix();
    const keyPairId = process.env.AWS_CLOUD_FRONT_KEY_PAIR_ID;
    const pathToPrivateKey = path.join(
      __dirname,
      "/../../../awsKeys/private_key.pem"
    );
    // The CLI command must be wrapped in a template literal (backticks)
    const signedUrl = exec(
      `aws cloudfront sign --url ${process.env.AWS_CLOUD_FRONT_DOMAIN}/${fileName} --key-pair-id ${keyPairId} --private-key file://${pathToPrivateKey} --date-less-than ${time}`
    );
    // exec() returns a ShellString; convert and strip the trailing newline
    return signedUrl.toString().trim();
  }
}
```

0

If you want to get a signed URL from CloudFront, you should use the CloudFront SDK. The S3 SDK will give you an S3 pre-signed URL.

That is assuming that you have set up the CloudFront OAI correctly in your S3 bucket, which seems to be true based on your question.

answered 10 months ago
  • I am able to get CloudFront signed URLs. The issue is that I can't access objects uploaded via S3 pre-signed URLs using the CloudFront signed URL; S3 denies me access to those objects. I am thinking it's a permission issue, but I am not sure what to change

  • Just to clarify, are you using the same pre-signed URL for the GET that you use for the PUT operation? If so, can you please try using two pre-signed S3 URLs, one for the PUT and another for the GET, each with the appropriate HTTP verb specified in the SDK?

0

I think I figured out the issue. I created an IAM user, and that user is what's used to PUT the objects in S3. So it means the policy actions only apply to that user; therefore, that user is the only one allowed to access those objects. I am guessing CloudFront is using the root user of my account to access the objects in the bucket, and that's why it's getting denied access. So my new question is: how do I make CloudFront access the bucket as the user I created, as opposed to the root user?

answered 10 months ago
