Configure multiple static sites mapping subdomains to S3 subdirectories using CloudFront, a CloudFunction, and Route53

0

Is it possible to use a CloudFunction to configure the desired behavior? Are there some obvious things to try that I am missing?

  • http://subdomain1.my-domain.com >>> s3://my-bucket/subdomain1/index.html
  • http://subdomain2.my-domain.com >>> s3://my-bucket/subdomain2/index.html

Current configuration looks like this:

Route53

  • A Record is configured for wildcard *.domain.com

S3

  • A single private bucket
  • Contains deployments like the following:
my-bucket/subdirectory1/index.html
my-bucket/subdirectory2/index.html

CloudFront

  • CachePolicy exposes Host
  • OriginRequestPolicy uses the managed CORS-S3Origin policy
  • OAI policy grants the distribution access to the private S3 Bucket

CloudFunction

  • Rewrites request.uri using subdomain as subdirectory from request.headers.host.value.
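
The rewrite logic looks roughly like this (a sketch of my CloudFront Function; the domain is a placeholder):

```javascript
// CloudFront Function, viewer-request event
function handler(event) {
    var request = event.request;
    // e.g. 'subdomain1.my-domain.com' -> 'subdomain1'
    var subdomain = request.headers.host.value.split('.')[0];
    // Prefix the URI with the subdomain so it maps to the bucket subdirectory
    request.uri = '/' + subdomain + request.uri;
    return request;
}
```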

The wildcard domain is configured correctly; I can reach the CloudFront distribution at abc.domain.com, def.domain.com, etc.

I believe the problem exists between CloudFront and S3.

When the CachePolicy header includes Host for the CloudFunction, I am able to use the value to grab the subdomain to rewrite request.uri. However, including Host results in a SignatureDoesNotMatch error response from CloudFront.

I see this error:

<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
...

I have found an older example that uses Lambda@Edge rather than a CloudFunction. I thought I would ask here before trying it, because that example edits request.headers.host, and I have a feeling it was written before that header became read-only.

FWIW, I also have many working deployments without the wildcard domain where the static site is in the root of the bucket.

Thanks for any insights!

3 Answers
1
Accepted Answer

I figured it out: I was missing the s3:ListBucket permission (there is no s3:ListObjects IAM action) in the bucket policy.

There was no need to include Host in the cache policy's header list; in fact, including Host is what broke the signing and sent me down this path. Removing it set things right.
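
For anyone else hitting this, the bucket policy ended up looking roughly like the following (a sketch; the OAI ID and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
```

Without s3:ListBucket, S3 returns 403 AccessDenied for missing keys instead of 404, which can mask the real problem.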

I was able to use the CloudFunction just fine to rewrite request.uri.

johnny
answered 2 months ago
  • I'm working on the same solution and I'm stuck on the same problem. Can you share more details on how you solved it? How do you rewrite request.uri?

  • here are the basics:

        function handler(event) {
            var request = event.request;
            var subdomain = request.headers.host.value.split('.mydomain.com')[0];
            request.uri = '/' + subdomain + request.uri;
            return request;
        }
0

It's not clear to me why you need to rewrite the host header when requesting items from S3. I think you're setting up the S3 bucket as a website, but you don't have to do that: it's far easier and more secure to disable public access on the S3 bucket and use Origin Access Control (OAC) to secure access from CloudFront to S3: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html

When you do that you can configure a combination of behaviours and origins (you can have multiple origins, each pointing to a different prefix (read: folder) in S3) and then use a Lambda@Edge function to ensure that the correct origin is selected based on request.headers.host (because that will match the DNS name requested by the client).

That said, I'm also curious why you use a wildcard in Route 53 and a single CloudFront distribution here. I'm not saying it doesn't make sense: if you have thousands of potential hosts/websites, then a single distribution with a single Lambda@Edge function is (in a sense) quite simple.

However, consider that there is no charge for CloudFront distributions themselves. A simpler configuration would be one CloudFront distribution per host/website. This gives you far more granular control over each individual website: there's no shared fate in terms of caching or other CloudFront settings. You can still use a single S3 bucket at the back end (sharing the OAC) so you don't have to change the structure there at all, but you could also have multiple S3 buckets if you wanted to. This approach lets you do away with Lambda@Edge, which saves some cost, though it does require more Route 53 records.

I could also argue that the existing solution is quite complex in that an erroneous change could affect all of your hosts/websites.

Either way: Whether you use the existing solution (wildcard DNS record, single CloudFront distribution, Lambda@Edge) or a different one (multiple DNS records, multiple CloudFront distributions) I strongly recommend that you automate the deployment and management of the solution. As you do more and add additional hosts/websites it will save you time and reduce errors through misconfiguration.

AWS EXPERT
answered 2 months ago
  • Thanks for the detailed response. The bucket is private, the distribution has access through an OAI policy. I can switch to OAC but it's working as-is.

    I'm not trying to rewrite the host, I am trying to access the subdomain from the request to rewrite the request.uri to the appropriate subdirectory in the origin bucket. Exposing the Host header in the CachingPolicy seems like the correct way to give the CloudFunction access to the information but in doing so, it also causes the SignatureDoesNotMatch error.

  • In my digging, an example using Lambda@Edge suggested manually changing request.headers.host to the origin host (the bucket) in the function; my assumption is that this is a means of avoiding the SignatureDoesNotMatch issue. I agree with you: I don't want to do this, and I don't think it should be necessary.

    I am building an internal dev tool with automated deployments that are low traffic / ephemeral / discarded. Using a single CloudFront distribution for the wildcard DNS and single bucket felt like the right amount of complexity. There's nothing more to automate other than syncing to s3 and letting the deployments expire with a reasonable cache policy.

0

Not sure if I missed something in the details: do you have a single target bucket or multiple? If it's a single bucket and only subfolders need to map to subdomains, why can't you use OAC combined with a Lambda@Edge origin-request function to pick a dynamic origin and apply the path instead?

Anand
answered 2 months ago
  • Single target bucket, I am asking if it's possible to use a CloudFunction and not Lambda@Edge. It does not seem to work.

  • Sure, I see now!

    Why would you want it in a CloudFront Function: is it so you can take advantage of CloudFront caching, I suppose (which will be no different even if you use Lambda@Edge)?

    Anyway, were you able to test whether the path rewrite is working properly?
