Route53 not resolving custom domain for CloudFront

I have set up an S3 bucket to serve a website over HTTPS via CloudFront. Within that bucket, I have a folder for each environment, and a set of version folders within the production folder:

my-bucket
|-dev
|-prod
  |-v1.0
  |-v1.0.1
...etc.

My initial (development) setup worked fine at https://dev.mydomain.org, but when I changed the alternate domain names on the CloudFront distribution to the production names (i.e. mydomain.org and www.mydomain.org) and updated the Origin path, it stopped working.
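
For reference, this is roughly how I've been checking what the distribution currently has configured (the distribution ID is a placeholder; assumes the AWS CLI and jq are installed and configured):

aws cloudfront get-distribution-config --id EXXXXXXXXXXXXX | jq '.DistributionConfig | {Aliases: .Aliases.Items, OriginPaths: [.Origins.Items[].OriginPath]}'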

I am using Route53 to manage DNS and I haven't deleted the hosted zone. As far as I can tell, the NS records in the hosted zone still match the name servers registered for the domain (which makes sense, given I haven't touched the zone). I HAVE removed and recreated the AAAA alias records that point the two alternate domain names at my CloudFront distribution. Several times. No dice. I have also removed the alternate domain names from the distribution and re-added them, re-creating the Route53 AAAA records after each change to the distribution.
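
This is roughly how I've been confirming which alias records the hosted zone actually contains (the hosted zone ID is a placeholder; assumes the AWS CLI is configured):

aws route53 list-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --query "ResourceRecordSets[?Type=='A' || Type=='AAAA']"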

I can dig my domain without error, and the Route53 record test also comes back fine (for what it's worth). However, if I try to ping or curl the domain, I get 'unknown host' errors, and visiting https://mydomain.org in a browser similarly fails with DNS_PROBE_FINISHED_NXDOMAIN.
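
For completeness, these are the sort of checks I've been running - comparing my machine's default resolver with a public one (domain as above):

dig mydomain.org A +short              # what my default resolver returns
dig mydomain.org AAAA +short
dig @8.8.8.8 mydomain.org A +short     # what Google's public resolver returns
dig @8.8.8.8 mydomain.org AAAA +short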

In case it's relevant, I have this CloudFront Function attached to the distribution to redirect https://www.mydomain.org requests to https://mydomain.org:

function handler(event) {
  // Viewer-request function: redirect www.mydomain.org to the apex domain.
  var request = event.request;
  var host = request.headers.host.value;

  if (host.startsWith('www.')) {
    // 301 to the apex domain (note: the original request path is not preserved here).
    var response = {
      statusCode: 301,
      statusDescription: 'Moved Permanently',
      headers: {
        location: {
          value: 'https://' + host.replace('www.', ''),
        },
      },
    };

    return response;
  }

  // Not a www request - pass it through unchanged.
  return request;
}

However, unlinking the function doesn't resolve the issue. (I have tested the function in the AWS console and it seems fine.)

CloudFront itself seems to be OK. If I hit the CloudFront domain (e.g. d3abcdefghi123.cloudfront.net), I can see the site in all its glory.
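
One check I can run to take DNS out of the picture entirely is to point curl at one of the distribution's IPs while still sending the custom host name (domains as above; --resolve overrides the lookup for just that one request):

IP=$(dig +short d3abcdefghi123.cloudfront.net A | head -n1)
curl -sv --resolve mydomain.org:443:$IP https://mydomain.org/ -o /dev/null

If that succeeds and returns the site, CloudFront and the certificate are fine and the problem really is just DNS.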

I have created a TLS/SSL certificate through ACM that covers both the apex domain and the wildcard (*.mydomain.org). I don't think there's a problem there.

Something just refuses to resolve my custom domain to the CloudFront distribution. I know DNS changes can be slow - maybe I'm just too impatient? I don't want to create a new CloudFront distribution if I can help it - surely it's possible to update the Origin path and alternate domain names in place? Given the site works fine through the default CloudFront domain, I don't think I need to invalidate the cache either.

I'm out of ideas at this point, so suggestions would be most welcome.

Regards, Flic

Flic
asked 2 years ago · 1,230 views
1 Answer
Accepted Answer

There were a couple of things I did to resolve this. First, I flushed the DNS cache on my MacBook Pro with the following command in a terminal session:

sudo dscacheutil -flushcache;sudo killall -HUP mDNSResponder

For good measure, I cleared the web cache in all my browsers and restarted the laptop as well. The big clue that pointed me in this direction was that the site worked on another computer that hadn't previously been used to view it, which suggested the problem was with the laptop itself. Make sure to give the DNS cache some time to clear (a few minutes should be more than sufficient).

Another thing I had overlooked was creating A record aliases to the CloudFront distribution - I had only created AAAA records (AWS recommends creating both when IPv6 is enabled on the distribution). In my case, the problem was tricky to diagnose because I hotspot off a mobile phone. Most of the time I'm allocated an IPv6 address, so the site looked fine; however, it would randomly stop responding to browser requests. I suspect that on those occasions the laptop was allocated an IPv4 address and was looking for an A record instead of an AAAA. Why do I think this? I asked friends to check out my site and they DIDN'T see it on the first go - so DNS caching on their machines was not the issue.

Moral of the story: create BOTH A and AAAA alias records. IPv4 is going to be around for a long while yet, and you simply can't predict which record type your audience's resolvers will ask for. If you're getting bad responses and dig doesn't report errors, flush BOTH the web and DNS caches as well.
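
If you prefer the CLI to the console, an A + AAAA alias pair for the apex looks roughly like this (the Route53 hosted zone ID and CloudFront domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for every CloudFront alias target; repeat the batch for www.mydomain.org):

cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mydomain.org.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d3abcdefghi123.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mydomain.org.",
        "Type": "AAAA",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d3abcdefghi123.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch file://change-batch.json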

Hope someone else finds this useful.

Cheers, Flic

Flic
answered 2 years ago
