
Route 53 to S3 - partially working, mostly not working as required



Very long post - my apologies - just trying to include a comprehensive context.

TL;DR I need some pointers to debug mapping of an existing domain through Route 53 to S3. Route 53 health checks are good and it looks like DNS/NS changes have propagated out OK (36+ hours after the change).

I could do with a little help troubleshooting my domain/Route 53/S3 setup. As per the tutorials and other pages, I'll use example.com and www.example.com instead of the actual name of my website.

My domain is registered with 123-reg - I'm not trying to transfer it to Route 53, and 123-reg does let me change nameservers. I'm testing using Chrome (from which I keep deleting cookies), wget and Insomnia.

I have two S3 buckets, example.com and www.example.com, as per the setup instructions for S3 web buckets. The www.example.com bucket redirects to the example.com bucket as per recommended practice. I can access both buckets via their direct URLs (example.com.s3-website-<zone>.amazonaws.com and www.example.com.s3-website-<zone>.amazonaws.com) and the underlying app works as expected. (The underlying app is a create-react-app build deployed through CodePipeline, just in case that's significant.)

For Route 53 setup, I have followed the steps in the AWS documentation and have also read through the related guides.

In Route 53, for Step 3: Create Records, I'm using an 'A' record type with an alias. The console form/validation drop-down offers s3-website-<zone>.amazonaws.com, i.e. the suffix of the S3 bucket endpoint but without the bucket-name prefix. If I try to change the alias to example.com.s3-website-<zone>.amazonaws.com then the validation fails. CNAME record types don't seem compatible with S3, as I can't use either s3-website-<zone>.amazonaws.com or example.com.s3-website-<zone>.amazonaws.com as valid entries - it seems to need to be an IP address.

So, my Route 53 record config is set as A // Alias (Y) // s3-website-<zone>.amazonaws.com for both the example.com and www.example.com records.
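For reference, the record the console builds can also be expressed programmatically. This is a hedged sketch only (not my actual setup): it assumes boto3-style change batches, the us-east-1 region, and the placeholder name example.com. The hosted zone ID shown is the published one for the us-east-1 S3 website endpoint.

```python
# Sketch only: the A/Alias record as a Route 53 change batch (boto3-style dict).
# "example.com", the region, and the single-entry zone ID table are assumptions.

# Published hosted zone ID for the us-east-1 S3 website endpoint.
S3_WEBSITE_HOSTED_ZONE_IDS = {"us-east-1": "Z3AQBSTGFYJSTF"}

def s3_alias_change_batch(domain, region="us-east-1"):
    """Build the change batch for an A/Alias record targeting the S3 website endpoint."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "AliasTarget": {
                    # Note: the alias DNSName is the regional endpoint suffix only,
                    # without the bucket-name prefix - matching the console drop-down.
                    "HostedZoneId": S3_WEBSITE_HOSTED_ZONE_IDS[region],
                    "DNSName": f"s3-website-{region}.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

# If you were using boto3, this would be applied via something like:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="<your-hosted-zone-id>",
#       ChangeBatch=s3_alias_change_batch("example.com"))
```

The point the sketch makes is the one the console validation enforces: the alias target is the regional endpoint, not a per-bucket hostname.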

I added a Route 53 health check at this point, which is consistently returning a healthy status - so I presume Route 53 can see the S3 Web Bucket OK (and from earlier tests I know the bucket itself is accessible).

For Step 6: Update the NS Record with Your Current DNS Service Provider to Use Route 53 Name Servers, on 123-reg I have changed the name server records to the AWS ones listed for my zone/record set and deleted the previous ones. I have also deleted all CNAME entries which used to point to an EC2 instance (but left the @ // MX entries on 123-reg).

I did all this 36+ hours ago.

(I've also followed the instructions for adding SSL using ACM and added the CNAME to my AWS Route 53 record set but not my 123-reg DNS. That doesn't seem to have materially impacted the rest of this one way or the other.)

Now I'm getting inconsistent and incomplete behaviour which I'm struggling to understand/unpick.

In Chrome, if I go to, I get to my S3 site and it navigates through OK.

However, if I go to I get a 'took too long to respond' timeout and Chrome then rewrites the URL to be (but stops as it has timed out).

If I try to go to I get a timeout too.

If I do an NS lookup and then browse to the looked-up IP address, it takes me to the Route 53 product page.

Listing all name servers for the domain, most of the entries come back as 52.218.x.x IP addresses, which [I'm almost certain] are AWS.

The main response block from digwebinterface (suitably redacted; resolver Comodo (US)) seems to be:

  dig +noadditional +noquestion +nocomments +nocmd +nostats @<resolver>
  example.com.  5       IN  A   <address>
  example.com.  172800  IN  NS  <ns1>
  example.com.  172800  IN  NS  <ns2>
  example.com.  172800  IN  NS  <ns3>
  example.com.  172800  IN  NS  <ns4>

These are the name servers I've used for Step 6 above.

The response for my domain from confirms that the name servers are indeed the AWS ones.

Finally, I tried testing my links in Insomnia, with the following results...

  • an HTTP/GET for returns my index.html page (as per Chrome)
  • an HTTP/GET for returns my index.html page (different to Chrome, which timed out)
  • an HTTPS/GET for either _example.com_ or _www.example.com_ returns my index.html page (same as Chrome)

Any suggestions on how to debug/fix?

Also, which bit of the stack do I need to change to make www.example.com the default URL (instead of example.com)? Do I need to swap the S3 bucket behaviour (so example.com points to www.example.com, not the other way around)? Do I need to swap the Route 53 aliases so they point to the www.example.com bucket, not the example.com one?

Many thanks in anticipation - John

asked 4 years ago · 57 views
8 Answers

Hi John,

The "Getting Started" topic in the Route 53 Developer Guide explains how to set up a website in an Amazon S3 bucket and how to configure Route 53 so internet traffic is routed to your bucket:

Recursive DNS resolvers typically cache the names of your name servers for 48 hours. If the recursive resolver that you're using submitted a DNS query for your domain's name servers just before you made the switch, it'll be another 12 hours or so before another query returns the names of your Route 53 name servers.

Sadly, because of the way S3 works, you need to redirect www.your-domain-name to your-domain-name. It has to do with the domain name that gets passed around in the Host header. I'll write this up someday to explain the details.
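To illustrate the Host-header point (a toy model with placeholder bucket names, not actual S3 internals): the website endpoint effectively selects a bucket by matching the incoming Host header against bucket names, which is why the "www." name needs its own bucket, and why that bucket can only redirect.

```python
# Toy model of S3 website endpoint routing: the bucket is chosen by matching
# the incoming Host header against bucket names. Not S3 source code.

def route_request(host, buckets):
    """Return (bucket_name, action) for a Host header, or None if no bucket matches."""
    if host not in buckets:
        return None  # S3 would serve a NoSuchBucket error page
    if "redirect_to" in buckets[host]:
        return (host, f"301 -> http://{buckets[host]['redirect_to']}")
    return (host, "serve content")

buckets = {
    "example.com": {},                                  # content bucket
    "www.example.com": {"redirect_to": "example.com"},  # redirect-only bucket
}
```

So a request with Host: www.example.com can only land on a bucket literally named www.example.com, whose job is to 301 the client back to the content bucket's name.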

Also, what's the name of your domain? (I don't need anything secret, like your hosted zone ID. I can check using just your public domain name.) I'll take a gander at your current DNS configuration to confirm that it's set up correctly.


answered 4 years ago

Hi Scott - many thanks for getting back to me.

The domain is "". I'll double-check the link you provided, though I think I've followed all the setup correctly - certainly as far as a "visual inspection" checks out. Hence seeing if there are other ways I can debug what's going on to pinpoint what I've done wrong.

Hoping you can shed some further light.

Cheers - John

answered 4 years ago

Hi John,

Everything seems to be in order from the DNS side. Take a careful look at that "Getting Started" topic. The process of routing traffic to an S3 bucket is filled with nitpicky details.


answered 4 years ago

Thanks for the confirmation there Scott.

Some more diagnostics from my side to help with future articles/amendments to materials.

Testing in the Chrome browser can introduce its own challenges. On closer inspection, part of the problem is that when requesting the site over http, Chrome does a 307 internal redirect and requests the 'https://' URL instead. This can be turned off in the config, but that runs the risk of messing up other default behaviour.

I've been testing in Firefox and Insomnia and looking at the network/timeline traffic, which is more informative.

There are a couple of inconsistencies between documentation and current interface/options.

The instructions in the Getting Started link you provided don't mention turning off the 'block public access' flags for the example.com bucket - it would be helpful to include these.

Also, for reader clarity, the second, optional www.example.com bucket doesn't seem to need any access/policy settings. The redirect instructions alone are enough.

Again for reader clarity - the value you put in the bucket redirect properties is not treated as a bucket name. It results in an HTTP/1.1 301 Moved Permanently response to the client, with the provided value treated as a domain name - so the client then requests that domain - which of course needs to be set up as a public DNS entry in order for the redirect to work.

If the user wants specifically to refer to the bucket, then the redirect on www.example.com needs to point to example.com.s3-website-<zone>.amazonaws.com (NB using the bucket ARN doesn't work).
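In other words (a sketch of the behaviour I observed, with placeholder names): the configured redirect value becomes the host part of the 301 Location header verbatim, so only a resolvable hostname - such as the full website endpoint - actually works as a target.

```python
# Sketch of the observed 301 behaviour: the configured redirect value is used
# verbatim as the host in the Location header. Names below are placeholders.

def redirect_location(redirect_host, request_path="/", protocol="http"):
    """Build the Location header an S3 redirect-all bucket returns."""
    return f"{protocol}://{redirect_host}{request_path}"

# A bare name is treated as a public domain the client must be able to resolve...
loc1 = redirect_location("example.com", "/about.html")
# ...whereas the full website endpoint hostname reaches the bucket directly.
loc2 = redirect_location("example.com.s3-website-<zone>.amazonaws.com", "/about.html")
```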

I've not marked this as answered yet, as I'm still trying to understand how I can diagnose what's going on w.r.t. DNS.

Also, when it comes to setting up Route 53, there seems to be some redundancy. I'm not clear which parts of the Route 53 record sets and aliases, or which parts of the S3 redirect, are responsible for which stage of the resolution from the client's original web request to the final response being served from the S3 bucket.

I'll keep investigating.

answered 4 years ago


Thanks for the heads up about the "Getting Started" topic. I haven't revisited it in a while; based on your comments, it's time.


answered 4 years ago

The description of the setup process is a little fragmented across the documentation for S3, Route 53, CloudFront and ACM. See the steps in the last post for a more concise, 'straight through' recipe.

answered 4 years ago


Thanks for the feedback. I'll work with the CloudFront writer to clarify how this all works across services.


answered 4 years ago

Another update. I've been using a related domain "" to do some comparative testing and experimenting with config.

Here are some observations/feedback for documentation...

There are at least two places I can redirect traffic of an S3 hosted website - in S3 itself (as per the setup instructions) and in Route 53 (and possibly in CloudFront - though I've not fully explored that).

My use case was to set up all three: S3 hosting, Route 53 for DNS/routing, and CloudFront to add https. For me (being a bear of very little brain), the documentation across the three wasn't clear enough that if I was using Route 53, then some of the S3 instructions may not apply - and so on.

The S3 'redirect' as per the instructions returns a client redirect (301 Moved Permanently) rather than acting as a *nix-style symbolic link - which I'd got the impression it was from the S3 documentation. If using S3 buckets as websites directly (e.g. for corporate storage accessed via the s3-website endpoint) this wouldn't matter, but in the context of public domain name hosting it became confusing.

It does not appear to be necessary to serve content from '' - there was an earlier comment (from memory, about "www." causing problems in headers). I've set up "" as an S3 bucket with no "" sibling - traffic routed by Route 53 via CloudFront (with that bucket as its origin), supporting both http and https - and it's working fine (a React app, so dynamic/public content is all being served fine).

With this setup, the URL in my client (browser) stays as "" rather than getting rewritten to "" by the S3 redirect.

Finally, I don't recall seeing anything about setting the Default Root Object to index.html in the CloudFront instructions for fronting an S3 website. I may have missed it, but it took me a while to realise that I was getting an Access Denied because I was trying to browse '/' rather than access a specific file. I found it and fixed it based on forums and articles rather than recalling having seen it in the main 'getting started' page - if it is there then perhaps it needs pointing out a little more clearly.

That was a bit of a dump of various learnings - hope it didn't sound too terse and hope it's of use when a documentation redraft comes around. Many thanks for the interaction and support - I'll flag this as fixed with the following summarised explanation:

If you want to set up an S3 hosted website using Route 53 to route domain traffic to it and using CloudFront to support https as well as http, then:

In S3:

  • set up your S3 bucket ''
  • set bucket permissions to unblock all four of the public access 'blocker' criteria
  • set bucket permissions and policy as per AWS instructions
  • for a React app (create-react-app), set both the index and error documents to index.html
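The bucket policy step above can be sketched as follows - this assumes the standard public-read website policy from the AWS docs, and the bucket name is a placeholder:

```python
import json

def public_read_policy(bucket):
    """Standard public-read policy for an S3 static website bucket (name is a placeholder)."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            # Policy targets the objects in the bucket, hence the /* suffix.
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }, indent=2)
```

Note that this only takes effect once the four 'block public access' flags are unblocked, as per the first bullet.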

In Route 53:

  • Create a hosted zone for your domain ''
  • Point your domain's name servers to the AWS ones for the hosted zone and delete any existing CNAME or A entries for your domain

Do not create a record linking Route 53 directly to your S3 bucket. (The traffic route goes from client to DNS lookup (Route 53) to CloudFront to S3, so we don't need a direct Route 53-to-S3 link.)

In CloudFront:

  • Create a distribution for your domain ''
  • Request a certificate from ACM for '' (also adding '') and use DNS verification
  • Set the Default Root Object to 'index.html'
  • Pick the http/https behaviour you want - I'm assuming you're going for both http and https. The behaviour when we test below will reflect what you pick here.
  • Use the ACM 'add the verification(s) to my name server' option - it will add CNAME(s) for the certificates to the Route 53 hosting zone for ''.
    - For the "Alternate Domain Names (CNAMEs)" add the web/domain name '' (and possibly '')

For the above, note the small print: certificates for CloudFront must be created in the US East (N. Virginia) region - this confused me for a while, as I tried to create ACM certificates in my own region as well and couldn't understand why CloudFront wasn't finding them.

Make a note of the CloudFront domain name, e.g. ''

Back in Route 53:

  • By the by, you should see the CNAME entries that were added when you created the ACM certificate(s)
  • Add an A/Alias entry for the name '' with a target of the CloudFront domain name you noted above (e.g. '')

C'est tout

You'll need to wait for the CloudFront distribution to deploy. Once it has, then from a browser go to '' and you should see your site; ditto '' (depending on the http/https behaviour you picked above).
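The final Route 53 step can be sketched programmatically too - again a hedged boto3-style sketch with placeholder names. Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for all CloudFront alias targets.

```python
# Sketch: the A/Alias record pointing the domain at the CloudFront distribution.
# Domain and distribution names below are placeholders.

CLOUDFRONT_HOSTED_ZONE_ID = "Z2FDTNDATAQYW2"  # fixed ID for all CloudFront aliases

def cloudfront_alias_change_batch(domain, distribution_domain):
    """Build the change batch for an A/Alias record targeting a CloudFront distribution."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": CLOUDFRONT_HOSTED_ZONE_ID,
                    "DNSName": distribution_domain,  # e.g. "d123example.cloudfront.net"
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }
```

Contrast this with the earlier S3-direct setup: the alias now targets the distribution's own domain name rather than a regional S3 endpoint.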

Edited by: johnskelton3 on Mar 7, 2019 4:30 AM


answered 4 years ago
