1 Answer
TTLs of 30-60 seconds have become extremely common for web frontends and APIs because they allow fast traffic rerouting if an endpoint fails, and they carry no noticeable impact on end-user performance (resolution takes a few milliseconds plus network RTT).
The risk of raising the TTL is that your end users would keep hitting a failed ELB node that cannot serve requests and would see timeouts, which is a tradeoff most people won't want to make.
You might also want to look at tuning keep-alive timeouts to support persistent connections (particularly with HTTP/2) to improve performance; reusing connections also saves the TCP/TLS session setup time: details here.
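As a rough illustration of what persistent connections save, here is a self-contained Python sketch that compares opening a fresh TCP connection per request against reusing a single keep-alive connection. It uses a hypothetical local HTTP server rather than a real ELB, so the absolute numbers only show the mechanism, not production latency:

```python
import http.server
import threading
import time
from http.client import HTTPConnection

class TinyHandler(http.server.BaseHTTPRequestHandler):
    # HTTP/1.1 enables keep-alive by default (with Content-Length set)
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Start a throwaway local server on a random free port.
# Daemon thread: it exits together with the process, no shutdown needed.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), TinyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fresh_connections(n):
    """Open and tear down a new TCP connection for every request."""
    t0 = time.perf_counter()
    for _ in range(n):
        conn = HTTPConnection("127.0.0.1", port)
        conn.request("GET", "/")
        conn.getresponse().read()
        conn.close()
    return time.perf_counter() - t0

def reused_connection(n):
    """Send all requests over one persistent (keep-alive) connection."""
    conn = HTTPConnection("127.0.0.1", port)
    t0 = time.perf_counter()
    for _ in range(n):
        conn.request("GET", "/")
        conn.getresponse().read()
    conn.close()
    return time.perf_counter() - t0

print(f"fresh: {fresh_connections(50):.4f}s  reused: {reused_connection(50):.4f}s")
```

Over loopback the difference is small; over a real network with TLS, each fresh connection additionally pays the TCP and TLS handshake round trips, which is what keep-alive tuning avoids.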
answered 2 years ago
Giorgio, thank you! I totally understand that AWS infrastructure is very dynamic and that we should be ready for failover.
According to my test just now (dev panel in Chrome, DSL connection), the DNS lookup took 0.5 sec (504 ms)!
It's possible: it depends on where you are, which resolvers you use, and your connectivity. Also note that, depending on how "hot" the record is (i.e. how many customers you have), your end users' DNS recursors might already have a refreshed copy by the time a user sends a request, because another user behind the same recursors already requested it. Overall it's hard to predict; you'll need some real-world measurements to understand the impact!