I find the latency test for Oregon to be quite out of the ordinary; I just ran a test with some instances across different AZs in Oregon and got the times I expected (single-digit milliseconds or less). I suspect there was a network anomaly while the test for the blog was running - but as always, I encourage customers to do their own tests because there are so many variables: operating systems, software versions, test stacks, application stacks, and so on.
As for the 1 MB transfer table: if you compare those numbers to the 10 MB transfer table you'll see many transfer times that are very close to each other, despite one transfer being ten times the size of the other. For small transfers (and 1 MB is a small transfer) a fair chunk of the time is spent negotiating the TLS session, and some more is taken by S3 to process the object being uploaded.
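A toy model makes this concrete. The overhead and throughput figures below are made-up assumptions for illustration, not measured S3 numbers - but they show why a 10x payload doesn't mean a 10x transfer time:

```python
# Illustrative model only: total time = fixed per-request cost + payload time.
# The 80 ms overhead and 100 MB/s throughput are invented assumptions.
def transfer_time_ms(size_mb, fixed_overhead_ms=80.0, throughput_mb_per_s=100.0):
    """Estimate total transfer time for an upload of size_mb megabytes."""
    return fixed_overhead_ms + (size_mb / throughput_mb_per_s) * 1000.0

t1 = transfer_time_ms(1)    # 80 + 10  = 90 ms
t10 = transfer_time_ms(10)  # 80 + 100 = 180 ms
# Ten times the payload, but only twice the total time: for small
# transfers the fixed cost (TLS negotiation, request handling) dominates.
print(t1, t10, t10 / t1)
```

With real measurements the fixed cost varies per request, but the shape of the result is the same: the smaller the object, the more the per-request overhead dominates.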
Again, there will be variance in this particular benchmark - far greater than in the latency test, because S3 is a multi-tenant service. When the test is running you simply don't know what else is going on: what operations other customers are running, or what is happening within the service itself. That's why the blog post's author ran many tests to get a good data set - but it's also a good idea to run those tests across several days, weeks, or even months to ensure the data set is representative of the conditions being tested.
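When you do collect many samples, summarise the distribution rather than a single number. A minimal sketch, using simulated samples in place of real measurements (the distribution below is invented purely for illustration):

```python
import random
import statistics

# Simulated latency samples in milliseconds. In practice you'd record real
# measurements over days or weeks; this Gaussian is a stand-in.
random.seed(42)
samples = [random.gauss(5.0, 1.5) for _ in range(1000)]

mean = statistics.mean(samples)
median = statistics.median(samples)
p99 = sorted(samples)[int(len(samples) * 0.99)]  # 99th percentile
print(f"mean={mean:.2f} ms  median={median:.2f} ms  p99={p99:.2f} ms")
```

The tail percentiles (p99 and above) are usually where multi-tenant variance shows up, so they're worth tracking separately from the mean.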
The great thing about creating a test suite and a data set like that (especially if you're testing so that you have a benchmark to compare your production systems against) is that you can see when performance improves (or degrades) and perhaps do something about it - or at least use that data to explain why your system might be behaving differently.
To answer your final question: the Regions were all built at different times, and there are differences in how they were constructed. Regions are always undergoing change - mostly expansion, but also new services being deployed and existing services (even ones you can't see, like the network) being upgraded. Each Region is built on different geography: there's no way to get the same-length fibre runs between AZs in different cities, and the speed of light in glass (or the speed of electrons in copper) makes a difference - it isn't infinite.
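The fibre-length point is easy to quantify. Light in glass fibre travels at roughly two-thirds of c, about 200,000 km/s, so every kilometre of fibre adds about 5 microseconds each way. A quick sketch (the distances are illustrative, not actual AZ spacings):

```python
# Light in glass fibre travels at roughly 200,000 km/s (~2/3 of c),
# i.e. about 200 km of fibre per millisecond.
SPEED_IN_FIBRE_KM_PER_MS = 200.0

def one_way_fibre_latency_ms(distance_km):
    """Propagation delay over a fibre run, ignoring switching/routing overhead."""
    return distance_km / SPEED_IN_FIBRE_KM_PER_MS

# A hypothetical 100 km run between AZs: 0.5 ms one way, 1 ms round trip,
# before any equipment in the path adds its own delay.
print(one_way_fibre_latency_ms(100))
```

So two Regions whose AZs are laid out differently will show different inter-AZ latencies purely from geometry, before any equipment or load differences come into play.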
TL;DR: Run your own tests if these types of numbers matter; there are many, many variables.
When I look at the inter-region ping test result for North Virginia against Frankfurt I see that the ping is 169 ms. I'm currently testing an application that's running on localhost but is communicating with a database in the North Virginia region. For a simple query like `SELECT 1 FROM table` I get an average round trip of about 552 milliseconds. That's a lot more than the reported 169 milliseconds. I live relatively near Frankfurt (Groningen, to be exact). Is the difference in latency caused by my using the public internet instead of the private AWS network for the query? Thank you
A ping test is (generally) two packets: an echo request and an echo reply. That's it.
A database query is part of a TCP session. The session may or may not already be established; if not, it's three packets to set it up, then an unknown number of packets to authenticate to the database. After that, depending on the database, you might have only two packets (one for the query, one for the acknowledgment) and then the query response (an undetermined number of packets, depending on the size of the response).
It's unsurprising that a database query takes longer than a ping, because you're not just testing the round-trip time of the network; you're testing that plus the database response time - which depends on the load on the database (and perhaps CPU, memory, and disk load on the server, if it isn't single-purpose).
Assuming a 169 ms round-trip time, 552/169 ≈ 3 round trips, which seems about right: a couple of round trips (one for the query, one for the response) plus some database response time. But I'm guessing. The only way to know is to run the same test across multiple different scenarios.
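The arithmetic above can be sketched as a back-of-the-envelope calculation: how many full 169 ms round trips fit into the observed 552 ms, and how much time is left over for everything else (extra packets, database work, jitter)?

```python
# Back-of-the-envelope split of the observed query time into network
# round trips and a residual, using the figures from the question.
observed_ms = 552
rtt_ms = 169

full_round_trips = observed_ms // rtt_ms                # 3 full round trips
remainder_ms = observed_ms - full_round_trips * rtt_ms  # 45 ms unaccounted for
print(full_round_trips, remainder_ms)
```

The 45 ms residual is consistent with some combination of database response time and per-packet overhead - but as the answer says, this is only a guess until you measure each component separately.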