How do I benchmark network throughput between Amazon EC2 Linux instances in the same Amazon VPC?
I want to measure the network bandwidth between Amazon Elastic Compute Cloud (Amazon EC2) Linux instances in the same Amazon Virtual Private Cloud (Amazon VPC). How can I do that?
Short description
Here are some factors that can affect Amazon EC2 network performance when the instances are in the same Amazon VPC:
- The physical proximity of the EC2 instances. Instances in the same Availability Zone are geographically closest to each other. Instances in different Availability Zones in the same Region, instances in different Regions on the same continent, and instances in different Regions on different continents are progressively farther away from one another.
- The EC2 instance maximum transmission unit (MTU). The MTU of a network connection is the largest permissible packet size (in bytes) that your connection can pass. All EC2 instance types support an MTU of 1500. All current generation Amazon EC2 instances support jumbo frames, as do the previous generation C3, G2, I2, M3, and R3 instances. Jumbo frames allow an MTU greater than 1500. However, there are scenarios where your instance is limited to an MTU of 1500 even with jumbo frames. For more information, see Jumbo frames (9001 MTU). To check your instance's current MTU, see the example after this list.
- The size of your EC2 instance. Larger instance sizes for an instance type typically provide better network performance than smaller instance sizes of the same type. For more information, see Amazon EC2 instance types.
- Amazon EC2 enhanced networking support for Linux. All current generation instance types support enhanced networking, except for T2 and M3. For more information, see Enhanced networking on Linux. For information on enabling enhanced networking on your instance, see How do I enable and configure enhanced networking on my EC2 instances? To verify that enhanced networking is active, see the driver check after this list.
- Amazon EC2 high performance computing (HPC) support that uses placement groups. HPC provides full-bisection bandwidth and low latency, with support for up to 100-gigabit network speeds, depending on the instance type. To review network performance for each instance type, see Amazon Linux AMI instance type matrix. For more information, see Launch instances in a placement group.
- The use of a network I/O credit mechanism to allocate network bandwidth. Instances designated with a **†** symbol in the Network performance column in General purpose instances - Network performance can reach the designated maximum network performance. However, these instances use a network I/O credit mechanism to allocate bandwidth based on average bandwidth utilization, so network performance varies for these instances.
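To check the current MTU on an instance, you can query the network interface directly. The interface name eth0 below is an assumption; on many current instance types the interface is named ens5 instead:
$ ip link show eth0
To turn on jumbo frames for an interface that supports them, you can set the MTU to 9001. Note that this change doesn't persist across reboots unless you also update the instance's network configuration:
$ sudo ip link set dev eth0 mtu 9001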
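To verify whether enhanced networking is active, you can check which driver backs the network interface. With enhanced networking enabled, the driver field reports ena (or ixgbevf on older instance types). Again, eth0 is an assumed interface name:
$ ethtool -i eth0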
Because of these factors, you might experience significant network performance differences between different cloud environments. It's a best practice to regularly evaluate and baseline the network performance of your environment to improve application performance. Testing network performance provides valuable insight for determining the EC2 instance types, sizes, and configurations that best suit your needs. You can run network performance tests on any combination of instances you choose.
To get additional network performance specifications for the specific instance types that you're interested in, open an AWS Support case.
Resolution
Before beginning benchmark tests, launch and configure your EC2 Linux instances:
1. Launch two Linux instances that you can run network performance testing from.
2. Verify that the instances support enhanced networking for Linux, and that they are in the same Amazon VPC.
3. (Optional) If you're performing network testing between instances that don't support jumbo frames, then follow the steps in Network maximum transmission unit (MTU) for your EC2 instance to check and set the MTU on your instance.
4. Connect to the instances to verify that you can access them. A quick connectivity check is shown after these steps.
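As a simple connectivity check, you can ping the private IP address of one instance from the other. The address below is the example server IP used later in this article; substitute your own, and note that the target instance's security group must allow inbound ICMP for ping to succeed:
$ ping -c 4 172.31.30.41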
Install the iperf network benchmark tool on both instances
In some distros, such as Amazon Linux, iperf is part of the Extra Packages for Enterprise Linux (EPEL) repository. To enable the EPEL repository, see How do I enable the EPEL repository for my Amazon EC2 instance running CentOS, RHEL, or Amazon Linux?
Note: The command iperf refers to version 2.x. The command iperf3 refers to version 3.x. Use version 2.x when benchmarking EC2 instances with high throughput, because version 2.x provides multi-thread support. Although version 3.x also supports parallel streams through the -P flag, version 3.x is single-threaded and limited by a single CPU. As a result, version 3.x requires multiple processes running in parallel to drive the necessary throughput on larger EC2 instances. For more information, see iperf2/iperf3 on the ESnet website.
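If you do use iperf3 on a large instance, one common workaround is to run several iperf3 processes in parallel on different ports and sum the results. The following is a minimal sketch; the ports, stream counts, and test duration are arbitrary examples. On the server:
$ iperf3 -s -p 5201 &
$ iperf3 -s -p 5202 &
On the client:
$ iperf3 -c 172.31.30.41 -p 5201 -P 10 -t 30 &
$ iperf3 -c 172.31.30.41 -p 5202 -P 10 -t 30 &
Because each iperf3 server handles one client connection at a time, each client process needs its own server process and port.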
Connect to your Linux instances, and then run the following commands to install iperf.
To install iperf on RHEL 6 Linux hosts, run the following command:
# yum -y install https://dl.fedoraproject.org/pub/archive/epel/6/x86_64/epel-release-6-8.noarch.rpm && yum -y install iperf
To install iperf on RHEL 7 Linux hosts, run the following command:
# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum -y install iperf
To install iperf on Debian/Ubuntu hosts, run the following command:
# apt-get install -y iperf
To install iperf on CentOS 6/7 hosts, run the following command:
# yum -y install epel-release && yum -y install iperf
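To confirm that the installation succeeded and check the installed version on each instance, you can run:
$ iperf -v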
Test TCP network performance between the instances
By default, iperf communicates over port 5001 when testing TCP performance. However, you can configure that port by using the -p switch. Be sure to configure your security groups to allow communication over the port that iperf uses.
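For example, you can allow inbound TCP traffic on port 5001 with the AWS CLI. The security group ID and CIDR range below are placeholders; in practice, scope the rule to the peer instance's security group or private IP range:
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5001 --cidr 172.31.0.0/16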
1. Configure one instance as a server to listen on the default port, or specify an alternate listener port with the -p switch. Replace 5001 with your port, if different:
$ sudo iperf -s [-p 5001]
2. Configure a second instance as a client, and run a test against the server with the desired parameters. For example, the following command initiates a TCP test against the specified server instance with 40 parallel connections:
$ iperf -c 172.31.30.41 --parallel 40 -i 1 -t 2
Note: For a bidirectional test with iperf (version 2), use the -r option on the client side.
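For example, the following command repeats the step 2 test and then runs it in the reverse direction (-r runs the two directions sequentially rather than simultaneously):
$ iperf -c 172.31.30.41 --parallel 40 -i 1 -t 2 -r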
With the parameters specified in step 2, the output shows the interval per client stream, the data transferred per client stream, and the bandwidth that each client stream uses. The following iperf output shows test results for two c5n.18xlarge EC2 Linux instances launched in a cluster placement group. The total bandwidth transmitted across all connections is 97.6 Gbits/second:
------------------------------------------------------------------------------------
Client connecting to 172.31.30.41, TCP port 5001
TCP window size: 975 KByte (default)
------------------------------------------------------------------------------------
[ 8] local 172.31.20.27 port 49498 connected with 172.31.30.41 port 5001
[ 38] local 172.31.20.27 port 49560 connected with 172.31.30.41 port 5001
[ 33] local 172.31.20.27 port 49548 connected with 172.31.30.41 port 5001
[ 40] local 172.31.20.27 port 49558 connected with 172.31.30.41 port 5001
[ 36] local 172.31.20.27 port 49554 connected with 172.31.30.41 port 5001
[ 39] local 172.31.20.27 port 49562 connected with 172.31.30.41 port 5001
...
[SUM] 0.0- 2.0 sec  22.8 GBytes  97.6 Gbits/sec
Test UDP network performance between the instances
By default, iperf communicates over port 5001 when testing UDP performance. However, the port that you use is configurable using the -p switch. Be sure to configure your security groups to allow communication over the port that iperf uses.
Note: By default, iperf uses a target bandwidth of 1 Mbit per second for UDP unless you specify a different bandwidth.
1. Configure one instance as a server to listen on the default UDP port, or specify an alternate listener port with the -p switch. Replace 5001 with your port, if different:
$ sudo iperf -s -u [-p 5001]
2. Configure a second instance as a client, and then run a test against the server with the desired parameters. The following example initiates a UDP test against the specified server instance with the -b parameter set to 5g.
The -b parameter changes the bandwidth to 5g from the UDP default of 1 Mbit per second. 5g is the maximum network performance that a c5n.18xlarge instance can provide for a single traffic flow within a VPC. For more information, see New C5n instances with 100 Gbps networking.
Note: UDP is connectionless and doesn't have the congestion control algorithms that TCP has. When testing with iperf, the bandwidth obtained with UDP might be lower than the bandwidth obtained with TCP.
# iperf -c 172.31.1.152 -u -b 5g
The output shows the interval (time), the amount of data transferred, the bandwidth achieved, the jitter (the deviation in time for the periodic arrival of datagrams), and the loss/total of UDP datagrams:
$ iperf -c 172.31.30.41 -u -b 5g
------------------------------------------------------------------------------------
Client connecting to 172.31.30.41, UDP port 5001
Sending 1470 byte datagrams, IPG target: 2.35 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------------------------------
[ 3] local 172.31.20.27 port 39022 connected with 172.31.30.41 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec  5.82 GBytes  5.00 Gbits/sec
[ 3] Sent 4251700 datagrams
[ 3] Server Report:
[ 3]  0.0-10.0 sec  5.82 GBytes  5.00 Gbits/sec  0.003 ms 1911/4251700 (0.045%)
[ 3] 0.00-10.00 sec  1 datagrams received out-of-order