How can I get maximum I/O performance from my EBS volumes that are hosted on EC2 Nitro-based instances?


I run my workload on Amazon Elastic Compute Cloud (Amazon EC2) Nitro-based instances. I want to make sure that I get the maximum I/O performance from Amazon Elastic Block Store (Amazon EBS) volumes that are hosted on my instances.


1.     Check whether your EBS volume reached its IOPS quota. Latency can increase when your volume reaches its IOPS quota, and increased latency affects performance. For more information, see How do I optimize the performance of my Amazon EBS Provisioned IOPS volumes?
Note: If you use a gp2 volume, then check that your volume hasn't exhausted its burst credits.
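To check burst credits for a gp2 volume, one option is to query the volume's BurstBalance metric in CloudWatch from the AWS CLI. The following is a sketch that assumes the AWS CLI is installed and configured; the volume ID is a placeholder.

```shell
# Placeholder volume ID -- replace with your gp2 volume's ID.
VOLUME_ID="vol-0123456789abcdef0"

# Build an ISO-8601 time window for the last hour (falls back to "now"
# if this date build doesn't support relative dates).
END=$(date -u '+%Y-%m-%dT%H:%M:%SZ')
START=$(date -u -d '1 hour ago' '+%Y-%m-%dT%H:%M:%SZ' 2>/dev/null || echo "$END")

# A BurstBalance near 0 means the volume has exhausted its burst credits.
if command -v aws >/dev/null 2>&1; then
    aws cloudwatch get-metric-statistics \
        --namespace AWS/EBS \
        --metric-name BurstBalance \
        --dimensions Name=VolumeId,Value="$VOLUME_ID" \
        --start-time "$START" --end-time "$END" \
        --period 300 --statistics Minimum \
        || echo "BurstBalance query failed (check credentials and region)"
else
    echo "aws CLI not installed; skipping BurstBalance query"
fi
```

A Minimum statistic that drops toward 0 over the window indicates the volume is throttled to its baseline performance.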

2.    To use NVMe storage, you must run one of these operating systems (OSs):

  • Amazon Linux Amazon Machine Image (AMI) with kernel 4.12 or later
  • CentOS 7.0 or later with kernel 3.10 or later
  • Red Hat 7.0 or later with kernel 3.10 or later
  • Ubuntu 19.10 with kernel 5.0, or Ubuntu 18.04.03 with kernel 5.0 and later
    Note: For these Ubuntu versions, multi-queue is turned on by default.
  • Ubuntu 16.04 or 16.10
    Note: For these Ubuntu versions, multi-queue schedulers aren't compiled into the kernel and must be loaded as separate modules.
  • SUSE 12, or SUSE 11 with SP3 or later
  • Windows Server 2008 R2, 2012 R2, and 2016 or later

Or, make sure that the kernel version supports an I/O scheduler with multi-queue capability. The most common multi-queue I/O schedulers are kyber, mq-deadline, and budget fair queueing (bfq).
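To see which schedulers the running kernel exposes for your block devices, and whether the kernel was built with blk-mq, you can inspect sysfs and the kernel build config. This is a sketch; the /boot/config-$(uname -r) path is a common convention and may differ by distribution.

```shell
# The bracketed entry in each scheduler file is the active scheduler,
# for example: "[mq-deadline] kyber bfq none".
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue
    dev=${f#/sys/block/}; dev=${dev%%/*}
    printf '%s: %s\n' "$dev" "$(cat "$f")"
done

# Look for blk-mq options in the kernel build config (the path under /boot
# is a common convention; it may not exist on every distribution).
grep -i 'blk_mq' "/boot/config-$(uname -r)" 2>/dev/null || \
    echo "no blk_mq entries found in the kernel config"
```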

Note: For OSs such as Oracle Linux or Debian, use a kernel version that includes or supports a multi-queue I/O scheduler. The CentOS kernel supports a multi-queue I/O scheduler.

If you use an earlier version of these OSs, then you might see a decline in I/O performance, because Nitro-based instances use multi-queue processing at the host level. This creates an incompatibility between the scheduler at the OS level and the scheduler at the host level.

Before I/O read or write requests reach the EBS volume, they travel through several layers. For older kernel versions with non-multi-queue schedulers on Nitro-based instances, you sometimes see a delay in the I/O scheduler stage (the I2D, or insert-to-dispatch, interval). The delay appears in tests and benchmark results that use the blktrace, blkparse, and btt tools. For more information on these tools, see the blktrace, blkparse, and btt man pages.

To improve I/O performance on Nitro-based instances, CentOS 7 includes the Multi-Queue Block I/O Queueing Mechanism (blk-mq), which allows device drivers to map I/O requests to multiple hardware or software queues. For maximum performance on Nitro-based systems, it's a best practice to use an up-to-date OS with the latest kernel.
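One way to confirm that a device is served by blk-mq is to look for the per-queue directories that the mechanism creates under sysfs. The following sketch counts hardware queues per device; the mq directory exists only for devices whose driver uses blk-mq.

```shell
# Count blk-mq hardware queues for each block device on this host.
MQ_DEVICES=0
for dev in /sys/block/*; do
    if [ -d "$dev/mq" ]; then
        MQ_DEVICES=$((MQ_DEVICES + 1))
        printf '%s: %s hardware queue(s)\n' "${dev#/sys/block/}" \
            "$(ls "$dev/mq" | wc -l)"
    fi
done
echo "devices using blk-mq: $MQ_DEVICES"
```

On a Nitro-based instance, EBS volumes attach as NVMe devices (for example, nvme1n1), and an NVMe device typically reports one hardware queue per vCPU.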

I/O scheduler on CentOS 6

$ cat /sys/block/xvdf/queue/scheduler
noop anticipatory deadline [cfq]
$ cat config-2.6.32-754.30.2.el6.x86_64 | grep -i blk_mq

Note: Because the CentOS 6 kernel doesn't include blk-mq support, the grep command returns no output.

I/O scheduler on Red Hat 9 with kernel 5.14 and later

$ cat /sys/block/<EBS device name>/queue/scheduler
[none] mq-deadline kyber bfq

Before you choose a scheduler, review the details of each one. For more information, see Available disk schedulers on the Red Hat website.

To update the scheduler at the OS level during the runtime of the EC2 instance, run the following command:

$ echo 'kyber' | sudo tee /sys/block/<EBS device name>/queue/scheduler

Note: Use tee rather than a plain redirect (sudo echo 'kyber' > ...), because the shell performs the redirection before sudo runs, so the write fails for non-root users.
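To confirm the change took effect, read the scheduler file back; the active scheduler appears in brackets. A minimal sketch, where nvme1n1 is a placeholder device name:

```shell
# nvme1n1 is a placeholder -- substitute your EBS device name.
DEV=nvme1n1
if [ -r "/sys/block/$DEV/queue/scheduler" ]; then
    # The bracketed entry is the scheduler currently in use.
    ACTIVE=$(cat "/sys/block/$DEV/queue/scheduler")
else
    ACTIVE="device $DEV not present on this host"
fi
echo "$ACTIVE"
```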

To permanently change the I/O scheduler, modify the grub configuration and update the elevator parameter. The following steps are for CentOS and Red Hat Enterprise Linux (RHEL):

1.    Open the grub configuration file, and then add elevator=kyber to the GRUB_CMDLINE_LINUX line:

$ sudo vim /etc/default/grub
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet elevator=kyber"

2.    Rebuild the grub configuration:

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

If the instance is rebooted, then the I/O scheduler remains set.
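After a reboot, you can confirm that the kernel picked up the elevator parameter by checking the boot command line. A minimal sketch:

```shell
# /proc/cmdline holds the parameters the kernel booted with.
CMDLINE=$(cat /proc/cmdline)
case "$CMDLINE" in
    *elevator=*)
        echo "elevator parameter set: $(echo "$CMDLINE" | grep -o 'elevator=[^ ]*')"
        ;;
    *)
        echo "no elevator parameter on the kernel command line"
        ;;
esac
```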

AWS OFFICIAL
Updated 7 months ago