
Multiple EBS volumes to gain performance


Hi, I want to run "HCL Domino Server 12" on an EC2 instance. Domino is a server specialized in collaboration applications and includes a mail server; you can also see it as a web server with a NoSQL database behind it as the engine for the mail and the apps. During server setup, I can specify different paths for transactional logging, view indexes, mail & database applications, etc. I was thinking of creating a different filesystem for each path and assigning a separate EBS volume to each path/filesystem, but I have several concerns / questions about it:

  • EBS baseline: I am aware that t3-family EC2 instances have a CPU baseline of 30%... What about baselines for EBS? Does t3 also have a baseline for EBS and credits towards its use? I could not find clear information on that.
  • EBS and IOPS / throughput: If I have 3 EBS volumes, each with a performance baseline of 3,000 IOPS and 125 MiB/s throughput, does that mean that with 3 volumes I get 9,000 IOPS and 375 MiB/s in total? I am not sure whether there is a bottleneck at the EC2 level first (i.e., the EC2 instance having a maximum total of, say, 300 MiB/s across all its volumes, so even with multiple EBS volumes the maximum throughput is whatever the instance itself allows).
  • Root volumes: When you create an EC2 machine on a t3.large instance, how is the root volume created by default? Does it use an EBS gp2 or an EBS gp3 volume?
  • NVMe SSD volumes: I saw EC2 instance types (e.g., m5ad.large) that, instead of using "normal" EBS SSD volumes for the root, provide you directly with 1 x 75 NVMe SSD volumes, and larger sizes on bigger instances. I am confused there, because when I mounted additional SSD volumes on my Linux systems, they also appeared as "NVMe" devices. Aren't normal gp2 / gp3 volumes NVMe-based? Can someone explain the difference and the value of the 1 x 75 NVMe SSD volume offered by m5ad.large?
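To make the second question concrete, here is the arithmetic I have in mind. The per-volume numbers are the gp3 baselines; the instance-level cap is a hypothetical placeholder, since that limit is exactly what I am unsure about:

```python
# gp3 baseline per volume (per the gp3 documentation).
VOLUME_IOPS = 3000
VOLUME_THROUGHPUT_MIBS = 125

def effective_throughput(num_volumes: int, instance_cap_mibs: float) -> float:
    """Combined volume throughput, limited by the instance's own EBS bandwidth."""
    combined = num_volumes * VOLUME_THROUGHPUT_MIBS
    return min(combined, instance_cap_mibs)

# Hypothetical instance cap of 300 MiB/s, purely for illustration:
print(effective_throughput(3, 300))  # 3 * 125 = 375, capped at 300
```

So my question is whether such an instance-level cap exists and where it is documented.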
1 Answer
Accepted Answer

I will try to answer all questions point by point.

  • CPU credits are not divided between EBS I/O and other CPU usage. As in any operating system, some CPU is used for storage I/O, but AWS provides dedicated connectivity to your EBS volumes, so EBS I/O is not impacted by your network usage. For more information about the baseline performance of the different EBS volume types, please check:
  • No, the throughput limits apply per volume. If you have 3 EBS volumes and, for example, create a RAID 0 array across them, you technically get the combined throughput, but there is also an upper limit at the instance level: each instance type has a maximum EBS bandwidth and IOPS it can drive, regardless of how many volumes are attached. The information about RAID 0 and other performance techniques is documented at:
  • When you create a t3 instance, you get the option to select the root EBS volume type (gp2 or gp3) at launch; it is not fixed by the instance family.
  • The 1 x 75 GB NVMe volume is the local instance-store SSD, which means you can't detach that SSD and attach it to another instance. It is physically connected to the instance because it exists on the same physical host where the instance runs.
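To see the difference on a running Nitro-based instance, you can inspect the NVMe device models; the device names and mount point below are examples and will vary:

```shell
# Both EBS and instance-store volumes appear as /dev/nvme* devices;
# the MODEL column tells them apart:
#   EBS volumes report       "Amazon Elastic Block Store"
#   instance store reports   "Amazon EC2 NVMe Instance Storage"
lsblk -o NAME,SIZE,MODEL

# Example RAID 0 across two extra EBS volumes to combine their
# throughput (this destroys any data on the devices; names are examples):
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 \
    /dev/nvme1n1 /dev/nvme2n1
sudo mkfs.xfs /dev/md0
sudo mount /dev/md0 /mnt/data
```

Remember that anything on the instance-store device is lost when the instance stops or terminates, so it suits caches and temporary data rather than your Domino data directories.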
answered 9 months ago
