Attachment order for EBS volumes as /dev/nvme devices


Hello,

We recently started seeing that the order of attached EBS volumes changes after the first reboot (from what I can find, our older instances from months ago don't exhibit this behavior). For example, we attach (using the AWS console):

vol-011117cfde1966e5f as /dev/sdf

vol-0222290fbbd8a3b79 as /dev/sdg

and they immediately show up as /dev/nvme1n1 and /dev/nvme2n1. After a reboot, they swap order: vol-011117cfde1966e5f becomes /dev/nvme2n1 and vol-0222290fbbd8a3b79 becomes /dev/nvme1n1. This new order then persists across any number of subsequent reboots. In the console, vol-0111* is still listed first as sdf and vol-0222* second as sdg.

I'm seeing this behavior on CentOS 7.9, Rocky Linux 8.6, AlmaLinux 8.6, and Rocky Linux 9.0, so it doesn't seem to be specific to any one operating system. I tested with t3a and m6i instance types.

I am aware that we can mount filesystems by UUID so that mounts don't depend on device names. I also know that https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html says: "The block device driver can assign NVMe device names in a different order than you specified for the volumes in the block device mapping."
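For anyone hitting the same issue: besides mounting by UUID, you can map the kernel-assigned NVMe names back to their EBS volume IDs, since on Nitro instances the volume ID is exposed as the NVMe controller's serial number (the volume ID with the dash removed). A minimal sketch, assuming bash and the usual sysfs layout; the helper name is my own:

```shell
#!/usr/bin/env bash
# Hypothetical helper: convert the NVMe controller serial, as exposed in
# /sys/block/nvmeXn1/device/serial, back to an EBS volume ID.
# EBS encodes the volume ID as the serial with the dash removed,
# e.g. "vol011117cfde1966e5f" -> "vol-011117cfde1966e5f".
serial_to_volid() {
  printf '%s\n' "${1/vol/vol-}"
}

# On a live instance, iterate over the NVMe block devices and print
# which kernel name currently belongs to which volume:
for dev in /sys/block/nvme*n1; do
  [ -e "$dev/device/serial" ] || continue   # skip if no serial exposed
  serial=$(tr -d ' ' < "$dev/device/serial")
  echo "/dev/${dev##*/} -> $(serial_to_volid "$serial")"
done
```

The same information drives the udev rules (or the `ebsnvme-id` tool on Amazon Linux) that AWS documents for creating stable symlinks, so you never have to rely on the nvme1n1/nvme2n1 enumeration order at all.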

The question is whether it's expected behavior that this order changes only after the first reboot and then never changes again.

Asked 2 years ago · 118 views
No answers
