Attachment order for EBS volumes as /dev/nvme devices


Hello,

We've started seeing that the order of attached EBS volumes changes after the first reboot (from what I can tell, our older instances from a few months ago don't exhibit this behavior). For example, we attach (using the AWS console):

vol-011117cfde1966e5f as /dev/sdf

vol-0222290fbbd8a3b79 as /dev/sdg

and they immediately show up as /dev/nvme1n1 and /dev/nvme2n1 respectively. After a reboot they swap: vol-011117cfde1966e5f becomes /dev/nvme2n1 and vol-0222290fbbd8a3b79 becomes /dev/nvme1n1. This new order then stays the same no matter how many more times you reboot. In the console, vol-0111* is still listed first alphabetically as sdf and vol-0222* second as sdg.
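For reference, the swap can be confirmed from inside the guest, since EBS exposes the volume ID as the NVMe serial number (without the hyphen) and puts the block-device-mapping name in the controller's vendor-specific data. Something like the following works (the nvme command requires the nvme-cli package, which may not be installed by default on these distributions):

lsblk -o +SERIAL                    # SERIAL column shows e.g. vol011117cfde1966e5f
sudo nvme id-ctrl -v /dev/nvme1n1   # "sn" field shows the volume ID; the vendor-specific dump at the end shows the mapping name, e.g. /dev/sdf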

I'm seeing this behavior on CentOS 7.9, RockyLinux 8.6, AlmaLinux 8.6, and RockyLinux 9.0, so it doesn't seem to be specific to any one operating system. I tested with t3a and m6i instance types.

I am aware that we can mount filesystems by UUID so that the device names don't matter. I also know that https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html says "The block device driver can assign NVMe device names in a different order than you specified for the volumes in the block device mapping."
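For completeness, the UUID-based workaround we have in mind looks roughly like this (the UUID, mount point, and filesystem type below are placeholders, not our actual configuration):

sudo blkid /dev/nvme1n1   # prints the filesystem UUID
# /etc/fstab entry keyed on that UUID instead of the device name
UUID=1234abcd-12ab-34cd-56ef-1234567890ab  /data  xfs  defaults,nofail  0  2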

The question is whether it's expected behavior that this order changes only after the first reboot, and then never changes again.

Asked 2 years ago · 118 views

No answers
