Dag nab it, thanks for the response. After taking a few days away from the issue, I looked at my EBS volume creation script again with fresh eyes and noticed I was omitting the --snapshot-id option. So it was indeed creating a blank EBS volume. After adding the --snapshot-id option, I'm seeing the partitions present as expected.
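For anyone hitting the same thing, the fix amounts to passing --snapshot-id to aws ec2 create-volume. This is a sketch, not the poster's actual script; the snapshot ID and availability zone below are placeholders, and the command is printed rather than executed so it can be reviewed first:

```shell
# Hypothetical IDs -- substitute your own snapshot ID and availability zone.
SNAPSHOT_ID="snap-0123456789abcdef0"
AZ="us-east-1a"

# Without --snapshot-id, create-volume provisions a blank volume.
# With it, the new volume starts as a copy of the snapshot,
# partition table and all.
CMD="aws ec2 create-volume --snapshot-id $SNAPSHOT_ID --availability-zone $AZ --volume-type gp3"

# Dry run: print the command instead of running it.
echo "$CMD"
```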
Hello, thank you for your post. I understand you created a snapshot of the root volume of a RHEL 8 EC2 instance, created a new EBS volume from that snapshot, and attached the new volume to another RHEL 8 EC2 instance as a secondary volume. After this, lsblk does not show any partitions on the new volume, even though the original volume had multiple partitions.
I was unable to reproduce this issue when following the same procedure.
I launched a new EC2 instance from a RHEL 8 AMI, then created a snapshot of its root volume and a new EBS volume from that snapshot. I then launched a second EC2 instance from the same RHEL 8 AMI and attached the volume from the previous step. When I ran lsblk, all of the original partitions were present and visible on the secondary volume:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 10G 0 disk
├─nvme0n1p1 259:1 0 1M 0 part
└─nvme0n1p2 259:2 0 10G 0 part /
nvme1n1 259:3 0 10G 0 disk
├─nvme1n1p1 259:4 0 1M 0 part
└─nvme1n1p2 259:5 0 10G 0 part
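The reproduction steps above can be sketched with the AWS CLI. All IDs here are placeholders (not from the original post), and each command is echoed rather than executed so the sequence can be reviewed as a dry run:

```shell
# Placeholder IDs -- replace with your own resources.
ROOT_VOLUME_ID="vol-0aaaaaaaaaaaaaaaa"
SNAPSHOT_ID="snap-0ccccccccccccccccc"
NEW_VOLUME_ID="vol-0dddddddddddddddd"
INSTANCE_ID="i-0bbbbbbbbbbbbbbbb"
AZ="us-east-1a"

# Step 1: snapshot the root volume of the first instance.
STEP1="aws ec2 create-snapshot --volume-id $ROOT_VOLUME_ID"

# Step 2: create a new volume FROM the snapshot. The --snapshot-id
# option is what carries the partition table and data over.
STEP2="aws ec2 create-volume --snapshot-id $SNAPSHOT_ID --availability-zone $AZ"

# Step 3: attach the new volume to the second instance as /dev/sdf.
STEP3="aws ec2 attach-volume --volume-id $NEW_VOLUME_ID --instance-id $INSTANCE_ID --device /dev/sdf"

echo "$STEP1"
echo "$STEP2"
echo "$STEP3"
```

After step 3, running lsblk on the second instance should show the attached volume with its original partitions, as in the output above.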
I encourage you to open a support case so that one of our support engineers can inspect the details of the specific snapshot and EBS volume in question, and provide further assistance with this issue.