Instead of VM Import, can you do a Snapshot Import? Snapshot import does not check kernel version.
However, snapshot import does not inject the drivers your OS needs to run on AWS, so you will have to install them yourself: Xen PV drivers for HVM instance types, and NVMe and ENA drivers for Nitro instance types.
You can use the script from the article "Why is my Linux instance not booting after I changed its type to a Nitro-based instance type?" to verify that your OS works with Nitro instance types.
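As a rough pre-migration check (this is not the official AWS script, just a minimal sketch), you can verify from inside the guest whether the ENA and NVMe drivers are present in the running kernel, either built in or as loadable modules:

```shell
#!/bin/sh
# Sketch: check that the ena and nvme drivers are available before
# importing the disk, so a Nitro instance can find its root volume
# and network adapter at boot.
for drv in ena nvme; do
    if modinfo "$drv" >/dev/null 2>&1 || grep -qw "$drv" /proc/modules 2>/dev/null; then
        echo "$drv: available"
    else
        echo "$drv: NOT available - install/enable it before importing"
    fi
done
```

If either driver is missing, rebuild the initramfs after installing it so the root disk is usable at boot time.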
Happy to know that you have configured it. Serial Console is only available on Nitro instance types.
Hello Mike,
We did exactly what you suggested and now our network appliance works like a charm on a Xen based VM as well as on a KVM based VM.
Thanks a lot for the key information you provided.
Best Regards, The CacheGuard development team
Thanks a lot for your response.
We proceeded with the snapshot import as you suggested and the import worked perfectly. Then, we followed all instructions given at https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-import-snapshot.html.
We launched a Debian 11 AMI instance, detached its default volume, and attached our bootable volume (built from the disk snapshot we had imported).
Our first attempt to start the instance failed with the error message: Failed to start the instance i-xxxxxxxxxxxxxxxxx Invalid value 'i-xxxxxxxxxxxxxxxxx' for instanceId. Instance does not have a volume attached at root (/dev/xvda)
Then we detached the volume and re-attached it, replacing /dev/sdf with /dev/xvda in the command:
aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxxxxxxx --instance-id i-xxxxxxxxxxxxxxxxx --device /dev/sdf
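Put together, the detach/re-attach sequence was roughly the following (the IDs are placeholders, and each AWS CLI command is echoed rather than executed so this reads as a dry run; the wait step is our addition to make sure the volume is free before re-attaching):

```shell
#!/bin/sh
# Sketch of the detach/re-attach sequence (placeholder IDs).
# Commands are echoed as a dry run; drop the 'run' wrapper (or make it
# execute "$@") to run them for real.
VOL=vol-xxxxxxxxxxxxxxxxx
INST=i-xxxxxxxxxxxxxxxxx
run() { echo "+ $*"; }

run aws ec2 detach-volume --volume-id "$VOL"
run aws ec2 wait volume-available --volume-ids "$VOL"
run aws ec2 attach-volume --volume-id "$VOL" --instance-id "$INST" --device /dev/xvda
run aws ec2 start-instances --instance-ids "$INST"
```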
This time the instance started successfully, but we were unable to access our VM via SSH. We were left completely in the dark, since the usual console port available on any hypervisor is not available on AWS (why?), and the serial console also seems to be inaccessible on the instance type we use.
However, the screenshot below (instance summary window / Status Checks tab) shows that there are apparently no issues with our instance:
We confirm that the security group associated with our public NIC allows SSH.
What did we miss or do wrong?
Note that we used VirtualBox 7.0.8 on Ubuntu to install our OS on the virtual disk that we then imported as a snapshot. In passing, the VMDK format generated by our VirtualBox could not be imported as a snapshot (we got an invalid-format error); we had to use the VHD format.
Some questions:
- What storage adapter should we use on VirtualBox to be compatible with the AWS hypervisor? We used AHCI. Is this correct?
- We read somewhere that AWS network adapters use the ena Linux module as driver. Is this still true?
- In the VM under VirtualBox, the first disk (the OS disk) is identified as /dev/sda and our root partition is mounted on it, while AWS reports that the bootable disk should be /dev/xvda. Should we worry about this?
Thanks in advance for any responses.
Thanks for providing the information, it helps. I have updated my answer with more detail.
To answer your question: AWS uses a mix of Xen (HVM) and Nitro hypervisors, depending on the instance type you choose. You need PV storage and network drivers for Xen, and NVMe storage and ENA network drivers for Nitro. You will also need to configure your fstab to mount by disk UUID or label instead of by device name, since device names differ between hypervisors. The script referenced in my updated post checks for this. The serial console is only available on Nitro instance types.
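As a minimal illustration of the fstab change, a root entry mounted by device name can be rewritten to mount by UUID. The UUID below is a placeholder, and the edit is done on a temporary copy; on the real disk you would obtain the UUID with blkid and edit /etc/fstab itself:

```shell
#!/bin/sh
# Sketch: rewrite an fstab root entry from a device name to a UUID.
# On a real system, get the UUID with: blkid -s UUID -o value /dev/sda1
FSTAB=$(mktemp)
printf '/dev/sda1 / ext4 defaults 0 1\n' > "$FSTAB"

UUID=0a1b2c3d-0000-0000-0000-000000000000   # placeholder value
sed -i "s|^/dev/sda1 |UUID=$UUID |" "$FSTAB"

cat "$FSTAB"   # prints the rewritten root entry
rm -f "$FSTAB"
```

Mounting by UUID means the entry keeps working whether the disk shows up as /dev/sda, /dev/xvda, or /dev/nvme0n1.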