Why is my EC2 Linux instance migrated with AWS Application Migration Service or AWS Elastic Disaster Recovery failing instance status checks?

Instance status checks are failing on my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance. I migrated the instance using AWS Application Migration Service or AWS Elastic Disaster Recovery.

Resolution

Note: The following resolution covers the most common reasons for instance status check failures. Application Migration Service conversion servers make boot loader changes, inject hypervisor drivers, and install cloud tools. With instance type right-sizing, instance status check failures caused by exhausted memory are uncommon. Corrupted file systems usually originate on the source machine.

When troubleshooting instance status check failure, keep the Linux boot process in mind when checking the following:

The typical boot order is: power on - power-on self-test (POST) - BIOS/UEFI - master boot record (MBR)/EFI - boot loader - kernel (and initramfs)

The boot sequence might differ for some operating systems.

Incorrect startup configuration

Instance failed to reach the boot loader (GRUB)

The following error occurs if the instance fails to reach the boot loader (GRUB):

No bootable device. Retrying in 60 seconds.

Booting from hard disk 0...

The preceding error indicates that the instance can't find a bootable volume. To troubleshoot this error, verify that the root volume is attached as the boot device and that it contains an installed boot loader.
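
One way to check this is to attach the root volume to a rescue instance as a secondary volume and inspect it from there. The following is a minimal sketch; the device name /dev/xvdf is an example and might differ on your rescue instance:

# List the partitions on the attached volume
sudo fdisk -l /dev/xvdf

# Check the first sector for a boot loader signature
sudo file -s /dev/xvdf
# Output that includes "DOS/MBR boot sector" indicates that a boot loader is present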

You might see a GRUB prompt similar to the following if there's an issue with a GRUB configuration file (grub.cfg):

grub>

An instance console screenshot that shows the boot process stopped at the GRUB boot loader indicates an issue with the grub.cfg file. The GRUB configuration file is usually located at /boot/grub2/grub.cfg, /boot/grub/grub.cfg, or /boot/grub/grub.conf.
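
If grub.cfg is missing or corrupted, one common recovery approach is to regenerate it from a rescue instance. The following sketch assumes that the root file system from the impaired instance is attached to a rescue instance as /dev/xvdf1 and that the distribution uses GRUB 2; device names, paths, and the grub2-mkconfig or grub-mkconfig command vary by distribution:

# Mount the root file system from the attached volume and prepare a chroot
sudo mount /dev/xvdf1 /mnt
for dir in dev proc sys; do sudo mount --bind /$dir /mnt/$dir; done

# Regenerate grub.cfg inside the chroot
# (on Debian-based systems, use grub-mkconfig -o /boot/grub/grub.cfg instead)
sudo chroot /mnt grub2-mkconfig -o /boot/grub2/grub.cfg

# Clean up the bind mounts and unmount the volume
for dir in dev proc sys; do sudo umount /mnt/$dir; done
sudo umount /mnt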

Incompatible kernel

Issues with the kernel or drivers

If you don't find the GRUB errors described in the preceding section, then troubleshoot the kernel and drivers.

Xen platform:

Previous generation instance types (such as M4, C4, and R4) run on the Xen platform. Operating systems that run on this platform require the xen-blkfront and xen-netfront drivers. If either of these drivers is missing, then the instance fails its status check. This failure might appear as missing drives in the console output, as shown in the following example:

[ ***] dracut-initqueue[679]: Warning: dracut-initqueue timeout - starting timeout scripts

The preceding failure occurs when the initramfs is missing the required drivers.
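
To confirm whether the Xen drivers are present, you can check the installed kernel and its initramfs. The following is a sketch that assumes a dracut-based distribution; run it on the source server before migration, or adapt the paths if you're inspecting a mounted root volume from a rescue instance:

# Check whether the modules are available for the installed kernel
modinfo xen-blkfront xen-netfront

# Check whether the drivers are packaged in the initramfs
# (use lsinitramfs on Debian-based systems)
lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'xen-blkfront|xen-netfront'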

If there are errors related to the initramfs or the kernel, then rebuild the initramfs. For more information, see I'm receiving a "Kernel panic" error after I've upgraded the kernel or tried to reboot my EC2 Linux instance.
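
As a sketch, on distributions that use dracut you can rebuild the initramfs so that it includes the required drivers; on Debian-based systems, update-initramfs serves the same purpose:

# Rebuild the initramfs for the running kernel and force inclusion of the Xen drivers
sudo dracut -f --add-drivers "xen-blkfront xen-netfront" /boot/initramfs-$(uname -r).img $(uname -r)

# Debian-based equivalent
sudo update-initramfs -u -k $(uname -r)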

Nitro-based instances:

Nitro-based instances require both the NVMe driver (for Amazon EBS volumes) and the Elastic Network Adapter (ENA) driver (for the network interfaces). If either of these drivers is missing, then the instance fails its status check. This failure might appear as missing drives in the console output, as shown in the following example:

[***   ] A start job is running for dev-disk...e2.device (12min 17s / no limit)

For information on resolving the preceding error, see Why is my Linux instance not booting after I changed its type to a Nitro-based instance type?
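
You can also verify that both drivers are available before or after migration. The following is a minimal sketch that assumes a dracut-based distribution; use lsinitramfs on Debian-based systems:

# Confirm that the NVMe and ENA modules exist for the installed kernel
modinfo nvme ena

# Confirm that the modules are included in the initramfs
lsinitrd /boot/initramfs-$(uname -r).img | grep -E '(nvme|ena)\.ko'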

Incorrect networking configuration

There are many possible network configurations for source servers. The upstream maintainers of the applications used to manage these configurations provide detailed documentation for them. If you suspect a network configuration issue, then access the instance and review the configuration. To do this, use one of the methods outlined in Why is my EC2 Linux instance not booting and going into emergency mode?

Common network configuration management tools include NetworkManager, systemd-networkd, netplan, wicked, and legacy network scripts (such as ifcfg files or /etc/network/interfaces).
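
To identify which tool manages networking on the instance, you can check which services are active and confirm the interface configuration. This is a sketch; service and file names vary by distribution:

# Check which network management services are running
systemctl status NetworkManager systemd-networkd

# Review the interface configuration files used by common tools
ls /etc/sysconfig/network-scripts/ /etc/netplan/ /etc/network/ 2>/dev/null

# Confirm that the expected interface has an IP address and a default route
ip addr show
ip route show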

Related information

My EC2 Linux instance failed the instance status check due to operating system issues. How do I troubleshoot this?

System status checks