In general terms, once you've already confirmed that the correct drive has the proper device name, as you did, "can't find a boot device" means the disk is corrupted or missing data: it either cannot be mounted at all or doesn't contain what's needed even to begin starting the operating system. You mentioned that the disks were mounted, but I suspect you mean the EBS volumes are attached to the EC2 instance at the AWS level; I don't see how they could be mounted at the operating-system level if BIOS/UEFI can't even find a boot device to start from.
You could create a temporary EC2 instance in the same Availability Zone where the problematic EBS volume resides. Make sure you can log in to the new instance, then attach the problematic volume to it and try mounting it to see whether any of its contents are accessible (a sketch of the attach step is below). With luck, you can at least recover the data, configurations, and so on that you need, even if the original server can no longer be restored to life. Given your microscopic instance sizes, I expect you're running Linux, so if the disk contents are mostly healthy and only the OS won't boot, you could also try copying most of the programs and data directly to a new server's disk and see whether you can get your application working there.
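As a rough illustration, here is a minimal boto3 sketch of moving the volume over to a rescue instance. The volume ID, instance ID, region, and device name are all placeholders you would replace with your own values; it also assumes the broken instance is stopped so the volume can be detached cleanly.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region containing the AZ

VOLUME_ID = "vol-0123456789abcdef0"          # problematic EBS volume (placeholder)
RESCUE_INSTANCE_ID = "i-0123456789abcdef0"   # temporary rescue instance (placeholder)

# Detach the volume from the broken instance (skip if it is already detached).
ec2.detach_volume(VolumeId=VOLUME_ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])

# Attach it to the rescue instance as a secondary (non-boot) device.
ec2.attach_volume(
    VolumeId=VOLUME_ID,
    InstanceId=RESCUE_INSTANCE_ID,
    Device="/dev/sdf",  # typically appears as /dev/xvdf or /dev/nvme1n1 inside the instance
)
ec2.get_waiter("volume_in_use").wait(VolumeIds=[VOLUME_ID])
```

Once the volume shows as attached, log in to the rescue instance, find the device with `lsblk`, and mount the data partition somewhere like `/mnt/rescue` to inspect whatever is still readable.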