I stopped the instance, and the volume then attached without a problem. The error message "volume is already attached to an instance" apparently does not identify the actual issue to be resolved.
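For anyone hitting the same thing, the stop-then-attach sequence can be sketched with the AWS CLI. This is a minimal sketch: the instance ID, volume ID, and device name below are placeholders, not values from this thread.

```shell
# Hypothetical IDs -- replace with your own.
INSTANCE_ID=i-0123456789abcdef0
VOLUME_ID=vol-0123456789abcdef0

# Stop the instance and wait until it is fully stopped
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

# Attach the volume, then start the instance again
aws ec2 attach-volume --volume-id "$VOLUME_ID" \
    --instance-id "$INSTANCE_ID" --device /dev/sdf
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```

Waiting for the `stopped` state before attaching avoids racing the attach call against the shutdown.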
I would raise a support ticket for this - it's not possible from this distance to see what might be happening.
I created a fresh volume, attached it, and copied the data over from an existing instance that had the original volume mounted. Since I found a workaround and this was a one-time migration, I won't be pursuing this further. If it were a recurring issue, I would. If someone else hits the same problem, I encourage them to follow up with AWS via a support ticket.
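That workaround can be sketched roughly as follows. This is an assumption-laden sketch, not the poster's exact steps: the availability zone, volume size, device name, and mount points are all placeholders.

```shell
# Hypothetical values -- adjust for your environment.
# 1. Create a fresh volume in the same AZ as the instance
aws ec2 create-volume --availability-zone us-east-1a \
    --size 100 --volume-type gp3

# 2. Attach it (using the volume ID returned above), then on the
#    instance create a filesystem on the new device and mount it
sudo mkfs -t xfs /dev/xvdf
sudo mkdir -p /mnt/newvol
sudo mount /dev/xvdf /mnt/newvol

# 3. Copy the data across from the old volume's mount point,
#    preserving permissions, hard links, ACLs, and xattrs
sudo rsync -aHAX /mnt/oldvol/ /mnt/newvol/
```

The trailing slashes on the `rsync` paths matter: they copy the contents of the old mount point rather than the directory itself.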
Hi, I think it is important to understand why this is happening.
A filesystem can only be mounted by a single server at a time; if two servers mount the same filesystem, corruption will occur. For two EC2 instances to mount the same filesystem, it would need to be a clustered filesystem (CFS). A few of these exist, but I will expand on that in another thread.
The official documentation [1] explains how long an attachment may take (the time given is just a reference) and describes a few ways to work around the issue.
In short, the AWS drivers that ensure an EBS volume is mounted by a single instance are there to protect your data; the goal is to have layers of protection guaranteeing that no two instances have the same EBS volume mounted.
Reference: [1] https://aws.amazon.com/premiumsupport/knowledge-center/ebs-stuck-attaching/
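The single-attachment guarantee described above can be checked directly: `describe-volumes` reports which instance, if any, currently holds the volume. The volume ID here is a placeholder.

```shell
# Hypothetical volume ID -- shows the attachment state and which
# instance currently holds the volume, if any.
aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 \
    --query 'Volumes[0].Attachments[*].{Instance:InstanceId,State:State,Device:Device}' \
    --output table
```

If the attachment state is stuck in `attaching` or shows a stale `InstanceId`, that is the condition the linked knowledge-center article addresses.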
After attachment, the instance will not start. The error in the EC2 console is "Server.InternalError: Internal error on launch".
I tried making a new volume without using a snapshot and attaching it. It worked with no issues. Something about creating the volume from a snapshot is causing the problem.