I believe it's due to one of the following:
The most likely explanation is that you did not properly unmount the old volume before rebooting your instance. This can happen if you simply detach the volume from your instance without unmounting it first; when the instance reboots, the kernel automatically mounts the first available volume, which in your case was the old volume.

Another possibility is that you have a typo in your ~/.bashrc file. The source /etc/environment command loads the environment variables from the /etc/environment file into your current shell. If that line is malformed, your shell could end up loading the environment variables from the old volume instead of the new one.

Finally, it is also possible that there is a problem with the new volume itself. If the new volume is corrupt or damaged, it may not mount properly.

To troubleshoot this issue, try the following steps:
- Check whether the old volume is still mounted by running: sudo lsblk
- If the old volume is still mounted, unmount it by running: sudo umount /dev/xvda1
- Check the ~/.bashrc file for typos by opening it in a text editor and inspecting the source /etc/environment line.
- Try mounting the new volume manually by running: sudo mount /dev/xvdf1 /mnt

A combined sketch of these checks follows below.
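This is only a sketch: the device names /dev/xvda1 and /dev/xvdf1 are carried over from the commands above and may be different on your instance, so confirm them against the output of lsblk first.

```
# List block devices and their mount points to see what is currently mounted
sudo lsblk

# If the old volume (e.g. /dev/xvda1) is still mounted somewhere other than /,
# unmount it before detaching it
sudo umount /dev/xvda1

# Inspect the line in ~/.bashrc that sources /etc/environment and check it for typos
grep -n "source /etc/environment" ~/.bashrc

# Try mounting the new volume manually and confirm it is readable
sudo mount /dev/xvdf1 /mnt
ls /mnt
```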
Thank you for the assistance, Subhaan!
The old volume, which I am now stuck on, has so many issues that I can't even run certain commands such as lsblk. It will be easier for me to start from scratch, since I didn't do much work on the volume I just installed. I am now creating a new volume and permanently deleting the older volumes before using the new one. The method I am using to create the new volume is:
This is the same process I went through last time. Given what you said, "This can happen if you simply detach the volume from your instance without unmounting it first," I am unsure whether the way I am going about this is correct.
I am not sure exactly what these AWS commands are doing on the backend. Do you know if this is the right way to clear the old volume and launch a new one?
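To make the question concrete: should the sequence look roughly like the sketch below, with the old volume unmounted before it is detached? The volume IDs, instance ID, Availability Zone, size, and device names here are placeholders rather than my actual values.

```
# On the instance: unmount the old volume BEFORE detaching it
sudo umount /dev/xvdf1

# From the AWS CLI: detach and then permanently delete the old volume
aws ec2 detach-volume --volume-id vol-OLD1234567890
aws ec2 delete-volume --volume-id vol-OLD1234567890

# Create a fresh volume in the same Availability Zone as the instance
aws ec2 create-volume --availability-zone us-east-1a --size 30 --volume-type gp3

# Attach the new volume to the instance (it may appear as /dev/xvdf inside the OS)
aws ec2 attach-volume --volume-id vol-NEW1234567890 --instance-id i-0123456789abcdef0 --device /dev/sdf

# Back on the instance: create a filesystem on the new, empty volume and mount it
sudo mkfs -t xfs /dev/xvdf
sudo mount /dev/xvdf /mnt
```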