EC2 reverted to older volume


I created a new volume to alleviate issues I was having with my EC2 instance. I had been using this new volume, and yesterday, without my touching the console (volume changes are visible there), the data on my EC2 instance suddenly reverted to the data from the older volume!

All I did was append "source /etc/environment" to my ~/.bashrc file, and after rebooting, all the data on my server was from the old volume.
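For reference, the change was just this one line appended to ~/.bashrc (reproduced from memory; /etc/environment normally holds plain KEY=value pairs read by pam_env, which also happen to parse as shell assignments):

    # appended to the end of ~/.bashrc
    source /etc/environment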

How did this happen? Since it looked like both volumes were mounted, I tried unmounting the old volume, and even a "force unmount" didn't fix anything. Any suggestions?

asked 2 years ago · 375 views
1 Answer

I believe it's due to one of the following:

The most likely explanation is that the old volume was not properly unmounted before you rebooted your instance. This can happen if you simply detach the volume from your instance without unmounting it first. When you reboot, the kernel mounts the first available volume, which in your case was the old one.

Another possibility is a typo in your ~/.bashrc file. The source /etc/environment command loads the environment variables from the /etc/environment file into your current shell; a typo in that line could cause the environment variables from the old volume to be picked up instead of the new one.

Finally, it is also possible that there is a problem with the new volume itself. If the new volume is corrupt or damaged, it may not mount properly.
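If the first explanation applies, one way to make mounts deterministic across reboots is to reference the volume by filesystem UUID in /etc/fstab instead of relying on device order (a sketch; the UUID, mount point, and filesystem type are placeholders you would substitute with your own values):

    # find the UUID of the new volume's filesystem
    sudo blkid /dev/xvdf1
    # then add a line like this to /etc/fstab (values are placeholders)
    # UUID=1234abcd-0000-0000-0000-000000000000  /mnt  ext4  defaults,nofail  0  2

The nofail option keeps the instance from hanging at boot if the volume is ever missing.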

To troubleshoot this issue, you can try the following steps:

1. Check whether the old volume is still mounted:

   sudo lsblk

   If it is, unmount it:

   sudo umount /dev/xvda1

2. Check the ~/.bashrc file for typos. Open the file in a text editor and review the source /etc/environment line.

3. Try mounting the new volume manually:

   sudo mount /dev/xvdf1 /mnt
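Before unmounting anything, it can also help to confirm which device actually backs the root filesystem, so you know which of the two volumes the instance is really running from (device names here are examples):

    # show which block device is mounted at /
    findmnt /
    # list all block devices with their filesystems, labels, and UUIDs
    sudo lsblk -f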

answered 2 years ago
  • Thank you for the assistance, Subhaan!

    The old volume, which I am now stuck on, has so many issues that I can't even run certain commands such as lsblk. It will be easier for me to start from scratch, since I hadn't done much work on the volume I had just set up. I am now creating a new volume and permanently deleting the older volumes before using the new one. The method I am using to create a new volume is:

    1. Go to the EC2 instance in question.
    2. On the top left click "Actions" -> "Monitor and Troubleshoot" -> "Replace Root Volume".

    This is the same process I went through last time. Given what you said ("This can happen if you simply detach the volume from your instance without unmounting it first"), I am unsure whether the way I am going about this is correct.

    I am unsure what these AWS commands are doing on the backend exactly. Do you know if this is the right way to clear the volume and launch a new one?
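    In case it helps, the console action I described appears to map to the EC2 CreateReplaceRootVolumeTask API; a rough AWS CLI equivalent might look like this (the instance ID is a placeholder, and I have not verified the exact flags):

        # replace the root volume, restoring it to the instance's initial launch state,
        # and delete the old root volume once the task succeeds
        aws ec2 create-replace-root-volume-task \
            --instance-id i-0123456789abcdef0 \
            --delete-replaced-root-volume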
