How to decrease the size of a boot volume on an M5 instance using NVMe?


I tried following this article to decrease the volume and replace the boot volume, but it does not work with NVMe-based volumes...
https://medium.com/@m.yunan.helmy/decrease-the-size-of-ebs-volume-in-your-ec2-instance-ea326e951bce

Can anyone offer any help on how to accomplish this?

I wish AWS would PLEASE offer a way to easily decrease a volume (the same way they do with increasing one). :-/

Asked 3 years ago · 1,121 views
4 Answers

Hello there

Thanks for reaching out to AWS via forum support.

Please note that decreasing the size of an EBS volume is not supported by AWS. There are many third-party articles that try to achieve this, but we cannot comment on their effectiveness. The only reliable way is to launch another instance with a smaller root volume and then migrate your data.
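For example, a smaller root volume can be requested at launch time (a sketch with the AWS CLI; the AMI ID, instance type, size, and root device name below are placeholders, and the root device name depends on the AMI). Note that the size cannot be smaller than the AMI's snapshot, so launch from a base AMI rather than from an image of the large instance:

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":8,"VolumeType":"gp3"}}]'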

AWS
Answered 3 years ago

This is not helpful.

Answered 3 years ago

From "PAID" AWS Support ...

Please note: before performing the following steps, I would suggest you back up your instance by taking an EBS snapshot and creating an AMI of the instance:

[+] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html

[+] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html
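If you prefer the CLI, both backups can be taken like this (a sketch; the volume and instance IDs are placeholders):

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-shrink backup"
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "pre-shrink-backup" --no-reboot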

Note: Since the steps for this use case involve changes at the filesystem and grub level, it is recommended to test them in a test environment before using them on a production instance. You may consider creating an AMI/snapshot of the current instance (a snapshot of the root volume is required anyway), launching a new instance from it, and then performing these steps on the new instance. Once you have confirmed that everything works correctly and the volume size is decreased, you can repeat the same steps on your production instance.

[+] Launching an instance from an AMI: https://aws.amazon.com/premiumsupport/knowledge-center/launch-instance-custom-ami/

Please understand that AWS has not released any official documentation for this. I am sharing the steps that I have tried in my test environment and have confirmed to work.

================
Steps:-

  1. Stop the instance whose EBS root volume needs to be reduced. Let us call this the original instance. We will be working with only this instance and no additional instances.

  2. Create a new EBS volume of the desired smaller size (for example, 400 GB) in the same Availability Zone as the original volume. It must still be large enough to hold the data currently on the root volume.

  3. Attach the new volume to the original instance as /dev/sdf.
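Steps 2 and 3 can be done from the console or with the AWS CLI (a sketch; the Availability Zone and IDs below are placeholders):

aws ec2 create-volume --size 400 --availability-zone us-east-1a --volume-type gp3
aws ec2 attach-volume --volume-id vol-NEW --instance-id i-0123456789abcdef0 --device /dev/sdf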

  4. Start the original instance and SSH into it.

  5. Execute the following commands:

$ sudo su

lsblk

Example:-

lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:1 0 8G 0 disk
`-nvme0n1p1 259:2 0 8G 0 part / ---> Original volume
nvme1n1 259:0 0 2G 0 disk ---> New Volume

  6. Create a partition in the new EBS volume using the following command:

fdisk /dev/nvme1n1

 Remember to use the correct device name.  

Note: Press 'n' after executing the above command to create a partition and accept all the defaults (create a primary partition 'p', and the partition number should be '1'; let all other values remain at their defaults). When prompted for a command again, press 'w' to write the partition table to disk.
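If you prefer a non-interactive alternative, the same single primary partition can be created with parted (a sketch; parted may need to be installed first, and the device name must match your lsblk output):

parted -s /dev/nvme1n1 mklabel msdos mkpart primary xfs 1MiB 100%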

  7. Create a new XFS filesystem on the partition created in the new EBS volume using the following command:

mkfs.xfs /dev/nvme1n1p1

  8. Mount the new EBS volume using the following command:

mount /dev/nvme1n1p1 /mnt

  9. Copy the contents of the current EBS volume to the new EBS volume using the following command (the excludes skip directory contents rather than the directories themselves, so the mount points needed in step 10 still get created):

rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /mnt

  10. Execute the following command to add some bind mounts in preparation for the chroot:

for dir in {/dev,/dev/pts,/sys,/proc}; do mount -o bind $dir /mnt$dir; done

  11. Chroot into the new EBS volume using the following command:

chroot /mnt /bin/bash

Make sure that the chroot command executed successfully, without any errors.

  12. Note the UUID of the new EBS volume using the following command:

xfs_admin -u /dev/nvme1n1p1

  13. Note the UUID of the original EBS volume using the following command:

cat /etc/fstab

  14. Replace the existing UUID (the one found in /etc/fstab) with the UUID of the new EBS volume device (found in step 12) in /etc/grub2.cfg using the command:

sed -i 's/<UUID-from-etc-fstab>/<UUID-of-new-volume>/g' /etc/grub2.cfg

Example:-

sed -i 's/388a99ed-9486-4a46-aeb6-06eaf6c47675/cbfd60ce-27af-4e5c-a860-0afe55f854df/g' /etc/grub2.cfg
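To avoid copy-paste mistakes, both UUIDs can also be captured into shell variables (a sketch; it assumes the root entry in /etc/fstab uses the UUID=... form, as is typical on Amazon Linux):

# xfs_admin prints "UUID = <uuid>"; keep only the last field
NEW_UUID=$(xfs_admin -u /dev/nvme1n1p1 | awk '{print $NF}')
# inside the chroot, /etc/fstab still carries the original volume's UUID
OLD_UUID=$(awk '$2 == "/" {sub("UUID=", "", $1); print $1}' /etc/fstab)
sed -i "s/$OLD_UUID/$NEW_UUID/g" /etc/grub2.cfg
# the same substitution also covers the /etc/fstab edit in step 16
sed -i "s/$OLD_UUID/$NEW_UUID/g" /etc/fstab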


  15. Reinstall grub on the new EBS volume and update the grub config:

grub2-install /dev/nvme1n1

grub2-mkconfig -o /boot/grub2/grub.cfg

  16. Edit the file /etc/fstab and change the root filesystem UUID to the one you noted in step 12.

    Save and exit the editor.

  17. Exit from the chroot:

exit
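The steps below stop the instance straight away; if you prefer to unmount cleanly first (optional; a sketch that simply reverses step 10), run this after exiting the chroot:

for dir in /proc /sys /dev/pts /dev; do umount /mnt$dir; done
umount /mnt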

  18. Stop the original instance.

  19. Detach the original and new EBS volumes from the original instance.

  20. Attach the new EBS volume to the original instance as /dev/sda1 and start the instance.

  21. Wait for the original instance to pass status checks, then SSH into it and verify that everything works as intended.
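For reference, steps 18-20 can also be performed with the AWS CLI (a sketch; the instance and volume IDs are placeholders):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-ORIGINAL
aws ec2 detach-volume --volume-id vol-NEW
aws ec2 attach-volume --volume-id vol-NEW --instance-id i-0123456789abcdef0 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0123456789abcdef0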

I would like to reiterate that since the above steps involve changes at the filesystem and grub level, you should try them on a test instance before using them on a production instance.

Answered 3 years ago

Hi there, note that steps 9 and 10 interact: if the rsync in step 9 excludes the directories themselves (for example /dev/ instead of /dev/*), there will be no /mnt/dev for step 10 to bind-mount.

hai
Answered 2 years ago

回答问题的准则