Extend the root (ext4) partition that is adjacent to the 2nd swap partition, which is adjacent to the 3rd LVM2 partition containing two logical volumes


We need to extend the root partition (nvme0n1p1), which is adjacent to the 2nd swap partition (nvme0n1p2), which in turn is adjacent to the 3rd partition (nvme0n1p3), an LVM2 physical volume containing two logical volumes (xfs, per the lsblk output below). One of those logical volumes is mounted at /home/abc, which is where the server hosts the web application (Django framework, inside a miniconda virtual environment).

I am able (as verified on a staging instance) to increase the size of the EBS volume and extend the filesystem, but only for the 3rd LVM2 partition at the tail end of the nvme0n1 drive.

The first (nvme0n1p1) partition is almost out of space.

After increasing the size of the volume on Amazon EBS, the AWS instructions for extending a Linux file system (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html) tell you to grow the partition (the one I want to grow is ext4):

sudo growpart /dev/nvme0n1 1

but the response is:

NOCHANGE: partition 1 could only be grown by 33 [fudge=2048]

After that, you're supposed to run:

sudo resize2fs /dev/nvme0n1p1

But without growing it, you can't resize the first ext4 partition (nvme0n1p1). I was not able to growpart (or resize2fs) the first partition because it is sandwiched between the 2nd and 3rd partitions; the 33 sectors growpart reports match the roughly 16.9 kB of free space that parted shows between p1 and p2 (33 × 512 B ≈ 16.9 kB). How do I accomplish this? Do I have to shift the 2nd swap and 3rd LVM2 partitions toward the end of the disk to free up sectors so that the first partition can be resized?

Since I wasn't able to grow p1, I applied the same commands to p3 and it worked:

sudo growpart /dev/nvme0n1 3

I am able to use the Logical Volume Manager with commands like:

lvextend -L +5G /dev/mapper/vg_abc-app

to resize the logical volumes inside the 3rd partition (nvme0n1p3) of the device.
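
For reference, the sequence on the staging instance was roughly along these lines (a sketch from memory — the pvresize and xfs_growfs steps are what I believe I ran; the logical volumes are xfs according to the lsblk output below, so their filesystems grow with xfs_growfs rather than resize2fs):

sudo growpart /dev/nvme0n1 3                  # grow the partition into the new space
sudo pvresize /dev/nvme0n1p3                  # make LVM see the larger physical volume
sudo lvextend -L +5G /dev/mapper/vg_abc-app   # grow the logical volume
sudo xfs_growfs /home/abc                     # grow the xfs filesystem to fill it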

The information below is from the actual instance I need to work on; it does not yet reflect an increase in the EBS volume size.

Here is the lsblk -fm output:

NAME                 FSTYPE      LABEL           UUID                                   MOUNTPOINT                  NAME                   SIZE OWNER GROUP MODE
loop0                squashfs                                                           /snap/core/00000            loop0                114.9M root  disk  brw-rw----
loop2                squashfs                                                           /snap/amazon-ssm-agent/0000 loop2                 26.7M root  disk  brw-rw----
loop3                squashfs                                                           /snap/core/00000            loop3                  114M root  disk  brw-rw----
loop4                squashfs                                                           /snap/core18/0000           loop4                 55.6M root  disk  brw-rw----
loop5                squashfs                                                           /snap/amazon-ssm-agent/0000 loop5                 25.1M root  disk  brw-rw----
loop6                squashfs                                                           /snap/core18/0000           loop6                 55.6M root  disk  brw-rw----
nvme0n1                                                                                                             nvme0n1                 50G root  disk  brw-rw----
├─nvme0n1p1          ext4        cloudimg-rootfs 00000000-0000-0000-0000-000000000000   /                           ├─nvme0n1p1             20G root  disk  brw-rw----
├─nvme0n1p2          swap                        00000000-0000-0000-0000-000000000000   [SWAP]                      ├─nvme0n1p2              2G root  disk  brw-rw----
└─nvme0n1p3          LVM2_member                 00000000-0000-0000-0000-000000000000                             └─nvme0n1p3             28G root  disk  brw-rw----
  ├─vg_abc-logs      xfs                         00000000-0000-0000-0000-000000000000   /var/log                      ├─vg_abc-logs          8G root  disk  brw-rw----
  └─vg_abc-app       xfs                         00000000-0000-0000-0000-000000000000   /home/abc                     └─vg_abc-app          19G root  disk  brw-rw----
nvme1n1              LVM2_member                 00000000-0000-0000-0000-000000000000                             nvme1n1                 50G root  disk  brw-rw----
└─vg_backups-backups xfs                         00000000-0000-0000-0000-000000000000   /home/abc/Backups-Disk      └─vg_backups-backups    49G root  disk  brw-rw----

Here is the output of: df -hT

Filesystem                     Type      Size  Used Avail Use% Mounted on
udev                           devtmpfs   16G     0   16G   0% /dev
tmpfs                          tmpfs     3.1G  306M  2.8G  10% /run
/dev/nvme0n1p1                 ext4       20G   15G  4.5G  77% /
tmpfs                          tmpfs      16G   40K   16G   1% /dev/shm
tmpfs                          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                          tmpfs      16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/vg_abc-logs        xfs       8.0G  1.6G  6.5G  20% /var/log
/dev/loop2                     squashfs   27M   27M     0 100% /snap/amazon-ssm-agent/0000
/dev/loop5                     squashfs   26M   26M     0 100% /snap/amazon-ssm-agent/0001
/dev/mapper/vg_abc-app         xfs        19G  2.0G   18G  11% /home/abc
/dev/mapper/vg_backups-backups xfs        49G  312M   49G   1% /home/abc/Backups-Disk
/dev/loop3                     squashfs  114M  114M     0 100% /snap/core/00000
/dev/loop4                     squashfs   56M   56M     0 100% /snap/core18/0000
/dev/loop0                     squashfs  115M  115M     0 100% /snap/core/00001
/dev/loop6                     squashfs   56M   56M     0 100% /snap/core18/0001
tmpfs                          tmpfs     3.1G     0  3.1G   0% /run/user/1000

Here is the output of:

parted -a optimal /dev/nvme0n1 print free
Model: NVMe Device (nvme)
Disk /dev/nvme0n1: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
        32.3kB  1049kB  1016kB           Free Space
 1      1049kB  21.5GB  21.5GB  primary  ext4            boot
        21.5GB  21.5GB  16.9kB           Free Space
 2      21.5GB  23.6GB  2147MB  primary  linux-swap(v1)
 3      23.6GB  53.7GB  30.1GB  primary                  lvm

How do I increase the size of the root partition nvme0n1p1 without stopping the EC2 instance? The solutions I have found indicate that the entire drive needs to be wiped and a new partition table created, which would obviously entail stopping the EC2 instance. Please help.

The AWS docs direct you to https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-lvm-diskdruid-auto if you have LVM on Nitro-based instances. However, that link doesn't help me much with what I'm trying to do here.

Also, there is a second device, nvme1n1, that is entirely an LVM2 physical volume with an xfs logical volume mounted at /home/abc/Backups-Disk (filesystem: /dev/mapper/vg_backups-backups).

Not sure if it makes a difference, as you can see above:

nvme1n1              LVM2_member                 00000000-0000-0000-0000-000000000000                             nvme1n1                 50G root  disk  brw-rw----
└─vg_backups-backups xfs                         00000000-0000-0000-0000-000000000000   /home/abc/Backups-Disk      └─vg_backups-backups    49G root  disk  brw-rw----

I don't believe this other device needs to be changed in any way; however, please advise if it does.

Any help is appreciated. Thank you.

2 Answers

Hi There

It does not look like you increased the size of the EBS volume. In your lsblk output, I see a 50G nvme0n1 volume and three partitions adding up to 50G (20G + 2G + 28G).

Increase the size of the EBS volume first and then extend the filesystem. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/requesting-ebs-volume-modifications.html

nvme0n1                 50G root  disk  brw-rw----
├─nvme0n1p1          ext4        cloudimg-rootfs 00000000-0000-0000-0000-000000000000   /                           ├─nvme0n1p1             20G root  disk  brw-rw----
├─nvme0n1p2          swap                        00000000-0000-0000-0000-000000000000   [SWAP]                      ├─nvme0n1p2              2G root  disk  brw-rw----
└─nvme0n1p3          LVM2_member                 00000000-0000-0000-0000-000000000000                             └─nvme0n1p3             28G root  disk  brw-rw----
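
If it helps, the size increase itself can be done from the console or with the AWS CLI; a minimal sketch (the volume ID and target size below are placeholders):

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 60
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0   # wait for "optimizing" or "completed" before growing anything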

Also, which OS are you using? I came across these articles describing a similar error, which suggest you might need to install or update cloud-guest-utils:

https://access.redhat.com/solutions/4350511

https://devopstechy.online/how-to-fix-nochange-partition-1-could-only-be-grown-by-6190792671-fudge2048-issue-in-linux/
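
If the instance is Debian/Ubuntu based, a minimal way to update the package that provides growpart would be something like:

sudo apt-get update
sudo apt-get install --only-upgrade cloud-guest-utils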

Matt-B (AWS Expert), answered 2 years ago
  • You are correct, and I will edit the question. I was able to increase the nvme0n1p3 LVM2 partition in another EC2 staging environment. What I mean to say is that I can get to the point where I can increase the size of the EBS volume, extend the filesystem on the third partition, and then change the sizes of the logical volumes inside the nvme0n1p3 LVM2 partition; I just didn't do that on this particular instance. The OS is Ubuntu 16.04. Looking into the updates for cloud-guest-utils. Thanks.

  • I already have cloud-guest-utils. It still does not work when trying to grow the first (root) partition.


This is a common partition-management issue; it is not unique to AWS and happens on all OSes (Windows/macOS/Linux).

To understand the issue better, do a web search for the phrase 'How to extend C drive to a non-adjacent partition', or see https://www.reddit.com/r/sysadmin/comments/10rl05w/how_to_extend_c_drive_to_a_non_adjacent_partition/

You will need to use a rescue instance along with proper partition management software such as:

  1. GParted (needs a GUI via X11) - see https://www.pcquest.com/pcqlinux-gparted-aws-appliance/
  2. MiniTool Partition Wizard - paid
  3. AOMEI Partition Assistant - paid

Use one of these to shuffle the 2nd and 3rd partitions down into the new unused space at the end of the disk. That leaves free space immediately to the right of the partition you want to extend, and then your regular partition-expansion CLI tools (or the software above) can expand the 'sandwiched' partition.
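
If it's useful, a rough sketch of moving the volume to a rescue instance with the AWS CLI (after snapshotting, as noted below; every ID and the device name here are placeholders):

aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaaa            # the original instance
aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbbbb              # the 50G root volume
aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbbbb --instance-id i-0ccccccccccccccccc --device /dev/sdf   # the rescue instance
# repartition from the rescue instance, then detach and re-attach to the original instance as its root device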

Please take a snapshot before you start.
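
A snapshot can be taken from the console, or with the CLI along these lines (the volume ID is a placeholder):

aws ec2 create-snapshot --volume-id vol-0bbbbbbbbbbbbbbbbb --description "pre-repartition backup of nvme0n1"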

Also, it's an AWS best practice to keep one partition per EBS volume; that way you never run into this issue.
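
For example, instead of ever repartitioning, extra space could go on its own volume; a minimal sketch with the AWS CLI (the size, availability zone, IDs, device name, and mount point are all placeholders, and the NVMe name the new volume appears as may differ):

aws ec2 create-volume --size 20 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0ddddddddddddddddd --instance-id i-0aaaaaaaaaaaaaaaaa --device /dev/sdg
sudo mkfs -t ext4 /dev/nvme2n1        # format the new volume
sudo mkdir -p /data
sudo mount /dev/nvme2n1 /data         # mount it; add an /etc/fstab entry to persist across reboots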

Jatin (AWS), answered a year ago
