Unable to resize AWS EC2 storage with growpart


I'm trying to update the disk size of my EC2 instance and am having problems.

I was able to use the AWS console to modify the size of the volume to 224GB, and this is reflected when I use lsblk. I can't get the partition to resize, though.

This is what lsblk displays:

sudo lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0   224G  0 disk
|-nvme0n1p1 259:1    0     1M  0 part
|-nvme0n1p2 259:2    0     1K  0 part
|-nvme0n1p5 259:3    0   976M  0 part /boot
`-nvme0n1p6 259:4    0 110.9G  0 part
  |-vg-root 253:0    0  94.6G  0 lvm  /
  |-vg-tmp  253:1    0   976M  0 lvm  /tmp
  `-vg-swap 253:2    0  15.3G  0 lvm  [SWAP]

When I use the growpart command this is the output I get:

sudo growpart /dev/nvme0n1 6
CHANGED: partition=6 start=2007040 old: size=232433664 end=234440704 new: size=467754975,end=469762015

I then try to use the resize2fs command and this is what I get:

sudo resize2fs /dev/nvme0n1p6
resize2fs 1.44.1 (24-Mar-2018)
resize2fs: Device or resource busy while trying to open /dev/nvme0n1p6
Couldn't find valid filesystem superblock.

I tried rebooting the EC2 instance and when it comes back online I get the same results from lsblk.

I'm not quite sure what I'm doing wrong here. I'm following this guide:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html

  • Do you know what filesystem is being used on that partition? If you know the Linux distro, that might also help.

  • @Paul Frederiksen The Linux distro is Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-191-generic x86_64).

    As for the filesystem type, I'm not 100% sure how to get that information. This is what I get when I run df -T:

    Filesystem          Type     1K-blocks     Used Available Use% Mounted on
    udev                devtmpfs   8028464        0   8028464   0% /dev
    tmpfs               tmpfs      1612192     3168   1609024   1% /run
    /dev/mapper/vg-root ext4      97066316 80446960  11642512  88% /
    tmpfs               tmpfs      8060956        0   8060956   0% /dev/shm
    tmpfs               tmpfs         5120        0      5120   0% /run/lock
    tmpfs               tmpfs      8060956        0   8060956   0% /sys/fs/cgroup
    /dev/nvme0n1p5      ext2        982504   148916    783620  16% /boot
    /dev/mapper/vg-tmp  ext4        964900       76    898472   1% /tmp
    tmpfs               tmpfs      1612188        0   1612188   0% /run/user/1012
    tmpfs               tmpfs      1612188        0   1612188   0% /run/user/1016
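
    For reference, a couple of standard ways to see the filesystem type per block device:

    lsblk -f                   # FSTYPE column shows the filesystem (or LVM2_member for a partition used as an LVM PV)
    sudo blkid /dev/nvme0n1p6  # reports TYPE="LVM2_member" for a partition handed to LVM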
    
Rudy
asked 2 months ago · 496 views
2 Answers

/dev/nvme0n1p6 doesn't have a filesystem on it; instead it is presented to the Logical Volume Manager (LVM) as a physical volume (PV).

You need to run sudo pvresize /dev/nvme0n1p6 to tell LVM that the underlying device has grown. Once this has completed, the extra disk space should be reflected in the output of sudo vgs and sudo vgdisplay. Then grow the logical volume(s) that you want to grow with sudo lvextend and sudo resize2fs.

Full steps are here https://repost.aws/knowledge-center/ebs-extend-volume-lvm-partitions
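
Roughly, the full sequence looks like this (a sketch assuming the volume group and logical volume names from the lsblk output above, and an ext4 root filesystem):

sudo pvresize /dev/nvme0n1p6                    # tell LVM the physical volume has grown
sudo vgs                                        # the VG should now show the extra free space
sudo lvextend -l +100%FREE /dev/mapper/vg-root  # extend the root LV into the free space
sudo resize2fs /dev/mapper/vg-root              # grow the ext4 filesystem online to match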

EXPERT
Steve_M
answered 2 months ago
  • I tried doing that before, and just gave it another try. What I get is this:

    sudo pvresize /dev/nvme0n1p6
      Physical volume "/dev/nvme0n1p6" changed
      1 physical volume(s) resized / 0 physical volume(s) not resized
    

    But when I go in and check with vgs I still get the same size displayed:

    sudo vgs
      VG #PV #LV #SN Attr   VSize   VFree
      vg   1   3   0 wz--n- 110.83g 8.00m
    
  • As @Paul Frederiksen has already asked, what flavour of Linux is this? I'm not familiar with an AMI that puts root, swap & tmp on a partition of the system disk under LVM control. It would be nice to be able to spin up an EC2 instance with it to test a few things.

    Back to your question, the doc I linked to deals with growing an EBS volume when it's the secondary disk, which is a bit different from what you're trying to do here.

    So when you run lsblk, then increase the EBS volume and run growpart, lsblk afterwards shows the same as before? I think that growing the partition while it's the running root disk isn't enough; you may actually have to delete and re-add the partition with the updated disk geometry (this sounds familiar from when I've worked on Linux VMs in the past).

    This looks close to what you are trying to achieve https://serverfault.com/questions/861517/centos-7-extend-partition-with-unallocated-space

  • This is running Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-191-generic x86_64). I'll take a look through that serverfault question as well.

  • Also, as for this: "So when you run lsblk, then increase the EBS & run growpart, after this lsblk shows the same as before?"

    Running lsblk before changing the EBS size in the AWS console showed 112GB. I updated that to 224GB in the console and ran lsblk again; it showed the new 224GB size for the main nvme0n1 drive but kept the old size for the partition nvme0n1p6. Then I ran the growpart command, and it reported that it was resizing the partition, doubling it, but when I use lsblk it still shows the old size.


You will need to grow the logical volume as well:

sudo lvextend -r -l +100%FREE /dev/mapper/vg-root

The -r in lvextend will do the filesystem resize for you, so there's no need to run sudo resize2fs separately.
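
To confirm the extend took effect, something like this should show the new sizes (assuming the same names as above):

sudo lvs vg   # LSize of the root LV should reflect the added space
df -h /       # the mounted root filesystem should report the larger size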

Let me know if that helps!

AWS
answered 2 months ago
  • I ran that command, and this is what I got:

     sudo lvextend -r -l +100%FREE /dev/mapper/vg-root
    
      Size of logical volume vg/root changed from 94.61 GiB (24221 extents) to 94.62 GiB (24223 extents).
      Logical volume vg/root successfully resized.
    resize2fs 1.44.1 (24-Mar-2018)
    Filesystem at /dev/mapper/vg-root is mounted on /; on-line resizing required
    old_desc_blocks = 12, new_desc_blocks = 12
    The filesystem on /dev/mapper/vg-root is now 24804352 (4k) blocks long.
    

    I'm not sure why it only extended by .01GB... When I run lsblk it looks like I should have the full 224GB available, and it doesn't look like it's being used:

    NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    nvme0n1     259:0    0   224G  0 disk
    |-nvme0n1p1 259:1    0     1M  0 part
    |-nvme0n1p2 259:2    0     1K  0 part
    |-nvme0n1p5 259:3    0   976M  0 part /boot
    `-nvme0n1p6 259:4    0 110.9G  0 part
      |-vg-root 253:0    0  94.6G  0 lvm  /
      |-vg-tmp  253:1    0   976M  0 lvm  /tmp
      `-vg-swap 253:2    0  15.3G  0 lvm  [SWAP]
    
  • I think before that I have to resize the partition, right? That's what growpart is supposed to do, but it doesn't seem to be working.
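
    One way to check whether the kernel has actually picked up the new partition size (a diagnostic sketch using the device names above, not a step taken in this thread):

    sudo parted /dev/nvme0n1 unit GB print   # partition sizes as written in the on-disk table
    cat /sys/block/nvme0n1/nvme0n1p6/size    # size in 512-byte sectors as the kernel currently sees it

    If parted already shows the larger partition but the kernel value still matches the old 110.9G, growpart rewrote the table but the kernel's view hasn't been refreshed.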
