After creating a new volume from a snapshot, lsblk shows the old size


Hi, I have created a new volume from a snapshot, increased its size to 100 GB, and then attached the volume to my instance. But I cannot grow the partition, and lsblk doesn't show the bigger volume.
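For reference, the size the volume actually has on the EBS side can be double-checked with the AWS CLI, roughly like this (assuming the CLI is configured; vol-0123456789abcdef0 is a placeholder for the real volume ID):

# Hypothetical volume ID; replace it with the one shown in the EC2 console
aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 --query 'Volumes[0].Size'

If that prints 100, the volume itself has the right size and the problem is only in how the instance sees the disk.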

Here is the output of lsblk:

❯ lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0      7:0    0  25.1M  1 loop /snap/amazon-ssm-agent/5656
loop1      7:1    0  24.4M  1 loop /snap/amazon-ssm-agent/6312
loop2      7:2    0  49.6M  1 loop /snap/snapd/17883
loop3      7:3    0  55.6M  1 loop /snap/core18/2667
loop4      7:4    0  63.2M  1 loop /snap/core20/1738
loop6      7:6    0   103M  1 loop /snap/lxd/23541
loop7      7:7    0  63.3M  1 loop /snap/core20/1778
loop9      7:9    0  49.8M  1 loop /snap/snapd/17950
loop10     7:10   0  55.6M  1 loop /snap/core18/2654
loop11     7:11   0 111.9M  1 loop /snap/lxd/24322
xvda     202:0    0    16G  0 disk
├─xvda1  202:1    0  15.9G  0 part /
├─xvda14 202:14   0     4M  0 part
└─xvda15 202:15   0   106M  0 part /boot/efi

Here is the output of sudo resize2fs /dev/xvda1:

resize2fs 1.46.5 (30-Dec-2021)
The filesystem is already 4165883 (4k) blocks long. Nothing to do!

Does anyone know what could be causing this issue?

Thank you very much.

2 Answers

Is this an ext4 or XFS file system? resize2fs is for ext4 only.
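For example, the filesystem type can be checked with something like:

# Show the filesystem type of the root partition
lsblk -f /dev/xvda1
# or
df -Th /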

Also, you probably need to run the growpart command first: sudo growpart /dev/xvda 1
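The usual sequence after resizing an EBS volume would be something like the sketch below, assuming the disk is /dev/xvda with partition 1 mounted on / as in your lsblk output:

# 1. Grow partition 1 so it fills the whole disk
sudo growpart /dev/xvda 1

# 2. Grow the filesystem on the enlarged partition
sudo resize2fs /dev/xvda1   # if the filesystem is ext4
sudo xfs_growfs /           # if the filesystem is XFS (takes the mount point)

Both steps can be run online, i.e. while the root filesystem is mounted.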

answered a year ago
  • Hi, thank you. I also tried sudo growpart /dev/xvda 1, but it does not work either.

    What's weird is that the original volume was 30 GB but always showed as 16 GB.

    sudo growpart /dev/xvda 1

    NOCHANGE: partition 1 is size 33327071. it cannot be grown

    (33327071 512-byte sectors is about 15.9 GiB, which matches the xvda1 size in lsblk, so the kernel still only sees a 16 GB disk.)

    Do you have more ideas I could try?

    Thanks.


Sorry, I'm only reading this now. Have you fixed it?

What's the output of fdisk -l?
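For example, something along these lines shows the disk and partition sizes the kernel currently sees:

# Print the partition table and disk size for the root disk
sudo fdisk -l /dev/xvda

If the "Disk /dev/xvda" line still reports 16 GiB, the kernel has not seen a larger disk at all, which would also explain why growpart reports NOCHANGE.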

answered a year ago
