I modified my default volume size from within the AWS console web app: I selected the instance, then the volume, and changed its size to 500 GB. The volume now displays as 500 GB, but I'm still getting an out-of-space error over SSH when I try to move files in.
I was directed to this article (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html), but when I try to run its commands I get error messages:
[cloudshell-user@ip-10-4-34-97 ~]$ sudo xfs_growfs -d /dev/sda1
sudo: xfs_growfs: command not found
[cloudshell-user@ip-10-4-34-97 ~]$ sudo resize2fs /dev/sda1
sudo: resize2fs: command not found
[cloudshell-user@ip-10-4-34-97 ~]$
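Note: the cloudshell-user@ prompt shows these commands were run in AWS CloudShell, which does not have the filesystem-resize tools installed, rather than on the EC2 instance itself. For reference, the flow the linked guide describes, run on the instance over SSH, looks roughly like the following; the nvme0n1 device names are common Nitro-instance defaults and may differ:

lsblk                            # confirm the actual device and partition names first
sudo growpart /dev/nvme0n1 1     # extend partition 1 to fill the resized volume
sudo xfs_growfs -d /             # then grow the filesystem: this for XFS...
sudo resize2fs /dev/nvme0n1p1    # ...or this for ext4, per the Type column of df -hT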
Why would this be happening? Here are some other related results:
[cloudshell-user@ip-10-4-34-97 ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 1G 0 loop /home/cloudshell-user
vda 254:0 0 2G 0 disk
vdb 254:16 0 10G 0 disk
vdc 254:32 0 10G 0 disk
vdd 254:48 0 10G 0 disk
vde 254:64 0 20.3G 0 disk /aws/mde/logs
[cloudshell-user@ip-10-4-34-97 ~]$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
overlay overlay 20G 3.9G 16G 21% /
tmpfs tmpfs 64M 0 64M 0% /dev
tmpfs tmpfs 1.3G 0 1.3G 0% /sys/fs/cgroup
shm tmpfs 64M 0 64M 0% /dev/shm
/dev/vde ext4 20G 3.9G 16G 21% /home
/dev/loop0 ext4 974M 6.9M 900M 1% /home/cloudshell-user
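The vd* virtio disks and the /home/cloudshell-user loop mount above belong to CloudShell's own environment, not to the instance's EBS volume. A quick way to check which environment a shell is attached to, assuming only standard tools:

whoami                        # cloudshell-user means AWS CloudShell
lsblk -d -o NAME,SIZE,TYPE    # vd* disks suggest CloudShell; nvme* or xvd* are typical EBS devices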
Here is the EC2 volume attached (console screenshot not reproduced here):
Here is the output of df -hT run from the SSH terminal:
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs tmpfs 812M 81M 731M 10% /run
/dev/nvme0n1p1 ext4 8.2G 8.2G 0 100% /
tmpfs tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 4.1G 0 4.1G 0% /sys/fs/cgroup
/dev/loop1 squashfs 56M 56M 0 100% /snap/snapd/19122
/dev/loop2 squashfs 26M 26M 0 100% /snap/amazon-ssm-agent/6312
/dev/loop3 squashfs 112M 112M 0 100% /snap/core/16091
/dev/loop0 squashfs 11M 11M 0 100% /snap/canonical-livepatch/235
/dev/loop4 squashfs 26M 26M 0 100% /snap/amazon-ssm-agent/7528
/dev/loop5 squashfs 43M 43M 0 100% /snap/snapd/20092
/dev/loop6 squashfs 59M 59M 0 100% /snap/core18/2790
/dev/loop7 squashfs 59M 59M 0 100% /snap/core18/2745
/dev/nvme0n1p15 vfat 110M 5.5M 104M 5% /boot/efi
tmpfs tmpfs 812M 0 812M 0% /run/user/1000
Here are the final SSH terminal results, from sudo lsblk:
sudo: unable to resolve host ip-172-31-47-70: Resource temporarily unavailable
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 9.6M 1 loop /snap/canonical-livepatch/235
loop1 7:1 0 53.2M 1 loop /snap/snapd/19122
loop2 7:2 0 24.4M 1 loop /snap/amazon-ssm-agent/6312
loop3 7:3 0 105.8M 1 loop /snap/core/16091
loop4 7:4 0 24.6M 1 loop /snap/amazon-ssm-agent/7528
loop5 7:5 0 40.9M 1 loop /snap/snapd/20092
loop6 7:6 0 55.7M 1 loop /snap/core18/2790
loop7 7:7 0 55.7M 1 loop /snap/core18/2745
nvme0n1 259:0 0 500G 0 disk
├─nvme0n1p1 259:1 0 499.9G 0 part /
├─nvme0n1p14 259:2 0 4M 0 part
└─nvme0n1p15 259:3 0 106M 0 part /boot/efi
Thx
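Comparing the two outputs is telling: lsblk shows nvme0n1p1 already extended to 499.9G, while df -hT reports the ext4 filesystem on it at only 8.2G and 100% full, so the partition has grown but the filesystem has not. A quick way to spot this pending state:

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/nvme0n1    # partition-level sizes
df -h /                                              # filesystem-level size actually usable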
OK, so I SSH'd in, and xfs_growfs doesn't work because the filesystem isn't XFS, so I'm going to try resize2fs. I'm still unsure what path I should use in the resize command. The volume's root device name in the AWS console says /dev/sda1. Should I run sudo resize2fs /dev/sda1?
Yes, you need to run the command that matches your filesystem type. Use the following command to check the root volume name as the OS recognizes it:
df -hT
The results of df -hT are up on the original post. I'm just not understanding those results well enough to determine which path I should use.
What your post shows is CloudShell volume information; no EC2 volume information is listed.
OK, what would I need to do to give you the info for the EC2 volume?
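For anyone following along: the EC2 volume information comes from running df -hT and lsblk over SSH on the instance itself rather than in CloudShell, as shown earlier in the thread. On Nitro instances the console's /dev/sda1 name surfaces in the OS as /dev/nvme0n1p1, so a minimal sketch of the remaining step, given the ext4 type that df -hT reports, would be:

df -hT /                         # confirm the root device and filesystem type as the OS sees them
sudo resize2fs /dev/nvme0n1p1    # grow the ext4 filesystem to fill the extended partition (works online)
df -hT /                         # verify the new size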