Questions tagged with Amazon Elastic Block Store

Extend the root (ext4) partition that is adjacent to the 2nd SWAP partition, which is adjacent to the 3rd LVM2 partition containing two logical volumes

We need to extend the root (nvme0n1p1) partition, which is adjacent to the 2nd SWAP (nvme0n1p2) partition, which in turn is adjacent to the 3rd (nvme0n1p3) LVM2 partition containing two logical volumes (xfs). One of those logical volumes is mounted at /home/abc, which is where the server hosts the web (Django framework) application from, inside a miniconda virtual environment.

I am able (as verified on a staging instance) to increase the size of the volume in Amazon EBS and extend the filesystem, but only on the 3rd LVM2 partition at the tail end of the nvme0n1 drive. The first (nvme0n1p1) partition is almost out of space.

After increasing the size of the volume on Amazon EBS, the instructions for growing the partition (the one I want to grow is ext4) at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html tell you to run:

```
sudo growpart /dev/nvme0n1 1
```

but the response is:

```
NOCHANGE: partition 1 could only be grown by 33 [fudge=2048]
```

After that you're supposed to run:

```
sudo resize2fs /dev/nvme0n1p1
```

but without growing the partition first, you can't resize the first (nvme0n1p1) ext4 filesystem. I was not able to growpart (or resize2fs) the first partition because it is sandwiched in between the 2nd and 3rd. How do I accomplish this? I have to shift the 2nd SWAP and 3rd LVM2 partitions over to free up sectors so that the first partition can be resized, correct?

Since I wasn't able to grow p1, I applied the same commands to p3 and it worked:

```
sudo growpart /dev/nvme0n1 3
```

I am able to use the Logical Volume Manager with commands like:

```
lvextend -L +5G /dev/mapper/vg_abc-app
```

to resize the logical volumes inside the 3rd partition (nvme0n1p3) of the device (the full sequence I am following there is sketched at the end of this question).

*The information below is from the actual instance I need to work on; it does not reflect an increase in EBS volume size yet.*

Here is the `lsblk -fm` output:

```
NAME                 FSTYPE      LABEL           UUID                                 MOUNTPOINT                  NAME                   SIZE OWNER GROUP MODE
loop0                squashfs                                                         /snap/core/00000            loop0                114.9M root  disk  brw-rw----
loop2                squashfs                                                         /snap/amazon-ssm-agent/0000 loop2                 26.7M root  disk  brw-rw----
loop3                squashfs                                                         /snap/core/00000            loop3                  114M root  disk  brw-rw----
loop4                squashfs                                                         /snap/core18/0000           loop4                 55.6M root  disk  brw-rw----
loop5                squashfs                                                         /snap/amazon-ssm-agent/0000 loop5                 25.1M root  disk  brw-rw----
loop6                squashfs                                                         /snap/core18/0000           loop6                 55.6M root  disk  brw-rw----
nvme0n1                                                                                                           nvme0n1                 50G root  disk  brw-rw----
├─nvme0n1p1          ext4        cloudimg-rootfs 00000000-0000-0000-0000-000000000000 /                           ├─nvme0n1p1             20G root  disk  brw-rw----
├─nvme0n1p2          swap                        00000000-0000-0000-0000-000000000000 [SWAP]                      ├─nvme0n1p2              2G root  disk  brw-rw----
└─nvme0n1p3          LVM2_member                 00000000-0000-0000-0000-000000000000                             └─nvme0n1p3             28G root  disk  brw-rw----
  ├─vg_abc-logs      xfs                         00000000-0000-0000-0000-000000000000 /var/log                      ├─vg_abc-logs          8G root  disk  brw-rw----
  └─vg_abc-app       xfs                         00000000-0000-0000-0000-000000000000 /home/abc                     └─vg_abc-app          19G root  disk  brw-rw----
nvme1n1              LVM2_member                 00000000-0000-0000-0000-000000000000                             nvme1n1                 50G root  disk  brw-rw----
└─vg_backups-backups xfs                         00000000-0000-0000-0000-000000000000 /home/abc/Backups-Disk      └─vg_backups-backups    49G root  disk  brw-rw----
```

Here is the output of `df -hT`:

```
Filesystem                     Type      Size  Used Avail Use% Mounted on
udev                           devtmpfs   16G     0   16G   0% /dev
tmpfs                          tmpfs     3.1G  306M  2.8G  10% /run
/dev/nvme0n1p1                 ext4       20G   15G  4.5G  77% /
tmpfs                          tmpfs      16G   40K   16G   1% /dev/shm
tmpfs                          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                          tmpfs      16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/vg_abc-logs        xfs       8.0G  1.6G  6.5G  20% /var/log
/dev/loop2                     squashfs   27M   27M     0 100% /snap/amazon-ssm-agent/0000
/dev/loop5                     squashfs   26M   26M     0 100% /snap/amazon-ssm-agent/0001
/dev/mapper/vg_abc-app         xfs        19G  2.0G   18G  11% /home/abc
/dev/mapper/vg_backups-backups xfs        49G  312M   49G   1% /home/abc/Backups-Disk
/dev/loop3                     squashfs  114M  114M     0 100% /snap/core/00000
/dev/loop4                     squashfs   56M   56M     0 100% /snap/core18/0000
/dev/loop0                     squashfs  115M  115M     0 100% /snap/core/00001
/dev/loop6                     squashfs   56M   56M     0 100% /snap/core18/0001
tmpfs                          tmpfs     3.1G     0  3.1G   0% /run/user/1000
```

Here is the output of `parted -a optimal /dev/nvme0n1 print free`:

```
Model: NVMe Device (nvme)
Disk /dev/nvme0n1: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
        32.3kB  1049kB  1016kB           Free Space
 1      1049kB  21.5GB  21.5GB  primary  ext4            boot
        21.5GB  21.5GB  16.9kB           Free Space
 2      21.5GB  23.6GB  2147MB  primary  linux-swap(v1)
 3      23.6GB  53.7GB  30.1GB  primary  lvm
```

How do I increase the root size of nvme0n1p1 without stopping the EC2 instance? The solutions I have found indicate that the entire drive needs to be reformatted and a new partition table made, which would obviously entail stopping the EC2 instance. Please help.

The AWS docs direct you to https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-lvm-diskdruid-auto if you have LVM on Nitro-based instances, but that link doesn't help me much with what I'm trying to do here.

Also, there is a second device, nvme1n1, that is entirely LVM2/xfs and mounted at /home/abc/Backups-Disk (Filesystem: /dev/mapper/vg_backups-backups). I am not sure if it makes a difference; as you can see above:

```
nvme1n1              LVM2_member 00000000-0000-0000-0000-000000000000                        nvme1n1               50G root disk brw-rw----
└─vg_backups-backups xfs         00000000-0000-0000-0000-000000000000 /home/abc/Backups-Disk └─vg_backups-backups  49G root disk brw-rw----
```

I don't believe this other device needs to be changed in any way, but please advise if so. Any help is appreciated, thank you.
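For context, the full sequence that worked on the staging instance for the 3rd (tail-end) partition is roughly the following. This is only a sketch: the `+5G` figure is an example, and `pvresize` / `xfs_growfs` are the intermediate steps I understand are needed so that the LVM physical volume and the xfs logical volume actually pick up the new space.

```
# Grow the 3rd (LVM2) MBR partition into the space added by the EBS resize
sudo growpart /dev/nvme0n1 3

# Make the LVM physical volume use the now-larger partition
sudo pvresize /dev/nvme0n1p3

# Extend the logical volume behind /home/abc (the +5G is just an example)
sudo lvextend -L +5G /dev/mapper/vg_abc-app

# Grow the xfs filesystem on the mounted logical volume
sudo xfs_growfs /home/abc
```

None of that touches nvme0n1p1, though, which is the partition that is actually running out of space; it only works because p3 sits at the end of the disk.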
1 answer · 0 votes · 36 views · asked 7 days ago

Understanding RDS PIOPS, EBS IO Balance (%), and EBS Byte Balance (%)

I have a PostgreSQL RDS instance using the r5.xlarge instance type with 500 GB of gp2 SSD storage. As I understand it, with 500 GB of gp2 I get a baseline of 3 × 500 = 1,500 IOPS. Now I need to increase it to 2,500 IOPS. What should I do? As the documentation says, I have two options (please correct me if I'm wrong):

1. Increase the storage size to ~850 GB (3 × 850 ≈ 2,500 IOPS).
2. Change the disk type to io1 and set PIOPS = 2,500.

500 GB of gp2 costs $115 per month. With option 1, I have to pay $195 per month. With option 2, I have to pay $115 + $500 (0.2 × 2,500) = $615 per month. I know that io1 provides more throughput and a better SLA, but do I really need to use io1 + PIOPS? In which cases should I use it (assume that I just need a 99% SLA)?

One more question: assume that I have an RDS instance with 1,000 GB of gp2, so the baseline is 3,000 IOPS. What happens if I change it to io1 and set PIOPS to 1,000? What is the baseline IO of my RDS then: 3,000, 1,000, or 3,000 + 1,000?

I also see EBS IO Balance (%) and EBS Byte Balance (%) in the CloudWatch metrics. As I understand it, these represent my reserved balance of IO and throughput, but how do I know the absolute value (so I can count how much IO balance remains)? Say I have an RDS instance with 1,000 GB of gp2; as I understand from the documentation, I get 3,000 IOPS, and if my RDS uses fewer than 3,000 IOPS it accumulates IO credits into my balance, but what is the maximum balance I can accumulate? I couldn't find documentation on that.

Is there any way to monitor how RDS consumes my IO, independently of what AWS reports (or is pulling the CloudWatch metrics, as sketched below, the best I can do)?

Thank you so much.
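For the monitoring part, the closest thing I can see is pulling the per-instance RDS metrics straight from CloudWatch, e.g. with the AWS CLI. This is just a sketch: `my-db-instance` is a placeholder identifier, and I'm assuming the standard AWS/RDS metric names (ReadIOPS, WriteIOPS, EBSIOBalance%).

```
# Average read IOPS for the instance over the last hour, in 5-minute buckets
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name ReadIOPS \
  --dimensions Name=DBInstanceIdentifier,Value=my-db-instance \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Average

# The burst-bucket metric itself is also exposed per instance (as a percentage)
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name EBSIOBalance% \
  --dimensions Name=DBInstanceIdentifier,Value=my-db-instance \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Average
```

As far as I can tell, the balance is only exposed as a percentage, not as an absolute credit count, which is part of why I'm asking.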
0 answers · 0 votes · 26 views · asked a month ago