Questions tagged with Amazon Elastic Block Store
Transfer Files from Google Drive to EC2
Hi all, I am trying to transfer a machine learning model file (500 MB) from Google Drive to EC2 for inferencing. I tried different commands and found that wget works best. The command I ran was:
```
wget https://drive.google.com/uc?id=1-Dqk6fZzDiFKTqnnQ2yqW48uJk-CPqrB
```
The result was:
```
Connecting to drive.google.com (drive.google.com)|172.293.62.138|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: 'uc?id=1-Dqk6fZzDiFKTqnnQ2yqW48uJk-CPqrB'
2022-09-20 18:26:40 (37.2 MB/s) - 'uc?id=1-Dqk6fZzDiFKTqnnQ2yqW48uJk-CPqrB' saved
```
But I can't see the file that I need to transfer; instead I see a file named "uc?id=1-Dqk6fZzDiFKTqnnQ2yqW48uJk-CPqrB"... Can you please help with that step? Many thanks in advance. Best, Basem
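For reference, `wget` can be told what filename to save to with `-O`; a minimal sketch using the same file ID (the target name `model.bin` is just a placeholder I chose):
```bash
# Save the download under a chosen filename instead of the raw query string
wget -O model.bin "https://drive.google.com/uc?export=download&id=1-Dqk6fZzDiFKTqnnQ2yqW48uJk-CPqrB"
```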
Extend the root (ext4) partition that sits before a 2nd swap partition, which is adjacent to a 3rd LVM2 partition containing two logical volumes
We need to extend the root (nvme0n1p1) partition, which is adjacent to the 2nd swap (nvme0n1p2) partition, which in turn is adjacent to the 3rd (nvme0n1p3) LVM2 partition. The LVM2 partition holds a volume group with two logical volumes (xfs). One of the logical volumes is mounted at /home/abc, which is where the server hosts the web (Django framework) application from (inside a virtual environment | miniconda). I am able (as verified on a staging instance) to increase the size of the volume on Amazon and extend the filesystem, but only on the 3rd LVM2 partition at the tail end of the (nvme0n1) drive. The first (nvme0n1p1) partition is almost out of space.

After increasing the size of the volume on Amazon EBS, the instructions for growing the partition (the one I want to grow is ext4) (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html) tell you to run:
```
sudo growpart /dev/nvme0n1 1
```
but the response is:
```
NOCHANGE: partition 1 could only be grown by 33 [fudge=2048]
```
After that you're supposed to run:
```
sudo resize2fs /dev/nvme0n1p1
```
but without growing the partition, you can't resize the first (nvme0n1p1) ext4 filesystem. I was not able to growpart (or resize2fs) the first partition because it is sandwiched in between the 2nd and 3rd. How do I accomplish this? Do I have to shift the 2nd swap and 3rd LVM2 partitions over to free up sectors so that the first partition can be resized?

Since I wasn't able to grow p1, I applied the same commands to p3 and it worked:
```
sudo growpart /dev/nvme0n1 3
```
I am able to use the Logical Volume Manager, for example:
```
lvextend -L +5G /dev/mapper/vg_abc-app
```
to resize the logical volumes inside the 3rd partition (nvme0n1p3) of the device.

*The information below is from the actual instance I need to work on; it does not reflect an increase in EBS volume size yet.*

Here is the `lsblk -fm` output:
```
NAME                  FSTYPE      LABEL           UUID                                  MOUNTPOINT
NAME                  SIZE   OWNER GROUP MODE
loop0                 squashfs                                                          /snap/core/00000
loop0                 114.9M root  disk  brw-rw----
loop2                 squashfs                                                          /snap/amazon-ssm-agent/0000
loop2                 26.7M  root  disk  brw-rw----
loop3                 squashfs                                                          /snap/core/00000
loop3                 114M   root  disk  brw-rw----
loop4                 squashfs                                                          /snap/core18/0000
loop4                 55.6M  root  disk  brw-rw----
loop5                 squashfs                                                          /snap/amazon-ssm-agent/0000
loop5                 25.1M  root  disk  brw-rw----
loop6                 squashfs                                                          /snap/core18/0000
loop6                 55.6M  root  disk  brw-rw----
nvme0n1
nvme0n1               50G    root  disk  brw-rw----
├─nvme0n1p1           ext4        cloudimg-rootfs 00000000-0000-0000-0000-000000000000 /
├─nvme0n1p1           20G    root  disk  brw-rw----
├─nvme0n1p2           swap                        00000000-0000-0000-0000-000000000000 [SWAP]
├─nvme0n1p2           2G     root  disk  brw-rw----
└─nvme0n1p3           LVM2_member                 00000000-0000-0000-0000-000000000000
└─nvme0n1p3           28G    root  disk  brw-rw----
  ├─vg_abc-logs       xfs                         00000000-0000-0000-0000-000000000000 /var/log
  ├─vg_abc-logs       8G     root  disk  brw-rw----
  └─vg_abc-app        xfs                         00000000-0000-0000-0000-000000000000 /home/abc
  └─vg_abc-app        19G    root  disk  brw-rw----
nvme1n1               LVM2_member                 00000000-0000-0000-0000-000000000000
nvme1n1               50G    root  disk  brw-rw----
└─vg_backups-backups  xfs                         00000000-0000-0000-0000-000000000000 /home/abc/Backups-Disk
└─vg_backups-backups  49G    root  disk  brw-rw----
```
Here is the output of `df -hT`:
```
Filesystem                      Type      Size  Used Avail Use% Mounted on
udev                            devtmpfs   16G     0   16G   0% /dev
tmpfs                           tmpfs     3.1G  306M  2.8G  10% /run
/dev/nvme0n1p1                  ext4       20G   15G  4.5G  77% /
tmpfs                           tmpfs      16G   40K   16G   1% /dev/shm
tmpfs                           tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                           tmpfs      16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/vg_abc-logs         xfs       8.0G  1.6G  6.5G  20% /var/log
/dev/loop2                      squashfs   27M   27M     0 100% /snap/amazon-ssm-agent/0000
/dev/loop5                      squashfs   26M   26M     0 100% /snap/amazon-ssm-agent/0001
/dev/mapper/vg_abc-app          xfs        19G  2.0G   18G  11% /home/abc
/dev/mapper/vg_backups-backups  xfs        49G  312M   49G   1% /home/abc/Backups-Disk
/dev/loop3                      squashfs  114M  114M     0 100% /snap/core/00000
/dev/loop4                      squashfs   56M   56M     0 100% /snap/core18/0000
/dev/loop0                      squashfs  115M  115M     0 100% /snap/core/00001
/dev/loop6                      squashfs   56M   56M     0 100% /snap/core18/0001
tmpfs                           tmpfs     3.1G     0  3.1G   0% /run/user/1000
```
Here is the output of `parted -a optimal /dev/nvme0n1 print free`:
```
Model: NVMe Device (nvme)
Disk /dev/nvme0n1: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
        32.3kB  1049kB  1016kB           Free Space
 1      1049kB  21.5GB  21.5GB  primary  ext4            boot
        21.5GB  21.5GB  16.9kB           Free Space
 2      21.5GB  23.6GB  2147MB  primary  linux-swap(v1)
 3      23.6GB  53.7GB  30.1GB  primary  lvm
```
How do I increase the size of the root partition nvme0n1p1 without stopping the EC2 instance? The solutions I have found indicate that the entire drive needs to be repartitioned with a new partition table, which would obviously entail stopping the EC2 instance. Please help.

The AWS docs direct you to https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-lvm-diskdruid-auto if you have LVM on Nitro-based instances; however, that link doesn't help me much with what I'm trying to do here.

Also, there is a second device, nvme1n1, that is entirely LVM2/xfs and mounted at /home/abc/Backups-Disk (filesystem: /dev/mapper/vg_backups-backups). Not sure if it makes a difference; as you can see above:
```
nvme1n1               LVM2_member  00000000-0000-0000-0000-000000000000
nvme1n1               50G          root  disk  brw-rw----
└─vg_backups-backups  xfs          00000000-0000-0000-0000-000000000000  /home/abc/Backups-Disk
└─vg_backups-backups  49G          root  disk  brw-rw----
```
I don't believe this other device needs to be changed in any way, but please advise if so. Any help is appreciated. Thank you.
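For context, growing the tail-end LVM partition end to end would look roughly like the following (a sketch based on the commands above; I am assuming pvresize and xfs_growfs are the remaining steps needed between growpart and the extra space actually showing up under /home/abc):
```bash
# Grow the 3rd (LVM2) partition into the new free space at the end of the disk
sudo growpart /dev/nvme0n1 3

# Make LVM aware that the physical volume behind vg_abc is now larger
sudo pvresize /dev/nvme0n1p3

# Extend the logical volume and grow its XFS filesystem online
sudo lvextend -L +5G /dev/mapper/vg_abc-app
sudo xfs_growfs /home/abc
```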
Need help with Autoscaling and warm pool
We are currently testing a scenario where we run two EC2 instances (running a custom application) at a time in an Auto Scaling group. One will be hot and the other will be warm, ready to go. When the active server goes down due to application failure, Auto Scaling will bring the warm one into the active state. While the active server is up and running, we take a snapshot of the attached EBS volume every hour. Is there a way to ensure that, when the active server goes down and the warm one comes up, it has an EBS volume created from the latest snapshot attached and mounted? I am aware of triggering a Lambda and having it attach an EBS volume to the instance, but I am not sure how I would mount it while the warm-pool EC2 instance is in the autoscaling:EC2_INSTANCE_LAUNCHING state. The application will be running on a RHEL 8 AMI. Any guidance or suggestions are appreciated. Thanks.
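To make the attach/mount step more concrete, this is a rough sketch of what I am picturing running on the instance itself (for example from user data or a lifecycle-hook handler) instead of in Lambda; the snapshot tag `Backup=app-data`, the mount point `/data`, and the NVMe device name are placeholders/assumptions, and the instance role would need the relevant ec2:* permissions:
```bash
#!/bin/bash
# Sketch: restore the newest tagged snapshot onto the newly-activated instance.
set -euo pipefail

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/latest/meta-data/placement/availability-zone)

# Most recent completed snapshot carrying the expected tag
SNAP_ID=$(aws ec2 describe-snapshots --owner-ids self \
        --filters Name=tag:Backup,Values=app-data Name=status,Values=completed \
        --query 'sort_by(Snapshots,&StartTime)[-1].SnapshotId' --output text)

# Create a volume from it in this AZ and attach it
VOL_ID=$(aws ec2 create-volume --availability-zone "$AZ" --snapshot-id "$SNAP_ID" \
        --volume-type gp3 --query VolumeId --output text)
aws ec2 wait volume-available --volume-ids "$VOL_ID"
aws ec2 attach-volume --volume-id "$VOL_ID" --instance-id "$INSTANCE_ID" --device /dev/sdf
aws ec2 wait volume-in-use --volume-ids "$VOL_ID"

# On Nitro instances the device appears as an NVMe node; mount once it shows up
sleep 10
mkdir -p /data
mount /dev/nvme1n1 /data   # device name is an assumption; match by volume ID in practice
```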
Windows Instances launched using RegisterImage API are showing PlatformDetails as "Linux/UNIX" and are not booting - when the EBS volume snapshots are created using StartSnapshot() API
Steps to reproduce:

1. Create a Windows instance with a single EBS volume (i.e. the root volume).
2. Take a snapshot of the root volume (Snap-1).
3. Create a new EBS snapshot (Snap-2) using the `StartSnapshot` API with Snap-1 as the parent snapshot.
4. On Snap-2, issue the `CompleteSnapshot` API with 0 changed blocks.
5. Create an AMI using the `RegisterImage` API with Snap-2 as the root volume and the same block device mapping as the original instance.
6. Any instance created from this AMI shows "Linux/UNIX" in Platform Details and does not boot.

It was observed that this issue occurs only if the EBS snapshot used was created via the `StartSnapshot` API. For example, if Snap-1 is used in the `RegisterImage` call, any instance created from the resulting AMI shows the platform as "windows" and boots up properly.

Question: Is there any way to create a Windows AMI (one that produces bootable Windows instances) using EBS snapshots created via the `StartSnapshot` API, or is there any workaround for this?
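For reference, the `RegisterImage` call from step 5 looks roughly like this with the AWS CLI (the snapshot ID, image name, and device name are placeholders standing in for the values from the original instance's block device mapping):
```bash
# Sketch of the RegisterImage call; snap-2222222222222222 stands in for Snap-2
aws ec2 register-image \
    --name "windows-from-startsnapshot-test" \
    --architecture x86_64 \
    --root-device-name /dev/sda1 \
    --virtualization-type hvm \
    --ena-support \
    --block-device-mappings 'DeviceName=/dev/sda1,Ebs={SnapshotId=snap-2222222222222222,VolumeType=gp3,DeleteOnTermination=true}'
```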
Attachment order for EBS volumes as /dev/nvme devices
Hello, we started seeing (from what I can find, our old instances from months ago don't exhibit this behavior) that the order of attached EBS volumes changes after the first reboot. For example, we attach (using the AWS console) vol-011117cfde1966e5f as /dev/sdf and vol-0222290fbbd8a3b79 as /dev/sdg, and they immediately show up as /dev/nvme1n1 and /dev/nvme2n1. After a reboot they swap: vol-011117cfde1966e5f becomes /dev/nvme2n1 and vol-0222290fbbd8a3b79 becomes /dev/nvme1n1. This order then becomes permanent no matter how many more times you reboot. In the console, vol-0111* is still listed first alphabetically as sdf and vol-0222* second as sdg. I'm seeing this behavior on CentOS 7.9, RockyLinux and AlmaLinux 8.6, and RockyLinux 9.0, so it doesn't seem to be specific to any operating system. I tested with t3a and m6i instance types. I am aware that we can mount filesystems using UUIDs to ensure order. I also know that https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html says "The block device driver can assign NVMe device names in a different order than you specified for the volumes in the block device mapping." The question is whether it is expected behavior that this order changes after the first reboot only, and then never changes again?
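In case it is useful to anyone reproducing this, here is roughly how I check which EBS volume ended up on which NVMe device after a reboot (assumes the nvme-cli package is installed for the second command):
```bash
# The EBS volume ID is exposed as the NVMe device's serial number
lsblk -o NAME,SERIAL,SIZE,MOUNTPOINT

# Or ask a specific device directly
sudo nvme id-ctrl /dev/nvme1n1 | grep ^sn
```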
Large 2nd disk volume not mounted; unable to access the 2nd disk volume
I have an ECS instance (Amazon Linux, t2.xlarge) with a large 2nd disk volume (block device: /dev/sdb, 80 GiB). When my instance is started, the 2nd disk volume is not mounted on any directory. I SSH into my instance as a non-root user and tried to mount /dev/sdb, but I am not allowed to because I do not have root privileges. What should I do? How can I access my 2nd disk volume? Could there be an issue because of the large disk size? Also, is there a way to request root privileges on my instance? Thanks.
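For reference, the steps I expect to need once I have root/sudo access look roughly like this (a sketch; the device may appear as /dev/xvdb rather than /dev/sdb on this instance type, and the /data mount point and xfs filesystem type are my own choices):
```bash
# Identify the second volume and check whether it already has a filesystem
lsblk
sudo file -s /dev/xvdb

# Create a filesystem only if the device is empty ("data" output from file -s)
sudo mkfs -t xfs /dev/xvdb

# Mount it
sudo mkdir -p /data
sudo mount /dev/xvdb /data
```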
Understanding RDS PIOPS, EBS IO Balance, and EBS Byte Balance (%)
I have a PostgreSQL RDS instance using the r5.xlarge type with 500 GB of gp2 SSD storage. As I understand it, with 500 GB gp2 I get a baseline of (3 x 500) = 1500 IOPS. Now I need to increase it to 2500 IOPS. What should I do? As the documents say, I have two options (please correct me if I'm wrong):

1. Increase the DB storage to ~850 GB (3 x 850 ~ 2500 IOPS).
2. Change the disk type to io1 and set PIOPS = 2500.

500 GB gp2 costs $115 per month. With option 1, I have to pay $195 per month. With option 2, I have to pay $115 + $500 (0.2 x 2500) = $615 per month. I know that io1 provides more throughput and a higher SLA level, but do I really need to use io1 + PIOPS? In which cases should I use it (assume that I just need a 99% SLA)?

One more question: assume I have an RDS instance with 1000 GB gp2, so the baseline is 3000 IOPS. What happens if I change it to io1 and set PIOPS to 1000? What is the baseline IO of my RDS then: 3000, 1000, or 3000 + 1000?

------

I see EBS IO Balance (%) and EBS Byte Balance (%) in the CloudWatch metrics. As I understand it, this is my reserved balance of IO and throughput, but how do I find its absolute value (so I can count how much IO balance remains)? Say I have RDS with 1000 GB gp2; as I understand from the documents, I get 3000 IOPS, and if my RDS uses < 3000 IOPS, the unused IO credits are added to my balance. But what is the maximum balance I can accrue? I couldn't find documentation on that. Is there any way to monitor how RDS consumes my IO (independently of AWS)? Thank you so much.
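For the last question about monitoring, here is a sketch of pulling the relevant CloudWatch metrics with the CLI (the DB identifier `my-postgres-db` is a placeholder, and I am assuming EBSIOBalance% is exposed under the AWS/RDS namespace for this instance class):
```bash
# Read IOPS actually consumed over the last hour, in 1-minute periods
aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS --metric-name ReadIOPS \
    --dimensions Name=DBInstanceIdentifier,Value=my-postgres-db \
    --statistics Average --period 60 \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"

# Remaining IO burst balance as a percentage (the metric shown in the console)
aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS --metric-name "EBSIOBalance%" \
    --dimensions Name=DBInstanceIdentifier,Value=my-postgres-db \
    --statistics Minimum --period 60 \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```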
Do I need to extend EC2 file system after resizing?
I increased the size of a gp2 EBS volume attached to a Linux EC2 instance from 8 GB to 25 GB. From the documentation, I thought I would then need to extend the Linux file system. However, I think the output of `lsblk` is telling me that it has happened automatically and I do not need to do anything. I have pasted the output below. I think it is saying that I have a 25G EBS volume called xvda with 3 partitions in it, and one of them, xvda1, is 24.9G. If that is correct then presumably I do not have to do anything more and can just use the extra space. Please correct me if I am wrong.
```
ubuntu@ip-172-31-32-80:~$ sudo lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0      7:0    0 25.1M  1 loop /snap/amazon-ssm-agent/5656
loop1      7:1    0 43.9M  1 loop /snap/certbot/2192
loop2      7:2    0 55.5M  1 loop /snap/core18/2409
loop3      7:3    0 55.6M  1 loop /snap/core18/2538
loop4      7:4    0   62M  1 loop /snap/core20/1593
loop5      7:5    0   62M  1 loop /snap/core20/1611
loop6      7:6    0 79.9M  1 loop /snap/lxd/22923
loop7      7:7    0  103M  1 loop /snap/lxd/23541
loop8      7:8    0   47M  1 loop /snap/snapd/16010
loop9      7:9    0   47M  1 loop /snap/snapd/16292
xvda     202:0    0   25G  0 disk
├─xvda1  202:1    0 24.9G  0 part /
├─xvda14 202:14   0    4M  0 part
└─xvda15 202:15   0  106M  0 part /boot/efi
ubuntu@ip-172-31-32-80:~$
```
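In case it matters, this is the check I was planning to run to confirm the filesystem itself (not just the partition) is using the 25G, plus the commands from the AWS resize guide in case it is not (a sketch; on this Ubuntu image the root partition is xvda1, as shown above):
```bash
# Filesystem size as seen by the OS; should show roughly 25G for /
df -hT /

# If the filesystem were still at the old size, the documented steps would be:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
```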
AWS MGN, EBS Volumes
Hi Team, we are doing a lift & shift migration using AWS MGN. On-prem we have 1 TB of storage attached to a server, of which only 250 GB is utilised. During replication, the AWS MGN agent created a 1 TB EBS volume. Is there any way to customise the EBS volumes that MGN creates during replication or during the cutover phase? Why do we need to pay for unused volume space? Can you please guide us to the right solution?
Low throughput on EBS gp3 volume
Hi everyone, I'm seeing very low read throughput on a gp3 volume that I'm using. The volume is set to the standard 3000 IOPS and a max throughput of 125 MB/s. However, CloudWatch reports only 60 ops/s and a read bandwidth of 7000 KiB/s (there are no writes). What could be the reason for this?
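In case it helps with the diagnosis, this is roughly the fio read test I intend to run directly against the volume to rule out the application as the bottleneck (a sketch; it assumes fio is installed and that the volume is /dev/nvme1n1, and it only reads, so it should not touch the data):
```bash
# Sequential read test with a deep queue, bypassing the page cache
sudo fio --name=gp3-read-test \
    --filename=/dev/nvme1n1 \
    --rw=read --bs=1M --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --readonly
```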
Instance keeps crashing at intervals after extending EBS volume
My EC2 instance keeps crashing (becoming unreachable) after I extended the EBS volume. When I reboot the instance, it comes back up and then becomes inaccessible again after 24-48 hours, and the cycle continues. Please help; I need it to stay up continuously like before.
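For anyone willing to help, these are the commands I plan to run right after the next reboot to gather evidence before the instance becomes unreachable again (a sketch; it assumes a systemd-based distro with journalctl available):
```bash
# Confirm the partition and filesystem were actually extended after the resize
lsblk
df -hT /

# Look for out-of-memory or disk errors from the previous incident
sudo dmesg -T | tail -n 100
sudo journalctl -p err -b -1 --no-pager | tail -n 100
```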