Questions tagged with High Performance Compute
Lightsail - EC2 - Instances Hard Limit
Hello AWS Pros! I have a question about instance limits. Have you ever worked on a project with a large number of instances in one region? I understand that with the default settings I can create up to 20 instances per region for Lightsail and EC2, but do you know how much more I can request? Is there a hard limit? For example: with the best possible setup for my project/case, can I request a maximum of 99 instances, or 1,000 instances? Thanks :)
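For context, this is roughly how I was planning to check the current limit and ask for more through Service Quotas. This is only a sketch: the quota code below is an assumption for the "Running On-Demand Standard instances" quota (which I believe is counted in vCPUs rather than instances), and the desired value is just an example.

```
# Look up the current quota (L-1216C47A is assumed to be the code for the
# "Running On-Demand Standard instances" quota; verify with list-service-quotas)
aws service-quotas get-service-quota \
    --service-code ec2 \
    --quota-code L-1216C47A

# Request an increase (example value only; the maximum granted is decided per account by AWS)
aws service-quotas request-service-quota-increase \
    --service-code ec2 \
    --quota-code L-1216C47A \
    --desired-value 1000
```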
Extend the root (ext4) partition adjacent to a 2nd swap partition, which is adjacent to a third LVM2 partition containing two logical volumes
We need to extend the root (nvme0n1p1) partition, which is adjacent to the 2nd swap (nvme0n1p2) partition, which in turn is adjacent to the third (nvme0n1p3) LVM2 partition that contains two logical volumes (xfs) inside the LVM2 volume group. One of the logical volumes is mounted at /home/abc, and that is where the server hosts the web application (Django framework, inside a miniconda virtual environment). As verified on a staging instance, I am able to increase the size of the volume on Amazon EBS and extend the filesystem, but only on the 3rd LVM2 partition at the tail end of the (nvme0n1) drive. The first (nvme0n1p1) partition is almost out of space.

After increasing the size of the volume on Amazon EBS, the instructions for growing the partition (the one I want to grow is ext4) at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html tell you to run:

```
sudo growpart /dev/nvme0n1 1
```

but the response is:

```
NOCHANGE: partition 1 could only be grown by 33 [fudge=2048]
```

After that you are supposed to run:

```
sudo resize2fs /dev/nvme0n1p1
```

but without growing the partition, you can't resize the first (nvme0n1p1) ext4 filesystem. I was not able to growpart (or resize2fs) the first partition because it is sandwiched in between the 2nd and 3rd. How do I accomplish this? I have to shift the 2nd swap and 3rd LVM2 partitions over so that free sectors are available for the first partition to be resized, correct?

Since I wasn't able to grow p1, I applied the same commands to p3 and it worked (the full sequence I used on staging is included at the end of this post):

```
sudo growpart /dev/nvme0n1 3
```

I am able to use Logical Volume Manager with commands like:

```
lvextend -L +5G /dev/mapper/vg_abc-app
```

to resize the logical volumes inside the 3rd partition (nvme0n1p3) of the device.

*The information below is from the actual instance I need to work on; it does not reflect an increase in EBS volume size yet.*

Here is the `lsblk -fm` output:

```
NAME                     FSTYPE      LABEL            UUID                                  MOUNTPOINT
NAME                       SIZE OWNER GROUP MODE
loop0                    squashfs                                                           /snap/core/00000
loop0                    114.9M root disk  brw-rw----
loop2                    squashfs                                                           /snap/amazon-ssm-agent/0000
loop2                     26.7M root disk  brw-rw----
loop3                    squashfs                                                           /snap/core/00000
loop3                      114M root disk  brw-rw----
loop4                    squashfs                                                           /snap/core18/0000
loop4                     55.6M root disk  brw-rw----
loop5                    squashfs                                                           /snap/amazon-ssm-agent/0000
loop5                     25.1M root disk  brw-rw----
loop6                    squashfs                                                           /snap/core18/0000
loop6                     55.6M root disk  brw-rw----
nvme0n1
nvme0n1                     50G root disk  brw-rw----
├─nvme0n1p1              ext4        cloudimg-rootfs  00000000-0000-0000-0000-000000000000  /
├─nvme0n1p1                 20G root disk  brw-rw----
├─nvme0n1p2              swap                         00000000-0000-0000-0000-000000000000  [SWAP]
├─nvme0n1p2                  2G root disk  brw-rw----
└─nvme0n1p3              LVM2_member                  00000000-0000-0000-0000-000000000000
└─nvme0n1p3                 28G root disk  brw-rw----
  ├─vg_abc-logs          xfs                          00000000-0000-0000-0000-000000000000  /var/log
  ├─vg_abc-logs              8G root disk  brw-rw----
  └─vg_abc-app           xfs                          00000000-0000-0000-0000-000000000000  /home/abc
  └─vg_abc-app              19G root disk  brw-rw----
nvme1n1                  LVM2_member                  00000000-0000-0000-0000-000000000000
nvme1n1                     50G root disk  brw-rw----
└─vg_backups-backups     xfs                          00000000-0000-0000-0000-000000000000  /home/abc/Backups-Disk
└─vg_backups-backups       49G root disk  brw-rw----
```

Here is the output of `df -hT`:

```
Filesystem                     Type      Size  Used Avail Use% Mounted on
udev                           devtmpfs   16G     0   16G   0% /dev
tmpfs                          tmpfs     3.1G  306M  2.8G  10% /run
/dev/nvme0n1p1                 ext4       20G   15G  4.5G  77% /
tmpfs                          tmpfs      16G   40K   16G   1% /dev/shm
tmpfs                          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                          tmpfs      16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/vg_abc-logs        xfs       8.0G  1.6G  6.5G  20% /var/log
/dev/loop2                     squashfs   27M   27M     0 100% /snap/amazon-ssm-agent/0000
/dev/loop5                     squashfs   26M   26M     0 100% /snap/amazon-ssm-agent/0001
/dev/mapper/vg_abc-app         xfs        19G  2.0G   18G  11% /home/abc
/dev/mapper/vg_backups-backups xfs        49G  312M   49G   1% /home/abc/Backups-Disk
/dev/loop3                     squashfs  114M  114M     0 100% /snap/core/00000
/dev/loop4                     squashfs   56M   56M     0 100% /snap/core18/0000
/dev/loop0                     squashfs  115M  115M     0 100% /snap/core/00001
/dev/loop6                     squashfs   56M   56M     0 100% /snap/core18/0001
tmpfs                          tmpfs     3.1G     0  3.1G   0% /run/user/1000
```

Here is the output of `parted -a optimal /dev/nvme0n1 print free`:

```
Model: NVMe Device (nvme)
Disk /dev/nvme0n1: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
        32.3kB  1049kB  1016kB           Free Space
 1      1049kB  21.5GB  21.5GB  primary  ext4            boot
        21.5GB  21.5GB  16.9kB           Free Space
 2      21.5GB  23.6GB  2147MB  primary  linux-swap(v1)
 3      23.6GB  53.7GB  30.1GB  primary  lvm
```

How do I increase the size of the root partition nvme0n1p1 without stopping the EC2 instance? The solutions I have found indicate that the entire drive needs to be reformatted and a new partition table created, which would obviously entail stopping the EC2 instance. Please help.

The AWS docs direct you to https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-lvm-diskdruid-auto if you have LVM on Nitro-based instances. However, that link doesn't help me much with what I'm trying to do here.

Also, there is a second device, nvme1n1, that is entirely LVM2/xfs and mounted at /home/abc/Backups-Disk (Filesystem: /dev/mapper/vg_backups-backups). I am not sure if it makes a difference; as you can see above:

```
nvme1n1                  LVM2_member                  00000000-0000-0000-0000-000000000000
nvme1n1                     50G root disk  brw-rw----
└─vg_backups-backups     xfs                          00000000-0000-0000-0000-000000000000  /home/abc/Backups-Disk
└─vg_backups-backups       49G root disk  brw-rw----
```

I don't believe this other device needs to be changed in any way, but please advise if it does. Any help is appreciated, thank you.
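For reference, this is the sequence that worked on the staging instance for the third (LVM) partition after the EBS volume was enlarged. It is only a sketch of the LVM path, not the fix for nvme0n1p1 itself, and it assumes pvresize is used to pick up the larger partition before extending a logical volume.

```
# After enlarging the EBS volume in the console/API:

# 1. Grow the third partition into the newly added space at the end of the disk
sudo growpart /dev/nvme0n1 3

# 2. Tell LVM that the physical volume backing the partition is now larger
sudo pvresize /dev/nvme0n1p3

# 3. Extend a logical volume and grow its filesystem in one step
#    (-r resizes the xfs filesystem via fsadm/xfs_growfs)
sudo lvextend -r -L +5G /dev/mapper/vg_abc-app
```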
Amazon Redshift concurrency scaling - how long scaling takes to complete and how to set a threshold to trigger it
Hi Team, I have an existing Redshift cluster where I want to enable concurrency scaling. I have a few questions about it:

1. My cluster, with 2 on-demand ra3.4xlarge nodes, has been running since March 2021. The AWS docs mention that a running Redshift cluster accrues 1 hour of free concurrency scaling credit every 24 hours, and that these credits never expire. Does that mean my cluster has already accrued roughly 18 months * 30 credit hours, since concurrency scaling was never enabled for this cluster?
2. When does the concurrency scaling feature kick in? Is it only when queries start getting queued? Can we define some kind of threshold, such as CPU % utilization or memory % utilization, that would automatically start the concurrency scaling process?
3. How much time does it take for the cluster to complete the scaling process and start serving queries?

Thanks!
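For reference, this is how I currently understand concurrency scaling is switched on: per WLM queue (by setting the queue's concurrency scaling mode to auto) rather than by a CPU or memory threshold. The sketch below reflects that understanding; the parameter group name is a placeholder and the WLM JSON is trimmed to the keys relevant here, so please correct me if this is not the right approach.

```
# Enable concurrency scaling on a WLM queue by setting its mode to "auto"
# (my-parameter-group is a placeholder; the queue JSON is simplified)
aws redshift modify-cluster-parameter-group \
    --parameter-group-name my-parameter-group \
    --parameters '[{"ParameterName":"wlm_json_configuration","ParameterValue":"[{\"query_concurrency\":5,\"concurrency_scaling\":\"auto\"}]"}]'

# Cap how many concurrency scaling clusters can be added at once
aws redshift modify-cluster-parameter-group \
    --parameter-group-name my-parameter-group \
    --parameters '[{"ParameterName":"max_concurrency_scaling_clusters","ParameterValue":"1"}]'
```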
Upgrade EC2 instance (c5.9xlarge to c5.24xlarge)
We are using a c4.8xlarge, and this server is not sufficient for my current software. Please upgrade my EC2 instance to a c5.24xlarge, because CPU utilization goes up to 98% on every instance type I have tried so far (c4.8xlarge, c5.9xlarge, and so on). That is why I need to upgrade my CPU. If you have any suggestions, please share them. Thank you in advance; I hope to get a swift response.
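If this is something I need to do myself rather than request, my understanding is that the resize is done by stopping the instance, changing its type, and starting it again. A minimal sketch with the AWS CLI follows; the instance ID is a placeholder.

```
# Stop the instance and wait until it is fully stopped (placeholder instance ID)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Change the instance type, then start the instance again
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --instance-type Value=c5.24xlarge

aws ec2 start-instances --instance-ids i-0123456789abcdef0
```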
Progress of migrating C4/M4 instances to the Nitro hypervisor
This article, https://perspectives.mvdirona.com/2021/11/xen-on-nitro-aws-nitro-for-legacy-instances/, mentions that Nitro can support legacy instances, but I can't find an official notice of this. Can you confirm the article is correct, and if so, can you tell me whether M4/C4 instances are now supported, or give a time frame? Also, once they are supported, does that mean the bidirectional serial console will also be supported for these legacy instances?
About recovery behavior when an AZ failure occurs
We have a website system that combines a multi-AZ configuration with an ALB. When an AZ failure occurs, I understand that the EC2 instances on the failed side will be taken out of service, but please tell me what happens when the AZ failure is resolved. Will the instances be brought back into service automatically? If there is a documentation page I can use as a reference, for example one explaining whether connections are restored automatically, please share that as well.
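For what it is worth, what I am trying to confirm is whether the ALB starts routing to the recovered AZ again on its own once health checks pass. This is the command I would use to watch target health while testing; the target group ARN is a placeholder.

```
# Watch the health state of the targets behind the ALB while the AZ recovers
# (the target group ARN below is a placeholder)
aws elbv2 describe-target-health \
    --target-group-arn arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:targetgroup/my-targets/0123456789abcdef
```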
Deploy 3 dockerized MEAN stack websites to AWS EC2
I have 3 MEAN stack applications, and I estimate our approximate monthly traffic at 30k. What specification of AWS EC2 instance would be good for this? If possible, please also tell me the instance name. Until now my applications have been dockerized and hosted on a t2.micro (free tier). We develop this software from India, but our user base is in California, so I chose the N. Virginia region on AWS. However, the system felt very slow when I tried to interact with it. Can you tell me whether the cause of the lag is the region or the instance we chose (t2.micro)? Today I updated my EC2 setup and moved all the websites to a **t3.large** instance. Now the websites run smoothly, but I feel the cost of the t3.large is very high. Please suggest the best instance for my applications.
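To help narrow down whether the earlier slowness was the t2.micro running out of CPU credits rather than the region, this is the kind of check I had in mind; the instance ID and time window are placeholders.

```
# Check whether the t2.micro was exhausting its CPU credit balance
# (placeholder instance ID and time window)
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2022-09-01T00:00:00Z \
    --end-time 2022-09-02T00:00:00Z \
    --period 300 \
    --statistics Average
```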
EFS throughput when mounted on many instances
Hello, I'm evaluating EFS as shared storage for 10 EC2 instances with a provisioned throughput of 120 MB/s, but I don't clearly understand how this throughput is split between instances. My question is: if I mount EFS on **1 instance**, **all 120 MB/s** **will be dedicated** to it... and if I mount EFS on **10 instances** and all instances are writing to EFS simultaneously, the throughput will be **12 MB/s per instance**... but what if only **1 of the 10 instances** is writing to EFS? Will the EFS throughput be **120 MB/s** or **12 MB/s**? Thanks in advance!
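If it matters, the way I planned to verify this was to run a simple write test from one instance and then from all ten at once and compare the measured throughput. A rough sketch is below; it assumes the filesystem is mounted at /mnt/efs, which is just a placeholder path.

```
# Rough write-throughput test against the EFS mount (assumed to be at /mnt/efs)
dd if=/dev/zero of=/mnt/efs/throughput-test bs=1M count=1024 conv=fsync status=progress

# Remove the test file afterwards
rm /mnt/efs/throughput-test
```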