
Questions tagged with Linux Provisioning


Stuck in stopping state

Hello there, I launched an EC2 instance with hibernation enabled from a custom Ubuntu 20.04 based AMI. When I choose the hibernate option for this instance, it takes more than 20 minutes to move from the stopping state to the stopped state. I don't know why reaching the stopped state takes this long. I tried multiple EC2 instances launched from this custom AMI, and all of them take more than 20 minutes to stop. I also increased my root volume size, so the root volume now has more than 15 GB of free space; however, it still takes this long to reach the stopped state when I choose the hibernate option from the console. I can see the hibernation-related logs in /var/log/syslog. Can anyone please help me get past this issue?

```
Sep 27 12:17:41 SparxEA systemd[1]: Starting EC2 instance hibernation setup agent...
Sep 27 12:17:41 SparxEA /hibinit-agent: Effective config: {'log_to_syslog': True, 'log_to_stderr': True, 'mkswap': 'mkswap {swapfile}', 'swapon': 'swapon {swapfile}', 'swapoff': 'swapoff {swapfile}', 'touch_swap': False, 'grub_update': True, 'swap_percentage': 100, 'swap_mb': 4000}
Sep 27 12:17:41 SparxEA /hibinit-agent: Will check if swap is at least: 4000 megabytes
Sep 27 12:17:41 SparxEA /hibinit-agent: Create swap and initialize it
Sep 27 12:17:41 SparxEA hibinit-agent[1101]: Effective config: {'log_to_syslog': True, 'log_to_stderr': True, 'mkswap': 'mkswap {swapfile}', 'swapon': 'swapon {swapfile}', 'swapoff': 'swapoff {swapfile}', 'touch_swap': False, 'grub_update': True, 'swap_percentage': 100, 'swap_mb': 4000}
Sep 27 12:17:41 SparxEA hibinit-agent[1101]: Will check if swap is at least: 4000 megabytes
Sep 27 12:17:41 SparxEA hibinit-agent[1101]: Create swap and initialize it
Sep 27 12:17:41 SparxEA /hibinit-agent: kicking child process to initiate the setup
Sep 27 12:17:41 SparxEA /hibinit-agent: Allocating 4194304000 bytes in /swap-hibinit
Sep 27 12:17:41 SparxEA /hibinit-agent: Swap pre-heating is skipped, the swap blocks won't be touched during to ensure they are ready
Sep 27 12:17:41 SparxEA /hibinit-agent: Running: mkswap /swap-hibinit
Sep 27 12:17:41 SparxEA systemd[1]: Started EC2 instance hibernation setup agent.
Sep 27 12:17:41 SparxEA hibinit-agent[1105]: Setting up swapspace version 1, size = 3.9 GiB (4194299904 bytes)
Sep 27 12:17:41 SparxEA hibinit-agent[1105]: no label, UUID=16350ccb-d242-40f1-93f5-9fbe280d33ce
Sep 27 12:17:41 SparxEA /hibinit-agent: Running: swapon /swap-hibinit
Sep 27 12:17:41 SparxEA kernel: [   25.160330] Adding 4095996k swap on /swap-hibinit. Priority:-2 extents:16 across:11141120k SSFS
Sep 27 12:17:41 SparxEA /hibinit-agent: Updating the kernel offset for the swapfile: /swap-hibinit
Sep 27 12:17:41 SparxEA /hibinit-agent: Updating GRUB to use the device PARTUUID=4986e35b-1bd5-45d3-b528-fa2edb861a38 with offset 4161536 for resume
Sep 27 12:17:42 SparxEA hibinit-agent[1112]: Sourcing file `/etc/default/grub'
Sep 27 12:17:42 SparxEA hibinit-agent[1112]: Sourcing file `/etc/default/grub.d/40-force-partuuid.cfg'
Sep 27 12:17:42 SparxEA hibinit-agent[1112]: Sourcing file `/etc/default/grub.d/50-cloudimg-settings.cfg'
Sep 27 12:17:42 SparxEA hibinit-agent[1112]: Sourcing file `/etc/default/grub.d/99-set-swap.cfg'
Sep 27 12:17:42 SparxEA hibinit-agent[1112]: Sourcing file `/etc/default/grub.d/init-select.cfg'
Sep 27 12:17:42 SparxEA hibinit-agent[1187]: Generating grub configuration file ...
Sep 27 12:17:42 SparxEA hibinit-agent[1245]: GRUB_FORCE_PARTUUID is set, will attempt initrdless boot
Sep 27 12:17:42 SparxEA hibinit-agent[1245]: Found linux image: /boot/vmlinuz-5.15.0-1019-aws
Sep 27 12:17:42 SparxEA hibinit-agent[1245]: Found initrd image: /boot/microcode.cpio /boot/initrd.img-5.15.0-1019-aws
Sep 27 12:17:43 SparxEA hibinit-agent[1245]: Found linux image: /boot/vmlinuz-5.13.0-1029-aws
Sep 27 12:17:43 SparxEA hibinit-agent[1245]: Found initrd image: /boot/microcode.cpio /boot/initrd.img-5.13.0-1029-aws
Sep 27 12:17:43 SparxEA hibinit-agent[1740]: Found memtest86+ image: /boot/memtest86+.elf
Sep 27 12:17:43 SparxEA hibinit-agent[1740]: Found memtest86+ image: /boot/memtest86+.bin
Sep 27 12:17:45 SparxEA hibinit-agent[1823]: Found Ubuntu 20.04.5 LTS (20.04) on /dev/nvme0n1p1
Sep 27 12:17:46 SparxEA hibinit-agent[3078]: done
Sep 27 12:17:46 SparxEA /hibinit-agent: GRUB configuration is updated
Sep 27 12:17:46 SparxEA /hibinit-agent: Setting swap device to 66305 with offset 4161536
Sep 27 12:17:46 SparxEA /hibinit-agent: Done updating the swap offset. Turning swapoff
Sep 27 12:17:46 SparxEA /hibinit-agent: Running: swapoff /swap-hibinit
Sep 27 12:17:46 SparxEA systemd[1]: swap\x2dhibinit.swap: Succeeded.
Sep 27 12:17:46 SparxEA systemd[877]: swap\x2dhibinit.swap: Succeeded.
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Effective config: {'log_to_syslog': True, 'log_to_stderr': True, 'mkswap': 'mkswap {swapfile}', 'swapon': 'swapon {swapfile}', 'swapoff': 'swapoff {swapfile}', 'touch_swap': False, 'grub_update': True, 'swap_percentage': 100, 'swap_mb': 4000}
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Will check if swap is at least: 4000 megabytes
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Create swap and initialize it
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: kicking child process to initiate the setup
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Allocating 4194304000 bytes in /swap-hibinit
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Swap pre-heating is skipped, the swap blocks won't be touched during to ensure they are ready
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Running: mkswap /swap-hibinit
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Running: swapon /swap-hibinit
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Updating the kernel offset for the swapfile: /swap-hibinit
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Updating GRUB to use the device PARTUUID=4986e35b-1bd5-45d3-b528-fa2edb861a38 with offset 4161536 for resume
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: GRUB configuration is updated
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Setting swap device to 66305 with offset 4161536
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Done updating the swap offset. Turning swapoff
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Running: swapoff /swap-hibinit
Sep 27 12:17:46 SparxEA systemd[1]: hibinit-agent.service: Succeeded.
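The log above only covers the hibernation *setup* at boot, which completes in seconds; the long "stopping" phase happens later, when the entire RAM image is written to the swap file on the root EBS volume. A back-of-envelope sketch of that write time (the RAM size and throughput values below are assumptions for illustration, not read from this instance; note that a small gp2 volume that has exhausted its burst credits can drop to much lower sustained throughput, which stretches the dump into tens of minutes):

```shell
# Hibernation writes the whole RAM image to the root EBS volume,
# so time-to-stopped scales with RAM size / volume throughput.
ram_gib=16            # assumption: instance RAM in GiB
throughput_mib_s=125  # assumption: sustained EBS throughput in MiB/s
secs=$(( ram_gib * 1024 / throughput_mib_s ))
echo "~${secs} s just to write the hibernation image"  # ~131 s at these values
```

If the observed stop time is far beyond this estimate, the root volume's burst/baseline throughput is a reasonable first thing to check.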
1 answer · 0 votes · 26 views · asked 3 days ago

What to look at for resolving Nice DCV 404 errors

I've got an EC2 instance set up with NICE DCV. I have opened the port in my security group rules and created a session in NICE DCV. However, whenever I try to connect to the session via the browser, I get an HTTP ERROR 404. I can't find any information in the NICE DCV docs about causes of 404 other than the session resolver, which I'm not using. How can I go about resolving this issue?

Below is the output from `dcv list-sessions -j`:

```
[
  {
    "id" : "cloud9-session",
    "owner" : "ubuntu",
    "num-of-connections" : 0,
    "creation-time" : "2022-09-23T12:58:40.919860Z",
    "last-disconnection-time" : "",
    "licenses" : [
      {
        "product" : "dcv",
        "status" : "licensed",
        "check-timestamp" : "2022-09-23T12:58:42.540422Z",
        "expiration-date" : ""
      },
      {
        "product" : "dcv-gl",
        "status" : "licensed",
        "check-timestamp" : "2022-09-23T12:58:42.540422Z",
        "expiration-date" : ""
      }
    ],
    "licensing-mode" : "EC2",
    "storage-root" : "",
    "type" : "virtual",
    "status" : "running",
    "x11-display" : ":0",
    "x11-authority" : "/run/user/1000/dcv/cloud9-session.xauth",
    "display-layout" : [
      {
        "width" : 800,
        "height" : 600,
        "x" : 0,
        "y" : 0
      }
    ]
  }
]
```

This is the output from `dcv get-config`:

```
[connectivity]
web-use-https = false
web-port = 8080
web-extra-http-headers = [('test-header', 'test-value')]

[security]
authentication = 'none'
```

This is the output from `systemctl status dcvserver`:

```
● dcvserver.service - NICE DCV server daemon
     Loaded: loaded (/lib/systemd/system/dcvserver.service; enabled; vendor preset: enable>
     Active: active (running) since Fri 2022-09-23 12:58:40 UTC; 18min ago
   Main PID: 715 (dcvserver)
      Tasks: 6 (limit: 76196)
     Memory: 39.9M
     CGroup: /system.slice/dcvserver.service
             ├─715 /bin/bash /usr/bin/dcvserver -d --service
             └─724 /usr/lib/x86_64-linux-gnu/dcv/dcvserver --service

Sep 23 12:58:40 ip-10-0-0-115 systemd[1]: Starting NICE DCV server daemon...
Sep 23 12:58:40 ip-10-0-0-115 systemd[1]: Started NICE DCV server daemon.
```

I'm trying to access the page at http://<public ip>:8080. I've also tried including the #session_id part in the URL and using the Windows client, with no luck. My operating system is Ubuntu 20.04 with a custom AMI, running on a g4dn.4xlarge instance.
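Since dcvserver is running and the session exists, a 404 from the web server often comes down to the request URL not matching what the server expects. One thing worth checking is whether `web-url-path` is set in `/etc/dcv/dcv.conf` (it does not appear in the `get-config` output above, so the value below is purely illustrative); if it is set, the browser URL must include that path, and requests to the bare root can 404:

```
[connectivity]
web-use-https = false
web-port = 8080
# If web-url-path is set, the URL must include it, e.g.
# http://<public-ip>:8080/dcv/#cloud9-session
web-url-path = "/dcv"   # illustrative value, not taken from the question
```

It is also worth confirming the browser is not silently upgrading the scheme: with `web-use-https = false`, an `https://` request to port 8080 will fail.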
0 answers · 0 votes · 26 views · asked 7 days ago

Extend the root (ext4) partition that is adjacent to a 2nd swap partition, which in turn is adjacent to a 3rd LVM2 partition containing 2 logical volumes

We need to extend the root (nvme0n1p1) partition, which is adjacent to the 2nd swap (nvme0n1p2) partition, which in turn is adjacent to the 3rd (nvme0n1p3) LVM2 partition. The LVM2 partition holds two logical volumes. One of them is mounted at /home/abc, and that is where the server hosts the web (Django framework) application from (inside a virtual environment | miniconda).

I am able (as verified on a staging instance) to increase the size of the volume on Amazon and extend the filesystem, but only on the 3rd LVM2 partition at the tail end of the (nvme0n1) drive. The first (nvme0n1p1) partition is almost out of space.

After increasing the size of the volume on Amazon EBS, the instructions for growing the partition (the one I want to grow is ext4) (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html) tell you to run:

```
sudo growpart /dev/nvme0n1 1
```

but the response is:

```
NOCHANGE: partition 1 could only be grown by 33 [fudge=2048]
```

After that you're supposed to run:

```
sudo resize2fs /dev/nvme0n1p1
```

but without growing the partition first, you can't resize the first (nvme0n1p1) ext4 partition. I was not able to growpart (or resize2fs) the first partition because it is sandwiched in between the 2nd and 3rd. How do I accomplish this? I have to shift the 2nd swap and 3rd LVM2 partitions over to free up sectors so the first partition can be resized, correct?

Since I wasn't able to grow p1, I applied the same commands to p3, and that worked:

```
sudo growpart /dev/nvme0n1 3
```

I am able to use Logical Volume Manager commands such as:

```
lvextend -L +5G /dev/mapper/vg_abc-app
```

to resize the logical volumes inside the 3rd partition (nvme0n1p3) of the device.
*The information below is from the actual instance I need to work on; it does not reflect an increase in EBS volume size yet.*

Here is the `lsblk -fm` output (each device has two rows, one per column set):

```
NAME                     FSTYPE      LABEL           UUID                                 MOUNTPOINT
NAME                       SIZE OWNER GROUP MODE
loop0                    squashfs                                                         /snap/core/00000
loop0                    114.9M root  disk  brw-rw----
loop2                    squashfs                                                         /snap/amazon-ssm-agent/0000
loop2                     26.7M root  disk  brw-rw----
loop3                    squashfs                                                         /snap/core/00000
loop3                      114M root  disk  brw-rw----
loop4                    squashfs                                                         /snap/core18/0000
loop4                     55.6M root  disk  brw-rw----
loop5                    squashfs                                                         /snap/amazon-ssm-agent/0000
loop5                     25.1M root  disk  brw-rw----
loop6                    squashfs                                                         /snap/core18/0000
loop6                     55.6M root  disk  brw-rw----
nvme0n1
nvme0n1                     50G root  disk  brw-rw----
├─nvme0n1p1              ext4        cloudimg-rootfs 00000000-0000-0000-0000-000000000000 /
├─nvme0n1p1                 20G root  disk  brw-rw----
├─nvme0n1p2              swap                        00000000-0000-0000-0000-000000000000 [SWAP]
├─nvme0n1p2                  2G root  disk  brw-rw----
└─nvme0n1p3              LVM2_member                 00000000-0000-0000-0000-000000000000
└─nvme0n1p3                 28G root  disk  brw-rw----
  ├─vg_abc-logs          xfs                         00000000-0000-0000-0000-000000000000 /var/log
  ├─vg_abc-logs              8G root  disk  brw-rw----
  └─vg_abc-app           xfs                         00000000-0000-0000-0000-000000000000 /home/abc
  └─vg_abc-app              19G root  disk  brw-rw----
nvme1n1                  LVM2_member                 00000000-0000-0000-0000-000000000000
nvme1n1                     50G root  disk  brw-rw----
└─vg_backups-backups     xfs                         00000000-0000-0000-0000-000000000000 /home/abc/Backups-Disk
└─vg_backups-backups        49G root  disk  brw-rw----
```

Here is the output of `df -hT`:

```
Filesystem                     Type      Size  Used Avail Use% Mounted on
udev                           devtmpfs   16G     0   16G   0% /dev
tmpfs                          tmpfs     3.1G  306M  2.8G  10% /run
/dev/nvme0n1p1                 ext4       20G   15G  4.5G  77% /
tmpfs                          tmpfs      16G   40K   16G   1% /dev/shm
tmpfs                          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                          tmpfs      16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/vg_abc-logs        xfs       8.0G  1.6G  6.5G  20% /var/log
/dev/loop2                     squashfs   27M   27M     0 100% /snap/amazon-ssm-agent/0000
/dev/loop5                     squashfs   26M   26M     0 100% /snap/amazon-ssm-agent/0001
/dev/mapper/vg_abc-app         xfs        19G  2.0G   18G  11% /home/abc
/dev/mapper/vg_backups-backups xfs        49G  312M   49G   1% /home/abc/Backups-Disk
/dev/loop3                     squashfs  114M  114M     0 100% /snap/core/00000
/dev/loop4                     squashfs   56M   56M     0 100% /snap/core18/0000
/dev/loop0                     squashfs  115M  115M     0 100% /snap/core/00001
/dev/loop6                     squashfs   56M   56M     0 100% /snap/core18/0001
tmpfs                          tmpfs     3.1G     0  3.1G   0% /run/user/1000
```

Here is the output of `parted -a optimal /dev/nvme0n1 print free`:

```
Model: NVMe Device (nvme)
Disk /dev/nvme0n1: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
        32.3kB  1049kB  1016kB           Free Space
 1      1049kB  21.5GB  21.5GB  primary  ext4            boot
        21.5GB  21.5GB  16.9kB           Free Space
 2      21.5GB  23.6GB  2147MB  primary  linux-swap(v1)
 3      23.6GB  53.7GB  30.1GB  primary  lvm
```

How do I increase the root size of nvme0n1p1 without stopping the EC2 instance? The solutions I found indicate that the entire drive needs to be reformatted and a new partition table made, which would obviously entail stopping the EC2 instance. Please help.

The AWS docs direct you to https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-lvm-diskdruid-auto if you have LVM on Nitro-based instances. However, that link doesn't help me much with what I'm trying to do here.

Also, there is a second device, nvme1n1, that is entirely LVM2/xfs mounted at /home/abc/Backups-Disk (Filesystem: /dev/mapper/vg_backups-backups). Not sure if it makes a difference; as you can see above:

```
nvme1n1                  LVM2_member                 00000000-0000-0000-0000-000000000000
nvme1n1                     50G root  disk  brw-rw----
└─vg_backups-backups     xfs                         00000000-0000-0000-0000-000000000000 /home/abc/Backups-Disk
└─vg_backups-backups        49G root  disk  brw-rw----
```

I don't believe this other device needs to be changed in any way, but please advise if so. Any help is appreciated. Thank you.
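The NOCHANGE message is consistent with the parted output: growpart can only extend a partition into free space that begins immediately after it, and an EBS resize adds its new space at the end of the disk, after p3, not next to p1. The only gap after partition 1 is the tiny sliver before the swap partition. A quick sanity check of the numbers (a sketch, assuming the 512-byte logical sectors parted reports):

```shell
# growpart said partition 1 "could only be grown by 33" sectors.
# With 512-byte sectors that is exactly the ~16.9 kB of free space
# parted shows between the end of p1 and the start of p2.
gap_sectors=33
sector_bytes=512
gap_bytes=$(( gap_sectors * sector_bytes ))
echo "free space after p1: ${gap_bytes} bytes"   # 16896 bytes ≈ 16.9 kB
```

So growing p1 in place genuinely requires moving or recreating p2 and p3 first; there is no adjacent free space for growpart to use.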
1 answer · 0 votes · 38 views · asked 10 days ago