Is there a way for users in China to access my S3, EC2, and API Gateway endpoints?
We have an extensive network of AWS services supporting our application, but users in China currently cannot reliably access the browser-facing endpoints of our service (on EC2, S3, and API Gateway). Some days they can; some days they can't. We are currently set up in 13 AWS regions and usually recommend that users in China choose either our Singapore or Tokyo region, but that isn't working out.

We have investigated setting up our services in an AWS China region, but that seems to mean maintaining an entirely parallel network of AWS services, and we don't really have the business in China to justify that much effort and cost. Our clients are schools and their users are students, so asking them to use a VPN is not really a solution.

So my question: is there a way for users in China to access our AWS services without us setting up parallel AWS infrastructure in China? Have others done this? Is there an AWS region that always works for users in China?
VMware to AWS Linux-Based Instance
I have a VMware server which I exported to OVF format. On import I get this error: "ClientError: Multiple different grub/menu.lst files found." I have also tried importing the server's drive directly and I get the same "ClientError: Multiple different grub/menu.lst files found." Any ideas would be appreciated.
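The error suggests VM Import found more than one GRUB configuration on the disk. A quick way to see what it is tripping over is to search the source VM for boot config files before exporting; this is only a diagnostic sketch (which copies are stale depends on your system):

```shell
# On the source VM, before export: list every GRUB config file on the
# root disk (VM Import fails when it finds more than one candidate).
sudo find / -xdev \( -name 'menu.lst' -o -name 'grub.cfg' -o -name 'grub.conf' \) 2>/dev/null
```

If the search turns up leftovers (e.g. copies in an old backup directory or a second mounted disk), consolidating down to a single active config before re-exporting is one possible approach.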
Extend the root (ext4) partition sandwiched between a 2nd swap partition and a third LVM2 partition that holds two logical volumes
We need to extend the root (nvme0n1p1) partition, which sits between the 2nd swap (nvme0n1p2) partition and the third (nvme0n1p3) LVM2 partition. The LVM2 partition holds a volume group with two logical volumes; one of them is mounted at /home/abc, which is where the server hosts the web application (Django framework, inside a miniconda virtual environment).

As verified on a staging instance, I am able to increase the size of the volume in Amazon EBS and extend the filesystem, but only on the 3rd LVM2 partition at the tail end of the (nvme0n1) drive. The first (nvme0n1p1) partition is almost out of space.

After increasing the size of the EBS volume, the instructions for growing the partition (the one I want to grow is ext4) (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html) tell you to run:

```
sudo growpart /dev/nvme0n1 1
```

but the response is:

```
NOCHANGE: partition 1 could only be grown by 33 [fudge=2048]
```

After that you're supposed to run:

```
sudo resize2fs /dev/nvme0n1p1
```

but without growing the partition first, you can't resize the first (nvme0n1p1) ext4 filesystem. I was not able to growpart (or resize2fs) the first partition because it is sandwiched in between the 2nd and 3rd. How do I accomplish this? I would have to shift the 2nd swap and 3rd LVM2 partitions over to free up sectors so that the first partition can be resized, correct?

Since I wasn't able to grow p1, I applied the same commands to p3, and that worked:

```
sudo growpart /dev/nvme0n1 3
```

I am also able to use Logical Volume Manager commands such as:

```
lvextend -L +5G /dev/mapper/vg_abc-app
```

to resize the logical volumes inside the 3rd partition (nvme0n1p3) of the device.
*The information below is from the actual instance I need to work on; it does not yet reflect an increase in EBS volume size.*

Here is the `lsblk -fm` output:

```
NAME                  FSTYPE      LABEL           UUID                                 MOUNTPOINT                  SIZE   OWNER GROUP MODE
loop0                 squashfs                                                         /snap/core/00000            114.9M root  disk  brw-rw----
loop2                 squashfs                                                         /snap/amazon-ssm-agent/0000 26.7M  root  disk  brw-rw----
loop3                 squashfs                                                         /snap/core/00000            114M   root  disk  brw-rw----
loop4                 squashfs                                                         /snap/core18/0000           55.6M  root  disk  brw-rw----
loop5                 squashfs                                                         /snap/amazon-ssm-agent/0000 25.1M  root  disk  brw-rw----
loop6                 squashfs                                                         /snap/core18/0000           55.6M  root  disk  brw-rw----
nvme0n1                                                                                                           50G    root  disk  brw-rw----
├─nvme0n1p1           ext4        cloudimg-rootfs 00000000-0000-0000-0000-000000000000 /                           20G    root  disk  brw-rw----
├─nvme0n1p2           swap                        00000000-0000-0000-0000-000000000000 [SWAP]                      2G     root  disk  brw-rw----
└─nvme0n1p3           LVM2_member                 00000000-0000-0000-0000-000000000000                             28G    root  disk  brw-rw----
  ├─vg_abc-logs       xfs                         00000000-0000-0000-0000-000000000000 /var/log                    8G     root  disk  brw-rw----
  └─vg_abc-app        xfs                         00000000-0000-0000-0000-000000000000 /home/abc                   19G    root  disk  brw-rw----
nvme1n1               LVM2_member                 00000000-0000-0000-0000-000000000000                             50G    root  disk  brw-rw----
└─vg_backups-backups  xfs                         00000000-0000-0000-0000-000000000000 /home/abc/Backups-Disk      49G    root  disk  brw-rw----
```

Here is the output of `df -hT`:

```
Filesystem                     Type      Size  Used Avail Use% Mounted on
udev                           devtmpfs   16G     0   16G   0% /dev
tmpfs                          tmpfs     3.1G  306M  2.8G  10% /run
/dev/nvme0n1p1                 ext4       20G   15G  4.5G  77% /
tmpfs                          tmpfs      16G   40K   16G   1% /dev/shm
tmpfs                          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                          tmpfs      16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/vg_abc-logs        xfs       8.0G  1.6G  6.5G  20% /var/log
/dev/loop2                     squashfs   27M   27M     0 100% /snap/amazon-ssm-agent/0000
/dev/loop5                     squashfs   26M   26M     0 100% /snap/amazon-ssm-agent/0001
/dev/mapper/vg_abc-app         xfs        19G  2.0G   18G  11% /home/abc
/dev/mapper/vg_backups-backups xfs        49G  312M   49G   1% /home/abc/Backups-Disk
/dev/loop3                     squashfs  114M  114M     0 100% /snap/core/00000
/dev/loop4                     squashfs   56M   56M     0 100% /snap/core18/0000
/dev/loop0                     squashfs  115M  115M     0 100% /snap/core/00001
/dev/loop6                     squashfs   56M   56M     0 100% /snap/core18/0001
tmpfs                          tmpfs     3.1G     0  3.1G   0% /run/user/1000
```

Here is the output of `parted -a optimal /dev/nvme0n1 print free`:

```
Model: NVMe Device (nvme)
Disk /dev/nvme0n1: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
        32.3kB  1049kB  1016kB           Free Space
 1      1049kB  21.5GB  21.5GB  primary  ext4            boot
        21.5GB  21.5GB  16.9kB           Free Space
 2      21.5GB  23.6GB  2147MB  primary  linux-swap(v1)
 3      23.6GB  53.7GB  30.1GB  primary  lvm
```

How do I increase the size of the root partition (nvme0n1p1) without stopping the EC2 instance? The solutions I have found indicate that the entire drive needs to be reformatted and a new partition table created, which would obviously entail stopping the EC2 instance. Please help.

The AWS docs direct you to https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-lvm-diskdruid-auto if you have LVM on Nitro-based instances, but that link doesn't help me much with what I'm trying to do here.

Also, there is a second device, nvme1n1, that is entirely LVM2/xfs and mounted at /home/abc/Backups-Disk (filesystem: /dev/mapper/vg_backups-backups). Not sure if it makes a difference; as you can see above:

```
nvme1n1               LVM2_member  00000000-0000-0000-0000-000000000000                         50G  root disk brw-rw----
└─vg_backups-backups  xfs          00000000-0000-0000-0000-000000000000 /home/abc/Backups-Disk  49G  root disk brw-rw----
```

I don't believe this other device needs to be changed in any way, but please advise if so. Any help is appreciated. Thank you.
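For reference, the path that does work online (growing the tail-end LVM partition) can be sketched as a full sequence from the commands above. This is a sketch assuming the xfs logical volumes shown in the `lsblk` output (xfs is grown with `xfs_growfs`, not `resize2fs`):

```shell
# Grow the last partition into the newly added EBS space (online)
sudo growpart /dev/nvme0n1 3
# Tell LVM that the physical volume underneath it got bigger
sudo pvresize /dev/nvme0n1p3
# Extend the logical volume, then grow the mounted xfs filesystem
sudo lvextend -L +5G /dev/mapper/vg_abc-app
sudo xfs_growfs /home/abc
```

None of these steps require a reboot; the limitation described in the question applies only to the first partition, which cannot grow past the swap partition that begins immediately after it.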
Amazon Linux 2022 package availability seems really lacking
Hello, I am trying out the AL2022 RC, and one thing that strikes me as odd is that many packages have been removed. For example, mlocate and libreswan have been removed and are nowhere to be found. This seems very odd, as those packages are available by default on Ubuntu 22.04, RHEL 8, etc., but are a real pain to install on Amazon Linux 2022. The only way I found to add them was downloading the RPM manually, but this would cause a lot of headache to maintain across a multitude of production servers. Are there additional repositories with extra packages available for AL2022 that I might have missed? Is there a chance the list of removed packages could be reconsidered? Thanks a lot for your help, Xavier
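To rule out the case where a package exists but lives in a repository that isn't enabled, a few generic dnf queries help; this is only a sketch using the package names from the question:

```shell
# Show every repository the instance knows about, enabled or not
dnf repolist --all
# Search by package name
dnf search mlocate
# Search by a file the package would provide (catches renamed packages)
dnf provides '*/updatedb'
```

If none of these find the package, it is genuinely absent from the distribution's repositories rather than merely hidden in a disabled one.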
Password Change in RDP
I have tried changing my Windows RDP password after being prompted to change it via Ctrl+Alt+Del. My problem is that the default password is too long to memorize and there is no copy-and-paste option. Is there an easy way (or a video) showing how to change the VPS password from my AWS dashboard account? Thanks
Hasura server running on EC2 shows 502 Bad Gateway errors
https://prnt.sc/XIVLCkyDETq8 On the front end, when we try to fetch data using the API, we see a CORS error, but it actually appears to be because the Hasura server is not reachable on that EC2 instance. I tried rebooting the instance, but it is still not working, even though the instance itself is running properly. Please let me know how we can figure this out.
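A 502 from the proxy usually means nothing is answering on the upstream port. A few checks on the instance itself can confirm whether Hasura is actually listening; this is a hedged sketch assuming Hasura runs in Docker on its default port 8080 (adjust both to your setup):

```shell
# Is the Hasura container actually running?
sudo docker ps
# Is anything listening on the port the proxy forwards to?
sudo ss -tlnp | grep 8080
# Does Hasura's health endpoint answer locally?
curl -s http://localhost:8080/healthz
```

If the container is down or the port is closed, the CORS error in the browser is just a side effect of the failed upstream response.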
Uncaught RedisClusterException: Can't communicate with any node in the cluster in /home/cloudpanel/public_html/RedisCache.php
Hello everyone, we are getting this error: "Uncaught RedisClusterException: Can't communicate with any node in the cluster". We are using an ElastiCache Redis cluster with 2 nodes and cluster mode enabled, via this PHP library: https://github.com/cheprasov/php-redis-client. The error occurs randomly on our websites, and at the time of the error the ElastiCache load was normal. Is there any way to troubleshoot this issue?
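One way to narrow this down is to test cluster connectivity from the web server itself, outside of PHP, around the time the error occurs. A sketch assuming redis-cli is installed; the endpoint below is a placeholder for your actual ElastiCache configuration endpoint:

```shell
# Placeholder: substitute your cluster's configuration endpoint
ENDPOINT=my-cluster.xxxxxx.clustercfg.apse1.cache.amazonaws.com
# -c follows cluster-mode redirects between nodes
redis-cli -c -h "$ENDPOINT" -p 6379 ping
redis-cli -c -h "$ENDPOINT" -p 6379 cluster info
```

If these intermittently fail too, the problem is network-level (security groups, DNS, timeouts) rather than the PHP library; if they always succeed, the library's node discovery or timeout settings are the more likely suspect.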
MATE Desktop won’t open Chromium
I launched an EC2 instance based on the "Amazon Linux 2 with .NET 6, PowerShell, Mono, and MATE Desktop Environment" AMI on a t3.large instance and successfully accessed the MATE Desktop environment through my RDP client. Once at the desktop, I clicked the button in the lower left-hand corner and navigated the menu to open Chromium. I clicked the Chromium entry, but Chromium didn't launch. I can open other desktop applications from the menu, such as Caja and MATE Font Viewer, but Chromium won't open. The Applications menu does close after I click the entry, just like with the other apps, but the other apps open and Chromium doesn't. Am I doing something wrong?
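Launching Chromium from a terminal instead of the menu usually surfaces whatever error the menu launcher silently swallows. This is a sketch assuming the binary is named chromium-browser; check the actual name first:

```shell
# Find which Chromium binary (if any) the menu entry points at
which chromium-browser chromium 2>/dev/null
# Launch from a MATE terminal so any startup error prints to the console
chromium-browser
```

A crash message or missing-library error printed here would narrow down why the menu click appears to do nothing.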
Private Instance and Public ELB HTTPS Problem.
My VPC structure looks like this:

- VPC: 1
- Public subnets: 2 (1 public instance in each subnet)
- Public ELB: 1 (public ELB for the public instances above)
- Public subnet: 1 (for a NAT gateway)
- Private subnet: 1 (1 private instance)

Here, the private instance should connect to the public ELB. HTTPS communication should be established between the private instance and the public instances behind the ELB, but HTTPS communication is not possible because the public ELB is playing an intermediary role. How can I solve this problem? Or is the structure wrong?
How to calculate the number of hours covered by EC2 Instance Savings Plans
I am using EC2 Instance Savings Plans for 6 instances. In the bill, I see On Demand Linux Instance Hours (hrs) and usage covered by EC2 Instance Savings Plans (hrs). I wonder how I can calculate the usage covered by the SP. For instance (in August):

- Commitment: $0.42400/hour
- $0.125 per On Demand Linux m4.large Instance Hour: 4,464.000 Hrs (= 6 x 24 x 31)
- m4.large Linux instance usage in ap-southeast-1 covered by EC2 Instance Savings Plans: **4,393.599 Hrs**
- SP rate: $0.07400
- On-Demand rate: $0.12500
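As a rough sanity check, the instance-hours a Savings Plan can cover in a month is the hourly commitment divided by the SP rate, times the hours in the month. Using the figures above (a sketch assuming the full commitment is applied at this single SP rate; the billed figure can differ when rates or the commitment change mid-month, which may explain the gap):

```shell
# $0.424/hr commitment / $0.074 SP rate = instance-hours covered per
# clock hour; multiply by 744 hours (31 days) for the monthly total.
awk 'BEGIN { printf "covered hours ~ %.1f\n", 0.424 / 0.074 * 744 }'
```

That yields roughly 4,262.9 hrs, in the same ballpark as the billed 4,393.599 hrs; the remainder of the 4,464 total instance-hours is then charged at the $0.125 on-demand rate.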