
Questions tagged with Amazon EC2


AuthorizedKeysCommand /usr/share/ec2-instance-connect/eic_run_authorized_keys <username> SHA256:<long hex string> failed, status 22

We use Ubuntu 20.04 (`ami-0c8858c090152d291`) as the basis for a production ecommerce stack, and I need to move users around as part of a handover. To do this I am trying to SSH in to the instance as the original AMI-configured instance user with the AWS-generated key, so I can move the user I normally log in as. This fails with the error in the subject line appearing in `/var/log/auth.log`. I have reconfirmed the keys and the user many times, obviously.

This appears to be related to [AuthorizedKeysCommand fails on Ubuntu 20.04](https://github.com/widdix/aws-ec2-ssh/issues/157), which blames the package `ec2-instance-connect`. We keep instances up to date, so I suspect this package was installed as part of a post-install security update. The GitHub thread linked above suggests:

```
# rm /usr/lib/systemd/system/ssh.service.d/ec2-instance-connect.conf
# systemctl daemon-reload
```

I have tried the above without success. Even after removing `ec2-instance-connect.conf` and issuing either `systemctl daemon-reload` or `kill -s HUP <sshd pid>`, the sshd process is *still* running with the `ec2-instance-connect.conf` settings:

```
sshd: /usr/sbin/sshd -D -o AuthorizedKeysCommand /usr/share/ec2-instance-connect/eic_run_authorized_keys %u %f -o AuthorizedKeysCommandUser ec2-instance-connect [listener] 0 of 10-100 startups
```

For obvious reasons I am reluctant to tinker more extensively with the sshd configuration on a production server without hearing from the community. It seems rather questionable (to put it mildly) for a "security update package" to hijack the normal sshd auth process with no well-publicized information, only for it to come to light when I actually have to work on it. The package listing says

> Configures ssh daemon to accept EC2 Instance Connect ssh keys

but what it fails to add is "... and may disable other keys". We surely cannot be the first ones to encounter this problem?
0 answers · 0 votes · 12 views · asked 3 days ago

I'm getting charged and don't know how to stop it

So, I just started practicing with Terraform and AWS yesterday, together with a coworker. We were doing an exercise from a guide to get familiar with Terraform and AWS, and we did exactly the same thing except for deploying to different regions. We ran `terraform apply` and everything worked great. Later, I discovered that I'm being charged a large amount of money. He is not; his bill is really, really small. Mine:

![Enter image description here](https://repost.aws/media/postImages/original/IMy3H9OLSeRTqJrI0V1nV6Hw)

![Enter image description here](https://repost.aws/media/postImages/original/IMhobkgoQdQH--Pj2Pd-oRPA)

This is hours after I ran `terraform destroy`... I have no idea how to stop this. What is causing it? There is no t2.medium or m4.large in my Terraform code, just two t2.micro instances, a VPC, and EKS. The NAT gateway is gone, but the bill says it's still in use and charging me hourly. What are the four services that are active? It also says two regions are active; I know eu-west-2 (London) is one, but I'm really not sure what the other one is or how to find out.

I'm pretty much new to this, so I hope someone can help me. I already know it's going to spend the whole night charging me for no reason, and that feels really bad. Also, if I were to delete my account (not sure how), everything should stop 100%, right?

**Edit 1:** It's going up... I don't know what to do.

![Enter image description here](https://repost.aws/media/postImages/original/IMucVoGiYPRVqpv2FnCD4_yw)
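Since the question is which resources in which regions are still billing, a per-region sweep is one way to check. Below is a minimal sketch assuming boto3; the function name and the client-factory wiring are illustrative, and the region list and client are passed in so the scan logic itself runs without AWS credentials. Note that NAT gateways, EBS volumes, and Elastic IPs bill independently of instances, so an instance-only scan is a starting point, not a full answer.

```python
# Sketch: list every non-terminated EC2 instance per region, so that a
# mystery "second active region" can be tracked down after a destroy.
def find_active_instances(regions, make_ec2_client):
    active = {}
    for region in regions:
        ec2 = make_ec2_client(region)
        resp = ec2.describe_instances(
            Filters=[{"Name": "instance-state-name",
                      "Values": ["pending", "running", "stopping", "stopped"]}])
        ids = [inst["InstanceId"]
               for res in resp["Reservations"]
               for inst in res["Instances"]]
        if ids:
            active[region] = ids
    return active

# With real credentials this would be wired up roughly as:
#   import boto3
#   regions = [r["RegionName"] for r in
#              boto3.client("ec2").describe_regions()["Regions"]]
#   find_active_instances(regions,
#                         lambda r: boto3.client("ec2", region_name=r))
```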
1 answer · 0 votes · 47 views · asked 4 days ago

Error loading patching payloadfailed to run commands: exit status 156

I'm trying to automate patching on Ubuntu EC2 instances with Patch Manager, and I'm getting this error while trying to execute the command document "AWS-RunPatchBaseline": `Error loading patching payloadfailed to run commands: exit status 156`

Error log:

```
/usr/bin/python3
/usr/bin/python
/usr/bin/apt-get
Reading package lists...
Building dependency tree...
Reading state information...
python3-apt is already the newest version (2.3.0ubuntu2.1).
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
Using python binary: 'python'
Using Python Version: Python 3.10.4
/usr/bin/curl
/usr/bin/wget
08/02/2022 04:25:05 root [INFO]: Downloading payload from https://s3.dualstack.ap-southeast-2.amazonaws.com/aws-ssm-ap-southeast-2/patchbaselineoperations/linux/payloads/patch-baseline-operations-1.90.tar.gz
08/02/2022 04:25:06 root [INFO]: Attempting to import entrance file os_selector
08/02/2022 04:25:06 root [ERROR]: Error loading entrance module.
Traceback (most recent call last):
  File "/var/log/amazon/ssm/patch-baseline-operations/common_startup_entrance.py", line 164, in execute
    entrance_module = __import__(module_name)
  File "/var/log/amazon/ssm/patch-baseline-operations/os_selector.py", line 11, in <module>
    import common_os_selector_methods
  File "/var/log/amazon/ssm/patch-baseline-operations/common_os_selector_methods.py", line 11, in <module>
    from patch_common.baseline_override import load_baseline_override
  File "/var/log/amazon/ssm/patch-baseline-operations/patch_common/baseline_override.py", line 6, in <module>
    from patch_common.downloader import download_file, load_json_file, is_access_denied
  File "/var/log/amazon/ssm/patch-baseline-operations/patch_common/downloader.py", line 1, in <module>
    import boto3
  File "/var/log/amazon/ssm/patch-baseline-operations/boto3/__init__.py", line 16, in <module>
    from boto3.session import Session
  File "/var/log/amazon/ssm/patch-baseline-operations/boto3/session.py", line 17, in <module>
    import botocore.session
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/session.py", line 29, in <module>
    import botocore.configloader
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/configloader.py", line 19, in <module>
    from botocore.compat import six
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/compat.py", line 25, in <module>
    from botocore.exceptions import MD5UnavailableError
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/exceptions.py", line 15, in <module>
    from botocore.vendored import requests
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/vendored/requests/__init__.py", line 58, in <module>
    from . import utils
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/vendored/requests/utils.py", line 26, in <module>
    from .compat import parse_http_list as _parse_list_header
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/vendored/requests/compat.py", line 7, in <module>
    from .packages import chardet
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/vendored/requests/packages/__init__.py", line 3, in <module>
    from . import urllib3
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/vendored/requests/packages/urllib3/__init__.py", line 10, in <module>
    from .connectionpool import (
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/vendored/requests/packages/urllib3/connectionpool.py", line 38, in <module>
    from .response import HTTPResponse
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/vendored/requests/packages/urllib3/response.py", line 9, in <module>
    from ._collections import HTTPHeaderDict
  File "/var/log/amazon/ssm/patch-baseline-operations/botocore/vendored/requests/packages/urllib3/_collections.py", line 1, in <module>
    from collections import Mapping, MutableMapping
ImportError: cannot import name 'Mapping' from 'collections' (/usr/lib/python3.10/collections/__init__.py)
08/02/2022 04:25:06 root [ERROR]: cannot import name 'Mapping' from 'collections' (/usr/lib/python3.10/collections/__init__.py)
[the identical traceback is then logged a second time, ending in the same ImportError]
```

Could someone help me with this one?

Instance details:

```
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
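The last frame of the traceback points at the root cause: the payload ships a vendored urllib3 that still imports the container ABCs from `collections`, a spelling Python 3.10 (Ubuntu 22.04's default) no longer accepts. A small self-contained repro of the import behavior:

```python
# Python 3.3 moved the container ABCs to collections.abc, and 3.10 removed
# the old aliases from the 'collections' namespace entirely. The vendored
# urllib3 inside the patch payload still uses the pre-3.3 spelling, which
# is exactly the ImportError in the log above.
import sys

try:
    from collections import Mapping      # fails on Python >= 3.10
    legacy_ok = True
except ImportError:
    legacy_ok = False

from collections.abc import Mapping      # the modern location always works

print("legacy import succeeded:", legacy_ok)
print("dict is a Mapping:", isinstance({}, Mapping))
```

Since the payload is downloaded fresh by SSM on each run, the practical fix is presumably on the payload/agent side (a payload version that supports Python 3.10) rather than hand-patching the vendored files on the instance.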
1 answer · 0 votes · 26 views · asked 6 days ago

Client.UnsupportedHostConfiguration when trying to allocate mac1.metal instance

I would like to allocate a dedicated mac1.metal host on my account. The service quota says I can allocate up to three, but the request fails. I tried multiple AZs and Regions with the same result:

```
➜ ~ aws service-quotas list-service-quotas --service-code ec2 --query "Quotas[?QuotaName == 'Running Dedicated mac1 Hosts']"
[
    {
        "ServiceCode": "ec2",
        "ServiceName": "Amazon Elastic Compute Cloud (Amazon EC2)",
        "QuotaArn": "arn:aws:servicequotas:eu-central-1:40....46:ec2/L-A8448DC5",
        "QuotaCode": "L-A8448DC5",
        "QuotaName": "Running Dedicated mac1 Hosts",
        "Value": 3.0,
        "Unit": "None",
        "Adjustable": true,
        "GlobalQuota": false
    }
]
```

Here is the relevant CloudTrail log for the detail:

```
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "AI....62O",
        "arn": "arn:aws:iam::40....46:user/sst",
        "accountId": "40.....46",
        "accessKeyId": "AS.....35D",
        "userName": "sst",
        "sessionContext": {
            "sessionIssuer": {},
            "webIdFederationData": {},
            "attributes": {
                "creationDate": "2022-07-30T11:19:43Z",
                "mfaAuthenticated": "true"
            }
        }
    },
    "eventTime": "2022-07-30T11:21:24Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "AllocateHosts",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "AWS Internal",
    "userAgent": "AWS Internal",
    "errorCode": "Client.UnsupportedHostConfiguration",
    "errorMessage": "The requested configuration is currently not supported. Please check the documentation for supported configurations.",
    "requestParameters": {
        "AllocateHostsRequest": {
            "HostRecovery": "off",
            "AutoPlacement": "off",
            "AvailabilityZone": "us-east-1f",
            "Quantity": 1,
            "InstanceType": "mac1.metal",
            "TagSpecification": {
                "ResourceType": "dedicated-host",
                "tag": 1,
                "Tag": {
                    "Value": "xxx Demo",
                    "tag": 1,
                    "Key": "Name"
                }
            }
        }
    },
    "responseElements": null,
    "requestID": "2506eca6-fb62-4354-98f5-8e536e623f1b",
    "eventID": "1f9ae005-dadc-445d-b6a1-9b2b21ce0073",
    "readOnly": false,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "40.......46",
    "eventCategory": "Management",
    "sessionCredentialFromConsole": "true"
}
```
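With the quota in place, the error is worth cross-checking against zone availability: mac1.metal capacity exists only in a subset of Availability Zones, so a request in a zone with no Mac capacity can fail regardless of quota. A sketch assuming boto3 (the client is injected so the filtering logic can be exercised offline):

```python
# describe_instance_type_offerings with LocationType="availability-zone"
# returns the zones where a given instance type is actually offered;
# restricting AllocateHosts to one of these avoids asking an unsupported AZ.
def zones_offering(ec2, instance_type):
    resp = ec2.describe_instance_type_offerings(
        LocationType="availability-zone",
        Filters=[{"Name": "instance-type", "Values": [instance_type]}])
    return sorted(o["Location"] for o in resp["InstanceTypeOfferings"])
```

The CLI equivalent is `aws ec2 describe-instance-type-offerings --location-type availability-zone --filters Name=instance-type,Values=mac1.metal --region us-east-1`. Note the returned zone names are account-specific mappings, so comparing zone IDs across accounts requires `describe-availability-zones`.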
0 answers · 0 votes · 3 views · asked 7 days ago

Can't install any packages on Amazon Linux 2 instance

I have launched an EC2 instance (Amazon Linux 2) but I am not able to install any packages. I get the errors and instructions below every time. The image is attached.

![Enter image description here](https://repost.aws/media/postImages/original/IM1jSEyvbYQRag5x_05rUiNQ)

**One of the configured repositories failed (Unknown)**, and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this:

1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled: `yum --disablerepo=<repoid> ...`
4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: `yum-config-manager --disable <repoid>` or `subscription-manager repos --disable=<repoid>`
5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo when it runs most commands, so it will have to try and fail each time (and thus yum will be much slower). If it is a very temporary problem though, this is often a nice compromise: `yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true`
1 answer · 0 votes · 29 views · asked 11 days ago

Cannot load my FPGA Image

Hello,

Several years ago I developed an FPGA accelerator for F1 instances and set it as public. Now I want to use it again, but I cannot load it. In short: it is my image, it is set as public, and it matches the shell on the AWS instance, yet loading fails. The details are below. Can you please help me resolve the issue and load the image?

Kind Regards,
Furkan

___

These are the details of my image, with `afi-0da97a1d59bf1e558` and `agfi-05bfb2806dd7970d2`. I am the owner with the correct Owner ID, and the image is set as public.

```
[centos@ip-172-31-92-65 xdma]$ aws ec2 describe-fpga-images --fpga-image-ids afi-0da97a1d59bf1e558
{
    "FpgaImages": [
        {
            "UpdateTime": "2019-12-06T13:58:03.000Z",
            "Name": "he_v1_5_afi",
            "Tags": [],
            "PciId": {
                "SubsystemVendorId": "0xfedd",
                "VendorId": "0x1d0f",
                "DeviceId": "0xf000",
                "SubsystemId": "0x1d51"
            },
            "DataRetentionSupport": false,
            "FpgaImageGlobalId": "agfi-05bfb2806dd7970d2",
            "State": {
                "Code": "available"
            },
            "ShellVersion": "0x04261818",
            "OwnerId": "210929643974",
            "FpgaImageId": "afi-0da97a1d59bf1e558",
            "Public": true,
            "Description": "he_v1_5_description"
        }
    ]
}
```

___

Here is the slot information. You can see that shell version `0x04261818` matches the accelerator information above:

```
[centos@ip-172-31-92-65 xdma]$ sudo fpga-describe-local-image -S 0 -H
Type       FpgaImageSlot  FpgaImageId  StatusName  StatusCode  ErrorName  ErrorCode  ShVersion
AFI        0              none         cleared     1           ok         0          0x04261818
Type       FpgaImageSlot  VendorId     DeviceId    DBDF
AFIDEVICE  0              0x1d0f       0x1042      0000:00:1d.0
```

___

Now, if I try to load it, I receive this error:

```
[centos@ip-172-31-92-65 xdma]$ sudo fpga-load-local-image -S 0 -I agfi-05bfb2806dd7970d2
Error: (5) invalid-afi-id
The agfi id passed is invalid or you do not have permission to load the AFI.
```
1 answer · 0 votes · 21 views · asked 12 days ago

User Data script not downloading file(s) from S3

I have been trying for days to get a user data script for a Windows instance to copy files from S3. At first I was trying to use `aws s3 sync` to copy a large number of files; since that wouldn't work, I zipped the files and am now trying to copy just that one zipped file. I am attempting the copy with both a `<script>` command and a `<powershell>` command. Since other commands work in both blocks, I know the user data script is formatted correctly and executes at launch, but this one file-copy command using the AWS CLI simply does not work.

It is also worth noting that I download and install the AWS CLI first, and that the download/install works from either the script block or the powershell block. I also associate an IAM role with sufficient permissions at launch time via an instance profile. I know both work (CLI and role) because I can manually execute the copy command as soon as I log on to the instance; it just doesn't perform the copy from the user data script.

Here's the (cleansed) script block I'm running:

```
<script>
c:\windows\system32\msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi /qn
aws s3 cp s3://<bucket>/<path/key>/Installers.zip C:\Temp\Installers.zip
</script>
```

Here's the (cleansed) powershell block I'm running:

```
<powershell>
aws s3 cp s3://<bucket>/<path/key>/Installers.zip C:\Temp\Installers.zip
</powershell>
```

Obviously it's the exact same CLI command, structured the same for both script and powershell, but again I cannot get it to execute the file copy from either block, even though the installation of the AWS CLI works fine from either. After pulling my hair out for days trying to figure this out, searching the Internet and the AWS documentation without finding any possible solutions, I'm posting here to try to get it figured out. Thank you in advance for any assistance.

More info: in the err.tmp file for the batch `<script>` block I can see "'#aws' is not recognized as an internal or external command, operable program or batch file." And in the `<powershell>` err.tmp file: "aws : The term 'aws' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again."

So it appears the AWS CLI is not installed by the time I attempt to copy files from S3 to the instance, despite the CLI being installed on the first line of the batch portion of the user data script. I've even added `timeout 90` right after the install line to pause for 1.5 minutes and give the CLI plenty of time to install before the first copy attempt (in the batch block); the second copy attempt is on the 27th line of the script, in the powershell portion, long after several other commands complete successfully. Again, thank you in advance for any assistance with this issue.
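The two "not recognized" errors are consistent with a PATH problem rather than a missing install: the MSI appends the CLI's directory to the machine-wide PATH, but an already-running user-data process keeps the environment it started with, so `aws` never resolves in that same session no matter how long you wait. The lookup behavior is easy to demonstrate locally (the executable name below is a hypothetical stand-in, and the demo uses a POSIX shebang purely for illustration):

```python
# which() only consults the PATH string it is given - a directory added to
# the system PATH after a process starts is invisible to that process.
import os
import shutil
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    name = "demo-cli-tool"                 # stand-in for a freshly installed aws.exe
    tool = os.path.join(d, name)
    with open(tool, "w") as f:
        f.write("#!/bin/sh\necho ok\n")
    os.chmod(tool, os.stat(tool).st_mode | stat.S_IEXEC)

    stale_path = os.defpath                # the PATH the process inherited at start
    found_before = shutil.which(name, path=stale_path)
    found_after = shutil.which(name, path=stale_path + os.pathsep + d)
    print(found_before, found_after)       # not found, then found once PATH includes d
```

The usual workaround is to invoke the CLI by its full install path in the same user-data run (the V2 MSI's default location is typically `"C:\Program Files\Amazon\AWSCLIV2\aws.exe"`), or to split the install and the copy across two boots so the copy runs with a fresh environment.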
1 answer · 0 votes · 22 views · asked 13 days ago

Problem Setting up EC2 as Airgap Server with Client VPN Endpoint

Afternoon all,

I'm a (very) inexperienced user who's keen to learn, and I appreciate I might have bitten off far more than I can chew with this. I'm working on a project where we need to share UDP packets between two companies, with the packets going in both directions. I want to set up an airgap server where the exchange of data can take place.

I have an EC2 server with an external IP address (that I SSH into) as the airgap machine, and a Client VPN endpoint linked to the subnet the EC2 instance is in. My intent is to send UDP packets from my company's system to the airgap on a particular port, say 3005, and to listen on a different port, say 4005, on the same EC2 instance for UDP packets from the other company, then use socat to forward packets from 4005 to the client IP on my Windows machine (currently set in the endpoint to 16.10.0.0/16; yes, I know the subnet is probably far too big for this).

I have successfully created the Client VPN endpoint, downloaded the configuration file, and can connect from my Windows 10 laptop using the OpenVPN client. I can send packets from the Windows 10 machine to the airgap EC2 instance and see them arrive on port 3005 as expected using tcpdump. I can also ping from the Windows machine to the airgap server, so the connection works in one direction.

The issue is that the connection does not work when sending packets from the airgap EC2 instance to my machine via the VPN. If I run socat with various combinations of udp-recvfrom or udp-listen and udp-sendto or udp-datagram, no packets arrive at my Windows machine. Neither can I ping the Windows machine from the EC2 airgap instance (I have tried this with the Windows Firewall turned off to test whether the firewall was getting in the way).

My questions then:

1. Is it possible to do what I want?
2. What am I doing wrong, and how can I fix it?
3. Is my assumption correct that an EC2 instance is a good way of setting up an airgap server like this?

Many thanks,
G
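Before debugging socat flags, it can help to prove plain UDP reachability in each direction with something minimal. The listen-then-send pattern described looks like this on loopback; running the two roles split across the EC2 instance and the VPN client (with real addresses and the 3005/4005 ports, which are illustrative here) isolates routing and security group problems from socat usage:

```python
# Loopback self-test of the UDP relay pattern: one socket in the listening
# ('4005') role, one in the sending ('3005') role.
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.settimeout(5)
port = listener.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"probe", ("127.0.0.1", port))

data, addr = listener.recvfrom(1024)
print(data)                              # b'probe'
listener.close()
sender.close()
```

If the probe arrives one way but not the other between the real hosts, the problem is in the path (security group egress, the Client VPN route back to the client CIDR, or the client firewall) rather than in the relay tooling.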
0 answers · 0 votes · 13 views · asked 13 days ago

Automatically reboot EC2 Linux servers in a target group when an OS update requires a reboot

We have some Ubuntu instances that are registered targets of target groups behind an Elastic Load Balancer. Those servers also use the "unattended-upgrades" package to install security-relevant packages. Some newly installed packages require the server to be rebooted, in which case it sends an email to our system engineers to let them know.

So, in order to reboot those instances, they need to be deregistered from their target group, rebooted, and registered with the target group again. The target groups have redundant targets, so one missing target is okay for the time it takes to become functional again.

Now, my actual question: can this easily be automated, or is there some lightweight solution available? If possible I would like to avoid a full-blown fleet management system; however, I can see how this can get complicated fast, and still thought to ask.

My first thought was some AWS CLI scripting that deregisters the instance from the target group and registers it again after the reboot succeeds, provided enough other targets are available to cover for a few minutes. Alternatively, the instance could shut down and let an Auto Scaling group boot a new instance; however, that new instance would then need to be updated from the base image first as well.

Any idea where or what to look for?

Thanks,
M
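The scripted cycle can stay lightweight: deregister, wait for draining, reboot, re-register, wait for health. A minimal sketch assuming boto3 (the target group ARN and instance ID are placeholders, and the clients are injected so the sequencing can be exercised without AWS access):

```python
# Deregister from the target group, wait for connection draining to finish,
# reboot, then register again and wait until the target is healthy. boto3's
# elbv2 client provides 'target_deregistered' and 'target_in_service'
# waiters for the two wait steps.
def rolling_reboot(elbv2, ec2, target_group_arn, instance_id):
    target = [{"Id": instance_id}]
    elbv2.deregister_targets(TargetGroupArn=target_group_arn, Targets=target)
    elbv2.get_waiter("target_deregistered").wait(
        TargetGroupArn=target_group_arn, Targets=target)
    ec2.reboot_instances(InstanceIds=[instance_id])
    elbv2.register_targets(TargetGroupArn=target_group_arn, Targets=target)
    elbv2.get_waiter("target_in_service").wait(
        TargetGroupArn=target_group_arn, Targets=target)
```

Running this from the instance itself would split the script across the reboot (the re-register half has to run at boot), so driving it from a small external scheduler, for example an SSM document or a cron host, keeps the whole sequence in one place.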
1 answer · 0 votes · 22 views · asked 20 days ago