Amazon Linux 2022 (AL2022): Is it possible to upgrade from one Release Candidate (RC) to the next?
First things first: AL2022 is a joy to use on T4G EC2 instances. Is the upgrade from one RC to the next implemented yet? I have the August AL2022 release installed and want to upgrade to the September one:

```
# dnf update --releasever=2022.0.20220928.0
Amazon Linux 2022 repository.
Errors during downloading metadata for repository 'amazonlinux':
  - Status code: 403 for https://al2022-repos-XXX.amazonaws.com/core/mirrors/2022.0.20220928.0/aarch64/mirror.list
  - Status code: 403 for https://al2022-repos-XXX.amazonaws.com/core/mirrors/2022.0.20220928.0/aarch64/mirror.list
Error: Failed to download metadata for repo 'amazonlinux': Cannot prepare internal mirrorlist: Status code: 403 for https://al2022-repos-XXX.amazonaws.com/core/mirrors/2022.0.20220928.0/aarch64/mirror.list
Ignoring repositories: amazonlinux.
Dependencies resolved.
Nothing to do.
Complete!
```

All inbound and outbound ports are open for both IPv4 and IPv6 in the security group rules.
Cannot access Timestream via PrivateLink without explicitly passing endpoint_url
Hi, I am trying to access Timestream from EC2 instances/Lambda functions that run within a VPC (they need to be in the VPC so they can also reach an RDS instance). I have spent many hours trying to get access to Timestream via PrivateLink/a VPC interface endpoint to work, and I think I may have found an issue.

When I provision a VPC endpoint for the Timestream ingest service, the private DNS name is specific to the cell endpoint, e.g. *ingest-cell2.timestream.us-east-1.amazonaws.com*, NOT the general endpoint URL that boto3 uses, i.e. *ingest.timestream.us-east-1.amazonaws.com*. When I run nslookup on *ingest-cell2.timestream.us-east-1.amazonaws.com* it properly resolves to the private IP of the VPC endpoint ENI, but if I look up the more general endpoint URL *ingest.timestream.us-east-1.amazonaws.com* it continues to resolve to public AWS IPs. The result is that if I initialize the Timestream write client normally and perform any action, it hangs, because it is trying to communicate with a public IP from a private subnet:

```
import boto3
ts = boto3.client('timestream-write')
ts.meta.endpoint_url                         # https://ingest.timestream.us-east-1.amazonaws.com
ts.describe_endpoints()                      # hangs
ts.describe_database(DatabaseName='dbName')  # hangs
```

If I explicitly give it the cell-specific endpoint URL, describe_endpoints() throws an error, but otherwise normal functions work (I haven't tested writes or reads yet, just describing databases):

```
import boto3
ts = boto3.client('timestream-write', endpoint_url='https://ingest-cell2.timestream.us-east-1.amazonaws.com')
ts.describe_endpoints()                      # throws UnknownOperationException
ts.describe_database(DatabaseName='dbName')  # succeeds
```

If I provision a NAT gateway in the private subnet rather than a VPC endpoint, everything works as expected.
Furthermore, for fun, I tried adding the VPC endpoint's private IP to the /etc/hosts file for *ingest.timestream.us-east-1.amazonaws.com* to force proper resolution, and even then I get the same hanging behavior when running the above block of code. This seems pretty broken to me. The whole point of the VPC endpoint is to enable the SDK to operate normally. Maybe I am missing something?
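Until the private DNS resolution issue is sorted out, pointing the client at the cell endpoint (as in the second snippet above) seems to be the workaround. A minimal sketch of building that URL; the cell prefix `ingest-cell2` is whatever your VPC endpoint's private DNS name shows, and the URL pattern is an assumption derived from that name:

```python
def cell_ingest_endpoint(cell: str, region: str) -> str:
    """Build the cell-specific Timestream ingest endpoint URL.

    `cell` (e.g. "ingest-cell2") is the hostname prefix shown as the
    VPC endpoint's private DNS name; the overall pattern is assumed
    to hold in every region.
    """
    return f"https://{cell}.timestream.{region}.amazonaws.com"
```

The result can then be passed as `endpoint_url=` when creating the `timestream-write` client, so the SDK never resolves the public `ingest.timestream.<region>.amazonaws.com` name.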
Bad performance on RDS MariaDB
I'm migrating an ancient on-premises MariaDB 5.5.65 to RDS. The new RDS MariaDB 10.6 shows no CPU, IOPS, or memory problems, but the SQL queries are too slow. One complex SELECT takes 0.89 s on premises and 2.93 s when run from an EC2 instance in the same VPC. I have compared the execution plans and they are different, but that is probably because of the different versions.

On-premises execution ![On_premises execution](/media/postImages/original/IMJRSsaEmuQnOBhb2tc1fvaw)

EC2 over RDS in the same VPC ![RDS with EC2](/media/postImages/original/IMHGeRQGjLQ8e65EvruI4SfQ)

I don't know if the differences in the "key" column are relevant to this issue. Any experience with bad performance on RDS MariaDB?
What type of GetEntitlements response is received when a customer cancels their subscription before the contract expires?
I am trying to integrate our product with AWS Marketplace using a SaaS contract plan. I am trying to implement the scenario where a customer deliberately cancels their subscription from the portal. Once I receive the SNS notification and call GetEntitlements, how can I identify from the response that the contract has been cancelled? Please suggest.
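For reference, a sketch of how a cancellation might be detected from the response. The interpretation here, that a cancelled contract shows up as an empty `Entitlements` list or as entitlements whose `ExpirationDate` is already in the past, is an assumption to verify against real responses for your product:

```python
from datetime import datetime, timezone
from typing import Optional

def is_contract_active(response: dict, now: Optional[datetime] = None) -> bool:
    """Decide whether a GetEntitlements response still carries a live contract.

    Assumption: after cancellation, the customer's `Entitlements` list comes
    back empty, or its entries carry an `ExpirationDate` in the past.
    """
    now = now or datetime.now(timezone.utc)
    entitlements = response.get("Entitlements", [])
    # No entitlements at all -> nothing active for this customer.
    if not entitlements:
        return False
    # Active if at least one entitlement has not yet expired.
    return any(e.get("ExpirationDate", now) > now for e in entitlements)
```

In practice the response dict would come from `boto3.client('marketplace-entitlement', region_name='us-east-1').get_entitlements(ProductCode=..., Filter={'CUSTOMER_IDENTIFIER': [...]})` after the SNS notification arrives.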
Lambda function failing with "cannot import name 'etree' from 'lxml' (/var/task/lxml/__init__.py)"
I'm creating a Lambda function using a virtual env on Windows 10. I'm adding site-packages to a .zip archive and updating the Lambda function. Currently this is failing with "cannot import name 'etree' from 'lxml' (/var/task/lxml/__init__.py)". My venv uses Python 3.8, as does the Lambda runtime. Some [previous solutions](https://stackoverflow.com/questions/53406638/importerror-cannot-import-name-etree-on-python-3-6) suggest setting the Lambda runtime to Python 3.6, but this is no longer an option; I've tried 3.7 and 3.9 and the problem persists. Other solutions on [re:Post](https://repost.aws/questions/QUwIZg6DlXTgKHjJyMIlG1dQ) discuss Docker containers rather than zip archives, and Linux environments rather than Windows.
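A common cause of this error is that lxml ships compiled C extensions, so a wheel installed into a Windows venv cannot be imported on Lambda's Linux runtime. A small sketch that screens wheel filenames by platform tag before zipping; the tag heuristic is an assumption and not exhaustive:

```python
def lambda_compatible(wheel_filename: str) -> bool:
    """Heuristic check of a wheel's platform tag against the Lambda runtime.

    Wheel filenames end in `-<platform>.whl`. Lambda's Python runtimes run
    on Linux, so `win_amd64` or `macosx` wheels packaged from a Windows or
    macOS venv will fail to import; pure-Python ("any") and manylinux
    wheels are fine. This is a sketch, not a full wheel-tag parser.
    """
    platform_tag = wheel_filename.rsplit("-", 1)[-1].removesuffix(".whl")
    return platform_tag == "any" or platform_tag.startswith("manylinux")
```

One way to obtain Linux wheels on Windows is to ask pip for them explicitly, e.g. `pip install --platform manylinux2014_x86_64 --only-binary=:all: --target package lxml`, then zip the `package` directory.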
Unable to connect to EC2 instance or access it over the internet
Hello, I am trying to connect to a Linux-based EC2 instance using PuTTY, but I can't connect to the instance. I also created a Linux-based application server using User Data and tried to access it over a browser, but I am not able to reach it. Please help me resolve this problem.
Elastic beanstalk cannot extend nginx config without zip archive
Hi, I've noticed that if you deploy your Elastic Beanstalk application with a Dockerrun.aws.json file only, e.g. `aws elasticbeanstalk create-application-version --application-name "<name>" --version-label "<tag>" --source-bundle S3Bucket="<bucket>",S3Key="Dockerrun.aws.json"`, it doesn't pick up the nginx config even if you add the .platform folder in the container. The only way I managed to make this work was to zip the .platform folder, source code, and Dockerrun.aws.json together, as opposed to uploading just the Dockerrun.aws.json file. Does anyone know whether it's possible to override the nginx config with just a Dockerrun.aws.json file?
Elastic beanstalk docker-compose errors when using Dockerrun.aws.json
Hi, I'm zipping up my source code and Dockerrun.aws.json file. I'm not using docker-compose, so I have not included a compose file in my source bundle. I keep getting the following logs in CloudWatch:

`Can't find a suitable configuration file in this directory or any parent. Are you in the right directory Supported filenames: docker-compose.yml, docker-compose.yaml, compose.yml, compose.yaml`

I'd expect these errors not to occur, because I am not using docker-compose. In our logs I can see that it detects we are not using docker-compose:

`2022/10/06 11:40:05.640803 [INFO] detected new app is not docker compose app`

Any guidance will be appreciated.
Usage of named shadow
Hi, I am using the aws-iot-device-sdk-python package (https://github.com/aws/aws-iot-device-sdk-python/tree/e0f9eef8aafdf6319022d7972d3f1f65eefb784d) for AWS shadow purposes. I want to use a named shadow; however, I have seen that this library only publishes to the unnamed (classic) shadow topics:

`self._topicGeneral = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName)`

(found in https://github.com/aws/aws-iot-device-sdk-python/blob/e0f9eef8aafdf6319022d7972d3f1f65eefb784d/AWSIoTPythonSDK/core/shadow/shadowManager.py)

while the shape of the topic for a named shadow should be:

`$aws/things/thingName/shadow/name/shadowName/actionName`

Is there any way to use this library for named shadows without modifying it? Thank you in advance.
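As a workaround sketch, the named-shadow topic could be built manually, following the topic scheme quoted above, and published with the SDK's generic MQTT client instead of the shadow manager. The thing and shadow names below are placeholders:

```python
from typing import Optional

def shadow_topic(thing_name: str, action: str,
                 shadow_name: Optional[str] = None) -> str:
    """Build a reserved device-shadow topic.

    Without `shadow_name`, this reproduces the classic (unnamed) topic the
    SDK's shadowManager builds; with one, it inserts the `name/<shadowName>`
    segment that the named-shadow topic scheme requires.
    """
    if shadow_name is None:
        return f"$aws/things/{thing_name}/shadow/{action}"
    return f"$aws/things/{thing_name}/shadow/name/{shadow_name}/{action}"
```

A payload could then be sent with the SDK's plain MQTT client, e.g. `AWSIoTMQTTClient.publish(shadow_topic("myThing", "update", "myShadow"), payload, 1)`, bypassing shadowManager entirely.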