Questions tagged with Amazon Aurora


Since we upgraded our DB cluster from Aurora 2.x to 3.x (MySQL 8), our DB has been slow on some read operations (on read replicas). When we check AWS Performance Insights, we see many waits related to wait/io/table/sql/handler, which we never had on Aurora 2.x. Does anyone know how to resolve this? Is it related to a configuration? ![Performance Insights](/media/postImages/original/IMP626jAOBR16ZH7Ct66R_mg) The issue seems to be intermittent: we can run the same query quickly (under a second), but sometimes it takes forever and reaches MAX_EXECUTION_TIME. ![MAX_EXECUTION_TIME](/media/postImages/original/IMqag14jP9Rem4X0bC-zhHBA) ![Query Speed](/media/postImages/original/IMZlUCjuI8Rj-t7752cK1RNw) I've already read almost every article related to this but couldn't find anything useful.
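Because the slowness is intermittent, it helps to log per-run timings on the application side so slow runs can be correlated with the wait/io/table/sql/handler spikes in Performance Insights. A minimal sketch; the `execute` callable is a placeholder for whatever driver call actually runs the query (e.g. a cursor's execute-and-fetch):

```python
import time

def timed_query(execute, sql, slow_ms=1000):
    """Run `sql` via the supplied `execute` callable; return the rows, the
    elapsed time in milliseconds, and whether the run exceeded slow_ms."""
    start = time.monotonic()
    rows = execute(sql)
    elapsed_ms = (time.monotonic() - start) * 1000
    return rows, elapsed_ms, elapsed_ms > slow_ms
```

Logging the third value on every call makes it easy to see whether the stalls cluster around particular times or replicas.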
0
answers
0
votes
45
views
asked 3 months ago
Hello All, I have created a simple Python shell script, shown below. The code runs fine from my local system, and I am also able to connect to the cluster from my local system. But when I run the script as a Glue Python shell job, I get the following error:
```
import sys
import psycopg2

rds_host = "hostname"
name = "aaaaaaa"
password = "XXXXXXX"
db_name = "bbb"

conn = psycopg2.connect(host=rds_host, user=name, password=password, dbname=db_name)
with conn.cursor() as cur:
    query = "CALL test_vals()"
    cur.execute(query)
    conn.commit()
    cur.close()
```
CloudWatch error log:
```
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection timed out
	Is the server running on host "hostname" (XX.XXX.XX.XX) and accepting
	TCP/IP connections on port 5432?

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/runscript.py", line 215, in <module>
```
I have not added any Connections in the job properties. Please help.
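A "Connection timed out" here is usually a network-path problem (the Glue job running without a VPC Connection, or a security group blocking port 5432) rather than a credentials problem. A quick reachability probe like the hypothetical helper below, run from inside the job, distinguishes the two before psycopg2 ever gets involved:

```python
import socket

def can_reach(host, port, timeout=5):
    """Return True if a plain TCP connection to (host, port) succeeds
    within `timeout` seconds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If `can_reach(rds_host, 5432)` is False from the job but True locally, the fix is networking (attach a Connection to the job, or open the security group), not the script.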
1
answers
0
votes
30
views
asked 3 months ago
With our upgrade of Aurora from 2.10.2 to 2.11, we've experienced connection timeouts to one of the DBs in our Aurora clusters that couldn't be explained by load or number of connections. We investigated and found that the MySQL.Data library for .NET, as provided by Oracle, appears to have issues with this latest version of Aurora which can cause the server to block new connections. We reproduced this using the latest version of MySQL.Data, 8.0.31 (as well as past versions of the package), by issuing parallel connections: the server ceases to accept new connections after two parallel connections are made. In the error logs we see "Bad handshake" and "Got an error reading communication packets" errors, and the user shows as "unauthenticated". When pointing at a cluster running 2.10.2 we cannot reproduce any issues and the MySQL.Data library works as expected. Are there any advisories on this?
1
answers
0
votes
60
views
asked 3 months ago
Newer versions of Amazon Aurora PostgreSQL-compatible edition are now available, and database cluster(s) running Aurora PostgreSQL minor versions 11.12, 12.7, and 13.3 need to be upgraded by March 15, 2023. These newer minor versions include important updates that will improve the operations of your Aurora PostgreSQL instances and workloads. We strongly encourage you to upgrade to at least a recommended minimum minor version at your earliest convenience.

* For PostgreSQL minor version 11.12, the recommended minimum minor version is 11.17.
* For PostgreSQL minor version 12.7, the recommended minimum minor version is 12.12.
* For PostgreSQL minor version 13.3, the recommended minimum minor version is 13.8.

Starting on or after 12:00 PM PDT on March 15, 2023, if your Amazon Aurora PostgreSQL cluster has not been upgraded to a newer minor version, we will schedule the relevant recommended minimum minor version to be automatically applied during your next maintenance window. Changes will apply to your cluster during your next maintenance window even if auto minor version upgrade is disabled. Restoration of Amazon Aurora PostgreSQL 11.12, 12.7, and 13.3 database snapshots after March 15, 2023 will result in an automatic upgrade of the restored database to a supported version at the time.

How to Determine Which Instances are Running These Minor Versions?

* In the Amazon RDS console, you can see details about a database cluster, including the Aurora PostgreSQL version of instances in the cluster, by choosing Databases from the console's navigation pane.
* To view DB cluster information by using the AWS CLI, use the describe-db-clusters command.
* To view DB cluster information using the Amazon RDS API, use the DescribeDBClusters operation.
* You can also query a database directly to get the version number via the aurora_version() system function, i.e., "SELECT * FROM aurora_version();".

How to Apply a New Minor Version

You can apply a new minor version in the AWS Management Console, via the AWS CLI, or via the RDS API. Customers using CloudFormation are advised to apply updates in CloudFormation. We advise you to take a manual snapshot before upgrading. For detailed upgrade procedures, please see the User Guide [1]. Please be aware that your cluster will experience a short period of downtime when the update is applied.

Visit the Aurora Version Policy [2] and the documentation [3] for more information and detailed release notes about minor versions, including existing supported versions. If you have any questions or concerns, the AWS Support Team is available on AWS re:Post and via Premium Support [4].

[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.VersionPolicy.html
[3] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Updates.20180305.html
[4] https://aws.amazon.com/support
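To script the check across many clusters, the output of describe-db-clusters can be filtered programmatically. A minimal sketch: the boto3 call is shown only in a comment, and the filter itself is a plain function over the returned dicts (the field names DBClusterIdentifier, Engine, and EngineVersion are as returned by the RDS API):

```python
# In a real script the cluster list would come from boto3, e.g.:
#   import boto3
#   clusters = boto3.client("rds").describe_db_clusters()["DBClusters"]
AFFECTED_VERSIONS = {"11.12", "12.7", "13.3"}

def clusters_needing_upgrade(clusters):
    """Return identifiers of Aurora PostgreSQL clusters still running
    one of the affected minor versions."""
    return [
        c["DBClusterIdentifier"]
        for c in clusters
        if c.get("Engine") == "aurora-postgresql"
        and c.get("EngineVersion") in AFFECTED_VERSIONS
    ]
```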
0
answers
0
votes
223
views
AWS
EXPERT
asked 3 months ago
We have a Lambda function with provisioned concurrency = 1 enabled. We insert records into Postgres Aurora RDS using this Lambda, connecting through an RDS Proxy endpoint. The DB object is created as follows:
```
@staticmethod
def init_database_dao():
    if DatabaseConnectionProvider._db_connection_metadata_dao is None:
        _logger.info(msg="creating db connection, singleton")
        # inside MetadataDao we are calling postgres.connect()
        DatabaseConnectionProvider._db_connection_metadata_dao = MetadataDao()
    else:
        _logger.info(msg="refreshing connection for MetadataDao singleton")
        DatabaseConnectionProvider._db_connection_metadata_dao.reset_if_cursor_or_conn_is_none()
    return DatabaseConnectionProvider._db_connection_metadata_dao
```
We get 'SSL connection has been closed unexpectedly' while executing a SELECT query once every few hours (sometimes after 2 hours, sometimes after 6). We have handled this Postgres OperationalError by resetting the connection in code and retrying, and that works, but we have not been able to find the root cause of the error. Since we are using RDS Proxy, we increased the proxy's idle client timeout from 30 to 60 minutes, and we observed that the frequency of the error decreased. We also use the same DB from other Lambdas (with the same DB object creation implementation); those Lambdas don't have provisioned concurrency enabled, and we have never encountered the SSL-connection-closed error in them. Is this somehow related to the Lambda always being warmed up (provisioned concurrency) while its connection gets shut down or destroyed? Any suggestions to get rid of this error? Note: the Lambda with provisioned concurrency is the entry point of the application. In this Lambda we get the connection object, then read from the DB and insert into the DB. Once it has executed, the other Lambdas are triggered.
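For reference, the reset-and-retry handling described above can be kept to one small wrapper. This is a sketch with injected callables, not the poster's actual code; `ConnectionDropped` stands in for psycopg2.OperationalError so the shape of the retry is clear without a live database:

```python
class ConnectionDropped(Exception):
    """Stand-in for psycopg2.OperationalError in this sketch."""

def execute_with_reconnect(get_conn, reset_conn, query, retries=1):
    """Run `query`; if the connection was dropped (e.g. by an idle
    timeout), call `reset_conn` and retry up to `retries` times."""
    for attempt in range(retries + 1):
        try:
            with get_conn().cursor() as cur:
                cur.execute(query)
                return cur.fetchall()
        except ConnectionDropped:
            if attempt == retries:
                raise
            reset_conn()
```

Centralizing the retry this way also makes it easy to log every occurrence, which helps when correlating the drops with proxy idle-timeout settings.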
1
answers
0
votes
206
views
asked 3 months ago
Hello All, We have an Aurora Postgres cluster up and running, with a lot of stored procedures in it. Is there a way to call a procedure from Step Functions? If we need to use Lambda to call a procedure, some of our procedures take more than 15 minutes to complete; in that case, how will Lambda be able to tell Step Functions that the procedure has finished?
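One common pattern for this is the Step Functions task-token callback (`.waitForTaskToken`): the state machine hands a task token to the worker and pauses until something calls SendTaskSuccess or SendTaskFailure, up to a configurable timeout. The worker that actually runs the procedure then needs to be something without Lambda's 15-minute cap (an ECS/Fargate task, for example), or a process that reports back later. A sketch of the callback side, with the clients injected so it can be exercised without AWS; `send_task_success`/`send_task_failure` are real Step Functions API names, everything else here is hypothetical:

```python
import json

def run_procedure_and_callback(event, sfn_client, db_execute):
    """Run a long stored procedure, then report the outcome back to the
    paused Step Functions state via its task token. In real use,
    sfn_client would be boto3.client("stepfunctions") and db_execute
    whatever runs the SQL."""
    token = event["taskToken"]
    try:
        db_execute("CALL my_procedure()")  # hypothetical procedure name
        sfn_client.send_task_success(
            taskToken=token, output=json.dumps({"status": "done"}))
    except Exception as exc:
        sfn_client.send_task_failure(
            taskToken=token, error="ProcedureFailed", cause=str(exc))
```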
3
answers
0
votes
68
views
Purnima
asked 3 months ago
Can I delete these Aurora 3 users with, e.g., drop user 'rds_superuser_role'@'%'? 1. rds_superuser_role 2. AWS_COMPREHEND_ACCESS 3. AWS_LAMBDA_ACCESS 4. AWS_LOAD_S3_ACCESS 5. AWS_SAGEMAKER_ACCESS 6. AWS_SELECT_S3_ACCESS
1
answers
0
votes
33
views
asked 3 months ago
1. AWS_COMPREHEND_ACCESS 2. AWS_LAMBDA_ACCESS 3. AWS_LOAD_S3_ACCESS 4. AWS_SAGEMAKER_ACCESS 5. AWS_SELECT_S3_ACCESS — I want to change these five Aurora 3 MySQL users' authentication plugin from 'mysql_native_password' to 'sha256_password', but I can't. I have two questions about this issue: 1. What is the role of these five users? 2. How can I change these users' plugin from 'mysql_native_password' to 'sha256_password'? Thank you.
1
answers
0
votes
42
views
asked 3 months ago
RDS has automatically upgraded us to MySQL 8.0.30, but Aurora currently only supports migration from 8.0.28. AWS Support says they can't provide a timeline for 8.0.30 support and recommends either using DMS (which seems very lacking for this type of migration) or dumping the database and recreating it (difficult for a database in the hundreds-of-gigabytes range). Does anyone have any good strategies for migrating a database of this size to Aurora? The answer here -- https://repost.aws/questions/QUM2j4BPEQS5CBHVu4QbOLCA/migrate-rds-my-sql-8-0-28-to-aurora-my-sql -- doesn't work, since you can't create an Aurora read replica for unsupported MySQL versions...
1
answers
0
votes
44
views
rainman
asked 3 months ago
Hi there -- an AWS bug appears to have left us with an inoperable RDS snapshot stuck at 0% for many months. Can you kindly work some voodoo magic to destroy database-1-final-snapshot in us-east-1, as neither the API nor the management console allows us to delete it on our end? Appreciate the time.
1
answers
0
votes
27
views
asked 4 months ago
We are using two Aurora Serverless v2 clusters to host a D2C e-commerce business. On December 3rd, I noticed that one of the clusters had been shut down by a recovery for around 10 minutes. According to the event messages, `Recovery of the DB instance has started. Recovery time will vary with the amount of data to be recovered.` appeared at 01:57 (UTC), and after that, `Recovery of the DB instance is complete.` at 02:06 (UTC). I found that metrics from Performance Insights, mysql-error-running.log, and RDS monitoring had disappeared for 15 minutes, which is consistent with the situation. Nevertheless, hundreds of SQL queries had been processed normally up to 01:59:18 (UTC), as indicated by our Datadog APM metrics, so there wasn't a crash or other unusual circumstance to trigger the recovery. I read [this post](https://www.repost.aws/questions/QUWrOK7aiwSH-iVwgpwq4UFQ/does-recovery-of-db-instance-run-automatically) and got the point, but I want to know whether the events I described can be reproduced and, if possible, how often they can repeat. I would also like a theoretical understanding of what might have happened. (I know I could ask the AWS Support Engineering Team if I need an analysis, but in theory.)
0
answers
0
votes
32
views
asked 4 months ago
I am seeing my Aurora Postgres sequence values skip by 33 on a semi-consistent basis. I am aware of a thread from the PG mailing list saying that when PG recovers it can cause a sequence skip. Also, if a large transaction rolls back, any sequence values consumed during that transaction remain at their new value and aren't rolled back. I get that; it's the whole point of a sequence. But in my case nothing is happening, and boom, tomorrow morning the sequences have skipped ahead by 33. This article discusses other reasons that can cause a sequence skip: https://www.cybertec-postgresql.com/en/gaps-in-sequences-postgresql/ But I am not seeing any of those events. It appears to happen randomly. Is anyone else seeing this? I migrated from RDS/Postgres and never experienced it there; it started only with the migration to Aurora Postgres.
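One known source of such gaps is value preallocation: PostgreSQL-family engines hand out sequence values in blocks (the sequence's CACHE setting plus the engine's own prefetching), and values that were preallocated but never returned are discarded on a restart or failover, which surfaces as a gap of roughly the block size. A toy model of that behavior (not Aurora's actual implementation) shows how a block of 32 turns a restart into a jump from 1 straight to 33:

```python
class CachedSequence:
    """Toy model of a sequence that preallocates `cache` values at a time.
    Values handed to a session but never used are lost when the session
    or server restarts, which is one way visible gaps appear."""
    def __init__(self, cache=32):
        self.cache = cache
        self.high_water = 0      # highest value ever reserved durably
        self._pool = iter(())    # values preallocated but not yet used

    def nextval(self):
        for v in self._pool:     # serve from the preallocated block
            return v
        start = self.high_water + 1          # block exhausted: reserve more
        self.high_water += self.cache
        self._pool = iter(range(start + 1, self.high_water + 1))
        return start

    def restart(self):
        """Simulate a restart: unused preallocated values are discarded."""
        self._pool = iter(())
```

Whether this matches the poster's case would show up in `SELECT * FROM pg_sequences;` (the cache_size column) and in whether the skips line up with failovers or instance restarts.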
2
answers
0
votes
35
views
cody
asked 4 months ago