Questions tagged with Amazon Aurora


I am trying to migrate data from an on-premises SQL Server database to Aurora PostgreSQL using AWS DMS. The data migrates just fine, but for tables that have a boolean column in the primary key, validation fails with the following error:

```
[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared.
```

Checking the logs on the PostgreSQL side, I find this:

```
ERROR: invalid input syntax for type boolean: "" at character <number>
STATEMENT: SELECT cast ("pcode" as varchar(6)) , "other_columns" , "boolean_column" FROM "db"."table" WHERE ((("boolean_column" = ' ' AND "pcode" > 'L7L3V9') AND ("boolean_column" = ' ' AND "pcode" <= 'L8L4E8'))) ORDER BY "boolean_column" ASC , "pcode" ASC
```

During validation, DMS fetches the records from PostgreSQL in batches, and for each batch it uses the wrong value for the "boolean_column" (comparing it to '', a blank string). I am not sure why it does this, or how to influence this behaviour so that validation succeeds. The boolean column is:

- a bit field on the SQL Server side
- on the PostgreSQL side, I have tried converting it to both of the following, with the same validation error as above:
  - numeric with precision 1
  - boolean
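One workaround worth trying (an assumption, not a confirmed fix for this validation error) is to have DMS change the source bit column's data type in the table mapping, so validation compares integers instead of generating a blank-string boolean literal. A minimal sketch of such a transformation rule, with placeholder schema/table/column names:

```python
# Hedged sketch of an AWS DMS table-mapping transformation rule.
# "change-data-type" is a documented DMS rule action and "int1" a DMS
# data type, but whether this resolves the validation error above is an
# assumption to verify against your own task. All object names below
# ("dbo", "table", "boolean_column") are placeholders.
import json

transformation_rule = {
    "rule-type": "transformation",
    "rule-id": "1",
    "rule-name": "bit-to-int",
    "rule-target": "column",
    "object-locator": {
        "schema-name": "dbo",            # placeholder
        "table-name": "table",           # placeholder
        "column-name": "boolean_column"  # placeholder
    },
    "rule-action": "change-data-type",
    "data-type": {"type": "int1"}
}

# The rule goes inside the task's table-mapping JSON under "rules".
print(json.dumps({"rules": [transformation_rule]}, indent=2))
```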
0
answers
0
votes
26
views
asked a month ago
As MySQL 5.6 nears end of life, we upgraded one of our Aurora Serverless MySQL 5.6 clusters to 5.7. From the moment we upgraded we have faced a lot of issues, especially with table locking, which we never saw on 5.6 in all these years. Over the last few days we have tried changing isolation levels, disabling the cache, and many other options, but none produced any useful outcome. It still hangs on UPDATE queries with the primary key in the WHERE clause, even on tables with very minimal data. Gap locks, deadlocks, or locks not released while processes were cleaning up: we couldn't pin down exactly which so far. Has anybody faced similar issues? Kindly advise.

```
---TRANSACTION 960499483, ACTIVE 12 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 1136, 1 row lock(s)
MySQL thread id 43, OS thread handle 47444799534848, query id 36684 10.1.28.175 admin updating
UPDATE bulk_import_hist_tracking set records_count =IFNULL(nullif(1, ''), records_count), practiceid = 123, submission_status = 'test',s3_url = NULLIF('test', ''), updatedat=now() where target_resource_id = 1
------- TRX HAS BEEN WAITING 12 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 15857 page no 4 n bits 0 index PRIMARY of table `embry_care_db`.`bulk_import_hist_tracking` trx id 960499483 lock_mode X locks rec but not gap waiting
Record lock, heap no 111 PHYSICAL RECORD: n_fields 15; compact format; info bits 0
0: len=4; bufptr=0x2b27454f67bb; hex= 80000001; asc ;;
1: len=6; bufptr=0x2b27454f67bf; hex= 000039400ccf; asc 9@ ;;
2: len=7; bufptr=0x2b27454f67c5; hex= 300000642917c5; asc 0 d) ;;
3: len=4; bufptr=0x2b27454f67cc; hex= 8000007b; asc {;;
4: len=7; bufptr=0x2b27454f67d0; hex= 44425f74657374; asc DB_test;;
5: SQL NULL;
6: len=3; bufptr=0x2b27454f67d7; hex= 8fcd8e; asc ;;
7: len=4; bufptr=0x2b27454f67da; hex= 80000001; asc ;;
8: len=2; bufptr=0x2b27454f67de; hex= 4442; asc DB;;
9: len=4; bufptr=0x2b27454f67e0; hex= 74657374; asc test;;
10: len=1; bufptr=0x2b27454f67e4; hex= 80; asc ;;
11: len=4; bufptr=0x2b27454f67e5; hex= 6399cc81; asc c ;;
12: len=4; bufptr=0x2b27454f67e9; hex= 63dce65b; asc c [;;
13: len=4; bufptr=0x2b27454f67ed; hex= 74657374; asc test;;
14: SQL NULL;
```
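When an UPDATE like this hangs, MySQL 5.7's sys schema can show which session holds the blocking lock while the statement is still waiting. A small sketch carrying the diagnostic query as a string (run it in any client while the hang is in progress):

```python
# MySQL 5.7 ships the sys schema, whose innodb_lock_waits view joins the
# InnoDB transaction and lock tables to show, for each waiting statement,
# which session holds the blocking lock. Running this while the UPDATE is
# hung identifies the blocker so it can be committed or killed.
BLOCKER_QUERY = """
SELECT waiting_pid,     -- processlist id of the stuck session
       waiting_query,
       blocking_pid,    -- processlist id of the lock holder
       blocking_query,
       wait_age_secs
FROM sys.innodb_lock_waits
ORDER BY wait_age_secs DESC;
"""

# Once the holder is known, KILL <blocking_pid>; is the usual last resort.
print(BLOCKER_QUERY)
```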
1
answers
0
votes
58
views
Vasanth
asked 2 months ago
Does AWS automatically perform database snapshot backups before and after a minor version upgrade for RDS and Aurora engines? Is this process the same for all RDS database engines and Aurora?
2
answers
0
votes
44
views
AWS
asked 2 months ago
To handle the EOL of Aurora MySQL 5.6, I need to upgrade the major version. Here is my case: I have a database named avocado (the blue one). I created a blue/green deployment and switched traffic to the green one (avocado-xxx), but now I have found some issues in the green cluster, so I want to switch traffic back to the blue cluster. I know about the data loss from the switching window. Is there any way to switch back to blue quickly? Or can I restore the green cluster using the snapshot I created on the blue cluster before switching? Looking forward to your information. Thanks team, and best regards.
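One fallback, if switching back directly is not possible, is to restore the pre-switchover snapshot. Note that a snapshot restores into a brand-new cluster, not "into" the existing green cluster in place. A hedged boto3 sketch, where every identifier is a placeholder:

```python
# Hedged sketch: restore the snapshot taken before the blue/green
# switchover into a new Aurora cluster, then repoint the application.
# restore_db_cluster_from_snapshot is the boto3 RDS call for this; the
# cluster/snapshot identifiers and engine version below are placeholders
# for your own values. Instances must then be added to the new cluster
# separately (a snapshot restore creates only the cluster).
restore_kwargs = {
    "DBClusterIdentifier": "avocado-restored",        # placeholder
    "SnapshotIdentifier": "pre-switchover-snapshot",  # placeholder
    "Engine": "aurora-mysql",
    "EngineVersion": "5.7.mysql_aurora.2.11.1",       # assumed version
}

def restore(client=None):
    """Issue the restore call; kwargs kept separate so they can be reviewed."""
    import boto3  # AWS SDK for Python; imported lazily so the sketch loads without it
    client = client or boto3.client("rds")
    return client.restore_db_cluster_from_snapshot(**restore_kwargs)
```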
3
answers
0
votes
87
views
asked 2 months ago
When you run `REINDEX (VERBOSE) INDEX <index-name>;`, is it possible to do this on one AZ at a time, or is there some other way of maintaining uptime throughout the process? Would you need a multi-master cluster for this to be possible? [This](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-multi-master.html) would suggest so: > With a multi-master cluster, your application can redirect write operations from the failed DB instance to any other DB instance in the cluster. I wasn't sure after reading [this maintenance doc](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Maintenance.html#USER_UpgradeDBInstance.OSUpgrades).
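A per-AZ rebuild is not really available, because Aurora replicas share a single storage volume, so the index has only one physical copy. On PostgreSQL 12+ (and Aurora PostgreSQL releases based on it), an online rebuild avoids the write-blocking lock instead. A sketch, with the index name as a placeholder:

```python
# REINDEX ... CONCURRENTLY (PostgreSQL 12+) rebuilds the index without
# the lock that blocks writes for the duration of a plain REINDEX, at
# the cost of a slower rebuild and temporary extra disk space. The index
# name below is a placeholder.
REINDEX_ONLINE = "REINDEX (VERBOSE) INDEX CONCURRENTLY my_index;"

# A plain REINDEX, by contrast, blocks writes to the parent table until
# the rebuild finishes.
REINDEX_BLOCKING = "REINDEX (VERBOSE) INDEX my_index;"

print(REINDEX_ONLINE)
```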
0
answers
0
votes
16
views
AWS
asked 2 months ago
As Aurora 1 (MySQL 5.6) is reaching end of life soon, we tried to upgrade to Aurora 2 (MySQL 5.7). We used a manual snapshot to create a new cluster, and the upgrade itself went fine. However, as soon as we shifted our production load to the new cluster (2 instances of size r5.4xlarge), overall application performance was hit badly. The reader instance showed 99% CPU usage the whole time, and many of our queries took many minutes. That rendered everything unusable, so we had to go back to 5.6 using the backup snapshot. We kept the cluster and instance parameters as close as possible between 5.6 and 5.7. I wonder what might be causing this, and what is the way out? I am in a bit of a panic because we have just a few weeks left.
0
answers
0
votes
45
views
asked 2 months ago
I have an RDS Aurora MySQL 8.0.23 cluster running in production. The database is unencrypted and I need to enable encryption for it. As far as I understand, this is not possible to do directly. The procedure I am evaluating is:

1. Create a read replica on the current cluster.
2. Stop replication on the replica and note the binlog file name and position.
3. Promote the read replica to a new encrypted cluster (maybe this requires taking a snapshot first).
4. Set up replication back from the original cluster using the binlog file and position noted before.
5. Wait until replication lag is zero.
6. Redirect production traffic to the new cluster.
7. Stop replication.
8. [Optional] Delete the old cluster.

I have two issues with the above procedure:

- Once the replica is created, running commands like 'SHOW SLAVE STATUS' or 'SHOW REPLICA STATUS' returns an empty set, so I can't note the binlog file and position. Please note that binary logging is enabled on the original cluster (binlog_format is set to ROW).
- It seems I can't promote the Aurora read replica to a new cluster; the option is missing from the available actions. But according to the documentation it should be possible.

Does anyone have feedback on the issues above? What is the current, up-to-date procedure to encrypt an Aurora MySQL cluster with minimum downtime and no data loss?
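One likely reason SHOW SLAVE STATUS comes back empty: replicas inside an Aurora cluster use storage-level replication rather than binlogs, so there is no binlog position to read on the replica. A hedged sketch of the binlog step carried as SQL strings, with hostnames, credentials, and positions as placeholders:

```python
# Hedged sketch of setting up binlog replication from the old
# (unencrypted) cluster into a new (encrypted) one. The binlog position
# has to come from the source, not the Aurora replica. All hostnames,
# credentials, and file/position values below are placeholders.
GET_POSITION = "SHOW MASTER STATUS;"  # run on the OLD cluster's writer

# Run on the NEW (encrypted) cluster. These stored procedures exist on
# Aurora MySQL for configuring external binlog replication; the argument
# values are placeholders to fill in from SHOW MASTER STATUS.
START_REPLICATION = """
CALL mysql.rds_set_external_master(
    'old-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com',  -- placeholder host
    3306, 'repl_user', 'repl_password',
    'mysql-bin-changelog.000123',  -- File from SHOW MASTER STATUS
    456789,                        -- Position from SHOW MASTER STATUS
    0);
CALL mysql.rds_start_replication;
"""

print(GET_POSITION)
```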
3
answers
0
votes
56
views
Ludo
asked 2 months ago
I have created an Aurora MySQL 8 cluster (engine: 8.0.mysql_aurora.3.02.2) and am testing the LOAD FROM S3 statement. The appropriate S3 role is attached to the cluster and the DB cluster parameter group has been changed to use this role. The LOAD FROM S3 statement works when I use the default admin user; however, when I create another user and grant it the LOAD FROM S3 privilege, the command does not work. The error I get is:

```
Access denied; you need (at least one of) the LOAD FROM S3 privilege(s) for this operation.
```

When I run ``SELECT * FROM mysql.`user` u``, the column Select_into_S3_priv is no longer there, as it was in MySQL 5.7. Can you confirm whether there is an issue with the LOAD FROM S3 command for Aurora MySQL 8? Thanks
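In Aurora MySQL version 3 (MySQL 8.0 compatible), these S3 privileges became database roles, which is why the Select_into_S3_priv column is gone from mysql.user. A sketch of the grant carried as SQL strings, with the user name as a placeholder:

```python
# In Aurora MySQL version 3, LOAD FROM S3 access is granted through the
# AWS_LOAD_S3_ACCESS role rather than a per-user privilege column.
# The user name below is a placeholder.
GRANT_S3_LOAD = "GRANT AWS_LOAD_S3_ACCESS TO 'loader_user'@'%';"

# MySQL 8.0 roles are not active by default: either each session runs
# SET ROLE, or the cluster parameter activate_all_roles_on_login is
# enabled so granted roles apply automatically at login.
ACTIVATE = "SET ROLE ALL;"

print(GRANT_S3_LOAD)
```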
1
answers
0
votes
61
views
asked 2 months ago
I have an Aurora PostgreSQL Serverless v2 cluster with 2 instances, one writer and one reader, and I would like them to scale independently. According to the [documentation](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2-administration.html#aurora-serverless-v2-choosing-promotion-tier:~:text=Aurora%20Serverless%20v2%20reader%20DB%20instances%20in%20tiers%202%E2%80%9315%20don%27t%20have%20the%20same%20constraint%20on%20their%20minimum%20capacity.%20When%20they%20are%20idle%2C%20they%20can%20scale%20down%20to%20the%20minimum%20Aurora%20capacity%20unit%20(ACU)%20value%20specified%20in%20the%20cluster%27s%20capacity%20range.), if the reader instance has a failover priority other than 0 or 1, the instances SHOULD scale independently, but no matter what I do they always scale synchronously. I have a workload that runs twice a day and demands a higher ACU count, but the reader instance has very low usage, so I would like them to scale independently to save on costs. In my use case it is not a problem if the reader instance takes longer to scale up and take over in case of failure. Thanks
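It is worth double-checking that the reader really sits in promotion tier 2 or higher, since readers in tiers 0-1 are deliberately kept in step with the writer's capacity so they can fail over instantly. A hedged boto3 sketch for demoting the reader, with the instance identifier as a placeholder:

```python
# Hedged sketch: move the reader to a promotion tier in the 2-15 range so
# Aurora Serverless v2 scales it on its own load instead of mirroring the
# writer. The instance identifier is a placeholder; PromotionTier and
# ApplyImmediately are parameters of the boto3 modify_db_instance call.
modify_kwargs = {
    "DBInstanceIdentifier": "my-cluster-reader-1",  # placeholder
    "PromotionTier": 15,        # tiers 2-15 are not tied to writer capacity
    "ApplyImmediately": True,
}

def demote_reader(client=None):
    """Apply the promotion-tier change to the reader instance."""
    import boto3  # AWS SDK for Python; imported lazily so the sketch loads without it
    client = client or boto3.client("rds")
    return client.modify_db_instance(**modify_kwargs)
```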
2
answers
0
votes
25
views
asked 2 months ago
We recently migrated our RDS databases to 5.7+ to prepare for AWS' retirement of MySQL 5.6 support. We have snapshots of previous databases from the 5.6 days - will those be accessible down the line or should we plan to upgrade them? Per the [announcement here](https://repost.aws/questions/QUImshxjRKSRq-t-AQppM6SA/announcement-deprecated-amazon-relational-database-service-rds-for-my-sql-5-6-end-of-life-date-is-august-3-2021): > You can continue to restore your MySQL 5.6 snapshots as well as create read replicas with version 5.6 until the August 3, 2021 end of support date. This makes it sound like we should prepare to restore, upgrade, and re-snapshot existing snapshots to be able to maintain access to them. Is this something Amazon is planning to automate or should I make a ticket for our own teams to do it ourselves?
1
answers
0
votes
63
views
asked 2 months ago
I tried to upgrade from Aurora MySQL 5.7 (2.10.2) to Aurora MySQL 8.0 (3.02.2) and got this pre-check error in the logs:

```
{
    "id": "engineMixupCheck",
    "title": "Tables recognized by InnoDB that belong to a different engine",
    "status": "OK",
    "description": "Error: Following tables are recognized by InnoDB engine while the SQL layer believes they belong to a different engine. Such situation may happen when one removes InnoDB table files manually from the disk and creates e.g. a MyISAM table with the same name.\n\nA possible way to solve this situation is to e.g. in case of MyISAM table:\n\n1. Rename the MyISAM table to a temporary name (RENAME TABLE).\n2. Create some dummy InnoDB table (its definition does not need to match), then copy (copy, not move) and rename the dummy .frm and .ibd files to the orphan name using OS file commands.\n3. The orphan table can be then dropped (DROP TABLE), as well as the dummy table.\n4. Finally the MyISAM table can be renamed back to its original name.",
    "detectedProblems": [
        {
            "level": "Error",
            "dbObject": "mysql.general_log_backup",
            "description": "recognized by the InnoDB engine but belongs to CSV"
        }
    ]
},
```

Looking at the [MySQL shell code](https://github.com/mysql/mysql-shell/blob/8.0.23/modules/util/upgrade_check.cc#L1301-L1316) and running that SQL, I get this result.
```
SELECT a.table_schema, a.table_name, concat('recognized by the InnoDB engine but belongs to')
FROM information_schema.tables a
JOIN (SELECT substring_index(NAME, '/', 1) AS table_schema,
             substring_index(substring_index(NAME, '/', -1), '#', 1) AS TABLE_NAME
      FROM information_schema.innodb_sys_tables
      WHERE NAME like '%/%') b
  ON a.table_schema = b.table_schema AND a.table_name = b.table_name
WHERE a.engine != 'Innodb'

+--------------+--------------------+----------------------------------------------------------+
| table_schema | table_name         | concat('recognized by the InnoDB engine but belongs to') |
+--------------+--------------------+----------------------------------------------------------+
| mysql        | general_log_backup | recognized by the InnoDB engine but belongs to           |
+--------------+--------------------+----------------------------------------------------------+
1 row in set (0.11 sec)
```

And that is because this entry is present in information_schema.innodb_sys_tables, where it should not really be present in the first place.
```
mysql> select * from information_schema.innodb_sys_tables where NAME like '%general%';
+----------+--------------------------+------+--------+-------+-------------+------------+---------------+------------+
| TABLE_ID | NAME                     | FLAG | N_COLS | SPACE | FILE_FORMAT | ROW_FORMAT | ZIP_PAGE_SIZE | SPACE_TYPE |
+----------+--------------------------+------+--------+-------+-------------+------------+---------------+------------+
|    16462 | mysql/general_log_backup |   33 |      9 | 16448 | Barracuda   | Dynamic    |             0 | Single     |
+----------+--------------------------+------+--------+-------+-------------+------------+---------------+------------+
1 row in set (0.09 sec)
```

Coincidentally, the release notes of [Aurora 3.02.0](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.3020.html) say this:

> Fixed an issue that can cause upgrade failures from Aurora MySQL 2 to Aurora MySQL 3 due to schema inconsistency errors reported by upgrade prechecks for the general log and slow log tables.

While the release notes say "schema inconsistency errors" and my error is "engineMixupCheck", it feels like the two are related, since both involve the general_log. Also, [this question](https://repost.aws/questions/QUPC7D-_ZuTgCZSLALluxW9g/need-help-in-upgrading-the-aurora-mysql-5-7-to-mysql-8-urgent) mentions

> mysql.general_log_backup recognized by the InnoDB engine but belongs to CSV.

which is exactly the error I am getting, but no solution seems to have been provided there. So, has anyone seen this error, and is there a workaround?
0
answers
0
votes
134
views
asked 2 months ago
It appears that an Aurora PostgreSQL major version upgrade requires us to first drop all replication slots, then perform the upgrade, and then recreate the replication slots. We use logical slots for Debezium/Kafka replication in inter-process workflows. When we drop and recreate these replication slots as part of a major version upgrade, how can we ensure replication restarts from where it left off (meaning replication resumes with incremental changes only) rather than forcing a FULL sync? We cannot afford a FULL sync due to large table sizes.
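WAL positions do not survive a major-version upgrade, so resuming from the exact same LSN is not possible; the realistic goal is an empty slot backlog at drop time, with writes quiesced through the upgrade window. A sketch of the drain check, carried as a SQL string:

```python
# Hedged sketch: before dropping the logical slot, verify the consumer
# has fully drained it, so that recreating the slot after the upgrade
# misses nothing (provided writes are stopped for the upgrade window).
SLOT_LAG = """
SELECT slot_name,
       confirmed_flush_lsn,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)
           AS unconsumed_bytes
FROM pg_replication_slots
WHERE slot_type = 'logical';
"""

# With writes stopped and unconsumed_bytes at 0, Debezium can be
# restarted after the upgrade with a snapshot mode that skips re-reading
# table data (e.g. a schema-only recovery mode), avoiding a FULL sync.
print(SLOT_LAG)
```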
1
answers
0
votes
38
views
asked 2 months ago