Questions tagged with General Database Migrations

I am using the Aurora blue/green deployment process to upgrade my database from MySQL 5.7 to MySQL 8.0.26, which also upgrades the Aurora engine from version 2 to version 3. The upgrade fails due to a pre-check failure:

```
{
  "id": "engineMixupCheck",
  "title": "Tables recognized by InnoDB that belong to a different engine",
  "status": "OK",
  "description": "Error: Following tables are recognized by InnoDB engine while the SQL layer believes they belong to a different engine. Such situation may happen when one removes InnoDB table files manually from the disk and creates e.g. a MyISAM table with the same name.\n\nA possible way to solve this situation is to e.g. in case of MyISAM table:\n\n1. Rename the MyISAM table to a temporary name (RENAME TABLE).\n2. Create some dummy InnoDB table (its definition does not need to match), then copy (copy, not move) and rename the dummy .frm and .ibd files to the orphan name using OS file commands.\n3. The orphan table can be then dropped (DROP TABLE), as well as the dummy table.\n4. Finally the MyISAM table can be renamed back to its original name.",
  "detectedProblems": [
    {
      "level": "Error",
      "dbObject": "mysql.general_log_backup",
      "description": "recognized by the InnoDB engine but belongs to CSV"
    }
  ]
}
```

As an Aurora user, it is not possible for me to delete, move, alter, or otherwise change any tables in the `mysql` tablespace, so the recommended remediation is not possible. So my question is: how can I force the blue/green process to skip this check, or better yet, how can I manually DROP the `mysql.general_log_backup` table, as I do not need it? Please note I am using "FILE"-based logging in the DB parameters.

Steps to reproduce:

- Create an Aurora instance with engine version 5.7.mysql_aurora.2.10.3
- Start a blue/green deployment with:
  * engine version 8.0 (Aurora 3+)
  * a custom cluster parameter group
  * a custom instance parameter group
- The blue/green environment is created
- The DB engine upgrade fails

Thanks!
1
answers
0
votes
54
views
seeotee
asked 20 days ago
As per the AWS documentation, the maximum size supported by an AWS RDS database is 16 TB. My on-prem MS SQL Server DB is approximately 25 TB, so I can't use RDS.

- What options are available for migrating this DB to AWS?
- If I install MS SQL Server on an EC2 instance, what are the pros and cons?
- Is there an AWS-recommended document for installing MS SQL Server on an EC2 RHEL instance?
- For MS SQL Server on EC2, how do I best design the backup and restore process?
1
answers
0
votes
43
views
asked 4 months ago
Hi everyone, I configured AWS DMS to migrate data from an AWS-managed Oracle database to an S3 bucket. The CDC has been creating files greater than 32 MB. I was expecting files of at most 32 MB because of `cdcMinFileSize`, yet in my bucket there are files greater than 200 MB. For reference:

**cdcMaxBatchInterval** - The maximum interval length condition, defined in seconds, to output a file to Amazon S3. The default value is 60 seconds.

**cdcMinFileSize** - The minimum file size condition, defined in KB, to output a file to Amazon S3. The default value is 32000 KB.

**WriteBufferSize** - The size, in KB, of the in-memory file write buffer used when generating .csv files on the local disk at the AWS DMS replication instance. The default value is 1000 KB.

I was wondering if my Oracle database committed a large transaction that led to this scenario. But wouldn't the replication instance break the file into chunks to fit the 32 MB during processing? Thanks in advance.
1
answers
0
votes
130
views
Marcus
asked 6 months ago
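One way to reason about the question above: `cdcMinFileSize` is a floor, not a cap. A file is closed once it reaches the minimum size (or once `cdcMaxBatchInterval` elapses), and change batches are written whole, so a single large transaction can push a file well past the "minimum" threshold. A minimal Python sketch of that closing logic (an illustration of the suspected behavior, not of DMS internals, which are not public):

```python
# Illustrative sketch (an assumption, not DMS source code): a file is closed
# when it reaches cdcMinFileSize, but each transaction batch is written whole,
# so one large transaction can overshoot the threshold.

CDC_MIN_FILE_SIZE_KB = 32_000   # default cdcMinFileSize (32 MB)

def files_for_batches(batch_sizes_kb):
    """Group transaction batches into output files, closing a file once it
    reaches the minimum-size threshold. Batches are never split."""
    files, current = [], 0
    for size in batch_sizes_kb:
        current += size            # the whole batch goes into the open file
        if current >= CDC_MIN_FILE_SIZE_KB:
            files.append(current)  # threshold reached -> close the file
            current = 0
    if current:
        files.append(current)      # flush whatever remains
    return files

# One 200 MB transaction batch yields one ~200 MB file, despite the 32 MB "min":
print(files_for_batches([200_000]))    # [200000]
# Small batches accumulate until the floor is crossed, then the file closes:
print(files_for_batches([10_000] * 7))
```

Under this model, the 200 MB files in the bucket are consistent with a large committed transaction being emitted as a single batch.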
I'm unable to authenticate/connect to a local Informix server from an AWS DMS endpoint.

**Here is the error:** Test Endpoint failed: Application-Status: 1020912, Application-Message: Cannot connect to DB2 LUW Server Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: 08001 NativeError: -1336 Message: [unixODBC][IBM][CLI Driver] SQL1336N The remote host "Informix_server" was not found. SQLSTATE=08001

Which source engine do I need to use in the endpoint to connect to the Informix server?
1
answers
0
votes
49
views
asked 6 months ago
Hello, I have issues with table export from phpMyAdmin on an Amazon Lightsail instance. I access my website database's phpMyAdmin through localhost, and until yesterday I was downloading databases and uploading new ones normally, but since yesterday I can't download them in any browser. I can execute all commands and everything works fine except data export. Any advice is welcome. I have a few other servers where database or table export works normally.
1
answers
0
votes
33
views
asked 6 months ago
I've created an empty Aurora DB inside AWS. I backed up my SQL Server DB and uploaded that backup to an S3 bucket. I thought I might be able to import that SQL Server backup either into my Aurora DB, or into a SQL Server instance in AWS and then migrate that to Aurora, but I can't see how to do this. I've tried creating an endpoint, thinking I could use that to link to the SQL Server backup file, but I can't see a way. Is this possible, or should I use a different approach?
1
answers
0
votes
127
views
neohed
asked 8 months ago
I ran DMS, which migrates data from Oracle to Postgres. After the migration, DMS shows mismatched records, around 600 of them. Is there any way we can identify and fix this issue?
1
answers
0
votes
609
views
asked 9 months ago
Hi, I need to migrate from AWS to a different provider. Where do I find the following info: SSH hostname, SSH username, SSH port number, SSH private key, SSH key passphrase? Thanks
2
answers
-1
votes
60
views
asked 9 months ago
We have noticed that the pre-checks for the upgrade from MySQL 5.7 to MySQL 8 have issues with character combinations that "resemble" deprecated words. For example, the deprecated "GROUP BY ... DESC" is one of those constructs (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html#USER_UpgradeDBInstance.MySQL.57to80Prechecks): "There must be no queries and stored program definitions from MySQL 8.0.12 or lower that use ASC or DESC qualifiers for GROUP BY clauses." While our stored procedures use GROUP BY, there is no "DESC" keyword associated with them. However, the character sequence does appear in the stored procedures in various forms:

* There is a call to another stored procedure called "update_fc**desc**ription()"; it has the characters "desc" within its name
* There are columns in the queries (table columns) with names like "blah**Desc**riptionblah"
* There is a block comment containing the word "**Desc**ription:" that documents the stored procedure

However, there are no "DESC" keywords associated with the "GROUP BY". For testing:

* I deleted the word from the comment, and that issue no longer appeared as an error
* I renamed the call to the other stored procedure from update_fc**desc**ription() to update_fc**dxexscxrixp**tion(), and that issue no longer appeared as an error
* The columns containing the characters "desc" I couldn't work around without a lot of changes to the stored procedure

There is a Stack Overflow question outlining this behavior too: https://stackoverflow.com/questions/71412470/aws-mysql-aurora-major-version-2-3-upgrade-pre-checks-obsolete-procedure

Also a re:Post question: https://repost.aws/questions/QUWJzlcpitRoGM0woZVOylBQ/aurora-2-to-3-mysql-5-7-to-8-0-upgrade-pre-check-incorrect-validation-on-store-procedure

This is clearly a bug in the pre-check process and is blocking our upgrade from MySQL 5.7 to 8. Any updates on this being fixed/addressed? Thank you.
2
answers
0
votes
161
views
asked 9 months ago
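The false positives described above are consistent with the pre-check matching the character sequence "desc" anywhere after a "GROUP BY", rather than tokenizing the SQL. The actual pre-check source is not public, so the pattern below is an assumption, but it reproduces the reported behavior (flagging `update_fcdescription()` while a clean GROUP BY passes):

```python
import re

# Assumed naive pre-check pattern: any "desc" following a GROUP BY,
# with no SQL tokenization -- so identifiers and comments match too.
NAIVE_PRECHECK = re.compile(r"group\s+by.*?desc", re.IGNORECASE | re.DOTALL)

def flags_desc(procedure_body: str) -> bool:
    """True if the naive check would flag this stored-procedure text."""
    return bool(NAIVE_PRECHECK.search(procedure_body))

# A genuinely deprecated construct is flagged, as expected:
print(flags_desc("SELECT a FROM t GROUP BY a DESC"))  # True
# But so is a harmless call whose *name* merely contains "desc":
print(flags_desc("SELECT a FROM t GROUP BY a; CALL update_fcdescription();"))  # True
# A clean GROUP BY with no "desc" sequence anywhere passes:
print(flags_desc("SELECT a FROM t GROUP BY a ORDER BY a"))  # False
```

This also explains why renaming update_fcdescription() and deleting the comment made the error disappear: both removed the "desc" sequence from the text after the GROUP BY.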
We're using DMS for a CDC-only migration covering the time between a point-in-time restore and the current DB state, i.e. AWS DMS replicates changes as of the point in time at which the bulk load started, to bring and keep the source and target systems in sync. We've configured AWS DMS (CDC only) with the source endpoint pointing to an on-premises SQL Server 2012 (Standard Edition) and the target endpoint to AWS RDS for SQL Server 2019 (Standard Edition).

Per the AWS CDC prerequisites documentation (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html#CHAP_Source.SQLServer.Prerequisites), running the query below on the on-premises SQL Server 2012 instance returns an error:

```
use uat_testdb
EXEC sys.sp_cdc_enable_db
```

Msg 22988, Level 16, State 1, Procedure sp_cdc_enable_db, Line 14 [Batch Start Line 0]: This instance of SQL Server is the Standard Edition (64-bit). Change data capture is only available in the Enterprise, Developer, and Enterprise Evaluation editions.

It looks like the ongoing-replication CDC feature is supported on SQL Server Standard Edition only from 2016 SP1 onward. Could you please suggest whether there is any workaround to complete CDC without upgrading our on-premises SQL Server 2012 Standard Edition to Standard Edition 2016 / Enterprise Edition?

**However, even without applying these CDC prerequisite settings on the on-premises DB instance, we can see ongoing replication between the on-premises and RDS instances; the statistics show sync updates for inserts and deletes. (Based on testing, the target RDS instance syncs only insert and delete operations from the on-premises source, not updates.) Could you please confirm/clarify whether those CDC prerequisites are mandatory, given that replication appears to succeed in DMS, and why we're not getting any error/warning messages in AWS DMS about the missing CDC prerequisite settings? Thanks.**
1
answers
0
votes
389
views
asked 9 months ago
How do we migrate an on-premises Oracle DB to a Microsoft SQL Server DB in AWS? We see that SCT will not allow MS SQL Server as a target when the source is Oracle. We are looking for tools like SCT to move the schema from Oracle. SCT is useful for moving a source schema (e.g. Oracle) to a target schema (e.g. Aurora PostgreSQL) and a limited set of other DB combinations, but our requirement is to migrate from on-premises Oracle to MS SQL Server in the AWS cloud. Please let me know if anyone has worked on this task.
2
answers
0
votes
308
views
asked a year ago
Hi, we are creating a DMS task that we expected to use a view in an Aurora PostgreSQL database as the data source, for a single-use table migration. But it seems that views aren't supported for this specific source type. Is there any reason for this? There is a column that serves as a primary key for that view. Below is the message when creating the DMS task:

Error in mapping rules. Rule with ruleId = 042389161 failed validation. view selection is not available for aurora-postgresql source type

In the meantime, I'm looking at AWS Glue to do the job and, in the "worst" case, will create a procedure to do the task/load process. Any tips would be great for anyone who runs into the same case. Best regards
1
answers
1
votes
300
views
asked a year ago
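For readers hitting the same validation error: DMS selection rules accept a `table-type` attribute in the object locator ("table", "view", or "all"), but as the error message in the question says, view selection is not honored for the `aurora-postgresql` source type, and views are supported for full-load tasks only. A commonly suggested workaround is to register the source endpoint with the plain `postgres` engine type instead. A sketch of such a mapping rule, with placeholder schema and view names, serialized here as a Python dict:

```python
import json

# Sketch of a DMS table-mapping selection rule that includes a view
# (full-load only; schema/view names are placeholders). Whether
# "table-type" is honored depends on the source engine -- per the
# error above it is rejected for "aurora-postgresql".
mapping = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-source-view",
            "object-locator": {
                "schema-name": "public",          # placeholder schema
                "table-name": "my_source_view",   # placeholder view name
                "table-type": "view",
            },
            "rule-action": "include",
        }
    ]
}

print(json.dumps(mapping, indent=2))
```

Since CDC cannot read from a view, the Glue fallback mentioned in the question remains the more flexible route for anything beyond a one-time load.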