Questions tagged with General Database Migrations
AWS DMS CDC file size settings
Hi everyone, I configured AWS DMS to migrate data from an AWS-managed Oracle database to an S3 bucket. The CDC process has been creating files greater than 32 MB. I was expecting files of at most 32 MB because of `cdcMinFileSize`, but my bucket contains files greater than 200 MB. The relevant settings are:

* **cdcMaxBatchInterval** - The maximum interval length condition, defined in seconds, to output a file to Amazon S3. The default value is 60 seconds.
* **cdcMinFileSize** - The minimum file size condition, defined in KB, to output a file to Amazon S3. The default value is 32000 KB.
* **WriteBufferSize** - The size, in KB, of the in-memory file write buffer used when generating .csv files on the local disk of the AWS DMS replication instance. The default value is 1000 KB.

I was wondering whether my Oracle database committed a large transaction that led to this scenario. But wouldn't the replication instance break the file into chunks during processing to fit within 32 MB? Thanks in advance.
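For reference, a minimal sketch of how these settings fit together (setting names are taken from the quoted documentation; the values and the attribute-string format are illustrative assumptions, not a verified endpoint configuration). Note that `cdcMinFileSize` is a *minimum* threshold, not a maximum: DMS outputs a file once it is at least that size or once `cdcMaxBatchInterval` elapses, so a single large committed transaction can still produce a much bigger file.

```python
# Hypothetical S3 target endpoint settings for a DMS CDC task (example
# values only -- these mirror the defaults quoted above).
s3_settings = {
    "CdcMaxBatchInterval": 60,   # seconds; default 60
    "CdcMinFileSize": 32000,     # KB; default 32000 (~32 MB) -- a MINIMUM, not a cap
    "WriteBufferSize": 1000,     # KB; default 1000
}

# The same settings expressed as a semicolon-separated attribute string,
# the general shape used for extra connection attributes on an endpoint.
extra_connection_attributes = ";".join(
    f"{key}={value}" for key, value in s3_settings.items()
)
print(extra_connection_attributes)
```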
AWS DMS from On-Prem Informix to AWS MySQL
I'm unable to authenticate/connect to a local Informix server from an AWS DMS endpoint. **Here is the error:** Test Endpoint failed: Application-Status: 1020912, Application-Message: Cannot connect to DB2 LUW Server Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: 08001 NativeError: -1336 Message: [unixODBC][IBM][CLI Driver] SQL1336N The remote host "Informix_server" was not found. SQLSTATE=08001 Which source engine do I need to use in the endpoint to connect to the Informix server?
phpMyAdmin table export on Amazon Lightsail
Hello, I have issues with table export from phpMyAdmin on an Amazon Lightsail instance. I access my website database's phpMyAdmin through localhost, and until yesterday I was downloading databases and uploading new ones normally, but since yesterday I can't download them in any browser. I can execute all commands normally and everything works fine except data export. I have a few other servers where database and table export works normally. Any advice is welcome.
Migrate SQL Server DB (outside of AWS) to Aurora DB in AWS
I've created an empty Aurora DB inside AWS. I backed up my SQL Server DB and uploaded that backup to an S3 bucket. I thought I might be able to import that SQL Server backup either into my Aurora DB or into a SQL Server instance in AWS and then migrate that to Aurora... but I can't see how to do this. I've tried creating an endpoint, thinking I could use it to link to the SQL Server backup file, but I can't see a way. Is this possible, or should I use a different approach?
Aurora upgrade 2 to 3 / MySQL 5.7 to 8.0: potential bug in pre-check validation (deprecated words)
We have noticed that the pre-checks for the upgrade from MySQL 5.7 to MySQL 8 have issues with character sequences that "resemble" deprecated words. For example, the deprecated "GROUP BY ... DESC" is one of those constructs (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html#USER_UpgradeDBInstance.MySQL.57to80Prechecks): "There must be no queries and stored program definitions from MySQL 8.0.12 or lower that use ASC or DESC qualifiers for GROUP BY clauses." While our stored procedures use GROUP BY clauses, there is no associated "DESC" keyword with them. However, the character sequence does appear in the stored procedures in various forms:

* There is a call to another stored procedure named "update_fc**desc**ription();", which has the characters "desc" within its name
* There are columns in the queries (table columns) with names like "blah**Desc**riptionblah"
* There is a block comment containing the word "**Desc**ription:" that documents the stored procedure

However, there is no "DESC" keyword associated with any "GROUP BY". For testing:

* I deleted the word from the comment, and that issue no longer appeared as an error
* I renamed the call to the other stored procedure from update_fc**desc**ription(); to update_fc**dxexscxrixp**tion();, and that issue no longer appeared as an error
* I couldn't work around the columns containing the characters "desc" without extensive changes to the stored procedure

There is a Stack Overflow question outlining this behavior too: https://stackoverflow.com/questions/71412470/aws-mysql-aurora-major-version-2-3-upgrade-pre-checks-obsolete-procedure

And a re:Post question as well: https://repost.aws/questions/QUWJzlcpitRoGM0woZVOylBQ/aurora-2-to-3-mysql-5-7-to-8-0-upgrade-pre-check-incorrect-validation-on-store-procedure

This is clearly a bug in the pre-check process and is limiting our upgrade from MySQL 5.7 to 8. Any updates on this being fixed/addressed? Thank you.
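The behavior described above is consistent with a plain substring match rather than a word-boundary (keyword) match. A small illustration of the difference, using a hypothetical procedure body modeled on the cases listed above (this is not the actual pre-check code, which is not public):

```python
import re

# Hypothetical stored-procedure text containing "desc" only inside
# comments and identifiers, never as a standalone DESC keyword.
procedure_body = """
-- Description: rolls up totals
CALL update_fcdescription();
SELECT blahDescriptionblah, COUNT(*)
FROM t
GROUP BY blahDescriptionblah;
"""

# Naive substring search: flags every occurrence of "desc", including the
# comment and the identifiers -- the false positives described above.
substring_hits = procedure_body.lower().count("desc")

# Word-boundary search: matches DESC only as a standalone keyword.
keyword_hits = len(re.findall(r"\bdesc\b", procedure_body, re.IGNORECASE))

print(substring_hits, keyword_hits)
```

A substring scan reports four hits here, while a word-boundary scan reports none, which matches the observation that renaming identifiers and deleting comments made the pre-check errors disappear.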
How is AWS DMS CDC working successfully without the on-premises MSSQL CDC prerequisite configuration?
We're using DMS for a CDC-only migration to cover the time between a point-in-time restore and the current DB state, i.e., using AWS DMS to replicate changes as of the point in time at which the bulk load started, to bring and keep the source and target systems in sync. We've configured AWS DMS (CDC only) with a source endpoint to an on-premises SQL Server 2012 (Standard Edition) instance and a target endpoint to AWS RDS MSSQL 2019 (Standard Edition).

Per the AWS CDC prerequisites documentation (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html#CHAP_Source.SQLServer.Prerequisites), running the query below on the on-premises MSSQL 2012 instance returns an error:

```
use uat_testdb
EXEC sys.sp_cdc_enable_db
```

Msg 22988, Level 16, State 1, Procedure sp_cdc_enable_db, Line 14 [Batch Start Line 0] This instance of SQL Server is the Standard Edition (64-bit). Change data capture is only available in the Enterprise, Developer, and Enterprise Evaluation editions.

It looks like the ongoing-replication CDC feature is supported on MSSQL Standard Edition only from 2016 SP1 and later. Could you please suggest whether there is any workaround to complete CDC without upgrading our on-premises MSSQL Standard Edition 2012 to Standard Edition 2016 / Enterprise Edition?

**However, without applying these CDC prerequisite settings on the on-premises DB instance, we can see ongoing replication between the on-premises and RDS DB instances; the statistics show sync updates for inserts and deletes. (Based on testing, the target RDS DB instance syncs only Insert and Delete operations from the on-premises source DB, not any Updates.) Could you please confirm/clarify whether those CDC prerequisite settings are mandatory, given that we can see replication succeeding in DMS, and why we aren't getting any error/warning messages in AWS DMS about the missing CDC prerequisite settings? Thanks.**
How to migrate an on-premises Oracle DB to a Microsoft SQL Server DB in AWS? We see SCT will not allow MS SQL Server as the target when the source is Oracle. Looking for tools like SCT to move the schema from Oracle.
We see SCT is useful for moving a source schema (e.g., Oracle) to a target schema DB (e.g., Aurora Postgres) and a limited set of other DB combinations. Our requirement now is to migrate from on-premises Oracle to MS SQL Server in the AWS Cloud. Please let me know if anyone has worked on this task.
DMS service for Aurora Postgres doesn't accept a view as source for full-load process
Hi, we are creating a DMS task that we expected to use a view in an Aurora Postgres database as the data source, for a one-off table migration. But it seems that views aren't actually supported for this specific source type... any reason for this? There's a column that serves as a primary key for that view.

Below is the message shown when creating the DMS task:

Error in mapping rules. Rule with ruleId = 042389161 failed validation. view selection is not available for aurora-postgresql source type

In the meantime, I'm looking at AWS Glue to do the procedure, and in the "worst" case I'll create a procedure to do the task / load process. Any tips would be great for anyone who runs into the same case. Best regards
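For reference, a DMS selection rule in the task's table mappings has the general shape below (the schema and object names here are hypothetical). Per the validation error quoted above, for an aurora-postgresql source the object locator must resolve to a table, not a view, so a common workaround is pointing the rule at a real table or materializing the view into one first.

```python
import json

# A standard DMS table-mapping selection rule (schema and object names are
# hypothetical examples). For an aurora-postgresql source, "table-name"
# must be a table -- a view here fails validation, per the quoted error.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-source-object",
            "object-locator": {
                "schema-name": "public",
                "table-name": "my_source_table",
            },
            "rule-action": "include",
        }
    ]
}

print(json.dumps(table_mappings, indent=2))
```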
Continuous replication from RDS MySQL to Keyspaces (Cassandra)
I want to know which AWS service will help me transfer data from RDS to Amazon Keyspaces (Cassandra) in real time. Case: I am using RDS MySQL as my production database, and I have also created a read replica for it. Now I want any new data written to my database to be automatically transferred to Keyspaces (Cassandra) for further use.
MongoDB Atlas vs DocumentDB
If I have an application that is currently using MongoDB Atlas (not hosted on AWS), should I migrate to AWS DocumentDB if the rest of my infrastructure is running on AWS? What should I look out for when migrating? Are there any bugs or known issues between the two services, and what type of support does DocumentDB get for new MongoDB features? I'd love a pros/cons list of running each service so I can make a good decision.