
Questions tagged with AWS Database Migration Service


DMS Postgres Source Error: No tables were found at task initialization

I am trying to set up AWS DMS with an RDS Postgres instance as the source. However, I keep getting the following error when starting the replication task:

> Last Error: No tables were found at task initialization. Either the selected table(s) or schemas(s) no longer exist or no match was found for the table selection pattern(s). If you would like to start a Task that does not initially capture any tables, set Task Setting FailOnNoTablesCaptured to false and restart task.
> Stop Reason: FATAL_ERROR
> Error Level: FATAL

I followed the [DMS guide](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.RDSPostgreSQL) and my source connection test is successful. I am using the master account and have verified that the pglogical plugin is installed. For my task selection rules, I have tried both the wildcard (`"schema-name": "%", "table-name": "%"`) and targeting specific schemas and tables, with no success. I have also tried all three migration types: full-load, cdc, and full-load-and-cdc.

Here is my complete task configuration:

```json
{
  "Logging": {
    "EnableLogging": false,
    "LogComponents": [
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "TRANSFORMATION" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "SOURCE_UNLOAD" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "IO" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "TARGET_LOAD" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "PERFORMANCE" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "SOURCE_CAPTURE" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "SORTER" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "REST_SERVER" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "VALIDATOR_EXT" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "TARGET_APPLY" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "TASK_MANAGER" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "TABLES_MANAGER" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "METADATA_MANAGER" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "FILE_FACTORY" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "COMMON" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "ADDONS" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "DATA_STRUCTURE" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "COMMUNICATION" },
      { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "FILE_TRANSFER" }
    ],
    "CloudWatchLogGroup": null,
    "CloudWatchLogStream": null
  },
  "StreamBufferSettings": { "StreamBufferCount": 3, "CtrlStreamBufferSizeInMB": 5, "StreamBufferSizeInMB": 8 },
  "ErrorBehavior": {
    "FailOnNoTablesCaptured": true,
    "ApplyErrorUpdatePolicy": "LOG_ERROR",
    "FailOnTransactionConsistencyBreached": false,
    "RecoverableErrorThrottlingMax": 1800,
    "DataErrorEscalationPolicy": "SUSPEND_TABLE",
    "ApplyErrorEscalationCount": 0,
    "RecoverableErrorStopRetryAfterThrottlingMax": true,
    "RecoverableErrorThrottling": true,
    "ApplyErrorFailOnTruncationDdl": false,
    "DataTruncationErrorPolicy": "LOG_ERROR",
    "ApplyErrorInsertPolicy": "LOG_ERROR",
    "EventErrorPolicy": "IGNORE",
    "ApplyErrorEscalationPolicy": "LOG_ERROR",
    "RecoverableErrorCount": -1,
    "DataErrorEscalationCount": 0,
    "TableErrorEscalationPolicy": "STOP_TASK",
    "RecoverableErrorInterval": 5,
    "ApplyErrorDeletePolicy": "IGNORE_RECORD",
    "TableErrorEscalationCount": 0,
    "FullLoadIgnoreConflicts": true,
    "DataErrorPolicy": "LOG_ERROR",
    "TableErrorPolicy": "SUSPEND_TABLE"
  },
  "TTSettings": { "TTS3Settings": null, "TTRecordSettings": null, "EnableTT": false },
  "FullLoadSettings": {
    "CommitRate": 10000,
    "StopTaskCachedChangesApplied": false,
    "StopTaskCachedChangesNotApplied": false,
    "MaxFullLoadSubTasks": 8,
    "TransactionConsistencyTimeout": 600,
    "CreatePkAfterFullLoad": false,
    "TargetTablePrepMode": "DROP_AND_CREATE"
  },
  "TargetMetadata": {
    "ParallelApplyBufferSize": 0,
    "ParallelApplyQueuesPerThread": 0,
    "ParallelApplyThreads": 0,
    "TargetSchema": "",
    "InlineLobMaxSize": 0,
    "ParallelLoadQueuesPerThread": 0,
    "SupportLobs": false,
    "LobChunkSize": 0,
    "TaskRecoveryTableEnabled": false,
    "ParallelLoadThreads": 0,
    "LobMaxSize": 0,
    "BatchApplyEnabled": false,
    "FullLobMode": false,
    "LimitedSizeLobMode": false,
    "LoadMaxFileSize": 0,
    "ParallelLoadBufferSize": 0
  },
  "BeforeImageSettings": null,
  "ControlTablesSettings": {
    "historyTimeslotInMinutes": 5,
    "HistoryTimeslotInMinutes": 5,
    "StatusTableEnabled": false,
    "SuspendedTablesTableEnabled": false,
    "HistoryTableEnabled": false,
    "ControlSchema": "",
    "FullLoadExceptionTableEnabled": false
  },
  "LoopbackPreventionSettings": null,
  "CharacterSetSettings": null,
  "FailTaskWhenCleanTaskResourceFailed": false,
  "ChangeProcessingTuning": {
    "StatementCacheSize": 50,
    "CommitTimeout": 1,
    "BatchApplyPreserveTransaction": true,
    "BatchApplyTimeoutMin": 1,
    "BatchSplitSize": 0,
    "BatchApplyTimeoutMax": 30,
    "MinTransactionSize": 1000,
    "MemoryKeepTime": 60,
    "BatchApplyMemoryLimit": 500,
    "MemoryLimitTotal": 1024
  },
  "ChangeProcessingDdlHandlingPolicy": {
    "HandleSourceTableDropped": true,
    "HandleSourceTableTruncated": true,
    "HandleSourceTableAltered": true
  },
  "PostProcessingRules": null
}
```
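A common cause of "No tables were found at task initialization" is that the user on the source endpoint cannot see the tables at all, in which case even the `%` wildcard matches nothing. A quick check you can run as the endpoint user, plus a possible fix (a sketch only — it assumes the schema is `public` and the endpoint user is `dms_user`; substitute your own names):

```sql
-- Run as the user configured on the DMS source endpoint.
-- If this returns no rows, DMS cannot see the tables either.
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('information_schema', 'pg_catalog');

-- Possible fix (hypothetical names: schema "public", user "dms_user"):
GRANT USAGE ON SCHEMA public TO dms_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO dms_user;
```

Also note that selection rules are matched case-sensitively against the stored object names, so an exact-name rule for a mixed-case table must quote the name exactly as Postgres stores it.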
1 answer · 0 votes · 6 views · asked a day ago

DMSStack-DMSRole-xxxx/dms-session-for-replication-engine is not authorized to perform: secretsmanager:GetSecretValue

I'm trying to test an endpoint connection from a DMS replication instance. The DMS (3.4.7) replication instance (running in Account A) is attempting to get a secret from Secrets Manager (running in Account B) through a VPC interface endpoint, but it fails with the following:

```
Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to retrieve secret. Unable to find Secrets Manager secret, Application-Detailed-Message: Unable to find AWS Secrets Manager secret Arn 'arn:aws:secretsmanager:us-east-1:acntBbbbbb:secret:/dmsdemo/aaaaa-<erandomStrng>'
The secrets_manager get secret value failed: User: arn:aws:sts::acntAaaaa:assumed-role/DMSStack-DMSRole-zzzzzzz/dms-session-for-replication-engine is not authorized to perform: secretsmanager:GetSecretValue on resource: arn:aws:secretsmanager:us-east-1:acntBbbbbb:secret:/aaaaa-<randomStrng> because no session policy allows the secretsmanager:GetSecretValue action
Not retriable error: <AccessDeniedException> User: arn:aws:sts::acntAaaaa:assumed-role/DMSStack-DMSRole-zzzzzzz/dms-session-for-replication-engine is not authorized to perform: secretsmanager:GetSecretValue on resource: arn:aws:secretsmanager:us-east-1:acntBbbbbb:secret:/dmsdemo/aaaaa-<randomStrng>' because no session policy allows the secrets
```

DMSRole policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:acntAaaaa:secret:/dmsdemo/aaaaa-<randomStrng>",
      "Effect": "Allow"
    },
    {
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:acnt:key/ddddddddddd",
      "Effect": "Allow"
    }
  ]
}
```

Resource policy on the secret:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::acntAaaaaa:root",
          "arn:aws:iam::acntBbbbbbb:root"
        ]
      },
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "*"
    }
  ]
}
```

Any thoughts on what is missing in the permissions that is restricting access to the secret?
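One detail worth checking: the `Resource` in the DMSRole policy in the question is scoped to an ARN in Account A (`acntAaaaa`), while the secret being fetched lives in Account B, so the identity-based policy never allows the cross-account secret. A sketch of a role policy scoped to the Account B ARNs instead, reusing the question's placeholder account IDs (`<keyIdInAccountB>` is a hypothetical placeholder for the key that encrypts the secret):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:acntBbbbbb:secret:/dmsdemo/aaaaa-<randomStrng>"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:acntBbbbbb:key/<keyIdInAccountB>"
    }
  ]
}
```

Note also that cross-account retrieval generally requires the secret to be encrypted with a customer-managed KMS key whose key policy grants `kms:Decrypt` to Account A; the default `aws/secretsmanager` key cannot be used across accounts.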
1 answer · 0 votes · 17 views · asked 6 days ago

DMS migration from Aurora MySQL 5.6 to Aurora MySQL 5.7 on graviton

Hi there, I am having recurring issues migrating Aurora MySQL 5.6.10 on db.r5.large to Aurora MySQL 5.7.12 on db.r6g.large.

I started by trying to replicate all the schemas I had created, but this failed with an unknown error. I then broke this down into one schema per replication task; this also failed with an unknown error. I then turned on CloudWatch logging for all tasks. This worked, except that one table repeatedly fails to replicate. If I use the mysql CLI to drop or repair the table, mysql drops the connection! When I look at the table in phpMyAdmin, it says "unknown storage engine" and/or that the table is in use. When I try to drop the schema using phpMyAdmin, it logs me out straight away! I've waited a few minutes and can now log back in, and I can see the schema has been dropped successfully.

This looks like a bug in DMS creating the table, or in Aurora somehow locking the table and putting it into an inconsistent state. I've now resolved the issue and moved on, but the service team might want to be aware of this.

The table schema is very simple:

```sql
CREATE TABLE IF NOT EXISTS `lkcities` (
  `state` varchar(2) DEFAULT NULL,
  `city` varchar(16) DEFAULT NULL,
  `country_id` varchar(2) NOT NULL,
  UNIQUE KEY `country_id` (`country_id`,`state`,`city`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

--
-- Dumping data for table `lkcities`
--

INSERT INTO `lkcities` (`state`, `city`, `country_id`) VALUES
('AK', 'Akhiok', 'US'),
('AK', 'Akiachak', 'US'),
('AK', 'Akiak', 'US'),
('AK', 'Akutan', 'US'),
('AK', 'Alakanuk', 'US'),
('AK', 'Aleknagik', 'US'),
('AK', 'Allakaket', 'US'),
('AK', 'Ambler', 'US'),
('AK', 'Anaktuvuk Pass', 'US'),
('AK', 'Anchorage', 'US'),
...
```

etc. — ~25,705 rows in total.
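One thing that stands out in the schema is that `lkcities` has no primary key — only a unique key over columns that allow NULLs — and DMS behaves less predictably with tables that lack one. A possible workaround to try before migrating (a sketch under that assumption, not a confirmed fix for the behavior described above) is to add a surrogate key:

```sql
-- Hypothetical workaround: give the table an explicit primary key
-- so DMS has an unambiguous row identity during full load and CDC.
ALTER TABLE `lkcities`
  ADD COLUMN `id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
```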
2 answers · 0 votes · 24 views · asked 6 days ago