AWS DMS Database Migration Task Completed but CloudWatch Logs Indicate Errors


I have a Postgres database hosted on Heroku that I dumped and restored to an AWS RDS database via an EC2 instance. I can confirm the tables were transferred correctly by querying Postgres from the EC2 instance, and by exporting a snapshot to an S3 bucket, downloading it, and inspecting the zipped Parquet file contents with Python.

Here is the problem: when I create the endpoints (source: RDS; target: Kinesis stream) and the Database Migration task completes with no apparent errors in the service, I check the CloudWatch logs and see that no data has actually been transferred. The six tables that I want to migrate:

ar_internal_metadata, users, document_thread_posts, document_threads, document_thread_users, schema_migrations

have all been transferred according to the DMS table statistics, but the errors I am seeing in the CloudWatch logs are of the following nature:

2024-08-14T11:31:11 [SOURCE_CAPTURE ]I: No Event fetched from wal log (postgres_endpoint_wal_engine.c:1416)
2024-08-14T11:32:13 [SORTER ]I: No records received to load or apply on target , waiting for data from upstream. The last context is {operation:LOAD_TABLE (61)} (streamcomponent.c:1989)
2024-08-14T11:32:24 [SOURCE_CAPTURE ]I: Sent captured record 0 to internal queue from Source {operation:IDLE (51), connectionId:18155} (streamcomponent.c:2927)
2024-08-14T11:32:43 [TARGET_APPLY ]I: No records received to load or apply on target , waiting for data from upstream. The last context is {operation:LOAD_TABLE (61), tableName:users, schemaName:public} (streamcomponent.c:1989)
2024-08-14T11:33:41 [SORTER ]I: Task is running {operation:LOAD_TABLE (61)} (sorter.c:761)
2024-08-14T11:34:13 [SOURCE_CAPTURE ]I: No Event fetched from wal log (postgres_endpoint_wal_engine.c:1416)
2024-08-14T11:35:14 [SORTER ]I: No records received to load or apply on target , waiting for data from upstream. The last context is {operation:LOAD_TABLE (61)} (streamcomponent.c:1989)
2024-08-14T11:35:24 [SOURCE_CAPTURE ]I: Sent captured record 0 to internal queue from Source {operation:IDLE (51), connectionId:18155} (streamcomponent.c:2927)
2024-08-14T11:35:43 [TARGET_APPLY ]I: No records received to load or apply on target , waiting for data from upstream. The last context is {operation:LOAD_TABLE (61), tableName:users, schemaName:public} (streamcomponent.c:1989)
2024-08-14T11:37:15 [SOURCE_CAPTURE ]I: No Event fetched from wal log (postgres_endpoint_wal_engine.c:1416)
2024-08-14T11:38:14 [SORTER ]I: No records received to load or apply on target , waiting for data from upstream. The last context is {operation:LOAD_TABLE (61)} (streamcomponent.c:1989)
2024-08-14T11:38:25 [SOURCE_CAPTURE ]I: Sent captured record 0 to internal queue from Source {operation:IDLE (51), connectionId:18155} (streamcomponent.c:2927)
2024-08-14T11:38:43 [TARGET_APPLY ]I: No records received to load or apply on target , waiting for data from upstream. The last context is {operation:LOAD_TABLE (61), tableName:users, schemaName:public} (streamcomponent.c:1989)
2024-08-14T11:40:15 [SOURCE_CAPTURE ]I: No Event fetched from wal log (postgres_endpoint_wal_engine.c:1416)
2024-08-14T11:41:14 [SORTER ]I: No records received to load or apply on target , waiting for data from upstream. The last context is {operation:LOAD_TABLE (61)} (streamcomponent.c:1989)
2024-08-14T11:41:26 [SOURCE_CAPTURE ]I: Sent captured record 0 to internal queue from Source {operation:IDLE (51), connectionId:18155} (streamcomponent.c:2927)
2024-08-14T11:41:43 [TARGET_APPLY ]I: No records received to load or apply on target , waiting for data from upstream. The last context is {operation:LOAD_TABLE (61), tableName:users, schemaName:public} (streamcomponent.c:1989)

I have been trying to find more information about these messages online, and I would very much appreciate help from anyone who may have seen something like this before. Thank you for your time.

1 Answer

Here are some potential reasons and solutions:

Check the Source Endpoint Configuration:

Ensure that the DMS source endpoint is correctly configured to capture changes from the PostgreSQL database.

Verify that the replication slot is properly set up and that the WAL (Write-Ahead Logging) level is configured to at least logical.
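A minimal sketch of that check, using Python with psycopg2 and placeholder connection details, is below. On RDS for PostgreSQL, wal_level is switched to logical by setting the rds.logical_replication parameter to 1 in the instance's DB parameter group.

    import psycopg2

    # Placeholder connection details for the RDS PostgreSQL source endpoint.
    conn = psycopg2.connect(
        host="your-rds-endpoint.rds.amazonaws.com",
        dbname="your_db",
        user="your_user",
        password="your_password",
    )
    with conn.cursor() as cur:
        # Should print 'logical'; anything else means CDC cannot read the WAL.
        cur.execute("SHOW wal_level;")
        print("wal_level:", cur.fetchone()[0])

        # A CDC task normally has a replication slot here; check that one exists and is active.
        cur.execute("SELECT slot_name, plugin, active FROM pg_replication_slots;")
        for slot in cur.fetchall():
            print("replication slot:", slot)
    conn.close()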

Verify Task Settings:

Ensure that your DMS task is configured for either full load, change data capture (CDC), or both, depending on your requirement.

If you’re using CDC, verify that there are actual changes happening in the source tables after the initial load.
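One quick way to confirm how the task is configured is to describe it with boto3 (the task identifier below is a placeholder):

    import boto3

    dms = boto3.client("dms")
    resp = dms.describe_replication_tasks(
        Filters=[{"Name": "replication-task-id", "Values": ["my-dms-task"]}]  # placeholder id
    )
    for task in resp["ReplicationTasks"]:
        # MigrationType is 'full-load', 'cdc', or 'full-load-and-cdc'.
        print(task["ReplicationTaskIdentifier"], task["MigrationType"], task["Status"])

If the task is CDC-only, an empty stream after a "successful" run is expected until new writes hit the source tables.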

Check the Target Endpoint:

Make sure that the Kinesis Stream target is correctly set up and accessible from your DMS task.

Ensure that the DMS IAM role has the necessary permissions to write to the Kinesis Stream.
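As a rough sanity check with your own credentials (the stream name is a placeholder), you can confirm the stream is ACTIVE and accepts writes; the DMS service role itself needs at least kinesis:DescribeStream, kinesis:PutRecord, and kinesis:PutRecords on the stream.

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    # Placeholder stream name; should report StreamStatus == 'ACTIVE'.
    summary = kinesis.describe_stream_summary(StreamName="my-dms-target-stream")
    print("stream status:", summary["StreamDescriptionSummary"]["StreamStatus"])

    # Optional smoke test: write one record to confirm the stream is reachable and writable.
    kinesis.put_record(
        StreamName="my-dms-target-stream",
        Data=json.dumps({"ping": "dms-connectivity-test"}).encode("utf-8"),
        PartitionKey="test",
    )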

Review Table Mappings:

Verify that the table mappings in your DMS task are correctly set up and that the tables you want to migrate are included. Check for any transformation rules that might be affecting the data migration.
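For reference, a minimal selection rule that includes every table in the public schema looks like the sketch below (the task ARN is a placeholder, and the task has to be stopped before its mappings can be modified):

    import json
    import boto3

    # Include every table in the public schema; narrow table-name to target only the six tables.
    table_mappings = {
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-public-tables",
                "object-locator": {"schema-name": "public", "table-name": "%"},
                "rule-action": "include",
            }
        ]
    }

    dms = boto3.client("dms")
    dms.modify_replication_task(
        ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",  # placeholder ARN
        TableMappings=json.dumps(table_mappings),
    )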

Investigate CloudWatch Logs:

Dive deeper into the CloudWatch logs to see if there are more specific errors or warnings that could provide additional insights.

Look for any errors related to network connectivity, permissions, or resource availability.
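The lines quoted in the question are informational (]I:), which is normal while a CDC task idles with no new changes to read. One way to surface only genuine problems is to filter the task's log group for error (]E:) and warning (]W:) lines; the log group name below is a placeholder:

    import boto3

    logs = boto3.client("logs")
    resp = logs.filter_log_events(
        logGroupName="dms-tasks-my-replication-instance",  # placeholder log group
        filterPattern='?"]E:" ?"]W:"',                     # match error or warning lines only
    )
    for event in resp["events"]:
        print(event["message"])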

Re-Run the Task:

Sometimes re-running the DMS task can help resolve temporary issues. Make sure to monitor the logs closely during the re-run.
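If you restart it programmatically, requesting a target reload (rather than a resume) forces the full load to run again from scratch (the task ARN is a placeholder):

    import boto3

    dms = boto3.client("dms")
    dms.start_replication_task(
        ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",  # placeholder ARN
        StartReplicationTaskType="reload-target",  # or "resume-processing" to continue CDC
    )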

