
Migration & Transfer

Easily migrate to AWS and see business results faster. We've taken our experience with migrations to AWS and developed a broad set of first- and third-party tools and services to help simplify and accelerate migrations. Our migration tool catalog includes an end-to-end set of tools to help ensure your investment achieves your desired business outcomes.

Recent questions


DMS CDC task (Oracle->S3, binary reader) "hangs" without failing, misses changes

We're running a DMS CDC task against an on-prem Oracle source (binary reader), with an S3 target (which we then load into Snowflake outside of DMS). At seemingly random times, after working fine for a few hours, the replication task simply stops processing Oracle log files: it reports no source CDC changes and zero latency (whereas source latency normally fluctuates between 1 and 5 s). CloudWatch logs continue to show the heartbeat message, but few other messages appear:

```
[SORTER ]I: Task is running (sorter.c:736)
```

After turning on DEBUG logging, we're seeing the following message that appears to be related to this problem:

```
2021-12-27T16:39:50:430405 [TASK_MANAGER ]D: There are 284 swap files of total size 1 Mb. Left to process 5 of size 1 Mb (replicationtask_cmd.c:1639)
```

We also see this error, which pops up much more frequently, so we're including it here, though we're not sure it's related:

```
2021-12-27T16:43:01:094411 [DATA_STRUCTURE ]E: SQLite general error. Code <19>, Message <UNIQUE constraint failed: events.identifier, events.eventType, events.detailMessage>. [1000506] (at_sqlite.c:475)
```

We're running a dms.r5.large instance and can't otherwise find any pattern to when or why the issue appears (again, without any warnings or other errors). Restarting the task fixes the problem and lets it "catch up" to where the updates stalled, though stopping the task takes an unusually long time. Our current workaround is an alert that fires after too long a period of zero latency, plus a Lambda function that stops and restarts the task.
0 answers | 0 votes | 3 views
AWS-User-9831964, asked 20 days ago
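The workaround the asker describes (a CloudWatch alarm on a prolonged run of zero latency that triggers a Lambda to bounce the task) could be sketched roughly as below. This is a minimal sketch, not the asker's actual code: the event shape and task ARN are assumptions, while `stop_replication_task`, `start_replication_task`, and the `replication_task_stopped` waiter are real boto3 DMS operations.

```python
def is_stalled(latency_points, window=6):
    """Return True when the last `window` CDCLatencySource datapoints are
    all zero -- the 'hung task' signature described in the question."""
    recent = latency_points[-window:]
    return len(recent) == window and all(p == 0 for p in recent)


def restart_task(task_arn):
    """Stop the DMS task, wait until it is fully stopped, then resume CDC
    from where it left off."""
    import boto3  # imported here so the pure helper above is testable offline

    dms = boto3.client("dms")
    dms.stop_replication_task(ReplicationTaskArn=task_arn)
    # Stopping can take a while (as noted in the question), so wait for it.
    waiter = dms.get_waiter("replication_task_stopped")
    waiter.wait(
        Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
    )
    dms.start_replication_task(
        ReplicationTaskArn=task_arn,
        StartReplicationTaskType="resume-processing",
    )


def lambda_handler(event, context):
    # Assumed to be invoked by the zero-latency CloudWatch alarm, with the
    # task ARN passed in the event (a hypothetical key, not a DMS standard).
    restart_task(event["task_arn"])
    return {"status": "restarted"}
```

Using `resume-processing` rather than a full restart keeps the checkpoint, so the task catches up from where it stalled instead of reloading.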
