
Questions tagged with AWS Database Migration Service



Using DMS and SCT for extracting/migrating data from Cassandra to S3

I have a customer (IHAC) who is doing scoping for an architecture using DMS and SCT. I had a few questions I was hoping to get answered:

1. Does AWS DMS support data validation with Cassandra as a source? I don't see it here (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.DataValidation), but I do see Cassandra listed as a supported source here: https://aws.amazon.com/about-aws/whats-new/2018/09/aws-dms-aws-sct-now-support-the-migration-of-apache-cassandra-databases/
2. Does AWS DMS support ongoing replication with Cassandra as a source? Reading the docs, it looks like if I wanted to extract data from Cassandra and write it to S3 (using DMS), then post-process that data into a different format (like JSON) and write it to a different S3 bucket, I could do so by attaching a Lambda to the S3 event from the DMS extract and drop (see the sketch after this list). Can you confirm my understanding?
3. How is incremental data loaded on an ongoing basis after the initial load from Cassandra (with DMS)? In the docs it looks like it is stored in S3 in CSV form. Does it write one CSV per source table and keep appending to or updating the existing CSV? Does it create one CSV per row, per batch, etc.? I'm wondering how the event in step 3 would be triggered if I wanted to continuously post-process updates as they come in, in real time, and convert the source data from Cassandra into JSON data stored on S3.
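For illustration, here is a rough sketch of the Lambda post-processing step I have in mind. The bucket names, the handler, and the assumption that the DMS CSV output carries a header row (e.g. via the S3 target's AddColumnName setting) are all hypothetical, not confirmed behavior:

```python
import csv
import io
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

# Hypothetical destination bucket for the post-processed JSON output.
OUTPUT_BUCKET = "my-processed-json-bucket"


def handler(event, context):
    """Triggered by the S3 put event for each CSV object DMS writes;
    converts the CSV rows to JSON Lines and writes them to a second bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Read the CSV object produced by the DMS task.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # Assumes the first line is a header row (column names).
        rows = csv.DictReader(io.StringIO(body))

        # Convert each row to one JSON line.
        json_lines = "\n".join(json.dumps(row) for row in rows)

        # Write the converted object to the output bucket, swapping the extension.
        out_key = key.rsplit(".", 1)[0] + ".json"
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=out_key, Body=json_lines.encode("utf-8"))
```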
0
answers
0
votes
2
views
AWS-User-7019446
asked 3 days ago

How do I transfer my AWS account to another person or business?

I am selling my site and need to transfer the AWS account to the buyer's business (the buyers do not use AWS for their other sites, but they want my site to continue on AWS). I cannot figure out how to do it. Do I need to pay for support, and at what level?

This is Amazon's advice on transferring ownership of a site: https://aws.amazon.com/premiumsupport/knowledge-center/transfer-aws-account/

"To assign ownership of an AWS account and its resources to another party or business, contact AWS Support for help: Sign in to the AWS Management Console as the root user. Open the AWS Support Center. Choose Create case. Enter the details of your case: Choose Account and billing support. For Type, choose Account. For Category, choose Ownership Transfer. For all other fields, enter the details for your case. For Preferred contact language, choose your preferred language. For Contact methods, choose your preferred contact method. Choose Submit. AWS Support will contact you with next steps and help you transfer your account ownership."

I have done all this but have not yet been contacted (24 hours). The text seems to suggest that advice on transferring ownership is a necessary aspect of transferring an AWS root account to a company, and that such advice is provided free by Amazon, since nothing is said about pricing. If, on the other hand, AWS clients must pay for a support package to transfer ownership, which package? The $29 Developer package, the $100 Business package, or some other package? How quickly does AWS respond? How quick is the transfer process? I am finding this very frustrating.
1
answers
0
votes
11
views
Matthew Pollock
asked 4 days ago

Not able to do a one-time load from Postgres to OpenSearch using DMS

I'm trying to migrate existing data from AWS RDS Postgres to AWS-managed OpenSearch, but it is not working: no rows were migrated to OpenSearch. When checking the CloudWatch log I get the error below:

```
Bulk request failed. no retry. TotalRecordCount 4080, FailedRecordCount 4080 [1026400] (elasticsearch_bulk_utils.c:181)
```

The DMS task has the following configuration:

```json
{ "TargetMetadata": { "TargetSchema": "", "SupportLobs": false, "FullLobMode": false, "LobChunkSize": 0, "LimitedSizeLobMode": false, "LobMaxSize": 0, "InlineLobMaxSize": 0, "LoadMaxFileSize": 0, "ParallelLoadThreads": 5, "ParallelLoadBufferSize": 100, "BatchApplyEnabled": false, "TaskRecoveryTableEnabled": false, "ParallelLoadQueuesPerThread": 0, "ParallelApplyThreads": 0, "ParallelApplyBufferSize": 100, "ParallelApplyQueuesPerThread": 0 }, "FullLoadSettings": { "TargetTablePrepMode": "DO_NOTHING", "CreatePkAfterFullLoad": false, "StopTaskCachedChangesApplied": false, "StopTaskCachedChangesNotApplied": false, "MaxFullLoadSubTasks": 8, "TransactionConsistencyTimeout": 600, "CommitRate": 50000 }, "Logging": { "EnableLogging": true, "LogComponents": [ { "Id": "TRANSFORMATION", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "SOURCE_UNLOAD", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "IO", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "TARGET_LOAD", "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG" }, { "Id": "PERFORMANCE", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "SOURCE_CAPTURE", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "SORTER", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "REST_SERVER", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "VALIDATOR_EXT", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "TARGET_APPLY", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "TASK_MANAGER", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "TABLES_MANAGER", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "METADATA_MANAGER", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "FILE_FACTORY", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "COMMON", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "ADDONS", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "DATA_STRUCTURE", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "COMMUNICATION", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "FILE_TRANSFER", "Severity": "LOGGER_SEVERITY_DEFAULT" } ], "CloudWatchLogGroup": null, "CloudWatchLogStream": null }, "ControlTablesSettings": { "historyTimeslotInMinutes": 5, "ControlSchema": "", "HistoryTimeslotInMinutes": 5, "HistoryTableEnabled": true, "SuspendedTablesTableEnabled": false, "StatusTableEnabled": true, "FullLoadExceptionTableEnabled": false }, "StreamBufferSettings": { "StreamBufferCount": 3, "StreamBufferSizeInMB": 8, "CtrlStreamBufferSizeInMB": 5 }, "ChangeProcessingDdlHandlingPolicy": { "HandleSourceTableDropped": true, "HandleSourceTableTruncated": true, "HandleSourceTableAltered": true }, "ErrorBehavior": { "DataErrorPolicy": "LOG_ERROR", "EventErrorPolicy": null, "DataTruncationErrorPolicy": "LOG_ERROR", "DataErrorEscalationPolicy": "SUSPEND_TABLE", "DataErrorEscalationCount": 0, "TableErrorPolicy": "SUSPEND_TABLE", "TableErrorEscalationPolicy": "STOP_TASK", "TableErrorEscalationCount": 0, "RecoverableErrorCount": -1, "RecoverableErrorInterval": 5, "RecoverableErrorThrottling": true, "RecoverableErrorThrottlingMax": 1800, "RecoverableErrorStopRetryAfterThrottlingMax": true, "ApplyErrorDeletePolicy": "IGNORE_RECORD", "ApplyErrorInsertPolicy": "LOG_ERROR", "ApplyErrorUpdatePolicy": "LOG_ERROR", "ApplyErrorEscalationPolicy": "LOG_ERROR", "ApplyErrorEscalationCount": 0, "ApplyErrorFailOnTruncationDdl": false, "FullLoadIgnoreConflicts": true, "FailOnTransactionConsistencyBreached": false, "FailOnNoTablesCaptured": true }, "ChangeProcessingTuning": { "BatchApplyPreserveTransaction": true, "BatchApplyTimeoutMin": 1, "BatchApplyTimeoutMax": 30, "BatchApplyMemoryLimit": 500, "BatchSplitSize": 0, "MinTransactionSize": 1000, "CommitTimeout": 1, "MemoryLimitTotal": 1024, "MemoryKeepTime": 60, "StatementCacheSize": 50 }, "PostProcessingRules": null, "CharacterSetSettings": null, "LoopbackPreventionSettings": null, "BeforeImageSettings": null, "FailTaskWhenCleanTaskResourceFailed": false, "TTSettings": null }
```

The OpenSearch index has the following settings:

```json
{ "settings": { "index.max_ngram_diff": 8, "analysis": { "analyzer": { "my_ngram_analyzer": { "type": "custom", "tokenizer": "standard", "filter": [ "lowercase", "mynGram" ] } }, "filter": { "mynGram": { "type": "nGram", "min_gram": 6, "max_gram": 14, "token_chars": [ "letter", "digit", "whitespace", "symbol" ] } } }, "number_of_shards": 6, "number_of_replicas": 1 }, "mappings": { "properties": { "created_at": { "type": "date" }, "id": { "type": "long" }, "name": { "type": "text", "analyzer": "my_ngram_analyzer", "search_analyzer": "my_ngram_analyzer" }, "phone": { "type": "text", "analyzer": "my_ngram_analyzer", "search_analyzer": "my_ngram_analyzer" }, "updated_at": { "type": "date" } } } }
```

I have tried inserting a sample document using the _bulk API from the OpenSearch console and it worked. This is what I tried:

```
POST _bulk
{"index":{"_index":"contacts"}}
{"name": "name","phone" : "11111111","created_at" : "2021-12-21T12:12:59","updated_at" : "2021-12-21T12:12:59","id": 101}
```
1
answers
0
votes
5
views
BhaveshD
asked 18 days ago

DMS Ignore Duplicate key errors while migrating data between DocumentDB instances

We need to replicate data between two collections in AWS DocumentDB to get rid of duplicate documents. Source and target are AWS DocumentDB instances, version 4.0.0. I've created a unique index in the target collection to allow only non-duplicate values. I needed to create the index before migrating the data to the new target, because our data size is ~1 TB and index creation on the source collection is impossible. Full load fails with the following error; the task status becomes "table error" and no further data is migrated to that collection.

```
2022-03-23T03:13:57 [TARGET_LOAD ]E: Execute bulk failed with errors: 'Multiple write errors: "E11000 duplicate key error collection: reward_users_v4 index: lockId", "E11000 duplicate key error collection: reward_users_v4 index: lockId"' [1020403] (mongodb_apply.c:153)
2022-03-23T03:13:57 [TARGET_LOAD ]E: Failed to handle execute bulk when maximum events per bulk '1000' was reached [1020403] (mongodb_apply.c:433)
```

```
"ErrorBehavior": { "FailOnNoTablesCaptured": false, "ApplyErrorUpdatePolicy": "LOG_ERROR", "FailOnTransactionConsistencyBreached": false, "RecoverableErrorThrottlingMax": 1800, "DataErrorEscalationPolicy": "SUSPEND_TABLE", "ApplyErrorEscalationCount": 1000000000, "RecoverableErrorStopRetryAfterThrottlingMax": true, "RecoverableErrorThrottling": true, "ApplyErrorFailOnTruncationDdl": false, "DataTruncationErrorPolicy": "LOG_ERROR", "ApplyErrorInsertPolicy": "LOG_ERROR", "ApplyErrorEscalationPolicy": "LOG_ERROR", "RecoverableErrorCount": 1000000000, "DataErrorEscalationCount": 1000000000, "TableErrorEscalationPolicy": "SUSPEND_TABLE", "RecoverableErrorInterval": 10, "ApplyErrorDeletePolicy": "IGNORE_RECORD", "TableErrorEscalationCount": 1000000000, "FullLoadIgnoreConflicts": true, "DataErrorPolicy": "LOG_ERROR", "TableErrorPolicy": "SUSPEND_TABLE" },
```

How can I configure AWS DMS to continue even if such duplicate key errors keep happening? I tried modifying TableErrorEscalationCount and many other error counts, but loading always stops at the first duplicate key error. I have 580k documents in the test workload for this task.
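For context, the unique index on the target collection was created roughly as in this sketch (pymongo; the connection string and database name are placeholders, and the lockId field name is taken from the error message above):

```python
from pymongo import ASCENDING, MongoClient

# Placeholder DocumentDB connection string for the *target* cluster
# (TLS CA options omitted for brevity).
client = MongoClient(
    "mongodb://user:password@target-docdb.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&retryWrites=false"
)

collection = client["mydb"]["reward_users_v4"]

# Unique index so that duplicate documents are rejected on insert;
# the field name comes from the "index: lockId" part of the DMS error.
collection.create_index([("lockId", ASCENDING)], unique=True, name="lockId")
```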
1
answers
0
votes
1
views
Raj
asked 2 months ago

How does AWS DMS table selection rules handle overlap or conflict?

Hello! I'm reading through the docs that talk about AWS DMS selection rules and wildcards:

1. https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Selections.html
2. https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Wildcards.html

I have a use case where we're currently pulling in a few dozen unnecessary tables in a snapshot load from the source (Postgres) to the destination (Athena data lake). I want to exclude MOST tables matching a certain pattern (`%_aud`, to be specific), but there's one table matching that same pattern that I want to explicitly keep (let's say it's called `foo_aud`). I'd like to avoid writing explicit exclusion rules for the ~20 or so other tables, so I'm hoping I can do something like this (rules are specified in JSON in a CloudFormation file):

```json
{
  "rules": [
    // ...our other selection rules...
    {
      "rule-type": "selection",
      "rule-id": "21",
      "rule-name": "excludeaudit",
      "object-locator": {
        "schema-name": "myschema",
        "table-name": "%_aud"
      },
      "rule-action": "exclude"
    },
    {
      "rule-type": "selection",
      "rule-id": "22",
      "rule-name": "includefooaudit",
      "object-locator": {
        "schema-name": "myschema",
        "table-name": "foo_aud"
      },
      "rule-action": "include"
    }
  ]
}
```

What will happen in the above example? Will DMS correctly exclude all `%_aud` tables, but then include `foo_aud`? Does the rule ordering matter, e.g. if I swap rules 22 and 21? (How is rule priority managed?) Or is there some other way I can achieve this? Thank you!
1
answers
0
votes
2
views
danpincas
asked 3 months ago

Amazon DMS table mapping transformation

Details:

- **Source**: Postgres
- **Target**: S3
- **Format**: Parquet

Hi guys! Currently, in default mode, the DMS task reads a boolean column in the source and automatically converts it to a char format in the target. In theory, Amazon DMS provides a way to transform data in the task's mapping rules: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Selections.html However, after some attempts the data was still not correctly converted in the target. The following mapping transformations were tried:

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": { "schema-name": "test_schema", "table-name": "%" },
      "rule-action": "include"
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-action": "change-data-type",
      "rule-target": "column",
      "object-locator": { "schema-name": "test_schema", "table-name": "table_test", "column-name": "column1" },
      "data-type": { "type": "int8" }
    },
    {
      "rule-type": "transformation",
      "rule-id": "3",
      "rule-name": "3",
      "rule-action": "change-data-type",
      "rule-target": "column",
      "object-locator": { "schema-name": "test_schema", "table-name": "table_test", "column-name": "doc" },
      "data-type": { "type": "boolean" }
    }
  ]
}
```

I understood that DMS automatically converts from the Boolean type to the char type, per the doc below: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html

OK, great... so the "doc" column is currently a "char" type. But why has the "doc" column not been converted to the boolean type in the target when the transformation above is used? Would it be for this reason? https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.html

"*AWS DMS supports column data type transformations for the following DMS data types: "bytes", "date", "time", "datetime", "int1", "int2", "int4", "int8", "numeric", "real4", "real8", "string", "uint1", "uint2", "uint4", "uint8", "wstring", "blob", "nclob", "clob", "boolean", "set", "list" "map", "tuple*""
2
answers
0
votes
4
views
AWS-User-0898362
asked 4 months ago

AWS DMS + OpenSearch + Index templates

I'm migrating some data from Postgres to OpenSearch, but I'm struggling with migrating a set of coordinates. In Postgres I have `latitude` and `longitude`, and I know DMS does not support geo types when using OpenSearch as a target. I wanted to work around this by using an ingest pipeline and an index template:

```
PUT _ingest/pipeline/my-pipeline-id
{
  "description": "My optional pipeline description",
  "processors": [
    { "set": { "field": "location.lon", "value": "{{{longitude}}}" } },
    { "set": { "field": "location.lat", "value": "{{{latitude}}}" } }
  ]
}
```

```
PUT /_index_template/jobs_template
{
  "index_patterns": [ "jobs*" ],
  "template": {
    "settings": { "index.default_pipeline": "my-pipeline-id" },
    "mappings": {
      "properties": {
        "location": { "type": "geo_point" }
      }
    }
  }
}
```

I first tested having only the pipeline, without the mappings, and that part works. However, when I add a mapping to the template and re-run the migration task, I get the following error:

```
2022-01-20T15:23:01 [TARGET_LOAD ]E: Elasticsearch:FAILED SourceTable:jobs TargetIndex:jobs Operation:INSERT_ENTRY RecordPKKey:93904e3c-5565-4469-94d6-e58fbecdc5a3 RecordPKID:217D5CE32D4EC983FE2C3CFD6048821EA2A95F3658122A80A7EEB3A6088EA89CES HttpCode:400 ESErrorResponse: { "error": { "root_cause": [ { "type": "illegal_argument_exception", "reason": "Rejecting mapping update to [jobs] as the final mapping would have more than 1 type: [_doc, doc]" } ], "type": "illegal_argument_exception", "reason": "Rejecting mapping update to [jobs] as the final mapping would have more than 1 type: [_doc, doc]" }, "status": 400 }
```

Is there any way to make this work? Is there any way to create a geo point so I can perform geo queries in OpenSearch?
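For what it's worth, a minimal sketch of exercising the pipeline by itself via the `_simulate` API (the domain endpoint and credentials are placeholders), which runs a sample document through the pipeline without indexing it:

```python
import requests

# Placeholder OpenSearch domain endpoint and credentials.
ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"
AUTH = ("master-user", "master-password")

# Run a sample document through the ingest pipeline without indexing it,
# to confirm that location.lat / location.lon get populated.
resp = requests.post(
    f"{ENDPOINT}/_ingest/pipeline/my-pipeline-id/_simulate",
    json={"docs": [{"_source": {"latitude": "52.37", "longitude": "4.89"}}]},
    auth=AUTH,
)
print(resp.json())
```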
0
answers
0
votes
2
views
mfrr1118
asked 4 months ago

AWS DMS Postgres to OpenSearch LOB handling

Source: Postgres
Target: OpenSearch

I have a `text` column called `description` in one of my Postgres tables. Per the documentation, this data type is mapped to an `NCLOB`. Since OpenSearch does not offer LOB support, my `description` is missing from my OpenSearch documents. I tried using the mapping rule below, but it does not seem to be doing anything:

```
{ "rule-type": "transformation", "rule-id": "3", "rule-name": "3", "rule-target": "column", "object-locator": { "schema-name": "public", "table-name": "jobs", "column-name": "description" }, "rule-action": "change-data-type", "data-type": { "type": "string", "length": 500 } }
```

When I check the logs I see the following:

```
Column 'description' is unsupported in table def 'public.jobs' since the LOB support is disabled
```

However, I do have LOB support enabled under the task settings:

```
"TargetMetadata": { "ParallelApplyBufferSize": 0, "ParallelApplyQueuesPerThread": 0, "ParallelApplyThreads": 0, "TargetSchema": "", "InlineLobMaxSize": 0, "ParallelLoadQueuesPerThread": 0, "SupportLobs": true, "LobChunkSize": 10, "TaskRecoveryTableEnabled": false, "ParallelLoadThreads": 0, "BatchApplyEnabled": false, "FullLobMode": true, "LimitedSizeLobMode": false, "LoadMaxFileSize": 0, "ParallelLoadBufferSize": 0 },
```

Is that transformation rule supposed to work? Or will any LOB column be skipped because OpenSearch does not have LOB support? Is there any way to make this work? Thanks!
1
answers
0
votes
5
views
mfrr1118
asked 4 months ago

DMS CDC task (Oracle->S3, binary reader) "hangs" without failing, misses changes

We're running DMS against an on-prem Oracle database, with S3 as the destination (which we then load into Snowflake outside of DMS). We're finding that the replication task will, seemingly at random times after working fine for a few hours, simply stop processing Oracle log files: it reports no source CDC changes and reports zero latency (whereas source latency generally fluctuates between 1 and 5 s). CloudWatch logs continue to show the heartbeat message (but few other messages):

```
[SORTER ]I: Task is running (sorter.c:736)
```

After turning on DEBUG logging, we're seeing the following message that appears to be related to this problem:

```
2021-12-27T16:39:50:430405 [TASK_MANAGER ]D: There are 284 swap files of total size 1 Mb. Left to process 5 of size 1 Mb (replicationtask_cmd.c:1639)
```

And this error, which seemingly pops up much more frequently, so I'm including it here, though I'm not sure it's related:

```
2021-12-27T16:43:01:094411 [DATA_STRUCTURE ]E: SQLite general error. Code <19>, Message <UNIQUE constraint failed: events.identifier, events.eventType, events.detailMessage>. [1000506] (at_sqlite.c:475)
```

We're running a dms.r5.large and can't otherwise find any pattern to when or why the issue appears (again, without any warnings, other errors, etc.). Restarting the task (stopping it takes an unusually long time) fixes the problem and causes the task to "catch up" to where the updates stalled. Our current workaround is to set an alert looking for too long a period of zero latency, and then have a Lambda function stop and restart the task.
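Our workaround Lambda is, in rough outline, something like the sketch below. The environment variable, the alarm-driven invocation, and the waiter timing are assumptions; the idea is simply stop, wait until fully stopped, then resume processing:

```python
import os

import boto3

dms = boto3.client("dms")

# Hypothetical: the stalled task's ARN is supplied via an environment variable.
TASK_ARN = os.environ["REPLICATION_TASK_ARN"]


def handler(event, context):
    """Invoked by the zero-latency CloudWatch alarm: stop the stalled CDC task,
    wait until it is fully stopped, then resume processing from where it left off."""
    dms.stop_replication_task(ReplicationTaskArn=TASK_ARN)

    # Wait for the task to reach the 'stopped' state (stopping can take a while).
    waiter = dms.get_waiter("replication_task_stopped")
    waiter.wait(
        Filters=[{"Name": "replication-task-arn", "Values": [TASK_ARN]}],
        WaiterConfig={"Delay": 30, "MaxAttempts": 60},
    )

    # Resume CDC from the last processed position rather than reloading.
    dms.start_replication_task(
        ReplicationTaskArn=TASK_ARN,
        StartReplicationTaskType="resume-processing",
    )
```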
1
answers
1
votes
7
views
AWS-User-9831964
asked 5 months ago

Resolution for Fatal error when using DMS for on-going replication from RDS Postgres to S3

I attempted to configure my RDS Postgres instance for CDC using [Setting up an Amazon RDS PostgreSQL DB instance as a source][1]. I configured the DMS source endpoint to use a specific slotName, first using a slot that already existed in the DB and then using a slot I created with pg_create_logical_replication_slot. After successfully testing the endpoint to ensure connectivity, I started the replication task, which was to load existing data followed by ongoing changes. Both times the replication failed with a fatal error such as the following, and no data (even the existing data that should have loaded) got replicated.

```
Last Error Stream Component Fatal error. Task error notification received from subtask 0, thread 0 [reptask/replicationtask.c:2822] [1020101] Error executing source loop; Stream component failed at subtask 0, component st_0_KVPUHIZBICJJRNGCC32HP5EFGGLIZLRF2YSRJ6Y ; Stream component 'st_0_KVPUHIZBICJJRNGCC32HP5EFGGLIZLRF2YSRJ6Y' terminated [reptask/replicationtask.c:2829] [1020101]
Stop Reason FATAL_ERROR
Error Level FATAL
```

What is the configuration I'm missing to enable CDC from RDS Postgres v11.8? I'm configuring DMS to use the RDS master user. A migrate-existing-data-only DMS task succeeds on this instance (if the slotName configuration is removed). The relevant CloudWatch log entries appear to be these:

```
2021-02-03T22:03:05 [SOURCE_CAPTURE ]I: Slot has plugin 'test_decoding' (postgres_test_decoding.c:233)
2021-02-03T22:03:05 [SOURCE_CAPTURE ]I: Initial positioning requested is 'now' (postgres_endpoint_capture.c:511)
2021-02-03T22:03:05 [SOURCE_CAPTURE ]E: When working with Configured Slotname, user must specify LSN [1020101] (postgres_endpoint_capture.c:517)
2021-02-03T22:03:05 [TASK_MANAGER ]I: Task - W6AUC5OI3DNFMFUQ6ZDEYYNZ3NBSABQK3HFD2WQ is in ERROR state, updating starting status to AR_NOT_APPLICABLE (repository.c:5101)
2021-02-03T22:03:05 [TASK_MANAGER ]E: Task error notification received from subtask 0, thread 0 [1020101] (replicationtask.c:2822)
2021-02-03T22:03:05 [TASK_MANAGER ]E: Error executing source loop; Stream component failed at subtask 0, component st_0_KVPUHIZBICJJRNGCC32HP5EFGGLIZLRF2YSRJ6Y ; Stream component 'st_0_KVPUHIZBICJJRNGCC32HP5EFGGLIZLRF2YSRJ6Y' terminated [1020101] (replicationtask.c:2829)
2021-02-03T22:03:05 [TASK_MANAGER ]E: Task 'W6AUC5OI3DNFMFUQ6ZDEYYNZ3NBSABQK3HFD2WQ' encountered a fatal error (repository.c:5194)
2021-02-03T22:03:05 [SORTER ]I: Final saved task state. Stream position , Source id 0, next Target id 1, confirmed Target id 0, last source timestamp 0 (sorter.c:803)
```

Other items in the log appear to be informational, tracking progress.

[1]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.RDSPostgreSQL.CDC
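For reference, the replication slot I created myself was made roughly as in this sketch (psycopg2; the slot name and connection details are placeholders, and the test_decoding plugin matches what the log reports):

```python
import psycopg2

# Placeholder connection details for the RDS Postgres instance (as the master user).
conn = psycopg2.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="mydb",
    user="master_user",
    password="password",
)
conn.autocommit = True  # avoid running slot creation inside an open transaction

with conn.cursor() as cur:
    # Create the logical replication slot that the DMS endpoint's slotName refers to.
    cur.execute(
        "SELECT pg_create_logical_replication_slot(%s, %s);",
        ("dms_slot", "test_decoding"),
    )
    print(cur.fetchone())
```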
1
answers
0
votes
6
views
Craig_J
asked a year ago

DMS job error: MongoDB to DocumentDB

```
2021-01-14T23:00:59.000+08:00 2021-01-14T15:00:59 [AT_GLOBAL ]I: Task Server Log - 3DD5OUFBJIVNLDZTIWAKBVENSW3SDHDKLOXO7YA (V3.4.3.R1 localhost.localdomain Linux 4.14.59-64.43.amzn1.x86_64 #1 SMP Thu Aug 2 21:29:33 UTC 2018 x86_64 64-bit, PID: 26697) started at Thu Jan 14 15:00:59 2021 (at_logger.c:2700)
2021-01-14T23:00:59.000+08:00 2021-01-14T15:00:59 [DATA_STRUCTURE ]I: SQLite version is 3.31.1 (at_sqlite.c:174)
2021-01-14T23:00:59.000+08:00 2021-01-14T15:00:59 [VALIDATOR ]I: validation_util_class_initialize (validation_util.c:70)
2021-01-14T23:00:59.000+08:00 2021-01-14T15:00:59 [VALIDATOR ]I: Creating Table Def Mutex (validation_util.c:74)
2021-01-14T23:00:59.000+08:00 2021-01-14T15:00:59 [VALIDATOR ]I: ==> Success Creating Table Def Mutex (validation_util.c:82)
2021-01-14T23:00:59.000+08:00 2021-01-14T15:00:59 [COMMON ]D: at_common_is_supported: load_common_wrapper( common_load_locker ) failed (at_common.c:69)
2021-01-14T23:01:00.000+08:00 2021-01-14T15:01:00 [TASK_MANAGER ]I: Execute Request Task '3DD5OUFBJIVNLDZTIWAKBVENSW3SDHDKLOXO7YA' running full load and CDC with flags fresh start with cdcPosition null and stop_at null (replicationtask.c:753)
2021-01-14T23:01:00.000+08:00 2021-01-14T15:01:00 [TASK_MANAGER ]I: BatchApplyPreserveTransaction is set to false, because BatchApplyEnabled is false (replicationtask.c:1100)
2021-01-14T23:01:00.000+08:00 2021-01-14T15:01:00 [TASK_MANAGER ]I: Task '3DD5OUFBJIVNLDZTIWAKBVENSW3SDHDKLOXO7YA' starting full load and CDC in fresh start mode (replicationtask.c:1312)
2021-01-14T23:01:00.000+08:00 2021-01-14T15:01:00 [TASK_MANAGER ]W: The "Transactional apply" option is not available when MongoDB is the target endpoint. The "Batch optimized apply" option will be used instead. (replicationtask.c:1541)
2021-01-14T23:01:00.000+08:00 2021-01-14T15:01:00 [TASK_MANAGER ]I: Task Id: 1df77829-2a03-4d26-ba83-941e13de6edb (replicationtask.c:3279)
2021-01-14T23:01:00.000+08:00 2021-01-14T15:01:00 [TASK_MANAGER ]I: LOB support on the task has been enabled (endpointshell.c:1792)
2021-01-14T23:01:00.000+08:00 2021-01-14T15:01:00 [TASK_MANAGER ]I: Task is running in Limited LOB Mode. MaxLobSize is set to '32768' Bytes (endpointshell.c:1795)
2021-01-14T23:01:00.000+08:00 2021-01-14T15:01:00 [METADATA_MANAGE ]I: Source endpoint 'mongoDB' is using provider syntax 'MongoDB' (provider_syntax_manager.c:622)
2021-01-14T23:01:00.000+08:00 2021-01-14T15:01:00 [METADATA_MANAGE ]I: Connection string: 'mongodb://172.16.3.147:27017/?retryWrites=false' (mongodb_imp.c:408)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [METADATA_MANAGE ]I: MongoDB version: 4.0.1 (mongodb_imp.c:210)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [METADATA_MANAGE ]I: Target endpoint 'mongoDB' is using provider syntax 'MongoDB' (provider_syntax_manager.c:628)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [METADATA_MANAGE ]I: Connection string: 'mongodb://mongo:****@inventorydb.cluster-c3jaqp3qyz8v.us-east-2.docdb.amazonaws.com:27017/?retryWrites=false&authSource=admin&authMechanism=SCRAM-SHA-1' (mongodb_imp.c:408)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [METADATA_MANAGE ]I: MongoDB version: 4.0.0 (mongodb_imp.c:210)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [TASK_MANAGER ]I: Preparing all components (replicationtask.c:1935)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [TASK_MANAGER ]I: Task - 3DD5OUFBJIVNLDZTIWAKBVENSW3SDHDKLOXO7YA is in STARTING state, updating starting status to AR_PREPARING_COMPONENTS (repository.c:5111)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [TASK_MANAGER ]I: Creating threads for all components (replicationtask.c:1968)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [TASK_MANAGER ]I: Task - 3DD5OUFBJIVNLDZTIWAKBVENSW3SDHDKLOXO7YA is in STARTING state, updating starting status to AR_CREATING_TREADS (repository.c:5111)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [TASK_MANAGER ]I: Task - 3DD5OUFBJIVNLDZTIWAKBVENSW3SDHDKLOXO7YA is in STARTING state, updating starting status to AR_CREATING_TABLES_LIST (repository.c:5111)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [TABLES_MANAGER ]I: Calling for get capture table list from the Metadata Manager started. (tasktablesmanager.c:928)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [TABLES_MANAGER ]I: Calling for get capture table list from the Metadata Manager ended. (tasktablesmanager.c:935)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [TASK_MANAGER ]E: No tables were found at task initialization. Either the selected table(s) no longer exist or no match was found for the table selection pattern(s). [1021707] (replicationtask.c:2107)
2021-01-14T23:01:01.000+08:00 2021-01-14T15:01:01 [TASK_MANAGER ]E: Task '3DD5OUFBJIVNLDZTIWAKBVENSW3SDHDKLOXO7YA' failed [1021707] (replicationtask.c:3316)
2021-01-14T23:01:30.000+08:00 2021-01-14T15:01:30 [TASK_MANAGER ]I: Task - 3DD5OUFBJIVNLDZTIWAKBVENSW3SDHDKLOXO7YA is in ERROR state, updating starting status to AR_NOT_APPLICABLE (repository.c:5103)
2021-01-14T23:01:30.000+08:00 2021-01-14T15:01:30 [TASK_MANAGER ]E: Task '3DD5OUFBJIVNLDZTIWAKBVENSW3SDHDKLOXO7YA' encountered a fatal error (repository.c:5196)
2021-01-14T23:01:33.000+08:00 2021-01-14T15:01:33 [METADATA_MANAGE ]I: Destroying mongoc client: '23271072968192' (mongodb_imp.c:1153)
2021-01-14T23:01:33.000+08:00 2021-01-14T15:01:33 [METADATA_MANAGE ]I: Destroying mongoc client: '23271073330768' (mongodb_imp.c:1153)
2021-01-14T23:01:40.000+08:00 2021-01-14T15:01:40 [TASK_MANAGER ]I: Task Management thread terminated abnormally (replicationtask.c:3969)
2021-01-14T23:01:40.000+08:00 2021-01-14T15:01:40 [AT_GLOBAL ]I: Closing log file at Thu Jan 14 15:01:40 2021 (at_logger.c:2548)
```
2
answers
0
votes
0
views
tomtoto
asked a year ago