Questions tagged with Migration & Transfer



DMS: ignore duplicate key errors while migrating data between DocumentDB instances

We need to replicate data between two collections in Amazon DocumentDB to get rid of duplicate documents. Source and target are both Amazon DocumentDB instances, version 4.0.0. I've created a unique index on the target collection so that only non-duplicate values are allowed. I had to create the index before migrating the data to the new target, because our data size is ~1 TB and index creation on the source collection is not feasible. The full load fails with the following error, the task status becomes "table error", and no further data is migrated to that collection.

```
2022-03-23T03:13:57 [TARGET_LOAD ]E: Execute bulk failed with errors: 'Multiple write errors: "E11000 duplicate key error collection: reward_users_v4 index: lockId", "E11000 duplicate key error collection: reward_users_v4 index: lockId"' [1020403] (mongodb_apply.c:153)
2022-03-23T03:13:57 [TARGET_LOAD ]E: Failed to handle execute bulk when maximum events per bulk '1000' was reached [1020403] (mongodb_apply.c:433)
```

These are my task's ErrorBehavior settings:

```
"ErrorBehavior": {
  "FailOnNoTablesCaptured": false,
  "ApplyErrorUpdatePolicy": "LOG_ERROR",
  "FailOnTransactionConsistencyBreached": false,
  "RecoverableErrorThrottlingMax": 1800,
  "DataErrorEscalationPolicy": "SUSPEND_TABLE",
  "ApplyErrorEscalationCount": 1000000000,
  "RecoverableErrorStopRetryAfterThrottlingMax": true,
  "RecoverableErrorThrottling": true,
  "ApplyErrorFailOnTruncationDdl": false,
  "DataTruncationErrorPolicy": "LOG_ERROR",
  "ApplyErrorInsertPolicy": "LOG_ERROR",
  "ApplyErrorEscalationPolicy": "LOG_ERROR",
  "RecoverableErrorCount": 1000000000,
  "DataErrorEscalationCount": 1000000000,
  "TableErrorEscalationPolicy": "SUSPEND_TABLE",
  "RecoverableErrorInterval": 10,
  "ApplyErrorDeletePolicy": "IGNORE_RECORD",
  "TableErrorEscalationCount": 1000000000,
  "FullLoadIgnoreConflicts": true,
  "DataErrorPolicy": "LOG_ERROR",
  "TableErrorPolicy": "SUSPEND_TABLE"
},
```

How can I configure AWS DMS to continue even when such duplicate key errors keep happening? I tried modifying TableErrorEscalationCount and many other error counts, but loading always stops at the first duplicate key error. I have 580k documents in the test workload for this task.
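A minimal sketch of how revised task settings could be pushed with boto3, assuming a placeholder task ARN and region. The specific ErrorBehavior values shown (ApplyErrorInsertPolicy = IGNORE_RECORD, FullLoadIgnoreConflicts = true) are an assumption about what might let the full load skip duplicate-key inserts on a DocumentDB target, not a confirmed fix:

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # region is an assumption

# Hypothetical replication task ARN; replace with the real one.
task_arn = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"

# Read the current task settings, then relax error handling for INSERT errors.
current = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)["ReplicationTasks"][0]["ReplicationTaskSettings"]

settings = json.loads(current)
settings["ErrorBehavior"]["ApplyErrorInsertPolicy"] = "IGNORE_RECORD"
settings["ErrorBehavior"]["FullLoadIgnoreConflicts"] = True

# The task must be stopped before its settings can be modified.
dms.modify_replication_task(
    ReplicationTaskArn=task_arn,
    ReplicationTaskSettings=json.dumps(settings),
)
```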
Answers: 1 | Votes: 0 | Views: 1 | Raj | asked 2 months ago

Aurora Postgres upgrade from 11.13 to 12.8 failing - I assume due to PostGIS

Trying to upgrade our Aurora clusters finally. We recently got them updated to 11.13, but every attempt I make to upgrade to 12.8 fails with **"Database cluster is in a state that cannot be upgraded: Postgres cluster is in a state where pg_upgrade can not be completed successfully."**

Here are the logs, which I think point to the culprit:

```
2022-02-11 22:37:53.514 GMT [5276] ERROR: could not access file "$libdir/postgis-2.4": No such file or directory
2022-02-11 22:37:53.514 GMT [5276] STATEMENT: LOAD '$libdir/postgis-2.4'
2022-02-11 22:37:53.515 GMT [5276] ERROR: could not access file "$libdir/rtpostgis-2.4": No such file or directory
2022-02-11 22:37:53.515 GMT [5276] STATEMENT: LOAD '$libdir/rtpostgis-2.4'
command: "/rdsdbbin/aurora-12.8.12.8.0.5790.0/bin/pg_ctl" -w -D "/rdsdbdata/db" -o "--config_file=/rdsdbdata/config_new/postgresql.conf --survivable_cache_mode=off" -m fast stop >> "pg_upgrade_server.log" 2>&1
waiting for server to shut down....2022-02-11 22:37:53.541 GMT [5185] LOG: received fast shutdown request
2022-02-11 22:37:53.541 GMT [5185] LOG: aborting any active transactions
2022-02-11 22:37:53.542 GMT [5237] LOG: shutting down
................sh: /rdsdbbin/aurora-12.8.12.8.0.5790.0/bin/curl: /apollo/sbin/envroot: bad interpreter: No such file or directory
2022-02-11 22:38:10.305 GMT [5185] FATAL: Can't handle storage runtime process crash
2022-02-11 22:38:10.305 GMT [5185] LOG: database system is shut down
```

I found several other articles that point to issues with PostGIS, so I followed what they suggest, but no luck. Our cluster was running PostGIS 2.4.4, so I went ahead and updated it to 3.1.4, restarted the instance, and validated that it's really using PostGIS 3; that all looks fine. Nothing helps, though.

If anyone has suggestions, I am happy to try them. Thanks, Thomas
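A frequently cited cause of these LOAD '$libdir/postgis-2.4' errors is database objects that still reference the old 2.4 shared libraries, sometimes in databases other than the one that was upgraded. The sketch below only diagnoses this, it does not fix it; the connection parameters are placeholders, and it should be run against every database in the cluster (including template1):

```python
import psycopg2

# Placeholder connection details for the Aurora writer endpoint.
conn = psycopg2.connect(
    host="mycluster.cluster-xxxx.eu-central-1.rds.amazonaws.com",
    dbname="postgres",
    user="postgres",
    password="REPLACE_ME",
)

with conn, conn.cursor() as cur:
    # Which PostGIS-related extensions and versions does this database report?
    cur.execute(
        "SELECT extname, extversion FROM pg_extension WHERE extname LIKE 'postgis%';"
    )
    print(cur.fetchall())

    # Functions still bound to the 2.4 shared libraries would keep pg_upgrade
    # trying to LOAD '$libdir/postgis-2.4' and '$libdir/rtpostgis-2.4'.
    cur.execute(
        """
        SELECT n.nspname, p.proname, p.probin
        FROM pg_proc p
        JOIN pg_namespace n ON n.oid = p.pronamespace
        WHERE p.probin LIKE '%postgis-2.4%' OR p.probin LIKE '%rtpostgis-2.4%';
        """
    )
    for row in cur.fetchall():
        print(row)
```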
Answers: 2 | Votes: 0 | Views: 7 | ThomasG | asked 3 months ago

Why can't I install the Python logging library on an Amazon Linux 2 instance?

Just started a new instance to run my python3 script. I need several libraries, which I can install with pip3 (`pip3 install requests` runs fine), but I can't get the logging library installed. I get this output:

```
$ pip3 install logging
Defaulting to user installation because normal site-packages is not writeable
Collecting logging
  Using cached logging-0.4.9.6.tar.gz (96 kB)
    ERROR: Command errored out with exit status 1:
     command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-_n141mbi/logging/setup.py'"'"'; __file__='"'"'/tmp/pip-install-_n141mbi/logging/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pxlnpi1y
         cwd: /tmp/pip-install-_n141mbi/logging/
    Complete output (48 lines):
    running egg_info
    creating /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info
    writing /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/PKG-INFO
    writing dependency_links to /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/dependency_links.txt
    writing top-level names to /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/top_level.txt
    writing manifest file '/tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/SOURCES.txt'
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-_n141mbi/logging/setup.py", line 13, in <module>
        packages = ["logging"],
      File "/usr/lib64/python3.7/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/usr/lib64/python3.7/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/usr/lib64/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 297, in run
        self.find_sources()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 304, in find_sources
        mm.run()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 535, in run
        self.add_defaults()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 571, in add_defaults
        sdist.add_defaults(self)
      File "/usr/lib64/python3.7/distutils/command/sdist.py", line 226, in add_defaults
        self._add_defaults_python()
      File "/usr/lib/python3.7/site-packages/setuptools/command/sdist.py", line 135, in _add_defaults_python
        build_py = self.get_finalized_command('build_py')
      File "/usr/lib64/python3.7/distutils/cmd.py", line 298, in get_finalized_command
        cmd_obj = self.distribution.get_command_obj(command, create)
      File "/usr/lib64/python3.7/distutils/dist.py", line 857, in get_command_obj
        klass = self.get_command_class(command)
      File "/usr/lib/python3.7/site-packages/setuptools/dist.py", line 768, in get_command_class
        self.cmdclass[command] = cmdclass = ep.load()
      File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2461, in load
        return self.resolve()
      File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2467, in resolve
        module = __import__(self.module_name, fromlist=['__name__'], level=0)
      File "/usr/lib/python3.7/site-packages/setuptools/command/build_py.py", line 16, in <module>
        from setuptools.lib2to3_ex import Mixin2to3
      File "/usr/lib/python3.7/site-packages/setuptools/lib2to3_ex.py", line 13, in <module>
        from lib2to3.refactor import RefactoringTool, get_fixers_from_package
      File "/usr/lib64/python3.7/lib2to3/refactor.py", line 19, in <module>
        import logging
      File "/tmp/pip-install-_n141mbi/logging/logging/__init__.py", line 618
        raise NotImplementedError, 'emit must be implemented '\
                                                              ^
    SyntaxError: invalid syntax
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```

I can't understand why this happens. Does anybody have an idea how to install logging? Thanks
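The failure comes from the PyPI package named "logging", which is an old Python 2-only release (note the Python 2 `raise` syntax in the traceback). In Python 3, the logging module ships with the standard library, so there is nothing to install with pip. A minimal sketch (the logger name is arbitrary):

```python
# logging is part of the Python 3 standard library on Amazon Linux 2;
# no pip install is needed. "pip3 install logging" pulls an obsolete
# Python 2-only distribution, which is why its setup.py fails.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("my_script")  # hypothetical logger name
log.info("logging works without installing anything via pip")
```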
Answers: 1 | Votes: 0 | Views: 5 | AWS-User-3646434 | asked 3 months ago

Importing .ova to EC2 fails with "ClientError: Unknown OS / Missing OS files."

Hi all, I'm trying to convert VMware virtual machines to EC2 instances, but the import always fails with "ClientError: Unknown OS / Missing OS files." Here's my process:

0) I start with a CentOS 7.9 VM on ESXi (supported according to https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html#vmimport-operating-systems). The VM is bootable, has only one volume (MBR, ext4), and is otherwise nothing special.

1) I use ovftool to export the VM from ESXi to .ova:

```
/usr/lib/vmware-ovftool/ovftool --X:logFile=mytestvm-log.txt --X:logLevel=warning --noSSLVerify --powerOffSource vi://$user:$pass@vmhost/mytestvm /volumes/vmexport/mytestvm.ova
Opening VI source: vi://$user@vmhost:443/mytestvm
Opening OVA target: /volumes/vmexport/mytestvm.ova
Writing OVA package: /volumes/vmexport/mytestvm.ova
Transfer Completed
Completed successfully
```

I tried this as well as exporting to OVF and then manually creating a tar file from it; same result. There's nothing in the log that indicates any problem, and I can re-import the .ova file and run it on ESXi, so it doesn't seem to be broken in any way.

2) Upload the .ova to S3:

```
aws s3 cp --sse --acl private /vm/mytestvm.ova s3://mv-ova-test
```

3) Create a presigned URL for the file:

```
aws s3 presign s3://mv-ova-test/mytestvm.ova --expires-in 86400
```

4) Make a .json to describe the import:

```
[
  {
    "Description": "ova-import-test",
    "Format": "ova",
    "Url": "https://mv-ova-test.s3.eu-central-1.amazonaws.com/mytestvm.ova?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={the rest of the URL}"
  }
]
```

5) Start the import:

```
aws ec2 import-image --description "ova import test" --disk-containers "file://mytestvm.json"
{
  "Description": "test ova import",
  "ImportTaskId": "import-ami-06c5fc120d02749d7",
  "Progress": "1",
  "SnapshotDetails": [
    {
      "Description": "mytestvm",
      "DiskImageSize": 0.0,
      "Format": "OVA",
      "Url": "https://mv-ova-test.s3.eu-central-1.amazonaws.com/mytestvm.ova?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={the rest of the URL}",
      "UserBucket": {}
    }
  ],
  "Status": "active",
  "StatusMessage": "pending"
}
```

6) Check the import task:

```
{
  "ImportImageTasks": [
    {
      "Description": "test ova import",
      "ImportTaskId": "import-ami-06c5fc120d02749d7",
      "Progress": "19",
      "SnapshotDetails": [
        {
          "DiskImageSize": 7062419968.0,
          "Format": "VMDK",
          "Status": "active",
          "Url": "https://mv-ova-test.s3.eu-central-1.amazonaws.com/mytestvm.ova?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={the rest of the URL}",
          "UserBucket": {}
        }
      ],
      "Status": "active",
      "StatusMessage": "converting",
      "Tags": []
    }
  ]
}
```

7) And in the end I get:

```
{
  "ImportImageTasks": [
    {
      "Description": "test ova import",
      "ImportTaskId": "import-ami-06c5fc120d02749d7",
      "SnapshotDetails": [
        {
          "DeviceName": "/dev/sde",
          "DiskImageSize": 7062419968.0,
          "Format": "VMDK",
          "Status": "completed",
          "Url": "https://mv-ova-test.s3.eu-central-1.amazonaws.com/mytestvm.ova?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={the rest of the URL}",
          "UserBucket": {}
        }
      ],
      "Status": "deleted",
      "StatusMessage": "ClientError: Unknown OS / Missing OS files.",
      "Tags": []
    }
  ]
}
```

8) Trying to import the vmdk alone makes no difference:

```
{
  "ImportImageTasks": [
    {
      "Description": "test vmdk import",
      "ImportTaskId": "import-ami-0e1dc2522176e0cdf",
      "SnapshotDetails": [
        {
          "Description": "test-vm",
          "DeviceName": "/dev/sde",
          "DiskImageSize": 7062419968.0,
          "Format": "VMDK",
          "Status": "completed",
          "Url": "https://mv-ova-test.s3.eu-central-1.amazonaws.com/mytestvm.ova?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={the rest of the URL}",
          "UserBucket": {}
        }
      ],
      "Status": "deleted",
      "StatusMessage": "ClientError: Unknown OS / Missing OS files.",
      "Tags": []
    }
  ]
}
```

So: what could be the cause of "ClientError: Unknown OS / Missing OS files" if I use a supported OS and the .ova seems to be intact? Would I have the same issue if I tried using MGN for this?

Thanks, Marc
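A minimal sketch of the same import driven through boto3, referencing the object via UserBucket instead of a presigned URL (bucket, key, and region are taken from the question; this assumes the standard "vmimport" service role with read access to the bucket exists). This is an alternative way to submit and poll the task, not a fix for the "Unknown OS" error if the guest image itself is the problem:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Reference the OVA directly in S3 via UserBucket instead of a presigned URL.
resp = ec2.import_image(
    Description="ova import test",
    DiskContainers=[
        {
            "Description": "ova-import-test",
            "Format": "ova",
            "UserBucket": {"S3Bucket": "mv-ova-test", "S3Key": "mytestvm.ova"},
        }
    ],
)
task_id = resp["ImportTaskId"]

# Poll the task; StatusMessage surfaces errors such as
# "ClientError: Unknown OS / Missing OS files." once conversion fails.
task = ec2.describe_import_image_tasks(ImportTaskIds=[task_id])["ImportImageTasks"][0]
print(task["Status"], task.get("StatusMessage", ""))
```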
Answers: 2 | Votes: 0 | Views: 5 | Marc | asked 3 months ago

Does AWS DMS support ARRAY data type for RDS for PostgreSQL on EC2 to Aurora PostgreSQL migration?

I am currently migrating an Amazon RDS for PostgreSQL database from Amazon EC2 to Amazon Aurora PostgreSQL-Compatible Edition. I am using AWS DMS and have encountered the following issue: one of the columns in a particular table stores the values of water pressure measured within a second. This column is an array of decimal numbers (e.g. {2.44, 5.66, 8.55}). I received the following error message from AWS DMS during the migration:

```
1 unsupported data type '_float4' on table 'data1', column 'pressure'
```

Does AWS DMS support the ARRAY data type for double or floating-point numbers? The AWS documentation indicates that arrays can't be migrated; however, further down on the same page it's mentioned that AWS DMS supports arrays from a source RDS for PostgreSQL database and that arrays are mapped to CLOBs in AWS DMS. I'm looking for guidance on whether the ARRAY data type is supported by AWS DMS during migration.

The report returns the error above. Note that the pressure column is declared as real[]:

```
pipeminder=# \d data1
              Table "public.data1"
    Column     |           Type           | Modifiers
---------------+--------------------------+-----------
 device_id     | bigint                   | not null
 timestamp     | timestamp with time zone | not null
 pressure      | real[]                   | not null
 pressure_min  | real                     | not null
 pressure_mean | real                     | not null
 pressure_max  | real                     | not null
 flow          | real                     | not null
Indexes:
    "data1_unique_device_time" UNIQUE CONSTRAINT, btree (device_id, "timestamp")
```
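One way to scope the problem is to enumerate every array-typed column in the source, so those tables can be excluded from the DMS task and copied by other means (for a homogeneous PostgreSQL-to-Aurora move, native tools such as pg_dump preserve array types). A minimal sketch with psycopg2; the connection string is a placeholder:

```python
import psycopg2

# Placeholder connection string for the source PostgreSQL instance.
conn = psycopg2.connect("host=source-host dbname=pipeminder user=postgres password=REPLACE_ME")

with conn, conn.cursor() as cur:
    # List every array-typed column; udt_name '_float4' matches real[]
    # and corresponds to the data type named in the DMS error.
    cur.execute(
        """
        SELECT table_schema, table_name, column_name, udt_name
        FROM information_schema.columns
        WHERE data_type = 'ARRAY'
        ORDER BY table_schema, table_name;
        """
    )
    for schema, table, column, udt in cur.fetchall():
        print(f"{schema}.{table}.{column} ({udt})")
```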
Answers: 1 | Votes: 0 | Views: 3 | Ozioma Uzoegwu | asked 2 years ago

Seamlessly switch between CloudFront distributions using Route 53?

My customer ultimately wants to migrate multiple CloudFront distributions from one AWS account to another, but realizes [it's not quite possible](https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-migrate-account/).

Right now their CloudFront distribution is configured this way:

- The CNAME of the CloudFront distribution is the same as a production customer-facing FQDN (e.g. download-office.customer.com).
- In Route 53, the customer-facing FQDN points to the CloudFront distribution FQDN using a CNAME record (e.g. download-office.customer.com CNAME d11ipsxxxxxxx.cloudfront.net).

What they want to do is introduce an intermediate FQDN between the customer-facing FQDN and the CloudFront distribution FQDN using Route 53 alias records, so the configuration would look like this:

- The CNAME of the CloudFront distribution is the same as an intermediate FQDN (e.g. balancer-download-office.customer.com).
- In Route 53, the intermediate FQDN points to the CloudFront distribution FQDN using an ALIAS record (e.g. balancer-download-office.customer.com ALIAS d11ipsxxxxxxx.cloudfront.net).
- In Route 53, the customer-facing FQDN points to the intermediate FQDN using an ALIAS record (e.g. download-office.customer.com ALIAS balancer-download-office.customer.com).

This is working in their QA environment, but they would like feedback on any issues. However, they are hearing from support engineers that the only way to swap a CloudFront distribution without downtime is specifically [through a support case](https://aws.amazon.com/premiumsupport/knowledge-center/resolve-cnamealreadyexists-error/).

The question is: **what is the best way for my customer to seamlessly switch between CloudFront distributions, and ultimately move to a CloudFront distribution in another account without downtime?**
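A minimal sketch of creating the two alias records described above with boto3. The hosted zone ID and record names are placeholders from the question; Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for CloudFront alias targets:

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0XXXXXXXXXXXX"      # placeholder: customer.com hosted zone
CLOUDFRONT_ZONE_ID = "Z2FDTNDATAQYW2"  # fixed zone ID for CloudFront alias targets

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Point intermediate and customer-facing names at CloudFront",
        "Changes": [
            {   # intermediate FQDN -> CloudFront distribution (alias A record)
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "balancer-download-office.customer.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": CLOUDFRONT_ZONE_ID,
                        "DNSName": "d11ipsxxxxxxx.cloudfront.net",
                        "EvaluateTargetHealth": False,
                    },
                },
            },
            {   # customer-facing FQDN -> intermediate FQDN (alias within the zone)
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "download-office.customer.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": HOSTED_ZONE_ID,
                        "DNSName": "balancer-download-office.customer.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            },
        ],
    },
)
```

Switching distributions later would then be a single UPSERT of the intermediate record's alias target, which is the point of the indirection.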
Answers: 1 | Votes: 0 | Views: 6 | Joshua_S | asked 2 years ago

Migrate MySQL 5.5 to AWS

A customer is running ~60 MySQL databases on-prem (the majority are 5.5, the rest are 5.6) and is using GCP as a DR/backup site (they do not perform local backups in their DC). Their databases vary in size and can reach up to 3 TB. The way they sync data to the DR site is by backing up the databases on-prem with Percona XtraBackup, copying the backup files to the cloud, and importing these backup files into the databases in the cloud. They then use statement-based replication (SBR) to keep the DR site continuously updated. The same mechanism is used to migrate data back to their data center when they need to recover a database from backup or fail back after a DR event. To perform backups, they simply create volume snapshots at different intervals in the cloud (hourly for the first day, daily for the first week, weekly for the first month, and then monthly for 6 months).

They want to migrate the DR/backup site to AWS. They could take a similar approach with MySQL on EC2, but there is not much benefit for them in that, so they are willing to move only if they can use managed services (Aurora MySQL, RDS for MySQL). I need to come up with a solution that addresses their requirements:

- Use Percona XtraBackup and not mysqldump, due to the format of the files and the time it takes to seed a database from dump files
- Continuous replication (to achieve near-zero RPO) using statement-based replication
- Ability to move back to on-prem in a short time when needed
- Use managed services

If I understand correctly, they can use the same mechanism for 5.6 with Aurora: they can import into an Aurora cluster from XtraBackup files and then set up SBR from on-prem to AWS. However, this does not solve the other direction. They can't back up Aurora (or RDS) using XtraBackup, which means they would either need to use mysqldump, which they don't want because they claim it takes far too long, or, as an alternative, create a MySQL instance as a replica of Aurora and use it to create the backup.

Q1: Are there any other options to achieve what they need?
Q2: How long should it take to create an Aurora replica on EC2 from a large database?
Q3: Can the same mechanism be used to migrate from 5.5 on-prem to Aurora 5.6 (and vice versa)?
Q4: Assuming the answer to Q3 is yes, how backwards compatible is 5.6? Will the applications that use 5.5 work against 5.6, or will they need to be rewritten?

Assuming there are issues with replicating from 5.5 to 5.6, they will need to use RDS for MySQL 5.5 on AWS. My understanding is that 5.5 does not support importing XtraBackup files.

Q5: Is there a way to use XtraBackup with 5.5?

If the answer to Q5 is no, they could use mysqldump or DMS to replicate from on-prem to the cloud. The initial seeding will be slower than with XtraBackup, if I understand correctly.

Q6: Is there a preferred solution for the initial seeding? DMS? mysqldump, mydumper, etc.? Can they use SBR after the initial seeding?
Q7: How do they move the data back out of RDS 5.5?
Q8: I guess what I am really asking is: what is the best solution for them for 5.5 (and maybe also 5.6)?
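For the 5.6 path, a minimal sketch of restoring an Aurora MySQL cluster from XtraBackup files staged in S3 using boto3. All identifiers, the region, the bucket/prefix, the IAM role, and the source engine version are placeholders, and the supported engine/version combinations for restore-from-S3 should be checked against current documentation:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

# Restore a new Aurora cluster from Percona XtraBackup files uploaded to S3.
rds.restore_db_cluster_from_s3(
    DBClusterIdentifier="customer-dr-aurora",
    Engine="aurora",                 # MySQL 5.6-compatible Aurora engine name
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    SourceEngine="mysql",
    SourceEngineVersion="5.6.40",    # version of the on-prem source that produced the backup
    S3BucketName="customer-xtrabackup-staging",
    S3Prefix="db42/full-2022-01-01",
    S3IngestionRoleArn="arn:aws:iam::123456789012:role/aurora-s3-restore",
)

# The cluster needs at least one instance before it can serve traffic;
# binlog (statement-based) replication from on-prem is configured afterwards.
rds.create_db_instance(
    DBInstanceIdentifier="customer-dr-aurora-1",
    DBClusterIdentifier="customer-dr-aurora",
    DBInstanceClass="db.r5.large",
    Engine="aurora",
)
```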
Answers: 1 | Votes: 0 | Views: 15 | Uri (EXPERT) | asked 3 years ago