
Questions tagged with Migration & Transfer



ClientError: ENA must be supported with uefi boot-mode

I am trying to run a Windows 10 Home VM in EC2. The plan is to run it in EC2 for about two days so that our partner can access it through RDP, and then to transfer it back to VirtualBox. I prepared the image in VirtualBox, exported an .ova file, uploaded it to S3, and tried to convert it to an AMI with the following command, as described [here](https://docs.aws.amazon.com/vm-import/latest/userguide/what-is-vmimport.html):

```
$ aws ec2 import-image --description "Windows 10 VM" --platform Windows --disk-containers "file://foo/containers.json" --boot-mode uefi --license-type BYOL --architecture x86_64
```

But I get the following error after the import reaches 27% progress:

```
$ aws ec2 describe-import-image-tasks --import-task-ids fooID
{
    "ImportImageTasks": [
        {
            "Architecture": "x86_64",
            "Description": "Windows 10 VM",
            "ImportTaskId": "fooID",
            "LicenseType": "BYOL",
            "Platform": "Windows",
            "SnapshotDetails": [
                {
                    "DeviceName": "/dev/sda1",
                    "DiskImageSize": 8298251264.0,
                    "Format": "VMDK",
                    "Status": "completed",
                    "Url": "s3://foo/Windows-10.ova",
                    "UserBucket": {
                        "S3Bucket": "foo",
                        "S3Key": "Windows-10.ova"
                    }
                }
            ],
            "Status": "deleted",
            "StatusMessage": "ClientError: ENA must be supported with uefi boot-mode",
            "Tags": [],
            "BootMode": "uefi"
        }
    ]
}
```

I have done these steps:

1. [Installed the ENA driver](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/enhanced-networking-ena.html#ena-adapter-driver-versions) (didn't help)
2. [Installed the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) (didn't help)

What should I do? I know for sure that the VM boots using UEFI in VirtualBox. Should I convert it to BIOS boot? Is there anything else I need to install? Google returns only [this thread](https://repost.aws/questions/QUqKQIF1cdQrq6h3hb8yJYiw/does-aws-support-windows-11-ec-2-instances), which is unanswered and is about instance types, so I asked my own question.
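One route that might sidestep the importer's check (a sketch, not a verified fix; the bucket, key, and snapshot ID are placeholders): import the disk as a plain EBS snapshot, then register the AMI yourself with ENA support flagged explicitly. The `ENA must be supported with uefi boot-mode` validation appears to be enforced by `import-image`, not by `register-image`, and the ENA driver already installed inside the guest covers the runtime side.

```
# Import the VMDK (extracted from the .ova, which is a tar archive) as an
# EBS snapshot; no boot-mode or ENA validation happens at this stage
aws ec2 import-snapshot \
    --description "Windows 10 VM disk" \
    --disk-container "Format=VMDK,UserBucket={S3Bucket=foo,S3Key=Windows-10-disk001.vmdk}"

# Once describe-import-snapshot-tasks reports the snapshot as completed,
# register an AMI with UEFI boot mode and ENA support set explicitly
aws ec2 register-image \
    --name "windows-10-byol" \
    --architecture x86_64 \
    --boot-mode uefi \
    --ena-support \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0123456789abcdef0}"
```

A caveat: registering a Windows image this way attaches no Windows billing/licensing metadata, which matches the BYOL intent here but is worth keeping in mind.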
1 answer · 0 votes · 35 views · asked 15 days ago

How is AWS DMS CDC working successfully without the on-premises MSSQL CDC prerequisite config?

We're using DMS for a CDC-only migration to cover the window between a point-in-time restore and the current DB state, i.e., AWS DMS replicates the changes made since the point in time at which we started the bulk load, to bring and keep the source and target systems in sync. We've configured AWS DMS (CDC only) with a source endpoint pointing to on-premises SQL Server 2012 (Standard Edition) and a target endpoint pointing to AWS RDS for SQL Server 2019 (Standard Edition).

Per the AWS CDC prerequisites documentation (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html#CHAP_Source.SQLServer.Prerequisites), running the query below on the on-premises MSSQL 2012 instance returns an error:

```
use uat_testdb
EXEC sys.sp_cdc_enable_db
```

> Msg 22988, Level 16, State 1, Procedure sp_cdc_enable_db, Line 14 [Batch Start Line 0]
> This instance of SQL Server is the Standard Edition (64-bit). Change data capture is only available in the Enterprise, Developer, and Enterprise Evaluation editions.

It looks like the ongoing-replication CDC feature is supported on MSSQL Standard Edition only from 2016 SP1 and later. Could you please suggest a workaround to complete CDC without upgrading our on-premises MSSQL Standard Edition 2012 to Standard Edition 2016 or Enterprise Edition?

**However, without applying these CDC prerequisite settings on the on-premises DB instance, the replication statistics show ongoing sync between the on-premises and RDS instances. Based on our testing, the target RDS instance syncs only Insert and Delete operations from the on-premises source, not Updates. Could you please confirm whether those CDC prerequisites are mandatory, given that replication appears to succeed in DMS, and clarify why we get no error or warning messages in AWS DMS about the missing CDC prerequisite settings?** Thanks.
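To make the observed behavior concrete, the per-table apply counters can be pulled from the CLI (a sketch; the task ARN is a placeholder). This shows exactly how many inserts, updates, and deletes DMS believes it has applied to each table:

```
# Per-table statistics for a replication task; a table whose Updates
# counter stays at 0 while Inserts/Deletes grow confirms that update
# events are not being captured from the source
aws dms describe-table-statistics \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK \
    --query 'TableStatistics[].{Table:TableName,Inserts:Inserts,Updates:Updates,Deletes:Deletes,State:TableState}' \
    --output table
```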
1 answer · 0 votes · 23 views · asked 16 days ago

Patching SLES12 SP3

I want to migrate my S/4HANA server from SLES 12 SP3 to SLES 15 SP3. However, when I try to patch SLES 12 SP3, I get the following options and am not sure how to proceed:

```
Resolving package dependencies...
7 Problems:
Problem: nothing provides python3-pyparsing >= 2.0.2 needed by python3-packaging-17.1-2.7.1.noarch
Problem: nothing provides python-cryptography >= 1.5 needed by python-paramiko-2.4.0-9.10.2.noarch
Problem: nothing provides python3-cryptography >= 1.3.4 needed by python3-urllib3-1.25.10-3.29.1.noarch
Problem: nothing provides SUSEConnect > 0.3.31 needed by cloud-regionsrv-client-10.0.0-52.66.1.noarch
Problem: nothing provides SUSEConnect > 0.3.31 needed by cloud-regionsrv-client-10.0.0-52.66.1.noarch
Problem: nothing provides SUSEConnect > 0.3.31 needed by cloud-regionsrv-client-10.0.0-52.66.1.noarch
Problem: nothing provides python-cryptography >= 1.5 needed by python-paramiko-2.4.0-9.10.2.noarch
Problem: nothing provides python3-pyparsing >= 2.0.2 needed by python3-packaging-17.1-2.7.1.noarch

Solution 1: Following actions will be done:
  deinstallation of python3-setuptools-40.6.2-4.12.23.noarch
  deinstallation of python3-pyOpenSSL-16.0.0-4.17.1.noarch
  deinstallation of python3-cryptography-1.3.1-7.13.4.x86_64
Solution 2: do not install patch:SUSE-SLE-Module-Public-Cloud-12-2020-3594-1.noarch
Solution 3: break python3-packaging-17.1-2.7.1.noarch by ignoring some of its dependencies

Choose from above solutions by number or skip, retry or cancel [1/2/3/s/r/c] (c):
```

Please guide me on how to fix this issue.
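The `nothing provides ...` errors usually mean the resolver cannot see a repository that carries the required packages. Before choosing any of the three solutions, it may be worth confirming the registration and repository state (a sketch; assumes a SLES instance registered against the SUSE public-cloud update infrastructure):

```
# Show registration status and the modules/extensions the system is using
sudo SUSEConnect --status-text

# Refresh and list all repositories, including disabled ones; a missing
# Public Cloud module repo would explain the unresolved dependencies above
sudo zypper refresh
sudo zypper repos -u

# On SUSE PAYG images in AWS, re-registering against the public-cloud
# update infrastructure (via cloud-regionsrv-client) can restore
# missing repositories
sudo registercloudguest --force-new
```

Only once the repositories resolve cleanly does the SP3-to-SLES 15 migration path itself become worth attempting.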
0 answers · 0 votes · 22 views · asked a month ago

DMS: ignore duplicate key errors while migrating data between DocumentDB instances

We need to replicate data between two collections in Amazon DocumentDB to get rid of duplicate documents. Source and target are DocumentDB instances, version 4.0.0. I've created a unique index on the target collection to allow only non-duplicate values. I needed to create the index before migrating the data to the new target, because our data size is ~1 TB and index creation on the source collection is impossible. The full load fails with the following error; the task status becomes a table error, and no more data is migrated to that collection:

```
2022-03-23T03:13:57 [TARGET_LOAD ]E: Execute bulk failed with errors: 'Multiple write errors: "E11000 duplicate key error collection: reward_users_v4 index: lockId", "E11000 duplicate key error collection: reward_users_v4 index: lockId"' [1020403] (mongodb_apply.c:153)
2022-03-23T03:13:57 [TARGET_LOAD ]E: Failed to handle execute bulk when maximum events per bulk '1000' was reached [1020403] (mongodb_apply.c:433)
```

```
"ErrorBehavior": {
    "FailOnNoTablesCaptured": false,
    "ApplyErrorUpdatePolicy": "LOG_ERROR",
    "FailOnTransactionConsistencyBreached": false,
    "RecoverableErrorThrottlingMax": 1800,
    "DataErrorEscalationPolicy": "SUSPEND_TABLE",
    "ApplyErrorEscalationCount": 1000000000,
    "RecoverableErrorStopRetryAfterThrottlingMax": true,
    "RecoverableErrorThrottling": true,
    "ApplyErrorFailOnTruncationDdl": false,
    "DataTruncationErrorPolicy": "LOG_ERROR",
    "ApplyErrorInsertPolicy": "LOG_ERROR",
    "ApplyErrorEscalationPolicy": "LOG_ERROR",
    "RecoverableErrorCount": 1000000000,
    "DataErrorEscalationCount": 1000000000,
    "TableErrorEscalationPolicy": "SUSPEND_TABLE",
    "RecoverableErrorInterval": 10,
    "ApplyErrorDeletePolicy": "IGNORE_RECORD",
    "TableErrorEscalationCount": 1000000000,
    "FullLoadIgnoreConflicts": true,
    "DataErrorPolicy": "LOG_ERROR",
    "TableErrorPolicy": "SUSPEND_TABLE"
},
```

How can I configure AWS DMS to continue even if such duplicate key errors keep happening? I tried modifying TableErrorEscalationCount and many other error counts, but loading always stops at the first duplicate key error. I have 580k documents in the test workload for this task.
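For reference, task settings can only be changed while the task is stopped, and the full settings document is supplied in one piece (a sketch; the ARN and file name are placeholders, and whether any `ErrorBehavior` combination makes a DocumentDB full load skip E11000 bulk write errors is exactly what is in question here):

```
# Stop the task first; modify-replication-task rejects running tasks
aws dms stop-replication-task \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK

# settings.json holds the complete TaskSettings document, including the
# edited "ErrorBehavior" section shown in the question
aws dms modify-replication-task \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK \
    --replication-task-settings file://settings.json
```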
1 answer · 0 votes · 41 views · asked 3 months ago

Aurora Postgres upgrade from 11.13 to 12.8 failing - I assume due to PostGIS

Trying to upgrade our Aurora clusters finally. We recently got them updated to 11.13, but every attempt I make to upgrade to 12.8 fails with **"Database cluster is in a state that cannot be upgraded: Postgres cluster is in a state where pg_upgrade can not be completed successfully."**

Here are the logs, which I think point to the culprit:

```
2022-02-11 22:37:53.514 GMT [5276] ERROR: could not access file "$libdir/postgis-2.4": No such file or directory
2022-02-11 22:37:53.514 GMT [5276] STATEMENT: LOAD '$libdir/postgis-2.4'
2022-02-11 22:37:53.515 GMT [5276] ERROR: could not access file "$libdir/rtpostgis-2.4": No such file or directory
2022-02-11 22:37:53.515 GMT [5276] STATEMENT: LOAD '$libdir/rtpostgis-2.4'
command: "/rdsdbbin/aurora-12.8.12.8.0.5790.0/bin/pg_ctl" -w -D "/rdsdbdata/db" -o "--config_file=/rdsdbdata/config_new/postgresql.conf --survivable_cache_mode=off" -m fast stop >> "pg_upgrade_server.log" 2>&1
waiting for server to shut down....2022-02-11 22:37:53.541 GMT [5185] LOG: received fast shutdown request
2022-02-11 22:37:53.541 GMT [5185] LOG: aborting any active transactions
2022-02-11 22:37:53.542 GMT [5237] LOG: shutting down
................sh: /rdsdbbin/aurora-12.8.12.8.0.5790.0/bin/curl: /apollo/sbin/envroot: bad interpreter: No such file or directory
2022-02-11 22:38:10.305 GMT [5185] FATAL: Can't handle storage runtime process crash
2022-02-11 22:38:10.305 GMT [5185] LOG: database system is shut down
```

I found several other articles that point to issues with PostGIS, so I followed what they suggest, but no luck. Our cluster was running PostGIS 2.4.4, so I went ahead and updated it to 3.1.4, restarted the instance, and validated that it is really using PostGIS 3; that all looks fine. Nothing helps, though. If anyone has suggestions, I am happy to try them. Thanks, Thomas
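Since pg_upgrade runs against every database in the cluster, a leftover PostGIS 2.4 extension in any of them, not just the main application database, can still trigger the `$libdir/postgis-2.4` load failures. A quick way to audit all databases (a sketch; the endpoint and user are placeholders):

```
# List the installed PostGIS extension version in each connectable database
HOST=mycluster.cluster-xyz.eu-central-1.rds.amazonaws.com
for db in $(psql -h "$HOST" -U postgres -At \
                 -c "SELECT datname FROM pg_database WHERE datallowconn"); do
  echo "== $db =="
  psql -h "$HOST" -U postgres -d "$db" \
       -c "SELECT extname, extversion FROM pg_extension WHERE extname LIKE 'postgis%'"
done
```

Any database still reporting extversion 2.4.x would need `ALTER EXTENSION postgis UPDATE;` run in it before the cluster upgrade is retried.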
2 answers · 0 votes · 75 views · asked 5 months ago

Why can't I install the Python logging library on an Amazon Linux 2 instance?

Just started a new instance to run my Python 3 script. I need several libraries, which I can install with pip3 (`pip3 install requests` runs fine), but I can't get the logging library installed. `pip3 install logging` gives this output:

```
Defaulting to user installation because normal site-packages is not writeable
Collecting logging
  Using cached logging-0.4.9.6.tar.gz (96 kB)
    ERROR: Command errored out with exit status 1:
     command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-_n141mbi/logging/setup.py'"'"'; __file__='"'"'/tmp/pip-install-_n141mbi/logging/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pxlnpi1y
         cwd: /tmp/pip-install-_n141mbi/logging/
    Complete output (48 lines):
    running egg_info
    creating /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info
    writing /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/PKG-INFO
    writing dependency_links to /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/dependency_links.txt
    writing top-level names to /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/top_level.txt
    writing manifest file '/tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/SOURCES.txt'
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-_n141mbi/logging/setup.py", line 13, in <module>
        packages = ["logging"],
      File "/usr/lib64/python3.7/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/usr/lib64/python3.7/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/usr/lib64/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 297, in run
        self.find_sources()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 304, in find_sources
        mm.run()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 535, in run
        self.add_defaults()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 571, in add_defaults
        sdist.add_defaults(self)
      File "/usr/lib64/python3.7/distutils/command/sdist.py", line 226, in add_defaults
        self._add_defaults_python()
      File "/usr/lib/python3.7/site-packages/setuptools/command/sdist.py", line 135, in _add_defaults_python
        build_py = self.get_finalized_command('build_py')
      File "/usr/lib64/python3.7/distutils/cmd.py", line 298, in get_finalized_command
        cmd_obj = self.distribution.get_command_obj(command, create)
      File "/usr/lib64/python3.7/distutils/dist.py", line 857, in get_command_obj
        klass = self.get_command_class(command)
      File "/usr/lib/python3.7/site-packages/setuptools/dist.py", line 768, in get_command_class
        self.cmdclass[command] = cmdclass = ep.load()
      File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2461, in load
        return self.resolve()
      File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2467, in resolve
        module = __import__(self.module_name, fromlist=['__name__'], level=0)
      File "/usr/lib/python3.7/site-packages/setuptools/command/build_py.py", line 16, in <module>
        from setuptools.lib2to3_ex import Mixin2to3
      File "/usr/lib/python3.7/site-packages/setuptools/lib2to3_ex.py", line 13, in <module>
        from lib2to3.refactor import RefactoringTool, get_fixers_from_package
      File "/usr/lib64/python3.7/lib2to3/refactor.py", line 19, in <module>
        import logging
      File "/tmp/pip-install-_n141mbi/logging/logging/__init__.py", line 618
        raise NotImplementedError, 'emit must be implemented '\
                                 ^
    SyntaxError: invalid syntax
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```

I can't understand why this happens. Does anybody have an idea how to install logging? Thanks
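For context (not from the thread itself): `logging` has been part of the Python standard library since Python 2.3, and the `logging` package on PyPI is an ancient Python 2 source distribution, which is why its `setup.py` dies on the Python 2-only `raise NotImplementedError, '...'` syntax. A quick check that no pip install is needed at all:

```
# logging ships with CPython itself; this prints the stdlib module path
python3 -c "import logging; logging.basicConfig(level=logging.INFO); \
logging.info('stdlib logging works: %s', logging.__file__)"
```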
1 answer · 0 votes · 94 views · asked 5 months ago

Importing .ova to EC2 fails with "ClientError: Unknown OS / Missing OS files."

Hi all, I'm trying to convert VMware virtual machines to EC2 instances, but the import always fails with "ClientError: Unknown OS / Missing OS files." Here's my process:

0) I start with a CentOS 7.9 VM on ESXi (supported according to https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html#vmimport-operating-systems). The VM is bootable, has only one volume (MBR, ext4), and is otherwise nothing special.

1) I use ovftool to export the VM from ESXi to .ova:

```
/usr/lib/vmware-ovftool/ovftool --X:logFile=mytestvm-log.txt --X:logLevel=warning --noSSLVerify --powerOffSource vi://$user:$pass@vmhost/mytestvm /volumes/vmexport/mytestvm.ova
Opening VI source: vi://$user@vmhost:443/mytestvm
Opening OVA target: /volumes/vmexport/mytestvm.ova
Writing OVA package: /volumes/vmexport/mytestvm.ova
Transfer Completed
Completed successfully
```

I tried this as well as exporting to OVF and then manually creating a tar file from it - same result. There's nothing in the log that indicates any problem. I can re-import the .ova file and run it on ESXi, so it doesn't seem to be broken in any way.

2) Upload the .ova to S3:

```
aws s3 cp --sse --acl private /vm/mytestvm.ova s3://mv-ova-test
```

3) Create a presigned URL for the file:

```
aws s3 presign s3://mv-ova-test/mytestvm.ova --expires-in 86400
```

4) Make a .json to describe the import:

```
[
  {
    "Description": "ova-import-test",
    "Format": "ova",
    "Url": "https://mv-ova-test.s3.eu-central-1.amazonaws.com/mytestvm.ova?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={the rest of the URL}"
  }
]
```

5) Start the import:

```
aws ec2 import-image --description "ova import test" --disk-containers "file://mytestvm.json"
{
    "Description": "test ova import",
    "ImportTaskId": "import-ami-06c5fc120d02749d7",
    "Progress": "1",
    "SnapshotDetails": [
        {
            "Description": "mytestvm",
            "DiskImageSize": 0.0,
            "Format": "OVA",
            "Url": "https://mv-ova-test.s3.eu-central-1.amazonaws.com/mytestvm.ova?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={the rest of the URL}",
            "UserBucket": {}
        }
    ],
    "Status": "active",
    "StatusMessage": "pending"
}
```

6) Check the import task:

```
{
    "ImportImageTasks": [
        {
            "Description": "test ova import",
            "ImportTaskId": "import-ami-06c5fc120d02749d7",
            "Progress": "19",
            "SnapshotDetails": [
                {
                    "DiskImageSize": 7062419968.0,
                    "Format": "VMDK",
                    "Status": "active",
                    "Url": "https://mv-ova-test.s3.eu-central-1.amazonaws.com/mytestvm.ova?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={the rest of the URL}",
                    "UserBucket": {}
                }
            ],
            "Status": "active",
            "StatusMessage": "converting",
            "Tags": []
        }
    ]
}
```

7) And in the end I get:

```
{
    "ImportImageTasks": [
        {
            "Description": "test ova import",
            "ImportTaskId": "import-ami-06c5fc120d02749d7",
            "SnapshotDetails": [
                {
                    "DeviceName": "/dev/sde",
                    "DiskImageSize": 7062419968.0,
                    "Format": "VMDK",
                    "Status": "completed",
                    "Url": "https://mv-ova-test.s3.eu-central-1.amazonaws.com/mytestvm.ova?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={the rest of the URL}",
                    "UserBucket": {}
                }
            ],
            "Status": "deleted",
            "StatusMessage": "ClientError: Unknown OS / Missing OS files.",
            "Tags": []
        }
    ]
}
```

8) Trying to import the vmdk alone makes no difference:

```
{
    "ImportImageTasks": [
        {
            "Description": "test vmdk import",
            "ImportTaskId": "import-ami-0e1dc2522176e0cdf",
            "SnapshotDetails": [
                {
                    "Description": "test-vm",
                    "DeviceName": "/dev/sde",
                    "DiskImageSize": 7062419968.0,
                    "Format": "VMDK",
                    "Status": "completed",
                    "Url": "https://mv-ova-test.s3.eu-central-1.amazonaws.com/mytestvm.ova?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={the rest of the URL}",
                    "UserBucket": {}
                }
            ],
            "Status": "deleted",
            "StatusMessage": "ClientError: Unknown OS / Missing OS files.",
            "Tags": []
        }
    ]
}
```

So: what could be the cause of "ClientError: Unknown OS / Missing OS files" if I use a supported OS and the .ova seems to be intact? Would I have the same issue if I tried using MGN for this? Thanks, Marc
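One diagnostic angle (a sketch; the disk member name inside the archive is a placeholder): an .ova is a plain tar archive, so its contents can be inspected directly. "Unknown OS / Missing OS files" generally means VM Import converted the disk but then could not identify the installed OS on it, so the archive layout and the root filesystem are both worth checking locally.

```
# List the archive members to confirm the .ovf descriptor, manifest, and
# disk image are all present (the descriptor should come first)
tar tvf mytestvm.ova

# Extract the disk so its partition table and root filesystem can be
# examined locally; layouts the importer cannot parse are a common
# cause of this error
tar xvf mytestvm.ova mytestvm-disk1.vmdk
```

If detection keeps failing, `aws ec2 import-snapshot` followed by `aws ec2 register-image` avoids OS inspection entirely, at the cost of supplying the boot configuration yourself; MGN uses agent-based block replication rather than VM Import, so it should not hit this particular check.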
2 answers · 0 votes · 146 views · asked 5 months ago