50 GB is not a very large database. The duration will be determined mostly by the time it takes DMS to transfer the data from its current location to the AWS cloud. How is this database server connected to AWS: VPN over the internet, or AWS Direct Connect (https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html)? What is the throughput of that connection?
This document with performance figures may also interest you: https://docs.aws.amazon.com/dms/latest/sbs/chap-manageddatabases.postgresql-rds-postgresql-performance-comparison.html
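As a rough, back-of-the-envelope illustration of why the link throughput matters (the link speeds below are assumptions, not your actual figures, and protocol overhead, compression, and source read speed are ignored):

```python
# Rough transfer-time estimate for the initial full load.
# 50 GB is the figure mentioned in this thread; link speeds are examples only.
data_gb = 50
data_bits = data_gb * 8 * 10**9  # 50 GB expressed in bits (decimal GB)

for label, mbps in [("100 Mbps VPN", 100), ("1 Gbps Direct Connect", 1000)]:
    seconds = data_bits / (mbps * 10**6)
    print(f"{label}: ~{seconds / 3600:.1f} h ({seconds / 60:.0f} min)")
```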
The optimal solution for this is AWS DMS with PostgreSQL as the source:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html
As for your question, the time will depend on complexity, data size, etc.
Do you have any metrics you can share?
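For reference, here is a minimal boto3 sketch of what the DMS source and target endpoint setup for PostgreSQL can look like; all identifiers, hostnames, and credentials below are placeholders, not values from this thread:

```python
import boto3

dms = boto3.client("dms", region_name="eu-west-1")  # region is an example

# Source endpoint pointing at the existing PostgreSQL server (placeholder values).
source = dms.create_endpoint(
    EndpointIdentifier="pg-source",
    EndpointType="source",
    EngineName="postgres",
    ServerName="onprem-db.example.com",
    Port=5432,
    DatabaseName="appdb",
    Username="dms_user",
    Password="***",
)

# Target endpoint pointing at the RDS for PostgreSQL instance (placeholder values).
target = dms.create_endpoint(
    EndpointIdentifier="pg-target",
    EndpointType="target",
    EngineName="postgres",
    ServerName="mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com",
    Port=5432,
    DatabaseName="appdb",
    Username="dms_user",
    Password="***",
)

print(source["Endpoint"]["EndpointArn"], target["Endpoint"]["EndpointArn"])
```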
Hi,
many thanks for your help. Regarding data size, we have more than 50 GB to migrate.
A large-scale success story with DMS (for Oracle though): https://aws.amazon.com/blogs/industries/edf-completes-ssgroundbreaking-migration-to-run-oracle-utilities-solution-on-amazon-rds/
Hi,
I have more details after running some specific tests. The effective size is about 450 GB. At the moment I have planned a dump of the schema and a dump of the database structure. The connection is AWS Direct Connect. Thanks in advance for your help.
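For what it's worth, a minimal sketch of taking a schema-only dump with pg_dump (host, user, and database names are placeholders):

```python
import subprocess

# Schema-only dump of the source database with pg_dump.
# --schema-only exports table/index/constraint definitions without row data.
# The password can be supplied via the standard PGPASSWORD environment variable.
subprocess.run(
    [
        "pg_dump",
        "--host", "onprem-db.example.com",
        "--username", "postgres",
        "--dbname", "appdb",
        "--schema-only",
        "--file", "appdb_schema.sql",
    ],
    check=True,
)
```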
Yes, if you go with DMS for the initial load of the data and structure, you can then leverage its change data capture features to also pick up changes made after the moment you started the load. That way you get a very smooth migration and a seamless handover.
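As a sketch of that workflow (the ARNs and identifiers below are placeholders), a DMS task created with migration type full-load-and-cdc performs the initial copy and then keeps applying changes captured after the load started:

```python
import json
import boto3

dms = boto3.client("dms", region_name="eu-west-1")  # region is an example

# Migrate every table in the "public" schema; adjust the selection rules as needed.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-public-schema",
            "object-locator": {"schema-name": "public", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# full-load-and-cdc = initial copy plus ongoing replication of changes
# made after the load started, which enables the seamless cutover described above.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="pg-full-load-and-cdc",
    SourceEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:123456789012:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
print(task["ReplicationTask"]["Status"])
```

The task is started separately once created; the CDC phase keeps the target in sync until you are ready to cut over.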