How do I migrate my Database resources to another region using a logical dump?
This article provides general guidance on migrating Database resources from one region to another using a logical dump.
This approach complements the snapshot-based and replication-based methods covered in How do I migrate my Database resources to another region?.
This article is part of a series that provides general guidance on migrating resources from one Region to another. It covers migrating RDS and Aurora databases to another AWS Region using a logical dump and restore through an intermediary EC2 instance. This method is useful when native snapshot copy or replication options aren't available or practical.
For general guidance and a full list of domain and service-specific migration guides, see How do I migrate my resources to another region?
For other domains, see the following resources:
- How do I migrate my Security, Identity and Compliance resources to another region?
- How do I migrate my Compute and Container resources to another region?
- How do I copy an EBS Snapshot to another region using coldsnap and the EBS Direct APIs?
- How do I migrate my Networking and Content Delivery resources to another region?
- How do I migrate my Storage resources to another region?
- How do I migrate my Application Integration resources to another region?
Key Considerations and Preparation
- KMS keys are Regional. With a logical dump, data is read over a database connection, so you don't need to copy or share the source KMS key; the dump EC2 instance only needs to be able to connect to the source RDS endpoint. Encrypt the target instance with a KMS key in the target Region when you create it
- DB credentials must be available on the EC2 instance performing the dump. Use environment variables, credential files, or AWS Secrets Manager (see the example after this list)
- Dump file sizes can be large — size your EC2 instance storage at approximately 1.5x the database storage size
- Use compressed dump formats to reduce storage and transfer time
- Run dumps during low-traffic periods to minimize impact on the source database
- Use `nohup` and background execution (`&`) for long-running dump operations so they survive SSH session disconnects
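For example, here is a minimal sketch of pulling DB credentials from AWS Secrets Manager on the dump EC2 instance. The secret name and JSON keys are assumptions; adjust them to match your own secret, and make sure the instance role allows secretsmanager:GetSecretValue.

```bash
# Hypothetical secret name and JSON layout (username/password keys); adjust to your secret
SECRET_JSON=$(aws secretsmanager get-secret-value \
  --secret-id rds/source-db-credentials \
  --query SecretString --output text)

# Requires jq for JSON parsing
export DB_USER=$(echo "$SECRET_JSON" | jq -r '.username')
export PGPASSWORD=$(echo "$SECRET_JSON" | jq -r '.password')   # or MYSQL_PWD for MySQL/MariaDB
```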
EC2 Instance Connectivity Options
You have two options for where to run the dump, depending on your network setup:
Option A: EC2 in the source region (dump locally, transfer the file)
- Launch an EC2 instance in the same region and VPC as the source RDS instance
- Run the dump command targeting the source RDS endpoint (low-latency, fastest dump)
- Transfer the dump file to the target Region using `aws s3 cp` to an S3 bucket, or `scp`/`rsync` to an EC2 instance in the target Region
- Restore from the dump file on an EC2 instance in the target Region
Option B: EC2 in the target region (dump directly across regions)
- Launch an EC2 instance in the target region
- Establish network connectivity to the source RDS instance via VPC peering, Transit Gateway, or VPN
- Update route tables to enable cross-region routing
- Update the source RDS security group to allow inbound traffic from the target region EC2 CIDR
- Verify connectivity (for example, `telnet <source-rds-endpoint> <port>`)
- Run the dump command directly from the target Region EC2 instance, targeting the source RDS endpoint
- Restore locally in the target region
Note: Option B avoids the file transfer step but the dump will be slower due to cross-region latency. Option A is generally faster for large databases.
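As an illustration, the following is a rough sketch of setting up cross-Region VPC peering for Option B with the AWS CLI. All VPC, route table, and security group IDs, CIDR blocks, Regions, and the port are placeholders, and your routing requirements may differ.

```bash
# Request peering from the target-Region VPC to the source-Region VPC
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-target1111 --peer-vpc-id vpc-source2222 \
  --peer-region eu-west-1 --region us-east-1

# Accept the request in the source Region
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0123456789abcdef0 --region eu-west-1

# Add a route toward the peer CIDR (repeat in the source Region's route table)
aws ec2 create-route --route-table-id rtb-target1111 \
  --destination-cidr-block 10.0.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0 --region us-east-1

# Allow the target-Region EC2 CIDR inbound on the DB port in the source RDS security group
aws ec2 authorize-security-group-ingress --group-id sg-sourcedb11 \
  --protocol tcp --port 5432 --cidr 10.1.0.0/16 --region eu-west-1
```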
EC2 Instance Setup (All Engines)
- Launch an EC2 instance with sufficient storage (1.5x the RDS storage size) in your chosen region
- Attach the instance to a VPC and subnet with connectivity to the relevant RDS endpoint(s)
- Ensure the security group allows outbound traffic to the RDS port (3306 for MySQL/MariaDB, 5432 for PostgreSQL, 1433 for SQL Server, 1521 for Oracle)
- Install the appropriate database client tools (see engine-specific sections below)
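To size the dump volume, you can check the source instance's allocated storage first. A minimal sketch, where the DB identifier and Region are placeholders:

```bash
# Allocated storage of the source DB instance, in GiB
aws rds describe-db-instances \
  --db-instance-identifier <source-db-identifier> \
  --query 'DBInstances[0].AllocatedStorage' \
  --output text --region <source-region>
# Provision roughly 1.5x this value for the EC2 dump volume
```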
PostgreSQL / Aurora PostgreSQL
Install Client Tools
# Amazon Linux 2023 / AL2
sudo dnf install -y postgresql15
# or on Amazon Linux 2 (amazon-linux-extras provides up to PostgreSQL 14):
sudo amazon-linux-extras install postgresql14 -y
# Ubuntu/Debian (the available client version depends on your release or the PGDG repository)
sudo apt-get update && sudo apt-get install -y postgresql-client-15
Configure Credentials
# Option 1: .pgpass file
echo "<source-rds-endpoint>:5432:<dbname>:<username>:<password>" > ~/.pgpass
chmod 600 ~/.pgpass
# Option 2: Environment variable
export PGPASSWORD='<password>'
Dump the Database
nohup pg_dump \
-h <source-rds-endpoint> \
-p 5432 \
-U <username> \
-d <dbname> \
-F c -Z 6 \
--lock-wait-timeout=60000 \
--verbose \
-f /path/to/dump/<dbname>.pgdump \
> /path/to/dump/pgdump.log 2>&1 &
- `-F c` = custom format (compressed, supports parallel restore)
- `-Z 6` = compression level (0-9; 6 is a good balance)
Restore to Target RDS/Aurora PostgreSQL
nohup pg_restore \
-h <target-rds-endpoint> \
-p 5432 \
-U <username> \
-d <dbname> \
--no-owner \
--no-privileges \
--verbose \
/path/to/dump/<dbname>.pgdump \
> /path/to/dump/pgrestore.log 2>&1 &
`--no-owner` and `--no-privileges` avoid errors from source-specific roles that don't exist on the target
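After the restore completes, a quick sanity check is to list the archive contents and compare a few row counts between source and target. A minimal sketch; schema and table names are placeholders:

```bash
# Inspect what the custom-format archive contains
pg_restore --list /path/to/dump/<dbname>.pgdump | head

# Compare row counts for a representative table on source and target
psql -h <source-rds-endpoint> -p 5432 -U <username> -d <dbname> \
  -c "SELECT count(*) FROM <schema>.<table>;"
psql -h <target-rds-endpoint> -p 5432 -U <username> -d <dbname> \
  -c "SELECT count(*) FROM <schema>.<table>;"
```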
Parallel Backup and Restore (Directory Format)
Backup (pg_dump)
/usr/pgsql-17/bin/pg_dump \
-p 5500 \
-h <source_rds_endpoint> \
-U <db_username> \
-d <database_name> \
-Fd \
-Z 9 \
-j 4 \
-f <backup_directory_name> \
> <backup_directory_name>.log 2>&1
**Parameter Breakdown**
| Parameter | Description |
|-----------|-------------|
| `-p 5500` | Port number (adjust to match your RDS configuration) |
| `-h` | Source RDS endpoint hostname |
| `-U` | Database username |
| `-d` | Database name to back up |
| `-Fd` | Directory format — required for parallel dump |
| `-Z 9` | Compression level (0 = none, 9 = maximum). Use 9 for best compression, reduce to 5-7 if CPU is a bottleneck |
| `-j 4` | Number of parallel dump jobs. A common starting point is the number of vCPUs on the dump EC2 instance; lower it if the source database is under load |
| `-f` | Output directory name (will be created automatically) |
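One way to choose the job count is to start from the vCPUs on the dump EC2 instance and reduce it if the source database shows load or connection pressure. A purely illustrative sketch that reuses the command above:

```bash
# Start from the EC2 instance's vCPU count; lower it if the source DB is under load
JOBS=$(nproc)
/usr/pgsql-17/bin/pg_dump -p 5500 -h <source_rds_endpoint> -U <db_username> \
  -d <database_name> -Fd -Z 9 -j "$JOBS" -f <backup_directory_name> \
  > <backup_directory_name>.log 2>&1
```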
Restore (pg_restore)
/usr/pgsql-17/bin/pg_restore \
-p 5500 \
-h <target_rds_endpoint> \
-U <db_username> \
-d <database_name> \
-j 4 \
<backup_directory_name> \
> <backup_directory_name>_restore.log 2>&1
MySQL / Aurora MySQL
Install Client Tools
# Amazon Linux 2023 / AL2
sudo dnf install -y mariadb105
# or for the MySQL community client (requires adding the MySQL community yum repository first):
sudo dnf install -y mysql-community-client
# Ubuntu/Debian
sudo apt-get update && sudo apt-get install -y mysql-client
Dump the Database
nohup mysqldump \
-h <source-rds-endpoint> \
-P 3306 \
-u <username> \
-p'<password>' \
--single-transaction \
--routines \
--triggers \
--set-gtid-purged=OFF \
--verbose \
<dbname> 2> /path/to/dump/mysqldump.log \
| gzip > /path/to/dump/<dbname>.sql.gz &
- `--single-transaction` = consistent snapshot without locking (InnoDB)
- `--routines --triggers` = include stored procedures and triggers
- `--set-gtid-purged=OFF` = avoids GTID conflicts when restoring to a different instance
Restore to Target RDS/Aurora MySQL
nohup gunzip -c /path/to/dump/<dbname>.sql.gz | mysql \
-h <target-rds-endpoint> \
-P 3306 \
-u <username> \
-p'<password>' \
<dbname> \
> /path/to/dump/mysqlrestore.log 2>&1 &
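A simple post-restore check is to compare the table count and a few row counts between source and target. A hedged sketch; the schema and table names are placeholders:

```bash
# Number of tables in the schema on source vs. target
for HOST in <source-rds-endpoint> <target-rds-endpoint>; do
  mysql -h "$HOST" -P 3306 -u <username> -p'<password>' -N -e \
    "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='<dbname>';"
done

# Spot-check a row count on a large table on the target
mysql -h <target-rds-endpoint> -P 3306 -u <username> -p'<password>' -e \
  "SELECT COUNT(*) FROM <dbname>.<tablename>;"
```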
MariaDB
Install Client Tools
# Amazon Linux 2023 / AL2
sudo dnf install -y mariadb105
# Ubuntu/Debian
sudo apt-get update && sudo apt-get install -y mariadb-client
Dump the Database
nohup mariadb-dump \
-h <source-rds-endpoint> \
-P 3306 \
-u <username> \
-p'<password>' \
--single-transaction \
--routines \
--triggers \
--verbose \
<dbname> 2> /path/to/dump/mariadbdump.log \
| gzip > /path/to/dump/<dbname>.sql.gz &
Restore to Target RDS MariaDB
nohup gunzip -c /path/to/dump/<dbname>.sql.gz | mariadb \
-h <target-rds-endpoint> \
-P 3306 \
-u <username> \
-p'<password>' \
<dbname> \
> /path/to/dump/mariadbrestore.log 2>&1 &
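Before restoring (this applies to the MySQL dump as well), you can verify that the compressed dump file is intact, which is especially useful after a cross-Region transfer:

```bash
# Exit code 0 means the gzip archive is not corrupted
gunzip -t /path/to/dump/<dbname>.sql.gz && echo "dump file OK"
```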
Microsoft SQL Server
Install Client Tools
# Amazon Linux 2023 / AL2 — install sqlcmd via Microsoft repo
curl https://packages.microsoft.com/config/rhel/9/prod.repo | sudo tee /etc/yum.repos.d/mssql-release.repo
sudo ACCEPT_EULA=Y dnf install -y mssql-tools18 unixodbc-devel
echo 'export PATH="$PATH:/opt/mssql-tools18/bin"' >> ~/.bashrc && source ~/.bashrc
# Ubuntu/Debian
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl https://packages.microsoft.com/config/ubuntu/22.04/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list
sudo apt-get update && sudo ACCEPT_EULA=Y apt-get install -y mssql-tools18 unixodbc-dev
Dump and Restore
RDS SQL Server does not support native BACKUP DATABASE to local disk. Use one of these approaches:
Option 1: Native backup/restore via S3 (recommended)
-- On source: backup to S3
EXEC msdb.dbo.rds_backup_database
@source_db_name='<dbname>',
@s3_arn_to_backup_to='arn:aws:s3:::<bucket>/<dbname>.bak',
@overwrite_s3_backup_file=1;
-- On target: restore from S3
EXEC msdb.dbo.rds_restore_database
@restore_db_name='<dbname>',
@s3_arn_to_restore_from='arn:aws:s3:::<bucket>/<dbname>.bak';
- Both the source and target RDS instances need an option group with the `SQLSERVER_BACKUP_RESTORE` option enabled and an IAM role that grants S3 access. See Using native backup and restore
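Both procedures run asynchronously. You can poll task progress with the rds_task_status stored procedure, for example from the EC2 instance using sqlcmd (connection details are placeholders; -C trusts the server certificate):

```bash
# Check the status of native backup/restore tasks for the database
sqlcmd -S <source-rds-endpoint>,1433 -U <username> -P '<password>' -C \
  -Q "exec msdb.dbo.rds_task_status @db_name='<dbname>'"
```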
Option 2: Export and import data with the bcp utility
For details, see Importing and exporting SQL Server data using other methods - SQL Server Generate and Publish Scripts Wizard and bcp utility: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Snapshots.html#SQLServer.Procedural.Exporting.SSGPSW
To install the bcp utility, see https://learn.microsoft.com/en-us/sql/tools/bcp-utility?view=sql-server-ver17&tabs=windows
# Export data for each table using BCP from Source
bcp <dbname>.dbo.<tablename> out /path/to/dump/<tablename>.dat \
-S <source-rds-endpoint>,1433 -U <username> -P '<password>' -n
# Import on target
bcp <dbname>.dbo.<tablename> in /path/to/dump/<tablename>.dat \
-S <target-rds-endpoint>,1433 -U <username> -P '<password>' -n
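Because bcp exports one table at a time, you might script the export by listing the tables with sqlcmd first. A rough sketch, assuming SQL authentication and that all tables are in the dbo schema:

```bash
# List user tables (dbo schema assumed) and export each with bcp
TABLES=$(sqlcmd -S <source-rds-endpoint>,1433 -U <username> -P '<password>' -C \
  -d <dbname> -h -1 -W -Q "SET NOCOUNT ON; SELECT name FROM sys.tables")

for T in $TABLES; do
  bcp "<dbname>.dbo.${T}" out "/path/to/dump/${T}.dat" \
    -S <source-rds-endpoint>,1433 -U <username> -P '<password>' -n
done
```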
Option 3: SQL Server Import and Export Wizard
You can use the SQL Server Import and Export Wizard to copy one or more tables, views, or queries from your RDS for SQL Server DB instance to another data store. For details, see Importing and exporting SQL Server data using other methods - SQL Server Import and Export Wizard: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Snapshots.html#SQLServer.Procedural.Exporting.SSIEW
Oracle
Install Client Tools
# Download Oracle Instant Client RPMs from Oracle's website
sudo dnf install -y /path/to/oracle-instantclient-basic-*.rpm
sudo dnf install -y /path/to/oracle-instantclient-tools-*.rpm
sudo dnf install -y /path/to/oracle-instantclient-sqlplus-*.rpm
export LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib:$LD_LIBRARY_PATH
export PATH=/usr/lib/oracle/21/client64/bin:$PATH
Dump and Restore
RDS for Oracle doesn't provide access to the host file system, so you can't write Data Pump export files to a local disk directly. Use the Data Pump integration with S3 instead:
-- Run Data Pump export on source
DECLARE
h1 NUMBER;
BEGIN
h1 := DBMS_DATAPUMP.open('EXPORT','SCHEMA',NULL,'EXPORT_JOB');
DBMS_DATAPUMP.add_file(h1,'export.dmp','DATA_PUMP_DIR');
DBMS_DATAPUMP.add_file(h1,'export.log','DATA_PUMP_DIR',NULL,DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
DBMS_DATAPUMP.metadata_filter(h1,'SCHEMA_EXPR','IN (''<SCHEMA_NAME>'')');
DBMS_DATAPUMP.start_job(h1);
END;
/
-- Upload dump file from DATA_PUMP_DIR to S3
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
p_bucket_name => '<bucket>',
p_prefix => '',
p_s3_prefix => 'oracle-dump/',
p_directory_name => 'DATA_PUMP_DIR') AS TASK_ID FROM DUAL;
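The upload and download run as asynchronous tasks. You can check a task's progress by reading its log with the task ID returned above, for example from the EC2 instance via SQL*Plus (the connection string and task ID are placeholders):

```bash
# Read the S3 task log for the returned TASK_ID
sqlplus <username>/<password>@<source-rds-endpoint>:1521/<dbname> <<'SQL'
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-<task_id>.log'));
SQL
```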
-- On target: download from S3 and import
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => '<bucket>',
p_s3_prefix => 'oracle-dump/',
p_directory_name => 'DATA_PUMP_DIR') AS TASK_ID FROM DUAL;
-- On target: import the dump file using DBMS_DATAPUMP
DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'SCHEMA',
job_name => null
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'export.dmp',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'import.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file
);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''<SCHEMA_NAME>'')');
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
- See Using Oracle Data Pump for full instructions including IAM role and option group setup
Transfer Dump Files Between Regions (Option A Only)
If you dumped locally in the source region, transfer the file to the target region:
# Via S3 (recommended for large files)
aws s3 cp /path/to/dump/<dbname>.dump s3://<bucket-in-target-region>/
# Then on target EC2:
aws s3 cp s3://<bucket-in-target-region>/<dbname>.dump /path/to/dump/
# Via direct SCP (for smaller files)
scp -i <key.pem> /path/to/dump/<dbname>.dump ec2-user@<target-ec2-ip>:/path/to/dump/
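After transferring, confirm the file arrived intact by comparing checksums on the source and target instances, for example:

```bash
# Run on both the source and target EC2 instances and compare the output
sha256sum /path/to/dump/<dbname>.dump
```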
Troubleshooting
- Client tool installation failures: If the EC2 instance cannot reach external package repositories due to network restrictions, work with your network/CloudOps team to allow access or provide packages via an internal mirror or S3-hosted repo
- Connection timeouts: Verify security group rules, route tables, and VPC peering/Transit Gateway configuration. Test with `telnet <endpoint> <port>`
- Authentication errors: Verify credentials in `.pgpass`, environment variables, or Secrets Manager. Ensure the DB user has the required permissions for dump and restore
- Disk space issues: Monitor disk usage during the dump with `df -h`. Consider using compressed formats or streaming directly to S3
- Long-running dumps: Use `nohup` with `&` and redirect output to log files. Monitor progress via the log file with `tail -f`
In collaboration with Vania Toma and Nymus Booysen