Hey Felipe!
I am currently in the same boat you were in 2 months ago, except my Aurora clusters are not serverless. I came to the same conclusion as your comment; in fact, that's what led me to your original question. Have you found a resolution yet?
Yes, there are configuration settings you can adjust to increase the timeout and potentially resolve your issue with DMS Homogeneous Data Migration. Based on your description, it appears that the connection is being dropped due to timeout settings that are too low for your large tables and index creation process.
Here are some recommendations to address this:
- Increase timeout values: Set net_read_timeout, net_write_timeout, and wait_timeout to values greater than 300 seconds (5 minutes) for both the source and target databases. A recommended value would be 3600 seconds (1 hour) or higher, depending on your specific needs. You can set these in the parameter groups for both the source and target databases.
- Verify the changes: After setting these parameters, reboot your database instances for the changes to take effect. You can then verify the new values by connecting to the databases and running:
SHOW VARIABLES LIKE 'net_read_timeout';
SHOW VARIABLES LIKE 'net_write_timeout';
SHOW VARIABLES LIKE 'wait_timeout';
- Use AfterConnectScript: In addition to setting the parameters in the parameter groups, you can use the AfterConnectScript in your DMS task settings. This script runs after the connection is established and can set these timeout values for the session. For example:
{"AfterConnectScript": "SET net_read_timeout=3600; SET net_write_timeout=3600; SET wait_timeout=3600;"}
- Consider other parameters: Depending on your specific setup, you might also want to adjust parameters such as max_allowed_packet, which controls the maximum size of a single packet or any generated/intermediate string (a combined check of these variables is sketched after this list).
- Monitor and adjust: After making these changes, monitor your migration process closely. If you still encounter issues, you may need to increase these timeout values further or investigate other potential bottlenecks in your migration process.
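As a quick sanity check after the reboot (a minimal sketch; the variable list simply mirrors the parameters discussed above, and your values may differ), you can inspect all of the relevant settings on both the source and target in a single query:

SHOW VARIABLES WHERE Variable_name IN ('net_read_timeout', 'net_write_timeout', 'wait_timeout', 'max_allowed_packet');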
Remember that while increasing these timeout values can help with long-running operations like index creation, it's also important to balance this with your overall system performance and resource utilization. If the index creation is taking an extremely long time, you might want to consider optimizing your database schema or breaking down the migration into smaller, more manageable chunks.
By implementing these changes, you should be able to continue using DMS Homogeneous Data Migration for your MySQL to Aurora migration while accommodating the longer processing times required for your large tables and index creation.
Sources
AWS DMS migration Premigration assessment possible bug | AWS re:Post
Using a PostgreSQL database as an AWS DMS source - AWS Database Migration Service
The net_read_timeout is set to 1200 and the net_write_timeout to 600, both of which are longer than five minutes. I believe another configuration needs to be changed.

Hey Istvan,
To complete the migration, I had to schedule downtime for my service. During that window, I dropped the indexes from the source database and migrated the data without CDC. After the migration completed, I recreated the indexes. Unfortunately, I did not find any other alternative.
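In case it helps anyone following the same route, a minimal MySQL sketch of that drop-and-recreate step could look like the following (the table and index names are hypothetical placeholders, not from the original setup):

-- Before the downtime window: drop the heavy secondary index on the source
ALTER TABLE orders DROP INDEX idx_orders_created_at;
-- Run the DMS full-load migration (without CDC) here
-- After the migration completes: recreate the index
ALTER TABLE orders ADD INDEX idx_orders_created_at (created_at);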