How can I troubleshoot high replica lag with Amazon RDS for MySQL?


I want to find the cause of replica lag when I use Amazon Relational Database Service (Amazon RDS) for MySQL.

Short description

Amazon RDS for MySQL uses asynchronous replication. This means that sometimes the replica can't keep up with the primary DB instance. As a result, replication lag can occur.

To monitor replication lag, use an Amazon RDS for MySQL read replica with binary log file position-based replication.

In Amazon CloudWatch, check the ReplicaLag metric for Amazon RDS. The ReplicaLag metric reports the value of the Seconds_Behind_Master field of the SHOW SLAVE STATUS command.

The Seconds_Behind_Master field shows the difference between the current timestamp on the replica DB instance and the original timestamp logged on the primary DB instance for the event that the replica DB instance is currently processing.

MySQL replication works with three threads: the Binlog Dump thread, the IO_THREAD, and the SQL_THREAD. For more information about how these threads function, see the MySQL documentation for Replication threads. If there's a delay in replication, then identify whether the replica IO_THREAD or the replica SQL_THREAD causes the lag. Then, you can identify the root cause of the lag.
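The following sketch shows commands you can run on the replica to inspect these threads. This assumes a standard MySQL client session on the replica; the exact output fields vary by MySQL version.

```sql
-- On the replica: show replication status.
-- Slave_IO_Running and Slave_SQL_Running report whether each thread is alive,
-- and Slave_SQL_Running_State often names the work the SQL_THREAD is doing.
SHOW SLAVE STATUS\G

-- List the replication threads themselves. A caught-up IO_THREAD typically
-- shows a state such as "Waiting for master to send event", while the
-- SQL_THREAD shows the event or query that it is currently applying.
SHOW PROCESSLIST;
```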


To identify which replication thread is lagging, see the following examples:

1.    Run the SHOW MASTER STATUS command on the primary DB instance, and then review the output. The output looks similar to the following:

| File                       | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
|----------------------------|----------|--------------|------------------|-------------------|
| mysql-bin-changelog.066552 |      521 |              |                  |                   |
1 row in set (0.00 sec)

Note: In this example output, the source or primary DB instance writes the binary logs to the file mysql-bin-changelog.066552.

2.    Run the SHOW SLAVE STATUS command on the replica DB instance, and then review the output. The output looks similar to the following examples:

Example 1:

*************************** 1. row ***************************
Master_Log_File: mysql-bin.066548
Read_Master_Log_Pos: 10050480
Relay_Master_Log_File: mysql-bin.066548
Exec_Master_Log_Pos: 10050300
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

In Example 1, Master_Log_File: mysql-bin.066548 indicates that the replica IO_THREAD reads from the binary log file mysql-bin.066548, while the primary DB instance writes the binary logs to the mysql-bin.066552 file. This output shows that the replica IO_THREAD is four binary log files behind. However, Relay_Master_Log_File is also mysql-bin.066548, which indicates that the replica SQL_THREAD reads from the same file as the IO_THREAD. This means that the replica SQL_THREAD is keeping up, but the replica IO_THREAD is lagging.

Example 2:

*************************** 1. row ***************************
Master_Log_File: mysql-bin.066552
Read_Master_Log_Pos: 430
Relay_Master_Log_File: mysql-bin.066530
Exec_Master_Log_Pos: 50360
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

In Example 2, Master_Log_File: mysql-bin.066552 shows that the IO_THREAD is keeping up with the primary DB instance. However, the replica's SQL_THREAD is still executing events from Relay_Master_Log_File: mysql-bin.066530. As a result, the SQL_THREAD lags 22 binary log files behind.
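The binary log arithmetic behind the two examples can be sketched as follows. This is an illustrative helper, not part of any MySQL or AWS tooling; it simply compares the numeric suffixes of the two file names from SHOW SLAVE STATUS.

```python
def binlogs_behind(master_log_file: str, relay_master_log_file: str) -> int:
    """Return how many binary log files the SQL_THREAD trails the IO_THREAD.

    Compares the numeric suffix of Master_Log_File (where the IO_THREAD
    reads) with Relay_Master_Log_File (where the SQL_THREAD executes).
    """
    def suffix(name: str) -> int:
        # "mysql-bin.066530" -> 66530
        return int(name.rsplit(".", 1)[1])

    return suffix(master_log_file) - suffix(relay_master_log_file)

# Example 2 above: IO_THREAD at .066552, SQL_THREAD at .066530
print(binlogs_behind("mysql-bin.066552", "mysql-bin.066530"))  # 22
```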

Normally, the IO_THREAD doesn't cause large replication delays, because it only reads the binary logs from the primary or source instance. However, network connectivity and network latency can affect the speed of the reads between the servers. The replica IO_THREAD might also run more slowly because of high bandwidth usage.

If the replica SQL_THREAD is the source of replication delays, then the following situations might cause a delay:

  • Long-running queries on the primary DB instance
  • Insufficient DB instance class size or storage
  • Parallel queries run on the primary DB instance
  • Binary logs synced to the disk on the replica DB instance
  • Binlog_format on the replica is set to ROW
  • Replica creation lag

Long-running queries on the primary instance

Long-running queries on the primary DB instance that take an equal amount of time to run on the replica DB instance can increase seconds_behind_master. For example, if you initiate a change on the primary instance and it takes an hour to run, then the lag is one hour. If the change also takes one hour to complete on the replica, then the total lag is approximately two hours at the time of completion. This is an expected delay, but you can monitor the slow query log on the primary instance to minimize this lag. You can also identify long-running statements to reduce lag. Then, break long-running statements into smaller statements or transactions.
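One common way to break up a long-running statement is to process rows in small batches so that each transaction replicates quickly. The following is a sketch only; the table and column names (events, created_at) are hypothetical.

```sql
-- Instead of one statement that deletes millions of rows in a single
-- transaction (and replays just as long on the replica):
--   DELETE FROM events WHERE created_at < '2023-01-01';
-- delete in small batches so each transaction commits and replicates quickly.
DELETE FROM events
WHERE created_at < '2023-01-01'
LIMIT 10000;
-- Repeat (for example, from an application loop) until ROW_COUNT() returns 0.
```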

Insufficient DB instance class size or storage

If the replica DB instance class or storage configuration is lower than the primary, then the replica might throttle because of insufficient resources. This is because the replica can't keep up with the changes made on the primary instance. Make sure that the DB instance type of the replica is the same or higher than the primary DB instance. For replication to operate effectively, each read replica requires the same amount of compute and storage resources as the source DB instance. For more information, see DB instance classes.

Parallel queries run on the primary DB instance

If you run queries in parallel on the primary, then they commit on the replica in serial order, because MySQL replication is single threaded (SQL_THREAD) by default. If a high volume of writes to the source DB instance occurs in parallel, then the writes to the read replica are serialized through a single SQL_THREAD. This can cause a lag between the source DB instance and the read replica.

Multi-threaded (parallel) replication is available for MySQL 5.6, MySQL 5.7, and higher versions. For more information about multi-threaded replication, see the MySQL documentation for Binary logging options and variables.

Multi-threaded replication can cause gaps in replication. For example, multi-threaded replication isn't a best practice when skipping the replication errors, because it's difficult to identify which transactions you're skipping. This can lead to gaps in data consistency between the primary and replica DB instances.
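If you decide to turn on multi-threaded replication, the relevant variables can be checked on the replica as shown below. The suggested values in the comments are a starting point under the assumption of a MySQL 5.7-style configuration, not a recommendation for every workload; on Amazon RDS, change these through the replica's DB parameter group, not with SET GLOBAL.

```sql
-- Check the current settings on the replica:
SHOW VARIABLES LIKE 'slave_parallel_workers';
SHOW VARIABLES LIKE 'slave_parallel_type';
-- In the replica's DB parameter group, a typical starting point might be:
--   slave_parallel_type    = LOGICAL_CLOCK
--   slave_parallel_workers = 4   (tune toward the replica's vCPU count)
```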

Binary logs synced to the disk on the replica DB instance

Turning on automatic backups on the replica might result in overhead to sync the binary logs to the disk on the replica. The default value of the parameter sync_binlog is set to 1. If you change this value to 0, then you also turn off the synchronization of the binary log to the disk by the MySQL server. Instead of logging to the disk, the operating system (OS) occasionally flushes the binary logs to disk.

Turning off the binary log synchronization can reduce the performance overhead required to sync the binary logs to disk on every commit. But, if there's a power failure or the OS crashes, then some of the commits might not synchronize to the binary logs. This asynchronization can affect point-in-time restore (PITR) capabilities. For more information, see the MySQL documentation for sync_binlog.
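You can confirm the current setting on the replica before you decide whether to change it. This is a sketch; on Amazon RDS, sync_binlog is changed through the replica's DB parameter group.

```sql
-- Check the current setting on the replica:
SHOW VARIABLES LIKE 'sync_binlog';
-- sync_binlog = 1 (the default) syncs the binary log to disk on every commit.
-- sync_binlog = 0 leaves flushing to the OS, which reduces commit overhead
-- but can lose the most recent commits from the binary log after a crash.
```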

Binlog_format is set to ROW

The SQL thread performs a full table scan on the replica when both of the following are true:

  • The binlog_format on the primary DB instance is set to ROW.
  • The source table is missing a primary key.

This happens because the default value of parameter slave_rows_search_algorithms is TABLE_SCAN,INDEX_SCAN.

To resolve this issue in the short term, change the search algorithm to INDEX_SCAN,HASH_SCAN to reduce the overhead of full table scan. For the long term, it's a best practice to add an explicit primary key to each table.

For more information about the slave-rows-search-algorithms parameter, see the MySQL documentation for slave_rows_search_algorithms.
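The short-term and long-term fixes above can be sketched as follows. The table and column names (events, event_id) are hypothetical; on Amazon RDS, slave_rows_search_algorithms is changed through the replica's DB parameter group.

```sql
-- Short term: prefer hash and index scans over full table scans when the
-- replica applies row-based events. In the DB parameter group, set:
--   slave_rows_search_algorithms = INDEX_SCAN,HASH_SCAN
-- Then confirm on the replica:
SHOW VARIABLES LIKE 'slave_rows_search_algorithms';

-- Long term: give every replicated table an explicit primary key.
ALTER TABLE events ADD PRIMARY KEY (event_id);
```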

Replica creation lag

Amazon RDS takes a DB snapshot to create a read replica of a MySQL primary instance. Then, Amazon RDS restores the snapshot to create a new DB instance (replica) and establishes replication between the two.

Amazon RDS takes time to create new read replicas. After establishing replication, there's a lag for the duration of the time that it takes to create a backup of the primary DB instance. To minimize this lag, create a manual backup before you create the replica. Then, the snapshot taken during replica creation is an incremental backup.

When you restore a read replica from a snapshot, the replica doesn't wait for all the data to transfer from the source. The replica DB instance is available to perform DB operations immediately, and the new volume loads data from the existing Amazon Elastic Block Store (Amazon EBS) snapshot in the background.

Note: For Amazon RDS for MySQL replicas (EBS-based volumes), the replica lag can increase initially. This is because the lazy loading effect can influence the replication performance.

To help mitigate the effects of lazy loading on tables for your newly created read replica, perform operations that involve full-table scans, such as running a mysqldump operation on your read replica for specific tables or databases. This prompts Amazon RDS to prioritize and download all of the backed-up table data from Amazon Simple Storage Service (Amazon S3).
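A minimal way to force a full-table scan from a SQL session is shown below. The table name (events) is hypothetical; run this on the read replica for each table that you want to warm.

```sql
-- A full table scan reads every page of the table, which makes Amazon EBS
-- pull the backing blocks from the snapshot in Amazon S3.
SELECT COUNT(*) FROM events;
-- Alternatively, run mysqldump against the replica and discard the output,
-- which reads every row of the dumped tables the same way.
```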

Also, consider using the "on demand" InnoDB cache warming feature. The InnoDB cache warming feature saves the buffer pool state on disk, in a file named ib_buffer_pool in the InnoDB data directory. This can provide performance gains by dumping the current state of the buffer pool of the primary DB instance before creating the read replica. Then, reload the buffer pool after you create a read replica.
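The variables that control buffer pool dump and load behavior can be inspected as shown below. This is a sketch; on Amazon RDS, these settings are managed through the DB parameter group rather than SET GLOBAL.

```sql
-- Check whether the buffer pool state is saved at shutdown and restored
-- at startup (on Amazon RDS, set these in the DB parameter group):
SHOW VARIABLES LIKE 'innodb_buffer_pool_dump_at_shutdown';
SHOW VARIABLES LIKE 'innodb_buffer_pool_load_at_startup';
-- Check how far a buffer pool load has progressed:
SHOW STATUS LIKE 'Innodb_buffer_pool_load_status';
```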

Related information

Working with MySQL replication in Amazon RDS

Working with MySQL read replicas