Error 1159 (08S01): Failed to send write forwarding request to writer


I have set up write-forwarding replication from an Aurora Serverless v2 instance in Singapore to an instance in Ireland, and while running various tests I discovered a problem when a query from Singapore carries a large (20 MB+) BLOB: the write forward to the Ireland master DB fails! Queries with small BLOBs work fine, but when the BLOB size increases I start seeing an exception with the message "mysql.connector.errors.OperationalError: 1159 (08S01): Failed to send write forwarding request to writer". This error happens with both our web software (Java/Spring/Hibernate) and the simple demo Python app I created to demonstrate the problem (code below), with different connectors (mysqlconnector/Java and python3-mysql.connector), so this is clearly a problem between the replicated DBs.

I have tried to alter various Aurora parameters via DB Cluster Parameter Groups, for example (applied roughly as sketched below):

    aurora_fwd_writer_idle_timeout -> 360
    aurora_fwd_writer_max_connections_pct -> 50
    max_allowed_packet -> 1073741824
    net_read_timeout -> 300
    net_write_timeout -> 300
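
A minimal sketch of how these could be applied with boto3; the cluster parameter group name is a placeholder, and I'm assuming all of these live in the cluster parameter group (instance-level parameters would go through modify_db_parameter_group instead):

    import boto3

    # region/credentials come from the AWS profile; the parameter group
    # name below is a placeholder, not the real one
    rds = boto3.client("rds")

    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="my-cluster-param-group",
        Parameters=[
            # "immediate" works for dynamic parameters; static ones
            # would need "pending-reboot" plus an instance reboot
            {"ParameterName": "aurora_fwd_writer_idle_timeout",
             "ParameterValue": "360", "ApplyMethod": "immediate"},
            {"ParameterName": "aurora_fwd_writer_max_connections_pct",
             "ParameterValue": "50", "ApplyMethod": "immediate"},
            {"ParameterName": "max_allowed_packet",
             "ParameterValue": "1073741824", "ApplyMethod": "immediate"},
            {"ParameterName": "net_read_timeout",
             "ParameterValue": "300", "ApplyMethod": "immediate"},
            {"ParameterName": "net_write_timeout",
             "ParameterValue": "300", "ApplyMethod": "immediate"},
        ],
    )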

None helped. Would anyone have any advice on how to make write forwarding work with "larger" BLOBs?

Steps to reproduce:

  • Create an Aurora MySQL cluster and writer instance, e.g. in Ireland: engine version 3.04.1, Serverless v2, 0.5-6 ACUs
  • Add an AWS Region to create an Aurora MySQL global database and a reader instance, e.g. in Singapore: Serverless v2, 0.5-6 ACUs. Turn on global write forwarding to the Ireland instance.
  • Create a table on the writer (Ireland) DB:
CREATE TABLE `PROJECTKEY_BYTES2` (
  `id` bigint NOT NULL,
  `bytes` longblob
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
  • Configure the minimal demo app below to talk to the Singapore DB endpoint (also change the other connection params like user, password, etc.)
    import mysql.connector
    import os

    if __name__ == "__main__":
        
        # generate a ~15 MB random blob (large enough to reproduce the error)
        blob = bytearray(os.urandom(15000000))
        
        # Initialize database connection
        mydb = mysql.connector.connect(
            host="production-cluster-****-ro-****.ap-southeast-1.rds.amazonaws.com",
            port=3306,
            user="****",
            password='****',
            database="****",
            ssl_disabled=True,
            init_command='SET aurora_replica_read_consistency=SESSION'
        )
        # It does not matter which aurora_replica_read_consistency value is used; session, global, and eventual all cause the error...
        
        mycursor = mydb.cursor()
        insertQuery = ("INSERT INTO PROJECTKEY_BYTES2 (id, bytes) VALUES (%s, %s)")
        insertData = (1, blob)
        
        mycursor.execute(insertQuery, insertData)
        mydb.commit()
        mydb.close()
  • Run this app in an EC2 instance that has access to the reader instance with write forwarding (Singapore). The expected result would be that the INSERT statement with a BLOB succeeds; instead I'm seeing:
    ubuntu@:~$ python3 replicationFailureTester.py 
    Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/mysql/connector/connection_cext.py", line 661, in cmd_query
        self._cmysql.query(
    _mysql_connector.MySQLInterfaceError: Failed to send write forwarding request to writer

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/home/ubuntu/replicationFailureTester.py", line 34, in <module>
        mycursor.execute(insertQuery, insertData)
      File "/usr/lib/python3/dist-packages/mysql/connector/cursor_cext.py", line 374, in execute
        result = self._cnx.cmd_query(
                 ^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/mysql/connector/opentelemetry/context_propagation.py", line 74, in wrapper
        return method(cnx, *args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/mysql/connector/connection_cext.py", line 669, in cmd_query
        raise get_mysql_exception(
    mysql.connector.errors.OperationalError: 1159 (08S01): Failed to send write forwarding request to writer
    ubuntu@:~$

Any help highly appreciated...

CarlS
asked 2 months ago · 196 views
1 Answer

It's likely due to the large BLOB size exceeding limits for replication. Please check the following:

  • Check the configuration of the Aurora Serverless clusters and the parameter groups used for write forwarding. Parameters like aurora_fwd_writer_idle_timeout and aurora_fwd_writer_max_connections_pct control how connections are handled during replication and may need adjustment for large BLOBs.
  • Consider splitting the large blob into smaller chunks before inserting into the database. This will reduce the data size that needs to be replicated between regions (a rough sketch of this approach follows after this list).
  • Temporarily increase the timeout and connection limits to higher values during testing to determine if that helps resolve the issue. But such high values may not be suitable for production.
  • As a last resort, store large blobs separately from the database in a centralized object storage like S3. Reference the blobs from the database tables to avoid replication issues.
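
For the chunking idea, a rough sketch (not tested against write forwarding; the table and column names are taken from your question, and the 1 MB chunk size is only a starting point to tune). The goal is that each individual forwarded statement stays small:

    import mysql.connector

    CHUNK_SIZE = 1_000_000  # ~1 MB per statement; tune below whatever size triggers error 1159

    def insert_blob_chunked(cnx, row_id, blob):
        # The first chunk creates the row; the remaining chunks are
        # appended with CONCAT so every statement stays small.
        cur = cnx.cursor()
        cur.execute(
            "INSERT INTO PROJECTKEY_BYTES2 (id, bytes) VALUES (%s, %s)",
            (row_id, blob[:CHUNK_SIZE]),
        )
        for offset in range(CHUNK_SIZE, len(blob), CHUNK_SIZE):
            cur.execute(
                "UPDATE PROJECTKEY_BYTES2 SET bytes = CONCAT(bytes, %s) WHERE id = %s",
                (blob[offset:offset + CHUNK_SIZE], row_id),
            )
        cnx.commit()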
EXPERT · answered 2 months ago
  • Thanks for your answer! Comments inlined below:

    • "Check the configuration of the Aurora serverless clusters and parameter groups used for write forwarding" Well the params you mentioned I have already changed to MUCH larger values, did not help.
    • "control how connections are handled during replication and may need adjustment for large blobs." I dont have a clue how you would be able to control the connections. If you know, would you be able to give an example with the python demo app I posted?
    • "Consider splitting the large blob into smaller chunks before inserting into the database. This will reduce the data size that needs to be replicated between regions." Yes, it would make each UPDATE or INSERT statement smaller, but the amount of data to be replicated obviously would not reduce... And this would add considerable complexity on the application side, I probably would need to bypass Hibernate altogether in order make chunked INSERTs or UPDATEs with BLOBs.

    Note that our current production system is 2x MySQL 5.7 instances with two-way replication set up manually (i.e. master-master), and there have NOT been any problems with UPDATEs or INSERTs with larger BLOBs. So I figure this is an issue with Aurora. Obviously, the main reason I would like to use Aurora is the ease of write forwarding from the app perspective, just as advertised by AWS, since we would be able to have more readers (with write forwarding) without the extra complexity on the app end.
