How to handle failover timeouts with JDBC - Retrying failed queries


We created a Multi-AZ AWS PostgreSQL (13.4) RDS instance and are using a Java application with Spring + Hibernate and HikariCP as the connection pool manager. If a failover occurs in the middle of a query, the query fails and throws an exception; afterwards HikariCP waits for a new connection and the application continues working. We were hoping there would be a way to retry queries that failed due to a failover, either on the AWS side by caching the query during the downtime or by using the RDS Proxy, but there is no such functionality.

Is there a way to configure RDS, RDS Proxy, PostgreSQL, or another AWS service to retry failed queries, or do we have to handle these errors within our application?
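For context, this is a minimal sketch of what handling it in the application could look like with Spring Retry, assuming the spring-retry dependency is on the classpath and `@EnableRetry` is declared on a configuration class. The service name, the SQL statement, and the exception classes to retry on are illustrative; match them to what your driver and Spring actually throw during a failover.

```java
import org.springframework.dao.DataAccessResourceFailureException;
import org.springframework.dao.TransientDataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Service
public class AccountQueryService {

    private final JdbcTemplate jdbcTemplate;

    public AccountQueryService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Re-run the query when Spring translates the failure into a connection-level
    // exception, e.g. the connection dropped mid-query during a Multi-AZ failover.
    // Tune maxAttempts and the backoff so the retries cover your failover window.
    @Retryable(
            value = { TransientDataAccessException.class, DataAccessResourceFailureException.class },
            maxAttempts = 4,
            backoff = @Backoff(delay = 2000, multiplier = 2.0))
    public Integer countAccounts() {
        // On the next attempt HikariCP hands out a fresh connection once the
        // failed-over instance is reachable again.
        return jdbcTemplate.queryForObject("SELECT count(*) FROM accounts", Integer.class);
    }
}
```

Note that retrying like this is only safe for idempotent operations; for writes you also have to consider whether the failed statement might have committed before the connection dropped.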

Asked 2 years ago · Viewed 986 times

1 Answer

Hello,

It's typically the JDBC driver that should be cluster-aware and smart enough to cache, fail over, and retry. This works the same way across different database engines: database vendors provide special drivers that implement all or some of this functionality.

AWS provides its own JDBC PostgreSQL driver that is cluster-aware, but only when it is used with Aurora (not a regular RDS PostgreSQL database): https://github.com/awslabs/aws-postgresql-jdbc
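To illustrate, a minimal sketch of pointing HikariCP at that driver might look like the following. The driver class name and the `jdbc:postgresql:aws://` URL prefix are taken from that project's documentation and should be verified against the release you use; the endpoint, database name, and credentials are placeholders.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class AwsDriverDataSource {

    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();

        // Drop-in replacement for the community PostgreSQL driver; when used with
        // an Aurora cluster it is aware of the topology and can fail over to the
        // newly promoted writer instead of surfacing a plain connection error.
        config.setDriverClassName("software.aws.rds.jdbc.postgresql.Driver");
        config.setJdbcUrl("jdbc:postgresql:aws://my-cluster.cluster-example.us-east-1.rds.amazonaws.com:5432/mydb");
        config.setUsername("app_user");
        config.setPassword("change-me");

        // Keep pool-level timeouts shorter than your acceptable failover window so
        // dead connections are abandoned and replaced quickly.
        config.setConnectionTimeout(10_000);
        config.setValidationTimeout(5_000);

        return new HikariDataSource(config);
    }
}
```

On regular RDS PostgreSQL, where this cluster-aware driver does not apply, application-level retries such as the sketch under the question remain the practical option.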

AWS
Answered 2 years ago


