MySQL Downtime Issue


Hi,

I experienced downtime on an AWS RDS db.t2.medium MySQL instance from about 02:37 PM to 02:41 PM (UTC+8). Another outage with exactly the same symptoms happened two days ago.

I cannot find any useful logs about the issue. Looking at the CloudWatch metrics at a 1-minute period, I found that during the downtime window, data points are missing for metrics such as Freeable Memory, Write IOPS, Read IOPS, Queue Depth, Write Throughput, Read Throughput, Swap Usage, etc.

The most obvious change in the metrics was Swap Usage, which jumped from 132.867 MB at 02:37 PM to 224.016 MB at 02:41 PM, with the data points from 02:38 PM to 02:40 PM missing.
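
For reference, this is roughly how I pull the 1-minute data points to see the gap; a minimal boto3 sketch, where the instance identifier and the exact date/times are placeholders for my setup:

import boto3
from datetime import datetime, timezone

cloudwatch = boto3.client("cloudwatch", region_name="ap-southeast-1")

# "my-rds-instance" and the date below are placeholders; the window covers
# roughly 02:30-02:45 PM UTC+8, expressed in UTC.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="SwapUsage",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-rds-instance"}],
    StartTime=datetime(2020, 1, 1, 6, 30, tzinfo=timezone.utc),
    EndTime=datetime(2020, 1, 1, 6, 45, tzinfo=timezone.utc),
    Period=60,
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    # SwapUsage is reported in bytes, so convert to MB for readability.
    print(point["Timestamp"], round(point["Average"] / 1024 / 1024, 3), "MB")

The missing minutes simply do not appear in Datapoints, which matches what the console graph shows.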

Three days ago I converted my tables from the MyISAM engine to the InnoDB engine because of table-level locking issues.
CloudWatch shows:
Before changes:
Freeable Memory around 1GB

After changes:
Freeable Memory around 100MB

Is this a sign that I should move to a larger instance, or could another solution, such as a parameter group adjustment or running OPTIMIZE TABLE after the engine change, solve it?
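
In case it matters, this is roughly how I check that the engine conversion covered every table; a minimal pymysql sketch, where the endpoint, credentials, and schema name mydb are placeholders:

import pymysql

# Placeholder endpoint, credentials, and schema name; substitute your own.
conn = pymysql.connect(
    host="my-rds-instance.xxxxxxxx.ap-southeast-1.rds.amazonaws.com",
    user="admin",
    password="secret",
    database="mydb",
)
try:
    with conn.cursor() as cur:
        # List any tables that are still on MyISAM after the conversion.
        cur.execute(
            "SELECT table_name, engine FROM information_schema.tables "
            "WHERE table_schema = %s AND engine <> 'InnoDB'",
            ("mydb",),
        )
        for table_name, engine in cur.fetchall():
            print(table_name, engine)
finally:
    conn.close()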

Class
db.t2.medium

Region & AZ
ap-southeast-1a

2 Answers
Accepted Answer

It sounds like you are short on memory. You can resize innodb_buffer_pool_size to something lower than the default.

When you switched to InnoDB, you started using the buffer pool more than before; the swapping is evidence that you ran out of memory.
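
On RDS the change goes through a custom parameter group attached to the instance (the default group cannot be edited). A minimal boto3 sketch, where the group name custom-mysql-params and the roughly 2.5 GB target are only illustrative values:

import boto3

rds = boto3.client("rds", region_name="ap-southeast-1")

# "custom-mysql-params" is a placeholder for a custom parameter group that is
# already attached to the instance.
# innodb_buffer_pool_size is set in bytes; 2560 * 1024 * 1024 is roughly 2.5 GB.
rds.modify_db_parameter_group(
    DBParameterGroupName="custom-mysql-params",
    Parameters=[
        {
            "ParameterName": "innodb_buffer_pool_size",
            "ParameterValue": str(2560 * 1024 * 1024),
            # Dynamic on MySQL 5.7 and later; older engine versions need
            # ApplyMethod="pending-reboot" and a reboot instead.
            "ApplyMethod": "immediate",
        }
    ],
)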

-Phil

AWS
Moderator
philaws
Answered 5 years ago

Thanks Phil!

Over the past two days I have focused on lowering innodb_buffer_pool_size from its default, trying 2.5 GB and then 2 GB. The server seems more stable than before. The default was 3 GB, and when the issue happened I had already lowered it to 2.75 GB. In my research online I kept seeing advice not to change the RDS default parameter group values, but in my case it seems an adjustment was needed to reduce memory usage and keep the server stable.
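
For anyone following the same path, the effective value can be confirmed from the MySQL side after the parameter group change; a minimal pymysql sketch with a placeholder endpoint and credentials:

import pymysql

# Placeholder endpoint and credentials; substitute your own.
conn = pymysql.connect(
    host="my-rds-instance.xxxxxxxx.ap-southeast-1.rds.amazonaws.com",
    user="admin",
    password="secret",
)
try:
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
        name, value = cur.fetchone()
        # The variable is stored in bytes; convert to GB for readability.
        print(name, round(int(value) / 1024 ** 3, 2), "GB")
finally:
    conn.close()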

Thanks again for pointing this out and confirming the direction!

Answered 5 years ago
