RDS Custom Oracle disk full


Hello. We are currently using AWS RDS Custom for Oracle, and the OS disk has a root partition of only 10 GB even though a 42 GB volume is allocated. To add a cherry on top, there is a 16 GB swap area allocated in between, which makes it harder to expand into the remaining disk space.

I could barely get out of the OS disk-full condition by deleting old log files in /var/log, roughly as sketched below.
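Roughly what that cleanup looked like; the age threshold and journal size are placeholder values, not the exact ones I used:

# see which directories under /var/log are the worst offenders
du -xh --max-depth=1 /var/log | sort -h

# delete rotated/compressed logs older than 30 days (placeholder threshold)
find /var/log -type f -name "*.gz" -mtime +30 -delete

# cap the systemd journal at 200 MB (placeholder size)
journalctl --vacuum-size=200M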

[root@ip- /]# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
devtmpfs                         7.6G     0  7.6G   0% /dev
tmpfs                             16G  7.5G  7.8G  49% /dev/shm
tmpfs                            7.7G  785M  6.9G  11% /run
tmpfs                            7.7G     0  7.7G   0% /sys/fs/cgroup
/dev/nvme0n1p1                   9.8G  7.9G  1.8G  83% /
/dev/nvme1n1                      25G   13G   11G  54% /rdsdbbin
/dev/mapper/dbdata01-lvdbdata01  296G   25G  271G   9% /rdsdbdata
tmpfs                            1.6G     0  1.6G   0% /run/user/61001
tmpfs                            1.6G     0  1.6G   0% /run/user/61005

[root@ip- aws]# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1                   259:0    0   42G  0 disk
├─nvme0n1p1               259:1    0   10G  0 part /
├─nvme0n1p128             259:3    0    1M  0 part
└─nvme0n1p127             259:2    0   16G  0 part [SWAP]
nvme3n1                   259:6    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme2n1                   259:5    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme5n1                   259:8    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme1n1                   259:4    0   25G  0 disk /rdsdbbin
nvme4n1                   259:7    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata

du -sh ./*/

185M  ./bin/
143M  ./boot/
7.5G  ./dev/
43M   ./etc/
124K  ./home/
1.1G  ./lib/
195M  ./lib64/
16K   ./lost+found/
4.0K  ./media/
4.0K  ./mnt/
3.4G  ./opt/
0     ./proc/
13G   ./rdsdbbin/
25G   ./rdsdbdata/
13M   ./root/
785M  ./run/
46M   ./sbin/
4.0K  ./srv/
0     ./sys/
72K   ./tmp/
465M  ./usr/
2.5G  ./var/

What I'm planning to do is allocate a new swap volume, switch it on, remove the old swap partition, and expand the original root partition as much as I can, roughly along the lines of the sketch below.
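In shell terms the plan is roughly the following; /dev/nvme6n1 is a hypothetical device name for the new swap volume, and the resize step assumes the space freed by the old swap partition is contiguous with the root partition:

# enable swap on the newly attached volume (hypothetical device name)
mkswap /dev/nvme6n1
swapon /dev/nvme6n1

# stop using and drop the old 16G swap partition
swapoff /dev/nvme0n1p127
parted /dev/nvme0n1 rm 127

# grow the root partition and its filesystem into the freed space
growpart /dev/nvme0n1 1
resize2fs /dev/nvme0n1p1    # or xfs_growfs / if the root filesystem is XFS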

1. Could this harm any monitoring task in AWS for RDS Custom?

2. Could some good soul at AWS look into the automation scripts for RDS Custom, put swap on a separate volume, and allocate the 42 GB volume fully to the OS? Bad things can happen during OS updates, which will surely need more disk space.

Asked 2 years ago · Viewed 567 times
2 Answers

Modifying the root volume (or any volume, for that matter) will put you outside the support perimeter and automation will be paused. Your best bet is to open a support ticket and get the AWS product team involved; they can provide suggestions on how to proceed.

AWS
Answered 2 years ago

Just a quick update

Thanks, Dev.

I did this entirely online on the EC2 instance. I removed the swap partition and resized the root partition to the full volume size using parted (it reported a mismatched GPT table size, but the fix option worked), then recreated the swap as a file inside the root partition. The root partition now has 42 GB of space, of which 25 GB is currently in use. I had initially set up an external swap partition on another volume, and yes, the dreaded "unsupported configuration" message flashed up. It went away a good chunk of time later, 2-3 hours after I moved the swap to a file inside the root partition.
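For anyone repeating the swap-file part, it was essentially the standard swap-file setup; the 16 GB size and the /swapfile path below are examples, not necessarily what I used:

# create a 16 GB swap file on the enlarged root filesystem (example size and path)
dd if=/dev/zero of=/swapfile bs=1M count=16384
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# make it persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab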

CloudWatch and the log outputs were really cumbersome to use when troubleshooting why the message didn't go away while I was using a swap file. I would ask for a procedure or a script you can run to manually detect what is off from the supported configuration; the logs are far too verbose to find the needle.
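For what it's worth, this is roughly what I checked by hand while the message was up; these are just the obvious places where a configuration drift would show, not an official RDS Custom checklist:

swapon --show      # where swap actually lives now
lsblk              # partition layout versus what the automation expects
cat /etc/fstab     # persistent mounts and swap entries
df -kh /           # root filesystem size and usage after the resize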

Answered 2 years ago
