
Database

AWS features the broadest selection of purpose-built databases for all of your application needs. With 15+ database engines to choose from, hundreds of thousands of customers rely on AWS databases to build use-case-driven, highly scalable, and distributed applications.

Recent questions


RDS Custom for Oracle: OS disk full

Hello,

We are currently using AWS RDS Custom for Oracle, and the OS disk has a root partition of only 10 GB even though the underlying volume is 42 GB. To put a cherry on top, a 16 GB swap partition sits in between, which makes it harder to expand the root partition into the remaining disk space. I could barely get out of the disk-full condition by deleting old log files in /var/log.

[root@ip- /]# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
devtmpfs                         7.6G     0  7.6G   0% /dev
tmpfs                             16G  7.5G  7.8G  49% /dev/shm
tmpfs                            7.7G  785M  6.9G  11% /run
tmpfs                            7.7G     0  7.7G   0% /sys/fs/cgroup
/dev/nvme0n1p1                   9.8G  7.9G  1.8G  83% /
/dev/nvme1n1                      25G   13G   11G  54% /rdsdbbin
/dev/mapper/dbdata01-lvdbdata01  296G   25G  271G   9% /rdsdbdata
tmpfs                            1.6G     0  1.6G   0% /run/user/61001
tmpfs                            1.6G     0  1.6G   0% /run/user/61005

[root@ip- aws]# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1                   259:0    0   42G  0 disk
├─nvme0n1p1               259:1    0   10G  0 part /
├─nvme0n1p128             259:3    0    1M  0 part
└─nvme0n1p127             259:2    0   16G  0 part [SWAP]
nvme3n1                   259:6    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme2n1                   259:5    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme5n1                   259:8    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme1n1                   259:4    0   25G  0 disk /rdsdbbin
nvme4n1                   259:7    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata

du -sh ./*/
185M  ./bin/
143M  ./boot/
7.5G  ./dev/
43M   ./etc/
124K  ./home/
1.1G  ./lib/
195M  ./lib64/
16K   ./lost+found/
4.0K  ./media/
4.0K  ./mnt/
3.4G  ./opt/
0     ./proc/
13G   ./rdsdbbin/
25G   ./rdsdbdata/
13M   ./root/
785M  ./run/
46M   ./sbin/
4.0K  ./srv/
0     ./sys/
72K   ./tmp/
465M  ./usr/
2.5G  ./var/

What I am planning to do is allocate a new swap volume, switch it on, remove the old swap partition, and then expand the original root partition as far as it will go.

1) Could this interfere with any RDS Custom monitoring or automation tasks in AWS?
2) Could any good soul at AWS look into the RDS Custom automation scripts and put swap on a separate volume, so that the 42 GB volume is allocated fully to the OS? Otherwise, bad things can happen during OS updates, which will surely need more disk space.
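For reference, a minimal sketch of the generic Linux steps behind the plan described above. The device names are hypothetical (verify with lsblk after attaching the new volume), the root filesystem type is assumed rather than known, and whether these OS-level changes stay inside the RDS Custom support perimeter is something to confirm with AWS Support; pausing RDS Custom automation first (the AutomationMode setting on ModifyDBInstance) is worth considering so the automation does not act while the host is being modified.

# Hypothetical device names; confirm with lsblk before running anything.
NEW_SWAP_DEV=/dev/nvme6n1        # newly attached EBS volume for swap (assumption)
OLD_SWAP_PART=/dev/nvme0n1p127   # existing 16G swap partition
ROOT_DISK=/dev/nvme0n1

# 1) Move swap to the new volume.
sudo mkswap  "$NEW_SWAP_DEV"
sudo swapon  "$NEW_SWAP_DEV"
sudo swapoff "$OLD_SWAP_PART"
swapon --show                    # confirm only the new device is active as swap
# Also update /etc/fstab so the new swap (and not the old partition) is used after a reboot.

# 2) Reclaim the old swap partition and grow root into the freed space,
#    assuming the freed space is contiguous with the root partition.
sudo parted "$ROOT_DISK" rm 127
sudo growpart "$ROOT_DISK" 1     # online partition grow; needs cloud-utils-growpart
sudo xfs_growfs /                # if the root filesystem is XFS
# sudo resize2fs /dev/nvme0n1p1  # use this instead if it is ext4 (check lsblk -f)

Taking a DB snapshot (or at least an EBS snapshot of the root volume) before touching the partition table is cheap insurance; how the RDS Custom agent reacts to the changed swap layout (question 1) is something only AWS can confirm.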
0 answers | 0 votes | 8 views | asked 7 hours ago

