How do I manage the storage in my Amazon RDS for Oracle DB instance?

7 minute read

I want to use best practices to manage the storage of my Amazon Relational Database Service (Amazon RDS) for Oracle DB instance.

Short description

There are a number of steps that you can take to avoid the storage-full status on your Amazon RDS for Oracle DB instance. When the DB instance reaches the storage-full status, logins to the instance hang and database growth stops. Until you increase the free storage, any attempt to extend a database file fails with an error similar to the following one:

"ORA-27061: waiting for async I/Os failed
Error: Linux-x86_64 Error: 28: No space left on device"


Monitor the FreeStorageSpace metric

A rapid drop in FreeStorageSpace indicates that your database increased its storage usage and might enter the storage-full status. If FreeStorageSpace drops and your DB instance uses more than 90% of its allocated storage, then the database encounters LOW_STORAGE events. If you turned on storage autoscaling, then your instance automatically adds storage space.

Create Amazon CloudWatch alarms to monitor the FreeStorageSpace metric and notify you when the storage space reaches its threshold value. 

Review any autoextended tablespaces

The default tablespace type for RDS for Oracle DB instances is bigfile with AUTOEXTEND turned on. The maximum size of a bigfile tablespace is 16 TiB. A tablespace grows to either the maximum quota for the tablespace or to the allocated storage for your DB instance. If your allocated storage is full, then your instance enters the storage-full status, and your tablespaces can't extend. For more information, see Autoextend tablespace on the Oracle website.

It's a best practice to plan tablespace extension based on your application's needs. To avoid storage-full issues before a bulk load, resize tablespaces or allocate more space. If you scale your storage, then DB instance performance usually isn't affected and no outages occur. After you modify the storage size, your DB instance is in the storage-optimization state.
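To check autoextend settings and pre-size a tablespace before a bulk load, you can run commands similar to the following sketch. The tablespace name users and the 50G target are placeholders for illustration, and the RESIZE syntax assumes a bigfile tablespace:

```sql
-- Review autoextend settings and the current and maximum size of each data file
SELECT tablespace_name,
       autoextensible,
       ROUND(bytes / 1024 / 1024 / 1024, 2)    AS current_gb,
       ROUND(maxbytes / 1024 / 1024 / 1024, 2) AS max_gb
  FROM dba_data_files
 ORDER BY bytes DESC;

-- Pre-size a bigfile tablespace ahead of a bulk load (placeholder name and size)
ALTER TABLESPACE users RESIZE 50G;
```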

Note: Before you modify your DB instance, see Working with storage for Amazon RDS DB instances.

Check the Oracle file size quotas for Amazon RDS

The maximum size of a single file in your RDS for Oracle DB instances is 16 TiB. If you resize a data file in a bigfile tablespace beyond this quota, then you receive an error similar to the following one:

"ORA-01237: cannot extend datafile 6
ORA-01110: data file 6: '/rdsdbdata/db/mydir/datafile/myfile.dbf'
ORA-27059: could not reduce file size
Linux-x86_64 Error: 27: File too large
Additional information: 2"
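To see how close your largest data files are to the 16 TiB quota, a query over DBA_DATA_FILES similar to the following can help, sorted so that the largest file appears first:

```sql
-- List data files by size, largest first
SELECT file_name,
       tablespace_name,
       ROUND(bytes / 1024 / 1024 / 1024, 2) AS size_gb
  FROM dba_data_files
 ORDER BY bytes DESC;
```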

Resolve the "Storage quota exceeded" error

If the allocated storage at your AWS account level reaches its quota and you try to create a new RDS DB instance, then you get an error similar to the following one:

"Storage quota exceeded by  <x>; requested size: <x>"

To resolve this error, either request a quota increase or delete unused backups to immediately free up space in your account. 

Manage large trace files that occupy the BDUMP directory

Occasionally, large trace files are created in the BDUMP directory. These files occupy storage and are cleared according to the retention policy. They can also contribute to storage-full occurrences on your DB instances.

Trace files can accumulate and use disk space. By default, Amazon RDS purges trace files and log files that are older than 7 days. To check the trace file retention period, use the show_configuration procedure. To view the procedure's output, first run the SET SERVEROUTPUT ON command.

For more information about how to manage your trace files, see Oracle database log files.

To sort the files in the BDUMP directory by size, run a command similar to the following one:

select FILENAME, FILESIZE/1024/1024/1024 "GB", MTIME from TABLE  
 (rdsadmin.rds_file_util.listdir(p_directory => 'BDUMP')) order by 2 desc;

After you identify trace files that aren't in use, use the UTL_FILE.FREMOVE procedure to delete large files before the trace file retention period removes them.
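As a sketch, deleting one unneeded file from the BDUMP directory might look like the following. The trace file name is a placeholder for a file name returned by listdir:

```sql
-- Delete a specific trace file from the BDUMP directory (placeholder file name)
EXEC utl_file.fremove('BDUMP', 'ORCL_ora_12345.trc');
```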


The following example shows the current trace file retention period, and then sets a new trace file retention period:

# Show the current tracefile retention
SQL> EXEC rdsadmin.rdsadmin_util.show_configuration;  
NAME:tracefile retention
VALUE:10080  
DESCRIPTION:tracefile expiration specifies the duration in minutes before tracefiles in bdump are automatically deleted.  

# Set the tracefile retention to 24 hours
SQL> EXEC rdsadmin.rdsadmin_util.set_configuration('tracefile retention',1440);
SQL> commit;  

# Show the new tracefile retention
SQL> EXEC rdsadmin.rdsadmin_util.show_configuration;  
NAME:tracefile retention
VALUE:1440  
DESCRIPTION:tracefile expiration specifies the duration in minutes before tracefiles in bdump are automatically deleted. 

Manage large dump files in the DATA_PUMP_DIR directory

If you use Oracle Data Pump to move your data, then you might copy large dump files that decrease your free storage. To resolve this issue, integrate Amazon RDS for Oracle with Amazon Elastic File System (Amazon EFS). When you use this setup, you don't need to use DATA_PUMP_DIR for database refreshes.
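To see which dump files occupy DATA_PUMP_DIR and remove ones that you no longer need, you can run commands similar to the following sketch. The dump file name is a placeholder:

```sql
-- List files in the DATA_PUMP_DIR directory, largest first
SELECT filename, ROUND(filesize / 1024 / 1024, 2) AS size_mb, mtime
  FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'))
 ORDER BY filesize DESC;

-- Delete a dump file that is no longer needed (placeholder file name)
EXEC utl_file.fremove('DATA_PUMP_DIR', 'old_export.dmp');
```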

Check your archive log retention

Amazon RDS backs up archived redo logs to internal storage, based on the backup retention period, for point-in-time recovery (PITR). Based on the archive log retention setting for RDS for Oracle, archived redo logs that are stored locally on RDS storage also count toward your allocated storage usage.

To retain archived redo logs, run rdsadmin.rdsadmin_util.set_configuration with the following parameters:

begin
    rdsadmin.rdsadmin_util.set_configuration(
        name  => 'archivelog retention hours',  
        value => '24');  
end;
/

The show_configuration procedure then reports the setting:

NAME:archivelog retention hours  
VALUE:24
DESCRIPTION:ArchiveLog expiration specifies the duration   
in hours before archive/redo log files are automatically deleted.

To check how much space your DB instance used during a specific period of time, run the following query. Replace X with the number of hours you want to query:
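Assuming that the archived logs go to the local log destination (DEST_ID 1), a query over V$ARCHIVED_LOG similar to the following sums the archived redo log volume generated in the last X hours:

```sql
-- Sum archived redo log volume generated in the last X hours (replace X)
SELECT SUM(blocks * block_size) AS bytes
  FROM v$archived_log
 WHERE first_time >= SYSDATE - (X / 24)
   AND dest_id = 1;
```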


To view retained archive logs, run the following commands to create a directory object named ARCHIVELOG_DIR and then list its contents:

EXEC rdsadmin.rdsadmin_master_util.create_archivelog_dir;
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'ARCHIVELOG_DIR'));

Check your database for fragmented objects

Fragmentation causes the database to slow and uses an unnecessary amount of storage space. To identify the objects that are fragmented and use the most space, run a command similar to the following one:

-- Assumes the default 8 KB database block size
select owner, table_name, round((blocks * 8 / 1024), 2) "size (mb)",  
round((num_rows * avg_row_len / 1024 / 1024), 2) "actual_data (mb)",  
(round((blocks * 8 / 1024), 2) - round((num_rows * avg_row_len / 1024 / 1024), 2)) "wasted_space (mb)"  
from dba_tables  
where (round((blocks * 8 / 1024), 2) > round((num_rows * avg_row_len / 1024 / 1024), 2)) and owner = '&schema_name'  
order by 4 desc;

To remove fragmentation, you can use export and import, recreate tables, or move tables to a new tablespace.

Note: Free storage metrics are affected only when you drop the fragmented tablespace after you empty and move fragmented objects to a new tablespace.
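As an illustration of the table-move approach, commands similar to the following sketch move a fragmented table and rebuild its index; the schema, table, index, and tablespace names are placeholders:

```sql
-- Move the fragmented table into a new tablespace (placeholder names)
ALTER TABLE myschema.orders MOVE TABLESPACE new_data_ts;

-- Moving a table invalidates its indexes, so rebuild them
ALTER INDEX myschema.orders_pk REBUILD;

-- Refresh optimizer statistics after the move
EXEC dbms_stats.gather_table_stats('MYSCHEMA', 'ORDERS');
```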

Use AWR to analyze growth trends on your DB instance

With Oracle Enterprise Edition, you can use the Automatic Workload Repository (AWR) to query historical information and analyze the growth trends of your database. To check the growth of your database schema per object type over a specific number of days, run a command similar to the following one:

ttitle "Total Disk Used by Object Type"  
select c.segment_type, sum(space_used_delta) / 1024 / 1024 "Space used (M)", sum(c.bytes) / 1024 / 1024 "Total Space (M)",  
round(sum(space_used_delta) / sum(c.bytes) * 100, 2) || '%' "Percent of Total Disk Usage"  
from dba_hist_snapshot sn,  
dba_hist_seg_stat a,  
dba_objects b,  
dba_segments c  
where end_interval_time > trunc(sysdate) - &days_back  
and sn.snap_id = a.snap_id  
and b.object_id = a.obj#  
and b.owner = c.owner  
and b.object_name = c.segment_name  
and space_used_delta > 0  
and c.owner = '&schema_name'  
group by rollup(c.segment_type);

Implement other methods

Implement the following methods to prevent storage-full on your instance:

Related Items

How can I decrease the total provisioned storage size of my Amazon RDS DB instance?

Why is my Amazon RDS for Oracle DB instance using more storage than expected?

AWS OFFICIAL
Updated 7 months ago