I want to troubleshoot issues and errors when I integrate Amazon Relational Database Service (Amazon RDS) for Oracle with Amazon Simple Storage Service (Amazon S3).
Short description
When you transfer files between an Amazon RDS for Oracle DB instance and an Amazon S3 bucket, be sure of the following:
- The DB instance and the S3 bucket are in the same AWS Region.
- The DB instance has the required access to the S3 bucket.
- The DB instance is associated with an option group that includes the S3_INTEGRATION option.
- You're uploading files only from an Oracle directory object. This directory might be a user-created directory or the DATA_PUMP_DIR directory.
- If you're using a user-created directory, then the user performing the upload or download must have the required read and write privileges on the directory (see the example after this list). For more information, see Grant privileges to the user on the source database.
- If you're copying data within subdirectories, specify the name of the subdirectory explicitly when using the rdsadmin packages. You can't copy subdirectories using the S3_INTEGRATION option.
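The following is a minimal sketch of creating a user-created directory and granting privileges on it. The directory name EXAMPLE_DIR and the user example_user are assumptions for illustration:

-- Create an Oracle directory object on the DB instance
EXEC rdsadmin.rdsadmin_util.create_directory(p_directory_name => 'EXAMPLE_DIR');

-- Grant read and write privileges to the user performing the transfer
GRANT READ, WRITE ON DIRECTORY EXAMPLE_DIR TO example_user;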
Resolution
Be sure that the DB instance has the required access to the S3 bucket
To check whether your DB instance has an AWS Identity and Access Management (IAM) role with the minimum privileges required to upload to and download from Amazon S3, do the following:
- Open the Amazon RDS console.
- In the navigation pane, choose Databases.
- Choose the DB instance that you want to check.
- Choose the Connectivity & security tab.
- Scroll down to the Manage IAM roles section to check if you have an active IAM role with the following privileges:
s3:ListBucket
s3:PutObject
s3:GetObject
You can also use the AWS Command Line Interface (AWS CLI) command describe-db-instances to check whether your DB instance has an IAM role with these minimum privileges:
aws rds describe-db-instances \
--db-instance-identifier example-db-instance \
--query 'DBInstances[*].[AssociatedRoles[*]]'
Note: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.
After checking if the IAM role exists, do either of the following based on your finding:
- If the IAM role exists but doesn't have these minimum privileges, then modify the role to include them.
- If you don't have an IAM role with the required permissions, then create an IAM policy with the permissions required to transfer files between the S3 bucket and Amazon RDS, attach the policy to a role, and then associate the role with the DB instance (see the command after this list). For more information, see Prerequisites for Amazon RDS for Oracle integration with Amazon S3. Be sure to apply bucket-level permissions to buckets and object-level permissions to objects when you create the IAM policy.
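After the role exists, associate it with the DB instance so that Amazon RDS can assume it. The following is a minimal AWS CLI sketch, assuming the role name example-s3-role and the example account ID shown:

aws rds add-role-to-db-instance \
    --db-instance-identifier example-db-instance \
    --role-arn arn:aws:iam::123456789012:role/example-s3-role \
    --feature-name S3_INTEGRATION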
If the issue with bucket permissions still persists, then modify the IAM role to grant full access to the S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket/*",
        "arn:aws:s3:::example-bucket"
      ]
    }
  ]
}
Important: The IAM role is modified to grant full access only to test whether the issue is caused by IAM privileges. After the issue is resolved, adjust the custom privileges as needed.
To upload files to a specific folder in S3, use the p_s3_prefix parameter in the rdsadmin.rdsadmin_s3_tasks.upload_to_s3 procedure. Be sure that you have the required permissions before you upload the files.
For example, to upload files to the example-folder folder in the S3 bucket example-bucket, attach the following IAM policy (a matching upload query follows the policy):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket/example-folder/*"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "example-folder/*"
          ]
        }
      }
    }
  ]
}
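With that policy attached, a query like the following uploads the files in an Oracle directory to the example-folder prefix. This is a sketch that assumes the default DATA_PUMP_DIR directory; the bucket and folder names match the illustrative values in the policy:

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
      p_bucket_name    => 'example-bucket',
      p_s3_prefix      => 'example-folder/',
      p_directory_name => 'DATA_PUMP_DIR')
   AS task_id FROM DUAL;

The procedure returns a task ID that you can use later to read the transfer log.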
Be sure that the DB instance is associated with an option group that includes the S3_INTEGRATION option
To check if you have an option group attached to your RDS DB instance, do the following:
- Open the Amazon RDS console.
- In the navigation pane, choose Databases.
- Choose the DB instance that you want to check.
- Choose the Configuration tab.
- Note the Option groups parameter.
If the option group isn't in the valid state, then you can't successfully add the S3_INTEGRATION option. Therefore, be sure that the option group is in the valid state.
To check whether the option group is in the valid state, run the following AWS CLI command:
aws rds describe-db-instances \
--db-instance-identifier example-db-instance \
--query 'DBInstances[*].[OptionGroupMemberships[*]]'
If the option group isn't in the valid state, choose the Logs & events tab for your database, and then scroll down to the Recent events section. Identify the component that makes the option group invalid, and remove this component from the database.
If the option group is in the valid state, then check if the S3_INTEGRATION option is included in the option group. For more information, see Listing the options and option settings for an option group. If the S3_INTEGRATION option isn't included in the option group, then add the option. For more information, see Adding the Amazon S3 integration option.
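To add the option from the AWS CLI, you can run a command similar to the following. The option group name example-option-group is an assumption for illustration:

aws rds add-option-to-option-group \
    --option-group-name example-option-group \
    --options OptionName=S3_INTEGRATION,OptionVersion=1.0 \
    --apply-immediately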
Note: If the S3_INTEGRATION option isn't added to your DB instance, then you get the error ORA-00904: "RDSADMIN"."RDSADMIN_S3_TASKS"."UPLOAD_TO_S3": invalid identifier. This error indicates that the required procedures to upload and download aren't created on the DB instance.
Adding the S3_INTEGRATION option to Amazon RDS doesn't result in downtime. If you're planning to use the Apply Immediately option, be sure that there are no pending maintenance actions. If there are any pending actions, those actions are applied along with S3 integration. For more information, see Maintaining a DB instance.
Be sure that you upload or download files only from a user-created Oracle directory object or the DATA_PUMP_DIR directory
You can't access files in Amazon S3 directly from the DB instance. To access these files, you must first copy them to a directory on the RDS host. To upload or download files between the DB instance and an S3 bucket, do the following:
- To copy the export files in Amazon RDS to S3, be sure to create the dump files with the expdp command or the dbms_datapump API in the DATA_PUMP_DIR or a custom directory. Then, use the rdsadmin.rdsadmin_s3_tasks.upload_to_s3 procedure to upload the files to S3. For information on the parameters of this procedure and example queries for uploading files, see Uploading files from an Oracle DB instance to an Amazon S3 bucket.
- Use the rdsadmin.rdsadmin_s3_tasks.download_from_s3 procedure to download files from the S3 bucket to the DATA_PUMP_DIR folder or any user-created directory (see the example query after this list). For information on the parameters of this procedure and example queries for downloading files, see Downloading files from an Amazon S3 bucket to an Oracle DB instance.
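For example, the following query downloads all objects with the example-folder prefix into DATA_PUMP_DIR. The bucket and prefix names are illustrative values:

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    => 'example-bucket',
      p_s3_prefix      => 'example-folder/',
      p_directory_name => 'DATA_PUMP_DIR')
   AS task_id FROM DUAL;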
To monitor the status of a file transfer, view the log that the rdsadmin upload or download procedure generates when you run it. To view these logs, do the following:
- Open the Amazon RDS console.
- In the navigation pane, choose Databases.
- Choose the DB instance for which you want to view the logs.
- Choose the Logs & events tab.
- Under the Logs section, enter the task ID returned by the procedure in the search filter.
- Select the file that's returned.
- Choose Watch to view the logs.
-or-
Choose Download to download the logs.
You can also run the following command to read the log file generated during the upload/download process:
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-task-id.log'));
Note: Be sure to replace task-id with the task ID returned by the procedure.
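For example, if the procedure returns the task ID 1234567890123-4567 (an illustrative value), then the query looks like the following:

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1234567890123-4567.log'));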
Related information
Amazon S3 integration
How do I resolve problems with my Amazon RDS for Oracle DB instance that's using more storage than expected?