Unable to use table_import_from_s3 with any path containing a directory


I just upgraded from Postgres 11.8 to 12.4, and I cannot figure out how to call aws_s3.table_import_from_s3 on an object in a directory.
This works:

SELECT aws_s3.table_import_from_s3(
'my_table', '', '(format csv)',
aws_commons.create_s3_uri('ai-brace-dev-brace-athena-results',
'x.csv',
'us-east-2')
);

Whenever I have a directory in the path, like this:

SELECT aws_s3.table_import_from_s3(
'my_table"', '', '(format csv)',
aws_commons.create_s3_uri('ai-brace-dev-brace-athena-results',
'6/x.csv',
'us-east-2')
);

I get this error:
ERROR: HTTP 403. Permission denied. Check bucket or provided credentials as they may no longer be valid.

The second command worked fine before 12.4. I have tried URL-encoding the filename, and nothing helps.


Asked 4 years ago · 3,572 views
10 answers

https://forums.aws.amazon.com/thread.jspa?messageID=957689&#957689 - that thread had success with importing when the object is directly within the bucket; you might want to try that.

Answered 4 years ago

We have the same problem using an AWS Aurora PostgreSQL-compatible DB with engine v11.8.

Some things we tried without success:

  1. URL-encoding characters other than letters and numbers in the file path.
  2. Using Unicode U&'002F' instead of the forward slash (/) character.
  3. Explicitly adding the "folder" in a VPC Endpoint policy as a "Resource".
    "arn:aws:s3:::<bucket>/<folder>/*"
  4. Using PostgreSQL dollar quoting ($$) instead of the single quote (') character around the file path string.
  5. Listing the bucket, file path, and region as separate arguments to the table_import_from_s3() function.
  6. Using aws_commons.create_s3_uri(<bucket>, <file path>, <region>) as a single argument to table_import_from_s3(). (Both calling conventions are sketched below.)
  7. Placing a copy of the file in a new folder directly under the bucket "root" with a very simple 7-letter name.

We can import data without any problem IF the file is at the "root" of the bucket.
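
For reference, a minimal sketch of the two calling conventions from items 5 and 6, using placeholder bucket, file, and table names rather than the actual values from this thread:

-- Item 5: bucket, file path, and region as separate arguments
SELECT aws_s3.table_import_from_s3(
  'my_table', '', '(format csv)',
  'my-bucket', 'folder/data.csv', 'us-east-2'
);

-- Item 6: a single aws_commons.create_s3_uri() argument
SELECT aws_s3.table_import_from_s3(
  'my_table', '', '(format csv)',
  aws_commons.create_s3_uri('my-bucket', 'folder/data.csv', 'us-east-2')
);

Both forms returned the same HTTP 403 whenever the file path contained a folder.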

Answered 3 years ago

Wow. This is a really bad P1 bug. Any updates, AWS?

mjgp
Answered 3 years ago

We're experiencing the same S3 permission-denied error after upgrading to 11.8.

Answered 3 years ago

Has there been any update on this?
We are also encountering the same issue.

leahcim
Answered 3 years ago

We started a new service that used an Aurora Postgres DB with engine_version 11.8.

We were unable to do an S3 import from subfolders, regardless of how we set up the policies for our database IAM role. Importing from the root of the bucket worked.

We recreated the database with version 11.7, and now things work as expected. Not ideal if you have a production environment, though.

fredc
Answered 3 years ago

My workaround was to copy the file to the root of the bucket and then perform the aws_s3.table_import_from_s3. Later, I deleted the file from the root.
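
A sketch of that workaround with placeholder names (not the real bucket or file): first copy the object to the bucket root, e.g. aws s3 cp s3://my-bucket/folder/data.csv s3://my-bucket/data.csv, then import from the root path, which is unaffected by the bug:

SELECT aws_s3.table_import_from_s3(
  'my_table', '', '(format csv)',
  aws_commons.create_s3_uri('my-bucket', 'data.csv', 'us-east-2')
);

-- then delete the temporary root-level copy, e.g.:
-- aws s3 rm s3://my-bucket/data.csv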

Answered 3 years ago

UPDATE (2021-01-11): Upgrading to Aurora PostgreSQL-compatible engine v11.9 and ensuring that the path passed to aws_s3.table_import_from_s3() does not begin with a forward slash ("/") enabled a successful load of data from .csv.gz files in S3 buckets outside the root (e.g. paths with multiple forward slashes) into Aurora DB tables.
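
In other words, a call like the following sketch worked on 11.9 (placeholder names; note the file path starts with the folder name, with no leading "/"):

SELECT aws_s3.table_import_from_s3(
  'my_table', '', '(format csv)',
  aws_commons.create_s3_uri('my-bucket', 'folder/subfolder/data.csv.gz', 'us-east-2')
);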

Answered 3 years ago

I updated my Postgres RDS to 11.9 and am still receiving the error: InternalError_: HTTP 403. Permission denied. Check bucket or provided credentials as they may no longer be valid.

I have double- and triple-checked that the IAM role and policy for this have been created properly.
The query I am using is this:

select aws_s3.table_import_from_s3('table1', '', '(format csv)', '<s3-bucket-name>', 'crunchbase/out/cb-20201008.csv', 'us-east-2' );  

I also have tried this version but get the same result:

select aws_s3.table_import_from_s3('table1', '', '(format csv)',
    aws_commons.create_s3_uri('<s3-bucket-name>', 'crunchbase/out/cb-20201008.csv', 'us-east-2')
);

I am not using Aurora RDS, just a basic Postgres RDS database. If you could provide any insight into how to fix this, I would greatly appreciate it!

Answered 3 years ago
