https://forums.aws.amazon.com/thread.jspa?messageID=957689 - This poster had success with importing when the object is directly within the bucket; you might want to try that.
We have the same problem using AWS Aurora PostgreSQL compatible DB with engine v.11.8.
Some things we tried without success:
- URL-encoding characters other than letters and numbers in the file path.
- Using the PostgreSQL Unicode escape U&'\002F' instead of the literal forward slash (/) character.
- Explicitly adding the "folder" in a VPC Endpoint policy as a "Resource": "arn:aws:s3:::<bucket>/<folder>/*".
- Using PostgreSQL dollar quoting ($$) instead of single quote (') characters around the file path string.
- Listing the bucket, file path, region as separate arguments to the table_import_from_s3() function.
- Using aws_commons.create_s3_uri(<bucket>,<file path>,<region>) as a single argument to table_import_from_s3().
- Placing a copy of the file in a new folder directly under the bucket "root" with a very simple 7-letter name.
We can import data without any problem IF the file is at the "root" of the bucket.
We're experiencing the same S3 permission-denied error after upgrading to 11.8.
Has there been any update on this?
We are also encountering the same issue.
We started a new service that used an Aurora PostgreSQL DB with engine_version 11.8.
We were unable to do an S3 import from subfolders regardless of how we set up the policies for our database IAM role. Importing from the root of the bucket worked.
We recreated the database with version 11.7 and now things work as expected. Not ideal if you have a production environment though.
My workaround was to copy the file to the root of the bucket and then perform the aws_s3.table_import_from_s3 call. Afterwards, I deleted the file from the root.
UPDATE (2021-01-11): Upgrading to Aurora PostgreSQL-compatible engine v11.9 and ensuring that the path passed to aws_s3.table_import_from_s3() does not begin with a forward slash ("/") enabled successful loading of data from .csv.gz files located outside the bucket root (i.e., paths containing forward slashes) into Aurora DB tables.
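The leading-slash pitfall above can be sketched as a small helper that normalizes the object key before composing the import call. This is a minimal sketch, not AWS-provided code: the bucket, table, and key names are placeholders, and the SQL is only built here as text (you would execute it via your usual client).

```python
# Sketch: normalize an S3 object key and compose the aws_s3.table_import_from_s3
# call as SQL text. Identifier values below are placeholders, not from this thread.

def normalize_s3_key(key: str) -> str:
    """S3 object keys must not begin with '/'; strip any leading slashes."""
    return key.lstrip("/")

def build_import_sql(table: str, bucket: str, key: str, region: str) -> str:
    """Compose the SELECT statement, using dollar quoting to sidestep escaping issues."""
    key = normalize_s3_key(key)
    return (
        f"SELECT aws_s3.table_import_from_s3("
        f"$${table}$$, '', '(format csv)', "
        f"aws_commons.create_s3_uri($${bucket}$$, $${key}$$, $${region}$$));"
    )

print(build_import_sql("table1", "my-bucket",
                       "/crunchbase/out/cb-20201008.csv", "us-east-2"))
```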
I updated my PostgreSQL RDS instance to 11.9 and am still receiving the error: InternalError_: HTTP 403. Permission denied. Check bucket or provided credentials as they may no longer be valid.
I have double- and triple-checked that the IAM role and policy for this have been created properly.
The query I am using is this:
select aws_s3.table_import_from_s3('table1', '', '(format csv)', '<s3-bucket-name>', 'crunchbase/out/cb-20201008.csv', 'us-east-2' );
I also have tried this version but get the same result:
select aws_s3.table_import_from_s3('table1', '', '(format csv)',
aws_commons.create_s3_uri('<s3-bucket-name>','crunchbase/out/cb-20201008.csv', 'us-east-2')
);
I am not using the Aurora RDS, just a basic Postgres RDS database. If you could provide any insight into how to fix this I would greatly appreciate it!
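For comparison, the shape of an IAM policy that covers imports from anywhere in a bucket is below. This is a minimal sketch with a placeholder bucket name ("my-bucket"); the role carrying it must also be associated with the RDS instance for the s3Import feature.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3Import",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
```

Note that s3:ListBucket applies to the bucket ARN itself while s3:GetObject applies to the object ARNs; a policy that grants both actions on only one of the two resources can still produce a 403.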
Looks like a number of these symptoms have been addressed in Aurora PostgreSQL 3.3.2.