
Upsert operation on PostgreSQL or Aurora from S3 CSV file using aws_s3.table_import_from_s3


Importing tens of thousands of rows into a PostgreSQL table from S3 with aws_s3.table_import_from_s3 is incredibly fast, but is there any way to perform upserts through the same process without creating staging tables?

Ideally it would be nice to have the equivalent of the ON CONFLICT clause, performing an update when the primary key is already present. Is there anything simpler than a staging-plus-merge operation? At the moment it looks like the process needs:

  • Create a temporary staging table
  • aws_s3.table_import_from_s3 into the staging table
  • MERGE from the staging table into the main table
  • Clear the staging table

and this has to be repeated for all 20 tables we're merging. Open to any suggestions.

asked 2 months ago · 19 views
2 Answers

Your current architecture sounds standard (and good!).

Part of the reason the CSV import is so fast is that it handles only a limited set of operations. I'm fairly sure there's no way to express ON CONFLICT handling within the import itself.

answered 2 months ago

Thanks for the feedback.

One nice simplification would be a way to parse the JSON structure in the S3 bucket directly as part of the import. At the moment I have to split the JSON into individual files, one per 'table', for the import process, then write a custom stored procedure to do the merge from each object.

First world problems...

answered 2 months ago
