How to write _SUCCESS files per partition instead of in the top-level directory in AWS Glue?


Hello,

I have a PySpark application that adds partitions (dynamic partition overwrite) to an AWS Glue table using the insertInto method. When the job completes, a single global _SUCCESS file in the table's top-level directory in S3 is updated with the timestamp. My desired behaviour would be to have a _SUCCESS file with a timestamp inside each updated partition instead of one in the top-level directory. Is this possible?

Best,

N

asked 2 years ago · 1,002 views
1 Answer

Generally the _SUCCESS marker is written once per job, not per partition.
There are two options I can think of:

  1. Write a custom committer that records which partitions are being written, updates an accumulator, and then has the driver create a marker file in each of those partitions. This could be complex and error-prone.
  2. Write files directly to the partition directory path/to/table/partition_key1=foo/partition_key2=bar without telling the writer that the output is partitioned. A generally better option is to use a persistent metadata store (such as the Glue Data Catalog) and update the partition metadata only after the write is confirmed complete.
    Once the partition metadata is updated, you can use predicate pushdown on the partition columns. This predicate can be any SQL expression or user-defined function, as long as it filters only on partition columns. Remember that the predicate is applied to the metadata stored in the catalog, so you don't have access to the other fields in the schema.
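As a rough illustration of the second option, the driver could touch an empty _SUCCESS object inside each partition directory after the write finishes. This is only a sketch, not an official API: the bucket, table prefix, and partition values below are placeholders, and the helper names (`success_marker_key`, `mark_partition_complete`) are invented for this example.

```python
def success_marker_key(table_prefix, partition_values):
    """Build the S3 key for a per-partition _SUCCESS marker.

    partition_values is an ordered mapping of partition column -> value,
    e.g. {"partition_key1": "foo", "partition_key2": "bar"}.
    """
    parts = "/".join(f"{k}={v}" for k, v in partition_values.items())
    return f"{table_prefix.rstrip('/')}/{parts}/_SUCCESS"

def mark_partition_complete(bucket, table_prefix, partition_values):
    """Touch an empty _SUCCESS object inside the partition directory."""
    import boto3  # imported lazily; requires AWS credentials at call time

    key = success_marker_key(table_prefix, partition_values)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=b"")
    return key
```

After the Spark write completes, the driver would call `mark_partition_complete` once for each partition it wrote (the list of written partitions has to come from your own bookkeeping, e.g. the distinct partition values in the written DataFrame).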
AWS
answered 2 years ago


