How to write _SUCCESS files per partition instead of in the top-level directory in AWS Glue?



I have a PySpark application that adds partitions (dynamic overwrite) to an AWS Glue table using the insertInto method. Upon completion of the job, a global _SUCCESS file in the top-level directory in S3 is updated with the timestamp. My desired behaviour would be to have _SUCCESS files with timestamps inside the updated partitions instead of the top-level directory. Is this possible?



asked 2 years ago · 1235 views
1 Answer

Generally, the _SUCCESS marker is written once per job, not per partition.
There are two options I can think of:

  1. Write a custom output committer that records which partitions are being written, updates an accumulator, and then has the driver create those marker files. This could be complex and error-prone.
  2. Write files directly to the partition directory path/to/table/partition_key1=foo/partition_key2=bar without telling the output that it is partitioned. A generally better option is to use a persistent metadata store (such as the Glue Data Catalog) and update the partition metadata after the write is confirmed complete.
    Once the partition metadata is updated, you can use predicate pushdown for partition columns. This predicate can be any SQL expression or user-defined function, as long as it filters only on the partition columns. Remember that the predicate is applied to the metadata stored in the catalog, so you don’t have access to other fields in the schema.
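A simpler variant of option 1 can be done entirely on the driver, without a custom committer: after insertInto completes, collect the distinct partition values from the DataFrame and write an empty _SUCCESS object into each partition prefix with boto3. This is only a sketch; the bucket, prefix, and partition column names below are illustrative, not from the original post.

```python
def partition_prefixes(base_prefix, partition_cols, rows):
    """Build the S3 key prefix for each written partition, e.g.
    path/to/table/partition_key1=foo/partition_key2=bar."""
    prefixes = []
    for row in rows:
        parts = "/".join(f"{col}={row[col]}" for col in partition_cols)
        prefixes.append(f"{base_prefix}/{parts}")
    return prefixes


def write_success_markers(df, bucket, base_prefix, partition_cols):
    """After the write has succeeded, drop an empty _SUCCESS marker
    into each partition directory that the DataFrame touched."""
    import boto3  # assumes AWS credentials are available to the job

    s3 = boto3.client("s3")
    # Collect the distinct partition value combinations on the driver.
    rows = [r.asDict() for r in df.select(*partition_cols).distinct().collect()]
    for prefix in partition_prefixes(base_prefix, partition_cols, rows):
        s3.put_object(Bucket=bucket, Key=f"{prefix}/_SUCCESS", Body=b"")


# Hypothetical usage after the original insertInto call:
# df.write.insertInto("my_glue_table", overwrite=True)
# write_success_markers(df, "my-bucket", "path/to/table",
#                       ["partition_key1", "partition_key2"])
```

Note that collect() on the distinct partition values is cheap when the number of partitions per job is small, but this approach trusts the job as a whole to have succeeded; it does not give per-task atomicity the way a real committer would. For reading back, AWS Glue's create_dynamic_frame.from_catalog accepts a push_down_predicate argument for the partition-column filtering described above.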
answered 2 years ago
