How to write _SUCCESS files per partition instead of in the top-level directory in AWS Glue?


Hello,

I have a PySpark application that adds partitions (dynamic partition overwrite) to an AWS Glue table using the insertInto method. Upon completion of the job, a single global _SUCCESS file in the top-level directory of the table in S3 is updated with a new timestamp. My desired behaviour would be to have _SUCCESS files with a timestamp inside each updated partition instead of the top-level directory. Is this possible?
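For reference, a minimal sketch of the write pattern described above (database, table, and input path are placeholders, not from the original question):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Overwrite only the partitions present in the incoming data
    .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical input data
df = spark.read.parquet("s3://my-bucket/incoming/")

# Overwrites the matching partitions of the Glue table; Spark writes a single
# _SUCCESS marker at the table's root location, not inside each partition.
df.write.insertInto("my_database.my_table", overwrite=True)
```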

Best,

N

Asked 2 years ago · 1,071 views
1 Answer

Generally the _SUCCESS marker is per full job.
There are two options I can think of:

  1. Write a custom committer that records the partitions being written to, updates an accumulator, and then has the driver create those files. This could be complex and error-prone.
  2. Write the files directly to the partition directory path/to/table/partition_key1=foo/partition_key2=bar without telling the writer that the output is partitioned. A generally better option is to use a persistent metadata store (such as the Glue Data Catalog) and update the partition metadata after the write is confirmed complete, as sketched below.
    Once the partition metadata is updated, you can use predicate pushdown on the partition columns. The predicate can be any SQL expression or user-defined function, as long as it filters only on partition columns. Remember that it is applied to the metadata stored in the catalog, so you don’t have access to the other fields in the schema.
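A rough sketch of the second option, assuming Parquet data and using boto3 for the per-partition marker and the catalog update; the bucket, database, table, and partition names below are placeholders, not part of the original question:

```python
import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
s3 = boto3.client("s3")
glue = boto3.client("glue")

# All names below are placeholders.
bucket = "my-bucket"
table_prefix = "path/to/table"
partition_path = f"{table_prefix}/partition_key1=foo/partition_key2=bar"

# Data already filtered down to this single partition (hypothetical source).
df_partition = spark.read.parquet("s3://my-bucket/staging/foo-bar/")

# 1. Write directly into the partition directory, without telling Spark the
#    output is partitioned, so the table-level _SUCCESS is not touched.
df_partition.write.mode("overwrite").parquet(f"s3://{bucket}/{partition_path}")

# 2. Drop an explicit per-partition marker once the write has completed; its
#    S3 LastModified timestamp reflects when the partition was updated.
s3.put_object(Bucket=bucket, Key=f"{partition_path}/_SUCCESS", Body=b"")

# 3. Register the partition in the Glue Data Catalog so readers can use
#    partition predicate pushdown; use update_partition if it already exists.
glue.create_partition(
    DatabaseName="my_database",
    TableName="my_table",
    PartitionInput={
        "Values": ["foo", "bar"],
        "StorageDescriptor": {
            "Location": f"s3://{bucket}/{partition_path}/",
            "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
            },
        },
    },
)
```

Once the partition is in the catalog, a Glue job can read it back with a pushdown predicate, for example `glueContext.create_dynamic_frame.from_catalog(database="my_database", table_name="my_table", push_down_predicate="partition_key1 = 'foo'")`.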
AWS
Answered 2 years ago
