AWS Glue - DataSink is taking a long time to write


Hello

I'm using DataSink to write the results of the job. The input file is only 70 MB. The job reads from the Data Catalog, writes to S3, and updates the target Data Catalog. I have no clue why it takes so long (> 2 hours). When I write a simple job that reads the raw CSV file and writes it to S3 as Parquet, it takes just 2 minutes. The reason I am using DataSink is to avoid running a Crawler on the target data source. Please suggest.

result_sink = glueContext.getSink(
    path=fld_path,
    connection_type="s3",
    updateBehavior="LOG",
    partitionKeys=partition_cols_list,
    compression="snappy",
    enableUpdateCatalog=True,
    transformation_ctx="result_sink"
)
result_sink.setCatalogInfo(
    catalogDatabase=target_db, catalogTableName=dataset_dict_obj.get("table_name")
)

# Raw input format conversion from CSV/txt into Parquet
result_sink.setFormat("glueparquet")

# Convert the Spark DataFrame to a Glue DynamicFrame
final_df = DynamicFrame.fromDF(
    inc_df, glueContext, "final_df"
)
# The job takes about 1 hour to reach this point.

print("final_df size is that:",final_df.count())

# Write the DynamicFrame to the S3 bucket and also update the Data Catalog.
result_sink.writeFrame(final_df)

job.commit()
Asked 9 months ago · 385 views
1 Answer

If writing a plain file is fast, I suspect the performance issue is with "partitionKeys=partition_cols_list"; maybe those columns have too much granularity and force the job to write lots of tiny files. Also, calling count() on the converted DynamicFrame might result in double processing.
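
To check both theories, something like this might help (untested; it assumes inc_df and partition_cols_list are the same objects used in your job):

# Count the distinct partition-key combinations; a large number here means
# the sink has to write many small files.
print("distinct partition combinations:",
      inc_df.select(*partition_cols_list).distinct().count())

# Cache before counting so count() and the later write reuse the same
# computed data instead of re-reading and re-transforming the input.
inc_df.cache()
print("record count:", inc_df.count())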

Since you already have a DataFrame, the DataFrame writer is faster at handling table partitioning. You can achieve the same result (as long as you are not writing to an S3 Lake Formation location) with something like this (haven't tested it):

# partitionedBy takes the columns as separate arguments, so unpack the list
inc_df.writeTo(f'{target_db}.{dataset_dict_obj.get("table_name")}').partitionedBy(*partition_cols_list).createOrReplace()
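
If you prefer to keep the DataSink so the catalog update stays automatic, another thing worth trying (also untested) is to repartition the DataFrame by the partition columns before converting it, so each partition value ends up in fewer, larger files:

# Repartition by the partition columns so each partition value is written
# by fewer tasks, producing fewer and larger Parquet files.
final_df = DynamicFrame.fromDF(
    inc_df.repartition(*partition_cols_list), glueContext, "final_df"
)
result_sink.writeFrame(final_df)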
AWS EXPERT
Answered 9 months ago
