Is there an optimal way in pyspark to write the same dataframe to multiple locations?


I have a dataframe in PySpark and I want to write the same dataframe to two locations in AWS S3. Currently I have the following code running on AWS EMR.

# result is the name of the dataframe
        
result = result.repartition(repartition_value, 'col1').sortWithinPartitions('col1')

result.write.partitionBy("col2")\
      .mode("append") \
      .parquet(f"{OUTPUT_LOCATION_1}/end_date={event_end_date}")

result.write.partitionBy("col2") \
      .mode("append") \
      .parquet(f"{OUTPUT_LOCATION_2}/processed_date={current_date_str}")

The inclusion of this additional write step has increased the runtime of the job significantly (almost doubled it). Could it be that Spark's lazy evaluation runs the same steps twice?

I have tried caching the data beforehand with result.cache() and forcing an action afterwards, e.g. result.count(), but this hasn't provided any benefit. What would be the most efficient way to write the same dataframe out twice?
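For reference, the caching attempt was roughly along these lines (a sketch only, using the same variables as in the snippet above):

# Sketch of the caching attempt: repartition/sort once, cache, force
# materialization with an action, then perform both writes.
result = result.repartition(repartition_value, 'col1').sortWithinPartitions('col1')

result.cache()    # mark the dataframe for caching
result.count()    # action intended to materialize the cached data

result.write.partitionBy("col2") \
      .mode("append") \
      .parquet(f"{OUTPUT_LOCATION_1}/end_date={event_end_date}")

result.write.partitionBy("col2") \
      .mode("append") \
      .parquet(f"{OUTPUT_LOCATION_2}/processed_date={current_date_str}")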

Asked 2 years ago · Viewed 1,676 times
1 Answer

In PySpark, writing the same dataframe to multiple locations requires two write statements, and the shuffle/partitioning work performed for each write is the costly part, hence the slowness. A more efficient approach is to write once and then copy the output from OUTPUT_LOCATION_1 to OUTPUT_LOCATION_2 outside of PySpark, for example with aws s3 cp --recursive (or s3-dist-cp on EMR). Within Spark, you can also try repartitioning to a fixed number of partitions (for example, 5) before writing and see whether that improves the performance of the two write statements.

result.repartition(5).write.partitionBy("col2") \
      .mode("append") \
      .parquet(f"{OUTPUT_LOCATION_1}/end_date={event_end_date}")
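If you prefer to drive the copy step from the same Python job rather than the AWS CLI, a minimal sketch with boto3 could look like the following. The bucket name and prefixes are placeholders, not taken from the question; adjust them to your own layout:

import boto3

# Copy every object written under the first output prefix to the second one.
# Bucket and prefix names below are placeholders for illustration only.
s3 = boto3.client("s3")
bucket = "my-bucket"
src_prefix = "output_location_1/end_date=2021-01-01/"
dst_prefix = "output_location_2/processed_date=2021-01-01/"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=src_prefix):
    for obj in page.get("Contents", []):
        # Preserve the relative key (including the col2 partition folders).
        dst_key = dst_prefix + obj["Key"][len(src_prefix):]
        s3.copy({"Bucket": bucket, "Key": obj["Key"]}, bucket, dst_key)

For large outputs on EMR, s3-dist-cp distributes the copy across the cluster and is usually faster than copying object by object from a single process.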

SUPPORT ENGINEER
Answered 2 years ago
