Is there an optimal way in PySpark to write the same dataframe to multiple locations?


I have a dataframe in PySpark and I want to write the same dataframe to two locations in AWS S3. Currently I have the following code running on AWS EMR.

# result is the name of the dataframe
        
result = result.repartition(repartition_value, 'col1').sortWithinPartitions('col1')

result.write.partitionBy("col2")\
      .mode("append") \
      .parquet(f"{OUTPUT_LOCATION_1}/end_date={event_end_date}")

result.write.partitionBy("col2") \
      .mode("append") \
      .parquet(f"{OUTPUT_LOCATION_2}/processed_date={current_date_str}")

The inclusion of this additional write step has increased the runtime of the job significantly (almost doubled it). Could it be that Spark's lazy evaluation runs the same steps twice?

I have tried caching the data beforehand with result.cache() and forcing an action afterwards, e.g. result.count(), but this hasn't provided any benefit. What would be the most efficient way to do a double dataframe output write?
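
For reference, a rough sketch of that caching attempt, using the same variable names as the snippet above:

# Cache after the repartition/sort, force materialization with count(),
# then run both writes against the cached dataframe.
result = result.repartition(repartition_value, 'col1').sortWithinPartitions('col1')
result.cache()
result.count()  # action to materialize the cached partitions

result.write.partitionBy("col2") \
      .mode("append") \
      .parquet(f"{OUTPUT_LOCATION_1}/end_date={event_end_date}")

result.write.partitionBy("col2") \
      .mode("append") \
      .parquet(f"{OUTPUT_LOCATION_2}/processed_date={current_date_str}")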

Asked 2 years ago · 1,675 views
1 Answer

In PySpark, writing the same dataframe to multiple locations requires two write statements, and the distribution of data into partitions is the costly step, hence the slowness. A more efficient way is to write once and then copy the output from OUTPUT_LOCATION_1 to OUTPUT_LOCATION_2 outside of PySpark, e.g. with an S3 copy (cp). Within Spark, you can also try repartitioning to a specified number of partitions (for example, 5) before writing to see if that helps the performance with two write statements:

result.repartition(5).write.partitionBy("col2").mode("append").parquet(f"{OUTPUT_LOCATION_1}/end_date={event_end_date}")
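
And a minimal sketch of the copy-outside-Spark approach mentioned above, assuming the AWS CLI is available on the node that runs the job (the source and destination prefixes here are illustrative):

import subprocess

# Write once with Spark (as above), then mirror the first output prefix
# to the second location with the AWS CLI instead of a second Spark write.
subprocess.run(
    ["aws", "s3", "sync",
     f"{OUTPUT_LOCATION_1}/end_date={event_end_date}",
     f"{OUTPUT_LOCATION_2}/processed_date={current_date_str}"],
    check=True,
)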

SUPPORT ENGINEER
answered 2 years ago
