Spark shuffles a huge amount of data even though the data read is not huge


Reading a few GB (around 15 GB) of skewed Parquet data, applying a few transformations such as data type changes for some columns, and then repartitioning (dataframe.repartition(120)) before writing it to S3 as gzip-compressed CSV results in a huge amount of shuffle writes, as can be seen in the Spark UI: although the input data size is 15 GB, the shuffle write is 600 GB.

Interested to know why this is happening?
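
For reference, a minimal PySpark sketch of the pipeline described above; the bucket, paths, and column name are hypothetical placeholders rather than the actual job.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("parquet-to-csv-gzip").getOrCreate()

    # ~15 GB of skewed Parquet input (placeholder path).
    df = spark.read.parquet("s3://my-bucket/input/")

    # Example data type change: cast an assumed string column to long.
    df = df.withColumn("event_ts", F.col("event_ts").cast("long"))

    # Full shuffle: every row is serialized and written out as shuffle data
    # before the CSV write; this is what shows up as "shuffle write" in the UI.
    df = df.repartition(120)

    df.write \
        .option("compression", "gzip") \
        .mode("overwrite") \
        .csv("s3://my-bucket/output/")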

Bibhu
Asked 1 month ago · Viewed 284 times
1 Answer

That number is normally larger (e.g. 2x) because the shuffle compresses serialized rows, while Parquet's columnar compression is much more efficient. It must mean that in your data there are many columns with repeated values.
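
A rough way to see the row-oriented vs. columnar size difference is to write a small sample of the same DataFrame in both formats and compare the output sizes; the paths and the 1% sample fraction below are arbitrary placeholders.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("size-check").getOrCreate()
    df = spark.read.parquet("s3://my-bucket/input/")  # same placeholder input as above

    # Take a small sample and write it once as Parquet and once as gzip CSV.
    sample = df.sample(fraction=0.01, seed=42)
    sample.write.mode("overwrite").parquet("s3://my-bucket/size-check/parquet/")
    sample.write.mode("overwrite") \
          .option("compression", "gzip") \
          .csv("s3://my-bucket/size-check/csv-gzip/")

    # Compare the two prefixes, e.g. with:
    #   aws s3 ls --recursive --summarize s3://my-bucket/size-check/parquet/
    #   aws s3 ls --recursive --summarize s3://my-bucket/size-check/csv-gzip/
    # The gzip CSV output is typically several times larger than the Parquet
    # output, and shuffle files are row-oriented in a similar way.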

AWS
Expert
Answered 1 month ago
  • Can you please explain a bit more? I don't have any repeated values as such, but a few values are null.

    How can we optimise this? Without repartition, I tried writing to S3 in CSV format and it was 500 GB of data.

  • Avoid the shuffle if you can (a couple of shuffle-free alternatives are sketched after these comments); otherwise don't worry too much about the amount, as the transfer is quite fast.

  • The data is skewed, so I'm using repartition to distribute it evenly, which is resulting in huge shuffle writes. Even without repartition it takes around 1 hour to complete with G.2X workers and 60 DPUs.

  • This Parquet data is being read from Glue Catalog tables directly.
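
As a sketch of the "avoid the shuffle" suggestion, two alternatives to a full repartition(120) are shown below; the database, table, and output path names are hypothetical, and whether either option helps depends on how skewed the data is.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("csv-gzip-write").getOrCreate()

    # Read the table registered in the Glue Data Catalog (placeholder names;
    # in a Glue job the catalog is available as the Hive metastore by default).
    df = spark.table("my_database.my_table")

    # Option 1: coalesce only merges existing partitions, so it avoids the full
    # shuffle (but it cannot rebalance skewed partitions).
    df.coalesce(120) \
      .write \
      .option("compression", "gzip") \
      .mode("overwrite") \
      .csv("s3://my-bucket/output/")

    # Option 2: keep the existing partitioning and cap the rows per output file
    # instead of forcing an exact partition count.
    df.write \
      .option("maxRecordsPerFile", 5000000) \
      .option("compression", "gzip") \
      .mode("overwrite") \
      .csv("s3://my-bucket/output/")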
