1 answer
That ratio is normally larger (e.g. 2x or more), because CSV stores every row as uncompressed text while Parquet's columnar compression is much more efficient.
This must mean your data has many columns with repeated values.
Can you please explain a bit more? I don't have any repeated values as such, but a few values are nulls.
How can we optimise this? Without repartition, I tried writing to S3 in CSV format and it came to 500 GB of data.
Avoid the shuffle if you can; otherwise, don't worry too much about the amount, since the transfer is quite fast.
The data is skewed, so I am using repartition to distribute it evenly, which is resulting in huge shuffle writes. Even without repartition, it takes around 1 hour to complete with G.2X workers and 60 DPUs.
This Parquet data is being read directly from Glue Catalog tables.
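One standard remedy for skew that is cheaper than a full even repartition is key salting: append a random suffix to the hot key so its rows hash to many partitions instead of one. This is not what the poster describes, just a common alternative; the sketch below simulates it in plain Python (all names invented), with `hash(key) % NUM_PARTITIONS` standing in for Spark's hash partitioner. In Spark you would repartition on a salted column instead.

```python
# Hypothetical sketch: key salting to spread a skewed key across
# partitions, reducing the oversized shuffle writes a skewed
# repartition produces.
import random
from collections import Counter

random.seed(0)

# Simulated skew: 90% of rows share one hot key.
rows = ["hot_key"] * 9000 + [f"key_{i}" for i in range(1000)]

NUM_PARTITIONS = 16

def partition_of(key: str) -> int:
    # Stand-in for Spark's hash partitioner.
    return hash(key) % NUM_PARTITIONS

# Without salting: one partition receives ~90% of all rows.
plain = Counter(partition_of(k) for k in rows)

# With salting: a random suffix spreads the hot key over partitions.
def salted(key: str) -> str:
    return f"{key}#{random.randrange(NUM_PARTITIONS)}"

salted_counts = Counter(partition_of(salted(k)) for k in rows)

print("max rows per partition, plain: ", max(plain.values()))
print("max rows per partition, salted:", max(salted_counts.values()))
```

After any downstream aggregation you strip the salt and combine the partial results; the shuffle itself moves the same data, but no single task (or output file) becomes a straggler.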