Delete old parquet files of overwritten Iceberg table


I am trying to write a PySpark dataframe to S3 and the AWS Glue Data Catalog in the Iceberg format, using pyspark.sql.DataFrameWriterV2 with the createOrReplace function. When I write the same dataframe twice in a row, I see that every parquet file on S3 exists twice, with slightly different names (hashes), in each partition. However, when I read the table with SQL, I get the expected number of rows, which matches the number of rows in the dataframe. Is there a way to automatically delete the overwritten/superseded parquet files?
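
For reference, a minimal sketch of the write path described above; the catalog, bucket, database, table, and partition column names are placeholders:

```python
# Sketch of the write path (placeholder names); assumes the Iceberg Spark
# runtime and AWS bundle jars are on the classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue_catalog",
            "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.warehouse", "s3://my-bucket/warehouse/")
    .getOrCreate()
)

df = spark.read.parquet("s3://my-bucket/input/")  # placeholder source

# DataFrameWriterV2: createOrReplace() swaps the table contents atomically,
# but the parquet files of the previous snapshot stay on S3 for time travel.
(
    df.writeTo("glue_catalog.my_db.my_table")
      .using("iceberg")
      .partitionedBy(df["event_date"])
      .createOrReplace()
)
```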

asked 3 months ago · 575 views
1 Answer

That's normal Iceberg behavior: it keeps the old data files so you can still read the table as it was in the past (time travel).
In the table configuration you can set when snapshots expire, or you can force expiration yourself, see: https://iceberg.apache.org/docs/latest/maintenance/ https://iceberg.apache.org/docs/latest/configuration/
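
For example, expiration can be forced with Iceberg's Spark maintenance procedures; a minimal sketch with placeholder catalog and table names (the Iceberg SQL extensions must be enabled on the session):

```python
# Force cleanup with the Iceberg Spark procedures from the maintenance docs;
# "glue_catalog" and "my_db.my_table" are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes Iceberg/Glue config is already set

# Expire snapshots older than a cutoff; data files that are no longer
# referenced by any remaining snapshot are physically deleted from S3.
spark.sql("""
    CALL glue_catalog.system.expire_snapshots(
        table => 'my_db.my_table',
        older_than => TIMESTAMP '2024-01-01 00:00:00',
        retain_last => 1
    )
""")

# Optionally clean up files that no snapshot references at all
# (e.g. leftovers from failed writes).
spark.sql("CALL glue_catalog.system.remove_orphan_files(table => 'my_db.my_table')")
```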

AWS EXPERT
answered 3 months ago
  • Thanks for your answer, Gonzalo!

    My problem is that I need to physically delete data older than 10 years due to legal regulations. The age is defined by the value of a certain column in my tables, because I processed a large chunk of old data at once, so the time the data was written does not correspond to its real age. Do you have a hint for how to implement that? My impression is that the snapshot expiration mechanism works only on the physical age of the objects on S3.
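
A possible approach is a row-level DELETE on the business date column, followed by snapshot expiration so the rewritten files are physically removed; a minimal sketch with placeholder table and column names:

```python
# Sketch only: delete rows by the business date column, then expire old
# snapshots so the parquet files still containing those rows are removed
# from S3. "glue_catalog", "my_db.my_table", and "my_date_col" are placeholders.
from datetime import datetime, timezone
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes Iceberg/Glue config and SQL extensions

# Row-level delete based on the column that defines the real age of the data.
spark.sql("""
    DELETE FROM glue_catalog.my_db.my_table
    WHERE my_date_col < date_sub(current_date(), 10 * 365)
""")

# The DELETE only creates a new snapshot; older snapshots (and their files)
# still hold the deleted rows until they are expired.
cutoff = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
spark.sql(f"""
    CALL glue_catalog.system.expire_snapshots(
        table => 'my_db.my_table',
        older_than => TIMESTAMP '{cutoff}',
        retain_last => 1
    )
""")
```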
