Glue Hudi: get the freshly added or updated records


Hello,

I'm using Hudi connector in Glue, first, I bulk inserted the initial dataset to Hudi table, I'm adding a daily incremental records and I can query them using Athena, what I'm trying to do is to get the newly added, updated or deleted records in a separate parquet file.

I've tried different approaches and configurations using both Copy on Write and Merge on Read tables, but I still can't get the updates into a separate file.

I used these configurations in different combinations:

hudi_options = {
    'className': 'org.apache.hudi',
    'hoodie.datasource.hive_sync.use_jdbc': 'false',
    'hoodie.datasource.write.precombine.field': 'ts',
    'hoodie.datasource.write.recordkey.field': 'uuid',
    'hoodie.payload.event.time.field': 'ts',
    'hoodie.table.name': 'table_name',
    'hoodie.datasource.hive_sync.database': 'hudi_db',
    'hoodie.datasource.hive_sync.table': 'table_name',
    'hoodie.datasource.hive_sync.enable': 'false',
    # 'hoodie.datasource.write.partitionpath.field': 'date:SIMPLE',
    'hoodie.datasource.write.hive_style_partitioning': 'true',
    'hoodie.meta.sync.client.tool.class': 'org.apache.hudi.aws.sync.AwsGlueCatalogSyncTool',
    'hoodie.datasource.write.table.type': 'COPY_ON_WRITE',
    'path': 's3://path/to/output/',
    # 'hoodie.datasource.write.operation': 'bulk_insert',
    'hoodie.datasource.write.operation': 'upsert',
    # 'hoodie.datasource.hive_sync.partition_extractor_class': 'org.apache.hudi.hive.NonPartitionedExtractor',
    # 'hoodie.datasource.hive_sync.partition_extractor_class': 'org.apache.hudi.hive.MultiPartKeysValueExtractor',
    'hoodie.datasource.write.keygenerator.class': 'org.apache.hudi.keygen.NonpartitionedKeyGenerator',
    # 'hoodie.compaction.payload.class': 'org.apache.hudi.common.model.OverwriteWithLatestAvroPayload',
    # 'hoodie.cleaner.policy': 'KEEP_LATEST_COMMITS',
    'hoodie.cleaner.delete.bootstrap.base.file': 'true',
    'hoodie.index.type': 'GLOBAL_BLOOM',
    'hoodie.file.index.enable': 'true',
    'hoodie.bloom.index.update.partition.path': 'true',
    'hoodie.bulkinsert.shuffle.parallelism': 1,
    # 'hoodie.datasource.write.keygenerator.class': 'org.apache.hudi.keygen.CustomKeyGenerator'
}
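
For reference, this is roughly how the write itself looks in my Glue job (a simplified sketch; "df" stands in for the daily batch DataFrame and the S3 path is a placeholder):

from pyspark.context import SparkContext
from awsglue.context import GlueContext

sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)
spark = glueContext.spark_session

# df is the incoming daily batch, already loaded as a Spark DataFrame (placeholder)
df.write.format('hudi') \
    .options(**hudi_options) \
    .mode('append') \
    .save('s3://path/to/output/')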

Thank you.

Asked 2 years ago · Viewed 729 times
1 Answer

At the moment, Hudi does not have a feature that fits the use case you described. With a Copy on Write table, incoming updates are merged into the base Parquet files automatically at write time, so there is no separate file containing only the changes. With a Merge on Read table, incremental updates are logged to delta files (in Avro format) and later compacted into the base Parquet files. Reference: https://hudi.apache.org/docs/concepts/#table-types

As a possible workaround, you could implement your own write and compaction logic with custom ETL jobs on a native Glue Catalog table instead of using a Hudi table. See the sketch below.
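
A minimal sketch of such a custom job follows. The paths and the key/timestamp column names ('uuid', 'ts') are assumptions taken from your configuration; adjust them to your schema. It writes the incoming daily batch to its own "changes" prefix (so the new/updated records land in a separate Parquet file) and then compacts it into a full snapshot:

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

# Hypothetical paths; 'uuid' and 'ts' come from the question's config
full_table_path = 's3://path/to/output/full/'              # current compacted snapshot
changes_path = 's3://path/to/output/changes/2023-01-01/'   # per-run change file
new_snapshot_path = 's3://path/to/output/full_new/'        # next compacted snapshot
daily_batch_path = 's3://path/to/input/daily/'

daily = spark.read.parquet(daily_batch_path)

# 1) Persist the freshly added/updated records as their own Parquet output
daily.write.mode('overwrite').parquet(changes_path)

# 2) Compact: union with the existing snapshot and keep the latest 'ts' per 'uuid'
existing = spark.read.parquet(full_table_path)
latest = Window.partitionBy('uuid').orderBy(F.col('ts').desc())
merged = (existing.unionByName(daily)
          .withColumn('rn', F.row_number().over(latest))
          .filter(F.col('rn') == 1)
          .drop('rn'))

# Write to a new location (Spark cannot safely overwrite a path it is reading),
# then repoint the Glue Catalog table / downstream consumers to it
merged.write.mode('overwrite').parquet(new_snapshot_path)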

I hope this helps!

AWS
Ethan_H
answered 1 year ago
