2 Answers
You cannot read just the new columns; for that you would need a columnar format like Parquet.
Also, incremental ingestion normally refers to loading new files. For that you could use Glue job bookmarks (running a Glue job instead of Spectrum), or put new files into different folders (partitions) and tell Spectrum to read only those.
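The folder-per-partition approach mentioned above can be sketched in plain Python. This is only an illustration: the bucket name, the external table `spectrum.events`, and the partition column `load_date` are hypothetical, not from this thread.

```python
from datetime import date

def partition_prefix(bucket: str, load_date: date) -> str:
    """S3 folder for one day's files, using Hive-style partition naming
    (key=value in the path) so Spectrum can prune by partition."""
    return f"s3://{bucket}/events/load_date={load_date.isoformat()}/"

def spectrum_query(load_date: date) -> str:
    """Query restricted to a single partition instead of scanning all files."""
    return (
        "SELECT * FROM spectrum.events "
        f"WHERE load_date = '{load_date.isoformat()}'"
    )

print(partition_prefix("my-bucket", date(2024, 1, 15)))
# s3://my-bucket/events/load_date=2024-01-15/
```

The upload side writes each new batch under its own `load_date=...` prefix; the query side then only pays for scanning that one folder.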
Have you configured an ETL job to merge the data? https://github.com/sinemozturk/INCREMENTAL-DATA-LOADING-FROM-AWS-S3-BUCKET-TO-REDSHIFT-BY-USING-AWS-GLUE-ETL-JOB
We want to explore options for loading the data without flattening the JSON in an AWS Glue job, to reduce the billing.
How can we dynamically change the partition values so that we can automate this job?
If you mean filtering partitions, you would need to build your query with the values you need, for instance using the current date for date-related columns.
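As a minimal sketch of that idea: derive the partition value from the run date instead of hard-coding it, so a scheduled job needs no manual edits. The column name `load_date` and the one-day lag are assumptions for illustration.

```python
from datetime import date, timedelta
from typing import Optional

def partition_filter(run_date: Optional[date] = None) -> str:
    """WHERE clause targeting the previous day's partition.
    Defaults to today's date so a daily scheduled run picks up
    yesterday's files automatically. `load_date` is a hypothetical column."""
    run_date = run_date or date.today()
    target = run_date - timedelta(days=1)
    return f"WHERE load_date = '{target.isoformat()}'"

print(partition_filter(date(2024, 1, 16)))
# WHERE load_date = '2024-01-15'
```

The same pattern works for hour- or month-grained partitions; only the `timedelta` and the date format change.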