We created a Timestream batch load job to backfill a table. The input was approximately 70 CSV files, each between 7 MB and 15 MB. The job finished successfully, reporting that 0 records failed to be ingested.
However, during a manual sanity check, we discovered that some records in the CSV files were never ingested.
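For context, our sanity check is essentially a set difference between the source CSVs and an export of the queried table. A minimal sketch of that comparison (the column names passed as `key_cols` are placeholders; use your own time, dimension, and measure columns):

```python
import csv

def record_keys(csv_path, key_cols):
    """Return the set of composite keys (time + dimensions + measure) in a CSV."""
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        return {tuple(row[c] for c in key_cols) for row in reader}

def find_dropped(source_csvs, exported_csv, key_cols):
    """Keys present in the source files but absent from the queried export."""
    source = set()
    for path in source_csvs:
        source |= record_keys(path, key_cols)
    return source - record_keys(exported_csv, key_cols)
```

Running `find_dropped` over the batch load inputs and a CSV export of `SELECT *` from the target table is what surfaced the missing records.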
We initially thought the records might take some time to become queryable even after the job reports successful completion, but that is not the case: we waited a couple of days and the records still were not there.
We tried this with another, larger table (about 100 CSV files, roughly 150 MB each) and the same record-dropping issue occurred.
What is interesting is that the same records are dropped every time we ingest the same CSV files via a batch load job. Not all records are dropped, though; only a relatively small number.
We did not experience dropped records when writing directly to the magnetic store instead of using a batch load job.
Any ideas?
Thank you.
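To clarify what "0 records failed" means in our case: we are going by the `ProgressReport` returned by `DescribeBatchLoadTask`. A sketch of how we read it, with a helper that flags records the report does not account for at all (the `TaskId` is a placeholder; the field names follow the `timestream-write` `DescribeBatchLoadTask` API):

```python
def summarize_progress(report):
    """Summarize a batch load ProgressReport and flag unaccounted records."""
    processed = report.get("RecordsProcessed", 0)
    ingested = report.get("RecordsIngested", 0)
    parse_fail = report.get("RecordsParseFailures", 0)
    ingest_fail = report.get("RecordsIngestFailures", 0)
    return {
        "processed": processed,
        "ingested": ingested,
        "parse_failures": parse_fail,
        "ingest_failures": ingest_fail,
        # Records neither ingested nor reported as any kind of failure.
        "unaccounted": processed - ingested - parse_fail - ingest_fail,
    }

# Live call (needs AWS credentials; TaskId is a placeholder):
# import boto3
# client = boto3.client("timestream-write")
# desc = client.describe_batch_load_task(TaskId="YOUR-TASK-ID")
# print(summarize_progress(desc["BatchLoadTaskDescription"]["ProgressReport"]))
```

For our jobs both failure counters are zero, which is why the silently missing records are so confusing.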
The time, dimensions, and measure (no custom user-defined partition key) are unique for the records that were dropped. In other words, there are no duplicate records with the same time, dimensions, and measure.
Is that what you meant?