Time series records lost during AWS Timestream batch load job


We created a Timestream batch load job to backfill a table. The input was approximately 70 CSV files, each between 7 MB and 15 MB. The job completed successfully, reporting that 0 records failed to ingest.

However, during a manual sanity check, we discovered that some records in the CSV files were not ingested.

We first thought that records might take some time to become queryable even after the job reports successful completion, but that is not the case: we waited a couple of days and the records were still missing.

We tried this with another, larger table (about 100 CSV files of roughly 150 MB each) and the same record-dropping issue occurred.

What is interesting is that the same records are dropped every time we ingest the same CSV files via a batch load job.

Not all records are dropped, though; only a relatively small number.

We did not experience dropped records when writing directly to the magnetic store instead of using a batch load job.

Any ideas?

Thank you.

asked 10 months ago · 247 views
2 Answers
Accepted Answer

Tried it again and waited 48 hours, and the records showed up. More documentation from AWS on this aspect would help.
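For anyone else hitting this, one way to confirm that the records eventually become visible is to poll a count query until it reaches the expected number. A minimal sketch; `run_count_query` is a hypothetical callable (e.g. a wrapper around your Timestream query client), and the intervals here are illustrative:

```python
import time

def wait_for_count(run_count_query, expected, interval_s=60, timeout_s=3600):
    """Poll a count query until it reaches the expected record count
    or the timeout expires; returns the last observed count."""
    deadline = time.monotonic() + timeout_s
    count = run_count_query()
    while count < expected and time.monotonic() < deadline:
        time.sleep(interval_s)
        count = run_count_query()
    return count
```

In practice you would pass a function that runs something like `SELECT COUNT(*) FROM "db"."table"` and returns the scalar result, with `interval_s` and `timeout_s` set to match the 48-hour window observed here.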

answered 10 months ago

This might be an idempotency issue. Did you check whether the batch load mapping and time granularity keep time + dimensions + measure unique per partition? Since you describe the problem as deterministic, that would be my first guess.
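One way to check this locally is to scan the CSV rows for collisions on the key columns, since rows sharing the same time, dimensions, and measure name would be treated as duplicates of one record rather than distinct records. A minimal sketch; the column names `time`, `device_id`, and `measure_name` are hypothetical, so adjust them to your data model:

```python
from collections import Counter

def find_colliding_rows(csv_rows, key_columns):
    """Count rows sharing the same key tuple; keys appearing more than
    once are candidates for records that ingestion would collapse."""
    counts = Counter(tuple(row[c] for c in key_columns) for row in csv_rows)
    return {key: n for key, n in counts.items() if n > 1}

# In-memory rows for illustration; in practice, read each file
# with csv.DictReader and feed the rows in.
rows = [
    {"time": "1700000000", "device_id": "a", "measure_name": "temp", "value": "20.1"},
    {"time": "1700000000", "device_id": "a", "measure_name": "temp", "value": "20.2"},
    {"time": "1700000001", "device_id": "a", "measure_name": "temp", "value": "20.3"},
]
collisions = find_colliding_rows(rows, ["time", "device_id", "measure_name"])
print(collisions)  # one key tuple appears twice
```

If this reports no collisions on the dropped records, the duplicate-key theory can be ruled out for those files.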

answered 10 months ago
  • The time, dimensions, and measure (with no custom user-defined partition key) are unique for the records that were dropped. In other words, there are no duplicate records with the same time, dimensions, and measure.

    Is that what you meant?
