Hello,
With the introduction of Timestream query pricing based on TimeSeries Compute Units (TCUs), understanding and estimating query costs has become more nuanced. While scan size still plays a role in determining query cost, it is no longer the only factor. Here's a breakdown of how query costs are calculated in Timestream:
**TimeSeries Compute Units (TCUs):** TCUs are the unit of measure for query processing in Timestream. They represent the compute resources consumed by executing queries against Timestream data. Pricing is based on the number of TCUs a query consumes.

**Query Complexity and Processing:** The complexity of the query, including the number of time series and data points scanned, the functions applied, and the duration of query execution, all contribute to TCU consumption.

**Data Scanned:** While not explicitly stated in the pricing documentation, it is reasonable to assume that the amount of data scanned still plays a role in TCU consumption. Queries that scan larger volumes of data may consume more TCUs.

**Query Editor in the Console:** When using the Query Editor in the AWS Management Console, TCU consumption is not explicitly shown. However, the complexity of the query and the data it scans still contribute to overall TCU consumption, which is reflected in the billing.

**Predicting TCU Consumption:** Predicting the exact TCU consumption of a query can be challenging due to the factors involved, including query complexity and data distribution. However, you can estimate TCU consumption from historical query patterns and adjust queries to optimize performance and cost.
**To better understand your Timestream query costs, consider the following steps:**

- **Monitor TCU consumption:** Use AWS Cost Explorer to monitor TCU consumption and associated costs over time. This can help identify trends and anomalies in query usage.
- **Optimize queries:** Review and optimize your queries to reduce complexity and minimize data scanned where possible. Use the query performance insights Timestream provides to identify optimization opportunities.
- **Implement cost controls:** Set up AWS Budgets and cost allocation tags to monitor Timestream query costs and keep them within your budgetary constraints.
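To make the monitoring and estimation advice above concrete, here is a minimal sketch of turning billed TCU-seconds into a dollar estimate. The per-TCU-hour rate below is a hypothetical placeholder, not an actual price; look up the rate for your region on the Timestream pricing page before relying on any number this produces.

```python
# Back-of-the-envelope Timestream query cost estimate from TCU-seconds.
# PRICE_PER_TCU_HOUR is an assumed placeholder rate, not a real price;
# check the Timestream pricing page for your region's actual rate.
PRICE_PER_TCU_HOUR = 0.50  # hypothetical USD rate

def estimate_query_cost(tcu_seconds: float) -> float:
    """Convert billed TCU-seconds into an estimated USD cost."""
    return (tcu_seconds / 3600) * PRICE_PER_TCU_HOUR

# Example: a query billed at the 30 TCU-seconds minimum discussed in
# this thread, at the assumed rate above:
print(f"${estimate_query_cost(30):.6f} per query")
```

Multiplying an estimate like this by your expected daily query volume gives a quick sanity check before committing a dashboard or ad hoc workload to Timestream.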
Just wanted to add: since the switch to TCU-based pricing for querying, our costs have gone up exponentially. A few queries yesterday cost $6. We expect to run thousands of ad hoc queries a day, which would make the cost insanely expensive based on these numbers.
It's pretty awful. For certain use cases it won't affect the cost much. For others - yours, and cases like mine where I'm querying recent values out of the memory store to display on dashboards - the new pricing is nuts. I think the biggest problem is the 30 TCU-seconds per query. That makes simple "get latest data" queries cost 4.3x as much as they used to. If I understand that correctly, then this is not only going to drive me to stop using Timestream, but also to recommend my customers move off it NOW.
The 30-second minimum is for the TCUs used, not per query. If your query takes 250 ms, you could run 4*7 = 28 concurrent queries per second on 4 TCUs.
Since you are charged per second for the compute used, your costs (up to a certain limit) do not increase linearly with the number of queries. In the example above, running 5 queries per second or 28 queries per second would cost the same, because you are using the TCUs for one second in both cases.
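To illustrate the per-second billing model described in the comment above, here is a minimal sketch. The `billed_tcu_seconds` function and its signature are illustrative, not a real API; it simply encodes the claim that the charge depends on active compute and wall-clock time, not on how many queries run in that time.

```python
def billed_tcu_seconds(active_tcus: float, elapsed_seconds: float,
                       query_count: int = 0) -> float:
    """Per-second billing model as described in the comment above:
    the charge scales with active compute and elapsed time. The
    query_count argument is deliberately ignored, since the number
    of queries packed into that time does not change the bill."""
    return active_tcus * elapsed_seconds

# One second on 4 active TCUs is billed as 4 TCU-seconds whether it
# served 5 queries or 28 queries in that second:
assert billed_tcu_seconds(4, 1.0, query_count=5) \
    == billed_tcu_seconds(4, 1.0, query_count=28) == 4.0
```

This is also why batching many short queries into the same active window can be cheaper than spreading them out: the compute is billed while it is active, regardless of query count.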

Hi, in case it helps, pricing is detailed here: https://aws.amazon.com/timestream/pricing/
"estimating query costs has become more nuanced" well that's the understatement of the year, it's not more "nuanced" it's more obfuscated. The old way to do this was to pay per GB scanned. Now, I don't know HOW to estimate it. "Try it, and use the cost monitor" is not a very good answer. I don't think they very clearly say what minimum TCUs per query is. This whole thing seems like a huge mess.