
Timestream for Live Analytics: Is it active and how can I see my TCU usage?

0

Do I explicitly need to configure a Max TCU before TCU-based billing activates? I don't see TCU hours anywhere in my bills (they still say DataScanned-Bytes). How do I know what my current monthly TCU usage is, so I can estimate my bill?

When I look in CloudWatch Metrics and sum the TCU usage, we're averaging 1 to 2 million a day. It seems completely ridiculous that what previously cost less than $15 a day would suddenly jump to $750,000 a day. So either my calculation is wrong, or the pricing model just doesn't make sense for our use case and we need to migrate away before AWS enforces TCU billing (which I still don't know whether we're already on).

asked a year ago · 287 views
2 Answers
0

All AWS accounts that onboard to Timestream for Live Analytics after April 29, 2024 default to TCU-based query pricing. If you are seeing DataScanned-Bytes in your bills, you are still on the bytes-scanned pricing model.

This new pricing model, based on Timestream Compute Units (TCUs), offers a more cost-effective and predictable way to query your time-series data. It aligns costs with the actual resources used, and you can configure the maximum number of compute units available to queries, which helps you stay within budget.

Timestream for Live Analytics provides the Amazon CloudWatch metric QueryTCU. This metric reports the number of compute units used in a given minute and is updated every minute, so a raw Sum over a day gives you TCU-minutes, not billable TCU-hours. That is why your calculation comes out so high.
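As a rough illustration, here is a minimal sketch of the conversion. The per-TCU-hour rate below is an assumed placeholder (check the Timestream pricing page for your region), and the arithmetic assumes each one-minute QueryTCU sample counts every TCU active during that minute:

```python
# Sketch: convert a daily CloudWatch Sum of the per-minute QueryTCU metric
# into TCU-hours and an estimated daily cost.
PRICE_PER_TCU_HOUR = 0.50  # assumed placeholder rate; varies by region

def estimate_daily_cost(daily_querytcu_sum: float) -> float:
    """daily_querytcu_sum is the CloudWatch Sum of QueryTCU over 24 hours.

    Each one-minute sample counts every active TCU once, so dividing
    by 60 converts the summed TCU-minutes into billable TCU-hours.
    """
    tcu_hours = daily_querytcu_sum / 60
    return tcu_hours * PRICE_PER_TCU_HOUR

# A daily Sum of 2,000,000 is ~33,333 TCU-hours, not 2,000,000 TCU-hours:
print(round(estimate_daily_cost(2_000_000), 2))
```

Under these assumptions the daily figure is large but nowhere near the $750,000 the raw sum suggests; plug in your region's actual rate for a real estimate.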

As an existing user, you can do a one-time opt-in to TCUs for better cost management and removal of the per-query minimum bytes metered. You can opt in using the AWS Management Console or the UpdateAccountSettings API operation with the AWS SDK or AWS CLI. In the API operation, set the QueryPricingModel parameter to COMPUTE_UNITS. Opting into the TCU pricing model is optional and irreversible: once you have transitioned your account to TCUs for query pricing, you cannot transition back to bytes-scanned pricing.
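For reference, the opt-in via the AWS CLI looks roughly like the sketch below. The MaxQueryTCU value of 8 is just an example cap, not a recommendation, and since the pricing-model change cannot be undone, verify everything against the documentation before running it:

```shell
# Inspect the current pricing model and TCU cap before changing anything.
aws timestream-query describe-account-settings

# One-time, irreversible opt-in to TCU-based query pricing.
# --max-query-tcu caps the compute available to queries (example value).
aws timestream-query update-account-settings \
    --query-pricing-model COMPUTE_UNITS \
    --max-query-tcu 8
```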

AWS
answered a year ago
  • Thanks for the clarification, that helps. And in terms of the pricing: is there any way to work out my usage before I switch? I am very concerned, because when I look at the admin dashboard in Timestream, it never drops below 16 TCU. Even without the query spikes, that takes my billing from a few hundred a month to a few thousand. Even in CloudWatch Metrics, no matter how I view the data, it still feels like it's going to go up by at least one order of magnitude.

0

1 TCU provides 4 vCPUs and 16 GB of memory, so 16 TCUs in constant use is 64 vCPUs and 256 GB of memory, a footprint that typically serves a high-volume application. Please make sure your data model is optimized (aim to keep cardinality as low as possible). Generally it is recommended to use a multi-measure table with a customer-defined partition key (CDPK) on a high-cardinality dimension that your queries frequently filter on. Efficient querying is also important: constrain the time range, filter on the CDPK, and include as many dimensions as possible in your predicates for better performance and cost optimization. Finally, leverage scheduled queries to pre-aggregate data.
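To make the query-shape advice concrete, here is a hypothetical example (the database, table, and column names are invented for illustration) of a query that bounds the time range and filters on the CDPK so Timestream can prune partitions instead of scanning broadly:

```sql
-- Hypothetical multi-measure table "iot"."metrics" with a customer-defined
-- partition key on device_id and a named measure column "temperature".
SELECT device_id,
       bin(time, 5m) AS binned_time,
       avg(temperature) AS avg_temp
FROM "iot"."metrics"
WHERE time BETWEEN ago(1h) AND now()   -- always bound the time range
  AND device_id = 'sensor-0042'        -- CDPK predicate enables partition pruning
GROUP BY device_id, bin(time, 5m)
ORDER BY binned_time
```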

Please refer following document for more details.

https://docs.aws.amazon.com/timestream/latest/developerguide/data-modeling.html
https://docs.aws.amazon.com/timestream/latest/developerguide/customer-defined-partition-keys.html
https://docs.aws.amazon.com/timestream/latest/developerguide/data-modeling.html#data-modeling-multiVsinglerecords
https://docs.aws.amazon.com/timestream/latest/developerguide/scheduledqueries.html

AWS
answered a year ago
