I already have a request in for AWS to add caching in Grafana for Timestream. We use Timestream with MultiMetric writes to help with this, but the main reason we went with it over OpenSearch was that we have 2 people on our team, and managing cross-regional OpenSearch wasn't something I wanted to do. We've been pretty happy with its scalability so far, but I'm concerned that it's only a single-region product.
OpenSearch works well with lower data volumes and simpler queries. At higher data volumes it's not as well suited for analytical queries as Timestream, and it becomes difficult to formulate certain queries in the OpenSearch query DSL.
One big advantage is that Timestream is completely serverless. With OpenSearch, although you don't have servers to manage directly, we do have to monitor node health (which we can see in CloudWatch): nodes can experience issues, and the cluster stays in an unhealthy state until the OpenSearch support team is paged (it will not resolve automatically). Timestream seems to be truly serverless, so there's no cluster for us to keep an eye on.
If you have lots of real time aggregations and want to write them in SQL (and are expecting high cost), then Timestream will fit in your use case.
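To illustrate the "real time aggregations in SQL" point: a rolling per-minute average is a one-liner in Timestream's SQL dialect using its `bin()` and `ago()` functions. The database, table, and measure names below are hypothetical placeholders:

```python
# Illustrative Timestream query string; "mydb"."metrics" and cpu_util
# are hypothetical names. bin() buckets timestamps into fixed intervals
# and ago() gives a relative time bound -- both are Timestream SQL functions.
query = """
SELECT bin(time, 1m) AS minute,
       avg(measure_value::double) AS avg_cpu
FROM "mydb"."metrics"
WHERE measure_name = 'cpu_util'
  AND time > ago(1h)
GROUP BY bin(time, 1m)
ORDER BY minute
"""

# It would be run with boto3's Timestream query client, e.g.:
# boto3.client("timestream-query").query(QueryString=query)
print(query.strip().splitlines()[0])
```

The same windowed aggregation in OpenSearch would require a date-histogram aggregation expressed in the query DSL, which is considerably more verbose.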
I found the following for you that lists other benefits of Timestream:
- https://towardsdatascience.com/amazon-timestream-is-finally-released-is-it-worth-your-time-e6b7eff10867
- https://k21academy.com/amazon-web-services/amazon-timestream/
- https://www.projectpro.io/recipes/explain-features-of-amazon-timestream
- https://sourceforge.net/software/compare/Amazon-Timestream-vs-Elasticsearch-vs-Google-Programmable-Search-Engine/
Thank you for your detailed answer. For my use case, I don't expect high volumes of data; rather, I expect a large number of queries.
How does "If you have lots of real time aggregations and want to write them in SQL (and are expecting high cost), then Timestream will fit in your use case." make sense?
Let's say I expose a dashboard to my customers, where each customer has their own table (for tenant isolation purposes). If the dashboard has 30 widgets, I'm going to pay for 30 × 10 MB worth of scanning, even when the 30 queries together only actually scanned a few MB.
Won't I end up with outsized read costs? How does this make sense with the Grafana integration, where every panel is a separate query? How do I work around this 10 MB minimum per query?
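The arithmetic behind this concern can be sketched as follows. The per-GB rate and per-panel scan size are assumed numbers for illustration (check current Timestream pricing); only the 10 MB billing floor per query is taken from the question above:

```python
# Sketch: estimate Timestream query cost under the 10 MB per-query minimum.
# Assumed figures: $0.01 per GB scanned (verify against current pricing),
# 30 dashboard widgets, each widget actually scanning ~1 MB of data.

PRICE_PER_GB = 0.01   # USD per GB scanned (assumed rate)
MIN_SCAN_MB = 10      # each query is billed for at least 10 MB

def query_cost_usd(actual_scan_mb: float) -> float:
    """Cost of one query: billed bytes are max(actual scan, 10 MB)."""
    billed_mb = max(actual_scan_mb, MIN_SCAN_MB)
    return billed_mb / 1024 * PRICE_PER_GB

widgets = 30
actual_mb_per_widget = 1.0  # each Grafana panel really scans ~1 MB

billed = widgets * query_cost_usd(actual_mb_per_widget)
ideal = widgets * (actual_mb_per_widget / 1024 * PRICE_PER_GB)
print(f"billed: ${billed:.6f} per dashboard load, vs ${ideal:.6f} without the floor")
```

Under these assumptions each dashboard load is billed for 300 MB instead of the ~30 MB floor-free case, i.e. roughly 10x the cost when panels scan far less than 10 MB each. Common mitigations are consolidating several panels into one query and caching results in front of Timestream, which matches the caching feature request mentioned in the answer above.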
The problem is that I'm going to show dashboards to my customers; they're not used internally, so lots of dashboards means lots of queries.