In QuickSight, filters in Direct Query mode are generally implemented as WHERE clauses in the SQL query that is sent to the underlying data source. This means that the filters are applied to the data source before the data is returned to QuickSight, which can result in improved query performance and reduced data transfer costs. However, the exact implementation may depend on the specific data source and query engine being used.
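To make that pushdown concrete, here is a minimal sketch (in Python) of the kind of Timestream SQL a Direct Query filter would produce: the time-range and dimension filters land in the WHERE clause, so Timestream scans less data before anything reaches QuickSight. The database, table, and column names (`my_db`, `my_table`, `region`) are hypothetical placeholders, not anything from this thread.

```python
# Sketch: how a QuickSight Direct Query filter might translate into a
# Timestream WHERE clause. All identifiers here are hypothetical.

def build_filtered_query(database: str, table: str, hours_back: int, region: str) -> str:
    """Compose a Timestream SQL query whose filters run server-side,
    so Timestream scans (and bills for) less data."""
    return (
        'SELECT time, measure_name, measure_value::double '
        f'FROM "{database}"."{table}" '
        f"WHERE time > ago({hours_back}h) "
        f"AND region = '{region}'"
    )

print(build_filtered_query("my_db", "my_table", 24, "eu-west-1"))
```

`ago()` is Timestream's relative-time function; restricting `time` like this is the main lever for limiting the bytes scanned per query.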
In your case, it sounds like the filters are not reducing the amount of data being scanned in Timestream, which is driving up query costs. To improve performance and reduce costs, you may want to apply more selective filters (a tight time-range predicate in particular), or use partitioning so that less data has to be scanned. You could also use aggregations or precompute summary data so that less data needs to be scanned and transferred in the first place.
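As a sketch of the pre-aggregation idea, the query below rolls raw measurements up into hourly averages with Timestream's `bin()` function, so only one row per bucket is returned to QuickSight instead of every raw point. Again, the names (`my_db`, `my_table`) are placeholders for illustration.

```python
# Hypothetical sketch of a pre-aggregated Timestream query: bucketing
# raw points into hourly averages shrinks what is scanned and returned.

def build_rollup_query(database: str, table: str,
                       bucket: str = "1h", days_back: int = 7) -> str:
    """Bucket measurements with bin() and aggregate per bucket,
    so only the summary rows leave the data source."""
    return (
        f"SELECT bin(time, {bucket}) AS time_bucket, "
        "avg(measure_value::double) AS avg_value "
        f'FROM "{database}"."{table}" '
        f"WHERE time > ago({days_back}d) "
        f"GROUP BY bin(time, {bucket}) "
        "ORDER BY time_bucket"
    )

print(build_rollup_query("my_db", "my_table"))
```

A rollup like this is also a natural candidate for loading into SPICE, since the summary data is small and changes predictably.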
Additionally, it's worth noting that using Direct Query mode with large data sources like Timestream can be expensive due to the cost of data transfer and query execution. You may want to consider using SPICE to pre-aggregate and cache frequently accessed data, or exploring other data stores like Amazon Redshift or Amazon Athena that are better suited to large-scale analysis.
Thanks so much for your response. If the filters are applied at the data source before data is returned, then that is exactly what I would expect.
The implementation and query engine is the built-in Timestream data source.
In general we do use precalculated aggregations - that is something I'm looking at implementing here. I would love to find some subset to cache inside of SPICE. Thanks for your answer.