2 Answers
Hi, to achieve your goal, create a metric filter from the CloudWatch log events that correspond to the slow queries. You will then be able to extract the duration of the query from the log event message; that duration becomes the value published to the CloudWatch metric tracking those events. You can then fire a CloudWatch alarm when the average is above your target.
In your case, you may in fact want to split the different queries into separate metrics for more granular tracking.
Links:
- Amazon RDS : turn on query logging for PostgreSQL https://repost.aws/knowledge-center/rds-postgresql-query-logging
- Creating metrics from log events using filters: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html
- Filter and pattern syntax to extract value from message: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html#extract-log-event-values
- Creating a CloudWatch alarm based on a metric threshold: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
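Putting the steps above together: a minimal sketch of the two API calls, written as plain parameter dicts so it runs without AWS credentials. All resource names (log group, metric name, namespace, alarm name) are placeholders, and the space-delimited filter pattern is an assumption that must be adapted to your actual log line layout. In a real account you would pass the dicts to boto3, e.g. `boto3.client("logs").put_metric_filter(**metric_filter)` and `boto3.client("cloudwatch").put_metric_alarm(**alarm)`.

```python
# Step 1: metric filter that extracts the duration (in ms) from slow-query
# log lines. In the space-delimited pattern, "..." skips leading fields;
# adjust the field list to match your log_line_prefix layout (assumption).
metric_filter = {
    "logGroupName": "/aws/rds/instance/mydb/postgresql",  # hypothetical name
    "filterName": "slow-query-duration",
    "filterPattern": '[..., label="duration:", duration_ms, unit="ms"]',
    "metricTransformations": [{
        "metricName": "SlowQueryDurationMs",
        "metricNamespace": "RDS/Custom",
        "metricValue": "$duration_ms",  # publish the extracted field's value
    }],
}

# Step 2: alarm when the 5-minute average duration exceeds 1000 ms.
alarm = {
    "AlarmName": "rds-slow-queries",
    "Namespace": "RDS/Custom",
    "MetricName": "SlowQueryDurationMs",
    "Statistic": "Average",
    "Period": 300,
    "EvaluationPeriods": 1,
    "Threshold": 1000.0,
    "ComparisonOperator": "GreaterThanThreshold",
}

print(metric_filter["filterPattern"])
print(alarm["Threshold"])
```

Separate metrics per query (as suggested above) would simply be additional `put_metric_filter` calls with more specific patterns and distinct metric names.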
Best,
Didier
Hi,
Also look into RDS event notifications: https://repost.aws/knowledge-center/create-rds-event-subscription
None of these notifications are related to query metrics.

Hello, I have the same question for PostgreSQL. I followed all the steps of this workaround, but I can't capture the queries longer than 1000 ms.
This is my filter, which does not match anything: [timestamp, connection_id, user, dbname, pid, log_type, duration>=1000 ms]
But when running with * the filter works perfectly, so I guess something is wrong with my filter.
This one works, but matches all queries: [timestamp, connection_id, user, dbname, pid, log_type, duration=* ms]
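One way to debug a space-delimited pattern like this is to tokenize a sample log line yourself and check what actually sits in the `duration` position: a numeric comparison such as `duration>=1000` can only match when that whitespace-separated token is a bare number, whereas `duration=*` matches any token at all, which would explain the behavior above. A minimal local check, assuming a hypothetical log line layout (the real one depends on your `log_line_prefix` setting):

```python
# Hypothetical PostgreSQL log line; adjust to your actual format.
line = ("2024-05-01 12:00:00 UTC:10.0.0.1(5432):appuser@mydb:[4242]:LOG:  "
        "duration: 1523.456 ms  statement: SELECT count(*) FROM orders")

# CloudWatch space-delimited filters split the message on whitespace,
# so mimic that and inspect the tokens.
tokens = line.split()

# In this layout the numeric value sits one token AFTER a literal
# "duration:" label, so the pattern position for `duration` must point
# at the bare number, not at the label.
idx = tokens.index("duration:")
duration_ms = float(tokens[idx + 1])

print(duration_ms)          # 1523.456
print(tokens[idx + 2])      # ms
print(duration_ms >= 1000)  # True -> this line should trip a >=1000 filter
```

If the token your pattern assigns to `duration` turns out to be something like `duration:` or `1523.456ms` rather than a bare number, the `>=1000` comparison will never match even though `duration=*` still does.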