Wow, that's a lot of detail. It sounds like the working set of your queries is larger than the available memory. DocumentDB reserves part of instance memory for caching indexes and query results, but if the working set does not fit, it has to fetch data from disk repeatedly.
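If you want to confirm that, the BufferCacheHitRatio metric in CloudWatch is a good indicator: sustained averages well below 100% mean reads are going to disk. A minimal sketch of pulling it with boto3 (the instance identifier, region, and time window are placeholders):

```python
# Minimal sketch: read BufferCacheHitRatio for a DocumentDB instance.
# The instance identifier and region below are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.now(timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/DocDB",
    MetricName="BufferCacheHitRatio",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-docdb-instance"}],
    StartTime=end - timedelta(hours=24),
    EndTime=end,
    Period=300,  # 5-minute datapoints
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    # Sustained averages well below 100% suggest the working set
    # does not fit in the buffer cache and reads are hitting disk.
    print(point["Timestamp"], round(point["Average"], 2))
```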
You may also have queries that are not written optimally and are not taking advantage of indexes. Without seeing the actual queries it is hard to say for certain, but suboptimal queries lead to extra disk I/O.
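To check whether a particular query can use an index, run it through the explain command: a COLLSCAN stage in the winning plan means a full collection scan, while IXSCAN means an index is being used. A minimal pymongo sketch, with a placeholder endpoint, collection, and filter:

```python
# Minimal sketch: inspect the query plan DocumentDB chooses.
# Endpoint, credentials, database, collection, and filter are
# placeholders; TLS/CA options are elided.
from pymongo import MongoClient

client = MongoClient("mongodb://user:pass@my-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017")
db = client["mydb"]

plan = db.command(
    "explain",
    {"find": "orders", "filter": {"status": "pending"}},
)

# A COLLSCAN stage here means no index was used for this query.
print(plan["queryPlanner"]["winningPlan"])
```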
Concurrency and lock contention could be causing queries to wait on each other. With a high volume of queries, some may be queued behind others. Temporary spikes in load could be causing queueing and slower response times for queries during those periods. The instance size of db.r6g.2xlarge may be insufficient to handle the workload, though the average CPU utilization does not seem exceptionally high.
To troubleshoot further, I would recommend the DocumentDB profiler, which serves as the slow query log: it captures operations slower than a configurable threshold in CloudWatch Logs so you can analyze the slowest queries in detail.
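Note that in DocumentDB the profiler is enabled through the cluster parameter group (db.setProfilingLevel() is not supported). A minimal boto3 sketch, assuming a placeholder parameter group name and a 100 ms threshold; the cluster also needs the profiler log export enabled for entries to reach CloudWatch Logs:

```python
# Minimal sketch: turn on the DocumentDB profiler via the cluster
# parameter group. The parameter group name is a placeholder.
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

docdb.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-docdb-params",
    Parameters=[
        {"ParameterName": "profiler",
         "ParameterValue": "enabled",
         "ApplyMethod": "immediate"},
        # Log operations slower than 100 ms.
        {"ParameterName": "profiler_threshold_ms",
         "ParameterValue": "100",
         "ApplyMethod": "immediate"},
    ],
)
```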
Wow! Thank you for your advice. Indeed, there might be some inefficient queries, and I'm sure there are no transactions. One thing I couldn't understand: when I copied a slow query (over 100 ms in the profiler) and pasted it into the console with explain() to see the execution plan, it estimated far less time, say 0.017 or less. Anyway, it seems the instance size is insufficient, as you advised, so I'll try upgrading it. Thanks again.
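For what it's worth, one likely explanation for that gap: explain() at its default verbosity only plans the query, so the number it shows is not a measured runtime, and even an actual re-run on an idle connection hits a warm cache without the queueing that production load causes. Requesting executionStats makes the server execute the query and report a measured time; a minimal pymongo sketch with placeholder names:

```python
# Minimal sketch: explain with executionStats runs the query and
# reports a measured server-side time. Names are placeholders;
# TLS/CA options are elided.
from pymongo import MongoClient

client = MongoClient("mongodb://user:pass@my-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017")
db = client["mydb"]

stats = db.command(
    "explain",
    {"find": "orders", "filter": {"status": "pending"}},
    verbosity="executionStats",
)["executionStats"]

# Measured time for this run; it still excludes time the operation
# would spend queued behind others under production load.
print(stats["executionTimeMillis"])
```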