From the information provided, it looks like the issue is related to a growing terms index. Even though newer versions of Elasticsearch try to use less memory, the pressure you're seeing could be the JVM constantly trying to garbage-collect an exhausted heap. Also, check how your shards are set up — too many small shards per node drives up heap usage.
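As a starting point, you can inspect shard layout and JVM heap pressure with the `_cat` and `_nodes` APIs. This is a minimal sketch — replace `https://your-domain-endpoint` with your actual OpenSearch/Elasticsearch domain endpoint, and add authentication (e.g. `--user` or SigV4 signing) as your domain requires:

```shell
# List every shard, its size, and which node holds it —
# look for nodes carrying a disproportionate number of shards.
curl -s "https://your-domain-endpoint/_cat/shards?v&s=node"

# Per-node JVM stats: heap_used_percent sustained above ~75%
# means the garbage collector is struggling to keep up.
curl -s "https://your-domain-endpoint/_nodes/stats/jvm?pretty"
```

If `heap_used_percent` stays high even after old-generation garbage collections, that is consistent with the memory pressure described above.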
AWS Elasticsearch instances allocate half of their memory to heap space. Whether to scale up, out, or both depends a lot on your mapping. You'll find some quick relief by scaling up to an instance with more memory, but you'll have to take a deeper look at your mapping and queries to get the best long-term scalability.
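A quick way to see whether you're bumping against that heap ceiling is the `_cat/nodes` API with heap and RAM columns selected. Again a sketch — `https://your-domain-endpoint` is a placeholder for your domain endpoint:

```shell
# Compact per-node view: heap.percent should sit well below ram.percent,
# since the JVM heap is only half of instance memory.
curl -s "https://your-domain-endpoint/_cat/nodes?v&h=name,heap.percent,ram.percent,node.role"
```

If `heap.percent` is consistently near 100 while `ram.percent` has headroom, scaling up to a larger instance type buys you a bigger heap, but revisiting the mapping (e.g. disabling fields you never search on) is the longer-term fix.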
Elasticsearch is very picky. The hardware it likes to run on, while always memory-intensive, varies a lot based on your mapping and the types of queries you throw at it. It is likely that, after you get it stable, you'll still have to tweak it to find the sweet spot for your performance, cost, and storage needs.
I would also advise opening a case with the AWS PS team and working with them to double-check your configuration.