EC2 instance out of memory, trying to find root cause of memory error


We have a Node application running on an EC2 instance that spins up a process for each CPU core on the instance. Lately the instance keeps running out of memory and killing the processes. The redundancy works perfectly fine: another instance spins up and the load balancer directs traffic to it when the original instance becomes unresponsive.

The problem is that we are now trying to discover what causes the instance to run out of memory, so that we can optimize our code and prevent it from happening in the first place. Unfortunately we cannot check the logs for this because of log suppression. We have tried turning the suppression off, but to no avail; we still end up with a gap in our logs covering the period when the instance was unresponsive.

Any ideas how we might find out what in our application is causing the instance to run out of memory?

Asked 8 months ago · 479 views

1 Answer

I'd suggest installing the CloudWatch agent, which continuously ships logs to CloudWatch as they are written.
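For reference, a minimal agent configuration that ships both an application log file and instance memory metrics might look like the sketch below. The `mem_used_percent` measurement and the `{instance_id}` placeholder are real agent features; the file path and log group name are placeholders you would replace with your own:

```json
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"]
      }
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp/*.log",
            "log_group_name": "myapp",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

With `mem_used_percent` charted in CloudWatch, you can also see memory climbing on an instance before it becomes unresponsive, even if the application logs themselves stop.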

Here are references to the AWS documentation on this:

AWS · Expert · Answered 8 months ago
AWS · Expert · Reviewed 8 months ago
  • We already have CloudWatch running on the instance; that is mostly how we search our logs. However, since the logs are not being produced in the first place, there is nothing for the agent to send to CloudWatch.

  • Is it happening continuously for newly launched instances too, or did it happen for only one instance?

  • It happens sporadically on random instances. It is almost certainly a mistake in our programming, but without the logs it is almost impossible to find out at what point something goes wrong.
