EC2 instance out of memory, trying to find root cause of memory error

We have a Node application running on an EC2 instance that spins up a process for each CPU core on the instance. Lately the instance keeps running out of memory and killing the process. The redundant system works perfectly fine, meaning that it spins up another instance and the load balancer directs traffic towards it when the original instance becomes unresponsive.

The problem is that we are now trying to discover what causes the instance to run out of memory in the first place, so that we can optimize our code and prevent it from happening again. Unfortunately we cannot check the logs for this because of log suppression. We have tried turning this off, but to no avail: we still end up with a gap in our logs from when the instance was unresponsive.

Any ideas how we might find out what in our application is causing the instance to run out of memory?

Asked 9 months ago · 482 views
1 Answer

I'd suggest installing the CloudWatch agent, which would continuously ship logs to CloudWatch.

Here are the references to the AWS documentation on this:
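Beyond shipping application logs, the agent can also collect memory metrics, which survive even when the application itself stops logging. A sketch of the relevant parts of an agent config (`amazon-cloudwatch-agent.json`); the file path, log group, and stream names below are placeholders, not values from the original post:

```json
{
  "agent": {
    "metrics_collection_interval": 60
  },
  "metrics": {
    "metrics_collected": {
      "mem": { "measurement": ["mem_used_percent", "mem_available"] },
      "swap": { "measurement": ["swap_used_percent"] }
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp/*.log",
            "log_group_name": "myapp",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

With `mem_used_percent` charted in CloudWatch, you can correlate the timing of the memory climb with your application logs even when the final minutes of logs are missing.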

AWS
EXPERT
Answered 9 months ago
AWS
EXPERT
Reviewed 9 months ago
  • We already have CloudWatch running on the instance. That is mostly how we search our logs. However, since the logs are not being produced in the first place, they cannot be sent to CloudWatch.

  • Is it happening continuously for newly launched instances too, or did it happen for just one instance?

  • It happens sporadically on random instances. It is almost certainly a mistake in our programming, but without the logs it is almost impossible to find out at what point something goes wrong.
