Lambda Max Memory Used capped below available memory


A Lambda of ours (which uses Puppeteer) has started crashing. The errors don't correlate with any code change, but they do correlate with a memory spike, and a few of them indicate that there's no space left in the /tmp directory. The REPORT logs from the Lambda show that Max Memory Used is far below Memory Size, but it appears capped at an arbitrary number: it always reports a Max Memory Used of 580 MB, when it has 2048 MB available to it.

I increased the amount of Ephemeral Storage available to the Lambda, which didn't help. I also print out the size and contents of the /tmp directory using du whenever the Lambda errors out, and the directory doesn't report being 100% full (often it's only around 25% full). I've re-deployed the current version of the code to the Lambda in an attempt to force a refresh, which also didn't help.
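A sketch of the kind of /tmp check described above (the exact flags are my own choices, not the asker's). One detail worth noting: `du` only counts files that still have directory entries, while `df` counts allocated blocks, so a file that has been deleted but is still held open by a process (Chromium is a common culprit) shows up in `df` but not in `du`. That can make /tmp look only 25% full even as writes fail with "no space left on device":

```shell
# Total ephemeral storage used, as seen by directory traversal:
du -sh /tmp

# Largest leftover entries, to spot what a previous invocation left behind:
du -ah /tmp 2>/dev/null | sort -rh | head -20

# Filesystem-level view; can exceed du's number when deleted files
# are still held open by a running process:
df -h /tmp
```

If `df` reports near 100% while `du` reports 25%, the missing space is almost certainly unlinked-but-open files.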

Has anyone seen this before, or have an idea of what could solve it? Short of destroying the lambda and recreating it, I'm not sure how to figure out what memory issues are happening.

Asked 23 days ago · Viewed 158 times

1 Answer

First, there is no relation between memory and storage. The function doesn't use more than 580 MB probably because it doesn't need to.

With regard to storage, you need to remember that the same execution environment may be reused between invocations. This means that if you store files in /tmp and do not delete them at the beginning or end of each invocation, your /tmp may eventually fill up.

Uri (AWS Expert)
Answered 23 days ago
Reviewed by an AWS Expert 22 days ago
  • We do clean the /tmp directory, and as I mentioned, it is far from full.
