Lambda Max Memory Used capped below available memory


A Lambda of ours (which uses Puppeteer) has started crashing. The errors don't correlate with any code change, but they do correlate with a memory spike. A few of the errors indicate that there's no space left in the /tmp directory. The REPORT logs from the Lambda show that Max Memory Used is far below Memory Size, but it appears capped at an arbitrary number: it always reports a Max Memory Used of 580 MB, when the function has 2048 MB available to it.

I increased the amount of Ephemeral Storage available to the Lambda, which didn't help. Additionally, whenever the Lambda errors out, I print the size and contents of the /tmp directory using du, and the directory doesn't report being anywhere near 100% full (often it's around 25% full). I've also re-deployed the current version of the code to the Lambda to force a refresh of the execution environment, which didn't help either.
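For context, the /tmp check runs from the error handler and looks roughly like this (a simplified sketch; the log wording and helper name are illustrative, not the exact code):

```typescript
// Simplified sketch: log /tmp usage to CloudWatch when the function errors out.
// Relies on the `du`, `sort`, and `head` binaries present in the Lambda runtime.
import { execSync } from "child_process";

export function logTmpUsage(): void {
  try {
    // Total size of /tmp, then the 20 largest entries.
    const summary = execSync("du -sh /tmp").toString();
    const largest = execSync("du -ah /tmp | sort -rh | head -n 20").toString();
    console.log("/tmp summary:\n" + summary);
    console.log("/tmp largest entries:\n" + largest);
  } catch (err) {
    console.error("Failed to inspect /tmp", err);
  }
}
```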

Has anyone seen this before, or have an idea of what could solve it? Short of destroying the Lambda and recreating it, I'm not sure how else to figure out what is going on with memory.

Asked a month ago · 182 views
1 Answer

First, there is no relation between memory and ephemeral storage; they are separate settings. The function probably doesn't use more than 580 MB simply because it doesn't need to.

With regard to storage, you need to remember that the same execution environment may be reused across invocations. This means that if you store files in /tmp and do not delete them at the beginning or end of each invocation, your /tmp may eventually fill up.
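As a rough illustration only (assuming a Node.js handler; the layout and names are illustrative, not your actual code), cleaning /tmp at the start of each invocation could look like this:

```typescript
// Minimal sketch: clear anything left in /tmp by a previous invocation
// before doing new work. Puppeteer setup and real error handling omitted.
import { promises as fs } from "fs";
import * as path from "path";

const TMP_DIR = "/tmp";

async function cleanTmp(): Promise<void> {
  const entries = await fs.readdir(TMP_DIR);
  await Promise.all(
    entries.map((entry) =>
      // Remove files and directories regardless of how they got there.
      fs.rm(path.join(TMP_DIR, entry), { recursive: true, force: true })
    )
  );
}

export const handler = async (event: unknown) => {
  await cleanTmp();
  // ... launch Puppeteer, write scratch files under /tmp, etc. ...
  return { statusCode: 200 };
};
```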

Uri (AWS EXPERT)
answered a month ago · reviewed a month ago
  • We do clean the /tmp directory, and as I mentioned it is far from full.
