You are correct, the timeout doesn't directly impact how or when a container is reused. The number of cold starts you're going to experience (i.e., when a new container will be spun up) depends on the usage pattern and on how many parallel requests land on your function; this is why we don't disclose a hard number.
This blog includes some details you might find useful.
For use cases where you can't really afford cold starts, you should take a look at Provisioned Concurrency too.
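If you do go the Provisioned Concurrency route, a minimal sketch with boto3 might look like the following. The function name, alias, and concurrency value are placeholders for illustration; note that Provisioned Concurrency must target a published version or an alias, not `$LATEST`, and running this requires valid AWS credentials.

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep a fixed number of execution environments initialized and warm,
# so requests up to that concurrency never hit a cold start.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",         # placeholder function name
    Qualifier="live",                   # placeholder alias (or version number)
    ProvisionedConcurrentExecutions=5,  # how many warm environments to keep
)
```

Provisioned Concurrency is billed for the time it is configured, so it is usually worth scoping it to the alias that serves latency-sensitive traffic rather than enabling it everywhere.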
I can add to Giorgio's answer that it also depends on the size of the Lambda function.
Smaller functions usually stay warm longer, because a container is always at least the memory size you chose.
So the bigger the function, the smaller the pool of containers that can serve it.
"Smaller functions usually stay warm longer, because a container is always at least the memory size you chose." -- That's not fully accurate. If you have 10 active sandboxes of 128 MB vs. 10 GB, we will not differentiate between them if you continue to use both at the same rate. "Use" can be a ping or can be actual use.

That's not what I meant :) I just wanted to say that the demand for bigger containers is higher because all smaller Lambdas fit into them. If you configure a Lambda at 128 MB, then all containers are valid candidates when your Lambda needs to be launched; but if you have a 10 GB Lambda, only 10 GB containers can be used, which usually means your Lambda stays warm for a shorter time. It also depends very much on the whole AWS infrastructure, plus what other customers are deploying to the Lambda service, and so on. So yes, it's not obvious or easy to estimate, and it's better not to rely on such assumptions at all.
To add some notes to @Giorgio's and MG's answers:
Since this duration depends on the AWS infrastructure and can vary over time, I strongly suggest establishing these kinds of limits by monitoring your exact case.
I can confirm that a 128 MB Lambda was released after 40 minutes in my first test, but after a while I observed different durations.
As MG mentioned above, the memory size can change the release duration as well as the speed.
I'd like to stress it again: in the serverless world, try to monitor and experiment.
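The monitoring advice above can be sketched with a tiny handler that reports whether the current invocation is a cold start. Module-level state survives only within one execution environment, so it distinguishes a reused container from a fresh one; the handler name and response shape here are just for illustration, and you would log these fields to CloudWatch in practice.

```python
import time

# Module-level state is initialized once per execution environment (cold
# start) and then persists across warm invocations of the same container.
_container_started_at = time.time()
_invocation_count = 0

def handler(event, context):
    global _invocation_count
    _invocation_count += 1
    cold_start = _invocation_count == 1  # first call in this container
    return {
        "cold_start": cold_start,
        "container_age_seconds": round(time.time() - _container_started_at, 1),
        "invocations_in_this_container": _invocation_count,
    }
```

Invoking the function periodically and plotting `cold_start` against the gap since the previous invocation gives you an empirical picture of how long your particular configuration stays warm.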
Wrong link, Giorgio? It points to an ALB doc page.
Got it, thanks. I'm also wondering: if I have two execution environments for a function sitting idle, will Lambda pick the one that has been waiting for less time to handle the next incoming request?