Lambda using docker - Billed time very high for small duration


Hi, we are running a Lambda function from a Docker image (ECR). The code is in Python with a lot of dependencies (image size ~700 MB). We noticed that our code execution is very short (~650 ms), but the billed time is very high in comparison (~4500 ms), with an init time of ~3800 ms. So init time is roughly 80% of our billed time for each execution.

My previous pipeline: every 45 minutes we spin up 200 instances of the Lambda at the same time, each with a different argument. (I guess that means 200 cold starts?)

My new pipeline: every 45 minutes I invoke the Lambda once with no argument (1 cold start), and that invocation in turn invokes the same Lambda 200 times, each with an argument (200 warm starts?)
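For context, the "dispatcher" step of the new pipeline can be sketched like this with boto3. The function name `my-worker-function` and the `task_id` payload field are assumptions for illustration, not the actual names used:

```python
# Sketch of a dispatcher Lambda that fans out to 200 worker invocations.
# "my-worker-function" and "task_id" are hypothetical names.
import json


def build_payloads(n):
    """One payload dict per worker invocation, each with a different argument."""
    return [{"task_id": i} for i in range(n)]


def fan_out(lambda_client, function_name, payloads):
    # InvocationType="Event" queues each invocation asynchronously and returns
    # immediately, so the dispatcher itself does not wait for 200 executions.
    for payload in payloads:
        lambda_client.invoke(
            FunctionName=function_name,
            InvocationType="Event",
            Payload=json.dumps(payload).encode(),
        )


def handler(event, context):
    import boto3  # imported inside the handler so the sketch runs without AWS

    fan_out(boto3.client("lambda"), "my-worker-function", build_payloads(200))
```

Note that fanning out this way does not by itself avoid cold starts: 200 concurrent invocations still need 200 execution environments, each of which cold starts the first time (see the accepted answer below).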

But with the new pipeline the problem is still the same and I don't see any improvement:

  REPORT RequestId: f9d67348-2deb-410e-b874-b29e8b3569b2 Duration: 653.58 ms Billed Duration: 4353 ms Memory Size: 300 MB Max Memory Used: 284 MB Init Duration: 3698.46 ms

So here are my 3 questions:

  • Is 3800ms a normal cold start time?
  • In my new pipeline why is it not improving the init duration?
  • What approach do you recommend to reduce/fix the cost?

Thank you !

3 Answers
Accepted Answer

Hello, I mixed my answers with your questions hoping this would be easier to read.

Q) Is 3800ms a normal cold start time? A) Cold start time is affected by the runtime and the Lambda function's dependencies, so a "normal" cold start really depends on what the function does. The less the function does during the init phase, the shorter the cold start. If you want to see the impact of the container image itself, you can create a Lambda function from the same image that just prints "hello world" and gather benchmarks.
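A minimal benchmark handler for that test could look like this. With a handler this trivial, any remaining init duration reported by Lambda is attributable to the image and runtime startup rather than your code:

```python
# Minimal handler: does no work at import time or in the handler body,
# so the reported Init Duration isolates the container/runtime overhead.
def handler(event, context):
    return {"statusCode": 200, "body": "hello world"}
```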

Q) In my new pipeline why is it not improving the init duration? A) It isn't clear whether your Lambda function is still experiencing cold starts. You could use CloudWatch Logs Insights to query the logs for your Lambda function and generate a report on cold starts. Here is an example of a query you could use:

filter @type = "REPORT"
  | stats count(@type) as countInvocations, 
    count(@initDuration) as countColdStarts, 
    (count(@initDuration)/count(@type))*100 as percentageColdStarts,
    max(@initDuration) as maxColdStartTime,
    avg(@duration) as averageDuration,
    max(@duration) as maxDuration,
    min(@duration) as minDuration,
    avg(@maxMemoryUsed) as averageMemoryUsed,
    max(@memorySize) as memoryAllocated,  (avg(@maxMemoryUsed)/max(@memorySize))*100 as percentageMemoryUsed 
  by bin(1h) as timeFrame

You can use this query to compare the two pipelines. You can also enable AWS X-Ray tracing on the function; the X-Ray daemon records a segment with details about each function invocation and execution.
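If you prefer to run the Insights query programmatically rather than in the console, a sketch with boto3 (log group name and time window are placeholders you would replace):

```python
# Run a CloudWatch Logs Insights query with boto3 and poll for the results.
# "/aws/lambda/my-function" is a hypothetical log group name.
import time

COLD_START_QUERY = (
    'filter @type = "REPORT" '
    "| stats count(@type) as countInvocations, "
    "count(@initDuration) as countColdStarts "
    "by bin(1h) as timeFrame"
)


def run_insights_query(logs_client, log_group, query, start_time, end_time):
    """Start the query, then poll get_query_results until it finishes."""
    query_id = logs_client.start_query(
        logGroupName=log_group,
        startTime=start_time,  # epoch seconds
        endTime=end_time,
        queryString=query,
    )["queryId"]
    while True:
        response = logs_client.get_query_results(queryId=query_id)
        if response["status"] in ("Complete", "Failed", "Cancelled"):
            return response
        time.sleep(1)
```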

Q) What approach do you recommend to reduce/fix the cost? A) There are three areas you could look at:

  1. If your Lambda function is still impacted by cold starts, you can use Reserved Concurrency to limit how many concurrent instances of the function can run. If I understand your workload, this is the same Lambda just receiving different parameters, so you can use Reserved Concurrency to ensure 200 instances of the function aren't created at once. You'll want to test out different maximum numbers of concurrent instances to find what best meets your requirements around cost and speed.
  2. Make sure you are optimizing static initialization.
  3. Profile the function with AWS Lambda Power Tuning to ensure you are using the most cost and/or performant memory configuration.
answered 2 years ago

Init Duration only applies to the first call on an execution environment; it doesn't affect the Nth (warm) call. I recommend adding time() and print() calls between your imports and the start of your code. Maybe the problem is your own initialization, not the Lambda environment itself.
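That timing suggestion could be sketched like this, with `time.perf_counter()` around each import (here `json` stands in for a heavy dependency such as pandas or numpy):

```python
# Time each module-level import to find which dependency dominates init.
import time

t0 = time.perf_counter()
import json  # stand-in for a heavy dependency like pandas or numpy
elapsed_ms = (time.perf_counter() - t0) * 1000

print(f"import json took {elapsed_ms:.1f} ms")
```

These prints land in the function's CloudWatch log stream during the init phase, so you can see exactly where the 3800 ms is going.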

answered 2 years ago

Thanks for your answer @matthew_d, it was very helpful! I also found this great article that answers all my questions: ,alias%20with%20Provisioned%20Concurrency%20configured

answered 2 years ago
