
GGv2: Pinned (long lived) Node.js Lambda Times Out


I've created a Node.js Lambda function based on the examples found here ( and imported it as a Greengrass V2 component. Additionally, I've configured the Lambda function component as a 'pinned' or 'long-lived' function (i.e., it should remain running in the background). Also, the Lambda function is configured NOT to run in a Greengrass container (i.e., NoContainer).

Initially, upon deploying the Lambda function, it would not run at all. Then, after increasing the timeoutInSeconds value from 3 to 60, I was able to see the function start and run, but then it is promptly killed via SIGTERM after ~60 seconds. Increasing the timeoutInSeconds value to the max allowed (2147483647) doesn't seem to change the behavior either (and isn't really a good solution).

Since a 'pinned' function should be able to run indefinitely, I would think the timeoutInSeconds value would not matter to the execution of the function (i.e., Greengrass should not kill it)?

I have seen some older comments/notes from other users ( that this can happen when the callback() function is not called in your Lambda's handler function, but I tried this, and it did not seem to fix the issue. I also tried using an asynchronous (async) handler, but this didn't behave any differently.

Is there another setting that must be configured properly in Greengrass V2? The Lambda component? The Lambda function itself? Do I need to construct the Lambda handler in a specific way? Are there any better examples of Lambda functions for Greengrass than what is at the link above?


3 Answers
Accepted Answer

The timeout is getting converted to millis as you suspected.

I was able to replicate this behavior on a Raspberry Pi 3B by disabling all but one core and limiting it to 600 MHz. Depending on the device you are using, you could be seeing some issues due to single-core performance here.

Deploying a configuration update to set the timeout to 90 seconds did mitigate the issue, though, by giving it more time to wait:

{
  "reset": [],
  "merge": {
    "statusTimeoutInSeconds": 90
  }
}
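For reference, in a Greengrass V2 deployment document this kind of update is expressed as a `configurationUpdate` on the Lambda component, where `merge` is a JSON-encoded string. This is only a sketch: the component name `WebServerNode` is an assumption, and the rest of the deployment document is omitted.

```json
{
  "components": {
    "WebServerNode": {
      "configurationUpdate": {
        "reset": [],
        "merge": "{\"statusTimeoutInSeconds\":90}"
      }
    }
  }
}
```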

In the meantime, I will take a request back to the team to investigate whether we can use an alternative means of tracking whether the Lambda function process is running without error.

answered 20 days ago
  • @Rob: Thank you for the support!

    The platform I am using is a single core processor, so that explanation would make sense. I will verify that a longer status timeout (such as 90 seconds) works on my platform as well.

  • @Rob:

    Sorry for the delay - I gave your suggestion a try with an initial value of 120 seconds, but this still produced the error. I tried again with a timeout of 300 seconds, and this seems to do the trick (the Lambda has now been running for over an hour without a problem).

My guess is that my platform performs much more slowly than even the Raspberry Pi 3B (it is indeed single core).

    Thanks again for your help!


Yep! That's actually the example I've been working with except that I stripped it back to just run the express web server portion:

// WebServerNode.js

// const ggSdk = require('greengrass-core-sdk')

// const iotClient = new ggSdk.IotData()
const os = require('os')
const express = require('express')

const GROUP_ID = process.env.GROUP_ID
const THING_NAME = process.env.AWS_IOT_THING_NAME
const THING_ARN = process.env.AWS_IOT_THING_ARN
const PORT = process.env.PORT

// const base_topic = THING_NAME + '/web_server_node'
// const log_topic = base_topic + '/log'

// function publishCallback(err, data) {
//     console.log(err);
//     console.log(data);
// }

// This is a handler which does nothing for this example
// This is a handler which does nothing for this example
exports.function_handler = function (event, context) {
    console.log('event: ' + JSON.stringify(event));
    console.log('context: ' + JSON.stringify(context));
}

// The express server is set up at module load, outside the handler
const app = express()

app.get('/', (req, res) => {
    res.send('Hello World!')
    console.log('Hello World request serviced');

    // const pubOpt = {
    //     topic: log_topic,
    //     payload: JSON.stringify({ message: 'Hello World request serviced' })
    // };

    // iotClient.publish(pubOpt, publishCallback);
})

app.listen(PORT, () => console.log(`Example app listening on port ${PORT}!`))

I seem to have found a "solution" by setting the statusTimeoutInSeconds value to nearly the largest 32-bit signed integer (2147483645). This results in an error in the Lambda log on the GG core:

2022-06-09T13:56:33.269Z [ERROR] (pool-2-thread-70) WebServerNode: (node:2398) TimeoutOverflowWarning: 2147483645000 does not fit into a 32-bit signed integer.. {serviceInstance=0, serviceName=WebServerNode, currentState=RUNNING}
2022-06-09T13:56:33.272Z [ERROR] (pool-2-thread-70) WebServerNode: Timeout duration was set to 1.. {serviceInstance=0, serviceName=WebServerNode, currentState=RUNNING}

Now, the Lambda runs fine and continues to handle incoming requests via express as expected. For reference, this is the configuration I'm using for the GGv2 component:

{
  "lambdaExecutionParameters": {
    "EnvironmentVariables": {
      "PORT": "8001"
    }
  },
  "containerParams": {
    "memorySize": 16000,
    "mountROSysfs": false,
    "volumes": {},
    "devices": {}
  },
  "containerMode": "NoContainer",
  "timeoutInSeconds": 60,
  "maxInstancesCount": 100,
  "inputPayloadEncodingType": "json",
  "maxQueueSize": 1000,
  "pinned": true,
  "maxIdleTimeInSeconds": 60,
  "statusTimeoutInSeconds": 2147483645,
  "pubsubTopics": {
    "0": {
      "topic": "*",
      "type": "IOT_CORE"
    }
  }
}
One interesting note is that the error says "TimeoutOverflowWarning: 2147483645000 does not fit into a 32-bit signed integer"; however, the value in the configuration is 2147483645. My guess is the Lambda runtime is multiplying this value by 1000 and using it as milliseconds for some call to setTimeout()/setInterval(), and the backup value of 1 somehow fixes the issue I'm seeing.
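The overflow half of that guess can be reproduced in plain Node.js, since setTimeout() delays are limited to a 32-bit signed integer of milliseconds; Node emits the same TimeoutOverflowWarning seen in the log and coerces the delay to 1 ms. Note the multiplication by 1000 below is only my assumption about what the Lambda runtime does with statusTimeoutInSeconds.

```javascript
// Demonstrates Node's 32-bit signed millisecond limit for setTimeout().
// Delays above 2147483647 ms trigger TimeoutOverflowWarning and are coerced to 1 ms.
const MAX_TIMEOUT_MS = 2 ** 31 - 1; // 2147483647

// statusTimeoutInSeconds * 1000, assuming the runtime converts seconds to millis
const requestedMs = 2147483645 * 1000;
console.log(requestedMs > MAX_TIMEOUT_MS); // true: this delay overflows

// Node reports the overflow via a process warning rather than throwing
process.on('warning', (w) => console.log(w.name)); // 'TimeoutOverflowWarning'

const t = setTimeout(() => {}, requestedMs); // would fire after ~1 ms, not ~24.8 days
clearTimeout(t);
```

This matches the log lines above: the warning names the out-of-range value, and the "Timeout duration was set to 1" message is Node substituting a 1 ms delay.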

Setting the statusTimeoutInSeconds value to 2147483 (a very large, valid number for setTimeout()/setInterval()) works as well, but would seem to result in just delaying this issue and the Lambda being killed after 2,147,483 seconds (a little less than 25 days) - not ideal.

For the sake of completeness, I also tried a statusTimeoutInSeconds value of 0 (I had to override the setting in the deployment configuration since the AWS IoT UI Console does not allow a value less than 30 when creating a Lambda component). This also does NOT work and results in the timeout firing immediately (crashing the Lambda).

answered a month ago
  • Hi trowbridgec-laird,

    Sorry to see that you are running into this issue with Greengrass.

    I was trying to replicate your issue with the js lambda function but so far I have not been able to reproduce it.

    I copied your lambda function, created a zip including the relevant node_modules, and created a node12 lambda function. I imported this as a Greengrass component and deployed to a device. It starts the server and posts the lambda health status every minute.

    Can I get some more details about your environment?

    1. What version of node are you using? (I was testing with node v12.22.12)
    2. What version of the Nucleus and LambdaManager components are you using?
    3. If possible, can you post the effectiveConfig.yaml from the greengrass config directory? This contains a yaml view of the configuration that Greengrass has loaded.

Hi. Have you seen this?

Long-lived functions have timeouts that are associated with each invocation of their handler. If you want to invoke code that runs indefinitely, you must start it outside of the handler.

Which example did you follow? This is a pinned example, and invokes the code outside of the handler:

Configuration here:

I hope that helps.

answered a month ago
