Are any special configs required for pinned Lambda components?


We are facing a strange situation with two pinned Lambda-based components running in GGv2.
The two components are similar to each other. Each one is connected to the local MQTT broker on one side and to IoT Core via IPC on the other.
One of the components forwards messages from the local MQTT broker to an IoT Core topic: it subscribes to one local topic and publishes messages to one IoT Core topic.
The second component does the same, but it uses a different local topic and shadow topics in IoT Core. It also forwards messages in the opposite direction, from the IoT shadow topics to a local MQTT topic.
So there is no intersection in topics; the components use disjoint sets of topics on both sides. The only things they share are the local MQTT broker and IPC. We use the same local mosquitto broker but create two paho-mqtt Python clients independently, one in each component. We use the aws-iot-device-sdk-python-v2 package to work with IPC and IoT Core from Greengrass.
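
To make the setup concrete, each bridge is shaped roughly like the minimal sketch below (the topic names, broker host/port, and handler layout are illustrative placeholders, not our real configuration):

# Minimal sketch of one bridge component (pinned Lambda).
# Topic names, host, and port below are placeholders.
import awsiot.greengrasscoreipc
import awsiot.greengrasscoreipc.model as model
import paho.mqtt.client as mqtt

LOCAL_TOPIC = "local/devices/out"   # placeholder local topic
CLOUD_TOPIC = "cloud/devices/out"   # placeholder IoT Core topic

# One IPC connection to the Greengrass nucleus per component.
ipc_client = awsiot.greengrasscoreipc.connect()

def forward_to_iot_core(client, userdata, msg):
    # Republish the local MQTT payload to IoT Core via IPC.
    request = model.PublishToIoTCoreRequest(
        topic_name=CLOUD_TOPIC,
        qos=model.QOS.AT_LEAST_ONCE,
        payload=msg.payload,
    )
    operation = ipc_client.new_publish_to_iot_core()
    operation.activate(request)
    operation.get_response().result(timeout=10.0)

# Independent paho-mqtt client against the shared mosquitto broker.
local_client = mqtt.Client()
local_client.on_message = forward_to_iot_core
local_client.connect("localhost", 1883)
local_client.subscribe(LOCAL_TOPIC)
local_client.loop_start()  # network loop runs in a background thread

def lambda_handler(event, context):
    # Pinned Lambda: the long-lived work happens in the body above.
    pass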
When we deploy either component to our device on its own, it works fine without any problems. But when they are deployed together, one of them fails after a while:
2021-03-10T18:17:36.688Z [WARN] (pool-1-thread-3) com.aws.greengrass.lambdamanager.Lambda: The lambda has not reported status within the specified timeout PT1M and will be restarted immediately.. {componentName=CustomComponent}
2021-03-10T18:17:36.688Z [ERROR] (pool-1-thread-3) com.aws.greengrass.lambdamanager.UserLambdaService: service-errored. {serviceInstance=0, serviceName=CustomComponent, currentState=RUNNING}
com.aws.greengrass.lambdamanager.StatusTimeoutException: Lambda status not received within timeout
at com.aws.greengrass.lambdamanager.Lambda.lambda$createInstanceKeepAliveTask$5(Lambda.java:261)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

2021-03-10T18:17:36.688Z [INFO] (pool-1-thread-3) com.aws.greengrass.lambdamanager.UserLambdaService: service-report-state. {serviceInstance=0, serviceName=CustomComponent, currentState=RUNNING, newState=ERRORED}
2021-03-10T18:17:36.688Z [INFO] (CustomComponent-lifecycle) com.aws.greengrass.lambdamanager.UserLambdaService: service-set-state. {serviceInstance=0, serviceName=AWSCM-46.DoorLocker, currentState=RUNNING, newState=BROKEN}
2021-03-10T18:17:36.689Z [INFO] (CustomComponent-lifecycle) com.aws.greengrass.status.FleetStatusService: fss-status-update-published. Status update published to FSS. {serviceName=FleetStatusService, currentState=RUNNING}
2021-03-10T18:17:36.693Z [INFO] (pool-2-thread-73) com.aws.greengrass.lambdamanager.UserLambdaService: Shutdown initiated. {serviceInstance=0, serviceName=CustomComponent, currentState=BROKEN}
2021-03-10T18:17:37.060Z [INFO] (pool-2-thread-73) com.aws.greengrass.lambdamanager.UserLambdaService: generic-service-shutdown. {serviceInstance=0, serviceName=CustomComponent, currentState=BROKEN}

What is even more interesting is that the component listed first in the deployment.json file used for the deployment is always the one that fails. We can change the order of the components, but the first one always fails. It seems as if, once the second component is deployed, it somehow affects the first one.
Are any additional settings required to allow two components to work with IPC at the same time? And how can we debug this issue in general? It's not clear why a component would stop reporting its status.

Asked 3 years ago · 239 views
4 Answers

I've tested this with the most simplified configuration. It seems it is not related to MQTT, brokers, topics, or IPC at all.
I created two minimal Python Lambdas with this code:

print("Body")
def lambda_handler(event, context):
    pass

Then I created two Greengrass components (pinned, with GreengrassContainer mode) with all default parameters.
Both components were deployed successfully and printed 'Body' into their logs. But after a while, one component failed with the same errors as in my initial post. According to the logs, one component was restarted two times and the other three times; after the third restart, that component was marked BROKEN. The first component continued to run without any further restarts or errors. After each restart both components printed 'Body' into their logs, so they started normally every time.
It seems as if the two components interfere with each other and maybe compete for some resource. Really weird behavior. I don't know how to debug it further.

Answered 3 years ago

Are the two pinned Lambdas you deployed based on the same Lambda? In the effectiveConfig.yml file, do your two components have the same Lambda ARN value in their configuration?

You are running both in container mode; what is the memory limit set to? By default it is 16 MB, which is quite small for Python. Your Lambdas may be running out of memory, and the OS may be killing them. If you just used the default value, please try increasing the memory limit.
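
As a rough illustration only (the component name and version are placeholders; double-check the exact configuration keys against the recipe generated for your imported Lambda in effectiveConfig.yml), a 128 MB limit (131072 KB) could be merged in through the deployment document like this:

{
  "components": {
    "CustomComponent": {
      "componentVersion": "1.0.0",
      "configurationUpdate": {
        "merge": "{\"containerParams\":{\"memorySizeInKB\":131072}}"
      }
    }
  }
}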

Thanks,
Michael

AWS
EXPERT
Answered 3 years ago

Hi MichaelDombrowski-AWS,

Thanks a lot. The two components are based on two different Lambdas, and their ARNs in effectiveConfig.yml are different.
But it seems that the memory setting does have an effect. With 128 MB per container, the two components have been running for 30 minutes without issues.
I thought 16 MB was quite enough for a Lambda that only prints a single word, and there were no hints about a lack of memory in the logs. Also, the components are independent, so it was weird that one of them failed while the other didn't. It was all very confusing.
Nevertheless, this raises some additional questions. How is this container memory config tied to the Nucleus memory settings? Are these megabytes (or part of them) consumed from the memory provided for the Greengrass core, or not? Should I change the configurationUpdate for the Nucleus deployment accordingly?
E.g. "merge": "{\"jvmOptions\":\"-Xmx2G\"}"
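
For context, I mean something like the following entry in the deployment document (the component version here is just an example):

{
  "components": {
    "aws.greengrass.Nucleus": {
      "componentVersion": "2.0.5",
      "configurationUpdate": {
        "merge": "{\"jvmOptions\":\"-Xmx2G\"}"
      }
    }
  }
}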

BR

Answered 3 years ago

The Lambda memory limit is not in any way related to the Nucleus JVM memory settings. Greengrass core will not be using any of those 16 MB. Some memory will be used by the Greengrass Python Lambda runtime support library, but that's relatively small.

AWS
EXPERT
Answered 3 years ago
