I've tested it with the simplest possible configuration. It doesn't seem to be related to MQTT, brokers, topics, or IPC at all.
I created two minimal Python Lambdas with this code:
print("Body")
def lambda_handler(event, context):
pass
Then I created two Greengrass components from them (pinned, in GreengrassContainer mode) with all default parameters.
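For reference, a sketch of the equivalent import via the AWS CLI (the Lambda ARN, component name, and version here are placeholders):

aws greengrassv2 create-component-version --lambda-function '{
    "lambdaArn": "arn:aws:lambda:us-east-1:123456789012:function:HelloWorld:1",
    "componentName": "com.example.HelloWorldLambda",
    "componentVersion": "1.0.0",
    "componentLambdaParameters": {
        "pinned": true,
        "linuxProcessParams": {
            "isolationMode": "GreengrassContainer"
        }
    }
}'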
Both components were deployed successfully and printed 'Body' into their logs. But after a while one component failed with the same errors as in my initial post. According to the logs, one component was restarted two times and the other three times. After the third restart that component was marked as BROKEN. The first component continued to run without any further restarts or errors. After each restart both components printed 'Body' into their logs, so every time they started normally.
It seems as if the two components interfere with each other and maybe compete for some resource. Really weird behavior. I don't know how to debug it further.
Are the two pinned Lambdas you deployed based on the same Lambda function? In the effectiveConfig.yml file, do your two components have the same Lambda ARN value in their configuration?
You are running both in container mode; what is the memory limit set to? By default it is 16 MB, which is quite small for Python. Your Lambdas may be running out of memory, and the OS is killing them. Please try increasing the memory limit if you just used the default value.
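If you used the default, one way to raise it is a configuration update in a deployment, without re-importing the function. A sketch, assuming a placeholder component name; 131072 KB is 128 MB, and you can verify the exact key path (containerParams.memorySizeInKB) under your component in effectiveConfig.yml:

{
    "components": {
        "com.example.HelloWorldLambda": {
            "componentVersion": "1.0.0",
            "configurationUpdate": {
                "merge": "{\"containerParams\":{\"memorySizeInKB\":131072}}"
            }
        }
    }
}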
Thanks,
Michael
Hi MichaelDombrowski-AWS,
Thanks a lot. These two components are based on two different Lambdas, and their ARNs are different in effectiveConfig.yml.
But it seems that the memory setting does have an effect. With 128 MB per container, both components have been running for 30 minutes without issues.
I thought 16 MB was quite enough for a Lambda that only prints a single word, and there were no hints about a lack of memory in the logs. Plus the components are independent, so it was weird that one of them failed but the other didn't. It was all confusing.
Nevertheless, it raises some additional questions. How is this container memory config tied to the Nucleus memory settings? Are these megabytes (or part of them) consumed from the memory provided for the Greengrass core, or not? Should I change the configurationUpdate for the Nucleus deployment accordingly?
E.g. "merge": "{"jvmOptions":"-Xmx2G"}"
BR
The Lambda memory limit is not related to the Nucleus JVM memory settings in any way. The Greengrass core will not be using any of those 16 MB. There will be some amount of memory used by the Greengrass Python Lambda runtime support library, but that's relatively small.