Permission denied when deploying a Python script as a Lambda function via the management console


Hi

I get the following error when I deploy a Python script as a Lambda function via the management console. If I deploy it locally via the CLI on the Greengrass core, it works.

2021-04-19T16:50:01.693Z [ERROR] (pool-2-thread-141) getTemp: FATAL: lambda_runtime.py:428,Failed to initialize Lambda runtime due to exception: [Errno 13] Permission denied: '/dev/input/event0'. {serviceInstance=0, serviceName=getTemp, currentState=RUNNING}

Before I added "RequiresPrivilege": "true" to the local recipe, I got the same error message locally as well. After I let it run as root, it worked properly. If I use my own Lambda via the management console, I cannot change the recipe; however, I see that the Lambda launcher is started with root rights. So I am wondering what I should change in the Lambda settings to be able to run it.
Script:
import json
import sys
import datetime
from threading import Timer
import time
from sense_hat import SenseHat

sense = SenseHat()

def getTemperatur_run():
    # Read the temperature from the Sense HAT and print it
    message = "Die Temperatur ist: {}".format(sense.get_temperature())
    print(message)

    # Schedule the next reading in 30 seconds
    Timer(1 * 30, getTemperatur_run).start()

getTemperatur_run()

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

Local recipe:
{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "com.example.HelloWorld",
  "ComponentVersion": "1.0.1",
  "ComponentDescription": "My first AWS IoT Greengrass component.",
  "ComponentPublisher": "Amazon",
  "ComponentConfiguration": {
    "DefaultConfiguration": {
      "Message": "world"
    }
  },
  "Manifests": [
    {
      "Platform": {
        "os": "linux"
      },
      "Lifecycle": {
        "Run": {
          "Script": "python3 -u {artifacts:path}/hello_world.py '{configuration:/Message}'",
          "RequiresPrivilege": "true"
        }
      }
    }
  ]
}

Recipe created automatically by the Lambda import:
{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "getTemp",
  "ComponentVersion": "1.0.3",
  "ComponentType": "aws.greengrass.lambda",
  "ComponentDescription": "",
  "ComponentPublisher": "AWS Lambda",
  "ComponentSource": "arn:aws:lambda:eu-central-1:454451735829:function:getTemp:1",
  "ComponentConfiguration": {
    "DefaultConfiguration": {
      "lambdaExecutionParameters": {
        "EnvironmentVariables": {}
      },
      "containerParams": {
        "memorySize": 16000,
        "mountROSysfs": false,
        "volumes": {},
        "devices": {}
      },
      "containerMode": "NoContainer",
      "timeoutInSeconds": 3,
      "maxInstancesCount": 100,
      "inputPayloadEncodingType": "json",
      "maxQueueSize": 1000,
      "pinned": true,
      "maxIdleTimeInSeconds": 60,
      "statusTimeoutInSeconds": 60,
      "pubsubTopics": {}
    }
  },
  "ComponentDependencies": {
    "aws.greengrass.LambdaLauncher": {
      "VersionRequirement": ">=1.0.0",
      "DependencyType": "HARD"
    },
    "aws.greengrass.TokenExchangeService": {
      "VersionRequirement": ">=1.0.0",
      "DependencyType": "HARD"
    },
    "aws.greengrass.LambdaRuntimes": {
      "VersionRequirement": ">=1.0.0",
      "DependencyType": "SOFT"
    }
  },
  "Manifests": [
    {
      "Platform": {},
      "Lifecycle": {},
      "Artifacts": [
        {
          "Uri": "greengrass:lambda-artifact.zip",
          "Digest": "fjx_flt4EP9UUcBRbSTxUSI_3RDyKJKEeQ5Dhzt/Z3c=",
          "Algorithm": "SHA-256",
          "Unarchive": "ZIP",
          "Permission": {
            "Read": "OWNER",
            "Execute": "NONE"
          }
        }
      ]
    }
  ],
  "Lifecycle": {
    "startup": {
      "requiresPrivilege": true,
      "script": "{aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher start"
    },
    "setenv": {
      "AWS_GREENGRASS_LAMBDA_CONTAINER_MODE": "{configuration:/containerMode}",
      "AWS_GREENGRASS_LAMBDA_ARN": "arn:aws:lambda:eu-central-1:454451735829:function:getTemp:1",
      "AWS_GREENGRASS_LAMBDA_FUNCTION_HANDLER": "lambda_function.lambda_handler",
      "AWS_GREENGRASS_LAMBDA_ARTIFACT_PATH": "{artifacts:decompressedPath}/lambda-artifact",
      "AWS_GREENGRASS_LAMBDA_CONTAINER_PARAMS": "{configuration:/containerParams}",
      "AWS_GREENGRASS_LAMBDA_STATUS_TIMEOUT_SECONDS": "{configuration:/statusTimeoutInSeconds}",
      "AWS_GREENGRASS_LAMBDA_ENCODING_TYPE": "{configuration:/inputPayloadEncodingType}",
      "AWS_GREENGRASS_LAMBDA_PARAMS": "{configuration:/lambdaExecutionParameters}",
      "AWS_GREENGRASS_LAMBDA_RUNTIME_PATH": "{aws.greengrass.LambdaRuntimes:artifacts:decompressedPath}/runtime/",
      "AWS_GREENGRASS_LAMBDA_EXEC_ARGS": "[\"python3.7\",\"-u\",\"/runtime/python/lambda_runtime.py\",\"--handler=lambda_function.lambda_handler\"]",
      "AWS_GREENGRASS_LAMBDA_RUNTIME": "python3.7"
    },
    "shutdown": {
      "requiresPrivilege": true,
      "script": "{aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher stop; {aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher clean"
    }
  }
}

Asked 3 years ago · 460 views
10 Answers

Hello,
First off, if you're writing new code and not porting code from v1, then I would not necessarily recommend that you use Lambda. I'd suggest that you just use a regular Greengrass component instead.

Assuming that you want to continue with Lambda, what you need to do is run the Lambda as root. This is not accomplished by setting requiresPrivilege, since that is always true for the Lambda launcher. You must instead deploy the Lambda component and set the runWith user setting in the deployment to "root". Have a look at the "runWith" section of this documentation: https://docs.aws.amazon.com/greengrass/v2/developerguide/create-deployments.html.
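
For reference, a minimal sketch of what that deployment document could look like (the target ARN is a placeholder, and the component name and version are just the ones from this thread):

{
  "targetArn": "arn:aws:iot:eu-central-1:123456789012:thing/MyGreengrassCore",
  "components": {
    "getTemp": {
      "componentVersion": "1.0.3",
      "runWith": {
        "posixUser": "root"
      }
    }
  }
}

With this in place the Lambda launcher starts the function's process as root, which should let it open /dev/input/event0.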

Hope that helps,
Michael Dombrowski

AWS EXPERT · answered 3 years ago

Hi Michael,

Yes, I remember that I had to run the Lambda as root in v1 as well, but I cannot find the place where I can set it in the new v2 management console. Could you help me?
If I import the Lambda in the management console, the recipe is created automatically and I cannot add additional parameters such as runWith.

Yes, I know what you mean. I've seen that with Lambda it will be more difficult than using a plain Greengrass component. My problem is that I want to share my code with other students without paying for S3 buckets or the like. Until now, I sent them the lambda.zip, they imported it, and they were able to deploy it to any Greengrass core device. Of course I can create and run the script on the core with the Greengrass CLI, but this is not realistic. In another post somebody else asked the same. Sometimes you do not have SSH access to the core and you want to "manage" the deployments from AWS and not on the device itself.
So I am wondering how I could share the code and deploy the same code to many core devices without using S3 to store the components.

br
hankie

answered 3 years ago

Is there already a Python library for GGv2 which I can put into my lambda.zip as in v1, or do I have to use https://github.com/aws/aws-iot-device-sdk-python-v2?

answered 3 years ago

Since you are new to v2, you should be using our v2 SDK, which you linked. The existing Greengrass SDK from v1 does still work for Lambdas; however, the v2 SDK will give you more capabilities.
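
As a rough sketch (not taken from this thread): if you go the plain-component route discussed above, the v2 SDK is published on PyPI as awsiotsdk and is typically installed in the recipe's Install lifecycle step rather than bundled into a zip, along these lines:

  "Lifecycle": {
    "Install": "pip3 install awsiotsdk",
    "Run": "python3 -u {artifacts:path}/hello_world.py"
  }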

Michael

AWS EXPERT · answered 3 years ago

OK, thanks. It works if I deploy a GGv2 component directly on my RasPi, and I can send sensor data to AWS IoT Core. So I can configure my component as I want and, for example, grant root rights for the execution of lifecycle steps.
I thought it would work as with v1 and Lambda, where I put the GGv1 SDK and my lambda.py into one zip file, uploaded it via the management console, and was able to deploy and run it (and change the user to root to have access to the sensors).

thanks for the clarification!

answered 3 years ago

My final recipe is:

{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "com.example.HelloWorld",
  "ComponentVersion": "1.0.0",
  "ComponentDescription": "My first AWS IoT Greengrass component.",
  "ComponentPublisher": "Amazon",
  "ComponentConfiguration": {
    "DefaultConfiguration": {
      "Message": "world",
      "accessControl": {
        "aws.greengrass.ipc.mqttproxy": {
          "com.example.HelloWorld:pubsub:1": {
            "policyDescription": "Allows access to publish to test/topic.",
            "operations": [
              "aws.greengrass#PublishToIoTCore"
            ],
            "resources": [
              "test/topic"
            ]
          }
        }
      }
    }
  },
  "Manifests": [
    {
      "Platform": {
        "os": "linux"
      },
      "Lifecycle": {
        "Run": {
          "Script": "python3 -u {artifacts:path}/hello_world.py '{configuration:/Message}'",
          "RequiresPrivilege": "true"
        }
      }
    }
  ]
}

answered 3 years ago

To get the sensor value and push it to AWS IoT Core:

import sys
import datetime
from threading import Timer
import time
from sense_hat import SenseHat
import awsiot.greengrasscoreipc
from awsiot.greengrasscoreipc.model import (
    QOS,
    PublishToIoTCoreRequest
)

sense = SenseHat()

def getTemperatur_run():
    topic = "test/topic"
    message = "Die Temperatur ist: {}".format(sense.get_temperature())
    TIMEOUT = 10

    # Connect to the Greengrass IPC service and publish the reading to AWS IoT Core
    ipc_client = awsiot.greengrasscoreipc.connect()
    qos = QOS.AT_LEAST_ONCE

    request = PublishToIoTCoreRequest()
    request.topic_name = topic
    request.payload = bytes(message, "utf-8")
    request.qos = qos
    operation = ipc_client.new_publish_to_iot_core()
    operation.activate(request)
    future = operation.get_response()
    future.result(TIMEOUT)

    # Schedule the next reading in 30 seconds
    Timer(1 * 30, getTemperatur_run).start()

getTemperatur_run()

answered 3 years ago

Hi,
Sorry I missed this post earlier. You can set the user that the Lambda runs as when you create the deployment. If you're using the console, on the third page, click on the Lambda component name and then click Configure. A window will open that allows you to set the user and group.

As for sharing code without S3 buckets, you can use local deployments through the Greengrass CLI component. Have a look at https://docs.aws.amazon.com/greengrass/v2/developerguide/greengrass-cli-component.html
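
For illustration, a local deployment with the Greengrass CLI looks roughly like this (the Greengrass root /greengrass/v2 is the default install location, and the recipe/artifact directories and component name are just taken from this thread's example):

  sudo /greengrass/v2/bin/greengrass-cli deployment create \
    --recipeDir /home/pi/greengrassv2/recipes \
    --artifactDir /home/pi/greengrassv2/artifacts \
    --merge "com.example.HelloWorld=1.0.0"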

AWS EXPERT · answered 3 years ago

Hi,

No worries about that. I can deploy the code with the CLI and I have uploaded the artifacts to S3. But I realised that on the RasPi only AWS CLI 1.16 is available, and it supports only Greengrass v1. Can I use it as well to create my own component based on my recipe, and if yes, with which command? For v2 it would be

sudo aws greengrassv2 create-component-version \
--inline-recipe fileb://home/pi/greengrassv2/recipes/com.raspi.GetTemp-1.0.0.json

answered 3 years ago

Hi,
You can upgrade the cli easily by running pip3 install --upgrade awscli.

But when I was talking about a local deployment, I meant using the local Greengrass CLI which I linked. It is separate from the main AWS CLI and allows you to perform development and deployments without any cloud interaction at all; no Greengrass cloud service, IoT, or S3 involved.

AWS EXPERT · answered 3 years ago
