
Questions tagged with AWS IoT Greengrass


Problems using gdk

I am trying to use the GDK (Greengrass Development Kit) to develop IoT Greengrass components. I followed the [instructions](https://docs.aws.amazon.com/greengrass/v2/developerguide/create-components.html) and did the following steps.

Init:

```
gdk component init --template HelloWorld --language python
```

Then I changed the `gdk-config.json` file:

```
{
  "component": {
    "com.example.PythonHelloWorld": {
      "author": "SOME",
      "version": "1.0.0",
      "build": {
        "build_system": "zip"
      },
      "publish": {
        "bucket": "greengrass-component-artifacts",
        "region": "eu-central-1"
      }
    }
  },
  "gdk_version": "1.0.0"
}
```

And after the `build` command I got an error:

```
aleksandr@myPC:~/test/test$ gdk component build
[2021-12-23 10:34:34] INFO - Getting project configuration from gdk-config.json
[2021-12-23 10:34:34] INFO - Found component recipe file 'recipe.yaml' in the project directory.
[2021-12-23 10:34:34] INFO - Building the component 'com.example.PythonHelloWorld' with the given project configuration.
[2021-12-23 10:34:34] INFO - Using 'zip' build system to build the component.
[2021-12-23 10:34:34] WARNING - This component is identified as using 'zip' build system. If this is incorrect, please exit and specify custom build command in the 'gdk-config.json'.
[2021-12-23 10:34:34] INFO - Zipping source code files of the component.
=============================== ERROR ===============================
Failed to build the component with the given project configuration.
Error building the component with the given build system.
Failed to zip the component in default build mode.
maximum recursion depth exceeded while calling a Python object
```

Can anyone please help me? What have I done wrong? Python 3.8.0 was used.
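One plausible cause of a "maximum recursion depth exceeded" during a zip build is the archive being written inside the very directory that is being zipped, so it keeps growing as it is read back in (I can't confirm GDK's internals, so treat this as an assumption; upgrading past GDK 1.0.0 may also help). A minimal stdlib sketch of the safe pattern, with illustrative names for the excluded build folders:

```python
import zipfile
from pathlib import Path

def zip_component(src_dir: str, out_zip: str,
                  exclude=("greengrass-build", "zip-build")) -> int:
    """Zip every file under src_dir into out_zip, skipping excluded
    directories and the archive itself; returns the number of files added."""
    src, out = Path(src_dir), Path(out_zip)
    count = 0
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_dir():
                continue
            if path.resolve() == out.resolve():
                continue  # never add the zip to itself: that is the recursion trap
            if any(part in exclude for part in path.parts):
                continue  # skip previous build output
            zf.write(path, path.relative_to(src))
            count += 1
    return count
```

The same idea applies whichever tool does the zipping: the output path must either live outside the source tree or be explicitly excluded.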
3 answers · 0 votes · 11 views
OleksandrTurok · asked 24 days ago

How to fix `Authorization Failure` error when installing Greengrass Core software on edge device?

So I am using fleet provisioning to provision devices as described in the fleet template. That worked a few times, but when I tried to do it again today, I got the following error in the CloudWatch logs:

```
{
  "timestamp": "2021-12-21 20:59:22.486",
  "logLevel": "ERROR",
  "traceId": "0cdb55f5-2d44-7057-e224-a28735791",
  "accountId": "accound_id",
  "status": "Failure",
  "eventType": "Connect",
  "protocol": "MQTT",
  "clientId": "b99f2af6-4195-4145-86c4-",
  "principalId": "d4ef80aa40cbed0388db1b682198e9879fd009b8f89cf2037a9853fe",
  "sourceIp": "80.57.107.22",
  "sourcePort": 52891,
  "reason": "AUTHORIZATION_FAILURE",
  "details": "Authorization Failure"
}
```

I have not changed anything from what worked yesterday. These are the logs from the edge device:

```
2021-12-21T20:59:21.997Z [WARN] (main) com.aws.greengrass.deployment.DeviceConfiguration: Error looking up AWS region. {}
software.amazon.awssdk.core.exception.SdkClientException: Unable to load region from any of the providers in the chain software.amazon.awssdk.regions.providers.DefaultAwsRegionProviderChain@c05fddc: [software.amazon.awssdk.regions.providers.SystemSettingsRegionProvider@5e2c3d18: Unable to load region from system settings. Region must be specified either via environment variable (AWS_REGION) or system property (aws.region)., software.amazon.awssdk.regions.providers.AwsProfileRegionProvider@6440112d: No region provided in profile: default, software.amazon.awssdk.regions.providers.InstanceProfileRegionProvider@7e990ed7: Unable to contact EC2 metadata service.]
```

As far as I can see, my region is properly defined, and if it were not, it should have thrown this exception the first time too, which it did not, since it worked before.

This is my IoT policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:Publish",
        "iot:Subscribe",
        "iot:Receive",
        "iot:Connect",
        "greengrass:*"
      ],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": "iot:AssumeRoleWithCertificate",
      "Resource": "arn:aws:iot:region:accoun_id:rolealias/GGCV2TokenExchangeRoleAlias"
    }
  ]
}
```

Fleet provisioning template:

```
{
  "Parameters": {
    "ThingName": { "Type": "String" },
    "ThingGroupName": { "Type": "String" },
    "AWS::IoT::Certificate::Id": { "Type": "String" }
  },
  "Resources": {
    "certificate": {
      "Properties": {
        "CertificateId": { "Ref": "AWS::IoT::Certificate::Id" },
        "Status": "Active"
      },
      "Type": "AWS::IoT::Certificate"
    },
    "policy": {
      "Properties": { "PolicyName": "GGCV2IoTThingPolicy" },
      "Type": "AWS::IoT::Policy"
    },
    "thing": {
      "OverrideSettings": {
        "AttributePayload": "MERGE",
        "ThingGroups": "DO_NOTHING",
        "ThingTypeName": "REPLACE"
      },
      "Properties": {
        "AttributePayload": {},
        "ThingGroups": [],
        "ThingName": {
          "Fn::Join": ["", ["Prefix_", { "Ref": "ThingName" }]]
        }
      },
      "Type": "AWS::IoT::Thing"
    }
  }
}
```

Greengrass config file:

```
services:
  aws.greengrass.Nucleus:
    version: "2.5.2"
  aws.greengrass.FleetProvisioningByClaim:
    configuration:
      rootPath: /greengrass/v2
      awsRegion: "region"
      iotDataEndpoint: "endpoint"
      iotCredentialEndpoint: "credentialsPoint"
      iotRoleAlias: "GGCV2TokenExchangeRoleAlias"
      provisioningTemplate: "GGCV2FleetProvisioning"
      claimCertificatePath: "/greengrass/v2/claim-certs/claim.pem.crt"
      claimCertificatePrivateKeyPath: "/greengrass/v2/claim-certs/claim.private.pem.key"
      rootCaPath: "/greengrass/v2/AmazonRootCA1.pem"
      templateParameters:
        ThingName: "MyGreengrassCore"
        ThingGroupName: "MyGreengrassCoreGroup"
```
1 answer · 0 votes · 8 views
AWS-User-2130413 · asked a month ago

How to automate the creation of Greengrass Core Device Shadow with fleet provisioning?

Hi all,

So I have managed to create a fleet provisioning template which I use to register Greengrass Core V2 devices, and it works fine, with one exception: it does not create the Greengrass core device if a shadow does not already exist. Instead I get this error in the CloudWatch logs:

```
"details": "No shadow exists with name: 'MyGreengrassCore2'
```

If I then manually add the device shadow, everything works fine, but creating the shadow manually is not desired. I checked different places in the AWS documentation but did not find how to add device shadow creation as part of the fleet provisioning template. Is that possible? If yes, how? Thanks in advance.

EDIT: Added some more context - CloudWatch logs, IoT policy.

CloudWatch logs:

```
{
  "timestamp": "2021-12-18 19:56:06.050",
  "logLevel": "ERROR",
  "traceId": "a4003747-a168-1956-ab44",
  "accountId": "account_id",
  "status": "Failure",
  "eventType": "GetThingShadow",
  "protocol": "MQTT",
  "deviceShadowName": "Prefix_MyGreengrassCore2",
  "topicName": "$aws/things/Prefix_MyGreengrassCore2/shadow/name/AWSManagedGreengrassV2Deployment/get",
  "details": "No shadow exists with name: 'Prefix_MyGreengrassCore2~AWSManagedGreengrassV2Deployment'"
}
```

IoT policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:Publish",
        "iot:Subscribe",
        "iot:Receive",
        "iot:Connect",
        "greengrass:*"
      ],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": "iot:AssumeRoleWithCertificate",
      "Resource": "arn:aws:iot:region:accoun_id:rolealias/GGCV2TokenExchangeRoleAlias"
    }
  ]
}
```
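Fleet provisioning templates only declare IoT resource types (certificate, policy, thing), so shadow creation has to happen somewhere else, for example in a small script or Lambda that runs after the thing is registered. A hedged boto3 sketch (thing/shadow names are illustrative); the SDK call is kept inside the function so the payload helper has no dependency:

```python
import json

def build_shadow_document(desired: dict) -> bytes:
    """Named-shadow update payload: {"state": {"desired": {...}}}."""
    return json.dumps({"state": {"desired": desired}}).encode("utf-8")

def create_named_shadow(thing_name: str, shadow_name: str, desired: dict, region: str):
    """Create (or update) a named shadow; could run from a post-provisioning hook."""
    import boto3  # imported here so the helper above stays dependency-free
    client = boto3.client("iot-data", region_name=region)
    return client.update_thing_shadow(
        thingName=thing_name,
        shadowName=shadow_name,  # omit this argument for the classic shadow
        payload=build_shadow_document(desired),
    )
```

Calling `update_thing_shadow` on a shadow that does not exist creates it, which is why this works as a provisioning follow-up step.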
2 answers · 0 votes · 9 views
AWS-User-2130413 · asked a month ago

"Operation not permitted" while deploying the Hello World component on an AWS Greengrass device

We have followed the AWS documentation to deploy the Hello World Python component on AWS Greengrass, which is running in a container.

**Note:** I need to run the container with Greengrass as a non-root user.

Workaround used in the Dockerfile:

```
RUN apt-get update -y && apt-get install sudo
RUN groupadd ggc_group && \
    useradd -m -G ggc_group ggc_user && echo "ggc_user:ggc_user" | chpasswd && adduser ggc_user sudo
USER ggc_user
```

Also, when I do `whoami` inside the container, I get a random user such as `u7777775emnfnppabnt3r7cpg5q` instead of ggc_user.

I was able to deploy the Greengrass CLI without any issue, but the Hello World deployment throws the errors shown below:

```
2021-12-13T09:45:32.066Z [ERROR] (pool-2-thread-23) com.aws.greengrass.lifecyclemanager.GenericExternalService: update-artifact-owner. Error updating service artifact owner. {serviceName=com.example.HelloWorld, currentState=STARTING, user=ggc_user, group=ggc_group}
java.nio.file.FileSystemException: /var/lib/veea/greengrasspv/app/greengrass/v2/packages/artifacts/com.example.HelloWorld/1.0.0/hello_world.py: Operation not permitted
    at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
    at java.base/sun.nio.fs.UnixFileAttributeViews$Posix.setOwners(UnixFileAttributeViews.java:268)
    at java.base/sun.nio.fs.UnixFileAttributeViews$Posix.setOwner(UnixFileAttributeViews.java:290)
    at com.aws.greengrass.util.platforms.unix.UnixPlatform.setOwner(UnixPlatform.java:382)
    at com.aws.greengrass.util.platforms.Platform.lambda$setPermissions$1(Platform.java:147)
    at com.aws.greengrass.util.platforms.Platform$1.visitFile(Platform.java:178)
    at com.aws.greengrass.util.platforms.Platform$1.visitFile(Platform.java:167)
    at java.base/java.nio.file.Files.walkFileTree(Files.java:2725)
    at java.base/java.nio.file.Files.walkFileTree(Files.java:2797)
    at com.aws.greengrass.util.platforms.Platform.setPermissions(Platform.java:167)
    at com.aws.greengrass.util.platforms.Platform.setPermissions(Platform.java:109)
    at com.aws.greengrass.lifecyclemanager.RunWithPathOwnershipHandler.setPermissions(RunWithPathOwnershipHandler.java:91)
    at com.aws.greengrass.lifecyclemanager.RunWithPathOwnershipHandler.updateOwner(RunWithPathOwnershipHandler.java:74)
    at com.aws.greengrass.lifecyclemanager.GenericExternalService.updateComponentPathOwner(GenericExternalService.java:593)
    at com.aws.greengrass.lifecyclemanager.GenericExternalService.run(GenericExternalService.java:655)
    at com.aws.greengrass.lifecyclemanager.GenericExternalService.run(GenericExternalService.java:625)
    at com.aws.greengrass.lifecyclemanager.GenericExternalService.handleRunScript(GenericExternalService.java:444)
    at com.aws.greengrass.lifecyclemanager.GenericExternalService.startup(GenericExternalService.java:364)
    at com.aws.greengrass.lifecyclemanager.Lifecycle.lambda$handleStateTransitionStartingToRunningAsync$9(Lifecycle.java:531)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
2021-12-13T09:45:32.075Z [ERROR] (pool-2-thread-23) com.aws.greengrass.lifecyclemanager.GenericExternalService: Service artifacts may not be accessible to user. {serviceName=com.example.HelloWorld, currentState=STARTING}
2021-12-13T09:45:32.094Z [INFO] (pool-2-thread-23) com.aws.greengrass.lifecyclemanager.GenericExternalService: service-report-state. {serviceName=com.example.HelloWorld, currentState=STARTING, newState=RUNNING}
```

**Kindly help me to resolve this user permissions issue.**
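The failing step in the trace is the nucleus walking the artifact tree and calling chown to hand the files to the component's user; when the nucleus itself is not root (or the container lacks CAP_CHOWN), that syscall fails with EPERM, which is what the FileSystemException wraps. A stdlib sketch of the same kind of walk (illustrative only, not Greengrass's actual code) shows why the privilege is needed:

```python
import os
from pathlib import Path

def set_artifact_owner(root: str, uid: int, gid: int, apply: bool = True):
    """Visit every entry under root (then root itself) and chown it.
    With apply=False, only return the visit order without touching anything."""
    visited = []
    for path in sorted(Path(root).rglob("*")) + [Path(root)]:
        visited.append(str(path))
        if apply:
            # raises PermissionError (EPERM) when the caller may not change ownership
            os.chown(path, uid, gid)
    return visited
```

Practical workarounds, hedged: run the nucleus as root inside the container (with `--component-default-user ggc_user:ggc_group` at install time), or run the component as the same user the container already uses so no ownership change is required.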
2 answers · 0 votes · 10 views
Hariharnath Paduchuru · asked a month ago

Accessing an IoT Core topic from within a Docker C++ program

I have a C++ program that runs within a Docker container.

Docker container recipe:

```
{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "au.com.mycompany.smartdvr.docker.streamcontroller",
  "ComponentVersion": "1.3.22",
  "ComponentType": "aws.greengrass.generic",
  "ComponentDescription": "A component that runs the smart dvr Docker container from a private Amazon ECR image. Video/Audio saving and streaming to Kinesis added logging to file, file culling,ridgun optimisation",
  "ComponentPublisher": "MYCompany",
  "ComponentConfiguration": {
    "DefaultConfiguration": {
      "accessControl": {
        "aws.greengrass.ipc.mqttproxy": {
          "au.com.mycompany.smartdvr.docker.streamcontroller:mqttproxy:1": {
            "policyDescription": "Allows access to publish/subscribe to all topics.",
            "operations": [
              "aws.greengrass#PublishToIoTCore",
              "aws.greengrass#SubscribeToIoTCore"
            ],
            "resources": ["*"]
          }
        }
      }
    }
  },
  "ComponentDependencies": {
    "aws.greengrass.DockerApplicationManager": {
      "VersionRequirement": ">=2.0.0 <2.1.0",
      "DependencyType": "HARD"
    },
    "aws.greengrass.TokenExchangeService": {
      "VersionRequirement": ">=2.0.0 <2.1.0",
      "DependencyType": "HARD"
    },
    "au.com.mycompany.smartdvr.docker.gstd": {
      "VersionRequirement": ">=0.0.1 <5.0.0",
      "DependencyType": "HARD"
    }
  },
  "Manifests": [
    {
      "Platform": { "os": "linux" },
      "Lifecycle": {
        "Run": "echo '==========>>>>>>>'; $(docker kill streamcontroller || true) ; $(docker rm streamcontroller || true); docker run --name=streamcontroller --cap-add=SYS_PTRACE --runtime=nvidia -e DISPLAY=$DISPLAY --privileged --volume /tmp/.X11-unix:/tmp/.X11-unix --net=host -e NVIDIA_VISIBLE_DEVICES=all -v $HOME/.Xauthority:/root/.Xauthority -v /run/udev/control:/run/udev/control -v /greengrass/v2:/greengrass/v2 -v $AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT:$AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT -e SVCUID -e AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT=$AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT -v /dev:/dev -v /sys/firmware/devicetree/base/serial-number:/sys/firmware/devicetree/base/serial-number -v /data:/data -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics xxxxxxxx.dkr.ecr.ap-southeast-2.amazonaws.com/smartdvr:latest2 runstreamcontroller.sh"
      },
      "Artifacts": [
        {
          "Uri": "docker:xxxxxxx.dkr.ecr.ap-southeast-2.amazonaws.com/smartdvr:latest2",
          "Unarchive": "NONE",
          "Permission": { "Read": "OWNER", "Execute": "NONE" }
        }
      ]
    }
  ],
  "Lifecycle": {}
}
```

Inside the C++ program I subscribe to an IoT Core topic like this:

```
void ExternalBroker::subscribe_to_video_request_topic()
{
    string topic("mtd/");
    topic += LocalConfig::get_machine_id();
    topic += "/v1/mdvr/video/request";
    g_logging->info("External Broker subscribing to topic " + topic);
    QOS qos = QOS_AT_MOST_ONCE;
    int timeout = 10;
    g_logging->info("External Broker : --00");
    SubscribeToIoTCoreRequest request;
    request.SetTopicName(topic.c_str());
    request.SetQos(qos);
    g_logging->info("External Broker : --01");
    ExternalBrokerVideoRequestTopicHandler streamHandler;
    SubscribeToIoTCoreOperation operation = ipcClient.NewSubscribeToIoTCore(streamHandler);
    g_logging->info("External Broker : --02");
    auto activate = operation.Activate(request, nullptr);
    g_logging->info("External Broker : --03");
    activate.wait();
    g_logging->info("External Broker : --11");
    auto responseFuture = operation.GetResult();
    g_logging->info("External Broker : --22");
    if (responseFuture.wait_for(std::chrono::seconds(timeout)) == std::future_status::timeout) {
        std::cerr << "External Broker Operation timed out while waiting for response from Greengrass Core." << std::endl;
        return;
    }
    g_logging->info("External Broker : auto response = responseFuture.get()");
    auto response = responseFuture.get();
    g_logging->info("External Broker : response");
    if (!response) {
        // Handle error.
        g_logging->error("External Broker : ERROR");
        auto errorType = response.GetResultType();
        if (errorType == OPERATION_ERROR) {
            auto *error = response.GetOperationError();
            // Handle operation error.
            g_logging->error("External Broker : operation ERROR");
        } else {
            // Handle RPC error.
            g_logging->error("External Broker : RPC ERROR");
        }
    } else {
        g_logging->info("External Broker : got response: ");
    }
}
```

But it never gets past the `activate.wait();`, and when I publish something on that topic using the console's test page ( https://ap-southeast-2.console.aws.amazon.com/iot/home?region=ap-southeast-2#/test ) I never see the callback to the `class ExternalBrokerVideoRequestTopicHandler : public SubscribeToIoTCoreStreamHandler`:

```
void ExternalBrokerVideoRequestTopicHandler::OnStreamEvent(IoTCoreMessage *response)
{
    g_logging->info("External Broker : aa");
    auto message = response->GetMessage();
    if (message.has_value() && message.value().GetPayload().has_value()) {
        g_logging->error("External Broker : bb");
        auto messageBytes = message.value().GetPayload().value();
        std::string messageString(messageBytes.begin(), messageBytes.end());
        std::string topicName = message.value().GetTopicName().value().c_str();
        // Handle message.
        g_logging->info("External Broker received iot message on " + topicName + " message: " + messageString);
        // try to load the json
        json payload_json = nlohmann::json::parse(messageString);
        if (payload_json) {
            g_logging->info("External Broker : cc");
            // forward request to queue and save queue
        } else {
            g_logging->error("External Broker can't load " + messageString);
        }
    }
}
```

Log output:

```
[2021-12-13 05:29:59.956] [streamcontroller] [info] External Broker trying to connect to Greengrass APC
[2021-12-13 05:29:59.964] [streamcontroller] [info] External Broker conneted
[2021-12-13 05:29:59.964] [streamcontroller] [info] External Broker set_connected :
[2021-12-13 05:29:59.964] [streamcontroller] [error] External Broker : 22
[2021-12-13 05:29:59.964] [streamcontroller] [error] External Broker : 33
[2021-12-13 05:29:59.964] [streamcontroller] [info] External Broker subscribing to topic mtd/smartdvr-1423019132001/v1/mdvr/video/request
[2021-12-13 05:29:59.964] [streamcontroller] [info] External Broker : --00
[2021-12-13 05:29:59.965] [streamcontroller] [info] External Broker : --01
[2021-12-13 05:29:59.965] [streamcontroller] [info] External Broker : --02
[2021-12-13 05:29:59.965] [streamcontroller] [info] External Broker : --03
[2021-12-13 05:30:09.907] [streamcontroller] [info] ExternalBroker timer tick..
[2021-12-13 05:30:19.947] [streamcontroller] [info] ExternalBroker timer tick..
[2021-12-13 05:30:29.952] [streamcontroller] [info] ExternalBroker timer tick..
[2021-12-13 05:30:40.003] [streamcontroller] [info] ExternalBroker timer tick..
```

I can't see any errors in greengrass.log. What am I missing here?
4 answers · 0 votes · 8 views
clogwog · asked a month ago

AWS IoT Timestream Rule Action Multi Measure Record

Hi,

Is it possible to create a single DB record with multiple measurements using the IoT Timestream rule action? I want to show 3 measurements from a device in a single row. Even though my select query has 3 measurements, they are all inserted into the table as different rows.

My Timestream rule in the CF template:

```
TimestreamRule:
  Type: AWS::IoT::TopicRule
  Properties:
    TopicRulePayload:
      RuleDisabled: false
      Sql: !Join [
          '',
          [
            "SELECT cpu_utilization, memory_utilization, disc_utilization FROM 'device/+/telemetry'",
          ],
        ]
      Actions:
        - Timestream:
            DatabaseName: !Ref TelemetryTimestreamDatabase
            TableName: !GetAtt DeviceTelemetryTimestreamTable.Name
            Dimensions:
              - Name: device
                Value: ${deviceId}
            RoleArn: !GetAtt SomeRole.Arn
            Timestamp:
              Unit: SECONDS
              Value: ${time}
```

My message payload:

```
{
  "cpu_utilization": 8,
  "memory_utilization": 67.4,
  "disc_utilization": 1.1,
  "deviceId": "asdasdasd123123123",
  "time": "1639141461"
}
```

Resulting records in Timestream:

| device | measure_name | time | measure_value::bigint | measure_value::double |
| --- | --- | --- | --- | --- |
| 61705b3f6ac7696431ac6b12 | disc_utilization | 2021-12-10 13:03:47.000000000 | - | 1.1 |
| 61705b3f6ac7696431ac6b12 | memory_utilization | 2021-12-10 13:03:47.000000000 | - | 67.1 |
| 61705b3f6ac7696431ac6b12 | cpu_utilization | 2021-12-10 13:03:47.000000000 | - | 12.1 |

This is not what I want. I want a single record including all three measurements: cpu, disc, and memory. I know it is possible somehow, because the provided sample DB has multi-measurement records, such as:

| hostname | az | region | measure_name | time | memory_utilization | cpu_utilization |
| --- | --- | --- | --- | --- | --- | --- |
| host-n2Rxl | eu-north-1a | eu-north-1 | DevOpsMulti-stats | 2021-12-10 13:03:47.000000000 | 40.324917071566546 | 91.85944083569557 |
| host-sEUc8 | us-west-2a | us-west-2 | DevOpsMulti-stats | 2021-12-10 13:03:47.000000000 | 59.224512780289224 | 18.09011541205904 |

How can I achieve this? Please help!

Best,
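Timestream itself supports multi-measure records through the WriteRecords API (`MeasureValueType: MULTI` plus a `MeasureValues` list), so if the rule's Timestream action only writes one row per field, one workaround is to route the rule to a small Lambda that writes the record directly. A hedged boto3 sketch (names are illustrative); the record builder is kept SDK-free so it can be checked on its own:

```python
def build_multi_measure_record(device_id: str, epoch_seconds: str, metrics: dict) -> dict:
    """One Timestream record carrying several measures in a single row."""
    return {
        "Dimensions": [{"Name": "device", "Value": device_id}],
        "MeasureName": "utilization",          # one row name covering all measures
        "MeasureValueType": "MULTI",
        "MeasureValues": [
            {"Name": k, "Value": str(v), "Type": "DOUBLE"}
            for k, v in sorted(metrics.items())
        ],
        "Time": epoch_seconds,
        "TimeUnit": "SECONDS",
    }

def write_telemetry(database: str, table: str, record: dict):
    import boto3  # local import keeps the builder dependency-free
    client = boto3.client("timestream-write")
    return client.write_records(DatabaseName=database, TableName=table, Records=[record])
```

This produces one row with a column per measure, like the sample `DevOpsMulti-stats` table above.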
3 answers · 0 votes · 13 views
savcuoglu · asked a month ago

Greengrass Lambda to be triggered by shadow update

I've created a Lambda that does something when a named shadow updates:

```
# greengrass-cli component list
....
Component Name: mtdshadowconfiglambda-dev-sync_remote_config#2
Version: 1.0.18
State: RUNNING
Configuration: {"containerMode":"NoContainer","containerParams":{"devices":{},"memorySize":16000.0,"mountROSysfs":false,"volumes":{}},"inputPayloadEncodingType":"json","lambdaExecutionParameters":{"EnvironmentVariables":{}},"maxIdleTimeInSeconds":60.0,"maxInstancesCount":100.0,"maxQueueSize":1000.0,"pinned":false,"pubsubTopics":{"0":{"topic":"$aws/things/+/shadow/name/#","type":"IOT_CORE"}},"statusTimeoutInSeconds":60.0,"timeoutInSeconds":30.0}
......
```

I'm trying to lock down the trigger a bit more, to:

```
{"topic":"$aws/things/${AWS_IOT_THING_NAME}/shadow/name/config#","type":"IOT_CORE"}
```

(Side question: is this how you do this with the environment variable AWS_IOT_THING_NAME?)

So that's what I enter in the console when I create a new version of the Lambda. However, after everything is pushed to the device, the same topic is retained even though the version number has updated:

```
Component Name: mtdshadowconfiglambda-dev-sync_remote_config
Version: 1.0.23
State: FINISHED
Configuration: {"containerMode":"NoContainer","containerParams":{"devices":{},"memorySize":16000.0,"mountROSysfs":false,"volumes":{}},"inputPayloadEncodingType":"json","lambdaExecutionParameters":{"EnvironmentVariables":{}},"maxIdleTimeInSeconds":60.0,"maxInstancesCount":100.0,"maxQueueSize":1000.0,"pinned":false,"pubsubTopics":{"0":{"topic":"$aws/things/+/shadow/name/#","type":"IOT_CORE"}},"statusTimeoutInSeconds":60.0,"timeoutInSeconds":30.0}
```

Is there a way to change a Lambda's trigger once it is installed?

Edited by: clogwog on Oct 6, 2021 3:30 AM
Edited by: clogwog on Oct 6, 2021 3:42 AM
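The event sources live in the component's configuration, and Greengrass deployments merge configuration by default, so an updated `pubsubTopics` map can leave the old entry in place. Resetting that key in the deployment's configuration update should replace it rather than merge it. A hedged sketch of the update document shape accepted by the local `greengrass-cli deployment create --update-config` (component name taken from the question; whether `{iot:thingName}` recipe-variable interpolation works inside component configuration depends on the nucleus version, so treat that part as an assumption):

```
{
  "mtdshadowconfiglambda-dev-sync_remote_config": {
    "RESET": ["/pubsubTopics"],
    "MERGE": {
      "pubsubTopics": {
        "0": {
          "topic": "$aws/things/{iot:thingName}/shadow/name/config/#",
          "type": "IOT_CORE"
        }
      }
    }
  }
}
```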
2 answers · 0 votes · 1 view
clogwog · asked 3 months ago

GGV2: Unable to run docker containers: docker.sock - permission denied

This was working just fine a month ago, but now, when Greengrass tries to install Docker images with `docker load -i [...]`, I get this error:

```
2021-04-14T15:24:51.673Z [WARN] (Copier) xxxxxxxxx: stderr. Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.24/images/load?quiet=1: dial unix /var/run/docker.sock: connect: permission denied. {scriptName=services.xxxxxxxxx.lifecycle.Install.Script, serviceName=xxxxxxxxx, currentState=NEW}
2021-04-14T15:24:51.676Z [WARN] (pool-2-thread-17) xxxxxxxxx: shell-runner-error. {scriptName=services.xxxxxxxxx.lifecycle.Install.Script, serviceName=xxxxxxxxx, currentState=NEW, command=["docker load -i /greengrass/v2/packages/artifacts/xxxxxxxxx/1...."]}
```

I tried:

* Reverting back to Nucleus 2.0.3, but I seem to get the same problem.
* The only thing that solves it is to make the docker.sock world-writable... but that is not going to production.

More info:

* Greengrass is running as root.
* I can run these commands myself with no problem in a shell.
* Privileged containers I spin up can access the docker.sock with no problem.
* This happens on both my arm and amd64 devices.

I don't know how to check which user Greengrass uses to run the `docker load` command, but I assume it is its own user. Has anyone experienced something similar? I feel silly asking this question because it was working before, but I did not change anything, so I am confused.
2 answers · 0 votes · 1 view
QuantumLove · asked 9 months ago

Greengrass V2 behind Network Proxy - Failed to negotiate version with cloud

Hello AWS team,

Thank you very much for updating the documentation to allow an installation behind a network proxy. Very much appreciated. I successfully installed the Greengrass core, but I failed at deploying the first component - a Lambda function.

Info:

- The network proxy and port 443 have been configured.
- The network proxy does not terminate the TLS connection - I tested this with the following command (output below):

```
curl --insecure -vvI https://iot.eu-central-1.amazonaws.com 2>&1 | awk 'BEGIN { cert=0 } /^\** SSL connection/ { cert=1 } /^\**/ { if (cert) print }'
```

```
2021-03-08T13:58:40.708Z [ERROR] (pool-2-thread-26) com.aws.greengrass.componentmanager.ComponentManager: Failed to negotiate version with cloud and no local version to fall back to. {componentName=XXXXX, versionRequirement={thinggroup/XXXXXXGreengrassCoreGroup==1.0.0}}
software.amazon.awssdk.services.greengrassv2.model.GreengrassV2Exception: Greengrass service only supports connections via TLS mutual auth (Service: GreengrassV2, Status Code: 403, Request ID: 861d34a9-d648-4a0a-a079-1af57fa18cf1, Extended Request ID: null)
    at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:123)
    at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:79)
    at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:59)
    at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:40)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:30)
    at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:73)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:77)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:39)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:50)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:36)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:64)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:34)
    at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
    at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
    at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:48)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:31)
    at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
    at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
    at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193)
    at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:133)
    at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:159)
    at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:112)
    at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:167)
    at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:94)
    at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
    at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:55)
    at software.amazon.awssdk.services.greengrassv2.DefaultGreengrassV2Client.resolveComponentCandidates(DefaultGreengrassV2Client.java:1905)
    at com.aws.greengrass.componentmanager.ComponentServiceHelper.resolveComponentVersion(ComponentServiceHelper.java:67)
    at com.aws.greengrass.componentmanager.ComponentManager.lambda$negotiateVersionWithCloud$0(ComponentManager.java:198)
    at com.aws.greengrass.util.RetryUtils.runWithRetry(RetryUtils.java:46)
    at com.aws.greengrass.componentmanager.ComponentManager.negotiateVersionWithCloud(ComponentManager.java:197)
    at com.aws.greengrass.componentmanager.ComponentManager.resolveComponentVersion(ComponentManager.java:154)
    at com.aws.greengrass.componentmanager.DependencyResolver.lambda$resolveDependencies$1(DependencyResolver.java:108)
    at com.aws.greengrass.componentmanager.DependencyResolver.resolveComponentDependencies(DependencyResolver.java:215)
    at com.aws.greengrass.componentmanager.DependencyResolver.resolveDependencies(DependencyResolver.java:107)
    at com.aws.greengrass.deployment.DefaultDeploymentTask.lambda$call$2(DefaultDeploymentTask.java:98)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```

Curl output:

```
XX@XX:~$ curl --insecure -vvI https://iot.eu-central-1.amazonaws.com 2>&1 | awk 'BEGIN { cert=0 } /^\** SSL connection/ { cert=1 } /^\**/ { if (cert) print }'
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=iot.eu-central-1.amazonaws.com
*  start date: Nov 13 00:00:00 2020 GMT
*  expire date: Dec 12 23:59:59 2021 GMT
*  issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55a53ac33580)
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
* Connection #0 to host 10.XX.XX.XX left intact
```

Thank you very much for your help!
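The 403 "only supports connections via TLS mutual auth" typically means the data-plane call passed through something that terminated TLS, so the device's client certificate never reached AWS; checking that the proxy passes HTTPS through untouched, and that the nucleus itself knows about the proxy, are the usual first steps. A hedged sketch of the relevant nucleus configuration as I recall its documented shape (the proxy URL is a placeholder):

```
services:
  aws.greengrass.Nucleus:
    configuration:
      networkProxy:
        proxy:
          url: "http://my-proxy.example.com:3128"
```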
6 answers · 0 votes · 1 view
lukas-o · asked 10 months ago

How to trigger Lambdas from outside the GG environment, for testing purposes

Hello! We are developing an IoT device on top of GGV2, and there are a couple of components that we have made as Lambdas. We can deploy and run them properly, but the event sources are quite limited. If I want to trigger one, I need to either:

1) Have another component there making the calls I want. This is horrible because I need to re-deploy that component every time I want it to send different payloads to the Lambda, and it is overhead that is not helping the development of this isolated component.
2) Trigger from IoT Core. Not ideal, since I have to specify a topic and it cannot even be device-specific, because that would require each device to have its own deployment of these components.

Here are the concrete questions:

1) What is the best way to manually trigger a GG Lambda? Are you planning to add a feature to trigger it as one does in the AWS console > Lambda?
2) Are you planning on adding the option to use recipe variables inside the IOT_CORE event source? That would be nice.
3) Will it be possible to create local deployments for Lambda functions? That would also help iterations go faster.

I am only concerned with development/testing; for production it is the local topics. As a final note, we are aware of IDP. We are planning on using it, but it is not handy for development to force having a device ready. In fact, we usually have GG running inside a Docker container.
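For a Lambda whose event source is an IoT Core topic, publishing straight to that topic from a dev machine is the lightest-weight trigger available. A hedged boto3 sketch (topic and payload are illustrative); the argument builder is separated out so it can be checked without AWS access:

```python
import json

def build_publish_kwargs(topic: str, payload: dict, qos: int = 1) -> dict:
    """Arguments for the iot-data Publish call, JSON-encoding the payload."""
    return {"topic": topic, "qos": qos, "payload": json.dumps(payload).encode("utf-8")}

def publish_test_event(topic: str, payload: dict):
    import boto3  # local import keeps the builder dependency-free
    client = boto3.client("iot-data")
    return client.publish(**build_publish_kwargs(topic, payload))
```

This only exercises the IOT_CORE path, not local pub/sub, but it avoids redeploying a helper component for every new test payload.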
4
answers
0
votes
0
views
QuantumLove
asked 10 months ago

Resource defined in recipe seems cached when MQTT publish to IoT Core

I am new to Greengrass, so this question might be obvious to some people. I created a simple component that publishes to the topic "my/test" on IoT Core. Everything seemed to be working okay. Then I changed the topic to "mytopic/test" in both my code and the recipe. I thought it should just work, but I get:

```
2021-03-03T17:14:40.789Z [INFO] (Thread-5) com.aws.greengrass.builtin.services.mqttproxy.MqttProxyIPCAgent: Not Authorized. {error=Principal com.example.PubSub is not authorized to perform aws.greengrass.ipc.mqttproxy:aws.greengrass#PublishToIoTCore on resource mytopic/test}
```

My recipe looks like this:

```
{
    "RecipeFormatVersion": "2020-01-25",
    "ComponentName": "com.example.PubSub",
    "ComponentVersion": "1.0.0",
    "ComponentDescription": "My test AWS IoT Greengrass component.",
    "ComponentPublisher": "3S",
    "ComponentConfiguration": {
        "DefaultConfiguration": {
            "accessControl": {
                "aws.greengrass.ipc.mqttproxy": {
                    "com.example.PubSub:pubsub:1": {
                        "policyDescription": "Allows access to publish to my topic",
                        "operations": [
                            "aws.greengrass#PublishToIoTCore",
                            "aws.greengrass#SubscribeToIoTCore"
                        ],
                        "resources": [
                            "mytopic/test"
                        ]
                    }
                }
            },
            "Message": "whatever"
        }
    },
    "Manifests": [
        {
            "Platform": {
                "os": "linux"
            },
            "Lifecycle": {
                "Run": "python3 {artifacts:path}/my_pubsub.py '{configuration:/Message}'"
            }
        }
    ]
}
```

Even when I change the resource to "*", I still get the same authorization error. However, if I change my code back to publish on the original topic "my/test", it works as before, as if the resource defined in the recipe had no effect. Thanks in advance.
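One thing worth checking in situations like this (a sketch, not a confirmed fix): Greengrass v2 deployment configuration updates are *merged* with the configuration already on the device, so a previously deployed `accessControl` map can linger and shadow the new recipe defaults. Including a reset for the whole configuration path in the next deployment forces the component back to the new `DefaultConfiguration`. A deployment-document fragment along these lines (component name taken from the recipe above):

```
{
  "components": {
    "com.example.PubSub": {
      "componentVersion": "1.0.0",
      "configurationUpdate": {
        "reset": [""]
      }
    }
  }
}
```

Here `"reset": [""]` resets the entire configuration tree; a narrower path such as `"/accessControl"` can be used instead.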
9
answers
0
votes
0
views
jcai
asked 10 months ago

Greengrass v2 how to subscribe MQTT message from IoT Core

Hi, Publishing works: IoT Core receives the MQTT message from this program. But I can't get it to receive MQTT messages from IoT Core. I have read the document and forum thread below. The exact sample program from the document finishes right away and doesn't wait to receive a message from IoT Core, since they are not synchronized. I saw that I have to use a queue, but I can't get it to work. Could you help me? https://docs.aws.amazon.com/greengrass/v2/developerguide/ipc-iot-core-mqtt.html https://forums.aws.amazon.com/thread.jspa?threadID=334561&tstart=0 I also tried making two IPC clients, ipc_client for publish and ipc_client2 for subscribe, just in case.

------------------------------Publish works, Subscribe doesn't----------------------------

```python
import queue
import os
import json
import datetime
import time
import random

import awsiot.greengrasscoreipc.client as client
from awsiot.eventstreamrpc import Connection, LifecycleHandler, MessageAmendment
from awscrt.io import (
    ClientBootstrap,
    DefaultHostResolver,
    EventLoopGroup,
    SocketDomain,
    SocketOptions,
)
from awsiot.greengrasscoreipc.model import (
    IoTCoreMessage,
    QOS,
    SubscribeToIoTCoreRequest,
    PublishToIoTCoreRequest
)


class IPCUtils:
    def connect(self):
        elg = EventLoopGroup()
        resolver = DefaultHostResolver(elg)
        bootstrap = ClientBootstrap(elg, resolver)
        socket_options = SocketOptions()
        socket_options.domain = SocketDomain.Local
        amender = MessageAmendment.create_static_authtoken_amender(os.getenv("SVCUID"))
        hostname = os.getenv("AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT")
        print(hostname)
        connection = Connection(
            host_name=hostname,
            port=8033,
            bootstrap=bootstrap,
            socket_options=socket_options,
            connect_message_amender=amender,
        )
        self.lifecycle_handler = LifecycleHandler()
        connect_future = connection.connect(self.lifecycle_handler)
        TIMEOUT = 10
        connect_future.result(TIMEOUT)
        return connection


ipc_utils = IPCUtils()
connection = ipc_utils.connect()
ipc_client = client.GreengrassCoreIPCClient(connection)


class StreamHandler(client.SubscribeToIoTCoreStreamHandler):
    def __init__(self):
        super().__init__()

    def on_stream_event(self, event: IoTCoreMessage) -> None:
        message = str(event.message.payload, "utf-8")
        print("MESSAGE RECEIVED: ")
        print(message)
        queue.put(message)

    def on_stream_error(self, error: Exception) -> bool:
        # Handle error.
        print('Error ---!')
        return True

    def on_stream_closed(self) -> None:
        pass


dt = datetime.datetime.now().isoformat(timespec='seconds')
message = {"deviceid": "ggc", "timestamp": dt, "Temperature": random.randint(20, 37)}

request = PublishToIoTCoreRequest()
request.topic_name = "ggc/topic"
request.payload = json.dumps(message).encode('utf-8')
request.qos = QOS.AT_LEAST_ONCE
operation = ipc_client.new_publish_to_iot_core()
operation.activate(request)
future = operation.get_response()
TIMEOUT = 10
future.result(TIMEOUT)

queue = queue.Queue()
request = SubscribeToIoTCoreRequest()
request.topic_name = "ggc/#"
request.qos = QOS.AT_LEAST_ONCE
handler = StreamHandler()
operation = ipc_client.new_subscribe_to_iot_core(handler)
future = operation.activate(request)
future.result(TIMEOUT)

i = 0
while queue.empty():
    i = i + 1
    print(i)
    time.sleep(1)
print(queue.get())
queue.join()
```
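Two things commonly bite in samples like the one above: the stream handler runs on an SDK thread while the main thread races ahead, and the name `queue` is reused for both the module and the `Queue` instance, which is easy to trip over. The hand-off pattern itself can be sketched with the standard library alone (the Greengrass IPC subscription is stubbed here with a timer thread standing in for `on_stream_event`; names are illustrative):

```python
import queue
import threading

inbox = queue.Queue()  # created before any handler can possibly fire


def on_stream_event(payload):
    """Stand-in for StreamHandler.on_stream_event: only enqueue, never
    block, since the real one runs on the SDK's event-loop thread."""
    inbox.put(payload)


# Stub for the IPC subscription: deliver one message shortly after "subscribing".
threading.Timer(0.1, on_stream_event, args=("hello from IoT Core",)).start()

# Main thread: block until a message arrives (with a timeout) instead of
# polling queue.empty() in a sleep loop.
message = inbox.get(timeout=5)
print(message)
```

With a real subscription, replacing the polling loop at the bottom of the script with a blocking `inbox.get(timeout=...)` keeps the process alive until a message arrives without busy-waiting.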
18
answers
0
votes
1
views
jx2900
asked a year ago

Proper way to interact with resources/services outside of GG v2

Hi, We have an MQTT event flow generated outside of Greengrass Core on an IoT device, and we want to forward those events to AWS IoT Core using a custom forwarder deployed as a Greengrass component. A few options seem to be available.

1. Use a 3rd-party MQTT broker running outside the GG core environment. It aggregates all events and provides topics to subscribe to. A GG component subscribes to these topics and, once a new event appears, pulls it, transforms it if needed, and publishes it to an IoT Core topic through IPC. The GG component is a generic or Lambda-based component written in Python, so it is possible to use the 'paho-mqtt' package to connect to the external MQTT broker and 'awsiotsdk' to communicate with IoT Core via IPC. Looks fine in general: custom authentication and authorization mechanisms can be implemented in the MQTT broker, and it's independent of GG.

2. The same custom MQTT broker as in #1, but deployed as a GG component. External data sources publish their events the same way, but the lifecycle of the broker is managed by GG because it is a GG component. Not bad, but it looks a bit redundant, because GG v2 has its own internal topics that can be used to connect components to each other.

3. Use IPC and internal GG topics directly; external data sources publish events straight to these topics. This idea is based on this discussion: https://github.com/aws/aws-iot-device-sdk-python-v2/issues/145. Looks promising because in this case we use native GG mechanisms without any additional resources. The problem is authentication and authorization. In order to connect to an internal topic we need AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT and SVCUID. AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT is static, but I guess SVCUID is generated for each component individually and updated periodically. This raises security concerns: how to get this info from external processes, where to store it, and how to change it. But at least this approach exists and can potentially be used.

So the question is: which approach could be considered the most proper, well-architected, and secure? I think the GG developer team thought about this scenario and maybe has some recommendations. Is #3 just a dirty hack, or is it one of the 'official' ways to pass events to GG/IoT from outside? Are there other options I missed? Thanks in advance.

Edited by: lacteolus on Feb 18, 2021 1:58 AM
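For what it's worth, option 1 boils down to a thin relay, and keeping the transform-and-route logic free of both client libraries makes it easy to test in isolation. A minimal shape (paho-mqtt and the IPC publish call are represented by plain callables here; the topic scheme and `transform` are hypothetical examples, not a prescribed design):

```python
import json


def make_relay(transform, publish):
    """Return a paho-style on_message callback that transforms an external
    MQTT payload and hands it to an IoT Core publish function (IPC)."""
    def on_message(client, userdata, msg):
        event = json.loads(msg.payload)
        publish(topic=f"devices/{event['deviceid']}/events",
                payload=json.dumps(transform(event)).encode())
    return on_message


# Stubs standing in for paho-mqtt messages and the Greengrass IPC client:
class Msg:
    def __init__(self, payload):
        self.payload = payload


published = []
relay = make_relay(transform=lambda e: {**e, "forwarded": True},
                   publish=lambda topic, payload: published.append((topic, payload)))
relay(None, None, Msg(b'{"deviceid": "sensor-1", "t": 21}'))
print(published)
```

In a real component, `publish` would wrap the `PublishToIoTCore` IPC operation and `relay` would be registered with the paho client.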
3
answers
0
votes
1
views
lacteolus
asked a year ago

AWS MQTT Push Not working, what is SVCUID

I can't seem to get MQTT publishing to work. I'm trying to run a Lambda at the edge. I put in the topic name when creating the component and attached a policy to the cert. The component uploads successfully. When I run the script I get:

```
Traceback (most recent call last):
  File "main.py", line 54, in <module>
    connection = ipc_utils.connect()
  File "main.py", line 47, in connect
    connect_future = connection.connect(self.lifecycle_handler)
  File "/greengrass/v2/packages/artifacts/MqttTestGGCv2/4.0.0/awsiot/eventstreamrpc.py", line 440, in connect
    raise e
  File "/greengrass/v2/packages/artifacts/MqttTestGGCv2/4.0.0/awsiot/eventstreamrpc.py", line 434, in connect
    tls_connection_options=self._tls_connection_options)
  File "/greengrass/v2/packages/artifacts/MqttTestGGCv2/4.0.0/awscrt/eventstream/rpc.py", line 307, in connect
    connection)
RuntimeError: 1047 (AWS_IO_SOCKET_CONNECTION_REFUSED): socket connection refused.
```

I believe the error is in: `amender = MessageAmendment.create_static_authtoken_amender(os.getenv("SVCUID"))`. I don't know what SVCUID is?
**Full python code**:

```python
from awsiot.greengrasscoreipc.model import (
    PublishToTopicRequest,
    PublishMessage,
    BinaryMessage
)
import os
import awsiot.greengrasscoreipc.client as client
import time
from awsiot.greengrasscoreipc.model import (
    QOS,
    PublishToIoTCoreRequest
)
from awscrt.io import (
    ClientBootstrap,
    DefaultHostResolver,
    EventLoopGroup,
    SocketDomain,
    SocketOptions,
)
from awsiot.eventstreamrpc import Connection, LifecycleHandler, MessageAmendment

TIMEOUT = 15


class IPCUtils:
    def connect(self):
        elg = EventLoopGroup()
        resolver = DefaultHostResolver(elg)
        bootstrap = ClientBootstrap(elg, resolver)
        socket_options = SocketOptions()
        socket_options.domain = SocketDomain.Local
        amender = MessageAmendment.create_static_authtoken_amender(os.getenv("SVCUID"))
        #amender = MessageAmendment.create_static_authtoken_amender(100061)
        #hostname = os.getenv("AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT")
        hostname = "/greengrass/v2/ipc.socket"
        connection = Connection(
            host_name=hostname,
            port=8033,
            bootstrap=bootstrap,
            socket_options=socket_options,
            connect_message_amender=amender,
        )
        self.lifecycle_handler = LifecycleHandler()
        connect_future = connection.connect(self.lifecycle_handler)
        connect_future.result(TIMEOUT)
        return connection


while True:
    ipc_utils = IPCUtils()
    connection = ipc_utils.connect()
    ipc_client = client.GreengrassCoreIPCClient(connection)

    topic = "my/topic"
    message = "Hello, World!"

    request = PublishToTopicRequest()
    request.topic = topic
    request.qos = QOS.AT_LEAST_ONCE
    publish_message = PublishMessage()
    publish_message.binary_message = BinaryMessage()
    publish_message.binary_message.message = bytes(message, "utf-8")
    request.publish_message = publish_message
    #try:
    operation = ipc_client.new_publish_to_topic()
    #except Exception as e:
    #    print("ERRORRRR: " + str(e))
    #operation.activate(request)
    #future = operation.get_response()
    #future.result(TIMEOUT)
    time.sleep(5)
```

**Attached Policy**:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iot:Connect",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iot:Publish",
            "Resource": "*"
        }
    ]
}
```

Edited by: brayden on Feb 11, 2021 2:45 PM
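SVCUID is an authentication token that the Greengrass nucleus injects into each component's environment (alongside AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT). It only exists when the process is launched by Greengrass as part of a deployed component, which is why running the script by hand from a shell typically yields exactly this connection-refused failure. A small guard makes that failure mode explicit (stdlib-only sketch):

```python
import os

REQUIRED = ("SVCUID", "AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT")


def require_greengrass_env(environ=os.environ):
    """Fail fast with a clear message when the script is not running
    under the Greengrass nucleus (e.g. started manually from a shell)."""
    missing = [name for name in REQUIRED if not environ.get(name)]
    if missing:
        raise RuntimeError(
            "Not running as a Greengrass component; missing env vars: "
            + ", ".join(missing)
        )
    return {name: environ[name] for name in REQUIRED}


# Example with a fake environment as it would look inside a component:
creds = require_greengrass_env({
    "SVCUID": "example-token",
    "AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT": "/greengrass/v2/ipc.socket",
})
print(sorted(creds))
```

Calling this at the top of `connect()` turns the opaque socket error into an actionable one.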
11
answers
0
votes
0
views
brayden
asked a year ago

Stream Manager cannot load cbor2 module

I'm attempting to use the stream manager SDK for GG version 2: https://github.com/aws-greengrass/aws-greengrass-stream-manager-sdk-python

I installed the requirements for stream manager on my test core machine:

```
pip3 install --user -r requirements.txt
```

Then I installed the SDK by downloading the source code and running:

```
sudo python3 setup.py install
```

After installing the SDK, I realized that the installed .egg file was not readable by users outside of root/staff, so I chmod'd it to allow others to read it. Then I re-tested and it cannot read the cbor2 module. I ran another test component that just imported cbor2; it failed with the same errors as below. The log for my component contains this error:

```
2021-02-03T19:47:26.630Z [WARN] (Copier) com.example.stream.create: stderr. Traceback (most recent call last):. {scriptName=services.com.example.stream.create.lifecycle.Run.script, serviceName=com.example.stream.create, currentState=RUNNING}
2021-02-03T19:47:26.630Z [WARN] (Copier) com.example.stream.create: stderr. File "create-stream.py", line 5, in <module>. {scriptName=services.com.example.stream.create.lifecycle.Run.script, serviceName=com.example.stream.create, currentState=RUNNING}
2021-02-03T19:47:26.630Z [WARN] (Copier) com.example.stream.create: stderr. import cbor2. {scriptName=services.com.example.stream.create.lifecycle.Run.script, serviceName=com.example.stream.create, currentState=RUNNING}
2021-02-03T19:47:26.630Z [WARN] (Copier) com.example.stream.create: stderr. ModuleNotFoundError: No module named 'cbor2'. {scriptName=services.com.example.stream.create.lifecycle.Run.script, serviceName=com.example.stream.create, currentState=RUNNING}
```

Then I tried adding `pip3 install cbor2` to the install script of the component recipe. This didn't install correctly:

```
2021-02-03T20:48:47.613Z [WARN] (Copier) com.example.stream.create: stderr. Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-l9bd94a1/cbor2/. {scriptName=services.com.example.stream.create.lifecycle.Install.script, serviceName=com.example.stream.create, currentState=NEW}
2021-02-03T20:48:47.669Z [WARN] (pool-2-thread-67) com.example.stream.create: shell-runner-error. {scriptName=services.com.example.stream.create.lifecycle.Install.script, serviceName=com.example.stream.create, currentState=NEW, command=["pip3 install cbor2 "]}
```

On the same machine, I can run a python3 script directly and both cbor2 and stream_manager run fine (except of course for stream manager complaining about authorization). What could be causing the module loading problems? Thanks!
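One pattern that sidesteps both the root-owned .egg and the system-wide pip problem (a sketch with hypothetical file names, not a confirmed fix) is to install the dependencies in the component's own Install step, for the same user the component runs as, pinning them in a requirements file shipped as an artifact. A recipe lifecycle fragment along these lines:

```
{
    "Lifecycle": {
        "Install": "python3 -m pip install --user -r {artifacts:path}/requirements.txt",
        "Run": "python3 {artifacts:path}/create-stream.py"
    }
}
```

`{artifacts:path}` is the standard GGv2 recipe variable for the component's artifact directory; `--user` keeps the packages in the run-as user's site-packages rather than in a root-only location.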
2
answers
0
votes
1
views
DarrenB
asked a year ago

Lambda can not load python modules installed with component

Hi, I have installed Greengrass Core on a Raspberry Pi 4. I want to run a Python script using Lambda functions. For that I created 2 components. ComponentA installs all Python dependencies required to run the script. ComponentB is the Lambda component that will run the script. When I deploy both components I can see that ComponentA installs the Python packages under the ggc_user user directory, but ComponentB always fails because it cannot find/load the required packages. Both components should be run as the same ggc_user. Any ideas? Bests,

ComponentA recipe:

```
{
    "RecipeFormatVersion": "2020-01-25",
    "ComponentName": "componentA",
    "ComponentVersion": "1.0.3",
    "ComponentType": "aws.greengrass.generic",
    "ComponentDescription": "DHT11 dependencies",
    "ComponentPublisher": "SomePublisher",
    "ComponentConfiguration": {},
    "Manifests": [
        {
            "Platform": { "os": "linux" },
            "Name": "Linux",
            "Lifecycle": {
                "Install": "python3 -m pip install dht11 RPi.GPIO boto3"
            },
            "Artifacts": []
        }
    ],
    "Lifecycle": {}
}
```

Logs for ComponentA:

```
ComponentA stdout. Downloading https://files.pythonhosted.org/packages/ea/43/4b4a1b26eb03a429a4c37ca7fdf369d938bd60018fc194e94b8379b0c77c/s3transfer-0.3.4-py2.py3-none-any.whl (69kB). {scriptName=services.componentA.lifecycle.Install, serviceName=componentA, currentState=NEW}
ComponentA stdout. Collecting botocore<1.20.0,>=1.19.59 (from boto3). {scriptName=services.componentA.lifecycle.Install, serviceName=componentA, currentState=NEW}
ComponentA stdout. Downloading https://files.pythonhosted.org/packages/ee/10/08dc3b74cc9c47a2c81b2e88e06c2661783b86fd77fc80f7a3eb1bf56905/botocore-1.19.59-py2.py3-none-any.whl (7.2MB). {scriptName=services.componentA.lifecycle.Install, serviceName=componentA, currentState=NEW}
ComponentA stdout. Collecting urllib3<1.27,>=1.25.4; python_version != "3.4" (from botocore<1.20.0,>=1.19.59->boto3). {scriptName=services.componentA.lifecycle.Install, serviceName=componentA, currentState=NEW}
ComponentA stdout. Downloading https://files.pythonhosted.org/packages/f5/71/45d36a8df68f3ebb098d6861b2c017f3d094538c0fb98fa61d4dc43e69b9/urllib3-1.26.2-py2.py3-none-any.whl (136kB). {scriptName=services.componentA.lifecycle.Install, serviceName=componentA, currentState=NEW}
ComponentA stdout. Collecting python-dateutil<3.0.0,>=2.1 (from botocore<1.20.0,>=1.19.59->boto3). {scriptName=services.componentA.lifecycle.Install, serviceName=componentA, currentState=NEW}
ComponentA stdout. Downloading https://files.pythonhosted.org/packages/d4/70/d60450c3dd48ef87586924207ae8907090de0b306af2bce5d134d78615cb/python_dateutil-2.8.1-py2.py3-none-any.whl (227kB). {scriptName=services.componentA.lifecycle.Install, serviceName=componentA, currentState=NEW}
ComponentA stdout. Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.20.0,>=1.19.59->boto3) (1.12.0). {scriptName=services.componentA.lifecycle.Install, serviceName=componentA, currentState=NEW}
ComponentA stdout. Installing collected packages: jmespath, urllib3, python-dateutil, botocore, s3transfer, boto3. {scriptName=services.componentA.lifecycle.Install, serviceName=componentA, currentState=NEW}
ComponentA stdout. Successfully installed boto3-1.16.59 botocore-1.19.59 jmespath-0.10.0 python-dateutil-2.8.1 s3transfer-0.3.4 urllib3-1.26.2. {scriptName=services.componentA.lifecycle.Install, serviceName=componentA, currentState=NEW}
```

ls -ll /home/ggc_user/.local/lib/python3.7/site-packages:

```
drwxr-xr-x  4 ggc_user ggc_group     4096 Jan  6 12:52 RPi
drwxr-xr-x  2 ggc_user ggc_group     4096 Jan  6 12:52 RPi.GPIO-0.7.0.dist-info
-rwxr-xr-x  1 ggc_user ggc_group 10834284 Jan  5 11:33 _awscrt.cpython-37m-arm-linux-gnueabihf.so
drwxr-xr-x  4 ggc_user ggc_group     4096 Jan  5 11:33 awscrt
drwxr-xr-x  2 ggc_user ggc_group     4096 Jan  5 11:33 awscrt-0.9.15.dist-info
drwxr-xr-x  4 ggc_user ggc_group     4096 Jan  5 11:33 awsiot
drwxr-xr-x  2 ggc_user ggc_group     4096 Jan  5 11:33 awsiotsdk-1.5.3.dist-info
drwxr-xr-x 10 ggc_user ggc_group     4096 Jan 26 10:31 boto3
drwxr-xr-x  2 ggc_user ggc_group     4096 Jan 26 10:31 boto3-1.16.59.dist-info
drwxr-xr-x  7 ggc_user ggc_group     4096 Jan 26 10:31 botocore
drwxr-xr-x  2 ggc_user ggc_group     4096 Jan 26 10:31 botocore-1.19.59.dist-info
drwxr-xr-x  6 ggc_user ggc_group     4096 Jan 26 10:31 dateutil
drwxr-xr-x  3 ggc_user ggc_group     4096 Jan  6 12:52 dht11
drwxr-xr-x  2 ggc_user ggc_group     4096 Jan  6 12:52 dht11-0.1.0.dist-info
drwxr-xr-x  3 ggc_user ggc_group     4096 Jan 26 10:31 jmespath
drwxr-xr-x  2 ggc_user ggc_group     4096 Jan 26 10:31 jmespath-0.10.0.dist-info
drwxr-xr-x  2 ggc_user ggc_group     4096 Jan 26 10:31 python_dateutil-2.8.1.dist-info
drwxr-xr-x  3 ggc_user ggc_group     4096 Jan 26 10:31 s3transfer
drwxr-xr-x  2 ggc_user ggc_group     4096 Jan 26 10:31 s3transfer-0.3.4.dist-info
drwxr-xr-x  6 ggc_user ggc_group     4096 Jan 26 10:31 urllib3
drwxr-xr-x  2 ggc_user ggc_group     4096 Jan 26 10:31 urllib3-1.26.2.dist-info
```

Logs for ComponentB:

```
ComponentB: FATAL: lambda_runtime.py:147,Failed to import handler function "lambda_function.lambda_handler" due to exception: No module named 'boto3'. {serviceInstance=0, serviceName=componentB, currentState=RUNNING}
ComponentB: FATAL: lambda_runtime.py:428,Failed to initialize Lambda runtime due to exception: No module named 'boto3'. {serviceInstance=0, serviceName=componentB, currentState=RUNNING}
ComponentB: shell-runner-start. {scriptName=services.componentB.lifecycle.shutdown.script, serviceInstance=0, serviceName=componentB, currentState=BROKEN, command=["/greengrass/v2/packages/artifacts/aws.greengrass.LambdaLauncher/2.0.3/lambda-l..."]}
```
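The Lambda runtime's interpreter does not necessarily pick up packages installed into ggc_user's per-user site-packages (Lambda components run in their own environment, often containerized). One hedged workaround — the path below is taken from the listing above and is an assumption about your device, adjust as needed — is to put the per-user directory on the import path at the top of the handler module:

```python
import site
import sys


def add_user_site(path="/home/ggc_user/.local/lib/python3.7/site-packages"):
    """Make per-user packages visible to the current interpreter.
    site.addsitedir also processes .pth files, unlike a bare
    sys.path.append, and tolerates a missing directory."""
    if path not in sys.path:
        site.addsitedir(path)
    return path in sys.path


print(add_user_site())  # afterwards: import boto3, dht11, ...
```

Whether this reaches the host's files also depends on the Lambda component's containerization mode; with the default container the host path may simply not be visible, in which case packaging the dependencies with the Lambda itself is the safer route.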
6
answers
0
votes
0
views
savcuoglu
asked a year ago

Greengrass V2 S3 export problem

Hi, I'm trying to develop a component that will export media files from an IoT device (NVIDIA Jetson Nano). I wanted to start with the simple example provided in this repo: https://github.com/aws-greengrass/aws-greengrass-stream-manager-sdk-python/blob/main/samples/stream_manager_s3.py

I basically copy/pasted this example with one change: I had to configure the stream manager client to connect to a different port than the default one, because 8088 is already taken. The change looks like this:

```
client = StreamManagerClient(port=8099)
```

Everything else looks the same as in the example. The issue I'm having is that the code is not able to read the status messages. I'm getting this in the logs:

```
2021-01-08T16:25:20.633Z [INFO] (pool-2-thread-35) media_exporter: shell-runner-start. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=STARTING, command=["python3 /home/greengrass/packages/artifacts-unarchived/media_exporter/1.6.0/ex..."]}
2021-01-08T16:25:21.199Z [WARN] (Copier) media_exporter: stderr. INFO:root:Successfully appended S3 Task Definition to stream with sequence number 0. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.211Z [WARN] (Copier) media_exporter: stderr. ERROR:root:Exception while running. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.211Z [WARN] (Copier) media_exporter: stderr. Traceback (most recent call last):. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.211Z [WARN] (Copier) media_exporter: stderr. File "/home/greengrass/packages/artifacts-unarchived/media_exporter/1.6.0/exporter/exporter.py", line 89, in main. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.211Z [WARN] (Copier) media_exporter: stderr. status_stream_name, ReadMessagesOptions(min_message_count=1, read_timeout_millis=5000). {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.211Z [WARN] (Copier) media_exporter: stderr. File "/home/greengrass/packages/artifacts-unarchived/media_exporter/1.6.0/exporter/stream_manager/streammanagerclient.py", line 460, in read_messages. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.211Z [WARN] (Copier) media_exporter: stderr. return UtilInternal.sync(self._read_messages(stream_name, options), loop=self.__loop). {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.211Z [WARN] (Copier) media_exporter: stderr. File "/home/greengrass/packages/artifacts-unarchived/media_exporter/1.6.0/exporter/stream_manager/utilinternal.py", line 39, in sync. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.212Z [WARN] (Copier) media_exporter: stderr. return asyncio.run_coroutine_threadsafe(coro, loop=loop).result(). {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.212Z [WARN] (Copier) media_exporter: stderr. File "/usr/lib/python3.7/concurrent/futures/_base.py", line 435, in result. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.212Z [WARN] (Copier) media_exporter: stderr. return self.__get_result(). {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.212Z [WARN] (Copier) media_exporter: stderr. File "/usr/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.212Z [WARN] (Copier) media_exporter: stderr. raise self._exception. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.212Z [WARN] (Copier) media_exporter: stderr. File "/home/greengrass/packages/artifacts-unarchived/media_exporter/1.6.0/exporter/stream_manager/streammanagerclient.py", line 415, in _read_messages. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.212Z [WARN] (Copier) media_exporter: stderr. UtilInternal.raise_on_error_response(read_messages_response). {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.212Z [WARN] (Copier) media_exporter: stderr. File "/home/greengrass/packages/artifacts-unarchived/media_exporter/1.6.0/exporter/stream_manager/utilinternal.py", line 202, in raise_on_error_response. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.212Z [WARN] (Copier) media_exporter: stderr. raise NotEnoughMessagesException(response.error_message, response.status, response.request_id). {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
2021-01-08T16:25:26.212Z [WARN] (Copier) media_exporter: stderr. stream_manager.exceptions.NotEnoughMessagesException: not enough messages to return before time out. {scriptName=services.media_exporter.lifecycle.Run, serviceName=media_exporter, currentState=RUNNING}
```

I understand the error (not enough messages to return before time out), but why is this happening? In the end the files are being exported, but this read-status part isn't working at all.
Here is the snippet of the code that creates the streams:

```python
# Try deleting the status stream (if it exists) so that we have a fresh start
try:
    client.delete_message_stream(stream_name=status_stream_name)
except ResourceNotFoundException:
    pass

# Try deleting the stream (if it exists) so that we have a fresh start
try:
    client.delete_message_stream(stream_name=stream_name)
except ResourceNotFoundException:
    pass

exports = ExportDefinition(
    s3_task_executor=[
        S3ExportTaskExecutorConfig(
            identifier="S3TaskExecutor" + stream_name,  # Required
            # Optional. Add an export status stream to add statuses for all S3 upload tasks.
            status_config=StatusConfig(
                status_level=StatusLevel.INFO,  # Default is INFO level statuses.
                # Status Stream should be created before specifying in S3 Export Config.
                status_stream_name=status_stream_name,
            ),
        )
    ]
)

# Create the Status Stream.
client.create_message_stream(
    MessageStreamDefinition(name=status_stream_name, strategy_on_full=StrategyOnFull.OverwriteOldestData)
)

# Create the message stream with the S3 Export definition.
client.create_message_stream(
    MessageStreamDefinition(
        name=stream_name, strategy_on_full=StrategyOnFull.OverwriteOldestData, export_definition=exports
    )
)

# Append a S3 Task definition and print the sequence number
s3_export_task_definition = S3ExportTaskDefinition(input_url=file_url, bucket=bucket_name, key=key_name)
logger.info(
    "Successfully appended S3 Task Definition to stream with sequence number %d",
    client.append_message(stream_name, Util.validate_and_serialize_to_json_bytes(s3_export_task_definition)),
)
```
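The NotEnoughMessagesException is arguably expected here: the S3 export task simply hasn't written a status message within the 5-second `read_timeout_millis` yet. Wrapping the read in a bounded retry loop is one way to handle it; the shape of that loop, with the stream manager call stubbed as any callable that raises until data is ready, looks like this (names are illustrative):

```python
import time


def read_with_retry(read_once, retryable, attempts=5, delay=0.01):
    """Call read_once() until it returns, retrying only on the exception
    type(s) in `retryable` (e.g. NotEnoughMessagesException), up to
    `attempts` times with a fixed delay between tries."""
    for attempt in range(1, attempts + 1):
        try:
            return read_once()
        except retryable:
            if attempt == attempts:
                raise
            time.sleep(delay)


# Stub standing in for client.read_messages: fails twice (status not yet
# written by the export task), then yields a message batch.
class NotReady(Exception):
    pass


state = {"calls": 0}


def fake_read():
    state["calls"] += 1
    if state["calls"] < 3:
        raise NotReady()
    return ["S3 export status: success"]


print(read_with_retry(fake_read, NotReady))
```

In the real component, `read_once` would be `lambda: client.read_messages(status_stream_name, ReadMessagesOptions(min_message_count=1, read_timeout_millis=5000))` and `retryable` the SDK's NotEnoughMessagesException.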
13
answers
0
votes
0
views
szymon888
asked a year ago

Confusion on Greengrass Certificate Rotation

I have a question about certificate rotation. As you know, the MQTT server in GG uses a server certificate signed by a group CA certificate. The GG [documentation][1] mentions that the certificate is rotated per the setting in Greengrass (7 to 30 days), but it is not clear whether this means the server certificate or the group CA itself. I found some previous posts that seem to indicate that both the group CA and the server cert are rotated. However, in my testing that doesn't seem to be the case. On creation, the group CA certificate shows an expiry date at the end of the century (2100), while the expiry date on the server certificate matches the duration specified in the setting, so my guess is that the setting applies to the server certificate and the group CA remains the same unless manually changed. However, when I change the slider to adjust the expiration time, the server certificate on the GG core doesn't seem to get updated. Can someone clarify the rotation process: which certificate is it supposed to rotate, and when? Here is the ultimate issue I am trying to solve. I have a non-Greengrass-aware device that connects to the Greengrass core using manually configured information (since it doesn't support discovery). I am trying to determine at what interval (or on what event) it is necessary to update the CA certificate on the client so that it can keep connecting to the Greengrass core MQTT broker. [1]: https://docs.aws.amazon.com/greengrass/latest/developerguide/device-auth.html
1
answers
0
votes
1
views
AWS-User-4499555
asked a year ago

Failure to build and run GGC in Docker on Ubuntu Core

I have Ubuntu Core 18 (GNU/Linux 5.3.0-1033-raspi2 armv7l) installed on a Raspberry Pi 4 Model B. As the aws-iot-greengrass snap has limited functionality, I decided to explore the possibility of installing GGC in a Docker container. I installed the Docker snap (v19.03.11) and attempted to install the amazon/aws-iot-greengrass Docker image from Docker Hub ( <https://hub.docker.com/r/amazon/aws-iot-greengrass> ). After putting the GGC certs on the system, my first attempt at running the image failed with the following error: **FATAL tini (6) exec /greengrass-entrypoint.sh failed: Exec format error** As I appeared to have the same issue described in ( <https://forums.aws.amazon.com/thread.jspa?threadID=309740> ), I removed the previous GG image and built my own Docker image using the Dockerfile downloaded from <https://docs.aws.amazon.com/greengrass/latest/developerguide/what-is-gg.html#gg-docker-download>. I followed the instructions in the README, using the 'Dockerfile.alpine-armv7l' dockerfile. The build appeared to be successful, as the Dockerfile executed without issues aside from two warnings: **WARNING: Ignoring APKINDEX.00740ba1.tar.gz: No such file or directory** **WARNING: Ignoring APKINDEX.d8b2a6f4.tar.gz: No such file or directory** The image shows up in **docker image ls** as arm7l/aws-greengrass:1.10.2. However, when I try to run the image I receive the error: **FATAL tini (6) exec /greengrass-entrypoint.sh failed: Permission denied** I would appreciate any help in figuring out why this error occurs and whether my use scenario is even possible; I understand it's not conventional. Edited by: ole-OG on Sep 14, 2020 9:06 PM
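A "Permission denied" from tini on the entrypoint usually means the script lost its execute bit somewhere between download and the image build (Docker's COPY preserves file modes from the build context). One hedged check before rebuilding, demonstrated here on a scratch copy since the real greengrass-entrypoint.sh lives in your build context:

```shell
# Reproduce the fix pattern on a throwaway copy (hypothetical content):
cd "$(mktemp -d)"
printf '#!/bin/sh\necho entrypoint-ok\n' > greengrass-entrypoint.sh

chmod +x greengrass-entrypoint.sh   # restore the execute bit before 'docker build'
./greengrass-entrypoint.sh
```

Applied to the real build context: `chmod +x greengrass-entrypoint.sh` in the directory containing the Dockerfile, then rebuild and rerun the image.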
4
answers
0
votes
0
views
ole-OG
asked a year ago

Connection-refused in SiteWise Connector 6. Is there a bug?

FYI: I ran into problems using the latest SiteWise connector. Downgrading solved the issue. Is there a bug here? Before: * Connector was upgraded to version 6 * Network changed (from wifi location a to wifi location b) * Greengrass functions were responsive (e.g. our self-built lambdas) * SiteWise connector was down, showing connection issues (see log snippet) * The configurations in /var/sitewise/config/ didn't update anymore and held the old settings Actions to correct the issue: * All gateways re-created (this step may not be required) * Downgraded IoT SiteWise Connector to version 5 * Restarted greengrassd KR, Henk-Jan Log snippet:

```
[2020-05-14T23:29:21.318+02:00][INFO]-2020-05-14 23:29:21 INFO OpcUaConfigOverrideModule:45 - No overrides specified for the OPC-UA configuration
[2020-05-14T23:29:22.804+02:00][INFO]-2020-05-14 23:29:22 INFO CompositeConfigurationProvider:27 - Unable to load configuration file. Attempting to use environment variables
[2020-05-14T23:29:22.812+02:00][INFO]-2020-05-14 23:29:22 INFO CompositeConfigurationProvider:31 - Unable to load configuration file from Environment variables. Using default config.
[2020-05-14T23:29:22.974+02:00][ERROR]-com.amazonaws.greengrass.streammanager.client.StreamManagerClientImpl: Connect failed
[2020-05-14T23:29:22.974+02:00][ERROR]-java.net.ConnectException: Connection refused (Connection refused)
[2020-05-14T23:29:22.974+02:00][ERROR]- at java.net.PlainSocketImpl.socketConnect(Native Method)
[2020-05-14T23:29:22.974+02:00][ERROR]- at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
[2020-05-14T23:29:22.974+02:00][ERROR]- at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
[2020-05-14T23:29:22.974+02:00][ERROR]- at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
[2020-05-14T23:29:22.974+02:00][ERROR]- at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
```
3
answers
0
votes
1
views
HCastermans
asked 2 years ago

Deploy HelloWorld successful but no message in console

Hi guys, I followed the Greengrass dev guide for Raspberry Pi. Everything looked OK until the HelloWorld test: from the console, no message arrives on the _hello/world_ topic. The deployment of the lambda function is successful, so I assume there is no connection problem. The runtime.log says:
```
]-Created worker. {"functionArn": "arn:aws:lambda:::function:GGCloudSpooler:1", "workerId": edited, "pid": 1469}
[2020-03-01T21:12:55.553Z][INFO]-Created worker. {"functionArn": "arn:aws:lambda:::function:GGShadowService", "workerId": edited, "pid": 1506}
[2020-03-01T21:12:55.555Z][INFO]-Created worker. {"functionArn": "arn:aws:lambda:::function:GGShadowSyncManager", "workerId": edited, "pid": 1489}
[2020-03-01T21:12:55.563Z][INFO]-Created worker. {"functionArn": "arn:aws:lambda:::function:GGIPDetector:1", "workerId": edited, "pid": 1513}
[2020-03-01T21:12:55.633Z][INFO]-Created worker. {"functionArn": "arn:aws:lambda:::function:GGSecretManager:1", "workerId": edited, "pid": 1537}
[2020-03-01T21:12:55.863Z][INFO]-Created worker. {"functionArn": "arn:aws:lambda:::function:GGDeviceCertificateManager", "workerId": edited, "pid": 1592}
[2020-03-01T21:12:55.903Z][INFO]-Created worker. {"functionArn": "arn:aws:lambda:::function:GGTES", "workerId": edited, "pid": 1594}
[2020-03-01T21:13:01.736Z][INFO]-Handled functions request. {"functionName": "arn:aws:lambda:::function:GGStreamManager:1", "invocationId": ""}
[2020-03-01T21:13:01.736Z][INFO]-Started all system components.
[2020-03-01T21:13:01.953Z][INFO]-Created worker. {"functionArn": "edited:function:Greengrass_HelloWorld:2", "workerId": "edited", "pid": 1643}
```
Can you help me understand where to look at this point? How can I verify from the RPi that the lambda Python function is executed and messages are sent? Thank you in advance
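When chasing this kind of issue, the per-function logs on the Pi are usually more telling than runtime.log: on Greengrass v1 each user lambda writes its own log under `/greengrass/ggc/var/log/user/`. A small sketch to list whatever function logs exist (the root path is the documented default; adjust it if your install differs):

```python
from pathlib import Path

def find_lambda_logs(log_root="/greengrass/ggc/var/log/user"):
    """Return the per-function log files under the Greengrass v1 user log root."""
    root = Path(log_root)
    if not root.exists():
        return []
    # Logs are nested as <region>/<account-id>/<function-name>.log
    return sorted(str(p) for p in root.rglob("*.log"))

if __name__ == "__main__":
    for path in find_lambda_logs():
        print(path)
```

If Greengrass_HelloWorld.log shows the publish calls succeeding, the gap is more likely a missing or wrong subscription (Greengrass_HelloWorld → IoT Cloud on hello/world) than a device-side failure.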
2
answers
0
votes
0
views
diul
asked 2 years ago

Greengrass StreamManager error: "Unable to read from socket, likely socket is closed or server died"

I have a sample lambda that uses the newly introduced StreamManager. The main idea: a device publishes data to a channel, and a Greengrass lambda is subscribed to that channel. When data is received, it writes the data to StreamManager, and StreamManager exports the data to Kinesis. Sometimes (after deployments) I get the following error in the lambda log:
```
ERROR-streammanagerclient.py:177,Unable to read from socket, likely socket is closed or server died
```
My lambda is pinned (long-running) and the code is (Python 3.7):
```
import asyncio
import logging
import random
import time

from greengrasssdk.stream_manager import (
    ExportDefinition,
    KinesisConfig,
    MessageStreamDefinition,
    ReadMessagesOptions,
    ResourceNotFoundException,
    StrategyOnFull,
    StreamManagerClient,
)

stream_name = "FarmDataStream"
iot_channel_name = "farmdatachannel"
kinesis_stream_name = "farmDataKinesisStream"

# Create a client for the StreamManager
client = StreamManagerClient()

try:
    client.delete_message_stream(stream_name=stream_name)
except ResourceNotFoundException:
    pass

exports = ExportDefinition(
    kinesis=[KinesisConfig(identifier="KinesisExport" + stream_name, kinesis_stream_name=kinesis_stream_name)]
)
client.create_message_stream(
    MessageStreamDefinition(
        name=stream_name, strategy_on_full=StrategyOnFull.OverwriteOldestData, export_definition=exports
    )
)

# initialize the logger
logger = logging.getLogger()
logger.setLevel(logging.INFO)


def lambda_handler(event, context):
    logger.info(event)
    global stream_name
    client.append_message(stream_name, event)
    return
```
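Not an official fix, but the usual mitigation for "socket is closed or server died" is to treat the stream manager connection as transient: catch the failure in the handler, recreate the client, and retry the append. A generic, self-contained sketch of that retry shape (`make_client` and `append` are placeholders, not SDK API):

```python
import time

def append_with_retry(make_client, append, attempts=3, delay=1.0):
    """Retry an append, rebuilding the client when the connection has died.

    make_client: zero-arg factory returning a fresh client (placeholder).
    append: one-arg function taking a client and doing the write (placeholder).
    """
    client = make_client()
    for attempt in range(1, attempts + 1):
        try:
            return append(client)
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the original error
            time.sleep(delay)
            client = make_client()  # reconnect with a fresh client
```

In the lambda above this would wrap `client.append_message(...)` with `make_client=StreamManagerClient`; whether you also need to re-run `create_message_stream` after a reconnect depends on whether the stream survived the stream manager restart.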
1
answers
0
votes
0
views
AWS-User-9565912
asked 2 years ago

AWS IDT [Error: 106] problem

Hi, I just got a problem when using Device Tester. Below are my error message and my config files. I don't know what I'm missing that causes this problem. I already used IAM to create an account and exported the credentials in the terminal.
```
yichen@yichen-desktop:~/devicetester_greengrass_linux/bin$ sudo ./devicetester_linux_x86-64 run-suite
[sudo] password for yichen:
time="2019-10-02T15:23:48+08:00" level=info msg=Using suite: GGQ_1
time="2019-10-02T15:23:48+08:00" level=info msg=Using pool: 1
time="2019-10-02T15:23:48+08:00" level=info msg=Running test case... executionId=9326d229-e4e5-11e9-a446-080027639d9 suiteId=GGQ groupId=version testCaseId=ggc_version_check_test deviceId=123
time="2019-10-02T15:23:48+08:00" level=error msg=AWS credentials not found: EnvAccessKeyNotFound: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY not found in environment suiteId=GGQ groupId=version testCaseId=ggc_version_check_test deviceId=123 executionId=9326d229-e4e5-11e9-a446-0800276f39d9
time="2019-10-02T15:23:48+08:00" level=info msg=Finished running test case... testCaseId=ggc_version_check_test deviceId=123 executionId=9326d229-e4e5-11e9-a446-0800276f39d9 suiteId=GGQ groupId=version
time="2019-10-02T15:23:48+08:00" level=info msg=--- FAIL: TestGGCVersion (0.00s) suiteId=GGQ groupId=version testCaseId=ggc_version_check_test deviceId=123 executionId=9326d229-e4e5-11e9-a446-0800276f39d9
time="2019-10-02T15:23:48+08:00" level=info msg=FAIL executionId=9326d229-e4e5-11e9-a446-0800276f39d9 suiteId=GGQ groupId=version testCaseId=ggc_version_check_test deviceId=123
time="2019-10-02T15:23:48+08:00" level=error msg=Test exited unsuccessfully executionId=9326d229-e4e5-11e9-a446-800276f39d9 testCaseId=ggc_version_check_test error=exit status 1
time="2019-10-02T15:23:48+08:00" level=info msg=All tests finished. executionId=9326d229-e4e5-11e9-a446-0800276f9d9
time="2019-10-02T15:23:49+08:00" level=info msg=
========== Test Summary ==========
Execution Time:     1s
Tests Completed:    1
Tests Passed:       0
Tests Failed:       1
Tests Skipped:      0
----------------------------------
Test Groups:
    version:    FAILED
----------------------------------
Failed Tests:
    Group Name: version
        Test Name: Test GGC Version version
            Reason: [Error: 106] ValidationError: AWS credentials not found: EnvAccessKeyNotFound: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY not found in environment. Refer to the logs and troubleshooting section of IDT User Guide https://docs.aws.amazon.com/greengrass/latest/developerguide/device-tester-for-greengrass-ug.html for more information.
----------------------------------
Path to AWS IoT Device Tester Report: /home/yichen/devicetester_greengrass_linux/results/9326d229-e4e5-11e9-a446-0800276f39d9/awsiotdevicetester_report.xml
Path to Test Execution Logs: /home/yichen/devicetester_greengrass_linux/results/9326d229-e4e5-11e9-a446-0800276f9d9/logs
Path to Aggregated JUnit Report: /home/yichen/devicetester_greengrass_linux/results/9326d229-e4e5-11e9-a446-0800276f39d9/GGQ_Report.xml
==================================
```
Config:
```
{
  "log": { "location": "../logs/" },
  "configFiles": {
    "root": "../configs",
    "device": "../configs/device.json"
  },
  "testPath": "../tests/",
  "reportPath": "../results/",
  "certificatePath": "../certificates/",
  "awsRegion": "us-west-2",
  "auth": {
    "method": "environment"
  }
}
```
Device config:
```
[
  {
    "id": "1",
    "sku": "8783d",
    "features": [
      { "name": "os", "value": "ubuntu" },
      { "name": "arch", "value": "x86_64" }
    ],
    "kernelConfigLocation": "",
    "greengrassLocation": "/greengrass",
    "devices": [
      {
        "id": "123",
        "connectivity": {
          "protocol": "ssh",
          "ip": "10.1.70.127",
          "auth": {
            "method": "pki",
            "credentials": {
              "user": "yichen",
              "privKeyPath": "/home/yichen/.ssh/id_rsa"
            }
          }
        }
      }
    ]
  }
]
```
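A common cause of this exact `EnvAccessKeyNotFound` error when `"auth": { "method": "environment" }` is set: plain `sudo` starts IDT with a scrubbed environment, so credentials exported in the user shell never reach the test runner. A hedged sketch of the workaround (the key values below are placeholders, not real credentials):

```shell
# Placeholder credentials -- substitute your real IAM keys.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="exampleSecret"

# Then either preserve the environment across sudo:
#   sudo -E ./devicetester_linux_x86-64 run-suite
# ...or pass the variables explicitly on the sudo command line:
#   sudo AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
#        AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
#        ./devicetester_linux_x86-64 run-suite
echo "exported AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID"
```

Whether `sudo -E` is permitted depends on the sudoers policy; passing the variables on the command line works regardless.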
4
answers
0
votes
0
views
yichen
asked 2 years ago

Issues with Node.js hello world running core 1.9.2

I have an issue where the Node HelloWorld example behaves differently from the Python version. I have tested this with core 1.9.2 on two different hardware platforms (both armv7l; one is a Raspberry Pi). The issues are:

1. MQTT messages from cloud to core are received three times by the HW lambda (only sent once)
2. Lambda receives SIGTERM after event + timeout

The lambda is configured with heaps of RAM (100MB), long-lived, timeout 30 seconds.

1. could be explained by QoS, but I'm not sure how to configure this; it wasn't an issue with Python.
2. seems related to the timeout, but the Python HelloWorld didn't have this issue.

Only one MQTT message was sent by the test interface, but the lambda received it three times.
```
[2019-09-12T14:46:47.791+01:00][INFO]-Posting work for function [arn:aws:lambda:::function:GGRouter] to /2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2019-09-12T14:46:47.797+01:00][INFO]-Work posted with invocation ID [8d5d5ef1-4464-4468-5c34-b367a29cd6e9]
[2019-09-12T14:46:47.798+01:00][INFO]-null
[2019-09-12T14:46:47.799+01:00][INFO]-8d5d5ef1-4464-4468-5c34-b367a29cd6e9
[2019-09-12T14:46:49.972+01:00][INFO]-Got work item with invocation id [52b9df8c-d6d1-4f8e-5624-a8d29108db4e]
[2019-09-12T14:46:49.981+01:00][INFO]-START RequestId: 52b9df8c-d6d1-4f8e-5624-a8d29108db4e
[2019-09-12T14:46:49.984+01:00][INFO]-EVENT: {
[2019-09-12T14:46:49.984+01:00][INFO]-  "message": "msg1"
[2019-09-12T14:46:49.984+01:00][INFO]-}
[2019-09-12T14:46:49.984+01:00][INFO]-End RequestId: 52b9df8c-d6d1-4f8e-5624-a8d29108db4e
[2019-09-12T14:46:57.801+01:00][INFO]-Publishing message on topic "hello/world" with Payload " { "message": "Hello world!!! Sent from Greengrass Core running on platform: linux-4.14.79-v7+ using NodeJS" } "
[2019-09-12T14:46:57.801+01:00][INFO]-Posting work for function [arn:aws:lambda:::function:GGRouter] to /2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2019-09-12T14:46:57.807+01:00][INFO]-Work posted with invocation ID [3f5ccd05-1c1e-42d1-563c-562cb3e57ecc]
[2019-09-12T14:46:57.807+01:00][INFO]-null
[2019-09-12T14:46:57.808+01:00][INFO]-3f5ccd05-1c1e-42d1-563c-562cb3e57ecc
[2019-09-12T14:47:07.814+01:00][INFO]-Publishing message on topic "hello/world" with Payload " { "message": "Hello world!!! Sent from Greengrass Core running on platform: linux-4.14.79-v7+ using NodeJS" } "
[2019-09-12T14:47:07.814+01:00][INFO]-Posting work for function [arn:aws:lambda:::function:GGRouter] to /2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2019-09-12T14:47:07.82+01:00][INFO]-Work posted with invocation ID [91c9c396-b3e7-40c4-5567-7db36a5ccfd7]
[2019-09-12T14:47:07.822+01:00][INFO]-null
[2019-09-12T14:47:07.822+01:00][INFO]-91c9c396-b3e7-40c4-5567-7db36a5ccfd7
[2019-09-12T14:47:17.826+01:00][INFO]-Publishing message on topic "hello/world" with Payload " { "message": "Hello world!!! Sent from Greengrass Core running on platform: linux-4.14.79-v7+ using NodeJS" } "
[2019-09-12T14:47:17.827+01:00][INFO]-Posting work for function [arn:aws:lambda:::function:GGRouter] to /2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2019-09-12T14:47:17.833+01:00][INFO]-Work posted with invocation ID [3132ce31-3509-4ea4-50f4-237606a8c74f]
[2019-09-12T14:47:17.834+01:00][INFO]-null
[2019-09-12T14:47:17.835+01:00][INFO]-3132ce31-3509-4ea4-50f4-237606a8c74f
**[2019-09-12T14:47:19.972+01:00][INFO]-Caught SIGTERM. Stopping runtime.**
[2019-09-12T14:47:20.745+01:00][INFO]-Running [arn:aws:lambda:us-east-1:886556155174:function:NodeHelloW:9]
[2019-09-12T14:47:20.797+01:00][INFO]-Getting work for function [arn:aws:lambda:us-east-1:886556155174:function:NodeHelloW:9] from /2016-11-01/functions/arn:aws:lambda:us-east-1:886556155174:function:NodeHelloW:9/work
[2019-09-12T14:47:20.837+01:00][INFO]-Got work item with invocation id [52b9df8c-d6d1-4f8e-5624-a8d29108db4e]
[2019-09-12T14:47:20.845+01:00][INFO]-START RequestId: 52b9df8c-d6d1-4f8e-5624-a8d29108db4e
[2019-09-12T14:47:20.848+01:00][INFO]-EVENT: {
[2019-09-12T14:47:20.849+01:00][INFO]-  "message": "msg1"
[2019-09-12T14:47:20.849+01:00][INFO]-}
[2019-09-12T14:47:20.849+01:00][INFO]-End RequestId: 52b9df8c-d6d1-4f8e-5624-a8d29108db4e
[2019-09-12T14:47:30.768+01:00][INFO]-Publishing message on topic "hello/world" with Payload " { "message": "Hello world!!! Sent from Greengrass Core running on platform: linux-4.14.79-v7+ using NodeJS" } "
[2019-09-12T14:47:30.774+01:00][INFO]-Posting work for function [arn:aws:lambda:::function:GGRouter] to /2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2019-09-12T14:47:30.789+01:00][INFO]-Work posted with invocation ID [7b398182-e1b9-43bd-4bba-e89caa272d76]
[2019-09-12T14:47:30.793+01:00][INFO]-null
[2019-09-12T14:47:30.794+01:00][INFO]-7b398182-e1b9-43bd-4bba-e89caa272d76
[2019-09-12T14:47:40.791+01:00][INFO]-Publishing message on topic "hello/world" with Payload " { "message": "Hello world!!! Sent from Greengrass Core running on platform: linux-4.14.79-v7+ using NodeJS" } "
[2019-09-12T14:47:40.791+01:00][INFO]-Posting work for function [arn:aws:lambda:::function:GGRouter] to /2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2019-09-12T14:47:40.799+01:00][INFO]-Work posted with invocation ID [0dcd71b8-38fa-426e-483a-30ff1d1433d2]
[2019-09-12T14:47:40.8+01:00][INFO]-null
[2019-09-12T14:47:40.801+01:00][INFO]-0dcd71b8-38fa-426e-483a-30ff1d1433d2
[2019-09-12T14:47:50.796+01:00][INFO]-Publishing message on topic "hello/world" with Payload " { "message": "Hello world!!! Sent from Greengrass Core running on platform: linux-4.14.79-v7+ using NodeJS" } "
[2019-09-12T14:47:50.796+01:00][INFO]-Posting work for function [arn:aws:lambda:::function:GGRouter] to /2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2019-09-12T14:47:50.802+01:00][INFO]-Work posted with invocation ID [17b39082-17fc-49da-42a1-807569f76dc4]
[2019-09-12T14:47:50.803+01:00][INFO]-null
[2019-09-12T14:47:50.804+01:00][INFO]-17b39082-17fc-49da-42a1-807569f76dc4
**[2019-09-12T14:47:50.831+01:00][INFO]-Caught SIGTERM. Stopping runtime.**
[2019-09-12T14:47:51.766+01:00][INFO]-Running [arn:aws:lambda:us-east-1:886556155174:function:NodeHelloW:9]
[2019-09-12T14:47:51.817+01:00][INFO]-Getting work for function [arn:aws:lambda:us-east-1:886556155174:function:NodeHelloW:9] from /2016-11-01/functions/arn:aws:lambda:us-east-1:886556155174:function:NodeHelloW:9/work
[2019-09-12T14:47:51.856+01:00][INFO]-Got work item with invocation id [52b9df8c-d6d1-4f8e-5624-a8d29108db4e]
[2019-09-12T14:47:51.865+01:00][INFO]-START RequestId: 52b9df8c-d6d1-4f8e-5624-a8d29108db4e
[2019-09-12T14:47:51.868+01:00][INFO]-EVENT: {
[2019-09-12T14:47:51.868+01:00][INFO]-  "message": "msg1"
[2019-09-12T14:47:51.868+01:00][INFO]-}
[2019-09-12T14:47:51.868+01:00][INFO]-End RequestId: 52b9df8c-d6d1-4f8e-5624-a8d29108db4e
[2019-09-12T14:48:01.789+01:00][INFO]-Publishing message on topic "hello/world" with Payload " { "message": "Hello world!!! Sent from Greengrass Core running on platform: linux-4.14.79-v7+ using NodeJS" } "
[2019-09-12T14:48:01.795+01:00][INFO]-Posting work for function [arn:aws:lambda:::function:GGRouter] to /2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2019-09-12T14:48:01.811+01:00][INFO]-Work posted with invocation ID [f3aa5543-8ca8-427a-58a8-d782b893f65d]
[2019-09-12T14:48:01.814+01:00][INFO]-null
[2019-09-12T14:48:01.815+01:00][INFO]-f3aa5543-8ca8-427a-58a8-d782b893f65d
[2019-09-12T14:48:11.812+01:00][INFO]-Publishing message on topic "hello/world" with Payload " { "message": "Hello world!!! Sent from Greengrass Core running on platform: linux-4.14.79-v7+ using NodeJS" } "
[2019-09-12T14:48:11.813+01:00][INFO]-Posting work for function [arn:aws:lambda:::function:GGRouter] to /2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2019-09-12T14:48:11.819+01:00][INFO]-Work posted with invocation ID [c484d3e1-4042-49dc-7adc-8bd44fe0164f]
[2019-09-12T14:48:11.821+01:00][INFO]-null
[2019-09-12T14:48:11.821+01:00][INFO]-c484d3e1-4042-49dc-7adc-8bd44fe0164f
[2019-09-12T14:48:21.825+01:00][INFO]-Publishing message on topic "hello/world" with Payload " { "message": "Hello world!!! Sent from Greengrass Core running on platform: linux-4.14.79-v7+ using NodeJS" } "
[2019-09-12T14:48:21.826+01:00][INFO]-Posting work for function [arn:aws:lambda:::function:GGRouter] to /2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2019-09-12T14:48:21.832+01:00][INFO]-Work posted with invocation ID [4f9493c4-94f5-424e-447e-9a302f7d436c]
[2019-09-12T14:48:21.833+01:00][INFO]-null
[2019-09-12T14:48:21.834+01:00][INFO]-4f9493c4-94f5-424e-447e-9a302f7d436c
**[2019-09-12T14:48:21.851+01:00][INFO]-Caught SIGTERM. Stopping runtime.**
```
Edited by: KidIT on Sep 12, 2019 7:34 AM

Edited by: KidIT on Sep 12, 2019 7:36 AM
4
answers
0
votes
1
views
KidIT
asked 2 years ago