By using AWS re:Post, you agree to the Terms of Use

Questions tagged with AWS IoT Core



MQTT connection between the user's IoT devices and the user's phone

I want the communication to be done with publish and subscribe methods over MQTT. I don't want to use Shadow services. With the JITR method, devices can easily authenticate with AWS IoT by using a device certificate that was signed by my unique CA. Each device has a unique certificate and a unique policy associated with that certificate. The following policy has only been added to a single device's certificate.

```
Device's client id is = edb656635694fb25f2e6d50f361c37d64aa31e72118224df19f151ee70cc2923
```

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "arn:aws:iot:<REGION>:<USER-ID>:client/edb656635694fb25f2e6d50f361c37d64aa31e72118224df19f151ee70cc2923"
    },
    ..........
    .........
  ]
}
```

The user who buys the IoT device performs the following steps during registration with the device:

1. Sign up with the AWS Cognito service.
2. The policy name and client ID are sent from the IoT device to the phone via Bluetooth.
3. The Cognito identity is registered with the policy using AttachPolicy.

https://imgur.com/a/hfWqjkD

I found out that a given client ID only accepts a single connection at a time. That's why the above didn't work.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": [
        "arn:aws:iot:<REGION>:<USER-ID>:client/edb656635694fb25f2e6d50f361c37d64aa31e72118224df19f151ee70cc2923",
        "arn:aws:iot:<REGION>:<USER-ID>:client/mobileUser1"
      ]
    },
```

When I changed the policy as above, the system worked. With this method, I was able to restrict the resources of both IoT devices and phone users. But I did the above process manually (adding a new line to the policy). What should I do for mass production? At the same time, each additional IoT device will have its own policy. How can the user communicate with multiple IoT devices? Also, more than one client can be paired to a single IoT device. I think I'm on the wrong track; please guide me.
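The manual step of appending a new client ARN to the policy can at least be templated. Below is a minimal sketch (all names are hypothetical; for real scale, per-identity policies created via the CreatePolicy/AttachPolicy APIs, or policy variables, are the usual routes) that renders the policy document for one device/phone pairing:

```python
import json

# Hypothetical helper: render a per-pairing IoT policy document so the
# "add a new line to the policy" step can be automated. Region and account
# placeholders mirror the question; client IDs are illustrative.
def render_connect_policy(region, account_id, client_ids):
    """Build a policy allowing iot:Connect for every paired client ID."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "iot:Connect",
                "Resource": [
                    f"arn:aws:iot:{region}:{account_id}:client/{cid}"
                    for cid in client_ids
                ],
            }
        ],
    }

policy = render_connect_policy(
    "eu-west-1", "123456789012",
    ["device-client-id-1", "mobileUser1"],
)
print(json.dumps(policy, indent=2))
```

The rendered document could then be passed to the IoT `CreatePolicy` API during device registration instead of being edited by hand.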
0 answers · 0 votes · 1 view
AWS-User-8111104, asked 2 days ago

AWS IoT Thing provisioning fails on Windows during certificate loading

Hello, I have a problem during the provisioning of an IoT thing using claim certificates. We are using the fleet provisioning by claim mechanism, following the steps described in this PDF: https://d1.awsstatic.com/whitepapers/device-manufacturing-provisioning.pdf

When we start the provisioning process, we provide the `AwsIotMqttConnectionBuilder` with the claim certificate and claim private key (which are generated in a previous step). The problem comes when we build the `MqttClientConnection` with which to make the provisioning request to AWS IoT Core. Here is the detailed exception:

```
Exception occurred during fleet provisioning by claim
    at com.iav.de.ota.provisioning.flow.FleetProvisioningByClaimFlowExecutor.execute(FleetProvisioningByClaimFlowExecutor.java:50)
    at com.iav.de.ota.provisioning.ProvisioningFacade.provision(ProvisioningFacade.java:60)
    at com.iav.de.ota.provisioning.ProvisioningFacade.provisionToDeviceManagementCloud(ProvisioningFacade.java:54)
    at com.iav.de.ota.provisioning.ProvisioningFacade.provision(ProvisioningFacade.java:39)
    at com.iav.de.ota.Main.main(Main.java:42)
Caused by: software.amazon.awssdk.crt.CrtRuntimeException: TlsContext.tls_ctx_new: Failed to create new aws_tls_ctx (aws_last_error: AWS_IO_FILE_VALIDATION_FAILURE(1038), A file was read and the input did not match the expected value) AWS_IO_FILE_VALIDATION_FAILURE(1038)
    at software.amazon.awssdk.crt.io.TlsContext.tlsContextNew(Native Method)
    at software.amazon.awssdk.crt.io.TlsContext.<init>(TlsContext.java:24)
    at software.amazon.awssdk.crt.io.ClientTlsContext.<init>(ClientTlsContext.java:26)
    at software.amazon.awssdk.iot.AwsIotMqttConnectionBuilder.build(AwsIotMqttConnectionBuilder.java:502)
    at com.iav.de.ota.mqtt.MqttConnectionFactory.create(MqttConnectionFactory.java:44)
    at com.iav.de.ota.provisioning.flow.FleetProvisioningByClaimFlowExecutor.execute(FleetProvisioningByClaimFlowExecutor.java:42)
```

Going through the error, I found that `AWS_IO_FILE_VALIDATION_FAILURE(1038)` indicates that the expected claim private key/certificate does not match the ones we are giving it. So I dug further into the issue and found that the library we use for reading the private key (Bouncy Castle) is reading a key that is different from the input one. In other words, when I inspect the claim private key with Notepad and compare it with the one that Bouncy Castle has read, they are different. What makes the problem more interesting is that this happens only on Windows machines, not on Linux machines. I have even tried reading the claim private key as a plain string from the file and passing it to the MqttConnection, and that works. Unfortunately, this is not a solution, because later on (after the provisioning) we store the real certificate and private key, for subsequent communication with AWS IoT Core, in a KeyStore that we again read with Bouncy Castle. So we need a library (Bouncy Castle or another) that can read the private key/certificate at any moment of the program's execution (either during the provisioning or later during the other AWS IoT Core calls with the real certificates). I forgot to mention: the claim private key and claim certificate are stored in PEM format.

Could you tell me what can be done here? Is there an AWS-supported library for reading the claim private key/certificate without using Bouncy Castle? Any suggestions are welcome, because we are stuck and the requirement is that each AWS IoT thing will run on Windows. Thanks a lot, Encho
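One Windows-only difference worth ruling out (this is an assumption, not a confirmed diagnosis): CRLF line endings or a UTF-8 BOM in the PEM file can make the bytes a strict parser sees differ from what Notepad displays. A tiny sketch of the normalization, written in Python for brevity (the same byte-level cleanup would apply before Bouncy Castle reads the file in Java):

```python
# Hypothetical helper: strip a UTF-8 BOM and normalize Windows CRLF line
# endings in a PEM file before handing it to the TLS layer.
def normalize_pem(raw: bytes) -> bytes:
    if raw.startswith(b"\xef\xbb\xbf"):  # strip UTF-8 BOM if present
        raw = raw[3:]
    return raw.replace(b"\r\n", b"\n")   # CRLF -> LF

# Illustrative PEM fragment with both problems present.
dirty = b"\xef\xbb\xbf-----BEGIN PRIVATE KEY-----\r\nMIIE...\r\n-----END PRIVATE KEY-----\r\n"
print(normalize_pem(dirty).decode())
```

Comparing the raw file bytes on Windows and Linux (rather than the rendered text) would confirm or rule this out quickly.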
1 answer · 0 votes · 7 views
Encho Belezirev, asked 6 days ago

Errors at dimensions (empty value) in Timestream from an IoT Rule

Hello: I'm trying to insert data into Timestream from AWS IoT, where I created a rule as:

```
SELECT * FROM 'dataTopic'
ACTIONS:
  Write a message into a Timestream DB: test, table: sensors, dimension name: device, dimension value: ${device}
  Republish a message to an AWS IoT topic: test
ERROR ACTION:
  Republish a message to an AWS IoT topic: error
```

Publishing data like this **works fine**:

```json
{
  "device": "abc123",
  "temperature": "24.50",
  "humidity": "49"
}
```

**Now, my real data** is actually like this:

```json
{
  "state": {
    "reported": {
      "device": "abc123",
      "temperature": "24.50",
      "humidity": "49"
    }
  }
}
```

So I had to modify my rule to `SELECT state.reported.* FROM 'dataTopic'`, but when I test it, I get an error from Timestream:

```
"failures" : [ {
  "failedAction" : "TimestreamAction",
  "failedResource" : "test#sensors",
  "errorMessage" : "Failed to write records to Timestream. The error received was 'Errors at dimensions.0: [Dimension value can not be empty.]'. Message arrived on dataTopic, Action: timestream, Database: test, Table: sensors"
```

However, **checking the data received at topic test, I don't see any difference** from the original data:

```json
{
  "device" : "abc123",
  "temperature" : "24.50",
  "humidity" : "49"
}
```

What could be the problem? So far, I see the same data being ingested, but for some reason Timestream is seeing something different. I tried to use CloudWatch to see exactly what Timestream is receiving, but I couldn't see the logs from this service. I would appreciate any help. Thanks
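A plausible explanation (hedged; worth verifying against the rule-action substitution-template documentation): the `${device}` template in the Timestream action's dimension value is resolved against the original incoming payload, not against the output of the `SELECT`. With the nested payload, there is no top-level `device` field, so the dimension comes out empty; `${state.reported.device}` would point at the right field. A toy resolver illustrating the lookup (this mimics the idea only, it is not AWS's implementation):

```python
# Illustration only: resolve a dotted path against the ORIGINAL message
# payload, the way a substitution template such as ${device} or
# ${state.reported.device} addresses fields.
def resolve(payload: dict, path: str):
    node = payload
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return None  # unresolved -> "Dimension value can not be empty"
        node = node[key]
    return node

raw = {"state": {"reported": {"device": "abc123", "temperature": "24.50"}}}
print(resolve(raw, "device"))                 # None: no top-level "device"
print(resolve(raw, "state.reported.device"))  # abc123
```

So changing the dimension value in the Timestream action from `${device}` to `${state.reported.device}` may resolve the error.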
1 answer · 0 votes · 3 views
AWS-User-SOS, asked 13 days ago
1 answer · 0 votes · 3 views
philm001, asked 25 days ago

AWS IoT Timestream Rule Action Multi Measure Record

Hi, Is it possible to create a single DB record with multiple measurements using the IoT Timestream rule action? I want to show 3 measurements from a device in a single row. Even though my select query has 3 measurements, they are all inserted into the table as different rows. My Timestream rule in my CF template:

```yaml
TimestreamRule:
  Type: AWS::IoT::TopicRule
  Properties:
    TopicRulePayload:
      RuleDisabled: false
      Sql: !Join
        - ''
        - - "SELECT cpu_utilization, memory_utilization, disc_utilization FROM 'device/+/telemetry'"
      Actions:
        - Timestream:
            DatabaseName: !Ref TelemetryTimestreamDatabase
            TableName: !GetAtt DeviceTelemetryTimestreamTable.Name
            Dimensions:
              - Name: device
                Value: ${deviceId}
            RoleArn: !GetAtt SomeRole.Arn
            Timestamp:
              Unit: SECONDS
              Value: ${time}
```

My message payload:

```json
{
  "cpu_utilization": 8,
  "memory_utilization": 67.4,
  "disc_utilization": 1.1,
  "deviceId": "asdasdasd123123123",
  "time": "1639141461"
}
```

Resulting records in Timestream:

| device | measure_name | time | measure_value::bigint | measure_value::double |
| --- | --- | --- | --- | --- |
| 61705b3f6ac7696431ac6b12 | disc_utilization | 2021-12-10 13:03:47.000000000 | - | 1.1 |
| 61705b3f6ac7696431ac6b12 | memory_utilization | 2021-12-10 13:03:47.000000000 | - | 67.1 |
| 61705b3f6ac7696431ac6b12 | cpu_utilization | 2021-12-10 13:03:47.000000000 | - | 12.1 |

This is not what I want. I want a single record including all three measurements: cpu, disc, and memory. I know it is possible to do somehow, because the provided sample DB has multi-measurement records, such as:

| hostname | az | region | measure_name | time | memory_utilization | cpu_utilization |
| --- | --- | --- | --- | --- | --- | --- |
| host-n2Rxl | eu-north-1a | eu-north-1 | DevOpsMulti-stats | 2021-12-10 13:03:47.000000000 | 40.324917071566546 | 91.85944083569557 |
| host-sEUc8 | us-west-2a | us-west-2 | DevOpsMulti-stats | 2021-12-10 13:03:47.000000000 | 59.224512780289224 | 18.09011541205904 |

How can I achieve this? Please help! Bests,
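The rule's Timestream action writes one record per selected measure, which matches the rows shown above. One workaround (an assumption about the intended design, not the only option) is to route the message to a Lambda and write a multi-measure record directly with the `timestream-write` `WriteRecords` API, using `MeasureValueType: "MULTI"` and a `MeasureValues` list. A sketch that only builds the request payload (the boto3 call is commented out since it needs AWS credentials):

```python
# Sketch of a multi-measure Timestream record as accepted by the
# timestream-write WriteRecords API. Field names mirror the question's payload.
msg = {
    "cpu_utilization": 8,
    "memory_utilization": 67.4,
    "disc_utilization": 1.1,
    "deviceId": "asdasdasd123123123",
    "time": "1639141461",
}

record = {
    "Dimensions": [{"Name": "device", "Value": msg["deviceId"]}],
    "MeasureName": "utilization",       # one measure_name for the whole row
    "MeasureValueType": "MULTI",
    "MeasureValues": [
        {"Name": k, "Value": str(msg[k]), "Type": "DOUBLE"}
        for k in ("cpu_utilization", "memory_utilization", "disc_utilization")
    ],
    "Time": msg["time"],
    "TimeUnit": "SECONDS",
}
# import boto3
# boto3.client("timestream-write").write_records(
#     DatabaseName="test", TableName="sensors", Records=[record])
print(record["MeasureValueType"], len(record["MeasureValues"]))
```

This produces one row with all three utilization columns, like the sample `DevOpsMulti-stats` records.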
3 answers · 0 votes · 13 views
savcuoglu, asked a month ago

UnauthorizedError when publishing to local MQTT

Hey folks, I'm trying to get IPC working for custom components, and I've hit a wall. I've configured local IPC according to the documentation (as far as I can tell), but whenever I publish to a topic I get an UnauthorizedError. I assumed that this was a misconfiguration of access control in the recipe, but I don't see any differences between my recipe and the examples. Any help would be much appreciated. Here's the relevant bit of the recipe:

```yaml
ComponentConfiguration:
  DefaultConfiguration:
    accessControl:
      aws.greengrass.ipc.pubsub:
        "my.custom.component:pubsub:1":
          policyDescription: "Publish access for database interface."
          operations:
            - "aws.greengrass#PublishToTopic"
          resources:
            - "*"
```

and here's the code that publishes:

```python
def publish_to_topic(topic, message):
    logger.info(f"sending: {message} to {topic}")
    request = PublishToTopicRequest()
    request.topic = topic
    publish_message = PublishMessage()
    publish_message.binary_message = BinaryMessage()
    publish_message.binary_message.message = bytes(dumps(message), "utf-8")
    request.publish_message = publish_message
    operation = ipc_client.new_publish_to_topic()
    operation.activate(request)
    future = operation.get_response()
    try:
        future.result(TIMEOUT)
        logger.info('Successfully published to topic: ' + topic)
    except concurrent.futures.TimeoutError:
        logger.error('Timeout occurred while publishing to topic: ' + topic)
    except UnauthorizedError as e:
        logger.error('Unauthorized error while publishing to topic: ' + topic)
        raise e
    except Exception as e:
        logger.error('Exception while publishing to topic: ' + topic)
        raise e

TIMEOUT = 10
ipc_client = awsiot.greengrasscoreipc.connect()
topic = "my/test/topic"
message = {
    'foo': 'FOO',
    'bar': 'BAR'
}
publish_to_topic(topic, message)
```
2 answers · 0 votes · 1 view
continuities, asked 7 months ago

Authenticating with StreamManager in python Docker container

In the developer guide for Greengrass V2, it says that to use inter-process communication, a Docker container must have the following environment variables set:

- AWS_REGION
- SVCUID
- AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT
- AWS_CONTAINER_AUTHORIZATION_TOKEN
- AWS_CONTAINER_CREDENTIALS_FULL_URI

https://docs.aws.amazon.com/greengrass/v2/developerguide/run-docker-container.html

I have written a simple Docker container that just installs the Stream Manager SDK and attempts to create a StreamManagerClient instance. The Dockerfile looks like this:

```
FROM python:3.9-alpine
ENV PYTHONUNBUFFERED=1
WORKDIR /workdir
RUN apk add git && pip install git+https://github.com/aws-greengrass/aws-greengrass-stream-manager-sdk-python
COPY test_connect.py test_connect.py
CMD [ "python", "test_connect.py" ]
```

The Python script `test_connect.py` looks like this:

```python
from stream_manager import StreamManagerClient

def main():
    client = StreamManagerClient()
    print(f"Connected to client: {client}")

if __name__ == "__main__":
    main()
```

The "Run" lifecycle step in my component recipe looks like this:

```
docker run -v /greengrass/v2:/greengrass/v2 -e AWS_REGION=$AWS_REGION -e SVCUID=$SVCUID -e AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT=$AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT -e AWS_CONTAINER_AUTHORIZATION_TOKEN=$AWS_CONTAINER_AUTHORIZATION_TOKEN -e AWS_CONTAINER_CREDENTIALS_FULL_URI=$AWS_CONTAINER_CREDENTIALS_FULL_URI --rm stream-test-docker
```

When this component runs, the Python script fails with the following error:

```
Connection error while connecting to server: [Errno 111] Connect call failed ('127.0.0.1', 8088)
Traceback (most recent call last):
  File "/workdir/test_connect.py", line 10, in <module>
    main()
  File "/workdir/test_connect.py", line 5, in main
    client = StreamManagerClient()
  File "/usr/local/lib/python3.9/site-packages/stream_manager/streammanagerclient.py", line 112, in __init__
    UtilInternal.sync(self.__connect(), loop=self.__loop)
  File "/usr/local/lib/python3.9/site-packages/stream_manager/utilinternal.py", line 39, in sync
    return asyncio.run_coroutine_threadsafe(coro, loop=loop).result()
  File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 440, in result
    return self.__get_result()
  File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.9/site-packages/stream_manager/streammanagerclient.py", line 140, in __connect
    self.__reader, self.__writer = await asyncio.wait_for(
  File "/usr/local/lib/python3.9/asyncio/tasks.py", line 481, in wait_for
    return fut.result()
  File "/usr/local/lib/python3.9/asyncio/streams.py", line 52, in open_connection
    transport, _ = await loop.create_connection(
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 1056, in create_connection
    raise exceptions[0]
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 1041, in create_connection
    sock = await self._connect_sock(
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 955, in _connect_sock
    await self.sock_connect(sock, address)
  File "/usr/local/lib/python3.9/asyncio/selector_events.py", line 502, in sock_connect
    return await fut
  File "/usr/local/lib/python3.9/asyncio/selector_events.py", line 537, in _sock_connect_cb
    raise OSError(err, f'Connect call failed {address}')
ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 8088)
```

I have confirmed that the relevant environment variables are being mapped into the container, as is my Greengrass root dir `/greengrass/v2`. This is the same error returned by any other non-Greengrass client on the system (i.e. if I run the same Python script outside of the Docker container).

I have also tried setting the Stream Manager config `STREAM_MANAGER_AUTHENTICATE_CLIENT` to `"false"`, with no luck (though I don't believe this should be necessary, as the container is running via Greengrass and has all the recommended environment variables set; the documentation was quite vague on this). The only successful workaround I was able to find was to set `--network=host` in my `docker run` command, but this is not ideal, as aside from this there should be no need to operate this particular component's container in host networking mode. Is this possibly the only way?

So in summary, my questions:

1. What is the appropriate method for interacting with Stream Manager inside a Greengrass-managed Docker container?
2. What is the correct Stream Manager configuration value for `STREAM_MANAGER_AUTHENTICATE_CLIENT` when using a Greengrass-managed Docker container (as opposed to a Docker container or other process unrelated to Greengrass)?
2 answers · 0 votes · 0 views
travipross, asked 10 months ago

AUTHORIZATION_FAILURE error while publishing messages from Java Client

Hi, I started testing our Java client with AWS IoT using the MQTT protocol. I read some documents and finished a few hours of training sessions before attempting the POC. So far I have achieved: CONNECT, SUBSCRIBE (I get SUBACK), PING. When I try publishing messages, I get an AUTHORIZATION_FAILURE error. I don't think the error is due to policy settings or certificates, because I am able to connect, subscribe, and receive messages sent through the AWS IoT test console. There are no other details in the logs to debug further. Here are my policy settings:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "arn:aws:iot:us-west-2:XXXXXXXXXXXX:client/${iot:ClientId}"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Subscribe",
      "Resource": "arn:aws:iot:us-west-2:XXXXXXXXXXXX:topicfilter/java-client"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Receive",
      "Resource": "arn:aws:iot:us-west-2:XXXXXXXXXXXX:topic/java-client"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:us-west-2:XXXXXXXXXXXX:topic/home-devices/router"
    }
  ]
}
```

The policy is attached to the certificate that I am using to connect to AWS IoT. Other details, if it helps to answer my question:

Protocol: **MQTT**
Payload format: **Binary (Google Protocol Buffers)**

Error fields:

```
details   Authorization Failure
eventType Publish-In
logLevel  ERROR
protocol  MQTT
reason    AUTHORIZATION_FAILURE
status    Failure
```

Note: I have not set any rules. Is it **mandatory** to set rules to consume MQTT messages in binary format and republish the same message to another topic? Thanks, Mahesh
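Since CONNECT, SUBSCRIBE, and receiving all succeed, a likely culprit (hedged, not a confirmed diagnosis) is that the publish goes to a topic other than the single one the policy's `iot:Publish` statement covers. A trivial sanity check, with hypothetical helper names and simplified exact matching (real policies may also use `*` wildcards in resource ARNs):

```python
# AUTHORIZATION_FAILURE on a Publish-In event usually means the published
# topic is not covered by any iot:Publish resource in the attached policy.
# The policy above only allows publishing to one topic:
ALLOWED_PUBLISH_TOPICS = {"home-devices/router"}

def can_publish(topic: str) -> bool:
    """Simplified check: is the topic listed as an iot:Publish resource?"""
    return topic in ALLOWED_PUBLISH_TOPICS

print(can_publish("home-devices/router"))  # True
print(can_publish("java-client"))          # False: only Subscribe/Receive cover it
```

As for the final question: no rule is needed just to publish and receive binary payloads over MQTT; rules only come into play for server-side routing such as republishing.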
2 answers · 0 votes · 0 views
MaheshRudraiah, asked a year ago

Problem with IoT Core converting double to int in JSON parser

I have a temperature sensor device that I am creating. Everything is working: I can publish to IoT Core, and I have a rule set up to send the data to AWS Timestream. However, I find that the JSON parser in AWS IoT Core converts floats/doubles into ints IF they end in .0. So a value sent as 30.0 gets converted to 30. This happens whether I send the JSON object from the device or from the MQTT test client in the AWS IoT console. Here is my example object:

```json
{"location":"synders", "type": "weather", "id": "synders_5cE89C", "temperature": 23.0, "humidity": 35.0, "rain": 0.0, "groundtemperature": 0.0, "groundmoisture": 0}
```

Here is what is received by the test client:

```json
{
  "location": "synders",
  "type": "weather",
  "id": "synders_5cE89C",
  "temperature": 23,
  "humidity": 35,
  "rain": 0,
  "groundtemperature": 0,
  "groundmoisture": 0
}
```

If I change the data to use anything other than .0, it works as expected:

```json
{"location":"synders", "type": "weather", "id": "synders_5cE89C", "temperature": 23.1, "humidity": 35.1, "rain": 0.1, "groundtemperature": 0.1, "groundmoisture": 0}
```

becomes, in the test client:

```json
{
  "location": "synders",
  "type": "weather",
  "id": "synders_5cE89C",
  "temperature": 23.1,
  "humidity": 35.1,
  "rain": 0.1,
  "groundtemperature": 0.1,
  "groundmoisture": 0
}
```

This is a problem because Timestream infers that the values are bigints instead of doubles, since the decimal point is missing. This causes an error, and the data is not recorded. It is not necessary to publish data from the device to see the issue; I've reproduced it using only the AWS IoT MQTT test client. I have to think the solution is quite simple and that I missed something in the docs or a setting somewhere. Suggestions, anyone? Help. Mark

Edited by: MarkBuckaway on Feb 4, 2021 8:19 AM
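One workaround sometimes suggested for this situation (a sketch under assumptions, not a confirmed fix: the topic name is a placeholder, and the exact cast behavior should be verified against the AWS IoT SQL reference) is to force the type in the rule SQL with `cast(... AS decimal)`, so the Timestream action always receives a decimal value regardless of how the JSON parser normalized `23.0`:

```sql
-- Hypothetical rule SQL: cast each numeric field so Timestream infers DOUBLE.
-- 'yourTopic' is a placeholder for the actual topic used by the device.
SELECT cast(temperature AS decimal) AS temperature,
       cast(humidity AS decimal) AS humidity,
       cast(rain AS decimal) AS rain,
       cast(groundtemperature AS decimal) AS groundtemperature,
       groundmoisture, location, type, id
FROM 'yourTopic'
```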
2 answers · 0 votes · 0 views
MarkBuckaway, asked a year ago

JITP cert not created with mbedTLS+ATECC608A (works with mosquitto_pub)

Hello, I have the following setup:

- ATECC608A
- mbedTLS
- coreMQTT

The certificate chain is: RootCA > SignerCA > DeviceCert. I've registered both the RootCA and the SignerCA as CAs in the AWS IoT console. When connecting to my ATS endpoint with that stack, the TLS handshake is successful: the device cert and SignerCA are presented, and AWS presents its cert chain as well. mbedTLS seems to be happy. I then use the created mbedtls_ssl_context to connect coreMQTT. From the log, coreMQTT is able to write on the socket, but AWS closes the connection.

I expect the first connection to fail. But in this case, the certificate does not appear in the AWS IoT console, and subsequent connection attempts fail as well. I double-checked the stored SignerCA and its policy; they seem fine.

To ensure that this config is correct, I manually created a certificate on my machine and signed it with my SignerCA. I then used that certificate with mosquitto_pub. The process works, and my certificate appears in the AWS IoT console with the correct policy attached.

Another verification I've done is to check that my coreMQTT connection is correct. To connect without JITP provisioning, I extracted the device certificate from the ATECC, manually uploaded its PEM, and attached a policy to the device in the AWS console. That MQTT connection was successful (and I see the MQTT.Connect event in the AWS logs).

Questions:

- mbedTLS seems to present the two concatenated certificates. Would the handshake be successful if one of them was not correct, or if mbedTLS was misconfigured?
- Could the X509v3 extensions be responsible for the JITP failure? The device cert has them, while the manually generated one does not.
- Is there a way to log mutual authentication failures in CloudWatch?

Is there anything that I missed? I could not attach any logs; I get the validation error "Your post contains inappropriate content. Please review and adjust before posting." when I try to include them. The full logs are available on the twin SO question: https://stackoverflow.com/questions/65735301/jitp-cert-not-created-with-mbedtlsatecc608a-works-with-moquitto-pub Thanks!
3 answers · 0 votes · 0 views
fstephany, asked a year ago

Issue while using local IPC for messaging between two local components

```python
publish("test/test", message={"a": 1})

def publish(self, topic, message, message_format="json"):
    """Publishes message to a topic of AWS IoT

    Args:
        topic (string): AWS IoT Core topic
        message (dict / string): Message to be published
        message_format
    """
    request = PublishToTopicRequest()
    request.topic_name = topic
    publish_message = PublishMessage()
    if message_format != "json":
        publish_message.binary_message = BinaryMessage(message=bytes(message, "utf-8"))
        # publish_message.binary_message.message = bytes(message, "utf-8")
    else:
        publish_message.json_message = JsonMessage(message=message)
        # publish_message.json_message.message = message
    request.publish_message = publish_message
    operation = self.ipc_client.new_publish_to_topic()
    operation.activate(request)
    future = operation.get_response()
    future.result(TIMEOUT)
```

**accessControl permissions are:**

```yaml
com.test.test:pubsub:1:
  policyDescription: "Allows communication with other components"
  operations:
    - "aws.greengrass#PublishToTopic"
  resources:
    - "test/test"
```

**and greengrass.log shows:**

```
2021-01-06T10:30:13.515Z [INFO] (Thread-6) software.amazon.awssdk.eventstreamrpc.RpcServer: New connection code [AWS_ERROR_SUCCESS] for [Id 816, Class ServerConnection, Refs 1](2021-01-06T10:30:13.515Z) - <null>. {}
2021-01-06T10:30:13.515Z [INFO] (Thread-6) software.amazon.awssdk.eventstreamrpc.ServiceOperationMappingContinuationHandler: aws.greengrass#GreengrassCoreIPC authenticated identity: com.test.test. {}
2021-01-06T10:30:13.515Z [INFO] (Thread-6) software.amazon.awssdk.eventstreamrpc.ServiceOperationMappingContinuationHandler: Connection accepted for com.test.test. {}
2021-01-06T10:30:13.515Z [INFO] (Thread-6) software.amazon.awssdk.eventstreamrpc.ServiceOperationMappingContinuationHandler: Sending connect response for com.test.test. {}
2021-01-06T10:30:13.517Z [INFO] (Thread-6) software.amazon.awssdk.eventstreamrpc.RpcServer: New connection code [AWS_ERROR_SUCCESS] for [Id 818, Class ServerConnection, Refs 1](2021-01-06T10:30:13.517Z) - <null>. {}
2021-01-06T10:30:13.517Z [INFO] (Thread-6) software.amazon.awssdk.eventstreamrpc.ServiceOperationMappingContinuationHandler: aws.greengrass#GreengrassCoreIPC authenticated identity: com.test.test {}
2021-01-06T10:30:13.517Z [INFO] (Thread-6) software.amazon.awssdk.eventstreamrpc.ServiceOperationMappingContinuationHandler: Connection accepted for com.test.test. {}
2021-01-06T10:30:13.517Z [INFO] (Thread-6) software.amazon.awssdk.eventstreamrpc.ServiceOperationMappingContinuationHandler: Sending connect response for com.test.test. {}
2021-01-06T10:30:13.518Z [ERROR] (Thread-6) com.aws.greengrass.ipc.common.ExceptionUtil: Unhandled exception in IPC. {} java.lang.NullPointerException
```

Edited by: allenkallz on Jan 6, 2021 3:21 AM
4 answers · 0 votes · 1 view
allenkallz, asked a year ago

Publish to IoT Core from an imported Lambda using Greengrass v2

Hi, I am having trouble sending a message to IoT Core using an imported Lambda. Here are the steps that I follow:

- I set up the Greengrass Core SDK on my Raspberry Pi, and I am able to see the group and core device on the Greengrass console.
- I created a Lambda that uses the awsiot SDK and publishes a message using new_publish_to_iot_core().
- I created a component on the AWS Greengrass console. In this component I chose the "Import Lambda Function" option and selected my previously created Lambda. In the "Lambda function configuration" section I entered the "hello/world" topic for "AWS IoT Core MQTT" as an event source, and chose the "No Container" option.
- I created a deployment to deploy the component to my Pi.

After the deployment succeeds to the core device, I get this error in greengrass.log:

```
com.aws.greengrass.builtin.services.mqttproxy.MqttProxyIPCAgent: Not Authorized. {error=Principal Greengrass_helloworld_version2 is not authorized to perform aws.greengrass.ipc.mqttproxy:aws.greengrass#PublishToIoTCore on resource hello/world}
```

and this error in Greengrass_helloworld_version2.log:

```
Greengrass_helloworld_version2: lambda_function.py:40,Failed to publish message : UnauthorizedError(message='Not Authorized'). {serviceInstance=0, serviceName=Greengrass_helloworld_version2, currentState=RUNNING}
```

I attached my Lambda code and the recipe that was created automatically. I guess I should insert the following configuration into the recipe, but since it is an imported Lambda component, it does not allow me to change it:

```json
{
  "ComponentConfiguration": {
    "DefaultConfiguration": {
      "accessControl": {
        "aws.greengrass.ipc.mqttproxy": {
          "Greengrass_helloworld_version2:pubsub:1": {
            "policyDescription": "Allows access to publish to hello/world.",
            "operations": [
              "aws.greengrass#PublishToIoTCore"
            ],
            "resources": [
              "hello/world"
            ]
          }
        }
      }
    }
  }
}
```

I am quite frustrated with this new version of Greengrass, since the documentation lacks information at the moment. What do I need to change? Please help!
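Although the auto-generated recipe of an imported Lambda component cannot be edited directly, Greengrass v2 deployments support merging component configuration at deployment time (the "configuration update" step when revising a deployment). A hedged sketch of the merge document, reusing the accessControl block from the question (the exact nesting should be checked against the component configuration documentation; only the part under `DefaultConfiguration` is merged):

```json
{
  "accessControl": {
    "aws.greengrass.ipc.mqttproxy": {
      "Greengrass_helloworld_version2:pubsub:1": {
        "policyDescription": "Allows access to publish to hello/world.",
        "operations": ["aws.greengrass#PublishToIoTCore"],
        "resources": ["hello/world"]
      }
    }
  }
}
```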
6 answers · 0 votes · 0 views
savcuoglu, asked a year ago

[S3] Kinesis, File Gateway, or direct S3 writing?

Hi, I have a customer who wants to write solar power generators' sensor data to S3. The data stream happens mostly during the daytime, with almost no data at night. It will likely be about 1 MB/second during the day, and may grow to 5 MB/s or more depending on how many solar panels are in the deployed generator area. There may be network outages from time to time, since solar power generators are usually placed in mountain areas. They want to save the sensor data to S3, since it is all read-only data there. They will also use SageMaker for a complex machine-learning process. The ML process plus weather information will eventually predict how much power will be generated for the next month, after the power-generation commitment is made. There is no control data going back to the edge side, so I filtered IoT Core out of the data-ingestion consideration. There was a similar previous project in Korea using IoT Core, but it had trouble streaming data to the cloud and found Kinesis to be the better approach. However, in a later stage, when there will be control data going back to the edge side, Greengrass or IoT Core will be considered for non-stream data. The customer and I would like to know which of the following (or some new method) would be the best approach:

- Directly writing to S3 using the CLI (or another method) would be worthwhile, since writing directly to S3 is free. I have never observed any project or architecture diagram writing to S3 directly, so I answered the customer that this is unlikely, but they ask why, which I do not know at this moment.
- Writing to S3 using Kinesis Data Streams and turning the stream shards off at night. Currently this is my best bet, but I would like to know your opinion.
- Using AWS File Gateway to write to S3. But I think this is not worthwhile, since the local gateway does not need to access the cached files; it's just one way to S3 from the sensors.

Could you please share your opinion? Thank you!
1 answer · 0 votes · 1 view
AWS-User-6598922, asked a year ago

AWS embedded C SDK - Cannot connect

Hi, I have an ESP32 board which connects successfully to AWS IoT, publishes messages, and receives messages from subscribed topics, using Amazon certificates. I cannot connect using the AWS embedded C SDK with the same certificates. Every connection attempt using the certificates ends up this way:

```
[INFO] [DEMO] [mqtt_demo_mutual_auth.c:584] Establishing a TLS session to XXXXXXXXXXXXXXXXX.iot.eu-west-1.amazonaws.com:8883.
[INFO] [DEMO] [mqtt_demo_mutual_auth.c:1264] Creating an MQTT connection to XXXXXXXXXXXXXXXXX.iot.eu-west-1.amazonaws.com.
[ERROR] [Transport_OpenSSL_Sockets] [openssl_posix.c:696] Failed to receive data over network: SSL_read failed: ErrorStatus=EVP lib.
..................... (huge numbers of lines like the one above) .....................
[ERROR] [Transport_OpenSSL_Sockets] [openssl_posix.c:696] Failed to receive data over network: SSL_read failed: ErrorStatus=EVP lib.
[ERROR] [MQTT] [core_mqtt.c:1531] CONNACK recv failed with status = MQTTNoDataAvailable.
[ERROR] [MQTT] [core_mqtt.c:1802] MQTT connection failed with status = MQTTNoDataAvailable.
[ERROR] [SHADOW_DEMO] [shadow_demo_helpers.c:604] Connection with MQTT broker failed with status 7.
[ERROR] [SHADOW] [shadow_demo_main.c:528] Failed to connect to MQTT broker.
```

Any help is appreciated. Regards, Pavel
2 answers · 0 votes · 0 views
ptonev, asked a year ago

Job executes successfully, but still marked as "failed". Why?

I'm using the aws-iot-sdk-js "jobs-agent.js" to "install packages" (but really just to put files and execute commands). The following works just fine. (Note: I have a cron job writing "[]" to "installedPackages.json" every minute to prevent all the package-management stuff from happening, hence "doesntmatter". Yes, I know it's hacky.)

```json
{
  "operation": "install",
  "packageName": "doesntmatter",
  "workingDirectory": "/home/pi/tamgbin",
  "launchCommand": "chmod 755 /home/pi/tamgbin/sessionValidator && chmod 755 /home/pi/tamgbin/tagGenerator && rm /home/pi/tamgbin/*.old",
  "autoStart": "false",
  "files": [
    {
      "fileName": "sessionValidator",
      "fileVersion": "0.0.0.1",
      "fileSource": { "url": "https://mybucketname.s3.amazonaws.com/sessionValidator" }
    },
    {
      "fileName": "tagGenerator",
      "fileVersion": "0.0.0.1",
      "fileSource": { "url": "https://mybucketname.s3.amazonaws.com/tagGenerator" }
    },
    {
      "fileName": "tamgSessionValidator.jar",
      "fileVersion": "0.0.0.1",
      "fileSource": { "url": "https://mybucketname.s3.amazonaws.com/tamgSessionValidator.jar" }
    },
    {
      "fileName": "tamgTagGenerator.jar",
      "fileVersion": "0.0.0.1",
      "fileSource": { "url": "https://mybucketname.s3.amazonaws.com/tamgTagGenerator.jar" }
    }
  ]
}
```

My files get placed and the launchCommand executes perfectly. The only problem is that in the IoT console the job is listed as "failed" and "completed". Why? The console doesn't give any useful information about WHY it failed.
1
answers
0
votes
0
views
Cyrus
asked a year ago

Unknown Host Error

I'm working on a proof-of-concept project for a customer. It will connect to AWS IoT to send device logs to S3. To test that the device can handle the activity, I'm starting with the AWS embedded C SDK. My workflow is to build the code for an x86-64 Linux host for testing and debug. Then, I move the code to another workspace where I cross-compile it for the embedded device, a 32-bit ARM-based Linux device. Up to yesterday, the subscribe-publish sample was working successfully on both my development system and the target device. Yesterday, I created a new thing in a different region from the original (moved from us-east-2 to us-east-1) so I could access some of the customer's AWS Lambda rules. I updated my device certificates and keys as well as the endpoint in the sample program header file. Since the move, I'm unable to connect to the AWS IoT service in either us-east-1 or us-east-2. When I run the sample code, I get the following output (name and id redacted):

```
DEBUG: iot_tls_connect L#161 . Connecting to arn:aws:iot:us-east-1:********3071:thing/**************/8883...
ERROR: iot_tls_connect L#164 failed ! mbedtls_net_connect returned -0x52
ERROR: main L#190 Error(-23) connecting to arn:aws:iot:us-east-1:********3071:thing/***************:8883
```

As part of my debug effort, I deleted all my things and certificates and created them from scratch this morning. I'm at a bit of a loss as to what to do next. The device, certificate, and attached policy show up in the console. Just to check the keys and strings, I put them in the Python AWS sample program and had a failure to connect as well. Not sure what I'm missing. Need some guidance. Thanks. Edited by: GaryKB on Aug 6, 2020 10:35 AM
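One thing worth noticing in the debug output above: mbedtls_net_connect expects a DNS hostname, but the log shows a thing ARN (`arn:aws:iot:...`) in the host position. The data endpoint hostname normally comes from `aws iot describe-endpoint --endpoint-type iot:Data-ATS`. A minimal sketch of a pre-flight sanity check (the helper name and example values are invented for illustration):

```python
# Hypothetical sanity check: flag a value that looks like an ARN rather than
# an AWS IoT data endpoint hostname before attempting a TLS connection.
def looks_like_iot_endpoint(host: str) -> bool:
    """True for a plausible AWS IoT data endpoint hostname, False otherwise."""
    if host.startswith("arn:"):
        return False  # an ARN is not a resolvable hostname
    return host.endswith(".amazonaws.com") and ".iot." in host

print(looks_like_iot_endpoint(
    "arn:aws:iot:us-east-1:123456789012:thing/MyThing"))  # False: ARN, not a host
print(looks_like_iot_endpoint(
    "abcd1234-ats.iot.us-east-1.amazonaws.com"))          # True: endpoint-shaped
```

This is only a shape check; it does not verify that the endpoint actually exists or that the region is correct.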
2
answers
0
votes
0
views
GaryKB
asked a year ago

AWS IoT Policy - using * in combination with a text string or variable

When using * in combination with a variable or text string, the resource does not work as expected. I use an Allow effect for action iot:Connect on the resource
```
"arn:aws:iot:*:*:client/${iot:Connection.Thing.ThingTypeName}-*"
```
I'm expecting to be able to connect using a client ID based on the thing name and a string separated by a dash, e.g. "MyThingName-client1" or "MyThingName-abc", given that the thing name is "MyThingName". The behaviour I experience is that I cannot connect using the wildcard in combination with a variable or a string. A full reproducing example is shown below. I can connect using only the thing name, but not using the thing name, a dash, and any string.
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iot:Connect"],
      "Resource": [
        "arn:aws:iot:*:*:client/${iot:Connection.Thing.ThingTypeName}",
        "arn:aws:iot:*:*:client/${iot:Connection.Thing.ThingTypeName}-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["iot:Receive"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["iot:Publish"],
      "Resource": [
        "arn:aws:iot:*:*:topic/${iot:Connection.Thing.ThingTypeName}/input/${iot:Connection.Thing.ThingName}",
        "arn:aws:iot:*:*:topic/${iot:Connection.Thing.ThingTypeName}/event/${iot:Connection.Thing.ThingName}"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["iot:Subscribe"],
      "Resource": [
        "arn:aws:iot:*:*:topic/${iot:Connection.Thing.ThingTypeName}/output/${iot:Connection.Thing.ThingName}"
      ]
    }
  ]
}
```
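For illustration only, here is a sketch of what the Connect statement is expected to permit, assuming `${iot:Connection.Thing.ThingTypeName}` resolves to "MyThingName": substitute the variable into the two resource patterns and match candidate client IDs against them the way a wildcard would. (AWS evaluates policies server-side; this merely mirrors the pattern logic, and the names are taken from the example above.)

```python
# Sketch: mirror the wildcard matching of the two client resource patterns
# "<type>" and "<type>-*" against candidate client IDs.
from fnmatch import fnmatchcase

def allowed_client_ids(thing_type: str, client_ids: list) -> list:
    """Return the client IDs that the substituted patterns would match."""
    patterns = [thing_type, thing_type + "-*"]
    return [cid for cid in client_ids
            if any(fnmatchcase(cid, p) for p in patterns)]

print(allowed_client_ids(
    "MyThingName",
    ["MyThingName", "MyThingName-client1", "MyThingName-abc", "OtherThing"]))
```

If the real connections behave differently from this local matching, the suspect is the variable substitution step rather than the wildcard itself.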
2
answers
0
votes
0
views
savnik
asked 2 years ago

Publish to IoT Endpoint of other Account - IoTData SDK NodeJS Lambda

Hello, I have two accounts running with IoT endpoints enabled. My devices connect to the IoT endpoint on Account A. In Account A I also have a Lambda function running that receives the messages through an IoT rule. The Lambda determines whether the message has to be forwarded to Account B or not. Since I have other services depending on IoT messages in Account B as well, I just want to republish the MQTT message to the IoT endpoint of Account B. Lambda is configured with NodeJS 12.x. Let's leave the IAM stuff aside. I have some roles/policies set up for that already, but I do not even get far enough to test them. I use the following snippet in the Lambda of Account A to send the message to the IoT endpoint of Account B. ``` let iotData = new AWS.IotData({ endpoint: '<endpointOfAccountB>-ats.iot.eu-west-1.amazonaws.com' }); let topic_params = { topic: "my/topic/on/other/account", payload: JSON.stringify(payload), qos: 1 }; iotData.publish(topic_params).promise(); ``` It seems like the endpoint attribute is just ignored, because the message gets published on Account A instead. I also tried without _-ats_ (result of CLI: _aws iot describe-endpoint_ ): _<endpointOfAccountB>.iot.eu-west-1.amazonaws.com_ -> same result. When I print _iotData.endpoint_ the correct endpoint is configured. Is there a way to publish an MQTT message from Account A to Account B with the IotData SDK? Am I missing something? Feel free to ask for more details if necessary. Best Regards, Julian
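As a language-neutral sketch of the routing step described above (decide whether a message arriving on Account A should be republished to Account B, and assemble the publish parameters), here is a small Python illustration. The topic-prefix convention and the prefixes themselves are invented for this sketch; they are not part of the question's code.

```python
# Hypothetical routing logic: forward only topics under assumed shared
# prefixes, and build parameters in the shape the publish call above expects.
import json

FORWARD_PREFIXES = ("shared/", "fleet/")  # assumed convention, for illustration

def should_forward(topic: str) -> bool:
    """Forward only topics under the (hypothetical) shared prefixes."""
    return topic.startswith(FORWARD_PREFIXES)

def build_publish_params(topic: str, payload: dict) -> dict:
    """Topic, JSON-serialized payload, and QoS, mirroring the snippet above."""
    return {"topic": topic, "payload": json.dumps(payload), "qos": 1}

params = None
if should_forward("shared/device1/telemetry"):
    params = build_publish_params("shared/device1/telemetry", {"temp": 21})
    print(params["topic"])  # shared/device1/telemetry
```

The cross-account endpoint question itself is separate from this routing logic; the sketch only shows the decision and parameter-building steps.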
2
answers
0
votes
0
views
julianDev
asked 2 years ago

AWS IoT DynamoDB rule not able to read value SELECTed from topic() function

I'm trying to get data from an AWS IoT MQTT topic into DynamoDB using a rule. An example topic is cooler/cooler42/sensors and an example message: ``` { "waterTemp": 10, "timestamp": 1580370731383 } ``` I've defined the query like so, to extract the deviceName (e.g. cooler42) from the topic and insert it into the JSON: ``` SELECT *, topic(2) AS deviceName FROM 'cooler/+/sensors' ``` This does indeed seem to work, as if I republish the message to another topic I now see the same JSON with deviceName added: ``` { "waterTemp": 10, "timestamp": 1580370731383, "deviceName": "cooler42" } ``` My understanding is that all 3 fields should now be available for use within my DynamoDB rule like so: https://i.stack.imgur.com/gO9SG.png However I can see from CloudWatch that the rule is failing with error One or more parameter values were invalid: An AttributeValue may not contain an empty string (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException and the partition key (aka hash key) is coming through as empty: ``` { "ItemRangeKeyValue": "1580370731383", "IsPayloadJSON": "true", "ItemHashKeyField": "deviceName", "Operation": "Insert", "ItemRangeKeyField": "timestamp", "Table": "SensorDataTest2", "ItemHashKeyValue": "" <--- Empty } ``` Am I not able to use the deviceName I've just SELECTed from the topic name in the rule? If not, is there another way to extract it? NB: If I manually publish a message onto the topic already including the deviceName then it does work fine, but I'm working in a constrained environment and don't want the extra payload size. NB: I've also posted this question to SO: <https://stackoverflow.com/questions/59982125/aws-iot-dynamodb-rule-not-able-to-read-value-selected-from-topic-function>
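For reference, the `topic(2)` call in the SELECT above extracts the second segment of the publish topic: IoT SQL's `topic(n)` returns the nth topic segment, counted from 1. A minimal sketch mimicking that extraction locally (the helper name is invented):

```python
# Sketch: mimic AWS IoT SQL's topic(n) function, which returns the
# 1-based nth segment of the MQTT topic the message arrived on.
def topic_segment(topic: str, n: int) -> str:
    """Return segment n (1-based) of a slash-delimited MQTT topic."""
    return topic.split("/")[n - 1]

msg = {"waterTemp": 10, "timestamp": 1580370731383}
msg["deviceName"] = topic_segment("cooler/cooler42/sensors", 2)
print(msg["deviceName"])  # cooler42
```

This matches the republished JSON shown above, which is why the empty hash key in the DynamoDB action is surprising.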
1
answers
0
votes
1
views
TaranS
asked 2 years ago

Protect devices from becoming bricks

We need to be 100% sure that our devices in the field can continue to connect to AWS IoT Core for the foreseeable future (30 years minimum). These devices will be deployed in consumers' homes. They must be maintenance free, even though it is possible to automate maintenance (if any, such as certificate rotation) if we know those maintenance tasks ahead of time, before the devices are manufactured and sold. We of course will plan software/firmware upgrades on the devices. We plan to use the Just In Time Provisioning methodology (https://aws.amazon.com/blogs/iot/setting-up-just-in-time-provisioning-with-aws-iot-core/) to register our devices. The following certificates and keys are used during the JITP process:

1. Root CA created by us once (used to create the custom CA certificate and device certificates)
2. Our custom CA certificate, generated and registered with AWS IoT once
3. Device certificates created during Just in Time provisioning while manufacturing (using our Root CA in step 1)
4. Device private key generated during manufacturing
5. AWS Root CA downloaded from AWS and installed on the device during manufacturing
6. Per-device certificate issued and attached to the device on AWS IoT

One concern we have is that our devices become bricks in customer premises when any of these certificates expire and the devices are unable to connect to AWS IoT. We may not be able to remotely bring them online one by one, as there may be tens of thousands of these devices in the field. We need these devices to work flawlessly for a minimum of 30 years without manual intervention. The AWS IoT developer guide has warnings such as this sprinkled all over the guide: _"Device and root CA certificates are subject to expiration or revocation. If your certificates expire or are revoked, you must copy a new CA certificate or private key and device certificate onto your device."_ This is too generic. There are no step by step instructions on what to do.
What can we do now (before the devices are manufactured and sold to end users) to make sure we are not surprised? I have Googled and have been unable to find an authoritative guide on this issue. This must be a common concern for many. The problem here is that we can't afford even a 1% chance of being surprised by something we haven't considered, so making educated guesses based on limited understanding is not an option. If we are not 100% sure, we will be better off not using the solution. How do others approach this problem? Any pointers and advice are very helpful. Thanks in advance for your help. Edited by: pkongara on Jan 23, 2020 8:29 AM Edited by: DripDeveloper on Jan 23, 2020 8:42 AM
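One part of the concern above, certificate expiry, is at least mechanically checkable: a fleet-management job can compare each certificate's notAfter date against a rotation margin and trigger re-provisioning long before expiry. A hedged sketch with placeholder dates (the helper names and the one-year margin are invented for illustration):

```python
# Sketch: flag certificates for rotation well before their notAfter date.
# Dates and the margin are placeholders, not a recommendation.
from datetime import datetime, timezone

def days_until_expiry(not_after: datetime, now: datetime) -> int:
    """Whole days remaining before the certificate's notAfter."""
    return (not_after - now).days

def needs_rotation(not_after: datetime, now: datetime,
                   margin_days: int = 365) -> bool:
    """True once fewer than margin_days remain on the certificate."""
    return days_until_expiry(not_after, now) < margin_days

not_after = datetime(2031, 1, 1, tzinfo=timezone.utc)  # placeholder expiry
now = datetime(2030, 6, 1, tzinfo=timezone.utc)        # placeholder "today"
print(days_until_expiry(not_after, now))  # 214
print(needs_rotation(not_after, now))     # True
```

This only addresses scheduled rotation; CA revocation and the 30-year endpoint-availability question are separate concerns.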
4
answers
0
votes
1
views
DripDeveloper
asked 2 years ago

Greengrass IDT failing [Error: 126] Failed to find libc on your device

I have a Linux (armv7l) device which I am running the GGC service on, and I am running into an error when executing the IDT: Reason: Error: 126 DependenciesNotPresentError: The following dependencies do not exist: Failed to find libc on your device. Running this test also deletes the users and groups (ggc_user and ggc_group) upon failure. I am running the test as root user. The Greengrass Dependency Checker reports that all of the required dependencies are met. Here is the config file:

```
[
  {
    "id": "TP",
    "sku": "sku1234",
    "features": [
      { "name": "os", "value": "linux" },
      { "name": "arch", "value": "armv7l" }
    ],
    "kernelConfigLocation": "",
    "greengrassLocation": "",
    "devices": [
      {
        "id": "TP_DEV_Group_core",
        "connectivity": {
          "protocol": "ssh",
          "ip": "10.6.10.145",
          "auth": {
            "method": "password",
            "credentials": {
              "user": "root",
              "password": "<password>"
            }
          }
        }
      }
    ]
  }
]
```

```
========== Test Summary ==========
Execution Time:   31s
Tests Completed:  6
Tests Passed:     5
Tests Failed:     1
Tests Skipped:    0
----------------------------------
Test Groups:
  ggcdependencies: FAILED
  version: PASSED
----------------------------------
Failed Tests:
  Group Name: ggcdependencies
  Test Name: Test System Configs Dependencies system_configs_check
  Reason: [Error: 126] DependenciesNotPresentError: The following dependencies do not exist: Failed to find libc on your device. Please refer to https://docs.aws.amazon.com/greengrass/latest/developerguide/gg-gs.html for more information regarding required dependencies for Greengrass. Refer to the logs and troubleshooting section of the IDT User Guide https://docs.aws.amazon.com/greengrass/latest/developerguide/device-tester-for-greengrass-ug.html for more information.
----------------------------------
Path to AWS IoT Device Tester Report: /Users/jessecox/Dropbox/Greengrass_Development/idt/devicetester_greengrass_mac/results/f35e71a2-1c78-11ea-9e9c-a860b6004763/awsiotdevicetester_report.xml
Path to Test Execution Logs: /Users/jessecox/Dropbox/Greengrass_Development/idt/devicetester_greengrass_mac/results/f35e71a2-1c78-11ea-9e9c-a860b6004763/logs
Path to Aggregated JUnit Report: /Users/jessecox/Dropbox/Greengrass_Development/idt/devicetester_greengrass_mac/results/f35e71a2-1c78-11ea-9e9c-a860b6004763/GGQ_Report.xml
```

Any advice on this would be appreciated. Edited by: JesseCoxPDX on Dec 11, 2019 4:57 PM
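Since the failure above is specifically about locating libc, one quick local cross-check (assuming the target has a working Python interpreter) is to ask the platform's loader machinery what it resolves for "c". If this returns None on the device, IDT's probe may be failing for a real reason (e.g. a musl-based system or a stale ldconfig cache) rather than a test bug. This is a diagnostic sketch, not part of IDT:

```python
# Sketch: ask Python's ctypes machinery to locate the C library, the same
# resource IDT reports as missing. On glibc systems this typically prints
# something like "libc.so.6"; None means the loader could not resolve it.
from ctypes.util import find_library

libc_name = find_library("c")
print(libc_name)
```

`find_library` only reports a name; it does not prove Greengrass can dlopen the library, so treat a non-None result as necessary but not sufficient.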
2
answers
0
votes
0
views
JesseCoxPDX
asked 2 years ago

Greengrass core: unable to listen on address localhost 8000

I'm trying to run AWS Greengrass Core on a new armv7l device. Everything installs correctly. config.conf is edited so "useSystemd" is "no". It looks like GGC crashes because another process is already using port 8000. Here's the output of the command **sudo greengrassd start**: ``` Setting up greengrass daemon Validating hardlink/softlink protection Waiting for up to 1m10s for Daemon to start listen tcp 127.0.0.1:8000: bind: address already in use runtime failed to start: unable to listen on address: localhost:8000 amazonaws.com/iot/greengrass/ipc.(*Service).Serve /opt/src/src/amazonaws.com/iot/greengrass/ipc/server.go:75 main.main.func3 /opt/src/src/amazonaws.com/iot/greengrass/daemon/daemon.go:289 runtime.goexit /usr/local/go/lib/src/runtime/asm_arm.s:1015 unable to start server main.main.func3 /opt/src/src/amazonaws.com/iot/greengrass/daemon/daemon.go:291 runtime.goexit /usr/local/go/lib/src/runtime/asm_arm.s:1015 The Greengrass daemon process with [pid = 3027] died ``` Here's relevant output from the command **sudo netstat -tulpn**: ``` Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 584/python3 [the rest deleted] tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 585/python3 ``` Python3 version is 3.5.3. For various reasons, changing the Python port from 8000 to something else is not an option. Can I change the Greengrass port from 8000 to something else? Is there another solution I'm missing? One additional piece of information: when I kill the two python3 process, Greengrass starts successfully. The "port already in use" error is the only obstacle I've encountered. Edited by: RayFW on Aug 6, 2019 8:52 AM Typo Edited by: RayFW on Aug 6, 2019 9:59 AM GGC successfully started
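The netstat output above already identifies the owner of port 8000, but for scripted pre-flight checks the same condition (only one process can bind 127.0.0.1:8000) can be probed directly. A small sketch, with the helper name invented for illustration:

```python
# Sketch: detect a port conflict like the one Greengrass hits, by trying to
# bind the address ourselves; EADDRINUSE (an OSError) means it is taken.
import socket

def port_in_use(host: str, port: int) -> bool:
    """True if binding (host, port) fails because something owns it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            return True
        return False

# Demonstrate by occupying an ephemeral port ourselves, then probing it.
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))
holder.listen(1)
busy_port = holder.getsockname()[1]
result = port_in_use("127.0.0.1", busy_port)
print(result)  # True: the holder socket owns the port
holder.close()
```

Whether the Greengrass IPC port itself is configurable is a separate question; this only automates the "address already in use" diagnosis.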
1
answers
0
votes
0
views
RayFW
asked 2 years ago