
IoT Protobuf support SQL problems


I'm thrilled that AWS IoT now supports Protobuf decoding within Rules - it's going to save us a bunch of expensive Lambda function calls to decode simple protobuf messages. However, I'm stuck trying to understand the implementation.

I've followed the instructions to a T, but am stuck on the SQL query inside the IoT rule. Unfortunately it's not very well documented, including what appears to be a missing ' in the example:

SELECT VALUE decode(encode(*, 'base64'), 'proto', '<BUCKET NAME>, '<FILENAME>.desc', '<PROTO_FILENAME>', 'Person') FROM 'test/proto'

Shouldn't it be '<BUCKET NAME>' with the closing single quote?

Anyway, if my incoming payload looks like this:

{
  "PayloadData": "CAEQZBj0HA==", 
  "foo":"bar"
}

What is the correct SQL query to decode the payload value (which is a valid Base64-encoded Protobuf message)?
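(As a sanity check, the PayloadData value above can be decoded outside of IoT with a few lines of plain Python - no AWS involved, and no schema needed, since this sample message happens to contain only varint fields. The field numbers and values below come straight from the wire format, nothing AWS-specific.)

```python
import base64

def parse_varint_fields(data: bytes) -> dict:
    """Parse top-level fields from protobuf wire-format bytes.
    Sketch only: it handles wire type 0 (varint), which is all
    this sample payload contains."""
    fields = {}
    i = 0
    while i < len(data):
        field_no, wire_type = data[i] >> 3, data[i] & 0x07
        assert wire_type == 0, "sketch only handles varint fields"
        i += 1
        value, shift = 0, 0
        while True:
            byte = data[i]
            i += 1
            value |= (byte & 0x7F) << shift
            shift += 7
            if not byte & 0x80:
                break
        fields[field_no] = value
    return fields

raw = base64.b64decode("CAEQZBj0HA==")
print(parse_varint_fields(raw))  # {1: 1, 2: 100, 3: 3700}
```

If that prints sensible field numbers and values, the payload itself is fine and the problem is in the rule or its permissions.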

I've tried:

SELECT VALUE decode(PayloadData, 'proto', 'example-proto-registry', 'abc123.desc', 'abc123', 'Uplink') FROM 'test/proto'

But that is not working as expected... Is this syntax correct? I'm not seeing any messages flowing to the republish topic that I defined in the rule, nor any errors flowing to the error action republish topic either.

Looking through the logs I do see two errors, but can't seem to overcome them. I've tried several versions of the query and several versions of the Role policy (including one with no restrictions at all, using * so it covers any S3 bucket).

Error 1: Seems to be an issue with the policy...

"reason": "Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: *****; S3 Extended Request ID: ******; Proxy: null)",
    "details": "decode(..., 'proto', 'example-proto-registry', 'abc123.desc', 'abc123', 'Uplink') failed while retrieving S3 object metadata"

Current policy for the Rule used to test Protobuf - you can see I've granted all access and it's still failing.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:PutMetricData",
                "firehose:PutRecord",
                "s3:GetObjectVersionTagging",
                "s3:GetStorageLensConfigurationTagging",
                "s3:GetObjectAcl",
                "s3:GetBucketObjectLockConfiguration",
                "s3:GetIntelligentTieringConfiguration",
                "s3:GetObjectVersionAcl",
                "s3:GetBucketPolicyStatus",
                "s3:GetObjectRetention",
                "s3:GetBucketWebsite",
                "dynamodb:PutItem",
                "s3:GetJobTagging",
                "s3:GetMultiRegionAccessPoint",
                "s3:GetObjectAttributes",
                "s3:GetObjectLegalHold",
                "s3:GetBucketNotification",
                "s3:DescribeMultiRegionAccessPointOperation",
                "s3:GetReplicationConfiguration",
                "s3:PutObject",
                "s3:GetObject",
                "kinesis:PutRecord",
                "s3:DescribeJob",
                "s3:GetAnalyticsConfiguration",
                "s3:GetObjectVersionForReplication",
                "s3:GetAccessPointForObjectLambda",
                "s3:GetStorageLensDashboard",
                "es:ESHttpPut",
                "s3:GetLifecycleConfiguration",
                "s3:GetAccessPoint",
                "s3:GetInventoryConfiguration",
                "s3:GetBucketTagging",
                "s3:GetAccessPointPolicyForObjectLambda",
                "s3:GetBucketLogging",
                "s3:GetAccelerateConfiguration",
                "s3:GetObjectVersionAttributes",
                "s3:GetBucketPolicy",
                "sqs:SendMessage*",
                "s3:GetEncryptionConfiguration",
                "s3:GetObjectVersionTorrent",
                "sns:Publish",
                "s3:GetBucketRequestPayment",
                "s3:GetAccessPointPolicyStatus",
                "s3:GetObjectTagging",
                "cloudwatch:SetAlarmState",
                "s3:GetMetricsConfiguration",
                "s3:GetBucketOwnershipControls",
                "iot:Publish",
                "s3:GetBucketPublicAccessBlock",
                "s3:GetMultiRegionAccessPointPolicyStatus",
                "s3:GetMultiRegionAccessPointPolicy",
                "s3:GetAccessPointPolicyStatusForObjectLambda",
                "s3:GetBucketVersioning",
                "s3:GetBucketAcl",
                "s3:GetAccessPointConfigurationForObjectLambda",
                "s3:GetObjectTorrent",
                "s3:GetMultiRegionAccessPointRoutes",
                "s3:GetStorageLensConfiguration",
                "s3:GetAccountPublicAccessBlock",
                "s3:GetBucketCORS",
                "s3:GetBucketLocation",
                "s3:GetAccessPointPolicy",
                "s3:GetObjectVersion"
            ],
            "Resource": "*"
        }
    ]
}

Error 2: Appears to be an issue with the syntax.

"details": "Function 'Decode' failed to execute for rule 'protocoder'. Invalid parameters. Expected decode(*, $scheme, $[scheme parameters])"

What am I doing wrong??

  • Adding more detail here. The docs state "Make sure you grant AWS IoT Core access to read the FileDescriptorSet from S3." but are not clear on where that policy needs to be set. On the Rule that is executing the query? Or in Manage > Security > Policies (at the IoT data plane level)?

    Additionally, using the example policy from the docs:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Statement1",
                "Effect": "Allow",
                "Principal": {
                    "Service": "iot.amazonaws.com"
                },
                "Action": "s3:Get*",
                "Resource": "arn:aws:s3:::example-proto-registry/abc123.desc"
            }
        ]
    }

    Returns an error saying: "Policy document is Malformed: Has prohibited field Principal".

    Would really appreciate some assistance on this!

asked 3 years ago · 684 views
1 Answer

In case anyone else experiences the same issues, here's what worked for me:

  1. Create your Protobuf definitions (.proto) and .desc files locally using the instructions
  2. Create a dedicated S3 bucket to store those files
  3. Create a Bucket Policy inside the S3 bucket (not in the Rule you will create in step 4) that gives IoT permission to access those files, listing each Resource object (.desc) explicitly. See the policy below.
  4. Create an IoT Rule with the correct SQL syntax. See the example below:

#4 IoT Rule SQL

If your protobuf payload is coming into IoT already Base64 encoded, for example: {"PayloadData":"CAMQKhiTFQ=="}

Use the following SQL, where PayloadData is the key containing your Base64 encoded Protobuf payload.

SELECT VALUE decode(PayloadData, 'proto', 'example-proto-registry', 'abc123.desc', 'abc123', 'Uplink') FROM 'test/proto'
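The scheme parameters are positional, which makes the statement easy to get wrong. A tiny helper (the function name is made up, for illustration only) that spells out the argument order - decoded field, scheme, S3 bucket, descriptor file key, proto file name, message type - and renders the statement:

```python
def protobuf_decode_sql(field: str, bucket: str, desc_key: str,
                        proto_name: str, message_type: str, topic: str) -> str:
    """Render an IoT Rule SQL statement for the 'proto' decode scheme.
    Positional order after the field and scheme: S3 bucket name,
    .desc object key, .proto file name (no extension), message type."""
    return (
        f"SELECT VALUE decode({field}, 'proto', '{bucket}', "
        f"'{desc_key}', '{proto_name}', '{message_type}') FROM '{topic}'"
    )

print(protobuf_decode_sql("PayloadData", "example-proto-registry",
                          "abc123.desc", "abc123", "Uplink", "test/proto"))
```

Swapping any two of those quoted arguments produces the "Invalid parameters. Expected decode(*, $scheme, $[scheme parameters])" error from the question.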

#3 Bucket Policy Example

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "Service": "iot.amazonaws.com"
            },
            "Action": "s3:Get*",
            "Resource": [
                "arn:aws:s3:::example-proto-registry/abc123.desc",
                "arn:aws:s3:::example-proto-registry/def456.desc"
            ]
        }
    ]
}
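Note that a JSON object cannot hold two Resource keys, so to grant access to several .desc files, Resource takes an array. A short sketch (the helper name is hypothetical) that generates the bucket policy for a list of descriptor objects:

```python
import json

def iot_proto_bucket_policy(bucket: str, desc_keys: list) -> str:
    """Render an S3 bucket policy that lets AWS IoT (iot.amazonaws.com)
    read the listed FileDescriptorSet (.desc) objects."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {"Service": "iot.amazonaws.com"},
            "Action": "s3:Get*",
            "Resource": [f"arn:aws:s3:::{bucket}/{key}" for key in desc_keys],
        }],
    }
    return json.dumps(policy, indent=4)

print(iot_proto_bucket_policy("example-proto-registry",
                              ["abc123.desc", "def456.desc"]))
```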
answered 3 years ago
