How to deploy N models on N Greengrass devices with a unique Lambda for inference logic?


Hi,

Let's consider an ML edge inference use case on Greengrass-managed devices. The model is unique to each device, but its architecture and invocation logic are the same for all devices. In other words, the invocation Lambda could be identical across devices; only the model parameters would need to change. We'd like to deploy a single inference Lambda to all devices and load a device-specific model artifact on each one.

Can this be achieved with Greengrass ML Inference? It seems that GG MLI requires each model to be associated with a specific Lambda.

Otherwise, is the recommended pattern to self-manage the inference in the Lambda, e.g. by loading a device-specific model from S3 based on a local config file or an environment variable?

1 Answer
Accepted Answer

In IoT Greengrass 1.x, the configuration is unique to each Greengrass Group. This includes Connectors, Lambdas, and ML Resources.

The same Lambda can be referenced by multiple groups as a Greengrass function, which is likely what you want. This is similar to how the GG ML connectors (Object Detection or Image Classification) work.
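As an illustrative sketch (the local model path, file name, and handler shape are assumptions, not from this thread), the shared inference Lambda can resolve the model from a fixed local path that every group's ML Resource maps to, so the code is byte-identical on all devices:

```python
# Hypothetical Greengrass v1 inference Lambda (Python).
# MODEL_DIR is the local destination path of the ML Resource; because
# every group uses the same local name and path, this code does not
# change per device -- only the artifact placed at that path does.
import os

MODEL_DIR = os.environ.get("MODEL_DIR", "/ml/model")  # assumed path

def locate_model(model_dir=MODEL_DIR):
    """Return the path of the device-specific model artifact under the shared local path."""
    # The artifact file name is an assumption; adapt it to your ML framework.
    return os.path.join(model_dir, "model.params")

def lambda_handler(event, context):
    model_path = locate_model()
    # ... load model_path with your framework and run inference on `event` ...
    return {"model_path": model_path}
```

The point of the fixed local path is that the per-device variability lives entirely in the ML Resource configuration, not in the Lambda code.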

In addition to your inference code, you'll also need to configure an ML Resource, which has a local name and a remote model. The local name would be the same for all Greengrass Groups, but in each group you would refer to a different remote object (the model) - either an S3 object or a SageMaker training job.
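A minimal sketch of that per-group ML Resource, following the Greengrass v1 CreateResourceDefinition request shape (boto3 `greengrass` client). The bucket name, key layout, and IDs below are placeholders, not values from this thread:

```python
# Build the ML Resource definition for one group: the local name and
# destination path are identical in every group; only the S3Uri (the
# device-specific model artifact) varies.

def ml_resource_definition(device_id, local_path="/ml/model"):
    """Resource definition payload for one Greengrass Group."""
    return {
        "Resources": [
            {
                "Id": "ml-model",
                "Name": "ml-model",  # same local name in every group
                "ResourceDataContainer": {
                    "S3MachineLearningModelResourceData": {
                        # Placeholder bucket/key; one model archive per device.
                        "S3Uri": f"s3://my-models-bucket/{device_id}/model.tar.gz",
                        "DestinationPath": local_path,
                    }
                },
            }
        ]
    }

# To apply it (requires AWS credentials):
# import boto3
# gg = boto3.client("greengrass")
# gg.create_resource_definition(InitialVersion=ml_resource_definition("device-001"))
```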

Every time a model changes, you will need to redeploy the corresponding Greengrass group for the change to take effect locally.
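That redeployment can be scripted with the Greengrass v1 control-plane API (boto3 `greengrass` client: `get_group`, `create_deployment`). The group ID below is a placeholder; this is a sketch, not a turnkey script:

```python
# Parameters for create_deployment: redeploy the group's latest version
# so the updated ML Resource (the new model) is pulled down to the device.

def deployment_request(group_id, group_version_id):
    """Build the create_deployment parameters for one group."""
    return {
        "GroupId": group_id,
        "GroupVersionId": group_version_id,
        "DeploymentType": "NewDeployment",
    }

# With credentials configured:
# import boto3
# gg = boto3.client("greengrass")
# group = gg.get_group(GroupId="my-group-id")  # placeholder ID
# gg.create_deployment(**deployment_request(group["Id"], group["LatestVersion"]))
```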

AWS
EXPERT
answered 4 years ago
reviewed 7 months ago
