How to deploy N models on N Greengrass devices with a unique Lambda for inference logic?


Hi,

Let's consider an ML edge inference use case on Greengrass-managed devices. The model is unique to each device, but its architecture and invocation logic are the same for all devices. In other words, the invocation Lambda could be identical everywhere; only the model parameters would need to change across devices. We'd like to deploy a single inference Lambda to all devices and load a device-specific model artifact on each device.

Can this be achieved with Greengrass ML Inference? It seems that GG MLI requires each model to be associated with a specific Lambda.

Otherwise, is the recommended pattern to self-manage the inference in the Lambda, e.g., by loading a device-specific model from S3 based on a local config file or an environment variable?
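For the self-managed pattern mentioned above, a minimal sketch might look like this. All names here (`MODEL_BUCKET`, `MODEL_S3_KEY`) are hypothetical; the idea is simply that the bucket is shared while the object key is set per device:

```python
import os

# Hypothetical helper: resolve which model this device should load.
# MODEL_S3_KEY would differ per device (set via a local config file or
# the function's environment), while MODEL_BUCKET stays the same.
def resolve_model_source(env=os.environ):
    bucket = env.get("MODEL_BUCKET", "my-models-bucket")
    key = env.get("MODEL_S3_KEY", "default/model.tar.gz")
    return f"s3://{bucket}/{key}"

# The inference Lambda would then download this URI once at startup
# (e.g. with boto3's s3 client) and cache it on local storage.
```

This keeps the Lambda code identical across devices, at the cost of managing S3 access and model caching yourself.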

1 Answer

Accepted Answer

In IoT Greengrass 1.x, the configuration is unique to each Greengrass Group. This includes Connectors, Lambdas and ML Resources.

The same Lambda can be referenced by multiple groups as a Greengrass function, which is likely what you want. This is similar to using one of the GG ML connectors (Object Detection or Image Classification).

In addition to your inference code, you'll also need to configure an ML Resource, which has a local name and a remote model. The local name would be the same for all Greengrass Groups, but in each group you would refer to a different remote object (the model), either an S3 object or a SageMaker training job output.
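The per-group ML Resource described above can be sketched as a payload for the Greengrass V1 `create_resource_definition` API (via boto3). The resource name and `DestinationPath` (assumed here to be `/ml/model`) stay constant across groups, and only `S3Uri` changes:

```python
# Assumed local path; the inference Lambda always reads the model from here.
LOCAL_MODEL_PATH = "/ml/model"

def ml_resource_definition(model_s3_uri):
    """Build the InitialVersion payload for greengrass.create_resource_definition.

    Sketch only, not tested against a live account: the same logical
    resource name is used in every group, while S3Uri points at the
    model object for that specific group/device.
    """
    return {
        "Resources": [{
            "Id": "device-model",
            "Name": "device-model",  # same logical name in all groups
            "ResourceDataContainer": {
                "S3MachineLearningModelResourceData": {
                    "DestinationPath": LOCAL_MODEL_PATH,
                    "S3Uri": model_s3_uri,  # differs per group/device
                }
            }
        }]
    }

# Usage (requires AWS credentials; shown commented out):
# import boto3
# gg = boto3.client("greengrass")
# gg.create_resource_definition(
#     InitialVersion=ml_resource_definition("s3://my-bucket/device-1/model.tar.gz"))
```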

Every time a model changes, you will need to redeploy the corresponding Greengrass group for the change to take effect locally.
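Redeploying after a model update can be scripted with the Greengrass V1 `create_deployment` API. A small sketch, assuming you already have the group ID and the group version ID that references the updated resource definition:

```python
def new_deployment_request(group_id, group_version_id):
    """Arguments for greengrass.create_deployment (Greengrass V1 API)."""
    return {
        "GroupId": group_id,
        "GroupVersionId": group_version_id,
        "DeploymentType": "NewDeployment",
    }

# Usage (requires AWS credentials; shown commented out):
# import boto3
# gg = boto3.client("greengrass")
# gg.create_deployment(**new_deployment_request("my-group-id", "my-version-id"))
```

In practice you would loop this over all groups whose model changed.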

AWS
EXPERT
answered 3 years ago
EXPERT
reviewed a month ago
