S3 trigger vs synchronous integrations for IoT->cloud requests


I see the "Track Coffee Consumption" DeepLens recipe (which runs an object detection model at the edge but calls Rekognition for face comparison) works by:

  • Uploading an image from the device to S3, and
  • Triggering a cloud Lambda function from the S3 upload, which calls Rekognition and does something with the result (roughly as sketched below).
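
For reference, a minimal sketch of what the cloud side of that pattern could look like (not the recipe's actual code): the bucket, reference image, and similarity threshold below are assumptions.

```python
import json
from urllib.parse import unquote_plus

import boto3

rekognition = boto3.client("rekognition")

# Hypothetical reference face; the recipe's real bucket/key and matching logic differ.
REFERENCE_BUCKET = "reference-faces-bucket"   # assumption
REFERENCE_KEY = "known_face.jpg"              # assumption


def lambda_handler(event, context):
    # The S3 put event carries the bucket and key of the frame the device uploaded.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = unquote_plus(record["s3"]["object"]["key"])

    # Compare the uploaded frame against the reference face.
    response = rekognition.compare_faces(
        SourceImage={"S3Object": {"Bucket": REFERENCE_BUCKET, "Name": REFERENCE_KEY}},
        TargetImage={"S3Object": {"Bucket": bucket, "Name": key}},
        SimilarityThreshold=80,
    )

    matches = response.get("FaceMatches", [])
    # The recipe feeds results into a cloud dashboard; here we just log them.
    print(json.dumps({"frame": key, "face_matches": len(matches)}))
    return {"face_matches": len(matches)}
```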

Assuming that:

  • The results of the cloud inference needed to be used on the device (rather than just powering a cloud dashboard as in the recipe), and
  • Latency-when-available was more important to the use case than queuing-when-unavailable

...are there any big reasons not to call the cloud Lambda (or Rekognition) directly from the device, using botocore just as the recipe does with S3?
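
For concreteness, the direct call I have in mind would look something like this on the device (a sketch only, assuming the device's credentials allow rekognition:CompareFaces; the region, threshold, and reference image are placeholders):

```python
import boto3

# Assumes the device has credentials that permit rekognition:CompareFaces
# (e.g. via a Greengrass-provisioned role or a configured profile).
rekognition = boto3.client("rekognition", region_name="us-east-1")  # region is a placeholder


def compare_face_synchronously(frame_jpeg: bytes, ref_bucket: str, ref_key: str):
    """Send a captured frame straight to Rekognition and block until the result comes back."""
    response = rekognition.compare_faces(
        SourceImage={"S3Object": {"Bucket": ref_bucket, "Name": ref_key}},
        TargetImage={"Bytes": frame_jpeg},  # image bytes go directly, no S3 hop
        SimilarityThreshold=80,
    )
    return response.get("FaceMatches", [])
```

This removes the S3 hop entirely, at the cost of the caller blocking for the full round trip.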

Perhaps the S3 pattern is preferred because some more restricted deployments can reach S3 through IoT services, while general access to Lambda and other services isn't available?

From a quick search it seems there aren't many examples that (synchronously) invoke cloud services from Greengrass devices; I'm trying to get a grasp of whether this is driven by expected network connectivity, device support, IoT integration availability, or something else.

Alex_T (AWS, Expert)
Asked 4 years ago · 203 views
1 Answer

Accepted Answer

Hi,

One reason for posting to S3 or IoT is to minimize blocking in the process that's running the inference. If you wait for the round-trip request to return results to DeepLens, the application might miss a new frame. You can certainly run a queue or state machine on the device, but I think the authors wanted to keep it simple.
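
A minimal sketch of the kind of on-device queue mentioned above, so the inference loop hands frames off and keeps running; the bucket name, key scheme, and queue size are assumptions, not part of the recipe:

```python
import queue
import threading

import boto3

s3 = boto3.client("s3")
UPLOAD_BUCKET = "deeplens-frames"  # assumption

# Bounded queue so a slow or absent network can't grow memory without limit.
pending_frames = queue.Queue(maxsize=8)


def uploader():
    """Background worker: drains frames so the inference loop never blocks on the network."""
    while True:
        key, jpeg_bytes = pending_frames.get()
        try:
            s3.put_object(Bucket=UPLOAD_BUCKET, Key=key, Body=jpeg_bytes)
        except Exception as exc:
            print(f"upload of {key} failed: {exc}")
        finally:
            pending_frames.task_done()


threading.Thread(target=uploader, daemon=True).start()


def on_frame(frame_id: str, jpeg_bytes: bytes):
    """Called from the inference loop: hand the frame off and return immediately."""
    try:
        pending_frames.put_nowait((f"frames/{frame_id}.jpg", jpeg_bytes))
    except queue.Full:
        pass  # drop the frame rather than stall inference
```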

AWS · Answered 4 years ago
