Modelling a greengrass component around different ext4 filesystems


I have an application that can live in either of two directories: /file-path-A/greengrass-component or /file-path-B/greengrass-component. One of them is updated during a deployment while the other remains dormant. By "present" I mean the latest source code can sit in either of the two directories, and a Greengrass deployment should be able to toggle between them and restart the app from whichever directory holds the new code. The app can be a Node.js application, a Python server, or even a Docker container. The filesystem is ext4, and during a deployment we ship a blob of the filesystem that contains the updated source code.

Is there a recommended way to model my application (Greengrass component) to achieve this? I have thought of two approaches:

  1. Have two components per application, one for the A path and one for the B path, and toggle between the two when a deployment is triggered.
  2. The other way, which would be ideal, is to have a single Greengrass component per application and somehow toggle the file path to the source code when restarting the service. Ideally this logic is abstracted away from the application itself, perhaps by a middleware layer that switches the pointer to the source code (see the sketch below).
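A minimal sketch of approach 2, assuming the single component's lifecycle Run step launches a small wrapper. The symlink name (/file-path-current), the app entry point, and the fallback behaviour are assumptions for illustration, not something Greengrass provides out of the box:

```python
#!/usr/bin/env python3
"""Launcher sketch: resolve a symlink that marks the active slot and exec
the application from there, keeping the slot choice out of the app code."""
import os
import sys

SLOT_A = "/file-path-A/greengrass-component"
SLOT_B = "/file-path-B/greengrass-component"
CURRENT = "/file-path-current"   # hypothetical symlink flipped by the deployment
APP_ENTRY = "app.py"             # hypothetical entry point

def resolve_slot() -> str:
    """Follow the symlink; fall back to slot A if it is missing or points elsewhere."""
    target = os.path.realpath(CURRENT)
    return target if target in (SLOT_A, SLOT_B) else SLOT_A

if __name__ == "__main__":
    slot = resolve_slot()
    # Replace this process with the app so the Nucleus supervises it directly;
    # a component restart re-runs this launcher and picks up a flipped slot.
    os.execv(sys.executable, [sys.executable, os.path.join(slot, APP_ENTRY)])
```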
1 Answer

[previous response has been replaced after discussion]

This functionality is already built into Greengrass deployments via rollbacks, which are kept on the local filesystem.
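A minimal sketch of what this looks like from the cloud side, assuming boto3 and placeholder ARNs, versions, and component names: creating the deployment with failureHandlingPolicy set to ROLLBACK makes the core device revert to its previous deployment if the new one fails to apply.

```python
import boto3

gg = boto3.client("greengrassv2")

# Placeholder thing group and component; ROLLBACK is the built-in
# failure-handling policy that restores the previous deployment on failure.
response = gg.create_deployment(
    targetArn="arn:aws:iot:us-east-1:123456789012:thinggroup/my-core-devices",
    deploymentName="app-update",
    components={
        "com.example.MyApp": {"componentVersion": "1.1.0"},
    },
    deploymentPolicies={
        "failureHandlingPolicy": "ROLLBACK",
    },
)
print(response["deploymentId"])
```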

AWS, answered 3 months ago
EXPERT, reviewed 24 days ago
  • What we have is an embedded device running Linux, and we are effectively trying to do A/B OTA deployments inspired by Android: https://source.android.com/docs/core/ota/ab. We are going to have two partitions in our filesystem for each application on our GG core device, one for the A slot and one for the B slot. It is very similar to canary deployments, but inside an IoT device. Are you suggesting running multiple core devices on the same embedded device, with each core device mapped to one application running on the device?

  • From what I understand, a GG core device maps to a single physical IoT device. Are you referring to running multiple instances of the Greengrass Core software on the same physical IoT device (for example, multiple instances of Greengrass Core on a single Raspberry Pi)?

  • Thanks for the use case! I believe I dived into solutions prematurely, so I just want to clarify before continuing: Greengrass component versions should be thought of as immutable units of functionality. The A/B partition approach would break this principle, as it swaps out functionality at runtime. Also, why is A/B a requirement here? Greengrass has a built-in concept of rollbacks should a deployment fail.

  • Right, but rollbacks change things at the component level, whereas A/B always ensures you can boot the device back into a stable build. It is usually used for Linux images, where you can boot back into a stable OS release; we are extending that concept to Greengrass components. A couple of reasons this benefits us: the device can run into issues with the new components that are not strictly related to a single component, such as disk space filling up, config changes causing network disconnection, or API interfaces failing. These failures could be external to the component yet caused by other components with bad code, so a single-component rollback will not guarantee a stable running device. This is also why Android came up with A/B: to always have a safe and guaranteed way to fall back.

    Having said that, I was thinking of breaking the deployment down into two pieces:

    1. Deploy from AWS to an "orchestrator" component on the device.
    2. The orchestrator picks this up and does a local deployment to the other GG components (applications) on the device using the Greengrass CLI, and it makes sure it switches the A/B slot. The orchestrator receives the latest versions of what is to be deployed to the other components, and it can do whatever the Nucleus does: downloading from S3, restarting components, A/B partition switches (custom code), and so on (see the sketch after this list).
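A rough sketch of that orchestrator idea under the stated assumptions: it stages the new source into the inactive slot, flips a symlink to switch A/B, and asks the Nucleus for a local deployment via greengrass-cli so the application component restarts from the new slot. The bucket, keys, slot paths, and component names are placeholders:

```python
import os
import subprocess

import boto3

GG_CLI = "/greengrass/v2/bin/greengrass-cli"   # default Nucleus install path
SLOTS = ("/file-path-A/greengrass-component", "/file-path-B/greengrass-component")
CURRENT_LINK = "/file-path-current"            # hypothetical symlink marking the active slot

def inactive_slot() -> str:
    """Return whichever slot the symlink does not currently point at."""
    active = os.path.realpath(CURRENT_LINK)
    return SLOTS[1] if active == SLOTS[0] else SLOTS[0]

def stage_from_s3(bucket: str, key: str, slot: str) -> None:
    """Download the new source blob into the inactive slot (unpacking omitted)."""
    boto3.client("s3").download_file(bucket, key, os.path.join(slot, "source.tar.gz"))

def switch_slot(slot: str) -> None:
    """Repoint the symlink at the freshly staged slot using an atomic rename."""
    tmp = CURRENT_LINK + ".tmp"
    os.symlink(slot, tmp)
    os.replace(tmp, CURRENT_LINK)

def redeploy(component: str, version: str, recipe_dir: str, artifact_dir: str) -> None:
    """Trigger a local deployment so the component restarts from the new slot."""
    subprocess.run(
        [GG_CLI, "deployment", "create",
         "--recipeDir", recipe_dir,
         "--artifactDir", artifact_dir,
         "--merge", f"{component}={version}"],
        check=True,
    )
```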
  • Rollbacks are done at the deployment level, not the component level: previous deployments that exist on the filesystem are used for rollbacks, which accomplishes A/B. So my recommendation is to use Greengrass deployments without the extra layer.
