Unfortunately, SageMaker Neo does not directly support compiling models trained with SageMaker AutoML (Autopilot) for edge devices like the Jetson. AutoML produces model artifacts tied to its own containers and formats, which are not among the framework formats that Neo's compiler currently accepts.
However, you still have options to run your AutoML-trained model on your edge device:
- Export the model: You can export the AutoML model to a standard format that Neo supports, such as ONNX or TensorFlow, and then pass the converted artifact to Neo for compilation.
- Re-train using a supported framework: If exporting is not feasible, you could use the AutoML model as a reference and train a similar model with a framework that Neo supports, such as TensorFlow, PyTorch, or MXNet. This lets you keep comparable performance while gaining Neo compatibility.
- Direct deployment without Neo: If Neo compilation is not essential, you can deploy the AutoML model directly to your edge device. This requires setting up the necessary runtime environment on the Jetson, i.e., installing the specific libraries or frameworks that the AutoML model depends on.
- Use alternative optimization tools: Other tools and frameworks designed for edge deployment may be compatible with your model. Since the Jetson is an NVIDIA device, TensorRT is the natural candidate: it can optimize a model (for example, via an ONNX export) for the device's GPU without involving Neo.
Remember that while Neo offers significant optimization benefits, it's not the only path to edge deployment. The key is to find a balance between maintaining the performance benefits of your AutoML model and achieving compatibility with your edge device.
Sources
Amazon SageMaker Neo
Edge Devices - Amazon SageMaker