1 Answer
The default Panorama service uses SageMaker Neo to compile and run models, and the 1P (first-party) SageMaker algorithms are not compatible with Neo. In addition, the Panorama service runs on NVIDIA Jetson hardware, which is not compatible with models targeting 'ml.m5.2xlarge' instances. That said, Panorama has recently introduced a "bring your own runtime" option that would let you use 1P SageMaker models. If you were to train your model on a GPU-enabled instance (p3 or g4dn), the process would involve building your own custom container for Panorama with MXNet 1.4 installed. If this is still something you are pursuing, reach out and I can help walk you through the process.
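Before attempting Neo compilation, it can help to confirm whether the trained model artifact actually contains the files an MXNet-based pipeline expects. A minimal sketch in Python, assuming the artifact is a SageMaker-style `model.tar.gz` (the function name and artifact path are illustrative, not part of any AWS SDK):

```python
import tarfile

def find_mxnet_checkpoint(artifact_path):
    """Return (symbol_files, param_files) found inside a model.tar.gz.

    Neo compilation of an MXNet model expects a '<prefix>-symbol.json'
    graph definition plus a '<prefix>-NNNN.params' weight file; a 1P
    algorithm artifact (e.g. kNN) will usually contain neither.
    """
    with tarfile.open(artifact_path, "r:gz") as tar:
        names = tar.getnames()
    symbols = [n for n in names if n.endswith("-symbol.json")]
    params = [n for n in names if n.endswith(".params")]
    return symbols, params
```

If both lists come back empty, the artifact was not serialized as an MXNet checkpoint and Neo will not be able to compile it as one.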
Answered 2 years ago
I have similarly tried the SageMaker Neo conversion using the 'Panorama Test Utility' and the optimisation steps suggested there. The problem again appears to be that the kNN-trained model is not MXNet-based and is missing a '-symbol.json' file.
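For context, an MXNet checkpoint saved with `save_checkpoint` is a pair of files derived from a single prefix, which is why the conversion reports a missing '-symbol.json' for a kNN artifact that was never serialized this way. A small sketch of the naming convention (the prefix and epoch values are illustrative):

```python
def mxnet_checkpoint_names(prefix, epoch):
    """File names MXNet's save_checkpoint produces for a given
    prefix and epoch: the network graph (symbol) and the weights."""
    return f"{prefix}-symbol.json", f"{prefix}-{epoch:04d}.params"
```

For example, `mxnet_checkpoint_names("resnet", 0)` yields `("resnet-symbol.json", "resnet-0000.params")`; the 1P kNN algorithm stores its model in its own format instead, so neither file exists in its output.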