1 Answer
As you mentioned, you changed the Neo compilation target from ml_c5 to jetson_tx2, so the compiled model requires the jetson_tx2 runtime. If you kept the rest of the code unchanged, the model is still deployed to an ml.c5.9xlarge EC2 instance, which is not an NVIDIA Jetson device.
The model can't be loaded and will error out: Jetson is an NVIDIA GPU device architecture, while c5 instances are CPU-only and provide no CUDA environment.
If you compile the model with jetson_tx2 as the target, you should download the compiled model and run it on a real NVIDIA Jetson device.
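To make the mismatch concrete, here is a minimal sketch of where the Neo target device is set, via the `CreateCompilationJob` API. The job name, role ARN, S3 URIs, framework, and input shape below are hypothetical placeholders; the compiled artifact will only run on the device family named in `TargetDevice`:

```python
def build_compilation_request(job_name, role_arn, model_s3_uri, output_s3_uri,
                              target_device="jetson_tx2"):
    """Build a SageMaker Neo compilation job request for a given target device."""
    return {
        "CompilationJobName": job_name,
        "RoleArn": role_arn,
        "InputConfig": {
            "S3Uri": model_s3_uri,
            # Input tensor name and shape; adjust for your model.
            "DataInputConfig": '{"data": [1, 3, 224, 224]}',
            "Framework": "MXNET",
        },
        "OutputConfig": {
            "S3OutputLocation": output_s3_uri,
            # The compiled model only runs on this device family --
            # "jetson_tx2" output cannot be served on a c5 (CPU-only) instance.
            "TargetDevice": target_device,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }


# Hypothetical names and URIs, for illustration only:
request = build_compilation_request(
    "my-jetson-compile-job",
    "arn:aws:iam::123456789012:role/NeoRole",
    "s3://my-bucket/model.tar.gz",
    "s3://my-bucket/compiled/",
)

# Submitting the job requires AWS credentials, e.g.:
# import boto3
# boto3.client("sagemaker").create_compilation_job(**request)
```

If you switch `target_device` back to "ml_c5", the resulting artifact is the one that can be served on a c5 SageMaker endpoint; the rest of the deployment code can stay the same.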
answered a year ago
It looks like I overlooked where the model was actually being deployed. Thanks a lot for pointing it out.