It is possible to compile a model for Inferentia anywhere; you don't need an inf1 instance to compile it. Inf1 is only required to run the model later. To compile a YOLOv5 model, just make sure you have the correct versions of the libraries/frameworks. The following combination works well:
I tested it using Python 3.6 and 3.7 on an x86_64 instance (CPU only). First, prepare the environment:
pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
pip install -r https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt # install dependencies
pip install -U --force-reinstall torch-neuron==1.10.1.2.2.0.0 neuron-cc[tensorflow] "protobuf<4" torchvision==0.11.2
Then compile the model:
import torch
import torch_neuron  # registers the torch.neuron namespace

model_type = 'l'
assert model_type in ['n', 's', 'm', 'l', 'x']

# Dummy input with the expected shape: batch=1, 3 channels, 640x640
x = torch.rand([1, 3, 640, 640], dtype=torch.float32)

model = torch.hub.load('ultralytics/yolov5', f'yolov5{model_type}', pretrained=True)
model.eval()
y = model(x)  # warmup

# Report which operators can run on Neuron, then trace/compile the model
torch.neuron.analyze_model(model, example_inputs=x)
model_neuron = torch.neuron.trace(model, example_inputs=x)

## Export to saved model
model_neuron.save("yolov5_neuron.pt")
Then copy the model to an inf1 instance, set up with the same library versions you used to compile the model, and run:
import torch
import torch.neuron  # must be imported so the Neuron ops are registered before loading

model = torch.load('yolov5_neuron.pt')
model.eval()

x = torch.rand([1, 3, 640, 640], dtype=torch.float32)  # dummy input
y = model(x)  # inference runs on the Inferentia chip
ADDITIONAL INFORMATION
You need to post-process the predictions externally to extract the bounding boxes. The following sample shows how to run the pre- and post-processing routines: https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuron/inference/yolov5/Yolov5.ipynb
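To give an idea of what that post-processing involves, here is a minimal, framework-agnostic sketch of confidence filtering plus non-maximum suppression (NMS). The [x1, y1, x2, y2, score] box format and the threshold values are assumptions for illustration; the real YOLOv5 head outputs raw predictions that also need decoding first, as shown in the notebook above.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, conf_thres=0.25, iou_thres=0.45):
    """Drop low-confidence boxes, then greedily keep the highest-scoring
    box and suppress any remaining box that overlaps it too much."""
    boxes = sorted((b for b in boxes if b[4] >= conf_thres),
                   key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thres for k in kept):
            kept.append(b)
    return kept

# Example: two heavily overlapping detections, one distinct one,
# and one below the confidence threshold.
detections = [
    [0, 0, 10, 10, 0.9],
    [1, 1, 10, 10, 0.8],   # overlaps the first box, suppressed
    [20, 20, 30, 30, 0.7], # separate object, kept
    [0, 0, 5, 5, 0.1],     # below conf_thres, filtered out
]
print(nms(detections))
```

In production you would typically use torchvision.ops.nms instead, which does the same thing on tensors and much faster; the pure-Python version above is just to make the logic explicit.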