New NLP/CV Examples to Get Started on AWS Inferentia and AWS Trainium

Announcing the new AWS-Neuron-Samples repo, which helps users learn how to compile and deploy a wide range of Computer Vision and Natural Language Processing models for AWS Inferentia and AWS Trainium.

Authored by Samir Araujo

We are excited to announce new AWS Inferentia and AWS Trainium examples in a new AWS Neuron samples repository, with many samples and tutorials to help you prepare and run Deep Learning models. In this repository you can find examples for Computer Vision and NLP models implemented in PyTorch and TensorFlow. Each model notebook has step-by-step instructions to help you prepare your models and deploy them to Inferentia.

Overview

The repo is organized into two sections: Training and Inference.

Training

| Framework | Description | Instance Type |
| --- | --- | --- |
| PyTorch Neuron (torch-neuronx) | Sample training scripts for training various PyTorch models on AWS Trainium | Trn1 |

Inference

| Framework | Description | Instance Type |
| --- | --- | --- |
| PyTorch Neuron (torch-neuron) | Sample Jupyter notebooks demonstrating model compilation and inference for various PyTorch models on AWS Inferentia | Inf1 |
| TensorFlow Neuron (tensorflow-neuron) | Sample Jupyter notebooks demonstrating model compilation and inference for various TensorFlow models on AWS Inferentia | Inf1 |

Getting Started

  1. Start by following the instructions for the first three steps in the Neuron Getting Started Guide to launch your Inf1 instance and install the basic Neuron and framework packages:

    1. For PyTorch
    2. For TensorFlow
  2. Once you have your Inf1 instance running and configured, set up your Jupyter environment and clone the Neuron Samples repo:

      sudo yum install jupyter    
      git clone https://github.com/aws-neuron/aws-neuron-samples.git  
      cd aws-neuron-samples/
      jupyter notebook
    

    Once the Jupyter server launches, the console shows the local server URL. Copy it into a web browser.

  3. Ready to get started!
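Before opening a notebook, you can optionally confirm that the framework packages installed in step 1 are visible to your environment. The cell below is a minimal sketch for the PyTorch path (torch-neuron on Inf1); the TensorFlow setup is analogous.

    # Optional sanity check for the PyTorch Neuron (Inf1) setup
    import torch         # base framework installed in step 1
    import torch_neuron   # Neuron integration for PyTorch on Inferentia
    print(torch.__version__)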

Each set of examples is organized into a table in the repo displaying the library dependencies, versions, and original sources. Bookmark or star the repo, as more models will be added frequently.

All the inference notebooks start with a section called "Install Dependencies". This section installs or upgrades the packages required for each specific model example (see the YOLOv5 example). The models were tested with the exact combination of libraries and versions listed in that section.
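As a rough illustration (not copied from any particular notebook), such a dependency cell might look like the following for a PyTorch example; take the exact package pins and the Neuron pip repository URL from the notebook you are actually running.

    # Hypothetical "Install Dependencies" cell for a PyTorch example.
    # Always use the exact package versions pinned in the notebook itself.
    %pip install --upgrade --extra-index-url=https://pip.repos.neuron.amazonaws.com \
        torch-neuron neuron-cc[tensorflow] torchvision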

Select any of the model example notebooks to get started. Each one starts with its own dependencies and shows how to prepare the input data, compile the model, and test inference with some sample data.

For inference with Inferentia, depending on the model, the notebooks include cells that customize the model structure before and after compilation. This is necessary to make the models compatible with the compiler. For instance, a PyTorch model needs to be JIT traceable.
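As a minimal sketch of what this looks like for PyTorch on Inf1 (assuming a pretrained torchvision ResNet-50 and the torch-neuron package; each notebook handles its own model-specific preparation), compilation boils down to tracing the model with a fixed-shape example input:

    import torch
    import torch_neuron  # makes the torch.neuron API available
    from torchvision import models

    # Load a pretrained model and switch to eval mode so tracing captures inference behavior
    model = models.resnet50(pretrained=True)
    model.eval()

    # The model must be JIT traceable: a fixed-shape example input drives the trace
    example = torch.zeros([1, 3, 224, 224], dtype=torch.float32)

    # Compile for Inferentia; operators not supported by Neuron fall back to CPU
    model_neuron = torch.neuron.trace(model, example_inputs=[example])

    # Save the compiled TorchScript artifact for later loading
    model_neuron.save("resnet50_neuron.pt")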

At the end of each notebook you can see the predictions and output generated by running the compiled model on an Inf1 instance. Once you are done testing your model, you can deploy it directly to EC2 or on SageMaker.
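As a hedged sketch (the file name and input shape are carried over from the compilation example above), loading and running the compiled artifact is plain TorchScript:

    import torch
    import torch_neuron  # must be imported so the Neuron ops are registered before loading

    # Load the compiled model produced by torch.neuron.trace(...)
    model_neuron = torch.jit.load("resnet50_neuron.pt")

    # Run inference on the Inf1 instance with the same input shape used for compilation
    example = torch.zeros([1, 3, 224, 224], dtype=torch.float32)
    with torch.no_grad():
        output = model_neuron(example)
    print(output.shape)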

Now it is your turn to play with these models! If you have any questions, please ask them below. You can request new examples for this repo by filing an issue, or contribute your own examples or updates with a pull request.


About the Author


Samir Araújo is an AI/ML Solutions Architect at AWS. He helps customers create AI/ML solutions that solve their business challenges using AWS. He has worked on several AI/ML projects related to computer vision, natural language processing, forecasting, ML at the edge, and more. He likes playing with hardware and automation projects in his free time, and he has a particular interest in robotics.
