All Content tagged with AWS Inferentia

AWS Inferentia is designed to deliver high-performance inference in the cloud, drive down the total cost of inference, and make it easy for developers to integrate machine learning into their business applications.

55 results
[AWS Neuron Documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/neuron-setup/multiframework/multi-framework-ubuntu22-neuron-dlami.html#setup-ubuntu22-multi-framework-d...
1 answer · 0 votes · 52 views · AWS · asked 2 months ago
Walk through the options for compiling a model for inference using Inferentia or Trainium. You would need to do this if the model or the configuration you want isn't available in the Hugging Face cac...
AWS · published 3 months ago · 3 votes · 4.4K views
Step-by-step guide to deploy DeepSeek R1 Distilled models.
AWS · published 4 months ago · 0 votes · 175 views
A list of resources to use when you are first starting with the Neuron SDK and Inferentia or Trainium instances.
Steps to set up Jupyter notebooks and VS Code remote server on Trainium and Inferentia Neuron systems.
AWS · published 5 months ago · 0 votes · 615 views
Key announcements, and how industry leaders like Apple and Anthropic are revolutionizing AI with AWS Trainium and Inferentia.
Get started with Inferentia and Trainium on EC2 using the Hugging Face Neuron Deep Learning Amazon Machine Image (AMI). A short walkthrough of how to deploy an EC2 image with all the Neuron drivers a...
AWS · published 5 months ago · 0 votes · 1K views
Are you heading to **AWS re:Invent 2024** and looking for AWS Inferentia and Trainium sessions to take your machine learning skills to the next level?
AWS · published 9 months ago · 1 vote · 1.1K views
See which Regions have these instances, and find out how to generate your own list with a Python script.
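The region-listing idea above can be sketched in Python. The filtering helper below is self-contained; the trailing comment shows how it could be fed from EC2's `describe_instance_type_offerings` API via boto3 (a real API, but the sample response here is made-up illustrative data, not a real offerings list).

```python
# Sketch: pick out Inferentia (inf1/inf2) instance types from a
# DescribeInstanceTypeOfferings-shaped response for one Region.
# The response shape is an assumption based on the EC2 API's documented
# structure; the sample data below is illustrative only.

def inferentia_types(offerings_response):
    """Return the sorted Inferentia instance types in one Region's response."""
    return sorted(
        o["InstanceType"]
        for o in offerings_response.get("InstanceTypeOfferings", [])
        if o["InstanceType"].startswith("inf")
    )

# Illustrative sample response (not real data):
sample = {
    "InstanceTypeOfferings": [
        {"InstanceType": "inf2.xlarge", "LocationType": "region", "Location": "us-east-1"},
        {"InstanceType": "m5.large", "LocationType": "region", "Location": "us-east-1"},
        {"InstanceType": "inf1.2xlarge", "LocationType": "region", "Location": "us-east-1"},
    ]
}

print(inferentia_types(sample))  # ['inf1.2xlarge', 'inf2.xlarge']

# For a live check (requires AWS credentials and boto3), loop over Regions:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   resp = ec2.describe_instance_type_offerings(
#       Filters=[{"Name": "instance-type", "Values": ["inf1*", "inf2*"]}]
#   )
#   print(inferentia_types(resp))
```

Keeping the filter as a pure function makes the logic easy to test offline before wiring it to the live API call.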
Hello AWS team! I am trying to run a suite of inference recommendation jobs leveraging NVIDIA Triton Inference Server on a set of GPU instances (ml.g5.12xlarge, ml.g5.8xlarge, ml.g5.16xlarge) as well...
1 answer · 0 votes · 730 views · asked 10 months ago
Quick first steps to find out if Inferentia or Trainium is an option for you.
AWS · published a year ago · 1 vote · 2.8K views
Understand what service quotas are, how they apply to Inferentia and Trainium instances and endpoints, and see an example of what quotas would be appropriate for a POC.