Questions tagged with Amazon Elastic Inference

I'm trying to build a public-facing web app that allows for inference, with probably ten or so models available to my users. My initial thought was that I would have a basic front-end webpage, that...
1 answer · 0 votes · 17 views · asked 13 days ago
We want to run prediction through the Elastic Inference accelerator only when a request comes in to Lambda. In that case, will I still be charged while EC2 is up and running? Or are you...
1 answer · 0 votes · 35 views · suesue · asked 3 months ago
I'm using a standard **AWS EKS** cluster, all cloud-based (K8s 1.22), with multiple node groups, one of which uses a Launch Template that defines an Elastic Inference Accelerator attached to the...
0 answers · 0 votes · 88 views · asked a year ago
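For context on the question above: EC2 launch templates can declare an accelerator in their request data via the `ElasticInferenceAccelerators` field. A minimal sketch of the relevant fragment of a `CreateLaunchTemplate` request body follows; the template name, instance type, and accelerator type here are placeholders, not values taken from the question.

```json
{
  "LaunchTemplateName": "eks-ei-node-template",
  "LaunchTemplateData": {
    "InstanceType": "m5.large",
    "ElasticInferenceAccelerators": [
      {
        "Type": "eia2.medium",
        "Count": 1
      }
    ]
  }
}
```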
It is mentioned in the AWS EI docs that we can use EI in US East, US West, Asia Pacific, and the EU. But when I launch an instance in Ohio or Seoul, it shows this error > ``` Instance launch failed The...
1 answer · 0 votes · 213 views · asked a year ago
I am using the batch transform function in SageMaker for the inference of my PyTorch model. I am using the same structure as...
0 answers · 0 votes · 69 views · asked a year ago
Hi, after many attempts to use the new create-VPC tool that builds the VPC, subnets, gateways, etc., I keep getting to the point where it is allocating IPs, and then it fails. I have full admin...
1 answer · 0 votes · 85 views · reade · asked a year ago
Hi Team, Greetings!! We are able to deploy to a real-time endpoint with Elastic Inference accelerators, but the SageMaker Elastic Inference accelerators are not available during inference. Could you please have...
0 answers · 0 votes · 72 views · asked a year ago
Hi Team, Greetings!! We are not able to deploy to a real-time endpoint with Elastic Inference accelerators. Could you please have a look? SageMaker version: 2.76.0 Code: from sagemaker.pytorch...
0 answers · 0 votes · 65 views · asked a year ago
Hi All, Good day!! The key point to note here is that we have a pre-processing script for the text document (deserialization, which is required for prediction), and then a post-processing script for generating NER...
1 answer · 0 votes · 172 views · asked a year ago
Hi fellow AWS users, I am working on an inference pipeline on AWS. Simply put, I have trained a PyTorch model and deployed it (and created an inference endpoint) on SageMaker from a notebook. On...
1 answer · 0 votes · 172 views · asked a year ago
Can Amazon SageMaker endpoints be fitted with multiple Amazon Elastic Inference accelerators? I see that [in EC2 it's possible][1], however I don't see it mentioned in Amazon SageMaker...
1 answer · 0 votes · 80 views · AWS EXPERT · asked 3 years ago
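For context on the question above: in SageMaker's `CreateEndpointConfig` API, the accelerator is specified through the singular `AcceleratorType` field on each production variant, which is consistent with one accelerator per variant rather than the per-instance list EC2 supports. A sketch of the request body, with hypothetical endpoint-config and model names:

```json
{
  "EndpointConfigName": "my-endpoint-config",
  "ProductionVariants": [
    {
      "VariantName": "AllTraffic",
      "ModelName": "my-pytorch-model",
      "InstanceType": "ml.m5.large",
      "InitialInstanceCount": 1,
      "AcceleratorType": "ml.eia2.medium"
    }
  ]
}
```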