Questions tagged with Amazon SageMaker Deployment
Browse through the questions and answers listed below or filter and sort to narrow down your results.
Hi,
I followed [this example](https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker_neo_compilation_jobs/tensorflow_distributed_mnist/tensorflow_distributed_mnist_neo.ipynb) up to the...
2 answers · 0 votes · 274 views · asked a year ago · lg...
I have referred to [this](https://github.com/aws-samples/sagemaker-multi-model-endpoint-tensorflow-computer-vision/blob/main/multi-model-endpoint-tensorflow-cv.ipynb) notebook to deploy a PyTorch model, but...
2 answers · 0 votes · 819 views · asked 2 years ago · lg...
I would like to deploy an async endpoint in SageMaker. However, when trying to deploy it, I get the following error:
ParamValidationError: Parameter validation failed:
Unknown parameter in input:...
1 answer · 0 votes · 254 views · asked 2 years ago · lg...
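A "Unknown parameter in input" ParamValidationError at deploy time usually means the installed boto3/sagemaker version predates async inference support, so upgrading (`pip install -U boto3 sagemaker`) is the first thing to try. As a sketch, this is the shape of the `AsyncInferenceConfig` block that the low-level `create_endpoint_config` call expects; the resource names and S3 path are placeholders:

```python
# Sketch of an endpoint config with async inference enabled. Older SDK
# versions reject the AsyncInferenceConfig key as "unknown parameter".

def build_async_endpoint_config(config_name, model_name, s3_output_path):
    """Return kwargs for sagemaker_client.create_endpoint_config().

    All three arguments are placeholders for your own resource names.
    """
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",
                "ModelName": model_name,
                "InstanceType": "ml.m5.xlarge",
                "InitialInstanceCount": 1,
            }
        ],
        # This is the block an outdated boto3 does not recognize.
        "AsyncInferenceConfig": {
            "OutputConfig": {"S3OutputPath": s3_output_path},
        },
    }


kwargs = build_async_endpoint_config(
    "my-async-config", "my-model", "s3://my-bucket/async-results/"
)
# With AWS credentials configured, this would then be submitted as:
# import boto3
# boto3.client("sagemaker").create_endpoint_config(**kwargs)
```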
In SageMaker, I've created an async endpoint from a model. How does one access this endpoint from a notebook instance so that it can be used to make predictions?
Thanks!
1 answer · 0 votes · 217 views · asked 2 years ago · lg...
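Async endpoints are invoked through the `sagemaker-runtime` client's `invoke_endpoint_async` call, which takes an S3 URI of the request payload rather than an inline body. A minimal sketch, assuming placeholder endpoint and bucket names:

```python
# Sketch: calling an async SageMaker endpoint from a notebook via boto3's
# sagemaker-runtime client. The payload must already be uploaded to S3.

def build_async_invocation(endpoint_name, input_s3_uri):
    """Return kwargs for invoke_endpoint_async()."""
    return {
        "EndpointName": endpoint_name,
        "InputLocation": input_s3_uri,  # s3:// URI of the request body
        "ContentType": "application/json",
    }


kwargs = build_async_invocation(
    "my-async-endpoint", "s3://my-bucket/input/payload.json"
)
# In a notebook instance with credentials, this would be:
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint_async(**kwargs)
# response["OutputLocation"] points at the S3 object where the result is
# written once the inference completes.
```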
2022/11/09 05:21:09 [error] 11#11: *213 upstream prematurely closed connection while reading response header from upstream, client: 169.254.178.2, server: , request: "POST /invocations HTTP/1.1",...
1 answer · 0 votes · 781 views · asked 2 years ago · lg...
I have built a pipeline using sklearn.pipeline that can take in, pre-process and make predictions on text. Is it possible to deploy this Pipeline object as an endpoint using SageMaker?
Thanks!
1 answer · 0 votes · 216 views · asked 2 years ago · lg...
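A fitted `sklearn.pipeline.Pipeline` can be served like any single estimator, because the SageMaker scikit-learn container only calls the hook functions of your inference script. A minimal sketch of such a script, assuming the Pipeline was saved as `pipeline.joblib` inside the model archive (file name is a placeholder):

```python
# Sketch of an inference script (e.g. inference.py) for the SageMaker
# scikit-learn serving container. The container loads the model via
# model_fn and routes each request through predict_fn, so the Pipeline's
# pre-processing steps run automatically before prediction.
import os


def model_fn(model_dir):
    """Load the pickled Pipeline shipped inside model.tar.gz."""
    import joblib  # available in the scikit-learn serving container
    return joblib.load(os.path.join(model_dir, "pipeline.joblib"))


def predict_fn(input_data, model):
    """The Pipeline applies its transformers, then the final estimator."""
    return model.predict(input_data)
```

The script is passed as `entry_point` to `sagemaker.sklearn.SKLearnModel` before calling `deploy()`.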
I'm trying to Neo-compile a PyTorch YOLOv5 Large model for edge deployment on an NVIDIA Jetson Xavier NX device. I'm able to do it using the default settings for FP32 precision, but I'm unable to do it...
1 answer · 0 votes · 267 views · asked 2 years ago · lg...
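For reference, a hedged sketch of the low-level Neo compilation request targeting Jetson Xavier. The job name, S3 paths, and role ARN are placeholders, and the 640x640 input shape in `DataInputConfig` is an assumption about the traced YOLOv5 model, not something stated in the question:

```python
# Sketch of a Neo compilation job request for a PyTorch model, targeting
# the jetson_xavier device. DataInputConfig must match the shape the
# model was traced with.

def build_compilation_job(job_name, role_arn, model_s3_uri, output_s3_uri):
    """Return kwargs for sagemaker_client.create_compilation_job()."""
    return {
        "CompilationJobName": job_name,
        "RoleArn": role_arn,
        "InputConfig": {
            "S3Uri": model_s3_uri,  # model.tar.gz containing the traced model
            "DataInputConfig": '{"input0": [1, 3, 640, 640]}',  # assumed shape
            "Framework": "PYTORCH",
        },
        "OutputConfig": {
            "S3OutputLocation": output_s3_uri,
            "TargetDevice": "jetson_xavier",
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }


kwargs = build_compilation_job(
    "yolov5l-neo",
    "arn:aws:iam::111122223333:role/SageMakerRole",
    "s3://my-bucket/yolov5l/model.tar.gz",
    "s3://my-bucket/yolov5l/compiled/",
)
# boto3.client("sagemaker").create_compilation_job(**kwargs) would submit it.
```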
I am facing an issue implementing BYOC (bring your own container) with multiple containers behind one endpoint. I am training my own model in a container, and when I run this container on its own it works, but integrating...
1 answer · 0 votes · 248 views · asked 2 years ago · lg...
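A multi-container endpoint is declared at the `create_model` level: instead of `PrimaryContainer`, a `Containers` list is passed together with `InferenceExecutionConfig`. A sketch with placeholder names and ECR image URIs:

```python
# Sketch: one SageMaker model hosting several BYOC containers behind a
# single endpoint. With Mode "Direct", invoke_endpoint can target one
# container at a time via TargetContainerHostname; "Serial" would chain
# the containers as an inference pipeline instead.

def build_multi_container_model(model_name, role_arn, image_uris):
    """Return kwargs for sagemaker_client.create_model()."""
    return {
        "ModelName": model_name,
        "ExecutionRoleArn": role_arn,
        "Containers": [
            {"ContainerHostname": f"container-{i}", "Image": uri}
            for i, uri in enumerate(image_uris, start=1)
        ],
        "InferenceExecutionConfig": {"Mode": "Direct"},
    }


kwargs = build_multi_container_model(
    "my-multi-container-model",
    "arn:aws:iam::111122223333:role/SageMakerRole",
    [
        "111122223333.dkr.ecr.us-east-1.amazonaws.com/custom-a:latest",
        "111122223333.dkr.ecr.us-east-1.amazonaws.com/custom-b:latest",
    ],
)
# boto3.client("sagemaker").create_model(**kwargs) would register it; the
# endpoint config and endpoint are then created as usual.
```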
Hi,
I've deployed a custom model container as a SageMaker Async Endpoint, which works fine so far. It's callable and provides results as expected.
I now want to apply a TargetTrackingScalingPolicy but...
0 answers · 0 votes · 102 views · asked 2 years ago · lg...
I want to train and deploy multiple comprehend custom classifiers (for example 50 models). I want to be able to classify my documents in near real-time (a couple of seconds are fine) 24/7. The problem...
1 answer · 0 votes · 328 views · asked 2 years ago · lg...
Following this [document](https://sagemaker.readthedocs.io/en/stable/api/inference/multi_data_model.html), I am trying async inference with a Multi Model Endpoint.
I am trying to set **kwargs from...
1 answer · 0 votes · 1516 views · asked 2 years ago · lg...
Hey, I trained a scikit-learn model using the Python SDK and I want to deploy the model as a serverless inference endpoint now. I am new to AWS and can't seem to make sense of the documentation. The model is fit; it...
2 answers · 0 votes · 1039 views · asked 2 years ago · lg...
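At the low level, serverless inference is just an endpoint config whose production variant carries a `ServerlessConfig` block instead of an instance type and count. A sketch with placeholder names; the memory and concurrency values are assumptions to adjust for your model:

```python
# Sketch of a serverless endpoint config. When ServerlessConfig is present,
# InstanceType and InitialInstanceCount are omitted from the variant.

def build_serverless_endpoint_config(config_name, model_name):
    """Return kwargs for sagemaker_client.create_endpoint_config()."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",
                "ModelName": model_name,
                "ServerlessConfig": {
                    "MemorySizeInMB": 2048,  # assumed; 1024-6144 in 1 GB steps
                    "MaxConcurrency": 5,     # assumed concurrent invocations
                },
            }
        ],
    }


kwargs = build_serverless_endpoint_config(
    "sklearn-serverless-config", "my-sklearn-model"
)
# boto3.client("sagemaker").create_endpoint_config(**kwargs), followed by
# create_endpoint(), would stand the serverless endpoint up. The SageMaker
# Python SDK equivalent is model.deploy(serverless_inference_config=...).
```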