There are quite a few ways to go about this, so I'll try to steer you in the right direction.
For training, taking a look at the SageMaker SDK would be a good start. It allows you to write code locally but train a model remotely on SageMaker. Note that this will create a Model in SageMaker's model registry; if you don't want that, using a bare VM might be the better choice.
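As a rough sketch of what that looks like with the SageMaker Python SDK (the role ARN, script name, and S3 paths below are placeholders you'd swap for your own):

```python
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder: your execution role

# You write train.py locally; the SDK packages it and runs it on a remote instance.
estimator = PyTorch(
    entry_point="train.py",
    role=role,
    framework_version="2.1",
    py_version="py310",
    instance_count=1,
    instance_type="ml.m5.xlarge",  # the remote training hardware
    sagemaker_session=session,
)

# Provisions the instance, runs training remotely, and tears it down afterwards.
estimator.fit({"training": "s3://my-bucket/training-data/"})  # placeholder bucket
```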
If you're set on using your own custom Docker container, you can still use SageMaker for deployment (or something like ECS). This page would be a helpful start, particularly the "Steps for model deployment" and "Bring your own model" sections; a deployment sketch follows below.
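For the bring-your-own-container case, a minimal sketch looks like this, assuming you've pushed your inference image to ECR and have a model artifact in S3 (both URIs below are placeholders):

```python
from sagemaker.model import Model

model = Model(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",  # your ECR image
    model_data="s3://my-bucket/model/model.tar.gz",  # your trained model artifact
    role="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder role
)

# Creates the SageMaker Model, endpoint config, and a real-time endpoint.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```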
Hi,
Did you try the (very) new local mode of SageMaker Studio announced in December: https://aws.amazon.com/about-aws/whats-new/2023/12/sagemaker-studio-local-mode-docker/
Studio users can now run SageMaker processing, training, inference and batch
transform jobs locally on their Studio IDE instance. Users can also build and test
SageMaker compatible Docker images locally in Studio IDEs.
Data scientists can iteratively develop ML models and debug code changes quickly
without leaving their IDE or waiting for remote compute resources. Users can run
small-scale jobs locally to test implementations and inspect outputs before running
full jobs in the cloud. This optimizes workflows by providing instant feedback on code changes
and catching issues early without waiting for cloud resources.
It seems to match what you want to achieve very well.
Reference documentation is here: https://docs.aws.amazon.com/sagemaker/latest/dg/pipelines-local-mode.html
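If it helps, local mode is driven through the same SDK: you pass instance_type="local" and the job runs in a Docker container on your Studio instance instead of remote hardware. A minimal sketch (role ARN and script name are placeholders):

```python
from sagemaker.pytorch import PyTorch

# instance_type="local" runs the training container on the Studio instance
# itself (use "local_gpu" on a GPU instance). Requires Docker to be available.
estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder role
    framework_version="2.1",
    py_version="py310",
    instance_count=1,
    instance_type="local",
)

# file:// inputs keep the data local too, so nothing is uploaded to S3.
estimator.fit({"training": "file://./data"})
```

Once the code works locally, switching back to a remote job is just changing instance_type to a real instance type.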
Best,
Didier