SageMaker Notebook Kernel Dying During Training


I created a machine learning pipeline in a SageMaker notebook instance (ml.m4.10xlarge, EBS volume size: 16384 GB) where the kernel keeps restarting around 20% of the way through the process. I want to upgrade my notebook instance to meet the requirements of my workflow, but I'm a bit confused about which instance type would be sufficient to complete the task and how to obtain larger notebook instances.

Any help is appreciated and I am happy to provide further details as needed.

Asked 2 years ago · 1,887 views
1 Answer

You should probably be using SageMaker training jobs for this, rather than trying to scale up your notebook instance.

SageMaker is more than a managed Jupyter service. By running your model training through the training job APIs (e.g. as discussed here, using the high-level SageMaker Python SDK; see the sketch after the list below), you get the benefits of:

  • Automatic tracking of runs (e.g. input parameters and code, output artifacts, logs, resource usage metrics, custom algorithm metrics, container image, etc.)
  • Reproducible containerized environments (pre-built containers with requirements.txt support, in case you don't want to build customized containers yourself)
  • Right-sizing your infrastructure usage to optimize cost - keep your notebook instance small, request bigger instance(s) for your training job, and only pay for the time the training job is actually running.
  • Integration with SageMaker options for model deployment / batch inference, etc.
  • Training runs separate from the notebook, so you can e.g. restart your notebook kernel, kill the notebook instance, struggle with connectivity, etc... during training with no impact.
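
For example, here's a minimal sketch of launching a training job with the high-level SDK, assuming a scikit-learn training script called train.py in a local src/ folder. The script name, S3 path, instance type, and hyperparameters are all placeholders for your own setup:

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

role = sagemaker.get_execution_role()  # IAM role of the notebook instance

estimator = SKLearn(
    entry_point="train.py",             # hypothetical training script
    source_dir="src",                   # may also contain a requirements.txt
    framework_version="1.2-1",          # pre-built scikit-learn container
    instance_type="ml.m5.4xlarge",      # sized for the job, not the notebook
    instance_count=1,
    role=role,
    hyperparameters={"max-depth": 10},  # example hyperparameter
)

# SageMaker copies the data from S3 to the training instance, runs the
# container, and uploads the model artifact back to S3 when the job finishes.
estimator.fit({"train": "s3://my-bucket/path/to/training-data/"})
```

You're only billed for the time that job runs, so the notebook instance itself can stay small.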

So I would suggest setting up your training job by referring to the Using XYZ with the SageMaker Python SDK sections of the developer guide and the Amazon SageMaker Examples. This likely won't immediately fix your scaling challenge, but it should put you in a better position for scaling further (e.g. distributed training) and for tracking your work. For most of my work, I just use e.g. t3.medium notebooks and interact with the SageMaker APIs to run jobs with on-demand infrastructure.
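
For instance, from a small notebook you can re-attach to a training job that's already running (or finished), even after a kernel restart. The job name below is a made-up example; use the one printed when your job started, or look it up in the SageMaker console:

```python
from sagemaker.sklearn.estimator import SKLearn

# Hypothetical job name -- substitute your own
estimator = SKLearn.attach("sagemaker-scikit-learn-2023-01-01-00-00-00-000")

print(estimator.model_data)  # S3 location of the trained model artifact
```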

With that being said, your instance already sounds very large (160 GB RAM, 16 TB disk). The most common cause of kernel deaths I've seen is a failure to allocate memory, so if you're using in-memory libraries like scikit-learn, it could be that one of them can't handle a single massive data structure even though physical memory is available - e.g. because something assumes 32-bit indexing, or due to some other aspect of the script/libraries being used.

It's interesting that you manage to get 20% of the way through training, since ML training is usually pretty homogeneous (e.g. for gradient descent, if you can complete one epoch, you can usually run them all). Perhaps you have a memory leak somewhere? Sharing more details about which framework and model type you're using might help guide suggestions, but ultimately I think it will require debugging your code to see exactly where things go wrong.
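
If you suspect a leak, a quick first check is to log the process's resident memory as training runs. Here's a minimal sketch using psutil (install it with pip if it isn't already in your kernel's environment); where you call it is up to you, e.g. at the end of each epoch or every N batches:

```python
import os
import psutil

_process = psutil.Process(os.getpid())

def log_memory(label):
    """Print the current resident memory of this Python process."""
    rss_gib = _process.memory_info().rss / 1024**3
    print(f"{label}: resident memory {rss_gib:.1f} GiB", flush=True)

# Example usage inside your own training loop (hypothetical):
log_memory("before training")
```

If memory grows steadily epoch over epoch, you likely have a leak (e.g. accumulating results in a list, or holding references to large intermediate arrays); if it jumps suddenly at one step, look at what that step allocates.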

AWS
Expert
Alex_T
Answered 2 years ago
