SageMaker job hangs after Python script finishes


We have a Step Functions pipeline that executes a series of SageMaker training and processing jobs. However, in the last couple of weeks it has happened four times that one of these jobs failed (timed out), even though the underlying Python script finished successfully.

What happens is that the Python script reaches the last print statement of its main function (that print statement is the last line of code that needs to be executed), but the SageMaker job never reports "SUCCESS". Instead, the job keeps running seemingly indefinitely (CPU and GPU utilization drop to zero) until it times out. It appears to happen randomly, and when the pipeline is restarted it runs just fine. Has anyone experienced the same issue and found a solution? Can I pass an explicit "script finished successfully" signal to the SageMaker job?

Note: when this issue happened the first time, neither the underlying script nor the Docker image I use had been changed in months.
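For context, here is roughly how the script ends (a simplified sketch; function and file names are placeholders, not the real code). The forced-exit call at the bottom is one workaround I could imagine, on the assumption that something like a lingering non-daemon thread or unjoined child process keeps the container alive after main() returns, but I would prefer a proper way to tell SageMaker the script completed.

```python
import os


def main():
    # ... training / processing work happens here ...
    print("Main function finished")  # this line is reached, then the job hangs


if __name__ == "__main__":
    main()
    # Possible workaround (assumption, not current code): os._exit() terminates
    # the process immediately, skipping Python's normal shutdown, so a stray
    # non-daemon thread or child process can no longer keep the container
    # running and SageMaker should see the job as completed.
    os._exit(0)
```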

Phil
Asked 9 months ago · 38 views
No answers
