AWS Batch GPU busy or unavailable


I'm trying to deploy a Python app that uses CUDA, packaged in a Docker container, to AWS Batch. When I run a Batch job I get this error:

RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable

I'm a bit confused, as I thought AWS Batch would assign an EC2 instance with an available GPU. I request at least 1 GPU when I submit the job. I haven't had any luck finding anyone with the same issue. It's possible I misconfigured something in my Dockerfile or in AWS Batch, but as far as I can tell I'm accessing the GPU correctly and something on AWS's end is off. Let me know if you need any other info from me.
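For reference, the GPU request at submission time looks roughly like this (a simplified sketch using boto3; the queue, job, and job definition names are placeholders, not my actual resources):

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")  # placeholder region

response = batch.submit_job(
    jobName="cuda-test-job",                 # placeholder job name
    jobQueue="my-gpu-job-queue",             # placeholder: queue backed by the p2 compute environment
    jobDefinition="my-cuda-job-definition",  # placeholder: definition pointing at the CUDA image
    containerOverrides={
        "resourceRequirements": [
            {"type": "GPU", "value": "1"},   # ask Batch for at least 1 GPU for this job
        ],
    },
)
print(response["jobId"])
```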

Docker environment: nvidia/cuda:11.6.0-cudnn8-devel-ubuntu20.04

Compute Environment: p2-family EC2s (not spot instances)
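If it helps narrow this down, this is the kind of check I can run inside the container on the Batch instance (a minimal sketch; the torch calls are just one way to poke CUDA and may not match the app's actual stack):

```python
# In-container GPU diagnostic sketch. The nvidia-smi call only needs the NVIDIA
# container runtime; the PyTorch part is an assumption about the CUDA library.
import subprocess

# Show which devices the NVIDIA runtime actually exposed to this container.
subprocess.run(["nvidia-smi"], check=False)

try:
    import torch  # assumption: CUDA is reached through PyTorch here

    print("CUDA available:", torch.cuda.is_available())
    print("Device count:", torch.cuda.device_count())
    # Allocating on the device is what surfaces
    # "all CUDA-capable devices are busy or unavailable" when the GPU is
    # visible but not actually usable.
    x = torch.zeros(1, device="cuda")
    print("Allocated on:", x.device)
except Exception as exc:
    print("CUDA check failed:", exc)
```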

jlin
Asked 2 years ago · 59 views