DLAMI does not have CUDA/NVIDIA (cannot access CUDA from PyTorch)


I am running on Deep Learning AMI (Ubuntu 18.04) Version 56.0 - ami-083abc80c473f5d88, and I have tried several similar DLAMIs. I am unable to access CUDA from PyTorch to train my models.

See here:

$ apt list --installed | grep -i "nvidia"
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libnvidia-compute-460-server/bionic-updates,bionic-security,now 460.106.00-0ubuntu0.18.04.2 amd64 [installed,automatic]
libnvidia-container-tools/bionic,now 1.7.0-1 amd64 [installed,automatic]
libnvidia-container1/bionic,now 1.7.0-1 amd64 [installed,automatic]
nvidia-container-toolkit/bionic,now 1.7.0-1 amd64 [installed]
nvidia-cuda-dev/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]
nvidia-cuda-doc/bionic,now 9.1.85-3ubuntu1 all [installed,automatic]
nvidia-cuda-gdb/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]
nvidia-cuda-toolkit/bionic,now 9.1.85-3ubuntu1 amd64 [installed]
nvidia-docker2/bionic,now 2.8.0-1 all [installed]
nvidia-fabricmanager-450/now 450.142.00-1 amd64 [installed,upgradable to: 450.156.00-0ubuntu0.18.04.1]
nvidia-opencl-dev/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]
nvidia-profiler/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]
nvidia-visual-profiler/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]

So NVIDIA packages are installed. However, when I run Python:

~$ bpython
bpython version 0.22.1 on top of Python 3.8.12 /home/ubuntu/anaconda3/envs/pytorch_p38/bin/python3.8
>>> import torch.nn as nn
>>> import torch
>>> torch.cuda.is_available()
False
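
To help narrow this down, here is a small introspection snippet I can run in the same pytorch_p38 environment (a sketch only; I have not pasted its output here). If torch.version.cuda prints None, the installed wheel is a CPU-only build, which would point to the PyTorch package rather than the driver:

import torch

# Which PyTorch build is installed, and which CUDA version was it compiled against?
print(torch.__version__)          # a "+cpu" suffix would indicate a CPU-only wheel
print(torch.version.cuda)         # None for CPU-only builds, e.g. "11.1" otherwise
print(torch.cuda.device_count())  # 0 when no usable GPU/driver is visible to PyTorch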

Even after I reinstall the NVIDIA driver:

sudo apt install nvidia-driver-455

I get this:

(pytorch_p38) ubuntu@ip-172-31-95-17:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:09:46_PDT_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.TC455_06.29190527_0
(pytorch_p38) ubuntu@ip-172-31-95-17:~$ bpython
bpython version 0.22.1 on top of Python 3.8.12 /home/ubuntu/anaconda3/envs/pytorch_p38/bin/python3.8
>>> import torch
>>> torch.cuda.is_available()
False
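
One more thing I can check from the same session is whether the kernel driver itself is loaded, since the apt list above only shows user-space packages. A minimal sketch (assuming nvidia-smi is on the PATH; it reports an error if the NVIDIA kernel module is not loaded):

import subprocess

# Ask the driver directly; a non-zero return code or an "NVIDIA-SMI has failed"
# message would mean the kernel module is not loaded on this instance.
result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
print(result.returncode)
print(result.stdout or result.stderr)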

Does anyone know how to get PyTorch to access CUDA? Any help is greatly appreciated.

  • What instance type are you using?

  • Which AMI version are you using, and are you by any chance using a g5-series instance?
