How do I install the NVIDIA GPU driver, CUDA Toolkit, and NVIDIA Container Toolkit on Amazon EC2 instances running Ubuntu Linux?
I want to install the NVIDIA driver, CUDA Toolkit, NVIDIA Container Toolkit, and other NVIDIA software on Ubuntu 24.04 / 22.04 / 20.04 (x86_64/arm64).
Overview
This article describes how to install the NVIDIA GPU driver, CUDA Toolkit, NVIDIA Container Toolkit, and other NVIDIA software directly from the NVIDIA repository on NVIDIA GPU EC2 instances running Ubuntu on AWS.
Note that by using this method, you agree to the NVIDIA Driver License Agreement, End User License Agreement, and other related license agreements. If you are doing development, you may want to register for the NVIDIA Developer Program.
Pre-built AMIs
If you need AMIs preconfigured with TensorFlow, PyTorch, NVIDIA CUDA drivers and libraries, consider AWS Deep Learning AMIs. Refer to Release notes for DLAMIs for currently supported options.
For container workloads, consider Amazon ECS-optimized Linux AMIs and Amazon EKS optimized AMIs.
Note: The instructions in this article are not applicable to pre-built AMIs.
GUI (graphical desktop) remote access
If you need remote graphical desktop access, refer to How do I install GUI (graphical desktop) on Amazon EC2 instances running Ubuntu Linux?
Note that this article installs the NVIDIA Tesla driver (also known as the NVIDIA Data Center driver), which is intended primarily for GPU compute workloads. If configured in xorg.conf, Tesla drivers support one display of up to 2560x1600 resolution. GRID drivers provide access to four 4K displays per GPU and are certified to provide optimal performance for professional visualization applications.
About CUDA toolkit
The CUDA Toolkit is generally optional when a GPU instance is used to run applications (as opposed to developing them), because CUDA applications typically package the CUDA runtime and libraries they need by statically or dynamically linking against them.
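If you are not sure whether an existing application needs the toolkit at run time, you can check how the binary links against the CUDA runtime. This is a minimal sketch; my_cuda_app is a placeholder name for your own application binary:
# my_cuda_app is a placeholder; substitute your application binary
# libcudart.so in the output indicates dynamic linking against the CUDA runtime,
# while libcuda.so is provided by the driver itself
ldd ./my_cuda_app | grep -i cuda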
System Requirements
This article covers the following platforms:
- Ubuntu Linux 24.04 (x86_64 and arm64)
- Ubuntu Linux 22.04 (x86_64 and arm64)
- Ubuntu Linux 20.04 (x86_64 and arm64)
Refer to the Driver Installation Guide for supported kernel versions, compilers, and libraries.
Prepare Ubuntu Linux
Launch a new NVIDIA GPU instance running Ubuntu Linux, preferably with at least 20 GB of storage, and connect to the instance.
Update the OS and install DKMS, kernel headers, and development packages:
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
sudo apt install -y dkms linux-headers-aws linux-modules-extra-aws unzip gcc make libglvnd-dev pkg-config
Restart your EC2 instance if the kernel was updated.
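On Ubuntu, one way to check whether the upgrade requires a restart is the reboot-required flag that package updates create; comparing uname -r with the newest installed kernel works as well. A minimal sketch:
# Reboot only if the package upgrade flagged that a restart is needed
if [ -f /var/run/reboot-required ]; then
  sudo reboot
fi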
Add NVIDIA repository
Configure Network Repo installation
# Derive the repository distro string from /etc/os-release, for example ubuntu2404
DISTRO=$(. /etc/os-release;echo $ID$VERSION_ID | sed -e 's/\.//g')
# Select the repository architecture: x86_64, or sbsa for arm64 (Graviton)
if (arch | grep -q x86); then
  ARCH=x86_64
else
  ARCH=sbsa
fi
cd /tmp
curl -L -O https://developer.download.nvidia.com/compute/cuda/repos/$DISTRO/$ARCH/cuda-keyring_1.1-1_all.deb
sudo apt install -y ./cuda-keyring_1.1-1_all.deb
sudo apt update
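To confirm that apt can now resolve packages from the NVIDIA repository, you can query the candidate version of a driver metapackage. This is a quick sanity check; the version shown will vary over time:
# The candidate version should be served from developer.download.nvidia.com
apt-cache policy cuda-drivers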
Install NVIDIA Driver
To install the latest Tesla driver:
sudo apt install -y cuda-drivers
To install a specific version, for example 565:
sudo apt install -y cuda-drivers-565
The above installs the NVIDIA proprietary kernel module. Refer to the Driver Installation Guide for information about NVIDIA kernel modules and installation options.
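To check which kernel module flavor ended up installed, you can inspect the module's license string. This is a hedged check: the proprietary module typically reports an NVIDIA license, while the open kernel modules report Dual MIT/GPL:
# Inspect the nvidia kernel module built by DKMS
modinfo nvidia | grep -i license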
Verify
Restart your instance
nvidia-smi
Output should be similar to below
Sat Nov 2 07:34:33 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01 Driver Version: 565.57.01 CUDA Version: 12.7 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 31C P8 9W / 70W | 1MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
Optional: CUDA Toolkit
To install the latest CUDA Toolkit:
sudo apt install -y cuda-toolkit
To install a specific version, for example 12.6:
sudo apt install -y cuda-toolkit-12-6
Refer to the CUDA Toolkit documentation for supported platforms and installation options.
Verify
/usr/local/cuda/bin/nvcc -V
Output should be similar to below
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Sep_12_02:18:05_PDT_2024
Cuda compilation tools, release 12.6, V12.6.77
Build cuda_12.6.r12.6/compiler.34841621_0
Post-installation Actions
Refer to the NVIDIA CUDA Installation Guide for Linux for post-installation actions required before the CUDA Toolkit can be used. For example, you may want to add /usr/local/cuda/bin to your PATH variable, as described in Post-installation Actions: Mandatory Actions.
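A minimal sketch of that PATH update, assuming the default /usr/local/cuda symlink created by the package installation:
# Make nvcc and other CUDA tools available on PATH for the current user
echo 'export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}' >> ~/.bashrc
source ~/.bashrc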
Optional: NVIDIA Container Toolkit
The NVIDIA Container Toolkit supports Ubuntu on both x86_64 and arm64. For arm64, use g5g.2xlarge or a larger instance size, because g5g.xlarge may cause failures due to limited system memory.
To install the latest NVIDIA Container Toolkit:
sudo apt install -y nvidia-container-toolkit
Refer to the NVIDIA Container Toolkit documentation for supported platforms, prerequisites, and installation options.
Verify
nvidia-container-cli -V
Output should be similar to below
cli-version: 1.17.0
lib-version: 1.17.0
build date: 2024-10-31T09:18+00:00
build revision: 63d366ee3b4183513c310ac557bf31b05b83328f
build compiler: x86_64-linux-gnu-gcc-7 7.5.0
build platform: x86_64
build flags: -D_GNU_SOURCE -D_FORTIFY_SOURCE=2 -DNDEBUG -std=gnu11 -O2 -g -fdata-sections -ffunction-sections -fplan9-extensions -fstack-protector -fno-strict-aliasing -fvisibility=hidden -Wall -Wextra -Wcast-align -Wpointer-arith -Wmissing-prototypes -Wnonnull -Wwrite-strings -Wlogical-op -Wformat=2 -Wmissing-format-attribute -Winit-self -Wshadow -Wstrict-prototypes -Wunreachable-code -Wconversion -Wsign-conversion -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression -Wl,-zrelro -Wl,-znow -Wl,-zdefs -Wl,--gc-sections
Container engine configuration
Refer to NVIDIA Container Toolkit documentation about container engine configuration.
Install and configure Docker
To install and configure Docker:
sudo apt install -y docker.io
sudo usermod -aG docker ubuntu
sudo systemctl enable docker
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
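Optionally, confirm that the nvidia runtime was registered with Docker. The output format varies by Docker version, but the list of runtimes should include nvidia:
sudo docker info | grep -i runtimes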
Verify Docker engine configuration
To verify the Docker configuration:
sudo docker run --rm --runtime=nvidia --gpus all public.ecr.aws/ubuntu/ubuntu:latest nvidia-smi
Output should be similar to below
Unable to find image 'public.ecr.aws/ubuntu/ubuntu:latest' locally
latest: Pulling from ubuntu/ubuntu
25a614108e8d: Pull complete
Digest: sha256:5b2fc4131b3c134a019c3ea815811de70e6ad9ee1626f59bf302558a95b436e5
Status: Downloaded newer image for public.ecr.aws/ubuntu/ubuntu:latest
Sat Nov 2 07:33:40 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01 Driver Version: 565.57.01 CUDA Version: 12.7 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 30C P8 9W / 70W | 1MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
Install the NVIDIA driver, CUDA Toolkit, and NVIDIA Container Toolkit on an EC2 instance at launch
To install the NVIDIA driver, CUDA Toolkit, and NVIDIA Container Toolkit, including Docker, when launching a new GPU instance, you can use the following user data script.
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
sudo apt install -y dkms linux-headers-aws linux-modules-extra-aws unzip gcc make libglvnd-dev pkg-config
DISTRO=$(. /etc/os-release;echo $ID$VERSION_ID | sed -e 's/\.//g')
if (arch | grep -q x86); then
ARCH=x86_64
else
ARCH=sbsa
fi
cd /tmp
curl -L -O https://developer.download.nvidia.com/compute/cuda/repos/$DISTRO/$ARCH/cuda-keyring_1.1-1_all.deb
sudo apt install -y ./cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install -y cuda-drivers
sudo apt install -y cuda-toolkit
sudo apt install -y docker.io
sudo usermod -aG docker ubuntu
sudo systemctl enable docker
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
sudo reboot
Verify
Connect to your EC2 instance.
nvidia-smi
/usr/local/cuda/bin/nvcc -V
nvidia-container-cli -V
sudo docker run --rm --runtime=nvidia --gpus all public.ecr.aws/ubuntu/ubuntu:latest nvidia-smi
View /var/log/cloud-init-output.log to troubleshoot any installation issues.
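To wait for the user data script to finish and then review its output, you can use cloud-init directly. A convenience sketch:
# Blocks until cloud-init (including the user data script) has completed
cloud-init status --wait
sudo tail -n 50 /var/log/cloud-init-output.log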
Perform the post-installation actions in order to use the CUDA Toolkit. To verify the integrity of the installation, you can download, compile, and run CUDA samples such as deviceQuery.
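As a lighter-weight alternative to building the full CUDA samples, the following sketch compiles a small program with nvcc and queries the number of visible GPUs. It assumes the default /usr/local/cuda install location; the file names are arbitrary:
# Write a minimal CUDA runtime check to a temporary file
cat > /tmp/gpu_check.cu <<'EOF'
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("Found %d CUDA device(s)\n", count);
    return 0;
}
EOF
# Compile and run it
/usr/local/cuda/bin/nvcc -o /tmp/gpu_check /tmp/gpu_check.cu
/tmp/gpu_check
On a single-GPU instance, output should be similar to below
Found 1 CUDA device(s)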
Other software
AWS CLI
To install the AWS CLI (AWS Command Line Interface) v2 through Snap:
sudo snap install aws-cli --classic
Verify
aws --version
Output should be similar to below
aws-cli/2.19.4 Python/3.12.6 Linux/6.8.0-1016-aws exe/aarch64.ubuntu.24
cuDNN (CUDA Deep Neural Network library)
To install cuDNN for the latest available CUDA version:
sudo apt install -y zlib1g cudnn
Refer to the cuDNN documentation for installation options and the support matrix.
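To confirm which cuDNN packages and version were installed, you can query the package database; inspecting the cudnn_version.h header also works:
dpkg -l | grep -i cudnn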
NCCL (NVIDIA Collective Communication Library)
To install the latest NCCL:
sudo apt install -y libnccl2 libnccl-dev
Refer to the NCCL documentation for installation options.
DCGM (NVIDIA Data Center GPU Manager)
To install the latest DCGM:
sudo apt install -y datacenter-gpu-manager
Refer to the DCGM documentation for more information.
Verify
dcgmi -v
Output should be similar to below
Version : 3.3.8
Build ID : 43
Build Date : 2024-09-03
Build Type : Release
Commit ID : be8d66b4318e1d5d6e31b67759dc924d1bc18681
Branch Name : rel_dcgm_3_3
CPU Arch : aarch64
Build Platform : Linux 4.15.0-180-generic #189-Ubuntu SMP Wed May 18 14:13:57 UTC 2022 x86_64
CRC : 93724fdcffc34a2656865a161c2d79df
Fabric Manager
To install the latest Fabric Manager and driver:
sudo apt install -y cuda-drivers-fabricmanager
To install a specific version, for example 565:
sudo apt install -y cuda-drivers-fabricmanager-565
Refer to the Fabric Manager documentation for supported platforms and installation options.
Verify
nv-fabricmanager -v
Output should be similar to below
Fabric Manager version is : 565.57.01