Using Nvidia GPUs With Docker In 5 Minutes

Michael Levan - Oct 22 - Dev Community

Containers have been around for a long time, going back to Linux Containers (LXC). In 2013, Docker revolutionized how engineers use containers; essentially, they made the idea of containers mainstream because containers became far more straightforward to use.

With the growth of containers over the years, they’ve gone from being a toy, to being relevant, to being in just about every production environment.

Now that the wave of AI and ML is here, engineers need a way to run those workloads in a smaller form factor, just as engineers once needed a more efficient way to run application stacks, which is what led them to containers in the first place.

In this quickstart, you’ll learn how to get a container up and running and connected to a GPU in 5 minutes.

Prerequisite

The first thing that you want to do is confirm that your server/instance has an Nvidia GPU. If it doesn’t, the rest of this article won’t work on your system.

To confirm that a GPU is available, run the following:

lspci | grep -i nvidia
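
If an Nvidia GPU is present, you’ll see a line similar to the following (the exact model, PCI address, and revision will differ):

00:1e.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)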

Install Docker

Without a container engine, you won’t be able to use an Nvidia GPU within a container.

To install Docker on Ubuntu, run the following:

sudo apt install docker.io
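
Once Docker is installed, you can confirm that the daemon is enabled and running and check the version (the exact output depends on the release):

sudo systemctl enable --now docker
docker --version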

You can also use containerd or CRI-O as the container runtime; the Nvidia Container Toolkit supports those runtimes as well.

Drivers

As with all hardware, device drivers are needed so the hardware can properly communicate with the operating system. Containers and GPUs are no different, so you will need to install the Nvidia device drivers.

On Ubuntu, you can use the following package.

sudo apt install ubuntu-drivers-common -y

Once installed, use the ubuntu-drivers tool to automatically install the recommended Nvidia driver. You can also specify a particular driver, but the automatic option typically makes more sense, as you may not know the specific GPU model or it may change.

sudo ubuntu-drivers install
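
If you want to see what hardware was detected, or pin a specific driver release instead, ubuntu-drivers can do both. The driver version below (nvidia:535) is only an example; pick one from the devices output:

# List detected hardware and the recommended drivers
ubuntu-drivers devices

# Install a specific driver release (example version)
sudo ubuntu-drivers install nvidia:535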

CUDA Toolkit

Compute Unified Device Architecture (CUDA) is, as the Nvidia docs define it, “a parallel computing platform and programming model that allows developers to use a graphics processing unit (GPU) for general-purpose processing.”

To install the CUDA toolkit, run the following:

# Download the CUDA repository keyring for Ubuntu 22.04
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb

# Install the keyring so apt trusts the Nvidia repository
sudo dpkg -i cuda-keyring_1.1-1_all.deb

# Refresh the package index
sudo apt-get update

# Install the CUDA toolkit
sudo apt-get -y install cuda-toolkit-12-6
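
Optionally, you can verify the toolkit by checking the CUDA compiler version. nvcc isn’t always on the PATH by default, so the full install path is used here (adjust if your install location differs):

/usr/local/cuda/bin/nvcc --version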

Nvidia Container Toolkit

The last step is the Nvidia Container Toolkit, which, as the name suggests, gives containers the ability to use a GPU.

First, add the Nvidia Container Toolkit repository and its GPG key.

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

Next, optionally enable the experimental packages in the repository list (skip this step if you only want the stable releases).

sudo sed -i -e '/experimental/ s/^#//g' /etc/apt/sources.list.d/nvidia-container-toolkit.list

Update the package index.

sudo apt-get update

Install the toolkit.

sudo apt-get install -y nvidia-container-toolkit
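
You can confirm the toolkit is installed by printing its version (the output format may vary between releases):

nvidia-ctk --version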

Configure the Nvidia Container Toolkit to work with the Docker engine.

sudo nvidia-ctk runtime configure --runtime=docker
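
This command registers the nvidia runtime in Docker’s daemon configuration. As a rough sketch, /etc/docker/daemon.json typically ends up looking something like the following (exact contents can vary by version):

{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}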

Restart Docker.

sudo systemctl restart docker
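
To confirm Docker picked up the new runtime, you can list the configured runtimes; nvidia should appear alongside runc (output varies by Docker version):

docker info | grep -i runtimes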

Test

Now that the Drivers, CUDA Toolkit, and Container Toolkit are installed, it’s time to test and see if everything is working as expected.

To test, you can run the following command:

nvidia-smi

You’ll see output similar to the following, listing the Nvidia GPU. If nvidia-smi reports that it can’t communicate with the driver, reboot the machine after the driver installation and try again.

Mon Oct 14 23:53:12 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.256.02   Driver Version: 470.256.02   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   46C    P0    61W / 149W |      0MiB / 11441MiB |     93%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

You can now run a Docker container with GPU access, like in the example below. The --gpus all flag exposes every GPU on the host to the container.

docker run --rm -it --gpus all nvcr.io/nvidia/pytorch:22.03-py3
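
From inside the container, you can confirm that the framework actually sees the GPU. The check below assumes the PyTorch image above; the device count depends on your hardware:

# Run inside the container's shell
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"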
