
Running code that requires CUDA-enabled GPUs on multiple platforms

The following Python script, mandelbrot_gpu.py, creates a Mandelbrot image using Python’s numba package with the CUDA toolkit on GPUs. For our purposes, we only consider the time taken to create the image, which the script prints (see line 57 of mandelbrot_gpu.py).

This code was taken from harrism’s notebook featured in the NVIDIA developer blog.
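To give a flavour of what the script does, here is a minimal sketch, modelled on harrism’s notebook, of a numba CUDA Mandelbrot kernel together with the kind of timing print referred to above. It is illustrative only, not a verbatim copy of mandelbrot_gpu.py; the image size, launch dimensions and iteration count are assumptions.

# Minimal sketch of a numba CUDA Mandelbrot kernel (not the actual mandelbrot_gpu.py)
from timeit import default_timer as timer

import numpy as np
from numba import cuda


@cuda.jit(device=True)
def mandel(x, y, max_iters):
    # Iterate z = z*z + c and count how long the point takes to escape.
    c = complex(x, y)
    z = 0.0j
    for i in range(max_iters):
        z = z * z + c
        if (z.real * z.real + z.imag * z.imag) >= 4:
            return i
    return max_iters


@cuda.jit
def mandel_kernel(min_x, max_x, min_y, max_y, image, iters):
    # Each thread strides over the image grid and fills in its pixels.
    height, width = image.shape
    pixel_size_x = (max_x - min_x) / width
    pixel_size_y = (max_y - min_y) / height
    start_x, start_y = cuda.grid(2)
    stride_x, stride_y = cuda.gridsize(2)
    for x in range(start_x, width, stride_x):
        real = min_x + x * pixel_size_x
        for y in range(start_y, height, stride_y):
            imag = min_y + y * pixel_size_y
            image[y, x] = mandel(real, imag, iters)


gimage = np.zeros((1024, 1536), dtype=np.uint8)
d_image = cuda.to_device(gimage)

start = timer()
mandel_kernel[(32, 16), (32, 8)](-2.0, 1.0, -1.0, 1.0, d_image, 20)
d_image.copy_to_host(gimage)
dt = timer() - start

print("Mandelbrot created on GPU in %f s" % dt)

The printed elapsed time is the figure reported in the Results section at the end of this guide.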

Capable computing platforms

Running a CUDA container requires a machine with at least one CUDA-capable GPU and a driver compatible with the CUDA toolkit version you are using. Take a look at the requirements table here.

The machine running the CUDA container only requires the NVIDIA driver; the CUDA toolkit does not have to be installed on the host.

On a Linux machine with NVIDIA GPU(s), the nvidia-smi command can be used to reveal the driver version and other useful information.
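From Python, a quick way to confirm that a CUDA-capable GPU and a compatible driver are visible (for example, inside a container) is with numba’s own detection helpers. This is a small hedged check, not part of mandelbrot_gpu.py:

# Sanity check (not part of mandelbrot_gpu.py): confirm numba can find a CUDA GPU
from numba import cuda

print("CUDA available:", cuda.is_available())  # True if a usable GPU and driver are found
cuda.detect()  # prints the CUDA devices numba can see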

Creating a Docker image

The Dockerfile below specifies an image that can be used to create a container capable of running mandelbrot_gpu.py. Note that the base image is an official CUDA image from NVIDIA and that the cudatoolkit version installed with conda (9.0) matches the CUDA version specified by the image.

A Docker image has been built and pushed to Docker Hub with this Dockerfile:

  1. docker build -t edwardchalstrey/mandelbrot_gpu .
  2. docker push edwardchalstrey/mandelbrot_gpu

It can then be run with Docker, but this requires nvidia-docker to also be installed:

nvidia-docker run edwardchalstrey/mandelbrot_gpu

If your platform doesn’t have nvidia-docker, see the installation instructions.

%%writefile Dockerfile
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04

RUN apt-get update \
  && apt-get install -y wget vim bzip2 \
  && rm -rf /var/lib/apt/lists/*

RUN apt-get update
RUN apt-get -y install curl

# Install Miniconda
RUN wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O Miniconda.sh && \
    /bin/bash Miniconda.sh -b -p /opt/conda && \
    rm Miniconda.sh

ENV PATH /opt/conda/bin:$PATH

RUN conda install numpy scipy matplotlib numba cudatoolkit=9.0 pyculib -y

COPY mandelbrot_gpu.py /mandelbrot_gpu.py

CMD python3 mandelbrot_gpu.py
Overwriting Dockerfile

Creating a Singularity image

A Singularity image that can be used to create a container capable of running mandelbrot_gpu.py can be made from the Docker image we already pushed to Docker Hub. This requires a simple definition file such as the one below.

These are the Singularity commands needed to build a Singularity image from the Docker Hub image and the Singularity.mandelbrot_gpu definition file (see below), then run a container:

  1. singularity build mandelbrot_gpu.sif Singularity.mandelbrot_gpu
  2. singularity run --nv mandelbrot_gpu.sif

Note that the Singularity container needs to be run in the same directory as a file called mandelbrot_gpu.py for it to run this way. You may prefer not to include anything in the %files section of the definition file and instead specify the file to run in the run command.

In this case I have built the image with Singularity Hub by linking it to my GitHub repo, which contains the definition file, named so that the image will be rebuilt on each commit. Instructions on how to do this can be found here.

Since my Singularity definition file below is called Singularity.mandelbrot_gpu and is kept in a GitHub repo called edwardchalstrey1/turingbench, the image becomes available at shub://singularity-hub.org/edwardchalstrey1/turingbench:mandelbrot_gpu.

A container based on the image can then be run on any platform with Singularity with the following command (using the --nv option to leverage the NVIDIA GPU):

singularity run --nv shub://singularity-hub.org/edwardchalstrey1/turingbench:mandelbrot_gpu

%%writefile Singularity.mandelbrot_gpu
BootStrap: docker
From: edwardchalstrey/mandelbrot_gpu

%post
    apt-get -y update

%files      
    mandelbrot_gpu.py /mandelbrot_gpu.py
Overwriting Singularity.mandelbrot_gpu

Definition file notes: According to Singularity docs, the BootStrap keyword needs to be the first entry in the header section for the build to be compatible with all versions of Singularity.

Running code that requires CUDA-enabled GPUs on a Microsoft Azure Virtual Machine

Running a container in Microsoft Azure based on the Docker or Singularity images we have created requires a virtual machine (VM) with a CUDA-capable GPU and a driver compatible with the CUDA toolkit version you are using, which in our case is 9.0 (see above).

First, select and create an appropriate VM in Azure. This can be done from the Azure Portal Home with the following steps:

  1. Click “Virtual machines”
  2. Click “Add”
  3. Choose the “Size” when setting up the VM; for example, select a Standard NC6, which has an NVIDIA Tesla K80 GPU.

After checking the driver requirements page, install an appropriate NVIDIA driver; in our case this must be a version >= 384.81, because we are using CUDA 9.0.

The correct steps will depend on the OS of your VM; if we use an Ubuntu VM, it’s as simple as:

  1. Search for drivers: apt search nvidia-driver

  2. Install an available driver (ensuring first that it is a recent enough version): sudo apt install nvidia-driver-390

  3. Reboot the VM

  4. Check the above worked with the nvidia-smi command

Installing the container software on the VM

With the NVIDIA driver installed, containers based on the images we created to run mandelbrot_gpu.py can be run on the VM, so long as we have a working installation of either:

  1. Docker and NVIDIA-Docker OR

  2. Singularity

Running our containers

Running the Mandelbrot Python code on the VM with each container technology can then be done with the following commands (as explained above):

Docker: nvidia-docker run edwardchalstrey/mandelbrot_gpu

Singularity: singularity run --nv shub://singularity-hub.org/edwardchalstrey1/turingbench:mandelbrot_gpu

Running code that requires CUDA-enabled GPUs on an HPC system

To run a Singularity container based on this image on an HPC system available to Turing researchers (e.g. JADE or CSD3), a submission script is required.

The script below has been set up using the instructions for JADE (see here), which uses the SLURM scheduler, an open-source workload management and job scheduling system.

In JADE, the submission script must be executable: chmod +x jade_sub.sh

Then it can be run with a command such as this:

srun --gres=gpu:1 -p small --pty jade_sub.sh (which runs the Singularity container on the small partition with a single GPU)

%%writefile jade_sub.sh
#!/bin/bash

# set the number of nodes
#SBATCH --nodes=1

# set max wallclock time
#SBATCH --time=00:30:00

# set name of job
#SBATCH --job-name=echalstrey_singularity_cuda_test1

# set number of GPUs
#SBATCH --gres=gpu:4

# mail alert at start, end and abort of execution
#SBATCH --mail-type=ALL

# send mail to this address
#SBATCH --mail-user=echalstrey@turing.ac.uk

# run the application
module load singularity
singularity run --nv shub://singularity-hub.org/edwardchalstrey1/turingbench:mandelbrot_gpu
Overwriting jade_sub.sh

Results

By following the steps in this guide I was able to run mandelbrot_gpu.py on these platforms:

Platform             Container          Mandelbrot creation time (s)
Azure (NVIDIA K80)   Docker             5.369710
Azure (NVIDIA K80)   Singularity 3.2    5.734466
JADE (NVIDIA P100)   Singularity 2.4    0.934607