SLURM GPU


GPU Jobs

Research Computing hosts 28 compute nodes with GPU capabilities. Requesting GPU resources requires adding a few extra parameters to your job script so that your job lands on the right equipment and is dispatched correctly.


Basic GPU Job

For basic GPU jobs, where you will be using a single CPU thread and a single GPU, the following will be sufficient:

#!/bin/bash
#SBATCH --job-name=gpu_job
#SBATCH --time=01:00:00
#SBATCH --partition=snsm_itn19
#SBATCH --qos=openaccess
#SBATCH --gres=gpu:1

./GpuEnabledExe
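
Once the script is saved, submit it with sbatch and monitor it with squeue. The script filename below is only an example; use the name of your own file:

sbatch gpu_job.sh
squeue -u $USER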

You can request up to two GPUs for a single-node job. This can be requested with:

#SBATCH --gres=gpu:2
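
Inside a running job, you can confirm which GPUs were assigned. On clusters where SLURM exports CUDA_VISIBLE_DEVICES for GPU allocations (an assumption about this cluster's configuration), adding the following lines to your job script will show the devices you received:

# GPU indices assigned by SLURM (assumes CUDA_VISIBLE_DEVICES is set by the scheduler)
echo "Assigned GPUs: $CUDA_VISIBLE_DEVICES"

# List the visible GPUs and their current utilization
nvidia-smi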

MPI GPU Jobs

On the CIRCE/Student clusters, MPI parallel jobs with GPU support require extra care when crafting your request. You MUST use the --nodes and --ntasks-per-node parameters in your job request. Each node you select (up to 20) contributes the GPUs requested with --gres, so the total GPU count is the number of nodes multiplied by the GPUs per node. For example, to use 8 GPUs with MPI, each with a single CPU thread:

#!/bin/bash
#SBATCH --job-name=myMPIGPUjob
#SBATCH --time=01:00:00
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=1
#SBATCH --partition=snsm_itn19
#SBATCH --qos=openaccess
#SBATCH --gres=gpu:1

./MpiGpuExe
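
Note that an MPI executable is normally started through an MPI launcher rather than invoked directly, otherwise only a single rank will run. A minimal sketch of the launch step, assuming an MPI module is available (the module name is a placeholder; check "module avail" on your cluster):

# Load an MPI environment (module name is an assumption)
module load openmpi

# srun launches one rank per allocated task, using the node/task layout from SLURM
srun ./MpiGpuExe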

The largest possible GPU job is shown below (it may take quite some time to dispatch):

#!/bin/bash
#SBATCH --job-name=myMPIGPUjob
#SBATCH --time=01:00:00
#SBATCH --nodes=20
#SBATCH --ntasks-per-node=2
#SBATCH --partition=snsm_itn19
#SBATCH --qos=openaccess
#SBATCH --gres=gpu:2

./MpiGpuExe

This will request the use of 40 GPUs (20 nodes with 2 GPUs each) for an MPI application.
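
Because a request this large must wait for many GPUs to free up, it can be useful to check SLURM's estimated start time after submitting. The job ID below is a placeholder for the value printed by sbatch:

# Show the scheduler's estimated start time for a pending job (replace 123456 with your job ID)
squeue --start -j 123456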

Specify a GPU model

The available GPU versions on CIRCE are listed below:

Model                 Constraint      Number Available   Partition Availability
GeForce GTX 1080 Ti   gpu_gtx1080ti   49                 snsm_itn19

To specify a GPU model, add a SLURM "Constraint" to your request. Your submission script would look similar to the following:

#!/bin/bash
#SBATCH --job-name=gpu_job
#SBATCH --time=01:00:00
#SBATCH --partition=snsm_itn19
#SBATCH --qos=openaccess
#SBATCH --gres=gpu:1
#SBATCH -C "gpu_gtx1080ti"
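
To see which constraint tags and GPU resources are actually defined on the partition's nodes, you can query SLURM directly. The format string below prints node names, generic resources (GRES), and feature tags:

# List node names, GPU GRES, and feature/constraint tags for the partition
sinfo -N -p snsm_itn19 -o "%N %G %f"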