SLURM GPU
Revision as of 14:15, 16 April 2018
GPU Jobs
Research Computing hosts 28 compute nodes with GPU capabilities. Requesting GPU resources requires additional parameters to be added to your job script in order to land on the right equipment and ensure that your job is dispatched correctly.
Basic GPU Job
For basic GPU jobs, where you will be using a single CPU thread and a single GPU, the following will be sufficient:
#SBATCH --job-name=gpu_job
#SBATCH --time=01:00:00
#SBATCH --partition=cuda
#SBATCH --gres=gpu:1
./GpuEnabledExe
You can request up to two GPUs for a single-node job by calling
#SBATCH --gres=gpu:2
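Putting these flags together, a complete two-GPU, single-node job script might look like the following sketch (the partition, script filename, and executable name follow the example above and may differ for your application):

```shell
#!/bin/bash
#SBATCH --job-name=gpu_job
#SBATCH --time=01:00:00
#SBATCH --partition=cuda
#SBATCH --gres=gpu:2

# SLURM sets CUDA_VISIBLE_DEVICES to the GPUs granted to this job;
# printing it is a quick sanity check that two GPUs were allocated
echo "Allocated GPUs: $CUDA_VISIBLE_DEVICES"
./GpuEnabledExe
```

Submit it with sbatch, e.g. `sbatch gpu_job.sh` (the filename is illustrative).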
MPI GPU Jobs
On the CIRCE/Student clusters, MPI parallel jobs with GPU support require extra care when crafting your request. You MUST use the --nodes and --ntasks-per-node parameters in your job request. Each node you select (up to 20) provides an additional GPU. So, let's say you want to use 8 GPUs with MPI, each with a single CPU thread:
#SBATCH --job-name=myMPIGPUjob
#SBATCH --time=01:00:00
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=1
#SBATCH --partition=cuda
#SBATCH --gres=gpu:1
./MpiGpuExe
The largest possible GPU job is shown below (it may take quite some time to dispatch):
#SBATCH --job-name=myMPIGPUjob
#SBATCH --time=01:00:00
#SBATCH --nodes=20
#SBATCH --ntasks-per-node=2
#SBATCH --partition=cuda
#SBATCH --gres=gpu:2
./MpiGpuExe
This will request the use of 40 GPUs for an MPI application.
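The total follows from multiplying the node count by the GPUs requested per node; a quick shell check of the arithmetic:

```shell
# total GPUs = --nodes x GPUs per node (the N in --gres=gpu:N)
nodes=20
gpus_per_node=2
echo $((nodes * gpus_per_node))   # prints 40
```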
Specify a GPU model
Since there are two generations of GPUs available (one per cluster: CIRCE vs. SC), you do not need to specify which model you'd like to use. The available models are listed below:
Model | Nickname | Constraint | Number Available | Cluster Availability
M2070 | Fermi    | gpu_M2070  | 8                | SC only
K20   | Kepler   | gpu_K20    | 42               | CIRCE only
However, if you would like to request a specific GPU model using a SLURM constraint, your submission script would look similar to:
#SBATCH --job-name=gpu_job
#SBATCH --time=01:00:00
#SBATCH --gres=gpu:1
#SBATCH -C "gpu_K20"
./GpuEnabledExe
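After submitting, you can see which nodes carry a given feature tag and watch your job's state with standard SLURM commands (output columns vary by SLURM version and site configuration):

```shell
# list nodes together with their feature tags (constraints such as gpu_K20)
sinfo -o "%N %f"

# show your queued and running jobs: job ID, state, and pending reason
squeue -u $USER -o "%i %T %r"
```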