Using Features and Constraints with SLURM
Features/Constraints allow users to make very specific requests to the scheduler such as what interconnect your application prefers, how much available memory you require, whether you mind running on low priority machines, etc. To get the most out of the queuing environment, it is very useful to have a reasonable understanding of how the features work and which features you should request for your type of application.
Important Features
The following table lists common resource-request options which are necessary for getting your job to run on the right hardware.

Option | Value | Description | Default |
--- | --- | --- | --- |
--time | Wall clock time limit | Requests a certain amount of run time for your job. Jobs are killed when they reach this limit. | 1 hour |
--mem | Memory per node | Sets the amount of memory (in megabytes) allocated per node. (For serial jobs) | 512 MB (0.5 GB) |
--mem-per-cpu | Memory per core | Sets the amount of memory (in megabytes) allocated per CPU core. (For multi-core jobs) | 512 MB (0.5 GB) |
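As a sketch, a submit script requesting two hours of run time and 4 GB of memory per node might look like this (the program name is a placeholder for your own executable):

```shell
#!/bin/bash

#SBATCH --time=02:00:00   # two-hour wall clock limit
#SBATCH --mem=4096        # 4096 MB (4 GB) of memory per node

# my_program stands in for your own executable
./my_program
```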
Basic Commands & Job Submission
This document will provide information on submitting jobs, removing jobs, checking job status, and interacting with the scheduler.
Key Commands
Command | Description |
--- | --- |
sbatch <script_name> | Submit jobs. Refer to the command man sbatch to display detailed explanations of each available option. These options can be added to the command line or to your submit script. |
squeue | Display the status of your jobs. The man page for squeue will provide detailed explanations of each available option. Useful options include: -u [user_name] to filter by a single user; -j [job_id] for detailed job information. |
sjobets | Display the status and estimated start time (ets) of all jobs in the queue. Takes the same options as squeue. |
scancel <job_id> | Delete/stop jobs. Again, the man page will provide further information. |
sinfo | Display a status summary of the entire grid, including total, used, and available cores per queue. |
Complete command-line options can be found by appending --help to any of the above commands or by utilizing the manual pages, e.g. run man sbatch
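Putting the commands above together, a typical session might look like the following (the job ID and script name are illustrative, not real values):

```shell
sbatch run.sh       # submit a job script; prints the new job ID
squeue -u $USER     # show only your own jobs
squeue -j 12345     # detailed status for job 12345
scancel 12345       # delete/stop job 12345
sinfo               # summary of partitions and node states
```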
Submitting Jobs
You will need to create a submit script to run your job on the cluster. The submit script specifies what type of hardware you wish to run on, what priority your job has, what to do with console output, etc. We will look at SLURM submit scripts for serial and parallel jobs so that you have a good understanding of how they work and what they do.
Submission for Serial Jobs
SLURM uses pre-processed shell scripts to submit jobs, and provides predefined variables to help integrate your process with the scheduler and the job dispatcher. You will likely need to pass options to SLURM to retrieve statistical information, set job specifications, redirect your I/O, change your working directory, and possibly be notified of job failure or completion. You can pass these options as arguments to sbatch, or you can embed them in your submit file.
A simple job script for SLURM would look like this:

```shell
date
```

It is a simple script that calls *date* on whatever machine SLURM decides to run your job on. Let's have a look at another submit file that does the same thing:

```shell
#!/bin/bash

#SBATCH --job-name=get_date
#SBATCH --time=00:30:00

date
```
An overview of the options (following the character sequence "#SBATCH") is as follows:
- --job-name=get_date: Set the job name for the queue (in this example, the job is named "get_date"). You can set this to a job name of your choice.
- time=00:30:00: Tell the scheduler that this process should only run for 30 minutes.
- *Note*: If a time is not specified, a default runtime of 1 hour (01:00:00) will be applied to the job.
These options should be sufficient for the most basic serial jobs.
With this file, we have specified the name of the job and its maximum run time. Let us call this script date.sh and submit the job to the queue:
```shell
[user@login0 ~]$ sbatch date.sh
Submitted batch job 40638
```
Let's now check the status of our job:
```shell
[user@login0 ~]$ squeue -u user
  JOBID PARTITION     NAME  USER ST  TIME  NODES NODELIST(REASON)
  40638     circe get_date  user PD  0:00      1 (None)
```

You can see job 40638 (as an example) listed in the output. It is in the state PD ("pending"), which means it is waiting to be dispatched until the next scheduler iteration arrives OR until sufficient resources become available.
Submission for Parallel Jobs
Because many of the applications available for science and engineering are highly resource intensive, many of them have been "parallelized" in order to run on multiple pieces of hardware and divide the work load. Most applications have standardized on the MPI or MPI-2 specification. Since many of you will want to run your applications in parallel to take advantage of performance gains, you'll need to know how to create a job script for submitting such applications. SLURM integrates with MPI libraries so that your job's tasks can be distributed across the cluster.
Rather than explain everything all at once, here is a submit script for a parallel job:
```shell
#!/bin/bash

#SBATCH --job-name=parallel-job
#SBATCH --time=08:00:00
#SBATCH --output=output.%j
#SBATCH --ntasks=4

mpirun parallel-executable
```
Most of the submit directives (remember, the lines starting with "#SBATCH"?) should already be familiar to you. But you should also notice that we've added a few new directives:
- ntasks=4: This specifies the number of tasks (MPI processes) to allocate for your job; by default, each task is allocated one CPU core.
- output=output.%j: This specifies the file to which the job writes all output. The %j is replaced with the SLURM job ID number at run time.
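If you need control over how the four tasks are placed, node-level directives can be used instead of a bare task count. A hedged variant of the script above (the executable name is still a placeholder) might be:

```shell
#!/bin/bash

#SBATCH --job-name=parallel-job
#SBATCH --time=08:00:00
#SBATCH --output=output.%j
#SBATCH --nodes=2            # spread the job across two nodes
#SBATCH --ntasks-per-node=2  # two MPI tasks per node, four total

# parallel-executable stands in for your MPI program
mpirun parallel-executable
```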
Following the same steps described above for serial job submission, we can submit our parallel job script to the cluster with sbatch and check its status with squeue.
Interactive jobs
Interactive jobs can be run via the srun command, which accepts many of the same options that are available in submit scripts. For more information, please see the SLURMInteractive documentation.
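As an illustration, a request for an interactive shell with one task and a one-hour time limit might look like this (the option values are examples only):

```shell
# Request one task for one hour and attach a pseudo-terminal
# so the allocated node gives you an interactive shell
srun --ntasks=1 --time=01:00:00 --pty /bin/bash
```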
Available Environment Variables
The following environment variables are defined by SLURM at run time. They may be referenced in your submit scripts to add functionality to your code:
- $SLURM_JOB_ID: The job number assigned by the scheduler to your job
- $SLURM_JOB_USER: The username of the person currently running the job
- $SLURM_JOB_NAME: The job name specified by the "--job-name" option
- $SLURM_JOB_NODELIST: List of nodes allocated to the job
For more information, please reference the man page for sbatch.
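A minimal sketch of a submit script that uses these variables to label its output (the fallbacks after `:-` only take effect when the script is run outside SLURM):

```shell
#!/bin/bash

#SBATCH --job-name=env-demo
#SBATCH --time=00:10:00

# Label this job's output with scheduler-provided variables;
# outside SLURM the fallback values after :- are used instead.
echo "Job ${SLURM_JOB_ID:-unknown} (${SLURM_JOB_NAME:-unnamed}) is running on: ${SLURM_JOB_NODELIST:-localhost}"
```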
Feature Table
This is a table of user-addressable SLURM features. You can request machines with a variety of characteristics such as machines with a certain amount of memory or a particular architecture type.
Feature | Description |
--- | --- |
cpu_amd | Select only nodes with AMD CPUs |
cpu_xeon | Select only nodes with Intel Xeon CPUs |
gpu_T10 | Select only nodes with T10 GPU (used with --gres. Refer to the GPU Guide for more info) |
gpu_M2070 | Select only nodes with M2070 GPU (used with --gres. Refer to the GPU Guide for more info) |
gpu_K20 | Select only nodes with K20 GPU (used with --gres. Refer to the GPU Guide for more info) |
ib_sdr | Select only nodes with Infiniband Single Data Rate |
ib_ddr | Select only nodes with Infiniband Double Data Rate |
ib_qdr | Select only nodes with Infiniband Quad Data Rate |
ib_ofa | Select only nodes with Mellanox Infiniband cards |
ib_psm | Select only nodes with QLOGIC Infiniband cards |
opteron_2220 | Select only nodes with AMD Opteron 2220 CPU chips |
opteron_2384 | Select only nodes with AMD Opteron 2384 CPU chips |
opteron_2427 | Select only nodes with AMD Opteron 2427 CPU chips |
sse4 | Select only nodes with sse4 (and above) CPU instruction set |
sse41 | Select only nodes with sse41 (and above) CPU instruction set |
sse4a | Select only nodes with sse4a (and above) CPU instruction set |
sse42 | Select only nodes with sse42 (and above) CPU instruction set |
tpa | Select only nodes in Tampa |
wh | Select only nodes in Winter Haven |
xeon_X5355 | Select only nodes with Intel Xeon X5355 CPU chips |
xeon_X5460 | Select only nodes with Intel Xeon X5460 CPU chips |
xeon_E5649 | Select only nodes with Intel Xeon E5649 CPU chips |
xeon_E7330 | Select only nodes with Intel Xeon E7330 CPU chips |
xeon_E52630 | Select only nodes with Intel Xeon E5-2630 CPU chips |
xeon_E52650 | Select only nodes with Intel Xeon E5-2650 CPU chips |
xeon_E52670 | Select only nodes with Intel Xeon E5-2670 CPU chips |
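Features from this table are requested with the --constraint option (GPU features are used together with --gres, as noted above). As a sketch, a job restricted to QDR Infiniband nodes with Intel Xeon CPUs might include (the program name is a placeholder):

```shell
#!/bin/bash

#SBATCH --job-name=constrained-job
#SBATCH --time=01:00:00
#SBATCH --constraint="ib_qdr&cpu_xeon"  # require BOTH features on every node

# my_program stands in for your own executable
./my_program
```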