SC Data Management

Managing your data on SC (Student Cluster)

This page provides guidelines for managing your data effectively on SC. Several storage locations are available, each with its own management rules, so it's good to know which location suits your data best depending on your requirements.

Guidelines for Running Jobs

1. Compress results that you would like to store permanently! There is no reason not to do this: it helps keep you under your quota, allowing you to store more results.
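For example, individual result files can be compressed in place with gzip, or an entire results directory bundled into a single compressed archive with tar (the file and directory names below are placeholders):

[user@login0 ~]$ gzip results.dat
[user@login0 ~]$ tar -czf results.tar.gz results/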

Running Jobs Example

Typically, jobs should be run from a staging directory under /home. Create a directory under /home to hold your job's input files and the resulting output. The example below walks through the process:

1. Create the directory:

[user@login0 ~]$ mkdir $HOME/myjob

 

2. Put your input files inside the directory:

[user@login0 ~]$ cp INPUT1 INPUT2 INPUT3 $HOME/myjob

 

3. Next, let's change to the job directory:

[user@login0 ~]$ cd $HOME/myjob

 

4. Next, let's create the submit script and run the job. We'll be running myapp against the input files in this directory, so create a submit script named myjob.sh with the following contents:

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --time=10:00:00
#SBATCH --ntasks=16
#SBATCH --output=output.%j.myjob

# Load the application's environment module
module add apps/myapp/1.0

# Run myapp on the input files in this directory
# (the glob expands to INPUT1 INPUT2 INPUT3)
myapp INPUT*
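The module add line loads version 1.0 of the application used in this example (myapp is a placeholder). If you are not sure which versions of an application are installed, module avail will list the matching modules:

[user@login0 myjob]$ module avail apps/myapp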

 

5. Then, we can submit the job to the SLURM scheduler:

[user@login0 myjob]$ sbatch ./myjob.sh
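sbatch prints the ID that SLURM assigns to the job, and squeue shows the job's state while it is queued or running. Output along these lines is typical (the job ID here is illustrative, chosen to match the output file shown in the next step):

Submitted batch job 45698
[user@login0 myjob]$ squeue -u $USER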

 

6. After the job completes, the output files should appear alongside the inputs:

[user@login0 myjob]$ ls
INPUT1 INPUT2 INPUT3 OUTPUT1 OUTPUT2 OUTPUT3 output.45698.myjob
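The output.45698.myjob file is the scheduler log named by the --output directive above (SLURM replaces %j with the job ID). It is worth a quick look for errors before moving on; for example:

[user@login0 myjob]$ less output.45698.myjob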

 

7. Finally, let's do some post-processing and review our data while it's still in the job directory:

[user@login0 myjob]$
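As a sketch of the kind of post-processing this step refers to, and following guideline 1 above, the output files from step 6 could be bundled into a single compressed archive once you are done reviewing them:

[user@login0 myjob]$ tar -czf myjob_results.tar.gz OUTPUT1 OUTPUT2 OUTPUT3 output.45698.myjob
[user@login0 myjob]$ ls -lh myjob_results.tar.gz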