SAS

Description

From the SAS Home Page: SAS software provides business analytics solutions through high-performance computing and the management, processing, and analysis of “big data.”

Version

  • 9.4

Authorized Users

  • CIRCE account holders
  • RRA account holders
  • SC account holders

Platforms

  • CIRCE cluster
  • RRA cluster
  • SC cluster

Modules

  • apps/sas/9.4
  • apps/sas/9.4.a
    • Includes revisions up to TS1M2
  • apps/sas/9.4.b
    • Includes revisions up to TS1M4

Running SAS on CIRCE

The SAS user guide is essential to understanding the application and making the most of it. Together with this page, it should help you get started with your simulations. Please refer to the Documentation section for a link to the guide.

  • Note on CIRCE: Make sure to run your jobs from your $WORK directory (see the example below)!
  • Note: Scripts are provided as examples only. Your SLURM executables, tools, and options may vary from the example below. For help on submitting jobs to the queue, see our SLURM User’s Guide.
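For example, you might create a job directory under $WORK and run everything from there (the directory names below are illustrative):

[user@itn0 ~]$ mkdir -p $WORK/sas/test-job
[user@itn0 ~]$ cd $WORK/sas/test-job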

Interactive Mode

Establishing a GUI connection to CIRCE/SC

To use SAS, you will need to connect to CIRCE/SC with GUI redirection, using either:

  • CIRCE/SC Desktop Environment
  • SSH with X11 redirection
    • If connecting from OSX or Linux via SSH, please ensure that you use one of the following commands to properly redirect X11:
      • [user@localhost ~]$ ssh -X circe.rc.usf.edu
        or
      • [user@localhost ~]$ ssh -X sc.rc.usf.edu
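    • Note: If GUI applications fail to display when connecting with -X, trusted X11 forwarding (ssh -Y circe.rc.usf.edu or ssh -Y sc.rc.usf.edu) is a commonly used alternative.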

Interactive GUI Jobs

After connecting to CIRCE using one of the methods above, a few more steps must be taken to run your application using the GUI.

1. Ensure that your current X display is exported properly. When you run the command echo $DISPLAY, you should see the login node you're connected to, followed by a colon and then the display and port number, as below:

[user@itn0 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0


Please note: If you instead see output like the following (especially if you're connecting via X2Go):

[user@itn0 ~]$ echo $DISPLAY
:7009.0


...then you will need to run the following export command (exactly as shown: export DISPLAY=$(hostname)$DISPLAY) to properly set your DISPLAY environment variable before moving to step 2:

[user@itn0 ~]$ echo $DISPLAY
:7009.0
[user@itn0 ~]$ export DISPLAY=$(hostname)$DISPLAY
[user@itn0 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0


Also: take note of the full hostname of the login node you are connected to (in this case, itn0.rc.usf.edu), in case it is needed for step 4.

2. You now have the display variables necessary to use a GUI-based application on a compute node within the cluster. Next, run the srun command below (with example resources of a 2-hour time limit and 4 cores) to request an interactive session:

[user@itn0 ~]$ srun --ntasks=4 --time=02:00:00 --pty /bin/bash 


3. Once your job is allocated resources and dispatched, you should see that your prompt's hostname has changed to that of a compute node (in this example, svc-3024-9-1):

[user@itn0 ~]$ srun --ntasks=4 --time=02:00:00 --pty /bin/bash 
srun: job 12345678 queued and waiting for resources
srun: job 12345678 has been allocated resources 
[user@svc-3024-9-1 ~]$ 


4. Check one more time to ensure your DISPLAY environment variable is still set to the hostname and port of the login node you connected from:

[user@svc-3024-9-1 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0


If it is not, you must set it manually using the full hostname and display port number from step 1:

[user@svc-3024-9-1 ~]$ export DISPLAY=itn0.rc.usf.edu:7009.0
[user@svc-3024-9-1 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0


5. The last step is to load the SAS module as described above and then start SAS by typing the commands below on the console:

[user@svc-3024-9-1 ~]$ module add apps/sas/9.4.b 
[user@svc-3024-9-1 ~]$ sas
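If your DISPLAY variable is set correctly, the SAS windowing environment should open on your desktop. If you only need a text-based session without the GUI, SAS can also be started in line mode:

[user@svc-3024-9-1 ~]$ sas -nodms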

Batch Job Submission

To run batch jobs on the CIRCE cluster, you will need to submit them to the scheduling environment; any job that would take more than 20 minutes to run on a standard PC should be submitted this way.

If, for example, you have a SAS script file named test.sas with all your tasks defined in it, you would set up a submit script to use the SAS kernel like this:

#!/bin/bash
#
#SBATCH --job-name=sas-test
#SBATCH --time=48:00:00
#SBATCH --mem=8192
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --output=output.%j.sas-test

#### SLURM 4 processor SAS test to run for 48 hours.

# To prevent a lot of IO, store temporary data on local /tmp storage,
# but store log/output files to $WORK

export TMP_DIR=/tmp/${SLURM_JOB_USER}_${SLURM_JOB_ID}
mkdir -p $TMP_DIR

export PRINT_DIR=$WORK/sas/logs/${SLURM_JOB_ID}/
mkdir -p $PRINT_DIR


# Load the SAS module:
module load apps/sas/9.4.b

# Start SAS
sas test.sas -noterminal \
  -filelocks none \
  -work $TMP_DIR \
  -utilloc $TMP_DIR \
  -print $PRINT_DIR \
  -log $PRINT_DIR/$SLURM_JOB_NAME.$SLURM_JOB_ID.log
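
# Clean up SAS temporary files from the compute node's local disk
rm -rf $TMP_DIR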

SAS script files like test.sas can be created either with a standard text editor or by exporting an existing worksheet from the graphical SAS interface in .sas format; a minimal example appears after the notes below.

  • NOTE: When requesting your resources, you will need 1 more processor than the number of tasks in your SAS job. So, if you are using 3 tasks, you will need to request --ntasks-per-node=4.
  • NOTE: Please ensure that -work and -utilloc are located on /tmp of the compute node your job lands on, and NOT $WORK or $HOME!
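For reference, a minimal test.sas might look like the following sketch (the data step and procedure are purely illustrative):

/* Illustrative example: build a small dataset and print it */
data work.squares;
    do i = 1 to 10;
        sq = i * i;
        output;
    end;
run;

proc print data=work.squares;
run;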

 
Next, change to your job's directory and run the sbatch command to submit the job:

[user@login0 ~]$ cd my/jobdir
[user@login0 jobdir]$ sbatch ./sas-test.sh

  • You can view the status of your job with the "squeue -u <username>" command, as shown below.
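The exact output will vary; the job ID, partition, and node name below are illustrative:

[user@login0 jobdir]$ squeue -u user
     JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
  12345678     circe sas-test     user  R       0:42      1 svc-3024-9-1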

Documentation

Home Page, User Guides, and Manuals

Benchmarks, Known Tests, Examples, Tutorials, and Other Resources

More Job Information

See the SLURM User's Guide for more detailed job submission information.

Reporting Bugs

Report bugs with SAS to the IT Help Desk: rc-help@usf.edu