ANSYS
Description
ANSYS is an engineering package and set of support routines for general-purpose finite-element analysis: statics, mode-frequency and stability analysis, heat transfer, magnetostatics, coupled-field analysis, modeling, and more. ANSYS was developed and is supported by ANSYS, Inc.
Version
- 18.1
Authorized Users
- Department of Mechanical Engineering
- Access must be requested via rc-help@usf.edu
Platforms
- CIRCE cluster
- RRA cluster
- SC cluster
- Desktop Workstation
Modules
ANSYS requires the following module file to run:
apps/ansys/18.1
- See Modules for more information.
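For example, to load the module file in your shell session (the same command appears in the batch scripts later on this page):

[user@itn0 ~]$ module add apps/ansys/18.1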
Running ANSYS on CIRCE/SC
The ANSYS user guide is essential to understanding the application and making the most of it. The guide and this page should help you to get started with your simulations. Please refer to the Documentation section for a link to the guide.
- Note on CIRCE: Make sure to run your jobs from your $WORK directory! (See the example after these notes.)
- Note: Scripts are provided as examples only. Your SLURM executables, tools, and options may vary from the example below. For help on submitting jobs to the queue, see our SLURM User’s Guide.
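For example, on CIRCE you can switch to your $WORK directory before starting or submitting jobs:

[user@itn0 ~]$ cd $WORK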
Interactive Mode
Establishing a GUI connection to CIRCE/SC
To use ANSYS, you will need to connect to CIRCE/SC with GUI redirection, either using:
- CIRCE/SC Desktop Environment
- SSH with X11 redirection
- If connecting from OSX or Linux via SSH, please ensure that you use one of the following commands to properly redirect X11:
[user@localhost ~]$ ssh -X circe.rc.usf.edu
or
[user@localhost ~]$ ssh -X sc.rc.usf.edu
Interactive GUI Jobs
After connecting to CIRCE using one of the methods above, a few more steps must be taken to run your application using the GUI.
1. Ensure that your current X display is exported properly. When you run the following command (echo $DISPLAY), you should see the login node you're connected to, followed by a colon and then the display number, as below:
[user@itn0 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0
Please note: If you see output that looks like the example below (especially if you're connecting via X2Go):
[user@itn0 ~]$ echo $DISPLAY
:7009.0
...then you will need to run the following export command (exactly as shown: export DISPLAY=$(hostname)$DISPLAY) to properly set your DISPLAY environment variable before moving to step 2:
[user@itn0 ~]$ echo $DISPLAY
:7009.0
[user@itn0 ~]$ export DISPLAY=$(hostname)$DISPLAY
[user@itn0 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0
Also: take note of the full hostname of the login node you are connected to (in this case, itn0.rc.usf.edu), in case it is needed for step 4.
2. You now have the display settings necessary to use a GUI-based application on a compute node within the cluster. Next, run the srun command below (requesting, as an example, a 2-hour time limit and 4 cores) to request an interactive session:
[user@itn0 ~]$ srun --ntasks=4 --time=02:00:00 --pty /bin/bash
3. Once your job is allocated resources and dispatched, you should see that your prompt's hostname has changed to that of a compute node (in this example, svc-3024-9-1):
[user@itn0 ~]$ srun --ntasks=4 --time=02:00:00 --pty /bin/bash
srun: job 12345678 queued and waiting for resources
srun: job 12345678 has been allocated resources
[user@svc-3024-9-1 ~]$
4. Check one more time to ensure that your DISPLAY environment variable is properly set to the hostname and display number of the login node you connected from:
[user@svc-3024-9-1 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0
If it is not, you must manually set it using the full hostname and display number noted in step 1:
[user@svc-3024-9-1 ~]$ export DISPLAY=itn0.rc.usf.edu:7009.0
[user@svc-3024-9-1 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0
5. The last step is to load the ANSYS module as described above, then start ANSYS, Workbench, Mechanical APDL, or Fluent by typing the corresponding command below on the console:
- Ansys:
[user@svc-3024-9-1 ~]$ ansys181 -g
- Workbench:
[user@svc-3024-9-1 ~]$ unset SLURM_GTIDS && runwb2
- Mechanical APDL:
[user@svc-3024-9-1 ~]$ launcher181
- Fluent:
[user@svc-3024-9-1 ~]$ fluent
How to Submit Ansys Jobs
Provided below are batch scripts for running ANSYS as a single-processor or multi-processor job. Most existing ANSYS scripts will run in parallel mode with no modification. These scripts can be copied into your work directory (the folder with your input files and database files) so that you can submit batch processes to the queue. For help on submitting jobs to the queue, see our SLURM User's Guide. Scripts are provided as examples only; your SLURM executables, tools, and options will vary.
Serial Submit Script
If, for example, you have an Ansys input file named ansys-test.in, you would set up your submit script like one of the examples below:
- The scripts below (for testing, name them “ansys-serial-test.sh”, “ansys-multithread-test.sh”, or “ansys-parallel-test.sh” respectively) can be copied into your job directory (the folder with your input files) and modified so that you can submit batch processes to the queue.
#!/bin/bash
#
#SBATCH --comment=ansys-serial-test
#SBATCH --ntasks=1
#SBATCH --job-name=ansys-serial-test
#SBATCH --output=output.%j.ansys-serial-test
#SBATCH --time=01:00:00

#### SLURM 1 processor Ansys test to run for 1 hour.

module add apps/ansys/18.1

ANSYS_OPTS="-p aa_r -dir $(pwd) -b"

time ansys181 $ANSYS_OPTS < ansys-test.in
Multi-Threaded Parallel Submit Script
#!/bin/bash
#
#SBATCH --comment=ansys-multithread-test
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --job-name=ansys-multithread-test
#SBATCH --output=output.%j.ansys-multithread-test
#SBATCH --time=01:00:00

#### SLURM 1 node, 4 processors per node Ansys test to run for 1 hour.

module add apps/ansys/18.1

ANSYS_OPTS="-p aa_r -dir $(pwd) -b -np $SLURM_NTASKS"

time ansys181 $ANSYS_OPTS < ansys-test.in
Distributed Parallel Submit Script
#!/bin/bash
#
#SBATCH --comment=ansys-parallel-test
#SBATCH --ntasks=8
#SBATCH --job-name=ansys-parallel-test
#SBATCH --output=output.%j.ansys-parallel-test
#SBATCH --time=01:00:00

#### SLURM 8 processor Ansys test to run for 1 hour.

module add apps/ansys/18.1

# Create our hosts file from the slurm allocation
srun hostname -s &> $(pwd)/slurmhosts.$SLURM_JOB_ID.txt

ANSYS_OPTS="-p aa_r -dir $(pwd) -b -dis -machines=$(pwd)/slurmhosts.$SLURM_JOB_ID.txt"

time ansys181 $ANSYS_OPTS < ansys-test.in
Next, you can change to your job’s directory, and run the sbatch command to submit the job:
[user@itn0 ~]$ cd my/jobdir
[user@itn0 jobdir]$ sbatch ./ansys-[serial,multithread,parallel]-test.sh
- You can view the status of your job with the “squeue -u <username>” command
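For example, assuming your username is user:

[user@itn0 jobdir]$ squeue -u user

The output lists each of your pending and running jobs along with its job ID, partition, state, and assigned node(s).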
How to Submit Ansys Fluent Jobs
Provided below is a batch script for running ANSYS Fluent as a multi-processor job. Most existing ANSYS Fluent scripts will run in parallel mode with no modification. This script can be copied into your work directory (the folder with your input files and database files) so that you can submit batch processes to the queue.
Parallel Submit Script
If, for example, you have an Ansys Fluent input file named fluent-test.in, you would set up your submit script like the example below:
- The script below (for testing, name it “fluent-parallel-test.sh”) can be copied into your job directory (the folder with your input files) and modified so that you can submit batch processes to the queue.
#!/bin/bash
#
#SBATCH --comment=fluent-parallel-test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --job-name=fluent-parallel-test
#SBATCH --output=output.%j.fluent-parallel-test
#SBATCH --time=150:00:00

#### SLURM 2 node, 8 processors per node Ansys Fluent test to run for 150 hours.

module purge
module add apps/ansys/18.1

# Create our hosts file from the slurm allocation
srun hostname -s | sort -V > $(pwd)/slurmhosts.$SLURM_JOB_ID.txt

time fluent 3ddp -g -slurm -mpi=openmpi -pib -i fluent-test.in -cnf=$(pwd)/slurmhosts.$SLURM_JOB_ID.txt
Next, you can change to your job’s directory, and run the sbatch command to submit the job:
[user@itn0 ~]$ cd my/jobdir
[user@itn0 jobdir]$ sbatch ./fluent-parallel-test.sh
- You can view the status of your job with the “squeue -u <username>” command
Licensing
Since we have a limited number of ANSYS licenses, you’ll need to specify some options when running your job. You can see how these options are set in the above sample scripts.
- For 1-2 processor ANSYS jobs, the `aa_r` option is all you need to be concerned with. You can set it to one in either case. This ensures that your job does not try to run, and subsequently fail, when no licenses are available (potentially after waiting in the queue for some time, which can be aggravating).
- For 3-8 processor ANSYS jobs, you'll need to set `aa_r` to 2. You'll also need to set `aa_r_hpc` to the number of processors minus two (under SLURM, `$SLURM_NTASKS - 2`). That figure, unfortunately, must be calculated and included before submitting your job, as there is no automatic way to accomplish this; a sketch of the calculation follows the table below. You can determine the appropriate feature from this table:
| Feature  | Description       |
|----------|-------------------|
| aa_mcad  | Academic Research |
| aa_r     | Academic Research |
| aa_r_hpc | HPC License       |
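As a minimal sketch of that calculation (assuming a distributed job like the example above, where $SLURM_NTASKS holds the total processor count), the aa_r_hpc figure can be computed inside your submit script with shell arithmetic:

# Hypothetical fragment for a 3-8 processor submit script:
# two aa_r licenses plus (processors minus two) aa_r_hpc licenses.
NPROCS=$SLURM_NTASKS
AA_R_HPC=$((NPROCS - 2))
echo "This job will consume 2 aa_r and $AA_R_HPC aa_r_hpc licenses"

Note that this only computes the count; consult the ANSYS documentation for how the feature is requested when launching your particular job.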
The version of Ansys provided by USF Research Computing does not use a local license file; instead, a license server must be specified.
Licensing for Local Installation
Please ensure that the following settings are used when specifying the license server information:
- ANSYS licensing port (default): 2325
- FlexLM port: 27000
- License server: license0.rc.usf.edu
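On a Linux machine, one common way to supply these values is through the standard ANSYS licensing environment variables (a sketch assuming a typical FlexLM setup; your installation's license settings utility can set the same values):

export ANSYSLMD_LICENSE_FILE=27000@license0.rc.usf.edu
export ANSYSLI_SERVERS=2325@license0.rc.usf.edu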
Documentation
Home Page, User Guides, and Manuals
- ANSYS Documentation
- /apps/ansys/v181/doc
More Job Information
See the following for more detailed job submission information:
- SLURM User's Guide
Reporting Bugs
Report bugs with ANSYS to the IT Help Desk: rc-help@usf.edu