ANSYS
Description
ANSYS is an engineering package and support routine for general-purpose, finite-element analysis: statics, mode-frequency analysis, stability analysis, heat transfer, magnetostatics, coupled-field analysis, modeling, etc. ANSYS was developed and is supported by ANSYS, Inc.
Version
- 18.1
Authorized Users
- Department of Mechanical Engineering
- Access must be requested via rc-help@usf.edu
Platforms
- CIRCE cluster
- RRA cluster
- SC cluster
- Desktop Workstation
Modules
ANSYS requires the following module file to run:
apps/ansys/18.1
- See Modules for more information.
Running ANSYS on CIRCE/SC
The ANSYS user guide is essential to understanding the application and making the most of it. The guide and this page should help you to get started with your simulations. Please refer to the Documentation section for a link to the guide.
- Note on CIRCE: Make sure to run your jobs from your $WORK directory!
- Note: Scripts are provided as examples only. Your SLURM executables, tools, and options may vary from the example below. For help on submitting jobs to the queue, see our SLURM User’s Guide.
Interactive Execution
Running ANSYS, Workbench, or Fluent with the GUI requires a few additional steps and a different command sequence.
1. You must take note of the CIRCE login node that you’re presently logged into. You should see something like:
[user@login0 ~]$
2. You must take note of the current X display that you’re using so that it can be properly exported. Run the command below; the output should look similar to this:
[user@login0 ~]$ echo $DISPLAY
login0.rc.usf.edu:7009.0
3. You now have the display information necessary to use the ANSYS GUI on a compute node within the cluster. Next, request an interactive session with the command below (adjust the resources as needed):
srun --ntasks=4 --time=02:00:00 --pty /bin/bash
4. If all goes well, you should see that your prompt’s hostname has changed to that of a compute node:
[user@wh-520-4-1 ~]$
5. You’ll now need to export the display variable using the values noted in steps 1 and 2 above. Run the following command:
export DISPLAY=login0:7009.0
6. The last step is to load the ANSYS module as described above and then start ANSYS, Workbench, Mechanical APDL, or Fluent by typing one of the commands below on the console:
- Ansys:
[user@wh-520-4-1 ~]$ ansys170 -g
- Workbench:
[user@wh-520-4-1 ~]$ unset SLURM_GTIDS && runwb2
- Mechanical APDL:
[user@wh-520-4-1 ~]$ launcher170
- Fluent:
[user@wh-520-4-1 ~]$ fluent
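The DISPLAY value from step 2 must be rewritten into the short-hostname form used in step 5. A minimal sketch of that transformation, using the example value from above (yours will differ):

```shell
# Derive the step-5 export command from the DISPLAY value observed in step 2.
display="login0.rc.usf.edu:7009.0"   # example value from step 2
host="${display%%.*}"                # short hostname, e.g. login0
disp="${display#*:}"                 # display number, e.g. 7009.0
echo "export DISPLAY=${host}:${disp}"
# → export DISPLAY=login0:7009.0
```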
How to Submit Ansys Jobs
Provided below are batch scripts for running ANSYS as single-processor and multi-processor jobs. Most existing ANSYS scripts will run in parallel mode with no modification. These scripts can be copied into your work directory (the folder with your input files and database files) so that you can submit batch processes to the queue. For help on submitting jobs to the queue, see our SLURM User’s Guide. Scripts are provided as examples only. Your SLURM executables, tools, and options will vary.
Serial Submit Script
If, for example, you have an Ansys input file named ansys-test.in, you would set up your submit script like one of the examples below:
- The scripts below (for testing, name them “ansys-serial-test.sh”, “ansys-multithread-test.sh”, or “ansys-parallel-test.sh” respectively) can be copied into your job directory (the folder with your input files) and modified so that you can submit batch processes to the queue.
#!/bin/bash
#
#SBATCH --comment=ansys-serial-test
#SBATCH --ntasks=1
#SBATCH --job-name=ansys-serial-test
#SBATCH --output=output.%j.ansys-serial-test
#SBATCH --time=01:00:00

#### SLURM 1 processor Ansys test to run for 1 hour.

module add apps/ansys/18.1

ANSYS_OPTS="-p aa_r -dir `pwd` -b"

time ansys170 $ANSYS_OPTS < ansys-test.in
Multi-Threaded Parallel Submit Script
#!/bin/bash
#
#SBATCH --comment=ansys-multithread-test
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --job-name=ansys-multithread-test
#SBATCH --output=output.%j.ansys-multithread-test
#SBATCH --time=01:00:00

#### SLURM 1 node, 4 processors per node Ansys test to run for 1 hour.

module add apps/ansys/18.1

ANSYS_OPTS="-p aa_r -dir `pwd` -b -np $SLURM_NTASKS"

time ansys170 $ANSYS_OPTS < ansys-test.in
Distributed Parallel
#!/bin/bash
#
#SBATCH --comment=ansys-parallel-test
#SBATCH --ntasks=8
#SBATCH --job-name=ansys-parallel-test
#SBATCH --output=output.%j.ansys-parallel-test
#SBATCH --time=01:00:00

#### SLURM 8 processor Ansys test to run for 1 hour.

module add apps/ansys/18.1

# Create our hosts file via SLURM
srun hostname -s &> `pwd`/slurmhosts.$SLURM_JOB_ID.txt

ANSYS_OPTS="-p aa_r -dir `pwd` -b -dis -machines=`pwd`/slurmhosts.$SLURM_JOB_ID.txt"

time ansys170 $ANSYS_OPTS < ansys-test.in
Next, you can change to your job’s directory, and run the sbatch command to submit the job:
[user@login0 ~]$ cd my/jobdir
[user@login0 jobdir]$ sbatch ./ansys-[serial,multithread,parallel]-test.sh
- You can view the status of your job with the “squeue -u <username>” command
How to Submit Ansys Fluent Jobs
Provided are batch scripts for running ANSYS Fluent as a multi-processor job. Most existing ANSYS scripts will run in parallel mode with no modification. These scripts can be copied into your work directory (the folder with your input files and database files) so that you can submit batch processes to the queue.
Parallel Submit Script
If, for example, you have an Ansys Fluent input file named fluent-test.in, you would set up your submit script like one of the examples below:
- The script below (for testing, name it “fluent-parallel-test.sh”) can be copied into your job directory (the folder with your input files) and modified so that you can submit batch processes to the queue.
#!/bin/bash
#
#SBATCH --comment=fluent-parallel-test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --job-name=fluent-parallel-test
#SBATCH --output=output.%j.fluent-parallel-test
#SBATCH --time=150:00:00

#### SLURM 2 node, 8 processors per node Ansys Fluent test to run for 150 hours.

module purge
module add apps/ansys/18.1

# Create our hosts file via SLURM
srun hostname -s | sort -V > `pwd`/slurmhosts.$SLURM_JOB_ID.txt

time fluent 3ddp -g -slurm -mpi=openmpi -pib -i fluent-test.in -cnf=`pwd`/slurmhosts.$SLURM_JOB_ID.txt
Next, you can change to your job’s directory, and run the sbatch command to submit the job:
[user@login0 ~]$ cd my/jobdir
[user@login0 jobdir]$ sbatch ./fluent-parallel-test.sh
- You can view the status of your job with the “squeue -u <username>” command
Licensing
Since we have a limited number of ANSYS licenses, you’ll need to specify some options when running your job. You can see how these options are set in the above sample scripts.
- For 1-2 processor ANSYS jobs, the `aa_r` option is all you need to be concerned with. You can set this to 1 in either case. This will ensure that your job does not try to run, and thus fail, when there are no licenses available (potentially after waiting for some time in the queue… this can be aggravating).
- For 3-8 processor ANSYS jobs, you’ll need to set `aa_r` to 2. You’ll also need to set `aa_r_hpc` to `($NSLOTS-2)`, i.e., the number of processors minus two. That figure, unfortunately, must be calculated and included before submitting your job, as there is no automatic way to accomplish this. You can determine the license feature from this table:
Feature | Description
---------|------------
aa_mcad | Academic Research
aa_r | Academic Research
aa_r_hpc | HPC License
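The manual calculation described above can be sketched in shell arithmetic. The task count is shown as a literal here; in a submit script you would use `$SLURM_NTASKS`:

```shell
# Sketch of the license calculation above:
# 1-2 tasks -> aa_r=1; 3-8 tasks -> aa_r=2 and aa_r_hpc = tasks - 2.
ntasks=8                        # example value; use $SLURM_NTASKS in a script
if [ "$ntasks" -le 2 ]; then
    aa_r=1
    aa_r_hpc=0
else
    aa_r=2
    aa_r_hpc=$(( ntasks - 2 ))  # number of processors minus two
fi
echo "aa_r=$aa_r aa_r_hpc=$aa_r_hpc"
# → aa_r=2 aa_r_hpc=6
```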
The version of ANSYS provided by USF Research Computing does not use a local license file; instead, a license server must be specified.
Licensing for Local Installation
Please ensure that the following settings are used when specifying the license server information:
- ANSYS port (default): 2325
- FlexLM port: 27000
- License server: license0.rc.usf.edu
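For a local installation, these settings are typically supplied through the standard ANSYS/FlexNet environment variables. The variable names and port mapping below are an assumption based on common ANSYS licensing setups, not something this page specifies; check your installation's licensing guide:

```shell
# Assumed mapping of the settings above to environment variables:
# the FlexLM port pairs with ANSYSLMD_LICENSE_FILE, and the ANSYS
# licensing interconnect port pairs with ANSYSLI_SERVERS.
export ANSYSLMD_LICENSE_FILE=27000@license0.rc.usf.edu
export ANSYSLI_SERVERS=2325@license0.rc.usf.edu
```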
Documentation
Home Page, User Guides, and Manuals
- ANSYS Documentation
- /apps/ansys/v171/doc
More Job Information
See the following for more detailed job submission information:
Reporting Bugs
Report bugs with ANSYS to the IT Help Desk: rc-help@usf.edu