ANSYS

Description

ANSYS is a general-purpose engineering package and set of support routines for finite-element analysis: statics, mode-frequency and stability analysis, heat transfer, magnetostatics, coupled-field analysis, modeling, and more. ANSYS was developed and is supported by ANSYS, Inc.

Version

  • 2021r1

Authorized Users

  • Department of Mechanical Engineering
    • Access must be requested via Research Computing (rc-help@usf.edu)

Platforms

  • CIRCE cluster
  • RRA cluster
  • SC cluster
  • Desktop Workstation

Modules

ANSYS requires the following module file to run:

  • apps/ansys/2021r1
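
For example, to load this module and confirm it took effect (a minimal sketch; module list output varies by session):

[user@itn0 ~]$ module add apps/ansys/2021r1
[user@itn0 ~]$ module list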

Running ANSYS on CIRCE/SC

The ANSYS user guide is essential to understanding the application and making the most of it. The guide and this page should help you to get started with your simulations. Please refer to the Documentation section for a link to the guide.

  • Note on CIRCE: Make sure to run your jobs from your $WORK directory! (An example follows these notes.)
  • Note: Scripts are provided as examples only. Your SLURM executables, tools, and options may vary from the example below. For help on submitting jobs to the queue, see our SLURM User’s Guide.
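
For example, on CIRCE (my-ansys-project is a hypothetical directory name):

[user@itn0 ~]$ cd $WORK/my-ansys-project
[user@itn0 my-ansys-project]$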

Interactive Mode

Establishing a GUI connection to CIRCE/SC

To use ANSYS, you will need to connect to CIRCE/SC with GUI redirection, either using:

  • CIRCE/SC Desktop Environment
  • SSH with X11 redirection
    • If connecting from OSX or Linux via SSH, please ensure that you use one of the following commands to properly redirect X11:
      • [user@localhost ~]$ ssh -X circe.rc.usf.edu
        or
      • [user@localhost ~]$ ssh -X sc.rc.usf.edu

Interactive GUI Jobs

After connecting to CIRCE using one of the methods above, a few more steps must be taken to run your application using the GUI.

1. Ensure that your current X display is exported properly. When you run the following command (echo $DISPLAY), you should see the login node you're connected to, followed by a colon and then the display number, as below:

[user@itn0 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0


Please note: if you see output like the example below (especially if you're connecting via X2Go):

[user@itn0 ~]$ echo $DISPLAY
:7009.0


...then you will need to run the following export command (exactly as shown: export DISPLAY=$(hostname)$DISPLAY) to properly set your DISPLAY environment variable before moving to step 2:

[user@itn0 ~]$ echo $DISPLAY
:7009.0
[user@itn0 ~]$ export DISPLAY=$(hostname)$DISPLAY
[user@itn0 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0


Also: take note of the full hostname of the login node you are connected to (in this case: itn0.rc.usf.edu), in case it is needed for step 4.

2. You now have the display variables necessary to use a GUI-based application on a compute node within the cluster. Next, run the srun command below (with example resources: a 2-hour time limit and 4 cores) to request an interactive session:

[user@itn0 ~]$ srun --nodes=1 --cpus-per-task=4 --mem-per-cpu=1024 --time=02:00:00 --pty /bin/bash 


3. Once your job is allocated resources and dispatched, you should see that your prompt’s hostname has changed to that of a compute node (in this example, svc-3024-9-1):

[user@itn0 ~]$ srun --nodes=1 --cpus-per-task=4 --mem-per-cpu=1024 --time=02:00:00 --pty /bin/bash 
srun: job 12345678 queued and waiting for resources
srun: job 12345678 has been allocated resources 
[user@svc-3024-9-1 ~]$ 


4. Check one more time to ensure your DISPLAY environment variable is properly set to the hostname and display number from the login node you connected from:

[user@svc-3024-9-1 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0


If it is not, you must set it manually using the full hostname and display number noted in step 1:

[user@svc-3024-9-1 ~]$ export DISPLAY=itn0.rc.usf.edu:7009.0
[user@svc-3024-9-1 ~]$ echo $DISPLAY
itn0.rc.usf.edu:7009.0


5. Load the module for the version of ANSYS you wish to run (for example: ANSYS 2021r1):

[user@svc-3024-9-1 ~]$ module add apps/ansys/2021r1
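
Optionally, verify that the 2021r1 executable is now on your PATH (a quick sanity check, assuming the module sets PATH):

[user@svc-3024-9-1 ~]$ which ansys211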

 

6. The last step is to start ANSYS, Workbench, or Fluent by typing the commands below on the console:

  • Ansys:
    [user@svc-3024-9-1 ~]$ ansys211 -g
  • Workbench:
    [user@svc-3024-9-1 ~]$ unset SLURM_GTIDS && runwb2 
  • Mechanical APDL:
    [user@svc-3024-9-1 ~]$ launcher211 
  • Fluent:
    [user@svc-3024-9-1 ~]$ fluent
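
Putting the steps together, a condensed transcript for a Fluent session might look like the following (same example values as above; the first export is only needed if your DISPLAY lacks a hostname, and the second only if DISPLAY was not carried into the job):

[user@itn0 ~]$ export DISPLAY=$(hostname)$DISPLAY
[user@itn0 ~]$ srun --nodes=1 --cpus-per-task=4 --mem-per-cpu=1024 --time=02:00:00 --pty /bin/bash
[user@svc-3024-9-1 ~]$ export DISPLAY=itn0.rc.usf.edu:7009.0
[user@svc-3024-9-1 ~]$ module add apps/ansys/2021r1
[user@svc-3024-9-1 ~]$ fluent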

How to Submit Ansys Jobs

Provided are batch scripts for running ANSYS as a single processor and multi-processor job. Most existing ANSYS scripts will run in parallel mode with no modification. These scripts can be copied into your work directory (the folder with your input files and database files) so that you can submit batch processes to the queue. For help on submitting jobs to the queue, see our SLURM User’s Guide. Scripts are provided as examples only. Your SLURM executables, tools, and options will vary.

Serial Submit Script

If, for example, you have an Ansys input file named ansys-test.in, you would set up your submit script like one of the examples below:

  • The scripts below (for testing, name them “ansys-serial-test.sh”, “ansys-multithread-test.sh”, or “ansys-parallel-test.sh” respectively) can be copied into your job directory (the folder with your input files) and modified so that you can submit batch processes to the queue.
#!/bin/bash
#
#SBATCH --comment=ansys-serial-test
#SBATCH --ntasks=1
#SBATCH --job-name=ansys-serial-test
#SBATCH --output=output.%j.ansys-serial-test
#SBATCH --time=01:00:00

#### SLURM 1 processor Ansys test to run for 1 hour.

module add apps/ansys/2021r1

# -p aa_r: license feature; -dir: working directory; -b: batch mode
ANSYS_OPTS="-p aa_r -dir $(pwd) -b"
time ansys211 $ANSYS_OPTS < ansys-test.in 

Multi-Threaded Parallel Submit Script

#!/bin/bash
#
#SBATCH --comment=ansys-multithread-test
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --job-name=ansys-multithread-test
#SBATCH --output=output.%j.ansys-multithread-test
#SBATCH --time=01:00:00

#### SLURM 1 node, 4 processor per node Ansys test to run for 1 hour.

module add apps/ansys/2021r1

# -np: run multi-threaded on $SLURM_NTASKS processors
ANSYS_OPTS="-p aa_r -dir $(pwd) -b -np $SLURM_NTASKS"
time ansys211 $ANSYS_OPTS < ansys-test.in

Distributed Parallel

#!/bin/bash
#
#SBATCH --comment=ansys-parallel-test
#SBATCH --ntasks=8
#SBATCH --job-name=ansys-parallel-test
#SBATCH --output=output.%j.ansys-parallel-test
#SBATCH --time=01:00:00


#### SLURM 8 processor Ansys test to run for 1 hour.


module add apps/ansys/2021r1

# Create our hosts file ala slurm
srun hostname -s &> $(pwd)/slurmhosts.$SLURM_JOB_ID.txt

# -dis: run distributed ANSYS across the hosts file generated above
ANSYS_OPTS="-p aa_r -dir $(pwd) -b -dis -machines=$(pwd)/slurmhosts.$SLURM_JOB_ID.txt"
time ansys211 $ANSYS_OPTS < ansys-test.in

 
Next, you can change to your job’s directory, and run the sbatch command to submit the job:

[user@itn0 ~]$ cd my/jobdir
[user@itn0 jobdir]$ sbatch ./ansys-[serial,multithread,parallel]-test.sh
  • You can view the status of your job with the “squeue -u <username>” command
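
For reference, a squeue check might look like the following (the job ID, partition, and node name here are hypothetical):

[user@itn0 ~]$ squeue -u user
     JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
  12345678   general ansys-se     user  R       0:42      1 svc-3024-9-1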

How to Submit Ansys Fluent Jobs

Provided are batch scripts for running ANSYS Fluent as a multi-processor job. Most existing ANSYS scripts will run in parallel mode with no modification. These scripts can be copied into your work directory (the folder with your input files and database files) so that you can submit batch processes to the queue.

Parallel Submit Script

If, for example, you have an Ansys Fluent input file named fluent-test.in, you would set up your submit script like one of the examples below:

  • The script below (for testing, name it “fluent-parallel-test.sh”) can be copied into your job directory (the folder with your input files) and modified so that you can submit batch processes to the queue.
#!/bin/bash
#
#SBATCH --comment=fluent-parallel-test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --job-name=fluent-parallel-test
#SBATCH --output=output.%j.fluent-parallel-test
#SBATCH --time=150:00:00

#### SLURM 2 node, 8 processor per node Ansys Fluent test to run for 150 hours.

module purge
module add apps/ansys/2021r1

# Create our hosts file ala slurm
srun hostname -s |sort -V > $(pwd)/slurmhosts.$SLURM_JOB_ID.txt

# 3ddp: 3-D double precision; -g: no GUI; -peth: Ethernet interconnect;
# -cnf: hosts file; -t: process count; -i: journal/input file
time fluent 3ddp -g -peth -mpi=ibmmpi -cnf=slurmhosts.$SLURM_JOB_ID.txt -t $SLURM_NTASKS -slurm -i fluent-test.in

 

  • NOTE: If you're submitting your job to a partition that uses Infiniband as the interconnect, you can change the bottom line of the above script to the line below for increased performance:
 time fluent 3ddp -g -pib -mpi=ibmmpi -cnf=slurmhosts.$SLURM_JOB_ID.txt -t $SLURM_NTASKS -slurm -i fluent-test.in 

 

Next, you can change to your job’s directory, and run the sbatch command to submit the job:

[user@itn0 ~]$ cd my/jobdir
[user@itn0 ~]$ sbatch ./fluent-parallel-test.sh
  • You can view the status of your job with the “squeue -u <username>” command

Licensing

Since we have a limited number of ANSYS licenses, you’ll need to specify some options when running your job. You can see how these options are set in the above sample scripts.

  • For 1-2 processor ANSYS jobs, the option aa_r is all you need to be concerned with. You can set this to one in either case. This will ensure that your job does not try to run when there are no licenses available and thus fail (potentially after waiting for some time in the queue, which can be aggravating).
  • For 3-8 processor ANSYS jobs, you’ll need to set aa_r to 2. You’ll also need to set aa_r_hpc to the number of processors minus two. That figure, unfortunately, must be calculated and included before submitting your job, as there is no automatic way to accomplish this (a worked sketch follows the table). You can determine the license feature from this table:
Feature     Description
aa_mcad     Academic Research
aa_r        Academic Research
aa_r_hpc    HPC License
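
As a concrete sketch of the arithmetic above for an 8-processor job (the variable names are illustrative shell variables, not ANSYS options):

NP=8                    # total processors requested for the job
AA_R=2                  # aa_r count for a 3-8 processor job
AA_R_HPC=$((NP - 2))    # aa_r_hpc = processors minus two = 6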

The version of Ansys provided by USF Research Computing does not use a local license file; instead, a license server must be specified.

Licensing for Local Installation

Please ensure that the following settings are used when specifying the license server information:

ansys port (default): 2325
flexLM port: 27000
license server: license0.rc.usf.edu
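
If a locally installed copy reads its license settings from the environment rather than the license wizard, the standard ANSYS licensing variables can point at this server (a sketch, assuming the port assignments above):

export ANSYSLI_SERVERS=2325@license0.rc.usf.edu
export ANSYSLMD_LICENSE_FILE=27000@license0.rc.usf.edu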

Documentation

Home Page, User Guides, and Manuals

  • ANSYS Documentation
    • /apps/ansys/v181/doc

More Job Information

See the following for more detailed job submission information:

  • SLURM User’s Guide

Reporting Bugs

Report bugs with ANSYS to the IT Help Desk: rc-help@usf.edu