Ansys Fluent#

Fluent is a general-purpose Computational Fluid Dynamics (CFD) code developed by Ansys. It is used for a wide range of engineering applications, as it provides a variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multi-phase flow modeling, radiation, combustion and chemical reactions, and heat and mass transfer.

Licensing#

Please note that the clusters do not come with any license. If you want to use Ansys products on the HPC clusters, you must have access to suitable licenses. These can be purchased directly from RRZE. Ansys HPC licenses are required to use the HPC resources efficiently.

Availability / Target HPC systems#

Different versions of all Ansys products are available via the modules system and can be listed with module avail Ansys. A specific version can be loaded, e.g. with module load Ansys/2023R2.
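
For example:

module avail Ansys
module load Ansys/2023R2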

Production jobs should be run on the parallel HPC systems in batch mode.

Ansys Fluent can also be used in interactive GUI mode. This should only be used for quick changes to the simulation setup. Most changes to the simulation setup can also be made in batch mode via the Fluent-specific TUI (text user interface). It is not permitted to run computationally intensive Ansys Fluent simulations or serial/parallel post-processing sessions with large memory consumption on the login nodes.

For pre- or post-processing of larger simulation cases via interactive GUI mode, have a look at our visualization node.

Usage#

The (graphical) Fluent launcher is started by the command fluent. Here, you have to specify the properties of the simulation run: 3D or 2D, single or double precision, meshing or solver mode, and serial or parallel mode. When using Fluent in a batch job, all these properties have to be specified on the command line, e.g.

fluent 3ddp -g -t 20 -cnf="$NODELIST"

This launches a 3D, double-precision simulation. For a 2D, single-precision simulation, 2dsp has to be specified. With the -g option, no GUI or graphics are launched. If your simulation should produce graphical output, e.g. a plot of the convergence history in PNG or JPEG format, use -gu -driver null instead.

The number of processes is defined by the -t option. This number corresponds to the number of physical CPU cores that should be used. Using SMT threads is not recommended. The hostnames of the compute nodes and the number of processes to be launched on each node have to be specified in a host list via the -cnf option. Please refer to the sample script below for more information.
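
The host list passed via -cnf is a plain comma-separated list of hostname:process-count pairs; the sample scripts below assemble it from the Slurm environment. With hypothetical hostnames, it could look like this for four nodes with 20 processes each:

node0101:20,node0102:20,node0103:20,node0104:20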

For more information about the available parameters, use fluent -help.

Journal files#

For Ansys Fluent, submitting the .cas file is not sufficient to run a simulation on a parallel cluster. For a proper simulation run using a batch job, a simple journal file (.jou) is required to specify the solution steps.

The steps are specified with TUI commands that are specific to Ansys Fluent. Details on these commands can be found in the Ansys Fluent documentation, Part II: Solution Mode; Chapter 2: Text User Interface (TUI).

Every configuration that is done in the GUI also has a corresponding TUI command. You can, therefore, change the configuration of the simulation during the simulation run, for example by adjusting the solution time step after a specified number of iterations. A simple example journal file for a steady-state simulation is given below. Please note that running a transient simulation would require different commands for time integration. The same applies when re-starting the simulation from a previous run or initialization.

The journal file has to be specified at the time of the application launch with -i <journal-file>.
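
For the transient case mentioned above, a minimal journal sketch might look like the following (TUI command names can differ between Fluent versions, so verify them against the TUI documentation; the values are illustrative):

;set a fixed time-step size of 0.001 s (illustrative value)
/solve/set/time-step 0.001
;advance 100 time steps with at most 20 solver iterations per step
/solve/dual-time-iterate 100 20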

Notes#

  • Ansys Fluent does not consist of separate pre-processing, solver, and post-processing applications; everything is included in a single application.
  • The built-in Fluent post-processing can also be run in parallel mode; it normally needs far fewer processes than a simulation run. However, do not use it on the login nodes!
  • We recommend writing automatic backup files (every 6 to 12 hours) for longer runs to be able to restart the simulation in case of a job or machine failure. This can be specified in Ansys Fluent under Solution → Calculation Activities → Autosave Every Iterations; a TUI equivalent is sketched after this list.
  • Fluent cannot stop a simulation based on elapsed time. Therefore, you have to estimate the number of iterations that will fit into the requested runtime. Plan enough buffer time for writing the final output; depending on your simulation, this can take quite a long time!
  • Please note that for some versions (<2023R2), the default (Intel) MPI startup mechanism does not work on Meggie and Fritz. This causes the solver to hang without producing any output. Use the option -mpi=openmpi to prevent this.
  • GPU support: since porting of functionality to the GPU is still ongoing, always use the newest Ansys version available! In initial benchmarks, a 1:1 ratio of GPUs to CPU processes was found to be ideal.
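
For the autosave note above, a minimal journal-file sketch (the exact TUI path may differ between Fluent versions; the frequency is illustrative):

;write a restart backup every 5000 iterations
/file/auto-save/data-frequency 5000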

Sample job scripts#

All job scripts have to contain the following information:

  • Resource definition for the queuing system (more details here)
  • Load Ansys environment module
  • Generate a host list for the current simulation run to tell Fluent on which nodes it should run (see example below)
  • Execute fluent with appropriate command line parameters (available options via fluent -help)
  • Specify the Ansys Fluent journal file (*.jou) as input; this is used to control the execution of the simulation, since .cas files do not contain any solver control information

Parallel job on Meggie#

The following script runs Ansys Fluent on Meggie in parallel on 4 nodes with 20 CPU cores each.

#!/bin/bash -l
#SBATCH --job-name=myfluent
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=20
#SBATCH --time=24:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load environment module 
module load ansys/XXXX 

# generate node list 
NODELIST=$(for node in $( scontrol show hostnames ${SLURM_JOB_NODELIST} | uniq ); do echo -n "${node}:${SLURM_NTASKS_PER_NODE},"; done | sed 's/,$//')
# calculate the number of cores actually used 
CORES=$(( ${SLURM_JOB_NUM_NODES} * ${SLURM_NTASKS_PER_NODE} )) 

# execute fluent with command line parameters (in this case: 3D, double precision) 
# Please insert here your own .jou and .out file with their correct names! 
fluent 3ddp -g -t ${CORES} -mpi=openmpi -cnf="$NODELIST" -i fluent_batch.jou > outfile.out
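
Assuming the script is saved as fluent_meggie.sh (a hypothetical file name), it is submitted as usual with:

sbatch fluent_meggie.sh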

Parallel job on Fritz#

The following script runs Ansys Fluent on Fritz in parallel on 4 nodes with 72 CPU cores each.

#!/bin/bash -l
#SBATCH --job-name=myfluent
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72
#SBATCH --time=24:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load environment module 
module load ansys/XXXX 

# generate node list 
NODELIST=$(for node in $( scontrol show hostnames ${SLURM_JOB_NODELIST} | uniq ); do echo -n "${node}:${SLURM_NTASKS_PER_NODE},"; done | sed 's/,$//')
# calculate the number of cores actually used 
CORES=$(( ${SLURM_JOB_NUM_NODES} * ${SLURM_NTASKS_PER_NODE} )) 

# execute fluent with command line parameters (in this case: 3D, double precision) 
# Please insert here your own .jou and .out file with their correct names! 
fluent 3ddp -g -t ${CORES} -mpi=openmpi -cnf="$NODELIST" -i fluent_batch.jou > outfile.out

GPU job on Alex#

The following script runs Ansys Fluent on Alex in parallel using 2 A100 GPUs.

#!/bin/bash -l
#SBATCH --job-name=myfluent
#SBATCH --gres=gpu:a100:2
#SBATCH --time=24:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load environment module 
module load ansys/2023R2

# execute fluent with command line parameters (in this case: 3D, double precision) 
# Please insert here your own .jou and .out file with their correct names! 
fluent 3ddp -g -t ${SLURM_GPUS_ON_NODE} -gpu -i fluent_batch.jou > outfile.out
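
To check that the requested GPUs are actually visible inside the job, nvidia-smi can be called in the script before launching Fluent:

# optional: list the allocated GPUs for debugging
nvidia-smi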

Example journal file for steady-state simulation#

;feel free to modify all subsequent lines to adapt them to your application case
;read case file
/file/read-case "./example-case.cas"

;initialization and start of steady state simulation

/solve/initialize/hyb-initialization
;the Scheme call below prints the current time; the pair brackets the iteration phase for timing
(format-time #f #f)
;run 100 iterations of the steady-state solver
/solve/iterate 100
(format-time #f #f)

;write final output and exit
/file/write-case-data "./example-case-final.cas"

exit y