OpenFOAM#

OpenFOAM is a C++ toolbox for the development of customized numerical solvers and pre-/post-processing utilities for continuum mechanics problems, most prominently computational fluid dynamics (CFD). It contains solvers for a wide range of problems, from simple laminar flows to DNS and LES of reactive turbulent flows. It provides a framework for manipulating fields and solving general partial differential equations on unstructured grids using finite volume methods, which makes it suitable for complex geometries and a wide range of configurations and applications.

There are three main variants of OpenFOAM that are released as free and open-source software: ESI OpenFOAM, OpenFOAM from The OpenFOAM Foundation, and Foam-Extend.

Availability / Target HPC systems#

We provide modules for major versions of ESI OpenFOAM and OpenFOAM from The OpenFOAM Foundation on request. The installed versions may differ between the HPC clusters. You can check the available versions via module avail openfoam. A specific version can be loaded via module load openfoam/2112 (ESI) or module load openfoam-org/8.0 (Foundation), respectively.
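
For example (the version numbers are illustrative; check module avail for what is actually installed on your cluster):

module avail openfoam        # list all installed OpenFOAM modules
module load openfoam/2112    # ESI OpenFOAM v2112
module load openfoam-org/8.0 # OpenFOAM Foundation version 8.0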

Please note that we will only provide modules for fully released versions. If you need some specific custom configuration or version, please consider building it yourself. Installation guides are available from the respective OpenFOAM distributors.
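
As a rough sketch, building ESI OpenFOAM from source follows the pattern below (version and archive name are hypothetical; consult the distributor's installation guide for the details):

# download and unpack the source archive (version is hypothetical)
tar xf OpenFOAM-v2312.tgz
cd OpenFOAM-v2312

# set up the OpenFOAM build environment
source etc/bashrc

# compile, using all available cores
./Allwmake -j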

Production jobs should be run on the parallel HPC systems in batch mode. It is not permitted to run computationally intensive OpenFOAM simulations or memory-intensive serial/parallel post-processing sessions on the login nodes.

Notes#

  • OpenFOAM by default produces a large number of small files. The parallel file system ($FASTTMP) is not designed for such a fine-grained file/folder structure. If possible, use collated I/O (option -fileHandler collated), which produces far fewer files and is therefore less problematic; see the sketch after this list.
  • ParaView for post-processing is available via the modules system (module avail paraview). We recommend using it only on the visualization node.
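
Besides passing the flag to each tool, the collated file handler can also be selected once for a whole job; a minimal sketch, assuming a version with collated I/O support (the FOAM_FILEHANDLER environment variable is used by the ESI versions):

# select collated I/O for all OpenFOAM commands in this job
export FOAM_FILEHANDLER=collated

# or, equivalently, pass the option to each tool individually
decomposePar -fileHandler collated
srun icoFoam -parallel -fileHandler collated > logfile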

Sample job scripts#

All job scripts have to contain the following information:

  • Resource definition for the queuing system (more details here)
  • Load the OpenFOAM environment module
  • Start command for parallel execution of the chosen solver

Note

It is recommended to use srun instead of mpirun. Both use the parameters (--nodes, --ntasks-per-node) that you specified as options for sbatch; you do not have to specify them again in your srun/mpirun call. Note that the total number of MPI tasks (nodes times ntasks-per-node) must be equal to the numberOfSubdomains specified in system/decomposeParDict!
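
For the Meggie example below (4 nodes times 20 tasks per node = 80 MPI tasks), the relevant entries in system/decomposeParDict would look like this (the decomposition method shown is only an example):

numberOfSubdomains 80;      // must equal nodes * ntasks-per-node

method             scotch;  // example; pick a method that suits your case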

Parallel OpenFOAM on Meggie#

The following script runs OpenFOAM on Meggie in parallel on 4 nodes with 20 CPUs each.

#!/bin/bash -l
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=20
#SBATCH --time=24:00:00
#SBATCH --job-name=my-job-name
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load environment module (replace XXXX with the installed version)
module load openfoam/XXXX

# Please insert here your preferred solver executable!
# 4 nodes * 20 tasks per node = 80 MPI tasks; this must equal
# numberOfSubdomains in system/decomposeParDict.
srun icoFoam -parallel -fileHandler collated > logfile
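
Assuming the script was saved as job_meggie.sh (the name is arbitrary), submit it with:

sbatch job_meggie.sh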

Parallel OpenFOAM on Fritz#

The following script runs OpenFOAM on Fritz in parallel on 4 nodes with 72 CPUs each.

#!/bin/bash -l
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72
#SBATCH --time=24:00:00
#SBATCH --job-name=my-job-name
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load environment module (replace XXXX with the installed version)
module load openfoam/XXXX

# Please insert here your preferred solver executable!
# 4 nodes * 72 tasks per node = 288 MPI tasks; this must equal
# numberOfSubdomains in system/decomposeParDict.
srun icoFoam -parallel -fileHandler collated > logfile

Further information#