Ansys Mechanical#

Ansys Mechanical is a computational structural mechanics package for solving structural engineering problems. It is available in two different software environments: Ansys Workbench (the newer, GUI-oriented environment) and Ansys Mechanical APDL (sometimes called Ansys Classic; the older, script-based MAPDL environment).

Licensing#

Please note that the clusters do not come with any license. To use Ansys products on the HPC clusters, you must have access to suitable licenses; these can be purchased directly from RRZE. To make efficient use of the HPC resources, Ansys HPC licenses are necessary.

Availability / Target HPC systems#

Production jobs should be run on parallel HPC systems in batch mode. For simulations with high memory requirements, a single-node job on TinyFat or Woody can be used.

Ansys Mechanical can also be used in interactive GUI mode via Workbench for serial pre- and/or post-processing on the login nodes. This should only be used for quick changes to the simulation setup. It is not permitted to run computationally or memory-intensive Ansys Mechanical simulations on login nodes.

Different versions of all Ansys products are available via the modules system and can be listed with module avail ansys. A specific version can be loaded, e.g. with module load ansys/2023R2.
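
For example, on a login node (the version shown is only an example; pick one from the list of available modules):

module avail ansys
module load ansys/2023R2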

Notes#

  • The name of the Ansys executable depends on the loaded version, e.g. ansys232 for ansys/2023R2.
  • Two different parallelization methods are available (see the examples below):
    • Shared-memory parallelization: uses multiple cores on a single node; specify via ansys232 -smp -np N (default: N=2)
    • Distributed-memory parallelization: uses multiple nodes; specify via ansys232 -dis -b -machines machine1:np:machine2:np:...
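
For illustration, the two modes could be invoked as follows; version 2023R2, the core counts, and the host names are only examples and have to be adapted to your job:

# shared-memory run with 8 cores on a single node
ansys232 -smp -np 8 < input.dat > output.out
# distributed-memory run with 4 processes each on two hosts
ansys232 -dis -b -machines node01:4:node02:4 < input.dat > output.out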

Sample job scripts#

All job scripts have to contain the following information:

  • Resource definition for the queuing system (more details here)
  • Load the Ansys environment module
  • Generate a variable with the host names of the current simulation run and the number of processes per host
  • Execute Mechanical with the appropriate command line parameters (distributed-memory run in batch mode)
  • Specify the input and output files

Shared-memory parallel job on Woody#

#!/bin/bash -l
#SBATCH --job-name=ansys_mechanical
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --time=24:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV
# load environment module 
module load ansys/XXXX

# execute mechanical with command line parameters 
# Please insert the correct version and the names of your own input and output files here!
ansysXXX -smp -np $SLURM_CPUS_PER_TASK < input.dat > output.out
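
The script can then be submitted to the batch system with sbatch; the file name below is only an example:

sbatch mechanical_smp.job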

Distributed-memory parallel job on Meggie#

#!/bin/bash -l
#SBATCH --job-name=Ansys_mechanical
#SBATCH --nodes=2
#SBATCH --time=24:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV
# load environment module 
module load ansys/XXXX

# number of cores to use per node
PPN=20
# generate machine list, uses $PPN processes per node
NODELIST=$(for node in $( scontrol show hostnames $SLURM_JOB_NODELIST | uniq ); do echo -n "${node}:$PPN:"; done | sed 's/:$//')
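# e.g. for two nodes this expands to something like "node0101:20:node0102:20" (hypothetical host names)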

# execute mechanical with command line parameters
# Please insert the correct version and the names of your own input and output files here!
ansysXXX -dis -b -machines $NODELIST < input.dat > output.out

Distributed-memory parallel job on Fritz#

#!/bin/bash -l
#SBATCH --job-name=Ansys_mechanical
#SBATCH --nodes=2
#SBATCH --time=24:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV
# load environment module 
module load ansys/XXXX

# number of cores to use per node
PPN=72
# generate machine list, uses $PPN processes per node
NODELIST=$(for node in $( scontrol show hostnames $SLURM_JOB_NODELIST | uniq ); do echo -n "${node}:$PPN:"; done | sed 's/:$//')

# execute mechanical with command line parameters
# Please insert the correct version and the names of your own input and output files here!
ansysXXX -dis -b -machines $NODELIST < input.dat > output.out