GROMACS#
GROMACS (GROningen MAchine for Chemical Simulations) is a molecular dynamics package primarily designed for simulations of proteins, lipids and nucleic acids.
We provide GROMACS versions without and with PLUMED. GROMACS (and PLUMED) are built using Spack.
Availability / Target HPC systems#
- Fritz: GROMACS versions without and with PLUMED, both were built using Spack.
- TinyGPU: best value if only one GPU is used per run -- use the latest versions of GROMACS as they allow more and more offloading to the GPU
- parallel computers: experiment to find a proper setting for `-npme`
- Use `module avail gromacs` to see a list of available GROMACS modules.
New versions of GROMACS are installed by RRZE upon request.
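For example, to load one of the listed modules (the exact names depend on the cluster and change with new installations; the version below is taken from the Meggie sample script further down):
module avail gromacs
module load gromacs/2021.5-gcc11.2.0-mkl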
Alex#
GROMACS usually delivers the most economical performance if only one GPGPU is used together with the thread-MPI implementation of GROMACS (no `mpirun` needed; the number of processes is specified directly with the `gmx mdrun` command-line argument `-ntmpi`). Therefore, a "real" MPI version of GROMACS is only provided together with PLUMED. In that case the binary is named `gmx_mpi` and must be started with `srun` or `mpirun` like any other MPI program.
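As a rough illustration of the two launch modes (thread counts and file names are placeholders only; adapt them to your job):
# thread-MPI build on a single GPU: no srun/mpirun, ranks are set with -ntmpi
gmx mdrun -ntmpi 1 -ntomp 16 -nb gpu -s my.tpr
# PLUMED-enabled MPI build: started like any other MPI program
srun gmx_mpi mdrun -plumed plumed.dat -s my.tpr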
Notes#
GROMACS can produce large amounts of data in small increments:
- Do not start `gmx mdrun` with the option `-v`. The verbose output will only create extra-large Slurm stdout files, and your jobs will suffer if the NFS servers are under high load.
- Try to reduce the frequency and amount of output data as much as possible.
- It may also be useful to stage the generated output in the node's RAM disk (i.e. in the directory `/dev/shm/`) first and only copy it back to e.g. `$WORK` once, just before the job ends; see the sketch after this list.
- The high output frequency of small amounts of data is NOT suitable for `$FASTTMP`.
- For serial and single-node simulations you have to use `gmx mdrun`; for multi-node simulations, the binary to use with `srun` is `gmx_mpi mdrun`. See the sample scripts below!
- The offloading of "updates and constraints" to the GPU (`-update gpu`) does not work with the OPC water model and the Nose-Hoover thermostat.
Sample job scripts#
parallel job on Meggie#
#!/bin/bash -l
#
# allocate 4 nodes with 20 cores per node = 4*20 = 80 MPI tasks
#SBATCH --nodes=4
#SBATCH --tasks-per-node=20
#
# allocate nodes for 6 hours
#SBATCH --time=06:00:00
# job name
#SBATCH --job-name=my-gmx
# do not export environment variables
#SBATCH --export=NONE
#
# first non-empty non-comment line ends SBATCH options
# do not export environment variables
unset SLURM_EXPORT_ENV
# jobs always start in submit directory
module load gromacs/2021.5-gcc11.2.0-mkl
### 1) The argument of -maxh should match the requested walltime!
### 2) Performance can often be optimized if -npme # is specified with a proper number of PME tasks;
###    experiment or use gmx tune_pme to find the optimal value.
###    Using the SMT threads can sometimes be beneficial, but this requires testing.
### 3) The number of OpenMP threads also has to be tested beforehand and is limited by the number of PME tasks.
srun gmx_mpi mdrun [-npme #] [-ntomp #] -maxh 6 -dlb yes -s my.tpr
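To determine -npme systematically, GROMACS ships the gmx tune_pme helper, which benchmarks several PME-rank counts for a given .tpr file. A minimal sketch, assuming the tool is available in the loaded module (check `gmx tune_pme -h`; file names are placeholders, and the launcher set via MPIRUN must match your system):
export MPIRUN="srun"                      # tune_pme starts mdrun through this launcher
gmx tune_pme -np 80 -s my.tpr -mdrun "gmx_mpi mdrun" -r 2
# the timing summary (perf.out by default) lists the fastest -npme setting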
parallel job on Fritz#
#!/bin/bash -l
#SBATCH --job-name=my-gmx
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=72
#SBATCH --partition=multinode
#SBATCH --cpus-per-task=1
#SBATCH --time=10:00:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
module load gromacs/2021.5-gcc11.2.0-impi-mkl
srun gmx_mpi mdrun [-npme #] -maxh 9.5 [-ntomp #] -dlb yes -s my.tpr
single GPU job on TinyGPU#
#!/bin/bash -l
# allocate nodes for 6 hours
#SBATCH --time=06:00:00
# job name
#SBATCH --job-name=Testjob
#SBATCH --gres=gpu:1
# do not export environment variables
#SBATCH --export=NONE
# do not export environment variables
unset SLURM_EXPORT_ENV
module load gromacs/2021.1-gcc-mkl-cuda11.2
### 1) the argument of -maxh should match the requested walltime!
### 2) optional but recommended arguments are: -pme gpu
### -bonded gpu
### -update gpu
gmx mdrun -maxh 6 -s my.tpr -nb gpu -pin on -pinstride 1 -ntmpi 1 -ntomp 8
### try automatic restart (adapt the conditions to fit your needs)
if [ -f confout.gro ]; then
  echo "*** confout.gro found; no re-submit required"
  exit
fi
if [ $SECONDS -lt 1800 ]; then
  echo "*** no automatic restart as runtime of the present job was too short"
  exit
fi
sbatch.tinygpu job_script
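Note that the follow-up job started by this resubmission will begin the run from scratch unless mdrun is told to read the checkpoint file. A minimal sketch, adding the `-cpi` option (also used in the Alex script below) to the command above:
gmx mdrun -maxh 6 -s my.tpr -cpi state.cpt -nb gpu -pin on -pinstride 1 -ntmpi 1 -ntomp 8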
single GPU job on Alex#
#!/bin/bash -l
#SBATCH --job-name=my-gmx
#SBATCH --time=24:00:00
#SBATCH --gres=gpu:a40:1
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
cd $SLURM_SUBMIT_DIR
module load gromacs/2021.5-gcc11.2.0-mkl-cuda
TPR="name-of-run-input-file"
rsync -a --exclude="*.o$SLURM_JOB_ID" --exclude="*.tpr" ${SLURM_SUBMIT_DIR}/* ${TMPDIR}/
cd $TMPDIR
# the offloading of "updates and constraints" to GPU (= -update gpu) does
# not work with the OPC water model and the Nose-Hoover thermostat
gmx mdrun -s ${SLURM_SUBMIT_DIR}/${TPR}.tpr -ntmpi 1 -ntomp 16 -pme gpu -bonded gpu [-update gpu] -pin on -pinstride 1 -deffnm $TPR -cpi $TPR
rsync -a ${TMPDIR}/* ${SLURM_SUBMIT_DIR}/
cd $SLURM_SUBMIT_DIR
### try automatic restart (adapt the conditions to fit your needs)
if [ -f ${TPR}.gro ]; then
  echo "*** ${TPR}.gro found; no re-submit required"
  exit
fi
if [ $SECONDS -lt 1800 ]; then
  echo "*** no automatic restart as runtime of the present job was too short"
  exit
fi
sbatch job_script
multiple GPUs job on TinyGPU#
The performance benefit of using multiple GPUs is often very low! You get much better throughput if you run multiple independent single-GPU jobs as shown above.
Even when using multiple GPUs, do not use the MPI-parallel version (`mdrun_mpi`) but the thread-MPI version (`gmx mdrun`) of GROMACS. `-ntmpi #` usually should match the number of GPUs available.
#!/bin/bash -l
# allocate two GPUs
#SBATCH --gres=gpu:gtx3080:2
# allocate nodes for 6 hours
#SBATCH --time=06:00:00
# job name
#SBATCH --job-name=Testjob
# do not export environment variables
#SBATCH --export=NONE
# do not export environment variables
unset SLURM_EXPORT_ENV
module load gromacs/2021.1-gcc-mkl-cuda11.2
### 1) The argument of -maxh should match the requested walltime!
### 2) Typical optional arguments are: -pme gpu
### -bonded gpu
# these variables are needed for halo exchange and
# optimized communication between the GPUs
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_GPU_FORCE_UPDATE_DEFAULT_GPU=true
gmx mdrun -ntmpi 4 -ntomp 2 -maxh 6 -s my.tpr -npme 1 -pin on -pinstride 1
### try automatic restart (adapt the conditions to fit your needs)
if [ -f confout.gro ]; then
  echo "*** confout.gro found; no re-submit required"
  exit
fi
if [ $SECONDS -lt 1800 ]; then
  echo "*** no automatic restart as runtime of the present job was too short"
  exit
fi
sbatch job_script
multiple walker metadynamics on multiple GPUs on TinyGPU#
This is an example script for running a metadynamics simulation with 32 walkers, using GROMACS patched with PLUMED, on eight of our RTX3080 GPUs. Transfer to other GPU hardware is possible but may require adjusting the settings (e.g. whether to use the MPS server, flags for mpirun, and GROMACS program flags).
Please note: The run input file (`*.tpr`) of each walker needs to be in its own directory and must have the same name inside each directory.
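A minimal sketch of how such a directory layout could be prepared (the directory names must match what is later passed to `-multidir`; the source path walker-tprs/ is only a placeholder):
TPR=name
for i in $(seq 0 31); do
    mkdir -p dir$i
    cp walker-tprs/walker$i.tpr dir$i/$TPR.tpr   # same file name inside every walker directory
done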
#!/bin/bash -l
# allocate eight GPUs
#SBATCH --gres=gpu:gtx3080:8
# allocate nodes for 6 hours
#SBATCH --time=06:00:00
# job name
#SBATCH --job-name=Testjob
# do not export environment variables
#SBATCH --export=NONE
# do not export environment variables
unset SLURM_EXPORT_ENV
module load gromacs/2022.5-gcc11.2.0-openmpi-mkl-cuda-plumed
TPR=name
# not necessary, but makes sure the directories are in correct order
directories=`echo dir{0..9} dir{1..2}{0..9} dir3{0..1}`
# these variables are needed to start the MPS-server
# Select a location that's accessible to the given $UID
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps.$SLURM_JOB_ID
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log.$SLURM_JOB_ID
# Start the daemon.
nvidia-cuda-mps-control -d
# these variables need to be placed directly before the Gromacs invocation
# these variables are needed for halo exchange and
# optimized communication between the GPUs
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_GPU_FORCE_UPDATE_DEFAULT_GPU=true
# --oversubscribe is necessary, otherwise mpirun aborts
# -s is needed, otherwise gromacs complains
# -pme -nb -update -bonded make sure everything is offloaded to the GPU
# -pin -pinstride order the threads on the CPU, otherwise there's
# wild chaos on the CPU
# -plumed ../plumed_in.dat needs to point to where the file is relative
# to the directory the .tpr is in
mpirun -n 32 --oversubscribe gmx_mpi mdrun -s $TPR -pme gpu -nb gpu -update gpu -bonded gpu -pin on -pinstride 1 -plumed ../plumed_in.dat -multidir ${directories} -cpi $TPR -maxh 6
# this will stop the MPS-server
echo quit | nvidia-cuda-mps-control
Further information#
- https://manual.gromacs.org/documentation/current/
- https://doi.org/10.1002/jcc.26011 -- More bang for your buck: Improved use of GPU nodes for GROMACS 2018
- our own evaluations:
  - Multi-GPU GROMACS Jobs on TinyGPU
  - GROMACS Shootout: Intel Xeon Ice Lake vs. NVIDIA A100, A40, and others
  - GROMACS performance on different GPU types
  - GROMACS2024 on modern GPUs