Amber/AmberTools#
NHR@FAU holds a "compute center license" of Amber 20 and 22; thus, Amber is generally available to everyone for non-profit use, i.e. for academic research. AmberTools are open source, while Amber (pmemd) requires a license.
Amber and AmberTools are a suite of biomolecular simulation programs. Here, the term "Amber" does not refer to the set of molecular mechanical force fields for the simulation of biomolecules but to the package of molecular simulation programs consisting of the AmberTools (sander and many more) and Amber (pmemd).
Availability / Target HPC systems#
- TinyGPU and Alex: typically use pmemd.cuda for single-GPU runs. Thermodynamic integration (TI) may require special tuning; contact us!
- Throughput cluster Woody and parallel computers: only use sander.MPI if the input is not supported by pmemd.MPI. cpptraj is also available in parallel versions (cpptraj.OMP and cpptraj.MPI).
- Use module avail amber to see a list of available Amber modules (see the example after this list).
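A minimal sketch of listing and loading Amber modules (the module names below are just the versions mentioned on this page; check module avail amber on the respective cluster for what is actually installed):

# list all Amber/AmberTools modules on the current cluster
module avail amber
# load a CPU-only build (sander.MPI, pmemd.MPI, cpptraj, ...)
module load amber/20p03-at20p07-intel17.0-intelmpi2017
# or load a GPU build (pmemd.cuda only), e.g. on TinyGPU
module load amber-gpu/20p08-at20p12-gnu-cuda11.2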
New versions of Amber/AmberTools are installed by RRZE upon request.
Alex#
The amber/20p12-at21p11-ompi-gnu-cuda11.5 module from 11/2021 contains the additional bug fix discussed in http://archive.ambermd.org/202110/0210.html and http://archive.ambermd.org/202110/0218.html.
Notes#
The CPU-only module is called amber, while the GPU version (which only contains pmemd.cuda) is called amber-gpu. The numbers in the module name specify the Amber version, the Amber patch level, the AmberTools version, and the AmberTools patch level. The numbers are complemented by the used compilers/tools, e.g. amber/18p14-at19p03-intel17.0-intelmpi2017 or amber-gpu/18p14-at19p03-gnu-cuda10.0.
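Decoded, the first example name reads as follows (all components follow the naming scheme above):

module load amber/18p14-at19p03-intel17.0-intelmpi2017
# -> Amber 18 at patch level 14, AmberTools 19 at patch level 03,
#    built with the Intel 17.0 compilers and Intel MPI 2017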
pmemd and sander do not have internal measures to limit the run time. Thus, you have to estimate beforehand how many time steps can finish within the requested wall time and use that number in your mdin file.
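As an illustration (the throughput of ~100 ns/day is a made-up assumption; take the real value from a short benchmark of your own system): with a 2 fs time step, a job that should stay within a 6-hour wall time can safely run about 20 ns, i.e. roughly 10,000,000 steps. The corresponding mdin fragment could look like this (nstlim and dt are the standard &cntrl keywords; all other settings are omitted):

production run, sized to fit a 6 h wall time at ~100 ns/day
&cntrl
  nstlim = 10000000,   ! 10,000,000 steps * 0.002 ps = 20 ns
  dt = 0.002,          ! time step in ps (2 fs)
  ! ... remaining settings (thermostat, cutoff, output frequency, ...) as usual
/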
Recent versions of AmberTools install their own version of Python, which is independent of the Python of the Linux distribution and of the usual Python modules of RRZE.
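A quick, hedged way to see which interpreter ends up being used after loading an Amber module (whether the bundled Python is put first in PATH depends on the module file, so treat this only as a check, not as a guarantee):

module load amber/20p03-at20p07-intel17.0-intelmpi2017
# which python3 is found first in PATH after loading the module?
which python3
# print the installation prefix of that interpreter
python3 -c 'import sys; print(sys.prefix)'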
Sample job scripts#
pmemd on TinyGPU#
#!/bin/bash -l
#SBATCH --time=06:00:00
#SBATCH --job-name=Testjob
#SBATCH --gres=gpu:1
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
module add amber-gpu/20p08-at20p12-gnu-cuda11.2
### there is no need to fiddle around with CUDA_VISIBLE_DEVICES!
pmemd.cuda -O -i mdin ...
pmemd on Alex#
#!/bin/bash -l
#
#SBATCH --job-name=my-pmemd
#SBATCH --ntasks=16
#SBATCH --time=06:00:00
# use gpu:a100:1 and partition=a100 for A100
#SBATCH --gres=gpu:a40:1
#SBATCH --partition=a40
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
module load amber/20p12-at21p11-gnu-cuda11.5
# pmemd.cuda is a single-process binary; do not launch it via srun with multiple tasks
pmemd.cuda -O -i mdin -c inpcrd -p prmtop -o output
parallel pmemd on Meggie#
#!/bin/bash -l
#
# allocate 4 nodes with 20 cores per node = 4*20 = 80 MPI tasks
#SBATCH --nodes=4
#SBATCH --tasks-per-node=20
#
# allocate nodes for 6 hours
#SBATCH --time=06:00:00
# job name
#SBATCH --job-name=my-pmemd
# do not export environment variables
#SBATCH --export=NONE
#
# first non-empty non-comment line ends SBATCH options
# do not export environment variables
unset SLURM_EXPORT_ENV
# jobs always start in submit directory
module load amber/20p03-at20p07-intel17.0-intelmpi2017
# run
srun pmemd.MPI -O -i mdin ...
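All of these job scripts are submitted with sbatch in the usual way; the file name below is just a placeholder:

sbatch job_pmemd_meggie.sh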