LAMMPS#
LAMMPS is a classical molecular dynamics code with a focus on materials modeling. LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.
Availability / Target HPC systems#
- Woody, Meggie, Fritz
- TinyGPU, Alex
Most of these installations were built using Spack -- check https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lammps/package.py for possible versions and build options if you would like to request a different build.
Use module avail lammps to see the list of available LAMMPS modules. To see which LAMMPS packages have been included in a specific build, allocate an interactive job and run mpirun -np 1 lmp -help.
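A minimal sketch of this check (the time limit, any partition or GPU request, and the module version are placeholders that depend on the cluster):
module avail lammps                # list the installed LAMMPS modules
salloc --nodes=1 --time=00:30:00   # interactive job; add a partition/GPU request where required
module load lammps/<version>       # load one of the modules listed by module avail
mpirun -np 1 lmp -help             # prints, among other things, the included packages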
On Fritz, in addition to the installations from Spack (normally based on the GNU compilers), there is a LAMMPS installation built with the Intel compilers. The following packages were included in this installation:
AMOEBA ASPHERE ATC AWPMD BOCS BODY BPM BROWNIAN CG-DNA CG-SPICA CLASS2
COLLOID COLVARS COMPRESS CORESHELL DIELECTRIC DIFFRACTION DIPOLE
DPD-BASIC DPD-MESO DPD-REACT DPD-SMOOTH DRUDE EFF ELECTRODE
EXTRA-COMPUTE EXTRA-DUMP EXTRA-FIX EXTRA-MOLECULE EXTRA-PAIR FEP
GRANULAR INTEL INTERLAYER KIM KSPACE LATBOLTZ MACHDYN MANIFOLD MANYBODY
MC MDI MEAM MESONT MGPT MISC ML-HDNNP ML-IAP ML-PACE ML-POD ML-RANN
ML-SNAP MOFFF MOLECULE MOLFILE MPIIO OPENMP OPT ORIENT PERI PHONON
PLUGIN POEMS PTM QEQ QMMM QTB REACTION REAXFF REPLICA RIGID SHOCK SMTBQ
SPH SPIN SRD TALLY UEF VORONOI YAFF
On Fritz, the lammps/20211027-gcc11.2.0-ompi-mkl module has been compiled with GCC 11.2.0, Open MPI 4.1.1, and Intel oneAPI MKL, using the following CMake options:
-DBUILD_SHARED_LIBS:BOOL=ON -DLAMMPS_EXCEPTIONS:BOOL=OFF \
-DBUILD_MPI=ON -DBUILD_OMP:BOOL=ON -DPKG_OPENMP=ON -DPKG_GPU=OFF \
-DBUILD_LIB=ON -DWITH_JPEG:BOOL=ON -DWITH_PNG:BOOL=ON \
-DWITH_FFMPEG:BOOL=ON -DPKG_ASPHERE=ON -DPKG_BODY=ON \
-DPKG_CLASS2=ON -DPKG_COLLOID=ON -DPKG_COMPRESS=ON \
-DPKG_CORESHELL=ON -DPKG_DIPOLE=ON -DPKG_GRANULAR=ON \
-DPKG_KSPACE=ON -DPKG_KOKKOS=ON -DPKG_LATTE=ON -DPKG_MANYBODY=ON \
-DPKG_MC=ON -DPKG_MEAM=OFF -DPKG_MISC=ON -DPKG_MLIAP=OFF \
-DPKG_MOLECULE=ON -DPKG_MPIIO=ON -DPKG_OPT=OFF -DPKG_PERI=ON \
-DPKG_POEMS=ON -DPKG_PYTHON=ON -DPKG_QEQ=ON -DPKG_REPLICA=ON \
-DPKG_RIGID=ON -DPKG_SHOCK=ON -DPKG_SNAP=ON -DPKG_SPIN=ON \
-DPKG_SRD=ON -DPKG_USER-ATC=ON -DPKG_USER-ADIOS=OFF \
-DPKG_USER-AWPMD=OFF -DPKG_USER-BOCS=OFF -DPKG_USER-CGSDK=OFF \
-DPKG_USER-COLVARS=OFF -DPKG_USER-DIFFRACTION=OFF \
-DPKG_USER-DPD=OFF -DPKG_USER-DRUDE=OFF -DPKG_USER-EFF=OFF \
-DPKG_USER-FEP=OFF -DPKG_USER-H5MD=ON -DPKG_USER-LB=ON \
-DPKG_USER-MANIFOLD=OFF -DPKG_USER-MEAMC=ON \
-DPKG_USER-MESODPD=OFF -DPKG_USER-MESONT=OFF -DPKG_USER-MGPT=OFF \
-DPKG_USER-MISC=ON -DPKG_USER-MOFFF=OFF -DPKG_USER-NETCDF=ON \
-DPKG_USER-OMP=ON -DPKG_USER-PHONON=OFF -DPKG_USER-PLUMED=OFF \
-DPKG_USER-PTM=OFF -DPKG_USER-QTB=OFF -DPKG_USER-REACTION=OFF \
-DPKG_USER-REAXC=ON -DPKG_USER-SDPD=OFF -DPKG_USER-SMD=OFF \
-DPKG_USER-SMTBQ=OFF -DPKG_USER-SPH=OFF -DPKG_USER-TALLY=OFF \
-DPKG_USER-UEF=OFF -DPKG_USER-YAFF=OFF -DPKG_VORONOI=ON \
-DPKG_KIM=ON -DFFT=MKL -DEXTERNAL_KOKKOS=ON
Alex#
The modules lammps/20201029-gcc10.3.0-openmpi-mkl-cuda and
lammps/20211027-gcc10.3.0-openmpi-mkl-cuda have been compiled with
GCC 10.3.0, Intel oneAPI MKL, and Open MPI 4.1.1, and with:
- GPU package API: CUDA; GPU package precision: mixed; for sm_80
- KOKKOS package API: CUDA, OpenMP, Serial; KOKKOS package precision: double; for sm_80
- The set of installed packages differs between the 20201029 and 20211027 builds; see below for how to list them for a specific module.
Run module avail lammps to see all currently installed LAMMPS modules.
Allocate an interactive job and run mpirun -np 1 lmp -help to see
which LAMMPS packages have been included in a specific build.
Notes#
We regularly observe that LAMMPS jobs suffer from severe load-balancing issues; this can be caused by an inhomogeneous distribution of particles in the system or can occur in systems containing a lot of empty space. These problems can be handled with LAMMPS commands like processors, balance, or fix balance; please consult the LAMMPS documentation for details.
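As an illustrative sketch only (the thresholds, rebalancing interval, and fix ID below are made-up values, not recommendations for your system), load balancing could be enabled in the input file like this:
# one-time rebalancing of the current particle distribution across MPI ranks
balance 1.1 shift xy 10 1.1
# periodic rebalancing every 1000 steps during the run
fix lb all balance 1000 1.1 shift xy 10 1.1
The processors command can additionally be used to prescribe how the MPI ranks are arranged into the processor grid.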
Sample job scripts#
Single GPU job on Alex#
#!/bin/bash -l
#SBATCH --time=10:00:00
#SBATCH --partition=a40
#SBATCH --gres=gpu:a40:1
#SBATCH --job-name=my-lammps
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
module load lammps/20201029-gcc10.3.0-openmpi-mkl-cuda
cd $SLURM_SUBMIT_DIR
srun --ntasks=16 --cpu-bind=core --mpi=pmi2 lmp -in input.in
MPI parallel job (single-node) on Fritz#
#!/bin/bash -l
#SBATCH --partition=singlenode
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=72
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
# load required modules
module load lammps/20221222-intel-impi-mkl
# run lammps
srun lmp -in input.lmp
MPI parallel job (multi-node) on Fritz#
#!/bin/bash -l
#SBATCH --partition=multinode
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
# load required modules
module load lammps/20221222-intel-impi-mkl
# run lammps
srun lmp -in input.lmp
Hybrid OpenMP/MPI job (single node) on Fritz#
#!/bin/bash -l
#SBATCH --partition=singlenode
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
# load required modules
module load lammps/20221222-intel-impi-mkl
# specify the number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# run lammps
srun lmp -sf omp -in input.lmp
Hybrid OpenMP/MPI job (multi-node) on Fritz#
#!/bin/bash -l
#SBATCH --partition=multinode
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
# load required modules
module load lammps/20221222-intel-impi-mkl
# specify the number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# run lammps
srun lmp -sf omp -in input.lmp
Setting up LAMMPS restart jobs and resubmitting automatically#
To enable checkpointing in LAMMPS, a restart command needs to be added to the input file. The example below writes restart data every 1000 MD steps for subsequent restarting; it creates restart files enumerated by the timestep (a multiple of 1000) appended to the file name.
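A minimal sketch of such a command, assuming a write interval of 1000 steps and the base file name positions.restart that is also used by the job script further below:
restart 1000 positions.restart
With a single file name given, the current timestep is appended, producing positions.restart.1000, positions.restart.2000, and so on.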
The most recent of these files should be used for the restart run. It is best to rename it to a fixed name, say positions.restart, in the job script, so that the input file does not need to be modified even though the name of the latest restart file is not known in advance.
On the FAU clusters, the runtime limit is 24 hours. We therefore recommend stopping LAMMPS gracefully by adding a fix halt command to the input file (see the snippet below) and setting the variable maxtime to less than 24 hours when calling the LAMMPS binary (via -var maxtime, see the job script below). In the example, maxtime is set 600 seconds below the 24-hour limit; this assumes that 600 seconds is a sufficient margin, i.e. that 100 MD steps (the check interval of fix halt) take less than 600 seconds. If this does not apply to your calculations, either increase the margin on maxtime or reduce the number of steps between fix halt checks.
In summary, if you provide the initial structure from a data file, say positions.newrun, you need to have the following in the LAMMPS input file (in addition to the restart command shown above):
# read the system from the restart file when restarting, otherwise from the data file
if "${restart}==TRUE" then &
    "read_restart    positions.restart" &
else &
    "read_data       positions.newrun"
# stop gracefully once the elapsed run time exceeds ${maxtime} seconds (checked every 100 steps)
fix  2 all halt 100 tlimit > ${maxtime}
#!/bin/bash -l
#SBATCH --partition=singlenode
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=72
#SBATCH --time=24:00:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
# load required modules
module load lammps/20221222-intel-impi-mkl
MAXTIME=$((24*3600-600))   # halt 600 seconds before the 24-hour runtime limit
# run lammps
# if restart files exist, rename the most recent one and restart from it,
# otherwise start a new run
if compgen -G "positions.restart.*" > /dev/null; then
  filename=$(ls positions.restart.* | sort -V | tail -n 1)
  mv -v "$filename" positions.restart
  srun lmp -i in.lammps -var restart TRUE -var maxtime $MAXTIME
else
  srun lmp -i in.lammps -var restart FALSE -var maxtime $MAXTIME
fi
# resubmit this job script only if the current job ran for at least one hour
if [ "$SECONDS" -gt "3600" ]; then
  cd ${SLURM_SUBMIT_DIR}
  sbatch job_script
fi
Here, job_script is the file name of this job script. The bash variable $SECONDS contains the elapsed run time of the shell in seconds. In this example, sbatch is therefore only called when the current job has run for at least one hour, which prevents an endless chain of resubmissions if a job fails shortly after startup.
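To start the chain, submit the script once by hand (job_script again stands for the actual file name of the script):
sbatch job_script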