LAMMPS#
LAMMPS is a classical molecular dynamics code with a focus on materials modeling. LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.
Availability / Target HPC systems#
- Woody, Meggie, Fritz
- TinyGPU, Alex
Most of these installations were made through SPACK. Check https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lammps/package.py for possible versions and build options if you would like to request a different compilation.
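If you have access to a Spack installation yourself, the same information can be queried on the command line; this is only a convenience sketch and assumes a working spack command in your environment:
# List the LAMMPS versions and build variants known to your Spack installation
spack info lammps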
Allocate an interactive job and run mpirun -np 1 lmp -help to see which LAMMPS packages have been included in a specific build. Use module avail lammps to see the list of available LAMMPS modules.
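For example (partition, time limit, and module name below are placeholders; pick an existing module from module avail lammps and adjust the job request to the target cluster):
# Allocate a short interactive job
salloc --nodes=1 --time=00:15:00
# List and load one of the available LAMMPS modules
module avail lammps
module load lammps/20221222-intel-impi-mkl
# Print the packages, styles, and settings compiled into this binary
mpirun -np 1 lmp -help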
On Fritz, in addition to the installations from SPACK (normally based on GNU compilers), there is a LAMMPS installation built with the Intel compilers. For this installation, the following packages were included:
AMOEBA ASPHERE ATC AWPMD BOCS BODY BPM BROWNIAN CG-DNA CG-SPICA CLASS2
COLLOID COLVARS COMPRESS CORESHELL DIELECTRIC DIFFRACTION DIPOLE
DPD-BASIC DPD-MESO DPD-REACT DPD-SMOOTH DRUDE EFF ELECTRODE
EXTRA-COMPUTE EXTRA-DUMP EXTRA-FIX EXTRA-MOLECULE EXTRA-PAIR FEP
GRANULAR INTEL INTERLAYER KIM KSPACE LATBOLTZ MACHDYN MANIFOLD MANYBODY
MC MDI MEAM MESONT MGPT MISC ML-HDNNP ML-IAP ML-PACE ML-POD ML-RANN
ML-SNAP MOFFF MOLECULE MOLFILE MPIIO OPENMP OPT ORIENT PERI PHONON
PLUGIN POEMS PTM QEQ QMMM QTB REACTION REAXFF REPLICA RIGID SHOCK SMTBQ
SPH SPIN SRD TALLY UEF VORONOI YAFF
On Fritz, the lammps/20211027-gcc11.2.0-ompi-mkl module has been compiled with GCC 11.2.0, Open MPI 4.1.1, and Intel oneAPI MKL using the following CMake options:
-DBUILD_SHARED_LIBS:BOOL=ON -DLAMMPS_EXCEPTIONS:BOOL=OFF \
-DBUILD_MPI=ON -DBUILD_OMP:BOOL=ON -DPKG_OPENMP=ON -DPKG_GPU=OFF \
-DBUILD_LIB=ON -DWITH_JPEG:BOOL=ON -DWITH_PNG:BOOL=ON \
-DWITH_FFMPEG:BOOL=ON -DPKG_ASPHERE=ON -DPKG_BODY=ON \
-DPKG_CLASS2=ON -DPKG_COLLOID=ON -DPKG_COMPRESS=ON \
-DPKG_CORESHELL=ON -DPKG_DIPOLE=ON -DPKG_GRANULAR=ON \
-DPKG_KSPACE=ON -DPKG_KOKKOS=ON -DPKG_LATTE=ON -DPKG_MANYBODY=ON \
-DPKG_MC=ON -DPKG_MEAM=OFF -DPKG_MISC=ON -DPKG_MLIAP=OFF \
-DPKG_MOLECULE=ON -DPKG_MPIIO=ON -DPKG_OPT=OFF -DPKG_PERI=ON \
-DPKG_POEMS=ON -DPKG_PYTHON=ON -DPKG_QEQ=ON -DPKG_REPLICA=ON \
-DPKG_RIGID=ON -DPKG_SHOCK=ON -DPKG_SNAP=ON -DPKG_SPIN=ON \
-DPKG_SRD=ON -DPKG_USER-ATC=ON -DPKG_USER-ADIOS=OFF \
-DPKG_USER-AWPMD=OFF -DPKG_USER-BOCS=OFF -DPKG_USER-CGSDK=OFF \
-DPKG_USER-COLVARS=OFF -DPKG_USER-DIFFRACTION=OFF \
-DPKG_USER-DPD=OFF -DPKG_USER-DRUDE=OFF -DPKG_USER-EFF=OFF \
-DPKG_USER-FEP=OFF -DPKG_USER-H5MD=ON -DPKG_USER-LB=ON \
-DPKG_USER-MANIFOLD=OFF -DPKG_USER-MEAMC=ON \
-DPKG_USER-MESODPD=OFF -DPKG_USER-MESONT=OFF -DPKG_USER-MGPT=OFF \
-DPKG_USER-MISC=ON -DPKG_USER-MOFFF=OFF -DPKG_USER-NETCDF=ON \
-DPKG_USER-OMP=ON -DPKG_USER-PHONON=OFF -DPKG_USER-PLUMED=OFF \
-DPKG_USER-PTM=OFF -DPKG_USER-QTB=OFF -DPKG_USER-REACTION=OFF \
-DPKG_USER-REAXC=ON -DPKG_USER-SDPD=OFF -DPKG_USER-SMD=OFF \
-DPKG_USER-SMTBQ=OFF -DPKG_USER-SPH=OFF -DPKG_USER-TALLY=OFF \
-DPKG_USER-UEF=OFF -DPKG_USER-YAFF=OFF -DPKG_VORONOI=ON \
-DPKG_KIM=ON -DFFT=MKL -DEXTERNAL_KOKKOS=ON
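For reference, options like these are passed to CMake when configuring LAMMPS from the cmake/ directory of its source tree. The following is only an illustrative sketch with a small example selection of flags, not a recipe for reproducing the module exactly:
# Configure and build LAMMPS out of source; the package selection here is just an example
mkdir build && cd build
cmake ../cmake -DBUILD_MPI=ON -DPKG_KSPACE=ON -DPKG_MOLECULE=ON -DFFT=MKL
cmake --build . --parallel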
Alex#
The modules lammps/20201029-gcc10.3.0-openmpi-mkl-cuda and lammps/20211027-gcc10.3.0-openmpi-mkl-cuda have been compiled with GCC 10.3.0, Intel oneAPI MKL, and Open MPI 4.1.1, and with
- GPU package API: CUDA; GPU package precision: mixed; for sm_80
- KOKKOS package API: CUDA OpenMP Serial; KOKKOS package precision: double; for sm_80
To see which packages are installed in the 20201029 and 20211027 builds, allocate an interactive job, load the respective module, and run mpirun -np 1 lmp -help. Run module avail lammps to see all currently installed LAMMPS modules.
Notes#
We regularly observe that LAMMPS jobs have severe load-balancing issues; this can be caused by an inhomogeneous distribution of particles in the system or can happen in systems with lots of empty space. Such problems can be handled with LAMMPS commands like processors, balance, or fix balance. Please follow the links to the LAMMPS documentation.
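As an illustration, load balancing could be requested from within a LAMMPS input file as sketched below; the thresholds, intervals, and the assumption of a quasi-2D particle distribution are placeholders that must be adapted to your system:
# Use a single processor layer along z (only sensible for quasi-2D distributions)
processors * * 1
# Rebalance once, before the run, if the imbalance factor exceeds 1.1
balance 1.1 shift xy 10 1.1
# Re-check and rebalance every 1000 time steps during the run
fix lb all balance 1000 1.1 shift xy 10 1.1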
Sample job scripts#
Single GPU job on Alex#
#!/bin/bash -l
#SBATCH --time=10:00:00
#SBATCH --partition=a40
#SBATCH --gres=gpu:a40:1
#SBATCH --job-name=my-lammps
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
module load lammps/20201029-gcc10.3.0-openmpi-mkl-cuda
cd $SLURM_SUBMIT_DIR
srun --ntasks=16 --cpu-bind=core --mpi=pmi2 lmp -in input.in
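Note that this command runs the plain CPU styles unless the input file itself activates an accelerator package. With the CUDA-enabled builds, GPU acceleration can also be requested on the command line; the variants below are sketches and assume that the styles used in input.in are available in the GPU or KOKKOS package:
# GPU package (mixed precision) on one GPU, shared by 16 MPI tasks
srun --ntasks=16 --cpu-bind=core --mpi=pmi2 lmp -sf gpu -pk gpu 1 -in input.in
# KOKKOS package on one GPU with a single MPI task
srun --ntasks=1 --cpu-bind=core --mpi=pmi2 lmp -k on g 1 -sf kk -in input.in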
MPI parallel job (single-node) on Fritz#
#!/bin/bash -l
#SBATCH --partition=singlenode
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=72
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
# load required modules
module load lammps/20221222-intel-impi-mkl
# run lammps
srun lmp -in input.lmp
MPI parallel job (multi-node) on Fritz#
#!/bin/bash -l
#SBATCH --partition=multinode
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
# load required modules
module load lammps/20221222-intel-impi-mkl
# run lammps
srun lmp -in input.lmp
Hybrid OpenMP/MPI job (single node) on Fritz#
#!/bin/bash -l
#SBATCH --partition=singlenode
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
# load required modules
module load lammps/20221222-intel-impi-mkl
# specify the number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# run lammps
srun lmp -sf omp -in input.lmp
Hybrid OpenMP/MPI job (multi-node) on Fritz#
#!/bin/bash -l
#SBATCH --partition=multinode
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV
# load required modules
module load lammps/20221222-intel-impi-mkl
# specify the number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# run lammps
srun lmp -sf omp -in input.lmp
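For the hybrid jobs it can additionally help to pin the OpenMP threads; the variables below are standard OpenMP environment settings and only a suggestion, not a requirement of the modules above:
# Optional: bind each OpenMP thread to a core close to its parent MPI task
export OMP_PLACES=cores
export OMP_PROC_BIND=close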