
GROMACS is a versatile package to perform molecular dynamics for systems with hundreds to millions of particles.

...

  • GROMACS provides extremely high performance compared to all other programs.

  • GROMACS can make simultaneous use of both CPUs and GPUs available in a system. There are options to statically and dynamically balance the load between the different resources.

  • GROMACS is user-friendly, with topologies and parameter files written in clear text format.

  • Both run input files and trajectories are independent of hardware endian-ness, and can thus be read by any version of GROMACS.

  • GROMACS comes with a large selection of flexible tools for trajectory analysis.

  • GROMACS can be run in parallel, using the standard MPI communication protocol.

  • GROMACS contains several state-of-the-art algorithms.

  • GROMACS is Free Software, available under the GNU Lesser General Public License (LGPL).

Weaknesses

  • To achieve very high simulation speed, GROMACS does not perform much additional analysis on the fly.

  • Sometimes it is challenging to get non-standard information about the simulated system.

  • Different versions sometimes have differences in default parameters/methods. Reproducing simulations performed with an older version using a newer version can be difficult.

  • Additional tools and utilities provided by GROMACS are sometimes not of the highest quality.

...

Version | Module file | Thread-MPI (gmx) | MPI (gmx_mpi) | Plumed (gmx_mpi_plumed) | Prerequisites

CPU CLX partition

2021.7 | gromacs/2021.7 | no | yes | no | impi/2021.13
2023.0 | gromacs/2023.0 | no | yes | yes | impi/2021.13

CPU Genoa partition

GPU A100 partition

2022.5 | gromacs/2022.5 | no | yes | yes | gcc/11.3.0 cuda/11.8 openmpi/gcc.11/4.1.4
2023.0 | gromacs/2023.0_tmpi | yes | no | no | gcc/11.3.0 intel/2023.0.0 cuda/11.8
2024.0 | gromacs/2024.0_tmpi | yes | no | no | gcc/11.3.0 intel/2023.0.0 cuda/12.3

GPU PVC partition

...
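To use one of these modules, the prerequisite modules listed in the last column have to be loaded first. A minimal sketch (assuming the GPU A100 partition and the thread-MPI build from the table above) could look as follows:

Codeblock
languagebash
# Load the prerequisites listed in the table, then the GROMACS module itself.
module load gcc/11.3.0 intel/2023.0.0 cuda/12.3
module load gromacs/2024.0_tmpi

# The thread-MPI build provides the gmx binary; print its version to check the setup.
gmx --version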

If you are using an MPI version of GPU-accelerated GROMACS (i.e., not thread-MPI, e.g., to take advantage of PLUMED), you can proceed in a similar fashion, but place the mpirun/mpiexec task launcher in front of the GROMACS binary. An example job script using 2 A100 GPUs on each of 2 nodes is shown below:

Codeblock
languagebash
#!/bin/bash 
#SBATCH --time=12:00:00
#SBATCH --partition=gpu-a100
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24

export SLURM_CPU_BIND=none

module load gcc/11.3.0 cuda/11.8 openmpi/gcc.11/4.1.4
module load gromacs/2022.5

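# Optional performance tuning: these variables enable (experimental) GPU-direct
# communication, i.e. halo exchange and PME-PP transfers directly between GPUs.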
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_ENABLE_DIRECT_GPU_COMM=true

export OMP_NUM_THREADS=9

mpiexec -np 48 -npernode 24 gmx_mpi mdrun -ntomp 9 -nb gpu -pme gpu -npme 1 -gpu_id 01 OTHER MDRUN ARGUMENTS

...

Note: The settings for the number of (thread-)MPI ranks and OpenMP threads are meant to achieve optimal performance. The number of ranks should be a multiple of the number of sockets, and the number of cores per node should be a multiple of the number of threads per rank.
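For example, on a two-socket node with 72 physical cores (an assumed configuration; check the hardware description of your partition), 8 ranks per node with 9 OpenMP threads per rank satisfy both rules: 8 is a multiple of 2 sockets, and 72 cores is a multiple of 9 threads (8 × 9 = 72).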

Related Modules

Gromacs-Plumed

PLUMED is an open-source, community-developed library that provides a wide range of different methods, such as enhanced-sampling algorithms, free-energy methods and tools to analyze the vast amounts of data produced by molecular dynamics (MD) simulations. PLUMED works together with some of the most popular MD engines.

The PLUMED-patched GROMACS binaries can be used to run, for example, metadynamics simulations. Since the migration of the CPU partition from CentOS to Rocky 9 Linux, the GROMACS-PLUMED modules have been combined with the normal GROMACS modules. For example, to use GROMACS 2023.0 with PLUMED, one can load gromacs/2023.0 and have access to both the normal (gmx_mpi) and the PLUMED-patched (gmx_mpi_plumed) binaries.
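As a quick check (a sketch assuming the CPU CLX partition and the prerequisites listed in the table above), the combined module can be loaded and both binaries queried as follows:

Codeblock
languagebash
# Load the Intel MPI prerequisite and the combined GROMACS module.
module load impi/2021.13
module load gromacs/2023.0

# Both the normal and the PLUMED-patched binaries should now be available.
gmx_mpi --version
gmx_mpi_plumed --version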

PLUMED can be used to bias GROMACS simulations by supplying an appropriate PLUMED input file to the -plumed option of the gmx_mpi_plumed mdrun command:

Codeblock
languagebash
#!/bin/bash 
#SBATCH --time=12:00:00
#SBATCH --partition=cpu-clx
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72

export SLURM_CPU_BIND=none

module load impi/2021.13
module load gromacs/2023.0

export OMP_NUM_THREADS=2

mpiexec -np 144 -npernode 72 gmx_mpi_plumed mdrun -ntomp 2 -npme 1 -pin on -plumed plumed.dat OTHER MDRUN ARGUMENTS
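The contents of plumed.dat depend entirely on the intended biasing protocol. The following is only a hypothetical, minimal sketch (atom indices and parameters are placeholders) that restrains one distance and writes it to a COLVAR file; consult the PLUMED manual for the actions appropriate to your use case:

Codeblock
languagebash
# Hypothetical minimal PLUMED input; adjust the atom indices, KAPPA and AT to your system.
cat > plumed.dat << 'EOF'
d1: DISTANCE ATOMS=1,100
RESTRAINT ARG=d1 KAPPA=250.0 AT=1.0
PRINT ARG=d1 FILE=COLVAR STRIDE=500
EOF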
Note

Not every GROMACS GPU option is compatible with PLUMED. For example, -update gpu, which can greatly accelerate plain GROMACS runs by performing the coordinate update on the GPU, will lead to incorrect results with PLUMED (https://github.com/plumed/plumed2/commit/20f8be272efa268a31af65c56c3c71af8c13402c#diff-f21f46830c34c99766c30157316251b8354efbf1f6359f18b961c7339af97a77R2144 ). Please familiarize yourself with the PLUMED/GROMACS/GPU limitations, watch for warnings in the output, and report any issues to NHR@ZIB staff for assistance.

For additional information about PLUMED, please visit the official website.

Analyzing results

GROMACS Tools

...

Turbo-boost has been mostly disabled on Emmy at GWDG (partitions medium40, large40, standard96, large96, and huge96) in order to save energy. However, this has a particularly strong performance impact on GROMACS, in the range of 20-40%. Therefore, we recommend that GROMACS jobs be submitted with turbo-boost enabled by passing the --constraint=turbo_on option to srun or sbatch.
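For example (myjob.slurm is a placeholder for your own job script), the constraint can be given on the command line or as a directive inside the script:

Codeblock
languagebash
# On the command line:
sbatch --constraint=turbo_on myjob.slurm

# ... or as a directive inside the job script itself:
#SBATCH --constraint=turbo_on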

Useful links

References

  1. GROMACS User-Guide

  2. PLUMED Home