
GROMACS is a versatile package to perform molecular dynamics for systems with hundreds to millions of particles.

...

  • GROMACS provides extremely high performance compared to all other programs.

  • GROMACS can make simultaneous use of both CPU and GPU available in a system. There are options to statically and dynamically balance the load between the different resources.

  • GROMACS is user-friendly, with topologies and parameter files written in clear text format (see the short example after this list).

  • Both run input files and trajectories are independent of hardware endian-ness, and can thus be read by any version of GROMACS.

  • GROMACS comes with a large selection of flexible tools for trajectory analysis.

  • GROMACS can be run in parallel, using the standard MPI communication protocol.

  • GROMACS contains several state-of-the-art algorithms.

  • GROMACS is Free Software, available under the GNU Lesser General Public License (LGPL).
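
As an illustration of the plain-text file format mentioned above, a minimal system topology could look as follows (a small water box; the force-field includes and the molecule count are placeholders for illustration only):

; topol.top: minimal example topology
#include "oplsaa.ff/forcefield.itp"
#include "oplsaa.ff/spc.itp"

[ system ]
Example water box

[ molecules ]
SOL    216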

Weaknesses

  • To achieve very high simulation speed, GROMACS does not do much additional analysis during the run.

  • Sometimes it is challenging to get non-standard information about the simulated system.

  • Different versions sometimes differ in their default parameters and methods, so reproducing simulations made with an older version using a newer one can be difficult.

  • The additional tools and utilities that come with GROMACS are sometimes not of the highest quality.

...

If you are using an MPI version of GPU-accelerated GROMACS (non-thread-MPI, e.g. to take advantage of PLUMED), you can proceed in a similar fashion, but place the mpirun/mpiexec task launcher before the GROMACS binary instead. An example job script requesting 2 A100 GPUs on each of 2 nodes is shown below:

#!/bin/bash 
#SBATCH --time=12:00:00
#SBATCH --partition=gpu-a100
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4

export SLURM_CPU_BIND=none

module load gcc/11.3.0 cuda/11.8 openmpi/gcc.11/4.1.4
module load gromacs/2022.5

# Enable direct GPU communication (halo exchange and PME-PP transfers)
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_ENABLE_DIRECT_GPU_COMM=true

export OMP_NUM_THREADS=9

# 8 MPI ranks in total (4 per node, 9 OpenMP threads each), using GPUs 0 and 1 on each node
mpiexec -np 8 -npernode 4 gmx_mpi mdrun -ntomp 9 -nb gpu -pme gpu -npme 1 -gpu_id 01 OTHER MDRUNARGUMENTS

...

Note: The settings for the number of (thread-)MPI ranks and OpenMP threads should be chosen to achieve optimal performance. The number of ranks should be a multiple of the number of CPU sockets, and the number of cores per node should be a multiple of the number of threads per rank.
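
For example, on a two-socket node with 96 cores, 48 MPI ranks with 2 OpenMP threads per rank satisfy both rules: 48 ranks are a multiple of the 2 sockets, and the 96 cores are a multiple of the 2 threads per rank (48 × 2 = 96).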

Related Modules

Gromacs-Plumed

PLUMED is an open-source, community-developed library that provides a wide range of different methods, such as enhanced-sampling algorithms, free-energy methods and tools to analyze the vast amounts of data produced by molecular dynamics (MD) simulations. PLUMED works together with some of the most popular MD engines.

Since the migration of the CPU partition from CentOS to Rocky Linux 9, all GROMACS-PLUMED modules have been combined with the regular GROMACS modules. For example, to use GROMACS 2022.5 with PLUMED, one can load gromacs/2022.5 and have access to both the regular (gmx_mpi) and the PLUMED-patched (gmx_mpi_plumed) binaries.
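
For instance, after loading the module one can check that both binaries are available (--version prints GROMACS build information):

module load gromacs/2022.5
gmx_mpi --version            # regular GROMACS binary
gmx_mpi_plumed --version     # PLUMED-patched binary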

...

#!/bin/bash 
#SBATCH --time=12:00:00
#SBATCH --partition=cpu-clx
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72

export SLURM_CPU_BIND=none

module load gcc/11.3.0 cuda/11.8 openmpi/gcc.11/4.1.4
module load gromacs/2022.5

export OMP_NUM_THREADS=2

# 144 MPI ranks in total (72 per node, 2 OpenMP threads each)
mpiexec -np 144 -npernode 72 gmx_mpi mdrun -ntomp 2 -npme 1 -pin on -plumed plumed.dat OTHER MDRUNARGUMENTS
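
The -plumed flag expects a PLUMED input file. As a minimal illustrative sketch (the file name plumed.dat matches the script above; the atom selection is just a placeholder), such a file could look like:

# plumed.dat: monitor the distance between atoms 1 and 10
d: DISTANCE ATOMS=1,10
PRINT ARG=d FILE=COLVAR STRIDE=100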

...

Turbo-boost has been mostly disabled on Emmy at GWDG (partitions medium40, large40, standard96, large96, and huge96) in order to save energy. However, this has a particularly strong performance impact on GROMACS, in the range of 20-40%. Therefore, we recommend that GROMACS jobs be submitted with turbo-boost enabled by passing the --constraint=turbo_on option to srun or sbatch.
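
For example, the constraint can be set either in the job script header or on the command line (jobscript.sh is a placeholder name):

#SBATCH --constraint=turbo_on

or

sbatch --constraint=turbo_on jobscript.sh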

References

  1. GROMACS User-Guide: https://manual.gromacs.org

  2. PLUMED Home: https://www.plumed.org