
    GROMACS

     

A versatile package to perform molecular dynamics for systems with hundreds to millions of particles.

     

    • 1 Description
    • 2 Strengths
    • 3 GPU support
    • 4 QuickStart
      • 4.1 Environment modules
    • 5 Submission script examples
        • 5.1.1 Simple CPU job script 
        • 5.1.2 Whole node CPU job script
        • 5.1.3 GPU job script
        • 5.1.4 Whole node GPU job script
      • 5.2 Gromacs-Plumed
    • 6 Analyzing results
      • 6.1 GROMACS Tools
      • 6.2 VMD
      • 6.3 Python
    • 7 Usage tips
      • 7.1 System preparation
      • 7.2 Running simulations
      • 7.3 Restarting simulations
    • 8 Performance 
      • 8.1 Special Performance Instructions for Emmy at GWDG
    • 9 Useful links
    • 10 References

    Description

    GROMACS is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers and fluid dynamics.

    Read more on the GROMACS home page.

    Strengths

    • GROMACS provides extremely high performance compared to all other programs.

    • GROMACS can make simultaneous use of both CPU and GPU available in a system. There are options to statically and dynamically balance the load between the different resources.

    • GROMACS is user-friendly, with topologies and parameter files written in clear text format.

• Both run input files and trajectories are independent of hardware endian-ness, and can thus be read by any version of GROMACS.

    • GROMACS comes with a large selection of flexible tools for trajectory analysis.

    • GROMACS can be run in parallel, using the standard MPI communication protocol.

    • GROMACS contains several state-of-the-art algorithms.

    • GROMACS is Free Software, available under the GNU Lesser General Public License (LGPL).

    Weaknesses

• To achieve very high simulation speed, GROMACS does not perform much additional analysis on the fly.

    • Sometimes it is challenging to get non-standard information about the simulated system.

    • Different versions sometimes have differences in default parameters/methods. Reproducing older version simulations with a newer version can be difficult.

• Additional tools and utilities provided by GROMACS are sometimes not of the highest quality.

    GPU support

    GROMACS automatically uses any available GPUs. To achieve the best performance GROMACS uses both GPUs and CPUs in a reasonable balance.
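If you want to control the CPU/GPU balance yourself, the offload targets can also be set explicitly. The following is only a minimal sketch using standard gmx mdrun options; topol.tpr is a placeholder input file:

# Explicitly place the main work packages on the GPU instead of relying on
# mdrun's automatic assignment (topol.tpr is a placeholder input file):
#   -nb gpu      short-range non-bonded interactions on the GPU
#   -pme gpu     long-range PME electrostatics on the GPU
#   -bonded gpu  bonded interactions on the GPU
#   -update gpu  coordinate update and constraints on the GPU
gmx mdrun -s topol.tpr -nb gpu -pme gpu -bonded gpu -update gpu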

    QuickStart

    Environment modules

    The following versions have been installed:

     

Version | Module file | Thread-MPI (gmx) | MPI (gmx_mpi) | Plumed (gmx_mpi_plumed) | Prerequisites

CPU CLX partition
2021.7 | gromacs/2021.7 | | | | impi/2021.13
2023.0 | gromacs/2023.0 | | | | impi/2021.13

CPU Genoa partition

GPU A100 partition
2022.5 | gromacs/2022.5 | | | | cuda/12.9 gcc/13.3.0 openmpi/gcc.13/5.0.8
2023.0 | gromacs/2023.0 | | | | cuda/12.9 intel/2025.2 gcc/13.3.0
2024.2 | gromacs/2024.2 | | | | gcc/13.3.0 intel/2024.2 cuda/12.9

GPU PVC partition

Deprecated versions

Version | Installation Path | modulefile | compiler | comment
2018.4 | /sw/chem/gromacs/2018.4/skl/impi | gromacs/2018.4 | intelmpi |
2018.4 | /sw/chem/gromacs/2018.4/skl/impi-plumed | gromacs/2018.4-plumed | intelmpi | with plumed
2019.6 | /sw/chem/gromacs/2019.6/skl/impi | gromacs/2019.6 | intelmpi |
2019.6 | /sw/chem/gromacs/2019.6/skl/impi-plumed | gromacs/2019.6-plumed | intelmpi | with plumed
2021.2 | /sw/chem/gromacs/2021.2/skl/impi | gromacs/2021.2 | intelmpi |
2021.2 | /sw/chem/gromacs/2021.2/skl/impi-plumed | gromacs/2021.2-plumed | intelmpi | with plumed
2022.5 | /sw/chem/gromacs/2022.5/skl/impi | gromacs/2022.5 | intelmpi |
2022.5 | /sw/chem/gromacs/2022.5/skl/impi-plumed | gromacs/2022.5-plumed | intelmpi | with plumed
2022.5 | /sw/chem/gromacs/2022.5/a100/impi | gromacs/2022.5 | intelmpi | with plumed
2023.0 | /sw/chem/gromacs/2023.0/a100/tmpi | gromacs/2023.0_tmpi | intelmpi |

    *Release notes can be found here. 

     

These modules can be loaded with the module load command. Note that the Intel MPI module file should be loaded first:

    module load impi/2019.5 gromacs/2019.6


This provides access to the binary gmx_mpi, which can be used to run simulations via sub-commands such as gmx_mpi mdrun.

To run simulations, the MPI runner should be used:

    mpirun gmx_mpi mdrun MDRUNARGUMENTS
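To see which sub-commands are available and how to call them, the built-in help can be used; no batch job is required for these calls:

# List all sub-commands provided by the gmx_mpi binary
gmx_mpi help commands

# Show the options of a specific sub-command, e.g. mdrun
gmx_mpi help mdrun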


To load the GPU-enabled version (available only on the bgn nodes), the following module files should be loaded first:

    module load gcc/11.3.0 intel/2023.0.0 cuda/11.8 gromacs/2023.0_tmpi
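To check that the GPU-enabled build is actually active, you can inspect the version banner; the exact wording of the output may vary between GROMACS versions:

# The build information should report CUDA GPU support
gmx --version 2>&1 | grep -i "gpu support"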

    Submission script examples

    Simple CPU job script 

A simple case of a GROMACS job using a total of 640 CPU cores for 12 hours. The requested number of cores in this example does not fill all available cores on the allocated nodes; the job will execute 92 ranks on 3 nodes plus 91 ranks on 4 nodes. You can use this example if you know the exact number of ranks you want to use.

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -n 640

export SLURM_CPU_BIND=none

module load impi/2019.5
module load gromacs/2019.6

mpirun gmx_mpi mdrun MDRUNARGUMENTS

     

    Whole node CPU job script

If you want to use all cores on the allocated nodes, the batch system offers other options to request the number of nodes and the number of tasks per node. The example below results in 672 ranks.

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -N 7
#SBATCH --tasks-per-node 96

export SLURM_CPU_BIND=none

module load impi/2019.5
module load gromacs/2019.6

mpirun gmx_mpi mdrun MDRUNARGUMENTS

     

    GPU job script

The following script uses four thread-MPI ranks, one of which is dedicated to the long-range PME calculation. With the -gputasks 0001 keyword, the first three ranks offload their short-range non-bonded calculations to the GPU with ID 0, while the fourth (PME) rank offloads its calculations to the GPU with ID 1.

#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --partition=gpu-a100
#SBATCH --ntasks=72

export SLURM_CPU_BIND=none

module load gcc/11.3.0 intel/2023.0.0 cuda/11.8
module load gromacs/2023.0_tmpi

export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true

OMP_NUM_THREADS=9 gmx mdrun -ntomp 9 -ntmpi 4 -nb gpu -pme gpu -npme 1 -gputasks 0001 OTHER MDRUNARGUMENTS

If you are using an MPI version of GPU-accelerated GROMACS (non-thread-MPI, e.g. to take advantage of PLUMED), you can proceed in a similar fashion, but use the mpirun/mpiexec task launcher in front of the GROMACS binary. An example job script asking for 2 A100 GPUs across 2 nodes is shown below:

#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --partition=gpu-a100
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4

export SLURM_CPU_BIND=none

module load gcc/11.3.0 cuda/11.8 openmpi/gcc.11/4.1.4
module load gromacs/2022.5

export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_ENABLE_DIRECT_GPU_COMM=true

OMP_NUM_THREADS=9 mpiexec -np 8 -npernode 4 gmx_mpi mdrun -ntomp 9 -nb gpu -pme gpu -npme 1 -gpu_id 01 OTHER MDRUNARGUMENTS

    Whole node GPU job script

To set up a whole-node GPU job, use the -gputasks keyword.

#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --partition=gpu-a100
#SBATCH --ntasks=72

export SLURM_CPU_BIND=none

module load gcc/11.3.0 intel/2023.0.0 cuda/11.8
module load gromacs/2023.0_tmpi

export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true

OMP_NUM_THREADS=9 gmx mdrun -ntomp 9 -ntmpi 16 -gputasks 0000111122223333 MDRUNARGUMENTS

Note: The thread-MPI rank and OpenMP thread settings are chosen to achieve optimal performance. The number of ranks should be a multiple of the number of sockets, and the number of cores per node should be a multiple of the number of threads per rank.
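As a quick sanity check of that rule, the layout can be verified with simple shell arithmetic. The core and socket counts below are placeholders and should be replaced with the values of the actual node type:

# Placeholders: adjust to the hardware of the allocated node type
CORES_PER_NODE=144   # logical cores per node (assumption)
SOCKETS=2            # CPU sockets per node (assumption)
NTMPI=16             # thread-MPI ranks (-ntmpi)
NTOMP=9              # OpenMP threads per rank (-ntomp)

# The number of ranks should be a multiple of the number of sockets
(( NTMPI % SOCKETS == 0 )) || echo "warning: ranks are not a multiple of the sockets"
# The number of cores per node should be a multiple of the threads per rank
(( CORES_PER_NODE % NTOMP == 0 )) || echo "warning: cores per node are not a multiple of the threads per rank"
# The total number of threads should not exceed the cores of the node
(( NTMPI * NTOMP <= CORES_PER_NODE )) || echo "warning: node would be oversubscribed"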


    Gromacs-Plumed

    PLUMED is an open-source, community-developed library that provides a wide range of different methods, such as enhanced-sampling algorithms, free-energy methods and tools to analyze the vast amounts of data produced by molecular dynamics (MD) simulations. PLUMED works together with some of the most popular MD engines.

Since the migration of the CPU partition from CentOS to Rocky 9 Linux, all GROMACS-PLUMED modules have been combined with the normal GROMACS modules. For example, to use GROMACS 2023.0 with PLUMED, one can load gromacs/2023.0 and have access to both the normal (gmx_mpi) and the PLUMED-patched (gmx_mpi_plumed) binaries.

    PLUMED can be used to bias GROMACS simulations with an appropriate PLUMED data file supplied as input for the -plumed option for the gmx_mpi_plumed mdrun command:

#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --partition=cpu-clx
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72

export SLURM_CPU_BIND=none

module load gcc/11.3.0 cuda/11.8 openmpi/gcc.11/4.1.4
module load gromacs/2022.5

OMP_NUM_THREADS=2 mpiexec -np 144 -npernode 72 gmx_mpi mdrun -ntomp 2 -npme 1 -pin on -plumed plumed.dat OTHER MDRUNARGUMENTS

Not every GROMACS GPU option is compatible with PLUMED operations. For example, -update gpu, which can greatly accelerate plain GROMACS runs by forcing the coordinate update to occur on the GPU, will lead to incorrect results with PLUMED (gromacs 2022.4 patch updated · plumed/plumed2@20f8be2). Please familiarize yourself with the PLUMED/GROMACS/GPU limitations, watch the output warnings, and report any issues to NHR@ZIB staff for assistance.

    For additional information about PLUMED, please visit the official website.

    Analyzing results

    GROMACS Tools

GROMACS contains many tools for analysing your results. They can read trajectories (XTC, TNG or TRR format) as well as coordinate files (GRO, PDB, TPR) and write plots in the XVG format. A list of commands with short descriptions, organised by topic, can be found at the official website.
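A few typical examples, as a sketch (the file names below are GROMACS defaults or placeholders; most tools ask interactively which group to analyse):

# RMSD of a selected group along the trajectory
gmx rms -s topol.tpr -f traj_comp.xtc -o rmsd.xvg

# Extract thermodynamic quantities (temperature, pressure, ...) from the energy file
gmx energy -f ener.edr -o energy.xvg

# Post-process the trajectory, e.g. make molecules whole across periodic boundaries
gmx trjconv -s topol.tpr -f traj_comp.xtc -pbc mol -o traj_pbc.xtc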

    VMD

VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting. It is free of charge and includes source code.

    Python

The Python packages MDAnalysis and MDTraj can read and write GROMACS trajectory and coordinate files, and both offer a variety of commonly used analysis functions. Both packages integrate well with Python's data-science packages like NumPy, SciPy and Pandas, and with plotting libraries such as Matplotlib.

    Usage tips

    System preparation

Your tpr file (portable binary run input file) contains your initial structure, molecular topology and all of the simulation parameters. Tpr files are portable and can be copied from one computer to another, but you should always use the same version of mdrun and grompp. mdrun is able to use tpr files that were created with an older version of grompp, but this can cause unexpected results in your simulation.
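A minimal sketch of generating the tpr file with grompp (the input file names are placeholders); loading the same module for grompp and mdrun avoids the version issues mentioned above:

# Use the same GROMACS module for grompp and the later mdrun
module load impi/2019.5 gromacs/2019.6

# Combine parameters (.mdp), coordinates (.gro) and topology (.top) into a portable .tpr
gmx_mpi grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr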

    Running simulations

Simulations often take longer than the maximum walltime. Running mdrun with the -maxh option tells the program the requested walltime, and GROMACS will finish the simulation cleanly when it reaches 99% of this walltime. At that point, mdrun creates a new checkpoint file and properly closes all output files. The simulation can then easily be restarted from this checkpoint file.

    mpirun gmx_mpi mdrun MDRUNARGUMENTS -maxh 24

    Restarting simulations

To restart a simulation from a checkpoint file, use the same mdrun command as in the original simulation and add -cpi filename.cpt, where filename is the name of your most recent checkpoint file.

    mpirun gmx_mpi mdrun MDRUNARGUMENTS -cpi filename.cpt
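Combining this with the -maxh option from the previous section, a continuation job can look like the following sketch (partition, core count and checkpoint file name are placeholders):

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -n 640

export SLURM_CPU_BIND=none

module load impi/2019.5
module load gromacs/2019.6

# Continue from the most recent checkpoint and stop cleanly at ~99% of the walltime
mpirun gmx_mpi mdrun MDRUNARGUMENTS -cpi filename.cpt -maxh 12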

More detailed information can be found here.

     

    Performance 

GROMACS prints information about statistics and performance at the end of the md.log file, which usually also contains helpful tips to further improve the performance. The performance of the simulation is usually given in ns/day (the number of nanoseconds of MD trajectory simulated within a day).

More information about simulation performance and how to improve it can be found here.
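The final numbers can be read directly from the log file, for example (md.log is the default log file name; the exact layout of the summary may differ between versions):

# Print the ns/day and hours/ns figures reported at the end of the run
grep "Performance" md.log

# Show the full timing breakdown and the performance hints
tail -n 20 md.log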

    Special Performance Instructions for Emmy at GWDG

    Turbo-boost has been mostly disabled on Emmy at GWDG (partitions medium40, large40, standard96, large96, and huge96) in order to save energy. However, this has a particularly strong performance impact on GROMACS in the range of 20-40%. Therefore, we recommend that GROMACS jobs be submitted requesting turbo-boost to be enabled with the --constraint=turbo_on option given to srun or sbatch.
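For example, the constraint can be passed directly on the command line (jobscript.sh is a placeholder for your GROMACS job script), or added as an #SBATCH directive in the job script header:

# Request nodes with turbo-boost enabled (option accepted by both sbatch and srun)
sbatch --constraint=turbo_on jobscript.sh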

     

    Useful links

    • GROMACS Manuals and documentation

    • GROMACS Community Forums

    • Useful MD Tutorials for GROMACS

    • VMD Visual Molecular Dynamics

    References

    1. GROMACS User-Guide

    2. PLUMED Home

    {"serverDuration": 9, "requestCorrelationId": "1cbfcca14ce244849928b06b2ea3b276"}