
A versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

Description

GROMACS is a versatile package to perform molecular dynamics for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers and fluid dynamics.

Read more on the GROMACS home page.

For the manual, consult the GROMACS home page.

Strengths

  • GROMACS provides extremely high performance compared to all other programs.
  • GROMACS can make simultaneous use of the CPUs and GPUs available in a system. There are options to statically and dynamically balance the load between the different resources.
  • GROMACS is user-friendly, with topologies and parameter files written in clear text format (see the sketch after this list).
  • Both run input files and trajectories are independent of hardware endianness and can thus be read by any version of GROMACS.
  • GROMACS comes with a large selection of flexible tools for trajectory analysis.
  • GROMACS can be run in parallel, using either the standard MPI communication protocol or its own “Thread MPI” library for single-node workstations.
  • GROMACS contains several state-of-the-art algorithms.
  • GROMACS is Free Software, available under the GNU Lesser General Public License (LGPL).
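
As an illustration of the clear-text formats mentioned in the list above, a minimal topology file might look like the following sketch. The OPLS-AA force-field includes and the 216-molecule water box are assumptions chosen for illustration, not part of this installation:

    ; topol.top: minimal illustrative GROMACS topology (assumed example)
    #include "oplsaa.ff/forcefield.itp"
    #include "oplsaa.ff/spc.itp"

    [ system ]
    ; name of the system
    Water box

    [ molecules ]
    ; compound  #molecules
    SOL         216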


Weaknesses

  • To achieve its very high simulation speed, GROMACS performs little additional analysis on the fly.
  • It can be challenging to extract non-standard information about the simulated system.
  • Different versions sometimes differ in default parameters/methods, so reproducing simulations from an older version with a newer one can be difficult.
  • The additional tools and utilities shipped with GROMACS are sometimes not of the highest quality.

GPU support

Coming soon…

QuickStart

Environment modules

The following versions have been installed:

Version  Installation Path                          Modulefile             Compiler  Comment
2018.4   /sw/chem/gromacs/2018.4/skl/impi           gromacs/2018.4         intelmpi
2018.4   /sw/chem/gromacs/2018.4/skl/impi-plumed    gromacs/2018.4-plumed  intelmpi  with plumed
2019.6   /sw/chem/gromacs/2019.6/skl/impi           gromacs/2019.6         intelmpi
2019.6   /sw/chem/gromacs/2019.6/skl/impi-plumed    gromacs/2019.6-plumed  intelmpi  with plumed
2021.2   /sw/chem/gromacs/2021.2/skl/impi           gromacs/2021.2         intelmpi
2021.2   /sw/chem/gromacs/2021.2/skl/impi-plumed    gromacs/2021.2-plumed  intelmpi  with plumed
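
The installed GROMACS modules can also be listed directly with the standard module command:

module avail gromacs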


These modules can be loaded with a module load command. Note that the Intel MPI module must be loaded first:
module load impi/2019.5 gromacs/2019.6
This provides access to the binary gmx_mpi, which runs simulations via subcommands such as gmx_mpi mdrun.
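
To check that the environment is set up correctly, the binary can report its version and build information:

gmx_mpi --version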

To run simulations, the MPI runner should be used:

mpirun gmx_mpi mdrun MDRUNARGUMENTS
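
Here MDRUNARGUMENTS stands for the usual mdrun options of your run. As a minimal concrete sketch, assuming a run input file md.tpr created beforehand with gmx grompp, the following starts a run that stops gracefully shortly before a 12-hour wall-clock limit:

mpirun gmx_mpi mdrun -deffnm md -maxh 12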

Job Script Examples

  1. A simple case of a GROMACS job using a total of 640 CPU cores for 12 hours. The requested number of cores in this example does not fill all available cores on the allocated nodes: the job will execute 92 ranks on 3 nodes plus 91 ranks on 4 nodes. Use this example if you know the exact number of ranks you want to use. Either of the scripts below can be submitted with sbatch, as shown after the examples.

    #!/bin/bash
    #SBATCH -t 12:00:00
    #SBATCH -p standard96
    #SBATCH -n 640
    
    export SLURM_CPU_BIND=none
    
    module load impi/2019.5
    module load gromacs/2019.6
    
    mpirun gmx_mpi mdrun MDRUNARGUMENTS
  2. If you want to use all cores on the allocated nodes, the batch system offers options to request the number of nodes and the number of tasks per node instead. The example below results in 672 ranks (7 nodes × 96 tasks each).

    #!/bin/bash
    #SBATCH -t 12:00:00
    #SBATCH -p standard96
    #SBATCH -N 7
    #SBATCH --tasks-per-node 96
    
    export SLURM_CPU_BIND=none
    
    module load impi/2019.5
    module load gromacs/2019.6
    
    mpirun gmx_mpi mdrun MDRUNARGUMENTS
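
Either script can be saved to a file and submitted with sbatch; the file name gromacs_job.sh below is only an example:

sbatch gromacs_job.sh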