...

NAMD is a parallel application. At HLRN, it is recommended to use mpirun as the job starter for NAMD. An MPI module providing the mpirun command must be loaded before the NAMD module.



Version | Module file | Prerequisites | Compute partition
--------|-------------|---------------|-------------------
2.13 (deprecated) | namd/2.13 | impi/* (any version) | CPU CLX
3.0 | namd/3.0 | impi/2021.13 | CPU CLX, CPU Genoa
2.14 | namd/2.14 | impi/2021.11 cuda/11.8 | GPU A100
3.0.a | namd/3.0.a | impi/2021.13 cuda/11.8 | GPU A100, GPU PVC


File I/O Considerations

At run time, only a few files are involved in NAMD's I/O activities. For standard MD runs this is unlikely to stress the Lustre file system ($WORK), provided one condition is met: file metadata operations (stat, create, open, close, rename) must not occur at too short intervals. First and foremost, this applies to the management of NAMD restart files. Instead of having a new set of restart files created several times per second, the NAMD input parameter restartfreq should be chosen such that they are written only every 5 minutes, or at even longer intervals. For NAMD replica-exchange runs the situation can be more severe. Here we have already observed jobs where heavy metadata I/O on the individual "colvars.state" files located in every replica's subdirectory overloaded our Lustre metadata servers, resulting in a severe slowdown of the entire Lustre file system. Users are advised to set the corresponding NAMD input parameters such that each replica performs metadata I/O on these files no more often than really needed or, where affordable, such that these files are written only at the end of the run.
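As a rough guide for choosing restartfreq, the target interval can be converted into a step count from the run's sustained speed. This is only a sketch: the 500 timesteps per wall-clock second below is a made-up figure; measure your own job's speed from a short test run first.

```shell
# Pick restartfreq so a new set of restart files appears about every 5 minutes.
steps_per_second=500   # hypothetical sustained speed, measure your own run
interval_seconds=300   # target interval: 5 minutes
echo "restartfreq $(( steps_per_second * interval_seconds ))"
```

With these example numbers the result is restartfreq 150000; rounding to a convenient value is fine, as long as the resulting interval stays in the range of minutes rather than seconds.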

Job Script Examples

  1. For Intel Skylake compute nodes (Göttingen only) – simple case of a NAMD job using a total of 200 CPU cores distributed over 5 nodes running 40 tasks each
    Codeblock
    #!/bin/bash
    #SBATCH -t 12:00:00
    #SBATCH -p medium40
    #SBATCH -N 5
    #SBATCH --tasks-per-node 40
    
    export SLURM_CPU_BIND=none
    
    module load impi/2019.5
    module load namd/2.13
    
    mpirun namd2 inputfile > outputfile
  2. For Intel Cascade Lake compute nodes – simple case of a NAMD job using a total of 960 CPU cores distributed over 10 nodes running 96 tasks each

    Codeblock
    #!/bin/bash
    #SBATCH -t 12:00:00
    #SBATCH -p cpu-clx
    #SBATCH -N 10
    #SBATCH --tasks-per-node 96
    
    export SLURM_CPU_BIND=none
    
    module load impi/2021.13
    module load namd/3.0
    
    mpirun namd3 inputfile > outputfile
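The core count in this example follows directly from the node and task settings in the script; a quick sanity check with shell arithmetic:

```shell
# 10 nodes x 96 tasks per node = 960 MPI tasks (one per CPU core)
nodes=10
tasks_per_node=96
echo $(( nodes * tasks_per_node ))
```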


  3. A set of input files for a small and short replica-exchange simulation is included with the NAMD installation. A description can be found in the NAMD User's Guide. The following job script executes this replica-exchange simulation on 2 nodes using 8 replicas (24 tasks per replica)

    Codeblock
    #!/bin/bash
    #SBATCH -t 0:20:00
    #SBATCH -p cpu-clx
    #SBATCH -N 2
    #SBATCH --tasks-per-node 96
    
    export SLURM_CPU_BIND=none
    
    module load impi/2021.13
    module load namd/2.14
    
    cp -r /sw/chem/namd/2.14/skl/lib/replica .
    cd replica/example/
    mkdir output
    (cd output; mkdir 0 1 2 3 4 5 6 7)
    
    mpirun namd2 +replicas 8 job0.conf +stdout output/%d/job0.%d.log
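The task geometry of the replica-exchange example works out as follows, assuming (as +replicas does) that the MPI tasks are divided evenly across the replicas:

```shell
# 2 nodes x 96 tasks per node = 192 MPI tasks, split across 8 replicas
nodes=2
tasks_per_node=96
replicas=8
echo $(( nodes * tasks_per_node / replicas ))   # tasks per replica
```

The result, 24 tasks per replica, matches the figure quoted in the example's description; if you change the node count or replica count, keep the total task count divisible by the number of replicas.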


...