...
NAMD is a parallel application. It is recommended to use mpirun as the job starter for NAMD at HLRN. An MPI module providing the mpirun
command needs to be loaded ahead of the NAMD module.
Version | Module file | Prerequisites |
---|---|---|
**Deprecated version** | | |
2.13 | namd/2.13 | impi/* (any version) |
**CPU CLX partition** | | |
3.0 | namd/3.0 | impi/2021.13 |
**CPU Genoa partition** | | |
**GPU A100 partition** | | |
2.14 | namd/2.14 | impi/2021.11 cuda/11.8 |
3.0.a | namd/3.0.a | impi/2021.13 cuda/11.8 |
**GPU PVC partition** | | |
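For example, to prepare a NAMD 3.0 run on the CLX partition, the module pairing from the table is loaded in this order:

```bash
# The MPI module (providing mpirun) must come before the NAMD module
module load impi/2021.13
module load namd/3.0
```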
File I/O Considerations
During run time, only a few files are involved in NAMD's I/O activities. For standard MD runs this is unlikely to impose stress on the Lustre file system (`$WORK`), provided one condition is met: file metadata operations (`stat`, `create`, `open`, `close`, `rename`) should not occur at too short time intervals. First and foremost, this applies to the management of NAMD restart files. Instead of having a new set of restart files created several times per second, the NAMD input parameter `restartfreq` should be chosen such that restart files are written only every 5 minutes, or at even longer intervals.

For NAMD replica-exchange runs the situation can be more severe. We have already observed jobs where heavy metadata I/O on the individual `colvars.state` files located in every replica's subdirectory overloaded our Lustre metadata servers, resulting in a severe slowdown of the entire Lustre file system. Users are advised to set the corresponding NAMD input parameters such that each replica performs metadata I/O on these files in intervals no shorter than really needed or, where affordable, such that these files are written only at the end of the run.
Job Script Examples
For Intel Cascade Lake compute nodes – simple case of a NAMD job using a total of 960 CPU cores distributed over 10 nodes running 96 tasks each:
```bash
#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p cpu-clx
#SBATCH -N 10
#SBATCH --tasks-per-node 96

export SLURM_CPU_BIND=none

module load impi/2021.13
module load namd/3.0

mpirun namd3 inputfile > outputfile
```
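Assuming the script above is saved as namd_clx.slurm (file name hypothetical), it is submitted in the usual way:

```bash
sbatch namd_clx.slurm
```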
A set of input files for a small and short replica-exchange simulation is included with the NAMD installation. A description can be found in the NAMD User's Guide. The following job script executes this replica-exchange simulation on 2 nodes using 8 replicas (24 tasks per replica):

```bash
#!/bin/bash
#SBATCH -t 0:20:00
#SBATCH -p cpu-clx
#SBATCH -N 2
#SBATCH --tasks-per-node 96

export SLURM_CPU_BIND=none

module load impi/2021.13
module load namd/2.14

cp -r /sw/chem/namd/2.14/skl/lib/replica .
cd replica/example/
mkdir output
(cd output; mkdir 0 1 2 3 4 5 6 7)
mpirun namd2 +replicas 8 job0.conf +stdout output/%d/job0.%d.log
```
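Following the `+stdout output/%d/job0.%d.log` pattern above, each replica writes its own log file into its numbered subdirectory; a quick sanity check after the run could be:

```bash
# Expect 8 log files, one per replica subdirectory 0..7
ls output/*/job0.*.log
```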
For Intel Skylake compute nodes (Göttingen only) – simple case of a NAMD job using a total of 200 CPU cores distributed over 5 nodes running 40 tasks each:

```bash
#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p medium40
#SBATCH -N 5
#SBATCH --tasks-per-node 40
export SLURM_CPU_BIND=none
module load impi/2019.5
module load namd/2.13
mpirun namd2 inputfile > outputfile
```
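The table above also lists GPU builds. A minimal sketch for a single-GPU NAMD 3 run on the A100 partition follows; the partition name gpu-a100, the thread count, and the resource flags are assumptions to be checked against the current site documentation:

```bash
#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p gpu-a100          # partition name is an assumption; verify locally
#SBATCH -N 1
#SBATCH --gpus-per-node 1    # request a single A100

export SLURM_CPU_BIND=none

# Module pairing taken from the GPU A100 row of the table above
module load impi/2021.13 cuda/11.8
module load namd/3.0.a

# GPU-resident NAMD 3: one process, a few worker threads, one device.
# If the installed build is MPI-based, it may need mpirun as in the
# CPU examples above.
namd3 +p8 +devices 0 inputfile > outputfile
```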
...