
    NAMD

    Aug. 14, 2025

    Description

    NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulations of large biomolecular systems using force fields. The code was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign.

    Current NAMD documentation and other material can be found on the NAMD website.

    Prerequisites

    NAMD is distributed free of charge for non-commercial purposes only. Users need to agree to the NAMD license. This includes proper citation of the code in publications.

    Only members of the namd user group have access to NAMD executables provided by HLRN. To have their user ID included in this group, users can send a message to their consultant or to our support.
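
    To check whether your account is already in this group, listing your Unix group memberships on a login node is usually enough. The following is a minimal sketch using standard Linux tools:

      # print the groups of the current user; access requires "namd" to appear here
      id -Gn | tr ' ' '\n' | grep -x namd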

    Modules

    The environment modules shown in the table below are available to include NAMD executables in the directory search path. To see what is installed and which version is the current default at HLRN, run module avail namd.

    NAMD is a parallel application. It is recommended to use mpirun as the job starter for NAMD at HLRN. An MPI module providing the mpirun command needs to be loaded ahead of the NAMD module.
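
    For example, on the CPU CLX partition a typical module setup could look like the following sketch; the version numbers are taken from the table below and should be adjusted to whatever module avail namd reports:

      # show installed NAMD versions and the current default
      module avail namd

      # load an MPI module first so that mpirun is available, then NAMD
      module load impi/2021.13
      module load namd/3.0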



    Version    Module file    Prerequisites

    Deprecated versions
    2.13       namd/2.13      impi/*  (any version)
    2.14       namd/2.14      impi/2021.11  cuda/11.8
    3.0.a      namd/3.0.a     impi/2021.13  cuda/11.8
    3.0.1      namd/3.0.1     cuda/11.8

    CPU CLX partition
    3.0        namd/3.0       impi/2021.13

    CPU Genoa partition
    (no NAMD module listed)

    GPU A100 partition
    3.0        namd/3.0       intel/2024.2  cuda/12.9

    GPU PVC partition
    (no NAMD module listed)

    File I/O Considerations

    At run time, only a few files are involved in NAMD's I/O activities. For standard MD runs this is unlikely to put stress on the Lustre file system ($WORK), provided one condition is met: file metadata operations (stat, create, open, close, rename) must not occur at too short time intervals. First and foremost, this applies to the management of NAMD restart files. Instead of having a new set of restart files created several times per second, the NAMD input parameter restartfreq should be chosen such that they are written only every 5 minutes, or at even longer intervals.

    For NAMD replica-exchange runs the situation can be more severe. We have already observed jobs where heavy metadata I/O on the individual "colvars.state" files located in every replica's subdirectory overloaded our Lustre metadata servers, resulting in a severe slowdown of the entire Lustre file system. Users are advised to set the corresponding NAMD input parameters such that each replica performs metadata I/O on these files at intervals no shorter than actually needed or, where affordable, such that these files are written only at the end of the run.
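
    As an illustration, the relevant part of a NAMD input file could look like the following sketch; the output prefix and step counts are placeholders and should be chosen such that, at the simulation speed actually achieved, restart files are written at most every few minutes:

      # excerpt of a NAMD configuration file -- values are placeholders
      outputName      myrun        ;# hypothetical output prefix
      restartfreq     50000        ;# e.g. at ~100 steps/s this is roughly one restart write per 8 minutes
      dcdfreq         5000         ;# trajectory frames, also kept at long intervals
      outputEnergies  5000         ;# energy output interval in steps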

    Job Script Examples

    1. For Intel Cascade Lake compute nodes – simple case of a NAMD job using a total of 960 CPU cores distributed over 10 nodes running 96 tasks each

      #!/bin/bash
      #SBATCH -t 12:00:00
      #SBATCH -p cpu-clx
      #SBATCH -N 10
      #SBATCH --tasks-per-node 96
      
      export SLURM_CPU_BIND=none
      
      module load impi/2021.13 
      module load namd/3.0
      
      mpirun namd3 inputfile > outputfile
    2. A set of input files for a small and short replica-exchange simulation is included with the NAMD installation. A description can be found in the NAMD User's Guide. The following job script executes this replica-exchange simulation on 2 nodes using 8 replicas (24 tasks per replica)

      #!/bin/bash
      #SBATCH -t 0:20:00
      #SBATCH -p cpu-clx
      #SBATCH -N 2
      #SBATCH --tasks-per-node 96
      
      export SLURM_CPU_BIND=none
      
      module load impi/2021.13
      module load namd/2.14
      
      cp -r /sw/chem/namd/2.14/skl/lib/replica .
      cd replica/example/
      mkdir output
      (cd output; mkdir 0 1 2 3 4 5 6 7)
      
      mpirun namd2 +replicas 8 job0.conf +stdout output/%d/job0.%d.log

    3. For NAMD 3.0.1, only single-node GPU jobs are currently supported (https://www.ks.uiuc.edu/Research/namd/3.0.1/announce.html), but multiple GPUs within a node can be used with the new GPU-resident simulation mode by setting the +devices flag:

      #!/bin/bash
      
      #SBATCH -N 1
      #SBATCH --gres=gpu:4
      #SBATCH --ntasks=36
      #SBATCH -p gpu-a100
      
      module load cuda/11.8
      module load namd/3.0.1
      
      charmrun +p 36 namd3 +setcpuaffinity +devices 0,1,2,3 input.namd

