    VASP

    Aug. 19, 2025

    Description

    The Vienna Ab initio Simulation Package (VASP) is a first-principles code for electronic structure calculations and molecular dynamics simulations in materials science and engineering. It is based on plane wave basis sets combined with the projector-augmented wave method or pseudopotentials. VASP is maintained by the Computational Materials Physics Group at the University of Vienna.

    More information is available on the VASP website and from the VASP wiki.

    Usage Conditions

    Access to VASP executables is restricted to users who satisfy the following criteria:

    • The user must be a member of a research group owning a VASP license.
    • The user must employ VASP only for work on projects of this research group.
    • The user must be registered in Vienna as a VASP user of this research group. This is done via the VASP Portal where new users register using their institutional e-mail address.

    Only members of the groups vasp5_2 or vasp6 have access to the VASP executables. To have their user ID added to these groups, users can ask their consultant or submit a support request. Users should make sure that they are already registered in Vienna beforehand, as this will be verified. Users whose research group has not upgraded its VASP license to version 6.x cannot become members of the vasp6 group.
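
    Whether your account is already in one of these groups can be checked on a login node, for example with the following commands (a minimal sketch using standard Linux tools; the group names are those listed above):

    # Print all groups of the current user and filter for the vasp* groups (vasp5_2, vasp6, ...)
    id -Gn | tr ' ' '\n' | grep '^vasp'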

    Modules

    VASP is an MPI-parallel application. We recommend using mpirun as the job starter for VASP. The environment module providing the mpirun command associated with a particular VASP installation must be loaded before the environment module for VASP.

    VASP Version              | User Group | VASP Modulefile | Compute Partitions        | MPI Requirement              | Runs on CPU / GPU | Supported Features
    --------------------------|------------|-----------------|---------------------------|------------------------------|-------------------|-------------------
    5.4.4 with patch 16052018 | vasp5_2    | vasp/5.4.4.p1   | CPU CLX (CentOS 7)        | impi/2019.5                  | yes / no          |
    6.4.1                     | vasp6      | vasp/6.4.1      | CPU CLX (CentOS 7)        | impi/2021.7.1                | yes / no          | OpenMP, HDF5, Wannier90, Libxc
    6.4.2                     | vasp6      | vasp/6.4.2      | CPU CLX (CentOS 7)        | impi/2021.7.1                | yes / no          | OpenMP, HDF5, Wannier90, Libxc, DFTD4 van-der-Waals functional
    5.4.4                     | vasp5_2    | vasp/5.4.4.p1   | CPU CLX (Rocky Linux 9)   | impi/2021.15                 | yes / no          |
    6.4.1                     | vasp6      | vasp/6.4.1      | GPU A100                  | nvhpc-hpcx/23.1              | no / yes          | OpenMP, HDF5, Wannier90
    6.4.3                     | vasp6      | vasp/6.4.3      | CPU CLX (Rocky Linux 9)   | impi/2021.13                 | yes / no          | OpenMP, HDF5, Wannier90, Libxc, DFTD4 van-der-Waals functional, libbeef
    6.4.3                     | vasp6      | vasp/6.4.3      | CPU Genoa (Rocky Linux 9) | openmpi/gcc/5.0.3            | yes / no          | OpenMP, HDF5, Wannier90, Libxc, DFTD4 van-der-Waals functional, libbeef
    6.5.1                     | vasp6_5    | vasp/6.5.1      | CPU CLX (Rocky Linux 9)   | impi/2021.13 or impi/2021.14 | yes / no          | OpenMP, HDF5, Wannier90, Libxc, DFTD4 van-der-Waals functional, libbeef
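
    For example, to use the vasp/6.4.3 installation on the CPU CLX partition, the matching MPI module from the table above has to be loaded first (a minimal sketch; pick the module pair that matches your target partition):

    # Load the MPI module required by this VASP installation, then VASP itself
    module load impi/2021.13
    module load vasp/6.4.3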

    Executables

    Our installations of VASP comprise the regular executables (vasp_std, vasp_gam, vasp_ncl) and, optionally, community-driven modifications of VASP, as shown in the table below. They are available in the directory added to the PATH environment variable by one of the vasp environment modules.

    Executable               | Description
    -------------------------|------------
    vasp_std                 | multiple k-points (formerly vasp_cd)
    vasp_gam                 | Gamma-point only (formerly vasp_gamma_cd)
    vasp_ncl                 | non-collinear calculations, spin-orbit coupling (formerly vasp)
    vaspsol_[std|gam|ncl]    | set of VASPsol-enabled executables (only for v. 5.4.4)
    vasptst_[std|gam|ncl]    | set of VTST-enabled executables (only for v. 5.4.4)
    vasptstsol_[std|gam|ncl] | set of executables combining these modifications (only for v. 5.4.4)

    N.B.: The VTST script collection is not available from the vasp environment modules. Instead, it is provided by the vtstscripts environment module(s).
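
    To check which of these executables a given installation provides, load the corresponding modules and list the directory they add to PATH (a sketch, assuming the vasp/5.4.4.p1 installation with impi/2021.15 as listed in the module table above):

    module load impi/2021.15
    module load vasp/5.4.4.p1
    # List the VASP executables in the directory added to PATH by the vasp module
    ls "$(dirname "$(which vasp_std)")"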

    Example Jobscripts

    The following example shows a job script that will run on the Nvidia A100 GPU nodes (Berlin). By default, VASP uses one GPU per MPI task. If you plan to use 4 GPUs per node, you need to set 4 MPI tasks per node. Then, set the number of OpenMP threads to 18 (because 4x18=72, which is the number of CPU cores on the GPU A100 partition) to speed up your calculation. This, however, also requires proper process pinning.

    For Nvidia A100 GPU compute nodes
    #!/bin/bash
    #SBATCH --time=12:00:00
    #SBATCH --nodes=2
    #SBATCH --tasks-per-node=4
    #SBATCH --cpus-per-task=18
    #SBATCH --partition=gpu-a100
    
    # Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
    
    # Binding OpenMP threads
    export OMP_PLACES=cores
    export OMP_PROC_BIND=close
    
    # Avoid hcoll as MPI collective algorithm
    export OMPI_MCA_coll="^hcoll"
    
    # You may need to adjust this limit, depending on the case
    export OMP_STACKSIZE=512m 
    
    module load nvhpc-hpcx/23.1
    module load vasp/6.4.1  
    
    # ppr:2:socket starts 2 MPI processes per socket (4 per node); carefully adjust this if you do not use 4 MPI processes per node
    mpirun --bind-to core --map-by ppr:2:socket:PE=${SLURM_CPUS_PER_TASK} vasp_std

    The following job script exemplifies how to run VASP 6.4.3 with OpenMP threads. Here, we use 2 OpenMP threads and 48 MPI tasks per node (the product of these two numbers should ideally equal the number of CPU cores per node).

    In many cases, running VASP with parallelization over MPI alone already yields good performance. However, certain application cases can benefit from hybrid parallelization over MPI and OpenMP. A detailed discussion is found here. If you opt for hybrid parallelization, please pay attention to process pinning, as shown in the examples below.

    For compute nodes of CPU CLX
    #!/bin/bash
    #SBATCH --time=12:00:00
    #SBATCH --nodes=2
    #SBATCH --tasks-per-node=48
    #SBATCH --cpus-per-task=2
    #SBATCH --partition=cpu-clx
    
    export SLURM_CPU_BIND=none
    
    # Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    
    # Adjust the maximum stack size of OpenMP threads
    export OMP_STACKSIZE=512m
    
    # Binding OpenMP threads
    export OMP_PLACES=cores
    export OMP_PROC_BIND=close
    
    # Binding MPI tasks
    export I_MPI_PIN=yes
    export I_MPI_PIN_DOMAIN=omp
    export I_MPI_PIN_CELL=core
    
    module load impi/2021.13
    module load vasp/6.4.3
    
    # This is to avoid the (harmless) warning message "MPI startup(): warning: I_MPI_PMI_LIBRARY will be ignored since the hydra process manager was found"
    unset I_MPI_PMI_LIBRARY
    
    # Our tests have shown that VASP performs better with psm2 as the libfabric provider
    # Check whether this also applies to your case
    # To stick to the default provider, comment out the following line
    export FI_PROVIDER=psm2
    
    mpirun vasp_std

    Here is essentially the same example, but for the compute nodes of the CPU Genoa partition:

    For compute nodes of CPU Genoa
    #!/bin/bash
    #SBATCH --time=12:00:00
    #SBATCH --nodes=2
    #SBATCH --tasks-per-node=96
    #SBATCH --cpus-per-task=2
    #SBATCH --partition=cpu-genoa
    
    export SLURM_CPU_BIND=none
    
    # Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    
    # Adjust the maximum stack size of OpenMP threads
    export OMP_STACKSIZE=512m
    
    # Binding OpenMP threads
    export OMP_PLACES=cores
    export OMP_PROC_BIND=close
    
    module load openmpi/gcc/5.0.3
    module load vasp/6.4.3  
    
    # Do not use srun in combination with export SLURM_CPU_BIND=none
    # Important: mpirun is used here to start the MPI processes; the pinning is controlled by the mapping options on the following line
    mpirun --bind-to core --map-by ppr:${SLURM_NTASKS_PER_NODE}:node:pe=${OMP_NUM_THREADS} vasp_std
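
    After saving one of the job scripts above to a file (for example vasp_job.slurm, a name chosen here only for illustration), it can be submitted and monitored with the usual Slurm commands:

    sbatch vasp_job.slurm   # submit the job script to the batch system
    squeue -u $USER         # check the status of your jobs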