
    exciting

    Description

exciting is an ab initio code that implements density-functional theory (DFT), capable of reaching microhartree precision. As its name suggests, exciting has a strong focus on excited-state properties. Among its features are:

    • G0W0 approximation;

• Solution of the Bethe-Salpeter equation (BSE) to compute optical properties;

    • Time-dependent DFT (TDDFT) in both frequency and time domains;

    • Density-functional perturbation theory for lattice vibrations.

    exciting is an open-source code, released under the GPL license.

More information can be found on the official website: https://exciting-code.org/

    Modules

    exciting is currently available only on Lise. The standard species files deployed with exciting are located in $EXCITING_SPECIES. If you wish to use a different set, please refer to the manual.
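As an illustration of how the species files are referenced, the sketch below writes a minimal ground-state input whose speciespath attribute points at $EXCITING_SPECIES. This is only a sketch, not a recommended setup: the diamond structure, lattice constant, and groundstate attributes (ngridk, rgkmax) are illustrative values, and the module pair is one row taken from the table below.

# A minimal sketch, assuming the sodium-alpha build; any module pair from the table works.
# Loading the module defines $EXCITING_SPECIES.
module load impi/2021.15 exciting/011-sodium-alpha

# exciting reads input.xml from the working directory. The heredoc delimiter is
# unquoted, so ${EXCITING_SPECIES} is expanded into a literal path when the file
# is written. Structure and parameters (diamond, 4x4x4 k-grid) are illustrative.
cat > input.xml <<EOF
<input>
  <title>diamond - minimal ground-state sketch</title>
  <structure speciespath="${EXCITING_SPECIES}">
    <crystal scale="6.7468">
      <basevect>0.0 0.5 0.5</basevect>
      <basevect>0.5 0.0 0.5</basevect>
      <basevect>0.5 0.5 0.0</basevect>
    </crystal>
    <species speciesfile="C.xml">
      <atom coord="0.00 0.00 0.00"/>
      <atom coord="0.25 0.25 0.25"/>
    </species>
  </structure>
  <groundstate ngridk="4 4 4" rgkmax="7.0"/>
</input>
EOF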

Version      | Module file               | Requirement       | Compute Partitions        | Features                                 | CPU/GPU
fluorine     | exciting/009-fluorine     | impi/2021.7.1     | CentOS 7                  | MPI, OpenMP, MKL (including FFTW)        | /
neon-20      | exciting/010-neon         | impi/2021.7.1     | CentOS 7                  | MPI, OpenMP, MKL (including FFTW)        | /
neon-21      | exciting/010-neon-21      | impi/2021.7.1     | CentOS 7                  | MPI, OpenMP, MKL (including FFTW)        | /
neon-21      | exciting/010-neon-21      | impi/2021.13      | CPU CLX - Rocky Linux 9   | MPI, OpenMP, MKL (including FFTW)        | /
neon-21      | exciting/010-neon-21      | openmpi/gcc/5.0.3 | CPU Genoa - Rocky Linux 9 | MPI, OpenMP, MKL (including FFTW)        | /
sodium-alpha | exciting/011-sodium-alpha | impi/2021.15      | CPU Genoa - Rocky Linux 9 | MPI, OpenMP, MKL (supporting ScaLAPACK)  | /
sodium-alpha | exciting/011-sodium-alpha | openmpi/gcc/5.0.3 | CPU Genoa - Rocky Linux 9 | MPI, OpenMP, AOCL (supporting ScaLAPACK) | /

N.B.: The sodium-alpha version is a pre-release and should therefore be used with caution: not all unit and integration tests in the test suite currently pass.
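Whichever row of the table you pick, load the requirement first and then the exciting module itself. A quick interactive check might look as follows (module names taken from the table; the final line assumes the module places the exciting binary on PATH):

# Example: the neon-21 build for the CPU CLX partition (Rocky Linux 9)
module load impi/2021.13
module load exciting/010-neon-21
module list       # confirm that both modules are loaded
which exciting    # assumption: the module prepends the exciting binary to PATH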

    Example Jobscripts

    For compute nodes - CPU CLX - Rocky Linux 9
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --partition=cpu-clx
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=exciting

# Load exciting and the required modules.
# Check the table above to find which module to load, depending on the version to be used.
module load impi/2021.15
module load exciting/011-sodium-alpha

# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m

# Do not use the CPU binding provided by slurm
export SLURM_CPU_BIND=none

# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close

# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core

# Important: do not use srun when SLURM_CPU_BIND=none in combination with the pinning settings defined above
mpirun exciting
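With these settings, each node runs 24 MPI tasks with 4 OpenMP threads each, i.e. 24 x 4 = 96 cores, matching the core count of a CLX node. Assuming the script is saved as exciting_clx.slurm (a hypothetical filename), submission and a status check look like this:

sbatch exciting_clx.slurm   # hypothetical filename, choose your own
squeue -u $USER             # show your queued and running jobs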

     

    For compute nodes - CPU Genoa - Rocky Linux 9
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --partition=cpu-genoa
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=12
#SBATCH --cpus-per-task=16
#SBATCH --job-name=exciting

# Load exciting and the required modules.
# Check the table above to find which module to load, depending on the version to be used.
module load openmpi/gcc/5.0.3
module load exciting/011-sodium-alpha

# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m

# Do not use the CPU binding provided by slurm
export SLURM_CPU_BIND=none

# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close

# For unbuffered I/O, uncomment the following line
# export GFORTRAN_UNBUFFERED_ALL=1

# Do not use srun combined with SLURM_CPU_BIND=none.
# Important: here we use mpirun to start the MPI processes; pinning is performed by the options on the following line
mpirun --bind-to core --map-by ppr:${SLURM_NTASKS_PER_NODE}:node:pe=${OMP_NUM_THREADS} exciting
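For reference, --map-by ppr:12:node:pe=16 tells Open MPI to place 12 processes per node and to bind each rank to 16 consecutive cores, so each node runs 12 x 16 = 192 cores and the 3-node job uses 576 cores in total. Submission works as for the CLX script (hypothetical filename):

# ppr:12:node:pe=16 -> 12 processes per node, 16 processing elements (cores) per rank
sbatch exciting_genoa.slurm   # hypothetical filename, choose your own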
    {"serverDuration": 10, "requestCorrelationId": "9d466ca465924db19971734988d2bc75"}