exciting

Description

exciting is an ab initio code that implements density-functional theory (DFT), capable of reaching microhartree precision. As its name suggests, exciting has a strong focus on excited-state properties. Among its features are:

  • G0W0 approximation;
  • Solution to the Bethe-Salpeter equation (BSE), to compute optical properties;
  • Time-dependent DFT (TDDFT) in both frequency and time domains;
  • Density-functional perturbation theory for lattice vibrations.

exciting is an open-source code, released under the GPL license.

More information can be found on the official website: https://exciting-code.org/

Modules

exciting is currently available only on Lise. The standard species files deployed with exciting are located in the directory pointed to by $EXCITING_SPECIES, as shown in the sketch below. If you wish to use a different set, please refer to the manual.
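A minimal sketch of inspecting the standard species set; it assumes that loading one of the exciting modules from the table below (here exciting/010-neon-21) sets $EXCITING_SPECIES:

module load impi/2021.13
module load exciting/010-neon-21

# $EXCITING_SPECIES points to the directory with the standard species files
echo "$EXCITING_SPECIES"
ls "$EXCITING_SPECIES"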

The most recent compiled version is neon, built with the Intel oneAPI compiler (v. 2021.2) and linked against Intel MKL (including FFTW). N.B.: exciting fluorine is also available.

Each exciting module depends on an Intel MPI module; the required impi version for each build is listed in the table below.

exciting   | Module file            | Requirement    | Compute Partitions | Features                           | CPU / GPU | Lise / Emmy
-----------|------------------------|----------------|--------------------|------------------------------------|-----------|------------
fluorine   | exciting/009-fluorine  | impi/2021.7.1  | CentOS 7           | MPI, OpenMP, MKL (including FFTW)  | yes / no  | yes / no
neon-20    | exciting/010-neon      | impi/2021.7.1  | CentOS 7           | MPI, OpenMP, MKL (including FFTW)  | yes / no  | yes / no
neon-21    | exciting/010-neon-21   | impi/2021.7.1  | CentOS 7           | MPI, OpenMP, MKL (including FFTW)  | yes / no  | yes / no
neon-21    | exciting/010-neon-21   | impi/2021.13   | Rocky Linux 9      | MPI, OpenMP, MKL (including FFTW)  | yes / no  | yes / no
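The following sketch shows how a module can be loaded interactively; module names are taken from the table above, and module avail can be used to list what is actually installed:

# List the installed exciting versions
module avail exciting

# Load the Intel MPI requirement first, then the matching exciting module
# (here the Rocky Linux 9 build from the table above)
module load impi/2021.13
module load exciting/010-neon-21
module list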

Example Jobscripts

For compute nodes with Rocky Linux 9
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --partition=cpu-clx
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=exciting
 
module load impi/2021.13
# Load exciting neon 
# Check the table above to find which module to load, depending on the version to be used
module load exciting/010-neon-21
 
# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
  
# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m
 
# Do not use the CPU binding provided by slurm
export SLURM_CPU_BIND=none
  
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close
  
# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core

# Important: Do not use srun when SLURM_CPU_BIND=none in combination with the pinning settings defined above
mpirun exciting
For compute nodes with CentOS 7
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --partition standard96
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=exciting
 
module load impi/2021.7.1 
# Load exciting neon 
# Check the table above to find which module to load, depending on the version to be used
module load exciting/010-neon-21
 
# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
  
# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m
 
# Do not use the CPU binding provided by slurm
export SLURM_CPU_BIND=none
  
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close
  
# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core

# Important: Do not use srun when SLURM_CPU_BIND=none in combination with the pinning settings defined above
mpirun exciting
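Assuming one of the jobscripts above is saved as, e.g., exciting_job.slurm (the file name is arbitrary), it can be submitted and monitored with the usual Slurm commands:

sbatch exciting_job.slurm   # submit the job
squeue -u $USER             # check its status in the queue
# By default, standard output is written to slurm-<jobid>.out in the submission directory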