SIESTA
Description
As the official webpage states, SIESTA is both a method and its computer-program implementation for performing efficient electronic structure calculations and ab initio molecular dynamics simulations of molecules and solids. SIESTA's efficiency stems from its basis of localized atomic orbitals.
Among its features are:
- Total and partial energies
- Atomic forces and the stress tensor
- Electric dipole moment and dielectric polarization
- Atomic, orbital, and bond populations (Mulliken)
- Electron density, band structure, local and orbital-projected density of states
- Geometry relaxation, molecular dynamics, and phonons
- Spin-polarized calculations (collinear or not)
- Real-time time-dependent density functional theory (TDDFT)
Modules
| SIESTA version | Module file | Requirements | Compute Partitions | Features |
|---|---|---|---|---|
| 5.2.2 | siesta/5.2.2 | | cpu-clx | MPI, OpenMP, NetCDF-4 MPI-IO, ELSI, DFT-D3 |
| 5.2.2 | siesta/5.2.2 | | cpu-genoa | MPI, OpenMP, NetCDF-4 MPI-IO, ELSI, DFT-D3 |
The licensing conditions of each feature added to SIESTA can be found in the $SIESTA_ROOT/share directory.
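For example, after loading the module you can list the bundled license texts (this assumes the module file sets $SIESTA_ROOT, as the path above implies):
module load siesta/5.2.2
ls $SIESTA_ROOT/share    # license texts of the bundled components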
Example Jobscripts
For compute nodes in the cpu-clx partition:
#!/bin/bash
#SBATCH --time 0-12
#SBATCH --partition=cpu-clx
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=96
#SBATCH --cpus-per-task=1
#SBATCH --job-name=siesta
# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=1g
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close
# Do not use the CPU binding provided by slurm
export SLURM_CPU_BIND=none
# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core
module load impi/2021.14
module load siesta/5.2.2
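# Start SIESTA in parallel; the .fdf input is read from standard input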
mpirun siesta < input.fdf > output
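The script above runs pure MPI with one task per core. For a hybrid MPI/OpenMP run, reduce --ntasks-per-node and raise --cpus-per-task so their product still covers the 96 cores of a cpu-clx node. A minimal sketch of the lines that change, assuming an example 24 x 4 split (any split whose product is 96 works):
#SBATCH --ntasks-per-node=24   # 24 MPI ranks per node instead of 96
#SBATCH --cpus-per-task=4      # 4 OpenMP threads per rank (24 x 4 = 96 cores)
# The exports above need no change: OMP_NUM_THREADS follows
# SLURM_CPUS_PER_TASK, and I_MPI_PIN_DOMAIN=omp then pins each
# rank to its own 4-core domain.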
For compute nodes in the cpu-genoa partition:
#!/bin/bash
#SBATCH --time 0-12
#SBATCH --partition=cpu-genoa
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=192
#SBATCH --cpus-per-task=1
#SBATCH --job-name=siesta
# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=1g
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close
module load openmpi/gcc/5.0.3
module load siesta/5.2.2
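# Binding MPI tasks: with Open MPI the pinning is done by mpirun itself;
# "--map-by ppr:<ranks>:node:pe=<cores>" gives each rank a block of
# ${OMP_NUM_THREADS} cores, and "--bind-to core" pins it there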
mpirun --bind-to core --map-by ppr:${SLURM_NTASKS_PER_NODE}:node:pe=${OMP_NUM_THREADS} siesta < input.fdf > output
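Both jobscripts read their input from input.fdf. Its contents depend entirely on the system you study; purely as an illustration, a minimal bulk-silicon input could be created as below. The basis size, mesh cutoff, and k-grid are example values only, and a matching pseudopotential file (here Si.psf for the species Si) must be present in the working directory:
cat > input.fdf <<'EOF'
SystemName       Si bulk            # descriptive name
SystemLabel      si                 # prefix for output files
NumberOfAtoms    2
NumberOfSpecies  1
%block ChemicalSpeciesLabel
  1  14  Si                         # index, atomic number, label
%endblock ChemicalSpeciesLabel
LatticeConstant  5.430 Ang
%block LatticeVectors               # fcc cell, in units of LatticeConstant
  0.0  0.5  0.5
  0.5  0.0  0.5
  0.5  0.5  0.0
%endblock LatticeVectors
AtomicCoordinatesFormat  Fractional
%block AtomicCoordinatesAndAtomicSpecies
  0.00  0.00  0.00  1
  0.25  0.25  0.25  1
%endblock AtomicCoordinatesAndAtomicSpecies
PAO.BasisSize    DZP                # double-zeta + polarization basis
MeshCutoff       300 Ry             # real-space grid cutoff
XC.functional    GGA
XC.authors       PBE
%block kgrid_Monkhorst_Pack         # 8 x 8 x 8 k-point grid
  8  0  0  0.0
  0  8  0  0.0
  0  0  8  0.0
%endblock kgrid_Monkhorst_Pack
EOF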