...

Key: ✓ = yes, ✗ = no (CPU/GPU = execution support of the build, Lise/Emmy = availability on the system).

| CP2K Version | Modulefile | Requirement | Support | CPU/GPU | Lise/Emmy |
|--------------|------------|-------------|---------|---------|-----------|
| 2022.2 | cp2k/2022.2 | intel/2021.2 (Lise), intel/2022.2 (Emmy) | libint, fftw3, libxc, elpa, scalapack, cosma, xsmm, spglib, mkl, sirius, libvori and libbqb | ✓ / ✗ | ✓ / ✓ |
| 2023.1 | cp2k/2023.1 | intel/2021.2 (Lise), intel/2022.2 (Emmy) | Lise: libint, fftw3, libxc, elpa, scalapack, cosma, xsmm, spglib, mkl, sirius, libvori and libbqb. Emmy: libint, fftw3, libxc, elpa, scalapack, cosma, xsmm, spglib, mkl and sirius | ✓ / ✗ | ✓ / ✓ |
| 2023.1 | cp2k/2023.1 | openmpi/gcc.11/4.1.4, cuda/11.8 | libint, fftw3, libxc, elpa, elpa_nvidia_gpu, scalapack, cosma, xsmm, dbcsr_acc, spglib, mkl, sirius, offload_cuda, spla_gemm, m_offloading, libvdwxc | ✗ / ✓ | ✓ / ✗ |
| 2023.2 | cp2k/2023.2 | intel/2021.2, impi/2021.7.1 | libint, fftw3, libxc, elpa, scalapack, cosma, xsmm, spglib, mkl, sirius, libvori and libbqb | ✓ / ✗ | ✓ / ✗ |
| 2023.2 | cp2k/2023.2 | openmpi/gcc.11/4.1.4, cuda/11.8 | libint, fftw3, libxc, elpa, elpa_nvidia_gpu, scalapack, cosma, xsmm, dbcsr_acc, spglib, mkl, sirius, offload_cuda, spla_gemm, m_offloading, libvdwxc | ✓ / ✗ | ✓ / ✗ |
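
Which of these builds are installed on the system you are logged in to can be checked directly through the module system; a minimal sketch (the version queried below is one of the builds listed above):

Codeblock (bash): Checking the installed CP2K modules
# list all CP2K builds provided via the module system
module avail cp2k

# inspect what loading one particular build would change in the environment
module show cp2k/2023.2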

Remark: CP2K needs special attention when running on GPUs; see the GPU job script and the gpu_bind.sh binding wrapper below.

...

Codeblock (bash): Lise (using srun)
#!/bin/bash 
#SBATCH --time=12:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=cp2k

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

module load intel/2021.2 impi/2021.7.1 cp2k/2023.2
srun cp2k.psmp input > output
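
A job script like the one above is handed to the batch system with sbatch; a short usage sketch (the file name cp2k.slurm is only an example):

Codeblock (bash): Submitting the job
# submit the job script shown above
sbatch cp2k.slurm

# check the state of your jobs in the queue
squeue -u $USER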

...

Codeblock (bash): Lise (using mpirun)
#!/bin/bash 
#SBATCH --time=12:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=cp2k

export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}  

# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close

# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core

module load intel/2021.2 impi/2021.7.1 cp2k/2023.2
mpirun cp2k.psmp input > output
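
Whether the pinning settings above take effect can be verified from Intel MPI's startup output; a small sketch, assuming the same job script, raises the Intel MPI debug level before the mpirun call:

Codeblock (bash): Checking the Intel MPI pinning
# at debug level 4, Intel MPI prints the process pinning map at startup
export I_MPI_DEBUG=4
mpirun cp2k.psmp input > output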

...

Codeblock (bash): Lise (using mpirun), on Nvidia A100 GPU nodes
#!/bin/bash 
#SBATCH --partition=gpu-a100  
#SBATCH --time=12:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --job-name=cp2k

export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}    
export OMP_PLACES=cores
export OMP_PROC_BIND=close

module load gcc/11.3.0 openmpi/gcc.11/4.1.4 cuda/11.8 cp2k/2023.2

# gpu_bind.sh (see the following script) has to be placed in the same directory in which cp2k.psmp is executed
# and made executable: chmod +x gpu_bind.sh
mpirun --bind-to core --map-by numa:PE=${SLURM_CPUS_PER_TASK} ./gpu_bind.sh cp2k.psmp input > output
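
The gpu_bind.sh wrapper itself is not shown in this excerpt. A minimal sketch of such a wrapper, assuming that each of the four MPI ranks per node should see exactly one of the A100 GPUs and that Open MPI's OMPI_COMM_WORLD_LOCAL_RANK variable is used for the mapping, could look like this:

Codeblock (bash): gpu_bind.sh (illustrative sketch)
#!/bin/bash
# Illustrative sketch only, not necessarily the site-provided script:
# bind each MPI rank on this node to its own GPU.
# OMPI_COMM_WORLD_LOCAL_RANK is set by Open MPI for every launched rank.
export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}
# hand control over to the actual command, here "cp2k.psmp input"
exec "$@"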

...