Description
CP2K is a package for atomistic simulations of solid state, liquid, molecular, and biological systems, offering a wide range of computational methods based on the mixed Gaussian and plane waves approach.
More information about CP2K and its documentation can be found at https://www.cp2k.org/
Availability
CP2K is freely available for all users under the GNU General Public License (GPL).
Modules
CP2K is an MPI-parallel application. You can use either mpirun or srun as the job starter for CP2K. If you opt for mpirun, then, in addition to loading the corresponding impi or openmpi modules, CPU and/or GPU pinning must be set up carefully (see the example jobscripts below).
CP2K Version | Modulefile | Supported features | CPU/GPU |
---|---|---|---|
2022.2 | cp2k/2022.2 | libint, fftw3, libxc, elpa, scalapack, cosma, xsmm, spglib, mkl, sirius, libvori and libbqb | CPU |
2023.1 | cp2k/2023.1 | Lise: libint, fftw3, libxc, elpa, scalapack, cosma, xsmm, spglib, mkl, sirius, libvori and libbqb. Emmy: libint, fftw3, libxc, elpa, scalapack, cosma, xsmm, spglib, mkl and sirius. | CPU |
2023.1 | cp2k/2023.1 | libint, fftw3, libxc, elpa, elpa_nvidia_gpu, scalapack, cosma, xsmm, dbcsr_acc, spglib, mkl, sirius, offload_cuda, spla_gemm, m_offloading, libvdwxc | GPU |
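The modulefiles listed above are used in the standard environment-modules way. For example, to see which CP2K versions are installed and to load one of them together with an MPI environment (the exact compiler and MPI modules for each build are shown in the jobscripts below):

```bash
module avail cp2k                                    # list the available CP2K modulefiles
module load intel/2021.2 impi/2021.7.1 cp2k/2023.1   # load CP2K with a matching MPI environment
cp2k.psmp --version                                  # quick sanity check of the loaded binary
```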
Example Jobscripts
CPU nodes, srun as job starter (Intel MPI):

```bash
#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=cp2k

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

module load intel/2021.2 impi/2021.7.1 cp2k/2023.1

srun cp2k.psmp input > output
```
CPU nodes, mpirun as job starter with explicit CPU pinning (Intel MPI):

```bash
#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=cp2k

export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close

# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core

module load intel/2021.2 impi/2021.7.1 cp2k/2023.1

mpirun cp2k.psmp input > output
```
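To verify that the pinning behaves as intended, Intel MPI can print its pin mapping at startup. This is not part of the script above; setting I_MPI_DEBUG to 4 (or higher) before the mpirun line adds the process-pinning report to the output:

```bash
# Optional: make Intel MPI report the process pinning at program start
export I_MPI_DEBUG=4
```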
GPU (A100) partition, mpirun as job starter (OpenMPI):

```bash
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --time=12:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --job-name=cp2k

export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=close
export OMPI_MCA_coll="^hcoll"
export OMPI_MCA_btl="smcuda,tcp,vader,self"

module load gcc/11.3.0 openmpi/gcc.11/4.1.4 cp2k/2023.1

mpirun --bind-to core --map-by numa:PE=${SLURM_CPUS_PER_TASK} ./gpu_bind.sh cp2k.psmp input > output
```
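The mpirun line above wraps the CP2K binary in a gpu_bind.sh helper script whose contents are not reproduced here. A minimal sketch of such a wrapper, assuming four MPI ranks per node with one A100 GPU each and relying on OpenMPI's OMPI_COMM_WORLD_LOCAL_RANK variable, could look like this (the script must be executable, e.g. chmod +x gpu_bind.sh):

```bash
#!/bin/bash
# gpu_bind.sh (illustrative sketch, not the site-provided helper):
# restrict each MPI rank to one GPU by setting CUDA_VISIBLE_DEVICES
# to the rank's node-local index (assumes one rank per GPU).
export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}
# Run the actual command passed by mpirun, e.g. "cp2k.psmp input"
exec "$@"
```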
CPU nodes, srun as job starter (intel/2022.2 toolchain):

```bash
#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=cp2k

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

module load intel/2022.2 impi/2021.6 cp2k/2023.1

srun cp2k.psmp input > output
```
Depending on the problem size, the code may stop with a segmentation fault caused by an insufficient stack size or by threads exceeding their stack space. To circumvent this, we recommend adding the following lines to the jobscript:
```bash
export OMP_STACKSIZE=512M
ulimit -s unlimited
```
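These lines belong in the jobscript before the srun or mpirun command, for example (a sketch based on the CPU script above):

```bash
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_STACKSIZE=512M    # per-thread OpenMP stack size
ulimit -s unlimited          # lift the shell's stack size limit

srun cp2k.psmp input > output
```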