Code Compilation

For code compilation you can choose one of two compilers: Intel oneAPI or GNU. Both compilers can be combined with the Intel MPI library.
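
If you are unsure what a wrapper such as mpiicc or mpigcc actually calls, the Intel MPI compiler wrappers accept the -show option, which prints the underlying compiler command line (including the MPI include and library paths) without compiling anything. A minimal sketch, assuming the modules from the code blocks below are loaded:

mpiicc -show    # prints the icc command line with the Intel MPI flags appended
mpigcc -show    # the same for the GNU wrapper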

Intel oneAPI compiler

plain MPI, icc

module load intel/19.0.5
module load impi/2019.5
mpiicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpiifort -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpiicpc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp


hybrid MPI/OpenMP, icc

module load intel/19.0.5
module load impi/2019.5
mpiicc -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpiifort -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpiicpc -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp

...

GNU compiler

plain MPI, gcc

module load gcc/9.3.0
module load impi/2019.5
mpigcc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpif90 -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpigxx -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp


hybrid MPI/OpenMP, gcc

module load gcc/9.3.0
module load impi/2019.5
mpigcc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpif90 -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90 
mpigxx -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
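
After compilation you can check that the binary really picks up the Intel MPI library (and that the run path embedded via -Wl,-rpath,$LD_RUN_PATH resolves) with ldd. A minimal sketch; hello.bin is the binary produced by the commands above:

ldd hello.bin | grep -i mpi    # the Intel MPI shared library (libmpi) should be listed with a resolved path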

Slurm job scripts

To start the MPI-parallelized code on the system you can choose between two approaches, namely

  • using mpirun

...

  • using srun (in both cases the job script is submitted with sbatch, as sketched below).
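
A minimal submission sketch (mpi_job.slurm is a placeholder name for one of the job scripts below):

sbatch mpi_job.slurm    # submit the batch script to Slurm
squeue -u $USER         # check the state of your jobs; output goes to slurm-<jobid>.out by default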

Using mpirun

When using mpirun, pinning is controlled by the MPI library. Pinning by SLURM must be switched off by adding export SLURM_CPU_BIND=none.
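
To verify where the MPI library actually places the ranks, you can raise the Intel MPI debug output: I_MPI_DEBUG is an Intel MPI environment variable, and a level of 4 or higher reports the process pinning at startup. A minimal sketch of the relevant lines inside a job script such as the ones below:

export SLURM_CPU_BIND=none    # leave pinning to the Intel MPI library, not to Slurm
export I_MPI_DEBUG=4          # print the rank-to-core pinning when the job starts
mpirun -ppn 96 ./hello.bin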

MPI only

MPI, full node

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
mpirun -ppn 96 ./hello.bin

...

MPI, hyperthreading

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
mpirun -ppn 192 ./hello.bin

MPI, OpenMP

You can also run a code compiled with both MPI and OpenMP. The examples cover the setup

...

MPI, OpenMP, hyperthreading

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=48
mpirun -ppn 4 ./hello.bin
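
The rank and thread counts in such hybrid setups follow a simple rule: the number of MPI ranks per node times OMP_NUM_THREADS should equal the number of cores to be used per node. A minimal sketch of that arithmetic, assuming a standard96 node with 96 physical cores and 192 hardware threads (consistent with the -ppn 96 and -ppn 192 examples above):

RANKS_PER_NODE=4          # MPI ranks per node, passed to mpirun -ppn
LOGICAL_CORES=192         # hardware threads per node; use 96 to stay on physical cores
export OMP_NUM_THREADS=$(( LOGICAL_CORES / RANKS_PER_NODE ))   # 192 / 4 = 48, as in the example above
export SLURM_CPU_BIND=none
export OMP_PROC_BIND=spread
mpirun -ppn $RANKS_PER_NODE ./hello.bin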

Using srun

MPI only

MPI, full node

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
srun --ntasks-per-node=96 ./hello.bin

...

MPI, half node

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
srun --ntasks-per-node=48 ./hello.bin

MPI, OpenMP

You can also run a code compiled with both MPI and OpenMP. The example covers the setup

...