
Code Compilation

For code compilation you can choose one of two compilers, Intel or GNU. Both compilers are able to include the Intel MPI library.
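
The compile commands below assume a simple MPI source file such as hello.c (with Fortran and C++ counterparts). A minimal sketch of such a file, purely for illustration and not the site's reference code, could look like this:

Codeblock: hello.c (illustrative sketch)
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}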

Intel compiler

Codeblock: MPI, icc
module load intel/19.0.5
module load impi/2019.5
mpiicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpiifort -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpiicpc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp

...

Codeblock: MPI, OpenMP, icc
module load intel/19.0.5
module load impi/2019.5
mpiicc -qopenmp  -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpiifort -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90 
mpiicpc -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp

...

Codeblock: MPI, gcc
module load gcc/9.3.0
module load impi/2019.5
mpigcc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpif90 -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpigxx -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp


Codeblock: MPI, OpenMP, gcc
module load gcc/9.3.0
module load impi/2019.5
mpigcc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpif90 -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90 
mpigxx -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
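
The OpenMP builds above (-qopenmp / -fopenmp) assume that the source also contains OpenMP directives. A minimal hybrid MPI + OpenMP variant of the hello example, again only an illustrative sketch, might look like this:

Codeblock: hello.c with OpenMP (illustrative sketch)
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    #pragma omp parallel
    {
        /* each OpenMP thread of each MPI rank reports itself */
        printf("Hello from rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }
    MPI_Finalize();
    return 0;
}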

Code execution

To start the MPI-parallelized code on the system, you can choose between two approaches, namely mpirun or srun.

...

Using mpirun, the pinning is controlled by the MPI library. Pinning by Slurm must be switched off by adding export SLURM_CPU_BIND=none.
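
Whether the intended pinning is actually in effect can be verified with a small test program that prints the logical CPU of every MPI rank and OpenMP thread. The following is a minimal sketch using the Linux-specific sched_getcpu() (not part of this documentation); compile it like the hello examples above and run it with one of the job scripts below:

Codeblock: pinning check (illustrative sketch)
#define _GNU_SOURCE
#include <mpi.h>
#include <omp.h>
#include <sched.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    #pragma omp parallel
    {
        /* sched_getcpu() returns the logical CPU the calling thread currently runs on */
        printf("rank %d, thread %d on cpu %d\n",
               rank, omp_get_thread_num(), sched_getcpu());
    }
    MPI_Finalize();
    return 0;
}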

MPI only

Codeblock: MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
mpirun -ppn 96 ./hello.bin
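
A job script like the one above is submitted to the batch system in the usual Slurm way; the file name job.sh below is only an example:

Codeblock: job submission (example)
sbatch job.sh        # submit the job script
squeue -u $USER      # check the state of your jobs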


Codeblock: MPI scattered, half node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export I_MPI_PIN_DOMAIN=core
export I_MPI_PIN_ORDER=scatter
mpirun -ppn 48 ./hello.bin


Codeblock: MPI, hyperthreading
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
mpirun -ppn 192 ./hello.bin

...

You can run one code compiled with MPI and OpenMP. The example covers the setup

  • 2 nodes,
  • 4 processes per node, 24 threads per process.
Codeblock: MPI, OpenMP compact, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
mpirun -ppn 4 ./hello.bin

The example covers the setup

  • 2 nodes,
  • 4 processes per node, 12 threads per process.
Codeblock: MPI, OpenMP scattered, half node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=12
mpirun -ppn 4 ./hello.bin

The example covers the setup

  • 2 nodes,
  • 4 processes per node using hyperthreading,
  • 48 threads per process.
Codeblock: MPI, OpenMP hyperthreading
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=48
mpirun -ppn 4 ./hello.bin

Using srun

MPI only

Codeblock: MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
srun --ntasks-per-node=96 ./hello.bin


Codeblock: MPI, half node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
srun --ntasks-per-node=48 ./hello.bin

MPI, OpenMP

You can run one code compiled with MPI and OpenMP. The example covers the setup

  • 2 nodes,
  • 4 processes per node, 24 threads per process.
Codeblock: MPI, OpenMP, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin

The example covers the setup

  • 2 nodes,
  • 4 processes per node, 12 threads per process.
Codeblock: MPI, OpenMP, half node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=12
srun --ntasks-per-node=4 --cpus-per-task=24 ./hello.bin

The example covers the setup

  • 2 nodes,
  • 4 processes per node using hyperthreading,
  • 48 threads per process.
Codeblock: MPI, OpenMP, hyperthreading
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=48
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin
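
If you want to see which CPUs Slurm actually binds the tasks to, srun can report its binding mask. This is a general Slurm option, shown here with the hyperthreading setup above as an example:

Codeblock: report CPU binding (example)
srun --cpu-bind=verbose --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin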