Code execution

To execute your code, you need to

  1. have a binary, which is the result of code compilation,
  2. create a Slurm job script,
  3. submit the Slurm job script (see the submission sketch below).
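
A minimal sketch of step 3, assuming the finished job script is saved under the example name hello.slurm (the file name is only an assumption for this sketch):

Codeblock
titleJob submission, example
collapsetrue
# submit the job script to the Slurm scheduler (hello.slurm is an assumed file name)
sbatch hello.slurm
# list your pending and running jobs to check their state
squeue -u $USER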

...

For examples of code execution, please visit Slurm partition CPU CLX.


Code compilation

For code compilation, please use the GNU compiler.

Codeblock
titleMPI, gnu
collapsetrue
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
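
The -Wl,-rpath,$LD_RUN_PATH option embeds the library search path into the binary, so the Open MPI libraries should be found at run time. If you want to verify this, one optional check is:

Codeblock
titleCheck linked libraries, optional
collapsetrue
# list the shared libraries resolved for the binary;
# libmpi should point into the loaded openmpi installation
ldd hello.bin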


Codeblock
titleMPI, OpenMP, gnu
collapsetrue
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c

Slurm job script

A Slurm job script is submitted to the job scheduler Slurm. It contains

  • a request for compute nodes of the Slurm partition CPU CLX and
  • commands to start your binary. You have two options to start an MPI binary:
    • using mpirun
    • using srun

Using mpirun

When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm binding by adding export SLURM_CPU_BIND=none.

MPI only

Codeblock
titleMPI, full node
collapsetrue
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 192 --map-by ppr:96:node ./hello.bin


Codeblock
titleMPI, half node
collapsetrue
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 96 --map-by ppr:48:node ./hello.bin

MPI, OpenMP

You can run a code compiled with both MPI and OpenMP. The example covers the setup

  • 2 nodes,
  • 4 processes per node, 24 threads per process.
Codeblock
titleMPI, OpenMP, full node
collapsetrue
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin
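
If you want to see how mpirun places and binds the processes, you can add Open MPI's --report-bindings option to the same mpirun line as above (optional; newer Open MPI versions may suggest an alternative spelling of this option):

Codeblock
titleReport process bindings, optional
collapsetrue
# identical to the example above, plus a binding report printed per rank
mpirun -np 8 --map-by ppr:4:node:pe=24 --report-bindings ./hello.bin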

Using srun

Codeblock
titleMPI, full node
collapsetrue
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
srun --ntasks-per-node=96 ./hello.bin

You can run a code compiled with both MPI and OpenMP. The example covers the setup

  • 2 nodes,
  • 4 processes per node, 24 threads per process.
Codeblock
titleMPI, OpenMP, full node
collapsetrue
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin
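
To check the binding that srun applies in this hybrid setup, you can ask srun for a verbose binding report (optional; --cpu-bind=verbose is a standard Slurm option, not specific to this system):

Codeblock
titleVerbose CPU binding, optional
collapsetrue
# identical to the example above, plus a report of the CPU mask used for each task
srun --ntasks-per-node=4 --cpus-per-task=48 --cpu-bind=verbose ./hello.bin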