
Code execution

To execute your code, you need to

  1. have a binary, which is the result of code compilation,
  2. create a Slurm job script, and
  3. submit the Slurm job script.
Codeblock
title: Submission of a job
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out
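
While the job is pending or running, you can check its state with squeue; once it has finished, the program output is collected in the slurm-<jobid>.out file shown above. A short sketch, not part of the original example:

Codeblock
title: Checking the job (sketch)
blogin> squeue -u $USER          # state of your pending and running jobs
blogin> cat slurm-8028673.out    # program output after the job has finished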

When using OpenMPI, binding is controlled using the --bind-to parameter. To bind processes to cores, use --bind-to core. Other possible values can be found in the man page.

Codeblock
mpirun --bind-to core ./yourprogram
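
To verify that the requested binding is actually applied, OpenMPI's mpirun can print the binding of every process with the --report-bindings flag. A minimal sketch, using the same placeholder program name as above:

Codeblock
# Print one line per rank showing the cores it is bound to
mpirun --bind-to core --report-bindings ./yourprogram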

Our hardware supports hyperthreading, allowing you to start 192 processes on Cascade Lake machines (*96 partitions) and 80 on Skylake machines.

If no specific request regarding the number of tasks has been made, mpirun defaults to hyperthreading and starts cores*2 processes. If a number of tasks has been specified (with -N and/or --ntasks-per-node), OpenMPI's mpirun honors this via the --map-by flag. For example:

...
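
A minimal sketch of the two cases described above (the process counts assume a 96-core Cascade Lake node; hello.bin is the binary built in the compilation section below):

Codeblock
# Default: no task count requested, mpirun starts cores*2 = 192 processes per node
mpirun ./hello.bin
# Explicit placement: 96 processes per node via --map-by
mpirun --map-by ppr:96:node ./hello.bin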

For examples of code execution, please visit Slurm partition CPU CLX.


Code compilation

For code compilation, please use the GNU compiler.

Codeblock
title: MPI, gnu
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c


Codeblock
title: MPI, OpenMP, gnu
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
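
The hello.c used in these commands can be any MPI source file. For a self-contained test you could generate a minimal one directly on the login node and build it with the modules shown above; this is a sketch only, and the file name and program are placeholders:

Codeblock
title: Minimal test program (sketch)
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
# Write a tiny MPI "hello world" source file
cat > hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
# Build it exactly as in the examples above
mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c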

Slurm job script

A Slurm job script is submitted to the job scheduler Slurm. It contains

  • the request for compute nodes of a Slurm partition CPU CLX and
  • commands to start your binary. You have two options to start an MPI binary.
    • using mpirun
    • using srun

Using mpirun

When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm binding by adding export SLURM_CPU_BIND=none.

Codeblock
title: MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 192 --map-by ppr:96:node ./hello.bin


Codeblock
title: MPI, half node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 96 --map-by ppr:48:node ./hello.bin

You can also run a code compiled with both MPI and OpenMP. The example covers the following setup:

  • 2 nodes,
  • 4 processes per node, 24 threads per process.
Codeblock
title: MPI, OpenMP, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin

Using srun

Codeblock
title: MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
# 96 tasks per node: one process per physical core, hyperthreading not used
srun --ntasks-per-node=96 ./hello.bin
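
If you also want to use the hyperthreads (see the note on hyperthreading above), you can double the task count per node. This is a sketch under the assumption of 96-core Cascade Lake nodes; whether oversubscribing the physical cores pays off depends on your code:

Codeblock
title: MPI, full node with hyperthreading (sketch)
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
# 192 tasks per node: two processes per physical core (hyperthreads used)
srun --ntasks-per-node=192 ./hello.bin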

You can also run a code compiled with both MPI and OpenMP. The example covers the following setup:

  • 2 nodes,
  • 4 processes per node, 24 threads per process.
Codeblock
title: MPI, OpenMP, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin
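
To check at runtime that the OpenMP threads end up where you expect, you can ask the OpenMP runtime to report its settings and the affinity of each thread. A minimal sketch; OMP_DISPLAY_ENV and OMP_DISPLAY_AFFINITY are standard OpenMP environment variables supported by the GNU runtime:

Codeblock
title: Verifying thread placement (sketch)
# Add these two lines to the job script above before the srun call
export OMP_DISPLAY_ENV=true
export OMP_DISPLAY_AFFINITY=true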