Code block: Submission of a job
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out
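
The state of the submitted job can then be checked with squeue; a minimal sketch using the job ID reported by sbatch above:

Code block: Checking the job state
blogin> squeue -j 8028673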

Using mpirun

When using mpirun, process pinning is controlled by the MPI library. Pinning by Slurm must be switched off by adding export SLURM_CPU_BIND=none to the job script, as shown in the examples below.

MPI only

Code block: MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
# Switch off pinning by Slurm; the MPI library controls process placement instead.
export SLURM_CPU_BIND=none
# 96 ranks per node on 2 nodes = 192 ranks in total.
mpirun -np 192 --map-by ppr:96:node ./hello.bin

Binding with OpenMPI

When using OpenMPI, binding is controlled with the --bind-to parameter. To bind processes to cores, use --bind-to core. Other possible values can be found in the man page.

Code block
mpirun --bind-to core ./yourprogram
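
Whether the binding is applied as intended can be checked by letting Open MPI report it; a minimal sketch reusing the placeholder program from above:

Code block
mpirun --bind-to core --report-bindings ./yourprogram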

Our hardware supports hyperthreading, allowing you to start up to 192 processes per node on Cascade Lake machines (*96 partitions) and 80 per node on Skylake machines.
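
How many CPUs (hardware threads) Slurm sees on a node can be checked with scontrol; a minimal sketch, with <nodename> as a placeholder:

Code block
scontrol show node <nodename> | grep -E "CPUTot|ThreadsPerCore"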

If no specific request regarding the number of tasks has been made, mpirun defaults to hyperthreading and starts cores*2 processes per node. If the number of tasks has been specified (with -N and/or --tasks-per-node), Open MPI's mpirun honors this via the -map-by flag. For example:

Code block (bash): Hyperthreading active, 80/192 processes per node will be started
#!/bin/bash
#SBATCH -N 4

module load gcc/9.3.0 openmpi/gcc.9/4.1.4

# SLURM_JOB_CPUS_PER_NODE looks like e.g. "192(x4)"; strip the "(x...)" repeat count.
cpus_per_node=${SLURM_JOB_CPUS_PER_NODE%\(*}

# Start one MPI rank per hardware thread on each node.
mpirun -map-by ppr:$cpus_per_node:node ./yourexe

Code block (bash): Hyperthreading disabled, 96 processes per node are started
#!/bin/bash
#SBATCH -N 4
#SBATCH --tasks-per-node 96

module load gcc/9.3.0 openmpi/gcc.9/4.1.4

# SLURM_TASKS_PER_NODE looks like e.g. "96(x4)"; strip the "(x...)" repeat count.
tasks_per_node=${SLURM_TASKS_PER_NODE%\(*}

# Start the requested 96 ranks per node, i.e. one per physical core.
mpirun -map-by ppr:$tasks_per_node:node ./yourexe
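
To check how many ranks are actually started on each node, the application can be replaced by hostname in a short test run; a minimal sketch, not part of the original scripts:

Code block
mpirun -map-by ppr:$tasks_per_node:node hostname | sort | uniq -c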