
Code execution

To execute your code you need to

  1. have a binary, which is the result of compiling your code (a short compilation sketch follows this list),
  2. create a Slurm job script,
  3. submit the Slurm job script.
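
A binary can, for example, be produced with an MPI compiler wrapper from the same MPI module that is loaded in the job scripts below; the source file name hello.c is only an assumed example, the resulting hello.bin is the binary started by mpirun later on:

blogin> module load impi/2019.5       # same MPI module as in the job scripts below
blogin> mpicc -o hello.bin hello.c    # hello.c is an assumed example source file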
Submission of a job
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out
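
While the job is pending or running, its state can be checked with the usual Slurm commands, e.g.:

blogin> squeue -u $USER               # list your pending and running jobs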

Using mpirun

When using mpirun, process pinning is controlled by the MPI library. Pinning by Slurm must be switched off by adding export SLURM_CPU_BIND=none to the job script.

MPI only

MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
mpirun -ppn 96 ./hello.bin   # 96 MPI processes per node, i.e. one per physical core
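
If you want to verify the resulting pinning with Intel MPI, one option is to raise the Intel MPI debug level, which makes mpirun print the pin mapping at startup (optional, not required for the job to run):

export I_MPI_DEBUG=4                  # Intel MPI prints the process pinning at startup
mpirun -ppn 96 ./hello.bin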

Binding with OpenMPI

When using OpenMPI, binding is controlled with the --bind-to parameter. To bind processes to cores, use --bind-to core. Other possible values can be found in the mpirun man page.

mpirun --bind-to core ./yourprogram
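
To check which binding OpenMPI actually applied, the chosen bindings can be printed at startup:

mpirun --bind-to core --report-bindings ./yourprogram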

Our hardware supports hyperthreading, allowing you to start 192 processes per node on Cascade Lake machines (*96 partitions) and 80 per node on Skylake machines.

If no specific request regarding the number of tasks has been made, mpirun defaults to using hyperthreads and starts cores*2 processes per node. If a number of tasks has been specified (with -N and/or --tasks-per-node), OpenMPI's mpirun honors this via the -map-by flag. For example:

Hyperthreading active, 80/192 processes per node will be started
#!/bin/bash
#SBATCH -N 4

module load gcc/9.3.0 openmpi/gcc.9/4.1.4

# SLURM_JOB_CPUS_PER_NODE has the form e.g. "192(x4)"; strip the "(x...)" suffix to get the CPUs per node
cpus_per_node=${SLURM_JOB_CPUS_PER_NODE%\(*}

mpirun -map-by ppr:$cpus_per_node:node ./yourexe
Hyperthreading disabled, 96 processes per node are started.
#!/bin/bash
#SBATCH -N 4
#SBATCH --tasks-per-node 96

module load gcc/9.3.0 openmpi/gcc.9/4.1.4

# SLURM_TASKS_PER_NODE has the form e.g. "96(x4)"; strip the "(x...)" suffix to get the tasks per node
tasks_per_node=${SLURM_TASKS_PER_NODE%\(*}

mpirun -map-by ppr:$tasks_per_node:node ./yourexe
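
The -map-by setting can be combined with the --bind-to option described above, e.g. to map processes by node and bind each one to a core (same assumed executable name as above):

mpirun -map-by ppr:$tasks_per_node:node --bind-to core ./yourexe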