...


Job submission
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out

Slurm scripts

for mpirun

A Slurm script is submitted to the job scheduler Slurm. It contains two parts, sketched below:

  • directives that control the requested compute nodes and
  • commands to start your binary.
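
As a minimal illustration of these two parts, the following sketch reuses the partition, module, and binary name from the examples below; the node and process counts are placeholder values, adjust them to your job:

#!/bin/bash
# directives for the scheduler: requested nodes and partition
#SBATCH --nodes=1
#SBATCH --partition=cpu-clx:test
# commands executed on the first allocated node
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none   # required when launching with mpirun, see below
mpirun -np 96 ./hello.bin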

Using mpirun

When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm's CPU binding by adding export SLURM_CPU_BIND=none to your script.

MPI only

MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none    # let mpirun control process placement
mpirun -np 192 --map-by ppr:96:node ./hello.bin    # 192 processes in total, 96 per node


MPI, half node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none    # let mpirun control process placement
mpirun -np 96 --map-by ppr:48:node ./hello.bin    # 96 processes in total, 48 per node

MPI, OpenMP

You can run a code compiled with both MPI and OpenMP (hybrid parallelization). The examples (one is sketched below) cover the setup

  • 2 nodes,
  • 4 processes per node, 24 threads per process.
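
One possible hybrid script for this setup is sketched here; the --map-by mapping and the OMP_NUM_THREADS setting are assumptions derived from the Open MPI examples above, not values taken from this page:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none       # let mpirun control process placement
export OMP_NUM_THREADS=24        # assumed: 24 OpenMP threads per MPI process
# 8 processes in total: 4 per node, each with 24 cores reserved (pe=24)
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin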

...