...
Slurm job script
A Slurm job script is submitted to the job scheduler Slurm. It contains the resource requests (as #SBATCH directives) and the commands that set up the environment and launch the binary.
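For orientation, a minimal job script could look as follows (a sketch only; the job name, time limit, and file names are placeholder values to adapt):

Minimal job script:

#!/bin/bash
#SBATCH --job-name=hello          # placeholder job name
#SBATCH --nodes=1                 # resource request: one node
#SBATCH --time=00:10:00           # placeholder wall-clock limit
#SBATCH --partition=cpu-clx:test

./hello.bin

The script is submitted with sbatch jobscript.sh, and squeue shows its state in the queue.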
Using mpirun
When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm's CPU binding by adding export SLURM_CPU_BIND=none to the job script.
MPI, full node:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
# 2 nodes x 96 ranks per node = 192 MPI ranks in total
mpirun -np 192 --map-by ppr:96:node ./hello.bin
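To check that the ranks end up where intended, Open MPI can print the bindings it applies; adding its --report-bindings option to the same launch line is a quick sanity check:

mpirun --report-bindings -np 192 --map-by ppr:96:node ./hello.bin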
MPI, half node:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
# 2 nodes x 48 ranks per node = 96 MPI ranks (half of each node)
mpirun -np 96 --map-by ppr:48:node ./hello.bin
You can run a code compiled with both MPI and OpenMP. The example covers the following setup:
- 2 nodes,
- 4 processes per node, 24 threads per process.
MPI, OpenMP, full node:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
# 2 nodes x 4 ranks per node = 8 ranks, each bound to 24 cores (pe=24)
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin
Using srun
MPI, full node:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
# 96 tasks per node, 192 MPI ranks in total
srun --ntasks-per-node=96 ./hello.bin
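To see how srun distributes the tasks, you can run hostname in place of the binary and count the tasks per node (an illustrative check, not part of the original example):

srun --ntasks-per-node=96 hostname | sort | uniq -c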
You can run a code compiled with both MPI and OpenMP. The example covers the following setup:
- 2 nodes,
- 4 processes per node, 24 threads per process.
MPI, OpenMP, full node:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
# 4 tasks per node, 24 OpenMP threads each (48 CPUs per task = 2 hardware threads per OpenMP thread)
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin
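To verify the resulting pinning, you can temporarily replace the binary with a small shell probe that prints each task's CPU affinity (an illustrative sketch; taskset is part of util-linux, and SLURM_PROCID is set by srun for each task):

srun --ntasks-per-node=4 --cpus-per-task=48 \
    bash -c 'echo "task $SLURM_PROCID on $(hostname): $(taskset -cp $$)"'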
...