...
Job submission:
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out
A job script is submitted to the job scheduler Slurm. It contains
- the request for compute nodes and
- the commands to start your binary.
When the binary is started with mpirun, pinning is controlled by the MPI library. To keep Slurm's own binding from interfering, switch it off by adding export SLURM_CPU_BIND=none, as in the examples below.
MPI only
MPI, full node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 192 --map-by ppr:96:node ./hello.bin
MPI, half node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 96 --map-by ppr:48:node ./hello.bin
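To check where the processes actually land, Open MPI can report its bindings. The --report-bindings flag (a standard Open MPI option, not part of the example above) makes mpirun print one binding line per rank to stderr:

mpirun -np 96 --map-by ppr:48:node --report-bindings ./hello.bin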
MPI, OpenMP
You can run a code compiled with both MPI and OpenMP. The examples cover the setup
- 2 nodes,
- 4 processes per node, 24 threads per process,
see the sketch after this list.
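A minimal sketch of such a hybrid job script, assuming the same cpu-clx:test partition and Open MPI module as in the MPI-only examples (hello.bin again stands in for your binary). The PE=24 modifier in the mapping binds 24 cores to each process so its OpenMP threads have a core each:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none   # let the MPI library handle the pinning
export OMP_NUM_THREADS=24    # 24 OpenMP threads per MPI process
# 8 ranks total: 4 processes per node, each bound to 24 cores (4 x 24 = 96 cores per node)
mpirun -np 8 --map-by ppr:4:node:PE=24 ./hello.bin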
...