Code execution
To execute your code you need to
- compile a binary (executable, model code), see Workflow OpenMPI,
- create a Slurm job script, see Slurm script, and
- submit the Slurm job script.
> sbatch myjobscript.slurm
Submitted batch job 8028673
> ls slurm-8028673.out
slurm-8028673.out
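While the job is waiting or running, you can check its state with squeue, for example:
> squeue -u $USER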
Code compilation
For code compilation, please use the GNU compiler. For a pure MPI code:
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
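The compile commands expect a source file hello.c, which is not listed in this guide. A minimal MPI test program along these lines (a sketch for testing the toolchain, not the original file) could be:

#include <stdio.h>
#include <mpi.h>

/* Each MPI rank prints its rank, the total number of ranks,
   and the node it runs on. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}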
For a hybrid MPI/OpenMP code, add -fopenmp:
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
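For the hybrid case, a corresponding MPI/OpenMP sketch (again an assumption, not the file used in this guide) could be:

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv) {
    /* Request MPI_THREAD_FUNNELED: only the main thread calls MPI. */
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each OpenMP thread of each rank prints its IDs. */
    #pragma omp parallel
    printf("Hello from thread %d of %d in rank %d\n",
           omp_get_thread_num(), omp_get_num_threads(), rank);

    MPI_Finalize();
    return 0;
}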
Slurm script
A Slurm script is submitted to the job scheduler. It contains
- the request for compute nodes and
- the commands to start your binary.
You have two options to start an MPI binary: mpirun or srun.
Using mpirun
To start the binary with mpirun (from the MPI library), you need to switch off Slurm binding by adding export SLURM_CPU_BIND=none. Example with 96 MPI processes per node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 192 --map-by ppr:96:node ./hello.bin
Example with 48 MPI processes per node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 96 --map-by ppr:48:node ./hello.bin
You can also run a code compiled with both MPI and OpenMP. The example covers the setup
- 2 nodes,
- 4 processes per node,
- 24 threads per process,
i.e. 8 MPI processes in total, each with 24 processing elements (pe=24) reserved for its threads.
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin
Using srun
When starting the binary with srun, Slurm itself takes care of process placement and binding, so SLURM_CPU_BIND does not need to be changed. Example with 96 MPI processes per node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
srun --ntasks-per-node=96 ./hello.bin
You can also run a code compiled with both MPI and OpenMP. The example covers the setup
- 2 nodes,
- 4 processes per node,
- 24 threads per process.
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin
Here --cpus-per-task=48 assumes that Slurm counts hardware threads (two per core), so the 24 OpenMP threads of each process are spread by OMP_PROC_BIND=spread across 24 physical cores, one thread per core.