For code compilation you can choose one of two compilers, Intel or Gnu. Both compilers can be combined with the Intel MPI library.
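The compile commands in this section assume an MPI source file hello.c that is not shown on this page. If you just want to test the toolchain, a minimal MPI hello-world written via a shell here-document could look like this sketch (not the official example code):
cat > hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF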
Intel compiler
MPI, intel:
module load intel/2024.2
module load impi/2021.13
export I_MPI_CC=icx
mpiicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
MPI, OpenMP, intel:
module load intel/2024.2
module load impi/2021.13
export I_MPI_CC=icx
mpiicc -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
Gnu compiler
For examples of code execution, please visit Slurm partition CPU CLX.
Code compilation
For code compilation, please use the Gnu compiler.
MPI, gnu:
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
...
MPI, OpenMP, gnu:
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
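Since every compile line embeds a runtime search path via -Wl,-rpath,$LD_RUN_PATH, you can quickly check which MPI library the resulting binary will load. This is an optional check, not part of the original instructions:
ldd hello.bin | grep -i mpi    # should list the MPI library from the loaded module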
Code execution
To execute your code you need to
- have a binary (executable, model code), which is the result of code compilation,
- create a slurm job script,
- submit the slurm job script.
Job submission:
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out
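After sbatch has accepted the job, its state can be inspected with standard Slurm commands; a short sketch using the job ID from the example above:
blogin> squeue -u $USER             # list your queued and running jobs
blogin> scontrol show job 8028673   # detailed information for one job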
Slurm scripts
Slurm job script
A Slurm job script is submitted to the job scheduler Slurm. It contains
- the request for compute nodes of a Slurm partition CPU CLX and
- commands to start your binary. You have two options to start an MPI binary.
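The two options, mpirun and srun, are covered in the following subsections. The job script examples below request only the node count and the partition; other commonly used Slurm directives can be added in the same way. The values in this sketch are placeholders, not site defaults:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
#SBATCH --job-name=hello          # name shown in squeue (placeholder)
#SBATCH --time=00:10:00           # wall-clock limit (placeholder)
#SBATCH --output=slurm-%j.out     # stdout/stderr file, %j expands to the job ID
# module loads and the mpirun/srun start command follow here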
Using mpirun
Using mpirun (from the MPI library) to start the binary, you need to switch off Slurm binding by adding export SLURM_CPU_BIND=none.
MPI only
MPI, full node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 192 --map-by ppr:96:node ./hello.bin
MPI, half node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 96 --map-by ppr:48:node ./hello.bin
MPI, OpenMP
You can run code compiled with both MPI and OpenMP. The example covers the setup
- 2 nodes,
- 4 processes per node, 24 threads per process (4 × 24 = 96 threads per node, matching the 96 tasks per node of the full-node example).
MPI, OpenMP, full node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin
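The srun example further below pins threads with OMP_PROC_BIND. If you also want explicit thread placement when starting with mpirun, the standard OpenMP environment variables can be added; a sketch based on the full-node example above:
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
export OMP_PROC_BIND=spread      # standard OpenMP variable, as used in the srun example
export OMP_PLACES=cores          # pin each thread to a core
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin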
Using srun
MPI only
MPI, full node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
srun --ntasks-per-node=96 ./hello.bin
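The page shows a half-node setup only for mpirun; an analogous srun job script (half of the 96 tasks per node) might look like this sketch:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
srun --ntasks-per-node=48 ./hello.bin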
MPI, OpenMP
You can run code compiled with both MPI and OpenMP. The example covers the setup
- 2 nodes,
- 4 processes per node, 24 threads per process.
...
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin