Content
Code execution
To execute your code you need to
- have a binary (executable, model code), which is the result of code compilation,
- create a Slurm job script,
- submit the Slurm job script.
```
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out
```
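After submission, you can check the state of the job and, once it has finished, look at its output file. A minimal sketch (the job ID is the one reported by sbatch above):

```
blogin> squeue -u $USER            # list your pending and running jobs
blogin> cat slurm-8028673.out      # job output, written to the submit directory
```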
Code Compilation
For code compilation, please use the GNU compiler.
...
```
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
```
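The option -Wl,-rpath,$LD_RUN_PATH embeds the library search path set by the loaded modules into the binary, so the MPI libraries can be found at run time. As an optional check (a sketch; the paths printed depend on the installation), you can verify that the libraries are resolved:

```
ldd hello.bin | grep -i mpi   # "not found" entries would indicate a missing rpath
```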
...
Job submission
...
Slurm scripts
A Slurm job script is submitted to the job scheduler Slurm. It contains
- the resource requests, e.g. the number of compute nodes, and
- the commands to start your binary.
Using mpirun
When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm binding by adding export SLURM_CPU_BIND=none to the job script.
MPI only
```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test

module load openmpi/gcc/5.0.3

export SLURM_CPU_BIND=none
mpirun -np 192 --map-by ppr:96:node ./hello.bin
```
A variant of the same script that starts only 48 MPI processes per node (96 in total):

```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test

module load openmpi/gcc/5.0.3

export SLURM_CPU_BIND=none
mpirun -np 96 --map-by ppr:48:node ./hello.bin
```
MPI, OpenMP
You can run a code compiled with both MPI and OpenMP. The example covers the following setup (a sketch of a matching job script is given below):
- 2 nodes,
- 4 MPI processes per node, 24 OpenMP threads per process.
...
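The original example script is not reproduced here. The following is a minimal sketch of such a job script, modelled on the MPI-only examples above (partition and module version are taken from those examples and may need to be adjusted):

```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test

module load openmpi/gcc/5.0.3

export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24   # 24 OpenMP threads per MPI process

# 2 nodes x 4 MPI processes per node = 8 processes in total
mpirun -np 8 --map-by ppr:4:node ./hello.bin
```

Depending on the MPI library, additional binding options (e.g. --bind-to or a PE modifier to --map-by) may be needed to pin the OpenMP threads; they are omitted in this sketch.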