...
Slurm script
A Slurm script is submitted to the job scheduler Slurm. It contains
- the directives requesting compute nodes and
- the commands to start your binary.
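As a minimal sketch (the time limit and file name are assumptions; the partition matches the examples below), such a script bundles the #SBATCH directives with the launch command and is submitted with sbatch:

```bash
#!/bin/bash
#SBATCH --nodes=1                 # requested compute nodes
#SBATCH --partition=cpu-clx:test  # partition used in the examples below
#SBATCH --time=00:10:00           # wall-clock limit (assumed value)

# commands to start your binary, e.g. via mpirun or srun as shown below
./hello.bin
```

Submit the script with `sbatch job.slurm` and check its state with `squeue -u $USER` (the file name job.slurm is only an example).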
Using mpirun
When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm binding by adding `export SLURM_CPU_BIND=none`.
MPI only
Using 96 processes per node (192 MPI processes in total):

```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 192 --map-by ppr:96:node ./hello.bin
```
Using 48 processes per node (96 MPI processes in total):

```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 96 --map-by ppr:48:node ./hello.bin
```
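If you want to check where the processes end up, Open MPI's mpirun can report the binding of every rank. A possible variant of the first launch line above (the extra option is not part of the original example):

```bash
# print each rank's core binding to stderr at startup
mpirun -np 192 --map-by ppr:96:node --report-bindings ./hello.bin
```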
MPI, OpenMP
You can run a code compiled with both MPI and OpenMP. The example covers the following setup:
- 2 nodes,
- 4 processes per node, 24 threads per process.
```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin
```
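The pe=24 part of the mapping gives each MPI process 24 cores; how the OpenMP threads are placed within those cores can additionally be controlled with the standard OpenMP affinity variables. A possible variant (the OMP_PLACES and OMP_PROC_BIND settings are an assumption, not part of the original example):

```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
export OMP_PLACES=cores      # one OpenMP place per physical core
export OMP_PROC_BIND=close   # keep each rank's threads on neighbouring cores
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin
```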
Using srun
...
```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
srun --ntasks-per-node=96 ./hello.bin
```
...
You can run a code compiled with both MPI and OpenMP. The example covers the following setup:
...
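A possible hybrid srun script for the same setup as in the mpirun case (2 nodes, 4 processes per node, 24 threads per process; these numbers are assumptions here) might look as follows; since srun applies Slurm's own binding, `export SLURM_CPU_BIND=none` is not needed:

```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=24 ./hello.bin
```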