OpenMPI on CPU CLX

Code execution

For examples of code execution, please visit the page Slurm partition CPU CLX.

Code compilation

For code compilation, please use the GNU compiler.

For a pure MPI code:

module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c

For a hybrid MPI + OpenMP code, add -fopenmp:

module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
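
The hello.c compiled above is not shown on this page; a minimal MPI version might look like the following sketch (an illustrative stand-in, not the actual source):

/* hello.c — a minimal sketch of an MPI hello world.
   Illustrative stand-in; your actual source may differ. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI process prints its rank. */
    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}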

Slurm job script

A Slurm job script is submitted to the job scheduler Slurm. It contains

  • a request for compute nodes from a Slurm partition CPU CLX and

  • commands to start your binary. You have two options to start an MPI binary:

    • using mpirun

    • using srun

Using mpirun

When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm binding by adding export SLURM_CPU_BIND=none to the job script.

Running 96 MPI processes per node:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 192 --map-by ppr:96:node ./hello.bin

Running 48 MPI processes per node:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 96 --map-by ppr:48:node ./hello.bin

You can also run a code compiled with both MPI and OpenMP. The example below covers the following setup:

  • 2 nodes,

  • 4 processes per node, 24 threads per process.

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin
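
For illustration, a hybrid hello.c matching this setup might look like the sketch below (a hypothetical source, built with the -fopenmp command from the compilation section). With --map-by ppr:4:node:pe=24, each of the 4 ranks per node receives 24 cores, on which its 24 OpenMP threads run.

/* Sketch of a hybrid MPI + OpenMP hello.c (hypothetical source;
   build with mpicc -fopenmp as shown above). */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided, rank, size;

    /* Request threaded MPI: only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* OMP_NUM_THREADS=24 makes each rank spawn 24 threads. */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d on rank %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads(), rank, size);
    }

    MPI_Finalize();
    return 0;
}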

Using srun

Using srun, Slurm starts and binds the MPI tasks itself, so no extra binding setup is required:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
srun --ntasks-per-node=96 ./hello.bin

You can also run a code compiled with both MPI and OpenMP. The example below covers the following setup:

  • 2 nodes,

  • 4 processes per node, 24 threads per process.

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin
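
If you want to verify where the threads end up under OMP_PROC_BIND=spread, a small diagnostic like the following sketch can help (it assumes GNU/Linux, since sched_getcpu is a glibc extension):

/* Sketch of a thread-placement check, assuming GNU/Linux
   (sched_getcpu is a glibc extension). Each thread reports the
   logical CPU it is currently running on. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided, rank;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        printf("rank %d, thread %d -> cpu %d\n",
               rank, omp_get_thread_num(), sched_getcpu());
    }

    MPI_Finalize();
    return 0;
}

Compile it like the hybrid example in the compilation section and inspect the reported CPU numbers in the job output.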