
    OpenMPI on CPU CLX

    Oct. 11, 2024


    Code execution

    For examples of code execution, please visit Slurm partition CPU CLX.


    Code compilation

    For code compilation, please use the GNU compiler. The commands below assume a source file hello.c; a minimal example follows the compile commands.

    MPI, gnu
    module load gcc/13.3.0
    module load openmpi/gcc/5.0.3
    mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
    MPI, OpenMP, gnu
    module load gcc/13.3.0
    module load openmpi/gcc/5.0.3
    mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
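
    If you do not have a source file at hand, the sketch below writes a minimal MPI "hello world" into hello.c so the toolchain can be tested. The program is only an illustrative assumption, not part of the CLX software stack; when built with -fopenmp it additionally reports the OpenMP thread count.

    hello.c example
    cat > hello.c <<'EOF'
    #include <stdio.h>
    #include <mpi.h>
    #ifdef _OPENMP
    #include <omp.h>
    #endif

    int main(int argc, char **argv) {
        /* Initialize MPI and query this process' rank and the total number of ranks. */
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
    #ifdef _OPENMP
        /* Built with -fopenmp: also report how many OpenMP threads would be used. */
        printf("Hello from rank %d of %d, up to %d OpenMP threads\n",
               rank, size, omp_get_max_threads());
    #else
        printf("Hello from rank %d of %d\n", rank, size);
    #endif
        MPI_Finalize();
        return 0;
    }
    EOF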

    Slurm job script

    A Slurm job script is submitted to the job scheduler Slurm; a short submission sketch follows the list below. It contains

    • the request for compute nodes of a Slurm partition CPU CLX and
    • commands to start your binary. There are two options for starting an MPI binary:
      • using mpirun
      • using srun
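
    Once the script is saved to a file (for example jobscript.slurm; the name is only a placeholder), it is submitted and monitored with the standard Slurm commands sketched below.

    Job submission
    # submit the job script to the scheduler; sbatch prints the job id
    sbatch jobscript.slurm
    # list your own jobs and their current state
    squeue -u $USER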

    Using mpirun

    When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm binding by adding export SLURM_CPU_BIND=none.

    MPI, full node
    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --partition=cpu-clx:test
    module load openmpi/gcc/5.0.3
    export SLURM_CPU_BIND=none
    mpirun -np 192 --map-by ppr:96:node ./hello.bin
    MPI, half node
    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --partition=cpu-clx:test
    module load openmpi/gcc/5.0.3
    export SLURM_CPU_BIND=none
    mpirun -np 96 --map-by ppr:48:node ./hello.bin

    You can run a code compiled with both MPI and OpenMP. The example covers the following setup:

    • 2 nodes,
    • 4 processes per node, 24 threads per process.
    MPI, OpenMP, full node
    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --partition=cpu-clx:test
    module load openmpi/gcc/5.0.3
    export SLURM_CPU_BIND=none
    export OMP_NUM_THREADS=24
    mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin

    Using srun

    MPI, full node
    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --partition=cpu-clx:test
    srun --ntasks-per-node=96 ./hello.bin

    You can run a code compiled with both MPI and OpenMP. The example covers the following setup:

    • 2 nodes,
    • 4 processes per node, 24 threads per process.
    MPI, OpenMP, full node
    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --partition=cpu-clx:test
    export OMP_PROC_BIND=spread
    export OMP_NUM_THREADS=24
    srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin