    OpenMP for GPU A100

March 01, 2024

    To build and execute code on the GPU A100 partition, please log in to

    • a GPU A100 login node, such as bgnlogin.nhr.zib.de
    • see also: GPU A100 partition
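    Login works via SSH; the account name myaccount below is only a placeholder for your own NHR account:

    $ ssh myaccount@bgnlogin.nhr.zib.de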

    Code build

    For code generation we recommend the NVIDIA hpcx software package, which combines the NVIDIA compilers with powerful libraries such as MPI.

    Plain OpenMP for GPU
    bgnlogin1 $ module load nvhpc-hpcx/23.1
    bgnlogin1 $ module list
    Currently Loaded Modulefiles: ... 4) hpcx   5) nvhpc-hpcx/23.1
    bgnlogin1 $ nvc -mp -target=gpu openmp_gpu.c -o openmp_gpu.bin
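    The source file openmp_gpu.c is not listed on this page. As an illustration only (the file contents below are an assumption, not the site's reference code), a minimal OpenMP offload program that should build with the command above could look like this:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const int n = 1 << 20;
        double *a = malloc(n * sizeof(double));
        double sum = 0.0;

        /* Offload the loop to the GPU: the array a is mapped back to the host,
           the variable sum is reduced over all device threads. */
        #pragma omp target teams distribute parallel for \
                map(from: a[0:n]) reduction(+: sum)
        for (int i = 0; i < n; ++i) {
            a[i] = 1.0 / (double)(i + 1);
            sum += a[i];
        }

        printf("sum = %f\n", sum);
        free(a);
        return 0;
    }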
    MPI + OpenMP for GPU
    bgnlogin1 $ module load nvhpc-hpcx/23.1
    bgnlogin1 $ mpicc -mp -target=gpu mpi_openmp_gpu.c -o mpi_openmp_gpu.bin
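    Likewise, mpi_openmp_gpu.c is not shown here. A minimal sketch of a hybrid MPI + OpenMP offload code that should compile with the mpicc command above might look as follows (again an illustrative assumption; real codes usually select the GPU from the node-local rank):

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Bind each MPI rank to one of the GPUs visible on its node
           (simplified: global rank modulo number of devices). */
        int ndev = omp_get_num_devices();
        if (ndev > 0)
            omp_set_default_device(rank % ndev);

        const int n = 1 << 20;
        double local = 0.0;

        /* Each rank offloads its share of the loop to its GPU. */
        #pragma omp target teams distribute parallel for reduction(+: local)
        for (int i = rank; i < n; i += size) {
            local += 1.0 / (double)(i + 1);
        }

        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("global sum = %f\n", global);

        MPI_Finalize();
        return 0;
    }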

    Code execution

    All Slurm partitions available for the A100 GPU nodes are listed on the page Slurm partition GPU A100.

    Job script for plain OpenMP
    #!/bin/bash
    #SBATCH --partition=gpu-a100:shared
    #SBATCH --gres=gpu:1
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=72
    
    # make the NVHPC runtime available, as in the MPI example below
    module load nvhpc-hpcx/23.1

    ./openmp_gpu.bin
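    Assuming the script is saved as openmp_gpu.slurm (the file name is arbitrary), it can be submitted from a login node with sbatch:

    bgnlogin1 $ sbatch openmp_gpu.slurm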
    
    Job script for MPI + OpenMP
    #!/bin/bash
    #SBATCH --partition=gpu-a100
    #SBATCH --gres=gpu:4
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=72
    
    module load nvhpc-hpcx/23.1
    mpirun --np 8 --map-by ppr:2:socket:pe=1 ./mpi_openmp_gpu.bin
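    The MPI + OpenMP job script is submitted in the same way; mpi_openmp_gpu.slurm is again only an example name, and squeue shows the state of the job:

    bgnlogin1 $ sbatch mpi_openmp_gpu.slurm
    bgnlogin1 $ squeue -u $USER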
    