To build and execute code on the GPU A100 partition, please log in to one of the GPU login nodes, e.g. bgnlogin1.

Code build

For code generation we recommend the NVIDIA software package nvhpc-hpcx, which combines the NVIDIA compilers with powerful libraries, e.g. MPI.

Codeblock: Plain OpenMP for GPU
bgnlogin1 ~ $ module load nvhpc-hpcx/23.1
bgnlogin1 ~ $ module list
Currently Loaded Modulefiles: ... 4) hpcx   5) nvhpc-hpcx/23.1
bgnlogin1 ~ $ nvc -mp -target=gpu openmp_gpu.c -o openmp_gpu.bin

...
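
The page does not show the source of openmp_gpu.c. As a point of reference, a minimal OpenMP offload program of this kind could look like the following sketch; the file name and its contents are assumptions for illustration, not the code used on Lise.

Codeblock: Minimal OpenMP GPU offload (sketch)
/* Hypothetical openmp_gpu.c: offloads a vector addition to the GPU
 * via an OpenMP target region. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    /* Map the inputs to the device, run the loop there,
     * and copy the result back to the host. */
    #pragma omp target teams distribute parallel for map(to: a, b) map(from: c)
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %f, visible devices: %d\n", c[0], omp_get_num_devices());
    return 0;
}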

Codeblock: MPI + OpenMP for GPU
bgnlogin1 ~ $ module load nvhpc-hpcx/23.1
bgnlogin1 ~ $ module list
Currently Loaded Modulefiles: ... 4) hpcx   5) nvhpc-hpcx/23.1
bgnlogin1 ~ $ mpicc -mp -target=gpu mpi_openmp_gpu.c -o mpi_openmp_gpu.bin
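
Analogously, a hybrid MPI + OpenMP source matching the compile line above could look like the sketch below: each rank picks one of the node's GPUs and offloads its share of a reduction. The file name, GPU assignment, and workload are illustrative assumptions, not the code used on Lise.

Codeblock: Minimal MPI + OpenMP GPU offload (sketch)
/* Hypothetical mpi_openmp_gpu.c: each MPI rank selects a GPU and
 * offloads part of a global reduction. */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Round-robin assignment of ranks to the GPUs visible on the node. */
    int ndev = omp_get_num_devices();
    if (ndev > 0)
        omp_set_default_device(rank % ndev);

    /* Each rank reduces its own strided chunk on its GPU. */
    double sum = 0.0;
    #pragma omp target teams distribute parallel for reduction(+:sum) map(tofrom: sum)
    for (int i = rank; i < N; i += size)
        sum += (double)i;

    double total = 0.0;
    MPI_Reduce(&sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %.1f (ranks: %d, GPUs on node: %d)\n", total, size, ndev);

    MPI_Finalize();
    return 0;
}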

Code execution

Lise's CPU-only partition and the A100 GPU partition share the same SLURM batch system. The main SLURM partition for the A100 GPU nodes is named "gpu-a100". Example job scripts are shown below.

All available Slurm partitions for the A100 GPU system are listed on the page Slurm partitions GPU A100.

Codeblock: Job script for plain OpenMP
#!/bin/bash
#SBATCH --partition=gpu-a100:shared
#SBATCH --gres=gpu:1
#SBATCH --nodes=1
#SBATCH --ntasks=8

./openmp_gpu.bin
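
The number of OpenMP host threads is controlled by the standard environment variable OMP_NUM_THREADS; if the default is not suitable, it can be exported in the job script before the program call. The value below is only an example.

Codeblock: Optional: set the OpenMP thread count
export OMP_NUM_THREADS=8    # example value, adjust to the allocated cores
./openmp_gpu.bin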


Codeblock: Job script for MPI + OpenMP
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72

module load nvhpc-hpcx/23.1
mpirun --np 8 --map-by ppr:2:socket:pe=1 ./mpi_openmp_gpu.bin
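
Finished job scripts are submitted with sbatch and can be monitored with squeue; both are standard SLURM commands. The session below is only an illustration, and the script file name is a placeholder.

Codeblock: Submitting a job
bgnlogin1 ~ $ sbatch mpi_openmp_gpu.job
Submitted batch job 1234567
bgnlogin1 ~ $ squeue -u $USER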