To build and execute code on the GPU A100 partition, please use the appropriate login nodes as listed in Quickstart.

Code build

For code generation we recommend the software package NVIDIA hpcx, which combines the compiler with access to powerful libraries, e.g. MPI.

Codeblock
languagetext
titlePlain OpenMP for GPU
bgnlogin1 ~ $ module load nvhpc-hpcx/23.1
bgnlogin1 ~ $ module list
Currently Loaded Modulefiles: ... 4) hpcx   5) nvhpc-hpcx/23.1
bgnlogin1 $ nvc -mp -target=gpu openmp_gpu.c -o openmp_gpu.bin
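
The compile line above assumes a source file openmp_gpu.c containing an OpenMP target region. The following is a minimal, illustrative sketch of such a file (the file name matches the compile line; the summation kernel itself is an assumed example, not part of this documentation):

Codeblock
languagetext
titleExample openmp_gpu.c (sketch)
/* Minimal OpenMP GPU offload example (illustrative sketch). */
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* Offload the loop to the GPU; the reduction result is mapped back to the host. */
    #pragma omp target teams distribute parallel for reduction(+:sum) map(tofrom:sum)
    for (int i = 0; i < n; i++) {
        sum += 1.0 / (double)n;
    }

    printf("sum = %f\n", sum);   /* expect approximately 1.0 */
    return 0;
}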


Codeblock
languagetext
titleMPI + OpenMP for GPU
bgnlogin1 $ module load nvhpc-hpcx/23.1
bgnlogin1 $ mpicc -mp -target=gpu mpi_openmp_gpu.c -o mpi_openmp_gpu.bin
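
Analogously, the MPI compile line assumes a source file mpi_openmp_gpu.c that combines MPI with OpenMP GPU offload. A minimal, illustrative sketch (file name taken from the compile line; the kernel is an assumed example):

Codeblock
languagetext
titleExample mpi_openmp_gpu.c (sketch)
/* Minimal MPI + OpenMP GPU offload example (illustrative sketch). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1000000;
    double local = 0.0, total = 0.0;

    /* Each MPI rank offloads its share of the loop to a GPU. */
    #pragma omp target teams distribute parallel for reduction(+:local) map(tofrom:local)
    for (int i = rank; i < n; i += size) {
        local += 1.0 / (double)n;
    }

    /* Combine the partial sums on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %f (using %d ranks)\n", total, size);

    MPI_Finalize();
    return 0;
}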

Code execution

Lise's CPU-only partition and the A100 GPU partition share the same SLURM batch system. The main SLURM partition for the A100 GPU partition has the name "gpu-a100". All available SLURM partitions for the A100 GPU partition are listed on Slurm partitions GPU A100. Example job scripts are shown below.

Codeblock
languagetext
titleJob script for plain OpenMP
#!/bin/bash
#SBATCH --partition=gpu-a100:shared
#SBATCH --gres=gpu:1
#SBATCH --nodes=1
#SBATCH --ntasks=8

./openmp_gpu.bin


Codeblock
languagetext
titleJob script for MPI + OpenMP
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72

module load nvhpc-hpcx/23.1
mpirun --np 8 --map-by ppr:2:socket:pe=1 ./mpi_openmp_gpu.bin
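
The mapping ppr:2:socket:pe=1 starts 2 MPI ranks per socket, i.e. 4 ranks per node and 8 ranks on the 2 allocated nodes, matching the 4 A100 GPUs per node. Job scripts are submitted with sbatch; a minimal sketch, assuming the script above is saved as mpi_openmp_gpu.slurm (the file name is an assumption, not prescribed by this page):

Codeblock
languagetext
titleSubmit and monitor a job
bgnlogin1 $ sbatch mpi_openmp_gpu.slurm   # script file name is an assumption
bgnlogin1 $ squeue -u $USER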