To build and execute code on the GPU A100 partition, please use the appropriate login nodes as listed in the Quickstart and log in to
- a GPU A100 login node, like bgnlogin.nhr.zib.de.
- see also GPU A100 partition
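For example, from your local machine: ssh <username>@bgnlogin.nhr.zib.de (replace <username> with your NHR account name).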
Code build
For code generation we recommend the software package NVIDIA hpcx, which combines the NVIDIA compilers with access to powerful libraries, e.g. MPI.
Load hpcx environment:

bgnlogin1 ~ $ module load nvhpc-hpcx/23.1
bgnlogin1 ~ $ module list
Currently Loaded Modulefiles: ... 4) hpcx   5) nvhpc-hpcx/23.1

Plain OpenMP for GPU:

bgnlogin1 $ nvc -mp -target=gpu openmp_gpu.c -o openmp_gpu.bin
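The compile command above expects a source file openmp_gpu.c. As a minimal, hypothetical sketch of such a program (the file content is illustrative, not prescribed by this documentation), a plain OpenMP GPU code offloads a loop with a target directive:

/* openmp_gpu.c -- minimal OpenMP GPU offload sketch (illustrative) */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double x[N];

    /* Offload the loop to the default device (a GPU when compiled
       with -mp -target=gpu). */
    #pragma omp target teams distribute parallel for map(tofrom: x[0:N])
    for (int i = 0; i < N; i++)
        x[i] = 2.0 * i;

    printf("%d device(s) visible, x[1] = %.1f\n",
           omp_get_num_devices(), x[1]);
    return 0;
}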
MPI + OpenMP for GPU:

bgnlogin1 $ module load nvhpc-hpcx/23.1
bgnlogin1 $ mpicc -mp -target=gpu mpi_openmp_gpu.c -o mpi_openmp_gpu.bin
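Similarly, mpi_openmp_gpu.c could combine MPI ranks with OpenMP offload. In this hypothetical sketch (again illustrative only), each rank selects one of the node's GPUs round-robin:

/* mpi_openmp_gpu.c -- MPI + OpenMP GPU offload sketch (illustrative) */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

#define N 1000000

int main(int argc, char *argv[])
{
    static double x[N];
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Bind each MPI rank to one GPU, round-robin over the devices
       visible on the node. */
    int ndev = omp_get_num_devices();
    if (ndev > 0)
        omp_set_default_device(rank % ndev);

    #pragma omp target teams distribute parallel for map(tofrom: x[0:N])
    for (int i = 0; i < N; i++)
        x[i] = 2.0 * i + rank;

    printf("rank %d of %d on device %d: x[1] = %.1f\n",
           rank, size, omp_get_default_device(), x[1]);

    MPI_Finalize();
    return 0;
}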
Code execution
Lise's CPU partition and the A100 GPU partition share the same SLURM batch system. The main SLURM partition for the A100 GPU nodes is named "gpu-a100". All available SLURM partitions for the A100 GPU partition are listed on Slurm partitions GPU A100. Example job scripts are shown below.
Job script for plain OpenMP:

#!/bin/bash
#SBATCH --partition=gpu-a100:shared
#SBATCH --gres=gpu:1
#SBATCH --nodes=1
#SBATCH --ntasks=1
./openmp_gpu.bin
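To run the job, save the script to a file, e.g. openmp_gpu.slurm (the file name is arbitrary), and submit it with sbatch openmp_gpu.slurm; squeue shows its state. With --gres=gpu:1 the job requests a single A100 GPU, and the "gpu-a100:shared" partition is intended for jobs that do not need a full node.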
Job script for MPI + OpenMP:

#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
module load nvhpc-hpcx/23.1
mpirun -np 8 --map-by ppr:2:socket:pe=1 ./mpi_openmp_gpu.bin
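A note on the mpirun line: --map-by ppr:2:socket:pe=1 starts two MPI ranks per socket and pins one core to each rank. On a dual-socket A100 node this yields four ranks per node, one per GPU requested via --gres=gpu:4, so -np 8 matches the two nodes requested with --nodes=2.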