...
MPI + OpenMP for GPU

```text
bgnlogin1 $ module load nvhpc-hpcx/23.1
bgnlogin1 $ module list
Currently Loaded Modulefiles: ... 4) hpcx 5) nvhpc-hpcx/23.1
bgnlogin1 $ mpicc -mp=gpu mpi_openmp_gpu.c -o mpi_openmp_gpu.bin
```
Code execution

The CPU-only partition and the A100 GPU partition share the same SLURM batch system. The main SLURM partition for the A100 GPU nodes is named "gpu-a100"; all available partitions are listed on the page Slurm partitions GPU A100. Example job scripts are shown below.
Job script for plain OpenMP

```bash
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=72

./openmp_gpu.bin
```
Job script for MPI + OpenMP

```bash
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72

module load nvhpc-hpcx/23.1
mpirun --np 8 --map-by ppr:2:socket:pe=1 ./mpi_openmp_gpu.bin
```