...
Program build and execution
...
Example: Show the currently available software and access compilers
bgnlogin1 $ module avail
...
bgnlogin1 $ module load gcc
...
bgnlogin1 $ module list
Currently Loaded Modulefiles:
  1) HLRNenv   2) sw.a100   3) slurm   4) gcc/11.3.0(default)
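With the compiler environment loaded, the executable used in the job script below can be built on the login node. The following is only a minimal sketch: mycode.c is a hypothetical source file name, and the openmpi/gcc.11/4.1.4 module is the same one loaded in the example job script.

Example: Build an MPI program (sketch)

bgnlogin1 $ module load openmpi/gcc.11/4.1.4
bgnlogin1 $ mpicc -O2 -o mycode.bin mycode.c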
Using the Slurm batch system
The GPU nodes are available via dedicated Slurm partitions (see Slurm partitions GPU A100). The A100 GPU partition shares the same Slurm batch system with all other partitions of System Lise, including the CPU-only partition.
- A general introduction to the batch system can be found under Slurm usage.
- Slurm partitions GPU A100 describes the specific Slurm properties of the GPU A100 partition. The main Slurm partition for the A100 GPU nodes is named "gpu-a100". An example job script is shown below.
GPU job script
#!/bin/bash
# run in the A100 GPU partition
#SBATCH --partition=gpu-a100
# 2 nodes with 8 MPI tasks in total (4 per node, one per GPU)
#SBATCH --nodes=2
#SBATCH --ntasks=8
# request 4 GPUs per node
#SBATCH --gres=gpu:4
module load openmpi/gcc.11/4.1.4
mpirun ./mycode.bin
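The job script is submitted and monitored with the usual Slurm commands. A minimal sketch, assuming the script above was saved as mycode.slurm (a hypothetical file name):

bgnlogin1 $ sbatch mycode.slurm
bgnlogin1 $ squeue -u $USER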
...