The A100 GPU nodes share the same Slurm batch system with all other partitions of System Lise. The following Slurm partitions are specific to the A100 GPU partition.

| Slurm partition | Nodes | CPU | Main memory (GB) | GPUs per node | GPU hardware | Walltime (hh:mm:ss) | Description |
| --- | --- | --- | --- | --- | --- | --- | --- |
| gpu-a100 | 34 | Ice Lake 8360Y | 1000 | 4 | NVIDIA A100 80GB | 24:00:00 | full node exclusive |
| gpu-a100:shared | 5 | Ice Lake 8360Y | 1000 | 4 | NVIDIA A100 80GB | 24:00:00 | shared node access, exclusive use of the requested GPUs |
| gpu-a100:shared:mig | 1 | Ice Lake 8360Y | 1000 | 28 (4 x 7) | 1 to 28 1g.10gb A100 MIG slices | 24:00:00 | shared node access, shared GPU devices via Multi-Instance GPU; each of the four GPUs is logically split into seven usable slices with 10 GB of GPU memory associated to each slice |
| gpu-a100:test | 2 | Ice Lake 8360Y | 1000 | 4 | NVIDIA A100 80GB | 01:00:00 | nodes reserved for short test jobs before scheduling longer jobs with more resources |

See Slurm usage for how to go beyond the 24h walltime limit with job dependencies.
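A long computation can be split into a chain of jobs where each job becomes eligible only after its predecessor finished successfully. A minimal sketch, assuming the job script example.slurm checkpoints its state so a resubmitted job can resume:

# Submit a chain of three jobs; each waits for the previous one
# to complete successfully (afterok) before it can start.
jobid=$(sbatch --parsable example.slurm)
for i in 1 2; do
    jobid=$(sbatch --parsable --dependency=afterok:${jobid} example.slurm)
done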

Charge rates

Charge rates for the Slurm partitions can be found in Accounting.

Examples

Assuming a job script example.slurm

#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --nodes=2        # two full A100 nodes
#SBATCH --ntasks=8       # eight MPI tasks in total, four per node (one per GPU)
#SBATCH --gres=gpu:4     # four GPUs per node

module load openmpi/gcc.11/4.1.4
mpirun ./mycode.bin

you can submit the job to the Slurm batch system via the line:

bgnlogin2 $ sbatch example.slurm
Submitted batch job 7748544
bgnlogin2 $ squeue -u myaccount
...


The following examples request GPU resources via srun on the different partitions.

# Note: Allocates two full nodes with four GPUs each.
$ srun --nodes=2 --gres=gpu:4 --partition=gpu-a100 example_cmd


# Note: The two GPUs may be located on different nodes.
$ srun --gpus=2 --partition=gpu-a100:shared example_cmd

# Note: Two GPUs on the same node.
$ srun --nodes=1 --gres=gpu:2 --partition=gpu-a100:shared example_cmd
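To double-check which devices an allocation actually provides, nvidia-smi can be run in place of the application; here for two GPUs on one shared node:

$ srun --nodes=1 --gres=gpu:2 --partition=gpu-a100:shared nvidia-smi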


# Note: One 1g.10gb MIG slice.
$ srun --gpus=1 --partition=gpu-a100:shared:mig example_cmd
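The same MIG request also works in batch mode; a minimal job script sketch, where mycode_serial.bin stands in for any single-GPU application:

#!/bin/bash
#SBATCH --partition=gpu-a100:shared:mig
#SBATCH --gpus=1         # one 1g.10gb MIG slice
#SBATCH --ntasks=1

./mycode_serial.bin      # placeholder application using the single MIG slice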