
The compute nodes of Lise in Berlin (blogin.hlrn.de) and Emmy in Göttingen (glogin.hlrn.de) are organized into the following SLURM partitions:

The following GPU partitions are available on Lise.

gpu-a100 | 4x A100 per node | full node exclusive

# run command on two MPI nodes in the exclusive use partition
srun -N 2 -p gpu-a100 example_cmd

gpu-a100:shared | 1 to 4 A100 | shared node access, exclusive use of the requested GPUs

# run command using GPUs (-G/--gpus) in the shared node partition
srun -G 2 -p gpu-a100:shared example_cmd

gpu-a100:shared:mig | 1 to 28 1g.10gb A100 MIG slices | shared node access, shared GPU devices via Multi-Instance GPU.
Each of the four GPUs is logically split into seven usable slices, with 10 GB of GPU memory associated with each slice.

# run command using one Multi-Instance GPU slice on the corresponding partition
srun -G 1 -p gpu-a100:shared:mig example_cmd
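The srun lines above can also be placed inside a batch script. As a minimal sketch for the shared GPU partition (the script layout, walltime, and binary name example_cmd are illustrative assumptions, not prescribed values):

```shell
#!/bin/bash
#SBATCH --partition=gpu-a100:shared  # shared node access
#SBATCH --gpus=2                     # exclusive use of two A100 GPUs
#SBATCH --time=01:00:00              # requested walltime (example value)

# launch the program on the allocated GPUs
srun ./example_cmd
```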



Lise (Berlin)

Partition (number holds cores per node) | Node name | Max. walltime | Nodes | Max. nodes per job | Max. jobs (running/queued) per user | Usable memory per node [MB] | CPU | Shared | Charged core-hours per node | Remark
standard96 | bcn# | 12:00:00 | 1204 | 512 | 16 / 500 | 362 000 | Cascade 9242 | – | 96 | default partition
standard96:test | bcn# | 1:00:00 | 32 dedicated +128 on demand | 16 | 1 / 500 | 362 000 | Cascade 9242 | – | 96 | test nodes with higher priority but lower walltime
large96 | bfn# | 12:00:00 | 28 | 8 | 16 / 500 | 747 000 | Cascade 9242 | – | 144 | fat memory nodes
large96:test | bfn# | 1:00:00 | 2 dedicated +2 on demand | 2 | 1 / 500 | 747 000 | Cascade 9242 | – | 144 | fat memory test nodes with higher priority but lower walltime
large96:shared | bfn# | 48:00:00 | 2 dedicated | 1 | 16 / 500 | 747 000 | Cascade 9242 | ✓ | 144 | fat memory nodes for data pre- and postprocessing
huge96 | bsn# | 24:00:00 | 2 | 1 | 16 / 500 | 1 522 000 | Cascade 9242 | ✓ | 192 | very fat memory nodes for data pre- and postprocessing
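For the CPU partitions above, a minimal batch script for the default standard96 partition might look as follows (node count, walltime, and the program name example_cmd are placeholder assumptions within the limits listed in the table):

```shell
#!/bin/bash
#SBATCH --partition=standard96  # default CPU partition on Lise
#SBATCH --nodes=4               # up to 512 nodes per job are allowed
#SBATCH --time=12:00:00         # maximum walltime in this partition

# start the MPI program on all allocated nodes
srun ./example_cmd
```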

Is a 12-hour walltime too short? See here for how to work around the 12-hour walltime limit using job dependencies.
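A common pattern for chaining jobs with dependencies is sketched below (the job script names are hypothetical; each script is assumed to write a checkpoint that the next job resumes from):

```shell
# submit the first 12-hour job and capture its job ID
jobid=$(sbatch --parsable job_part1.slurm)

# start the follow-up job only after the first one finishes successfully
sbatch --dependency=afterok:${jobid} job_part2.slurm
```

With --dependency=afterok, the second job stays pending until the first job completes with exit code zero, so a long computation can span several walltime windows.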


List of CPUs and GPUs at HLRN


Short name | Link to manufacturer specifications | Where to find | Units per node | Cores per unit | Clock speed [GHz]
Cascade 9242 | Intel Cascade Lake Platinum 9242 (CLX-AP) | Lise and Emmy compute partitions | 2 | 48 | 2.3
Cascade 4210 | Intel Cascade Lake Silver 4210 (CLX) | blogin[1-8], glogin[3-8] | 2 | 10 | 2.2
Skylake 6148 | Intel Skylake Gold 6148 | Emmy compute partitions | 2 | 20 | 2.4
Skylake 4110 | Intel Skylake Silver 4110 | glogin[1-2] | 2 | 8 | 2.1
Tesla V100 | NVIDIA Tesla V100 32GB | Emmy grete partitions | 4 | 640/5120* |
Tesla A100 | NVIDIA Tesla A100 40GB and 80GB | Emmy grete partitions | 4 or 8 | 432/6912* |

*Tensor Cores / CUDA Cores
