The compute nodes of Lise in Berlin (blogin.hlrn.de) and Emmy in Göttingen (glogin.hlrn.de) are organized into the following SLURM partitions:

...

Lise (Berlin)

| Partition (number in name = cores per node) | Node name | Max. walltime | Nodes | Max. nodes per job | Max. jobs per user | Usable memory (MB per node) | CPU, GPU type | Shared | NPL per node-hour | Remark |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| standard96 | gcn# | 12:00:00 | 924 | 256 | unlimited | 362 000 | Cascade 9242 |  | 96 | default partition |
| standard96:test | gcn# | 1:00:00 | 16 dedicated + 48 on demand | 16 | unlimited | 362 000 | Cascade 9242 |  | 96 | test nodes with higher priority but lower walltime |
| large96 | gfn# | 12:00:00 | 12 | 2 | unlimited | 747 000 | Cascade 9242 |  | 144 | fat memory nodes |
| large96:test | gfn# | 1:00:00 | 2 dedicated + 2 on demand | 2 | unlimited | 747 000 | Cascade 9242 |  | 144 | fat memory test nodes with higher priority but lower walltime |
| large96:shared | gfn# | 48:00:00 | 2 dedicated + 2 on demand | 1 | unlimited | 747 000 | Cascade 9242 | ✓ | 144 | fat memory nodes for data pre- and postprocessing |
| huge96 | gsn# | 24:00:00 | 2 | 1 | unlimited | 1 522 000 | Cascade 9242 | ✓ | 192 | very fat memory nodes for data pre- and postprocessing |

Emmy (Göttingen)

| Partition (number in name = cores per node) | Node name | Max. walltime | Nodes | Max. nodes per job | Max. jobs per user | Usable memory (MB per node) | CPU, GPU type | Shared | NPL per node-hour | Remark |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| medium40 | gcn# | 48:00:00 | 368 | 128 | unlimited | 181 000 | Skylake 6148 |  | 40 |  |
| medium40:test | gcn# | 1:00:00 | 32 dedicated + 96 on demand | 8 | unlimited | 181 000 | Skylake 6148 |  | 40 | test nodes with higher priority but lower walltime |
| large40 | gfn# | 48:00:00 | 11 | 4 | unlimited | 764 000 | Skylake 6148 |  | 80 | fat memory nodes |
| large40:test | gfn# | 1:00:00 | 3 | 2 | unlimited | 764 000 | Skylake 6148 |  | 80 | fat memory test nodes with higher priority but lower walltime |
| large40:shared | gfn# | 48:00:00 | 2 | 1 | unlimited | 764 000 | Skylake 6148 | ✓ | 80 | fat memory nodes for data pre- and postprocessing |
| gpu | ggpu# | 12:00:00 | 3 | 3 | unlimited | 764 000 (32 GB per GPU) | Skylake 6148 + 4 NVIDIA V100 |  | 600 | see GPU Usage |
| grete | ggpu# | 48:00:00 | 35 | 35 | unlimited | 500 000 and 1 000 000 (40 GB and 80 GB per GPU) | Zen3 EPYC 7513 + 4 NVIDIA A100, and Zen2 EPYC 7662 + 8 NVIDIA A100 |  | 600/1200* | see GPU Usage |
| grete:shared | ggpu# | 48:00:00 | 35 | 35 | unlimited | 500 000 and 1 000 000 (40 GB and 80 GB per GPU) | Zen3 EPYC 7513 + 4 NVIDIA A100, and Zen2 EPYC 7662 + 8 NVIDIA A100 | ✓ | 600/1200* | see GPU Usage |
| grete:interactive | ggpu# | 48:00:00 | 3 | 3 | unlimited | 500 000 (40 GB per GPU) | Zen3 EPYC 7513 + 4 NVIDIA A100 | ✓ | 600 | see GPU Usage; GPUs are split into slices via MIG (3 slices per GPU) |
| grete:preemptible | ggpu# | 48:00:00 | 3 | 3 | unlimited | 500 000 (40 GB per GPU) | Zen3 EPYC 7513 + 4 NVIDIA A100 |  | 600 | see GPU Usage |

* 600 for the nodes with 4 GPUs, and 1200 for the nodes with 8 GPUs
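A job selects one of these partitions with Slurm's --partition (-p) option. The following is a minimal sketch, not a complete template; the account name and executable are placeholders you must replace. It requests four standard96 nodes and stays within that partition's 12:00:00 walltime limit:

    #!/bin/bash
    #SBATCH --partition=standard96   # default CPU partition on Lise, 96 cores per node
    #SBATCH --nodes=4                # well below the 256-node limit per job
    #SBATCH --ntasks-per-node=96     # one MPI rank per physical core
    #SBATCH --time=12:00:00          # must not exceed the partition's max. walltime
    #SBATCH --account=myproject      # placeholder: your project account

    srun ./my_mpi_program            # placeholder executable

The same script can be tried on standard96:test by switching the partition name and reducing --time to at most 1:00:00. A GPU job on Emmy would instead target one of the grete partitions, e.g. sbatch -p grete:shared --gres=gpu:1 jobscript.sh (see GPU Usage for details).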


Which partition to choose?

...
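When deciding, the current limits of each partition can also be queried directly on a login node with standard Slurm commands; a quick sketch (the sinfo format fields print partition, time limit, node count, and memory per node):

    sinfo -o "%P %l %D %m"              # overview of all partitions and their limits
    scontrol show partition standard96  # full settings of a single partition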