The compute nodes of Lise in Berlin (blogin.hlrn.de) and Emmy in Göttingen (glogin.hlrn.de) are organized into the following SLURM partitions:

...

| Partition (number = cores per node) | Node name | Max. walltime | Nodes | Max. nodes per job | Max. jobs per user | Usable memory MB per node | CPU, GPU type | Shared | NPL per node hour | Remark |
|---|---|---|---|---|---|---|---|---|---|---|
| standard96 | gcn# | 12:00:00 | 924 | 256 | unlimited | 362 000 | Cascade 9242 | | 96 | default partition |
| standard96:test | gcn# | 1:00:00 | 16 dedicated + 48 on demand | 16 | unlimited | 362 000 | Cascade 9242 | | 96 | test nodes with higher priority but lower walltime |
| large96 | gfn# | 12:00:00 | 12 | 2 | unlimited | 747 000 | Cascade 9242 | | 144 | fat memory nodes |
| large96:test | gfn# | 1:00:00 | 2 dedicated + 2 on demand | 2 | unlimited | 747 000 | Cascade 9242 | | 144 | fat memory test nodes with higher priority but lower walltime |
| large96:shared | gfn# | 48:00:00 | 2 dedicated + 2 on demand | 1 | unlimited | 747 000 | Cascade 9242 | ✓ | 144 | fat memory nodes for data pre- and postprocessing |
| huge96 | gsn# | 24:00:00 | 2 | 1 | unlimited | 1 522 000 | Cascade 9242 | | 192 | very fat memory nodes for data pre- and postprocessing |
| medium40 | gcn# | 48:00:00 | 368 | 128 | unlimited | 181 000 | Skylake 6148 | | 40 | |
| medium40:test | gcn# | 1:00:00 | 32 dedicated + 96 on demand | 8 | unlimited | 181 000 | Skylake 6148 | | 40 | test nodes with higher priority but lower walltime |
| large40 | gfn# | 48:00:00 | 11 | 4 | unlimited | 764 000 | Skylake 6148 | | 80 | fat memory nodes |
| large40:test | gfn# | 1:00:00 | 3 | 2 | unlimited | 764 000 | Skylake 6148 | | 80 | fat memory test nodes with higher priority but lower walltime |
| large40:shared | gfn# | 48:00:00 | 2 | 1 | unlimited | 764 000 | Skylake 6148 | ✓ | 80 | fat memory nodes for data pre- and postprocessing |
| gpu | ggpu# | 12:00:00 | 3 | 3 | unlimited | 764 000 (32 GB per GPU) | Skylake 6148 + 4 Tesla V100 | | 600 | see GPU Usage |
| grete | ggpu# | 48:00:00 | 34 | 34 | unlimited | 500 000 (40 GB per GPU) | Zen3 EPYC 7513 + 4 NVidia A100 | | | coming soon 24.04.2023 |
| grete:shared | ggpu# | 48:00:00 | 34 | 34 | unlimited | 500 000 (40 GB per GPU) | Zen3 EPYC 7513 + 4 NVidia A100 | | | coming soon 24.04.2023 |
| grete:interactive | ggpu# | 48:00:00 | 4 | 4 | unlimited | two nodes with 515 810 (40 GB per GPU), two nodes with 1 031 620 (80 GB per GPU) | two Zen3 EPYC 7513 + 4 NVidia A100 nodes, two Zen2 EPYC 7662 + 8 NVidia A100 nodes | | | coming soon 24.04.2023; see GPU Usage. GPUs are split into slices via MIG (3 slices for the two nodes with 4 GPUs, and 7 slices for the two nodes with 8 GPUs) |
| grete:preemptible | ggpu# | 48:00:00 | 4 | 4 | unlimited | two nodes with 515 810 (40 GB per GPU), two nodes with 1 031 620 (80 GB per GPU) | two Zen3 EPYC 7513 + 4 NVidia A100 nodes, two Zen2 EPYC 7662 + 8 NVidia A100 nodes | | | coming soon 24.04.2023; see GPU Usage. GPUs are split into slices via MIG (3 slices for the two nodes with 4 GPUs, and 7 slices for the two nodes with 8 GPUs) |
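
For orientation, a minimal batch script targeting the default partition could look like the following sketch. The node count, time limit, and program name are placeholders, and depending on your project setup an accounting flag (sbatch option -A) may be required in addition:

```bash
#!/bin/bash
#SBATCH --partition=standard96   # default partition (96 cores per node)
#SBATCH --nodes=2                # well below the 256-node limit per job
#SBATCH --ntasks-per-node=96     # one MPI rank per physical core
#SBATCH --time=12:00:00          # must not exceed the partition's max. walltime
srun ./my_program                # placeholder executable
```

The same pattern applies to the other partitions; only the partition name, node count, and time limit change, within the limits listed in the table above.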

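On the GPU partitions, GPUs are requested through Slurm's generic resources (GRES). The GRES string below (gpu:4) is an assumption for illustration only; the authoritative syntax is documented on the GPU Usage page:

```bash
#!/bin/bash
#SBATCH --partition=gpu          # Skylake 6148 nodes with 4 Tesla V100 each
#SBATCH --nodes=1
#SBATCH --gres=gpu:4             # assumed GRES string; see GPU Usage for the exact form
#SBATCH --time=12:00:00          # partition maximum
srun ./my_gpu_program            # placeholder executable
```
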
Which partition to choose?

...