The compute nodes of Lise in Berlin (login: blogin.hlrn.de) and Emmy in Göttingen (login: glogin.hlrn.de) are organized into the following SLURM partitions:
Lise (Berlin)
| Partition (number holds cores per node) | Node name | Max. walltime | Nodes | Max. nodes per job | Max. jobs (running / queued) | Usable memory per node (MB) | CPU | Shared | Charged core-hours per node | Remark |
|---|---|---|---|---|---|---|---|---|---|---|
| standard96 | bcn# | 12:00:00 | 1204 | 512 | 16 / 500 | 362 000 | Cascade 9242 | ✘ | 14 | default partition |
| standard96:test | bcn# | 1:00:00 | 32 dedicated + 128 on demand | 16 | 1 / 500 | 362 000 | Cascade 9242 | ✘ | 14 | test nodes with higher priority but lower walltime |
| large96 | bfn# | 12:00:00 | 28 | 8 | 16 / 500 | 747 000 | Cascade 9242 | ✘ | 21 | fat-memory nodes |
| large96:test | bfn# | 1:00:00 | 2 dedicated + 2 on demand | 2 | 1 / 500 | 747 000 | Cascade 9242 | ✘ | 21 | fat-memory test nodes with higher priority but lower walltime |
| large96:shared | bfn# | 48:00:00 | 2 dedicated | 1 | 16 / 500 | 747 000 | Cascade 9242 | ✓ | 21 | fat-memory nodes for data pre- and post-processing |
| huge96 | bsn# | 24:00:00 | 2 | 1 | 16 / 500 | 1 522 000 | Cascade 9242 | ✓ | 28 | very fat-memory nodes for data pre- and post-processing |
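To request a specific partition from the table above, name it in the batch script header. The following is a minimal sketch for the large96:test partition; the job name and the application binary (`./my_mpi_program`) are placeholders, and the resource values simply illustrate the partition's limits (at most 2 nodes, at most 1 hour of walltime, 96 cores per node as the partition name indicates).

```shell
#!/bin/bash
#SBATCH --partition=large96:test   # fat-memory test partition from the table above
#SBATCH --nodes=2                  # within the 2-node-per-job limit of large96:test
#SBATCH --ntasks-per-node=96      # 96 cores per node, per the partition name
#SBATCH --time=01:00:00            # must not exceed the partition's max. walltime
#SBATCH --job-name=partition_demo  # placeholder job name

srun ./my_mpi_program              # placeholder application binary
```

Jobs submitted to a :test partition are scheduled with higher priority, which makes them suitable for short functionality checks rather than production runs.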
Are 12 hours too short? See how to work around the 12-hour walltime limit using job dependencies.
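The dependency approach can be sketched as a chain of submissions, where each job starts only after the previous one has finished successfully. This assumes a hypothetical script `job.slurm` whose application checkpoints its state and restarts from the last checkpoint:

```shell
# Submit the first 12-hour job; --parsable makes sbatch print only the job ID.
jobid=$(sbatch --parsable job.slurm)

# Chain three follow-up jobs: each waits for the previous one to
# complete successfully (afterok) before it becomes eligible to run.
for i in 2 3 4; do
  jobid=$(sbatch --parsable --dependency=afterok:${jobid} job.slurm)
done

echo "Last job in the chain: ${jobid}"
```

Because each segment stays within the 12-hour limit of standard96, the chain as a whole can accumulate far more compute time than a single job could.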
...