The compute nodes of Lise in Berlin (blogin.hlrn.de) and Emmy in Göttingen (glogin.hlrn.de) are organized via the following SLURM partitions:
The following GPU partitions are available on Lise:

| Partition | GPUs per job | Access |
|---|---|---|
| gpu-a100 | 4 A100 per node | exclusive full-node access |
| gpu-a100:shared | 1 to 4 A100 | shared node access, exclusive use of the requested GPUs |
| gpu-a100:shared:mig | 1 to 28 1g.10gb A100 MIG slices | shared node access, shared GPU devices via Multi-Instance GPU (MIG); each of the four GPUs is logically split into seven usable slices, each with 10 GB of GPU memory |
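As a minimal sketch of how these partitions are requested, the batch script below asks for two dedicated A100s on a shared node. The job name and the exact `--gres` type strings are assumptions and may differ from the site's configuration; for the MIG partition the request would name a slice type instead (e.g. `--gres=gpu:1g.10gb:2`), which can be verified with `scontrol show partition gpu-a100:shared:mig` on the system.

```bash
#!/bin/bash
#SBATCH --partition=gpu-a100:shared   # shared node, exclusive use of the requested GPUs
#SBATCH --gres=gpu:2                  # request 2 of the 4 A100s on the node
#SBATCH --nodes=1
#SBATCH --time=02:00:00               # requested walltime
#SBATCH --job-name=gpu-test          # placeholder job name

# Show which GPUs Slurm assigned to this job
srun nvidia-smi -L
```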
Lise (Berlin)
| Partition (number holds cores per node) | Node name | Max. walltime | Nodes | Max. nodes per job | Max. jobs per user (running / queued) | Usable memory (MB) per node | CPU | Shared | Charged core-hours per node | Remark |
|---|---|---|---|---|---|---|---|---|---|---|
| standard96 | bcn# | 12:00:00 | 1204 | 512 | 16 / 500 | 362 000 | Cascade 9242 | ✘ | 96 | default partition |
| standard96:test | bcn# | 1:00:00 | 32 dedicated + 128 on demand | 16 | 1 / 500 | 362 000 | Cascade 9242 | ✘ | 96 | test nodes with higher priority but lower walltime |
| large96 | bfn# | 12:00:00 | 28 | 8 | 16 / 500 | 747 000 | Cascade 9242 | ✘ | 144 | fat memory nodes |
| large96:test | bfn# | 1:00:00 | 2 dedicated + 2 on demand | 2 | 1 / 500 | 747 000 | Cascade 9242 | ✘ | 144 | fat memory test nodes with higher priority but lower walltime |
| large96:shared | bfn# | 48:00:00 | 2 dedicated | 1 | 16 / 500 | 747 000 | Cascade 9242 | ✓ | 144 | fat memory nodes for data pre- and postprocessing |
| huge96 | bsn# | 24:00:00 | 2 | 1 | 16 / 500 | 1 522 000 | Cascade 9242 | ✓ | 192 | very fat memory nodes for data pre- and postprocessing |
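As a sketch of how these limits translate into a batch job, the script below requests two full standard96 nodes within the 12-hour walltime cap. The job name and the executable are placeholders.

```bash
#!/bin/bash
#SBATCH --partition=standard96     # default partition; this line could be omitted
#SBATCH --nodes=2                  # well below the 512-node limit per job
#SBATCH --ntasks-per-node=96      # one MPI rank per physical core
#SBATCH --time=12:00:00            # maximum walltime on standard96
#SBATCH --job-name=cpu-test       # placeholder job name

srun ./my_mpi_program              # placeholder executable
```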
...