
Project account

...

  • A User Account accesses a project account containing a budget of core hours. A project account can be a Test Project or a Compute Project.
  • A batch job on the compute system is charged a number of core hours to measure its usage.
  • Usage of persistent storage, including the tape library, is currently not accounted.

...

NHR@ZIB operates the system Lise with different compute clusters. Each cluster contains different partitions. The properties of the available (Slurm) partitions, representing the specific hardware, can be found on the pages

...

Each partition has a specific charge rate.

Compute partition | Slurm partition | Charge (core hours per node per 1 h occupancy time) | Remark
CPU cluster, partition "Lise" | standard96, standard96:test | 96 |
 | large96, large96:test, large96:shared | 144 | high memory layout
 | huge96 | 192 | high memory layout
GPU A100 cluster partition | gpu-a100 | 600 | four NVidia A100 (80 GB) per compute node
 | gpu-a100:shared | 150 per GPU | 600 for four NVidia A100 (80 GB) per node
 | gpu-a100:shared:mig | 21.43 per MiG slice | four NVidia A100 (80 GB), each split into two 2g.10gb slices (8 per node, currently 24 in total) and one 3g.20gb slice (4 per node, currently 12 in total)
GPU PVC cluster partition | gpu-pvc | free of charge | test phase
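
On the shared A100 partitions the charge is accounted per GPU or per MiG slice rather than per node. As a hedged illustration only: the rates (150 per GPU, 21.43 per MiG slice) are taken from the table above, while the function name and the example job sizes are hypothetical and not an official NHR tool.

```python
# Hedged sketch of per-GPU accounting on the shared A100 partitions of Lise.
CHARGE_PER_GPU = 150.0        # core hours per A100 GPU and hour on gpu-a100:shared
CHARGE_PER_MIG_SLICE = 21.43  # core hours per MiG slice and hour on gpu-a100:shared:mig

def shared_gpu_charge(gpus: int, wallclock_hours: float) -> float:
    """Core hours charged for a job occupying `gpus` GPUs on gpu-a100:shared (hypothetical helper)."""
    return gpus * CHARGE_PER_GPU * wallclock_hours

# Two GPUs for 5 hours: 2 * 150 * 5 = 1500 core hours.
print(shared_gpu_charge(gpus=2, wallclock_hours=5.0))  # 1500.0
# A full node (four A100) for 1 hour matches the per-node rate of 600 core hours.
print(shared_gpu_charge(gpus=4, wallclock_hours=1.0))  # 600.0
```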

Charge Rates for NHR@Göttingen

NHR@Göttingen operates the system Emmy, which comprises different compute clusters, each containing different types of compute nodes. The charge rates for the partitions are given in the table below.

one node in partition | charged "core hours" per 1 h occupancy time | increased charge rate due to
standard96, standard96:test | 96 |
large96, large96:test, large96:shared | 144 | high memory layout
huge96 | 192 | high memory layout
medium40, medium40:test | 40 |
large40, large40:test | 80 | high memory layout
gpu | 375 | four NVidia V100 (32 GB) GPUs per node
grete | 600 | four NVidia A100 (40 GB) per node
grete:shared | 150 per GPU | 600 for four NVidia A100 (40 GB) per node; 1200 for eight NVidia A100 (80 GB) per node
grete:interactive, grete:preemptible | 47 per MiG slice | four NVidia A100 (40 GB), each split into two 2g.10gb slices (8 per node, currently 24 in total) and one 3g.20gb slice (4 per node, currently 12 in total)

Job charge

The charge in core hours for a batch job depends on the number of nodes, the wallclock time used by the job, and the charge rate of the partition used. For a batch job with

...
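
As a minimal sketch of the rule stated above, the charge is the product of node count, wallclock hours, and the partition's charge rate. The helper name and the example job below are hypothetical; the rate 96 is the standard96 rate from the tables above.

```python
# Minimal sketch of the charging rule described above:
# charge (core hours) = nodes * wallclock hours * charge rate of the partition.
def job_charge(nodes: int, wallclock_hours: float, charge_rate: float) -> float:
    """Core hours charged for a batch job (hypothetical helper, not an NHR tool)."""
    return nodes * wallclock_hours * charge_rate

# Example: 4 nodes on standard96 (rate 96) for 3 hours -> 4 * 3 * 96 = 1152 core hours.
print(job_charge(nodes=4, wallclock_hours=3.0, charge_rate=96))  # 1152.0
```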