Project account

The NHR center NHR@ZIB follows the NHR-wide regulations.

  • A User Account accesses a project account containing units of core hours. A project account can be a Test Project or a Compute Project.
  • A batch job on the compute system is charged a number of core hours, measuring the compute time usage.
  • Usage of persistent storage, including the tape library, is currently not accounted.

Charge rates for NHR@ZIB

NHR@ZIB operates the system Lise, which holds different compute clusters, each containing different types of compute nodes. Properties of the available Slurm partitions can be found on the respective partition pages. Each partition has a specific charge rate; the charge rates for the partitions are given below.

Each charge is given in core hours per node per hour of occupancy time, unless noted otherwise.

CPU CLX partition
  • cpu-clx, charge 96
  • cpu-clx:test, charge 96
  • cpu-clx:ssd, charge 96
  • cpu-clx:large, charge 144, high memory layout
  • cpu-clx:huge, charge 192, high memory layout

CPU Genoa partition
  • cpu-genoa, free of charge¹, test phase
  • cpu-genoa:test, free of charge¹, test phase

GPU A100 partition
  • gpu-a100, charge 600, four NVidia A100 (80 GB) per compute node
  • gpu-a100:shared, charge 150 per GPU, i.e. 600 for four NVidia A100 (80 GB) per node
  • gpu-a100:shared:mig, charge 21.43 per MiG slice, four NVidia A100 (80 GB), each split into two 2g.10gb slices (8 per node and currently 24 in total) and one 3g.20gb slice (4 per node and currently 12 in total)

Charge Rates for NHR@Göttingen

NHR@Göttingen operates the system Emmy, which holds different compute clusters, each containing different types of compute nodes. The charge rates for the partitions are given below.

Each charge is given in core hours per node per hour of occupancy time, unless noted otherwise. Elided values are marked "...".

  • standard96, charge ...
  • standard96:test, charge ...
  • large96, charge ...
  • large96:test, charge ...
  • large96:shared, charge ...
  • ..., charge 192, ...
  • medium40, charge ...
  • medium40:test, charge ...
  • large40, charge ...
  • large40:test, charge ...
  • gpu, charge ..., four NVidia V100 (32 GB) GPUs per node
  • grete, charge ..., four NVidia A100 (40 GB)
  • grete:shared, charge 600 for four NVidia A100 (40 GB) per node or 1200 for eight NVidia A100 (80 GB) GPUs per node
  • grete:interactive and grete:preemptible, charge ..., four NVidia A100 (40 GB), each split into two 2g.10gb slices (8 per node and currently 24 in total) and one 3g.20gb slice (4 per node and currently 12 in total)

GPU PVC partition
  • gpu-pvc, free of charge¹, test phase

¹ In practice, the charge is a very small number (close to zero).

Job charge

The charge of core hours for a batch job depends on the number of nodes, the wallclock time used by the job, and the charge rate for the partition used. For a batch job with

...

Panel: Example 2: charge for a core reservation

A job on 48 cores on partition large96:shared (96 cores per node, charge rate 144 core hours) holds a reservation for num = 48/96 = 0.5 nodes. Assuming a wallclock time of 3 hours yields a job charge of 0.5 × 3 × 144 = 216 core hours.
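The arithmetic of this example can be sketched in a few lines of shell; the formula nodes × hours × rate follows the example above, and all variable names are illustrative:

```shell
# Sketch of the job charge computation from the example above.
# Variable names are illustrative; for shared partitions the node
# count may be fractional.
cores=48           # cores reserved by the job
cores_per_node=96  # cores per node on large96:shared
hours=3            # wallclock time in hours
rate=144           # charge rate: core hours per node per hour
charge=$(awk -v c="$cores" -v n="$cores_per_node" -v h="$hours" -v r="$rate" \
    'BEGIN { printf "%g", (c / n) * h * r }')
echo "charge: $charge core hours"
```

Running the sketch prints the 216 core hours computed in the example.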

Select the account in your batch job

Batch jobs are submitted by a user account to the compute system.

  • For each job the user chooses one project that will be charged for the job. At the beginning of the lifetime of the User Account, the default project is the Test Project.
  • The user selects the project for a job using the Slurm option --account at submit time.
  • The default project for computing time of a User Account can be changed under the link User Data on the Portal NHR@ZIB.

Codeblock: Example: account for one job

To charge the account myaccount, add the following line to the job script:

#SBATCH --account=myaccount
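A minimal job script putting the account selection into context might look as follows; this is a sketch, and the account name myaccount, the partition, and the program name are placeholders:

```shell
#!/bin/bash
#SBATCH --account=myaccount   # project to be charged (placeholder name)
#SBATCH --partition=cpu-clx   # charge rate 96 core hours per node-hour
#SBATCH --nodes=2
#SBATCH --time=01:00:00       # 2 nodes * 1 h * 96 = 192 core hours charged

srun ./my_program             # placeholder executable
```

Alternatively, the option can be passed on the command line at submit time: sbatch --account=myaccount jobscript.sh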

After job script submission, the batch system checks the project for account coverage and authorizes the job for scheduling. Otherwise the job is rejected; please note the error message:

Codeblock: Example: out of core hours

You can check the account of a job that is out of core hours:

> squeue
...
... myaccount ... AccountOutOfNPL ...