Accounting

Project accounts

  • Each compute job (batch or interactive) is charged to a project account.
  • A project account is an entry in the Slurm database. It holds compute time measured in core hours and can be accessed by one or more users.
  • There are two types of project accounts: Test Projects (also known as Schnupperkontingent or personal account) and Compute Projects.
  • Jobs are charged in core hours, reflecting their use of hardware resources. See the following sections for details.
  • The project account a job is charged to can be specified in the job script header or with the submit command, as shown in the example after this list. When no account is specified, a default applies.
  • The default project account is the user's Test Project. This default setting can be modified in the Portal.  
  • Usage of persistent storage, including the tape library, is currently not accounted (disk quotas apply).
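
A minimal sketch of specifying the project account in a job script header, using the Slurm --account option (the account name myproject, partition choice, and program ./my_program are placeholders, not fixed values):

    #!/bin/bash
    #SBATCH --account=myproject      # placeholder: your Test Project or Compute Project
    #SBATCH --partition=cpu-clx      # example partition from the table below
    #SBATCH --nodes=2
    #SBATCH --time=01:00:00

    srun ./my_program                # placeholder executable

The account can equally be given to the submit command, e.g. sbatch --account=myproject jobscript.sh (the short form -A is equivalent).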

NHR@ZIB follows NHR-wide regulations.

Charge rates

NHR@ZIB operates the system "Lise", which consists of several compute partitions. Different charge rates apply depending on the hardware equipment of each partition.

Compute partition     Slurm partition       Charge (core hours per node and hour)  Node allocation   Remark
CPU CLX partition     cpu-clx               96                                     exclusive
                      cpu-clx:test          96                                     exclusive
                      cpu-clx:ssd           96                                     exclusive
                      cpu-clx:large         144                                    shareable         large memory
                      cpu-clx:huge          192                                    shareable         huge memory
CPU Genoa partition   cpu-genoa             192                                    exclusive
                      cpu-genoa:test        192                                    exclusive
GPU A100 partition    gpu-a100              600                                    exclusive         four Nvidia A100 GPU per node
                      gpu-a100:shared       150 per GPU                            shareable         (600 core hours with four GPU)
                      gpu-a100:test         150 per GPU                            shareable
                      gpu-a100:shared:mig   21.43 per logical GPU                  shareable         four Nvidia A100 GPU split into 28 logical GPU
GPU PVC partition     gpu-pvc               free of charge                         exclusive         four Intel Data Center GPU Max 1550 per node
                      gpu-pvc:shared        free of charge                         shareable

Job charges

The charge c of a job

  • occupying a number n of compute nodes,
  • running for a wall clock time t (in hours),
  • in a Slurm partition associated with a charge rate r,
  • and allocating a fraction f (0 < f ≤ 1.0) of the CPU cores or GPUs available on those nodes

is the product c = n × t × r × f.

Most of Lise's compute nodes belong to Slurm partitions where nodes are allocated exclusively to a job, meaning nodes are not shared between jobs. These nodes are billed in full (f = 1.0), regardless of the job's resource allocation (number of CPU cores or GPUs per node) and the actual resource utilization.

In some Slurm partitions, however, nodes can be shared among multiple jobs. Here, users may request fewer CPU cores or GPUs than are available per node (f < 1.0), and such jobs are billed at the corresponding fraction of the charge rate.

Example 1: CPU (test) job

A job running on 2 nodes of the CPU Genoa partition finished after 6 minutes (0.1 hours).
It costs 2 × 0.1 × 192 × 1.0 = 38.4 core hours.
(exclusive node allocation on the CPU Genoa partition, f = 1.0)

Example 2: CPU large memory job

A job running on 4 nodes of the CPU CLX large memory partition requested only 24 CPU cores per node and finished after 5 hours.
It costs 4 × 5 × 144 × 0.25 = 720 core hours.
(shared node allocation on the CPU CLX large memory partition, 96 CPU cores available on a CPU CLX node, f = 24/96 = 0.25)
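
For illustration only, the charge formula and the two examples above can be reproduced with a small shell sketch (the helper function charge is not an official tool, just an example):

    # c = n * t * r * f, printed in core hours
    charge () { awk -v n="$1" -v t="$2" -v r="$3" -v f="$4" 'BEGIN { printf "%.1f core hours\n", n*t*r*f }'; }

    charge 2 0.1 192 1.0     # Example 1: 38.4 core hours
    charge 4 5   144 0.25    # Example 2: 720.0 core hours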