Accounting
Project accounts
- Each compute job (batch or interactive) is charged to a project account.
- A project account is an entry in the SLURM database. It contains compute time measured in core hours and can be accessed by one or more users.
- There are two types of project accounts: Test Projects (also known as "Schnupperkontingent", i.e. a trial quota, or personal account) and Compute Projects.
- Jobs are charged in core hours, reflecting the use of hardware resources. See the following sections for details.
- The project account a job is charged to can be specified in the job script header or with the submit command. When no account is specified, a default applies.
- The default project account is the user's Test Project. This default setting can be modified in the Portal.
- Usage of persistent storage, including the tape library, is currently not accounted (disk quotas apply).
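For illustration, a minimal sketch of a batch script header that charges a job to a specific project account. The account name `myproject`, the partition, the resource values, and the program name are placeholders, not site defaults:

```shell
#!/bin/bash
#SBATCH --account=myproject    # project account to charge (placeholder name)
#SBATCH --partition=cpu-clx    # target Slurm partition
#SBATCH --nodes=2
#SBATCH --time=01:00:00

srun ./my_program              # hypothetical application binary
```

The account can equally be set at submit time, e.g. `sbatch --account=myproject jobscript.sh`. If `--account` is omitted, the default account described above applies.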
NHR@ZIB follows NHR-wide regulations.
Charge rates
NHR@ZIB operates the system "Lise", which consists of several compute partitions. Different charge rates apply depending on the hardware equipment of each partition.
| Compute partition | Slurm partition | Charge per node-hour (in core hours) | Node allocation | Remark |
|---|---|---|---|---|
| CPU CLX partition | cpu-clx | 96 | exclusive | |
| | cpu-clx:test | 96 | exclusive | |
| | cpu-clx:ssd | 96 | exclusive | |
| | cpu-clx:large | 144 | shareable | large memory |
| | cpu-clx:huge | 192 | shareable | huge memory |
| CPU Genoa partition | cpu-genoa | 192 | exclusive | |
| | cpu-genoa:test | 192 | exclusive | |
| GPU A100 partition | gpu-a100 | 600 | exclusive | four Nvidia A100 GPUs per node |
| | gpu-a100:shared | 150 per GPU | shareable | (600 core hours with four GPUs) |
| | gpu-a100:test | 150 per GPU | shareable | |
| | gpu-a100:shared:mig | 21.43 per logical GPU | shareable | four Nvidia A100 GPUs split into 28 logical GPUs |
| GPU PVC partition | gpu-pvc | free of charge | exclusive | four Intel Data Center GPU Max 1550 per node |
| | gpu-pvc:shared | free of charge | shareable | |
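As a sketch of how a fractional allocation arises in a shareable partition, the following job script lines (standard Slurm syntax; site-specific GPU request options may differ) ask for a single A100 on a node with four GPUs, so the job is billed at 150 core hours per hour, one quarter of the full node rate:

```shell
#SBATCH --partition=gpu-a100:shared
#SBATCH --gres=gpu:1    # 1 of 4 GPUs per node, so f = 1/4
```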
Job charges
The charge c of a job
- occupying a number n of compute nodes,
- running for a wall clock time t (in hours),
- in a Slurm partition associated with a charge rate r,
- with a fraction f (0 < f ≤ 1.0) of the available CPU cores or GPUs allocated on the nodes,

is the product c = n × t × r × f.
Most of Lise's compute nodes belong to Slurm partitions where nodes are allocated exclusively to a job, meaning nodes are not shared between jobs. These nodes are billed in full (f = 1.0), regardless of the job's resource allocation (number of CPU cores or GPUs per node) and the actual resource utilization.
In some Slurm partitions, however, nodes can be shared among multiple jobs. Here, users may request fewer CPU cores or GPUs than are available per node (f < 1.0), and such jobs are billed at the corresponding fraction of the charge rate.
A job running on 2 nodes of the CPU Genoa partition finished after 6 minutes (0.1 hours).
It costs 2 × 0.1 × 192 × 1.0 = 38.4 core hours.
(exclusive node allocation on the CPU Genoa partition, f = 1.0)
A job running on 4 nodes of the CPU CLX large memory partition requested only 24 CPU cores per node and finished after 5 hours.
It costs 4 × 5 × 144 × 0.25 = 720 core hours.
(shared node allocation on the CPU CLX large memory partition, 96 CPU cores available on a CPU CLX node, f = 24/96 = 0.25)
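The charge formula is easy to recompute on the command line; for instance, this awk one-liner reproduces the second example above (n = 4 nodes, t = 5 hours, r = 144, f = 0.25):

```shell
# c = n * t * r * f for the 4-node CPU CLX large-memory example
awk -v n=4 -v t=5 -v r=144 -v f=0.25 'BEGIN { print n * t * r * f }'
# prints 720
```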