The NHR center NHR@ZIB follows NHR-wide regulations.
- A User Account accesses a project account containing units of core hours. A project account can be a Test Project or a Compute Project.
- A batch job on the compute system is charged a number of core hours, which measures the compute time used.
- Usage of persistent storage, including the tape library, is currently not accounted.
Charge Rates for NHR@ZIB
NHR@ZIB operates the system Lise, which consists of different Compute Clusters, each containing a different type of compute node. The charge rates for its partitions are given in the following table.
| one node in partition | charged "core hours" per 1 h occupancy time | increased charge rate due to |
|---|---|---|
| standard96, standard96:test | 96 | |
| large96, large96:test, large96:shared | 144 | high-memory layout |
| huge96 | 192 | high-memory layout |
| gpu-a100 | 600 | four NVIDIA A100 (80 GB) per compute node |
| gpu-a100:shared | 150 per GPU | 600 for four NVIDIA A100 (80 GB) per node |
| gpu-a100:shared:mig | 47 per MiG slice | four NVIDIA A100 (80 GB), each split into two 2g.10gb slices (8 per node, currently 24 in total) and one 3g.20gb slice (4 per node, currently 12 in total) |
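On the shared GPU partitions the charge scales with the number of reserved GPUs (or MiG slices) rather than with full nodes. A minimal submission sketch, assuming GPU selection via --gres; the project myproject and the script job.sh are placeholders:

```bash
# Request one A100 on the shared GPU partition for 2 hours.
# Expected charge: 1 GPU * 2 h * 150 = 300 core hours.
sbatch --partition=gpu-a100:shared --gres=gpu:1 --time=02:00:00 --account=myproject job.sh
```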
Charge Rates for NHR@Göttingen
NHR@Göttingen operates the system Emmy, which consists of different Compute Clusters, each containing a different type of compute node. The charge rates for its partitions are given in the following table.
| one node in partition | charged "core hours" per 1 h occupancy time | increased charge rate due to |
|---|---|---|
| standard96, standard96:test | 96 | |
| large96, large96:test, large96:shared | 144 | high-memory layout |
| huge96 | 192 | high-memory layout |
| medium40, medium40:test | 40 | |
| large40, large40:test | 80 | high-memory layout |
| gpu | 375 | four NVIDIA V100 (32 GB) GPUs per node |
| grete | 600 | four NVIDIA A100 (40 GB) GPUs per node |
| grete:shared | 150 per GPU | 600 for four NVIDIA A100 (40 GB) per node |
| grete:interactive, grete:preemptible | 47 per MiG slice | four NVIDIA A100 (40 GB), each split into two 2g.10gb slices (8 per node, currently 24 in total) and one 3g.20gb slice (4 per node, currently 12 in total) |
Job Charge
The charge of core hours for a batch job depends on the number of nodes, the wallclock time used by the job, and the charge rate of the partition used. For a batch job with
- a number of nodes n,
- a wallclock time of t hours, and
- a partition with charge rate charge_p,
the job charge charge_j is
charge_j = n * t * charge_p
A job on 10 nodes running for 3 hours on partition huge96 (charge rate 192 core hours) yields a job charge of 5760 core hours.
Batch jobs running in the partition large96:shared access only a subset of the cores on a node. For such a reservation of cores, the number of nodes n is the corresponding node fraction.
A job on 48 cores on partition large96:shared (96 cores per node, charge rate 144 core hours) has a reservation of
n = 48/96 = 0.5 nodes. Assuming a wallclock time of 3 hours, this yields a job charge of 216 core hours.
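The same arithmetic can be checked on the command line, for example with bc (a sketch; the numbers correspond to the two examples above):

```bash
# Job charge: charge_j = n * t * charge_p
echo "10 * 3 * 192" | bc     # huge96 example:         10 nodes * 3 h * 192 -> 5760 core hours
echo "0.5 * 3 * 144" | bc    # large96:shared example: 0.5 nodes * 3 h * 144 -> 216.0 core hours
```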
Select the account in your batch job
Batch jobs are submitted by a user account to the compute system.
- For each job the user chooses one project that will be charged for the job. At the beginning of the lifetime of the User Account the default project is the Test Project.
- The user selects the project for a job with the Slurm option --account at submit time, as shown below.
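For example, to charge a specific project at submit time (myproject and job.sh are placeholders):

```bash
# Charge the job to the project account "myproject" instead of the default project
sbatch --account=myproject job.sh
```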
The default project for computing time of a User Account can be changed under the link User Data on the Portal NHR@ZIB.
To charge the account myaccount, add the following line to the job script:
#SBATCH --account=myaccount
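A minimal job script sketch that puts the directive in context (partition, node count, time limit, and executable are illustrative assumptions):

```bash
#!/bin/bash
#SBATCH --account=myaccount        # project to be charged
#SBATCH --partition=standard96     # charged 96 core hours per node and hour
#SBATCH --nodes=2                  # two full nodes
#SBATCH --time=01:00:00            # 1 h wallclock; expected charge: 2 * 1 * 96 = 192 core hours

srun ./my_program                  # placeholder executable
```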
After job script submission the batch system checks whether the chosen project covers the job charge and authorizes the job for scheduling. Otherwise the job is rejected; please note the error message:
You can check the account of a job that is out of core hours:
> squeue ... myaccount ... AccountOutOfNPL ...
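To display the charged account and the pending reason explicitly, the squeue output format can be adjusted (a sketch using standard Slurm format specifiers; column widths are arbitrary):

```bash
# Show job id, account, partition, state, and reason for your own jobs
squeue -u $USER -o "%.10i %.12a %.14P %.10T %.20r"
```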