The compute nodes of the CPU cluster of system Lise are organised via the following Slurm partitions.
Partition name | Node count | CPU | Main memory (GB) | Max. nodes per job | Max. jobs per user (running/queued) | Wall time limit (hh:mm:ss) | Remark |
---|---|---|---|---|---|---|---|
cpu-clx | 688 | Cascade 9242 | 362 | 512 | 128 / 500 | 12:00:00 | default partition |
cpu-clx:test | 32 dedicated + 128 on demand | Cascade 9242 | 362 | 16 | 1 / 500 | 01:00:00 | test nodes with higher priority but less wall time |
large96 | 28 | Cascade 9242 | 747 | 8 | 128 / 500 | 12:00:00 | fat memory nodes |
large96:test | 2 dedicated + 2 on demand | Cascade 9242 | 747 | 2 | 1 / 500 | 01:00:00 | fat memory test nodes with higher priority but less wall time |
large96:shared | 2 dedicated | Cascade 9242 | 747 | 1 | 128 / 500 | 48:00:00 | fat memory nodes for data pre- and post-processing |
huge96 | 2 | Cascade 9242 | 1522 | 1 | 128 / 500 | 24:00:00 | very fat memory nodes for data pre- and post-processing |
See Slurm usage for how to work around the 12 h wall time limit with job dependencies.
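A minimal sketch of such a dependency chain, assuming a job script named job.slurm that handles its own checkpointing and restart (both the script name and the restart logic are placeholders; see Slurm usage for the full recipe):

    # Submit the first 12-hour chunk and capture its job ID
    jobid=$(sbatch --parsable --partition=cpu-clx job.slurm)
    # Queue a follow-up chunk that starts only after the first one has finished successfully
    sbatch --dependency=afterok:${jobid} --partition=cpu-clx job.slurm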
...
If you do not request a partition, your job will be placed in the default partition, which is cpu-clx.
The default partition is suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs. These have a higher priority and a few dedicated nodes, but provide only limited resources. Shared nodes are suitable for pre- and post-processing. A job running on a shared node is only accounted for its core fraction (cores of the job / all cores of the node). All non-shared nodes are exclusive to one job at a time.
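For example, a pre-processing job that requests 24 of the 96 cores of a large96:shared node is accounted for 24/96 = 0.25 of that node. A minimal job header for such a run might look as follows (the executable name is a placeholder):

    #!/bin/bash
    #SBATCH --partition=large96:shared   # shared fat-memory nodes for pre-/post-processing
    #SBATCH --ntasks=24                  # 24 of 96 cores, accounted as 24/96 of the node
    #SBATCH --time=04:00:00              # well below the 48:00:00 limit of this partition

    srun ./preprocess                    # placeholder for your pre-processing executable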
The available home/local-ssd/work/perm file systems are discussed under File Systems.
For an overview of all Slurm partitions and the status of nodes: sinfo -r
For detailed information about a particular node: scontrol show node <nodename>
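For instance, the partition table above can be cross-checked directly using standard sinfo output-format options (shown here only as an illustration):

    # Partition name, node count, time limit, and memory per node (in MB)
    sinfo -o "%P %D %l %m"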
Charge rates
Charge rates for the Slurm partitions can be found under Accounting.
Fat-Tree Communication Network of Lise
See OPA Fat Tree network of Lise
List of CPUs
...
Short name | Link to manufacturer specifications | Where to find | Units per node | Cores per unit | Clock speed (GHz) |
---|---|---|---|---|---|
Cascade 9242 | Intel Cascade Lake Platinum 9242 (CLX-AP) | CPU partition "Lise" | 2 | 48 | 2.3 |
Cascade 4210 | Intel Cascade Lake Silver 4210 (CLX) | blogin[1-6] | 2 | 10 | 2.2 |
Tesla A100 | NVIDIA Tesla A100 40GB and 80GB | | 4 | 432/6912* | |
...