...
Partition name | Node count | CPU | Main memory (GB) | Max. nodes per job | Max. jobs per user (running / queued) | Wall time limit (hh:mm:ss) | Remark |
---|---|---|---|---|---|---|---|
standard96cpu-clx | 1204688 | Cascade 9242 | 362 | 512 | 128 / 500 | 12:00:00 | default partition |
standard96cpu-clx:test | 32 dedicated + 128 on demand | Cascade 9242 | 362 | 16 | 1 / 500 | 01:00:00 | test nodes with higher priority but shorter wall time |
large96 | 28 | Cascade 9242 | 747 | 8 | 128 / 500 | 12:00:00 | fat memory nodes |
large96:test | 2 dedicated + 2 on demand | Cascade 9242 | 747 | 2 | 1 / 500 | 01:00:00 | fat memory test nodes with higher priority but shorter wall time |
large96:shared | 2 dedicated | Cascade 9242 | 747 | 1 | 128 / 500 | 48:00:00 | fat memory nodes for data pre- and post-processing |
huge96 | 2 | Cascade 9242 | 1522 | 1 | 128 / 500 | 24:00:00 | very fat memory nodes for data pre- and post-processing |
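As a rough sketch of how the limits above translate into a batch script, the example below requests the default partition within its per-job limits. This is an illustration only: the job name and the executable `./my_app` are placeholders, not taken from this document.

```shell
#!/bin/bash
#SBATCH --partition=standard96cpu-clx   # default partition from the table above
#SBATCH --nodes=2                       # well below the 512-node per-job limit
#SBATCH --time=12:00:00                 # must not exceed the partition's wall time limit
#SBATCH --job-name=example              # placeholder job name

srun ./my_app                           # placeholder executable
```

For short validation runs, the `:test` partitions offer higher scheduling priority at the cost of a shorter wall time limit (01:00:00).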
...
The available home/local-ssd/work/perm file systems are discussed under Storage File Systems.
For an overview of all Slurm partitions and the status of nodes: `sinfo -r`
For detailed information about a particular node: `scontrol show node <nodename>`
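A short sketch of how these queries might be combined in practice. The partition name is taken from the table above; `sinfo -p`, `scontrol show node`, and `squeue -u` are standard Slurm commands, not site-specific ones, and `<nodename>` must be replaced by a real node name reported by `sinfo`.

```shell
# Overview of all partitions and node states
sinfo -r

# Restrict the overview to a single partition, e.g. large96
sinfo -p large96

# Inspect one node in detail (substitute a node name reported by sinfo)
scontrol show node <nodename>

# Check your own jobs against the running/queued limits in the table
squeue -u $USER
```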
...