...

Submission of jobs mainly happens via the sbatch command using a job script, but interactive jobs and node allocations are also possible using srun or salloc. Resource selection (e.g. the number of nodes or cores) is handled via command-line parameters, or may be specified in the job script.
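As a minimal sketch of a batch submission, the following job script requests resources via #SBATCH directives (the partition name standard96 is taken from the table below; the application binary is hypothetical and left commented out):

```shell
#!/bin/bash
#SBATCH --partition=standard96   # hardware partition to run in
#SBATCH --nodes=2                # number of compute nodes
#SBATCH --ntasks-per-node=96     # MPI tasks per node
#SBATCH --time=01:00:00          # walltime limit (hh:mm:ss)

echo "Job running on $(hostname)"
# srun ./my_program              # hypothetical application binary
```

Saved as myjob.sh, this would be submitted with `sbatch myjob.sh`. The same resource parameters can instead be passed on the command line, e.g. `salloc --nodes=2 --time=01:00:00` for an interactive allocation.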

Partitions

To match your job requirements to the hardware you can choose among various partitions. Each partition has its own job queue. All available partitions with their corresponding walltime limits, node counts, memory, and CPU/GPU types are listed below.

Partition (number holds cores per node) | Location | Max. walltime | Nodes | Max. nodes per job | Max. jobs per user | Usable memory per node | Shared | NPL per node hour | Remark
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
standard96 | Lise | 12:00:00 | 952 | 256 | (var) | 362 GB | | 14 | default partition
standard96:test | Lise | 1:00:00 | 32 dedicated +128 on demand | 16 | 1 | 362 GB | | 14 | test nodes with higher priority but lower walltime
large96 | Lise | 12:00:00 | 28 | 4 | (var) | 747 GB | | 21 | fat nodes
large96:test | Lise | 1:00:00 | 2 dedicated +2 on demand | 2 | 1 | 747 GB | | 21 | fat test nodes with higher priority but lower walltime
large96:shared | Lise | 48:00:00 | 2 dedicated | 1 | (var) | 747 GB | ✓ | 21 | fat nodes for data pre- and postprocessing
huge96 | Lise | 24:00:00 | 2 | 1 | (var) | 1522 GB | | 28 | very fat nodes for data pre- and postprocessing
medium40 | Emmy | 12:00:00 | 368 | 128 | unlimited | 362 GB | | 6 | default partition
medium40:test | Emmy | 1:00:00 | 16 dedicated +48 on demand | 8 | unlimited | 362 GB | | 6 | test nodes with higher priority but lower walltime
large40 | Emmy | 12:00:00 | 11 | 4 | unlimited | 747 GB | | 12 | fat nodes
large40:test | Emmy | 1:00:00 | 3 | 2 | unlimited | 747 GB | | 12 | fat test nodes with higher priority but lower walltime
large40:shared | Emmy | 24:00:00 | 2 | 1 | unlimited | 747 GB | ✓ | 12 | for data pre- and postprocessing
gpu | Emmy | 12:00:00 | 1 | 1 | unlimited | | | | equipped with 4 x NVIDIA Tesla V100 32GB
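The NPL column gives the accounting rate per node hour. As a sketch of how a job's cost would follow from the table (assuming the charge is simply nodes × walltime hours × NPL rate; the node and hour figures are made-up examples):

```shell
# Accounting sketch. Assumption: charge = nodes * hours * (NPL per node hour).
nodes=10
hours=12
rate=14   # standard96: 14 NPL per node hour (from the table above)
echo "$(( nodes * hours * rate )) NPL"   # → 1680 NPL
```

Under this assumption, the same job on large96 (rate 21) would cost proportionally more per node hour.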

...