WORK is the default shared file system for all jobs and can be accessed via the $WORK variable. It is available to all users and consists of 8 metadata targets (MDTs) with NVMe SSDs and, for the data, 28 object storage targets (OSTs) on Lise and 100 OSTs on Emmy; the OSTs of both systems use classical hard drives.
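
As a minimal sketch of addressing WORK from the command line: only the $WORK variable is given above, the rest is standard Lustre tooling:

    # Change into the WORK directory and list the MDTs and OSTs behind it.
    cd $WORK
    lfs df -h $WORK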

...

Large shared-file IO patterns in particular benefit from striping. Up to 28 OSTs on Lise and up to 100 OSTs on Emmy can be used; we recommend up to 8 OSTs on Lise and up to 32 OSTs on Emmy. We have preconfigured a progressive file layout (PFL), which sets the striping automatically based on the file size.
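
Beyond the automatic PFL, striping can also be set manually per directory. A sketch using standard Lustre commands with the recommended Lise stripe count of 8; the directory name is just an example:

    # New files below this directory will be striped over 8 OSTs.
    mkdir $WORK/shared_output
    lfs setstripe -c 8 $WORK/shared_output
    # Verify the layout that new files will inherit.
    lfs getstripe $WORK/shared_output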

...

If you have a significant amount of node-local IO that does not need to be accessed after the job ends and stays below 2 TB on Lise or 400 GB/1 TB (depending on the node) on Emmy, we recommend using $LOCAL_TMPDIR. Depending on your IO pattern, this can speed up IO by up to 100%.
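
A minimal job-script sketch of this pattern; the application name my_app and its output file are hypothetical, only $LOCAL_TMPDIR itself is given above:

    # Run the IO-heavy phase on the node-local storage.
    cd $LOCAL_TMPDIR
    my_app > output.dat            # hypothetical application
    # Copy everything worth keeping back to WORK before the job ends,
    # because the node-local data is not accessible after job end.
    cp output.dat $WORK/results/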

...

Random IO in particular can be accelerated by up to 200% by using FastIO on Lise.
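
The text above does not show how FastIO is requested. As a hedged sketch, Lustre SSD pools are typically selected via a pool name with lfs setstripe; the actual pool name must be taken from the system documentation:

    # Direct new files in this directory to the SSD pool (the pool name is an assumption).
    lfs setstripe --pool <fastio-pool-name> $WORK/random_io_dir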

Performance Comparison of the Different File Systems and SSDs for Emmy with IO500

Please keep in mind that this comparison covers a single SSD for the node-local case, 43 SSDs for the Lustre SSD pool, and 1000 HDDs for the Lustre HDD pool, each measured with 32 IO processes per node. For the Lustre file systems, 64 nodes were used to reach near-maximum performance of the Lustre HDD pool. For the Home file system with its 120 HDDs, only 16 nodes with 10 processes per node were used, as more nodes or processes overload this small file system and result in even lower performance.

A typical user job will see lower performance values, as it usually runs fewer IO processes. The numbers for the global file systems indicate the aggregate performance, which is shared across all users.