Special Filesystems



NHR provides tailored WORK file systems to improve the IO throughput of IO-intensive job workloads.

Default Lustre (WORK)

WORK is the default shared file system for all jobs and can be accessed via the $WORK variable. WORK is accessible to all users and consists of 8 metadata targets (MDTs) backed by NVMe SSDs and 28 object storage targets (OSTs) backed by classical hard drives.

Access: $WORK

Size: 8 PiB, quota-controlled
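
A minimal sketch of day-to-day use of WORK, assuming the standard Lustre client tools are in your PATH (the directory name `myproject` is a placeholder):

```shell
# Work under $WORK rather than $HOME for job IO.
cd "$WORK"
mkdir -p myproject && cd myproject

# Show current usage against the WORK quota for your user
# (lfs quota is part of the standard Lustre client tools).
lfs quota -h -u "$USER" "$WORK"
```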


Special File System Types

Lustre with striping (WORK)

Some workloads benefit from striping: files are split transparently across a number of OSTs.

Large shared-file IO patterns in particular benefit from striping. Up to 28 OSTs can be used on Lise; we recommend using no more than 8. We have preconfigured a progressive file layout (PFL), which sets the striping automatically based on file size.

Access: create a new directory in $WORK and run lfs setstripe -c <stripe_count> <dir>

Size: 8 PiB like WORK
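
The access step above can be sketched as follows; the directory name `striped` is a placeholder, and the stripe count of 8 matches the recommendation above:

```shell
# Create a directory whose new files will be striped.
mkdir "$WORK/striped"

# Stripe new files in this directory across 8 OSTs.
lfs setstripe -c 8 "$WORK/striped"

# Verify the layout that new files will inherit.
lfs getstripe -d "$WORK/striped"
```

Note that the layout applies only to files created after `lfs setstripe` is set on the directory; existing files keep their old layout.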


Local SSDs

Some compute nodes are equipped with local SSD storage. These nodes share the following properties.

  • 2 TB SSD locally attached to the node
  • Data on the SSD is deleted after the job finishes.
  • Data on the SSD cannot be shared across nodes.

For unshared, node-local IO this is the best-performing file system available.



                  Lise: SSD                  Lise: CAS
Slurm partition   cpu-clx:ssd,               cpu-clx:large and cpu-clx:huge,
                  using $LOCAL_TMPDIR        using $LOCAL_TMPDIR
Type and size     Intel NVMe SSD DC P4511    Intel NVMe SSD DC P4511 (2 TB) using
                  (2 TB)                     Intel Optane SSD DC P4801X (200 GB)
                                             as write-through cache
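
A sketch of a job script using the local SSD via $LOCAL_TMPDIR; the file names `input.dat`, `results.dat` and the binary `my_app` are placeholders, and the resource requests are illustrative only:

```shell
#!/bin/bash
#SBATCH --partition=cpu-clx:ssd   # partition whose nodes have local SSDs
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Stage input data onto the fast node-local SSD.
cp "$WORK/input.dat" "$LOCAL_TMPDIR/"

# Run the application against the local copy.
cd "$LOCAL_TMPDIR"
srun ./my_app input.dat

# Stage results back to WORK before the job ends --
# everything on the SSD is deleted afterwards.
cp results.dat "$WORK/"
```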


FastIO

WORK is extended with 4 additional OSTs backed by NVMe SSDs to accelerate heavy (random) IO demands. To accelerate specific IO demands further, striping across up to these 4 OSTs is available.

Access:

create a new directory in $WORK and set lfs setstripe -p flash <dir>

Size:

55 TiB, quota-controlled
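
The access step above can be sketched as follows; the directory name `fast` is a placeholder, and the stripe count of 4 assumes you want to spread files across all 4 flash OSTs:

```shell
# Create a directory whose new files land on the NVMe flash pool.
mkdir "$WORK/fast"

# Assign the "flash" OST pool, striping across up to 4 flash OSTs.
lfs setstripe -p flash -c 4 "$WORK/fast"

# Confirm the pool assignment and stripe count.
lfs getstripe -d "$WORK/fast"
```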


Finding the right File System

If your jobs have a significant IO part, we recommend contacting your consultant via support@nhr.zib.de for advice on the right file system for your workload.

Local IO

If you have a significant amount of node-local IO that does not need to be accessed after the job ends and stays below 2 TB on Lise, we recommend using $LOCAL_TMPDIR. Depending on your IO pattern, this may accelerate IO by up to 100%.

Global IO

Global IO is defined as shared IO that can be accessed from multiple nodes at the same time and persists after the job ends.

Random IO in particular can be accelerated by up to 200% using FastIO on Lise.