
    Special Filesystems

    • 1 Finding the right File System
      • 1.1 Local disk I/O
      • 1.2 Global IO

    Finding the right File System

    If your jobs have a significant I/O component, we recommend contacting your consultant via support@nhr.zib.de for advice on the right file system for you.

    Local disk I/O

    Some compute nodes have local SSD or NVMe storage, available under $LOCAL_TMPDIR.

    An empty directory is created at job start, and all data in it is deleted after the job finishes. Local data cannot be shared across nodes.

    This is the best-performing file system for data that does not need to be shared.
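
    As a minimal sketch of how a batch job can use this directory (the program name my_program, the input file, and the partition choice are placeholders, not prescribed by the system):

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --partition=cpu-clx:ssd

    # stage input onto the fast node-local disk, then run from there
    cp $WORK/input.dat $LOCAL_TMPDIR/
    cd $LOCAL_TMPDIR
    srun $WORK/my_program input.dat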

     

    Partition      | Local Storage
    cpu-genoa      | 3.8 TB
    cpu-clx:large  | 2 TB
    cpu-clx:huge   | 2 TB
    cpu-clx:ssd    | 2 TB

    Local disks are also available on all login nodes under /local. Files there are removed after 30 days.

    Depending on your I/O pattern, it is typically much faster to first collect/pack data node-locally in $LOCAL_TMPDIR and then copy it over the network to $WORK in a second step. In contrast, if all MPI tasks of a node communicate individually with the remote storage servers, this can become a bottleneck.
    The environment variable $LOCAL_TMPDIR exists on all compute nodes. On compute nodes without SSD/NVMe (see the partition table above), $LOCAL_TMPDIR points to the /tmpfs filesystem. In this case, its capacity is capped at around 50% of the total RAM; check with:
    df -h $LOCAL_TMPDIR
    Please note that after your Slurm job finishes, all data in $LOCAL_TMPDIR will be removed (cleaned up for the next user), so you need to copy anything you want to keep to another location before the job ends.
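    For example (the output directory and archive name below are placeholders), the end of a job script can pack the node-local results and copy them to $WORK in a single transfer:
    cd $LOCAL_TMPDIR
    tar czf results.tar.gz output/      # pack everything node-locally first
    cp results.tar.gz $WORK/            # then copy once over the network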
    An example of how to make use of local I/O is given under Ex. moving local data parallel to program execution.

    Global IO

    Global IO is shared I/O that can be accessed from multiple nodes at the same time and persists after the job ends. See File Systems.

     

    {"serverDuration": 12, "requestCorrelationId": "2f9a064463b64e1ca26632db9d7d0f5c"}