...

  • HOME: IBM Spectrum Scale file system with 340 TiByte capacity, exported via NFS, containing
    • $HOME directories /home/${USER}/
    • centrally managed software and the module system in /sw
  • WORK: Lustre parallel file system with 8 PiByte capacity, containing
    • $WORK directories /scratch/usr/${USER}/
    • $TMPDIR directories /scratch/tmp/${USER}/
    • project data directories /scratch/projects/<projectID>/ (not yet available)
  • PERM: tape archive with multiple petabyte capacity and additional hard disk caches

...

Always use the short hostname for ssh/rsync, either the generic names blogin and glogin or specific names like blogin5, glogin2, etc. This enables use of the direct intersite connection HLRN Link, which is much faster than the internet connection used when you access the nodes of the other site via the hostnames b/glogin.hlrn.de.
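For example, to copy results from the other site while logged in on a login node (a minimal sketch; the source path is illustrative):

    # short hostname: traffic goes over the fast HLRN Link
    rsync -av blogin:/scratch/usr/${USER}/results/ /scratch/usr/${USER}/results/

    # long hostname: the same transfer would go over the internet
    rsync -av blogin.hlrn.de:/scratch/usr/${USER}/results/ /scratch/usr/${USER}/results/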

...

HOME

The home filesystem and /sw are mounted via NFS, so their performance is moderate. We take daily snapshots of the filesystem, which can be used to restore a former state of a file or directory. These snapshots can be accessed through the paths /home/.snapshots and /sw/.snapshots. There are additional regular backups to restore the filesystem in case of a catastrophic failure.
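A sketch of restoring an older version of a file from a snapshot (the snapshot name and its internal layout below are assumptions; list /home/.snapshots to see what is actually available):

    # list the available snapshots
    ls /home/.snapshots

    # copy a former version of a file back into place
    # ("daily-20XX-XX-XX" is a hypothetical snapshot name)
    cp /home/.snapshots/daily-20XX-XX-XX/${USER}/myfile ${HOME}/myfile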

...

WORK

The Lustre-based work filesystem /scratch is the main work filesystem of the HLRN clusters. Each user can distribute data across different directories.
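The standard locations are available through environment variables, for example (a sketch, assuming $WORK and $TMPDIR are set as described above):

    # per-user work and temporary directories on /scratch
    echo ${WORK}     # /scratch/usr/${USER}/
    echo ${TMPDIR}   # /scratch/tmp/${USER}/

    # keep each job's output in its own subdirectory
    mkdir -p ${WORK}/myproject/run01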

...

A general recommendation for network filesystems is to keep the number of metadata operations, such as opening and closing files or checking for file existence or changes, as low as possible. These operations often become a bottleneck for the I/O of your job and, on large clusters such as those operated by HLRN, can easily overload the file servers.
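One common way to reduce metadata load is to bundle many small files into a single archive, so a job opens one large file instead of thousands of small ones (a sketch; the paths are illustrative):

    # pack an input set of many small files into one archive on $WORK ...
    tar -cf ${WORK}/inputs.tar -C ${HOME} inputs/

    # ... and unpack it once into the job's temporary directory
    tar -xf ${WORK}/inputs.tar -C ${TMPDIR}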

...

PERM, tape archive

...

The magnetic tape archive provides additional storage for inactive data to free up space on the work filesystem. It is directly accessible on the login nodes at the mount point /perm/${USER}/.
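Since tape storage handles a few large files much better than many small ones, it is advisable to pack data into a single archive before moving it to /perm, for example (a sketch; the paths are illustrative):

    # pack a finished run from the work filesystem into the archive ...
    tar -cf /perm/${USER}/run01.tar -C ${WORK} myproject/run01

    # ... and retrieve it again later
    tar -xf /perm/${USER}/run01.tar -C ${WORK}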

...