
File Systems at HLRN-IV

  • Home file system with 340 TiByte capacity containing $HOME directories /home/${USER}/
  • Lustre parallel file system with 8.1 PiByte capacity containing
    • $WORK directories /scratch/usr/${USER}/
    • $TMPDIR directories /scratch/tmp/${USER}/
    • project data directories /scratch/projects/<projectID>/ (not yet available)
  • Tape archive with 120 TiByte HDD cache (on Lise accessible via the login nodes only; on Emmy access via gperm1/2)
  • On Emmy: SSD for temporary data at $LOCAL_TMPDIR (400 GB shared among all jobs running on the node)

Filesystem quotas are currently not activated for the $HOME and $WORK directories.
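The environment variables listed above can be used in job scripts instead of hard-coded paths. A minimal sketch, assuming a batch job on Emmy (the file name input.dat is a placeholder):

cd $WORK                        # work on the Lustre parallel file system
cp input.dat $LOCAL_TMPDIR/     # stage input data to the node-local SSD (Emmy only)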

Login and data copy between HLRN sites

Inter-complex login (ssh) as well as data copy (rsync/sftp) between both sites (Berlin and Göttingen) should work right out of the box. The same is true for intra-complex ssh and scp between nodes of one site. This is enabled through an administratively installed ssh key pair id_rsa[.pub] in your $HOME/.ssh/. Do not remove these keys.

Always use the short hostname for ssh/rsync: either the generic names blogin and glogin, or specific names such as blogin5, glogin2, etc.
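For example, logging in from a Berlin login node to Göttingen and copying a local directory to the Göttingen $WORK file system could look like this (a minimal sketch; the directory name results is a placeholder):

ssh glogin                                          # inter-complex login from Berlin to Göttingen
rsync -av results/ glogin:/scratch/usr/${USER}/     # copy data to $WORK in Göttingen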

Tape archive PERM

The magnetic tape archive for permanent storage is directly accessible via the login nodes blogin[1-8] only:

/perm/${USER}/

For reasons of efficiency and performance, do not transfer many small files and/or complex directory structures here directly. Pack such data into compressed tarballs instead.
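A minimal sketch of preparing such data on a login node before storing it in the archive, assuming a directory my_results (a placeholder name):

tar czf my_results.tar.gz my_results/     # pack many small files into one compressed tarball
cp my_results.tar.gz /perm/${USER}/       # store the tarball in the tape archive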
