File Systems at HLRN-IV
HLRN operates three central storage systems at each site:
- IBM Spectrum Scale file system with 340 TiByte capacity, exported via NFS, containing
  - $HOME directories /home/${USER}/
  - centrally managed software and the module system in /sw
- Lustre parallel file system with 8 PiByte capacity containing
  - $WORK directories /scratch/usr/${USER}/ (see the usage sketch after this list)
  - $TMPDIR directories /scratch/tmp/${USER}/
  - project data directories /scratch/projects/<projectID>/ (not yet available)
- Tape archive (multiple petabytes) with additional hard disk caches
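For orientation, here is a minimal shell sketch of how these locations can be referenced from a login node. The variables and paths are the ones listed above; the directory name created under $WORK is only a placeholder.

```bash
# Sketch: the central storage locations as seen from a login node.
echo "HOME (IBM Spectrum Scale, NFS): $HOME"     # /home/${USER}/
echo "WORK (Lustre):                  $WORK"     # /scratch/usr/${USER}/
echo "TMPDIR (Lustre):                $TMPDIR"   # /scratch/tmp/${USER}/

# I/O-intensive job data belongs on Lustre, not in $HOME,
# e.g. in a per-job directory under $WORK (placeholder name):
mkdir -p "$WORK/myjob"
```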
The system Emmy has additional storage options for high I/O demands:
- Phase 1 nodes (partitions medium40 and large40): local SSD for temporary data at $LOCAL_TMPDIR (400 GB shared among all jobs running on the node). The environment variable $LOCAL_TMPDIR is available on all nodes, but on the phase 2 systems it points to a ramdisk (see the job script sketch after this list).
- DDN IME-based burst buffer with 48 TiB of NVMe storage (general availability together with the phase 2 nodes)
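A minimal job script sketch of using the node-local SSD, assuming a Slurm-style batch job on a phase 1 node; the program and file names are placeholders, not HLRN-provided tools.

```bash
#!/bin/bash
#SBATCH --partition=medium40
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Sketch: stage input to the node-local SSD, run there, copy results back.
# On phase 1 nodes $LOCAL_TMPDIR is a local SSD (400 GB shared per node);
# on phase 2 nodes it points to a ramdisk instead.
cp "$WORK/input.dat" "$LOCAL_TMPDIR/"          # placeholder input file
cd "$LOCAL_TMPDIR"
"$WORK/my_program" input.dat > output.dat      # placeholder application
cp output.dat "$WORK/"                          # copy results back to Lustre
```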
Filesystem quotas are currently not activated for the $HOME and $WORK directories.
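Since no quotas are enforced there, a simple way to keep an eye on your own usage is shown below; these are generic shell commands, not an HLRN-specific tool.

```bash
# Sketch: check how much space your directories currently occupy.
du -sh "$HOME"   # home directory on IBM Spectrum Scale
du -sh "$WORK"   # work directory on Lustre
```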
Login and data copy between HLRN sites
Inter-complex login (ssh) as well as data copy (rsync/sftp) between both sites (Berlin and Göttingen) should work right out of the box. The same is true for intra-complex ssh and scp between the nodes of one site. This is enabled by an administratively installed ssh key pair id_rsa[.pub] in your $HOME/.ssh/ directory. Do not remove these keys.
Always use the short hostname for ssh/rsync: the generic names blogin and glogin, or specific names like blogin5, glogin2, etc.
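For illustration, a sketch of inter-complex login and data transfer using the short hostnames. The directory names are placeholders, and the example assumes you start on a Göttingen login node.

```bash
# Sketch: login and data copy between the Göttingen and Berlin complexes.
# Use the short hostnames; the installed ssh key pair makes this work
# without a password.
ssh blogin                                   # interactive login to Berlin

# Copy a results directory from the local $WORK to the Berlin $WORK
# (placeholder directory name):
rsync -av "$WORK/results/" blogin:/scratch/usr/$USER/results/
```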
Tape archive PERM
The magnetic tape archive for permanent storage is directly accessible via the login nodes blogin[1-8] (only):
/perm/${USER}/
For reasons of efficiency and performance, many small files and/or complex directory structures should not be transferred here directly. Compressed tarballs containing such data are preferred.
(On Emmy: access using gperm1/2.)
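As an illustration of the recommended workflow, a sketch that packs a directory into a compressed tarball before moving it to the archive; the directory and file names are placeholders.

```bash
# Sketch: archive a directory tree as a single compressed tarball
# instead of transferring many small files to /perm directly.
cd "$WORK"
tar czf results_2021.tar.gz results/         # placeholder directory
cp results_2021.tar.gz /perm/$USER/          # on blogin[1-8] (gperm1/2 on Emmy)
```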