File Systems at HLRN-IV
- Home file system with 340 TiByte capacity containing $HOME directories /home/${USER}/
- Lustre parallel file system with 8.1 PiByte capacity containing $WORK directories /scratch/usr/${USER}/ and $TMPDIR directories /scratch/tmp/${USER}/
- Project data directories /scratch/projects/<projectID>/ (not yet available)
- Tape archive with 120 TiByte capacity (accessible on the login nodes only)
- On Emmy: SSD for temporary data at $LOCAL_TMPDIR (400 GB shared among all jobs running on the node); see the usage sketch below
File system quotas are currently not activated for the $HOME and $WORK directories.
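As an illustration of how these locations typically interact, here is a minimal job-script sketch: it stages input from the Lustre $WORK directory to the node-local SSD on Emmy, computes there, and copies the result back. The application my_app and the file names are placeholders, not part of the HLRN documentation.

    #!/bin/bash
    # Sketch only: my_app, input.dat, and result.dat are placeholder names.
    cp ${WORK}/input.dat ${LOCAL_TMPDIR}/    # stage input onto the node-local SSD
    cd ${LOCAL_TMPDIR}
    ${WORK}/my_app input.dat result.dat      # compute on fast local storage
    cp result.dat ${WORK}/                   # copy the result back to Lustre

Keep in mind that the 400 GB of $LOCAL_TMPDIR are shared among all jobs on the node, so remove temporary files at the end of the job.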
Transfer data to the system
- Tutorial for rsync and FileZilla
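As a minimal sketch (independent of the tutorial), pushing a local directory to your $WORK directory with rsync could look like this; the external hostname glogin.hlrn.de, the placeholder <hlrnuser>, and the paths are assumptions to be adapted to your account:

    # Run on your local machine; replace <hlrnuser> with your HLRN user name.
    rsync -av --progress ./mydata/ <hlrnuser>@glogin.hlrn.de:/scratch/usr/<hlrnuser>/mydata/

The trailing slash on the source directory tells rsync to copy the directory contents rather than creating an extra nesting level on the remote side.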
Login and data copy between HLRN sites
Inter-complex login (ssh) as well as data copy (scp) between both sites (Berlin and Göttingen) should work right out of the box. The same is true for intra-complex ssh and scp between nodes of one site. This is enabled through an administratively installed ssh key pair id_rsa[.pub] in your $HOME/.ssh/ directory. Do not remove these keys.
Always use the short hostname for ssh/scp: the generic names blogin, glogin, or the specific names like blogin5, glogin2, etc.
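For example, copying a directory from the Berlin complex to your $WORK directory in Göttingen, run on a blogin node, might look as follows; the directory name results is a placeholder, and the $WORK path is assumed to be the same on both sites:

    # Copy a results directory from Berlin to Göttingen using the short hostname.
    scp -r results glogin:/scratch/usr/${USER}/
    # rsync uses the same ssh keys and handles large or repeated transfers better.
    rsync -av results glogin:/scratch/usr/${USER}/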
If inter-complex ssh/scp does not work for you for some reason, please see the topic Copy between HLRN sites and contact HLRN Support.
Tape archive PERM
The magnetic tape archive for permanent storage is directly accessible only via the login nodes blogin[1-8]:
/perm/${USER}/
For reasons of efficiency and performance, many small files and/or complex directory structures should not be transferred here directly. Compressed tarballs containing such data are preferred.
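A sketch of the preferred workflow, run on one of the blogin nodes (myproject is a placeholder directory name):

    # Pack a tree with many small files into a single compressed tarball...
    tar czf myproject.tar.gz myproject/
    # ...and move only that one archive file into the tape archive.
    mv myproject.tar.gz /perm/${USER}/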
- Add Göttingen info
- scp is outdated; examples should use rsync.