NHR@ZIB operates three central storage systems with their global file systems:
| File System | Capacity | Storage Technology and Function |
|---|---|---|
| HOME, WORK | 20 PiB | IBM Storage Scale parallel file system (former GPFS) with 1 PiB NVMe SSD cache on first write |
| PERM | multiple PiB | Tape archive with additional hard-disk caches |
The system has additional storage options for high I/O demands, e.g. $LOCAL_TMPDIR (2 TB per node); a usage sketch follows below. For more details refer to Special Filesystems.

In general, we store all data for one extra year after the end of a test account/project. If not extended, the standard term of a test account/project is one year.
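As a rough sketch of how such node-local storage is typically used in a batch job (the Slurm directives, program and file names below are illustrative assumptions, not site-specific settings):

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=01:00:00              # illustrative values only

# Stage input from the parallel WORK filesystem to the fast node-local disk,
# compute there, and copy the results back at the end of the job.
WORK=/scratch/usr/${USER}
cp "${WORK}/input.dat" "${LOCAL_TMPDIR}/"      # input.dat is a placeholder

cd "${LOCAL_TMPDIR}"
./my_solver input.dat > output.dat             # my_solver is a placeholder program

cp output.dat "${WORK}/"                       # stage results back to WORK
```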
Each user holds one HOME directory:
HOME=/home/${USER}
We take daily snapshots of the filesystem, which can be used to restore a former state of a file or directory. These snapshots can be accessed through the paths /home/.snapshots or /sw/.snapshots.
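For example, restoring a former version of a file is a plain copy out of a snapshot directory; the snapshot name and the layout inside the snapshot below are placeholders, not a guaranteed structure:

```bash
# List the available snapshots of the home filesystem
ls /home/.snapshots

# Copy a former version of a file back into your HOME
# (snapshot name and file path are placeholders)
cp /home/.snapshots/<snapshot-name>/${USER}/notes.txt ${HOME}/notes.txt
```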
The Storage Scale-based work filesystem /scratch is the main work filesystem for the systems. Each user can distribute data to different directories:

- WORK=/scratch/usr/${USER} for user data
- /scratch/projects/<projectID> to collect and share project data; see also the hints on disk quota

We provide no backup of this filesystem. The storage system of Lise delivered around 185 GiB/s streaming bandwidth during the acceptance test. With higher occupancy, the effective (write) streaming bandwidth is reduced.
The storage system is hard-disk based with NVMe Cache on first write.
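Because WORK is not backed up, results that must not be lost should be copied to HOME (which has daily snapshots) or archived to PERM by the user; a minimal sketch with placeholder paths:

```bash
# Copy final results from the unbacked-up WORK filesystem to HOME;
# the directory name results_final is a placeholder.
rsync -a /scratch/usr/${USER}/results_final/ ${HOME}/results_final/
```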
A general recommendation for network filesystems is to keep the number of metadata operations, such as opening and closing files or checking for file existence or changes, as low as possible. These operations often become a bottleneck for the I/O of your job and, on large clusters, can easily overload the file servers.
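As an illustration, the pattern below replaces many individual file opens on /scratch with one sequential archive read that is unpacked to node-local storage; the file names, the per-file tool and the number of files are made up:

```bash
# Unfavourable: thousands of open/close operations directly on /scratch
for f in /scratch/usr/${USER}/frames/frame_*.dat; do
    ./process "$f"                       # hypothetical per-file tool
done

# Better: one large sequential read, then many small accesses on local disk
tar xf /scratch/usr/${USER}/frames.tar -C "${LOCAL_TMPDIR}"
for f in "${LOCAL_TMPDIR}"/frames/frame_*.dat; do
    ./process "$f"
done
```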
The magnetic tape archive provides additional storage for inactive data, to free up space on the WORK or HOME filesystems. It is directly accessible on the login nodes under /perm/${USER}.
For reasons of efficiency and performance, small files and/or complex directory structures should not be transferred to the archive directly. Please aggregate your data into compressed tarballs or other archive containers with a maximum size of 5.5 TiB before copying it to the archive.
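A sketch of aggregating data before archiving it; the directory and archive names are placeholders:

```bash
# Pack a result directory into a single compressed tarball on WORK
cd /scratch/usr/${USER}
tar czf results_2024.tar.gz results_2024/      # placeholder names

# Check that the container stays below the 5.5 TiB limit
du -h results_2024.tar.gz

# Copy the single archive file to the tape archive
cp results_2024.tar.gz /perm/${USER}/
```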