...
- choose a login node associated with your compute partitions and
- authenticate via SSH Login.
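A typical connection then looks like the following sketch; the key path, user name, and login host are placeholders, so use the values from your own account information:

```bash
# Replace the key path, user name, and login host with your own values.
ssh -i ~/.ssh/id_ed25519 myusername@glogin.example-hpc.de
```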
File systems
Each complex has the following file systems available. More information about quota, usage, and best practices is available on Fixing Quota Issues. Hints for data transfer are given here.
- Home file system with 340 TiByte capacity containing
  - $HOME directories: /home/${USER}/
- Lustre parallel file system with 8.1 PiByte capacity containing
  - $WORK directories: /scratch/usr/${USER}/
  - $TMPDIR directories: /scratch/tmp/${USER}/
  - project data directories: /scratch/projects/<projectID>/ (not yet available)
- Tape archive with 120 TiByte capacity (accessible on the login nodes only)
Info: Best practices for using WORK as a Lustre file system: https://www.nas.nasa.gov/hecc/support/kb/lustre-best-practices_226.html

Info: Hints for fair usage of the shared WORK resource: Metadata Usage on WORK
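As a brief illustration, the environment variables listed above can be used directly in shell commands and job scripts; the directory and file names below are only examples:

```bash
# Stage input data into the Lustre work area (example names only).
mkdir -p "$WORK/myproject"
cp "$HOME/input.dat" "$WORK/myproject/"

# Use $TMPDIR for scratch files that are only needed during a run.
cd "$TMPDIR"
```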
Software and environment modules
...
To avoid conflicts between different compilers and compiler versions, builds of the most important libraries are provided for all compilers and major release numbers.
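For example, inspecting and loading these builds follows the usual environment-modules workflow; the module names below are placeholders and may differ on the system:

```bash
# Inspect what is installed, then load a compiler and a matching library build.
module avail
module load gcc       # placeholder: pick a concrete compiler/version
module load openmpi   # placeholder: library build matching that compiler
module list           # verify what is currently loaded
```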
Program build
...
Using slurm batch system
To run your applications on the systems, you need to go through our batch system/scheduler, Slurm. The scheduler uses meta information about the job (requested node and core count, wall time, etc.) and runs your program on the compute nodes once the resources are available and your job is next in line. For a more in-depth introduction, visit our Slurm documentation.
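As a minimal sketch (not a complete reference), a job script could look like the following; the job name, partition, resource values, and program name are placeholders that must be adapted to your project and the target complex:

```bash
#!/bin/bash
#SBATCH --job-name=example       # placeholder job name
#SBATCH --partition=standard96   # placeholder partition; see the partition overview
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=96
#SBATCH --time=00:10:00          # wall time limit (hh:mm:ss)

srun ./my_program                # placeholder executable
```

Submit the script with sbatch jobscript.sh and check its state with squeue -u $USER.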
...