
A user account is a precondition for access to the HLRN-IV system; you obtain it as the first step of the application process.

The HLRN-IV system

The HLRN-IV system consists of two independent systems named Lise (named after Lise Meitner) and Emmy (named after Emmy Noether). The systems are located at the Zuse Institute Berlin and the University of Göttingen respectively. Overall, the HLRN-IV system consists of 1270 compute nodes with 121,920 cores in total. You can learn more about the system and the differences between the sites on the HLRN-IV website.

Login

Please log in to the gateway nodes using Secure Shell (ssh, protocol version 2), see the example below. The standard gateways are

  • blogin.hlrn.de (Berlin)
  • glogin.hlrn.de (Göttingen)

Login authentication is possible only via SSH keys. For information and instructions, please see our SSH Pubkey tutorial.
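Assuming your SSH key pair is already registered as described in the tutorial, a login from a Unix-like machine might look like the following sketch. The account name `myaccount` and the key file path `~/.ssh/id_hlrn` are placeholders for your own values.

```shell
# Log in to the Berlin gateway using an SSH key.
# "myaccount" and the key path are placeholders; protocol version 2
# is the default in current OpenSSH clients.
ssh -i ~/.ssh/id_hlrn myaccount@blogin.hlrn.de

# The Göttingen gateway works the same way:
ssh -i ~/.ssh/id_hlrn myaccount@glogin.hlrn.de
```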

File Systems

Each complex has the following file systems available. More information about quotas, usage, and best practices is available here.

  • Home file system with 340 TiByte capacity containing $HOME directories /home/${USER}/
  • Lustre parallel file system with 8.1 PiByte capacity containing
    • $WORK directories /scratch/usr/${USER}/
    • $TMPDIR directories /scratch/tmp/${USER}/
    • project data directories /scratch/projects/<projectID>/ (not yet available)
  • Tape archive with 120 TiByte capacity (accessible on the login nodes only)
  • On Emmy: SSD for temporary data at $LOCAL_TMPDIR (400 GB shared among all jobs running on the node)
Filesystem quotas are currently not activated for the $HOME and $WORK directories.
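The directories above are exposed through environment variables, so job scripts should not hard-code paths. A minimal sketch of typical usage, assuming the variables are set at login as listed above (`myproject` and `input.nc` are arbitrary example names):

```shell
# Inspect where the environment points (values follow the list above):
echo "$HOME"     # home file system, /home/$USER
echo "$WORK"     # Lustre work area, /scratch/usr/$USER
echo "$TMPDIR"   # Lustre temporary area, /scratch/tmp/$USER

# Keep large job input/output on the parallel file system, not in $HOME.
# "myproject" and "input.nc" are placeholders for illustration only.
mkdir -p "$WORK/myproject"
cp input.nc "$WORK/myproject/"
```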

Software and Environment

The webpage Software gives you more information about available software on the HLRN systems.

HLRN provides a number of compilers and software packages for parallel computing and (serial) pre- and postprocessing:

  • Compilers: Intel, GNU
  • Libraries: NetCDF, LAPACK, ScaLAPACK, BLAS, FFTW, ...
  • Debuggers: Allinea DDT, Roguewave TotalView...
  • Tools: octave, python, R ...
  • Visualisation: mostly tools to investigate gridded data sets from earth-system modelling
  • Application software: mostly for engineering and chemistry (molecular dynamics)

To manage access to this software and these libraries, HLRN uses the module command, which offers the following functionality:

  1. Show lists of available software
  2. Access software in different versions


Example: Show the currently available software and access the Intel compilers
blogin1:~ $ module avail
...
blogin1:~ $ module load intel
Module for Intel Parallel Studio XE Composer Edition (version 2019 Update 5) loaded.
blogin1:~ $ module list
Currently Loaded Modulefiles:
 1) sw.skl   2) slurm   3) HLRNenv   4) intel/19.0.5(default)


To avoid conflicts between different compilers and compiler versions, builds of the most important libraries are provided for all compilers and major release numbers.
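In practice this means you load a compiler first and then pick a library build that matches it. A sketch using NetCDF as an example (the exact module names and versions shown here are assumptions and may differ on the system; use `module avail` to see what is actually installed):

```shell
# Load a compiler first, then a library built against that compiler.
module load intel       # select the Intel compiler environment
module avail netcdf     # list the NetCDF builds visible for this compiler
module load netcdf      # load the default NetCDF build for it
module list             # verify which modules are now active
```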
