
Partitions on system Lise

Compute system Lise at NHR@ZIB provides different compute partitions for CPUs and GPUs. Your choice of partition affects the specific configuration of login nodes, available software, and file systems.

Login nodes

To log in to system Lise, please use one of the login nodes (e.g. blogin1).
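Access is via SSH. A minimal sketch that assembles the login command; the host name below is a placeholder assumption, so replace it with the address from your NHR@ZIB account information:

```shell
# Assemble an SSH command for Lise.
# LOGIN_HOST is a hypothetical placeholder -- replace it with the
# host name given in your NHR@ZIB account documentation.
LOGIN_HOST="blogin.nhr.zib.de"
echo "ssh ${USER}@${LOGIN_HOST}"
```

Running the printed command from your workstation opens a shell on a login node.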

Software and environment modules

The Software webpage provides information about the software available on the NHR systems.

NHR provides a number of compilers and software packages for parallel computing and (serial) pre- and postprocessing:

  • Compilers: Intel, GNU
  • Libraries: NetCDF, LAPACK, ScaLAPACK, BLAS, FFTW, ...
  • Debuggers: Allinea DDT, Roguewave TotalView...
  • Tools: octave, python, R ...
  • Visualisation: mostly tools to investigate gridded data sets from earth-system modelling
  • Application software: mostly for engineering and chemistry (molecular dynamics)

Environment modules are used to manage access to software and libraries. The module command offers the following functionality:

  1. Shows lists of available software
  2. Enables access to software in different versions


Example: show the currently available software and load the Intel compilers
blogin1:~ $ module avail
...
blogin1:~ $ module load intel
Module for Intel Parallel Studio XE Composer Edition (version 2019 Update 5) loaded.
blogin1:~ $ module list
Currently Loaded Modulefiles:
 1) sw.skl   2) slurm   3) HLRNenv   4) intel/19.0.5(default)

To avoid conflicts between different compilers and compiler versions, builds of most important libraries are provided for all compilers and major release numbers.
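In practice this means loading the compiler first and the library afterwards, so the module system can resolve the library to the build matching that compiler. A sketch, where the module names are assumptions to be verified with `module avail`; the guard only makes the snippet safe to run on a machine without environment modules:

```shell
# Load a compiler, then a library; the module system resolves the
# library to the build matching the loaded compiler.
# Module names below are assumptions -- verify with `module avail`.
if command -v module >/dev/null 2>&1; then
    module load intel      # compiler first
    module load netcdf     # selects the NetCDF build for the Intel compiler
    module list            # confirm what is loaded
else
    echo "environment modules not available on this machine"
fi
```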

File systems

Each complex has the following file systems available. More information about quotas, usage, and best practices is available on Fixing Quota Issues. Hints for data transfer are given here.

  • Home file system with 340 TiByte capacity containing $HOME directories /home/${USER}/
  • Lustre parallel file system with 8.1 PiByte capacity containing
    • $WORK directories /scratch/usr/${USER}/
    • $TMPDIR directories /scratch/tmp/${USER}/
    • project data directories /scratch/projects/<projectID>/ (not yet available)
  • Tape archive with 120 TiByte capacity (accessible on the login nodes only)
Best practices for using WORK as a Lustre file system: https://www.nas.nasa.gov/hecc/support/kb/lustre-best-practices_226.html
Hints for fair usage of the shared WORK resource: Metadata Usage on WORK
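A typical pattern following the layout above is to stage job data under $WORK rather than the smaller $HOME. A sketch; the fallback path exists only so the snippet runs outside the cluster:

```shell
# Stage job data in the Lustre WORK area rather than the smaller HOME.
# On Lise, WORK points to /scratch/usr/${USER}; the fallback below is
# purely illustrative, for running this sketch outside the cluster.
WORK="${WORK:-/tmp/scratch/usr/${USER:-demo}}"
JOBDIR="${WORK}/myjob"
mkdir -p "${JOBDIR}"    # one directory per job keeps metadata load low
echo "staging input data into ${JOBDIR}"
```

Keeping each job's files in its own directory also follows the linked Lustre best practices, which discourage many small files in a single directory.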