For questions, please contact the support crew at support@nhr.zib.de.
Login
Login authentication is possible via SSH; see SSH Login for details.
| Partition of Lise | Login node |
|---|---|
| CPU partition "Lise" | blogin.nhr.zib.de |
| CPU Genoa partition | genoa-login.nhr.zib.de |
| GPU A100 partition | bgnlogin.nhr.zib.de |
| GPU PVC partition | bgilogin.nhr.zib.de |
```
office $ ssh -i $HOME/.ssh/id_rsa_nhr nhr_username@blogin.nhr.zib.de
Enter passphrase for key '...':
blogin1 $
```
File systems
Each complex has the following file systems available. More information about quota, usage, and best practices is available on Fixing Quota Issues. Hints for data transfer are given here.
- Home file system with 340 TiByte capacity containing
  - `$HOME` directories `/home/${USER}/`
- Lustre parallel file system with 8.1 PiByte capacity containing
  - `$WORK` directories `/scratch/usr/${USER}/`
  - `$TMPDIR` directories `/scratch/tmp/${USER}/`
  - project data directories `/scratch/projects/<projectID>/` (not yet available)
- Tape archive with 120 TiByte capacity (accessible on the login nodes only)
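The file systems can be addressed via the environment variables listed above. A minimal sketch (the project directory `my_project` is just a hypothetical example):

```bash
# show where the environment variables point
echo "Home:    $HOME"      # /home/${USER}/
echo "Work:    $WORK"      # /scratch/usr/${USER}/
echo "Scratch: $TMPDIR"    # /scratch/tmp/${USER}/

# stage input data from the home file system to the Lustre work file system
cp -r "$HOME/my_project/input" "$WORK/"
```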
Software and environment modules
The webpage Software gives you information about available software on the NHR systems.
NHR provides a number of compilers and software packages for parallel computing and (serial) pre- and postprocessing:
- Compilers: Intel, GNU
- Libraries: NetCDF, LAPACK, ScaLAPACK, BLAS, FFTW, ...
- Debuggers: Allinea DDT, Roguewave TotalView...
- Tools: octave, python, R ...
- Visualisation: mostly tools to investigate gridded data sets from earth-system modelling
- Application software: mostly for engineering and chemistry (molecular dynamics)
Environment Modules are used to manage access to software and libraries. The `module` command offers the following functionality:
- Show lists of available software
- Enable access to software in different versions
```
blogin1:~ $ module avail
...
blogin1:~ $ module load intel
Module for Intel Parallel Studio XE Composer Edition (version 2019 Update 5) loaded.
blogin1:~ $ module list
Currently Loaded Modulefiles:
 1) sw.skl   2) slurm   3) HLRNenv   4) intel/19.0.5(default)
```
To avoid conflicts between different compilers and compiler versions, builds of the most important libraries are provided for all compilers and major release numbers.
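As a sketch of what this looks like in practice (the module names below are illustrative; check `module avail` for the names and versions actually installed):

```bash
# list the available builds of a library, e.g. NetCDF
module avail netcdf

# load a compiler first, then a library build matching that compiler
module load intel
module load netcdf    # exact module name/version depends on the installation
```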
Program build
Here, only a brief introduction to program building with the Intel compiler is given. For more detailed instructions, including important compiler flags and special libraries, refer to our webpage Compilation CPU CLX.
Examples for building a program on the Atos system
To build executables for the Atos system, call the standard compiler executables (icc, ifort, gcc, gfortran) directly.
Serial program:
```
module load intel
icc -o hello.bin hello.c
```

MPI program with Intel MPI:
```
module load intel
module load impi
mpiicc -o hello.bin hello.c
```

OpenMP program:
```
module load intel
icc -qopenmp -o hello.bin hello.c
```
MPI, Communication Libraries, OpenMP
We provide several communication libraries:
- Intel MPI
- OpenMPI
As Intel MPI is the communication library recommended by the system vendor, currently only documentation for Intel MPI is provided, apart from application-specific documentation.
OpenMP support is available with the compilers from Intel and GNU.
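For completeness, an OpenMP build with the GNU compiler might look like the following sketch (the `gcc` module name is an assumption; check `module avail` first):

```bash
module load gcc                     # module name may differ on the system
gcc -fopenmp -o hello.bin hello.c

# run with four OpenMP threads
OMP_NUM_THREADS=4 ./hello.bin
```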
Using the batch system
To run your applications on the systems, you need to go through our batch system/scheduler: Slurm. The scheduler uses meta information about the job (requested node and core count, wall time, etc.) and runs your program on the compute nodes once the resources are available and your job is next in line. For a more in-depth introduction, visit our Slurm documentation.
We distinguish two kinds of jobs:
- Interactive job execution
- Job script execution
Resource specification
To request resources, the following flags can be used when submitting a job.
| Parameter | Flag | Default value |
|---|---|---|
| # tasks | `-n #` | 1 |
| # nodes | `-N #` | 1 |
| # tasks per node | `--tasks-per-node #` | |
| partition | `-p <name>` | standard96 |
| time limit | `-t hh:mm:ss` | 12:00:00 |
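For example, these flags can be combined on the `sbatch` (or `salloc`) command line; the values below are illustrative:

```bash
# 2 nodes, 24 tasks per node, 10 minutes wall time on the standard96 partition
sbatch -p standard96 -N 2 --tasks-per-node 24 -t 00:10:00 jobscript.sh
```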
Interactive jobs
For using compute resources interactively, e.g. to follow the execution of MPI programs, the following steps are required. Note that non-interactive batch jobs via job scripts (see below) are the primary way of using the compute resources.
- A resource allocation for interactive usage has to be requested first with the `salloc --interactive` command, which should also include your resource requirements.
- When `salloc` has successfully allocated the requested resources, you have to issue an additional `srun` command (see the example below) if you want to work on one of the allocated compute nodes.
- Afterwards, `srun` or MPI launch commands like `mpirun` or `mpiexec` can be used to start parallel programs (see the according user guides).
```
blogin1 ~ $ salloc -t 00:10:00 -p standard96:test -N2 --tasks-per-node 24
salloc: Granted job allocation [...]
salloc: Waiting for resource configuration
salloc: Nodes bcn[1001,1003] are ready for job
# To get a shell on one of the allocated nodes
blogin1 ~ $ srun --pty --interactive --preserve-env ${SHELL}
bcn1001 ~ $ srun hostname | sort | uniq -c
     24 bcn1001
     24 bcn1003
bcn1001 ~ $ exit
# Exit a second time for Berlin/Lise
blogin1:~ > exit
salloc: Relinquishing job allocation [...]
```
Job scripts
Please go to our webpage CPU partition "Lise" for more details about job scripts. As an introduction, standard batch system jobs are executed by applying the following steps:
- Provide (write) a batch job script, see the examples below.
- Submit the job script with the command `sbatch` (`sbatch jobscript.sh`).
- Monitor and control the job execution, e.g. with the commands `squeue` and `scancel` (cancel the job).
A job script is a script (written in `bash`, `ksh`, or `csh` syntax) containing Slurm keywords, which are used as arguments for the command `sbatch`.
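A minimal sketch of such a job script for an MPI program (the job name, the resource values, and the binary `hello.bin` are just examples):

```bash
#!/bin/bash
#SBATCH --job-name=hello          # arbitrary job name
#SBATCH --partition=standard96    # partition, see the table above
#SBATCH --nodes=2                 # number of nodes
#SBATCH --tasks-per-node=24       # MPI tasks per node
#SBATCH --time=00:10:00           # wall time limit hh:mm:ss

module load intel
module load impi

# start the MPI program on all allocated tasks
srun ./hello.bin
```

Submit it with `sbatch jobscript.sh` and monitor it with `squeue`.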
Job Accounting
The webpage Accounting gives you more information about job accounting.
Every batch job is accounted. The account (project) which is debited for a batch job can be specified using the `sbatch` parameter `--account <account>`. If a batch job does not state an account (project), a default is taken from the account database. It defaults to the personal project of the user, which has the same name as the user. Users may modify their default project by visiting the Portal NHR@ZIB.
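For example (the project name `myproject` is hypothetical):

```bash
# charge the job to a specific project instead of the default personal project
sbatch --account myproject jobscript.sh
```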