Table of Contents
Help and answers
For questions, please contact our support team at support@nhr.zib.de.
Login
Login authentication is possible via SSH keys only. Please visit our tutorial SSH Login.
| Partition of Lise | Login node |
|---|---|
| CPU partition "Lise" | blogin.nhr.zib.de |
| GPU A100 partition | bgnlogin.nhr.zib.de |
| GPU PVC partition | bgilogin.nhr.zib.de |
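As a minimal sketch (the key file path and user name are placeholders, not actual account data), a login to the CPU partition could look like this:

```
# Connect to a Lise login node using your SSH key (key path and user name are placeholders)
ssh -i ~/.ssh/id_ed25519 myusername@blogin.nhr.zib.de
```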
...
Each complex has the following file systems available. More information about quota, usage, and best practices is available on the page Fixing Quota Issues. Hints for data transfer are given here.
- Home file system with 340 TiByte capacity containing $HOME directories /home/${USER}/
- Lustre parallel file system with 8.1 PiByte capacity containing $WORK directories /scratch/usr/${USER}/ and $TMPDIR directories /scratch/tmp/${USER}/
- project data directories /scratch/projects/<projectID>/ (not yet available)
- Tape archive with 120 TiByte capacity (accessible on the login nodes only)
- On Emmy: SSD for temporary data at $LOCAL_TMPDIR (400 GB shared among all jobs running on the node)
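For illustration, the environment variables above resolve to the directories listed (a short sketch; the printed paths follow the scheme above):

```
# The file system environment variables point to the directories listed above
echo $HOME      # /home/$USER/
echo $WORK      # /scratch/usr/$USER/
echo $TMPDIR    # /scratch/tmp/$USER/
```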
Info: Best practices for using WORK as a Lustre filesystem: https://www.nas.nasa.gov/hecc/support/kb/lustre-best-practices_226.html
...
Here, only a brief introduction to program building using the Intel compiler is given. For more detailed instructions, including important compiler flags and special libraries, refer to our webpage Compilation Guide CPU CLX.
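As a minimal sketch only (the module names intel and impi and the source file hello.c are assumptions; consult the Compilation Guide CPU CLX for the authoritative modules and flags), building an MPI program could look like this:

```
# Load an Intel compiler and MPI environment (module names are assumptions, check "module avail")
module load intel
module load impi
# Compile an MPI program with the Intel MPI compiler wrapper (hello.c is a placeholder source file)
mpiicc -O2 -o hello.bin hello.c
```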
Examples for building a program on the Atos system
...
| Parameter | Option | Default value |
|---|---|---|
| # tasks | -n # | 1 |
| # nodes | -N # | 1 |
| # tasks per node | --tasks-per-node # | |
| partition | -p <name> | standard96/medium40 |
| Time limit | -t hh:mm:ss | 12:00:00 |
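For illustration, a minimal job script using the parameters above could look like the following sketch (partition, node count, and the executable name are example values, not recommendations):

```
#!/bin/bash
#SBATCH -p standard96          # partition
#SBATCH -N 2                   # number of nodes
#SBATCH --tasks-per-node 24    # tasks per node
#SBATCH -t 12:00:00            # time limit (hh:mm:ss)

# mybinary.bin is a placeholder for your own executable
srun ./mybinary.bin
```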
...
- A resource allocation for interactive usage has to be requested first with the salloc --interactive command, which should also include your resource requirements.
- When salloc has successfully allocated the requested resources and you are working at the Göttingen complex (Emmy), you are automatically logged in on one of the allocated compute nodes. For Berlin (Lise), you have to issue an additional srun command if you want to work on one of the allocated compute nodes (see the example below).
- Afterwards, srun or MPI launch commands, like mpirun or mpiexec, can be used to start parallel programs (see the corresponding user guides).
```
blogin1 ~ $ salloc -t 00:10:00 -p standard96:test -N2 --tasks-per-node 24
salloc: Granted job allocation [...]
salloc: Waiting for resource configuration
salloc: Nodes bcn[1001,1003] are ready for job
# To get a shell on one of the allocated nodes at the Berlin complex/Lise (not required for Göttingen/Emmy)
blogin1 ~ $ srun --pty --interactive --preserve-env ${SHELL}
bcn1001 ~ $ srun hostname | sort | uniq -c
     24 bcn1001
     24 bcn1003
bcn1001 ~ $ exit
# Exit a second time for Berlin/Lise
blogin1:~ > exit
salloc: Relinquishing job allocation [...]
```
...
Job scripts
Please go to our webpage CPU partition "Lise" for more details about job scripts. As an introduction, standard batch system jobs are executed by applying the following steps:
...
The page Accounting gives you more information about job accounting.
Every batch job on Lise and Emmy is accounted. The account (project) that is debited for a batch job can be specified using the sbatch parameter --account <account>. If a batch job does not specify an account (project), a default is taken from the account database; it defaults to the personal project of the user, which has the same name as the user. Users may modify their default project by visiting the service portal Portal NHR@ZIB.
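For example, the project to be debited can be set on the command line or inside the job script (myproject and jobscript.sh are placeholders):

```
# Specify the project to be debited when submitting a job
sbatch --account myproject jobscript.sh

# Or set it inside the job script itself
#SBATCH --account=myproject
```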