Login
Login to the GPU A100 partition is possible through dedicated login nodes, reachable via SSH at bgnlogin.nhr.zib.de:

Example: login
```
$ ssh -i $HOME/.ssh/id_rsa_zib zib_username@bgnlogin.nhr.zib.de
Enter passphrase for key '/<home_directory>/.ssh/id_rsa_zib':
bgnlogin1$
```
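If you log in frequently, a host entry in your SSH client configuration saves typing. A minimal sketch, reusing the key path and user name from the example above; the alias `bgn` is a hypothetical name you can choose freely:

```
# ~/.ssh/config -- "bgn" is a hypothetical alias
Host bgn
    HostName bgnlogin.nhr.zib.de
    User zib_username
    IdentityFile ~/.ssh/id_rsa_zib
```

With this entry in place, `ssh bgn` is equivalent to the full command shown above.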
File systems
The file systems HOME and WORK on the GPU system are the same as on the CPU system, see Quickstart. Access to node-local SSD space on the compute nodes is provided via the environment variable LOCAL_TMPDIR, which is defined during a SLURM session (batch or interactive job).
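A minimal sketch of a batch job that uses LOCAL_TMPDIR as fast scratch space; the application name, input file, and resource limits are placeholders, and $WORK is assumed to point to your WORK file system:

```bash
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# LOCAL_TMPDIR points to node-local SSD space during the SLURM session
cp "$WORK/input.dat" "$LOCAL_TMPDIR/"    # stage input to fast local SSD
cd "$LOCAL_TMPDIR"
./my_app input.dat > output.dat          # my_app is a placeholder application
cp output.dat "$WORK/"                   # copy results back before the job ends
```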
The following Slurm partitions are available on the A100 GPU partition of the system Lise.
| Partition name | Nodes | GPUs per node | GPU hardware | Description |
| --- | --- | --- | --- | --- |
| gpu-a100 | 36 | 4 | NVIDIA Tesla A100 80GB | full node, exclusive access |
| gpu-a100:shared | 5 | 4 | NVIDIA Tesla A100 80GB | shared node access, exclusive use of the requested GPUs |
| gpu-a100:shared:mig | 1 | 28 (4 x 7) | 1 to 28 1g.10gb A100 MIG slices | shared node access, shared GPU devices via Multi-Instance GPU; each of the four GPUs is logically split into seven usable slices with 10 GB of GPU memory per slice |
Cost: 150 core hours per GPU, or 21.43 core hours per MIG slice.
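For illustration, requests against these partitions could look as follows. This is a hedged sketch using generic Slurm options; the exact GRES names, in particular for MIG slices, are assumptions and may differ on the system:

```bash
# exclusive full node with 4 A100 GPUs
srun --partition=gpu-a100 --nodes=1 --pty bash

# shared node, exclusive use of one A100
srun --partition=gpu-a100:shared --gres=gpu:1 --pty bash

# shared node, two 1g.10gb MIG slices (GRES name is an assumption)
srun --partition=gpu-a100:shared:mig --gres=gpu:1g.10gb:2 --pty bash
```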