OpenMP support is built into the Intel and GNU compilers.

Using the Batch System

To run your applications on the HLRN, you need to go through our batch system/scheduler: Slurm. The scheduler uses meta-information about the job (requested node and core count, wall time, etc.) and then runs your program on the compute nodes once the resources are available and your job is next in line. For a more in-depth introduction, visit our Slurm documentation.

We distinguish two kinds of jobs:

  • Interactive job execution
  • Job script execution

Resource specification

To request resources, several flags can be used when submitting the job:


Parameter           Flag                  Default Value
# tasks             -n #                  1
# nodes             -N #                  1
# tasks per node    --tasks-per-node #
Partition           -p <name>             standard96/medium40
Time limit          -t hh:mm:ss           12:00:00
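The parameters above can be combined in a single job script header. A hypothetical example requesting 8 tasks on 2 nodes (4 per node) in the standard96 partition for 30 minutes (my_app is a placeholder for your executable):

```shell
#!/bin/bash
#SBATCH -n 8                    # total number of tasks
#SBATCH -N 2                    # number of nodes
#SBATCH --tasks-per-node 4      # tasks per node
#SBATCH -p standard96           # partition
#SBATCH -t 00:30:00             # wall-time limit (default: 12:00:00)

srun ./my_app                   # my_app is a placeholder, not a provided binary
```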

Interactive jobs

Interactive MPI programs are executed applying the following steps (example for the default medium partition):

  1. Ask for an interactive shell with the command srun <…> --pty bash. We advise using one of the test partitions for interactive jobs.
  2. In the interactive shell, execute the parallel program with the MPI starter mpirun or srun.
blogin1:~ > srun -t 00:10:00 -p medium40:test -N2 --tasks-per-node 24 --pty bash
bash-4.2$ mpirun hello_world >> hello_world.out
bash-4.2$ exit
blogin1:~ > 

Job scripts

Please see our MPI Start Guide web page for more details about job scripts. As an introduction, standard batch system jobs are executed by applying the following steps:

  1. Provide (write) a batch job script, see the examples below.
  2. Submit the job script with the command sbatch (sbatch jobscript.sh).
  3. Monitor and control the job execution, e.g. with the commands squeue (list jobs) and scancel (cancel a job).
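The steps above form a simple cycle on the login node. A sketch (jobscript.sh stands for the script written in step 1, and <jobid> for the id that sbatch reports):

```shell
sbatch jobscript.sh    # step 2: submit; prints "Submitted batch job <jobid>"
squeue -u $USER        # step 3: list your pending and running jobs
scancel <jobid>        # step 3: cancel the job by its id, if needed
```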

A job script is a script (written in bash, ksh or csh syntax) containing Slurm keywords which are used as arguments for the command sbatch.

Intel MPI Job Script

Requesting 4 nodes in the medium partition with 96 cores (no hyperthreading) for 10 minutes, using Intel MPI.

#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH -N 4
#SBATCH --tasks-per-node 96
#SBATCH -p standard96

module load impi
export SLURM_CPU_BIND=none  # important when using "mpirun" from Intel-MPI!
							# Do NOT use this with srun!
export I_MPI_HYDRA_TOPOLIB=ipl
export I_MPI_HYDRA_BRANCH_COUNT=-1

mpirun hello_world > hello.output


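As the comment in the script warns, SLURM_CPU_BIND=none applies only to Intel MPI's mpirun. When launching with srun instead, Slurm handles process binding itself, so that export must be left out; a hypothetical variant of the launch line:

```shell
# Variant: let srun start the ranks (do NOT export SLURM_CPU_BIND=none here)
srun hello_world > hello.output
```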

OpenMP Job Script

Requesting 1 large node with 96 CPUs (physical cores) for 20 minutes, then using 192 hyperthreads.

#!/bin/bash
#SBATCH -t 00:20:00
#SBATCH -N 1
#SBATCH --cpus-per-task=96
#SBATCH -p large96:test

# This binds each thread to one core
export OMP_PROC_BIND=TRUE
# Twice the core count given by -c / --cpus-per-task, to use both hyperthreads per core
export OMP_NUM_THREADS=$(($SLURM_CPUS_PER_TASK * 2))
export KMP_AFFINITY=verbose,scatter

hello_world > hello.output
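The OMP_NUM_THREADS line above doubles the allocated core count to occupy both hyperthreads of each physical core. The shell arithmetic can be checked outside Slurm by setting the variable by hand (the value 96 here is illustrative, matching --cpus-per-task above):

```shell
SLURM_CPUS_PER_TASK=96                        # what -c / --cpus-per-task would set
OMP_NUM_THREADS=$((SLURM_CPUS_PER_TASK * 2))  # two hyperthreads per physical core
echo "$OMP_NUM_THREADS"
```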