TURBOMOLE is a computational chemistry program that implements a range of ab initio quantum chemistry methods. It was initially developed at the University of Karlsruhe.

...

The vendor also provides a list of utilities.

Prerequisites

Only members of the tmol user group can use the TURBOMOLE software. To have their user ID included in this group, users can send a message to their consultant or to HLRN support.
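Whether your account is already in the group can be checked on any login node with standard Linux tools, for example:

# print all groups of the current user and look for tmol
if id -nG | grep -qw tmol; then
    echo "already in tmol - TURBOMOLE is available"
else
    echo "not in tmol yet - contact your consultant or HLRN support"
fi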

Modules

Version | Installation Path                 | Modulefile           | Comment
7.3     | /sw/chem/turbomole/7.3/skl        | turbomole/7.3        |
7.6     | /sw/chem/turbomole/7.6/skl        | turbomole/7.6        |
2022    | /sw/chem/turbomole/tmolex2022/skl | turbomole/tmolex2022 | TmoleX GUI, includes Turbomole 7.6 CLI
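Which versions are installed and what a modulefile sets can be inspected with the usual module commands, for example:

module avail turbomole      # list all installed TURBOMOLE modulefiles
module show turbomole/7.6   # display what the modulefile sets, e.g. TURBODIR and PATH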

...

export PARA_ARCH=MPI
module load turbomole/7.6

TmoleX GUI

TmoleX is a GUI for TURBOMOLE that allows users to build workflows. It also aids in building the initial structure and in visualising the results.

To run the TmoleX GUI, you must connect with X11 forwarding (ssh -Y ...).

module load turbomole/tmolex2022
TmoleX22
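A complete session from your local machine might look like the following sketch (the login host name is illustrative; use your site's login node):

ssh -Y myuser@glogin.hlrn.de      # X11 forwarding; replace user and host as appropriate
module load turbomole/tmolex2022  # makes the TmoleX22 launcher available
TmoleX22 &                        # start the GUI in the background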

Job Script Examples

Note that some calculations run only in a certain execution mode; please consult the manual. All execution modes are listed below.
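Each of the following scripts is submitted as a batch job. A typical round trip, assuming the script was saved as turbomole_job.sh (the file name is just a placeholder):

sbatch turbomole_job.sh   # submit the job script
squeue -u $USER           # monitor the job
less result.out           # inspect the TURBOMOLE output once the job has finished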

1. Serial version. The calculation runs serially and uses only one node.

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -N 1 
#SBATCH --mem-per-cpu=2G

module load turbomole/7.6

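# jobex drives the structure optimisation; -ri selects the RI modules
# (ridft/rdgrad) and -c 300 limits the run to at most 300 cycles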
jobex -ri -c 300 > result.out

2. SMP version. It can run only on one node; here we use a single node with all of its CPUs:

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -N 1
#SBATCH --cpus-per-task=96 

export PARA_ARCH=SMP
module load turbomole/7.6

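# PARNODES tells TURBOMOLE how many parallel workers to start; here all
# 96 CPUs of the node, taken from the Slurm request above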
export PARNODES=$SLURM_CPUS_PER_TASK

jobex -ri -c 300 > result.out

3. MPI version. The MPI binaries have a _mpi suffix. To use the same binary names as in the SMP version, the PATH is extended with TURBODIR/mpirun_scripts/, a directory that symlinks the plain binary names to the _mpi binaries. Here we run on 7 nodes with all 96 cores per node:

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -N 7
#SBATCH --tasks-per-node 96

export SLURM_CPU_BIND=none  
export PARA_ARCH=MPI
module load turbomole/7.6
 
export PATH=$TURBODIR/mpirun_scripts/`sysname`/IMPI/bin:$TURBODIR/bin/`sysname`:$PATH
export PARNODES=${SLURM_NTASKS}

jobex -ri -c 300 > result.out
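The backticks around sysname invoke a TURBOMOLE helper script that prints the architecture string, so the PATH line above resolves to the binaries matching the node type. Setting SLURM_CPU_BIND=none leaves process placement to the MPI startup.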

4. OpenMP version. Here we need to set the OMP_NUM_THREADS variable. Again 7 nodes with 96 cores each are used, now split between MPI tasks and OpenMP threads per task. We use the standard binaries with OpenMP, not the _mpi binaries; if OMP_NUM_THREADS is set, the OpenMP version is used.

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -N 7
#SBATCH --tasks-per-node 24
#SBATCH --cpus-per-task=4
# 24 MPI tasks x 4 OpenMP threads = 96 cores per node; adjust the split as
# needed (without --cpus-per-task, SLURM_CPUS_PER_TASK below would be unset)

export SLURM_CPU_BIND=none  
export PARA_ARCH=MPI
module load turbomole/7.6

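# PARNODES sets the number of MPI tasks, OMP_NUM_THREADS the OpenMP
# threads per task, taken from the --cpus-per-task request above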
export PARNODES=${SLURM_NTASKS}
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

jobex -ri -c 300 > result.out


...