TURBOMOLE

Description

TURBOMOLE is a computational chemistry program package that implements various methods from quantum chemistry. Some of its components are also available as MPI and/or OpenMP parallel executables. TURBOMOLE was initially developed at the University of Karlsruhe.

Read more about it on the developer's homepage.

An overview of the documentation can be found here.

The vendor also provides a list of utilities.

Prerequisites

On Lise, only members of the tmol user group can use TURBOMOLE. To have their user ID included in this group, users can send a message to their consultant or to our support.

Environment Modules

Version    Installation Path              modulefile
7.4        /sw/chem/turbomole/7.4/skl     turbomole/7.4

Usage

TURBOMOLE has two execution modes. By default it uses threaded executables (OpenMP or POSIX threads, single node only), but it also provides MPI-parallel executables for runs across multiple nodes. To use the MPI version, the environment variable PARA_ARCH must be set to MPI. If PARA_ARCH is unset, empty, or set to SMP, TURBOMOLE uses the threaded (non-MPI) executables. PARA_ARCH must be set before a turbomole environment module is loaded.

Example for the MPI version:

export PARA_ARCH=MPI
module load turbomole/7.4
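Because PARA_ARCH is only evaluated when the module is loaded, it can be worth validating the variable in the job script before the module load line. A minimal sketch (the message wording and the fallback logic are assumptions, not part of TURBOMOLE itself):

```shell
#!/bin/bash
# Sketch: check PARA_ARCH before "module load turbomole/7.4".
# An unset or empty PARA_ARCH falls back to SMP (threaded) mode.
export PARA_ARCH=MPI

case "${PARA_ARCH:-SMP}" in
    MPI) echo "MPI executables will be selected" ;;
    SMP) echo "threaded (SMP) executables will be selected" ;;
    *)   echo "unexpected PARA_ARCH value: $PARA_ARCH" >&2; exit 1 ;;
esac
```

If the check fails, the job aborts before any TURBOMOLE binary is started, which is cheaper than discovering the wrong execution mode in the output files.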

Job Script Examples

Note that some calculations run only in a certain execution mode; please consult the manual. All execution modes are listed below.

1. Serial version. The calculation runs serially on a single node.

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p cpu-clx
#SBATCH -N 1 

module load turbomole/7.4

jobex -ri -c 300 > result.out

2. SMP version. It can only run on one node; here it uses one node with all of its CPUs:

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p cpu-clx
#SBATCH -N 1
#SBATCH --cpus-per-task=96 

export PARA_ARCH=SMP
module load turbomole/7.4

export PARNODES=$SLURM_CPUS_PER_TASK

jobex -ri -c 300 > result.out
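In SMP mode, PARNODES determines how many threads TURBOMOLE starts. The script above takes the value from SLURM_CPUS_PER_TASK; a defensive variant with a fallback for shells where Slurm has not exported that variable could look like this (a sketch; the default of 96 is an assumption matching the cpu-clx nodes above):

```shell
#!/bin/bash
# Sketch: set PARNODES from the Slurm allocation, defaulting to all
# 96 cores of a cpu-clx node when SLURM_CPUS_PER_TASK is not exported.
unset SLURM_CPUS_PER_TASK          # simulate a shell outside a Slurm job
export PARNODES=${SLURM_CPUS_PER_TASK:-96}
echo "$PARNODES"
```

Inside a real job, SLURM_CPUS_PER_TASK is set by Slurm and the fallback is never used.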

3. MPI version. The MPI binaries have a _mpi suffix. To keep the same binary names as in the SMP version, PATH is extended with TURBODIR/mpirun_scripts/; this directory contains symbolic links from the plain names to the _mpi binaries. Here the job runs on 8 nodes with all 96 cores per node:

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p cpu-clx
#SBATCH -N 8
#SBATCH --tasks-per-node=96

export SLURM_CPU_BIND=none  
export PARA_ARCH=MPI
module load turbomole/7.4
 
export PATH=$TURBODIR/mpirun_scripts/`sysname`/IMPI/bin:$TURBODIR/bin/`sysname`:$PATH
export PARNODES=${SLURM_NTASKS}

jobex -ri -c 300 > result.out
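The effect of the PATH line can be inspected outside a job by stubbing the two values it depends on. In this sketch, the TURBODIR value matches the installation path above, while the `sysname` output is an assumption (the actual string depends on the TURBOMOLE build):

```shell
#!/bin/bash
# Sketch of the PATH manipulation from the job script, with TURBODIR and
# the `sysname` output stubbed so the result can be inspected anywhere.
TURBODIR=/sw/chem/turbomole/7.4/skl      # normally set by the module
ARCH=em64t-unknown-linux-gnu             # assumed `sysname` output
export PATH=$TURBODIR/mpirun_scripts/$ARCH/IMPI/bin:$TURBODIR/bin/$ARCH:$PATH
echo "${PATH%%:*}"                       # first PATH entry wins the lookup
```

Since the shell searches PATH left to right, the mpirun_scripts directory is found first, so jobex picks up the symlinked _mpi binaries under their plain names.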

4. OpenMP version. Here the OMP_NUM_THREADS variable must be set. The job again uses 8 nodes with 96 cores each, but runs one task per node with 96 OpenMP threads each (note that SLURM_CPUS_PER_TASK is only defined when --cpus-per-task is requested). The standard binaries are used with OpenMP; do not put the _mpi binaries on the PATH. If OMP_NUM_THREADS is set, TURBOMOLE uses the OpenMP-parallel executables.

#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p cpu-clx
#SBATCH -N 8
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=96

export SLURM_CPU_BIND=none
export PARA_ARCH=MPI
module load turbomole/7.4

export PARNODES=${SLURM_NTASKS}
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

jobex -ri -c 300 > result.out
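As a sanity check for such a layout, the total core count of the job should equal the number of MPI tasks times the OpenMP threads per task. A minimal arithmetic sketch (the values 8 and 96 are assumptions matching an 8-node allocation with 96 cores per node):

```shell
#!/bin/bash
# Sketch: total cores = (MPI tasks) x (OpenMP threads per task).
SLURM_NTASKS=8                 # stubbed; set by Slurm in a real job
SLURM_CPUS_PER_TASK=96         # stubbed; threads per task
echo $(( SLURM_NTASKS * SLURM_CPUS_PER_TASK ))
```

If the product does not match the allocated cores, the job either oversubscribes or idles part of the allocation.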
