User Manual


    Gaussian

    Aug. 14, 2025

Gaussian is a computational chemistry application provided by Gaussian, Inc.

    License agreement

In order to use Gaussian, you have to agree to the following conditions.

    1. I am not a member of a research group developing software competitive to Gaussian.

    2. I will not copy the Gaussian software, or make it available to anyone else.

    3. I will properly acknowledge Gaussian Inc. in publications.

To have your user ID added to the Gaussian UNIX group, please contact support and include a copy of the statement above.

    Limitations

Gaussian 16 is available at NHR@ZIB.

"Linda parallelism" (cluster/network-parallel execution of Gaussian) is not supported on any of our systems. Only shared-memory multiprocessor parallel execution is supported; therefore, no Gaussian job can use more than a single compute node.

    Description

    Gaussian 16 is the latest in the Gaussian series of programs. It provides state-of-the-art capabilities for electronic structure modeling. 

    QuickStart

    Environment modules

The following versions have been installed:

Partition            Version                Module file      Prerequisites
CPU CLX partition    Gaussian 16 Rev. C.02  gaussian/16.C02  -
CPU Genoa partition  Gaussian 16 Rev. C.02  gaussian/16.C02  -
GPU A100 partition   Gaussian 16 Rev. C.02  gaussian/16.C02  cuda/12.9

Deprecated versions
Gaussian 16 Rev. C.02  gaussian/16.C02  -
Gaussian 16 Rev. C.02  gaussian/16.C02  cuda/11.8

    GPU Job Performance

GPUs are effective for DFT calculations, for both ground and excited states of larger molecules. However, they are not effective for smaller jobs, nor for post-SCF calculations such as MP2 or CCSD.

    Job submissions

Besides your Gaussian input file, you have to prepare a job script that defines the compute resources for the job; both the input file and the job script have to be in the same directory.

    Default runtime files (.rwf, .inp, .d2e, .int, .skr files) will be saved only temporarily in $LOCAL_TMPDIR on the compute node to which the job was scheduled. The files will be removed by the scheduler when a job is done.
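If you want to direct Gaussian's scratch files to the node-local SSD explicitly, you can point GAUSS_SCRDIR (Gaussian's scratch-directory variable) at it in the job script; a sketch, assuming $LOCAL_TMPDIR is set by the scheduler as described above:

```shell
# Sketch: direct Gaussian scratch files to the node-local SSD.
# Assumes $LOCAL_TMPDIR is provided by the scheduler on the compute node.
export GAUSS_SCRDIR=$LOCAL_TMPDIR

g16 filename.com

# Scratch files in $GAUSS_SCRDIR are deleted when the job ends; only files
# written to the submit directory (e.g. the .chk file) survive the job.
```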

If you wish to restart your calculation after a job has finished (successfully or not), please define a checkpoint file (file_name.chk) in your G16 input file (%Chk=path/name.chk).
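For illustration, a hypothetical input-file header that writes such a checkpoint file (the file name and resource values are examples, not requirements):

```
%Chk=name.chk
%mem=16GB
%nprocs=16
# B3LYP/6-31G(d) Opt
```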

    CPU jobs

Because only the "shared-memory multiprocessor" parallel version is supported, your jobs can use only one node and at most 96 cores per node.

    CPU job script  example

    CPU_submit
    #!/bin/bash
    #SBATCH --time=12:00:00 		       # expected run time (hh:mm:ss)
#SBATCH --partition=cpu-clx:ssd        # compute nodes with local SSD storage
    #SBATCH --mem=16G                      # memory, roughly 2 times %mem defined in the input name.com file
    #SBATCH --cpus-per-task=16             # No. of CPUs, same amount as defined by %nprocs in the filename.com input file
    
    module load gaussian/16.C02 
    
    g16 filename.com                       # g16 command, input: filename.com
     

    GPU jobs

Because only the "shared-memory multiprocessor" parallel version is supported, your jobs can use only one node and up to 4 GPUs per node.

    GPU job script
    #!/bin/bash 
    #SBATCH --time=12:00:00 		       # expected run time (hh:mm:ss)
#SBATCH --partition=gpu-a100           # compute nodes with A100 GPUs
    #SBATCH --nodes=1                      # number of compute node
    #SBATCH --mem=32G                      # memory, roughly 2 times %mem defined in the input name.com file
    #SBATCH --ntasks=32                    # No.CPUs plus the number of control CPUs same amount as defined by %cpu in the filename.com input file 
    #SBATCH --gres=gpu:4                   # No. GPUs same amount as defined by %GPUCPU in the filename.com input file   
    
module load cuda/12.9
    module load gaussian/16.C02   
    
    g16 filename.com                       # g16 command, input: filename.com 

    Specifying GPUs & Control CPUs for a Gaussian Job

    The %GPUCPU Link 0 command specifies which GPUs and associated controlling CPUs to use for a calculation. This command takes one parameter:

    %GPUCPU=gpu-list=control-cpus

    For example, for 2 GPUs, a job which uses 2 control CPU cores would use the following Link 0 commands:

    %CPU=0-1                                                               #Control CPUs are included in this list.

    %GPUCPU=0,1=0,1

    Using 4 GPUs and 4 control CPU cores:

    %CPU=0-3                                                               #Control CPUs are included in this list.

    %GPUCPU=0,1,2,3=0,1,2,3

Using 4 GPUs and a total of 32 CPU cores, including 4 control CPU cores:

    %CPU=0-31                                                               #Control CPUs are included in this list.

    %GPUCPU=0,1,2,3=0,1,2,3
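The core and GPU lists above can also be generated inside the job script. A small sketch (hypothetical helper, assuming GPUs 0..N-1 are controlled by CPU cores 0..N-1, as in the examples above):

```shell
#!/bin/bash
# Sketch: print the Link 0 lines for ngpu GPUs and ncores total CPU cores,
# with GPUs 0..ngpu-1 controlled by CPU cores 0..ngpu-1.
ngpu=4
ncores=32
gpulist=$(seq -s, 0 $((ngpu - 1)))      # e.g. "0,1,2,3"
echo "%CPU=0-$((ncores - 1))"           # all cores; control cores are included
echo "%GPUCPU=${gpulist}=${gpulist}"
```

Redirect the output into the top of the input file, or paste the printed lines there by hand.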

    Interactive jobs

    Example for CPU calculations:

~ $ salloc -t 00:10:00 -p cpu-clx:ssd -N1 --tasks-per-node 24

    ~ $ g16 filename.com 

Example for GPU calculations:

~ $ salloc -t 00:10:00 -p gpu-a100 -N1 --ntasks=32 --gres=gpu:4

    ~ $ g16 filename.com 

    Restart calculations from checkpoint files 

    opt=restart 

Molecular geometry optimization jobs can be restarted from a checkpoint file. All existing information (basis sets, wavefunction, and molecular structures from the geometry optimization) can be read from the checkpoint file.

    restart_opt.com
    %chk=filename.chk
    %mem=16GB
    %nprocs=16
# method chkbasis guess=read geom=allcheck opt=restart

    The same restarting can be done for vibrational frequency computations.

    restart_freq.com
    %chk=filename.chk
    %mem=16GB
    %nprocs=16
    # restart  

    Input file examples

    Example for CPU calculations: water.com

    Example for GPU calculations: DeOxyThymidine.com
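The attached example files are not reproduced on this page. A minimal water input of the kind water.com refers to might look like the following (hypothetical method and geometry, for illustration only):

```
%chk=water.chk
%mem=2GB
%nprocs=4
# B3LYP/6-31G(d) Opt

water geometry optimization

0 1
O    0.000000    0.000000    0.117300
H    0.000000    0.757200   -0.469200
H    0.000000   -0.757200   -0.469200

```

Note the required blank line at the end of the molecule specification.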
