Gaussian is a computational chemistry application provided by Gaussian Inc.
In order to use Gaussian you have to agree to the following conditions.
1. I am not a member of a research group developing software competitive to Gaussian.
2. I will not copy the Gaussian software, or make it available to anyone else.
3. I will properly acknowledge Gaussian Inc. in publications.
Please contact support with a copy of the statement above to have your user ID added to the Gaussian UNIX group.
Gaussian 16 is available at NHR@ZIB.
"Linda parallelism", Cluster/network parallel execution of Gaussian, is not supported at any of our systems. Only "shared-memory multiprocessor parallel execution" is supported, therefore no Gaussian job can use more than a single compute node.
Gaussian 16 is the latest in the Gaussian series of programs. It provides state-of-the-art capabilities for electronic structure modeling.
The following versions have been installed:
| Version | Module file | Prerequisites |
|---|---|---|
| Deprecated versions | | |
| | gaussian/16.C02 | |
| CPU CLX partition | | |
| Gaussian 16 Rev. C.02 | gaussian/16.C02 | - |
| CPU Genoa partition | | |
| Gaussian 16 Rev. C.02 | gaussian/16.C02 | - |
| GPU A100 partition | | |
| Gaussian 16 Rev. C.02 | gaussian/16.C02 | cuda/12.9 |
GPUs are effective for DFT calculations on larger molecules, for both ground and excited states. However, they are not effective for smaller jobs or for post-SCF calculations such as MP2 or CCSD.
Besides your Gaussian input file, you have to prepare a job script that defines the compute resources for the job; both the input file and the job script have to be in the same directory.
Default runtime files (.rwf, .inp, .d2e, .int, .skr) are stored only temporarily in $LOCAL_TMPDIR on the compute node to which the job was scheduled. These files are removed by the scheduler when the job is done.
If you wish to restart your calculation after a job has finished (successfully or not), define a checkpoint file in your G16 input file (%Chk=path/to/name.chk).
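A minimal sketch, with assumed file names and resource sizes, of a Link 0 header that keeps the checkpoint file in the job's working directory so the calculation can be restarted later:

```bash
# Sketch of a restartable input header (names and sizes are placeholders).
# With a relative path, the checkpoint file ends up in the job's working
# directory, which persists after the job, unlike the runtime files kept
# in $LOCAL_TMPDIR on the compute node.
cat > filename.com << 'EOF'
%Chk=filename.chk
%mem=16GB
%nprocs=16
EOF
# Route section, title line, charge/multiplicity and geometry follow as usual.
```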
Because only the "shared-memory multiprocessor" parallel version is supported, your jobs can use only one node and a maximum of 96 cores per node.
```bash
#!/bin/bash
#SBATCH --time=12:00:00          # expected run time (hh:mm:ss)
#SBATCH --partition=cpu-clx:ssd  # compute nodes with local SSD storage
#SBATCH --mem=16G                # memory, roughly 2 times the %mem defined in the filename.com input file
#SBATCH --cpus-per-task=16       # number of CPUs, same amount as defined by %nprocs in the filename.com input file

module load gaussian/16.C02

g16 filename.com                 # g16 command, input: filename.com
```
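If the script above is saved as, say, g16_job.slurm (the name is arbitrary) in the same directory as filename.com, it is submitted and monitored with the usual Slurm commands:

```bash
sbatch g16_job.slurm   # submit the Gaussian batch job from the input's directory
squeue -u $USER        # check the job's state in the queue
```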
Because only the "shared-memory multiprocessor" parallel version is supported, your jobs can use only one node and up to 4 GPUs per node.
```bash
#!/bin/bash
#SBATCH --time=12:00:00       # expected run time (hh:mm:ss)
#SBATCH --partition=gpu-a100  # compute nodes with A100 GPUs
#SBATCH --nodes=1             # number of compute nodes
#SBATCH --mem=32G             # memory, roughly 2 times the %mem defined in the filename.com input file
#SBATCH --ntasks=32           # number of CPUs including the control CPUs, same amount as defined by %CPU in the filename.com input file
#SBATCH --gres=gpu:4          # number of GPUs, same amount as defined by %GPUCPU in the filename.com input file

module load cuda/12.9
module load gaussian/16.C02

g16 filename.com              # g16 command, input: filename.com
```
The %GPUCPU Link 0 command specifies which GPUs and associated controlling CPUs to use for a calculation. This command takes one parameter:
```
%GPUCPU=gpu-list=control-cpus
```
For example, a job using 2 GPUs and 2 control CPU cores would use the following Link 0 commands:
```
%CPU=0-1 #Control CPUs are included in this list.
%GPUCPU=0,1=0,1
```
Using 4 GPUs and 4 control CPU cores:
```
%CPU=0-3 #Control CPUs are included in this list.
%GPUCPU=0,1,2,3=0,1,2,3
```
Using 4 GPUs and a total of 32 CPU cores, including 4 control CPU cores:
```
%CPU=0-31 #Control CPUs are included in this list.
%GPUCPU=0,1,2,3=0,1,2,3
```
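Putting this together with the GPU job script above, the Link 0 header of the corresponding input could look like the following sketch (file and checkpoint names plus the memory size are placeholders); %CPU and %GPUCPU have to stay consistent with the --ntasks and --gres=gpu settings of the job script:

```bash
# Sketch of a Link 0 header matching the gpu-a100 job script above
# (file and checkpoint names plus the memory size are placeholders).
cat > filename.com << 'EOF'
%Chk=filename.chk
%mem=16GB
%CPU=0-31
%GPUCPU=0,1,2,3=0,1,2,3
EOF
# The rest of the input (route section, title, charge/multiplicity, geometry) follows below.
```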
Example for CPU calculations:
```bash
~ $ salloc -t 00:10:00 -p cpu-clx:ssd -N1 --tasks-per-node 24
~ $ module load gaussian/16.C02
~ $ g16 filename.com
```
Example for GPU calculations:
```bash
~ $ salloc -t 00:10:00 -p gpu-a100 -N1 --ntasks=32 --gres=gpu:4
~ $ module load cuda/12.9
~ $ module load gaussian/16.C02
~ $ g16 filename.com
```
opt=restart
Molecular geometry optimization jobs can be restarted from a checkpoint file. All existing information (basis sets, wavefunction, and the molecular structures from the geometry optimization) can be read from the checkpoint file.
```
%chk=filename.chk
%mem=16GB
%nprocs=16
# method chkbasis guess=read geom=allcheck opt=restart
```
#restart
A restart can be done in the same way for vibrational frequency computations.
```
%chk=filename.chk
%mem=16GB
%nprocs=16
# restart
```
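A restart only works if the checkpoint file written by the previous run is available to the new job. A minimal sketch of resubmitting, assuming hypothetical names restart.com for the input above and g16_restart.slurm for a job script that runs it:

```bash
# Resubmit a restart job (all names are placeholders): filename.chk must be
# the checkpoint file referenced by %chk in restart.com.
ls -l filename.chk        # verify the checkpoint file is present in the job directory
sbatch g16_restart.slurm  # job script that loads gaussian/16.C02 and runs: g16 restart.com
```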
Example for CPU calculations: water.com
Example for GPU calculations: DeOxyThymidine.com