
Gaussian is a computational chemistry application provided by Gaussian, Inc.

License agreement

...

Please contact support with a copy of the following statement to have your user ID added to the Gaussian UNIX group.

Limitations

Gaussian 16 is available at NHR@ZIB. 

"Linda parallelism" (cluster/network parallel execution of Gaussian) is not supported on any of our systems. Only "shared-memory multiprocessor" parallel execution is supported; therefore, no Gaussian job can use more than a single compute node.

Description

Gaussian 16 is the latest in the Gaussian series of programs. It provides state-of-the-art capabilities for electronic structure modeling. 

...

Modules for running on CPUs:

Version                  Installation Path                     modulefile
Gaussian 16 Rev. C.02    /sw/chem/gaussian/g16_C02/skl/g16     gaussian/16.C02

Modules for running on GPUs:

Version                  Installation Path                     modulefile
Gaussian 16 Rev. C.02    /sw/chem/gaussian/g16_C02/a100/g16    gaussian/16.C02

GPU Job Performance

GPUs are effective for DFT calculations, for both the ground and excited states for larger molecules. However, they are not effective for smaller jobs or for use in post-SCF calculations such as MP2 or CCSD.

Job submissions

Besides your Gaussian input file, you have to prepare a job script to define the compute resources for the job; both the input file and the job script have to be in the same directory.

Default runtime files (.rwf, .inp, .d2e, .int, .skr files) will be saved only temporarily in $LOCAL_TMPDIR on the compute node to which the job was scheduled. The files will be removed by the scheduler when the job is done.

If you wish to restart your calculations after a job has finished (successfully or not), please define the checkpoint file (filename.chk) in your G16 input file (%Chk=path/name.chk).
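As an illustration, a minimal input header that writes such a checkpoint file might look like this (method, basis set, and file names are placeholders, following the restart examples further below):

```text
%chk=filename.chk
%mem=8GB
%nprocs=16
# method basis opt
```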

CPU jobs

Because only the "shared-memory multiprocessor" parallel version is supported, your jobs can use only one node and up to 96 cores per node.

CPU job script example

CPU_submit (bash):
#!/bin/bash
#SBATCH --time=12:00:00 		       # expected run time (hh:mm:ss)
#SBATCH --partition=standard96:ssd     # Compute Nodes with installed local SSD storage
#SBATCH --mem=16G                      # memory, roughly 2 times %mem defined in the input name.com file
#SBATCH --cpus-per-task=16             # No. of CPUs, same amount as defined by %nprocs in the filename.com input file

module load gaussian/16.C02 

g16 filename.com                       # g16 command, input: filename.com
 


GPU jobs

Because only the "shared-memory multiprocessor" parallel version is supported, your jobs can use only one node and up to 4 GPUs per node.

GPU job script (bash):
#!/bin/bash 
#SBATCH --time=12:00:00 		       # expected run time (hh:mm:ss)
#SBATCH --partition=gpu-a100           # Compute Nodes with NVIDIA A100 GPUs
#SBATCH --nodes=1                      # number of compute nodes
#SBATCH --mem=32G                      # memory, roughly 2 times %mem defined in the input name.com file
#SBATCH --ntasks=32                    # No. of CPUs, including the control CPUs; same amount as defined by %CPU in the filename.com input file
#SBATCH --gres=gpu:4                   # No. of GPUs, same amount as defined by %GPUCPU in the filename.com input file

module load cuda/11.8
module load gaussian/16.C02   

g16 filename.com                       # g16 command, input: filename.com 
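The SLURM resources requested above have to be consistent with the Link 0 section of the Gaussian input file. A sketch of a matching input header for 4 GPUs and 32 CPU cores (method, basis set, and file names are placeholders; see "Specifying GPUs & Control CPUs for a Gaussian Job"):

```text
%chk=filename.chk
%mem=16GB
%CPU=0-31
%GPUCPU=0,1,2,3=0,1,2,3
# method basis opt
```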

Specifying GPUs & Control CPUs for a Gaussian Job

The %GPUCPU Link 0 command specifies which GPUs and their controlling CPUs to use for a calculation. This command takes one parameter:

%GPUCPU=gpu-list=control-cpus

For example, a job using 2 GPUs and 2 control CPU cores would use the following Link 0 commands:

%CPU=0-1                                                               #Control CPUs are included in this list.

%GPUCPU=0,1=0,1

Using 4 GPUs and 4 control CPU cores:

%CPU=0-3                                                               #Control CPUs are included in this list.

%GPUCPU=0,1,2,3=0,1,2,3

Using 4 GPUs and a total of 32 CPU cores, including 4 control CPU cores:

%CPU=0-31                                                               #Control CPUs are included in this list.

%GPUCPU=0,1,2,3=0,1,2,3
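The numbering pattern above can be sketched as a small helper. This is an illustrative script (not part of Gaussian or SLURM), assuming GPU i is controlled by CPU core i and the control cores are included in the overall %CPU list, as in the examples above:

```python
def link0_gpu_lines(n_gpus: int, total_cores: int) -> list[str]:
    """Build the %CPU and %GPUCPU Link 0 lines for a shared-memory GPU job.

    Assumes GPU i is controlled by CPU core i, and that the control cores
    are included in the overall %CPU list.
    """
    if not 0 < n_gpus <= total_cores:
        raise ValueError("each GPU needs one control CPU core")
    ids = ",".join(str(i) for i in range(n_gpus))  # e.g. "0,1,2,3" for 4 GPUs
    return [f"%CPU=0-{total_cores - 1}", f"%GPUCPU={ids}={ids}"]

# prints the two Link 0 lines for 4 GPUs and 32 cores:
# %CPU=0-31
# %GPUCPU=0,1,2,3=0,1,2,3
print("\n".join(link0_gpu_lines(4, 32)))
```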

Interactive jobs

Example for CPU calculations:

~ $ salloc -t 00:10:00 -p standard96:ssd  -N1 --tasks-per-node 24

~ $ g16 filename.com 

Example for GPU calculations:

~ $ salloc -t 00:10:00 -p gpu-a100 -N1 --ntasks=32 

~ $ g16 filename.com 

Restart calculations from checkpoint files 

opt=restart 

Molecular geometry optimization jobs can be restarted from a checkpoint file. All existing information (basis sets, wavefunction, and molecular structures from the geometry optimization) can be read from the checkpoint file.

restart_opt.com:
%chk=filename.chk
%mem=16GB
%nprocs=16
# method chkbasis guess=read geom=allcheck opt=restart  


The same restart can be done for vibrational frequency computations from the checkpoint file.

restart_freq.com:
%chk=filename.chk
%mem=16GB
%nprocs=16
# restart  

...

Example for CPU calculations: water.com

Example for GPU calculations: DeOxyThymidine.com