Excerpt
A Package for Computational Fluid Dynamics Simulations

General Information

The ANSYS software package is developed and distributed by ANSYS, Inc.

Info

This documentation describes the specifics of installation and usage of ANSYS at HLRN. Introductory courses for ANSYS as well as courses for special topics are offered by ANSYS Inc. and their regional offices, e.g. in Germany. It is strongly recommended to take at least an introductory course (see the CAD-FEM GmbH homepage).

Details of the HLRN Installation of ANSYS

The ANSYS versions currently installed are

...

Info

The module name is ansys. Other versions of ANSYS may be installed. Inspect the output of module avail ansys.
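For example, to list the available versions and load one (the default version shown by module avail may differ on your system):

Codeblock: Listing and loading Ansys modules
module avail ansys          # list all installed Ansys versions
module load ansys           # load the default version
module list                 # verify which modules are loaded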

Example Jobscripts

Warning: Licences

Important: Always add
#SBATCH -L ansys
to your job script.

This allows the batch system to start the job only when the appropriate number of licenses is available.
You can check the availability yourself: scontrol show lic
aa_t_a is an "ANSYS Academic Teaching License" with a maximum of 16 tasks.
aa_r is an "ANSYS Academic Research License" with 16 tasks included. Research jobs with more than 16 tasks consume additional "aa_r_hpc" licenses.
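As a rule of thumb, size the -L request to the total task count of the job, as in the CPU example below. A minimal sketch, with placeholder numbers:

Codeblock: Sizing the license request
NODES=2
TASKS_PER_NODE=96
# one license token per task: (nodes)*(tasks-per-node)
echo "#SBATCH -L ansys:$(( NODES * TASKS_PER_NODE ))"
scontrol show lic   # check current availability before submitting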

The use of Ansys is restricted to members of the ansys user group. You can apply to become a group member at support[at]hlrn.de.
Please note: Our licenses are restricted to students, PhD students, teachers and trainers of public institutions. They cannot be used in projects that are financed by industrial partners.

Fluent

Excerpt
General computational fluid dynamics solver (cell-centered FVM). GPUs are supported.

General Information

Info

To obtain and check out a product license, please read Ansys Suite first.

Documentation and Tutorials

Info
Besides the official documentation and tutorials (see Ansys Suite), an alternative source is: https://cfd.ninja/tutorials
As part of the official documentation you will find, for example, all text commands for writing journal files: /sw/eng/ansys_inc/v231/doc_manuals/v231/Ansys_Fluent_Text_Command_List.pdf

Example Jobscripts

The underlying test cases are:

Codeblock: Convection - 2 CPU-nodes each with 96 cores (IntelMPI)
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH -L ansys:192  ### Important: match number to (nodes)*(tasks-per-node)
#SBATCH -p standard96:test
#SBATCH --mail-type=ALL
#SBATCH --output="cavity.log.%j"
#SBATCH --job-name=cavity_on_cpu
 
module load ansys/2023r2
srun hostname -s > hostfile
echo "Running on nodes: ${SLURM_JOB_NODELIST}"
 
fluent 2d -g -t${SLURM_NTASKS} -ssh -mpi=intel -pib -cnf=hostfile << EOFluentInput >cavity.out.$SLURM_JOB_ID
      ; this is an Ansys journal file aka text user interface (TUI) file
      file/read-case initial_run.cas.h5
      parallel/partition/method/cartesian-axes 2
      file/auto-save/append-file-name time-step 6
      file/auto-save/case-frequency if-case-is-modified
      file/auto-save/data-frequency 10
      file/auto-save/retain-most-recent-files yes
      solve/initialize/initialize-flow
      solve/iterate 100
      exit
      yes
EOFluentInput
 
echo '#################### Fluent finished ############'


Codeblock: Nozzle flow - 1 GPU-node with 1 host CPU and 1 GPU (new GPU native mode, OpenMPI)
#!/bin/bash
#SBATCH -t 00:59:00
#SBATCH --nodes=1
#SBATCH --partition=gpu-a100:shared ### on GPU-cluster of NHR@ZIB
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1             # number of GPUs per node - ignored if exclusive partition with 4 GPUs
#SBATCH --gpu-bind=single:1      # bind each process to its own GPU (single:<tasks_per_gpu>)
#SBATCH -L ansys
#SBATCH --output="slurm-log.%j"

module add gcc openmpi/gcc.11 ansys/2023r2_mlx_openmpiCUDAaware # external OpenMPI is CUDA-aware
hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
echo "Running on nodes: $hostlist"

cat <<EOF >tui_input.jou
file/read-cas nozzle_gpu_supported.cas.h5
solve/initialize/hyb-initialization
solve/iterate 100 yes
file/write-case-data outputfile1
file/export cgns outputfile2 full-domain yes yes
pressure temperature x-velocity y-velocity mach-number
quit
exit
EOF

fluent 3ddp -g -cnf=$hostlist -t${SLURM_NTASKS} -gpu -nm -i tui_input.jou \
       -mpi=openmpi -pib -mpiopt="--report-bindings --rank-by core" >/dev/null 2>&1
echo '#################### Fluent finished ############'


Codeblock: Convection - 2 GPU-nodes each with 4 CPUs/GPUs (old gpgpu mode, OpenMPI)
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH -L ansys
#SBATCH -p gpu-a100  ### on GPU-cluster of NHR@ZIB
#SBATCH --output="slurm.log.%j"
#SBATCH --job-name=cavity_on_gpu

module add gcc openmpi/gcc.11 # external OpenMPI is CUDA aware
module add ansys/2023r2_mlx_openmpiCUDAaware

hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
echo "Running on nodes: $hostlist"

cat <<EOF >fluent.jou
; this is an Ansys journal file aka text user interface (TUI) file
parallel/gpgpu/show
file/read-case initial_run.cas.h5
solve/set/flux-type yes
solve/iterate 100
file/write-case-data outputfile
ok
exit
EOF

fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=openmpi -pib -cnf=$hostlist -i fluent.jou  >/dev/null 2>&1
echo '#################### Fluent finished ############'

Your job can be offloaded if parallel/gpgpu/show marks the selected devices with a "(*)".
Your job was offloaded successfully if the actual call of your solver prints "AMG on GPGPU".
In this case, your .trn output file contains the device list and the AMGX runtime messages, respectively.
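A quick way to verify this after the run is to search the transcript; a minimal sketch (transcript file names depend on your setup):

Codeblock: Checking the transcript for GPU offloading
grep -F "(*)" *.trn           # devices selected for offloading
grep "AMG on GPGPU" *.trn     # solver actually ran on the GPUs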

Info

Ansys only supports certain GPU vendors/models:
https://www.ansys.com/it-solutions/platform-support/previous-releases
Look for the PDF called "Graphics Cards Tested" for your version (most Nvidia, some AMD models).


Info
The number of CPU cores per node (ntasks-per-node) must be an integer multiple of the number of GPUs per node (-gpgpu=GPUnr), i.e. ntasks-per-node = Integer * GPUnr.
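If in doubt, a sanity check like the following can be added to the job script (variable values are placeholders):

Codeblock: Checking the tasks-per-GPU ratio
TASKS_PER_NODE=4
GPUS_PER_NODE=4
# ntasks-per-node must be an integer multiple of the GPUs per node
if (( TASKS_PER_NODE % GPUS_PER_NODE != 0 )); then
    echo "Error: ntasks-per-node is not a multiple of -gpgpu" >&2
    exit 1
fi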

Fluent GUI: setting up your case on your local machine

Unfortunately, case setup is most convenient in the Fluent GUI. We therefore recommend doing all necessary GUI interactions on your local machine beforehand. As soon as the case setup is complete (geometry, materials, boundaries, solver method, etc.), save it as a *.cas file. After copying the *.cas file to the working directory of the supercomputer, this prepared case (incl. the geometry) only needs to be read [file/read-case], initialized [solve/initialize/initialize-flow], and finally executed [solve/iterate]. Examples of such *.jou (TUI) files can be found in the job scripts above; a minimal standalone sketch follows below.
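A minimal sketch of such a journal file, assuming the prepared case was copied to the working directory as prepared_case.cas (placeholder name):

Codeblock: Minimal journal file for a prepared case
; read the case prepared in the local GUI, initialize, iterate, quit
file/read-case prepared_case.cas
solve/initialize/initialize-flow
solve/iterate 100
exit
yes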

If you cannot set up your case input files (*.cas) by other means, you may start a Fluent GUI on our compute nodes as a last resort.
But be warned: to keep the OS images on the compute nodes fast and small, only a minimal set of graphics drivers/libraries is installed, and X-window interactions involve high latency.

Codeblock: Interactive Fluent GUI run (not recommended for supercomputer use)
srun -N 1 -p standard96:test -L ansys --x11 --pty bash

# wait for node allocation, then run the following on the compute node 

export XDG_RUNTIME_DIR=$TMPDIR/$(basename $XDG_RUNTIME_DIR); mkdir -p $XDG_RUNTIME_DIR
module add ansys/2023r1
fluent &