
    Fluent

    July 03, 2024

    Fluent is a general-purpose computational fluid dynamics solver based on the cell-centered finite-volume method (FVM). GPUs are supported.

    General Information

    To obtain and check out a product license, please read Ansys Suite first.
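
    The job scripts below request an Ansys license token from Slurm via -L ansys. If you want to see whether such license counters are configured (this assumes licenses are tracked by Slurm on this system), you can list them with:

    scontrol show licenses    # lists Slurm-managed license counters, e.g. ansys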

    Documentation and Tutorials

    Besides the official documentation and tutorials (see Ansys Suite), an alternative source is: https://cfd.ninja/tutorials
    The official documentation also includes, for example, the full list of text commands used to write journal files: /sw/eng/ansys_inc/v231/doc_manuals/v231/Ansys_Fluent_Text_Command_List.pdf
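
    For orientation, a minimal journal (TUI) file could look like the following sketch (file names are placeholders); the commands are the same ones used in the job scripts below:

    ; minimal Fluent journal (TUI) file - sketch only, file names are placeholders
    file/read-case my_case.cas.h5
    solve/initialize/initialize-flow
    solve/iterate 50
    file/write-case-data my_result
    exit
    yes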

    Example Jobscripts

    The underlying test cases are:

    • natural convection / circulation: described here, cas file: NaturalConvection_SimulationFiles.zip
    • steady nozzle flow: described in Fluent tutorial guide (2023 R1, Ch. 8) "Modeling Transient Compressible Flow", cas file: nozzle_gpu_supported.cas.h5
    Convection - 2 CPU nodes, each with 96 cores (IntelMPI)
    #!/bin/bash
    #SBATCH -t 00:10:00
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=96
    #SBATCH -L ansys
    #SBATCH -p standard96:test
    #SBATCH --mail-type=ALL
    #SBATCH --output="cavity.log.%j"
    #SBATCH --job-name=cavity_on_cpu
     
    module load ansys/2023r2
    srun hostname -s > hostfile
    echo "Running on nodes: ${SLURM_JOB_NODELIST}"
     
    fluent 2d -g -t${SLURM_NTASKS} -ssh  -mpi=intel -pib -cnf=hostfile << EOFluentInput >cavity.out.$SLURM_JOB_ID
          ; this is an Ansys journal file aka text user interface (TUI) file
          file/read-case initial_run.cas.h5
          parallel/partition/method/cartesian-axes 2
          file/auto-save/append-file-name time-step 6
          file/auto-save/case-frequency if-case-is-modified
          file/auto-save/data-frequency 10
          file/auto-save/retain-most-recent-files yes
          solve/initialize/initialize-flow
          solve/iterate 100
          exit
          yes
    EOFluentInput
     
    echo '#################### Fluent finished ############'
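    If, for example, the script above is saved as cavity_job.sh (the file name is just a placeholder), it can be submitted and monitored with the usual Slurm commands:

    sbatch cavity_job.sh          # submit the job; prints the job ID
    squeue -u $USER               # check the queue status of your jobs
    tail -f cavity.log.<jobid>    # follow the output file defined via --output above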
    Nozzle flow - 1 GPU node with 1 host CPU and 1 GPU (new native GPU mode, OpenMPI)
    #!/bin/bash
    #SBATCH -t 00:59:00
    #SBATCH --nodes=1
    #SBATCH --partition=gpu-a100:shared ### on GPU-cluster of NHR@ZIB
    #SBATCH --ntasks-per-node=1
    #SBATCH --gres=gpu:1             # number of GPUs per node - ignored if exclusive partition with 4 GPUs
    #SBATCH --gpu-bind=single:1      # bind each process to its own GPU (single:<tasks_per_gpu>)
    #SBATCH -L ansys
    #SBATCH --output="slurm-log.%j"
    
    module add gcc openmpi/gcc.11 ansys/2023r2_mlx_openmpiCUDAaware # external OpenMPI is CUDA-aware
    hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')   # build "host:ncores," list for -cnf=
    echo "Running on nodes: $hostlist"
    
    cat <<EOF >tui_input.jou
    file/read-cas nozzle_gpu_supported.cas.h5
    solve/initialize/hyb-initialization
    solve/iterate 100 yes
    file/write-case-data outputfile1
    file/export cgns outputfile2 full-domain yes yes
    pressure temperature x-velocity y-velocity mach-number
    quit
    exit
    EOF
    
    fluent 3ddp -g -cnf=$hostlist -t${SLURM_NTASKS} -gpu -nm -i tui_input.jou \
           -mpi=openmpi -pib -mpiopt="--report-bindings --rank-by core" >/dev/null 2>&1
    echo '#################### Fluent finished ############'
    Convection - 2 GPU nodes, each with 4 CPUs/GPUs (old gpgpu mode, OpenMPI)
    #!/bin/bash
    #SBATCH -t 00:10:00
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=4
    #SBATCH -L ansys
    #SBATCH -p gpu-a100  ### on GPU-cluster of NHR@ZIB
    #SBATCH --output="slurm.log.%j"
    #SBATCH --job-name=cavity_on_gpu
    
    module add gcc openmpi/gcc.11 # external OpenMPI is CUDA aware
    module add ansys/2023r2_mlx_openmpiCUDAaware
    
    hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
    echo "Running on nodes: $hostlist"
    
    cat <<EOF >fluent.jou
    ; this is an Ansys journal file aka text user interface (TUI) file
    parallel/gpgpu/show
    file/read-case initial_run.cas.h5
    solve/set/flux-type yes
    solve/iterate 100
    file/write-case-data outputfile
    ok
    exit
    EOF
    
    fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=openmpi -pib -cnf=$hostlist -i fluent.jou  >/dev/null 2>&1
    echo '#################### Fluent finished ############'

    Your job can be offloaded if parallel/gpgpu/show marks the selected devices with a "(*)".
    Your job was offloaded successfully if the actual solver call prints "AMG on GPGPU".
    In this case, your .trn output file contains the device list and the AmgX runtime messages, respectively.
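
    To check this quickly, you can search the transcript for these markers (the .trn file name below is a placeholder):

    grep -F '(*)' my_run.trn            # selected GPU devices, as marked by parallel/gpgpu/show
    grep -F 'AMG on GPGPU' my_run.trn   # confirms the AMG solver was offloaded to the GPUs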

    Ansys only supports certain GPU vendors/models:
    https://www.ansys.com/it-solutions/platform-support/previous-releases
    Look there for the PDF called "Graphics Cards Tested" for your version (mostly Nvidia, some AMD).

    The number of CPU cores per node (ntasks-per-node) must be an integer multiple of the number of GPUs per node (gpgpu), i.e. ntasks-per-node = Integer * GPUnr for gpgpu = GPUnr.
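
    For illustration only (a sketch based on the gpgpu job script above), 8 MPI tasks per node driving 4 GPUs per node satisfy this constraint with 2 cores per GPU:

    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8   # 8 = 2 * 4, an integer multiple of the 4 GPUs per node
    #SBATCH -p gpu-a100

    fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=openmpi -pib -cnf=$hostlist -i fluent.jou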

    Fluent GUI: set up your case on your local machine

    Unfortunately, the case setup is only really convenient in the Fluent GUI. Therefore, we recommend doing all necessary GUI interactions on your local machine beforehand. As soon as the case setup is complete (geometry, materials, boundaries, solver method, etc.), save it as a *.cas file. After copying the *.cas file to the working directory on the supercomputer, this prepared case (including the geometry) only needs to be read [file/read-case], initialized [solve/initialize/initialize-flow], and finally executed [solve/iterate]. Examples of such *.jou (TUI) files can be found in the job scripts above.
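
    The prepared case file can then be copied over with scp, for example (user name, host name, and paths are placeholders):

    scp my_setup.cas.h5 myuser@<login-node>:/path/to/workdir/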

    If you cannot set up your case input files (*.cas) by other means, you may start a Fluent GUI on our compute nodes as a last resort.
    But be warned: to keep the OS images on the compute nodes small and fast, only a minimal set of graphics drivers/libraries is installed, and X-window interactions involve high latency.

    Interactive Fluent GUI run (not recommended for supercomputer use)
    srun -N 1 -p standard96:test -L ansys --x11 --pty bash
    
    # wait for node allocation, then run the following on the compute node 
    
    export XDG_RUNTIME_DIR=$TMPDIR/$(basename $XDG_RUNTIME_DIR); mkdir -p $XDG_RUNTIME_DIR
    module add ansys/2023r1
    fluent &