A Finite Element Analysis Package for Engineering Applications
Details of the HLRN Installation of ABAQUS
The ABAQUS versions currently installed are:
- ABAQUS 2020
- ABAQUS 2019 (default)
- ABAQUS 2018 (first version with multi-node support)
- ABAQUS 2017
- ABAQUS 2016 (last version including Abaqus/CFD)
The module name is abaqus. Other versions may be installed; inspect the output of:

```
module avail abaqus
```
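For example, to load the default version or to request a specific one (the version-specific module names follow the pattern used in the jobscripts below):

```
# load the default ABAQUS module (currently 2019)
module load abaqus

# or request a specific version explicitly
module load abaqus/2020
```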
Conditions for Usage and Licensing at HLRN
All usage of ABAQUS at HLRN is strictly limited to teaching and academic research in projects without industry funding.
Access to and usage of the software is regionally limited:
- Users from Berlin (account names "be*") are allowed to use the ZIB license.
- Users from other German states can use the software installed on HLRN, but they have to use their own license from their own license server (see How to bring your own license).
Usually, there are sufficient licenses for Abaqus/Standard and Abaqus/Explicit command-line based jobs. You can check this yourself (just in case):
```
# on NHR@ZIB systems
lmutil lmstat -S -c 1700@10.241.101.140 | grep -e "ABAQUSLM:" -e "Users of abaqus" -e "Users of parallel" -e "Users of cae"

# on NHR@GWDG systems
lmutil lmstat -S -c 1055@10.241.201.133 | grep -e "ABAQUSLM:" -e "Users of abaqus" -e "Users of parallel" -e "Users of cae"
```
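The FlexLM output reports, per feature, how many license tokens are issued and how many are in use. A line of the following form (the numbers are purely illustrative) indicates that licenses are free:

```
Users of abaqus:  (Total of 21 licenses issued;  Total of 5 licenses in use)
```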
Example Jobscripts
The input file of the test case (Large Displacement Analysis of a linear beam in a plane) is: c2.inp
Distributed Memory Parallel Processing
```
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH -p standard96:test
#SBATCH --mail-type=ALL
#SBATCH --job-name=abaqus.c2

module load abaqus/2020

# host list:
echo "SLURM_NODELIST: $SLURM_NODELIST"
create_abaqus_hostlist_for_slurm
# This command will create the file abaqus_v6.env for you.
# If abaqus_v6.env already exists in the case folder, it will append the line with the host list.

### ABAQUS parallel execution
abq2020 analysis job=c2 cpus=${SLURM_NTASKS} standard_parallel=all mp_mode=mpi interactive double

echo '#################### ABAQUS finished ############'
```
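Assuming the script above is saved as abaqus_c2.slurm (the file name is arbitrary and chosen here only for illustration), it is submitted in the usual way:

```
sbatch abaqus_c2.slurm
```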
The SLURM log is written to: slurm-<your job id>.out
The log of the solver is written to: c2.msg
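While the job is running, the solver log can be followed live, for example:

```
# follow the ABAQUS message file of the running job
tail -f c2.msg
```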
The small number of elements in this example does not permit the use of 2x96 cores; hence, 2x48 cores are utilized here. Typically, however, if there is sufficient memory per core, we recommend using all physical cores per node (in the case of standard96: #SBATCH --ntasks-per-node=96). Please refer to Compute node partitions to see the number of cores on your selected partition and machine (Lise, Emmy).
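The per-node core count of a partition can also be queried directly on the system; a minimal sketch using standard sinfo format options:

```
# partition name, node count, and CPUs per node
sinfo -p standard96:test -o "%P %D %c"
```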
Single Node Processing
```
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=1   ## 2016 and 2017 do not run on more than one node
#SBATCH --ntasks-per-node=96
#SBATCH -p standard96:test
#SBATCH --job-name=abaqus.c2

module load abaqus/2016

# host list:
echo "SLURM_NODELIST: $SLURM_NODELIST"
create_abaqus_hostlist_for_slurm
# This command will create the file abaqus_v6.env for you.
# If abaqus_v6.env already exists in the case folder, it will append the line with the host list.

### ABAQUS parallel execution
abq2016 analysis job=c2 cpus=${SLURM_NTASKS} standard_parallel=all mp_mode=mpi interactive double

echo '#################### ABAQUS finished ############'
```
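On a single node, ABAQUS can alternatively use its thread-based shared-memory parallelization instead of MPI; a minimal sketch, assuming the standard mp_mode=threads option of the ABAQUS command line applies to your version:

```
# shared-memory (thread) parallelization, single node only
abq2016 analysis job=c2 cpus=${SLURM_NTASKS} standard_parallel=all mp_mode=threads interactive double
```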
Abaqus CAE GUI - not recommended for supercomputer use!
If you cannot set up your case input files (*.inp) by other means, you may start a CAE GUI as a last resort on our compute nodes.
But be warned: to keep the OS images on the compute nodes small and fast, only a minimal set of graphics drivers/libraries is installed, and X-window interactions involve high latency.
If you comply with our license terms (discussed above), you can use one of our four CAE licenses. In this case, please always add

```
#SBATCH -L cae
```

to your job script. This ensures that the SLURM scheduler starts your job only if a CAE license is available.
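Whether the cae license is currently free can also be checked on the Slurm side (assuming the license counters are configured in Slurm, which the -L option implies):

```
# show license counters known to the scheduler
scontrol show licenses
```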
```
srun -p standard96:test -L cae --x11 --pty bash

# wait for node allocation (a single node is the default),
# then run the following on the compute node
module load abaqus/2022
abaqus cae -mesa
```
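Note that --x11 only has an effect if X11 forwarding was already enabled for your login session; the account and host names below are examples and may differ for your system:

```
# log in with X11 forwarding enabled (example names)
ssh -X myaccount@blogin.hlrn.de
```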