
    Octopus

    Aug. 22, 2024

    Description

    Octopus is an ab initio program that describes electrons quantum-mechanically within density-functional theory (DFT) and, for problems involving time evolution, its time-dependent form (TDDFT). Nuclei are treated classically as point particles, and the electron-nucleus interaction is described within the pseudopotential approximation.

    Octopus is free software, released under the GPL license.

    More information about the program and its usage can be found at https://www.octopus-code.org/.

    Modules

    Octopus is currently available only on Lise. The standard pseudopotentials deployed with Octopus are located in $OCTOPUS_ROOT/share/octopus/pseudopotentials/PSF/. If you need to use a different set, please refer to the manual.
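
    For example, after loading the modules from the table below, the bundled pseudopotential set can be inspected as follows (an illustrative sketch; the module names are taken from the table, and $OCTOPUS_ROOT is the variable mentioned above):

    module load impi/2021.13 octopus/14.1
    # List the standard PSF pseudopotentials shipped with Octopus
    ls $OCTOPUS_ROOT/share/octopus/pseudopotentials/PSF/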

    Octopus version | Module files | Requirements                | Optional features supported | Compute partitions | CPU/GPU
    12.1            | octopus/12.1 | intel/2021.2, impi/2021.7.1 | -                           | CentOS 7           | yes / no
    14.1            | octopus/14.1 | impi/2021.13                | NetCDF                      | Rocky Linux 9      | yes / no


    Example Jobscripts

    Assuming that your input file inp is located in the directory from which you submit the job script, and that the output is written to out, an example job script is given below.

    CPU partitions with Rocky Linux 9
    #!/bin/bash 
    #SBATCH --time 12:00:00
    #SBATCH --partition cpu-clx
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=24
    #SBATCH --cpus-per-task=4
    #SBATCH --job-name=octopus 
    
    module load impi/2021.13
    module load octopus/14.1
    
    # Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
     
    # Adjust the maximum stack size of OpenMP threads
    export OMP_STACKSIZE=512m
    
    # Do not use the CPU binding provided by slurm
    export SLURM_CPU_BIND=none
     
    # Binding OpenMP threads
    export OMP_PLACES=cores
    export OMP_PROC_BIND=close
     
    # Binding MPI tasks
    export I_MPI_PIN=yes
    export I_MPI_PIN_DOMAIN=omp
    export I_MPI_PIN_CELL=core
    
    # Run Octopus, redirecting standard output to the file "out"
    mpirun octopus > out
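
    For reference, a minimal input file can be generated as in the following sketch (the classic single-hydrogen-atom ground-state example from the Octopus tutorials; a production inp will of course be more elaborate):

    # Create a minimal Octopus input file "inp": a ground-state (gs)
    # calculation for one hydrogen atom at the origin
    cat > inp <<'EOF'
    CalculationMode = gs
    %Coordinates
      "H" | 0 | 0 | 0
    %
    EOF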

    Please check carefully which parallelization strategy works best for your use case, e.g. in terms of the number of MPI processes and OpenMP threads. Note that the variables ParStates, ParDomains, and ParKPoints defined in the input file also impact parallelization performance, as sketched below.
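    As an illustration, these variables can be appended to the input file; the values below are placeholders (auto lets Octopus choose the data distribution itself) and should be tuned for your system:

    # Append parallelization settings to the Octopus input file "inp"
    cat >> inp <<'EOF'
    ParStates = auto
    ParDomains = auto
    ParKPoints = auto
    EOF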

    A similar example for the CPU partitions running CentOS 7 is given below.

    CPU partitions with CentOS 7
    #!/bin/bash 
    #SBATCH --time 12:00:00
    #SBATCH --partition standard96
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=24
    #SBATCH --cpus-per-task=4
    #SBATCH --job-name=octopus 
    
    module load intel/2021.2 impi/2021.7.1 octopus/12.1
    
    # Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
     
    # Adjust the maximum stack size of OpenMP threads
    export OMP_STACKSIZE=512m
    
    # Do not use the CPU binding provided by slurm
    export SLURM_CPU_BIND=none
     
    # Binding OpenMP threads
    export OMP_PLACES=cores
    export OMP_PROC_BIND=close
     
    # Binding MPI tasks
    export I_MPI_PIN=yes
    export I_MPI_PIN_DOMAIN=omp
    export I_MPI_PIN_CELL=core
    
    # Run Octopus, redirecting standard output to the file "out"
    mpirun octopus > out
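
    Either job script can then be submitted and monitored with the standard Slurm commands, for example (the file name octopus.slurm is only a placeholder):

    # Submit the job script and check its status in the queue
    sbatch octopus.slurm
    squeue -u $USER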