
    Containerized VScode server (old)

    Apr. 28, 2025

    Connecting to Singularity/Apptainer Containers

    The following is a guide on how to actively develop inside a Singularity/Apptainer container. See Singularity/Apptainer for more information on the singularity module. Singularity and Apptainer are largely compatible with each other; in fact, you run the container with the singularity command regardless of which module you use.

    module load SINGULARITY_OR_APPTAINER/VERSION


    This guide was kindly contributed by our GPU users Anwai Archit and Arne Nix. It has been lightly edited to fit the format of this page and to fix a few typos. Anywhere you see "singularity", you can replace it with "apptainer" if you use the apptainer module instead. The guide was written for the Grete GPU nodes of Emmy, but it translates easily to other partitions/clusters (see /wiki/spaces/PUB/pages/428683 for more information). Adjust directory names, file names, and the SIF container file to match your own user name and setup.
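    As a quick check that the loaded module behaves as described above, you can print the version of the command it provides; the module name and version below are only placeholders for whatever your site actually offers:

    module load apptainer      # or: module load singularity, depending on which module your site provides
    singularity --version      # both modules provide the singularity command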

    Starting a Singularity Container

    First, we need to set up a Singularity container and submit it to run on a GPU node. For me, this is done with the following SBATCH script:

    #!/bin/bash
    #SBATCH --job-name=anix_dev-0               # Name of the job
    #SBATCH --ntasks=1                          # Number of tasks
    #SBATCH --cpus-per-task=2                   # Number of CPU cores per task
    #SBATCH --nodes=1                           # Ensure that all cores are on one machine
    #SBATCH --time=0-01:00                      # Runtime in D-HH:MM
    #SBATCH --mem-per-cpu=3000                  # Memory per CPU core in MB (see also --mem)
    #SBATCH --output=logs/anix_dev-0.%j.out     # File to which STDOUT will be written
    #SBATCH --error=logs/anix_dev-0.%j.err      # File to which STDERR will be written
    #SBATCH --mail-type=ALL                     # Type of email notification- BEGIN,END,FAIL,ALL
    #SBATCH --mail-user=None                    # Email to which notifications will be sent
    #SBATCH -p gpu                              # Partition to submit to
    #SBATCH -G 1                                # Number of requested GPUs
    
    
    export SRCDIR=$HOME/src
    export WORKDIR=$LOCAL_TMPDIR/$USER/$SLURM_JOB_ID
    mkdir -p $WORKDIR
    mkdir -p $WORKDIR/tmp_home
    
    module load singularity
    module load cuda/11.2
    scontrol show job $SLURM_JOB_ID  # print some info
    
    singularity instance start --nv --env-file xaug/.env --no-home --bind  $WORKDIR/tmp_home:$HOME,$HOME/.vscode-server:$HOME/.vscode-server,$SRCDIR:/src,$WORKDIR:/work xaug_image.sif anix_dev-0 /src/xaug/run_dev.sh
    sleep infinity


    Four things are important here:

    1. We need to load cuda and singularity to make them available to our container.
    2. We need to bind $HOME/.vscode-server to the same place in the container.
    3. We need to remember the name of our container instance. In this case: anix_dev-0
    4. We need to keep the script running in order not to lose the node. This is achieved by sleep infinity.
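    Once the script is saved, submit it with sbatch and note which node the job lands on; that node name is what the SSH configuration below refers to. A minimal sketch (the script filename is only an assumption, use whatever you called your batch script):

    sbatch start_container.sbatch   # submit the job script shown above (filename is an example)
    squeue -u $USER                 # the NODELIST column shows the node your job runs on, e.g. ggpu02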

    SSH Config to Connect to the Container

    We want to connect to the container via SSH. For this, set up the following configuration in ~/.ssh/config on your local machine.

    ~/.ssh/config
    Host hlrn
        User <your_username>
        HostName glogin.hlrn.de
        IdentityFile ~/.ssh/<your_key>
    
    Host hlrn-*
        User <your_username>
        IdentityFile ~/.ssh/<your_key>
        Port 22
        ProxyCommand ssh $(echo %h | cut -d- -f1) nc $(echo %h | cut -d- -f2) %p
    
    Host hlrn-*-singularity
        User <your_username>
        IdentityFile ~/.ssh/<your_key>
        RequestTTY force
        Port 22
        ProxyCommand ssh $(echo %h | cut -d- -f1) nc $(echo %h | cut -d- -f2) %p
        RemoteCommand module load singularity; singularity shell --env HTTP_PROXY="http://www-cache.gwdg.de:3128",HTTPS_PROXY="http://www-cache.gwdg.de:3128" instance://<container_name>

    This enables three different connections from your local machine:

    1. Connection to the login node: ssh hlrn 
    2. Connection to a compute node that we obtained through the scheduler, e.g. ssh hlrn-ggpu02 
    3. Connection to the singularity container running on a compute node, e.g. ssh hlrn-ggpu02-singularity 
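    As a quick sanity check, you can try the third variant directly from a terminal on your local machine once the job is running; the node name below is only an example, use the one reported by squeue for your job:

    ssh hlrn-ggpu02-singularity
    # After entering the passphrase of your key you should land in a shell inside
    # the container; the prompt typically changes to something like "Singularity>".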

    Connecting VS Code to the Container

    This mostly follows the tutorial here (VS Code Intellisense Inside an Apptainer Container.pdf). Then add the following lines to your VS Code settings.json:

        "remote.SSH.enableRemoteCommand": true,
        "remote.SSH.useLocalServer": true,


    Now remote connections should be possible. Before we can connect to the individual cluster nodes, we first need to initialize the vscode-server on the login nodes. For this we press Ctrl+Shift+P, enter Remote-SSH: Connect to Host and select hlrn. This should (after typing in the passphrase of your private key) connect our VS Code to the login node; at the same time, the vscode-server is installed in your home directory on the cluster. Additionally, you should go into the extensions view and install all extensions (e.g. Python) that you need on the cluster. These two steps cannot be done on the compute nodes, so it is important to do them on the login node beforehand. Finally, we can close the connection to the login node and connect to the compute node that the singularity container is running on. This works in the same way as the connection to the login node, but instead of hlrn, we select hlrn-<your_node>-singularity.
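    After connecting, you can open an integrated terminal in VS Code to verify that you are inside the container and that the requested GPU is visible; this assumes the container was started with --nv as in the script above:

    nvidia-smi                 # should list the GPU requested with -G 1
    echo $SINGULARITY_NAME     # set inside a Singularity container; Apptainer sets APPTAINER_NAME instead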


