VS Code
VS Code is an IDE that is not provided on the clusters, but many users run it on their own machines and use it to connect to the clusters.
Description
Visual Studio Code (VS Code) and its fully free-software build VSCodium (related to VS Code roughly as Chromium is to Chrome) are commonly used IDEs. Many users run them on their own machines for development and use their ability to SSH into other machines for remote operation, editing, and debugging. While neither is provided on the clusters, many users edit, run, and debug their code on the clusters via VS Code or VSCodium running on their own machines. This page points out how to do certain things and how to avoid certain pitfalls.
Modules
None. Users run it on their own machines.
Connecting to Singularity/Apptainer Containers
The following is a guide on how to actively develop in a Singularity/Apptainer container. See Singularity/Apptainer for more information on the singularity module. Singularity and Apptainer are largely compatible with each other, and the container is run with the singularity command regardless of which module you load:
module load SINGULARITY_OR_APPTAINER/VERSION
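As a rough example, assuming the module on your cluster is simply named apptainer (the exact name and version will vary, so check first), the steps might look like this:
module avail                  # check which Singularity/Apptainer modules and versions exist
module load apptainer         # assumed module name; use the name/version that module avail shows
singularity --version         # the singularity command is available regardless of which module was loaded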
This guide was contributed by our GPU users Anwai Archit and Arne Nix, who kindly provided this documentation. It has been lightly edited to fit the format of this page and to fix a few typos. Anywhere you see "singularity", you can replace it with "apptainer" if you use the apptainer module instead. It was written for the Grete GPU nodes of Emmy, but can easily be translated to other partitions/clusters (see /wiki/spaces/PUB/pages/428683 for more information). Obviously, rename any directories and files as appropriate for your user name, the SIF container file you use, and the names of your own files and directories.
Starting a Singularity Container
First we need to set up a singularity container and submit it to run on a GPU node. For me, this is done by the following SBATCH script:
#!/bin/bash
#SBATCH --job-name=anix_dev-0             # Name of the job
#SBATCH --ntasks=1                        # Number of tasks
#SBATCH --cpus-per-task=2                 # Number of CPU cores per task
#SBATCH --nodes=1                         # Ensure that all cores are on one machine
#SBATCH --time=0-01:00                    # Runtime in D-HH:MM
#SBATCH --mem-per-cpu=3000                # Memory per CPU core in MB (see also --mem)
#SBATCH --output=logs/anix_dev-0.%j.out   # File to which STDOUT will be written
#SBATCH --error=logs/anix_dev-0.%j.err    # File to which STDERR will be written
#SBATCH --mail-type=ALL                   # Type of email notification: BEGIN,END,FAIL,ALL
#SBATCH --mail-user=None                  # Email to which notifications will be sent
#SBATCH -p gpu                            # Partition to submit to
#SBATCH -G 1                              # Number of requested GPUs

export SRCDIR=$HOME/src
export WORKDIR=$LOCAL_TMPDIR/$USER/$SLURM_JOB_ID
mkdir -p $WORKDIR
mkdir -p $WORKDIR/tmp_home

module load singularity
module load cuda/11.2

scontrol show job $SLURM_JOB_ID  # print some info

singularity instance start --nv --env-file xaug/.env --no-home --bind $WORKDIR/tmp_home:$HOME,$HOME/.vscode-server:$HOME/.vscode-server,$SRCDIR:/src,$WORKDIR:/work xaug_image.sif anix_dev-0 /src/xaug/run_dev.sh

sleep infinity
Important here are four things:
- We need to load cuda and singularity to have them available to our container.
- We need to bind $HOME/.vscode-server to the same place in the container.
- We need to remember the name of our container, in this case: anix_dev-0
- We need to keep the script running in order to not lose the node. This is achieved by sleep infinity.
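As a rough sketch of how this script might be used (the file name start_dev.sbatch is just a placeholder for whatever you call the script above):
sbatch start_dev.sbatch       # submit the job script above (file name is a placeholder)
squeue -u $USER               # check that the job is running and note which node it got
# on that compute node itself, after loading the singularity/apptainer module,
# the running instance can be checked with:
singularity instance list     # should list anix_dev-0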
SSH Config to Connect to the Container
We want to connect to the container via ssh. For this, set up the following configuration in ~/.ssh/config on your local machine.
Host hlrn
  User <your_username>
  HostName glogin.hlrn.de
  IdentityFile ~/.ssh/<your_key>

Host hlrn-*
  User <your_username>
  IdentityFile ~/.ssh/<your_key>
  Port 22
  ProxyCommand ssh $(echo %h | cut -d- -f1) nc $(echo %h | cut -d- -f2) %p

Host hlrn-*-singularity
  User <your_username>
  IdentityFile ~/.ssh/<your_key>
  RequestTTY force
  Port 22
  ProxyCommand ssh $(echo %h | cut -d- -f1) nc $(echo %h | cut -d- -f2) %p
  RemoteCommand module load singularity; singularity shell --env HTTP_PROXY="http://www-cache.gwdg.de:3128",HTTPS_PROXY="http://www-cache.gwdg.de:3128" instance://<container_name>
This enables three different connections from your local machine:
- Connection to the login node: ssh hlrn
- Connection to a compute node that we obtained through the scheduler, e.g. ssh hlrn-ggpu02
- Connection to the singularity container running on a compute node, e.g. ssh hlrn-ggpu02-singularity
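To find out which node name to use in place of ggpu02, you can, for example, check on the login node where your job is running (the format string below is just one way to show the node list):
squeue -u $USER -o "%.10i %.12j %.8T %N"    # job ID, name, state, and allocated node(s)
ssh hlrn-<node>-singularity                 # then, from your local machine, connect using that node name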
Connecting VS Code to the Container
This mostly follows the tutorial here. Then add the following lines to your VS Code settings.json:
  "remote.SSH.enableRemoteCommand": true,   "remote.SSH.useLocalServer": true,
Now remote connections should be possible. Before we can connect to the individual cluster nodes, we first need to initialize the vscode-server on the login nodes. For this, press Ctrl+Shift+P, enter Remote-SSH: Connect to Host and select hlrn. This should (after typing in the passphrase of your private key) connect your VS Code to the login node. At the same time, the vscode-server is installed in your home directory on the cluster. Additionally, you should go into the extensions and install all extensions (e.g. Python) that you need on the cluster. These two steps cannot be done on the compute nodes, so it is important to do them on the login node beforehand. Finally, we can close the connection to the login node and connect to the compute node on which the singularity container is running. This works in the same way as the connection to the login node, but instead of hlrn, we select hlrn-<your_node>-singularity.
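If you prefer starting the connection from a terminal, VS Code's command-line interface can also open a remote window directly; this is only a sketch, assuming the Remote-SSH extension is installed and using the /src bind mount from the batch script above:
code --remote ssh-remote+hlrn-ggpu02-singularity /src    # open the bound source directory in a remote window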