Excerpt
Visual Studio Code (VS Code) is an IDE that, while not provided on the clusters, many users run on their own machines and use to connect to the clusters.
Description
...
with full support for remote code development.
Modules
None. Users install and run VS Code on their own machines.
Connecting to Singularity/Apptainer Containers
The following is a guide on how to actively develop in a Singularity/Apptainer container. See Singularity/Apptainer for more information on the singularity module. Singularity and Apptainer are largely compatible with each other, and in fact you run the container with the singularity command regardless of which module you use.
Code block
module load SINGULARITY_OR_APPTAINER/VERSION
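For example, on a cluster that provides an apptainer module (the module name and version here are placeholders, check module avail on your system), loading it and verifying the compatibility command might look like this:
Code block
module load apptainer/VERSION   # placeholder name/version, as above
singularity --version           # apptainer also provides the singularity command, so the rest of this guide works unchanged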
This guide was contributed by our GPU users Anwai Archit and Arne Nix, who kindly provided this documentation. It has been lightly edited to fit the format of this page and to fix a few typos. Anywhere you see "singularity", you can replace it with "apptainer" if you use the apptainer module instead. It was written for the Grete GPU nodes of Emmy, but can easily be translated to other partitions/clusters (see /wiki/spaces/PUB/pages/428683 for more information). Rename any directories and files as appropriate for your user name, the SIF container file you use, and the names of your files and directories.
Starting a Singularity Container
First we need to set up a singularity container and submit it to run on a GPU node. For me this is done with the following SBATCH script:
Code block
#!/bin/bash
#SBATCH --job-name=anix_dev-0 # Name of the job
#SBATCH --ntasks=1 # Number of tasks
#SBATCH --cpus-per-task=2 # Number of CPU cores per task
#SBATCH --nodes=1 # Ensure that all cores are on one machine
#SBATCH --time=0-01:00 # Runtime in D-HH:MM
#SBATCH --mem-per-cpu=3000 # Memory per CPU core in MB (see also --mem)
#SBATCH --output=logs/anix_dev-0.%j.out # File to which STDOUT will be written
#SBATCH --error=logs/anix_dev-0.%j.err # File to which STDERR will be written
#SBATCH --mail-type=ALL # Type of email notification- BEGIN,END,FAIL,ALL
#SBATCH --mail-user=None # Email to which notifications will be sent
#SBATCH -p gpu # Partition to submit to
#SBATCH -G 1 # Number of requested GPUs
export SRCDIR=$HOME/src
export WORKDIR=$LOCAL_TMPDIR/$USER/$SLURM_JOB_ID
mkdir -p $WORKDIR
mkdir -p $WORKDIR/tmp_home
module load singularity
module load cuda/11.2
scontrol show job $SLURM_JOB_ID # print some info
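# Start a named container instance (anix_dev-0): --nv passes the NVIDIA GPU driver through,
# --no-home plus the tmp_home bind keeps the real home directory out of the container,
# $HOME/.vscode-server is bound so VS Code can install its server there,
# and $SRCDIR and $WORKDIR appear inside the container as /src and /work.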
singularity instance start --nv --env-file xaug/.env --no-home --bind $WORKDIR/tmp_home:$HOME,$HOME/.vscode-server:$HOME/.vscode-server,$SRCDIR:/src,$WORKDIR:/work xaug_image.sif anix_dev-0 /src/xaug/run_dev.sh
sleep infinity
Important here are four things:
- We need to load cuda and singularity to have them available in our container.
- We need to bind $HOME/.vscode-server to the same place in the container.
- We need to remember the name of our container. In this case: anix_dev-0
- We need to keep the script running in order to not lose the node. This is achieved by sleep infinity.
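A minimal sketch of submitting this job and finding where it runs (the file name start_dev_container.sh is just an example name for the script above):
Code block
mkdir -p logs                   # the --output/--error paths above require this directory to exist
sbatch start_dev_container.sh   # example file name for the SBATCH script above
squeue -u $USER                 # wait until the job state is R and note the node name in NODELIST
The node shown in NODELIST is the one you will connect to via SSH below, and the instance name (here anix_dev-0) is what <container_name> in the SSH config refers to.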
SSH Config to Connect to the Container
We want to connect to the container via ssh. For this, set up the following configuration in ~/.ssh/config on your local machine. On the remote side no module needs to be loaded - the client installs its own server automatically (~/.vscode-server).
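One caveat: the batch script above bind-mounts $HOME/.vscode-server into the container, and Singularity/Apptainer will typically refuse a bind whose source path does not exist, so create the directory once on the cluster before the first connection:
Code block
mkdir -p ~/.vscode-server   # run once on the cluster; the bind source must exist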
Set up a remote connection
1) locally install the Remote - SSH extension in VS Code (it provides the Remote-SSH: Connect to Host command used in step 4)
2) pick and fire up your remote compute node (avoid expensive tasks on the login nodes), for example
Code block
salloc -p cpu-clx
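Node count and time below are examples; adjust them to your needs. Once the allocation is granted, note the bcn#### node name for step 4:
Code block
salloc -p cpu-clx -N 1 -t 01:00:00   # example resources
squeue -u $USER                      # the NODELIST column shows the allocated bcn#### node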
3) add this block to your local SSH config to connect to any bcn* compute node via blogin
Code block
Host hlrn
    User <your_username>
    HostName blogin.nhr.zib.de
    IdentityFile ~/.ssh/<your_key>

Host hlrn-bcn*
    User <your_username>
    IdentityFile ~/.ssh/<your_key>
    Port 22
    ProxyCommand ssh $(echo %h | cut -d- -f1) nc $(echo %h | cut -d- -f2) %p

Host hlrn-*-singularity
    User <your_username>
    IdentityFile ~/.ssh/<your_key>
    RequestTTY force
    Port 22
    ProxyCommand ssh $(echo %h | cut -d- -f1) nc $(echo %h | cut -d- -f2) %p
    RemoteCommand module load singularity; singularity shell --env HTTP_PROXY="http://www-cache.gwdg.de:3128",HTTPS_PROXY="http://www-cache.gwdg.de:3128" instance://<container_name>
This enables three different connections from your local machine:
- Connection to the login node: ssh hlrn
- Connection to a compute node that we obtained through the scheduler, e.g. ssh hlrn-ggpu02
- Connection to the singularity container running on a compute node, e.g. ssh hlrn-ggpu02-singularity
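A quick way to test these from your local machine (ggpu02 is just the example node name used above; substitute the node your job actually runs on):
Code block
ssh hlrn hostname             # login node
ssh hlrn-ggpu02 hostname      # compute node; only works while your job holds it
ssh hlrn-ggpu02-singularity   # opens a shell inside the running container instance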
Connecting VS Code to the Container
This mostly follows the tutorial here. Then add the following lines to your VS Code settings.json:
Code block
"remote.SSH.enableRemoteCommand": true,
"remote.SSH.useLocalServer": true,
...
-W %h:%p clx-login
4) in VS Code open the Command Palette, type/enter Remote-SSH: Connect to Host, and provide the specific name of your allocated compute node, e.g. bcn####
5) as soon as the remote connection in VS Code is established you can install additional extensions on the remote side, too. Some recommendations are:
- GitHub Copilot
- Python
- JupyterHub
- C/C++
- Modern Fortran
Steps 2) and 4) need to be executed each time you run VS Code on a compute node. All other steps are required only once and are permanent.
Optional: containerized VS Code server
Advantage: inside the container the user has more rights, e.g., can use dnf.