User Guide

ScienceApps allows you to interactively run and manage ScienceCluster sessions from the browser.

Interactive Apps

To create a new interactive session, click "Interactive Apps" on the top menu bar then select which App you would like to start.

The following interactive apps are available, where you can analyze data, develop algorithms, and create models:

Desktop Environments

Allows you to work with a Linux remote desktop, including VS Code.

To install custom software, learn more about custom containers.

Beta Interactive Apps

The following apps are available in Beta and therefore come with only very limited support:

Launching a Session

Once you have selected the App, complete the web form to create your session:

  • Version: Application version
  • Hours: The number of hours your interactive session should be available. You can delete your session at any point to release the allocation. The maximum duration for a single session is one week (168 hours).
  • Cores: Number of vCPUs to allocate for your session.
  • RAM (system memory): Amount of memory to allocate for your session.
  • GPU: (not available for all ScienceApps) Lets you request either the first available GPU or a GPU of a specific type. Note: GPUs have their own memory, all of which is allocated to the session, independently of the RAM setting above.
  • Project (Slurm account): (Optional) In most cases, this field should be left blank. If you are a member of multiple research groups and the cost contribution needs to be assigned to a non-default project, specify the name of the Science IT project that will fund your cost contribution.
  • Partition: (GPU jobs only) Select the ScienceCluster partition that you want to use. This value applies only when a GPU is requested. See lowprio for more info.
  • Email notifications: Check the box "Receive email on all job state changes" if you want to receive email notifications when your job starts, fails, or ends.

My Interactive Sessions

This page gives an overview of your currently running interactive apps. Here you can do the following:

  • Connect to the web interface of an existing session
  • View and manage queued sessions
  • Delete running sessions to release the allocated resources

Files

You can interact with the filesystem through the web browser.

  • /home/$USER is where you store your configuration files, datasets and output files (limited in size).
  • /scratch/$USER is for temporary storage of potentially big data sets. Data that is not accessed or modified in more than 30 days will be automatically removed.
  • /shares/<PROJECT> is a scalable group storage with cost contribution.

A full description of the ScienceCluster filesystem is available here. Reminder: Backing up or archiving your files to protect against data loss is the responsibility of the user.
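Because data in scratch that has not been accessed or modified for 30 days is removed automatically, it can be useful to spot files that are approaching that limit. The following stdlib-only Python sketch (not the cluster's actual cleanup tool, and only an approximation since it checks modification time, not access time) lists files under a directory older than a given number of days:

```python
import os
import time
from pathlib import Path

def files_older_than(root: str, days: int) -> list[str]:
    """Return paths under `root` not modified within the last `days` days."""
    cutoff = time.time() - days * 86400  # 86400 seconds per day
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = Path(dirpath) / name
            if path.stat().st_mtime < cutoff:
                stale.append(str(path))
    return stale
```

Running it against /scratch/$USER can help you decide what to copy to home or group storage before the automatic cleanup.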

Job Interaction

Here you can view and manage your current cluster jobs (active or in the queue).

With the Job Composer you can create jobs based on templates.

Advanced Topics

Cluster Shells

Start an interactive SSH shell on the frontend node of the cluster, equivalent to connecting to the cluster via ssh from a local terminal. You can use this tool to create custom Jupyter kernels as described below.

Info

For the best experience, please use Chrome or Firefox.

Containerized Kernels in Jupyter

Support Disclaimer

Given the wide range of possible configurations, we are unable to provide comprehensive technical support for this specific section.

To run Jupyter notebooks using custom packages and dependencies, you can create your own containerized kernel environment. To streamline this process, we provide a helper script, apptainer_ipykernel.py (available here), which links your custom Apptainer container to the ScienceApps Jupyter environment.

These tools require command-line access to the ScienceCluster. First, ssh into the cluster from a local terminal, or open an interactive terminal as described in the Cluster Shells section above.

Once a custom kernel is installed, it becomes available in ScienceApps (Jupyter).
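Under the hood, Jupyter discovers kernels through kernel.json kernelspec files. The exact spec that apptainer_ipykernel.py writes may differ, but conceptually a containerized kernel wraps the kernel launch command in apptainer exec, along the lines of this hypothetical sketch (the function name, argv, and paths are illustrative, not the helper script's actual implementation):

```python
import json
from pathlib import Path

def write_container_kernelspec(container: str, name: str, kernels_dir: str) -> Path:
    """Write a minimal kernel.json that launches ipykernel inside a container."""
    spec = {
        # Jupyter substitutes {connection_file} at kernel launch time.
        "argv": [
            "apptainer", "exec", container,
            "python", "-m", "ipykernel_launcher", "-f", "{connection_file}",
        ],
        "display_name": name,
        "language": "python",
    }
    kernel_dir = Path(kernels_dir) / name
    kernel_dir.mkdir(parents=True, exist_ok=True)
    spec_path = kernel_dir / "kernel.json"
    spec_path.write_text(json.dumps(spec, indent=2))
    return spec_path
```

By default, Jupyter looks for user kernelspecs under ~/.local/share/jupyter/kernels; the helper script takes care of this registration for you.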

Note

Container environments are immutable. If you need to install additional packages later, you cannot do so dynamically within the notebook; you must rebuild the containerized kernel from scratch.

Prerequisites

  • IPython Kernel: The container must include the ipykernel package. The helper script expects the package to be available via the command: apptainer exec <container> python -m ipykernel --help.

  • The helper script: Download apptainer_ipykernel.py:

    curl -O https://gitlab.uzh.ch/s3it/docs/-/snippets/271/raw/main/apptainer_ipykernel.py
    python apptainer_ipykernel.py --help
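If you want to verify the ipykernel prerequisite yourself before installing a kernel, a small generic check like the following confirms that a command runs and exits cleanly (a stdlib-only sketch; the apptainer invocation in the comment only works on the cluster with a built container):

```python
import subprocess

def command_succeeds(argv: list[str]) -> bool:
    """Return True if `argv` runs and exits with status 0."""
    try:
        result = subprocess.run(argv, capture_output=True, timeout=120)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0

# On the cluster, with a hypothetical container file conda.sif:
# command_succeeds(["apptainer", "exec", "conda.sif",
#                   "python", "-m", "ipykernel", "--help"])
```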
    

Conda Example

For more information about using Conda, see Using Conda on the ScienceCluster section.

  1. First, run an interactive session:

    srun --pty -n 1 -c 2 --time=00:30:00 --mem=7G bash -l
    
  2. Create a Conda env.yml file with your required packages, including ipykernel; this example file installs python (pinned to a specific version) and pandas:

    env.yml
    name: venv
    channels:
      - conda-forge
    dependencies:
      - python=3.14
      - pandas
      - ipykernel
    
  3. Create an Apptainer definition file (e.g., conda.def) used to build a container that copies in and uses the env.yml:

    conda.def
    Bootstrap: docker
    From: condaforge/miniforge3:latest
    
    %files
        ./env.yml /env.yml
    
    %post
        mamba env create --yes -f /env.yml
        mamba clean --all -f -y
    
    %environment
        export PATH="/opt/conda/envs/venv/bin:$PATH"
    
    %runscript
        exec python "$@"
    
  4. Build the container using the env.yml and the conda.def:

    module load apptainer
    APPTAINER_BINDPATH="" apptainer build conda.sif conda.def
    
  5. Install the kernel using the helper script:

    python apptainer_ipykernel.py install conda.sif --name conda_pandas
    
  6. Optionally, list installed containerized kernels using the helper script:

    python apptainer_ipykernel.py list
    

Remove a Kernel

In case you need to remove a containerized kernel, you can use the helper script:

python apptainer_ipykernel.py remove <kernel_name>

TensorBoard

TensorBoard can be used to monitor the status of a TensorFlow model either in real time or after a workflow has completed.

For example, if you run the code from this page on the cluster, a logs folder will be created within your current working directory. At that point (or at any point thereafter) you can start a TensorBoard ScienceApps session by providing the absolute path to the logs folder via the "Log Directory" input (e.g., /scratch/$USER/logs if the logs directory is located in scratch).
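Since the "Log Directory" field expects an absolute path, it can help to print the resolved path of the logs folder from the working directory where your training code ran. A stdlib-only sketch (the folder name logs matches the example above; the function name is illustrative):

```python
from pathlib import Path

def absolute_log_dir(workdir: str, name: str = "logs") -> str:
    """Resolve the absolute path of a log folder inside `workdir`."""
    return str((Path(workdir) / name).resolve())

# e.g. print(absolute_log_dir(".")) in your working directory, then
# paste the printed path into the "Log Directory" input.
```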