
FAQs

What will happen to my queued jobs during maintenance?

To perform maintenance on ScienceCluster, Science IT admins will create a reservation for all computational nodes. The reservation ensures that no jobs are running during maintenance because software updates may interfere with running jobs.

Pending jobs that cannot finish before the reservation, based on their requested time, will remain pending until the reservation expires. The priority of jobs will not change during the maintenance, and they will be scheduled to run based on priority once the maintenance is over.
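To see how your own pending jobs are affected, you can list them together with the reason Slurm reports for each job. During a maintenance reservation, the reason column typically shows something like "ReqNodeNotAvail, Reserved for maintenance". A minimal sketch (the format string here is just one possible choice):

```shell
# List your pending jobs with job ID, partition, name,
# submission time, and the reason they are still pending
squeue --me --states=PENDING --format="%.10i %.9P %.20j %.20V %R"
```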

To see if there is a reservation for the next maintenance window, you can use the following command:

scontrol show reservations

The output will show the start time, end time, and affected nodes. For example:

ReservationName=s3it.uzh_24 StartTime=2024-06-05T06:00:00 EndTime=2024-06-05T18:00:00 Duration=12:00:00
   Nodes=u20-compute-l[1-40],u20-compute-lmem[1,3-5],u20-compute-m[1-10],u20-compute-p[1-2],u20-compute-q1,u20-computegpu-[1-10],u20-computeib-hpc[1-12,14-18],u20-computeibmgpu-vesta[6-13,16-20],u20-computemgpu-vesta[14-15] NodeCnt=99 CoreCnt=3222 Features=(null) PartitionName=(null) Flags=MAINT,IGNORE_JOBS,SPEC_NODES,ALL_NODES
   TRES=cpu=4174
   Users=(null) Groups=(null) Accounts=s3it.uzh Licenses=(null) State=INACTIVE BurstBuffer=(null) Watts=n/a
   MaxStartDelay=(null)

Typically, reservations last from 6:00 until 18:00 on the maintenance day. However, they may also start earlier and finish later. The end time may also be adjusted during the maintenance if necessary.

In addition to the SLURM reservation, access to the login nodes may also be restricted. In that case, you will see a special message about the reservation when you try to log in, and the login will fail.

During the maintenance, it is often necessary to reboot the login nodes. This means that all tmux and screen sessions will be terminated.

What to do if I have a broken conda or mamba environment?

There are many possible reasons why a conda (or mamba) virtual environment might stop working, even if it worked in the past, so no single answer covers all cases. There are two general approaches: start over with a new environment, or repair the existing one.

Start fresh with a new environment

One approach, and generally the simplest and most reliable, is to create a new environment and start again following the methods outlined in this how-to article.

In some cases, that may not be sufficient. For example, if you have inadvertently installed packages with pip while no virtual environment was activated, those packages may end up in .local, where they can conflict with packages inside a virtual environment. In that case, it may be necessary to clean up .local/lib and .local/bin. Check whether either of those directories exists with ls .local, then run ls .local/lib to see whether it contains folders or files with "python" in their names. If so, you can clean up these directories in a reversible way (to avoid deleting something that another application may still need) by renaming them instead of deleting them:

mv .local/lib .local/lib_bak
mv .local/bin .local/bin_bak
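The rename step above can also be wrapped in a small guarded snippet that only touches directories that actually exist. A minimal sketch; backup_user_dirs is a hypothetical helper name, not a cluster-provided command:

```shell
# Reversibly back up the lib/ and bin/ subdirectories of a base
# directory by renaming them (undo later with `mv` if needed).
# `backup_user_dirs` is a hypothetical helper name.
backup_user_dirs() {
    base="$1"                          # e.g. "$HOME/.local"
    for d in "$base/lib" "$base/bin"; do
        if [ -d "$d" ]; then           # only rename if it exists
            mv "$d" "${d}_bak"
        fi
    done
}
```

On the cluster you would call backup_user_dirs "$HOME/.local"; restoring is simply mv .local/lib_bak .local/lib (and likewise for bin).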
This issue can be avoided in the future by first running conda install pip (or mamba install pip) inside your activated virtual environment before installing any packages with pip. Do NOT modify .local/share, because that directory may contain important configuration settings for other applications.

Check version compatibility: Sometimes, in order to get packages working in a new environment, a specific package might require an older (or newer) version of Python; check that package's documentation. In that case, you can create a new environment with a specific Python version, e.g.:

conda create --name myenv python=3.10
In other cases, a specific version of a package may be needed for compatibility with the other packages in an environment, which can be installed as:
conda install <package_name>=<version_number>
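If the broken environment still activates well enough to list its packages, you can also export its specification and use it as a starting point for the new environment. A sketch, assuming a reasonably recent conda; myenv and newenv are placeholder names:

```shell
# Export the package specification of the old environment ...
conda env export --name myenv > environment.yml
# ... optionally edit environment.yml (e.g. remove pinned builds
# that no longer resolve), then recreate from it under a new name
conda env create --name newenv --file environment.yml
```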

Repair the environment

Another approach, though not guaranteed to work, is to attempt to repair the virtual environment. Some steps (not a comprehensive guide) that may help in certain cases are listed below. Update all packages:

conda update --all
Remove and re-install a specific package that is giving errors:
conda remove <package_name>
conda install <package_name>
Also check the version compatibility of packages, and reinstall specific packages if needed.
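Conda also keeps a history of changes made to each environment, which can sometimes be used to roll the environment back to a state in which it still worked. A sketch; the revision number to roll back to will differ in your case:

```shell
# Show the environment's change history with revision numbers
conda list --revisions
# Roll the active environment back to a specific revision, e.g. 2
conda install --revision 2
```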