Job management¶
Job information¶
You can use the squeue utility to inspect the job queue. Without any parameters, it shows all jobs that are running or pending on the whole cluster.
squeue
Since the queue is typically very long, it may be beneficial to pipe the output to the less command. Then you can use the up/down arrows to scroll, as well as ctrl+f and ctrl+b to page forward and back respectively. You can get to the bottom of the output by pressing G. Finally, press q to exit less.
squeue | less
You can limit the output to your own jobs with -u $USER.
squeue -u $USER
The output includes basic job information such as the job id, user, and requested resources. Jobs cannot exceed the END_TIME but they can terminate earlier. The NODELIST(REASON) column shows the list of nodes for running jobs or the reason why pending jobs are waiting. The most common reason is (Priority), which means that the job is not running because it has lower priority than some other scheduled jobs. Pending jobs with the highest priority will have (Resources) as the reason. Occasionally, you may see (ReqNodeNotAvail). In most cases, it means that a reservation has been placed on partition nodes due to upcoming maintenance, and your job cannot start because its runtime would overlap with the maintenance window.
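For example, to list only your own pending jobs together with the reason each one is waiting, you can combine the user filter with squeue's state filter (see man squeue for the full list of states):
squeue -u $USER -t PENDING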
By default, the jobs are sorted by increasing step id, which is not very convenient. To make the output more informative, you can sort by job state t (pending, running) and priority Q (low to high).
squeue -S t,Q | less
Other useful sorting options are the node name N and the expected end time e. For example,
squeue -S t,N,e | less
Note
Some pending jobs may already show their estimated end time. This is a very rough estimate and the actual completion time may be either sooner or later depending on many factors. For example, additional jobs may be submitted at any time and they may delay currently pending jobs that have lower priority.
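A similarly rough estimate is available for start times: squeue's --start option reports when Slurm currently expects a pending job to begin, where such an estimate can be computed. Treat it with the same caution as the end-time estimate above.
squeue -u $USER --start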
Information about running jobs can also be obtained with the sstat utility. For example, the MaxRSS column shows the maximum amount of RAM your job has consumed so far:
sstat -a <jobid> -o Jobid,MaxRSS,AveCPU
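If you want to keep an eye on a running job, you can wrap the same command in the standard watch utility to refresh it periodically; the job id 2905690 below is just a placeholder.
watch -n 60 sstat -a 2905690 -o Jobid,MaxRSS,AveCPU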
Run the sstat --helpformat command to see the list of all available fields, and check man sstat to find out exactly what each field means. Information about jobs that ran previously can be obtained with the sacct utility. The most common parameters are listed below.
-S <date> displays jobs that started after the specified date. The date should be in ISO format, e.g. '2023-01-01'. You can also specify a time, e.g. '2023-01-01 14:30'.
-s <state> limits the output to jobs in specific states, e.g. -s FAILED,TIMEOUT would show jobs that failed or timed out.
-j <jobid> shows the information for the specified job only.
For example,
sacct -S '2023-01-01' -s COMPLETED
sacct -j 2905691
Jobs that finished successfully should have the COMPLETED state and a 0:0 exit code.
Among the default output columns, you may find MaxRSS particularly useful. It shows the maximum amount of RAM your job consumed at some point during its execution. This information can be used to adjust the amount of requested RAM for similar jobs in the future.
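As a sketch of how you might act on this, suppose a previous run reported a MaxRSS of roughly 3.5 GB (a hypothetical value); the next submission could then request slightly more than that instead of a much larger default. Here, job_script.sh is a placeholder for your own batch script.
sbatch --mem=4G job_script.sh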
There are many other fields that you can request. You can see the whole list by running sacct --helpformat. The output format can be controlled with the -o parameter, which accepts a comma-separated list of fields.
sacct -S '2023-01-01' -s COMPLETED -o jobid,start,reqtres,reqmem,maxrss
In some cases, a column may not be wide enough to fit entire values. sacct appends a plus sign to the end of truncated values. You can increase a column's width by adding %x to the column names specified with -o, where x is the width of the corresponding column in characters. For example, the following command expands the width of the JobID, ReqTres, and ReqMem columns to 9, 25, and 15 characters respectively.
sacct -S '2023-01-01' -s COMPLETED -o jobid%9,start,reqtres%25,reqmem%15,maxrss
You may also find it useful to compare the number of requested CPUs with the CPUTime column (time allocated to the job: Elapsed * AllocCPUS) and TotalCPU (actual CPU time consumed by the job): the two should be comparable. For example, if CPUTime is twice TotalCPU, you can try halving the number of requested CPUs.
sacct -S '2023-01-01' -s COMPLETED -o jobid,start,reqtres,reqmem,maxrss,alloccpus,cputime,totalcpu
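If the seff utility is installed on the cluster (it is shipped as a Slurm contributed tool and may not be available everywhere), it summarizes the same comparison as CPU and memory efficiency percentages for a finished job:
seff 2905691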
Job priority¶
On ScienceCluster, the order in which jobs are executed is primarily determined by the job's priority. Slurm assigns the initial priority when the job is submitted. This initial priority depends on the user's fair share, which is the difference between the promised resources and the resources already consumed by the user. In other words, the more resources that have been allocated for the user's jobs in the past, the lower the initial priority will be. All users initially start with the same fair share value, which begins to decrease once the user's jobs start running. The record of usage has a half-life decay of 7 days. This implies that the user's fair share may increase over time, consequently increasing the priority of pending jobs.
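You can check your current fair-share value with the sshare utility, assuming it is exposed to regular users on the cluster; the -U flag restricts the output to your own association and -l adds the normalized usage columns.
sshare -U -l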
In addition to the fair share value, a job's priority depends on the amount of time the job remains in the queue. The longer it stays in the queue, the higher the priority bonus.
At regular intervals, Slurm re-evaluates the priority of jobs and checks whether there are enough resources to run the jobs with the highest priority. If so, the jobs are assigned to nodes with available resources for execution. Additionally, there is a backfilling mechanism that schedules lower-priority jobs if available resources are insufficient for higher-priority jobs, and scheduling these lower-priority jobs does not delay the scheduling of higher-priority jobs.
You can see the priority of pending jobs by using the sprio -l command. When you run squeue, the job with the highest priority is indicated by "(Resources)" in the NODELIST(REASON) column, while jobs with lower priority have "(Priority)" in that column.
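As elsewhere, piping through less keeps a long priority listing manageable:
sprio -l | less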
Cancelling jobs¶
You can remove a pending or running job from the queue with scancel. Typically, you would use it with specific job ids.
scancel 2905690 2905691
However, it is possible to delete all your jobs that satisfy certain criteria. For example, you can delete all jobs that are pending.
scancel --state=PENDING
The command also has an interactive mode whereby it asks you to confirm the deletion of each job before actually deleting it. This mode is enabled with the -i flag.
scancel -i --state=RUNNING
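scancel also accepts a user filter, so you can remove all of your own jobs at once; combining it with -i, as above, asks for confirmation before each deletion (check man scancel for the full set of filters).
scancel -i -u $USER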