The standard Slurm commands (`sbatch`, `squeue`, `scancel`…) are powerful but not always user-friendly.
The NBI-Slurm package provides a set of wrapper utilities designed to simplify common tasks on the NBI HPC.
On the NBI HPC, you can load the utilities with:

```bash
# Recommended: using LMOD
module load nbi-slurm

# Legacy method (still works)
source /nbi/software/testing/bin/nbi-slurm
```
Once activated, you’ll have access to several helpful commands.
The output of `squeue` can be hard to read. `lsjobs` provides a cleaner, more informative view of your jobs, with colour-coded output and filtering options; it also allows you to delete jobs matching a search string (e.g. all jobs matching “kraken”).
```bash
# List all your jobs
lsjobs

# Filter by job name pattern
lsjobs MEGAHIT

# Show only running jobs
lsjobs -r

# Show jobs from another user
lsjobs -u colleague

# Delete jobs matching a pattern (with confirmation prompt)
lsjobs -d MEGAHIT
```
| Option | Description |
|---|---|
| `-u USER` | Show jobs from a specific user (default: yourself) |
| `-n PATTERN` | Filter by job name |
| `-r` | Show only running jobs |
| `-d` | Delete selected jobs (interactive confirmation) |
| `-t` | Output as TSV (pipe to `vd` for interactive exploration) |
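The TSV output plays well with standard command-line tools. A small sketch, assuming VisiData (`vd`) is installed on the cluster:

```bash
# Browse your job list interactively with VisiData (if installed)
lsjobs -t | vd

# Or feed it to standard text tools, e.g. keep only the first column
lsjobs -t | cut -f 1
```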
Writing a full Slurm script for a simple command is tedious. `runjob` lets you submit jobs directly from the command line with a clean syntax.

```bash
# Write a simple job script (dry run: prints the script without submitting)
runjob -n "my-job" -c 4 -m 8g -t 2h -w logs/ "python script.py --threads 4"

# Actually run it (add -r or --run)
runjob -n "my-job" -c 4 -m 8g -t 2h -w logs/ -r "python script.py --threads 4"
```
| Option | Description |
|---|---|
| `-n NAME` | Job name |
| `-c CORES` | Number of CPU cores |
| `-m MEMORY` | Memory (e.g. `4g`, `500m`, `16Gb`) |
| `-t TIME` | Time limit (e.g. `2h`, `1d`, `30m`) |
| `-q QUEUE` | Queue/partition |
| `-r` | Actually submit the job (without this, it only prints the script) |
| `--after JOBID` | Wait for another job to finish first |
| `-f FILES` | Input files with placeholder substitution |
Since `runjob` prints the ID of the submitted job, you can capture it to build very simple chains of jobs:

```bash
JOB1=$(runjob -r "echo 'Step 1 complete'")
runjob --after $JOB1 -r "echo 'Step 2 starts after Step 1'"
```
Use the `-f` option with placeholders to run a command on multiple files:

```bash
# Process all FASTQ files (one job per file)
runjob -f "*.fastq.gz" -r "fastqc #FILE#"
```
Sometimes you need to wait for a set of jobs to finish before starting the next step — but you don’t know all the job IDs in advance. waitjobs monitors the queue and waits until jobs matching your criteria have completed.
```bash
# Wait for all your jobs named "assembly" to finish
waitjobs -n assembly

# Wait for all jobs from a specific user
waitjobs -u colleague

# Check every 10 seconds instead of the default 20
waitjobs -n myjob -r 10
```
This is particularly useful in pipelines: you can submit waitjobs itself as a job, then use its job ID as a dependency for downstream steps.
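For example, a barrier step for a batch of assembly jobs might be sketched like this (the downstream `multiqc` command is a placeholder, not part of the package):

```bash
# Submit waitjobs itself as a job: it finishes only once
# all jobs named "assembly" have completed
BARRIER=$(runjob -n barrier -r "waitjobs -n assembly")

# The downstream step depends only on the barrier job,
# not on every individual assembly job ID
runjob --after $BARRIER -r "multiqc ."
```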
| Option | Description |
|---|---|
| `-u USER` | Wait for jobs from this user |
| `-n PATTERN` | Wait for jobs matching this name pattern |
| `-r SECONDS` | Refresh interval (default: 20) |
| `--verbose` | Show progress information |
The NBI-Slurm package includes additional utilities:
| Command | Purpose |
|---|---|
| `whojobs` | See who’s using the cluster (users ranked by job count) |
| `shelf` | Search for packages installed on the HPC |
| `session` | Start an interactive session with sensible defaults |
| `make_image_from_bioconda` | Generate a Singularity image from a Bioconda package |
You can set defaults in `~/.nbislurm.config` to avoid typing the same options repeatedly:

```ini
queue=qib-short
email=your.email@quadram.ac.uk
memory=8000
time=3h
tmpdir=/home/user/slurm
```
Type `configuration` to create an empty template.
For full documentation, see the MetaCPAN page.