Cluster Execution

Snakemake can make use of cluster engines that support shell scripts and have access to a common filesystem (e.g. the Sun Grid Engine). In this case, Snakemake simply needs to be given a submit command that accepts a shell script as the first positional argument:

$ snakemake --cluster qsub -j 32

Here, -j denotes the number of jobs submitted to the cluster at the same time (here 32). The cluster command can be decorated with job-specific information, e.g.

$ snakemake --cluster "qsub {threads}"

Here, all keywords of a rule are available (e.g. rulename, params, input, output, threads, priority, resources, …). For example, you could encode the expected running time in minutes into resources:

rule:
    input:
        ...
    output:
        ...
    resources:
        runtime_min=240
    shell:
        ...

and forward it to the cluster scheduler:

$ snakemake --cluster "qsub --runtime {resources.runtime_min}"
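
In order to avoid specifying runtime_min for each rule, you can make use of the --default-resources flag (see snakemake --help). A minimal sketch, assuming the NAME=VALUE syntax of that flag; the 120-minute default is illustrative:

$ snakemake --cluster "qsub --runtime {resources.runtime_min}" --default-resources "runtime_min=120" -j 32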

If your cluster system supports DRMAA, Snakemake can make use of it to gain more control over jobs. For example, jobs can be cancelled upon pressing Ctrl+C, which is not possible with the generic --cluster support. With DRMAA, no qsub command needs to be provided, but system-specific arguments can still be given as a string, e.g.

$ snakemake --drmaa " -q username" -j 32

Note that the string has to contain a leading whitespace. Otherwise, the arguments will be interpreted as part of the normal Snakemake arguments, and execution will fail.

Adapting to a specific cluster can involve quite a lot of options. It is therefore a good idea to set up a profile.
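
A profile is a directory containing a config.yaml whose keys mirror Snakemake's long command-line options, so the flags above do not have to be retyped on every invocation. A minimal sketch (the directory name sge-profile and the values are illustrative):

$ cat sge-profile/config.yaml
cluster: "qsub --runtime {resources.runtime_min}"
jobs: 32
$ snakemake --profile sge-profile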

Job Properties

When executing a workflow on a cluster using the --cluster parameter (see above), Snakemake creates a job script for each job to execute. This script is then invoked using the provided cluster submission command (e.g. qsub). Sometimes you want to provide a custom wrapper for the cluster submission command that decides about additional parameters. As this decision might be based on properties of the job, Snakemake stores the job properties (e.g. name, rulename, threads, input, output, params etc.) as JSON inside the job script (for group jobs, the rulename will be “GROUP”, otherwise it will be the same as the job name). For convenience, there exists a parser function snakemake.utils.read_job_properties that can be used to access the properties. The following shows an example job submission wrapper:

#!/usr/bin/env python3
import os
import sys

from snakemake.utils import read_job_properties

jobscript = sys.argv[1]
job_properties = read_job_properties(jobscript)
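# job_properties is a dict parsed from the JSON that Snakemake embeds as a
# comment line (roughly "# properties = {...}") near the top of the job script;
# its keys include e.g. the rule name, input, output, params, threads and resources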

# do something useful with the threads
threads = job_properties["threads"]

# access a property defined in the cluster configuration file (Snakemake >=3.6.0)
time = job_properties["cluster"]["time"]

os.system("qsub -t {threads} {script}".format(threads=threads, script=jobscript))
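
Assuming the wrapper above is saved as qsub-wrapper.py (an illustrative name) and made executable, it can be passed directly as the cluster command; Snakemake then invokes it once per job, with the generated job script as the first positional argument:

$ chmod +x qsub-wrapper.py
$ snakemake --cluster ./qsub-wrapper.py -j 32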