The Snakemake API

snakemake.snakemake(snakefile, listrules=False, list_target_rules=False, cores=1, nodes=1, local_cores=1, resources={}, config={}, configfile=None, config_args=None, workdir=None, targets=None, dryrun=False, touch=False, forcetargets=False, forceall=False, forcerun=[], until=[], omit_from=[], prioritytargets=[], stats=None, printreason=False, printshellcmds=False, printdag=False, printrulegraph=False, printd3dag=False, nocolor=False, quiet=False, keepgoing=False, cluster=None, cluster_config=None, cluster_sync=None, drmaa=None, jobname='snakejob.{rulename}.{jobid}.sh', immediate_submit=False, standalone=False, ignore_ambiguity=False, snakemakepath=None, lock=True, unlock=False, cleanup_metadata=None, force_incomplete=False, ignore_incomplete=False, list_version_changes=False, list_code_changes=False, list_input_changes=False, list_params_changes=False, list_resources=False, summary=False, detailed_summary=False, latency_wait=3, benchmark_repeats=1, wait_for_files=None, print_compilation=False, debug=False, notemp=False, nodeps=False, keep_target_files=False, keep_shadow=False, allowed_rules=None, jobscript=None, timestamp=False, greediness=None, no_hooks=False, overwrite_shellcmd=None, updated_files=None, log_handler=None, keep_logger=False, max_jobs_per_second=None, verbose=False)

Run snakemake on a given snakefile.

This function provides access to the whole snakemake functionality. It is not thread-safe.
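A minimal call might look like the following sketch. The Snakefile path, the target rule name "all", and the config key "samples" are hypothetical names chosen for illustration; the keyword arguments themselves are the parameters documented below.

    import snakemake

    # Sketch only: "Snakefile", the "all" target rule, and the "samples"
    # config key are hypothetical names for illustration.
    success = snakemake.snakemake(
        "Snakefile",                        # path to the snakefile
        cores=4,                            # number of provided cores
        targets=["all"],                    # rules or files to build
        config={"samples": "samples.tsv"},  # override workflow config values
        dryrun=True,                        # only show what would be done
        printshellcmds=True,                # also print each job's shell command
    )
    if not success:
        raise SystemExit("workflow execution failed")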

Parameters:
  • snakefile (str) – the path to the snakefile
  • listrules (bool) – list rules (default False)
  • list_target_rules (bool) – list target rules (default False)
  • cores (int) – the number of provided cores (ignored when using cluster support) (default 1)
  • nodes (int) – the number of provided cluster nodes (ignored without cluster support) (default 1)
  • local_cores (int) – the number of provided local cores if in cluster mode (ignored without cluster support) (default 1)
  • resources (dict) – provided resources, a dictionary assigning integers to resource names, e.g. {"gpu": 1, "io": 5} (default {})
  • config (dict) – override values for workflow config
  • workdir (str) – path to working directory (default None)
  • targets (list) – list of targets, e.g. rule or file names (default None)
  • dryrun (bool) – only dry-run the workflow (default False)
  • touch (bool) – only touch all output files if present (default False)
  • forcetargets (bool) – force given targets to be re-created (default False)
  • forceall (bool) – force all output files to be re-created (default False)
  • forcerun (list) – list of files and rules that shall be re-created/re-executed (default [])
  • prioritytargets (list) – list of targets that shall be run with maximum priority (default [])
  • stats (str) – path to file that shall contain stats about the workflow execution (default None)
  • printreason (bool) – print the reason for the execution of each job (default False)
  • printshellcmds (bool) – print the shell command of each job (default False)
  • printdag (bool) – print the dag in the graphviz dot language (default False)
  • printrulegraph (bool) – print the graph of rules in the graphviz dot language (default False)
  • printd3dag (bool) – print a D3.js compatible JSON representation of the DAG (default False)
  • nocolor (bool) – do not print colored output (default False)
  • quiet (bool) – do not print any default job information (default False)
  • keepgoing (bool) – keep going upon errors (default False)
  • cluster (str) – submission command of a cluster or batch system to use, e.g. qsub (default None)
  • cluster_config (str,list) – configuration file for cluster options, or list thereof (default None)
  • cluster_sync (str) – blocking cluster submission command (like SGE ‘qsub -sync y’) (default None)
  • drmaa (str) – if not None use DRMAA for cluster support, str specifies native args passed to the cluster when submitting a job
  • jobname (str) – naming scheme for cluster job scripts (default “snakejob.{rulename}.{jobid}.sh”)
  • immediate_submit (bool) – immediately submit all cluster jobs, regardless of dependencies (default False)
  • standalone (bool) – forcibly kill all processes in case of failure (do not use this when calling the API directly) (default False) (deprecated)
  • ignore_ambiguity (bool) – ignore ambiguous rules and always take the first possible one (default False)
  • snakemakepath (str) – path to the snakemake executable (default None)
  • lock (bool) – lock the working directory when executing the workflow (default True)
  • unlock (bool) – just unlock the working directory (default False)
  • cleanup_metadata (bool) – just cleanup metadata of output files (default False)
  • force_incomplete (bool) – force the re-creation of incomplete files (default False)
  • ignore_incomplete (bool) – ignore incomplete files (default False)
  • list_version_changes (bool) – list output files with changed rule version (default False)
  • list_code_changes (bool) – list output files with changed rule code (default False)
  • list_input_changes (bool) – list output files with changed input files (default False)
  • list_params_changes (bool) – list output files with changed params (default False)
  • summary (bool) – list summary of all output files and their status (default False)
  • latency_wait (int) – how many seconds to wait for an output file to appear after the execution of a job, e.g. to handle filesystem latency (default 3)
  • benchmark_repeats (int) – number of repeated runs of a job if declared for benchmarking (default 1)
  • wait_for_files (list) – wait for given files to be present before executing the workflow
  • list_resources (bool) – list resources used in the workflow (default False)
  • detailed_summary (bool) – list summary of all input and output files and their status (default False)
  • print_compilation (bool) – print the compilation of the snakefile (default False)
  • debug (bool) – allow use of the debugger within rules (default False)
  • notemp (bool) – ignore temp file flags, e.g. do not delete output files marked as temp after use (default False)
  • nodeps (bool) – ignore dependencies (default False)
  • keep_target_files (bool) – Do not adjust the paths of given target files relative to the working directory.
  • keep_shadow (bool) – Do not delete the shadow directory on snakemake startup.
  • allowed_rules (set) – Restrict allowed rules to the given set. If None or empty, all rules are used.
  • jobscript (str) – path to a custom shell script template for cluster jobs (default None)
  • timestamp (bool) – print time stamps in front of any output (default False)
  • greediness (float) – set the greediness of scheduling. This value between 0 and 1 determines how carefully jobs are selected for execution. The default value (0.5 if prioritytargets are used, 1.0 otherwise) provides the best speed with still acceptable scheduling quality.
  • overwrite_shellcmd (str) – a shell command that shall be executed instead of those given in the workflow. This is for debugging purposes only.
  • updated_files (list) – a list that will be filled with the files that are updated or created during the workflow execution (a usage sketch follows the Return type section below)
  • verbose (bool) – show additional debug output (default False)
  • log_handler (function) – redirect snakemake output to this custom log handler, a function that takes a log message dictionary (see below for the entries and a sketch of such a handler) as its only argument (default None)
  • max_jobs_per_second (int) – maximal number of cluster/drmaa jobs per second, None to impose no limit (default None)

The log message dictionary passed to the log handler has the following entries:

    level – the log level (“info”, “error”, “debug”, “progress”, “job_info”)

    If level is “info”, “error” or “debug”:
        msg – the log message

    If level is “progress”:
        done – number of already executed jobs
        total – number of total jobs

    If level is “job_info”:
        input – list of input files of a job
        output – list of output files of a job
        log – path to log file of a job
        local – whether a job is executed locally (i.e. ignoring cluster)
        msg – the job message
        reason – the job reason
        priority – the job priority
        threads – the threads of the job
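For instance, a custom log handler could dispatch on the documented level field. This is a sketch only, relying on the entry names listed above; the handler body and the "Snakefile" path are illustrative assumptions.

    import snakemake

    def my_log_handler(msg):
        # msg is the log message dictionary described above
        level = msg["level"]
        if level in ("info", "error", "debug"):
            print("[{}] {}".format(level, msg["msg"]))
        elif level == "progress":
            print("progress: {} of {} jobs done".format(msg["done"], msg["total"]))
        elif level == "job_info":
            print("running job with output files:", msg["output"])

    success = snakemake.snakemake(
        "Snakefile",
        cores=4,
        quiet=True,                  # suppress the default job information
        log_handler=my_log_handler,  # receive log messages via the handler instead
    )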
Returns:
    True if workflow execution was successful.

Return type:
    bool
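Finally, the updated_files argument can serve as an out-parameter that collects everything a run created or changed. The following is a sketch under that assumption; the rule name "plot" is hypothetical.

    import snakemake

    updated = []
    success = snakemake.snakemake(
        "Snakefile",
        cores=2,
        forcerun=["plot"],      # re-execute this rule (hypothetical name)
        updated_files=updated,  # filled with files updated or created during the run
    )
    if success:
        print("updated or created files:")
        for path in updated:
            print(" ", path)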