Commit 89da42b6 authored by van den Berg

Remove directory output from fastqc

The directory output of the fastqc tasks causes problems on the
cluster's shared file system: Snakemake cannot reliably determine the
modification time of the directory there. As a result, it re-runs the
fastqc tasks every time the workflow is restarted, even when they have
already completed successfully.

To prevent this, the directory output has been replaced by a single
dummy output file '.done', which is only written once fastqc has
exited successfully.
parent e33b4498
Pipeline #4022 passed with stages in 38 minutes and 10 seconds
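
For reference, the sentinel-file pattern in isolation (a minimal sketch; the rule name and paths below are hypothetical, the real rules are in the diff that follows). Snakemake creates the directory containing a declared output file before the job runs, so {params.folder} already exists when fastqc starts, and '.done' is only touched when fastqc exits with status 0:

    rule fastqc_example:
        input:
            r1 = "reads/example_R1.fastq.gz",
            r2 = "reads/example_R2.fastq.gz",
        params:
            # directory where fastqc writes its reports; not tracked by Snakemake
            folder = "qc/example",
        output:
            # sentinel file tracked by Snakemake instead of the directory
            done = "qc/example/.done",
        shell:
            "fastqc --threads 4 --nogroup -o {params.folder} {input.r1} {input.r2} && "
            "touch {output.done}"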
@@ -120,10 +120,10 @@ rule all:
         gvcf_tbi = expand("{sample}/vcf/{sample}.g.vcf.gz.tbi",
                           sample=config["samples"]),
-        fastqc_raw = (f"{sample}/pre_process/raw-{sample}-{read_group}/"
+        fastqc_raw = (f"{sample}/pre_process/raw-{sample}-{read_group}/.done"
                       for read_group, sample in get_readgroup_per_sample()),
-        fastqc_trim = (f"{sample}/pre_process/trimmed-{sample}-{read_group}/"
+        fastqc_trim = (f"{sample}/pre_process/trimmed-{sample}-{read_group}/.done"
                        for read_group, sample in get_readgroup_per_sample()),
         cutadapt = (f"{sample}/pre_process/{sample}-{read_group}.txt"
@@ -377,20 +377,26 @@ rule fastqc_raw:
     input:
         r1 = lambda wc: (config['samples'][wc.sample]['read_groups'][wc.read_group]['R1']),
         r2 = lambda wc: (config['samples'][wc.sample]['read_groups'][wc.read_group]['R2']),
+    params:
+        folder = "{sample}/pre_process/raw-{sample}-{read_group}"
     output:
-        directory("{sample}/pre_process/raw-{sample}-{read_group}/")
+        done = "{sample}/pre_process/raw-{sample}-{read_group}/.done"
     container: containers["fastqc"]
-    shell: "fastqc --threads 4 --nogroup -o {output} {input.r1} {input.r2} "
+    shell: "fastqc --threads 4 --nogroup -o {params.folder} {input.r1} {input.r2} && "
+           "touch {output.done}"
 
 rule fastqc_postqc:
     """Run fastqc on fastq files post pre-processing"""
     input:
         r1 = rules.cutadapt.output.r1,
         r2 = rules.cutadapt.output.r2
+    params:
+        folder = "{sample}/pre_process/trimmed-{sample}-{read_group}"
     output:
-        directory("{sample}/pre_process/trimmed-{sample}-{read_group}/")
+        done = "{sample}/pre_process/trimmed-{sample}-{read_group}/.done"
     container: containers["fastqc"]
-    shell: "fastqc --threads 4 --nogroup -o {output} {input.r1} {input.r2} "
+    shell: "fastqc --threads 4 --nogroup -o {params.folder} {input.r1} {input.r2} && "
+           "touch {output.done}"
 
 ## coverage
 rule covstats:
@@ -527,10 +533,10 @@ rule multiqc:
             "{sample}/bams/{sample}.insert_size_metrics",
             sample=config["samples"]
         ),
-        fastqc_raw = (directory(f"{sample}/pre_process/raw-{sample}-{read_group}/")
+        fastqc_raw = (f"{sample}/pre_process/raw-{sample}-{read_group}/.done"
                       for read_group, sample in get_readgroup_per_sample()),
-        fastqc_trim = (directory(f"{sample}/pre_process/trimmed-{sample}-{read_group}/")
+        fastqc_trim = (f"{sample}/pre_process/trimmed-{sample}-{read_group}/.done"
                        for read_group, sample in get_readgroup_per_sample()),
         hs_metric = expand("{sample}/bams/{sample}.hs_metrics.txt",