Commit bc9c1c28 authored by Peter van 't Hof

Merge remote-tracking branch 'remotes/origin/develop' into tool-checkFastq-pair

parents 097f37b0 d19d4308
......@@ -4,49 +4,52 @@
Gears (``GE``nome ``A``nnotation of ``R``esidual ``S``equences) is a metagenomics pipeline. It can be used to identify contamination in sequencing runs, starting from either raw FastQ files or BAM files.
When a BAM file is used as input, the unaligned read (pair) sequences are extracted for analysis.
Analysis results are reported in a sunburst graph, which can be viewed and navigated in a web browser.
Analysis results are reported in a Krona graph, which can be viewed and navigated in a web browser.
Pipeline analysis components include:
- Kraken, DerrickWood [GitHub](https://github.com/DerrickWood/kraken)
- [Kraken, DerrickWood](https://github.com/DerrickWood/kraken)
- [Qiime closed reference](http://qiime.org)
- [Qiime rtax](http://qiime.org) (**Experimental**)
- SeqCount (**Experimental**)
## Gears
## Example
This pipeline is used to analyse a group of samples and accepts only FastQ files. The FastQ files are first trimmed and clipped with [Flexiprep](Flexiprep); this can be disabled via the [Flexiprep](Flexiprep) config flags. Samples can be specified with a sample config file, see [Config](../general/Config).
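For example, trimming and clipping could be switched off with config values on the command line. Note that the `skip_clip` and `skip_trim` keys used below are assumptions, check the [Flexiprep](Flexiprep) documentation for the exact option names:

``` bash
# Sketch: run Gears without Flexiprep clipping/trimming.
# The skip_clip/skip_trim key names are assumptions, not confirmed on this page.
biopet pipeline Gears -run \
  -config mySettings.json -config samples.json \
  -cv skip_clip=true \
  -cv skip_trim=true
```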
To get the help menu:
### Config
``` bash
biopet pipeline Gears -h
... default config ...
Arguments for Gears:
-R1,--fastqr1 <fastqr1> R1 reads in FastQ format
-R2,--fastqr2 <fastqr2> R2 reads in FastQ format
-bam,--bamfile <bamfile> All unmapped reads will be extracted from this bam for analysis
--outputname <outputname> Undocumented option
-sample,--sampleid <sampleid> Sample ID
-library,--libid <libid> Library ID
-config,--config_file <config_file> JSON / YAML config file(s)
-cv,--config_value <config_value> Config values, value should be formatted like 'key=value' or
'path:path:key=value'
-DSC,--disablescatter Disable all scatters
```
| Key | Type | Default | Function |
| --- | ---- | ------- | -------- |
| gears_use_kraken | Boolean | true | Run the FastQ files through Kraken |
| gears_use_qiime_closed | Boolean | false | Run the FastQ files through the Qiime closed reference module |
| gears_use_qiime_rtax | Boolean | false | Run the FastQ files through the Qiime rtax module |
| gears_use_seq_count | Boolean | false | Produce raw count files |
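The same keys can also be supplied on the command line with `-cv` instead of a config file. A sketch (the values shown are illustrative, not recommendations):

``` bash
# Sketch: enable Kraken and the Qiime closed reference module, disable the rest.
biopet pipeline Gears -run \
  -config mySettings.json -config samples.json \
  -cv gears_use_kraken=true \
  -cv gears_use_qiime_closed=true \
  -cv gears_use_qiime_rtax=false \
  -cv gears_use_seq_count=false
```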
### Example
To start the pipeline (remove `-run` for a dry run):
``` bash
biopet pipeline Gears -run \
-config mySettings.json -config samples.json
```
Note that the pipeline also works on unpaired reads, in which case only R1 should be provided.
## GearsSingle
This pipeline can be used to analyse a single sample, starting from either FastQ files or a BAM file. When a BAM file is given, only the unmapped reads are extracted.
### Example
To start the pipeline (remove `-run` for a dry run):
``` bash
biopet pipeline Gears -run \
biopet pipeline GearsSingle -run \
-R1 myFirstReadPair -R2 mySecondReadPair -sample mySampleName \
-library myLibname -config mySettings.json
```
## Configuration and flags
### Command line flags
For technical reasons, single-sample pipelines such as this one do **not** take a sample config.
Input files are instead given on the command line as flags.
......@@ -58,17 +61,22 @@ Command line flags for Gears are:
| -R2 | --input_r2 | Path (optional) | Path to second read pair fastq file. |
| -bam | --bamfile | Path (optional) | Path to bam file. |
| -sample | --sampleid | String (**required**) | Name of sample |
| -library | --libid | String (**required**) | Name of library |
| -library | --libid | String (optional) | Name of library |
If `-R2` is given, the pipeline will assume a paired-end setup. `-bam` is mutually exclusive with the `-R1` and `-R2` flags: either specify `-bam`, or `-R1` and/or `-R2`.
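For example, a BAM-based run would look roughly like this (file, sample and library names are placeholders):

``` bash
# Sketch: GearsSingle on the unmapped reads of a BAM file instead of FastQ input.
biopet pipeline GearsSingle -run \
  -bam myAlignment.bam -sample mySampleName \
  -library myLibname -config mySettings.json
```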
### Config
| Key | Type | Default | Function |
| --- | ---- | ------- | -------- |
| gears_use_kraken | Boolean | true | Run the FastQ files through Kraken |
| gears_use_qiime_closed | Boolean | false | Run the FastQ files through the Qiime closed reference module |
| gears_use_qiime_rtax | Boolean | false | Run the FastQ files through the Qiime rtax module |
| gears_use_seq_count | Boolean | false | Produce raw count files |
### Result files
## Result files
The results of `Gears` are stored in the following files:
The results of `GearsSingle` are stored in the following files:
| File suffix | Application | Content | Description |
| ----------- | ----------- | ------- | ----------- |
......
......@@ -75,7 +75,7 @@ class Shiva(val root: Configurable) extends QScript with ShivaTrait {
}
}
override def keepMergedFiles: Boolean = config("keep_merged_files", default = false)
override def keepMergedFiles: Boolean = config("keep_merged_files", default = !useIndelRealigner)
override def summarySettings = super.summarySettings + ("use_indel_realigner" -> useIndelRealigner)
......
......@@ -17,6 +17,7 @@ package nl.lumc.sasc.biopet.pipelines.bammetrics
import java.io.File
import nl.lumc.sasc.biopet.core.annotations.{ RibosomalRefFlat, AnnotationRefFlat }
import nl.lumc.sasc.biopet.utils.config.Configurable
import nl.lumc.sasc.biopet.core.summary.SummaryQScript
import nl.lumc.sasc.biopet.core.{ Reference, BiopetFifoPipe, PipelineCommand, SampleLibraryTag }
......@@ -31,16 +32,15 @@ class BamMetrics(val root: Configurable) extends QScript
with SummaryQScript
with SampleLibraryTag
with Reference
with TargetRegions {
with TargetRegions
with AnnotationRefFlat
with RibosomalRefFlat {
def this() = this(null)
@Input(doc = "Bam File", shortName = "BAM", required = true)
var inputBam: File = _
/** Settings for CollectRnaSeqMetrics */
var transcriptRefFlatFile: Option[File] = config("transcript_refflat")
/** return location of summary file */
def summaryFile = (sampleId, libId) match {
case (Some(s), Some(l)) => new File(outputDir, s + "-" + l + ".BamMetrics.summary.json")
......@@ -92,7 +92,7 @@ class BamMetrics(val root: Configurable) extends QScript
add(gcBiasMetrics)
addSummarizable(gcBiasMetrics, "gc_bias")
if (transcriptRefFlatFile.isEmpty) {
if (config("wgs_metrics", default = true)) {
val wgsMetrics = new CollectWgsMetrics(this)
wgsMetrics.input = inputBam
wgsMetrics.output = swapExt(outputDir, inputBam, ".bam", ".wgs.metrics")
......@@ -100,12 +100,13 @@ class BamMetrics(val root: Configurable) extends QScript
addSummarizable(wgsMetrics, "wgs")
}
if (transcriptRefFlatFile.isDefined) {
if (config("rna_metrics", default = false)) {
val rnaMetrics = new CollectRnaSeqMetrics(this)
rnaMetrics.input = inputBam
rnaMetrics.output = swapExt(outputDir, inputBam, ".bam", ".rna.metrics")
rnaMetrics.chartOutput = Some(swapExt(outputDir, inputBam, ".bam", ".rna.metrics.pdf"))
rnaMetrics.refFlat = transcriptRefFlatFile.get
rnaMetrics.refFlat = annotationRefFlat()
rnaMetrics.ribosomalIntervals = ribosomalRefFlat()
add(rnaMetrics)
addSummarizable(rnaMetrics, "rna")
}
......
......@@ -61,6 +61,12 @@ object BammetricsReport extends ReportBuilder {
val wgsExecuted = summary.getValue(sampleId, libId, metricsTag, "stats", "wgs").isDefined
val rnaExecuted = summary.getValue(sampleId, libId, metricsTag, "stats", "rna").isDefined
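// Insert size metrics are only reported when the summary actually contains values for
// CollectInsertSizeMetrics; Some(None) means the key exists but holds no metrics.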
val insertsizeMetrics = summary.getValue(sampleId, libId, metricsTag, "stats", "CollectInsertSizeMetrics", "metrics") match {
case Some(None) => false
case Some(_) => true
case _ => false
}
val targets = (
summary.getValue(sampleId, libId, metricsTag, "settings", "amplicon_name"),
summary.getValue(sampleId, libId, metricsTag, "settings", "roi_name")
......@@ -77,9 +83,10 @@ object BammetricsReport extends ReportBuilder {
targets.map(t => t -> ReportSection("/nl/lumc/sasc/biopet/pipelines/bammetrics/covstatsPlot.ssp", Map("target" -> Some(t)))),
Map())),
List(
"Summary" -> ReportSection("/nl/lumc/sasc/biopet/pipelines/bammetrics/alignmentSummary.ssp"),
"Insert Size" -> ReportSection("/nl/lumc/sasc/biopet/pipelines/bammetrics/insertSize.ssp", Map("showPlot" -> true))
) ++ (if (wgsExecuted) List("Whole genome coverage" -> ReportSection("/nl/lumc/sasc/biopet/pipelines/bammetrics/wgsHistogram.ssp",
"Summary" -> ReportSection("/nl/lumc/sasc/biopet/pipelines/bammetrics/alignmentSummary.ssp")) ++
(if (insertsizeMetrics) List("Insert Size" -> ReportSection("/nl/lumc/sasc/biopet/pipelines/bammetrics/insertSize.ssp", Map("showPlot" -> true))
)
else Nil) ++ (if (wgsExecuted) List("Whole genome coverage" -> ReportSection("/nl/lumc/sasc/biopet/pipelines/bammetrics/wgsHistogram.ssp",
Map("showPlot" -> true)))
else Nil) ++
(if (rnaExecuted) List("Rna coverage" -> ReportSection("/nl/lumc/sasc/biopet/pipelines/bammetrics/rnaHistogram.ssp",
......
......@@ -50,22 +50,22 @@ class BamMetricsTest extends TestNGSuite with Matchers {
@DataProvider(name = "bammetricsOptions")
def bammetricsOptions = {
val rois = Array(0, 1, 2, 3)
val amplicon = Array(true, false)
val rna = Array(true, false)
val bool = Array(true, false)
for (
rois <- rois;
amplicon <- amplicon;
rna <- rna
) yield Array(rois, amplicon, rna)
amplicon <- bool;
rna <- bool;
wgs <- bool
) yield Array(rois, amplicon, rna, wgs)
}
@Test(dataProvider = "bammetricsOptions")
def testBamMetrics(rois: Int, amplicon: Boolean, rna: Boolean) = {
val map = ConfigUtils.mergeMaps(Map("output_dir" -> BamMetricsTest.outputDir),
def testBamMetrics(rois: Int, amplicon: Boolean, rna: Boolean, wgs: Boolean) = {
val map = ConfigUtils.mergeMaps(Map("output_dir" -> BamMetricsTest.outputDir, "rna_metrics" -> rna, "wgs_metrics" -> wgs),
Map(BamMetricsTest.executables.toSeq: _*)) ++
(if (amplicon) Map("amplicon_bed" -> "amplicon.bed") else Map()) ++
(if (rna) Map("transcript_refflat" -> "transcripts.refFlat") else Map()) ++
(if (rna) Map("annotation_refflat" -> "transcripts.refFlat") else Map()) ++
Map("regions_of_interest" -> (1 to rois).map("roi_" + _ + ".bed").toList)
val bammetrics: BamMetrics = initPipeline(map)
......@@ -77,7 +77,7 @@ class BamMetricsTest extends TestNGSuite with Matchers {
var regions: Int = rois + (if (amplicon) 1 else 0)
bammetrics.functions.count(_.isInstanceOf[CollectRnaSeqMetrics]) shouldBe (if (rna) 1 else 0)
bammetrics.functions.count(_.isInstanceOf[CollectWgsMetrics]) shouldBe (if (rna) 0 else 1)
bammetrics.functions.count(_.isInstanceOf[CollectWgsMetrics]) shouldBe (if (wgs) 1 else 0)
bammetrics.functions.count(_.isInstanceOf[CollectMultipleMetrics]) shouldBe 1
bammetrics.functions.count(_.isInstanceOf[CalculateHsMetrics]) shouldBe (if (amplicon) 1 else 0)
bammetrics.functions.count(_.isInstanceOf[CollectTargetedPcrMetrics]) shouldBe (if (amplicon) 1 else 0)
......
#import(java.io.File)
#import(scala.io.Source)
<%@ var rootPath: String %>
<%@ var kronaXml: File %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta charset="utf-8"/>
<link rel="shortcut icon" href="${rootPath}ext/img/krona/favicon.ico"/>
<!--<script id="notfound">window.onload=function(){document.body.innerHTML="Could not get resources from \"http://krona.sourceforge.net\"."}</script>-->
<script src="${rootPath}ext/js/krona-2.0.js"></script>
</head>
<body>
<img id="hiddenImage" src="${rootPath}ext/img/krona/hidden.png" style="display:none"/>
<img id="loadingImage" src="${rootPath}ext/img/krona/loading.gif" style="display:none"/>
<noscript>Javascript must be enabled to view this page.</noscript>
<div style="display:none">
<%
val reader = Source.fromFile(kronaXml)
val xml = reader.getLines().mkString("\n")
reader.close()
%>
${unescape(xml)}
</div></body></html>
......@@ -148,7 +148,7 @@
${name}
</h3>
</div>
${unescape(section.render(args))}
${unescape(section.render(args ++ Map("args" -> args)))}
</div>
#end
</div>
......
......@@ -23,9 +23,7 @@ import nl.lumc.sasc.biopet.core.report.ReportBuilderExtension
import nl.lumc.sasc.biopet.utils.Logging
import org.broadinstitute.gatk.queue.{ QScript, QSettings }
import org.broadinstitute.gatk.queue.function.QFunction
import org.broadinstitute.gatk.queue.function.scattergather.ScatterGatherableFunction
import org.broadinstitute.gatk.queue.util.{ Logging => GatkLogging }
import org.broadinstitute.gatk.utils.commandline.Argument
/** Base for biopet pipeline */
trait BiopetQScript extends Configurable with GatkLogging { qscript: QScript =>
......@@ -99,12 +97,13 @@ trait BiopetQScript extends Configurable with GatkLogging { qscript: QScript =>
inputFiles.foreach { i =>
if (!i.file.exists()) Logging.addError(s"Input file does not exist: ${i.file}")
else if (!i.file.canRead) Logging.addError(s"Input file can not be read: ${i.file}")
if (!i.file.canRead) Logging.addError(s"Input file can not be read: ${i.file}")
if (!i.file.isAbsolute) Logging.addError(s"Input file should be an absolute path: ${i.file}")
}
functions.filter(_.jobOutputFile == null).foreach(f => {
try {
f.jobOutputFile = new File(f.firstOutput.getAbsoluteFile.getParent, "." + f.firstOutput.getName + "." + configName + ".out")
f.jobOutputFile = new File(f.firstOutput.getAbsoluteFile.getParent, "." + f.firstOutput.getName + "." + f.getClass.getSimpleName + ".out")
} catch {
case e: NullPointerException => logger.warn(s"Can't generate a jobOutputFile for $f")
}
......
......@@ -94,7 +94,7 @@ trait PipelineCommand extends MainCommand with GatkLogging with ImplicitConversi
}
if (!args.contains("-retry") && !args.contains("--retry_failed")) {
val retry: Int = globalConfig(pipelineName, Nil, "retry", default = 5)
logger.info("No retry flag found, ")
logger.info(s"No retry flag found, set to default value of '$retry'")
argv ++= List("-retry", retry.toString)
}
BiopetQCommandLine.main(argv)
......
......@@ -71,7 +71,7 @@ trait Reference extends Configurable {
val file: File = config("reference_fasta")
checkFasta(file)
val dict = new File(file.getAbsolutePath.stripSuffix(".fa").stripSuffix(".fasta") + ".dict")
val dict = new File(file.getAbsolutePath.stripSuffix(".fa").stripSuffix(".fasta").stripSuffix(".fna") + ".dict")
val fai = new File(file.getAbsolutePath + ".fai")
this match {
......
package nl.lumc.sasc.biopet.core.annotations
import nl.lumc.sasc.biopet.core.BiopetQScript
import nl.lumc.sasc.biopet.core.BiopetQScript.InputFile
import nl.lumc.sasc.biopet.utils.LazyCheck
import org.broadinstitute.gatk.queue.QScript
/**
* Created by pjvan_thof on 1/12/16.
*/
trait AnnotationGtf extends BiopetQScript { qscript: QScript =>
/** GTF reference file */
lazy val annotationGtf: File = {
val file: File = config("annotation_gtf", freeVar = true)
inputFiles :+ InputFile(file, config("annotation_gtf_md5", freeVar = true))
file
}
}
trait AnnotationRefFlat extends BiopetQScript { qscript: QScript =>
/** RefFlat reference file */
lazy val annotationRefFlat = new LazyCheck({
val file: File = config("annotation_refflat", freeVar = true)
inputFiles :+ InputFile(file, config("annotation_refflat_md5", freeVar = true))
file
})
}
trait RibosomalRefFlat extends BiopetQScript { qscript: QScript =>
/** Ribosomal refFlat reference file */
lazy val ribosomalRefFlat = new LazyCheck({
val file: Option[File] = config("ribosome_refflat", freeVar = true)
file match {
case Some(f) => inputFiles :+ InputFile(f, config("ribosome_refflat_md5", freeVar = true))
case _ =>
}
file
})
}
......@@ -15,6 +15,8 @@
*/
package nl.lumc.sasc.biopet.core.extensions
import java.io.File
import nl.lumc.sasc.biopet.core.BiopetCommandLineFunction
import nl.lumc.sasc.biopet.utils.rscript.Rscript
......@@ -28,7 +30,7 @@ trait RscriptCommandLineFunction extends BiopetCommandLineFunction with Rscript
executable = rscriptExecutable
override def beforeGraph(): Unit = {
checkScript(Some(jobTempDir))
checkScript(Some(new File(".queue" + File.separator + "tmp")))
}
def cmdLine: String = repeat(cmd)
......
......@@ -21,6 +21,9 @@ from __future__ import print_function
__author__="Wai Yi Leung"
import sys
import re
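# IUPAC ambiguity codes; these bases are replaced with 'N' further down.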
upacPatern = re.compile(r'[RYKMSWBDHV]')
if __name__ == "__main__":
"""
......@@ -46,4 +49,5 @@ if __name__ == "__main__":
if new_size == 0:
l[5] = ""
l[2] = upacPatern.sub("N", l[2])
print("\t".join(map(str, l)))
package nl.lumc.sasc.biopet.extensions
import java.io.File
import nl.lumc.sasc.biopet.core.{ Version, BiopetCommandLineFunction }
import nl.lumc.sasc.biopet.utils.config.Configurable
import org.broadinstitute.gatk.utils.commandline.Input
import scala.util.matching.Regex
/**
* Created by pjvanthof on 16/12/15.
*/
class Flash(val root: Configurable) extends BiopetCommandLineFunction with Version {
executable = config("exe", default = "flash", freeVar = false)
/** Command to get version of executable */
def versionCommand: String = executable + " --version"
/** Regex to get version from version command output */
def versionRegex: Regex = """FLASH (v.*)""".r
@Input(required = true)
var fastqR1: File = _
@Input(required = true)
var fastqR2: File = _
var minOverlap: Option[Int] = config("min_overlap")
var maxOverlap: Option[Int] = config("max_overlap")
var maxMismatchDensity: Option[Double] = config("max_mismatch_density")
var allowOuties: Boolean = config("allow_outies", default = false)
var phredOffset: Option[Int] = config("phred_offset")
var readLen: Option[Int] = config("read_len")
var fragmentLen: Option[Int] = config("fragment_len")
var fragmentLenStddev: Option[Int] = config("fragment_len_stddev")
var capMismatchQuals: Boolean = config("cap_mismatch_quals", default = false)
var interleavedInput: Boolean = config("interleaved-input", default = false)
var interleavedOutput: Boolean = config("interleaved_output", default = false)
var interleaved: Boolean = config("interleaved", default = false)
var tabDelimitedInput: Boolean = config("tab_delimited_input", default = false)
var tabDelimitedOutput: Boolean = config("tab_delimited_output", default = false)
var outputPrefix: String = config("output_prefix", default = "out")
var outputDirectory: File = _
var compress: Boolean = config("compress", default = false)
var compressProg: Option[String] = config("compress_prog")
var compressProgArgs: Option[String] = config("compress_prog_args")
var outputSuffix: Option[String] = config("output_suffix")
private def suffix = outputSuffix.getOrElse("fastq") + (if (compress) ".gz" else "")
def combinedFastq = new File(outputDirectory, s"$outputPrefix.extendedFrags.$suffix")
def notCombinedR1 = new File(outputDirectory, s"$outputPrefix.notCombined_1.$suffix")
def notCombinedR2 = new File(outputDirectory, s"$outputPrefix.notCombined_2.$suffix")
def outputHistogramTable = new File(outputDirectory, s"$outputPrefix.hist")
def outputHistogram = new File(outputDirectory, s"$outputPrefix.histogram")
override def beforeGraph(): Unit = {
super.beforeGraph()
outputFiles :::= combinedFastq :: notCombinedR1 ::
notCombinedR2 :: outputHistogramTable :: outputHistogram :: Nil
}
def cmdLine = executable +
optional("-m", minOverlap) +
optional("-M", maxOverlap) +
optional("-x", maxMismatchDensity) +
conditional(allowOuties, "--allow-outies") +
optional("--phred-offset", phredOffset) +
optional("--read-len", readLen) +
optional("--fragment-len", fragmentLen) +
optional("--fragment-len-stddev", fragmentLenStddev) +
conditional(capMismatchQuals, "--cap-mismatch-quals") +
conditional(interleavedInput, "--interleaved-input") +
conditional(interleavedOutput, "--interleaved-output") +
conditional(interleaved, "--interleaved") +
conditional(tabDelimitedInput, "--tab-delimited-input") +
conditional(tabDelimitedOutput, "--tab-delimited-output") +
optional("--output-prefix", outputPrefix) +
required("--output-directory", outputDirectory) +
conditional(compress, "--compress") +
optional("--compress-prog", compressProg) +
optional("--compress-prog-args", compressProgArgs) +
optional("--output-suffix", outputSuffix) +
optional("--threads", threads) +
required(fastqR1) +
required(fastqR2)
}
......@@ -49,52 +49,32 @@ class Ln(val root: Configurable) extends InProcessFunction with Configurable {
/** return commandline to execute */
lazy val cmd: String = {
lazy val inCanonical: String = {
val inCanonical: String = {
// need to remove "/~" to correctly expand path with tilde
input.getAbsolutePath.replace("/~", "")
}
lazy val outCanonical: String = output.getAbsolutePath.replace("/~", "")
val outCanonical: String = output.getAbsolutePath.replace("/~", "")
lazy val inToks: Array[String] = inCanonical.split(File.separator)
if (relative) {
val inToks: Array[String] = inCanonical.split(File.separator)
lazy val outToks: Array[String] = outCanonical.split(File.separator)
val outToks: Array[String] = outCanonical.split(File.separator)
lazy val commonPrefixLength: Int = {
val maxLength = scala.math.min(inToks.length, outToks.length)
var i: Int = 0
while (i < maxLength && inToks(i) == outToks(i)) i += 1
i
}
val commonPrefixLength: Int = {
val maxLength = scala.math.min(inToks.length, outToks.length)
var i: Int = 0
while (i < maxLength && inToks(i) == outToks(i)) i += 1
i
}
lazy val inUnique: String = {
inToks.slice(commonPrefixLength, inToks.length).mkString(File.separator)
}
val inUnique = inToks.slice(commonPrefixLength, inToks.length)
lazy val outUnique: String = {
outToks.slice(commonPrefixLength, outToks.length).mkString(File.separator)
}
val outUnique = outToks.slice(commonPrefixLength, outToks.length)
lazy val inRelative: String = {
// calculate 'distance' from output directory to input
// which is the number of directory walks required to get to the inUnique directory from outDir
val dist =
// relative path differs depending on which of the input or target is in the 'higher' directory
if (inToks.length > outToks.length)
scala.math.max(0, inUnique.split(File.separator).length - 1)
else
scala.math.max(0, outUnique.split(File.separator).length - 1)
val result =
if (dist == 0 || inToks.length > outToks.length)
inUnique
else
((".." + File.separator) * dist) + inUnique
result
}
val inRelative: String =
((".." + File.separator) * (outUnique.length - 1)) + inUnique.mkString(File.separator)
if (relative) {
// workaround until we have `ln` that works with relative path (i.e. `ln -r`)
"ln -s " + inRelative + " " + outCanonical
} else {
......
......@@ -54,10 +54,10 @@ object Zcat {
zcat
}
def apply(root: Configurable, input: List[File], output: File): Zcat = {
def apply(root: Configurable, input: List[File], output: File = null): Zcat = {
val zcat = new Zcat(root)
zcat.input = input
zcat.output = output
if (output != null) zcat.output = output
zcat
}
}
\ No newline at end of file
......@@ -53,7 +53,7 @@ class Kraken(val root: Configurable) extends BiopetCommandLineFunction with Vers
def versionCommand = executable + " --version"