biowdl / tasks · Commit 589712ab

Authored 5 years ago by JasperBoom

Merge branch 'develop' of https://github.com/biowdl/tasks into BIOWDL-380

Parents: 4255eca7, 7f1545f4

Showing 5 of 25 changed files:
star.wdl: +19 −1
strelka.wdl: +43 −0
talon.wdl: +95 −218
transcriptclean.wdl: +45 −112
vardict.wdl: +30 −0

Total: 232 additions, 331 deletions
star.wdl (+19 −1)
@@ -19,7 +19,7 @@ task Star {
String dockerImage = "quay.io/biocontainers/star:2.7.3a--0"
}
-    #TODO Needs to be extended for all possible output extensions
+    #TODO Could be extended for all possible output extensions
Map[String, String] samOutputNames = {"BAM SortedByCoordinate": "sortedByCoord.out.bam"}
command {
...
...
@@ -48,6 +48,24 @@ task Star {
memory: memory
docker: dockerImage
}
parameter_meta {
inputR1: {description: "The first-/single-end FastQ files.", category: "required"}
inputR2: {description: "The second-end FastQ files (in the same order as the first-end files).", category: "common"}
indexFiles: {description: "The star index files.", category: "required"}
outFileNamePrefix: {description: "The prefix for the output files. May include directories.", category: "required"}
outSAMtype: {description: "The type of alignment file to be produced. Currently only `BAM SortedByCoordinate` is supported.", category: "advanced"}
readFilesCommand: {description: "Equivalent to star's `--readFilesCommand` option.", category: "advanced"}
outStd: {description: "Equivalent to star's `--outStd` option.", category: "advanced"}
twopassMode: {description: "Equivalent to star's `--twopassMode` option.", category: "advanced"}
outSAMattrRGline: {description: "The readgroup lines for the fastq pairs given (in the same order as the fastq files).", category: "common"}
outSAMunmapped: {description: "Equivalent to star's `--outSAMunmapped` option.", category: "advanced"}
limitBAMsortRAM: {description: "Equivalent to star's `--limitBAMsortRAM` option.", category: "advanced"}
runThreadN: {description: "The number of threads to use.", category: "advanced"}
memory: {description: "The amount of memory this job will use.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
}
}
task MakeStarRGline {
...
...
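The `samOutputNames` map in the Star hunk above keys on the space-joined value of `outSAMtype` to pick the suffix of STAR's alignment output. A minimal Python sketch of that lookup, assuming STAR's usual `Aligned.` infix between the prefix and the suffix (the function name is illustrative, not part of the WDL):

```python
# Mirrors star.wdl's samOutputNames map: the value of STAR's --outSAMtype
# option, joined with spaces, selects the output file suffix. Only
# "BAM SortedByCoordinate" is supported, matching the task's parameter_meta.
SAM_OUTPUT_NAMES = {"BAM SortedByCoordinate": "sortedByCoord.out.bam"}


def star_output_path(out_file_name_prefix: str,
                     out_sam_type: str = "BAM SortedByCoordinate") -> str:
    """Return the alignment path STAR produces for this prefix and outSAMtype.

    Unknown outSAMtype values raise a KeyError, analogous to the map lookup
    failing in the WDL task.
    """
    return out_file_name_prefix + "Aligned." + SAM_OUTPUT_NAMES[out_sam_type]
```
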
strelka.wdl (+43 −0)
@@ -44,6 +44,23 @@ task Germline {
cpu: cores
memory: "~{memoryGb}G"
}
parameter_meta {
runDir: {description: "The directory to use as run/output directory.", category: "common"}
bams: {description: "The input BAM files.", category: "required"}
indexes: {description: "The indexes for the input BAM files.", category: "required"}
referenceFasta: {description: "The reference fasta file which was also used for mapping.", category: "required"}
referenceFastaFai: {description: "The index for the reference fasta file.", category: "required"}
callRegions: {description: "The bed file which indicates the regions to operate on.", category: "common"}
callRegionsIndex: {description: "The index of the bed file which indicates the regions to operate on.", category: "common"}
exome: {description: "Whether or not the data is from exome sequencing.", category: "common"}
rna: {description: "Whether or not the data is from RNA sequencing.", category: "common"}
cores: {description: "The number of cores to use.", category: "advanced"}
memoryGb: {description: "The amount of memory this job will use in Gigabytes.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
}
}
task Somatic {
...
...
@@ -96,4 +113,30 @@ task Somatic {
cpu: cores
memory: "~{memoryGb}G"
}
parameter_meta {
runDir: {description: "The directory to use as run/output directory.", category: "common"}
normalBam: {description: "The normal/control sample's BAM file.", category: "required"}
normalBamIndex: {description: "The index for the normal/control sample's BAM file.", category: "required"}
tumorBam: {description: "The tumor/case sample's BAM file.", category: "required"}
tumorBamIndex: {description: "The index for the tumor/case sample's BAM file.", category: "required"}
referenceFasta: {description: "The reference fasta file which was also used for mapping.", category: "required"}
referenceFastaFai: {description: "The index for the reference fasta file.", category: "required"}
callRegions: {description: "The bed file which indicates the regions to operate on.", category: "common"}
callRegionsIndex: {description: "The index of the bed file which indicates the regions to operate on.", category: "common"}
indelCandidatesVcf: {description: "An indel candidates VCF file from manta.", category: "advanced"}
indelCandidatesVcfIndex: {description: "The index for the indel candidates VCF file.", category: "advanced"}
exome: {description: "Whether or not the data is from exome sequencing.", category: "common"}
cores: {description: "The number of cores to use.", category: "advanced"}
memoryGb: {description: "The amount of memory this job will use in Gigabytes.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
}
meta {
WDL_AID: {
exclude: ["doNotDefineThis"]
}
}
}
\ No newline at end of file
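The `parameter_meta` entries added above pair each input with a description and a category, and the `meta { WDL_AID: { exclude: [...] } }` block tells the documentation generator which names to skip. A hedged sketch of how such a tool could group these entries by category; the `group_parameters` helper is illustrative and not WDL-AID's actual API, and the `doNotDefineThis` description is invented for the example:

```python
# Sketch: group parameter_meta entries (name -> {description, category})
# by category while honoring a WDL_AID-style exclude list.
from collections import defaultdict

# Entries copied from the Somatic task's parameter_meta, plus the excluded
# name from its meta block (its description here is a placeholder).
PARAMETER_META = {
    "runDir": {"description": "The directory to use as run/output directory.",
               "category": "common"},
    "normalBam": {"description": "The normal/control sample's BAM file.",
                  "category": "required"},
    "doNotDefineThis": {"description": "Internal input.",
                        "category": "advanced"},
}


def group_parameters(parameter_meta, exclude=()):
    """Group parameter names by category, skipping excluded names."""
    groups = defaultdict(list)
    for name, meta in parameter_meta.items():
        if name in exclude:
            continue
        groups[meta["category"]].append(name)
    return dict(groups)
```
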
talon.wdl (+95 −218)
@@ -30,7 +30,6 @@ task CreateAbundanceFileFromDatabase {
File? whitelistFile
File? datasetsFile
Int cores = 1
String memory = "4G"
String dockerImage = "biocontainers/talon:v4.4.1_cv1"
}
...
...
@@ -52,40 +51,25 @@ task CreateAbundanceFileFromDatabase {
}
runtime {
cpu: cores
memory: memory
docker: dockerImage
}
parameter_meta {
databaseFile: {
description: "TALON database.",
category: "required"
}
annotationVersion: {
description: "Which annotation version to use.",
category: "required"
}
genomeBuild: {
description: "Genome build to use.",
category: "required"
}
outputPrefix: {
description: "Output directory path + output file prefix.",
category: "required"
}
whitelistFile: {
description: "Whitelist file of transcripts to include in the output.",
category: "advanced"
}
datasetsFile: {
description: "A file indicating which datasets should be included.",
category: "advanced"
}
outputAbundanceFile: {
description: "Abundance for each transcript in the TALON database across datasets.",
category: "required"
}
# inputs
databaseFile: {description: "TALON database.", category: "required"}
annotationVersion: {description: "Which annotation version to use.", category: "required"}
genomeBuild: {description: "Genome build to use.", category: "required"}
outputPrefix: {description: "Output directory path + output file prefix.", category: "required"}
whitelistFile: {description: "Whitelist file of transcripts to include in the output.", category: "advanced"}
datasetsFile: {description: "A file indicating which datasets should be included.", category: "advanced"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
# outputs
outputAbundanceFile: {description: "Abundance for each transcript in the TALON database across datasets."}
}
}
...
...
@@ -100,7 +84,6 @@ task CreateGtfFromDatabase {
File? whitelistFile
File? datasetFile
Int cores = 1
String memory = "4G"
String dockerImage = "biocontainers/talon:v4.4.1_cv1"
}
...
...
@@ -123,44 +106,25 @@ task CreateGtfFromDatabase {
}
runtime {
cpu: cores
memory: memory
docker: dockerImage
}
parameter_meta {
databaseFile: {
description: "TALON database.",
category: "required"
}
genomeBuild: {
description: "Genome build to use.",
category: "required"
}
annotationVersion: {
description: "Which annotation version to use.",
category: "required"
}
outputPrefix: {
description: "Output directory path + output file prefix.",
category: "required"
}
observedInDataset: {
description: "The output will only include transcripts that were observed at least once.",
category: "advanced"
}
whitelistFile: {
description: "Whitelist file of transcripts to include in the output.",
category: "advanced"
}
datasetFile: {
description: "A file indicating which datasets should be included.",
category: "advanced"
}
outputGTFfile: {
description: "The genes, transcripts, and exons stored a TALON database in GTF format.",
category: "required"
}
# inputs
databaseFile: {description: "TALON database.", category: "required"}
genomeBuild: {description: "Genome build to use.", category: "required"}
annotationVersion: {description: "Which annotation version to use.", category: "required"}
outputPrefix: {description: "Output directory path + output file prefix.", category: "required"}
observedInDataset: {description: "The output will only include transcripts that were observed at least once.", category: "advanced"}
whitelistFile: {description: "Whitelist file of transcripts to include in the output.", category: "advanced"}
datasetFile: {description: "A file indicating which datasets should be included.", category: "advanced"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
# outputs
outputGTFfile: {description: "The genes, transcripts, and exons stored in a TALON database, in GTF format."}
}
}
...
...
@@ -172,7 +136,6 @@ task FilterTalonTranscripts {
File? pairingsFile
Int cores = 1
String memory = "4G"
String dockerImage = "biocontainers/talon:v4.4.1_cv1"
}
...
...
@@ -192,28 +155,18 @@ task FilterTalonTranscripts {
}
runtime {
cpu: cores
memory: memory
docker: dockerImage
}
parameter_meta {
databaseFile: {
description: "TALON database.",
category: "required"
}
annotationVersion: {
description: "Which annotation version to use.",
category: "required"
}
outputPrefix: {
description: "Output directory path + output file prefix.",
category: "required"
}
pairingsFile: {
description: "A file indicating which datasets should be considered together.",
category: "advanced"
}
databaseFile: {description: "TALON database.", category: "required"}
annotationVersion: {description: "Which annotation version to use.", category: "required"}
outputPrefix: {description: "Output directory path + output file prefix.", category: "required"}
pairingsFile: {description: "A file indicating which datasets should be considered together.", category: "advanced"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
}
}
...
...
@@ -225,7 +178,6 @@ task GetReadAnnotations {
File? datasetFile
Int cores = 1
String memory = "4G"
String dockerImage = "biocontainers/talon:v4.4.1_cv1"
}
...
...
@@ -245,32 +197,22 @@ task GetReadAnnotations {
}
runtime {
cpu: cores
memory: memory
docker: dockerImage
}
parameter_meta {
databaseFile: {
description: "TALON database.",
category: "required"
}
genomeBuild: {
description: "Genome build to use.",
category: "required"
}
outputPrefix: {
description: "Output directory path + output file prefix.",
category: "required"
}
datasetFile: {
description: "A file indicating which datasets should be included.",
category: "advanced"
}
outputAnnotation: {
description: "Read-specific annotation information from a TALON database.",
category: "required"
}
# inputs
databaseFile: {description: "TALON database.", category: "required"}
genomeBuild: {description: "Genome build to use.", category: "required"}
outputPrefix: {description: "Output directory path + output file prefix.", category: "required"}
datasetFile: {description: "A file indicating which datasets should be included.", category: "advanced"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
# outputs
outputAnnotation: {description: "Read-specific annotation information from a TALON database."}
}
}
...
...
@@ -285,7 +227,6 @@ task InitializeTalonDatabase {
Int cutoff3p = 300
String outputPrefix
Int cores = 1
String memory = "10G"
String dockerImage = "biocontainers/talon:v4.4.1_cv1"
}
...
...
@@ -309,48 +250,26 @@ task InitializeTalonDatabase {
}
runtime {
cpu: cores
memory: memory
docker: dockerImage
}
parameter_meta {
GTFfile: {
description: "GTF annotation containing genes, transcripts, and edges.",
category: "required"
}
genomeBuild: {
description: "Name of genome build that the GTF file is based on (ie hg38).",
category: "required"
}
annotationVersion: {
description: "Name of supplied annotation (will be used to label data).",
category: "required"
}
minimumLength: {
description: "Minimum required transcript length.",
category: "common"
}
novelIDprefix: {
description: "Prefix for naming novel discoveries in eventual TALON runs.",
category: "common"
}
cutoff5p: {
description: "Maximum allowable distance (bp) at the 5' end during annotation.",
category: "advanced"
}
cutoff3p: {
description: "Maximum allowable distance (bp) at the 3' end during annotation.",
category: "advanced"
}
outputPrefix: {
description: "Output directory path + output file prefix.",
category: "required"
}
outputDatabase: {
description: "TALON database.",
category: "required"
}
# inputs
GTFfile: {description: "GTF annotation containing genes, transcripts, and edges.", category: "required"}
genomeBuild: {description: "Name of the genome build that the GTF file is based on (i.e. hg38).", category: "required"}
annotationVersion: {description: "Name of the supplied annotation (will be used to label data).", category: "required"}
minimumLength: {description: "Minimum required transcript length.", category: "common"}
novelIDprefix: {description: "Prefix for naming novel discoveries in eventual TALON runs.", category: "common"}
cutoff5p: {description: "Maximum allowable distance (bp) at the 5' end during annotation.", category: "advanced"}
cutoff3p: {description: "Maximum allowable distance (bp) at the 3' end during annotation.", category: "advanced"}
outputPrefix: {description: "Output directory path + output file prefix.", category: "required"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
# outputs
outputDatabase: {description: "TALON database."}
}
}
...
...
@@ -358,7 +277,6 @@ task ReformatGtf {
input {
File GTFfile
Int cores = 1
String memory = "4G"
String dockerImage = "biocontainers/talon:v4.4.1_cv1"
}
...
...
@@ -374,16 +292,15 @@ task ReformatGtf {
}
runtime {
cpu: cores
memory: memory
docker: dockerImage
}
parameter_meta {
GTFfile: {
description: "GTF annotation containing genes, transcripts, and edges.",
category: "required"
}
GTFfile: {description: "GTF annotation containing genes, transcripts, and edges.", category: "required"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
}
}
...
...
@@ -395,7 +312,6 @@ task SummarizeDatasets {
File? datasetGroupsCSV
Int cores = 1
String memory = "4G"
String dockerImage = "biocontainers/talon:v4.4.1_cv1"
}
...
...
@@ -415,32 +331,22 @@ task SummarizeDatasets {
}
runtime {
cpu: cores
memory: memory
docker: dockerImage
}
parameter_meta {
databaseFile: {
description: "TALON database.",
category: "required"
}
setVerbose: {
description: "Print out the counts in terminal.",
category: "advanced"
}
outputPrefix: {
description: "Output directory path + output file prefix.",
category: "required"
}
datasetGroupsCSV: {
description: "File of comma-delimited dataset groups to process together.",
category: "advanced"
}
outputSummaryFile: {
description: "Tab-delimited file of gene and transcript counts for each dataset.",
category: "required"
}
# inputs
databaseFile: {description: "TALON database.", category: "required"}
setVerbose: {description: "Print out the counts in the terminal.", category: "advanced"}
outputPrefix: {description: "Output directory path + output file prefix.", category: "required"}
datasetGroupsCSV: {description: "File of comma-delimited dataset groups to process together.", category: "advanced"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
# outputs
outputSummaryFile: {description: "Tab-delimited file of gene and transcript counts for each dataset."}
}
}
...
...
@@ -496,53 +402,24 @@ task Talon {
}
parameter_meta {
SAMfiles: {
description: "Input SAM files.",
category: "required"
}
organism: {
description: "The name of the organism from which the samples originated.",
category: "required"
}
sequencingPlatform: {
description: "The sequencing platform used to generate long reads.",
category: "required"
}
databaseFile: {
description: "TALON database. Created using initialize_talon_database.py.",
category: "required"
}
genomeBuild: {
description: "Genome build (i.e. hg38) to use.",
category: "required"
}
minimumCoverage: {
description: "Minimum alignment coverage in order to use a SAM entry.",
category: "common"
}
minimumIdentity: {
description: "Minimum alignment identity in order to use a SAM entry.",
category: "common"
}
outputPrefix: {
description: "Output directory path + output file prefix.",
category: "required"
}
outputUpdatedDatabase: {
description: "Updated TALON database.",
category: "required"
}
outputLog: {
description: "Log file from TALON run.",
category: "required"
}
outputAnnot: {
description: "Read annotation file from TALON run.",
category: "required"
}
outputConfigFile: {
description: "The TALON configuration file.",
category: "required"
}
# inputs
SAMfiles: {description: "Input SAM files.", category: "required"}
organism: {description: "The name of the organism from which the samples originated.", category: "required"}
sequencingPlatform: {description: "The sequencing platform used to generate long reads.", category: "required"}
databaseFile: {description: "TALON database. Created using initialize_talon_database.py.", category: "required"}
genomeBuild: {description: "Genome build (i.e. hg38) to use.", category: "required"}
minimumCoverage: {description: "Minimum alignment coverage in order to use a SAM entry.", category: "common"}
minimumIdentity: {description: "Minimum alignment identity in order to use a SAM entry.", category: "common"}
outputPrefix: {description: "Output directory path + output file prefix.", category: "required"}
cores: {description: "The number of cores to be used.", category: "advanced"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
# outputs
outputUpdatedDatabase: {description: "Updated TALON database."}
outputLog: {description: "Log file from TALON run."}
outputAnnot: {description: "Read annotation file from TALON run."}
outputConfigFile: {description: "The TALON configuration file."}
}
}
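The Talon task's `minimumCoverage` and `minimumIdentity` inputs above gate which SAM entries are used, per their parameter_meta descriptions. A hedged Python sketch of that kind of dual-threshold check (the function name and example values are illustrative; TALON's actual defaults are not shown in this diff):

```python
def passes_alignment_filters(coverage: float, identity: float,
                             minimum_coverage: float,
                             minimum_identity: float) -> bool:
    """Return True if a SAM entry meets both alignment thresholds.

    Mirrors the talon.wdl parameter_meta: minimumCoverage is the minimum
    alignment coverage and minimumIdentity the minimum alignment identity
    required "in order to use a SAM entry".
    """
    return coverage >= minimum_coverage and identity >= minimum_identity
```
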
transcriptclean.wdl (+45 −112)
@@ -27,7 +27,6 @@ task GetSJsFromGtf {
String outputPrefix
Int minIntronSize = 21
Int cores = 1
String memory = "8G"
String dockerImage = "biocontainers/transcriptclean:v2.0.2_cv1"
}
...
...
@@ -47,32 +46,21 @@ task GetSJsFromGtf {
}
runtime {
cpu: cores
memory: memory
docker: dockerImage
}
parameter_meta {
GTFfile: {
description: "Input GTF file",
category: "required"
}
genomeFile: {
description: "Reference genome",
category: "required"
}
minIntronSize: {
description: "Minimum size of intron to consider a junction.",
category: "advanced"
}
outputPrefix: {
description: "Output directory path + output file prefix.",
category: "required"
}
outputSJsFile: {
description: "Extracted splice junctions.",
category: "required"
}
# inputs
GTFfile: {description: "Input GTF file.", category: "required"}
genomeFile: {description: "Reference genome fasta file.", category: "required"}
minIntronSize: {description: "Minimum size of intron to consider a junction.", category: "advanced"}
outputPrefix: {description: "Output directory path + output file prefix.", category: "required"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
# outputs
outputSJsFile: {description: "Extracted splice junctions."}
}
}
...
...
@@ -81,7 +69,6 @@ task GetTranscriptCleanStats {
File transcriptCleanSAMfile
String outputPrefix
Int cores = 1
String memory = "4G"
String dockerImage = "biocontainers/transcriptclean:v2.0.2_cv1"
}
...
...
@@ -99,24 +86,20 @@ task GetTranscriptCleanStats {
}
runtime {
cpu: cores
memory: memory
docker: dockerImage
}
parameter_meta {
transcriptCleanSAMfile: {
description: "Output SAM file from TranscriptClean",
category: "required"
}
outputPrefix: {
description: "Output directory path + output file prefix.",
category: "required"
}
outputStatsFile: {
description: "Summary stats from TranscriptClean run.",
category: "required"
}
# inputs
transcriptCleanSAMfile: {description: "Output SAM file from TranscriptClean.", category: "required"}
outputPrefix: {description: "Output directory path + output file prefix.", category: "required"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
# outputs
outputStatsFile: {description: "Summary stats from TranscriptClean run."}
}
}
...
...
@@ -180,81 +163,31 @@ task TranscriptClean {
}
parameter_meta {
SAMfile: {
description: "Input SAM file containing transcripts to correct.",
category: "required"
}
referenceGenome: {
description: "Reference genome fasta file.",
category: "required"
}
maxLenIndel: {
description: "Maximum size indel to correct.",
category: "advanced"
}
maxSJoffset: {
description: "Maximum distance from annotated splice junction to correct.",
category: "advanced"
}
outputPrefix: {
description: "Output directory path + output file prefix.",
category: "required"
}
correctMismatches: {
description: "Set this to make TranscriptClean correct mismatches.",
category: "common"
}
correctIndels: {
description: "Set this to make TranscriptClean correct indels.",
category: "common"
}
correctSJs: {
description: "Set this to make TranscriptClean correct splice junctions.",
category: "common"
}
dryRun: {
description: "TranscriptClean will read in the data but don't do any correction.",
category: "advanced"
}
primaryOnly: {
description: "Only output primary mappings of transcripts.",
category: "advanced"
}
canonOnly: {
description: "Only output canonical transcripts and transcript containing annotated noncanonical junctions.",
category: "advanced"
}
bufferSize: {
description: "Number of lines to output to file at once by each thread during run.",
category: "common"
}
deleteTmp: {
description: "The temporary directory generated by TranscriptClean will be removed.",
category: "common"
}
spliceJunctionAnnotation: {
description: "Splice junction file.",
category: "common"
}
variantFile: {
description: "VCF formatted file of variants.",
category: "common"
}
outputTranscriptCleanFasta: {
description: "Fasta file containing corrected reads.",
category: "required"
}
outputTranscriptCleanLog: {
description: "Log file of TranscriptClean run.",
category: "required"
}
outputTranscriptCleanSAM: {
description: "SAM file containing corrected aligned reads.",
category: "required"
}
outputTranscriptCleanTElog: {
description: "TE log file of TranscriptClean run.",
category: "required"
}
# inputs
SAMfile: {description: "Input SAM file containing transcripts to correct.", category: "required"}
referenceGenome: {description: "Reference genome fasta file.", category: "required"}
maxLenIndel: {description: "Maximum size indel to correct.", category: "advanced"}
maxSJoffset: {description: "Maximum distance from annotated splice junction to correct.", category: "advanced"}
outputPrefix: {description: "Output directory path + output file prefix.", category: "required"}
correctMismatches: {description: "Set this to make TranscriptClean correct mismatches.", category: "common"}
correctIndels: {description: "Set this to make TranscriptClean correct indels.", category: "common"}
correctSJs: {description: "Set this to make TranscriptClean correct splice junctions.", category: "common"}
dryRun: {description: "TranscriptClean will read in the data but not perform any correction.", category: "advanced"}
primaryOnly: {description: "Only output primary mappings of transcripts.", category: "advanced"}
canonOnly: {description: "Only output canonical transcripts and transcripts containing annotated noncanonical junctions.", category: "advanced"}
bufferSize: {description: "Number of lines to output to file at once by each thread during run.", category: "common"}
deleteTmp: {description: "The temporary directory generated by TranscriptClean will be removed.", category: "common"}
spliceJunctionAnnotation: {description: "Splice junction file.", category: "common"}
variantFile: {description: "VCF formatted file of variants.", category: "common"}
cores: {description: "The number of cores to be used.", category: "advanced"}
memory: {description: "The amount of memory available to the job.", category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
# outputs
outputTranscriptCleanFasta: {description: "Fasta file containing corrected reads."}
outputTranscriptCleanLog: {description: "Log file of TranscriptClean run."}
outputTranscriptCleanSAM: {description: "SAM file containing corrected aligned reads."}
outputTranscriptCleanTElog: {description: "TE log file of TranscriptClean run."}
}
}
vardict.wdl (+30 −0)
@@ -69,4 +69,34 @@ task VarDict {
memory: memory
docker: dockerImage
}
parameter_meta {
tumorSampleName: {description: "The name of the tumor/case sample.", category: "required"}
tumorBam: {description: "The tumor/case sample's BAM file.", category: "required"}
tumorBamIndex: {description: "The index for the tumor/case sample's BAM file.", category: "required"}
normalSampleName: {description: "The name of the normal/control sample.", category: "common"}
normalBam: {description: "The normal/control sample's BAM file.", category: "common"}
normalBamIndex: {description: "The index for the normal/control sample's BAM file.", category: "common"}
referenceFasta: {description: "The reference fasta file.", category: "required"}
referenceFastaFai: {description: "The index for the reference fasta file.", category: "required"}
bedFile: {description: "A bed file describing the regions to operate on. These regions must be below 1e6 bases in size.", category: "required"}
outputVcf: {description: "The location to write the output VCF file to.", category: "required"}
chromosomeColumn: {description: "Equivalent to vardict-java's `-c` option.", category: "advanced"}
startColumn: {description: "Equivalent to vardict-java's `-S` option.", category: "advanced"}
endColumn: {description: "Equivalent to vardict-java's `-E` option.", category: "advanced"}
geneColumn: {description: "Equivalent to vardict-java's `-g` option.", category: "advanced"}
outputCandidateSomaticOnly: {description: "Equivalent to var2vcf_paired.pl or var2vcf_valid.pl's `-M` flag.", category: "advanced"}
outputAllVariantsAtSamePosition: {description: "Equivalent to var2vcf_paired.pl or var2vcf_valid.pl's `-A` flag.", category: "advanced"}
mappingQuality: {description: "Equivalent to var2vcf_paired.pl or var2vcf_valid.pl's `-Q` option.", category: "advanced"}
minimumTotalDepth: {description: "Equivalent to var2vcf_paired.pl or var2vcf_valid.pl's `-d` option.", category: "advanced"}
minimumVariantDepth: {description: "Equivalent to var2vcf_paired.pl or var2vcf_valid.pl's `-v` option.", category: "advanced"}
minimumAlleleFrequency: {description: "Equivalent to var2vcf_paired.pl or var2vcf_valid.pl's `-f` option.", category: "advanced"}
threads: {description: "The number of threads to use.", category: "advanced"}
memory: {description: "The amount of memory this job will use.", category: "advanced"}
javaXmx: {description: "The maximum memory available to the program. Should be lower than `memory` to accommodate JVM overhead.",
category: "advanced"}
dockerImage: {description: "The docker image used for this task. Changing this may result in errors which the developers may choose not to address.",
category: "advanced"}
}
}
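The VarDict hunk's parameter_meta notes that `javaXmx` "Should be lower than `memory` to accommodate JVM overhead". That relationship can be sketched as a small helper; the one-gigabyte default margin here is an illustrative assumption, not a value taken from the task:

```python
def java_xmx_from_memory(memory_gb: int, overhead_gb: int = 1) -> str:
    """Derive a JVM -Xmx value that stays below the job's memory limit.

    vardict.wdl's parameter_meta says javaXmx should be lower than `memory`
    to accommodate JVM overhead; overhead_gb is a hypothetical margin for
    non-heap JVM usage (metaspace, thread stacks, native buffers).
    """
    if memory_gb <= overhead_gb:
        raise ValueError("memory must exceed the reserved JVM overhead")
    return f"{memory_gb - overhead_gb}G"
```
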