From b1a46aafce22fb8e0cc00eff6b5caa7f2a3583a3 Mon Sep 17 00:00:00 2001
From: wyleung <w.y.leung@e-sensei.nl>
Date: Fri, 14 Nov 2014 16:52:48 +0100
Subject: [PATCH] Documentation change index file and split end-user and
 developers docu

---
 docs/about.md |  5 +--
 docs/index.md | 94 +++++++++++++++++++++++++++++++++++++++++++--------
 2 files changed, 80 insertions(+), 19 deletions(-)

diff --git a/docs/about.md b/docs/about.md
index 884f30e87..4695dab3c 100644
--- a/docs/about.md
+++ b/docs/about.md
@@ -6,12 +6,9 @@ We develop tools and pipelines for several purposes in analysis. Most of them
 share the same methods. So the basic idea is to let them work on the same 
 platform and reduce code duplication and increase maintainability.
 
-## Compute Cluster support
-
-
 ## The Team
 SASC:
-Currently our team excists out of 5 members
+Currently our team consists of 5 members
 
 - Leon Mei (LUMC-SASC) 
 - Wibowo Arindrarto (LUMC-SASC)
diff --git a/docs/index.md b/docs/index.md
index e55d0941f..e7ff4c67b 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -2,34 +2,65 @@
 ###### (Bio Pipeline Execution Tool)
 
 ## Introduction
+
+Biopet is an abbreviation of Bio Pipeline Execution Tool and provides several functionalities:
+
+ 1. Tools for working with sequencing data
+ 1. Pipelines for analyzing sequencing data
+ 1. Running analyses on a computing cluster (Open Grid Engine)
+ 1. Running analyses on your local desktop computer
+
 ### System Requirements
-- Java 7 JVM
-- Maven 3 (does not need to be on shark)
 
-### Compiling Biopet
+Biopet is built on top of GATK Queue, which requires having `java` installed on the analysis machine(s).
 
-1. Clone Biopet with `git clone git@git.lumc.nl:biopet/biopet.git`
-2. Go to Biopet directory
-3. run mvn_install_queue.sh, this install queue jars into the local maven repository
-4. run `mvn verify` to compile and package or do `mvn install` to install the jars also in local maven repository
+For end-users:
+
+ * Java 7 JVM
+ * Minimum of 2 GB RAM; more is needed when analyses are also run on this machine.
+ * [Cran R 2.15.3](http://cran.r-project.org/)
+
+For developers:
+
+ * OpenJDK 7 or Oracle JDK 7
+ * Minimum of 4 GB RAM {todo: provide a more accurate estimate for building}
+ * [Cran R 2.15.3](http://cran.r-project.org/)
+ * Maven 3
+ * [GATK + Queue](https://www.broadinstitute.org/gatk/download)
+ * IntelliJ IDEA or NetBeans 8.0 for development
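+
+Whether suitable `java`, `R` and (for developers) `mvn` versions are available can be checked from the command line:
+
+    $ java -version
+    $ R --version
+    $ mvn --version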
+
+## How to use
 
 ### Running a pipeline
+
 - Help: `java -jar Biopet(version).jar (pipeline of interest) -h`
 - Local: `java -jar Biopet(version).jar (pipeline of interest) (pipeline options) -run`
-- Shark: `java -jar Biopet(version).jar (pipeline of interest) (pipeline options) -qsub -jobParaEnv BWA -run`
+- Cluster: `java -jar Biopet(version).jar (pipeline of interest) (pipeline options) -qsub -jobParaEnv BWA -run`
 - DryRun: `java -jar Biopet(version).jar (pipeline of interest) (pipeline options)` 
-- DryRun(shark): `java -jar Biopet(version).jar (pipeline of interest) (pipeline options) -qsub -jobParaEnv BWA`
+- DryRun (cluster): `java -jar Biopet(version).jar (pipeline of interest) (pipeline options) -qsub -jobParaEnv BWA`
 
     - A dry run can be performed to check that the scheduling and creation of the pipeline jobs works correctly. Nothing is executed; only the job commands are created. If this succeeds, it is a good indication that your actual run will be successful as well.
     - Each pipeline can be found as an option inside the jar file Biopet[version].jar, which is located in the target directory and can be started with `java -jar <pipelineJarFile>`
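+
+For example, a dry run followed by an actual run on the cluster could look like the following (the jar name, pipeline and options are placeholders; substitute your own build and settings):
+
+    $ java -jar Biopet-0.2.0.jar flexiprep (pipeline options) -qsub -jobParaEnv BWA
+    $ java -jar Biopet-0.2.0.jar flexiprep (pipeline options) -qsub -jobParaEnv BWA -run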
 
-### Running a tool
+### Shark Compute Cluster specific
+
+On the SHARK compute cluster, a module is available that loads the necessary dependencies.
+
+    $ module load biopet/v0.2.0
+
+With this module loaded, the `java -jar Biopet-<version>.jar` invocation can be omitted and `biopet` can be started with:
+
+    $ biopet
+
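+For example, loading the module and then asking a pipeline for its help text could look like this (the pipeline name is only an illustration):
+
+    $ module load biopet/v0.2.0
+    $ biopet pipeline flexiprep -h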
 
 
-### Pipelines
+### Running pipelines
 
-- [Flexiprep](https://git.lumc.nl/biopet/biopet/wikis/Flexiprep-Pipeline)
-- [Mapping](https://git.lumc.nl/biopet/biopet/wikis/Mapping-Pipeline)
+    $ biopet pipeline <pipeline_name>
+
+
+- [Flexiprep](pipelines/flexiprep)
+- [Mapping](pipelines/mapping)
 - [Gatk Variantcalling](https://git.lumc.nl/biopet/biopet/wikis/GATK-Variantcalling-Pipeline)
 - BamMetrics
 - Basty
@@ -38,14 +69,47 @@
 - GatkPipeline
 - GatkVariantRecalibration
 - GatkVcfSampleCompare
-- Gentrap (Under development)
-- Sage
+- [Gentrap](pipelines/gentrap)
+- [Sage](pipelines/sage)
 - Yamsvp (Under development)
 
 __Note that each pipeline needs a config file written in JSON format__
 
+
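+What such a config file contains differs per pipeline; as a purely illustrative sketch (all keys and paths below are placeholders, see the config documentation linked further down for the real settings):
+
+    {
+      "output_dir": "/path/to/output",
+      "reference": "/path/to/reference.fasta"
+    }
+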
+### Running a tool
+
+    $ biopet tool <tool_name>
+
+  - BedToInterval
+  - BedtoolsCoverageToCounts
+  - BiopetFlagstat
+  - CheckAllelesVcfInBam
+  - ExtractAlignedFastq
+  - FastqSplitter
+  - FindRepeatsPacBio
+  - MpileupToVcf
+  - SageCountFastq
+  - SageCreateLibrary
+  - SageCreateTagCounts
+  - VcfFilter
+  - VcfToTsv
+  - WipeReads
+
+
 - More info can be found here: [How To! Config](https://git.lumc.nl/biopet/biopet/wikis/Config)
 
+## Developers
+
+### Compiling Biopet
+
+1. Clone biopet with `git clone git@git.lumc.nl:biopet/biopet.git biopet`
+2. Go to the biopet directory
+3. Run `mvn_install_queue.sh`; this installs the Queue jars into the local Maven repository (alternatively, download `queue.jar` from the GATK website)
+4. Run `mvn verify` to compile and package, or `mvn install` to also install the jars into the local Maven repository
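+
+As plain shell commands, assuming the helper script is executable, the same steps might look like this:
+
+    $ git clone git@git.lumc.nl:biopet/biopet.git biopet
+    $ cd biopet
+    $ ./mvn_install_queue.sh
+    $ mvn verify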
+
+
+
 ## About 
 Go to the [about page](about)
 
-- 
GitLab