- Oct 20, 2014
- Oct 15, 2014
Vermaat authored
The `getGS` website view for LOVD2 would report "transcript not found" if the genomic reference has multiple transcripts annotated or if the variant description raises an error in the variant checker.
- Oct 09, 2014
- Oct 08, 2014
Vermaat authored
- Oct 04, 2014
- Oct 03, 2014
Vermaat authored
- Oct 02, 2014
Vermaat authored
This prevents the case where the old announcement had a url set and the new one does not (Redis would keep the existing url).
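As a rough sketch of the fix (the key names below are hypothetical, not the actual keys Mutalyzer uses in Redis), deleting the old url key before writing the new announcement keeps a stale url from surviving:

    # Hypothetical key names for illustration only; the real keys may differ.
    # Drop the old url first so it cannot outlive the announcement it belonged to.
    redis-cli DEL announcement:url
    redis-cli SET announcement:body 'Scheduled maintenance this weekend'
    # Set the url key only when the new announcement actually has a url:
    # redis-cli SET announcement:url 'https://example.com/maintenance'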
- Sep 27, 2014
Vermaat authored
This fixes uploading base64 encoded data to the JSON webservice. For example:

    echo "NM_003002.2:c.274delT\nXXX:g.1del" | base64 > test.base64
    curl \
      -d 'process=SyntaxChecker' \
      -d 'argument=hg19' \
      --data-urlencode 'data@test.base64' \
      'http://127.0.0.1:8082/submitBatchJob'
Vermaat authored

Vermaat authored

Vermaat authored
Upstream Spyne crashes on POST requests to the HTTP/RPC+JSON webservice. We patched it in a rather hacky way. This was a regression from the old codebase, where we installed Spyne separately from our LUMC GitHub mirror. This is now also referenced in the requirements.txt file. Thanks to Ken Doig for reporting the issue.
- Sep 26, 2014
- Sep 23, 2014
Vermaat authored
Rename this webservice method. Note the capital letter L in the old name. Also add a short note to the documentation that data arguments must be base64 encoded.
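As a hedged illustration of that documentation note (reusing the submitBatchJob endpoint and parameter names from the example above, not the renamed method itself), a data argument can also be base64 encoded inline:

    # Sketch only: encode the batch input inline instead of via a file.
    data=$(printf 'NM_003002.2:c.274delT\nXXX:g.1del\n' | base64)
    curl \
      -d 'process=SyntaxChecker' \
      -d 'argument=hg19' \
      --data-urlencode "data=$data" \
      'http://127.0.0.1:8082/submitBatchJob'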
- Sep 22, 2014
- Sep 19, 2014
- Sep 06, 2014
Vermaat authored
Last remaining relevant todo notes have been filed as issues in GitLab.
Vermaat authored
Previously, Mutalyzer would check the cache size after writing any file and remove files for as long as the maximum was exceeded. However, this caused long delays when many files had to be removed (it would recalculate the total size after each removal).

Following the principle of separating concerns, this is now handled by a separate script on our production servers, which uses the inotifywait tool to clean up the cache whenever files are added to it. That approach also doesn't suffer from the performance problem.

Note that this removes the `MAX_CACHE_SIZE` configuration setting.

Fixes #18
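A minimal sketch of what such a cleanup script could look like (the cache path, size limit, and exact invocation used on the production servers are assumptions):

    #!/bin/sh
    # Sketch only: path and limit are assumptions, not production values.
    CACHE_DIR=/var/cache/mutalyzer
    MAX_SIZE_KB=$((50 * 1024 * 1024))  # 50 GiB, expressed in KiB

    # Block until a file is added to the cache, then remove the oldest
    # files until the total size drops below the maximum again.
    inotifywait -m -e create -e moved_to --format '%w%f' "$CACHE_DIR" |
    while read -r _; do
        while [ "$(du -sk "$CACHE_DIR" | cut -f 1)" -gt "$MAX_SIZE_KB" ]; do
            oldest=$(ls -t "$CACHE_DIR" | tail -n 1)
            [ -n "$oldest" ] || break
            rm -f -- "$CACHE_DIR/$oldest"
        done
    done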
- Sep 05, 2014
Vermaat authored
- Sep 02, 2014
Vermaat authored
Using fetchChromSizes [1] and selecting *Download the full sequence report* from the NCBI assembly overview [2] we can generate a mapping from UCSC chromosome names to accession numbers:

    ./fetchChromSizes hg38 > human.hg38.genome
    for contig in $(cut -f 1 human.hg38.genome | grep 'alt$'); do
        code=$(echo $contig | cut -d _ -f 2 | sed 's/v/./')
        echo -n $contig$'\t'
        grep $code GCF_000001405.26.assembly.txt | cut -f 7
    done > alt_chrom_names.mapping

Generate the JSON dictionary entries:

    >>> import json
    >>> entries = []
    >>> for line in open('alt_chrom_names.mapping'):
    ...     chr, acc = line.strip().split()
    ...     entries.append({'organelle': 'nucleus',
    ...                     'name': chr,
    ...                     'accession': acc})
    ...
    >>> print json.dumps(entries, indent=2)
    [
      {
        "organelle": "nucleus",
        "name": "chr12_KI270837v1_alt",
        "accession": "NT_187588.1"
      },
      {
        "organelle": "nucleus",
        "name": "chr13_KI270842v1_alt",
        "accession": "NT_187596.1"
      },
      ...
    ]

[1] http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/fetchChromSizes
[2] ftp://ftp.ncbi.nlm.nih.gov/genomes/ASSEMBLY_REPORTS/All/GCF_000001405.26.assembly.txt
Vermaat authored
- Aug 28, 2014
- Aug 27, 2014
Vermaat authored

Vermaat authored
See http://pytest.org/
- Aug 26, 2014
Vermaat authored