1. 13 Nov, 2015 1 commit
    • wip · 43323c05
      Vermaat authored
  2. 09 Nov, 2015 8 commits
  3. 08 Nov, 2015 1 commit
  4. 05 Nov, 2015 2 commits
  5. 04 Nov, 2015 3 commits
  6. 03 Nov, 2015 4 commits
  7. 02 Nov, 2015 8 commits
  8. 30 Oct, 2015 1 commit
    • Process batch jobs grouped by email address · 7e0db497
      Vermaat authored
      We previously processed batch jobs round robin, i.e., one item
      for each job per round. This is fair from the job point of view,
      but not from the user point of view when one user has many jobs.
      
      We now process batch jobs one item per user per round, picking
      the oldest job if a user has more than one. Users are identified
      by their email address.
      
      Batch jobs submitted via the webservices all have the same email
      address, so they are effectively throttled as if they were all
      from the same user. Adapting the webservices to also allow
      setting an email address is future work.
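      A minimal sketch of the per-user scheduling round described
      above, in Python; the job attributes (`email`, `added`,
      `next_item`) are hypothetical stand-ins, not Mutalyzer's actual
      batch scheduler code.

          from itertools import groupby
          from operator import attrgetter

          def select_round(jobs):
              """Yield one pending item per email address for one round.

              Illustrative only: `jobs` are objects with `email`,
              `added` (creation time) and `next_item()`.
              """
              # Group jobs by the submitting user's email address.
              for email, user_jobs in groupby(
                      sorted(jobs, key=attrgetter('email')),
                      key=attrgetter('email')):
                  # If a user has more than one job, pick the oldest.
                  oldest = min(user_jobs, key=attrgetter('added'))
                  item = oldest.next_item()
                  if item is not None:
                      yield email, oldest, item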
  9. 29 Oct, 2015 1 commit
  10. 26 Oct, 2015 3 commits
  11. 23 Oct, 2015 1 commit
  12. 20 Oct, 2015 1 commit
    • Cache transcript protein links in Redis · 473c732c
      Vermaat authored
      Caching of transcript<->protein links received from the NCBI Entrez
      service is a typical use case for Redis. This commit implements the
      cache in Redis and removes all use of our original database table.
      
      An Alembic migration copies all existing links from the database to
      Redis. The original `TranscriptProteinLink` database table is not
      dropped. This will be done in a future migration to ensure running
      processes don't error and to provide a rollback scenario.
      
      We also remove the expiration of links (originally defaulting to 30
      days), since we don't expect them to ever change. Negative links
      (caching a 'not found' result from Entrez) *do* still expire,
      but with a longer default of 30 days (was 5 days).
      
      The configuration setting for the latter was renamed, yielding the
      following changes in the default configuration settings.
      
      Removed default settings:
      
          # Expiration time for transcript<->protein links from the NCBI (in seconds).
          PROTEIN_LINK_EXPIRATION = 60 * 60 * 24 * 30
      
          # Expiration time for negative transcript<->protein links from the NCBI (in
          # seconds).
          NEGATIVE_PROTEIN_LINK_EXPIRATION = 60 * 60 * 24 * 5
      
      Added default setting:
      
          # Cache expiration time for negative transcript<->protein links from the NCBI
          # (in seconds).
          NEGATIVE_LINK_CACHE_EXPIRATION = 60 * 60 * 24 * 30
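
      As an illustration of this caching scheme, a rough sketch using
      redis-py; the key names, configuration constants and helper
      function are illustrative assumptions, not Mutalyzer's actual
      code.

          import redis

          # Illustrative defaults mirroring the settings above.
          REDIS_URI = 'redis://localhost:6379/0'
          NEGATIVE_LINK_CACHE_EXPIRATION = 60 * 60 * 24 * 30  # 30 days

          client = redis.StrictRedis.from_url(REDIS_URI,
                                              decode_responses=True)

          def cache_link(transcript_accession, protein_accession):
              """Cache a transcript<->protein link from Entrez."""
              key = 'transcript-to-protein:%s' % transcript_accession
              if protein_accession is None:
                  # Negative result ('not found' at Entrez): cache with
                  # an expiration, since it may later become a positive
                  # link.
                  client.setex(key, NEGATIVE_LINK_CACHE_EXPIRATION, '')
              else:
                  # Positive links are not expected to ever change, so
                  # they are stored without expiration.
                  client.set(key, protein_accession)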
  13. 16 Oct, 2015 2 commits
    • Clarify Redis client import in stats module · 47ce0ee3
      Vermaat authored
    • Create new Redis connection when REDIS_URI changes · f56cbbda
      Vermaat authored
      When the REDIS_URI configuration setting is changed, the Redis
      client should be reconfigured with a new connection pool, just
      like we do with the database.
      
      It appears redis-py manages the connection pool by itself and
      doesn't expose ways to explicitly destroy it or close all
      connections (this happens automatically when all connections
      go out of scope).
      
      This fix ensures that the unit tests don't accidentally work on
      the Redis database configured in MUTALYZER_SETTINGS, which was
      quite an unfortunate bug.
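
      A minimal sketch of this reconfiguration idea; the proxy class
      and its `configure` method are hypothetical, not the actual
      Mutalyzer code, and rely on redis-py releasing the old connection
      pool once nothing references it.

          import redis

          class RedisProxy(object):
              """Rebuild the underlying client when the URI changes."""

              def __init__(self, uri=None):
                  self._uri = uri
                  self._client = None

              def configure(self, uri):
                  # A new client gets its own connection pool; redis-py
                  # has no explicit destroy, the old pool is released
                  # once all its connections go out of scope.
                  if uri != self._uri:
                      self._uri = uri
                      self._client = None

              def __getattr__(self, name):
                  if self._client is None:
                      self._client = redis.StrictRedis.from_url(self._uri)
                  return getattr(self._client, name)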
  14. 14 Oct, 2015 1 commit
  15. 13 Oct, 2015 3 commits