1. 07 Mar, 2018 1 commit
  2. 20 Feb, 2017 1 commit
    • Use SipHash hash function to mitigate HashDos attempts. · adeed29a
      antirez authored
      This change switches to a hash function that mitigates the effects
      of the HashDoS attack (a denial of service attack that tries to
      force data structures into worst-case behavior) while at the same
      time providing Redis with a hash function that does not expect the
      input data to be word aligned, a condition that no longer holds now
      that sds.c strings have a variable length header.
      
      Note that even with a hash function for which collisions cannot be
      generated without knowing the seed, implementation details or an
      indirect exposure of the seed (for example the ability to add
      elements to a Set and observe the order in which Redis returns them
      with SMEMBERS) may make the attacker's life simpler when trying to
      guess the correct seed. The next step would then be to switch to a
      log(N) data structure when too many items are detected in a single
      bucket, but this seems like overkill in the case of Redis.
      
      SPEED REGRESSION TESTS:
      
      In order to verify that switching from MurmurHash to SipHash had
      no impact on speed, a set of benchmarks involving fast insertion
      of 5 million keys was performed.
      
      The results show Redis with SipHash to be about 4% slower than the
      previous hash function under high pipelining conditions. However,
      this could be partially related to the fact that the current
      implementation does not attempt to hash whole words at a time but
      reads single bytes, in order to produce an output that is
      endian-neutral and at the same time works on systems where
      unaligned memory accesses are a problem.
      
      Further x86-specific optimizations should be tested; the function
      may easily reach the same level as MurmurHash2 if a few
      optimizations are performed.
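      As a rough illustration of the trade-off described above, here is a
      minimal standalone sketch (not the actual siphash.c code; all names
      below are made up) contrasting the portable byte-wise load with the
      kind of word-at-a-time unaligned load an x86-specific optimization
      would use:

        #include <stdint.h>
        #include <string.h>
        #include <stdio.h>

        /* Endian-neutral 64-bit load: reads single bytes, so it works
         * regardless of host byte order and of pointer alignment. */
        static uint64_t load64_bytewise(const uint8_t *p) {
            return ((uint64_t)p[0])       | ((uint64_t)p[1] << 8)  |
                   ((uint64_t)p[2] << 16) | ((uint64_t)p[3] << 24) |
                   ((uint64_t)p[4] << 32) | ((uint64_t)p[5] << 40) |
                   ((uint64_t)p[6] << 48) | ((uint64_t)p[7] << 56);
        }

        /* Word-at-a-time variant: matches the byte-wise load only on
         * little-endian hosts, but memcpy keeps it legal C even for
         * unaligned pointers, and compilers lower it to a single mov
         * on x86. */
        static uint64_t load64_unaligned_le(const uint8_t *p) {
            uint64_t v;
            memcpy(&v, p, sizeof(v));
            return v;
        }

        int main(void) {
            uint8_t buf[9] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
            /* Deliberately misaligned pointer (buf + 1): on a
             * little-endian machine both loads agree. */
            printf("%llu %llu\n",
                   (unsigned long long)load64_bytewise(buf + 1),
                   (unsigned long long)load64_unaligned_le(buf + 1));
            return 0;
        }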
  3. 02 Dec, 2016 1 commit
  4. 14 Sep, 2016 1 commit
  5. 12 Sep, 2016 1 commit
  6. 20 Jun, 2016 1 commit
  7. 10 May, 2016 3 commits
  8. 09 May, 2016 1 commit
  9. 18 Apr, 2016 1 commit
  10. 15 Apr, 2016 2 commits
  11. 14 Apr, 2016 2 commits
  12. 18 Feb, 2016 1 commit
  13. 01 Oct, 2015 1 commit
  14. 27 Jul, 2015 1 commit
  15. 26 Jul, 2015 5 commits
  16. 03 Jul, 2015 1 commit
  17. 22 Jun, 2015 2 commits
    • Geo: zsetScore refactoring · 9fc47ddf
      antirez authored
      Now used both in geo.c and t_zset.c to provide ZSCORE.
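      As a hedged sketch of the refactoring's shape (the types and names
      below are illustrative, not Redis's actual API): a single
      score-lookup helper serves both the ZSCORE command path in
      t_zset.c and the geo commands in geo.c:

        #include <stdio.h>
        #include <string.h>

        /* Toy stand-ins: the real zsetScore() in Redis operates on its
         * robj/zset types, and its exact signature may differ. */
        typedef struct { const char *member; double score; } zentry;

        /* Shared lookup helper: returns 1 and fills *score if the
         * member exists, 0 otherwise. */
        static int zsetScoreSketch(const zentry *zs, int n,
                                   const char *member, double *score) {
            for (int i = 0; i < n; i++) {
                if (strcmp(zs[i].member, member) == 0) {
                    *score = zs[i].score;
                    return 1;
                }
            }
            return 0;
        }

        int main(void) {
            zentry zs[] = {{"Palermo", 1.0}, {"Catania", 2.0}};
            double score;
            /* t_zset.c caller: ZSCORE replies with the score itself. */
            if (zsetScoreSketch(zs, 2, "Palermo", &score))
                printf("ZSCORE -> %g\n", score);
            /* geo.c caller: interprets the score as encoded geohash
             * bits instead of replying with it directly. */
            if (zsetScoreSketch(zs, 2, "Catania", &score))
                printf("geo score -> %g\n", score);
            return 0;
        }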
    • [In-Progress] Add Geo Commands · 7f4ac3d1
      Matt Stancliff authored
      Current todo:
        - replace functions in zset.{c,h} with a new unified Redis
          zset access API.
      
      Once we get the zset interface fixed, we can squash
      relevant commits in this branch and have one nice commit
      to merge into unstable.
      
      This commit adds:
        - Geo commands
        - Tests; runnable with: ./runtest --single unit/geo
        - Geo helpers in deps/geohash-int/
        - src/geo.{c,h} and src/geojson.{c,h} implementing geo commands
        - Updated build configurations to get everything working
        - TEMPORARY: src/zset.{c,h} implementing zset score and zset
          range reading without writing to client output buffers.
        - Modified linkage of one t_zset.c function for use in zset.c
      
      Conflicts:
      	src/Makefile
      	src/redis.c
  18. 02 Jun, 2015 1 commit
  19. 29 May, 2015 3 commits
  20. 28 May, 2015 1 commit
  21. 23 Dec, 2014 1 commit
  22. 07 Aug, 2014 1 commit
  23. 22 Jul, 2014 2 commits
    • d74e422b
    • ZUNIONSTORE reimplemented for speed. · 119813e9
      antirez authored
      The user @kjmph provided excellent ideas to improve the speed of
      ZUNIONSTORE (in certain cases by many orders of magnitude),
      together with an implementation of the ideas.
      
      While the ideas were sound, the implementation could be improved
      both in terms of speed and clarity, so this is my attempt at
      reimplementing the proposed speedup: directly using just a
      dictionary with an embedded score inside, and reusing the
      single-pass aggregate + order-later approach.
      
      Note that you can't apply this commit without applying the previous
      commit in this branch, which adds a double to the dictEntry value
      union.
      
      Issue #1786.
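      
      A minimal standalone sketch of the approach described above (the
      toy open-addressing table and all names below are illustrative,
      not Redis's dict API): scores are accumulated directly in the hash
      table entry during a single pass over the inputs, and ordering
      happens once at the end:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define TABLE_SIZE 64  /* toy fixed-size table; Redis's dict grows */

        typedef struct {
            const char *member;  /* NULL means empty slot */
            double score;        /* aggregated score embedded in the entry */
        } entry;

        static unsigned long hash_str(const char *s) {
            unsigned long h = 5381;
            while (*s) h = h * 33 + (unsigned char)*s++;
            return h;
        }

        /* SUM aggregation: add score to member's entry, creating it if
         * missing, with linear probing on collision. */
        static void accumulate(entry *t, const char *member, double score) {
            unsigned long i = hash_str(member) % TABLE_SIZE;
            while (t[i].member && strcmp(t[i].member, member) != 0)
                i = (i + 1) % TABLE_SIZE;
            if (!t[i].member) { t[i].member = member; t[i].score = 0; }
            t[i].score += score;
        }

        static int by_score(const void *a, const void *b) {
            double d = ((const entry *)a)->score - ((const entry *)b)->score;
            return (d > 0) - (d < 0);
        }

        int main(void) {
            entry table[TABLE_SIZE] = {{0}};

            /* Single pass over two input "sorted sets". */
            accumulate(table, "a", 1.0);  /* set 1 */
            accumulate(table, "b", 2.0);
            accumulate(table, "b", 3.0);  /* set 2 */
            accumulate(table, "c", 4.0);

            /* Order-later: compact the entries and sort once by score. */
            entry out[TABLE_SIZE];
            int n = 0;
            for (int i = 0; i < TABLE_SIZE; i++)
                if (table[i].member) out[n++] = table[i];
            qsort(out, n, sizeof(entry), by_score);

            for (int i = 0; i < n; i++)
                printf("%s => %g\n", out[i].member, out[i].score);
            return 0;
        }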
  24. 26 Jun, 2014 1 commit
  25. 31 May, 2014 1 commit
  26. 18 Apr, 2014 1 commit
  27. 17 Apr, 2014 2 commits