- 07 Mar, 2018 1 commit
Guy Benoish authored
- 20 Feb, 2017 1 commit
antirez authored
This change attempts to switch to a hash function which mitigates the effects of the HashDoS attack (a denial of service attack that tries to force data structures into their worst-case behavior), while at the same time providing Redis with a hash function that does not expect the input data to be word aligned, a condition no longer true now that sds.c strings have a variable length header.

Note that even with a hash function for which collisions cannot be generated without knowing the seed, implementation details or indirect exposure of the seed (for example the ability to add elements to a Set and check the order in which Redis returns them with SMEMBERS) may sometimes make the attacker's life simpler when trying to guess the correct seed. However, the next step would be to switch to a log(N) data structure when too many items are detected in a single bucket, and that seems like overkill in the case of Redis.

SPEED REGRESSION TESTS: In order to verify that switching from MurmurHash to SipHash had no impact on speed, a set of benchmarks involving fast insertion of 5 million keys was performed. The results show Redis with SipHash under high pipelining to be about 4% slower compared to the previous hash function. However, this could partially be related to the fact that the current implementation does not attempt to hash whole words at a time but reads single bytes, in order to produce output that is endian-neutral and at the same time works on systems where unaligned memory accesses are a problem. Further x86-specific optimizations should be tested; the function may easily reach the same level as MurmurHash2 if a few optimizations are performed.
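A minimal sketch of the byte-at-a-time load described above (an illustrative helper in the style of the SipHash reference code, not the actual siphash.c source):

    #include <stdint.h>

    /* Assemble a 64-bit little-endian word one byte at a time.
     * This is endian-neutral and safe on platforms where unaligned
     * word loads fault, at the cost of some speed on x86, where a
     * single word load would also work. */
    static uint64_t load64_le(const unsigned char *p) {
        return ((uint64_t)p[0])       | ((uint64_t)p[1] << 8)  |
               ((uint64_t)p[2] << 16) | ((uint64_t)p[3] << 24) |
               ((uint64_t)p[4] << 32) | ((uint64_t)p[5] << 40) |
               ((uint64_t)p[6] << 48) | ((uint64_t)p[7] << 56);
    }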
- 02 Dec, 2016 1 commit
Itamar Haber authored
Fixes https://github.com/antirez/redis/issues/3639
- 14 Sep, 2016 1 commit
antirez authored
Optimizations suggested and originally implemented by @oranagra. Re-applied by @antirez using the modified API.
- 12 Sep, 2016 1 commit
oranagra authored
(Change cherry-picked and modified by @antirez from a larger commit provided by @oranagra in PR #3223).
- 20 Jun, 2016 1 commit
Yossi Gottlieb authored
- 10 May, 2016 3 commits
- 09 May, 2016 1 commit
oranagra authored
- 18 Apr, 2016 1 commit
Damian Janowski authored
- 15 Apr, 2016 2 commits
- 14 Apr, 2016 2 commits
- 18 Feb, 2016 1 commit
antirez authored
Related to issue #3019.
- 01 Oct, 2015 1 commit
antirez authored
- 27 Jul, 2015 1 commit
antirez authored
- 26 Jul, 2015 5 commits
- 03 Jul, 2015 1 commit
antirez authored
- 22 Jun, 2015 2 commits
antirez authored
Now used both in geo.c and t_zset.c to provide ZSCORE.
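A standalone sketch of that kind of shared accessor (plain C; the structs and names here are illustrative stand-ins, not Redis's actual sorted set types, which support multiple internal encodings):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative stand-in for a sorted set member/score mapping. */
    typedef struct { const char *member; double score; } pair;
    typedef struct { pair *pairs; int len; } zset_like;

    /* One shared score lookup for both the ZSCORE command path and
     * the geo code, instead of duplicating the search logic.
     * Returns 0 and fills *score on success, -1 if absent. */
    static int zset_get_score(const zset_like *zs, const char *member,
                              double *score) {
        for (int i = 0; i < zs->len; i++) {
            if (strcmp(zs->pairs[i].member, member) == 0) {
                *score = zs->pairs[i].score;
                return 0;
            }
        }
        return -1;
    }

    int main(void) {
        pair data[] = {{"Palermo", 13.361389}, {"Catania", 15.087269}};
        zset_like zs = {data, 2};
        double score;
        if (zset_get_score(&zs, "Catania", &score) == 0)
            printf("Catania => %f\n", score);
        return 0;
    }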
Matt Stancliff authored
Current todo:
- replace functions in zset.{c,h} with a new unified Redis zset access API.

Once we get the zset interface fixed, we can squash relevant commits in this branch and have one nice commit to merge into unstable.

This commit adds:
- Geo commands
- Tests; runnable with: ./runtest --single unit/geo
- Geo helpers in deps/geohash-int/
- src/geo.{c,h} and src/geojson.{c,h} implementing geo commands
- Updated build configurations to get everything working
- TEMPORARY: src/zset.{c,h} implementing zset score and zset range reading without writing to client output buffers.
- Modified linkage of one t_zset.c function for use in zset.c

Conflicts:
src/Makefile
src/redis.c
- 02 Jun, 2015 1 commit
linfangrong authored
- 29 May, 2015 3 commits
antirez authored
From Twitter: "@antirez that’s an awfully-named command :( http://en.wikipedia.org/wiki/Retching"
antirez authored
Normally ZADD only returns the number of elements added to a sorted set; with the RETCH option it returns the sum of elements added and elements for which the score was updated.
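A hypothetical redis-cli session illustrating the difference (option placement assumed; this option was later renamed, and today's ZADD exposes it as CH):

    127.0.0.1:6379> ZADD myset 1 a 2 b
    (integer) 2
    127.0.0.1:6379> ZADD myset 5 a 3 c
    (integer) 1
    127.0.0.1:6379> ZADD myset RETCH 9 a 4 d
    (integer) 2

The second call returns 1 because only "c" is new, even though the score of "a" changed; with the option, the third call counts both the added "d" and the updated "a".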
antirez authored
- 28 May, 2015 1 commit
antirez authored
- 23 Dec, 2014 1 commit
Matt Stancliff authored
- 07 Aug, 2014 1 commit
Wei Jin authored
Fixes #1741
- 22 Jul, 2014 2 commits
antirez authored
antirez authored
The user @kjmph provided excellent ideas to improve the speed of ZUNIONSTORE (in certain cases by many orders of magnitude), together with an implementation of the ideas.

While the ideas were sound, the implementation could be improved both in terms of speed and clarity, so this is my attempt at reimplementing the proposed speedup: directly using just a dictionary with an embedded score inside, and reusing the single-pass aggregate + order-later approach.

Note that you can't apply this commit without applying the previous commit in this branch that adds a double to the dictEntry value union.

Issue #1786.
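A self-contained sketch of the single-pass aggregate + order-later idea (plain C, with a toy fixed-size linear-probing table standing in for Redis's dict; all names and sizes are illustrative): scores are summed into a table whose entries embed the score directly, and ordering is done once at the end instead of being maintained during aggregation.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Accumulator entry: member name with the score embedded,
     * mirroring the idea of storing a double in the dict value. */
    typedef struct { const char *member; double score; int used; } entry;

    #define TABLE_SIZE 1024  /* power of two, ample for the demo */

    static unsigned long hash_str(const char *s) {
        unsigned long h = 5381;
        while (*s) h = ((h << 5) + h) ^ (unsigned char)*s++;
        return h;
    }

    /* Find the slot for `member`, or an empty slot to create it in. */
    static entry *lookup(entry *table, const char *member) {
        unsigned long i = hash_str(member) & (TABLE_SIZE - 1);
        while (table[i].used && strcmp(table[i].member, member) != 0)
            i = (i + 1) & (TABLE_SIZE - 1);
        return &table[i];
    }

    /* Single pass over one input set: SUM-aggregate into the table. */
    static void accumulate(entry *table, const char **members,
                           const double *scores, int n) {
        for (int i = 0; i < n; i++) {
            entry *e = lookup(table, members[i]);
            if (!e->used) { e->used = 1; e->member = members[i]; }
            e->score += scores[i];
        }
    }

    static int cmp_score(const void *a, const void *b) {
        double d = ((const entry *)a)->score - ((const entry *)b)->score;
        return (d > 0) - (d < 0);
    }

    int main(void) {
        const char *m1[] = {"a", "b", "c"}; const double s1[] = {1, 2, 3};
        const char *m2[] = {"b", "c", "d"}; const double s2[] = {10, 20, 30};
        entry *table = calloc(TABLE_SIZE, sizeof(entry));

        accumulate(table, m1, s1, 3);   /* first input set  */
        accumulate(table, m2, s2, 3);   /* second input set */

        /* Order later: collect the union and sort by score once. */
        entry out[TABLE_SIZE]; int n = 0;
        for (int i = 0; i < TABLE_SIZE; i++)
            if (table[i].used) out[n++] = table[i];
        qsort(out, n, sizeof(entry), cmp_score);

        for (int i = 0; i < n; i++)
            printf("%s => %.1f\n", out[i].member, out[i].score);
        free(table);
        return 0;
    }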
- 26 Jun, 2014 1 commit
antirez authored
- 31 May, 2014 1 commit
zionwu authored
- 18 Apr, 2014 1 commit
antirez authored
- 17 Apr, 2014 2 commits