- 30 Nov, 2012 1 commit
antirez authored
SDIFF used an algorithm that was O(N), where N is the total number of elements of all the sets involved in the operation. The algorithm worked like this:

ALGORITHM 1:
1) For the first set, add all the members to an auxiliary set.
2) For all the other sets, remove all their members from the auxiliary set.

So it is an O(N) algorithm, where N is the total number of elements in all the sets involved in the diff operation.

Cristobal Viedma suggested modifying the algorithm as follows:

ALGORITHM 2:
1) Iterate all the elements of the first set.
2) For every element, check if it also exists in all the other remaining sets.
3) Add the element to the auxiliary set only if it does not exist in any of the other sets.

The worst-case complexity of this algorithm is O(N*M), where N is the size of the first set and M is the total number of sets involved in the operation. However, when there are elements in common, this algorithm stops the computation for a given element as soon as a duplicate is found in another set.

I (antirez) added an additional step to algorithm 2 to make it faster: sort the sets to subtract from the biggest to the smallest, so that a duplicate is more likely to be found in the larger sets, which are checked before the smaller ones.

WHAT IS BETTER? Neither, of course: for instance, if the first set is much larger than the other sets, the second algorithm does a lot more work than the first; similarly, if the first set is much smaller than the other sets, the original algorithm does less work.

So this commit makes Redis able to estimate the number of operations required by each algorithm and select the best one at runtime according to the input received. However, since the second algorithm has better constant times and can do less work if there are duplicated elements, an advantage is given to the second algorithm.
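To make the tradeoff concrete, here is a small self-contained sketch of the two strategies and the runtime selection, using plain int arrays instead of Redis set objects. The names (toy_set, diff_algo_1, diff_algo_2, set_diff) and the exact weighting used to favor the second algorithm are illustrative assumptions, not the actual t_set.c code, and the sketch omits the extra step of sorting the sets to subtract from the biggest to the smallest.

```c
#include <stdio.h>
#include <stdlib.h>

struct toy_set { int *v; size_t len; };

static int contains(const struct toy_set *s, int x) {
    for (size_t i = 0; i < s->len; i++)
        if (s->v[i] == x) return 1;
    return 0;
}

/* ALGORITHM 1: add every member of the first set to the result, then
 * remove the members of every other set. Work is roughly the sum of all
 * set sizes. */
static size_t diff_algo_1(const struct toy_set *sets, size_t nsets, int *out) {
    size_t n = 0;
    for (size_t i = 0; i < sets[0].len; i++) out[n++] = sets[0].v[i];
    for (size_t j = 1; j < nsets; j++) {
        for (size_t i = 0; i < sets[j].len; i++) {
            for (size_t k = 0; k < n; k++) {
                if (out[k] == sets[j].v[i]) { out[k] = out[--n]; break; }
            }
        }
    }
    return n;
}

/* ALGORITHM 2: keep a member of the first set only if it is missing from
 * all the other sets. Work is roughly |first set| * nsets, but scanning
 * stops as soon as a duplicate is found. */
static size_t diff_algo_2(const struct toy_set *sets, size_t nsets, int *out) {
    size_t n = 0;
    for (size_t i = 0; i < sets[0].len; i++) {
        int dup = 0;
        for (size_t j = 1; j < nsets && !dup; j++)
            dup = contains(&sets[j], sets[0].v[i]);
        if (!dup) out[n++] = sets[0].v[i];
    }
    return n;
}

/* Estimate the work of both algorithms and pick the cheaper one at
 * runtime. Halving work2 gives algorithm 2 an advantage for its better
 * constant factors; the exact weighting is an assumption, not the factor
 * used by Redis. */
static size_t set_diff(const struct toy_set *sets, size_t nsets, int *out) {
    size_t work1 = 0, work2 = 0;
    for (size_t j = 0; j < nsets; j++) {
        work1 += sets[j].len;       /* total elements across all sets */
        work2 += sets[0].len;       /* first set scanned once per set */
    }
    work2 /= 2;                     /* advantage to algorithm 2 */
    return (work1 <= work2) ? diff_algo_1(sets, nsets, out)
                            : diff_algo_2(sets, nsets, out);
}

int main(void) {
    int a[] = {1, 2, 3, 4, 5}, b[] = {2, 4}, c[] = {5};
    struct toy_set sets[] = { {a, 5}, {b, 2}, {c, 1} };
    int out[5];
    size_t n = set_diff(sets, 3, out);
    for (size_t i = 0; i < n; i++) printf("%d ", out[i]);  /* prints: 1 3 */
    printf("\n");
    return 0;
}
```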

- 08 Nov, 2012 1 commit
antirez authored

- 21 Sep, 2012 2 commits
antirez authored
For "CASE 4" (see code) we need to free the element if it's already in the result dictionary and adding it failed.
antirez authored
SRANDMEMBER called with just the key argument can return a single random element from a Redis Set. However, many users need to return multiple unique elements from a Set. This is not a trivial problem to handle on the client side, and for truly good performance a C implementation was required. After many requests for this feature, it was finally implemented.

The problem in implementing this command is the strategy to follow when the number of elements the user asks for is close to the number of elements already inside the set. In this case, asking the dictionary API for random elements and trying to add them to a temporary set may result in extremely poor performance, as most add operations would be wasted on duplicated elements. For this reason this implementation uses a different strategy in this case: the Set is copied, and random elements are removed until the specified count is reached.

The code actually uses 4 different algorithms optimized for the different cases.

If the count is negative, the command changes behavior and allows duplicated elements in the returned subset.
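The following self-contained sketch illustrates the count-handling strategies described above over a plain int array instead of a Redis set. The function name, the SUB_STRATEGY_MUL factor and the exact case boundaries are illustrative assumptions, not the actual Redis implementation.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* When the requested count is a large fraction of the set size, drawing
 * random members until `count` unique ones are collected wastes most draws
 * on duplicates, so the set is copied and random members are removed
 * instead. The factor below only mirrors that idea; the real constant in
 * Redis may differ. */
#define SUB_STRATEGY_MUL 3

static size_t srandmember_count(const int *set, size_t size, long count, int *out) {
    if (count < 0) {                          /* negative count: duplicates allowed */
        size_t n = (size_t)(-count);
        for (size_t i = 0; i < n; i++) out[i] = set[rand() % size];
        return n;
    }
    if ((size_t)count >= size) {              /* asked for the whole set (or more) */
        memcpy(out, set, size * sizeof(int));
        return size;
    }
    if ((size_t)count * SUB_STRATEGY_MUL > size) {
        /* Count close to the set size: copy the set, then remove random
         * members until only `count` remain. */
        size_t n = size;
        memcpy(out, set, size * sizeof(int));
        while (n > (size_t)count) {
            size_t victim = (size_t)rand() % n;
            out[victim] = out[--n];           /* swap-remove */
        }
        return n;
    }
    /* Small count: draw random members and skip duplicates until `count`
     * unique elements have been collected. */
    size_t n = 0;
    while (n < (size_t)count) {
        int candidate = set[rand() % size];
        int dup = 0;
        for (size_t i = 0; i < n && !dup; i++) dup = (out[i] == candidate);
        if (!dup) out[n++] = candidate;
    }
    return n;
}

int main(void) {
    int set[] = {10, 20, 30, 40, 50, 60, 70, 80};
    int out[16];
    srand((unsigned)time(NULL));
    size_t n = srandmember_count(set, 8, 3, out);  /* 3 unique random members */
    for (size_t i = 0; i < n; i++) printf("%d ", out[i]);
    printf("\n");
    return 0;
}
```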

- 07 Apr, 2012 1 commit
Erik Dubbelboer authored

- 08 Nov, 2011 1 commit
antirez authored

- 04 Oct, 2011 1 commit
antirez authored

- 20 Jun, 2011 1 commit
antirez authored

- 31 May, 2011 1 commit
antirez authored

- 15 May, 2011 1 commit
antirez authored

- 19 Apr, 2011 1 commit
antirez authored

- 15 Apr, 2011 1 commit
antirez authored

- 16 Feb, 2011 1 commit
antirez authored

- 29 Dec, 2010 1 commit
antirez authored
The "touched key" tracking used for WATCH was refactored into a more general mechanism that can also be used by the cache system. Some more changes towards getting diskstore working.

- 09 Dec, 2010 2 commits

- 17 Oct, 2010 1 commit
Pieter Noordhuis authored

- 02 Sep, 2010 2 commits
Pieter Noordhuis authored
Pieter Noordhuis authored

- 30 Aug, 2010 1 commit
Pieter Noordhuis authored

- 26 Aug, 2010 3 commits
antirez authored
antirez authored
Pieter Noordhuis authored

- 21 Aug, 2010 1 commit
Pieter Noordhuis authored

- 12 Jul, 2010 1 commit
antirez authored

- 01 Jul, 2010 1 commit
antirez authored
networking related stuff moved into networking.c
moved more code
more work on layout of source code
SDS instantaneous memory saving. By Pieter and Salvatore at VMware ;)
cleanly compiling again after the first split, now splitting it in more C files
moving more things around... work in progress
split replication code
splitting more
Sets split
Hash split
replication split
even more splitting
more splitting
minor change