- 16 Apr, 2014 31 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
Better results can be achieved by compensating for the bias of the raw approximation just after 2.5m (when LINEARCOUNTING is no longer used), by using a polynomial that approximates the bias for a given cardinality. The curve was found using this web page, which performs polynomial regression given a set of values: http://www.xuru.org/rt/PR.asp
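A minimal sketch of the compensation, with placeholder coefficients (the real values come out of the regression above; treating the bias as a percentage of the raw estimate is also an illustrative choice, not necessarily the implementation's):

    /* Placeholder coefficients: NOT the regression results, they only
     * illustrate the shape of a 4th-degree correction. */
    static const double A4 = 1e-18, A3 = -1e-12, A2 = 1e-7;
    static const double A1 = -5e-3, A0 = 80.0;

    /* The polynomial approximates the bias at cardinality E; removing
     * it yields the compensated estimate. */
    double compensateBias(double E) {
        double bias = A4*E*E*E*E + A3*E*E*E + A2*E*E + A1*E + A0;
        return E - E*(bias/100.0);
    }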
-
antirez authored
Merge N HLL data structures by selecting the max value for every M[i] register among the set of HLLs.
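A minimal sketch of the merge rule, assuming registers unpacked to one byte each (the real structure packs them as 6-bit fields):

    #include <stdint.h>

    /* Merge 'count' HLL register arrays of length 'm' into 'max' by
     * keeping, for every position i, the maximum M[i] seen across the
     * inputs. This works because each register already holds a max. */
    void hllMergeMax(uint8_t *max, uint8_t **hlls, int count, int m) {
        for (int i = 0; i < m; i++) {
            uint8_t val = 0;
            for (int j = 0; j < count; j++)
                if (hlls[j][i] > val) val = hlls[j][i];
            max[i] = val;
        }
    }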
-
antirez authored
When we update the cached value, we need to propagate the command and signal the key as modified for WATCH.
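A sketch of the pattern as a fragment against the Redis internals (signalModifiedKey() and server.dirty are from the source; the exact call site is an assumption):

    /* After refreshing the cached value inside the key's object:
     * wake up WATCHers of the key and force command propagation. */
    void hllUpdateCache(redisClient *c) {
        /* ...store the newly computed cardinality in the object... */
        signalModifiedKey(c->db, c->argv[1]);  /* For WATCH. */
        server.dirty++;                        /* Propagate to AOF/replicas. */
    }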
-
antirez authored
The following form may be given: HLLADD myhll. No element is provided in this case, so if the 'myhll' key does not exist the result is just the creation of an empty HLL structure, and no update is performed on the registers. Even so, the DB should still be set dirty and the command propagated.
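A sketch of that branch (lookupKeyWrite() and dbAdd() are from the source; createHLLObject() is assumed from the implementation):

    /* No element given: create the key if missing, perform no register
     * update, but still dirty the DB so the creation is propagated. */
    robj *o = lookupKeyWrite(c->db, c->argv[1]);
    if (o == NULL) {
        o = createHLLObject();
        dbAdd(c->db, c->argv[1], o);
        server.dirty++;
    }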
-
antirez authored
-
antirez authored
These are generated to plot graphs using gnuplot.
-
antirez authored
The original HyperLogLog paper suggests using LINEARCOUNTING for cardinalities < 2.5m; however, for P=14 the median/max error curves show that a switch point of 3m is the best pick for m = 16384.
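For reference, a sketch of the LINEARCOUNTING estimator in question, where m is the number of registers and ez the number of registers still at zero:

    #include <math.h>

    /* LINEARCOUNTING estimate: m * ln(m / V), with V the count of
     * zero-valued registers. Used below the switch point discussed
     * above (3m for m = 16384 rather than the paper's 2.5m). */
    double linearCounting(double m, double ez) {
        return m * log(m / ez);
    }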
-
antirez authored
-
antirez authored
The more elements we add to a HyperLogLog counter, the smaller the probability that we actually update some register becomes. From this observation it is easy to see how a previously computed cardinality can be cached and reused to serve HLLCOUNT queries as long as no register was updated in the data structure.

This commit does exactly that, using just 8 additional bytes in the data structure to store the cached cardinality as a 64-bit unsigned integer. When the most significant bit of the 64-bit integer is set, the cached value is no longer usable, since at least one register was modified, and we need to recompute it at the next call of HLLCOUNT. The value is always stored in little endian format regardless of the actual CPU endianness.
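A sketch of the encoding described above (helper names are illustrative): 'card' points to the 8 cache bytes, stored little endian regardless of host byte order, with the MSB of the 64-bit value acting as the "stale" flag.

    #include <stdint.h>

    void cacheInvalidate(uint8_t *card) {
        card[7] |= (1 << 7);               /* Set the MSB: stale. */
    }

    int cacheIsValid(const uint8_t *card) {
        return (card[7] & (1 << 7)) == 0;  /* Valid if MSB is clear. */
    }

    uint64_t cacheGet(const uint8_t *card) {
        uint64_t v = 0;                    /* Decode little endian. */
        for (int i = 7; i >= 0; i--) v = (v << 8) | card[i];
        return v;
    }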
-
antirez authored
All the Redis functions that need to modify the string value of a key in a destructive way (APPEND, SETBIT, SETRANGE, ...) need to make the object unshared (if refcount > 1) and encoded in raw format (if the encoding is not already REDIS_ENCODING_RAW). This logic was cut & pasted in multiple places across the code. This commit moves it into a small function called dbUnshareStringValue().
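A sketch of the centralized logic, built from functions that exist in the source (treat it as an outline rather than the exact implementation):

    /* If the value is shared or not raw-encoded, replace it in the DB
     * with a private REDIS_ENCODING_RAW copy that callers can safely
     * modify in place. */
    robj *dbUnshareStringValue(redisDb *db, robj *key, robj *o) {
        if (o->refcount != 1 || o->encoding != REDIS_ENCODING_RAW) {
            robj *decoded = getDecodedObject(o);
            o = createRawStringObject(decoded->ptr, sdslen(decoded->ptr));
            decrRefCount(decoded);
            dbOverwrite(db, key, o);
        }
        return o;
    }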
-
antirez authored
We need to be sure that you can save a dataset in a Redis instance, reload it on a different architecture, and continue to count in the same HyperLogLog structure. So 32 and 64 bit, little or big endian, must all guarantee to output the same hash for the same element.
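One way to get that property, sketched below: make the hash function consume its input through fixed-width little-endian loads, so every architecture sees identical words.

    #include <stdint.h>

    /* Read a 64-bit word byte by byte as little endian, so the hash
     * input is identical on 32/64 bit and little/big endian hosts. */
    static uint64_t load_le64(const unsigned char *p) {
        uint64_t v = 0;
        for (int i = 7; i >= 0; i--) v = (v << 8) | p[i];
        return v;
    }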
-
antirez authored
The new representation is more obvious: bits are taken starting from the LSB of the first byte toward the MSB, moving to the next byte as needed. There was also a subtle error in the old representation: the first two bits were unused and everything was shifted two bits to the right, which only worked because the code requires one spare byte at the end of the array anyway. During the rewrite the code was also made safer, trying to avoid undefined behavior due to shifting a uint8_t by more than 8 bits.
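A sketch of a read in the LSB-first layout (a simplified stand-in for the actual macros): register 'i' is the 6-bit field starting at bit i*6, counting from the LSB of byte 0. Operands are widened before shifting so a uint8_t is never shifted by 8 or more bits, and one spare trailing byte is assumed, as in the real layout.

    #include <stdint.h>

    static unsigned getRegister(const uint8_t *regs, int i) {
        unsigned long byte = (i * 6) / 8;  /* Byte holding the LSB. */
        unsigned long fb = (i * 6) & 7;    /* First bit inside it. */
        unsigned long b0 = regs[byte];
        unsigned long b1 = regs[byte + 1]; /* Spare byte at the end. */
        return ((b0 >> fb) | (b1 << (8 - fb))) & 63;
    }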
-
antirez authored
-
antirez authored
-
antirez authored
This speeds up the macros by a noticeable factor.
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
There was an error in the computation of 2^register, and the sequence of zeroes computed after hashing did not include the "1".
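A sketch of the corrected rank computation: the returned count is the run of trailing zeroes in the hash PLUS the terminating "1" bit that the buggy version left out.

    #include <stdint.h>

    int zeroRunLen(uint64_t hash) {
        hash |= ((uint64_t)1 << 63);  /* Guard bit: loop terminates. */
        int count = 1;                /* The 1 accounts for the "1". */
        while ((hash & 1) == 0) {
            count++;
            hash >>= 1;
        }
        return count;
    }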
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
There was an error in the first version of the macro. Now the HLLSELFTEST test reports success.
-
antirez authored
-
antirez authored
-
antirez authored
Testing the set/get macros for the bitfield array of counters from the Redis Tcl suite is hard, so a specialized command able to test the internals was developed.
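The kind of check such a command can run, sketched with hypothetical function-style wrappers setRegister()/getRegister() around the macros:

    /* Write a known pattern into every 6-bit counter, then verify each
     * value survives the round trip; any shift/mask bug in the macros
     * shows up immediately. */
    int hllSelfTest(uint8_t *regs, int m) {
        for (int i = 0; i < m; i++) setRegister(regs, i, i & 63);
        for (int i = 0; i < m; i++)
            if (getRegister(regs, i) != (unsigned)(i & 63)) return 0;
        return 1;  /* All registers read back correctly. */
    }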
-
antirez authored
-
- 27 Mar, 2014 1 commit
-
-
antirez authored
-
- 25 Mar, 2014 8 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
In this commit:

* Decrement steps are semantically differentiated from the reserved FDs. Previously both values were 32, but the meaning was different.
* Make it clear that we save the setrlimit errno.
* Don't explicitly handle wrapping of 'f', but prevent it from happening.
* Add comments to make the function flow more readable.

This integrates PR #1630.
-
Matt Stancliff authored
errno may be reset by the previous call to redisLog, so capture the original value for proper error reporting.
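The pattern, as a self-contained sketch (the actual code logs through redisLog(), which is what can reset errno):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Capture errno before the first logging call: the logger itself
     * may perform syscalls that overwrite it. */
    void reportSetrlimitError(void) {
        int old_errno = errno;
        fprintf(stderr, "Unable to raise the open files limit.\n");
        fprintf(stderr, "setrlimit: %s\n", strerror(old_errno));
    }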
-
Matt Stancliff authored
Also update the original REDIS_EVENTLOOP_FDSET_INCR to include REDIS_MIN_RESERVED_FDS. REDIS_EVENTLOOP_FDSET_INCR exists to make sure more than (maxclients+RESERVED) entries are allocated, but we can only guarantee that if we include the current value of REDIS_MIN_RESERVED_FDS as a minimum for the INCR size.
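Expressed as code, a sketch consistent with this message (the exact definition in redis.h may differ):

    /* The increment is defined on top of the reserved-FD minimum, so
     * maxclients + reserved entries always fit in the event loop
     * (32 + 96 = 128, the value mentioned in the commit below). */
    #define REDIS_MIN_RESERVED_FDS 32
    #define REDIS_EVENTLOOP_FDSET_INCR (REDIS_MIN_RESERVED_FDS+96)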
-
Matt Stancliff authored
Fun fact: rlim_t is an unsigned long long on all platforms.

Continually subtracting from a rlim_t makes it get smaller and smaller until it wraps, and then you're up to 2^64-1. This was causing an infinite loop on Redis startup if your ulimit was extremely (almost comically) low.

The case of (f > oldlimit) would never be met in a case like:

    f = 150
    while (f > 20) f -= 128

Since f is unsigned, it can't go negative and would take on values of:

    Iteration 1: 150 - 128 => 22
    Iteration 2: 22 - 128 => 18446744073709551510
    Iterations 3-∞: ...

To catch the wraparound, we use the previous value of f stored in limit.rlim_cur. If we subtract from f and get a larger number than the value it had previously, we print an error and exit, since we don't have enough file descriptors to help the user at this point.

Thanks to @bs3g for the inspiration to fix this problem. Patches existed from @bs3g at antirez#1227, but I needed to repair a few other parts of Redis simultaneously, so I didn't get a chance to use them.
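A sketch of the guard described above (step size and messages illustrative): since rlim_t is unsigned, detect wraparound by checking whether the value grew after a subtraction.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    rlim_t lowerLimit(rlim_t f, rlim_t oldlimit) {
        while (f > oldlimit) {
            rlim_t prev = f;
            f -= 128;               /* Decrement step. */
            if (f > prev) {         /* Wrapped past zero: hopeless. */
                fprintf(stderr, "open file limit far too low\n");
                exit(1);
            }
            /* ...try setrlimit() with the new value of f here... */
        }
        return f;
    }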
-
Matt Stancliff authored
The log messages about open file limits have always been slightly opaque and confusing. Here's an attempt to fix their wording, detail, and meaning. Users will have a better understanding of how to fix very common problems with these reworded messages. Also, we handle a new error case when maxclients becomes less than one, essentially rendering the server unusable. We now exit on startup instead of leaving the user with a server unable to handle any connections. This fixes antirez#356 as well.
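A sketch of the new fatal case (the exact check and message are assumptions; server.maxclients and redisLog() are from the source):

    /* A maxclients below one makes the server unusable, so exit at
     * startup rather than accept zero connections. */
    if (server.maxclients < 1) {
        redisLog(REDIS_WARNING, "Your ulimit is too low to serve even "
            "a single client. Raise 'ulimit -n' and restart.");
        exit(1);
    }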
-
Matt Stancliff authored
32 was the additional number of file descriptors Redis would reserve when managing a too-low ulimit. The number 32 appeared statically in too many places, so now we use a macro instead that looks more appropriate.

When Redis sets up the server event loop, it uses:

    server.maxclients + REDIS_EVENTLOOP_FDSET_INCR

So, when reserving file descriptors, it makes sense to reserve at least REDIS_EVENTLOOP_FDSET_INCR FDs instead of only 32. REDIS_EVENTLOOP_FDSET_INCR is currently set to 128 in redis.h.

I also replaced the static 128 in the loop that lowers 'f' with REDIS_EVENTLOOP_FDSET_INCR, which results in no change since it was already 128.

Impact: users now need at least maxclients+128 as their open file limit instead of maxclients+32 to obtain the actual "maxclients" number of clients. Redis will carve the extra REDIS_EVENTLOOP_FDSET_INCR file descriptors it needs out of the "maxclients" range instead of failing to start (unless the local 'ulimit -n' is too low to accommodate the request).
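And the consumer of that reserve, a one-line sketch of the server setup call:

    /* The event loop is sized for maxclients plus the increment; this
     * is why that many descriptors must be obtainable at startup. */
    server.el = aeCreateEventLoop(server.maxclients + REDIS_EVENTLOOP_FDSET_INCR);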
-