- 17 Sep, 2012 1 commit
-
-
antirez authored
Redis provides support for blocking operations such as BLPOP or BRPOP. This operations are identical to normal LPOP and RPOP operations as long as there are elements in the target list, but if the list is empty they block waiting for new data to arrive to the list. All the clients blocked waiting for th same list are served in a FIFO way, so the first that blocked is the first to be served when there is more data pushed by another client into the list. The previous implementation of blocking operations was conceived to serve clients in the context of push operations. For for instance: 1) There is a client "A" blocked on list "foo". 2) The client "B" performs `LPUSH foo somevalue`. 3) The client "A" is served in the context of the "B" LPUSH, synchronously. Processing things in a synchronous way was useful as if "A" pushes a value that is served by "B", from the point of view of the database is a NOP (no operation) thing, that is, nothing is replicated, nothing is written in the AOF file, and so forth. However later we implemented two things: 1) Variadic LPUSH that could add multiple values to a list in the context of a single call. 2) BRPOPLPUSH that was a version of BRPOP that also provided a "PUSH" side effect when receiving data. This forced us to make the synchronous implementation more complex. If client "B" is waiting for data, and "A" pushes three elemnents in a single call, we needed to propagate an LPUSH with a missing argument in the AOF and replication link. We also needed to make sure to replicate the LPUSH side of BRPOPLPUSH, but only if in turn did not happened to serve another blocking client into another list ;) This were complex but with a few of mutually recursive functions everything worked as expected... until one day we introduced scripting in Redis. Scripting + synchronous blocking operations = Issue #614. Basically you can't "rewrite" a script to have just a partial effect on the replicas and AOF file if the script happened to serve a few blocked clients. The solution to all this problems, implemented by this commit, is to change the way we serve blocked clients. Instead of serving the blocked clients synchronously, in the context of the command performing the PUSH operation, it is now an asynchronous and iterative process: 1) If a key that has clients blocked waiting for data is the subject of a list push operation, We simply mark keys as "ready" and put it into a queue. 2) Every command pushing stuff on lists, as a variadic LPUSH, a script, or whatever it is, is replicated verbatim without any rewriting. 3) Every time a Redis command, a MULTI/EXEC block, or a script, completed its execution, we run the list of keys ready to serve blocked clients (as more data arrived), and process this list serving the blocked clients. 4) As a result of "3" maybe more keys are ready again for other clients (as a result of BRPOPLPUSH we may have push operations), so we iterate back to step "3" if it's needed. The new code has a much simpler semantics, and a simpler to understand implementation, with the disadvantage of not being able to "optmize out" a PUSH+BPOP as a No OP. This commit will be tested with care before the final merge, more tests will be added likely.
-
- 05 Sep, 2012 2 commits
-
-
antirez authored
Bug #582 was not present in 32-bit builds of Redis, since getObjectFromLong() returns an error on overflow there. This commit makes sure that the test does not fail because of that error when running against 32-bit builds.
-
Haruto Otake authored
Remove an unsafe and unnecessary cast. Until now this cast could lead to a segmentation fault when end > UINT_MAX:

    setbit foo 0 1
    bitcount foo 0 4294967295 => ok
    bitcount foo 0 4294967296 => segmentation fault

Note by @antirez: the commit was modified a bit to also change the string length type to long, since a Redis string is guaranteed to be at most 512 MB in size, so we can work with the same type across the whole code path. A regression test was also added.
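A small sketch of the corrected range handling, assuming a standalone helper (names are made up; the point is doing the arithmetic in long / long long rather than casting through unsigned int):

    /* Normalize a BITCOUNT-style byte range against a string length kept in
     * a long (Redis strings are at most 512 MB). Returns false if the range
     * is empty. Illustrative, not the actual Redis source. */
    #include <stdbool.h>

    static bool normalize_range(long long start, long long end, long strlen_,
                                long *out_start, long *out_end) {
        if (start < 0) start += strlen_;       /* negative offsets count from the end */
        if (end < 0) end += strlen_;
        if (start < 0) start = 0;
        if (end >= strlen_) end = strlen_ - 1; /* clamp instead of wrapping around */
        if (strlen_ == 0 || start > end) return false;
        *out_start = (long)start;
        *out_end = (long)end;
        return true;
    }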
-
- 04 Sep, 2012 1 commit
-
-
antirez authored
SORT is able to return (faster than when ordering) unordered output if the "BY" clause is used with a constant value. However we try to play well with the determinism requirements of scripting by always providing sorted output when SORT (and other similar commands) are called by Lua scripts.

Until now we used the general mechanism in place in scripting to reorder SORT output: if the command has the "S" flag set, the Lua scripting engine takes an additional step when converting a multi bulk reply into a Lua value, calling a Lua sorting function. This is suboptimal, as we can do it faster inside SORT itself.

It is also broken, as issue #545 shows us: basically when SORT is used with a constant BY, and GET is also used, the Lua scripting engine was trying to order the output as a flat array, while it was actually a list of key-value pairs.

What we do now is recognize whether the caller of SORT is the Lua client (we can check this using the REDIS_LUA_CLIENT flag). If so, and if a "don't sort" condition is triggered by the BY option with a constant string, we force lexicographical sorting; a sketch of the check is below.

This commit fixes the bug and improves the performance, and at the same time simplifies the implementation. This does not mean I'm smart today, it means I was stupid when I committed the original implementation ;)
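A hedged sketch of that check as it could look inside a SORT implementation (flag and variable names follow the description above; this is not a verbatim copy of the Redis source):

    /* If the caller is the Lua scripting client and a constant BY pattern
     * means we would normally skip sorting, sort anyway, lexicographically,
     * so that scripts always receive deterministic output. */
    if (dontsort && (c->flags & REDIS_LUA_CLIENT)) {
        dontsort = 0;   /* we need to sort after all...                      */
        alpha = 1;      /* ...comparing elements as strings...               */
        sortby = NULL;  /* ...and by the elements themselves, not by BY keys */
    }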
-
- 31 Aug, 2012 1 commit
-
-
antirez authored
Redis used to crash with a call like the following:

    EVAL "redis.call()" 0

Now the explicit check for at least one argument prevents the problem. This commit fixes issue #655.
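A minimal sketch of such a guard in a Lua C function (the function name and error message are illustrative; only lua_gettop()/luaL_error() are standard Lua API calls):

    #include <lua.h>
    #include <lauxlib.h>

    /* A redis.call()-style handler must refuse zero arguments instead of
     * indexing into an empty argument vector. */
    static int redis_call_sketch(lua_State *lua) {
        int argc = lua_gettop(lua);            /* number of Lua arguments */
        if (argc == 0)
            return luaL_error(lua, "Please specify at least one argument");
        /* ... build argv[0..argc-1] and execute the Redis command ... */
        return 1;                              /* one value (the reply) on the stack */
    }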
-
- 12 Jun, 2012 1 commit
-
-
antirez authored
The new fuzzy tester also removes elements from the hash instead of just adding random fields. This should increase the probability of finding bugs in the implementations of the hash type's internal representations.
-
- 11 Jun, 2012 1 commit
-
-
antirez authored
A new stress test was added for the code that converts a ziplist into a hash table. In this commit the randomValue helper function was also modified to return negative values as well.
-
- 02 Jun, 2012 2 commits
-
-
antirez authored
wait_for_condition is now used instead of the usual "after 1000" (which is the way to sleep in Tcl). This should avoid finding the replica in a state where it is still loading the RDB into memory and returning a -LOADING error. This test used to fail when run under valgrind, due to the added latencies.
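The real wait_for_condition is a Tcl procedure in the Redis test suite; the same polling idea, sketched in C purely for illustration:

    #include <stdbool.h>
    #include <unistd.h>

    /* Poll a condition a bounded number of times instead of sleeping for a
     * fixed interval, so slow environments (e.g. valgrind) just retry longer. */
    static bool wait_for_condition(bool (*cond)(void), int max_tries, int wait_ms) {
        for (int i = 0; i < max_tries; i++) {
            if (cond()) return true;
            usleep((useconds_t)wait_ms * 1000);
        }
        return false;      /* condition never became true within the budget */
    }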
-
Alex Mitrofanov authored
(additional commit notes by antirez@gmail.com): The rdbIsObjectType() macro was not updated when the new RDB object type of ziplist-encoded hashes was added. As a result RESTORE, which uses rdbLoadObjectType(), failed when a ziplist-encoded hash was loaded. This did not affect normal RDB loading because in that case the lower-level function rdbLoadType() is used. The commit also adds a regression test.
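The macro in question is essentially a range check on the RDB type byte; the sketch below uses hypothetical type values just to show why adding a new on-disk encoding requires touching the check too:

    /* Illustrative values, not the real RDB type numbers. A validity macro
     * like this silently rejects any object type added later unless its
     * accepted range is extended in the same commit. */
    #define RDB_TYPE_STRING        0
    #define RDB_TYPE_HASH_ZIPLIST  13   /* hypothetical id of the new encoding */

    /* rdbLoadObjectType() (used by RESTORE) only accepts types for which
     * this macro is true; the old upper bound stopped short of the new type. */
    #define rdbIsObjectType(t) (((t) >= 0 && (t) <= 4) || ((t) >= 9 && (t) <= 13))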
-
- 31 May, 2012 1 commit
-
-
antirez authored
In issue #529 a user reported a bug that can be triggered with the following code:

    flushdb
    set a "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
    bitop or x a b

The bug was introduced by the speed optimization in commit 8bbc0768, which specializes every BITOP operation loop up to the minimum length of the input strings. However the computation of the minimum length contained an error when a non-existing key was present in the input after a key with non-zero length. This commit fixes the bug and adds a regression test for it.
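A sketch of the corrected minimum-length computation, assuming illustrative src[] / len[] arrays holding the input strings (NULL for missing keys):

    /* A missing key counts as an empty string, so the fast-path length must
     * drop to zero rather than keeping the length of an earlier non-empty
     * key. Illustrative only, not the Redis source. */
    unsigned long minlen = 0;
    for (int j = 0; j < numkeys; j++) {
        unsigned long l = (src[j] == NULL) ? 0 : len[j];
        if (j == 0 || l < minlen) minlen = l;
    }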
-
- 25 May, 2012 1 commit
-
- 24 May, 2012 4 commits
-
-
antirez authored
This commit adds a fast path to BITOP that can be used for all the bytes from 0 to the minimal length of the input strings, when there are at most 16 input keys. Often the combined bitmaps are roughly the same size, so this optimization can provide a 10x speed boost to most real-world usages of the command.

Bytes are processed four full words at a time, in loops specialized for the specific BITOP sub-command, without the need to check for length issues with the inputs (since we run this algorithm only as far as there is data from all the keys at the same time). The remaining part of the string is processed in the usual way using the slow but generic algorithm (a sketch of the AND case follows below).

It is possible to do better than this with inputs that are not roughly the same size: sorting the input keys by length, initializing the result string in a smarter way, and noticing that the final part of the output string, composed only of data from the longest input, does not need any processing, since AND, OR and XOR against an empty string do not alter the output (it is zero in the first case, and the original string in the other two cases). More optimizations will likely be implemented later, but this should be enough to release Redis 2.6-RC4 with bitops merged in.

Note: this commit also adds better testing for the BITOP NOT command, which is currently the fastest and hard to optimize further since it just flips the bits of a single input string.
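A hedged sketch of the word-at-a-time idea for the AND sub-command (buffer names are illustrative, alignment concerns are glossed over, and the byte-wise tail plus the generic slow path are omitted):

    #include <stddef.h>
    #include <string.h>

    /* Fold numkeys input buffers into res over the first minlen bytes,
     * processing four machine words per iteration. Assumes suitably aligned
     * buffers; illustrative, not the actual Redis source. */
    static void bitop_and_fast(unsigned char *res, unsigned char **src,
                               int numkeys, size_t minlen) {
        memcpy(res, src[0], minlen);                  /* start from the first input */
        size_t steps = minlen / (sizeof(unsigned long) * 4);
        unsigned long *lres = (unsigned long *) res;
        for (size_t w = 0; w < steps; w++) {
            for (int j = 1; j < numkeys; j++) {
                unsigned long *lp = (unsigned long *) src[j] + w * 4;
                lres[0] &= lp[0];
                lres[1] &= lp[1];
                lres[2] &= lp[2];
                lres[3] &= lp[3];
            }
            lres += 4;
        }
        /* the remaining minlen % (4 * sizeof(unsigned long)) bytes are left
         * to the generic byte-wise loop (not shown) */
    }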
-
antirez authored
A bug in the implementation caused BITOP to crash the server if at least one of the source objects was integer-encoded. The new implementation takes an additional array of Redis object pointers and calls getDecodedObject() to get a reference to a string-encoded object, then uses decrRefCount() to release the object. Tests were modified to cover the regression and improve coverage.
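getDecodedObject() and decrRefCount() are the real Redis helpers named above; the loop around them is only an illustrative sketch of how the extra array of object pointers is used:

    /* Keep a reference to a string-encoded version of every input object
     * while the bit operation runs, then release the references. */
    for (j = 0; j < numkeys; j++) {
        robj *o = lookupKeyRead(c->db, c->argv[j+3]);
        if (o == NULL) { objects[j] = NULL; src[j] = NULL; continue; }
        objects[j] = getDecodedObject(o);      /* integer-encoded -> raw string */
        src[j] = objects[j]->ptr;
        len[j] = sdslen(objects[j]->ptr);
    }
    /* ... perform the AND/OR/XOR/NOT over src[]/len[] ... */
    for (j = 0; j < numkeys; j++)
        if (objects[j]) decrRefCount(objects[j]);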
-
antirez authored
Fuzzing tests of BITCOUNT / BITOP are now iterated multiple times. The new BITCOUNT fuzzing test uses random strings in a wider interval of lengths, including zero-length strings.
-
antirez authored
The Redis implementation is tested against Tcl implementations of the same operation. Both fuzzing and testing of specific aspects of the commands' behavior are performed.
-
- 23 May, 2012 1 commit
-
-
antirez authored
Weeks ago, while trying to fix a harmless GCC warning, I introduced a bug in the ziplist-encoded implementations of sorted sets. The bug completely broke the zuiNext() iterator, which is used in the ZINTERSTORE and ZUNIONSTORE implementation, so those two commands are no longer reliable starting from Redis version 2.4.12 and the latest 2.6.0-RC releases. This commit fixes the problem and adds a regression test.
-
- 02 May, 2012 1 commit
-
-
antirez authored
-
- 01 May, 2012 1 commit
-
-
Harmen authored
-
- 26 Apr, 2012 1 commit
-
-
antirez authored
A new primitive, wait_for_condition, was introduced in the test framework; it makes waiting for events simpler, so that it is easier to write tests that are more resistant to timing issues.
-
- 23 Apr, 2012 2 commits
- 21 Apr, 2012 1 commit
-
-
antirez authored
Two limits are added: 1) Up to SLOWLOG_ENTRY_MAX_ARGV arguments are logged. 2) Up to SLOWLOG_ENTRY_MAX_STRING bytes per argument are logged. Additionally, slowlog-max-len is now set to 128 by default (it was 1024). The number of remaining arguments / bytes is logged in the entry so that the user can better understand the nature of the logged command.
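A sketch of how such truncation can look when building a slow log entry (the constant values below are illustrative; only the names come from the description above):

    #define SLOWLOG_ENTRY_MAX_ARGV    32    /* illustrative values */
    #define SLOWLOG_ENTRY_MAX_STRING  128

    /* Log at most SLOWLOG_ENTRY_MAX_ARGV arguments and at most
     * SLOWLOG_ENTRY_MAX_STRING bytes of each, recording how much was
     * dropped so the user can still understand the command. */
    int logged = argc <= SLOWLOG_ENTRY_MAX_ARGV ? argc : SLOWLOG_ENTRY_MAX_ARGV;
    for (int j = 0; j < logged; j++) {
        /* copy up to SLOWLOG_ENTRY_MAX_STRING bytes of argv[j]; if it was
         * longer, append a marker such as "... (N more bytes)" */
    }
    if (argc > logged) {
        /* add one final pseudo-argument such as "... (N more arguments)" */
    }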
-
- 19 Apr, 2012 1 commit
-
-
antirez authored
-
- 18 Apr, 2012 10 commits
- 17 Apr, 2012 3 commits
-
-
Michael Schlenker authored
Tcl's exec can send data to stdout itself; there is usually no need to call cat/echo for that.
-
antirez authored
-
antirez authored
-
- 13 Apr, 2012 3 commits
- 08 Apr, 2012 1 commit
-
-
antirez authored
-