- 31 Oct, 2012 2 commits
-
-
Salvatore Sanfilippo authored
fix a typo in a comment
-
antirez authored
Because of the short-circuit behavior of &&, inverting the two sides of the if expression avoids a hash table lookup when the non-EX variant of SET is called. Thanks to Weibin Yao (@yaoweibin on GitHub) for spotting this.
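A minimal sketch of the idea; the condition shown is illustrative, not the exact expression from the Redis source:

    /* With &&, the right operand is evaluated only when the left one is true,
     * so putting the cheap NULL check first skips the dictionary lookup
     * whenever no expire was given (i.e. for plain SET without EX). */

    /* Before: the lookup runs even when expire == NULL. */
    if (lookupKeyWrite(c->db, key) != NULL && expire != NULL) { /* ... */ }

    /* After: the lookup is skipped for the non-EX variant. */
    if (expire != NULL && lookupKeyWrite(c->db, key) != NULL) { /* ... */ }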
-
- 30 Oct, 2012 2 commits
- 26 Oct, 2012 2 commits
- 25 Oct, 2012 2 commits
- 24 Oct, 2012 2 commits
-
-
antirez authored
-
YAMAMOTO Takashi authored
-
- 22 Oct, 2012 5 commits
-
-
antirez authored
This was an important piece of information missing from the replication section of the INFO output. It simply reflects whether the slave is read-only or not.
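On a read-only slave the Replication section of INFO now contains a line along these lines (excerpt is illustrative):

    # Replication
    role:slave
    slave_read_only:1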
-
Salvatore Sanfilippo authored
Fix (cosmetic) typos in dict.h
-
Schuster authored
(Commit message from @antirez, as it was missing in the original commits; the patch was also modified a bit to still work with 2.4 dumps and to avoid if expressions that are always true because of the range of the checked types.) This commit changes redis-check-dump to account for the new encodings and for the new MSTIME expire format. It also refactors the test for a valid type into a function. The code is still compatible with dumps generated by Redis 2.4. This fixes issue #709.
-
antirez authored
On some systems, notably OS X, the 3.5 GB limit was too high and was not able to prevent an out-of-memory crash. The 3 GB limit works better, and it is still a lot of memory within the 4 GB theoretical limit, so it's not going to bother anyone :-) This fixes issue #711.
-
antirez authored
When calling SCRIPT KILL you can currently get two errors:
* No script in timeout (busy) state.
* The script already performed a write.
It is useful to be able to distinguish the two errors, but right now both start with the "ERR" prefix, so fragile string matching must be used. This commit introduces two different prefixes, -NOTBUSY and -UNKILLABLE, returned respectively when no script is busy at the moment and when the script already executed a write operation and cannot be killed.
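In a redis-cli session the distinction looks roughly like this (output is illustrative of the reply format):

    redis 127.0.0.1:6379> SCRIPT KILL
    (error) NOTBUSY No scripts in execution right now.

and a script that already wrote to the dataset is refused with an error starting with UNKILLABLE instead.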
-
- 16 Oct, 2012 1 commit
-
-
antirez authored
Before this commit a transaction used to appear in the MONITOR output like this:
MULTI
EXEC
... actual commands of the transaction ...
Because, after all, that is the natural order of things: transaction commands are queued and executed *only after* EXEC is called. However this makes debugging with MONITOR a mess, so the code was modified to provide coherent output. What happens now is that MULTI is rendered in the MONITOR output as soon as possible, while EXEC is propagated only after the transaction is executed, or even when it fails because of WATCH; in that case you'll simply see:
MULTI
EXEC
that is, an empty transaction.
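With the change, a successful transaction shows up in MONITOR roughly as follows (timestamps and client address are illustrative):

    1351682000.123456 [0 127.0.0.1:50000] "MULTI"
    1351682000.123500 [0 127.0.0.1:50000] "SET" "foo" "bar"
    1351682000.123600 [0 127.0.0.1:50000] "EXEC"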
-
- 11 Oct, 2012 1 commit
-
-
antirez authored
If the server is password protected we need to accept AUTH even when there is a server busy (-BUSY) condition, otherwise it will be impossible to send SHUTDOWN NOSAVE or SCRIPT KILL. This fixes issue #708.
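A simplified sketch of the kind of check involved, assuming a processCommand()-style dispatch; the real code also verifies that only the SHUTDOWN NOSAVE and SCRIPT KILL forms are let through:

    /* While a timed-out script keeps the server busy, accept only the
     * commands that can resolve the situation.  AUTH must be among them,
     * otherwise a password-protected server can never receive
     * SHUTDOWN NOSAVE or SCRIPT KILL from an unauthenticated client. */
    if (server.lua_timedout &&
        c->cmd->proc != authCommand &&
        c->cmd->proc != shutdownCommand &&   /* SHUTDOWN NOSAVE only */
        c->cmd->proc != scriptCommand)       /* SCRIPT KILL only */
    {
        addReply(c, shared.slowscripterr);   /* the -BUSY error reply */
        return REDIS_OK;
    }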
-
- 10 Oct, 2012 2 commits
-
-
Salvatore Sanfilippo authored
Update src/redis-benchmark.c
-
NanXiao authored
The current implementation reads:

    if (c->pending == 0) clientDone(c);

In the clientDone function c's memory is freed, but the enclosing loop then continues with while(c->pending). Since c has already been freed, c->pending dereferences an invalid pointer, which can cause a crash (core dump) on some platforms (e.g. Solaris). So I think the code should be modified to:

    if (c->pending == 0) {
        clientDone(c);
        break;
    }

so that while(c->pending) is never evaluated on a freed client.
-
- 06 Oct, 2012 1 commit
-
-
antirez authored
-
- 05 Oct, 2012 4 commits
-
-
dvir volk authored
Fixed the server install script to rewrite the default configuration file rather than a template, and removed the old config template.
Conflicts: utils/redis.conf.tpl
-
antirez authored
The previously used hash function, djbhash, is not secure against collision attacks even when the seed is randomized, as there are simple ways to find seed-independent collisions. The new hash function appears to be safe (or at least much harder to exploit) in this respect, and has better distribution.
Better distribution does not always mean better performance. For instance, in a quick benchmark with "DEBUG POPULATE 1000000" I obtained the following results:
1.6 seconds with djbhash
2.0 seconds with murmurhash2
This is due to the fact that djbhash hashes keys following the pattern `prefix:<id>`, where the ids are numerically near, into nearby buckets, which improves locality. However with other access patterns, where keys have no relation to each other, murmurhash2 has some (apparently minimal) speed advantage. On the other hand, a better distribution should significantly improve the quality of the elements returned by dictGetRandomKey(), which is used by SPOP, SRANDMEMBER, RANDOMKEY, and other commands.
Everything considered, and under the suspicion that this commit fixes a security issue in Redis, we are switching to the new hash function. If a serious speed regression is found in the future we'll be able to step back easily. This commit fixes issue #663.
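A small, self-contained illustration of why a djb-style hash cannot be rescued by seed randomization. The function below is NOT the exact one Redis used, only the same hash = hash*33 + byte structure; the two keys collide for every possible seed because (a+1)*33 + (b-33) == a*33 + b:

    #include <stdio.h>

    /* Illustrative djb-style hash: hash = hash*33 + byte, seeded. */
    static unsigned int djb_hash(unsigned int seed, const char *s) {
        unsigned int h = seed;
        while (*s) h = h * 33 + (unsigned char)*s++;
        return h;
    }

    int main(void) {
        /* "Ab" and "BA" collide regardless of the seed value. */
        printf("%u %u\n", djb_hash(12345, "Ab"), djb_hash(12345, "BA"));
        printf("%u %u\n", djb_hash(99999, "Ab"), djb_hash(99999, "BA"));
        return 0;
    }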
-
antirez authored
This commit warns the user with a log message at the "warning" level if:
1) After the server startup the maxmemory limit is found to be < 1MB.
2) After a CONFIG SET command modifying the maxmemory setting, the limit is set to a value smaller than the currently used memory.
The behavior of the Redis server is unmodified, and this will not make the CONFIG SET command or a wrong configuration in redis.conf less likely to create problems, but at least it will make most users aware of a possible error they committed, without resorting to external help.
However no warning is issued if, as a result of loading the AOF or RDB file, we are very near the maxmemory setting, or key eviction will be needed in order to get under the specified maxmemory setting. The reason is that in servers configured as a cache, with an aggressive maxmemory-policy, restarting the server will cause this condition most of the time if persistence is not switched off. This fixes issue #429.
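A sketch of the startup-time half of the check; names follow the Redis style but the snippet and message text are illustrative:

    /* Warn if maxmemory is set to a suspiciously small value (< 1MB):
     * this is almost always a units mistake in redis.conf or CONFIG SET. */
    if (server.maxmemory > 0 && server.maxmemory < 1024*1024) {
        redisLog(REDIS_WARNING,
            "WARNING: You specified a maxmemory value that is less than 1MB "
            "(current value is %llu bytes). Are you sure this is what you "
            "really want?", (unsigned long long) server.maxmemory);
    }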
-
antirez authored
-
- 04 Oct, 2012 2 commits
-
-
Jokea authored
When the system time is moved backwards, the timer will not work properly and some core functionality of Redis will stop working (e.g. replication, bgsave, etc). See issue #633 for details. The patch saves the previous time and, when a system clock skew is detected, forces all timers to expire.
Modified by @antirez: the previous time was moved into the eventLoop structure to make sure the library is still thread safe as long as you use different event loops in different threads (otherwise you need some synchronization). More comments were added about the reasoning behind the patch, which is worth reporting here:
/* If the system clock is moved to the future, and then set back to the
 * right value, time events may be delayed in a random way. Often this
 * means that scheduled operations will not be performed soon enough.
 *
 * Here we try to detect system clock skews, and force all the time
 * events to be processed ASAP when this happens: the idea is that
 * processing events earlier is less dangerous than delaying them
 * indefinitely, and practice suggests it is. */
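A compact sketch of the detection itself, close in spirit to the ae.c change but slightly simplified:

    time_t now = time(NULL);

    /* If "now" is smaller than the last time the timer handler ran, the
     * clock was moved backwards: force every registered time event to
     * fire ASAP by zeroing its scheduled second. */
    if (now < eventLoop->lastTime) {
        aeTimeEvent *te = eventLoop->timeEventHead;
        while (te) {
            te->when_sec = 0;
            te = te->next;
        }
    }
    eventLoop->lastTime = now;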
-
antirez authored
The new message now contains a hint about modifying the repl-timeout configuration directive if the problem persists. This should normally not be needed, because while the master generates the RDB file it makes sure to send newlines to the replication channel to prevent timeouts. However there are times when masters running on very slow systems can completely stall for seconds during the RDB saving process. In such a case enlarging the timeout value can fix the problem. See issue #695 for an example of this problem in an EC2 deployment.
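If the timeout keeps firing while the master is saving, the directive can be raised on the slave's side in redis.conf; the value below is only an example:

    repl-timeout 120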
-
- 03 Oct, 2012 2 commits
-
-
antirez authored
When SORT is called with the option BY set to a string constant not including the wildcard character "*", there is no way to sort the output, so any ordering is valid. This allows the SORT internals to optimize their work and not really sort the output at all. However it was odd that this option was not able to retain the natural order of a sorted set. This feature was requested by users multiple times, since calling SORT with GET against sorted sets as a way to mass-fetch objects can be handy. This commit introduces two things:
1) The ability of SORT to return sorted set elements in their natural ordering when `BY nosort` is specified, according to the `DESC / ASC` options.
2) The ability of SORT to optimize this case further if LIMIT is passed as well, avoiding fetching the whole sorted set and directly obtaining the specified range instead.
Because in this case the sorting is always deterministic, no post-sorting activity is performed when SORT is called from a Lua script. This commit fixes issue #98.
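For example (key names are illustrative), the first ten members of a sorted set can be returned in their natural descending order, together with an associated string key for each member:

    SORT myzset BY nosort DESC LIMIT 0 10 GET # GET data:*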
-
Greg Hurrell authored
-
- 01 Oct, 2012 1 commit
-
- 28 Sep, 2012 2 commits
-
-
antirez authored
A previous commit introduced redis.NIL. This commit adds similar helper functions that return a table with a single field set to the specified string, so that instead of using
return {err="My Error"}
it is possible to use the more idiomatic forms:
return redis.error_reply("My Error")
return redis.status_reply("OK")
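From redis-cli the two helpers produce replies roughly like this (illustrative session):

    EVAL "return redis.error_reply('My Error')" 0
    (error) My Error
    EVAL "return redis.status_reply('OK')" 0
    OK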
-
antirez authored
Lua arrays can't contain nil elements (see http://www.lua.org/pil/19.1.html for more information), so Lua scripts were not able to return a multi-bulk reply containing nil bulk elements. This commit introduces a special conversion: a table with just a "nilbulk" field set to a boolean value is converted by Redis into a nil bulk reply, but at the same time, for Lua this value is not nil, so it can be used inside Lua arrays. This type is also assigned to redis.NIL, so the following two forms are equivalent and will both return a nil bulk reply as the second element of a three-element array:
EVAL "return {1,redis.NIL,3}" 0
EVAL "return {1,{nilbulk=true},3}" 0
The result in redis-cli will be:
1) (integer) 1
2) (nil)
3) (integer) 3
-
- 26 Sep, 2012 1 commit
-
-
antirez authored
-
- 21 Sep, 2012 3 commits
-
-
antirez authored
-
antirez authored
For "CASE 4" (see code) we need to free the element if it's already in the result dictionary and adding it failed.
-
antirez authored
SRANDMEMBER called with just the key argument can return a single random element from a Redis Set. However many users need to return multiple unique elements from a Set. This is not a trivial problem to handle on the client side, and for truly good performance a C implementation was required. After many requests for this feature it was finally implemented.
The main problem in implementing this command is the strategy to follow when the number of elements the user asks for is close to the number of elements already inside the set. In this case, asking the dictionary API for random elements and trying to add them to a temporary set may result in extremely poor performance, as most add operations would be wasted on duplicated elements. For this reason this implementation uses a different strategy in that case: the Set is copied, and random elements are returned from the copy until the specified count is reached. The code actually uses 4 different algorithms optimized for the different cases. If the count is negative, the command changes behavior and allows duplicated elements in the returned subset.
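For example (set contents and the randomly chosen elements are illustrative):

    redis 127.0.0.1:6379> SADD myset a b c d e
    (integer) 5
    redis 127.0.0.1:6379> SRANDMEMBER myset 3
    1) "b"
    2) "e"
    3) "a"
    redis 127.0.0.1:6379> SRANDMEMBER myset -6
    1) "c"
    2) "c"
    3) "a"
    4) "e"
    5) "c"
    6) "b"

With a positive count the returned elements are unique (and at most the cardinality of the set); with a negative count exactly |count| elements are returned and duplicates are allowed.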
-
- 17 Sep, 2012 3 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
Redis provides support for blocking operations such as BLPOP or BRPOP. These operations are identical to normal LPOP and RPOP as long as there are elements in the target list, but if the list is empty they block waiting for new data to arrive. All the clients blocked waiting for the same list are served in a FIFO way, so the first that blocked is the first to be served when more data is pushed by another client into the list.
The previous implementation of blocking operations was conceived to serve clients in the context of push operations. For instance:
1) There is a client "A" blocked on list "foo".
2) The client "B" performs `LPUSH foo somevalue`.
3) The client "A" is served in the context of the "B" LPUSH, synchronously.
Processing things in a synchronous way was useful because, if the value pushed by "B" is immediately served to "A", from the point of view of the database it is a NOP (no operation): nothing is replicated, nothing is written to the AOF file, and so forth. However later we implemented two things:
1) Variadic LPUSH, which can add multiple values to a list in the context of a single call.
2) BRPOPLPUSH, a version of BRPOP that also performs a "PUSH" side effect when receiving data.
This forced us to make the synchronous implementation more complex. If client "B" is waiting for data and "A" pushes three elements in a single call, we needed to propagate in the AOF and replication link an LPUSH with one of its arguments missing. We also needed to make sure to replicate the LPUSH side of BRPOPLPUSH, but only if it did not in turn happen to serve another client blocked on yet another list ;) This was complex, but with a few mutually recursive functions everything worked as expected... until one day we introduced scripting in Redis.
Scripting + synchronous blocking operations = issue #614. Basically you can't "rewrite" a script to have just a partial effect on the replicas and AOF file if the script happened to serve a few blocked clients.
The solution to all these problems, implemented by this commit, is to change the way we serve blocked clients. Instead of serving them synchronously, in the context of the command performing the PUSH operation, it is now an asynchronous and iterative process:
1) If a key that has clients blocked waiting for data is the subject of a list push operation, we simply mark the key as "ready" and put it into a queue.
2) Every command pushing stuff onto lists, be it a variadic LPUSH, a script, or whatever, is replicated verbatim without any rewriting.
3) Every time a Redis command, a MULTI/EXEC block, or a script completes its execution, we run through the list of keys ready to serve blocked clients (as more data arrived) and process it, serving the blocked clients.
4) As a result of "3" more keys may become ready again for other clients (as a result of BRPOPLPUSH we may have push operations), so we iterate back to step "3" if needed.
The new code has much simpler semantics and a simpler-to-understand implementation, with the disadvantage of not being able to "optimize out" a PUSH+BPOP as a no-op. This commit will be tested with care before the final merge; more tests will likely be added.
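A heavily simplified sketch of the two halves of the new flow; function and field names are illustrative and the bodies are reduced to comments, so this is not the actual Redis source:

    /* Called by every list push: do not serve anybody here, just remember
     * that this key may now satisfy blocked clients. */
    void signalListAsReady(redisDb *db, robj *key) {
        if (dictFind(db->blocking_keys, key) == NULL) return; /* nobody blocked */
        if (dictFind(db->ready_keys, key) != NULL) return;    /* already queued */
        /* ... append (db,key) to server.ready_keys and mark it in db->ready_keys ... */
    }

    /* Called after every command, MULTI/EXEC block or script: serve blocked
     * clients until no more keys become ready (a served BRPOPLPUSH may push
     * again and re-populate the queue, hence the loop). */
    void handleClientsBlockedOnLists(void) {
        while (listLength(server.ready_keys) != 0) {
            /* ... pop one ready key and serve as many blocked clients as the
             * available list elements allow ... */
        }
    }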
-
- 11 Sep, 2012 1 commit
-
-
antirez authored
Unfortunately we still had the lame atoi() call without any error checking in place, so "SELECT foo" would behave like "SELECT 0". This was not a huge problem per se, but some people expected that DB indexes could be strings and not just numbers, and without an error you get the feeling that they can be, even though the behavior says otherwise. Now getLongFromObjectOrReply() is used, as almost everywhere else across the code, generating an error if the argument is not an integer or overflows the long type. Thanks to @mipearson for reporting this on Twitter.
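A sketch of the resulting command handler, close to the change in spirit; the error text is illustrative:

    void selectCommand(redisClient *c) {
        long id;

        /* Reject non-numeric or overflowing arguments instead of silently
         * treating them as 0, as atoi() did. */
        if (getLongFromObjectOrReply(c, c->argv[1], &id,
            "invalid DB index") != REDIS_OK) return;

        if (selectDb(c, id) == REDIS_ERR) {
            addReplyError(c, "invalid DB index");
        } else {
            addReply(c, shared.ok);
        }
    }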
-
- 10 Sep, 2012 1 commit
-
-
antirez authored
-