- 04 Oct, 2012 2 commits
-
-
Jokea authored
When the system time is moved back, the timer will not work properly and some core functionality of Redis will stop working (e.g. replication, bgsave, etc). See issue #633 for details. The patch saves the previous time and, when a system clock skew is detected, forces all timers to expire. Modified by @antirez: the previous time was moved into the eventLoop structure to make sure the library stays thread safe as long as you use different event loops in different threads (otherwise you need some synchronization). More comments were added about the reasoning behind the patch, which is worth reporting here:

/* If the system clock is moved to the future, and then set back to the
 * right value, time events may be delayed in a random way. Often this
 * means that scheduled operations will not be performed soon enough.
 *
 * Here we try to detect system clock skews, and force all the time
 * events to be processed ASAP when this happens: the idea is that
 * processing events earlier is less dangerous than delaying them
 * indefinitely, and practice suggests it is. */
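The detection itself is cheap; a minimal standalone sketch of the idea (the timer structure and field names are made up, this is not the actual ae.c code):

    #include <stdio.h>
    #include <time.h>

    typedef struct timer {
        time_t when;          /* absolute time at which the timer should fire */
        struct timer *next;
    } timer;

    static time_t last_time; /* value of time(NULL) seen on the previous iteration */

    /* If the clock moved backwards, force every timer to fire ASAP:
     * firing early is less dangerous than being delayed indefinitely. */
    void check_clock_skew(timer *head) {
        time_t now = time(NULL);
        if (now < last_time) {
            for (timer *t = head; t != NULL; t = t->next) t->when = 0;
        }
        last_time = now;
    }

    int main(void) {
        timer t2 = { time(NULL) + 60, NULL };
        timer t1 = { time(NULL) + 10, &t2 };
        last_time = time(NULL) + 1000;   /* pretend the clock just jumped back */
        check_clock_skew(&t1);           /* both timers are now forced to fire */
        printf("t1 fires at %ld, t2 fires at %ld\n", (long)t1.when, (long)t2.when);
        return 0;
    }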
-
antirez authored
The new message now contains a hint about modifying the repl-timeout configuration directive if the problem persists. This should normally not be needed, because while the master generates the RDB file it makes sure to send newlines to the replication channel to prevent timeouts. However there are times when masters running on very slow systems can completely stall for seconds during the RDB saving process. In such a case enlarging the timeout value can fix the problem. See issue #695 for an example of this problem in an EC2 deployment.
-
- 03 Oct, 2012 1 commit
-
-
antirez authored
When SORT is called with the option BY set to a string constant not including the wildcard character "*", there is no way to sort the output so any ordering is valid. This allows the SORT internals to optimize its work and not really sort the output at all. However it was odd that this option was not able to retain the natural order of a sorted set. This feature was requested by users multiple times, since calling SORT with GET against sorted sets as a way to mass-fetch objects can be handy. This commit introduces two things:

1) The ability of SORT to return sorted set elements in their natural ordering when `BY nosort` is specified, according to the `DESC / ASC` options.
2) The ability of SORT to optimize this case further if LIMIT is passed as well, avoiding fetching the whole sorted set and directly obtaining the specified range instead.

Because in this case the sorting is always deterministic, no post-sorting activity is performed when SORT is called from a Lua script. This commit fixes issue #98.
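For illustration, a minimal hiredis sketch of the new behavior (key name, host and port are made up; it assumes a server built from this commit running locally):

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) return 1;

        /* A small sorted set used only for this example. */
        redisReply *r = redisCommand(c, "DEL myzset");
        if (r) freeReplyObject(r);
        r = redisCommand(c, "ZADD myzset 1 a 2 b 3 c 4 d");
        if (r) freeReplyObject(r);

        /* BY nosort retains the natural (score) order, DESC reverses it,
         * and LIMIT lets SORT fetch only the requested range. */
        r = redisCommand(c, "SORT myzset BY nosort DESC LIMIT 0 2 GET #");
        for (size_t i = 0; r && i < r->elements; i++)
            printf("%zu) %s\n", i + 1, r->element[i]->str);   /* d, c */

        if (r) freeReplyObject(r);
        redisFree(c);
        return 0;
    }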
-
- 01 Oct, 2012 1 commit
-
- 28 Sep, 2012 2 commits
-
-
antirez authored
A previous commit introduced redis.NIL. This commit adds similar helper functions to return tables with a single field set to the specified string, so that instead of using 'return {err="My Error"}' it is possible to use a more idiomatic form:

return redis.error_reply("My Error")
return redis.status_reply("OK")
-
antirez authored
Lua arrays can't contain nil elements (see http://www.lua.org/pil/19.1.html for more information), so Lua scripts were not able to return a multi-bulk reply containing nil bulk elements inside. This commit introduces a special conversion: a table with just a "nilbulk" field set to a boolean value is converted by Redis into a nil bulk reply, but at the same time for Lua this type is not a "nil", so it can be used inside Lua arrays. This type is also assigned to redis.NIL, so the following two forms are equivalent and will be able to return a nil bulk reply as the second element of a three element array:

EVAL "return {1,redis.NIL,3}" 0
EVAL "return {1,{nilbulk=true},3}" 0

The result in redis-cli will be:

1) (integer) 1
2) (nil)
3) (integer) 3
-
- 26 Sep, 2012 1 commit
-
-
antirez authored
-
- 21 Sep, 2012 3 commits
-
-
antirez authored
-
antirez authored
For "CASE 4" (see code) we need to free the element if it's already in the result dictionary and adding it failed.
-
antirez authored
SRANDMEMBER called with just the key argument returns a single random element from a Redis Set. However many users need to get multiple unique elements out of a Set: this is not a trivial problem to handle on the client side, and for truly good performance a C implementation was required. After many requests for this feature it was finally implemented.

The problem in implementing this command is the strategy to follow when the number of elements the user asks for is near the number of elements already inside the set. In this case asking the dictionary API for random elements and trying to add them to a temporary set may result in extremely poor performance, as most add operations will be wasted on duplicated elements. For this reason this implementation uses a different strategy in that case: the Set is copied, and random elements are removed until only the specified count remains. The code actually uses 4 different algorithms optimized for the different cases.

If the count is negative, the command changes behavior and allows for duplicated elements in the returned subset.
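A hedged hiredis sketch of the two modes (key name, host and port are made up for the example):

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    static void print_reply(const char *title, redisReply *r) {
        printf("%s:", title);
        for (size_t i = 0; r && i < r->elements; i++) printf(" %s", r->element[i]->str);
        printf("\n");
    }

    int main(void) {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) return 1;

        redisReply *r = redisCommand(c, "SADD myset a b c d e");
        if (r) freeReplyObject(r);

        /* Positive count: up to 3 distinct random members. */
        r = redisCommand(c, "SRANDMEMBER myset 3");
        print_reply("distinct", r);
        if (r) freeReplyObject(r);

        /* Negative count: exactly 7 members, duplicates allowed. */
        r = redisCommand(c, "SRANDMEMBER myset -7");
        print_reply("with repetitions", r);
        if (r) freeReplyObject(r);

        redisFree(c);
        return 0;
    }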
-
- 17 Sep, 2012 3 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
Redis provides support for blocking operations such as BLPOP or BRPOP. These operations are identical to normal LPOP and RPOP operations as long as there are elements in the target list, but if the list is empty they block waiting for new data to arrive. All the clients blocked waiting for the same list are served in a FIFO way, so the first that blocked is the first to be served when more data is pushed into the list by another client.

The previous implementation of blocking operations was conceived to serve clients in the context of push operations. For instance:

1) There is a client "A" blocked on list "foo".
2) The client "B" performs `LPUSH foo somevalue`.
3) The client "A" is served in the context of the "B" LPUSH, synchronously.

Processing things in a synchronous way was useful: if "B" pushes a value that is immediately served to "A", from the point of view of the database it is a NOP (no operation), that is, nothing is replicated, nothing is written to the AOF file, and so forth. However later we implemented two things:

1) Variadic LPUSH, which can add multiple values to a list in the context of a single call.
2) BRPOPLPUSH, a version of BRPOP that also provides a "PUSH" side effect when receiving data.

This forced us to make the synchronous implementation more complex. If client "A" is waiting for data, and "B" pushes three elements in a single call, we needed to propagate an LPUSH with a missing argument in the AOF and replication link. We also needed to make sure to replicate the LPUSH side of BRPOPLPUSH, but only if it in turn did not happen to serve another client blocked on another list ;) This was complex, but with a few mutually recursive functions everything worked as expected... until one day we introduced scripting in Redis.

Scripting + synchronous blocking operations = Issue #614. Basically you can't "rewrite" a script to have just a partial effect on the replicas and AOF file if the script happened to serve a few blocked clients.

The solution to all these problems, implemented by this commit, is to change the way we serve blocked clients. Instead of serving the blocked clients synchronously, in the context of the command performing the PUSH operation, it is now an asynchronous and iterative process:

1) If a key that has clients blocked waiting for data is the subject of a list push operation, we simply mark the key as "ready" and put it into a queue.
2) Every command pushing stuff onto lists, be it a variadic LPUSH, a script, or whatever else, is replicated verbatim without any rewriting.
3) Every time a Redis command, a MULTI/EXEC block, or a script completes its execution, we run the list of keys ready to serve blocked clients (as more data arrived), and process this list serving the blocked clients.
4) As a result of "3" more keys may become ready again for other clients (as a result of BRPOPLPUSH we may have push operations), so we iterate back to step "3" if needed.

The new code has much simpler semantics and a simpler to understand implementation, with the disadvantage of not being able to "optimize out" a PUSH+BPOP as a no op. This commit will be tested with care before the final merge; more tests will likely be added.
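The iterative part (steps 3 and 4) boils down to draining a queue that may grow while it is being drained. A toy sketch of that loop, with made-up names, not the actual Redis functions:

    #include <stdio.h>
    #include <string.h>

    #define MAX_READY 16

    /* Toy model of the "ready keys" queue: pushes mark keys ready, and after a
     * command/script/MULTI-EXEC completes we drain the queue. Serving a blocked
     * client may itself push to another list (BRPOPLPUSH) and mark more keys
     * ready, so we keep looping until the queue is empty. */
    static const char *ready[MAX_READY];
    static int ready_count = 0;

    static void signal_key_ready(const char *key) {
        if (ready_count < MAX_READY) ready[ready_count++] = key;
    }

    static void serve_blocked_clients(const char *key) {
        printf("serving clients blocked on %s\n", key);
        /* Pretend a BRPOPLPUSH target list got new data as a side effect. */
        if (strcmp(key, "foo") == 0) signal_key_ready("bar");
    }

    static void handle_ready_keys(void) {
        while (ready_count > 0) {
            const char *key = ready[--ready_count];
            serve_blocked_clients(key);
        }
    }

    int main(void) {
        signal_key_ready("foo");   /* e.g. a variadic LPUSH against "foo" */
        handle_ready_keys();       /* run after the command fully executed */
        return 0;
    }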
-
- 11 Sep, 2012 1 commit
-
-
antirez authored
Unfortunately we still had the lame atoi() without any error checking in place, so "SELECT foo" would work as "SELECT 0". This was not a huge problem per se, but some people expected that DBs could be identified by strings and not just numbers, and without an error being returned you get the feeling that they can be, while the actual behavior is different. Now getLongFromObjectOrReply() is used, as almost everywhere else across the code, generating an error if the argument is not an integer or overflows the long type. Thanks to @mipearson for reporting that on Twitter.
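A minimal hiredis sketch of the new behavior (host and port are assumptions):

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) return 1;

        /* With the stricter parsing this is rejected instead of silently
         * selecting DB 0. The exact error text may differ between versions. */
        redisReply *r = redisCommand(c, "SELECT foo");
        if (r && r->type == REDIS_REPLY_ERROR)
            printf("rejected as expected: %s\n", r->str);

        if (r) freeReplyObject(r);
        redisFree(c);
        return 0;
    }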
-
- 10 Sep, 2012 1 commit
-
-
antirez authored
-
- 05 Sep, 2012 3 commits
-
-
antirez authored
Bug #582 was not present in 32 bit builds of Redis as getObjectFromLong() will return an error for overflow. This commit makes sure that the test does not fail because of the error returned when running against 32 bit builds.
-
Haruto Otake authored
Remove an unsafe and unnecessary cast. Until now, this cast could lead to a segmentation fault when end > UINT_MAX:

setbit foo 0 1
bitcount 0 4294967295 => ok
bitcount 0 4294967296 => causes a segmentation fault.

Note by @antirez: the commit was modified a bit to also change the string length type to long, since it's guaranteed to be at most 512 MB in size, so we can work with the same type across all the code paths. A regression test was also added.
-
Salvatore Sanfilippo authored
Bug fix: slaves being pinged every second
-
- 04 Sep, 2012 3 commits
-
-
antirez authored
SORT is able to return (faster than when ordering) unordered output if the "BY" clause is used with a constant value. However we try to play well with scripting requirements of determinism by always providing sorted outputs when SORT (and other similar commands) are called by Lua scripts. Until now we used the general mechanism in place in scripting to reorder SORT output: if the command has the "S" flag set, the Lua scripting engine takes an additional step when converting a multi bulk reply to a Lua value, calling a Lua sorting function.

This is suboptimal, as we can do it faster inside SORT itself. It is also broken, as issue #545 shows us: basically when SORT is used with a constant BY, and GET is also used, the Lua scripting engine was trying to order the output as a flat array, while it was actually a list of key-value pairs.

What we do now is to recognize whether the caller of SORT is the Lua client (we can check this using the REDIS_LUA_CLIENT flag). If so, and if a "don't sort" condition is triggered by the BY option with a constant string, we force the lexicographical sorting. This commit fixes this bug and improves the performance, and at the same time simplifies the implementation. This does not mean I'm smart today, it means I was stupid when I committed the original implementation ;)
-
antirez authored
If we don't have any clue about a master, because it has never replied to INFO so far, reply with an -IDONTKNOW error to SENTINEL get-master-addr-by-name requests.
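A hedged hiredis sketch of how a client may handle this (the Sentinel address and the master name "mymaster" are made up for the example):

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        /* Assumes a Sentinel instance listening on the default port 26379. */
        redisContext *c = redisConnect("127.0.0.1", 26379);
        if (c == NULL || c->err) return 1;

        redisReply *r = redisCommand(c, "SENTINEL get-master-addr-by-name mymaster");
        if (r && r->type == REDIS_REPLY_ERROR)
            printf("no data yet: %s\n", r->str);   /* e.g. the -IDONTKNOW error */
        else if (r && r->elements == 2)
            printf("master at %s:%s\n", r->element[0]->str, r->element[1]->str);

        if (r) freeReplyObject(r);
        redisFree(c);
        return 0;
    }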
-
antirez authored
Before this commit, Sentinel used to redirect to the master ip/addr reported by an instance claiming to be a slave only if this was the first INFO output received from it and the role was found to be slave. Now we also redirect to the reported master ip/addr if the runid is found to be different and the reported role is slave. This unifies the behavior of Sentinel in the case of a reboot (where it will see the first INFO output with the wrong role and will perform the redirection) with the behavior of Sentinel in the case of a change in what it sees in the INFO output of the master.
-
- 02 Sep, 2012 1 commit
-
-
antirez authored
During the first synchronization step of the replication process, a Redis slave connects with the master in a non blocking way. However once the connection is established the replication continues by sending the REPLCONF command, and sometimes the AUTH command if needed. Those commands are sent in a partially blocking way (blocking with a timeout in the order of seconds).

Because it is common for a blocked master to accept connections even if it is actually not able to reply to the slave requests, it was easy for a slave to block if the master had serious issues but was still able to accept connections on the listening socket. For this reason we now send an asynchronous PING request just after the non blocking connection ended in a successful way, and wait for the reply before continuing with the replication process. It is very unlikely that a master replying to PING can't reply to the other commands.

This solution was proposed by Didier Spezia (Thanks!) so that we don't need to turn the whole replication process into a non blocking affair, yet the probability of a slave remaining blocked is minimal even in the event of a failing master.

Also we now use getsockopt(SO_ERROR) in order to check errors ASAP in the event handler, instead of waiting for actual I/O to return an error.

This commit fixes issue #632.
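The getsockopt(SO_ERROR) check is plain Berkeley sockets usage; a standalone sketch of the idea (simplified, not the actual replication code):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(6379);
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        /* A non blocking connect usually returns -1/EINPROGRESS; the real
         * check happens later, when the socket is reported writable. */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == -1 &&
            errno != EINPROGRESS) { close(fd); return 1; }

        /* ... wait for the writable event (select/poll/epoll), then: ... */
        int err = 0;
        socklen_t len = sizeof(err);
        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) == -1 || err) {
            printf("connect failed: %s\n", strerror(err ? err : errno));
            close(fd);
            return 1;
        }
        printf("connected, safe to start the replication handshake\n");
        close(fd);
        return 0;
    }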
-
- 31 Aug, 2012 2 commits
-
-
antirez authored
Lua scripting uses a fake client in order to run commands in the context of a client, accumulate the reply, and convert it into a Lua object to return to the caller. This client is reused again and again, and is referenced by the globally accessible server.lua_client pointer.

However after every call to redis.call() or redis.pcall(), handled by the luaRedisGenericCommand() function, the reply_bytes field of the client was not set back to zero. This field is used to estimate the amount of memory currently used by the reply. Because of the missing reset, as script after script executed this value kept getting bigger and bigger, and in the end on 32 bit systems it triggered the following assert:

redisAssert(c->reply_bytes < ULONG_MAX-(1024*64));

On 64 bit systems this does not happen because it takes too much time to reach values close to 2^64 for users to see the practical effect of the bug.

Now in the cleanup stage of luaRedisGenericCommand() we reset the reply_bytes counter to zero, avoiding the issue. It is not practical to add a test for this bug, but the fix was manually tested using a debugger.

This commit fixes issue #656.
-
antirez authored
Redis used to crash with a call like the following:

EVAL "redis.call()" 0

Now the explicit check for at least one argument prevents the problem. This commit fixes issue #655.
-
- 30 Aug, 2012 1 commit
-
-
antirez authored
Older versions of Redis (before 2.4.17) don't publish the runid field in INFO. This commit makes Sentinel able to handle that without crashing.
-
- 29 Aug, 2012 2 commits
- 28 Aug, 2012 5 commits
-
-
antirez authored
-
antirez authored
The slave priority that is now published by Redis in the INFO output is used by Sentinel in order to select the slave with the minimum priority for promotion, and in order to consider slaves with priority set to 0 as not able to play the role of master (they will never be promoted by Sentinel). The "slave-priority" field is now one of the fields that Sentinel publishes when describing an instance via the SENTINEL commands, such as "SENTINEL slaves mastername".
-
antirez authored
A Redis slave can now be configured with a priority, an integer number that is shown in the INFO output and can be read and set using the redis.conf file or the CONFIG GET/SET commands. This field is used by Sentinel during slave election: a slave with lower priority is preferred, and a slave with priority zero is never elected (it is considered impossible to elect even if it is the only slave available). A next commit will add support on the Sentinel side as well.
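A hedged hiredis sketch of reading and changing the new directive at runtime (host and port are made up; it assumes a slave built from this commit):

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        /* Assumes a slave instance listening on 6380. */
        redisContext *c = redisConnect("127.0.0.1", 6380);
        if (c == NULL || c->err) return 1;

        /* 0 means "never promote this slave"; lower non-zero values are
         * preferred by Sentinel during the election. */
        redisReply *r = redisCommand(c, "CONFIG SET slave-priority 10");
        if (r) freeReplyObject(r);

        r = redisCommand(c, "CONFIG GET slave-priority");
        if (r && r->elements == 2)
            printf("%s = %s\n", r->element[0]->str, r->element[1]->str);
        if (r) freeReplyObject(r);

        redisFree(c);
        return 0;
    }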
-
antirez authored
Note that the assertion guarantees that one of the if branches setting the table variable is always entered.
-
antirez authored
This fixes issue #539. Basically, if there is enough free memory the OS may buffer in memory the RDB file that the slave transfers from the master and writes to disk. The file may then be flushed to disk all at once by the operating system when it gets closed by Redis, causing the close system call to block for a long time. This patch is a modified version of one provided by yoav-steinberg of @garantiadata (the original version was posted in the issue #539 comments), and tries to flush the OS buffers incrementally (every 8 MB of loaded data).
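A generic standalone sketch of the technique, flushing every 8 MB with fdatasync (the file name and buffer sizes are made up; the actual Redis code may use a different flush primitive):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>

    #define FLUSH_EVERY_BYTES (8LL * 1024 * 1024)   /* flush every 8 MB */

    int main(void) {
        int fd = open("dump-transfer.rdb", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd == -1) return 1;

        char buf[65536];
        memset(buf, 'x', sizeof(buf));

        long long written = 0, last_flush = 0;
        for (int i = 0; i < 256; i++) {              /* pretend this is the RDB stream */
            if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) break;
            written += sizeof(buf);

            /* Flushing incrementally keeps the OS from accumulating the whole
             * file in dirty pages and then blocking for a long time on close(). */
            if (written - last_flush >= FLUSH_EVERY_BYTES) {
                fdatasync(fd);
                last_flush = written;
            }
        }
        close(fd);
        return 0;
    }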
-
- 24 Aug, 2012 4 commits
-
-
antirez authored
-
antirez authored
The previous implementation of zmalloc.c was not able to handle out of memory in an application-specific way. It just logged an error on standard error and aborted. The result was that in the case of an actual out of memory condition in Redis, where malloc returned NULL (on Linux this actually happens under specific overcommit policy settings and/or with no or little swap configured), the error was not properly logged in the Redis log.

This commit fixes this problem, fixing issue #509. Now the out of memory condition is properly reported in the Redis log and a stack trace is generated.

The approach used is to provide a configurable out of memory handler to zmalloc (otherwise the default one, logging the event on standard error, is used).
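The pattern is a replaceable function-pointer hook; a generic standalone sketch of the idea (names are made up, this is not the literal zmalloc.c code):

    #include <stdio.h>
    #include <stdlib.h>

    /* Default handler: log on stderr and abort, like the old behavior. */
    static void default_oom_handler(size_t size) {
        fprintf(stderr, "Out of memory trying to allocate %zu bytes\n", size);
        abort();
    }

    static void (*oom_handler)(size_t) = default_oom_handler;

    /* The application installs its own handler, which can log to the
     * proper log file and dump a stack trace before aborting. */
    void set_oom_handler(void (*handler)(size_t)) { oom_handler = handler; }

    void *xmalloc(size_t size) {
        void *p = malloc(size);
        if (p == NULL) oom_handler(size);
        return p;
    }

    static void app_oom_handler(size_t size) {
        fprintf(stderr, "[app log] cannot allocate %zu bytes, aborting\n", size);
        abort();
    }

    int main(void) {
        set_oom_handler(app_oom_handler);
        char *p = xmalloc(64);
        free(p);
        return 0;
    }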
-
antirez authored
From the point of view of Redis Sentinel, an instance replying -BUSY is down, since it is effectively not able to reply to user requests. However a looping script is a recoverable condition in Redis if the script has not yet performed any write to the dataset. In that case performing a failover is not optimal, so Sentinel now tries to restore the normal server condition by killing the script with a SCRIPT KILL command. If the script already performed some write before entering an infinite (or long enough to time out) loop, SCRIPT KILL will not work and the failover will be triggered anyway.
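A hedged hiredis sketch of the same logic from a client's point of view (host and port are assumptions; Sentinel does the equivalent internally):

    #include <stdio.h>
    #include <string.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) return 1;

        /* While a script is running, regular commands are refused with -BUSY. */
        redisReply *r = redisCommand(c, "PING");
        if (r && r->type == REDIS_REPLY_ERROR && strstr(r->str, "BUSY")) {
            /* Only works if the script has not written yet; otherwise the
             * kill is refused and a failover has to happen anyway. */
            redisReply *k = redisCommand(c, "SCRIPT KILL");
            if (k) {
                printf("SCRIPT KILL reply: %s\n", k->str ? k->str : "(no text)");
                freeReplyObject(k);
            }
        }
        if (r) freeReplyObject(r);
        redisFree(c);
        return 0;
    }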
-
antirez authored
The call to sentinelScheduleScriptExecution() lacked the final NULL argument that signals the end of the arguments. This resulted in a crash.
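The underlying pitfall is generic to NULL-terminated varargs functions; a minimal standalone sketch (not the Sentinel code itself):

    #include <stdarg.h>
    #include <stdio.h>

    /* Prints all string arguments up to the terminating NULL. Forgetting the
     * final NULL makes va_arg() read past the real arguments: undefined
     * behavior, typically a crash. */
    static void print_args(const char *first, ...) {
        va_list ap;
        va_start(ap, first);
        for (const char *arg = first; arg != NULL; arg = va_arg(ap, const char *))
            printf("%s\n", arg);
        va_end(ap);
    }

    int main(void) {
        print_args("notify.sh", "master-down", "127.0.0.1", "6379", NULL);
        return 0;
    }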
-
- 22 Aug, 2012 1 commit
-
-
Salvatore Sanfilippo authored
Fix ziplist edge case
-
- 21 Aug, 2012 2 commits
-
-
antirez authored
This new hiredis feature allows us to reuse a previous reader buffer of the context even if it is already very big, in order to maximize performance with big payloads (usually hiredis re-creates buffers when they are too big and unused, in order to save memory).
-
antirez authored
This version of hiredis merges the modifications of the Redis fork with the latest changes in the hiredis repository. The same version was pushed to the hiredis repository and will probably be merged into its master branch in a short time.
-
- 13 Aug, 2012 1 commit
-
-
Pieter Noordhuis authored
-