- 22 Oct, 2012 2 commits
-
-
antirez authored
This was important information missing from the INFO output in the replication section. It simply reflects whether the slave is read only or not.
-
antirez authored
In some systems, notably OS X, the 3.5 GB limit was too high and not able to prevent a crash for out of memory. The 3 GB limit works better, and it is still a lot of memory within a 4 GB theoretical limit, so it's not going to bother anyone :-) This fixes issue #711
-
- 16 Oct, 2012 1 commit
-
-
antirez authored
Before this commit it used to be like this:

MULTI
EXEC
... actual commands of the transaction ...

Because after all that is the natural order of things: transaction commands are queued and executed *only after* EXEC is called. However this makes debugging with MONITOR a mess, so the code was modified to provide a coherent output.

What happens is that MULTI is rendered in the MONITOR output as early as possible, while EXEC is propagated only after the transaction is executed. Even in the case it fails because of WATCH, you'll simply see:

MULTI
EXEC

An empty transaction.
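For illustration, a minimal hiredis client running a transaction; connection details and key names are examples, and the MONITOR rendering described in the comments is the behaviour explained above, not something this snippet verifies.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* With this change, a second client running MONITOR should see
     * "MULTI" first, then the queued commands, then "EXEC" -- in
     * execution order rather than queueing order. */
    redisReply *r;
    r = redisCommand(c, "MULTI");          freeReplyObject(r);
    r = redisCommand(c, "SET key1 val1");  freeReplyObject(r);
    r = redisCommand(c, "INCR counter");   freeReplyObject(r);
    r = redisCommand(c, "EXEC");           freeReplyObject(r);

    redisFree(c);
    return 0;
}
```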
-
- 11 Oct, 2012 1 commit
-
-
antirez authored
If the server is password protected we need to accept AUTH even when there is a server busy (-BUSY) condition, otherwise it is impossible to send SHUTDOWN NOSAVE or SCRIPT KILL. This fixes issue #708.
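A hedged sketch of the recovery sequence this enables, using hiredis; the password and connection details are examples.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Previously AUTH itself was refused with -BUSY while a script was
     * running, making SCRIPT KILL unreachable on a password-protected
     * server. Now the AUTH below is accepted. */
    redisReply *r = redisCommand(c, "AUTH %s", "mysecret");
    freeReplyObject(r);

    r = redisCommand(c, "SCRIPT KILL");
    if (r) printf("%s\n", r->type == REDIS_REPLY_ERROR ? r->str : "script killed");
    freeReplyObject(r);

    redisFree(c);
    return 0;
}
```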
-
- 05 Oct, 2012 1 commit
-
-
antirez authored
This commit warns the user with a log at "warning" level if:

1) After the server startup the maxmemory limit was found to be < 1MB.
2) After a CONFIG SET command modifying the maxmemory setting, the limit is set to a value that is smaller than the currently used memory.

The behaviour of the Redis server is unmodified, and this will not make the CONFIG SET command or a wrong configuration in redis.conf less likely to create problems, but at least it will make most users aware of a possible error they committed without resorting to external help.

However no warning is issued if, as a result of loading the AOF or RDB file, we are very near the maxmemory setting, or key eviction will be needed in order to go under the specified maxmemory setting. The reason is that in servers configured as a cache with an aggressive maxmemory-policy, most of the time restarting the server will cause this condition to happen if persistence is not switched off. This fixes issue #429.
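A hedged sketch of how condition 2) can be triggered from a client (hiredis; the value and connection details are examples): if the instance currently uses more than 1 MB, this now produces a warning-level log on the server, while the command itself still succeeds as before.

```c
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* If used memory is above 1mb this now logs a warning on the
     * server; the behaviour of the command is unchanged. */
    redisReply *r = redisCommand(c, "CONFIG SET maxmemory 1mb");
    freeReplyObject(r);

    redisFree(c);
    return 0;
}
```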
-
- 27 Sep, 2012 4 commits
-
-
antirez authored
After cherry-picking Sentinel commits, a few spurious issues remained about references to Redis Cluster, which is not present in the 2.6 branch.
-
antirez authored
The new "redis_mode" field in the INFO output will show if Redis is running in standalone mode, cluster, or sentinel mode.
-
mrb authored
-
antirez authored
This commit implements the first beta-quality implementation of Redis Sentinel, a distributed monitoring system for Redis with notification and automatic failover capabilities. More info at http://redis.io/topics/sentinel
-
- 21 Sep, 2012 1 commit
-
-
antirez authored
SRANDMEMBER called with just the key argument can return a single random element from a Redis Set. However many users need to return multiple unique elements from a Set; this is not a trivial problem to handle on the client side, and for truly good performance a C implementation was required. After many requests for this feature it was finally implemented.

The problem in implementing this command is the strategy to follow when the number of elements the user asks for is near to the number of elements that are already inside the set. In this case asking the dictionary API for random elements, and trying to add them to a temporary set, may result in extremely poor performance, as most add operations will be wasted on duplicated elements. For this reason this implementation uses a different strategy in this case: the Set is copied, and random elements are removed until the Set contains the specified count. The code actually uses 4 different algorithms optimized for the different cases.

If the count is negative, the command changes behavior and allows for duplicated elements in the returned subset.
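For example, with hiredis (connection details and key names are examples):

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    redisReply *r = redisCommand(c, "SADD myset a b c d e");
    freeReplyObject(r);

    /* Three unique random members; a negative count (e.g. -3) would
     * instead allow duplicates in the returned subset. */
    r = redisCommand(c, "SRANDMEMBER myset 3");
    for (size_t i = 0; r && i < r->elements; i++)
        printf("%s\n", r->element[i]->str);
    freeReplyObject(r);

    redisFree(c);
    return 0;
}
```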
-
- 17 Sep, 2012 1 commit
-
-
antirez authored
Redis provides support for blocking operations such as BLPOP or BRPOP. These operations are identical to normal LPOP and RPOP operations as long as there are elements in the target list, but if the list is empty they block waiting for new data to arrive to the list.

All the clients blocked waiting for the same list are served in a FIFO way, so the first that blocked is the first to be served when there is more data pushed by another client into the list.

The previous implementation of blocking operations was conceived to serve clients in the context of push operations. For instance:

1) There is a client "A" blocked on list "foo".
2) The client "B" performs `LPUSH foo somevalue`.
3) The client "A" is served in the context of the "B" LPUSH, synchronously.

Processing things in a synchronous way was useful because, if "B" pushes a value that is instantly served to "A", from the point of view of the database it is a NOP (no operation): nothing is replicated, nothing is written to the AOF file, and so forth.

However later we implemented two things:

1) Variadic LPUSH that could add multiple values to a list in the context of a single call.
2) BRPOPLPUSH, a version of BRPOP that also provides a "PUSH" side effect when receiving data.

This forced us to make the synchronous implementation more complex. If client "A" is waiting for data, and "B" pushes three elements in a single call, we needed to propagate an LPUSH with a missing argument in the AOF and replication link. We also needed to make sure to replicate the LPUSH side of BRPOPLPUSH, but only if it in turn did not happen to serve another blocked client on another list ;)

This was complex, but with a few mutually recursive functions everything worked as expected... until one day we introduced scripting in Redis.

Scripting + synchronous blocking operations = Issue #614.

Basically you can't "rewrite" a script to have just a partial effect on the replicas and AOF file if the script happened to serve a few blocked clients.

The solution to all these problems, implemented by this commit, is to change the way we serve blocked clients. Instead of serving the blocked clients synchronously, in the context of the command performing the PUSH operation, it is now an asynchronous and iterative process:

1) If a key that has clients blocked waiting for data is the subject of a list push operation, we simply mark the key as "ready" and put it into a queue.
2) Every command pushing stuff onto lists, be it a variadic LPUSH, a script, or whatever it is, is replicated verbatim without any rewriting.
3) Every time a Redis command, a MULTI/EXEC block, or a script completes its execution, we run the list of keys ready to serve blocked clients (as more data arrived), and process this list serving the blocked clients.
4) As a result of "3" maybe more keys are ready again for other clients (as a result of BRPOPLPUSH we may have push operations), so we iterate back to step "3" if needed (a toy model of this loop is sketched below).

The new code has much simpler semantics, and a simpler to understand implementation, with the disadvantage of not being able to "optimize out" a PUSH+BPOP as a no op. This commit will be tested with care before the final merge; more tests will likely be added.
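A toy model of the iterative serving loop (steps 1-4 above); all names are illustrative and none of this is the actual Redis implementation.

```c
#include <stdio.h>
#include <string.h>

#define MAX_READY 128

static const char *ready[MAX_READY];   /* list of keys with waiters + data */
static int nready = 0;

static void markKeyReady(const char *key) {
    /* Step 1: a push touched a key with blocked clients: record it once. */
    for (int i = 0; i < nready; i++)
        if (strcmp(ready[i], key) == 0) return;
    if (nready < MAX_READY) ready[nready++] = key;
}

static void serveBlockedClients(const char *key) {
    /* Steps 3-4: serving a BRPOPLPUSH waiter performs a push itself,
     * which may make another key ready; simulate one such chain. */
    printf("serving clients blocked on '%s'\n", key);
    if (strcmp(key, "src") == 0) markKeyReady("dst");
}

int main(void) {
    markKeyReady("src");              /* e.g. LPUSH src a b c */
    while (nready > 0) {              /* iterate until no key is ready */
        const char *key = ready[--nready];
        serveBlockedClients(key);
    }
    return 0;
}
```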
-
- 10 Sep, 2012 1 commit
-
-
antirez authored
-
- 04 Sep, 2012 1 commit
-
-
antirez authored
SORT is able to return (faster than when ordering) unordered output if the "BY" clause is used with a constant value. However we try to play well with the scripting requirement of determinism, always providing sorted output when SORT (and other similar commands) are called by Lua scripts.

However we used the general mechanism in place in scripting to reorder SORT output: if the command has the "S" flag set, the Lua scripting engine takes an additional step when converting a multi bulk reply to a Lua value, calling a Lua sorting function. This is suboptimal as we can do it faster inside SORT itself. It is also broken, as issue #545 shows us: basically when SORT is used with a constant BY, and additionally GET is also used, the Lua scripting engine was trying to order the output as a flat array, while it was actually a list of key-value pairs.

What we do now is to recognize if the caller of SORT is the Lua client (we can check this using the REDIS_LUA_CLIENT flag). If so, and if a "don't sort" condition is triggered by the BY option with a constant string, we force the lexicographical sorting.

This commit fixes this bug and improves the performance, and at the same time simplifies the implementation. This does not mean I'm smart today, it means I was stupid when I committed the original implementation ;)
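A sketch of the issue #545 scenario using hiredis; the key names and the script are examples. A BY pattern without a `*` (here "nosort") triggers the "don't sort" condition, and when called from Lua the output is now forced to lexicographical order.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    redisReply *r = redisCommand(c, "RPUSH mylist c a b");
    freeReplyObject(r);

    /* SORT with a constant BY plus GET, called from a script: this is
     * the combination that used to confuse the Lua-side reordering. */
    r = redisCommand(c, "EVAL %s 1 mylist",
        "return redis.call('SORT', KEYS[1], 'BY', 'nosort', 'GET', '#')");
    for (size_t i = 0; r && i < r->elements; i++)
        printf("%s\n", r->element[i]->str);   /* a, b, c */
    freeReplyObject(r);

    redisFree(c);
    return 0;
}
```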
-
- 31 Aug, 2012 1 commit
-
-
antirez authored
A Redis slave can now be configured with a priority, an integer number that is shown in INFO output and can be read and set using the redis.conf file or the CONFIG GET/SET commands. This field is used by Sentinel during slave election: a slave with lower priority is preferred. A slave with priority zero is never elected (and is considered impossible to elect even if it is the only slave available). A next commit will add support on the Sentinel side as well.
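For example, with hiredis against a slave instance (the port and value are examples):

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6380); /* a slave */
    if (c == NULL || c->err) return 1;

    /* Lower value = preferred during Sentinel election; 0 = never elected. */
    redisReply *r = redisCommand(c, "CONFIG SET slave-priority 10");
    freeReplyObject(r);

    r = redisCommand(c, "CONFIG GET slave-priority");
    if (r && r->elements == 2)
        printf("slave-priority = %s\n", r->element[1]->str);
    freeReplyObject(r);

    redisFree(c);
    return 0;
}
```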
-
- 28 Aug, 2012 1 commit
-
-
antirez authored
This fixes issue #539. Basically if there is enough free memory the OS may buffer the RDB file that the slave transfers to disk from the master. The file may actually be flushed to disk at once by the operating system when it gets closed by Redis, causing the close system call to block for a long time. This patch is a modified version of one provided by yoav-steinberg of @garantiadata (the original version was posted in the issue #539 comments), and tries to flush the OS buffers incrementally (every 8 MB of loaded data).
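A minimal sketch of the incremental flush idea, assuming a POSIX fdatasync(2); the helper and all names are illustrative, not the actual Redis code.

```c
#include <unistd.h>

#define REPL_FLUSH_EVERY (8*1024*1024)  /* flush every 8 MB received */

/* Call after each write of the incoming RDB file: push dirty pages to
 * disk in 8 MB steps so close(2) does not block on one huge flush. */
void maybeFlush(int fd, size_t total_written, size_t *last_flush) {
    if (total_written - *last_flush >= REPL_FLUSH_EVERY) {
        fdatasync(fd);
        *last_flush = total_written;
    }
}
```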
-
- 24 Aug, 2012 1 commit
-
-
antirez authored
The previous implementation of zmalloc.c was not able to handle out of memory in an application-specific way. It just logged an error on standard error and aborted. The result was that in the case of an actual out of memory in Redis, where malloc() returned NULL (on Linux this actually happens under specific overcommit policy settings and/or with no or little swap configured), the error was not properly logged in the Redis log.

This commit fixes this problem, fixing issue #509. Now out of memory is properly reported in the Redis log and a stack trace is generated. The approach used is to provide a configurable out of memory handler to zmalloc (otherwise the default one, logging the event on the standard output, is used).
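A minimal sketch of the configurable handler approach; the function names mirror the description above, but the code is illustrative, not the actual zmalloc.c.

```c
#include <stdio.h>
#include <stdlib.h>

/* Default handler: log on standard output semantics described above. */
static void default_oom(size_t size) {
    fprintf(stderr, "Out of memory allocating %zu bytes\n", size);
    abort();
}

static void (*oom_handler)(size_t) = default_oom;

/* The application (here, Redis) installs its own handler, which can
 * write to the proper log and dump a stack trace before aborting. */
void zmalloc_set_oom_handler(void (*handler)(size_t)) {
    oom_handler = handler;
}

void *zmalloc(size_t size) {
    void *ptr = malloc(size);
    if (ptr == NULL) oom_handler(size);
    return ptr;
}
```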
-
- 31 Jul, 2012 1 commit
-
-
Saj Goonatilleke authored
Behaves like rdb_last_bgsave_status -- even down to reporting 'ok' when no rewrite has been done yet. (You might want to check that aof_last_rewrite_time_sec is not -1.)
-
- 22 Jul, 2012 1 commit
-
-
antirez authored
Redis loading data from disk, and a Redis slave disconnected from its master with serve-stale-data disabled, are two conditions where commands are normally refused by Redis, returning an error. However there is no reason to disable Pub/Sub commands as well, given that this layer does not interact with the dataset. Allowing Pub/Sub in as many contexts as possible is especially interesting now that Redis Sentinel uses the Pub/Sub of a Redis master as a communication channel between Sentinels. This commit allows Pub/Sub to be used in the above two contexts where it was previously denied.
-
- 07 Jul, 2012 1 commit
-
-
antirez authored
The REPLCONF command is an internal command (not designed to be directly used by normal clients) that allows a slave to set some replication related state in the master before issuing SYNC to start the replication.

The initial motivation for this command, and the only reason it is currently used by the implementation, is to let the slave instance communicate its listening port to the master, so that the master can show all the slaves with their listening ports in the "replication" section of the INFO output. This allows clients to auto discover and query all the slaves attached to a master.

Currently only a single option of the REPLCONF command is supported, called "listening-port", so the slave now starts the replication process with something like the following chat:

REPLCONF listening-port 6380
SYNC

Note that this works even if the master is an older version of Redis and does not understand REPLCONF, because the slave simply ignores the REPLCONF error.

In the future REPLCONF can be used for partial replication and other replication related features where there is the need to exchange information between master and slave.

NOTE: This commit also fixes a bug: the INFO output already carried information about slaves, but the port was broken, since it was obtained with getpeername(2), so it was actually just the ephemeral port used by the slave to connect to the master as a client.
-
- 21 Jun, 2012 2 commits
-
-
antirez authored
-
antirez authored
The way we compared the authentication password using strcmp() allowed an attacker to gain information about the password using a well known class of attacks called "timing attacks". The bug appears to be practically not exploitable in most modern systems running Redis, since even using multiple bytes of difference in the input at a time instead of one, the difference in running time is in the order of 10 nanoseconds, making it hard to exploit even on a LAN. However attacks always get better, so we are providing a fix ASAP.

The new implementation uses two fixed length buffers and a constant time comparison function, with the goal of:

1) Completely avoiding leaking information about the content of the password, since the comparison is always performed between 512 characters and without conditionals.
2) Partially avoiding leaking information about the length of the password.

About "2", we still have a stage in the code where the real password and the user provided password are copied into the static buffers, and we also run two strlen() operations against the two inputs, so the running time of the comparison is a fixed amount plus a time proportional to LENGTH(A)+LENGTH(B). This means that the absolute time of the operation performed is still related to the length of the password in some way, but there is no way to change the input in order to get a difference in the execution time of the comparison that is not just proportional to the string provided by the user (because the password length is fixed). Thus in practical terms the attacker should try to discover LENGTH(PASSWORD) by looking at the whole execution time of the AUTH command and trying to guess a proportionality between the whole execution time and the password length: this appears to be mostly unfeasible in the real world.

Also protecting from this attack is not very useful in the case of Redis, as a brute force attack is anyway feasible if the password is too short, while a long password makes it a non-issue that the attacker knows its length.
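A minimal sketch of a fixed-buffer, constant-time comparison in the spirit described above; the buffer size matches the 512 characters mentioned, everything else is illustrative rather than the actual Redis code.

```c
#include <string.h>

#define AUTH_BUF 512

int time_independent_equal(const char *a, const char *b) {
    char bufa[AUTH_BUF] = {0}, bufb[AUTH_BUF] = {0};

    /* Copy at most AUTH_BUF-1 bytes so overlong inputs can never match. */
    strncpy(bufa, a, AUTH_BUF-1);
    strncpy(bufb, b, AUTH_BUF-1);

    /* Always compare all 512 bytes, with no data-dependent branches:
     * the running time does not depend on where the strings differ. */
    unsigned char diff = 0;
    for (int i = 0; i < AUTH_BUF; i++)
        diff |= bufa[i] ^ bufb[i];
    return diff == 0;
}
```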
-
- 25 May, 2012 1 commit
-
-
antirez authored
The 'persistence' section of INFO output now contains four additional fields related to RDB and AOF persistence:

rdb_last_bgsave_time_sec        Duration of latest BGSAVE in sec.
rdb_current_bgsave_time_sec     Duration of current BGSAVE in sec.
aof_last_rewrite_time_sec       Duration of latest AOF rewrite in sec.
aof_current_rewrite_time_sec    Duration of current AOF rewrite in sec.

The 'current' fields are set to -1 if a BGSAVE / AOF rewrite is not in progress. The 'last' fields are set to -1 if no previous BGSAVE / AOF rewrites were performed.

Additionally a few fields in the persistence section were renamed for consistency:

changes_since_last_save -> rdb_changes_since_last_save
bgsave_in_progress -> rdb_bgsave_in_progress
last_save_time -> rdb_last_save_time
last_bgsave_status -> rdb_last_bgsave_status
bgrewriteaof_in_progress -> aof_rewrite_in_progress
bgrewriteaof_scheduled -> aof_rewrite_scheduled

After the renaming, fields in the persistence section start with an rdb_ or aof_ prefix depending on the persistence method they describe. The field 'loading' and related fields are not prefixed because they are common to both persistence methods.
-
- 24 May, 2012 3 commits
-
-
antirez authored
The motivation for these new commands is to be found in the usage of Redis for real time statistics. See the article "Fast real time metrics using Redis":

http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/

In general Redis strings, when used as bitmaps via the SETBIT/GETBIT commands, provide a very space-efficient and fast way to store statistics. For instance in a web application with users, every user can be associated with a key that records every day in which the user visited the web service. This information can be really valuable to extract user behaviour information.

With Redis bitmaps doing this is very simple: just say that a given day is 0 (the day the service was put online) and all the following days are 1, 2, 3, and so forth. So with SETBIT it is possible to set the bit corresponding to the current day every time the user visits the site.

It is possible to count the set bits on the fly; this is extremely easy using a Lua script. However a fast native bit count operation can be useful, especially if it can operate on ranges, or when the string is small like in the case of days (even if you consider many years it is still extremely little data). For this reason BITCOUNT was introduced. The command counts the number of bits set to 1 in a string, with an optional range:

BITCOUNT key [start end]

The start/end parameters are similar to GETRANGE. If omitted the whole string is tested.

Population counting is more useful when bit-level operations like AND, OR and XOR are available. For instance I can test multiple users to see the number of days three users visited the site at the same time. To do this we can take the AND of all the bitmaps, and then count the set bits. For this reason the BITOP command was introduced:

BITOP [AND|OR|XOR|NOT] dest_key src_key1 src_key2 src_key3 ... src_keyN

In the special case of NOT (which inverts the bits) only one source key can be passed.

The judicious use of BITCOUNT and BITOP combined can lead to interesting use cases with a very space-efficient representation of data. The implementation provided is still not tested and optimized for speed; next commits will introduce unit tests. Later the implementation will be profiled to see if it is possible to gain an important amount of speed without making the code much more complex.
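A worked example of the bitmap pattern with hiredis (key names, offsets and connection details are examples): user1 visited on days 0 and 2, user2 on days 2 and 3, so ANDing the bitmaps and counting yields 1 day in common.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;
    redisReply *r;

    /* Mark the days each user visited (bit offset = day number). */
    r = redisCommand(c, "SETBIT visits:user1 0 1"); freeReplyObject(r);
    r = redisCommand(c, "SETBIT visits:user1 2 1"); freeReplyObject(r);
    r = redisCommand(c, "SETBIT visits:user2 2 1"); freeReplyObject(r);
    r = redisCommand(c, "SETBIT visits:user2 3 1"); freeReplyObject(r);

    /* Days both users visited: AND the bitmaps, then count the bits. */
    r = redisCommand(c, "BITOP AND visits:both visits:user1 visits:user2");
    freeReplyObject(r);
    r = redisCommand(c, "BITCOUNT visits:both");
    printf("days in common: %lld\n", r->integer);   /* -> 1 */
    freeReplyObject(r);

    redisFree(c);
    return 0;
}
```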
-
antirez authored
The INFO output, persistence section, already contained the field describing the size of the current AOF buffer to flush on disk. However the other AOF buffer, used to accumulate changes during an AOF rewrite, was not mentioned in the INFO output. This commit introduces a new field called aof_rewrite_buffer_length with the length of the rewrite buffer.
-
antirez authored
During the AOF rewrite process, the parent process needs to accumulate the new writes in an in-memory buffer: when the child terminates the AOF rewriting process, this buffer (that is, the difference between the dataset when the rewrite was started and the current dataset) is flushed to the new AOF file.

We used to implement this buffer using an sds.c string, but sds.c has a 2GB limit. Sometimes the dataset can be big enough, the amount of writes so high, and the rewrite process slow enough that we overflow the 2GB limit, causing a crash, documented on github by issue #504.

In order to prevent this from happening, this commit introduces a new system to accumulate writes, implemented by a linked list of blocks of 10 MB each, so that we also avoid paying the reallocation cost.

Note that theoretically modern operating systems may implement realloc() simply as a remapping of the old pages, thus with very good performance, see for instance the mremap() syscall on Linux. However this is not always true, and jemalloc by default avoids doing this because there are issues with the current implementation of mremap(). For this reason we are using a linked list of blocks instead of a single block that gets reallocated again and again.

The changes in this commit lack testing, which will be performed before merging into the unstable branch. This fix will not enter 2.4 because it is too invasive. However 2.4 will log a warning when the AOF rewrite buffer is near the 2GB limit.
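A minimal sketch of the block-list idea, assuming the 10 MB block size mentioned above; names are illustrative, not the actual Redis code. Appending never reallocates: when the tail block fills up, a new block is linked in.

```c
#include <stdlib.h>
#include <string.h>

#define AOF_RW_BLOCK (10*1024*1024)

typedef struct rwBlock {
    struct rwBlock *next;
    size_t used;
    char buf[AOF_RW_BLOCK];
} rwBlock;

typedef struct { rwBlock *head, *tail; } rwBuffer;

void rwBufferAppend(rwBuffer *b, const char *data, size_t len) {
    while (len) {
        if (b->tail == NULL || b->tail->used == AOF_RW_BLOCK) {
            rwBlock *blk = malloc(sizeof(*blk));  /* new 10 MB block */
            if (blk == NULL) abort();
            blk->next = NULL; blk->used = 0;
            if (b->tail) b->tail->next = blk; else b->head = blk;
            b->tail = blk;
        }
        /* Fill the tail block as far as possible, then loop. */
        size_t n = AOF_RW_BLOCK - b->tail->used;
        if (n > len) n = len;
        memcpy(b->tail->buf + b->tail->used, data, n);
        b->tail->used += n; data += n; len -= n;
    }
}
```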
-
- 14 May, 2012 3 commits
-
-
antirez authored
activeExpireCycle() can consume no more than a few milliseconds per iteration. This commit improves the precision of the check for the elapsed time in two ways:

1) We check every 16 iterations of the main loop instead of every 256.
2) We reset iterations at the start of the function, and not every time we switch to the next database, so the check is correctly performed every 16 iterations.
-
antirez authored
A previous commit introduced the REDIS_HZ define that changes the frequency of calls to the serverCron() Redis function. This commit improves different related things:

1) Software watchdog: now the minimal period can be set according to REDIS_HZ. The minimal period is two times the timer period, that is: (1000/REDIS_HZ)*2 milliseconds.

2) The incremental rehashing is now performed in the expires dictionary as well.

3) The activeExpireCycle() function was improved in different ways:

- Now it checks if it already used too much time using microseconds instead of milliseconds, for better precision.
- The time limit is now computed correctly; in the previous version the division was performed before the multiplication, resulting in a time limit of 0 if HZ was big enough (a worked example follows below).
- Databases with less than 1% of the hash table buckets filled are skipped, because getting random keys is too expensive in this condition.

4) tryResizeHashTables() is now called at every timer call; we need to match the number of calls we do to the expired keys collection cycle.

5) REDIS_HZ was raised to 100.
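A worked example of the integer-arithmetic bug in 3): the constants are illustrative, but the effect of the operation order is exactly as described.

```c
#include <stdio.h>

#define REDIS_HZ 100
#define EXPIRE_CYCLE_TIME_PERC 25   /* max % of CPU per cycle (example) */

int main(void) {
    /* Broken: the division happens first and truncates to 0. */
    long long broken = EXPIRE_CYCLE_TIME_PERC / REDIS_HZ * 1000000 / 100;

    /* Fixed: multiply first, divide last. */
    long long fixed = 1000000LL * EXPIRE_CYCLE_TIME_PERC / REDIS_HZ / 100;

    printf("broken=%lld us, fixed=%lld us\n", broken, fixed); /* 0 vs 2500 */
    return 0;
}
```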
-
antirez authored
Redis uses a function called serverCron() that is very similar to the timer interrupt of an operating system. This function is used to handle a number of asynchronous things, like active expired keys collection, client timeouts, update of statistics, things related to the cluster and replication, triggering of BGSAVE and the AOF rewrite process, and so forth.

In the past the timer was called 1 time per second. At some point it was raised to 10 times per second, but it still was fixed and could not be changed even at compile time, because different functions called from serverCron() assumed a given fixed frequency.

This commit makes the frequency configurable, so that it is simpler to pick a good tradeoff between the overhead of this function (which is usually very small) and the responsiveness of Redis during a few critical circumstances where a lot of work is done inside the timer. An example of such a critical condition is mass-expire of a lot of keys in the same second. Up to a given percentage of CPU time is used to perform expired keys collection per expire cycle. Now changing the REDIS_HZ macro it is possible to do less work but more times per second, in order to block the server for less time. If this patch works well in our tests it will enter Redis 2.6-final.
-
- 12 May, 2012 2 commits
-
-
antirez authored
-
antirez authored
If a large amount of keys are all expiring at about the same time, the "active" expired keys collection cycle used to block as long as the percentage of already expired keys was >= 25% of the total population of keys with an expire set. This could block the server even for many seconds in order to reclaim memory ASAP. The new algorithm uses at most a small amount of milliseconds per cycle; even if this means reclaiming the memory less promptly, it also means a more responsive server.
-
- 02 May, 2012 1 commit
-
-
antirez authored
We used to reply -ERR ... message ...; now the reply is instead -MASTERDOWN ... message ..., so that it can be distinguished easily from the other error conditions.
-
- 27 Apr, 2012 1 commit
-
-
antirez authored
This commit reverts most of c5757662, in order to go back to using the main stack for signal handling. The main reason is that otherwise it is completely pointless that we put a lot of effort into printing the stack trace on crash, along with the content of the stack and registers. Using an alternate stack broke this feature completely.
-
- 19 Apr, 2012 1 commit
-
-
antirez authored
-
- 18 Apr, 2012 1 commit
-
-
antirez authored
1) Don't accept maxclients set to < 0.
2) Allow maxclients < 1024; it is useful for testing.
-
- 13 Apr, 2012 3 commits
-
-
antirez authored
After considering the interaction between the ability to declare globals in scripts using the 'global' function, and the complexities related to handling replication and AOF in a sane way with globals AND the ability to turn protection on and off, we reconsidered the design. The new design makes clear that there is only one good way to write Redis scripts: not using globals. In the rare cases where state must be retained across calls, a Redis key can be used.
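A sketch of the recommended pattern via hiredis; the key name and script are examples. Instead of keeping a counter in a Lua global across calls, the script stores it in a Redis key.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Rather than something like "counter = (counter or 0) + 1" on a
     * Lua global, keep the state in a key the script receives. */
    redisReply *r = redisCommand(c, "EVAL %s 1 script:counter",
        "return redis.call('INCR', KEYS[1])");
    if (r && r->type == REDIS_REPLY_INTEGER)
        printf("calls so far: %lld\n", r->integer);
    freeReplyObject(r);

    redisFree(c);
    return 0;
}
```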
-
antirez authored
-
antirez authored
-
- 10 Apr, 2012 2 commits
- 08 Apr, 2012 1 commit
-
-
antirez authored
-