- 25 May, 2012 1 commit
-
-
antirez authored
The 'persistence' section of INFO output now contains four additional fields related to RDB and AOF persistence:

rdb_last_bgsave_time_sec - Duration of the latest BGSAVE in seconds.
rdb_current_bgsave_time_sec - Duration of the current BGSAVE in seconds.
aof_last_rewrite_time_sec - Duration of the latest AOF rewrite in seconds.
aof_current_rewrite_time_sec - Duration of the current AOF rewrite in seconds.

The 'current' fields are set to -1 if a BGSAVE / AOF rewrite is not in progress. The 'last' fields are set to -1 if no previous BGSAVE / AOF rewrites were performed. Additionally a few fields in the persistence section were renamed for consistency:

changes_since_last_save -> rdb_changes_since_last_save
bgsave_in_progress -> rdb_bgsave_in_progress
last_save_time -> rdb_last_save_time
last_bgsave_status -> rdb_last_bgsave_status
bgrewriteaof_in_progress -> aof_rewrite_in_progress
bgrewriteaof_scheduled -> aof_rewrite_scheduled

After the renaming, fields in the persistence section start with an rdb_ or aof_ prefix depending on the persistence method they describe. The 'loading' field and related fields are not prefixed because they are common to both persistence methods.
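A minimal sketch of how an application might read the new fields, assuming the redis-py client and a server recent enough to expose them:

```python
import redis

# Assumes a local Redis instance and the redis-py client; field names as
# introduced by this commit (an older server will simply lack the keys).
r = redis.Redis(host="localhost", port=6379, decode_responses=True)
persistence = r.info("persistence")

# -1 means "not in progress" for the current_* fields and "never performed"
# for the last_* fields.
print("last BGSAVE took", persistence.get("rdb_last_bgsave_time_sec"), "sec")
print("current BGSAVE running for", persistence.get("rdb_current_bgsave_time_sec"), "sec")
print("last AOF rewrite took", persistence.get("aof_last_rewrite_time_sec"), "sec")
print("changes since last save:", persistence.get("rdb_changes_since_last_save"))
```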
-
- 24 May, 2012 2 commits
-
-
antirez authored
The motivation for these new commands is to be found in the use of Redis for real-time statistics. See the article "Fast real time metrics using Redis": http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/

In general, Redis strings used as bitmaps via the SETBIT/GETBIT commands provide a very space-efficient and fast way to store statistics. For instance, in a web application with users, every user can be associated with a key recording every day on which the user visited the web service. This information can be really valuable to extract user behaviour information. With Redis bitmaps doing this is very simple: just say that a given day is 0 (the day the service was put online) and all the following days are 1, 2, 3, and so forth. So with SETBIT it is possible to set the bit corresponding to the current day every time the user visits the site.

It is possible to count the set bits on the fly; this is extremely easy using a Lua script. However a fast native bit-count operation can be useful, especially if it can operate on ranges, or when the string is small as in the case of days (even if you consider many years it is still extremely little data). For this reason BITCOUNT was introduced. The command counts the number of bits set to 1 in a string, with an optional range:

BITCOUNT key [start end]

The start/end parameters are similar to GETRANGE. If omitted, the whole string is tested.

Population counting is more useful when bit-level operations like AND, OR and XOR are available. For instance, I can test multiple users to see the number of days three users visited the site at the same time: take the AND of all the bitmaps, then count the set bits. For this reason the BITOP command was introduced:

BITOP [AND|OR|XOR|NOT] dest_key src_key1 src_key2 src_key3 ... src_keyN

In the special case of NOT (which inverts the bits) only one source key can be passed.

The judicious use of BITCOUNT and BITOP combined can lead to interesting use cases with a very space-efficient representation of data.

The implementation provided is still not tested and optimized for speed; the next commits will introduce unit tests. Later the implementation will be profiled to see if it is possible to gain an important amount of speed without making the code much more complex.
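A short sketch of the daily-visits pattern described above, assuming the redis-py client; the key layout (one bitmap per user, one bit per day) and key names are illustrative:

```python
import redis

r = redis.Redis(decode_responses=True)

# Hypothetical key layout: one bitmap per user, one bit per day since launch.
def record_visit(user_id, day_index):
    r.setbit(f"visits:{user_id}", day_index, 1)

record_visit("alice", 0)
record_visit("alice", 3)
record_visit("bob", 3)
record_visit("carol", 3)

# Days on which alice visited, optionally restricted to a byte range (as in GETRANGE).
print(r.bitcount("visits:alice"))         # -> 2
print(r.bitcount("visits:alice", 0, 0))   # first byte only, i.e. days 0..7

# Days on which all three users visited: AND the bitmaps, then count the set bits.
r.bitop("AND", "visits:all", "visits:alice", "visits:bob", "visits:carol")
print(r.bitcount("visits:all"))           # -> 1 (day 3)
```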
-
antirez authored
During the AOF rewrite process, the parent process needs to accumulate the new writes in an in-memory buffer: when the child terminates the AOF rewrite, this buffer (that is, the difference between the dataset as it was when the rewrite started and the current dataset) is flushed to the new AOF file.

We used to implement this buffer using an sds.c string, but sds.c has a 2GB limit. Sometimes the dataset can be big enough, the amount of writes so high, and the rewrite process slow enough that we overflow the 2GB limit, causing a crash, documented on github by issue #504.

In order to prevent this from happening, this commit introduces a new system to accumulate writes, implemented by a linked list of blocks of 10 MB each, so that we also avoid paying the reallocation cost.

Note that theoretically modern operating systems may implement realloc() simply as a remapping of the old pages, thus with very good performance; see for instance the mremap() syscall on Linux. However this is not always true, and jemalloc by default avoids doing this because there are issues with the current implementation of mremap(). For this reason we are using a linked list of blocks instead of a single block that gets reallocated again and again.

The changes in this commit lack testing, which will be performed before merging into the unstable branch.

This fix will not enter 2.4 because it is too invasive. However, 2.4 will log a warning when the AOF rewrite buffer is near the 2GB limit.
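A purely conceptual sketch (in Python) of the block-list idea described above; the real implementation is C inside Redis, and the class and method names here are made up. A linked list of fixed-size blocks avoids both the 2GB limit of a single string and repeated realloc() of one ever-growing buffer:

```python
from collections import deque

BLOCK_SIZE = 10 * 1024 * 1024  # 10 MB per block, as in the commit message

class RewriteBuffer:
    def __init__(self):
        self.blocks = deque()    # stands in for the linked list of blocks
        self.used_in_last = 0    # bytes used in the last block

    def append(self, data: bytes):
        while data:
            if not self.blocks or self.used_in_last == BLOCK_SIZE:
                self.blocks.append(bytearray(BLOCK_SIZE))
                self.used_in_last = 0
            free = BLOCK_SIZE - self.used_in_last
            chunk, data = data[:free], data[free:]
            self.blocks[-1][self.used_in_last:self.used_in_last + len(chunk)] = chunk
            self.used_in_last += len(chunk)

    def flush_to(self, fileobj):
        # Full blocks are written whole; the last block only up to its used size.
        for i, block in enumerate(self.blocks):
            used = self.used_in_last if i == len(self.blocks) - 1 else BLOCK_SIZE
            fileobj.write(block[:used])
```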
-
- 13 May, 2012 2 commits
-
-
antirez authored
A previous commit introduced the REDIS_HZ define that changes the frequency of calls to the serverCron() Redis function. This commit improves several related things:

1) Software watchdog: the minimal period can now be set according to REDIS_HZ. The minimal period is two times the timer period, that is: (1000/REDIS_HZ)*2 milliseconds.

2) The incremental rehashing is now performed in the expires dictionary as well.

3) The activeExpireCycle() function was improved in different ways:
- It now checks whether it has already used too much time using microseconds instead of milliseconds, for better precision.
- The time limit is now calculated correctly; in the previous version the division was performed before the multiplication, resulting in a time limit of 0 if HZ was big enough (see the illustration below).
- Databases with less than 1% of the hash table buckets filled are skipped, because getting random keys is too expensive in this condition.

4) tryResizeHashTables() is now called at every timer call; we need to match the number of calls we do to the expired keys collection cycle.

5) REDIS_HZ was raised to 100.
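An illustration of the integer-arithmetic issue mentioned in point 3, using made-up numbers rather than the real constants from the C source: in integer math, dividing before multiplying can truncate the time limit to zero.

```python
HZ = 2000     # hypothetical, deliberately high timer frequency
PERC = 25     # hypothetical percentage of CPU time allowed per expire cycle

# Buggy order: the division happens first and truncates to 0 for large HZ.
buggy_us = 1000 // HZ * PERC * 1000     # 1000 // 2000 == 0, so the limit collapses to 0
# Fixed order: multiply first, divide last (and measure in microseconds).
fixed_us = 1000000 * PERC // HZ // 100  # 125 microseconds per cycle
print(buggy_us, fixed_us)               # -> 0 125
```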
-
antirez authored
Redis uses a function called serverCron() that is very similar to the timer interrupt of an operating system. This function is used to handle a number of asynchronous things, like active expired keys collection, client timeouts, update of statistics, things related to the cluster and replication, triggering of BGSAVE and the AOF rewrite process, and so forth. In the past the timer was called 1 time per second. At some point it was raised to 10 times per second, but it was still fixed and could not be changed even at compile time, because different functions called from serverCron() assumed a given fixed frequency. This commit makes the frequency configurable, so that it is simpler to pick a good tradeoff between the overhead of this function (that is usually very small) and the responsiveness of Redis during a few critical circumstances where a lot of work is done inside the timer. An example of such a critical condition is the mass-expire of a lot of keys in the same second. Up to a given percentage of CPU time is used to perform expired keys collection per expire cycle. Now, by changing the REDIS_HZ macro, it is possible to do less work but more times per second in order to block the server for less time. If this patch works well in our tests it will enter Redis 2.6-final.
-
- 11 May, 2012 1 commit
-
-
antirez authored
If a large amount of keys are all expiring at about the same time, the "active" expired keys collection cycle used to block as long as the percentage of already expired keys was >= 25% of the total population of keys with an expire set. This could block the server even for many seconds in order to reclaim memory ASAP. The new algorithm uses at most a small amount of milliseconds per cycle: even if this means reclaiming the memory less promptly, it also means a more responsive server.
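A conceptual sketch (Python pseudocode for the C logic described above) of the new bounding rule; the sampling and deletion callbacks, the sample size and the per-cycle budget are all stand-ins:

```python
import time

CYCLE_BUDGET_MS = 1   # hypothetical per-cycle wall-clock budget
SAMPLE_SIZE = 20      # hypothetical number of keys sampled per iteration

def active_expire_cycle(sample_expired_keys, delete_key):
    start = time.monotonic()
    while True:
        expired = list(sample_expired_keys(SAMPLE_SIZE))
        for key in expired:
            delete_key(key)
        # Old rule: keep going only while >= 25% of the sampled keys were expired.
        if len(expired) < SAMPLE_SIZE // 4:
            break
        # New rule: additionally stop as soon as the small time budget is spent,
        # trading reclaim speed for responsiveness.
        if (time.monotonic() - start) * 1000 >= CYCLE_BUDGET_MS:
            break
```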
-
- 02 May, 2012 1 commit
-
-
antirez authored
We used to reply -ERR ... message ...; now the reply is instead -MASTERDOWN ... message ..., so that it can be easily distinguished from the other error conditions.
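A minimal sketch of the client-side benefit: the error class is the first word of the reply line, so -MASTERDOWN can be distinguished from a generic -ERR without matching on the human-readable part of the message (the sample messages below are illustrative):

```python
def error_code(reply_line: str) -> str:
    # e.g. "-MASTERDOWN Link with MASTER is down ..."
    return reply_line.lstrip("-").split(" ", 1)[0]

assert error_code("-MASTERDOWN Link with MASTER is down ...") == "MASTERDOWN"
assert error_code("-ERR unknown command 'FOO'") == "ERR"
```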
-
- 21 Apr, 2012 1 commit
-
-
antirez authored
Two limits are added:

1) Up to SLOWLOG_ENTRY_MAX_ARGV arguments are logged.
2) Up to SLOWLOG_ENTRY_MAX_STRING bytes per argument are logged.

Additionally, slowlog-max-len is set to 128 by default (it was 1024). The number of remaining arguments / bytes is logged in the entry so that the user can better understand the nature of the logged command.
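A conceptual sketch (in Python) of the truncation described above; the real limits are C macros and the numeric values used here are only assumptions for illustration:

```python
SLOWLOG_ENTRY_MAX_ARGV = 32        # assumed value, for illustration only
SLOWLOG_ENTRY_MAX_STRING = 128     # assumed value, for illustration only

def slowlog_entry_args(argv):
    """Build the argument list stored in a slowlog entry from the full argv."""
    args = []
    for arg in argv[:SLOWLOG_ENTRY_MAX_ARGV]:
        if len(arg) > SLOWLOG_ENTRY_MAX_STRING:
            omitted = len(arg) - SLOWLOG_ENTRY_MAX_STRING
            arg = arg[:SLOWLOG_ENTRY_MAX_STRING] + f"... ({omitted} more bytes)"
        args.append(arg)
    if len(argv) > SLOWLOG_ENTRY_MAX_ARGV:
        args.append(f"... ({len(argv) - SLOWLOG_ENTRY_MAX_ARGV} more arguments)")
    return args
```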
-
- 13 Apr, 2012 2 commits
-
-
antirez authored
After considering the interaction between the ability to declare globals in scripts using the 'global' function, and the complexities related to handling replication and AOF in a sane way with globals AND the ability to turn protection on and off, we reconsidered the design. The new design makes clear that there is only one good way to write Redis scripts: not using globals. In the rare cases where state must be retained across calls, a Redis key can be used.
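A sketch of the recommended pattern, assuming the redis-py client: instead of a Lua global, a counter that must survive across EVAL calls is kept in a Redis key (the key name 'script:counter' is just an example):

```python
import redis

r = redis.Redis(decode_responses=True)

script = """
local current = redis.call('INCR', KEYS[1])
return current
"""

print(r.eval(script, 1, "script:counter"))   # 1 on a fresh key
print(r.eval(script, 1, "script:counter"))   # 2 on the next call
```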
-
antirez authored
-
- 10 Apr, 2012 1 commit
-
-
antirez authored
It is now possible to enable/disable RDB checksum computation from redis.conf or via CONFIG SET/GET. CONFIG SET support was also added for rdbcompression.
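A short sketch of the new knobs, assuming the redis-py client; both parameters can also be set in redis.conf:

```python
import redis

r = redis.Redis(decode_responses=True)
r.config_set("rdbchecksum", "no")
r.config_set("rdbcompression", "yes")
print(r.config_get("rdbchecksum"))      # {'rdbchecksum': 'no'}
print(r.config_get("rdbcompression"))   # {'rdbcompression': 'yes'}
```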
-
- 09 Apr, 2012 1 commit
-
-
antirez authored
-
- 07 Apr, 2012 1 commit
-
-
antirez authored
-
- 02 Apr, 2012 2 commits
-
-
antirez authored
-
Premysl Hruby authored
-
- 31 Mar, 2012 1 commit
-
-
antirez authored
-
- 30 Mar, 2012 1 commit
-
-
Joseph Jang authored
This occurs when two or more DBs are replicated and at least one of them is > db10.
-
- 29 Mar, 2012 1 commit
-
-
antirez authored
Fix for slave chains: force a resync of slaves (by simply disconnecting them) when SLAVEOF turns a master into a slave.
-
- 28 Mar, 2012 1 commit
-
-
antirez authored
-
- 27 Mar, 2012 2 commits
-
-
Premysl Hruby authored
-
antirez authored
-
- 25 Mar, 2012 1 commit
-
-
antirez authored
This new field counts all the times Redis, configured with AOF enabled and the 'everysec' fsync policy, found that the previous fsync performed by the background thread was not able to complete within two seconds, forcing Redis to perform a write against the AOF file while the fsync was still in progress (likely a blocking operation).
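A sketch of a monitoring check for the new counter, assuming the redis-py client; the alerting logic and wording are illustrative:

```python
import redis

r = redis.Redis(decode_responses=True)
delayed = r.info("persistence").get("aof_delayed_fsync", 0)
if delayed > 0:
    print(f"AOF fsync was delayed {delayed} times; the disk may be too slow")
```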
-
- 20 Mar, 2012 1 commit
-
-
antirez authored
This commit introduces support for read-only slaves via redis.conf and the CONFIG GET/SET commands. Various semantic fixes are implemented here as well:

1) MULTI/EXEC with only read commands now works when the server is in a state where writes (or commands increasing memory usage) are not allowed. Before this patch everything inside a transaction would fail under these conditions.

2) Scripts just calling read-only commands will work against read-only slaves, when the server is out of memory, or when persistence is in an error condition. Before the patch EVAL always failed under these conditions.
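A sketch of point 1 above, assuming the redis-py client: a MULTI/EXEC block made only of read commands, which after this change can succeed even when writes are refused (read-only slave, out of memory, persistence error). The key name is an example:

```python
import redis

r = redis.Redis(decode_responses=True)
with r.pipeline(transaction=True) as pipe:   # transaction=True wraps commands in MULTI/EXEC
    pipe.get("some:key")
    pipe.ttl("some:key")
    results = pipe.execute()
print(results)
```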
-
- 14 Mar, 2012 1 commit
-
-
antirez authored
-
- 13 Mar, 2012 1 commit
-
-
antirez authored
-
- 08 Mar, 2012 3 commits
-
-
antirez authored
-
antirez authored
The Run ID is a field that identifies a single execution of the Redis server. It can be useful for many purposes, as it makes it easy to detect whether the instance we are talking to is the same one, or a different one, or whether it was rebooted. An application of run_id will be in the partial synchronization of replication, where a slave may request a partial sync from a given offset only if it is talking with the same master. Another application is in failover and monitoring scripts.
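A sketch of the monitoring use mentioned above, assuming the redis-py client: remember the run_id and treat a change as evidence that the instance was restarted or replaced. Names and structure here are illustrative:

```python
import redis

r = redis.Redis(decode_responses=True)
previous_run_id = None

def check_same_instance():
    """Return True if we are still talking to the same execution of the server."""
    global previous_run_id
    run_id = r.info("server")["run_id"]
    same = (previous_run_id is None) or (run_id == previous_run_id)
    previous_run_id = run_id
    return same
```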
-
antirez authored
clusterGetRandomName() generalized into getRandomHexChars() so that we can use it for the run_id field as well.
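A rough Python equivalent of what such a helper provides, shown only to illustrate the idea (the real getRandomHexChars() is C inside Redis): n random lowercase hex characters, suitable for identifiers like run_id.

```python
import os

def random_hex_chars(n: int) -> str:
    # Two hex characters per random byte; slice in case n is odd.
    return os.urandom((n + 1) // 2).hex()[:n]

print(random_hex_chars(40))   # e.g. a 40-character run_id-style string
```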
-
- 07 Mar, 2012 4 commits
-
-
antirez authored
By default Redis refuses writes with an error if the latest BGSAVE failed (and at least one save point is configured). However people with good monitoring systems may prefer a server that continues to work, since they are notified of problems by their monitoring systems. This commit implements the ability to turn the feature on or off via redis.conf and CONFIG SET.
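A sketch assuming the redis-py client: check the last BGSAVE status exposed in INFO and, if an external monitoring system is trusted to catch RDB problems, turn the write-refusal behaviour off at runtime (it can also be set in redis.conf):

```python
import redis

r = redis.Redis(decode_responses=True)
print(r.info("persistence").get("rdb_last_bgsave_status"))   # 'ok' or 'err'
r.config_set("stop-writes-on-bgsave-error", "no")
```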
-
antirez authored
Redis now refuses to accept write queries if RDB persistence is configured but RDB snapshots can't be generated for some reason. The status of the latest background save operation is now exposed in the INFO output as well. This fixes issue #90.
-
antirez authored
Better MONITOR output: it now includes the client ip:port, or the 'lua' string if the command was executed by the scripting engine.
-
antirez authored
-
- 29 Feb, 2012 1 commit
-
-
antirez authored
-
- 28 Feb, 2012 4 commits
-
-
antirez authored
The new code uses a more generic data structure to describe Redis operations. The new design allows for multiple alsoPropagate() calls within the scope of a single command, which is useful in different contexts. For instance, when there are multiple clients doing BRPOPLPUSH against the same list and a variadic LPUSH is performed against this list, the blocked clients will all be served, and we should correctly replicate multiple LPUSH commands after the replication of the current command.
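A purely conceptual sketch (in Python) of the idea described above; the real mechanism is C inside Redis, and the names here are made up. The command implementation can queue extra operations, and after the current command runs every queued operation is propagated to replication/AOF in order:

```python
class PropagationContext:
    def __init__(self):
        self.extra_ops = []                  # (command, args) pairs, akin to redis operations

    def also_propagate(self, command, *args):
        self.extra_ops.append((command, args))

def execute_and_propagate(ctx, command, args, run, propagate):
    run(command, args)                       # execute the command itself
    propagate(command, args)                 # replicate the command as usual
    for cmd, cmd_args in ctx.extra_ops:
        propagate(cmd, cmd_args)             # e.g. one LPUSH per served BRPOPLPUSH client
    ctx.extra_ops.clear()
```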
-
antirez authored
Added a new API to replicate an additional command after the replication of the currently executed command, in order to propagate the LPUSH originating from RPOPLPUSH, and indirectly from BRPOPLPUSH.
-
antirez authored
-
antirez authored
-
- 06 Feb, 2012 1 commit
-
-
antirez authored
-
- 04 Feb, 2012 2 commits
-
-
antirez authored
1) sendReplyToClient() no longer stops transferring data to a single client when we are out of memory (maxmemory-wise).

2) In processCommand() the notion of being out of memory is no longer the naive zmalloc_used_memory() > server.maxmemory check. Whether we can accept write queries or not is up to the return value of freeMemoryIfNeeded(), which has full control over that.

3) freeMemoryIfNeeded() now does its math without considering output buffer sizes. But at the same time it can't let the output buffers push us too far over the max memory limit, so it also makes sure enough effort goes into delivering the output buffers to the slaves, calling the write handler directly.

These three changes are the result of many tests; I found (partially empirically) that this is the best way to address the problem, but maybe we'll find better solutions in the future.
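A conceptual sketch (in Python) of the control flow in point 2 above; the real logic is C inside processCommand(), and the function names and error text here are illustrative. The decision to accept a write is delegated to the eviction routine instead of a raw used-memory comparison:

```python
def process_command(is_write_command, maxmemory, free_memory_if_needed, execute):
    if maxmemory and is_write_command:
        # free_memory_if_needed() evicts keys (its accounting ignores output
        # buffer sizes) and reports whether it got usage back under the limit.
        if not free_memory_if_needed():
            return "-OOM command not allowed when used memory > 'maxmemory'"
    return execute()
```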
-
antirez authored
Use less memory when emitting the protocol, by using more shared objects for commonly emitted parts of the protocol.
-