- 06 Aug, 2013 2 commits
-
-
antirez authored
This commit makes the fast collection cycle time configurable; at the same time it does not allow a new fast collection cycle to start for an amount of time equal to the maximum duration of the fast collection cycle.
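A minimal sketch of how such a throttle could look (the constant and variable names are illustrative, not the actual Redis identifiers):

    /* Illustrative sketch of the fast-cycle throttle, not the actual code. */
    #define FAST_CYCLE_MAX_DURATION_US 1000 /* configurable max duration */

    static long long last_fast_cycle_start = 0; /* microseconds */

    int fastCycleAllowed(long long now_us) {
        /* Refuse to start a new fast cycle until at least the maximum
         * duration of a fast cycle has elapsed since the previous one. */
        if (now_us - last_fast_cycle_start < FAST_CYCLE_MAX_DURATION_US)
            return 0;
        last_fast_cycle_start = now_us;
        return 1;
    }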
-
antirez authored
The main idea here is that when we are no longer able to expire keys at the rate they are created, we can't block longer in the normal expire cycle, as this would result in too big latency spikes. For this reason the commit introduces a "fast" expire cycle that never runs for more than 1 millisecond but is called from the beforeSleep() hook of the event loop, so it runs much more often, with a frequency bound to the frequency of executed commands. The fast expire cycle is only called when the standard expiration algorithm runs out of time, that is, it consumed more than REDIS_EXPIRELOOKUPS_TIME_PERC of CPU in a given cycle without being able to bring the number of already expired but not yet collected keys below 25% of the number of keys.

You can test this commit with different loads, but a simple way is the following. Extreme load with pipelining:

    redis-benchmark -r 100000000 -n 100000000 \
      -P 32 set ele:rand:000000000000 foo ex 2

Remove the -P 32 in order to avoid the pipelining for a more real-world load. In another terminal tab you can monitor the Redis behavior with:

    redis-cli -i 0.1 -r -1 info keyspace

and

    redis-cli --latency-history

Note: this commit makes Redis print a lot of debug messages; it is not a good idea to use it in production.
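A rough sketch of the control flow described above, using hypothetical helper names (the real functions and constants in the Redis source may differ):

    /* Illustrative sketch only. */
    #define FAST_CYCLE_DURATION_US 1000 /* never run for more than 1 ms */

    void activeExpireCycleFast(long long duration_us); /* hypothetical helper */

    static int timelimit_exit = 0; /* set when the normal expire cycle ran out
                                      of CPU time without reaching the 25%
                                      target of still-expired keys */

    void beforeSleep(void) {
        /* Executed before every event loop iteration, so its frequency is
         * bound to the frequency of executed commands. Only run the fast
         * cycle when the normal cycle previously hit its time limit. */
        if (timelimit_exit)
            activeExpireCycleFast(FAST_CYCLE_DURATION_US);
    }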
-
- 18 Jul, 2013 1 commit
-
-
antirez authored
-
- 16 Jul, 2013 1 commit
-
-
yoav authored
-
- 12 Jul, 2013 1 commit
-
-
antirez authored
-
- 11 Jul, 2013 6 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
-
Geoff Garside authored
-
Geoff Garside authored
-
Geoff Garside authored
-
- 08 Jul, 2013 3 commits
- 02 Jul, 2013 1 commit
-
-
antirez authored
-
- 01 Jul, 2013 1 commit
-
-
antirez authored
-
- 27 Jun, 2013 1 commit
-
-
antirez authored
-
- 26 Jun, 2013 6 commits
-
-
antirez authored
It should be called just one time at startup and not every time the Lua scripting engine is re-initialized, otherwise memory is leaked.
-
antirez authored
This commit uses the Replication Script Cache in order to avoid translating EVALSHA into EVAL whenever possible for both the AOF and slaves.
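A sketch of the decision made at propagation time, assuming helper names like the ones below (illustrative; the real implementation lives in the scripting and replication code):

    /* Hypothetical helpers, declared only to make the sketch self-contained. */
    int  replicationScriptCacheExists(const char *sha1);
    void replicationScriptCacheAdd(const char *sha1);
    void propagateAsEvalsha(const char *sha1);
    void propagateAsEval(const char *body);

    /* Illustrative sketch: send EVALSHA verbatim only when the script is
     * known to be in the Replication Script Cache, i.e. all slaves and the
     * AOF already received its body; otherwise fall back to a full EVAL. */
    void propagateScriptCall(const char *sha1, const char *body) {
        if (replicationScriptCacheExists(sha1)) {
            propagateAsEvalsha(sha1);        /* cheap: just the SHA1 */
        } else {
            propagateAsEval(body);           /* full script body */
            replicationScriptCacheAdd(sha1); /* next time EVALSHA suffices */
        }
    }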
-
antirez authored
This code is only responsible for maintaining an LRU-evicted, fixed-length cache of the SHA1 digests of scripts that we are sure all the slaves received. In this commit only the implementation is provided, but the Redis core does not yet use it to actually send EVALSHA to slaves when possible.
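A minimal, self-contained sketch of such a fixed-length, LRU-evicted SHA1 cache (plain C with a singly linked list; the real code uses Redis' own data structures and a much larger size):

    #include <stdlib.h>
    #include <string.h>

    #define CACHE_MAX 4 /* tiny, just for illustration */

    typedef struct entry { char sha1[41]; struct entry *next; } entry;
    static entry *head = NULL;
    static int count = 0;

    /* Return 1 if sha1 is cached, refreshing it as most recently used. */
    int cacheExists(const char *sha1) {
        entry **p = &head;
        while (*p) {
            if (strcmp((*p)->sha1, sha1) == 0) {
                entry *e = *p;
                *p = e->next;              /* unlink */
                e->next = head; head = e;  /* move to the front (MRU) */
                return 1;
            }
            p = &(*p)->next;
        }
        return 0;
    }

    /* Add sha1 as most recently used, evicting the LRU entry when full. */
    void cacheAdd(const char *sha1) {
        if (cacheExists(sha1)) return;
        if (count == CACHE_MAX) {          /* evict the tail (least recent) */
            entry **p = &head;
            while ((*p)->next) p = &(*p)->next;
            free(*p); *p = NULL; count--;
        }
        entry *e = malloc(sizeof(*e));
        strncpy(e->sha1, sha1, 40); e->sha1[40] = '\0';
        e->next = head; head = e; count++;
    }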
-
antirez authored
The old REDIS_CMD_FORCE_REPLICATION flag was removed from the implementation of Redis; now there is a new API to force specific executions of a command to be propagated to the AOF / replication link:

    void forceCommandPropagation(int flags);

The new API is also compatible with Lua scripting, so a script that executes commands that are forced to be propagated will itself be propagated accordingly, even if it performs no change to the data. As a side effect, this new design fixes the issue with scripts not being able to propagate PUBLISH to slaves (issue #873).
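As an illustration of how a command implementation might use the new call, following the signature quoted above (the propagation flag name and the surrounding code are assumptions made for the sketch, and the snippet presumes Redis' internal headers):

    /* Illustrative sketch: PUBLISH does not modify the dataset, but we still
     * want it replicated, so the command forces its own propagation. */
    void publishCommand(redisClient *c) {
        int receivers = pubsubPublishMessage(c->argv[1], c->argv[2]);
        forceCommandPropagation(REDIS_PROPAGATE_REPL); /* flag name assumed */
        addReplyLongLong(c, receivers);
    }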
-
antirez authored
Currently it implements three subcommands:

    PUBSUB CHANNELS [<pattern>]    List channels with non-zero subscribers.
    PUBSUB NUMSUB [channel_1 ...]  List the number of subscribers for the given channels.
    PUBSUB NUMPAT                  Return the number of subscribed patterns.
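For example, with one client subscribed to the channels news and weather and one client subscribed to a pattern, the new subcommands would behave roughly as follows (replies shown only to illustrate the shape of the output):

    PUBSUB CHANNELS
    1) "news"
    2) "weather"
    PUBSUB NUMSUB news weather
    1) "news"
    2) (integer) 1
    3) "weather"
    4) (integer) 1
    PUBSUB NUMPAT
    (integer) 1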
-
YAMAMOTO Takashi authored
time_t is always 64-bit on recent versions of NetBSD.
-
- 31 May, 2013 1 commit
-
-
antirez authored
When min-slaves-to-write feature is active, this field reports the number of slaves considered good (online state, lag within the specified range).
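For context, the feature is controlled by two configuration directives; an illustrative redis.conf excerpt (the values are just an example):

    min-slaves-to-write 2
    min-slaves-max-lag 10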
-
- 30 May, 2013 2 commits
- 29 May, 2013 2 commits
-
-
antirez authored
-
antirez authored
There is new 'lag' information in the list of slaves, in the "replication" section of the INFO output. Also, the format was changed in a backward incompatible way in order to make it easier to parse if new fields are added in the future: the new format is comma separated and uses named fields (no longer positional fields).
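An illustrative slave line in the new named-field format (values made up):

    slave0:ip=127.0.0.1,port=6380,state=online,offset=1234,lag=0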
-
- 28 May, 2013 1 commit
-
-
antirez authored
-
- 27 May, 2013 2 commits
- 15 May, 2013 4 commits
-
-
antirez authored
Also the logfile option was modified to always have an explicit value and to log to stdout when an empty string is used as the log file. Previously there was special handling of the string "stdout" that set the logfile to NULL, which always required special cases in the code.
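Illustrative redis.conf excerpts showing the two behaviors:

    # An empty string makes Redis log to the standard output:
    logfile ""

    # Otherwise the value is the path of the log file:
    logfile /var/log/redis.log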
-
antirez authored
-
antirez authored
-
antirez authored
-
- 24 Apr, 2013 1 commit
-
-
antirez authored
-
- 19 Apr, 2013 1 commit
-
-
antirez authored
-
- 02 Apr, 2013 1 commit
-
-
antirez authored
When a BGSAVE fails, Redis used to flood itself trying to BGSAVE at every next cron call, that is, either 10 or 100 times per second depending on configuration and server version. This commit does not allow a new automatic BGSAVE attempt to be performed before a few seconds delay (currently 5). This avoids both the auto-flood problem and filling the disk with logs at a serious rate. The five seconds limit, considering a log entry of 200 bytes, will use less than 4 MB of disk space per day, which is reasonable, and the sysadmin should notice the problem before catastrophic events, especially since by default Redis will stop serving write queries after the first failed BGSAVE. This fixes issue #849.
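A sketch of the retry guard described above (the constant and variable names are illustrative, not the actual Redis identifiers):

    #include <time.h>

    /* Illustrative sketch: skip automatic BGSAVE retries for a few seconds
     * after a failed attempt, to avoid flooding the fork machinery and the
     * logs at every cron call. */
    #define BGSAVE_RETRY_DELAY 5 /* seconds */

    static time_t lastbgsave_try = 0;  /* unix time of the last attempt */
    static int lastbgsave_failed = 0;  /* non-zero if the last one failed */

    int automaticBgsaveAllowed(time_t now) {
        if (lastbgsave_failed && now - lastbgsave_try < BGSAVE_RETRY_DELAY)
            return 0; /* too soon to retry after a failure */
        return 1;
    }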
-
- 28 Mar, 2013 2 commits