- 22 Feb, 2018 5 commits
- 19 Feb, 2018 1 commit
- antirez authored
This commit adds two new fields in the INFO output, stats section:

expired_stale_perc:0.34
expired_time_cap_reached_count:58

The first field is an estimate of the percentage of keys that are still in memory but are already logically expired. The reason why those keys are not yet reclaimed is that the active expire cycle can't spend more time on the process of reclaiming them, and at the same time nobody is accessing them. However, as the active expire cycle runs it collects these stats, even though it eventually has to return to the caller, either because of the time limit or because fewer than 25% of the keys in each given database are found to be logically expired. Note that expired_stale_perc is a running average, where the current sample accounts for 5% and the history for 95%, so you'll see it changing smoothly over time.

The other field, expired_time_cap_reached_count, counts the number of times the expire cycle had to stop because of the time limit, even though it was still finding a sizeable number of keys yet to expire. This allows people handling operations to understand whether the Redis server, during mass-expiration events, is able to collect keys fast enough. It is normal for this field to increment during mass expires, but otherwise it should increment very rarely. When instead it constantly increments, it means that the current workload is spending a significant percentage of CPU time expiring keys.

This feature was created thanks to hints from Rashmi Ramesh and Bart Robinson of Twitter. In private email exchanges, they noted how important it was to improve the observability of this aspect of the Redis server: in big deployments, the keys that are logically expired but still in memory may account for a very large amount of wasted memory.
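As a side note, the 5%/95% running average mentioned above is a plain exponential moving average. A minimal sketch of that update rule (the function name is only illustrative, this is not the actual Redis code):

    /* Running average where the newest sample weighs 5% and the accumulated
     * history 95%, as described for expired_stale_perc. Illustrative only. */
    double update_stale_perc(double current_avg, double new_sample) {
        return new_sample * 0.05 + current_avg * 0.95;
    }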
- 15 Feb, 2018 2 commits
- 14 Feb, 2018 10 commits
- 13 Feb, 2018 5 commits
- charsyam authored
- antirez authored
See #3832.
- antirez authored
See #3858.
- Guy Benoish authored
It is possible to do BGREWRITEAOF even if appendonly=no. This is by design. stopAppendOnly() didn't turn off aof_rewrite_scheduled (it can be turned on again by BGREWRITEAOF even while appendonly is off anyway). After configuring `appendonly yes`, the server sees that the state is AOF_OFF and that there is no RDB fork in progress, so it calls rewriteAppendOnlyFileBackground(), which fails because aof_child_pid is already set (the rewrite was scheduled earlier and started by the cron). Solution: stopAppendOnly() now turns off the schedule flag (regardless of who asked for it), and startAppendOnly() terminates any existing rewrite fork and starts a new one (so it is the most recent).
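In outline, the fix described above could look like this (a simplified, hedged sketch: terminateExistingAofChild() is a hypothetical helper standing in for whatever the server actually does to stop a running rewrite child):

    /* Hypothetical sketch of the behaviour described in the fix above. */
    void stopAppendOnly(void) {
        server.aof_rewrite_scheduled = 0;   /* drop any pending scheduled rewrite */
        server.aof_state = AOF_OFF;
        /* ... flush and close the current AOF file ... */
    }

    int startAppendOnly(void) {
        /* A rewrite scheduled while AOF was off may already be running:
         * terminate it so the rewrite we start now reflects the latest data. */
        if (server.aof_child_pid != -1) terminateExistingAofChild();
        if (rewriteAppendOnlyFileBackground() == C_ERR) return C_ERR;
        server.aof_state = AOF_WAIT_REWRITE;
        return C_OK;
    }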
- Oran Agra authored
In some cases LATENCY HISTORY reported a latency higher than the maximum latency reported by LATENCY LATEST / DOCTOR.
- 02 Feb, 2018 1 commit
- antirez authored
- 23 Jan, 2018 1 commit
- Mark Nunberg authored
Older versions might not have this function.
- 18 Jan, 2018 3 commits
- antirez authored
- Guy Benoish authored
When feeding the master with high-rate traffic, the slave's feed is much slower. This causes the replication buffer to grow (indefinitely), which eventually leads to the slave disconnecting. The problem is that writeToClient() decides to stop writing after NET_MAX_WRITES_PER_EVENT writes (in order to be fair to clients). We should ignore this limit when the client is a slave: it's better if regular clients wait longer, since the alternative is that the slave has no chance to stay in sync in this situation.
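In terms of the write loop described above, the fix amounts to not applying the per-event cap to slave links. A minimal, self-contained sketch of that condition (the constant's value and the function name are assumptions for illustration, not the actual networking.c code):

    #include <stdbool.h>

    #define NET_MAX_WRITES_PER_EVENT (1024*64)   /* per-event write cap (assumed value) */

    /* Returns true when writeToClient() should stop writing during this event:
     * regular clients are capped so the event loop stays fair, while slave
     * links are exempt so they can drain the replication stream. */
    static bool should_stop_writing(long long totwritten, bool client_is_slave) {
        return totwritten > NET_MAX_WRITES_PER_EVENT && !client_is_slave;
    }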
- antirez authored
See #3462 and related PRs. We use a simple algorithm to calculate the level of affinity violation, and then an optimizer that performs random swaps until things improve.
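Independent of the actual redis-cli code, the general shape of such an optimizer is: score the current assignment, try a random swap, and keep the swap only if the violation score improves. A hedged, self-contained sketch (the scoring rule here, one violation per replica sharing a host with its master, is only an illustration):

    #include <stdlib.h>

    /* Illustrative model: replica i runs on host rhost[i] (fixed) and is
     * assigned to master assign[i], which runs on host mhost[assign[i]].
     * A violation is a replica placed on the same host as its master. */
    static int violations(const int *rhost, const int *mhost,
                          const int *assign, int nreplicas) {
        int score = 0;
        for (int i = 0; i < nreplicas; i++)
            if (rhost[i] == mhost[assign[i]]) score++;
        return score;
    }

    /* Random-swap optimizer: exchange the master assignment of two random
     * replicas, keeping the swap only when the violation score improves. */
    static void optimize(const int *rhost, const int *mhost,
                         int *assign, int nreplicas, int iterations) {
        int best = violations(rhost, mhost, assign, nreplicas);
        while (iterations-- > 0 && best > 0) {
            int a = rand() % nreplicas, b = rand() % nreplicas;
            int tmp = assign[a]; assign[a] = assign[b]; assign[b] = tmp;
            int score = violations(rhost, mhost, assign, nreplicas);
            if (score < best) best = score;
            else { tmp = assign[a]; assign[a] = assign[b]; assign[b] = tmp; }
        }
    }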
- 17 Jan, 2018 2 commits
- 16 Jan, 2018 3 commits
- antirez authored
- qinchao authored
See issue: https://github.com/antirez/redis/issues/4587
- Oran Agra authored
After a slave is promoted (assuming it has no slaves and it booted over an hour ago), it will lose its replication backlog at the next replication cron, rather than waiting for slaves to connect to it. So on a simple master/slave failover, if the new slave doesn't connect immediately, it may be too late and PSYNC2 will fail.
- 15 Jan, 2018 2 commits
- zhaozhao.zz authored
- antirez authored
- 14 Jan, 2018 1 commit
- zhaozhao.zz authored
- 12 Jan, 2018 1 commit
- antirez authored
This fixes a crash with Redis Cluster when OBJECT is mis-used, because getKeysUsingCommandTable() will call serverPanic() when it detects that we are accessing an invalid argument, as happens when "OBJECT foo" is called. This bug was introduced when OBJECT HELP was added, because the key argument is fixed at index 2 in the command table, yet OBJECT may now be called with too few arguments to extract the key.

The "Right Thing" would be to have a specific function to extract keys from the OBJECT command, however this is kind of overkill, so I preferred to make getKeysUsingCommandTable() more robust and simply return no keys when it's not possible to honor the command table: new commands are added often, there are already a number of commands with a HELP subcommand violating the normal form, and crashing for such a trivial reason, or maintaining many command-specific key extraction functions, is not great.
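The robustness change boils down to a bounds check before the key positions declared in the command table are walked. A simplified, self-contained model (illustrative names and simplified semantics, not the literal patch):

    /* If the command table says the first key is at argument index `firstkey`
     * but the client sent fewer arguments (e.g. "OBJECT foo": the key is
     * declared at index 2 while argc is 2), report zero keys instead of
     * treating it as a fatal inconsistency. */
    static int safe_num_keys(int firstkey, int lastkey, int keystep, int argc) {
        if (firstkey == 0 || firstkey >= argc) return 0;
        if (lastkey < 0) lastkey = argc + lastkey;   /* negative = from the end */
        if (lastkey >= argc) lastkey = argc - 1;
        return (lastkey - firstkey) / keystep + 1;
    }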
- 11 Jan, 2018 2 commits
- 09 Jan, 2018 1 commit
- antirez authored
See PR #2507. This is a reimplementation of that fix, which in its original form had various problems.