- 07 Mar, 2013 2 commits
- 06 Mar, 2013 2 commits
- 05 Mar, 2013 6 commits
-
antirez authored
-
antirez authored
If we have a master in FAIL state that's reachable again, and apparently no one is going to serve its slots, clear the FAIL flag and let the cluster continue its operations.
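A minimal sketch of what such a check could look like, with hypothetical names rather than the real cluster.c structures (the `fail_time` field is the one introduced in the commit below):

```c
#include <time.h>

#define NODE_FLAG_FAIL (1 << 0)

/* Hypothetical stand-in for the real clusterNode structure. */
typedef struct node {
    int flags;          /* NODE_FLAG_FAIL among others */
    time_t fail_time;   /* when the FAIL flag was set */
    int numslots;       /* slots still assigned to this node */
} node;

/* Called when a flagged master turns out to be reachable again:
 * if its slots were never taken over, nobody else is going to
 * serve them, so clearing FAIL beats leaving the cluster stuck. */
void maybe_clear_fail(node *master, time_t now, time_t node_timeout) {
    if ((master->flags & NODE_FLAG_FAIL) &&
        master->numslots > 0 &&
        now - master->fail_time > node_timeout) {
        master->flags &= ~NODE_FLAG_FAIL;
    }
}
```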
-
antirez authored
This is the unix time at which we set the FAIL flag for the node. It is only valid if FAIL is set. The idea is to use it to make the cluster more robust, for instance to revert a FAIL state if it is long-standing but slots are still assigned to this node, that is, apparently no one is going to fix those slots.
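Continuing the hypothetical `node` type from the sketch above, setting the flag and the timestamp together keeps the field meaningful only while FAIL is set:

```c
/* Illustrative only: record when FAIL was set, so later logic can
 * tell how long the node has been flagged as failing. */
void mark_failing(node *n, time_t now) {
    n->flags |= NODE_FLAG_FAIL;
    n->fail_time = now;   /* valid only while NODE_FLAG_FAIL is set */
}
```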
-
antirez authored
-
antirez authored
Usually we try to send just 1 ping every second; however, when we detect we are going to have unreliable failure detection because we can't ping some node in time, we send an additional ping. This should only happen with very large clusters or when the node timeout is set to a very low value.
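A rough sketch of the idea, with illustrative names rather than the real clusterCron() code: on top of the regular once-per-second ping, any node whose last PONG is older than half the node timeout gets an extra ping, so it can still be judged within the timeout window.

```c
#include <time.h>

/* Hypothetical peer bookkeeping, not the real clusterNode. */
typedef struct peer {
    time_t pong_received;  /* last PONG from this peer */
    time_t ping_sent;      /* 0 when no ping is pending */
} peer;

void cron_extra_ping_pass(peer *peers, int count,
                          time_t now, time_t node_timeout) {
    for (int i = 0; i < count; i++) {
        peer *p = &peers[i];
        if (p->ping_sent == 0 &&
            now - p->pong_received > node_timeout / 2) {
            /* send_ping(p) would go here in real code */
            p->ping_sent = now;
        }
    }
}
```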
-
antirez authored
- 04 Mar, 2013 5 commits
-
antirez authored
If we are a cluster node, the DB content will not match our configured slots. Don't do the check at all.
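The shape of the fix is just an early return; a hedged sketch with made-up names:

```c
/* Hypothetical guard: in cluster mode the keys present in the DB
 * depend on the slots served, so a whole-keyspace sanity check
 * would only produce false positives. */
typedef struct srv { int cluster_enabled; } srv;

void verify_db_against_config(srv *server) {
    if (server->cluster_enabled) return;  /* skip the check entirely */
    /* ... non-cluster consistency checks would follow ... */
}
```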
-
antirez authored
There are pathological cases where the line can be even longer: a single node may contain all the slots in importing/migrating state.
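Redis uses its own sds dynamic strings for this; here is a plain-C sketch of the general idea, growing the buffer instead of assuming a maximum line length:

```c
#include <stdlib.h>
#include <string.h>

/* Append piece to a heap buffer, growing it as needed; with all
 * 16384 slots in importing/migrating state on one node, a single
 * line can get arbitrarily long, so no fixed size is safe. */
char *append(char *buf, size_t *len, const char *piece) {
    size_t plen = strlen(piece);
    char *nbuf = realloc(buf, *len + plen + 1);
    if (nbuf == NULL) { free(buf); return NULL; }
    memcpy(nbuf + *len, piece, plen + 1);
    *len += plen;
    return nbuf;
}
```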
-
antirez authored
-
antirez authored
-
charsyam authored
Adding a check for the error code
- 28 Feb, 2013 6 commits
-
antirez authored
As stated in the comment, this is usually due to a resharding in progress, so the client should still be redirected to the old node, which will handle the redirection elsewhere.
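The redirection model in question, sketched with made-up helper names and addresses: -ASK points a client at the node a key is moving to for one request only, while the stable slot mapping keeps answering -MOVED.

```c
#include <stdio.h>

/* Hedged sketch: print the redirection error a client would see. */
void print_redirect(int slot, int slot_is_migrating,
                    const char *owner, const char *target) {
    if (slot_is_migrating)
        printf("-ASK %d %s\r\n", slot, target);   /* one-shot redirect */
    else
        printf("-MOVED %d %s\r\n", slot, owner);  /* stable mapping */
}
```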
-
antirez authored
The new code makes sure that the node slots bitmap is always consistent with the cluster->slots array.
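A hedged sketch of the invariant (structures are illustrative, not the real cluster.h): route every slot assignment through one helper so the per-node bitmap and the slot-to-node array can never disagree.

```c
#define CLUSTER_SLOTS 16384

/* Illustrative stand-ins for the real structures. */
typedef struct cnode {
    unsigned char slots[CLUSTER_SLOTS / 8];  /* bitmap of served slots */
} cnode;

typedef struct cstate {
    cnode *slots[CLUSTER_SLOTS];             /* slot -> owning node */
} cstate;

void assign_slot(cstate *cluster, cnode *n, int slot) {
    cluster->slots[slot] = n;                 /* array side */
    n->slots[slot / 8] |= 1 << (slot % 8);    /* bitmap side */
}
```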
-
antirez authored
-
antirez authored
-
antirez authored
Before, a relatively slow popcount() operation was needed every time we wanted to get the number of slots served by a given cluster node. Now we just need to check an integer that is kept in sync with the bitmap.
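A sketch of the optimization under the same illustrative structures: guard the bit test so the counter moves only on real transitions, keeping it exact.

```c
typedef struct counted_node {
    unsigned char slots[16384 / 8];
    int numslots;   /* always equals the number of set bits */
} counted_node;

void set_slot_bit(counted_node *n, int slot) {
    if (!(n->slots[slot / 8] & (1 << (slot % 8)))) {
        n->slots[slot / 8] |= 1 << (slot % 8);
        n->numslots++;
    }
}

void clear_slot_bit(counted_node *n, int slot) {
    if (n->slots[slot / 8] & (1 << (slot % 8))) {
        n->slots[slot / 8] &= ~(1 << (slot % 8));
        n->numslots--;
    }
}
```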
-
antirez authored
- 27 Feb, 2013 5 commits
- 26 Feb, 2013 5 commits
-
antirez authored
-
antirez authored
-
antirez authored
The new system detects a failure only when there is quorum from masters.
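The quorum test itself is simple; a hedged sketch with assumed semantics (our own PFAIL observation counts as one vote):

```c
/* Returns nonzero when a strict majority of masters, including
 * ourselves, consider the node as failing. Illustrative only. */
int should_mark_fail(int reporting_masters, int total_masters) {
    int needed = total_masters / 2 + 1;   /* strict majority */
    return reporting_masters + 1 >= needed;
}
```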
-
antirez authored
-
antirez authored
This is not very important, as the cleanup is performed anyway when the function counting the number of reports is called. However, with this change, if only some of the nodes that reported the failure later report that the node is back OK, we'll clean up the older entries ASAP. In complex net split scenarios, and when we are dealing with clusters having nodes in the order of ~1000, this can save some CPU.
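A sketch of the eager cleanup with a made-up list type: when a master reports the node is back OK, its earlier failure report is unlinked immediately instead of waiting for the expiry pass.

```c
#include <stdlib.h>

/* Hypothetical singly-linked failure report list. */
typedef struct freport {
    long reporter_id;            /* master that filed the report */
    struct freport *next;
} freport;

void del_failure_report(freport **head, long reporter_id) {
    while (*head) {
        if ((*head)->reporter_id == reporter_id) {
            freport *dead = *head;
            *head = dead->next;  /* unlink */
            free(dead);
            return;
        }
        head = &(*head)->next;
    }
}
```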
- 25 Feb, 2013 8 commits
-
antirez authored
This is the missing part of the API that will be used to reimplement failure detection of Cluster nodes.
-
antirez authored
Not sure why I set a limit to 1 million keys; there is no reason for this artificial limit, and anyway it is a stupid limit because it is already high enough to create latency issues. So let the users shoot themselves in the foot, because maybe they actually know what they are doing.
-
antirez authored
-
antirez authored
The new sub-command uses the new countKeysInSlot() API and allows a cluster client to get the number of keys for a given hashslot.
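A hedged sketch of the sub-command plumbing, with stand-in names (only CLUSTER COUNTKEYSINSLOT itself is the real command): validate the slot, then reply with an integer.

```c
#include <stdio.h>

#define CLUSTER_SLOTS 16384

/* Stub for illustration; the real helper walks the per-slot index. */
static long count_keys_in_slot(int slot) { (void)slot; return 0; }

void handle_countkeysinslot(long slot) {
    if (slot < 0 || slot >= CLUSTER_SLOTS) {
        printf("-ERR Invalid slot\r\n");                    /* RESP error */
        return;
    }
    printf(":%ld\r\n", count_keys_in_slot((int)slot));      /* RESP integer */
}
```

From redis-cli the call looks like `CLUSTER COUNTKEYSINSLOT 7000` and replies with a plain integer count.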
-
antirez authored
-
antirez authored
Redis function names start in lower case. A few cluster functions were capitalized the wrong way.
-
antirez authored
-
antirez authored
See the top-comment for the function in this commit for details about what the function is supposed to do.
- 22 Feb, 2013 1 commit
-
antirez authored