- 07 Feb, 2014 1 commit
-
-
antirez authored
Currently this is only marginally useful: it can be used to make sure two keys are in the same hash slot when the cluster is stable (no rehashing in progress). In the future, support may be added to run multi-key operations on keys in the same hash slot.
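For context, keys map to one of 16384 hash slots. A minimal self-contained sketch of the computation, assuming the CRC16 (XMODEM variant) scheme described in the Redis Cluster specification and ignoring hash tags and other details:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* CRC16, XMODEM variant (poly 0x1021, init 0), as per the cluster spec. */
static uint16_t crc16_xmodem(const char *buf, size_t len) {
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((unsigned char)buf[i]) << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

static unsigned int key_hash_slot(const char *key) {
    return crc16_xmodem(key, strlen(key)) % 16384;
}

int main(void) {
    /* Two keys are in the same hash slot only if these values match. */
    printf("foo -> slot %u\n", key_hash_slot("foo"));
    printf("bar -> slot %u\n", key_hash_slot("bar"));
    return 0;
}
```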
-
- 05 Feb, 2014 5 commits
-
-
antirez authored
For a manual failover to proceed we need a manual failover in progress, and mf_can_start must be true (master offset received and matched).
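A sketch of that gating condition with illustrative names; only mf_can_start is named in the message above, the "manual failover in progress" field is an assumption made for the example:

```c
#include <stdint.h>

/* Illustrative cluster state, not the actual cluster.h definition. */
struct cluster_state_sketch {
    int64_t mf_end;       /* Nonzero while a manual failover is in progress. */
    int mf_can_start;     /* Master offset received and matched. */
};

/* The failover procedure may run in "manual" mode only when both hold. */
static int manual_failover_can_proceed(const struct cluster_state_sketch *cs) {
    return cs->mf_end != 0 && cs->mf_can_start;
}
```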
-
antirez authored
-
antirez authored
Otherwise it is always detected as a manual failover that timed out.
-
antirez authored
When a slave requests the masters' votes for a manual failover, the REQUEST_AUTH message is flagged in a special way in order to force the masters to grant the authorization even if the master is not marked as failing.
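A sketch of the idea with hypothetical constant names (the real message type and flag constants in the cluster code differ):

```c
#include <stdint.h>

#define MSG_TYPE_FAILOVER_AUTH_REQUEST 5      /* Hypothetical value. */
#define MSG_FLAG_FORCEACK (1 << 0)            /* "Vote even if not failing". */

struct msg_header_sketch {
    uint16_t type;
    uint16_t flags;
};

/* When the auth request is part of a manual failover, set the force-ack
 * flag so the receiving masters grant the vote even though they do not see
 * the master as failing. */
static void prepare_auth_request(struct msg_header_sketch *hdr, int manual) {
    hdr->type = MSG_TYPE_FAILOVER_AUTH_REQUEST;
    hdr->flags = manual ? MSG_FLAG_FORCEACK : 0;
}
```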
-
antirez authored
-
- 31 Jan, 2014 1 commit
-
-
antirez authored
It is now possible to configure the minimum number of additional working slaves a master should be left with in order for one of its slaves to migrate to an orphaned master.
-
- 30 Jan, 2014 3 commits
-
-
antirez authored
The check was placed in a way that conflicted with the continue statements used later by the node heartbeat code, which sometimes needs to skip the current node. It was moved to the start of the function so that it is always executed.
-
antirez authored
This feature allows slaves to migrate to orphaned masters (masters without working slaves), as long as a set of conditions is met, including that the migrating slave must be part of a master-slaves ring with at least one other working slave.
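A rough sketch of the preconditions, with illustrative names; the configurable minimum is the one described in the 31 Jan commit above:

```c
/* Illustrative structures; not the actual cluster.c definitions. */
struct master_info_sketch {
    int working_slaves;   /* Slaves of this master currently considered working. */
};

/* An orphaned master has no working slaves at all. */
static int is_orphaned(const struct master_info_sketch *m) {
    return m->working_slaves == 0;
}

/* A slave may migrate away only if its own master-slaves ring is left with
 * at least `barrier` other working slaves; barrier >= 1 covers the "at
 * least another slave working" condition described above. */
static int can_migrate_away(const struct master_info_sketch *my_master,
                            int barrier) {
    return (my_master->working_slaves - 1) >= barrier;
}
```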
-
antirez authored
-
- 29 Jan, 2014 10 commits
-
-
antirez authored
-
antirez authored
When we schedule a failover, broadcast a PONG to the slaves. The other slaves that plan to get elected will do the same, so it is likely that every slave will have a good picture of its own rank. Note that this is N*N messages, where N is the number of slaves of the failing master; however, even large clusters usually have many master nodes but a limited number of replicas per node, so this is harmless.
-
antirez authored
-
antirez authored
Note that when we compute the initial delay there is probably still more up-to-date information to receive from slaves with newer offsets, so the delay is recomputed when new data is available.
-
antirez authored
Return the number of slaves of the same master having a better replication offset than the current slave, that is, the slave "rank" used to pick a delay before requesting the election.
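A minimal sketch of that computation, with illustrative types:

```c
#include <stddef.h>

struct slave_info_sketch {
    long long repl_offset;   /* Last known replication offset. */
};

/* Rank = number of slaves of the same master with a better (larger)
 * replication offset than ours; rank 0 means we are the most up to date. */
static int slave_rank(const struct slave_info_sketch *me,
                      const struct slave_info_sketch *siblings, size_t count) {
    int rank = 0;
    for (size_t i = 0; i < count; i++)
        if (siblings[i].repl_offset > me->repl_offset) rank++;
    return rank;
}
```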
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
Accessing the 'myself' node, the node representing the currently running instance, is handy without having to type server.cluster->myself every time.
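In practice this boils down to a shorthand along these lines (a sketch; the exact definition in cluster.c may differ):

```c
/* Shorthand for the node representing the currently running instance. */
#define myself server.cluster->myself
```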
-
antirez authored
Now we can broadcast a PONG to all the instances or just to the local slaves (which is useful for replication offset propagation).
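A sketch of the recipient selection, with illustrative names and types:

```c
#include <stddef.h>

#define PONG_TARGET_ALL          0
#define PONG_TARGET_LOCAL_SLAVES 1

struct node_sketch {
    int is_slave;
    struct node_sketch *master;   /* NULL when the node is a master. */
};

/* Decide whether a given node should receive the broadcast PONG; "local
 * slaves" are the slaves replicating from the same master as this node. */
static int should_receive_pong(const struct node_sketch *n,
                               const struct node_sketch *my_master,
                               int target) {
    if (target == PONG_TARGET_ALL) return 1;
    return n->is_slave && n->master == my_master;
}
```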
-
- 28 Jan, 2014 2 commits
-
-
antirez authored
-
antirez authored
The two fields are used in order to remember the latest known replication offset, and the time we received it, for other slave nodes. This will be used by slaves in order to start the election procedure with a delay that is proportional to the rank of the slave among the other slaves of the same master, when sorted by replication offset. Usually this allows the slave with the most up-to-date offset to win the election and replace the failing master in the cluster.
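A sketch of the two fields and of how the rank can feed the election delay; field names and constants are illustrative and may differ from the actual ones:

```c
#include <stdint.h>

typedef int64_t mstime_t;

struct cluster_node_fields_sketch {
    long long repl_offset;        /* Latest known replication offset. */
    mstime_t  repl_offset_time;   /* When we received that offset. */
};

/* One plausible shape of the delay: a fixed component plus a per-rank step,
 * so better-replicated slaves (lower rank) request votes earlier. The exact
 * constants used by Redis may differ. */
static mstime_t election_delay(int rank) {
    return 500 + (mstime_t)rank * 1000;
}
```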
-
- 22 Jan, 2014 3 commits
- 20 Jan, 2014 2 commits
-
-
antirez authored
One of the simple heuristics used by Redis Cluster in order to avoid losing data in the typical failure modes created by the asynchronous replication with the slaves (a master is unable, when accepting a write, to immediately tell if it should really be accepted or refused because of a configuration change) is to wait some time before rejoining the cluster after being partitioned away from the majority of instances. A similar condition happens when a master is restarted: it does not know if it was already failed over, nor if all the clients already have an updated configuration of the cluster map, so it is possible that clients will try to write to stale masters that were restarted. In a similar way this commit changes the behavior of masters so that they wait 2000 milliseconds before accepting writes after a reboot. There is nothing special about 2 seconds other than being a value supposedly a few orders of magnitude larger than the cluster bus communication latencies.
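A sketch of the post-reboot write gate, with illustrative names (the 2000 ms figure is the one from the commit message):

```c
#include <stdint.h>

typedef int64_t mstime_t;

#define WRITE_DELAY_AFTER_REBOOT_MS 2000

/* Writes are accepted again only after the delay has fully elapsed since
 * the instance started. */
static int writes_allowed(mstime_t now_ms, mstime_t started_at_ms) {
    return (now_ms - started_at_ms) >= WRITE_DELAY_AFTER_REBOOT_MS;
}
```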
-
antirez authored
These were committed by mistake after being inserted in order to fix an issue.
-
- 17 Jan, 2014 1 commit
-
-
antirez authored
The code was performing checks on slaves that should be done only when the instance is currently a master. Switching a slave from one master to another should just work.
-
- 16 Jan, 2014 2 commits
- 15 Jan, 2014 9 commits
-
-
antirez authored
CLUSTER FORGET is not useful if we can't remove a node from all the nodes of our cluster, because the Gossip protocol keeps re-adding a given node to nodes where we already tried to remove it. So now CLUSTER FORGET implements a node blacklist that is set and checked by the Gossip section processing function. This way, at least 60 seconds must elapse since the FORGET execution before a node is re-added, which means that redis-trib has some time to remove a node from the whole cluster. It is possible that in the future it will be useful to raise the 60 second figure to something bigger.
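A sketch of the blacklist mechanism with illustrative structures (the real implementation keys entries by node name in a dictionary; the 60 second TTL is the figure from the commit message):

```c
#include <stdint.h>
#include <string.h>

typedef int64_t mstime_t;

#define FORGET_BLACKLIST_TTL_MS (60 * 1000)
#define MAX_BLACKLIST 16                      /* Illustrative fixed size. */

struct blacklist_entry {
    char node_id[41];        /* 40-char hex node name plus NUL. */
    mstime_t expire_at;
};

static struct blacklist_entry blacklist[MAX_BLACKLIST];

/* Called by CLUSTER FORGET: remember the node for the TTL period. */
static void blacklist_add(const char *node_id, mstime_t now) {
    for (int i = 0; i < MAX_BLACKLIST; i++) {
        if (blacklist[i].expire_at <= now) {   /* Free or expired slot. */
            strncpy(blacklist[i].node_id, node_id, 40);
            blacklist[i].node_id[40] = '\0';
            blacklist[i].expire_at = now + FORGET_BLACKLIST_TTL_MS;
            return;
        }
    }
}

/* Called while processing gossip sections, before re-adding an unknown
 * node: a blacklisted node is simply ignored until its entry expires. */
static int blacklist_contains(const char *node_id, mstime_t now) {
    for (int i = 0; i < MAX_BLACKLIST; i++)
        if (blacklist[i].expire_at > now &&
            strncmp(blacklist[i].node_id, node_id, 40) == 0) return 1;
    return 0;
}
```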
-
antirez authored
The hash table value should be set to now + 60 seconds, otherwise it expires immediately.
-
antirez authored
We can't look up by node->name since it is not an SDS string but a plain C array inside the node structure.
-
antirez authored
-
antirez authored
The rejoin delay is usually the node timeout. However, if the node timeout is too small, we set it to 500 milliseconds, a value chosen to be greater than the RTT / instance latency figures of most setups, so that communication with the other nodes is likely to happen before rejoining.
-
antirez authored
Usually we update the cluster state (to understand if we should accept queries or reply with an error) only when there is a change in the state of the nodes. However, for the "delayed rejoin" feature to work, that is, for a master to wait some time before accepting queries again after it rejoins the majority, we need to periodically update the last time the node was partitioned away from the majority. With this commit, if the cluster is down we update the state ten times per second.
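A sketch of the idea with illustrative names, assuming a cron callback that runs ten times per second:

```c
enum cluster_health_sketch { HEALTH_OK, HEALTH_FAIL };

static enum cluster_health_sketch cluster_health = HEALTH_FAIL;
static int state_update_needed = 0;

/* Assumed to be called 10 times per second. Normally the state is
 * recomputed only when a node changes state; while the cluster is down we
 * force a recomputation on every tick so the "partitioned away since" time
 * keeps being refreshed. */
static void cluster_cron_tick(void) {
    if (cluster_health == HEALTH_FAIL) state_update_needed = 1;
    if (state_update_needed) {
        /* ... recompute the cluster state and related timestamps ... */
        state_update_needed = 0;
    }
}
```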
-
antirez authored
See issue #1426 on Github.
-
antirez authored
Even without the user manually editing the file, it is still possible to have blank lines (just a single "\n" per line) because of how the nodes.conf update/write process works.
-
antirez authored
The way the file was generated was unsafe and led to nodes.conf file corruption (zero-length file) on server stop/crash during the creation of the file. The previous file update method was as simple as an open with O_TRUNC followed by the write call. While the write call was a single one with the full payload, ensuring no half-written files per POSIX semantics, stopping the server just after the open call resulted in a zero-length file (all the nodes information lost!).
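One common way to avoid the zero-length window described above is to write the new payload to a temporary file, fsync it, and then rename() it over the old file. This is a generic illustration of the technique, not necessarily the exact approach taken by the commit:

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Atomically replace `path` with `payload`: readers always see either the
 * complete old file or the complete new one, never a truncated file. */
static int safe_rewrite(const char *path, const char *payload, size_t len) {
    char tmp[1024];
    snprintf(tmp, sizeof(tmp), "%s.tmp-%d", path, (int)getpid());

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) return -1;
    if (write(fd, payload, len) != (ssize_t)len || fsync(fd) == -1) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);
    /* rename() is atomic on POSIX filesystems. */
    if (rename(tmp, path) == -1) { unlink(tmp); return -1; }
    return 0;
}
```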
-
- 14 Jan, 2014 1 commit
-
-
antirez authored
A client can enter a special cluster read-only mode using the READONLY command: if the client reads from a slave instance after this command, for slots that are actually served by the instance's master, the queries will be processed without redirection, allowing clients to read from slaves (but without any kind of read-after-write guarantee). The READWRITE command can be used in order to exit the read-only state.
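A hypothetical redis-cli session against a slave (port, key, value and the redirection target are made up for illustration):

```
$ redis-cli -p 7001
127.0.0.1:7001> GET foo
(error) MOVED 12182 127.0.0.1:7000
127.0.0.1:7001> READONLY
OK
127.0.0.1:7001> GET foo
"bar"
127.0.0.1:7001> READWRITE
OK
```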
-