- 22 Dec, 2013 3 commits
-
-
antirez authored
Currently replication offsets could be used in a limited way to understand, out of a set of slaves, which one has the most updated data. For example this comparison is possible if N slaves were all replicating with the same master. However the replication offset was not transferred from master to slaves (that are later promoted as masters) in any way, so for instance if there were three instances A, B, C, with A master and B and C replicating from A, the following could happen: C disconnects from A. B is turned into master. A is switched to be a slave of B. B receives some write. In this context there was no way to compare the offsets of A and C, because B would use its own local master replication offset as the replication offset to initialize the replication with A. With this commit what happens is that when B is turned into master it inherits the replication offset from A, making A and C comparable. In the above case, assuming no inconsistencies are created during the disconnection and failover process, A will show a replication offset greater than C's. Note that this does not mean offsets are always enough to understand which instance, in a set of instances, has the most recent data, since in more complex examples the replica with the higher replication offset could be partitioned away when picking the instance to elect as new master. However this in general improves the ability of a system to try to pick a good replica to promote to master.
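A minimal sketch of the inheritance step, with hypothetical names rather than the actual Redis fields: on promotion, the new master seeds its master offset from the offset it reached as a slave, instead of resetting it.

```c
#include <stdint.h>

/* Hypothetical state; Redis keeps these in its server structure. */
typedef struct server_state {
    uint64_t master_repl_offset; /* offset of the stream we produce */
    uint64_t slave_repl_offset;  /* offset processed from the old master */
} server_state;

void promote_to_master(server_state *s) {
    /* Inherit rather than reset, so offsets stay comparable across
     * the old replica set (A and C in the example above). */
    s->master_repl_offset = s->slave_repl_offset;
}
```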
-
antirez authored
-
antirez authored
When the configured node timeout is very small, the data validity time (the maximum data age for a slave to try a failover) is too little (ten times the configured node timeout) when the replication link with the master is mostly idle. In this case we'll receive some data from the master only every server.repl_ping_slave_period, to refresh the last interaction with the master. This commit adds the slave ping period to the max data validity time, to avoid slaves sensing their data as too old without a good reason. However this max data validity time is likely a setting that should be configurable by the Redis Cluster user in a way completely independent from the node timeout.
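A hypothetical sketch of the freshness check described above; names and units are assumptions (the ping period is in seconds, as repl-ping-slave-period is, while the node timeout is in milliseconds):

```c
#include <stdint.h>

typedef int64_t mstime_t;

/* Data is still valid for failover if its age does not exceed ten
 * node timeouts plus one ping period (converted to milliseconds). */
int data_fresh_enough(mstime_t data_age_ms, mstime_t node_timeout_ms,
                      int ping_period_s) {
    mstime_t max_age = node_timeout_ms * 10 + (mstime_t) ping_period_s * 1000;
    return data_age_ms <= max_age;
}
```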
-
- 20 Dec, 2013 6 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
This commit makes it simple to start a handshake with a specific node address, and uses this capability to detect a node IP change and start a new handshake in order to fix the IP if possible.
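A sketch of the intent; the handshake entry point and its signature are assumptions based on the description above, not necessarily the actual code:

```c
/* Assumed helper: starts a handshake toward a given address. */
int clusterStartHandshake(const char *ip, int port);

void node_address_changed(const char *new_ip, int new_port) {
    /* Re-acquire the node at its new address with a fresh handshake,
     * so the IP recorded for it can be fixed if possible. */
    clusterStartHandshake(new_ip, new_port);
}
```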
-
antirez authored
As specified in the Redis Cluster specification, when a node can reach the majority again after a period in which it was partitioned away with the minority of masters, it waits some time before accepting queries, to provide a reasonable amount of time for other nodes to update its configuration. This lowers the probability that a client and a master with a stale configuration rejoin the cluster at the same time, with the stale master accepting writes.
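A hypothetical sketch of the grace period (names and the delay policy are illustrative): queries are refused until some delay has passed since the majority was reached again.

```c
#include <stdint.h>

typedef int64_t mstime_t;

/* Refuse queries during a rejoin grace period, so other nodes have
 * time to push configuration updates to this node. */
int accepts_queries(mstime_t now, mstime_t majority_regained_at,
                    mstime_t rejoin_delay) {
    return (now - majority_regained_at) >= rejoin_delay;
}
```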
-
- 19 Dec, 2013 4 commits
-
-
antirez authored
CONFIG REWRITE is now wiser and does not touch what it does not understand inside redis.conf.
-
Yubao Liu authored
Without this patch, those options would be dropped: include, rename-command, min-slaves-to-write, min-slaves-max-lag, appendfilename.
-
antirez authored
-
antirez authored
With this commit, options not explicitly rewritten by CONFIG REWRITE are not touched at all. These include new options that may not yet have support for REWRITE, and other special cases like rename-command and include.
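A simplified sketch of the preservation rule; is_rewritten_option() is a hypothetical helper standing in for the set of options CONFIG REWRITE knows how to regenerate:

```c
#include <stdio.h>

/* Hypothetical: true only for options CONFIG REWRITE supports. */
int is_rewritten_option(const char *first_token);

void rewrite_line(FILE *out, const char *line, const char *first_token) {
    if (is_rewritten_option(first_token)) {
        /* emit the freshly generated "option value" line instead */
    } else {
        fputs(line, out); /* leave what we don't understand untouched */
    }
}
```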
-
- 17 Dec, 2013 7 commits
-
-
antirez authored
The value was otherwise undefined, so the next time the node was promoted again from slave to master, adding a slave to the list of slaves would likely crash the server or result in undefined behavior.
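A sketch with hypothetical field names of the kind of fix described: when a master is reconfigured as a slave, its slave list must be reset, otherwise a later re-promotion would append to a dangling, undefined pointer.

```c
#include <stdlib.h>

typedef struct cluster_node {
    struct cluster_node **slaves;
    int numslaves;
} cluster_node;

void demote_to_slave(cluster_node *n) {
    free(n->slaves);
    n->slaves = NULL; /* previously left undefined */
    n->numslaves = 0;
}
```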
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
Later this should be configurable from the command line, but at least now we use something more appropriate for our use case compared to the redis-rb default timeout.
-
antirez authored
This prevented 32-bit cluster instances from clearing the FAIL flag when needed.
-
antirez authored
The ping sent and pong received fields need to be cast to long long to be printed correctly on 32-bit systems.
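Illustrative only: without the explicit casts, the arguments may not match the %lld specifier on 32-bit builds, producing garbage output.

```c
#include <stdio.h>
#include <stdint.h>

typedef int64_t mstime_t; /* as in Redis */

void print_node_times(mstime_t ping_sent, mstime_t pong_received) {
    printf("ping-sent=%lld pong-received=%lld\n",
           (long long) ping_sent,      /* cast so the argument */
           (long long) pong_received); /* matches %lld portably */
}
```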
-
- 13 Dec, 2013 3 commits
-
-
antirez authored
-
antirez authored
The bug could be easily triggered by: SADD foo a b c 1 2 3 4 5 6, followed by SDIFF foo foo. When the key was the same in two sets, an unsafe iterator was used to check the existence of elements in the same set we were iterating. Usually this would just result in wrong output, however with the dict.c API misuse protection we have in place, the result was actually an assertion failure, triggered by the CI test while creating random datasets for the "MASTER and SLAVE consistency" test.
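The fixed pattern in outline: dict.c provides both unsafe and safe iterators, and the safe variant must be used when lookups may hit the very dict being iterated, as happens with SDIFF foo foo. The function below is an illustrative fragment built on the real dict.c iterator API, not the actual SDIFF code:

```c
#include "dict.h" /* Redis dict.c API */

void scan_possibly_self(dict *set) {
    dictIterator *di = dictGetSafeIterator(set); /* was dictGetIterator() */
    dictEntry *de;
    while ((de = dictNext(di)) != NULL) {
        /* dictFind() against the same dict is legal with a safe iterator */
    }
    dictReleaseIterator(di);
}
```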
-
antirez authored
-
- 12 Dec, 2013 2 commits
- 11 Dec, 2013 2 commits
-
-
antirez authored
When a slave was disconnected from its master, the replication offset was reported as -1. Now it is reported as the replication offset of the previous master, so that a failover can use this value to try to select, out of the slaves of the old master, the one with the most processed data.
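A hypothetical sketch of the reporting rule (names are illustrative): prefer the live master link's offset, fall back to the previous (cached) master's offset, and only report -1 when the instance was never a slave.

```c
#include <stdint.h>

typedef struct master_link { int64_t reploff; } master_link;

int64_t reported_slave_repl_offset(master_link *master,
                                   master_link *cached_master) {
    if (master) return master->reploff;               /* live link */
    if (cached_master) return cached_master->reploff; /* previous master */
    return -1;                                        /* never replicated */
}
```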
-
Yossi Gottlieb authored
-
- 10 Dec, 2013 4 commits
-
-
antirez authored
The previous fix for the false positive timeout detected by the master was not complete. There is another blocking stage while loading data for the first synchronization with the master, that is, flushing away the current data from the DB memory. This commit uses the newly introduced dict.c callback in order to do some incremental work (sending "\n" heartbeats to the master) while flushing the old data from memory. Unfortunately it is hard to write a regression test for this issue. More support for debugging would be needed in the Redis core, in terms of functionality to simulate slow DB loading / deletion.
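A sketch of the wiring, with assumed signatures based on the description above: the flush of the old dataset takes a callback so the slave can do incremental work while deleting; here the work is a newline heartbeat to the master (see the heartbeat sketch under the 09 Dec entry below).

```c
/* Assumed signatures, not necessarily the actual ones. */
long long emptyDb(void (*callback)(void *));
void send_newline_to_master(void); /* hypothetical helper, see 09 Dec below */

void replication_empty_db_callback(void *privdata) {
    (void) privdata;
    send_newline_to_master(); /* heartbeat while old keys are deleted */
}

/* ... in the sync handler, before loading the new RDB payload:
 *     emptyDb(replication_empty_db_callback);
 */
```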
-
antirez authored
The Redis hash table implementation has many non-blocking features, like incremental rehashing, however while deleting a large hash table there was no way to have a callback called to do some incremental work. This commit adds this support, as an optional callback argument to dictEmpty() that is currently called at a fixed interval (once every 65k deletions).
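The shape of the extended API, as described above (the exact prototype is an assumption); the callback may be NULL, and dict.c invokes it periodically, roughly every 65k deletions:

```c
typedef struct dict dict;

void dictEmpty(dict *d, void (*callback)(void *));

void tick(void *privdata) {
    (void) privdata;
    /* incremental work: keepalives, event processing, progress logs */
}

/* Usage sketch: dictEmpty(big_dict, tick);
 * Callers with no incremental work to do simply pass NULL. */
```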
-
antirez authored
-
antirez authored
-
- 09 Dec, 2013 2 commits
-
-
antirez authored
Starting with Redis 2.8, masters are able to detect timed out slaves, while before 2.8 only slaves were able to detect a timed out master. Now that timeout detection is bi-directional, the following problem happens, as described "in the field" by issue #1449: 1) Master and slave are set up with a big dataset. 2) The slave performs the first synchronization, or a full sync after a failed partial resync. 3) The master sends the RDB payload to the slave. 4) The slave loads this payload. 5) The master detects the slave as timed out since it does not receive back the REPLCONF ACK acknowledgements. Here the problem is that the master has no way to know how long the slave will take to load the RDB file in memory. The obvious solution is to use a greater replication timeout setting, but this is a shame since for the 0.1% of operation time we are forced to use a timeout that is not suited for the other 99.9%. This commit tries to fix this problem with a solution that is a bit of a hack, but that modifies little of the replication internals, in order to be back-ported to 2.8 safely. During the RDB loading time, the slave sends newlines to the master to avoid being sensed as timed out. This is the same thing the master already does while saving the RDB file, to still signal its presence to the slave. The single newline is used because: 1) It can't desync the protocol, as it is transmitted all or nothing. 2) It can be safely sent while we don't have a client structure for the master, or in similar situations, just with write(2).
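A sketch of the heartbeat under the stated properties; names and the once-per-second rate limit are assumptions, but the mechanism matches the description: a bare "\n" is all-or-nothing on the wire, so it can be written with write(2) even before a client structure for the master exists.

```c
#include <time.h>
#include <unistd.h>

/* Invoked periodically from the RDB loading progress hook. */
void send_newline_to_master(int master_fd) {
    static time_t last_newline = 0;
    time_t now = time(NULL);
    if (now != last_newline) {          /* at most one "\n" per second */
        last_newline = now;
        if (write(master_fd, "\n", 1) == -1) {
            /* best effort: errors are ignored on purpose */
        }
    }
}
```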
-
antirez authored
-
- 08 Dec, 2013 1 commit
-
-
Yossi Gottlieb authored
-
- 06 Dec, 2013 2 commits
-
-
antirez authored
The way the role change was recorded was not sane and too convoluted, causing the role information to not always be updated. This commit fixes issue #1445.
-
antirez authored
When there is a master address switch, the reported role must be set to master, so that we have a chance to re-sample the INFO output and check if the new address is reporting the right role. Otherwise, if the role was wrong, it would be sensed as wrong even after the address switch, for long enough (according to the role change time) for Sentinel to consider the master SDOWN. This fixes issue #1446, which describes the effects of this bug in practice.
-
- 05 Dec, 2013 2 commits
-
-
antirez authored
-
antirez authored
During the refactoring of blocking operations, commit 82b672f6, a bug was introduced where a milliseconds time is compared to a seconds time, so all the clients always appear to time out if the timeout is set to a non-zero value. Thanks to Jonathan Leibiusky for finding the bug and helping to verify the cause and fix.
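The bug pattern in a nutshell (illustrative, not the actual code): comparing a milliseconds clock against a deadline expressed in seconds makes every non-zero deadline look perpetually expired.

```c
#include <stdint.h>
#include <time.h>

typedef int64_t mstime_t;

int timed_out_buggy(time_t deadline_s, mstime_t now_ms) {
    /* now_ms is ~1000x larger than any seconds deadline, so this is
     * practically always true: every blocked client "times out". */
    return deadline_s != 0 && now_ms > deadline_s;
}

int timed_out_fixed(mstime_t deadline_ms, mstime_t now_ms) {
    return deadline_ms != 0 && now_ms > deadline_ms; /* same unit: correct */
}
```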
-
- 04 Dec, 2013 1 commit
-
-
antirez authored
-
- 03 Dec, 2013 1 commit
-
-
antirez authored
-