- 10 Dec, 2013 4 commits
-
-
antirez authored
The previous fix for the false positive timeout detected by the master was not complete. There is another blocking stage while loading data for the first synchronization with the master: flushing away the current data from memory. This commit uses the newly introduced dict.c callback to do some incremental work (sending "\n" heartbeats to the master) while flushing the old data from memory. Unfortunately it is hard to write a regression test for this issue: the Redis core would need more debugging support, such as facilities to simulate slow DB loading / deletion.
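A minimal sketch of the idea, using illustrative names rather than the exact Redis symbols: the callback handed to the dict.c deletion code writes a bare newline to the master link, at most once per second, so the master keeps seeing the slave as alive while the old dataset is being freed.

```c
#include <time.h>
#include <unistd.h>

/* Assumed for this sketch: file descriptor of the link to the master. */
extern int repl_master_fd;

/* Callback invoked periodically by the dict.c deletion loop (see the
 * dictEmpty() commit below): send at most one "\n" per second so the
 * master does not flag the flushing slave as timed out. */
void emptyDbCallback(void *privdata) {
    static time_t last_ping = 0;
    time_t now = time(NULL);

    (void)privdata;
    if (now != last_ping) {
        last_ping = now;
        if (write(repl_master_fd, "\n", 1) == -1) {
            /* Best effort: replication link errors are handled elsewhere. */
        }
    }
}
```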
-
antirez authored
The Redis hash table implementation has many non-blocking features, like incremental rehashing; however, while deleting a large hash table there was no way to have a callback invoked to do some incremental work. This commit adds that support as an optional callback argument to dictEmpty(), currently called at a fixed interval (once every 65k deletions).
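A simplified sketch of the mechanism, with reduced types rather than the real dict.c structures: while walking and freeing every bucket chain, the deletion loop invokes the caller-supplied callback once every 65536 visited slots, so long deletions can do incremental work.

```c
#include <stdlib.h>

/* Reduced stand-in for a dict.c hash table bucket chain. */
typedef struct entry { struct entry *next; } entry;

/* Free every chain in the table, calling `callback` once every 65536
 * visited buckets ((i & 65535) == 0) if one was supplied. */
void tableEmpty(entry **table, unsigned long size,
                void *privdata, void (*callback)(void *)) {
    for (unsigned long i = 0; i < size; i++) {
        entry *he, *next;

        if (callback && (i & 65535) == 0) callback(privdata);

        for (he = table[i]; he != NULL; he = next) {
            next = he->next;
            /* Key and value would be released here in the real code. */
            free(he);
        }
        table[i] = NULL;
    }
}
```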
-
antirez authored
-
antirez authored
-
- 09 Dec, 2013 2 commits
-
-
antirez authored
Starting with Redis 2.8, masters are able to detect timed out slaves, while before 2.8 only slaves were able to detect a timed out master. Now that timeout detection is bi-directional, the following problem happens, as described "in the field" by issue #1449: 1) Master and slave are set up with a big dataset. 2) The slave performs the first synchronization, or a full sync after a failed partial resync. 3) The master sends the RDB payload to the slave. 4) The slave loads this payload. 5) The master detects the slave as timed out since it does not receive back the REPLCONF ACK acknowledgements. The problem here is that the master has no way to know how long the slave will take to load the RDB file into memory. The obvious solution is a larger replication timeout setting, but this is a shame, since for the 0.1% of operation time we would be forced to use a timeout that is not suited for the other 99.9%. This commit tries to fix the problem with a solution that is a bit of a hack, but that modifies little of the replication internals, so it can be backported to 2.8 safely. During RDB loading the slave sends the master newlines to avoid being sensed as timed out; this is the same thing the master already does while saving the RDB file, to keep signaling its presence to the slave. A single newline is used because: 1) It can't desync the protocol, as it is transmitted all or nothing. 2) It can be safely sent even when we don't have a client structure for the master, or in similar situations, with just write(2).
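The interplay can be sketched as follows, with illustrative field and constant names rather than the exact Redis structures: on the master, a slave is considered timed out when nothing has been received from it for repl_timeout seconds, and any byte read on the link, including a bare newline, refreshes that clock. A slave busy loading the RDB therefore only needs a periodic write(fd, "\n", 1) to stay alive from the master's point of view.

```c
#include <time.h>

/* Per-slave state kept by the master (simplified for this sketch). */
struct slave_link {
    time_t last_interaction;   /* last time any byte arrived from the slave */
};

/* Master-side check run periodically by the replication cron. */
int slaveTimedOut(const struct slave_link *s, time_t now, int repl_timeout) {
    return (now - s->last_interaction) > repl_timeout;
}

/* Called whenever data, even a single "\n", is read from the slave:
 * this is the clock the newline keepalive sent during RDB loading refreshes. */
void onBytesFromSlave(struct slave_link *s, time_t now) {
    s->last_interaction = now;
}
```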
-
antirez authored
-
- 06 Dec, 2013 3 commits
-
-
antirez authored
The way the role change was recorded was not sane and overly convoluted, causing the role information to not always be updated. This commit fixes issue #1445.
-
antirez authored
When there is a master address switch, the reported role must be set to master so that we have a chance to re-sample the INFO output and check whether the new address is reporting the right role. Otherwise, if the role was wrong, it will still be sensed as wrong even after the address switch, and for long enough (according to the role change time) for Sentinel to consider the master SDOWN. This fixes issue #1446, which describes the effects of this bug in practice.
-
antirez authored
-
- 05 Dec, 2013 9 commits
-
-
Salvatore Sanfilippo authored
Grammar fix.
-
Anurag Ramdasan authored
-
Salvatore Sanfilippo authored
fixed typo
-
Anurag Ramdasan authored
-
Salvatore Sanfilippo authored
Fixed grammar: 'usually' to 'usual'
-
Anurag Ramdasan authored
-
antirez authored
-
antirez authored
-
antirez authored
During the refactoring of blocking operations in commit 82b672f6, a bug was introduced where a milliseconds time is compared to a seconds time, so all the clients always appear to time out if the timeout is set to a non-zero value. Thanks to Jonathan Leibiusky for finding the bug and helping to verify the cause and the fix.
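A hedged sketch of the bug class, with illustrative names rather than the actual Redis code, showing one way the described symptom arises when the two units are mixed: a deadline expressed in seconds looks like it is always in the past when compared against a clock in milliseconds, so every blocked client appears expired immediately.

```c
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

/* Current time in milliseconds. */
static long long mstime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return ((long long)tv.tv_sec) * 1000 + tv.tv_usec / 1000;
}

int main(void) {
    /* Deadline registered in seconds: "time out 10 seconds from now". */
    long long deadline_s = time(NULL) + 10;

    /* Buggy check: the deadline (seconds) is compared against a clock in
     * milliseconds, so it always looks expired for any non-zero timeout. */
    printf("buggy:   timed out = %d\n",
           deadline_s != 0 && deadline_s < mstime());

    /* Correct check: both sides expressed in the same unit. */
    long long deadline_ms = mstime() + 10 * 1000;
    printf("correct: timed out = %d\n",
           deadline_ms != 0 && deadline_ms < mstime());
    return 0;
}
```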
-
- 04 Dec, 2013 1 commit
-
-
antirez authored
-
- 03 Dec, 2013 4 commits
- 02 Dec, 2013 3 commits
-
-
antirez authored
See issue #1419.
-
antirez authored
Sentinels are now desynchronized in a better way, by varying the time handler frequency between 10 and 20 HZ. This produces an average desynchronization of 25 milliseconds, which should be large enough compared to network latency to avoid most split-brain conditions during the vote. Now that the clocks are desynchronized, larger random delays when performing operations can easily be achieved in the following way. Take as an example the function that starts the failover: it is called with a frequency between 10 and 20 HZ and starts the failover whenever the conditions are met. By simply adding an extra condition like rand()%4 == 0, we can easily amplify the desynchronization between Sentinel instances. See issue #1419.
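A small sketch of the technique, with illustrative names and constants rather than the exact sentinel.c code: each tick picks a fresh period corresponding to a frequency between 10 and 20 HZ, and vote-sensitive actions are further gated by a cheap random condition.

```c
#include <stdlib.h>

#define SENTINEL_BASE_HZ 10

/* Delay in milliseconds before the next timer tick. Re-randomizing the
 * frequency in [10, 20) HZ on every call desynchronizes the clocks of
 * different Sentinel instances by roughly 25 ms on average. */
int nextTickDelayMs(void) {
    int hz = SENTINEL_BASE_HZ + rand() % SENTINEL_BASE_HZ;  /* 10..19 HZ */
    return 1000 / hz;                                       /* 52..100 ms */
}

/* Amplify the skew for vote-sensitive steps: even when the failover
 * preconditions hold, act only on roughly 1 out of 4 ticks. */
int shouldStartFailoverNow(int conditions_met) {
    return conditions_met && rand() % 4 == 0;
}
```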
-
antirez authored
-
- 29 Nov, 2013 2 commits
- 28 Nov, 2013 3 commits
- 26 Nov, 2013 2 commits
-
-
Salvatore Sanfilippo authored
fix a bug in sentinel.c about pub/sub link
-
huangz1990 authored
-
- 25 Nov, 2013 3 commits
-
-
antirez authored
The result of this one-char bug was pretty serious: if the new master had the same port as the previous master but a different IP address, non-leader Sentinels would not be able to recognize the configuration change. This commit fixes issue #1394. Many thanks to @shanemadden, who reported the bug and helped investigate it.
-
antirez authored
This fixes issue #1395.
-
antirez authored
Fixes issue #1298.
-
- 21 Nov, 2013 4 commits