- 26 May, 2016 1 commit
MOON_CLJ authored
- 27 Jan, 2016 1 commit
antirez authored
1. Bug #3035 is fixed (NULL pointer access). This was happening with the following set of conditions:
   * For some reason one of the Sentinels, let's call it Sentinel_A, changed ID (was reconfigured from scratch), but is at the same address at which it used to be.
   * Sentinel_A performs a failover and/or has a newer configuration compared to another Sentinel, let's call it Sentinel_B.
   * Sentinel_B receives a HELLO message from Sentinel_A where the address and/or ID is mismatched, but it reports a newer configuration for the master they are both monitoring.
2. Sentinels must now have an ID, otherwise they are neither loaded nor persisted in the configuration. This makes it possible to have conflicting Sentinels with the same address, since the master->sentinels dictionary is now indexed by Sentinel ID.
3. The code now detects whether a Sentinel is announcing itself with an IP/port pair already in use by another Sentinel. The old Sentinel that had the same IP/port pair gets its port set to 0, meaning the address is invalid. We may discover the right address later via HELLO messages.
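As a rough illustration of point 3, here is a minimal standalone C sketch, not the actual sentinel.c code: the types and names are hypothetical, but it captures the rule that when a Sentinel announces an IP/port pair already held by a different Sentinel ID, the older entry's port is set to 0 to mark its address invalid until a later HELLO corrects it.

```c
#include <stdio.h>
#include <string.h>

#define MAX_SENTINELS 16

typedef struct {
    char id[41];   /* 40-char hex run ID + NUL */
    char ip[46];
    int port;      /* 0 means "address currently unknown/invalid" */
} sentinelAddr;

/* Invalidate any *other* sentinel that currently claims the same ip/port. */
void invalidateConflictingAddr(sentinelAddr *tab, int n,
                               const char *id, const char *ip, int port) {
    for (int j = 0; j < n; j++) {
        if (strcmp(tab[j].id, id) == 0) continue;       /* same sentinel */
        if (tab[j].port == port && strcmp(tab[j].ip, ip) == 0) {
            tab[j].port = 0;  /* address is stale; HELLO may fix it later */
            printf("sentinel %s: address marked invalid\n", tab[j].id);
        }
    }
}

int main(void) {
    sentinelAddr tab[MAX_SENTINELS] = {
        {"aaaa", "192.168.1.5", 26379},
        {"bbbb", "192.168.1.6", 26379},
    };
    /* Sentinel "cccc" (a re-created Sentinel_A) announces 192.168.1.5:26379. */
    invalidateConflictingAddr(tab, 2, "cccc", "192.168.1.5", 26379);
    return 0;
}
```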
- 12 Jan, 2016 1 commit
Daniel Shih authored
connection to master/slave/sentinel becomes disconnected just after the last PONG and before the next PING.
- 08 Sep, 2015 1 commit
antirez authored
- 29 Jul, 2015 1 commit
antirez authored
Debugging is hard without those when there are problems like the one investigated in issue #2700.
- 27 Jul, 2015 1 commit
antirez authored
- 26 Jul, 2015 5 commits
- 24 Jul, 2015 1 commit
Rogerio Goncalves authored
- 12 Jun, 2015 1 commit
antirez authored
We have a check to rewrite the config properly when a failover is in progress, in order to add the current (already failed over) master as a slave, and to not include the promoted slave itself in the slave list. However there was an issue: the variable with the right address was computed but never used when the code was modified. No tests are available for this feature, for two reasons:

1. The Sentinel unit test currently does not test Sentinel's ability to persist its state at all.
2. The state is very hard to trigger, since it lasts for little time in the context of the testing framework.

However, this feature should be covered by the tests in some way. The bug was found by @badboy using the clang static analyzer.

Effects of the bug on the safety of Sentinel
===

This bug results in severe issues in the following case:

1. A Sentinel is elected leader.
2. During the failover, it persists a wrong config with a known-slave entry listing the master address.
3. The Sentinel crashes and restarts, reading the invalid configuration from disk.
4. It sees that the slave does not obey the logical configuration (it should replicate from the current master), so it sends a SLAVEOF command to the master (since the slave's recorded master address is the master itself), creating a replication loop (an attempt to replicate from itself) which Redis is currently unable to detect.
5. This means that the master is no longer available because of the bug.

However, the lack of availability should be only transient (at least in my tests, though other states may be possible where the problem is not recovered automatically), because:

6. Sentinels treat masters that report to be slaves as failing.
7. A new failover is triggered, and a slave is promoted to master.

Bug lifetime
===

The bug has been there forever: commit 16237d78 actually tried to fix it, but in the wrong way (the computed variable was never used! My fault). So this bug has existed basically since the start of Sentinel. Since the bug is hard to trigger, I remember few reports matching this condition, but at least a few. Also, in automated tests where instances were stopped and restarted multiple times automatically, I remember hitting this issue; however, I was not able to reproduce it, nor to determine with the information I had at the time what was causing it.
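For clarity, here is a simplified standalone sketch of the rewrite rule the fix restores; the types, function names, and the hardcoded quorum are illustrative, not the actual rewriteConfigSentinelOption code. While a failover is in progress, the promoted slave is written as the master address, the old master becomes a known-slave entry (this is the computed address the buggy code never used), and the promoted slave itself is skipped in the slave list.

```c
#include <stdio.h>
#include <string.h>

typedef struct { char ip[46]; int port; } addr;

void rewriteMonitorLines(FILE *out, const char *name,
                         const addr *master, const addr *promoted,
                         const addr *slaves, int numslaves) {
    /* If a slave was promoted, it *is* the master in the rewritten config. */
    const addr *master_addr = promoted ? promoted : master;
    fprintf(out, "sentinel monitor %s %s %d 2\n",
            name, master_addr->ip, master_addr->port);
    /* The old master is listed as a slave of the new one... */
    if (promoted)
        fprintf(out, "sentinel known-slave %s %s %d\n",
                name, master->ip, master->port);
    /* ...and the promoted slave itself must be skipped in the slave list. */
    for (int j = 0; j < numslaves; j++) {
        if (promoted && slaves[j].port == promoted->port &&
            strcmp(slaves[j].ip, promoted->ip) == 0) continue;
        fprintf(out, "sentinel known-slave %s %s %d\n",
                name, slaves[j].ip, slaves[j].port);
    }
}

int main(void) {
    addr master = {"10.0.0.1", 6379};
    addr slaves[] = {{"10.0.0.2", 6379}, {"10.0.0.3", 6379}};
    addr promoted = {"10.0.0.2", 6379};
    rewriteMonitorLines(stdout, "mymaster", &master, &promoted, slaves, 2);
    return 0;
}
```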
- 25 May, 2015 2 commits
- 22 May, 2015 1 commit
antirez authored
This commit adds SENTINEL simulate-failure, which sets specific hooks inside the state machine that will crash Sentinel, for testing purposes.
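A standalone sketch of how such hooks can work: a global flag set by the command, checked at chosen points of the failover state machine. The SENTINEL_SIMFAILURE_* flag names mirror the ones in sentinel.c, but the surrounding scaffolding here is illustrative only.

```c
#include <stdio.h>
#include <stdlib.h>

#define SENTINEL_SIMFAILURE_NONE                  0
#define SENTINEL_SIMFAILURE_CRASH_AFTER_ELECTION  (1<<0)
#define SENTINEL_SIMFAILURE_CRASH_AFTER_PROMOTION (1<<1)

static int simfailure_flags = SENTINEL_SIMFAILURE_NONE;

/* Called from the state machine right after winning the election. */
void checkSimCrashAfterElection(void) {
    if (simfailure_flags & SENTINEL_SIMFAILURE_CRASH_AFTER_ELECTION) {
        printf("Sentinel CRASH because of SENTINEL simulate-failure\n");
        exit(99);
    }
}

int main(void) {
    /* SENTINEL simulate-failure crash-after-election would set this flag. */
    simfailure_flags |= SENTINEL_SIMFAILURE_CRASH_AFTER_ELECTION;
    checkSimCrashAfterElection();   /* exits with status 99 */
    return 0;
}
```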
- 20 May, 2015 1 commit
antirez authored
Trivial omission of the obvious no-match case.
- 18 May, 2015 1 commit
antirez authored
A way for monitoring systems to check that Sentinel is technically able to reach the quorum and failover, using the currently visible Sentinels.
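A toy sketch of the quorum-check idea (standalone C, not the actual SENTINEL CKQUORUM implementation): count the Sentinels currently considered reachable, ourselves included, and compare against both the configured quorum and the majority needed to authorize a failover.

```c
#include <stdio.h>

typedef struct { int reachable; } sentinelPeer;

int checkQuorum(const sentinelPeer *peers, int npeers, int quorum) {
    int usable = 1;                        /* ourselves */
    for (int j = 0; j < npeers; j++)
        if (peers[j].reachable) usable++;
    int voters = npeers + 1;
    int majority = voters / 2 + 1;         /* needed to elect a leader */
    if (usable < quorum)   { printf("NOQUORUM\n");   return 0; }
    if (usable < majority) { printf("NOMAJORITY\n"); return 0; }
    printf("OK %d usable Sentinels\n", usable);
    return 1;
}

int main(void) {
    sentinelPeer peers[] = {{1}, {0}};     /* one reachable, one not */
    checkQuorum(peers, 2, 2);              /* quorum=2 -> OK, 2 usable */
    return 0;
}
```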
- 15 May, 2015 1 commit
antirez authored
- 14 May, 2015 7 commits
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
Otherwise pending command callbacks will fire with a reference that no longer exists.
antirez authored
antirez authored
The PING trigger was improved again by using two fields, instead of a single one, to remember when the last ping was sent:

1. The "active" ping is the time at which we sent the last ping that has still received no reply. However we continue to ping non-replying instances even if they have an old active ping: the link may be disconnected and reconnected in the meantime, so older pings may get lost even over a TCP socket.
2. The "last" ping is the time at which we actually sent the last ping on the wire, and it is used to throttle the amount of pings we send during failures (when no PONG is received).

All in all, the failure detector's effectiveness should be identical, but we avoid flooding instances with pings during failures or when they are slow.
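A standalone sketch of the two-field scheme (the field names mirror sentinel.c; the rest is illustrative): the active ping tracks the oldest unanswered ping for the failure detector, while the last ping tracks what actually went on the wire and drives throttling.

```c
#include <stdio.h>

typedef long long mstime_t;

typedef struct {
    mstime_t act_ping_time;   /* time of last ping still awaiting a reply,
                                 0 if no ping is pending */
    mstime_t last_ping_time;  /* time the last ping was sent on the wire */
    mstime_t last_pong_time;  /* time we received the last PONG */
} pingState;

void sendPing(pingState *st, mstime_t now) {
    if (st->act_ping_time == 0)
        st->act_ping_time = now;   /* start of the "no reply" window */
    st->last_ping_time = now;      /* always refreshed: throttling clock */
    printf("PING sent at %lld\n", now);
}

void receivePong(pingState *st, mstime_t now) {
    st->act_ping_time = 0;         /* no ping pending anymore */
    st->last_pong_time = now;
}

int main(void) {
    pingState st = {0, 0, 0};
    sendPing(&st, 1000);
    sendPing(&st, 2000);           /* act stays 1000, last becomes 2000 */
    receivePong(&st, 2050);
    return 0;
}
```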
- 13 May, 2015 1 commit
antirez authored
- 12 May, 2015 3 commits
antirez authored
It's OK to ping as soon as the ping period has elapsed since we received the last PONG, but it's not good to ping again while a ping is still pending. With this change, when a ping is pending we send a new one only if twice the ping period has elapsed since the still-pending ping was sent.
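The decision rule can be sketched as follows, assuming a fixed ping_period and purely illustrative names:

```c
#include <stdio.h>

typedef long long mstime_t;

/* Return 1 if it is time to send a ping, 0 otherwise. */
int pingTimeToSend(mstime_t now, mstime_t last_pong,
                   mstime_t pending_ping,   /* 0 if none pending */
                   mstime_t ping_period) {
    if (pending_ping == 0)
        return (now - last_pong) >= ping_period;
    /* A ping is pending: throttle to half the normal rate. */
    return (now - pending_ping) >= ping_period * 2;
}

int main(void) {
    mstime_t period = 1000;
    printf("%d\n", pingTimeToSend(1500, 0,   1000, period)); /* 0: wait */
    printf("%d\n", pingTimeToSend(3100, 0,   1000, period)); /* 1: re-ping */
    printf("%d\n", pingTimeToSend(1200, 100, 0,    period)); /* 1: normal */
    return 0;
}
```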
antirez authored
antirez authored
This is useful for debugging and logging activities: given a sentinelRedisInstance object, it returns a C string representing the instance type: master, slave, or sentinel.
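A standalone sketch of such a helper; the flag names mirror the SRI_* flags in sentinel.c, but the values and scaffolding are illustrative.

```c
#include <stdio.h>

#define SRI_MASTER   (1<<0)
#define SRI_SLAVE    (1<<1)
#define SRI_SENTINEL (1<<2)

/* Map instance flags to a fixed C string for logging. */
const char *instanceTypeStr(int flags) {
    if (flags & SRI_MASTER)   return "master";
    if (flags & SRI_SLAVE)    return "slave";
    if (flags & SRI_SENTINEL) return "sentinel";
    return "unknown";
}

int main(void) {
    printf("%s\n", instanceTypeStr(SRI_SLAVE));  /* prints "slave" */
    return 0;
}
```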
- 11 May, 2015 3 commits
antirez authored
therealbill authored
This new command triggers a config flush to save the in-memory config to disk. This is useful when a configuration management system or a package manager wipes out your sentinel config while the process is still running and has not yet been restarted. It can also be useful for scripting a backup, migration, or clone of a running sentinel.
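The observable effect can be pictured with this standalone C toy (the file name, config lines, and flushConfig function are all illustrative, not the sentinel.c implementation): after something like `redis-cli -p 26379 SENTINEL FLUSHCONFIG`, the in-memory view of the configuration is rewritten to disk, recreating the file even if it was wiped.

```c
#include <stdio.h>

/* Pretend in-memory configuration state. */
static const char *config_lines[] = {
    "port 26379",
    "sentinel monitor mymaster 10.0.0.1 6379 2",
};

int flushConfig(const char *path) {
    FILE *fp = fopen(path, "w");   /* recreates the file if it was wiped */
    if (!fp) return -1;
    for (size_t j = 0; j < sizeof(config_lines)/sizeof(*config_lines); j++)
        fprintf(fp, "%s\n", config_lines[j]);
    fclose(fp);
    return 0;
}

int main(void) {
    return flushConfig("sentinel.conf") ? 1 : 0;
}
```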
antirez authored
- 08 May, 2015 4 commits
- 07 May, 2015 1 commit
antirez authored
Since a previous commit made Sentinels persist their unique ID, we no longer need to detect duplicated Sentinels and re-add them. We now remove and re-add (using different events) only when the same Sentinel switches address, without generating a new +sentinel event.
- 06 May, 2015 1 commit
antirez authored
Previously Sentinels always changed their unique ID across restarts, relying on the server.runid field. This was not a good idea, as it forced Sentinel to rely on detection of duplicated Sentinels and a potentially dangerous clean-up and re-add of the rebooted Sentinel instance. Now the ID is generated at the first start and persisted in the configuration file, so a given Sentinel keeps its unique ID forever (unless the configuration is manually deleted or the filesystem is corrupted).
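A standalone sketch of the persistence rule (getRandomHexChars mirrors the real helper's name; everything else, including the config line format, is illustrative): generate a 40-character hex ID only when none was loaded from the config file, then write it back so it survives restarts.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define RUN_ID_SIZE 40

/* Toy stand-in for the real random hex generator. */
void getRandomHexChars(char *p, int len) {
    const char *hex = "0123456789abcdef";
    for (int j = 0; j < len; j++) p[j] = hex[rand() & 0xF];
}

int main(void) {
    char myid[RUN_ID_SIZE + 1] = "";   /* empty: nothing loaded from disk */
    srand((unsigned)time(NULL));
    if (myid[0] == '\0') {             /* first start ever */
        getRandomHexChars(myid, RUN_ID_SIZE);
        myid[RUN_ID_SIZE] = '\0';
        /* ...then persist it, e.g. as a "sentinel myid <id>" config line. */
        printf("sentinel myid %s\n", myid);
    }
    return 0;
}
```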
- 04 May, 2015 1 commit
therealbill authored
Originally, only the +slave event that occurs when a slave is reconfigured during sentinelResetMasterAndChangeAddress triggered a flush of the config to disk. However, newly discovered slaves apparently don't trigger this flush, though they do trigger the +slave event. So if you start up a sentinel, add a master, then add a slave to the master (as a way to reproduce it), you'll see the +slave event issued, but the sentinel config won't be updated with the known-slave entry. This change makes Sentinel flush the config when a new slave is detected in sentinelRefreshInstanceInfo.
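A toy standalone sketch of the shape of the fix (not the sentinelRefreshInstanceInfo code): when INFO parsing reveals a slave we did not know about, create it, emit the +slave event, and also flush the config so the known-slave line reaches disk.

```c
#include <stdio.h>
#include <string.h>

#define MAX_SLAVES 16

typedef struct { char ip[46]; int port; } slaveAddr;

static slaveAddr known[MAX_SLAVES];
static int nknown = 0;

void flushConfig(void) { printf("+flush-config\n"); }

void slaveDiscovered(const char *ip, int port) {
    for (int j = 0; j < nknown; j++)
        if (known[j].port == port && strcmp(known[j].ip, ip) == 0) return;
    if (nknown >= MAX_SLAVES) return;
    snprintf(known[nknown].ip, sizeof(known[nknown].ip), "%s", ip);
    known[nknown].port = port;
    nknown++;
    printf("+slave %s:%d\n", ip, port);   /* event was already emitted... */
    flushConfig();                        /* ...now also persist the state */
}

int main(void) {
    slaveDiscovered("10.0.0.2", 6379);
    slaveDiscovered("10.0.0.2", 6379);    /* duplicate: no event, no flush */
    return 0;
}
```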