1. 26 May, 2016 1 commit
  2. 27 Jan, 2016 1 commit
    • Sentinel: improve handling of known Sentinel instances. · 751b5666
      antirez authored
      1. Bug #3035 is fixed (NULL pointer access). This was happening with the
         following set of conditions:
      
      * For some reason one of the Sentinels, let's call it Sentinel_A, changed its ID (it was reconfigured from scratch), but it is still at the same address at which it used to be.
      
      * Sentinel_A performs a failover and/or has a newer configuration compared to another Sentinel, which we call Sentinel_B.
      
      * Sentinel_B receives a HELLO message from Sentinel_A where the address and/or ID is mismatched, but Sentinel_A is reporting a newer configuration for the master they are both monitoring.
      
      2. Sentinels now must have an ID, otherwise they are neither loaded nor persisted in the configuration. This makes it possible to have conflicting Sentinels with the same address, since the master->sentinels dictionary is now indexed by Sentinel ID.
      
      3. The code now detects if a Sentinel is announcing itself with an IP/port pair already in use by another Sentinel. The old Sentinel that had the same IP/port pair gets its port set to 0, meaning the address is invalid; we may discover the right address later via HELLO messages. A sketch of this invalidation step follows below.
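      
      To illustrate point 3, here is a minimal C sketch of the invalidation
      step (the types and the helper name are simplified assumptions for
      illustration, not the actual sentinel.c code):
      
          #include <string.h>
      
          typedef struct knownSentinel {
              char id[41];  /* 40-char hex Sentinel ID plus NUL. */
              char ip[46];  /* Announced IP (room for IPv6). */
              int port;     /* Announced port; 0 means "address invalid". */
          } knownSentinel;
      
          /* If the announcing Sentinel's IP/port pair is already held by a
           * different known Sentinel, invalidate the old holder's address
           * by setting its port to 0. A later HELLO message may reveal
           * where the old Sentinel really lives now. */
          void invalidateDuplicatedAddress(knownSentinel *tab, int count,
                                           const char *announced_id,
                                           const char *ip, int port)
          {
              for (int j = 0; j < count; j++) {
                  if (strcmp(tab[j].id, announced_id) != 0 &&
                      tab[j].port == port &&
                      strcmp(tab[j].ip, ip) == 0)
                  {
                      tab[j].port = 0;
                  }
              }
          }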
  3. 12 Jan, 2016 1 commit
  4. 08 Sep, 2015 1 commit
  5. 29 Jul, 2015 1 commit
  6. 27 Jul, 2015 1 commit
  7. 26 Jul, 2015 5 commits
  8. 24 Jul, 2015 1 commit
  9. 12 Jun, 2015 1 commit
    • Sentinel: fix bug in config rewriting during failover · 821a9866
      antirez authored
      We have a check to rewrite the config properly when a failover is in
      progress, in order to add the current (already failed over) master as a
      slave, and to avoid including the promoted slave itself in the slave
      list (see the sketch below).
      
      However there was an issue: the variable holding the right address was
      computed but never used when the code was modified, and no tests are
      available for this feature, for two reasons:
      
      1. The Sentinel unit test currently does not test Sentinel's ability to
      persist its state at all.
      2. It is a state that is very hard to trigger, since it lasts for little
      time in the context of the testing framework.
      
      However, this feature should be covered by the tests in some way.
      
      The bug was found by @badboy using the clang static analyzer.
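      
      For reference, this is roughly the rewrite logic the check implements
      (a simplified sketch with invented types and helper names, not the
      literal sentinel.c code; the real code identifies the promoted slave
      by comparing addresses):
      
          #include <stdio.h>
      
          typedef struct { char ip[46]; int port; } addrLike;
      
          typedef struct slaveLike {
              addrLike *addr;
              struct slaveLike *next;
          } slaveLike;
      
          /* While a failover is in progress, the rewritten config must list
           * the old master as a known-slave and must not list the promoted
           * slave itself. The bug: slave_addr was computed as below, but
           * the unmodified ri->addr was written out instead. */
          void emitKnownSlaves(FILE *fp, const char *master_name,
                               addrLike *old_master_addr,
                               slaveLike *slaves, slaveLike *promoted)
          {
              for (slaveLike *ri = slaves; ri != NULL; ri = ri->next) {
                  addrLike *slave_addr = ri->addr;
                  if (ri == promoted) slave_addr = old_master_addr;
                  fprintf(fp, "sentinel known-slave %s %s %d\n",
                          master_name, slave_addr->ip, slave_addr->port);
              }
          }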
      
      Effects of the bug on safety of Sentinel
      ===
      
      This bug results in severe issues in the following case:
      
      1. A Sentinel is elected leader.
      2. During the failover, it persists a wrong config with a known-slave
      entry listing the master address.
      3. The Sentinel crashes and restarts, reading the invalid configuration
      from disk.
      4. It sees that the slave now does not obey the logical configuration
      (it should replicate from the current master), so it sends a SLAVEOF
      command to the master (since the listed slave address is the same as
      the master's), creating a replication loop (an attempt to replicate
      from itself) which Redis is currently unable to detect (a concrete
      example follows below).
      5. This means that the master is no longer available because of the bug.
      
      However, the lack of availability should be only transient (at least
      in my tests, but other states could be possible where the problem is
      not recovered from automatically) because:
      
      6. Sentinels treat masters reporting to be slaves as failing.
      7. A new failover is triggered, and a slave is promoted to master.
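      
      Concretely, the broken rewrite could leave a state like this on disk
      (the addresses are invented for illustration):
      
          sentinel monitor mymaster 192.0.2.10 6379 2
          sentinel known-slave mymaster 192.0.2.10 6379
      
      The known-slave entry points at the master's own address, so after a
      restart the Sentinel instructs that instance to SLAVEOF 192.0.2.10
      6379, i.e. to replicate from itself: the loop described in step 4.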
      
      Bug lifetime
      ===
      
      The bug has been there forever. Commit 16237d78 actually tried to fix
      it, but in the wrong way (the computed variable was never used! My
      fault). So this bug has been there basically since the start of
      Sentinel.
      
      Since the bug is hard to trigger, I can remember only a few reports
      matching this condition. Also, in automated tests where instances were
      stopped and restarted multiple times automatically, I remember hitting
      this issue; however, I was not able to reproduce it, nor to determine,
      with the information I had at the time, what was causing it.
  10. 25 May, 2015 2 commits
  11. 22 May, 2015 1 commit
  12. 20 May, 2015 1 commit
  13. 18 May, 2015 1 commit
    • Sentinel: SENTINEL CKQUORUM command · abc65e89
      antirez authored
      A way for monitoring systems to check that Sentinel is technically able
      to reach the quorum and perform a failover, using the currently visible
      Sentinels.
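      
      Example usage (the reply shown is the success case; exact wording may
      vary across versions):
      
          $ redis-cli -p 26379 SENTINEL CKQUORUM mymaster
          OK 3 usable Sentinels. Quorum and failover authorization can be reached
      
      When not enough Sentinels are usable, an error reply is returned
      instead, telling whether it is the quorum and/or the majority needed
      to authorize a failover that cannot be reached.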
  14. 15 May, 2015 1 commit
  15. 14 May, 2015 7 commits
  16. 13 May, 2015 1 commit
  17. 12 May, 2015 3 commits
  18. 11 May, 2015 3 commits
  19. 08 May, 2015 4 commits
  20. 07 May, 2015 1 commit
    • Sentinel: don't detect duplicated Sentinels, just address switch · a0cd75cd
      antirez authored
      Since with a previous commit Sentinels now persist their unique ID, we
      no longer need to detect duplicated Sentinels and re-add them. We
      remove and re-add an instance, using different events, only in the case
      of an address switch of the same Sentinel, without generating a new
      +sentinel event. A sketch of the resulting behavior follows below.
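      
      A rough C sketch of the effect (simplified types and names; the real
      sentinel.c still removes and re-adds the instance internally, and
      emits the +sentinel-address-switch event for this case):
      
          #include <string.h>
      
          typedef struct knownSentinel {
              char id[41], ip[46];
              int port;
          } knownSentinel;
      
          /* A HELLO message matched a known Sentinel by its persistent ID
           * but reports a different address: treat it as an address switch
           * of the same instance, not as a new Sentinel. Returns 1 if the
           * address changed (the caller then signals the address switch
           * rather than emitting a new +sentinel event). */
          int switchAddressIfNeeded(knownSentinel *si, const char *ip, int port)
          {
              if (si->port == port && strcmp(si->ip, ip) == 0) return 0;
              strncpy(si->ip, ip, sizeof(si->ip) - 1);
              si->ip[sizeof(si->ip) - 1] = '\0';
              si->port = port;
              return 1;
          }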
  21. 06 May, 2015 1 commit
    • Sentinel: persist its unique ID across restarts. · 794fc4c9
      antirez authored
      Previously, Sentinels always changed their unique ID across restarts,
      relying on the server.runid field. This was not a good idea, and forced
      Sentinel to rely on the detection of duplicated Sentinels and on a
      potentially dangerous clean-up and re-add operation of the Sentinel
      instance that was rebooted.
      
      Now the ID is generated at the first start and persisted in the
      configuration file, so that a given Sentinel will keep its unique ID
      forever (unless the configuration is manually deleted or there is
      filesystem corruption).
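      
      A minimal sketch of the idea (standalone C for illustration; real
      Sentinel uses a stronger random source and writes the line into its
      own config file, as a "sentinel myid <id>" entry in current versions):
      
          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>
      
          /* Generate a 40-char hex ID once, at first start, and persist it
           * so the Sentinel keeps the same identity across restarts. */
          int main(void) {
              static const char hex[] = "0123456789abcdef";
              char id[41];
              int j;
      
              srand((unsigned) time(NULL)); /* Illustration only: weak seed. */
              for (j = 0; j < 40; j++) id[j] = hex[rand() & 0x0f];
              id[40] = '\0';
              printf("sentinel myid %s\n", id);
              return 0;
          }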
  22. 04 May, 2015 1 commit
    • Making sentinel flush config on +slave · cc799d25
      therealbill authored
      Originally, only the +slave event which occurs when a slave is
      reconfigured during sentinelResetMasterAndChangeAddress triggered a
      flush of the config to disk. However, newly discovered slaves
      apparently don't trigger this flush, even though they do trigger the
      +slave event.
      
      So if you start up a Sentinel, add a master, then add a slave to the
      master (as a way to reproduce it), you'll see the +slave event issued,
      but the Sentinel config won't be updated with the known-slave entry.
      
      This change makes Sentinel flush the config to disk if a new slave is
      detected in sentinelRefreshInstanceInfo, as sketched below.
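      
      The shape of the fix, approximately (an excerpt-style sketch of the
      relevant spot in sentinelRefreshInstanceInfo, not the literal diff):
      
          /* INFO output from the master revealed a slave we did not know
           * about yet: create it and, with this change, also flush the
           * config so the known-slave entry is persisted right away. */
          if ((slave = createSentinelRedisInstance(NULL, SRI_SLAVE, sdsnew(ip),
                          atoi(port), ri->quorum, ri)) != NULL)
          {
              sentinelEvent(REDIS_NOTICE, "+slave", slave, "%@");
              sentinelFlushConfig();  /* The fix. */
          }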