  1. 22 Dec, 2013 3 commits
    • Make new masters inherit replication offsets. · 94e8c9e7
      antirez authored
      Currently replication offsets could be used in a limited way to
      understand, out of a set of slaves, which one has the most
      up-to-date data. For example this comparison is possible if N
      slaves were all replicating with the same master.
      
      However the replication offset was not transferred from master to
      slaves (that are later promoted as masters) in any way, so for
      instance if there were three instances A, B, C, with A master and
      B and C replicating from A, the following could happen:
      
      C disconnects from A.
      B is turned into master.
      A is reconfigured as a slave of B.
      B receives some writes.
      
      In this context there was no way to compare the offsets of A and
      C, because B would use its own local master replication offset to
      initialize the replication with A.
      
      With this commit, when B is turned into master it inherits the
      replication offset from A, making the offsets of A and C
      comparable. In the above case, assuming no inconsistencies are
      created during the disconnection and failover process, A will
      show a replication offset greater than C's.
      
      Note that this does not mean offsets are always enough to pick,
      in a set of instances, the best one to promote: in more complex
      examples the replica with the highest replication offset could be
      partitioned away when the instance to elect as the new master is
      picked. However this in general improves the ability of a system
      to pick a good replica to promote to master.
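      As a rough illustration of the inheritance step, here is a
      minimal C sketch; the function name and the exact fields touched
      are simplified assumptions modeled on the Redis replication
      internals, not the actual patch:

          /* Hypothetical sketch: on promotion, keep counting from the
           * offset reached on the old replication stream, instead of
           * restarting from a fresh local master offset. */
          void replicationPromoteToMaster(void) {
              if (server.master) {
                  /* Inherit the offset reached while replicating, so
                   * offsets stay comparable across the failover. */
                  server.master_repl_offset = server.master->reploff;
              }
              /* ... detach from the old master and start serving
               * slaves using the inherited offset ... */
          }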
    • Slave disconnection is an event worth logging. · ba5eb44d
      antirez authored
    • Redis Cluster: add repl_ping_slave_period to slave data validity time. · 66ec1412
      antirez authored
      When the configured node timeout is very small, the data validity
      time (the maximum data age for a slave to try a failover), which
      is ten times the configured node timeout, is too little when the
      replication link with the master is mostly idle: in that case the
      slave receives data from the master only once every
      server.repl_ping_slave_period seconds, which is what refreshes
      the last interaction time with the master.
      
      This commit adds the slave ping period to the max data validity
      time, to avoid slaves sensing their data as too old without a
      good reason. However this max data validity time is likely a
      setting that should be configurable by the Redis Cluster user,
      completely independently from the node timeout.
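      A minimal sketch of the resulting validity check, using
      simplified names modeled on the cluster failover code (the
      multiplier and field names here are assumptions, not a verbatim
      excerpt):

          /* Hypothetical sketch of the slave data validity check. */
          mstime_t data_age = mstime() - server.master->lastinteraction;
          mstime_t max_age = server.repl_ping_slave_period * 1000 +
                             server.cluster_node_timeout * 10;
          if (data_age > max_age) return; /* Too old: don't fail over. */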
  2. 20 Dec, 2013 6 commits
  3. 19 Dec, 2013 4 commits
  4. 17 Dec, 2013 7 commits
  5. 13 Dec, 2013 3 commits
    • Makefile.dep updated. · 2dfc5e35
      antirez authored
    • SDIFF iterator misuse fixed in diff algorithm #1. · c00453da
      antirez authored
      The bug could be easily triggered by:
      
          SADD foo a b c 1 2 3 4 5 6
          SDIFF foo foo
      
      When the key was the same in the two sets, an unsafe iterator was
      used to check the existence of elements in the very set we were
      iterating. Usually this would just result in wrong output;
      however, with the dict.c API misuse protection we have in place,
      the result was actually an assertion failure, triggered by the CI
      test while creating random datasets for the "MASTER and SLAVE
      consistency" test.
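      The idea of the fix can be sketched as follows (a hedged
      reconstruction, not necessarily the exact patch): when one of the
      sets being subtracted is the very set being iterated, the diff is
      empty by definition, so the lookup into the iterated set can be
      skipped entirely:

          /* Inside diff algorithm #1, for every element "ele" yielded
           * while iterating sets[0]: */
          for (j = 1; j < setnum; j++) {
              if (sets[j] == sets[0]) break; /* Same set: diff is empty. */
              if (setTypeIsMember(sets[j], ele)) break;
          }
          if (j == setnum) {
              /* "ele" is only in sets[0]: it is part of the result. */
          }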
    • Sentinel: dead code removed. · 53201488
      antirez authored
  6. 12 Dec, 2013 2 commits
  7. 11 Dec, 2013 2 commits
  8. 10 Dec, 2013 4 commits
    • Slaves heartbeats during sync improved. · 11120689
      antirez authored
      The previous fix for the false positive timeouts detected by the
      master was not complete. There is another blocking stage while
      loading data for the first synchronization with the master: the
      flushing of the current data from the DB memory.
      
      This commit uses the newly introduced dict.c callback in order to
      do some incremental work (sending "\n" heartbeats to the master)
      while flushing the old data from memory.
      
      Unfortunately it is hard to write a regression test for this
      issue: it would require more debugging support in the Redis core,
      in terms of functionalities to simulate a slow DB loading or
      deletion.
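      A minimal sketch of how such a callback can be wired, with helper
      names that are assumptions modeled on the replication code:

          /* Hypothetical sketch: ping the master with a newline while
           * the blocking flush of the old dataset is in progress. */
          void replicationEmptyDbCallback(void *privdata) {
              (void) privdata; /* Unused. */
              replicationSendNewlineToMaster();
          }

          /* Before loading the master's RDB payload: */
          emptyDb(replicationEmptyDbCallback);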
    • dict.c: added optional callback to dictEmpty(). · 2eb781b3
      antirez authored
      The Redis hash table implementation has many non-blocking
      features like incremental rehashing; however, while deleting a
      large hash table there was no way to have a callback called to do
      some incremental work.
      
      This commit adds this support, as an optional callback argument
      to dictEmpty() that is currently called at a fixed interval (once
      every 65k deletions).
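      In sketch form, the clearing loop can invoke the callback on a
      bucket-count interval; the snippet below is a simplified
      reconstruction using dict.c-style names, not a verbatim excerpt:

          /* Hypothetical sketch of the table clearing loop. */
          for (i = 0; i < ht->size && ht->used > 0; i++) {
              dictEntry *he, *nextHe;

              /* Let the caller do incremental work once every
               * 65536 buckets visited. */
              if (callback && (i & 65535) == 0) callback(d->privdata);

              if ((he = ht->table[i]) == NULL) continue;
              while (he) {
                  nextHe = he->next;
                  dictFreeKey(d, he);
                  dictFreeVal(d, he);
                  zfree(he);
                  ht->used--;
                  he = nextHe;
              }
          }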
    • 2c4ab8a5
    • 7c531eb5
  9. 09 Dec, 2013 2 commits
    • Slaves heartbeat while loading RDB files. · 27db38d0
      antirez authored
      Starting with Redis 2.8 masters are able to detect timed out slaves,
      while before 2.8 only slaves were able to detect a timed out master.
      
      Now that timeout detection is bi-directional, the following
      problem can happen, as described "in the field" by issue #1449:
      
      1) Master and slave setup with big dataset.
      2) Slave performs the first synchronization, or a full sync
         after a failed partial resync.
      3) Master sends the RDB payload to the slave.
      4) Slave loads this payload.
      5) Master detects the slave as timed out, since it does not
         receive back the REPLCONF ACK acknowledgements.
      
      Here the problem is that the master has no way to know how long
      the slave will take to load the RDB file in memory. The obvious
      solution is to use a greater replication timeout setting, but
      this is a shame, since for the 0.1% of operation time we would be
      forced to use a timeout that is not suited for the other 99.9% of
      operation time.
      
      This commit tries to fix this problem with a solution that is a
      bit of a hack, but that modifies little of the replication
      internals, in order to be safely back-ported to 2.8.
      
      During the RDB loading time, the slave sends newlines to the
      master to avoid being sensed as timed out. This is the same thing
      the master already does while saving the RDB file, to signal its
      presence to the slave.
      
      The single newline is used because:
      
      1) It can't desync the protocol, as it is transmitted all or
      nothing.
      2) It can be safely sent even when we don't have a client
      structure for the master, or in similar situations, with just
      write(2).
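      A minimal sketch of the heartbeat helper, with an assumed name, a
      rate limit of one newline per second, and an assumed field for
      the transfer socket (the actual integration point into the RDB
      loading path may differ):

          /* Hypothetical sketch: write a bare "\n" on the replication
           * socket at most once per second, directly with write(2),
           * since no client structure for the master exists yet. */
          void replicationSendNewlineToMaster(void) {
              static time_t newline_sent;
              if (time(NULL) != newline_sent) {
                  newline_sent = time(NULL);
                  if (write(server.repl_transfer_s, "\n", 1) == -1) {
                      /* Pinging back at this stage is best-effort. */
                  }
              }
          }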
    • Handle inline requests terminated with just \n. · eaf1bfb8
      antirez authored
  10. 08 Dec, 2013 1 commit
  11. 06 Dec, 2013 2 commits
    • Sentinel: fix reported role info sampling. · c590549e
      antirez authored
      The way the role change was recorded was not sane and too
      convoluted, causing the role information to not always be
      updated.
      
      This commit fixes issue #1445.
    • Sentinel: fix reported role fields when master is reset. · 2b414a4b
      antirez authored
      When there is a master address switch, the reported role must be set to
      master so that we have a chance to re-sample the INFO output to check if
      the new address is reporting the right role.
      
      Otherwise, if the role was wrong, it would still be sensed as
      wrong even after the address switch, and if it remained wrong for
      long enough, according to the role change time, Sentinel would
      consider the master SDOWN.
      
      This fixes issue #1446, which describes the effects of this bug
      in practice.
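      A sketch of the reset idea, with field names that are assumptions
      modeled on sentinel.c: on an address switch, the instance is
      assumed to be a master again until a fresh INFO sample says
      otherwise.

          /* Hypothetical sketch of the state reset on address switch. */
          ri->role_reported = SRI_MASTER;
          ri->role_reported_time = mstime();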
  12. 05 Dec, 2013 2 commits
  13. 04 Dec, 2013 1 commit
  14. 03 Dec, 2013 1 commit