1. 13 Dec, 2015 1 commit
  2. 11 Dec, 2015 3 commits
  3. 10 Dec, 2015 5 commits
    • Cluster: more reliable migration tests. · 5ad4f7e0
      antirez authored
      The old version was modeled with two failovers; however, after the
      first one it is possible that another slave will migrate to the new
      master, since for some time the new master is not backed by any
      slave. Probably there should be some pause after a failover, before
      the migration. Anyway, the test is simpler this way and depends less
      on timing.
    • Fix merge of cluster migrate-to flag. · 711bf140
      antirez authored
    • Cluster: more reliable replicas migration test. · 6007ea3b
      antirez authored
    • Remove debugging message left there by mistake. · 6d5d8d10
      antirez authored
    • Fix replicas migration by adding a new flag. · 2e43bcff
      antirez authored
      Some time ago I broke replicas migration (reported in #2924).
      The idea was to prevent masters without replicas from getting
      replicas via replica migration. I remember it creating issues with
      tests, but there is no clue in the commit message about why it was
      so undesirable.
      
      However, as a side effect, my patch totally ruined the concept of
      replicas migration, since we want it to work also for instances
      that, technically, never had slaves in the past: promoted slaves.
      
      So now the ability to be targeted by replicas migration is expressed
      by a new flag, "migrate-to". It only applies to masters, and is set
      in the following two cases:
      
      1. When a master gets a slave, it is set.
      2. When a slave turns into a master because of a failover, it is set.
      
      This way the targets of replicas migration are only masters that
      used to have slaves, and slaves that were promoted to masters (and
      whose old masters, obviously, used to have slaves).
      
      The new flag is only internal, and is never exposed in the output
      nor persisted in the nodes configuration, since all the information
      needed to handle it is implicit in the cluster configuration we
      already have.
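      
      A minimal sketch of the two cases, assuming a hypothetical
      CLUSTER_NODE_MIGRATE_TO flag bit and a simplified stand-in node type
      (the real cluster code differs):
      
        #include <stdint.h>
        
        /* Minimal stand-in for the real clusterNode struct (assumption). */
        typedef struct clusterNode {
            uint32_t flags;
            /* ... slots, slaves, addresses omitted ... */
        } clusterNode;
        
        #define CLUSTER_NODE_MIGRATE_TO (1<<10) /* hypothetical flag bit */
        
        /* Case 1: a master gets a slave. */
        void onSlaveAdded(clusterNode *master) {
            master->flags |= CLUSTER_NODE_MIGRATE_TO;
        }
        
        /* Case 2: a slave is promoted to master after a failover. */
        void onSlavePromoted(clusterNode *promoted) {
            promoted->flags |= CLUSTER_NODE_MIGRATE_TO;
        }
        
        /* Replicas migration then considers only flagged masters. */
        int isMigrationTarget(const clusterNode *n) {
            return (n->flags & CLUSTER_NODE_MIGRATE_TO) != 0;
        }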
  4. 06 Dec, 2015 1 commit
  5. 27 Nov, 2015 3 commits
    • Fix renamed define after merge. · 4f7d1e46
      antirez authored
    • Handle wait3() errors. · 3626699f
      antirez authored
      My guess was that wait3() with WNOHANG could never return -1 with an
      error. However, issue #2897 may indicate that this can happen under
      unclear conditions. While we try to understand this better, it is
      safer to handle a return value of -1 explicitly: otherwise, when a
      BGREWRITE is in progress but wait3() returns -1, the first branch of
      the if/else block matches (since server.rdb_child_pid is -1), and
      backgroundSaveDoneHandler() is called without a good reason, which
      will, in turn, crash the Redis server with an assertion.
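      
      A minimal sketch of the defensive branch, approximating the
      serverCron() child-reaping logic (server.* fields and handler
      signatures as in Redis 3.x; treat this as a sketch, not the exact
      code):
      
        int statloc;
        pid_t pid = wait3(&statloc, WNOHANG, NULL);
        
        if (pid != 0) {
            int exitcode = WEXITSTATUS(statloc);
            int bysignal = WIFSIGNALED(statloc) ? WTERMSIG(statloc) : 0;
            
            if (pid == -1) {
                /* New: handle the error explicitly. Without this branch,
                 * pid == server.rdb_child_pid (both are -1 when only a
                 * BGREWRITE is running), so backgroundSaveDoneHandler()
                 * would run for no reason and hit an assertion. */
                serverLog(LL_WARNING, "wait3() returned an error: %s",
                          strerror(errno));
            } else if (pid == server.rdb_child_pid) {
                backgroundSaveDoneHandler(exitcode, bysignal);
            } else if (pid == server.aof_child_pid) {
                backgroundRewriteDoneHandler(exitcode, bysignal);
            }
        }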
  6. 17 Nov, 2015 3 commits
  7. 09 Nov, 2015 1 commit
  8. 27 Oct, 2015 1 commit
  9. 15 Oct, 2015 10 commits
  10. 30 Sep, 2015 1 commit
    • redis-cli pipe mode: don't stay in the write loop forever. · 30978004
      antirez authored
      The code was broken: most of the time it made redis-cli --pipe write
      everything received on the standard input to the Redis connection
      socket without ever reading back the replies, until all the content
      to write had been written.
      
      This meant that Redis had to accumulate all the output in the output
      buffers of the client, consuming a lot of memory.
      
      Fixed thanks to the original report of anomalies in the behavior
      provided by Twitter user @fsaintjacques.
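      
      A minimal self-contained sketch of the fixed pattern, using poll(2)
      to interleave reads and writes instead of staying write-only (names
      and structure are illustrative, not the actual redis-cli code):
      
        #include <poll.h>
        #include <unistd.h>
        
        /* Pump stdin to the server socket, draining replies whenever they
         * are available so the server-side output buffer stays small. */
        void pipeLoop(int sock) {
            char buf[4096];
            for (;;) {
                struct pollfd pfd = { .fd = sock, .events = POLLIN | POLLOUT };
                if (poll(&pfd, 1, -1) <= 0) break;
                
                if (pfd.revents & POLLIN) {
                    ssize_t n = read(sock, buf, sizeof(buf));
                    if (n <= 0) break;
                    /* ... parse and count replies here ... */
                }
                if (pfd.revents & POLLOUT) {
                    ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
                    if (n <= 0) break;  /* EOF on stdin; the real tool keeps
                                         * draining replies until all are
                                         * accounted for. */
                    if (write(sock, buf, (size_t)n) < 0) break;
                }
            }
        }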
  11. 15 Sep, 2015 1 commit
    • Test: fix false positive in HSTRLEN test. · 652e662d
      antirez authored
      The HINCRBY* tests later used the value "tmp", which was sometimes
      also generated by the random key generation function. The result was
      that what Tcl expected to be inside Redis was overwritten with
      another value, causing the next HSTRLEN test to fail.
  12. 14 Sep, 2015 3 commits
    • Test: MOVE expire test improved. · a0ff29bc
      antirez authored
      Related to #2765.
    • MOVE re-add TTL check fixed. · e2c0d896
      antirez authored
      getExpire() returns -1 when no expire exists.
      
      Related to #2765.
    • MOVE now can move TTL metadata as well. · 5b6c7647
      antirez authored
      MOVE was not able to move the TTL: when a key was moved into a
      different database number, it became persistent, as if PERSIST had
      been used.
      
      In some incredible way (I guess almost nobody uses Redis MOVE) this bug
      remained unnoticed inside Redis internals for many years.
      Finally Andy Grunwald discovered it and opened an issue.
      
      This commit fixes the bug and adds a regression test.
      
      Close #2765.
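      
      A minimal sketch of the fixed flow, using the internal db.c helpers
      the commits above refer to (getExpire() returning -1 when no TTL
      exists; refcount handling is omitted for brevity):
      
        /* Sketch: move a key between databases, carrying its TTL. */
        long long expire = getExpire(src, key);   /* -1 means no TTL */
        dbAdd(dst, key, o);
        if (expire != -1) setExpire(dst, key, expire);
        dbDelete(src, key);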
  13. 08 Sep, 2015 4 commits
  14. 07 Sep, 2015 3 commits
    • Fix merge issues in 490847c6. · ce4c1730
      antirez authored
    • Undo slaves state change on failed rdbSaveToSlavesSockets(). · 490847c6
      antirez authored
      As Oran Agra suggested, in startBgsaveForReplication(), when the
      BGSAVE attempt returns an error, we scan the list of slaves in order
      to remove them, since there is no way to serve them currently.
      
      However we check for the replication state BGSAVE_START, which was
      already changed to BGSAVE_END by rdbSaveToSlavesSockets() before
      forking. So when fork() fails, the state of the slaves remains
      BGSAVE_END and no cleanup is performed.
      
      This commit fixes the problem by making rdbSaveToSlavesSockets() able to
      undo the state change on fork failure.
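      
      A minimal sketch of the undo path on fork() failure (constant and
      type names approximate the 3.x replication code):
      
        if ((childpid = fork()) == -1) {
            listIter li;
            listNode *ln;
            
            /* Before forking, waiting slaves were switched from
             * WAIT_BGSAVE_START to WAIT_BGSAVE_END; undo that so the
             * caller's cleanup can still find them in the right state. */
            listRewind(server.slaves, &li);
            while ((ln = listNext(&li))) {
                client *slave = ln->value;
                if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END)
                    slave->replstate = SLAVE_STATE_WAIT_BGSAVE_START;
            }
            return C_ERR;
        }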
    • Sentinel: fix bug in config rewriting during failover · c20218eb
      antirez authored
      We have a check to rewrite the config properly when a failover is in
      progress, in order to add the current (already failed over) master
      as a slave, and to not include the promoted slave itself in the
      slave list.
      
      However there was an issue: the variable with the right address was
      computed but never used when the code was modified, and no tests are
      available for this feature, for two reasons:
      
      1. The Sentinel unit test currently does not test Sentinel's ability
      to persist its state at all.
      2. It is a very hard-to-trigger state, since it lasts for little
      time in the context of the testing framework.
      
      However this feature should be covered by the tests in some way.
      
      The bug was found by @badboy using the clang static analyzer.
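      
      A minimal sketch of the intended rewrite logic (names approximate
      Sentinel's internals; the exact signatures are assumptions):
      
        sentinelAddr *slave_addr = slave->addr;
        
        /* During a failover the promoted slave must not be listed as a
         * known-slave; the old master address is listed in its place. */
        if (sentinelFailoverInProgress(master) &&
            master->promoted_slave &&
            sentinelAddrIsEqual(slave_addr, master->promoted_slave->addr))
        {
            slave_addr = master->addr;  /* computed correctly... */
        }
        
        /* ...but the buggy code then emitted the line with the slave's
         * original address, ignoring slave_addr entirely. The fix is to
         * actually use it: */
        line = sdscatprintf(sdsempty(),
            "sentinel known-slave %s %s %d",
            master->name, slave_addr->ip, slave_addr->port);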
      
      Effects of the bug on safety of Sentinel
      ===
      
      This bug results in severe issues in the following case:
      
      1. A Sentinel is elected leader.
      2. During the failover, it persists a wrong config with a
      known-slave entry listing the master address.
      3. The Sentinel crashes and restarts, reading the invalid
      configuration from disk.
      4. It sees that the slave does not obey the logical configuration
      (it should replicate from the current master), so it sends a SLAVEOF
      command to the master (since the slave address listed is the
      master's own address), creating a replication loop (an attempt to
      replicate from itself) which Redis is currently unable to detect.
      5. This means that the master is no longer available because of the
      bug.
      
      However the lack of availability should be only transient (at least
      in my tests, though other states may be possible where the problem
      does not recover automatically) because:
      
      6. Sentinels treat masters that report being slaves as failing.
      7. A new failover is triggered, and a slave is promoted to master.
      
      Bug lifetime
      ===
      
      The bug has been there forever: commit 16237d78 actually tried to
      fix it, but in the wrong way (the computed variable was never used!
      My fault). So this bug has been there basically since the start of
      Sentinel.
      
      Since the bug is hard to trigger, I remember only a few reports
      matching this condition. Also in automated tests, where instances
      were stopped and restarted multiple times automatically, I remember
      hitting this issue; however I was not able to reproduce it, nor to
      determine, with the information I had at the time, what was causing
      it.