1. 02 Sep, 2019 1 commit
  2. 20 Aug, 2019 2 commits
  3. 10 Feb, 2016 2 commits
  4. 18 Dec, 2015 2 commits
  5. 17 Dec, 2015 1 commit
    • Fix a race that may lead to the active (slave) client being freed. · e0b7388d
      antirez authored
      In issue #2948 a crash was reported in processCommand(). Later Oran Agra
      (@oranagra) traced the bug (in private chat) in the following sequence
      of events:
      
      1. Some maxmemory is set.
      2. The slave is the currently active client and is executing PING or
         REPLCONF or whatever a slave can send to its master.
      3. freeMemoryIfNeeded() is called since maxmemory is set.
      4. flushSlavesOutputBuffers() is called by freeMemoryIfNeeded().
      5. While flushing the slaves' output buffers, a write error could be
         encountered in writeToClient() or sendReplyToClient(), depending
         on the version of Redis. This will trigger freeClient() against
         the currently active client, so a segmentation fault will likely
         happen in processCommand() immediately after the call to
         freeMemoryIfNeeded().
      
      There are different possible fixes:
      
      1. Add a flag to writeToClient() (recent versions of the code base)
         so that we can ignore write errors, and use this flag in
         flushSlavesOutputBuffers(). However this is not simple to do in
         older versions of Redis.
      2. Use freeClientAsync() during write errors. This works but changes
         the current behavior of releasing clients ASAP when possible.
         Normally we write to clients during normal event loop processing,
         in the writable event handler, where there is no active client,
         so no special care is needed.
      3. The fix of this commit: detect that the current client is no
         longer valid. This fix is a bit "ad hoc", but it works across all
         versions and has the advantage of not changing the remaining
         behavior; hopefully it only alters what happens during this race
         condition. (A sketch of the approach follows.)
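      
      A minimal sketch of approach 3 in C, with names modeled on the Redis
      code base (server.current_client is a real global; everything else
      here is simplified and hypothetical):
      
          typedef struct client client;
          struct redisServer { client *current_client; /* ... */ };
          extern struct redisServer server;
      
          void freeClient(client *c) {
              /* Forget the client if it is the one currently being served,
               * so that processCommand() can detect that it was freed. */
              if (server.current_client == c) server.current_client = NULL;
              /* ... release the client resources ... */
          }
      
          /* In processCommand(), right after the maxmemory check:
           * freeMemoryIfNeeded() -> flushSlavesOutputBuffers() may have
           * freed the very client being served, so bail out instead of
           * dereferencing a dangling pointer:
           *
           *     if (server.current_client == NULL) return C_ERR;
           */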
  6. 15 Dec, 2015 1 commit
  7. 14 Dec, 2015 1 commit
  8. 15 Oct, 2015 7 commits
  9. 14 Oct, 2015 1 commit
    • Fix master timeout during handshake · 3e0b34cf
      Kevin McGehee authored
      This change allows a slave to properly time out a dead master during
      the extended asynchronous synchronization state machine.  Now, slaves
      will record their last interaction with the master and apply the
      replication timeout before a response to the PSYNC request is received.
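      
      A rough sketch of the idea, assuming field and helper names close to
      the ones in replication.c (the exact states and constants differ
      across versions):
      
          /* Called from the replication cron, once per second. */
          void checkMasterHandshakeTimeout(void) {
              if (slaveIsInHandshakeState() && /* PSYNC sent, no reply yet */
                  time(NULL) - server.repl_transfer_lastio >
                      server.repl_timeout)
              {
                  /* The master looks dead: tear down the link and retry. */
                  cancelReplicationHandshake();
              }
          }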
  10. 30 Sep, 2015 1 commit
    • redis-cli pipe mode: don't stay in the write loop forever. · 560142e6
      antirez authored
      The code was broken: most of the time, redis-cli --pipe wrote
      everything received on the standard input to the Redis connection
      socket without ever reading back the replies, until all the content
      to write was written.
      
      This means that Redis had to accumulate all the output in the output
      buffers of the client, consuming a lot of memory.
      
      Fixed thanks to the original report of anomalies in the behavior
      provided by Twitter user @fsaintjacques.
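      
      A sketch of the intended loop shape (simplified and hypothetical, not
      the actual redis-cli code): multiplex on the socket with select(2) so
      that replies are drained while input is still being forwarded,
      instead of writing until the input is exhausted first.
      
          #include <sys/select.h>
      
          void pipeMode(int fd) {
              long long sent = 0, received = 0;
              int eof = 0;
              while (!eof || received < sent) {
                  fd_set rfds, wfds;
                  FD_ZERO(&rfds); FD_ZERO(&wfds);
                  FD_SET(fd, &rfds);           /* always ready for replies */
                  if (!eof) FD_SET(fd, &wfds); /* write while input remains */
                  if (select(fd + 1, &rfds, &wfds, NULL, NULL) == -1) break;
                  if (FD_ISSET(fd, &rfds)) {
                      /* ... drain available replies, incrementing
                       * `received` ... */
                  }
                  if (FD_ISSET(fd, &wfds)) {
                      /* ... forward the next chunk of stdin, incrementing
                       * `sent`; on end of input set eof = 1 ... */
                  }
              }
          }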
  11. 15 Sep, 2015 1 commit
    • Test: fix false positive in HSTRLEN test. · 18bdc279
      antirez authored
      The HINCRBY* tests later used the value "tmp", which was sometimes
      also generated by the random key generation function. The result was
      overwriting what Tcl expected to be inside Redis with another value,
      causing the next HSTRLEN test to fail.
  12. 14 Sep, 2015 3 commits
    • Test: MOVE expire test improved. · 88198003
      antirez authored
      Related to #2765.
    • MOVE re-add TTL check fixed. · 5fd2dc31
      antirez authored
      getExpire() returns -1 when no expire exists.
      
      Related to #2765.
    • MOVE now can move TTL metadata as well. · 37843be9
      antirez authored
      MOVE was not able to move the TTL: when a key was moved into a
      different database number, it became persistent, as if PERSIST was
      used.
      
      In some incredible way (I guess almost nobody uses Redis MOVE) this bug
      remained unnoticed inside Redis internals for many years.
      Finally Andy Grunwald discovered it and opened an issue.
      
      This commit fixes the bug and adds a regression test.
      
      Close #2765.
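      
      A sketch of the fix described above (simplified; the real code lives
      in the MOVE implementation in db.c and the signatures vary across
      versions): read the TTL with getExpire(), which returns -1 when no
      expire exists, and re-apply it after adding the key to the
      destination database.
      
          void moveKeyWithTTL(redisDb *src, redisDb *dst,
                              robj *key, robj *val)
          {
              long long expire = getExpire(src, key); /* -1: no TTL set */
              dbAdd(dst, key, val);
              if (expire != -1)
                  setExpire(dst, key, expire);        /* preserve the TTL */
              dbDelete(src, key);
          }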
  13. 08 Sep, 2015 4 commits
  14. 07 Sep, 2015 5 commits
    • Fix merge issues in 490847c6. · 7bac7d37
      antirez authored
    • Undo slaves state change on failed rdbSaveToSlavesSockets(). · 9155cdc3
      antirez authored
      As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
      attempt returns an error, we scan the list of slaves in order to remove
      them since there is no way to serve them currently.
      
      However we check for the replication state BGSAVE_START, which was
      modified by rdbSaveToSlavesSockets() before forking. So when fork()
      fails, the state of the slaves remains BGSAVE_END and no cleanup is
      performed.
      
      This commit fixes the problem by making rdbSaveToSlavesSockets() able to
      undo the state change on fork failure.
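      
      A sketch of the undo logic (heavily simplified; the real code records
      the ids of the slaves it promoted and reverts exactly those):
      
          int rdbSaveToSlavesSockets(void) {
              listIter li; listNode *ln;
      
              /* Optimistically promote waiting slaves before forking. */
              listRewind(server.slaves, &li);
              while ((ln = listNext(&li))) {
                  client *slave = ln->value;
                  if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_START)
                      replicationSetupSlaveForFullResync(slave,
                          getPsyncInitialOffset());
              }
      
              if (fork() == -1) {
                  /* Undo the state change so the caller, which cleans up
                   * slaves in WAIT_BGSAVE_START, can actually find them. */
                  listRewind(server.slaves, &li);
                  while ((ln = listNext(&li))) {
                      client *slave = ln->value;
                      if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END)
                          slave->replstate = SLAVE_STATE_WAIT_BGSAVE_START;
                  }
                  return C_ERR;
              }
              /* ... the child streams the RDB to the slave sockets ... */
              return C_OK;
          }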
    • Sentinel: fix bug in config rewriting during failover · 8c8a7cde
      antirez authored
      We have a check to rewrite the config properly when a failover is in
      progress, in order to add the current (already failed over) master
      as a slave, and to not include the promoted slave itself in the
      slave list (a sketch of the intended logic follows).
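      
      A sketch of the check described above (hypothetical helper name; the
      real logic lives in the Sentinel config rewriting code): the address
      obtained for the master may already be the promoted slave's address,
      so the known-slave entry for the promoted slave must use the old
      master's address instead.
      
          /* Decide which address to write in the "sentinel known-slave"
           * line for a given slave of a monitored master. */
          sentinelAddr *knownSlaveAddr(sentinelRedisInstance *master,
                                       sentinelRedisInstance *slave,
                                       sentinelAddr *master_addr)
          {
              /* If this slave *is* the promoted one, list the old master
               * as a slave instead. The bug: this value was computed but
               * never used. */
              if (sentinelAddrIsEqual(slave->addr, master_addr))
                  return master->addr;  /* old master, now a slave */
              return slave->addr;
          }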
      
      However there was an issue: the variable with the right address was
      computed but never used when the code was modified, and no tests are
      available for this feature, for two reasons:
      
      1. The Sentinel unit test currently does not test Sentinel's ability
      to persist its state at all.
      2. It is a very hard-to-trigger state, since it lasts for little
      time in the context of the testing framework.
      
      However this feature should be covered in the test in some way.
      
      The bug was found by @badboy using the clang static analyzer.
      
      Effects of the bug on safety of Sentinel
      ===
      
      This bug results in severe issues in the following case:
      
      1. A Sentinel is elected leader.
      2. During the failover, it persists a wrong config with a known-slave
      entry listing the master address.
      3. The Sentinel crashes and restarts, reading invalid configuration from
      disk.
      4. It sees that the slave now does not obey the logical
      configuration (it should replicate from the current master), so it
      sends a SLAVEOF command to the master (since the listed slave
      address is actually the master's address), creating a replication
      loop (an attempt to replicate from itself) which Redis is currently
      unable to detect.
      5. This means that the master is no longer available because of the bug.
      
      However the lack of availability should only be transient (at least
      in my tests, but other states could be possible where the problem is
      not recovered automatically) because:
      
      6. Sentinels treat masters reporting to be slaves as failing.
      7. A new failover is triggered, and a slave is promoted to master.
      
      Bug lifetime
      ===
      
      The bug has been there forever. Commit 16237d78 actually tried to
      fix it, but in the wrong way (the computed variable was never used!
      My fault). So this bug has been there basically since the start of
      Sentinel.
      
      Since the bug is hard to trigger, I remember only a few reports
      matching this condition. Also, in automated tests where instances
      were stopped and restarted multiple times automatically, I remember
      hitting this issue; however I was not able to reproduce it, nor to
      determine, with the information I had at the time, what was causing
      it.
    • c4785461
      antirez authored
    • SCAN iter parsing changed from atoi to chartoull · 3ff2d65f
      ubuntu authored
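      
      The SCAN cursor is an unsigned 64-bit integer, so parsing it with
      atoi() silently truncates large cursors. A minimal sketch of the
      safer parsing (the commit title calls the helper "chartoull"; here
      the standard strtoull() is used):
      
          #include <stdlib.h>
          #include <errno.h>
      
          int parseScanCursor(const char *s, unsigned long long *cursor) {
              char *end;
              errno = 0;
              *cursor = strtoull(s, &end, 10);
              if (errno || end == s || *end != '\0')
                  return -1; /* not a valid cursor */
              return 0;
          }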
  15. 21 Aug, 2015 5 commits
    • Force slaves to resync after unsuccessful PSYNC. · 194b7e21
      antirez authored
      Using chained replication, where C is a slave of B which is in turn
      a slave of A: if B reconnects the replication link with A but
      discovers it is no longer possible to PSYNC, the slaves of B must be
      disconnected and PSYNC must not be allowed, since the new B dataset
      may be completely different after the synchronization with the
      master.
      
      Note that there are various semantic differences in the way this is
      handled now compared to the past. In the past the semantics were:
      
      1. When a slave lost the connection with its master, the chained
      slaves were disconnected ASAP. This is not needed, since after a
      successful PSYNC with the master, the slaves can continue and don't
      need to resync in turn.
      
      2. However, after a failed PSYNC the replication backlog was not
      reset, so a slave was able to PSYNC successfully even if the
      instance had done a full sync with its master, and now contained an
      entirely different data set.
      
      Now, instead, chained slaves are not disconnected when the slave
      loses the connection with its master, but only when it is forced to
      do a full SYNC with its master. This means that if the slave having
      chained slaves does a successful PSYNC, all its slaves can continue
      without trouble.
      
      See issue #2694 for more details.
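      
      A sketch of the new behavior in terms of the real helpers in
      replication.c (call site simplified and hypothetical): once the
      slave is forced into a full SYNC, neither its chained slaves nor its
      backlog can be trusted any longer.
      
          /* Invoked when PSYNC with our master is refused and a full
           * resynchronization begins. */
          void onFullSyncWithMaster(void) {
              disconnectSlaves();       /* chained slaves must resync too */
              freeReplicationBacklog(); /* forbid PSYNC on the old dataset */
          }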
    • flushSlavesOutputBuffers(): details clarified via comments. · 12d2a894
      antirez authored
      Talking with @oranagra, we had to reason a little bit to understand
      whether this function could ever flush the output buffers of the
      wrong slaves: slaves that are in online state but are actually not
      yet ready to receive writes before the first ACK is received from
      them (this happens with diskless replication).
      
      Next time we'll just read this comment.
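      
      A sketch of the condition the new comment documents (field names
      modeled on replication.c; details may differ across versions): an
      ONLINE slave that still has repl_put_online_on_ack set was installed
      by diskless replication and must not be flushed before its first
      ACK.
      
          void flushSlavesOutputBuffers(void) {
              listIter li; listNode *ln;
              listRewind(server.slaves, &li);
              while ((ln = listNext(&li))) {
                  client *slave = ln->value;
                  if (slave->replstate == SLAVE_STATE_ONLINE &&
                      !slave->repl_put_online_on_ack && /* writes are OK */
                      clientHasPendingReplies(slave))
                  {
                      writeToClient(slave->fd, slave, 0); /* sync flush */
                  }
              }
          }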
    • 7a026770
      antirez authored
    • startBgsaveForReplication(): handle waiting slaves state change. · 5630eeb1
      antirez authored
      Before this commit, after triggering a BGSAVE it was up to the
      caller of startBgsaveForReplication() to handle slaves in
      WAIT_BGSAVE_START in order to update them accordingly. However when
      the replication target is the socket, this is not possible, since
      the process of updating the slaves and sending the FULLRESYNC reply
      must be coupled with the process of starting an RDB save (the reason
      is, we need to send the FULLRESYNC reply and spawn a child that will
      start to send RDB data to the slaves ASAP).
      
      This commit moves the responsibility of handling slaves in
      WAIT_BGSAVE_START to startBgsaveForReplication(), so that for both
      diskless and disk-based replication we have the same chain of
      responsibility. In order to accommodate this change, syncCommand()
      also needs to put the client in the slave list ASAP (just after the
      initial checks) and not at the end, so that
      startBgsaveForReplication() can find the new slave already in the
      list.
      
      Another related change is what happens if the BGSAVE fails because of
      fork() or other errors: we now remove the slave from the list of slaves
      and send an error, scheduling the slave connection to be terminated.
      
      As a side effect of this change the following errors found by
      Oran Agra are fixed (thanks!):
      
      1. rdbSaveToSlavesSockets() on a failed fork will get the slaves
      cleaned up; otherwise they remain in a wrong state forever, since we
      set them up for full resync before actually trying to fork.
      
      2. updateSlavesWaitingBgsave() with the replication target set to
      "socket" was broken, since the function changed the slaves' state
      from WAIT_BGSAVE_START to WAIT_BGSAVE_END via
      replicationSetupSlaveForFullResync(), so rdbSaveToSlavesSockets()
      later found no slave in the right state (WAIT_BGSAVE_START) to feed.
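      
      A sketch of the new responsibility split (simplified and
      hypothetical; the real function also chooses between disk and
      diskless targets and handles more details):
      
          int startBgsaveForReplication(int socket_target) {
              listIter li; listNode *ln;
              int retval = socket_target
                  ? rdbSaveToSlavesSockets()
                  : rdbSaveBackground(server.rdb_filename);
      
              if (retval == C_ERR) {
                  /* BGSAVE failed: drop waiting slaves from the list,
                   * reply with an error and schedule their connections
                   * to be closed. */
                  listRewind(server.slaves, &li);
                  while ((ln = listNext(&li))) {
                      client *slave = ln->value;
                      if (slave->replstate ==
                          SLAVE_STATE_WAIT_BGSAVE_START)
                      {
                          addReplyError(slave,
                              "BGSAVE failed, replication can't continue");
                          slave->flags |= CLIENT_CLOSE_AFTER_REPLY;
                          listDelNode(server.slaves, ln);
                      }
                  }
                  return retval;
              }
      
              /* Disk target: promote waiting slaves here by sending the
               * +FULLRESYNC reply. With the socket target this already
               * happened inside rdbSaveToSlavesSockets(), before
               * fork(). */
              if (!socket_target) {
                  listRewind(server.slaves, &li);
                  while ((ln = listNext(&li))) {
                      client *slave = ln->value;
                      if (slave->replstate ==
                          SLAVE_STATE_WAIT_BGSAVE_START)
                          replicationSetupSlaveForFullResync(slave,
                              getPsyncInitialOffset());
                  }
              }
              return retval;
          }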
  16. 07 Aug, 2015 2 commits
  17. 06 Aug, 2015 1 commit