1. 13 Jan, 2016 1 commit
  2. 02 Jan, 2016 2 commits
  3. 29 Dec, 2015 1 commit
  4. 22 Dec, 2015 2 commits
  5. 18 Dec, 2015 3 commits
  6. 17 Dec, 2015 3 commits
    • Cluster: resharding test now checks AOF consistency. · 9b4dd92c
      antirez authored
      It's a key invariant that, when AOF is enabled, a crash-recovery
      event after the cluster reshards leaves all the keys intact, with
      the expected logical content. This check is now part of unit 04.
    • Fix a race that may lead to the active (slave) client being freed. · bb215375
      antirez authored
      In issue #2948 a crash was reported in processCommand(). Later Oran Agra
      (@oranagra) traced the bug (in private chat) to the following sequence
      of events (sketched below):
      
      1. Some maxmemory is set.
      2. The slave is the currently active client and is executing PING or
         REPLCONF or whatever a slave can send to its master.
      3. freeMemoryIfNeeded() is called since maxmemory is set.
      4. flushSlavesOutputBuffers() is called by freeMemoryIfNeeded().
      5. During the slave buffers flush, a write error could be encountered in
         writeToClient() or sendReplyToClient() depending on the version of
         Redis. This will trigger freeClient() against the currently active
         client, so a segmentation fault will likely happen in
         processCommand() immediately after the call to freeMemoryIfNeeded().
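
      Schematically, the chain looks like the condensed C sketch below. It is
      an illustration, not the literal Redis source: processCommand(),
      freeMemoryIfNeeded(), flushSlavesOutputBuffers(), writeToClient(),
      sendReplyToClient() and freeClient() are the real function names,
      everything else is simplified.

          /* Condensed view of the problematic call chain. The active
           * client is a slave that just sent PING/REPLCONF (step 2). */
          int processCommand(client *c) {            /* c == active slave */
              if (server.maxmemory)
                  freeMemoryIfNeeded();              /* step 3 */
              /* If a write error freed the slave inside the call above,
               * 'c' is now dangling: any dereference from here on can
               * segfault, which is the crash reported in issue #2948. */
              /* ... normal command dispatch using 'c' ... */
              return C_OK;
          }

          void freeMemoryIfNeeded(void) {
              flushSlavesOutputBuffers();            /* step 4 */
              /* ... eviction loop ... */
          }

          void flushSlavesOutputBuffers(void) {
              /* Step 5: writeToClient()/sendReplyToClient() can hit a
               * write error on the slave socket and end up calling
               * freeClient() on the very client that processCommand()
               * is still operating on. */
          }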
      
      There are different possible fixes:
      
      1. Add a flag to writeToClient() (recent versions of the code base) so
         that we can ignore the write errors, and use this flag in
         flushSlavesOutputBuffers(). However, this is not simple to do in
         older versions of Redis.
      2. Use freeClientAsync() during write errors. This works but changes the
         current behavior of releasing clients ASAP when possible. Normally
         we write to clients during the normal event loop processing, inside
         the writable event handler, where there is no active client, so no
         special care needs to be taken.
      3. The fix of this commit: detect that the current client is no longer
         valid, as sketched below. This fix is a bit "ad hoc", but it works
         across all the versions and has the advantage of not changing the
         rest of the behavior; hopefully it only alters what happens during
         this race condition.
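
      A minimal sketch of option 3, assuming the recent code base where the
      client being served is tracked in server.current_client; this is a
      reconstruction for illustration, not necessarily the verbatim patch,
      and constant names such as C_ERR, C_OK and CMD_DENYOOM follow the
      recent source.

          /* freeClient() forgets the active client when it is the one
           * being released, so callers can detect what happened: */
          void freeClient(client *c) {
              if (server.current_client == c) server.current_client = NULL;
              /* ... release buffers, unlink from lists, free 'c' ... */
          }

          /* processCommand(), maxmemory path: */
          if (server.maxmemory) {
              int retval = freeMemoryIfNeeded();
              /* freeMemoryIfNeeded() may flush the slave output buffers,
               * which in turn may free the active (slave) client: detect
               * it and abort instead of using a dangling pointer. */
              if (server.current_client == NULL) return C_ERR;
              if (retval == C_ERR && (c->cmd->flags & CMD_DENYOOM)) {
                  addReply(c, shared.oomerr);
                  return C_OK;
              }
          }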
    • Fix processCommand() comment about return value. · 218e522c
      antirez authored
  7. 16 Dec, 2015 6 commits
  8. 15 Dec, 2015 4 commits
  9. 14 Dec, 2015 2 commits
  10. 13 Dec, 2015 1 commit
  11. 11 Dec, 2015 10 commits
  12. 10 Dec, 2015 3 commits
  13. 09 Dec, 2015 2 commits
    • 69897f5f
    • Fix replicas migration by adding a new flag. · e0f22df9
      antirez authored
      Some time ago I broke replicas migration (reported in #2924).
      The idea was to prevent masters without replicas from getting replicas
      because of replica migration; I remember it creating issues with the
      tests, but there is no clue in the commit message about why it was so
      undesirable.
      
      However, as a side effect, my patch totally ruined the concept of
      replicas migration, since we want it to also work for instances that,
      technically, never had slaves in the past: promoted slaves.
      
      So now the ability to be targeted by replicas migration is instead
      expressed by a new flag, "migrate-to". It only applies to masters, and
      is set in the following two cases:
      
      1. When a master gets a slave, it is set.
      2. When a slave turns into a master because of fail over, it is set.
      
      This way the targets of replicas migration are only masters that used
      to have slaves, and slaves of masters (that used to have slaves...
      obviously) that get promoted.
      
      The new flag is only internal, and is never exposed in the output nor
      persisted in the nodes configuration, since all the information needed
      to handle it is implicit in the cluster configuration we already have.
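
      A rough C sketch of the idea follows. clusterNodeAddSlave() and
      clusterFailoverReplaceYourMaster() are real cluster functions; the
      flag name, bit value and the small helper below follow the recent
      naming but are illustrative rather than the exact patch.

          #define CLUSTER_NODE_MIGRATE_TO (1<<8) /* internal, never exposed */

          /* Case 1: a master gains a slave. */
          int clusterNodeAddSlave(clusterNode *master, clusterNode *slave) {
              /* ... link 'slave' into master->slaves ... */
              master->flags |= CLUSTER_NODE_MIGRATE_TO;
              return C_OK;
          }

          /* Case 2: a slave promotes itself to master after a failover. */
          void clusterFailoverReplaceYourMaster(void) {
              /* ... turn myself into a master and claim the slots ... */
              myself->flags |= CLUSTER_NODE_MIGRATE_TO;
          }

          /* Replica migration then only considers masters carrying the
           * flag as possible targets (illustrative helper). */
          int isMigrationTarget(clusterNode *n) {
              return (n->flags & CLUSTER_NODE_MASTER) &&
                     (n->flags & CLUSTER_NODE_MIGRATE_TO);
          }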