  1. 08 Jan, 2016 1 commit
  2. 06 Jan, 2016 1 commit
      Cluster: don't send -ASK to MIGRATE. · 319a4c04
      antirez authored
      For non-existing keys, we don't want to send -ASK redirections to
      MIGRATE: when moving slots from the migrating node to the
      importing node, we just want to ignore keys that are no longer
      there. They may have been expired or deleted between the
      GETKEYSINSLOT call and the MIGRATE call. Otherwise this causes an
      error during migrations with redis-trib (or equivalent cluster
      management tools).
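
      A minimal sketch of the exception described above, roughly as it
      would sit in the redirection logic of getNodeByQuery() in
      cluster.c; the surrounding names (migrating_slot, missing_keys,
      migrating_node) are assumptions for illustration:

         /* While this node holds a slot in MIGRATING state and some of
          * the command's keys are missing, the usual reply is -ASK
          * pointing at the importing node. MIGRATE is exempted so that
          * keys expired or deleted between CLUSTER GETKEYSINSLOT and
          * MIGRATE are simply skipped. */
         if (migrating_slot && missing_keys) {
             if (cmd->proc != migrateCommand) {
                 if (error_code) *error_code = CLUSTER_REDIR_ASK;
                 return migrating_node;    /* redirect normal clients */
             }
             /* MIGRATE falls through and runs locally. */
         }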
  3. 18 Dec, 2015 1 commit
  4. 13 Dec, 2015 3 commits
  5. 11 Dec, 2015 1 commit
      Cluster: replica migration with delay. · bf09e58d
      antirez authored
      We wait a fixed amount of time (currently 5 seconds), much greater
      than the usual node-to-node communication latency in the cluster,
      before migrating. This way, when a failover occurs, before
      detecting the new master as a target for migration, we give its
      natural slaves (the slaves of the failed-over master) time to
      announce they switched to the new master, preventing a useless
      migration operation.
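
      A self-contained sketch of the delay check, assuming a
      hypothetical orphaned_time field that records when the master was
      first seen without working slaves:

         #include <stdint.h>

         typedef int64_t mstime_t;              /* milliseconds */

         /* Stand-in for the real clusterNode: only the field used
          * here; the field name is an assumption. */
         typedef struct clusterNode {
             mstime_t orphaned_time;            /* 0 = not orphaned */
         } clusterNode;

         #define CLUSTER_SLAVE_MIGRATION_DELAY 5000   /* ms */

         /* A master is a valid migration target only after being
          * orphaned for longer than the delay, so the natural slaves
          * of a failed-over master have time to announce that they
          * switched to the new master. */
         static int isMigrationTarget(const clusterNode *n, mstime_t now) {
             return n->orphaned_time != 0 &&
                    (now - n->orphaned_time) > CLUSTER_SLAVE_MIGRATION_DELAY;
         }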
  6. 10 Dec, 2015 3 commits
      Fix merge of cluster migrate-to flag. · 711bf140
      antirez authored
      Remove debugging message left there by mistake. · 6d5d8d10
      antirez authored
      Fix replicas migration by adding a new flag. · 2e43bcff
      antirez authored
      Some time ago I broke replicas migration (reported in #2924).
      The idea was to prevent masters without replicas from getting
      replicas because of replica migration; I remember it created
      issues with tests, but there is no clue in the commit message
      about why it was so undesirable.

      However, as a side effect, my patch totally ruined the concept of
      replicas migration, since we want it to work also for instances
      that, technically, never had slaves in the past: promoted slaves.

      So now the ability to be targeted by replicas migration is
      expressed by a new flag, "migrate-to". It only applies to masters,
      and is set in the following two cases:

      1. When a master gets a slave, it is set.
      2. When a slave turns into a master because of a failover, it is
         set.

      This way replicas migration targets are only masters that used to
      have slaves, and slaves of masters (that used to have slaves...
      obviously) that are promoted.

      The new flag is only internal, and is never exposed in the output
      nor persisted in the nodes configuration, since all the
      information needed to handle it is implicit in the cluster
      configuration we already have.
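
      A sketch of the two places where the flag is set; the bit value
      below is hypothetical, only the two set-points matter:

         /* Hypothetical bit value for the internal flag. */
         #define CLUSTER_NODE_MIGRATE_TO (1 << 8)

         typedef struct clusterNode { int flags; } clusterNode;

         /* Case 1: a master acquires a slave. */
         static void onSlaveAdded(clusterNode *master) {
             master->flags |= CLUSTER_NODE_MIGRATE_TO;
         }

         /* Case 2: a slave is promoted to master by a failover. */
         static void onFailoverPromotion(clusterNode *promoted) {
             promoted->flags |= CLUSTER_NODE_MIGRATE_TO;
         }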
  7. 27 Nov, 2015 2 commits
  8. 17 Jul, 2015 1 commit
  9. 11 Jun, 2015 1 commit
  10. 24 Mar, 2015 1 commit
      Cluster: redirection refactoring + handling of blocked clients. · 3468cd36
      antirez authored
      There was a bug in Redis Cluster caused by clients blocked in a
      blocking list pop operation on keys no longer handled by the
      instance, or when the cluster went down after the client blocked.
      
      A typical situation is:
      
      1) BLPOP <somekey> 0
      2) <somekey> hash slot is resharded to another master.
      
      The client will block forever in this case.
      
      A symmetrical non-cluster-specific bug happens when an instance is
      turned from master to slave. In that case it is more serious,
      since it will desynchronize data between slaves and masters. This
      other bug was discovered as a side effect of thinking about the
      bug explained and fixed in this commit, but will be fixed in a
      separate commit.
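
      A hedged sketch of the handling: the helper names below are
      assumptions for illustration, while keyHashSlot() is the real
      slot-mapping function:

         /* Stand-ins for illustration only. */
         typedef struct client {
             const char *blocked_key;
             int blocked_key_len;
         } client;
         int clusterStateIsOk(void);
         int slotServedByMyself(int slot);
         int keyHashSlot(const char *key, int keylen);
         void replyAndUnblock(client *c, const char *err);

         /* On every slot reconfiguration (and when the cluster turns
          * down), clients blocked on keys this node no longer serves
          * must be unblocked with an error or a redirection, or BLPOP
          * would hang forever on a resharded key. */
         static void unblockMisroutedClient(client *c) {
             if (!clusterStateIsOk()) {
                 replyAndUnblock(c, "-CLUSTERDOWN the cluster is down");
                 return;
             }
             int slot = keyHashSlot(c->blocked_key, c->blocked_key_len);
             if (!slotServedByMyself(slot))
                 replyAndUnblock(c, "-MOVED/-ASK redirection");
         }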
  11. 21 Mar, 2015 5 commits
  12. 20 Mar, 2015 1 commit
      Cluster: better cluster state transition handling. · 62893f5b
      antirez authored
      Before, we relied on the global cluster state to make sure all the
      hash slots are linked to some node when getNodeByQuery() is
      called, so an unbound hash slot was only checked with an
      assertion. However this is fragile: the cluster state is often
      updated in the clusterBeforeSleep() function, and not immediately
      on state change, so it may happen that we process clients with a
      cluster state that is 'ok' but with certain hash slots still set
      to NULL.

      With this commit the condition is also checked in getNodeByQuery()
      and reported with the same -CLUSTERDOWN error code but a slightly
      different error message, so that we have more debugging clues in
      the future.
      
      Root cause of issue #2288.
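
      A sketch of the added check, roughly as it would appear in
      getNodeByQuery(); the error-code constant name is an assumption
      (it matches later Redis versions):

         /* Even when the global state is 'ok', a slot can be
          * transiently unassigned, because the state is refreshed in
          * clusterBeforeSleep() rather than immediately on change.
          * Detect this explicitly instead of asserting. */
         clusterNode *n = server.cluster->slots[slot];
         if (n == NULL) {
             if (error_code) *error_code = CLUSTER_REDIR_DOWN_UNBOUND;
             return NULL;  /* client sees -CLUSTERDOWN with a
                              distinct message */
         }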
  13. 18 Mar, 2015 2 commits
  14. 27 Feb, 2015 1 commit
  15. 26 Feb, 2015 1 commit
      Improvements to PR #2425 · 53659404
      antirez authored
      1. Remove useless "cs" initialization.
      2. Add a "select" var to capture a condition checked multiple times.
      3. Avoid duplication of the same if (!copy) conditional.
      4. Don't increment dirty if copy is given (no deletion is performed),
         otherwise we propagate MIGRATE when not needed.
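
      A sketch of points 3 and 4 combined, with the deletion and the
      dirty increment under a single !copy test; the variable names
      loosely follow the MIGRATE implementation and are otherwise
      assumptions:

         if (!copy) {
             /* Delete the key only when COPY was not given, and count
              * the change only in that case: if nothing is deleted
              * locally, MIGRATE must not be propagated to the
              * replication stream / AOF. */
             dbDelete(c->db, thiskey);
             server.dirty++;
         }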
  16. 25 Feb, 2015 1 commit
  17. 30 Jan, 2015 2 commits
  18. 29 Jan, 2015 5 commits
  19. 22 Jan, 2015 6 commits
      Fix cluster migrate memory leak · ebb07a0b
      Matt Stancliff authored
      Fixes valgrind error:
      48 bytes in 1 blocks are definitely lost in loss record 196 of 373
         at 0x4910D3: je_malloc (jemalloc.c:944)
         by 0x42807D: zmalloc (zmalloc.c:125)
         by 0x41FA0D: dictGetIterator (dict.c:543)
         by 0x41FA48: dictGetSafeIterator (dict.c:555)
         by 0x459B73: clusterHandleSlaveMigration (cluster.c:2776)
         by 0x45BF27: clusterCron (cluster.c:3123)
         by 0x423344: serverCron (redis.c:1239)
         by 0x41D6CD: aeProcessEvents (ae.c:311)
         by 0x41D8EA: aeMain (ae.c:455)
         by 0x41A84B: main (redis.c:3832)
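
      The trace shows an iterator from dictGetSafeIterator() that was
      never released. A sketch of the leak-free shape, using the real
      dict.c API (the early-exit condition is illustrative):

         dictIterator *di = dictGetSafeIterator(server.cluster->nodes);
         dictEntry *de;
         while ((de = dictNext(di)) != NULL) {
             if (doneEarly(de)) {
                 dictReleaseIterator(di);  /* free on every exit path */
                 return;
             }
         }
         dictReleaseIterator(di);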
      Fix potential invalid read past end of array · 98faed3a
      Matt Stancliff authored
      If an array has N elements, we can't read one element further if
      we are already at the last one.
      
      Also, we need to move elements by their storage size in the array,
      not just by individual bytes.
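
      A self-contained sketch of both fixes: stop before reading past
      the end, and scale the move by the element size rather than by
      single bytes:

         #include <string.h>

         /* Remove element i from an array of n pointers. */
         static void removeAt(void **arr, int n, int i) {
             if (i < n - 1)  /* never touch arr[i+1] at the last slot */
                 memmove(&arr[i], &arr[i + 1],
                         (n - 1 - i) * sizeof(arr[0])); /* element size */
         }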
      Fix cluster reset memory leak · 97ffeb7c
      Matt Stancliff authored
      [maybe] Fixes valgrind errors:
      32 bytes in 4 blocks are definitely lost in loss record 107 of 228
         at 0x80EA447: je_malloc (jemalloc.c:944)
         by 0x806E59C: zrealloc (zmalloc.c:125)
         by 0x80A9AFC: clusterSetMaster (cluster.c:801)
         by 0x80AEDC9: clusterCommand (cluster.c:3994)
         by 0x80682A5: call (redis.c:2049)
         by 0x8068A20: processCommand (redis.c:2309)
         by 0x8076497: processInputBuffer (networking.c:1143)
         by 0x8073BAF: readQueryFromClient (networking.c:1208)
         by 0x8060E98: aeProcessEvents (ae.c:412)
         by 0x806123B: aeMain (ae.c:455)
         by 0x806C3DB: main (redis.c:3832)
      
      64 bytes in 8 blocks are definitely lost in loss record 143 of 228
         at 0x80EA447: je_malloc (jemalloc.c:944)
         by 0x806E59C: zrealloc (zmalloc.c:125)
         by 0x80AAB40: clusterProcessPacket (cluster.c:801)
         by 0x80A847F: clusterReadHandler (cluster.c:1975)
         by 0x30000FF: ???
      
      80 bytes in 10 blocks are definitely lost in loss record 148 of 228
         at 0x80EA447: je_malloc (jemalloc.c:944)
         by 0x806E59C: zrealloc (zmalloc.c:125)
         by 0x80AAB40: clusterProcessPacket (cluster.c:801)
         by 0x80A847F: clusterReadHandler (cluster.c:1975)
         by 0x2FFFFFF: ???
      Fix sending uninitialized bytes · 4a36350d
      Matt Stancliff authored
      Fixes valgrind error:
      Syscall param write(buf) points to uninitialised byte(s)
         at 0x514C35D: ??? (syscall-template.S:81)
         by 0x456B81: clusterWriteHandler (cluster.c:1907)
         by 0x41D596: aeProcessEvents (ae.c:416)
         by 0x41D8EA: aeMain (ae.c:455)
         by 0x41A84B: main (redis.c:3832)
       Address 0x5f268e2 is 2,274 bytes inside a block of size 8,192 alloc'd
         at 0x4932D1: je_realloc (jemalloc.c:1297)
         by 0x428185: zrealloc (zmalloc.c:162)
         by 0x4269E0: sdsMakeRoomFor.part.0 (sds.c:142)
         by 0x426CD7: sdscatlen (sds.c:251)
         by 0x4579E7: clusterSendMessage (cluster.c:1995)
         by 0x45805A: clusterSendPing (cluster.c:2140)
         by 0x45BB03: clusterCron (cluster.c:2944)
         by 0x423344: serverCron (redis.c:1239)
         by 0x41D6CD: aeProcessEvents (ae.c:311)
         by 0x41D8EA: aeMain (ae.c:455)
         by 0x41A84B: main (redis.c:3832)
       Uninitialised value was created by a stack allocation
         at 0x457810: nodeUpdateAddressIfNeeded (cluster.c:1236)
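
      The report says a stack buffer created in
      nodeUpdateAddressIfNeeded() reached write() with uninitialised
      bytes. A generic sketch of the usual remedy for this class of
      report (not necessarily the exact change made here):

         #include <string.h>

         #define NET_IP_STR_LEN 46   /* illustrative buffer size */

         void buildAddress(void) {
             char ip[NET_IP_STR_LEN];
             /* Zero the whole buffer first, so the padding bytes that
              * end up copied into an outgoing packet are defined. */
             memset(ip, 0, sizeof(ip));
             /* ... fill ip, copy it into the message, send ... */
         }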
      Cluster: node deletion cleanup / centralization. · 0a3edcbe
      antirez authored
      Cluster: set the slaves->slaveof field to NULL when the master is freed. · 5130c253
      antirez authored
      Related to issue #2289.
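
      A sketch of the cleanup, using the real clusterNode field names
      (slaves, numslaves, slaveof):

         /* When a master node is freed, clear the back-pointer of
          * each of its slaves so no node keeps a dangling reference. */
         for (int j = 0; j < node->numslaves; j++)
             node->slaves[j]->slaveof = NULL;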
  20. 13 Jan, 2015 1 commit
      Cluster: fetch my IP even if msg is not MEET for the first time. · df1a7fc4
      antirez authored
      In order to prevent misconfigured cluster nodes from forcing an IP
      update on other nodes at some point, nodes are required to update
      their own address only on MEET messages. However it does not make
      sense to apply this rule the first time a node is contacted, while
      it still has no IP at all: otherwise we just risk that myself->ip
      remains unassigned if messages are lost, or if the cluster
      creation procedure does not make sure everybody is targeted by at
      least one incoming MEET message.

      Also fix the logging of the IP switch, avoiding the ':-1' tail.
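
      A sketch of the relaxed condition, close to the shape of the real
      check in clusterProcessPacket(); anetSockName() is Redis's wrapper
      around getsockname():

         if (type == CLUSTERMSG_TYPE_MEET || myself->ip[0] == '\0') {
             char ip[NET_IP_STR_LEN];
             /* Ask the kernel for the address of our end of the link,
              * and adopt it if it differs from what we have. */
             if (anetSockName(link->fd, ip, sizeof(ip), NULL) != -1 &&
                 strcmp(ip, myself->ip) != 0)
             {
                 memcpy(myself->ip, ip, sizeof(myself->ip));
             }
         }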