1. 21 Aug, 2015 3 commits
    • 7a026770
    • startBgsaveForReplication(): handle waiting slaves state change. · 5630eeb1
      antirez authored
      Before this commit, after triggering a BGSAVE it was up to the caller of
      startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
      order to update them accordingly. However, when the replication target is
      the socket this is not possible, since the process of updating the
      slaves and sending the FULLRESYNC reply must be coupled with the process
      of starting an RDB save (the reason is, we need to send the FULLRESYNC
      reply and spawn a child that will start to send RDB data to the slaves
      ASAP).
      
      This commit moves the responsibility of handling slaves in
      WAIT_BGSAVE_START to startBgsaveForReplication() so that for both
      diskless and disk-based replication we have the same chain of
      responsibility. In order to accommodate this change, syncCommand() also
      needs to put the client in the slave list ASAP (just after the initial
      checks) and not at the end, so that startBgsaveForReplication() can find
      the new slave already in the list.
      
      Another related change is what happens if the BGSAVE fails because of
      fork() or other errors: we now remove the slave from the list of slaves
      and send an error, scheduling the slave connection to be terminated.
      
      As a side effect of this change the following errors found by
      Oran Agra are fixed (thanks!):
      
      1. rdbSaveToSlavesSockets() on a failed fork now gets the slaves cleaned
      up; otherwise they would remain in a wrong state forever, since we set
      them up for full resync before actually trying to fork.
      
      2. updateSlavesWaitingBgsave() with the replication target set to "socket"
      was broken, since the function changed the slaves' state from
      WAIT_BGSAVE_START to WAIT_BGSAVE_END via
      replicationSetupSlaveForFullResync(), so rdbSaveToSlavesSockets() would
      later find no slave in the right state (WAIT_BGSAVE_START) to feed.
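
      A minimal sketch (not the actual Redis source) of the chain of
      responsibility described above: startBgsaveForReplication() itself walks
      the slaves in WAIT_BGSAVE_START and either sets them up for full resync
      or drops them when the fork fails. The types and helpers here
      (slaveClient, forkRdbSave, setupSlaveForFullResync) are simplified
      stand-ins, not the real server API.

          #include <stdbool.h>

          typedef enum { WAIT_BGSAVE_START, WAIT_BGSAVE_END, CLOSE_ASAP } slaveState;

          typedef struct slaveClient {
              slaveState state;
              struct slaveClient *next;
          } slaveClient;

          /* Pretend to fork an RDB saving child; returns false on fork() error. */
          static bool forkRdbSave(bool to_socket) { (void)to_socket; return true; }

          /* Send +FULLRESYNC and move the slave to WAIT_BGSAVE_END. */
          static void setupSlaveForFullResync(slaveClient *s) { s->state = WAIT_BGSAVE_END; }

          void startBgsaveForReplication(slaveClient *slaves, bool to_socket) {
              bool started = forkRdbSave(to_socket);
              for (slaveClient *s = slaves; s != NULL; s = s->next) {
                  if (s->state != WAIT_BGSAVE_START) continue;
                  if (started) {
                      setupSlaveForFullResync(s);
                  } else {
                      /* BGSAVE failed: the slave gets an error and its
                       * connection is scheduled for termination. */
                      s->state = CLOSE_ASAP;
                  }
              }
          }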
  2. 07 Aug, 2015 2 commits
  3. 06 Aug, 2015 3 commits
    • Fixed issues introduced during last merge. · c9df63c1
      antirez authored
    • ce3a2d08
    • Replication: add REPLCONF CAPA EOF support. · 6974e69f
      antirez authored
      Add the concept of slave capabilities to Redis: the slave now presents
      itself to the Redis master with a set of capabilities in the form:

          REPLCONF capa SOMECAPA capa OTHERCAPA ...

      This has the effect of setting slave->slave_capa with the corresponding
      SLAVE_CAPA macros, which the master can test later to understand whether
      the slave will understand certain formats and protocols of the
      replication process. This makes it much simpler to introduce new
      replication capabilities in the future in a way that doesn't break old
      slaves or masters.
      
      This patch was designed and implemented together with Oran Agra
      (@oranagra).
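
      A rough sketch, not the real replconfCommand(), of how a master could
      parse "REPLCONF capa <name> capa <name> ..." arguments into a bitmask it
      can test later; the flag names and values below are illustrative only.

          #include <stdio.h>
          #include <strings.h>

          #define SLAVE_CAPA_NONE 0
          #define SLAVE_CAPA_EOF  (1 << 0)   /* Understands the streamed RDB EOF marker. */

          int parseSlaveCapabilities(int argc, char **argv) {
              int capa = SLAVE_CAPA_NONE;
              /* Arguments come in pairs: "capa" <capability-name>. */
              for (int i = 0; i + 1 < argc; i += 2) {
                  if (strcasecmp(argv[i], "capa") != 0) continue;
                  if (strcasecmp(argv[i + 1], "eof") == 0) capa |= SLAVE_CAPA_EOF;
                  /* Unknown capabilities are simply ignored, so newer slaves
                   * keep working against older masters and vice versa. */
              }
              return capa;
          }

          int main(void) {
              char *args[] = {"capa", "eof"};
              int capa = parseSlaveCapabilities(2, args);
              printf("slave understands EOF capa: %d\n", (capa & SLAVE_CAPA_EOF) != 0);
              return 0;
          }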
  4. 05 Aug, 2015 5 commits
    • Fix replication slave pings period. · a67d67b5
      antirez authored
      For PINGs we use the period configured by the user, but for the newlines
      sent to slaves waiting for an RDB to be created (including slaves waiting
      for the FULLRESYNC reply) we need to ping with a frequency of one per
      second, since the timeout is fixed and needs to be refreshed.
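
      A small sketch of the two periods described above, assuming a cron that
      ticks once per second; the names (repl_ping_period, slaveState) are
      illustrative, not the actual replicationCron() code.

          #include <stdbool.h>

          typedef enum { SLAVE_ONLINE, SLAVE_WAIT_BGSAVE } slaveState;

          /* Decide whether to send a keepalive to a slave on this cron tick. */
          bool shouldSendKeepalive(slaveState state, long long seconds,
                                   int repl_ping_period) {
              if (state == SLAVE_WAIT_BGSAVE) {
                  /* A newline refreshing the timeout: fixed 1 second period. */
                  return true;
              }
              /* A real PING: sent with the user-configured period. */
              return (seconds % repl_ping_period) == 0;
          }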
    • Make sure we re-emit SELECT after each new slave full sync setup. · 39994c24
      antirez authored
      In previous commits we moved the FULLRESYNC to the moment we start the
      BGSAVE, so that the offset we provide is the right one. However, this
      also means that we need to re-emit the SELECT statement every time a new
      slave starts to accumulate the changes.

      To obtain this effect in a cleaner way, the function that sends the
      FULLRESYNC reply was overloaded with the more important role of also
      doing this and changing the slave state. So it was renamed to
      replicationSetupSlaveForFullResync() to better reflect what it does now.
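
      A simplified sketch of the overloaded helper described above; the struct
      fields and the slaveseldb trick below illustrate the idea and are not
      copied from the Redis source.

          #include <stdio.h>

          #define WAIT_BGSAVE_END 1

          typedef struct {
              int state;               /* WAIT_BGSAVE_START -> WAIT_BGSAVE_END */
              long long psync_offset;  /* Offset reported in +FULLRESYNC. */
          } slaveClient;

          /* Global server state (heavily reduced for the example). */
          struct { int slaveseldb; char runid[41]; } server = { -1, "0123..." };

          void replicationSetupSlaveForFullResync(slaveClient *slave, long long offset) {
              slave->psync_offset = offset;
              slave->state = WAIT_BGSAVE_END;
              /* Force the next propagated write to be prefixed with SELECT:
               * the new slave has not seen any SELECT yet. */
              server.slaveseldb = -1;
              printf("+FULLRESYNC %s %lld\r\n", server.runid, offset);
          }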
    • b2ff48ef
    • syncCommand() comments improved. · e684e726
      antirez authored
    • PSYNC initial offset fix. · 4b010572
      antirez authored
      This commit attempts to fix a bug involving PSYNC and diskless
      replication (currently experimental) found by Yuval Inbar from Redis Labs,
      and that was later found to have even more far reaching effects (the bug
      also exists when diskless replication is off).

      The gist of the bug is that a Redis master replies with +FULLRESYNC to a
      PSYNC attempt that fails and requires a full resynchronization. However,
      the baseline offset sent along with FULLRESYNC was always the current
      master replication offset. This is not ok, because there are many reasons
      that may delay the RDB file creation, and the master offset we
      communicate must be the one of the time the RDB was created. So for
      example:
      
      1) When the BGSAVE for replication is delayed since there is one
         already in progress but it is not good for replication.
      2) When the BGSAVE is not needed, as we attach to one currently ongoing.
      3) When, because of diskless replication, the BGSAVE is delayed.
      
      In all the above cases the PSYNC reply is wrong and the slave may
      reconnect later claiming to need a wrong offset: this may cause
      data corruption later.
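
      A toy illustration of the fix: the offset sent with +FULLRESYNC must be
      the replication offset captured when the BGSAVE that will feed the slave
      started, not whatever the offset happens to be when we finally reply.
      The names below (psync_initial_offset and so on) are illustrative.

          #include <stdio.h>

          static long long master_repl_offset = 0;      /* Grows with every write. */
          static long long psync_initial_offset = -1;   /* Captured at BGSAVE start. */

          void startBgsaveForReplication(void) {
              /* Snapshot the offset that matches the RDB content being produced. */
              psync_initial_offset = master_repl_offset;
          }

          void sendFullResyncReply(void) {
              /* Using master_repl_offset here would be wrong: it may have
               * advanced while the child was saving. */
              printf("+FULLRESYNC <runid> %lld\r\n", psync_initial_offset);
          }

          int main(void) {
              startBgsaveForReplication();
              master_repl_offset += 1000;   /* Writes arrive while the child saves. */
              sendFullResyncReply();        /* Still reports the snapshot offset. */
              return 0;
          }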
  5. 01 Apr, 2015 1 commit
    • fixes to diskless replication. · c72253ec
      Oran Agra authored
      The master was closing the connection if the RDB transfer took a long
      time. It also sent PINGs to the slave before it got the initial ACK, in
      which case the slave wouldn't be able to find the EOF marker.
  6. 24 Mar, 2015 1 commit
    • Replication: disconnect blocked clients when switching to slave role. · 96aa6106
      antirez authored
      A bug as old as Redis and blocking operations. It's hard to trigger since
      it only happens on an instance role switch, but the results are quite bad
      since an inconsistency between master and slave is created.

      How to trigger the bug is a good description of the bug itself.
      
      1. Client does "BLPOP mylist 0" in master.
      2. Master is turned into a slave that replicates from New-Master.
      3. Client does "LPUSH mylist foo" in New-Master.
      4. New-Master propagates the write to the slave.
      5. Slave receives the LPUSH, the blocked client gets served.

      Now the master's "mylist" key has "foo", while the slave's "mylist" key
      is empty.
      
      Highlights:
      
      * At step "2" above, the client remains attached, basically escaping any
        check performed during command dispatch: the read-only slave check, in
        that case.
      * At step "5" the slave (that was the master) serves the blocked client,
        consuming a list element which is not consumed on the master side.
      
      This scenario is technically likely to happen during failovers; however,
      since Redis Sentinel already disconnects clients using the CLIENT command
      when changing the role of the instance, the bug is avoided in Sentinel
      deployments.
      
      Closes #2473.
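
      A minimal sketch of the kind of fix described above: when the instance is
      reconfigured as a slave, every client blocked on a list operation is
      unblocked with an error, so it cannot later consume elements arriving
      from the new master. The types and helper names are simplified stand-ins,
      not the actual server code.

          #include <stdio.h>

          typedef struct client {
              int blocked;            /* Blocked on BLPOP/BRPOP and friends. */
              struct client *next;
          } client;

          static void unblockClientWithError(client *c) {
              c->blocked = 0;
              /* Tell the client its blocking call was aborted. */
              printf("-UNBLOCKED instance state changed (role switched to slave)\r\n");
          }

          /* Called while turning the instance into a slave of another master. */
          void disconnectAllBlockedClients(client *clients) {
              for (client *c = clients; c != NULL; c = c->next) {
                  if (c->blocked) unblockClientWithError(c);
              }
          }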
  7. 03 Dec, 2014 1 commit
    • Network bandwidth tracking + refactoring. · d56ef629
      antirez authored
      Track bandwidth used by clients and replication (but diskless
      replication is not tracked since the actual transfer happens in the
      child process).
      
      This includes a refactoring that makes tracking new instantaneous
      metrics simpler.
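
      A rough sketch of the "instantaneous metric" idea mentioned above:
      periodically sample a monotonically increasing byte counter and keep the
      last few per-interval deltas in a ring, so the instantaneous rate is just
      their average. Names and sizes below are illustrative.

          #define METRIC_SAMPLES 16

          typedef struct {
              long long last_sample;              /* Counter value at last sample. */
              long long samples[METRIC_SAMPLES];  /* Recent per-interval deltas. */
              int idx;
          } instMetric;

          /* Call once per sampling interval with the current counter value. */
          void trackInstantaneousMetric(instMetric *m, long long current_value) {
              m->samples[m->idx] = current_value - m->last_sample;
              m->idx = (m->idx + 1) % METRIC_SAMPLES;
              m->last_sample = current_value;
          }

          long long getInstantaneousMetric(const instMetric *m) {
              long long sum = 0;
              for (int i = 0; i < METRIC_SAMPLES; i++) sum += m->samples[i];
              return sum / METRIC_SAMPLES;
          }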
  8. 11 Nov, 2014 2 commits
    • Diskless SYNC: fix RDB EOF detection. · a5fcf44f
      antirez authored
      RDB EOF detection was relying on the final part of the RDB transfer being
      a magic 40-byte EOF marker. However, as the slave is put online
      immediately, and because of socket timeouts, the replication stream is
      actually contiguous with the RDB file.
      
      This means that to detect the EOF correctly we should either:
      
      1) Scan all the stream searching for the mark. Sucks CPU-wise.
      2) Start to send the replication stream only after an acknowledge.
      3) Implement a proper chunked encoding.
      
      For now solution "2" was picked, so the master does not start sending the
      stream of commands right away in the case of diskless replication. We
      wait for the first REPLCONF ACK command from the slave, which certifies
      that the slave correctly loaded the RDB file and is ready to receive more
      data.
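
      A minimal sketch of option "2" above: with a diskless (socket) RDB
      transfer the master marks the slave online but holds back the command
      stream until the first REPLCONF ACK arrives, so new commands cannot get
      mixed with the 40-byte EOF marker at the end of the RDB. Field names are
      illustrative, not the actual server structures.

          #include <stdbool.h>

          typedef struct {
              bool online;              /* RDB child finished writing to the socket. */
              bool wait_first_ack;      /* Hold the stream until REPLCONF ACK. */
          } slaveClient;

          /* Called when the diskless RDB transfer to this slave completes. */
          void putSlaveOnline(slaveClient *s, bool diskless) {
              s->online = true;
              /* Diskless: defer the stream until the slave acknowledges that it
               * found the EOF marker and loaded the RDB. */
              s->wait_first_ack = diskless;
          }

          /* Decide whether the replication stream can be sent to this slave;
           * ack_received becomes true after the first REPLCONF ACK. */
          bool canStreamCommands(const slaveClient *s, bool ack_received) {
              if (!s->online) return false;
              if (s->wait_first_ack && !ack_received) return false;
              return true;
          }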
  9. 29 Oct, 2014 21 commits
  10. 06 Oct, 2014 1 commit