1. 09 Nov, 2016 1 commit
    • PSYNC2: different improvements to Redis replication. · 2669fb83
      antirez authored
      The gist of the changes is that now partial resynchronizations between
      slaves and masters (without the need of a full resync with RDB transfer
      and so forth) work in a number of cases where it was impossible
      in the past. For instance:
      
      1. When a slave is promoted to master, the slaves of the old master can
      partially resynchronize with the new master.
      
      2. Chained slaves (slaves of slaves) can be moved to replicate to other
      slaves or the master itself, without requiring a full resync.
      
      3. The master itself, after being turned into a slave, is able to
      partially resynchronize with the new master, when it joins replication
      again.
      
      In order to obtain this, the following main changes were made:
      
      * Slaves also keep a replication backlog, not just masters.
      
      * Same stream replication for all the slaves and sub-slaves. The
      replication stream is identical from the top level master to its slaves,
      and is also the same from the slaves to their sub-slaves and so forth.
      This means that if a slave is later promoted to master, it has the
      same replication backlog, and can partially resynchronize with its
      slaves (that were previously slaves of the old master).
      
      * A given replication history is no longer identified by the `runid` of
      a Redis node. There is instead a `replication ID` which changes every
      time the instance has a new history no longer coherent with the past
      one. So, for example, slaves publish the same replication history of
      their master, however when they are turned into masters, they publish
      a new replication ID, but still remember the old ID, so that they are
      able to partially resynchronize with slaves of the old master (up to a
      given offset).
      
      * The replication protocol was slightly modified so that a new extended
      +CONTINUE reply from the master is able to inform the slave of a
      replication ID change (see the sketch after this list).
      
      * REPLCONF CAPA is used in order to notify masters that a slave is able
      to understand the new +CONTINUE reply.
      
      * The RDB file was extended with an auxiliary field that selects a given
      DB after loading in the slave, so that the slave can continue receiving
      the replication stream from the point where it was disconnected, without
      requiring the master to insert "SELECT" statements. This is useful in
      order to guarantee the "same stream" property, because the slave must be
      able to accumulate an identical backlog (a sketch of this is shown at the
      end of this message).
      
      * Slave pings to sub-slaves are now sent in a special form, when the
      top-level master is disconnected, so as not to interfere with the
      replication stream. We just use out-of-band "\n" bytes as in other parts
      of the Redis protocol.
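      
      To make the replication ID and extended +CONTINUE mechanics more
      concrete, here is a rough C sketch; field and function names are
      illustrative assumptions, not the actual Redis code:
      
      #include <string.h>
      
      /* Each instance keeps two replication IDs: the current one, and the
       * previous one that is still valid up to a given offset (so a promoted
       * slave can serve partial resyncs to slaves of the old master). */
      struct repl_state {
          char replid[41];                 /* current 40-char hex ID + NUL */
          char replid2[41];                /* previous ID, kept after a role switch */
          long long second_replid_offset;  /* replid2 is valid up to this offset */
          long long backlog_start_offset;  /* first offset still in the backlog */
          long long backlog_end_offset;    /* last offset in the backlog */
      };
      
      /* Return 1 if a slave asking for (req_id, req_off) can be served with a
       * partial resynchronization from the backlog. */
      int can_continue(const struct repl_state *rs, const char *req_id,
                       long long req_off) {
          int id_ok = strcmp(req_id, rs->replid) == 0 ||
                      (strcmp(req_id, rs->replid2) == 0 &&
                       req_off <= rs->second_replid_offset);
          return id_ok &&
                 req_off >= rs->backlog_start_offset &&
                 req_off <= rs->backlog_end_offset;
      }
      
      /* When this returns 1 and the slave advertised the new capability via
       * REPLCONF CAPA, the master replies "+CONTINUE <current-replid>\r\n",
       * letting the slave switch replication ID without a full resync;
       * otherwise it sends the old bare "+CONTINUE\r\n". */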
      
      An old design document is available here:
      
      https://gist.github.com/antirez/ae068f95c0d084891305
      
      However the implementation is not identical to the description, because
      during the work to implement it, several changes were needed in order
      to make things work well.
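      
      As an illustration of the RDB auxiliary field mentioned in the list
      above, the sketch below shows the idea; the aux key name and the exact
      calls are assumptions for illustration, not necessarily what rdb.c does:
      
      /* Saving side (sketch): record which DB the replication stream had
       * selected when this RDB was produced for the slave. */
      rdbSaveAuxFieldStrInt(rdb, "repl-stream-db", rsi->repl_stream_db);
      
      /* Loading side (sketch, on the slave): point the cached master link at
       * that DB, so the replication stream buffered during the transfer can
       * be applied without the master having to emit an extra SELECT. */
      if (!strcmp(auxkey, "repl-stream-db") && server.master)
          selectDb(server.master, atoi(auxval));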
  2. 06 Oct, 2016 1 commit
    • Module: Ability to get context from IO context. · 152c1b68
      antirez authored
      It was noted by @dvirsky that it is not possible to use string functions
      when writing the AOF file. This is sometimes critical, since the command
      rewriting may need to be built in the context of the AOF callback;
      without access to the context, and given the limited set of types that
      the AOF production functions accept, this can be an issue.
      
      Moreover there are other needs that we can't anticipate regarding the
      ability to use the Redis Modules APIs through the context in order to
      build representations to emit into the AOF / RDB.
      
      Because of this a new API was added that allows the user to get a
      temporary context from the IO context. If obtained, the context is
      automatically released when the RDB / AOF callback returns.
      
      Calling the function to get the context multiple times always returns
      the same one, since it is invalid to have more than a single context.
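      
      A minimal sketch of how a module could use this from its AOF rewrite
      callback; the data type, field and command name here are hypothetical,
      and the API name is assumed to be RedisModule_GetContextFromIO:
      
      #include "redismodule.h"
      
      struct mytype { long long count; };   /* hypothetical module data type */
      
      void MyTypeAofRewrite(RedisModuleIO *aof, RedisModuleString *key, void *value) {
          struct mytype *o = value;
          /* Borrow a context from the IO context: it is released automatically
           * when the callback returns, and repeated calls return the same one. */
          RedisModuleCtx *ctx = RedisModule_GetContextFromIO(aof);
          RedisModuleString *count =
              RedisModule_CreateStringPrintf(ctx, "%lld", o->count);
          RedisModule_EmitAOF(aof, "MYTYPE.SET", "ss", key, count);
      }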
  3. 02 Oct, 2016 1 commit
  4. 19 Sep, 2016 2 commits
  5. 01 Sep, 2016 1 commit
    • Fix rdb.c var types when calling rdbLoadLen(). · 57a0db94
      antirez authored
      Technically as soon as Redis 64 bit gets proper support for loading
      collections and/or DBs with more than 2^32 elements, the 32 bit version
      should be modified in order to check if what we read from rdbLoadLen()
      overflows. This would only apply to huge RDB files created with a 64 bit
      instance and later loaded into a 32 bit instance.
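      
      A sketch of the kind of check the 32 bit build would need (illustrative,
      not actual rdb.c code), assuming rdbLoadLen() returns a uint64_t length:
      
      /* Inside an rdb.c loading function (sketch). */
      uint64_t len = rdbLoadLen(rdb, NULL);
      if (len == RDB_LENERR) return NULL;          /* I/O or format error */
      #if ULONG_MAX == 0xffffffffUL                /* 32 bit build */
      if (len > SIZE_MAX) {
          /* Huge collection written by a 64 bit instance: refuse to load
           * rather than silently truncating the length. */
          rdbExitReportCorruptRDB("length %llu does not fit in 32 bits",
                                  (unsigned long long) len);
      }
      #endif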
  6. 11 Aug, 2016 1 commit
  7. 09 Aug, 2016 2 commits
  8. 21 Jul, 2016 1 commit
    • Avoid simultaneous RDB and AOF child process. · 0a628e51
      antirez authored
      This patch, written in collaboration with Oran Agra (@oranagra), is a companion
      to 780a8b1d. Together the two patches should prevent the AOF and RDB saving
      processes from being spawned at the same time. Previously, the conditions that
      could lead to two saving processes running at the same time were:
      
      1. When AOF is enabled via CONFIG SET and an RDB saving process is
         already active.
      
      2. When the SYNC command decides to start an RDB saving process ASAP in
         order to serve a new slave that cannot partially resynchronize (but
         only if we have a disk target for replication; with diskless
         replication there is no such problem).
      
      Condition "1" is not very severe but "2" can happen often and is
      definitely good at degrading Redis performances in an unexpected way.
      
      The two commits have the effect of always spawning RDB saves for
      replication in replicationCron() instead of attempting to start an RDB
      save synchronously. Moreover, when a BGSAVE or AOF rewrite must be
      performed but cannot start immediately, it is just postponed using flags,
      and the operation is attempted again as soon as possible.
      
      Finally, the BGSAVE command was modified to accept a SCHEDULE option, so
      that if an AOF rewrite is in progress and this option is given, the
      command no longer returns an error, but instead schedules an RDB save
      for when it will be possible to start it.
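      
      A rough sketch of the resulting BGSAVE behavior (simplified with respect
      to the real bgsaveCommand(); 'schedule' is 1 when the SCHEDULE option was
      given, option parsing omitted):
      
      if (server.rdb_child_pid != -1) {
          addReplyError(c, "Background save already in progress");
      } else if (server.aof_child_pid != -1) {
          if (schedule) {
              /* New behavior: remember the request; the server cron will start
               * the save once the AOF rewrite child has finished. */
              server.rdb_bgsave_scheduled = 1;
              addReplyStatus(c, "Background saving scheduled");
          } else {
              addReplyError(c, "An AOF log rewriting in progress: "
                               "can't BGSAVE right now");
          }
      } else if (rdbSaveBackground(server.rdb_filename) == C_OK) {
          addReplyStatus(c, "Background saving started");
      } else {
          addReplyError(c, "Background saving error");
      }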
  9. 01 Jul, 2016 3 commits
  10. 05 Jun, 2016 1 commit
  11. 03 Jun, 2016 1 commit
  12. 01 Jun, 2016 3 commits
  13. 25 Apr, 2016 2 commits
  14. 15 Feb, 2016 1 commit
  15. 01 Oct, 2015 3 commits
  16. 07 Sep, 2015 1 commit
    • Undo slaves state change on failed rdbSaveToSlavesSockets(). · 8e555374
      antirez authored
      As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
      attempt returns an error, we scan the list of slaves in order to remove
      them since there is no way to serve them currently.
      
      However we check for the replication state BGSAVE_START, which was
      modified by rdbSaveToSlavesSockets() before calling fork(). So when the
      fork fails, the state of the slaves remains BGSAVE_END and no cleanup is
      performed.
      
      This commit fixes the problem by making rdbSaveToSlavesSockets() able to
      undo the state change on fork failure.
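      
      In sketch form (the real rdbSaveToSlavesSockets() tracks the affected
      slaves by client id; names below follow the current source tree):
      
      /* Remember which slaves we flip to WAIT_BGSAVE_END, so that a failed
       * fork() can put them back and let startBgsaveForReplication() do its
       * usual cleanup on the error path. */
      list *flipped = listCreate();
      listIter li;
      listNode *ln;
      pid_t childpid;
      
      listRewind(server.slaves, &li);
      while ((ln = listNext(&li))) {
          client *slave = ln->value;
          if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_START) {
              slave->replstate = SLAVE_STATE_WAIT_BGSAVE_END;
              listAddNodeTail(flipped, slave);
          }
      }
      if ((childpid = fork()) == -1) {
          /* Undo the state change: the caller will now see BGSAVE_START again
           * and remove/serve these slaves as appropriate. */
          listRewind(flipped, &li);
          while ((ln = listNext(&li)))
              ((client *)ln->value)->replstate = SLAVE_STATE_WAIT_BGSAVE_START;
      }
      listRelease(flipped);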
  17. 05 Aug, 2015 2 commits
    • Make sure we re-emit SELECT after each new slave full sync setup. · 15de6b10
      antirez authored
      In previous commits we moved the FULLRESYNC to the moment we start the
      BGSAVE, so that the offset we provide is the right one. However this
      also means that we need to re-emit the SELECT statement every time a new
      slave starts to accumulate the changes.
      
      To obtain this effect in a cleaner way, the function that sends the
      FULLRESYNC reply was overloaded with the more important role of also
      doing this and changing the slave state. So it was renamed to
      replicationSetupSlaveForFullResync() to better reflect what it does now.
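      
      Roughly, the renamed helper now does the following (simplified sketch of
      the replication.c code, using current identifier names):
      
      int replicationSetupSlaveForFullResync(client *slave, long long offset) {
          char buf[128];
          int buflen;
      
          slave->psync_initial_offset = offset;
          slave->replstate = SLAVE_STATE_WAIT_BGSAVE_END;
          /* Force a SELECT to be re-emitted in the replication stream, so the
           * newly attached slave starts from a known database. */
          server.slaveseldb = -1;
          /* Slaves that used the old SYNC command must not get this reply. */
          if (!(slave->flags & CLIENT_PRE_PSYNC)) {
              buflen = snprintf(buf, sizeof(buf), "+FULLRESYNC %s %lld\r\n",
                                server.runid, offset);
              if (write(slave->fd, buf, buflen) != buflen) {
                  freeClientAsync(slave);
                  return C_ERR;
              }
          }
          return C_OK;
      }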
  18. 04 Aug, 2015 1 commit
    • PSYNC initial offset fix. · 292fec05
      antirez authored
      This commit attempts to fix a bug involving PSYNC and diskless
      replication (currently experimental) found by Yuval Inbar from Redis
      Labs, a bug that was later found to have even more far reaching effects
      (it also exists when diskless replication is off).
      
      The gist of the bug is that a Redis master replies with +FULLRESYNC to
      a PSYNC attempt that fails and requires a full resynchronization.
      However, the baseline offset sent along with FULLRESYNC was always the
      current master replication offset. This is not ok, because there are
      many reasons that may delay the RDB file creation. And... guess what,
      the master offset we communicate must be the one at the time the RDB
      was created. So for example:
      
      1) When the BGSAVE for replication is delayed because there is already
         one in progress that is not good for replication.
      2) When the BGSAVE is not needed because we attach the slave to one
         currently ongoing.
      3) When, because of diskless replication, the BGSAVE is delayed.
      
      In all the above cases the PSYNC reply is wrong and the slave may
      reconnect later claiming to need a wrong offset: this may cause
      data corruption later.
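      
      The essence of the fix, as a sketch: sample the master offset when the
      RDB for replication actually starts, and use that in +FULLRESYNC instead
      of the offset at PSYNC time (the helper below is hypothetical):
      
      void startBgsaveForReplication(void) {
          /* This, and not the offset at the moment PSYNC was received, is the
           * offset the RDB content will correspond to. */
          long long offset_at_rdb_start = server.master_repl_offset;
      
          if (rdbSaveBackground(server.rdb_filename) == C_OK) {
              /* Hypothetical helper: reply +FULLRESYNC <runid> <offset> to all
               * slaves waiting for this BGSAVE. */
              sendFullresyncToWaitingSlaves(offset_at_rdb_start);
          }
      }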
  19. 27 Jul, 2015 1 commit
  20. 26 Jul, 2015 6 commits
  21. 17 Jul, 2015 1 commit
  22. 03 Feb, 2015 1 commit
  23. 28 Jan, 2015 1 commit
    • Improve RDB error-on-load handling · d8c7db1b
      Matt Stancliff authored
      Previously, if we loaded a corrupt RDB, Redis printed an error report
      with a big "REPORT ON GITHUB" message at the bottom.  But we know
      RDB load failures mean corrupt data, not corrupt code.
      
      Now, when an RDB failure is detected (duplicate keys or unknown data
      types in the file), we run check-rdb against the RDB and then exit.  The
      automatic check-rdb run hopefully gives the user instant feedback
      about what is wrong, instead of providing a mysterious stack
      trace.
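      
      A sketch of the new failure path; the checker entry point named below is
      an assumption based on the redis-check-rdb tool, not necessarily the
      exact function this commit wires in:
      
      /* On corrupt RDB data (duplicate key, unknown object type, ...), inspect
       * the file with the embedded checker and exit, instead of printing the
       * "report a bug" crash report. */
      void rdbCorruptionAbort(const char *reason) {
          serverLog(LL_WARNING,
              "RDB file appears to be corrupt (%s); running RDB check.", reason);
          redis_check_rdb(server.rdb_filename);   /* assumed checker entry point */
          exit(1);
      }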
  24. 21 Jan, 2015 1 commit
  25. 19 Jan, 2015 1 commit
    • Improve RDB type correctness · f7043604
      Matt Stancliff authored
      It's possible for large objects to be larger than 'int', so let's
      upgrade all size counters to ssize_t.
      
      This also fixes the rdbSaveObject serialized-bytes calculation.
      Since entire serializations of data structures can be large,
      we don't want to limit their calculated size to a 32 bit signed max.
      
      This commit widens the object size calculation and
      cascades the change back up to serializedlength printing.
      
      Before:
      127.0.0.1:6379> debug object hihihi
      ... encoding:quicklist serializedlength:-2147483559 ...
      
      After:
      127.0.0.1:6379> debug object hihihi
      ... encoding:quicklist serializedlength:2147483737 ...
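      
      A self-contained sketch of the type change (illustrative, not the real
      rdbSaveObject() body):
      
      #include <sys/types.h>   /* ssize_t */
      #include <stddef.h>
      
      /* Accumulate serialized byte counts in ssize_t instead of int, so large
       * objects no longer wrap into negative values like the one above. */
      ssize_t serialized_length_example(size_t chunks, size_t chunk_bytes) {
          ssize_t nwritten = 0;            /* previously an int */
          for (size_t i = 0; i < chunks; i++)
              nwritten += (ssize_t) chunk_bytes;
          return nwritten;                 /* stays positive past 2^31 on 64 bit */
      }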