1. 04 Aug, 2015 1 commit
    • PSYNC initial offset fix. · 292fec05
      antirez authored
      This commit attempts to fix a bug involving PSYNC and diskless
      replication (currently experimental) found by Yuval Inbar from Redis
      Labs, a bug that was later found to have even more far-reaching effects
      (it also exists when diskless replication is off).
      
      The gist of the bug is that a Redis master replies with +FULLRESYNC to
      a PSYNC attempt that fails and requires a full resynchronization.
      However, the baseline offset sent along with FULLRESYNC was always the
      current master replication offset. This is not ok, because there are
      many reasons that may delay the RDB file creation, and the master
      offset we communicate must be the one of the moment the RDB was
      created. So for example:
      
      1) When the BGSAVE for replication is delayed because there is one
         already in progress, but it is not good for replication.
      2) When the BGSAVE is not needed because we attach to one currently
         in progress.
      3) When because of diskless replication the BGSAVE is delayed.
      
      In all the above cases the PSYNC reply is wrong and the slave may
      reconnect later claiming to need a wrong offset: this may cause
      data corruption later.
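
      A minimal sketch of the fix's intent (identifiers here are
      illustrative, not the actual Redis source names):

        #include <stdio.h>

        /* Stand-ins for the relevant Redis internals. */
        static long long master_repl_offset;             /* grows with every write */
        struct slave { long long psync_initial_offset; };

        /* Called only when the BGSAVE (disk or diskless) serving this slave
         * actually starts: sample the offset *now*, so the value in the
         * +FULLRESYNC reply matches the point in the stream that the RDB
         * snapshot represents. */
        static void full_resync_reply(struct slave *s, const char *runid) {
            s->psync_initial_offset = master_repl_offset;
            printf("+FULLRESYNC %s %lld\r\n", runid, s->psync_initial_offset);
        }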
  2. 27 Jul, 2015 1 commit
  3. 26 Jul, 2015 6 commits
  4. 17 Jul, 2015 1 commit
  5. 03 Feb, 2015 1 commit
  6. 28 Jan, 2015 1 commit
    • Improve RDB error-on-load handling · d8c7db1b
      Matt Stancliff authored
      Previously, if we loaded a corrupt RDB, Redis printed an error report
      with a big "REPORT ON GITHUB" message at the bottom.  But we know
      RDB load failures mean corrupt data, not corrupt code.

      Now when an RDB failure is detected (duplicate keys or unknown data
      types in the file), we run check-rdb against the RDB, then exit.  The
      automatic check-rdb hopefully gives the user instant feedback
      about what is wrong instead of providing a mysterious stack
      trace.
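
      A conceptual sketch of that flow, assuming a redis-check-rdb binary and
      a hypothetical error hook (the actual change presumably runs the
      checker in-process rather than via system()):

        #include <stdio.h>
        #include <stdlib.h>

        /* On a corrupt-RDB condition: point the checker at the same file so
         * the user gets a concrete diagnosis, then exit instead of printing
         * a "report this on GitHub" crash report. */
        static void rdb_load_error(const char *rdb_filename, const char *reason) {
            fprintf(stderr, "Corrupt RDB detected (%s), checking %s\n",
                    reason, rdb_filename);
            char cmd[1024];
            snprintf(cmd, sizeof(cmd), "redis-check-rdb %s", rdb_filename);
            (void) system(cmd);
            exit(1);
        }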
  7. 21 Jan, 2015 1 commit
  8. 19 Jan, 2015 1 commit
    • Improve RDB type correctness · f7043604
      Matt Stancliff authored
      It's possible large objects could be larger than 'int', so let's
      upgrade all size counters to ssize_t.
      
      This also fixes the rdbSaveObject serialized bytes calculation:
      since entire serializations of data structures can be large,
      we don't want to limit their calculated size to a 32 bit signed max.

      This commit widens the object size calculation and
      cascades the change back up to serializedlength printing.
      
      Before:
      127.0.0.1:6379> debug object hihihi
      ... encoding:quicklist serializedlength:-2147483559 ...
      
      After:
      127.0.0.1:6379> debug object hihihi
      ... encoding:quicklist serializedlength:2147483737 ...
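
      The negative value in the "Before" output is plain 32-bit signed
      wraparound; a tiny demonstration (assuming a 64-bit build where
      ssize_t is 64 bits wide):

        #include <stdio.h>
        #include <sys/types.h>   /* ssize_t */

        int main(void) {
            long long real_bytes = 2147483737LL;  /* value from the "After" run */
            int as_int = (int) real_bytes;        /* wraps to -2147483559       */
            ssize_t as_ssize = (ssize_t) real_bytes;
            printf("int: %d  ssize_t: %zd\n", as_int, as_ssize);
            return 0;
        }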
  9. 09 Jan, 2015 1 commit
  10. 08 Jan, 2015 7 commits
    • Typo fixed: fiels -> fields in rdbSaveInfoAuxFields(). · a7722dc3
      antirez authored
      Thx to @badboy.
    • A few more AUX info fields added to RDB. · 4c0e8923
      antirez authored
    • RDB AUX fields support. · 206cd219
      antirez authored
      This commit introduces a new RDB data type called 'aux'. It is used in
      order to insert inside an RDB file key-value pairs that may serve
      different needs, without breaking backward compatibility when new
      information is embedded inside an RDB file. The contract between Redis
      versions is to ignore unknown aux fields when encountered.
      
      Aux fields can be used in order to:
      
      1. Augment the RDB file with info like version of Redis that created the
      RDB file, creation time, used memory while the RDB was created, and so
      forth.
      2. Add state about Redis inside the RDB file that we need to reload
      later: replication offset, previous master run ID, in order to improve
      failover safety and allow partial resynchronization after a slave
      restart.
      3. Anything that we may want to add to RDB files without breaking the
      ability of past versions of Redis to load the file.
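
      An illustrative sketch of what an aux field amounts to in the file,
      with placeholder names and a naive length prefix standing in for the
      real RDB string encoding:

        #include <stdio.h>
        #include <string.h>

        #define OPCODE_AUX 0xFA   /* placeholder value for the 'aux' opcode */

        /* One aux field = opcode byte, then key and value as RDB strings. */
        static int save_aux_field(FILE *rdb, const char *key, const char *val) {
            unsigned char op = OPCODE_AUX;
            size_t klen = strlen(key), vlen = strlen(val);
            if (fwrite(&op, 1, 1, rdb) != 1) return -1;
            if (fwrite(&klen, sizeof(klen), 1, rdb) != 1) return -1;
            if (fwrite(key, 1, klen, rdb) != klen) return -1;
            if (fwrite(&vlen, sizeof(vlen), 1, rdb) != 1) return -1;
            if (fwrite(val, 1, vlen, rdb) != vlen) return -1;
            return 0;
        }

      A writer would then emit pairs such as save_aux_field(f, "redis-ver",
      "3.0.0") (the key name is only an example), and a loader that meets a
      key it does not recognize simply discards the pair, which is what keeps
      the format forward compatible.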
    • 1a30e7de
    • New RDB v7 opcode: RESIZEDB. · e8614a1a
      antirez authored
      The new opcode is a hint about the size of the dataset (number of keys
      and number of expires) we are going to load for a given Redis database
      inside the RDB file. Since hash tables are resized accordingly right
      away, useless rehashing is avoided, speeding up load times
      significantly, on the order of ~20% or more for larger data sets.
      
      Related issue: #1719
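
      A conceptual sketch of how the hint is consumed at load time, with
      dict_expand standing in for the real dict resize call in Redis:

        struct dict;                                   /* opaque stand-in */
        int dict_expand(struct dict *d, unsigned long size);

        /* Grow both hash tables to their final size up front, so loading
         * millions of keys never triggers incremental rehashing. */
        static void on_resizedb_opcode(struct dict *keys, struct dict *expires,
                                       unsigned long db_size,
                                       unsigned long expires_size) {
            dict_expand(keys, db_size);
            dict_expand(expires, expires_size);
        }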
    • Use RDB_LOAD_PLAIN to load quicklists and encoded types. · f699b5e8
      antirez authored
      Before, we needed to create a string object with an embedded SDS, and
      basically duplicate the SDS part into a plain zmalloc() allocation.
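
      A sketch of the idea behind the flag (names, flag values and the
      signature are illustrative, not the rdb.c API):

        #include <stdlib.h>
        #include <string.h>

        #define LOAD_OBJECT 0   /* build a full string object       */
        #define LOAD_PLAIN  1   /* return a bare allocation instead */

        /* With the PLAIN flag the bytes read from the RDB go straight into
         * a plain buffer that ziplist/quicklist code can take ownership of,
         * skipping the intermediate string object and the extra copy. */
        static void *load_string(const unsigned char *src, size_t len, int flags) {
            if (flags == LOAD_PLAIN) {
                void *buf = malloc(len);
                if (buf) memcpy(buf, src, len);
                return buf;
            }
            return NULL;   /* the OBJECT path would build an robj + sds here */
        }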
    • RDB refactored to load plain strings from RDB. · 68bc02c3
      antirez authored
  11. 02 Jan, 2015 5 commits
    • Config: Add quicklist, remove old list options · 02bb515a
      Matt Stancliff authored
      This removes:
        - list-max-ziplist-entries
        - list-max-ziplist-value
      
      This adds:
        - list-max-ziplist-size
        - list-compress-depth
      
      Also updates config file with new sections and updates
      tests to use quicklist settings instead of old list settings.
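
      For reference, a redis.conf fragment using the two new options might
      look like the following (the values shown are, to the best of my
      knowledge, the shipped defaults):

        # Quicklist node size: negative values select a byte limit
        # (-2 caps each internal ziplist node at 8 KB).
        list-max-ziplist-size -2

        # How many nodes at each end of the list stay uncompressed
        # (0 disables compression of interior nodes entirely).
        list-compress-depth 0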
    • Allow compression of interior quicklist nodes · abdd1414
      Matt Stancliff authored
      Let user set how many nodes to *not* compress.
      
      We can specify a compression "depth" of how many nodes
      to leave uncompressed on each end of the quicklist.
      
      Depth 0 = disable compression.
      Depth 1 = only leave head/tail uncompressed.
        - (read as: "skip 1 node on each end of the list before compressing")
      Depth 2 = leave head, head->next, tail->prev, tail uncompressed.
        - ("skip 2 nodes on each end of the list before compressing")
      Depth 3 = Depth 2 + head->next->next + tail->prev->prev
        - ("skip 3 nodes...")
      etc.
      
      This also:
        - updates RDB storage to use native quicklist compression (if node is
          already compressed) instead of uncompressing, generating the RDB string,
          then re-compressing the quicklist node.
        - internalizes the "fill" parameter for the quicklist so we don't
          need to pass it to _every_ function.  Now it's just a property of
          the list.
        - allows a runtime-configurable compression option, so we can
          expose a compression parameter in the configuration file if people
          want to trade slight request-per-second performance for up to 90%+
          memory savings in some situations.
        - updates the quicklist tests to do multiple passes: 200k+ tests now.
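
      A rough illustration of the depth rule above (not the actual quicklist
      code):

        #include <stdio.h>

        /* A node at position idx in a list of len nodes gets compressed only
         * when it sits at least 'depth' nodes away from both ends. */
        static int node_is_compressible(int idx, int len, int depth) {
            if (depth == 0) return 0;                  /* depth 0: never compress */
            return idx >= depth && idx < len - depth;  /* interior nodes only     */
        }

        int main(void) {
            /* depth 1 on a 6 node list: only positions 1..4 are compressed. */
            for (int i = 0; i < 6; i++)
                printf("node %d: %s\n", i,
                       node_is_compressible(i, 6, 1) ? "compressed" : "plain");
            return 0;
        }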
    • Convert quicklist RDB to store ziplist nodes · 101b3a6e
      Matt Stancliff authored
      Turns out it's a huge improvement during save/reload/migrate/restore
      because, with compression enabled, we're compressing 4k or 8k
      chunks of data consisting of multiple elements in one ziplist
      instead of compressing a series of smaller individual elements.
    • Convert RDB ziplist loading to sdsnative() · 127c15e2
      Matt Stancliff authored
      This saves us an unnecessary zmalloc, memcpy, and two frees.
    • Add quicklist implementation · 5e362b84
      Matt Stancliff authored
      This replaces individual ziplist vs. linkedlist representations
      for Redis list operations.
      
      Big thanks for all the reviews and feedback from everybody in
      https://github.com/antirez/redis/pull/2143
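
      A rough, illustrative sketch of the data structure this introduces
      (field names are simplified; see quicklist.h in the pull request above
      for the real layout):

        /* A doubly linked list whose nodes each hold a small ziplist, so we
         * get ziplist memory density with linked-list shaped operations. */
        typedef struct quicklist_node {
            struct quicklist_node *prev, *next;
            unsigned char *zl;        /* ziplist holding this node's elements */
            unsigned int sz;          /* ziplist size in bytes                */
            unsigned int count;       /* elements stored in the ziplist       */
            unsigned int compressed;  /* nonzero if zl is held compressed     */
        } quicklist_node;

        typedef struct quicklist {
            quicklist_node *head, *tail;
            unsigned long count;      /* total elements across all nodes      */
            unsigned int len;         /* number of nodes                      */
            int fill;                 /* per-node size limit (see config)     */
            unsigned int compress;    /* compress-depth setting               */
        } quicklist;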
  12. 23 Dec, 2014 2 commits
  13. 21 Dec, 2014 1 commit
  14. 27 Oct, 2014 1 commit
  15. 23 Oct, 2014 1 commit
  16. 22 Oct, 2014 1 commit
    • Diskless replication: set / reset socket send timeout. · d4f6a171
      antirez authored
      We need to avoid a child -> slaves transfer continuing forever.
      We use the same timeout used as the global replication timeout, which
      is documented to also affect I/O operations during bulk transfers.
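
      A minimal sketch of the set/reset idea using the standard socket API
      (the helper name is illustrative):

        #include <sys/socket.h>
        #include <sys/time.h>

        /* Give the slave's fd a send timeout before the bulk transfer starts,
         * so a stalled peer cannot block the child forever; calling it again
         * with 0 restores the default "never time out" behavior. */
        static int set_send_timeout(int fd, long timeout_ms) {
            struct timeval tv = { .tv_sec = timeout_ms / 1000,
                                  .tv_usec = (timeout_ms % 1000) * 1000 };
            return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
        }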
  17. 17 Oct, 2014 3 commits
  18. 15 Oct, 2014 2 commits
  19. 14 Oct, 2014 2 commits
  20. 08 Oct, 2014 1 commit
    • Define different types of RDB childs. · 2df8341c
      antirez authored
      We need to remember the saving strategy of the current RDB child
      process, since the configuration may be modified at runtime via CONFIG
      SET, and when the child exits we still need to understand what to do
      and for what goal the process was initiated: to create an RDB file
      on disk or to write directly to the slaves' sockets.
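
      A sketch of the state this implies (constant names are approximations,
      not necessarily the ones used in the commit):

        enum rdb_child_type {
            RDB_CHILD_TYPE_NONE,    /* no RDB child running                    */
            RDB_CHILD_TYPE_DISK,    /* child is writing an RDB file to disk    */
            RDB_CHILD_TYPE_SOCKET   /* child is writing to the slaves' sockets */
        };

        /* Recorded at fork time and consulted when the child exits, so a
         * CONFIG SET issued in between cannot change what we do on exit. */
        static void on_rdb_child_exit(enum rdb_child_type type, int exitcode) {
            (void) exitcode;
            switch (type) {
            case RDB_CHILD_TYPE_DISK:   /* rename temp file, update save state */ break;
            case RDB_CHILD_TYPE_SOCKET: /* finalize slaves fed over the socket */ break;
            default: break;
            }
        }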