1. 13 Sep, 2021 1 commit
      PSYNC2: make partial sync possible after master reboot (#8015) · 794442b1
      zhaozhao.zz authored
      The main idea is to allow a master to load replication info from its
      RDB file when rebooting: if the master can load the replication info,
      replicas may get the chance to psync with it, which can save a lot of
      traffic.
      
      The key point is that we must guarantee safety and consistency, so
      there are two differences between master and replica:
      
      1. the master loads the replication info as its secondary ID and
         offset, in case other masters have the same replid.
      2. while loading the RDB, the master propagates expired keys as DEL
         commands to the replication backlog, so that replicas receive these
         commands and delete stale keys.
         p.s. the number of keys expired during RDB loading is useful to
         users, so it is exposed as `rdb_last_load_keys_expired` (together
         with `rdb_last_load_keys_loaded`) in info persistence.
      
      Moreover, after loading the replication info, the master should update
      `no_replica_time`, in case loading the RDB took a long time.
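
      A minimal sketch of the secondary-ID idea, using a simplified stand-in
      for the relevant `struct redisServer` fields (the helper below is
      illustrative, not the actual patch):

      ```c
      #include <stdio.h>

      /* Simplified stand-in for the relevant struct redisServer fields. */
      struct repl_state {
          char replid[41];                /* current (primary) replication ID */
          char replid2[41];               /* secondary ID, for partial resync */
          long long second_replid_offset; /* replid2 is valid up to here */
          long long master_repl_offset;   /* current replication offset */
      };

      /* On reboot, adopt the replication info saved in the RDB as the
       * *secondary* ID: replicas of the previous incarnation can still
       * PSYNC against it, while a fresh primary ID avoids clashing with
       * other masters that might share the old replid. */
      void adopt_loaded_repl_info(struct repl_state *s,
                                  const char *saved_replid,
                                  long long saved_offset) {
          snprintf(s->replid2, sizeof(s->replid2), "%s", saved_replid);
          s->second_replid_offset = saved_offset + 1;
          /* Placeholder; the real code generates a new random hex ID. */
          snprintf(s->replid, sizeof(s->replid), "%040x", 0);
          s->master_repl_offset = saved_offset;
      }

      int main(void) {
          struct repl_state s = {{0}};
          adopt_loaded_repl_info(&s, "0123456789abcdef0123456789abcdef01234567",
                                 1000);
          printf("old history usable for PSYNC up to offset %lld\n",
                 s.second_replid_offset - 1);
          return 0;
      }
      ```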
  2. 09 Sep, 2021 1 commit
      Replace all usage of ziplist with listpack for t_zset (#9366) · 3ca6972e
      sundb authored
      Part two of implementing #8702 (zset), after #8887.
      
      ## Description of the feature
      Replaced all uses of ziplist with listpack in t_zset, and optimized
      some of the code to improve performance.
      
      ## Rdb format changes
      New `RDB_TYPE_ZSET_LISTPACK` rdb type.
      
      ## Rdb loading improvements:
      1) Pre-expand the dict used to validate duplicate data for listpack
      and ziplist.
      2) Simplify the release of empty key objects during RDB loading.
      3) Unify the ziplist and listpack data verification methods for zset
      and hash, and move the code to rdb.c.
      
      ## Interface changes
      1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
      2) OBJECT ENCODING will return `listpack` instead of `ziplist`.
      
      ## Listpack improvements:
      1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from listpack.
      2) Improve the performance of `lpCompare`: converting from string to
      integer is faster than converting from integer to string.
      3) Replace `snprintf` with `ll2string` to improve performance in converting numbers to strings in `lpGet()`.
      
      ## Zset improvements:
      1) Improve the performance of the `zzlFind` method by using `lpFind`
      instead of calling `lpCompare` in a loop.
      2) Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to
      delete an element (member plus score) of a zset.
      
      ## Tests
      1) Add unit tests for the `lpDeleteRange` and `lpDeleteRangeWithEntry`
      functions.
      2) Add a zset RDB loading test.
      3) Add a benchmark test for `lpCompare` and `ziplistCompare`.
      4) Add an empty listpack zset corrupt dump test.
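
      For illustration, a sketch of the `zzlFind` change, assuming the
      `listpack.h` API from the Redis source tree (`lpFirst`, `lpFind`,
      `lpNext`); this is simplified relative to the real t_zset.c code:

      ```c
      #include <stdint.h>
      #include <string.h>
      #include "listpack.h" /* from the Redis source tree */

      /* Find a zset member in a listpack laid out as (member, score)
       * pairs. Instead of walking the entries and calling lpCompare()
       * on each member, let lpFind() do the scan, skipping one entry
       * (the score) between candidates. Returns the member entry and,
       * optionally, the score entry that follows it. */
      unsigned char *zl_find_member(unsigned char *zl, const char *member,
                                    unsigned char **score_entry) {
          unsigned char *eptr = lpFirst(zl);
          if (eptr == NULL) return NULL;
          eptr = lpFind(zl, eptr, (unsigned char *)member,
                        (uint32_t)strlen(member), 1 /* skip the score */);
          if (eptr == NULL) return NULL;
          if (score_entry) *score_entry = lpNext(zl, eptr);
          return eptr;
      }
      ```

      Deleting a member then becomes a single `lpDeleteRangeWithEntry` call
      covering both the member and its score, instead of two `lpDelete`
      calls.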
  3. 10 Aug, 2021 1 commit
      Replace all usage of ziplist with listpack for t_hash (#8887) · 02fd76b9
      sundb authored
      Part one of implementing #8702 (taking hashes first before other types)
      
      ## Description of the feature
      1. Change ziplist encoded hash objects to listpack encoding.
      2. Convert existing ziplists at RDB loading time (an O(n) operation).
      
      ## Rdb format changes
      1. Add `RDB_TYPE_HASH_LISTPACK` rdb type.
      2. Bump `RDB_VERSION` to 10.
      
      ## Interface changes
      1. New `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`)
      2. OBJECT ENCODING will return `listpack` instead of `ziplist`
      
      ## Listpack improvements:
      1. Support directly inserting and replacing integer elements (rather
      than converting back and forth from strings).
      2. Add more listpack capabilities to match the ziplist ones (like
      `lpFind`, `lpRandomPairs` and such).
      3. Optimize element length fetching, avoid multiple calculations.
      4. Use inline to avoid function call overhead.
      
      ## Tests
      1. Add a new test for the RDB load-time conversion.
      2. Add listpack unit tests (based on the ones in ziplist.c).
      3. Add a few "corrupt payload: fuzzer findings" tests, and slightly
      modify existing ones.
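
      A sketch of the load-time conversion, assuming the ziplist/listpack
      APIs from the Redis source tree; the real rdb.c code also validates
      entries along the way:

      ```c
      #include "listpack.h"
      #include "ziplist.h" /* both from the Redis source tree */

      /* Convert a ziplist to a listpack by walking it once and appending
       * each entry to a fresh listpack: O(n) overall, done only when an
       * old-format RDB is loaded. */
      unsigned char *ziplist_to_listpack(unsigned char *zl) {
          unsigned char *lp = lpNew(0);
          unsigned char *p = ziplistIndex(zl, 0);
          while (p != NULL) {
              unsigned char *str;
              unsigned int slen;
              long long lval;
              ziplistGet(p, &str, &slen, &lval);
              if (str != NULL)
                  lp = lpAppend(lp, str, slen);
              else
                  lp = lpAppendInteger(lp, lval); /* stored as an integer */
              p = ziplistNext(zl, p);
          }
          return lp;
      }
      ```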
      Co-authored-by: Oran Agra <oran@redislabs.com>
  4. 05 Aug, 2021 1 commit
  5. 16 Jun, 2021 1 commit
  6. 08 Feb, 2021 1 commit
  7. 22 Sep, 2020 1 commit
  8. 17 Sep, 2020 1 commit
      Remove tmp rdb file in background thread (#7762) · b002d2b4
      Wang Yuan authored
      We're already using bg_unlink in several places to delete the rdb file
      in the background, to avoid paying the cost of the deletion on our main
      thread.
      This commit uses bg_unlink to remove the temporary rdb file in the background too.
      
      However, when we delete that rdb file just before exiting, we don't
      actually wait for the background thread or the main thread to delete it,
      and just let the OS clean up after us, i.e. we open the file, unlink it,
      and exit with the fd still open.
      
      Furthermore, rdbRemoveTempFile can be called from a thread and was using
      snprintf, which is not async-signal-safe; we now use ll2string instead.
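
      A sketch of the underlying bg_unlink() trick (the real helper lives in
      the Redis source; the background-close stub below stands in for handing
      the fd to a bio thread):

      ```c
      #include <errno.h>
      #include <fcntl.h>
      #include <unistd.h>

      /* Stand-in for queuing the close() on a background (bio) thread. */
      static void submit_background_close(int fd) { close(fd); }

      /* Keep an fd open so unlink() only removes the name; the expensive
       * block reclamation happens on the final close(), which can then run
       * off the main thread. */
      int bg_unlink_sketch(const char *filename) {
          int fd = open(filename, O_RDONLY | O_NONBLOCK);
          if (fd == -1)
              return unlink(filename); /* can't open: delete synchronously */
          int retval = unlink(filename);
          if (retval == -1) {
              int old_errno = errno;
              close(fd); /* don't leak the fd on failure */
              errno = old_errno;
              return -1;
          }
          submit_background_close(fd);
          return 0;
      }
      ```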
  9. 09 Apr, 2020 3 commits
  10. 30 Jan, 2020 1 commit
  11. 29 Oct, 2019 1 commit
      Modules hooks: complete missing hooks for the initial set of hooks · 51c3ff8d
      Oran Agra authored
      * replication hooks: role change, master link status, replica online/offline
      * persistence hooks: saving, loading, loading progress
      * misc hooks: cron loop, shutdown, module loaded/unloaded
      * change the way hooks test work, and add tests for all of the above
      
      startLoading() now gets a flag indicating what is being loaded.
      stopLoading() now gets an indication of success or failure.
      Adding startSaving() and stopSaving() with similar args and role.
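
      As an example, a module can subscribe to the new persistence hooks
      roughly like this (a sketch against the module API; the subevent names
      below are from memory, so check redismodule.h):

      ```c
      #include "redismodule.h"

      /* Log when persistence starts, ends, or fails. */
      static void persistence_cb(RedisModuleCtx *ctx, RedisModuleEvent e,
                                 uint64_t sub, void *data) {
          REDISMODULE_NOT_USED(e);
          REDISMODULE_NOT_USED(data);
          if (sub == REDISMODULE_SUBEVENT_PERSISTENCE_RDB_START)
              RedisModule_Log(ctx, "notice", "RDB save started");
          else if (sub == REDISMODULE_SUBEVENT_PERSISTENCE_ENDED)
              RedisModule_Log(ctx, "notice", "persistence ended");
          else if (sub == REDISMODULE_SUBEVENT_PERSISTENCE_FAILED)
              RedisModule_Log(ctx, "warning", "persistence failed");
      }

      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv,
                             int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "hooksdemo", 1, REDISMODULE_APIVER_1)
              == REDISMODULE_ERR) return REDISMODULE_ERR;
          return RedisModule_SubscribeToServerEvent(ctx,
              RedisModuleEvent_Persistence, persistence_cb);
      }
      ```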
  12. 22 Jul, 2019 1 commit
  13. 17 Jul, 2019 2 commits
  14. 15 Mar, 2019 1 commit
  15. 19 Jun, 2018 1 commit
  16. 29 May, 2018 2 commits
      Don't expire keys while loading RDB from AOF preamble. · 49147f36
      antirez authored
      The AOF tail of a combined RDB+AOF is based on the premise of applying
      the AOF commands to the exact state that there was in the server while
      the RDB was persisted. By expiring keys while loading the RDB file, we
      change that state, so applying the AOF tail later may lead to an
      inconsistent state.
      
      Test case:
      
      * Time1: SET a 10
      * Time2: EXPIREAT a $time5
      * Time3: INCR a
      * Time4: PERSIST a. Start BGREWRITEAOF with RDB preamble. The value of a is 11 without an expire time.
      * Time5: Restart redis from the RDB+AOF: consistency violation.
      
      Thanks to @soloestoy for providing the patch.
      Thanks to @trevor211 for the original issue report and the initial fix.
      
      Check issue #4950 for more info.
      Fix rdb save by allowing dumping of expired keys, so that when · 2a887bd5
      WuYunlong authored
      we add a new slave and do a failover, either manual or automatic,
      other local slaves will delete the expired keys properly.
  17. 16 Mar, 2018 2 commits
  18. 15 Mar, 2018 1 commit
      RDB: Ability to save LFU/LRU info. · d7a5c0eb
      antirez authored
      This is a big win for caching use cases, since on reloading Redis will
      still have some idea about what is worth evicting and what is not.
      However this only solves part of the problem, because the information is
      only partially propagated to slaves (on write operations). Reads will
      not affect slaves' LFU and LRU counters, so after a failover the eviction
      decisions are kinda random until keys start to collect some aging/freq
      info.
      
      However since new slaves are initially populated via RDB file transfer,
      this means that if we spin up a new slave from a master, and perform an
      immediate manual failover (for instance in order to upgrade the master),
      the slave will have eviction information to use for some time.
      
      The LFU/LRU info is persisted only if the maxmemory policy is set to one
      of the relevant types, even if no actual "maxmemory" limit is set.
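
      A sketch of the save-side logic; the opcode values are the actual ones
      from rdb.h, while the writer helpers are illustrative stand-ins for the
      rio layer (the real length encoding is more involved):

      ```c
      #include <stdint.h>
      #include <stdio.h>

      #define RDB_OPCODE_IDLE 248 /* LRU: idle time precedes the key */
      #define RDB_OPCODE_FREQ 249 /* LFU: 8-bit counter precedes the key */

      /* Illustrative writers; the real code goes through rio and uses the
       * variable-length RDB length encoding. */
      static int rdb_write_byte(uint8_t b) {
          return putchar(b) == EOF ? -1 : 0;
      }
      static int rdb_write_len(uint64_t len) {
          return printf("%llu", (unsigned long long)len) < 0 ? -1 : 0;
      }

      /* Before each key/value pair, persist its eviction metadata when the
       * configured maxmemory policy makes it meaningful. */
      int save_eviction_info(int lru_policy, int lfu_policy,
                             uint64_t idle_seconds, uint8_t lfu_counter) {
          if (lru_policy) {
              if (rdb_write_byte(RDB_OPCODE_IDLE) < 0) return -1;
              if (rdb_write_len(idle_seconds) < 0) return -1;
          }
          if (lfu_policy) {
              if (rdb_write_byte(RDB_OPCODE_FREQ) < 0) return -1;
              if (rdb_write_byte(lfu_counter) < 0) return -1;
          }
          return 0;
      }
      ```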
  19. 29 Dec, 2017 1 commit
      fix processing of large bulks (above 2GB) · 60a4f12f
      Oran Agra authored
      - protocol parsing (processMultibulkBuffer) was limited to 32 bit
        positions in the buffer, with a potential overflow in
        readQueryFromClient
      - rioWriteBulkCount used int, although rioWriteBulkString gave it size_t
      - several places in sds.c that used int for string length or index.
      - bugfix in RM_SaveAuxField (return was 1 or -1 and not length)
      - RM_SaveStringBuffer was limited to a 32 bit length
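
      A tiny demonstration of the failure mode (a minimal sketch, not Redis
      code): a length above 2GB wraps when squeezed into a 32 bit int:

      ```c
      #include <stdio.h>
      #include <stddef.h>

      int main(void) {
          size_t bulk_len = 3221225472u; /* a 3GB bulk length */
          int narrow = (int)bulk_len;    /* the old, too-narrow type */
          printf("as size_t: %zu\n", bulk_len); /* 3221225472 */
          printf("as int:    %d\n", narrow);    /* negative on typical systems */
          return 0;
      }
      ```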
  20. 01 Dec, 2017 2 commits
  21. 19 Sep, 2017 1 commit
      PSYNC2: Fix the way replication info is saved/loaded from RDB. · c1c99e9f
      antirez authored
      This commit attempts to fix a number of bugs reported in #4316.
      They are related to the way replication info, like the replication ID,
      offsets, and the currently selected DB in the master client, is stored
      and loaded by Redis. In order to avoid inconsistencies, the changes in
      this commit try to enforce that:
      
      1. Replication information is only stored when the RDB file is
      generated by a slave that has a valid 'master' client, so that we can
      always extract the currently selected DB.
      2. When replication information is persisted in the RDB file, either all
      the info needed for a successful PSYNC is persisted, or nothing at all.
      3. The RDB replication information is only loaded if the instance is
      configured as a slave; otherwise a master could start with IDs that
      relate to a different history of the data set, and still retain such IDs
      in the future while receiving unrelated writes.
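
      The aux keys involved ("repl-id", "repl-offset", "repl-stream-db") are
      the actual field names; the sketch below shows the all-or-nothing rule,
      with hypothetical writers standing in for rdbSaveAuxField*():

      ```c
      #include <stdio.h>

      struct rdb_save_info {
          int repl_stream_db;   /* DB selected in the master client */
          char repl_id[41];     /* replication ID */
          long long repl_offset;
      };

      /* Hypothetical aux writers; real code uses rdbSaveAuxField*(). */
      static int write_aux_int(const char *k, long long v) {
          return printf("%s=%lld\n", k, v) < 0 ? -1 : 0;
      }
      static int write_aux_str(const char *k, const char *v) {
          return printf("%s=%s\n", k, v) < 0 ? -1 : 0;
      }

      /* Persist the PSYNC trio only when we have a complete, consistent
       * snapshot of it; a partial set could let a replica PSYNC from the
       * wrong state. */
      int save_repl_aux_fields(const struct rdb_save_info *rsi) {
          if (rsi == NULL) return 0; /* store nothing at all */
          if (write_aux_int("repl-stream-db", rsi->repl_stream_db) < 0) return -1;
          if (write_aux_str("repl-id", rsi->repl_id) < 0) return -1;
          if (write_aux_int("repl-offset", rsi->repl_offset) < 0) return -1;
          return 1;
      }
      ```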
  22. 27 Jun, 2017 1 commit
      RDB modules values serialization format version 2. · 365dd037
      antirez authored
      The original RDB serialization format was not parsable without the
      module loaded, because the structure was managed only by the module
      itself. Moreover RDB is a streaming protocol in the sense that it is
      both produced in an append-only fashion, and is also sometimes directly
      sent to the socket (in the case of diskless replication).
      
      The fact that module values cannot be parsed without the relevant
      module loaded is a problem in many ways: RDB checking tools must have
      the modules loaded even for doing things not involving the value at all,
      like splitting an RDB into N RDBs by key or the like, or just checking
      the RDB for sanity.
      
      In theory module values could be just a blob of data with a prefixed
      length in order for us to be able to skip it. However prefixing the values
      with a length would mean one of the following:
      
      1. To be able to write some data at a previous offset. This breaks
      streaming.
      2. To buffer values before outputting them. This breaks performance.
      3. To have some chunked RDB output format. This breaks simplicity.
      
      Moreover, the above solution still makes module values a totally opaque
      matter, with the following problems:
      
      1. The RDB check tool can just skip the value without being able to at
      least check the general structure. For datasets composed mostly of
      module values this means checking only the outer level of the RDB,
      without actually doing any check on most of the data itself.
      2. It is not possible to do any recovering or processing of data for which a
      module no longer exists in the future, or is unknown.
      
      So this commit implements a different solution. The modules RDB
      serialization API is composed of well defined calls to store integers,
      floats, doubles or strings. After this commit, the parts generated by
      the module API have a one-byte prefix for each of the above emitted
      parts, and there is a final EOF byte as well. So even if we don't know
      exactly how to interpret a module value, we can always parse it at a
      high level, check the overall structure, understand the types used to
      store the information, and easily skip the whole value.
      
      The change is backward compatible: older RDB files can be still loaded
      since the new encoding has a new RDB type: MODULE_2 (of value 7).
      The commit also implements the ability to check RDB files for sanity
      taking advantage of the new feature.
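
      A sketch of a skipping parser for a MODULE_2 value; the opcode values
      match rdb.h (RDB_MODULE_OPCODE_*), while the read helpers are
      illustrative stand-ins for the rio/rdbLoad* layer:

      ```c
      #include <stdint.h>
      #include <stdio.h>

      /* Per-part opcodes, as in rdb.h (RDB_MODULE_OPCODE_*). */
      enum {
          MOD_OP_EOF    = 0, /* end of the module value */
          MOD_OP_SINT   = 1,
          MOD_OP_UINT   = 2,
          MOD_OP_FLOAT  = 3,
          MOD_OP_DOUBLE = 4,
          MOD_OP_STRING = 5
      };

      /* Illustrative readers over stdin; real code uses rio and the RDB
       * length encoding. */
      static uint64_t read_len(void) {
          int c = getchar();
          return c == EOF ? UINT64_MAX : (uint64_t)c;
      }
      static void skip_bytes(uint64_t n) { while (n--) (void)getchar(); }

      /* Skip a whole module value without understanding it: every part is
       * tagged, so the structure can be checked and skipped even when the
       * module is not loaded. */
      int skip_module_value(void) {
          for (;;) {
              uint64_t opcode = read_len();
              switch (opcode) {
              case MOD_OP_EOF:    return 0;               /* clean end */
              case MOD_OP_SINT:
              case MOD_OP_UINT:   (void)read_len();       break;
              case MOD_OP_FLOAT:  skip_bytes(4);          break;
              case MOD_OP_DOUBLE: skip_bytes(8);          break;
              case MOD_OP_STRING: skip_bytes(read_len()); break;
              default:            return -1; /* corrupt or unknown part */
              }
          }
      }
      ```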
  23. 09 Nov, 2016 1 commit
      PSYNC2: different improvements to Redis replication. · 2669fb83
      antirez authored
      The gist of the changes is that now, partial resynchronizations between
      slaves and masters (without the need of a full resync with RDB transfer
      and so forth), work in a number of cases when it was impossible
      in the past. For instance:
      
      1. When a slave is promoted to master, the slaves of the old master can
      partially resynchronize with the new master.
      
      2. Chained slaves (slaves of slaves) can be moved to replicate to other
      slaves or the master itself, without requiring a full resync.
      
      3. The master itself, after being turned into a slave, is able to
      partially resynchronize with the new master, when it joins replication
      again.
      
      In order to obtain this, the following main changes were made:
      
      * Slaves also take a replication backlog, not just masters.
      
      * Same stream replication for all the slaves and sub slaves. The
      replication stream is identical from the top level master to its slaves
      and is also the same from the slaves to their sub-slaves and so forth.
      This means that if a slave is later promoted to master, it has the
      same replication backlog, and can partially resynchronize with its
      slaves (that were previously slaves of the old master).
      
      * A given replication history is no longer identified by the `runid` of
      a Redis node. There is instead a `replication ID` which changes every
      time the instance has a new history no longer coherent with the past
      one. So, for example, slaves publish the same replication history of
      their master, however when they are turned into masters, they publish
      a new replication ID, but still remember the old ID, so that they are
      able to partially resynchronize with slaves of the old master (up to a
      given offset).
      
      * The replication protocol was slightly modified so that a new extended
      +CONTINUE reply from the master is able to inform the slave of a
      replication ID change.
      
      * REPLCONF CAPA is used in order to notify masters that a slave is able
      to understand the new +CONTINUE reply.
      
      * The RDB file was extended with an auxiliary field that is able to
      select a given DB after loading in the slave, so that the slave can
      continue receiving the replication stream from the point it was
      disconnected without requiring the master to insert "SELECT" statements.
      This is useful in order to guarantee the "same stream" property, because
      the slave must be able to accumulate an identical backlog.
      
      * Slave pings to sub-slaves are now sent in a special form, when the
      top-level master is disconnected, in order not to interfere with the
      replication stream. We just use out of band "\n" bytes as in other parts
      of the Redis protocol.
      
      An old design document is available here:
      
      https://gist.github.com/antirez/ae068f95c0d084891305
      
      However the implementation is not identical to the description because
      during the work to implement it, different changes were needed in order
      to make things work well.
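
      A sketch of the master-side acceptance test this enables, mirroring the
      logic of masterTryPartialResynchronization with simplified names
      (backlog-range checks omitted):

      ```c
      #include <string.h>

      struct master_state {
          char replid[41];                /* current replication ID */
          char replid2[41];               /* previous ID, kept after promotion */
          long long second_replid_offset; /* replid2 is valid up to here */
      };

      /* A slave may partially resync if it followed the current history,
       * or the previous one up to the point where this instance diverged. */
      int can_partial_resync(const struct master_state *m,
                             const char *slave_replid,
                             long long slave_offset) {
          if (strcmp(slave_replid, m->replid) == 0)
              return 1; /* same history */
          if (strcmp(slave_replid, m->replid2) == 0 &&
              slave_offset <= m->second_replid_offset)
              return 1; /* old history, before the divergence point */
          return 0;     /* otherwise a full resync is needed */
      }
      ```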
  24. 02 Oct, 2016 1 commit
  25. 11 Aug, 2016 1 commit
  26. 09 Aug, 2016 1 commit
  27. 03 Jun, 2016 1 commit
  28. 01 Jun, 2016 3 commits
  29. 27 Jul, 2015 1 commit
  30. 26 Jul, 2015 1 commit
  31. 19 Jan, 2015 1 commit
      Improve RDB type correctness · f7043604
      Matt Stancliff authored
      It's possible large objects could be larger than 'int', so let's
      upgrade all size counters to ssize_t.
      
      This also fixes rdbSaveObject serialized bytes calculation.
      Since entire serializations of data structures can be large,
      we don't want to limit their calculated size to a 32 bit signed max.
      
      This commit widens the object size calculation and
      cascades the change back up to serializedlength printing.
      
      Before:
      127.0.0.1:6379> debug object hihihi
      ... encoding:quicklist serializedlength:-2147483559 ...
      
      After:
      127.0.0.1:6379> debug object hihihi
      ... encoding:quicklist serializedlength:2147483737 ...
  32. 08 Jan, 2015 1 commit
      RDB AUX fields support. · 206cd219
      antirez authored
      This commit introduces a new RDB data type called 'aux'. It is used in
      order to insert inside an RDB file key-value pairs that may serve
      different needs, without breaking backward compatibility when new
      information is embedded inside an RDB file. The contract between Redis
      versions is to ignore unknown aux fields when encountered.
      
      Aux fields can be used in order to:
      
      1. Augment the RDB file with info like version of Redis that created the
      RDB file, creation time, used memory while the RDB was created, and so
      forth.
      2. Add state about Redis inside the RDB file that we need to reload
      later: replication offset, previous master run ID, in order to improve
      failover safety and allow partial resynchronization after a slave
      restart.
      3. Anything that we may want to add to RDB files without breaking the
      ability of past versions of Redis to load the file.
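
      A sketch of the loader-side contract; "redis-ver" is a real aux key,
      while the dispatch below is purely illustrative:

      ```c
      #include <stdio.h>
      #include <string.h>

      /* An aux record (opcode 250 in rdb.h) is just two RDB strings: a key
       * and a value. Unknown keys are ignored, never treated as an error;
       * that is what lets old versions load files written by newer ones. */
      void handle_aux_field(const char *key, const char *val) {
          if (strcmp(key, "redis-ver") == 0) {
              printf("RDB created by Redis %s\n", val);
          } else if (strcmp(key, "repl-offset") == 0) {
              /* e.g. stash it to allow partial resync after a restart */
          } else {
              /* Unknown aux field: silently skip it. */
          }
      }
      ```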