1. 09 Apr, 2020 1 commit
    • RDB: load files faster avoiding useless free+realloc. · 30adc622
      antirez authored
      Reloading of the RDB generated by
      
          DEBUG POPULATE 5000000
          SAVE
      
      is now 25% faster.
      
      This commit also prepares the ground for more flexibility when
      loading data from the RDB, since we no longer use dbAdd() but can
      control exactly how things are added to the database.
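      The core trick, in a minimal standalone sketch (hypothetical helper,
      not the actual rdb.c code): reuse one scratch buffer across load
      iterations and grow it only when needed, instead of paying a
      free()+malloc() per loaded value.

          #include <stdlib.h>

          static char *load_buf = NULL;   /* reused across values */
          static size_t load_cap = 0;

          /* Return a scratch buffer of at least 'len' bytes, growing the
           * previous allocation in place when possible.
           * (Error handling omitted for brevity.) */
          static char *load_buffer(size_t len) {
              if (len > load_cap) {
                  load_buf = realloc(load_buf, len);
                  load_cap = len;
              }
              return load_buf;
          }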
  2. 30 Jan, 2020 1 commit
  3. 29 Oct, 2019 1 commit
    • Modules hooks: complete missing hooks for the initial set of hooks · 51c3ff8d
      Oran Agra authored
      * replication hooks: role change, master link status, replica online/offline
      * persistence hooks: saving, loading, loading progress
      * misc hooks: cron loop, shutdown, module loaded/unloaded
      * change the way hook tests work, and add tests for all of the above
      
      startLoading() now gets a flag indicating what is being loaded.
      stopLoading() now gets an indication of success or failure.
      Adds startSaving() and stopSaving() with similar arguments and roles.
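      A hedged sketch of how a module might consume one of these hooks via
      the server-events API (event and subevent names are from
      redismodule.h as I recall them; treat the details as illustrative):

          #include "redismodule.h"

          static void persistenceCallback(RedisModuleCtx *ctx,
                                          RedisModuleEvent e,
                                          uint64_t sub, void *data) {
              REDISMODULE_NOT_USED(e);
              REDISMODULE_NOT_USED(data);
              if (sub == REDISMODULE_SUBEVENT_PERSISTENCE_RDB_START)
                  RedisModule_Log(ctx, "notice", "RDB save started");
          }

          int RedisModule_OnLoad(RedisModuleCtx *ctx,
                                 RedisModuleString **argv, int argc) {
              REDISMODULE_NOT_USED(argv);
              REDISMODULE_NOT_USED(argc);
              if (RedisModule_Init(ctx, "hookdemo", 1, REDISMODULE_APIVER_1)
                  == REDISMODULE_ERR) return REDISMODULE_ERR;
              return RedisModule_SubscribeToServerEvent(ctx,
                  RedisModuleEvent_Persistence, persistenceCallback);
          }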
  4. 22 Jul, 2019 1 commit
  5. 17 Jul, 2019 2 commits
  6. 15 Mar, 2019 1 commit
  7. 19 Jun, 2018 1 commit
  8. 29 May, 2018 2 commits
    • Don't expire keys while loading RDB from AOF preamble. · 49147f36
      antirez authored
      The AOF tail of a combined RDB+AOF is based on the premise of applying
      the AOF commands to the exact state the server was in when the RDB
      was persisted. By expiring keys while loading the RDB file we change
      that state, so applying the AOF tail later may produce an
      inconsistent state.
      
      Test case:
      
      * Time1: SET a 10
      * Time2: EXPIREAT a $time5
      * Time3: INCR a
      * Time4: PERSIST a. Start BGREWRITEAOF with RDB preamble. The value of a is 11 without an expire time.
      * Time5: Restart Redis from the RDB+AOF: consistency violation.
      
      Thanks to @soloestoy for providing the patch.
      Thanks to @trevor211 for the original issue report and the initial fix.
      
      Check issue #4950 for more info.
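      The shape of the fix, as a hedged standalone sketch (hypothetical
      function, not the actual patch): the expire-on-load check is simply
      skipped when the RDB being read is an AOF preamble.

          /* Decide whether a key read from the RDB should be dropped as
           * expired. Never drop while loading an AOF preamble, so the AOF
           * tail replays against the exact state that was saved. */
          static int expire_on_load(int loading_aof_preamble,
                                    long long expire_at_ms,
                                    long long now_ms) {
              if (loading_aof_preamble) return 0;
              return expire_at_ms != -1 && expire_at_ms < now_ms;
          }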
    • Fix rdb save by allowing dumping of expire keys, so that when · 2a887bd5
      WuYunlong authored
      we add a new slave, and do a failover, whether manual or
      automatic, other local slaves will delete the expired keys properly.
  9. 16 Mar, 2018 2 commits
  10. 15 Mar, 2018 1 commit
    • RDB: Ability to save LFU/LRU info. · d7a5c0eb
      antirez authored
      This is a big win for caching use cases, since on reloading Redis will
      still have some idea about what is worth evicting and what is not.
      However this only solves part of the problem, because the information
      is only partially propagated to slaves (on write operations). Reads
      will not affect the slaves' LFU and LRU counters, so after a failover
      the eviction decisions are essentially random until keys start to
      collect some aging/frequency info.
      
      However, since new slaves are initially populated via RDB file
      transfer, if we spin up a new slave from a master and perform an
      immediate manual failover (for instance in order to upgrade the
      master), the slave will have eviction information to use for some
      time.

      The LFU/LRU info is persisted only if the maxmemory policy is set to
      one of the relevant types, even if no actual "maxmemory" limit is
      set.
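      A hedged standalone sketch of the save-side idea (opcode values as in
      rdb.h; the writer below is a hypothetical stand-in for Redis' rio
      layer): before each key, one extra opcode carries either the LRU idle
      time or the LFU counter, matching the active policy.

          #include <stdint.h>
          #include <stdio.h>

          enum { RDB_OPCODE_IDLE = 248, RDB_OPCODE_FREQ = 249 };

          /* Emit per-key eviction metadata ahead of the key itself. */
          static void save_key_meta(FILE *rdb, int policy_is_lfu,
                                    uint64_t idle_seconds,
                                    uint8_t lfu_counter) {
              if (policy_is_lfu) {
                  fputc(RDB_OPCODE_FREQ, rdb);
                  fputc(lfu_counter, rdb); /* counter fits in one byte */
              } else {
                  fputc(RDB_OPCODE_IDLE, rdb);
                  /* real code uses the RDB length encoding here */
                  fwrite(&idle_seconds, sizeof(idle_seconds), 1, rdb);
              }
          }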
  11. 29 Dec, 2017 1 commit
    • fix processing of large bulks (above 2GB) · 60a4f12f
      Oran Agra authored
      - protocol parsing (processMultibulkBuffer) was limited to 32-bit
        positions in the buffer, and readQueryFromClient had a potential
        overflow
      - rioWriteBulkCount used int, although rioWriteBulkString gave it size_t
      - several places in sds.c used int for string length or index
      - bugfix in RM_SaveAuxField (the return value was 1 or -1, not the length)
      - RM_SaveStringBuffer was limited to a 32-bit length
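      A minimal runnable illustration of the bug class (hypothetical
      example, not the patched code): narrowing a size_t length to int
      corrupts any count above 2^31-1.

          #include <stdio.h>
          #include <stddef.h>

          int main(void) {
              size_t len = 3UL * 1024 * 1024 * 1024; /* a 3GB bulk length */
              int narrowed = (int)len;  /* typically wraps negative */
              printf("as size_t: %zu, as int: %d\n", len, narrowed);
              return 0;
          }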
  12. 01 Dec, 2017 2 commits
  13. 19 Sep, 2017 1 commit
    • PSYNC2: Fix the way replication info is saved/loaded from RDB. · c1c99e9f
      antirez authored
      This commit attempts to fix a number of bugs reported in #4316.
      They are related to the way replication info like the replication ID,
      offsets, and the currently selected DB in the master client are stored
      and loaded by Redis. In order to avoid inconsistencies, the changes in
      this commit try to enforce that:

      1. Replication information is only stored when the RDB file is
      generated by a slave that has a valid 'master' client, so that we can
      always extract the currently selected DB.
      2. When replication information is persisted in the RDB file, either
      all the info needed for a successful PSYNC is persisted, or nothing is.
      3. The RDB replication information is only loaded if the instance is
      configured as a slave, otherwise a master could start with IDs that
      relate to a different history of the data set, and still retain such
      IDs in the future while receiving unrelated writes.
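      The three rules, condensed into a hedged standalone sketch
      (hypothetical types and names, not the actual rdb.c structures):

          #include <stdbool.h>

          typedef struct {
              bool complete;     /* rule 2: all-or-nothing persistence */
              char replid[41];   /* replication ID, hex string */
              long long offset;  /* master replication offset */
              int selected_db;   /* rule 1: needs a valid master client */
          } SavedReplInfo;

          /* Rule 3: only a configured slave may adopt the saved info;
           * a master must not inherit IDs from an unrelated history. */
          static bool use_saved_repl_info(const SavedReplInfo *rsi,
                                          bool configured_as_slave) {
              return configured_as_slave && rsi->complete;
          }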
  14. 27 Jun, 2017 1 commit
    • RDB modules values serialization format version 2. · 365dd037
      antirez authored
      The original RDB serialization format was not parsable without the
      module loaded, because the structure was managed only by the module
      itself. Moreover, RDB is a streaming format in the sense that it is
      both produced in an append-only fashion and sometimes sent directly
      to the socket (in the case of diskless replication).
      
      The fact that module values cannot be parsed without the relevant
      module loaded is a problem in many ways: RDB checking tools must load
      modules even for doing things not involving the value at all, like
      splitting an RDB into N RDBs by key, or just checking the RDB for
      sanity.
      
      In theory module values could be just a blob of data with a prefixed
      length, in order for us to be able to skip them. However prefixing the
      values with a length would mean one of the following:

      1. Being able to write some data at a previous offset. This breaks
      streaming.
      2. Buffering values before outputting them. This hurts performance.
      3. Having some chunked RDB output format. This breaks simplicity.
      
      Moreover, the above solution still makes module values a totally
      opaque matter, with the following problems:

      1. The RDB check tool can just skip the value without being able to
      at least check the general structure. For datasets composed mostly of
      module values this means checking only the outer level of the RDB,
      without actually doing any check on most of the data itself.
      2. It is not possible to do any recovery or processing of data for
      which a module no longer exists in the future, or is unknown.
      
      So this commit implements a different solution. The modules RDB
      serialization API is composed of well-defined calls to store integers,
      floats, doubles or strings. After this commit, the parts generated by
      the module API have a one-byte prefix for each of the above emitted
      parts, and there is a final EOF byte as well. So even if we don't know
      exactly how to interpret a module value, we can always parse it at a
      high level, check the overall structure, understand the types used to
      store the information, and easily skip the whole value.
      
      The change is backward compatible: older RDB files can still be loaded
      since the new encoding has a new RDB type: MODULE_2 (of value 7).
      The commit also implements the ability to check RDB files for sanity
      taking advantage of the new feature.
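      A hedged standalone sketch of the resulting layout (the tag values
      below are hypothetical; only the shape matters): every part emitted
      through the module API gets a one-byte type prefix, and the value
      ends with an EOF tag, so a parser can skip it without the module.

          #include <stdio.h>
          #include <stdint.h>
          #include <string.h>

          enum { MOD_OP_EOF = 0, MOD_OP_SINT = 1, MOD_OP_STRING = 5 };

          static void save_module_string(FILE *rdb, const char *s) {
              uint32_t len = (uint32_t)strlen(s);
              fputc(MOD_OP_STRING, rdb);         /* one-byte type prefix */
              fwrite(&len, sizeof(len), 1, rdb); /* real code: RDB lengths */
              fwrite(s, 1, len, rdb);
          }

          static void end_module_value(FILE *rdb) {
              fputc(MOD_OP_EOF, rdb);            /* final EOF byte */
          }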
  15. 09 Nov, 2016 1 commit
    • PSYNC2: different improvements to Redis replication. · 2669fb83
      antirez authored
      The gist of the changes is that now, partial resynchronizations
      between slaves and masters (without the need of a full resync with
      RDB transfer and so forth) work in a number of cases where this was
      impossible in the past. For instance:
      
      1. When a slave is promoted to master, the slaves of the old master
      can partially resynchronize with the new master.

      2. Chained slaves (slaves of slaves) can be moved to replicate to
      other slaves or the master itself, without requiring a full resync.
      
      3. The master itself, after being turned into a slave, is able to
      partially resynchronize with the new master, when it joins replication
      again.
      
      In order to obtain this, the following main changes were operated:
      
      * Slaves also keep a replication backlog, not just masters.
      
      * Same stream replication for all the slaves and sub-slaves. The
      replication stream is identical from the top-level master to its
      slaves, and is also the same from the slaves to their sub-slaves and
      so forth. This means that if a slave is later promoted to master, it
      has the same replication backlog, and can partially resynchronize
      with its slaves (that were previously slaves of the old master).
      
      * A given replication history is no longer identified by the `runid` of
      a Redis node. There is instead a `replication ID` which changes every
      time the instance has a new history no longer coherent with the past
      one. So, for example, slaves publish the same replication history as
      their master; however, when they are turned into masters, they publish
      a new replication ID, but still remember the old ID, so that they are
      able to partially resynchronize with slaves of the old master (up to a
      given offset).
      
      * The replication protocol was slightly modified so that a new extended
      +CONTINUE reply from the master is able to inform the slave of a
      replication ID change.
      
      * REPLCONF CAPA is used in order to notify masters that a slave is able
      to understand the new +CONTINUE reply.
      
      * The RDB file was extended with an auxiliary field that is able to
      select a given DB after loading in the slave, so that the slave can
      continue receiving the replication stream from the point it was
      disconnected without requiring the master to insert "SELECT" statements.
      This is useful in order to guarantee the "same stream" property, because
      the slave must be able to accumulate an identical backlog.
      
      * Slave pings to sub-slaves are now sent in a special form when the
      top-level master is disconnected, in order not to interfere with the
      replication stream. We just use out-of-band "\n" bytes as in other
      parts of the Redis protocol.
      
      An old design document is available here:
      
      https://gist.github.com/antirez/ae068f95c0d084891305
      
      However the implementation is not identical to the description,
      because during the work to implement it various changes turned out to
      be needed in order to make things work well.
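      A hedged sketch of how the extended handshake looks on the wire (the
      replication IDs and offset below are made up for illustration):

          slave  > REPLCONF capa eof capa psync2
          slave  > PSYNC 0123456789abcdef0123456789abcdef01234567 1601
          master < +CONTINUE fedcba9876543210fedcba9876543210fedcba98

      The slave asks to continue from its old replication ID and offset;
      the master accepts, and the extended +CONTINUE reply carries the new
      replication ID the slave must switch to.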
  16. 02 Oct, 2016 1 commit
  17. 11 Aug, 2016 1 commit
  18. 09 Aug, 2016 1 commit
  19. 03 Jun, 2016 1 commit
  20. 01 Jun, 2016 3 commits
  21. 27 Jul, 2015 1 commit
  22. 26 Jul, 2015 1 commit
  23. 19 Jan, 2015 1 commit
    • Improve RDB type correctness · f7043604
      Matt Stancliff authored
      It's possible large objects could be larger than 'int', so let's
      upgrade all size counters to ssize_t.

      This also fixes the rdbSaveObject serialized-bytes calculation.
      Entire serializations of data structures can be large, so we don't
      want to limit their calculated size to a 32-bit signed max.

      This commit increases object size calculation and cascades the change
      back up to serializedlength printing.
      
      Before:
      127.0.0.1:6379> debug object hihihi
      ... encoding:quicklist serializedlength:-2147483559 ...
      
      After:
      127.0.0.1:6379> debug object hihihi
      ... encoding:quicklist serializedlength:2147483737 ...
  24. 08 Jan, 2015 2 commits
    • RDB AUX fields support. · 206cd219
      antirez authored
      This commit introduces a new RDB data type called 'aux'. It is used in
      order to insert inside an RDB file key-value pairs that may serve
      different needs, without breaking backward compatibility when new
      information is embedded inside an RDB file. The contract between Redis
      versions is to ignore unknown aux fields when encountered.
      
      Aux fields can be used in order to:
      
      1. Augment the RDB file with info like version of Redis that created the
      RDB file, creation time, used memory while the RDB was created, and so
      forth.
      2. Add state about Redis inside the RDB file that we need to reload
      later: replication offset, previous master run ID, in order to improve
      failover safety and allow partial resynchronization after a slave
      restart.
      3. Anything that we may want to add to RDB files without breaking the
      ability of past versions of Redis to load the file.
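      A hedged standalone sketch of the encoding (opcode value as in rdb.h;
      the single-byte lengths below are a simplification of the real RDB
      string encoding): an aux field is just an opcode followed by two
      strings, a key and a value.

          #include <stdio.h>
          #include <string.h>

          #define RDB_OPCODE_AUX 250

          static void save_aux_field(FILE *rdb, const char *key,
                                     const char *val) {
              fputc(RDB_OPCODE_AUX, rdb);
              fputc((int)strlen(key), rdb);  /* simplified length */
              fwrite(key, 1, strlen(key), rdb);
              fputc((int)strlen(val), rdb);
              fwrite(val, 1, strlen(val), rdb);
          }

          /* e.g. save_aux_field(f, "redis-ver", "3.0.0"); a loader that
           * does not recognize the key simply ignores the pair. */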
    • New RDB v7 opcode: RESIZEDB. · e8614a1a
      antirez authored
      The new opcode is a hint about the size of the dataset (number of keys
      and number of expires) we are going to load for a given Redis database
      inside the RDB file. Since hash tables are resized accordingly ASAP,
      useless rehashing is avoided, speeding up load times significantly, in
      the order of ~20% or more for larger data sets.
      
      Related issue: #1719
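      The load-side effect, as a hedged sketch (dictExpand() is the real
      hash-table primitive; the surrounding flow is condensed from memory
      and should be treated as illustrative):

          /* On the RESIZEDB opcode: read the two size hints and pre-size
           * both hash tables before any key is inserted, so loading never
           * pays for incremental rehashing. */
          uint64_t db_size, expires_size;
          if ((db_size = rdbLoadLen(rdb, NULL)) == RDB_LENERR) return NULL;
          if ((expires_size = rdbLoadLen(rdb, NULL)) == RDB_LENERR)
              return NULL;
          dictExpand(db->dict, db_size);
          dictExpand(db->expires, expires_size);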
  25. 02 Jan, 2015 1 commit
    • Convert quicklist RDB to store ziplist nodes · 101b3a6e
      Matt Stancliff authored
      Turns out it's a huge improvement during save/reload/migrate/restore
      because, with compression enabled, we're compressing 4k or 8k chunks
      of data consisting of multiple elements in one ziplist, instead of
      compressing a series of smaller individual elements.
  26. 14 Oct, 2014 1 commit
  27. 19 Jan, 2013 1 commit
  28. 08 Nov, 2012 1 commit
  29. 30 Oct, 2012 1 commit
  30. 02 Jun, 2012 1 commit
    • Fixed RESTORE hash failure (Issue #532) · 51857c7e
      Alex Mitrofanov authored
      (additional commit notes by antirez@gmail.com):
      
      The rdbIsObjectType() macro was not updated when the new RDB object type
      of ziplist encoded hashes was added.
      
      As a result RESTORE, which uses rdbLoadObjectType(), failed when a
      ziplist encoded hash was loaded.
      This did not affect normal RDB loading, because in that case we use
      the lower-level function rdbLoadType().
      
      The commit also adds a regression test.
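      The shape of the bug, as a hedged sketch (the type values below are
      my reading of rdb.h from that era; treat them as illustrative):

          /* Valid object types were enumerated by a macro; the new
           * ziplist-encoded hash type fell outside the accepted range
           * until the macro's upper bound was bumped. */
          #define RDB_TYPE_HASH_ZIPLIST 13
          /* before: ((t) >= 0 && (t) <= 4) || ((t) >= 9 && (t) <= 12) */
          #define rdbIsObjectType(t) \
              (((t) >= 0 && (t) <= 4) || ((t) >= 9 && (t) <= 13))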
  31. 24 Apr, 2012 1 commit
  32. 09 Apr, 2012 1 commit
  33. 31 Mar, 2012 1 commit