1. 21 Dec, 2021 1 commit
    • Remove EVAL script verbatim replication, propagation, and deterministic execution logic (#9812) · 1b0968df
      zhugezy authored
      
      
      # Background
      
      The main goal of this PR is to remove the logic for Lua script verbatim replication,
      keeping only effects replication, which has been the default since Redis 5.0.
      As a result, from the user's point of view, Lua in Redis 7.0 behaves the same
      as Redis 6.0 with the default configuration.
      
      There are lots of reasons to remove verbatim replication.
      Antirez has listed some of the benefits in Issue #5292:
      
      >1. No longer need to explain to users about side effects in scripts.
          They can do whatever they want.
      >2. No need for a cache about scripts that we sent or not to the slaves.
      >3. No need to sort the output of certain commands inside scripts
          (SMEMBERS and others): this both simplifies and gains speed.
      >4. No need to store scripts inside the RDB file in order to startup correctly.
      >5. No problems about evicting keys during the script execution.
      
      Back in Redis 5.0, antirez and the core team decided to set the config
      `lua-replicate-commands yes` by default instead of removing verbatim replication
      outright, in case something went badly. 3 years later, ahead of Redis 7.0,
      it's time to remove it formally.
      
      # Changes
      
      - configuration for lua-replicate-commands removed
        - created config file stub for backward compatibility
      - Replication script cache removed
        - this is useless under script effects replication
        - relevant statistics also removed
      - script persistence in RDB files is also removed
      - Propagation of SCRIPT LOAD and SCRIPT FLUSH to replica / AOF removed
      - Deterministic execution logic in scripts removed (i.e. the logic that refused to run write commands
        after random ones, and that sorted the output of commands with random order)
        - the flags indicating which commands have non-deterministic results are kept as hints to clients.
      - `redis.replicate_commands()` & `redis.set_repl()` changed (see the sketch after this list)
        - `redis.replicate_commands()` now does nothing and returns 1
        - ...and `redis.set_repl()` can now be issued before `redis.replicate_commands()`
      - Relevant TCL cases adjusted
      - DEBUG lua-always-replicate-commands removed
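      
      A minimal sketch of what the `redis.replicate_commands()` stub boils down to, assuming the
      standard Lua C API (this shows the shape of the change, not the verbatim Redis source):
      
      ```c
      #include <lua.h>
      
      /* redis.replicate_commands() reduced to a no-op: effect replication is
       * now the only mode, so the call just reports success to the script. */
      static int luaRedisReplicateCommandsCommand(lua_State *lua) {
          lua_pushboolean(lua, 1); /* always "true" */
          return 1;                /* one value returned to the calling script */
      }
      ```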
      
      # Other changes
      - Fix a recent bug comparing CLIENT_ID_AOF to original_client->flags instead of id. (introduced in #9780)
      Co-authored-by: Oran Agra <oran@redislabs.com>
  2. 19 Dec, 2021 1 commit
    • Add external test that runs without debug command (#9964) · 6add1b72
      Oran Agra authored
      - add needs:debug flag for some tests
      - disable "save" in external tests (speedup?)
      - use debug_digest proc instead of debug command directly so it can be skipped
      - use OBJECT ENCODING instead of DEBUG OBJECT to get encoding
      - add a proc for OBJECT REFCOUNT so it can be skipped
      - move a bunch of tests in latency_monitor tests to happen later so that latency monitor has some values in it
      - add missing close_replication_stream calls
      - make sure to close the temp client if DEBUG LOG fails
  3. 08 Dec, 2021 1 commit
  4. 07 Dec, 2021 1 commit
    • Fix timing issue in logging.tcl with FreeBSD (#9910) · b947049f
      Binbin authored
      A test failure was reported in Daily CI.
      `Crash report generated on SIGABRT` with FreeBSD.
      
      ```
      *** [err]: Crash report generated on SIGABRT in tests/integration/logging.tcl
      Expected [string match *crashed by signal* ### Starting...(logs) in tests/integration/logging.tcl]
      ```
      
      It looks like `tail -1000` was executed too early, before all the
      crash logs were printed out. We can give it a few more
      chances by using `wait_for_log_messages`.
      
      Other changes:
      1. In `Server is able to generate a stack trace on selected systems`,
      use `wait_for_log_messages` to reduce the lines of code, and if it
      fails, more detailed logs can be printed.
      
      2. In `Crash report generated on DEBUG SEGFAULT`, we also use
      `wait_for_log_messages` to avoid possible timing issues.
  5. 04 Dec, 2021 1 commit
  6. 02 Dec, 2021 1 commit
    • Redis Functions - Added redis function unit and Lua engine · cbd46317
      meir@redislabs.com authored
      The Redis function unit is located inside functions.c
      and contains the Redis Function implementation:
      1. FUNCTION commands:
        * FUNCTION CREATE
        * FCALL
        * FCALL_RO
        * FUNCTION DELETE
        * FUNCTION KILL
        * FUNCTION INFO
      2. Register engine
      
      In addition, this commit introduces the first engine
      that uses the Redis Function capabilities: the
      Lua engine.
  7. 28 Nov, 2021 1 commit
    • Sort out the mess around writable replicas and lookupKeyRead/Write (#9572) · acf3495e
      Viktor Söderqvist authored
      Writable replicas now no longer use the values of expired keys. Expired keys are
      deleted when lookupKeyWrite() is used, even on a writable replica. Previously,
      writable replicas could use the value of an expired key in write commands such
      as INCR, SUNIONSTORE, etc.
      
      This commit also sorts out the mess around the functions lookupKeyRead() and
      lookupKeyWrite() so they now indicate what we intend to do with the key and
      are not affected by the command calling them.
      
      Multi-key commands like SUNIONSTORE, ZUNIONSTORE, COPY and SORT with the
      store option now use lookupKeyRead() for the keys they're reading from (which will
      not allow reading from logically expired keys).
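      
      A toy model of that rule, assuming nothing about the real db.c internals (all
      types and names here are invented for illustration):
      
      ```c
      #include <stdbool.h>
      #include <stddef.h>
      #include <time.h>
      
      /* A "read" lookup treats a logically expired key as missing, while a
       * "write" lookup deletes it before the write proceeds -- even on a
       * writable replica. */
      typedef struct {
          const char *val;
          time_t expire_at; /* 0 means no TTL */
          bool present;
      } toy_entry;
      
      static bool is_expired(const toy_entry *e, time_t now) {
          return e->expire_at != 0 && e->expire_at <= now;
      }
      
      static const char *lookup_read(const toy_entry *e, time_t now) {
          /* never serve a logically expired value */
          return (!e->present || is_expired(e, now)) ? NULL : e->val;
      }
      
      static const char *lookup_write(toy_entry *e, time_t now) {
          if (e->present && is_expired(e, now)) e->present = false; /* expire-on-write */
          return e->present ? e->val : NULL;
      }
      ```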
      
      This commit also fixes a bug where PFCOUNT could return a value of an
      expired key.
      
      Test module commands have their readonly and write flags updated to correctly
      reflect their lookups for reading or writing. Modules are not required to
      correctly reflect this in their command flags, but this change is made for
      consistency since the tests serve as usage examples.
      
      Fixes #6842. Fixes #7475.
  8. 24 Nov, 2021 2 commits
    • Replace ziplist with listpack in quicklist (#9740) · 45129059
      sundb authored
      
      
      Part three of implementing #8702, following #8887 and #9366.
      
      ## Description of the feature
      1. Replace the ziplist container of quicklist with listpack.
      2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.
      
      ## Interface changes
      1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
      2. Replace `debug ziplist` command with `debug listpack`.
      
      ## Internal changes
      1. Add `lpMerge` to merge two listpacks. (same as `ziplistMerge`)
      2. Add `lpRepr` to print info about a listpack, used in debugCommand and `quicklistRepr`. (same as `ziplistRepr`)
      3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357).
          It represents that a quicklistNode is a packed node, as opposed to a plain node.
      4. Remove the `createZiplistObject` method, which is never used.
      5. Calculate listpack entry size using an overhead overestimation in `quicklistAllowInsert` (see the sketch after this list).
          We prefer an overestimation, which would at worst leave us a few bytes below the lowest limit of 4k.
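      
      A sketch of the overestimation idea; the exact constant used in quicklist.c may differ,
      but a listpack entry costs at most about 11 bytes of overhead (up to 5 bytes of encoding
      header plus up to 5 bytes of backlen), so this bound can only over-count:
      
      ```c
      #include <stddef.h>
      
      /* Assumed upper bound on per-entry listpack overhead (header + backlen). */
      #define LP_ENTRY_OVERHEAD_ESTIMATE 11
      
      static size_t lp_entry_size_estimate(size_t element_len) {
          /* Over-counting is safe here: the 4k size check can only reject a
           * node slightly early, never accept one that is actually too big. */
          return element_len + LP_ENTRY_OVERHEAD_ESTIMATE;
      }
      ```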
      
      ## Improvements
      1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
      2. Optimize `quicklistAppendPlainNode` to avoid copying data with memcpy.
      
      ## Bugfix
      1. Fix a crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.
      
      ## Test
      1. Add unittest for `lpMerge`.
      2. Modify the old quicklist ziplist corrupt dump test.
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • Wait for `async_loading` to stop in `short read` test (#9841) · fb4f7be2
      Binbin authored
      In #9323, when `repl-diskless-load` is enabled and set to `swapdb`,
      if the master replication ID hasn't changed, we can load the data set
      asynchronously and serve read commands during the full resync.
      
      In the `diskless loading short read` test, after a successful load,
      we wait for the loading to stop before continuing the loop.
      
      After the introduction of `async_loading`, we also need to check for it.
      Otherwise the next iteration starts too soon, which may trigger a timing issue.
  9. 23 Nov, 2021 1 commit
  10. 22 Nov, 2021 2 commits
    • Fix invalid access in lpFind on corrupted listpack (#9819) · f07dedf7
      Oran Agra authored
      Issue found by corrupt-dump-fuzzer test with ASAN.
      The problem was that lpSkip and lpGetWithSize could read the next listpack entry without validating that it's in range.
      Similarly, even the memcmp in lpFind could do that, possibly crashing with a segfault; now they'll fail on an assert first.
      
      The naive fix of using lpAssertValidEntry every time resulted in a 30% degradation in the lpFind benchmark of the unit test.
      The final fix, with the condition at the bottom, has no performance implications.
    • fix string escaping in corrupt-dump test to support TCL8.5 (#9824) · f00a8ad9
      Oran Agra authored
      TCL8.5 can't handle cases where part of the string is escaped and part of it isn't;
      if there's a single char that needs escaping, we need to escape the whole string.
  11. 21 Nov, 2021 2 commits
    • Fix false positive leak reported by GCC ASAN (#9816) · 183b90a6
      Oran Agra authored
      Leak found by the corrupt-dump-fuzzer when using GCC ASAN, which seems
      to falsely report leaks on pointers kept only on the stack when calling exit.
      Instead we now use _exit on panic / assert to skip these leak checks.
      
      Additionally, check for sanitizer warnings in the corrupt-dump-fuzzer between iterations,
      so that when something is found we know which test to relate it to (and it prints a reproduction command list).
    • Prevent LCS from allocating temp memory over proto-max-bulk-len (#9817) · 14176484
      Oran Agra authored
      LCS can allocate an immense amount of memory (the sizes of the two inputs multiplied by each other).
      In the past this caused some possible security issues due to overflows, which we solved,
      and we also added use of `trymalloc` to return "Insufficient memory" instead of an OOM panic in zmalloc.
      
      But in case overcommit is enabled, it could be that we won't get the OOM panic, zmalloc
      will succeed, and then we can get OOM-killed by the kernel.
      
      The solution here is to prevent LCS from allocating transient memory bigger than the
      `proto-max-bulk-len` config.
      This config is not directly related to transient memory, but both using a hard-coded value
      and introducing a specific config seem wrong.
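      
      A sketch of the guard's shape (variable names are assumed, not the exact lcsCommand code):
      the dynamic-programming table needs (|a|+1) * (|b|+1) cells, so bound that product before
      allocating anything:
      
      ```c
      #include <stdint.h>
      #include <stddef.h>
      
      /* Return nonzero if the LCS DP table would exceed the configured cap.
       * The inputs themselves are already capped by proto-max-bulk-len, so
       * this product cannot overflow 64 bits in practice. */
      static int lcs_table_too_big(size_t alen, size_t blen,
                                   unsigned long long proto_max_bulk_len) {
          unsigned long long cells = (unsigned long long)(alen + 1) * (blen + 1);
          return cells * sizeof(uint32_t) > proto_max_bulk_len;
      }
      ```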
      
      This comes to solve an error in the corrupt-dump-fuzzer test that started failing in the daily CI; see #9799.
  12. 11 Nov, 2021 1 commit
    • Add sanitizer support and clean up sanitizer findings (#9601) · b91d8b28
      Ozan Tezcan authored
      - Added sanitizer support; `address`, `undefined` and `thread` sanitizers are available.
      - To build Redis with the desired sanitizer: `make SANITIZER=undefined`
      - There were some sanitizer findings; cleaned up the codebase.
      - Added tests with address and undefined behavior sanitizers to daily CI.
      - Added tests with address sanitizer to the per-PR CI (smoke out mem leaks sooner).
      
      Basically, there are three types of issues:
      
      **1- Unaligned load/store**: Most probably, this issue could cause a crash on a platform that
      does not support unaligned access. Redis does unaligned access only on supported platforms.
      
      **2- Signed integer overflow**: Although signed overflow can be problematic from time to time
      and change how the compiler generates code, the current findings are mostly about signed shifts
      or simple addition overflows. For most platforms Redis can be compiled for, this wouldn't cause
      any issue as far as I can tell (checked generated code on godbolt.org).
      
      **3- Minor leak** (redis-cli), **use-after-free** (just before calling exit()).
      
      UB means nothing is guaranteed, and it's risky to reason about program behavior, but I don't
      think any of the fixes here are worth backporting. As sanitizers are now part of the CI,
      preventing new issues will be the real benefit.
  13. 10 Nov, 2021 1 commit
    • Try solving test timeout on freebsd CI (#9768) · 0927a0dd
      Oran Agra authored
      First, avoid using --accurate on the FreeBSD CI; there we only care about
      systematic issues due to it being a different platform, not about
      accuracy.
      
      Secondly, looking at the test which timed out, it seems silly and
      outdated:
      - it used KEYS to attempt to trigger lazy expiry, but KEYS doesn't do
        that anymore.
      - it used some hard-coded sleeps rather than waiting for things to
        happen and exiting ASAP.
  14. 09 Nov, 2021 1 commit
    • fix short timeout in replication short read tests (#9763) · 03406fcb
      YaacovHazan authored
      In both tests, "diskless loading short read" and "diskless loading short read with module",
      the timeout for waiting for the replica to respond to a short read and log it is too short.
      
      Also, add --dump-logs in runtest-moduleapi for valgrind runs.
  15. 04 Nov, 2021 2 commits
    • Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323) · 91d0c758
      Eduardo Semprebon authored
      
      
      For diskless replication in swapdb mode, considering we already spend replica memory
      keeping a backup of the current db to restore in case of failure, we can get the following
      benefits by instead swapping databases only once we succeed in transferring the db from
      the master (a toy sketch of this flow follows the list):
      
      - Avoid the `LOADING` response during failed and successful synchronization for cases where the
        replica is already up and running with data.
      - Faster total time of diskless replication: we move from Transfer + Flush + Load
        to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
      - This could also be implemented for disk-based replication, with similar benefits, if consumers
        are willing to spend the extra memory.
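      
      The toy sketch, with all names invented for illustration (the real helpers live in
      replication.c and differ):
      
      ```c
      #include <stdbool.h>
      #include <stdlib.h>
      
      typedef struct { int dummy; } toy_db;   /* stand-in for redisDb */
      
      static bool load_rdb_into(toy_db *tmp) { (void)tmp; return true; } /* stub loader */
      static void async_free_db(toy_db *old) { free(old); }              /* stub lazy-free */
      
      /* Keep serving the current data set while loading into a temporary db;
       * swap only if the transfer succeeded, and flush the loser in the background. */
      static void diskless_load_swapdb(toy_db **live, toy_db *tmp) {
          if (load_rdb_into(tmp)) {   /* success: Transfer + Load, no upfront Flush */
              toy_db *old = *live;
              *live = tmp;
              async_free_db(old);     /* old db is freed asynchronously after the swap */
          } else {
              async_free_db(tmp);     /* failure: keep serving the old data set */
          }
      }
      ```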
      
      General notes:
      - The concept of `backupDb` becomes `tempDb` for clarity.
      - Async loading mode will only kick in if the replica is syncing from a master that has the same
        repl-id the one it had before. i.e. the data it's getting belongs to a different time of the same timeline. 
      - New property in INFO: `async_loading` to differentiate from the blocking loading
      - Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
        and the tempDb that is passed around.
      - Because this is affecting replicas only, we assume that if they are not readonly and write commands
        during replication, they are lost after SYNC same way as before, but we're still denying CONFIG SET
        here anyways to avoid complications.
      
      Considerations for review:
      - We have many cases where the server.loading flag is used, and even though I tried my best, there may
        be cases where async_loading should be checked as well and cases where it shouldn't (this would require
        a very good understanding of the whole code).
      - Several places that had different behavior depending on the loading flag were actually meant to just
        handle commands coming from the AOF client differently than ones coming from real clients; these were
        changed to check CLIENT_ID_AOF instead.
      
      **Additional for Release Notes**
      - Bugfix: server.dirty was not incremented for any kind of diskless replication; as a result it wouldn't
        contribute to triggering the next database SAVE.
      - New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
      - Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
        Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
        ABORTED and COMPLETED.
      - New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions,
        allowing modules to declare that they support diskless replication with async loading (when absent,
        we fall back to disk-based loading).
      Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • Retry when a blocked connection system call is interrupted by a signal (#9629) · ccf8a651
      menwen authored
      
      
      When repl-diskless-load is enabled, the connection is set to the blocking state.
      The connection may be interrupted by a signal during a system call.
      This would have resulted in a disconnection and possibly a reconnection loop.
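      
      The classic shape of such a fix, as a sketch rather than the actual connection.c patch:
      on a blocking socket, a signal can interrupt read() with EINTR, and instead of treating
      that as a broken link we retry the call.
      
      ```c
      #include <errno.h>
      #include <unistd.h>
      
      /* Retry a blocking read that was interrupted by a signal. */
      static ssize_t read_retry_eintr(int fd, void *buf, size_t len) {
          ssize_t n;
          do {
              n = read(fd, buf, len);
          } while (n == -1 && errno == EINTR); /* interrupted, not an error: retry */
          return n;
      }
      ```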
      Co-authored-by: Oran Agra <oran@redislabs.com>
  16. 03 Nov, 2021 1 commit
    • Add support for list type to store elements larger than 4GB (#9357) · f27083a4
      perryitay authored
      
      
      Redis lists are stored in quicklists, which are currently linked lists of ziplists.
      Ziplists are limited to storing elements no larger than 4GB, so when bigger
      items are added they get truncated.
      This PR changes quicklists so that they're capable of storing large items
      in quicklist nodes that are plain string buffers rather than ziplists.
      
      As part of the PR there were a few other changes in Redis:
      1. New DEBUG sub-commands:
         - QUICKLIST-PACKED-THRESHOLD - set the threshold for whether a node is stored
           as a plain buffer or a ziplist (default 1GB).
         - QUICKLIST <key> - shows low-level info about the quicklist encoding of <key>.
      2. RDB format change:
         - A new type was added - RDB_TYPE_LIST_QUICKLIST_2.
         - The container type (packed / plain) was added to the beginning of the rdb object
           (before the actual node list).
      3. Testing:
         - Tests that require over 100MB are skipped by default; a new flag was
           added to 'runtest' to run the large memory tests (not used by default).
      Co-authored-by: sundb <sundbcn@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  17. 02 Nov, 2021 1 commit
    • Fix timing issue in replication test (#9719) · 58a1d16f
      Binbin authored
      
      
      So it looks like the sampling `set loglines [count_log_lines -2]` was
      executed too late, and the replication managed to complete before that.
      
      ```
      *** [err]: diskless no replicas drop during rdb pipe in tests/integration/replication.tcl
      log message of '"*Diskless rdb transfer, done reading from pipe, 2 replicas still up*"' not found in ./tests/tmp/server.6124.69/stdout after line: 52 till line: 52
      ```
      
      Changes:
      1. When we search the master log file, we start the search from before we sent the REPLICAOF
        command, to prevent a race in which the replication completed before we sampled the log line count.
      2. We don't need to sample the replica loglines since it's a fresh replica that's just been started, so the message
        we're looking for is the first occurrence in the log; we can start the search from 0.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  18. 01 Nov, 2021 1 commit
    • Fix race condition in psync2-pingoff test (#9712) · cea7809c
      Binbin authored
      
      
      Test failed on freebsd:
      ```
      *** [err]: Make the old master a replica of the new one and check conditions in tests/integration/psync2-pingoff.tcl
      Expected '162' to be equal to '176' (context: type eval line 18 cmd {assert_equal [status $R(0) master_repl_offset] [status $R(1) master_repl_offset]} proc ::test)
      ```
      
      There are two possible race conditions in the test.
      
      1. The code waits for sync_full to increment and assumes that means the
      master did the fork. But in fact there are cases where the master will increment
      that sync_full counter (after the replica asks for a sync), but will see that
      there's already a fork running and will delay the fork creation.
      
      In this case the INCR will be executed before the fork happens, so it won't
      be in the command stream. Solve that by waiting for `master_link_status: up`
      on the replica before the INCR.
      
      2. The repl-ping-replica-period is still high (1 second), so there's a chance the
      master will send an additional PING between the two calls to INFO (the line that
      fails is the one that samples INFO from both servers). So there's a chance one of
      them will have an incremented offset due to the PING and the other won't have it yet.
      
      In theory we could wait for the repl_offset to match, but then we risk a
      situation where that race hides an offset mismatch. So instead, I think we
      should just change repl-ping-replica-period to prevent further pings from being pushed.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  19. 29 Oct, 2021 1 commit
  20. 26 Oct, 2021 1 commit
  21. 25 Oct, 2021 2 commits
    • Add timestamp annotations in AOF (#9326) · 9ec3294b
      Wang Yuan authored
      Add timestamp annotation in AOF, one part of #9325.
      
      Enabled with the new `aof-timestamp-enabled` config option.
      
      The timestamp annotation format is "#TS:${timestamp}\r\n". "TS" is short for
      "timestamp"; keeping it short saves extra bytes in the AOF.
      
      We can use the timestamp annotation for some special functions, as sketched below:
      - knowing the execution time of commands
      - restoring data to a specific point in time (by using redis-check-aof to truncate the file)
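      
      A minimal sketch of emitting the annotation (an assumed helper, not the real aof.c code);
      annotations begin with '#' so consumers that skip comment lines stay compatible:
      
      ```c
      #include <stdio.h>
      #include <time.h>
      
      /* Append a "#TS:<unix-time>\r\n" annotation line to the AOF stream. */
      static int emit_ts_annotation(FILE *aof) {
          return fprintf(aof, "#TS:%lld\r\n", (long long)time(NULL));
      }
      ```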
    • Replication backlog and replicas use one global shared replication buffer (#9166) · c1718f9d
      Wang Yuan authored
      ## Background
      For a Redis master, each replica uses its own copy of the replication buffer. That is a big
      waste of memory: the more replicas, the more waste, and allocating/freeing memory for every
      reply list also costs a lot. If we set client-output-buffer-limit small and write traffic is
      heavy, the master may disconnect from replicas and never finish synchronizing with them; if we
      set client-output-buffer-limit big, the master may go OOM when there are many replicas that
      each hold a lot of memory.
      Because the replication buffer content is the same for every replica client, a simple idea is
      to have all replicas share one replication buffer, which effectively saves memory.
      
      Since the replication backlog content is the same as the replicas' output buffers, we can
      now discard the replication backlog memory and use the global shared replication buffer
      to implement the replication backlog mechanism.
      
      ## Implementation
      I created one global "replication buffer" which contains the content of the replication stream.
      The structure of the "replication buffer" is similar to the reply list that exists in every client,
      but the node of the list is a `replBufBlock`, which has `id`, `repl_offset` and `refcount` fields.
      ```c
      /* The replication buffer is a list of replBufBlock.
       *
       * +--------------+       +--------------+       +--------------+
       * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
       * +--------------+       +--------------+       +--------------+
       *      |                                            /       \
       *      |                                           /         \
       *      |                                          /           \
       *  Repl Backlog                              Replica_A     Replica_B
       * 
       * Each replica or the replication backlog increments only the refcount of the
       * 'ref_repl_buf_node' it points to. So when a replica walks to the next
       * node, it should first increase the next node's refcount, and when we trim
       * the replication buffer nodes, we always remove nodes from the head, whose
       * refcount is 0. If the refcount of the head node is not 0, we must stop
       * trimming and never iterate to the next node. */
      
      /* Similar to 'clientReplyBlock', it is used for buffers shared between
       * all replica clients and the replication backlog. */
      typedef struct replBufBlock {
          int refcount;           /* Number of replicas or repl backlog using. */
          long long id;           /* The unique incremental number. */
          long long repl_offset;  /* Start replication offset of the block. */
          size_t size, used;
          char buf[];
      } replBufBlock;
      ```
      So now when we feed the replication stream into the replication backlog and all replicas, we only need
      to feed the stream into the global replication buffer, via `feedReplicationBuffer`. In this function, we set some
      fields of the replication backlog and replicas to references of the global replication buffer blocks. We also
      need to check the replicas' output buffer limit, freeing a replica if it exceeds `client-output-buffer-limit`,
      and trim the replication backlog if it exceeds `repl-backlog-size`.
      
      When sending replies to replicas, we also iterate the replication buffer blocks and send their
      content; when a block has been fully sent to a replica, we decrement its refcount and
      increment the next node's refcount, and then free the blocks whose refcount is 0 from the
      head of the replication buffer list.
      
      Since we now use a linked list to manage the replication backlog, it may cost much time to iterate
      all the list nodes to find the corresponding replication buffer node. So we create a rax tree to
      index some of the nodes; to avoid the rax tree occupying too much memory, I record
      one node per 64 for the index.
      
      Currently, to make partial resynchronization possible as much as we can, we always let the replication
      backlog hold the last reference to the replication buffer blocks. The backlog size may exceed our setting
      if slow replicas reference vast numbers of replication buffer blocks, but this method doesn't increase
      memory usage since they share the replication buffer. To avoid freezing the server while freeing
      unreferenced replication buffer blocks when we need to trim the backlog down to its configured size,
      we trim the backlog incrementally (we free 64 blocks per call now), and make it faster in
      `beforeSleep` (free 640 blocks).
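      
      A sketch of that incremental trim, assuming Redis's adlist API and the `replBufBlock`
      struct shown above (this is the shape of the logic, not the verbatim source):
      
      ```c
      #include <stddef.h>
      #include "adlist.h" /* Redis's generic doubly linked list */
      
      /* Free unreferenced blocks from the head only, a bounded number per
       * call, so trimming a huge backlog never freezes the server. */
      static void trim_repl_buffer(list *blocks, size_t max_blocks) {
          while (max_blocks-- && listLength(blocks)) {
              replBufBlock *b = listNodeValue(listFirst(blocks));
              if (b->refcount != 0) break;            /* head still referenced: stop here */
              listDelNode(blocks, listFirst(blocks)); /* unlink and free the head block */
          }
      }
      ```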
      
      ### Other changes
      - `mem_total_replication_buffers`: we add this field to the INFO command; it reports the total
        memory used by replication buffers.
      - `mem_clients_slaves`: now even if a replica is slow to replicate and its output buffer memory
        is not 0, this field may still be 0. Since the replication backlog and replicas share one global
        replication buffer, only when the replication buffer memory exceeds the configured repl backlog size
        do we consider the excess the replicas' memory; otherwise, we consider the replication buffer memory
        the consumption of the repl backlog.
      - Key eviction
        Since all replicas and the replication backlog share the global replication buffer, we count only
        the part exceeding the backlog size as the extra, separate consumption of the replicas.
        Because we trim the backlog incrementally in the background, the backlog size may exceed our
        setting if slow replicas that reference vast numbers of replication buffer blocks disconnect.
        To avoid a massive eviction loop, we don't count the delayed-freed replication backlog into
        used memory even if there are no replicas, i.e. we still regard this memory as the replicas' memory.
      - `client-output-buffer-limit` check for replica clients
        It doesn't make sense to set the replica clients' output buffer limit lower than the repl-backlog-size
        config (a partial sync would succeed and then the replica would get disconnected). Such a configuration
        is ignored (the value of repl-backlog-size is used instead). This doesn't have memory consumption
        implications since the replica client shares the backlog buffers' memory.
      - Drop the replication backlog after loading data if needed
        We always create the replication backlog if the server is a master; we need it because we put DELs in
        it when loading expired keys from the RDB. But if the RDB doesn't have replication info, or there is no
        RDB, partial resynchronization is not possible, so to avoid the extra memory of the replication backlog,
        we drop it.
      - Multi IO threads
        Since all replicas and the replication backlog use the global replication buffer, when I/O threads are
        enabled we must let the main thread handle sending the output buffer to all replicas, to guarantee
        thread-safe data access. Before, other IO threads could handle sending the output buffers of all replicas.
      
      ## Other optimizations
      This solution resolves some other problems:
      - When replicas disconnect from the master for exceeding the output buffer limit, releasing the output
        buffer of the replicas could freeze the server if a big `client-output-buffer-limit` is set for replicas;
        now it doesn't cause freezing.
      - This implementation mitigates the reply-list copy cost (which also freezes the server) when one replica
        has a huge reply buffer and another replica needs a copy of that buffer for full synchronization; now we
        just copy the reference info, which is very light.
      - If we set the replication backlog size big, it could also cost much time to copy the replication backlog
        into a replica's output buffer, but this commit eliminates that problem.
      - Resizing the replication backlog no longer empties its current content.
  22. 18 Oct, 2021 1 commit
    • Attempt to fix a valgrind test failure due to timing (#9643) · 276b460e
      Oran Agra authored
      In the past few days I've seen two failures in the valgrind daily test.
      
      *** [err]: slave fails full sync and diskless load swapdb recovers it in tests/integration/replication.tcl
      Replica didn't get into loading mode
      
      I can't reproduce it, but I'm hoping it's just too slow (failing to start loading within 5 seconds).
  23. 04 Oct, 2021 1 commit
    • improve the stability and correctness of "Test child sending info" (#9562) · 5becb7c9
      YaacovHazan authored
      Since we measure the COW size in this test by changing some keys and reading
      the reported COW size, we need to ensure that the "dismiss mechanism" (#8974)
      will not free memory and reduce the COW size.
      
      For that, this commit changes the size of the keys to 512B (less than a page),
      and because some keys may fall into the same page, we modify ten keys
      on each iteration and check for at least a 50% change in the COW size.
  24. 26 Sep, 2021 1 commit
  25. 23 Sep, 2021 1 commit
    • Add ZMPOP/BZMPOP commands. (#9484) · 14d6abd8
      Binbin authored
      This is similar to the recent addition of LMPOP/BLMPOP (#9373), but for zsets.
      
      Syntax for the new ZMPOP command:
      `ZMPOP numkeys [<key> ...] MIN|MAX [COUNT count]`
      
      Syntax for the new BZMPOP command:
      `BZMPOP timeout numkeys [<key> ...] MIN|MAX [COUNT count]`
      
      Some background:
      - ZPOPMIN/ZPOPMAX take only one key, and can return multiple elements.
      - BZPOPMIN/BZPOPMAX take multiple keys, but return only one element from just one key.
      - ZMPOP/BZMPOP can take multiple keys, and can return multiple elements from just one key.
      
      Note that while ZMPOP/BZMPOP can take multiple keys, they eventually operate on just one key.
      And they propagate as ZPOPMIN or ZPOPMAX with the COUNT option.
      
      As new commands, if we cannot pop any elements the response is:
      - ZMPOP: a NIL in both RESP2 and RESP3, unlike ZPOPMIN/ZPOPMAX which return an empty array.
      - BZMPOP: a NIL in both RESP2 and RESP3 when the timeout is reached, like BZPOPMIN/BZPOPMAX.
      
      The normal response is a nested array in RESP2 and RESP3:
      ```
      ZMPOP/BZMPOP
      1) keyname
      2) 1) 1) member1
            2) score1
         2) 1) member2
            2) score2
      
      In RESP2:
      1) "myzset"
      2) 1) 1) "three"
            2) "3"
         2) 1) "two"
            2) "2"
      
      In RESP3:
      1) "myzset"
      2) 1) 1) "three"
            2) (double) 3
         2) 1) "two"
            2) (double) 2
      ```
  26. 19 Sep, 2021 1 commit
  27. 14 Sep, 2021 1 commit
  28. 13 Sep, 2021 1 commit
    • PSYNC2: make partial sync possible after master reboot (#8015) · 794442b1
      zhaozhao.zz authored
      The main idea is to allow a master to load replication info from the RDB file when rebooting. If the master can load replication info, replicas may have a chance to PSYNC with it, which can save much traffic.
      
      The key point is that we need to guarantee safety and consistency, so there
      are two differences between master and replica:
      
      1. The master loads the replication info as the secondary ID and
         offset, in case other masters have the same replid.
      2. When the master loads the RDB, it propagates expired keys as DEL
         commands to the replication backlog, so replicas can receive these
         commands and delete stale keys.
         p.s. the keys expired during RDB loading are useful info for users, so
         we show them as `rdb_last_load_keys_expired` and `rdb_last_load_keys_loaded` in INFO persistence.
      
      Moreover, after loading the replication info, the master should update
      `no_replica_time`, in case loading the RDB took a long time.
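      
      A toy model of the secondary-ID rule above (field names are assumed for illustration,
      not the real replication state):
      
      ```c
      #include <string.h>
      
      struct repl_state {
          char replid[41];    /* freshly generated primary ID after reboot */
          char replid2[41];   /* ID recovered from the RDB, kept as secondary */
          long long second_replid_offset;
      };
      
      /* Keep the RDB's replication ID only as the *secondary* ID: replicas of
       * the previous run can still PSYNC against it, while the new primary ID
       * avoids clashing with other masters that share the old replid.
       * Assumes rdb_replid is a 40-char hex ID plus NUL terminator. */
      static void adopt_rdb_repl_info(struct repl_state *st,
                                      const char *rdb_replid, long long rdb_offset) {
          memcpy(st->replid2, rdb_replid, sizeof(st->replid2));
          st->second_replid_offset = rdb_offset;
      }
      ```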
  29. 09 Sep, 2021 3 commits
    • Replace all usage of ziplist with listpack for t_zset (#9366) · 3ca6972e
      sundb authored
      Part two of implementing #8702 (zset), after #8887.
      
      ## Description of the feature
      Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.
      
      ## Rdb format changes
      New `RDB_TYPE_ZSET_LISTPACK` rdb type.
      
      ## Rdb loading improvements:
      1) Pre-expansion of the dict for validation of duplicate data for listpack and ziplist.
      2) Simplified the release of empty key objects during RDB loading.
      3) Unified the ziplist and listpack data verification methods for zset and hash, and moved the code to rdb.c.
      
      ## Interface changes
      1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same for `zset-max-listpack-value`).
      2) OBJECT ENCODING will return listpack instead of ziplist.
      
      ## Listpack improvements:
      1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from listpack.
      2) Improve the performance of `lpCompare`: converting from string to integer is faster than converting from integer to string.
      3) Replace `snprintf` with `ll2string` to improve performance in converting numbers to strings in `lpGet()`.
      
      ## Zset improvements:
      1) Improve the performance of `zzlFind` method, use `lpFind` instead of `lpCompare` in a loop.
      2) Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to delete an element of a zset.
      
      ## Tests
      1) Add some unittests for `lpDeleteRange` and `lpDeleteRangeWithEntry` function.
      2) Add zset RDB loading test.
      3) Add benchmark test for `lpCompare` and `ziplistCompare`.
      4) Add empty listpack zset corrupt dump test.
    • Add LMPOP/BLMPOP commands. (#9373) · c50af0ae
      Binbin authored
      We want to add a COUNT option to BLPOP,
      but we can't do that without breaking compatibility due to the command arguments syntax.
      So this commit introduces two new commands.
      
      Syntax for the new LMPOP command:
      `LMPOP numkeys [<key> ...] LEFT|RIGHT [COUNT count]`
      
      Syntax for the new BLMPOP command:
      `BLMPOP timeout numkeys [<key> ...] LEFT|RIGHT [COUNT count]`
      
      Some background:
      - LPOP takes one key, and can return multiple elements.
      - BLPOP takes multiple keys, but returns one element from just one key.
      - LMPOP can take multiple keys and return multiple elements from just one key.
      
      Note that while LMPOP/BLMPOP can take multiple keys, they eventually operate on just one key.
      And they propagate as LPOP or RPOP with the COUNT option.
      
      As new commands, they still return NIL if we can't pop any elements.
      The normal response is a nested array in RESP2 and RESP3, like:
      ```
      LMPOP/BLMPOP 
      1) keyname
      2) 1) element1
         2) element2
      ```
      I.e. unlike BLPOP, which returns a key name and one element and so uses a flat array,
      and LPOP, which returns multiple elements with no key name and again uses a flat array,
      this one has to return a nested array, and it does so for both RESP2 and RESP3 (like SCAN does).
      
      For some discussion, see #766 and #8824.
    • Delay to discard cached master when full synchronization (#9398) · cee3d67f
      Wang Yuan authored
      * Delay discarding the cached master until full synchronization succeeds
      * Don't disconnect replicas before loading the transferred RDB during a full sync
      
      Previously, once a replica needed to start full synchronization with its master,
      it would discard the cached master whether the full synchronization failed or
      not.
      Now we discard the cached master only when transferring the RDB is finished
      and we start to change the data space; this lets the replica start partial
      resynchronization with another new master if the new master fails
      during full synchronization.
  30. 06 Sep, 2021 1 commit
    • Optimize quicklistIndex to seek from the nearest end (#9454) · 547c3405
      Viktor Söderqvist authored
      Until now, a negative index seeks from the end of the list and a
      positive one seeks from the beginning. This change makes it seek from
      the nearest end, regardless of the sign of the given index.
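      
      A sketch of the direction choice (assumed form; the real logic lives inside
      quicklistIndex in quicklist.c): normalize the index against the element count
      and walk from whichever end is closer.
      
      ```c
      #include <stdbool.h>
      
      /* Assumes the index is already known to be in range. */
      static bool seek_from_head(long index, unsigned long count) {
          unsigned long pos = index < 0 ? count + index : (unsigned long)index;
          return pos <= count / 2; /* first half: walk forward; else walk backward */
      }
      ```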
      
      quicklistIndex is used by all list commands which operate by index.
      
      LINDEX key 999999 in a list of 1M elements is greatly optimized by
      this change. Latency is cut by 75%.
      
      LINDEX key -1000000 in a list of 1M elements, likewise.
      
      LRANGE key -1 -1 is affected by this, since LRANGE converts the
      indices to positive numbers before seeking.
      
      The tests for corrupt dumps are updated to make sure the corrupt
      data is seeked in the same direction as before.
  31. 29 Aug, 2021 1 commit
    • redis-benchmark: improved help and warnings (#9419) · 97dcf95c
      Viktor Söderqvist authored
      1. The output of --help:
      
        * On the Usage line, just write [OPTIONS] [COMMAND ARGS...] instead of listing
          only a few arbitrary options and no command.
        * For --cluster, describe that if the command is supplied on the command line,
          the key must contain "{tag}". Otherwise, the command will not be sent to the
          right cluster node.
        * For -r, add a note that if -r is omitted, all commands in a benchmark will
          use the same key. Also align the description.
        * For -t, describe that -t is ignored if a command is supplied on the command
          line.
      
      2. Print a warning if -t is present when a specific command is supplied.
      
      3. Print all warnings and errors to stderr.
      
      4. Remove -e from calls in redis-benchmark test suite.
  32. 20 Aug, 2021 1 commit
  33. 18 Aug, 2021 1 commit
    • Skip OOM-related tests on incompatible platforms. (#9386) · 1d9c8d61
      Yossi Gottlieb authored
      We only run OOM related tests on x86_64 and aarch64, as jemalloc on other
      platforms (notably s390x) may actually succeed very large allocations. As
      a result the test may hang for a very long time at the cleanup phase,
      iterating as many as 2^61 hash table slots.