1. 07 Dec, 2021 1 commit
    • Don't write oom score adj to proc unless we're managing it. (#9904) · 1736fa4d
      yoav-steinberg authored
      When disabling Redis oom-score-adj management we restore the
      base value read before enabling oom-score-adj management.
      
      This fixes an issue introduced in #9748 where updating
      `oom-score-adj-values` while `oom-score-adj` was set to `no`
      would write the base oom score adj value read on startup to `/proc`.
      This is a bug: while `oom-score-adj` is disabled we should
      never write to `/proc`, and should let external processes manage it.
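
      A minimal standalone sketch of the intended guard (illustrative names except `OOM_SCORE_ADJ_NO`,
      which mirrors the real "disabled" constant; the actual logic lives in setOOMScoreAdj in server.c):

      ```c
      #include <stdio.h>

      /* Illustrative sketch, not the actual server.c code. */
      #define OOM_SCORE_ADJ_NO 0 /* "no": oom score adj is not managed by Redis */

      int applyOomScoreAdjSketch(int mode, int value) {
          if (mode == OOM_SCORE_ADJ_NO)
              return 0; /* management disabled: never write to /proc */
          FILE *fp = fopen("/proc/self/oom_score_adj", "w");
          if (!fp) return -1;
          fprintf(fp, "%d", value); /* clamped to [-1000,1000] in the real code */
          fclose(fp);
          return 0;
      }
      ```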
      
      Added appropriate tests.
  2. 02 Dec, 2021 2 commits
    • Redis Functions - Added redis function unit and Lua engine · cbd46317
      meir@redislabs.com authored
      The Redis function unit is located inside functions.c
      and contains the Redis Functions implementation:
      1. FUNCTION commands:
        * FUNCTION CREATE
        * FCALL
        * FCALL_RO
        * FUNCTION DELETE
        * FUNCTION KILL
        * FUNCTION INFO
      2. Register engine
      
      In addition, this commit introduces the first engine
      that uses the Redis Functions capabilities: the
      Lua engine.
    • Fix CONFIG SET test failures in MacOS/FreeBSD (#9881) · e57a4db5
      Binbin authored
      After the introduction of `Multiparam config set` in #9748,
      two test cases failed.
      
      ```
      [exception]: Executing test client: ERR Config set failed - Failed to set current oom_score_adj. Check server logs..
      ERR Config set failed - Failed to set current oom_score_adj. Check server logs.
      ```
      
      The `CONFIG sanity` test failed on `config set oom-score-adj-values`,
      which is a "special" config that does not catch no-op changes and
      then updates `oom-score-adj`, which is not supported on macOS.
      We solve it by adding `oom-score*` to the `skip_configs` list.
      
      ```
      *** [err]: CONFIG SET rollback on apply error in tests/unit/introspection.tcl
      Expected an error but nothing was caught
      ```
      
      The `CONFIG SET rollback on apply error` test failed on
      `config set port $used_port`. In theory, it should throw the
      error `Unable to listen on this port*`, but on macOS it did not.
      We solve it by adding `-myaddr 127.0.0.1` to the socket call.
  3. 01 Dec, 2021 2 commits
    • Redis Functions - Introduce script unit. · fc731bc6
      meir@redislabs.com authored
      The script unit is a new unit located in script.c.
      Its purpose is to provide an API for functions (and eval)
      to interact with Redis. Interaction mostly means
      executing commands, but also functionality like calling
      back into Redis on long scripts, or checking whether the script was killed.

      The interaction is done using a scriptRunCtx object that
      needs to be created by the user and initialized using scriptPrepareForRun.
      
      Detailed list of functionality exposed by the unit:
      1. Calling commands (including all the validation checks such as
         acl, cluster, read only run, ...)
      2. Set Resp
      3. Set Replication method (AOF/REPLICATION/NONE)
      4. Call back into Redis on long-running scripts to allow Redis to reply
         to clients and perform script kill
      
      The commit introduces the new unit and uses it in the eval commands to
      interact with Redis.
    • Multiparam config set (#9748) · 0e5b813e
      yoav-steinberg authored
      We can now do: `config set maxmemory 10m repl-backlog-size 5m`
      
      ## Basic algorithm to support "transaction like" config sets:
      
      1. Backup all relevant current values (via get).
      2. Run "verify" and "set" on everything, if we fail run "restore".
      3. Run "apply" on everything (optional optimization: skip functions already run). If we fail run "restore".
      4. Return success.
      
      ### restore
      1. Run set on everything in backup. If we fail log it and continue (this puts us in an undefined
         state but we decided it's better than the alternative of panicking). This indicates either a bug
         or some unsupported external state.
      2. Run apply on everything in backup (optimization: skip functions already run). If we fail log
         it (see comment above).
      3. Return error.
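
      A compact, self-contained sketch of the flow above (all names here are hypothetical;
      the real table-driven code lives in config.c):

      ```c
      #include <stdio.h>

      /* Hypothetical sketch of the "transaction like" CONFIG SET flow. */
      typedef struct {
          const char *name;
          const char *(*get)(void);      /* capture current value for backup */
          int (*set)(const char *val);   /* verify + set; returns 0 on failure */
          int (*apply)(void);            /* idempotent; may be NULL */
          const char *backup;
      } configParam;

      int setConfigsAtomically(configParam *p, const char **vals, int n) {
          int i;
          for (i = 0; i < n; i++) p[i].backup = p[i].get();  /* 1. backup */
          for (i = 0; i < n; i++)                            /* 2. verify + set */
              if (!p[i].set(vals[i])) goto restore;
          for (i = 0; i < n; i++)                            /* 3. apply */
              if (p[i].apply && !p[i].apply()) goto restore;
          return 1;                                          /* 4. success */
      restore:
          /* Best effort: on failure we log and continue (an undefined state
           * was judged better than panicking). */
          for (i = 0; i < n; i++)
              if (!p[i].set(p[i].backup))
                  fprintf(stderr, "restore of %s failed\n", p[i].name);
          for (i = 0; i < n; i++)
              if (p[i].apply) p[i].apply();
          return 0;
      }
      ```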
      
      ## Implementation/design changes:
      * Apply functions are idempotent (they have no effect if they are run more than once for the same config).
      * Set functions get no indication of whether we're reading the config file or running from the `CONFIG SET`
         command (the `update` argument was removed).
      * A set function should set some config variable and assume an (optional) apply function will use that
         later to apply it. If we know a setting can be safely applied immediately, can always be reverted,
         and doesn't depend on any other configuration, we can apply it immediately from within the set function
         (and not store the setting anywhere). This is the case for the `dir` config, for example, which has no
         apply function. Nor is an apply function needed when setting the variable in the `server` struct
         is all that needs to be done to make the configuration take effect. Note that the original concept of `update_fn`,
         which received the old and new values, was removed and replaced by the optional apply function.
      * Apply functions use settings written to the `server` struct and don't receive any inputs.
      * I take care that for the generic (non-special) configs, if there's no change I avoid calling the setter (possible
         optimization: avoid calling the apply function as well).
      * Passing the same config parameter more than once to `config set` will fail. You can't do `config set my-setting
         value1 my-setting value2`.
      
      Note that getting `save` in the context of the conf file parsing to work here as before was a pain.
      The conf file supports an aggregate `save` definition, where each `save` line is added to the server's
      save params. This is unlike any other line in the config file where each line overwrites any previous
      configuration. Since we now support passing multiple save params in a single line (see top comments
      about `save` in https://github.com/redis/redis/pull/9644) we should deprecate the aggregate nature of
      this config line and perhaps reduce this ugly code in the future.
  4. 30 Nov, 2021 2 commits
    • Adds auto-seq-only-generation via `XADD ... <ms>-*` (#9217) · 21aa1d4b
      Itamar Haber authored
      Adds the ability to autogenerate the sequence part of the millisecond-only explicit ID specified for `XADD`. This is useful in case added entries have an externally-provided timestamp without sub-millisecond resolution.
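
      For example (a hypothetical session; the auto-generated sequence numbers depend on the stream's contents):

      ```
      127.0.0.1:6379> xadd mystream 1526919030474-* message one
      "1526919030474-0"
      127.0.0.1:6379> xadd mystream 1526919030474-* message two
      "1526919030474-1"
      ```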
    • Swap '\r\n' with spaces when returning a big number reply from Lua script. (#9870) · b8e82d20
      Meir Shpilraien (Spielrein) authored
      The issue can only happen with a bad Lua script that claims to return
      a big number while actually returning data which is not a big number (contains
      chars that are not digits). Such a thing will not cause an issue unless the big
      number value contains `\r\n`, which then messes up the RESP3 structure. The fix
      replaces all appearances of '\r\n' with spaces.

      Such an issue can also happen on simple string or error replies, but those
      already handle it the same way this PR does (replace `\r\n` with spaces).
      
      Other reply types are not vulnerable to this issue because they do not
      rely on free text terminated with `\r\n` (either they contain the
      bulk length, like string replies, or they are typed replies that cannot inject free
      text, like booleans or numbers).
      
      The issue only exists on the unstable branch; big number replies from Lua scripts
      were not yet added to any official release.
  5. 29 Nov, 2021 3 commits
    • Fix CLIENT KILL kill all clients with id 0 (#9853) · 3119a3ae
      Binbin authored
      * Fix CLIENT KILL killing all clients with id 0 or with skipme
      CLIENT KILL with the ID argument should only kill the client with the provided ID. In the old code,
      CLIENT KILL with id 0 would kill all the connected clients.
      Co-authored-by: Ofir Luzon <ofirluzon@gmail.com>
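
      With the fix, an ID of 0 is expected to be rejected up front instead of matching every client
      (a sketch of the behavior; the exact error text may differ):

      ```
      127.0.0.1:6379> client kill id 0
      (error) ERR client-id should be greater than 0
      ```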
    • improvement of a blocking xread test (#9859) · d56ded89
      leishiao authored
      This test relies on `XREAD BLOCK 20000 STREAMS s1{t} s2{t} s3{t} $ $ $`
      being executed by Redis before `XADD s2{t} * new abcd1234`. A `wait_for_blocked_client`
      is needed between the two to ensure the order; otherwise `XADD s2{t} * new abcd1234`
      might be executed first due to network delay, causing a test failure.
      Co-authored-by: xiaolei <xiaolei@91jkys.com>
    • Fix abnormal compression due to out-of-control recompress (#9849) · 494ee2f1
      sundb authored
      This PR follows #9779.

      ## Description of the feature
      Now when we turn on the `list-compress-depth` configuration, the list will compress
      the ziplists between `[list-compress-depth, -list-compress-depth]`.
      When we need to use the compressed data, we first decompress it, then use it,
      and finally compress it again.
      This is controlled by `quicklistNode->recompress`, which is designed to avoid the need to
      re-traverse the entire quicklist for compression after each decompression; we only need
      to recompress the quicklistNode being used.
      In order to ensure the correctness of recompressing, we should normally let
      quicklistDecompressNodeForUse and quicklistCompress appear in pairs; otherwise,
      it may lead to the head and tail being compressed or the middle ziplists not being
      compressed correctly, which is exactly the problem this PR solves.
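
      The pairing rule, roughly as it would appear inside quicklist.c
      (`quicklistDecompressNodeForUse` and `quicklistCompress` are the real helpers named above;
      the surrounding function is illustrative and assumes quicklist.h's types):

      ```c
      /* Illustrative pattern: every temporary decompression is paired with a
       * compress call once the node's data has been used. */
      static void useNodeDataSketch(quicklist *ql, quicklistNode *node) {
          quicklistDecompressNodeForUse(node); /* marks node->recompress */
          /* ... read or modify the node's ziplist here ... */
          quicklistCompress(ql, node); /* recompresses the node if it was marked,
                                        * and never compresses head/tail nodes
                                        * within list-compress-depth */
      }
      ```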
      
      ## Solution
      1. Reset the `quicklistIter` after insert and replace.
          The quicklist node will be compressed in `quicklistInsertAfter`, `quicklistInsertBefore` and
         `quicklistReplaceAtIndex`, so we can safely reset the quicklistIter to avoid it being used again.
      2. `quicklistIndex` will return an iterator that can be used to recompress the current node after use.

      ## Test
      1. In the `Stress Tester for #3343-Similar Errors` test, print the violating commands when
         the server crashes or when a `valgrind` or `asan` error is detected.
      2. Add a crash test due to wrongly recompressing after `lrem`.
      3. Remove `insert before with 0 elements` and `insert after with 0 elements`;
         we now forbid any operation on a NULL quicklistIter.
  6. 28 Nov, 2021 4 commits
    • Improve stability in some blocking command tests (#9856) · 8759c1e1
      Binbin authored
      In order to test the situation where multiple clients are
      blocked, we set up multiple clients to execute some blocking
      commands. These tests depend on the order of command processing.

      Those tests were based on the wrong assumption that the command
      sent first will be executed by the server first, which is obviously
      wrong under network delays.

      This commit ensures orderly execution of commands by waiting for
      and checking the number of blocked clients each time.

      Fixes #9850
    • Clean Lua stack before parsing call reply to avoid crash on a call with many arguments (#9809) · 6b0b04f1
      Meir Shpilraien (Spielrein) authored
      Commit 0f8b634c (CVE-2021-32626, released in 6.2.6, 6.0.16, 5.0.14)
      fixes an invalid memory write issue by using the `lua_checkstack` API to make
      sure the Lua stack does not overflow. This fix was added in 3 places:
      1. `luaReplyToRedisReply`
      2. `ldbRedis`
      3. `redisProtocolToLuaType`

      In the first 2 functions, `lua_checkstack` is handled gracefully, while the
      last is handled with an assert and a statement that this situation cannot
      happen (only with a misbehaving module):

      > the Redis reply might be deep enough to explode the LUA stack (notice
      that currently there is no such command in Redis that returns such a nested
      reply, but modules might do it)

      The issue that was discovered is that user arguments are also considered part
      of the stack, and so the following script (for example) makes the assertion reachable:
      ```
      local a = {}
      for i=1,7999 do
          a[i] = 1
      end 
      return redis.call("lpush", "l", unpack(a))
      ```
      
      This is a regression because such a script would have worked before, and now
      it crashes Redis. The solution is to clear the function arguments from the Lua
      stack, which makes the original assumption true and the assertion unreachable.
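
      The essence of the fix, as a sketch against the standard Lua C API (the real change
      sits in Redis's Lua call path; only `lua_settop` is shown here):

      ```c
      #include <lua.h>

      /* After the call arguments have been copied into our own argv array they
       * are no longer needed on the Lua stack; dropping them restores the
       * headroom the original bound check assumed. */
      static void clearLuaArgumentsSketch(lua_State *lua) {
          lua_settop(lua, 0); /* empty the stack */
      }
      ```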
    • Sort out the mess around writable replicas and lookupKeyRead/Write (#9572) · acf3495e
      Viktor Söderqvist authored
      Writable replicas now no longer use the values of expired keys. Expired keys are
      deleted when lookupKeyWrite() is used, even on a writable replica. Previously,
      writable replicas could use the value of an expired key in write commands such
      as INCR, SUNIONSTORE, etc.
      
      This commit also sorts out the mess around the functions lookupKeyRead() and
      lookupKeyWrite() so they now indicate what we intend to do with the key and
      are not affected by the command calling them.
      
      Multi-key commands like SUNIONSTORE, ZUNIONSTORE, COPY and SORT with the
      store option now use lookupKeyRead() for the keys they're reading from (which will
      not allow reading from logically expired keys).
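
      In command code the convention now reads like this (a sketch assuming server.h's types;
      `lookupKeyRead`/`lookupKeyWrite` are the real db.c helpers, the command itself is hypothetical):

      ```c
      /* Choose the lookup by what we intend to do with the key,
       * not by which command is calling. */
      static void storeCommandSketch(client *c) {
          /* Reading: a logically expired key reads as missing,
           * even on a writable replica. */
          robj *src = lookupKeyRead(c->db, c->argv[1]);
          /* Writing: an expired key is deleted before the write proceeds. */
          robj *dst = lookupKeyWrite(c->db, c->argv[2]);
          (void)src; (void)dst;
      }
      ```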
      
      This commit also fixes a bug where PFCOUNT could return a value of an
      expired key.
      
      Test modules commands have their readonly and write flags updated to correctly
      reflect their lookups for reading or writing. Modules are not required to
      correctly reflect this in their command flags, but this change is made for
      consistency since the tests serve as usage examples.
      
      Fixes #6842. Fixes #7475.
    • Fix COMMAND GETKEYS on LCS (#9852) · 4d870078
      sundb authored
      Removes lcsGetKeys, cleaning up the remains of STRALGO after #9733;
      i.e. LCS still used a getkeys_proc which was still looking for the KEYS or STRINGS arguments.
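
      With the key spec in place, a session like this is expected:

      ```
      127.0.0.1:6379> command getkeys lcs key1 key2
      1) "key1"
      2) "key2"
      ```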
  7. 24 Nov, 2021 2 commits
    • Wait for `asyn_loading` to stop in `short read` test (#9841) · fb4f7be2
      Binbin authored
      In #9323, when `repl-diskless-load` is enabled and set to `swapdb`,
      if the master replication ID hasn't changed, we can load the data set
      asynchronously and keep serving read commands during the full resync.

      In the `diskless loading short read` test, after a successful loading,
      we wait for the loading to stop and continue the for loop.

      After the introduction of `async_loading`, we also need to check it.
      Otherwise the next loop will start too soon and may trigger a timing issue.
    • Add tests to cover EXPIRE overflow fix (#9839) · 9273d09d
      Binbin authored
      In #8287, some overflow checks were added. But when
      `when *= 1000` overflows, it can become a positive number,
      and those checks were not able to catch it. The key would be added with
      a short expiration time and deleted a few seconds later.

      #9601 checks for overflow right after the `*=` and returns an
      error first, avoiding this situation.
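
      The added check amounts to validating the scaling before it happens, along these lines
      (a standalone sketch; the real check sits in the expire command path):

      ```c
      #include <limits.h>

      /* Illustrative: scale seconds to milliseconds, rejecting overflow instead
       * of letting `when *= 1000` silently wrap to a small positive value. */
      static int scaleToMillisecondsSketch(long long when, long long *out) {
          if (when > LLONG_MAX / 1000 || when < LLONG_MIN / 1000)
              return 0; /* would overflow: report "invalid expire time" */
          *out = when * 1000;
          return 1;
      }
      ```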
      
      In this commit, we add some tests to cover those code paths.
      Found in #9825, which this closes.
  8. 23 Nov, 2021 1 commit
    • QUIT is a command, HOST: and POST are not (#9798) · b161cff5
      guybe7 authored
      Some people complain that QUIT is missing from the help/command table,
      not appearing in the COMMAND command, command stats, ACL, etc.,
      and that instead there's a hack in processCommand with a comment that looks outdated.
      Note that it is [documented](https://redis.io/commands/quit).

      At the same time, HOST: and POST are in the command table although these are not real commands.
      They would appear in the COMMAND command, and even in commandstats.
      
      Other changes:
      1. Initialize the static logged_time variable in securityWarningCommand
      2. Add the `no-auth` flag to RESET so it can always be executed.
  9. 22 Nov, 2021 1 commit
    • Fix timing issue in sub-second expires test (#9821) · 698b5774
      Binbin authored
      The `PEXPIRE/PSETEX/PEXPIREAT can set sub-second expires` test is
      a very time-sensitive test; it used to occasionally fail on MacOS.

      It performs three internal tests in a loop: as long as one
      fails, it tries to execute them again in the next loop.

      oranagra suggested that we split it into three individual tests,
      so that if one fails, we do not need to retry the others. And maybe
      it will increase the chances of success dramatically.
      
      Each is executed 500 times, and the number of retries is collected:
      ```
      PSETEX, total: 500, sum: 745, min: 0, max: 13, avg: 1.49
      
      PEXPIRE, total: 500, sum: 575, min: 0, max: 16, avg: 1.15
      
      PEXPIREAT, total: 500, sum: 0, min: 0, max: 0, avg: 0.0
      
      ALL(old_way), total: 500, sum: 8090, min: 0, max: 138, avg: 16.18
      ```
      
      And we can see the number of retries is very low.
      Splitting the test also makes the code easier to maintain.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  10. 21 Nov, 2021 1 commit
    • Improve active defrag in jemalloc 5.2 (#9778) · d4e7ffb3
      Oran Agra authored
      Background:
      Following the upgrade to jemalloc 5.2, there was a test that used to be flaky and
      started failing consistently (on 32bit), so we disabled it (see #9645).

      This is a test that I introduced in #7289 when I attempted to solve a rare stagnation
      problem, and it later turned out I failed to solve it; what's more, I added a test that
      caused it to be not so rare, and as mentioned, in jemalloc 5.2 it became consistent on 32bit.
      
      Stagnation can happen when all the slabs of the bin are equally utilized, so the decision
      to move an allocation from a relatively empty slab to a relatively full one, will never
      happen, and in that test all the slabs are at 50% utilization, so the defragger could just
      keep scanning the keyspace and not move anything.
      
      What this PR changes:
      * First, finally in jemalloc 5.2 we have the count of non-full slabs, so when we compare
        the utilization of the current slab, we can compare it to the average utilization of the non-full
        slabs in our bin, instead of the total average of our bin. this takes the full slabs out of the game,
        since they're not candidates for migration (neither source nor target).
      * Secondly, we add some 12% (1/8) to the utilization in the decision to defrag an allocation; this is
        the part that aims to avoid stagnation, and it's especially important since the above-mentioned change
        can get us closer to stagnation (see the sketch after this list).
      * Thirdly, since jemalloc 5.2 adds sharded bins, we take into account all shards (something
        that's missing from the original PR that merged it), this isn't expected to make any difference
        since anyway there should be just one shard.
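
      A self-contained sketch of that decision rule (all names here are hypothetical; the real
      logic sits in the jemalloc defrag hint used by the active defragger):

      ```c
      #include <stddef.h>

      /* Defrag only if our slab, even with a ~12% (1/8) utilization bonus, is
       * still below the average utilization of the bin's non-full slabs.
       * Integer cross-multiplication avoids floating point (and assumes the
       * counts are small enough not to overflow). */
      static int shouldDefragSketch(size_t slab_used, size_t slab_total,
                                    size_t nonfull_used, size_t nonfull_total) {
          return slab_used * 9 * nonfull_total < nonfull_used * slab_total * 8;
      }
      ```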
      
      How this was benchmarked:
      What I did was run the memefficiency test unit with `--verbose` and compare the defragger hits
      and misses the tests reported.
      At first, when I took into consideration only the non-full slabs, it got a lot worse (I got into
      stagnation, or just got a lot of misses and a lot of hits), but when I added the extra margin I got back
      to results that were slightly better than the ones of the jemalloc 5.1 branch, i.e. full defragmentation
      was achieved with fewer hits (relocations) and fewer misses (keyspace scans).
  11. 18 Nov, 2021 5 commits
    • 366d5101
      Yossi Gottlieb authored
    • Fix crashes when list-compress-depth is used. (#9779) · 0c10f0e1
      perryitay authored
      Recently we started using list-compress-depth in tests (it was completely untested till now).
      Turns out this triggered test failures in external mode, since the tests left the setting enabled
      and then it was used in other tests (specifically the fuzzer named "Stress tester for #3343-alike bugs").

      This PR fixes the issue of the `recompress` flag being left set by mistake, which caused the code
      to later compress the head or tail nodes (which should never be compressed).

      The solution is to reset the recompress flag where it should have been reset (when it was decided not to compress).

      Additionally we're adding some assertions and improving the tests in order to catch other similar bugs.
    • Reject PING with MASTERDOWN when replica-serve-stale-data=no (#9757) · 1a255e31
      Eduardo Semprebon authored
      Currently PING returns a different status when the server is not serving data,
      for example when `LOADING` or `BUSY`,
      but the same was not true for `MASTERDOWN`.
      This commit makes PING reply with `MASTERDOWN` when
      replica-serve-stale-data=no and the link to the MASTER is down.
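
      Expected behavior on such a replica (the error text follows the existing -MASTERDOWN error):

      ```
      127.0.0.1:6379> ping
      (error) MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'.
      ```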
    • Obliterate STRALGO! add LCS (which only works on keys) (#9799) · af748988
      guybe7 authored
      Drop the STRALGO command; now LCS is a command of its own and it only works on keys (not input strings).
      The motivation is that STRALGO's syntax was really messed up...
      - it assumes all (future) string algorithms will take similar arguments
      - it mixes a command that takes keys and one that doesn't in the same command
      - it makes it nearly impossible to expose the right key spec in COMMAND INFO (an issue for cluster clients)
      - it's hard for cluster clients to determine the key names (firstkey, lastkey, etc)
      - it's hard for ACL / flags (is it a read command?)
      
      This is a breaking change.
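
      Typical usage of the standalone command (values mirror the LCS documentation example):

      ```
      127.0.0.1:6379> mset key1 ohmytext key2 mynewtext
      OK
      127.0.0.1:6379> lcs key1 key2
      "mytext"
      ```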
    • Fixes ZPOPMIN/ZPOPMAX wrong replies when count is 0 with non-zset (#9711) · 91e77a0c
      Binbin authored
      Moves the ZPOP ... 0 fast exit path after the type check, so it replies with
      WRONGTYPE. In the past it would return an empty array.

      Also, count is now not allowed to be negative.
      
      see #9680
      
      before:
      ```
      127.0.0.1:6379> set zset str
      OK
      127.0.0.1:6379> zpopmin zset 0
      (empty array)
      127.0.0.1:6379> zpopmin zset -1
      (empty array)
      ```
      
      after:
      ```
      127.0.0.1:6379> set zset str
      OK
      127.0.0.1:6379> zpopmin zset 0
      (error) WRONGTYPE Operation against a key holding the wrong kind of value
      127.0.0.1:6379> zpopmin zset -1
      (error) ERR value is out of range, must be positive
      ```
  12. 16 Nov, 2021 1 commit
    • Change lzf to handle values larger than UINT32_MAX (#9776) · 985430b4
      sundb authored
      Redis supports inserting data over 4GB into strings (and recently into lists too, see #9357),
      but the LZF compression used in RDB files (see the `rdbcompression` config) and in quicklist
      (see the `list-compress-depth` config) does not support compressing/decompressing data over
      UINT32_MAX, which results in a corrupted RDB after compression.
      
      Internal changes:
      1. Modify the `unsigned int` parameter of `lzf_compress/lzf_decompress` to `size_t`.
      2. Modify the variable types in `lzf_compress` involving offsets and lengths to `size_t`.
      3. Set LZF_USE_OFFSETS to 0.
          When LZF_USE_OFFSETS is 1, lzf stores offsets in `LZF_HSLOT` (32-bit).
          Even on 64-bit, `LZF_USE_OFFSETS` defaults to 1, because lzf assumes that it only
          compresses and decompresses data smaller than UINT32_MAX.
          But now we need to make lzf support 64-bit; turning on `LZF_USE_OFFSETS` would make
          it impossible to store 64-bit offsets or pointers.
          BTW, disabling LZF_USE_OFFSETS also brings a few performance improvements.
      
      Tests:
      1. Add a test for compressing/decompressing a string larger than UINT32_MAX.
      2. Add a unittest for compressing/decompressing a quicklistNode.
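
      The widened entry points presumably end up looking like this (a sketch based on the
      stock liblzf prototypes with `unsigned int` swapped for `size_t`):

      ```c
      #include <stddef.h>

      /* lzf entry points after the change: all lengths are size_t, so inputs
       * larger than UINT32_MAX become representable. Both still return 0 on
       * error (output buffer too small, or corrupt input on decompress). */
      size_t lzf_compress(const void *in_data, size_t in_len,
                          void *out_data, size_t out_len);
      size_t lzf_decompress(const void *in_data, size_t in_len,
                            void *out_data, size_t out_len);
      ```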
  13. 15 Nov, 2021 1 commit
    • Connection leak in external tests. (#9777) · e968d9ac
      yoav-steinberg authored
      Two issues:
      1. In many tests we simply forgot to close the connections we created, which doesn't matter for normal tests where the server is killed, but creates a leak in external server tests.
      2. When calling `start_server` in an external test we create a fresh connection instead of really starting a new server, but we never clean it up at the end.
  14. 13 Nov, 2021 1 commit
    • Tune expire test threshold. (#9775) · 174eedce
      Binbin authored
      I have seen this CI failure twice on MacOS:
      
      *** [err]: PEXPIRE/PSETEX/PEXPIREAT can set sub-second expires in tests/unit/expire.tcl
      Expected 'somevalue {} somevalue {} somevalue {}' to equal or match '{} {} {} {} somevalue {}'
      
      I did some loop tests in my own daily CI; the results show that it is
      not particularly stable. Change the threshold from 30 to 50.
  15. 09 Nov, 2021 1 commit
    • fix short timeout in replication short read tests (#9763) · 03406fcb
      YaacovHazan authored
      In both tests, "diskless loading short read" and "diskless loading short read with module",
      the timeout for waiting for the replica to respond to a short read and log it is too short.
      
      Also, add --dump-logs in runtest-moduleapi for valgrind runs.
  16. 04 Nov, 2021 2 commits
    • Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323) · 91d0c758
      Eduardo Semprebon authored
      For diskless replication in swapdb mode, considering we already spend replica memory
      having a backup of the current db to restore in case of failure, we can get the following benefits
      by instead swapping the database only in case we succeeded in transferring the db from the master:
      
      - Avoid `LOADING` response during failed and successful synchronization for cases where the
        replica is already up and running with data.
      - Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
        time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
      - This could be implemented also for disk replication with similar benefits if consumers are willing
        to spend the extra memory usage.
      
      General notes:
      - The concept of `backupDb` becomes `tempDb` for clarity.
      - Async loading mode will only kick in if the replica is syncing from a master that has the same
        repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
      - New property in INFO: `async_loading` to differentiate from the blocking loading
      - Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
        and the tempDb that is passed around.
      - Because this is affecting replicas only, we assume that if they are not read-only and get write
        commands during replication, those are lost after SYNC same way as before, but we're still denying
        CONFIG SET here anyway to avoid complications.
      
      Considerations for review:
      - We have many cases where the server.loading flag is used, and even though I tried my best, there may
        be cases where async_loading should be checked as well and cases where it shouldn't (this would require
        a very good understanding of the whole code)
      - Several places that had different behavior depending on the loading flag were actually meant to just
        handle commands coming from the AOF client differently than ones coming from real clients; these were
        changed to check CLIENT_ID_AOF instead.
      
      **Additional for Release Notes**
      - Bugfix: server.dirty was not incremented for any kind of diskless replication; as an effect it wouldn't
        contribute to triggering the next database SAVE
      - New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
      - Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
        Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
        ABORTED and COMPLETED.
      - New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
        to allow modules to declare they support the diskless replication with async loading (when absent, we fall
        back to disk-based loading).
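
      For module authors, declaring support looks like this (a minimal sketch using the real
      `RedisModule_SetModuleOptions` API and the new flag; the module name is hypothetical):

      ```c
      #include "redismodule.h"

      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "asyncloadmod", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          /* Opt in to diskless replication with async loading; without this
           * flag the server falls back to disk-based loading on this module's
           * behalf. */
          RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);
          return REDISMODULE_OK;
      }
      ```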
      Co-authored-by: default avatarEduardo Semprebon <edus@saxobank.com>
      Co-authored-by: default avatarOran Agra <oran@redislabs.com>
    • Fixes LPOP/RPOP wrong replies when count is 0 (#9692) · 06dd202a
      Itamar Haber authored
      Introduced in #8179, this fixes the command's replies in the 0-count edge case.
      [BREAKING] Changes the reply type when count is 0 to an empty array (instead of nil).
      Moves the LPOP ... 0 fast exit path after the type check, so it replies with WRONGTYPE.
  17. 03 Nov, 2021 2 commits
    • Add support for list type to store elements larger than 4GB (#9357) · f27083a4
      perryitay authored
      Redis lists are stored in a quicklist, which is currently a linked list of ziplists.
      Ziplists are limited to storing elements no larger than 4GB, so when bigger
      items are added they get truncated.
      This PR changes quicklists so that they're capable of storing large items
      in quicklist nodes that are plain string buffers rather than ziplists.
      
      As part of the PR there were a few other changes in redis:
      1. new DEBUG sub-commands:
         - QUICKLIST-PACKED-THRESHOLD - set the threshold for the node type to
           be plain or ziplist (default 1GB)
         - QUICKLIST <key> - shows low-level info about the quicklist encoding of <key>
      2. rdb format change:
         - A new type was added - RDB_TYPE_LIST_QUICKLIST_2.
         - The container type (packed / plain) was added to the beginning of the rdb object
           (before the actual node list).
      3. testing:
         - Tests that require over 100MB are skipped by default; a new flag was
           added to 'runtest' to run the large memory tests (not used by default)
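
      A plausible session with the new sub-commands (the exact output of `DEBUG QUICKLIST` is elided here):

      ```
      127.0.0.1:6379> debug quicklist-packed-threshold 1K
      OK
      127.0.0.1:6379> debug quicklist mylist
      ... low-level quicklist info for mylist ...
      ```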
      Co-authored-by: default avatarsundb <sundbcn@gmail.com>
      Co-authored-by: default avatarOran Agra <oran@redislabs.com>
    • Fix COMMAND GETKEYS on EVAL without keys (#9733) · f11a2d4d
      guybe7 authored
      Add new no-mandatory-keys flag to support COMMAND GETKEYS of commands
      which have no mandatory keys.
      
      In the past we would have got this error:
      ```
      127.0.0.1:6379> command getkeys eval "return 1" 0
      (error) ERR Invalid arguments specified for command
      ```
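
      whereas after this change a keyless invocation is expected to succeed:

      ```
      127.0.0.1:6379> command getkeys eval "return 1" 0
      (empty array)
      ```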
  18. 02 Nov, 2021 3 commits
    • Solve issues with tracking test in external mode (#9726) · d25dc089
      Oran Agra authored
      The issue was that setting maxmemory to used_memory and expecting
      eviction is insufficient, since we need to take
      mem_not_counted_for_evict into consideration.
      
      This test got broken by #9166
    • attempt to fix tracking test issue with external tests due to lazy free (#9722) · 87321deb
      Oran Agra authored
      The external tests started failing recently for an unclear reason:
      ```
      *** [err]: Tracking invalidation message of eviction keys should be before response in tests/unit/tracking.tcl
      Expected '0' to be equal to 'invalidate volatile-key' (context: type eval line 21 cmd {assert_equal $res {invalidate volatile-key}} proc ::test)
      ```
      
      I suspect the issue is that the used_memory sample is taken while a lazy free is still being processed.
    • fix defrag test looking at the wrong latency metric (#9723) · d5ca72e3
      menwen authored
      the latency event was renamed in #7726, and the outcome was that the test was
      ineffective (unable to measure the max latency, always seeing 0)
  19. 01 Nov, 2021 1 commit
    • fix valgrind issues with long double module test (#9709) · f1f3cceb
      Oran Agra authored
      The module test in reply.tcl was introduced by #8521 but didn't run until recently (see #9639),
      and then it started failing with valgrind.
      This is because valgrind uses a 64-bit long double (unlike most other platforms, which have at least 80 bits).
      But besides valgrind, the tests were also incompatible with ARM32, which also uses 64-bit long doubles.

      We now use an appropriate value to avoid issues with either valgrind or ARM32.

      In all the double tests, I use 3.141, which is safe since addReplyDouble uses
      `%.17Lg`, which is able to represent this value without adding any digits due to precision loss.

      For the long double, since we use `%.17Lf` in ld2string, it preserves 17
      digits after the decimal point, rather than 17 significant digits (as `%.17Lg` would).
      So to make these similar, I use a value lower than 1 (no digits left of
      the period).
      
      Lastly, we have the same issue with TCL (no long doubles) so we read
      the raw protocol in that test.

      Note that the only error before this fix (in both valgrind and ARM32) is this:
      ```
      *** [err]: RM_ReplyWithLongDouble: a float reply in tests/unit/moduleapi/reply.tcl
      Expected '3.141' to be equal to '3.14100000000000001' (context: type eval line 2 cmd {assert_equal 3.141 [r rw.longdouble 3.141]} proc ::test)
      ```
      so the changes to debug.c and scripting.tcl aren't really needed, but I consider them a cleanup
      (i.e. scripting.c validated a different constant than the one that's sent to it from debug.c).
      
      Another unrelated change is to add the RESP version to the repeated tests in reply.tcl
  20. 31 Oct, 2021 1 commit
    • Fix multiple COUNT in LMPOP/BLMPOP/ZMPOP/BZMPOP (#9701) · 03357883
      Binbin authored
      The previous code did not check whether COUNT was already specified,
      so one could send `lmpop 2 key1 key2 left count 1 count 2`.

      This situation could occur in the LMPOP/BLMPOP/ZMPOP/BZMPOP commands.
      LMPOP/BLMPOP were introduced in #9373, ZMPOP/BZMPOP in #9484.
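
      With the fix, a repeated COUNT is expected to be rejected:

      ```
      127.0.0.1:6379> lmpop 2 key1 key2 left count 1 count 2
      (error) ERR syntax error
      ```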
  21. 26 Oct, 2021 1 commit
  22. 25 Oct, 2021 2 commits
    • Add RM_ReplyWithBigNumber module API (#9639) · 12ce2c39
      Shaya Potter authored
      Lets modules use an additional type of RESP3 response (unused by redis so far).
      Also fixes tests that were introduced in #8521 but didn't actually run.
      Co-authored-by: Oran Agra <oran@redislabs.com>
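
      Minimal usage sketch (the `RedisModule_ReplyWithBigNumber` signature matches the module API;
      the command function around it is hypothetical and would still need to be registered in OnLoad):

      ```c
      #include <string.h>
      #include "redismodule.h"

      /* Hypothetical command replying with a RESP3 big number. */
      int BigNum_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          const char *bignum = "123456789012345678901234567890";
          /* RESP3 clients get a big number type; RESP2 clients a bulk string. */
          return RedisModule_ReplyWithBigNumber(ctx, bignum, strlen(bignum));
      }
      ```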
    • Replication backlog and replicas use one global shared replication buffer (#9166) · c1718f9d
      Wang Yuan authored
      ## Background
      For a redis master, one replica uses one copy of the replication buffer; that is a big waste of memory:
      the more replicas, the more waste, and allocating/freeing memory for every reply list also costs much.
      If we set client-output-buffer-limit small and write traffic is heavy, the master may disconnect
      replicas and can't finish synchronization with them. If we set client-output-buffer-limit big,
      the master may OOM when there are many replicas that separately keep much memory.
      Because the replication buffers of different replica clients are the same, one simple idea is that
      all replicas use only one replication buffer, which will effectively save memory.

      Since the replication backlog content is the same as the replicas' output buffer, now we
      can discard the replication backlog memory and use the global shared replication buffer
      to implement the replication backlog mechanism.
      
      ## Implementation
      I create one global "replication buffer" which contains content of replication stream.
      The structure of "replication buffer" is similar to the reply list that exists in every client.
      But the node of list is `replBufBlock`, which has `id, repl_offset, refcount` fields.
      ```c
      /* Replication buffer blocks is the list of replBufBlock.
       *
       * +--------------+       +--------------+       +--------------+
       * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
       * +--------------+       +--------------+       +--------------+
       *      |                                            /       \
       *      |                                           /         \
       *      |                                          /           \
       *  Repl Backlog                               Replica_A     Replica_B
       * 
       * Each replica or replication backlog increments only the refcount of the
       * 'ref_repl_buf_node' which it points to. So when replica walks to the next
       * node, it should first increase the next node's refcount, and when we trim
       * the replication buffer nodes, we remove node always from the head node which
       * refcount is 0. If the refcount of the head node is not 0, we must stop
       * trimming and never iterate the next node. */
      
      /* Similar with 'clientReplyBlock', it is used for shared buffers between
       * all replica clients and replication backlog. */
      typedef struct replBufBlock {
          int refcount;           /* Number of replicas or repl backlog using. */
          long long id;           /* The unique incremental number. */
          long long repl_offset;  /* Start replication offset of the block. */
          size_t size, used;
          char buf[];
      } replBufBlock;
      ```
      So now when we feed the replication stream into the replication backlog and all replicas, we only need
      to feed the stream into the replication buffer via `feedReplicationBuffer`. In this function, we set some
      fields of the replication backlog and replicas to references of the global replication buffer blocks. We also
      need to check the replicas' output buffer limit, freeing them if they exceed `client-output-buffer-limit`, and
      trim the replication backlog if it exceeds `repl-backlog-size`.

      When sending a reply to replicas, we also need to iterate the replication buffer blocks and send their
      content; when one block has been fully sent to a replica, we decrease the current node's refcount and
      increase the next node's refcount, and then free blocks whose refcount is 0 from the
      head of the replication buffer blocks.
      
      Since we now use a linked list to manage the replication backlog, it may cost much time to iterate
      all the linked list nodes to find the corresponding replication buffer node. So we create a rax tree to
      index some of the nodes, but to avoid the rax tree occupying too much memory, I record
      one node per 64 for indexing.
      
      Currently, to make partial resynchronization possible as much as we can, we always treat the replication
      backlog as the last reference to the replication buffer blocks; the backlog size may exceed our setting
      if slow replicas reference vast numbers of replication buffer blocks, but this method doesn't increase
      memory usage since they share the replication buffer. To avoid freezing the server while freeing
      unreferenced replication buffer blocks when we need to trim the backlog for exceeding the backlog size
      setting, we trim the backlog incrementally (free 64 blocks per call now), and make it faster in
      `beforeSleep` (free 640 blocks).
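
      The incremental trim can be pictured like this (standalone sketch; the names are hypothetical,
      only the head-refcount rule comes from the design above):

      ```c
      #include <stdlib.h>

      /* Hypothetical block mirroring replBufBlock's refcount semantics. */
      typedef struct block { int refcount; struct block *next; } block;

      /* Free at most `limit` blocks from the head; stop at the first block that
       * is still referenced by the backlog or by any replica. */
      static block *trimBlocksSketch(block *head, int limit) {
          while (head && head->refcount == 0 && limit--) {
              block *next = head->next;
              free(head);
              head = next;
          }
          return head;
      }
      ```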
      
      ### Other changes
      - `mem_total_replication_buffers`: we add this field to the INFO command; it is the total
        memory used by replication buffers.
      - `mem_clients_slaves`: now even if a replica is slow to replicate and its output buffer memory
        is not 0, this field may still be 0, since the replication backlog and replicas share one global
        replication buffer. Only if the replication buffer memory is more than the repl backlog setting size
        do we consider the excess to be the replicas' memory; otherwise, we count the replication buffer
        memory as repl backlog consumption.
      - Key eviction
        Since all replicas and the replication backlog share the global replication buffer, we count only the
        part exceeding the backlog size as the extra, separate consumption of replicas.
        Because we trim the backlog incrementally in the background, the backlog size may exceed our
        setting if slow replicas that reference vast replication buffer blocks disconnect.
        To avoid a massive eviction loop, we don't count the delayed-freed replication backlog as
        used memory even if there are no replicas, i.e. we also regard this memory as the replicas' memory.
      - `client-output-buffer-limit` check for replica clients
        It doesn't make sense to set the replica clients output buffer limit lower than the repl-backlog-size
        config (partial sync will succeed and then replica will get disconnected). Such a configuration is
        ignored (the size of repl-backlog-size will be used). This doesn't have memory consumption
        implications since the replica client will share the backlog buffers memory.
      - Drop the replication backlog after loading data if needed
        We always create the replication backlog if the server is a master; we need it because we put DELs in
        it when loading expired keys from the RDB. But if the RDB doesn't have replication info, or there is no
        RDB, partial resynchronization is not possible, so we drop the backlog to avoid its extra memory.
      - Multi IO threads
        Since all replicas and the replication backlog use the global replication buffer, if I/O threads are
        enabled, to guarantee thread-safe data access we must let the main thread handle sending the output
        buffer to all replicas. Before, other IO threads could handle sending the output buffers of all replicas.
      
      ## Other optimizations
      This solution resolves some other problems:
      - When replicas disconnect from the master because of the output buffer limit, releasing the output
        buffer of replicas may freeze the server if we set a big `client-output-buffer-limit` for replicas;
        now it doesn't cause freezing.
      - This implementation may mitigate the reply list copy cost (which also freezes the server) when one
        replica has a huge reply buffer and another replica copies that buffer for full synchronization.
        Now we just copy the reference info, which is very light.
      - If we set the replication backlog size big, it also may cost much time to copy the replication
        backlog into a replica's output buffer. This commit eliminates that problem.
      - Resizing the replication backlog no longer empties its current content.