1. 02 Dec, 2021 1 commit
• Redis Functions - Added redis function unit and Lua engine · cbd46317
      meir@redislabs.com authored
The Redis function unit is located inside functions.c
and contains the Redis Functions implementation:
      1. FUNCTION commands:
        * FUNCTION CREATE
        * FCALL
        * FCALL_RO
        * FUNCTION DELETE
        * FUNCTION KILL
        * FUNCTION INFO
      2. Register engine
      
In addition, this commit introduces the first engine
      that uses the Redis Function capabilities, the
      Lua engine.
      cbd46317
  2. 01 Dec, 2021 5 commits
• Redis Functions - Moved invoke Lua code functionality to script_lua.c · f21dc38a
      meir@redislabs.com authored
The functionality was moved to script_lua.c under the
callFunction function. Its purpose is to call the Lua
function that is already located at the top of the Lua stack.
      
Used the new function in eval.c to invoke Lua code.
The function will also be used to invoke Lua
code in the Lua engine.
      f21dc38a
• Redis Functions - Introduce script unit. · fc731bc6
      meir@redislabs.com authored
The script unit is a new unit located in script.c.
Its purpose is to provide an API for functions (and eval)
to interact with Redis. The interaction mostly involves
executing commands, but also functionality like calling
back into Redis on long-running scripts or checking whether the script was killed.

The interaction is done using a scriptRunCtx object that
needs to be created by the user and initialized using scriptPrepareForRun.
      
Detailed list of functionality exposed by the unit:
1. Calling commands (including all the validation checks such as
   ACL, cluster, read-only run, ...)
2. Set RESP
3. Set replication method (AOF/REPLICATION/NONE)
4. Call back into Redis on long-running scripts to allow Redis to reply
   to clients and perform script kill
      
The commit introduces the new unit and uses it in the eval commands to
interact with Redis.
      fc731bc6
• Redis Functions - Move Lua related variable into luaCtx struct · e0cd580a
      meir@redislabs.com authored
The following variables were renamed:
      1. lua_caller 			-> script_caller
      2. lua_time_limit 		-> script_time_limit
      3. lua_timedout 		-> script_timedout
      4. lua_oom 			-> script_oom
      5. lua_disable_deny_script 	-> script_disable_deny_script
      6. in_eval			-> in_script
      
The following variables were moved to lctx under eval.c:
      1.  lua
      2.  lua_client
      3.  lua_cur_script
      4.  lua_scripts
      5.  lua_scripts_mem
      6.  lua_replicate_commands
      7.  lua_write_dirty
      8.  lua_random_dirty
      9.  lua_multi_emitted
      10. lua_repl
      11. lua_kill
      12. lua_time_start
      13. lua_time_snapshot
      
This commit carries a low risk of introducing any issues since it
is just moving variables around and not changing any logic.
      e0cd580a
• Redis Functions - Move code to make review process easier. · 22aab1ce
      meir@redislabs.com authored
This commit only moves code around without changing it.
The reason behind this is to make the review process easier
by allowing the reviewer to simply ignore all code movements.
      
      changes:
      1. rename scripting.c to eval.c
2. introduce a new file, script_lua.c, and move parts of the Lua
   functionality to this new file. script_lua.c will eventually
   contain the shared code between the legacy Lua implementation and the Lua engine.

This commit does not compile, on purpose. Its only purpose is to move
code and rename files.
      22aab1ce
• Multiparam config set (#9748) · 0e5b813e
      yoav-steinberg authored
      We can now do: `config set maxmemory 10m repl-backlog-size 5m`
      
      ## Basic algorithm to support "transaction like" config sets:
      
      1. Backup all relevant current values (via get).
      2. Run "verify" and "set" on everything, if we fail run "restore".
      3. Run "apply" on everything (optional optimization: skip functions already run). If we fail run "restore".
      4. Return success.
      
      ### restore
      1. Run set on everything in backup. If we fail log it and continue (this puts us in an undefined
         state but we decided it's better than the alternative of panicking). This indicates either a bug
         or some unsupported external state.
      2. Run apply on everything in backup (optimization: skip functions already run). If we fail log
         it (see comment above).
      3. Return error.
      
      ## Implementation/design changes:
* Apply functions are idempotent (they have no effect if they are run more than once for the same config).
      * No indication in set functions if we're reading the config or running from the `CONFIG SET` command
         (removed `update` argument).
      * Set function should set some config variable and assume an (optional) apply function will use that
         later to apply. If we know this setting can be safely applied immediately and can always be reverted
         and doesn't depend on any other configuration we can apply immediately from within the set function
   (and not store the setting anywhere). This is the case of the `dir` config, for example, which has no
   apply function. No apply function is needed either when setting the variable in the `server` struct
         is all that needs to be done to make the configuration take effect. Note that the original concept of `update_fn`,
         which received the old and new values was removed and replaced by the optional apply function.
      * Apply functions use settings written to the `server` struct and don't receive any inputs.
      * I take care that for the generic (non-special) configs if there's no change I avoid calling the setter (possible
         optimization: avoid calling the apply function as well).
      * Passing the same config parameter more than once to `config set` will fail. You can't do `config set my-setting
         value1 my-setting value2`.
      
      Note that getting `save` in the context of the conf file parsing to work here as before was a pain.
      The conf file supports an aggregate `save` definition, where each `save` line is added to the server's
      save params. This is unlike any other line in the config file where each line overwrites any previous
      configuration. Since we now support passing multiple save params in a single line (see top comments
      about `save` in https://github.com/redis/redis/pull/9644) we should deprecate the aggregate nature of
      this config line and perhaps reduce this ugly code in the future.
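As a rough illustration of the backup/set/apply/restore order described above, here is a minimal C sketch; `configEntry`, `configSetMulti` and the toy entries are hypothetical and only stand in for the real Redis config framework.
```
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

typedef struct {
    const char *name;
    char value[64];                               /* current value, kept as a string    */
    bool (*set_fn)(char *dst, const char *v);     /* validate + store the new value     */
    bool (*apply_fn)(void);                       /* idempotent, reads the stored value */
} configEntry;

static bool genericSet(char *dst, const char *v) { strncpy(dst, v, 63); dst[63] = '\0'; return true; }
static bool noApply(void) { return true; }

static configEntry configs[] = {
    {"maxmemory",         "0",   genericSet, noApply},
    {"repl-backlog-size", "1mb", genericSet, noApply},
};
#define NCONF (sizeof(configs)/sizeof(configs[0]))

/* Set several parameters "transaction like": 1. backup, 2. set, 3. apply, 4. success;
 * on any failure restore the backup (best effort) and report an error. */
static bool configSetMulti(const char *names[], const char *values[], int n) {
    char backup[NCONF][64];
    for (size_t i = 0; i < NCONF; i++) strcpy(backup[i], configs[i].value);   /* 1. backup */

    for (int j = 0; j < n; j++) {                                             /* 2. set    */
        bool ok = false;
        for (size_t i = 0; i < NCONF; i++) {
            if (strcmp(configs[i].name, names[j]) == 0) {
                ok = configs[i].set_fn(configs[i].value, values[j]);
                break;
            }
        }
        if (!ok) goto restore;
    }
    for (size_t i = 0; i < NCONF; i++)                                        /* 3. apply  */
        if (!configs[i].apply_fn()) goto restore;
    return true;                                                              /* 4. success */

restore:
    for (size_t i = 0; i < NCONF; i++) strcpy(configs[i].value, backup[i]);   /* set backup values   */
    for (size_t i = 0; i < NCONF; i++) configs[i].apply_fn();                 /* apply is idempotent */
    return false;
}

int main(void) {
    const char *names[]  = {"maxmemory", "repl-backlog-size"};
    const char *values[] = {"10m", "5m"};
    printf("ok=%d maxmemory=%s repl-backlog-size=%s\n",
           configSetMulti(names, values, 2), configs[0].value, configs[1].value);
    return 0;
}
```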
      0e5b813e
  3. 30 Nov, 2021 5 commits
• Adds auto-seq-only-generation via `XADD ... <ms>-*` (#9217) · 21aa1d4b
      Itamar Haber authored
      Adds the ability to autogenerate the sequence part of the millisecond-only explicit ID specified for `XADD`. This is useful in case added entries have an externally-provided timestamp without sub-millisecond resolution.
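A sketch of the completion rule, with hypothetical names (`streamID`, `completeSeq`) rather than the actual stream code: with `<ms>-*`, the sequence becomes `last.seq + 1` when the millisecond part equals that of the last ID, and `0` when it is newer.
```
#include <stdio.h>
#include <stdint.h>

typedef struct { uint64_t ms, seq; } streamID;

/* Returns 0 on success, -1 if no ID greater than last_id exists for this ms. */
static int completeSeq(streamID *new_id, const streamID *last_id) {
    if (new_id->ms < last_id->ms) return -1;          /* would go back in time  */
    if (new_id->ms == last_id->ms) {
        if (last_id->seq == UINT64_MAX) return -1;    /* sequence exhausted     */
        new_id->seq = last_id->seq + 1;               /* same ms: bump sequence */
    } else {
        new_id->seq = 0;                              /* newer ms: start at 0   */
    }
    return 0;
}

int main(void) {
    streamID last = {1638230400000ULL, 7}, next = {1638230400000ULL, 0};
    if (completeSeq(&next, &last) == 0)   /* prints 1638230400000-8 */
        printf("%llu-%llu\n", (unsigned long long)next.ms, (unsigned long long)next.seq);
    return 0;
}
```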
      21aa1d4b
• Sentinel master reboot fix (#9438) · 2afa41f6
      Wen Hui authored
      Add master-reboot-down-after-period as a configurable parameter, to make it possible to trigger a failover from a master that is responding with `-LOADING` for a long time after being restarted.
      2afa41f6
• modify misleading note in comment (#9865) · af072c26
      丽媛自己动 authored
rdbSaveInfo is now used in both ways, so I think we should update the previous note, to avoid it being misleading.
      af072c26
• Viktor Söderqvist · bdf531e3
• Swap '\r\n' with spaces when returning a big number reply from Lua script. (#9870) · b8e82d20
      Meir Shpilraien (Spielrein) authored
The issue can only happen with a bad Lua script that claims to return
a big number while actually returning data which is not a big number (contains
chars that are not digits). Such a thing will not cause an issue unless the big
number value contains `\r\n`, and then it messes up the RESP3 structure. The fix
replaces all the appearances of '\r\n' with spaces.
      
Such an issue can also happen on simple string or error replies, but those
already handle it the same way this PR does (replace `\r\n` with spaces).

Other reply types are not vulnerable to this issue because they do not rely
on free text that is terminated with `\r\n` (either they contain the
bulk length, like a string reply, or they are typed replies that cannot inject free
text, like a boolean or number).
      
The issue only exists on the unstable branch; the big number reply in Lua scripts
was not yet added to any official release.
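A minimal sketch of the sanitization idea, with an illustrative helper name rather than the actual Redis function:
```
#include <stdio.h>

/* Swap CR/LF inside a value a script claims is a "big number" with spaces so it
 * cannot break the RESP3 framing. */
static void sanitizeVerbatim(char *s, size_t len) {
    for (size_t i = 0; i < len; i++)
        if (s[i] == '\r' || s[i] == '\n') s[i] = ' ';   /* keep length, drop framing chars */
}

int main(void) {
    char claimed_big_number[] = "123\r\n:456";  /* not really a big number */
    sanitizeVerbatim(claimed_big_number, sizeof(claimed_big_number) - 1);
    /* A RESP3 big number reply is framed as "(<payload>\r\n"; the payload is now safe. */
    printf("(%s\r\n", claimed_big_number);
    return 0;
}
```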
      b8e82d20
  4. 29 Nov, 2021 6 commits
• Fix CLIENT KILL kill all clients with id 0 (#9853) · 3119a3ae
      Binbin authored
      
      
* Fix CLIENT KILL killing all clients when given id 0 or skipme
CLIENT KILL with the ID argument should only kill the client with the provided ID. In the old code,
CLIENT KILL with id 0 would kill all the connected clients.
Co-authored-by: Ofir Luzon <ofirluzon@gmail.com>
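A small sketch of the bug class and the fix idea, using hypothetical names: the id filter should only count as present when the option was actually supplied, so `ID 0` no longer matches every client.
```
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

typedef struct { uint64_t id; } client;

typedef struct {
    bool have_id;   /* was ID <client-id> passed to CLIENT KILL?    */
    uint64_t id;    /* only meaningful when have_id is true         */
} killFilter;

static bool clientMatches(const client *c, const killFilter *f) {
    if (f->have_id && c->id != f->id) return false;   /* id 0 no longer matches everyone */
    return true;
}

int main(void) {
    client c1 = {1}, c2 = {2};
    killFilter f = {.have_id = true, .id = 0};        /* CLIENT KILL ID 0 */
    printf("%d %d\n", clientMatches(&c1, &f), clientMatches(&c2, &f));   /* 0 0 */
    return 0;
}
```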
      3119a3ae
• fix deprecation of _BSD_SOURCE feature test macro (#9861) · 2386e541
      yoav-steinberg authored
The macro caused a build warning in linenoise since glibc 2.20.
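For context, glibc 2.20 started warning when `_BSD_SOURCE` is defined without `_DEFAULT_SOURCE`; a common compatibility pattern (not necessarily the exact linenoise change) is:
```
#define _DEFAULT_SOURCE   /* replacement feature-test macro, recognized since glibc 2.19 */
#define _BSD_SOURCE       /* old macro: on its own it triggers the deprecation warning   */

#include <stdio.h>

int main(void) {
    puts("compiles without the _BSD_SOURCE deprecation warning");
    return 0;
}
```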
      2386e541
• fixed mem leak on rdb load error (#9860) · 9f9c7857
      OfirMos authored
A rare case of short read that can happen when breaking the master-replica
connection in diskless load mode.
      9f9c7857
• Add REDIS_CFLAGS='-Werror' to CI tests (#9828) · 980bb3ae
      Binbin authored
      Update CI so that warnings cause build failures.
      
      Also fix a warning in `test-sanitizer-address`:
      ```
      In function ‘strncpy’,
         inlined from ‘clusterUpdateMyselfIp’ at cluster.c:545:13:
      
      /usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10:
      error: ‘__builtin_strncpy’ specified bound 46 equals destination size [-Werror=stringop-truncation]
      
        106 |   return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest));
            |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      cc1: all warnings being treated as errors
      ```
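The warning fires because the `strncpy` bound equals the destination size, so the copy may not be NUL terminated. A typical way to silence this class of warning (shown with an illustrative buffer, not necessarily the exact cluster.c change) is to bound the copy to `size - 1` and terminate explicitly:
```
#include <stdio.h>
#include <string.h>

#define NET_IP_STR_LEN 46   /* INET6_ADDRSTRLEN-sized buffer, as used for IP strings */

int main(void) {
    char myself_ip[NET_IP_STR_LEN];
    const char *announced_ip = "2001:db8::1";

    strncpy(myself_ip, announced_ip, NET_IP_STR_LEN - 1);  /* bound < sizeof(dest)     */
    myself_ip[NET_IP_STR_LEN - 1] = '\0';                  /* always NUL terminated    */

    printf("%s\n", myself_ip);
    return 0;
}
```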
      980bb3ae
• improvement of a blocking xread test (#9859) · d56ded89
      leishiao authored
      
      
This test relies on `XREAD BLOCK 20000 STREAMS s1{t} s2{t} s3{t} $ $ $`
being executed by redis before `XADD s2{t} * new abcd1234`. A `wait_for_blocked_client`
is needed between the two to ensure the order; otherwise `XADD s2{t} * new abcd1234`
might be executed first due to network delay, causing a test failure.
Co-authored-by: xiaolei <xiaolei@91jkys.com>
      d56ded89
• Fix abnormal compression due to out-of-control recompress (#9849) · 494ee2f1
      sundb authored
This PR follows #9779.
      
## Description of the feature
Now when we turn on the `list-compress-depth` configuration, the list will compress
the ziplists between `[list-compress-depth, -list-compress-depth]`.
When we need to use the compressed data, we first decompress it, then use it,
and finally compress it again.
This is controlled by `quicklistNode->recompress`, which is designed to avoid the need to
re-traverse the entire quicklist for compression after each decompression; we only need
to recompress the quicklistNode being used.
In order to ensure the correctness of recompressing, we should normally let
quicklistDecompressNodeForUse and quicklistCompress appear in pairs; otherwise,
it may lead to the head and tail being compressed or the middle ziplists not being
compressed correctly, which is exactly the problem this PR needs to solve.
      
      ## Solution
      1. Reset `quicklistIter` after insert and replace.
          The quicklist node will be compressed in `quicklistInsertAfter`, `quicklistInsertBefore`,
         `quicklistReplaceAtIndex`, so we can safely reset the quicklistIter to avoid it being used again
      2. `quicklistIndex` will return an iterator that can be used to recompress the current node after use.
          
      ## Test
1. In the `Stress Tester for #3343-Similar Errors` test, when the server crashes or when
   a `valgrind` or `asan` error is detected, print the violating commands.
2. Add a crash test due to wrongly recompressing after `lrem`.
3. Remove `insert before with 0 elements` and `insert after with 0 elements`;
   now we forbid any operation on a NULL quicklistIter.
      494ee2f1
  5. 28 Nov, 2021 5 commits
• Improve stability in some blocking command tests (#9856) · 8759c1e1
      Binbin authored
      In order to test the situation where multiple clients are
      blocked, we set up multiple clients to execute some blocking
      commands. These tests depend on the order of command processing.
      
Those tests were based on the wrong assumption that the command
sent first will be executed by the server first, which is obviously
wrong under some network delays.
      
      This commit ensures orderly execution of commands by waiting
      and judging the number of blocked clients each time.
      
      Fix #9850
      8759c1e1
• Clean Lua stack before parsing call reply to avoid crash on a call with many arguments (#9809) · 6b0b04f1
      Meir Shpilraien (Spielrein) authored
Commit 0f8b634c (CVE-2021-32626, released in 6.2.6, 6.0.16, 5.0.14)
fixes an invalid memory write issue by using the `lua_checkstack` API to make
sure the Lua stack does not overflow. The fix was added in 3 places:
1. `luaReplyToRedisReply`
2. `ldbRedis`
3. `redisProtocolToLuaType`

In the first 2 functions, `lua_checkstack` is handled gracefully, while the
last is handled with an assert and a statement that this situation can
not happen (only with a misbehaving module):
      
      > the Redis reply might be deep enough to explode the LUA stack (notice
      that currently there is no such command in Redis that returns such a nested
      reply, but modules might do it)
      
The issue that was discovered is that user arguments are also considered part
of the stack, and so the following script (for example) makes the assertion reachable:
      ```
      local a = {}
      for i=1,7999 do
          a[i] = 1
      end 
      return redis.call("lpush", "l", unpack(a))
      ```
      
This is a regression because such a script would have worked before and now
it crashes Redis. The solution is to clear the function arguments from the Lua
stack, which makes the original assumption true and the assertion unreachable.
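A minimal sketch of the idea behind the fix, using the standard Lua C API with illustrative names (this is not the Redis source): once the call's arguments have been copied out, they can be dropped from the Lua stack so that building the reply starts from a nearly empty stack.
```
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>

static int fake_call(lua_State *L) {
    int argc = lua_gettop(L);   /* number of arguments pushed by the script */
    /* ... copy the argc arguments into a C-side argv here ... */
    lua_settop(L, 0);           /* drop the arguments: they no longer occupy stack slots */
    /* ... run the command and push the converted reply, guarded by lua_checkstack ... */
    lua_pushboolean(L, 1);
    (void)argc;
    return 1;
}

int main(void) {
    lua_State *L = luaL_newstate();
    lua_pushcfunction(L, fake_call);
    for (int i = 0; i < 3; i++) lua_pushinteger(L, i);     /* pretend script arguments */
    lua_call(L, 3, 1);
    printf("stack size after call: %d\n", lua_gettop(L));  /* 1: just the reply */
    lua_close(L);
    return 0;
}
```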
      6b0b04f1
• Fix Lua C API violation on lua msgpack lib. (#9832) · a8c1253b
      Meir Shpilraien (Spielrein) authored
The msgpack lib missed using lua_checkstack and so in rare
cases could overflow the stack by at most 2 elements. This is a
violation of the Lua C API. Notice that Lua allocates
an additional 5 elements on top of lua->stack_last,
so Redis does not access invalid memory. But it is an
API violation and we should avoid it.
      
This PR also adds a new Lua compilation option. The new
option can be enabled using an environment variable called
LUA_DEBUG. If set to `yes` (the default is `no`), Lua will be
compiled without optimizations and with debug symbols (`-O0 -g`).
In addition, in this new mode, Lua will be compiled with the
`-DLUA_USE_APICHECK` flag that enables extended Lua C API
validations.
      
In addition, LUA_DEBUG=yes is now set on the daily valgrind flow so we
will be able to catch Lua C API violations in the future.
      a8c1253b
• Sort out the mess around writable replicas and lookupKeyRead/Write (#9572) · acf3495e
      Viktor Söderqvist authored
      Writable replicas now no longer use the values of expired keys. Expired keys are
      deleted when lookupKeyWrite() is used, even on a writable replica. Previously,
      writable replicas could use the value of an expired key in write commands such
      as INCR, SUNIONSTORE, etc..
      
      This commit also sorts out the mess around the functions lookupKeyRead() and
      lookupKeyWrite() so they now indicate what we intend to do with the key and
      are not affected by the command calling them.
      
      Multi-key commands like SUNIONSTORE, ZUNIONSTORE, COPY and SORT with the
      store option now use lookupKeyRead() for the keys they're reading from (which will
      not allow reading from logically expired keys).
      
      This commit also fixes a bug where PFCOUNT could return a value of an
      expired key.
      
      Test modules commands have their readonly and write flags updated to correctly
      reflect their lookups for reading or writing. Modules are not required to
      correctly reflect this in their command flags, but this change is made for
      consistency since the tests serve as usage examples.
      
      Fixes #6842. Fixes #7475.
      acf3495e
• Fix COMMAND GETKEYS on LCS (#9852) · 4d870078
      sundb authored
Remove lcsGetKeys to clean up the remaining STRALGO code after #9733;
i.e. it still used a getkeys_proc which was still looking for the KEYS or STRINGS arguments.
      4d870078
  6. 27 Nov, 2021 1 commit
  7. 25 Nov, 2021 1 commit
• Do not watch keys for dirty client (#9829) · fa48fb2d
      uriyage authored
      
      
      Currently, the watching clients are marked as dirty when a watched
      key is touched, but we continue watching the keys for no reason.
      Then, when the same key is touched again, we iterate again on the
      watching clients list and mark all clients as dirty again.
      Only when the exec/unwatch command is issued will the client be
      removed from the key->watching_clients list. The same applies when
      a dirty client calls the WATCH command. The key will be added to be
      watched by the client even if it has no effect.
      
      In the field, no performance degradation was observed as a result of the
      current implementation; it is merely a cleanup with possible memory and
      performance gains in some situations.
Co-authored-by: Oran Agra <oran@redislabs.com>
      fa48fb2d
  8. 24 Nov, 2021 4 commits
• fix for bad log messages in rdbSave (#9842) (#9843) · 9630ded3
      Pavel Melkozerov authored
      
      
The log message printed the wrong file name when opening the temporary file failed,
and the log could report an error that actually occurred in getcwd (it uses the same errno to report the error).
Co-authored-by: Pavel Melkozerov <pavel.melkozerov@nokia.com>
      9630ded3
• Replace ziplist with listpack in quicklist (#9740) · 45129059
      sundb authored
      
      
Part three of implementing #8702, following #8887 and #9366.
      
      ## Description of the feature
1. Replace the ziplist container of quicklist with listpack.
2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.

## Interface changes
1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
2. Replace the `debug ziplist` command with `debug listpack`.

## Internal changes
1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
2. Add `lpRepr` to print info about a listpack, used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357).
    It represents that a quicklistNode is a packed node, as opposed to a plain node.
4. Remove the `createZiplistObject` method, which is never used.
5. Calculate listpack entry size using overhead overestimation in `quicklistAllowInsert`.
    We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
      
      ## Improvements
1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
2. Optimize `quicklistAppendPlainNode` to avoid a memcpy of the data.
      
      ## Bugfix
1. Fix a crash in `quicklistRepr` when a ziplist is compressed, introduced in #9366.
      
      ## Test
      1. Add unittest for `lpMerge`.
      2. Modify the old quicklist ziplist corrupt dump test.
Co-authored-by: Oran Agra <oran@redislabs.com>
      45129059
• Wait for `async_loading` to stop in `short read` test (#9841) · fb4f7be2
      Binbin authored
In #9323, when `repl-diskless-load` is enabled and set to `swapdb`,
if the master replication ID hasn't changed, we can load the data-set
asynchronously and serve read commands during the full resync.

In the `diskless loading short read` test, after a successful load,
we wait for the loading to stop and continue the for loop.

After the introduction of `async_loading`, we also need to check it.
Otherwise the next loop will start too soon and may trigger a timing issue.
      fb4f7be2
• Add tests to cover EXPIRE overflow fix (#9839) · 9273d09d
      Binbin authored
In #8287, some overflow checks were added. But when
`when *= 1000` overflows, it can become a positive number,
and the check is not able to catch it. The key will be added with
a short expiration time and will be deleted a few seconds later.

In #9601, we check for the overflow right after the `*=` and return an
error first, avoiding this situation.

This commit adds some tests to cover those code paths.
Found in #9825, which this closes.
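A sketch of the overflow check idea from #9601 that these tests exercise (illustrative helper, not the Redis code): detect the overflow before the `*= 1000` conversion instead of letting a wrapped value slip through.
```
#include <stdio.h>
#include <limits.h>

/* Returns 0 on success, -1 if converting seconds to milliseconds would overflow. */
static int secondsToMs(long long when, long long *out) {
    if (when > LLONG_MAX / 1000 || when < LLONG_MIN / 1000) return -1;
    *out = when * 1000;
    return 0;
}

int main(void) {
    long long ms;
    /* 9223372036854775 * 1000 still fits; one more second would overflow. */
    printf("%d\n", secondsToMs(9223372036854775LL, &ms));   /* 0  */
    printf("%d\n", secondsToMs(9223372036854776LL, &ms));   /* -1 */
    return 0;
}
```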
      9273d09d
  9. 23 Nov, 2021 2 commits
• fix invalid read on corrupt ziplist (#9831) · a3a01429
      Oran Agra authored
If the last bytes in the ziplist are corrupt and we decode from tail to head,
we may reach slightly outside the ziplist.
      a3a01429
• QUIT is a command, HOST: and POST are not (#9798) · b161cff5
      guybe7 authored
Some people complain that QUIT is missing from the help/command table,
not appearing in the COMMAND command, command stats, ACL, etc.;
instead, there's a hack in processCommand with a comment that looks outdated.
Note that it is [documented](https://redis.io/commands/quit).

At the same time, HOST: and POST are in the command table although these are not real commands.
They would appear in the COMMAND command, and even in commandstats.
      
      Other changes:
1. Initialize the static logged_time variable in securityWarningCommand
      2. add `no-auth` flag to RESET so it can always be executed.
      b161cff5
  10. 22 Nov, 2021 3 commits
• Fix invalid access in lpFind on corrupted listpack (#9819) · f07dedf7
      Oran Agra authored
      Issue found by corrupt-dump-fuzzer test with ASAN.
The problem was that lpSkip and lpGetWithSize could read the next listpack entry without validating that it's in range.
Similarly, even the memcmp in lpFind could do that and possibly crash with a segfault; now they'll crash on an assert first.

The naive fix of using lpAssertValidEntry every time resulted in a 30% degradation in the lpFind benchmark of the unit test.
The final fix, with the condition at the bottom, has no performance implications.
      f07dedf7
• fix string escaping in corrupt-dump test to support TCL8.5 (#9824) · f00a8ad9
      Oran Agra authored
TCL8.5 can't handle cases where part of the string is escaped and part of it isn't;
if there's a single char that needs escaping, we need to escape the whole string.
      f00a8ad9
• Fix timing issue in sub-second expires test (#9821) · 698b5774
      Binbin authored
      
      
The `PEXPIRE/PSETEX/PEXPIREAT can set sub-second expires` test is
a very time sensitive test; it used to occasionally fail on MacOS.

It performs three internal tests in a loop; as long as one
fails, it will try to execute again in the next loop.

oranagra suggested that we split it into three individual tests,
so that if one fails, we do not need to retry the others, and maybe
it will increase the chances of success dramatically.
      
      Each is executed 500 times, and the number of retries is collected:
      ```
      PSETEX, total: 500, sum: 745, min: 0, max: 13, avg: 1.49
      
      PEXPIRE, total: 500, sum: 575, min: 0, max: 16, avg: 1.15
      
      PEXPIREAT, total: 500, sum: 0, min: 0, max: 0, avg: 0.0
      
      ALL(old_way), total: 500, sum: 8090, min: 0, max: 138, avg: 16.18
      ```
      
And we can see the threshold is very low.
Splitting the test also makes the code easier to maintain.
Co-authored-by: Oran Agra <oran@redislabs.com>
      698b5774
  11. 21 Nov, 2021 5 commits
• Fix false positive leak reported by GCC ASAN (#9816) · 183b90a6
      Oran Agra authored
      Leak found by the corrupt-dump-fuzzer when using GCC ASAN, which seems
      to falsely report leaks on pointers kept only on the stack when calling exit.
      Instead we now use _exit on panic / assert to skip these leak checks.
      
Additionally, check for sanitizer warnings in the corrupt-dump-fuzzer between iterations,
so that when something is found we know which test to relate it to (and it prints the reproduction command list).
      183b90a6
• Don't use accurate option with ASAN unit tests (#9818) · a68b71ac
      Ozan Tezcan authored
Specifically, the ziplist and listpack unit tests and benchmarks run for too long with address sanitizer and --accurate.
      a68b71ac
• Fix occasional RM_Call() crashes. (#9805) · fd0ca747
      Yossi Gottlieb authored
      With dynamically growing argc (#9528), it is necessary to initialize
      argv_len. Normally createClient() handles that, but in the case of a
      module shared_client, this needs to be done explicitly.
      
      This also addresses an issue with rewriteClientCommandArgument() which
      doesn't properly handle the case where the new element extends beyond
      argc but not beyond argv_len.
      fd0ca747
• Prevent LCS from allocating temp memory over proto-max-bulk-len (#9817) · 14176484
      Oran Agra authored
LCS can allocate an immense amount of memory (the sizes of the two inputs multiplied by each other).
In the past this caused some possible security issues due to overflows, which we solved
and also added use of `trymalloc` to return "Insufficient memory" instead of an OOM panic in zmalloc.

But in case overcommit is enabled, it could be that we won't get the OOM panic, and zmalloc
will succeed, and then we can get OOM killed by the kernel.

The solution here is to prevent LCS from allocating transient memory that's bigger than
the `proto-max-bulk-len` config.
This config is not directly related to transient memory, but using a hard coded value as well as
introducing a specific config seems wrong.

This comes to solve an error in the corrupt-dump-fuzzer test that started in the daily CI, see #9799.
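A sketch of the kind of guard described above, with hypothetical names: refuse the LCS matrix allocation when its size would exceed the configured limit (the default `proto-max-bulk-len` is 512mb).
```
#include <stdio.h>
#include <stdint.h>

/* Simplified check: (alen+1) * (blen+1) cells of `cell` bytes must stay under `limit`. */
static int lcsAllocAllowed(size_t alen, size_t blen, size_t limit, size_t cell) {
    if (alen + 1 < alen || blen + 1 < blen) return 0;          /* wraparound guard */
    uint64_t cells = (uint64_t)(alen + 1) * (uint64_t)(blen + 1);
    if (cells > limit / cell) return 0;                        /* would exceed the limit */
    return 1;
}

int main(void) {
    size_t limit = 512 * 1024 * 1024;   /* default proto-max-bulk-len is 512mb */
    printf("%d\n", lcsAllocAllowed(1000, 1000, limit, sizeof(uint32_t)));       /* allowed */
    printf("%d\n", lcsAllocAllowed(100000, 100000, limit, sizeof(uint32_t)));   /* refused */
    return 0;
}
```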
      14176484
• Improve active defrag in jemalloc 5.2 (#9778) · d4e7ffb3
      Oran Agra authored
Background:
Following the upgrade to jemalloc 5.2, there was a test that used to be flaky and
started failing consistently (on 32bit), so we disabled it (see #9645).

This is a test that I introduced in #7289 when I attempted to solve a rare stagnation
problem. It later turned out I failed to solve it, and what's more, I added a test that
caused it to be not so rare; as mentioned, in jemalloc 5.2 it became consistent on 32bit.

Stagnation can happen when all the slabs of the bin are equally utilized, so the decision
to move an allocation from a relatively empty slab to a relatively full one will never
happen, and in that test all the slabs are at 50% utilization, so the defragger could just
keep scanning the keyspace and not move anything.
      
What this PR changes:
* First, finally in jemalloc 5.2 we have the count of non-full slabs, so when we compare
  the utilization of the current slab, we can compare it to the average utilization of the non-full
  slabs in our bin, instead of the total average of our bin. This takes the full slabs out of the game,
  since they're not candidates for migration (neither source nor target).
* Secondly, we add some 12% (100/8) to the decision to defrag an allocation. This is the part
  that aims to avoid stagnation, and it's especially important since the above mentioned change
  can get us closer to stagnation.
* Thirdly, since jemalloc 5.2 adds sharded bins, we take into account all shards (something
  that's missing from the original PR that merged it). This isn't expected to make any difference
  since anyway there should be just one shard.
      
How this was benchmarked:
I ran the memefficiency test unit with `--verbose` and compared the defragger hits
and misses that the tests reported.
At first, when I took into consideration only the non-full slabs, it got a lot worse (I got into
stagnation, or just got a lot of misses and a lot of hits), but when I added the 10% I got back
to results that were slightly better than the ones of the jemalloc 5.1 branch, i.e. full defragmentation
was achieved with fewer hits (relocations) and fewer misses (keyspace scans).
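A sketch of the flavor of the new decision (illustrative only; the real computation lives in the jemalloc defrag hint and differs in detail): compare the current slab's utilization to the average of the non-full slabs, biased by about 1/8 of a slab toward defragging so that equally utilized slabs do not stagnate.
```
#include <stdio.h>

static int shouldDefrag(unsigned long curr_regs,      /* used regions in the slab of this pointer */
                        unsigned long nonfull_slabs,  /* non-full slabs in the bin (new in 5.2)   */
                        unsigned long nonfull_regs,   /* used regions across those slabs          */
                        unsigned long regs_per_slab)  /* region capacity of one slab              */
{
    if (nonfull_slabs == 0) return 0;                 /* nowhere to migrate to */
    unsigned long avg = nonfull_regs / nonfull_slabs; /* average utilization of non-full slabs */
    /* Bias by ~12% (1/8 of a slab) so equally utilized slabs can still migrate,
     * which is what avoids the stagnation described above. */
    return curr_regs < avg + regs_per_slab / 8;
}

int main(void) {
    /* Equally utilized slabs (the old stagnation case): the bias still allows a move. */
    printf("%d\n", shouldDefrag(256, 10, 2560, 512));   /* 1 */
    /* Current slab fuller than the non-full average: leave the allocation alone. */
    printf("%d\n", shouldDefrag(400, 10, 2560, 512));   /* 0 */
    return 0;
}
```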
      d4e7ffb3
  12. 20 Nov, 2021 1 commit
  13. 19 Nov, 2021 1 commit