1. 18 Oct, 2023 4 commits
    • Redis 7.0.14 · c1d92a69
      Oran Agra authored
    • Support NO ONE block in REPLICAOF command json (#12633) · 6573acbd
      Binbin authored
      The current commands.json doesn't mention the special NO ONE arguments.
      This change is also applied to SLAVEOF.
      
      (cherry picked from commit 8d92f7f2)
    • Fix compile on macOS 13 (#12611) · 8ada737f
      Jachin authored
      Use the __MAC_OS_X_VERSION_MIN_REQUIRED macro to detect the
      macOS system version instead of using MAC_OS_X_VERSION_10_6.
      
      Starting with MacOSX14.0.sdk, the default definitions of MAC_OS_X_VERSION_xxx have
      been removed from usr/include/AvailabilityMacros.h. It includes AvailabilityVersions.h,
      where the following condition must be met:
      `#if (!defined(_POSIX_C_SOURCE) && !defined(_XOPEN_SOURCE)) || defined(_DARWIN_C_SOURCE)`
      Only then will MAC_OS_X_VERSION_xxx be defined.
      However, in the project, _DARWIN_C_SOURCE is not defined, which leads to the
      loss of the definition for MAC_OS_X_VERSION_10_6.
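      A sketch of the shape of the detection this fix moves to (illustrative only; the exact
      config.h change may differ). 1060 is the value MAC_OS_X_VERSION_10_6 used to carry:
      ```c
      #ifdef __APPLE__
      #include <AvailabilityMacros.h>
      #endif

      /* Sketch: __MAC_OS_X_VERSION_MIN_REQUIRED stays available even when the
       * MAC_OS_X_VERSION_xxx convenience macros are not, so gate 10.6+ code on
       * it instead of on MAC_OS_X_VERSION_10_6 being defined. */
      #if defined(__APPLE__) && __MAC_OS_X_VERSION_MIN_REQUIRED >= 1060
      /* ... 10.6-or-later code path ... */
      #endif
      ```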
      
      (cherry picked from commit a2b0701d)
    • Fix issue of listen before chmod on Unix sockets (CVE-2023-45145) · 7f486ea6
      Yossi Gottlieb authored
      Before this commit, Unix socket setup performed chmod(2) on the socket
      file after calling listen(2). Depending on what umask is used, this
      could leave the file with the wrong permissions for a short period of
      time. As a result, another process could exploit this race condition and
      establish a connection that would otherwise not be possible.
      
      We now make sure the socket permissions are set up prior to calling
      listen(2).
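      A self-contained sketch of the corrected ordering (illustrative, not the actual Redis
      socket code): bind(2), then chmod(2), and only then listen(2), so the socket is never
      accept-able while its permissions are still umask-derived:
      ```c
      #include <string.h>
      #include <sys/socket.h>
      #include <sys/stat.h>
      #include <sys/un.h>
      #include <unistd.h>

      /* Create a Unix socket whose permissions are fixed *before* it starts
       * accepting connections, closing the race window. */
      int listen_unix(const char *path, mode_t perm) {
          int fd = socket(AF_UNIX, SOCK_STREAM, 0);
          if (fd == -1) return -1;

          struct sockaddr_un sa;
          memset(&sa, 0, sizeof(sa));
          sa.sun_family = AF_UNIX;
          strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
          unlink(path); /* remove a stale socket file, if any */

          if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == -1 ||
              chmod(path, perm) == -1 ||   /* set permissions first...      */
              listen(fd, 128) == -1) {     /* ...only then start listening */
              close(fd);
              return -1;
          }
          return fd;
      }
      ```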
      
      (cherry picked from commit a11b3bc34a054818f2ac70e50adfc542ca1cba42)
  2. 06 Sep, 2023 6 commits
    • Redis 7.0.13 · 49dbedb1
      Oran Agra authored
    • Fix sort_ro get-keys function return wrong key number (#12522) · 0f14d327
      bodong.ybd authored
      Before:
      ```
      127.0.0.1:6379> command getkeys sort_ro key
      (empty array)
      127.0.0.1:6379>
      ```
      After:
      ```
      127.0.0.1:6379> command getkeys sort_ro key
      1) "key"
      127.0.0.1:6379>
      ```
      
      (cherry picked from commit b59f53ef)
    • do not call handleClientsBlockedOnKeys inside yielding command (#12459) · 4d67bb6a
      zhaozhao.zz authored
      Fix the assertion triggered when a busy script (timeout) signals ready keys (like LPUSH),
      and then an arbitrary client's `allow-busy` command steps into `handleClientsBlockedOnKeys`
      and tries to wake up clients blocked on keys (like BLPOP).
      
      Reproduction process:
      1. start a redis with aof
          `./redis-server --appendonly yes`
      2. exec blpop
          `127.0.0.1:6379> blpop a 0`
      3. use another client to call a busy script, and have this script push to the blocked key
          `127.0.0.1:6379> eval "redis.call('lpush','a','b') while(1) do end" 0`
      4. use a new client to call an allow-busy command like auth
          `127.0.0.1:6379> auth a`
      
      BTW, this issue also breaks the atomicity of scripts.

      This bug has been around for many years; the old versions only have the
      atomicity problem, while only 7.0/7.2 has the assertion problem.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit 8226f39f)
    • Ensure that the function load timeout is disabled during loading from RDB/AOF... · 37599fe7
      Meir Shpilraien (Spielrein) authored
      Ensure that the function load timeout is disabled during loading from RDB/AOF and on replicas. (#12451)
      
      When loading a function from either RDB/AOF or a replica, it is essential not to
      fail on timeout errors. The loading time may vary due to various factors, such as
      hardware specifications or the system's workload during the loading process.
      Once a function has been successfully loaded, it should be allowed to load from
      persistence or on replicas without encountering a timeout failure.
      
      To maintain a clear separation between the engine and Redis internals, the
      implementation refrains from directly checking the state of Redis within the
      engine itself. Instead, the engine receives the desired timeout as part of the
      library creation and duly respects this timeout value. If Redis wishes to disable
      any timeout, it can simply send a value of 0.
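      A hypothetical sketch of that contract (names invented for illustration; not the actual
      engine API): the engine keeps the timeout it was handed at creation time and treats 0
      as unlimited:
      ```c
      #include <time.h>

      typedef struct {
          time_t deadline; /* 0 = no timeout: used for RDB/AOF loading and replicas */
      } load_ctx;          /* hypothetical type */

      void load_begin(load_ctx *ctx, long timeout_secs) {
          ctx->deadline = timeout_secs ? time(NULL) + timeout_secs : 0;
      }

      int load_timed_out(const load_ctx *ctx) {
          /* The engine checks only the value it was given and never inspects
           * Redis state, preserving the engine/server separation. */
          return ctx->deadline != 0 && time(NULL) > ctx->deadline;
      }
      ```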
      
      (cherry picked from commit 2ee1bbb5)
    • Process loss of slot ownership in cluster bus (#12344) · ea1bc6f6
      Sankar authored
      
      When a node no longer owns a slot, it clears the bit corresponding
      to the slot in the cluster bus messages. The receiving nodes
      currently don't record the fact that the sender stopped claiming
      a slot until some other node in the cluster starts claiming the slot.
      This can cause a slot to go missing during slot migration when subjected
      to an inopportune race with the addition of new shards or a failover.
      This fix forces the receiving nodes to process the loss of ownership
      to avoid spreading wrong information.
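      A rough sketch of the receiving-side idea, on deliberately simplified data
      structures (the real code works on clusterMsg headers, epochs, and node structs):
      ```c
      #define CLUSTER_SLOTS 16384

      static int slot_claimed(const unsigned char *slots, int slot) {
          return (slots[slot / 8] >> (slot % 8)) & 1;
      }

      /* On every cluster bus message, not only adopt slots the sender now
       * claims, but also record the loss when the current owner stops
       * claiming a slot, instead of waiting for some other node to claim it. */
      void update_slots_from_sender(int sender, int owner[CLUSTER_SLOTS],
                                    const unsigned char *sender_slots) {
          for (int slot = 0; slot < CLUSTER_SLOTS; slot++) {
              if (slot_claimed(sender_slots, slot)) {
                  owner[slot] = sender;   /* sender claims the slot */
              } else if (owner[slot] == sender) {
                  owner[slot] = -1;       /* sender dropped it: process the
                                           * loss immediately */
              }
          }
      }
      ```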
      
      (cherry picked from commit 1190f25c)
    • Skip test for sdsRemoveFreeSpace when mem_allocator is not jemalloc (#11878) · 646069a9
      sundb authored
      Test `trim on SET with big value` (introduced in #11817) fails under mac m1 with the libc mem_allocator.
      The reason is that malloc(33000) will allocate 65536 bytes (> 42000).
      This test still passes under ubuntu with the libc mem_allocator.
      
      ```
      *** [err]: trim on SET with big value in tests/unit/type/string.tcl
      Expected [r memory usage key] < 42000 (context: type source line 471 file /Users/iospack/data/redis_fork/tests/unit/type/string.tcl cmd {assert {[r memory usage key] < 42000}} proc ::test)
      ```
      
      A simple test under mac m1 with the libc mem_allocator:
      ```c
      #include <stdio.h>
      #include "zmalloc.h" /* Redis allocator wrappers */

      int main(void) {
          void *p = zmalloc(33000);
          printf("malloc size: %zu\n", zmalloc_size(p)); /* prints: malloc size: 65536 */
          return 0;
      }
      ```
      
      (cherry picked from commit 3fba3ccd)
  3. 10 Jul, 2023 10 commits
    • Redis 7.0.12 · 8e73f9d3
      Oran Agra authored
    • Fix compile errors when building with gcc-12 or clang (partial #12035) · f90ecfb1
      sundb authored
      This is a partial cherry-pick from Redis 7.2
      
      ## Fix various compilation warnings and errors
      
      5) server.c
      
      COMPILER: gcc-13 with FORTIFY_SOURCE
      
      WARNING:
      ```
      In function 'lookupCommandLogic',
          inlined from 'lookupCommandBySdsLogic' at server.c:3139:32:
      server.c:3102:66: error: '*(robj **)argv' may be used uninitialized [-Werror=maybe-uninitialized]
       3102 |     struct redisCommand *base_cmd = dictFetchValue(commands, argv[0]->ptr);
            |                                                              ~~~~^~~
      ```
      
      REASON: The compiler thinks that the `argc` returned by `sdssplitlen()` could be 0,
      resulting in an empty array of size 0 being passed to lookupCommandLogic.
      This should be a false positive: `argc` can't be 0 when the strings are not NULL.
      
      SOLUTION: add an assert to let the compiler know that `argc` is positive.
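      The shape of that solution, sketched with plain assert() (Redis has its own
      assertion macro):
      ```c
      #include <assert.h>

      /* Sketch: after a successful split of a non-empty string, argc cannot
       * be 0; asserting it lets the compiler prove argv[0] is initialized. */
      void lookup_from_split(char **argv, int argc) {
          assert(argc > 0);
          /* ... proceed to read argv[0] safely ... */
          (void)argv;
      }
      ```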
      
      ## Other changes
      1) Fixed `ps -p [pid]` not outputting `<defunct>` when using procps 4.x, which caused the
        `replication child dies when parent is killed - diskless` test to fail.
      
      (cherry picked from commit 42c8c618)
    • Fix possible crash in command getkeys (#12380) · bd1dac0c
      Lior Lahav authored
      When getKeysUsingKeySpecs processes a command with more than one key-spec,
      and is called with a total of more than 256 keys, it calls getKeysPrepareResult again;
      but since numkeys isn't updated, getKeysPrepareResult will not bother to copy key
      names from the old result (leaving these slots uninitialized). Furthermore, it did not
      account for the keys it had already found when allocating more space (see the sketch below).
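      A generic sketch of the two fixes (hypothetical helper, not the real
      getKeysPrepareResult): grow by found + numkeys and copy the keys found so far:
      ```c
      #include <stdlib.h>
      #include <string.h>

      typedef struct { int *keys; int found; int size; } keys_result; /* illustrative */

      int keys_grow(keys_result *r, int numkeys) {
          int need = r->found + numkeys;   /* count the keys already found */
          if (need <= r->size) return 0;
          int *nk = malloc(sizeof(*nk) * need);
          if (!nk) return -1;
          memcpy(nk, r->keys, sizeof(*nk) * r->found); /* keep earlier results */
          free(r->keys);
          r->keys = nk;
          r->size = need;
          return 0;
      }
      ```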
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit b7559d9f)
    • Use Reservoir Sampling for random sampling of dict, and fix hang during fork (#12276) · 25f610fc
      sundb authored
      ## Issue:
      When a dict has a long chain, or the length of the chain is longer than
      the number of samples, we will never be able to sample the elements
      at the end of the chain using dictGetSomeKeys().
      This means SRANDMEMBER can hang in an endless loop.
      The most severe case is the pathological one where someone uses SCAN+DEL
      or SSCAN+SREM, creating an unevenly distributed dict.
      This was amplified by the recent change in #11692, which prevented
      down-sizing rehashing while there is a fork.
      
      ## Solution
      1. Before, we would stop sampling when we reached the maximum number
        of samples, even if there was more data after the current chain.
        Now when we reach the maximum we use the Reservoir Sampling
        algorithm to fairly sample the end of the chain that could not be sampled
        before (see the sketch after this list).
      2. Fix the rehashing code so that, just as it allows up-sizing rehashing
        during fork when the ratio is extreme, it allows down-sizing as well.
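      A sketch of point 1 on a plain linked chain (classic Algorithm R, not the
      actual dictGetSomeKeys code):
      ```c
      #include <stdlib.h>

      typedef struct entry { void *key; struct entry *next; } entry;

      /* Once the reservoir of `picked` slots is full, each further entry
       * replaces a random slot with probability picked/seen, so entries at
       * the end of an arbitrarily long chain still get a fair chance. */
      void sample_chain(entry *head, entry **reservoir, int picked) {
          int seen = 0;
          for (entry *e = head; e != NULL; e = e->next) {
              seen++;
              if (seen <= picked) {
                  reservoir[seen - 1] = e;   /* fill phase */
              } else {
                  int j = rand() % seen;     /* replace phase */
                  if (j < picked) reservoir[j] = e;
              }
          }
      }
      ```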
      
      Issue was introduced (or became more severe) by #11692
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit b00a2351)
    • Add missing return on -UNKILLABLE sent by master case (#12277) · eb64a97d
      Binbin authored
      We no longer propagate scripts (starting from 7.0), so this is a
      very rare issue in nearly-dead code.

      This is an oversight from #9780.
      
      (cherry picked from commit e4d183af)
    • Fix WAIT for clients being blocked in a module command (#12220) · 2ba8de9d
      Oran Agra authored
      So far, clients being blocked and unblocked by a module command would
      not update the c->woff variable, and so WAIT was ineffective and got released
      without waiting for the command actions to propagate.
      
      This seems to have existed since forever, but not for RM_BlockClientOnKeys.
      
      It is problematic, though, to know if the module did or didn't propagate
      anything in that command, so for now, instead of adding an API, we'll
      just update the woff to the latest offset when unblocking; this may
      cause the client to wait excessively, but that's not that bad.
      
      (cherry picked from commit 6117f288)
    • Fix memory leak when RM_Call's RUN_AS_USER fails (#12158) · 1d2839a8
      Shaya Potter authored
      Previously the argv wasn't freed, so it would leak. Not a common case, but it should be handled.

      Solution: move the RUN_AS_USER setup and error exit to the right place.
      This way, when we do `goto cleanup` (instead of return), it'll automatically do the
      right thing (including autoMemoryAdd), as sketched below.
      Removed the user argument from moduleAllocTempClient (reverted to the state
      before 6e993a5d).
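      The cleanup idiom the fix applies, in a generic sketch (helper names are
      invented; the real code also handles auto-memory):
      ```c
      extern int setup_user(void);                 /* illustrative helpers */
      extern int do_call(int argc, void **argv);
      extern void free_argv(int argc, void **argv);

      static int call_as_user(int argc, void **argv) {
          int ret = -1;
          if (setup_user() != 0) goto cleanup; /* was an early return: argv leaked */
          if (do_call(argc, argv) != 0) goto cleanup;
          ret = 0;
      cleanup:
          free_argv(argc, argv);               /* every path frees argv now */
          return ret;
      }
      ```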
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit 71e6abe4)
    • Prevent repetitive backlog trimming (#12155) · c340fd5a
      Brennan authored
      When `replicationFeedSlaves()` serializes a command, it repeatedly calls
      `feedReplicationBuffer()` to feed it to the replication backlog piece by piece.
      It is unnecessary to call `incrementalTrimReplicationBacklog()` for every small
      amount of data added with `feedReplicationBuffer()`, as the chance of the trimming
      conditions being met is very low, and these frequent calls add up to a notable
      performance cost. Instead, we only attempt trimming when a new block is added
      to the replication backlog.
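      A sketch of the change (hypothetical helpers, not the Redis code): attempt a trim
      only when a brand-new block is appended, since that is the only point where the
      trim condition can newly become true:
      ```c
      #include <stddef.h>

      struct buf_block; /* opaque in this sketch */
      extern size_t block_free_space(struct buf_block *b);
      extern struct buf_block *append_new_block(struct buf_block *b);
      extern void block_copy(struct buf_block *b, const char *p, size_t n);
      extern void incremental_trim_backlog(void);

      void feed_backlog(struct buf_block **tail, const char *p, size_t len) {
          while (len) {
              size_t avail = block_free_space(*tail);
              if (avail == 0) {
                  *tail = append_new_block(*tail);
                  incremental_trim_backlog(); /* once per new block, not per
                                               * partial copy */
                  continue;
              }
              size_t n = len < avail ? len : avail;
              block_copy(*tail, p, n);
              p += n; len -= n;
          }
      }
      ```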
      
      Using redis-benchmark to saturate a local redis server indicated a performance
      improvement of around 3-3.5% for 100 byte SET commands with this change.
      
      (cherry picked from commit 40e6131b)
    • Free backlog only if rsi is invalid when master reboot (#12088) · 88682ca3
      zhaozhao.zz authored
      When the master reboots from RDB, if the rsi in the RDB is valid we should not free the
      replication backlog, even if master_repl_offset or repl-offset is 0.

      Since the master's master_repl_offset is 0 if it hasn't sent any data to replicas, 0 is a valid number.
      
      A clear example:
      
      1. start a master and apply some write commands; the master's master_repl_offset is 0 since it has no replicas.
      2. stop write commands on the master, start another instance, and replicaof the master, triggering a FULLRESYNC
      3. the master's master_repl_offset is still 0 (set a large number for repl-ping-replica-period); do BGSAVE and restart the master
      4. the master loads master_repl_offset from the RDB's rsi and it's still 0, and we should make sure the replica can partially resync with the master.
      
      (cherry picked from commit b0dd7b32)
    • Lua cjson and cmsgpack integer overflow issues (CVE-2022-24834) · f6a7c9f9
      Oran Agra authored
      * Fix integer overflows due to using the wrong integer size (see the sketch below).
      * Add assertions / panic when overflow still happens.
      * Delete dead code to avoid the need to maintain it.
      * Some changes are not because of bugs, but rather paranoia.
      * Improve cmsgpack and cjson test coverage.
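      The general class of fix, sketched (not the actual cjson/cmsgpack patches):
      compute sizes in size_t and reject a multiplication that would overflow, instead
      of letting a too-narrow integer silently wrap:
      ```c
      #include <stdint.h>
      #include <stdlib.h>

      void *alloc_array(size_t nmemb, size_t size) {
          if (size != 0 && nmemb > SIZE_MAX / size) return NULL; /* would overflow */
          return malloc(nmemb * size);
      }
      ```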
      Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
  4. 17 Apr, 2023 7 commits
    • Redis 7.0.11 · 391aa407
      Oran Agra authored
    • fix false valgrind error on new hash test (#11200) · 6b17d824
      Oran Agra authored
      The new test fails on valgrind because strtold("+inf") under valgrind returns a non-inf result;
      the same thing is done in incr.tcl.
      
      (cherry picked from commit c3b7bde9)
    • Avoid valgrind fishy value warning on corrupt restore payloads (#10937) · 5656cc82
      Oran Agra authored
      The corrupt dump fuzzer uncovered a valgrind warning saying:
      ```
      ==76370== Argument 'size' of function malloc has a fishy (possibly negative) value: -3744781444216323815
      ```
      This allocation would have failed (returning NULL) and been handled properly by redis
      (even before this change), but we also want to silence the valgrind warnings (which
      check that casting to ssize_t produces a non-negative value).

      The solution I opted for is to explicitly fail these allocations (returning NULL),
      before even reaching `malloc` (which would have failed and returned NULL too).

      The implication is that we will not be able to support a single allocation of more than
      2GB on a 32-bit system (which I don't think is a realistic scenario), i.e. I do think we
      could be facing cases where redis consumes more than 2GB on a 32-bit system, but
      not in a single allocation.

      The byproduct of this is that I dropped the overflow assertions, since these cases now
      lead to the same OOM panic we have for failed allocations.
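      A sketch of the idea (the real zmalloc change differs in detail): reject sizes that
      would look negative as ssize_t before calling malloc:
      ```c
      #include <stdint.h>
      #include <stdlib.h>

      void *try_alloc(size_t size) {
          /* Anything >= SIZE_MAX/2 would be negative as ssize_t; malloc
           * would fail anyway, so fail explicitly and silence valgrind. */
          if (size >= SIZE_MAX / 2) return NULL;
          return malloc(size);
      }
      ```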
      
      (cherry picked from commit 599e59eb)
    • Use dummy allocator to make accesses defined as per standard (#11982) · 863fcfbf
      sundb authored
      NOTE: for the 7.0 backport we don't declare malloc_size attributes in
      zmalloc.h, so that we don't take the risk of inducing any crashes in a
      bugfix release; this will only have an effect if LTO was enforced from
      outside.
      
      ## Issue
      When we build with `-D_FORTIFY_SOURCE=3` using GCC 12 or later, or clang 9.0 or later,
      we can see the following buffer overflow:
      ```
      === REDIS BUG REPORT START: Cut & paste starting from here ===
      6263:M 06 Apr 2023 08:59:12.915 # Redis 255.255.255 crashed by signal: 6, si_code: -6
      6263:M 06 Apr 2023 08:59:12.915 # Crashed running the instruction at: 0x7f03d59efa7c
      
      ------ STACK TRACE ------
      EIP:
      /lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7f03d59efa7c]
      
      Backtrace:
      /lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f03d599b520]
      /lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7f03d59efa7c]
      /lib/x86_64-linux-gnu/libc.so.6(raise+0x16)[0x7f03d599b476]
      /lib/x86_64-linux-gnu/libc.so.6(abort+0xd3)[0x7f03d59817f3]
      /lib/x86_64-linux-gnu/libc.so.6(+0x896f6)[0x7f03d59e26f6]
      /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x2a)[0x7f03d5a8f76a]
      /lib/x86_64-linux-gnu/libc.so.6(+0x1350c6)[0x7f03d5a8e0c6]
      src/redis-server 127.0.0.1:25111(+0xd5e80)[0x557cddd3be80]
      src/redis-server 127.0.0.1:25111(feedReplicationBufferWithObject+0x78)[0x557cddd3c768]
      src/redis-server 127.0.0.1:25111(replicationFeedSlaves+0x1a4)[0x557cddd3cbc4]
      src/redis-server 127.0.0.1:25111(+0x8721a)[0x557cddced21a]
      src/redis-server 127.0.0.1:25111(call+0x47a)[0x557cddcf38ea]
      src/redis-server 127.0.0.1:25111(processCommand+0xbf4)[0x557cddcf4aa4]
      src/redis-server 127.0.0.1:25111(processInputBuffer+0xe6)[0x557cddd22216]
      src/redis-server 127.0.0.1:25111(readQueryFromClient+0x3a8)[0x557cddd22898]
      src/redis-server 127.0.0.1:25111(+0x1b9134)[0x557cdde1f134]
      src/redis-server 127.0.0.1:25111(aeMain+0x119)[0x557cddce5349]
      src/redis-server 127.0.0.1:25111(main+0x466)[0x557cddcd6716]
      /lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f03d5982d90]
      /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f03d5982e40]
      src/redis-server 127.0.0.1:25111(_start+0x25)[0x557cddcd7025]
      ```
      
      The main reason is that when FORTIFY_SOURCE is enabled, GCC or clang will enhance some
      common functions, such as `strcpy`, `memcpy`, `fgets`, etc, so that they can detect buffer
      overflow errors and stop program execution, thus improving the safety of the program.
      We use `zmalloc_usable_size()` everywhere to use memory blocks, but that is an abuse,
      since malloc_usable_size() isn't meant for this kind of use; it is for diagnostics only.
      That is also why the behavior is flaky when built with _FORTIFY_SOURCE: the compiler
      can sense that we reach outside the allocated block and SIGABRT.
      
      ### Solution
      If we need to use the additional memory we got, we need to use a dummy realloc with an
      `alloc_size` attribute and no inlining (see `extend_to_usable`) to let the compiler see the
      full size of the memory we intend to use.
      This can either be an implicit call inside `z*usable` that returns the size, so that the caller
      doesn't have any other worry, or it can be a normal zmalloc call, which means that if the
      caller wants to use zmalloc_usable_size it must also use extend_to_usable.
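      The dummy realloc itself is tiny; a sketch matching the description above (compare
      extend_to_usable in zmalloc.c):
      ```c
      #include <stddef.h>

      /* Does nothing at runtime; alloc_size(2) tells the compiler the block
       * really is `size` bytes, and noinline keeps the hint from being
       * optimized away. */
      __attribute__((alloc_size(2), noinline))
      void *extend_to_usable(void *ptr, size_t size) {
          (void)size;
          return ptr;
      }
      ```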
      
      ### Changes
      
      This PR does the following:
      1) rename the current z[try]malloc_usable family to z[try]malloc_internal and don't expose
        them to users outside zmalloc.c;
      2) expose a new set of `z[*]_usable` family functions that use z[*]_internal and
        `extend_to_usable()` implicitly; the caller gets the size of the allocation, and it is safe to use;
      3) go over all the users of `zmalloc_usable_size` and convert them to use the `z[*]_usable`
        family if possible;
      4) in the places where the caller can't use `z[*]_usable` and store the real size, and must still
        rely on zmalloc_usable_size, we still make sure that the allocation used `z[*]_usable` (which
        has a call to `extend_to_usable()`) and ignores the returned size; this way a later call to
        `zmalloc_usable_size` is still safe.
      
      [4] was done for module.c and listpack.c, all the others places (sds, reply proto list, replication backlog, client->buf)
      are using [3].
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit e0b378d2)
    • Disconnect pub-sub subscribers when revoking `allchannels` permission (#11992) · 90f489b0
      Slava Koyfman authored
      The existing logic for killing pub-sub clients did not handle the `allchannels`
      permission correctly. For example, if you:
      
          ACL SETUSER foo allchannels
      
      Have a client authenticate as the user `foo` and subscribe to a channel, and then:
      
          ACL SETUSER foo resetchannels
      
      The subscribed client would not be disconnected, though new clients under that user
      would be blocked from subscribing to any channels.
      
      This was caused by an incomplete optimization in `ACLKillPubsubClientsIfNeeded`
      checking whether the new channel permissions were a strict superset of the old ones.
      
      (cherry picked from commit f38aa6bf)
    • Fix fork done handler wrongly update fsync metrics and enhance AOF_FSYNC_ALWAYS (#11973) · 17885684
      Binbin authored
      This PR fixes several unrelated bugs that were discovered by the same set of tests
      (WAITAOF tests in #11713) and could make the `WAITAOF` test hang.
      
      The change in `backgroundRewriteDoneHandler` is about MP-AOF.
      That leftover / old code assumes that we started a new AOF file just now
      (when we have a new base into which we're gonna incrementally write), but
      the fact is that with MP-AOF, the fork done handler doesn't really affect the
      incremental file being maintained by the parent process; there's no reason to
      re-issue `SELECT`, and no reason to update any of the fsync variables in that flow.
      This should have been deleted with MP-AOF (introduced in #9788, 7.0).
      The damage is that the update to `aof_fsync_offset` will cause us to miss an fsync
      in `flushAppendOnlyFile`; that happens if we stop write commands in `AOF_FSYNC_EVERYSEC`
      mode while an AOFRW is in progress. This caused a new `WAITAOF` test to sometimes hang forever.
      
      Also because of MP-AOF, we needed to change `aof_fsync_offset` to `aof_last_incr_fsync_offset`
      and match it to `aof_last_incr_size` in `flushAppendOnlyFile`. This is because in the past we compared
      `aof_fsync_offset` and `aof_current_size`, but with MP-AOF it could be that the total AOF file is
      smaller after AOFRW, while the (already existing) incr file still has data that needs to be fsynced.
      
      The change in `flushAppendOnlyFile` regarding `AOF_FSYNC_ALWAYS` follows #6053
      (the details are in #5985): we also check `AOF_FSYNC_ALWAYS` to handle a case where
      appendfsync is changed from everysec to always while there is data that's written but not yet fsynced.
      
      (cherry picked from commit cb171786)
    • fix hincrbyfloat not to create a key if the new value is invalid (#11149) · 1c1bd618
      chendianqiang authored
      Check the validity of the value before performing the create operation; this
      prevents new data from being generated even if the request fails to execute.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: chendianqiang <chendianqiang@meituan.com>
      Co-authored-by: Binbin <binloveplay1314@qq.com>
      (cherry picked from commit bc7fe41e)
  5. 20 Mar, 2023 7 commits
    • Redis 7.0.10 · f651708a
      Oran Agra authored
    • Avoid assertion when MSETNX is used with the same key twice (CVE-2023-28425) · 6956d15b
      Oran Agra authored
      Using the same key twice in an MSETNX command would trigger an assertion.

      This reverts #11594 (introduced in Redis 7.0.8).
    • Fix tail->repl_offset update in feedReplicationBuffer (#11905) · 66ff5e69
      Binbin authored
      In #11666, we added a while loop that splits a big reply node into multiple
      nodes, and the update of tail->repl_offset could be wrong. Before #11666,
      we would have created at most one new reply node; now we create multiple
      nodes when there is a big reply node.

      Since we now create more than one node, the tail->repl_offset of all the
      nodes except the last one was incorrect, because we update master_repl_offset
      at the beginning and then use it to update each tail->repl_offset. This would
      have led to an assertion during PSYNC; a test was added to validate that case.
      
      Besides that, the size calculation was adjusted to fix tests that failed due to
      a combination of a very low backlog size and thresholds that get violated
      because of the relatively high overhead of replBufBlock. So now, if the
      backlog size / 16 is too small, we take PROTO_REPLY_CHUNK_BYTES instead.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit 7997874f)
    • Large blocks of replica client output buffer could lead to psync loops and... · 88695894
      xbasel authored
      Large blocks of replica client output buffer could lead to psync loops and unnecessary memory usage (#11666)
      
      This can happen when a key almost equal or larger than the
      client output buffer limit of the replica is written.
      
      Example:
      1. DB is empty
      2. Backlog size is 1 MB
      3. Client output buffer limit is 2 MB
      4. Client writes a 3 MB key
      5. The shared replication buffer will have a single node which contains
      the key written above, and it exceeds the backlog size.
      
      At this point the client output buffer usage calculation will report the
      replica buffer to be 3 MB (or more) even after sending all the data to
      the replica.
      The primary drops the replica connection for exceeding the limits,
      the replica reconnects and successfully executes partial sync but the
      primary will drop the connection again because the buffer usage is still
      3 MB. This happens over and over.
      
      To mitigate the problem, this fix limits the maximum size of a single
      backlog node to be (repl_backlog_size/16). This way a single node can't
      exceed the limits of the COB (the COB has to be larger than the
      backlog).
      It also means that if the backlog has some excessive data it can't trim,
      it would be at most about 6% overuse.
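      A sketch of the clamp (the PROTO_REPLY_CHUNK_BYTES value is assumed here; the
      fallback when backlog/16 is too small comes from the #11905 follow-up above):
      ```c
      #include <stddef.h>

      #define PROTO_REPLY_CHUNK_BYTES (16*1024) /* assumed for this sketch */

      static size_t max_sz(size_t a, size_t b) { return a > b ? a : b; }
      static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

      /* Each backlog node is capped at backlog/16 so one node can't dwarf
       * the backlog, but never below the normal reply chunk size. */
      size_t backlog_node_size(size_t wanted, size_t repl_backlog_size) {
          size_t limit = max_sz(repl_backlog_size / 16, PROTO_REPLY_CHUNK_BYTES);
          return min_sz(wanted, limit);
      }
      ```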
      
      Other notes:
      1. A loop was added in feedReplicationBuffer, which caused a massive LOC
        change due to indentation; the actual changes are just the `min(max` and the loop.
      2. An unrelated change in an existing test to speed up a server termination which took 10 seconds.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit 7be7834e)
    • Fix the bug that CLIENT REPLY OFF|SKIP cannot receive push notifications (#11875) · f8ae7a41
      Binbin authored
      This bug seems to have been there forever: CLIENT REPLY OFF|SKIP marks the
      client with the CLIENT_REPLY_OFF or CLIENT_REPLY_SKIP flag.
      With these flags, prepareClientToWrite, called by addReply*, will
      return C_ERR directly, so the client can't receive Pub/Sub
      messages or any other push notifications, e.g. client-side tracking.

      In this PR, we add a CLIENT_PUSHING flag that overrides the reply
      silencing flags: when adding push replies, we set the flag, and after the
      reply we clear it. We then check the flag in prepareClientToWrite, as sketched below.
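      A simplified sketch of the mechanism (flag name from this commit; the surrounding
      structures are reduced to the minimum):
      ```c
      #define CLIENT_REPLY_OFF   (1<<0)
      #define CLIENT_REPLY_SKIP  (1<<1)
      #define CLIENT_PUSHING     (1<<2)

      struct client { int flags; };

      int prepare_client_to_write(struct client *c) {
          /* OFF/SKIP silence replies... unless we are pushing. */
          if ((c->flags & (CLIENT_REPLY_OFF|CLIENT_REPLY_SKIP)) &&
              !(c->flags & CLIENT_PUSHING)) return -1; /* C_ERR */
          return 0;
      }

      void add_push_reply(struct client *c /*, payload */) {
          c->flags |= CLIENT_PUSHING;   /* set around the push reply... */
          /* addReply*(...) */
          c->flags &= ~CLIENT_PUSHING;  /* ...then clear it */
      }
      ```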
      
      Fixes #11874
      
      Note, the SUBSCRIBE command response is a bit awkward,
      see https://github.com/redis/redis-doc/pull/2327
      
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit 416842e6)
    • Always compact nodes in stream listpacks after creating new nodes (#11885) · 17181517
      Madelyn Olson authored
      This change attempts to alleviate a minor memory usage degradation for Redis 6.2 and onwards when using rather large objects (~2k) in streams. Introduced in #6281, we pre-allocate the head nodes of a stream to be 4kb, to limit the amount of unnecessary initial reallocations that are done. However, if we only ever allocate one object, because 2 objects exceed the max_stream_entry_size, we never actually shrink the node to fit the single item. This can lead to a lot of excessive memory usage. For smaller item sizes this becomes less of an issue, as the overhead decreases as the items become smaller in size.
      
      This commit also changes the MEMORY USAGE of streams, since it was reporting the lpBytes instead of the allocated size. This introduced an observability issue when diagnosing the memory issue, since Redis reported the same amount of used bytes pre and post change, even though the new implementation allocated more memory.
      
      (cherry picked from commit 2bb29e4a)
    • Ignore RM_Call deny-oom flag if maxmemory is zero (#11319) · a3903221
      Ozan Tezcan authored
      If a command gets an OOM response and we then set maxmemory to zero
      to disable the limit, server.pre_command_oom_state never gets updated
      and stays true. As RM_Call() invocations with the "respect deny-oom" flag check
      server.pre_command_oom_state, all such calls will fail with OOM.
      
      Added server.maxmemory check in RM_Call() to process deny-oom flag
      only if maxmemory is configured.
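      A simplified sketch of the added condition (not the full RM_Call flow):
      ```c
      struct server_state { unsigned long long maxmemory; int pre_command_oom_state; };
      extern struct server_state server; /* stand-in for the real global */

      /* deny-oom is only meaningful while a memory limit is configured. */
      int rm_call_should_reject_oom(int respect_deny_oom, int cmd_denies_oom) {
          return respect_deny_oom && cmd_denies_oom &&
                 server.maxmemory != 0 &&        /* the added check */
                 server.pre_command_oom_state;   /* OOM seen before the command */
      }
      ```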
      
      (cherry picked from commit 18920813)
  6. 28 Feb, 2023 6 commits