1. 18 Jan, 2024 5 commits
    • Optimize dictTypeResizeAllowed to avoid mistaken OOM judgement. (#12950) · f81c3fd8
      Chen Tianjie authored
      When doing dict resizing, dictTypeResizeAllowed is used to judge whether
      the new allocated memory for rehashing would cause OOM.
      
      However, when shrinking, we allocate `_dictNextExp(d->ht_used[0])` bytes
      of memory, while in `dictTypeResizeAllowed` we still use
      `_dictNextExp(d->ht_used[0]+1)` as the newly allocated memory size. This
      overestimates the memory used by shrinking under certain conditions,
      causing a false OOM judgement.
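      
      A minimal sketch of the idea, with illustrative helper names and limits rather than the real dict internals:
      
      ```c
      #include <stddef.h>
      
      /* Illustrative-only limit; not the real Redis OOM threshold. */
      #define MAX_ALLOC_BYTES (64UL * 1024 * 1024)
      
      /* Simplified _dictNextExp: smallest exponent e with (1 << e) >= n. */
      int next_exp(size_t n) {
          int e = 2;
          while (((size_t)1 << e) < n) e++;
          return e;
      }
      
      /* The gist of the fix: when shrinking, judge the OOM risk against
       * next_exp(used) -- the size that will actually be allocated -- rather
       * than next_exp(used + 1), which is only correct for expansion. */
      int shrink_allowed(size_t used, size_t bucket_bytes) {
          size_t new_buckets = (size_t)1 << next_exp(used);
          return new_buckets * bucket_bytes <= MAX_ALLOC_BYTES;
      }
      ```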
      f81c3fd8
    • Fix minor memory leaks in dictTest (#12962) · 1c7eb0ad
      Binbin authored
      Introduced in #12952, reported by valgrind.
      1c7eb0ad
    • Call emptyData when disk-based sync rdbLoad fails (#12510) · 0e5a4a27
      Binbin authored
      We already do this in diskless on-empty-db mode: when diskless
      loading fails, we call emptyData to remove the half-loaded data,
      in case we started with an empty replica.
      
      Now, when a disk-based sync rdbLoad fails, we call emptyData too,
      in case it loaded partially incomplete data.
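      
      In outline (stand-in names, not the actual replication.c code), both sync paths now share the same failure handling:
      
      ```c
      #include <stdio.h>
      
      /* Stand-ins for the real replication.c functions; names are illustrative. */
      enum { LOAD_OK, LOAD_ERR };
      
      int rdb_load(const char *path) { (void)path; return LOAD_ERR; } /* pretend loading failed */
      void empty_data(void) { puts("dropping partially loaded dataset"); }
      
      int main(void) {
          /* Both the diskless path and (after this commit) the disk-based sync
           * path discard the half-loaded data when loading fails. */
          if (rdb_load("dump.rdb") != LOAD_OK)
              empty_data();
          return 0;
      }
      ```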
      
      When the replica attempts another re-sync, it'll empty the dataset
      again anyway, so this affects two things:
      1. Memory consumption in the time gap until the next rdb loading begins.
      2. If the unsynced replica is for some reason promoted, it would have kept
         the partial dataset instead of being empty.
      0e5a4a27
    • Fix unexpected resize causing test failure (#12960) · 29e6245a
      Binbin authored
      Before #12850, we would only try to shrink the dict in serverCron,
      which we could control by using a child process, but now the shrink
      check is called every time we delete a key.
      
      In these tests (added in #12802), we meant to disable the resizing,
      but during the deletes the dict meets the force-shrink condition,
      like 2 / 128 = 0.015 < 0.2, so the delete triggers a forced resize
      and causes the test to fail.
      
      In this commit, we try to keep the load factor at 3 / 128 = 0.023,
      that is, it does not meet the force shrink.
      29e6245a
    • Fix dict resize ratio checks, avoid precision loss from integer division (#12952) · 14b1edfd
      Binbin authored
      In the past we used integer division to compare ratios. Let us assume
      that we have the following data when expanding:
      ```
      used / size > 5
      `80 / 16 > 5` is false
      `81 / 16 > 5` is false
      `95 / 16 > 5` is false
      `96 / 16 > 5` is true
      ```
      
      Because the integer result is truncated, our resize breaks the ratio
      constraint. This has existed since the beginning and resulted in us
      not strictly following the ratio (shrink has the same issue).
      
      This PR changes the check to a multiplication, which keeps the
      comparison exact while avoiding floating point calculations.
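      
      A minimal before/after sketch with a generic ratio (the real constants and guards in dict.c are omitted):
      
      ```c
      /* Integer division truncates, so e.g. 95 / 16 == 5 and `5 > 5` is false
       * even though 95/16 is roughly 5.9 > 5. Comparing with a multiplication
       * is exact and needs no floating point. */
      int needs_expand_old(unsigned long used, unsigned long size, unsigned long ratio) {
          return used / size > ratio;   /* misses 81..95 for size 16, ratio 5 */
      }
      
      int needs_expand_new(unsigned long used, unsigned long size, unsigned long ratio) {
          return used > size * ratio;   /* true for anything above 80 */
      }
      ```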
      14b1edfd
  2. 17 Jan, 2024 1 commit
    • Fix race in slot dict resize test (#12942) · 131d95f2
      Binbin authored
      The test has a race:
      ```
      *** [err]: Redis can rewind and trigger smaller slot resizing in tests/unit/other.tcl
      Expected '[Dictionary HT]
      Hash table 0 stats (main hash table):
       table size: 12
       number of elements: 2
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ' to match '*table size: 8*' (context: type eval line 12 cmd {assert_match "*table size: 8*" [r debug HTSTATS 0]} proc ::test)
      ```
      
      When `r del "{alice}$j"` is executed in the loop, once the key count
      drops into [9, 12] the load factor has met HASHTABLE_MIN_FILL, so if
      serverCron happens to trigger a slot dict resize at that point, the
      test will fail, because there is no way to meet HASHTABLE_MIN_FILL
      again in the subsequent dels.
      
      The solution is to avoid triggering the resize prematurely. We can
      use MULTI to delete the keys at once, or we can disable the resize.
      Since we disabled resize in the previous test, this fix also uses
      the method of disabling resize.
      
      The test was introduced in #12802.
      131d95f2
  3. 15 Jan, 2024 3 commits
    • Updated comments on dictResizeEnable for new dict shrink (#12946) · ecc31bc6
      Binbin authored
      The new shrink was added in #12850.
      Also updated outdated comments, see #11692.
      ecc31bc6
    • Shrink dict when deleting dictEntry (#12850) · e2b7932b
      Yanqi Lv authored
      When we insert entries into a dict, it may automatically expand if needed.
      However, when we delete entries from a dict, it doesn't shrink to the
      proper size. If there are few entries in a very large dict, this wastes
      a lot of memory and makes iteration inefficient.
      
      The main keyspace dicts (keys and expires) are shrunk by cron
      (`tryResizeHashTables` calls `htNeedsResize` and `dictResize`),
      and some data structures such as zset and hash also do that (call
      `htNeedsResize`) right after a loop of calls to `dictDelete`,
      but many other dicts are completely missing that call (they can only
      expand).
      
      In this PR, we provide the ability to automatically shrink the dict when
      deleting. The conditions triggering the shrink are the same as
      `htNeedsResize` used to have, i.e. we expand when we're over 100%
      utilization and shrink when we're below 10% utilization.
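      
      A simplified sketch of the two thresholds described above (toy struct, assumed minimal-size guard; not the exact Redis conditions):
      
      ```c
      #include <stddef.h>
      
      /* Toy dict just for the sketch; the real code works on the Redis dict. */
      struct toy_dict {
          size_t used;     /* stored entries */
          size_t buckets;  /* current table size */
      };
      
      int should_expand(const struct toy_dict *d) {
          return d->used >= d->buckets;        /* utilization at or above 100% */
      }
      
      int should_shrink(const struct toy_dict *d) {
          return d->buckets > 4 &&             /* keep a minimal table */
                 d->used * 10 < d->buckets;    /* utilization below 10% */
      }
      ```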
      
      Additionally:
      * Add `dictPauseAutoResize` so that flows that do mass deletions will
      only trigger shrinkage at the end.
      * Rename `dictResize` to `dictShrinkToFit` (same logic as it used to
      have, but a better name describing it).
      * Rename `_dictExpand` to `_dictResize` (same logic as it used to have,
      but a better name describing it).
       
      related to discussion
      https://github.com/redis/redis/pull/12819#discussion_r1409293878
      
      
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
      e2b7932b
    • fix scripts access wrong slot if they disagree with pre-declared keys (#12906) · bb2b6e29
      zhaozhao.zz authored
      Regarding how to obtain the hash slot of a key, there is an optimization
      in `getKeySlot()` that avoids redundant hash calculations for keys: when
      the current client is in the process of executing a command, it can
      directly use the slot of the current client, because the slot to be
      accessed has already been calculated in advance in `processCommand()`.
      
      However, scripts are a special case where, in default mode or with
      `allow-cross-slot-keys` enabled, they are allowed to access keys beyond
      the pre-declared range. This means that the keys they operate on may not
      belong to the slot of the pre-declared keys. Currently, when the
      commands in a script are executed, the slot of the original client
      (i.e., the current client) is not correctly updated, leading to
      subsequent access to the wrong slot.
      
      This PR fixes the above issue. When checking the cluster constraints in
      a script, the slot to be accessed by the current command is set for the
      original client (i.e., the current client). This ensures that
      `getKeySlot()` gets the correct slot cache.
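      
      In spirit, the fix keeps the original client's slot cache in step with the key a script command really touches; a schematic sketch with stand-in names (not the real Redis functions or slot hash):
      
      ```c
      /* Schematic only: refresh the original client's cached slot while the
       * script's per-command cluster checks run, so a later getKeySlot()-style
       * lookup sees the slot of the key actually being accessed. */
      struct toy_client { int slot; };
      
      static int toy_slot_of(const char *key) {
          unsigned int h = 0;                        /* stand-in for the CRC16 slot hash */
          while (*key) h = h * 31u + (unsigned char)*key++;
          return (int)(h % 16384u);
      }
      
      void script_check_slot(struct toy_client *original_client, const char *key) {
          original_client->slot = toy_slot_of(key);  /* keep the slot cache correct */
      }
      ```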
      
      Additionally, the following modifications are made:
      
      1. The 'sort' and 'sort_ro' commands use `getKeySlot()` instead of
      `c->slot`, because the client could be an engine client in a script,
      which could lead to a potential bug.
      2. `getKeySlot()` is also used in pubsub to obtain the slot for the
      channel, standardizing the way slots are retrieved.
      bb2b6e29
  4. 14 Jan, 2024 1 commit
  5. 12 Jan, 2024 1 commit
    • Correct bytes_per_key computing. (#12897) · 87786342
      Chen Tianjie authored
      Change the calculation method of bytes_per_key to make it closer to
      the true average key size. The calculation method is as follows:
      
      mh->bytes_per_key = mh->total_keys ? (mh->dataset / mh->total_keys) : 0;
      87786342
  6. 11 Jan, 2024 2 commits
  7. 10 Jan, 2024 1 commit
  8. 09 Jan, 2024 2 commits
  9. 08 Jan, 2024 5 commits
    • Fix minor fd leak in rdbSaveToSlavesSockets (#12919) · 14e4a983
      Binbin authored
      We should close server.rdb_child_exit_pipe when redisFork fails,
      otherwise the pipe fd will be leaked.
      
      Just a cleanup.
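      
      Reduced to a generic snippet (illustrative names; the real code closes server.rdb_child_exit_pipe in rdbSaveToSlavesSockets), the cleanup looks like this:
      
      ```c
      #include <sys/types.h>
      #include <unistd.h>
      
      /* Stand-in for the fork wrapper; returns -1 on failure, like fork(). */
      static pid_t fork_child(void) { return -1; }
      
      /* The shape of the fix: if the fork fails after the pipe was created,
       * close both ends so the descriptors don't leak. */
      int start_socket_snapshot(void) {
          int exit_pipe[2];
          if (pipe(exit_pipe) == -1) return -1;
          if (fork_child() == -1) {
              close(exit_pipe[0]);
              close(exit_pipe[1]);
              return -1;
          }
          return 0;
      }
      ```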
      14e4a983
    • Re-indent code and reduce code being compiled on Solaris for anetKeepAlive (#12914) · 50b8b997
      Andy Pan authored
      This is a follow-up PR for #12782, in which we introduced nested
      preprocessor directives for TCP keep-alive on Solaris and added
      redundant indentation for code. Besides, it could result in unreachable
      code due to the lack of `#else` on the latest Solaris 11.4 where
      `TCP_KEEPIDLE`, `TCP_KEEPINTVL`, and `TCP_KEEPCNT` are available. As a
      result, this PR does three main things:
      
      - To eliminate the redundant indentation for C code in nested preprocessor
      directives
      - To add `#else` directives and move the `TCP_KEEPALIVE_THRESHOLD` +
      `TCP_KEEPALIVE_ABORT_THRESHOLD` settings under it, avoiding unreachable
      code and compiler warnings when `#if defined(TCP_KEEPIDLE) &&
      defined(TCP_KEEPINTVL) && defined(TCP_KEEPCNT)` is met on Solaris 11.4
      (see the sketch below)
      - To remove some trailing whitespace in comments
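      
      A skeleton of the resulting preprocessor structure (the option values and error handling of the real anetKeepAlive are elided; only the #if/#else shape is the point):
      
      ```c
      #include <netinet/in.h>
      #include <netinet/tcp.h>
      #include <sys/socket.h>
      
      int set_keepalive_probes(int fd, int idle) {
      #if defined(TCP_KEEPIDLE) && defined(TCP_KEEPINTVL) && defined(TCP_KEEPCNT)
          /* Solaris 11.4 (and most platforms): per-probe options are available. */
          setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
          /* ... TCP_KEEPINTVL and TCP_KEEPCNT follow ... */
      #else
          /* Older Solaris: fall back to the coarse threshold options,
           * TCP_KEEPALIVE_THRESHOLD and TCP_KEEPALIVE_ABORT_THRESHOLD. */
          (void)fd; (void)idle;
      #endif
          return 0;
      }
      ```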
      50b8b997
    • Optimize performance when many clients [p|s]unsubscribe simultaneously (#12838) · c452e414
      Yanqi Lv authored
      I've been testing the performance of Pub/Sub commands recently. I found
      that if many clients unsubscribe or are killed simultaneously, Redis
      needs a long time to deal with it.
      
      In my experiment, I set up 5000 clients, each subscribing to 100
      channels. Then I call `client kill type pubsub` to simulate the
      situation where all clients unsubscribe from all channels at the same
      time, and measure the execution time. The result shows that it takes
      about 23s. Using _perf_, I find that `listSearchKey` in
      `pubsubUnsubscribeChannel` costs more than 90% of CPU time. I think we
      can optimize this situation.
      
      In this PR, I replace the list with a dict to track the clients
      subscribed to a channel more efficiently. This changes the search phase
      from O(N) to O(1). Then I repeat the experiment above. The results are
      as follows.
      
      |                     | Execution Time (s) | used_memory (MB) |
      | :------------------ | :----------------: | :--------------: |
      | unstable (1bd0b549)  |       23.734       |       65.41      |
      | optimize-pubsub      |        0.288       |       67.66      |
      
      Thanks to #11595, I use a no-value dict, and the results show that
      performance improves significantly while memory usage only increases
      slightly.
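      
      To make the hot spot concrete, a miniature of the pre-PR lookup (illustrative types, not Redis's adlist API):
      
      ```c
      #include <stddef.h>
      
      /* The hot spot before this PR, in miniature: every unsubscribe walked the
       * channel's whole subscriber list to find the client (the listSearchKey
       * pattern). */
      struct subscriber_node { void *client; struct subscriber_node *next; };
      
      struct subscriber_node *find_subscriber(struct subscriber_node *head, void *client) {
          for (struct subscriber_node *n = head; n != NULL; n = n->next)
              if (n->client == client) return n;   /* O(N) per unsubscribe */
          return NULL;
      }
      /* After this PR the subscribers of a channel live in a dict keyed by the
       * client, so the same lookup becomes a hash probe: O(1) on average. */
      ```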
      
      Notice:
      
      - This PR causes a performance degradation of about 20% in the
      `[p|s]subscribe` commands, but won't freeze Redis.
      c452e414
    • Change destination key's key-spec flag from RW to OW for SINTERSTORE command (#12917) · 4730563e
      debing.sun authored
      In #10122, we set the destination key's flag of SINTERSTORE to `RW`;
      however, this command doesn't actually read or modify the destination
      key, it just overwrites it.
      Therefore, we change it to `OW`, like all other *STORE commands.
      4730563e
    • Fix CLUSTER SHARDS crash in 7.0/7.2 mixed clusters where shard ids are not sync (#12832) · 5b0c6a82
      Binbin authored
      Crash reported in #12695. The problem arises when upgrading a cluster
      from 7.0 to 7.2: the 7.0 nodes will not gossip shard id, while in 7.2 we
      rely on shard id to build the server.cluster->shards dict.
      
      In some cases, for example with a 7.0 master node and a 7.2 replica node,
      from the view of the 7.2 replica node the cluster->shards dictionary does
      not contain its master node. In this case, calling CLUSTER SHARDS on the
      7.2 replica node may crash.
      
      We should fix the underlying assumption of updateShardId, which is that
      the shard dict should always be in sync with the node's shard_id. The fix
      was suggested by PingXie; see #12695 for more details.
      5b0c6a82
  10. 07 Jan, 2024 2 commits
    • Make RM_Yield thread-safe (#12905) · ca1f67af
      debing.sun authored
      ## Issues and solutions from #12817
      1. Touching ProcessingEventsWhileBlocked and calling moduleCount() without
      the GIL in afterSleep()
          - Introduced: 
             Version: 7.0.0
             PR: #9963
      
         - Harm Level: Very High
           If the module thread calls `RM_Yield()` before the main thread enters
           afterSleep(), and modifies `ProcessingEventsWhileBlocked` (+1), it will
           cause the main thread not to wait for the GIL, which can lead to all
           kinds of unforeseen problems, including memory data corruption.
      
         - Initial / Abandoned Solution:
           * Added a `__thread` specifier for ProcessingEventsWhileBlocked.
             `ProcessingEventsWhileBlocked` is used to protect against nested
             event processing, but event processing in the main thread and
             module threads should be completely independent and unaffected,
             so it is safer to use TLS.
           * Added a cached module count to keep track of the current number
             of modules, to avoid having to use `dictSize()`.
          
          - Related Warnings:
      ```
      WARNING: ThreadSanitizer: data race (pid=1136)
        Write of size 4 at 0x0001045990c0 by thread T4 (mutexes: write M0):
          #0 processEventsWhileBlocked networking.c:4135 (redis-server:arm64+0x10006d124)
          #1 RM_Yield module.c:2410 (redis-server:arm64+0x10018b66c)
          #2 bg_call_worker <null>:83232836 (blockedclient.so:arm64+0x16a8)
      
        Previous read of size 4 at 0x0001045990c0 by main thread:
          #0 afterSleep server.c:1861 (redis-server:arm64+0x100024f98)
          #1 aeProcessEvents ae.c:408 (redis-server:arm64+0x10000fd64)
          #2 aeMain ae.c:496 (redis-server:arm64+0x100010f0c)
          #3 main server.c:7220 (redis-server:arm64+0x10003f38c)
      ```
      
      2. aeApiPoll() is not thread-safe
         When using RM_Yield to handle events in a module thread, if the main
         thread has not yet entered `afterSleep()`, both the module thread and
         the main thread may touch `server.el` at the same time.
      
          - Introduced: 
             Version: 7.0.0
             PR: #9963
      
         - Old / Abandoned Solution:
           Adding a new mutex to protect the window after beforeSleep() and
           before afterSleep().
           Defect: if the main thread enters the ae loop without any IO events,
           it will wait until the next timeout or until there is an event
           again, and the module thread will hang until the main thread leaves
           the event loop.
      
          - Related Warnings:
      ```
      SUMMARY: ThreadSanitizer: data race ae_kqueue.c:55 in addEventMask
      ==================
      ==================
      WARNING: ThreadSanitizer: data race (pid=14682)
        Write of size 4 at 0x000100b54000 by thread T9 (mutexes: write M0):
          #0 aeApiPoll ae_kqueue.c:175 (redis-server:arm64+0x100010588)
          #1 aeProcessEvents ae.c:399 (redis-server:arm64+0x10000fb84)
          #2 processEventsWhileBlocked networking.c:4138 (redis-server:arm64+0x10006d3c4)
          #3 RM_Yield module.c:2410 (redis-server:arm64+0x10018b66c)
          #4 bg_call_worker <null>:16042052 (blockedclient.so:arm64+0x169c)
      
        Previous write of size 4 at 0x000100b54000 by main thread:
          #0 aeApiPoll ae_kqueue.c:175 (redis-server:arm64+0x100010588)
          #1 aeProcessEvents ae.c:399 (redis-server:arm64+0x10000fb84)
          #2 aeMain ae.c:496 (redis-server:arm64+0x100010da8)
          #3 main server.c:7238 (redis-server:arm64+0x10003f51c)
      ```
      
      ## The final fix, as discussed in the comments:
      https://github.com/redis/redis/pull/12817#discussion_r1436427232
      Optimized solution based on the above comment:
      
      First, we add `module_gil_acquring` to indicate whether the main thread
      is currently in the acquiring-GIL state.
      
      When the module thread starts to yield, there are two possibilities (we
      assume the caller keeps the GIL):
      1. The main thread is between beforeSleep() and afterSleep(), that is,
      `module_gil_acquring` is not 1 now.
      At this point, the module thread will wake up the main thread through
      the pipe and leave the yield, waiting for the next yield when the main
      thread may already be in the acquiring-GIL state.
      
      2. The main thread is in the acquiring-GIL state.
      The module thread releases the GIL, yielding the CPU to give the main
      thread an opportunity to start event processing, and then acquires the
      GIL again until the main thread releases it.
      This is the direction mentioned in
      https://github.com/redis/redis/pull/12817#discussion_r1436427232.
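      
      A very rough sketch of the yield-side decision described above (schematic names standing in for the real GIL, `module_gil_acquring` flag, and wake-up pipe; the real fix also involves the event loop and module count caching):
      
      ```c
      #include <pthread.h>
      #include <sched.h>
      #include <unistd.h>
      
      static pthread_mutex_t gil = PTHREAD_MUTEX_INITIALIZER;
      static int main_thread_acquiring_gil = 0;   /* set by the main thread */
      static int wakeup_pipe_fd = -1;             /* write end of the wake-up pipe */
      
      /* Called from a module thread that holds the GIL and wants to yield. */
      void module_yield(void) {
          if (!main_thread_acquiring_gil) {
              /* Case 1: the main thread is between beforeSleep() and afterSleep().
               * Poke it through the pipe and return; try again on the next yield,
               * when it may already be waiting for the GIL. */
              if (wakeup_pipe_fd != -1) (void)write(wakeup_pipe_fd, "x", 1);
              return;
          }
          /* Case 2: the main thread is waiting for the GIL. Release it, let the
           * main thread run one round of event processing, then take it back. */
          pthread_mutex_unlock(&gil);
          sched_yield();
          pthread_mutex_lock(&gil);
      }
      ```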
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      ca1f67af
    • Use shard-id of the master if the replica does not support shard-id (#12805) · 4cae66f5
      Binbin authored
      If there are nodes in the cluster that do not support shard-id, they
      will gossip shard-id. From the perspective of nodes that support shard-id,
      their shard-id is meaningless (since shard-id is randomly generated when
      we create a node.)
      
      Nodes that support shard-id will save the shard-id information in nodes.conf.
      If the node is restarted from that nodes.conf, the server will report a
      corrupted cluster config file error, because auxShardIdSetter will reject
      configurations with inconsistent master-replica shard-ids.
      
      A cluster-wide consensus for the node's shard_id is not necessary. The key
      is maintaining consistency of the shard_id on each individual 7.2 node.
      As the cluster progressively upgrades to version 7.2, we can expect the
      shard_ids across all nodes to naturally converge and align.
      
      In this PR, when processing the gossip, if sender is a replica and does not
      support shard-id, set the shard_id to the shard_id of its master.
      4cae66f5
  11. 04 Jan, 2024 1 commit
  12. 03 Jan, 2024 2 commits
    • Update CONTRIBUTING.md (#12907) · 51898383
      Lior Kogan authored
      - Referring to Redis Discord channel instead of the mailing list.
      - Referring to the licensing instead of repeating it.
      51898383
    • Handle recursive serverAsserts and provide more information for recursive segfaults (#12857) · 068051e3
      Madelyn Olson authored
      This change tries to make two failure modes a bit easier to deep dive into:
      1. If a serverPanic or serverAssert occurs during the info (or module)
      printing, it will recursively panic, which is a lot of fun as it will
      just keep recursively printing. It will eventually stack overflow, but
      will generate a lot of text in the process.
      2. When a segfault happens during the segfault handler, no information
      is communicated other than it happened. This can be problematic because
      `info` may help diagnose the real issue, but without fixing the
      recursive crash it might be hard to get at that info.
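      
      One common way to keep such handlers from recursing, sketched generically (a textbook re-entrancy guard, not necessarily the exact mechanism of this commit):
      
      ```c
      #include <signal.h>
      #include <stdio.h>
      #include <unistd.h>
      
      /* Re-entrancy guard for a crash/assert report path: if the report itself
       * faults or asserts, note it briefly and bail out instead of recursing
       * until the stack overflows. */
      static volatile sig_atomic_t in_crash_report = 0;
      
      void crash_report(const char *reason) {
          if (in_crash_report) {
              fprintf(stderr, "nested crash while reporting: %s\n", reason);
              _exit(1);
          }
          in_crash_report = 1;
          fprintf(stderr, "crash: %s\n", reason);
          /* ... print INFO / stack trace here; if that itself crashes, the guard
           * above stops the handler from re-entering indefinitely ... */
          in_crash_report = 0;
      }
      ```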
      068051e3
  13. 02 Jan, 2024 1 commit
    • Manage number of new connections per cycle (#12178) · c3f8b542
      AshMosh authored
      
      
      There are situations (especially with TLS) in which the engine gets too
      occupied managing a large number of new connections. Existing connections
      may time out while the server is processing the new connections' initial
      TLS handshakes, which may cause even more new connections to be
      established, perpetuating the problem. To better manage the tradeoff
      between new-connection rate and other workloads, this change adds a new
      config to cap the maximum number of new connections accepted per event
      loop cycle, instead of using a predetermined number (currently 1000).
      
      This change introduces two new configurations, max-new-connections-per-cycle
      and max-new-tls-connections-per-cycle. The default for TCP connections is 10
      per cycle, and the default for TLS connections is 1 per cycle.
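      
      For reference, the two directives with their stated defaults, as they might appear in redis.conf:
      
      ```
      # Cap how many new connections are accepted per event loop cycle
      # (the values below are the stated defaults).
      max-new-connections-per-cycle 10
      max-new-tls-connections-per-cycle 1
      ```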
      ---------
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      c3f8b542
  14. 28 Dec, 2023 5 commits
  15. 27 Dec, 2023 3 commits
    • Replace slots_to_channels radix tree with slot specific dictionaries for shard channels. (#12804) · 85279595
      Chen Tianjie authored
      
      
      We have already replaced the `slots_to_keys` radix tree with a key->slot
      linked list (#9356), and then replaced the list with slot-specific
      dictionaries for keys (#11695).
      
      Shard channels behave just like keys in many ways, and we also need a
      slots->channels mapping. Currently this is still done by using a radix
      tree. So we should split `server.pubsubshard_channels` into 16384 dicts
      and drop the radix tree, just like what we did to DBs.
      
      Some benefits (basically the benefits of what we've done to DBs):
      1. Optimize counting channels in a slot. This is currently used only in
      removing channels in a slot. But this is potentially more useful:
      sometimes we need to know how many channels there are in a specific slot
      when doing slot migration. Counting is now implemented by traversing the
      radix tree, and with this PR it will be as simple as calling `dictSize`,
      from O(n) to O(1).
      2. The radix tree in the cluster has been removed. The shard channel
      names no longer require additional storage, which can save memory.
      3. Potentially useful in slot migration, as shard channels are logically
      split by slots, thus making it easier to migrate, remove or add as a
      whole.
      4. Avoid rehashing a big dict when there is a large number of channels.
      
      Drawbacks:
      1. Takes more memory than using radix tree when there are relatively few
      shard channels.
      
      What this PR does:
      1. In cluster mode, split `server.pubsubshard_channels` into 16384
      dicts; in standalone mode, still use only one dict.
      2. Drop the `slots_to_channels` radix tree.
      3. To save memory (addressing the drawback above), all 16384 dicts are
      created lazily: a dict is initialized only when a channel is about to be
      inserted into it, and when all its channels are deleted, the dict deletes
      itself (see the sketch below).
      4. Use `server.shard_channel_count` to keep track of the number of all
      shard channels.
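      
      A compact sketch of the lazy per-slot dict idea from point 3 (toy types; the real implementation uses the Redis dict machinery):
      
      ```c
      #include <stdlib.h>
      
      #define CLUSTER_SLOTS 16384
      
      /* Toy stand-in for a per-slot channel dict. Empty slots cost nothing
       * because a dict is created on first insert and freed when it empties. */
      struct toy_dict { size_t entries; };
      
      static struct toy_dict *slot_channels[CLUSTER_SLOTS]; /* all start as NULL */
      
      struct toy_dict *channels_for_slot(int slot) {
          if (slot_channels[slot] == NULL)                    /* create lazily */
              slot_channels[slot] = calloc(1, sizeof(struct toy_dict));
          return slot_channels[slot];
      }
      
      void channel_removed_from_slot(int slot) {
          struct toy_dict *d = slot_channels[slot];
          if (d != NULL && d->entries == 0) {                 /* dict deletes itself */
              free(d);
              slot_channels[slot] = NULL;
          }
      }
      ```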
      
      ---------
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      85279595
    • config.c: Avoid leaking file handle if file is 0 bytes (#12828) · fa751f9b
      Moshe Kaplan authored
      If fopen() is successful and redis_fstat determines that the file is 0
      bytes, the file handle stored in fp will leak. This change closes the
      file handle stored in fp if the file is 0 bytes.
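      
      Stripped to its essentials (generic names; the real code uses redis_fstat on the file being loaded), the pattern of the fix is:
      
      ```c
      #include <stdio.h>
      #include <sys/stat.h>
      
      /* Close the handle on the early-return path too, so a 0-byte file does
       * not leak the FILE* that fopen() returned. */
      int load_file(const char *path) {
          FILE *fp = fopen(path, "r");
          if (fp == NULL) return -1;
      
          struct stat sb;
          if (fstat(fileno(fp), &sb) == -1 || sb.st_size == 0) {
              fclose(fp);   /* previously missing for the 0-byte case */
              return -1;
          }
          /* ... parse the file ... */
          fclose(fp);
          return 0;
      }
      ```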
      
      Second attempt at fixing Coverity 390029
      
      This is a follow-up to #12796
      fa751f9b
    • Fix oom-score-adj test due to no permission (#12887) · bef57153
      sundb authored
      
      
      Fix #12792
      
      On Ubuntu 23 (Lunar), non-root users are not allowed to change the
      oom_score_adj of a process to a value that is too low.
      Since the terminal's default oom_score_adj is 200, if we run the test
      from a terminal, we won't be able to set the oom_score_adj of the redis
      process to 9 or 22, which is too low.
      
      Reproduction on an Ubuntu 23 (Lunar) terminal:
      ```sh
      $ cat /proc/`pgrep redis-server`/oom_score_adj
      200
      $ echo 100 > /proc/`pgrep redis-server`/oom_score_adj
      # success without error
      $ echo 99 > /proc/`pgrep redis-server`/oom_score_adj
      echo: write error: Permission denied
      ```
      
      As shown in the output above, the lowest oom score we can set for redis
      processes is 100.
      The test is therefore modified so that oom_score_adj only increases and
      never decreases.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      bef57153
  16. 26 Dec, 2023 4 commits
  17. 24 Dec, 2023 1 commit