1. 21 Nov, 2024 1 commit
    • Oran Agra's avatar
      Add Lua VM memory to memory overhead, now that it's part of zmalloc (#13660) · 79fd2558
      Oran Agra authored
      To complement the work done in #13133: it added the script VMs' memory
      to be counted as part of zmalloc, which means it should also be counted
      as part of the non-value overhead.
      
      This commit contains some refactoring to make variable names and
      function names less confusing. It also adds a new field named
      `script.VMs` to the `MEMORY STATS` command.
      
      Additionally, it clears scripts and stats between tests in external mode
      (which is related to how this issue was discovered)
      79fd2558
  2. 14 Nov, 2024 1 commit
  3. 29 Oct, 2024 1 commit
    • Moti Cohen's avatar
      Add KEYSIZES section to INFO (#13592) · 2ec78d26
      Moti Cohen authored
      This PR adds a new section to the `INFO` command output, called
      `keysizes`. This section provides detailed statistics on the
      distribution of key sizes for each data type (strings, lists, sets,
      hashes and zsets) within the dataset. The distribution is tracked using
      a base-2 logarithmic histogram.
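      As a rough sketch of the bucketing (illustrative, not the actual Redis
      code; whether a size maps to the floor or ceiling power of two is an
      implementation detail, this sketch uses the ceiling):
      ```
      #include <stdint.h>
      #include <stdio.h>

      /* Map a key/item size to its base-2 logarithmic bucket, labeled by a
       * power of two: sizes 1, 2, 3-4, 5-8, ... go to buckets 1, 2, 4, 8, ... */
      static uint64_t log2Bucket(uint64_t size) {
          uint64_t bucket = 1;
          while (bucket < size) bucket <<= 1;
          return bucket;
      }

      int main(void) {
          uint64_t sizes[] = {1, 2, 3, 700, 100000};
          for (int i = 0; i < 5; i++)
              printf("size %llu -> bucket %llu\n",
                     (unsigned long long)sizes[i],
                     (unsigned long long)log2Bucket(sizes[i]));
          return 0;
      }
      ```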
      
      # Motivation
      Currently, Redis lacks a built-in feature to track key sizes and item
      sizes per data type at a granular level. Understanding the distribution
      of key sizes is critical for monitoring memory usage and optimizing
      performance, particularly in large datasets. This enhancement will allow
      users to inspect the size distribution of keys directly from the `INFO`
      command, assisting with performance analysis and capacity planning.
      
      # Changes
      New Section in `INFO` Command: A new section called `keysizes` has been
      added to the `INFO` command output. This section reports a per-database,
      per-type histogram of key sizes. It provides insights into how many keys
      fall into specific size ranges (represented in powers of 2).
      
      **Example output:**
      ```
      127.0.0.1:6379> INFO keysizes
      # Keysizes
      db0_distrib_strings_sizes:1=19,2=655,512=100899,1K=31,2K=29,4K=23,8K=16,16K=3,32K=2
      db0_distrib_lists_items:1=5784492,32=3558,64=1047,128=676,256=533,512=218,4K=1,8K=42
      db0_distrib_sets_items:1=735564=50612,8=21462,64=1365,128=974,2K=292,4K=154,8K=89,
      db0_distrib_hashes_items:2=1,4=544,32=141169,64=207329,128=4349,256=136226,1K=1
      ```
      ## Future Use Cases:
      The key size distribution is collected per slot as well, laying the
      groundwork for future enhancements related to Redis Cluster.
      2ec78d26
  4. 10 Oct, 2024 1 commit
    • guybe7's avatar
      Cleanups related to expiry/eviction (#13591) · a38c29b6
      guybe7 authored
      1. `dbRandomKey`: removed an excessive call to `dbFindExpires` (it will
      always return 1 if `allvolatile`, and it is anyway called inside
      `expireIfNeeded`)
      2. Add `deleteKeyAndPropagate` that is used by both expiry/eviction
      3. Change the order of calls in `expireIfNeeded` to save redundant calls
      to `keyIsExpired`
      4. `expireIfNeeded`: move `OBJ_STATIC_REFCOUNT` to
      `deleteKeyAndPropagate`
      5. `performEvictions` now uses `deleteEvictedKeyAndPropagate`
      6. active-expire: moved `postExecutionUnitOperations` inside
      `activeExpireCycleTryExpire`
      7. `activeExpireCycleTryExpire`: less indentation + expire a key if `now
      == t`
      8. rename `lazy_expire_disabled` to `allow_access_expired`
      a38c29b6
  5. 04 Sep, 2024 1 commit
    • debing.sun's avatar
      Introduce reusable query buffer for client reads (#13488) · ea3e8b79
      debing.sun authored
      This PR is based on the commits from PR
      https://github.com/valkey-io/valkey/pull/258,
      https://github.com/valkey-io/valkey/pull/593,
      https://github.com/valkey-io/valkey/pull/639
      
      This PR optimizes client query buffer handling in Redis by introducing
      a reusable query buffer that is used by default for client reads. This
      reduces memory usage by ~20KB per client by avoiding allocations for
      most clients using short (<16KB) complete commands. For larger or
      partial commands, the client still gets its own private buffer.
      
      The primary changes are:
      
      * Adding a reusable query buffer `thread_shared_qb` that clients use by
      default.
      * Modifying client querybuf initialization and reset logic.
      * Freeing idle client query buffers when empty to allow reuse of the
      reusable query buffer.
      * Master client query buffers are kept private, as their contents need to
      be preserved for the replication stream.
      * When nested commands are executed, only the first one uses the reusable
      buffer; subsequent ones still use private buffers.
      
      In addition to the memory savings, this change shows a 3% improvement in
      latency and throughput when running with 1000 active clients.
      
      The memory reduction may also help reduce the need to evict clients when
      reaching max memory limit, as the query buffer is the main memory
      consumer per client.
      
      This PR is different from https://github.com/valkey-io/valkey/pull/258
      
      
      1. When a client is in the middle of acquiring the reusable buffer and
      returning it, regardless of whether the query buffer has changed
      (expanded), we do not update the reused query buffer in the middle, but
      return the reused query buffer (expanded or with data remaining) or
      reset it at the end.
      2. Adding a new thread variable `thread_shared_qb_used` to avoid
      multiple clients acquiring the reusable query buffer at the same time.
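      A minimal sketch of the pattern (illustrative, not the actual Redis
      implementation; only the names `thread_shared_qb` and
      `thread_shared_qb_used` come from the PR):
      ```
      #include <stdlib.h>

      #define SHARED_QB_SIZE (16 * 1024)

      /* One shared buffer per thread serves short complete commands; it can
       * be borrowed by only one client at a time, and larger or partial
       * commands fall back to a private allocation. */
      static __thread char *thread_shared_qb = NULL;
      static __thread int thread_shared_qb_used = 0;

      char *acquireQueryBuf(size_t needed) {
          if (!thread_shared_qb_used && needed <= SHARED_QB_SIZE) {
              if (!thread_shared_qb) thread_shared_qb = malloc(SHARED_QB_SIZE);
              thread_shared_qb_used = 1;
              return thread_shared_qb;    /* borrow the shared buffer */
          }
          return malloc(needed);          /* private buffer */
      }

      void releaseQueryBuf(char *buf) {
          if (buf == thread_shared_qb) thread_shared_qb_used = 0; /* hand back */
          else free(buf);
      }
      ```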
      
      ---------
      Signed-off-by: Uri Yagelnik <uriy@amazon.com>
      Signed-off-by: Madelyn Olson <matolson@amazon.com>
      Co-authored-by: Uri Yagelnik <uriy@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      Co-authored-by: oranagra <oran@redislabs.com>
      ea3e8b79
  6. 03 Sep, 2024 1 commit
    • Ozan Tezcan's avatar
      Reply LOADING on replica while flushing the db (#13495) · a7afd1d2
      Ozan Tezcan authored
      On a full sync, the replica starts discarding the existing db. If the
      existing db is huge and the flush happens synchronously, the replica
      may become unresponsive.
      
      Adding a change to yield back to the event loop while flushing the db on
      a replica. The replica will reply -LOADING in this case. Note that while
      the replica is loading the new rdb, it may get an error and start
      flushing the partial db. This step may take a long time as well.
      Similarly, the replica will reply -LOADING in this case.
      
      To call processEventsWhileBlocked() and reply -LOADING, we need to:
      - Set connSetReadHandler() to NULL so we don't process further data from the master
      - Set the server.loading flag
      - Call blockingOperationStarts()
      
      rdbLoad() already does these steps and calls processEventsWhileBlocked()
      while loading the rdb. Added a new call, rdbLoadWithEmptyFunc(), which
      accepts a callback to flush the db before loading the rdb, or when an
      error happens while loading.
      
      For diskless replication, we do something similar, calling emptyData()
      after setting the required flags.
      
      Additional changes:
      - Allow `appendonly` config change during loading.
       The config can be changed while loading data on startup or on
       replication when the replica is loading the RDB. We allow the config
       change command to update `server.aof_enabled` and then lazily apply the
       change after the loading operation is completed.
       
       - Added a test for `replica-lazy-flush` config
      a7afd1d2
  7. 03 Jul, 2024 1 commit
    • Filipe Oliveira (Redis)'s avatar
      Reduce getNodeByQuery overhead (#13221) · 26a2dcb9
      Filipe Oliveira (Redis) authored
      
      
      The following PR makes changes based on CPU profile info. The
      `getNodeByQuery` function represents 8.2% of the 12.3% overhead observed
      when comparing a single-shard cluster with standalone.
      Proposed changes:
      - Inlining keyHashSlot to reduce the overhead of that function call
      - Reduce duplicate calls to getCommandFlags within getNodeByQuery
      
      The above changes represent an improvement of approximately 5% on the achievable ops/sec.
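      For illustration only, the shape of such an inlined slot computation
      might look like the sketch below; `toyHash` is a stand-in for the real
      CRC16 and this is not the actual Redis code:
      ```
      #include <stddef.h>
      #include <string.h>

      /* Stand-in hash; Redis cluster actually uses CRC16 (XMODEM). */
      static inline unsigned int toyHash(const char *s, size_t n) {
          unsigned int h = 5381;
          while (n--) h = h * 33 + (unsigned char)*s++;
          return h;
      }

      /* Hash only a non-empty {...} hashtag when present, otherwise the whole
       * key; `static inline` removes the call overhead in the hot path. */
      static inline unsigned int keyHashSlot(const char *key, size_t keylen) {
          const char *open = memchr(key, '{', keylen);
          if (open) {
              size_t rest = keylen - (size_t)(open - key) - 1;
              const char *close = memchr(open + 1, '}', rest);
              if (close && close != open + 1)
                  return toyHash(open + 1, (size_t)(close - open - 1)) & 16383;
          }
          return toyHash(key, keylen) & 16383;
      }
      ```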
      Co-authored-by: filipecosta90 <filipecosta.90@gmail.com>
      26a2dcb9
  8. 02 Jul, 2024 1 commit
  9. 24 Jun, 2024 1 commit
    • Moti Cohen's avatar
      Adapt HRANDFIELD to HFE feature (#13348) · e26ea35c
      Moti Cohen authored
      Considerations for the selected implementation of HRANDFIELD & the HFE feature:
      
      HRANDFIELD might access any of the fields in the hash, and some of them
      might be expired. So the implementation of HRANDFIELD along with HFEs
      could take one of two options:
      1. Expire hash-fields before diving into handling HRANDFIELD.
      2. Refine the HRANDFIELD cases to deal with expired fields.
      
      Regarding the first option: as a reference, the command RANDOMKEY also
      declares O(1) complexity, yet it might be stuck in a very long (but not
      infinite) loop trying to find non-expired keys. Furthermore, RANDOMKEY
      also evicts expired keys along the way, even though it is categorized as
      a read-only command. Note that the HRANDFIELD case is more lightweight
      than RANDOMKEY, since HFEs have much more effective and aggressive
      active expiration for fields behind the scenes.
      
      The second option introduces additional implementation complexity to
      HRANDFIELD. We could further refine the HRANDFIELD cases to differentiate
      between scenarios with many expired fields versus few expired fields, and
      adjust based on the percentage of expired fields. However, this approach
      could still lead to long loops or necessitate expiring fields before
      selecting them. For the "lightweight" cases a lightweight expiration is
      also expected.
      
      Considering the pros and cons, the fact that HRANDFIELD is an infrequent
      command (particularly with HFEs), and the fact that we have effective
      active expiration for hash-fields behind the scenes, it is better to
      keep it simple and choose option number 1.
      
      Other changes:
      * Don't mark the command dirty from the internal hashTypeExpire(). It
        caused the read-only command HRANDFIELD to be accidentally propagated
        (this flag should be indicated at a higher level, by the command
        functions).
      * Align `hashTypeExpireIfNeeded()` and `hashTypeGetValue()` to be more
        aligned with `expireIfNeeded()` logic of keyspace.
      e26ea35c
  10. 04 Jun, 2024 1 commit
    • gms's avatar
      Fix crash due to unblock client during slot migration (#13311) · f36b5a85
      gms authored
      
      
      In #13224, we found a crash during cluster slot migration but didn't know
      why. So I checked all the `return C_OK` paths in processCommand to see if
      we were missing some duration reset, and saw this.
      
      This fix is like #12247: when we reject the command, we should reset the
      duration. I tested it and verified that it fixes #13224.
      
      So the reason may be that a client was blocked on a stream, and then
      during the slot migration it got a redirect, which then crashed the
      server.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      f36b5a85
  11. 29 May, 2024 1 commit
    • Moti Cohen's avatar
      HFE to support AOF and replicas (#13285) · 33fc0fbf
      Moti Cohen authored
      * For the replica's sake, rewrite the commands `H*EXPIRE*`, `HSETF`,
      `HGETF` to carry an absolute unix time in msec (see the sketch after
      this list).
      * On active expiration of a field, propagate HDEL to the replica
      (`propagateHashFieldDeletion()`)
      * On lazy expiration, propagate HDEL to the replica (`hashTypeGetValue()`
      now calls `hashTypeDelete()`. It also takes care to call
      `propagateHashFieldDeletion()`).
      * Fix the `H*EXPIRE*` commands such that if they get the flag `LT` and
      the field doesn't have any expiration, it is considered a valid
      condition.
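      A minimal sketch of that rewrite rule (illustrative;
      `relativeTtlToAbsoluteMs` is a hypothetical helper):
      ```
      #include <stdio.h>
      #include <sys/time.h>

      static long long mstime(void) {
          struct timeval tv;
          gettimeofday(&tv, NULL);
          return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
      }

      /* Before an H*EXPIRE* command is propagated to replicas/AOF, a relative
       * TTL is rewritten as an absolute unix time in msec, so replicas are not
       * skewed by replication delay. */
      long long relativeTtlToAbsoluteMs(long long ttl_ms) {
          return mstime() + ttl_ms;
      }

      int main(void) {
          /* e.g. an HPEXPIRE with a 10000ms TTL propagates as an absolute ts */
          printf("absolute expiry = %lld\n", relativeTtlToAbsoluteMs(10000));
          return 0;
      }
      ```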
      
      Note, replicas don't perform any active expiration, and should avoid lazy
      expiration. On a replica, `hashTypeGetValue()` doesn't check expiration
      (as long as the master didn't request to delete the field, it is valid)
      
      TODO: 
      * Attach `dbid` to HASH metadata. See
      [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      33fc0fbf
  12. 23 May, 2024 1 commit
    • Moti Cohen's avatar
      Add Statistics hashes_with_expiry_fields to INFO (#13275) · f34f2ade
      Moti Cohen authored
      Added hashes_with_expiry_fields.
      Optimally it would be better to have a statistic that counts all fields
      with expiry, but that requires careful logic and computation, following
      and deep-diving into listpacks and hashes. This statistic is trivial to
      achieve and is reflected by the global HFE DS, which has built-in
      enumeration of all the hashes that are registered in it.
      f34f2ade
  13. 17 May, 2024 1 commit
    • Ronen Kalish's avatar
      Hfe serialization listpack (#13243) · 323be4d6
      Ronen Kalish authored
      Add RDB de/serialization for HFE
      
      This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and
      `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data.
      When the hash RAM encoding is dict, it will be saved in the former, and
      when it is listpack it will be saved in the latter.
      Both formats just add the TTL value for each field after the data that
      was previously saved, i.e. HASH_METADATA saves the number of entries
      and, for each entry, key, value and TTL, whereas the listpack is saved
      as a blob.
      On read, the usual dict <--> listpack conversion takes place if
      required.
      In addition, when reading a hash that was saved as a dict, fields are
      actively expired if their expiry is due. Currently this also holds for
      the listpack encoding, but it is supposed to be removed.
      
      TODO:
      Remove active expiry on load when loading from listpack format (unless
      we'll decide to keep it)
      323be4d6
  14. 18 Apr, 2024 1 commit
    • Moti Cohen's avatar
      Hash Field Expiration - Basic support · c18ff056
      Moti Cohen authored
      - Add ebuckets & mstr data structures
      - Integrate active & lazy expiration
      - Add most of the commands 
      - Add support for dict (listpack is missing)
      TODOs:  RDB, notification, listpack, HSET, HGETF, defrag, aof
      c18ff056
  15. 16 Apr, 2024 1 commit
    • Binbin's avatar
      Allocate Lua VM code with jemalloc instead of libc, and count it used memory (#13133) · 804110a4
      Binbin authored
      
      
      ## Background
      1. Currently Lua memory control does not pass through Redis's zmalloc.c.
      Redis maxmemory cannot limit memory problems caused by users abusing Lua,
      since the Lua VM's memory is not part of used_memory.
      
      2. Since jemalloc is much better (fragmentation and speed), and we also
      know it and trust it, we are going to use jemalloc instead of libc to
      allocate the Lua VM code and count its used memory.
      
      ## Process:
      In this PR, we will use jemalloc in Lua (a sketch follows this list).
      1. Create an arena for all Lua VMs (script and function), which is
      shared, in order to avoid blocking the defragger.
      2. Create a bound tcache for the Lua VM, since the Lua VM and the main
      thread are by default in the same tcache, and if there is no isolated
      tcache, Lua may request memory from the tcache which has just been freed
      by the main thread, and vice versa. On the other hand, since the Lua VM
      might be released in a bio thread, but the tcache is not thread-safe, we
      need to recreate the tcache every time we recreate the Lua VM.
      3. Remove Lua memory statistics from memory fragmentation statistics to
      avoid the effects of Lua memory fragmentation
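      A hedged sketch of the jemalloc side of steps 1 and 2 (illustrative, not
      the exact Redis code; mallctl/mallocx are standard jemalloc API, the
      function names here are hypothetical, and an unprefixed jemalloc build
      is assumed):
      ```
      #include <stddef.h>
      #include <jemalloc/jemalloc.h>

      static unsigned lua_arena;   /* shared by all Lua VMs */
      static unsigned lua_tcache;  /* recreated together with the Lua VM */

      /* Create a dedicated arena plus an explicit tcache, so Lua allocations
       * never mix with the main thread's tcache. */
      int createLuaArenaAndTcache(void) {
          size_t sz = sizeof(unsigned);
          if (mallctl("arenas.create", &lua_arena, &sz, NULL, 0)) return -1;
          if (mallctl("tcache.create", &lua_tcache, &sz, NULL, 0)) return -1;
          return 0;
      }

      /* Route an allocation through the Lua arena and its bound tcache. */
      void *luaAllocatorAlloc(size_t size) {
          return mallocx(size, MALLOCX_ARENA(lua_arena) |
                               MALLOCX_TCACHE(lua_tcache));
      }
      ```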
      
      ## Other
      Add the following new fields to `INFO DEBUG` (we may promote them to
      INFO MEMORY some day):
      1. allocator_allocated_lua: total number of bytes allocated from the Lua arena
      2. allocator_active_lua: total number of bytes in active pages allocated
      in the Lua arena
      3. allocator_resident_lua: maximum number of bytes in physically
      resident data pages mapped in the Lua arena
      4. allocator_frag_bytes_lua: fragmented bytes in the Lua arena
      
      This is oranagra's idea, and I got some help from sundb.
      
      This solves the third point in #13102.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      804110a4
  16. 04 Apr, 2024 1 commit
    • debing.sun's avatar
      Fix daylight race condition and some thread leaks (#13191) · 4581d432
      debing.sun authored
      Fix some issues reported by the thread sanitizer.
      
      1. When the main thread is updating daylight_active, other threads (bio,
      module thread) may be writing logs at the same time (a sketch of a
      possible fix follows the report below):
      ```
      WARNING: ThreadSanitizer: data race (pid=661064)
        Read of size 4 at 0x55c9a4d11c70 by thread T2:
          #0 serverLogRaw /home/sundb/data/redis_fork/src/server.c:116 (redis-server+0x8d797) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #1 _serverLog.constprop.2 /home/sundb/data/redis_fork/src/server.c:146 (redis-server+0x2a3b14) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #2 bioProcessBackgroundJobs /home/sundb/data/redis_fork/src/bio.c:329 (redis-server+0x1c24ca) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
      
        Previous write of size 4 at 0x55c9a4d11c70 by main thread (mutexes: write M0, write M1, write M2, write M3):
          #0 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1102 (redis-server+0x925e7) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #1 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1087 (redis-server+0x925e7)
          #2 updateCachedTime /home/sundb/data/redis_fork/src/server.c:1118 (redis-server+0x925e7)
          #3 afterSleep /home/sundb/data/redis_fork/src/server.c:1811 (redis-server+0x925e7)
          #4 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:389 (redis-server+0x85ae0) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #5 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85ae0)
          #6 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85ae0)
          #7 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
      ```
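      One way to close a race like this (a hedged sketch; the commit's actual
      fix may differ in detail) is to publish `daylight_active` with C11
      atomics, so the concurrent accesses are no longer a data race:
      ```
      #include <stdatomic.h>

      static _Atomic int daylight_active;

      /* Main thread: publish the new value after recomputing it. */
      void setDaylightActive(int active) {
          atomic_store_explicit(&daylight_active, active, memory_order_relaxed);
      }

      /* bio / module threads: read it safely while formatting log lines. */
      int getDaylightActive(void) {
          return atomic_load_explicit(&daylight_active, memory_order_relaxed);
      }
      ```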
      
      2. Thread leaks in module tests:
      ```
      WARNING: ThreadSanitizer: thread leak (pid=668683)
        Thread T13 (tid=670041, finished) created by main thread at:
          #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:1036 (libtsan.so.2+0x3d179) (BuildId: 28a9f70061dbb2dfa2cef661d3b23aff4ea13536)
          #1 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:200 (blockonbackground.so+0x97fd) (BuildId: 9cd187906c57e88cdf896d121d1d96448b37a136)
          #2 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:169 (blockonbackground.so+0x97fd)
          #3 call /home/sundb/data/redis_fork/src/server.c:3546 (redis-server+0x9b7fb) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #4 processCommand /home/sundb/data/redis_fork/src/server.c:4176 (redis-server+0xa091c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #5 processCommandAndResetClient /home/sundb/data/redis_fork/src/networking.c:2468 (redis-server+0xd2b8e) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #6 processInputBuffer /home/sundb/data/redis_fork/src/networking.c:2576 (redis-server+0xd2b8e)
          #7 readQueryFromClient /home/sundb/data/redis_fork/src/networking.c:2722 (redis-server+0xd358f) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #8 callHandler /home/sundb/data/redis_fork/src/connhelpers.h:58 (redis-server+0x288a7b) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #9 connSocketEventHandler /home/sundb/data/redis_fork/src/socket.c:277 (redis-server+0x288a7b)
          #10 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:417 (redis-server+0x85b45) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #11 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85b45)
          #12 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85b45)
          #13 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
      ```
      4581d432
  17. 20 Mar, 2024 1 commit
  18. 19 Mar, 2024 1 commit
    • Binbin's avatar
      Prevent lua error_reply abuse from causing errorstats to become larger (#13141) · e04d41d7
      Binbin authored
      Users who abuse lua error_reply will generate a new error object on each
      error call, which can make server.errors get bigger and bigger. This
      will cause the server to block when calling INFO (we also return
      errorstats by default).
      
      To prevent the damage this can cause, when a misuse is detected we will
      print a warning log and disable the errorstats to avoid adding more new
      errors. It can be re-enabled via CONFIG RESETSTAT.
      
      Because server.errors may be very large (it may be better now since we
      have the limit), CONFIG RESETSTAT may block for a while. So in
      resetErrorTableStats, we will try to lazyfree server.errors.
      
      See the related discussion at the end of #8217.
      e04d41d7
  19. 18 Mar, 2024 1 commit
    • Binbin's avatar
      Fix dictionary use-after-free in active expire and make kvstore iter to respect EMPTY flag (#13135) · 7b070423
      Binbin authored
      After #13072, there was a use-after-free error. In expireScanCallback, we
      delete the dict, and then in dictScan we continue to use the dict, e.g.
      we do `dictResumeRehashing(d)` at the end; this caused an error.
      
      In this PR, in freeDictIfNeeded, if the dict's pauserehash is set, we
      don't delete the dict yet; then, when the scan returns, we try to delete
      it again.
      
      At the same time, we noticed that there will be similar problems with
      iterators. We may also delete elements during the iteration process,
      causing the dict to be deleted, so the iterator-related parts of the PR
      have also been modified. dictResetIterator was also missing from the
      previous kvstoreIteratorNextDict; we currently have no scenario in which
      elements are deleted during the kvstoreIterator process, but we deal
      with it together to avoid future problems. Added some simple tests to
      verify the changes.
      
      In addition, the modification in #13072 omitted initTempDb and
      emptyDbAsync, and they were also added. This PR also removes the slow
      flag from the expire test (which consumes 1.3s) so that problems can be
      found in CI in the future.
      7b070423
  20. 13 Mar, 2024 2 commits
    • Binbin's avatar
      Add KVSTORE_FREE_EMPTY_DICTS to cluster mode keys / expires kvstore (#13072) · 3b3d16f7
      Binbin authored
      
      
      Currently (following #11695, and #12822), keys kvstore and expires
      kvstore both flag with ON_DEMAND, it means that a cluster node will
      only allocate a dict when the slot is assigned to it and populated,
      but on the other hand, when the slot is unassigned, the dict will
      remain allocated.
      
      We considered releasing the dict when the slot is unassigned, but it
      causes complications on replicas. On the other hand, from benchmarks
      we conducted, it looks like the performance impact of releasing the
      dict when it becomes empty and re-allocate it when a key is added
      again, isn't huge.
      
      This PR adds KVSTORE_FREE_EMPTY_DICTS to the cluster mode keys / expires
      kvstore.
      
      The impact is about a 2% performance drop, for this hopefully
      uncommon scenario.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      3b3d16f7
    • Binbin's avatar
      Lua eval scripts first in first out LRU eviction (#13108) · ad28d222
      Binbin authored
      In some cases, users will abuse lua eval. Each EVAL call generates
      a new lua script, which is added to the lua interpreter and cached
      by redis-server, consuming a large amount of memory over time.
      
      Since EVAL is mostly the one that abuses the lua cache, and these
      won't have pipeline issues (i.e. the script won't disappear unexpectedly
      and cause errors like it would with SCRIPT LOAD and EVALSHA),
      we implement a plain FIFO LRU eviction only for these (not for
      scripts loaded with SCRIPT LOAD).
      
      ### Implementation notes:
      When not abused we'll probably have less than 100 scripts, and when
      abused we'll have many thousands, so we use a hard-coded limit of 500
      scripts. And considering that we don't have many scripts, then unlike
      keys, we don't need to worry about the memory usage of keeping a true
      sorted LRU linked list. We compute the SHA of each script anyway,
      and put the script in a dict; we can store a listNode there, and use
      it for quick removal and re-insertion into an LRU list each time the
      script is used (see the sketch below).
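      A toy sketch of that bookkeeping (illustrative, not the actual Redis
      code): a cache hit is an O(1) move-to-front via the stored node, and
      eviction pops the tail:
      ```
      #include <stddef.h>

      typedef struct lruNode { struct lruNode *prev, *next; /* + sha/script */ } lruNode;
      typedef struct { lruNode *head, *tail; size_t len, max; } lruList;

      static void lruUnlink(lruList *l, lruNode *n) {
          if (n->prev) n->prev->next = n->next; else l->head = n->next;
          if (n->next) n->next->prev = n->prev; else l->tail = n->prev;
          l->len--;
      }

      static void lruPushHead(lruList *l, lruNode *n) {
          n->prev = NULL; n->next = l->head;
          if (l->head) l->head->prev = n; else l->tail = n;
          l->head = n; l->len++;
      }

      /* On every use of a script: O(1) re-insertion at the head, using the
       * listNode stored in the scripts dict entry. */
      void lruTouch(lruList *l, lruNode *n) { lruUnlink(l, n); lruPushHead(l, n); }

      /* After caching a new script: evict the least-recently-used one once
       * the cap (500 in the PR) is exceeded; the caller also deletes the
       * victim from the scripts dict. */
      lruNode *lruEvictIfNeeded(lruList *l) {
          if (l->len <= l->max || !l->tail) return NULL;
          lruNode *victim = l->tail;
          lruUnlink(l, victim);
          return victim;
      }
      ```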
      
      ### New interfaces:
      At the same time, a new `evicted_scripts` field is added to
      INFO, which represents the number of evicted eval scripts. Users
      can check it to see if they are abusing EVAL.
      
      ### benchmark:
      `./src/redis-benchmark -P 10 -n 1000000 -r 10000000000 eval "return
      __rand_int__" 0`
      
      This simple abuse-of-eval benchmark creates 1 million EVAL
      scripts. The performance has improved by 50%, and the max latency
      has dropped from 500ms to 13ms (the high latency may be caused by table
      expansion inside Lua when the number of scripts is large). And in INFO
      memory, it used to consume 120MB (server cache) + 310MB (lua engine),
      but now it only consumes 70KB (server cache) + 210KB (lua engine)
      because of the script eviction.
      
      For non-abusive case of about 100 EVAL scripts, there's no noticeable
      change in performance or memory usage.
      
      ### unlikely potentially breaking change:
      In theory, a user can load a script with EVAL and then use EVALSHA to
      call it (by calculating the SHA1 value on the client side). If we read
      the docs carefully we may realize it's a valid scenario, but we suppose
      it's extremely rare. So it may happen that EVALSHA acts on a script
      created by EVAL, the script is evicted, and EVALSHA returns a NOSCRIPT
      error; that is, if you have more than 500 scripts being used in the same
      transaction / pipeline.
      
      This solves the second point in #13102.
      ad28d222
  21. 01 Mar, 2024 1 commit
    • Chen Tianjie's avatar
      Add overhead of all DBs and rehashing dict count to info. (#12913) · 4cae99e7
      Chen Tianjie authored
      
      
      Sometimes we need to make a fast judgement about why Redis is suddenly
      taking more memory. One of the reasons is the main DB's dicts doing
      rehashing.
      
      We may use `MEMORY STATS` to monitor the overhead memory of each DB, but
      there is still no total sum to show an overall trend. So this PR adds
      the total overhead of all DBs to the `INFO MEMORY` section, together with
      the total count of rehashing DB dicts, providing some intuitive metrics
      about main dict rehashing.
      
      This PR adds the following metric to INFO MEMORY:
      * `mem_overhead_db_hashtable_rehashing` - only the size of ht[0] in
      dictionaries we're rehashing (i.e. the memory that's going to get
      released soon)
      
      and similar ones to MEMORY STATS:
      * `overhead.db.hashtable.lut` (complements the existing
      `overhead.hashtable.main` and `overhead.hashtable.expires`, which also
      count the `dictEntry` structs)
      * `overhead.db.hashtable.rehashing` - temporary rehashing overhead.
      * `db.dict.rehashing.count` - number of top-level dictionaries being
      rehashed.
      
      ---------
      Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      4cae99e7
  22. 20 Feb, 2024 1 commit
    • debing.sun's avatar
      Defragger improvements around large bins (#12996) · f6785df6
      debing.sun authored
      
      
      Implement #12963
      
      ## Changes
      1. Large bins don't have external fragmentation, or are at least
      non-defraggable, so we should ignore the effect of large bins when
      measuring fragmentation, and only measure the fragmentation of small
      bins. This affects both the allocator_frag* metrics and the
      active-defrag trigger.
      2. Adding INFO metrics for `muzzy` memory, which is memory returned to
      the OS but still shown as RSS until the OS reclaims it.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      f6785df6
  23. 19 Feb, 2024 1 commit
    • zhaozhao.zz's avatar
      Calculate the incremental rehash time more precisely (#13063) · 8876d264
      zhaozhao.zz authored
      In the `databasesCron()`, the time consumed by
      `kvstoreIncrementallyRehash()` is used to calculate the exit condition.
      However, within `kvstoreIncrementallyRehash()`, the loop first checks
      for timeout before performing rehashing. Therefore, the time for the
      last rehash isn't accounted for, making the consumed time inaccurate. We
      need to precisely calculate all the time spent on rehashing.
      Additionally, the time allocated to `kvstoreIncrementallyRehash()`
      should be the remaining time, which is
      `INCREMENTAL_REHASHING_THRESHOLD_US` minus the already consumed
      `elapsed_us`.
      8876d264
  24. 18 Feb, 2024 2 commits
    • Binbin's avatar
      AOF_FSYNC_EVERYSEC higher resolution, change aof_last_fsync and... · 9103ccc3
      Binbin authored
      
      AOF_FSYNC_EVERYSEC higher resolution, change aof_last_fsync and aof_flush_postponed_start to use mstime (#13041)
      
      Currently aof_last_fsync uses a low-resolution unixtime, which is really
      bad: it checks whether the absolute number of (full) seconds changed by
      one. Depending on which side of the second barrier it falls, we can get
      very different results.
      
      This PR changes the resolution to use milliseconds instead of complete
      seconds.
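      A sketch of the new check (illustrative; `now_ms` and `last_fsync_ms`
      stand in for the server's millisecond clock and the mstime-based
      aof_last_fsync):
      ```
      #include <stdbool.h>

      /* The old logic compared whole seconds (now_s > last_fsync_s), so the
       * real gap between fsyncs could be anywhere from ~0 to ~2 seconds
       * depending on where each call fell inside its second. Millisecond
       * timestamps enforce the "everysec" contract directly. */
      bool shouldFsyncEverysec(long long now_ms, long long last_fsync_ms) {
          return now_ms - last_fsync_ms >= 1000;
      }
      ```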
      
      In cases where the event loop cycle duration is short and cycles are
      rapid (e.g. running many fast commands with a short pipeline, or a high
      `hz` config), this change will not make much difference, since either
      way we'll be quick to detect that we're in a "new second", and it's
      likely that these fsyncs will always be executed close to the
      second-switch barrier.
      
      But in cases of rare or slow event loop cycles (e.g. either slow
      commands, or a very low rate of traffic to redis, and a low `hz`), with
      the old code it could easily be that in some cases we'll have over 1.5
      seconds between fsyncs, and in others less than 0.5.
      
      See the discussion in #8612.
      
      This PR also handles aof_flush_postponed_start; the damage there is
      smaller since the threshold is 2 seconds, not 1.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      9103ccc3
    • zhaozhao.zz's avatar
      Add metrics for WATCH (#12966) · 50d6fe8c
      zhaozhao.zz authored
      Redis has some special commands that mark the client's state, such as
      `subscribe` and `blpop`, which mark the client as `CLIENT_PUBSUB` or
      `CLIENT_BLOCKED`, and we have metrics for these special use cases.
      
      However, there are also other special commands, like `WATCH`, which,
      although they do not have a specific flag, should also be considered
      stateful client types. For stateful clients, in many scenarios, the
      connections cannot be shared in a "connection pool". For example,
      whenever the `WATCH` command is executed, a new connection is required
      to put the client into the "watch state", because the watched keys are
      stored in the client.
      
      If different business logic requires watching different keys, separate
      connections must be used; otherwise, there will be contamination. This
      also means that if a user's business heavily relies on the `WATCH`
      command, a large number of connections will be required.
      
      Recently we have encountered this situation in our platform, where some
      users consume a significant number of connections when using Redis
      because of `WATCH`.
      
      I hope we can have a way to observe these special use cases and special
      client connections. Here I add a few monitoring metrics:
      
      1. `watching_clients` in `INFO` reply: The number of clients currently
      in the "watching" state.
      2. `total_watched_keys` in `INFO` reply: The total number of keys being
      watched.
      3. `watch` in `CLIENT LIST` reply: The number of keys each client is
      currently watching.
      50d6fe8c
  25. 08 Feb, 2024 1 commit
    • Binbin's avatar
      Add new DEBUG dict-resizing command to disable the dict resize (#13043) · 493e31e3
      Binbin authored
      The test fails here and there:
      ```
      *** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
      scan didn't handle slot skipping logic.
      ```
      
      There are two cases:
      1. In the case of passing the test, we use a child process to avoid the
      dict resize, but it cannot completely prevent it, since in dictDelete
      we still have a chance to trigger the resize (hitting the force ratio).
      The reason our test passed before is that the expire dict was still
      in the rehashing process, so dictDelete / dictShrinkIfNeeded could
      not trigger the resize.
      
      2. In the case of failing the test, the expire dict finished rehashing,
      so the last dictDelete / dictShrinkIfNeeded triggered the dict resize
      since it hit the force ratio, and the skipping logic failed.
      
      This PR adds a new DEBUG command to disable the dict resize.
      493e31e3
  26. 06 Feb, 2024 1 commit
    • Binbin's avatar
      Re-compute active_defrag_running after adjusting defrag configurations (#13020) · 13bd3643
      Binbin authored
      Currently, once active defrag starts, we cannot adjust
      active_defrag_running downwards. This is because active_defrag_running
      is dynamically computed based on the fragmentation; we think we should
      not lower the effort when the fragmentation drops.
      
      However, we need to note that active_defrag_running is also dynamically
      computed based on configurations. In this case, we are not respecting
      cycle-min or cycle-max. Some people may realize halfway through that
      defrag consumes a lot and want to adjust it.
      
      Previously we could only turn off activedefrag and then turn it on again
      to adjust active_defrag_running downwards. So in this PR, when an active
      defrag configuration change is made, we will re-compute it.
      
      These configuration items are:
      - active-defrag-cycle-min
      - active-defrag-cycle-max
      - active-defrag-threshold-upper
      13bd3643
  27. 05 Feb, 2024 1 commit
    • guybe7's avatar
      Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822) · 8cd62f82
      guybe7 authored
      # Description
      Gather most of the scattered `redisDb`-related code from the per-slot
      dict PR (#11695) and turn it to a new data structure, `kvstore`. i.e.
      it's a class that represents an array of dictionaries.
      
      # Motivation
      The main motivation is code cleanliness, the idea of using an array of
      dictionaries is very well-suited to becoming a self-contained data
      structure.
      This allowed cleaning some ugly code, among others: loops that run twice
      on the main dict and expires dict, and duplicate code for allocating and
      releasing this data structure.
      
      # Notes
      1. This PR reverts the part of https://github.com/redis/redis/pull/12848
      where the `rehashing` list is global (handling rehashing `dict`s is
      under the responsibility of `kvstore`, and should not be managed by the
      server)
      2. This PR also replaces the type of `server.pubsubshard_channels` from
      `dict**` to `kvstore` (original PR:
      https://github.com/redis/redis/pull/12804). After that was done,
      server.pubsub_channels was also chosen to be a `kvstore` (with only one
      `dict`, which seems odd) just to make the code cleaner by making it the
      same type as `server.pubsubshard_channels`, see
      `pubsubtype.serverPubSubChannels`
      3. the keys and expires kvstores are currently configured to allocate
      the individual dicts only when the first key is added (unlike before,
      when they were allocated in advance), but they won't release them when
      the last key is deleted.
      
      Worth mentioning that due to the recent change, the reply of DEBUG
      HTSTATS changed in case no keys were ever added to the db.
      
      before:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ```
      
      after:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      [Expires HT]
      ```
      8cd62f82
  28. 30 Jan, 2024 1 commit
    • Binbin's avatar
      Fix blocking commands timeout is reset due to re-processing command (#13004) · 492021db
      Binbin authored
      In #11012, we reprocess the command when a client is unblocked on keys.
      In some blocking commands, for example in the XREADGROUP BLOCK scenario,
      because of the re-processing of the command we recalculate the block
      timeout, causing the blocking time to be reset.
      
      This commit adds a new CLIENT_REPROCESSING_COMMAND client flag to
      explicitly let the command know that it is being re-processed; later, in
      blockForKeys, we will not reset the timeout.
      
      Affected BLOCK cases: 
      - list / zset / stream, added test cases for each.
      
      Unaffected cases:
      - module (never re-process the commands).
      - WAIT / WAITAOF (never re-process the commands).
      
      Fixes #12998.
      492021db
  29. 29 Jan, 2024 1 commit
    • Chen Tianjie's avatar
      Optimize resizing hash table to resize not only non-empty dicts. (#12819) · af7ceeb7
      Chen Tianjie authored
      The function `tryResizeHashTables` only attempts to shrink the dicts
      that have keys (a change from #11695). This was a serious problem until
      the change in #12850, since it meant that if all keys are deleted, we
      won't shrink the dict.
      But still, both dictShrink and dictExpand may be blocked by a forked
      child process; therefore, the cron job needs to perform both dictShrink
      and dictExpand, for not just non-empty dicts, but all dicts in DBs.
      
      What this PR does:
      
      1. Try to resize all dicts in DBs (not just non-empty ones, as it was
      since #12850)
      2. handle both shrink and expand (not just shrink, as it was since
      forever)
      3. Refactor some APIs about dict resizing (get rid of `htNeedsShrink`
      and `dictShrinkToFit`, and expose `dictShrinkIfNeeded` and
      `dictExpandIfNeeded`, which already contain all the code of the
      functions we got rid of, to make the APIs more neat)
      4. In the `Don't rehash if redis has child process` test, now that the
      cron would do resizing, we no longer need to write to the DB after the
      child process is killed, and can wait for the cron to expand the hash
      table.
      af7ceeb7
  30. 25 Jan, 2024 1 commit
    • zhaozhao.zz's avatar
      Revert multi OOM limit and add multi buffer limit (#12961) · 85a834bf
      zhaozhao.zz authored
      Fix #9926 , and introduce an alternative method to prevent abuse of
      transactions:
      
      1. Revert #5454 (which was blocking read-only transactions in the OOM
      state), and break the tie between MULTI-state memory usage and the
      server OOM state. Meaning that we'll limit the total memory a single
      client can queue, and do that unconditionally regardless of the server
      being OOM or not.
      2. To prevent abuse of transactions, we use the
      `client-query-buffer-limit` to restrict the size of the transaction.
      Because the commands cached in the MULTI/EXEC queue have not been
      executed yet, they are also considered a part of the "query buffer"
      in a broader sense. In other words, the commands in the MULTI queue and
      the `querybuf` of the client together constitute the "query buffer"
      (see the sketch below). When they exceed the limit, the connection will
      be disconnected.
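      A minimal sketch of the combined check (illustrative; the function and
      parameter names are hypothetical):
      ```
      #include <stdbool.h>
      #include <stddef.h>

      /* The not-yet-executed MULTI queue counts against the same limit as
       * the raw query buffer; crossing it disconnects the client. */
      bool queryBufLimitExceeded(size_t querybuf_bytes, size_t multi_queue_bytes,
                                 size_t query_buffer_limit) {
          return querybuf_bytes + multi_queue_bytes > query_buffer_limit;
      }
      ```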
      
      The reasoning is that it's sensible to send a single command with a
      huge (1GB) argument, and it's sensible to send a transaction with many
      small commands, but it's probably not common to send a long transaction
      with many huge arguments (which would consume a lot of memory before
      even being executed).
      
      If anyone runs into that, they can simply increase the
      `client-query-buffer-limit` config.
      
      P.S. To prevent DDoS attacks, unauthenticated clients have a separate
      hard limit: their query buffer should not exceed a maximum of 1MB. In
      other words, if the query buffer of an unauthenticated client exceeds
      1MB or the `client-query-buffer-limit` (if it is set to a value smaller
      than 1MB), the connection will be disconnected.
      85a834bf
  31. 23 Jan, 2024 1 commit
    • Binbin's avatar
      Some cleanups around function (#12940) · 628c0dea
      Binbin authored
      This PR did some cleanups around function:
      - drop the comment about Libraries Ctx, since we do have a comment
        in functionsLibCtx; no need to maintain multiple copies.
      - remove the outdated comment about the dropped Library description.
      - remove unused desc and code vars in functionExtractLibMetaData.
      - fix the engines_nemory typo, changing it to engines_memory.
      - remove the outdated comment about FUNCTION CREATE and FUNCTION INFO;
        FUNCTION CREATE was renamed to FUNCTION LOAD.
      - Check in initServer whether the return of functionsInit is OK.
      628c0dea
  32. 19 Jan, 2024 2 commits
    • Yanqi Lv's avatar
      Change the threshold of dict expand, shrink and rehash (#12948) · b07174af
      Yanqi Lv authored
      Before this change (most recently modified in
      https://github.com/redis/redis/pull/12850#discussion_r1421406393), The
      trigger for normal expand threshold was 100% utilization and the trigger
      for normal shrink threshold was 10% (HASHTABLE_MIN_FILL).
      While during fork (DICT_RESIZE_AVOID), when we want to avoid rehash, the
      trigger thresholds were multiplied by 5 (`dict_force_resize_ratio`),
      meaning 500% for expand and 2% (100/10/5) for shrink.
      
      However, in `dictRehash` (the incremental rehashing), the rehashing
      threshold for shrinking during fork (DICT_RESIZE_AVOID) was 20% by
      mistake.
      This meant that if shrinking was triggered when `dict_can_resize` was
      `DICT_RESIZE_ENABLE` (where the threshold is 10%), the rehashing could
      continue when `dict_can_resize` became `DICT_RESIZE_AVOID`.
      This would cause unwanted CopyOnWrite damage.
      
      It would make sense for the thresholds of the rehash trigger and the
      thresholds of the incremental rehashing to be the same; however, in one
      we compare the size of the hash table to the number of records, and in
      the other we compare the size of ht[0] to the size of ht[1], so the
      formula is not exactly the same.
      
      To make things easier we change all the thresholds to powers of 2, so
      the normal shrinking threshold is changed from 100/10 (i.e. 10%) to
      100/8 (i.e. 12.5%), and we change the threshold during forks from 5 to
      4, i.e. from 500% to 400% for expand, and from 2% (100/10/5) to 3.125%
      (100/8/4)
      b07174af
    • debing.sun's avatar
      Fix race condition issues between the main thread and module threads (#12817) · d0640029
      debing.sun authored
      Fix #12785 and other race condition issues.
      See the following isolated comments.
      
      The following report was obtained using SANITIZER thread.
      ```sh
      make SANITIZER=thread
      ./runtest-moduleapi --config io-threads 4 --config io-threads-do-reads yes --accurate
      ```
      
      1. Fixed thread-safe issue in RM_UnblockClient()
      Related discussion:
      https://github.com/redis/redis/pull/12817#issuecomment-1831181220
      * When blocking a client in a module using `RM_BlockClientOnKeys()` or
      `RM_BlockClientOnKeysWithFlags()`
      with a timeout_callback, calling RM_UnblockClient() in module threads
      can lead to race conditions
           in `updateStatsOnUnblock()`.
      
           - Introduced: 
              Version: 6.2
              PR: #7491
      
           - Touch:
      `server.stat_numcommands`, `cmd->latency_histogram`, `server.slowlog`,
      and `server.latency_events`
           
           - Harm Level: High
      Potentially corrupts the memory data of `cmd->latency_histogram`,
      `server.slowlog`, and `server.latency_events`
      
           - Solution:
      Differentiate whether the call to moduleBlockedClientTimedOut() comes
      from the module or the main thread.
      Since we can't know if RM_UnblockClient() comes from module threads, we
      always assume it does and
      let `updateStatsOnUnblock()` asynchronously update the unblock status.
           
      * When an error reply is issued in timeout_callback(), ctx is not
      thread-safe, eventually leading to race conditions in `afterErrorReply`.
      
           - Introduced: 
              Version: 6.2
              PR: #8217
      
           - Touch
             `server.stat_total_error_replies`, `server.errors`, 
      
           - Harm Level: High
             Potentially corrupts the memory data of `server.errors`
         
            - Solution: 
      Make the ctx in `timeout_callback()` with `REDISMODULE_CTX_THREAD_SAFE`,
      and asynchronously reply errors to the client.
      
      2. Made RM_Reply*() family API thread-safe
      Related discussion:
      https://github.com/redis/redis/pull/12817#discussion_r1408707239
      Call chain: `RM_Reply*()` -> `_addReplyToBufferOrList()` -> touch
      server.current_client
      
          - Introduced: 
             Version: 7.2.0
             PR: #12326
      
         - Harm Level: None
      Since the module fake client won't have the `CLIENT_PUSHING` flag, even
      if we touch server.current_client,
           we can still exit after checking `c->flags & CLIENT_PUSHING`.
      
         - Solution
            Checking `c->flags & CLIENT_PUSHING` earlier.
      
      3. Made freeClient() thread-safe
          Fix #12785
      
          - Introduced: 
             Version: 4.0
      Commit:
      https://github.com/redis/redis/commit/3fcf959e609e850a114d4016843e4c991066ebac
      
          - Harm Level: Moderate
             * Trigger assertion
      It happens when the module thread calls freeClient while the io-thread
      is in progress,
      which just triggers an assertion, and doesn't cause any race conditions.
      
      * Touch `server.current_client`, `server.stat_clients_type_memory`, and
      `clientMemUsageBucket->clients`.
      It happens between the main thread and the module threads, may cause
      data corruption.
      1. Errors reset `server.current_client` to NULL, but theoretically this
      won't happen,
      because the module has already reset `server.current_client` to the old
      value before entering freeClient.
      2. corrupts `clientMemUsageBucket->clients` in
      updateClientMemUsageAndBucket().
      3. Causes server.stat_clients_type_memory memory statistics to be
      inaccurate.
          
          - Solution:
      * No longer counts memory usage on fake clients, to avoid updating
      `server.stat_clients_type_memory` in freeClient.
      * No longer resetting `server.current_client` in unlinkClient, because
      the fake client won't be evicted or disconnected in the mid of the
      process.
      * Assert `io_threads_op == IO_THREADS_OP_IDLE` only if c is
      not a fake client.
      
      4. Fixed free client args without GIL
      Related discussion:
      https://github.com/redis/redis/pull/12817#discussion_r1408706695
      When freeing retained strings in the module thread (refcount decr), or
      using them in some way (refcount incr), we should do so while holding
      the GIL,
      otherwise, they might be simultaneously freed while the main thread is
      processing the unblock client state.
      
          - Introduced: 
             Version: 6.2.0
             PR: #8141
      
         - Harm Level: Low
           Trigger assertion or double free or memory leak. 
      
         - Solution:
      Documenting that module API users need to ensure any access to these
      retained strings is done with the GIL locked
      
      5. Fix adding fake client to server.clients_pending_write
          It will incorrectly log the memory usage for the fake client.
      Related discussion:
      https://github.com/redis/redis/pull/12817#issuecomment-1851899163
      
          - Introduced: 
             Version: 4.0
      Commit:
      https://github.com/redis/redis/commit/9b01b64430fbc1487429144d2e4e72a4a7fd9db2
      
      
      
          - Harm Level: None
            Only result in NOP
      
          - Solution:
             * Don't add fake client into server.clients_pending_write
      * Add c->conn assertion for updateClientMemUsageAndBucket() and
      updateClientMemoryUsage() to avoid same
               issue in the future.
      So now it will be the responsibility of the caller of both of them to
      avoid passing in fake client.
      
      6. Fix calling RM_BlockedClientMeasureTimeStart() and
      RM_BlockedClientMeasureTimeEnd() without GIL
          - Introduced: 
             Version: 6.2
             PR: #7491
      
         - Harm Level: Low
      Causes inaccuracies in command latency histogram and slow logs, but does
      not corrupt memory.
      
         - Solution:
      Module API users who know that non-thread-safe APIs will be used in
      multi-threading need to take responsibility for protecting them with
      their own locks instead of the GIL, as using the GIL is too expensive.
      
      ### Other issue
      1. RM_Yield is not thread-safe, fixed via #12905.
      
      ### Summary
      1. Fix thread-safety issues for `RM_UnblockClient()`, `freeClient()` and
      `RM_Yield`, potentially preventing memory corruption, data disorder, or
      assertion failures.
      2. Updated docs and module test to clarify module API users'
      responsibility for locking non-thread-safe APIs in multi-threading, such
      as RM_BlockedClientMeasureTimeStart/End(), RM_FreeString(),
      RM_RetainString(), and RM_HoldString().
      
      ### About backport to 7.2
      1. The implementation of (1) is not too satisfying; would like to get
      more eyes.
      2. (2), (3) can safely be backported.
      3. (4), (6) just modify the module tests and update the
      documentation, no need for a backport.
      4. (5) is harmless, no need for a backport.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      d0640029
  33. 18 Jan, 2024 1 commit
    • Binbin's avatar
      Fix dict resize ratio checks, avoid precision loss from integer division (#12952) · 14b1edfd
      Binbin authored
      In the past we used integers to compare ratios. Let us assume that
      we have the following data when expanding:
      ```
      used / size > 5
      `80 / 16 > 5` is false
      `81 / 16 > 5` is false
      `95 / 16 > 5` is false
      `96 / 16 > 5` is true
      ```
      
      Because the integer result is rounded down, our resize breaks the ratio
      constraint. This has existed since the beginning, and resulted in
      us not strictly following the ratio (shrink also has the same issue).
      
      This PR changes it to multiplication to avoid floating point
      calculations.
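      A small demo of the difference, using the same numbers as above:
      ```
      #include <stdio.h>

      int main(void) {
          unsigned long used_values[] = {80, 81, 95, 96};
          unsigned long size = 16, ratio = 5;
          for (int i = 0; i < 4; i++) {
              unsigned long used = used_values[i];
              printf("used=%lu: division says %d, multiplication says %d\n",
                     used,
                     used / size > ratio,   /* truncated: 81/16 == 5 */
                     used > size * ratio);  /* exact: 81 > 80 */
          }
          return 0;
      }
      ```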
      14b1edfd
  34. 15 Jan, 2024 1 commit
    • Yanqi Lv's avatar
      Shrink dict when deleting dictEntry (#12850) · e2b7932b
      Yanqi Lv authored
      When we insert entries into dict, it may autonomously expand if needed.
      However, when we delete entries from dict, it doesn't shrink to the
      proper size. If there are few entries in a very large dict, it may cause
      huge waste of memory and inefficiency when iterating.
      
      The main keyspace dicts (keys and expires) are shrunk by cron
      (`tryResizeHashTables` calls `htNeedsResize` and `dictResize`),
      and some data structures such as zset and hash also do that (call
      `htNeedsResize`) right after a loop of calls to `dictDelete`.
      But many other dicts are completely missing that call (they can only
      expand).
      
      In this PR, we provide the ability to automatically shrink the dict when
      deleting. The conditions triggering the shrinking are the same as
      `htNeedsResize` used to have, i.e. we expand when we're over 100%
      utilization, and shrink when we're below 10% utilization (see the
      sketch below).
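      A sketch of those trigger conditions (illustrative, not the actual Redis
      code; `HASHTABLE_MIN_FILL` is the 10% fill constant mentioned elsewhere
      in this log):
      ```
      #include <stdbool.h>

      #define HASHTABLE_MIN_FILL 10   /* shrink below 10% utilization */

      bool dictNeedsExpand(unsigned long used, unsigned long size) {
          return used >= size;        /* at or above 100% utilization */
      }

      bool dictNeedsShrink(unsigned long used, unsigned long size) {
          return size > 0 && used * 100 < size * HASHTABLE_MIN_FILL;
      }
      ```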
      
      Additionally:
      * Add `dictPauseAutoResize` so that flows that do mass deletions, will
      only trigger shrinkage at the end.
      * Rename `dictResize` to `dictShrinkToFit` (same logic as it used to
      have, but better name describing it)
      * Rename `_dictExpand` to `_dictResize` (same logic as it used to have,
      but better name describing it)
       
      related to discussion
      https://github.com/redis/redis/pull/12819#discussion_r1409293878
      
      
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
      e2b7932b
  35. 08 Jan, 2024 1 commit
    • Yanqi Lv's avatar
      Optimize performance when many clients [p|s]unsubscribe simultaneously (#12838) · c452e414
      Yanqi Lv authored
      I've been testing the performance of Pub/Sub commands recently. I found
      that if many clients unsubscribe or are killed simultaneously, Redis
      needs a long time to deal with it.
      
      In my experiment, I set up 5000 clients and each client subscribes to
      100 channels. Then I call `client kill type pubsub` to simulate the
      situation where clients unsubscribe from all channels at the same time,
      and measure the execution time. The result shows that it takes about
      23s. Using _perf_, I found that `listSearchKey` in
      `pubsubUnsubscribeChannel` costs more than 90% of the cpu time. I think
      we can optimize this situation.
      
      In this PR, I replace the list with a dict to track the clients
      subscribing to the channel more efficiently. It changes O(N) to O(1) in
      the search phase. Then I repeat the experiment as above. The results are
      as follows.
      
      |              | Execution Time(s) |used_memory(MB) |
      | :---------------- | :------: | :----: |
      | unstable(1bd0b549)        |   23.734   | 65.41 |
      | optimize-pubsub           |   0.288   | 67.66 |
      
      Thanks to #11595, I use a no-value dict, and the results show that the
      performance improves significantly while the memory usage only increases
      slightly.
      
      Notice:
      
      - This PR will cause a performance degradation of about 20% in the
      `[p|s]subscribe` commands, but won't freeze Redis.
      c452e414
  36. 07 Jan, 2024 1 commit
    • debing.sun's avatar
      Make RM_Yield thread-safe (#12905) · ca1f67af
      debing.sun authored
      ## Issues and solutions from #12817
      1. Touching ProcessingEventsWhileBlocked and calling moduleCount()
      without the GIL in afterSleep()
          - Introduced: 
             Version: 7.0.0
             PR: #9963
      
         - Harm Level: Very High
      If the module thread calls `RM_Yield()` before the main thread enters
      afterSleep(),
      and modifies `ProcessingEventsWhileBlocked` (+1), it will cause the main
      thread not to wait for the GIL,
      which can lead to all kinds of unforeseen problems, including memory
      data corruption.
      
         - Initial / Abandoned Solution:
            * Added `__thread` specifier for ProcessingEventsWhileBlocked.
      `ProcessingEventsWhileBlocked` is used to protect against nested event
      processing, but event processing
      in the main thread and module threads should be completely independent
      and unaffected, so it is safer
               to use TLS.
      * Adding a cached module count to keep track of the current number of
      modules, to avoid having to use `dictSize()`.
          
          - Related Warnings:
      ```
      WARNING: ThreadSanitizer: data race (pid=1136)
        Write of size 4 at 0x0001045990c0 by thread T4 (mutexes: write M0):
          #0 processEventsWhileBlocked networking.c:4135 (redis-server:arm64+0x10006d124)
          #1 RM_Yield module.c:2410 (redis-server:arm64+0x10018b66c)
          #2 bg_call_worker <null>:83232836 (blockedclient.so:arm64+0x16a8)
      
        Previous read of size 4 at 0x0001045990c0 by main thread:
          #0 afterSleep server.c:1861 (redis-server:arm64+0x100024f98)
          #1 aeProcessEvents ae.c:408 (redis-server:arm64+0x10000fd64)
          #2 aeMain ae.c:496 (redis-server:arm64+0x100010f0c)
          #3 main server.c:7220 (redis-server:arm64+0x10003f38c)
      ```
      
      2. aeApiPoll() is not thread-safe
      When using RM_Yield to handle events in a module thread, if the main
      thread has not yet
      entered `afterSleep()`, both the module thread and the main thread may
      touch `server.el` at the same time.
      
          - Introduced: 
             Version: 7.0.0
             PR: #9963
      
         - Old / Abandoned Solution:
      Adding a new mutex to protect the timing between beforeSleep() and
      afterSleep().
      Defect: if the main thread enters the ae loop without any IO events, it
      will wait until
      the next timeout or until there is an event again, and the module
      thread will
      always hang until the main thread leaves the event loop.
      
          - Related Warnings:
      ```
      SUMMARY: ThreadSanitizer: data race ae_kqueue.c:55 in addEventMask
      ==================
      ==================
      WARNING: ThreadSanitizer: data race (pid=14682)
        Write of size 4 at 0x000100b54000 by thread T9 (mutexes: write M0):
          #0 aeApiPoll ae_kqueue.c:175 (redis-server:arm64+0x100010588)
          #1 aeProcessEvents ae.c:399 (redis-server:arm64+0x10000fb84)
          #2 processEventsWhileBlocked networking.c:4138 (redis-server:arm64+0x10006d3c4)
          #3 RM_Yield module.c:2410 (redis-server:arm64+0x10018b66c)
          #4 bg_call_worker <null>:16042052 (blockedclient.so:arm64+0x169c)
      
        Previous write of size 4 at 0x000100b54000 by main thread:
          #0 aeApiPoll ae_kqueue.c:175 (redis-server:arm64+0x100010588)
          #1 aeProcessEvents ae.c:399 (redis-server:arm64+0x10000fb84)
          #2 aeMain ae.c:496 (redis-server:arm64+0x100010da8)
          #3 main server.c:7238 (redis-server:arm64+0x10003f51c)
      ```
      
      ## The final fix as the comments:
      https://github.com/redis/redis/pull/12817#discussion_r1436427232
      Optimized solution based on the above comment:
      
      First, we add `module_gil_acquring` to indicate whether the main thread
      is currently in the acquiring-GIL state.
      
      When the module thread starts to yield, there are two possibilities (we
      assume the caller keeps the GIL):
      1. The main thread is between beforeSleep() and afterSleep(), that
      is, `module_gil_acquring` is not 1 now.
      At this point, the module thread will wake up the main thread through
      the pipe and leave the yield,
      waiting for the next yield, when the main thread may already be in the
      acquiring-GIL state.
          
      2. The main thread is in the acquiring-GIL state.
      The module thread releases the GIL, yielding the CPU to give the main
      thread an opportunity to start
      event processing, and then acquires the GIL again until the main thread
      releases it.
      This is the direction mentioned in
      https://github.com/redis/redis/pull/12817#discussion_r1436427232.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      ca1f67af
  37. 28 Dec, 2023 1 commit
    • guybe7's avatar
      WAITAOF: Try to wake blocked clients ASAP in the next beforeSleep (#12627) · 12b611b3
      guybe7 authored
      In case server.fsynced_reploff changed (e.g. flushAppendOnly set it to
      server.master_repl_offset when there was nothing to fsync), we want to
      avoid sleeping before the next beforeSleep so we can call
      blockedBeforeSleep ASAP.
      Without that, in case there's no incoming traffic, we could be waiting
      for the next cron timer event to wake us up.
      12b611b3