1. 21 May, 2024 1 commit
    • Have consistent behavior of SPUBLISH within multi/exec like regular command (#13276) · 9ffc35c9
      debing.sun authored
      
      
      This PR is based on the commits from PR #12944.
      
      Allow SPUBLISH command within multi/exec on replica
      
      Behavior on unstable:
      
      ```
      127.0.0.1:6380> CLUSTER NODES
      39ce8aa20f1f0d91f1a88d976ee1926dfefcdf1a 127.0.0.1:6380@16380 myself,slave 8b0feb120b68aac489d6a5af9c77dc40d71bc792 0 0 0 connected
      8b0feb120b68aac489d6a5af9c77dc40d71bc792 127.0.0.1:6379@16379 master - 0 1705091681202 0 connected 0-16383
      127.0.0.1:6380> SPUBLISH hello world
      (integer) 0
      127.0.0.1:6380> MULTI
      OK
      127.0.0.1:6380(TX)> SPUBLISH hello world
      QUEUED
      127.0.0.1:6380(TX)> EXEC
      (error) MOVED 866 127.0.0.1:6379
      ```
      
      With this change:
      
      ```
      127.0.0.1:6380> SPUBLISH hello world
      (integer) 0
      127.0.0.1:6380> MULTI
      OK
      127.0.0.1:6380(TX)> SPUBLISH hello world
      QUEUED
      127.0.0.1:6380(TX)> EXEC
      1) (integer) 0
      ```
      
      ---------
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: oranagra <oran@redislabs.com>
      9ffc35c9
  2. 14 May, 2024 1 commit
  3. 06 May, 2024 1 commit
    • XREADGROUP from PEL should not affect server.dirty (#13251) · 0e1de78f
      guybe7 authored
Because it does not cause any propagation (arguably it should, see the comment in the tcl file).

The motivation for this fix is that in 6.2, if dirty changed without propagation inside MULTI/EXEC, it would cause propagation of EXEC only, which would result in the replica sending errors to its master.
      0e1de78f
  4. 16 Apr, 2024 1 commit
    • Allocate Lua VM code with jemalloc instead of libc, and count its used memory (#13133) · 804110a4
      Binbin authored
      
      
      ## Background
1. Currently, Lua memory control does not pass through Redis's zmalloc.c, so Redis maxmemory cannot limit memory problems caused by users abusing Lua, since this Lua VM memory is not part of used_memory.

2. Since jemalloc is much better (fragmentation and speed), and we know and trust it, we are going to use jemalloc instead of libc to allocate the Lua VM code and count its used memory.
      
      ## Process:
In this PR, we use jemalloc in Lua (a sketch follows this list).
1. Create an arena for all Lua VMs (script and function), which is shared, in order to avoid blocking the defragger.
2. Create a bound tcache for the Lua VM, since the Lua VM and the main thread are by default in the same tcache, and if there is no isolated tcache, Lua may request memory from the tcache which has just been freed by the main thread, and vice versa. On the other hand, since the Lua VM might be released in a bio thread, but the tcache is not thread-safe, we need to recreate the tcache every time we recreate the Lua VM.
3. Remove Lua memory statistics from memory fragmentation statistics to avoid the effects of Lua memory fragmentation.
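
For illustration only, here is a minimal standalone sketch of the jemalloc side of this idea, using jemalloc's non-standard `mallctl`/`mallocx`/`dallocx` API. The names and wiring are hypothetical and greatly simplified, not the actual Redis allocator glue, and it assumes jemalloc is linked without a symbol prefix:

```c
/* Illustrative sketch only: a dedicated jemalloc arena plus a bound tcache
 * for Lua allocations. */
#include <jemalloc/jemalloc.h>
#include <stdio.h>

static unsigned lua_arena;   /* one shared arena for all Lua VMs */
static unsigned lua_tcache;  /* recreated together with the Lua VM */

static int create_lua_arena_and_tcache(void) {
    size_t sz = sizeof(unsigned);
    if (mallctl("arenas.create", &lua_arena, &sz, NULL, 0)) return -1;
    if (mallctl("tcache.create", &lua_tcache, &sz, NULL, 0)) return -1;
    return 0;
}

/* Allocator the Lua VM could be pointed at (e.g. via lua_newstate's
 * allocator callback): everything goes to the dedicated arena/tcache. */
static void *lua_alloc(size_t n) {
    return mallocx(n, MALLOCX_ARENA(lua_arena) | MALLOCX_TCACHE(lua_tcache));
}

static void lua_free(void *p) {
    if (p) dallocx(p, MALLOCX_TCACHE(lua_tcache));
}

/* The tcache is not thread-safe, so it is destroyed when the VM is released
 * (possibly in a bio thread) and recreated with the next VM. */
static void destroy_lua_tcache(void) {
    mallctl("tcache.destroy", NULL, NULL, &lua_tcache, sizeof(lua_tcache));
}

int main(void) {
    if (create_lua_arena_and_tcache()) return 1;
    void *p = lua_alloc(128);
    printf("allocated 128 bytes from arena %u\n", lua_arena);
    lua_free(p);
    destroy_lua_tcache();
    return 0;
}
```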
      
      ## Other
Add the following new fields to `INFO DEBUG` (we may promote them to INFO MEMORY some day):
1. allocator_allocated_lua: total number of bytes allocated in the Lua arena
2. allocator_active_lua: total number of bytes in active pages allocated in the Lua arena
3. allocator_resident_lua: maximum number of bytes in physically resident data pages mapped in the Lua arena
4. allocator_frag_bytes_lua: fragmentation bytes in the Lua arena

This is oranagra's idea, and I got some help from sundb.
      
      This solves the third point in #13102.
      
      ---------
Co-authored-by: debing.sun <debing.sun@redis.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
      804110a4
  5. 02 Apr, 2024 1 commit
    • Change FLUSHALL/FLUSHDB SYNC to run as blocking ASYNC (#13167) · 4df03796
      Moti Cohen authored
      # Overview
      Users utilize the `FLUSHDB SYNC` and `FLUSHALL SYNC` commands for a variety of 
      reasons. The main issue with this command is that if the database becomes 
      substantial in size, the server will be unresponsive for an extended period. 
      Other than freezing application traffic, this may also lead some clients making 
      incorrect judgments about the server's availability. For instance, a watchdog may 
      erroneously decide to terminate the process, resulting in potential adverse 
      outcomes. While a `FLUSH* ASYNC` can address these issues, it might not be used 
      for two reasons: firstly, it's not the default, and secondly, in some cases, the 
      client issuing the flush wants to wait for its completion before repopulating the 
      database.
      
      Between the option of triggering FLUSH* asynchronously in the background without 
      indication for completion versus running it synchronously in the foreground by 
      the main thread, there is another more appealing option. We can block the
      client that requested the flush, execute the flush command in the background, and 
      once done, unblock the client and return notification for completion. This approach 
      ensures the server remains responsive to other clients, and the blocked client 
      receives the expected response only after the flush operation has been successfully 
      carried out.
      
      # Implementation details
Instead of defining yet another flavor of the flush command, we modify
`FLUSHALL SYNC` and `FLUSHDB SYNC` to always run in this new mode.
      
      ## Extending BIO Threads capabilities
      Today jobs that are carried out by BIO threads don't have the capability to 
      indicate completion to the main thread. We can add this infrastructure by having
      an additional dummy job, coined as completion-job, that eventually will be written 
      by BIO threads to a response-queue. The main thread will take care to consume items
      from the response-queue and call the provided callback function of each 
      completion-job.
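
For illustration, a minimal self-contained sketch of this completion-job pattern (plain pthreads, hypothetical names; the real change extends bio.c):

```c
/* Illustrative completion-job pattern: a background worker pushes a marker
 * job onto a response queue once its work is done; the main loop drains the
 * queue and runs each completion callback (here: "unblock the client"). */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct cjob {
    void (*done_cb)(void *arg);
    void *arg;
    struct cjob *next;
} cjob;

static cjob *response_head = NULL;
static pthread_mutex_t response_lock = PTHREAD_MUTEX_INITIALIZER;

static void push_response(cjob *j) {
    pthread_mutex_lock(&response_lock);
    j->next = response_head;
    response_head = j;
    pthread_mutex_unlock(&response_lock);
}

/* Background thread: do the heavy flush-like work, then report completion. */
static void *worker(void *arg) {
    cjob *completion = arg;
    /* ... free the whole database here ... */
    push_response(completion);
    return NULL;
}

/* Main-thread side, called from the event loop: consume completion jobs. */
static void drain_responses(void) {
    pthread_mutex_lock(&response_lock);
    cjob *j = response_head;
    response_head = NULL;
    pthread_mutex_unlock(&response_lock);
    while (j) {
        cjob *next = j->next;
        j->done_cb(j->arg);
        free(j);
        j = next;
    }
}

static void unblock_client(void *arg) { printf("client %s unblocked\n", (char *)arg); }

int main(void) {
    cjob *completion = calloc(1, sizeof(*completion));
    completion->done_cb = unblock_client;
    completion->arg = "c1";
    pthread_t tid;
    pthread_create(&tid, NULL, worker, completion);
    pthread_join(tid, NULL);   /* a real server would poll from the event loop instead */
    drain_responses();
    return 0;
}
```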
      
      ## FLUSH* SYNC to run as blocking ASYNC
Command `FLUSH* SYNC` will be modified to create one or more async jobs to flush the DB(s) and afterward push an additional completion-job request. By sending the completion-job request only at the end, the main thread will be called back only after all the preceding jobs have completed their task in the background. During that time, the client that issued the command is suspended and marked as `BLOCKED_LAZYFREE`, whereas any other client will be able to communicate with the server without any issue.
      4df03796
  6. 19 Mar, 2024 2 commits
    • fix wrong data type conversion in zrangeResultBeginStore (#13148) · bad33f87
      Yanqi Lv authored
      In `beginResultEmission`, -1 means the result length is not known in
      advance. But after #12185, if we pass -1 to `zrangeResultBeginStore`, it
      will convert to SIZE_MAX in `zsetTypeCreate` and try to `dictExpand`.
Although `dictExpand` won't succeed because the size overflows, I think
we'd better avoid this wrong conversion.
      
      This bug can be triggered when the source of `zrangestore` doesn't exist
      or we use `zrangestore` command with `byscore` or `bylex`.
      The impact is that dst keys will be converted to use skiplist instead of
      listpack.
      bad33f87
    • Prevent lua error_reply abuse from causing errorstats to become larger (#13141) · e04d41d7
      Binbin authored
Users who abuse lua error_reply will generate a new error object on each error call, which can make server.errors grow bigger and bigger. This will cause the server to block when calling INFO (we also return errorstats by default).

To prevent the damage it can cause, when misuse is detected we will print a warning log and disable the errorstats to avoid adding more new errors. It can be re-enabled via CONFIG RESETSTAT.
      
Because server.errors may be very large (it may be better now since we have the limit), CONFIG RESETSTAT may block for a while. So in resetErrorTableStats, we try to lazyfree server.errors.
      
      See the related discussion at the end of #8217.
      e04d41d7
  7. 18 Mar, 2024 1 commit
    • Fix dictionary use-after-free in active expire and make kvstore iter to respect EMPTY flag (#13135) · 7b070423
      Binbin authored
After #13072, there is a use-after-free error. In expireScanCallback we delete the dict, and then in dictScan we continue to use the dict, e.g. doing `dictResumeRehashing(d)` at the end; this caused an error.

In this PR, in freeDictIfNeeded, if the dict's pauserehash is set, we don't delete the dict yet, and then when the scan returns we try to delete it again.
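
For illustration, a tiny standalone sketch of this deferral pattern (hypothetical names; the real logic lives in kvstore's freeDictIfNeeded and the scan/iterator paths):

```c
/* Illustrative deferred-free pattern (hypothetical names): if the dict is
 * still "paused" by an in-flight scan, only mark it for deletion; free it
 * for real once the scan returns and unpauses it. */
#include <stdio.h>
#include <stdlib.h>

typedef struct toydict {
    int pauserehash;      /* >0 while a scan/iterator is walking the dict */
    int marked_for_free;  /* set when an empty dict could not be freed yet */
    int used;             /* number of entries */
} toydict;

static void free_dict_if_needed(toydict **dp) {
    toydict *d = *dp;
    if (d->used != 0) return;
    if (d->pauserehash) {          /* emptied from inside a scan callback */
        d->marked_for_free = 1;    /* defer: the scan still references d */
        return;
    }
    free(d);
    *dp = NULL;
}

/* Called when the scan finishes and unpauses the dict. */
static void scan_finished(toydict **dp) {
    toydict *d = *dp;
    if (--d->pauserehash == 0 && d->marked_for_free) { free(d); *dp = NULL; }
}

int main(void) {
    toydict *d = calloc(1, sizeof(*d));
    d->pauserehash = 1;           /* a scan is in progress */
    free_dict_if_needed(&d);      /* emptied during the scan: only marked */
    printf("deferred: %d\n", d->marked_for_free);
    scan_finished(&d);            /* now it is actually released */
    printf("freed: %s\n", d == NULL ? "yes" : "no");
    return 0;
}
```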
      
At the same time, we noticed that there are similar problems with iterators. We may also delete elements during the iteration process, causing the dict to be deleted, so the iterator-related part of the PR has also been modified. dictResetIterator was also missing from the previous kvstoreIteratorNextDict; we currently have no scenario in which elements are deleted during the kvstoreIterator process, but it is handled together to avoid future problems. Added some simple tests to verify the changes.

In addition, the modification in #13072 omitted initTempDb and emptyDbAsync, and they were also added. This PR also removes the slow flag from the expire test (it consumes 1.3s) so that problems can be found in CI in the future.
      7b070423
  8. 13 Mar, 2024 2 commits
    • Lua eval scripts first in first out LRU eviction (#13108) · ad28d222
      Binbin authored
      In some cases, users will abuse lua eval. Each EVAL call generates
      a new lua script, which is added to the lua interpreter and cached
      to redis-server, consuming a large amount of memory over time.
      
      Since EVAL is mostly the one that abuses the lua cache, and these
      won't have pipeline issues (i.e. the script won't disappear
      unexpectedly,
      and cause errors like it would with SCRIPT LOAD and EVALSHA),
      we implement a plain FIFO LRU eviction only for these (not for
      scripts loaded with SCRIPT LOAD).
      
      ### Implementation notes:
      When not abused we'll probably have less than 100 scripts, and when
      abused we'll have many thousands. So we use a hard coded value of 500
      scripts. And considering that we don't have many scripts, then unlike
      keys, we don't need to worry about the memory usage of keeping a true
      sorted LRU linked list. We compute the SHA of each script anyway,
      and put the script in a dict, we can store a listNode there, and use
      it for quick removal and re-insertion into an LRU list each time the
      script is used.
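
For illustration, a rough standalone sketch of that bookkeeping (hypothetical names and a toy limit of 3 instead of 500; not the actual script-cache code):

```c
/* Illustrative FIFO/LRU eviction for cached EVAL scripts (hypothetical names,
 * not the actual Redis code): the script cache keeps, per script, a pointer to
 * its node in a doubly linked eviction list, so each use can unlink/re-insert
 * the node in O(1), and the tail is evicted once the limit is exceeded. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_CACHED_SCRIPTS 3   /* the real limit is a hard coded 500 */

typedef struct lruNode { struct lruNode *prev, *next; char sha[16]; } lruNode;
typedef struct { lruNode *head, *tail; int len; } lruList;

static void unlink_node(lruList *l, lruNode *n) {
    if (n->prev) n->prev->next = n->next; else l->head = n->next;
    if (n->next) n->next->prev = n->prev; else l->tail = n->prev;
    l->len--;
}

static void push_head(lruList *l, lruNode *n) {
    n->prev = NULL; n->next = l->head;
    if (l->head) l->head->prev = n; else l->tail = n;
    l->head = n; l->len++;
}

/* Called when a script is added or executed: move its node to the front;
 * if the cache grew past the limit, evict the least recently used script. */
static void touch_script(lruList *l, lruNode *n, int is_new) {
    if (!is_new) unlink_node(l, n);   /* O(1) because the dict stores the node */
    push_head(l, n);
    if (l->len > MAX_CACHED_SCRIPTS) {
        lruNode *victim = l->tail;
        unlink_node(l, victim);
        printf("evicting %s\n", victim->sha);  /* also remove it from the dict */
        free(victim);
    }
}

int main(void) {
    lruList l = {0};
    const char *shas[] = {"sha-a", "sha-b", "sha-c", "sha-d"};
    for (int i = 0; i < 4; i++) {              /* 4th insert evicts "sha-a" */
        lruNode *n = calloc(1, sizeof(*n));
        snprintf(n->sha, sizeof(n->sha), "%s", shas[i]);
        touch_script(&l, n, 1);
    }
    while (l.head) { lruNode *n = l.head; unlink_node(&l, n); free(n); }
    return 0;
}
```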
      
      ### New interfaces:
      At the same time, a new `evicted_scripts` field is added to
      INFO, which represents the number of evicted eval scripts. Users
      can check it to see if they are abusing EVAL.
      
      ### benchmark:
      `./src/redis-benchmark -P 10 -n 1000000 -r 10000000000 eval "return
      __rand_int__" 0`
      
This simple EVAL-abuse benchmark creates 1 million EVAL scripts. The performance has been improved by 50%, and the max latency has dropped from 500ms to 13ms (the old latency may have been caused by table expansion inside Lua when the number of scripts is large). And in INFO memory, it used to consume 120MB (server cache) + 310MB (lua engine), but now it only consumes 70KB (server cache) + 210KB (lua engine) because of the script eviction.
      
      For non-abusive case of about 100 EVAL scripts, there's no noticeable
      change in performance or memory usage.
      
      ### unlikely potentially breaking change:
In theory, a user can load a script with EVAL and then use EVALSHA to call it (by calculating the SHA1 value on the client side). If we read the docs carefully we'd realize it's a valid scenario, but we suppose it's extremely rare. So it may happen that EVALSHA acts on a script created by EVAL, the script is evicted, and EVALSHA returns a NOSCRIPT error; that is, if you have more than 500 scripts being used in the same transaction / pipeline.
      
      This solves the second point in #13102.
      ad28d222
    • Xread last entry in stream (#7388) (#13117) · a8e74511
      Ronen Kalish authored
      
      
Allow using `+` as a special ID for the last item in a stream with the XREAD command.

This allows iterating on a stream with XREAD starting from the last available message instead of the next one, which `$` is used for. I.e. the caller can use `BLOCK` and `+` on the first call, and change to `$` on the next call.
      
      Closes #7388
      
      ---------
Co-authored-by: Felipe Machado <462154+felipou@users.noreply.github.com>
      a8e74511
  9. 12 Mar, 2024 1 commit
  10. 10 Mar, 2024 1 commit
    • Fix conversion of numbers in lua args to redis args (#13115) · 5fdaa53d
      Matthew Douglass authored
      
      
Since lua_Number is not explicitly an integer or a double, we need to make an effort to convert it to an integer when that's possible, since the string could later be used in a context that doesn't support scientific notation (e.g. 1e+09 instead of 1000000000).

Since fpconv_dtoa converts numbers with the equivalent of `%f` or `%e`, whichever is shorter, this would break if we try to pass a long integer number to a command that takes an integer: we'll get an implicit conversion to string in Lua, and then the parsing in getLongLongFromObjectOrReply will fail.
      
      ```
      > eval "redis.call('hincrby', 'key', 'field', '1000000000')" 0
      (nil)
      > eval "redis.call('hincrby', 'key', 'field', tonumber('1000000000'))" 0
      (error) ERR value is not an integer or out of range script: ac99c32e4daf7e300d593085b611de261954a946, on @user_script:1.
      ```
      
      Switch to using ll2string if the number can be safely represented as a
      long long.
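
For illustration, a rough sketch of that conversion rule (illustrative only, not the exact script_lua.c code):

```c
/* Illustrative: render a lua_Number (a double) as an integer string when it
 * is exactly representable as a long long, otherwise fall back to a plain
 * double format, so commands never receive "1e+09" where an integer is expected. */
#include <stdio.h>

static void number_to_arg(double num, char *buf, size_t buflen) {
    if (num >= -9007199254740992.0 && num <= 9007199254740992.0 &&  /* |x| <= 2^53 */
        num == (double)(long long)num) {
        snprintf(buf, buflen, "%lld", (long long)num);  /* ll2string in Redis */
    } else {
        snprintf(buf, buflen, "%.17g", num);            /* double fallback */
    }
}

int main(void) {
    char buf[64];
    number_to_arg(1000000000.0, buf, sizeof(buf));  /* tonumber('1000000000') */
    printf("%s\n", buf);                            /* -> 1000000000 */
    number_to_arg(3.5, buf, sizeof(buf));
    printf("%s\n", buf);                            /* -> 3.5 */
    return 0;
}
```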
      
      The problem was introduced in #10587 (Redis 7.2).
      closes #13113.
      
      ---------
Co-authored-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: debing.sun <debing.sun@redis.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
      5fdaa53d
  11. 05 Mar, 2024 1 commit
  12. 04 Mar, 2024 1 commit
    • Implement defragmentation for pubsub kvstore (#13058) · ad127303
      debing.sun authored
      
      
      After #13013
      
### This PR makes an effort to defrag the pubsub kvstore in the following ways:

1. Until now, server.pubsub(shard)_channels only shared the channel name obj with the first subscribed client; now change it so that the clients and the pubsub kvstore share the channel name robj. This would save a lot of memory when there are many subscribers to the same channel. It also means that we only need to defrag the channel name robj in the pubsub kvstore, and then update all client references for the current channel, avoiding the need to iterate through all the clients to do the same thing.

2. Refactor the code to defragment pubsub(shard) in the same way as the defragmentation of keys and EXPIRES, with the exception that we only defragment pubsub (without shard) when the slot is zero.
      
      
      ### Other
Fix an oversight in #11695: if defragmentation doesn't reach the end time, we should wait for the current db's keys and expires, pubsub and pubsubshard to finish before leaving; previously it was possible to exit early when only the keys were defragmented.
      
      ---------
Co-authored-by: oranagra <oran@redislabs.com>
      ad127303
  13. 01 Mar, 2024 1 commit
    • Add overhead of all DBs and rehashing dict count to info. (#12913) · 4cae99e7
      Chen Tianjie authored
      
      
Sometimes we need to make a fast judgement about why Redis is suddenly taking more memory. One of the reasons is the main DB's dicts doing rehashing.
      
      We may use `MEMORY STATS` to monitor the overhead memory of each DB, but
      there still lacks a total sum to show an overall trend. So this PR adds
      the total overhead of all DBs to `INFO MEMORY` section, together with
      the total count of rehashing DB dicts, providing some intuitive metrics
      about main dicts rehashing.
      
      This PR adds the following metrics to INFO MEMORY
      * `mem_overhead_db_hashtable_rehashing` - only size of ht[0] in
      dictionaries we're rehashing (i.e. the memory that's gonna get released
      soon)
      
and similar ones to MEMORY STATS:
* `overhead.db.hashtable.lut` (complements the existing `overhead.hashtable.main` and `overhead.hashtable.expires`, which also count the `dictEntry` structs)
      * `overhead.db.hashtable.rehashing` - temporary rehashing overhead.
      * `db.dict.rehashing.count` - number of top level dictionaries being
      rehashed.
      
      ---------
Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
      4cae99e7
  14. 29 Feb, 2024 1 commit
    • Fix propagation of entries_read by calling streamPropagateGroupID unconditionally (#12898) · f17381a3
      Binbin authored
      In XREADGROUP ACK, because streamPropagateXCLAIM does not propagate
      entries-read, entries-read will be inconsistent between master and
      replicas.
      I.e. if no entries were claimed, it would have propagated correctly, but
      if some
      were claimed, then the entries-read field would be inconsistent on the
      replica.
      
      The fix was suggested by guybe7, call streamPropagateGroupID
      unconditionally,
      so that we will normalize entries_read on the replicas. In the past, we
      would
      only set propagate_last_id when NOACK was specified. And in #9127,
      XCLAIM did
      not propagate entries_read in ACK, which would cause entries_read to be
      inconsistent between master and replicas.
      
Another approach is to add another arg to XCLAIM and let it propagate entries_read, but we decided not to use it, because we want minimal damage in case there's an old target and a new source (in the worst-case scenario, the new source doesn't recognize XGROUP SETID ... ENTRIES READ and the lag is lost; if we change XCLAIM, the damage is much more severe).
      
In this patch, if the user uses XREADGROUP .. COUNT 1 there will be an additional overhead of MULTI, EXEC and XGROUPSETID. We assume the extra commands in the case of COUNT 1 (a 4x factor, changing from one XCLAIM to MULTI+XCLAIM+XSETID+EXEC) are probably ok, since reading just one entry is in any case very inefficient (a client round trip per record), so we're hoping it's not a common case.
      
      Issue was introduced in #9127.
      f17381a3
  15. 22 Feb, 2024 2 commits
    • Expose lua os.clock() api (#12971) · 4a265554
      debing.sun authored
      
      
      Implement #12699
      
This PR exposes the Lua os.clock() API for getting the elapsed time of Lua code execution.
      
      Using:
      ```lua
local start = os.clock()
-- ... do something ...
local elapsed = os.clock() - start
      ```
      
      ---------
Co-authored-by: Meir Shpilraien (Spielrein) <meir@redis.com>
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
      4a265554
    • Determine the large limit of the quicklist node based on fill (#12659) · 165afc5f
      debing.sun authored
      Following #12568
      
      In issue #9357, when inserting an element larger than 1GB, we currently
      store it in a plain node instead of a listpack.
      Presently, when we insert an element that exceeds the maximum size of a
      packed node, it cannot be accommodated in any other nodes, thus ending
      up isolated like a large element.
      I.e. it's a node with only one element, but it's listpack encoded rather
      than a plain buffer.
      
      This PR lowers the threshold for considering an element as 'large' from
      1GB to the maximum size of a node.
      While this change doesn't completely resolve the bug mentioned in the
      previous PR, it does mitigate its potential impact.
      
      As a result of this change, we can now only use LSET to replace an
      element with another element that falls below the maximum size
      threshold.
      In the worst-case scenario, with a fill of -5, the largest packed node
      we can create is 2GB (32k * 64k):
      * 32k: The smallest element in a listpack is 2 bytes, which allows us to
      store up to 32k elements.
      * 64k: This is the maximum size for a single quicklist node.
      
      ## Others
To fully fix #9357, we need more work. As discussed in #12568, when we insert an element into a quicklistNode, it may be created in a new node, put into another node, or merged, and we can't correctly delete the node that was supposed to be deleted. I'm not sure it's worth it, since it involves a lot of modifications.
      165afc5f
  16. 20 Feb, 2024 2 commits
    • Fix watched client test timing issue caused by late close (#13062) · 3c2ea1ea
      Binbin authored
There is a timing issue in the test: the close may arrive late, or in freeClientAsync we free the client in an async way, which leads to errors in the watching_clients statistics, since we only unwatch all keys when we truly freeClient.

Add a wait here to avoid this problem. Also fixed some outdated comments I saw. The test was introduced in #12966.
      3c2ea1ea
    • Fix timing issue in blockedclient test (#13071) · 4e3be944
      Binbin authored
We can see that the elapsed time here happens to equal busy_time_limit, causing the test to fail:
      ```
      [err]: RM_Call from blocked client in tests/unit/moduleapi/blockedclient.tcl
      Expected '50' to be more than '50' (context: type eval line 26 cmd {assert_morethan [expr [clock clicks -milliseconds]-$start] $busy_time_limit} proc ::test)
      ```
      
It is reasonable for them to be equal, so the equality case is now accepted here. It should be noted that in the previous `Busy module command` test we also used assert_morethan_equal, so this was probably missed at the time.
      4e3be944
  17. 18 Feb, 2024 1 commit
    • Add metrics for WATCH (#12966) · 50d6fe8c
      zhaozhao.zz authored
      Redis has some special commands that mark the client's state, such as
      `subscribe` and `blpop`, which mark the client as `CLIENT_PUBSUB` or
      `CLIENT_BLOCKED`, and we have metrics for the special use cases.
      
However, there are also other special commands, like `WATCH`, which, although it does not have a specific flag, should also be considered a stateful client type. For stateful clients, in many scenarios the connections cannot be shared in a connection pool, meaning a connection pool cannot be used. For example, whenever the `WATCH` command is executed, a new connection is required to put the client into the "watch state", because the watched keys are stored in the client.
      
      If different business logic requires watching different keys, separate
      connections must be used; otherwise, there will be contamination. This
      also means that if a user's business heavily relies on the `WATCH`
      command, a large number of connections will be required.
      
      Recently we have encountered this situation in our platform, where some
      users consume a significant number of connections when using Redis
      because of `WATCH`.
      
      I hope we can have a way to observe these special use cases and special
      client connections. Here I add a few monitoring metrics:
      
      1. `watching_clients` in `INFO` reply: The number of clients currently
      in the "watching" state.
      2. `total_watched_keys` in `INFO` reply: The total number of keys being
      watched.
      3. `watch` in `CLIENT LIST` reply: The number of keys each client is
      currently watching.
      50d6fe8c
  18. 15 Feb, 2024 1 commit
    • Increase tolerance range to block reprocess tests to avoid timing issues (#13053) · 32f44da5
      Binbin authored
      These tests have all failed in daily CI:
      ```
      *** [err]: Blocking XREADGROUP for stream key that has clients blocked on stream - reprocessing command in tests/unit/type/stream-cgroups.tcl
      Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
      
      *** [err]: BLPOP unblock but the key is expired and then block again - reprocessing command in tests/unit/type/list.tcl
      Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
      
      *** [err]: BZPOPMIN unblock but the key is expired and then block again - reprocessing command in tests/unit/type/zset.tcl
      Expected '1103' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
      ```
      
Increase the range to avoid failures, and improve the comment to be clearer.
The tests were introduced in #13004.
      32f44da5
  19. 12 Feb, 2024 1 commit
    • Fix CLIENT KILL MAXAGE test timing issue (#13047) · 8eeece4a
      Binbin authored
      This test fails occasionally:
      ```
      *** [err]: CLIENT KILL maxAGE will kill old clients in tests/unit/introspection.tcl
      Expected 2 == 1 (context: type eval line 14 cmd {assert {$res == 1}} proc ::test)
      ```
      
This test is very likely to produce a false positive if the execution takes longer than the max age; for example, if the execution time between the sleep and the kill exceeds 1s, rd2 will also be killed due to the max age.

The test can adjust the order of execution statements to increase the probability of passing, but this would still be a timing issue on some slow machines, so we decided to give it a few more chances.

The test was introduced in #12299.
      8eeece4a
  20. 08 Feb, 2024 3 commits
    • Add new DEBUG dict-resizing command to disable the dict resize (#13043) · 493e31e3
      Binbin authored
      The test fails here and there:
      ```
      *** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
      scan didn't handle slot skipping logic.
      ```
      
There are two cases:
1. In the case of passing the test, we use a child process to avoid the dict resize, but it can not completely prevent it, since in dictDelete we still have a chance to trigger the resize (hitting the force ratio). The reason why our test passed before is that the expire dict is still in the rehashing process, so the dictDelete / dictShrinkIfNeeded can not trigger the resize.

2. In the case of failing the test, the expire dict finished the rehashing, so the last dictDelete / dictShrinkIfNeeded triggers the dict resize since it hits the force ratio, so the skipping logic fails.

This PR adds a new DEBUG command to disable the dict resize.
      493e31e3
    • Fix SORT STORE quicklist with the right options (#13042) · 813327b2
      Binbin authored
      We forgot to call quicklistSetOptions after createQuicklistObject,
      in the sort store scenario, we will create a quicklist with default
      fill or compress options.
      
      This PR adds fill and depth parameters to createQuicklistObject to
      specify that options need to be set after creating a quicklist.
      
      This closes #12871.
      
      release notes:
      > Fix lists created by SORT STORE to respect list compression and
      packing configs.
      813327b2
    • Fix crash due to merge of quicklist node introduced by #12955 (#13040) · 1e8dc1da
      debing.sun authored
Fix two crashes introduced by #12955.

When a quicklist node can't be inserted and split, we eventually merge the current node with its neighboring nodes after inserting, and compress the current node and its siblings.

1. When the current node is merged with another node, the current node may become invalid and can no longer be used.

   Solution: let `_quicklistMergeNodes()` return the merged nodes.

2. If the current node is an LZF quicklist node, its recompress will be 1. If the split node can be merged with a sibling node to become head or tail, recompress may cause the head and tail to be compressed, which is not allowed.

    Solution: always recompress to 0 after merging.
      1e8dc1da
  21. 07 Feb, 2024 1 commit
    • Fix dict don't rehash when there is child test (#13035) · 886b1170
      Binbin authored
The reason is the same as in #13016: in #12819, in cron, in addition to trying to shrink, we also try to expand. The dict was expanded by cron before we triggered the bgsave, since we do have enough keys (4096) to hit the ratio.

Before the bgsave, we now only add 4095 keys to avoid this issue.
      886b1170
  22. 06 Feb, 2024 2 commits
    • Prevent LSET command from causing quicklist plain node size to exceed 4GB (#12955) · 1f00c951
      debing.sun authored
      Fix #12864
      
The main reason for this crash is that when replacing an element of a quicklist packed node with the lpReplace() method, if the final size is larger than 4GB, lpReplace() fails and returns NULL, causing `node->entry` to be incorrectly set to NULL.

Since the inserted data is not a large element, we can't just replace it like a large element, first quicklistInsertAfter() and then quicklistDelIndex(), because the current node may be merged and invalidated in quicklistInsertAfter().

The solution of this PR: when replacing a node fails (the listpack exceeds 4GB), split the current node, create a new node to put in the middle, and try to merge them. This is the same as inserting a large element. In the worst case, its size will not exceed 4GB.
      1f00c951
    • Re-compute active_defrag_running after adjusting defrag configurations (#13020) · 13bd3643
      Binbin authored
Currently, once active defrag starts, we can not adjust active_defrag_running downwards. This is because active_defrag_running is dynamically computed based on the fragmentation; we think we should not lower the effort when the fragmentation drops.

However, we need to note that active_defrag_running is also dynamically computed based on configurations. In this case, we are not respecting cycle-min or cycle-max. Some people may realize halfway through that defrag consumes a lot and want to adjust it.

Previously we could only turn off activedefrag and then turn it on again to adjust active_defrag_running downwards. So in this PR, when an active defrag configuration change is made, we re-compute it.
      
      These configuration items are:
      - active-defrag-cycle-min
      - active-defrag-cycle-max
      - active-defrag-threshold-upper
      13bd3643
  23. 05 Feb, 2024 2 commits
    • Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822) · 8cd62f82
      guybe7 authored
      # Description
Gather most of the scattered `redisDb`-related code from the per-slot dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e. a class that represents an array of dictionaries.
      
      # Motivation
      The main motivation is code cleanliness, the idea of using an array of
      dictionaries is very well-suited to becoming a self-contained data
      structure.
      This allowed cleaning some ugly code, among others: loops that run twice
      on the main dict and expires dict, and duplicate code for allocating and
      releasing this data structure.
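
For illustration, a toy standalone sketch of the general shape (hypothetical names and fields; the real API is in kvstore.c/kvstore.h):

```c
/* Illustrative toy "kvstore": an array of dicts behind one handle, with
 * cross-dict bookkeeping (total key count, lazily allocated per-slot dicts)
 * kept inside the container. Hypothetical names, not the real kvstore API. */
#include <stdio.h>
#include <stdlib.h>

typedef struct toydict { unsigned long used; } toydict;   /* stand-in for dict */

typedef struct toykvstore {
    toydict **dicts;           /* one dict per slot (a single one non-clustered) */
    int num_dicts;
    unsigned long key_count;   /* maintained incrementally, not by looping */
} toykvstore;

static toykvstore *kvstoreCreate(int num_dicts) {
    toykvstore *kvs = calloc(1, sizeof(*kvs));
    kvs->num_dicts = num_dicts;
    kvs->dicts = calloc(num_dicts, sizeof(toydict *));
    return kvs;                /* per-slot dicts are allocated lazily on first write */
}

static toydict *kvstoreGetDict(toykvstore *kvs, int didx) {
    if (!kvs->dicts[didx]) kvs->dicts[didx] = calloc(1, sizeof(toydict));
    return kvs->dicts[didx];
}

static void kvstoreAddKey(toykvstore *kvs, int didx) {
    kvstoreGetDict(kvs, didx)->used++;
    kvs->key_count++;          /* O(1) total; no loop over 16384 slots */
}

int main(void) {
    toykvstore *kvs = kvstoreCreate(16384);
    kvstoreAddKey(kvs, 866);
    kvstoreAddKey(kvs, 866);
    printf("total=%lu slot866=%lu\n", kvs->key_count, kvstoreGetDict(kvs, 866)->used);
    return 0;
}
```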
      
      # Notes
      1. This PR reverts the part of https://github.com/redis/redis/pull/12848
      where the `rehashing` list is global (handling rehashing `dict`s is
      under the responsibility of `kvstore`, and should not be managed by the
      server)
      2. This PR also replaces the type of `server.pubsubshard_channels` from
      `dict**` to `kvstore` (original PR:
      https://github.com/redis/redis/pull/12804). After that was done,
      server.pubsub_channels was also chosen to be a `kvstore` (with only one
      `dict`, which seems odd) just to make the code cleaner by making it the
      same type as `server.pubsubshard_channels`, see
      `pubsubtype.serverPubSubChannels`
3. The keys and expires kvstores are currently configured to allocate the individual dicts only when the first key is added (unlike before, when they were allocated in advance), but they won't release them when the last key is deleted.
      
      Worth mentioning that due to the recent change the reply of DEBUG
      HTSTATS changed, in case no keys were ever added to the db.
      
      before:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ```
      
      after:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      [Expires HT]
      ```
      8cd62f82
    • Fix active expire timeout when db done the scanning (#13030) · f20774ec
      Binbin authored
When db->expires_cursor==0, it means the DB has finished the scanning, and we should exit the loop to avoid useless scanning.

It is easy to see the active expire timeout in the modified test. For example, let's assume that there is only 1 expired key in the DB, and the size / buckets ratio is less than 1%, which means that we will skip it in isExpiryDictValidForSamplingCb, and the return value of expires_cursor is 0.
      
      Because `data.sampled == 0` is always true, so `repeat` is also
      always true, we will keep scanning the DB, but every time it is
      skipped by the previous judgment (expires_cursor = 0), until the
      timelimit is finally exhausted.
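
For illustration, a condensed standalone sketch of the loop shape and the added exit condition (hypothetical names; the real loop is activeExpireCycle):

```c
/* Illustrative: the per-DB expire sampling loop should also stop when the
 * scan cursor returns 0 (the whole expires dict has been scanned); before the
 * fix, a DB whose buckets are mostly empty kept "scanning" until the global
 * time limit was exhausted because nothing was ever sampled. */
#include <stdio.h>

struct toydb { unsigned long expires_cursor; };

/* Stand-in for the kvstore scan: skips sampling (too many empty buckets)
 * and reports that the scan of this DB is complete by returning cursor 0. */
static unsigned long scan_some(struct toydb *db, int *sampled) {
    (void)db;
    *sampled = 0;
    return 0;
}

static void expire_db(struct toydb *db) {
    int sampled, iterations = 0;
    do {
        db->expires_cursor = scan_some(db, &sampled);
        iterations++;
        /* The added `expires_cursor != 0` condition is what lets us leave
         * instead of spinning until the time limit. */
    } while (sampled == 0 && db->expires_cursor != 0 && iterations < 1000);
    printf("done after %d iteration(s)\n", iterations);
}

int main(void) {
    struct toydb db = { .expires_cursor = 1 };
    expire_db(&db);   /* prints: done after 1 iteration(s) */
    return 0;
}
```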
      f20774ec
  24. 31 Jan, 2024 3 commits
    • Fix dict resize allow test (#13016) · 9a7d3118
      Binbin authored
CI reports this failure:
      ```
      *** [err]: Don't rehash if used memory exceeds maxmemory after rehash in tests/unit/maxmemory.tcl
      Expected '4098' to equal or match '4002'
      
      WARNING: the new maxmemory value set via CONFIG SET (1176088) is smaller than the current memory usage (1231083)
      ```
      
It can be seen from the log that used_memory changed before we set maxmemory. The reason is that in #12819, in cron, in addition to trying to shrink, we also try to expand. The dict was expanded by cron before we set maxmemory, causing the test to fail.

Before setting maxmemory, we now only add 4095 keys to avoid triggering a resize.
      9a7d3118
    • Fix module assertion crash when timer and timeout are unlocked in the same event loop (#13015) · 6016973a
      Binbin authored
When we use a timer to unblock a client in a module, if the timer period and the block timeout are very close, they will unblock the client in the same event loop, and this triggers the assertion. The reason is that in moduleBlockedClientTimedOut we protect against re-processing, so we don't actually call updateStatsOnUnblock (see #12817), so we are not able to reset c->duration.

The root cause is that unblockClientOnTimeout() didn't realize that bc had already been unblocked. We add a function to the module to determine whether bc is blocked, and then use it in unblockClientOnTimeout() to exit early.
      
      There is the stack:
      ```
      beforeSleep
      blockedBeforeSleep
      handleBlockedClientsTimeout
      checkBlockedClientTimeout
      unblockClientOnTimeout
      unblockClient
      resetClient
      -- assertion, crash the server
      'c->duration == 0' is not true
      ```
      6016973a
    • Fix module unblock crash due to no timeout_callback (#13017) · 74a6e48a
      Binbin authored
The block timeout is passed in the test case, but we do not pass in the timeout_callback, and it will crash when unblocking. In this case, in moduleBlockedClientTimedOut we will check timeout_callback.
      There is the stack:
      ```
      beforeSleep
      blockedBeforeSleep
      handleBlockedClientsTimeout
      checkBlockedClientTimeout
      unblockClientOnTimeout
      replyToBlockedClientTimedOut
      moduleBlockedClientTimedOut
      -- timeout_callback is NULL, invalidFunctionWasCalled
      bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
      ```
      74a6e48a
  25. 30 Jan, 2024 4 commits
    • Add novalues option to command HSCAN. (#12765) · f469dd8c
      Chen Tianjie authored
      
      
Add a way to HSCAN a hash key, and get only the field names.
      Command syntax is now:
      ```
      HSCAN key cursor [MATCH pattern] [COUNT count] [NOVALUES]
      ```
      when `NOVALUES` is on, the command will only return keys in the hash.
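
For illustration, a minimal hiredis client using the new option (assumes a server build that already supports NOVALUES; error handling trimmed):

```c
/* Illustrative hiredis usage of HSCAN ... NOVALUES. */
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) { fprintf(stderr, "connect failed\n"); return 1; }

    freeReplyObject(redisCommand(c, "HSET h f1 v1 f2 v2"));

    /* The reply keeps the usual SCAN shape: [next-cursor, array], but with
     * NOVALUES the array holds only field names, no values. */
    redisReply *r = redisCommand(c, "HSCAN h 0 NOVALUES");
    if (r && r->type == REDIS_REPLY_ARRAY && r->elements == 2) {
        redisReply *fields = r->element[1];
        for (size_t i = 0; i < fields->elements; i++)
            printf("field: %s\n", fields->element[i]->str);
    }
    freeReplyObject(r);
    redisFree(c);
    return 0;
}
```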
      
      ---------
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      f469dd8c
    • Implement `CLIENT KILL MAXAGE <maxage>` (#12299) · 24f6d08b
      Slava Koyfman authored
      
      
      Adds an ability to kill clients older than a specified age.
      
Also, fixed the age calculation in `catClientInfoString` to use `commandTimeSnapshot` instead of the old `server.unixtime`, and added missing documentation for `CLIENT KILL ID` to the output of `CLIENT help`.
      
      ---------
Co-authored-by: Oran Agra <oran@redislabs.com>
      24f6d08b
    • fix dict rehash tests introduced by #12802 broken by #12819 (#13009) · 7c9f41b5
      Oran Agra authored
The tests consistently fail on timeout (a sleep that's too short). They now take more time because in #12819 we iterate over all dicts, not just non-empty ones. The tests passed the PR's CI because it skips the `slow` tag, which might have been misplaced, but now it is probably required.
With the fix, the tests take quite a lot of time:
      ```
      [ok]: Redis can trigger resizing (1860 ms)
      [ok]: Redis can rewind and trigger smaller slot resizing (744 ms)
      ```
      before #12819:
      ```
      [ok]: Redis can trigger resizing (309 ms)
      [ok]: Redis can rewind and trigger smaller slot resizing (295 ms)
      ```
      
      failure:
      https://github.com/redis/redis/actions/runs/7704158180/job/20995931735
      ```
      *** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
      scan didn't handle slot skipping logic.
      *** [err]: Redis can trigger resizing in tests/unit/other.tcl
      Expected '[Dictionary HT]
      Hash table 0 stats (main hash table):
       table size: 128
       number of elements: 5
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ' to match '*table size: 8*' (context: type eval line 29 cmd {assert_match "*table size: 8*" [r debug HTSTATS 0]} proc ::test) 
      *** [err]: Redis can rewind and trigger smaller slot resizing in tests/unit/other.tcl
      Expected '[Dictionary HT]
      Hash table 0 stats (main hash table):
       table size: 256
       number of elements: 10
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ' to match '*table size: 16*' (context: type eval line 27 cmd {assert_match "*table size: 16*" [r debug HTSTATS 0]} proc ::test) 
      ```
      7c9f41b5
    • Fix blocking commands timeout is reset due to re-processing command (#13004) · 492021db
      Binbin authored
In #11012, we reprocess the command when a client is unblocked on keys. In some blocking commands, for example in the XREADGROUP BLOCK scenario, because of this re-processing we recalculate the block timeout, causing the blocking time to be reset.

This commit adds a new CLIENT_REPROCESSING_COMMAND client flag that explicitly lets the command know that it is being re-processed; later, in blockForKeys, we will not reset the timeout.
      
      Affected BLOCK cases: 
      - list / zset / stream, added test cases for each.
      
      Unaffected cases:
      - module (never re-process the commands).
      - WAIT / WAITAOF (never re-process the commands).
      
      Fixes #12998.
      492021db
  26. 29 Jan, 2024 2 commits
    • Optimize resizing hash table to resize not only non-empty dicts. (#12819) · af7ceeb7
      Chen Tianjie authored
The function `tryResizeHashTables` only attempts to shrink the dicts that have keys (a change from #11695); this was a serious problem until the change in #12850, since it meant that if all keys are deleted, we won't shrink the dict.
But still, both dictShrink and dictExpand may be blocked by a fork child process, therefore the cron job needs to perform both dictShrink and dictExpand, for not just non-empty dicts, but all dicts in DBs.

What this PR does:

1. Try to resize all dicts in DBs (not just non-empty ones, as it was since #12850)
2. Handle both shrink and expand (not just shrink, as it was since forever); a simplified sketch of this check follows the list.
3. Refactor some APIs about dict resizing (get rid of `htNeedsShrink` and `dictShrinkToFit`, and expose `dictShrinkIfNeeded` and `dictExpandIfNeeded`, which already contain all the code of the functions we got rid of, to make the APIs neater)
      4. In the `Don't rehash if redis has child process` test, now that cron
      would do resizing, we no longer need to write to DB after the child
      process got killed, and can wait for the cron to expand the hash table.
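
For illustration, a simplified standalone sketch of the cron-side check described in points 1-2 (illustrative thresholds, not the exact dict.c constants):

```c
/* Illustrative per-dict check the resize cron performs: expand when the table
 * is at least 100% full, shrink when it is mostly empty, and skip both while
 * a fork child exists (resizing would bloat copy-on-write memory). */
#include <stdio.h>

struct toyht { unsigned long used, size; };

static int has_active_child = 0;                    /* e.g. an RDB/AOF fork running */
static const unsigned long MIN_FILL_PERCENT = 10;   /* illustrative threshold */

static void resize_if_needed(struct toyht *t) {
    if (has_active_child) return;                   /* both paths blocked by the child */
    if (t->used >= t->size) {
        printf("expand: used=%lu size=%lu\n", t->used, t->size);
    } else if (t->size > 4 && t->used * 100 < t->size * MIN_FILL_PERCENT) {
        printf("shrink: used=%lu size=%lu\n", t->used, t->size);
    }
}

int main(void) {
    struct toyht grown  = { .used = 5000, .size = 4096 };
    struct toyht sparse = { .used = 3,    .size = 4096 };
    resize_if_needed(&grown);    /* would expand */
    resize_if_needed(&sparse);   /* would shrink */
    return 0;
}
```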
      af7ceeb7
    • Add RM_TryCalloc() and RM_TryRealloc() (#12985) · c5273cae
      Ozan Tezcan authored
      Modules may want to handle allocation failures gracefully. Adding
      RM_TryCalloc() and RM_TryRealloc() for it.
      RM_TryAlloc() was added before:
      https://github.com/redis/redis/pull/10541
      c5273cae