1. 20 Feb, 2024 4 commits
    • Defragger improvements around large bins (#12996) · f6785df6
      debing.sun authored
      
      
      Implement #12963
      
      ## Changes
      1. Large bins don't have external fragmentation, or at least are
      non-defraggable, so we should ignore the effect of large bins when
      measuring fragmentation and only measure the fragmentation of small
      bins. This affects both the allocator_frag* metrics and the
      active-defrag trigger.
      2. Add INFO metrics for `muzzy` memory, which is memory returned to
      the OS but still shown as RSS until the OS reclaims it.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • xinfo-stream add minimum to seen-time, skip logreqres in fuzzer (#13056) · ca5cac99
      Binbin authored
      
      
      Recently I saw in CI that reply-schemas-validator fails here:
      ```
      Failed validating 'minimum' in schema[1]['properties']['groups']['items']['properties']['consumers']['items']['properties']['active-time']:
          {'description': 'Last time this consumer was active (successful '
                          'reading/claiming).',
           'minimum': 0,
           'type': 'integer'}
      
      On instance['groups'][0]['consumers'][0]['active-time']:
          -1729380548878722639
      ```
      
      The reason is that in the fuzzer we may restore a corrupted
      active-time, which will cause the reply schema CI to fail.
      
      The fuzzer can corrupt the state in many places, which can trigger
      bugs that mess up the reply, so we decided to skip logreqres.
      
      Also, seen-time is the same type as active-time, so we add the same
      minimum to it.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • Fix watched client test timing issue caused by late close (#13062) · 3c2ea1ea
      Binbin authored
      There is a timing issue in the test: the close may arrive late, or
      freeClientAsync may free the client asynchronously, which leads to
      errors in the watching_clients statistics, since we only unwatch all
      keys when we truly free the client.
      
      Add a wait here to avoid this problem. Also fixed some outdated
      comments I saw. The test was introduced in #12966.
    • Fix timing issue in blockedclient test (#13071) · 4e3be944
      Binbin authored
      We can see that the elapsed time here happens to equal
      busy_time_limit, causing the test to fail:
      ```
      [err]: RM_Call from blocked client in tests/unit/moduleapi/blockedclient.tcl
      Expected '50' to be more than '50' (context: type eval line 26 cmd {assert_morethan [expr [clock clicks -milliseconds]-$start] $busy_time_limit} proc ::test)
      ```
      
      It is reasonable for them to be equal, so equality is now allowed.
      It should be noted that the earlier `Busy module command` test
      already used assert_morethan_equal, so this one was simply missed
      at the time.
  2. 19 Feb, 2024 3 commits
    • add -fno-omit-frame-pointer to default compilation flags (#12973) · fc3a68d8
      judeng authored
      Currently redis is built with O3-level optimization, which removes
      the frame pointer from the target binary.
      
      Ever since gcc started optimizing at O1 and above, the frame pointer
      has been omitted by default to improve performance: it frees the RBP
      register and reduces push/pop instructions. But it makes it difficult
      for us to observe the running status of the program. For example, the
      perf tool cannot be used effectively, and neither can modern eBPF
      tools such as bcc/memleak.
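      For context, frame pointers could previously be restored manually at
      build time; a minimal sketch (`REDIS_CFLAGS` is the Makefile's
      append hook, used here purely for illustration):
      ```
      make REDIS_CFLAGS="-fno-omit-frame-pointer"
      ```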
    • Adds a README to the command JSON files (#13066) · 6df42df2
      guybe7 authored
      Add a README about the command JSON folder: what it does, and who
      should (not) use it.
      see discussion
      https://github.com/redis/redis/issues/9359#issuecomment-1936420698
      
      
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Binbin <binloveplay1314@qq.com>
    • Calculate the incremental rehash time more precisely (#13063) · 8876d264
      zhaozhao.zz authored
      In the `databasesCron()`, the time consumed by
      `kvstoreIncrementallyRehash()` is used to calculate the exit condition.
      However, within `kvstoreIncrementallyRehash()`, the loop first checks
      for timeout before performing rehashing. Therefore, the time for the
      last rehash isn't accounted for, making the consumed time inaccurate. We
      need to precisely calculate all the time spent on rehashing.
      Additionally, the time allocated to `kvstoreIncrementallyRehash()`
      should be the remaining time, which is
      `INCREMENTAL_REHASHING_THRESHOLD_US` minus the already consumed
      `elapsed_us`.
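      For illustration, a generic C sketch of the corrected pattern (not
      the actual kvstore code; names and structure are assumptions):
      ```C
      #include <stdint.h>
      #include <time.h>
      
      static uint64_t now_us(void) {
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return (uint64_t)ts.tv_sec * 1000000 + (uint64_t)ts.tv_nsec / 1000;
      }
      
      /* Spend at most `budget_us` on incremental work and return the time
       * actually consumed, including the final step that crosses the
       * deadline (the part that was previously unaccounted for). */
      uint64_t incrementally_rehash(uint64_t budget_us) {
          uint64_t start = now_us();
          do {
              /* rehash a few buckets here */
          } while (now_us() - start < budget_us);
          return now_us() - start;
      }
      
      /* The cron caller passes only what is left of its own threshold:
       *   elapsed_us += incrementally_rehash(THRESHOLD_US - elapsed_us); */
      ```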
  3. 18 Feb, 2024 3 commits
    • AOF_FSYNC_EVERYSEC higher resolution, change aof_last_fsync and... · 9103ccc3
      Binbin authored
      
      AOF_FSYNC_EVERYSEC higher resolution, change aof_last_fsync and aof_flush_postponed_start to use mstime (#13041)
      
      Currently aof_last_fsync uses a low-resolution unixtime, which is
      really bad: it checks whether the absolute number of (full) seconds
      changed by one. Depending on which side of the second barrier it
      falls, we can get very different results.
      
      This PR changes the resolution to milliseconds instead of complete
      seconds.
      
      In cases where the event loop cycles are short and rapid (e.g.
      running many fast commands with a short pipeline, or a high `hz`
      config), this change will not make much difference, since either way
      we'll be quick to detect that we're in a "new second", and it's
      likely that these fsyncs will always be executed close to the
      second-switch barrier.
      
      But in cases of rare or slow event loop cycles (e.g. slow commands,
      or a very low rate of traffic to redis, and a low `hz`), with the
      old code we could easily have over 1.5 seconds between fsyncs in
      some cases, and less than 0.5 in others.
      
      see discussion in #8612
      
      This PR also handles aof_flush_postponed_start; the damage there is
      smaller since the threshold is 2 seconds, not 1.
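      A minimal sketch of the difference in trigger conditions
      (illustrative, not the actual aof.c code):
      ```C
      #include <time.h>
      
      /* Old: fires whenever the integer second changes, so the real
       * interval between fsyncs can be anywhere from just above 0 to
       * almost 2 seconds. */
      int fsync_due_sec(time_t now_s, time_t last_fsync_s) {
          return now_s > last_fsync_s;
      }
      
      /* New: fires once a full second has actually elapsed. */
      int fsync_due_ms(long long now_ms, long long last_fsync_ms) {
          return now_ms - last_fsync_ms >= 1000;
      }
      ```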
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • redis-cli - fix sscanf incorrect return-value check warnings (#13059) · dd92dd8f
      Binbin authored
      From CodeQL: The result of scanf is only checked against 0, but
      it can also return EOF.
      
      Reported in https://github.com/redis/redis/security/code-scanning/38.
      Reported in https://github.com/redis/redis/security/code-scanning/39.
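      A minimal illustration of the class of fix (not the exact redis-cli
      code):
      ```C
      #include <stdio.h>
      
      /* sscanf returns the number of fields converted, or EOF (-1) on
       * input failure. Comparing `!= 1` covers both the failed-match (0)
       * and EOF cases; comparing `== 0` alone misses EOF. */
      int parse_int(const char *s, int *out) {
          return sscanf(s, "%d", out) == 1 ? 0 : -1;
      }
      ```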
    • Add metrics for WATCH (#12966) · 50d6fe8c
      zhaozhao.zz authored
      Redis has some special commands that mark the client's state, such as
      `subscribe` and `blpop`, which mark the client as `CLIENT_PUBSUB` or
      `CLIENT_BLOCKED`, and we have metrics for the special use cases.
      
      However, there are also other special commands, like `WATCH`, which,
      although they do not have a specific flag, should also be considered
      stateful client types. For stateful clients, in many scenarios the
      connections cannot be shared in a connection pool. For example,
      whenever the `WATCH` command is executed, a new connection is
      required to put the client into the "watch state", because the
      watched keys are stored in the client.
      
      If different business logic requires watching different keys,
      separate connections must be used; otherwise, there will be
      contamination. This also means that if a user's business heavily
      relies on the `WATCH` command, a large number of connections will
      be required.
      
      Recently we have encountered this situation in our platform, where some
      users consume a significant number of connections when using Redis
      because of `WATCH`.
      
      I hope we can have a way to observe these special use cases and special
      client connections. Here I add a few monitoring metrics:
      
      1. `watching_clients` in `INFO` reply: The number of clients currently
      in the "watching" state.
      2. `total_watched_keys` in `INFO` reply: The total number of keys being
      watched.
      3. `watch` in `CLIENT LIST` reply: The number of keys each client is
      currently watching.
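      For example, a hypothetical session (the values and surrounding
      output are illustrative; only the three metric names above come
      from this change):
      ```
      127.0.0.1:6379> INFO clients
      # Clients
      ...
      watching_clients:2
      total_watched_keys:5
      127.0.0.1:6379> CLIENT LIST
      id=7 addr=127.0.0.1:50212 ... watch=3 ...
      ```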
  4. 15 Feb, 2024 3 commits
    • Minor optimization in kvstoreDictAddRaw when dict exists (#13054) · c8548737
      Binbin authored
      Usually, the probability that a dict exists is much greater than the
      probability that it does not. In kvstoreDictAddRaw, we call
      kvstoreGetDict multiple times. Based on this assumption, we change
      createDictIfNeeded into a get-or-create function:
      ```
      before:
      dict exist: 2 kvstoreGetDict
      dict non-exist: 2 kvstoreGetDict
      
      after:
      dict exist: 1 kvstoreGetDict
      dict non-exist: 3 kvstoreGetDict
      ```
      
      A possible 3% performance improvement was observed.
      
      In addition, some typos/comments I saw have been cleaned up.
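      For illustration, a get-or-create in C might look like this (a
      sketch under assumed names; not the actual kvstore code):
      ```C
      #include <stdlib.h>
      
      typedef struct dict { int used; } dict; /* stand-in for the real dict */
      
      static dict *dict_create(void) { return calloc(1, sizeof(dict)); }
      
      /* One access on the hot path (dict already exists) instead of an
       * exists-check followed by a second lookup; the extra work moves to
       * the rare path where the dict must be created first. */
      static dict *kvstore_get_or_create_dict(dict **slots, int didx) {
          if (slots[didx] == NULL) slots[didx] = dict_create();
          return slots[didx];
      }
      ```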
    • zunionInterDiffGenericCommand use ztrycalloc to avoid OOM panic (#13052) · 063de675
      Binbin authored
      In low memory situations, sending a big number of arguments (sets)
      may cause an OOM panic. Use ztrycalloc, like we do in LCS and
      XAUTOCLAIM, and fail gracefully.
      
      This change affects the following commands: ZUNION, ZINTER, ZDIFF,
      ZUNIONSTORE, ZINTERSTORE, ZDIFFSTORE, ZINTERCARD.
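      A sketch of the try-allocate pattern the message refers to (generic
      C; ztrycalloc itself is redis's allocator wrapper that returns NULL
      on OOM instead of panicking):
      ```C
      #include <stdlib.h>
      
      /* Stand-in for ztrycalloc: NULL on OOM rather than aborting. */
      static void *try_calloc(size_t n, size_t size) { return calloc(n, size); }
      
      int zset_op(long setnum) {
          void *src = try_calloc((size_t)setnum, sizeof(void *));
          if (src == NULL) return -1; /* reply with an error, don't panic */
          /* ... perform the union/intersection/diff ... */
          free(src);
          return 0;
      }
      ```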
    • Increase tolerance range to block reprocess tests to avoid timing issues (#13053) · 32f44da5
      Binbin authored
      These tests have all failed in daily CI:
      ```
      *** [err]: Blocking XREADGROUP for stream key that has clients blocked on stream - reprocessing command in tests/unit/type/stream-cgroups.tcl
      Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
      
      *** [err]: BLPOP unblock but the key is expired and then block again - reprocessing command in tests/unit/type/list.tcl
      Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
      
      *** [err]: BZPOPMIN unblock but the key is expired and then block again - reprocessing command in tests/unit/type/zset.tcl
      Expected '1103' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
      ```
      
      Increase the range to avoid failures, and improve the comment to be
      clearer. The tests were introduced in #13004.
  5. 14 Feb, 2024 1 commit
    • Do not include gossip about receiver in cluster messages (#13046) · c1d2ac2a
      Sankar authored
      The receiver does not update any of its cluster state based on gossip
      about itself. This commit explicitly avoids sending or processing gossip
      about the receiver.
      
      Currently cluster bus gossips include 10% of nodes in the cluster with a
      minimum of 3 nodes. For up to 30 node clusters, this commit makes sure
      that 1/3 of the gossip (1 out of 3 gossips) is never discarded. This
      should help with relatively faster convergence of cluster state in
      general.
  6. 12 Feb, 2024 3 commits
  7. 11 Feb, 2024 1 commit
    • Fix the failure of defrag test under 32-bit (#13013) · 676f27ac
      debing.sun authored
      Fail CI:
      https://github.com/redis/redis/actions/runs/7837608438/job/21387609715
      
      
      
      ## Why the defragment tests only failed under 32-bit
      
      First of all, under 32-bit jemalloc will allocate more small bins
      and fewer large bins, which also leads to more external
      fragmentation. Therefore, the fragmentation ratio is higher in
      32-bit than in 64-bit, so the defragment tests (`Active defrag eval
      scripts: cluster` and `Active defrag big keys: cluster`) always
      fail in 32-bit.
      
      ## Why the defragment tests only failed with cluster
      The following is the result of the `Active defrag eval scripts:
      cluster` test.
      
      1) Before #11695, the fragmentation ratio was 3.11%.
      
      2) After #11695, the fragmentation ratio grew to 4.58%.
      Since we are using a per-slot dictionary to manage slots, we only
      defragment the contents of these dictionaries (keys, values), but
      not the dictionaries' own struct and ht_table, which means that
      frequent shrinking and expanding of the dictionaries creates more
      fragmentation.
      
      3) After #12850 and #12948, in cluster mode, a large number of
      cluster slot dicts will be shrunk, creating additional
      fragmentation, and the dictionaries are not defragged.
      
      ## Solution
      * Add defragmentation of the per-slot dictionary's own structures, dict
      struct and ht_table.
      
      ## Other change
      * Increase floating point print precision of `frags` and `rss` in debug
      logs for defrag
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
  8. 08 Feb, 2024 3 commits
    • Add new DEBUG dict-resizing command to disable the dict resize (#13043) · 493e31e3
      Binbin authored
      The test fails here and there:
      ```
      *** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
      scan didn't handle slot skipping logic.
      ```
      
      There are two cases:
      1. In the passing case, we use a child process to avoid the dict
      resize, but this cannot completely prevent it, since in dictDelete
      we still have a chance to trigger the resize (hitting the force
      ratio). The reason our test passed before is that the expire dict
      was still in the rehashing process, so in dictDelete,
      dictShrinkIfNeeded could not trigger the resize.
      
      2. In the failing case, the expire dict has finished rehashing, so
      the last dictDelete's dictShrinkIfNeeded triggers the dict resize
      since it hits the force ratio, and the skipping logic fails.
      
      This PR adds a new DEBUG command to disable the dict resize.
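      Presumably used like other DEBUG toggles; an illustrative session
      (the 0/1 argument convention is an assumption):
      ```
      127.0.0.1:6379> DEBUG dict-resizing 0
      OK
      ... run the resize-sensitive part of the test ...
      127.0.0.1:6379> DEBUG dict-resizing 1
      OK
      ```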
    • Fix SORT STORE quicklist with the right options (#13042) · 813327b2
      Binbin authored
      We forgot to call quicklistSetOptions after createQuicklistObject;
      in the SORT STORE scenario, we would create a quicklist with default
      fill and compress options.
      
      This PR adds fill and depth parameters to createQuicklistObject, so
      the options are set as the quicklist is created.
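      A sketch of the signature change described above (parameter names
      assumed):
      ```C
      typedef struct robj robj; /* opaque here; the real type lives in server.h */
      
      /* Before: createQuicklistObject(void) built a list with default
       * options, and callers had to remember a follow-up
       * quicklistSetOptions() call, which SORT STORE missed. After this
       * PR, the options are parameters: */
      robj *createQuicklistObject(int fill, int depth);
      ```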
      
      This closes #12871.
      
      release notes:
      > Fix lists created by SORT STORE to respect list compression and
      packing configs.
    • Fix crash due to merge of quicklist node introduced by #12955 (#13040) · 1e8dc1da
      debing.sun authored
      Fix two crashes introduced by #12955
      
      When a quicklist node can't be inserted into and is split, we
      eventually merge the current node with its neighboring nodes after
      inserting, and compress the current node and its siblings.
      
      1. When the current node is merged with another node, the current
      node may become invalid and can no longer be used.
      
         Solution: let `_quicklistMergeNodes()` return the merged nodes.
      
      2. If the current node is an LZF-compressed quicklist node, its
      recompress flag will be set to 1. If the split node can then be
      merged with a sibling node to become the head or tail, recompress
      may cause the head and tail to be compressed, which is not allowed.
      
          Solution: always reset recompress to 0 after merging.
  9. 07 Feb, 2024 2 commits
    • Fix heap-use-after-free when pubsubshard_channels became NULL (#13038) · 81666a65
      Binbin authored
      
      
      After the fix for #13033, address sanitizer reports this
      heap-use-after-free error: when the pubsubshard_channels dict
      becomes empty, we delete the dict, and dictReleaseIterator calls
      dictResetIterator, which uses the freed dict, triggering the error.
      
      This PR introduces a new struct, kvstoreDictIterator, to wrap
      dictIterator, and replaces the original dict iterator with the new
      kvstore dict iterator.
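      A sketch of such a wrapper (the field layout is an assumption; the
      real definition lives in kvstore.c):
      ```C
      typedef struct dictIterator { void *d; } dictIterator; /* stand-in, see dict.h */
      typedef struct kvstore kvstore;                        /* opaque */
      
      /* Wrapping the inner iterator with its kvstore context lets the
       * release code check whether the dict still exists before
       * touching it. */
      typedef struct kvstoreDictIterator {
          kvstore *kvs;     /* owning kvstore */
          long long didx;   /* index of the dict being iterated */
          dictIterator di;  /* embedded inner dict iterator */
      } kvstoreDictIterator;
      ```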
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: guybe7 <guy.benoish@redislabs.com>
    • Fix dict don't rehash when there is child test (#13035) · 886b1170
      Binbin authored
      The reason is the same as #13016: in #12819, in cron, in addition to
      trying to shrink, we are also trying to expand. The dict was
      expanded by cron before we triggered the bgsave, since we had
      enough keys (4096) to hit the ratio.
      
      Now we only add 4095 keys before the bgsave to avoid this issue.
  10. 06 Feb, 2024 6 commits
    • Prevent LSET command from causing quicklist plain node size to exceed 4GB (#12955) · 1f00c951
      debing.sun authored
      Fix #12864
      
      The main reason for this crash is that when replacing an element of
      a quicklist packed node with the lpReplace() method, if the final
      size is larger than 4GB, lpReplace() fails and returns NULL, causing
      `node->entry` to be incorrectly set to NULL.
      
      Since the inserted data is not a large element, we can't just
      replace it like a large element (first quicklistInsertAfter() and
      then quicklistDelIndex()), because the current node may be merged
      and invalidated in quicklistInsertAfter().
      
      The solution of this PR:
      When replacing a node fails (listpack exceeds 4GB), split the current
      node, create a new node to put in the middle, and try to merge them.
      This is the same as inserting a large element.
      In the worst case, its size will not exceed 4GB.
    • Improve error handling in connSocketBlockingConnect for various connection failures (#13008) · 0777dc78
      Gann authored
      This commit addresses a problem in connSocketBlockingConnect where
      different types of connection failures, including timeouts and other
      errors, were not consistently handled. Previously, the function did not
      return C_ERR immediately after detecting a connection failure, which
      could lead to inconsistent states and misinterpretation of the
      connection status.
      
      With this update, connSocketBlockingConnect now correctly returns C_ERR
      upon encountering any connection error, ensuring that all types of
      connection failures are handled consistently and the behavior of the
      function aligns with expected outcomes in case of connection issues.
      
      Closes #12900
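      A generic sketch of the control-flow shape described above
      (stand-in code, not the actual conn.c):
      ```C
      #include <poll.h>
      #include <errno.h>
      
      #define C_OK   0
      #define C_ERR -1
      
      /* Every failure path returns immediately, so timeouts, poll errors
       * and socket-level errors are all handled consistently. */
      int blocking_connect_wait(int fd, int timeout_ms) {
          struct pollfd p = { .fd = fd, .events = POLLOUT };
          int rc = poll(&p, 1, timeout_ms);
          if (rc == 0) { errno = ETIMEDOUT; return C_ERR; }  /* timed out */
          if (rc < 0) return C_ERR;                          /* poll failed */
          if (p.revents & (POLLERR | POLLHUP)) return C_ERR; /* connect failed */
          return C_OK;
      }
      ```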
    • Fix invalid dictNext usage when pubsubshard_channels became empty (#13033) · 80965154
      Binbin authored
      After #12822, when pubsubshard_channels becomes empty,
      kvstoreDictDelete will delete the dict (this is currently the only
      code path that deletes dicts that become empty), and in the next
      loop iteration we make an invalid call to dictNext.
      
      Now, after the dict becomes empty, we break out of the loop without
      calling dictNext.
    • Re-compute active_defrag_running after adjusting defrag configurations (#13020) · 13bd3643
      Binbin authored
      Currently, once active defrag starts, we cannot adjust
      active_defrag_running downwards. This is because
      active_defrag_running is dynamically computed based on the
      fragmentation, and we think we should not lower the effort when the
      fragmentation drops.
      
      However, active_defrag_running is also dynamically computed based on
      the configuration, and in this case we are not respecting cycle-min
      or cycle-max. Some people may realize halfway through that defrag
      consumes a lot and want to tune it down.
      
      Previously we could only turn off activedefrag and then turn it on
      again to adjust active_defrag_running downwards. So in this PR, when
      an active defrag configuration change is made, we re-compute it.
      
      These configuration items are:
      - active-defrag-cycle-min
      - active-defrag-cycle-max
      - active-defrag-threshold-upper
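      With this change, lowering the cap takes effect immediately, e.g.
      (illustrative session):
      ```
      127.0.0.1:6379> CONFIG SET active-defrag-cycle-max 25
      OK
      ```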
    • Minor optimization for expire dict in defragKey (#13027) · 87eaf119
      Binbin authored
      Since a DB in cluster mode is now divided into 16384 dicts, here we
      directly check kvstoreDictSize instead of kvstoreSize, which has a
      higher probability of saving us the lookup.
      
      The other change is a cleanup: obviously kvstoreGetHash should be
      applied to the db->expires dicts.
    • Fix kvstore unable to push resize_cursor for resize when dict is NULL (#13031) · 84fd745d
      Binbin authored
      
      
      When the dict is NULL, we also need to advance resize_cursor,
      otherwise it will keep doing a useless continue here, and there is
      no way to resize the other dicts behind it.
      
      Introduced in #12822.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
  11. 05 Feb, 2024 2 commits
    • Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822) · 8cd62f82
      guybe7 authored
      # Description
      Gather most of the scattered `redisDb`-related code from the per-slot
      dict PR (#11695) and turn it to a new data structure, `kvstore`. i.e.
      it's a class that represents an array of dictionaries.
      
      # Motivation
      The main motivation is code cleanliness, the idea of using an array of
      dictionaries is very well-suited to becoming a self-contained data
      structure.
      This allowed cleaning some ugly code, among others: loops that run twice
      on the main dict and expires dict, and duplicate code for allocating and
      releasing this data structure.
      
      # Notes
      1. This PR reverts the part of https://github.com/redis/redis/pull/12848
      where the `rehashing` list is global (handling rehashing `dict`s is
      under the responsibility of `kvstore`, and should not be managed by the
      server)
      2. This PR also replaces the type of `server.pubsubshard_channels` from
      `dict**` to `kvstore` (original PR:
      https://github.com/redis/redis/pull/12804). After that was done,
      server.pubsub_channels was also chosen to be a `kvstore` (with only one
      `dict`, which seems odd) just to make the code cleaner by making it the
      same type as `server.pubsubshard_channels`, see
      `pubsubtype.serverPubSubChannels`
      3. The keys and expires kvstores are currently configured to
      allocate the individual dicts only when the first key is added
      (unlike before, when they were allocated in advance), but they
      won't release them when the last key is deleted.
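      Conceptually (a sketch with assumed fields; the real definition is
      in kvstore.c and richer):
      ```C
      typedef struct dict dict; /* stand-ins for the real dict.h/adlist.h types */
      typedef struct list list;
      
      /* An array of dictionaries managed as one unit: allocation/release
       * of the per-slot dicts and their rehashing are the kvstore's
       * responsibility, not the server's. */
      typedef struct kvstore {
          dict **dicts;        /* one dict per slot (16384 in cluster mode) */
          int num_dicts;
          list *rehashing;     /* dicts currently rehashing, owned here */
          unsigned long long key_count; /* aggregate number of keys */
      } kvstore;
      ```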
      
      Worth mentioning that due to the change above, the reply of DEBUG
      HTSTATS changed in case no keys were ever added to the db.
      
      before:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ```
      
      after:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      [Expires HT]
      ```
    • Fix active expire timeout when db done the scanning (#13030) · f20774ec
      Binbin authored
      When db->expires_cursor==0, it means the DB has finished the
      scanning; we should exit the loop to avoid useless scanning.
      
      It is easy to see the active expire timeout in the modified test.
      For example, let's assume that there is only 1 expired key in the
      DB, and the size / buckets ratio is less than 1%, which means that
      we will skip it in isExpiryDictValidForSamplingCb, and the returned
      expires_cursor is 0.
      
      Because `data.sampled == 0` is always true, `repeat` is also always
      true, so we keep scanning the DB, but every time it is skipped by
      the previous judgment (expires_cursor = 0), until the timelimit is
      finally exhausted.
  12. 04 Feb, 2024 2 commits
    • Add missing structural API changes to JSON file (#12434) · 02a87885
      Daz authored
      
      
      The JSON file lacks the following structural API changes:
      
      - GEORADIUSBYMEMBER: add the ANY option for COUNT since 6.2.0.
      - GEORADIUSBYMEMBER_RO: add the ANY option for COUNT since 6.2.0.
      - GEORADIUS_RO: add support for uppercase unit names since 7.0.0.
      - GEORADIUSBYMEMBER_RO: add support for uppercase unit names since
      7.0.0.
      
      ---------
      Signed-off-by: daz-3ux <daz-3ux@proton.me>
      Co-authored-by: bodong.ybd <bodong.ybd@alibaba-inc.com>
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      Co-authored-by: yangpengda.333 <yangpengda.333@bytedance.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • Make db->avg_ttl more precise (#12949) · c1041c2c
      Yanqi Lv authored
      
      
      Currently, We compute `db->avg_ttl` after each short `dbScan` sweep (a
      few buckets without checking the time limit). But after each `dbScan`
      sweep, we don't have much data and this makes the db->avg_ttl less
      precise. For example, even if we scan the whole db, we can't get the
      exact avg_ttl because we separate the data.
      i.e. because of the running average, if we issue 16 calls to scan, we'll
      give lower weight to the first one, and higher weight to the last one.
      I think we should calculate `db->avg_ttl` until completing more of the
      db iteration (judgement of time limit or the beginning of iterating next
      db) because we have more sample data in this db and can get more
      accurate result. In the best case, if we scan the whole db, we can get
      the exact avg_ttl.
      
      In this PR, we postpone the avg_ttl calculation until the judgement of
      time limit or iteration of next db, so we can accumulate more data to
      get more precise avg_ttl.
      Note that we still need to make sure to decay the old TTLs at the same
      speed as before, which is why we want to run the decay mechanism several
      times, or use the Pow formula, see the comment in the code.
      
      In my experiment, this PR improves accuracy by 89% or 52% in
      different workloads.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  13. 01 Feb, 2024 1 commit
    • Refine the purpose of rdb saving with accurate flags (#12925) · 62153b3b
      Yanqi Lv authored
      In Redis, rdb is produced mainly in three scenarios:
      
      - backup, such as `bgsave` and `save` command
      - full sync in replication
      - aof rewrite if `aof-use-rdb-preamble` is yes
      
      We also have some RDB flags to identify the purpose of rdb saving.
      ```C
      /* flags on the purpose of rdb save or load */
      #define RDBFLAGS_NONE 0                 /* No special RDB loading. */
      #define RDBFLAGS_AOF_PREAMBLE (1<<0)    /* Load/save the RDB as AOF preamble. */
      #define RDBFLAGS_REPLICATION (1<<1)     /* Load/save for SYNC. */
      ```
      
      But currently, it seems that these flags don't exactly match the
      purposes of rdb saving. I found one case in `rdbSaveRioWithEOFMark`,
      which calls `startSaving` with `RDBFLAGS_REPLICATION` but
      `rdbSaveRio` with `RDBFLAGS_NONE`.
      ```C
      int rdbSaveRioWithEOFMark(int req, rio *rdb, int *error, rdbSaveInfo *rsi) {
          char eofmark[RDB_EOF_MARK_SIZE];
      
          startSaving(RDBFLAGS_REPLICATION);
          getRandomHexChars(eofmark,RDB_EOF_MARK_SIZE);
          if (error) *error = 0;
          if (rioWrite(rdb,"$EOF:",5) == 0) goto werr;
          if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
          if (rioWrite(rdb,"\r\n",2) == 0) goto werr;
          if (rdbSaveRio(req,rdb,error,RDBFLAGS_NONE,rsi) == C_ERR) goto werr;
          if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
          stopSaving(1);
          return C_OK;
      
      werr: /* Write error. */
          /* Set 'error' only if not already set by rdbSaveRio() call. */
          if (error && *error == 0) *error = errno;
          stopSaving(0);
          return C_ERR;
      }
      ```
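      Presumably the refinement makes the inner call agree with the outer
      `startSaving()` flag, along these lines (an illustrative guess at
      the relevant line inside `rdbSaveRioWithEOFMark`, not the actual
      diff):
      ```C
      if (rdbSaveRio(req,rdb,error,RDBFLAGS_REPLICATION,rsi) == C_ERR) goto werr;
      ```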
      
      In this PR, I refine the purpose of rdb saving with accurate flags.
  14. 31 Jan, 2024 3 commits
    • Fix dict resize allow test (#13016) · 9a7d3118
      Binbin authored
      CI reports this failure:
      ```
      *** [err]: Don't rehash if used memory exceeds maxmemory after rehash in tests/unit/maxmemory.tcl
      Expected '4098' to equal or match '4002'
      
      WARNING: the new maxmemory value set via CONFIG SET (1176088) is smaller than the current memory usage (1231083)
      ```
      
      It can be seen from the log that used_memory changed before we set
      maxmemory. The reason is that in #12819, in cron, in addition to
      trying to shrink, we are also trying to expand. The dict was
      expanded by cron before we set maxmemory, causing the test to fail.
      
      Before setting maxmemory, we now only add 4095 keys to avoid
      triggering the resize.
    • Fix module assertion crash when timer and timeout are unlocked in the same event loop (#13015) · 6016973a
      Binbin authored
      When we use a timer to unblock a client in a module, if the timer
      period and the block timeout are very close, they will unblock the
      client in the same event loop, and it will trigger an assertion.
      The reason is that in moduleBlockedClientTimedOut we protect
      against re-processing, so we don't actually call
      updateStatsOnUnblock (see #12817), and thus we are not able to
      reset c->duration.
      
      The root cause is that unblockClientOnTimeout() didn't realize that
      bc had already been unblocked. We add a function to the module
      layer to determine whether bc is blocked, and then use it in
      unblockClientOnTimeout() to exit early.
      
      There is the stack:
      ```
      beforeSleep
      blockedBeforeSleep
      handleBlockedClientsTimeout
      checkBlockedClientTimeout
      unblockClientOnTimeout
      unblockClient
      resetClient
      -- assertion, crash the server
      'c->duration == 0' is not true
      ```
    • Fix module unblock crash due to no timeout_callback (#13017) · 74a6e48a
      Binbin authored
      In the test case, a block timeout is passed, but we do not pass in
      the timeout_callback, and it will crash when unblocking. To handle
      this case, moduleBlockedClientTimedOut now checks timeout_callback.
      There is the stack:
      ```
      beforeSleep
      blockedBeforeSleep
      handleBlockedClientsTimeout
      checkBlockedClientTimeout
      unblockClientOnTimeout
      replyToBlockedClientTimedOut
      moduleBlockedClientTimedOut
      -- timeout_callback is NULL, invalidFunctionWasCalled
      bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
      ```
  15. 30 Jan, 2024 3 commits
    • Add novalues option to command HSCAN. (#12765) · f469dd8c
      Chen Tianjie authored
      
      
      Add a way to HSCAN a hash key and get only the field names.
      Command syntax is now:
      ```
      HSCAN key cursor [MATCH pattern] [COUNT count] [NOVALUES]
      ```
      when `NOVALUES` is on, the command will only return keys in the hash.
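      For example (an illustrative session):
      ```
      127.0.0.1:6379> HSET h f1 v1 f2 v2
      (integer) 2
      127.0.0.1:6379> HSCAN h 0 NOVALUES
      1) "0"
      2) 1) "f1"
         2) "f2"
      ```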
      
      ---------
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
    • Implement `CLIENT KILL MAXAGE <maxage>` (#12299) · 24f6d08b
      Slava Koyfman authored
      
      
      Adds the ability to kill clients older than a specified age.
      
      Also fixes the age calculation in `catClientInfoString` to use
      `commandTimeSnapshot` instead of the old `server.unixtime`, and
      adds the missing documentation for `CLIENT KILL ID` to the output
      of `CLIENT HELP`.
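      For example, to kill connections older than an hour (illustrative;
      the integer reply is the number of clients killed):
      ```
      127.0.0.1:6379> CLIENT KILL MAXAGE 3600
      (integer) 2
      ```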
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • fix dict rehash tests introduced by #12802 broken by #12819 (#13009) · 7c9f41b5
      Oran Agra authored
      The tests consistently fail on timeout (a sleep that's too short).
      They now take more time because in #12819 we iterate on all dicts,
      not just non-empty ones.
      The change passed the PR's CI because that run skips the `slow`
      tag, which might have been misplaced, but now it is probably
      required.
      With the fix, the tests take quite a lot of time:
      ```
      [ok]: Redis can trigger resizing (1860 ms)
      [ok]: Redis can rewind and trigger smaller slot resizing (744 ms)
      ```
      before #12819:
      ```
      [ok]: Redis can trigger resizing (309 ms)
      [ok]: Redis can rewind and trigger smaller slot resizing (295 ms)
      ```
      
      failure:
      https://github.com/redis/redis/actions/runs/7704158180/job/20995931735
      ```
      *** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
      scan didn't handle slot skipping logic.
      *** [err]: Redis can trigger resizing in tests/unit/other.tcl
      Expected '[Dictionary HT]
      Hash table 0 stats (main hash table):
       table size: 128
       number of elements: 5
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ' to match '*table size: 8*' (context: type eval line 29 cmd {assert_match "*table size: 8*" [r debug HTSTATS 0]} proc ::test) 
      *** [err]: Redis can rewind and trigger smaller slot resizing in tests/unit/other.tcl
      Expected '[Dictionary HT]
      Hash table 0 stats (main hash table):
       table size: 256
       number of elements: 10
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ' to match '*table size: 16*' (context: type eval line 27 cmd {assert_match "*table size: 16*" [r debug HTSTATS 0]} proc ::test) 
      ```