1. 29 Sep, 2024 1 commit
    • Moti Cohen's avatar
      Add new SFLUSH command to cluster for slot-based FLUSH (#13564) · d092d64d
      Moti Cohen authored
      This PR introduces a new `SFLUSH` command for cluster mode that allows
      partial flushing of nodes based on specified slot ranges. The current
      implementation is designed to flush all slots of a shard, but future
      extensions could allow for more granular flushing.
      
      **Command Usage:**
      `SFLUSH <start-slot> <end-slot> [<start-slot> <end-slot>]* [SYNC|ASYNC]`
      
      This command removes all data from the specified slots, either
      synchronously or asynchronously depending on the optional SYNC/ASYNC
      argument.
      
      **Functionality:**
      The current implementation of the `SFLUSH` command verifies that the
      provided slot ranges are valid and cover all of the node's slots before
      proceeding. If slots are partially or incorrectly specified, the command
      fails and returns an error: all slots of a node must be fully covered for
      the flush to proceed.
      
      The function supports both synchronous (default) and asynchronous
      flushing. In addition, if possible, SFLUSH SYNC will be run as blocking
      ASYNC as an optimization.
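      As a minimal illustration (not from the commit), assuming a node whose
      shard owns slots 0-8191 (a hypothetical layout; replies are omitted since
      they depend on the deployment), the whole shard can be flushed with a
      single range or with several ranges that together cover it:
      ```
      SFLUSH 0 8191 ASYNC
      SFLUSH 0 4095 4096 8191 SYNC
      ```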
      d092d64d
  2. 19 Sep, 2024 1 commit
    • Moti Cohen's avatar
      Extend modules API to read also expired keys and subkeys (#13526) · 3a3cacfe
      Moti Cohen authored
      The PR extends `RedisModule_OpenKey`'s flags to include
      `REDISMODULE_OPEN_KEY_ACCESS_EXPIRED`, which allows access to expired
      keys.

      It also allows access to expired subkeys. Currently this is relevant only
      for hash fields and affects `RM_HashGet` and `RM_Scan`.
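      A minimal sketch of a module command using the new flag (not from the
      PR; the command name and the omitted OnLoad/registration boilerplate are
      assumptions):
      ```
      #include "redismodule.h"

      /* PEEKEXPIRED <key> <field>: read a hash field even if the key or the
       * field is logically expired, thanks to the ACCESS_EXPIRED flag. */
      int PeekExpired_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          if (argc != 3) return RedisModule_WrongArity(ctx);

          RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1],
              REDISMODULE_READ | REDISMODULE_OPEN_KEY_ACCESS_EXPIRED);
          if (key == NULL || RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_HASH) {
              if (key) RedisModule_CloseKey(key);
              return RedisModule_ReplyWithNull(ctx);
          }

          RedisModuleString *value = NULL;
          /* With ACCESS_EXPIRED set on the key, RM_HashGet may also return
           * fields whose TTL has already passed. */
          RedisModule_HashGet(key, REDISMODULE_HASH_NONE, argv[2], &value, NULL);

          if (value) {
              RedisModule_ReplyWithString(ctx, value);
              RedisModule_FreeString(ctx, value);
          } else {
              RedisModule_ReplyWithNull(ctx);
          }
          RedisModule_CloseKey(key);
          return REDISMODULE_OK;
      }
      ```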
      3a3cacfe
  3. 13 Sep, 2024 1 commit
  4. 12 Sep, 2024 2 commits
    • Filipe Oliveira (Redis)'s avatar
      Optimize SSCAN command in case of listpack or intset encoding: avoid the usage... · f2f85ba3
      Filipe Oliveira (Redis) authored
      Optimize SSCAN command in case of listpack or intset encoding: avoid the usage of intermediate list. From 2N to N iterations (#13530)
      
      On SSCAN, in case of listpack and intset encoding we actually reply with
      the entire set and always return cursor 0.
      
      For those cases, we don't need to accumulate the replies in a list and
      can completely avoid the overhead of list appending and then iterating
      over the list again -- meaning we do N iterations instead of 2N
      iterations over the SET and save intermediate memory as well.
      
      Preliminary benchmarks of `SSCAN set:100 0` showed an improvement of
      about 60% on a SET with 100 string elements (listpack encoded).
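      For reference, a small illustrative session (not from the PR; the key
      name and encoding thresholds are assumptions) showing that a
      listpack-encoded set is returned in full with cursor 0 in a single call:
      ```
      127.0.0.1:6379> SADD smallset a b c
      (integer) 3
      127.0.0.1:6379> OBJECT ENCODING smallset
      "listpack"
      127.0.0.1:6379> SSCAN smallset 0
      1) "0"
      2) 1) "a"
         2) "b"
         3) "c"
      ```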
      f2f85ba3
    • Oran Agra's avatar
      RED-129256, Fix TOUCH command from script in no-touch mode (#13512) · 610eb26c
      Oran Agra authored
      
      
      When a client in no-touch mode issues a TOUCH command on a key, the
      key's access time should be updated, but when TOUCH is issued from
      scripts or a module's RM_Call, it wasn't.
      The command proc should be matched against the executing client, not the
      current client.
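      The scenario can be exercised like this (an illustrative session, not
      from the PR; the key name is made up). Before this fix, the TOUCH issued
      through the script did not refresh the key's access time; now it behaves
      like a direct TOUCH:
      ```
      127.0.0.1:6379> SET mykey v
      OK
      127.0.0.1:6379> CLIENT NO-TOUCH ON
      OK
      127.0.0.1:6379> EVAL "return redis.call('TOUCH', KEYS[1])" 1 mykey
      (integer) 1
      ```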
      Co-authored-by: Udi Ron <udi@speedb.io>
      610eb26c
  5. 04 Sep, 2024 1 commit
  6. 03 Sep, 2024 1 commit
    • Ozan Tezcan's avatar
      Reply LOADING on replica while flushing the db (#13495) · a7afd1d2
      Ozan Tezcan authored
      On a full sync, the replica starts discarding the existing db. If the
      existing db is huge and the flush happens synchronously, the replica may
      become unresponsive.

      This PR adds a change to yield back to the event loop while flushing the
      db on a replica. The replica will reply -LOADING in this case. Note that
      while the replica is loading the new rdb, it may hit an error and start
      flushing the partial db. This step may take a long time as well, and
      similarly the replica will reply -LOADING then.
      
      To call processEventsWhileBlocked() and reply -LOADING, we need to:
      - Set the read handler to NULL with connSetReadHandler() so no further
        data from the master is processed
      - Set the server.loading flag
      - Call blockingOperationStarts()

      rdbLoad() already does these steps and calls processEventsWhileBlocked()
      while loading the rdb. Added a new call, rdbLoadWithEmptyFunc(), which
      accepts a callback to flush the db before loading the rdb or when an
      error happens while loading.

      For diskless replication, we do something similar and call emptyData()
      after setting the required flags, as sketched below.
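      An illustrative sketch of the sequence described above as it would run on
      the replica (not the actual patch; the helper name and exact call sites
      are assumptions, while the called functions exist in the codebase):
      ```
      /* Sketch: prepare the replica so that clients get -LOADING while the old
       * dataset is being discarded, then flush. Cleanup/error paths omitted. */
      static void replicaFlushBeforeLoading(connection *conn) {
          connSetReadHandler(conn, NULL);   /* stop consuming data from the master */
          server.loading = 1;               /* clients now get -LOADING replies */
          blockingOperationStarts();

          /* emptyData() may take a long time; with this PR, events are
           * processed periodically while it runs, so other clients are
           * answered with -LOADING instead of hanging. */
          emptyData(-1, EMPTYDB_NO_FLAGS, replicationEmptyDbCallback);

          blockingOperationEnds();
          server.loading = 0;
      }
      ```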
      
      Additional changes:
      - Allow `appendonly` config change during loading.
       The config can be changed while loading data on startup or on
       replication when the replica is loading the RDB. We allow the config
       change command to update `server.aof_enabled` and then lazily apply the
       change after the loading operation completes.

       - Added a test for the `replica-lazy-flush` config
      a7afd1d2
  7. 20 Aug, 2024 1 commit
    • judeng's avatar
      improve performance for scan command when matching data type (#12395) · 7f0a7f0a
      judeng authored
      Move the TYPE filtering to the scan callback so that the `lookupKey`
      operation is avoided. This is the follow-up to #12209. In this PR we
      introduce two breaking changes:
      1. we will not attempt to lazily expire (delete) a key that was filtered
      out by not matching the TYPE (like we already do for the MATCH pattern).
      2. when the specified TYPE filter is an unknown type, the server will
      reply with an error immediately instead of doing a full scan that comes
      back empty-handed (see the example below).
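      A quick illustration (not from the PR; the key name is made up, the exact
      error text is elided, and an otherwise empty db is assumed):
      ```
      127.0.0.1:6379> SET mykey v
      OK
      127.0.0.1:6379> SCAN 0 TYPE string
      1) "0"
      2) 1) "mykey"
      127.0.0.1:6379> SCAN 0 TYPE no-such-type
      (error) ERR ...
      ```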
      7f0a7f0a
  8. 01 Jul, 2024 1 commit
    • Oran Agra's avatar
      Fix possible crash due to OOM panic on invalid command (#13380) · 69b7137d
      Oran Agra authored
      getKeysUsingKeySpecs had the range check AFTER the allocation of the keys
      buffer, which could lead to an OOM panic when invalid arguments are
      provided that cause an overflow.
      The allocated memory is only used after the range check, so there's no
      risk of a buffer overrun.
      The OOM panic can happen on 32-bit builds, or 64-bit builds running on
      systems with less than 4GB of RAM, and is reachable via COMMAND
      GETKEYSANDFLAGS and ACL key name validation.
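      The pattern of the fix, as a generic hedged sketch (the helper name and
      surrounding code below are simplified assumptions, not the real
      getKeysUsingKeySpecs code):
      ```
      /* Illustrative only: validate the key count against the argument range
       * BEFORE sizing the allocation, so a bogus count cannot drive a huge
       * zmalloc() and an OOM panic. */
      static int collectKeys(int numkeys, int argc, keyReference **out) {
          if (numkeys <= 0 || numkeys > argc) return 0;    /* range check first */
          *out = zmalloc(sizeof(keyReference) * numkeys);  /* now bounded by argc */
          return numkeys;
      }
      ```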
      69b7137d
  9. 29 May, 2024 1 commit
    • Moti Cohen's avatar
      HFE to support AOF and replicas (#13285) · 33fc0fbf
      Moti Cohen authored
      * For the replica's sake, rewrite the `H*EXPIRE*`, `HSETF`, `HGETF`
      commands to carry absolute unix time in msec.
      * On active expiration of a field, propagate HDEL to the replica
      (`propagateHashFieldDeletion()`)
      * On lazy expiration, propagate HDEL to the replica (`hashTypeGetValue()`
      now calls `hashTypeDelete()`. It also takes care to call
      `propagateHashFieldDeletion()`).
      * Fix the `H*EXPIRE*` commands such that if the `LT` flag is given and
      the field doesn't have any expiration, it is considered a valid
      condition.

      Note that replicas don't do any active expiration and should avoid lazy
      expiration. In `hashTypeGetValue()` they don't check expiration (as long
      as the master didn't request to delete the field, it is valid).
      
      TODO:
      * Attach `dbid` to HASH metadata. See
      [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      33fc0fbf
  10. 28 May, 2024 1 commit
    • Ozan Tezcan's avatar
      Fix hscan return value (#13297) · 6a11d458
      Ozan Tezcan authored
      In the last step of hscan, while replying to the client, we assume all
      items in the result list are keys, which are mstr instances. However,
      there might be values, which are sds instances.

      Added a check to avoid calling mstrlen() on value objects.
      
      To reproduce:
      ```
      127.0.0.1:6379> hset myhash1 a 11111111111111111111111111111111111111111111111111111111111111111
      (integer) 0
      127.0.0.1:6379> hscan myhash1 0
      1) "0"
      2) 1) "a"
         2) "11111111111111111111111111111111111111111111111111111111111111111\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
      ```
      6a11d458
  11. 22 May, 2024 1 commit
  12. 08 May, 2024 1 commit
    • Ozan Tezcan's avatar
      Add listpack support, hgetf and hsetf commands (#13209) · ca4ed48d
      Ozan Tezcan authored
      **Changes:**
      - Adds listpack support to hash field expiration 
      - Implements hgetf/hsetf commands
      
      **Listpack support for hash field expiration**
      
      We keep field name and value pairs in a listpack for the hash type. With
      this PR, if one of the hash field expiration commands is called on a key
      for the first time, it converts the listpack layout to triplets that hold
      field name, value and TTL per field. If a field does not have a TTL, we
      store zero as the ttl value. Zero is encoded as two bytes in the
      listpack, so once we convert the listpack to hold triplets, fields that
      don't have a TTL consume those extra 2 bytes per item. Fields are ordered
      by TTL in the listpack so the field with the minimum expiry time can be
      found efficiently.
      
      **New command implementations as part of this PR:** 
      
      - HGETF command
      
      For each specified field, get its value and optionally set the field's
      expiration time in sec/msec/unix-sec/unix-msec:
        ```
        HGETF key
          [NX | XX | GT | LT]
          [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST]
          <FIELDS count field [field ...]>
        ```
      
      - HSETF command
      
      For each specified field-value pair: set the field to the value and
      optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
        ```
        HSETF key
          [DC]
          [DCF | DOF]
          [NX | XX | GT | LT]
          [GETNEW | GETOLD]
          [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL]
          <FVS count field value [field value ...]>
        ```
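      Hypothetical invocations following the syntax above (replies are omitted,
      as they are not documented here):
      ```
      HGETF myhash EX 100 FIELDS 2 f1 f2
      HSETF myhash DOF GETNEW PX 5000 FVS 1 f1 v1
      ```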
      
      Todo:
      - Performance improvement.
      - rdb load/save
      - aof
      - defrag
      ca4ed48d
  13. 18 Apr, 2024 1 commit
    • Moti Cohen's avatar
      Hash Field Expiration - Basic support · c18ff056
      Moti Cohen authored
      - Add ebuckets & mstr data structures
      - Integrate active & lazy expiration
      - Add most of the commands 
      - Add support for dict (listpack is missing)
      TODOs:  RDB, notification, listpack, HSET, HGETF, defrag, aof
      c18ff056
  14. 02 Apr, 2024 1 commit
    • Moti Cohen's avatar
      Change FLUSHALL/FLUSHDB SYNC to run as blocking ASYNC (#13167) · 4df03796
      Moti Cohen authored
      # Overview
      Users utilize the `FLUSHDB SYNC` and `FLUSHALL SYNC` commands for a variety of 
      reasons. The main issue with these commands is that if the database becomes 
      substantial in size, the server will be unresponsive for an extended period. 
      Other than freezing application traffic, this may also lead some clients to make 
      incorrect judgments about the server's availability. For instance, a watchdog may 
      erroneously decide to terminate the process, resulting in potential adverse 
      outcomes. While a `FLUSH* ASYNC` can address these issues, it might not be used 
      for two reasons: firstly, it's not the default, and secondly, in some cases, the 
      client issuing the flush wants to wait for its completion before repopulating the 
      database.
      
      Between the option of triggering FLUSH* asynchronously in the background without 
      indication for completion versus running it synchronously in the foreground by 
      the main thread, there is another more appealing option. We can block the
      client that requested the flush, execute the flush command in the background, and 
      once done, unblock the client and return notification for completion. This approach 
      ensures the server remains responsive to other clients, and the blocked client 
      receives the expected response only after the flush operation has been successfully 
      carried out.
      
      # Implementation details
      Instead of defining yet another flavor of the flush command, we can modify
      `FLUSHALL SYNC` and `FLUSHDB SYNC` to always run in this new mode.
      
      ## Extending BIO Threads capabilities
      Today jobs that are carried out by BIO threads don't have the capability to 
      indicate completion to the main thread. We can add this infrastructure by having
      an additional dummy job, coined as completion-job, that eventually will be written 
      by BIO threads to a response-queue. The main thread will take care to consume items
      from the response-queue and call the provided callback function of each 
      completion-job.
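      A rough sketch of the idea with hypothetical names (the real job and
      queue types live in bio.c and differ in detail; locking is omitted and
      Redis's adlist helpers are assumed):
      ```
      /* Hypothetical illustration of the completion-job mechanism. */
      typedef struct completionJob {
          void (*done)(void *arg);   /* callback to run on the main thread */
          void *arg;
      } completionJob;

      /* BIO worker side (sketch): the dummy completion-job is not executed,
       * it is simply moved to the response queue once it is reached. */
      void bioWorkerHandleCompletion(completionJob *job, list *response_queue) {
          listAddNodeTail(response_queue, job);
      }

      /* Main thread side (sketch): drain the response queue and invoke the
       * callbacks, e.g. to unblock the client that issued FLUSHALL SYNC. */
      void drainBioResponses(list *response_queue) {
          listNode *ln;
          while ((ln = listFirst(response_queue)) != NULL) {
              completionJob *job = listNodeValue(ln);
              job->done(job->arg);
              listDelNode(response_queue, ln);
          }
      }
      ```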
      
      ## FLUSH* SYNC to run as blocking ASYNC
      The `FLUSH* SYNC` commands will be modified to create one or more async jobs to
      flush the DB(s) and afterward push an additional completion-job request. By
      sending the completion-job request only at the end, the main thread will be
      called back only after all the preceding jobs have completed their work in the
      background. During that time, the client that issued the command is suspended
      and marked as `BLOCKED_LAZYFREE`, whereas any other client can communicate with
      the server without any issue.
      4df03796
  15. 20 Mar, 2024 1 commit
  16. 18 Mar, 2024 1 commit
    • Binbin's avatar
      Fix dictionary use-after-free in active expire and make kvstore iter to respect EMPTY flag (#13135) · 7b070423
      Binbin authored
      After #13072, there is a use-after-free error. In expireScanCallback, we
      delete the dict, and then in dictScan we continue to use it, e.g. by
      calling `dictResumeRehashing(d)` at the end; this caused an error.

      In this PR, in freeDictIfNeeded, if the dict's pauserehash is set, we
      don't delete the dict yet, and when the scan returns we try to delete it
      again.

      At the same time, we noticed that similar problems exist in the iterator.
      We may also delete elements during the iteration process, causing the
      dict to be deleted, so the parts related to iterators have also been
      modified in this PR. dictResetIterator was also missing from the previous
      kvstoreIteratorNextDict; we currently have no scenario in which elements
      are deleted during the kvstoreIterator process, but it is handled
      together to avoid future problems. Added some simple tests to verify the
      changes.

      In addition, the modification in #13072 omitted initTempDb and
      emptyDbAsync, and they were also added. This PR also removes the slow
      flag from the expire test (it consumes 1.3s) so that problems can be
      found in CI in the future.
      7b070423
  17. 26 Feb, 2024 1 commit
    • Yanqi Lv's avatar
      Optimize DEL on expired keys (#13080) · 0a12f380
      Yanqi Lv authored
      
      
      If we call `DEL` on expired keys, the keys may be deleted in
      `expireIfNeeded` and we don't need to call `dbSyncDelete` or
      `dbAsyncDelete` afterwards, which would repeat the deletion process (i.e.
      find the keys in the main db again).

      In this PR, I refine the return values of `expireIfNeeded` to indicate
      whether we have deleted the expired key, to avoid the potentially
      redundant deletion logic in `delGenericCommand`. Besides, because both
      KEY_EXPIRED and KEY_DELETED are non-zero, this PR won't affect other
      functions calling `expireIfNeeded`.
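      A simplified sketch (not the actual diff) of how `delGenericCommand` can
      use the refined return value; the enum names come from the description
      above, and the surrounding code is trimmed:
      ```
      /* Inside delGenericCommand(client *c, int lazy), sketched: */
      int numdel = 0;
      for (int j = 1; j < c->argc; j++) {
          /* If the key was already expired, expireIfNeeded() now deletes it
           * and reports KEY_DELETED, so the redundant dbSyncDelete()/
           * dbAsyncDelete() lookup can be skipped. An expired key is treated
           * as non-existent, so it is not counted as deleted by DEL. */
          if (expireIfNeeded(c->db, c->argv[j], 0) == KEY_DELETED) continue;

          int deleted = lazy ? dbAsyncDelete(c->db, c->argv[j])
                             : dbSyncDelete(c->db, c->argv[j]);
          if (deleted) numdel++;
      }
      ```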
      
      I also ran a performance test. I first disabled active expiration with
      `debug set-active-expire 0` and wrote 1 million keys with a 1ms TTL. Then
      I repeatedly deleted 100 expired keys in one `DEL`. The results are as
      follows, showing that this PR can improve performance by about 10% in
      this situation.
      **unstable**
      ```
      Summary:
        throughput summary: 10080.65 requests per second
        latency summary (msec):
                avg       min       p50       p95       p99       max
              0.953     0.136     0.959     1.215     1.335     2.247
      ```
      
      **This PR**
      ```
      Summary:
        throughput summary: 11074.20 requests per second
        latency summary (msec):
                avg       min       p50       p95       p99       max
              0.865     0.128     0.879     1.055     1.175     2.159
      ```
      
      ---------
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      0a12f380
  18. 07 Feb, 2024 1 commit
  19. 05 Feb, 2024 1 commit
    • guybe7's avatar
      Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822) · 8cd62f82
      guybe7 authored
      # Description
      Gather most of the scattered `redisDb`-related code from the per-slot
      dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e. a
      class that represents an array of dictionaries.
      
      # Motivation
      The main motivation is code cleanliness, the idea of using an array of
      dictionaries is very well-suited to becoming a self-contained data
      structure.
      This allowed cleaning some ugly code, among others: loops that run twice
      on the main dict and expires dict, and duplicate code for allocating and
      releasing this data structure.
      
      # Notes
      1. This PR reverts the part of https://github.com/redis/redis/pull/12848
      where the `rehashing` list is global (handling rehashing `dict`s is
      under the responsibility of `kvstore`, and should not be managed by the
      server)
      2. This PR also replaces the type of `server.pubsubshard_channels` from
      `dict**` to `kvstore` (original PR:
      https://github.com/redis/redis/pull/12804). After that was done,
      server.pubsub_channels was also chosen to be a `kvstore` (with only one
      `dict`, which seems odd) just to make the code cleaner by making it the
      same type as `server.pubsubshard_channels`, see
      `pubsubtype.serverPubSubChannels`
      3. the keys and expires kvstores are currently configured to allocate the
      individual dicts only when the first key is added (unlike before, when
      they were allocated in advance), but they won't release them when the
      last key is deleted.
      
      It is worth mentioning that due to the recent change, the reply of DEBUG
      HTSTATS changed in case no keys were ever added to the db.
      
      before:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ```
      
      after:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      [Expires HT]
      ```
      8cd62f82
  20. 30 Jan, 2024 1 commit
  21. 29 Jan, 2024 1 commit
    • Chen Tianjie's avatar
      Optimize resizing hash table to resize not only non-empty dicts. (#12819) · af7ceeb7
      Chen Tianjie authored
      The function `tryResizeHashTables` only attempts to shrink the dicts that
      have keys (a change from #11695); this was a serious problem until the
      change in #12850, since it meant that if all keys were deleted, we would
      not shrink the dict.
      But still, both dictShrink and dictExpand may be blocked by a forked
      child process, therefore the cron job needs to perform both dictShrink
      and dictExpand, for not just non-empty dicts, but all dicts in DBs.
      
      What this PR does:
      
      1. Try to resize all dicts in DBs (not just non-empty ones, as it was
      since #12850)
      2. handle both shrink and expand (not just shrink, as it was since
      forever)
      3. Refactor some APIs about dict resizing (get rid of `htNeedsShrink` and
      `dictShrinkToFit`, and expose `dictShrinkIfNeeded` and
      `dictExpandIfNeeded`, which already contain all the code of the functions
      we got rid of, to make the APIs neater)
      4. In the `Don't rehash if redis has child process` test, now that cron
      would do resizing, we no longer need to write to DB after the child
      process got killed, and can wait for the cron to expand the hash table.
      af7ceeb7
  22. 22 Jan, 2024 1 commit
    • zhaozhao.zz's avatar
      Set the correct id for tempDb (#12947) · 8d0156eb
      zhaozhao.zz authored
      background: some modules need to know the `dbid` information, such as
      the function used during RDB loading:
      
      ```
      robj *rdbLoadObject(int rdbtype, rio *rdb, sds key, int dbid, int *error) {
      ....
              moduleInitIOContext(io,mt,rdb,&keyobj,dbid);
      ```
      
      However, during replication, the "tempDb" created for diskless RDB
      loading is not correctly set with the dbid. This leads to passing the
      wrong dbid to the `rdbLoadObject` function (as tempDb uses zcalloc, all
      ids are 0).
      
      ```
      disklessLoadInitTempDb()->rdbLoadRioWithLoadingCtx()->
              /* Read value */
              val = rdbLoadObject(type,rdb,key,db->id,&error);
      ```
      
      To fix it, set the correct ID (relative index) for the tempdb.
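      The essence of the fix, sketched (the real change is in the temp-db
      initialization path such as initTempDb; exact code and loop bounds may
      differ):
      ```
      /* Illustrative: assign each temp database its relative index so that
       * code like rdbLoadObject() receives the right dbid during diskless
       * loading. */
      redisDb *tempDb = zcalloc(sizeof(redisDb) * server.dbnum);
      for (int i = 0; i < server.dbnum; i++) {
          tempDb[i].id = i;   /* previously left as 0 by zcalloc for every db */
          /* ... rest of the per-db initialization ... */
      }
      ```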
      8d0156eb
  23. 15 Dec, 2023 1 commit
    • zhaozhao.zz's avatar
      Unified db rehash method for both standalone and cluster (#12848) · d8a21c57
      zhaozhao.zz authored
      After #11695, we added two functions `rehashingStarted` and
      `rehashingCompleted` to the dict structure. We also registered two
      handlers for the main database's dict and expire structures. This allows
      the main database to record the dict in `rehashing` list when rehashing
      starts. Later, in `serverCron`, the `incrementallyRehash` function is
      continuously called to perform the rehashing operation. However,
      currently, when rehashing is completed, `rehashingCompleted` does not
      remove the dict from the `rehashing` list. This results in the
      `rehashing` list containing many invalid dicts. Although subsequent cron
      checks and removes dicts that don't require rehashing, it is still
      inefficient.
      
      This PR implements the functionality to remove the dict from the
      `rehashing` list in `rehashingCompleted`. This is achieved by adding
      `metadata` to the dict structure, which keeps track of its position in
      the `rehashing` list, allowing for quick removal. This approach avoids
      storing duplicate dicts in the `rehashing` list.
      
      Additionally, there are other modifications:
      
      1. Whether in standalone or cluster mode, the dict in database is
      inserted into the rehashing linked list when rehashing starts. This
      eliminates the need to distinguish between standalone and cluster mode
      in `incrementallyRehash`. The function only needs to focus on the dicts
      in the `rehashing` list that require rehashing.
      2. `rehashing` list is moved from per-database to Redis server level.
      This decouples `incrementallyRehash` from the database ID, and in
      standalone mode, there is no need to iterate over all databases,
      avoiding unnecessary access to databases that do not require rehashing.
      In the future, even if unsharded-cluster mode supports multiple
      databases, there will be no risk involved.
      3. The insertion and removal operations of dict structures in the
      `rehashing` list are decoupled from `activerehashing` config.
      `activerehashing` only controls whether `incrementallyRehash` is
      executed in serverCron. There is no need for additional steps when
      modifying the `activerehashing` switch, as in #12705.
      d8a21c57
  24. 10 Dec, 2023 2 commits
    • Binbin's avatar
      Handle missing fields in dbSwapDatabases and swapMainDbWithTempDb (#12763) · 62419c01
      Binbin authored
      The change in dbSwapDatabases seems harmless. Because in non-clustered
      mode, dbBuckets calculations are strictly accurate and in cluster mode,
      we only have one DB. Modify it for uniformity (just like resize_cursor).
      
      The change in swapMainDbWithTempDb is needed in case we swap with the
      temp db, otherwise the overhead memory usage of db can be miscalculated.
      
      In addition we will swap all fields (including rehashing list), just for
      completeness (and reduce the chance of surprises in the future).
      
      Introduced in #12697.
      62419c01
    • Binbin's avatar
      Remove dead code around should_expand_db (#12767) · a3ae2ed3
      Binbin authored
      When dbExpand is called from rdb.c with try_expand set to 0, it will
      either panic on OOM, or be non-fatal (it should not fail RDB loading).

      At the same time, the log text has been slightly adjusted to make it more
      unified.
      a3ae2ed3
  25. 07 Dec, 2023 2 commits
    • Chen Tianjie's avatar
      Avoid unnecessary slot computing in KEYS command. (#12843) · f2d59c4f
      Chen Tianjie authored
      If not in cluster mode, there is no need to compute the slot.

      A small optimization for #12754
      f2d59c4f
    • zhaozhao.zz's avatar
      Fix replica node cannot expand dicts when loading legacy RDB (#12839) · 8e11f84d
      zhaozhao.zz authored
      When loading RDB on cluster nodes, it is necessary to consider the
      scenario where a node is a replica.
      
      For example, during a rolling upgrade, new version instances are often
      mounted as replicas on old version instances. In this case, the full
      synchronization legacy RDB does not contain slot information, and the
      new version instance, acting as a replica, should be able to handle the
      legacy RDB correctly for `dbExpand`.
      
      Additionally, renaming `getMyClusterSlotCount` to `getMyShardSlotCount`
      would be appropriate.
      
      Introduced in #11695
      8e11f84d
  26. 06 Dec, 2023 1 commit
    • zhaozhao.zz's avatar
      Make the sampling logic in eviction clearer (#12781) · 9ee1cc33
      zhaozhao.zz authored
      
      
      Additional optimizations for the eviction logic in #11695:
      
      To make the eviction logic clearer and decouple the number of sampled
      keys from the running mode (cluster or standalone).
      * When sampling in each database, we only care about the number of keys
      in the current database (not the dicts we sampled from).
      * If there is an insufficient number of keys in the current database
      (e.g. 10 times the value of `maxmemory_samples`), we can break out sooner
      (to avoid looping on a sparse database).
      * We'll never try to sample the db dicts more times than the number of
      non-empty dicts in the db (max 1 in non-cluster mode).
      
      And it also ensures that each database has a sufficient amount of
      sampled keys, so even if unsharded-cluster supports multiple databases,
      there won't be any issues.
      
      other changes:
      1. keep track of the number of non-empty dicts in each database.
      2. move key_count tracking into cumulativeKeyCountAdd rather than all
      it's callers
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      9ee1cc33
  27. 05 Dec, 2023 1 commit
  28. 28 Nov, 2023 2 commits
    • zhaozhao.zz's avatar
      Fix resize hash tables stuck on the last non-empty slot (#12802) · a1c5171c
      zhaozhao.zz authored
      Introduced in #11695 .
      
      The tryResizeHashTables function gets stuck on the last non-empty slot
      while iterating through dictionaries. It does not restart from the
      beginning. The reason for this issue is a problem with the usage of
      dbIteratorNextDict:
      
      /* Returns next dictionary from the iterator, or NULL if iteration is complete. */
      dict *dbIteratorNextDict(dbIterator *dbit) {
          if (dbit->next_slot == -1) return NULL;
          dbit->slot = dbit->next_slot;
          dbit->next_slot = dbGetNextNonEmptySlot(dbit->db, dbit->slot, dbit->keyType);
          return dbGetDictFromIterator(dbit);
      }
      
      When iterating to the last non-empty slot, next_slot is set to -1,
      causing it to loop indefinitely on that slot. We need to modify the code
      to ensure that after iterating to the last non-empty slot, it returns to
      the first non-empty slot.
      
      BTW, the function tryResizeHashTables actually iterates over slots that
      have keys. However, in its implementation, it leverages the dbIterator
      (which is a key iterator) to obtain slot and dictionary information.
      While this approach works fine, it is not very intuitive. This PR also
      improves readability by changing the iteration to directly iterate over
      slots, thereby enhancing clarity.
      a1c5171c
    • zhaozhao.zz's avatar
      clarify the comment of findSlotByKeyIndex function (#12811) · 095d0578
      zhaozhao.zz authored
      The current comment for `findSlotByKeyIndex` is a bit ambiguous and can
      be misleading, as it may be misunderstood as getting the next slot
      corresponding to target.
      095d0578
  29. 22 Nov, 2023 3 commits
  30. 16 Nov, 2023 1 commit
  31. 15 Nov, 2023 1 commit
    • Binbin's avatar
      Empty rehashing list in emptyDbStructure (#12764) · 4366bbaa
      Binbin authored
      This is currently harmless: since we have already cleared the dict
      beforehand, the rehashidx is reset to -1, and in incrementallyRehash we
      call dictIsRehashing to check.

      It would be nice to empty the list to avoid meaningless attempts, and the
      code is also unified to reduce misunderstandings.
      4366bbaa
  32. 14 Nov, 2023 1 commit
    • Binbin's avatar
      Fix DB iterator not resetting pauserehash causing dict being unable to rehash (#12757) · fe363063
      Binbin authored
      When using the DB iterator, it uses dictInitSafeIterator to init an old
      safe dict iterator. When dbIteratorNext is used, it jumps to the next
      slot's db dict when we are done with a dict. During this process, we do
      not have any calls to dictResumeRehashing, which causes the dict's
      pauserehash to always be > 0.

      In the end, dictRehashMilliseconds returns directly, leaving slot dicts
      in a state where rehashing can never be completed.

      In the "expire scan should skip dictionaries with lot's of empty buckets"
      test, adding a `keys *` can reproduce the problem stably. `keys *` calls
      dbIteratorNext, triggering a traversal of all slot dicts.

      Added dbReleaseIterator and dbIteratorInitNextSafeIterator methods to
      call dictResetIterator. The issue was introduced in #11695.
      fe363063
  33. 12 Nov, 2023 1 commit
    • Roshan Khatri's avatar
      Add DEBUG_ASSERTIONS option to custom assert (#12667) · 88e83e51
      Roshan Khatri authored
      This PR introduces a new macro, serverAssertWithInfoDebug, to do complex assertions only for debugging. The main intention is to allow running complex operations during tests without impacting runtime performance. This assertion is enabled when setting DEBUG_ASSERTIONS.
      
      The DEBUG_ASSERTIONS flag is set for the daily and CI variants of `test-sanitizer-address`.
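      A plausible shape for such a macro, as a sketch (not necessarily the
      exact code that was merged; serverAssertWithInfo is the existing assert
      helper it would delegate to):
      ```
      /* Sketch: expensive assertions compile away unless DEBUG_ASSERTIONS is
       * defined (e.g. by the sanitizer CI builds mentioned above), so the
       * asserted expression is not even evaluated in normal builds. */
      #ifdef DEBUG_ASSERTIONS
      #define serverAssertWithInfoDebug(_c, _o, _e) serverAssertWithInfo(_c, _o, _e)
      #else
      #define serverAssertWithInfoDebug(_c, _o, _e) ((void) 0)
      #endif
      ```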
      88e83e51
  34. 10 Nov, 2023 1 commit