1. 05 Sep, 2024 1 commit
  2. 11 Aug, 2024 1 commit
    • On HDEL last field with expiry, update global HFE DS (#13470) · 806459f4
      Moti Cohen authored
      Hash field expiration is optimized to avoid frequently updating the global HFE DS
      for each field deletion. Eventually active-expiration will run and update or remove
      the hash from the global HFE DS gracefully. Nevertheless, the "subexpiry" statistic
      might reflect a wrong number of hashes with HFE to the user if HDEL deletes the
      last field with expiration in a hash (while more fields without expiration remain).
      
      Following this change, if HDEL deletes the last field with expiration in the hash,
      we take care to remove the hash from the global HFE DS as well.
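      A minimal sketch of the new behavior (helper and type names such as
      `hashTypeIsFieldsWithExpire` and `hashExpireBucketsType` are assumptions;
      the actual logic lives in t_hash.c):
      ```c
      /* Sketch: after HDEL removed a field, unregister the hash from the
       * global HFE DS once no field with a TTL remains. */
      static void hdelPostExpiryUpdate(redisDb *db, robj *hashObj) {
          if (hashTypeIsFieldsWithExpire(hashObj))    /* assumed predicate */
              return;                                 /* still has TTL'd fields */
          /* Last field with expiration is gone: drop the hash from the global
           * HFE DS so the "subexpiry" statistic stays accurate. */
          ebRemove(&db->hexpires, &hashExpireBucketsType, hashObj);
      }
      ```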
  3. 11 Jul, 2024 1 commit
  4. 07 Jul, 2024 1 commit
    • HFE - RDB serialize also hash min expiration (For ROF flow) (#13391) · dcf02298
      Moti Cohen authored
      * Following this feature, Redis (ROF) may implement a flow that allows hashes to be
        dumped directly from RDB to FLUSH without parsing. In this scenario, it is still
        essential to determine when to update hashes due to expired fields. By writing
        and reading the next minimum hash-field expiration before serializing objects
        to and from RDB, we can effectively track and expire hash fields without the need
        to parse the hash during loading (see the sketch after this list).
      
          Before:
          #define RDB_TYPE_HASH_METADATA 22
          #define RDB_TYPE_HASH_LISTPACK_EX 23
          
          After:
          /* Hash with HFEs. Doesn't attach min TTL at start */
          #define RDB_TYPE_HASH_METADATA_PRE_GA 22      
          /* Hash LP with HFEs. Doesn't attach min TTL at start */
          #define RDB_TYPE_HASH_LISTPACK_EX_PRE_GA 23   
          /* Hash with HFEs. Attach min TTL at start */
          #define RDB_TYPE_HASH_METADATA 24             
          /* Hash LP with HFEs. Attach min TTL at start */
          #define RDB_TYPE_HASH_LISTPACK_EX 25          
      
      
      * Manually tested loading an RDB file from before the change and verified the hash
        and its HFEs are as expected.
      * Added `subexpires` counter to `redis-check-rdb`
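      A hedged sketch of the save-side change referenced in the first bullet
      (`hashTypeGetMinExpire()` is an assumed helper name; the real code is in rdb.c):
      ```c
      /* For the new RDB_TYPE_HASH_METADATA / RDB_TYPE_HASH_LISTPACK_EX types,
       * the hash's minimum field expiration is written before the object
       * payload, so a loader (e.g. ROF) can track expiration without parsing
       * the hash. */
      uint64_t minExpire = hashTypeGetMinExpire(o);     /* assumed helper */
      if (rdbSaveLen(rdb, minExpire) == -1) return -1;  /* min TTL at start */
      /* ... fields (and per-field TTLs) are then serialized as before ... */
      ```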
  5. 02 Jul, 2024 1 commit
  6. 26 Jun, 2024 1 commit
    • HFE - count in command must match actual number of fields (#13369) · a9267137
      Moti Cohen authored
      There was a wrong preliminary assumption that we could optionally provide a
      vector of arguments longer than the count. This error-prone approach led to
      an actual error in that case. This PR enforces that the vector of arguments
      matches the count.
      
      Also fixed flaky HRANDFIELD test.
  7. 25 Jun, 2024 1 commit
    • Fix H(P)EXPIREAT command to propagate HDEL as well (#13364) · 5eac99c3
      Moti Cohen authored
      H(P)EXPIREAT command might delete fields in case the absolute time is in the 
      past. Those HDELs need to be propagated as well.
       
      In general, as we need to propagate H(P)EXPIRE(AT) command to the replica, each 
      field that is mentioned in the command should be categorized into one of the four
      options:
      1. Managed to update field’s expiration time - propagate it to replica as part 
         of the HPEXPIREAT command.
      2. Deleted the field because the time is in the past - propagate also HDEL command
         to delete the field and remove the field from the propagated HPEXPIREAT.
      3. Condition not met for the field - Remove the field from the propagated
         HPEXPIREAT command.
      4. Field does not exist - Remove the field from the propagated HPEXPIREAT command.
      
      If none of the provided fields matches option number 1, then we avoid propagating
      the HPEXPIREAT command to the replica altogether.
      
      This approach is aligned with the EXPIRE command: if a given key has already
      expired, then DEL is propagated instead of the EXPIRE command. If the condition
      is not met, the command is rejected. Otherwise, the EXPIRE command is propagated
      for the given key.
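      The four options, sketched as an enum (illustrative names only, not the
      PR's actual identifiers):
      ```c
      /* How each field given to H(P)EXPIRE(AT) affects propagation (sketch): */
      typedef enum {
          FIELD_TTL_SET,      /* 1: TTL updated; keep in propagated HPEXPIREAT */
          FIELD_DELETED,      /* 2: time in the past; propagate HDEL instead   */
          FIELD_COND_NOT_MET, /* 3: NX/XX/GT/LT failed; drop from HPEXPIREAT   */
          FIELD_NOT_EXISTS,   /* 4: no such field; drop from HPEXPIREAT        */
      } hfeSetExResult;
      /* If no field yields FIELD_TTL_SET, HPEXPIREAT is not propagated at all. */
      ```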
  8. 24 Jun, 2024 1 commit
    • Adapt HRANDFIELD to HFE feature (#13348) · e26ea35c
      Moti Cohen authored
      Considerations for the selected implementation of HRANDFIELD with the HFE feature:
      
      HRANDFIELD might access any of the fields in the hash, some of which might
      be expired. The implementation of HRANDFIELD along with HFEs can therefore
      follow one of two options:
      1. Expire hash-fields before diving into handling HRANDFIELD.
      2. Refine HRANDFIELD cases to deal with expired fields.
      
      Regarding the first option: as a reference, the RANDOMKEY command also declares
      O(1) complexity, yet might be stuck in a very long (but not infinite) loop trying
      to find non-expired keys. Furthermore, RANDOMKEY also evicts expired keys along
      the way even though it is categorized as a read-only command. Note that the case
      of HRANDFIELD is more lightweight than RANDOMKEY, since HFEs have much more
      effective and aggressive active-expiration for fields running behind the scenes.
      
      The second option introduces additional implementation complexity to HRANDFIELD.
      We could further refine HRANDFIELD cases to differentiate between scenarios
      with many expired fields versus few expired fields, and adjust based on the
      percentage of expired fields. However, this approach could still lead to long
      loops or necessitate expiring fields before selecting them. For the “lightweight”
      cases it is also expected to have a lightweight expiration.
      
      Considering the pros and cons, and the fact that HRANDFIELD is an infrequent
      command (particularly with HFEs) and the fact we have effective active-expiration
      behind for hash-fields, it is better to keep it simple and choose option number 1.
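      Option 1 in a nutshell, as a sketch (`hashTypeExpireAllDue()` is an assumed
      name for the up-front lazy-expiration step):
      ```c
      /* Sketch: expire all due fields once, then run the pre-existing
       * HRANDFIELD logic on a hash known to be free of expired fields. */
      void hrandfieldCommand(client *c) {
          robj *hash = lookupKeyWriteOrReply(c, c->argv[1], shared.emptyarray);
          if (hash == NULL || checkType(c, hash, OBJ_HASH)) return;
          hashTypeExpireAllDue(c->db, hash);  /* assumed helper (option 1) */
          /* ... existing random-field selection continues unchanged ... */
      }
      ```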
      
      Other changes:
      * Don't mark the command dirty from the internal hashTypeExpire(). It caused
        the read-only HRANDFIELD command to be accidentally propagated (this flag
        should be indicated at a higher level, by the command functions).
      * Make `hashTypeExpireIfNeeded()` and `hashTypeGetValue()` better aligned
        with the `expireIfNeeded()` logic of the keyspace.
  9. 20 Jun, 2024 1 commit
    • Fix rdbLoadObject() empty hash (#13347) · e18a173a
      Moti Cohen authored
      As part of the HFE feature, the logic of rdbLoadObject() was wrongly modified
      to treat an empty hash loaded from RDB as a hash whose fields had all expired.
      Rollback to the `emptykey` logic: this function should blindly load all fields,
      expired or not. Manually verified.
      
      A few more minor fixes:
      - Remove the double check of emptyKey for hashes
      - Fix from `sds` to `hfield` in rdbLoadObject() (not really a bug; both
      are of type char*)
      - Revert rdbLoadObject() to get dbid instead of db
  10. 14 Jun, 2024 1 commit
    • Reply with array of return codes if the key does not exist for HFE commands (#13343) · 4aa25d04
      Ozan Tezcan authored
      Currently, HFE commands reply with an empty array if the key does not exist.
      Though, a non-existing key and an empty key are the same thing: it means the
      fields given in the command do not exist in the empty key. So, replying with
      an array of 'no field' error codes (-2) suits Redis logic better. Similarly,
      `hmget` returns an array of nulls if the key does not exist.
      
      After this PR:
      ```
      127.0.0.1:6379> hpersist missingkey fields 2 a b
      1) (integer) -2
      2) (integer) -2
      ```
  11. 11 Jun, 2024 1 commit
    • Add new hexpired notification for HFE (#13329) · ed10f737
      debing.sun authored
      When a hash field expires, we now send a new `hexpired` notification.
      It mainly covers the following three cases:
      1. When a field expires by active expiration.
      2. When a field expires by lazy expiration.
      3. When the user uses the `h(p)expire(at)` command and a field expires
      during the command, the user will also get a `hexpired` notification.
      
      ## Improvement
      1. Now if more than one field expires within an hmget command, we send
      only a single `hexpired` notification.
      2. When a field with a TTL is deleted by commands like hdel without
      updating the global DS, active expire will not send a notification.
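      The notification itself boils down to a standard keyspace-event call; a
      sketch (the event class used here is an assumption):
      ```c
      /* Fired once per expiration event; e.g. if HMGET lazily expires several
       * fields at once, only a single notification is sent. */
      notifyKeyspaceEvent(NOTIFY_HASH, "hexpired", key, db->id);
      ```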
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Moti Cohen <moti.cohen@redis.com>
  12. 10 Jun, 2024 2 commits
    • Reserve 2 bits out of EB_EXPIRE_TIME_MAX for possible future use (#13331) · f01fdc39
      Moti Cohen authored
      Reserve 2 bits out of hash-field expiration time (`EB_EXPIRE_TIME_MAX`)
      for possible future lightweight indexing/categorizing of fields. It can
      be achieved by hacking HFE as follows:
      ```
      HPEXPIREAT key [ 2^47 + USER_INDEX ] FIELDS numfields field [field …]
      ```
      
      Redis will also need to expose some kind of `HEXPIRESCAN` and `HEXPIRECOUNT`
      commands for this idea, yet to be better defined.
      
      `HFE_MAX_ABS_TIME_MSEC` constraint must be enforced only at API level.
      Internally, the expiration time can be up to `EB_EXPIRE_TIME_MAX` for
      future readiness.
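      Arithmetically, the hack could look like this (the bit position follows the
      commit's own `2^47` example; constants are illustrative):
      ```c
      #define HFE_USER_BIT (1ULL << 47)               /* per the 2^47 example */
      unsigned long long user_index = 3;              /* hypothetical category */
      unsigned long long tagged = HFE_USER_BIT + user_index; /* "2^47 + USER_INDEX" */
      /* HPEXPIREAT key <tagged> FIELDS 1 f1 -- internally the value still fits,
       * since expiration may go up to EB_EXPIRE_TIME_MAX; only the API enforces
       * HFE_MAX_ABS_TIME_MSEC. */
      ```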
    • HFE - Avoid lazy expire if called by modules + cleanup (#13326) · ce121b92
      Moti Cohen authored
      We need to be careful if called by modules, since the modules API allows
      opening and closing a key handler. We don't want to invalidate the handler
      underneath.
      
      * hashTypeExists(), hashTypeGetValueObject() - will return the logical
      state of the field. A flag will indicate noExpire.
      * RM_HashGet() - Will get NULL if the field expired. Fields won’t be
      deleted.
      * RM_ScanKey() - might return 0 items if all fields got expired. Fields
      won’t be deleted.
      * RM_HashSet() - If set, then override expired field. If delete, we can
      either delete or leave it to active-expiration. XX/NX - logically
      correct (Verify with tests).
      
      Nice to have (not implemented):
      * RedisModule_CloseKey() - We could locally active-expire up to 100 items.
      
      Note:
      Length will be wrong for modules, just like for Redis (expired fields are
      counted).
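      What a module now observes, sketched with the public Modules API (behavior
      per the list above):
      ```c
      /* RM_HashGet() yields NULL for an expired field, but the field itself
       * is left for active-expiration, so an open key handle stays valid. */
      RedisModuleString *val = NULL;
      RedisModule_HashGet(key, REDISMODULE_HASH_NONE, field, &val, NULL);
      if (val == NULL) {
          /* Field is logically expired (or absent); it was NOT deleted
           * underneath us while the key handle is open. */
      }
      ```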
  13. 04 Jun, 2024 2 commits
    • Prevent negative expire parameter in HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT commands (#13310) · 9a2c6ba4
      debing.sun authored
      1. Don't allow the HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT commands' expire
      parameter to be negative.

      2. Remove dead code reported by Coverity: when `unit` is not `UNIT_SECONDS`,
      the second `if (expire > (long long) EB_EXPIRE_TIME_MAX)` check is dead code.
      ```c
      # t_hash.c
      2988    /* Check expire overflow */
            	cond_at_most: Condition expire > 281474976710655LL, taking false branch. Now the value of expire is at most 281474976710655.
      2989    if (expire > (long long) EB_EXPIRE_TIME_MAX) {
      2990        addReplyErrorExpireTime(c);
      2991        return;
      2992    }
      
      2994    if (unit == UNIT_SECONDS) {
      2995        if (expire > (long long) EB_EXPIRE_TIME_MAX / 1000) {
      2996            addReplyErrorExpireTime(c);
      2997            return;
      2998        }
      2999        expire *= 1000;
      3000    } else {
            	at_most: At condition expire > 281474976710655LL, the value of expire must be at most 281474976710655.
            	dead_error_condition: The condition expire > 281474976710655LL cannot be true.
      3001        if (expire > (long long) EB_EXPIRE_TIME_MAX) {
            	
      CID 494223: (#1 of 1): Logically dead code (DEADCODE)
      dead_error_begin: Execution cannot reach this statement: addReplyErrorExpireTime(c);.
      3002            addReplyErrorExpireTime(c);
      3003            return;
      3004        }
      3005    }
      ```
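      The added guard is essentially a sign check ahead of the overflow checks;
      a sketch (exact placement and error message may differ):
      ```c
      /* t_hash.c sketch: reject a negative expire before overflow handling */
      if (expire < 0) {
          addReplyErrorExpireTime(c);
          return;
      }
      ```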
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
    • Use lookupKeyWrite() for hpersist command (#13321) · 293a68af
      Ozan Tezcan authored
      As hpersist is a write command, we should use lookupKeyWrite() instead
      of lookupKeyRead() to fetch the key.
  14. 03 Jun, 2024 1 commit
    • Fix returned value nextExpireTime by ebExpire() (#13313) · 56169112
      Moti Cohen authored
      In the `ebuckets` structure, on `ebExpire()`, if the callback indicates that
      the item's expiration time should be updated and the item returned to ebuckets
      (`ACT_UPDATE_EXP_ITEM`), then the returned value `nextExpireTime` should be
      updated if needed. The invalid value of `nextExpireTime` was also changed
      from 0 to `EB_EXPIRE_TIME_INVALID`.
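      The fix, roughly (variable names assumed; the real logic sits inside
      ebExpire() in ebuckets.c):
      ```c
      /* When the callback kept the item with a new time (ACT_UPDATE_EXP_ITEM),
       * fold that time into the reported next expiration instead of ignoring it. */
      if (act == ACT_UPDATE_EXP_ITEM && itemNewTime < info->nextExpireTime)
          info->nextExpireTime = itemNewTime;
      /* The "no next expiration" marker is now EB_EXPIRE_TIME_INVALID, not 0. */
      ```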
  15. 29 May, 2024 1 commit
    • HFE to support AOF and replicas (#13285) · 33fc0fbf
      Moti Cohen authored
      * For the replica's sake, rewrite the commands `H*EXPIRE*`, `HSETF`, `HGETF`
      to carry absolute unix time in msec.
      * On active-expiration of a field, propagate HDEL to the replica
      (`propagateHashFieldDeletion()`).
      * On lazy-expiration, propagate HDEL to the replica (`hashTypeGetValue()`
      now calls `hashTypeDelete()`, which also takes care of calling
      `propagateHashFieldDeletion()`).
      * Fix the `H*EXPIRE*` commands such that if given the flag `LT` on a field
      that doesn't have any expiration, it is considered a valid condition.
      
      Note: replicas don't perform any active expiration and should avoid lazy
      expiration. In `hashTypeGetValue()` they don't check expiration (as long as
      the master didn't request to delete the field, it is valid).
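      A sketch of the propagation helper named above (`shared.hdel` and the argv
      handling are assumptions):
      ```c
      /* Propagate `HDEL key field` to replicas/AOF when a field expires. */
      static void propagateHashFieldDeletion(redisDb *db, sds key,
                                             char *field, size_t len) {
          robj *argv[] = {
              shared.hdel,                          /* assumed shared object */
              createStringObject(key, sdslen(key)),
              createStringObject(field, len),
          };
          alsoPropagate(db->id, argv, 3, PROPAGATE_AOF | PROPAGATE_REPL);
          decrRefCount(argv[1]);
          decrRefCount(argv[2]);
      }
      ```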
      
      TODO:
      * Attach `dbid` to HASH metadata. See
        https://github.com/redis/redis/pull/13209#discussion_r1593385850
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
  16. 26 May, 2024 1 commit
  17. 22 May, 2024 2 commits
    • Improve performance of hfe listpack (#13279) · a25b1539
      Ozan Tezcan authored
      This PR contains a few optimizations for the hfe listpack.

      - Hfe fields are ordered by TTL in the listpack. There are two cases in
      which we want to search the listpack according to TTLs:
        - As part of active-expiry, we need to find the fields that are expired,
          e.g. find fields that have smaller TTLs than a given timestamp.
        - When we want to add a new field, we need to find the correct position
          to maintain the order by TTL, e.g. find the field that has a higher TTL
          than the one we want to insert.

        Iterating with lpNext() to compare TTLs has a performance cost, as
        lpNext() calls lpValidateIntegrity() for each entry. Instead, this PR
        adds `lpFindCb()` to the listpack, which accepts a comparator callback.
        It preserves the same validation logic of lpFind(), which is faster than
        searching with lpNext().
        
      - We have field name, value, and ttl for a single hfe field. Inserting these
      items one by one into the listpack is costly. Especially since we place
      fields according to TTL, most additions will end up in the middle of the
      listpack, and each insert causes a realloc + memmove. This PR introduces
      `lpBatchInsert()` to add multiple items in one go.
      
      - For hsetf, if we are going to update the value and TTL at the same time,
      currently we update the value first and later update the TTL (two distinct
      listpack operations). This PR improves it by doing both with a single
      update operation.
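      A sketch of such a comparator; the callback signature is an assumption
      modeled on lpGet() conventions (integer entries arrive with `s == NULL`
      and the value in `slen`):
      ```c
      /* Comparator for lpFindCb(): match the first TTL entry strictly greater
       * than the TTL being inserted, i.e. the insertion position. */
      static int ttlGreaterCb(const unsigned char *lp, unsigned char *p,
                              void *user, unsigned char *s, long long slen) {
          (void)lp; (void)p;
          if (s != NULL) return 0;           /* field name/value, not a TTL */
          return slen > *(long long *)user;  /* integer entry holds the TTL */
      }
      ```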
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
    • sanitize dump payload for HFE (#13278) · 95cbe879
      debing.sun authored
      Add the following validations:
      1. Get the TTL using the lpGetIntegerValue() method instead of lpGetValue().
         Ref https://github.com/redis/redis/pull/13209#discussion_r1602569422
      2. The TTLs of listpackex are numbers in the valid range
         (0~EB_EXPIRE_TIME_MAX).
      3. The TTL fields of listpackex are ordered.
      4. The TTLs of hashtable are within the valid range (0~EB_EXPIRE_TIME_MAX).
      
      Other:
      Fix the missing handling of OBJ_ENCODING_LISTPACK_EX in
      dismissHashObject().
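      The TTL checks from the list above, sketched (loop plumbing omitted;
      `lpGetIntegerValue()` is named by the commit, its exact signature is
      assumed):
      ```c
      /* Validate one listpackex TTL entry during deep sanitization. */
      long long ttl;
      if (!lpGetIntegerValue(p, &ttl)) return 0;          /* must be an integer */
      if (ttl < 0 || ttl > EB_EXPIRE_TIME_MAX) return 0;  /* rules 2/4: range   */
      /* rule 3: TTLs must also be ordered w.r.t. the previous entry (omitted) */
      ```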
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
  18. 17 May, 2024 1 commit
    • Hfe serialization listpack (#13243) · 323be4d6
      Ronen Kalish authored
      Add RDB de/serialization for HFE.

      This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and
      `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data.
      When the hash RAM encoding is dict, it will be saved in the former, and
      when it is listpack it will be saved in the latter.
      Both formats just add the TTL value for each field after the data that
      was previously saved, i.e. HASH_METADATA will save the number of entries
      and, for each entry, key, value and TTL, whereas listpack is saved as a
      blob.
      On read, the usual dict <--> listpack conversion takes place if
      required.
      In addition, when reading a hash that was saved as a dict, fields are
      actively expired if their expiry is due. Currently this also holds for the
      listpack encoding, but it is supposed to be removed.
      
      TODO:
      Remove active expiry on load when loading from listpack format (unless
      we'll decide to keep it)
  19. 16 May, 2024 1 commit
  20. 14 May, 2024 1 commit
    • Add defragment support for HFE (#13229) · 80be2cc2
      debing.sun authored
      ## Background
      1. All hash objects that contain HFE are referenced by db->hexpires.
      2. All fields in a dict hash object with HFE are referenced by an
      ebucket.
      
      So when we defrag the hash object or the field in a dict with HFE, we
      also need to update the references in them.
      
      ## Interface
      1. Add a new interface `ebDefragItem`, which can accept a defrag
      callback to defrag items in ebuckets, and simultaneously update their
      references in the ebucket.
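      The shape of the new interface, as a hedged sketch (the real prototype in
      ebuckets.h may differ):
      ```c
      /* fn reallocates the item and returns its new address, or NULL if it
       * was not moved; ebDefragItem() then rewires the reference to the item
       * held inside the ebuckets DS. */
      typedef void *(ebDefragFunction)(void *item);
      int ebDefragItem(ebuckets *eb, EbucketsType *type, void **item,
                       ebDefragFunction *fn);
      ```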
      
      ## Mainly changes
      1. The key type of the dict of a hash object is no longer sds, so add a new
      `activeDefragHfieldDict()` to defrag the dict instead of
      `activeDefragSdsDict()`.
      2. When we defrag the dict of a hash object by using `dictScanDefrag()`,
      we always set the defrag callback `defragKey` of `dictDefragFunctions`
      to NULL, because we can't reallocate a field without updating its
      reference in ebuckets.
      Instead, we defrag the field of the dict and update its reference in the
      callback `dictScanDefrag` of dictScanFunction().
      3. When we defrag the hash robj with HFE, we will use `ebDefragItem` to
      defrag the robj and update the reference in db->hexpires.
      
      ## TODO:
      Defrag the ebuckets structure incrementally, which will be handled in a
      future PR.
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Moti Cohen <moti.cohen@redis.com>
  21. 13 May, 2024 1 commit
    • Fix hgetf/hsetf reply type by returning string (#13263) · 5066e6e9
      Ozan Tezcan authored
      If the encoding is listpack, the hgetf and hsetf commands reply with the
      field value typed as an integer.
      This PR fixes it by returning a string.
      
      Problematic cases:
      ```
      127.0.0.1:6379> hset hash one 1
      (integer) 1
      127.0.0.1:6379> hgetf hash fields 1 one
      1) (integer) 1
      127.0.0.1:6379> hsetf hash GETOLD fvs 1 one 2
      1) (integer) 1
      127.0.0.1:6379> hsetf hash DOF GETNEW fvs 1 one 2
      1) (integer) 2
      ```
      
      Additional fixes:
      - hgetf/hsetf command description text
      
      Fixes #13261, #13262
  22. 09 May, 2024 1 commit
  23. 08 May, 2024 2 commits
    • Add listpack support, hgetf and hsetf commands (#13209) · ca4ed48d
      Ozan Tezcan authored
      **Changes:**
      - Adds listpack support to hash field expiration 
      - Implements hgetf/hsetf commands
      
      **Listpack support for hash field expiration**
      
      We keep field name and value pairs in a listpack for the hash type. With
      this PR, if one of the hash field expiration commands is called on a key
      for the first time, it converts the listpack layout to triplets that hold
      field name, value and ttl per field. If a field does not have a TTL, we
      store zero as the ttl value. Zero is encoded as two bytes in the listpack,
      so once we convert the listpack to hold triplets, fields that don't have a
      TTL consume those extra 2 bytes per item. Fields are ordered by ttl in the
      listpack to find the field with the minimum expiry time efficiently.
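      A sketch of the triplet layout after conversion (plain listpack API calls
      for illustration; the conversion code itself is in t_hash.c):
      ```c
      /* listpackex layout: [field][value][ttl] per entry, ordered by ttl,
       * where ttl == 0 means "no expiration" and costs the 2 extra bytes. */
      lp = lpAppend(lp, (unsigned char *)field, sdslen(field));
      lp = lpAppend(lp, (unsigned char *)value, sdslen(value));
      lp = lpAppendInteger(lp, ttl);        /* 0 when the field has no TTL */
      ```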
      
      **New command implementations as part of this PR:** 
      
      - HGETF command
      
      For each specified field get its value and optionally set the field's
      expiration time in sec/msec /unix-sec/unix-msec:
        ```
        HGETF key 
          [NX | XX | GT | LT]
      [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT
      unix-time-milliseconds | PERSIST]
          <FIELDS count field [field ...]>
        ```
      
      - HSETF command
      
      For each specified field value pair: set field to value and optionally
      set the field's expiration time in sec/msec /unix-sec/unix-msec:
        ```
        HSETF key 
          [DC] 
          [DCF | DOF] 
          [NX | XX | GT | LT] 
          [GETNEW | GETOLD] 
      [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT
      unix-time-milliseconds | KEEPTTL]
          <FVS count field value [field value …]>
        ```
      
      Todo:
      - Performance improvement.
      - rdb load/save
      - aof
      - defrag
    • ebuckets: Add test for ACT_UPDATE_EXP_ITEM (#13249) · 13401f8b
      Moti Cohen authored
      - On ebExpire(), verify the logic of updating an expired value to a new time
      rather than removing it.
      - Refine the ebuckets benchmark
  24. 25 Apr, 2024 1 commit
    • Support HSET+expire in one command, at infra level (#13230) · c33c91db
      Moti Cohen authored
      Unify the infra of `HSETF`, `HEXPIRE`, `HSET` and provide an API for RDB load
      as well. Whereas setting plain fields is rather straightforward, setting
      expiration time on fields can be time-consuming and complex, since each update
      of expiration time not only updates the `ebuckets` of the corresponding hash,
      but might also update the `ebuckets` of the global HFE DS. It is required to
      optimize a sequence of field updates with expiration for a given hash, such
      that the global HFE DS gets updated only once, at the end.
      
      To do so, follow the scheme:
      1. Call `hashTypeSetExInit()` to initialize the HashTypeSetEx struct.
      2. Call `hashTypeSetEx()` one time or more, for each field/expiration update.
      3. Call `hashTypeSetExDone()` for notification and update of global HFE.
      
      If expiration is not required, then avoid this API and use hashTypeSet() instead.
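      The three-step scheme as calling code might use it (a sketch; the actual
      prototypes in t_hash.c take more parameters):
      ```c
      HashTypeSetEx exInfo;
      hashTypeSetExInit(key, hashObj, c, &exInfo /* , mode flags... */); /* 1 */
      for (int i = 0; i < numFields; i++)                                /* 2 */
          hashTypeSetEx(hashObj, fields[i], expireAt[i], &exInfo);
      hashTypeSetExDone(&exInfo);  /* 3: notify + update global HFE DS once */
      ```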
  25. 18 Apr, 2024 1 commit
    • Hash Field Expiration - Basic support · c18ff056
      Moti Cohen authored
      - Add ebuckets & mstr data structures
      - Integrate active & lazy expiration
      - Add most of the commands 
      - Add support for dict (listpack is missing)
      TODOs:  RDB, notification, listpack, HSET, HGETF, defrag, aof
  26. 20 Mar, 2024 1 commit
  27. 15 Jan, 2024 1 commit
    • Shrink dict when deleting dictEntry (#12850) · e2b7932b
      Yanqi Lv authored
      When we insert entries into a dict, it may autonomously expand if needed.
      However, when we delete entries from a dict, it doesn't shrink to the
      proper size. If there are few entries in a very large dict, it may cause a
      huge waste of memory and inefficiency when iterating.
      
      The main keyspace dicts (keys and expires) are shrunk by cron
      (`tryResizeHashTables` calls `htNeedsResize` and `dictResize`), and some
      data structures such as zset and hash also do that (call `htNeedsResize`)
      right after a loop of calls to `dictDelete`. But many other dicts are
      completely missing that call (they can only expand).
      
      In this PR, we provide the ability to automatically shrink the dict when
      deleting. The conditions triggering the shrinking are the same as
      `htNeedsResize` used to have: we expand when we're over 100% utilization,
      and shrink when we're below 10% utilization.
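      The thresholds, expressed as a sketch in the style of htNeedsResize
      (constants as in dict.h; the exact helper added by the PR may differ):
      ```c
      /* Expand at >= 100% utilization; shrink below 10% utilization, but
       * never below the initial size. */
      static int dictNeedsShrink(dict *d) {
          unsigned long size = dictSlots(d), used = dictSize(d);
          return size > DICT_HT_INITIAL_SIZE && (used * 100 / size) < 10;
      }
      ```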
      
      Additionally:
      * Add `dictPauseAutoResize` so that flows that do mass deletions, will
      only trigger shrinkage at the end.
      * Rename `dictResize` to `dictShrinkToFit` (same logic as it used to
      have, but better name describing it)
      * Rename `_dictExpand` to `_dictResize` (same logic as it used to have,
      but better name describing it)
       
      related to discussion
      https://github.com/redis/redis/pull/12819#discussion_r1409293878
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
  28. 15 Oct, 2023 1 commit
    • Replace cluster metadata with slot specific dictionaries (#11695) · 0270abda
      Vitaly authored
      This is an implementation of https://github.com/redis/redis/issues/10589
      that eliminates 16 bytes per entry in cluster mode, which are currently
      used to create a linked list between entries in the same slot. The main
      idea is splitting the main dictionary into 16k smaller dictionaries (one
      per slot), so we can perform all slot-specific operations, such as
      iteration, without any additional info in the `dictEntry`. For Redis
      cluster, the expectation is that there will be a larger number of keys, so
      the fixed overhead of 16k dictionaries will be negligible. The expire
      dictionary is also split up so that each slot is logically decoupled, so
      that in subsequent revisions we will be able to atomically flush a slot of
      data.
      
      ## Important changes
      * Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
      * getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. To address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
      * Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
      * scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSBs of the cursor so it can be passed around between client and server (see the sketch after this list). This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
      * Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
      * Slot info in RDB - in order to resize individual dictionaries correctly, while loading RDB, it's not enough to know total number of keys (of course we could approximate number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains number of keys in each slot, which can be used as a hint during loading.
      * DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). This is kept for O(1) expires computation as well.
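      The cursor composition from the scan API item above, sketched (14 bits for
      16384 slots is an assumption consistent with the design):
      ```c
      #define SCAN_SLOT_BITS 14  /* 2^14 = 16384 cluster slots */

      /* The slot id lives in the cursor's least-significant bits. */
      static unsigned long long buildCursor(unsigned long long dictCursor, int slot) {
          return (dictCursor << SCAN_SLOT_BITS) | (unsigned long long)slot;
      }
      static int cursorSlot(unsigned long long cursor) {
          return (int)(cursor & ((1ULL << SCAN_SLOT_BITS) - 1));
      }
      ```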
      
      ## Performance
      This change improves SET performance in cluster mode by ~5%, most of the gains come from us not having to maintain linked lists for keys in slot, non-cluster mode has same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead for finding keys to evict. 
      
      RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
      
      ## Interface changes
      * Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
      * Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
      * New RDB version to support the new op code for SLOT information. 
      
      ---------
      Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  29. 22 May, 2023 1 commit
    • Optimize HRANDFIELD and ZRANDMEMBER case 3 when listpack encoded (#12205) · 006ab26c
      Binbin authored
      Optimized the HRANDFIELD and ZRANDMEMBER commands as in #8444,
      CASE 3 under listpack encoding. Boosted the optimization to CASE 2.5.

      CASE 2.5: listpack only. Sampling unique elements, in non-random order.
      Listpack-encoded hashes / zsets are meant to be relatively small, so
      HRANDFIELD_SUB_STRATEGY_MUL / ZRANDMEMBER_SUB_STRATEGY_MUL
      isn't necessary and we'd rather not make copies of the entries. Instead,
      we emit them directly to the output buffer.

      Simple benchmarks show it provides some 400% improvement in both HRANDFIELD
      and ZRANDMEMBER in CASE 3.

      Unrelated changes: remove the useless setTypeRandomElements and fix a typo.
  30. 16 May, 2023 1 commit
    • Fix for set max entries edge case in setTypeCreate / setTypeMaybeConvert (#12183) · fd566f40
      Binbin authored
      In the judgment in setTypeCreate, we should judge size_hint <= max_entries;
      otherwise we get the following inconsistencies:
      ```
      127.0.0.1:6379> config set set-max-intset-entries 5 set-max-listpack-entries 5
      OK
      
      127.0.0.1:6379> sadd intset_set1 1 2 3 4 5
      (integer) 5
      127.0.0.1:6379> object encoding intset_set1
      "hashtable"
      127.0.0.1:6379> sadd intset_set2 1 2 3 4
      (integer) 4
      127.0.0.1:6379> sadd intset_set2 5
      (integer) 1
      127.0.0.1:6379> object encoding intset_set2
      "intset"
      
      127.0.0.1:6379> sadd listpack_set1 a 1 2 3 4
      (integer) 5
      127.0.0.1:6379> object encoding listpack_set1
      "hashtable"
      127.0.0.1:6379> sadd listpack_set2 a 1 2 3
      (integer) 4
      127.0.0.1:6379> sadd listpack_set2 4
      (integer) 1
      127.0.0.1:6379> object encoding listpack_set2
      "listpack"
      ```
      
      This was introduced in #12019; added corresponding tests.
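      The corrected judgment, sketched (helpers from object.c; surrounding logic
      simplified, and the previous strict `<` is inferred from the examples above):
      ```c
      robj *setTypeCreate(sds value, size_t size_hint) {
          long long ll;
          if (isSdsRepresentableAsLongLong(value, &ll) == C_OK &&
              size_hint <= server.set_max_intset_entries)    /* now inclusive */
              return createIntsetObject();
          if (size_hint <= server.set_max_listpack_entries)  /* now inclusive */
              return createSetListpackObject();
          return createSetObject();
      }
      ```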
  31. 08 May, 2023 1 commit
    • Minor performance improvement to SADD and HSET (#12019) · a129a601
      Madelyn Olson authored
      For sets and hashes that will eventually be stored with the hash-table encoding, it's much faster to immediately convert them to that encoding and then perform the insertions, since this avoids the O(N) search and frequent reallocations. This change checks the number of arguments in the incoming command and converts the data structure if the number of new entries exceeds the listpack-max-entries configuration. This can cause us to over-allocate memory if there are duplicate entries in the input, which is unexpected.
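      For HSET the check boils down to comparing the argument count against the
      listpack limit; a sketch (details simplified, the actual code also
      considers value lengths):
      ```c
      /* If the incoming field count alone exceeds the listpack limit, convert
       * to the hash-table encoding once, before any insertion. */
      size_t new_fields = (c->argc - 2) / 2;
      if (o->encoding == OBJ_ENCODING_LISTPACK &&
          new_fields > server.hash_max_listpack_entries)
          hashTypeConvert(o, OBJ_ENCODING_HT);
      ```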
      
      unstable:
        throughput summary: 805.54 requests per second
        latency summary (msec):
                avg       min       p50       p95       p99       max
             61.908    25.680    68.351    73.279    75.967    79.295

      hset-improvement:
        throughput summary: 4701.46 requests per second
        latency summary (msec):
                avg       min       p50       p95       p99       max
             10.546     0.832    11.959    12.471    13.119    14.967
  32. 28 Feb, 2023 1 commit
  33. 16 Jan, 2023 2 commits
    • Obuf limit, exit during loop in *RAND* commands and KEYS (#11676) · b4123663
      Oran Agra authored
      Related to the hang reported in #11671.
      Currently, redis can disconnect a client for reaching the output buffer
      limit, and it'll avoid feeding that output buffer with more data, but it
      will keep running the loop in the command (despite the client already
      being marked for disconnection).
      
      This PR is an attempt to mitigate the problem for commands that are easy
      to abuse: KEYS, HRANDFIELD, SRANDMEMBER, ZRANDMEMBER.
      The RAND family of commands can take a negative COUNT argument (which is
      not bound to the number of elements in the key), so it's enough to create
      a key with one field, and then these commands can be used to hang redis.
      For KEYS the caller can use the existing keyspace in redis (if big enough).
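      The mitigation pattern, sketched (the exact flag/condition the PR checks
      may differ):
      ```c
      /* Inside the command's reply-building loop: stop producing output once
       * the client has been scheduled for disconnection by the obuf limit. */
      while (count--) {
          if (c->flags & CLIENT_CLOSE_ASAP) break; /* obuf limit was reached */
          addReplyBulkCBuffer(c, elem, elemlen);
      }
      ```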
    • Fix range issues in ZRANDMEMBER and HRANDFIELD (CVE-2023-22458) (#11674) · 16f408b1
      Oran Agra authored
      A missing range check in ZRANDMEMBER and HRANDFIELD led to a panic due to
      protocol limitations.
  34. 28 Aug, 2022 1 commit
  35. 14 Aug, 2022 1 commit
    • Optimization in t_hash.c: Avoid looking for a same field twice by using dictAddRaw() instead of dictFind() and dictAdd() (#11110) · eef2d830
      kmy2001 authored
      Before this change, in the hashTypeSet() function we first used dictFind()
      to look for the field, and if it did not exist we used dictAdd() to add it.
      In the dictAdd() function the dictionary looks for the field again, which
      is meaningless as we already know that the field does not exist.
      
      An optimization is to use dictAddRaw() instead of dictFind() and dictAdd().
      If we use dictAddRaw(), a new entry is added when the field does not exist,
      and what we should do then is just set the value of that entry, and set
      its key to sdsdup(field) in the case that the 'HASH_SET_TAKE_FIELD' flag
      wasn't set.
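      The optimized flow in hashTypeSet(), sketched (flag handling condensed):
      ```c
      dictEntry *existing;
      dictEntry *de = dictAddRaw(ht, field, &existing);     /* single lookup */
      if (de) {
          /* Field was absent: the entry is already inserted; set key/value. */
          if (!(flags & HASH_SET_TAKE_FIELD)) dictSetKey(ht, de, sdsdup(field));
          dictSetVal(ht, de, value);
      } else {
          /* Field exists: just replace the value of the found entry. */
          sdsfree(dictGetVal(existing));
          dictSetVal(ht, existing, value);
      }
      ```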