1. 24 Jun, 2024 1 commit
    • Adapt HRANDFIELD to HFE feature (#13348) · e26ea35c
      Moti Cohen authored
      Considerations for the selected implementation of HRANDFIELD with the HFE feature:
      
      HRANDFIELD might access any of the fields in the hash, some of which
      might be expired. The implementation of HRANDFIELD alongside HFEs
      could therefore follow one of two options:
      1. Expire hash fields before diving into handling HRANDFIELD.
      2. Refine the HRANDFIELD cases to deal with expired fields.
      
      Regarding the first option, as a reference: the command RANDOMKEY also
      declares O(1) complexity, yet might be stuck in a very long (but not
      infinite) loop trying to find non-expired keys. Furthermore, RANDOMKEY
      also evicts expired keys along the way, even though it is categorized
      as a read-only command. Note that the HRANDFIELD case is more
      lightweight than RANDOMKEY, since HFEs have much more effective and
      aggressive active expiration for fields running behind the scenes.
      
      The second option introduces additional implementation complexity to
      HRANDFIELD. We could further refine the HRANDFIELD cases to
      differentiate between scenarios with many expired fields versus few
      expired fields, and adjust based on the percentage of expired fields.
      However, this approach could still lead to long loops or necessitate
      expiring fields before selecting them, and even the "lightweight"
      cases would be expected to pay for a lightweight expiration.
      
      Considering the pros and cons, the fact that HRANDFIELD is an
      infrequent command (particularly with HFEs), and the fact that we have
      effective active expiration running behind the scenes for hash fields,
      it is better to keep it simple and choose option number 1.
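      
      As a rough illustration of option 1, the command entry point just
      actively expires due fields before the random pick (helper names and
      signatures below are illustrative, not the exact code):
      ```c
      /* Option 1 sketch: expire due fields up front so the selection logic
       * below never has to reason about per-field TTLs. */
      void hrandfieldCommand(client *c) {
          robj *hash = lookupKeyWriteOrReply(c, c->argv[1], shared.null[c->resp]);
          if (hash == NULL || checkType(c, hash, OBJ_HASH)) return;
      
          /* After this call, every remaining field is live. */
          if (hashTypeIsFieldsWithExpire(hash))
              hashTypeExpire(c->db, c->argv[1], hash);
      
          /* ... existing HRANDFIELD logic runs on live fields only ... */
      }
      ```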
      
      Other changes:
      * Don't mark the command dirty from the internal hashTypeExpire(). It
        caused the read-only HRANDFIELD command to be accidentally
        propagated (this flag should be set at a higher level, by the
        command functions).
      * Align `hashTypeExpireIfNeeded()` and `hashTypeGetValue()` more
        closely with the `expireIfNeeded()` logic of the keyspace.
  2. 21 Jun, 2024 2 commits
  3. 20 Jun, 2024 1 commit
    • Fix rdbLoadObject() empty hash (#13347) · e18a173a
      Moti Cohen authored
      As part of the HFE feature, the logic of rdbLoadObject() was wrongly
      modified to treat an empty hash loaded from RDB as a hash whose fields
      had all expired. Roll back to the `emptykey` logic. This function
      should blindly load all fields, expired or not. Manually verified.
      
      A few more minor fixes:
      - Remove the redundant double check of emptyKey for hashes
      - Fix a type from `sds` to `hfield` in rdbLoadObject() (not really a
      bug; both are of type char*)
      - Revert rdbLoadObject() to take a dbid instead of a db
  4. 18 Jun, 2024 1 commit
    • reduce getNodeByQuery CPU time by using less cache lines (from 2064 Bytes struct to 64 Bytes): reduces LLC misses and Memory Loads (#13296) · 24c85cc3
      Filipe Oliveira (Redis) authored
      
      This PR shrinks the getKeysResult struct from 33 cache lines (by
      default it has a 256-entry static buffer)
      
      ```
      root@hpe10:~/redis# pahole -p   ./src/server.o -C getKeysResult
      typedef struct {
      	keyReference               keysbuf[256];         /*     0  2048 */
      	/* --- cacheline 32 boundary (2048 bytes) --- */
      	/* typedef keyReference */ struct {
      		int                pos;
      		int                flags;
      	} *keys; /*  2048     8 */
      	int                        numkeys;              /*  2056     4 */
      	int                        size;                 /*  2060     4 */
      
      	/* size: 2064, cachelines: 33, members: 4 */
      	/* last cacheline: 16 bytes */
      } getKeysResult;
      ```
      
      
      to 1 cache line, with a static buffer of 6 keys per command:
      ```
      root@hpe10:~/redis# pahole -p   ./src/server.o -C getKeysResult
      typedef struct {
      	int                        numkeys;              /*     0     4 */
      	int                        size;                 /*     4     4 */
      	keyReference               keysbuf[6];           /*     8    48 */
      	/* typedef keyReference */ struct {
      		int                pos;
      		int                flags;
      	} *keys; /*    56     8 */
      
      	/* size: 64, cachelines: 1, members: 4 */
      } getKeysResult; 
      ```
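      
      The underlying pattern is a small static buffer for the common case
      with a heap fallback for commands naming many keys. A self-contained
      sketch of that idea (the helper below is illustrative, not the exact
      Redis code):
      ```c
      #include <stdlib.h>
      
      #define MAX_KEYS_BUFFER 6
      
      typedef struct {
          int pos;
          int flags;
      } keyReference;
      
      typedef struct {
          int numkeys;
          int size;
          keyReference keysbuf[MAX_KEYS_BUFFER]; /* inline: struct fits 1 cache line */
          keyReference *keys;                    /* keysbuf, or heap if grown */
      } getKeysResult;
      
      /* Make room for numkeys references, spilling to the heap only for the
       * rare command that names more than MAX_KEYS_BUFFER keys. */
      static keyReference *getKeysPrepareResult(getKeysResult *result, int numkeys) {
          if (numkeys <= MAX_KEYS_BUFFER) {
              result->keys = result->keysbuf;
          } else if (result->keys == result->keysbuf) {
              result->keys = malloc(sizeof(keyReference) * numkeys);
          } else {
              result->keys = realloc(result->keys, sizeof(keyReference) * numkeys);
          }
          result->size = numkeys;
          return result->keys;
      }
      ```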
      
      We get around 1.5% higher ops/sec, and confirmation of around 15%
      fewer LLC loads and 37% fewer stores on getNodeByQuery.
      
      Function / Call Stack | CPU Time: Difference | CPU Time: 9462436fa444e746716845b1d807c74d8945831b | CPU Time: this PR | Loads: Difference | Loads: 9462436fa444e746716845b1d807c74d8945831b | Loads: this PR | Stores: Difference | Stores: 9462436fa444e746716845b1d807c74d8945831b | Stores: this PR
      -- | -- | -- | -- | -- | -- | -- | -- | -- | --
      getNodeByQuery | 0.753767 | 1.57118 | 0.817416 | 144297829 (15% less loads) | 920575969 | 776278140 | 367607824 (37% less stores) | 991642384 | 624034560
      
      ## results on client side
      
      ### baseline 
      ```
      taskset -c 2,3 memtier_benchmark -s 192.168.1.200 --port 6379 --authenticate perf --cluster-mode --pipeline 10 --data-size 100 --ratio 1:0 --key-pattern P:P --key-minimum=1 --key-maximum 1000000 --test-time 180 -c 25 -t 2 --hide-histogram 
      Writing results to stdout
      [RUN #1] Preparing benchmark client...
      [RUN #1] Launching threads now...
      [RUN #1 100%, 180 secs]  0 threads:   110333450 ops,  604992 (avg:  612942) ops/sec, 84.75MB/sec (avg: 85.86MB/sec),  0.82 (avg:  0.81) msec latency
      
      2         Threads
      25        Connections per thread
      180       Seconds
      
      
      ALL STATS
      ======================================================================================================================================================
      Type         Ops/sec     Hits/sec   Misses/sec    MOVED/sec      ASK/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec 
      ------------------------------------------------------------------------------------------------------------------------------------------------------
      Sets       612942.14          ---          ---         0.00         0.00         0.81332         0.80700         1.26300         2.92700     87924.12 
      Gets            0.00         0.00         0.00         0.00         0.00             ---             ---             ---             ---         0.00 
      Waits           0.00          ---          ---          ---          ---             ---             ---             ---             ---          --- 
      Totals     612942.14         0.00         0.00         0.00         0.00         0.81332         0.80700         1.26300         2.92700     87924.12 
      ```
      
      ### comparison 
      ```
      taskset -c 2,3 memtier_benchmark -s 192.168.1.200 --port 6379 --authenticate perf --cluster-mode --pipeline 10 --data-size 100 --ratio 1:0 --key-pattern P:P --key-minimum=1 --key-maximum 1000000 --test-time 180 -c 25 -t 2 --hide-histogram 
      Writing results to stdout
      [RUN #1] Preparing benchmark client...
      [RUN #1] Launching threads now...
      [RUN #1 100%, 180 secs]  0 threads:   111731310 ops,  610195 (avg:  620707) ops/sec, 85.48MB/sec (avg: 86.95MB/sec),  0.82 (avg:  0.80) msec latency
      
      2         Threads
      25        Connections per thread
      180       Seconds
      
      
      ALL STATS
      ======================================================================================================================================================
      Type         Ops/sec     Hits/sec   Misses/sec    MOVED/sec      ASK/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec 
      ------------------------------------------------------------------------------------------------------------------------------------------------------
      Sets       620707.72          ---          ---         0.00         0.00         0.80312         0.79900         1.23900         2.87900     89037.78 
      Gets            0.00         0.00         0.00         0.00         0.00             ---             ---             ---             ---         0.00 
      Waits           0.00          ---          ---          ---          ---             ---             ---             ---             ---          --- 
      Totals     620707.72         0.00         0.00         0.00         0.00         0.80312         0.79900         1.23900         2.87900     89037.78
      ```
      Co-authored-by: filipecosta90 <filipecosta.90@gmail.com>
  5. 14 Jun, 2024 2 commits
    • Reply with array of return codes if the key does not exist for HFE commands (#13343) · 4aa25d04
      Ozan Tezcan authored
      Currently, HFE commands reply with an empty array if the key does not
      exist. However, a non-existing key and an empty key are the same
      thing: the fields given in the command do not exist in the empty key.
      So, replying with an array of 'no field' error codes (-2) suits Redis
      logic better; similarly, `hmget` returns an array of nulls if the key
      does not exist.
      
      After this PR:
      ```
      127.0.0.1:6379> hpersist missingkey fields 2 a b
      1) (integer) -2
      2) (integer) -2
      ```
    • Update `FIELDS` argument to block type for HFE commands schema (#13339) · 871c9859
      Jo authored
      I reviewed `XREAD` command syntax:
      ```
      XREAD [COUNT count] [BLOCK milliseconds] STREAMS key [key ...] id [id ...]
      ```
      
      Here’s the structure for `XREAD`:
      ```json
      "arguments": [
                  {
                      "token": "COUNT",
                      "name": "count",
                      "type": "integer",
                      "optional": true
                  },
                  {
                      "token": "BLOCK",
                      "name": "milliseconds",
                      "type": "integer",
                      "optional": true
                  },
                  {
                      "name": "streams",
                      "token": "STREAMS",
                      "type": "block",
                      "arguments": [
                          {
                              "name": "key",
                              "type": "key",
                              "key_spec_index": 0,
                              "multiple": true
                          },
                          {
                              "name": "ID",
                              "type": "string",
                              "multiple": true
                          }
                      ]
                  }
      ]
      ```
      
      Now, consider the `HEXPIRE` syntax:
      ```
      HEXPIRE key seconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
      ```
      
      Since the `FIELDS` token functions similarly to `STREAMS`, and given that `STREAMS` is defined as a block, I believe the `FIELDS` argument in `HEXPIRE` should also be defined as a block.
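      
      For illustration, the `FIELDS` clause could then be described roughly
      like this (a sketch of the proposed schema shape, not the exact
      committed JSON):
      ```json
      {
          "name": "fields",
          "token": "FIELDS",
          "type": "block",
          "arguments": [
              {
                  "name": "numfields",
                  "type": "integer"
              },
              {
                  "name": "field",
                  "type": "string",
                  "multiple": true
              }
          ]
      }
      ```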
  6. 11 Jun, 2024 1 commit
    • Add new hexpired notification for HFE (#13329) · ed10f737
      debing.sun authored
      
      
      When a hash field expires, we now send a new `hexpired` notification.
      It mainly covers the following three cases:
      1. When a field expires through active expiration.
      2. When a field expires through lazy expiration.
      3. When the user uses an `h(p)expire(at)` command, the user will also
      get a `hexpired` notification if the field expires during the command.
      
      ## Improvement
      1. If more than one field expires during an hmget command, we now send
      only a single `hexpired` notification.
      2. When a field with a TTL is deleted by commands like hdel without
      updating the global DS, active expire will not send a notification.
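      
      For illustration, observing the new event could look roughly like this
      (assuming keyspace-event notifications are enabled; the exact event
      class flags may differ):
      ```
      127.0.0.1:6379> config set notify-keyspace-events KEA
      OK
      127.0.0.1:6379> subscribe __keyevent@0__:hexpired
      1) "subscribe"
      2) "__keyevent@0__:hexpired"
      3) (integer) 1
      
      # meanwhile, in another client:
      127.0.0.1:6379> hset myhash f v
      (integer) 1
      127.0.0.1:6379> hpexpire myhash 10 fields 1 f
      1) (integer) 1
      
      # ~10 ms later the subscriber receives:
      1) "message"
      2) "__keyevent@0__:hexpired"
      3) "myhash"
      ```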
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Moti Cohen <moti.cohen@redis.com>
  7. 10 Jun, 2024 2 commits
    • Reserve 2 bits out of EB_EXPIRE_TIME_MAX for possible future use (#13331) · f01fdc39
      Moti Cohen authored
      Reserve 2 bits out of hash-field expiration time (`EB_EXPIRE_TIME_MAX`)
      for possible future lightweight indexing/categorizing of fields. It can
      be achieved by hacking HFE as follows:
      ```
      HPEXPIREAT key [ 2^47 + USER_INDEX ] FIELDS numfields field [field …]
      ```
      
      Redis will also need to expose some kind of `HEXPIRESCAN` and
      `HEXPIRECOUNT` for this idea. Yet to be better defined.
      
      `HFE_MAX_ABS_TIME_MSEC` constraint must be enforced only at API level.
      Internally, the expiration time can be up to `EB_EXPIRE_TIME_MAX` for
      future readiness.
    • HFE - Avoid lazy expire if called by modules + cleanup (#13326) · ce121b92
      Moti Cohen authored
      We need to be careful if called by modules, since the modules API
      allows opening and closing a key handle. We don't want to invalidate
      the handle underneath.
      
      * hashTypeExists(), hashTypeGetValueObject() - will return the logical
      state of the field. A flag will indicate noExpire.
      * RM_HashGet() - Will get NULL if the field expired. Fields won’t be
      deleted.
      * RM_ScanKey() - might return 0 items if all fields got expired. Fields
      won’t be deleted.
      * RM_HashSet() - If setting, override the expired field. If deleting,
      we can either delete it or leave it to active expiration. XX/NX are
      logically correct (verify with tests).
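      
      From the module side, the visible contract is simply that a
      lazily-expired field reads as absent while the open key handle stays
      valid; a minimal sketch:
      ```c
      #include "redismodule.h"
      
      /* Sketch: an expired-but-not-yet-deleted field reads as absent, and is
       * not deleted underneath the open key handle. */
      static int fieldIsLive(RedisModuleKey *key, RedisModuleString *field) {
          RedisModuleString *value = NULL;
          RedisModule_HashGet(key, REDISMODULE_HASH_NONE, field, &value, NULL);
          if (value == NULL) return 0; /* missing or expired; handle unaffected */
          RedisModule_FreeString(NULL, value);
          return 1;
      }
      ```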
      
      Nice to have (not implemented):
      * RedisModule_CloseKey() - We could locally active-expire up to 100
        items.
      
      Note:
      The length reported to modules will be wrong, just as it is for Redis
      itself (it counts expired fields).
  8. 05 Jun, 2024 1 commit
  9. 04 Jun, 2024 4 commits
    • Prevent negative expire parameter in HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT commands (#13310) · 9a2c6ba4
      debing.sun authored
      
      
      1. Don't allow the expire parameter of the
      HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT commands to be negative.
      
      2. Remove dead code reported by Coverity:
      when `unit` is not `UNIT_SECONDS`, the second `if (expire > (long long)
      EB_EXPIRE_TIME_MAX)` check is dead code.
      ```c
      # t_hash.c
      2988    /* Check expire overflow */
            	cond_at_most: Condition expire > 281474976710655LL, taking false branch. Now the value of expire is at most 281474976710655.
      2989    if (expire > (long long) EB_EXPIRE_TIME_MAX) {
      2990        addReplyErrorExpireTime(c);
      2991        return;
      2992    }
      
      2994    if (unit == UNIT_SECONDS) {
      2995        if (expire > (long long) EB_EXPIRE_TIME_MAX / 1000) {
      2996            addReplyErrorExpireTime(c);
      2997            return;
      2998        }
      2999        expire *= 1000;
      3000    } else {
            	at_most: At condition expire > 281474976710655LL, the value of expire must be at most 281474976710655.
            	dead_error_condition: The condition expire > 281474976710655LL cannot be true.
      3001        if (expire > (long long) EB_EXPIRE_TIME_MAX) {
            	
      CID 494223: (#1 of 1): Logically dead code (DEADCODE)
      dead_error_begin: Execution cannot reach this statement: addReplyErrorExpireTime(c);.
      3002            addReplyErrorExpireTime(c);
      3003            return;
      3004        }
      3005    }
      ```
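      
      A sketch of the tightened validation (shape only; see t_hash.c for the
      final code):
      ```c
      /* Reject a negative expire up front; then a single overflow check per
       * unit suffices -- the msec branch no longer needs its own re-check. */
      if (expire < 0) {
          addReplyErrorExpireTime(c);
          return;
      }
      if (expire > (long long) EB_EXPIRE_TIME_MAX) {
          addReplyErrorExpireTime(c);
          return;
      }
      if (unit == UNIT_SECONDS) {
          if (expire > (long long) EB_EXPIRE_TIME_MAX / 1000) {
              addReplyErrorExpireTime(c);
              return;
          }
          expire *= 1000;
      }
      ```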
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
    • Fix crash due to unblock client during slot migration (#13311) · f36b5a85
      gms authored
      
      
      In #13224, we found a crash during cluster slot migration but didn't
      know why. So I checked all the `return C_OK` paths in processCommand
      to see if we were missing a duration reset, and found this one.
      
      This fix is like #12247: when we reject the command, we should reset
      the duration. I tested it and verified that it fixes #13224.
      
      So the cause may be that a client was blocked on a stream and then,
      during slot migration, got a redirect, which crashed the server.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
    • Use lookupKeyWrite() for hpersist command (#13321) · 293a68af
      Ozan Tezcan authored
      As hpersist is a write command, we should use lookupKeyWrite() instead
      of lookupKeyRead() to fetch the key.
    • Fix crash in RM_ScanKey() when used with hexpire (#13320) · 44352bee
      Ozan Tezcan authored
      RM_ScanKey() was overlooked while introducing hash field expiration. 
      An assert is triggered when it is called on a hash key with
      OBJ_ENCODING_LISTPACK_EX encoding.
      
      I've changed the code to handle listpackex encoding properly.
  10. 03 Jun, 2024 1 commit
    • Fix returned value nextExpireTime by ebExpire() (#13313) · 56169112
      Moti Cohen authored
      In the `ebuckets` structure, on `ebExpire()`, if the callback
      indicated to update the item's expiration time and return it back to
      ebuckets (`ACT_UPDATE_EXP_ITEM`), then the returned `nextExpireTime`
      should be updated, if needed. In addition, the sentinel for an invalid
      `nextExpireTime` was changed from 0 to `EB_EXPIRE_TIME_INVALID`.
  11. 30 May, 2024 3 commits
    • Free current client asynchronously after user permissions changes (#13274) · 50569a90
      Valentino Geron authored
      The crash happens when the user that triggers the permission changes
      is itself affected (and should eventually be disconnected).
      
      To handle such a scenario, we should use the
      `CLIENT_CLOSE_AFTER_COMMAND` flag.
      
      This commit encapsulates all the places that should be handled the
      same way in `deauthenticateAndCloseClient`.
      
      Also:
      * bugfix: during the ACL LOAD we ignore clients that are marked as
      `CLIENT MASTER`
    • dynamically list test files (#13220) · 5a3534f9
      jonghoonpark authored
      **Related issue**
      https://github.com/redis/redis/issues/13219
      
      **Motivation**
      Currently we have to manually update the all_tests variable when
      introducing new test files.
      
      **Modification**
      I have modified it to list test files dynamically. Rather than adding
      every test file, it only picks up test files from the following 4
      paths
      
      - unit
      - unit/type
      - unit/cluster
      - integration
      
      so that it doesn't deviate too much from what we already do
      
      **Result**
      - dynamically list test files to all_tests variable
      - close issue https://github.com/redis/redis/issues/13219
      
      
      
      **Additional information**
      - Removed the `list-common.tcl` file and added a
      `generate_largevalue_test_array` proc in `util.tcl`, because
      `list-common.tcl` is not a test file.
      - There is an order dependency, so I added code to the "Is a ziplist
      encoded Hash promoted on big payload?" test that resets
      hash-max-listpack-value to the default (64).
      
      ---------
      Signed-off-by: jonghoonpark <dev@jonghoonpark.com>
      Co-authored-by: debing.sun <debing.sun@redis.com>
    • Hash Field Expiration (#13303) · 7b9e9606
      debing.sun authored
      ## Background
      
      This PR introduces support for field-level expiration in Redis hashes. Previously, Redis supported expiration only at the key level, but this enhancement allows setting expiration times for individual fields within a hash.
      
      ## New commands
      * HEXPIRE
      * HEXPIREAT
      * HEXPIRETIME
      * HPERSIST
      * HPEXPIRE
      * HPEXPIREAT
      * HPEXPIRETIME
      * HPTTL
      * HTTL
      
      ## Short example
      from @moticless
      ```sh
      127.0.0.1:6379>  hset myhash f1 v1 f2 v2 f3 v3                                                   
      (integer) 3
      127.0.0.1:6379>  hpexpire myhash 10000 NX fields 2 f2 f3                                         
      1) (integer) 1
      2) (integer) 1
      127.0.0.1:6379>  hpttl myhash fields 3 f1 f2 f3                                                                                                                                                                         
      1) (integer) -1
      2) (integer) 9997
      3) (integer) 9997
      127.0.0.1:6379>  hgetall myhash  
      1) "f3"
      2) "v3"
      3) "f2"
      4) "v2"
      5) "f1"
      6) "v1"
      
      ... after 10 seconds ...
      
      127.0.0.1:6379>  hgetall myhash  
      1) "f1"
      2) "v1"
      127.0.0.1:6379>
      ```
      
      ## Expiration strategy
      1. Active expiration
          Redis periodically performs active expiration and deletion of hash
          keys that contain expired fields, with a maximum attempt limit.
      2. Lazy expiration
          When a client touches fields within a hash, Redis checks whether
          the fields are expired. If a field is expired, it will be deleted.
          However, we do not delete expired fields during a traversal; we
          implicitly skip over them.
      
      ## RDB changes
      Add two new rdb types: `RDB_TYPE_HASH_METADATA` and `RDB_TYPE_HASH_LISTPACK_EX`.
      
      ## Notification
      1. Add a `hpersist` notification for the `HPERSIST` command.
      2. Add a `hexpire` notification for the `HEXPIRE`, `HEXPIREAT`,
      `HPEXPIRE` and `HPEXPIREAT` commands.
      
      ## Internal
      1. Add a new data structure, `ebuckets`, which is used to store TTLs
      and keys, enabling quick retrieval of keys based on TTL.
      2. Add a new data structure, `mstr`, similar to sds, which is used to
      store a string together with a TTL.
      
      This work was done by @moticless, @tezc, @ronen-kalish, @sundb; I'm just releasing it.
  12. 29 May, 2024 2 commits
    • Fix position of numfields in H(P)EXPIRE json files (#13301) · f0389f28
      Ozan Tezcan authored
      Fix position of numfields in H(P)EXPIRE json files 
    • HFE to support AOF and replicas (#13285) · 33fc0fbf
      Moti Cohen authored
      * For the replica's sake, rewrite the commands `H*EXPIRE*`, `HSETF`,
      and `HGETF` to carry absolute unix time in msec.
      * On active expiration of a field, propagate HDEL to the replica
      (`propagateHashFieldDeletion()`).
      * On lazy expiration, propagate HDEL to the replica
      (`hashTypeGetValue()` now calls `hashTypeDelete()`, and takes care to
      call `propagateHashFieldDeletion()`).
      * Fix the `H*EXPIRE*` commands such that if given the `LT` flag and
      the field doesn't have any expiration, it is considered a valid
      condition.
      
      Note: replicas don't do any active expiration, and should avoid lazy
      expiration. In `hashTypeGetValue()` they don't check expiration (as
      long as the master didn't request to delete the field, it is valid).
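      
      The rewrite mirrors how EXPIRE is propagated as PEXPIREAT. A sketch
      (argument positions and helper usage are illustrative):
      ```c
      /* Sketch: before propagation, swap the command name for its absolute
       * form and replace the relative TTL with unix time in msec, so the
       * replica and AOF apply the exact same deadline. */
      robj *cmd = createStringObject("HPEXPIREAT", 10);
      robj *when = createStringObjectFromLongLong(commandTimeSnapshot() + ttl_ms);
      rewriteClientCommandArgument(c, 0, cmd);  /* command name */
      rewriteClientCommandArgument(c, 2, when); /* relative TTL -> absolute msec */
      decrRefCount(cmd);
      decrRefCount(when);
      ```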
      
      TODO:
      * Attach `dbid` to HASH metadata. See
      [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
  13. 28 May, 2024 1 commit
    • Fix hscan return value (#13297) · 6a11d458
      Ozan Tezcan authored
      In the last step of hscan, while replying to the client, we assume all
      items in the result list are keys, which are mstr instances. However,
      there might be values, which are sds instances.
      
      Added a check to avoid calling mstrlen() for value objects.
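      
      The shape of the fix, sketched (the loop below is illustrative; hscan
      replies alternate key, value):
      ```c
      for (int i = 0; i < count; i++) {
          char *item = elements[i];
          int isKey = (i % 2 == 0);
          /* Keys are mstr instances, values plain sds -- pick the matching
           * length function instead of calling mstrlen() on everything. */
          size_t len = isKey ? mstrlen(item) : sdslen(item);
          addReplyBulkCBuffer(c, item, len);
      }
      ```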
      
      To reproduce:
      ```
      127.0.0.1:6379> hset myhash1 a 11111111111111111111111111111111111111111111111111111111111111111
      (integer) 0
      127.0.0.1:6379> hscan myhash1 0
      1) "0"
      2) 1) "a"
         2) "11111111111111111111111111111111111111111111111111111111111111111\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
      ```
  14. 27 May, 2024 2 commits
  15. 26 May, 2024 2 commits
  16. 25 May, 2024 1 commit
    • redis-cli --keystats and --keystats-samples with --top and --cursor (#12826) · 6801a3ce
      Yves LeBras authored
      
      
      Added the `--keystats` and `--keystats-samples <n>` commands.
      
      The new commands combine memkeys and bigkeys with additional
      distribution data.
      We often run memkeys and bigkeys one after the other, so it is
      convenient to have just one command.
      Distribution and top-10 key sizes are useful when multiple keys are
      taking a lot of memory.
      
      Like for memkeys and bigkeys, we can use `-i <n>` and Ctrl-C to
      interrupt the scan loop. It will still show the statistics on the keys
      sampled so far.
      We can use two new optional parameters:
      - `--cursor <n>` to continue from the last cursor after an interruption
      of the scan.
      - `--top <n>` to change the number of top key sizes (10 by default).
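      
      Illustrative invocations combining these options:
      ```
      # full scan with size/length distribution, 0.1 sec between SCAN batches
      redis-cli --keystats -i 0.1
      
      # sample at most 100 elements per key, show the top 20 key sizes
      redis-cli --keystats-samples 100 --top 20
      
      # resume a previously interrupted scan from its last reported cursor
      redis-cli --keystats --cursor 1620480
      ```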
      
      Implemented a fix for the `--memkeys-samples 0` issue in order to use
      `--keystats-samples 0`.
      
      Used hdr_histogram for the key size distribution.
      
      For key length, hdr_histogram seemed overkill and preset ranges were
      used.
      
      The memory used by keystats with hdr_histogram is around 7MB (compared
      to 3MB for memkeys or bigkeys).
      
      Execution time is roughly equivalent to the memkeys and bigkeys times
      added together, as each scan loop issues more commands (key type, key
      size, key length/cardinality).
      
      We can redirect the output to a file. In that case, no color or text
      refresh will happen.
      
      Limitation:
      - As the information printed during the loop is refreshed (moving cursor
      up), stderr and information not fitting in the terminal window (width or
      height) might create some refresh issues.
      
      Comments:
      - config.top_sizes_limit could be used globally like config.count, but
      it is passed as parameter to be consistent with config.memkeys_samples.
      - Not sure if we should move some utility functions to cli-common.c.
      
      Got some tips and help from @ofirluzon.
      
      ---------
      Co-authored-by: Binbin <binloveplay1314@qq.com>
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      Co-authored-by: debing.sun <debing.sun@redis.com>
  17. 23 May, 2024 2 commits
    • Add Statistics hashes_with_expiry_fields to INFO (#13275) · f34f2ade
      Moti Cohen authored
      Added hashes_with_expiry_fields.
      Optimally it would be better to have a statistic that counts all
      fields with expiry, but that requires careful logic and computation,
      following and diving deep into listpacks and hashes. This statistic is
      trivial to achieve and is reflected by the global HFE DS, which has
      built-in enumeration of all the hashes that are registered in it.
    • Fix ebuckets stop indication during ebExpire() (#13287) · ae6df30e
      Moti Cohen authored
      On active expiration of buckets, in `ebExpire()` -> `ebSegExpire()`,
      if the callback `onExpireItem()` indicated to stop, the iterator
      (`iter`) had already been wrongly advanced to point to the next item.
      In turn, the segment would be updated without the current item.
  18. 22 May, 2024 2 commits
    • Improve performance of hfe listpack (#13279) · a25b1539
      Ozan Tezcan authored
      
      
      This PR contains a few optimizations for the hfe listpack.
      - Hfe fields are ordered by TTL in the listpack. There are two cases
      where we want to search the listpack according to TTLs:
      - As part of active expiry, we need to find the fields that are
      expired, e.g. find fields that have smaller TTLs than a given
      timestamp.
      - When we want to add a new field, we need to find the correct
      position to maintain the order by TTL, e.g. find the field that has a
      higher TTL than the one we want to insert.
      
      Iterating with lpNext() to compare TTLs has a performance cost, as
      lpNext() calls lpValidateIntegrity() for each entry. Instead, this PR
      adds `lpFindCb()` to the listpack, which accepts a comparator callback
      (see the sketch after this list). It preserves the same validation
      logic as lpFind(), which is faster than searching with lpNext().
        
      - We have a field name, value, and TTL for a single hfe field.
      Inserting these items one by one into the listpack is costly.
      Especially since we place fields according to TTL, most additions end
      up in the middle of the listpack, and each insert causes a realloc +
      memmove. This PR introduces `lpBatchInsert()` to add multiple items in
      one go.
      
      - For hsetf, if we are going to update the value and TTL at the same
      time, currently we update the value first and later update the TTL
      (two distinct listpack operations). This PR improves it by doing both
      with a single update operation.
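      
      A sketch of the comparator-callback idea behind `lpFindCb()` (the
      callback signature here is an assumption for illustration; see
      listpack.c for the real one):
      ```c
      /* Assumed callback shape: return non-zero to stop at entry 'p', whose
       * decoded content arrives as (s, slen) -- s == NULL means an integer
       * entry with its value in slen, as with lpGetValue(). */
      typedef int (*lpFindCallback)(const unsigned char *lp, unsigned char *p,
                                    void *user, unsigned char *s, long long slen);
      
      /* Example predicate: stop at the first integer entry (a TTL) greater
       * than the threshold passed via 'user'. */
      static int ttlGreaterCb(const unsigned char *lp, unsigned char *p,
                              void *user, unsigned char *s, long long slen) {
          (void)lp; (void)p;
          long long threshold = *(long long *)user;
          return s == NULL && slen > threshold;
      }
      ```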
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
    • sanitize dump payload for HFE (#13278) · 95cbe879
      debing.sun authored
      Add the following validations:
      1. Get TTL using the lpGetIntegerValue() method instead of lpGetValue(),
      Ref https://github.com/redis/redis/pull/13209#discussion_r1602569422
      
      
      2. The TTL of listpackex is a number within the valid range
      (0~EB_EXPIRE_TIME_MAX).
      3. The TTL fields of listpackex are ordered.
      4. The TTL of hashtable is within the valid range
      (0~EB_EXPIRE_TIME_MAX).
      
      Other:
      Fix the missing handling of OBJ_ENCODING_LISTPACK_EX in
      dismissHashObject().
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
  19. 21 May, 2024 2 commits
    • Log the real reason for why posix_fadvise failed (#13246) · e92363e2
      Ted Lyngmo authored
      
      
      `reclaimFilePageCache` did not set `errno` but `rdbSaveInternal` which
      is logging the error assumed it did. This makes sure `errno` is set.
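      
      posix_fadvise() reports failure via its return value rather than by
      setting errno, so the fix stores that value explicitly; a sketch:
      ```c
      #include <errno.h>
      #include <fcntl.h>
      
      static int reclaimFilePageCache(int fd, size_t offset, size_t length) {
          int ret = posix_fadvise(fd, offset, length, POSIX_FADV_DONTNEED);
          if (ret != 0) {
              errno = ret; /* let the caller (rdbSaveInternal) log the real reason */
              return -1;
          }
          return 0;
      }
      ```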
      
      Fixes #13245
      Signed-off-by: Ted Lyngmo <ted@lyncon.se>
    • Have consistent behavior of SPUBLISH within multi/exec like regular command (#13276) · 9ffc35c9
      debing.sun authored
      
      
      This PR is based on the commits from PR #12944.
      
      Allow the SPUBLISH command within multi/exec on a replica.
      
      Behavior on unstable:
      
      ```
      127.0.0.1:6380> CLUSTER NODES
      39ce8aa20f1f0d91f1a88d976ee1926dfefcdf1a 127.0.0.1:6380@16380 myself,slave 8b0feb120b68aac489d6a5af9c77dc40d71bc792 0 0 0 connected
      8b0feb120b68aac489d6a5af9c77dc40d71bc792 127.0.0.1:6379@16379 master - 0 1705091681202 0 connected 0-16383
      127.0.0.1:6380> SPUBLISH hello world
      (integer) 0
      127.0.0.1:6380> MULTI
      OK
      127.0.0.1:6380(TX)> SPUBLISH hello world
      QUEUED
      127.0.0.1:6380(TX)> EXEC
      (error) MOVED 866 127.0.0.1:6379
      ```
      
      With this change:
      
      ```
      127.0.0.1:6380> SPUBLISH hello world
      (integer) 0
      127.0.0.1:6380> MULTI
      OK
      127.0.0.1:6380(TX)> SPUBLISH hello world
      QUEUED
      127.0.0.1:6380(TX)> EXEC
      1) (integer) 0
      ```
      
      ---------
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: oranagra <oran@redislabs.com>
  20. 18 May, 2024 1 commit
  21. 17 May, 2024 1 commit
    • Hfe serialization listpack (#13243) · 323be4d6
      Ronen Kalish authored
      Add RDB de/serialization for HFE
      
      This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and
      `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data.
      When the hash RAM encoding is dict, it will be saved in the former, and
      when it is listpack it will be saved in the latter.
      Both formats just add the TTL value for each field after the data that
      was previously saved, i.e. HASH_METADATA will save the number of
      entries and, for each entry, key, value and TTL, whereas listpack is
      saved as a blob.
      On read, the usual dict <--> listpack conversion takes place if
      required.
      In addition, when reading a hash that was saved as a dict, fields are
      actively expired if their expiry is due. Currently this also holds for
      listpack encoding, but it is supposed to be removed.
      
      TODO:
      Remove active expiry on load when loading from listpack format (unless
      we'll decide to keep it)
  22. 16 May, 2024 1 commit
  23. 15 May, 2024 1 commit
  24. 14 May, 2024 2 commits
    • Fix test failure due to differing reply format of XREADGROUP under RESP3 in MULTI (#13255) · ffbdf2f6
      debing.sun authored
      This test was introduced by #13251.
      Normally we auto-transform the reply format of XREADGROUP to an array
      under RESP3 (see trasformer_funcs).
      But when we execute the XREADGROUP command in a MULTI it can't work,
      which caused the new test to fail.
      The solution is to verify the reply of XREADGROUP in advance rather
      than inside MULTI.
      
      Failed validate schema CI:
      https://github.com/redis/redis/actions/runs/9025128323/job/24800285684
      
      
      
      ---------
      Co-authored-by: guybe7 <guy.benoish@redislabs.com>
    • Add defragment support for HFE (#13229) · 80be2cc2
      debing.sun authored
      
      
      ## Background
      1. All hash objects that contain HFE are referenced by db->hexpires.
      2. All fields in a dict hash object with HFE are referenced by an
      ebucket.
      
      So when we defrag the hash object or the field in a dict with HFE, we
      also need to update the references in them.
      
      ## Interface
      1. Add a new interface `ebDefragItem`, which can accept a defrag
      callback to defrag items in ebuckets, and simultaneously update their
      references in the ebucket.
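      
      Based on that description, the interface could look roughly like this
      (types and signature are assumptions for illustration):
      ```c
      /* Reallocate one item registered in an ebuckets instance through the
       * caller-supplied callback, and fix up the bucket's reference to it. */
      typedef void *(*ebDefragFunction)(void *item);
      int ebDefragItem(ebuckets *eb, EbucketsType *type, void **item,
                       ebDefragFunction defragfn);
      ```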
      
      ## Mainly changes
      1. The key type of the dict of a hash object is no longer sds, so add
      a new `activeDefragHfieldDict()` to defrag the dict instead of
      `activeDefragSdsDict()`.
      2. When we defrag the dict of a hash object using `dictScanDefrag()`,
      we always set the defrag callback `defragKey` of `dictDefragFunctions`
      to NULL, because we can't reallocate a field without updating its
      reference in ebuckets.
      Instead, we defrag the field of the dict and update its reference in
      the `dictScanDefrag` callback of dictScanFunction().
      3. When we defrag a hash robj with HFE, we use `ebDefragItem` to
      defrag the robj and update the reference in db->hexpires.
      
      ## TODO:
      Defrag the ebuckets structure incrementally; this will be handled in a
      future PR.
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Moti Cohen <moti.cohen@redis.com>
  25. 13 May, 2024 1 commit
    • Fix hgetf/hsetf reply type by returning string (#13263) · 5066e6e9
      Ozan Tezcan authored
      If the encoding is listpack, the hgetf and hsetf commands reply with
      the field value typed as an integer.
      This PR fixes it by returning a string.
      
      Problematic cases:
      ```
      127.0.0.1:6379> hset hash one 1
      (integer) 1
      127.0.0.1:6379> hgetf hash fields 1 one
      1) (integer) 1
      127.0.0.1:6379> hsetf hash GETOLD fvs 1 one 2
      1) (integer) 1
      127.0.0.1:6379> hsetf hash DOF GETNEW fvs 1 one 2
      1) (integer) 2
      ```
      
      Additional fixes:
      - hgetf/hsetf command description text
      
      Fixes #13261, #13262