1. 01 Jul, 2024 1 commit
    • Fix possible crash due to OOM panic on invalid command (#13380) · 69b7137d
      Oran Agra authored
      getKeysUsingKeySpecs had the range check AFTER the allocation of the
      keys buffer, so invalid arguments could overflow the computed buffer
      size and lead to an OOM panic.
      The allocated memory is only used after the range check, so there's no
      risk of buffer overrun.
      The OOM panic can happen on 32-bit builds, or 64-bit builds running on
      systems with less than 4GB of RAM, and is reachable via COMMAND
      GETKEYSANDFLAGS and ACL key-name validation.
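
      A minimal sketch of the fix's shape (names beyond getKeysUsingKeySpecs
      are illustrative, not the actual Redis code): validate the key range
      before sizing the buffer, so a bogus count can never reach the
      allocator.

      ```c
      /* Hypothetical sketch: range-check first, allocate second. */
      static int getKeysSketch(int argc, int first, int last, getKeysResult *result) {
          if (first <= 0 || last >= argc || last < first)
              return -1;                    /* reject invalid arguments up front */
          int count = last - first + 1;     /* cannot overflow once validated */
          keyReference *keys = getKeysPrepareResult(result, count); /* now safe */
          (void)keys;
          return count;
      }
      ```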
      69b7137d
  2. 26 Jun, 2024 2 commits
    • HFE - count in command must match actual number of fields (#13369) · a9267137
      Moti Cohen authored
      There was a wrong preliminary assumption that the vector of arguments
      may optionally contain more entries than count.
      This error-prone approach led to an actual bug in that case.
      This PR enforces that the argument vector matches count; see the
      illustration below.
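
      A hedged illustration of the enforced contract (replies follow the HFE
      spec: 1 means the TTL was set):

      ```
      127.0.0.1:6379> HSET h f1 v1 f2 v2
      (integer) 2
      127.0.0.1:6379> HEXPIRE h 100 FIELDS 2 f1 f2
      1) (integer) 1
      2) (integer) 1
      ```

      Passing FIELDS 3 with only two field arguments (or vice versa) is now
      rejected with an error instead of being silently tolerated.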
      
      Also fixed flaky HRANDFIELD test.
      a9267137
    • Don't keep global replication buffer reference for replicas marked CLIENT_CLOSE_ASAP (#13363) · 52e12d8b
      debing.sun authored
      
      
      In certain situations, we might generate a large number of propagates
      (e.g., multi/exec, a Lua script, or a single command generating tons of
      propagations) within an event loop.
      During the process of propagating to a replica, if the replica is
      disconnected (marked as CLIENT_CLOSE_ASAP) due to exceeding the output
      buffer limit, we should remove its reference to the global replication
      buffer; otherwise the dangling reference prevents the buffer from being
      properly trimmed.
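
      A hedged sketch of the idea (field and helper names follow the real
      replication-buffer machinery, but the exact code here is illustrative):

      ```c
      /* When a replica is marked to be closed ASAP, detach it from the
       * shared replication buffer so trimming is no longer pinned by it. */
      if ((c->flags & CLIENT_CLOSE_ASAP) && c->ref_repl_buf_node) {
          replBufBlock *o = listNodeValue(c->ref_repl_buf_node);
          o->refcount--;                    /* release this replica's pin */
          c->ref_repl_buf_node = NULL;      /* buffer can now be trimmed */
      }
      ```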
      
      ---------
      Co-authored-by: oranagra <oran@redislabs.com>
      52e12d8b
  3. 25 Jun, 2024 1 commit
    • Fix H(P)EXPIREAT command to propagate HDEL as well (#13364) · 5eac99c3
      Moti Cohen authored
      The H(P)EXPIREAT command might delete fields when the absolute time is in the
      past. Those HDELs need to be propagated as well.
       
      In general, as we need to propagate the H(P)EXPIRE(AT) command to the replica,
      each field mentioned in the command should be categorized into one of four
      options:
      1. Managed to update the field’s expiration time - propagate it to the replica
         as part of the HPEXPIREAT command.
      2. Deleted the field because the time is in the past - also propagate an HDEL
         command to delete the field, and remove the field from the propagated
         HPEXPIREAT.
      3. Condition not met for the field - remove the field from the propagated
         HPEXPIREAT command.
      4. Field does not exist - remove the field from the propagated HPEXPIREAT
         command.
      
      If none of the provided fields falls under option 1, then the HPEXPIREAT
      command is not propagated to the replica at all.

      This approach is aligned with the EXPIRE command:
      if a given key has already expired, then DEL is propagated instead of the
      EXPIRE command; if the condition is not met, the command is rejected;
      otherwise, EXPIRE is propagated for the given key.
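
      A hedged redis-cli illustration of the four options (per-field reply
      codes follow the HFE spec: 1 = TTL set, 2 = field deleted because the
      time is in the past, 0 = condition not met, -2 = no such field):

      ```
      127.0.0.1:6379> HSET h f1 v1 f2 v2 f3 v3
      (integer) 3
      127.0.0.1:6379> HEXPIRE h 100 FIELDS 1 f1
      1) (integer) 1
      127.0.0.1:6379> HPEXPIREAT h 1234 NX FIELDS 4 f1 f2 f3 nofield
      1) (integer) 0
      2) (integer) 2
      3) (integer) 2
      4) (integer) -2
      ```

      Here only HDEL h f2 f3 would be propagated: f1 fails the NX condition,
      nofield doesn't exist, and no field qualifies for a propagated
      HPEXPIREAT.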
      5eac99c3
  4. 24 Jun, 2024 1 commit
    • Adapt HRANDFIELD to HFE feature (#13348) · e26ea35c
      Moti Cohen authored
      Considerations for the selected implementation of HRANDFIELD with the HFE feature:
      
      HRANDFIELD might access any of the fields in the hash, some of which
      might be expired. So the implementation of HRANDFIELD along with HFEs
      could take one of two options:
      1. Expire hash fields before diving into handling HRANDFIELD.
      2. Refine the HRANDFIELD cases to deal with expired fields.
      
      Regarding the first option, as a reference, the RANDOMKEY command also
      declares O(1) complexity, yet might be stuck in a very long (but not infinite)
      loop trying to find non-expired keys. Furthermore, RANDOMKEY also evicts expired
      keys along the way even though it is categorized as a read-only command. Note
      that the HRANDFIELD case is more lightweight than RANDOMKEY, since HFE has a
      much more effective and aggressive active expiration for fields running behind
      the scenes.
      
      The second option introduces additional implementation complexity to HRANDFIELD.
      We could further refine the HRANDFIELD cases to differentiate between scenarios
      with many expired fields versus few expired fields, and adjust based on the
      percentage of expired fields. However, this approach could still lead to long
      loops or necessitate expiring fields before selecting them. For the “lightweight”
      cases, the expiration is also expected to be lightweight.
      
      Considering the pros and cons, the fact that HRANDFIELD is an infrequent
      command (particularly with HFEs), and the fact that we have effective active
      expiration for hash fields behind the scenes, it is better to keep it simple
      and choose option 1.
      
      Other changes:
      * Don't mark the command dirty in the internal hashTypeExpire(). Doing so
        caused the read-only HRANDFIELD command to be accidentally propagated
        (this flag should be set at a higher level, by the command functions).
      * Align `hashTypeExpireIfNeeded()` and `hashTypeGetValue()` more closely
        with the keyspace's `expireIfNeeded()` logic.
      e26ea35c
  5. 14 Jun, 2024 1 commit
    • Reply with array of return codes if the key does not exist for HFE commands (#13343) · 4aa25d04
      Ozan Tezcan authored
      Currently, HFE commands reply with an empty array if the key does not
      exist. However, a non-existing key and an empty key are the same thing:
      the fields given in the command do not exist in the empty key.
      So, replying with an array of 'no field' error codes (-2) suits Redis
      logic better. Similarly, `hmget` returns an array of nulls if the key
      does not exist.
      
      After this PR:
      ```
      127.0.0.1:6379> hpersist missingkey fields 2 a b
      1) (integer) -2
      2) (integer) -2
      ```
      4aa25d04
  6. 11 Jun, 2024 1 commit
    • Add new hexpired notification for HFE (#13329) · ed10f737
      debing.sun authored
      
      
      When a hash field expires, we now send a new `hexpired` notification.
      It mainly covers the following three cases:
      1. When a field is expired by active expiration.
      2. When a field is expired by lazy expiration.
      3. When the user uses an `h(p)expire(at)` command, the user will also
      get a `hexpired` notification if the field expires during the command.
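
      A hedged illustration of observing the event (the channel name follows
      the standard keyspace-notification convention; db 0 and the key name
      are assumptions):

      ```
      127.0.0.1:6379> CONFIG SET notify-keyspace-events KEA
      OK
      127.0.0.1:6379> SUBSCRIBE __keyevent@0__:hexpired
      1) "subscribe"
      2) "__keyevent@0__:hexpired"
      3) (integer) 1
      1) "message"
      2) "__keyevent@0__:hexpired"
      3) "myhash"
      ```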
      
      ## Improvement
      1. Now if more than one field expires within an hmget command, we send
      only a single `hexpired` notification.
      2. When a field with a TTL is deleted by commands like hdel without
      updating the global DS, active expire will not send a notification.
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Moti Cohen <moti.cohen@redis.com>
      ed10f737
  7. 10 Jun, 2024 2 commits
    • Reserve 2 bits out of EB_EXPIRE_TIME_MAX for possible future use (#13331) · f01fdc39
      Moti Cohen authored
      Reserve 2 bits out of hash-field expiration time (`EB_EXPIRE_TIME_MAX`)
      for possible future lightweight indexing/categorizing of fields. It can
      be achieved by hacking HFE as follows:
      ```
      HPEXPIREAT key [ 2^47 + USER_INDEX ] FIELDS numfields field [field …]
      ```
      
      Redis would also need to expose some kind of `HEXPIRESCAN` and
      `HEXPIRECOUNT` for this idea. Yet to be better defined.
      
      `HFE_MAX_ABS_TIME_MSEC` constraint must be enforced only at API level.
      Internally, the expiration time can be up to `EB_EXPIRE_TIME_MAX` for
      future readiness.
      f01fdc39
    • HFE - Avoid lazy expire if called by modules + cleanup (#13326) · ce121b92
      Moti Cohen authored
      We need to be careful when called by modules, since the modules API allows
      opening and closing key handles. We don't want to invalidate a handle
      underneath the caller.

      * hashTypeExists(), hashTypeGetValueObject() - will return the logical
      state of the field. A flag will indicate noExpire.
      * RM_HashGet() - will get NULL if the field expired. Fields won’t be
      deleted (see the sketch below).
      * RM_ScanKey() - might return 0 items if all fields got expired. Fields
      won’t be deleted.
      * RM_HashSet() - if setting, then override the expired field. If deleting,
      we can either delete or leave it to active expiration. XX/NX are logically
      correct (verified with tests).
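
      A hedged sketch of the module-facing behavior, using the documented
      module API (the surrounding code is illustrative):

      ```c
      /* An expired field reads as absent (NULL), but is NOT lazily deleted,
       * so the open key handle stays valid for further use. */
      RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname, REDISMODULE_READ);
      RedisModuleString *val = NULL;
      RedisModule_HashGet(key, REDISMODULE_HASH_NONE, field, &val, NULL);
      if (val == NULL) {
          /* Field missing or logically expired; nothing was deleted
           * underneath us. */
      }
      RedisModule_CloseKey(key);
      ```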
      
      Nice to have (not implemented):
      * RedisModule_CloseKey() - we could locally active-expire up to 100 items.

      Note:
      The length reported to modules will be wrong just like in Redis itself
      (it counts expired fields).
      ce121b92
  8. 05 Jun, 2024 1 commit
  9. 04 Jun, 2024 2 commits
    • Prevent negative expire parameter in HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT commands (#13310) · 9a2c6ba4
      debing.sun authored
      
      
      1. Don't allow the expire parameter of the
      HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT commands to be negative.

      2. Remove dead code reported by Coverity:
      when `unit` is not `UNIT_SECONDS`, the second `if (expire > (long long)
      EB_EXPIRE_TIME_MAX)` check is dead code.
      ```c
      # t_hash.c
      2988    /* Check expire overflow */
            	cond_at_most: Condition expire > 281474976710655LL, taking false branch. Now the value of expire is at most 281474976710655.
      2989    if (expire > (long long) EB_EXPIRE_TIME_MAX) {
      2990        addReplyErrorExpireTime(c);
      2991        return;
      2992    }
      
      2994    if (unit == UNIT_SECONDS) {
      2995        if (expire > (long long) EB_EXPIRE_TIME_MAX / 1000) {
      2996            addReplyErrorExpireTime(c);
      2997            return;
      2998        }
      2999        expire *= 1000;
      3000    } else {
            	at_most: At condition expire > 281474976710655LL, the value of expire must be at most 281474976710655.
            	dead_error_condition: The condition expire > 281474976710655LL cannot be true.
      3001        if (expire > (long long) EB_EXPIRE_TIME_MAX) {
            	
      CID 494223: (#1 of 1): Logically dead code (DEADCODE)
      dead_error_begin: Execution cannot reach this statement: addReplyErrorExpireTime(c);.
      3002            addReplyErrorExpireTime(c);
      3003            return;
      3004        }
      3005    }
      ```
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      9a2c6ba4
    • Fix crash in RM_ScanKey() when used with hexpire (#13320) · 44352bee
      Ozan Tezcan authored
      RM_ScanKey() was overlooked while introducing hash field expiration.
      An assert is triggered when it is called on a hash key with
      OBJ_ENCODING_LISTPACK_EX encoding.

      I've changed the code to handle the listpackex encoding properly.
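
      A hedged sketch of the call path that used to trip the assert, using
      the documented module scan API (the callback body is illustrative):

      ```c
      static void scan_cb(RedisModuleKey *key, RedisModuleString *field,
                          RedisModuleString *value, void *privdata) {
          /* Inspect each field/value pair here; value may be NULL for
           * non-hash key types. */
          (void)key; (void)field; (void)value; (void)privdata;
      }

      static void scan_hash(RedisModuleKey *key) {
          RedisModuleScanCursor *cur = RedisModule_ScanCursorCreate();
          while (RedisModule_ScanKey(key, cur, scan_cb, NULL)) { /* next batch */ }
          RedisModule_ScanCursorDestroy(cur);
      }
      ```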
      44352bee
  10. 30 May, 2024 2 commits
    • Free current client asynchronously after user permissions changes (#13274) · 50569a90
      Valentino Geron authored
      The crash happens when the user that triggers the permission changes
      should be affected (and should be disconnected eventually).
      
      To handle such a scenario, we should use the
      `CLIENT_CLOSE_AFTER_COMMAND` flag.
      
      This commit encapsulates all the places that should be handled in the
      same way in `deauthenticateAndCloseClient` (sketched below).
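
      A hedged sketch of the helper's shape (the function and flag names come
      from the commit; the body is illustrative): instead of freeing the
      current client in place, flag it so it is torn down only after the
      ongoing command returns.

      ```c
      static void deauthenticateAndCloseClient(client *c) {
          c->authenticated = 0;                   /* drop the user's auth */
          c->flags |= CLIENT_CLOSE_AFTER_COMMAND; /* free asynchronously */
      }
      ```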
      
      Also:
      * bugfix: during the ACL LOAD we ignore clients that are marked as
      `CLIENT MASTER`
      50569a90
    • dynamically list test files (#13220) · 5a3534f9
      jonghoonpark authored
      **Related issue**
      https://github.com/redis/redis/issues/13219
      
      **Motivation**
      Currently we have to manually update the all_tests variable when
      introducing new test files.
      
      **Modification**
      I modified it to list test files dynamically. Instead of adding all
      test files, it only picks up test files from the following 4 paths

      - unit
      - unit/type
      - unit/cluster
      - integration

      so that it doesn't deviate too much from what we already do.
      
      **Result**
      - dynamically list test files to all_tests variable
      - close issue https://github.com/redis/redis/issues/13219
      
      
      
      **Additional information**
      - removed the `list-common.tcl` file and added a
      `generate_largevalue_test_array` proc in `util.tcl`, because
      `list-common.tcl` is not a test file
      - There is an order dependency, so I added code to the "Is a ziplist
      encoded Hash promoted on big payload?" test that resets
      hash-max-listpack-value to the default (64).
      
      ---------
      Signed-off-by: jonghoonpark <dev@jonghoonpark.com>
      Co-authored-by: debing.sun <debing.sun@redis.com>
      5a3534f9
  11. 29 May, 2024 1 commit
    • HFE to support AOF and replicas (#13285) · 33fc0fbf
      Moti Cohen authored
      * For the replica's sake, rewrite the `H*EXPIRE*`, `HSETF`, `HGETF`
      commands to carry absolute unix time in msec (see the illustration
      below).
      * On active expiration of a field, propagate HDEL to the replica
      (`propagateHashFieldDeletion()`).
      * On lazy expiration, propagate HDEL to the replica (`hashTypeGetValue()`
      now calls `hashTypeDelete()`, and takes care to call
      `propagateHashFieldDeletion()`).
      * Fix the `H*EXPIRE*` commands such that receiving the `LT` flag for a
      field that doesn’t have any expiration is considered a valid condition.
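
      For instance, a relative expiry executed on the master is rewritten to
      its absolute form before being sent to replicas/AOF (the timestamp
      below is illustrative):

      ```
      HEXPIRE h 100 FIELDS 1 f1                  executed on the master
      HPEXPIREAT h 1718000000000 FIELDS 1 f1     what the replica/AOF receives
      ```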
      
      Note, replicas don’t do any active expiration and should avoid lazy
      expiration. On a replica, `hashTypeGetValue()` doesn't check expiration
      (as long as the master didn’t request to delete the field, it is valid).
      
      TODO:
      * Attach `dbid` to HASH metadata. See
      [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      33fc0fbf
  12. 28 May, 2024 1 commit
    • Fix hscan return value (#13297) · 6a11d458
      Ozan Tezcan authored
      In the last step of hscan, while replying to the client, we assume all
      items in the result list are keys, which are mstr instances. However,
      there might be values, which are sds instances.

      Added a check to avoid calling mstrlen() on value objects.
      
      To reproduce:
      ```
      127.0.0.1:6379> hset myhash1 a 11111111111111111111111111111111111111111111111111111111111111111
      (integer) 0
      127.0.0.1:6379> hscan myhash1 0
      1) "0"
      2) 1) "a"
         2) "11111111111111111111111111111111111111111111111111111111111111111\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
      ```
      6a11d458
  13. 26 May, 2024 2 commits
  14. 23 May, 2024 1 commit
    • Add Statistics hashes_with_expiry_fields to INFO (#13275) · f34f2ade
      Moti Cohen authored
      Added hashes_with_expiry_fields.
      Optimally, it would be better to have a statistic that counts all fields
      with expiry, but that requires careful logic and computation, following
      and deep-diving into listpacks and hashes. This statistic is trivial to
      achieve and is reflected by the global HFE DS, which has built-in
      enumeration of all the hashes registered in it.
      f34f2ade
  15. 22 May, 2024 1 commit
  16. 21 May, 2024 1 commit
    • Have consistent behavior of SPUBLISH within multi/exec like regular command (#13276) · 9ffc35c9
      debing.sun authored
      
      
      This PR is based on the commits from PR #12944.
      
      Allow the SPUBLISH command within MULTI/EXEC on a replica.
      
      Behavior on unstable:
      
      ```
      127.0.0.1:6380> CLUSTER NODES
      39ce8aa20f1f0d91f1a88d976ee1926dfefcdf1a 127.0.0.1:6380@16380 myself,slave 8b0feb120b68aac489d6a5af9c77dc40d71bc792 0 0 0 connected
      8b0feb120b68aac489d6a5af9c77dc40d71bc792 127.0.0.1:6379@16379 master - 0 1705091681202 0 connected 0-16383
      127.0.0.1:6380> SPUBLISH hello world
      (integer) 0
      127.0.0.1:6380> MULTI
      OK
      127.0.0.1:6380(TX)> SPUBLISH hello world
      QUEUED
      127.0.0.1:6380(TX)> EXEC
      (error) MOVED 866 127.0.0.1:6379
      ```
      
      With this change:
      
      ```
      127.0.0.1:6380> SPUBLISH hello world
      (integer) 0
      127.0.0.1:6380> MULTI
      OK
      127.0.0.1:6380(TX)> SPUBLISH hello world
      QUEUED
      127.0.0.1:6380(TX)> EXEC
      1) (integer) 0
      ```
      
      ---------
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: oranagra <oran@redislabs.com>
      9ffc35c9
  17. 16 May, 2024 1 commit
  18. 14 May, 2024 2 commits
    • Fix test failure due to differing reply format of XREADGROUP under RESP3 in MULTI (#13255) · ffbdf2f6
      debing.sun authored
      This test was introduced by #13251.
      Normally we auto-transform the reply format of XREADGROUP to an array
      under RESP3 (see trasformer_funcs).
      But when we execute the XREADGROUP command inside MULTI this can't work,
      which caused the new test to fail.
      The solution is to verify the reply of XREADGROUP in advance rather than
      inside MULTI.
      
      Failed validate schema CI:
      https://github.com/redis/redis/actions/runs/9025128323/job/24800285684
      
      
      
      ---------
      Co-authored-by: guybe7 <guy.benoish@redislabs.com>
      ffbdf2f6
    • Add defragment support for HFE (#13229) · 80be2cc2
      debing.sun authored
      
      
      ## Background
      1. All hash objects that contain HFE are referenced by db->hexpires.
      2. All fields in a dict hash object with HFE are referenced by an
      ebucket.
      
      So when we defrag the hash object or the field in a dict with HFE, we
      also need to update the references in them.
      
      ## Interface
      1. Add a new interface `ebDefragItem`, which can accept a defrag
      callback to defrag items in ebuckets, and simultaneously update their
      references in the ebucket.
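
      A hedged, declaration-only sketch of what such a hook could look like
      (the name comes from the commit; types and the exact signature are
      illustrative):

      ```c
      typedef struct ebuckets ebuckets;              /* opaque here */
      typedef struct EbucketsType EbucketsType;
      typedef void *(*ebDefragFunction)(void *item); /* returns moved ptr or NULL */

      /* Defrag one item referenced from an ebuckets structure and, if the
       * allocation moved, store the new pointer back into the bucket. */
      int ebDefragItem(ebuckets *eb, EbucketsType *type, void *item,
                       ebDefragFunction fn);
      ```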
      
      ## Main changes
      1. The key type of the dict of a hash object is no longer sds, so add a
      new `activeDefragHfieldDict()` to defrag the dict instead of
      `activeDefragSdsDict()`.
      2. When we defrag the dict of a hash object using `dictScanDefrag()`,
      we always set the defrag callback `defragKey` of `dictDefragFunctions`
      to NULL, because we can't reallocate a field without updating its
      reference in ebuckets.
      Instead, we defrag the field of the dict and update its reference in
      the `dictScanDefrag` callback of dictScanFunction().
      3. When we defrag a hash robj with HFE, we use `ebDefragItem` to
      defrag the robj and update the reference in db->hexpires.
      
      ## TODO:
      Defragment the ebuckets structure incrementally; this will be handled
      in a future PR.
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Moti Cohen <moti.cohen@redis.com>
      80be2cc2
  19. 13 May, 2024 1 commit
    • Fix hgetf/hsetf reply type by returning string (#13263) · 5066e6e9
      Ozan Tezcan authored
      If the encoding is listpack, the hgetf and hsetf commands reply with
      field values typed as integers.
      This PR fixes that by returning them as strings.
      
      Problematic cases:
      ```
      127.0.0.1:6379> hset hash one 1
      (integer) 1
      127.0.0.1:6379> hgetf hash fields 1 one
      1) (integer) 1
      127.0.0.1:6379> hsetf hash GETOLD fvs 1 one 2
      1) (integer) 1
      127.0.0.1:6379> hsetf hash DOF GETNEW fvs 1 one 2
      1) (integer) 2
      ```
      
      Additional fixes:
      - hgetf/hsetf command description text
      
      Fixes #13261, #13262
      5066e6e9
  20. 09 May, 2024 1 commit
  21. 08 May, 2024 1 commit
    • Add listpack support, hgetf and hsetf commands (#13209) · ca4ed48d
      Ozan Tezcan authored
      **Changes:**
      - Adds listpack support to hash field expiration 
      - Implements hgetf/hsetf commands
      
      **Listpack support for hash field expiration**
      
      We keep field name and value pairs in a listpack for the hash type. With
      this PR, if one of the hash field expiration commands is called on a key
      for the first time, it converts the listpack layout to triplets holding
      field name, value, and TTL per field. If a field does not have a TTL, we
      store zero as the TTL value. Zero is encoded as two bytes in the
      listpack. So, once we convert the listpack to hold triplets, fields that
      don't have a TTL consume those extra 2 bytes per item. Fields are
      ordered by TTL in the listpack to find the field with the minimum
      expiry time efficiently.
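
      A sketch of the converted listpack layout (illustrative):

      ```
      [ field1 value1 ttl1 | field2 value2 ttl2 | ... ]   ordered by ttl; ttl=0 = no TTL
      ```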
      
      **New command implementations as part of this PR:** 
      
      - HGETF command
      
      For each specified field get its value and optionally set the field's
      expiration time in sec/msec /unix-sec/unix-msec:
        ```
        HGETF key 
          [NX | XX | GT | LT]
      [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT
      unix-time-milliseconds | PERSIST]
          <FIELDS count field [field ...]>
        ```
      
      - HSETF command
      
      For each specified field value pair: set field to value and optionally
      set the field's expiration time in sec/msec /unix-sec/unix-msec:
        ```
        HSETF key 
          [DC] 
          [DCF | DOF] 
          [NX | XX | GT | LT] 
          [GETNEW | GETOLD] 
      [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT
      unix-time-milliseconds | KEEPTTL]
          <FVS count field value [field value …]>
        ```
      
      Todo:
      - Performance improvement.
      - rdb load/save
      - aof
      - defrag
      ca4ed48d
  22. 06 May, 2024 1 commit
    • XREADGROUP from PEL should not affect server.dirty (#13251) · 0e1de78f
      guybe7 authored
      Because it does not cause any propagation (arguably it should; see the
      comment in the tcl file).

      The motivation for this fix is that in 6.2, if dirty changed without
      propagation inside MULTI/EXEC, it would cause propagation of EXEC only,
      which would result in the replica sending errors to its master.
      0e1de78f
  23. 03 May, 2024 1 commit
  24. 18 Apr, 2024 1 commit
    • Hash Field Expiration - Basic support · c18ff056
      Moti Cohen authored
      - Add ebuckets & mstr data structures
      - Integrate active & lazy expiration
      - Add most of the commands 
      - Add support for dict (listpack is missing)
      TODOs:  RDB, notification, listpack, HSET, HGETF, defrag, aof
      c18ff056
  25. 16 Apr, 2024 1 commit
    • Allocate Lua VM code with jemalloc instead of libc, and count it used memory (#13133) · 804110a4
      Binbin authored
      
      
      ## Background
      1. Currently Lua memory control does not pass through Redis's zmalloc.c.
      Redis maxmemory cannot limit memory problems caused by users abusing Lua,
      since the Lua VM memory is not part of used_memory.

      2. Since jemalloc is much better (fragmentation and speed), and we know
      it and trust it, we are going to use jemalloc instead of libc to allocate
      the Lua VM code and count it as used memory.
      
      ## Process:
      In this PR, we will use jemalloc in Lua.
      1. Create an arena for all Lua VMs (script and function), which is
      shared, in order to avoid blocking the defragger.
      2. Create a bound tcache for the Lua VM. The Lua VM and the main thread
      are by default in the same tcache, and if there is no isolated tcache,
      Lua may request memory from the tcache which has just been freed by the
      main thread, and vice versa.
      On the other hand, since the Lua VM might be released in a bio thread,
      but tcache is not thread-safe, we need to recreate the tcache every time
      we recreate the Lua VM.
      3. Remove Lua memory statistics from memory fragmentation statistics to
      avoid the effects of Lua memory fragmentation.
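
      A hedged sketch of the underlying jemalloc mechanism (standard
      non-portable jemalloc API; not the actual Redis wrapper code, and a
      real allocator would create the arena/tcache once, not per call):

      ```c
      #include <jemalloc/jemalloc.h>

      static void *lua_alloc_sketch(size_t len) {
          unsigned arena, tcache;
          size_t sz = sizeof(unsigned);
          je_mallctl("arenas.create", &arena, &sz, NULL, 0);  /* dedicated arena */
          je_mallctl("tcache.create", &tcache, &sz, NULL, 0); /* isolated tcache */
          /* Route the allocation to that arena/tcache pair. */
          return je_mallocx(len, MALLOCX_ARENA(arena) | MALLOCX_TCACHE(tcache));
      }
      ```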
      
      ## Other
      Add the following new fields to `INFO DEBUG` (we may promote them to
      INFO MEMORY some day)
      1. allocator_allocated_lua: total number of bytes allocated of lua arena
      2. allocator_active_lua: total number of bytes in active pages allocated
      in lua arena
      3. allocator_resident_lua: maximum number of bytes in physically
      resident data pages mapped in lua arena
      4. allocator_frag_bytes_lua: fragment bytes in lua arena
      
      This is oranagra's idea, and I got some help from sundb.
      
      This solves the third point in #13102.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      804110a4
  26. 02 Apr, 2024 1 commit
    • Change FLUSHALL/FLUSHDB SYNC to run as blocking ASYNC (#13167) · 4df03796
      Moti Cohen authored
      # Overview
      Users utilize the `FLUSHDB SYNC` and `FLUSHALL SYNC` commands for a variety of 
      reasons. The main issue with this command is that if the database becomes 
      substantial in size, the server will be unresponsive for an extended period. 
      Other than freezing application traffic, this may also lead some clients making 
      incorrect judgments about the server's availability. For instance, a watchdog may 
      erroneously decide to terminate the process, resulting in potential adverse 
      outcomes. While a `FLUSH* ASYNC` can address these issues, it might not be used 
      for two reasons: firstly, it's not the default, and secondly, in some cases, the 
      client issuing the flush wants to wait for its completion before repopulating the 
      database.
      
      Between the option of triggering FLUSH* asynchronously in the background without 
      indication for completion versus running it synchronously in the foreground by 
      the main thread, there is another more appealing option. We can block the
      client that requested the flush, execute the flush command in the background, and 
      once done, unblock the client and return notification for completion. This approach 
      ensures the server remains responsive to other clients, and the blocked client 
      receives the expected response only after the flush operation has been successfully 
      carried out.
      
      # Implementation details
      Instead of defining yet another flavor of the flush command, we can modify
      `FLUSHALL SYNC` and `FLUSHDB SYNC` to always run in this new mode.
      
      ## Extending BIO Threads capabilities
      Today, jobs carried out by BIO threads have no way to indicate completion
      to the main thread. We can add this infrastructure by having an
      additional dummy job, coined a completion-job, that is eventually written
      by BIO threads to a response-queue. The main thread takes care of
      consuming items from the response-queue and calling the provided
      callback function of each completion-job.
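
      A hedged sketch of the ordering trick (bioCreateLazyFreeJob is the
      existing BIO entry point; the completion-job function and callback
      names are illustrative):

      ```c
      /* Queue the real work first... */
      bioCreateLazyFreeJob(lazyfreeFreeDatabase, 2, dict_ptr, expires_ptr);
      /* ...then a sentinel job. Because the BIO queue is FIFO, its callback
       * fires on the main thread only after the flush jobs above finished. */
      bioCreateCompletionJob(unblockFlushClientCallback, c);
      ```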
      
      ## FLUSH* SYNC to run as blocking ASYNC
      The `FLUSH* SYNC` commands will be modified to create one or more async
      jobs to flush the DB(s) and afterward push an additional completion-job
      request. By sending the completion-job request only at the end, the main
      thread will be called back only after all the preceding jobs have
      completed their task in the background. During that time, the client of
      the command is suspended and marked as `BLOCKED_LAZYFREE`, whereas any
      other client will be able to communicate with the server without any
      issue.
      4df03796
  27. 19 Mar, 2024 2 commits
    • fix wrong data type conversion in zrangeResultBeginStore (#13148) · bad33f87
      Yanqi Lv authored
      In `beginResultEmission`, -1 means the result length is not known in
      advance. But after #12185, if we pass -1 to `zrangeResultBeginStore`, it
      is converted to SIZE_MAX in `zsetTypeCreate`, which then tries to
      `dictExpand`. Although `dictExpand` won't succeed because the size
      overflows, we'd better avoid this wrong conversion.

      This bug can be triggered when the source of `zrangestore` doesn't exist
      or when we use the `zrangestore` command with `byscore` or `bylex`.
      The impact is that dst keys are converted to use a skiplist instead of
      a listpack.
      bad33f87
    • Prevent lua error_reply abuse from causing errorstats to become larger (#13141) · e04d41d7
      Binbin authored
      Users who abuse lua error_reply will generate a new error object on each
      error call, which can make server.errors grow bigger and bigger. This
      will cause the server to block when calling INFO (we also return
      errorstats by default).

      To prevent the damage this can cause, when misuse is detected we print a
      warning log and disable the errorstats to avoid adding more new errors.
      It can be re-enabled via CONFIG RESETSTAT.

      Because server.errors may be very large (it may be better now since we
      have the limit), config resetstat may block for a while. So in
      resetErrorTableStats, we will try to lazyfree server.errors.
      
      See the related discussion at the end of #8217.
      e04d41d7
  28. 18 Mar, 2024 1 commit
    • Fix dictionary use-after-free in active expire and make kvstore iter to respect EMPTY flag (#13135) · 7b070423
      Binbin authored
      After #13072, there was a use-after-free error: in expireScanCallback we
      delete the dict, and then dictScan continues to use it, e.g. calling
      `dictResumeRehashing(d)` at the end, which caused the error.

      In this PR, in freeDictIfNeeded, if the dict's pauserehash is set, we
      don't delete the dict yet; when the scan returns, we try to delete it
      again.
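
      A hedged sketch of the guard described above (dictIsRehashingPaused and
      kvstoreGetDict exist; the free helper name is illustrative):

      ```c
      static void freeDictIfNeeded(kvstore *kvs, int didx) {
          dict *d = kvstoreGetDict(kvs, didx);
          if (!d || dictSize(d) != 0) return;    /* not empty: keep it */
          if (dictIsRehashingPaused(d)) return;  /* mid-scan: defer, retry
                                                  * after the scan returns */
          kvstoreFreeDict(kvs, didx);            /* illustrative free helper */
      }
      ```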
      
      At the same time, we noticed similar problems with iterators: we may
      also delete elements during the iteration process, causing the dict to
      be deleted, so the iterator-related parts of the PR have also been
      modified. dictResetIterator was also missing from the previous
      kvstoreIteratorNextDict; we currently have no scenario where elements
      are deleted during the kvstoreIterator process, but we deal with it
      together to avoid future problems. Added some simple tests to verify
      the changes.

      In addition, the modification in #13072 omitted initTempDb and
      emptyDbAsync, and they were also added. This PR also removes the slow
      flag from the expire test (which consumes 1.3s) so that problems can be
      found in CI in the future.
      7b070423
  29. 13 Mar, 2024 2 commits
    • Lua eval scripts first in first out LRU eviction (#13108) · ad28d222
      Binbin authored
      In some cases, users will abuse lua eval. Each EVAL call generates
      a new lua script, which is added to the lua interpreter and cached
      to redis-server, consuming a large amount of memory over time.
      
      Since EVAL is mostly the one that abuses the lua cache, and these
      won't have pipeline issues (i.e. the script won't disappear unexpectedly
      and cause errors like it would with SCRIPT LOAD and EVALSHA),
      we implement a plain FIFO LRU eviction only for these (not for
      scripts loaded with SCRIPT LOAD).
      
      ### Implementation notes:
      When not abused we'll probably have less than 100 scripts, and when
      abused we'll have many thousands. So we use a hard coded value of 500
      scripts. And considering that we don't have many scripts, then unlike
      keys, we don't need to worry about the memory usage of keeping a true
      sorted LRU linked list. We compute the SHA of each script anyway,
      and put the script in a dict, we can store a listNode there, and use
      it for quick removal and re-insertion into an LRU list each time the
      script is used.
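
      A hedged sketch of that bookkeeping (struct shape illustrative): the
      script lives in a SHA-keyed dict, and its entry carries its own list
      node so a hit can unlink and re-append it in O(1).

      ```c
      typedef struct luaScript {
          uint64_t flags;
          robj *body;       /* the script source, cached under its SHA */
          listNode *node;   /* this script's position in the LRU list */
      } luaScript;
      ```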
      
      ### New interfaces:
      At the same time, a new `evicted_scripts` field is added to
      INFO, which represents the number of evicted eval scripts. Users
      can check it to see if they are abusing EVAL.
      
      ### benchmark:
      `./src/redis-benchmark -P 10 -n 1000000 -r 10000000000 eval "return
      __rand_int__" 0`
      
      The simple abuse of eval benchmark test that will create 1 million EVAL
      scripts. The performance has been improved by 50%, and the max latency
      has dropped from 500ms to 13ms (this may be caused by table expansion
      inside Lua when the number of scripts is large). And in the INFO memory,
      it used to consume 120MB (server cache) + 310MB (lua engine), but now
      it only consumes 70KB (server cache) + 210KB (lua_engine) because of
      the scripts eviction.
      
      For non-abusive case of about 100 EVAL scripts, there's no noticeable
      change in performance or memory usage.
      
      ### unlikely potentially breaking change:
      In theory, a user can load a script with EVAL and then use EVALSHA to
      call it (by calculating the SHA1 value on the client side). If we read
      the docs carefully we'll realize it's a valid scenario, but we suppose
      it's extremely rare. So it may happen that EVALSHA acts on a script
      created by EVAL, the script is evicted, and EVALSHA returns a NOSCRIPT
      error; that is, if you have more than 500 scripts being used in the
      same transaction / pipeline.
      
      This solves the second point in #13102.
      ad28d222
    • Xread last entry in stream (#7388) (#13117) · a8e74511
      Ronen Kalish authored
      
      
      Allow using `+` as a special ID for the last item in a stream on the
      XREAD command.

      This allows iterating over a stream with XREAD starting with the last
      available message instead of the next one, which `$` is used for.
      I.e. the caller can use `BLOCK` and `+` on the first call, and change to
      `$` on the next call.
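
      A hedged redis-cli illustration (entry IDs are illustrative):

      ```
      127.0.0.1:6379> XADD s * f1 v1
      "1526919030474-0"
      127.0.0.1:6379> XADD s * f2 v2
      "1526919030475-0"
      127.0.0.1:6379> XREAD COUNT 1 STREAMS s +
      1) 1) "s"
         2) 1) 1) "1526919030475-0"
               2) 1) "f2"
                  2) "v2"
      ```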
      
      Closes #7388
      
      ---------
      Co-authored-by: Felipe Machado <462154+felipou@users.noreply.github.com>
      a8e74511
  30. 12 Mar, 2024 1 commit
  31. 10 Mar, 2024 1 commit
    • Fix conversion of numbers in lua args to redis args (#13115) · 5fdaa53d
      Matthew Douglass authored
      
      
      Since lua_Number is not explicitly an integer or a double, we need to
      make an effort to convert it to an integer when that's possible, since
      the string could later be used in a context that doesn't support
      scientific notation (e.g. 1e9 instead of 1000000000).

      Since fpconv_dtoa converts numbers with the equivalent of `%f` or `%e`,
      whichever is shorter, this would break if we try to pass a long integer
      number to a command that takes an integer: we'll get an implicit
      conversion to string in Lua, and then the parsing in
      getLongLongFromObjectOrReply will fail.
      
      ```
      > eval "redis.call('hincrby', 'key', 'field', '1000000000')" 0
      (nil)
      > eval "redis.call('hincrby', 'key', 'field', tonumber('1000000000'))" 0
      (error) ERR value is not an integer or out of range script: ac99c32e4daf7e300d593085b611de261954a946, on @user_script:1.
      ```
      
      Switch to using ll2string if the number can be safely represented as a
      long long.
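
      A hedged sketch of that conversion rule (ll2string and fpconv_dtoa are
      the real helpers; the surrounding code and the omitted range check are
      illustrative):

      ```c
      double num = lua_tonumber(lua, -1);
      char buf[MAX_LONG_DOUBLE_CHARS];
      long long ll = (long long)num;   /* range check omitted for brevity */
      int len;
      if ((double)ll == num)
          len = ll2string(buf, sizeof(buf), ll);  /* "1000000000", never "1e+09" */
      else
          len = fpconv_dtoa(num, buf);            /* shortest %f/%e form */
      ```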
      
      The problem was introduced in #10587 (Redis 7.2).
      closes #13113.
      
      ---------
      Co-authored-by: Binbin <binloveplay1314@qq.com>
      Co-authored-by: debing.sun <debing.sun@redis.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      5fdaa53d
  32. 05 Mar, 2024 1 commit