  1. 23 May, 2024 2 commits
    • Add Statistics hashes_with_expiry_fields to INFO (#13275) · f34f2ade
      Moti Cohen authored
      Added `hashes_with_expiry_fields`.
      Ideally it would be better to have a statistic that counts all fields
      with expiry, but that requires careful logic and computation, and a
      deep dive into listpacks and hashes. This statistic is trivial to
      achieve and is reflected by the global HFE DS, which has built-in
      enumeration of all the hashes registered in it.
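      For illustration, the counter appears as a single line in INFO output
      (the section and value shown here are assumptions, not copied from the
      PR):
      ```
      127.0.0.1:6379> INFO
      ...
      hashes_with_expiry_fields:3
      ...
      ```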
    • Fix ebuckets stop indication during ebExpire() (#13287) · ae6df30e
      Moti Cohen authored
      During active expiry of buckets, in `ebExpire()` -> `ebSegExpire()`, if
      the callback `onExpireItem()` indicated to stop, the iterator (iter)
      had already been wrongly advanced to point to the next item. As a
      result, the segment would be updated without the current item.
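      A minimal sketch of the corrected loop shape, with hypothetical types
      (not the actual ebuckets code): advance the iterator only after the
      callback agrees to continue:
      ```c
      typedef enum { ACT_CONTINUE, ACT_STOP } ExpireAction;
      typedef struct Item { struct Item *next; } Item;

      /* When the callback says stop, `it` must still point at the current,
       * unconsumed item so the rebuilt segment includes it. */
      static Item *expireSegment(Item *it, ExpireAction (*onExpireItem)(Item *)) {
          while (it != NULL) {
              if (onExpireItem(it) == ACT_STOP)
                  break;        /* do NOT advance past the current item */
              it = it->next;    /* advance only when continuing */
          }
          return it;            /* first item to keep in the segment */
      }
      ```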
  2. 22 May, 2024 2 commits
    • Improve performance of hfe listpack (#13279) · a25b1539
      Ozan Tezcan authored
      
      
      This PR contains a few optimizations for the HFE listpack.
      - HFE fields are ordered by TTL in the listpack. There are two cases
      where we want to search the listpack according to TTLs:
      - As part of active expiry, we need to find the fields that are expired,
      e.g. find fields that have smaller TTLs than a given timestamp.
      - When we want to add a new field, we need to find the correct position
      to maintain the order by TTL, e.g. find the field that has a higher TTL
      than the one we want to insert.
        
      Iterating with lpNext() to compare TTLs has a performance cost, as
      lpNext() calls lpValidateIntegrity() for each entry. Instead, this PR
      adds `lpFindCb()` to the listpack, which accepts a comparator callback
      (see the sketch after this list). It preserves the same validation
      logic as lpFind(), which is faster than searching with lpNext().
        
      - We have a field name, value and TTL for a single HFE field. Inserting
      these items one by one into the listpack is costly. Especially, as we
      place fields according to TTL, most additions will end up in the middle
      of the listpack, and each insert causes a realloc + memmove. This PR
      introduces `lpBatchInsert()` to add multiple items in one go.
      
      - For hsetf, if we are going to update the value and TTL at the same
      time, currently we update the value first and later update the TTL (two
      distinct listpack operations). This PR improves this by doing both with
      a single update operation.
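      A hedged sketch of the comparator-driven find; the real `lpFindCb()`
      signature lives in listpack.h, so the shapes below are assumptions,
      not the PR's code:
      ```c
      /* Toy model: scan entries once and let a callback decide where to stop,
       * instead of a full lpNext() walk that re-validates each entry. The real
       * lpFindCb() works on the listpack bytes and keeps lpFind()'s validation. */
      #include <stdint.h>
      #include <stddef.h>

      typedef int (*entryCmp)(int64_t entryTTL, void *user);

      /* Comparator to locate the insert position for a new field's TTL. */
      static int ttlNotSmaller(int64_t entryTTL, void *user) {
          return entryTTL >= *(int64_t *)user;
      }

      /* Return the index of the first TTL the callback accepts, or -1. */
      static ptrdiff_t findFirst(const int64_t *ttls, size_t n,
                                 entryCmp cmp, void *user) {
          for (size_t i = 0; i < n; i++)
              if (cmp(ttls[i], user)) return (ptrdiff_t)i;
          return -1;
      }
      ```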
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
    • sanitize dump payload for HFE (#13278) · 95cbe879
      debing.sun authored
      Add the following validations:
      1. Get the TTL using the lpGetIntegerValue() method instead of lpGetValue().
      Ref https://github.com/redis/redis/pull/13209#discussion_r1602569422

      2. The TTL of listpackex is a number within the valid range
      (0~EB_EXPIRE_TIME_MAX).
      3. The TTL fields of listpackex are ordered.
      4. The TTL of hashtable is within the valid range
      (0~EB_EXPIRE_TIME_MAX). (A sketch of checks 2-4 follows below.)
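      A minimal sketch of the kind of check items 2-4 describe; names and the
      bound are placeholders (the real constant and the entry walk live in the
      actual source):
      ```c
      #include <stdint.h>
      #include <stddef.h>

      /* Placeholder bound; the real EB_EXPIRE_TIME_MAX is defined elsewhere. */
      #define EXPIRE_TIME_MAX_SKETCH ((1ULL << 48) - 1)

      /* Validate that TTLs are within the valid range and non-decreasing
       * (handling of the "no TTL" sentinel is elided in this toy). */
      static int ttlsValid(const uint64_t *ttls, size_t n) {
          uint64_t prev = 0;
          for (size_t i = 0; i < n; i++) {
              if (ttls[i] > EXPIRE_TIME_MAX_SKETCH) return 0; /* out of range */
              if (ttls[i] < prev) return 0;                   /* not ordered  */
              prev = ttls[i];
          }
          return 1;
      }
      ```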
      
      Other:
      Fix missing handling of OBJ_ENCODING_LISTPACK_EX in
      dismissHashObject().
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
  3. 17 May, 2024 1 commit
    • Hfe serialization listpack (#13243) · 323be4d6
      Ronen Kalish authored
      Add RDB de/serialization for HFE
      
      This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and
      `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data.
      When the hash RAM encoding is dict, it will be saved in the former, and
      when it is listpack it will be saved in the latter.
      Both formats simply add the TTL value for each field after the data
      that was previously saved, i.e. HASH_METADATA will save the number of
      entries and, for each entry, key, value and TTL, whereas the listpack
      is saved as a blob. (A toy model of the dict-encoded layout follows
      below.)
      On read, the usual dict <--> listpack conversion takes place if
      required.
      In addition, when reading a hash that was saved as a dict, fields are
      actively expired if their expiry is due. Currently this also holds for
      the listpack encoding, but it is supposed to be removed.
      
      TODO:
      Remove active expiry on load when loading from listpack format (unless
      we'll decide to keep it)
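      A toy model of the dict-encoded layout described above ("count, then
      key/value/TTL per entry"); the writer API and names are hypothetical,
      not rdb.c code:
      ```c
      #include <stdio.h>
      #include <string.h>
      #include <stdint.h>

      typedef struct { const char *key, *value; uint64_t ttl; } HashEntry;

      static void writeStr(FILE *out, const char *s) {
          size_t len = strlen(s);
          fwrite(&len, sizeof(len), 1, out);  /* length prefix */
          fwrite(s, 1, len, out);
      }

      /* RDB_TYPE_HASH_METADATA, schematically: entry count, then for each
       * entry its key, value and TTL appended after the usual data. */
      static void saveHashMetadata(FILE *out, const HashEntry *e, size_t n) {
          fwrite(&n, sizeof(n), 1, out);
          for (size_t i = 0; i < n; i++) {
              writeStr(out, e[i].key);
              writeStr(out, e[i].value);
              fwrite(&e[i].ttl, sizeof(e[i].ttl), 1, out);
          }
      }
      ```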
  4. 14 May, 2024 1 commit
    • Add defragment support for HFE (#13229) · 80be2cc2
      debing.sun authored
      
      
      ## Background
      1. All hash objects that contain HFE are referenced by db->hexpires.
      2. All fields in a dict-encoded hash object with HFE are referenced by
      an ebucket.

      So when we defrag the hash object or a field in a dict with HFE, we
      also need to update the references held to them.
      
      ## Interface
      1. Add a new interface `ebDefragItem`, which accepts a defrag callback
      to defrag items in ebuckets and simultaneously update their references
      in the ebucket (an assumed-shape sketch follows below).
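      A toy model of that idea; all names here are illustrative stand-ins,
      not the real ebuckets API:
      ```c
      #include <stdlib.h>
      #include <string.h>

      typedef void *(*DefragFn)(void *ptr, size_t size);

      typedef struct { void *item; size_t size; } BucketRef; /* stand-in slot */

      /* Defrag may move the allocation, so the container itself must invoke
       * the callback and refresh its stored reference afterwards. */
      static void bucketDefragItem(BucketRef *ref, DefragFn fn) {
          void *moved = fn(ref->item, ref->size);
          if (moved != NULL) ref->item = moved;  /* keep the reference in sync */
      }

      /* Example callback: move to a fresh block (roughly what activedefrag
       * does when the allocator says a block is worth moving). */
      static void *moveAlloc(void *ptr, size_t size) {
          void *n = malloc(size);
          if (n) { memcpy(n, ptr, size); free(ptr); }
          return n;
      }
      ```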
      
      ## Main changes
      1. The key type of the hash object's dict is no longer sds, so add a
      new `activeDefragHfieldDict()` to defrag the dict instead of
      `activeDefragSdsDict()`.
      2. When we defrag the dict of a hash object using `dictScanDefrag()`,
      we always set the defrag callback `defragKey` of `dictDefragFunctions`
      to NULL, because we can't reallocate a field without updating its
      reference in ebuckets.
      Instead, we defrag the field of the dict and update its reference in
      the `dictScanDefrag` callback of dictScanFunction().
      3. When we defrag a hash robj with HFE, we use `ebDefragItem` to defrag
      the robj and update the reference in db->hexpires.
      
      ## TODO:
      Defrag the ebuckets structure incrementally; this will be handled in a
      future PR.
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Moti Cohen <moti.cohen@redis.com>
  5. 13 May, 2024 1 commit
    • Fix hgetf/hsetf reply type by returning string (#13263) · 5066e6e9
      Ozan Tezcan authored
      If the encoding is listpack, the hgetf and hsetf commands reply with
      field values typed as integers.
      This PR fixes it by returning strings.
      
      Problematic cases:
      ```
      127.0.0.1:6379> hset hash one 1
      (integer) 1
      127.0.0.1:6379> hgetf hash fields 1 one
      1) (integer) 1
      127.0.0.1:6379> hsetf hash GETOLD fvs 1 one 2
      1) (integer) 1
      127.0.0.1:6379> hsetf hash DOF GETNEW fvs 1 one 2
      1) (integer) 2
      ```
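      With the fix, those replies would presumably come back as strings
      (transcript reconstructed from the cases above, not copied from the PR):
      ```
      127.0.0.1:6379> hgetf hash fields 1 one
      1) "1"
      127.0.0.1:6379> hsetf hash GETOLD fvs 1 one 2
      1) "1"
      127.0.0.1:6379> hsetf hash DOF GETNEW fvs 1 one 2
      1) "2"
      ```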
      
      Additional fixes:
      - hgetf/hsetf command description text
      
      Fixes #13261, #13262
  6. 08 May, 2024 2 commits
    • Add listpack support, hgetf and hsetf commands (#13209) · ca4ed48d
      Ozan Tezcan authored
      **Changes:**
      - Adds listpack support to hash field expiration 
      - Implements hgetf/hsetf commands
      
      **Listpack support for hash field expiration**
      
      We keep field name and value pairs in a listpack for the hash type.
      With this PR, if one of the hash field expiration commands is called on
      a key for the first time, it converts the listpack layout to triplets
      that hold field name, value and TTL per field. If a field does not have
      a TTL, we store zero as the TTL value. Zero is encoded as two bytes in
      the listpack, so once we convert the listpack to hold triplets, fields
      that don't have a TTL consume those extra 2 bytes per item. Fields are
      ordered by TTL in the listpack so the field with the minimum expiry
      time can be found efficiently.
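      Schematically, the converted listpack holds triplets (layout sketch,
      not byte-exact):
      ```
      [field1][value1][ttl1] [field2][value2][ttl2] ... [fieldN][valueN][ttlN]
      entries are kept in TTL order;
      ttl = 0 encodes "no TTL" and costs those 2 extra bytes per field
      ```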
      
      **New command implementations as part of this PR:** 
      
      - HGETF command
      
      For each specified field, get its value and optionally set the field's
      expiration time in sec/msec/unix-sec/unix-msec:
        ```
        HGETF key
          [NX | XX | GT | LT]
          [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST]
          <FIELDS count field [field ...]>
        ```
      
      - HSETF command
      
      For each specified field-value pair: set the field to the value and
      optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
        ```
        HSETF key
          [DC]
          [DCF | DOF]
          [NX | XX | GT | LT]
          [GETNEW | GETOLD]
          [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL]
          <FVS count field value [field value ...]>
        ```
      
      Todo:
      - Performance improvement.
      - rdb load/save
      - aof
      - defrag
    • ebuckets: Add test for ACT_UPDATE_EXP_ITEM (#13249) · 13401f8b
      Moti Cohen authored
      - On ebExpire(), verify the logic of updating an expired value to a new
      time rather than removing it.
      - Refine the ebuckets benchmark.
  7. 25 Apr, 2024 1 commit
    • Support HSET+expire in one command, at infra level (#13230) · c33c91db
      Moti Cohen authored
      Unify the infra of `HSETF`, `HEXPIRE` and `HSET`, and provide an API for
      RDB load as well. Whereas setting plain fields is rather straightforward,
      setting expiration times on fields can be time-consuming and complex,
      since each update of an expiration time not only updates the `ebuckets`
      of the corresponding hash but might also update the `ebuckets` of the
      global HFE DS. It is required to optimize a sequence of field updates
      with expiration for a given hash, such that the global HFE DS gets
      updated only once, at the end.
      
      To do so, follow this scheme (a stub sketch follows below):
      1. Call `hashTypeSetExInit()` to initialize the HashTypeSetEx struct.
      2. Call `hashTypeSetEx()` one time or more, for each field/expiration update.
      3. Call `hashTypeSetExDone()` for notification and update of global HFE.
      
      If expiration is not required, avoid this API and use hashTypeSet() instead.
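      Toy stubs sketching that three-step scheme; the real prototypes and
      parameters live in t_hash.c and differ from these stand-ins:
      ```c
      typedef struct { int fieldsTouched; } HashTypeSetEx;

      static void hashTypeSetExInit(HashTypeSetEx *ex) { ex->fieldsTouched = 0; } /* step 1 */
      static void hashTypeSetEx(HashTypeSetEx *ex)     { ex->fieldsTouched++; }   /* step 2, per field */
      static void hashTypeSetExDone(HashTypeSetEx *ex) {                          /* step 3 */
          if (ex->fieldsTouched) {
              /* notify and update the global HFE DS exactly once here */
          }
      }
      ```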
  8. 18 Apr, 2024 1 commit
    • Hash Field Expiration - Basic support · c18ff056
      Moti Cohen authored
      - Add ebuckets & mstr data structures
      - Integrate active & lazy expiration
      - Add most of the commands 
      - Add support for dict (listpack is missing)
      TODOs:  RDB, notification, listpack, HSET, HGETF, defrag, aof
  9. 04 Apr, 2024 1 commit
    • Fix daylight race condition and some thread leaks (#13191) · 4581d432
      debing.sun authored
      Fix some issues reported by the thread sanitizer.

      1. When the main thread is updating daylight_active, other threads
      (bio, module threads) may be writing logs at the same time.
      ```
      WARNING: ThreadSanitizer: data race (pid=661064)
        Read of size 4 at 0x55c9a4d11c70 by thread T2:
          #0 serverLogRaw /home/sundb/data/redis_fork/src/server.c:116 (redis-server+0x8d797) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #1 _serverLog.constprop.2 /home/sundb/data/redis_fork/src/server.c:146 (redis-server+0x2a3b14) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #2 bioProcessBackgroundJobs /home/sundb/data/redis_fork/src/bio.c:329 (redis-server+0x1c24ca) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
      
        Previous write of size 4 at 0x55c9a4d11c70 by main thread (mutexes: write M0, write M1, write M2, write M3):
          #0 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1102 (redis-server+0x925e7) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #1 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1087 (redis-server+0x925e7)
          #2 updateCachedTime /home/sundb/data/redis_fork/src/server.c:1118 (redis-server+0x925e7)
          #3 afterSleep /home/sundb/data/redis_fork/src/server.c:1811 (redis-server+0x925e7)
          #4 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:389 (redis-server+0x85ae0) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #5 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85ae0)
          #6 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85ae0)
          #7 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
      ```
      
      2. thread leaks in module tests
      ```
      WARNING: ThreadSanitizer: thread leak (pid=668683)
        Thread T13 (tid=670041, finished) created by main thread at:
          #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:1036 (libtsan.so.2+0x3d179) (BuildId: 28a9f70061dbb2dfa2cef661d3b23aff4ea13536)
          #1 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:200 (blockonbackground.so+0x97fd) (BuildId: 9cd187906c57e88cdf896d121d1d96448b37a136)
          #2 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:169 (blockonbackground.so+0x97fd)
          #3 call /home/sundb/data/redis_fork/src/server.c:3546 (redis-server+0x9b7fb) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #4 processCommand /home/sundb/data/redis_fork/src/server.c:4176 (redis-server+0xa091c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #5 processCommandAndResetClient /home/sundb/data/redis_fork/src/networking.c:2468 (redis-server+0xd2b8e) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #6 processInputBuffer /home/sundb/data/redis_fork/src/networking.c:2576 (redis-server+0xd2b8e)
          #7 readQueryFromClient /home/sundb/data/redis_fork/src/networking.c:2722 (redis-server+0xd358f) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #8 callHandler /home/sundb/data/redis_fork/src/connhelpers.h:58 (redis-server+0x288a7b) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #9 connSocketEventHandler /home/sundb/data/redis_fork/src/socket.c:277 (redis-server+0x288a7b)
          #10 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:417 (redis-server+0x85b45) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #11 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85b45)
          #12 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85b45)
          #13 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
      ```
  10. 02 Apr, 2024 1 commit
    • Change FLUSHALL/FLUSHDB SYNC to run as blocking ASYNC (#13167) · 4df03796
      Moti Cohen authored
      # Overview
      Users utilize the `FLUSHDB SYNC` and `FLUSHALL SYNC` commands for a variety of
      reasons. The main issue with these commands is that if the database becomes
      substantial in size, the server will be unresponsive for an extended period.
      Other than freezing application traffic, this may also lead some clients to make
      incorrect judgments about the server's availability. For instance, a watchdog may
      erroneously decide to terminate the process, resulting in potential adverse
      outcomes. While `FLUSH* ASYNC` can address these issues, it might not be used
      for two reasons: firstly, it's not the default, and secondly, in some cases, the
      client issuing the flush wants to wait for its completion before repopulating the
      database.
      
      Between the option of triggering FLUSH* asynchronously in the background without 
      indication for completion versus running it synchronously in the foreground by 
      the main thread, there is another more appealing option. We can block the
      client that requested the flush, execute the flush command in the background, and 
      once done, unblock the client and return notification for completion. This approach 
      ensures the server remains responsive to other clients, and the blocked client 
      receives the expected response only after the flush operation has been successfully 
      carried out.
      
      # Implementation details
      Instead of defining yet another flavor of the flush command, we can
      modify `FLUSHALL SYNC` and `FLUSHDB SYNC` to always run in this new mode.
      
      ## Extending BIO Threads capabilities
      Today, jobs carried out by BIO threads have no way to indicate
      completion to the main thread. We can add this infrastructure by having
      an additional dummy job, coined a completion-job, that will eventually
      be written by BIO threads to a response-queue. The main thread will
      consume items from the response-queue and call the provided callback
      function of each completion-job.
      
      ## FLUSH* SYNC to run as blocking ASYNC
      The `FLUSH* SYNC` commands will be modified to create one or more async
      jobs to flush the DB(s) and afterward push an additional completion-job
      request. By sending the completion-job request only at the end, the main
      thread will be called back only after all the preceding jobs have
      completed their work in the background. During that time, the client
      issuing the command is suspended and marked as `BLOCKED_LAZYFREE`,
      whereas any other client can communicate with the server without any
      issue. (A toy model of the completion-job pattern follows below.)
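      A toy model of the completion-job pattern; names are illustrative,
      not the actual bio.c API:
      ```c
      #include <stddef.h>

      typedef void (*doneCallback)(void *arg);

      typedef struct job {
          struct job *next;
          doneCallback done;  /* non-NULL only for the completion-job */
          void *arg;          /* e.g. the blocked client to wake up */
      } job;

      /* Main-thread side: drain the response-queue and fire each callback,
       * e.g. unblocking the client that issued FLUSHALL SYNC. */
      static void drainResponseQueue(job *head) {
          for (job *j = head; j != NULL; j = j->next)
              if (j->done) j->done(j->arg);
      }
      ```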
  11. 01 Apr, 2024 1 commit
    • kvstoreIteratorNext() wrongly reset iterator twice (#13178) · ce478343
      Moti Cohen authored
      It calls kvstoreIteratorNextDict(), which eventually calls
      dictResumeRehashing(), and then, on return, it calls
      dictResetIterator(iter), which calls dictResumeRehashing() again.
      We end up with the pauserehash value decremented twice instead of once.
  12. 19 Mar, 2024 3 commits
    • fix wrong data type conversion in zrangeResultBeginStore (#13148) · bad33f87
      Yanqi Lv authored
      In `beginResultEmission`, -1 means the result length is not known in
      advance. But after #12185, if we pass -1 to `zrangeResultBeginStore`, it
      is converted to SIZE_MAX in `zsetTypeCreate`, which then tries to
      `dictExpand`. Although `dictExpand` won't succeed because the size
      overflows, we'd better avoid this wrong conversion (see the
      demonstration below).

      This bug can be triggered when the source of `zrangestore` doesn't exist
      or when we use the `zrangestore` command with `byscore` or `bylex`.
      The impact is that dst keys are converted to use a skiplist instead of
      a listpack.
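      The demonstration: a compilable example of why a -1 "unknown length"
      sentinel must not reach a size_t parameter:
      ```c
      #include <stdio.h>
      #include <stdint.h>

      int main(void) {
          long len = -1;            /* "result length unknown" sentinel */
          size_t n = (size_t)len;   /* wraps to SIZE_MAX on conversion */
          printf("%zu (== SIZE_MAX: %d)\n", n, n == SIZE_MAX);
          return 0;
      }
      ```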
    • Prevent lua error_reply abuse from causing errorstats to become larger (#13141) · e04d41d7
      Binbin authored
      Users who abuse lua error_reply will generate a new error object on each
      error call, which can make server.errors grow bigger and bigger. This
      will cause the server to block when calling INFO (we also return
      errorstats by default).
      
      To prevent the damage it can cause, when a misuse is detected, we will
      print a warning log and disable the errorstats to avoid adding more new
      errors. It can be re-enabled via CONFIG RESETSTAT.
      
      Because server.errors may be very large (it may be better now since we
      have the limit), config resetstat may block for a while. So in
      resetErrorTableStats, we will try to lazyfree server.errors.
      
      See the related discussion at the end of #8217.
    • Avoid unnecessary dict shrink in zremrangeGenericCommand (#13143) · aeada201
      Chen Tianjie authored
      If the skiplist is emptied, there is no need to shrink the dict inside
      the skiplist; it can be deleted directly.
  19. 18 Mar, 2024 2 commits
    • Fix dictionary use-after-free in active expire and make kvstore iter to respect EMPTY flag (#13135) · 7b070423
      Binbin authored
      After #13072, there is a use-after-free error. In expireScanCallback we
      delete the dict, and then in dictScan we continue to use the dict,
      e.g. calling `dictResumeRehashing(d)` at the end; this caused the error.

      In this PR, in freeDictIfNeeded, if the dict's pauserehash is set, we
      don't delete the dict yet; instead, when the scan returns, we try to
      delete it again. (A toy sketch of this deferral rule follows below.)

      At the same time, we noticed that similar problems exist in the
      iterator. We may also delete elements during iteration, causing the
      dict to be deleted, so the iterator-related parts of the PR have also
      been modified. dictResetIterator was also missing from the previous
      kvstoreIteratorNextDict; we currently have no scenario where elements
      are deleted during a kvstoreIterator run, but it is handled together to
      avoid future problems. Added some simple tests to verify the changes.

      In addition, the modification in #13072 omitted initTempDb and
      emptyDbAsync, and they were also added. This PR also removes the slow
      flag from the expire test (which consumed 1.3s) so that problems can be
      found in CI in the future.
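      A toy sketch of that deferral rule; the struct and names are stand-ins,
      not the actual kvstore.c code:
      ```c
      /* Stand-in for the dict fields that matter here. */
      typedef struct { long used; int pauserehash; } dictStub;

      static int freeDictIfNeeded(dictStub *d) {
          if (d->used != 0) return 0;        /* not empty */
          if (d->pauserehash != 0) return 0; /* scan/iterator still active:
                                              * defer; retried on scan return */
          /* ... actually release the dict here ... */
          return 1;
      }
      ```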
    • Add missing REDIS_STATIC in quicklist (#13147) · 98a6e55d
      Alexander Mahone authored
      The compiler complained when I tried to compile only quicklist.c, since
      the static keyword is needed when a static function declaration is
      placed before its implementation.
      
      ```
      #ifndef REDIS_STATIC
      #define REDIS_STATIC static
      #endif
      ```
      
      [How to solve static declaration follows non-static declaration in GCC C
      code?](https://stackoverflow.com/questions/3148244/how-to-solve-static-declaration-follows-non-static-declaration-in-gcc-c-code)
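      For reference, the error the macro guards against can be reproduced
      like this (hypothetical function name; intentionally does not compile):
      ```c
      int helper(void);          /* non-static declaration, e.g. from a header */

      static int helper(void) {  /* error: static declaration of 'helper'
                                  * follows non-static declaration */
          return 0;
      }
      ```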
  13. 13 Mar, 2024 4 commits
    • Makefile respect user's REDIS_CFLAGS and OPT (#13073) · 1d77a8e2
      Viktor Söderqvist authored
      This change to the Makefile makes it possible to opt out of
      `-fno-omit-frame-pointer` added in #12973 and `-flto` (#11350). Those
      features were implemented by conditionally modifying the `REDIS_CFLAGS`
      and `REDIS_LDFLAGS` variables. Historically, those variables provided a
      way for users to pass options to the compiler and linker unchanged.
      
      Instead of conditionally appending optimization flags to REDIS_CFLAGS
      and REDIS_LDFLAGS, I want to append them to the OPTIMIZATION variable.
      
      Later in the Makefile, we have `OPT=$(OPTIMIZATION)` (meaning
      OPTIMIZATION is only a default for OPT, but OPT can be overridden by the
      user), and later the flags are combined like this:
      
      FINAL_CFLAGS=$(STD) $(WARN) $(OPT) $(DEBUG) $(CFLAGS) $(REDIS_CFLAGS)
      FINAL_LDFLAGS=$(LDFLAGS) $(OPT) $(REDIS_LDFLAGS) $(DEBUG)
      
      This makes it possible for the user to override all optimization
      flags with e.g. `make OPT=-O1` or just `make OPT=`.
      
      For some reason `-O3` was also already added to REDIS_LDFLAGS by default
      in #12339, so I added OPT to FINAL_LDFLAGS to avoid more complex logic
      (such as introducing a separate LD_OPT variable).
    • Add KVSTORE_FREE_EMPTY_DICTS to cluster mode keys / expires kvstore (#13072) · 3b3d16f7
      Binbin authored
      
      
      Currently (following #11695 and #12822), the keys kvstore and expires
      kvstore are both flagged with ON_DEMAND. This means that a cluster node
      will only allocate a dict when the slot is assigned to it and populated,
      but on the other hand, when the slot is unassigned, the dict remains
      allocated.
      
      We considered releasing the dict when the slot is unassigned, but it
      causes complications on replicas. On the other hand, from benchmarks
      we conducted, it looks like the performance impact of releasing the
      dict when it becomes empty and re-allocate it when a key is added
      again, isn't huge.
      
      This PR adds KVSTORE_FREE_EMPTY_DICTS to the cluster mode keys / expires
      kvstores.

      The impact is about a 2% performance drop, for this hopefully
      uncommon scenario.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • Lua eval scripts first in first out LRU eviction (#13108) · ad28d222
      Binbin authored
      In some cases, users will abuse lua eval. Each EVAL call generates
      a new lua script, which is added to the lua interpreter and cached
      in redis-server, consuming a large amount of memory over time.

      Since EVAL is mostly the one that abuses the lua cache, and these
      won't have pipeline issues (i.e. the script won't disappear unexpectedly
      and cause errors like it would with SCRIPT LOAD and EVALSHA),
      we implement a plain FIFO LRU eviction only for these (not for
      scripts loaded with SCRIPT LOAD).
      
      ### Implementation notes:
      When not abused we'll probably have fewer than 100 scripts, and when
      abused we'll have many thousands, so we use a hard-coded value of 500
      scripts. And considering that we don't have many scripts, then, unlike
      keys, we don't need to worry about the memory usage of keeping a true
      sorted LRU linked list. We compute the SHA of each script anyway and
      put the script in a dict; we can store a listNode there and use it for
      quick removal and re-insertion into the LRU list each time the script
      is used. (A toy model of this layout follows below.)
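      A toy model of that layout; the structs are illustrative stand-ins
      (Redis's real list node lives in adlist.h):
      ```c
      typedef struct listNode { struct listNode *prev, *next; void *value; } listNode;

      /* Dict value for a script: keeping the listNode alongside the body
       * allows O(1) unlink/relink in the LRU list on every use. */
      typedef struct luaScript {
          listNode *node;  /* position in the LRU list */
          char *body;      /* script source */
      } luaScript;

      /* On each EVAL/EVALSHA hit: move `node` to the MRU end; when the dict
       * exceeds 500 scripts, evict from the LRU end and delete by SHA. */
      ```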
      
      ### New interfaces:
      At the same time, a new `evicted_scripts` field is added to
      INFO, which represents the number of evicted eval scripts. Users
      can check it to see if they are abusing EVAL.
      
      ### benchmark:
      `./src/redis-benchmark -P 10 -n 1000000 -r 10000000000 eval "return
      __rand_int__" 0`
      
      This simple eval-abuse benchmark creates 1 million EVAL
      scripts. Performance has improved by 50%, and the max latency
      has dropped from 500ms to 13ms (the old latency may have been caused by
      table expansion inside Lua when the number of scripts is large). And in
      INFO memory, it used to consume 120MB (server cache) + 310MB (lua
      engine), but now it only consumes 70KB (server cache) + 210KB
      (lua_engine) because of the script eviction.

      For the non-abusive case of about 100 EVAL scripts, there's no
      noticeable change in performance or memory usage.
      
      ### Unlikely, potentially breaking change:
      In theory, a user can load a script with EVAL and then call it with
      EVALSHA (by calculating the SHA1 value on the client side). A careful
      reading of the docs may show this is a valid scenario, but we suppose
      it's extremely rare. So it may happen that EVALSHA acts on a script
      created by EVAL, the script is evicted, and EVALSHA returns a NOSCRIPT
      error; that is, if you have more than 500 scripts being used in the
      same transaction / pipeline.
      
      This solves the second point in #13102.
    • Xread last entry in stream (#7388) (#13117) · a8e74511
      Ronen Kalish authored
      
      
      Allow using `+` as a special ID for the last item in a stream with the
      XREAD command.

      This allows iterating over a stream with XREAD starting from the last
      available message instead of the next one (which `$` is used for).
      I.e., the caller can use `BLOCK` and `+` on the first call, and change
      to `$` on the next call.
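      For example (stream name illustrative):
      ```
      127.0.0.1:6379> XREAD BLOCK 0 STREAMS mystream +
      ```
      The first call returns the last entry already in the stream; subsequent
      calls can switch to `$` to wait for newer ones.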
      
      Closes #7388
      
      ---------
      Co-authored-by: Felipe Machado <462154+felipou@users.noreply.github.com>
  14. 10 Mar, 2024 1 commit
    • Fix conversion of numbers in lua args to redis args (#13115) · 5fdaa53d
      Matthew Douglass authored
      
      
      Since lua_Number is not explicitly an integer or a double, we need to
      make an effort to convert it to an integer when that's possible, since
      the string could later be used in a context that doesn't support
      scientific notation (e.g. 1e9 instead of 1000000000).

      Since fpconv_dtoa converts numbers with the equivalent of `%f` or `%e`,
      whichever is shorter, this would break if we try to pass a long integer
      number to a command that takes an integer: we get an implicit conversion
      to string in Lua, and then the parsing in getLongLongFromObjectOrReply
      will fail.
      
      ```
      > eval "redis.call('hincrby', 'key', 'field', '1000000000')" 0
      (nil)
      > eval "redis.call('hincrby', 'key', 'field', tonumber('1000000000'))" 0
      (error) ERR value is not an integer or out of range script: ac99c32e4daf7e300d593085b611de261954a946, on @user_script:1.
      ```
      
      Switch to using ll2string if the number can be safely represented as a
      long long (see the sketch below).
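      A sketch of the representability test that decision implies (standalone
      toy; the real conversion uses ll2string from util.c and must range-check
      before casting to avoid undefined behavior on huge values):
      ```c
      #include <stdio.h>

      static int doubleFitsLongLong(double d, long long *out) {
          long long ll = (long long)d;     /* assumes d is within ll range */
          if ((double)ll != d) return 0;   /* fractional part: not an integer */
          *out = ll;
          return 1;
      }

      int main(void) {
          long long v;
          printf("%d\n", doubleFitsLongLong(1e9, &v)); /* 1 -> "1000000000" */
          printf("%d\n", doubleFitsLongLong(0.5, &v)); /* 0 -> keep double path */
          return 0;
      }
      ```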
      
      The problem was introduced in #10587 (Redis 7.2).
      Closes #13113.
      
      ---------
      Co-authored-by: Binbin <binloveplay1314@qq.com>
      Co-authored-by: debing.sun <debing.sun@redis.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  15. 05 Mar, 2024 2 commits
    • Check user's oom_score_adj write permission for oom-score-adj test (#13111) · 9738ba98
      debing.sun authored
      The `CONFIG SET oom-score-adj handles configuration failures` test
      failed in some CI jobs today.
      Failed CI: https://github.com/redis/redis/actions/runs/8152519326

      Not sure why the github action's docker image permissions have changed,
      but the issue is similar to #12887, where we can't assume the range of
      oom_score_adj that a user can change.

      ## Solution:
      Modify the way we determine whether the current user lacks privileges,
      instead of relying on whether the user id is 0 or not.
    • Fix PONG message processing for primary-ship tracking during failovers (#13055) · 28976a90
      Ping Xie authored
      This commit updates the processing of PONG gossip messages in the
      cluster. When a node (B) becomes a replica due to a failover, its PONG
      messages include its new primary node's (A) information and B's
      configuration epoch is aligned with A's. This allows observer nodes to
      identify changes in primary-ship, addressing issues of intermediate
      states and enhancing cluster state consistency during topology changes.
      
      Fix #13018