1. 16 Apr, 2024 1 commit
    • Allocate Lua VM code with jemalloc instead of libc, and count its used memory (#13133) · 804110a4
      Binbin authored
      
      
      ## Background
      1. Currently, Lua memory control does not pass through Redis's zmalloc.c.
      Redis maxmemory cannot limit memory problems caused by users abusing lua,
      since this Lua VM memory is not part of used_memory.
      
      2. Since jemalloc is much better (fragmentation and speed), and we know
      it and trust it, we are going to use jemalloc instead of libc to allocate
      the Lua VM code and count its used memory.
      
      ## Process:
      In this PR, we use jemalloc in lua (a sketch of the arena/tcache setup
      follows the list).
      1. Create an arena for all lua VMs (script and function), which is
      shared, in order to avoid blocking the defragger.
      2. Create a bound tcache for the lua VM, since the lua VM and the main
      thread are by default in the same tcache, and if there is no isolated
      tcache, lua may request memory from the tcache which has just been freed
      by the main thread, and vice versa.
      On the other hand, since the lua VM might be released in a bio thread,
      but the tcache is not thread-safe, we need to recreate the tcache every
      time we recreate the lua VM.
      3. Remove lua memory statistics from the memory fragmentation statistics
      to avoid the effects of lua memory fragmentation.
      
      ## Other
      Add the following new fields to `INFO DEBUG` (we may promote them to
      INFO MEMORY some day)
      1. allocator_allocated_lua: total number of bytes allocated from the lua
      arena
      2. allocator_active_lua: total number of bytes in active pages allocated
      in the lua arena
      3. allocator_resident_lua: maximum number of bytes in physically
      resident data pages mapped in the lua arena
      4. allocator_frag_bytes_lua: fragmentation bytes in the lua arena
      
      This is oranagra's idea, and I got some help from sundb.
      
      This solves the third point in #13102.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      804110a4
  2. 08 Apr, 2024 1 commit
    • redis-cli - sendReadOnly() to work with Redis Cloud (#13195) · e3550f01
      Yves LeBras authored
      When using Redis Cloud, sendReadOnly() exits with `Error: ERR unknown
      command 'READONLY'`.
      This impacts `--memkeys`, `--bigkeys`, `--hotkeys`, and will impact
      `--keystats`.
      Added one line to ignore this error.
      
      issue introduced in #12735 (not yet released).
      e3550f01
  3. 07 Apr, 2024 1 commit
    • Use usleep() instead of sched_yield() to yield cpu (#13183) · f4481e65
      debing.sun authored
      When the main thread and the module thread are bound to the same CPU,
      sched_yield() can work well.
      When they are bound to different CPUs, sched_yield() will look for the
      thread with the highest priority, and if the module thread always has
      the highest priority on its CPU, it will take a long time for the main
      thread to reacquire the GIL.
      
      ref https://man7.org/linux/man-pages/man2/sched_yield.2.html
      ```
      If the calling thread is the only thread in the highest priority
      list at that time, it will continue to run after a call to
      sched_yield().
      ```
      f4481e65
  4. 04 Apr, 2024 1 commit
    • Fix daylight race condition and some thread leaks (#13191) · 4581d432
      debing.sun authored
      Fix some issues that came up in thread sanitizer reports.
      
      1. When the main thread is updating daylight_active, other threads (bio,
      module threads) may be reading it while writing logs at the same time (a
      sketch of a possible atomic-access fix follows the reports).
      ```
      WARNING: ThreadSanitizer: data race (pid=661064)
        Read of size 4 at 0x55c9a4d11c70 by thread T2:
          #0 serverLogRaw /home/sundb/data/redis_fork/src/server.c:116 (redis-server+0x8d797) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #1 _serverLog.constprop.2 /home/sundb/data/redis_fork/src/server.c:146 (redis-server+0x2a3b14) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #2 bioProcessBackgroundJobs /home/sundb/data/redis_fork/src/bio.c:329 (redis-server+0x1c24ca) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
      
        Previous write of size 4 at 0x55c9a4d11c70 by main thread (mutexes: write M0, write M1, write M2, write M3):
          #0 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1102 (redis-server+0x925e7) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #1 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1087 (redis-server+0x925e7)
          #2 updateCachedTime /home/sundb/data/redis_fork/src/server.c:1118 (redis-server+0x925e7)
          #3 afterSleep /home/sundb/data/redis_fork/src/server.c:1811 (redis-server+0x925e7)
          #4 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:389 (redis-server+0x85ae0) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #5 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85ae0)
          #6 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85ae0)
          #7 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
      ```
      
      2. thread leaks in module tests
      ```
      WARNING: ThreadSanitizer: thread leak (pid=668683)
        Thread T13 (tid=670041, finished) created by main thread at:
          #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:1036 (libtsan.so.2+0x3d179) (BuildId: 28a9f70061dbb2dfa2cef661d3b23aff4ea13536)
          #1 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:200 (blockonbackground.so+0x97fd) (BuildId: 9cd187906c57e88cdf896d121d1d96448b37a136)
          #2 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:169 (blockonbackground.so+0x97fd)
          #3 call /home/sundb/data/redis_fork/src/server.c:3546 (redis-server+0x9b7fb) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #4 processCommand /home/sundb/data/redis_fork/src/server.c:4176 (redis-server+0xa091c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #5 processCommandAndResetClient /home/sundb/data/redis_fork/src/networking.c:2468 (redis-server+0xd2b8e) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #6 processInputBuffer /home/sundb/data/redis_fork/src/networking.c:2576 (redis-server+0xd2b8e)
          #7 readQueryFromClient /home/sundb/data/redis_fork/src/networking.c:2722 (redis-server+0xd358f) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #8 callHandler /home/sundb/data/redis_fork/src/connhelpers.h:58 (redis-server+0x288a7b) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #9 connSocketEventHandler /home/sundb/data/redis_fork/src/socket.c:277 (redis-server+0x288a7b)
          #10 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:417 (redis-server+0x85b45) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
          #11 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85b45)
          #12 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85b45)
          #13 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
      ```
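      
      For the first report, the kind of fix this implies is to make
      daylight_active accesses atomic; a sketch with C11 atomics (the real
      patch may use Redis's own atomic wrappers instead) looks like this:
      ```c
      #include <stdatomic.h>
      #include <time.h>
      
      static _Atomic int daylight_active;
      
      /* Main thread (e.g. from updateCachedTime()): */
      void updateDaylightActive(void) {
          struct tm tm;
          time_t t = time(NULL);
          localtime_r(&t, &tm);
          atomic_store_explicit(&daylight_active, tm.tm_isdst, memory_order_relaxed);
      }
      
      /* Any thread formatting a log timestamp: */
      int getDaylightActive(void) {
          return atomic_load_explicit(&daylight_active, memory_order_relaxed);
      }
      ```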
      4581d432
  5. 02 Apr, 2024 1 commit
    • Change FLUSHALL/FLUSHDB SYNC to run as blocking ASYNC (#13167) · 4df03796
      Moti Cohen authored
      # Overview
      Users utilize the `FLUSHDB SYNC` and `FLUSHALL SYNC` commands for a variety of 
      reasons. The main issue with this command is that if the database becomes 
      substantial in size, the server will be unresponsive for an extended period. 
      Other than freezing application traffic, this may also lead some clients to make 
      incorrect judgments about the server's availability. For instance, a watchdog may 
      erroneously decide to terminate the process, resulting in potential adverse 
      outcomes. While a `FLUSH* ASYNC` can address these issues, it might not be used 
      for two reasons: firstly, it's not the default, and secondly, in some cases, the 
      client issuing the flush wants to wait for its completion before repopulating the 
      database.
      
      Between the option of triggering FLUSH* asynchronously in the background 
      without an indication of completion and running it synchronously in the 
      foreground by the main thread, there is another, more appealing option. We can block the
      client that requested the flush, execute the flush command in the background, and 
      once done, unblock the client and return notification for completion. This approach 
      ensures the server remains responsive to other clients, and the blocked client 
      receives the expected response only after the flush operation has been successfully 
      carried out.
      
      # Implementation details
      Instead of defining yet another flavor of the flush command, we modify
      `FLUSHALL SYNC` and `FLUSHDB SYNC` to always run in this new mode.
      
      ## Extending BIO Threads capabilities
      Today, jobs that are carried out by BIO threads don't have the capability
      to indicate completion to the main thread. We add this infrastructure by
      having an additional dummy job, coined a completion-job, that will
      eventually be written by BIO threads to a response queue. The main thread
      takes care to consume items from the response queue and call the provided
      callback function of each completion-job.
      
      ## FLUSH* SYNC to run as blocking ASYNC
      The `FLUSH* SYNC` commands are modified to create one or more async jobs
      to flush the DB(s) and afterward push an additional completion-job
      request. By sending the completion-job request only at the end, the main
      thread is called back only after all the preceding jobs have completed
      their task in the background. During that time, the client that issued
      the command is suspended and marked as `BLOCKED_LAZYFREE`, whereas any
      other client can communicate with the server without any issue (a rough
      sketch follows).
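      
      A rough sketch of that flow; the helper names (bioCreateCompletionJob,
      flushDone, and so on) are assumptions for illustration, not the PR's
      exact API:
      ```c
      typedef struct client client;                 /* opaque here               */
      typedef void (*bio_comp_fn)(void *arg);
      
      void bioCreateCompletionJob(bio_comp_fn fn, void *arg); /* queued last     */
      void blockClient(client *c, int btype);
      void unblockClientAndReplyOk(client *c);
      void emptyDbAsync(void);                      /* queue async free jobs     */
      #define BLOCKED_LAZYFREE 1                    /* placeholder value         */
      
      static void flushDone(void *arg) {
          /* Called back on the main thread once every preceding BIO job of the
           * same type has finished. */
          unblockClientAndReplyOk((client *)arg);
      }
      
      void flushallSyncAsBlockingAsync(client *c) {
          emptyDbAsync();                           /* 1. flush DB(s) in the bg    */
          bioCreateCompletionJob(flushDone, c);     /* 2. completion-job goes last */
          blockClient(c, BLOCKED_LAZYFREE);         /* 3. suspend only this client */
      }
      ```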
      4df03796
  6. 01 Apr, 2024 1 commit
    • kvstoreIteratorNext() wrongly reset iterator twice (#13178) · ce478343
      Moti Cohen authored
      It calls kvstoreIteratorNextDict(), which eventually calls
      dictResumeRehashing(), and then, on return, it calls
      dictResetIterator(iter), which calls dictResumeRehashing() again.
      We end up with the pauserehash value decremented twice instead of once.
      ce478343
  7. 20 Mar, 2024 2 commits
  8. 19 Mar, 2024 3 commits
    • fix wrong data type conversion in zrangeResultBeginStore (#13148) · bad33f87
      Yanqi Lv authored
      In `beginResultEmission`, -1 means the result length is not known in
      advance. But after #12185, if we pass -1 to `zrangeResultBeginStore`, it
      is converted to SIZE_MAX in `zsetTypeCreate` and tries to `dictExpand`.
      Although `dictExpand` won't succeed because the size overflows, we'd
      better avoid this wrong conversion (a sketch of the guard follows below).
      
      This bug can be triggered when the source of `zrangestore` doesn't exist
      or when we use the `zrangestore` command with `byscore` or `bylex`.
      The impact is that dst keys will be converted to use a skiplist instead
      of a listpack.
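      
      A minimal sketch of the guard (simplified, made-up signatures):
      ```c
      #include <stddef.h>
      
      void *zsetTypeCreate(size_t size_hint);  /* simplified stand-in */
      
      void *zrangeResultBeginStore(long length) {
          /* -1 means "length unknown": don't let it wrap to SIZE_MAX and feed
           * dictExpand an absurd size; just start without a size hint. */
          size_t hint = (length >= 0) ? (size_t)length : 0;
          return zsetTypeCreate(hint);
      }
      ```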
      bad33f87
    • Prevent lua error_reply abuse from causing errorstats to become larger (#13141) · e04d41d7
      Binbin authored
      Users who abuse lua error_reply will generate a new error object on each
      error call, which can make server.errors grow bigger and bigger. This
      will cause the server to block when calling INFO (we also return
      errorstats by default).
      
      To prevent the damage it can cause, when misuse is detected we will
      print a warning log and disable errorstats to avoid adding more new
      errors. It can be re-enabled via CONFIG RESETSTAT.
      
      Because server.errors may be very large (it may be better now since we
      have the limit), CONFIG RESETSTAT may block for a while. So in
      resetErrorTableStats, we will try to lazyfree server.errors.
      
      See the related discussion at the end of #8217.
      e04d41d7
    • Avoid unnecessary dict shrink in zremrangeGenericCommand (#13143) · aeada201
      Chen Tianjie authored
      If the skiplist is emptied, there is no need to shrink the accompanying
      dict; it can be deleted directly.
      aeada201
  9. 18 Mar, 2024 2 commits
    • Fix dictionary use-after-free in active expire and make kvstore iter to respect EMPTY flag (#13135) · 7b070423
      Binbin authored
      After #13072, there is a use-after-free error. In expireScanCallback, we
      will delete the dict, and then in dictScan we will continue to use the
      dict, e.g. we will do `dictResumeRehashing(d)` at the end; this caused an
      error.
      
      In this PR, in freeDictIfNeeded, if the dict's pauserehash is set, we
      don't delete the dict yet, and then when the scan returns we try to
      delete it again.
      
      At the same time, we noticed that there will be similar problems in the
      iterator. We may also delete elements during the iteration process,
      causing the dict to be deleted, so the parts related to the iterator have
      also been modified in this PR. dictResetIterator was also missing from
      the previous kvstoreIteratorNextDict; we currently have no scenario where
      elements are deleted during the kvstoreIterator process, but we deal with
      it together to avoid future problems. Added some simple tests to verify
      the changes.
      
      In addition, the modification in #13072 omitted initTempDb and
      emptyDbAsync, and they were also added. This PR also removes the slow
      flag from the expire test (which consumes 1.3s) so that problems can be
      found in CI in the future.
      7b070423
    • Add missing REDIS_STATIC in quicklist (#13147) · 98a6e55d
      Alexander Mahone authored
      The compiler complained when I tried to compile only quicklist.c, since
      the static keyword is needed when a static function declaration is
      placed before its implementation (a minimal illustration follows the
      link below).
      
      ```
      #ifndef REDIS_STATIC
      #define REDIS_STATIC static
      #endif
      ```
      
      [How to solve static declaration follows non-static declaration in GCC C
      code?](https://stackoverflow.com/questions/3148244/how-to-solve-static-declaration-follows-non-static-declaration-in-gcc-c-code)
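      
      A minimal illustration of the error the macro guards against
      (hypothetical function name):
      ```c
      /* forward declaration missing REDIS_STATIC (i.e. non-static): */
      void quicklistHelper(void);
      
      /* definition where REDIS_STATIC expands to static: */
      static void quicklistHelper(void) {}
      /* gcc: error: static declaration of 'quicklistHelper' follows
       *      non-static declaration */
      ```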
      98a6e55d
  10. 17 Mar, 2024 1 commit
  11. 13 Mar, 2024 4 commits
    • Makefile respect user's REDIS_CFLAGS and OPT (#13073) · 1d77a8e2
      Viktor Söderqvist authored
      This change to the Makefile makes it possible to opt out of
      `-fno-omit-frame-pointer` added in #12973 and `-flto` (#11350). Those
      features were implemented by conditionally modifying the `REDIS_CFLAGS`
      and `REDIS_LDFLAGS` variables. Historically, those variables provided a
      way for users to pass options to the compiler and linker unchanged.
      
      Instead of conditionally appending optimization flags to REDIS_CFLAGS
      and REDIS_LDFLAGS, I want to append them to the OPTIMIZATION variable.
      
      Later in the Makefile, we have `OPT=$(OPTIMIZATION)` (meaning
      OPTIMIZATION is only a default for OPT, but OPT can be overridden by the
      user), and later the flags are combined like this:
      
      FINAL_CFLAGS=$(STD) $(WARN) $(OPT) $(DEBUG) $(CFLAGS) $(REDIS_CFLAGS)
      FINAL_LDFLAGS=$(LDFLAGS) $(OPT) $(REDIS_LDFLAGS) $(DEBUG)
      
      This makes it possible for the user to override all optimization
      flags with e.g. `make OPT=-O1` or just `make OPT=`.
      
      For some reason `-O3` was also already added to REDIS_LDFLAGS by default
      in #12339, so I added OPT to FINAL_LDFLAGS to avoid more complex logic
      (such as introducing a separate LD_OPT variable).
      1d77a8e2
    • Add KVSTORE_FREE_EMPTY_DICTS to cluster mode keys / expires kvstore (#13072) · 3b3d16f7
      Binbin authored
      
      
      Currently (following #11695 and #12822), the keys kvstore and expires
      kvstore are both flagged with ON_DEMAND. That means a cluster node will
      only allocate a dict when the slot is assigned to it and populated,
      but on the other hand, when the slot is unassigned, the dict will
      remain allocated.
      
      We considered releasing the dict when the slot is unassigned, but it
      causes complications on replicas. On the other hand, from benchmarks
      we conducted, it looks like the performance impact of releasing the
      dict when it becomes empty and re-allocating it when a key is added
      again isn't huge.
      
      This PR adds KVSTORE_FREE_EMPTY_DICTS to the cluster mode keys / expires
      kvstore.
      
      The impact is about a 2% performance drop, for this hopefully
      uncommon scenario.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      3b3d16f7
    • Lua eval scripts first in first out LRU eviction (#13108) · ad28d222
      Binbin authored
      In some cases, users will abuse lua eval. Each EVAL call generates
      a new lua script, which is added to the lua interpreter and cached
      in redis-server, consuming a large amount of memory over time.
      
      Since EVAL is mostly the one that abuses the lua cache, and these
      scripts won't have pipeline issues (i.e. the script won't disappear
      unexpectedly and cause errors like it would with SCRIPT LOAD and
      EVALSHA), we implement a plain FIFO LRU eviction only for these (not
      for scripts loaded with SCRIPT LOAD).
      
      ### Implementation notes:
      When not abused we'll probably have fewer than 100 scripts, and when
      abused we'll have many thousands, so we use a hard-coded limit of 500
      scripts. And considering that we don't have many scripts, then unlike
      keys, we don't need to worry about the memory usage of keeping a true
      sorted LRU linked list. We compute the SHA of each script anyway and
      put the script in a dict; we can store a listNode there and use it for
      quick removal and re-insertion into an LRU list each time the script
      is used (a sketch follows).
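      
      A sketch of that bookkeeping on top of Redis's dict/adlist helpers (the
      struct fields, function names and the constant are assumptions, not the
      PR's actual code):
      ```c
      #define LUA_SCRIPT_LRU_MAX 500        /* the hard-coded cap mentioned above */
      
      typedef struct luaScript {
          robj *body;
          listNode *node;                   /* back-pointer into the LRU list     */
      } luaScript;
      
      /* First use: cap the cache, index by SHA, put the script at the LRU head. */
      void scriptCacheAdd(dict *scripts, list *lru, sds sha, luaScript *ls) {
          while (listLength(lru) >= LUA_SCRIPT_LRU_MAX) {
              listNode *tail = listLast(lru);
              dictDelete(scripts, listNodeValue(tail));    /* evict the oldest    */
              listDelNode(lru, tail);
          }
          listAddNodeHead(lru, sha);
          ls->node = listFirst(lru);
          dictAdd(scripts, sha, ls);
      }
      
      /* Every EVAL/EVALSHA hit: O(1) move to the head using the stored node. */
      void scriptCacheTouch(list *lru, luaScript *ls) {
          listUnlinkNode(lru, ls->node);
          listLinkNodeHead(lru, ls->node);
      }
      ```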
      
      ### New interfaces:
      At the same time, a new `evicted_scripts` field is added to
      INFO, which represents the number of evicted eval scripts. Users
      can check it to see if they are abusing EVAL.
      
      ### benchmark:
      `./src/redis-benchmark -P 10 -n 1000000 -r 10000000000 eval "return
      __rand_int__" 0`
      
      The simple abuse-of-eval benchmark test creates 1 million EVAL
      scripts. The performance has been improved by 50%, and the max latency
      has dropped from 500ms to 13ms (the old latency may have been caused by
      table expansion inside Lua when the number of scripts is large). And in
      INFO memory, it used to consume 120MB (server cache) + 310MB (lua
      engine), but now it only consumes 70KB (server cache) + 210KB
      (lua_engine) because of the script eviction.
      
      For the non-abusive case of about 100 EVAL scripts, there's no
      noticeable change in performance or memory usage.
      
      ### unlikely potentially breaking change:
      in theory, a user can load a script with EVAL and then use EVALSHA to
      call it (by calculating the SHA1 value on the client side). If we read
      the docs carefully we may realize it's a valid scenario, but we suppose
      it's extremely rare. So it may happen that EVALSHA acts on a script
      created by EVAL, and the script is evicted and EVALSHA returns a
      NOSCRIPT error; that is, if you have more than 500 scripts being used
      in the same transaction / pipeline.
      
      This solves the second point in #13102.
      ad28d222
    • Xread last entry in stream (#7388) (#13117) · a8e74511
      Ronen Kalish authored
      
      
      Allow using `+` as a special ID for the last item in a stream in the
      XREAD command.
      
      This allows iterating a stream with XREAD starting from the last
      available message instead of the next one, which `$` is used for.
      I.e. the caller can use `BLOCK` and `+` on the first call, and change to
      `$` on the next call (usage example below).
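      
      Illustrative usage (the first call returns the last entry already in the
      stream, the second waits for newer ones):
      ```
      > XREAD COUNT 1 BLOCK 0 STREAMS mystream +
      > XREAD COUNT 1 BLOCK 0 STREAMS mystream $
      ```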
      
      Closes #7388
      
      ---------
      Co-authored-by: Felipe Machado <462154+felipou@users.noreply.github.com>
      a8e74511
  12. 12 Mar, 2024 3 commits
  13. 11 Mar, 2024 1 commit
  14. 10 Mar, 2024 1 commit
    • Fix conversion of numbers in lua args to redis args (#13115) · 5fdaa53d
      Matthew Douglass authored
      
      
      Since lua_Number is not explicitly an integer or a double, we need to
      make an effort to convert it as an integer when that's possible, since
      the string could later be used in a context that doesn't support
      scientific notation (e.g. 1e9 instead of 1000000000).
      
      Since fpconv_dtoa converts numbers with the equivalent of `%f` or `%e`,
      whichever is shorter, this would break if we tried to pass a long
      integer number to a command that takes an integer: we'd get an implicit
      conversion to string in Lua, and then the parsing in
      getLongLongFromObjectOrReply would fail.
      
      ```
      > eval "redis.call('hincrby', 'key', 'field', '1000000000')" 0
      (nil)
      > eval "redis.call('hincrby', 'key', 'field', tonumber('1000000000'))" 0
      (error) ERR value is not an integer or out of range script: ac99c32e4daf7e300d593085b611de261954a946, on @user_script:1.
      ```
      
      Switch to using ll2string if the number can be safely represented as a
      long long (a sketch follows).
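      
      A minimal sketch of that conversion (the helper declarations are
      stand-ins for the real Redis/fpconv functions; the range check is
      simplified):
      ```c
      #include <limits.h>
      #include <stddef.h>
      
      size_t ll2string(char *dst, size_t dstlen, long long value);  /* Redis util  */
      int fpconv_dtoa(double fp, char dest[24]);                    /* deps/fpconv */
      
      static size_t luaNumberToString(double num, char *buf, size_t buflen) {
          /* Prefer integer formatting when the Lua number is exactly a long
           * long, so commands that expect an integer never see "1e+09". */
          if (num >= (double)LLONG_MIN && num < (double)LLONG_MAX) {
              long long ll = (long long)num;
              if ((double)ll == num)
                  return ll2string(buf, buflen, ll);
          }
          int len = fpconv_dtoa(num, buf);            /* shortest %f/%e form */
          buf[len] = '\0';
          return (size_t)len;
      }
      ```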
      
      The problem was introduced in #10587 (Redis 7.2).
      closes #13113.
      
      ---------
      Co-authored-by: Binbin <binloveplay1314@qq.com>
      Co-authored-by: debing.sun <debing.sun@redis.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      5fdaa53d
  15. 08 Mar, 2024 1 commit
  16. 05 Mar, 2024 2 commits
    • Check user's oom_score_adj write permission for oom-score-adj test (#13111) · 9738ba98
      debing.sun authored
      The `CONFIG SET oom-score-adj handles configuration failures` test failed
      in some CI jobs today.
      Failed CI: https://github.com/redis/redis/actions/runs/8152519326
      
      Not sure why the GitHub Action's docker image permissions have changed,
      but the issue is similar to #12887,
      where we can't assume the range of oom_score_adj that a user can change.
      
      ## Solution:
      Determine whether the current user is unprivileged by checking write
      permission on oom_score_adj, instead of relying on whether the user id
      is 0.
      9738ba98
    • Fix PONG message processing for primary-ship tracking during failovers (#13055) · 28976a90
      Ping Xie authored
      This commit updates the processing of PONG gossip messages in the
      cluster. When a node (B) becomes a replica due to a failover, its PONG
      messages include its new primary node's (A) information and B's
      configuration epoch is aligned with A's. This allows observer nodes to
      identify changes in primary-ship, addressing issues of intermediate
      states and enhancing cluster state consistency during topology changes.
      
      Fix #13018
      28976a90
  17. 04 Mar, 2024 1 commit
    • Implement defragmentation for pubsub kvstore (#13058) · ad127303
      debing.sun authored
      
      
      After #13013
      
      ### This PR makes an effort to defrag the pubsub kvstore in the following
      ways:
      
      1. Until now, server.pubsub(shard)_channels only shared the channel name
      obj with the first subscribed client; now the clients and the pubsub
      kvstore share the channel name robj.
      This saves a lot of memory when there are many subscribers to the same
      channel.
      It also means that we only need to defrag the channel name robj in the
      pubsub kvstore, and then update all client references for the current
      channel, avoiding the need to iterate through all the clients to do the
      same thing.
      
      2. Refactor the code to defragment pubsub(shard) in the same way as the
      defragmentation of keys and EXPIRES, with the exception that we only
      defragment pubsub (without shard) when the slot is zero.
      
      
      ### Other
      Fix an oversight in #11695: if defragmentation doesn't reach the end
      time, we should wait for the current db's keys and expires, pubsub and
      pubsubshard to finish before leaving; currently it is possible to exit
      early once only the keys have been defragmented.
      
      ---------
      Co-authored-by: oranagra <oran@redislabs.com>
      ad127303
  18. 03 Mar, 2024 1 commit
  19. 02 Mar, 2024 2 commits
    • Fix reply schemas validator build issue due to new regular expression (#13103) · df75153d
      Binbin authored
      The new regular expression breaks the validator:
      ```
      In file included from commands.c:10:
      commands_with_reply_schema.def:14528:72: error: stray ‘\’ in program
      14528 | struct jsonObjectElement MEMORY_STATS_ReplySchema_patternProperties__db\_\d+__properties_overhead_hashtable_main_elements[] = {
      ```
      
      The reason is that these special characters were not added to to_c_name,
      causing them to appear in the structure name and the C file to fail to
      compile.
      
      Broken by #12913
      df75153d
    • redis-cli fixes around help hints version filtering (#13097) · a50bbcb6
      YaacovHazan authored
      
      
      - In removeUnsupportedArgs, we tried to access the next item after the
      last one, causing an out-of-bounds read.
      - In versionIsSupported, when the 'version' is equal to 'since', the
      return value was 0 (not supported).
      Also, change the function to return `not supported` in case they have
      different numbers of digits.
      
      Both issues were found by the `Non-interactive non-TTY CLI: Test
      command-line hinting - old server` test under `test-sanitizer-address`
      (when changing `src/version.h` locally to `8.0.0`).
      
      The new `MAXAGE` argument inside `client-kill` triggered the issue (a new
      argument at the end of the list).
      
      ---------
      Co-authored-by: YaacovHazan <yaacov.hazan@redislabs.com>
      a50bbcb6
  20. 01 Mar, 2024 1 commit
    • Add overhead of all DBs and rehashing dict count to info. (#12913) · 4cae99e7
      Chen Tianjie authored
      
      
      Sometimes we need to make a fast judgement about why Redis is suddenly
      taking more memory. One of the reasons is the main DB's dicts doing
      rehashing.
      
      We may use `MEMORY STATS` to monitor the overhead memory of each DB, but
      it still lacks a total sum to show an overall trend. So this PR adds
      the total overhead of all DBs to the `INFO MEMORY` section, together with
      the total count of rehashing DB dicts, providing some intuitive metrics
      about main dict rehashing.
      
      This PR adds the following metrics to INFO MEMORY
      * `mem_overhead_db_hashtable_rehashing` - only size of ht[0] in
      dictionaries we're rehashing (i.e. the memory that's gonna get released
      soon)
      
      and similar ones to MEMORY STATS:
      * `overhead.db.hashtable.lut` (complements the existing
      `overhead.hashtable.main` and `overhead.hashtable.expires` which also
      counts the `dictEntry` structs too)
      * `overhead.db.hashtable.rehashing` - temporary rehashing overhead.
      * `db.dict.rehashing.count` - number of top level dictionaries being
      rehashed.
      
      ---------
      Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      4cae99e7
  21. 29 Feb, 2024 2 commits
    • Fix propagation of entries_read by calling streamPropagateGroupID unconditionally (#12898) · f17381a3
      Binbin authored
      In XREADGROUP ACK, because streamPropagateXCLAIM does not propagate
      entries-read, entries-read will be inconsistent between the master and
      its replicas.
      I.e. if no entries were claimed, it would have propagated correctly, but
      if some were claimed, then the entries-read field would be inconsistent
      on the replica.
      
      The fix was suggested by guybe7: call streamPropagateGroupID
      unconditionally, so that we normalize entries_read on the replicas. In
      the past, we would only set propagate_last_id when NOACK was specified.
      And in #9127, XCLAIM did not propagate entries_read in ACK, which would
      cause entries_read to be inconsistent between master and replicas.
      
      Another approach was to add another arg to XCLAIM and let it propagate
      entries_read, but we decided not to use it, because we want minimal
      damage in case there's an old target and a new source (in the worst-case
      scenario, the new source doesn't recognize XGROUP SETID ... ENTRIES READ
      and the lag is lost; if we change XCLAIM, the damage is much more
      severe).
      
      In this patch, if the user uses XREADGROUP .. COUNT 1 there will be an
      additional overhead of MULTI, EXEC and XGROUPSETID. We assume the extra
      commands in the case of COUNT 1 (a 4x factor, changing from one XCLAIM
      to MULTI+XCLAIM+XSETID+EXEC) are probably ok, since reading just one
      entry is in any case very inefficient (a client round trip per record),
      so we're hoping it's not a common case.
      
      Issue was introduced in #9127.
      f17381a3
    • freeDictIfNeeded when kvstoreEmpty (#13098) · cc9fbd27
      zhaozhao.zz authored
      Just like `kvstoreDictDelete`, we need to check `freeDictIfNeeded` when
      calling `kvstoreEmpty`.
      cc9fbd27
  22. 28 Feb, 2024 2 commits
    • SCRIPT FLUSH run truly async, close lua interpreter in bio (#13087) · a7abc2f0
      Binbin authored
      Even if we have SCRIPT FLUSH ASYNC now, when there are a lot of
      lua scripts, SCRIPT FLUSH ASYNC will still block the main thread.
      This is because lua_close is executed in the main thread, and the lua
      heap needs to release a lot of memory.
      
      In this PR, we take the current lua instance on lctx.lua and call
      lua_close on it in a background thread, to close it in an async way
      (a sketch follows). This is MeirShpilraien's idea.
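      
      A minimal sketch of the idea; bioCreateLazyFreeJob is the existing
      generic lazy-free job hook, while the callback and wrapper names here
      are made up:
      ```c
      #include <lua.h>
      
      /* From bio.h: run free_fn(args) on a background (lazy-free) thread. */
      typedef void lazy_free_fn(void *args[]);
      void bioCreateLazyFreeJob(lazy_free_fn free_fn, int arg_count, ...);
      
      static void freeLuaStateInBio(void *args[]) {
          lua_close((lua_State *)args[0]);   /* heavy: frees the whole Lua heap */
      }
      
      void scriptingReleaseVMAsync(lua_State *lua) {
          /* Detach the interpreter (e.g. from lctx.lua) before handing it to a
           * bio thread, so the main thread never pays for freeing the Lua heap. */
          bioCreateLazyFreeJob(freeLuaStateInBio, 1, lua);
      }
      ```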
      a7abc2f0
    • Fix redis-cli --count (for --scan, --bigkeys, etc) was ignored unless --pattern was also used (#13092) · 763827c9
      LiiNen authored
      
      The --count option for redis-cli was released in redis 7.2
      (https://github.com/redis/redis/pull/12042).
      But I found in the code that some logic was missing for using this
      'count' option:
      
      ```
      static redisReply *sendScan(unsigned long long *it) {
          redisReply *reply;
      
          if (config.pattern)
              reply = redisCommand(context, "SCAN %llu MATCH %b COUNT %d",
                  *it, config.pattern, sdslen(config.pattern), config.count);
          else
              reply = redisCommand(context,"SCAN %llu",*it);
      ```
      
      The intention was to be able to use the scan count.
      But in this case, --count is only applied when 'pattern' is declared.
      So I fixed it simply, so that it works properly even if the --pattern
      option is not being used (a sketch of the fix follows).
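      
      A sketch of the resulting logic (adapted from the snippet above; the
      exact patch may differ):
      ```c
          if (config.pattern)
              reply = redisCommand(context, "SCAN %llu MATCH %b COUNT %d",
                  *it, config.pattern, sdslen(config.pattern), config.count);
          else
              reply = redisCommand(context, "SCAN %llu COUNT %d",
                  *it, config.count);   /* COUNT now applied without MATCH too */
      ```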
      
      I tested it simply with the `time` command several times, and I could
      see it works as intended with this commit.
      Examples of the test results are below:
      ```
      # unstable build
      
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan >/dev/null 2>/dev/null)
      
      real    0m1.287s
      user    0m0.011s
      sys     0m0.022s
      
      # count is not applied
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 >/dev/null 2>/dev/null)
      
      real    0m1.117s
      user    0m0.011s
      sys     0m0.020s
      
      # count is applied with --pattern
      
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 --pattern "hash:*" >/dev/null 2>/dev/null)
      
      real    0m0.045s
      user    0m0.002s
      sys     0m0.002s
      ```
      
      ```
      # fix-redis-cli-scan-count build
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan >/dev/null 2>/dev/null)
      
      real    0m1.084s
      user    0m0.008s
      sys     0m0.024s
      
      # count is applied even if --pattern is not declared
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 >/dev/null 2>/dev/null)
      
      real    0m0.043s
      user    0m0.000s
      sys     0m0.004s
      
      # of course this also applied
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 --pattern "hash:*" >/dev/null 2>/dev/null)
      
      real    0m0.031s
      user    0m0.002s
      sys     0m0.002s
      ```
      
      
      
      Thanks a lot.
      763827c9
  23. 26 Feb, 2024 2 commits
    • Optimize DEL on expired keys (#13080) · 0a12f380
      Yanqi Lv authored
      
      
      If we call `DEL` on expired keys, the keys may be deleted in
      `expireIfNeeded`, and we don't need to call `dbSyncDelete` or
      `dbAsyncDelete` afterward, which would repeat the deletion process
      (i.e. find the keys in the main db).
      
      In this PR, I refine the return values of `expireIfNeeded` to indicate
      whether we have deleted the expired key, to avoid the potentially
      redundant deletion logic in `delGenericCommand` (a sketch follows).
      Besides, because both KEY_EXPIRED and KEY_DELETED are non-zero, this PR
      won't affect other functions calling `expireIfNeeded`.
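      
      A sketch of how the caller can use the refined return values (the names
      follow the description above; details are simplified):
      ```c
      typedef struct redisDb redisDb;
      typedef struct redisObject robj;
      
      /* Both non-zero, so existing "truthy" callers keep working. */
      typedef enum { KEY_VALID = 0, KEY_EXPIRED, KEY_DELETED } keyStatus;
      
      keyStatus expireIfNeeded(redisDb *db, robj *key, int flags);
      int dbGenericDelete(redisDb *db, robj *key, int async, int flags);
      
      static int delKeyIfNeeded(redisDb *db, robj *key, int async) {
          keyStatus s = expireIfNeeded(db, key, 0);
          if (s == KEY_DELETED)
              return 1;              /* already removed, skip the second deletion */
          return dbGenericDelete(db, key, async, 0);
      }
      ```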
      
      I also ran a performance test. I first disabled active expiration with
      `debug set-active-expire 0` and wrote 1 million keys with a 1ms TTL.
      Then I repeatedly deleted 100 expired keys in one `DEL`. The results are
      as follows, showing that this PR can improve performance by about 10% in
      this situation.
      **unstable**
      ```
      Summary:
        throughput summary: 10080.65 requests per second
        latency summary (msec):
                avg       min       p50       p95       p99       max
              0.953     0.136     0.959     1.215     1.335     2.247
      ```
      
      **This PR**
      ```
      Summary:			
        throughput summary: 11074.20 requests per second			
        latency summary (msec):			
                avg       min       p50       p95       p99       max			
              0.865     0.128     0.879     1.055     1.175     2.159			
      ```
      
      ---------
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      0a12f380
    • Fix size stat in malloc(0) and cleanups around zmalloc file (#13068) · 104b2076
      Binbin authored
      In #8554, we added a MALLOC_MIN_SIZE to use a minimum allocation
      size when using malloc(0). However, we did not update the size
      when malloc_size is missing.
      
      When malloc_size exists, we record the size that was allocated
      instead of the size that was requested. This works with both
      jemalloc and libc malloc (the change in #8554 doesn't break this).
      
      When malloc_size is missing, we allocate extra size_t bytes and
      store the requested size in them. In that case, the requested size
      is probably different from the allocated size anyway (the change
      in #8554 doesn't conceptually change that).
      
      So we have room for improvement, since in this case we are aware
      of the extra bytes we asked for, just as we're aware of the extra
      size_t bytes we asked for.
      
      In addition, some cleanup was done:
      1. fix some outdated comments.
      2. test cleanups
      104b2076
  24. 22 Feb, 2024 3 commits
    • Fix minor memory leak in rewriteSetObject (#13086) · bfcaa7db
      Binbin authored
      It seems to be a leak caused by code refactoring in #11290.
      It's a small leak that only happens if there's an IO error.
      bfcaa7db
    • Expose lua os.clock() api (#12971) · 4a265554
      debing.sun authored
      
      
      Implement #12699.
      
      This PR exposes the Lua os.clock() api for getting the elapsed time of
      Lua code execution.
      
      Using:
      ```lua
      local start = os.clock()
      ...
      do something
      ...
      local elapsed = os.clock() - start
      ```
      
      ---------
      Co-authored-by: Meir Shpilraien (Spielrein) <meir@redis.com>
      Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
      4a265554
    • Determine the large limit of the quicklist node based on fill (#12659) · 165afc5f
      debing.sun authored
      Following #12568
      
      In issue #9357, when inserting an element larger than 1GB, we currently
      store it in a plain node instead of a listpack.
      Presently, when we insert an element that exceeds the maximum size of a
      packed node, it cannot be accommodated in any other nodes, thus ending
      up isolated like a large element.
      I.e. it's a node with only one element, but it's listpack encoded rather
      than a plain buffer.
      
      This PR lowers the threshold for considering an element as 'large' from
      1GB to the maximum size of a node.
      While this change doesn't completely resolve the bug mentioned in the
      previous PR, it does mitigate its potential impact.
      
      As a result of this change, we can now only use LSET to replace an
      element with another element that falls below the maximum size
      threshold.
      In the worst-case scenario, with a fill of -5, the largest packed node
      we can create is 2GB (32k * 64k):
      * 32k: The smallest element in a listpack is 2 bytes, which allows us to
      store up to 32k elements.
      * 64k: This is the maximum size for a single quicklist node.
      
      ## Others
      To fully fix #9357, we need more work. As discussed in #12568, when we
      insert an element into a quicklistNode, it may be created in a new node,
      put into another node, or merged, and we can't correctly delete the node
      that was supposed to be deleted.
      I'm not sure it's worth it, since it involves a lot of modifications.
      165afc5f