1. 29 Aug, 2024 1 commit
    • Oran Agra's avatar
      testsuite --dump-logs works on servers started before the test (#13500) · 3fcddfb6
      Oran Agra authored
      So far, ./runtest --dump-logs only worked for servers started within
      the test proc.
      Now it also works for servers started outside the test proc scope.
      The downside is that these logs can be huge if the server served many
      tests and not just the failing one, but for some rare failures we'd
      rather have that than nothing.
      This feature isn't enabled by default, but it is used by our GH actions.
      3fcddfb6
  2. 28 Aug, 2024 1 commit
  3. 26 Aug, 2024 1 commit
  4. 21 Aug, 2024 1 commit
  5. 20 Aug, 2024 2 commits
    • Zihao Lin's avatar
      Improve GETRANGE command behavior (#12272) · 6ceadfb5
      Zihao Lin authored
      
      
      Fixed an issue where the GETRANGE and SUBSTR commands returned an
      unexpected result when the `start` and `end` arguments were outside
      the valid index range of the string.
      
      ---
      ## Breaking change
      Before this PR, when a negative `end` was out of range (i.e., end <
      -strlen), we would clamp it to 0 to get the substring, which meant the
      first character was still returned for this kind of out-of-range
      request.
      After this PR, we ensure that `GETRANGE` returns an empty bulk when
      the negative end index is out of range.
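
      For illustration, a hypothetical session with a 5-character string
      (both behaviors shown for the same call):

      ```
      127.0.0.1:6379> SET mykey "Hello"
      OK
      127.0.0.1:6379> GETRANGE mykey 0 -100
      "H"    <- before this PR (the out-of-range end was clamped to 0)
      ""     <- after this PR (empty bulk for an out-of-range negative end)
      ```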
      
      Closes #11738
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      6ceadfb5
    • judeng's avatar
      improve performance for scan command when matching data type (#12395) · 7f0a7f0a
      judeng authored
      Move the TYPE filtering into the scan callback so that the `lookupKey`
      operation is avoided (see the sketch below). This is a follow-up to
      #12209. In this thread we introduced two breaking changes:
      1. we will not attempt to lazily expire (delete) a key that was
      filtered out by not matching the TYPE (as we already do for the MATCH
      pattern).
      2. when the specified TYPE filter is an unknown type, the server will
      reply with an error immediately instead of doing a full scan that
      comes back empty-handed.
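
      A minimal, self-contained sketch of the idea (illustrative names, not
      the actual Redis callback): the type check happens inside the scan
      callback itself, so no separate per-key lookup is needed.

      ```c
      #include <stdio.h>
      #include <string.h>

      /* Simplified model of a keyspace entry; the value's type is already
       * known to the callback, so no extra lookup is required. */
      typedef struct { const char *key; const char *type; } entry;
      typedef struct { const char *want_type; /* NULL = no TYPE filter */ } scan_filter;

      static void scan_callback(const scan_filter *f, const entry *e) {
          /* Keys of the wrong type are simply skipped here; as with MATCH,
           * no lazy-expire is attempted for keys filtered out this way. */
          if (f->want_type && strcmp(f->want_type, e->type) != 0) return;
          printf("%s\n", e->key);
      }

      int main(void) {
          entry db[] = { {"a", "string"}, {"b", "zset"}, {"c", "string"} };
          scan_filter f = { "string" };
          for (size_t i = 0; i < sizeof(db)/sizeof(db[0]); i++)
              scan_callback(&f, &db[i]);
          return 0;
      }
      ```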
      7f0a7f0a
  6. 19 Aug, 2024 2 commits
    • Meir Shpilraien (Spielrein)'s avatar
      Avoid used_memory contention when update from multiple threads. (#13431) · 3264deb2
      Meir Shpilraien (Spielrein) authored
      This PR attempts to avoid contention on the `used_memory` global
      variable when allocating or freeing memory from multiple threads at
      the same time.
      
      Each time a thread allocates or releases memory, it needs to update
      the `used_memory` global variable. This update can cause contention
      when done aggressively from multiple threads.
      
      ### The solution
      
      Instead of having a single global variable that needs to be updated
      from multiple threads, we create an array of used_memory entries; each
      entry in the array is updated by a single thread, and the main thread
      sums all the values to accumulate the memory usage.

      While this solution reduces contention between threads on updating the
      `used_memory` global variable, it adds work for the main thread, which
      needs to sum all the entries of the `used_memory` array. To avoid
      increasing the main thread's work by too much, we limit the size of
      the used memory array to 16. This means that up to 16 threads can run
      without any contention between them. If there are more than 16
      threads, we reuse entries of the used_memory array; in this case we
      might still have contention between threads, but it will be much less
      significant.
      
      Note that in order to really avoid contention, the entries in the
      `used_memory` array must reside on different cache lines. To achieve
      that, we create a struct with padding such that its size is exactly
      the cache-line size. In addition, we make sure the address of the
      `used_memory` array is aligned to the cache-line size.
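
      A minimal, self-contained sketch of the approach (illustrative names;
      the real Redis code differs):

      ```c
      #include <stdatomic.h>

      #define USED_MEMORY_SLOTS 16     /* up to 16 threads run contention-free */
      #define CACHE_LINE_SIZE   64

      /* Each slot is padded and aligned to a full cache line so two threads
       * updating adjacent slots never share a line (no false sharing). */
      typedef struct {
          _Alignas(CACHE_LINE_SIZE) _Atomic long long used;
          char pad[CACHE_LINE_SIZE - sizeof(_Atomic long long)];
      } used_memory_slot;

      static used_memory_slot used_memory[USED_MEMORY_SLOTS];
      static _Atomic int next_slot = 0;
      static _Thread_local int my_slot = -1;

      /* Called by any thread on every allocation (+size) or free (-size). */
      void update_used_memory(long long delta) {
          if (my_slot == -1)   /* slots are reused once there are more than 16 threads */
              my_slot = atomic_fetch_add(&next_slot, 1) % USED_MEMORY_SLOTS;
          atomic_fetch_add(&used_memory[my_slot].used, delta);
      }

      /* Called by the main thread to get the accumulated total. */
      long long total_used_memory(void) {
          long long sum = 0;
          for (int i = 0; i < USED_MEMORY_SLOTS; i++)
              sum += atomic_load(&used_memory[i].used);
          return sum;
      }
      ```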
      
      ### Benchmark
      
      Some benchmarks show an improvement (up to 15%):
      
      | Test Case |Baseline unstable (median obs. +- std.dev)|Comparison test_used_memory_per_thread_array (median obs. +- std.dev)|% change (higher-better)| Note |
      |---|---|---:|---|---|
      |memtier_benchmark-1key-list-100-elements-lrange-all-elements | 92657 +- 2.0% (2 datapoints) | 101445|9.5% |IMPROVEMENT |
      |memtier_benchmark-1key-list-1K-elements-lrange-all-elements | 14965 +- 1.3% (2 datapoints) | 16296|8.9% |IMPROVEMENT |
      |memtier_benchmark-1key-set-10-elements-smembers-pipeline-10 | 431019 +- 5.2% (2 datapoints) | 461039|7.0% |waterline=5.2%. IMPROVEMENT |
      |memtier_benchmark-1key-set-100-elements-smembers | 74367 +- 0.0% (2 datapoints) | 80190|7.8% |IMPROVEMENT |
      |memtier_benchmark-1key-set-1K-elements-smembers | 11730 +- 0.4% (2 datapoints) | 13519|15.3% |IMPROVEMENT |
      
      
      Full results:
      
      | Test Case |Baseline unstable (median obs. +- std.dev)|Comparison test_used_memory_per_thread_array (median obs. +- std.dev)|% change (higher-better)| Note |
      |---|---|---:|---|---|
      |memtier_benchmark-10Mkeys-load-hash-5-fields-with-1000B-values | 88613 +- 1.0% (2 datapoints) | 88688|0.1% |No Change |
      |memtier_benchmark-10Mkeys-load-hash-5-fields-with-1000B-values-pipeline-10 | 124786 +- 1.2% (2 datapoints) | 123671|-0.9% |No Change |
      |memtier_benchmark-10Mkeys-load-hash-5-fields-with-100B-values | 122460 +- 1.4% (2 datapoints) | 122990|0.4% |No Change |
      |memtier_benchmark-10Mkeys-load-hash-5-fields-with-100B-values-pipeline-10 | 333384 +- 5.1% (2 datapoints) | 319221|-4.2% |waterline=5.1%. potential REGRESSION|
      |memtier_benchmark-10Mkeys-load-hash-5-fields-with-10B-values | 137354 +- 0.3% (2 datapoints) | 138759|1.0% |No Change |
      |memtier_benchmark-10Mkeys-load-hash-5-fields-with-10B-values-pipeline-10 | 401261 +- 4.3% (2 datapoints) | 398524|-0.7% |No Change |
      |memtier_benchmark-1Mkeys-100B-expire-use-case | 179058 +- 0.4% (2 datapoints) | 180114|0.6% |No Change |
      |memtier_benchmark-1Mkeys-10B-expire-use-case | 180390 +- 0.2% (2 datapoints) | 180401|0.0% |No Change |
      |memtier_benchmark-1Mkeys-1KiB-expire-use-case | 175993 +- 0.7% (2 datapoints) | 175147|-0.5% |No Change |
      |memtier_benchmark-1Mkeys-4KiB-expire-use-case | 165771 +- 0.0% (2 datapoints) | 164434|-0.8% |No Change |
      |memtier_benchmark-1Mkeys-bitmap-getbit-pipeline-10 | 931339 +- 2.1% (2 datapoints) | 929487|-0.2% |No Change |
      |memtier_benchmark-1Mkeys-generic-exists-pipeline-10 | 999462 +- 0.4% (2 datapoints) | 963226|-3.6% |potential REGRESSION |
      |memtier_benchmark-1Mkeys-generic-expire-pipeline-10 | 905333 +- 1.4% (2 datapoints) | 896673|-1.0% |No Change |
      |memtier_benchmark-1Mkeys-generic-expireat-pipeline-10 | 885015 +- 1.0% (2 datapoints) | 865010|-2.3% |No Change |
      |memtier_benchmark-1Mkeys-generic-pexpire-pipeline-10 | 897115 +- 1.2% (2 datapoints) | 887544|-1.1% |No Change |
      |memtier_benchmark-1Mkeys-generic-scan-pipeline-10 | 451103 +- 3.2% (2 datapoints) | 465571|3.2% |potential IMPROVEMENT |
      |memtier_benchmark-1Mkeys-generic-touch-pipeline-10 | 996809 +- 0.6% (2 datapoints) | 984478|-1.2% |No Change |
      |memtier_benchmark-1Mkeys-generic-ttl-pipeline-10 | 979570 +- 1.7% (2 datapoints) | 958752|-2.1% |No Change |
      |memtier_benchmark-1Mkeys-hash-hget-hgetall-hkeys-hvals-with-100B-values | 180888 +- 0.5% (2 datapoints) | 182295|0.8% |No Change |
      |memtier_benchmark-1Mkeys-hash-hmget-5-fields-with-100B-values-pipeline-10 | 717881 +- 1.0% (2 datapoints) | 724814|1.0% |No Change |
      |memtier_benchmark-1Mkeys-hash-transactions-multi-exec-pipeline-20 | 1055447 +- 0.4% (2 datapoints) | 1065836|1.0% |No Change |
      |memtier_benchmark-1Mkeys-lhash-hexists | 164332 +- 0.1% (2 datapoints) | 163636|-0.4% |No Change |
      |memtier_benchmark-1Mkeys-lhash-hincbry | 171674 +- 0.3% (2 datapoints) | 172737|0.6% |No Change |
      |memtier_benchmark-1Mkeys-list-lpop-rpop-with-100B-values | 180904 +- 1.1% (2 datapoints) | 179467|-0.8% |No Change |
      |memtier_benchmark-1Mkeys-list-lpop-rpop-with-10B-values | 181746 +- 0.8% (2 datapoints) | 182416|0.4% |No Change |
      |memtier_benchmark-1Mkeys-list-lpop-rpop-with-1KiB-values | 182004 +- 0.7% (2 datapoints) | 180237|-1.0% |No Change |
      |memtier_benchmark-1Mkeys-load-hash-5-fields-with-1000B-values | 105191 +- 0.9% (2 datapoints) | 105058|-0.1% |No Change |
      |memtier_benchmark-1Mkeys-load-hash-5-fields-with-1000B-values-pipeline-10 | 150683 +- 0.9% (2 datapoints) | 153597|1.9% |No Change |
      |memtier_benchmark-1Mkeys-load-hash-hmset-5-fields-with-1000B-values | 104122 +- 0.7% (2 datapoints) | 105236|1.1% |No Change |
      |memtier_benchmark-1Mkeys-load-list-with-100B-values | 149770 +- 0.9% (2 datapoints) | 150510|0.5% |No Change |
      |memtier_benchmark-1Mkeys-load-list-with-10B-values | 165537 +- 1.9% (2 datapoints) | 164329|-0.7% |No Change |
      |memtier_benchmark-1Mkeys-load-list-with-1KiB-values | 113315 +- 0.5% (2 datapoints) | 114110|0.7% |No Change |
      |memtier_benchmark-1Mkeys-load-stream-1-fields-with-100B-values | 131201 +- 0.7% (2 datapoints) | 129545|-1.3% |No Change |
      |memtier_benchmark-1Mkeys-load-stream-1-fields-with-100B-values-pipeline-10 | 352891 +- 2.8% (2 datapoints) | 348338|-1.3% |No Change |
      |memtier_benchmark-1Mkeys-load-stream-5-fields-with-100B-values | 104386 +- 0.7% (2 datapoints) | 105796|1.4% |No Change |
      |memtier_benchmark-1Mkeys-load-stream-5-fields-with-100B-values-pipeline-10 | 227593 +- 5.5% (2 datapoints) | 218783|-3.9% |waterline=5.5%. potential REGRESSION|
      |memtier_benchmark-1Mkeys-load-string-with-100B-values | 167552 +- 0.2% (2 datapoints) | 170282|1.6% |No Change |
      |memtier_benchmark-1Mkeys-load-string-with-100B-values-pipeline-10 | 646888 +- 0.5% (2 datapoints) | 639680|-1.1% |No Change |
      |memtier_benchmark-1Mkeys-load-string-with-10B-values | 174891 +- 0.7% (2 datapoints) | 174382|-0.3% |No Change |
      |memtier_benchmark-1Mkeys-load-string-with-10B-values-pipeline-10 | 749988 +- 5.1% (2 datapoints) | 769986|2.7% |waterline=5.1%. No Change |
      |memtier_benchmark-1Mkeys-load-string-with-1KiB-values | 155929 +- 0.1% (2 datapoints) | 156387|0.3% |No Change |
      |memtier_benchmark-1Mkeys-load-zset-with-10-elements-double-score | 92241 +- 0.2% (2 datapoints) | 92189|-0.1% |No Change |
      |memtier_benchmark-1Mkeys-load-zset-with-10-elements-int-score | 114328 +- 1.3% (2 datapoints) | 113154|-1.0% |No Change |
      |memtier_benchmark-1Mkeys-string-get-100B | 180685 +- 0.2% (2 datapoints) | 180359|-0.2% |No Change |
      |memtier_benchmark-1Mkeys-string-get-100B-pipeline-10 | 991291 +- 3.1% (2 datapoints) | 1020086|2.9% |No Change |
      |memtier_benchmark-1Mkeys-string-get-10B | 181183 +- 0.3% (2 datapoints) | 177868|-1.8% |No Change |
      |memtier_benchmark-1Mkeys-string-get-10B-pipeline-10 | 1032554 +- 0.8% (2 datapoints) | 1023120|-0.9% |No Change |
      |memtier_benchmark-1Mkeys-string-get-1KiB | 180479 +- 0.9% (2 datapoints) | 182215|1.0% |No Change |
      |memtier_benchmark-1Mkeys-string-get-1KiB-pipeline-10 | 979286 +- 0.9% (2 datapoints) | 989888|1.1% |No Change |
      |memtier_benchmark-1Mkeys-string-mget-1KiB | 121950 +- 0.4% (2 datapoints) | 120996|-0.8% |No Change |
      |memtier_benchmark-1key-geo-60M-elements-geodist | 179404 +- 1.0% (2 datapoints) | 181232|1.0% |No Change |
      |memtier_benchmark-1key-geo-60M-elements-geodist-pipeline-10 | 1023797 +- 0.5% (2 datapoints) | 1014980|-0.9% |No Change |
      |memtier_benchmark-1key-geo-60M-elements-geohash | 180808 +- 1.2% (2 datapoints) | 180606|-0.1% |No Change |
      |memtier_benchmark-1key-geo-60M-elements-geohash-pipeline-10 | 1056458 +- 1.6% (2 datapoints) | 1040050|-1.6% |No Change |
      |memtier_benchmark-1key-geo-60M-elements-geopos | 181808 +- 0.2% (2 datapoints) | 175945|-3.2% |potential REGRESSION |
      |memtier_benchmark-1key-geo-60M-elements-geopos-pipeline-10 | 1038180 +- 3.4% (2 datapoints) | 1033005|-0.5% |No Change |
      |memtier_benchmark-1key-geo-60M-elements-geosearch-fromlonlat | 142614 +- 0.3% (2 datapoints) | 144259|1.2% |No Change |
      |memtier_benchmark-1key-geo-60M-elements-geosearch-fromlonlat-bybox | 141008 +- 0.4% (2 datapoints) | 139602|-1.0% |No Change |
      |memtier_benchmark-1key-geo-60M-elements-geosearch-fromlonlat-pipeline-10 | 560698 +- 0.8% (2 datapoints) | 548806|-2.1% |No Change |
      |memtier_benchmark-1key-list-10-elements-lrange-all-elements | 166132 +- 0.9% (2 datapoints) | 170259|2.5% |No Change |
      |memtier_benchmark-1key-list-100-elements-lrange-all-elements | 92657 +- 2.0% (2 datapoints) | 101445|9.5% |IMPROVEMENT |
      |memtier_benchmark-1key-list-1K-elements-lrange-all-elements | 14965 +- 1.3% (2 datapoints) | 16296|8.9% |IMPROVEMENT |
      |memtier_benchmark-1key-pfadd-4KB-values-pipeline-10 | 264156 +- 0.2% (2 datapoints) | 262582|-0.6% |No Change |
      |memtier_benchmark-1key-set-10-elements-smembers | 138916 +- 1.7% (2 datapoints) | 138016|-0.6% |No Change |
      |memtier_benchmark-1key-set-10-elements-smembers-pipeline-10 | 431019 +- 5.2% (2 datapoints) | 461039|7.0% |waterline=5.2%. IMPROVEMENT |
      |memtier_benchmark-1key-set-10-elements-smismember | 173545 +- 1.1% (2 datapoints) | 173488|-0.0% |No Change |
      |memtier_benchmark-1key-set-100-elements-smembers | 74367 +- 0.0% (2 datapoints) | 80190|7.8% |IMPROVEMENT |
      |memtier_benchmark-1key-set-100-elements-smismember | 155682 +- 1.6% (2 datapoints) | 151367|-2.8% |No Change |
      |memtier_benchmark-1key-set-1K-elements-smembers | 11730 +- 0.4% (2 datapoints) | 13519|15.3% |IMPROVEMENT |
      |memtier_benchmark-1key-set-200K-elements-sadd-constant | 181070 +- 1.1% (2 datapoints) | 180214|-0.5% |No Change |
      |memtier_benchmark-1key-set-2M-elements-sadd-increasing | 166364 +- 0.1% (2 datapoints) | 166944|0.3% |No Change |
      |memtier_benchmark-1key-zincrby-1M-elements-pipeline-1 | 46071 +- 0.6% (2 datapoints) | 44979|-2.4% |No Change |
      |memtier_benchmark-1key-zrank-1M-elements-pipeline-1 | 48429 +- 0.4% (2 datapoints) | 49265|1.7% |No Change |
      |memtier_benchmark-1key-zrem-5M-elements-pipeline-1 | 48528 +- 0.4% (2 datapoints) | 48869|0.7% |No Change |
      |memtier_benchmark-1key-zrevrangebyscore-256K-elements-pipeline-1 | 100580 +- 1.5% (2 datapoints) | 101782|1.2% |No Change |
      |memtier_benchmark-1key-zrevrank-1M-elements-pipeline-1 | 48621 +- 2.0% (2 datapoints) | 48473|-0.3% |No Change |
      |memtier_benchmark-1key-zset-10-elements-zrange-all-elements | 83485 +- 0.6% (2 datapoints) | 83095|-0.5% |No Change |
      |memtier_benchmark-1key-zset-10-elements-zrange-all-elements-long-scores | 118673 +- 0.8% (2 datapoints) | 118006|-0.6% |No Change |
      |memtier_benchmark-1key-zset-100-elements-zrange-all-elements | 19009 +- 1.1% (2 datapoints) | 19293|1.5% |No Change |
      |memtier_benchmark-1key-zset-100-elements-zrangebyscore-all-elements | 18957 +- 0.5% (2 datapoints) | 19419|2.4% |No Change |
      |memtier_benchmark-1key-zset-100-elements-zrangebyscore-all-elements-long-scores| 171693 +- 0.5% (2 datapoints) | 172432|0.4% |No Change |
      |memtier_benchmark-1key-zset-1K-elements-zrange-all-elements | 3566 +- 0.6% (2 datapoints) | 3672|3.0% |No Change |
      |memtier_benchmark-1key-zset-1M-elements-zcard-pipeline-10 | 1067713 +- 0.4% (2 datapoints) | 1071550|0.4% |No Change |
      |memtier_benchmark-1key-zset-1M-elements-zrevrange-5-elements | 169195 +- 0.7% (2 datapoints) | 169620|0.3% |No Change |
      |memtier_benchmark-1key-zset-1M-elements-zscore-pipeline-10 | 914338 +- 0.2% (2 datapoints) | 905540|-1.0% |No Change |
      |memtier_benchmark-2keys-lua-eval-hset-expire | 88346 +- 1.7% (2 datapoints) | 87259|-1.2% |No Change |
      |memtier_benchmark-2keys-lua-evalsha-hset-expire | 103273 +- 1.2% (2 datapoints) | 102393|-0.9% |No Change |
      |memtier_benchmark-2keys-set-10-100-elements-sdiff | 15418 +- 10.9% UNSTABLE (2 datapoints) | 14369|-6.8% |UNSTABLE (very high variance) |
      |memtier_benchmark-2keys-set-10-100-elements-sinter | 83601 +- 3.6% (2 datapoints) | 82508|-1.3% |No Change |
      |memtier_benchmark-2keys-set-10-100-elements-sunion | 14942 +- 11.2% UNSTABLE (2 datapoints) | 14001|-6.3% |UNSTABLE (very high variance) |
      |memtier_benchmark-2keys-stream-5-entries-xread-all-entries | 75938 +- 0.4% (2 datapoints) | 76565|0.8% |No Change |
      |memtier_benchmark-2keys-stream-5-entries-xread-all-entries-pipeline-10 | 120781 +- 1.1% (2 datapoints) | 119142|-1.4% |No Change |
      3264deb2
    • debing.sun's avatar
      Fix a race condition issue in the cache_memory of functionsLibCtx (#13476) · 6c648928
      debing.sun authored
      This is a missing piece of PR https://github.com/redis/redis/pull/13383.
      We will call `functionsLibCtxClear()` in bio, so we shouldn't touch
      `curr_functions_lib_ctx` in it.
      6c648928
  7. 16 Aug, 2024 1 commit
    • debing.sun's avatar
      Fix incorrect lag due to trimming stream via XTRIM command (#13473) · 2b88db90
      debing.sun authored
      ## Description
      When the `XTRIM` command trims a stream, it does not update the
      maximal tombstone (`max_deleted_entry_id`). This leads to an issue
      where the lag calculation incorrectly assumes that there are no
      tombstones after the consumer group's last_id, resulting in an
      inaccurate lag.
      
      The reason XTRIM doesn't need to update the maximal tombstone is that it
      always trims from the beginning of the stream. This means that it
      consistently changes the position of the first entry, leading to the
      following scenarios:
      
      1) First entry trimmed after maximal tombstone:
      If the first entry is trimmed to a position after the maximal tombstone,
      all tombstones will be before the first entry, so they won't affect the
      consumer group's lag.
      
      2) First entry trimmed before maximal tombstone:
      If the first entry is trimmed to a position before the maximal
      tombstone, the maximal tombstone will not be updated.
      
      ## Solution
      Therefore, this PR optimizes the lag calculation by ensuring that when
      both the consumer group's last_id and the maximal tombstone are behind
      the first entry, the consumer group's lag is always equal to the number
      of remaining elements in the stream.
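
      A minimal, self-contained sketch of that rule (illustrative; not the
      actual stream code):

      ```c
      #include <stdio.h>

      /* Consumer group lag under the rule above: when both the group's
       * last-delivered ID and the maximal tombstone were trimmed away (both
       * are behind the first entry), every remaining entry is still unread,
       * so the lag is simply the current stream length. */
      long long group_lag(long long stream_length,
                          long long entries_added, long long entries_read,
                          int last_id_behind_first, int tombstone_behind_first) {
          if (last_id_behind_first && tombstone_behind_first)
              return stream_length;
          /* Otherwise the usual formula applies (valid when no tombstone
           * lies after the group's last-delivered ID). */
          return entries_added - entries_read;
      }

      int main(void) {
          /* e.g. trimmed down to 3 entries, group entirely behind the trim point */
          printf("%lld\n", group_lag(3, 10, 5, 1, 1));  /* prints 3 */
          return 0;
      }
      ```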
      
      Supplement to PR https://github.com/redis/redis/pull/13338
      2b88db90
  8. 14 Aug, 2024 1 commit
  9. 11 Aug, 2024 1 commit
    • Moti Cohen's avatar
      On HDEL last field with expiry, update global HFE DS (#13470) · 806459f4
      Moti Cohen authored
      Hash field expiration is optimized to avoid frequently updating the
      global HFE DS for each field deletion. Eventually active expiration
      will run and update or remove the hash from the global HFE DS
      gracefully. Nevertheless, the "subexpiry" statistic might report a
      wrong number of hashes with HFE to the user if HDEL deletes the last
      field with an expiration in a hash (while there are still more fields
      without expiration).

      Following this change, if HDEL deletes the last field with an
      expiration in the hash, we take care to remove the hash from the
      global HFE DS as well.
      806459f4
  10. 08 Aug, 2024 2 commits
  11. 06 Aug, 2024 2 commits
  12. 05 Aug, 2024 4 commits
    • YaacovHazan's avatar
      Keep cluster shards command implementation generic (#13440) · e4ddc344
      YaacovHazan authored
      Make the clusterCommandShards function use only cluster API functions
      instead of accessing cluster implementation details.
      This way the cluster API implementation doesn't have to have intimate
      knowledge of the command reply format, and doesn't need to interact with
      the client directly (the addReply function family).
      The PR has two commits: one moves the function from cluster_legacy.c
      to cluster.c, and the other modifies its implementation.


      **Better to merge without squashing.**
      e4ddc344
    • Josh Hershberg's avatar
      Make cluster shards cmd implementation generic · 6d5d7541
      Josh Hershberg authored
      
      
      This and the previous commit make the cluster
      shards command a generic implementation instead of a
      specific implementation for each cluster API implementation.
      This commit (a) adds functions to the cluster API
      and (b) modifies the cluster shards cmd implementation
      to use cluster API functions instead of directly
      accessing the legacy clustering implementation.
      Signed-off-by: Josh Hershberg <yehoshua@redis.com>
      6d5d7541
    • Josh Hershberg's avatar
      Prep to make cluster shards cmd generic · e3e631f3
      Josh Hershberg authored
      
      
      This and the following commit make the cluster
      shards command a generic implementation instead of a
      specific implementation for each cluster API implementation.
      This commit simply moves the cluster shards implementation
      from cluster_legacy.c to cluster.c without changing the
      implementation at all. The reason for doing so was to
      help with reviewing the changes in the diff.
      Signed-off-by: Josh Hershberg <yehoshua@redis.com>
      e3e631f3
    • Zhongxian Pan's avatar
      Replace bit shift with __builtin_ctzll in HyperLogLog (#13218) · 6263823e
      Zhongxian Pan authored
      
      
      ## Replace bit shift with `__builtin_ctzll` in HyperLogLog
      
      The builtin function `__builtin_ctzll` is more efficient than a bit
      shift loop, even though, as the source file comment mentions, "in the
      average case there are high probabilities to find a 1 after a few
      iterations".
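
      A self-contained illustration of the change (not the actual Redis
      HyperLogLog code): the loop below is what the bit-shift approach does,
      while `__builtin_ctzll` (GCC/Clang) computes the same count in a
      single step.

      ```c
      #include <stdint.h>
      #include <stdio.h>

      /* Count the run of trailing zero bits in a 64-bit word by shifting. */
      static int trailing_zeros_loop(uint64_t bits) {
          int count = 0;
          while ((bits & 1) == 0) { bits >>= 1; count++; }  /* assumes bits != 0 */
          return count;
      }

      int main(void) {
          uint64_t bits = 0xb0;  /* ...10110000: 4 trailing zeros */
          printf("%d %d\n", trailing_zeros_loop(bits), __builtin_ctzll(bits));
          return 0;
      }
      ```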
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      6263823e
  13. 04 Aug, 2024 1 commit
  14. 03 Aug, 2024 1 commit
  15. 01 Aug, 2024 2 commits
  16. 31 Jul, 2024 1 commit
  17. 30 Jul, 2024 1 commit
  18. 28 Jul, 2024 1 commit
  19. 25 Jul, 2024 2 commits
  20. 24 Jul, 2024 1 commit
  21. 22 Jul, 2024 2 commits
    • Oran Agra's avatar
      solve race conditions in tests (#13433) · 447ce11a
      Oran Agra authored
      [exception]: Executing test client: ERR FAILOVER target replica is not
      online.. ERR FAILOVER target replica is not online.
          while executing
      "$node_0 failover to $node_1_host $node_1_port"
          ("uplevel" body line 16)
          invoked from within
      "uplevel 1 $code"
          (procedure "test" line 58)
          invoked from within
      "test {failover command to specific replica works} {
      
      [err]: client evicted due to percentage of maxmemory in
      tests/unit/client-eviction.tcl
      Expected 33622 >= 220200 && 33622 < 440401 (context: type eval line 17
      cmd {assert {$tot_mem >= $n && $tot_mem < $maxmemory_clients_actual}}
      proc ::test)
      447ce11a
    • Oran Agra's avatar
      Different fix for the race in #13361 (#13434) · 13d227fa
      Oran Agra authored
      Recently in #13361, I attempted to fix a race between FLUSHALL and
      BGSAVE, where despite calling killRDBChild, the
      backgroundSaveDoneHandler would terminate with success.
      It turns out that even if the child didn't yet exit, there's a chance
      it'll still miss our signal and exit with success.
      In that case, we will still mess up the dirty counter (deducting
      dirty_before_bgsave), which is reset by FLUSHALL, and overwrite the
      synchronous rdb file we saved.

      Instead, we'll set a flag to treat the next done handler as a failed
      one.
      13d227fa
  22. 17 Jul, 2024 1 commit
    • Oran Agra's avatar
      Fix external test hang in redis-cli test when run in a certain order (#13423) · a3319785
      Oran Agra authored
      When the tests are run against an external server in this order:
      `--single unit/introspection --single unit/moduleapi/blockonbackground
      --single integration/redis-cli`
      the test would hang when the "ASK redirect test" test attempts to create
      a listening socket (it fails, and then redis-cli itself hangs waiting
      for a non-responsive socket created by the introspection test).
      
      The reasons are:
      1. the blockonbackground test includes util.tcl and resets the
      `::last_port_attempted` variable
      2. the test in introspection didn't close the listening server, so
      it's still alive.
      3. find_available_port doesn't properly detect the busy port, and it
      thinks that the port is free even though it's busy.

      We fix all 3 of these problems, even though fixing just one would be
      enough to let the test pass.
      a3319785
  23. 16 Jul, 2024 3 commits
    • Oran Agra's avatar
      Test infra adjustments for external CI runs (#13421) · fa46aa4d
      Oran Agra authored
      - when uploading server logs, make sure they don't overwrite each other.
      - sort the test units to get consistent order between them (following
      #13220)
      - backup and restore the entire server configuration, to protect one
      unit from config changes another unit performs
      fa46aa4d
    • debing.sun's avatar
      Trigger Lua GC after script loading (#13407) · 88af96c7
      debing.sun authored
      Nowadays we do not trigger Lua GC after loading a Lua script. This
      means that when a large number of scripts are loaded, such as when
      functions are propagating from the master to the replica, if the Lua
      scripts are never touched on the replica, the garbage might remain
      there indefinitely.
      
      Before this PR, we would share a gc_count between scripts and functions.
      This means that, under certain circumstances, the GC trigger for scripts
      and functions was not fair.
      For example, loading a large number of scripts followed by a small
      number of functions could result in the functions triggering GC.
      In this PR, we assign a unique `gc_count` to each of them, so the GC
      triggers between them will no longer affect each other.
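
      A minimal sketch of the per-context trigger (illustrative names and
      threshold, assuming the Lua C API; not the actual Redis code):

      ```c
      #include <lua.h>

      #define GC_LOADS_THRESHOLD 100   /* illustrative, not the real value */

      /* Each scripting context keeps its own counter, so loading many EVAL
       * scripts can no longer push the FUNCTION context over its GC
       * threshold (and vice versa). */
      typedef struct {
          lua_State *lua;
          long long gc_count;   /* loads since the last collection */
      } scriptingCtx;

      void after_script_loaded(scriptingCtx *ctx) {
          if (++ctx->gc_count >= GC_LOADS_THRESHOLD) {
              lua_gc(ctx->lua, LUA_GCCOLLECT, 0);  /* run a full GC cycle */
              ctx->gc_count = 0;
          }
      }
      ```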
      
      On the other hand, this PR brings a regression for the script loading
      commands (`FUNCTION LOAD` and `SCRIPT LOAD`), but they are not a hot
      path, so we can ignore it, and it will be replaced by
      https://github.com/redis/redis/pull/13375 in the future.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      88af96c7
    • debing.sun's avatar
      Prevent deleting RDB read event after restarting RDB saving for other diskless replicas (#13410) · 76415fa2
      debing.sun authored
      When we terminate the diskless RDB saving child process and, at the same
      time, we start a new BGSAVE for new replicas, we should not delete the
      RDB read event. Otherwise, these replicas will never receive a response.
      This is a result of the recent change in
      https://github.com/redis/redis/pull/13361
      
      
      
      ---------
      Co-authored-by: oranagra <oran@redislabs.com>
      76415fa2
  24. 15 Jul, 2024 2 commits
  25. 14 Jul, 2024 1 commit
    • guybe7's avatar
      Crash report: Use more chars for argv (#13413) · b10e19e3
      guybe7 authored
      128 is not enough chars when we're talking about commands like RESTORE.
      Of course, it's impossible to find the perfect number, but 1024 is
      better than 128, and it's not obscenely large.
      b10e19e3
  26. 12 Jul, 2024 1 commit
    • debing.sun's avatar
      Avoid starting defrag after config resetstat for defrag test (#13399) · d39548c8
      debing.sun authored
      
      
      If `config resetstat` is executed and a defrag is started after it,
      the `total_active_defrag_time` will not be 0.
      When we start the defrag again, we will skip the following steps:
      1. waiting for the defrag to start (while total_active_defrag_time is
      still equal to 0)
      2. waiting for the test to complete (until active_defrag_running is
      equal to 0 again)
      which results in the test failing.
      
      ---------
      Co-authored-by: oranagra <oran@redislabs.com>
      d39548c8
  27. 11 Jul, 2024 1 commit