1. 03 Sep, 2023 1 commit
    • secwall's avatar
      Check shard_id pointer validity in updateShardId (#12538) · a2046c1e
      secwall authored
      When connecting between a 7.0 and 7.2 cluster, the 7.0 cluster will not populate the shard_id field, which is expected on the 7.2 cluster. This is not intended behavior, as the 7.2 cluster is supposed to use a temporary shard_id while the node is in the upgrading state, but it wasn't being correctly set in this case.
      a2046c1e
  2. 02 Sep, 2023 1 commit
    • alonre24's avatar
      redis-benchmark - add the support for binary strings (#9414) · 044e29dd
      alonre24 authored
      
      
      Recently, the option of sending an argument from stdin using the `-x` flag
      was added to redis-benchmark (this option is available in redis-cli as well).
      However, using the `-x` option to send a blob that contains null characters
      doesn't work as expected - the argument is truncated at the first occurrence of
      `\x00` (unlike in redis-cli).
      This PR fixes this issue and adds support for any binary string input,
      by passing the argument lengths to `redisFormatCommandArgv` when building the
      redis-benchmark command, so the arguments are no longer treated as C strings.
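      For illustration, here is a minimal standalone sketch (not the redis-benchmark code itself) of why passing explicit argument lengths to hiredis' `redisFormatCommandArgv` preserves embedded null bytes, while passing NULL lengths falls back to `strlen()` and truncates:
      ```c
      /* Sketch: explicit lengths keep embedded '\0' bytes in the formatted command. */
      #include <hiredis/hiredis.h>
      #include <stdio.h>

      int main(void) {
          const char blob[] = "foo\0bar";               /* 7 bytes, contains '\0' */
          const char *argv[] = {"SET", "key", blob};
          size_t lens[] = {3, 3, sizeof(blob) - 1};     /* binary-safe lengths */
          char *cmd = NULL;

          /* With lengths: the full 7-byte blob is encoded into the RESP command. */
          long long n = redisFormatCommandArgv(&cmd, 3, argv, lens);
          printf("binary-safe command: %lld bytes\n", n);
          redisFreeCommand(cmd);

          /* With NULL lengths: strlen() is used, so the blob is cut at '\0'. */
          n = redisFormatCommandArgv(&cmd, 3, argv, NULL);
          printf("C-string command: %lld bytes\n", n);
          redisFreeCommand(cmd);
          return 0;
      }
      ```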
      
      Additionally, we add simple test coverage for `-x` (without binary strings), remove an
      extra server that was started in the tests, and make sure to select db 0 so that `r`
      and the benchmark work on the same db.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      044e29dd
  3. 01 Sep, 2023 1 commit
    • Binbin's avatar
      Add logreqres:skip flag to new INFO obuf limit test (#12537) · 4ba144a4
      Binbin authored
      The new test added in #12476 causes reply-schemas-validator to fail.
      When doing `catch {r get key}`, the req-res output is:
      ```
      3
      get
      3
      key
      12
      __argv_end__
      $100000
      aaaaaaaaaaaaaaaaaaaa...4
      info
      5
      stats
      12
      __argv_end__
      =1670
      txt:# Stats
      ...
      ```
      
      As we can see in the line after `$100000`, there is a stray 4 at the end, which
      breaks the req-res-log-validator script since the format is wrong.
      
      The reason, I guess, is that after the client reconnection (following the output
      buffer limit disconnection), we do not add newlines but append the args directly.
      Since obuf-limits.tcl does the same thing and already has the logreqres:skip
      flag, this PR follows it.
      4ba144a4
  4. 31 Aug, 2023 4 commits
    • Roshan Khatri's avatar
      Remove unnecessary use of sds and mem copy in module.c (#12533) · 49f7d173
      Roshan Khatri authored
      Found that in moduleConfigValidityCheck and isModuleConfigNameRegistered, sds is not required. This also allowed removing an unnecessary memory copy from some of the config registration APIs.
      49f7d173
    • icy17's avatar
      370d3801
    • Chen Tianjie's avatar
      Optimize ZRANGE offset location from linear search to skiplist jump. (#12450) · b26e8e32
      Chen Tianjie authored
      ZRANGE BYSCORE/BYLEX with the [LIMIT offset count] option was
      using every level of the skiplist to jump to the first/last node in range,
      but only level[0] of the skiplist to locate the node at the offset, resulting
      in sub-optimal performance when using LIMIT:
      ```
      while (ln && offset--) {
          if (reverse) {
              ln = ln->backward;
          } else {
              ln = ln->level[0].forward;
          }
      }
      ```
      This can be slow when the offset is very big. Instead, we can compute the total
      rank of the offset location and use the skiplist to jump to it. It is an improvement
      from O(offset) to O(log rank).
      
      Below is how this is implemented (when the offset is positive):
      
      Use the skiplist to search for the first element in the range and record its
      rank `rank_0`, so we can derive the rank of the target node `rank_t`.
      Meanwhile we record the last visited node that has zsl->level-1
      levels, and its rank `rank_1`. Then we start from that zsl->level-1 node and
      use the skiplist to go forward `rank_t-rank_1` nodes to reach the target node.
      
      It is very similar when the offset is reversed.
      
      Note that if `rank_t` is very close to `rank_0`, we just start from the first
      element in the range and go node by node; this handles the case where the
      zsl->level-1 node is too far away and it is quicker to reach the target node by node.
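      A rough sketch of the idea in C (illustrative only, not the exact code from this PR; the wrapper name is hypothetical, while `zslFirstInRange`, `zslGetRank`, `zslGetElementByRank` and `zslValueLteMax` are the existing t_zset.c helpers):
      ```c
      /* Illustrative fragment: find the node at `offset` within a score range by
       * jumping to its rank, O(log rank), instead of walking level[0] O(offset) times. */
      zskiplistNode *zslNthInRangeSketch(zskiplist *zsl, zrangespec *range, long offset) {
          zskiplistNode *first = zslFirstInRange(zsl, range);
          if (first == NULL) return NULL;

          /* Rank of the first in-range element (1-based). */
          unsigned long rank_0 = zslGetRank(zsl, first->score, first->ele);

          /* Jump straight to the node whose rank is rank_0 + offset. */
          zskiplistNode *target = zslGetElementByRank(zsl, rank_0 + offset);

          /* The caller must still check that the target is inside the range. */
          if (target && !zslValueLteMax(target->score, range)) return NULL;
          return target;
      }
      ```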
      
      Here is a test using a randomly generated zset containing 10000 elements
      (with different positive scores), running a benchmark which compares how
      fast the `ZRANGE` command is executed before and after the optimization.
      
      The start score is set to 0 and the count is set to 1 to make sure that
      most of the time is spent on locating the offset.
      ```
      memtier_benchmark -h 127.0.0.1 -p 6379 --command="zrange test 0 +inf byscore limit <offset> 1"
      ```
      | offset | QPS(unstable) | QPS(optimized) |
      |--------|--------|--------|
      | 10 | 73386.02 | 74819.82 |
      | 1000 | 48084.96 | 73177.73 |
      | 2000 | 31156.79 | 72805.83 |
      | 5000 | 10954.83 | 71218.21 |
      
      From the results above, we can see that the original code slows down dramatically
      as the offset gets bigger, while with the optimization the throughput is
      almost unaffected.
      
      Similar results are obtained when testing with a reversed offset:
      ```
      memtier_benchmark -h 127.0.0.1 -p 6379 --command="zrange test +inf 0 byscore rev limit <offset> 1"
      ```
      | offset | QPS(unstable) | QPS(optimized) |
      |--------|--------|--------|
      | 10 | 74505.14 | 71653.67 |
      | 1000 | 46829.25 | 72842.75 |
      | 2000 | 28985.48 | 73669.01 |
      | 5000 | 11066.22 | 73963.45 | 
      
      And the same conclusion is drawn from the tests of ZRANGE BYLEX.
      b26e8e32
    • Binbin's avatar
      Update sort_ro reply_schema to mention the null reply (#12534) · 9ce8c54d
      Binbin authored
      Also added a test to cover this case, so that the
      reply schema check is exercised.
      9ce8c54d
  5. 30 Aug, 2023 4 commits
    • Roshan Khatri's avatar
      Allows modules to declare new ACL categories. (#12486) · 75199605
      Roshan Khatri authored
      
      
      This PR adds a new Module API, `int RM_AddACLCategory(RedisModuleCtx *ctx, const char *category_name)`, to add a new ACL command category.
      
      Here, we initialize the ACLCommandCategories array by allocating space for 64 categories and duplicate the 21 default categories from the predefined array 'ACLDefaultCommandCategories' into the ACLCommandCategories array during ACL initialization. Valid ACL category names can only contain alphanumeric characters, underscores, and dashes.
      
      When called, the API checks the onload flag, validates the category name, and checks for a duplicate category name. If the conditions are satisfied, it appends the new category to the end of the ACLCommandCategories array and assigns the acl_categories flag bit according to the index at which the category was added.
      
      If any error is encountered, the API sets errno accordingly.
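      From the module side, usage would look roughly like this (a minimal sketch assuming the corresponding `RedisModule_AddACLCategory` wrapper and a hypothetical module and category name):
      ```c
      /* Sketch: a module registering its own ACL category during OnLoad. */
      #include "redismodule.h"

      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);

          if (RedisModule_Init(ctx, "mymodule", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;

          /* Must be called during OnLoad; fails (setting errno) if the name is
           * invalid or duplicated, or if the 64-category limit is reached. */
          if (RedisModule_AddACLCategory(ctx, "my-category") == REDISMODULE_ERR)
              return REDISMODULE_ERR;

          return REDISMODULE_OK;
      }
      ```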
      
      ---------
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      75199605
    • bodong.ybd's avatar
      Fix sort_ro get-keys function return wrong key number (#12522) · b59f53ef
      bodong.ybd authored
      Before:
      ```
      127.0.0.1:6379> command getkeys sort_ro key
      (empty array)
      127.0.0.1:6379>
      ```
      After:
      ```
      127.0.0.1:6379> command getkeys sort_ro key
      1) "key"
      127.0.0.1:6379>
      ```
      b59f53ef
    • Chen Tianjie's avatar
      Add two stats to count client input and output buffer oom. (#12476) · e3d4b30d
      Chen Tianjie authored
      Add these INFO metrics:
      * client_query_buffer_limit_disconnections
      * client_output_buffer_limit_disconnections
      
      Sometimes it is useful to monitor whether clients reach the size limits of the
      query buffer and output buffer, to decide whether we need to adjust the
      buffer size limits or reduce the client query payload.
      e3d4b30d
    • nihohit's avatar
      Align CONFIG RESETSTAT/REWRITE tips with SET. (#12530) · 4b281ce5
      nihohit authored
      
      
      Since the three commands have similar behavior (change config, return
      OK), the tips that govern how they should behave should be similar.
      Co-authored-by: Shachar Langbeheim <shachlan@amazon.com>
      4b281ce5
  6. 27 Aug, 2023 1 commit
    • Binbin's avatar
      Add printing for LATENCY related tests (#12514) · e7926537
      Binbin authored
      This test failed several times:
      ```
      *** [err]: LATENCY GRAPH can output the event graph in tests/unit/latency-monitor.tcl
      Expected '478' to be more than or equal to '500' (context: type eval
      line 8 cmd {assert_morethan_equal $high 500} proc ::test)
      ```
      
      Not sure why; adding some verbose printing that will print the command
      result the next time it happens.
      e7926537
  7. 22 Aug, 2023 1 commit
  8. 21 Aug, 2023 4 commits
    • Binbin's avatar
      BITCOUNT and BITPOS with non-existing key and illegal arguments should return error, not 0 (#11734) · 1407ac1f
      Binbin authored
      Before this commit, BITCOUNT and BITPOS with a non-existing key would return 0
      even when the arguments were invalid:
      ```
      > flushall
      OK
      > bitcount s 0
      (integer) 0
      > bitpos s 0 0 1 hello
      (integer) 0
      
      > set s 1
      OK
      > bitcount s 0
      (error) ERR syntax error
      > bitpos s 0 0 1 hello
      (error) ERR syntax error
      ```
      
      The reason is that we checked for the non-existing key before parameter checking
      and returned early. This PR fixes it; after this commit:
      ```
      > flushall
      OK
      > bitcount s 0
      (error) ERR syntax error
      > bitpos s 0 0 1 hello
      (error) ERR syntax error
      ```
      
      Also, BITPOS gets the same fix as #12394: check for wrong arguments before
      checking for the key.
      ```
      > lpush mylist a b c
      (integer) 3                                                                                    
      > bitpos mylist 1 a b
      (error) WRONGTYPE Operation against a key holding the wrong kind of value
      ```
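      Conceptually, the fix is just a reordering inside the command implementations. A simplified sketch (not the actual Redis code; `parseRangeArguments` is a hypothetical stand-in for the real option parsing):
      ```c
      /* Simplified sketch: validate arguments before looking up the key, so a
       * non-existing key with bad arguments reports the error instead of 0. */
      void bitcountCommandSketch(client *c) {
          long long start, end;

          /* 1. Argument validation happens unconditionally. */
          if (c->argc > 2 && parseRangeArguments(c, &start, &end) != C_OK) {
              addReplyError(c, "syntax error");
              return;
          }

          /* 2. Only now is the key looked up; a missing key still replies 0. */
          robj *o = lookupKeyRead(c->db, c->argv[1]);
          if (o == NULL) {
              addReply(c, shared.czero);
              return;
          }
          if (checkType(c, o, OBJ_STRING)) return;

          /* ... count the bits in the validated range ... */
      }
      ```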
      1407ac1f
    • Wen Hui's avatar
      BITCOUNT: check for argument, before checking for key (#12394) · 45d33106
      Wen Hui authored
      Generally, in any command we first validate the arguments and then check whether the key exists.
      
      Some examples:
      
      ```
      127.0.0.1:6379> getrange no-key invalid1 invalid2
      (error) ERR value is not an integer or out of range
      127.0.0.1:6379> setbit no-key 1 invalid
      (error) ERR bit is not an integer or out of range
      127.0.0.1:6379> xrange no-key invalid1 invalid2
      (error) ERR Invalid stream ID specified as stream command argument
      ```
      
      **Before change** 
      ```
      bitcount no-key invalid1 invalid2
      0
      ```
      
      **After change**
      ```
      bitcount no-key invalid1 invalid2
      (error) ERR value is not an integer or out of range
      ```
      45d33106
    • Binbin's avatar
      Fix LREM count LONG_MIN overflow minor issue (#12465) · c98a28a8
      Binbin authored
      Limit the range of the LREM count to -LONG_MAX ~ LONG_MAX.
      Before the fix, passing LONG_MIN would cause an overflow when negated
      and would effectively be the same as passing 0 (because
      the condition `toremove && removed == toremove` can never
      be satisfied).
      
      This is a minor fix as it shouldn't really affect users,
      more like a cleanup.
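      A minimal sketch of the idea (illustrative fragment; the actual fix uses Redis' range-checked parsing, and the reply message here is assumed):
      ```c
      /* Sketch: reject LONG_MIN so that negating a negative count can never
       * overflow. With toremove in [-LONG_MAX, LONG_MAX], `-toremove` is always
       * representable and `removed == toremove` can actually terminate the loop. */
      long toremove;
      if (getLongFromObjectOrReply(c, c->argv[2], &toremove, NULL) != C_OK) return;
      if (toremove == LONG_MIN) {
          addReplyError(c, "value is out of range");
          return;
      }
      if (toremove < 0) {
          toremove = -toremove;   /* safe now: |toremove| <= LONG_MAX */
          /* ... scan from the tail ... */
      }
      ```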
      c98a28a8
    • Yves LeBras's avatar
      config.memkeys init for consistency (#12505) · 16988208
      Yves LeBras authored
      Initializing `memkeys` to 0 for consistency and clarity.
      The config struct is zeroed anyway, but other fields are explicitly initialized.
      16988208
  9. 20 Aug, 2023 2 commits
    • Wen Hui's avatar
      Added tests for Client commands (#10276) · e532c95d
      Wen Hui authored
      Our test suite was missing some coverage for CLIENT sub-commands.
      The goal of this PR is to add test coverage for the following commands:
      
      Client caching
      Client kill
      Client no-evict
      Client pause
      Client reply
      Client tracking
      Client setname
      
      At the very least, this is useful to make sure there are no leaks and crashes in these code paths.
      e532c95d
    • meiravgri's avatar
      Signal handler attributes (#12426) · fe47c202
      meiravgri authored
      The purpose of this PR is to make the crash report process thread safe.
      Main changes include:
      
      1. `setupSigSegvHandler()` is introduced to initialize the signal handler.
      This function first initializes the signal handler mutex (if not initialized yet)
      and then registers the signal handler for the process.
      
      2. **sigsegvHandler** flags:
      SA_NODEFER - don't add the signal to the process signal mask. We use this
      flag because we want to be able to handle a second delivery of the signal manually.
      Removed SA_RESETHAND: this flag resets the signal handler function upon the first
      entry into the registered function. The reason to use this flag is to protect against
      the same thread recursively entering the signal handler. But it also means
      that if a second thread crashes while handling a signal, the process will be
      terminated immediately and we won't get the crash report.
      In this PR we discard this flag. The signal handler guard described below
      is what solves the above issues.
      
      3. Add a **signal handler lock** with the ERRORCHECK attribute.
      The lock's purpose is to ensure that only one thread generates a crash report.
      Once a second thread enters the signal handler, it will be blocked.
      We use the ERRORCHECK lock in order to protect against a possible deadlock in
      case the thread handling the crash gets a signal. In that scenario, we log
      what we have collected until the handler crashed.
      
      At the end of the crash report we reset the signal handler to SIG_DFL, with no flags, and
      rethrow the signal to generate a core dump (if enabled) and exit the process.
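      The overall pattern looks roughly like the standalone sketch below (a simplified illustration, not the actual debug.c code; the real handler also covers the watchdog and the report printing that is omitted here):
      ```c
      /* Sketch: SA_NODEFER so a second fault can still reach the handler, plus an
       * ERRORCHECK mutex so only one thread produces the crash report and a
       * re-entering crashed thread detects the self-deadlock instead of hanging. */
      #include <signal.h>
      #include <pthread.h>
      #include <errno.h>
      #include <string.h>
      #include <unistd.h>

      static pthread_mutex_t signal_handler_lock;

      static void sigsegvHandler(int sig, siginfo_t *info, void *ucontext) {
          (void)info; (void)ucontext;
          if (pthread_mutex_lock(&signal_handler_lock) == EDEADLK) {
              /* The thread writing the report crashed again: give up gracefully. */
              _exit(1);
          }
          /* ... generate the crash report here ... */

          /* Reset to the default action and re-raise to get a core dump. */
          struct sigaction act;
          memset(&act, 0, sizeof(act));
          act.sa_handler = SIG_DFL;
          sigaction(sig, &act, NULL);
          raise(sig);
      }

      static void setupSigSegvHandler(void) {
          pthread_mutexattr_t attr;
          pthread_mutexattr_init(&attr);
          pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
          pthread_mutex_init(&signal_handler_lock, &attr);

          struct sigaction act;
          memset(&act, 0, sizeof(act));
          sigemptyset(&act.sa_mask);
          /* SA_NODEFER: don't block the signal while handling it; no SA_RESETHAND. */
          act.sa_flags = SA_NODEFER | SA_SIGINFO;
          act.sa_sigaction = sigsegvHandler;
          sigaction(SIGSEGV, &act, NULL);
          sigaction(SIGBUS, &act, NULL);
          sigaction(SIGFPE, &act, NULL);
          sigaction(SIGILL, &act, NULL);
          sigaction(SIGABRT, &act, NULL);
      }
      ```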
      
      During the work on this PR we wanted to understand the historical reasons for
      how crashes are handled.
      With respect to the choice of the flag, we believe the **SA_RESETHAND** was not
      added for any specific purpose.
      **SA_ONSTACK**, which is removed here from bugReportEnd(), was originally also
      set in the initial registration of the signal handler, but was removed in 3ada43e7. In addition,
      it was removed from another location in deee2c1e with the following description,
      which is also relevant to why it should be removed from bugReportEnd:
      
      > it seems to be some valgrind bug with SA_ONSTACK.
      > SA_ONSTACK seems unneeded since WD is not recursive (SA_NODEFER was removed),
      > also, not sure if it's even valid without a call to sigaltstack()
      fe47c202
  10. 16 Aug, 2023 5 commits
  11. 15 Aug, 2023 1 commit
  12. 10 Aug, 2023 2 commits
    • Madelyn Olson's avatar
      Fixed a bug where sequential matching ACL rules weren't compressed (#12472) · 7c179f9b
      Madelyn Olson authored
      When a new ACL rule was added, an attempt was made to remove
      any "overlapping" rules. However, when a match was found, the search
      was not resumed at the right location, but instead after the original position of
      the original command.
      
      For example, if the current rules were `-config +config|get` and a rule `+config`
      was added, it would identify that `-config` was matched, but it would skip over
      `+config|get`, leaving the compacted rule `-config +config`. This would be evaluated
      safely, but looks weird.
      
      This bug can only be triggered with subcommands, since that is the only way to
      have sequential matching rules. Resolves #12470. This is also only present in 7.2.
      I think there was also a minor risk of removing another valid rule, since it would start
      the search for the next command at an arbitrary point. I couldn't find a valid offset that
      would have caused a match using any of the existing commands that have subcommands,
      combined with another command.
      7c179f9b
    • Binbin's avatar
      Fix flaky SENTINEL RESET test (#12437) · 6abfda54
      Binbin authored
      After SENTINEL RESET, sometimes the sentinel can
      sense the master again, causing the test to fail.
      Here we give it a few more chances.
      6abfda54
  13. 05 Aug, 2023 4 commits
    • zhaozhao.zz's avatar
      optimize the check of kill pubsub clients after modifying ACL rules (#12457) · 1b6bdff4
      zhaozhao.zz authored
      If there are no subscribers, we can skip the operation.
      1b6bdff4
    • zhaozhao.zz's avatar
      do not call handleClientsBlockedOnKeys inside yielding command (#12459) · 8226f39f
      zhaozhao.zz authored
      
      
      Fix the assertion triggered when a busy script (timeout) signals ready keys (e.g. via LPUSH),
      and then an arbitrary client's `allow-busy` command steps into `handleClientsBlockedOnKeys`
      and tries to wake up clients blocked on keys (e.g. BLPOP).
      
      Reproduction process:
      1. start a redis with aof
          `./redis-server --appendonly yes`
      2. exec blpop
          `127.0.0.1:6379> blpop a 0`
      3. use another client to call a busy script, and have this script push to the blocked key
          `127.0.0.1:6379> eval "redis.call('lpush','a','b') while(1) do end" 0`
      4. use a new client to call an allow-busy command like auth
          `127.0.0.1:6379> auth a`
      
      BTW, this issue also breaks the atomicity of scripts.
      
      This bug has been around for many years; older versions only have the
      atomicity problem, and only 7.0/7.2 have the assertion problem.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      8226f39f
    • sundb's avatar
      Avoid mostly harmless integer overflow in cjson (#12456) · da9c2804
      sundb authored
      This PR mainly fixes a possible integer overflow in `json_append_string()`.
      When we use `cjson.encode()` to encode a string larger than 2GB, with specific
      compilation flags, an integer overflow may occur, leading to truncation, so
      the part of the string beyond 2GB is not encoded.
      On the other hand, this overflow doesn't cause any out-of-range read or write, or a segfault.
      
      1) using -O0 for lua_cjson (`make LUA_DEBUG=yes`)
          In this case, `i` will overflow, which leads to truncation.
          When `i` reaches `INT_MAX+1` and overflows to INT_MIN, when compared to
          len, `i` (1000000..00) is sign-extended to a 64-bit integer (1111111.....000000).
          At this point `i` compares greater than len and jumps out of the loop, so `for (i = 0; i < len; i++)`
          loops at most 2^31 times, and the part larger than 2GB is truncated.
      
      ```asm
      `i` => -0x24(%rbp)
      <+253>:   addl   $0x1,-0x24(%rbp)       ; overflow if i large than 2^31
      <+257>:   mov    -0x24(%rbp),%eax
      <+260>:   movslq %eax,%rdx	            ; move a 32-bit value with sign extension into a 64-bit signed
      <+263>:   mov    -0x20(%rbp),%rax
      <+267>:   cmp    %rax,%rdx              ; check `i < len`
      <+270>:   jb     0x212600 <json_append_string+148>
      ```
         
      2) using -O2/-O3 for lua_cjson (`make LUA_DEBUG=no`, **the default**)
          In this case, because signed integer overflow is undefined behavior, `i` will not overflow.
          The compiler optimizes `i` to use a 64-bit register for all subsequent instructions.
      
      ```asm
      <+180>:   add    $0x1,%rbx           ; Using 64-bit register `rbx` for i++
      <+184>:   lea    0x1(%rdx),%rsi
      <+188>:   mov    %rsi,0x10(%rbp)
      <+192>:   mov    %al,(%rcx,%rdx,1)
      <+195>:   cmp    %rbx,(%rsp)         ; check `i < len`
      <+199>:   ja     0x20b63a <json_append_string+154>
      ```
      
      3) using 32-bit
          Because `strbuf_ensure_empty_length()` preallocates memory of length (len * 6 + 2),
          on 32-bit `cjson.encode()` can only handle strings smaller than ((2 ^ 32) - 3) / 6,
          so 32-bit builds are not affected.
      
      Also change `i` in `strbuf_append_string()` to `size_t`.
      Since its second argument `str` is taken from the `char2escape` string array, which is never
      larger than 6 characters, `strbuf_append_string()` was not actually at risk of overflow (the bug was unreachable).
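      The shape of the fix is simply widening the loop counter so it matches the `size_t` length, along the lines of this sketch (illustrative, not the exact lua_cjson diff):
      ```c
      /* Sketch: with `int i`, the counter wraps after INT_MAX iterations on
       * strings larger than 2GB; declaring it size_t (matching len) avoids it. */
      static void append_escaped_sketch(const char *str, size_t len) {
          (void)str;
          for (size_t i = 0; i < len; i++) {   /* previously: int i */
              /* look up str[i] in the escape table and append it ... */
          }
      }
      ```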
      da9c2804
    • Binbin's avatar
      Fix GEOHASH / GEODIST / GEOPOS time complexity, should be O(1) (#12445) · 7af9f4b3
      Binbin authored
      GEOHASH / GEODIST / GEOPOS use zsetScore to get the score; in skiplist encoding,
      we use dictFind to get the score, which is O(1), the same as the ZSCORE command.
      It is not clear why these commands were documented as O(log(N)) and O(N) until now.
      7af9f4b3
  14. 02 Aug, 2023 2 commits
    • Meir Shpilraien (Spielrein)'s avatar
      Ensure that the function load timeout is disabled during loading from RDB/AOF... · 2ee1bbb5
      Meir Shpilraien (Spielrein) authored
      Ensure that the function load timeout is disabled during loading from RDB/AOF and on replicas. (#12451)
      
      When loading a function from either RDB/AOF or a replica, it is essential not to
      fail on timeout errors. The loading time may vary due to various factors, such as
      hardware specifications or the system's workload during the loading process.
      Once a function has been successfully loaded, it should be allowed to load from
      persistence or on replicas without encountering a timeout failure.
      
      To maintain a clear separation between the engine and Redis internals, the
      implementation refrains from directly checking the state of Redis within the
      engine itself. Instead, the engine receives the desired timeout as part of the
      library creation and duly respects this timeout value. If Redis wishes to disable
      any timeout, it can simply send a value of 0.
      2ee1bbb5
    • zhaozhao.zz's avatar
      fix false success and a memory leak for ACL selector with bad parenthesis combination (#12452) · 90ab91f0
      zhaozhao.zz authored
      
      
      When doing the selector merge, we should check every time whether the merge
      has started (i.e., whether open_bracket_start is -1).
      Otherwise, encountering an illegal selector pattern could succeed
      and also cause memory leaks, for example:
      
      ```
      acl setuser test1 (+PING (+SELECT (+DEL )
      ```
      
      The above would leak memory and succeed with only DEL being applied;
      after the fix it now returns an error.
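      As an illustration of the guard (a simplified, hypothetical sketch; not the actual ACL merging code):
      ```c
      /* Illustrative sketch: while merging "( ... )" selector tokens, every branch
       * checks whether a selector is currently open; an unexpected '(' is rejected
       * and the pending buffer is freed instead of being leaked. */
      int mergeSelectorTokensSketch(sds *argv, int argc) {
          int open_bracket_start = -1;     /* index where the current "(" started */
          sds selector = NULL;

          for (int j = 0; j < argc; j++) {
              sds op = argv[j];
              int opens  = op[0] == '(';
              int closes = op[sdslen(op)-1] == ')';

              if (open_bracket_start == -1) {
                  if (opens && !closes) {          /* start merging a new selector */
                      open_bracket_start = j;
                      selector = sdsdup(op);
                  }
                  /* else: ordinary op or a complete "(...)" token, handled as-is */
              } else if (opens) {
                  sdsfree(selector);               /* '(' while one is already open */
                  return C_ERR;
              } else {
                  selector = sdscatfmt(selector, " %S", op);
                  if (closes) {                    /* selector complete: apply it */
                      sdsfree(selector);
                      selector = NULL;
                      open_bracket_start = -1;
                  }
              }
          }
          if (open_bracket_start != -1) {          /* unclosed selector */
              sdsfree(selector);
              return C_ERR;
          }
          return C_OK;
      }
      ```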
      Co-authored-by: Oran Agra <oran@redislabs.com>
      90ab91f0
  15. 01 Aug, 2023 1 commit
  16. 30 Jul, 2023 1 commit
  17. 25 Jul, 2023 4 commits
    • Harkrishn Patro's avatar
      Test coverage for incr/decr operation on robj encoding type optimization (#12435) · 42985b00
      Harkrishn Patro authored
      Additional test coverage for the incr/decr operations.
      
      An integer value could be present in raw encoding format due to an operation like APPEND. An incr/decr operation following it optimizes the string back to int encoding.
      42985b00
    • zhaozhao.zz's avatar
      update monitor client's memory and evict correctly (#12420) · 01eb939a
      zhaozhao.zz authored
      A bug introduced in #11657 (7.2 RC1) causes client-eviction (#8687)
      and INFO to have inaccurate memory usage metrics for MONITOR clients.
      
      Because the type in `c->type` and the type in `getClientType()` are confusing
      (in the latter, MONITOR clients are `CLIENT_TYPE_NORMAL`, not `CLIENT_TYPE_SLAVE`), the comment
      we wrote in `updateClientMemUsageAndBucket` was wrong, and in fact that function
      didn't skip monitor clients.
      And since it doesn't skip monitor clients, it was wrong to delete the call to it from
      `replicationFeedMonitors` (it wasn't a NOP).
      That deletion could mean that the monitor client memory usage is not always up to
      date (updated less frequently, but still a candidate for client eviction).
      01eb939a
    • nihohit's avatar
      Update request/response policies. (#12417) · 9f512017
      nihohit authored
      Changing the response and request policies of a few commands;
      see https://redis.io/docs/reference/command-tips
      
      
      
      1. RANDOMKEY used to have no response policy, which means
        that when sent to multiple shards, the responses should be aggregated.
        This normally applies to commands that return arrays, but since RANDOMKEY
        replies with a simple string, it actually requires a SPECIAL response policy
        (for the client to select just one).
      2. SCAN used to have no response policy, but although the key-names part of
        the response can be aggregated, the cursor part certainly can't.
      3. MSETNX had a request policy of MULTI_SHARD and a response policy of AGG_MIN,
        but in fact the contract of MSETNX is that when one key exists, it returns 0
        and doesn't set any key. Routing it to multiple shards would mean that if one failed
        and another succeeded, its atomicity is broken and it's impossible to return a valid
        response to the caller.
      Co-authored-by: Shachar Langbeheim <shachlan@amazon.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      9f512017
    • Harkrishn Patro's avatar
      Add test case for APPEND command usage on integer value (#12429) · 34b95f75
      Harkrishn Patro authored
      Add test coverage to validate the object encoding update on APPEND command usage on an integer value.
      34b95f75
  18. 20 Jul, 2023 1 commit
    • Makdon's avatar
      redis-cli: use previous hostip when not provided by redis cluster server (#12273) · 2495b90a
      Makdon authored
      
      
      When the Redis cluster server is running in cluster-preferred-endpoint-type unknown-endpoint mode and receives a request that should be redirected to another node, it does not reply with the host IP, but with an empty host, like MOVED 3999 :6381.
      
      redis-cli would then try to connect to an address without a host, which causes the issue:
      ```
      127.0.0.1:7002> set bar bar
      -> Redirected to slot [5061] located at :7000
      Could not connect to Redis at :7000: No address associated with hostname
      Could not connect to Redis at :7000: No address associated with hostname
      not connected> exit
      ```
      
      In this case, redis-cli should use the previous host IP when no host is provided by the server.
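      A standalone sketch of the fallback (hypothetical helper names, not the exact redis-cli code):
      ```c
      /* Sketch: when the redirect endpoint is ":port" with no host, reuse the host
       * we are currently connected to. Assumes a well-formed "host:port" endpoint. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      static void parseRedirectTarget(const char *endpoint, const char *current_host,
                                      char *host, size_t hostlen, int *port) {
          const char *colon = strrchr(endpoint, ':');
          *port = atoi(colon + 1);
          if (colon == endpoint) {
              /* ":6381"-style reply: no host given, fall back to the previous one. */
              snprintf(host, hostlen, "%s", current_host);
          } else {
              snprintf(host, hostlen, "%.*s", (int)(colon - endpoint), endpoint);
          }
      }

      int main(void) {
          char host[128];
          int port;
          parseRedirectTarget(":7000", "127.0.0.1", host, sizeof(host), &port);
          printf("redirect to %s:%d\n", host, port);   /* 127.0.0.1:7000 */
          return 0;
      }
      ```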
      
      ---------
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      Co-authored-by: Madelyn Olson <madelynolson@gmail.com>
      2495b90a