1. 16 Sep, 2021 1 commit
    • Binbin's avatar
      Adds limit to SINTERCARD/ZINTERCARD. (#9425) · f898a9e9
      Binbin authored
      Implements the [LIMIT limit] variant of SINTERCARD/ZINTERCARD.
      Now with LIMIT, we can stop the search when the cardinality
      reaches the limit, and return the cardinality as soon as possible.
      
      Note that in SINTERCARD, the old syntax was: `SINTERCARD key [key ...]`
      In order to add an optional parameter, we had to break the old syntax,
      so the new syntax of SINTERCARD is now consistent with ZINTERCARD.
      New syntax: `SINTERCARD numkeys key [key ...] [LIMIT limit]`.
      
      Note that this means SINTERCARD now has a different syntax than
      SINTER and SINTERSTORE (it takes a numkeys argument).
      
      As for ZINTERCARD, we can easily add an optional parameter to it.
      New syntax: `ZINTERCARD numkeys key [key ...] [LIMIT limit]`
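
      For example (a hypothetical session; the set contents are made up for illustration):
      ```
      127.0.0.1:6379> SADD s1 a b c d
      (integer) 4
      127.0.0.1:6379> SADD s2 b c d e
      (integer) 4
      127.0.0.1:6379> SINTERCARD 2 s1 s2
      (integer) 3
      127.0.0.1:6379> SINTERCARD 2 s1 s2 LIMIT 2
      (integer) 2
      ```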
      f898a9e9
  2. 15 Sep, 2021 5 commits
    • guybe7's avatar
    • Wen Hui's avatar
    • yancz2000's avatar
      Support tclsh 8.7 (#9500) · 157454ff
      yancz2000 authored
      157454ff
    • guybe7's avatar
      Cleanup: propagate and alsoPropagate do not need redisCommand (#9502) · 7759ec7c
      guybe7 authored
      The `cmd` argument was completely unused, and all the code that bothered to pass it was unnecessary.
      This is a preparation for a future commit that treats subcommands as commands
      7759ec7c
    • guybe7's avatar
      A better approach for COMMAND INFO for movablekeys commands (#8324) · 03fcc211
      guybe7 authored
      Fix #7297
      
      The problem:
      
      Today, there is no way for a client library or app to know the key name indexes for commands such as
      ZUNIONSTORE/EVAL and others with "numkeys", since COMMAND INFO returns no useful info for them.
      
      For cluster-aware redis clients, this requires 'patching' the client library code specifically for each of these commands, or
      resolving each execution of these commands with COMMAND GETKEYS.
      
      The solution:
      
      Introducing key specs other than the legacy "range" (first,last,step)
      
      The 8th element of the command info array, if it exists, holds an array of key specs. The array may be empty, which indicates
      that the command doesn't take any key arguments, or it may contain one or more key specs, each of which may lead to the discovery
      of 0 or more key arguments.
      
      A client library that doesn't support this key-spec feature will keep using the first, last, step fields and the movablekeys flag, which will
      obviously remain unchanged.
      
      A client that supports this key-specs feature needs only to look at the key-specs array. If it finds an unrecognized spec, it
      must resort to using COMMAND GETKEYS if it wishes to get all key name arguments, but if all it needs is one key in order
      to know which cluster node to use, then maybe another spec (if the command has several) can supply that, and there's no
      need to use GETKEYS.
      
      Each spec is an array of arguments: the first is the spec name, the second is an array of flags, and the third is an array
      containing details about the spec (with a specific meaning for each spec type).
      The initial flags we support are "read" and "write", indicating whether the keys that this key spec finds are used for reads or for writes.
      Clients should ignore any unfamiliar flags.
      
      In order to easily find the positions of keys in a given array of args, we introduce key specs. There are two logical steps in a
      key spec:
      1. `start_search`: Given an array of args, indicate where we should start searching for keys
      2. `find_keys`: Given the output of start_search and an array of args, indicate all possible indices of keys.
      
      ### start_search step specs
      - `index`: specify an argument index explicitly
        - `index`: 0 based index (1 means the first command argument)
      - `keyword`: specify a string to match in `argv`. We should start searching for keys just after the keyword appears.
        - `keyword`: the string to search for
        - `startfrom`: an index from which to start the keyword search (can be negative, which means to search from the end)
      
      Examples:
      - `SET` has start_search of type `index` with value `1`
      - `XREAD` has start_search of type `keyword` with value `["STREAMS",1]`
      - `MIGRATE` has start_search of type `keyword` with value `["KEYS",-2]`
      
      ### find_keys step specs
      - `range`: specify `[lastkey, step, limit]`.
        - `lastkey`: index of the last key, relative to the index returned from `start_search`. -1 indicates "up to the last argument", -2 one before the last
        - `step`: how many args we should skip after finding a key, in order to find the next one
        - `limit`: if lastkey is -1, we use limit to stop the search by a factor. 0 and 1 mean no limit. 2 means ½ of the remaining args, 3 means ⅓, and so on.
      - `keynum`: specify `[keynum_index, first_key_index, step]`.
        - `keynum_index`: is relative to the index returned from the `start_search` step.
        - `first_key_index`: is relative to `keynum_index`.
        - `step`: how many args we should skip after finding a key, in order to find the next one
      
      Examples:
      - `SET` has `range` of `[0,1,0]`
      - `MSET` has `range` of `[-1,2,0]`
      - `XREAD` has `range` of `[-1,1,2]` (see the worked walk-through after this list)
      - `ZUNION` has `start_search` of type `index` with value `1` and `find_keys` of type `keynum` with value `[0,1,1]`
      - `AI.DAGRUN` has `start_search` of type `keyword` with value `["LOAD",1]` and `find_keys` of type `keynum` with value
        `[0,1,1]` (see https://oss.redislabs.com/redisai/master/commands/#aidagrun)
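
      To make the two-step search concrete, here is a hypothetical walk-through using the XREAD specs from the examples above
      (the stream names and IDs are made up for illustration):
      ```
      XREAD COUNT 2 STREAMS mystream otherstream 0-0 0-0
      argv:  0=XREAD 1=COUNT 2=2 3=STREAMS 4=mystream 5=otherstream 6=0-0 7=0-0
      ```
      The `start_search` spec is of type `keyword` with value `["STREAMS",1]`: scanning from index 1, the keyword is found at
      index 3, so the key search starts at index 4. The `find_keys` spec is a `range` of `[-1,1,2]`: lastkey -1 means "up to the last
      argument", step 1 means consecutive arguments, and limit 2 means only ½ of the remaining args are keys, so of the 4
      remaining args only indices 4 and 5 (mystream and otherstream) are reported as keys.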
      
      Note: this solution is not perfect as the module writers can come up with anything, but at least we will be able to find the key
      args of the vast majority of commands.
      If one of the above specs can’t describe the key positions, the module writer can always fall back to the `getkeys-api` option.
      
      Some keys cannot be found easily (`KEYS` in `MIGRATE`: imagine the argument for `AUTH` is the string "KEYS" - we would
      start searching at the wrong index).
      The guarantee is that the specs may be incomplete (`incomplete` will be specified in the spec to denote that) but we never
      report false information (assuming the command syntax is correct).
      For `MIGRATE` we start searching from the end - `startfrom=-1` - and if one of the keys is actually called "keys" we will
      report only a subset of all keys - hence the `incomplete` flag.
      Some `incomplete` specs can be completely empty (i.e. an unknown `start_search`), which should tell the client that
      COMMAND GETKEYS (or any other way to get the keys) must be used (Example: for `SORT` there is no way to describe
      the STORE keyword spec, as the word "store" can appear anywhere in the command).
      
      We will expose these key specs in the `COMMAND` command so that clients can learn, on startup, where the keys are for
      all commands instead of holding hardcoded tables or using `COMMAND GETKEYS` at runtime.
      
      Comments:
      1. Redis doesn't internally use the new specs, they are only used for COMMAND output.
      2. In order to support the current COMMAND INFO format (reply array indices 4, 5, 6) we created a synthetic range, called
         legacy_range, that, if possible, is built according to the new specs.
      3. Redis currently uses only getkeys_proc or the legacy_range to get the keys indices (in COMMAND GETKEYS for
         example).
      
      "incomplete" specs:
      the command we have issues with are MIGRATE, STRALGO, and SORT
      for MIGRATE, because the token KEYS, if exists, must be the last token, we can search in reverse. it one of the keys is
      actually the string "keys" will return just a subset of the keys (hence, it's "incomplete")
      for SORT and STRALGO we can use this heuristic (the keys can be anywhere in the command) and therefore we added a
      key spec that is both "incomplete" and of "unknown type"
      
      if a client encounters an "incomplete" spec it means that it must find a different way (either COMMAND GETKEYS or have
      its own parser) to retrieve the keys.
      please note that all commands, apart from the three mentioned above, have "complete" key specs
      03fcc211
  3. 14 Sep, 2021 3 commits
    • filipe oliveira's avatar
      Added URI support to redis-benchmark (cli and benchmark share the same uri-parsing methods) (#9314) · b5a879e1
      filipe oliveira authored
      - Add a `-u <uri>` command line option to support the `redis://` URI scheme (see the example after this list).
      - Include a server connection information object (`struct cliConnInfo`),
        used to describe an ip:port pair, the db number, and user:pass, to
        avoid a large number of function arguments.
      - Use sds for the connection info strings in redis-benchmark/redis-cli.
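
      For example, a benchmark might now be pointed at a server with a single URI (a hypothetical invocation; host and
      credentials are placeholders):
      ```
      redis-benchmark -u redis://user:secret@127.0.0.1:6379 -t set,get -n 100000
      ```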
      Co-authored-by: yoav-steinberg <yoav@monfort.co.il>
      b5a879e1
    • Viktor Söderqvist's avatar
      Modules: Add remaining list API functions (#8439) · ea36d4de
      Viktor Söderqvist authored
      List functions operating on elements by index:
      
      * RM_ListGet
      * RM_ListSet
      * RM_ListInsert
      * RM_ListDelete
      
      Iteration is done using a simple for loop over indices.
      The index based functions use an internal iterator as an optimization.
      This is explained in the docs:
      
      ```
       * Many of the list functions access elements by index. Since a list is in
       * essence a doubly-linked list, accessing elements by index is generally an
       * O(N) operation. However, if elements are accessed sequentially or with
       * indices close together, the functions are optimized to seek the index from
       * the previous index, rather than seeking from the ends of the list.
       *
       * This enables iteration to be done efficiently using a simple for loop:
       *
       *     long n = RM_ValueLength(key);
       *     for (long i = 0; i < n; i++) {
       *         RedisModuleString *elem = RedisModule_ListGet(key, i);
       *         // Do stuff...
       *     }
      ```
      ea36d4de
    • sundb's avatar
      Fix memory leak due to missing freeCallback in blockonbackground moduleapi test (#9499) · 1376d833
      sundb authored
      Before #9497, we did not manually shut down all the clients before redis-server was shut down,
      which prevented valgrind from detecting a memory leak in the client's argc.
      1376d833
  4. 13 Sep, 2021 3 commits
    • Altaf hussain's avatar
    • yoav-steinberg's avatar
      Fixed leaked client for "start_server" when running in --loop (#9497) · 4c782758
      yoav-steinberg authored
      * On `kill_server` make sure we close the default `"client"` connection.
      * Don't reconnect when trying to execute the client's `close` command.
      * On `restart_server` make sure to remove the (closed) default `"client"` after killing the old server.
      4c782758
    • zhaozhao.zz's avatar
      PSYNC2: make partial sync possible after master reboot (#8015) · 794442b1
      zhaozhao.zz authored
      The main idea is to allow a master to load replication info from the RDB file when rebooting. If the master can load replication info, replicas may have the chance to psync with the master, which can save a lot of traffic.
      
      The key point is we need guarantee safety and consistency, so there
      are two differences between master and replica:
      
      1. The master would load the replication info as its secondary ID and
         offset, in case other masters have the same replid.
      2. When the master loads the RDB, it would propagate expired keys as DEL
         commands to the replication backlog, so replicas can receive these
         commands to delete stale keys.
         P.S. the keys expired during RDB loading are useful information for users, so
         we show them as `rdb_last_load_keys_expired` and `rdb_last_load_keys_loaded` in INFO persistence.
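
      For example, the new fields show up in INFO persistence like this (values are illustrative):
      ```
      rdb_last_load_keys_expired:3
      rdb_last_load_keys_loaded:997
      ```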
      
      Moreover, after loading the replication info, the master should update
      `no_replica_time` in case loading the RDB takes too long.
      794442b1
  5. 12 Sep, 2021 1 commit
    • Huang Zhw's avatar
      bitpos/bitcount add bit index (#9324) · 75dd2309
      Huang Zhw authored
      Make bitpos/bitcount support bit index:
      
      ```
      BITPOS key bit [start [end [BIT|BYTE]]]
      BITCOUNT key [start end [BIT|BYTE]]
      ```
      
      The default behavior is `BYTE`, so these commands remain compatible with the old behavior.
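
      For example (a hypothetical session; "\xff\xf0" is 12 set bits followed by 4 clear bits):
      ```
      127.0.0.1:6379> SET mykey "\xff\xf0"
      OK
      127.0.0.1:6379> BITCOUNT mykey 0 0
      (integer) 8
      127.0.0.1:6379> BITCOUNT mykey 0 5 BIT
      (integer) 6
      127.0.0.1:6379> BITPOS mykey 0 0 -1 BIT
      (integer) 12
      ```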
      75dd2309
  6. 11 Sep, 2021 1 commit
  7. 10 Sep, 2021 1 commit
  8. 09 Sep, 2021 11 commits
    • Meir Shpilraien (Spielrein)'s avatar
    • sundb's avatar
      Replace all usage of ziplist with listpack for t_zset (#9366) · 3ca6972e
      sundb authored
      Part two of implementing #8702 (zset), after #8887.
      
      ## Description of the feature
      Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.
      
      ## Rdb format changes
      New `RDB_TYPE_ZSET_LISTPACK` rdb type.
      
      ## Rdb loading improvements:
      1) Pre-expansion of the dict for validation of duplicate data, for listpack and ziplist.
      2) Simplify the release of empty key objects when loading an RDB.
      3) Unify the ziplist and listpack data verification methods for zset and hash, and move the code to rdb.c.
      
      ## Interface changes
      1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
      2) OBJECT ENCODING will return listpack instead of ziplist.
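
      For example, after this change a small sorted set reports the new encoding (a hypothetical session):
      ```
      127.0.0.1:6379> ZADD myzset 1 a 2 b
      (integer) 2
      127.0.0.1:6379> OBJECT ENCODING myzset
      "listpack"
      ```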
      
      ## Listpack improvements:
      1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from listpack.
      2) Improve the performance of `lpCompare`, converting from string to integer is faster than converting from integer to string.
      3) Replace `snprintf` with `ll2string` to improve performance in converting numbers to strings in `lpGet()`.
      
      ## Zset improvements:
      1) Improve the performance of `zzlFind` method, use `lpFind` instead of `lpCompare` in a loop.
      2) Use `lpDeleteRangeWithEntry` instead of `lpDelete` twice to delete an element of a zset.
      
      ## Tests
      1) Add some unittests for `lpDeleteRange` and `lpDeleteRangeWithEntry` function.
      2) Add zset RDB loading test.
      3) Add benchmark test for `lpCompare` and `ziplistCompare`.
      4) Add empty listpack zset corrupt dump test.
      3ca6972e
    • Madelyn Olson's avatar
      Remove redundant validation and prevent duplicate users during ACL load (#9330) · 86b0de5c
      Madelyn Olson authored
      Throw an error when a user is provided multiple times on the command line instead of silently throwing one of them away.
      Remove unneeded validation of users on ACL load.
      86b0de5c
    • yancz2000's avatar
      Add make test-cluster option (#9478) · 47c001dd
      yancz2000 authored
      Add make test-cluster option
      47c001dd
    • yvette903's avatar
      Fix: client pause uses an old timeout (#9477) · f560531d
      yvette903 authored
      A write request may be paused unexpectedly because `server.client_pause_end_time` is old.
      
      **Recreate this:**
      redis-cli -p 6379
      127.0.0.1:6379> client pause 500000000 write
      OK
      127.0.0.1:6379> client unpause
      OK
      127.0.0.1:6379> client pause 10000 write
      OK
      127.0.0.1:6379> set key value
      
      The write request `set key value` is paused until the timeout of 500000000 milliseconds is reached.
      
      **Fix:**
      reset `server.client_pause_end_time` = 0 in `unpauseClients`
      f560531d
    • Kamil Cudnik's avatar
      Lua: Use all characters to calculate string hash (#9449) · 7f88923b
      Kamil Cudnik authored
      For a lot of long strings which share a prefix that extends beyond the
      hashing limit, there will be many hash collisions, which results in
      performance degradation when using commands like KEYS
      7f88923b
    • Binbin's avatar
      Add LMPOP/BLMPOP commands. (#9373) · c50af0ae
      Binbin authored
      We want to add a COUNT option to BLPOP,
      but we can't do it without breaking compatibility due to the command arguments syntax.
      So this commit introduces two new commands.
      
      Syntax for the new LMPOP command:
      `LMPOP numkeys [<key> ...] LEFT|RIGHT [COUNT count]`
      
      Syntax for the new BLMPOP command:
      `BLMPOP timeout numkeys [<key> ...] LEFT|RIGHT [COUNT count]`
      
      Some background:
      - LPOP takes one key, and can return multiple elements.
      - BLPOP takes multiple keys, but returns one element from just one key.
      - LMPOP can take multiple keys and return multiple elements from just one key.
      
      Note that although LMPOP/BLMPOP can take multiple keys, they eventually operate on just one key.
      And they will propagate as LPOP or RPOP with the COUNT option.
      
      As new commands, they still return NIL if we can't pop any elements.
      The normal response is a nested array in both RESP2 and RESP3, like:
      ```
      LMPOP/BLMPOP 
      1) keyname
      2) 1) element1
         2) element2
      ```
      I.e. unlike BLPOP, which returns a key name and one element and so uses a flat array,
      and LPOP, which returns multiple elements with no key name and again uses a flat array,
      this one has to return a nested array, and it does so for both RESP2 and RESP3 (like SCAN does)
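
      A hypothetical session showing the pop from the first non-empty key:
      ```
      127.0.0.1:6379> RPUSH mylist a b c
      (integer) 3
      127.0.0.1:6379> LMPOP 2 nosuchlist mylist LEFT COUNT 2
      1) "mylist"
      2) 1) "a"
         2) "b"
      ```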
      
      For some discussion, see #766 and #8824.
      c50af0ae
    • Huang Zhw's avatar
      Add INFO total_active_defrag_time and current_active_defrag_time (#9377) · 216f168b
      Huang Zhw authored
      Add two INFO metrics:
      ```
      total_active_defrag_time:12345
      current_active_defrag_time:456
      ```
      `current_active_defrag_time`, if greater than 0, indicates how much time has
      passed since the current active defrag run started. If active defrag stops, this metric is reset to 0.
      `total_active_defrag_time` is the total time the fragmentation
      was over the defrag threshold since the server started.
      
      This is a followup PR for #9031
      216f168b
    • Wang Yuan's avatar
      Delay to discard cached master when full synchronization (#9398) · cee3d67f
      Wang Yuan authored
      * Delay discarding the cached master during full synchronization
      * Don't disconnect replicas before loading the transferred RDB during full sync
      
      Previously, once a replica needed to start full synchronization with its master,
      it would discard the cached master regardless of whether the full synchronization
      failed or not.
      Now we discard the cached master only when the RDB transfer is finished
      and we start to change the data space. This lets the replica start partial
      resynchronization with another new master if the new master fails
      during full synchronization.
      cee3d67f
    • chenyang8094's avatar
      Fix callReplyParseCollection memleak when use AutoMemory (#9446) · bc0c22fa
      chenyang8094 authored
      When parsing an array type reply, ctx will be lost when recursively parsing its
      elements, which will cause a memory leak in automemory mode.
      
      This is a result of the changes in #9202
      
      Add test for callReplyParseCollection fix
      bc0c22fa
    • Itamar Haber's avatar
      7c80a654
  9. 08 Sep, 2021 2 commits
    • chenyang8094's avatar
      Add stdlib.h for RedisModule_Assert (#9470) · 7a0e6685
      chenyang8094 authored
      7a0e6685
    • zhaozhao.zz's avatar
      Fix wrong offset when replica pause (#9448) · 1b83353d
      zhaozhao.zz authored
      When a replica is paused, it does not apply any commands, even if the commands come from the master. If we feed these non-applied commands to the replication stream, the replication offset would be wrong, and data would be lost after a failover (since the replica's `master_repl_offset` grows but the commands are not applied).
      
      To fix it, here are the changes:
      * Don't update replica's replication offset or propagate commands to sub-replicas when it's paused in `commandProcessed`.
      * Show `slave_read_repl_offset` in info reply.
      * Add an assert to make sure the master client is never blocked unless it is paused or blocked by a module (some modules may use blocking to do background (parallel) processing and forward the original blocking module command to the replica; it's not a good way but it can work, so the assert excludes modules for now, but someday in the future all modules should rewrite blocking commands to propagate like `BLPOP` does).
      1b83353d
  10. 07 Sep, 2021 1 commit
  11. 06 Sep, 2021 1 commit
    • Viktor Söderqvist's avatar
      Optimize quicklistIndex to seek from the nearest end (#9454) · 547c3405
      Viktor Söderqvist authored
      Until now, giving a negative index seeks from the end of the list and a
      positive one seeks from the beginning. This change makes it seek from
      the nearest end, regardless of the sign of the given index.
      
      quicklistIndex is used by all list commands which operate by index.
      
      LINDEX key 999999 in a list of 1M elements is greatly optimized by
      this change. Latency is cut by 75%.
      
      LINDEX key -1000000 in a list of 1M elements, likewise.
      
      LRANGE key -1 -1 is affected by this, since LRANGE converts the
      indices to positive numbers before seeking.
      
      The tests for corrupt dumps are updated to make sure the corrupt
      data is seeked from the same direction as before.
      547c3405
  12. 05 Sep, 2021 1 commit
  13. 03 Sep, 2021 1 commit
  14. 02 Sep, 2021 3 commits
    • guybe7's avatar
      Fix two minor bugs (MIGRATE key args and getKeysUsingCommandTable) (#9455) · 6aa2285e
      guybe7 authored
      1. MIGRATE has a potential key arg in argv[3]. It should be reflected in the command table.
      2. getKeysUsingCommandTable should never free getKeysResult; it is always freed by the caller.
         The reason we never encountered this double-free bug is that almost always getKeysResult
         uses the static buffer and doesn't allocate a new one.
      6aa2285e
    • sundb's avatar
      Fix the timing of read and write events under kqueue (#9416) · 306a5ccd
      sundb authored
      Normally we execute the read event first and then the write event.
      When the barrier is set, we will do it in reverse.
      However, under `kqueue`, if an `fd` has both read and write events,
      reading the event using `kevent` will generate two events, which will
      result in uncontrolled read and write timing.
      
      This also means that the guarantees of AOF `appendfsync` = `always` are
      not met on MacOS without this fix.
      
      The main change in this PR is to cache the events already obtained when reading
      them, so that if the same `fd` occurs again, only the mask in the cache is updated,
      rather than generating a new event.
      
      This was exposed by the following test failure on MacOS:
      ```
      *** [err]: AOF fsync always barrier issue in tests/integration/aof.tcl
      Expected 544 != 544 (context: type eval line 26 cmd {assert {$size1 != $size2}} proc ::test)
      ```
      306a5ccd
    • Yossi Gottlieb's avatar
      Use fchmod to update command history file. (#9447) · c9931ddb
      Yossi Gottlieb authored
      This is considered a safer approach as it prevents a race condition that
      could lead to chmod being executed on a different file.
      
      Not a major risk, but CodeQL alerted on this, so it makes sense to fix it.
      c9931ddb
  15. 31 Aug, 2021 1 commit
    • Viktor Söderqvist's avatar
      Slot-to-keys using dict entry metadata (#9356) · f24c63a2
      Viktor Söderqvist authored
      
      
      * Enhance dict to support arbitrary metadata carried in dictEntry
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      
      * Rewrite slot-to-keys mapping to linked lists using dict entry metadata
      
      This is a memory enhancement for Redis Cluster.
      
      The radix tree slots_to_keys (which duplicates all key names prefixed with their
      slot number) is replaced with a linked list for each slot. The dict entries of
      the same cluster slot form a linked list and the pointers are stored as metadata
      in each dict entry of the main DB dict.
      
      This commit also moves the slot-to-key API from db.c to cluster.c.
      Co-authored-by: Jim Brunner <brunnerj@amazon.com>
      f24c63a2
  16. 30 Aug, 2021 2 commits
    • Oran Agra's avatar
      Tune timeout of active defrag test (#9426) · 1e7ad894
      Oran Agra authored
      Failed on Raspberry Pi 3b where that single test took about 170 seconds
      1e7ad894
    • Wang Yuan's avatar
      Use sync_file_range to optimize fsync if possible (#9409) · 9a0c0617
      Wang Yuan authored
      We implement incremental data sync in rio.c by calling fsync, which on a slow disk may cost a lot of time.
      sync_file_range can provide an async fsync, so we can serialize key/value pairs and sync file data at the same time.
      
      > one tip for sync_file_range usage: http://lkml.iu.edu/hypermail/linux/kernel/1005.2/01845.html
      
      Additionally, this change avoids a single large write, which could result in a mass of dirty
      pages in the kernel (increasing the risk that someone else's write will block).
      
      On HDD, the current solution can cut the RDB dump time roughly in half:
      this PR takes 50s to dump a 7.7G RDB while the unstable branch takes 93s.
      On NVMe SSD, this PR can't reduce the time by much: this PR takes 40s, the unstable branch 48s.
      
      Moreover, I find that calling data sync every 4MB is better than every 32MB.
      9a0c0617
  17. 29 Aug, 2021 2 commits
    • Binbin's avatar
      Better error handling for updateClientOutputBufferLimit. (#9308) · aefbc234
      Binbin authored
      This one follows #9313 and goes deeper (validation of config file parsing).
      
      Move the check/update logic to a new updateClientOutputBufferLimit
      function, so that it can be used in both CONFIG SET and config file parsing.
      aefbc234
    • Viktor Söderqvist's avatar
      redis-benchmark: improved help and warnings (#9419) · 97dcf95c
      Viktor Söderqvist authored
      1. The output of --help:
      
        * On the Usage line, just write [OPTIONS] [COMMAND ARGS...] instead of listing
          only a few arbitrary options and no command.
        * For --cluster, describe that if the command is supplied on the command line,
          the key must contain "{tag}". Otherwise, the command will not be sent to the
          right cluster node.
        * For -r, add a note that if -r is omitted, all commands in a benchmark will
          use the same key. Also align the description.
        * For -t, describe that -t is ignored if a command is supplied on the command
          line.
      
      2. Print a warning if -t is present when a specific command is supplied.
      
      3. Print all warnings and errors to stderr.
      
      4. Remove -e from calls in redis-benchmark test suite.
      97dcf95c