1. 25 Oct, 2021 1 commit
    • Wang Yuan's avatar
      Replication backlog and replicas use one global shared replication buffer (#9166) · c1718f9d
      Wang Yuan authored
      ## Background
      On a Redis master, each replica uses its own copy of the replication buffer, which is a big waste of memory:
      the more replicas, the more waste, and allocating/freeing memory for every reply list also costs a lot.
      If we set client-output-buffer-limit small and write traffic is heavy, the master may disconnect from
      replicas and never finish synchronization with them. If we set client-output-buffer-limit big, the master
      may OOM when there are many replicas that each hold a lot of memory.
      Because the replication buffers of the different replica clients hold the same content, one simple idea is
      that all replicas use just one replication buffer, which effectively saves memory.
      
      Since the replication backlog holds the same content as the replicas' output buffers, we can now
      discard the separate replication backlog memory and use the global shared replication buffer
      to implement the replication backlog mechanism.
      
      ## Implementation
      I create one global "replication buffer" which contains the content of the replication stream.
      Its structure is similar to the reply list that exists in every client,
      but the list node is a `replBufBlock`, which has `id`, `repl_offset`, and `refcount` fields.
      ```c
      /* The replication buffer is a list of replBufBlock nodes.
       *
       * +--------------+       +--------------+       +--------------+
       * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
       * +--------------+       +--------------+       +--------------+
       *      |                                            /       \
       *      |                                           /         \
       *      |                                          /           \
       *  Repl Backlog                              Replica_A     Replica_B
       * 
       * Each replica or the replication backlog increments only the refcount of the
       * 'ref_repl_buf_node' it points to. So when a replica walks to the next
       * node, it first increases the next node's refcount, and when we trim
       * the replication buffer nodes, we always remove head nodes whose
       * refcount is 0. If the refcount of the head node is not 0, we must stop
       * trimming and never iterate to the next node. */
      
      /* Similar to 'clientReplyBlock', this is used as the shared buffer between
       * all replica clients and the replication backlog. */
      typedef struct replBufBlock {
          int refcount;           /* Number of replicas or repl backlog using. */
          long long id;           /* The unique incremental number. */
          long long repl_offset;  /* Start replication offset of the block. */
          size_t size, used;
          char buf[];
      } replBufBlock;
      ```
      So now, when we feed the replication stream into the replication backlog and all replicas, we only need
      to feed the stream into the replication buffer, via `feedReplicationBuffer`. In this function, we point fields
      of the replication backlog and of the replicas at the global replication buffer blocks. We also check the
      replicas' output buffers and free (disconnect) replicas that exceed `client-output-buffer-limit`, and trim
      the replication backlog if it exceeds `repl-backlog-size`.
      
      When sending replies to replicas, we also iterate the replication buffer blocks and send their content.
      When one block has been fully sent to a replica, we decrement that node's refcount and increment the
      next node's refcount, and then free blocks whose refcount is 0 starting from the head of the
      replication buffer blocks.
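      The following is a simplified, self-contained sketch (not the actual Redis code) of the refcount rules described above: a reader (replica or backlog) takes a reference on the next block before releasing the current one, and trimming frees only zero-refcount blocks starting from the head, up to a per-call cap like the incremental trimming described further below.
      
      ```c
      #include <stdio.h>
      #include <stdlib.h>
      
      typedef struct block {
          int refcount;
          long long id;
          struct block *next;
      } block;
      
      typedef struct { block *head, *tail; } buffer;
      
      static block *append_block(buffer *buf, long long id) {
          block *b = calloc(1, sizeof(*b));
          b->id = id;
          if (buf->tail) buf->tail->next = b; else buf->head = b;
          buf->tail = b;
          return b;
      }
      
      /* A reader finished its current block: move its reference to the next block. */
      static block *advance_ref(block *cur) {
          if (cur->next == NULL) return cur;
          cur->next->refcount++;   /* take the next block first... */
          cur->refcount--;         /* ...then release the current one */
          return cur->next;
      }
      
      /* Trim at most 'max_blocks' unreferenced blocks from the head; stop at the
       * first block that is still referenced. */
      static void trim_unreferenced(buffer *buf, int max_blocks) {
          while (buf->head && buf->head->refcount == 0 && max_blocks-- > 0) {
              block *b = buf->head;
              buf->head = b->next;
              if (buf->head == NULL) buf->tail = NULL;
              printf("freed block %lld\n", b->id);
              free(b);
          }
      }
      
      int main(void) {
          buffer buf = {0};
          block *backlog = append_block(&buf, 1); backlog->refcount++;
          block *replica = buf.head; replica->refcount++;
          append_block(&buf, 2);
          append_block(&buf, 3);
      
          replica = advance_ref(replica);   /* replica now references block 2 */
          backlog = advance_ref(backlog);   /* backlog now references block 2 */
          trim_unreferenced(&buf, 64);      /* frees block 1 only */
          (void)replica; (void)backlog;
          return 0;
      }
      ```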
      
      Since we now use a linked list to manage the replication backlog, it may take a long time to iterate
      over all list nodes to find the corresponding replication buffer node. So we create a rax tree to index
      some of the nodes; to avoid the rax tree occupying too much memory, only one node in every 64 is
      recorded in the index.
      
      Currently, to make partial resynchronization possible as often as we can, we always let the replication
      backlog hold the last reference to the replication buffer blocks. The backlog size may exceed its setting
      when slow replicas reference a vast number of replication buffer blocks, but this method doesn't increase
      memory usage since they share the replication buffer. To avoid freezing the server while freeing
      unreferenced replication buffer blocks when we need to trim the backlog for exceeding its size setting,
      we trim the backlog incrementally (currently freeing 64 blocks per call), and make it faster in
      `beforeSleep` (freeing 640 blocks).
      
      ### Other changes
      - `mem_total_replication_buffers`: a new field in the INFO command, reporting the total memory used
        by replication buffers.
      - `mem_clients_slaves`: this may now be 0 even when a replica is slow to replicate and its output buffer
        is not empty, since the replication backlog and replicas share one global replication buffer. Only when
        the replication buffer memory exceeds the configured repl backlog size do we count the excess as the
        replicas' memory; otherwise the replication buffer memory is considered consumption of the repl
        backlog.
      - Key eviction
        Since all replicas and the replication backlog share the global replication buffer, only the part that
        exceeds the backlog size is considered the extra, separate consumption of the replicas.
        Because we trim the backlog incrementally in the background, its size may exceed the configured
        setting when slow replicas that referenced a vast number of replication buffer blocks disconnect.
        To avoid a massive eviction loop, we don't count this not-yet-freed replication backlog memory as
        used memory even if there are no replicas, i.e. we still regard this memory as the replicas' memory.
      - `client-output-buffer-limit` check for replica clients
        It doesn't make sense to set the replica clients output buffer limit lower than the repl-backlog-size
        config (partial sync will succeed and then replica will get disconnected). Such a configuration is
        ignored (the size of repl-backlog-size will be used). This doesn't have memory consumption
        implications since the replica client will share the backlog buffers memory.
      - Drop replication backlog after loading data if needed
        We always create the replication backlog if the server is a master, since we need it to record DELs
        when loading expired keys from the RDB. But if the RDB has no replication info, or there is no RDB at
        all, partial resynchronization is impossible anyway, so we drop the backlog to avoid the extra memory
        of the replication backlog.
      - Multi IO threads
        Since all replicas and the replication backlog use the global replication buffer, when I/O threads are
        enabled the main thread must handle sending the output buffer to all replicas, to keep data access
        thread safe. Previously, other I/O threads could handle sending the output buffers of all replicas.
      
      ## Other optimizations
      This solution also resolves some other problems:
      - When replicas are disconnected from the master for exceeding the output buffer limit, releasing the
        output buffer of those replicas could freeze the server if a big `client-output-buffer-limit` was set for
        replicas; now it no longer causes freezing.
      - This implementation also mitigates the cost of copying the reply list (which could likewise freeze the
        server) when one replica has a huge reply buffer and another replica copies that buffer for full
        synchronization. Now we just copy reference info, which is very cheap.
      - If the replication backlog size is set big, copying the replication backlog into a replica's output
        buffer could also take a long time. This commit eliminates that problem.
      - Resizing the replication backlog no longer empties its current content.
      c1718f9d
  2. 21 Oct, 2021 1 commit
  3. 20 Oct, 2021 2 commits
    • Oran Agra's avatar
      fix new cluster tests issues (#9657) · 7d6744c7
      Oran Agra authored
      Following #9483 the daily CI exposed a few problems.
      
      * The cluster creation code (which uses redis-cli) is complicated to test with TLS enabled.
        For now I'm just skipping these tests, since the ones we run there don't really need that kind of coverage.
      * Cluster port binding failures:
        note that `find_available_port` already looks for a free cluster port,
        but the code in `wait_server_started` couldn't detect the binding failure
        (the text it greps for wasn't found in the log).
      7d6744c7
    • guybe7's avatar
      Treat subcommands as commands (#9504) · 43e736f7
      guybe7 authored
      ## Intro
      
      The purpose is to allow having different flags/ACL categories for
      subcommands (Example: CONFIG GET is ok-loading but CONFIG SET isn't)
      
      We create a small command table for every command that has subcommands
      and each subcommand has its own flags, etc. (same as a "regular" command)
      
      This commit also unites the Redis and the Sentinel command tables
      
      ## Affected commands
      
      CONFIG
      Used to have "admin ok-loading ok-stale no-script"
      Changes:
      1. Dropped "ok-loading" in all except GET (this doesn't change behavior since
      there were checks in the code doing that)
      
      XINFO
      Used to have "read-only random"
      Changes:
      1. Dropped "random" in all except CONSUMERS
      
      XGROUP
      Used to have "write use-memory"
      Changes:
      1. Dropped "use-memory" in all except CREATE and CREATECONSUMER
      
      COMMAND
      No changes.
      
      MEMORY
      Used to have "random read-only"
      Changes:
      1. Dropped "random" in PURGE and USAGE
      
      ACL
      Used to have "admin no-script ok-loading ok-stale"
      Changes:
      1. Dropped "admin" in WHOAMI, GENPASS, and CAT
      
      LATENCY
      No changes.
      
      MODULE
      No changes.
      
      SLOWLOG
      Used to have "admin random ok-loading ok-stale"
      Changes:
      1. Dropped "random" in RESET
      
      OBJECT
      Used to have "read-only random"
      Changes:
      1. Dropped "random" in ENCODING and REFCOUNT
      
      SCRIPT
      Used to have "may-replicate no-script"
      Changes:
      1. Dropped "may-replicate" in all except FLUSH and LOAD
      
      CLIENT
      Used to have "admin no-script random ok-loading ok-stale"
      Changes:
      1. Dropped "random" in all except INFO and LIST
      2. Dropped "admin" in ID, TRACKING, CACHING, GETREDIR, INFO, SETNAME, GETNAME, and REPLY
      
      STRALGO
      No changes.
      
      PUBSUB
      No changes.
      
      CLUSTER
      Changes:
      1. Dropped "admin in countkeysinslots, getkeysinslot, info, nodes, keyslot, myid, and slots
      
      SENTINEL
      No changes.
      
      (note that DEBUG also fits, but we decided not to convert it since it's for
      debugging and anyway undocumented)
      
      ## New sub-command
      This commit adds another element to the per-command output of COMMAND,
      describing the list of subcommands, if any (in the same structure as "regular" commands)
      Also, it adds a new subcommand:
      ```
      COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)]
      ```
      which returns a set of all commands (unless filtered), but excluding subcommands.
      
      ## Module API
      A new module API, RM_CreateSubcommand, was added, in order to allow
      module writers to define subcommands.
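      As a hedged illustration, this is roughly how a module might use the new API. The signatures follow current redismodule.h (`RedisModule_GetCommand`, `RedisModule_CreateSubcommand`) and may differ slightly from the exact form at the time of this commit; the module and command names are hypothetical.
      
      ```c
      #include "redismodule.h"
      
      int HelloWorld_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          return RedisModule_ReplyWithSimpleString(ctx, "world");
      }
      
      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "hellosub", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          /* The parent is a "container" command: registered with a NULL handler. */
          if (RedisModule_CreateCommand(ctx, "hellosub", NULL, "", 0, 0, 0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          RedisModuleCommand *parent = RedisModule_GetCommand(ctx, "hellosub");
          /* The subcommand gets its own handler and its own flags, just like a regular command. */
          if (RedisModule_CreateSubcommand(parent, "world", HelloWorld_RedisCommand,
                                           "readonly", 0, 0, 0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          return REDISMODULE_OK;
      }
      ```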
      
      ## ACL changes:
      1. Now that each subcommand is actually a command, each has its own ACL id.
      2. The old mechanism of allowed_subcommands is redundant
      (blocking/allowing a subcommand is the same as blocking/allowing a regular command),
      but we had to keep it, to support the widespread usage of allowed_subcommands
      to block commands with certain args, that aren't subcommands (e.g. "-select +select|0").
      3. I have renamed allowed_subcommands to allowed_firstargs to emphasize the difference.
      4. Because subcommands are commands in ACL too, you can now use "-" to block subcommands
      (e.g. "+client -client|kill"), which wasn't possible in the past.
      5. It is also possible to use the allowed_firstargs mechanism with subcommands.
      For example: `+config -config|set +config|set|loglevel` will block all CONFIG SET except
      for setting the log level.
      6. All of the ACL changes above required some amount of refactoring.
      
      ## Misc
      1. There are two approaches: Either each subcommand has its own function, or all
         subcommands use the same function, determining what to do according to argv[0].
         For now, I took the former approach only with CONFIG and COMMAND,
         while other commands use the latter approach (for a smaller blamelog diff).
      2. Deleted memoryGetKeys: It is no longer needed because MEMORY USAGE now uses the "range" key spec.
      3. Bugfix: GETNAME was missing from CLIENT's help message.
      4. Sentinel and Redis now use the same table, with the same function pointer.
         Some commands have a different implementation in Sentinel, so we redirect
         them (these are ROLE, PUBLISH, and INFO).
      5. Command stats now show the stats per subcommand (e.g. instead of stats just
         for "config" you will have stats for "config|set", "config|get", etc.)
      6. It is now possible to use COMMAND directly on subcommands:
         COMMAND INFO CONFIG|GET (The pipeline syntax was inspired from ACL, and
         can be used in functions lookupCommandBySds and lookupCommandByCString)
      7. STRALGO is now a container command (has "help")
      
      ## Breaking changes:
      1. Command stats now show the stats per subcommand (see (5) above)
      43e736f7
  4. 19 Oct, 2021 1 commit
    • qetu3790's avatar
      Release clients blocked on module commands in cluster resharding and down state (#9483) · 4962c552
      qetu3790 authored
      
      
      Prevent clients from being blocked forever in cluster when they block with their own module command
      and the hash slot is migrated to another master at the same time.
      These will get a redirection message when unblocked.
      Also, release clients blocked on module commands when cluster is down (same as other blocked clients)
      
      This commit adds basic tests for the main (non-cluster) redis test infra that test the cluster.
      This was done because the cluster test infra can't handle some common test features,
      but most importantly we only build the test modules with the non-cluster test suite.
      
      note that rather than really supporting cluster operations by the test infra, it was added (as dup code)
      in two files, one for module tests and one for non-modules tests, maybe in the future we'll refactor that.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      4962c552
  5. 18 Oct, 2021 1 commit
  6. 30 Sep, 2021 1 commit
  7. 23 Sep, 2021 1 commit
    • YaacovHazan's avatar
      Adding ACL support for modules (#9309) · a56d4533
      YaacovHazan authored
      This commit introduces a new flag to RM_Call:
      'C' - Check if the command can be executed according to the ACLs associated with it.
      
      Also, three new APIs were added to check whether a command, key, or channel can be executed or accessed
      by a user, according to the ACLs associated with it.
      - RM_ACLCheckCommandPerm
      - RM_ACLCheckKeyPerm
      - RM_ACLCheckChannelPerm
      
      The user for these APIs is a RedisModuleUser object; for a module user this is the object returned by the RM_CreateModuleUser API, and a general ACL user can be retrieved with these two new APIs:
      - RM_GetCurrentUserName - Retrieve the user name of the client connection behind the current context.
      - RM_GetModuleUserFromUserName - Get a RedisModuleUser from a user name
      
      Since a RedisModuleUser can now be obtained by name, modules can also access the general ACL users (not just the ones created by the module).
      This means the already existing API RM_SetModuleUserACL() can be used to change the ACL rules for such users.
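      For illustration, a hedged sketch of a module command that relies on the new 'C' flag so RM_Call enforces the ACLs of the user attached to the context (command name is hypothetical; registration in RedisModule_OnLoad omitted):
      
      ```c
      #include "redismodule.h"
      
      int AclGet_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          if (argc != 2) return RedisModule_WrongArity(ctx);
          /* 'C' asks RM_Call to check the ACLs of the context's user before running
           * the command; without it the call runs with full privileges as before. */
          RedisModuleCallReply *r = RedisModule_Call(ctx, "GET", "Cs", argv[1]);
          if (r == NULL || RedisModule_CallReplyType(r) == REDISMODULE_REPLY_ERROR) {
              /* NULL (or an error reply) may also indicate other failures; this sketch
               * treats them all the same way. */
              if (r) RedisModule_FreeCallReply(r);
              return RedisModule_ReplyWithError(ctx, "ERR permission denied or call failed");
          }
          RedisModule_ReplyWithCallReply(ctx, r);
          RedisModule_FreeCallReply(r);
          return REDISMODULE_OK;
      }
      ```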
      a56d4533
  8. 15 Sep, 2021 1 commit
    • guybe7's avatar
      A better approach for COMMAND INFO for movablekeys commands (#8324) · 03fcc211
      guybe7 authored
      Fix #7297
      
      The problem:
      
      Today, there is no way for a client library or app to know the key name indexes for commands such as
      ZUNIONSTORE/EVAL and others with "numkeys", since COMMAND INFO returns no useful info for them.
      
      For cluster-aware redis clients, this requires 'patching' the client library code specifically for each of these commands, or
      resolving each execution of these commands with COMMAND GETKEYS.
      
      The solution:
      
      Introducing key specs other than the legacy "range" (first,last,step)
      
      The 8th element of the command info array, if it exists, holds an array of key specs. The array may be empty, which indicates
      the command doesn't take any key arguments, or it may contain one or more key specs, each of which may lead to the discovery
      of 0 or more key arguments.
      
      A client library that doesn't support this key-spec feature will keep using the first,last,step and movablekeys flag which will
      obviously remain unchanged.
      
      A client that supports this key-specs feature needs only to look at the key-specs array. If it finds an unrecognized spec, it
      must resort to using COMMAND GETKEYS if it wishes to get all key name arguments, but if all it needs is one key in order
      to know which cluster node to use, then maybe another spec (if the command has several) can supply that, and there's no
      need to use GETKEYS.
      
      Each spec is an array of arguments, first one is the spec name, the second is an array of flags, and the third is an array
      containing details about the spec (specific meaning for each spec type)
      The initial flags we support are "read" and "write" indicating if the keys that this key-spec finds are used for read or for write.
      clients should ignore any unfamiliar flags.
      
      In order to easily find the positions of keys in a given array of args, we introduce key specs. There are two logical steps in a
      key spec:
      1. `start_search`: Given an array of args, indicate where we should start searching for keys
      2. `find_keys`: Given the output of start_search and an array of args, indicate all possible indices of keys.
      
      ### start_search step specs
      - `index`: specify an argument index explicitly
        - `index`: 0 based index (1 means the first command argument)
      - `keyword`: specify a string to match in `argv`. We should start searching for keys just after the keyword appears.
        - `keyword`: the string to search for
        - `start_search`: an index from which to start the keyword search (can be negative, which means to search from the end)
      
      Examples:
      - `SET` has start_search of type `index` with value `1`
      - `XREAD` has start_search of type `keyword` with value `[“STREAMS”,1]`
      - `MIGRATE` has start_search of type `keyword` with value `[“KEYS”,-2]`
      
      ### find_keys step specs
      - `range`: specify `[lastkey, step, limit]`.
        - `lastkey`: index of the last key, relative to the index returned from begin_search. -1 indicates up to the last argument, -2 one before the last
        - `step`: how many args we should skip after finding a key, in order to find the next one
        - `limit`: if lastkey is -1, we use limit to stop the search by a factor. 0 and 1 mean no limit. 2 means ½ of the remaining args, 3 means ⅓, and so on.
      - `keynum`: specify `[keynum_index, first_key_index, step]`.
        - `keynum_index`: relative to the index returned from the `start_search` spec.
        - `first_key_index`: relative to `keynum_index`.
        - `step`: how many args we should skip after finding a key, in order to find the next one
      
      Examples:
      - `SET` has `range` of `[0,1,0]`
      - `MSET` has `range` of `[-1,2,0]`
      - `XREAD` has `range` of `[-1,1,2]`
      - `ZUNION` has `start_search` of type `index` with value `1` and `find_keys` of type `keynum` with value `[0,1,1]`
      - `AI.DAGRUN` has `start_search` of type `keyword` with value `[“LOAD”,1]` and `find_keys` of type `keynum` with value
        `[0,1,1]` (see https://oss.redislabs.com/redisai/master/commands/#aidagrun)
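      To make the two-step process concrete, here is a small self-contained toy (not Redis code) that applies a `start_search` spec of type `index` plus a `find_keys` spec of type `keynum` to an argv array, using the ZUNION values from the examples above:
      
      ```c
      #include <stdio.h>
      #include <stdlib.h>
      
      /* Apply a keynum find_keys spec: the numkeys argument sits at
       * start_index + keynum_index, the first key at numkeys_pos + first_key_index,
       * and subsequent keys are 'step' args apart. */
      static void find_keys_keynum(char **argv, int argc, int start_index,
                                   int keynum_index, int first_key_index, int step) {
          int numkeys_pos = start_index + keynum_index;
          if (numkeys_pos >= argc) return;
          int numkeys = atoi(argv[numkeys_pos]);
          int pos = numkeys_pos + first_key_index;
          for (int i = 0; i < numkeys && pos < argc; i++, pos += step)
              printf("key arg at index %d: %s\n", pos, argv[pos]);
      }
      
      int main(void) {
          char *argv[] = {"ZUNION", "2", "zset1", "zset2", "WITHSCORES"};
          int argc = sizeof(argv) / sizeof(argv[0]);
          /* ZUNION: start_search index=1, find_keys keynum=[0,1,1] */
          find_keys_keynum(argv, argc, 1, 0, 1, 1);   /* prints indices 2 and 3 */
          return 0;
      }
      ```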
      
      Note: this solution is not perfect as the module writers can come up with anything, but at least we will be able to find the key
      args of the vast majority of commands.
      If one of the above specs can’t describe the key positions, the module writer can always fall back to the `getkeys-api` option.
      
      Some keys cannot be found easily (`KEYS` in `MIGRATE`: Imagine the argument for `AUTH` is the string “KEYS” - we will
      start searching in the wrong index). 
      The guarantee is that the specs may be incomplete (`incomplete` will be specified in the spec to denote that) but we never
      report false information (assuming the command syntax is correct).
      For `MIGRATE` we start searching from the end - `startfrom=-1` - and if one of the keys is actually called "keys" we will
      report only a subset of all keys - hence the `incomplete` flag.
      Some `incomplete` specs can be completely empty (i.e. UNKNOWN begin_search) which should tell the client that
      COMMAND GETKEYS (or any other way to get the keys) must be used (Example: For `SORT` there is no way to describe
      the STORE keyword spec, as the word "store" can appear anywhere in the command).
      
      We will expose these key specs in the `COMMAND` command so that clients can learn, on startup, where the keys are for
      all commands instead of holding hardcoded tables or use `COMMAND GETKEYS` in runtime.
      
      Comments:
      1. Redis doesn't internally use the new specs, they are only used for COMMAND output.
      2. In order to support the current COMMAND INFO format (reply array indices 4, 5, 6) we created a synthetic range, called
         legacy_range, that, if possible, is built according to the new specs.
      3. Redis currently uses only getkeys_proc or the legacy_range to get the keys indices (in COMMAND GETKEYS for
         example).
      
      "incomplete" specs:
      the command we have issues with are MIGRATE, STRALGO, and SORT
      for MIGRATE, because the token KEYS, if exists, must be the last token, we can search in reverse. it one of the keys is
      actually the string "keys" will return just a subset of the keys (hence, it's "incomplete")
      for SORT and STRALGO we can use this heuristic (the keys can be anywhere in the command) and therefore we added a
      key spec that is both "incomplete" and of "unknown type"
      
      If a client encounters an "incomplete" spec, it means that it must find a different way (either COMMAND GETKEYS or
      its own parser) to retrieve the keys.
      Please note that all commands, apart from the three mentioned above, have "complete" key specs.
      03fcc211
  9. 14 Sep, 2021 1 commit
    • Viktor Söderqvist's avatar
      Modules: Add remaining list API functions (#8439) · ea36d4de
      Viktor Söderqvist authored
      List functions operating on elements by index:
      
      * RM_ListGet
      * RM_ListSet
      * RM_ListInsert
      * RM_ListDelete
      
      Iteration is done using a simple for loop over indices.
      The index based functions use an internal iterator as an optimization.
      This is explained in the docs:
      
      ```
       * Many of the list functions access elements by index. Since a list is in
       * essence a doubly-linked list, accessing elements by index is generally an
       * O(N) operation. However, if elements are accessed sequentially or with
       * indices close together, the functions are optimized to seek the index from
       * the previous index, rather than seeking from the ends of the list.
       *
       * This enables iteration to be done efficiently using a simple for loop:
       *
       *     long n = RM_ValueLength(key);
       *     for (long i = 0; i < n; i++) {
       *         RedisModuleString *elem = RedisModule_ListGet(key, i);
       *         // Do stuff...
       *     }
      ```
      ea36d4de
  10. 04 Aug, 2021 1 commit
    • Meir Shpilraien (Spielrein)'s avatar
      Unified Lua and modules reply parsing and added RESP3 support to RM_Call (#9202) · 2237131e
      Meir Shpilraien (Spielrein) authored
      
      
      ## Current state
      1. Lua has its own parser that handles parsing `redis.call` replies and translates them
        to Lua objects that can be used by the user Lua code. The parser partially handles
        resp3 (missing big number, verbatim, attribute, ...)
      2. Modules have their own parser that handles parsing `RM_Call` replies and translates
        them to RedisModuleCallReply objects. The parser does not support resp3.
      
      In addition, in the future, we want to add Redis Function (#8693) that will probably
      support more languages. At some point maintaining so many parsers will stop
      scaling (bug fixes and protocol changes will need to be applied on all of them).
      We will probably end up with different parsers that support different parts of the
      resp protocol (like we already have today with Lua and modules)
      
      ## PR Changes
      This PR attempts to unify the reply parsing of Lua and modules (and, in the future,
      Redis Functions) by introducing a new parser unit (`resp_parser.c`). The new parser
      handles parsing the reply and calls different callbacks to allow its users (another
      unit that uses the parser, i.e. Lua, modules, or Redis Functions) to analyze the reply.
      
      ### Lua API Additions
      The code that handles reply parsing on `scripting.c` was removed. Instead, it uses
      the resp_parser to parse and create a Lua object out of the reply. As mentioned
      above the Lua parser did not handle parsing big numbers, verbatim, and attribute.
      The new parser can handle those and so Lua also gets it for free.
      Those are translated to Lua objects in the following way:
      1. Big Number - Lua table `{'big_number':'<str representation for big number>'}`
      2. Verbatim - Lua table `{'verbatim_string':{'format':'<verbatim format>', 'string':'<verbatim string value>'}}`
      3. Attribute - currently ignored and not exposed to Lua; another issue will be opened to decide how to expose it.
      
      Tests were added to check resp3 reply parsing on Lua
      
      ### Modules API Additions
      The reply parsing code on `module.c` was also removed and the new resp_parser is used instead.
      In addition, the RedisModuleCallReply was also extracted to a separate unit located on `call_reply.c`
      (in the future, this unit will also be used by Redis Function). A nice side effect of unified parsing is
      that modules now also support resp3. Resp3 can be enabled by giving `3` as a parameter to the
      fmt argument of `RM_Call`. It is also possible to give `0`, which indicates an auto mode, i.e. Redis
      will automatically choose the reply protocol based on the current client set on the RedisModuleCtx
      (this mode will mostly be used when the module wants to pass the reply to the client as is).
      In addition, the following RedisModuleAPI were added to allow analyzing resp3 replies:
      
      * New RedisModuleCallReply types:
         * `REDISMODULE_REPLY_MAP`
         * `REDISMODULE_REPLY_SET`
         * `REDISMODULE_REPLY_BOOL`
         * `REDISMODULE_REPLY_DOUBLE`
         * `REDISMODULE_REPLY_BIG_NUMBER`
         * `REDISMODULE_REPLY_VERBATIM_STRING`
         * `REDISMODULE_REPLY_ATTRIBUTE`
      
      * New RedisModuleAPI:
         * `RedisModule_CallReplyDouble` - getting double value from resp3 double reply
         * `RedisModule_CallReplyBool` - getting boolean value from resp3 boolean reply
         * `RedisModule_CallReplyBigNumber` - getting big number value from resp3 big number reply
         * `RedisModule_CallReplyVerbatim` - getting format and value from resp3 verbatim reply
         * `RedisModule_CallReplySetElement` - getting element from resp3 set reply
         * `RedisModule_CallReplyMapElement` - getting key and value from resp3 map reply
         * `RedisModule_CallReplyAttribute` - getting a reply attribute
         * `RedisModule_CallReplyAttributeElement` - getting key and value from resp3 attribute reply
         
      * New context flags:
         * `REDISMODULE_CTX_FLAGS_RESP3` - indicate that the client is using resp3
      
      Tests were added to check the new RedisModuleAPI
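      As a hedged sketch (command name hypothetical, registration omitted), a module command that issues a RESP3 RM_Call and walks the resulting map reply; it assumes the length of a map reply counts key-value pairs:
      
      ```c
      #include "redismodule.h"
      
      int HGetAllMap_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          if (argc != 2) return RedisModule_WrongArity(ctx);
          /* '3' in the fmt string asks for a RESP3 reply regardless of the caller's protocol. */
          RedisModuleCallReply *r = RedisModule_Call(ctx, "HGETALL", "3s", argv[1]);
          if (r == NULL || RedisModule_CallReplyType(r) != REDISMODULE_REPLY_MAP) {
              if (r) RedisModule_FreeCallReply(r);
              return RedisModule_ReplyWithError(ctx, "ERR expected a RESP3 map reply");
          }
          size_t pairs = RedisModule_CallReplyLength(r);   /* assumed: number of key-value pairs */
          for (size_t i = 0; i < pairs; i++) {
              RedisModuleCallReply *k, *v;
              RedisModule_CallReplyMapElement(r, i, &k, &v);
              size_t klen;
              const char *kstr = RedisModule_CallReplyStringPtr(k, &klen);
              RedisModule_Log(ctx, "notice", "field %.*s", (int)klen, kstr);
              (void)v;
          }
          /* Reply with a plain count so this works even when the caller speaks RESP2
           * (forwarding the RESP3 map with RM_ReplyWithCallReply could fail there). */
          RedisModule_ReplyWithLongLong(ctx, (long long)pairs);
          RedisModule_FreeCallReply(r);
          return REDISMODULE_OK;
      }
      ```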
      
      ### Modules API Changes
      * RM_ReplyWithCallReply might return REDISMODULE_ERR if the given CallReply is in resp3
        but the client expects resp2. This is not a breaking change because in order to get a resp3
        CallReply one needs to specifically specify `3` as a parameter to the fmt argument of
        `RM_Call` (as mentioned above).
      
      Tests were added to check this change
      
      ### More small Additions
      * Added `debug set-disable-deny-scripts`, which allows turning the commands' no-script
      flag protection on and off. This is used by the Lua resp3 tests so it is possible to run `debug protocol`
      and check the resp3 parsing code.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
      2237131e
  11. 03 Aug, 2021 1 commit
  12. 01 Aug, 2021 1 commit
  13. 25 Jul, 2021 1 commit
  14. 01 Jul, 2021 1 commit
    • Yossi Gottlieb's avatar
      Fix CLIENT UNBLOCK crashing modules. (#9167) · aa139e2f
      Yossi Gottlieb authored
      Modules that use background threads with thread safe contexts are likely
      to use RM_BlockClient() without a timeout function, because they do not
      set up a timeout.
      
      Before this commit, `CLIENT UNBLOCK` would result in a crash as the
      `NULL` timeout callback is called. Beyond just crashing, this is also
      logically wrong as it may throw the module into an unexpected client
      state.
      
      This commit makes `CLIENT UNBLOCK` on such clients behave the same as
      any other client that is not in a blocked state and therefore cannot be
      unblocked.
      aa139e2f
  15. 24 Jun, 2021 1 commit
  16. 22 Jun, 2021 1 commit
    • Evan's avatar
      modules: Add newlen == 0 handling to RM_StringTruncate (#3717) (#3718) · 1ccf2ca2
      Evan authored
      Previously, passing 0 for newlen would not truncate the string at all.
      This adds handling of this case, freeing the old string and creating a new empty string.
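      A hedged sketch of the case this commit fixes, assuming the usual RedisModule_StringTruncate API (command name hypothetical, registration omitted):
      
      ```c
      #include "redismodule.h"
      
      int StrClear_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          if (argc != 2) return RedisModule_WrongArity(ctx);
          RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);
          if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_STRING) {
              RedisModule_CloseKey(key);
              return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);
          }
          /* With this commit, newlen == 0 replaces the value with an empty string
           * instead of leaving the old value untouched. */
          RedisModule_StringTruncate(key, 0);
          RedisModule_CloseKey(key);
          return RedisModule_ReplyWithSimpleString(ctx, "OK");
      }
      ```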
      
      Other changes:
      - Move `src/modules/testmodule.c` to `tests/modules/basics.c`
      - Introduce that basic test into the test suite
      - Add tests to cover StringTruncate
      - Add `test-modules` build target for the main makefile
      - Extend `distclean` build target to clean modules too
      1ccf2ca2
  17. 16 Jun, 2021 1 commit
  18. 10 Jun, 2021 1 commit
    • Binbin's avatar
      Fixed some typos, add a spell check ci and others minor fix (#8890) · 0bfccc55
      Binbin authored
      This PR adds a spell checker CI action that will fail future PRs if they introduce typos and spelling mistakes.
      This spell checker is based on a blacklist of common spelling mistakes, so it will not catch everything,
      but at least it is also unlikely to cause false positives.
      
      Besides that, the PR also fixes many spelling mistakes and typos; not all of them were found by the spell checker we use.
      
      Here's a summary of other changes:
      1. Scanned the entire source code and fixed all sorts of typos and spelling mistakes (including missing or extra spaces).
      2. Outdated function / variable / argument names in comments
      3. Fix outdated keyspace masks error log when we check `config.notify-keyspace-events` in loadServerConfigFromString.
      4. Trim the white space at the end of line in `module.c`. Check: https://github.com/redis/redis/pull/7751
      5. Some outdated https link URLs.
      6. Fix some outdated comments. Such as:
          - In README: about the RDB, we used to say it creates a `thread`; changed to `process`
          - dbRandomKey function comment (about dictGetRandomKey; changed to dictGetFairRandomKey)
          - notifyKeyspaceEvent function comment (add the type arg)
          - Some other minor fixes in comments (most of them incorrectly quoted variable names)
      7. Modified the error log so that users can easily distinguish between TCP and TLS in `changeBindAddr`
      0bfccc55
  19. 01 Jun, 2021 1 commit
  20. 19 Apr, 2021 2 commits
    • Hanna Fadida's avatar
      Modules: adding a module type for key space notification (#8759) · 53a4d6c3
      Hanna Fadida authored
      Adding a new type mask for keyspace notifications, REDISMODULE_NOTIFY_MODULE, to enable unique notifications from commands on REDISMODULE_KEYTYPE_MODULE type keys (which were previously unsupported).
      
      Modules can subscribe to module-key keyspace notifications with RM_SubscribeToKeyspaceEvents,
      and clients via notify-keyspace-events in redis.conf or CONFIG SET, using the characters 'd' or 'A'
      (the REDISMODULE_NOTIFY_MODULE type mask is part of the '**A**ll' notation for keyspace notifications).
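      A minimal sketch of a module subscribing to the new notification class (module and callback names are illustrative):
      
      ```c
      #include "redismodule.h"
      
      static int OnModuleKeyEvent(RedisModuleCtx *ctx, int type, const char *event,
                                  RedisModuleString *key) {
          REDISMODULE_NOT_USED(type);
          RedisModule_Log(ctx, "notice", "module-key event '%s' on %s",
                          event, RedisModule_StringPtrLen(key, NULL));
          return REDISMODULE_OK;
      }
      
      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "modnotify", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          /* Receive notifications only for events on module-type keys. */
          return RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_MODULE,
                                                       OnModuleKeyEvent);
      }
      ```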
      
      Refactor: move some pubsub test infra from pubsub.tcl to util.tcl to be re-used by other tests.
      53a4d6c3
    • guybe7's avatar
      Modules: Replicate lazy-expire even if replication is not allowed (#8816) · f40ca9cb
      guybe7 authored
      Before this commit using RM_Call without "!" could cause the master
      to lazy-expire a key (delete it) but without replicating to replicas.
      This could cause the replica's memory usage to gradually grow and
      could also cause consistency issues if the master and replica have
      a clock diff.
      This bug was introduced in #8617
      
      Added a test which demonstrates that scenario.
      f40ca9cb
  21. 15 Mar, 2021 1 commit
    • guybe7's avatar
      Missing EXEC on modules propagation after failed EVAL execution (#8654) · dba33a94
      guybe7 authored
      1. moduleReplicateMultiIfNeeded should use server.in_eval like
         moduleHandlePropagationAfterCommandCallback
      2. server.in_eval could have been set to 1 and not reset back
         to 0 (a lot of missed early-exits after in_eval is already 1)
      
      Note: The new assertions in processCommand cover (2) and I added
      two module tests to cover (1)
      
      Implications:
      If an EVAL that failed (and thus left server.in_eval=1) runs before a module
      command that replicates, the replication stream will contain MULTI (because
      moduleReplicateMultiIfNeeded used to check server.lua_caller which is NULL
      at this point) but not EXEC (because server.in_eval==1).
      This only affects modules, as module.c is the only user of server.in_eval.
      
      Affects versions 6.2.0, 6.2.1
      dba33a94
  22. 10 Mar, 2021 1 commit
    • guybe7's avatar
      Fix some issues with modules and MULTI/EXEC (#8617) · 3d0b427c
      guybe7 authored
      Bug 1:
      When a module ctx is freed moduleHandlePropagationAfterCommandCallback
      is called and handles propagation. We want to prevent it from propagating
      commands that were not replicated by the same context. Example:
      1. module1.foo does: RM_Replicate(cmd1); RM_Call(cmd2); RM_Replicate(cmd3)
      2. RM_Replicate(cmd1) propagates MULTI and adds cmd1 to also_propagate
      3. RM_Call(cmd2) creates a new ctx, calls call() and destroys the ctx.
      4. moduleHandlePropagationAfterCommandCallback is called, calling
         alsoPropagate with EXEC (note: EXEC is still not written to the socket),
         setting server.in_transaction = 0
      5. RM_Replicate(cmd3) is called, propagating yet another MULTI (now
         we have nested MULTI calls, which is no good) and then cmd3
      
      We must prevent RM_Call(cmd2) from resetting server.in_transaction.
      REDISMODULE_CTX_MULTI_EMITTED was revived for that purpose.
      
      Bug 2:
      Fix issues with nested RM_Call where some have '!' and some don't.
      Example:
      1. module1.foo does RM_Call of module2.bar without replication (i.e. no '!')
      2. module2.bar internally calls RM_Call of INCR with '!'
      3. at the end of module1.foo we call RM_ReplicateVerbatim
      
      We want the replica/AOF to see only module1.foo and not the INCR from module2.bar
      
      Introduced a global replication_allowed flag inside RM_Call to determine
      whether we need to replicate or not (even if '!' was specified)
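      For illustration, a hedged sketch of the Bug 2 call pattern from a module's point of view (command names are hypothetical; module2.bar is assumed to exist and be loaded):
      
      ```c
      #include "redismodule.h"
      
      int Module1Foo_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
          /* No '!' in the fmt string: nothing module2.bar does internally (even an
           * RM_Call with '!') should reach the replication stream. */
          RedisModuleCallReply *r = RedisModule_Call(ctx, "module2.bar", "");
          if (r) RedisModule_FreeCallReply(r);
          /* The replica/AOF should see only module1.foo itself. */
          RedisModule_ReplicateVerbatim(ctx);
          return RedisModule_ReplyWithSimpleString(ctx, "OK");
      }
      ```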
      
      Other changes:
      Split beforePropagateMultiOrExec into beforePropagateMulti and afterPropagateExec,
      just for better readability
      3d0b427c
  23. 01 Mar, 2021 1 commit
  24. 28 Feb, 2021 1 commit
    • Viktor Söderqvist's avatar
      Shared reusable client for RM_Call() (#8516) · 6122f1c4
      Viktor Söderqvist authored
      A single client pointer is added in the server struct. This is
      initialized by the first RM_Call() and reused for every subsequent
      RM_Call() except if it's already in use, which means that it's not
      used for (recursive) module calls to modules. For these, a new
      "fake" client is created each time.
      
      Other changes:
      * Avoid allocating a dict iterator in pubsubUnsubscribeAllChannels
        when not needed
      6122f1c4
  25. 15 Feb, 2021 2 commits
  26. 10 Feb, 2021 1 commit
  27. 08 Feb, 2021 1 commit
    • filipe oliveira's avatar
      [fix] Increasing block on background timeout time to avoid test failure (#8470) · b2351ea0
      filipe oliveira authored
      The test failed from time to time on GitHub actions.
      We think it's possible that in the module's blocking timeout
      time tracking test, the timeout happens before we issue
      RedisModule_BlockedClientMeasureTimeStart(bc) on the
      background thread. If that is the case, one possible solution
      is to increase the timeout.
      Increasing it from 200ms to 500ms to see if the nightly stops failing.
      b2351ea0
  28. 05 Feb, 2021 1 commit
    • Viktor Söderqvist's avatar
      RM_ZsetRem: Delete key if empty (#8453) · aea6e71e
      Viktor Söderqvist authored
      Without this fix, RM_ZsetRem can leave empty sorted sets which are
      not allowed to exist.
      
      Removing from a sorted set while iterating seems to work (while
      inserting causes failed assertions). RM_ZsetRangeEndReached is
      modified to return 1 if the key doesn't exist, to terminate
      iteration when the last element has been removed.
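      A hedged sketch of the fixed behavior from a module's perspective (command name hypothetical, registration omitted): removing the last member now leaves no empty key behind.
      
      ```c
      #include "redismodule.h"
      
      int ZRemOne_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          if (argc != 3) return RedisModule_WrongArity(ctx); /* key member */
          RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);
          int deleted = 0;
          if (RedisModule_ZsetRem(key, argv[2], &deleted) != REDISMODULE_OK) {
              RedisModule_CloseKey(key);
              return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);
          }
          /* With this fix, if that was the last member the key no longer exists here. */
          RedisModule_CloseKey(key);
          return RedisModule_ReplyWithLongLong(ctx, deleted);
      }
      ```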
      aea6e71e
  29. 29 Jan, 2021 1 commit
    • filipe oliveira's avatar
      Enabled background and reply time tracking on blocked on keys/blocked on... · f0c5052a
      filipe oliveira authored
      Enabled background and reply time tracking on blocked on keys/blocked on background work clients (#7491)
      
      This commit enables tracking time of the background tasks and on replies,
      opening the door for properly tracking commands that rely on blocking / background
       work via the slowlog, latency history, and commandstats. 
      
      Some notes:
      - The time spent blocked waiting for key changes, or blocked on synchronous
        replication is not accounted for. 
      
      - **This commit does not affect latency tracking of commands that are non-blocking
        or do not have background work.** (Meaning that it all stays the same, with the exception of
        `BZPOPMIN`, `BZPOPMAX`, `BRPOP`, `BLPOP`, etc., and module commands that rely
        on background threads.)
      
      -  Specifically for latency history command we've added a new event class named
        `command-unblocking` that will enable latency monitoring on commands that spawn
        background threads to do the work.
      
      - For blocking commands we're now considering the total time of a command as the
        time spent on call() + the time spent on replying when unblocked.
      
      - For Modules commands that rely on background threads we're now considering the
        total time of a command as the time spent on call (main thread) + the time spent on
        the background thread ( if marked within `RedisModule_MeasureTimeStart()` and
        `RedisModule_MeasureTimeEnd()` ) + the time spent on replying (main thread)
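      A hedged sketch of the pattern being measured, using the RedisModule_BlockedClientMeasureTimeStart/End spelling that appears elsewhere in this log (command name and thread body are illustrative, registration omitted):
      
      ```c
      /* The experimental-API define is needed with older redismodule.h headers
       * where the blocking-client API is gated behind it. */
      #define REDISMODULE_EXPERIMENTAL_API
      #include "redismodule.h"
      #include <pthread.h>
      #include <unistd.h>
      
      static int Block_Reply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
          return RedisModule_ReplyWithSimpleString(ctx, "done");
      }
      
      static int Block_Timeout(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
          return RedisModule_ReplyWithNull(ctx);
      }
      
      static void *Background(void *arg) {
          RedisModuleBlockedClient *bc = arg;
          RedisModule_BlockedClientMeasureTimeStart(bc);  /* start counting background time */
          usleep(100 * 1000);                             /* the actual background work */
          RedisModule_BlockedClientMeasureTimeEnd(bc);    /* stop counting */
          RedisModule_UnblockClient(bc, NULL);
          return NULL;
      }
      
      int BlockBg_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
          RedisModuleBlockedClient *bc =
              RedisModule_BlockClient(ctx, Block_Reply, Block_Timeout, NULL, 500);
          pthread_t tid;
          if (pthread_create(&tid, NULL, Background, bc) != 0) {
              RedisModule_AbortBlock(bc);
              return RedisModule_ReplyWithError(ctx, "ERR can't start thread");
          }
          pthread_detach(tid);
          return REDISMODULE_OK;
      }
      ```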
      
      To test for this feature we've added a `unit/moduleapi/blockonbackground` test that relies on
      a module that blocks the client and sleeps on the background for a given time. 
      - check blocked command that uses RedisModule_MeasureTimeStart() is tracking background time
      - check blocked command that uses RedisModule_MeasureTimeStart() is tracking background time even in timeout
      - check blocked command with multiple calls RedisModule_MeasureTimeStart()  is tracking the total background time
      - check blocked command without calling RedisModule_MeasureTimeStart() is not reporting background time
      f0c5052a
  30. 28 Jan, 2021 1 commit
    • Viktor Söderqvist's avatar
      Add modules API for streams (#8288) · 4355145a
      Viktor Söderqvist authored
      APIs added for these stream operations: add, delete, iterate and
      trim (by ID or maxlength). The functions are prefixed by RM_Stream.
      
      * RM_StreamAdd
      * RM_StreamDelete
      * RM_StreamIteratorStart
      * RM_StreamIteratorStop
      * RM_StreamIteratorNextID
      * RM_StreamIteratorNextField
      * RM_StreamIteratorDelete
      * RM_StreamTrimByLength
      * RM_StreamTrimByID
      
      The type RedisModuleStreamID is added and functions for converting
      from and to RedisModuleString.
      
      * RM_CreateStringFromStreamID
      * RM_StringToStreamID
      
      Whenever the stream functions return REDISMODULE_ERR, errno is set to
      provide additional error information.
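      A hedged sketch of adding an entry with an auto-generated ID (command name hypothetical, registration omitted; signatures follow redismodule.h):
      
      ```c
      #include "redismodule.h"
      
      int StreamLog_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          if (argc != 4) return RedisModule_WrongArity(ctx); /* key field value */
          RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);
          RedisModuleStreamID id;
          /* argv[2], argv[3] form one field-value pair, so numfields is 1. */
          if (RedisModule_StreamAdd(key, REDISMODULE_STREAM_ADD_AUTOID, &id,
                                    &argv[2], 1) != REDISMODULE_OK) {
              RedisModule_CloseKey(key);
              return RedisModule_ReplyWithError(ctx, "ERR stream add failed (see errno)");
          }
          RedisModule_CloseKey(key);
          /* Convert the generated ID back to a string for the reply. */
          RedisModuleString *idstr = RedisModule_CreateStringFromStreamID(ctx, &id);
          return RedisModule_ReplyWithString(ctx, idstr);
      }
      ```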
      
      Refactoring: The zset iterator fields in the RedisModuleKey struct
      are wrapped in a union, to allow the same space to be used for type-
      specific info for streams and allow future use for other key types.
      4355145a
  31. 22 Jan, 2021 1 commit
    • Viktor Söderqvist's avatar
      Test that module can wake up module blocked on non-empty list key (#8382) · 9c148310
      Viktor Söderqvist authored
      BLPOP and other blocking list commands can only block on empty keys
      and LPUSH only wakes up clients when the list is created.
      
      Using the module API, it's possible to block on a non-empty key.
      Unblocking a client blocked on a non-empty list (or zset) can only
      be done using RedisModule_SignalKeyAsReady(). This commit tests it.
      9c148310
  32. 20 Jan, 2021 1 commit
    • guybe7's avatar
      Fix misleading module test (#8366) · 5a77d015
      guybe7 authored
      The test was misleading because the module would actually wake up on a wrong type and
      re-block, while the test name suggests the module doesn't wake up at all on a wrong type.
      
      I changed the name of the test and added verification that the module indeed wakes up and gets
      re-blocked after it understands it's the wrong type.
      5a77d015
  33. 19 Jan, 2021 1 commit
    • Viktor Söderqvist's avatar
      Bugfix: Make modules blocked on keys unblock on commands like LPUSH (#8356) · 4985c11b
      Viktor Söderqvist authored
      This was a regression from #7625 (only in 6.2 RC2).
      
      This makes it possible again to implement blocking list and zset
      commands using the modules API.
      
      This commit also includes a test case for the reverse: A module
      unblocks a client blocked on BLPOP by inserting elements using
      RedisModule_ListPush(). This already works, but it was untested.
      4985c11b
  34. 22 Dec, 2020 1 commit
    • Oran Agra's avatar
      Remove read-only flag from non-keyspace cmds, different approach for EXEC to... · 411c18bb
      Oran Agra authored
      Remove read-only flag from non-keyspace cmds, different approach for EXEC to propagate MULTI (#8216)
      
      In the distant history there was only the read flag for commands, and whatever
      command didn't have the read flag was a write one.
      Then we added the write flag, but some portions of the code still used !read.
      Also, some commands that don't work on the keyspace at all still have the read
      flag.
      
      Changes in this commit:
      1. remove the read-only flag from TIME, ECHO, ROLE and LASTSAVE
      
      2. The EXEC command used to decide whether it should propagate a MULTI by looking at
         the command flags (!read & !admin).
         When I was about to change it to look at the write flag instead, I realized
         that this would cause it not to propagate a MULTI for PUBLISH, EVAL, and
         SCRIPT; all 3 are not marked as either a read command or a write one (as
         they should be), but all 3 call forceCommandPropagation.
      
         So instead of introducing a new flag to denote a command that "writes" but
         not into the keyspace, and still needs propagation, I decided to rely on
         forceCommandPropagation, and just fix the code to propagate MULTI when
         needed rather than depending on the command flags at all.
      
         The implication of my change then is that now it won't decide to propagate
         MULTI when it sees one of these: SELECT, PING, INFO, COMMAND, TIME and
         other commands which are neither read nor write.
      
      3. Changing getNodeByQuery and clusterRedirectBlockedClientIfNeeded in
         cluster.c to look at !write rather than read flag.
         This should have no implications, since these code paths are only reachable
         for commands which access keys, and these are always marked as either read
         or write.
      
      This commit improves the MULTI propagation tests, for modules and a bunch of
      other special cases, all of which already passed before this commit.
      The only test change that uncovered a change of behavior is the
      one that DELs a non-existing key: it used to propagate an empty
      multi-exec block, and no longer does.
      411c18bb
  35. 14 Dec, 2020 1 commit
    • Oran Agra's avatar
      Tests: fix new defrag test to be skipped when not supported (#8185) · 7d9b09ad
      Oran Agra authored
      Additionally, the older defrag tests were using an obsolete way to check
      if the defragger is supported (the error no longer contains "DISABLED").
      This doesn't usually make a difference, since these tests are completely
      skipped if the allocator is not jemalloc, but they would fail if the
      allocator is a jemalloc that doesn't support defrag.
      7d9b09ad
  36. 13 Dec, 2020 1 commit
    • Yossi Gottlieb's avatar
      Modules: add defrag API support. (#8149) · 63c1303c
      Yossi Gottlieb authored
      Add a new set of defrag functions that take a defrag context and allow
      defragmenting memory blocks and RedisModuleStrings.
      
      Modules can register a defrag callback which will be invoked when the
      defrag process handles globals.
      
      Modules with custom data types can also register a datatype-specific
      defrag callback which is invoked for keys that require defragmentation.
      The callback and associated functions support both one-step and
      multi-step options, depending on the complexity of the key as exposed by
      the free_effort callback.
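      A hedged sketch of registering a global defrag callback (module name and global pointer are hypothetical; signatures follow redismodule.h):
      
      ```c
      #include "redismodule.h"
      
      static void *global_state = NULL;
      
      static void DefragGlobals(RedisModuleDefragCtx *ctx) {
          /* DefragAlloc returns a new (moved) pointer, or NULL if the allocation
           * was not moved; only update the global when it actually moved. */
          void *moved = RedisModule_DefragAlloc(ctx, global_state);
          if (moved) global_state = moved;
      }
      
      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "defragdemo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          global_state = RedisModule_Alloc(64);
          /* Invoked whenever the defrag process handles module globals. */
          return RedisModule_RegisterDefragFunc(ctx, DefragGlobals);
      }
      ```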
      63c1303c
  37. 09 Dec, 2020 1 commit
    • Yossi Gottlieb's avatar
      Add module data-type support for COPY. (#8112) · 4e064fba
      Yossi Gottlieb authored
      This adds a copy callback for module data types, in order to make
      modules compatible with the new COPY command.
      
      The callback is optional and COPY will fail for keys with data types
      that do not implement it.
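      A hedged sketch of such a copy callback for a hypothetical module type (the value struct and names are illustrative; the callback signature follows redismodule.h):
      
      ```c
      #include "redismodule.h"
      #include <string.h>
      
      typedef struct { long long counter; } MyTypeValue;
      
      static void *MyTypeCopy(RedisModuleString *fromkey, RedisModuleString *tokey,
                              const void *value) {
          REDISMODULE_NOT_USED(fromkey);
          REDISMODULE_NOT_USED(tokey);
          /* Deep-copy the value; returning NULL would make COPY fail for this key. */
          MyTypeValue *dst = RedisModule_Alloc(sizeof(*dst));
          memcpy(dst, value, sizeof(*dst));
          return dst;
      }
      
      /* Registered alongside the other type methods, e.g.:
       *   RedisModuleTypeMethods tm = {
       *       .version = REDISMODULE_TYPE_METHOD_VERSION,
       *       .rdb_load = ..., .rdb_save = ..., .free = ...,
       *       .copy = MyTypeCopy,
       *   };
       */
      ```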
      4e064fba