1. 11 Mar, 2023 1 commit
    • Add reply_schema to command json files (internal for now) (#10273) · 4ba47d2d
      guybe7 authored
      Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845
      Since ironing out the details of the reply schema of each and every command can take a long time, we
      would like to merge this PR once the infrastructure is ready, and let this mature in the unstable branch.
      Meanwhile, the changes in this PR are internal: they are part of the repo, but do not affect the produced build.
      
      ### Background
      In #9656 we added a lot of information about Redis commands, but we are missing information about the replies.
      
      ### Motivation
      1. Documentation. This is the primary goal.
      2. It should be possible, based on the output of COMMAND, to generate client code in typed
        languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
      3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
        testsuite, see the "Testing" section)
      
      ### Schema
      The idea is to supply some sort of schema for the various replies of each command.
      The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
      Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
      and without the `FULL` modifier)
      We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.
      
      Example for `BZPOPMIN`:
      ```
      "reply_schema": {
          "oneOf": [
              {
                  "description": "Timeout reached and no elements were popped.",
                  "type": "null"
              },
              {
                  "description": "The keyname, popped member, and its score.",
                  "type": "array",
                  "minItems": 3,
                  "maxItems": 3,
                  "items": [
                      {
                          "description": "Keyname",
                          "type": "string"
                      },
                      {
                          "description": "Member",
                          "type": "string"
                      },
                      {
                          "description": "Score",
                          "type": "number"
                      }
                  ]
              }
          ]
      }
      ```
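
      As a rough, hedged illustration (not part of this PR), a reply converted to its json representation can be checked against such a schema using the third-party `jsonschema` Python package:
      ```
      from jsonschema import Draft7Validator  # third-party package: pip install jsonschema

      bzpopmin_schema = {
          "oneOf": [
              {"description": "Timeout reached and no elements were popped.", "type": "null"},
              {
                  "description": "The keyname, popped member, and its score.",
                  "type": "array",
                  "minItems": 3,
                  "maxItems": 3,
                  "items": [
                      {"description": "Keyname", "type": "string"},
                      {"description": "Member", "type": "string"},
                      {"description": "Score", "type": "number"},
                  ],
              },
          ]
      }

      # A reply converted to its json representation, checked against the schema.
      Draft7Validator(bzpopmin_schema).validate(["myzset", "a", 5.0])  # element popped
      Draft7Validator(bzpopmin_schema).validate(None)                  # timeout reached
      ```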
      
      #### Notes
      1. It is OK that some commands' reply structure depends on the arguments, and it's the caller's responsibility
        to know which one is relevant. This follows other request-reply systems like OpenAPI,
        where the reply schema can also be a oneOf and the caller is responsible for knowing which schema is relevant.
      2. The reply schemas describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply
        schema for documentation (and possibly to create a fuzzer that validates the replies).
      3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
        including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one
        is with `WITHSCORES` and the other is without.
      4. For documentation, there will be another optional field "notes" in which we will add a short description of
        the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
        array, for example)
      
      Given the above:
      1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
        (given that "description" and "notes" are comprehensive enough)
      2. We can generate a client in a strongly typed language (but the return type could be a conceptual
        `union` and the caller needs to know which schema is relevant; see the sketch after this list). See the section below for RESP2 support.
      3. We can create a fuzzer for RESP3.
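
      As a purely hypothetical sketch of point 2 above (nothing here is generated by this PR), the `BZPOPMIN` schema could map to a typed signature along these lines; all names below are made up for the example:
      ```
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class BzpopminElement:
          key: str      # "Keyname" item of the reply array
          member: str   # "Member" item
          score: float  # "Score" item

      # The schema's oneOf becomes a conceptual union: either None (timeout
      # reached and nothing was popped) or the popped element.
      BzpopminReply = Optional[BzpopminElement]

      def parse_bzpopmin_reply(raw) -> BzpopminReply:
          if raw is None:
              return None
          key, member, score = raw
          return BzpopminElement(key=key, member=member, score=float(score))
      ```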
      
      ### Limitations (because we are using the standard json-schema)
      The problem is that Redis' replies are more diverse than what the json format allows. This means that,
      when we convert the reply to a json (in order to validate the schema against it), we lose information (see
      the "Testing" section below).
      The other option would have been to extend the standard json-schema (and json format) to include stuff
      like sets, bulk-strings, error-string, etc. but that would mean also extending the schema-validator - and that
      seemed like too much work, so we decided to compromise.
      
      Examples:
      1. We cannot tell the difference between an "array" and a "set"
      2. We cannot tell the difference between simple-string and bulk-string
      3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
        case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
        compares (member,score) tuples and not just the member name (see the sketch below).
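
      A small sketch of the third limitation, using the third-party `jsonschema` package; the schema fragment below mirrors the `WITHSCORES` part of the `ZRANGE` schema shown later:
      ```
      from jsonschema import Draft7Validator

      with_scores_schema = {
          "type": "array",
          "uniqueItems": True,
          "items": {
              "type": "array",
              "minItems": 2,
              "maxItems": 2,
              "items": [{"type": "string"}, {"type": "number"}],
          },
      }

      # "m1" appears twice (something Redis would never return), but uniqueItems
      # compares whole [member, score] pairs, so validation still passes.
      Draft7Validator(with_scores_schema).validate([["m1", 6], ["m1", 7]])
      ```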
      
      ### Testing
      This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
      are indeed correct (i.e. describe the actual response of Redis).
      To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
      it executed and their replies.
      For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
      `--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
      `--log-req-res --force-resp3`)
      You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
      `.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
      These files are later on processed by `./utils/req-res-log-validator.py` which does the following:
      1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
      2. For each request-response pair, it validates the response against the request's reply_schema
        (obtained from the extended COMMAND DOCS); a minimal sketch of this check appears after this list
      3. In order to get good coverage of the Redis commands, and all their different replies, we chose to use
        the existing redis test suite, rather than attempt to write a fuzzer.
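
      A minimal sketch of the per-pair validation step (item 2 above), assuming the pairs were already parsed out of the `.reqres` files and the responses converted to their json representation; `reply_schemas` and `pairs` are hypothetical inputs, not the actual script's API:
      ```
      from jsonschema import Draft7Validator, ValidationError

      def validate_pairs(reply_schemas, pairs):
          """reply_schemas: command name -> reply_schema dict (from COMMAND DOCS).
          pairs: iterable of (command_name, response_as_json) tuples."""
          errors = 0
          for command, response in pairs:
              schema = reply_schemas.get(command)
              if schema is None:
                  continue  # command has no reply_schema (yet)
              try:
                  Draft7Validator(schema).validate(response)
              except ValidationError as err:
                  errors += 1
                  print(f"{command}: reply does not match its schema: {err.message}")
          return errors
      ```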
      
      #### Notes about RESP2
      1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to
        accept RESP3 as the future RESP)
      2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3
        so that we can validate it, we will need to know how to convert the actual reply to the one expected.
         - number and boolean are always strings in RESP2, so the conversion is easy
         - objects (maps) are always a flat array in RESP2
         - others (nested arrays in RESP3's `ZRANGE` and others) will need some special per-command
           handling (so the client will not be totally auto-generated); see the transformer sketch after the `ZRANGE` example below
      
      Example for ZRANGE:
      ```
      "reply_schema": {
          "anyOf": [
              {
                  "description": "A list of member elements",
                  "type": "array",
                  "uniqueItems": true,
                  "items": {
                      "type": "string"
                  }
              },
              {
                  "description": "Members and their scores. Returned in case `WITHSCORES` was used.",
                  "notes": "In RESP2 this is returned as a flat array",
                  "type": "array",
                  "uniqueItems": true,
                  "items": {
                      "type": "array",
                      "minItems": 2,
                      "maxItems": 2,
                      "items": [
                          {
                              "description": "Member",
                              "type": "string"
                          },
                          {
                              "description": "Score",
                              "type": "number"
                          }
                      ]
                  }
              }
          ]
      }
      ```
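
      As a hedged illustration of the per-command handling mentioned in the RESP2 notes above (in this PR the actual response transformers are written in Tcl for the testsuite), a `ZRANGE ... WITHSCORES` transformer could look roughly like this:
      ```
      def zrange_withscores_resp2_to_resp3(flat):
          """Convert a RESP2 flat reply [m1, s1, m2, s2, ...] into the nested
          RESP3 shape [[m1, s1], [m2, s2], ...] described by the schema above."""
          if len(flat) % 2 != 0:
              raise ValueError("expected alternating member/score pairs")
          # Scores are strings in RESP2 but numbers (doubles) in RESP3.
          return [[member, float(score)] for member, score in zip(flat[::2], flat[1::2])]

      # e.g. ["a", "1", "b", "2"] -> [["a", 1.0], ["b", 2.0]]
      ```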
      
      ### Other changes
      1. Some tests that behave differently depending on the RESP version are now run for both RESP2 and RESP3,
        regardless of the special log-req-res mode ("Pub/Sub PING" for example)
      2. Update the history field of CLIENT LIST
      3. Added basic tests for commands that were not covered at all by the testsuite
      
      ### TODO
      
      - [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g.
        when `SET` return NULL, the condition is `arguments.get||arguments.condition`, for `OK` the condition
        is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
      - [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
      - [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
      - [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output
        of the tests - https://github.com/redis/redis/issues/11897
      - [x] (probably a separate PR) add all missing schemas
      - [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
      - [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to
        fight with the tcl including mechanism a bit)
      - [x] issue: module API - https://github.com/redis/redis/issues/11898
      - [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899
      
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Shaya Potter <shaya@redislabs.com>
  2. 30 Mar, 2022 1 commit
    • command json files cleanups (#10473) · e2fa6aa1
      Binbin authored
      This PR does some command json file cleanups:
      
      1. Add COMMAND TIPS to some commands
      - command-docs: add `NONDETERMINISTIC_OUTPUT_ORDER`
      - command-info: add `NONDETERMINISTIC_OUTPUT_ORDER`
      - command-list: add `NONDETERMINISTIC_OUTPUT_ORDER`
      - command: change `NONDETERMINISTIC_OUTPUT` to `NONDETERMINISTIC_OUTPUT_ORDER`
      - function-list: add `NONDETERMINISTIC_OUTPUT_ORDER`
      - latency-doctor: add `NONDETERMINISTIC_OUTPUT`, `REQUEST_POLICY:ALL_NODES` and `RESPONSE_POLICY:SPECIAL`
      - latency-graph: add `NONDETERMINISTIC_OUTPUT`, `REQUEST_POLICY:ALL_NODES` and `RESPONSE_POLICY:SPECIAL`
      - memory-doctor: add `REQUEST_POLICY:ALL_SHARDS` and `RESPONSE_POLICY:SPECIAL`
      - memory-malloc-stats: add `REQUEST_POLICY:ALL_SHARDS` and `RESPONSE_POLICY:SPECIAL`
      - memory-purge: add `REQUEST_POLICY:ALL_SHARDS` and `RESPONSE_POLICY:ALL_SUCCEEDED`
      - module-list: add `NONDETERMINISTIC_OUTPUT_ORDER`
      - msetnx: add `REQUEST_POLICY:MULTI_SHARD` and `RESPONSE_POLICY:AGG_MIN`
      - object-refcount: add `NONDETERMINISTIC_OUTPUT`
      2. Only (mostly) indentation and formatting changes:
      - cluster-shards
      - latency-history
      - pubsub-shardchannels
      - pubsub-shardnumsub
      - spublish
      - ssubscribe
      - sunsubscribe
      3. add doc_flags (DEPRECATED) to cluster-slots, replaced_by `CLUSTER SHARDS` in 7.0
      4. command-getkeysandflags: a better summary (the old one was copied from command-getkeys)
      5. adjustment of command parameter types
      - `port` is integer, not string (`MIGRATE`, `REPLICAOF`, `SLAVEOF`)
      - `replicationid` is string, not integer (`PSYNC`)
      - `pattern` is pattern, not string (`PUBSUB CHANNELS`, `SENTINEL RESET`, `SORT`, `SORT_RO`)
  3. 06 Jan, 2022 1 commit
    • Redis Function Libraries (#10004) · 885f6b5c
      Meir Shpilraien (Spielrein) authored
      # Redis Function Libraries
      
      This PR implements Redis Function Libraries as described in https://github.com/redis/redis/issues/9906.
      
      The purpose of libraries is to provide better code sharing between functions, by allowing multiple
      functions to be created in a single command. Functions that were created together can safely share code with
      each other without worrying about compatibility issues and versioning.
      
      Creating a new library is done using the `FUNCTION LOAD` command (the full API is described below).
      
      This PR introduces a new struct called libraryInfo, which holds information about a library:
      * name - name of the library
      * engine - engine used to create the library
      * code - library code
      * description - library description
      * functions - the functions exposed by the library
      
      When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo.
      Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo.
      As a result, the engine will create one or more functions by calling 'libraryCreateFunction'.
      The new function will be added to the newly created libraryInfo. So far everything happens
      locally on the libraryInfo, so it is easy to abort the operation (in case of an error) by simply
      freeing the libraryInfo. After the library info is fully constructed we start the joining phase, in
      which we join the new library to the other libraries that currently exist on Redis.
      The joining phase makes sure there is no function collision and adds the library to the
      librariesCtx (renamed from functionCtx). librariesCtx is used all around the code in the exact
      same way as functionCtx was used (with respect to RDB loading, replication, ...).
      The only difference is that apart from the function dictionary (which maps function name to functionInfo
      object), the librariesCtx also contains a libraries dictionary that maps library name to libraryInfo object.
      
      ## New API
      ### FUNCTION LOAD
      `FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
      Create a new library with the given parameters:
      * ENGINE - Engine name to use to create the library.
      * LIBRARY NAME - The new library name.
      * REPLACE - If the library already exists, replace it.
      * DESCRIPTION - Library description.
      * CODE - Library code.
      
      Return "OK" on success, or error on the following cases:
      * Library name already taken and REPLACE was not used
      * Name collision with another existing library (even if replace was uses)
      * Library registration failed by the engine (usually compilation error)
      
      ## Changed API
      ### FUNCTION LIST
      `FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
      The command was modified to also allow getting the libraries' code (so the `FUNCTION INFO` command is no longer
      needed and was removed). In addition the command gets an optional argument, `LIBRARYNAME`, which allows you to
      only get libraries that match the given pattern. By default, it returns all libraries.
      
      ### INFO MEMORY
      Added number of libraries to `INFO MEMORY`
      
      ### Commands flags
      The `DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands
      as commands that add new data to the dataset (functions are data) and so we want to disallow
      running them when Redis is OOM.
      
      ## Removed API
      * FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
      * FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899
      
      ## Lua engine changes
      When the Lua engine gets the code given to the `FUNCTION LOAD` command, it immediately runs it; we call
      this run the loading run. The loading run is not a usual script run: it is not possible to invoke any
      Redis command from within the load run.
      Instead there is a new API provided by the `library` object. The new APIs are:
      * `redis.log` - behaves the same as the regular `redis.log`
      * `redis.register_function` - registers a new function to the library

      The loading run's purpose is to register functions using the new `redis.register_function` API.
      Any attempt to use any other API will result in an error. In addition, the load run has a time
      limit of 500ms; an error is raised on timeout and the entire operation is aborted.
      
      ### `redis.register_function`
      `redis.register_function(<function_name>, <callback>, [<description>])`
      This new API allows users to register a new function that will be linked to the newly created library.
      This API can only be called during the load run (see definition above). Any attempt to use it outside
      of the load run will result in an error.
      The parameters passed to the API are:
      * function_name - Function name (must be a Lua string)
      * callback - Lua function object that will be called when the function is invoked using fcall/fcall_ro
      * description - Function description, optional (must be a Lua string).
      
      ### Example
      The following example creates a library called `lib` with 2 functions, `f1` and `f2`, which return 1 and 2 respectively:
      ```
      local function f1(keys, args)
          return 1
      end
      
      local function f2(keys, args)
          return 2
      end
      
      redis.register_function('f1', f1)
      redis.register_function('f2', f2)
      ```
      
      Notice: Unlike `eval`, functions inside a library get KEYS and ARGV as arguments to the
      functions and not as globals.
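
      As a rough usage sketch (not part of this PR), loading the library above and invoking one of its functions from Python could look like this; it assumes a server built from this commit and the third-party `redis` package, and uses the generic `execute_command` since clients had no wrappers for these commands yet:
      ```
      import redis  # third-party package: pip install redis

      lua_code = """
      local function f1(keys, args) return 1 end
      local function f2(keys, args) return 2 end
      redis.register_function('f1', f1)
      redis.register_function('f2', f2)
      """

      r = redis.Redis()

      # FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>
      r.execute_command("FUNCTION", "LOAD", "LUA", "lib", "REPLACE", lua_code)

      # FCALL <function name> <numkeys> [key ...] [arg ...]
      print(r.execute_command("FCALL", "f1", 0))  # -> 1

      # FUNCTION LIST [LIBRARYNAME <pattern>] [WITHCODE]
      print(r.execute_command("FUNCTION", "LIST", "LIBRARYNAME", "lib*", "WITHCODE"))
      ```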
      
      ### Technical Details
      
      On the load run we only want the user to be able to call a whitelist of APIs. This way, in
      the future, if new APIs are added, they will not be available to the load run
      unless specifically added to this whitelist. We put the whitelist on the `library` object and
      make sure the `library` object is only available to the load run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set
      the `globals` of a function (and all the functions it creates). Before starting the load run we
      create a new fresh Lua table (call it `g`) that only contains the `library` API (we make sure
      to set global protection on this table, just like the general global protection that already exists
      today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv)
      to set `g` as the global table of the load run. After the load run finishes we update `g`'s
      metatable and set its `__index` and `__newindex` functions to be `_G` (Lua default globals);
      we also pop out the `library` object as we do not need it anymore.
      This way, any function that was created on the load run (and will be invoked using `fcall`) will
      see the default globals as it expects to see them and will not have the `library` API anymore.
      
      An important outcome of this new approach is that now we can achieve a distinct global table
      for each library (it is not yet like that, but it is very easy to achieve now). In the future we can
      decide to remove global protection, because globals of different libraries will not collide, or we
      can choose to give a different API to different libraries based on some configuration or input.
      
      Notice that this technique was meant to prevent errors and was not meant to prevent a malicious
      user from exploiting it. For example, the load run can still save the `library` object in some local
      variable and then use it in `fcall` context. To prevent such malicious use, the C code also makes
      sure it is running in the right context and, if not, raises an error.
  4. 21 Dec, 2021 1 commit
    • Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953) · 3bcf1084
      Meir Shpilraien (Spielrein) authored
      The issue with MAY_REPLICATE is that all the automatic mechanisms that handle
      write commands will not work. This requires special treatment to:
      * Not allow those commands to be executed on a RO replica.
      * Allow those commands to be executed on a RO replica from the primary connection.
      * Allow those commands to be executed on a RO replica from the AOF.
      
      By setting those commands as WRITE commands we are getting all those properties from Redis.
      Test was added to verify that those properties work as expected.
      
      In addition, rearrange when and where functions are flushed. Before this PR functions were
      flushed manually on `rdbLoadRio` and cleaned manually on failure. This contradicts the
      assumption that functions are data and need to be created/deleted alongside the
      data. As a side effect of this, for example, `debug reload noflush` did not flush the data but
      did flush the functions, and `debug loadaof` flushed the data but not the functions.
      This PR moves function deletion into `emptyDb`. `emptyDb` (renamed to `emptyData`) now
      accepts an additional flag, `NOFUNCTIONS`, which specifically indicates that we do not
      want to flush the functions (in all other cases, functions will be flushed). The new flag is used
      on FLUSHALL and FLUSHDB only! Tests were added to `debug reload` and `debug loadaof`
      to verify that functions behave the same as the data.
      
      Notice that because functions are now deleted alongside the data, we cannot allow
      `CLUSTER RESET` to be called from within a function (it would cause the function to be released
      while running), so this PR adds the `NO_SCRIPT` flag to `CLUSTER RESET` so that it cannot
      be called from within a function. The other cluster commands are allowed from within a
      function (there are use-cases that use `GETKEYSINSLOT` to iterate over all the keys in a
      given slot). A test was added to verify that `CLUSTER RESET` is denied from within a script.

      Another small change in this PR is that `RDBFLAGS_ALLOW_DUP` is also applicable to functions.
      When loading functions, if this flag is set, we will replace old functions with new ones on collisions.
  5. 15 Dec, 2021 1 commit
    • Auto-generate the command table from JSON files (#9656) · 86781600
      guybe7 authored
      Delete the hardcoded command table and replace it with an auto-generated table, based
      on a JSON file that describes the commands (each command must have a JSON file).
      
      These JSON files are the SSOT of everything there is to know about Redis commands,
      and they are reflected fully in COMMAND INFO.
      
      These JSON files are used to generate commands.c (using a python script), which is then
      committed to the repo and compiled.
      
      The purpose is:
      * Clients and proxies will be able to get much more info from Redis, instead of relying on hard-coded logic.
      * drop the dependency between Redis users and the commands.json in redis-doc.
      * delete help.h and have redis-cli learn everything it needs to know just by issuing COMMAND (will be
        done in a separate PR)
      * redis.io should stop using commands.json and learn everything from Redis (ultimately one of the release
        artifacts should be a large JSON, containing all the information about all of the commands, which will be
        generated from COMMAND's reply)
      * the byproduct of this is:
        * module commands will be able to provide that info and possibly be more of a first-class citizen
        * in theory, one may be able to generate a redis client library for a strictly typed language, by using this info.
      
      ### Interface changes
      
      #### COMMAND INFO's reply change (and arg-less COMMAND)
      
      Before this commit the reply at index 7 contained the key-specs list
      and the reply at index 8 contained the sub-commands list (both unreleased).
      Now, reply at index 7 is a map of:
      - summary - short command description
      - since - debut version
      - group - command group
      - complexity - complexity string
      - doc-flags - flags used for documentation (e.g. "deprecated")
      - deprecated-since - if deprecated, from which version?
      - replaced-by - if deprecated, which command replaced it?
      - history - a list of (version, what-changed) tuples
      - hints - a list of strings, meant to provide hints for clients/proxies. see https://github.com/redis/redis/issues/9876
      - arguments - an array of arguments. each element is a map, with the possibility of nesting (sub-arguments)
      - key-specs - an array of keys specs (already in unstable, just changed location)
      - subcommands - a list of sub-commands (already in unstable, just changed location)
      - reply-schema - will be added in the future (see https://github.com/redis/redis/issues/9845)
      
      more details on these can be found in https://github.com/redis/redis-doc/pull/1697
      
      Only the first three fields are mandatory.
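
      As a rough sketch of how a client could consume this map (illustration only; `command_info_entry` stands for one raw COMMAND INFO entry as received over RESP2, where a map arrives as a flat key-value array):
      ```
      def parse_doc_map(command_info_entry):
          """Extract the documentation map this commit adds at index 7 of a
          COMMAND INFO entry; over RESP2 the map arrives as a flat
          [key, value, key, value, ...] array."""
          flat = command_info_entry[7]
          return {flat[i]: flat[i + 1] for i in range(0, len(flat), 2)}

      # e.g. doc = parse_doc_map(entry); doc.get("summary"), doc.get("since"), doc.get("group")
      ```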
      
      #### API changes (unreleased API obviously)
      
      Now these take a RedisModuleCommand opaque pointer instead of looking up the command by name:
      
      - RM_CreateSubcommand
      - RM_AddCommandKeySpec
      - RM_SetCommandKeySpecBeginSearchIndex
      - RM_SetCommandKeySpecBeginSearchKeyword
      - RM_SetCommandKeySpecFindKeysRange
      - RM_SetCommandKeySpecFindKeysKeynum
      
      Currently, we did not add a module API to provide additional information about module commands because
      we couldn't agree on what the API should look like, see https://github.com/redis/redis/issues/9944
      
      
      ### Somewhat related changes
      1. Literals should be in uppercase while placeholders are in lowercase. Now all the GEO* commands
         will be documented with M|KM|FT|MI and can take both lowercase and uppercase
      
      ### Unrelated changes
      1. Bugfix: no_mandatory_keys was absent in COMMAND's reply
      2. expose CMD_MODULE as "module" via COMMAND
      3. have a dedicated uint64 for ACL categories (instead of having them in the same uint64 as command flags)
      Co-authored-by: Itamar Haber <itamar@garantiadata.com>