1. 08 Sep, 2023 1 commit
  2. 12 Mar, 2023 1 commit
      Fix the bug that CLIENT REPLY OFF|SKIP cannot receive push notifications (#11875) · 416842e6
      Binbin authored
      This bug seems to have been there forever: CLIENT REPLY OFF|SKIP will
      mark the client with the CLIENT_REPLY_OFF or CLIENT_REPLY_SKIP flags.
      With these flags, prepareClientToWrite, called by addReply*, will
      return C_ERR directly, so the client can't receive Pub/Sub
      messages or any other push notifications, e.g. client-side tracking invalidations.
      
      In this PR we add a CLIENT_PUSHING flag that overrides the reply-silencing
      flags: when adding push replies we set the flag, and after the reply we
      clear it. prepareClientToWrite now checks for this flag.
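
      Conceptually, the interplay looks like the minimal standalone sketch below (toy flag values and
      function names, not the actual Redis code):
      ```
      #include <stdio.h>

      #define CLIENT_REPLY_OFF  (1 << 0)
      #define CLIENT_REPLY_SKIP (1 << 1)
      #define CLIENT_PUSHING    (1 << 2)

      typedef struct { int flags; } client;

      /* Stand-in for prepareClientToWrite(): 0 means the write is allowed,
       * -1 (C_ERR) means the reply is suppressed. */
      static int prepare_client_to_write(client *c) {
          if ((c->flags & (CLIENT_REPLY_OFF | CLIENT_REPLY_SKIP)) &&
              !(c->flags & CLIENT_PUSHING))
              return -1;
          return 0;
      }

      /* Push replies wrap the write in CLIENT_PUSHING so they bypass the
       * reply-silencing flags. */
      static void add_push_reply(client *c, const char *msg) {
          c->flags |= CLIENT_PUSHING;
          if (prepare_client_to_write(c) == 0) printf("push: %s\n", msg);
          c->flags &= ~CLIENT_PUSHING;
      }

      int main(void) {
          client c = { .flags = CLIENT_REPLY_OFF };
          /* Normal replies stay silenced... */
          printf("normal reply allowed: %s\n",
                 prepare_client_to_write(&c) == 0 ? "yes" : "no");
          /* ...but push notifications (invalidation, Pub/Sub) get through. */
          add_push_reply(&c, "invalidate mykey");
          return 0;
      }
      ```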
      
      Fixes #11874
      
      Note, the SUBSCRIBE command response is a bit awkward,
      see https://github.com/redis/redis-doc/pull/2327
      
      Co-authored-by: Oran Agra <oran@redislabs.com>
  3. 11 Mar, 2023 1 commit
      Add reply_schema to command json files (internal for now) (#10273) · 4ba47d2d
      guybe7 authored
      Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845
      Since ironing out the details of the reply schema of each and every command can take a long time, we
      would like to merge this PR once the infrastructure is ready, and let it mature on the unstable branch.
      Meanwhile, the changes in this PR are internal: they are part of the repo but do not affect the produced build.
      
      ### Background
      In #9656 we added a lot of information about Redis commands, but we are still missing information about the replies.
      
      ### Motivation
      1. Documentation. This is the primary goal.
      2. It should be possible, based on the output of COMMAND, to generate client code in typed
        languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
      3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
        testsuite, see the "Testing" section)
      
      ### Schema
      The idea is to supply some sort of schema for the various replies of each command.
      The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
      Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
      and without the `FULL` modifier).
      We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.
      
      Example for `BZPOPMIN`:
      ```
      "reply_schema": {
          "oneOf": [
              {
                  "description": "Timeout reached and no elements were popped.",
                  "type": "null"
              },
              {
                  "description": "The keyname, popped member, and its score.",
                  "type": "array",
                  "minItems": 3,
                  "maxItems": 3,
                  "items": [
                      {
                          "description": "Keyname",
                          "type": "string"
                      },
                      {
                          "description": "Member",
                          "type": "string"
                      },
                      {
                          "description": "Score",
                          "type": "number"
                      }
                  ]
              }
          ]
      }
      ```
      
      #### Notes
      1. It is OK that some commands' reply structure depends on the arguments, and it's the caller's responsibility
        to know which one is relevant. This follows other request-reply systems like OpenAPI,
        where the reply schema can also be oneOf and the caller is responsible for knowing which schema is the relevant one.
      2. The reply schemas will describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply
        schema for documentation (and possibly to create a fuzzer that validates the replies).
      3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
        including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one
        is with `WITHSCORES` and the other is without.
      4. For documentation, there will be another optional field "notes" in which we will add a short description of
        the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
        array, for example)
      
      Given the above:
      1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
        (given that "description" and "notes" are comprehensive enough)
      2. We can generate a client in a strongly typed language (but the return type could be a conceptual
        `union` and the caller needs to know which schema is relevant). See the section below for RESP2 support.
      3. We can create a fuzzer for RESP3.
      
      ### Limitations (because we are using the standard json-schema)
      The problem is that Redis' replies are more diverse than what the json format allows. This means that,
      when we convert the reply to a json (in order to validate the schema against it), we lose information (see
      the "Testing" section below).
      The other option would have been to extend the standard json-schema (and json format) to include things
      like sets, bulk-strings, error-strings, etc., but that would mean also extending the schema-validator, and that
      seemed like too much work, so we decided to compromise.
      
      Examples:
      1. We cannot tell the difference between an "array" and a "set"
      2. We cannot tell the difference between simple-string and bulk-string
      3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
        case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
        compares (member,score) tuples and not just the member name.
      
      ### Testing
      This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
      are indeed correct (i.e. describe the actual response of Redis).
      To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
      it executed and their replies.
      For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
      `--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
      `--log-req-res --force-resp3`).
      You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
      `.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
      These files are later on processed by `./utils/req-res-log-validator.py` which does:
      1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
      2. For each request-response pair, it validates the response against the request's reply_schema
        (obtained from the extended COMMAND DOCS)
      3. In order to get good coverage of the Redis commands, and all their different replies, we chose to use
        the existing redis test suite, rather than attempt to write a fuzzer.
      
      #### Notes about RESP2
      1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to
        accept RESP3 as the future RESP)
      2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3
        so that we can validate it, we will need to know how to convert the actual reply to the one expected.
         - number and boolean are always strings in RESP2 so the conversion is easy
         - objects (maps) are always a flat array in RESP2
         - others (nested array in RESP3's `ZRANGE` and others) will need some special per-command
           handling (so the client will not be totally auto-generated)
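
      To make the nested-vs-flat point above concrete, here is a toy rendering of the same
      `ZRANGE ... WITHSCORES` data in both protocols (illustrative text output only, not actual RESP encoding):
      ```
      #include <stdio.h>

      int main(void) {
          const char *members[] = { "m1", "m2" };
          const double scores[] = { 1.0, 2.0 };
          int n = 2;

          /* RESP3: an array of [member, score] pairs. */
          printf("RESP3:");
          for (int i = 0; i < n; i++) printf(" [%s, %g]", members[i], scores[i]);
          printf("\n");

          /* RESP2: the same data flattened into member, score, member, score, ... */
          printf("RESP2:");
          for (int i = 0; i < n; i++) printf(" %s %g", members[i], scores[i]);
          printf("\n");
          return 0;
      }
      ```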
      
      Example for ZRANGE:
      ```
      "reply_schema": {
          "anyOf": [
              {
                  "description": "A list of member elements",
                  "type": "array",
                  "uniqueItems": true,
                  "items": {
                      "type": "string"
                  }
              },
              {
                  "description": "Members and their scores. Returned in case `WITHSCORES` was used.",
                  "notes": "In RESP2 this is returned as a flat array",
                  "type": "array",
                  "uniqueItems": true,
                  "items": {
                      "type": "array",
                      "minItems": 2,
                      "maxItems": 2,
                      "items": [
                          {
                              "description": "Member",
                              "type": "string"
                          },
                          {
                              "description": "Score",
                              "type": "number"
                          }
                      ]
                  }
              }
          ]
      }
      ```
      
      ### Other changes
      1. Some tests that behave differently depending on the RESP version are now tested for both RESP2 and RESP3,
        regardless of the special log-req-res mode ("Pub/Sub PING" for example)
      2. Updated the history field of CLIENT LIST
      3. Added basic tests for commands that were not covered at all by the testsuite
      
      ### TODO
      
      - [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g.
        when `SET` returns NULL, the condition is `arguments.get||arguments.condition`, for `OK` the condition
        is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
      - [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
      - [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
      - [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output
        of the tests - https://github.com/redis/redis/issues/11897
      - [x] (probably a separate PR) add all missing schemas
      - [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
      - [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to
        fight with the tcl including mechanism a bit)
      - [x] issue: module API - https://github.com/redis/redis/issues/11898
      - [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899
      
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Shaya Potter <shaya@redislabs.com>
  4. 21 Feb, 2023 1 commit
      Prevent Redis from crashing from key tracking invalidations (#11814) · dca5927a
      Madelyn Olson authored
      There is a built-in limit to client side tracking keys which, when exceeded, will invalidate keys. This occurs in two places: one in the server cron and the other before executing a command. If it happens in the second scenario, the invalidations will be queued for later, since the current client is set. This queue is never drained if a command is not executed (through call), such as when a command is queued inside MULTI/EXEC, and the stale queue later triggers a server assert and crashes.
  5. 16 Feb, 2023 1 commit
      Cleanup around script_caller, fix tracking of scripts and ACL logging for RM_Call (#11770) · 233abbbe
      Oran Agra authored
      * Make it clear that current_client is the root client that was called by
        an external connection
      * Add executing_client, which is the client that runs the current command
        (can be a module or a script)
      * Remove script_caller, which was used for commands that have CLIENT_SCRIPT
        to get the client that called the script. In most cases that's the current_client,
        and in others (when being called from a module) it could be an intermediate
        client when we actually want the original one used by the external connection.
      
      bugfixes:
      * RM_Call with the C flag should log ACL errors with the requested user rather than
        the one used by the original client; this also solves a crash when RM_Call is used
        with the C flag from a detached thread safe context.
      * addACLLogEntry would have logged info about the script_caller, but in case the
        script was issued by a module command we actually want the current_client. The
        exception is when RM_Call is called from a timer event, in which case we don't
        have a current_client.
      
      behavior changes:
      * client side tracking for scripts now tracks the keys that are read by the script
        instead of the keys that are declared by the caller for EVAL
      
      other changes:
      * Log both current_client and executing_client in the crash log.
      * Remove prepareLuaClient and resetLuaClient, which were forgotten dead code.
      * Remove scriptTimeSnapshot and snapshot_time and instead add cmd_time_snapshot,
        which serves all commands and is reset only when execution nesting starts.
      * Remove the code that propagated CLIENT_FORCE_REPL from the executed command
        to the script caller, since scripts aren't propagated these days and the flag
        wouldn't have had an effect anyway because CLIENT_PREVENT_PROP is added by scriptResetRun.
      * fix a module GIL violation issue in afterSleep that was introduced in #10300 (unreleased)
  6. 10 Aug, 2022 1 commit
  7. 31 Jul, 2022 1 commit
  8. 26 Jul, 2022 1 commit
  9. 30 May, 2022 1 commit
  10. 26 May, 2022 1 commit
  11. 02 Nov, 2021 2 commits
      Solve issues with tracking test in external mode (#9726) · d25dc089
      Oran Agra authored
      The issue was that setting maxmemory to used_memory and expecting
      eviction is insufficient, since we need to take
      mem_not_counted_for_evict into consideration: eviction only kicks in once
      used_memory minus mem_not_counted_for_evict exceeds maxmemory.
      
      This test got broken by #9166
      attempt to fix tracking test issue with external tests due to lazy free (#9722) · 87321deb
      Oran Agra authored
      The external tests started failing recently for an unclear reason:
      ```
      *** [err]: Tracking invalidation message of eviction keys should be before response in tests/unit/tracking.tcl
      Expected '0' to be equal to 'invalidate volatile-key' (context: type eval line 21 cmd {assert_equal $res {invalidate volatile-key}} proc ::test)
      ```
      
      I suspect the issue is that the used_memory sample is taken while a lazy free is still being processed.
  12. 07 Oct, 2021 1 commit
  13. 22 Jun, 2021 1 commit
      Fix race in client side tracking (#9116) · 9b564b52
      Oran Agra authored
      The `Tracking gets notification of expired keys` test in tracking.tcl
      used to hang in the valgrind CI quite a lot.
      
      It turns out the reason is that with valgrind and a busy machine, the
      server cron active expire cycle could easily run in the same event loop
      as the command that created `mykey`, so that when the key got expired,
      there were two change events to broadcast: one that set the key and one
      that expired it. But since we used raxTryInsert, the client that was
      associated with the "last" change was the one that created the key, so
      NOLOOP filtered that event.
      
      This commit adds a test that reproduces the problem by using lazy expire
      inside MULTI/EXEC, which makes sure the key expires in the same event loop
      as the one that added it.
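
      To illustrate the "first writer wins" behaviour described above, here is a toy model that uses a single
      hypothetical slot instead of the actual rax (illustrative only, not the Redis code):
      ```
      #include <stdio.h>
      #include <string.h>

      /* The tracking table remembers which client performed the "last" change to a
       * key so NOLOOP can avoid notifying it. With try-insert semantics the first
       * writer wins, so a create+expire in the same event loop leaves the creator
       * recorded, and its invalidation message gets filtered out. */
      struct change { char key[32]; int client_id; int used; };

      static void try_insert(struct change *slot, const char *key, int client_id) {
          if (slot->used && strcmp(slot->key, key) == 0) return; /* keep old entry */
          snprintf(slot->key, sizeof(slot->key), "%s", key);
          slot->client_id = client_id;
          slot->used = 1;
      }

      int main(void) {
          struct change slot = {0};
          try_insert(&slot, "mykey", 7);  /* client 7 creates mykey                  */
          try_insert(&slot, "mykey", -1); /* expire event in the same loop: kept out */
          /* NOLOOP now skips client 7, the very client that should have been
           * told that mykey expired. */
          printf("client recorded for mykey: %d\n", slot.client_id); /* prints 7 */
          return 0;
      }
      ```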
  14. 09 Jun, 2021 1 commit
      Improve test suite to handle external servers better. (#9033) · 8a86bca5
      Yossi Gottlieb authored
      This commit revives and improves the ability to run the test suite against
      external servers, instead of launching and managing `redis-server` processes as
      part of the test fixture.
      
      This capability existed in the past, using the `--host` and `--port` options.
      However, it was quite limited and mostly useful when running specific tests.
      Attempting to run larger chunks of the test suite ran into many issues:
      
      * Many tests depend on being able to start and control `redis-server` themselves,
      and there's no clear distinction between external server compatible and other
      tests.
      * Cluster mode is not supported (resulting in `CROSSSLOT` errors).
      
      This PR cleans up many things and makes it possible to run the entire test suite
      against an external server. It also provides more fine grained controls to
      handle cases where the external server supports a subset of the Redis commands,
      limited number of databases, cluster mode, etc.
      
      The tests directory now contains a `README.md` file that describes how this
      works.
      
      This commit also includes additional cleanups and fixes:
      
      * Tests can now be tagged.
      * Tag-based selection is now unified across `start_server`, `tags` and `test`.
      * More information is provided about skipped or ignored tests.
      * Repeated patterns in tests have been extracted to common procedures, both at a
        global level and on a per-test file basis.
      * Cleaned up some cases where test setup was based on a previous test executing
        (a major anti-pattern that repeats itself in many places).
      * Cleaned up some cases where test teardown was not part of a test (in the
        future we should have dedicated teardown code that executes even when tests
        fail).
      * Fixed some tests that were flaky running on external servers.
  15. 20 Apr, 2021 1 commit
  16. 21 Feb, 2021 1 commit
      Client tracking tracking-redir-broken push len is 2 not 3 (#8456) · f687ac0c
      Huang Zw authored
      When Redis responded with a tracking-redir-broken push message (RESP3),
      it used a broken protocol: it announced an array of 3 elements
      but pushed only 2.
      
      Bugs in the test made this pass: reading the push reply
      consumed an extra reply, because the announced length was 3 but there
      were only two elements, so the next reply was treated as the third
      element. The test is corrected too.
      
      Other changes:
      * checkPrefixCollisionsOrReply should return 1 on success instead of -1;
        this bug didn't have any implications.
      * Improve the client tracking tests to validate more of the responses they read.
  17. 17 Jan, 2021 1 commit
      Add io-thread daily CI tests. (#8232) · 522d9360
      Yossi Gottlieb authored
      This adds basic coverage for IO threads by running the cluster tests and a few selected Redis test suite tests with IO threads enabled.
      
      Also provides some necessary additional improvements to the test suite:
      
      * Add --config to sentinel/cluster tests for arbitrary configuration.
      * Fix --tags whitelisting which was broken.
      * Add a `network` tag to some tests that are more network intensive. This is work in progress and more tests should be properly tagged in the future.
  18. 08 Jan, 2021 1 commit
  19. 05 Jan, 2021 1 commit
      Fix wrong order of key/value in Lua map response (#8266) · 2017407b
      Oran Agra authored
      When a Lua script returns a map to redis (a feature which was added in
      redis 6 together with RESP3), it would return the value first and
      the key second.
      
      If the client was using RESP2, it was getting them out of order, and if
      the client was in RESP3, it was getting a map of value => key.
      This was happening regardless of the Lua script using redis.setresp(3)
      or not.
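
      For context, the Lua C API's stack layout is what makes the order matter: after lua_next() the key sits
      at stack index -2 and the value at -1, so the key has to be emitted first. A tiny standalone
      illustration (plain Lua C API, not the Redis conversion code):
      ```
      #include <stdio.h>
      #include <lua.h>
      #include <lauxlib.h>
      #include <lualib.h>

      int main(void) {
          lua_State *L = luaL_newstate();
          luaL_openlibs(L);
          luaL_dostring(L, "t = { user = 'john', password = '123' }");
          lua_getglobal(L, "t");

          lua_pushnil(L);                  /* first "previous key" for lua_next() */
          while (lua_next(L, -2) != 0) {   /* pushes key at -2 and value at -1 */
              printf("%s => %s\n",
                     lua_tostring(L, -2),  /* emit the key first (safe here: keys are strings) */
                     lua_tostring(L, -1)); /* ...then the value */
              lua_pop(L, 1);               /* pop the value; keep the key for the next iteration */
          }
          lua_close(L);
          return 0;
      }
      ```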
      
      This also affects a case where the script returns a map which it got
      from redis by doing something like: redis.setresp(3); return redis.call()
      
      This fix is a breaking change for redis 6.0 users who happened to rely
      on the wrong order (either ones that used redis.setresp(3), or ones that
      returned a map explicitly).
      
      This commit also includes two other changes in the tests:
      1. The test suite now handles RESP3 maps as dicts rather than nested
         lists
      2. Remove some redundant (duplicate) tests from tracking.tcl
  20. 27 Dec, 2020 1 commit
  21. 24 Dec, 2020 4 commits
  22. 09 Nov, 2020 1 commit
      Extend client tracking tests (#7998) · 19c29b60
      nitaicaro authored
      Test support for the new map, null and push message types. Map objects are parsed as a list of lists of key-value pairs.
      For instance: user => john, password => 123
      
      will be parsed to the following TCL list:
      
      {{user john} {password 123}}
      
      Also added the following tests:
      
      * Redirection still works with RESP3
      * Able to use a RESP3 client as a redirection client
      * No duplicate invalidation messages when turning BCAST mode on after normal tracking
      * Server is able to evacuate enough keys when the number of keys surpasses the limit by more than the defined initial effort
      * Different clients using different protocols can track the same key
      * OPTOUT tests
      * OPTIN tests
      * Clients can redirect to the same connection
      * tracking-redir-broken test
      * HELLO 3 checks
      * Invalidation messages still work when using RESP3, with and without redirection
      * Switching to RESP3 doesn't disturb previously tracked keys
      * Tracking info is correct
      * FLUSHALL and FLUSHDB produce invalidation messages
      
      These tests achieve 100% line coverage for tracking.c using lcov.
  23. 30 Sep, 2020 1 commit
      Fixed Tracking test “The other connection is able to get invalidations” (#7871) · 8fb89a57
      nitaicaro authored
      
      
      PROBLEM:
      
      [$rd1 read] reads invalidation messages one by one, so it's never going to see the second invalidation message produced after INCR b, whether or not it exists. Adding another read would block in case no invalidation message is produced.
      
      FIX:
      
      We switch the order of "INCR a" and "INCR b": now "INCR b" comes first. We still only read the first invalidation message produced. If an invalidation message is wrongly produced for b, then it will be produced before that of a, since "INCR b" comes before "INCR a".
      Co-authored-by: Nitai Caro <caronita@amazon.com>
  24. 14 May, 2020 1 commit
  25. 22 Apr, 2020 2 commits
  26. 14 Feb, 2020 1 commit