1. 08 Oct, 2024 1 commit
  2. 16 Jul, 2024 1 commit
    • Test infra adjustments for external CI runs (#13421) · fa46aa4d
      Oran Agra authored
      - when uploading server logs, make sure they don't overwrite each other.
      - sort the test units to get consistent order between them (following
      #13220)
      - backup and restore the entire server configuration, to protect one
      unit from config changes another unit performs
      fa46aa4d
  3. 30 May, 2024 1 commit
    • dynamically list test files (#13220) · 5a3534f9
      jonghoonpark authored
      **Related issue**
      https://github.com/redis/redis/issues/13219
      
      **Motivation**
      Currently we have to manually update the all_tests variable when
      introducing new test files.
      
      **Modification**
      I modified it to list test files dynamically. Rather than adding all
      test files, it only adds test files from the following 4 paths:
      
      - unit
      - unit/type
      - unit/cluster
      - integration
      
      so that it doesn't deviate too much from what we already do
      
      **Result**
      - dynamically list test files to all_tests variable
      - close issue https://github.com/redis/redis/issues/13219
      
      
      
      **Additional information**
      - Removed the `list-common.tcl` file and added a
      `generate_largevalue_test_array` proc in `util.tcl`, because
      `list-common.tcl` is not a test file.
      - There is an order dependency, so I added code to the "Is a ziplist
      encoded Hash promoted on big payload?" test that resets
      hash-max-listpack-value to the default (64).
      
      ---------
      Signed-off-by: jonghoonpark <dev@jonghoonpark.com>
      Co-authored-by: debing.sun <debing.sun@redis.com>
      5a3534f9
  4. 21 May, 2024 1 commit
    • Have consistent behavior of SPUBLISH within multi/exec like regular command (#13276) · 9ffc35c9
      debing.sun authored
      
      
      This PR is based on the commits from PR #12944.
      
      Allow SPUBLISH command within multi/exec on replica
      
      Behavior on unstable:
      
      ```
      127.0.0.1:6380> CLUSTER NODES
      39ce8aa20f1f0d91f1a88d976ee1926dfefcdf1a 127.0.0.1:6380@16380 myself,slave 8b0feb120b68aac489d6a5af9c77dc40d71bc792 0 0 0 connected
      8b0feb120b68aac489d6a5af9c77dc40d71bc792 127.0.0.1:6379@16379 master - 0 1705091681202 0 connected 0-16383
      127.0.0.1:6380> SPUBLISH hello world
      (integer) 0
      127.0.0.1:6380> MULTI
      OK
      127.0.0.1:6380(TX)> SPUBLISH hello world
      QUEUED
      127.0.0.1:6380(TX)> EXEC
      (error) MOVED 866 127.0.0.1:6379
      ```
      
      With this change:
      
      ```
      127.0.0.1:6380> SPUBLISH hello world
      (integer) 0
      127.0.0.1:6380> MULTI
      OK
      127.0.0.1:6380(TX)> SPUBLISH hello world
      QUEUED
      127.0.0.1:6380(TX)> EXEC
      1) (integer) 0
      ```
      
      ---------
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: oranagra <oran@redislabs.com>
      9ffc35c9
  5. 18 Apr, 2024 1 commit
    • Hash Field Expiration - Basic support · c18ff056
      Moti Cohen authored
      - Add ebuckets & mstr data structures
      - Integrate active & lazy expiration
      - Add most of the commands 
      - Add support for dict (listpack is missing)
      TODOs:  RDB, notification, listpack, HSET, HGETF, defrag, aof
      c18ff056
  6. 20 Mar, 2024 1 commit
  7. 20 Feb, 2024 1 commit
    • Fix watched client test timing issue caused by late close (#13062) · 3c2ea1ea
      Binbin authored
      There is a timing issue in the test: the close may arrive late, or
      freeClientAsync may free the client asynchronously, which leads to
      errors in the watching_clients statistics, since we only unwatch all
      keys when we truly free the client.
      
      Add a wait here to avoid this problem. Also fixed some outdated
      comments I saw. The test was introduced in #12966.
      3c2ea1ea
  8. 11 Jan, 2024 1 commit
  9. 10 Jan, 2024 1 commit
  10. 27 Sep, 2023 1 commit
  11. 26 Jun, 2023 1 commit
    • Support TLS service when "tls-cluster" is not enabled and persist both plain and TLS port in nodes.conf (#12233) · 22a29935
      Chen Tianjie authored
      
      Originally, when "tls-cluster" is enabled, `port` is set to TLS port. In order to support non-TLS clients, `pport` is used to propagate TCP port across cluster nodes. However when "tls-cluster" is disabled, `port` is set to TCP port, and `pport` is not used, which means the cluster cannot provide TLS service unless "tls-cluster" is on.
      ```
      typedef struct {
          // ...
          uint16_t port;  /* Latest known clients port (TLS or plain). */
          uint16_t pport; /* Latest known clients plaintext port. Only used if the main clients port is for TLS. */
          // ...
      } clusterNode;
      ```
      ```
      typedef struct {
          // ...
          uint16_t port;   /* TCP base port number. */
          uint16_t pport;  /* Sender TCP plaintext port, if base port is TLS */
          // ...
      } clusterMsg;
      ```
      This PR renames `port` and `pport` in `clusterNode` to `tcp_port` and `tls_port`, to record both ports whether "tls-cluster" is enabled or disabled.
      
      This allows the server to provide TLS service to clients even when "tls-cluster" is disabled: when displaying the cluster topology, or when giving a `MOVED` error, the server can provide the TLS or TCP port according to the client's connection type, no matter what type of connection the cluster bus is using.
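      
      A minimal sketch of that port selection, assuming the renamed `tcp_port`/`tls_port` fields described above (the helper name here is illustrative):
      ```
      /* Report the TLS port to TLS clients and the TCP port to plain
       * clients, no matter how the cluster bus itself connects. */
      int clientFacingPort(clusterNode *node, int client_is_tls) {
          return client_is_tls ? node->tls_port : node->tcp_port;
      }
      ```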
      
      For backwards compatibility, `port` and `pport` in `clusterMsg` are preserved: when "tls-cluster" is enabled, `port` is set to the TLS port and `pport` to the TCP port; when "tls-cluster" is disabled, `port` is set to the TCP port and `pport` to the TLS port (instead of 0).
      
      Also, in the nodes.conf file, a new aux field displaying an extra port is added to complete the persisted info. We may have `tls-port=xxxxx` or `tcp-port=xxxxx` in the aux field, to complete the cluster topology, while the other port is stored in the normal `<ip>:<port>` field. The format is shown below.
      ```
      <node-id> <ip>:<tcp_port>@<cport>,<hostname>,shard-id=...,tls-port=6379 myself,master - 0 0 0 connected 0-1000
      ```
      Or we can switch the position of two ports, both can be correctly resolved.
      ```
      <node-id> <ip>:<tls_port>@<cport>,<hostname>,shard-id=...,tcp-port=6379 myself,master - 0 0 0 connected 0-1000
      ```
      22a29935
  12. 18 Jun, 2023 1 commit
    • Cluster human readable nodename feature (#9564) · 070453ee
      Wen Hui authored
      
      
      This PR adds a human readable name to a node in clusters, visible as part of error logs. This is useful so that admins and operators of a Redis cluster have better visibility into failures without having to cross-reference the generated ID with some logical identifier (such as pod-ID or EC2 instance ID). This is mentioned in #8948. A specific nodename can be set using the cluster-announce-human-nodename config. The nodename is gossiped using the cluster bus extension introduced in #9530.
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      070453ee
  13. 12 Apr, 2023 1 commit
    • Attempt to solve MacOS CI issues in GH Actions (#12013) · 997fa41e
      Oran Agra authored
      The MacOS CI in github actions often hangs without any logs. GH argues that
      it's due to resource utilization, either running out of disk space, memory, or CPU
      starvation, and thus the runner is terminated.
      
      This PR contains multiple attempts to resolve this:
      1. introducing pause_process instead of raw SIGSTOP: it waits for the process
        to actually stop before resuming the test, possibly resolving race conditions in some
        tests. This was a suspect since one test could result in an infinite loop in that
        case; in practice this didn't help, but it's still a good idea to keep (see the
        sketch after this list).
      2. disable the `save` config in many tests that don't need it, specifically ones that use
        heavy writes and could create large files.
      3. change the `populate` proc to use short pipeline rather than an infinite one.
      4. use `--clients 1` in the macos CI so that we don't risk running multiple resource
        demanding tests in parallel.
      5. enable `--verbose` to be repeated to elevate verbosity and print more info to stdout
        when a test or a server starts.
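      
      Conceptually, item 1 replaces a bare SIGSTOP with a stop-and-confirm step. A minimal
      C sketch of the idea, assuming the target is a child process (the helper name is
      illustrative, not the test suite's actual proc):
      ```
      #include <signal.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      
      /* Stop a child process and wait until the kernel actually reports it
       * as stopped, instead of assuming SIGSTOP took effect immediately. */
      int pause_child(pid_t pid) {
          if (kill(pid, SIGSTOP) == -1) return -1;
          int status;
          /* WUNTRACED makes waitpid also return when the child stops. */
          if (waitpid(pid, &status, WUNTRACED) == -1) return -1;
          return WIFSTOPPED(status) ? 0 : -1;
      }
      ```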
      997fa41e
  14. 20 Mar, 2023 1 commit
    • Fix new subscribe mode test in reply-schemas-validator (#11939) · c9124145
      Binbin authored
      The reason is that in reply-schemas-validator, the resp of the
      client we create will be client_default_resp (currently 3):
      ```
      client *createClient(connection *conn) {
          client *c = zmalloc(sizeof(client));
      #ifdef LOG_REQ_RES
          reqresReset(c, 0);
          c->resp = server.client_default_resp;
      #else
          c->resp = 2;
      #endif
          /* ... */
          return c;
      }
      ```
      
      But current_resp3 in redis-cli will be inconsistent with it, so the
      test adds a simple `hello 3` to avoid this failure. The test was
      added in #11873.
      
      Also added a help description for the dont-pre-clean option, which
      was added in #10273.
      c9124145
  15. 11 Mar, 2023 1 commit
    • Add reply_schema to command json files (internal for now) (#10273) · 4ba47d2d
      guybe7 authored
      Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845.
      Since ironing out the details of the reply schema of each and every command can take a long time, we
      would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch.
      Meanwhile the changes of this PR are internal: they are part of the repo, but do not affect the produced build.
      
      ### Background
      In #9656 we added a lot of information about Redis commands, but we are missing information about the replies.
      
      ### Motivation
      1. Documentation. This is the primary goal.
      2. It should be possible, based on the output of COMMAND, to generate client code in typed
        languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
      3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
        testsuite, see the "Testing" section)
      
      ### Schema
      The idea is to supply some sort of schema for the various replies of each command.
      The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
      Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
      and without the `FULL` modifier)
      We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.
      
      Example for `BZPOPMIN`:
      ```
      "reply_schema": {
          "oneOf": [
              {
                  "description": "Timeout reached and no elements were popped.",
                  "type": "null"
              },
              {
                  "description": "The keyname, popped member, and its score.",
                  "type": "array",
                  "minItems": 3,
                  "maxItems": 3,
                  "items": [
                      {
                          "description": "Keyname",
                          "type": "string"
                      },
                      {
                          "description": "Member",
                          "type": "string"
                      },
                      {
                          "description": "Score",
                          "type": "number"
                      }
                  ]
              }
          ]
      }
      ```
      
      #### Notes
      1. It is ok that some commands' reply structure depends on the arguments and it's the caller's responsibility
        to know which is the relevant one. This comes after looking at other request-reply systems like OpenAPI,
        where the reply schema can also be oneOf and the caller is responsible for knowing which schema is the relevant one.
      2. The reply schemas will describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply
        schema for documentation (and possibly to create a fuzzer that validates the replies).
      3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
        including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one
        is with `WITHSCORES` and the other is without.
      4. For documentation, there will be another optional field "notes" in which we will add a short description of
        the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
        array, for example)
      
      Given the above:
      1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
        (given that "description" and "notes" are comprehensive enough)
      2. We can generate a client in a strongly typed language (but the return type could be a conceptual
        `union` and the caller needs to know which schema is relevant). see the section below for RESP2 support.
      3. We can create a fuzzer for RESP3.
      
      ### Limitations (because we are using the standard json-schema)
      The problem is that Redis' replies are more diverse than what the json format allows. This means that,
      when we convert the reply to a json (in order to validate the schema against it), we lose information (see
      the "Testing" section below).
      The other option would have been to extend the standard json-schema (and json format) to include stuff
      like sets, bulk-strings, error-string, etc. but that would mean also extending the schema-validator - and that
      seemed like too much work, so we decided to compromise.
      
      Examples:
      1. We cannot tell the difference between an "array" and a "set"
      2. We cannot tell the difference between simple-string and bulk-string
      3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
        case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
        compares (member,score) tuples and not just the member name.
      
      ### Testing
      This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
      are indeed correct (i.e. describe the actual response of Redis).
      To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
      it executed and their replies.
      For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
      `--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
      `--log-req-res --force-resp3`)
      You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
      `.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
      These files are later on processed by `./utils/req-res-log-validator.py` which does:
      1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
      2. For each request-response pair, it validates the response against the request's reply_schema
        (obtained from the extended COMMAND DOCS)
      3. In order to get good coverage of the Redis commands, and all their different replies, we chose to use
        the existing redis test suite, rather than attempt to write a fuzzer.
      
      #### Notes about RESP2
      1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to
        accept RESP3 as the future RESP)
      2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3
        so that we can validate it, we will need to know how to convert the actual reply to the one expected.
         - number and boolean are always strings in RESP2 so the conversion is easy
         - objects (maps) are always a flat array in RESP2
         - others (nested array in RESP3's `ZRANGE` and others) will need some special per-command
           handling (so the client will not be totally auto-generated)
      
      Example for ZRANGE:
      ```
      "reply_schema": {
          "anyOf": [
              {
                  "description": "A list of member elements",
                  "type": "array",
                  "uniqueItems": true,
                  "items": {
                      "type": "string"
                  }
              },
              {
                  "description": "Members and their scores. Returned in case `WITHSCORES` was used.",
                  "notes": "In RESP2 this is returned as a flat array",
                  "type": "array",
                  "uniqueItems": true,
                  "items": {
                      "type": "array",
                      "minItems": 2,
                      "maxItems": 2,
                      "items": [
                          {
                              "description": "Member",
                              "type": "string"
                          },
                          {
                              "description": "Score",
                              "type": "number"
                          }
                      ]
                  }
              }
          ]
      }
      ```
      
      ### Other changes
      1. Some tests that behave differently depending on the RESP version are now being tested for both RESPs,
        regardless of the special log-req-res mode ("Pub/Sub PING" for example)
      2. Update the history field of CLIENT LIST
      3. Added basic tests for commands that were not covered at all by the testsuite
      
      ### TODO
      
      - [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g.
        when `SET` return NULL, the condition is `arguments.get||arguments.condition`, for `OK` the condition
        is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
      - [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
      - [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
      - [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output
        of the tests - https://github.com/redis/redis/issues/11897
      - [x] (probably a separate PR) add all missing schemas
      - [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
      - [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to
        fight with the tcl including mechanism a bit)
      - [x] issue: module API - https://github.com/redis/redis/issues/11898
      - [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899
      
      Co-authored-by: default avatarOzan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: default avatarHanna Fadida <hanna.fadida@redislabs.com>
      Co-authored-by: default avatarOran Agra <oran@redislabs.com>
      Co-authored-by: default avatarShaya Potter <shaya@redislabs.com>
      4ba47d2d
  16. 08 Mar, 2023 1 commit
    • Fix test and improve assert_replication_stream to print the whole stream (#11793) · a7c9e505
      Binbin authored
      This PR has two parts:
      
      1. Fix a flaky test case: the previous tests set a lot of volatile keys,
      which injected an unexpected DEL command into the replication stream during
      a later test, causing it to fail. Add a flushall to avoid it.
      
      2. Improve assert_replication_stream: it can now print the whole stream
      rather than just the failing line.
      a7c9e505
  17. 02 Nov, 2022 1 commit
  18. 25 Oct, 2022 1 commit
  19. 16 Oct, 2022 1 commit
  20. 28 Aug, 2022 1 commit
  21. 24 Aug, 2022 1 commit
    • Fix assertion when a key is lazy expired during cluster key migration (#11176) · c789fb0a
      Oran Agra authored
      Redis 7.0 has #9890, which added an assertion when the propagation queue
      was not flushed and we got to beforeSleep.
      But it turns out that when processCommands calls getNodeByQuery and
      decides to reject the command, it can lead to a key being lazy
      expired and deleted without later flushing the propagation queue.
      
      This change prevents lazy expiry from deleting the key at this stage
      (not as part of a command being processed in `call`).
      c789fb0a
  22. 23 Aug, 2022 1 commit
    • Build TLS as a loadable module · 4faddf18
      Oran Agra authored
      
      
      * Support BUILD_TLS=module to be loaded as a module via config file or
        command line. e.g. redis-server --loadmodule redis-tls.so
      * Updates to redismodule.h to allow it to be used side by side with
        server.h by defining REDISMODULE_CORE_MODULE
      * Changes to server.h, redismodule.h and module.c to avoid repeated
        type declarations (gcc 4.8 doesn't like these)
      * Add a mechanism for non-ABI neutral modules (ones who include
        server.h) to refuse loading if they detect not being built together with
        redis (release.c)
      * Fix wrong signature of RedisModuleDefragFunc, this could break
        compilation of a module, but not the ABI
      * Move initialization of listeners in server.c to be after loading
        the modules
      * Config TLS after initialization of listeners
      * Init cluster after initialization of listeners
      * Add TLS module to CI
      * Fix a test suite race condition:
        Now that the listeners are initialized later, it's not sufficient to
        wait for the PID message in the log, we need to wait for the "Server
        Initialized" message.
      * Fix issues with moduleconfigs test as a result from start_server
        waiting for "Server Initialized"
      * Fix issues with modules/infra test as a result of an additional module
        present
      
      Notes about Sentinel:
      Sentinel can't really rely on the tls module, since it uses hiredis to
      initiate connections and depends on OpenSSL (won't be able to use any
      other connection modules for that), so it was decided that when TLS is
      built as a module, sentinel does not support TLS at all.
      This means that it keeps using redis_tls_ctx and redis_tls_client_ctx directly.
      
      Example code of config in redis-tls.so (may be used in the future):
      ```
      RedisModuleString *tls_cfg = NULL;
      
      void tlsInfo(RedisModuleInfoCtx *ctx, int for_crash_report) {
          UNUSED(for_crash_report);
          RedisModule_InfoAddSection(ctx, "");
          RedisModule_InfoAddFieldLongLong(ctx, "var", 42);
      }
      
      int tlsCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
      {
          if (argc != 2) return RedisModule_WrongArity(ctx);
          return RedisModule_ReplyWithString(ctx, argv[1]);
      }
      
      RedisModuleString *getStringConfigCommand(const char *name, void *privdata) {
          REDISMODULE_NOT_USED(name);
          REDISMODULE_NOT_USED(privdata);
          return tls_cfg;
      }
      
      int setStringConfigCommand(const char *name, RedisModuleString *new, void *privdata, RedisModuleString **err) {
          REDISMODULE_NOT_USED(name);
          REDISMODULE_NOT_USED(err);
          REDISMODULE_NOT_USED(privdata);
          if (tls_cfg) RedisModule_FreeString(NULL, tls_cfg);
          RedisModule_RetainString(NULL, new);
          tls_cfg = new;
          return REDISMODULE_OK;
      }
      
      int RedisModule_OnLoad(void *ctx, RedisModuleString **argv, int argc)
      {
          ....
          if (RedisModule_CreateCommand(ctx,"tls",tlsCommand,"",0,0,0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
      
          if (RedisModule_RegisterStringConfig(ctx, "cfg", "", REDISMODULE_CONFIG_DEFAULT, getStringConfigCommand, setStringConfigCommand, NULL, NULL) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
      
          if (RedisModule_LoadConfigs(ctx) == REDISMODULE_ERR) {
              if (tls_cfg) {
                  RedisModule_FreeString(ctx, tls_cfg);
                  tls_cfg = NULL;
              }
              return REDISMODULE_ERR;
          }
          ...
      }
      ```
      Co-authored-by: zhenwei pi <pizhenwei@bytedance.com>
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      4faddf18
  23. 04 Aug, 2022 1 commit
  24. 17 Jul, 2022 1 commit
  25. 12 Jul, 2022 1 commit
  26. 25 May, 2022 1 commit
  27. 26 Apr, 2022 1 commit
    • By default prevent cross slot operations in functions and scripts with # (#10615) · efcd1bf3
      Madelyn Olson authored
      Adds the `allow-cross-slot-keys` flag to Eval scripts and Functions to allow
      scripts to access keys from multiple slots.
      The default behavior is now that they are not allowed to do that (unlike before).
      This is a breaking change for 7.0 release candidates (to be part of 7.0.0), but
      not for previous redis releases since EVAL without shebang isn't doing this check.
      
      Note that the check is done on both the keys declared by the EVAL / FCALL command
      arguments, and also the ones used by the script when making a `redis.call`.
      
      A note about the implementation: there seems to have been some confusion
      about allowing access to non-local keys. I thought I had missed something in our
      wider conversation, but Redis scripts do block access to non-local keys.
      So the issue was just about cross-slot keys being accessed.
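      
      A minimal sketch of the slot check, assuming the existing `keyHashSlot` helper from
      cluster.c (the wrapper name is illustrative):
      ```
      unsigned int keyHashSlot(char *key, int keylen); /* cluster.c */
      
      /* Without the allow-cross-slot-keys flag, every key the script
       * touches must hash to one slot. Returns 1 when keys cross slots. */
      int scriptKeysCrossSlots(char **keys, int *lens, int numkeys) {
          if (numkeys <= 1) return 0;
          unsigned int slot = keyHashSlot(keys[0], lens[0]);
          for (int i = 1; i < numkeys; i++)
              if (keyHashSlot(keys[i], lens[i]) != slot) return 1;
          return 0;
      }
      ```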
      efcd1bf3
  28. 22 Feb, 2022 1 commit
    • introduce dynamic client reply buffer size - save memory on idle clients (#9822) · 47c51d0c
      ranshid authored
      
      
      In the current implementation, a simple idle client which serves no traffic still
      uses ~17KB of memory, mainly due to a fixed-size reply buffer
      currently set to 16KB.
      
      We have encountered some cases in which the server operates in low memory environments.
      In such cases, a user who wishes to create large connection pools to support potential burst periods
      will exhaust a large amount of memory to maintain connected idle clients.
      Some users may choose to "sacrifice" performance in order to save memory.
      
      This commit introduces a dynamic mechanism to shrink and expand the client reply buffer based on
      a periodically observed peak.
      The algorithm works as follows:
      1. Each time a client reply buffer has been fully written, the last recorded peak is updated:
      new peak = MAX(last peak, current written size)
      2. During clients cron we check for each client whether the last observed peak was:
           a. matching the current buffer size - in which case we expand (resize) the buffer size by 100%
           b. less than half the buffer size - in which case we shrink the buffer size by 50%
      3. In any case we will **not** resize the buffer if:
          a. the current buffer peak is less than the current buffer usable size and higher than 1/2 the
            current buffer usable size
          b. the value of (current buffer usable size/2) is less than 1KiB
          c. the value of (current buffer usable size*2) is larger than 16KiB
      4. The peak value is reset to the current buffer position once every **5** seconds. We maintain a new
         field in the client structure (buf_peak_last_reset_time) which is used to keep track of how long has
         passed since the last buffer peak reset.
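      
      A minimal sketch of the cron-time decision described above, with hypothetical names
      (buf_usable is the current reply buffer size, peak the recorded high-water mark):
      ```
      #include <stddef.h>
      
      #define REPLY_BUF_MIN (1024)     /* never shrink below 1KiB */
      #define REPLY_BUF_MAX (16*1024)  /* never expand above 16KiB */
      
      size_t nextReplyBufferSize(size_t buf_usable, size_t peak) {
          /* Peak filled the whole buffer: expand by 100%. */
          if (peak >= buf_usable && buf_usable * 2 <= REPLY_BUF_MAX)
              return buf_usable * 2;
          /* Peak stayed under half the buffer: shrink by 50%. */
          if (peak < buf_usable / 2 && buf_usable / 2 >= REPLY_BUF_MIN)
              return buf_usable / 2;
          return buf_usable; /* otherwise leave the buffer as is */
      }
      ```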
      
      ### **Interface changes:**
      **CLIENT LIST** - now contains 2 new extra fields:
      rbs=< the current size in bytes of the client reply buffer >
      rbp=< the current value in bytes of the last observed buffer peak position >
      
      **INFO STATS** - now contains 2 new statistics:
      reply_buffer_shrinks = < total number of buffer shrinks performed >
      reply_buffer_expands = < total number of buffer expands performed >
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Yoav Steinberg <yoav@redislabs.com>
      47c51d0c
  29. 08 Feb, 2022 1 commit
    • Make INFO command variadic (#6891) · 2e1bc942
      Wen Hui authored
      
      
      This is an enhancement for the INFO command. Previously INFO supported only one argument
      for a specific info section; if a user wanted more categories of information, they had to either perform
      INFO all / default or call INFO multiple times.
      
      **Description of the feature**
      
      The goal of adding this feature is to let the user retrieve multiple categories via the INFO
      command, and still avoid emitting the same section twice.
      
      A use case for this is Redis Sentinel, which periodically calls the INFO command to refresh
      info from monitored masters/slaves; only the Server and Replication categories are used for
      parsing the information. If the INFO command can return just the categories that the client side
      needs, it can save a lot of client-side parsing time as well as network bandwidth.
      
      **Implementation**
      To share code between redis, sentinel, and other users of INFO (DEBUG and modules),
      we have a new `genInfoSectionDict` function that returns a dict and some boolean flags
      (e.g. `all`) to the caller (built from user input).
      Sentinel is later purging unwanted sections from that, and then it is forwarded to the info `genRedisInfoString`.
      
      **Usage Examples**
      INFO Server Replication   
      INFO CPU Memory
      INFO default commandstats
      Co-authored-by: Oran Agra <oran@redislabs.com>
      2e1bc942
  30. 20 Jan, 2022 1 commit
  31. 05 Jan, 2022 1 commit
    • Show the elapsed time of single test and speed up some tests (#10058) · 4d3c4cfa
      sundb authored
      Following #10038.
      
      This PR introduces two changes.
      1. Show the elapsed time of a single test in the test output, in order to have a more
      detailed understanding of the changes in test run time.
      
      2. Speed up two tests related to the `key-load-delay` configuration.
      Other tests do not seem to be affected by #10003.
      4d3c4cfa
  32. 03 Jan, 2022 2 commits
    • Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788) · 87789fae
      chenyang8094 authored
      
      
      Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
      Introducing a folder with multiple AOF files tracked by a manifest file.
      
      The main issues with the original AOFRW mechanism are:
      * buffering of commands that are processed during rewrite (consuming a lot of RAM)
      * freezes of the main process when the AOFRW completes to drain the remaining part of the buffer and fsync it.
      * double disk IO for the data that arrives during AOFRW (had to be written to both the old and new AOF files)
      
      The main modifications of this PR:
      1. Remove the AOF rewrite buffer and related code.
      2. Divide the AOF into multiple files. They are classified as two types: one is the `BASE` type,
        which represents the full amount of data (maybe AOF or RDB format) after each AOFRW; there is only
        one `BASE` file at most. The second is the `INCR` type, of which there may be more than one; they
        represent the incremental commands since the last AOFRW.
      3. Use an AOF manifest file to record and manage these AOF files mentioned above (illustrated after this list).
      4. The original configuration of `appendfilename` will be the base part of the new file name, for example:
        `appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`
      5. Add manifest-related TCL tests, and modified some existing tests that depend on the `appendfilename`
      6. Remove the `aof_rewrite_buffer_length` field in info.
      7. Add `aof-disable-auto-gc` configuration. By default we're automatically deleting HISTORY type AOFs.
        It also gives users the opportunity to preserve the history AOFs. just for testing use now.
      8. Add AOFRW limiting measure. When the AOFRW failures reaches the threshold (3 times now),
        we will delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it will be
        delayed by 2 minutes. The next is 4, 8, 16, the maximum delay is 60 minutes (1 hour). During the limit
        period, we can still use the 'bgrewriteaof' command to execute AOFRW immediately.
      9. Support upgrading (loading) data from old version redis.
      10. Add `appenddirname` configuration, as the directory name of the append only files. All AOF files and
        manifest file will be placed in this directory.
      11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise redis will exit even if
        `aof-load-truncated` is enabled.
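      
      As a hedged illustration of item 3, a manifest for the file names from item 4 might contain
      lines like the following (assuming a `file <name> seq <n> type <b|i>` record layout; the
      exact contents depend on the rewrite history):
      ```
      file appendonly.aof.1.base.rdb seq 1 type b
      file appendonly.aof.2.incr.aof seq 2 type i
      ```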
      Co-authored-by: Oran Agra <oran@redislabs.com>
      87789fae
    • Sharded pubsub implementation (#8621) · 9f888576
      Harkrishn Patro authored
      
      
      This commit implements sharded pubsub based on shard channels.
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      9f888576
  33. 02 Jan, 2022 1 commit
    • Wait for replicas when shutting down (#9872) · 45a155bd
      Viktor Söderqvist authored
      
      
      To avoid data loss, this commit adds a grace period for lagging replicas to
      catch up the replication offset.
      
      Done:
      
      * Wait for replicas when shutdown is triggered by SIGTERM and SIGINT.
      
      * Wait for replicas when shutdown is triggered by the SHUTDOWN command. A new
        blocked client type BLOCKED_SHUTDOWN is introduced, allowing multiple clients
        to call SHUTDOWN in parallel.
        Note that they don't expect a response unless an error happens and shutdown is aborted.
      
      * Log warning for each replica lagging behind when finishing shutdown.
      
      * CLIENT_PAUSE_WRITE while waiting for replicas.
      
      * Configurable grace period 'shutdown-timeout' in seconds (default 10). The readiness check is sketched after this list.
      
      * New flags for the SHUTDOWN command:
      
          - NOW disables the grace period for lagging replicas.
      
          - FORCE ignores errors writing the RDB or AOF files which would normally
            prevent a shutdown.
      
          - ABORT cancels ongoing shutdown. Can't be combined with other flags.
      
      * New field in the output of the INFO command: 'shutdown_in_milliseconds'. The
        value is the remaining maximum time to wait for lagging replicas before
        finishing the shutdown. This field is present in the Server section **only**
        during shutdown.
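      
      A minimal sketch of that readiness check, with hypothetical names (the real code tracks the
      acked offset per replica client):
      ```
      #include <time.h>
      
      typedef struct { long long ack_offset; } replicaInfo;
      
      /* Shutdown may finish when every replica has acked the master's
       * offset, or once the 'shutdown-timeout' grace period has elapsed. */
      int readyToShutdown(long long master_offset, replicaInfo *r, int n,
                          time_t started, time_t timeout) {
          if (time(NULL) - started >= timeout) return 1; /* grace period over */
          for (int i = 0; i < n; i++)
              if (r[i].ack_offset < master_offset) return 0; /* still lagging */
          return 1;
      }
      ```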
      
      Not directly related:
      
      * When shutting down, if there is an AOF saving child, it is killed **even** if AOF
        is disabled. This can happen if BGREWRITEAOF is used when AOF is off.
      
      * Client pause now has end time and type (WRITE or ALL) per purpose. The
        different pause purposes are *CLIENT PAUSE command*, *failover* and
        *shutdown*. If clients are unpaused for one purpose, it doesn't affect client
        pause for other purposes. For example, the CLIENT UNPAUSE command doesn't
        affect client pause initiated by the failover or shutdown procedures. A completed
        failover or a failed shutdown doesn't unpause clients paused by the CLIENT
        PAUSE command.
      
      Notes:
      
      * DEBUG RESTART doesn't wait for replicas.
      
      * We already have a warning logged when a replica disconnects. This means that
        if any replica connection is lost during the shutdown, it is either logged as
        disconnected or as lagging at the time of exit.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      45a155bd
  34. 16 Dec, 2021 1 commit
    • Add FUNCTION FLUSH command to flush all functions (#9936) · 687210f1
      Meir Shpilraien (Spielrein) authored
      Added the `FUNCTION FLUSH` command. The new sub-command allows deleting all the functions.
      An optional `[SYNC|ASYNC]` argument can be given to control whether to flush the
      functions synchronously or asynchronously. If not given, the default flush mode is chosen by
      the `lazyfree-lazy-user-flush` configuration value.
      
      Add the missing `functions.tcl` test to the list of tests that are executed in test_helper.tcl,
      and call FUNCTION FLUSH in between servers in external mode
      687210f1
  35. 13 Dec, 2021 1 commit
    • Fix possible int overflow when hashing an sds. (#9916) · c7dc17fc
      yoav-steinberg authored
      This caused a crash when adding elements larger than 2GB to a set (same goes for hash keys). See #8455.
      
      Details:
      * The fix makes the dict hash functions receive a `size_t` instead of an `int`. In practice the dict hash functions
        call siphash which receives a `size_t` and the callers to the hash function pass a `size_t` to it so the fix is trivial.
      * The issue was recreated by attempting to add a >2GB value to a set. Appropriate tests were added where I create
        a set with large elements and check basic functionality on it (SADD, SCARD, SPOP, etc...).
      * When I added the tests I also refactored a bit all the tests code which is run under the `--large-memory` flag.
        This removed code duplication for the test framework's `write_big_bulk` and related code and also takes
        care of not allocating the test framework's helper huge string used by these tests when not run under `--large-memory`.
      * I also added the _violations.tcl_ unit tests to be part of the entire test suite and cleaned up non-relevant list related
        tests that were in there. This was done in this PR because most of the _violations_ tests are "large memory" tests.
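      
      A minimal sketch of the signature change at the heart of the fix (seed handling elided;
      siphash already takes a size_t length):
      ```
      #include <stddef.h>
      #include <stdint.h>
      
      uint64_t siphash(const uint8_t *in, const size_t inlen, const uint8_t *k);
      static uint8_t dict_hash_function_seed[16];
      
      /* The length parameter used to be an int, so sds values over 2GB
       * overflowed it on the way into siphash; now it is size_t end to end. */
      uint64_t dictGenHashFunction(const void *key, size_t len) { /* was: int len */
          return siphash(key, len, dict_hash_function_seed);
      }
      ```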
      c7dc17fc
  36. 10 Nov, 2021 1 commit
    • Increase test timeout in valgrind runs (#9767) · 978eadba
      Oran Agra authored
      We saw some tests sporadically time out on valgrind (namely the ones
      from #9323).
      
      Increasing valgrind timeout from 20 mins to 40 mins in CI.
      And fixing an outdated help message.
      978eadba
  37. 03 Nov, 2021 1 commit
    • Add support for list type to store elements larger than 4GB (#9357) · f27083a4
      perryitay authored
      
      
      Redis lists are stored in quicklist, which is currently a linked list of ziplists.
      Ziplists are limited to storing elements no larger than 4GB, so when bigger
      items are added they're getting truncated.
      This PR changes quicklists so that they're capable of storing large items
      in quicklist nodes that are plain string buffers rather than ziplist.
      
      As part of the PR there were a few other changes in redis:
      1. new DEBUG sub-commands:
         - QUICKLIST-PACKED-THRESHOLD - set the threshold for the node type to
           be plain or ziplist. Default (1GB). The container decision is sketched after this list.
         - QUICKLIST <key> - Shows low level info about the quicklist encoding of <key>
      2. rdb format change:
         - A new type was added - RDB_TYPE_LIST_QUICKLIST_2.
         - container type (packed / plain) was added to the beginning of the rdb object
           (before the actual node list).
      3. testing:
         - Tests that require over 100MB will be skipped by default. A new flag was
           added to 'runtest' to run the large memory tests (not used by default).
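      
      A minimal sketch of that container decision, assuming the container constants from
      quicklist.h (the function name is illustrative):
      ```
      #include <stddef.h>
      
      #define QUICKLIST_NODE_CONTAINER_PLAIN  1
      #define QUICKLIST_NODE_CONTAINER_PACKED 2
      
      /* Elements at or above the packed threshold go into a plain node
       * (a raw string buffer); everything else stays in a packed node. */
      int containerTypeFor(size_t element_size, size_t packed_threshold) {
          return (element_size >= packed_threshold) ?
              QUICKLIST_NODE_CONTAINER_PLAIN : QUICKLIST_NODE_CONTAINER_PACKED;
      }
      ```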
      Co-authored-by: sundb <sundbcn@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      f27083a4
  38. 25 Oct, 2021 1 commit
    • Replication backlog and replicas use one global shared replication buffer (#9166) · c1718f9d
      Wang Yuan authored
      ## Background
      For a redis master, each replica uses its own copy of the replication buffer. That is a big waste of memory:
      more replicas, more waste, and allocating/freeing memory for every reply list also costs much.
      If we set client-output-buffer-limit small and write traffic is heavy, the master may disconnect from
      replicas and can't finish synchronization with them. If we set client-output-buffer-limit big, the
      master may be OOM when there are many replicas that separately keep much memory.
      Because the replication buffers of the different replica clients hold the same content, one simple idea is that
      all replicas use only one replication buffer, which will effectively save memory.
      
      Since replication backlog content is the same as replicas' output buffer, now we
      can discard replication backlog memory and use global shared replication buffer
      to implement replication backlog mechanism.
      
      ## Implementation
      I create one global "replication buffer" which contains content of replication stream.
      The structure of "replication buffer" is similar to the reply list that exists in every client.
      But the node of list is `replBufBlock`, which has `id, repl_offset, refcount` fields.
      ```c
      /* Replication buffer blocks is the list of replBufBlock.
       *
       * +--------------+       +--------------+       +--------------+
       * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
       * +--------------+       +--------------+       +--------------+
       *      |                                            /       \
       *      |                                           /         \
       *      |                                          /           \
       *  Repl Backlog                               Replica_A     Replica_B
       * 
       * Each replica or replication backlog increments only the refcount of the
       * 'ref_repl_buf_node' which it points to. So when replica walks to the next
       * node, it should first increase the next node's refcount, and when we trim
       * the replication buffer nodes, we remove node always from the head node which
       * refcount is 0. If the refcount of the head node is not 0, we must stop
       * trimming and never iterate the next node. */
      
      /* Similar with 'clientReplyBlock', it is used for shared buffers between
       * all replica clients and replication backlog. */
      typedef struct replBufBlock {
          int refcount;           /* Number of replicas or repl backlog using. */
          long long id;           /* The unique incremental number. */
          long long repl_offset;  /* Start replication offset of the block. */
          size_t size, used;
          char buf[];
      } replBufBlock;
      ```
      So now when we feed replication stream into replication backlog and all replicas, we only need
      to feed stream into replication buffer `feedReplicationBuffer`. In this function, we set some fields of
      replication backlog and replicas to references of the global replication buffer blocks. And we also
      need to check replicas' output buffer limit to free if exceeding `client-output-buffer-limit`, and trim
      replication backlog if exceeding `repl-backlog-size`.
      
      When sending reply to replicas, we also need to iterate replication buffer blocks and send its
      content, when totally sending one block for replica, we decrease current node count and
      increase the next current node count, and then free the block which reference is 0 from the
      head of replication buffer blocks.
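      
      A minimal sketch of that incremental head trimming, assuming the adlist API and the
      replBufBlock struct quoted above (the function name is illustrative):
      ```c
      /* Free unreferenced blocks from the head of the global replication
       * buffer, stopping at the first block a reader still points to. */
      void incrementalTrimSketch(list *repl_buffer_blocks, int max_blocks) {
          while (max_blocks-- > 0) {
              listNode *head = listFirst(repl_buffer_blocks);
              if (head == NULL) break;
              replBufBlock *o = listNodeValue(head);
              if (o->refcount != 0) break; /* still referenced: stop trimming */
              listDelNode(repl_buffer_blocks, head);
          }
      }
      ```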
      
      Since we now use a linked list to manage the replication backlog, it may cost much time to iterate
      all the linked list nodes to find the corresponding replication buffer node. So we create a rax tree to
      index some of the nodes, but to avoid the rax tree occupying too much memory, we record
      one node per 64 nodes in the index.
      
      Currently, to make partial resynchronization possible as much as we can, we always let the replication
      backlog be the last reference to the replication buffer blocks. The backlog size may exceed our setting
      if slow replicas reference vast replication buffer blocks, but this method doesn't increase
      memory usage since they share the replication buffer. To avoid freezing the server when freeing unreferenced
      replication buffer blocks while trimming the backlog for exceeding the backlog size setting,
      we trim the backlog incrementally (free 64 blocks per call now), and make it faster in
      `beforeSleep` (free 640 blocks).
      
      ### Other changes
      - `mem_total_replication_buffers`: we add this field in INFO command, it means the total
        memory of replication buffers used.
      - `mem_clients_slaves`: even when a replica is slow to replicate and its output buffer memory
        is not 0, this field may still be 0: since the replication backlog and the replicas share one global replication
        buffer, only if the replication buffer memory is more than the repl backlog setting size do we consider
        the excess as the replicas' memory. Otherwise, we consider the replication buffer memory the consumption
        of the repl backlog.
      - Key eviction
        Since all replicas and the replication backlog share the global replication buffer, we count only the
        part exceeding the backlog size as the extra separate consumption of the replicas.
        Because we trim the backlog incrementally in the background, the backlog size may exceed our
        setting if slow replicas that reference vast replication buffer blocks disconnect.
        To avoid a massive eviction loop, we don't count the delayed-freed replication backlog into
        used memory even if there are no replicas, i.e. we also regard this memory as the replicas' memory.
      - `client-output-buffer-limit` check for replica clients
        It doesn't make sense to set the replica clients output buffer limit lower than the repl-backlog-size
        config (partial sync will succeed and then replica will get disconnected). Such a configuration is
        ignored (the size of repl-backlog-size will be used). This doesn't have memory consumption
        implications since the replica client will share the backlog buffers memory.
      - Drop replication backlog after loading data if needed
        We always create replication backlog if server is a master, we need it because we put DELs in
        it when loading expired keys in RDB, but if RDB doesn't have replication info or there is no rdb,
        it is not possible to support partial resynchronization, to avoid extra memory of replication backlog,
        we drop it.
      - Multi IO threads
        Since all replicas and the replication backlog use the global replication buffer, if I/O threads are enabled,
        then to guarantee thread-safe data access, the main thread must handle sending the output buffer
        to all replicas. Before, other IO threads could handle sending the output buffers of all replicas.
      
      ## Other optimizations
      This solution resolves some other problems:
      - When replicas disconnect from the master because they exceed the output buffer limit, releasing the output
        buffer of the replicas may freeze the server if we set a big `client-output-buffer-limit` for replicas; now
        it doesn't cause freezing.
      - This implementation may mitigate the reply list copy cost (which also freezes the server) when one replica
        has a huge reply buffer and another replica copies that buffer for a full synchronization. Now we just copy
        the reference info, which is very light.
      - If we set the replication backlog size big, it also may cost much time to copy the replication backlog into
        a replica's output buffer. This commit eliminates that problem.
      - Resizing the replication backlog no longer empties the current replication backlog content.
      c1718f9d
  39. 19 Oct, 2021 1 commit
    • Release clients blocked on module commands in cluster resharding and down state (#9483) · 4962c552
      qetu3790 authored
      
      
      Prevent clients from being blocked forever in cluster when they block with their own module command
      and the hash slot is migrated to another master at the same time.
      These will get a redirection message when unblocked.
      Also, release clients blocked on module commands when cluster is down (same as other blocked clients)
      
      This commit adds basic cluster tests to the main (non-cluster) redis test infra.
      This was done because the cluster test infra can't handle some common test features,
      but most importantly because we only build the test modules with the non-cluster test suite.
      
      Note that rather than really supporting cluster operations in the test infra, the support was added (as duplicated code)
      in two files, one for module tests and one for non-module tests; maybe in the future we'll refactor that.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      4962c552