1. 16 Dec, 2021 1 commit
  2. 15 Dec, 2021 2 commits
    • guybe7's avatar
      Auto-generate the command table from JSON files (#9656) · 86781600
      guybe7 authored
      Delete the hardcoded command table and replace it with an auto-generated table, based
      on a JSON file that describes the commands (each command must have a JSON file).
      
      These JSON files are the SSOT of everything there is to know about Redis commands,
      and all of it is fully reflected in COMMAND INFO.
      
      These JSON files are used to generate commands.c (using a python script), which is then
      committed to the repo and compiled.
      
      The purpose is:
      * Clients and proxies will be able to get much more info from Redis, instead of relying on hard-coded logic.
      * Drop the dependency of Redis users on the commands.json in redis-doc.
      * delete help.h and have redis-cli learn everything it needs to know just by issuing COMMAND (will be
        done in a separate PR)
      * redis.io should stop using commands.json and learn everything from Redis (ultimately one of the release
        artifacts should be a large JSON, containing all the information about all of the commands, which will be
        generated from COMMAND's reply)
      * The byproducts of this are:
        * module commands will be able to provide that info and possibly become more of a first-class citizen
        * in theory, one may be able to generate a Redis client library for a strictly typed language by using this info.
      
      ### Interface changes
      
      #### COMMAND INFO's reply change (and arg-less COMMAND)
      
      Before this commit the reply at index 7 contained the key-specs list
      and reply at index 8 contained the sub-commands list (Both unreleased).
      Now, reply at index 7 is a map of:
      - summary - short command description
      - since - debut version
      - group - command group
      - complexity - complexity string
      - doc-flags - flags used for documentation (e.g. "deprecated")
      - deprecated-since - if deprecated, from which version?
      - replaced-by - if deprecated, which command replaced it?
      - history - a list of (version, what-changed) tuples
      - hints - a list of strings, meant to provide hints for clients/proxies. see https://github.com/redis/redis/issues/9876
      - arguments - an array of arguments. each element is a map, with the possibility of nesting (sub-arguments)
      - key-specs - an array of key specs (already in unstable, just changed location)
      - subcommands - a list of sub-commands (already in unstable, just changed location)
      - reply-schema - will be added in the future (see https://github.com/redis/redis/issues/9845)
      
      More details on these can be found in https://github.com/redis/redis-doc/pull/1697

      Only the first three fields are mandatory.
      
      #### API changes (unreleased API obviously)
      
      The following APIs now take a RedisModuleCommand opaque pointer instead of looking up the command by name:
      
      - RM_CreateSubcommand
      - RM_AddCommandKeySpec
      - RM_SetCommandKeySpecBeginSearchIndex
      - RM_SetCommandKeySpecBeginSearchKeyword
      - RM_SetCommandKeySpecFindKeysRange
      - RM_SetCommandKeySpecFindKeysKeynum
      
      We did not yet add a module API for providing additional information about module commands because
      we couldn't agree on what the API should look like, see https://github.com/redis/redis/issues/9944
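
      For illustration, a minimal module sketch of the new calling convention, written against the module API as it
      eventually shipped (RedisModule_GetCommand / RedisModule_CreateSubcommand); exact signatures may differ from this
      unreleased snapshot, and the key-spec setters listed above take the same opaque handle (their signatures are omitted here):

      ```c
      #include "redismodule.h"

      static int PingSubCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          return RedisModule_ReplyWithSimpleString(ctx, "PONG");
      }

      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "example", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;

          /* Create a container command with no handler of its own... */
          if (RedisModule_CreateCommand(ctx, "example.parent", NULL, "", 0, 0, 0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;

          /* ...then fetch its opaque handle and attach a subcommand to it.
           * The key-spec APIs listed above take this same handle. */
          RedisModuleCommand *parent = RedisModule_GetCommand(ctx, "example.parent");
          if (parent == NULL) return REDISMODULE_ERR;
          if (RedisModule_CreateSubcommand(parent, "ping", PingSubCmd, "", 0, 0, 0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          return REDISMODULE_OK;
      }
      ```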
      
      ### Somewhat related changes
      1. Literals should be in uppercase while placeholders are in lowercase. Now all the GEO* commands
         will be documented with M|KM|FT|MI and will accept both lowercase and uppercase
      
      ### Unrelated changes
      1. Bugfix: no_mandatory_keys was absent in COMMAND's reply
      2. expose CMD_MODULE as "module" via COMMAND
      3. have a dedicated uint64 for ACL categories (instead of having them in the same uint64 as command flags)
      Co-authored-by: Itamar Haber <itamar@garantiadata.com>
      86781600
  3. 08 Dec, 2021 1 commit
    • Binbin's avatar
      Fix SENTINEL subcommands' arity (#9909) · a7726cdf
      Binbin authored
      For `SENTINEL SET`, we can use it in these ways:
      1. SENTINEL SET mymaster quorum 3
      2. SENTINEL SET mymaster quorum 5 parallel-syncs 1
      
      For `SENTINEL SIMULATE-FAILURE`, although it is only used for testing:
      1. SENTINEL SIMULATE-FAILURE CRASH-AFTER-ELECTION
      2. SENTINEL SIMULATE-FAILURE CRASH-AFTER-ELECTION CRASH-AFTER-PROMOTION
      a7726cdf
  4. 07 Dec, 2021 1 commit
    • yoav-steinberg's avatar
      Don't write oom score adj to proc unless we're managing it. (#9904) · 1736fa4d
      yoav-steinberg authored
      When disabling Redis oom-score-adj management we restore the
      base value read before enabling oom-score-adj management.
      
      This fixes an issue introduced in #9748 where updating
      `oom-score-adj-values` while `oom-score-adj` was set to `no`
      would write the base oom score adj value read on startup to `/proc`.
      This is a bug since while `oom-score-adj` is disabled we should
      never write to proc and let external processes manage it.
      
      Added appropriate tests.
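
      A minimal standalone sketch of the rule this fix enforces (names are illustrative, not the Redis implementation):

      ```c
      #include <stdio.h>
      #include <stdbool.h>

      static int write_oom_score_adj(int value) {
          FILE *fp = fopen("/proc/self/oom_score_adj", "w");
          if (!fp) return -1;
          int ok = fprintf(fp, "%d", value) > 0;
          fclose(fp);
          return ok ? 0 : -1;
      }

      /* Called whenever oom-score-adj-values changes. */
      static int apply_oom_config(bool oom_score_adj_enabled, int adj_value) {
          /* Disabled: never touch /proc, let external processes manage it. */
          if (!oom_score_adj_enabled) return 0;
          return write_oom_score_adj(adj_value);
      }
      ```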
      1736fa4d
  5. 02 Dec, 2021 1 commit
    • meir@redislabs.com's avatar
      Redis Functions - Added redis function unit and Lua engine · cbd46317
      meir@redislabs.com authored
      The Redis function unit is located in functions.c
      and contains the Redis Functions implementation:
      1. FUNCTION commands:
        * FUNCTION CREATE
        * FCALL
        * FCALL_RO
        * FUNCTION DELETE
        * FUNCTION KILL
        * FUNCTION INFO
      2. Register engine
      
      In addition, this commit introduces the first engine
      that uses the Redis Functions capabilities, the
      Lua engine.
      cbd46317
  6. 01 Dec, 2021 3 commits
    • meir@redislabs.com's avatar
      Redis Functions - Introduce script unit. · fc731bc6
      meir@redislabs.com authored
      The script unit is a new unit located in script.c.
      Its purpose is to provide an API for functions (and eval)
      to interact with Redis. Interaction mostly means
      executing commands, but also functionality like calling
      back into Redis during long scripts or checking whether the script was killed.

      The interaction is done using a scriptRunCtx object that
      needs to be created by the user and initialized using scriptPrepareForRun.
      
      Detailed list of functionalities exposed by the unit:
      1. Calling commands (including all the validation checks such as
         ACL, cluster, read-only run, ...)
      2. Set the RESP protocol version
      3. Set the replication method (AOF/REPLICATION/NONE)
      4. Call back into Redis on long-running scripts to allow Redis to reply
         to clients and perform script kill
      
      The commit introduces the new unit and uses it in the eval commands to
      interact with Redis.
      fc731bc6
    • meir@redislabs.com's avatar
      Redis Functions - Move Lua related variable into luaCtx struct · e0cd580a
      meir@redislabs.com authored
      The following variables were renamed:
      1. lua_caller 			-> script_caller
      2. lua_time_limit 		-> script_time_limit
      3. lua_timedout 		-> script_timedout
      4. lua_oom 			-> script_oom
      5. lua_disable_deny_script 	-> script_disable_deny_script
      6. in_eval			-> in_script
      
      The following variables were moved to lctx under eval.c:
      1.  lua
      2.  lua_client
      3.  lua_cur_script
      4.  lua_scripts
      5.  lua_scripts_mem
      6.  lua_replicate_commands
      7.  lua_write_dirty
      8.  lua_random_dirty
      9.  lua_multi_emitted
      10. lua_repl
      11. lua_kill
      12. lua_time_start
      13. lua_time_snapshot
      
      This commit has a low risk of introducing any issues; it
      just moves variables around without changing any logic.
      e0cd580a
    • yoav-steinberg's avatar
      Multiparam config set (#9748) · 0e5b813e
      yoav-steinberg authored
      We can now do: `config set maxmemory 10m repl-backlog-size 5m`
      
      ## Basic algorithm to support "transaction like" config sets:
      
      1. Backup all relevant current values (via get).
      2. Run "verify" and "set" on everything, if we fail run "restore".
      3. Run "apply" on everything (optional optimization: skip functions already run). If we fail run "restore".
      4. Return success.
      
      ### restore
      1. Run set on everything in backup. If we fail, log it and continue (this puts us in an undefined
         state but we decided it's better than the alternative of panicking). This indicates either a bug
         or some unsupported external state.
      2. Run apply on everything in backup (optimization: skip functions already run). If we fail, log
         it (see comment above).
      3. Return error.
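
      A condensed, self-contained C sketch of the flow described above; the `configParam` struct and its callbacks are
      hypothetical stand-ins, not the actual standardConfig interface in config.c:

      ```c
      #include <stddef.h>

      typedef struct configParam {
          const char *name;
          const char *new_value;
          const char *backup;                 /* value captured before any set */
          const char *(*get)(void);           /* read current value */
          int (*set)(const char *value);      /* verify + set, 0 on success */
          int (*apply)(void);                 /* idempotent apply, 0 on success */
      } configParam;

      static void restoreAll(configParam *p, size_t n) {
          /* Best effort: on failure we would only log (omitted here) and continue. */
          for (size_t i = 0; i < n; i++) p[i].set(p[i].backup);
          for (size_t i = 0; i < n; i++) p[i].apply();
      }

      static int configSetMulti(configParam *p, size_t n) {
          for (size_t i = 0; i < n; i++) p[i].backup = p[i].get();          /* 1. backup */
          for (size_t i = 0; i < n; i++) {
              if (p[i].set(p[i].new_value) != 0) { restoreAll(p, n); return -1; }   /* 2. verify + set */
          }
          for (size_t i = 0; i < n; i++) {
              if (p[i].apply() != 0) { restoreAll(p, n); return -1; }               /* 3. apply */
          }
          return 0;                                                                 /* 4. success */
      }
      ```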
      
      ## Implementation/design changes:
      * Apply functions are idempotent (they have no effect if run more than once for the same config).
      * There is no indication in set functions of whether we're reading the config file or running from the
         `CONFIG SET` command (the `update` argument was removed).
      * A set function should set some config variable and assume an (optional) apply function will use that
         later to apply it. If we know this setting can be safely applied immediately, can always be reverted,
         and doesn't depend on any other configuration, we can apply it immediately from within the set function
         (and not store the setting anywhere). This is the case for the `dir` config, for example, which has no
         apply function. No apply function is needed either when setting the variable in the `server` struct
         is all that needs to be done to make the configuration take effect. Note that the original concept of `update_fn`,
         which received the old and new values, was removed and replaced by the optional apply function.
      * Apply functions use settings written to the `server` struct and don't receive any inputs.
      * For the generic (non-special) configs, if there's no change I avoid calling the setter (possible
         optimization: avoid calling the apply function as well).
      * Passing the same config parameter more than once to `config set` will fail. You can't do `config set my-setting
         value1 my-setting value2`.
      
      Note that getting `save` to work here as before, in the context of conf file parsing, was a pain.
      The conf file supports an aggregate `save` definition, where each `save` line is added to the server's
      save params. This is unlike any other line in the config file where each line overwrites any previous
      configuration. Since we now support passing multiple save params in a single line (see top comments
      about `save` in https://github.com/redis/redis/pull/9644) we should deprecate the aggregate nature of
      this config line and perhaps reduce this ugly code in the future.
      0e5b813e
  7. 28 Nov, 2021 1 commit
    • sundb's avatar
      Fix COMMAND GETKEYS on LCS (#9852) · 4d870078
      sundb authored
      Remove lcsGetKeys to clean up the remaining STRALGO leftovers after #9733,
      i.e. it still used a getkeys_proc which was still looking for the KEYS or STRINGS arguments.
      4d870078
  8. 24 Nov, 2021 1 commit
    • sundb's avatar
      Replace ziplist with listpack in quicklist (#9740) · 45129059
      sundb authored
      
      
      Part three of implementing #8702, following #8887 and #9366.
      
      ## Description of the feature
      1. Replace the ziplist container of quicklist with listpack.
      2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.
      
      ## Interface changes
      1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
      2. Replace `debug ziplist` command with `debug listpack`.
      
      ## Internal changes
      1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
      2. Add `lpRepr` to print info of a listpack, used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
      3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357).
          It represents that a quicklistNode is a packed node, as opposed to a plain node.
      4. Remove the `createZiplistObject` method, which is never used.
      5. Calculate listpack entry size using overhead overestimation in `quicklistAllowInsert`.
          We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
      
      ## Improvements
      1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
      2. Optimize `quicklistAppendPlainNode` to avoid copying data with memcpy.
      
      ## Bugfix
      1. Fix a crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.
      
      ## Test
      1. Add unittest for `lpMerge`.
      2. Modify the old quicklist ziplist corrupt dump test.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      45129059
  9. 23 Nov, 2021 1 commit
    • guybe7's avatar
      QUIT is a command, HOST: and POST are not (#9798) · b161cff5
      guybe7 authored
      Some people complain that QUIT is missing from the help/command table:
      it does not appear in the COMMAND command, command stats, ACL, etc.,
      and instead there's a hack in processCommand with a comment that looks outdated.
      Note that it is [documented](https://redis.io/commands/quit)
      
      At the same time, HOST: and POST are there in the command table although these are not real commands.
      They would appear in the COMMAND command, and even in commandstats.
      
      Other changes:
      1. Initialize the static logged_time variable in securityWarningCommand
      2. Add the `no-auth` flag to RESET so it can always be executed.
      b161cff5
  10. 18 Nov, 2021 2 commits
    • Eduardo Semprebon's avatar
      Reject PING with MASTERDOWN when replica-serve-stale-data=no (#9757) · 1a255e31
      Eduardo Semprebon authored
      Currently PING returns a different status when the server is not serving data,
      for example when `LOADING` or `BUSY`,
      but the same was not true for `MASTERDOWN`.
      This commit makes PING reply with `MASTERDOWN` when
      replica-serve-stale-data=no and the link with the master is down.
      1a255e31
    • guybe7's avatar
      Obliterate STRALGO! add LCS (which only works on keys) (#9799) · af748988
      guybe7 authored
      Drop the STRALGO command; now LCS is a command of its own and it only works on keys (not input strings).
      The motivation is that STRALGO's syntax was really messed up:
      - it assumes all (future) string algorithms will take similar arguments
      - it mixes a command that takes keys and one that doesn't in the same command
      - it makes it nearly impossible to expose the right key spec in COMMAND INFO (an issue for cluster clients)
      - it is hard for cluster clients to determine the key names (firstkey, lastkey, etc.)
      - it is hard for ACL / flags (is it a read command?)
      
      This is a breaking change.
      af748988
  11. 16 Nov, 2021 2 commits
  12. 07 Nov, 2021 1 commit
    • yoav-steinberg's avatar
      Refactor config.c for generic setter interface (#9644) · 79ac5756
      yoav-steinberg authored
      
      
      This refactors the code so that all `CONFIG SET`s and conf file loading arguments go through
      the generic config handling interface.
      
      Refactoring changes:
      - All config params go through the `standardConfig` interface (some stuff which
        is only related to the config file and not the `CONFIG` command still has special
        handling for rewrite/config file parsing; `loadmodule`, for example).
      - Added `MULTI_ARG_CONFIG` flag for configs to signify they receive a variable
        number of arguments instead of a single argument. This is used to break up space
        separated arguments to `CONFIG SET` so the generic setter interface can pass
        multiple arguments to the setter function. When parsing the config file we also break
        up anything after the config name into multiple arguments to the setter function.
      
      Interface changes:
      - A side effect of the above interface is that the `bind` argument in the config file can
        be empty (no argument at all); this is treated the same as passing a single empty
        string argument (the way `save` already worked).
      - Support rewrite and setting `watchdog-period` from config file (was only supported
        by the CONFIG command till now).
      - Another side effect is that the `save T X` config argument now supports multiple
        Time-Changes pairs in a single line like its `CONFIG SET` counterpart. So in the
        config file you can either do:
        ```
        save 3600 1
        save 600 10
        ```
        or do
        ```
        save 3600 1 600 10
        ```
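
      For illustration, a simplified standalone sketch (not the actual config.c code) of how a setter might consume
      multiple `<seconds> <changes>` pairs passed on one line:

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      /* Accepts e.g. argv = {"3600", "1", "600", "10"}; pairs only. */
      static int parseSaveParams(int argc, char **argv) {
          if (argc % 2 != 0) return -1;
          for (int i = 0; i < argc; i += 2) {
              long long seconds = strtoll(argv[i], NULL, 10);
              long long changes = strtoll(argv[i + 1], NULL, 10);
              printf("save point: %lld seconds, %lld changes\n", seconds, changes);
          }
          return 0;
      }
      ```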
      Co-authored-by: Bjorn Svensson <bjorn.a.svensson@est.tech>
      79ac5756
  13. 04 Nov, 2021 1 commit
    • Eduardo Semprebon's avatar
      Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323) · 91d0c758
      Eduardo Semprebon authored
      
      
      For diskless replication in swapdb mode, considering we already spend replica memory
      on a backup of the current db to restore in case of failure, we can gain the following benefits
      by instead swapping the database only if we succeeded in transferring the db from the master:
      
      - Avoid `LOADING` response during failed and successful synchronization for cases where the
        replica is already up and running with data.
      - Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
        time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
      - This could be implemented also for disk replication with similar benefits if consumers are willing
        to spend the extra memory usage.
      
      General notes:
      - The concept of `backupDb` becomes `tempDb` for clarity.
      - Async loading mode will only kick in if the replica is syncing from a master that has the same
        repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
      - New property in INFO: `async_loading` to differentiate from the blocking loading
      - Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
        and the tempDb that is passed around.
      - Because this affects replicas only, we assume that if they are not read-only and accept write commands
        during replication, those writes are lost after SYNC the same way as before, but we're still denying CONFIG SET
        here anyway to avoid complications.
      
      Considerations for review:
      - We have many cases where the server.loading flag is used, and even though I tried my best, there may
        be cases where async_loading should be checked as well and cases where it shouldn't (this would require
        a very good understanding of the whole code).
      - Several places that had different behavior depending on the loading flag were actually meant to just
        handle commands coming from the AOF client differently than ones coming from real clients; these were
        changed to check CLIENT_ID_AOF instead.
      
      **Additional for Release Notes**
      - Bugfix: server.dirty was not incremented for any kind of diskless replication; as a result it wouldn't
        contribute to triggering the next database SAVE.
      - New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
      - Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
        Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
        ABORTED and COMPLETED.
      - New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
        to allow modules to declare they support the diskless replication with async loading (when absent, we fall
        back to disk-based loading).
      Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      91d0c758
  14. 03 Nov, 2021 1 commit
    • guybe7's avatar
      Fix COMMAND GETKEYS on EVAL without keys (#9733) · f11a2d4d
      guybe7 authored
      Add new no-mandatory-keys flag to support COMMAND GETKEYS of commands
      which have no mandatory keys.
      
      In the past we would have got this error:
      ```
      127.0.0.1:6379> command getkeys eval "return 1" 0
      (error) ERR Invalid arguments specified for command
      ```
      f11a2d4d
  15. 02 Nov, 2021 1 commit
    • zhaozhao.zz's avatar
      rebuild replication backlog index when master restart (#9720) · d08f0552
      zhaozhao.zz authored
      After PR #9166, the replication backlog is not a real block of memory; it just contains a
      reference pointing to a replication buffer block and the block index (to accelerate
      searching for an offset during partial sync). So we need to update both the replication buffer
      block's offset and the replication backlog block index's offset when the master restarts from RDB,
      since `server.master_repl_offset` is changed.
      The implication of this bug was just a slow search, not a replication failure.
      d08f0552
  16. 27 Oct, 2021 1 commit
  17. 25 Oct, 2021 3 commits
    • Wang Yuan's avatar
      Add timestamp annotations in AOF (#9326) · 9ec3294b
      Wang Yuan authored
      Add timestamp annotation in AOF, one part of #9325.
      
      Enabled with the new `aof-timestamp-enabled` config option.
      
      The timestamp annotation format is "#TS:${timestamp}\r\n".
      "TS" is short for timestamp, and this short form saves extra bytes in the AOF.

      We can use the timestamp annotation for some special functions:
      - knowing the execution time of commands
      - restoring data to a specific point in time (by using redis-check-aof to truncate the file)
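
      A minimal sketch of emitting the annotation in the format above (buffer handling is illustrative only):

      ```c
      #include <stdio.h>
      #include <time.h>

      /* Writes "#TS:<unix-time>\r\n" into buf and returns the number of bytes. */
      static int aofTimestampAnnotation(char *buf, size_t buflen) {
          return snprintf(buf, buflen, "#TS:%lld\r\n", (long long)time(NULL));
      }
      ```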
      9ec3294b
    • Itamar Haber's avatar
      Removes admin acl category from CLIENT TRACKINGINFO (#9662) · 00362f2a
      Itamar Haber authored
      overlooked in #9504
      00362f2a
    • Wang Yuan's avatar
      Replication backlog and replicas use one global shared replication buffer (#9166) · c1718f9d
      Wang Yuan authored
      ## Background
      On a Redis master, each replica uses its own copy of the replication buffer. That is a big waste of memory:
      more replicas means more waste, and allocating/freeing memory for every reply list is also costly.
      If we set client-output-buffer-limit small and write traffic is heavy, the master may disconnect
      replicas and fail to finish synchronization with them. If we set client-output-buffer-limit big,
      the master may go OOM when there are many replicas that each keep a lot of memory.
      Because the replication buffers of different replica clients are the same, one simple idea is for
      all replicas to share one replication buffer, which effectively saves memory.
      
      Since replication backlog content is the same as replicas' output buffer, now we
      can discard replication backlog memory and use global shared replication buffer
      to implement replication backlog mechanism.
      
      ## Implementation
      I created one global "replication buffer" which contains the content of the replication stream.
      The structure of the "replication buffer" is similar to the reply list that exists in every client,
      but the list node is a `replBufBlock`, which has `id`, `repl_offset`, and `refcount` fields.
      ```c
      /* Replication buffer blocks is the list of replBufBlock.
       *
       * +--------------+       +--------------+       +--------------+
       * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
       * +--------------+       +--------------+       +--------------+
       *      |                                            /       \
       *      |                                           /         \
       *      |                                          /           \
       *  Repl Backlog                               Replica_A     Replica_B
       * 
       * Each replica or replication backlog increments only the refcount of the
       * 'ref_repl_buf_node' which it points to. So when replica walks to the next
       * node, it should first increase the next node's refcount, and when we trim
       * the replication buffer nodes, we remove node always from the head node which
       * refcount is 0. If the refcount of the head node is not 0, we must stop
       * trimming and never iterate the next node. */
      
      /* Similar with 'clientReplyBlock', it is used for shared buffers between
       * all replica clients and replication backlog. */
      typedef struct replBufBlock {
          int refcount;           /* Number of replicas or repl backlog using. */
          long long id;           /* The unique incremental number. */
          long long repl_offset;  /* Start replication offset of the block. */
          size_t size, used;
          char buf[];
      } replBufBlock;
      ```
      So now when we feed the replication stream to the replication backlog and all replicas, we only need
      to feed the stream into the replication buffer via `feedReplicationBuffer`. In this function, we set some fields of
      the replication backlog and replicas to references of the global replication buffer blocks. We also
      need to check the replicas' output buffer limits and free them if they exceed `client-output-buffer-limit`, and trim
      the replication backlog if it exceeds `repl-backlog-size`.

      When sending replies to replicas, we also need to iterate the replication buffer blocks and send their
      content; when one block has been fully sent to a replica, we decrease the current node's refcount and
      increase the next node's refcount, and then free blocks whose refcount is 0 from the
      head of the replication buffer blocks.

      Since we now use a linked list to manage the replication backlog, it may cost much time to iterate
      all the linked list nodes to find the corresponding replication buffer node. So we create a rax tree to
      store some nodes as an index, but to avoid the rax tree occupying too much memory, I record
      one node per 64 for the index.
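
      A simplified, standalone sketch of that sparse-index idea (Redis uses a rax keyed by offset; a plain array and
      linear scan are used here only to illustrate the one-in-64 sampling and lookup):

      ```c
      #include <stddef.h>

      typedef struct block {
          long long repl_offset;    /* start replication offset of the block */
          size_t used;
          struct block *next;
      } block;

      #define INDEX_STRIDE 64

      typedef struct blockIndex {
          block **slots;            /* every 64th block, in offset order */
          size_t count, capacity;
      } blockIndex;

      /* Called for each newly appended block; samples one block per stride. */
      static void maybeIndexBlock(blockIndex *idx, block *b, long long block_seq) {
          if (block_seq % INDEX_STRIDE != 0 || idx->count == idx->capacity) return;
          idx->slots[idx->count++] = b;
      }

      /* Find the closest indexed block at or before `offset`, then walk forward. */
      static block *findBlockByOffset(blockIndex *idx, block *head, long long offset) {
          block *start = head;
          for (size_t i = 0; i < idx->count; i++) {
              if (idx->slots[i]->repl_offset <= offset) start = idx->slots[i];
              else break;
          }
          for (block *b = start; b != NULL; b = b->next) {
              if (offset >= b->repl_offset && offset < b->repl_offset + (long long)b->used)
                  return b;
          }
          return NULL;
      }
      ```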
      
      Currently, to make partial resynchronization possible as much as we can, we always keep the replication
      backlog as the last reference to the replication buffer blocks. The backlog size may exceed our setting
      if slow replicas reference vast numbers of replication buffer blocks, but this method doesn't increase
      memory usage since they share the replication buffer. To avoid freezing the server when freeing unreferenced
      replication buffer blocks while trimming the backlog for exceeding the backlog size setting,
      we trim the backlog incrementally (freeing 64 blocks per call now), and make it faster in
      `beforeSleep` (freeing 640 blocks).
      
      ### Other changes
      - `mem_total_replication_buffers`: we add this field to the INFO command; it reports the total
        memory used by replication buffers.
      - `mem_clients_slaves`: now even if a replica is slow to replicate and its output buffer memory
        is not 0, this field may still be 0, since the replication backlog and replicas share one global replication
        buffer. Only if the replication buffer memory is more than the repl backlog setting size do we consider
        the excess as the replicas' memory; otherwise, we consider the replication buffer memory the consumption
        of the repl backlog.
      - Key eviction
        Since all replicas and the replication backlog share the global replication buffer, we consider only the
        part exceeding the backlog size as the extra, separate consumption of replicas.
        Because we trim the backlog incrementally in the background, the backlog size may exceed our
        setting if slow replicas that reference vast numbers of replication buffer blocks disconnect.
        To avoid a massive eviction loop, we don't count the delayed-freed replication backlog into
        used memory even if there are no replicas, i.e. we also regard this memory as the replicas' memory.
      - `client-output-buffer-limit` check for replica clients
        It doesn't make sense to set the replica clients' output buffer limit lower than the repl-backlog-size
        config (partial sync would succeed and then the replica would get disconnected). Such a configuration is
        ignored (the size of repl-backlog-size will be used). This doesn't have memory consumption
        implications since the replica client will share the backlog buffers memory.
      - Drop the replication backlog after loading data if needed
        We always create a replication backlog if the server is a master; we need it because we put DELs in
        it when loading expired keys from RDB. But if the RDB doesn't have replication info or there is no RDB,
        it is not possible to support partial resynchronization, so to avoid the extra memory of the replication
        backlog, we drop it.
      - Multi IO threads
        Since all replicas and the replication backlog use the global replication buffer, if I/O threads are enabled,
        to guarantee thread-safe data access, we must let the main thread handle sending the output buffer
        to all replicas. Before, other IO threads could handle sending the output buffers of all replicas.
      
      ## Other optimizations
      This solution resolves some other problems:
      - When replicas disconnect from the master because they exceed the output buffer limit, releasing the output
        buffers of the replicas may freeze the server if we set a big `client-output-buffer-limit` for replicas; now
        it doesn't cause freezing.
      - This implementation may mitigate the reply list copy cost (which also freezes the server) when one replica
        has a huge reply buffer and another replica copies that buffer for full synchronization. Now we just copy
        the reference info, which is very light.
      - If we set the replication backlog size big, it also may cost much time to copy the replication backlog into
        a replica's output buffer. This commit eliminates that problem.
      - Resizing the replication backlog doesn't empty the current replication backlog content.
      c1718f9d
  18. 24 Oct, 2021 2 commits
  19. 21 Oct, 2021 1 commit
  20. 20 Oct, 2021 2 commits
    • Oran Agra's avatar
      fix new cluster tests issues (#9657) · 7d6744c7
      Oran Agra authored
      Following #9483 the daily CI exposed a few problems.
      
      * The cluster creation code (which uses redis-cli) is complicated to test with TLS enabled.
        For now I'm just skipping these tests since the tests we run there don't really need that kind of coverage.
      * Cluster port binding failures:
        note that `find_available_port` already looks for a free cluster port,
        but the code in `wait_server_started` couldn't detect the failure of binding
        (the text it greps for wasn't found in the log).
      7d6744c7
    • guybe7's avatar
      Treat subcommands as commands (#9504) · 43e736f7
      guybe7 authored
      ## Intro
      
      The purpose is to allow having different flags/ACL categories for
      subcommands (Example: CONFIG GET is ok-loading but CONFIG SET isn't)
      
      We create a small command table for every command that has subcommands
      and each subcommand has its own flags, etc. (same as a "regular" command)
      
      This commit also unites the Redis and the Sentinel command tables
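
      For illustration, a simplified standalone sketch (not the real redisCommand layout) of the idea above: a container
      command whose table entry points at a small per-command table of subcommands, each with its own flags:

      ```c
      typedef struct cmdEntry {
          const char *name;
          const char *flags;                   /* e.g. "admin ok-stale" */
          const struct cmdEntry *subcommands;  /* NULL-name terminated, or NULL */
      } cmdEntry;

      static const cmdEntry configSubcommands[] = {
          {"get", "admin ok-loading ok-stale no-script", NULL},
          {"set", "admin ok-stale no-script", NULL},
          {NULL, NULL, NULL},
      };

      static const cmdEntry commandTable[] = {
          /* The container entry carries its own flags; each subcommand overrides them. */
          {"config", "admin ok-stale no-script", configSubcommands},
          {NULL, NULL, NULL},
      };
      ```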
      
      ## Affected commands
      
      CONFIG
      Used to have "admin ok-loading ok-stale no-script"
      Changes:
      1. Dropped "ok-loading" in all except GET (this doesn't change behavior since
      there were checks in the code doing that)
      
      XINFO
      Used to have "read-only random"
      Changes:
      1. Dropped "random" in all except CONSUMERS
      
      XGROUP
      Used to have "write use-memory"
      Changes:
      1. Dropped "use-memory" in all except CREATE and CREATECONSUMER
      
      COMMAND
      No changes.
      
      MEMORY
      Used to have "random read-only"
      Changes:
      1. Dropped "random" in PURGE and USAGE
      
      ACL
      Used to have "admin no-script ok-loading ok-stale"
      Changes:
      1. Dropped "admin" in WHOAMI, GENPASS, and CAT
      
      LATENCY
      No changes.
      
      MODULE
      No changes.
      
      SLOWLOG
      Used to have "admin random ok-loading ok-stale"
      Changes:
      1. Dropped "random" in RESET
      
      OBJECT
      Used to have "read-only random"
      Changes:
      1. Dropped "random" in ENCODING and REFCOUNT
      
      SCRIPT
      Used to have "may-replicate no-script"
      Changes:
      1. Dropped "may-replicate" in all except FLUSH and LOAD
      
      CLIENT
      Used to have "admin no-script random ok-loading ok-stale"
      Changes:
      1. Dropped "random" in all except INFO and LIST
      2. Dropped "admin" in ID, TRACKING, CACHING, GETREDIR, INFO, SETNAME, GETNAME, and REPLY
      
      STRALGO
      No changes.
      
      PUBSUB
      No changes.
      
      CLUSTER
      Changes:
      1. Dropped "admin in countkeysinslots, getkeysinslot, info, nodes, keyslot, myid, and slots
      
      SENTINEL
      No changes.
      
      (note that DEBUG also fits, but we decided not to convert it since it's for
      debugging and anyway undocumented)
      
      ## New sub-command
      This commit adds another element to the per-command output of COMMAND,
      describing the list of subcommands, if any (in the same structure as "regular" commands).
      Also, it adds a new subcommand:
      ```
      COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)]
      ```
      which returns a set of all commands (unless filtered), but excluding subcommands.
      
      ## Module API
      A new module API, RM_CreateSubcommand, was added, in order to allow
      module writers to define subcommands.
      
      ## ACL changes:
      1. Now, that each subcommand is actually a command, each has its own ACL id.
      2. The old mechanism of allowed_subcommands is redundant
      (blocking/allowing a subcommand is the same as blocking/allowing a regular command),
      but we had to keep it, to support the widespread usage of allowed_subcommands
      to block commands with certain args, that aren't subcommands (e.g. "-select +select|0").
      3. I have renamed allowed_subcommands to allowed_firstargs to emphasize the difference.
      4. Because subcommands are commands in ACL too, you can now use "-" to block subcommands
      (e.g. "+client -client|kill"), which wasn't possible in the past.
      5. It is also possible to use the allowed_firstargs mechanism with subcommands.
      For example: `+config -config|set +config|set|loglevel` will block all CONFIG SET except
      for setting the log level.
      6. All of the ACL changes above required some amount of refactoring.
      
      ## Misc
      1. There are two approaches: Either each subcommand has its own function or all
         subcommands use the same function, determining what to do according to argv[0].
         For now, I took the former approach only with CONFIG and COMMAND,
         while other commands use the latter approach (for a smaller blamelog diff).
      2. Deleted memoryGetKeys: It is no longer needed because MEMORY USAGE now uses the "range" key spec.
      3. Bugfix: GETNAME was missing from CLIENT's help message.
      4. Sentinel and Redis now use the same table, with the same function pointer.
         Some commands have a different implementation in Sentinel, so we redirect
         them (these are ROLE, PUBLISH, and INFO).
      5. Command stats now show the stats per subcommand (e.g. instead of stats just
         for "config" you will have stats for "config|set", "config|get", etc.)
      6. It is now possible to use COMMAND directly on subcommands:
         COMMAND INFO CONFIG|GET (The pipe syntax was inspired by ACL, and
         can be used in functions lookupCommandBySds and lookupCommandByCString)
      7. STRALGO is now a container command (has "help")
      
      ## Breaking changes:
      1. Command stats now show the stats per subcommand (see (5) above)
      43e736f7
  21. 19 Oct, 2021 1 commit
    • Bjorn Svensson's avatar
      Move config `unixsocketperm` to generic configs (#9607) · c9fabc2e
      Bjorn Svensson authored
      Since the size of mode_t is platform dependent, we handle the
      `unixsocketperm` configuration as a generic int type.
      mode_t is either an unsigned int or an unsigned short (macOS), and
      the range limits allow for a simple cast to a mode_t.
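
      A minimal sketch of that cast; the helper name is made up, and in Redis the value is range-checked by the config
      machinery before it is used:

      ```c
      #include <sys/stat.h>

      static int applyUnixSocketPerm(const char *path, int unixsocketperm) {
          if (unixsocketperm == 0) return 0;            /* 0: leave the socket permissions untouched */
          return chmod(path, (mode_t)unixsocketperm);   /* mode_t width differs per platform */
      }
      ```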
      c9fabc2e
  22. 13 Oct, 2021 1 commit
  23. 08 Oct, 2021 2 commits
  24. 07 Oct, 2021 1 commit
  25. 06 Oct, 2021 3 commits
    • Andy Pan's avatar
      Implement anetPipe() to combine creating pipe and setting flags (#9511) · 2391aefd
      Andy Pan authored
      
      
      Implement anetPipe() to combine creating a pipe and setting flags, and also reduce
      system calls by preferring pipe2() over pipe().

      Without anetPipe(), we have to call pipe() to create a pipe and then call some
      functions (like anetCloexec() and anetNonBlock()) of anet.c to set the flags respectively,
      which leads to some extra system calls; now we can leverage pipe2() to combine
      them and make the process of creating a pipe more self-contained in anetPipe().
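
      A minimal standalone sketch of the pattern (not the anet.c code): prefer pipe2() where available so the flags are
      set in one syscall, otherwise fall back to pipe() plus fcntl():

      ```c
      #define _GNU_SOURCE
      #include <unistd.h>
      #include <fcntl.h>

      static int createPipeNonBlockCloexec(int fds[2]) {
      #ifdef __linux__
          /* One syscall: create the pipe with both flags already set. */
          return pipe2(fds, O_CLOEXEC | O_NONBLOCK);
      #else
          if (pipe(fds) == -1) return -1;
          for (int i = 0; i < 2; i++) {
              int fl = fcntl(fds[i], F_GETFL);
              if (fl == -1 ||
                  fcntl(fds[i], F_SETFL, fl | O_NONBLOCK) == -1 ||
                  fcntl(fds[i], F_SETFD, FD_CLOEXEC) == -1) {
                  close(fds[0]);
                  close(fds[1]);
                  return -1;
              }
          }
          return 0;
      #endif
      }
      ```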
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      2391aefd
    • Meir Shpilraien (Spielrein)'s avatar
      Added module-acquire-GIL latency stats (#9608) · 4fb39b67
      Meir Shpilraien (Spielrein) authored
      The new value indicates how long Redis waits to
      acquire the GIL after sleep. This can help identify
      problems where a module performs some background
      operation for a long time (with the GIL held) and
      blocks the Redis main thread.
      4fb39b67
    • tzongw's avatar
      improve latency when a client is unblocked by module timer (#9593) · f5160ed0
      tzongw authored
      Scenario:
      1. A client blocks on the command `XREAD BLOCK 0 STREAMS mystream $`
      2. In a module, `XADD mystream * field value` is called via Lua from a timer callback
      3. The client receives the response after some latency, up to 100ms
      
      Reason:
      When `XADD` signals the key `mystream` as ready, `beforeSleep` in the next event loop calls
      `handleClientsBlockedOnKeys` to unblock the client and add pending data to write, but it does not
      actually install a write handler, so Redis then blocks in `aeApiPoll` for up to 100ms (given the `hz`
      config default of 10), and the pending data is only sent in yet another event loop by
      `handleClientsWithPendingWritesUsingThreads`.

      Calling `handleClientsBlockedOnKeys` before `handleClientsWithPendingWritesUsingThreads`
      in `beforeSleep` solves the problem.
      f5160ed0
  26. 04 Oct, 2021 1 commit
  27. 26 Sep, 2021 1 commit
    • yoav-steinberg's avatar
      Client eviction ci issues (#9549) · 66002530
      yoav-steinberg authored
      Fixing CI test issues introduced in #8687:
      - valgrind warnings in readQueryFromClient when the client was freed by processInputBuffer
      - adding DEBUG pause-cron so tests are not time dependent
      - skipping a test that depends on socket buffers / events not compatible with TLS
      - making sure the client got subscribed by not using a deferring client
      66002530
  28. 23 Sep, 2021 1 commit
    • yoav-steinberg's avatar
      Client eviction (#8687) · 2753429c
      yoav-steinberg authored
      
      
      ### Description
      A mechanism for disconnecting clients when the total memory used by all connected clients is above a
      configured limit. This prevents eviction or OOM caused by memory accumulated
      across all clients. It complements the `client-output-buffer-limit`
      mechanism by taking into account not only a single client and not only output buffers
      but rather all memory used by all clients.
      
      #### Design
      The general design is as following:
      * We track memory usage of each client, taking into account all memory used by the
        client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date
        after reading from the socket, after processing commands and after writing to the socket.
      * Based on the used memory we sort all clients into buckets. Each bucket contains all
        clients using up to twice the memory of the clients in the bucket below it. For example up
        to 1mb clients, up to 2mb clients, up to 4mb clients, ... (see the sketch after this list).
      * Before processing a command and before sleep we check if we're over the configured
        limit. If we are, we start disconnecting clients from larger buckets downwards until we're
        under the limit.
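
      A standalone sketch of the doubling-bucket idea from the design list above; the minimum bucket size and bucket
      count are assumptions for illustration, not the values used in Redis:

      ```c
      #include <stddef.h>

      #define MIN_BUCKET_BITS 16   /* first bucket: everything below 64KB (assumed for the sketch) */
      #define NUM_BUCKETS     19

      /* Returns the bucket index: essentially log2 of the client's memory usage,
       * clamped to the first and last buckets. */
      static int memUsageBucket(size_t mem) {
          int bucket = 0;
          size_t limit = (size_t)1 << MIN_BUCKET_BITS;
          while (mem > limit && bucket < NUM_BUCKETS - 1) {
              limit <<= 1;         /* next bucket covers up to 2x more memory */
              bucket++;
          }
          return bucket;
      }
      ```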
      
      #### Config
      `maxmemory-clients` max memory all clients are allowed to consume, above this threshold
      we disconnect clients.
      This config can either be set to 0 (meaning no limit), a size in bytes (possibly with MB/GB
      suffix), or as a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%`
      would mean 10% of `maxmemory`).
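
      For illustration, a minimal sketch of resolving the three forms (a hypothetical helper, not the config.c parser;
      byte-suffix parsing like `10mb` is omitted):

      ```c
      #include <stdlib.h>
      #include <string.h>

      /* Returns the effective limit in bytes; 0 means "no limit". */
      static unsigned long long resolveMaxmemoryClients(const char *value,
                                                        unsigned long long maxmemory) {
          size_t len = strlen(value);
          if (len && value[len - 1] == '%') {
              unsigned long long pct = strtoull(value, NULL, 10);   /* e.g. "10%" */
              return maxmemory * pct / 100;
          }
          return strtoull(value, NULL, 10);                         /* plain byte count */
      }
      ```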
      
      #### Important code changes
      * During development I encountered yet more situations where our io-threads access
        global vars and needed to fix them. I also had to handle keeping the clients sorted into the
        memory buckets (which are global) while their memory usage changes in the io-thread.
        To achieve this I decided to simplify how we check if we're in an io-thread and make it
        much more explicit. I removed the `CLIENT_PENDING_READ` flag used for checking
        if the client is in an io-thread (it wasn't used for anything else) and just used the global
        `io_threads_op` variable the same way to check during writes.
      * I optimized the cleanup of the client from the `clients_pending_read` list on client freeing.
        We now store a pointer in the `client` struct to this list so we don't need to search in it
        (`pending_read_list_node`).
      * Added `evicted_clients` stat to `INFO` command.
      * Added the `CLIENT NO-EVICT ON|OFF` sub command to exclude a specific client from the
        client eviction mechanism. Added a corresponding 'e' flag in the client info string.
      * Added `multi-mem` field in the client info string to show how much memory is used up
        by buffered multi commands.
      * Client `tot-mem` now accounts for buffered multi-commands, pubsub patterns and
        channels (partially), tracking prefixes (partially).
      * CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function so
        clients will be disconnected between processing different clients and not only before sleep.
        This new function can be used in the future for work we want to do outside the command
        processing loop but don't want to wait for all clients to be processed before we get to it.
        Specifically I wanted to handle output-buffer-limit related closing before we process client
        eviction in case the two race with each other.
      * Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction
        buckets.
      * Each client now holds a pointer to the client eviction memory usage bucket it belongs to
        and listNode to itself in that bucket for quick removal.
      * Global `io_threads_op` variable now can contain a `IO_THREADS_OP_IDLE` value
        indicating no io-threading is currently being executed.
      * In order to track the memory used by each client in real-time we can't rely on updating
        these stats in `clientsCron()` alone anymore. So now I call `updateClientMemUsage()`
        (used to be `clientsCronTrackClientsMemUsage()`) after command processing, after
        writing data to pubsub clients, after writing the output buffer and after reading from the
        socket (and maybe other places too). The function is written to be fast.
      * Clients are evicted if needed (with appropriate log line) in `beforeSleep()` and before
        processing a command (before performing oom-checks and key-eviction).
      * All clients memory usage buckets are grouped as follows:
        * All clients using less than 64k.
        * 64K..128K
        * 128K..256K
        * ...
        * 2G..4G
        * All clients using 4g and up.
      * Added client-eviction.tcl with a bunch of tests for the new mechanism.
      * Extended maxmemory.tcl to test the interaction between maxmemory and
        maxmemory-clients settings.
      * Added an option to flag a numeric configuration variable as a "percent"; this means that
        if we encounter a '%' after the number in the config file (or config set command) we
        consider it valid. Such a number is stored internally as a negative value. This way an
        integer value can be interpreted as either a percent (negative) or absolute value (positive).
        This is useful for example if some numeric configuration can optionally be set to a percentage
        of something else.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      2753429c