1. 20 Jan, 2022 1 commit
    • perryitay's avatar
      Adding module api for processing commands during busy jobs and allow flagging... · c4b78823
      perryitay authored
      
      Adding module api for processing commands during busy jobs and allow flagging the commands that should be handled at this status (#9963)
      
      Some modules might perform a long-running logic in different stages of Redis lifetime, for example:
      * command execution
      * RDB loading
      * thread safe context
      
      During this long-running logic Redis is not responsive.
      
      This PR offers 
      1. An API to process events while a busy command is running (`RM_Yield`)
      2. A new flag (`ALLOW_BUSY`) to mark the commands that should be handled during busy
        jobs which can also be used by modules (`allow-busy`)
      3. In slow commands and thread-safe contexts, commands without this flag start being rejected
        with -BUSY only after `busy-reply-threshold` is reached
      4. During loading (`rdb_load` callback), it'll process events right away (not wait for `busy-reply-threshold`),
        but either way, the processing is throttled to the server hz rate.
      5. Allow modules to Yield to redis background tasks, but not to client commands
      
      * rename `script-time-limit` to `busy-reply-threshold` (an alias to the pre-7.0 `lua-time-limit`)
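      
      A minimal sketch of how a module command might use this (the `RedisModule_Yield` signature and flag name below are assumptions based on this description, not copied from the header):
      
      ```c
      #include "redismodule.h"
      
      /* Sketch: a long-running module command that periodically yields so Redis
       * can process events and serve commands flagged allow-busy.
       * The RedisModule_Yield signature and flag name are assumptions. */
      int SlowScan_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          for (long long i = 0; i < 100000000; i++) {
              if (i % 1000000 == 0) {
                  /* Let Redis handle background tasks and allow-busy commands
                   * while this command keeps running. */
                  RedisModule_Yield(ctx, REDISMODULE_YIELD_FLAG_CLIENTS, "Still working...");
              }
              /* ... one unit of expensive work ... */
          }
          return RedisModule_ReplyWithSimpleString(ctx, "done");
      }
      ```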
      Co-authored-by: Oran Agra <oran@redislabs.com>
      c4b78823
  2. 17 Jan, 2022 1 commit
    • Oran Agra's avatar
      Set repl-diskless-sync to yes by default, add repl-diskless-sync-max-replicas (#10092) · ae899589
      Oran Agra authored
      1. enable diskless replication by default
      2. add a new config named repl-diskless-sync-max-replicas that allows
         replication to start before the full repl-diskless-sync-delay is
         reached.
      3. put replica online sooner on the master (see below)
      4. test suite uses repl-diskless-sync-delay of 0 to be faster
      5. a few tests that use multiple replicas on a pre-populated master are
         now using the new repl-diskless-sync-max-replicas
      6. fix possible timing issues in a few cluster tests (see below)
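      
      An illustrative redis.conf snippet combining these settings (values are examples only):
      
      ```
      repl-diskless-sync yes             # now the default
      repl-diskless-sync-delay 5         # wait up to 5 seconds for more replicas to arrive
      repl-diskless-sync-max-replicas 2  # start earlier once 2 replicas are waiting
      ```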
      
      put replica online sooner on the master 
      ----------------------------------------------------
      there were two tests that failed because they needed the master to
      realize that the replica is online, but the test code was actually only
      waiting for the replica to realize it's online, and in diskless replication
      that could happen before the master realized it.
      
      the changes include two things:
      1. the tests now wait on the right thing
      2. fixes for issues in the master's two-step process of putting the replica online.
      
      the master used to put the replica online in 2 steps. the first
      step was to mark it as online, and the second step was to enable the
      write event (only after getting an ACK), but in fact the first step didn't
      include some of the tasks needed to put it online (like updating the good slave
      count, and sending the module event). this meant that if a test was
      waiting to see that the replica is online from the point of view of the
      master, and then confirm that the module got an event, or that the
      master has enough good replicas, it could fail due to timing issues.
      
      so now the full effect of putting the replica online happens at once,
      and only the part about enabling writes is delayed until the ACK.
      
      fix cluster tests 
      --------------------
      I added some code to wait for the replica to sync and avoid race
      conditions.
      later I realized the sentinel and cluster tests were using the original 5
      second delay, so I changed it to 0.
      
      this means the other changes are probably not needed, but I suppose
      they're still better (they avoid race conditions)
      ae899589
  3. 10 Jan, 2022 1 commit
  4. 09 Jan, 2022 1 commit
  5. 05 Jan, 2022 2 commits
    • filipe oliveira's avatar
      Added INFO LATENCYSTATS section: latency by percentile distribution/latency by... · 5dd15443
      filipe oliveira authored
      
      Added INFO LATENCYSTATS section: latency by percentile distribution/latency by cumulative distribution of latencies (#9462)
      
      # Short description
      
      The Redis extended latency stats track per-command latencies and enable:
      - exporting the per-command percentile distribution via the `INFO LATENCYSTATS` command.
        **(the percentile distribution is not mergeable between cluster nodes).**
      - exporting the per-command cumulative latency distributions via the `LATENCY HISTOGRAM` command.
        Using the cumulative distribution of latencies we can merge stats from different cluster nodes
        to calculate aggregate metrics.
      
      By default, the extended latency monitoring is enabled since the overhead of keeping track of the
      command latency is very small.
       
      If you don't want to track extended latency metrics, you can easily disable it at runtime using the command:
       - `CONFIG SET latency-tracking no`
      
      By default, the exported latency percentiles are the p50, p99, and p999.
      You can alter them at runtime using the command:
      - `CONFIG SET latency-tracking-info-percentiles "0.0 50.0 100.0"`
      
      
      ## Some details:
      - The total size per histogram should sit around 40 KiB. We only allocate those 40 KiB when a command
        is called for the first time.
      - With regard to write overhead: as seen below, there is no measurable overhead on the achievable
        ops/sec or the full latency spectrum on the client. The measured redis-benchmark results for unstable
        vs this branch are also included.
      - We track from 1 nanosecond to 1 second (everything above 1 second is considered +Inf)
      
      ## `INFO LATENCYSTATS` exposition format
      
         - Format: `latency_percentiles_usec_<CMDNAME>:p0=XX,p50....` 
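      
         For example, a line for the GET command could look like this (values and the exact percentile labels are illustrative):
      
         ```
         latency_percentiles_usec_get:p50=0.251,p99=1.003,p99.9=3.007
         ```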
      
      ## `LATENCY HISTOGRAM [command ...]` exposition format
      
      Return a cumulative distribution of latencies in the format of a histogram for the specified command names.
      
      The histogram is composed of a map of time buckets:
      - Each representing a latency range, between 1 nanosecond and roughly 1 second.
      - Each bucket covers twice the previous bucket's range.
      - Empty buckets are not printed.
      - Everything above 1 sec is considered +Inf.
      - At max there will be log2(1000000000)=30 buckets
      
      We reply with a map for each command in the format:
      `<command name>: { calls: <total command calls>, histogram: { <bucket 1>: latency, <bucket 2>: latency, ... } }`
      Co-authored-by: Oran Agra <oran@redislabs.com>
      5dd15443
    • Binbin's avatar
      Fix typos in aof.c / redis.conf (#10057) · 95380887
      Binbin authored
      95380887
  6. 03 Jan, 2022 3 commits
    • chenyang8094's avatar
      Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788) · 87789fae
      chenyang8094 authored
      
      
      Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
      Introducing a folder with multiple AOF files tracked by a manifest file.
      
      The main issues with the original AOFRW mechanism are:
      * buffering of commands that are processed during the rewrite (consuming a lot of RAM)
      * freezes of the main process when the AOFRW completes, in order to drain the remaining part of the buffer and fsync it
      * double disk IO for the data that arrives during AOFRW (it had to be written to both the old and new AOF files)
      
      The main modifications of this PR:
      1. Remove the AOF rewrite buffer and related code.
      2. Divide the AOF into multiple files of two types: the `BASE` type,
        which represents the full dataset (in AOF or RDB format) as of the last AOFRW (there is at most
        one `BASE` file), and the `INCR` type, of which there may be more than one; these represent the
        incremental commands since the last AOFRW.
      3. Use an AOF manifest file to record and manage the AOF files mentioned above.
      4. The original configuration of `appendfilename` will be the base part of the new file name, for example:
        `appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`
      5. Add manifest-related TCL tests, and modified some existing tests that depend on the `appendfilename`
      6. Remove the `aof_rewrite_buffer_length` field in info.
      7. Add the `aof-disable-auto-gc` configuration. By default we automatically delete HISTORY type AOFs;
        this gives users the option to preserve the history AOFs (just for testing use for now).
      8. Add an AOFRW limiting measure. When the number of AOFRW failures reaches the threshold (currently 3),
        we delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it will be
        delayed by 2 minutes, then 4, 8, 16, up to a maximum delay of 60 minutes (1 hour). During the limit
        period, we can still use the 'bgrewriteaof' command to execute an AOFRW immediately.
      9. Support upgrading from (loading data written by) older versions of redis.
      10. Add `appenddirname` configuration, as the directory name of the append only files. All AOF files and
        manifest file will be placed in this directory.
      11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise redis will exit even if
        `aof-load-truncated` is enabled.
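      
      An illustrative on-disk layout (the `appendonlydir` default for `appenddirname` and the manifest file name are assumptions; the data file names follow the example in item 4):
      
      ```
      appendonlydir/
      ├── appendonly.aof.1.base.rdb    # BASE: full dataset as of the last AOFRW
      ├── appendonly.aof.2.incr.aof    # INCR: commands written since that AOFRW
      └── appendonly.aof.manifest      # manifest tracking the files above
      ```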
      Co-authored-by: Oran Agra <oran@redislabs.com>
      87789fae
    • Madelyn Olson's avatar
      Implement clusterbus message extensions and cluster hostname support (#9530) · 5460c100
      Madelyn Olson authored
      Implement the ability for cluster nodes to advertise their location with extension messages.
      5460c100
    • Harkrishn Patro's avatar
      Sharded pubsub implementation (#8621) · 9f888576
      Harkrishn Patro authored
      
      
      This commit implements sharded pubsub, based on shard channels.
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      9f888576
  7. 02 Jan, 2022 1 commit
    • Viktor Söderqvist's avatar
      Wait for replicas when shutting down (#9872) · 45a155bd
      Viktor Söderqvist authored
      
      
      To avoid data loss, this commit adds a grace period for lagging replicas to
      catch up with the replication offset.
      
      Done:
      
      * Wait for replicas when shutdown is triggered by SIGTERM and SIGINT.
      
      * Wait for replicas when shutdown is triggered by the SHUTDOWN command. A new
        blocked client type BLOCKED_SHUTDOWN is introduced, allowing multiple clients
        to call SHUTDOWN in parallel.
        Note that they don't expect a response unless an error happens and shutdown is aborted.
      
      * Log warning for each replica lagging behind when finishing shutdown.
      
      * CLIENT_PAUSE_WRITE while waiting for replicas.
      
      * Configurable grace period 'shutdown-timeout' in seconds (default 10).
      
      * New flags for the SHUTDOWN command:
      
          - NOW disables the grace period for lagging replicas.
      
          - FORCE ignores errors writing the RDB or AOF files which would normally
            prevent a shutdown.
      
          - ABORT cancels ongoing shutdown. Can't be combined with other flags.
      
      * New field in the output of the INFO command: 'shutdown_in_milliseconds'. The
        value is the remaining maximum time to wait for lagging replicas before
        finishing the shutdown. This field is present in the Server section **only**
        during shutdown.
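      
      Illustrative usage (values are examples only):
      
      ```
      # redis.conf: wait up to 15 seconds for lagging replicas on shutdown
      shutdown-timeout 15
      
      # redis-cli:
      SHUTDOWN           # waits up to shutdown-timeout for replicas to catch up
      SHUTDOWN NOW       # skip the grace period
      SHUTDOWN ABORT     # cancel an ongoing shutdown
      ```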
      
      Not directly related:
      
      * When shutting down, if there is an AOF saving child, it is killed **even** if AOF
        is disabled. This can happen if BGREWRITEAOF is used when AOF is off.
      
      * Client pause now has end time and type (WRITE or ALL) per purpose. The
        different pause purposes are *CLIENT PAUSE command*, *failover* and
        *shutdown*. If clients are unpaused for one purpose, it doesn't affect client
        pause for other purposes. For example, the CLIENT UNPAUSE command doesn't
        affect client pause initiated by the failover or shutdown procedures. A completed
        failover or a failed shutdown doesn't unpause clients paused by the CLIENT
        PAUSE command.
      
      Notes:
      
      * DEBUG RESTART doesn't wait for replicas.
      
      * We already have a warning logged when a replica disconnects. This means that
        if any replica connection is lost during the shutdown, it is either logged as
        disconnected or as lagging at the time of exit.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      45a155bd
  8. 19 Dec, 2021 1 commit
    • YaacovHazan's avatar
      Protected configs and sensitive commands (#9920) · ae2f5b7b
      YaacovHazan authored
      Block sensitive configs and commands by default.
      
      * `enable-protected-configs` - block modification of configs with the new `PROTECTED_CONFIG` flag.
         Currently we add this flag to `dbfilename`, and `dir` configs,
         all of which are non-mutable configs that can set a file redis will write to.
      * `enable-debug-command` - block the `DEBUG` command
      * `enable-module-command` - block the `MODULE` command
      
      These have a default value set to `no`, so that these features are not
      exposed by default to client connections, and can only be set by modifying the config file.
      
      Users can change each of these to either `yes` (allow all access), or `local` (allow access from
      local TCP connections and unix domain connections)
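      
      For example, in redis.conf (values illustrative):
      
      ```
      enable-protected-configs no     # the default: block PROTECTED_CONFIG changes
      enable-debug-command local      # allow DEBUG only from local connections
      enable-module-command yes       # allow MODULE from any client
      ```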
      
      Note that this is a **breaking change** (specifically the part about MODULE command being disabled by default).
      I.e. we don't consider DEBUG command being blocked as an issue (people shouldn't have been using it),
      and the few configs we protected are unlikely to have been set at runtime anyway.
      On the other hand, it's reasonable to assume that users who use modules load them from the config file anyway.
      Note that this is the whole point of this PR: for redis to be more secure by default and reduce the attack surface for
      innocent users, so secure defaults will necessarily mean a breaking change.
      ae2f5b7b
  9. 17 Dec, 2021 1 commit
    • ny0312's avatar
      Introduce memory management on cluster link buffers (#9774) · 792afb44
      ny0312 authored
      Introduce memory management on cluster link buffers:
       * Introduce a new `cluster-link-sendbuf-limit` config that caps memory usage of cluster bus link send buffers.
       * Introduce a new `CLUSTER LINKS` command that displays current TCP links to/from peers.
       * Introduce a new `mem_cluster_links` field under `INFO` command output, which displays the overall memory usage by all current cluster links.
       * Introduce a new `total_cluster_links_buffer_limit_exceeded` field under `CLUSTER INFO` command output, which displays the accumulated count of cluster links freed due to `cluster-link-sendbuf-limit`.
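      
       An illustrative way to use these (the limit value is an example; the comments only annotate which output each command carries):
      
       ```
       CONFIG SET cluster-link-sendbuf-limit 10mb   # cap each link's send buffer
       CLUSTER LINKS                                # list current TCP links to/from peers
       INFO memory                                  # includes mem_cluster_links
       CLUSTER INFO                                 # includes total_cluster_links_buffer_limit_exceeded
       ```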
      792afb44
  10. 06 Dec, 2021 1 commit
  11. 01 Dec, 2021 1 commit
    • yoav-steinberg's avatar
      Multiparam config set (#9748) · 0e5b813e
      yoav-steinberg authored
      We can now do: `config set maxmemory 10m repl-backlog-size 5m`
      
      ## Basic algorithm to support "transaction like" config sets:
      
      1. Backup all relevant current values (via get).
      2. Run "verify" and "set" on everything, if we fail run "restore".
      3. Run "apply" on everything (optional optimization: skip functions already run). If we fail run "restore".
      4. Return success.
      
      ### restore
      1. Run set on everything in backup. If we fail log it and continue (this puts us in an undefined
         state but we decided it's better than the alternative of panicking). This indicates either a bug
         or some unsupported external state.
      2. Run apply on everything in backup (optimization: skip functions already run). If we fail log
         it (see comment above).
      3. Return error.
      
      ## Implementation/design changes:
      * Apply functions are idempotent (they have no effect if run more than once on the same config).
      * There is no indication in set functions of whether we're reading the config file or running from the `CONFIG SET` command
         (the `update` argument was removed).
      * A set function should set some config variable and assume an (optional) apply function will use that
         later to apply it. If we know this setting can be safely applied immediately, can always be reverted,
         and doesn't depend on any other configuration, we can apply it immediately from within the set function
         (and not store the setting anywhere). This is the case for the `dir` config, for example, which has no
         apply function. An apply function is also not needed when setting the variable in the `server` struct
         is all that's required to make the configuration take effect. Note that the original concept of `update_fn`,
         which received the old and new values, was removed and replaced by the optional apply function.
      * Apply functions use settings written to the `server` struct and don't receive any inputs.
      * For the generic (non-special) configs, if there's no change we avoid calling the setter (possible
         optimization: avoid calling the apply function as well).
      * Passing the same config parameter more than once to `config set` will fail. You can't do `config set my-setting
         value1 my-setting value2`.
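      
      An illustrative session (the error text is elided; the point is that a rejected parameter leaves every value unchanged):
      
      ```
      127.0.0.1:6379> CONFIG SET maxmemory 100mb maxmemory-policy allkeys-lru
      OK
      127.0.0.1:6379> CONFIG SET maxmemory 200mb maxmemory 300mb
      (error) ERR ...   # duplicate parameter: nothing is applied, maxmemory stays 100mb
      ```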
      
      Note that getting `save` in the context of the conf file parsing to work here as before was a pain.
      The conf file supports an aggregate `save` definition, where each `save` line is added to the server's
      save params. This is unlike any other line in the config file where each line overwrites any previous
      configuration. Since we now support passing multiple save params in a single line (see top comments
      about `save` in https://github.com/redis/redis/pull/9644) we should deprecate the aggregate nature of
      this config line and perhaps reduce this ugly code in the future.
      0e5b813e
  12. 24 Nov, 2021 1 commit
    • sundb's avatar
      Replace ziplist with listpack in quicklist (#9740) · 45129059
      sundb authored
      
      
      Part three of implementing #8702, following #8887 and #9366 .
      
      ## Description of the feature
      1. Replace the ziplist container of quicklist with listpack.
      2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.
      
      ## Interface changes
      1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
      2. Replace `debug ziplist` command with `debug listpack`.
      
      ## Internal changes
      1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
      2. Add `lpRepr` to print info about a listpack, used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
      3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357).
          It represents that a quicklistNode is a packed node, as opposed to a plain node.
      4. Remove the `createZiplistObject` method, which is never used.
      5. Calculate listpack entry size using an overhead overestimation in `quicklistAllowInsert`.
          We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
      
      ## Improvements
      1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
      2. Optimize `quicklistAppendPlainNode` to avoid copying data with memcpy.
      
      ## Bugfix
      1. Fix a crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.
      
      ## Test
      1. Add unittest for `lpMerge`.
      2. Modify the old quicklist ziplist corrupt dump test.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      45129059
  13. 18 Nov, 2021 1 commit
  14. 04 Nov, 2021 1 commit
    • Eduardo Semprebon's avatar
      Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323) · 91d0c758
      Eduardo Semprebon authored
      
      
      For diskless replication in swapdb mode, considering we already spend replica memory
      keeping a backup of the current db to restore in case of failure, we can get the following benefits
      by instead swapping databases only once we have succeeded in transferring the db from the master:
      
      - Avoid `LOADING` response during failed and successful synchronization for cases where the
        replica is already up and running with data.
      - Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
        time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
      - This could be implemented also for disk replication with similar benefits if consumers are willing
        to spend the extra memory usage.
      
      General notes:
      - The concept of `backupDb` becomes `tempDb` for clarity.
      - Async loading mode will only kick in if the replica is syncing from a master that has the same
        repl-id as the one it had before, i.e. the data it's getting belongs to a different point in time on the same timeline.
      - New property in INFO: `async_loading` to differentiate from the blocking loading
      - Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
        and the tempDb that is passed around.
      - Because this affects replicas only, we assume that if they are not read-only and take write commands
        during replication, those writes are lost after SYNC the same way as before, but we're still denying CONFIG SET
        here anyway to avoid complications.
      
      Considerations for review:
      - We have many cases where server.loading flag is used and even though I tried my best, there may
        be cases where async_loading should be checked as well and cases where it shouldn't (would require
        very good understanding of whole code)
      - Several places that had different behavior depending on the loading flag were actually meant to just
        handle commands coming from the AOF client differently than ones coming from real clients; these were changed
        to check CLIENT_ID_AOF instead.
      
      **Additional for Release Notes**
      - Bugfix - server.dirty was not incremented for any kind of diskless replication; as a result it wouldn't
        contribute to triggering the next database SAVE
      - New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
      - Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
        Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
        ABORTED and COMPLETED.
      - New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
        to allow modules to declare they support the diskless replication with async loading (when absent, we fall
        back to disk-based loading).
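      
      A minimal sketch of a module opting in to async loading; the option constant is the one named above, the rest is a standard module `OnLoad` skeleton:
      
      ```c
      #include "redismodule.h"
      
      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "mymod", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          /* Declare that this module can handle async loading; without this,
           * Redis falls back to disk-based loading when module keys exist. */
          RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);
          return REDISMODULE_OK;
      }
      ```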
      Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      91d0c758
  15. 25 Oct, 2021 2 commits
    • Wang Yuan's avatar
      Add timestamp annotations in AOF (#9326) · 9ec3294b
      Wang Yuan authored
      Add timestamp annotation in AOF, one part of #9325.
      
      Enabled with the new `aof-timestamp-enabled` config option.
      
      The timestamp annotation format is `#TS:${timestamp}\r\n`.
      "TS" is short for timestamp, and keeping it short saves extra bytes in the AOF.
      
      We can use timestamp annotation for some special functions. 
      - know the executing time of commands
      - restore data to a specific point in time (by using redis-check-aof to truncate the file)
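      
      With `aof-timestamp-enabled yes`, the AOF stream would contain annotations like this (illustrative fragment):
      
      ```
      #TS:1640995200
      *3
      $3
      SET
      $3
      foo
      $3
      bar
      ```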
      9ec3294b
    • Wang Yuan's avatar
      Replication backlog and replicas use one global shared replication buffer (#9166) · c1718f9d
      Wang Yuan authored
      ## Background
      For a redis master, each replica uses its own copy of the replication buffer. That is a big waste of memory:
      more replicas means more waste, and allocating/freeing memory for every reply list also costs a lot.
      If we set client-output-buffer-limit small and write traffic is heavy, the master may disconnect
      replicas and never finish synchronizing with them. If we set client-output-buffer-limit big, the
      master may go OOM when there are many replicas, each keeping a lot of memory.
      Because the replication buffers of the different replica clients hold the same content, one simple idea is that
      all replicas use a single replication buffer, which effectively saves memory.
      
      Since replication backlog content is the same as replicas' output buffer, now we
      can discard replication backlog memory and use global shared replication buffer
      to implement replication backlog mechanism.
      
      ## Implementation
      I create one global "replication buffer" which contains the content of the replication stream.
      The structure of the "replication buffer" is similar to the reply list that exists in every client,
      but the list node is a `replBufBlock`, which has `id`, `repl_offset` and `refcount` fields.
      ```c
      /* Replication buffer blocks is the list of replBufBlock.
       *
       * +--------------+       +--------------+       +--------------+
       * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
       * +--------------+       +--------------+       +--------------+
       *      |                                            /       \
       *      |                                           /         \
       *      |                                          /           \
       *  Repl Backlog                               Replica_A     Replica_B
       * 
       * Each replica or replication backlog increments only the refcount of the
       * 'ref_repl_buf_node' which it points to. So when replica walks to the next
       * node, it should first increase the next node's refcount, and when we trim
       * the replication buffer nodes, we remove node always from the head node which
       * refcount is 0. If the refcount of the head node is not 0, we must stop
       * trimming and never iterate the next node. */
      
      /* Similar with 'clientReplyBlock', it is used for shared buffers between
       * all replica clients and replication backlog. */
      typedef struct replBufBlock {
          int refcount;           /* Number of replicas or repl backlog using. */
          long long id;           /* The unique incremental number. */
          long long repl_offset;  /* Start replication offset of the block. */
          size_t size, used;
          char buf[];
      } replBufBlock;
      ```
      So now when we feed replication stream into replication backlog and all replicas, we only need
      to feed stream into replication buffer `feedReplicationBuffer`. In this function, we set some fields of
      replication backlog and replicas to references of the global replication buffer blocks. And we also
      need to check replicas' output buffer limit to free if exceeding `client-output-buffer-limit`, and trim
      replication backlog if exceeding `repl-backlog-size`.
      
      When sending replies to replicas, we also need to iterate over the replication buffer blocks and send their
      content; when a block has been fully sent to a replica, we decrease the current node's refcount and
      increase the next node's refcount, and then free blocks whose refcount is 0 from the
      head of the replication buffer.
      
      Since we now use a linked list to manage the replication backlog, it may take a long time to iterate
      over all the list nodes to find the corresponding replication buffer node. So we create a rax tree to
      index some of the nodes, but to avoid the rax tree occupying too much memory, we only record
      one node per 64 in the index.
      
      Currently, to make partial resynchronization possible as much as we can, we always keep the replication
      backlog as the last reference to the replication buffer blocks. The backlog size may exceed our setting
      if slow replicas reference a large number of replication buffer blocks, but this method doesn't increase
      memory usage since they share the replication buffer. To avoid freezing the server when freeing unreferenced
      replication buffer blocks while trimming the backlog down to the configured size,
      we trim the backlog incrementally (free 64 blocks per call now), and make it faster in
      `beforeSleep` (free 640 blocks).
      
      ### Other changes
      - `mem_total_replication_buffers`: we add this field in INFO command, it means the total
        memory of replication buffers used.
      - `mem_clients_slaves`: now, even if a replica is slow to replicate and its output buffer memory
        is not 0, this field may still be 0: since the replication backlog and the replicas share one global replication
        buffer, only when the replication buffer memory exceeds the configured repl backlog size do we consider
        the excess as the replicas' memory. Otherwise, we treat the replication buffer memory as the consumption
        of the repl backlog.
      - Key eviction
        Since all replicas and the replication backlog share the global replication buffer, we consider only the
        part exceeding the backlog size as the extra, separate consumption of replicas.
        Because we trim the backlog incrementally in the background, the backlog size may exceed our
        setting if slow replicas that reference a large number of replication buffer blocks disconnect.
        To avoid a massive eviction loop, we don't count the delayed-freed replication backlog as
        used memory even if there are no replicas, i.e. we still regard this memory as the replicas' memory.
      - `client-output-buffer-limit` check for replica clients
        It doesn't make sense to set the replica clients output buffer limit lower than the repl-backlog-size
        config (partial sync will succeed and then replica will get disconnected). Such a configuration is
        ignored (the size of repl-backlog-size will be used). This doesn't have memory consumption
        implications since the replica client will share the backlog buffers memory.
      - Drop replication backlog after loading data if needed
        We always create replication backlog if server is a master, we need it because we put DELs in
        it when loading expired keys in RDB, but if RDB doesn't have replication info or there is no rdb,
        it is not possible to support partial resynchronization, to avoid extra memory of replication backlog,
        we drop it.
      - Multi IO threads
        Since all replicas and the replication backlog use the global replication buffer, if I/O threads are enabled,
        to guarantee thread-safe data access, we must let the main thread handle sending the output buffer
        to all replicas. Previously, other IO threads could handle sending the output buffers of all replicas.
      
      ## Other optimizations
      This solution resolves some other problems:
      - When replicas are disconnected from the master for exceeding the output buffer limit, releasing the output
        buffer of a replica could freeze the server if we set a big `client-output-buffer-limit` for replicas; now
        it doesn't cause freezing.
      - This implementation mitigates the cost of copying the reply list (which also freezes the server) when one replica
        has a huge reply buffer and another replica copies that buffer for full synchronization; now we just copy
        the reference info, which is very light.
      - If we set replication backlog size big, it also may cost much time to copy replication backlog into
        replica's output buffer. But this commit eliminates this problem.
      - Resizing replication backlog size doesn't empty current replication backlog content.
      c1718f9d
  16. 20 Oct, 2021 1 commit
    • guybe7's avatar
      Treat subcommands as commands (#9504) · 43e736f7
      guybe7 authored
      ## Intro
      
      The purpose is to allow having different flags/ACL categories for
      subcommands (Example: CONFIG GET is ok-loading but CONFIG SET isn't)
      
      We create a small command table for every command that has subcommands
      and each subcommand has its own flags, etc. (same as a "regular" command)
      
      This commit also unites the Redis and the Sentinel command tables
      
      ## Affected commands
      
      CONFIG
      Used to have "admin ok-loading ok-stale no-script"
      Changes:
      1. Dropped "ok-loading" in all except GET (this doesn't change behavior since
      there were checks in the code doing that)
      
      XINFO
      Used to have "read-only random"
      Changes:
      1. Dropped "random" in all except CONSUMERS
      
      XGROUP
      Used to have "write use-memory"
      Changes:
      1. Dropped "use-memory" in all except CREATE and CREATECONSUMER
      
      COMMAND
      No changes.
      
      MEMORY
      Used to have "random read-only"
      Changes:
      1. Dropped "random" in PURGE and USAGE
      
      ACL
      Used to have "admin no-script ok-loading ok-stale"
      Changes:
      1. Dropped "admin" in WHOAMI, GENPASS, and CAT
      
      LATENCY
      No changes.
      
      MODULE
      No changes.
      
      SLOWLOG
      Used to have "admin random ok-loading ok-stale"
      Changes:
      1. Dropped "random" in RESET
      
      OBJECT
      Used to have "read-only random"
      Changes:
      1. Dropped "random" in ENCODING and REFCOUNT
      
      SCRIPT
      Used to have "may-replicate no-script"
      Changes:
      1. Dropped "may-replicate" in all except FLUSH and LOAD
      
      CLIENT
      Used to have "admin no-script random ok-loading ok-stale"
      Changes:
      1. Dropped "random" in all except INFO and LIST
      2. Dropped "admin" in ID, TRACKING, CACHING, GETREDIR, INFO, SETNAME, GETNAME, and REPLY
      
      STRALGO
      No changes.
      
      PUBSUB
      No changes.
      
      CLUSTER
      Changes:
      1. Dropped "admin in countkeysinslots, getkeysinslot, info, nodes, keyslot, myid, and slots
      
      SENTINEL
      No changes.
      
      (note that DEBUG also fits, but we decided not to convert it since it's for
      debugging and anyway undocumented)
      
      ## New sub-command
      This commit adds another element to the per-command output of COMMAND,
      describing the list of subcommands, if any (in the same structure as "regular" commands)
      Also, it adds a new subcommand:
      ```
      COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)]
      ```
      which returns a set of all commands (unless filtered), but excluding subcommands.
      
      ## Module API
      A new module API, RM_CreateSubcommand, was added, in order to allow
      module writers to define subcommands
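      
      A rough sketch of the container-command-plus-subcommand registration pattern (the exact signatures of `RedisModule_GetCommand` and `RedisModule_CreateSubcommand` shown here are assumptions):
      
      ```c
      #include "redismodule.h"
      
      int MymodSet_Command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc);
      
      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "mymod", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          /* Container command: no callback of its own, only subcommands. */
          if (RedisModule_CreateCommand(ctx, "mymod", NULL, "", 0, 0, 0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          RedisModuleCommand *parent = RedisModule_GetCommand(ctx, "mymod");
          /* Subcommand with its own flags, invoked as MYMOD SET. */
          if (RedisModule_CreateSubcommand(parent, "set", MymodSet_Command, "", 0, 0, 0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          return REDISMODULE_OK;
      }
      ```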
      
      ## ACL changes:
      1. Now that each subcommand is actually a command, each has its own ACL id.
      2. The old mechanism of allowed_subcommands is redundant
      (blocking/allowing a subcommand is the same as blocking/allowing a regular command),
      but we had to keep it, to support the widespread usage of allowed_subcommands
      to block commands with certain args, that aren't subcommands (e.g. "-select +select|0").
      3. I have renamed allowed_subcommands to allowed_firstargs to emphasize the difference.
      4. Because subcommands are commands in ACL too, you can now use "-" to block subcommands
      (e.g. "+client -client|kill"), which wasn't possible in the past.
      5. It is also possible to use the allowed_firstargs mechanism with subcommand.
      For example: `+config -config|set +config|set|loglevel` will block all CONFIG SET except
      for setting the log level.
      6. All of the ACL changes above required some amount of refactoring.
      
      ## Misc
      1. There are two approaches: either each subcommand has its own function, or all
         subcommands use the same function, determining what to do according to argv[0].
         For now, I took the former approach only with CONFIG and COMMAND,
         while other commands use the latter approach (for a smaller blamelog diff).
      2. Deleted memoryGetKeys: it is no longer needed because MEMORY USAGE now uses the "range" key spec.
      3. Bugfix: GETNAME was missing from CLIENT's help message.
      4. Sentinel and Redis now use the same table, with the same function pointer.
         Some commands have a different implementation in Sentinel, so we redirect
         them (these are ROLE, PUBLISH, and INFO).
      5. Command stats now show the stats per subcommand (e.g. instead of stats just
         for "config" you will have stats for "config|set", "config|get", etc.)
      6. It is now possible to use COMMAND directly on subcommands:
         COMMAND INFO CONFIG|GET (the pipe syntax was inspired by ACL, and
         can be used in the functions lookupCommandBySds and lookupCommandByCString)
      7. STRALGO is now a container command (has "help")
      
      ## Breaking changes:
      1. Command stats now show the stats per subcommand (see (5) above)
      43e736f7
  17. 19 Oct, 2021 1 commit
  18. 03 Oct, 2021 1 commit
    • Binbin's avatar
      Cleanup typos, incorrect comments, and fixed small memory leak in redis-cli (#9153) · dd3ac97f
      Binbin authored
      1. Remove forward declarations from header files for functions that do not exist:
      hmsetCommand and rdbSaveTime.
      2. Minor phrasing fixes in #9519
      3. Add a missing sdsfree(title) and fix a typo in redis-benchmark.
      4. Modify some error comments in some zset commands.
      5. Fix a copy-paste bug in a comment in syncWithMaster about `ip-address`.
      dd3ac97f
  19. 30 Sep, 2021 1 commit
  20. 23 Sep, 2021 1 commit
    • yoav-steinberg's avatar
      Client eviction (#8687) · 2753429c
      yoav-steinberg authored
      
      
      ### Description
      A mechanism for disconnecting clients when the total memory used by all connected clients is above a
      configured limit. This prevents eviction or OOM caused by the memory accumulated
      across all clients. It's complementary to the `client-output-buffer-limit`
      mechanism, but takes into account not only a single client and not only output buffers,
      but rather all memory used by all clients.
      
      #### Design
      The general design is as following:
      * We track memory usage of each client, taking into account all memory used by the
        client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date
        after reading from the socket, after processing commands and after writing to the socket.
      * Based on the used memory we sort all clients into buckets. Each bucket contains all
        clients using up to twice the memory of the clients in the bucket below it. For example: clients using
        up to 1MB, up to 2MB, up to 4MB, ...
      * Before processing a command and before sleep we check if we're over the configured
        limit. If we are we start disconnecting clients from larger buckets downwards until we're
        under the limit.
      
      #### Config
      `maxmemory-clients` max memory all clients are allowed to consume, above this threshold
      we disconnect clients.
      This config can either be set to 0 (meaning no limit), a size in bytes (possibly with MB/GB
      suffix), or as a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%`
      would mean 10% of `maxmemory`).
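      
      Illustrative settings (example values, not recommendations):
      
      ```
      maxmemory-clients 0      # no limit
      maxmemory-clients 1gb    # absolute limit
      maxmemory-clients 5%     # relative to maxmemory
      ```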
      
      #### Important code changes
      * During development I encountered yet more situations where our io-threads access
        global vars, and needed to fix them. I also had to handle keeping the clients sorted into the
        memory buckets (which are global) while their memory usage changes in the io-thread.
        To achieve this I decided to simplify how we check if we're in an io-thread and make it
        much more explicit. I removed the `CLIENT_PENDING_READ` flag used for checking
        if the client is in an io-thread (it wasn't used for anything else) and just used the global
        `io_threads_op` variable the same way to check during writes.
      * I optimized the cleanup of the client from the `clients_pending_read` list on client freeing.
        We now store a pointer in the `client` struct to this list so we don't need to search in it
        (`pending_read_list_node`).
      * Added `evicted_clients` stat to `INFO` command.
      * Added `CLIENT NO-EVICT ON|OFF` sub command to exclude a specific client from the
        client eviction mechanism. Added the corresponding 'e' flag in the client info string.
      * Added `multi-mem` field in the client info string to show how much memory is used up
        by buffered multi commands.
      * Client `tot-mem` now accounts for buffered multi-commands, pubsub patterns and
        channels (partially), tracking prefixes (partially).
      * CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function so
        clients will be disconnected between processing different clients and not only before sleep.
        This new function can be used in the future for work we want to do outside the command
        processing loop but don't want to wait for all clients to be processed before we get to it.
        Specifically I wanted to handle output-buffer-limit related closing before we process client
        eviction in case the two race with each other.
      * Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction
        buckets.
      * Each client now holds a pointer to the client eviction memory usage bucket it belongs to
        and listNode to itself in that bucket for quick removal.
      * Global `io_threads_op` variable now can contain a `IO_THREADS_OP_IDLE` value
        indicating no io-threading is currently being executed.
      * In order to track memory used by each clients in real-time we can't rely on updating
        these stats in `clientsCron()` alone anymore. So now I call `updateClientMemUsage()`
        (used to be `clientsCronTrackClientsMemUsage()`) after command processing, after
        writing data to pubsub clients, after writing the output buffer and after reading from the
        socket (and maybe other places too). The function is written to be fast.
      * Clients are evicted if needed (with appropriate log line) in `beforeSleep()` and before
        processing a command (before performing oom-checks and key-eviction).
      * All clients memory usage buckets are grouped as follows:
        * All clients using less than 64k.
        * 64K..128K
        * 128K..256K
        * ...
        * 2G..4G
        * All clients using 4g and up.
      * Added client-eviction.tcl with a bunch of tests for the new mechanism.
      * Extended maxmemory.tcl to test the interaction between maxmemory and
        maxmemory-clients settings.
      * Added an option to flag a numeric configuration variable as a "percent", which means that
        if we encounter a '%' after the number in the config file (or config set command) we
        consider it valid. Such a number is stored internally as a negative value. This way an
        integer value can be interpreted as either a percent (negative) or an absolute value (positive).
        This is useful, for example, if some numeric configuration can optionally be set to a percentage
        of something else.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      2753429c
  21. 13 Sep, 2021 1 commit
  22. 09 Sep, 2021 1 commit
    • sundb's avatar
      Replace all usage of ziplist with listpack for t_zset (#9366) · 3ca6972e
      sundb authored
      Part two of implementing #8702 (zset), after #8887.
      
      ## Description of the feature
      Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to optimize performance.
      
      ## Rdb format changes
      New `RDB_TYPE_ZSET_LISTPACK` rdb type.
      
      ## Rdb loading improvements:
      1) Pre-expansion of dict for validation of duplicate data for listpack and ziplist.
      2) Simplifying the release of empty key objects when RDB loading.
      3) Unify ziplist and listpack data verify methods for zset and hash, and move code to rdb.c.
      
      ## Interface changes
      1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
      2) OBJECT ENCODING will return listpack instead of ziplist.
      
      ## Listpack improvements:
      1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from listpack.
      2) Improve the performance of `lpCompare`, converting from string to integer is faster than converting from integer to string.
      3) Replace `snprintf` with `ll2string` to improve performance in converting numbers to strings in `lpGet()`.
      
      ## Zset improvements:
      1) Improve the performance of `zzlFind` method, use `lpFind` instead of `lpCompare` in a loop.
      2) Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to delete an element of a zset.
      
      ## Tests
      1) Add some unittests for `lpDeleteRange` and `lpDeleteRangeWithEntry` function.
      2) Add zset RDB loading test.
      3) Add benchmark tests for `lpCompare` and `ziplistCompare`.
      4) Add empty listpack zset corrupt dump test.
      3ca6972e
  23. 30 Aug, 2021 1 commit
    • Wang Yuan's avatar
      Use sync_file_range to optimize fsync if possible (#9409) · 9a0c0617
      Wang Yuan authored
      We implement incremental data sync in rio.c by calling fsync; on a slow disk that may cost a lot of time.
      sync_file_range can provide an asynchronous fsync, so we can serialize keys/values and sync the file data at the same time.
      
      > one tip for sync_file_range usage: http://lkml.iu.edu/hypermail/linux/kernel/1005.2/01845.html
      
      Additionally, this change avoids issuing a single large write, which can result in a mass of dirty
      pages in the kernel (increasing the risk of blocking someone else's writes).
      
      On HDD, the current solution cuts the RDB dump time roughly in half:
      this PR takes 50s to dump a 7.7G rdb, while the unstable branch takes 93s.
      On NVMe SSD this PR can't reduce much time: it takes 40s vs 48s for the unstable branch.
      
      Moreover, I found that syncing data every 4MB is better than every 32MB.
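      
      A minimal sketch of the idea (not the actual rio.c code; assumes Linux): write sequentially and ask the kernel to start writeback every 4MB, so dirty pages are flushed gradually instead of in one large burst.
      
      ```c
      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <unistd.h>
      
      #define FLUSH_EVERY (4*1024*1024)
      
      /* Write 'len' bytes to fd, kicking off asynchronous writeback every 4MB. */
      ssize_t writeWithIncrementalSync(int fd, const char *buf, size_t len) {
          size_t written = 0, since_sync = 0;
          off_t sync_from = lseek(fd, 0, SEEK_CUR);
          while (written < len) {
              ssize_t n = write(fd, buf + written, len - written);
              if (n <= 0) return -1;
              written += (size_t)n;
              since_sync += (size_t)n;
              if (since_sync >= FLUSH_EVERY) {
                  /* Start writeback of this range without waiting for completion. */
                  sync_file_range(fd, sync_from, since_sync, SYNC_FILE_RANGE_WRITE);
                  sync_from += since_sync;
                  since_sync = 0;
              }
          }
          return (ssize_t)written;
      }
      ```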
      9a0c0617
  24. 10 Aug, 2021 1 commit
    • sundb's avatar
      Replace all usage of ziplist with listpack for t_hash (#8887) · 02fd76b9
      sundb authored
      
      
      Part one of implementing #8702 (taking hashes first before other types)
      
      ## Description of the feature
      1. Change ziplist encoded hash objects to listpack encoding.
      2. Convert existing ziplists at RDB loading time, an O(n) operation.
      
      ## Rdb format changes
      1. Add RDB_TYPE_HASH_LISTPACK rdb type.
      2. Bump RDB_VERSION to 10
      
      ## Interface changes
      1. New `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`)
      2. OBJECT ENCODING will return `listpack` instead of `ziplist`
      
      ## Listpack improvements:
      1. Support direct insert/replace of integer elements (rather than converting back and forth from strings)
      2. Add more listpack capabilities to match the ziplist ones (like `lpFind`, `lpRandomPairs` and such)
      3. Optimize element length fetching, avoiding multiple calculations
      4. Use inline to avoid function call overhead.
      
      ## Tests
      1. Add a new test to the RDB load time conversion
      2. Adding the listpack unit tests. (based on the one in ziplist.c)
      3. Add a few "corrupt payload: fuzzer findings" tests, and slightly modify existing ones.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      02fd76b9
  25. 20 Jul, 2021 1 commit
    • Oran Agra's avatar
      Fix ACL category for SELECT, WAIT, ROLE, LASTSAVE, READONLY, READWRITE, ASKING (#9208) · 32e61ee2
      Oran Agra authored
      - SELECT and WAIT don't read or write from the keyspace (unlike DEL, EXISTS, EXPIRE, DBSIZE, KEYS, etc).
      They're more similar to AUTH and HELLO (and maybe PING and COMMAND).
      They only affect the current connection, not the server state, so they should be `@connection`, not `@keyspace`
      
      - ROLE, like LASTSAVE, is `@admin` (and `@dangerous` like INFO)
      
      - ASKING, READONLY, READWRITE are `@connection` too (not `@keyspace`)
      
      - Additionally, I'm now documenting the exact meaning of each ACL category so it's clearer which commands belong where.
      32e61ee2
  26. 17 Jul, 2021 1 commit
  27. 27 Jun, 2021 1 commit
  28. 24 Jun, 2021 1 commit
    • Yossi Gottlieb's avatar
      Add bind-source-addr configuration argument. (#9142) · f233c4c5
      Yossi Gottlieb authored
      In the past, the first bind address that was explicitly specified was
      also used to bind outgoing connections. This could result in some
      problems. For example: on some systems, using `bind 127.0.0.1` would
      result in outgoing connections also binding to `127.0.0.1` and failing
      to connect to remote addresses.
      
      With the recent change to the way `bind` is handled, this presented
      other issues:
      
      * The default first bind address is '*' which is not a valid address.
      * We make no distinction between user-supplied config that is identical
      to the default, and the default config.
      
      This commit addresses both these issues by introducing an explicit
      configuration parameter to control the bind address on outgoing
      connections.
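      
      An illustrative redis.conf snippet (the address is hypothetical):
      
      ```
      bind *                       # listen on all interfaces for incoming clients
      bind-source-addr 10.0.0.5    # bind outgoing connections to this local address
      ```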
      f233c4c5
  29. 16 Jun, 2021 2 commits
    • Sam Bortman's avatar
      Support glob pattern matching for config include files (#8980) · c2b93ff8
      Sam Bortman authored
      This will allow distros to use an "include conf.d/*.conf" statement in the default configuration file
      which will facilitate customization across upgrades/downgrades.
      
      The change itself is trivial: instead of opening an individual file, the glob call creates a vector of files to open, and each file is opened in turn, and its content is added to the configuration.
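      
      For example, a distro's default configuration file could end with (path illustrative):
      
      ```
      include /etc/redis/conf.d/*.conf
      ```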
      c2b93ff8
    • yoav-steinberg's avatar
      Remove gopher protocol support. (#9057) · 362786c5
      yoav-steinberg authored
      Gopher support was added mainly because it was simple (trivial to add).
      But apparently even something that was trivial at the time can cause complications
      down the line when adding more features.
      We recently ran into a few issues with io-threads conflicting with the gopher support.
      We had to either complicate the code further in order to solve them, or drop gopher.
      AFAIK it's completely unused, so we'd rather remove it than keep supporting it.
      362786c5
  30. 21 Apr, 2021 1 commit
  31. 19 Apr, 2021 2 commits
    • Hanna Fadida's avatar
      Modules: adding a module type for key space notification (#8759) · 53a4d6c3
      Hanna Fadida authored
      Adding a new type mask for key space notification, REDISMODULE_NOTIFY_MODULE, to enable unique notifications from commands on REDISMODULE_KEYTYPE_MODULE type keys (which is currently unsupported).
      
      Modules can subscribe to module key keyspace notifications via RM_SubscribeToKeyspaceEvents,
      and clients via notify-keyspace-events in redis.conf or CONFIG SET, with the characters 'd' or 'A'
      (the REDISMODULE_NOTIFY_MODULE type mask is part of the '**A**ll' notation for key space notifications).
      
      Refactor: move some pubsub test infra from pubsub.tcl to util.tcl to be re-used by other tests.
      53a4d6c3
    • Harkrishn Patro's avatar
      ACL channels permission handling for save/load scenario. (#8794) · 7a3d1487
      Harkrishn Patro authored
      
      
      In the initial release of Redis 6.2 setting a user to only allow pubsub access to
      a specific channel, and doing ACL SAVE, resulted in an assertion when
      ACL LOAD was used. This was later changed by #8723 (not yet released),
      but still not properly resolved (now it errors instead of crash).
      
      The problem is that the server that generates an ACL file doesn't know what
      the setting of the acl-pubsub-default config will be in the server that loads it,
      so ACL SAVE needs to always start with the resetchannels directive.
      
      This should still be compatible with old acl files (from redis 6.0), and ones from earlier
      versions of 6.2 that didn't mess with channels.
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      7a3d1487
  32. 04 Apr, 2021 1 commit
    • Sokolov Yura's avatar
      Add cluster-allow-replica-migration option. (#5285) · 1cab9620
      Sokolov Yura authored
      
      
      Previously (and by default after this commit), when a master loses its last slot
      (due to migration, for example), its replicas migrate to the new last slot
      holder.
      
      There are cases where this is not desired:
      * Consolidation that results in removed nodes (including the replica, eventually).
      * Manually configured cluster topologies, which the admin wishes to preserve.
      
      Needlessly migrating a replica triggers a full synchronization and can have a negative impact, so
      we prefer to be able to avoid it where possible.
      
      This commit adds the 'cluster-allow-replica-migration' configuration option, which is
      enabled by default to preserve the existing behavior. When disabled, replicas will
      not be auto-migrated.
      
      Fixes #4896
      Co-authored-by: Oran Agra <oran@redislabs.com>
      1cab9620
  33. 30 Mar, 2021 2 commits
    • Jérôme Loyet's avatar
      Add replica-announced config option (#8653) · 91f4f416
      Jérôme Loyet authored
      The 'sentinel replicas <master>' command will ignore replicas with
      `replica-announced` set to no.
      
      The goal of disabling the replica-announced config setting is to allow ghost
      replicas. The replica is in the cluster, synchronizes with its master, can be
      promoted to master, and is not exposed to sentinel clients. This way, it
      acts as a live backup or living ghost.
      
      In addition, to prevent the replica from being promoted to master, set
      replica-priority to 0.
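      
      An illustrative redis.conf snippet for such a ghost replica:
      
      ```
      replica-announced no   # hide from 'sentinel replicas <master>'
      replica-priority 0     # never promote this replica to master
      ```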
      91f4f416
    • Viktor Söderqvist's avatar
      Add support for plaintext clients in TLS cluster (#8587) · 5629dbe7
      Viktor Söderqvist authored
      The cluster bus is established over TLS or non-TLS depending on the configuration tls-cluster. The client ports distributed in the cluster and sent to clients are assumed to be TLS or non-TLS also depending on tls-cluster.
      
      The cluster bus is now extended to also contain the non-TLS port of clients in a TLS cluster, when available. The non-TLS port of a cluster node, when available, is sent to clients connected without TLS in responses to CLUSTER SLOTS, CLUSTER NODES, CLUSTER SLAVES and MOVED and ASK redirects, instead of the TLS port.
      
      The user was able to override the client port by defining cluster-announce-port. Now cluster-announce-tls-port is added, so the user can define an alternative announce port for both TLS and non-TLS clients.
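      
      An illustrative redis.conf snippet for a TLS-cluster node that also serves plaintext clients (port numbers are hypothetical):
      
      ```
      tls-cluster yes
      tls-port 7000
      port 7001
      cluster-announce-tls-port 7000   # announced to TLS clients
      cluster-announce-port 7001       # announced to non-TLS clients
      ```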
      
      Fixes #8134
      5629dbe7