"vscode:/vscode.git/clone" did not exist on "fe273b3829511febe42f9b854ba921213f7bedbb"
  1. 28 Feb, 2023 3 commits
    • Propagate message to a node only if the cluster link is healthy. (#11752) · ca0b6cae
      Harkrishn Patro authored
      Currently, while a sharded pubsub publish tries to propagate the message across the cluster, a NULL check for clusterLink is missing. clusterLink can be NULL if the link consumes memory beyond the configured threshold cluster-link-sendbuf-limit and the server terminates the link.
      
      This change introduces two things:
      
      1. Avoid a crash on the publishing node when a message is sent to a node whose link is NULL (sketched below).
      2. Add a debugging tool, CLUSTERLINK KILL, to terminate the clusterLink between two nodes.
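
      A minimal sketch of the NULL-link guard; the types and functions below are illustrative stand-ins, not the actual cluster.c internals:

      ```
      /* Simplified sketch of the NULL-link guard; illustrative stand-in types,
       * not the actual Redis cluster internals. */
      #include <stdio.h>
      #include <stddef.h>

      typedef struct clusterLink { int fd; } clusterLink;
      typedef struct clusterNode {
          const char *name;
          clusterLink *link; /* NULL if the link was torn down, e.g. after
                                exceeding cluster-link-sendbuf-limit */
      } clusterNode;

      static void clusterSendMessageToNode(clusterNode *node, const char *msg) {
          /* The fix: skip a node whose link is down instead of dereferencing
           * a NULL pointer and crashing the publishing node. */
          if (node->link == NULL) {
              printf("skipping %s: link is down\n", node->name);
              return;
          }
          printf("sending to %s over fd %d: %s\n", node->name, node->link->fd, msg);
      }

      int main(void) {
          clusterLink up = { .fd = 7 };
          clusterNode healthy = { .name = "node-a", .link = &up };
          clusterNode broken  = { .name = "node-b", .link = NULL };
          clusterSendMessageToNode(&healthy, "hello shard");
          clusterSendMessageToNode(&broken, "hello shard");
          return 0;
      }
      ```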
      
      (cherry picked from commit fd397568)
    • Optimization: sdsRemoveFreeSpace to avoid realloc on noop (#11766) · af80a4a5
      uriyage authored
      
      
      In #7875 (Redis 6.2), we changed the sds alloc to be the usable allocation
      size in order to:
      
      > reduce the need for realloc calls by making the sds implicitly take over
      the internal fragmentation
      
      This change was applied to most sds functions, excluding `sdsRemoveFreeSpace` and
      `sdsResize`. The reason is that in some places (e.g. clientsCronResizeQueryBuffer)
      we call sdsRemoveFreeSpace when we see excessive free space and want to trim it,
      so if we don't trim it exactly to size, the caller may still see excessive free space and
      call it again and again.
      
      However, this resulted in some excessive calls to realloc, even when there's no need
      and it would be a no-op (e.g. when reducing a 15 byte allocation to 13).
      
      It turns out that a call to realloc with jemalloc can be expensive even if it ends up
      doing nothing, so this PR adds a check using `je_nallocx`, which is cheap, to avoid
      the call to realloc.
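
      A minimal sketch of the idea, assuming jemalloc is linked so `nallocx()` is available (Redis builds jemalloc with a `je_` prefix, hence `je_nallocx`); `shrink_if_needed` is an illustrative helper, not the actual sds code:

      ```
      /* Sketch: skip realloc() when the allocator would hand back the same
       * size class anyway. Assumes jemalloc for nallocx(); the helper name
       * is illustrative, not the actual sds code. */
      #include <stdlib.h>
      #include <jemalloc/jemalloc.h>

      char *shrink_if_needed(char *buf, size_t cur_request, size_t new_request) {
          /* nallocx() cheaply reports the real size jemalloc would allocate
           * for a request, without allocating. If the size class doesn't
           * change, realloc would be a no-op, so skip the call entirely. */
          if (nallocx(new_request, 0) == nallocx(cur_request, 0)) return buf;
          return realloc(buf, new_request);
      }

      int main(void) {
          char *p = malloc(15);
          p = shrink_if_needed(p, 15, 13); /* same size class: no realloc call */
          free(p);
          return 0;
      }
      ```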
      
      In addition, this PR unifies sdsResize and sdsRemoveFreeSpace into common
      code. The difference between them was that sdsResize would avoid using SDS_TYPE_5,
      since it wants to keep the string ready to be resized again, while sdsRemoveFreeSpace
      would permit using SDS_TYPE_5 and get optimal memory consumption.
      Now both methods take a `would_regrow` argument that makes this more explicit.

      The only actual impact of that is that clientsCronResizeQueryBuffer calls both sdsResize
      and sdsRemoveFreeSpace in different cases, and we now prevent the use of SDS_TYPE_5 in both.
      
      The new test that was added to cover this concern passed before this PR as well;
      this PR is just a performance optimization and cleanup.
      
      Benchmark:
      `redis-benchmark -c 100 -t set  -d 512 -P 10  -n  100000000`
      on i7-9850H with jemalloc, shows improvement from 1021k ops/sec to 1067k (average of 3 runs).
      some 4.5% improvement.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit 46393f98)
    • Optimize the performance of cluster slots for non-continuous slots (#11745) · c0e064ef
      Madelyn Olson authored
      This change improves the performance of `cluster slots` by removing the deferred lengths that were used. Deferred lengths are used in two contexts: the first is determining the number of replicas that serve a slot (added in 6.2 as part of a different performance improvement), and the second is determining the extra networking options for each node (added in 7.0). For continuous slots (e.g. 0-8196) this improvement is negligible, but it becomes more significant when slots are not continuous (e.g. 0 2 4 6 etc.), which can happen in production for various users.
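
      A rough sketch of the shape of the change: instead of reserving a deferred-length placeholder and patching it after the elements are emitted, count the matching elements first and write the exact array header. printf stands in for Redis' addReply* API, and the node layout is illustrative:

      ```
      /* Generic sketch: precompute the element count and emit an exact RESP
       * array header, instead of a deferred-length placeholder that must be
       * patched later. Not the actual cluster.c code. */
      #include <stdio.h>
      #include <string.h>

      typedef struct { int serves_slot; const char *name; } node_t;

      static void reply_nodes_for_slot(const node_t *nodes, int n) {
          int count = 0;
          for (int i = 0; i < n; i++)
              if (nodes[i].serves_slot) count++;
          printf("*%d\r\n", count); /* exact length, no later patching */
          for (int i = 0; i < n; i++)
              if (nodes[i].serves_slot)
                  printf("$%zu\r\n%s\r\n", strlen(nodes[i].name), nodes[i].name);
      }

      int main(void) {
          node_t nodes[] = { {1, "node-a"}, {0, "node-b"}, {1, "node-c"} };
          reply_nodes_for_slot(nodes, 3);
          return 0;
      }
      ```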
      
      The `cluster slots` command is deprecated in favor of `cluster shards`, but since most clients don't support the new command yet I think it's important to not degrade performance here.
      
      Benchmarking shows about a 2x improvement, however I wasn't able to get a coherent TPS number since the benchmark process was being saturated long before Redis was, so I had to run multiple benchmarks and merge the results. If needed, I can add this to our memtier framework. Instead, the next section shows the number of usec per call from the benchmark results, which shows a significant improvement as well as a more coherent response in the CoB.
      
      | | New Code | Old Code | % Improvement |
      |----|----|----|----|
      | Uniform slots | usec_per_call=10.46 | usec_per_call=11.03 | 5.7% |
      | Worst case (only even slots) | usec_per_call=963.80 | usec_per_call=2950.99 | 307% |
      
      This change also removes some extra whitespace that I added when making a code change for adding hostnames.
      
      (cherry picked from commit e74a1f3b)
  2. 12 Dec, 2022 3 commits
  3. 21 Sep, 2022 5 commits
  4. 11 Jul, 2022 1 commit
  5. 04 Jul, 2022 1 commit
    • Unlock cluster config file upon server shutdown. (#10912) · 33b7ff38
      Qu Chen authored
      Currently in cluster mode, Redis process locks the cluster config file when
      starting up and holds the lock for the entire lifetime of the process.
      When the server shuts down, it doesn't explicitly release the lock on the
      cluster config file. We noticed a problem in restart testing: if you shut down
      a very large redis-server process (i.e. with several hundred GB of data stored),
      it takes the OS a while to free the resources and unlock the cluster config file.
      So if we immediately try to restart the redis server process, it might fail to acquire
      the lock on the cluster config file and fail to come up.
      
      This fix explicitly releases the lock on the cluster config file upon shutdown rather
      than relying on the OS to release it, which is a cleaner and safer approach to
      freeing up acquired resources.
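
      A minimal sketch of the lock/unlock pattern described above, using flock(2); the file name and helper names are illustrative, not the actual cluster.c code:

      ```
      /* Sketch: take the config-file lock at startup, release it explicitly
       * on shutdown instead of waiting for the OS to reclaim it. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/file.h>
      #include <unistd.h>

      static int lock_config(const char *path) {
          int fd = open(path, O_WRONLY | O_CREAT, 0644);
          if (fd == -1) return -1;
          if (flock(fd, LOCK_EX | LOCK_NB) == -1) { /* taken at startup */
              close(fd);
              return -1;
          }
          return fd;
      }

      static void unlock_config(int fd) {
          /* The fix: release the lock on shutdown so an immediate restart of
           * a (possibly huge) process can acquire it right away. */
          flock(fd, LOCK_UN);
          close(fd);
      }

      int main(void) {
          int fd = lock_config("nodes.conf");
          if (fd == -1) { perror("lock_config"); return 1; }
          /* ... server lifetime ... */
          unlock_config(fd);
          return 0;
      }
      ```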
  6. 28 Jun, 2022 1 commit
  7. 23 Jun, 2022 1 commit
    • migrateGetSocket() cleanup.. (#5546) · 64205345
      WuYunlong authored
      I think parameter c is only used to get the client reply.
      Besides, other commands' host and port parameters may not be at index 1 and 2.
  8. 21 Jun, 2022 2 commits
  9. 14 Jun, 2022 1 commit
    • Throw -TRYAGAIN instead of -ASK on migrating nodes for multi-key commands when the node only has some of the keys (#9526) · 78960ad5
      Huang Zhw authored
      
      * In cluster getNodeByQuery, when the target slot is in migrating state and
      the slot lacks some of the keys but has at least one of them, we should return TRYAGAIN.
      
      Before this commit, when a node is in migrating state and receives a
      multi-key command, if some keys don't exist, the command emits
      an `ASK` redirection.
      
      After this commit, if some keys exist and some keys don't exist, the
      command emits a TRYAGAIN error. If none of the keys exist, the command
      emits an `ASK` redirection.
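
      A minimal sketch of the decision described above; the enum and the present/missing key counts are simplified stand-ins for what getNodeByQuery computes per command:

      ```
      /* Sketch of the redirect-vs-error decision for a migrating slot. */
      #include <stdio.h>

      typedef enum { SERVE_LOCALLY, REDIRECT_ASK, ERROR_TRYAGAIN } verdict_t;

      static verdict_t migrating_slot_verdict(int keys_present, int keys_missing) {
          if (keys_missing == 0) return SERVE_LOCALLY;  /* all keys still here */
          if (keys_present == 0) return REDIRECT_ASK;   /* none here: redirect */
          return ERROR_TRYAGAIN;                        /* split across nodes */
      }

      int main(void) {
          printf("%d\n", migrating_slot_verdict(2, 1)); /* some here -> TRYAGAIN */
          printf("%d\n", migrating_slot_verdict(0, 3)); /* none here -> ASK */
          return 0;
      }
      ```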
  10. 06 Jun, 2022 1 commit
    • Update cluster.c (#10773) · c751d8a6
      Mixficsol authored
      On line 4068, the outer if layer already checks nodeIsSlave(myself),
      so the repeated inner check can be deleted.
  11. 10 May, 2022 1 commit
    • CLUSTER SHARDS should returns slots as integers, not strings (#10683) · 2a1ea8c7
      Binbin authored
      It used to return slots as strings, like:
      ```
      redis> cluster shards
      1) 1) "slots"
         2) 1) "10923"
            2) "16383"
      ```
      
      The CLUSTER SHARDS docs and the top comment of #10293 say that it returns integers.
      Note that other commands, like CLUSTER SLOTS, return slots as integers.
      Use addReplyLongLong instead of addReplyBulkLongLong, so it now returns slots as integers:
      ```
      redis> cluster shards
      1) 1) "slots"
         2) 1) (integer) 10923
            2) (integer) 16383
      ```
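
      For context, a minimal sketch of the wire-format difference: addReplyBulkLongLong emits the number as a RESP bulk string, while addReplyLongLong emits a RESP integer. printf stands in for the actual reply functions:

      ```
      /* Sketch of the RESP encoding difference behind the two reply helpers. */
      #include <stdio.h>

      static void reply_bulk_long_long(long long v) { /* old: "$5\r\n10923\r\n" */
          char buf[32];
          int len = snprintf(buf, sizeof(buf), "%lld", v);
          printf("$%d\r\n%s\r\n", len, buf);
      }

      static void reply_long_long(long long v) {      /* new: ":10923\r\n" */
          printf(":%lld\r\n", v);
      }

      int main(void) {
          reply_bulk_long_long(10923);
          reply_long_long(10923);
          return 0;
      }
      ```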
      
      This is a small breaking change, introduced in 7.0.0 (7.0 RC3, #10293)
      
      Fixes #10680
  12. 17 Apr, 2022 1 commit
    • Add RM_PublishMessageShard (#10543) · f49ff156
      guybe7 authored
      Since PUBLISH and SPUBLISH use different dictionaries for channels and clients,
      and we already have an API for PUBLISH, it only makes sense to have one for SPUBLISH.

      Adds test coverage and unifies some test infrastructure.
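
      A hedged sketch of how a module might use the new API, assuming RM_PublishMessageShard mirrors the existing RedisModule_PublishMessage signature (ctx, channel, message); the command and module names are made up for illustration:

      ```
      /* Sketch only: forwards the command's arguments to a shard channel.
       * Assumes RedisModule_PublishMessageShard(ctx, channel, message). */
      #include "redismodule.h"

      static int MySPublish(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          if (argc != 3) return RedisModule_WrongArity(ctx);
          RedisModule_PublishMessageShard(ctx, argv[1], argv[2]);
          return RedisModule_ReplyWithSimpleString(ctx, "OK");
      }

      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "spubdemo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          if (RedisModule_CreateCommand(ctx, "my.spublish", MySPublish,
                                        "pubsub", 0, 0, 0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          return REDISMODULE_OK;
      }
      ```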
  13. 10 Apr, 2022 1 commit
  14. 05 Apr, 2022 1 commit
  15. 02 Apr, 2022 1 commit
    • Turn into replica on SETSLOT (#10489) · b53c7f2c
      Viktor Söderqvist authored
      * Fix race condition where node loses its last slot and turns into replica
      
      When a node has lost its last slot and finds out from the SETSLOT command
      before the cluster bus PONG from the new owner arrives, the node didn't
      turn itself into a replica of the new slot owner.
      
      This commit adds the same logic to the SETSLOT command as already exists
      for the cluster bus PONG processing.
      
      * Revert "Fix new / failing cluster slot migration test (#10482)"
      
      This reverts commit 0b21ef8d.
      
      In this test, the old slot owner finds out that it has lost its last
      slot in a nondeterministic way: either via the cluster bus PONG from the
      new slot owner or via a SETSLOT command from redis-cli. In both cases,
      the result should be the same and the old owner should turn itself into
      a replica of the new slot owner.
  16. 29 Mar, 2022 1 commit
    • improve malloc efficiency for cluster slots_info_pairs (#10488) · 3b1e65a3
      Oran Agra authored
      This commit improves the malloc efficiency of the slots_info_pairs mechanism in cluster.c
      by changing the adlist into an array that is realloced with a greedy growth mechanism.
      
      Recently the cluster tests have been consistently failing when executed with ASAN in the CI.
      I tried to track down the commit that started it, and it appears to be #10293.
      Looking at the commit, I realized it didn't affect this test / flow, other than the
      replacement of the slots_info_pairs from sds to list.
      
      I concluded that what could be happening is that the slot range is very fragmented,
      and that results in many allocations.
      With sds, it results in one allocation, and we also have a greedy growth mechanism,
      but with adlist we just have many, many small allocations.
      This probably causes stress on ASAN and causes it to be slow at termination.
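
      A minimal sketch of the greedy-growth array idea; the slot_pairs_t container is illustrative, not the actual slots_info_pairs code:

      ```
      /* Sketch: a flat array that doubles its capacity when full, so a highly
       * fragmented slot map costs O(log n) reallocations instead of one small
       * allocation per range. */
      #include <stdint.h>
      #include <stdlib.h>

      typedef struct {
          uint16_t *pairs;   /* flat array of start,end slot pairs */
          size_t count;      /* number of uint16_t values stored */
          size_t capacity;   /* allocated number of uint16_t values */
      } slot_pairs_t;

      static int slot_pairs_push(slot_pairs_t *sp, uint16_t start, uint16_t end) {
          if (sp->count + 2 > sp->capacity) {
              size_t newcap = sp->capacity ? sp->capacity * 2 : 16; /* greedy growth */
              uint16_t *p = realloc(sp->pairs, newcap * sizeof(*p));
              if (!p) return -1;
              sp->pairs = p;
              sp->capacity = newcap;
          }
          sp->pairs[sp->count++] = start;
          sp->pairs[sp->count++] = end;
          return 0;
      }

      int main(void) {
          slot_pairs_t sp = {0};
          for (uint16_t s = 0; s < 16384; s += 2)  /* worst case: every other slot */
              if (slot_pairs_push(&sp, s, s) != 0) return 1;
          free(sp.pairs);
          return 0;
      }
      ```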
  17. 21 Mar, 2022 1 commit
  18. 16 Mar, 2022 2 commits
  19. 03 Mar, 2022 1 commit
  20. 28 Feb, 2022 1 commit
  21. 23 Feb, 2022 1 commit
    • Add stream consumer group lag tracking and reporting (#9127) · c81c7f51
      Itamar Haber authored
      
      
      Adds the ability to track the lag of a consumer group (CG), that is, the number
      of entries yet-to-be-delivered from the stream.
      
      The proposed constant-time solution is in the spirit of "best-effort."
      
      Partially addresses #8737.
      
      ## Description of approach
      
      We add a new "entries_added" property to the stream. This starts at 0 for a new
      stream and is incremented by 1 with every `XADD`.  It is essentially an all-time
      counter of the entries added to the stream.
      
      Given the stream's length and this counter value, we can trivially find the logical
      "entries_added" counter of the first ID if and only if the stream is contiguous.
      A fragmented stream contains one or more tombstones generated by `XDEL`s.
      The new "xdel_max_id" stream property tracks the latest tombstone.
      
      The CG also tracks the logical position of its last delivered ID as an "entries_read"
      counter and increments it independently when delivering new messages, unless this
      read counter is invalid (-1 means an invalid offset). When the CG's counter is
      available, the reported lag is the difference between the added and read counters.
      
      Lastly, this also adds a "first_id" field to the stream structure in order to make
      looking it up cheaper in most cases.
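
      A small worked sketch of the lag arithmetic described above; the field names mirror the commit message, but the struct itself is illustrative:

      ```
      /* Sketch: lag = entries_added - entries_read, or "unknown" when the
       * read counter is invalid. */
      #include <stdio.h>
      #include <stdint.h>

      typedef struct {
          int64_t entries_added;   /* all-time count of XADDed entries */
          int64_t entries_read;    /* CG's logical read counter, -1 if invalid */
      } lag_state_t;

      /* Returns -1 when the lag can't be tracked (invalid read counter). */
      static int64_t cg_lag(const lag_state_t *s) {
          if (s->entries_read < 0) return -1;
          return s->entries_added - s->entries_read;
      }

      int main(void) {
          lag_state_t s = { .entries_added = 12, .entries_read = 9 };
          printf("lag=%lld\n", (long long)cg_lag(&s));   /* 3 entries pending */
          s.entries_read = -1;                           /* e.g. arbitrary last ID */
          printf("lag=%lld\n", (long long)cg_lag(&s));   /* -1: reported as null */
          return 0;
      }
      ```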
      
      ## Limitations
      
      There are two cases in which the mechanism isn't able to track the lag.
      In these cases, `XINFO` replies with `null` in the "lag" field.
      
      The first case is when a CG is created with an arbitrary last delivered ID,
      that isn't "0-0", nor the first or the last entries of the stream. In this case,
      it is impossible to obtain a valid read counter (short of an O(N) operation).
      The second case is when there are one or more tombstones fragmenting
      the stream's entries range.
      
      In both cases, given enough time and assuming that the consumers are
      active (reading and acking) and advancing, the CG should be able to
      catch up with the tip of the stream and report zero lag.
      Once that's achieved, lag tracking resumes as normal (until the
      next tombstone is set).
      
      ## API changes
      
      * `XGROUP CREATE` added with the optional named argument `[ENTRIESREAD entries-read]`
        for explicitly specifying the new CG's counter.
      * `XGROUP SETID` added with an optional positional argument `[ENTRIESREAD entries-read]`
        for specifying the CG's counter.
      * `XINFO` reports the maximal tombstone ID, the recorded first entry ID, and total
        number of entries added to the stream.
      * `XINFO` reports the current lag and logical read counter of CGs.
      * `XSETID` is an internal command that's used in replication/aof. It has been added with
        the optional positional arguments `[ENTRIESADDED entries-added] [MAXDELETEDID max-deleted-entry-id]`
        for propagating the CG's offset and maximal tombstone ID of the stream.
      
      ## The generic unsolved problem
      
      The current stream implementation doesn't provide an efficient way to obtain the
      approximate/exact size of a range of entries. While it could've been nice to have
      that ability (#5813) in general, let alone specifically in the context of CGs, the risk
      and complexities involved in such implementation are in all likelihood prohibitive.
      
      ## A refactoring note
      
      The `streamGetEdgeID` has been refactored to accommodate both the existing seek
      of any entry as well as seeking non-deleted entries (the addition of the `skip_tombstones`
      argument). Furthermore, this refactoring also migrated the seek logic to use the
      `streamIterator` (rather than `raxIterator`) that was, in turn, extended with the
      `skip_tombstones` Boolean struct field to control the emission of these.
      Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  22. 20 Feb, 2022 1 commit
    • Show publishshard_sent stat in cluster info (#10314) · c0ea77f0
      Binbin authored
      publishshard was added in #8621 (7.0 RC1), but the publishshard_sent
      stat is not shown in CLUSTER INFO command.
      
      Other changes:
      1. Remove useless `needhelp` statements; `needhelp` itself was removed in 3dad8196.
      2. Use the `LL_WARNING` log level for some error logs (I/O error, connection failed).
      3. Fix typos seen along the way.
  23. 16 Feb, 2022 1 commit
  24. 11 Feb, 2022 1 commit
  25. 23 Jan, 2022 1 commit
    • sub-command support for ACL CAT and COMMAND LIST. redisCommand always stores fullname (#10127) · 23325c13
      Binbin authored
      
      
      Summary of changes:
      1. Rename `redisCommand->name` to `redisCommand->declared_name`, it is a
        const char * for native commands and SDS for module commands.
      2. Store the [sub]command fullname in `redisCommand->fullname` (sds).
      3. List subcommands in `ACL CAT`
      4. List subcommands in `COMMAND LIST`
      5. `moduleUnregisterCommands` now will also free the module subcommands.
      6. RM_GetCurrentCommandName returns full command name
      
      Other changes:
      1. Add `addReplyErrorArity` and `addReplyErrorExpireTime`
      2. Remove the `getFullCommandName` function, which is now useless.
      3. Some cleanups around `fullname` since it is now an SDS.
      4. Delete the `populateSingleCommand` function from server.h, which is useless.
      5. Added tests to cover this change.
      6. Add some module unload tests and fix the leaks
      7. Make error messages uniform, making sure they always contain the full command
        name and that it's quoted.
      8. Fix some typos
      
      See the history in #9504; fixes #10124.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: guybe7 <guy.benoish@redislabs.com>
  26. 20 Jan, 2022 1 commit
  27. 18 Jan, 2022 1 commit
    • Use const char pointer in redismodule.h as far as possible (#10064) · d697daa7
      Wang Yuan authored
      When I used C++ to develop a Redis module, I used `string.data()` as the second parameter `ele`
      of `RedisModule_DigestAddStringBuffer`, but there was a warning. Since we never change `ele`,
      I think we should use `const char` for it.
      
      This PR adds const to just a handful of module APIs that required it, all not very widely used.
      The implication is a breaking change in terms of compilation errors that are easy to resolve, with no ABI impact.
      The affected APIs are around Digest, Info injection, and Cluster bus messages.
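
      A minimal sketch of the const-correctness issue being fixed; the function names are illustrative, not the real module APIs:

      ```
      /* Sketch: read-only data (e.g. from a C++ std::string's data()) can't be
       * passed to a char * parameter without a warning or a cast. */
      #include <stddef.h>
      #include <stdio.h>

      static void add_buffer_old(char *ele, size_t len) {         /* pre-change style */
          printf("old: %.*s\n", (int)len, ele);
      }

      static void add_buffer_const(const char *ele, size_t len) { /* const-correct */
          printf("new: %.*s\n", (int)len, ele);
      }

      int main(void) {
          const char *ele = "payload";        /* caller only has const data */
          /* add_buffer_old(ele, 7);     warns: discards 'const' qualifier */
          add_buffer_old((char *)ele, 7);     /* needs an ugly cast */
          add_buffer_const(ele, 7);           /* no cast, no warning */
          return 0;
      }
      ```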
  28. 03 Jan, 2022 2 commits
  29. 02 Jan, 2022 1 commit
    • Wait for replicas when shutting down (#9872) · 45a155bd
      Viktor Söderqvist authored
      
      
      To avoid data loss, this commit adds a grace period for lagging replicas to
      catch up with the replication offset (a rough sketch of this wait follows the list below).
      
      Done:
      
      * Wait for replicas when shutdown is triggered by SIGTERM and SIGINT.
      
      * Wait for replicas when shutdown is triggered by the SHUTDOWN command. A new
        blocked client type BLOCKED_SHUTDOWN is introduced, allowing multiple clients
        to call SHUTDOWN in parallel.
        Note that they don't expect a response unless an error happens and shutdown is aborted.
      
      * Log warning for each replica lagging behind when finishing shutdown.
      
      * CLIENT_PAUSE_WRITE while waiting for replicas.
      
      * Configurable grace period 'shutdown-timeout' in seconds (default 10).
      
      * New flags for the SHUTDOWN command:
      
          - NOW disables the grace period for lagging replicas.
      
          - FORCE ignores errors writing the RDB or AOF files which would normally
            prevent a shutdown.
      
          - ABORT cancels ongoing shutdown. Can't be combined with other flags.
      
      * New field in the output of the INFO command: 'shutdown_in_milliseconds'. The
        value is the remaining maximum time to wait for lagging replicas before
        finishing the shutdown. This field is present in the Server section **only**
        during shutdown.
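
      A rough sketch of the grace-period wait described above; the replica offsets and their progress are simulated, and all names are illustrative:

      ```
      /* Sketch: wait until all replicas acked the master offset or the
       * configured grace period ('shutdown-timeout') expires. */
      #include <stdbool.h>
      #include <stdio.h>
      #include <time.h>

      typedef struct { long long ack_offset; } replica_t;

      static bool replicas_caught_up(const replica_t *r, int n, long long master_off) {
          for (int i = 0; i < n; i++)
              if (r[i].ack_offset < master_off) return false;
          return true;
      }

      int main(void) {
          replica_t replicas[] = { { .ack_offset = 90 }, { .ack_offset = 100 } };
          long long master_offset = 100;
          int shutdown_timeout_s = 10;                  /* 'shutdown-timeout' */
          time_t deadline = time(NULL) + shutdown_timeout_s;

          /* Writes would be paused here (CLIENT_PAUSE_WRITE) so the master
           * offset stays stable while the replicas drain the backlog. */
          while (!replicas_caught_up(replicas, 2, master_offset)) {
              if (time(NULL) >= deadline) {
                  fprintf(stderr, "replica still lagging at exit\n");
                  break;
              }
              replicas[0].ack_offset += 5;              /* simulated ack progress */
          }
          printf("finishing shutdown\n");
          return 0;
      }
      ```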
      
      Not directly related:
      
      * When shutting down, if there is an AOF saving child, it is killed **even** if AOF
        is disabled. This can happen if BGREWRITEAOF is used when AOF is off.
      
      * Client pause now has end time and type (WRITE or ALL) per purpose. The
        different pause purposes are *CLIENT PAUSE command*, *failover* and
        *shutdown*. If clients are unpaused for one purpose, it doesn't affect client
        pause for other purposes. For example, the CLIENT UNPAUSE command doesn't
        affect client pause initiated by the failover or shutdown procedures. A completed
        failover or a failed shutdown doesn't unpause clients paused by the CLIENT
        PAUSE command.
      
      Notes:
      
      * DEBUG RESTART doesn't wait for replicas.
      
      * We already have a warning logged when a replica disconnects. This means that
        if any replica connection is lost during the shutdown, it is either logged as
        disconnected or as lagging at the time of exit.
      Co-authored-by: Oran Agra <oran@redislabs.com>