1. 07 Feb, 2024 1 commit
  2. 05 Feb, 2024 1 commit
    • Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822) · 8cd62f82
      guybe7 authored
      # Description
      Gather most of the scattered `redisDb`-related code from the per-slot
      dict PR (#11695) and turn it into a new data structure, `kvstore`,
      i.e. a self-contained structure that represents an array of dictionaries.
      
      # Motivation
      The main motivation is code cleanliness: the idea of using an array of
      dictionaries is very well-suited to becoming a self-contained data
      structure.
      This allowed cleaning up some ugly code, among others: loops that run
      twice, once on the main dict and once on the expires dict, and
      duplicated code for allocating and releasing this data structure.
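      
      As a rough sketch of the idea (field and helper names here are
      illustrative assumptions, not the actual kvstore API):
      ```
      /* Illustrative sketch of an array-of-dicts container; the real
       * kvstore differs in detail. */
      typedef struct kvstore {
          dict **dicts;                 /* one dict per slot, or a single dict */
          int num_dicts;                /* 16384 in cluster mode, 1 otherwise */
          unsigned long long key_count; /* total keys, for O(1) size queries */
          list *rehashing;              /* dicts currently rehashing; owned by
                                         * the kvstore, no longer by the server */
          int flags;                    /* e.g. "allocate dicts on first insert" */
      } kvstore;

      /* Lazy allocation: create the per-slot dict only when the first key
       * for that slot is inserted (hypothetical helper). */
      static dict *kvstoreGetOrCreateDict(kvstore *kvs, int didx, dictType *type) {
          if (kvs->dicts[didx] == NULL)
              kvs->dicts[didx] = dictCreate(type);
          return kvs->dicts[didx];
      }
      ```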
      
      # Notes
      1. This PR reverts the part of https://github.com/redis/redis/pull/12848
      where the `rehashing` list is global (handling rehashing `dict`s is
      under the responsibility of `kvstore`, and should not be managed by the
      server)
      2. This PR also replaces the type of `server.pubsubshard_channels` from
      `dict**` to `kvstore` (original PR:
      https://github.com/redis/redis/pull/12804). After that was done,
      `server.pubsub_channels` was also chosen to be a `kvstore` (with only one
      `dict`, which seems odd) just to make the code cleaner by making it the
      same type as `server.pubsubshard_channels`, see
      `pubsubtype.serverPubSubChannels`
      3. the keys and expires kvstores are currently configured to allocate
      the individual dicts only when the first key is added (unlike before,
      when they were allocated in advance), but they won't release them when
      the last key is deleted.
      
      Worth mentioning that, due to this change, the reply of DEBUG
      HTSTATS changed in the case where no keys were ever added to the db.
      
      before:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ```
      
      after:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      [Expires HT]
      ```
  3. 23 Jan, 2024 1 commit
    • Add sender NULL check in clusterProcessGossipSection invalid_ids case (#12980) · 07b292af
      Binbin authored
      In the following case the sender may be unknown, so we need a NULL
      check for `sender`:
      ```
      /* If this is a MEET packet from an unknown node, we still process
       * the gossip section here since we have to trust the sender because
       * of the message type. */
      if (!sender && type == CLUSTERMSG_TYPE_MEET)
          clusterProcessGossipSection(hdr,link);
      ```
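      
      A minimal sketch of the shape of the added guard (simplified; the
      surrounding invalid_ids handling in clusterProcessGossipSection is
      abbreviated):
      ```
      /* Sketch only: when a gossiped node id fails validation, the log
       * line must not dereference `sender`, which can be NULL for a MEET
       * packet from an unknown node. */
      if (verifyClusterNodeId(g->nodename, CLUSTER_NAMELEN) == C_ERR) {
          serverLog(LL_WARNING, "Invalid node id gossiped by %.40s",
                    sender ? sender->name : "unknown sender");
          continue;
      }
      ```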
  4. 22 Jan, 2024 1 commit
    • Prevent nodes with invalid IDs from being propagated through gossip (#12921) · e12f2dec
      Brennan authored
      
      
      There have been occasional instances of memory corruption (through code bugs or bit flips) leading to invalid node information being gossiped around. To prevent this invalid information from spreading, we verify that the node IDs in received gossip are in an acceptable format, and disregard any gossiped nodes with invalid IDs. This PR uses the existing verifyClusterNodeId function to check the validity of the gossiped node IDs; if an invalid one is encountered, it logs raw byte information to help debug the corruption.
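      
      For reference, a valid node ID is a fixed-length string of lowercase
      hex characters; a minimal sketch of such a format check, assuming
      CLUSTER_NAMELEN is 40 as in cluster.h (the real verifyClusterNodeId
      may differ in detail):
      ```
      #define CLUSTER_NAMELEN 40 /* as in cluster.h */

      /* Sketch: accept only IDs that are exactly 40 lowercase hex chars. */
      static int nodeIdLooksValid(const char *name, int length) {
          if (length != CLUSTER_NAMELEN) return 0;
          for (int i = 0; i < length; i++) {
              char c = name[i];
              if (!((c >= 'a' && c <= 'f') || (c >= '0' && c <= '9'))) return 0;
          }
          return 1;
      }
      ```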
      
      ---------
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
  5. 11 Jan, 2024 2 commits
  6. 08 Jan, 2024 1 commit
    • Fix CLUSTER SHARDS crash in 7.0/7.2 mixed clusters where shard ids are not synced (#12832) · 5b0c6a82
      Binbin authored
      Crash reported in #12695. In the process of upgrading a cluster from
      7.0 to 7.2, the 7.0 nodes will not gossip their shard id, while in 7.2
      we rely on the shard id to build the server.cluster->shards dict.
      
      In some cases, for example with a 7.0 master node and a 7.2 replica node,
      from the view of the 7.2 replica node the cluster->shards dictionary does
      not have its master node. In this case, calling CLUSTER SHARDS on the 7.2
      replica node may crash.
      
      We should fix the underlying assumption of updateShardId, which is that
      the shards dict should always be in sync with the node's shard_id. The
      fix was suggested by PingXie; see #12695 for more details.
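      
      A sketch of the invariant the fix restores (identifiers approximate,
      not the exact cluster code): whenever a node's shard_id changes, move
      it between the entries of server.cluster->shards so the dict stays in
      sync:
      ```
      void updateShardId(clusterNode *node, const char *shard_id) {
          if (shard_id && memcmp(node->shard_id, shard_id, CLUSTER_NAMELEN) != 0) {
              clusterRemoveNodeFromShard(node);      /* leave the old shard */
              memcpy(node->shard_id, shard_id, CLUSTER_NAMELEN);
              clusterAddNodeToShard(shard_id, node); /* join the new shard */
              clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG);
          }
      }
      ```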
  7. 07 Jan, 2024 1 commit
    • Use shard-id of the master if the replica does not support shard-id (#12805) · 4cae66f5
      Binbin authored
      If there are nodes in the cluster that do not support shard-id, they
      will not gossip shard-id. From the perspective of nodes that support
      shard-id, the shard-id of such nodes is meaningless (since shard-id is
      randomly generated when we create a node).
      
      Nodes that support shard-id will save the shard-id information in nodes.conf.
      If the node is restarted from nodes.conf, the server will report a
      corrupted cluster config file error, because auxShardIdSetter will reject
      configurations with inconsistent master-replica shard-ids.
      
      A cluster-wide consensus for the node's shard_id is not necessary. The key
      is maintaining consistency of the shard_id on each individual 7.2 node.
      As the cluster progressively upgrades to version 7.2, we can expect the
      shard_ids across all nodes to naturally converge and align.
      
      In this PR, when processing gossip, if the sender is a replica and does
      not support shard-id, we set its shard_id to the shard_id of its master.
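      
      A minimal sketch of that gossip-processing step (updateShardId is the
      existing helper; the other identifiers are illustrative assumptions):
      ```
      /* Sketch: the sender is a replica that did not announce a shard-id
       * extension, so instead of keeping a randomly generated shard_id,
       * inherit the one of its announced master. */
      if (master != NULL && !sender_announced_shard_id)
          updateShardId(sender, master->shard_id);
      ```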
  8. 27 Dec, 2023 1 commit
    • Replace slots_to_channels radix tree with slot specific dictionaries for shard channels. (#12804) · 85279595
      Chen Tianjie authored
      
      
      We previously replaced the `slots_to_keys` radix tree with a key->slot
      linked list (#9356), and then replaced the list with slot-specific
      dictionaries for keys (#11695).
      
      Shard channels behave just like keys in many ways, and we also need a
      slots->channels mapping. Currently this is still done by using a radix
      tree. So we should split `server.pubsubshard_channels` into 16384 dicts
      and drop the radix tree, just like what we did to DBs.
      
      Some benefits (basically the benefits of what we've done to DBs):
      1. Optimize counting channels in a slot. This is currently used only in
      removing channels in a slot. But this is potentially more useful:
      sometimes we need to know how many channels there are in a specific slot
      when doing slot migration. Counting is currently implemented by traversing
      the radix tree; with this PR it is as simple as calling `dictSize`,
      going from O(n) to O(1) (see the sketch below the lists).
      2. The radix tree in the cluster has been removed. The shard channel
      names no longer require additional storage, which can save memory.
      3. Potentially useful in slot migration, as shard channels are logically
      split by slots, thus making it easier to migrate, remove or add as a
      whole.
      4. Avoid rehashing a big dict when there is a large number of channels.
      
      Drawbacks:
      1. Takes more memory than using radix tree when there are relatively few
      shard channels.
      
      What this PR does:
      1. in cluster mode, split `server.pubsubshard_channels` into 16384
      dicts, in standalone mode, still use only one dict.
      2. drop the `slots_to_channels` radix tree.
      3. to save memory (addressing the drawback above), all 16384 dicts are
      created lazily: a dict is initialized only when a channel is about to
      be inserted into it, and when all of its channels are deleted, the
      dict deletes itself.
      4. use `server.shard_channel_count` to keep track of the total number
      of shard channels.
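      
      As an illustration of the counting benefit above, counting shard
      channels in a slot becomes a dict size lookup instead of a radix-tree
      traversal (helper name hypothetical; dicts are lazily created, hence
      the NULL check):
      ```
      static unsigned long countShardChannelsInSlot(int slot) {
          dict *d = server.pubsubshard_channels[slot];
          return d ? dictSize(d) : 0; /* O(1) instead of O(n) */
      }
      ```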
      
      ---------
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
  9. 11 Dec, 2023 1 commit
    • Fix delKeysInSlot server events are not executed inside an execution unit (#12745) · c85a9b78
      Binbin authored
      This is a follow-up fix to #12733. We need to apply the same changes to
      delKeysInSlot. Refer to #12733 for more details.
      
      This PR contains some other minor cleanups / improvements to the test
      suite and docs.
      It uses the postnotifications test module in a cluster mode test which
      revealed a leak in the test module (fixed).
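      
      A minimal sketch of the shape of the fix, assuming the
      enterExecutionUnit()/exitExecutionUnit() helpers from server.c (the
      actual delKeysInSlot code differs):
      ```
      /* Sketch: run each deletion and its keyspace event inside one
       * execution unit, mirroring the #12733 fix, so module notifications
       * propagate atomically. */
      enterExecutionUnit(1, 0);
      dbDelete(db, key);
      notifyKeyspaceEvent(NOTIFY_GENERIC, "del", key, db->id);
      exitExecutionUnit();
      ```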
  10. 07 Dec, 2023 1 commit
    • Fix replica node cannot expand dicts when loading legacy RDB (#12839) · 8e11f84d
      zhaozhao.zz authored
      When loading an RDB on a cluster node, it is necessary to consider the
      scenario where the node is a replica.
      
      For example, during a rolling upgrade, new-version instances are often
      mounted as replicas of old-version instances. In this case, the legacy
      RDB from full synchronization does not contain slot information, and
      the new-version instance, acting as a replica, should still be able to
      handle the legacy RDB correctly for `dbExpand`.
      
      Additionally, renaming `getMyClusterSlotCount` to `getMyShardSlotCount`
      would be appropriate.
      
      Introduced in #11695
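      
      A simplified sketch of the renamed helper's intent (not the exact
      cluster code): a replica owns no slots itself, so count the slots
      served by its master's shard:
      ```
      /* Sketch: slot count of my shard, not of myself. */
      static int getMyShardSlotCount_sketch(void) {
          clusterNode *n = myself;
          if (nodeIsSlave(n) && n->slaveof) n = n->slaveof;
          return n->numslots;
      }
      ```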
  11. 03 Dec, 2023 1 commit
  12. 22 Nov, 2023 13 commits
  13. 21 Nov, 2023 2 commits
  14. 01 Nov, 2023 1 commit
  15. 31 Oct, 2023 1 commit
  16. 24 Oct, 2023 1 commit
  17. 15 Oct, 2023 1 commit
    • Replace cluster metadata with slot specific dictionaries (#11695) · 0270abda
      Vitaly authored
      This is an implementation of https://github.com/redis/redis/issues/10589
      that eliminates 16 bytes per entry in cluster mode, which are currently
      used to create a linked list between entries in the same slot. The main
      idea is splitting the main dictionary into 16k smaller dictionaries (one
      per slot), so we can perform all slot-specific operations, such as
      iteration, without any additional info in the `dictEntry`. For Redis
      cluster, the expectation is that there will be a larger number of keys,
      so the fixed overhead of 16k dictionaries is not significant. The expire
      dictionary is also split up so that each slot is logically decoupled, so
      that in subsequent revisions we will be able to atomically flush a slot
      of data.
      
      ## Important changes
      * Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time; in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
      * getRandomKey - now needs to not only select a random key from a random bucket, but also to select a random dictionary. Fairness is a major concern here, as keys can be unevenly distributed across the slots. To address this, we introduced a binary index tree (Fenwick tree) over the per-slot key counts. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
      * Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index tree that is used for random key selection; it allows us to find the slot that holds a specific key index. For example, if there are 10 keys in slot 0, we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
      * scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we pack the slot id into the LSBs of the cursor so it can be passed around between the client and the server (see the sketch after this list). This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as the cursor value, though the plan is to not document this as defined behavior. It's also worth noting that the SCAN cursor format is now technically incompatible with previous versions, although in practice we don't believe it's an issue.
      * Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the slot checksum multiple times, hence we rely on a cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute it.
      * Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (we could approximate the number of keys per slot, but it won't be precise). To address this, we've added metadata to the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
      * DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. To avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new `key_count` field into `redisDb`. This keeps the DBSIZE operation O(1); the same is done for an O(1) expires count as well.
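      
      A minimal sketch of the cursor packing described in the scan API
      bullet above (the exact bit layout in the implementation may differ;
      with 16384 slots, 14 bits suffice):
      ```
      #include <stdint.h>

      #define SLOT_BITS 14 /* 2^14 = 16384 slots */
      #define SLOT_MASK ((1ULL << SLOT_BITS) - 1)

      /* Pack the slot id into the LSBs of the 64-bit cursor. */
      static inline uint64_t cursorPack(uint64_t dict_cursor, uint16_t slot) {
          return (dict_cursor << SLOT_BITS) | (slot & SLOT_MASK);
      }
      static inline uint16_t cursorSlot(uint64_t cursor) {
          return (uint16_t)(cursor & SLOT_MASK);
      }
      static inline uint64_t cursorPos(uint64_t cursor) {
          return cursor >> SLOT_BITS;
      }
      ```
      This also shows why passing a slot id as the initial cursor starts the
      scan at that slot: a cursor of `slot` is simply `cursorPack(0, slot)`.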
      
      ## Performance
      This change improves SET performance in cluster mode by ~5%; most of the gains come from no longer having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict.
      
      RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
      
      ## Interface changes
      * Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
      * Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
      * New RDB version to support the new op code for SLOT information. 
      
      ---------
      Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  18. 13 Oct, 2023 1 commit
  19. 12 Oct, 2023 1 commit
    • Fix crash when running rebalance command in a mixed cluster of 7.0 and 7.2 (#12604) · e5ef1613
      Binbin authored
      In #10536 we introduced the assert, but some older server versions
      (like 7.0) don't gossip shard_id, so we will not add the node to
      cluster->shards, and node->shard_id is filled in randomly and may not
      be found there.
      
      As a result, if we add a 7.2 node to a 7.0 cluster and allocate slots
      to the 7.2 node, the 7.2 node will crash when it hits this assert.
      Similar to #12538.
      
      In this PR, we remove the assert and replace it with an unconditional removal.
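      
      The shape of the change, sketched (identifiers approximate, not the
      exact cluster code):
      ```
      /* Sketch: before, this lookup was guarded by an assert; now a
       * missing shard entry is tolerated. */
      list *l = dictFetchValue(server.cluster->shards, node->shard_id);
      if (l != NULL) {
          listNode *ln = listSearchKey(l, node);
          if (ln != NULL) listDelNode(l, ln);
          if (listLength(l) == 0)
              dictDelete(server.cluster->shards, node->shard_id);
      }
      ```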
  20. 02 Oct, 2023 1 commit
  21. 26 Sep, 2023 1 commit
  22. 21 Sep, 2023 1 commit
    • Use server.current_client to decide whether cluster commands should return TLS info. (#12569) · 2aad03fa
      Chen Tianjie authored
      Starting with a change in #12233 (released in 7.2), CLUSTER commands use
      the client's connection to decide whether to return the TLS port or the
      non-TLS port, but commands called by Lua scripts and a module's RM_Call
      don't have a real client with a connection, and would currently be
      regarded as non-TLS connections.
      
      We can use server.current_client instead when it is available. When it is
      not (a module calls commands without a real client), we may treat this as
      undefined behavior and return either null or the default port (currently
      this PR returns the default port, judged by server.tls_cluster).
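      
      A sketch of the selection logic (simplified):
      ```
      /* Prefer the executing client's connection type; fall back to
       * server.tls_cluster when there is no real client (e.g. a module
       * calling commands without one). */
      static int shouldReturnTlsInfo(void) {
          if (server.current_client && server.current_client->conn)
              return connIsTLS(server.current_client->conn);
          return server.tls_cluster;
      }
      ```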
  23. 10 Sep, 2023 1 commit
  24. 03 Sep, 2023 1 commit
    • Check shard_id pointer validity in updateShardId (#12538) · a2046c1e
      secwall authored
      When connecting a 7.0 node and a 7.2 node in one cluster, the 7.0 node will not populate the shard_id field, which is expected by the 7.2 node. The intended behavior is for the 7.2 node to use a temporary shard_id while the node is in the upgrading state, but it wasn't being correctly set in this case.
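      
      A minimal sketch of the guard (simplified; updateShardId is the
      existing helper):
      ```
      /* Sketch: at the top of updateShardId(), ignore a NULL shard_id, as
       * a pre-7.2 sender never populates the field. */
      if (shard_id == NULL) return;
      ```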
  25. 15 Aug, 2023 1 commit
  26. 16 Jul, 2023 1 commit
    • Hide the comma after cport when there is no hostname. (#12411) · 91011100
      Chen Tianjie authored
      According to the format shown in https://redis.io/commands/cluster-nodes/
      ```
      <ip:port@cport[,hostname[,auxiliary_field=value]*]>
      ```
      when there is no hostname, and the auxiliary fields are hidden, the cluster topology should be
      ```
      <ip:port@cport>
      ```
      However, in the code we always print the hostname even when it is an empty string, leaving an unnecessary comma trailing after cport, which is weird and conflicts with the doc.
      ```
      94ca2f6cf85228a49fde7b738ee1209de7bee325 127.0.0.1:6379@16379, myself,master - 0 0 0 connected 0-16383
      ```
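      
      The shape of the fix, sketched (simplified; sdscatprintf is the
      standard sds helper):
      ```
      /* Only append ",hostname" when a hostname is actually set. */
      if (sdslen(node->hostname) != 0)
          ci = sdscatprintf(ci, ",%s", node->hostname);
      ```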