- 05 Aug, 2024 2 commits
-
-
Josh Hershberg authored
This and the previous commit make the cluster shards command a generic implementation instead of a specific implementation for each cluster API implementation. This commit (a) adds functions to the cluster API and (b) modifies the cluster shards cmd implementation to use cluster API functions instead of directly accessing the legacy clustering implementation. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
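As a sketch of what such accessor functions enable (names here are illustrative, not the actual cluster API), a generic CLUSTER SHARDS walk might look like this:
```c
#include <stdio.h>

/* Illustrative stand-ins for the opaque cluster API types; the real
 * accessors live in the cluster API headers. */
typedef struct clusterNode {
    char name[41];
    int is_primary;
    struct clusterNode *next_in_shard;
} clusterNode;

static const char *clusterNodeGetName(clusterNode *n) { return n->name; }
static int clusterNodeIsPrimary(clusterNode *n) { return n->is_primary; }
static clusterNode *clusterNodeNextInShard(clusterNode *n) { return n->next_in_shard; }

/* Generic walk over one shard: no direct access to legacy clustering state. */
static void printShard(clusterNode *first) {
    for (clusterNode *n = first; n != NULL; n = clusterNodeNextInShard(n))
        printf("%s %s\n", clusterNodeGetName(n),
               clusterNodeIsPrimary(n) ? "(primary)" : "(replica)");
}
```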
-
Josh Hershberg authored
This and the next commit make the cluster shards command a generic implementation instead of a specific implementation for each cluster API implementation. This commit simply moves the cluster shards implementation from cluster_legacy.c to cluster.c without changing it at all. The reason for doing so was to make the changes easier to review in the diff. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
- 01 Aug, 2024 1 commit
-
-
debing.sun authored
Close https://github.com/redis/redis/issues/13414 When the cluster's master node fails and is switched to another node, the first node in the shard node list (the old master) is no longer valid. Add a new method clusterGetMasterFromShard() to obtain the current master.
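A minimal sketch of the idea behind clusterGetMasterFromShard(), with simplified stand-in types and flag values (the real clusterNode lives in cluster_legacy.h): do not trust list position 0, scan for a node currently flagged as a non-failed master instead.
```c
#include <stddef.h>

#define NODE_MASTER 1
#define NODE_FAIL   8

typedef struct clusterNode {
    int flags;
    struct clusterNode *next; /* next node in the shard's node list */
} clusterNode;

static clusterNode *getMasterFromShard(clusterNode *head) {
    for (clusterNode *n = head; n != NULL; n = n->next) {
        /* The old master may still sit at the head of the list; pick
         * whichever node is a master and not flagged as failed. */
        if ((n->flags & NODE_MASTER) && !(n->flags & NODE_FAIL))
            return n;
    }
    return NULL; /* no usable master in this shard right now */
}
```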
-
- 20 Mar, 2024 1 commit
-
-
Pieter Cailliau authored
[Read more about the license change here](https://redis.com/blog/redis-adopts-dual-source-available-licensing/) Live long and prosper
🖖
-
- 05 Mar, 2024 1 commit
-
-
Ping Xie authored
This commit updates the processing of PONG gossip messages in the cluster. When a node (B) becomes a replica due to a failover, its PONG messages include its new primary node's (A) information and B's configuration epoch is aligned with A's. This allows observer nodes to identify changes in primary-ship, addressing issues of intermediate states and enhancing cluster state consistency during topology changes. Fix #13018
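A rough sketch of the observer-side rule, using simplified stand-in types rather than the real cluster structures: re-parent B under A only once B's gossiped config epoch matches A's, which rules out acting on stale intermediate states during the failover.
```c
#include <stdint.h>
#include <stddef.h>

typedef struct node {
    struct node *primary;  /* NULL if the node is itself a primary */
    uint64_t config_epoch;
} node;

static void observePongFromReplica(node *b, node *claimed_primary_a,
                                   uint64_t b_epoch) {
    /* Epoch alignment is the safety condition: only accept the new
     * topology once B's epoch agrees with its claimed primary's. */
    if (claimed_primary_a && b_epoch == claimed_primary_a->config_epoch) {
        b->primary = claimed_primary_a;
        b->config_epoch = b_epoch;
    }
}
```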
-
- 14 Feb, 2024 1 commit
-
-
Sankar authored
The receiver does not update any of its cluster state based on gossip about itself. This commit explicitly avoids sending or processing gossip about the receiver. Currently cluster bus gossip includes 10% of the nodes in the cluster, with a minimum of 3 nodes. For clusters of up to 30 nodes, this commit makes sure that 1/3 of the gossip (1 out of 3 gossip entries) is no longer discarded. This should help with relatively faster convergence of cluster state in general.
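A minimal sketch of the receiver-side filter, with simplified stand-in types (the real logic lives in the gossip processing code); the sender-side change similarly never picks the receiver as one of the gossiped nodes, so the entry is not wasted in the first place.
```c
#include <string.h>

#define CLUSTER_NAMELEN 40

typedef struct gossipEntry {
    char nodename[CLUSTER_NAMELEN];
} gossipEntry;

/* A gossip entry describing the receiver itself carries no new
 * information, so it is skipped. */
static int shouldProcessGossip(const gossipEntry *g, const char *myself_name) {
    return memcmp(g->nodename, myself_name, CLUSTER_NAMELEN) != 0;
}
```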
-
- 07 Feb, 2024 1 commit
-
-
Binbin authored
After the fix for #13033, address sanitizer reports this heap-use-after-free error: when the pubsubshard_channels dict becomes empty, we delete the dict, but dictReleaseIterator then calls dictResetIterator, which accesses the freed dict and triggers the error. This PR introduces a new struct, kvstoreDictIterator, to wrap dictIterator, and replaces the original dict iterator with the new kvstore dict iterator. --------- Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
guybe7 <guy.benoish@redislabs.com>
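A simplified sketch of the wrapper idea (field names are illustrative; the actual definition lives in the kvstore source): by remembering which kvstore and dict index it belongs to, the release path can first check whether the underlying dict still exists instead of unconditionally touching freed memory.
```c
typedef struct dict dict;

typedef struct dictIterator {
    dict *d;
    long index;
    /* ... remaining fields of the plain dict iterator ... */
} dictIterator;

typedef struct kvstore kvstore;

typedef struct kvstoreDictIterator {
    kvstore *kvs;    /* owner, used to look the dict up again on release */
    long long didx;  /* which of the kvstore's dicts is being iterated */
    dictIterator di; /* embedded plain dict iterator */
} kvstoreDictIterator;
```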
-
- 05 Feb, 2024 1 commit
-
-
guybe7 authored
# Description Gather most of the scattered `redisDb`-related code from the per-slot dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e. a class that represents an array of dictionaries. # Motivation The main motivation is code cleanliness; the idea of using an array of dictionaries is very well-suited to becoming a self-contained data structure. This allowed cleaning up some ugly code, among others: loops that run twice, on the main dict and the expires dict, and duplicate code for allocating and releasing this data structure. # Notes 1. This PR reverts the part of https://github.com/redis/redis/pull/12848 where the `rehashing` list is global (handling rehashing `dict`s is the responsibility of `kvstore`, and should not be managed by the server). 2. This PR also replaces the type of `server.pubsubshard_channels` from `dict**` to `kvstore` (original PR: https://github.com/redis/redis/pull/12804). After that was done, server.pubsub_channels was also made a `kvstore` (with only one `dict`, which seems odd) just to make the code cleaner by giving it the same type as `server.pubsubshard_channels`, see `pubsubtype.serverPubSubChannels`. 3. The keys and expires kvstores are currently configured to allocate the individual dicts only when the first key is added (unlike before, when they were allocated in advance), but they won't release them when the last key is deleted. Worth mentioning that due to this change, the reply of DEBUG HTSTATS changes when no keys were ever added to the db. before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
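A minimal sketch of the kvstore concept, assuming simplified fields (the real structure and API are in src/kvstore.c and src/kvstore.h): one handle owns the dict array plus the bookkeeping that used to be duplicated by every caller, such as the total key count and the rehashing list.
```c
#include <stdlib.h>

typedef struct dict dict;

typedef struct kvstore {
    dict **dicts;                 /* one dict per slot, or a single dict */
    int num_dicts;
    unsigned long long key_count; /* kept up to date for O(1) sizing */
    /* plus: list of dicts currently rehashing, per-dict metadata, ... */
} kvstore;

static kvstore *kvstoreCreateSketch(int num_dicts) {
    kvstore *kvs = calloc(1, sizeof(*kvs));
    kvs->num_dicts = num_dicts;
    /* Only the pointer array is allocated up front; the individual
     * dicts are created lazily on first insert. */
    kvs->dicts = calloc((size_t)num_dicts, sizeof(dict *));
    return kvs;
}
```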
-
- 23 Jan, 2024 1 commit
-
-
Binbin authored
In the following case the sender may be unknown, so we need to add a NULL check for sender:
```
/* If this is a MEET packet from an unknown node, we still process
 * the gossip section here since we have to trust the sender because
 * of the message type. */
if (!sender && type == CLUSTERMSG_TYPE_MEET)
    clusterProcessGossipSection(hdr,link);
```
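A sketch of the resulting guard, with a hypothetical helper name: any code on this path that dereferences sender must tolerate sender == NULL, since a MEET packet can legitimately arrive from a node we do not know yet.
```c
typedef struct clusterNode clusterNode;

static void useSenderSafely(clusterNode *sender) {
    if (sender == NULL) return; /* unknown node: skip sender-specific work */
    /* ... safe to dereference sender here ... */
}
```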
-
- 22 Jan, 2024 1 commit
-
-
Brennan authored
There have been occasional instances of memory corruption (through code bugs or bit flips) leading to invalid node information being gossiped around. To prevent this invalid information from spreading, we verify that the node IDs in received gossip are in an acceptable format, and disregard any gossiped nodes with invalid IDs. This PR uses the existing verifyClusterNodeId function to check the validity of the gossiped node IDs and, if an invalid one is encountered, logs raw byte information to help debug the corruption. --------- Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com>
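A sketch of the check (the exact accepted character set here is an assumption; the real verifyClusterNodeId lives in cluster_legacy.c): a node ID is 40 lowercase hex characters, so corrupted gossip is cheap to detect and discard before it spreads.
```c
#define CLUSTER_NAMELEN 40

static int nodeIdLooksValid(const char *name, int length) {
    if (length != CLUSTER_NAMELEN) return 0;
    for (int i = 0; i < length; i++) {
        /* Accept only lowercase hex digits. */
        int hexlower = (name[i] >= 'a' && name[i] <= 'f') ||
                       (name[i] >= '0' && name[i] <= '9');
        if (!hexlower) return 0;
    }
    return 1;
}
```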
-
- 11 Jan, 2024 2 commits
-
-
Harkrishn Patro authored
Avoid a crash when performing `DEBUG CLUSTERLINK KILL` multiple times (the cluster link might not be created/valid).
-
bentotten authored
In a single-shard cluster, the sole primary node now marks a potentially failed replica as FAIL instead of PFAIL (#12824). Fixes an issue where a single primary could not mark a replica as failed in a single-shard cluster.
-
- 08 Jan, 2024 1 commit
-
-
Binbin authored
Crash reported in #12695. In the process of upgrading the cluster from 7.0 to 7.2, because the 7.0 nodes will not gossip shard id, in 7.2 we will rely on shard id to build the server.cluster->shards dict. In some cases, for example with a 7.0 master node and a 7.2 replica node, from the view of the 7.2 replica node the cluster->shards dictionary does not contain its master node. In this case calling CLUSTER SHARDS on the 7.2 replica node may crash. We should fix the underlying assumption of updateShardId, which is that the shards dict should always be in sync with the node's shard_id. The fix was suggested by PingXie; see more details in #12695.
-
- 07 Jan, 2024 1 commit
-
-
Binbin authored
If there are nodes in the cluster that do not support shard-id, no meaningful shard-id is gossiped for them. From the perspective of nodes that support shard-id, the shard-id recorded for such nodes is meaningless (since a shard-id is randomly generated when we create a node). Nodes that support shard-id will save the shard-id information in nodes.conf. If the node is restarted from nodes.conf, the server will report a corrupted cluster config file error, because auxShardIdSetter rejects configurations with inconsistent master-replica shard-ids. A cluster-wide consensus for the node's shard_id is not necessary. The key is maintaining consistency of the shard_id on each individual 7.2 node. As the cluster progressively upgrades to version 7.2, we can expect the shard_ids across all nodes to naturally converge and align. In this PR, when processing gossip, if the sender is a replica and does not support shard-id, we set its shard_id to the shard_id of its master.
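A simplified sketch of that gossip-side rule, with stand-in types and a hypothetical flag for whether the sender announces a shard-id: if the sender is a replica that cannot announce one, adopt its master's shard-id so master and replica stay locally consistent.
```c
#include <string.h>

typedef struct node {
    char shard_id[40];
    struct node *master;    /* NULL if this node is a primary */
    int announces_shard_id; /* e.g. via a ping extension */
} node;

static void inheritShardIdFromMaster(node *sender) {
    if (!sender->announces_shard_id && sender->master != NULL)
        memcpy(sender->shard_id, sender->master->shard_id,
               sizeof(sender->shard_id));
}
```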
-
- 27 Dec, 2023 1 commit
-
-
Chen Tianjie authored
We have achieved replacing the `slots_to_keys` radix tree with a key->slot linked list (#9356), and then replacing the list with slot-specific dictionaries for keys (#11695). Shard channels behave just like keys in many ways, and we also need a slots->channels mapping. Currently this is still done by using a radix tree. So we should split `server.pubsubshard_channels` into 16384 dicts and drop the radix tree, just like what we did to DBs. Some benefits (basically the benefits of what we've done to DBs): 1. Optimize counting channels in a slot. This is currently used only in removing channels in a slot. But this is potentially more useful: sometimes we need to know how many channels there are in a specific slot when doing slot migration. Counting is now implemented by traversing the radix tree; with this PR it will be as simple as calling `dictSize`, going from O(n) to O(1). 2. The radix tree in the cluster has been removed. The shard channel names no longer require additional storage, which can save memory. 3. Potentially useful in slot migration, as shard channels are logically split by slots, thus making it easier to migrate, remove or add as a whole. 4. Avoid rehashing a big dict when there is a large number of channels. Drawbacks: 1. Takes more memory than using a radix tree when there are relatively few shard channels. What this PR does: 1. In cluster mode, split `server.pubsubshard_channels` into 16384 dicts; in standalone mode, still use only one dict. 2. Drop the `slots_to_channels` radix tree. 3. To save memory (to solve the drawback above), all 16384 dicts are created lazily, which means the dict is initialized only when a channel is about to be inserted into it, and when all channels are deleted, the dict deletes itself. 4. Use `server.shard_channel_count` to keep track of the number of all shard channels. --------- Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech>
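A minimal sketch of the lazy-allocation scheme with simplified stand-in types: a NULL pointer per slot until the first channel lands there, so 16384 empty slots cost one pointer each, and a global counter keeps sizing O(1).
```c
#include <stddef.h>

#define CLUSTER_SLOTS 16384

typedef struct dict dict;

typedef struct shardChannels {
    dict *slot_dicts[CLUSTER_SLOTS];        /* created on first insert */
    unsigned long long shard_channel_count; /* total across all slots */
} shardChannels;

static dict *slotDictForInsert(shardChannels *sc, int slot,
                               dict *(*create)(void)) {
    if (sc->slot_dicts[slot] == NULL)
        sc->slot_dicts[slot] = create(); /* lazily initialize this slot */
    return sc->slot_dicts[slot];
}
```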
-
- 11 Dec, 2023 1 commit
-
-
Binbin authored
This is a follow-up fix to #12733. We need to apply the same changes to delKeysInSlot. Refer to #12733 for more details. This PR contains some other minor cleanups / improvements to the test suite and docs. It uses the postnotifications test module in a cluster mode test which revealed a leak in the test module (fixed).
-
- 07 Dec, 2023 1 commit
-
-
zhaozhao.zz authored
When loading RDB on cluster nodes, it is necessary to consider the scenario where a node is a replica. For example, during a rolling upgrade, new version instances are often mounted as replicas on old version instances. In this case, the full synchronization legacy RDB does not contain slot information, and the new version instance, acting as a replica, should be able to handle the legacy RDB correctly for `dbExpand`. Additionally, renaming `getMyClusterSlotCount` to `getMyShardSlotCount` would be appropriate. Introduced in #11695
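A hypothetical sketch of the sizing decision (helper name and parameters are illustrative): a legacy RDB carries no per-slot key counts, so a cluster replica can only spread the total across the slots its shard serves, which is exactly why the count comes from the shard rather than the node itself (cf. getMyShardSlotCount).
```c
#include <stdint.h>

static uint64_t perSlotExpandHint(uint64_t total_keys, int my_shard_slots,
                                  int rdb_has_slot_info) {
    if (rdb_has_slot_info) return 0; /* precise hints applied elsewhere */
    if (my_shard_slots == 0) return 0;
    /* Legacy RDB: fall back to an even spread over the shard's slots. */
    return total_keys / (uint64_t)my_shard_slots;
}
```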
-
- 03 Dec, 2023 1 commit
-
-
Binbin authored
We forgot to call sdsfreesplitres. This is just a cleanup, since the memory is only leaked on the error paths, and we exit on those paths anyway.
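For reference, the pattern this cleanup enforces, using the real sds API from the Redis source tree: every result of sdssplitlen() must be released with sdsfreesplitres(), error paths included.
```c
#include <string.h>
#include "sds.h" /* Redis sds string library */

static void splitExample(const char *line) {
    int count;
    sds *parts = sdssplitlen(line, strlen(line), " ", 1, &count);
    if (parts == NULL) return;
    /* ... use parts[0..count-1] ... */
    sdsfreesplitres(parts, count); /* frees each token and the array */
}
```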
-
- 22 Nov, 2023 13 commits
-
-
Yehoshua (Josh) Hershberg authored
A follow-up PR for #12742. Add some brief comments explaining the purpose of the file to the heads of cluster_legacy.c and cluster.c. Add a copyright notice to cluster.c. Signed-off-by:
Josh Hershberg <yehoshua@redis.com> Co-authored-by:
Josh Hershberg <yehoshua@redis.com>
-
Binbin authored
We meant to divide it by the number of slots; otherwise each slot's dictExpand gets the full size, over-allocating by a factor of the slot count. The bug was introduced in #11695.
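A sketch of the corrected sizing logic (simplified; the real code expands the kvstore's per-slot dicts): each dict gets total/num_slots buckets, not the full total.
```c
#include <stdint.h>

static void expandAllSlots(uint64_t total_keys, int num_slots,
                           void (*expandSlot)(int slot, uint64_t hint)) {
    uint64_t per_slot = total_keys / (uint64_t)num_slots; /* the missing division */
    for (int slot = 0; slot < num_slots; slot++)
        expandSlot(slot, per_slot);
}
```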
-
Josh Hershberg authored
Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
The failover command was, until now, not supported in cluster mode. This commit allows a cluster implementation to support the command. The legacy clustering implementation still does not support it. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move primary functions used to implement datapath clustering into cluster.c, making them shared. This required adding "accessor" and other functions to abstract access to node details and cluster state. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Divide up clusterCommand into clusterCommand for shared sub-commands and clusterCommandSpecial for implementation-specific sub-commands. Likewise, the cluster help sub-command has been divided into two implementations, clusterCommandHelp and clusterCommandHelpSpecial. Some common sub-command implementations have been extracted and made either shared or implementation specific. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
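A minimal sketch of the dispatch split, with stand-in handler names: shared sub-commands are dispatched generically, and anything not handled falls through to the implementation-specific handler.
```c
#include <strings.h>

typedef struct client client;

/* Stand-ins for the real handlers, which reply to the client. */
static void clusterCommandShards(client *c) { (void)c; }
static void replyUnknownSubcommand(client *c) { (void)c; }
static int clusterCommandSpecialSketch(client *c, const char *sub) {
    (void)c; (void)sub;
    return 0; /* 1 if the implementation handled the subcommand */
}

static void clusterCommandSketch(client *c, const char *sub) {
    if (strcasecmp(sub, "SHARDS") == 0) {
        clusterCommandShards(c); /* generic, built on the cluster API */
    } else if (!clusterCommandSpecialSketch(c, sub)) {
        replyUnknownSubcommand(c);
    }
}
```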
-
Josh Hershberg authored
Simple rename, "GetSlotBit" is implementation specific Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move (but do not change) some items from cluster_legacy.c back into cluster.c. These items are shared code that all clustering implementations will use. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
More declarations can be moved into cluster_legacy.h as they are not required for the cluster API. The code was simply moved, not changed in any way. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move clusterNode into cluster_legacy.h. In order to achieve this, some accessor methods were added, and the way debugCommand handles cluster-related subcommands was refactored. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move clusterState into cluster_legacy.h. In order to achieve this, some "accessor" methods needed to be added to the cluster API, along with some other minor refactors. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
- 21 Nov, 2023 2 commits
-
-
Josh Hershberg authored
create new cluster.c Signed-off-by:
Josh Hershberg <yehoshua@redis.com> forgot to #include cluster_legacy.h Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
- 01 Nov, 2023 1 commit
-
-
Viktor Söderqvist authored
Optimize the performance of SCAN commands when a match pattern can only contain keys from a single slot in cluster mode. This can happen when the pattern contains a hash tag before any wildcard matchers, or when the pattern contains no matchers at all (i.e. it is a literal key).
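A conservative sketch of the single-slot pattern check (the real logic lives in the SCAN implementation and may differ in details): if a glob pattern contains a complete, non-empty {tag} before any wildcard, or no wildcard at all, every matching key hashes to one slot and only that slot needs scanning.
```c
#include <stddef.h>

static int patternCoversSingleSlot(const char *pat) {
    const char *open = NULL;
    for (const char *p = pat; *p; p++) {
        switch (*p) {
        case '*': case '?': case '[': case '\\':
            return 0; /* wildcard (or escape) before a complete tag */
        case '{':
            if (!open) open = p;
            break;
        case '}':
            if (open && p > open + 1) return 1; /* non-empty {tag} closed */
            break;
        }
    }
    return 1; /* no wildcards at all: the pattern is a literal key */
}
```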
-
- 31 Oct, 2023 1 commit
-
-
Viktor Söderqvist authored
Add defensive checks to prevent double-freeing a node from the cluster blacklist.
-
- 24 Oct, 2023 1 commit
-
-
Binbin authored
Fix some outdated comments and add a comment for moduleNotifyKeyspaceEvent, which we added in #11084, since it seems a bit implicit. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 15 Oct, 2023 1 commit
-
-
Vitaly authored
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data. ## Important changes * Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time; in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms. * getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. To address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time. * Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find a slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree. * scan API - in order to perform a scan across the entire DB, the cursor now needs to not only save the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server. This has an interesting side effect: now you'll be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue. * Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we are relying on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot. * Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places; in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). The same is kept for expires, so that computation is O(1) as well. ## Performance This change improves SET performance in cluster mode by ~5%; most of the gains come from us not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load. ## Interface changes * Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`. * The Scan API will now require 64 bits to store the cursor, even on 32-bit systems, as the slot information will be stored. * New RDB version to support the new op code for SLOT information. --------- Co-authored-by:
Vitaly Arbuzov <arvit@amazon.com> Co-authored-by:
Harkrishn Patro <harkrisp@amazon.com> Co-authored-by:
Roshan Khatri <rvkhatri@amazon.com> Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
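A minimal sketch of the cursor packing, assuming a plain 14-bit slot field in the low bits (the real encoding interacts with the dict's reverse-cursor scan and differs in details): carrying the slot id inside the cursor lets it round-trip through the client unchanged, and also enables starting a scan at a specific slot.
```c
#include <stdint.h>

#define SLOT_BITS 14 /* 2^14 = 16384 slots */
#define SLOT_MASK ((1ULL << SLOT_BITS) - 1)

static uint64_t buildCursor(uint64_t dict_cursor, int slot) {
    return (dict_cursor << SLOT_BITS) | ((uint64_t)slot & SLOT_MASK);
}

static int cursorSlot(uint64_t cursor) { return (int)(cursor & SLOT_MASK); }
static uint64_t cursorPos(uint64_t cursor) { return cursor >> SLOT_BITS; }
```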
-
- 13 Oct, 2023 1 commit
-
-
Harkrishn Patro authored
Unsubscribe all clients from a replica's shard channels if the master ownership changes.
-
- 12 Oct, 2023 1 commit
-
-
Binbin authored
In #10536, we introduced the assert; some older versions of servers (like 7.0) don't gossip shard_id, so we will not add the node to cluster->shards, and node->shard_id is filled in randomly and may not be found here. As a result, if we add a 7.2 node to a 7.0 cluster and allocate slots to the 7.2 node, the 7.2 node will crash when it hits this assert. Somewhat like #12538. In this PR, we remove the assert and replace it with an unconditional removal.
-
- 02 Oct, 2023 1 commit
-
-
Madelyn Olson authored
Fixed some usages of tabs which caused weird indentation in the code. Tried to find all of the places so there was one PR. I ignored all of the usages of tabs which don't really affect readability.
-