- 23 Jun, 2017 2 commits
-
-
Suraj Narkhede authored
1. Fix the BRPOP last key index, thus checking all keys for slots. 2. Fix a memory leak in clusterRedirectBlockedClientIfNeeded. 3. Remove the while loop in clusterRedirectBlockedClientIfNeeded.
-
Suraj Narkhede authored
-
- 04 Jun, 2017 1 commit
-
-
Antonio Mallia authored
-
- 15 Apr, 2017 1 commit
-
-
antirez authored
However we allow 500 milliseconds of tolerance, in order to avoid frequently discarding semantically valid info (the node is up) because of the natural few-milliseconds desync among servers, even when NTP is used. Note that we should still ping the node from time to time regardless, and discover if it's actually down from our point of view, since no update is accepted while we have an active ping on the node. Related to #3929.
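For illustration, a minimal sketch of the acceptance rule described above, assuming hypothetical names and a simplified node view (this is not the actual Redis source):

```c
/* Accept a pong time reported in a gossip section only if it is newer
 * than our local view and not more than 500 ms in the future, to
 * tolerate small clock desync among servers. */
#include <stdint.h>

typedef int64_t mstime_t;

#define PONG_FUTURE_TOLERANCE 500   /* milliseconds */

int update_pong_from_gossip(mstime_t *node_pong_received,
                            mstime_t reported_pong_time,
                            mstime_t now)
{
    if (reported_pong_time <= *node_pong_received) return 0;        /* older info */
    if (reported_pong_time > now + PONG_FUTURE_TOLERANCE) return 0; /* too far in the future */
    *node_pong_received = reported_pong_time;
    return 1;
}
```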
-
- 14 Apr, 2017 4 commits
-
-
antirez authored
Relying on nodes in PFAIL state being shared around by randomly adding them to the gossip section is a weak assumption, especially after the changes related to sending fewer ping/pong packets. We want to always include gossip entries for all the nodes that are in PFAIL state, so that the PFAIL -> FAIL state promotion can happen much faster and more reliably. Related to #3929.
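A hedged sketch of the selection rule described above, with invented types and helper names (the real code works on the server's node table):

```c
/* Build the gossip section: first deterministically include every node
 * currently flagged PFAIL, then fill the remaining entries with other
 * nodes (random selection elided for brevity). */
#include <stddef.h>

#define NODE_PFAIL (1 << 0)

typedef struct { int flags; /* ... */ } node_t;

size_t build_gossip(node_t **nodes, size_t numnodes,
                    node_t **gossip, size_t wanted, size_t maxlen)
{
    size_t added = 0;

    /* 1. Always gossip about PFAIL nodes. */
    for (size_t i = 0; i < numnodes && added < maxlen; i++)
        if (nodes[i]->flags & NODE_PFAIL) gossip[added++] = nodes[i];

    /* 2. Fill the rest up to the planned number of random entries. */
    for (size_t i = 0; i < numnodes && added < wanted && added < maxlen; i++)
        if (!(nodes[i]->flags & NODE_PFAIL)) gossip[added++] = nodes[i];

    return added;
}
```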
-
antirez authored
The gossip section times are 32 bit, so they cannot store the millisecond time, only the second approximation, which is good enough for our uses. At the same time however, when comparing the gossip section times of other nodes with our node's view, we need to convert them back to milliseconds. Related to #3929. Without this change the patch to reduce the traffic in the bus messages does not work.
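A minimal sketch of the unit conversion described above (identifiers are illustrative, not the actual Redis ones): seconds are truncated on send and scaled back to milliseconds on receive.

```c
#include <stdint.h>

typedef int64_t mstime_t;

/* 32-bit second-resolution time stored in the gossip entry. */
uint32_t gossip_time_from_ms(mstime_t ms)  { return (uint32_t)(ms / 1000); }

/* Convert back to milliseconds before comparing with the local view. */
mstime_t ms_from_gossip_time(uint32_t sec) { return (mstime_t)sec * 1000; }
```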
-
antirez authored
-
antirez authored
Clusters of bigger sizes tend to have a lot of traffic in the cluster bus just for failure detection: a node will try to get a ping reply from another node no later than half the node timeout, in order to avoid a false positive. However this means that if we have N nodes and the node timeout is set to, for instance, M seconds, we'll have to ping N nodes every M/2 seconds. These N*M/2 pings will receive the same number of pongs, so a total of N*M packets per node. Given that we have a total of N nodes doing this, the total number of messages will be N*N*M. In a 100 node cluster with a timeout of 60 seconds, this translates to a total of 100*100*30 packets per second, summing all the packets exchanged by all the nodes. This is, as you can guess, a lot...

So this patch changes the implementation in a very simple way in order to trust the reports of other nodes: if a node A reports a node B as alive at least up to a given time, we update our view accordingly. The problem with this approach is that it could result in a subset of nodes being able to reach a given node X, preventing the others from detecting that X is actually not reachable from the majority of nodes. So the above algorithm is refined by trusting other nodes only if we do not currently have a ping pending for node X, and if there are no failure reports for that node. Since each node anyway pings 10 other nodes every second (one node every 100 milliseconds), even trusting the other nodes' reports, we will eventually detect if a given node is down from our POV.

Now to understand the number of packets that the cluster would exchange for failure detection with the patch, we can start by considering the random PINGs that the cluster sends anyway as the baseline: each node sends 10 packets per second, so the total traffic if no additional packets were sent, including PONG packets, would be:

Total messages per second = N*10*2

However trusting the gossip sections of other nodes will not ALWAYS prevent pinging nodes for the "half timeout reached" rule. The math involved in computing the actual rate as N and M change is quite complex and depends also on another parameter, which is the number of entries in the gossip section of PING and PONG packets. However it is possible to compare what happens in clusters of different sizes experimentally. After applying this patch a very important reduction in the number of packets exchanged is trivial to observe, without apparent impact on the failure detection performance. Actual numbers with different cluster sizes should be published in the Redis Cluster documentation in the future. Related to #3929.
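A hedged sketch of the refined trust rule described above, using invented names and a simplified per-node view (not the actual cluster.c code):

```c
/* Refresh our view of node X from another node's gossip only when we
 * have no PING in flight to X and no failure reports for X, so that
 * our own probing still detects nodes that are down from our POV. */
#include <stdint.h>

typedef int64_t mstime_t;

typedef struct {
    mstime_t ping_sent;       /* nonzero while a PING to this node is pending */
    mstime_t pong_received;   /* last time we believe the node replied */
    int num_fail_reports;     /* failure reports received from other nodes */
} node_view_t;

void maybe_trust_gossip(node_view_t *x, mstime_t reported_pong, mstime_t now)
{
    if (x->ping_sent != 0) return;        /* we are probing it ourselves */
    if (x->num_fail_reports > 0) return;  /* others suspect it: don't trust */
    if (reported_pong <= x->pong_received) return;
    if (reported_pong > now) return;      /* ignore future timestamps */
    x->pong_received = reported_pong;     /* no need to ping for the "half timeout" rule */
}
```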
-
- 13 Apr, 2017 1 commit
-
-
antirez authored
First step in changing Cluster to use fewer messages. Related to issue #3929.
-
- 27 Mar, 2017 1 commit
-
-
antirez authored
-
- 09 Feb, 2017 1 commit
-
-
antirez authored
After investigating issue #3796, it was discovered that MIGRATE could call migrateCloseSocket() after the original MIGRATE c->argv was already rewritten as a DEL operation. As a result the host/port passed to migrateCloseSocket() could be anything, often a NULL pointer that gets dereferenced, crashing the server. Now the socket is closed earlier, when a socket error occurs at a stage where no retry will be performed, before we rewrite the argument vector. Moreover a check was added so that later, in the socket_err label, there is no further attempt to close the socket if the arguments were already rewritten. This fix should resolve the bug reported in #3796.
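A simplified sketch of the guard described above, with invented identifiers (not the actual migrate code): once the argument vector has been rewritten as DEL, the original host/port are gone, so the error path must not try to close the cached socket again.

```c
#include <stdbool.h>

struct migrate_state {
    bool argv_rewritten;   /* set after c->argv is rewritten as DEL */
    bool socket_closed;    /* set when the socket was already closed on error */
};

void on_socket_error(struct migrate_state *st,
                     void (*close_socket)(const char *host, const char *port),
                     const char *host, const char *port)
{
    /* Close while host/port are still valid, i.e. before any rewrite. */
    if (!st->argv_rewritten && !st->socket_closed) {
        close_socket(host, port);
        st->socket_closed = true;
    }
}
```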
-
- 14 Dec, 2016 1 commit
-
-
antirez authored
After the fix for #3673 the ttl var is always initialized inside the loop itself, so the early initialization is not needed. The variable declarations were also moved to a more local scope.
-
- 13 Dec, 2016 1 commit
-
-
antirez authored
BACKGROUND AND USE CASE

Redis slaves are normally write only, however they support a "writable" mode which is very handy when scaling reads on slaves that actually need write operations in order to access data. For instance imagine having slaves replicating certain Sets keys from the master. When accessing the data on the slave, we want to perform intersections between such Sets values. However we don't want to intersect each time: caching the intersection for some time is often a good idea. To do so, it is possible to set up a slave as a writable slave, and perform the intersection on the slave side, perhaps setting a TTL on the resulting key so that it will expire after some time.

THE BUG

Problem: in order to have consistent replication, expiring of keys in Redis replication is up to the master, which synthesizes DEL operations to send in the replication stream. However slaves logically expire keys by hiding them from read attempts by clients, so that if the master did not promptly send a DEL, clients still see logically expired keys as non-existing. Because slaves don't actively expire keys by actually evicting them, but just mask them from the POV of read operations, if a key is created in a writable slave and an expire is set, the key will be leaked forever:

1. No DEL will be received from the master, which does not know about such a key at all.
2. No eviction will be performed by the slave, since eviction must stay disabled on slaves: it is up to masters, otherwise consistency of data is lost.

THE FIX

In order to fix the problem, the slave should be able to tag keys that were created on the slave side and have an expire set. My solution involves a unique additional dictionary, created by the writable slave only if needed. The dictionary is keyed by the key name that we need to track: all the keys that are set with an expire directly by a client writing to the slave are tracked. The value in the dictionary is a bitmap of all the DBs where such a key name needs to be tracked, so that we can use a single dictionary to track keys in all the DBs used by the slave (this actually limits the solution to the first 64 DBs, but the default with Redis is to use 16 DBs). This solution has a small complexity and CPU cost, which is zero when the feature is not used. The slave-side eviction is encapsulated in code which is not coupled with the rest of the Redis core, except for the hook to track the keys.

TODO

I'm doing the first smoke tests to see if the feature works as expected: so far so good. Unit tests should be added before merging into the 4.0 branch.
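A minimal sketch of the idea described above, assuming a toy hash table instead of the server's dict type (names and structure are illustrative, not the real implementation):

```c
/* A single map keyed by key name whose value is a 64-bit bitmap of the
 * DB indexes where that key received an expire on the writable slave,
 * which limits tracking to the first 64 DBs. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 1024

typedef struct entry {
    char *key;
    uint64_t db_bitmap;
    struct entry *next;
} entry;

static entry *table[TABLE_SIZE];

static unsigned hash(const char *s) {
    unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % TABLE_SIZE;
}

/* Remember that 'key' in database 'dbid' got an expire set on the slave. */
void track_slave_expire(const char *key, int dbid) {
    if (dbid < 0 || dbid >= 64) return;          /* only the first 64 DBs */
    unsigned h = hash(key);
    for (entry *e = table[h]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { e->db_bitmap |= 1ULL << dbid; return; }
    entry *e = malloc(sizeof(*e));
    e->key = strdup(key);
    e->db_bitmap = 1ULL << dbid;
    e->next = table[h];
    table[h] = e;
}

/* Queried by a slave-side cron: if tracked, the TTL can be checked and
 * the key deleted once it is logically expired. */
int is_tracked(const char *key, int dbid) {
    if (dbid < 0 || dbid >= 64) return 0;
    for (entry *e = table[hash(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) return (int)((e->db_bitmap >> dbid) & 1);
    return 0;
}
```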
-
- 08 Dec, 2016 1 commit
-
-
Jan-Erik Rediger authored
Before, if a previous key had a TTL set but the current one didn't, the TTL was reused, resulting in wrong expirations being set. This behaviour was experienced when `MigrateDefaultPipeline` in redis-trib was set to >1. Fixes #3655
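A sketch of the pattern behind this kind of fix, with hypothetical helpers standing in for the real MIGRATE internals: the TTL is reset for every key in the pipelined batch instead of carrying over the previous key's value.

```c
#include <stddef.h>

/* Hypothetical stand-ins for the real internals. */
long long get_expire(int db, const char *key);       /* -1 if no TTL */
long long now_ms(void);
void serialize_restore(const char *key, long long ttl);

void migrate_serialize_keys(int db, const char **keys, size_t num_keys)
{
    for (size_t j = 0; j < num_keys; j++) {
        long long ttl = 0;                   /* reset per key: the fix */
        long long expire = get_expire(db, keys[j]);
        if (expire != -1) {
            ttl = expire - now_ms();
            if (ttl < 1) ttl = 1;
        }
        serialize_restore(keys[j], ttl);
    }
}
```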
-
- 29 Nov, 2016 1 commit
-
-
andyli authored
-
- 16 Nov, 2016 1 commit
-
-
antirez authored
-
- 16 Jun, 2016 1 commit
-
-
antirez authored
Reference issue #3218. Checking the code, I can't find a reason why the original RESTORE code was so opinionated about restoring only the current version. The code in `rdb.c` appears to be capable, as always, of restoring data from older versions of Redis, and the only place where the current version is needed in order to correctly restore data is while loading the opcodes, not the values themselves, as happens in the case of RESTORE. For the above reasons, this commit enables RESTORE to accept older versions of value payloads.
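A simplified sketch of the relaxed version check, assuming a DUMP payload footer made of a 2-byte RDB version followed by an 8-byte checksum (the version constant and function name are placeholders; the real function also verifies the checksum):

```c
#include <stddef.h>
#include <stdint.h>

#define CURRENT_RDB_VERSION 7   /* placeholder value for illustration */

int dump_payload_version_ok(const unsigned char *p, size_t len)
{
    if (len < 10) return 0;                   /* version (2 bytes) + checksum (8 bytes) */
    const unsigned char *footer = p + len - 10;
    uint16_t rdbver = (uint16_t)(footer[0] | (footer[1] << 8));
    return rdbver <= CURRENT_RDB_VERSION;     /* previously: exact match required */
}
```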
-
- 05 May, 2016 1 commit
-
- 02 May, 2016 1 commit
-
-
antirez authored
This fixes issue #3043. Before this fix, after a complete resharding of a master's slots to other nodes, the master remains empty and its slaves migrate away to other masters serving a non-zero number of slots. However the old master, now empty, is no longer considered a target for migration, because the system has no way to tell it had slaves in the past. This fix leaves the algorithm used in the past untouched, but adds a new rule: when a new or old master which is empty and without slaves is assigned its first slot, and other masters in the cluster have slaves, it is automatically considered a target for replica migration.
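A hypothetical sketch of the added rule, with invented names and a simplified master descriptor:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    int numslots;
    int numslaves;
    bool migration_target;   /* eligible to receive a migrated replica */
} master_t;

void on_slot_assigned(master_t *m, const master_t *masters, size_t count)
{
    m->numslots++;
    /* New rule: an empty, slave-less master getting its first slot
     * becomes a migration target if some other master has slaves. */
    if (m->numslots == 1 && m->numslaves == 0) {
        for (size_t i = 0; i < count; i++) {
            if (masters[i].numslaves > 0) {
                m->migration_target = true;
                break;
            }
        }
    }
}
```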
-
- 02 Feb, 2016 1 commit
-
-
antirez authored
We need to be able to correctly parse the node address in the case of IPv6 addresses.
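A minimal sketch of the parsing concern described above (not the actual Redis code): split "address:port" on the last colon so that IPv6 addresses, which contain colons themselves, are handled correctly.

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

int parse_node_address(const char *addr, char *ip, size_t iplen, int *port)
{
    const char *colon = strrchr(addr, ':');     /* last colon, not the first */
    if (colon == NULL || (size_t)(colon - addr) >= iplen) return -1;
    memcpy(ip, addr, (size_t)(colon - addr));
    ip[colon - addr] = '\0';
    *port = atoi(colon + 1);
    return 0;
}
```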
-
- 01 Feb, 2016 1 commit
-
-
antirez authored
-
- 29 Jan, 2016 6 commits
-
-
antirez authored
CLUSTER SLOTS now includes node IDs in the nodes description associated with a given slot range. Certain client library implementations need a way to reference a node in a unique way, so they were relying on CLUSTER NODES, which is not a stable API and may change frequently depending on future Redis Cluster requirements.
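For illustration, a hypothetical redis-cli session after this change might look roughly like the following, where the third element of each node entry is the node ID (address, port and ID values are made up):

```
127.0.0.1:7000> CLUSTER SLOTS
1) 1) (integer) 0
   2) (integer) 5460
   3) 1) "127.0.0.1"
      2) (integer) 7000
      3) "09dbe9720cda62f7865eabc5fd8857c5d2678366"
```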
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
- 28 Jan, 2016 1 commit
-
-
Itamar Haber authored
-
- 27 Jan, 2016 1 commit
-
-
antirez authored
-
- 26 Jan, 2016 3 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
The change covers the case where:

1. There is a node we can't reach (in FAIL or PFAIL state).
2. We see a different address for this node in the gossip section sent to us by a node that, instead, is able to talk with the node we cannot talk to.

In this case it's a good bet to switch to the address reported by that node, since there was an address switch and it is able to talk with the node while we are not. However previously this was done in a dangerous way, by initiating a handshake. The handshake, using the MEET packet, forces the receiver to join our cluster, and this is not a good idea. If the node in question really just switched address, but is the same node, it already knows about us, so we just need to perform an address update and a reconnection.

So with this commit we instead just update the address of the node, release the node link if any, and attempt to reconnect in the next clusterCron() cycle. The commit also improves the debugging messages printed by Cluster during address or ID switches.
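A hedged sketch of the safer behaviour described above, with invented identifiers and a simplified node struct: record the new address, drop the current link, and let the next cron cycle reconnect, instead of starting a MEET handshake.

```c
#include <string.h>

typedef struct {
    char ip[46];        /* large enough for an IPv6 string */
    int port;
    void *link;         /* connection to the node, NULL if none */
} cluster_node;

void node_update_address(cluster_node *n, const char *new_ip, int new_port,
                         void (*free_link)(void *link))
{
    if (strcmp(n->ip, new_ip) == 0 && n->port == new_port) return;
    strncpy(n->ip, new_ip, sizeof(n->ip) - 1);
    n->ip[sizeof(n->ip) - 1] = '\0';
    n->port = new_port;
    if (n->link) { free_link(n->link); n->link = NULL; }
    /* Reconnection happens in the next clusterCron() iteration. */
}
```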
-
- 19 Jan, 2016 2 commits
-
-
antirez authored
Centralize cleanup of newargv in a single place. Add more comments to make a complex function a bit easier to follow. Related to issue #3016.
-
antirez authored
Another leak was fixed in the case of a syntax error by restructuring the allocation strategy for the two dynamic vectors. We also make sure to always close the cached socket on I/O errors, so that all I/O errors are handled the same way, even if we had a previously queued error of a different kind from the destination server. Thanks to Kevin McGehee. Related to issue #3016.
-
- 18 Jan, 2016 1 commit
-
-
antirez authored
In issue #3016 Kevin McGehee identified multiple very serious issues in the new implementation of MIGRATE. This commit attempts to restructure the code in order to avoid mistakes; an analysis of the new implementation is in progress to check for possible edge cases.
-
- 14 Jan, 2016 1 commit
-
-
antirez authored
With this commit we preserve the list of nodes that have .slaveof set to the node, even when the node is turned into a slave, and make sure to fix the .slaveof pointers to NULL when a node is freed from memory, regardless of whether it's a slave or a master. Basically we try to remember the logical master in the current configuration even if the logical master already advertised it as a slave. We still remember the associations, so that when a node is freed we can fix them. This should fix issue #3002.
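A simplified sketch of the bookkeeping described above (not the real cluster.c code): when a node is freed, any .slaveof pointer still referencing it is set to NULL, whatever the freed node's role.

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct cluster_node {
    struct cluster_node *slaveof;   /* logical master, or NULL */
    /* ... */
} cluster_node;

void free_cluster_node(cluster_node *node, cluster_node **all, size_t count)
{
    /* Fix dangling master references regardless of the node's role. */
    for (size_t i = 0; i < count; i++)
        if (all[i] && all[i]->slaveof == node) all[i]->slaveof = NULL;
    free(node);
}
```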
-
- 11 Jan, 2016 3 commits
-
-
antirez authored
-
antirez authored
Sometimes during "fixes" we have to set up a new configuration and assign slots to nodes. With BUMPEPOCH we can make sure the new configuration of the node will win if there are conflicting configurations (for example another node is *also* claiming the same slot because the cluster is totally messed up).
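For illustration, a hypothetical invocation might look like the following (port and epoch value are made up, and the exact reply wording may differ):

```
$ redis-cli -p 7000 CLUSTER BUMPEPOCH
BUMPED 6
```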
-
antirez authored
-
- 08 Jan, 2016 1 commit
-
-
antirez authored
Extend the extra MIGRATE freedom so that it can be called in the context of the local slot anytime there is a slot open in either direction (importing or migrating). This is useful for redis-trib to fix the cluster when it is in an odd state. This fix allows "redis-trib fix" to do its work in certain cases where previously an error was reported.
-