- 11 Dec, 2015 2 commits
-
antirez authored
-
antirez authored
We wait a fixed amount of time (currently 5 seconds), much greater than the usual cluster node-to-node communication latency, before migrating. This way, when a failover occurs, before detecting the new master as a migration target we give its natural slaves (the slaves of the failed-over master) time to announce that they switched to the new master, preventing a useless migration operation.
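A minimal sketch of the delay logic in C, with hypothetical names and the delay constant taken from the message (the real cluster.c code differs):

    #include <stdint.h>
    #include <stdbool.h>
    #include <time.h>

    #define MIGRATION_DELAY_MS 5000  /* Much larger than node-to-node latency. */

    typedef struct {
        int64_t orphaned_since_ms;   /* 0 = not currently orphaned. */
    } master_state;

    static int64_t now_ms(void) { return (int64_t)time(NULL) * 1000; }

    /* Called periodically: returns true only once the master has been
     * continuously orphaned for the full delay, giving the natural slaves
     * of a failed-over master time to announce their new master. */
    bool may_migrate_to(master_state *m, bool is_orphaned) {
        if (!is_orphaned) { m->orphaned_since_ms = 0; return false; }
        if (m->orphaned_since_ms == 0) m->orphaned_since_ms = now_ms();
        return now_ms() - m->orphaned_since_ms >= MIGRATION_DELAY_MS;
    }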
-
- 10 Dec, 2015 1 commit
-
antirez authored
-
- 09 Dec, 2015 1 commit
-
antirez authored
Some time ago I broke replicas migration (reported in #2924). The idea was to prevent masters without replicas from getting replicas because of replica migration; I remember it creating issues with tests, but there is no clue in the commit message about why it was so undesirable. However, as a side effect, my patch totally ruined the concept of replicas migration, since we want it to work also for instances that, technically, never had slaves in the past: promoted slaves. So now the ability to be targeted by replicas migration is instead a new flag, "migrate-to". It only applies to masters, and is set in the following two cases:
1. When a master gets a slave, it is set.
2. When a slave turns into a master because of a failover, it is set.
This way the replicas migration targets are only masters that used to have slaves, and slaves of masters (that used to have slaves... obviously) that were promoted. The new flag is internal only, and is never exposed in the output nor persisted in the nodes configuration, since all the information needed to handle it is implicit in the cluster configuration we already have.
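A hedged sketch of where such a flag would be set, using illustrative flag and function names rather than the actual cluster.c ones:

    #include <stdbool.h>

    #define NODE_FLAG_MASTER     (1 << 0)
    #define NODE_FLAG_MIGRATE_TO (1 << 1)  /* Internal only: never exposed or persisted. */

    typedef struct cluster_node {
        int flags;
        int num_slaves;
    } cluster_node;

    /* Case 1: a master gains a slave. */
    void on_slave_added(cluster_node *master) {
        master->num_slaves++;
        master->flags |= NODE_FLAG_MIGRATE_TO;
    }

    /* Case 2: a slave is promoted to master by a failover. */
    void on_failover_promotion(cluster_node *node) {
        node->flags |= NODE_FLAG_MASTER | NODE_FLAG_MIGRATE_TO;
    }

    /* Replica migration only targets masters carrying the flag. */
    bool is_migration_target(const cluster_node *n) {
        return (n->flags & NODE_FLAG_MASTER) && (n->flags & NODE_FLAG_MIGRATE_TO);
    }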
-
- 27 Nov, 2015 1 commit
-
antirez authored
-
- 01 Oct, 2015 2 commits
- 27 Jul, 2015 3 commits
- 26 Jul, 2015 6 commits
- 24 Jun, 2015 1 commit
-
Jan-Erik Rediger authored
-
- 11 Jun, 2015 1 commit
-
antirez authored
Related to issues #2609 and #2612.
-
- 24 Mar, 2015 1 commit
-
antirez authored
There was a bug in Redis Cluster caused by clients blocked in a blocking list pop operation, for keys no longer handled by the instance, or in a condition where the cluster became down after the client blocked. A typical situation is:
1) BLPOP <somekey> 0
2) <somekey> hash slot is resharded to another master.
The client will block forever in this case. A symmetrical, non-cluster-specific bug happens when an instance is turned from master to slave. In that case it is more serious, since it will desynchronize data between slaves and masters. This other bug was discovered as a side effect of thinking about the bug explained and fixed in this commit, but will be fixed in a separate commit.
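A rough illustration of the fix's idea, with stand-in types and a toy hash function (not the real CRC16 slot mapping): after a slot map change, scan the blocked clients and unblock the ones whose keys this node no longer serves.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_SLOTS 16384

    bool my_slots[NUM_SLOTS];                 /* Slots this node still serves. */

    typedef struct blocked_client {
        const char *key;                      /* Key the client is blocked on. */
        struct blocked_client *next;
    } blocked_client;

    /* Toy stand-in for the real CRC16-based hash slot function. */
    int key_hash_slot(const char *key) {
        unsigned h = 0;
        while (*key) h = h * 31u + (unsigned char)*key++;
        return (int)(h % NUM_SLOTS);
    }

    /* After a slot map change (or when the cluster goes down), walk the
     * blocked clients and unblock any whose key no longer belongs to this
     * node, instead of leaving them blocked forever. */
    void unblock_misplaced_clients(blocked_client *head) {
        for (blocked_client *c = head; c != NULL; c = c->next) {
            if (!my_slots[key_hash_slot(c->key)])
                printf("unblocking client waiting on %s\n", c->key);
        }
    }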
-
- 21 Mar, 2015 2 commits
- 20 Mar, 2015 4 commits
-
antirez authored
-
antirez authored
In no case should we attempt a failover if myself->slaveof is NULL.
-
antirez authored
This commit moves the process of generating a new config epoch without consensus out of the clusterCommand() implementation, in order to make it reusable for other purposes (the current target is to have a CLUSTER FAILOVER option forcing the failover when no master majority is reachable). Moreover, the commit moves other functions that are similarly related to config epochs into a new logical section of the cluster.c file, just for clarity.
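A minimal sketch of what a reusable "bump epoch without consensus" helper can look like; names and structure are assumptions, not the actual cluster.c code:

    #include <stdint.h>

    typedef struct {
        uint64_t current_epoch;   /* Greatest config epoch seen in the cluster. */
        uint64_t my_config_epoch; /* This node's own config epoch. */
    } cluster_state_t;

    /* Claim a config epoch greater than any seen so far, without agreement
     * from other masters. Collisions are possible and are resolved later by
     * the cluster's config-epoch conflict resolution. */
    void bump_config_epoch_without_consensus(cluster_state_t *cs) {
        if (cs->my_config_epoch == 0 || cs->my_config_epoch != cs->current_epoch) {
            cs->current_epoch++;
            cs->my_config_epoch = cs->current_epoch;
            /* Persist the new epoch and broadcast it with the next ping. */
        }
    }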
-
antirez authored
Before, we relied on the global cluster state to make sure all the hash slots were linked to some node when getNodeByQuery() was called, so finding a hash slot unbound was checked with an assertion. However, this is fragile. The cluster state is often updated in the clusterBeforeSleep() function, and not ASAP on state change, so it may happen that we process clients with a cluster state that is 'ok' but with certain hash slots still set to NULL. With this commit the condition is also checked in getNodeByQuery() and reported with an identical error code of -CLUSTERDOWN but a slightly different error message, so that we have more debugging clues in the future. Root cause of issue #2288.
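An illustrative version of the extra check, with stand-in names and error text (the real getNodeByQuery() signature differs):

    #include <stddef.h>

    #define NUM_SLOTS 16384
    #define ERR_CLUSTERDOWN_UNBOUND "-CLUSTERDOWN Hash slot not served"

    typedef struct cluster_node cluster_node;
    cluster_node *slots[NUM_SLOTS];   /* May transiently contain NULLs even
                                         while the cached state is still 'ok'. */

    /* Instead of asserting, report an unbound slot with a distinct message
     * so the condition is debuggable in the field. */
    cluster_node *node_for_slot(int slot, const char **err) {
        cluster_node *n = slots[slot];
        if (n == NULL) {
            *err = ERR_CLUSTERDOWN_UNBOUND;
            return NULL;
        }
        return n;
    }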
-
- 18 Mar, 2015 1 commit
-
antirez authored
There are rare conditions where node->slaveof may be NULL even if the node is a slave. Checking by flag is much more robust.
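The difference in a nutshell, with illustrative names:

    #define NODE_FLAG_SLAVE (1 << 2)

    typedef struct cluster_node {
        int flags;
        struct cluster_node *slaveof;  /* May be NULL even for a slave. */
    } cluster_node;

    /* Fragile: relies on the slaveof pointer being set. */
    int is_slave_by_pointer(const cluster_node *n) { return n->slaveof != NULL; }

    /* Robust: the role flag is authoritative. */
    int is_slave_by_flag(const cluster_node *n) { return (n->flags & NODE_FLAG_SLAVE) != 0; }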
-
- 13 Mar, 2015 2 commits
- 10 Mar, 2015 1 commit
-
Michel Martens authored
-
- 27 Feb, 2015 1 commit
-
antirez authored
-
- 26 Feb, 2015 2 commits
-
antirez authored
1. Remove useless "cs" initialization.
2. Add a "select" var to capture a condition checked multiple times.
3. Avoid duplicating the same if (!copy) conditional.
4. Don't increment dirty if copy is given (no deletion is performed), otherwise we would propagate MIGRATE when not needed.
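A sketch of the tidied control flow; names and signature are illustrative, not the actual migrateCommand() code:

    #include <stdbool.h>

    /* Compute the SELECT condition once in a "select" variable, and keep a
     * single !copy branch that both deletes the key and increments dirty,
     * so MIGRATE is only propagated when a deletion actually happened. */
    void migrate_after_transfer(int *last_dbid, int dbid, bool copy,
                                int *dirty, void (*delete_key)(void)) {
        bool select = (*last_dbid != dbid);   /* Condition checked multiple times. */
        if (select) *last_dbid = dbid;
        if (!copy) {
            delete_key();   /* The key leaves this instance... */
            (*dirty)++;     /* ...so the MIGRATE command must be propagated. */
        }
        /* With COPY given, nothing is deleted and dirty stays untouched. */
    }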
-
Tommy Wang authored
Avoid redundant SELECT calls when continuously migrating keys to the same dbid within a target Redis instance.
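A sketch of the caching idea, with an illustrative structure (the real implementation caches connections per target instance):

    #include <stdio.h>

    typedef struct {
        int fd;          /* Cached connection to the target instance. */
        int last_dbid;   /* -1 = no SELECT issued on this socket yet. */
    } migrate_cached_socket;

    /* Only send SELECT when the destination db actually changes. */
    void migrate_one_key(migrate_cached_socket *cs, int dbid, const char *key) {
        if (cs->last_dbid != dbid) {
            printf("SELECT %d\n", dbid);  /* Stand-in for writing to cs->fd. */
            cs->last_dbid = dbid;
        }
        printf("RESTORE %s ...\n", key);  /* The actual key transfer. */
    }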
-
- 30 Jan, 2015 2 commits
-
antirez authored
This improves the PFAIL -> FAIL switch. It is too late at this point in the RC releases to add a proper separate PFAIL/FAIL dictionary to do this in a less randomized way, but practical experiments show that this helps: the average PFAIL -> FAIL time with 20 nodes and node-timeout set to 5 seconds takes 2.5 seconds without this commit, and 1 second with it.
-
antirez authored
-
- 29 Jan, 2015 3 commits
-
antirez authored
-
antirez authored
-
antirez authored
Otherwise it is impossible to receive the majority of failure reports within the node_timeout*2 window in larger clusters. Still, with a 200-node cluster, 20 gossip sections are a very reasonable amount of bytes to send. A side effect of this change is also faster cluster node joins for large clusters, because the cluster layout takes less time to propagate.
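A toy model (assumptions, not the actual gossip math) of why more sections help: if an observer receives roughly one gossip-carrying message per second, each mentioning `gossip` of the other nodes-1 peers at random, the expected number of times a specific failing node is mentioned during the window grows linearly with the section count.

    #include <stdio.h>

    /* Expected mentions of one specific node inside the node_timeout*2
     * window, under the ~1 message/second assumption above. */
    static double expected_mentions(double nodes, double gossip,
                                    double node_timeout) {
        double msgs_in_window = node_timeout * 2.0;
        return msgs_in_window * gossip / (nodes - 1.0);
    }

    int main(void) {
        printf("3 sections:  %.2f\n", expected_mentions(200, 3, 5));
        printf("20 sections: %.2f\n", expected_mentions(200, 20, 5));
        return 0;
    }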
-
- 24 Jan, 2015 1 commit
-
antirez authored
Otherwise we risk sending uninitialized data to other nodes, which may contain anything. This was actually not possible only because the buffer where the cluster packet header is created was initialized to a size larger than the 3 gossip sections we use, so the memory was already all filled with zeroes by the memset().
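Illustratively, with stand-in sizes and types: zero the whole outgoing buffer before filling in however many gossip sections are actually used, so no uninitialized bytes can leak onto the wire.

    #include <string.h>

    typedef struct { char nodename[40]; unsigned short port; } gossip_section;

    #define MAX_GOSSIP 20

    void build_packet(gossip_section *buf, int used) {
        memset(buf, 0, sizeof(gossip_section) * MAX_GOSSIP);
        for (int i = 0; i < used; i++) {
            /* ... fill buf[i] with a randomly picked node ... */
            (void)buf[i];
        }
    }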
-
- 21 Jan, 2015 2 commits
-
Matt Stancliff authored
Fixes valgrind error:

    48 bytes in 1 blocks are definitely lost in loss record 196 of 373
       at 0x4910D3: je_malloc (jemalloc.c:944)
       by 0x42807D: zmalloc (zmalloc.c:125)
       by 0x41FA0D: dictGetIterator (dict.c:543)
       by 0x41FA48: dictGetSafeIterator (dict.c:555)
       by 0x459B73: clusterHandleSlaveMigration (cluster.c:2776)
       by 0x45BF27: clusterCron (cluster.c:3123)
       by 0x423344: serverCron (redis.c:1239)
       by 0x41D6CD: aeProcessEvents (ae.c:311)
       by 0x41D8EA: aeMain (ae.c:455)
       by 0x41A84B: main (redis.c:3832)
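The fix pattern, sketched with stand-in dict structures (the leaked iterator came from dictGetSafeIterator, per the trace): release the iterator on every exit path.

    #include <stdlib.h>

    /* Stand-ins for the dict.c iterator API implicated in the leak. */
    typedef struct { int pos; } dictIterator;
    dictIterator *dictGetSafeIterator(void *d) { (void)d; return malloc(sizeof(dictIterator)); }
    void dictReleaseIterator(dictIterator *it) { free(it); }

    /* The leak pattern: an early return skipped the release. The fix is to
     * release on every exit path (sketch, not the actual
     * clusterHandleSlaveMigration code). */
    void handle_slave_migration(void *nodes, int should_abort) {
        dictIterator *di = dictGetSafeIterator(nodes);
        /* ... iterate over nodes ... */
        if (should_abort) {
            dictReleaseIterator(di);   /* Previously missing: 48 bytes leaked. */
            return;
        }
        dictReleaseIterator(di);
    }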
-
Matt Stancliff authored
If the array has N elements, we can't read one past the end when we are already at N. Also, we need to move elements by their storage size in the array, not just by individual bytes.
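Both fixes in one illustrative helper (names are hypothetical):

    #include <string.h>

    typedef struct { long value; } item;

    /* Remove element i from an array of n items: never touch index n
     * (the out-of-bounds read), and shift by whole elements with memmove
     * rather than by single bytes. */
    void remove_at(item *arr, int n, int i) {
        if (i < 0 || i >= n) return;
        if (i < n - 1)
            memmove(&arr[i], &arr[i + 1],
                    sizeof(item) * (size_t)(n - 1 - i));
    }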
-