- 15 Jan, 2014 6 commits
-
-
antirez authored
-
antirez authored
The rejoin delay usually is the node timeout. However if the node timeout is too small, we set it to 500 milliseconds, a value chosen to be greater than the RTT / instance latency figures of most setups, so that communication with other nodes is likely to happen before rejoining.
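A minimal sketch of the clamp described above (the identifiers are illustrative, not the actual Redis names):

    #include <stdint.h>

    typedef int64_t mstime_t;

    /* Hypothetical stand-in for the configured node timeout (milliseconds). */
    static mstime_t cluster_node_timeout = 100;

    /* Rejoin delay: the node timeout, clamped to at least 500 ms so that it
     * stays above typical RTT / instance latency figures. */
    mstime_t clusterRejoinDelay(void) {
        mstime_t delay = cluster_node_timeout;
        if (delay < 500) delay = 500;
        return delay;
    }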
-
antirez authored
Usually we update the cluster state (to understand if we should accept queries or reply with an error) only when there is a change in the state of the nodes. However for the "delayed rejoin" feature to work, that is, for a master to wait some time before accepting queries again after it rejoins the majority, we need to periodically refresh the time at which the node was last partitioned away from the majority. With this commit, if the cluster is down we update the state ten times per second.
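The idea, sketched as a hypothetical cron hook (names and constants are illustrative):

    #include <stdint.h>

    typedef int64_t mstime_t;

    #define CLUSTER_FAIL 1

    static int cluster_state = CLUSTER_FAIL;
    static mstime_t state_last_updated = 0;

    /* Normally the state is refreshed only on node state changes; while the
     * cluster is down we refresh it every 100 ms (ten times per second) so
     * the "partitioned away since" timestamp stays up to date. */
    void clusterUpdateStateIfNeeded(mstime_t now) {
        if (cluster_state == CLUSTER_FAIL && now - state_last_updated >= 100) {
            /* clusterUpdateState() would run here. */
            state_last_updated = now;
        }
    }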
-
antirez authored
See issue #1426 on Github.
-
antirez authored
Even without the user manually editing the file, it is still possible to end up with blank lines (just a single "\n" per line) because of how the nodes.conf update/write process works.
-
antirez authored
The way the file was generated was unsafe and led to nodes.conf file corruption (a zero-length file) if the server stopped or crashed during the creation of the file. The previous update method was as simple as open with O_TRUNC followed by the write call. While the write itself was a single call with the full payload, so that POSIX semantics guarantee no half-written file, stopping the server just after the open call resulted in a zero-length file (all the nodes information lost!).
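One standard way to make such a rewrite crash-safe is the write-to-temporary-file-then-rename() pattern; a minimal sketch, not necessarily the exact approach taken by this commit:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Replace `path` with `payload` atomically: write a temporary file,
     * flush and fsync it, then rename() it over the old file. A crash at
     * any point leaves either the old file or the complete new one, never
     * a zero-length file. */
    int safeRewrite(const char *path, const char *payload) {
        char tmp[1024];
        snprintf(tmp, sizeof(tmp), "%s.tmp-%d", path, (int)getpid());
        FILE *fp = fopen(tmp, "w");
        if (!fp) return -1;
        size_t len = strlen(payload);
        if (fwrite(payload, 1, len, fp) != len ||
            fflush(fp) == EOF || fsync(fileno(fp)) == -1) {
            fclose(fp);
            unlink(tmp);
            return -1;
        }
        fclose(fp);
        if (rename(tmp, path) == -1) { unlink(tmp); return -1; }
        return 0;
    }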
-
- 14 Jan, 2014 4 commits
-
-
antirez authored
A client can enter a special cluster read-only mode using the READONLY command: if the client reads from a slave instance after this command, for slots that are actually served by the instance's master, the queries will be processed without redirection, allowing clients to read from slaves (but without any kind of read-after-write guarantee). The READWRITE command can be used to exit the read-only state.
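From the client's perspective the flow looks roughly like this, sketched with hiredis (the address, port and key are made up):

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        /* Connect directly to a slave of the shard serving the key. */
        redisContext *c = redisConnect("127.0.0.1", 7001);
        if (!c || c->err) return 1;

        /* Enter read-only mode: reads for slots served by this slave's
         * master are now answered locally instead of being redirected. */
        redisReply *r = redisCommand(c, "READONLY");
        if (r) freeReplyObject(r);

        /* May return stale data: there is no read-after-write guarantee. */
        r = redisCommand(c, "GET mykey");
        if (r && r->type == REDIS_REPLY_STRING) printf("%s\n", r->str);
        if (r) freeReplyObject(r);

        /* Exit read-only mode: redirections apply again. */
        r = redisCommand(c, "READWRITE");
        if (r) freeReplyObject(r);

        redisFree(c);
        return 0;
    }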
-
antirez authored
-
antirez authored
64mb is the default value in redis.conf. For some reason the hard-coded default was instead 1mb, which is too small.
-
antirez authored
-
- 13 Jan, 2014 3 commits
- 10 Jan, 2014 9 commits
-
-
antirez authored
The command totally removes a monitored master.
-
antirez authored
The claim about unlinking the instance from the connected hash tables was the opposite of what actually happens. Also, the current behavior is safer in most cases, so it is better to manually unlink when needed.
-
antirez authored
-
antirez authored
-
antirez authored
It allows adding new masters to monitor at runtime.
-
antirez authored
The new function is used when we want to normalize an IP address without performing a DNS lookup if the string to resolve is not a valid IP. This is useful whenever only IPs are valid inputs, or when we want to skip DNS resolution during runtime operations, since it is slow and may require us to block.
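A hypothetical helper in the spirit of this commit (the actual Redis function is not named in this message, so the identifier below is made up), using inet_pton()/inet_ntop() so the resolver is never touched:

    #include <arpa/inet.h>
    #include <stddef.h>

    /* If `addr` is already a valid IPv4 or IPv6 address, copy its
     * normalized form into `buf` and return 0. Otherwise return -1 so the
     * caller decides whether a (slow, possibly blocking) DNS lookup is
     * acceptable. */
    int resolveIPOnly(const char *addr, char *buf, size_t buflen) {
        unsigned char bin[16];
        if (inet_pton(AF_INET, addr, bin) == 1)
            return inet_ntop(AF_INET, bin, buf, buflen) ? 0 : -1;
        if (inet_pton(AF_INET6, addr, bin) == 1)
            return inet_ntop(AF_INET6, bin, buf, buflen) ? 0 : -1;
        return -1;
    }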
-
antirez authored
With SENTINEL MASTERS it was already possible to list all the configured masters, but not a specific one.
-
antirez authored
Note: the auth password with the master is voluntarily not exposed.
-
antirez authored
-
- 09 Jan, 2014 2 commits
- 08 Jan, 2014 4 commits
-
-
antirez authored
Fixes issue #1491 on Github.
-
antirez authored
-
antirez authored
Masters not understanding REPLCONF ACK will reply with errors to our requests, causing a number of possible issues. This commit detects a global replication offset set to -1 at the end of the replication, and marks the client representing the master with the REDIS_PRE_PSYNC flag. Note that this flag was called REDIS_PRE_PSYNC_SLAVE, but now it is just REDIS_PRE_PSYNC as, starting with this commit, it is used for both slaves and masters. This commit fixes issue #1488.
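A sketch of the detection (struct, flag value and function name are illustrative, not the actual Redis code):

    #include <stdint.h>

    #define REDIS_PRE_PSYNC (1<<16) /* flag bit chosen arbitrarily here */

    typedef struct client {
        int flags;
    } client;

    /* -1 means the offset was never initialized during the sync. */
    static int64_t master_repl_offset = -1;

    /* At the end of the initial replication: if the global offset is still
     * -1, the master does not understand REPLCONF ACK, so mark it and stop
     * sending acknowledgements to it. */
    void detectPrePsyncMaster(client *master) {
        if (master_repl_offset == -1)
            master->flags |= REDIS_PRE_PSYNC;
    }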
-
antirez authored
-
- 25 Dec, 2013 6 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
During a refactoring I misspelled _port as port. This is one of the reasons I never used _varname myself.
-
antirez authored
-
antirez authored
Now the socket is closed if anetNonBlock() fails, and in general the code structure makes it harder to introduce this kind of bug in the future. Reference: pull request #1059.
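The shape of the fix, reduced to a self-contained sketch (the real code lives in anet.c; this version inlines the fcntl() calls):

    #include <fcntl.h>
    #include <unistd.h>

    /* Put `fd` in non-blocking mode; on any failure the socket is closed
     * before returning, so the descriptor can never leak. */
    int setNonBlockOrClose(int fd) {
        int flags = fcntl(fd, F_GETFL);
        if (flags == -1 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1) {
            close(fd);
            return -1;
        }
        return fd;
    }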
-
antirez authored
The function actually needs to be split into sub-functions at some point in the future.
-
- 23 Dec, 2013 2 commits
-
-
antirez authored
There were two problems with the implementation. 1) "save" was not correctly processed when no save point was configured, as reported in issue #1416. 2) The way the code checked if an option existed in the "processed" dictionary was wrong: we add the element as a key associated with a NULL value, so dictFetchValue() can't be used to check for existence; dictFind() must be used instead, which returns NULL only if the entry does not exist at all.
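The distinction, sketched against the Redis dict API (assuming dict.h from the Redis source tree):

    #include "dict.h"

    /* Entries in `processed` are keys with a NULL value, so
     * dictFetchValue() returns NULL both when the key is missing and when
     * it is present: it cannot be an existence test. dictFind() returns a
     * non-NULL entry whenever the key exists. */
    int optionProcessed(dict *processed, const void *option) {
        return dictFind(processed, option) != NULL;
    }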
-
antirez authored
This was no longer the case with 2.8 because of a bug introduced with the IPv6 support. Now it is fixed. This fixes issues #1287 and #1477.
-
- 22 Dec, 2013 3 commits
-
-
antirez authored
Currently replication offsets can be used only in a limited way to understand, out of a set of slaves, which one has the most updated data. For example this comparison is possible if N slaves were all replicating from the same master. However the replication offset was not transferred from a master to the slaves (that are later promoted as masters) in any way, so for instance if there were three instances A, B, C, with A master and B and C replicating from A, the following could happen: C disconnects from A. B is turned into master. A is switched to slave of B. B receives some writes. In this context there was no way to compare the offsets of A and C, because B would use its own local master replication offset as replication offset when initializing the replication with A. With this commit, when B is turned into a master it inherits the replication offset from A, making A and C comparable. In the above case, assuming no inconsistencies are created during the disconnection and failover process, A will show a replication offset greater than C's. Note that this does not mean offsets are always enough to understand which instance, in a set of instances, has the most updated data, since in more complex examples the replica with the highest replication offset could be partitioned away when picking the instance to elect as new master. However this in general improves the ability of a system to try to pick a good replica to promote to master.
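A sketch of the inheritance step (field and function names are hypothetical):

    #include <stdint.h>

    typedef struct replState {
        int64_t master_repl_offset; /* offset of the stream we produce */
        int64_t slave_repl_offset;  /* offset we processed from our master */
    } replState;

    /* On promotion, inherit the offset reached as a slave instead of
     * keeping the local one, so this instance stays offset-comparable
     * with the old master's other slaves. */
    void inheritOffsetOnPromotion(replState *s) {
        if (s->slave_repl_offset > 0)
            s->master_repl_offset = s->slave_repl_offset;
    }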
-
antirez authored
-
antirez authored
When the configured node timeout is very small, the data validity time (the maximum data age for a slave to try a failover) is too short (ten times the configured node timeout) when the replication link with the master is mostly idle. In this case we'll receive some data from the master only every server.repl_ping_slave_period, to refresh the last interaction with the master. This commit adds the slave ping period to the max data validity time, to avoid slaves sensing their data as too old without a good reason. However this max data validity time is likely a setting that should be configurable by the Redis Cluster user, in a way completely independent from the node timeout.
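The resulting validity check, as a sketch (the ten-times multiplier comes from the message above; names are illustrative):

    #include <stdint.h>

    typedef int64_t mstime_t;

    /* A slave refuses to start a failover when its data is older than ten
     * node timeouts plus, after this commit, the ping period, since an
     * idle replication link is only refreshed every ping period. */
    int dataIsTooOld(mstime_t data_age, mstime_t node_timeout,
                     mstime_t ping_period) {
        mstime_t max_age = node_timeout * 10 + ping_period;
        return data_age > max_age;
    }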
-
- 20 Dec, 2013 1 commit
-
-
antirez authored
-