- 05 Feb, 2014 6 commits
-
-
antirez authored
Sometimes an OS X master with a Linux server over a slow link caused a strange error: OS X called the writable handler for the socket even though there was apparently no room in the socket buffer to accept the write. The write(2) call returned an EAGAIN error that was not checked, so any non-positive return from write(2) was always treated as a connection reset, which was unfortunate since the bulk transfer had to start again. Additionally, more errors in the same code path are now logged with the WARNING level.
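A minimal sketch of the kind of check this fix implies (the helper name and structure are illustrative, not the actual Redis code):

```c
#include <errno.h>
#include <unistd.h>

/* Hypothetical helper: distinguish a full socket buffer from a real
 * error when writing to a non-blocking socket. */
ssize_t write_nonblock(int fd, const void *buf, size_t len) {
    ssize_t nwritten = write(fd, buf, len);
    if (nwritten == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* Socket buffer full: not a connection reset, just retry
         * when the socket becomes writable again. */
        return 0;
    }
    /* nwritten == -1 here is a real error: the caller should log it
     * at the WARNING level and restart the bulk transfer. */
    return nwritten;
}
```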
-
antirez authored
For a manual failover we need both a manual failover in progress and mf_can_start set to true (master offset received and matched).
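As a rough illustration of that precondition (mf_can_start comes from the message above; the mf_end deadline field and the surrounding code are made-up sketch details):

```c
typedef long long mstime_t;

/* Illustrative only: proceed with a manual failover only when one is
 * in progress (a non-zero deadline) and the master offset has been
 * received and matched. */
int manual_failover_can_proceed(mstime_t mf_end, int mf_can_start) {
    return mf_end != 0 && mf_can_start;
}
```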
-
antirez authored
-
antirez authored
Otherwise it is always detected as a manual failover that timed out.
-
antirez authored
When a slave requests the masters' vote for a manual failover, the REQUEST_AUTH message is flagged in a special way in order to force the masters to grant the authorization even if the master is not marked as failing.
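A sketch of the mechanism (the flag name and code here are hypothetical; only the behavior is taken from the message):

```c
#include <stdint.h>

/* Hypothetical flag carried by the REQUEST_AUTH message. */
#define MSG_FLAG_FORCE_ACK (1 << 0)

/* Illustrative vote check on the master side: a manual failover
 * request is honored even if the slave's master is not failing. */
int should_grant_vote(int master_is_failing, uint32_t msg_flags) {
    if (msg_flags & MSG_FLAG_FORCE_ACK) return 1;
    return master_is_failing;
}
```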
-
antirez authored
-
- 04 Feb, 2014 1 commit
-
-
antirez authored
The API is one of the building blocks of the CLUSTER FAILOVER command that executes a manual failover in Redis Cluster. However, exposed as a command that the user can call directly, it makes it much simpler to safely upgrade a standalone Redis instance using a slave. The command works like this: CLIENT PAUSE <milliseconds>. All the clients that are not slaves and not in MONITOR state are paused for the specified number of milliseconds. This means that slaves are served normally in the meantime. At the end of the specified amount of time all the clients are unblocked and will continue operations normally. This command has no effect on the population of the slow log, since clients are not blocked in the middle of operations but only when there is new data to process. Note that while the clients are paused, new commands are still accepted and queued in the client buffer, so clients will likely not block while writing to the server while the pause is active.
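A minimal sketch of the pausing mechanism described above, with assumed names (this is not the actual implementation):

```c
#include <sys/time.h>

typedef long long mstime_t;

/* Hypothetical global: absolute time (ms) at which the pause ends. */
static mstime_t clients_pause_end_time = 0;

static mstime_t mstime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return ((mstime_t)tv.tv_sec) * 1000 + tv.tv_usec / 1000;
}

/* CLIENT PAUSE <milliseconds>: just record when the pause expires. */
void pause_clients(mstime_t duration_ms) {
    clients_pause_end_time = mstime() + duration_ms;
}

/* Checked before executing queued commands from normal clients;
 * slaves and MONITOR clients would bypass this check. */
int clients_are_paused(void) {
    return mstime() < clients_pause_end_time;
}
```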
-
- 03 Feb, 2014 5 commits
-
-
antirez authored
Keys expiring in the middle of the execution of Lua scripts are able to create inconsistencies in masters and / or AOF files. See the following example:

    if redis.call("exists",KEYS[1]) == 1 then
        redis.call("incr","mycounter")
    end

    if redis.call("exists",KEYS[1]) == 1 then
        return redis.call("incr","mycounter")
    end

The script executes the same *if key exists then increment counter* logic two times. However the two executions will work differently in the master and the slaves, provided some unlucky timing happens: in the master the key may still exist the first time, while the second time the key may no longer exist, so the counter is incremented just one time. As a side effect the master will generate a synthetic `DEL` command in the replication channel in order to force the slaves to expire the key (given that key expiration is master-driven). When the same script runs in the slave, the key will no longer be there, so the script will not increment the counter at all. The key idea used to implement the expire-at-first-lookup semantics was provided by Marc Gravell.
-
antirez authored
-
antirez authored
server.lua_time_start is expressed in milliseconds. Use mstime_t instead of long long, and populate it with mstime() instead of ustime()/1000. Functionally identical but more natural.
-
Salvatore Sanfilippo authored
update copyright year
-
PatrickJS authored
-
- 31 Jan, 2014 8 commits
-
-
antirez authored
The Redis test uses a server-clients model in order to parallelize the execution of different tests. However, in recent versions of OS X, not setting the channel to a binary encoding caused issues, even though AFAIK no binary data is actually sent via this channel. The channels are now deliberately set to a binary encoding and this solves the issue. The exact symptom was the test not terminating, giving the impression of running forever, since test clients or servers were unable to exchange the messages needed to continue.
-
antirez authored
-
antirez authored
This is especially important since we already have a concept of backlog (the replication backlog).
-
Nenad Merdanovic authored
In high RPS environments, the default listen backlog is not sufficient, so giving users the power to configure it is the right approach, especially since it requires only minor modifications to the code.
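A sketch of the effect of the change (the configuration plumbing shown here is illustrative):

```c
#include <sys/socket.h>

/* Illustrative: the listen(2) backlog now comes from configuration
 * instead of a hard-coded constant, so high-RPS deployments can
 * raise it. Note that on Linux values above net.core.somaxconn are
 * silently clamped, so the kernel limit may need raising as well. */
int listen_with_backlog(int sockfd, int configured_backlog) {
    return listen(sockfd, configured_backlog);
}
```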
-
antirez authored
-
antirez authored
-
antirez authored
It is now possible to configure the minimum number of additional working slaves a master should be left with, in order for one of its slaves to migrate to an orphaned master.
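A sketch of the barrier check (a made-up function; only the rule it encodes comes from the message):

```c
/* Illustrative: a slave considers migrating to an orphaned master
 * only if its own master would still be left with at least the
 * configured minimum of additional working slaves. */
int can_migrate(int other_working_slaves, int migration_barrier) {
    return other_working_slaves >= migration_barrier;
}
```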
-
antirez authored
This fixes issue #1530.
-
- 30 Jan, 2014 3 commits
-
-
antirez authored
The check was placed in a way that conflicted with the continue statements used later by the node heartbeat code, which sometimes needs to skip the current node. It was moved to the start of the function so that it is always executed.
-
antirez authored
This feature allows slaves to migrate to orphaned masters (masters without working slaves), as long as a set of conditions is met, including the fact that the migrating slave needs to be in a master-slaves ring with at least one other working slave.
-
antirez authored
-
- 29 Jan, 2014 10 commits
-
-
antirez authored
-
antirez authored
When we schedule a failover, we broadcast a PONG to the slaves. The other slaves that plan to get elected will do the same, so it is likely that every slave will have a good picture of its own rank. Note that this is N*N messages, where N is the number of slaves of the failing master; however, even large clusters usually have many master nodes but a limited number of replicas per node, so this is harmless.
-
antirez authored
-
antirez authored
Note that when we compute the initial delay, there is probably still more up-to-date information to receive from slaves with new offsets, so the delay is recomputed whenever new data is available.
-
antirez authored
Return the number of slaves of the same master having a better replication offset than the current slave: that is, the slave "rank", used to pick a delay before the request for election.
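A sketch of the rank computation over a simplified data model (the real code walks the cluster state; this standalone version just shows the counting):

```c
#include <stddef.h>

/* Count the sibling slaves whose replication offset is greater than
 * ours: that count is our rank (0 = most up-to-date slave). */
int slave_rank(long long my_offset,
               const long long *sibling_offsets, size_t n) {
    int rank = 0;
    for (size_t i = 0; i < n; i++)
        if (sibling_offsets[i] > my_offset) rank++;
    return rank;
}
```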
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
Accessing the 'myself' node, the node representing the currently running instance, is handy without the need to type server.cluster->myself every time.
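Presumably this is just a shorthand macro along these lines (a sketch; it assumes the server.cluster->myself layout named in the message):

```c
/* Shorthand for the node representing the running instance. */
#define myself server.cluster->myself
```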
-
antirez authored
Now we can broadcast a PONG to all the instances or just the local slaves (the latter is useful for replication offset propagation).
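A sketch of the target-selection idea (the enum and function are made up for illustration):

```c
/* Hypothetical targets for the PONG broadcast. */
typedef enum {
    BROADCAST_ALL,          /* every known node */
    BROADCAST_LOCAL_SLAVES  /* only slaves of the same master */
} pong_target;

/* Illustrative: decide whether a node should receive the PONG. */
int pong_target_matches(pong_target target, int node_is_local_slave) {
    if (target == BROADCAST_ALL) return 1;
    return node_is_local_slave;
}
```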
-
- 28 Jan, 2014 4 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
The two fields are used in order to remember the latest known replication offset of other slave nodes and the time we received it. This will be used by slaves in order to start the election procedure with a delay that is proportional to the rank of the slave among the other slaves of this master, when sorted by replication offset. Usually this allows the slave with the most updated offset to win the election and replace the failing master in the cluster.
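A sketch of how such fields and the rank-proportional delay might look (all names and the exact numbers are assumptions, not the real implementation):

```c
typedef long long mstime_t;

/* Hypothetical per-node bookkeeping, as described in the message. */
struct node_repl_info {
    long long repl_offset;      /* Last known replication offset. */
    mstime_t  repl_offset_time; /* When we received that offset. */
};

/* Illustrative: slaves with a worse offset (higher rank) wait longer
 * before starting the election, so the freshest slave usually wins. */
mstime_t election_delay(int rank) {
    const mstime_t base_ms = 500;           /* assumed base delay */
    return base_ms + (mstime_t)rank * 1000; /* assumed per-rank step */
}
```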
-
antirez authored
-
- 25 Jan, 2014 1 commit
-
-
antirez authored
-
- 24 Jan, 2014 1 commit
-
-
antirez authored
-
- 22 Jan, 2014 1 commit
-
-
antirez authored
-