- 13 Feb, 2014 3 commits
-
-
antirez authored
-
antirez authored
It was verified that, when the commit that fixes the bug is reverted, the test no longer passes.
-
antirez authored
This commit fixes a serious Lua scripting replication issue, described by Github issue #1549. The root cause of the problem is that scripts were put inside the script cache, assuming that slaves and AOF already contained them, even when the scripts produced no changes in the data set and were therefore not actually propagated to AOF/slaves. Example:

    eval "if tonumber(KEYS[1]) > 0 then redis.call('incr', 'x') end" 1 0

Then:

    evalsha <sha1 step 1 script> 1 0

At this step the sha1 of the script is added to the replication script cache (the script is marked as known to the slaves) and the EVALSHA command is transformed into EVAL. However it is not dirty (there are no changes to the db), so it is not propagated to the slaves. Then the script is called again:

    evalsha <sha1 step 1 script> 1 1

At this step the master checks that the script already exists in the replication script cache and does not transform it into an EVAL command. It is dirty and propagated to the slaves, but they fail to evaluate the script as they don't have it in their script cache.

The fix is trivial and just uses the new API to force the propagation of the executed command regardless of the dirty state of the data set.

Thank you to @minus-infinity on Github for finding the issue, understanding the root cause, and fixing it.
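A minimal sketch of the idea behind the fix follows. This is not the actual Redis source: the structure and names (forcePropagation, propagate_flags, evalGenericCommand's parameters) are illustrative assumptions. The point is that once a script's sha1 has been added to the replication script cache, its execution must be propagated to AOF/slaves even when it produced no changes.

    #include <stdio.h>

    #define PROPAGATE_NONE 0
    #define PROPAGATE_AOF  (1<<0)
    #define PROPAGATE_REPL (1<<1)

    struct client { int propagate_flags; };

    /* Force propagation of the current command regardless of the dirty count. */
    static void forcePropagation(struct client *c, int flags) {
        c->propagate_flags |= flags;
    }

    static void evalGenericCommand(struct client *c, int dirty, int sha_cached) {
        /* Before the fix, propagation effectively depended on dirty > 0, so a
         * script that made no changes was marked as known by the slaves but
         * never actually sent to them. */
        if (sha_cached) forcePropagation(c, PROPAGATE_AOF|PROPAGATE_REPL);
        printf("dirty=%d propagate_flags=%d\n", dirty, c->propagate_flags);
    }

    int main(void) {
        struct client c = { PROPAGATE_NONE };
        evalGenericCommand(&c, 0 /* no changes */, 1 /* sha1 in script cache */);
        return 0;
    }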
-
- 12 Feb, 2014 2 commits
-
-
antirez authored
-
antirez authored
A system similar to the RDB write error handling is used: when we can't write to the AOF file, writes are no longer accepted until we are able to write again. For fsync == always we still abort on errors, since there is currently no easy way to avoid replying with success to the user otherwise, and doing so would violate the contract of only acknowledging data already secured on disk.
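An illustrative sketch of this behavior, not the actual implementation (the variable and function names below are assumptions): after a failed AOF write the server enters an error state and refuses writes until a later write succeeds, while with appendfsync=always it aborts instead of acknowledging data that was never secured on disk.

    #include <stdio.h>
    #include <stdlib.h>

    #define AOF_FSYNC_EVERYSEC 0
    #define AOF_FSYNC_ALWAYS   1

    static int aof_last_write_ok = 1;              /* cleared on write error */
    static int aof_fsync_policy  = AOF_FSYNC_EVERYSEC;

    static void aofHandleWriteResult(int write_ok) {
        if (!write_ok) {
            if (aof_fsync_policy == AOF_FSYNC_ALWAYS) {
                fprintf(stderr, "Can't persist AOF with fsync=always, exiting.\n");
                exit(1);
            }
            aof_last_write_ok = 0;                 /* stop accepting writes */
        } else {
            aof_last_write_ok = 1;                 /* recovered, accept writes again */
        }
    }

    static int canAcceptWrites(void) { return aof_last_write_ok; }

    int main(void) {
        aofHandleWriteResult(0);
        printf("writes accepted: %d\n", canAcceptWrites());
        aofHandleWriteResult(1);
        printf("writes accepted: %d\n", canAcceptWrites());
        return 0;
    }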
-
- 11 Feb, 2014 6 commits
-
-
antirez authored
-
antirez authored
Logging them at WARNING level was of little utility and was certainly a source of noise.
-
antirez authored
-
antirez authored
Avoid wasting a configEpoch for every slot migrated if this node already has the max configEpoch across the cluster. There is still work to do in this area, but this avoids both ending up with a very high configEpoch for no reason and flooding the system with fsyncs.
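A rough sketch of the idea, simplified and not the actual cluster code: bump this node's configEpoch only if some other node already has an equal or greater one; if we already hold the maximum, claiming the migrated slot needs no new epoch and therefore no config fsync.

    #include <stdio.h>
    #include <stdint.h>

    static uint64_t maxEpoch(const uint64_t *epochs, int n) {
        uint64_t max = 0;
        for (int i = 0; i < n; i++) if (epochs[i] > max) max = epochs[i];
        return max;
    }

    int main(void) {
        uint64_t other_nodes[] = {3, 5, 6};   /* configEpochs of the other nodes */
        uint64_t my_epoch = 7;                /* this node already has the max */
        if (my_epoch <= maxEpoch(other_nodes, 3)) {
            my_epoch = maxEpoch(other_nodes, 3) + 1;   /* bump + fsync the config */
            printf("bumped configEpoch to %llu\n", (unsigned long long)my_epoch);
        } else {
            printf("already max, keeping configEpoch %llu\n",
                   (unsigned long long)my_epoch);
        }
        return 0;
    }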
-
antirez authored
The actual goal of the function was to get the max configEpoch found in the cluster, so make it general by removing the assignment of the max epoch to currentEpoch, which is useful only at startup.
-
antirez authored
Removed a stale conditional that prevented the configEpoch from incrementing after the import under certain conditions. Since the master got a new slot, it should always claim a new configuration.
-
- 10 Feb, 2014 22 commits
-
-
antirez authored
The node receiving the hash slot needs to have a version that wins over the other versions in order to force the ownership of the slot. However the current code is far from perfect, since a failover can happen during the manual resharding. The fix is a work in progress, but the bottom line is that the new version must either be voted as usual, be set manually by redis-trib after it makes sure it can't be used by other nodes, or reserved configEpochs could be used for manual operations (for example, odd versions could never be used by slaves and always be used by CLUSTER SETSLOT NODE).
-
antirez authored
During slot migration redis-trib can send a number of SETSLOT commands. Fsyncing every time is a bit too much in production, as verified empirically. To make sure configs are fsynced on all nodes after a resharding, redis-trib may send something like CLUSTER CONFSYNC. In this case fsyncs were not providing much value anyway, since processes can crash in the middle of the resharding of a hash slot, and redis-trib should be able to recover from this condition regardless.
-
antirez authored
If the slot is manually assigned to another node, clear the migrating status regardless of whether it was previously assigned to us, as long as we no longer have keys for this slot. This avoids a race during slot migration that may leave the slot in migrating status in the source node after it received an update message from the destination node already claiming the slot. This way we are sure that redis-trib, at the end of the slot migration, is always able to close the slot correctly.
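A simplified sketch of the rule described above (the structures are illustrative, not the actual clusterNode/clusterState types): when an update message assigns a slot to another node, drop our MIGRATING state for it as long as we no longer hold keys in that slot, whether or not the slot used to be assigned to us.

    #include <stdio.h>

    struct slotState {
        int owned_by_me;
        int migrating;         /* we are the migration source for this slot */
        long keys_in_slot;
    };

    static void onSlotAssignedToOther(struct slotState *s) {
        s->owned_by_me = 0;
        if (s->migrating && s->keys_in_slot == 0)
            s->migrating = 0;  /* avoid leaving a stale migrating slot behind */
    }

    int main(void) {
        struct slotState s = { .owned_by_me = 1, .migrating = 1, .keys_in_slot = 0 };
        onSlotAssignedToOther(&s);
        printf("migrating=%d\n", s.migrating);
        return 0;
    }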
-
antirez authored
-
antirez authored
The case is the trivial one: a single node claiming the slot as migrating, without nodes claiming it as importing.
-
antirez authored
-
antirez authored
There is no way we can update the slave's node->slaveof pointer if we don't know the master (no node with such an ID in our tables).
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
Masters without slots don't participate in the cluster but just perform redirections, so there is no need to put them in FAIL state if they become reachable again.
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
PUBLISH propagated messages both via the Cluster bus and via replication when Cluster was enabled, resulting in duplicated messages in the slaves.
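A sketch of the fix's logic (the function names are illustrative, not the actual Redis ones): when Cluster is enabled, PUBLISH reaches every node, slaves included, via the cluster bus, so it must not also be fed to replication or the slaves receive the message twice.

    #include <stdio.h>

    static int cluster_enabled = 1;

    static void clusterPropagatePublish(const char *ch, const char *msg) {
        printf("cluster bus -> %s: %s\n", ch, msg);
    }
    static void replicationFeedSlaves(const char *ch, const char *msg) {
        printf("replication -> %s: %s\n", ch, msg);
    }

    static void publishCommand(const char *ch, const char *msg) {
        if (cluster_enabled)
            clusterPropagatePublish(ch, msg);   /* single propagation path */
        else
            replicationFeedSlaves(ch, msg);
    }

    int main(void) {
        publishCommand("news", "hello");
        return 0;
    }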
-
antirez authored
Sounds better after all.
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
- 07 Feb, 2014 3 commits
-
-
antirez authored
-
antirez authored
Currently this is marginally useful, only to make sure two keys are in the same hash slot when the cluster is stable (no rehashing in progress). In the future it is possible that support will be added to run multi-key operations with keys in the same hash slot.
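As a reference, this is how a key maps to its hash slot in Redis Cluster: CRC16 of the key (or of its {hash tag}, when present and non-empty) modulo 16384, which is what lets two keys be forced into the same slot. The sketch below is illustrative and self-contained, not the actual Redis source.

    #include <stdio.h>
    #include <string.h>

    /* CRC16-CCITT (XMODEM), the variant used by Redis Cluster. */
    static unsigned int crc16(const char *buf, int len) {
        unsigned int crc = 0;
        for (int i = 0; i < len; i++) {
            crc ^= (unsigned char)buf[i] << 8;
            for (int j = 0; j < 8; j++)
                crc = (crc & 0x8000) ? ((crc << 1) ^ 0x1021) & 0xFFFF
                                     : (crc << 1) & 0xFFFF;
        }
        return crc & 0xFFFF;
    }

    /* Hash only the {tag} part when present and non-empty. */
    static unsigned int keyHashSlot(const char *key, int keylen) {
        int s, e;
        for (s = 0; s < keylen; s++) if (key[s] == '{') break;
        if (s < keylen) {
            for (e = s + 1; e < keylen; e++) if (key[e] == '}') break;
            if (e < keylen && e != s + 1)
                return crc16(key + s + 1, e - s - 1) % 16384;
        }
        return crc16(key, keylen) % 16384;
    }

    int main(void) {
        const char *a = "{user1000}.following", *b = "{user1000}.followers";
        /* Both keys share the {user1000} tag, so they land in the same slot. */
        printf("%u %u\n", keyHashSlot(a, (int)strlen(a)),
                          keyHashSlot(b, (int)strlen(b)));
        return 0;
    }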
-
antirez authored
-
- 05 Feb, 2014 4 commits
-
-
antirez authored
Sometimes an OS X master with a Linux server over a slow link caused a strange error: OS X called the writable handler for the socket, but apparently there was no room in the socket buffer to accept the write. The write(2) call returned an EAGAIN error that was not checked, so we always treated write(2) == 0 as a connection reset, which was unfortunate since the bulk transfer had to start again. Also, more errors are now logged at WARNING level in the same code path.
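An illustrative sketch of the corrected handling (helper name and return convention are assumptions): on a non-blocking socket a write(2) return of -1 with errno == EAGAIN means the socket buffer is full and the write should simply be retried later; it is not a connection reset and must not restart the bulk transfer.

    #include <errno.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Returns the bytes written, 0 for "retry later", -1 on a real error. */
    static ssize_t tryWrite(int fd, const void *buf, size_t len) {
        ssize_t nwritten = write(fd, buf, len);
        if (nwritten == -1) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0;      /* socket buffer full: not a reset */
            return -1;         /* real error: worth logging at WARNING level */
        }
        return nwritten;
    }

    int main(void) {
        /* Writing to stdout just demonstrates the call; in the real code
         * path fd is the non-blocking socket used for the bulk transfer. */
        ssize_t n = tryWrite(STDOUT_FILENO, "hello\n", 6);
        fprintf(stderr, "tryWrite returned %zd\n", n);
        return 0;
    }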
-
antirez authored
For manual failover we need a manual failover to be in progress and mf_can_start to be true (master offset received and matched).
-
antirez authored
-
antirez authored
Otherwise it is always detected as a timed-out manual failover.
-