- 21 Aug, 2015 2 commits
-
-
antirez authored
-
antirez authored
Before this commit, after triggering a BGSAVE it was up to the caller of startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in order to update them accordingly. However, when the replication target is the socket, this is not possible, since the process of updating the slaves and sending the FULLRESYNC reply must be coupled with the process of starting an RDB save (the reason is, we need to send the FULLRESYNC reply and spawn a child that will start to send RDB data to the slaves ASAP).

This commit moves the responsibility of handling slaves in WAIT_BGSAVE_START to startBgsaveForReplication(), so that for both diskless and disk-based replication we have the same chain of responsibility. To accommodate this change, syncCommand() also needs to put the client in the slave list ASAP (just after the initial checks) and not at the end, so that startBgsaveForReplication() can find the new slave already in the list.

Another related change is what happens if the BGSAVE fails because of fork() or other errors: we now remove the slave from the list of slaves, send an error, and schedule the slave connection to be terminated.

As a side effect of this change the following errors found by Oran Agra are fixed (thanks!):

1. rdbSaveToSlavesSockets() on a failed fork will get the slaves cleaned up; otherwise they remain in a wrong state forever, since we set them up for full resync before actually trying to fork.

2. updateSlavesWaitingBgsave() with the replication target set to "socket" was broken, since the function changed the slaves' state from WAIT_BGSAVE_START to WAIT_BGSAVE_END via replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets() would not find any slave in the right state (WAIT_BGSAVE_START) to feed.
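A minimal sketch of the new chain of responsibility, assuming a simplified slave list (illustrative C, not the actual replication.c code; the real function also handles the disk/socket targets and sends the FULLRESYNC reply):

    #include <stddef.h>

    typedef enum { WAIT_BGSAVE_START, WAIT_BGSAVE_END } repl_state;

    typedef struct slave {
        repl_state state;
        struct slave *next;
    } slave;

    /* The function that starts the BGSAVE also walks the slaves in
     * WAIT_BGSAVE_START: on success it performs the equivalent of
     * replicationSetupSlaveForFullResync() on each of them; on failure
     * it removes them so no slave is left in a wrong state forever. */
    int startBgsaveSketch(slave **head, int fork_succeeded) {
        slave **s = head;
        while (*s) {
            if ((*s)->state != WAIT_BGSAVE_START) { s = &(*s)->next; continue; }
            if (fork_succeeded) {
                (*s)->state = WAIT_BGSAVE_END;  /* FULLRESYNC sent here. */
                s = &(*s)->next;
            } else {
                *s = (*s)->next;  /* Drop the slave; its link gets closed. */
            }
        }
        return fork_succeeded ? 0 : -1;
    }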
-
- 20 Aug, 2015 5 commits
-
-
antirez authored
-
antirez authored
Using chained replication, where C is a slave of B, which is in turn a slave of A, if B reconnects the replication link with A but discovers it is no longer possible to PSYNC, slaves of B must be disconnected and PSYNC not allowed, since the new B dataset may be completely different after the synchronization with the master.

Note that there are various semantic differences in the way this is handled now compared to the past. In the past the semantics were:

1. When a slave lost the connection with its master, it disconnected the chained slaves ASAP. This is not needed, since after a successful PSYNC with the master the slaves can continue and don't need to resync in turn.

2. However, after a failed PSYNC the replication backlog was not reset, so a slave was able to PSYNC successfully even if the instance had done a full sync with its master and now contained an entirely different data set.

Now, instead, chained slaves are not disconnected when the slave loses the connection with its master, but only when it is forced to do a full SYNC with its master. This means that if the slave having chained slaves does a successful PSYNC, all its slaves can continue without trouble. See issue #2694 for more details.
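A tiny sketch of the new rule, with hypothetical helper names standing in for the real internals:

    #include <stdio.h>

    /* Hypothetical stand-ins for the real Redis internals. */
    static void disconnectChainedSlaves(void) { puts("sub-slaves dropped"); }
    static void freeReplicationBacklog(void)  { puts("backlog discarded"); }

    /* Chained slaves and the backlog now survive a successful PSYNC
     * with the master; they are dropped only when a full SYNC was
     * forced, since the dataset may then be entirely different. */
    void afterMasterSync(int partial_resync_ok) {
        if (!partial_resync_ok) {
            disconnectChainedSlaves();
            freeReplicationBacklog();
        }
    }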
-
antirez authored
-
antirez authored
Talking with @oranagra, we had to reason a little bit to understand whether this function could ever flush the output buffers of the wrong slaves: slaves in the online state that are not actually ready to receive writes before the first ACK is received from them (this happens with diskless replication). Next time we'll just read this comment.
-
antirez authored
-
- 07 Aug, 2015 2 commits
-
-
antirez authored
It is simpler if removing the read event handler from the FD is up to slaveTryPartialResynchronization(); after all, it is only called in the context of syncWithMaster(). This commit also makes sure that on error all the event handlers are removed from the socket before closing it.
-
antirez authored
-
- 06 Aug, 2015 2 commits
-
-
antirez authored
-
antirez authored
Add the concept of slave capabilities to Redis: the slave now presents itself to the Redis master with a set of capabilities in the form:

    REPLCONF capa SOMECAPA capa OTHERCAPA ...

This has the effect of setting slave->slave_capa with the corresponding SLAVE_CAPA macros that the master can test later to understand if the slave will understand certain formats and protocols of the replication process. This makes it much simpler to introduce new replication capabilities in the future in a way that doesn't break old slaves or masters. This patch was designed and implemented together with Oran Agra (@oranagra).
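A minimal sketch of the master-side negotiation (the SLAVE_CAPA_EOF flag and the "eof" capability name are used for illustration and are an assumption about the initial capability set):

    #include <strings.h>

    #define SLAVE_CAPA_NONE 0
    #define SLAVE_CAPA_EOF  (1<<0)  /* Understands EOF-delimited diskless RDB. */

    /* Fold one "REPLCONF capa <name>" argument into a bitmask. Unknown
     * names are simply ignored, which is what keeps old masters and
     * newer slaves (and vice versa) mutually compatible. */
    int slaveAddCapa(int slave_capa, const char *name) {
        if (!strcasecmp(name, "eof")) slave_capa |= SLAVE_CAPA_EOF;
        return slave_capa;
    }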
-
- 05 Aug, 2015 7 commits
-
-
antirez authored
Our function to read a line with a timeout handles newlines as requests to refresh the timeout; however, due to a bug in the loop logic, the code kept subtracting from the buffer size left every time a newline was received. Fixed by this commit.
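A sketch of the fixed loop logic, reading from an in-memory stream for simplicity (illustrative, not the actual syncio.c code):

    #include <stddef.h>

    long readLineSketch(const char **stream, char *buf, size_t size) {
        size_t nread = 0;
        while (size > 1) {
            char c = *(*stream)++;
            if (c == '\0') return -1;  /* Connection closed: error. */
            if (c == '\n' && nread == 0) {
                /* Keep-alive newline: refresh the timeout here, but do
                 * NOT decrement 'size', since nothing was stored. The
                 * old code shrank the remaining buffer size anyway, so
                 * enough keep-alives could make the read fail. */
                continue;
            }
            if (c == '\n') break;      /* End of the actual line. */
            buf[nread++] = c;
            size--;
        }
        buf[nread] = '\0';
        return (long)nread;
    }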
-
antirez authored
For PINGs we use the period configured by the user, but for the newlines sent to slaves waiting for an RDB to be created (including slaves waiting for the FULLRESYNC reply) we need to ping at a frequency of once per second, since the timeout is fixed and needs to be refreshed.
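A sketch of the resulting cron rule (illustrative; assumes the replication cron runs once per second, and the helper name is hypothetical):

    /* Return the keep-alive payload to send to a slave on this cron
     * call, or NULL for none. */
    const char *keepalivePayload(int slave_is_online,
                                 long long cron_calls,
                                 int ping_period_secs) {
        if (slave_is_online)  /* Normal PING, user-configured period. */
            return (cron_calls % ping_period_secs == 0) ? "PING\r\n" : NULL;
        /* Pre-sync slaves: a raw newline on every call, i.e. once per
         * second, refreshes their fixed read timeout without breaking
         * the protocol (it is ignored by the line reader). */
        return "\n";
    }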
-
antirez authored
-
antirez authored
In previous commits we moved the FULLRESYNC to the moment we start the BGSAVE, so that the offset we provide is the right one. However, this also means that we need to re-emit the SELECT statement every time a new slave starts to accumulate the changes. To obtain this effect in a cleaner way, the function that sends the FULLRESYNC reply was overloaded with the more important role of also doing this and changing the slave state. So it was renamed to replicationSetupSlaveForFullResync() to better reflect what it does now.
-
antirez authored
-
antirez authored
-
antirez authored
This commit attempts to fix a bug involving PSYNC and diskless replication (currently experimental) found by Yuval Inbar from Redis Labs, which was later found to have even more far-reaching effects (the bug also exists when diskless replication is off).

The gist of the bug is that a Redis master replies with +FULLRESYNC to a PSYNC attempt that fails and requires a full resynchronization. However, the baseline offset sent along with FULLRESYNC was always the current master replication offset. This is not ok, because there are many reasons that may delay the RDB file creation. And, guess what: the master offset we communicate must be the one of the time the RDB was created. So, for example:

1) When the BGSAVE for replication is delayed since there is one already in progress, but it is not good for replication.
2) When the BGSAVE is not needed, as we attach ourselves to one currently ongoing.
3) When, because of diskless replication, the BGSAVE is delayed.

In all the above cases the PSYNC reply is wrong and the slave may reconnect later claiming a wrong offset: this may cause data corruption later.
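For reference, the reply in question looks like the following (run id and offset values illustrative). After this fix, the trailing offset must be the master replication offset at the time the RDB is actually created, not the offset at the time PSYNC was received:

    +FULLRESYNC 8de1787ba490483314a4d30f1c628bc5025eb761 1021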
-
- 17 Jul, 2015 5 commits
-
-
antirez authored
-
Jan-Erik Rediger authored
-
MOON_CLJ authored
-
Yongyue Sun authored
Signed-off-by: Yongyue Sun <abioy.sun@gmail.com>
-
Tom Kiemes authored
aof_delayed_fsync was not set to 0 when calling CONFIG RESETSTAT
-
- 16 Jul, 2015 2 commits
-
-
antirez authored
The previous attempt to process each client at least once every ten seconds was not a good idea, because:

1. Usually, because of the past min-iterations value of 50, you get a much better processing period most of the time.

2. However, when there are many clients and a normal setting for server.hz, the edge case is triggered, and waiting 10 seconds for a BLPOP that asked for 1 second is not ok.

3. Moreover, because of the high min-iterations limit of 50, when HZ was set to a high value, the actual behavior was to process a lot of clients per second.

Also, the function checking for timeouts called gettimeofday() at each iteration, which can be costly.

The new implementation will try to process each client once per second, gets the current time as an argument, and does not attempt to process more than 5 clients per iteration if not needed. So now:

1. The CPU usage of an idle Redis process is the same or better.
2. The CPU usage of a busy Redis process is the same or better.
3. However, a non-trivial amount of work may be performed per iteration when there are very many clients. In this particular case the user may want to raise the "HZ" value if needed.

Btw, with 4000 clients it was still not possible to notice any actual latency created by processing 400 clients per second, since the work performed for each client is pretty small.
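A sketch of the scheduling math described above (illustrative; the constant name mirrors the idea, not necessarily the exact source):

    #define CLIENTS_CRON_MIN_ITERATIONS 5

    /* With server.hz cron calls per second, processing numclients/hz
     * clients per call touches every client about once per second; the
     * small floor keeps tiny client counts processed promptly. */
    int clientsToProcessPerCall(int numclients, int hz) {
        int iterations = numclients / hz;
        if (iterations < CLIENTS_CRON_MIN_ITERATIONS)
            iterations = numclients < CLIENTS_CRON_MIN_ITERATIONS ?
                         numclients : CLIENTS_CRON_MIN_ITERATIONS;
        return iterations;
    }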
-
antirez authored
-
- 13 Jul, 2015 1 commit
-
-
antirez authored
The new return value is the number of keys that exist among the ones specified on the command line, counting the same key multiple times if it is given multiple times (and exists). See PR #2667.
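For example, in a hypothetical session where k1 and k2 exist: k1 is counted twice because it is given twice, and the missing key adds nothing:

    redis> EXISTS k1 k2 k1 nosuchkey
    (integer) 3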
-
- 11 Jun, 2015 5 commits
-
-
linfangrong authored
-
antirez authored
We usually want to reach the master using the address of the interface Redis is bound to (via the "bind" config option). That's useful since the master will get (and publish) the slave address by getting the peer name of the incoming socket connection from the slave. However, when this is not possible, for example because the slave is bound to the loopback interface but replicates from a master accessed via an external interface, we still want to connect with the master even from a different interface: in this case it is not really important that the master will provide any other address, while it is vital to be able to replicate correctly. Related to issues #2609 and #2612.
-
antirez authored
This performs a best-effort source address binding attempt. If it is possible to bind the local address and still have a successful connect(), then this socket is returned. Otherwise the call is retried without the source address binding attempt. Related to issues #2609 and #2612.
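A simplified sketch of the pattern (the real code lives in anet.c and is more general; this IPv4-only version is illustrative):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int bestEffortBindConnect(const char *src_ip,
                              const struct sockaddr_in *dst) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1) return -1;
        if (src_ip) {
            struct sockaddr_in src;
            memset(&src, 0, sizeof(src));
            src.sin_family = AF_INET;
            if (inet_pton(AF_INET, src_ip, &src.sin_addr) == 1 &&
                bind(fd, (struct sockaddr *)&src, sizeof(src)) == 0 &&
                connect(fd, (const struct sockaddr *)dst, sizeof(*dst)) == 0)
                return fd;                /* Bound and connected: done. */
            close(fd);                    /* Best effort failed... */
            fd = socket(AF_INET, SOCK_STREAM, 0); /* ...retry unbound. */
            if (fd == -1) return -1;
        }
        if (connect(fd, (const struct sockaddr *)dst, sizeof(*dst)) == 0)
            return fd;
        close(fd);
        return -1;
    }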
-
antirez authored
Two code paths that handle error conditions mistakenly jumped to the "ok, return the socket to the user" code path. Related to issues #2609 and #2612.
-
antirez authored
Related to issues #2609 and #2612.
-
- 04 Jun, 2015 1 commit
-
-
antirez authored
-
- 03 Jun, 2015 1 commit
-
-
Ben Murphy authored
-
- 29 May, 2015 5 commits
-
-
Itamar Haber authored
DEL/INCR/DECR and others could be NTH, but apparently they never made it into the implementation of SORT.
-
antirez authored
From Twitter: "@antirez that’s an awfully-named command :( http://en.wikipedia.org/wiki/Retching"
-
antirez authored
Normally ZADD only returns the number of elements added to a sorted set; using the RETCH option it returns the sum of the elements added and those for which the score was updated.
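In a hypothetical session (using CH, the name the option shipped under after the rename noted above), where 'a' already exists with score 1 and 'c' is new:

    redis> ZADD myzset CH 5 a 3 c
    (integer) 2

The reply counts one changed score plus one added element, where plain ZADD would have returned 1.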
-
antirez authored
-
antirez authored
-
- 25 May, 2015 1 commit
-
-
therealbill authored
This new command triggers a config flush to save the in-memory config to disk. This is useful when a configuration management system or a package manager wipes out your Sentinel config while the process is still running and has not yet been restarted. It can also be useful for scripting a backup and migration, or a clone, of a running Sentinel.
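Assuming this is the SENTINEL FLUSHCONFIG subcommand and the conventional Sentinel port, usage would look like:

    $ redis-cli -p 26379 SENTINEL FLUSHCONFIG
    OK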
-
- 19 May, 2015 1 commit
-
-
antirez authored
A way for monitoring systems to check that Sentinel is technically able to reach the quorum and failover, using the currently visible Sentinels.
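Assuming this is the SENTINEL CKQUORUM subcommand found in Sentinel from this era, a monitoring check might look like the following (master name and reply illustrative):

    $ redis-cli -p 26379 SENTINEL CKQUORUM mymaster
    OK 3 usable Sentinels. Quorum and failover authorization can be reached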
-