- 29 Oct, 2014 1 commit
antirez authored
This caused BGSAVE to be triggered a second time, without any need, when we switch from the socket to the disk target via the command CONFIG SET repl-diskless-sync no and there is already a slave waiting for the BGSAVE to start. Comments were also clarified to explain what is happening.
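A minimal sketch of the guard this fix implies, with illustrative names (rdb_child_pid, slaves_waiting_start, on_diskless_disabled) standing in for the actual Redis internals:

    /* When repl-diskless-sync is switched off via CONFIG SET, start a
     * disk BGSAVE only if no save child is already active; otherwise
     * the configuration change would trigger a second, redundant
     * BGSAVE for slaves that are already waiting. */
    #include <stdio.h>
    #include <sys/types.h>

    struct repl_state {
        pid_t rdb_child_pid;      /* -1 when no BGSAVE child is running. */
        int slaves_waiting_start; /* Slaves in WAIT_BGSAVE_START state. */
    };

    static void start_disk_bgsave(struct repl_state *s) {
        printf("starting disk BGSAVE for %d waiting slave(s)\n",
               s->slaves_waiting_start);
        s->rdb_child_pid = 12345; /* Placeholder for the real fork(). */
    }

    /* Called when repl-diskless-sync changes from yes to no. */
    void on_diskless_disabled(struct repl_state *s) {
        if (s->slaves_waiting_start > 0 && s->rdb_child_pid == -1)
            start_disk_bgsave(s); /* Nobody else will start it. */
        /* If a child is already active, do nothing: the BGSAVE the
         * waiting slaves attached to is still in progress. */
    }

    int main(void) {
        struct repl_state s = { .rdb_child_pid = -1,
                                .slaves_waiting_start = 2 };
        on_diskless_disabled(&s);
        return 0;
    }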
- 27 Oct, 2014 6 commits
- 24 Oct, 2014 3 commits
- 23 Oct, 2014 1 commit
antirez authored
Child now reports full info to the parent, including the IDs of the slaves in failure state and the exit code.
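As a rough illustration of such a report, a child could stream one fixed-size record per slave over a pipe before exiting; the wire format and names below are assumptions, not the actual Redis protocol:

    /* Child -> parent report: one (slave_id, error) pair per slave the
     * child was feeding, written to a pipe; the overall outcome travels
     * in the child's exit code. Format and names are illustrative. */
    #include <errno.h>
    #include <stdint.h>
    #include <unistd.h>

    struct slave_report {
        uint64_t slave_id; /* Unique ID of the slave connection. */
        int error;         /* 0 = transfer ok, otherwise an errno. */
    };

    /* Child side: write every record; closing the pipe marks the end. */
    int send_report(int fd, const struct slave_report *r, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (write(fd, &r[i], sizeof(r[i])) != (ssize_t)sizeof(r[i]))
                return -1;
        return 0;
    }

    int main(void) {
        int p[2];
        if (pipe(p) == -1) return 1;
        struct slave_report reports[2] = {
            { .slave_id = 1, .error = 0 },    /* This slave succeeded. */
            { .slave_id = 2, .error = EPIPE } /* This one went away. */
        };
        send_report(p[1], reports, 2);
        close(p[1]); /* EOF tells the parent the report is complete. */
        return 0;
    }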
- 22 Oct, 2014 3 commits
antirez authored
EWOULDBLOCK is returned with the fdset rio target when we try to write but the send timeout socket option triggered an error. Better to translate the error into something the user can actually recognize as a timeout.
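A small sketch of that translation, assuming the socket was given a send timeout via SO_SNDTIMEO:

    /* With SO_SNDTIMEO set, a blocking write() that hits the timeout
     * fails with EWOULDBLOCK/EAGAIN; remapping it to ETIMEDOUT makes
     * the failure recognizable as a timeout by the caller. */
    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    int set_send_timeout(int fd, int seconds) {
        struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
        return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
    }

    ssize_t timed_write(int fd, const void *buf, size_t len) {
        ssize_t n = write(fd, buf, len);
        if (n == -1 && (errno == EWOULDBLOCK || errno == EAGAIN))
            errno = ETIMEDOUT; /* Report a timeout, not "would block". */
        return n;
    }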
antirez authored
We need to prevent a child -> slaves transfer from continuing forever. We use the same timeout used as the global replication timeout, which is documented to also affect I/O operations during bulk transfers.
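A sketch of such a watchdog, with illustrative names; the real code reuses the repl-timeout value:

    /* Abort the child -> slaves transfer once no progress has been
     * made for longer than the replication timeout. */
    #include <time.h>

    struct transfer {
        time_t last_progress; /* Updated after each successful write. */
        int repl_timeout;     /* Same value as repl-timeout, seconds. */
    };

    /* Returns 1 if the transfer should be aborted. */
    int transfer_timed_out(const struct transfer *t) {
        return time(NULL) - t->last_progress > t->repl_timeout;
    }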
antirez authored
- 17 Oct, 2014 10 commits
antirez authored
antirez authored
antirez authored
Even while the socket is set in blocking mode, we can still get short writes when writing to a socket.
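The usual remedy is a retry loop that keeps writing until the whole buffer is sent; a minimal sketch:

    /* write() on a blocking socket may still transfer fewer bytes than
     * requested (e.g. when interrupted by a signal), so loop until the
     * whole buffer is sent or a real error occurs. */
    #include <errno.h>
    #include <unistd.h>

    ssize_t write_all(int fd, const char *buf, size_t len) {
        size_t done = 0;
        while (done < len) {
            ssize_t n = write(fd, buf + done, len - done);
            if (n == -1) {
                if (errno == EINTR) continue; /* Retry after a signal. */
                return -1;
            }
            done += (size_t)n;
        }
        return (ssize_t)done;
    }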
antirez authored
antirez authored
Performing a socket write() for each RDB rio API write call was extremely inefficient, so now rio has minimal buffering capabilities: writes are accumulated into a buffer, and only when a given limit is reached are they actually written to the N slave FDs. Trivia: rio lacked support for buffering since our targets so far were: 1) memory buffers; 2) C standard I/O. Both were buffered already.
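The buffering shape this describes, in a minimal sketch; the structure, names, and threshold are illustrative, not the actual rio API:

    /* Accumulate rio writes in a buffer and flush to the slave FDs
     * only when a threshold is reached. */
    #include <string.h>

    #define RIO_BUF_SIZE (64*1024) /* Hypothetical flush threshold. */

    struct bufrio {
        char buf[RIO_BUF_SIZE];
        size_t pos;                              /* Bytes buffered. */
        int (*flush)(const char *p, size_t len); /* Writes to N FDs. */
    };

    int bufrio_write(struct bufrio *r, const char *p, size_t len) {
        while (len) {
            size_t avail = RIO_BUF_SIZE - r->pos;
            size_t chunk = len < avail ? len : avail;
            memcpy(r->buf + r->pos, p, chunk);
            r->pos += chunk; p += chunk; len -= chunk;
            if (r->pos == RIO_BUF_SIZE) {  /* Buffer full: flush now. */
                if (r->flush(r->buf, r->pos) == -1) return -1;
                r->pos = 0;
            }
        }
        return 0;
    }

A final explicit flush of the partially filled buffer is of course still needed when the transfer ends.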
antirez authored
antirez authored
This is useful for normal replication in order to refresh the slave when we are persisting on disk, but for diskless replication the child is already sending data to the slave while it is in WAIT_BGSAVE_END state.
antirez authored
antirez authored
antirez authored
- 16 Oct, 2014 6 commits
antirez authored
antirez authored
antirez authored
antirez authored
If we switch from diskless to disk-based replication via CONFIG SET, we need a way to start a BGSAVE if there are slaves already waiting for a BGSAVE to start. Normally, with disk-based replication, we do it as soon as the previous child exits, but when there is a configuration change via CONFIG SET, we may have slaves in WAIT_BGSAVE_START state without an RDB background process currently active.
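A sketch of the check this requires: walk the slave list and count connections parked in WAIT_BGSAVE_START; all names are illustrative:

    /* Find slaves stuck waiting for a BGSAVE that nobody will start:
     * if any exist and no save child is active, the CONFIG SET path
     * itself must kick off the disk BGSAVE. */
    #include <stddef.h>

    enum repl_state { WAIT_BGSAVE_START, WAIT_BGSAVE_END, ONLINE };

    struct slave {
        enum repl_state state;
        struct slave *next;
    };

    int slaves_waiting_bgsave(const struct slave *head) {
        int n = 0;
        for (const struct slave *s = head; s != NULL; s = s->next)
            if (s->state == WAIT_BGSAVE_START) n++;
        return n;
    }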
antirez authored
antirez authored
- 15 Oct, 2014 3 commits
- 14 Oct, 2014 3 commits
antirez authored
The fdset target is used when we want to write an RDB file directly to slaves' sockets. In this setup, as long as there is a single slave still receiving our payload, we want to continue sending instead of aborting. However, rio calls should abort if no FD is ok. We also want the errors reported, so that we can signal the parent which slave is ok and which is broken; hence there is a new set of integers with the state of each FD: zero is ok, non-zero is the errno of the failure, if available, or a generic EIO.
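A minimal sketch of those semantics; short-write handling is elided and all names are illustrative, not the actual rio code:

    /* Write a chunk to every still-healthy FD, remember the first
     * errno seen per FD (0 = ok), and fail only when no FD is left
     * in a good state. */
    #include <errno.h>
    #include <unistd.h>

    /* fds[i] is a slave socket; state[i] is 0 if ok, else its errno. */
    int fdset_write(const int *fds, int *state, int nfds,
                    const char *p, size_t len) {
        int healthy = 0;
        for (int i = 0; i < nfds; i++) {
            if (state[i] != 0) continue;  /* Already broken: skip it. */
            if (write(fds[i], p, len) == -1)
                state[i] = errno ? errno : EIO; /* Keep per-FD error. */
            else
                healthy++;
        }
        return healthy > 0 ? 0 : -1; /* Abort only when all FDs fail. */
    }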
antirez authored
antirez authored
- 10 Oct, 2014 3 commits
- 08 Oct, 2014 1 commit
antirez authored
We need to remember what the saving strategy of the current RDB child process is, since the configuration may be modified at runtime via CONFIG SET, and we still need to understand, when the child exits, what to do and for what goal the process was started: to create an RDB file on disk, or to write directly to the slaves' sockets.
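In sketch form, this amounts to recording the target at fork time in a field that CONFIG SET never touches; the names below are illustrative:

    /* Record, when the RDB child is forked, whether it writes to disk
     * or to slave sockets, so the parent knows what the child was doing
     * when it exits, regardless of later CONFIG SET changes. */
    enum rdb_child_type {
        RDB_CHILD_NONE,   /* No RDB child currently active. */
        RDB_CHILD_DISK,   /* Child is producing an RDB file on disk. */
        RDB_CHILD_SOCKET  /* Child streams the RDB to slave sockets. */
    };

    struct rdb_state {
        /* Set at fork time, read at child exit; runtime configuration
         * changes must not modify it. */
        enum rdb_child_type child_type;
    };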