    startBgsaveForReplication(): handle waiting slaves state change. · c2ff9de3
    antirez authored
    Before this commit, after triggering a BGSAVE it was up to the caller of
    startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
    order to update them accordingly. However when the replication target is
    the socket, this is not possible since the process of updating the
    slaves and sending the FULLRESYNC reply must be coupled with the process
    of starting an RDB save (the reason is, we need to send the FULLRESYNC
    reply and spawn a child that will start to send RDB data to the slaves
    ASAP).
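
    To make this coupling concrete, here is a minimal, self-contained sketch
    in plain POSIX C (made-up names and toy payloads, not the actual
    replication.c code): the +FULLRESYNC reply is written to every waiting
    slave, and the child that streams the RDB payload to the very same
    sockets is forked immediately afterwards, as a single step.

        /* Illustrative model only: each "slave" is one end of a socketpair. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <sys/wait.h>

        #define NUM_SLAVES 2

        int main(void) {
            int pairs[NUM_SLAVES][2];

            for (int i = 0; i < NUM_SLAVES; i++)
                if (socketpair(AF_UNIX, SOCK_STREAM, 0, pairs[i]) == -1) return 1;

            /* Step 1: tell every waiting slave that a full resync starts now. */
            for (int i = 0; i < NUM_SLAVES; i++) {
                const char *reply = "+FULLRESYNC <replid> <offset>\r\n";
                write(pairs[i][0], reply, strlen(reply));
            }

            /* Step 2: immediately fork the child that streams the payload to
             * the same sockets; the two steps cannot be separated. */
            pid_t pid = fork();
            if (pid == -1) {
                return 1;               /* fork failed (error handling below) */
            } else if (pid == 0) {
                const char *rdb = "<rdb payload bytes>";
                for (int i = 0; i < NUM_SLAVES; i++)
                    write(pairs[i][0], rdb, strlen(rdb));
                _exit(0);
            }

            /* Each slave first sees the FULLRESYNC line, then the RDB data. */
            for (int i = 0; i < NUM_SLAVES; i++) {
                char buf[128] = {0};
                read(pairs[i][1], buf, sizeof(buf) - 1);
                printf("slave %d received: %s\n", i, buf);
            }
            waitpid(pid, NULL, 0);
            return 0;
        }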
    
    This commit moves the responsibility of handling slaves in
    WAIT_BGSAVE_START to startBgsaveForReplication() so that for both
    diskless and disk-based replication we have the same chain of
    responsibility. In order to accommodate this change, syncCommand() also
    needs to put the client in the slave list ASAP (just after the initial
    checks) and not at the end, so that startBgsaveForReplication() can find
    the new slave already in the list.
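
    The following self-contained sketch (hypothetical names, not the real
    replication.c symbols) models the resulting ordering: the sync handler
    registers the slave in WAIT_BGSAVE_START before the save is triggered,
    and the save trigger alone is responsible for moving waiting slaves
    forward, whether the target is disk or a socket.

        #include <stdio.h>

        enum repl_state { WAIT_BGSAVE_START, WAIT_BGSAVE_END };

        #define MAX_SLAVES 8
        static enum repl_state slaves[MAX_SLAVES];
        static int nslaves = 0;

        /* Simulated SYNC/PSYNC handler: register the slave ASAP, before the
         * save is triggered, so the trigger can find it in the list. */
        static int register_slave(void) {
            slaves[nslaves] = WAIT_BGSAVE_START;
            return nslaves++;
        }

        /* Simulated save trigger: it now owns the state change of every slave
         * that is still waiting for the save to start. */
        static void start_bgsave_for_replication(void) {
            for (int i = 0; i < nslaves; i++)
                if (slaves[i] == WAIT_BGSAVE_START)
                    slaves[i] = WAIT_BGSAVE_END;   /* full resync set up here */
        }

        int main(void) {
            register_slave();               /* 1) slave is in the list first */
            start_bgsave_for_replication(); /* 2) the trigger updates it     */
            for (int i = 0; i < nslaves; i++)
                printf("slave %d state: %s\n", i,
                       slaves[i] == WAIT_BGSAVE_END ? "WAIT_BGSAVE_END"
                                                    : "WAIT_BGSAVE_START");
            return 0;
        }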
    
    Another related change is what happens if the BGSAVE fails because of
    fork() or other errors: we now remove the slave from the list of slaves
    and send an error, scheduling the slave connection to be terminated.
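
    A small sketch of that failure path follows (again with illustrative
    names and a fixed-size array instead of the real slave list): slaves
    still waiting for the BGSAVE to start get an error reply, have their
    connection scheduled for termination, and are dropped from the list,
    while slaves already being served are kept.

        #include <stdio.h>

        #define MAX_SLAVES 8

        struct slave {
            int fd;
            int wait_bgsave_start;   /* 1 = still waiting for the save to begin */
        };

        static struct slave slaves[MAX_SLAVES];
        static int nslaves = 0;

        /* Called when fork() (or any other part of the BGSAVE setup) fails. */
        static void handle_bgsave_failure(void) {
            int kept = 0;
            for (int i = 0; i < nslaves; i++) {
                struct slave *s = &slaves[i];
                if (s->wait_bgsave_start) {
                    /* In a real server this would be a buffered error reply. */
                    printf("to fd %d: -ERR BGSAVE failed, replication can't continue\r\n",
                           s->fd);
                    printf("fd %d scheduled for close after the reply\n", s->fd);
                    continue;                 /* removed from the slave list */
                }
                slaves[kept++] = *s;          /* keep slaves already being served */
            }
            nslaves = kept;
        }

        int main(void) {
            slaves[nslaves++] = (struct slave){ .fd = 10, .wait_bgsave_start = 1 };
            slaves[nslaves++] = (struct slave){ .fd = 11, .wait_bgsave_start = 0 };
            handle_bgsave_failure();
            printf("slaves left in the list: %d\n", nslaves);
            return 0;
        }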
    
    As a side effect of this change the following errors found by
    Oran Agra are fixed (thanks!):
    
    1. rdbSaveToSlavesSockets() on failed fork will get the slaves cleaned
    up, otherwise they remain in a wrong state forever, since we set them up
    for full resync before actually trying to fork.
    
    2. updateSlavesWaitingBgsave() with replication target set as "socket"
    was broken, since the function changed the slaves' state from
    WAIT_BGSAVE_START to WAIT_BGSAVE_END via
    replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets()
    will not find any slave in the right state (WAIT_BGSAVE_START) to feed.
replication.c