- 06 Sep, 2020 1 commit
-
-
Oran Agra authored
Starting with Redis 6.0 and the changes we made to the diskless master to be suitable for TLS, I made the master avoid reaping (wait3) the pid of the child until we know all replicas are done reading their rdb. I did that in order to avoid a state where rdb_child_pid is -1 but we don't yet want to start another fork (we're still busy serving that data to replicas).

It turns out that the solution used so far was problematic in case the fork child was killed (e.g. by the kernel OOM killer). In that case there's a chance that we had disabled the read event on the rdb pipe, since we're waiting for a replica to become writable again, and in that scenario the master would never realize the child exited, and the replica would remain hung too. Note that there's no mechanism to detect a hung replica while it's in rdb transfer state.

The solution here is to add another pipe which is used by the parent to tell the child it is safe to exit. This means that when the child exits, for whatever reason, it is safe to reap it.

Besides that, I'm re-introducing an adjustment to REPLCONF ACK which was part of #6271 (Accelerate diskless master connections) but was dropped when that PR was rebased after the TLS fork/pipe changes (5a477946). Now that RdbPipeCleanup no longer calls checkChildrenDone, and the ACK has a chance to detect that the child exited, it should be the one to call it so that we don't have to wait for cron (server.hz) to do that.
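
A minimal, self-contained sketch of the parent-to-child "safe to exit" pipe pattern described above, using plain POSIX pipe/fork/waitpid. The names (exit_pipe and so on) are hypothetical; this is an illustration, not the actual Redis child/parent pipe code.

    /* Sketch: a "safe to exit" pipe between parent and child.  The child
     * blocks on the pipe after finishing its work; the parent writes (or
     * closes) when it has consumed everything, so the child only exits once
     * the parent no longer needs it, and reaping it is always safe. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        int exit_pipe[2];                 /* parent writes, child reads */
        if (pipe(exit_pipe) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); return 1; }

        if (pid == 0) {                   /* child */
            close(exit_pipe[1]);          /* keep only the read end */
            /* ... produce and send the payload to the parent here ... */
            char c;
            if (read(exit_pipe[0], &c, 1) == -1) _exit(1);
            _exit(0);                     /* parent said it's safe to exit */
        }

        /* parent */
        close(exit_pipe[0]);              /* keep only the write end */
        /* ... consume the child's output, serve it to replicas, etc. ... */
        if (write(exit_pipe[1], "x", 1) == -1) perror("write");
        close(exit_pipe[1]);

        int status;
        waitpid(pid, &status, 0);         /* reaping is now always safe */
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }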
-
- 19 Aug, 2020 1 commit
-
-
Wang Yuan authored
-
- 13 Aug, 2020 1 commit
-
-
caozb authored
The client that issued WAIT last will most likely have the highest replication offset, so imagine a probably common case where all clients are waiting for the same number of replicas. We prefer the loop to start from the last client (the one waiting for the highest offset), so that the optimization in the function calls replicationCountAcksByOffset for each client only until it finds a good one, and stops calling it for the rest of the clients. The way the loop was implemented previously meant that in such a case it was likely to call replicationCountAcksByOffset for all clients. Note: the change from > to >= is not directly related to the above. Co-authored-by:
曹正斌 <caozb@jiedaibao.com>
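
A small self-contained sketch of the loop-ordering idea, with a hypothetical wait_client structure and helper names rather than the actual Redis WAIT implementation: by walking the blocked clients from the most recently blocked one, the first ack count usually covers the rest.

    /* Sketch only: unblock WAIT clients by walking the list from the most
     * recently blocked client, which likely waits for the highest offset.
     * The first call to count_acks_by_offset() then covers every earlier
     * client with weaker requirements, so it is called once, not per client. */
    #include <stdio.h>

    typedef struct wait_client {
        long long offset;            /* offset this client waits for */
        int numreplicas;             /* replicas that must reach it   */
        struct wait_client *prev;    /* previously blocked client     */
    } wait_client;

    /* Assumed helper: replicas that acknowledged at least 'offset'. */
    static int count_acks_by_offset(long long offset) { (void)offset; return 2; }
    static void unblock(wait_client *c) { printf("unblocked %lld\n", c->offset); }

    static void process_waiting_clients(wait_client *tail) {
        long long last_offset = 0;
        int last_numreplicas = 0;
        for (wait_client *c = tail; c != NULL; c = c->prev) {
            /* A previous count at a >= offset with enough acks also covers c. */
            if (last_offset && last_offset >= c->offset &&
                last_numreplicas >= c->numreplicas) {
                unblock(c);
            } else {
                int acks = count_acks_by_offset(c->offset);
                if (acks >= c->numreplicas) {
                    last_offset = c->offset;
                    last_numreplicas = acks;
                    unblock(c);
                }
            }
        }
    }

    int main(void) {
        wait_client a = {100, 2, NULL}, b = {200, 2, &a}, tail = {300, 2, &b};
        process_waiting_clients(&tail);   /* counts once, unblocks all three */
        return 0;
    }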
-
- 12 Aug, 2020 1 commit
-
-
Yossi Gottlieb authored
Add Linux kernel OOM killer control option. This adds the ability to control the Linux OOM killer oom_score_adj parameter for all Redis processes, depending on the process role (i.e. master, replica, background child). An oom-score-adj global boolean flag controls this feature. In addition, specific values can be configured using oom-score-adj-values if additional tuning is required.
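
On Linux the per-process OOM-killer bias is exposed as /proc/<pid>/oom_score_adj; a minimal sketch of writing it for the current process follows. The chosen value and helper name are illustrative, not the actual Redis implementation.

    /* Sketch: adjust this process's Linux OOM-killer score by writing to
     * /proc/self/oom_score_adj (range -1000..1000, higher = killed sooner). */
    #include <stdio.h>

    static int set_oom_score_adj(int value) {
        if (value < -1000 || value > 1000) return -1;
        FILE *fp = fopen("/proc/self/oom_score_adj", "w");
        if (!fp) return -1;                    /* not Linux, or no permission */
        int ok = fprintf(fp, "%d", value) > 0;
        fclose(fp);
        return ok ? 0 : -1;
    }

    int main(void) {
        /* e.g. make a background save child a more attractive OOM victim */
        if (set_oom_score_adj(300) != 0)
            perror("oom_score_adj");
        return 0;
    }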
-
- 06 Aug, 2020 1 commit
-
-
Oran Agra authored
Diskless master has some inherent latencies:
1) fork starts with a delay from cron rather than immediately
2) replica is put online only after an ACK, but the ACK was sent only once a second
3) even if the ACK arrived immediately, it would not register in case cron didn't yet detect that the fork is done.

Besides that, when a replica disconnects, it doesn't immediately attempt to re-connect; it waits for the replication cron (once per second). In case it was already online, it may be important to try to re-connect as soon as possible, so that the backlog at the master doesn't vanish. In case it disconnected during rdb transfer, one can argue that it's not very important to re-connect immediately, but this is needed for the "diskless loading short read" test to be able to run 100 iterations in 5 seconds, rather than 3 (waiting for replication cron re-connection).

Changes in this commit:
1) the SYNC command starts a fork immediately if no sync_delay is configured
2) replica sends REPLCONF ACK when done reading the rdb, rather than on the 1s cron (the wire form of this ACK is sketched after this message)
3) when a replica unexpectedly disconnects, it immediately tries to re-connect rather than waiting 1s
4) when a child exits, if there is another replica waiting, we spawn a new one right away, instead of waiting for the 1s replicationCron
5) added a call to connectWithMaster from replicationSetMaster, which is called from the REPLICAOF command but also in 3 places in cluster.c; in all of these the connection attempt will now be immediate instead of delayed by 1 second.

Side note: we can add a call to rdbPipeReadHandler in replconfCommand when getting a REPLCONF ACK from the replica, to solve a race where the replica got the entire rdb and EOF marker before we detected that the pipe was closed. In the test I did see this race happen about once in some 300 runs, but I concluded that this race is unlikely in real life (where the replica is on another host and we're more likely to first detect that the pipe was closed). The test runs 100 iterations in 3 seconds, so in some cases it'll take 4 seconds instead (waiting for another REPLCONF ACK).

Removing unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave: now that checkChildrenDone is calling the new replicationStartPendingFork (extracted from serverCron), there's actually no need to call startBgsaveForReplication from updateSlavesWaitingForBgsave anymore, since as soon as updateSlavesWaitingForBgsave returns, checkChildrenDone calls replicationStartPendingFork, which handles that anyway. The code in updateSlavesWaitingForBgsave had a bug in which it ignored repl-diskless-sync-delay, but removing that code shows that this bug was hiding another bug: the max_idle check should have used >= and not >. This one-second delay has a big impact on my new test.
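
As a rough illustration of change (2), the sketch below formats a REPLCONF ACK <offset> command as a RESP array, the general form a replica sends to its master to report how much of the replication stream it has processed. The helper name is hypothetical and this is not the actual Redis code that sends the ACK.

    /* Sketch: build REPLCONF ACK <offset> as a RESP array. */
    #include <stdio.h>
    #include <string.h>

    static int format_replconf_ack(char *dst, size_t dstlen, long long offset) {
        char off[32];
        int offlen = snprintf(off, sizeof(off), "%lld", offset);
        return snprintf(dst, dstlen,
                        "*3\r\n$8\r\nREPLCONF\r\n$3\r\nACK\r\n$%d\r\n%s\r\n",
                        offlen, off);
    }

    int main(void) {
        char buf[128];
        int n = format_replconf_ack(buf, sizeof(buf), 31337);
        fwrite(buf, 1, (size_t)n, stdout);  /* would be written to the master socket */
        return 0;
    }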
-
- 31 Jul, 2020 1 commit
-
-
Oran Agra authored
Besides, the hooks test was time sensitive: when the replica managed to reconnect quickly after the client kill, the test would fail.
-
- 20 Jul, 2020 1 commit
-
-
zhaozhao.zz authored
-
- 14 Jul, 2020 1 commit
-
-
Oran Agra authored
In case the rdb child failed, crashed or terminated unexpectedly, Redis would have marked the replica clients with repl_put_online_on_ack and then killed them only after a minute when no ACK was received. It would not stream anything to these connections, so the only effect of this bug is a delay of one minute in the replicas' attempt to re-connect.
-
- 06 Jun, 2020 2 commits
-
- 28 May, 2020 1 commit
-
-
antirez authored
-
- 27 May, 2020 2 commits
- 26 May, 2020 2 commits
- 25 May, 2020 2 commits
-
-
antirez authored
-
zhaozhao.zz authored
After adjustMeaningfulReplOffset(), all the other related variables should be updated as well, including server.second_replid_offset. Otherwise an old-version Redis such as 5.0 may receive wrong data from the replication stream, because Redis 5.0 can sync with Redis 6.0 but doesn't know about the meaningful offset.
-
- 22 May, 2020 3 commits
-
-
antirez authored
Otherwise we run into that:

Backtrace:
src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
src/redis-server 127.0.0.1:21322[0x4552d4]
src/redis-server 127.0.0.1:21322[0x4c5593]
src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]

We disconnect all the replicas and free the replication backlog in certain replication paths, and the code that frees the replication backlog expects that no replica is connected. However, we still need to free the replicas asynchronously in certain cases, as documented in the top comment of disconnectSlaves().
-
antirez authored
Citing from the issue: btw I suggest we change this fix to something else:
* We revert the fix.
* We add a call that disconnects chained replicas in the place where we trim the replica (that is a master in this case) offset.
This way we can avoid disconnections when there is no trimming of the backlog.

Note that we now want to disconnect replicas asynchronously in disconnectSlaves(), because it's in general safer now that we can call it from freeClient(). Otherwise, for instance, the command:

CLIENT KILL TYPE master

may crash: clientCommand() starts scanning the linked list of clients, looking for clients to kill. However it finds the master and kills it calling freeClient(), but this in turn calls replicationCacheMaster() that may also call disconnectSlaves() now. So the linked list iterator of clientCommand() will no longer be valid.
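
A small sketch of the hazard and the deferred-free workaround described above, using a hypothetical linked list rather than the actual Redis client list: nodes are only marked during iteration and collected afterwards, so no iterator ever points at freed memory.

    /* Sketch: if a callback invoked while iterating a list can free
     * arbitrary nodes, the iterator may dangle.  Marking nodes for a
     * deferred (asynchronous) free keeps the iteration safe. */
    #include <stdlib.h>
    #include <stdio.h>

    typedef struct node {
        int id;
        int close_asap;          /* marked for deferred free */
        struct node *next;
    } node;

    static void scan(node *head) {
        for (node *n = head; n != NULL; n = n->next) {
            /* Instead of free(n) here (a side effect might also free some
             * other node we still intend to visit), just mark it. */
            if (n->id % 2 == 0) n->close_asap = 1;
        }
    }

    static node *collect(node *head) {
        node **link = &head;
        while (*link) {
            node *n = *link;
            if (n->close_asap) { *link = n->next; free(n); }
            else link = &n->next;
        }
        return head;
    }

    int main(void) {
        node *head = NULL;
        for (int i = 5; i >= 1; i--) {
            node *n = malloc(sizeof(*n));
            n->id = i; n->close_asap = 0; n->next = head; head = n;
        }
        scan(head);
        head = collect(head);
        for (node *n = head; n; n = n->next) printf("kept %d\n", n->id);
        while (head) { node *n = head; head = head->next; free(n); }
        return 0;
    }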
-
Qu Chen authored
Always disconnect chained replicas when the replica performs PSYNC with the master, to avoid a replication offset mismatch between the master and the chained replicas.
-
- 21 May, 2020 1 commit
-
-
Madelyn Olson authored
-
- 16 May, 2020 1 commit
-
-
antirez authored
This was broken in 1a7cd2c0: we identified a crash in the CI. What was happening before the fix was likely the following:

1. The client gets in the async free list.
2. However freeClient() gets called again against the same client, which is a master.
3. The client arrived in freeClient() with the CLOSE_ASAP flag set.
4. The master gets cached, but NOT removed from the CLOSE_ASAP linked list.
5. The master client that was cached was immediately removed since it was still in the list.
6. Redis accessed a freed cached master.

This is what the crash looked like:

=== REDIS BUG REPORT START: Cut & paste starting from here ===
1092:S 16 May 2020 11:44:09.731 # Redis 999.999.999 crashed by signal: 11
1092:S 16 May 2020 11:44:09.731 # Crashed running the instruction at: 0x447e18
1092:S 16 May 2020 11:44:09.731 # Accessing address: 0xffffffffffffffff
1092:S 16 May 2020 11:44:09.731 # Failed assertion: (:0)

------ STACK TRACE ------
EIP: src/redis-server 127.0.0.1:21300(readQueryFromClient+0x48)[0x447e18]

And the 0xffff address access likely comes from accessing an SDS that is set to NULL (we go -1 offset to read the header).
-
- 15 May, 2020 1 commit
-
-
antirez authored
The context is issue #7205: since the introduction of threaded I/O we close clients asynchronously by default from readQueryFromClient(). So we should no longer prevent the caching of the master client (needed to later PSYNC incrementally) when the flags that mark the client for asynchronous close are set. However we also don't want the master client to be cached with such flags (it would be closed immediately after being restored). And yet we want a way to understand if a master was closed because of a protocol error, and in that case prevent the caching.
-
- 27 Apr, 2020 1 commit
-
-
Oran Agra authored
Now both master and replicas keep track of the last replication offset that contains meaningful data (ignoring the trailing pings), and both trim that tail from the replication backlog and from the offset they try to use for psync. The implication is that if someone missed some pings, or even has excessive pings that the promoted replica doesn't have, it'll still be able to psync (avoid full sync). The downside (which was already committed) is that replicas running old code may fail to psync, since the promoted replica trims pings from its backlog.

This commit adds a test that reproduces several cases of promotions and demotions with stale and non-stale pings.

Background: the meaningful offset on the master was added recently to solve a problem where the master is left all alone, injecting PINGs into its backlog when no one is listening, and then gets demoted and tries to replicate from a replica that didn't have any of the PINGs (or at least not the last ones). However, consider this case: master A has two replicas (B and C) replicating directly from it. There's no traffic at all, and also no network issues, just many pings in the tail of the backlog. Now B gets promoted, A becomes a replica of B, and C remains a replica of A. When A gets demoted, it trims the pings from its backlog and successfully replicates from B. However, C is still aware of these PINGs; when it disconnects and re-connects to A, it'll ask for something that's not in the backlog anymore (since A trimmed the tail of its backlog), and be forced to do a full sync (something it didn't have to do before the meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and there; it turns out the reason was PINGs. Investigating it shows the following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is a replica of #1

Now we see that when #1 is demoted it prints:

17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.

And when #3 connects to the demoted #2, #2 says:

17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day for the demoted master (since it needs to sync from a replica that didn't get the last ping), but it didn't help one of the other replicas which did get the last ping.
-
- 14 Apr, 2020 1 commit
-
-
ShooterIT authored
-
- 28 Mar, 2020 1 commit
-
-
zhaozhao.zz authored
-
- 27 Mar, 2020 1 commit
-
-
antirez authored
See #7002.
-
- 25 Mar, 2020 1 commit
-
-
antirez authored
A very commonly signaled operational problem with Redis master-replica sets is that, once the master becomes unavailable for some reason, especially because of network problems, many times it won't be able to perform a partial resynchronization with the new master once it rejoins the partition, for the following reason:

1. The master becomes isolated, however it keeps sending PINGs to the replicas. Such PINGs will never be received since the link connection is actually already severed.
2. On the other side, one of the replicas will turn into the new master, setting its secondary replication ID offset to the one of the last command received from the old master: this offset will not include the PINGs sent by the master once the link was already disconnected.
3. When the master rejoins the partition and is turned into a replica, its offset will be too advanced because of the PINGs, so a PSYNC will fail, and a full synchronization will be required.

Related to issue #7002 and other discussion we had in the past around this problem.
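
A conceptual sketch of the trimming idea, with hypothetical field names rather than the actual Redis implementation: track the offset of the last non-PING write, so a demoted master can request a partial resync from that offset instead of one inflated by trailing PINGs.

    /* Sketch: record the replication offset of the last command that was
     * not a PING, so trailing PINGs can be excluded from the psync request. */
    #include <stdio.h>
    #include <string.h>

    struct repl_state {
        long long master_repl_offset;   /* everything written to the backlog */
        long long meaningful_offset;    /* offset before the trailing PINGs  */
    };

    static void feed_backlog(struct repl_state *st, const char *cmd, long long len) {
        st->master_repl_offset += len;
        if (strcmp(cmd, "PING") != 0)
            st->meaningful_offset = st->master_repl_offset;
    }

    int main(void) {
        struct repl_state st = {0, 0};
        feed_backlog(&st, "SET", 31);
        feed_backlog(&st, "PING", 14);
        feed_backlog(&st, "PING", 14);
        /* On demotion, psync would be requested from meaningful_offset + 1
         * rather than master_repl_offset + 1, excluding the final PINGs. */
        printf("full offset %lld, meaningful %lld\n",
               st.master_repl_offset, st.meaningful_offset);
        return 0;
    }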
-
- 23 Mar, 2020 1 commit
-
-
antirez authored
-
- 04 Mar, 2020 4 commits
- 03 Mar, 2020 1 commit
-
-
antirez authored
-
- 25 Feb, 2020 1 commit
-
-
Hengjian Tang authored
-
- 06 Feb, 2020 3 commits
-
-
Guy Benoish authored
1. Call emptyDb even in case of diskless-load: we want modules to get the same FLUSHDB event as disk-based replication.
2. Do not fire any module events when flushing the backups array.
3. Delete redundant call to signalFlushedDb (called from emptyDb).
-
Oran Agra authored
replicationUnsetMaster can be called from other places, not just replicaofCommand, and all of these need to restart AOF.
-
Oran Agra authored
This function possibly iterates on the module list.
-
- 31 Dec, 2019 1 commit
-
-
ShooterIT authored
-
- 19 Dec, 2019 1 commit
-
-
Johannes Truschnigg authored
"Partial Resynchronization" is a special variant of replication success that we have to tell systemd about if it is managing redis-server via a Type=Notify service unit.
-