- 19 Nov, 2019 1 commit
-
-
antirez authored
This is what happened:
1. The instance starts as a slave in the cluster configuration, but server.masterhost is not set, so technically the instance is acting as a master.
2. loadDataFromDisk() calls replicationCacheMasterUsingMyself() even if the instance is a master, in the case where it is logically a slave and cluster mode is enabled. So now we have a cached master even though the instance is practically configured as a master (from the point of view of server.masterhost and so forth).
3. clusterCron() sees that the instance needs to replicate from its master, because logically it is a slave, so it calls replicationSetMaster(), which in turn calls replicationCacheMasterUsingMyself(): before this commit, this call would overwrite the old cached master, creating a memory leak.
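A minimal C sketch of the guard that closes this kind of leak; the `client` type and the simplified `server` struct are illustrative stand-ins, not the actual Redis code:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct client { int fd; } client;

struct { client *cached_master; } server = { NULL };

client *createClient(int fd) {
    client *c = malloc(sizeof(*c));
    c->fd = fd;
    return c;
}

void replicationCacheMasterUsingMyself(void) {
    /* Guard: a cached master may already exist from loadDataFromDisk();
     * allocating a second one would leak the first. */
    if (server.cached_master != NULL) return;
    server.cached_master = createClient(-1);
}

int main(void) {
    replicationCacheMasterUsingMyself();  /* from loadDataFromDisk() */
    replicationCacheMasterUsingMyself();  /* from the clusterCron() path: no leak */
    printf("cached master: %p\n", (void *)server.cached_master);
    free(server.cached_master);
    return 0;
}
```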
-
- 25 Sep, 2019 1 commit
-
-
antirez authored
-
- 15 May, 2019 2 commits
-
-
antirez authored
CLIENT PAUSE may be used, in other contexts, for long periods, making all the slaves time out. It is better for now to be more specific about what should disable sending PINGs. An alternative would be to virtually refresh the slave interactions while clients are paused, however for now I went for this more conservative solution.
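A sketch of the narrower condition, assuming an illustrative `failover_end_time` field rather than the exact Redis flags:

```c
#include <stdbool.h>
#include <time.h>

struct {
    bool clients_paused;        /* set by CLIENT PAUSE */
    time_t failover_end_time;   /* nonzero while a manual failover runs */
} srv;

/* Narrower rule: skip the periodic PING only during a manual failover,
 * not for every CLIENT PAUSE, so a long pause no longer starves the
 * replicas of master PINGs and makes them time out. */
bool shouldSendPeriodicPing(time_t now) {
    return !(srv.failover_end_time != 0 && now < srv.failover_end_time);
}
```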
-
chendianqiang authored
-
- 13 May, 2019 4 commits
- 10 Mar, 2019 2 commits
-
-
antirez authored
-
John Sully authored
-
- 05 Nov, 2018 3 commits
-
-
antirez authored
This logs what happens in the context of the fix in PR #5367.
-
Andrey Bugaevskiy authored
-
Andrey Bugaevskiy authored
During the full database resync we may still have unsaved changes on the receiving side. This causes a race condition between the rename/load of the synced data and the rename of the rdbSave tempfile.
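A hedged sketch of the serialization idea: stop the competing background save before renaming the synced file into place. The function names here are hypothetical, not the actual Redis fix:

```c
#include <stdio.h>

struct { int rdb_child_pid; } srv = { -1 };

/* Stub: the real code would kill the saving child and wait for it. */
static void killRDBChild(void) { srv.rdb_child_pid = -1; }

/* Before renaming the freshly received RDB into place, stop any
 * background save of the old dataset, so its tempfile rename cannot
 * land afterwards and clobber the synced file. */
static void installSyncedRdb(const char *tmpfile, const char *rdbfile) {
    if (srv.rdb_child_pid != -1) killRDBChild();
    rename(tmpfile, rdbfile);   /* now safe: no competing rename */
}
```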
-
- 09 Oct, 2018 1 commit
-
-
antirez authored
-
- 14 Sep, 2018 2 commits
- 17 Jul, 2018 1 commit
-
-
Oran Agra authored
The slave sends \n keepalive messages to the master while parsing the RDB, and later sends REPLCONF ACK once a second. Rarely, the master receives both a linefeed char and a REPLCONF in the same read, \n*3\r\n$8\r\nREPLCONF\r\n..., and it tries to trim two chars (\r\n) from the query buffer, trimming the '*' from *3\r\n$8\r\nREPLCONF\r\n... The master then tries to process a command starting with '3' and replies to the slave with a bunch of -ERR and one +OK. Although the slave silently ignores these (it prints a log message), this corrupts the replication offset at the slave, since the slave increases its replication offset while the master did not.

Other than the fix in processInlineBuffer, I made several other improvements while hunting this very rare bug:
- When Redis replies with "unknown command" it now includes a portion of the arguments, not just the command name, so it is easier to understand what was received. In my case, on the slave side, it was -ERR, but the "arguments" were the interesting part (containing info on the error).
- About a year ago I added code in addReplyErrorLength to print the error to the log in case of a reply to a master (since this string isn't actually transmitted to the master). I now changed that block to print a similar log message to indicate an error being sent from the master to the slave. Note that the slave is marked as CLIENT_SLAVE only after PSYNC was received, so this will not cause any harm for REPLCONF, and will only indicate problems that are going to corrupt the replication stream anyway.
- In two places where c->reply was emptied, I wanted to reset sentlen. This is a precaution (I did not actually see such a problem), since a non-zero sentlen would cause corruption to be transmitted on the socket.
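A minimal sketch of the inline-parsing pitfall: the number of bytes consumed must be derived from the terminator actually found, not assumed to be two bytes. This is a simplified stand-in, not the real processInlineBuffer():

```c
#include <stdio.h>
#include <string.h>

/* Returns the number of bytes consumed for one inline line,
 * or 0 if the line is still incomplete. */
size_t consumeInlineLine(const char *buf, size_t len) {
    const char *newline = memchr(buf, '\n', len);
    if (newline == NULL) return 0;
    /* Consume up to and including the '\n'. The buggy code instead
     * always trimmed two bytes, assuming a "\r\n" terminator, so a
     * lone '\n' keepalive also ate the '*' of the next command. */
    return (size_t)(newline - buf) + 1;
}

int main(void) {
    const char stream[] = "\n*3\r\n$8\r\nREPLCONF\r\n";
    size_t used = consumeInlineLine(stream, sizeof(stream) - 1);
    printf("consumed %zu byte(s); next byte: '%c'\n", used, stream[used]);
    return 0;
}
```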
-
- 16 Jul, 2018 1 commit
-
-
Oran Agra authored
A) Slave buffers didn't count internal fragmentation and sds unused space, which caused them to induce eviction although we didn't mean for it.
B) Slave buffers were consuming about twice the memory they actually needed:
- This was mainly due to sdsMakeRoomFor growing to twice as much as needed each time, while networking.c never stored more than 16k (partially fixed recently in 237a38737).
- Besides, it wasn't able to store half of the new string into one buffer and the other half into the next (so the above mentioned fix helped mainly for small items).
- Lastly, the sds buffers had up to 30% internal fragmentation that was wasted: consumed but not used.
C) Inefficient performance due to starting from a small string and reallocating many times.

What I changed:
- Created dedicated buffers for the reply list, counting their size with zmalloc_size.
- When creating a new reply node, preallocate it to at least 16k.
- When appending a new reply to the buffer, first fill all the unused space of the previous node before starting a new one.

Other changes:
- Expose the mem_not_counted_for_evict info field for the benefit of the test suite.
- Add a test to make sure slave buffers are counted correctly and that they don't cause eviction.
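A simplified sketch of the reply-list strategy described above (dedicated per-node buffers, topping up the tail before allocating a new preallocated node); the sizes and names are illustrative:

```c
#include <stdlib.h>
#include <string.h>

#define REPLY_NODE_SIZE (16 * 1024)   /* preallocate at least 16k */

typedef struct replyNode {
    struct replyNode *next;
    size_t size, used;
    char buf[];                       /* dedicated buffer per node */
} replyNode;

typedef struct { replyNode *head, *tail; } replyList;

static replyNode *replyNodeCreate(void) {
    replyNode *n = malloc(sizeof(replyNode) + REPLY_NODE_SIZE);
    n->next = NULL;
    n->size = REPLY_NODE_SIZE;
    n->used = 0;
    return n;
}

void replyListAppend(replyList *l, const char *data, size_t len) {
    /* First fill whatever space is left in the tail node. */
    if (l->tail != NULL && l->tail->used < l->tail->size) {
        size_t avail = l->tail->size - l->tail->used;
        size_t chunk = len < avail ? len : avail;
        memcpy(l->tail->buf + l->tail->used, data, chunk);
        l->tail->used += chunk;
        data += chunk;
        len -= chunk;
    }
    /* Spill the remainder into fresh preallocated nodes, so a big
     * string can straddle two buffers instead of wasting the first. */
    while (len > 0) {
        replyNode *n = replyNodeCreate();
        size_t chunk = len < n->size ? len : n->size;
        memcpy(n->buf, data, chunk);
        n->used = chunk;
        if (l->tail) l->tail->next = n; else l->head = n;
        l->tail = n;
        data += chunk;
        len -= chunk;
    }
}
```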
-
- 03 Jul, 2018 2 commits
-
-
Jack Drogon authored
-
antirez authored
PR #5081 fixes an "interesting" bug about Redis Cluster failover, but in general about the updating of repl_down_since, which is used to count the time a slave was left disconnected from its master. While the fix resolves the specific issue, in general the validity of repl_down_since is limited to states other than CONNECTED, and the disconnection time is set when the state becomes DISCONNECTED. However, from CONNECTED the state machine must always pass through DISCONNECTED before reaching any other state. So it makes sense to set the field to zero (since it is meaningless in that context) when the state is set to CONNECTED.
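A tiny sketch of the invariant, with stand-in state and field names:

```c
#include <time.h>

typedef enum { REPL_STATE_DISCONNECTED, REPL_STATE_CONNECTING,
               REPL_STATE_CONNECTED } replState;

struct { replState repl_state; time_t repl_down_since; } srv;

void setReplState(replState next) {
    if (next == REPL_STATE_CONNECTED) {
        srv.repl_down_since = 0;          /* meaningless while connected */
    } else if (next == REPL_STATE_DISCONNECTED) {
        srv.repl_down_since = time(NULL); /* start counting downtime */
    }
    srv.repl_state = next;
}
```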
-
- 30 Jun, 2018 1 commit
-
-
WuYunlong authored
automatically as expected.
-
- 26 Jun, 2018 4 commits
-
-
antirez authored
Related to #5037.
-
antirez authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
- 12 Jun, 2018 1 commit
-
-
shenlongxing authored
-
- 06 Jun, 2018 1 commit
-
-
shenlongxing authored
-
- 16 Jan, 2018 2 commits
-
-
antirez authored
-
Oran Agra authored
After a slave is promoted (assuming it has no slaves and it booted over an hour ago), it will lose its replication backlog at the next replication cron, rather than waiting for slaves to connect to it. So on a simple master/slave failover, if the new slave doesn't connect immediately, it may be too late and PSYNC2 will fail.
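A hedged sketch of the kind of guard involved; fields like `just_promoted` are illustrative, not the exact Redis condition:

```c
#include <stdbool.h>
#include <time.h>

struct {
    void *repl_backlog;
    time_t repl_no_slaves_since;   /* when the last slave dropped */
    time_t backlog_time_limit;     /* configured TTL, in seconds */
    int slave_count;
    bool just_promoted;            /* recently turned into a master */
} srv;

/* Only reclaim the backlog after the configured idle time has passed
 * since we last had slaves, and never right after a promotion, so a
 * late-connecting slave can still partially sync. */
bool shouldFreeBacklog(time_t now) {
    if (srv.repl_backlog == NULL || srv.slave_count != 0) return false;
    if (srv.just_promoted) return false;   /* keep it for PSYNC2 */
    return (now - srv.repl_no_slaves_since) > srv.backlog_time_limit;
}
```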
-
- 05 Dec, 2017 1 commit
-
-
antirez authored
We have this operation in two places: when caching the master and when linking a new client after client creation. Having an API for this avoids the errors that come from modifying one of the two places and forgetting the other. The function is also a good place to document why we cache the linked list node. Related to #4497 and #4210.
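A minimal sketch of the idea behind such an API, using simplified adlist-like types; this is a stand-in, not the real helper:

```c
#include <stdlib.h>

typedef struct listNode {
    struct listNode *prev, *next;
    void *value;
} listNode;

typedef struct { listNode *head, *tail; long len; } list;

typedef struct client {
    listNode *client_list_node;   /* cached position in the clients list */
} client;

static listNode *listAddNodeTail(list *l, void *value) {
    listNode *n = malloc(sizeof(*n));
    n->value = value;
    n->prev = l->tail;
    n->next = NULL;
    if (l->tail) l->tail->next = n; else l->head = n;
    l->tail = n;
    l->len++;
    return n;
}

/* Single entry point that both appends the client and caches its node,
 * so the two call sites (master caching, client creation) cannot
 * diverge. Unlinking can then remove the node in O(1) via the cached
 * pointer instead of scanning the whole clients list. */
void linkClient(list *clients, client *c) {
    c->client_list_node = listAddNodeTail(clients, c);
}
```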
-
- 30 Nov, 2017 1 commit
-
-
zhaozhao.zz authored
-
- 24 Nov, 2017 1 commit
-
-
antirez authored
Related to PR #4412 and issue #4407.
-
- 01 Nov, 2017 1 commit
-
-
zhaozhao.zz authored
When we free the backlog, we should use a new replication ID and clear replid2. Without a backlog we cannot increment master_repl_offset even when we execute write commands, which may lead to inconsistency when we try to connect to a master that was previously our slave (in that case, our replid equals that master's replid2). Since that master has our history, we can match its replid2 and second_replid_offset, which makes the partial sync work, but the data would be inconsistent.
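A sketch of the rule, with simplified stand-ins for Redis' ID helpers:

```c
#include <stdlib.h>
#include <string.h>

#define RUN_ID_SIZE 40

struct {
    void *repl_backlog;
    char replid[RUN_ID_SIZE + 1];
    char replid2[RUN_ID_SIZE + 1];
    long long second_replid_offset;
} srv;

/* Stand-in for Redis' random hex generator. */
static void genRandomHexChars(char *p, size_t len) {
    const char *hex = "0123456789abcdef";
    for (size_t i = 0; i < len; i++) p[i] = hex[rand() % 16];
}

void freeReplicationBacklog(void) {
    free(srv.repl_backlog);
    srv.repl_backlog = NULL;
    /* Our history can no longer be continued: switch to a fresh ID and
     * clear ID2, so our stale ID cannot match the replid2 of a master
     * that was once our slave and trigger a bogus partial sync. */
    genRandomHexChars(srv.replid, RUN_ID_SIZE);
    srv.replid[RUN_ID_SIZE] = '\0';
    memset(srv.replid2, '0', RUN_ID_SIZE);
    srv.replid2[RUN_ID_SIZE] = '\0';
    srv.second_replid_offset = -1;
}
```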
-
- 20 Sep, 2017 2 commits
-
-
antirez authored
-
zhaozhao.zz authored
This commit is a reinforcement of commit c1c99e9f.
1. Replication information can be stored when the RDB file is generated by a master: use server.slaveseldb when server.repl_backlog is not NULL, or set repl_stream_db to -1. That's safe, because a NULL server.repl_backlog will trigger full synchronization, and the master will then send a SELECT command into the replication stream.
2. Only do rdbSave* when rsiptr is not NULL; if we do rdbSave* without rdbSaveInfo, the slave will miss repl-stream-db.
3. Save the replication information also in the case of the SAVE command, the FLUSHALL command, and DEBUG reload.
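A heavily simplified sketch of point 1 on the master side; the struct layout and logic are illustrative, not the actual Redis implementation:

```c
#include <stddef.h>

typedef struct rdbSaveInfo { int repl_stream_db; } rdbSaveInfo;

struct { void *repl_backlog; int slaveseldb; char *masterhost; } srv;

/* With a backlog, the stream's currently SELECTed DB is known
 * (server.slaveseldb); with no backlog, -1 is safe because the forced
 * full sync will re-send SELECT anyway. */
rdbSaveInfo *rdbPopulateSaveInfo(rdbSaveInfo *rsi) {
    if (srv.masterhost == NULL) {   /* we are a master */
        rsi->repl_stream_db = srv.repl_backlog ? srv.slaveseldb : -1;
        return rsi;
    }
    return NULL;                    /* slave case elided in this sketch */
}
```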
-
- 19 Sep, 2017 2 commits
-
-
antirez authored
This commit attempts to fix a number of bugs reported in #4316. They are related to the way replication info like the replication ID, offsets, and the currently selected DB in the master client are stored and loaded by Redis. In order to avoid inconsistencies, the changes in this commit try to enforce that:
1. Replication information is only stored when the RDB file is generated by a slave that has a valid 'master' client, so that we can always extract the currently selected DB.
2. When replication information is persisted in the RDB file, either all the info needed for a successful PSYNC is persisted, or nothing is.
3. The RDB replication information is only loaded if the instance is configured as a slave; otherwise a master could start with IDs that relate to a different history of the data set, and still retain such IDs in the future while receiving unrelated writes.
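A sketch of rule 3 on the loading side, with illustrative names:

```c
#include <string.h>

#define RUN_ID_SIZE 40

struct {
    char *masterhost;              /* non-NULL when configured as a slave */
    char replid[RUN_ID_SIZE + 1];
    long long master_repl_offset;
} srv;

/* Only adopt the replication ID/offset saved in the RDB when this
 * instance is actually configured as a slave; a master must not
 * inherit IDs from an unrelated history. */
void loadedRdbReplInfo(const char *rdb_replid, long long rdb_offset) {
    if (srv.masterhost == NULL) return;   /* master: ignore saved info */
    memcpy(srv.replid, rdb_replid, RUN_ID_SIZE);
    srv.replid[RUN_ID_SIZE] = '\0';
    srv.master_repl_offset = rdb_offset;
}
```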
-
antirez authored
A slave may be started with an RDB file able to provide enough replication state to perform a successful partial SYNC with its master. However in such a case, as outlined in issue #4268, the slave backlog will not be started, since it was only initialized on full sync attempts. This creates various problems with successive PSYNC attempts, which will always result in full synchronizations. Thanks to @fdingiit for discovering the issue.
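A sketch of the idea, with hypothetical function boundaries:

```c
#include <stdlib.h>

struct { char *repl_backlog; long long repl_backlog_size; } srv =
    { NULL, 1024 * 1024 };

static void createReplicationBacklog(void) {
    srv.repl_backlog = malloc((size_t)srv.repl_backlog_size);
}

/* After a successful +CONTINUE reply the backlog must exist: a slave
 * restarted from an RDB never went through a full sync, yet its own
 * sub-slaves may want to PSYNC from it. */
void onPsyncContinueReply(void) {
    if (srv.repl_backlog == NULL) createReplicationBacklog();
}
```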
-
- 31 Aug, 2017 1 commit
-
-
jianqingdu authored
Fix a missing va_end() call when syncWrite() fails in sendSynchronousCommand().
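A self-contained sketch of the pattern being fixed, with a stubbed syncWrite(); the real function builds a Redis protocol command from its varargs:

```c
#include <stdarg.h>
#include <string.h>
#include <unistd.h>

/* Stand-in for Redis' timed write helper. */
static int syncWrite(int fd, const char *buf, int len, int timeout) {
    (void)timeout;
    return (int)write(fd, buf, (size_t)len);
}

/* Every return path after va_start() must run va_end(), including the
 * early return when the write fails: that was the missing call. */
int sendSynchronousCommand(int fd, ...) {
    va_list ap;
    const char *arg;
    va_start(ap, fd);
    while ((arg = va_arg(ap, const char *)) != NULL) {
        if (syncWrite(fd, arg, (int)strlen(arg), 100) == -1) {
            va_end(ap);   /* cleanup on the error path */
            return -1;
        }
    }
    va_end(ap);
    return 0;
}
```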
-
- 27 Apr, 2017 1 commit
-
-
antirez authored
The master client cleanup was incomplete: resetClient() was missing and the output buffer of the client was not reset, so pending commands related to the previous connection could still be sent. The first problem caused the client argument vector to be, at times, half populated, so that when the correct replication stream arrived the protocol got mixed with the stale arguments, creating invalid commands that nobody called. Thanks to @yangsiran for also investigating this problem, after already providing important design / implementation hints for the original PSYNC2 issues (see referenced Github issue). Note that this commit adds a new function to the list library of Redis in order to be able to reset a list without destroying it. Related to issue #3899.
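A sketch of such a list-reset helper, using minimal adlist-like types; the key property is that the list struct itself survives, so the caller's pointer stays valid:

```c
#include <stdlib.h>

typedef struct listNode { struct listNode *prev, *next; void *value; } listNode;
typedef struct list {
    listNode *head, *tail;
    void (*free)(void *ptr);   /* optional per-value destructor */
    unsigned long len;
} list;

/* Empty the list in place instead of destroying it: all nodes and
 * their values are released, but the list object remains usable. */
void listEmpty(list *l) {
    listNode *current = l->head;
    while (current) {
        listNode *next = current->next;
        if (l->free) l->free(current->value);
        free(current);
        current = next;
    }
    l->head = l->tail = NULL;
    l->len = 0;
}
```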
-
- 19 Apr, 2017 1 commit
-
-
antirez authored
During the review of the fix for #3899, @yangsiran identified an implementation bug: given that the offset is now relative to the applied part of the replication log, when we cache a master, the successive PSYNC2 request will be made in order to *include* the transaction that was not completely processed. This means that we need to discard any pending transaction from our replication buffer: it will be re-executed.
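A sketch of the discard step, approximating sds with a plain length-tracked buffer; the function name is hypothetical:

```c
#include <stddef.h>

typedef struct client {
    char *querybuf;        /* accumulated, not yet applied input */
    size_t querybuf_len;
} client;

/* When caching the master, drop any half-read transaction: the PSYNC
 * offset we will send covers only fully applied commands, so the
 * master will retransmit the discarded part and it is re-executed
 * cleanly, instead of being glued onto the stale buffer contents. */
void discardPendingReplicationInput(client *master) {
    master->querybuf_len = 0;
    if (master->querybuf) master->querybuf[0] = '\0';
}
```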
-