- 19 Jul, 2018 8 commits
- 17 Jul, 2018 1 commit
Oran Agra authored
The slave sends \n keepalive messages to the master while parsing the RDB, and later sends REPLCONF ACK once a second. Rarely, the master receives both a linefeed char and a REPLCONF in the same read, \n*3\r\n$8\r\nREPLCONF\r\n..., and it tries to trim two chars (\r\n) from the query buffer, trimming the '*' from *3\r\n$8\r\nREPLCONF\r\n... The master then tries to process a command starting with '3' and replies to the slave with a bunch of -ERR and one +OK. Although the slave silently ignores these (it only prints a log message), this corrupts the replication offset at the slave, since the slave increases its replication offset while the master did not.

Other than the fix in processInlineBuffer (sketched below), I made several other improvements while hunting this very rare bug:
- When Redis replies with "unknown command" it now includes a portion of the arguments, not just the command name, so it is easier to understand what was received. In my case, on the slave side, the reply was -ERR, but the "arguments" were the interesting part (containing info on the error).
- About a year ago I added code in addReplyErrorLength to print the error to the log in case of a reply to a master (since this string isn't actually transmitted to the master); that block now prints a similar log message to indicate an error being sent from the master to the slave. Note that the slave is marked as CLIENT_SLAVE only after PSYNC was received, so this will not cause any harm for REPLCONF, and will only indicate problems that are going to corrupt the replication stream anyway.
- In two places where c->reply was emptied, I also reset sentlen. This is a precaution (I did not actually see such a problem), since a non-zero sentlen would cause corruption to be transmitted on the socket.
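A minimal standalone sketch of the trimming rule behind the fix (names and buffer handling are simplified; this is not the actual processInlineBuffer code): a lone '\n' keepalive consumes exactly one byte, and two bytes are trimmed only when a '\r' actually precedes the newline, so the '*' of the following REPLCONF command stays in the query buffer.

```c
#include <stdio.h>
#include <string.h>

/* How many bytes the inline-protocol parser should consume for one line.
 * A lone '\n' (the slave's keepalive) must consume exactly one byte;
 * blindly trimming two bytes would also eat the '*' that starts the
 * REPLCONF multibulk command received in the same read. */
static size_t inline_consumed_bytes(const char *buf, size_t len) {
    const char *newline = memchr(buf, '\n', len);
    if (newline == NULL) return 0;             /* line not complete yet */

    size_t linelen = (size_t)(newline - buf);  /* bytes before '\n' */
    size_t termlen = 1;                        /* '\n' alone... */
    if (linelen > 0 && buf[linelen-1] == '\r') {
        linelen--;                             /* payload excludes the '\r' */
        termlen = 2;                           /* ...or '\r' + '\n' */
    }
    /* The old logic effectively always consumed payload + 2 bytes, eating one
     * extra byte after a bare '\n'. Consume payload + its real terminator. */
    return linelen + termlen;
}

int main(void) {
    const char buf[] = "\n*3\r\n$8\r\nREPLCONF\r\n"; /* keepalive + start of REPLCONF */
    size_t consumed = inline_consumed_bytes(buf, sizeof(buf)-1);
    /* Only the keepalive '\n' is trimmed; the multibulk parser still sees '*'. */
    printf("consumed %zu byte(s), next char: '%c'\n", consumed, buf[consumed]);
    return 0;
}
```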
- 16 Jul, 2018 1 commit
Oran Agra authored
A) Slave buffers didn't count internal fragmentation and sds unused space; this caused them to induce eviction although we didn't mean for it.

B) Slave buffers were consuming about twice the memory they actually needed:
- This was mainly due to sdsMakeRoomFor growing to twice as much as needed each time, while networking.c did not store more than 16k per node (partially fixed recently in 237a38737).
- Besides, it wasn't able to store half of a new string in one buffer and the other half in the next (so the above-mentioned fix helped mainly for small items).
- Lastly, the sds buffers had up to 30% internal fragmentation that was wasted: consumed but not used.

C) Inefficient performance due to starting from a small string and reallocating many times.

What I changed (see the sketch after this message):
- Create dedicated buffers for the reply list, counting their size with zmalloc_size.
- When creating a new reply node, preallocate it to at least 16k.
- When appending a new reply to the buffer, first fill all the unused space of the previous node before starting a new one.

Other changes:
- Expose the mem_not_counted_for_evict INFO field for the benefit of the test suite.
- Add a test to make sure slave buffers are counted correctly and that they don't cause eviction.
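A minimal sketch of that append strategy, under simplified assumptions (the replyNode/replyList types and constant name are illustrative, not the actual networking.c structures): appends first fill the tail node's spare space, any remainder goes into a new node preallocated to at least 16k, and memory accounting counts the full allocation rather than just the bytes used.

```c
#include <stdlib.h>
#include <string.h>

#define REPLY_CHUNK_BYTES (16*1024)  /* minimum preallocation per reply node */

/* Hypothetical reply node: a header followed by an inline byte buffer. */
typedef struct replyNode {
    struct replyNode *next;
    size_t size;   /* usable bytes in buf[] */
    size_t used;   /* bytes already filled */
    char buf[];
} replyNode;

typedef struct replyList {
    replyNode *head, *tail;
    size_t mem;    /* total allocated bytes, as the eviction code should count them */
} replyList;

static replyNode *reply_node_new(size_t payload) {
    size_t size = payload < REPLY_CHUNK_BYTES ? REPLY_CHUNK_BYTES : payload;
    replyNode *n = malloc(sizeof(replyNode) + size);
    if (!n) return NULL;
    n->next = NULL;
    n->size = size;  /* with the Redis allocator this is where zmalloc_size() comes in */
    n->used = 0;
    return n;
}

/* Append data, filling the tail node's free space before allocating a new node. */
static int reply_list_append(replyList *l, const char *p, size_t len) {
    replyNode *tail = l->tail;
    if (tail) {
        size_t avail = tail->size - tail->used;
        size_t copy = len < avail ? len : avail;
        memcpy(tail->buf + tail->used, p, copy);
        tail->used += copy;
        p += copy;
        len -= copy;
    }
    if (len) {
        replyNode *n = reply_node_new(len);
        if (!n) return -1;
        memcpy(n->buf, p, len);
        n->used = len;
        if (l->tail) l->tail->next = n; else l->head = n;
        l->tail = n;
        l->mem += sizeof(replyNode) + n->size;  /* count the real allocation, not just 'used' */
    }
    return 0;
}

int main(void) {
    replyList l = {0};
    reply_list_append(&l, "+OK\r\n", 5);
    reply_list_append(&l, "$5\r\nhello\r\n", 11); /* fits into the same 16k node */
    return 0;
}
```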
- 14 Jul, 2018 1 commit
WuYunlong authored
even if we do have problems persisting on disk previously.
- 04 Jul, 2018 3 commits
- 03 Jul, 2018 2 commits
Jack Drogon authored
antirez authored
- 02 Jul, 2018 1 commit
antirez authored
- 01 Jul, 2018 1 commit
chendianqiang authored
- 27 Jun, 2018 1 commit
antirez authored
- 21 Jun, 2018 1 commit
michael-grunder authored
Unlike the BZPOP variants, these functions take a single key. This fixes an erroneous CROSSSLOT error when passing a count to a cluster-enabled server.
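A hypothetical, simplified illustration of why the key declaration matters in cluster mode (the cmdKeySpec struct and the declarations below are made up for the example, not the actual Redis command table): the CROSSSLOT check hashes every argument in the declared key range, so a trailing count argument must not fall inside that range.

```c
#include <stdio.h>

/* Simplified view of how key arguments are located: a command declares the
 * first key, last key and step, and every argument in that range must hash
 * to the same cluster slot. */
typedef struct cmdKeySpec {
    const char *name;
    int firstkey;  /* 1-based index of the first key argument */
    int lastkey;   /* last key argument; negative counts from the end */
    int keystep;
} cmdKeySpec;

static void print_keys(const cmdKeySpec *spec, const char **argv, int argc) {
    int last = spec->lastkey >= 0 ? spec->lastkey : argc + spec->lastkey;
    printf("%s keys:", spec->name);
    for (int i = spec->firstkey; i <= last; i += spec->keystep)
        printf(" %s", argv[i]);
    printf("\n");
}

int main(void) {
    const char *argv[] = {"ZPOPMIN", "myzset", "2"};  /* key + count */

    /* Declared as if every trailing argument were a key, the count "2" is
     * hashed too, and the command can fail with CROSSSLOT. */
    cmdKeySpec broken = {"ZPOPMIN (broken)", 1, -1, 1};
    /* Declared as a single-key command, only "myzset" is hashed. */
    cmdKeySpec fixed  = {"ZPOPMIN (fixed)", 1, 1, 1};

    print_keys(&broken, argv, 3);
    print_keys(&fixed, argv, 3);
    return 0;
}
```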
- 11 Jun, 2018 1 commit
antirez authored
A user with many connections (10 thousand) on a single Redis server reports in issue #4983 that sometimes Redis is idle because, at the same time, many clients need to resize their query buffer according to the old policy.

It looks like this was caused by the fact that we normally allow the query buffer to grow without problems up to PROTO_MBULK_BIG_ARG, but as soon as the client is idle we are immediately more strict, and a query buffer greater than 1024 bytes is already enough to trigger the resize. So, for instance, if most of the clients stop at the same time, this issue should be easily triggered.

This behavior actually looks odd: there should be a single clear limit after which we say "let's look at this query buffer and check if it's time to resize it". This commit puts that limit at PROTO_MBULK_BIG_ARG, and the check is performed either when the current usage is too big compared to the peak usage, or when the client is idle. Then, once the check is performed, wasting just a few kbytes is considered enough to proceed with the resize. This should fix the issue.
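A sketch of the resize policy described above, under stated assumptions (this is not the actual clientsCronResizeQueryBuffer code; every threshold except PROTO_MBULK_BIG_ARG, whose 32k value matches Redis, is illustrative):

```c
#include <stdbool.h>
#include <stddef.h>

#define PROTO_MBULK_BIG_ARG (32*1024)  /* same value the Redis protocol code uses */

/* Decide whether a cron pass should shrink a client's query buffer.
 * Only buffers already bigger than PROTO_MBULK_BIG_ARG are candidates, and
 * only when they are either much larger than the recent peak usage or belong
 * to an idle client; finally, shrinking must give back at least a few
 * kilobytes to be worth the reallocation. */
bool should_resize_querybuf(size_t allocated, size_t peak_used, int idle_seconds) {
    const size_t min_savings = 4*1024;                  /* illustrative "few kbytes" */

    if (allocated <= PROTO_MBULK_BIG_ARG) return false; /* the single clear limit */
    bool oversized_vs_peak = allocated / (peak_used + 1) > 2;
    bool idle = idle_seconds > 2;                       /* illustrative idle threshold */
    if (!oversized_vs_peak && !idle) return false;
    return allocated > peak_used + min_savings;         /* a few wasted KBs is enough */
}
```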
- 09 Jun, 2018 1 commit
Itamar Haber authored
- 07 Jun, 2018 2 commits
Itamar Haber authored
antirez authored
Also add the concept of a size/items limit, instead of having only the number of bytes as a limit.
- 04 Jun, 2018 1 commit
赵磊 authored
- 17 May, 2018 1 commit
Oran Agra authored
Problems fixed:
* failing to read fragmentation information from jemalloc
* overflow in the jemalloc fragmentation hint to the defragger
* test suite not triggering eviction after population
- 11 May, 2018 1 commit
antirez authored
This commit also adds a top comment about a subtle behavior when mixing blocking operations of different types on the same key.
- 29 Apr, 2018 1 commit
Itamar Haber authored
An implementation of the [Ze POP Redis Module](https://github.com/itamarhaber/zpop) as core Redis commands. Fixes #1861.
- 19 Apr, 2018 1 commit
antirez authored
- 18 Apr, 2018 1 commit
antirez authored
- 06 Apr, 2018 1 commit
charpty authored
Signed-off-by: charpty <charpty@gmail.com>
- 20 Mar, 2018 1 commit
antirez authored
With XINFO out of the blue I invented a new syntax for commands never used in Redis in the past... Let's fix it and make it Great Again!!11one (TM)
- 19 Mar, 2018 1 commit
antirez authored
- 15 Mar, 2018 7 commits