- 14 Sep, 2018 17 commits
- antirez authored
- antirez authored
- antirez authored
- antirez authored
- antirez authored
- antirez authored
- antirez authored
- antirez authored
- antirez authored
- antirez authored
Aliases added for all the commands mentioning "slave". Moreover, CONFIG REWRITE will use the new names, and will be able to reuse the old lines mentioning the old options.
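An illustrative redis.conf fragment (a hedged example, not taken from the commit itself; the master address is a placeholder):

    # Old directives, still accepted; CONFIG REWRITE can keep reusing these
    # lines when they are already present in the file:
    slaveof 10.0.0.1 6379
    slave-read-only yes

    # New, equivalent directives using the replica naming:
    replicaof 10.0.0.1 6379
    replica-read-only yes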
- antirez authored
The --slave alias remains but is undocumented, just for backward compatibility.
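For instance (illustrative invocations only):

    redis-cli --replica   # new option name
    redis-cli --slave     # old name, still works but is no longer documented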
- antirez authored
- Amin Mesbah authored
Slight robustness improvement, especially if the limit values are changed, as was suggested in antirez/redis#4291 [1].

[1] https://github.com/antirez/redis/pull/4291
- Jeffrey Lovitz authored
- youjiali1995 authored
- youjiali1995 authored
- Weiliang Li authored
- 06 Sep, 2018 1 commit
- antirez authored
- 05 Sep, 2018 8 commits
- antirez authored
- antirez authored
- antirez authored
See issue #5250 and issue #5292 for more info.
- antirez authored
Here the idea is that we do not want freeMemoryIfNeeded() to propagate a DEL command before the script, changing what happens in the script execution once it reaches the slave. For example, see this potential issue (in the words of @soloestoy). On the master we run the following script:

    if redis.call('get','key')
    then
        redis.call('set','xxx','yyy')
    end
    redis.call('set','c','d')

When Redis attempts to execute redis.call('set','xxx','yyy') we call freeMemoryIfNeeded(), and the key may get deleted. Because redis.call('set','xxx','yyy') has already been executed on the master, the script will be replicated to the slave anyway; but the slave received "DEL key" before the script and, since slaves ignore maxmemory, the end result is that the master has xxx and c while the slave has only the single key c.

Note that this patch (and other related work) was authored collaboratively in issue #5250 with the help of @soloestoy and @oranagra.

Related to issue #5250.
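A minimal sketch of the kind of guard this implies (the wrapper name is hypothetical and this is not the verbatim patch; it only assumes the existing server.lua_caller field that marks a script in progress):

    /* Hypothetical sketch: skip eviction while a Lua script is running,
     * so that freeMemoryIfNeeded() cannot propagate a DEL ahead of the
     * script in the replication stream. */
    int freeMemoryIfNeededScriptSafe(void) {
        if (server.lua_caller != NULL) return C_OK; /* script in progress */
        return freeMemoryIfNeeded();
    }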
- antirez authored
See issue #5250 and the new comments added to the code in this commit for details.
- antirez authored
Related to #5250.
- antirez authored
See #5304.
- zhaozhao.zz authored
To avoid copying buffers to create a large Redis Object exceeding PROTO_IOBUF_LEN (32KB), we just read the remaining data we need, which may be less than PROTO_IOBUF_LEN. But the remaining length may be zero, if bulklen+2 equals sdslen(c->querybuf), in a CLIENT PAUSE context. For example:

Time1:

    python
    >>> import os, socket
    >>> server="127.0.0.1"
    >>> port=6379
    >>> data1="*3\r\n$3\r\nset\r\n$1\r\na\r\n$33000\r\n"
    >>> data2="".join("x" for _ in range(33000)) + "\r\n"
    >>> data3="\n\n"
    >>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    >>> s.settimeout(10)
    >>> s.connect((server, port))
    >>> s.send(data1)
    28

Time2:

    redis-cli client pause 10000

Time3:

    >>> s.send(data2)
    33002
    >>> s.send(data3)
    2
    >>> s.send(data3)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    socket.error: [Errno 104] Connection reset by peer

To fix that, we should check whether the remaining length is greater than zero.
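The shape of the resulting check in readQueryFromClient(), reconstructed from the description above (variable names should be treated as approximate rather than as the verbatim patch):

    if (c->reqtype == PROTO_REQ_MULTIBULK && c->multibulklen && c->bulklen != -1
        && c->bulklen >= PROTO_MBULK_BIG_ARG)
    {
        ssize_t remaining = (size_t)(c->bulklen+2)-sdslen(c->querybuf);

        /* 'remaining' may be zero, e.g. when a client paused with CLIENT
         * PAUSE resumes and bulklen+2 == sdslen(c->querybuf): only shrink
         * the read size when there really is something left to read. */
        if (remaining > 0 && remaining < readlen) readlen = remaining;
    }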
- 04 Sep, 2018 14 commits
- antirez authored
- zhaozhao.zz authored
If we are going to read a large object from the network, try to make it likely that it will start at the c->querybuf boundary, so that we can optimize object creation, avoiding a large copy of data. But do this only when the data we have not yet parsed is less than or equal to ll+2: if the unparsed length is greater than ll+2, trimming querybuf is just a waste of time, because at that point the querybuf contains not only our bulk.

It's easy to reproduce the issue:

Time1: call `client pause 10000` on the slave.
Time2: redis-benchmark -t set -r 10000 -d 33000 -n 10000.

Then the slave hangs after 10 seconds.
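A rough sketch of the condition described above, inside processMultibulkBuffer() (variable names are recalled from that era's code and should be treated as approximate, not as the verbatim patch):

    if (ll >= PROTO_MBULK_BIG_ARG) {
        /* Trim the already-parsed prefix only if the unparsed bytes are at
         * most the bulk itself (ll) plus the trailing CRLF (2); otherwise
         * querybuf holds more than this bulk and trimming buys nothing. */
        if (sdslen(c->querybuf)-pos <= (size_t)ll+2) {
            sdsrange(c->querybuf,pos,-1);
            pos = 0;
            /* Hint sds about the total size this bulk will need. */
            c->querybuf = sdsMakeRoomFor(c->querybuf,ll+2);
        }
    }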
- antirez authored
- zhaozhao.zz authored
- zhaozhao.zz authored
- antirez authored
Related to #5305.
- antirez authored
See #5297.
- antirez authored
Technically speaking we don't really need to put the master client in the list of clients that need to be processed, since in practice the PING commands from the master will take care of that; however, it is conceptually more sane to do so.
- antirez authored
Processing commands from the master while the slave is in a busy state is not correct; however, we also cannot just reply -BUSY to the replication stream commands coming from the master. The correct solution is to stop processing data from the master, just accumulating the stream into the buffers, and to resume the processing later. Related to #5297.
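As I understand it, the core of this change is a guard of roughly this shape in processInputBuffer() (a sketch, not necessarily the verbatim patch):

    /* Don't process input from the master while a script is busy on this
     * slave: instead of replying -BUSY (as we do for normal clients), stop
     * here and let the replication stream accumulate in the query buffer,
     * resuming the processing once the busy condition is over. */
    if (server.lua_timedout && c->flags & CLIENT_MASTER) break;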
- antirez authored
However, the master scripts will be impossible to kill. Related to #5297.
- antirez authored
See reasoning in #5297.
- dejun.xdj authored
- Chris Lamb authored
- zhaozhao.zz authored