- 05 Sep, 2018 4 commits
-
-
antirez authored
See issue #5250 and the new comments added to the code in this commit for details.
-
antirez authored
Related to #5250.
-
antirez authored
See #5304.
-
zhaozhao.zz authored
To avoid copying buffers to create a large Redis Object exceeding PROTO_IOBUF_LEN (32KB), we just read the remaining data we need, which may be less than PROTO_IOBUF_LEN. But the remaining length may be zero if bulklen+2 equals sdslen(c->querybuf), for example while clients are paused:

Time1:

    python
    >>> import os, socket
    >>> server="127.0.0.1"
    >>> port=6379
    >>> data1="*3\r\n$3\r\nset\r\n$1\r\na\r\n$33000\r\n"
    >>> data2="".join("x" for _ in range(33000)) + "\r\n"
    >>> data3="\n\n"
    >>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    >>> s.settimeout(10)
    >>> s.connect((server, port))
    >>> s.send(data1)
    28

Time2:

    redis-cli client pause 10000

Time3:

    >>> s.send(data2)
    33002
    >>> s.send(data3)
    2
    >>> s.send(data3)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    socket.error: [Errno 104] Connection reset by peer

To fix that, we should check whether the remaining length is greater than zero.
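A minimal standalone C sketch of the check described above (the function name and parameters are hypothetical, not the actual readQueryFromClient() code):

    #include <stddef.h>

    /* Decide how many bytes to request from read(2) while receiving a big
     * bulk argument. 'buffered' is how much of the bulk already sits in the
     * query buffer; the bulk needs 'bulklen' payload bytes plus a trailing
     * CRLF. If everything is already buffered the remaining length is zero,
     * and we must not shrink the read size to zero, otherwise read(2)
     * returns 0 and the client is mistaken for a closed connection. */
    size_t big_bulk_read_len(size_t bulklen, size_t buffered,
                             size_t default_len) {
        size_t needed = bulklen + 2;                /* payload + "\r\n" */
        if (buffered >= needed) return default_len; /* remaining == 0 */
        size_t remaining = needed - buffered;
        return remaining < default_len ? remaining : default_len;
    }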
-
- 04 Sep, 2018 14 commits
-
-
antirez authored
-
zhaozhao.zz authored
If we are going to read a large object from the network, try to make it likely that it will start at the c->querybuf boundary, so that we can optimize object creation and avoid a large copy of data. But do this only when the data we have not yet parsed is less than or equal to ll+2. If the unparsed data is greater than ll+2, trimming querybuf is just a waste of time, because at that point querybuf contains more than just our bulk. The problem is easy to reproduce:

Time1: call `client pause 10000` on the slave.
Time2: redis-benchmark -t set -r 10000 -d 33000 -n 10000.

Then the slave hangs after 10 seconds.
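A hedged sketch of the rule as a standalone C predicate (the name worth_aligning_big_bulk and its parameters are illustrative only):

    #include <stdbool.h>
    #include <stddef.h>

    /* When a big bulk of declared length ll is announced, trim the already
     * parsed prefix of querybuf (so the bulk payload starts at the buffer
     * boundary and can later become an object without a large copy) only if
     * every unparsed byte belongs to that bulk, i.e. at most ll+2 bytes
     * (payload plus CRLF) are pending. If more is pending, other commands
     * follow the bulk and the memmove done by the trim is wasted work. */
    bool worth_aligning_big_bulk(size_t unparsed_bytes, long long ll) {
        return unparsed_bytes <= (size_t)(ll + 2);
    }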
-
antirez authored
-
zhaozhao.zz authored
-
zhaozhao.zz authored
-
antirez authored
Related to #5305.
-
antirez authored
See #5297.
-
antirez authored
Technically speaking we don't really need to put the master client in the list of clients that need to be processed, since in practice the PING commands from the master will take care of that; however, it is conceptually more sane to do so.
-
antirez authored
Processing commands from the master while the slave is in a busy state is not correct, but we also cannot just reply -BUSY to the replication stream commands coming from the master. The correct solution is to stop processing data from the master, accumulate the stream into the buffers, and resume processing later. Related to #5297.
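A minimal C sketch of this policy (standalone and hypothetical, not the actual input-processing code):

    #include <stdbool.h>

    /* While a script has run past its time limit (the "busy" state), input
     * coming from our master is left unread in its query buffer instead of
     * being parsed and executed; normal clients keep receiving -BUSY, but
     * the replication stream is simply deferred and processed once the
     * script terminates. */
    bool defer_client_input(bool script_timed_out, bool client_is_master) {
        return script_timed_out && client_is_master;
    }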
-
antirez authored
However, scripts coming from the master will be impossible to kill. Related to #5297.
-
antirez authored
See reasoning in #5297.
-
dejun.xdj authored
-
Chris Lamb authored
-
zhaozhao.zz authored
-
- 31 Aug, 2018 1 commit
-
-
Salvatore Sanfilippo authored
#5299 Fix blocking XREAD for streams that ran dry
-
- 29 Aug, 2018 21 commits
-
-
Sascha Roland authored
The conclusion that an XREAD request can be answered synchronously when the stream's last_id is larger than the passed last-received-id parameter assumes that there must be entries present which could be returned immediately. This assumption fails for streams that are currently empty but used to contain entries that were removed by XDEL, ... . As a result, the client is answered synchronously with an empty result instead of blocking until new entries arrive. An additional check for a non-empty stream is required.
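A standalone C sketch of the extra check (the stream_id struct and function names are illustrative, not the actual stream code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint64_t ms, seq; } stream_id;

    static bool id_greater(stream_id a, stream_id b) {
        return a.ms > b.ms || (a.ms == b.ms && a.seq > b.seq);
    }

    /* Serve XREAD synchronously only if the stream advanced past the ID the
     * client already saw AND still holds at least one entry; an empty
     * stream whose last_id only advanced because of deleted entries must
     * make the client block instead. */
    bool xread_can_serve_now(stream_id last_id, stream_id last_received,
                             size_t stream_length) {
        return stream_length != 0 && id_greater(last_id, last_received);
    }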
-
antirez authored
-
zhaozhao.zz authored
-
zhaozhao.zz authored
-
Oran Agra authored
A few tests had borderline thresholds that were adjusted. The slave buffers test had two issues that prevented the slave buffer from growing: 1) the slave didn't necessarily go to sleep on time, or woke up too early; now SIGSTOP is used to make sure it goes to sleep exactly when we want. 2) the master disconnected the slave on timeout.
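The SIGSTOP trick itself is simple; a hypothetical C illustration (the real test suite is written in Tcl):

    #include <signal.h>
    #include <sys/types.h>

    /* Freeze a process so it stops reading its sockets at exactly the
     * moment the test chooses, then resume it later. SIGSTOP cannot be
     * caught or ignored, so the pause is guaranteed. */
    int freeze_process(pid_t pid) { return kill(pid, SIGSTOP); }
    int resume_process(pid_t pid) { return kill(pid, SIGCONT); }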
-
antirez authored
-
antirez authored
-
antirez authored
Note: this breaks backward compatibility with Redis 4, since now slaves by default are exact copies of masters and do not try to evict keys independently.
-
antirez authored
-
antirez authored
-
zhaozhao.zz authored
-
zhaozhao.zz authored
-
zhaozhao.zz authored
The setProtocolError function just records the protocol error details in the server log and marks the client with CLIENT_CLOSE_AFTER_REPLY. It no longer needs to trim querybuf with sdsrange, because we do that after protocol parsing.
-
zhaozhao.zz authored
-
zhaozhao.zz authored
-
zhaozhao.zz authored
-
zhaozhao.zz authored
This is an optimization for pipeline processing; we discussed the problem in issue #5229: clients may be paused by the `CLIENT PAUSE` command, and then querybuf may grow very large, so the cost of the memmove performed by sdsrange after parsing each completed command becomes horrible. The optimization is to parse all the commands in querybuf first, and only then call sdsrange once.
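A standalone C sketch of the pattern (names such as querybuf, consume() and trim() are hypothetical, not the actual client structure):

    #include <stddef.h>
    #include <string.h>

    typedef struct {
        char *buf;   /* pending query bytes */
        size_t len;  /* number of valid bytes in buf */
        size_t pos;  /* how much of buf has been parsed so far */
    } querybuf;

    /* Called once per parsed command: just advance the cursor, O(1). */
    void consume(querybuf *q, size_t nbytes) { q->pos += nbytes; }

    /* Called once after the whole pipeline has been parsed: a single
     * memmove instead of one per command. */
    void trim(querybuf *q) {
        memmove(q->buf, q->buf + q->pos, q->len - q->pos);
        q->len -= q->pos;
        q->pos = 0;
    }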
-
Chris Lamb authored
See <https://reproducible-builds.org/specs/source-date-epoch/> for more details.
Signed-off-by: Chris Lamb <chris@chris-lamb.co.uk>
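For illustration, a minimal standalone C program showing how a build step can honor SOURCE_DATE_EPOCH (not the actual Redis build tooling):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Use SOURCE_DATE_EPOCH as the build timestamp when it is set, so two
     * builds of the same source embed the same date; fall back to the
     * current time otherwise. The spec mandates interpreting it as UTC. */
    int main(void) {
        const char *sde = getenv("SOURCE_DATE_EPOCH");
        time_t t = sde ? (time_t)strtoll(sde, NULL, 10) : time(NULL);
        char buf[64];
        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", gmtime(&t));
        printf("Build date: %s UTC\n", buf);
        return 0;
    }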
-
Chris Lamb authored
This may look a little pointless (and it is a complete no-op change here), but package maintainers need to modify these lines to actually daemonize (etc. etc.), and it's far preferable if the resulting diff is restricted to changing just that bit rather than also adding docs, etc. The less diff, the better, in general.
Signed-off-by: Chris Lamb <chris@chris-lamb.co.uk>
-
dejun.xdj authored
-
shenlongxing authored
-