- 25 Feb, 2019 5 commits
- 21 Feb, 2019 2 commits
antirez authored
Madelyn Olson authored
- 12 Feb, 2019 2 commits
antirez authored
Related to #5832.
zhaozhao.zz authored
- 18 Jan, 2019 1 commit
antirez authored
- 15 Jan, 2019 1 commit
antirez authored
- 11 Jan, 2019 1 commit
antirez authored
- 10 Jan, 2019 1 commit
antirez authored
- 09 Jan, 2019 14 commits
antirez authored
antirez authored
The function naming was totally nuts. Let's fix it now, since we are breaking PRs anyway with the RESP3 refactoring and related changes.
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
- 11 Dec, 2018 1 commit
antirez authored
See #5663.
- 07 Dec, 2018 1 commit
zhaozhao.zz authored
- 04 Dec, 2018 1 commit
Madelyn Olson authored
- 30 Oct, 2018 1 commit
antirez authored
Fake clients are used in special situations and are not linked to the normal clients list, so freeing them will always crash Redis in one way or another. It is not common to send replies to fake clients, but we have one usage in the modules API: when a client is blocked, we associate with the blocked client object (which is safe to manipulate in a thread) a fake client that accumulates replies. Because of this bug we had the problem described in issue #5443. The fix was verified to work with the provided example module. Writing a regression test is very hard, and the bug is unlikely to be triggered in the future.
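As a minimal sketch of the kind of guard this implies (an illustration, not the verbatim patch): it assumes, as in the Redis source, that fake clients are created with their fd set to -1, and that the dangerous path is the one that schedules a client for asynchronous freeing once a reply pushes it over the output buffer limits.

```c
/* Illustrative sketch: never schedule a fake client for async freeing.
 * Fake clients (fd == -1) are not linked into server.clients, so the
 * async free machinery would crash on them. */
void asyncCloseClientOnOutputBufferLimitReached(client *c) {
    if (c->fd == -1) return; /* Fake client: unsafe to free. */
    if (c->reply_bytes == 0 || c->flags & CLIENT_CLOSE_ASAP) return;
    if (checkClientOutputBufferLimits(c)) {
        serverLog(LL_WARNING,
            "Client scheduled to be closed ASAP for exceeding output buffer limits.");
        freeClientAsync(c);
    }
}
```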
- 09 Oct, 2018 3 commits
zhaozhao.zz authored
antirez authored
Related to #4840. Note that when we re-enter the event loop with aeProcessEvents() we don't process timers, nor the before/after sleep callbacks, so we should never end up calling freeClientsInAsyncFreeQueue() when re-entering the loop.
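For context, a simplified sketch of what such a nested call into the event loop looks like; the function shape below is modeled on Redis' processEventsWhileBlocked() and is an assumption here, not code taken from this commit.

```c
/* Illustrative sketch (modeled on processEventsWhileBlocked()): re-enter
 * the event loop to keep serving file events while the main thread is
 * busy. Only file events are processed and AE_DONT_WAIT prevents
 * blocking; timers and the before/after sleep callbacks do not run, so
 * freeClientsInAsyncFreeQueue() is never reached from this nested call. */
void processEventsWhileBlocked(void) {
    int iterations = 4; /* Arbitrary bound for this sketch. */
    while (iterations--) {
        int events = aeProcessEvents(server.el, AE_FILE_EVENTS|AE_DONT_WAIT);
        if (!events) break; /* Nothing to do, stop re-entering. */
    }
}
```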
antirez authored
The idea is to have an API for cases like the -BUSY state and DEBUG RELOAD, where we have to manually remove the read handler. See #4804.
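A sketch of what such an API could look like, assuming a pair of protect/unprotect helpers and a CLIENT_PROTECTED flag; the names and the handling of the write side are illustrative, not necessarily the commit's code.

```c
/* Illustrative sketch: "protect" a client while the server is in a state
 * like -BUSY or DEBUG RELOAD, so no further input is read and the client
 * is not freed under us, then restore normal operation afterwards. */
void protectClient(client *c) {
    c->flags |= CLIENT_PROTECTED;
    aeDeleteFileEvent(server.el, c->fd, AE_READABLE);
    aeDeleteFileEvent(server.el, c->fd, AE_WRITABLE);
}

void unprotectClient(client *c) {
    if (c->flags & CLIENT_PROTECTED) {
        c->flags &= ~CLIENT_PROTECTED;
        /* Reinstall the read handler; pending writes would also need to
         * be rescheduled in a complete implementation. */
        aeCreateFileEvent(server.el, c->fd, AE_READABLE, readQueryFromClient, c);
    }
}
```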
- 11 Sep, 2018 2 commits
- 04 Sep, 2018 1 commit
antirez authored
See #5304.
- 03 Sep, 2018 3 commits
antirez authored
antirez authored
Related to #5305.
zhaozhao.zz authored
If we are going to read a large object from the network, try to make it likely that it will start at the c->querybuf boundary, so that we can optimize object creation by avoiding a large copy of data. But do this only when the data we have not yet parsed is less than or equal to ll+2 bytes: if it is larger than ll+2, trimming querybuf is just a waste of time, because at that point querybuf contains more than just our bulk.

It's easy to reproduce the problem:

- Time 1: call `client pause 10000` on a slave.
- Time 2: run `redis-benchmark -t set -r 10000 -d 33000 -n 10000`.

The slave then hangs after 10 seconds.
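A sketch of the check described above, roughly in the shape of the multibulk parsing code in networking.c; here ll is the announced bulk length, ll+2 accounts for the payload plus the trailing CRLF, and c->qb_pos is the amount of querybuf already consumed (the surrounding parsing loop is omitted and partly assumed).

```c
/* Inside the multibulk parser, once the bulk length ll has been read: */
if (ll >= PROTO_MBULK_BIG_ARG) {
    /* Trim querybuf so the big bulk starts at the buffer boundary and can
     * be turned into an object without a large copy, but only if the
     * unparsed data is at most ll+2 bytes (the bulk plus CRLF). If it is
     * larger, querybuf already holds more than this bulk and trimming
     * would be wasted work. */
    if (sdslen(c->querybuf) - c->qb_pos <= (size_t)ll + 2) {
        sdsrange(c->querybuf, c->qb_pos, -1);
        c->qb_pos = 0;
        /* Make sure there is room for the whole bulk plus CRLF. */
        c->querybuf = sdsMakeRoomFor(c->querybuf, ll + 2);
    }
}
```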