- 12 Jun, 2018 3 commits
-
-
antirez authored
See issue #5006. The comment in the code was also wrong and has been rectified.
-
antirez authored
See issue #5005 comments.
-
Baoyi Chen authored
fix [#5005](https://github.com/antirez/redis/issues/5005)
-
- 11 Jun, 2018 2 commits
-
-
antirez authored
A user with many connections (10 thousand) on a single Redis server reports in issue #4983 that sometimes Redis is idle because, under the old policy, many clients need to resize their query buffer at the same time.

It looks like this was caused by the fact that we normally allow the query buffer to grow without problems up to PROTO_MBULK_BIG_ARG, but as soon as the client is idle we are much stricter: a query buffer greater than 1024 bytes is already enough to trigger the resize. So, for instance, if most of the clients stop at the same time, this issue is easily triggered.

This behavior looks odd: there should be a single clear limit after which we say, let's look at this query buffer and check whether it's time to resize it. This commit puts that limit at PROTO_MBULK_BIG_ARG, and the check is performed both when the current usage is too big compared to the peak usage and when the client is idle. Once the check is performed, wasting just a few kbytes is considered enough to proceed with the resize. This should fix the issue.
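As an illustration of the policy described above, here is a minimal C sketch, not the actual Redis code: the struct fields and the `should_resize_querybuf()` helper are hypothetical stand-ins for the logic in `clientsCronResizeQueryBuffer()`, which operates on sds buffers.

```c
#include <stddef.h>
#include <time.h>

#define PROTO_MBULK_BIG_ARG (1024*32)   /* same value Redis uses */

/* Hypothetical, simplified view of the relevant client fields. */
typedef struct client {
    size_t querybuf_alloc;   /* bytes currently allocated for the query buffer */
    size_t querybuf_used;    /* bytes of the buffer actually in use */
    size_t querybuf_peak;    /* peak usage recorded since the last check */
    time_t lastinteraction;  /* last time the client talked to the server */
} client;

/* Return 1 if the query buffer should be shrunk. Both triggers, usage far
 * below the recorded peak or an idle client, apply only above the single
 * PROTO_MBULK_BIG_ARG limit, and the resize proceeds only when it would
 * reclaim at least a few kbytes. */
static int should_resize_querybuf(const client *c, time_t now) {
    time_t idletime = now - c->lastinteraction;

    if (c->querybuf_alloc <= PROTO_MBULK_BIG_ARG) return 0;
    if (c->querybuf_alloc / (c->querybuf_peak+1) <= 2 && idletime <= 2)
        return 0;
    return (c->querybuf_alloc - c->querybuf_used) > 1024*4;
}
```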
-
antirez authored
We unblocked the client too early: by the time XCLAIM was propagated in streamPropagateXCLAIM(), the group name object in client->bpop was no longer valid, so the function dereferenced a field already set to NULL.
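A tiny self-contained C model of the ordering hazard, with hypothetical names standing in for unblockClient() and streamPropagateXCLAIM(): anything that still reads the blocked-client state must run before that state is torn down.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical miniature of the bug: blocked-client state that one step
 * clears and a later step still reads. Names are illustrative only. */
typedef struct blocked_state {
    char *group_name;   /* stands in for the group object in client->bpop */
} blocked_state;

static void unblock_client(blocked_state *b) {
    free(b->group_name);
    b->group_name = NULL;          /* the state is gone after unblocking */
}

static void propagate_xclaim(const blocked_state *b) {
    /* Would crash if called after unblock_client(): group_name is NULL. */
    printf("XCLAIM ... GROUP %s ...\n", b->group_name);
}

int main(void) {
    blocked_state b = { .group_name = strdup("mygroup") };
    propagate_xclaim(&b);   /* fixed order: propagate first... */
    unblock_client(&b);     /* ...then tear down the blocked state */
    return 0;
}
```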
-
- 10 Jun, 2018 3 commits
-
-
zhaozhao.zz authored
-
zhaozhao.zz authored
-
michael-grunder authored
-
- 09 Jun, 2018 1 commit
-
-
shenlongxing authored
-
- 08 Jun, 2018 1 commit
-
-
antirez authored
-
- 07 Jun, 2018 5 commits
- 06 Jun, 2018 2 commits
-
-
shenlongxing authored
-
antirez authored
Close #4989.
-
- 05 Jun, 2018 2 commits
- 04 Jun, 2018 3 commits
-
-
antirez authored
Now that we have SETID, the internals of consumer groups should be able to handle the case of the same message being delivered multiple times just as a side effect of calling XREADGROUP. Normally this should never happen, but if the admin manually runs "XGROUP SETID mykey mygroup 0", messages will get re-delivered to clients waiting for the ">" special ID. The consumer group internals were not able to handle a message re-delivered in these circumstances when it was already assigned to another owner.
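A rough sketch of the idea in C, assuming a toy array-based PEL (the real implementation in t_stream.c uses rax trees and streamNACK structures): on re-delivery, the existing pending entry changes owner instead of being duplicated.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical, heavily simplified model of a consumer group PEL. */
typedef struct pel_entry {
    char id[32];            /* stream ID, e.g. "1528300000-0" */
    const char *owner;      /* consumer currently owning the entry */
    long long delivery_count;
} pel_entry;

static pel_entry pel[16];
static int pel_len = 0;

static pel_entry *pel_lookup(const char *id) {
    for (int i = 0; i < pel_len; i++)
        if (strcmp(pel[i].id, id) == 0) return &pel[i];
    return NULL;
}

/* Deliver an entry to a consumer waiting on the ">" special ID. After
 * "XGROUP SETID mykey mygroup 0" the same entry can be delivered again
 * while still pending under another owner: transfer ownership instead of
 * inserting a duplicate pending entry. */
static void deliver(const char *id, const char *consumer) {
    pel_entry *n = pel_lookup(id);
    if (n != NULL) {
        n->owner = consumer;          /* re-delivery: change owner */
        n->delivery_count++;
    } else {
        snprintf(pel[pel_len].id, sizeof(pel[pel_len].id), "%s", id);
        pel[pel_len].owner = consumer;
        pel[pel_len].delivery_count = 1;
        pel_len++;
    }
}

int main(void) {
    deliver("1528300000-0", "consumer-A");
    deliver("1528300000-0", "consumer-B");  /* re-delivered after SETID 0 */
    printf("%s owns it, delivered %lld times\n",
           pel[0].owner, pel[0].delivery_count);
    return 0;
}
```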
-
antirez authored
-
antirez authored
-
- 03 Jun, 2018 2 commits
-
-
Yossi Gottlieb authored
-
michael-grunder authored
-
- 01 Jun, 2018 1 commit
-
-
artix authored
-
- 31 May, 2018 3 commits
-
-
zhaozhao.zz authored
-
antirez authored
-
artix authored
-
- 30 May, 2018 1 commit
-
-
Remi Collet authored
-
- 29 May, 2018 2 commits
-
-
antirez authored
The AOF tail of a combined RDB+AOF file is based on the premise of applying the AOF commands to the exact state the server had when the RDB was persisted. By expiring keys while loading the RDB file we change that state, so applying the AOF tail afterwards may lead to an inconsistent dataset.

Test case:

* Time1: SET a 10
* Time2: EXPIREAT a $time5
* Time3: INCR a
* Time4: PERSIST a. Start BGREWRITEAOF with RDB preamble. The value of a is 11 without an expire time.
* Time5: Restart Redis from the RDB+AOF file: consistency violation.

Thanks to @soloestoy for providing the patch. Thanks to @trevor211 for the original issue report and the initial fix. Check issue #4950 for more info.
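The idea behind the fix, sketched in C with hypothetical names (the actual change lives in the RDB loading path): while the RDB being read is the preamble of an AOF file, expired keys are kept, so the AOF tail replays against the exact state that existed when BGREWRITEAOF started.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a loaded value and its TTL. */
typedef struct { long long expire_ms; /* -1 means no TTL */ } loaded_key;

static bool is_expired(const loaded_key *k, long long now_ms) {
    return k->expire_ms != -1 && k->expire_ms < now_ms;
}

/* Decide whether to drop a key at load time. When loading the RDB
 * preamble of a combined RDB+AOF file, never drop it: the AOF tail
 * (e.g. the INCR/PERSIST above) must apply to the state the RDB captured. */
static bool drop_on_load(const loaded_key *k, long long now_ms,
                         bool loading_aof_preamble) {
    if (loading_aof_preamble) return false;   /* keep the state intact */
    return is_expired(k, now_ms);
}

int main(void) {
    loaded_key a = { .expire_ms = 1000 };     /* already expired at t=2000 */
    printf("plain RDB load: drop=%d\n", drop_on_load(&a, 2000, false));
    printf("RDB+AOF preamble: drop=%d\n", drop_on_load(&a, 2000, true));
    return 0;
}
```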
-
WuYunlong authored
When we add a new slave and then do a failover, either manual or not, other local slaves will delete the expired keys properly.
-
- 27 May, 2018 1 commit
-
-
zhaozhao.zz authored
-
- 25 May, 2018 8 commits
-
-
antirez authored
-
zhaozhao.zz authored
-
antirez authored
-
antirez authored
-
antirez authored
-
zhaozhao.zz authored
-
antirez authored
-
Mota authored
-