- 13 May, 2019 1 commit
-
-
antirez authored
Now clients that are ready to be terminated asynchronously are processed more often in beforeSleep() instead of being processed in serverCron(). This means that the test will not be able to catch the exact moment the client is terminated. Also note that the 'omem' figure now changes in big steps because of the new client output buffer layout, so we have to change the test range to accommodate that. The test is still useful enough to keep, even if its precision is reduced by this commit. If we get more problems, it probably makes sense to just check that the limit is < 200k, which is more than enough anyway.
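Below is a minimal, self-contained C sketch of the flow described above, not the actual Redis source: clients flagged for asynchronous termination are drained on every event-loop iteration from a beforeSleep()-style hook instead of waiting for the serverCron() timer. All names (clients_to_close, before_sleep, and so on) are illustrative assumptions.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative type: a client flagged for asynchronous termination
     * sits on a simple list until the event loop reclaims it. */
    typedef struct client {
        int id;
        struct client *next;
    } client;

    static client *clients_to_close = NULL;

    /* Free every client scheduled for asynchronous termination. */
    static void free_clients_in_async_queue(void) {
        while (clients_to_close) {
            client *c = clients_to_close;
            clients_to_close = c->next;
            printf("freeing client %d\n", c->id);
            free(c);
        }
    }

    /* Called once per event-loop iteration, so doomed clients (and the
     * 'omem' they account for) disappear almost immediately, instead of
     * only when the periodic cron fires. */
    static void before_sleep(void) {
        free_clients_in_async_queue();
    }

    int main(void) {
        /* Simulate two clients that exceeded their output buffer limit. */
        for (int i = 0; i < 2; i++) {
            client *c = malloc(sizeof(*c));
            c->id = i;
            c->next = clients_to_close;
            clients_to_close = c;
        }
        before_sleep(); /* One event-loop iteration reclaims both. */
        return 0;
    }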
-
- 05 May, 2019 1 commit
-
-
Oran Agra authored
Solve a few replication-related test race conditions that fail on slow machines. Also fix a bug in the slave buffers test: since the test is executed twice, each time with a different command count, the threshold for the delta can't be a constant.
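A small C sketch of the threshold idea, purely illustrative (the real test is written in Tcl and the 10% figure is an assumption): when the same check runs with different command counts, the allowed delta has to scale with the expected value rather than being an absolute constant.

    #include <stdlib.h>

    /* Accept the measurement if it is within 10% of the expected value,
     * so the tolerance grows with the size of the run. */
    static int within_threshold(long expected, long observed) {
        long max_delta = expected / 10;
        return labs(observed - expected) <= max_delta;
    }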
-
- 08 Apr, 2019 2 commits
- 24 Mar, 2019 2 commits
-
-
Yossi Gottlieb authored
-
Yossi Gottlieb authored
-
- 15 Mar, 2019 2 commits
- 12 Mar, 2019 1 commit
-
-
Steve Webster authored
-
- 08 Mar, 2019 1 commit
-
-
Steve Webster authored
The XCLAIM docs state that XCLAIM increments the delivery counter for messages. This PR makes the code match the documentation, which seems like the desired behaviour, whilst still allowing RETRYCOUNT to be specified manually. My understanding of the way streamPropagateXCLAIM() works is that this change will safely propagate to replicas, since the retry count is pulled directly from the streamNACK struct. Fixes #5194
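A hedged C sketch of the behavior change, not the actual xclaimCommand() code; the struct and field names below are assumptions modeled loosely on Redis's stream internals. The delivery counter is incremented unless the caller pinned it explicitly with RETRYCOUNT.

    #include <stdint.h>

    /* Stand-in for the per-message pending entry (streamNACK-like). */
    typedef struct pending_entry {
        uint64_t delivery_count;
    } pending_entry;

    /* retrycount_arg is -1 when RETRYCOUNT was not given with the command. */
    static void xclaim_apply_retry_count(pending_entry *e, long long retrycount_arg) {
        if (retrycount_arg >= 0) {
            /* Explicit RETRYCOUNT: honor the caller's value. */
            e->delivery_count = (uint64_t)retrycount_arg;
        } else {
            /* No RETRYCOUNT: increment, matching the documented XCLAIM behavior. */
            e->delivery_count++;
        }
    }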
-
- 30 Jan, 2019 1 commit
-
-
antirez authored
-
- 28 Jan, 2019 6 commits
- 18 Jan, 2019 1 commit
-
-
antirez authored
-
- 17 Jan, 2019 1 commit
-
-
antirez authored
This way the behavior is very similar to the past one. This is useful in order to remind the user that they probably failed to configure a password correctly.
-
- 28 Nov, 2018 1 commit
-
-
Qu Chen authored
-
- 19 Nov, 2018 2 commits
- 12 Nov, 2018 1 commit
-
-
Oran Agra authored
-
- 17 Oct, 2018 1 commit
-
-
antirez authored
-
- 16 Oct, 2018 2 commits
-
-
zhaozhao.zz authored
-
antirez authored
See #5426.
-
- 13 Oct, 2018 1 commit
-
-
antirez authored
-
- 10 Oct, 2018 2 commits
- 09 Oct, 2018 2 commits
-
-
zhaozhao.zz authored
-
zhaozhao.zz authored
-
- 30 Sep, 2018 1 commit
-
- 11 Sep, 2018 1 commit
-
-
antirez authored
-
- 05 Sep, 2018 1 commit
-
-
antirez authored
-
- 21 Aug, 2018 1 commit
-
-
Oran Agra authored
A few tests had borderline thresholds that were adjusted. The slave buffers test had two issues preventing the slave buffer from growing: 1) the slave didn't necessarily go to sleep on time, or woke up too early; now SIGSTOP is used to make sure it goes to sleep exactly when we want. 2) the master disconnected the slave on timeout.
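For illustration only (the actual test is Tcl), here is a tiny C helper showing the SIGSTOP trick described above; the function name, pid argument, and timing are assumptions.

    #include <signal.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Stop the replica for a fixed window so the master's output buffer for
     * it can grow, then resume it before the master's timeout disconnects it. */
    static void pause_replica(pid_t replica_pid, unsigned int seconds) {
        kill(replica_pid, SIGSTOP); /* Replica stops reading from the master. */
        sleep(seconds);             /* Buffer accumulates on the master side. */
        kill(replica_pid, SIGCONT); /* Wake it up before the timeout fires. */
    }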
-
- 13 Aug, 2018 1 commit
-
-
zhaozhao.zz authored
-
- 02 Aug, 2018 1 commit
-
- 01 Aug, 2018 1 commit
-
-
zhaozhao.zz authored
-
- 24 Jul, 2018 1 commit
-
-
Oran Agra authored
It looks like on slow machines we're getting:
[err]: slave buffer are counted correctly in tests/unit/maxmemory.tcl
Expected condition '$slave_buf > 2*1024*1024' to be true (16914 > 2*1024*1024)
This is a result of the slave waking up too early and eating the slave buffer before the traffic and the test end.
-
- 18 Jul, 2018 1 commit
-
-
Oran Agra authored
On slower machines, the active defrag test tended to fail: although the fragmentation ratio was below the threshold, the defragger was still in the middle of a scan cycle. This commit changes:
- the defragger uses the current fragmentation state, rather than the cached one that is updated by server cron every 100ms (this actually fixes a bug of starting one excess scan cycle)
- the test lets the defragger use more CPU cycles, in the hope that defrag will be faster, but also gives it more time before we give up
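A hedged C sketch of the difference between a cached and a live fragmentation figure; the names and the stubbed allocator query are assumptions, not the actual defrag code.

    #include <stddef.h>

    struct mem_stats {
        size_t allocated; /* Bytes the application asked for. */
        size_t resident;  /* Bytes actually held by the process. */
    };

    /* Stub standing in for a live allocator query; a real server would ask
     * the allocator for fresh numbers here, not reuse a value cached by cron. */
    static void get_allocator_info(struct mem_stats *st) {
        st->allocated = 100 * 1024 * 1024;
        st->resident  = 112 * 1024 * 1024;
    }

    /* Compute the ratio on demand, so a defrag cycle can stop as soon as the
     * real fragmentation drops below the threshold, instead of trusting a
     * figure refreshed by server cron up to 100ms ago. */
    static double current_frag_ratio(void) {
        struct mem_stats st;
        get_allocator_info(&st);
        return (double)st.resident / (double)st.allocated;
    }

    static int should_keep_defragging(double threshold) {
        return current_frag_ratio() > threshold;
    }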
-