- 04 Nov, 2020 1 commit
Yossi Gottlieb authored
Without save disabled, slower systems began a background save that did not complete in time, resulting in SAVE failing with "ERR Background save already in progress".
- 22 Oct, 2020 1 commit
Yossi Gottlieb authored
Useful for running tests on systems which may be way slower than usual.
- 03 Sep, 2020 1 commit
Oran Agra authored
During long running scripts or while loading an RDB/AOF file, we may need to do some defragging. Since processEventsWhileBlocked is called periodically at unknown intervals, and many cron jobs either depend on run_with_period (including active defrag) or rely on being called at the server.hz rate (i.e. active defrag knows how much time to run by looking at server.hz), whileBlockedCron may have to run a loop that triggers the cron jobs in it (currently only active defrag) several times. Other changes:
- Adding a test for defrag during AOF loading.
- Changing the key-load-delay config to take negative values, for sleeps of a fraction of a microsecond.
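A minimal sketch of the catch-up idea, assuming a monotonic millisecond clock and treating active defrag as the only hz-rate job (illustrative names, not the actual Redis code):

    #include <stdint.h>
    #include <time.h>

    static int server_hz = 10;           /* cron frequency, runs per second */
    static uint64_t blocked_last_ms;     /* when the blocked cron last ran */

    static uint64_t mstime(void) {       /* current time in milliseconds */
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
    }

    static void activeDefragCycle(void) {
        /* would defrag for roughly 1000/server_hz milliseconds */
    }

    /* Called from processEventsWhileBlocked at unknown intervals: loop and
     * trigger the hz-rate jobs once per 1000/hz ms tick that has elapsed,
     * so they still run at their expected rate while we are blocked. */
    static void whileBlockedCron(void) {
        uint64_t now = mstime(), tick = 1000 / (uint64_t)server_hz;
        if (blocked_last_ms == 0) blocked_last_ms = now;
        while (blocked_last_ms + tick <= now) {
            activeDefragCycle();
            blocked_last_ms += tick;
        }
    }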
- 20 May, 2020 1 commit
Oran Agra authored
There's a rare case that leads to stagnation in the defragger, causing it to keep scanning the keyspace and do nothing (not moving any allocation). It happens when all the allocator slabs of a certain bin have the same % utilization, but the slab from which new allocations are made has a lower utilization. This commit fixes it by removing the current slab from the overall average utilization of the bin, eliminates any precision loss in the utilization calculation, moves the decision about the defrag to reside inside jemalloc, and adds a test that consistently reproduces this issue.
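A sketch of the fixed decision rule, with illustrative struct and function names (the real logic lives inside Redis' jemalloc fork): an allocation is worth moving only if its slab is less utilized than the rest of the bin, where "the rest" excludes the slab new allocations come from, and the comparison uses integer cross-multiplication to avoid precision loss:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        size_t bin_used, bin_total;   /* regions used/available in the bin */
        size_t cur_used, cur_total;   /* same, for the current alloc slab  */
        size_t slab_used, slab_total; /* same, for the candidate's slab    */
    } BinStats;

    static bool should_defrag(const BinStats *s) {
        /* Average utilization of the bin, excluding the current slab. */
        size_t rest_used  = s->bin_used  - s->cur_used;
        size_t rest_total = s->bin_total - s->cur_total;
        if (rest_total == 0) return false;
        /* slab_used/slab_total < rest_used/rest_total, without division,
         * so equal-utilization slabs no longer cause stagnation. */
        return s->slab_used * rest_total < rest_used * s->slab_total;
    }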
- 16 Apr, 2020 1 commit
Oran Agra authored
This test is time sensitive and sometimes fails to pass below the latency threshold, even on strong machines. It was the reason we were running just 2 parallel tests in the GitHub Actions CI; reverting that.
- 27 Feb, 2020 1 commit
Oran Agra authored
It seems that running two clients at a time is OK too; it reduces the action time from 20 minutes to 10. We'll use this for now, and if one day it isn't enough we'll have to run just the sensitive tests one by one, separately from the others. This commit also fixes an issue with the defrag test that appears to be very rare.
- 25 Feb, 2020 1 commit
Oran Agra authored
It seems that GitHub Actions runners are slow, so we use just one client to reduce false positives. Also adding verbose output, testing only on the latest Ubuntu, and building on an older one. With that done, the test threshold can be brought back down to something saner.
- 23 Feb, 2020 1 commit
Oran Agra authored
I saw that the new defrag test for lists was failing in CI recently, so I adjusted its threshold from 12 to 60. Besides that, I added / improved the latency checks for the other two defrag tests (adding a sensitive latency check and digest / save checks), and fixed bad usage of DEBUG POPULATE (it can't override existing keys) to match the original intention, which creates higher fragmentation.
- 18 Feb, 2020 1 commit
Oran Agra authored
When active defrag kicks in and finds a big list, it will create a bookmark to a node so that it is able to resume iteration from that node later. The quicklist manages that bookmark, and updates it in case that node is deleted. This increases memory usage, by 16 bytes, only on lists of over 1000 quicklist nodes (1000 ziplists, not 1000 items; see active-defrag-max-scan-fields). In 32-bit builds, this change reduces the maximum effective config of list-compress-depth and list-max-ziplist-size (from 32767 to 8191).
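A simplified sketch of the bookmark mechanism (illustrative names; the real quicklist packs the bookmark count into existing bit fields, which is what shrinks the 32-bit limits mentioned above). The 16 bytes per bookmark correspond to the two pointers below on a 64-bit build:

    #include <stdlib.h>

    typedef struct qlNode { struct qlNode *next; /* ... payload ... */ } qlNode;

    typedef struct {
        char   *name;    /* bookmark identifier, e.g. the defrag cursor */
        qlNode *node;    /* node to resume iteration from */
    } qlBookmark;

    typedef struct {
        qlNode     *head;
        qlBookmark *bm;       /* lazily allocated bookmark array */
        unsigned    bm_count;
    } quicklist;

    /* On node deletion, move any bookmark pointing at it to the next
     * node, or drop the bookmark when there is no successor. */
    static void bookmarks_on_delete(quicklist *ql, qlNode *deleted) {
        unsigned i = 0;
        while (i < ql->bm_count) {
            if (ql->bm[i].node != deleted) { i++; continue; }
            if (deleted->next) {
                ql->bm[i].node = deleted->next;
                i++;
            } else {                       /* no successor: drop it */
                free(ql->bm[i].name);
                ql->bm[i] = ql->bm[--ql->bm_count];
                /* re-check slot i, which now holds the swapped entry */
            }
        }
    }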
- 12 Nov, 2018 1 commit
Oran Agra authored
- 21 Aug, 2018 1 commit
Oran Agra authored
A few tests had borderline thresholds that were adjusted. The slave buffers test had two issues preventing the slave buffer from growing: 1) the slave didn't necessarily go to sleep on time, or woke up too early; now using SIGSTOP to make sure it goes to sleep exactly when we want. 2) the master disconnected the slave on timeout.
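A minimal sketch of the SIGSTOP trick (illustrative; the real test drives this from the Tcl suite): stopping the process guarantees it stops draining its socket at a precise moment, so the peer's output buffer for it can grow, and SIGCONT resumes it later:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t child = fork();
        if (child == 0) {         /* child: stands in for the slave */
            for (;;) pause();     /* would normally be reading its socket */
        }
        kill(child, SIGSTOP);     /* freeze exactly now; reads stop */
        printf("child %d stopped; data sent to it now queues up\n", (int)child);
        sleep(1);                 /* ...the master's buffer grows here... */
        kill(child, SIGCONT);     /* wake it up again */
        kill(child, SIGKILL);     /* end the demo */
        waitpid(child, NULL, 0);
        return 0;
    }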
- 18 Jul, 2018 1 commit
Oran Agra authored
On slower machines, the active defrag test tended to fail: although the fragmentation ratio was below the threshold, the defragger was still in the middle of a scan cycle. This commit changes:
- The defragger uses the current fragmentation state, rather than the cached one that is updated by the server cron every 100ms; this actually fixes a bug of starting one excess scan cycle.
- The test lets the defragger use more CPU cycles, in the hope that the defrag will be faster, but also gives it more time before we give up.
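A small sketch of the first change, with stubbed stats (illustrative names): the start/stop decision reads fresh allocator numbers at decision time instead of the value the cron cached up to 100ms earlier:

    #include <stddef.h>

    typedef struct { size_t allocated, active; } AllocStats;

    static void getAllocatorStats(AllocStats *s) {
        /* stub: the real code reads jemalloc's allocated/active counters */
        s->allocated = 100 * 1024 * 1024;
        s->active    = 120 * 1024 * 1024;
    }

    /* Fresh ratio, so a scan cycle never starts on stale data. */
    static int defrag_needed(float threshold) {
        AllocStats s;
        getAllocatorStats(&s);
        if (s.allocated == 0) return 0;
        return (float)s.active / (float)s.allocated >= threshold;
    }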
- 27 Jun, 2018 2 commits
- 24 May, 2018 1 commit
antirez authored
They failed when active defrag could not be activated because the Jemalloc version did not include the additional APIs.
- 17 May, 2018 1 commit
Oran Agra authored
Problems fixed:
* failing to read fragmentation information from jemalloc
* overflow in the jemalloc fragmentation hint to the defragger
* the test suite not triggering eviction after population
- 12 Mar, 2018 1 commit
Oran Agra authored
Other fixes / improvements:
- Lua script memory isn't taken from zmalloc (it's taken from libc malloc), so it can cause a high fragmentation ratio to be displayed (which is false).
- There was a problem with the "fragmentation" info being calculated from RSS and used_memory sampled at different times (now sampling them together).

Other details:
- Adding a few more allocator info fields to the INFO and MEMORY commands.
- Improving the defrag test to measure the defrag latency of big keys.
- Increasing the accuracy of the defrag test (by looking at real frag info), so we can use an even lower threshold and still avoid false positives.
- Keeping the old (total) "fragmentation" field unchanged, but adding new ones for specific things.
- Adding these to the MEMORY DOCTOR command.
- Deducting Lua memory from the RSS in case of a non-jemalloc allocator (one for which we don't have allocator active/used info).
- Reducing the sampling rate of the RSS and allocator info.
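A sketch of how the new specific fields break the old total "fragmentation" down, assuming the counters are sampled together (field names follow the ones this commit adds to INFO; the numbers are hardcoded for illustration):

    #include <stddef.h>
    #include <stdio.h>

    typedef struct {
        size_t allocated, active, resident;  /* allocator counters */
        size_t process_rss, used_memory;     /* sampled at the same time */
    } MemSample;

    static void print_frag_fields(const MemSample *m) {
        /* internal allocator fragmentation */
        printf("allocator_frag_ratio:%.2f\n", (double)m->active / m->allocated);
        /* allocator pages held by the OS vs. actively used */
        printf("allocator_rss_ratio:%.2f\n", (double)m->resident / m->active);
        /* process overhead on top of the allocator's resident pages */
        printf("rss_overhead_ratio:%.2f\n", (double)m->process_rss / m->resident);
        /* the old total ratio, kept unchanged */
        printf("mem_fragmentation_ratio:%.2f\n",
               (double)m->process_rss / m->used_memory);
    }

    int main(void) {
        MemSample m = {100, 110, 120, 130, 95};  /* in MB, for illustration */
        print_frag_fields(&m);
        return 0;
    }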
- 22 Apr, 2017 3 commits
antirez authored
Related to #3786.
antirez authored
Apparently 1.4 is too low compared to what you get in certain setups (including mine). I raised it to 1.55, which is hopefully still enough to test that the fragmentation went down from 1.7, but without running into issues. However, the test setup may still be fragile, so at times this may lead to false positives again; it's hard to test for these things in a deterministic way. Related to #3786.
oranagra authored
- 10 Feb, 2015 1 commit
antirez authored
This test was extremely slow on Linux, since in Tcl we can't easily enable tcp-nodelay, so the busy loop used to take *a lot* of time with bigger writes. Fixed using pipelining.
- 25 Nov, 2013 1 commit
antirez authored
Fixes issue #1298.
- 29 Aug, 2013 1 commit
antirez authored