1. 12 Jul, 2024 1 commit
    • Avoid starting defrag after config resetstat for defrag test (#13399) · d39548c8
      debing.sun authored
      
      
      If `config resetstat` is executed and a defrag run starts after it,
      `total_active_defrag_time` will not be 0.
      When we start the defrag again, we will skip the following steps:
      1. waiting for the defrag to start (since `total_active_defrag_time` is
      already not 0)
      2. waiting for the test to complete (since `active_defrag_running` is
      still 0)
      which results in the test failing.
      
      ---------
      Co-authored-by: oranagra <oran@redislabs.com>
  2. 16 May, 2024 1 commit
  3. 14 May, 2024 1 commit
    • Add defragment support for HFE (#13229) · 80be2cc2
      debing.sun authored
      
      
      ## Background
      1. All hash objects that contain HFE are referenced by db->hexpires.
      2. All fields in a dict hash object with HFE are referenced by an
      ebucket.
      
      So when we defrag the hash object or a field in a dict with HFE, we
      also need to update the references to them.
      
      ## Interface
      1. Add a new interface `ebDefragItem`, which accepts a defrag callback
      to defrag items in ebuckets and simultaneously updates their
      references in the ebucket (see the sketch below).
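
      A minimal, self-contained sketch of the callback idea; the names and
      types below are illustrative, not the real ebuckets API. The container
      walks its item references, lets the callback reallocate each item, and
      rewrites its own reference whenever the item has moved:
      ```
      #include <stdlib.h>
      #include <string.h>

      /* Illustrative stand-in for an ebucket: it holds references to items. */
      typedef struct {
          void **items;
          size_t count;
      } bucket;

      /* Defrag callback: returns the new pointer if the item was moved, NULL otherwise. */
      typedef void *(*defrag_item_cb)(void *item);

      /* Core of the idea behind ebDefragItem: defrag each item through the
       * callback and keep the bucket's reference in sync with the new address. */
      static void bucketDefragItems(bucket *b, defrag_item_cb cb) {
          for (size_t i = 0; i < b->count; i++) {
              void *moved = cb(b->items[i]);
              if (moved != NULL)
                  b->items[i] = moved;
          }
      }

      /* Toy callback: copy the allocation to a fresh block and free the old one,
       * roughly what an active-defrag reallocation helper does. */
      static void *moveStringAllocation(void *item) {
          size_t len = strlen((char *)item) + 1;
          void *fresh = malloc(len);
          if (fresh == NULL) return NULL;
          memcpy(fresh, item, len);
          free(item);
          return fresh;
      }
      ```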
      
      ## Mainly changes
      1. The key type of a hash object's dict is no longer sds, so add a new
      `activeDefragHfieldDict()` to defrag the dict instead of
      `activeDefragSdsDict()`.
      2. When we defrag the dict of a hash object by using `dictScanDefrag()`,
      we always set the defrag callback `defragKey` of `dictDefragFunctions`
      to NULL, because we can't reallocate a field without updating its
      reference in ebuckets.
      Instead, we defrag each field of the dict and update its reference in
      the `dictScanFunction()` callback passed to `dictScanDefrag()`.
      3. When we defrag a hash robj with HFE, we use `ebDefragItem` to
      defrag the robj and update its reference in db->hexpires.
      
      ## TODO:
      Defrag the ebuckets structure incrementally; this will be handled in a
      future PR.
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Moti Cohen <moti.cohen@redis.com>
  4. 16 Apr, 2024 1 commit
    • Allocate Lua VM code with jemalloc instead of libc, and count its used memory (#13133) · 804110a4
      Binbin authored
      
      
      ## Background
      1. Currently Lua memory control does not pass through Redis's zmalloc.c.
      Redis maxmemory cannot limit memory problems caused by users abusing Lua,
      since the Lua VM memory is not part of used_memory.
      
      2. Since jemalloc is much better (fragmentation and speed), and we know
      and trust it, we are going to use jemalloc instead of libc to allocate
      the Lua VM code and count its used memory.
      
      ## Process:
      In this PR, we will use jemalloc in Lua.
      1. Create an arena for all Lua VMs (script and function), which is
      shared, in order to avoid blocking the defragger.
      2. Create a bound tcache for the Lua VM: the Lua VM and the main
      thread are by default in the same tcache, and without an isolated
      tcache Lua may request memory from the tcache that has just been freed
      by the main thread, and vice versa.
      On the other hand, since the Lua VM might be released in a bio thread
      and the tcache is not thread-safe, we need to recreate the tcache every
      time we recreate the Lua VM (see the sketch after this list).
      3. Remove Lua memory statistics from memory fragmentation statistics to
      avoid the effects of Lua memory fragmentation.
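
      A rough sketch of the jemalloc plumbing described above, using only
      jemalloc's public non-standard API (`mallctl`, `mallocx`, `rallocx`,
      `dallocx`); the variable names and the Lua allocator hook are
      illustrative, not the code that was merged, and depending on the build
      the calls may carry a `je_` prefix:
      ```
      #include <stddef.h>
      #include <jemalloc/jemalloc.h>

      /* Illustrative globals: one dedicated arena shared by all Lua VMs, and an
       * explicit tcache that is re-created whenever the Lua VM is re-created. */
      static unsigned lua_arena;
      static unsigned lua_tcache;

      static int createLuaArenaAndTcache(void) {
          size_t sz = sizeof(unsigned);
          /* "arenas.create" returns the index of a freshly created arena. */
          if (mallctl("arenas.create", &lua_arena, &sz, NULL, 0)) return -1;
          /* "tcache.create" returns an explicit tcache id, isolated from the
           * main thread's implicit tcache. */
          if (mallctl("tcache.create", &lua_tcache, &sz, NULL, 0)) return -1;
          return 0;
      }

      /* Allocator hook in the lua_Alloc signature: route every Lua request to
       * the dedicated arena and the explicit tcache. */
      static void *luaAllocHook(void *ud, void *ptr, size_t osize, size_t nsize) {
          int flags = MALLOCX_ARENA(lua_arena) | MALLOCX_TCACHE(lua_tcache);
          (void)ud; (void)osize;
          if (nsize == 0) {
              if (ptr) dallocx(ptr, flags);
              return NULL;
          }
          return ptr ? rallocx(ptr, nsize, flags) : mallocx(nsize, flags);
      }
      ```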
      
      ## Other
      Add the following new fields to `INFO DEBUG` (we may promote them to
      INFO MEMORY some day):
      1. allocator_allocated_lua: total number of bytes allocated in the lua arena
      2. allocator_active_lua: total number of bytes in active pages allocated
      in the lua arena
      3. allocator_resident_lua: maximum number of bytes in physically
      resident data pages mapped in the lua arena
      4. allocator_frag_bytes_lua: fragment bytes in the lua arena
      
      This is oranagra's idea, and I got some help from sundb.
      
      This solves the third point in #13102.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  5. 04 Mar, 2024 1 commit
    • Implement defragmentation for pubsub kvstore (#13058) · ad127303
      debing.sun authored
      
      
      After #13013
      
      ### This PR defragments the pubsub kvstore in the following ways:
      
      1. Until now, server.pubsub(shard)_channels only shared the channel name
      obj with the first subscribed client; now change it so that all clients
      and the pubsub kvstore share the channel name robj.
      This saves a lot of memory when there are many subscribers to the same
      channel.
      It also means that we only need to defrag the channel name robj in the
      pubsub kvstore, and then update all client references for that channel,
      avoiding the need to iterate through all the clients to do the same
      thing (see the sketch after this list).
          
      2. Refactor the code to defragment pubsub(shard) in the same way as keys
      and expires are defragmented, with the exception that we only defragment
      pubsub (without shard) when the slot is zero.
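
      A minimal, self-contained sketch of point 1 (the types and names here
      are illustrative, not the actual kvstore/client code): once the kvstore
      entry and every subscriber hold the same channel object, the defragger
      reallocates it once and then patches each stored reference to the old
      address.
      ```
      #include <stddef.h>

      typedef struct { char *name; } channel;           /* stand-in for the channel robj */
      typedef channel *(*move_alloc_cb)(channel *old);   /* returns NULL if not worth moving */

      static void defragSharedChannel(channel **kvstore_ref,
                                      channel **client_refs, size_t nclients,
                                      move_alloc_cb move) {
          channel *old = *kvstore_ref;
          channel *moved = move(old);
          if (moved == NULL) return;      /* allocation stayed in place */
          *kvstore_ref = moved;           /* update the pubsub kvstore reference */
          for (size_t i = 0; i < nclients; i++) {
              if (client_refs[i] == old)  /* every subscriber shares the same object */
                  client_refs[i] = moved;
          }
      }
      ```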
      
      
      ### Other
      Fix an oversight in #11695: if defragmentation doesn't reach the end
      time, we should wait for the current db's keys and expires, pubsub and
      pubsubshard to finish before leaving, whereas previously it was possible
      to exit early once the keys were defragmented.
      
      ---------
      Co-authored-by: oranagra <oran@redislabs.com>
  6. 06 Feb, 2024 1 commit
    • Re-compute active_defrag_running after adjusting defrag configurations (#13020) · 13bd3643
      Binbin authored
      Currently, once active defrag starts, we cannot adjust
      active_defrag_running downwards. This is because active_defrag_running
      is dynamically computed based on the fragmentation, and we think we
      should not lower the effort when the fragmentation drops.
      
      However, we need to note that active_defrag_running is also dynamically
      computed based on configurations. In this case, we are not respecting
      cycle-min or cycle-max. Some people may realize halfway through that
      defrag consumes a lot and want to adjust it.
      
      Previously we could only turn off activedefrag and then turn it on again
      to adjust active_defrag_running downwards. So in this PR, when an active
      defrag configuration change is made, we re-compute it (a sketch of the
      clamping follows the list below).
      
      These configuration items are:
      - active-defrag-cycle-min
      - active-defrag-cycle-max
      - active-defrag-threshold-upper
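
      A simplified, illustrative sketch of the re-computation (not the actual
      Redis code): the effort is derived from the current fragmentation and
      clamped to the configured cycle-min / cycle-max, so changing any of the
      configs above immediately lowers or raises active_defrag_running.
      ```
      /* frag_pct: current fragmentation percentage (e.g. 35 for 35%).
       * threshold_lower/upper, cycle_min/max: active-defrag config values.
       * Returns the CPU effort that becomes active_defrag_running. */
      static int recomputeDefragEffort(double frag_pct,
                                       double threshold_lower, double threshold_upper,
                                       int cycle_min, int cycle_max) {
          double span = threshold_upper - threshold_lower;
          double pos = span > 0 ? (frag_pct - threshold_lower) / span : 1.0;
          if (pos < 0) pos = 0;
          if (pos > 1) pos = 1;
          int effort = cycle_min + (int)(pos * (cycle_max - cycle_min));
          /* Re-running this on every config change keeps the running effort
           * inside the freshly configured bounds. */
          if (effort < cycle_min) effort = cycle_min;
          if (effort > cycle_max) effort = cycle_max;
          return effort;
      }
      ```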
  7. 02 Nov, 2023 1 commit
  8. 22 Oct, 2023 1 commit
    • Fix defrag test (#12674) · 26eb4ce3
      Harkrishn Patro authored
      Fix issues that started after #11695, when the defrag tests began being
      executed in cluster mode too. For some reason the defragmentation
      finishes too quickly, before the test is able to detect that it's
      running, so now instead of waiting to see that it's active, we wait to
      see that it did some work:
      ```
      [err]: Active defrag big list: cluster in tests/unit/memefficiency.tcl
      defrag not started.
      [err]: Active defrag big keys: cluster in tests/unit/memefficiency.tcl
      defrag didn't stop.
      ```
  9. 19 Oct, 2023 1 commit
  10. 15 Oct, 2023 1 commit
    • Replace cluster metadata with slot specific dictionaries (#11695) · 0270abda
      Vitaly authored
      This is an implementation of https://github.com/redis/redis/issues/10589
      that eliminates 16 bytes per entry in cluster mode, which are currently
      used to create a linked list between entries in the same slot. The main
      idea is splitting the main dictionary into 16k smaller dictionaries (one
      per slot), so we can perform all slot-specific operations, such as
      iteration, without any additional info in the `dictEntry`. For Redis
      cluster, the expectation is that there will be a larger number of keys,
      so the fixed overhead of 16k dictionaries will be negligible. The expire
      dictionary is also split up so that each slot is logically decoupled,
      so that in subsequent revisions we will be able to atomically flush a
      slot of data.
      
      ## Important changes
       * Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
       * getRandomKey - now needs to not only select a random key from a random bucket, but also to select a random dictionary. Fairness is a major concern here, as keys can be unevenly distributed across the slots. To address this we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
       * Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index tree that is used for random key selection; it allows us to find the slot that holds a given key index. For example, if there are 10 keys in slot 0, we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
       * scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between the client and the server (see the sketch after this list). This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as a cursor value, although the plan is not to document this as defined behavior. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
       * Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
       * Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
       * DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way the DBSIZE operation stays O(1), and the number of expires can be computed in O(1) as well.
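
      An illustrative sketch of the cursor packing mentioned in the scan API
      bullet above (the exact bit layout inside Redis may differ): the 14-bit
      slot id (16384 slots) goes into the low bits of the 64-bit cursor and
      the per-dictionary scan cursor into the high bits.
      ```
      #include <stdint.h>

      #define SLOT_BITS 14                        /* 2^14 = 16384 cluster slots */
      #define SLOT_MASK ((1ULL << SLOT_BITS) - 1)

      /* Combine the per-dictionary cursor with the slot id before returning
       * the cursor to the client. */
      static uint64_t scanCursorPack(uint64_t dict_cursor, unsigned slot) {
          return (dict_cursor << SLOT_BITS) | ((uint64_t)slot & SLOT_MASK);
      }

      /* Split a cursor received from the client back into slot id and
       * per-dictionary cursor; passing a bare slot id resumes at that slot. */
      static void scanCursorUnpack(uint64_t cursor, unsigned *slot, uint64_t *dict_cursor) {
          *slot = (unsigned)(cursor & SLOT_MASK);
          *dict_cursor = cursor >> SLOT_BITS;
      }
      ```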
      
      ## Performance
      This change improves SET performance in cluster mode by ~5%; most of the gains come from us not having to maintain linked lists for keys in a slot, and non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. 
      
      RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
      
      ## Interface changes
      * Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
      * The Scan API now requires 64 bits to store the cursor, even on 32 bit systems, as the slot information is stored in it.
      * New RDB version to support the new op code for SLOT information. 
      
      ---------
      Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  11. 08 Sep, 2023 1 commit
  12. 04 Mar, 2023 1 commit
    • Increase the threshold of the AOF loading defrag test (#11871) · bfe50a30
      Binbin authored
      This test is very sensitive and fragile. It often fails in the Daily CI;
      in most cases it fails in test-ubuntu-32bit (the AOF loading one),
      with values in the range (31, 40):
      ```
      [err]: Active defrag in tests/unit/memefficiency.tcl
      Expected 38 <= 30 (context: type eval line 113 cmd {assert {$max_latency <= 30}} proc ::test)
      ```
      
      The AOF loading part isn't tightly tied to the cron hz; it calls
      processEventsWhileBlocked once every 1024 command calls.
      ```
              /* Serve the clients from time to time */
              if (!(loops++ % 1024)) {
                  off_t progress_delta = ftello(fp) - last_progress_report_size;
                  loadingIncrProgress(progress_delta);
                  last_progress_report_size += progress_delta;
                  processEventsWhileBlocked();
                  processModuleLoadingProgressEvent(1);
              }
      ```
      
      In this case, we can either decrease the 1024 or increase the
      threshold of just the AOF part of that test. Considering that the test
      machines are sometimes slow, that all sorts of quirks can happen (which
      do not indicate a bug), and that the threshold is already set to 30, we
      suppose we can set it a little higher, to 40. We can do this instead of
      adding another testing config (we can add one when we really need it).
      
      Fixes #11868
  13. 09 Dec, 2022 1 commit
    • Solve issues with active defrag test failing on fast machines (#11598) · 528bb11d
      Oran Agra authored
      We do defrag during AOF loading, but aim to detect fragmentation only
      once a second, so this test aims to slow down the AOF loading to mimic
      loading of a large file.
      On fast machines the sleep plus the actual work we did was insufficient,
      so we make it sleep longer so the test won't fail.
      
      The error we used to get is this one:
      Expected 0 > 100000 (context: type eval line 106 cmd {assert {$hits > 100000}} proc ::test)
  14. 09 Mar, 2022 1 commit
  15. 21 Feb, 2022 1 commit
    • Fix script active defrag test (#10318) · b59bb9b4
      yoav-steinberg authored
      This includes two fixes:
      * We forgot to count non-key reallocs in defragmentation stats.
      * Fix the script defrag tests so as to make dict entries less significant in fragmentation by making the scripts larger.
      This assures active defrag will complete and reach the desired results.
      Some inherent fragmentation might exist in dict entries, which we need to ignore;
      this led to occasional CI failures.
  16. 11 Feb, 2022 1 commit
    • Fix Eval scripts defrag (broken in 7.0 RC1) (#10271) · 2eb9b196
      yoav-steinberg authored
      Remove the scripts defragger since it has been broken since #10126 (released in 7.0 RC1):
      it would crash the server if the defragger started on a server that contains eval scripts.
      
      In #10126 the global `lua_script` dict became a dict to a custom `luaScript` struct with an internal `robj`
      in it instead of a generic `sds` -> `robj` dict. This means we need custom code to defrag it, and since scripts
      should never really cause much fragmentation it makes more sense to simply remove the defrag code for scripts.
  17. 19 Dec, 2021 1 commit
    • Add external test that runs without debug command (#9964) · 6add1b72
      Oran Agra authored
      - add needs:debug flag for some tests
      - disable "save" in external tests (speedup?)
      - use debug_digest proc instead of debug command directly so it can be skipped
      - use OBJECT ENCODING instead of DEBUG OBJECT to get encoding
      - add a proc for OBJECT REFCOUNT so it can be skipped
      - move a bunch of the latency_monitor tests to happen later so that the latency monitor has some values in it
      - add missing close_replication_stream calls
      - make sure to close the temp client if DEBUG LOG fails
  18. 21 Nov, 2021 1 commit
    • Improve active defrag in jemalloc 5.2 (#9778) · d4e7ffb3
      Oran Agra authored
      Background:
      Following the upgrade to jemalloc 5.2, there was a test that used to be flaky and
      started failing consistently (on 32bit), so we disabled it (see #9645).
      
      This is a test that I introduced in #7289 when I attempted to solve a rare stagnation
      problem. It later turned out I failed to solve it, and what's more, I added a test that
      caused it to be not so rare; as mentioned, in jemalloc 5.2 it became consistent on 32bit.
      
      Stagnation can happen when all the slabs of the bin are equally utilized, so the decision
      to move an allocation from a relatively empty slab to a relatively full one will never
      trigger, and in that test all the slabs are at 50% utilization, so the defragger could just
      keep scanning the keyspace and not move anything.
      
      What this PR changes:
      * First, finally in jemalloc 5.2 we have the count of non-full slabs, so when we compare
        the utilization of the current slab, we can compare it to the average utilization of the non-full
        slabs in our bin, instead of the total average of our bin. This takes the full slabs out of the game,
        since they're not candidates for migration (neither source nor target).
      * Secondly, we add some 12% (100/8) to the decision to defrag an allocation; this is the part
        that aims to avoid stagnation, and it's especially important since the above-mentioned change
        can get us closer to stagnation (see the sketch after this list).
      * Thirdly, since jemalloc 5.2 adds sharded bins, we take into account all shards (something
        that's missing from the original PR that merged it); this isn't expected to make any difference
        since there should anyway be just one shard.
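
      An illustrative restatement of the resulting decision rule (the real
      check lives inside jemalloc's defrag hint, and the exact arithmetic may
      differ): an allocation is a defrag candidate when its slab's utilization
      is below the average utilization of the bin's non-full slabs, with
      roughly a 1/8 bias so that equally utilized slabs don't stall the
      process.
      ```
      /* cur_used/cur_regs:         used and total regions in the allocation's slab.
       * nonfull_used/nonfull_regs: totals across the bin's non-full slabs.
       * Returns non-zero when the allocation is worth moving. */
      static int shouldDefragAllocation(long long cur_used, long long cur_regs,
                                        long long nonfull_used, long long nonfull_regs) {
          if (cur_regs == 0 || nonfull_regs == 0) return 0;
          /* Compare utilizations without floating point:
           *   cur_used/cur_regs < (nonfull_used/nonfull_regs) * (1 + 1/8)   */
          return cur_used * nonfull_regs * 8 < nonfull_used * cur_regs * 9;
      }
      ```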
      
      How this was benchmarked:
      I ran the memefficiency test unit with `--verbose` and compared the defragger hits
      and misses that the tests reported.
      At first, when I took into consideration only the non-full slabs, it got a lot worse (I got into
      stagnation, or just got a lot of misses and a lot of hits), but when I added the 10% I got back
      to results that were slightly better than the ones of the jemalloc 5.1 branch, i.e. full defragmentation
      was achieved with fewer hits (relocations) and fewer misses (keyspace scans).
  19. 02 Nov, 2021 1 commit
  20. 18 Oct, 2021 1 commit
  21. 30 Aug, 2021 1 commit
  22. 10 Jun, 2021 1 commit
    • Fix some typos, add a spell check CI and other minor fixes (#8890) · 0bfccc55
      Binbin authored
      This PR adds a spell checker CI action that will fail future PRs if they introduce typos and spelling mistakes.
      This spell checker is based on a blacklist of common spelling mistakes, so it will not catch everything,
      but at least it is also unlikely to cause false positives.
      
      Besides that, the PR also fixes many spelling mistakes and typos; not all of them are a result of the spell checker we use.
      
      Here's a summary of other changes:
      1. Scanned the entire source code and fixed all sorts of typos and spelling mistakes (including missing or extra spaces).
      2. Outdated function / variable / argument names in comments.
      3. Fix the outdated keyspace masks error log when we check `config.notify-keyspace-events` in loadServerConfigFromString.
      4. Trim the white space at the end of lines in `module.c`. Check: https://github.com/redis/redis/pull/7751
      5. Some outdated https link URLs.
      6. Fix some outdated comments, such as:
          - In README: about the RDB, we used to say it creates a `thread`; change to `process`
          - dbRandomKey function comment (about dictGetRandomKey, change to dictGetFairRandomKey)
          - notifyKeyspaceEvent function comment (add type arg)
          - Some other minor fixes in comments (most of them incorrectly quoted variable names)
      7. Modified the error log so that users can easily distinguish between TCP and TLS in `changeBindAddr`
  23. 09 Jun, 2021 1 commit
    • Improve test suite to handle external servers better. (#9033) · 8a86bca5
      Yossi Gottlieb authored
      This commit revives and improves the ability to run the test suite against
      external servers, instead of launching and managing `redis-server` processes as
      part of the test fixture.
      
      This capability existed in the past, using the `--host` and `--port` options.
      However, it was quite limited and mostly useful when running specific tests.
      Attempting to run larger chunks of the test suite exposed many issues:
      
      * Many tests depend on being able to start and control `redis-server` themselves,
      and there's no clear distinction between external server compatible and other
      tests.
      * Cluster mode is not supported (resulting in `CROSSSLOT` errors).
      
      This PR cleans up many things and makes it possible to run the entire test suite
      against an external server. It also provides more fine grained controls to
      handle cases where the external server supports a subset of the Redis commands,
      limited number of databases, cluster mode, etc.
      
      The tests directory now contains a `README.md` file that describes how this
      works.
      
      This commit also includes additional cleanups and fixes:
      
      * Tests can now be tagged.
      * Tag-based selection is now unified across `start_server`, `tags` and `test`.
      * More information is provided about skipped or ignored tests.
      * Repeated patterns in tests have been extracted to common procedures, both at a
        global level and on a per-test file basis.
      * Cleaned up some cases where test setup was based on a previous test executing
        (a major anti-pattern that repeats itself in many places).
      * Cleaned up some cases where test teardown was not part of a test (in the
        future we should have dedicated teardown code that executes even when tests
        fail).
      * Fixed some tests that were flaky running on external servers.
  24. 08 Jan, 2021 1 commit
    • Skip defrag tests on systems with bigger page sizes (#8294) · 5843a45d
      Oran Agra authored
      The defragger works well on these systems, but the tests and their
      thresholds are not adjusted for these big pages, so the defragger isn't
      able to bring the fragmentation down to the levels the test expects, and
      it fails on "defrag didn't stop".
      
      8k was chosen somewhat arbitrarily as the page-size threshold for skipping.
      
      Fixes #8265 (which had 65k pages)
  25. 14 Dec, 2020 1 commit
    • Tests: fix new defrag test to be skipped when not supported (#8185) · 7d9b09ad
      Oran Agra authored
      Additionally, the older defrag tests use an obsolete way to check
      whether the defragger is supported (the error no longer contains "DISABLED").
      This doesn't usually make a difference, since these tests are completely
      skipped if the allocator is not jemalloc, but it would fail if the
      allocator is a jemalloc build that doesn't support defrag.
  26. 04 Nov, 2020 1 commit
    • Fix test failure on slower systems. · 2faa0f19
      Yossi Gottlieb authored
      With save not disabled, slower systems began a background save that did
      not complete in time, resulting in SAVE failing with "ERR Background
      save already in progress".
  27. 22 Oct, 2020 1 commit
  28. 03 Sep, 2020 1 commit
    • Run active defrag while blocked / loading (#7726) · 9ef8d2f6
      Oran Agra authored
      During long-running scripts or loading of RDB/AOF, we may need to do some
      defragging. Since processEventsWhileBlocked is called periodically at
      unknown intervals, and many cron jobs either depend on run_with_period
      (including active defrag), or rely on being called at the server.hz rate
      (i.e. active defrag knows how much time to run by looking at server.hz),
      the whileBlockedCron may have to run a loop triggering the cron jobs in it
      (currently only active defrag) several times (see the sketch below).
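
      A self-contained sketch of that catch-up loop (not the real
      whileBlockedCron; the bookkeeping variable and job callback are
      hypothetical): while the server is blocked, the time-based job runs as
      many times as it would have run at the configured hz rate since the last
      opportunity.
      ```
      typedef void (*cron_job)(void);

      static long long blocked_cron_last_ms = 0;   /* hypothetical bookkeeping */

      static void whileBlockedCronSketch(long long now_ms, int hz, cron_job job) {
          long long period_ms = 1000 / hz;
          if (blocked_cron_last_ms == 0) blocked_cron_last_ms = now_ms;
          /* Catch up on every cron tick missed while blocked; for active
           * defrag this may mean running several defrag steps back to back. */
          while (now_ms - blocked_cron_last_ms >= period_ms) {
              blocked_cron_last_ms += period_ms;
              job();
          }
      }
      ```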
      
      Other changes:
      - Adding a test for defrag during aof loading.
      - Changing key-load-delay config to take negative values for fractions
        of a microsecond sleep
  29. 20 May, 2020 1 commit
    • fix a rare active defrag edge case bug leading to stagnation · 88d71f47
      Oran Agra authored
      There's a rare case which leads to stagnation in the defragger, causing
      it to keep scanning the keyspace and do nothing (not moving any
      allocation). This happens when all the allocator slabs of a certain bin
      have the same % utilization, but the slab from which new allocations are
      made has a lower utilization.
      
      This commit fixes it by removing the current slab from the overall
      average utilization of the bin, eliminating any precision loss in
      the utilization calculation, and moving the decision about the defrag to
      reside inside jemalloc.
      
      It also adds a test that consistently reproduces this issue.
  30. 16 Apr, 2020 1 commit
    • testsuite run the defrag latency test solo · b9fa42a1
      Oran Agra authored
      This test is time sensitive and it sometimes fails to pass below the
      latency threshold, even on strong machines.
      
      This test was the reason we were running just 2 parallel tests in the
      GitHub Actions CI; this commit reverts that.
  31. 27 Feb, 2020 1 commit
    • fix github actions failing latency test for active defrag - part 2 · 2f1a1c38
      Oran Agra authored
      It seems that running two clients at a time is ok too; it reduces action
      time from 20 minutes to 10. We'll use this for now, and if one day it
      won't be enough we'll have to run just the sensitive tests one by one,
      separately from the others.
      
      This commit also fixes an issue with the defrag test that appears to be
      very rare.
  32. 25 Feb, 2020 1 commit
    • fix github actions failing latency test for active defrag · 53789342
      Oran Agra authored
      It seems that GitHub Actions runners are slow, so we use just one client
      to reduce false positives.
      
      Also add verbose output, test only on the latest Ubuntu, and build on
      the older one.
      
      When doing that, I can reduce the test threshold back to something saner.
  33. 23 Feb, 2020 1 commit
    • Fix latency sensitivity of new defrag test · 62adabd0
      Oran Agra authored
      I saw that the new defrag test for list was failing in CI recently, so I
      relaxed its threshold from 12 to 60.
      
      Besides that, I add / improve the latency test for the other two defrag
      tests (add a sensitive latency check and digest / save checks),
      
      and fix bad usage of debug populate (it can't override existing keys);
      this was the original intention, which creates higher fragmentation.
  34. 18 Feb, 2020 1 commit
    • Defrag big lists in portions to avoid latency and freeze · 485425ce
      Oran Agra authored
      When active defrag kicks in and finds a big list, it will create a bookmark to
      a node so that it is able to resume iteration from that node later.
      
      The quicklist manages that bookmark, and updates it in case that node is deleted.
      
      This will increase memory usage by 16 bytes, and only on lists of over
      1000 quicklist nodes (see active-defrag-max-scan-fields; 1000 ziplists,
      not 1000 items).
      
      In a 32 bit build, this change reduces the maximum effective config of
      list-compress-depth and list-max-ziplist-size (from 32767 to 8191).
  35. 12 Nov, 2018 1 commit
  36. 21 Aug, 2018 1 commit
    • Fix unstable tests on slow machines. · c8452ab0
      Oran Agra authored
      A few tests had borderline thresholds that were adjusted.
      
      The slave buffers test had two issues, preventing the slave buffer from growing:
      1) the slave didn't necessarily go to sleep on time, or woke up too early;
         now using SIGSTOP to make sure it goes to sleep exactly when we want.
      2) the master disconnected the slave on timeout
  37. 18 Jul, 2018 1 commit
    • make active defrag test more stable · f89c93c8
      Oran Agra authored
      On slower machines, the active defrag test tended to fail:
      although the fragmentation ratio was below the threshold, the defragger was
      still in the middle of a scan cycle.
      
      This commit changes:
      - the defragger uses the current fragmentation state, rather than the cached one
        that is updated by server cron every 100ms; this actually fixes a bug of
        starting one excess scan cycle
      - the test lets the defragger use more CPU cycles, in the hope that the defrag
        will be faster, but also gives it more time before we give up.
  38. 27 Jun, 2018 2 commits
  39. 24 May, 2018 1 commit