1. 08 Mar, 2021 1 commit
    • Fix flaky unit/maxmemory test on MacOS/BSD. (#8619) · 7d81f392
      Yossi Gottlieb authored
      It seems like non-Linux sockets may be less greedy, resulting in more
      transient client output buffers.
      
      This hasn't been proven, but empirically, stressing this test on
      non-Linux systems tends to exhibit increased mem_clients_normal values.
  2. 08 Dec, 2020 1 commit
    • Improve stability of new CSC eviction test (#8160) · a102b21d
      Oran Agra authored
      c4fdf09c added a test that now fails with valgrind.
      It fails for two reasons:
      1) the test samples the used memory and then limits maxmemory to
         that value, but it turns out this is not atomic: on slow machines
         the background cron process that cleans out old query buffers
         reduces the memory, so that the setting doesn't cause eviction.
      2) the dbsize was tested late, after reading some invalidation
         messages; by that time more and more keys had been evicted,
         partially draining the db. This is not the focus of this fix
         (still a known limitation).
  3. 06 Dec, 2020 2 commits
    • prevent client tracking from causing feedback loop in performEvictions (#8100) · c4fdf09c
      Oran Agra authored
      When client tracking is enabled, signalModifiedKey can increase
      memory usage. This can cause the loop in performEvictions to keep
      running, since it was measuring the memory usage impact of
      signalModifiedKey itself.
      
      The section that measures the memory impact of the eviction should
      cover just dbDelete, excluding keyspace notification, client
      tracking, and propagation to AOF and replicas (see the sketch below).
      
      This resolves part of the problem described in #8069
      p.s. fix took 1 minute, test took about 3 hours to write.
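      In sketch form (hedged: reconstructed from memory of the evict.c
      loop of that era, with latency and stats bookkeeping omitted):
      
          /* Propagate the eviction as a DEL before the key disappears. */
          propagateExpire(db, keyobj, server.lazyfree_lazy_eviction);
          
          /* Measure only what dbDelete() itself frees. */
          delta = (long long) zmalloc_used_memory();
          if (server.lazyfree_lazy_eviction)
              dbAsyncDelete(db, keyobj);
          else
              dbSyncDelete(db, keyobj);
          delta -= (long long) zmalloc_used_memory();
          mem_freed += delta;
          
          /* Keyspace notification and client tracking run outside the
           * measured window: with tracking enabled they may allocate
           * invalidation buffers, and counting those would make the loop
           * chase its own allocations. */
          signalModifiedKey(NULL, db, keyobj);
          notifyKeyspaceEvent(NOTIFY_EVICTED, "evicted", keyobj, db->id);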
    • Limit the main db and expires dictionaries to expand (#7954) · 75f9dec6
      Wang Yuan authored
      As we know, redis may reject user requests or evict some keys if
      used memory is over maxmemory. Dictionary expansion can make things
      worse: a big dictionary, such as the main db or the expires dict,
      may eat a huge amount of memory at once to allocate its new, bigger
      hash table, and end up far beyond maxmemory after expanding.
      Related issues: #4213 #4583
      
      In more detail: when a dict in redis expands, we allocate a new big
      ht[1], generally double the size of ht[0], so ht[1] will be very
      big if ht[0] already is. For the db dict, if we have more than 64
      million keys, we need to pay 1GB for ht[1] when the dict expands.
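      As a back-of-the-envelope check of that 1GB figure (hedged: assumes
      8-byte bucket pointers and power-of-two table sizing):
      
          #include <stdio.h>
          
          int main(void) {
              /* Past ~67 million keys (2^26 buckets at load factor 1),
               * expanding allocates a 2^27-entry bucket array, one
               * pointer per bucket. */
              unsigned long long buckets = 1ULL << 27;
              printf("%llu bytes\n", buckets * sizeof(void *));
              /* prints 1073741824, i.e. 1GiB */
              return 0;
          }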
      
      If the sum of the used memory and the new hash table the dict needs
      exceeds maxmemory, we shouldn't allow the dict to expand. If keys
      eviction is enabled, we still couldn't add many more keys after
      eviction and rehashing; worse, redis would keep fewer keys, since it
      holds back a chunk of memory for the new hash table instead of for
      users' data. And if keys eviction is disabled, users can't write to
      redis at all.
      
      What this commit changes:
      
      Add a new member function, expandAllowed, to the dict type; it gives
      the caller a way to allow or refuse expansion. The function receives
      two parameters: the extra memory needed for expanding and the dict's
      current load factor, and users can implement a function that decides
      based on them.
      The main db dict and expires dict may be very big and cost huge
      amounts of memory to expand, so for their dict types we implement a
      judgement function: we provisionally stop the dict from expanding if
      used memory would be over maxmemory after the dict expands, but to
      guarantee redis performance we still allow the dict to expand if its
      load factor exceeds the safe load factor (see the sketch below).
      Add test cases to verify we don't allow the main db to expand when
      the remaining memory is not enough, so that keys eviction is avoided.
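      A hedged sketch of the hook (close to what this commit adds in
      dict.h and server.c, reconstructed from memory; the exact value of
      the safe load factor is an assumption here):
      
          /* dict.h: a new optional callback on dictType. */
          typedef struct dictType {
              /* ... existing hash/dup/compare/destructor callbacks ... */
              int (*expandAllowed)(size_t moreMem, double usedRatio);
          } dictType;
          
          /* server.c: the policy installed for the main db and expires
           * dict types. */
          #define HASHTABLE_MAX_LOAD_FACTOR 1.618
          
          int dictExpandAllowed(size_t moreMem, double usedRatio) {
              /* Below the safe load factor, expand only if the extra
               * allocation would not push us over maxmemory... */
              if (usedRatio <= HASHTABLE_MAX_LOAD_FACTOR)
                  return !overMaxmemoryAfterAlloc(moreMem);
              /* ...past it, expand anyway to protect lookup performance. */
              return 1;
          }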
      
      Other changes:
      
      Change the new hash table size used when expanding. Before this
      commit, the size was double the dict's used count, later rounded up
      by _dictNextPower. Since we actually aim to keep the dict load
      factor between 0.5 and 1.0, we now replace *2 with +1: because the
      expand check fires when used >= size, the old result is usually the
      same as _dictNextPower(used+1). The only case where they differ is
      when dict_can_resize is false during fork, where the later
      _dictNextPower(used*2) makes the dict jump to 4x (e.g.
      _dictNextPower(1025*2) returns 4096), as sketched below.
      Fix the rehash test cases affected by the changed algorithm for the
      new hash table size on expand.
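      Roughly, the sizing change inside dict.c (hedged sketch;
      _dictExpandIfNeeded is the internal helper that triggers expansion,
      and _dictNextPower rounds the requested size up to the next power
      of two):
      
          static int _dictExpandIfNeeded(dict *d) {
              if (dictIsRehashing(d)) return DICT_OK;
              if (d->ht[0].size == 0)
                  return dictExpand(d, DICT_HT_INITIAL_SIZE);
              if (d->ht[0].used >= d->ht[0].size &&
                  (dict_can_resize ||
                   d->ht[0].used / d->ht[0].size > dict_force_resize_ratio) &&
                  dictTypeExpandAllowed(d))
              {
                  /* was: dictExpand(d, d->ht[0].used * 2); since used has
                   * just reached size (a power of two), used+1 rounds up
                   * to the same target in the common case. */
                  return dictExpand(d, d->ht[0].used + 1);
              }
              return DICT_OK;
          }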
  4. 05 May, 2019 1 commit
    • make replication tests more stable on slow machines · ba809f26
      Oran Agra authored
      solving a few replication-related test race conditions that fail on
      slow machines
      
      bugfix in the slave buffers test: since the test is executed twice,
      each time with a different command count, the threshold for the
      delta can't be a constant.
  5. 11 Sep, 2018 1 commit
  6. 21 Aug, 2018 1 commit
    • Fix unstable tests on slow machines. · c8452ab0
      Oran Agra authored
      A few tests had borderline thresholds that were adjusted.
      
      The slave buffers test had two issues preventing the slave buffer
      from growing:
      1) the slave didn't necessarily go to sleep on time, or woke up too
         early; now SIGSTOP is used to make sure it goes to sleep exactly
         when we want (see the sketch below).
      2) the master disconnected the slave on timeout.
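      A minimal sketch of the pause/resume mechanism (the test drives
      this from Tcl against the replica's pid; the C below only
      illustrates the signals involved):
      
          #include <signal.h>
          #include <sys/types.h>
          
          /* Freeze the replica so it cannot read from its socket; the
           * master keeps writing, forcing its output buffer to grow. */
          void pause_replica(pid_t pid)  { kill(pid, SIGSTOP); }
          
          /* Let it run again once the measurements are done. */
          void resume_replica(pid_t pid) { kill(pid, SIGCONT); }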
  7. 24 Jul, 2018 1 commit
    • fix slave buffer test suite false positives · d4ae76d1
      Oran Agra authored
      it looks like on slow machines we're getting:
      [err]: slave buffer are counted correctly in tests/unit/maxmemory.tcl
      Expected condition '$slave_buf > 2*1024*1024' to be true (16914 > 2*1024*1024)
      
      this is a result of the slave waking up too early and eating the
      slave buffer before the traffic and the test end.
  8. 16 Jul, 2018 1 commit
    • slave buffers were wasteful and incorrectly counted causing eviction · bf680b6f
      Oran Agra authored
      A) slave buffers didn't count internal fragmentation and sds unused
         space; this caused them to induce eviction although we didn't
         mean for it.
      
      B) slave buffers were consuming about twice the memory they actually
         needed.
      - this was mainly due to sdsMakeRoomFor growing the string to twice
        the needed size each time, while networking.c stores no more than
        16k per node (partially fixed recently in 237a38737).
      - besides, it wasn't able to store half of a new string in one
        buffer and the other half in the next (so the above-mentioned fix
        helped mainly for small items).
      - lastly, the sds buffers had up to 30% internal fragmentation that
        was wasted: consumed but never used.
      
      C) inefficient performance due to starting from a small string and
         reallocing many times.
      
      What I changed (sketched below):
      - create dedicated buffers for the reply list, counting their size
        with zmalloc_size
      - when creating a new reply node, preallocate it to at least 16k.
      - when appending a new reply to the buffer, first fill all the
        unused space of the previous node before starting a new one.
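      A hedged sketch of the resulting reply-list node and append path
      (modeled on the clientReplyBlock struct this commit introduces;
      names and the 16k constant are recalled from the Redis sources, not
      quoted):
      
          typedef struct clientReplyBlock {
              size_t size;  /* usable bytes in buf, per zmalloc_size() */
              size_t used;  /* bytes actually written */
              char buf[];   /* the reply bytes themselves */
          } clientReplyBlock;
          
          void addReplyBytes(list *reply, const char *s, size_t len) {
              /* First top up the unused tail of the last node... */
              listNode *ln = listLast(reply);
              clientReplyBlock *tail = ln ? listNodeValue(ln) : NULL;
              if (tail && tail->size > tail->used) {
                  size_t avail = tail->size - tail->used;
                  size_t copy = len < avail ? len : avail;
                  memcpy(tail->buf + tail->used, s, copy);
                  tail->used += copy;
                  s += copy;
                  len -= copy;
              }
              /* ...then put any remainder in a fresh node preallocated
               * to at least 16k (PROTO_REPLY_CHUNK_BYTES). */
              if (len) {
                  size_t alloc = len > PROTO_REPLY_CHUNK_BYTES ?
                                 len : PROTO_REPLY_CHUNK_BYTES;
                  clientReplyBlock *node = zmalloc(sizeof(*node) + alloc);
                  /* Count the real allocation, fragmentation included. */
                  node->size = zmalloc_size(node) - sizeof(*node);
                  node->used = len;
                  memcpy(node->buf, s, len);
                  listAddNodeTail(reply, node);
              }
          }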
      
      Other changes:
      - expose the mem_not_counted_for_evict info field for the benefit of
        the test suite
      - add a test to make sure slave buffers are counted correctly and
        that they don't cause eviction
  9. 15 Mar, 2017 1 commit
  10. 29 Sep, 2014 1 commit
  11. 18 Jul, 2014 1 commit
  12. 28 Jul, 2011 1 commit