1. 18 Jun, 2018 1 commit
  2. 07 Jun, 2018 1 commit
  3. 11 May, 2018 2 commits
    • antirez's avatar
      ZPOP: change sync ZPOP to have a count argument instead of N keys. · 56bbab23
      antirez authored
      Usually blocking operations make a lot of sense with multiple keys, so
      that we can listen on multiple queues (or whatever the app models) with
      a single connection. However in the synchronous case it is more useful
      to be able to ask for N elements. This is a change that I also wanted
      to perform sooner or later in the blocking list variant, but here it is
      more natural since there is no reply type difference.
      56bbab23
    • antirez's avatar
      ZPOP: renaming to have explicit MIN/MAX score idea. · 6efb6c1e
      antirez authored
      This commit also adds a top comment about a subtle behavior of mixing
      blocking operations of different types in the same key.
      6efb6c1e
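
A minimal usage sketch for the two ZPOP commits above (56bbab23, 6efb6c1e), assuming a local Redis >= 5.0 and the redis-py client:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.delete("zset")
r.zadd("zset", {"a": 1, "b": 2, "c": 3})

# The synchronous variants take an optional count instead of multiple keys.
print(r.zpopmin("zset", 2))   # [('a', 1.0), ('b', 2.0)] -- lowest scores first
print(r.zpopmax("zset"))      # [('c', 3.0)]

# The blocking variants keep the multi-key form and pop a single element,
# returning (key, member, score), or None on timeout.
r.zadd("zset", {"d": 4})
print(r.bzpopmin(["zset", "other-zset"], timeout=1))
```
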
  4. 29 Apr, 2018 1 commit
  5. 19 Apr, 2018 1 commit
  6. 18 Apr, 2018 1 commit
  7. 11 Apr, 2018 1 commit
  8. 09 Apr, 2018 1 commit
  9. 05 Apr, 2018 1 commit
  10. 30 Mar, 2018 1 commit
  11. 29 Mar, 2018 1 commit
  12. 25 Mar, 2018 1 commit
    • antirez's avatar
      AOF: enable RDB-preamble rewriting by default. · 28d28ef3
      antirez authored
      There are too many advantages in doing this: RDB is faster to persist,
      more compact, and much faster to load back. The main issues here are
      that the code is less tested because this was not the old default (so
      we are enabling it for the new 5.0 release), and that the AOF is no
      longer a trivially parsable format from now on. However the
      non-preamble mode will remain supported in the future as well, even as
      new data types are added.
      28d28ef3
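
A quick sketch of inspecting and toggling the setting discussed in 28d28ef3, assuming a local Redis >= 5.0 and the redis-py client (the parameter name is the one documented in redis.conf):

```python
import redis

r = redis.Redis(decode_responses=True)

# Defaults to "yes" starting with 5.0; "no" restores the command-only AOF.
print(r.config_get("aof-use-rdb-preamble"))

r.config_set("aof-use-rdb-preamble", "no")    # opt back into the old format
r.config_set("aof-use-rdb-preamble", "yes")   # restore the new default
r.bgrewriteaof()                              # rewrite the AOF with the current mode
```
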
  13. 19 Mar, 2018 1 commit
  14. 15 Mar, 2018 7 commits
  15. 14 Mar, 2018 1 commit
    • antirez's avatar
      Cluster: ability to prevent slaves from failing over their masters. · 432bf477
      antirez authored
      This commit, in some parts derived from PR #3041 which is no longer
      possible to merge (because the user deleted the original branch),
      implements the ability for slaves to have a special configuration
      preventing them from trying to start a failover when the master is
      failing.
      
      There are multiple reasons for wanting this, and the feature was
      requested in issue #3021 some time ago.
      
      The differences between this patch and the original PR are the
      following:
      
      1. The flag is saved/loaded on the nodes configuration.
      2. The 'myself' node is now flag-aware; the flag is updated as needed
         when the configuration is changed via CONFIG SET.
      3. The flag name uses NOFAILOVER instead of NO_FAILOVER to be consistent
         with existing NOADDR.
      4. The redis.conf documentation was rewritten.
      
      Thanks to @deep011 for the original patch.
      432bf477
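
An illustrative sketch of setting the flag from 432bf477 at runtime, assuming a Redis 5.0 cluster slave listening on localhost:7001 and the redis-py client; the parameter name cluster-slave-no-failover is the one from the rewritten redis.conf (later also aliased to cluster-replica-no-failover):

```python
import redis

# Connect directly to the slave whose automatic failover should be disabled.
replica = redis.Redis(host="localhost", port=7001, decode_responses=True)

# With the flag set, this node never tries to fail over its failing master;
# a manual CLUSTER FAILOVER, if forced, is still possible.
replica.config_set("cluster-slave-no-failover", "yes")
print(replica.config_get("cluster-slave-no-failover"))
```
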
  16. 12 Mar, 2018 2 commits
    • Oran Agra's avatar
      Adding real allocator fragmentation to INFO and MEMORY command + active defrag test · 806736cd
      Oran Agra authored
      other fixes / improvements:
      - LUA script memory isn't taken from zmalloc (it is taken from libc
        malloc), so it can cause a high fragmentation ratio to be displayed
        (which is false)
      - there was a problem with the "fragmentation" info being calculated
        from RSS and used_memory sampled at different times (now they are
        sampled together)
      
      other details:
      - adding a few more allocator info fields to the INFO and MEMORY
        commands
      - improve the defrag test to measure defrag latency of big keys
      - increase the accuracy of the defrag test (by looking at real frag
        info); this way we can use an even lower threshold and still avoid
        false positives
      - keep the old (total) "fragmentation" field unchanged, but add new
        ones for specific things
      - add these to the MEMORY DOCTOR command
      - deduct LUA memory from the RSS in the case of a non-jemalloc
        allocator (one for which we don't have "allocator active/used" info)
      - reduce the sampling rate of the RSS and allocator info
      806736cd
    • Oran Agra's avatar
      active defrag v2 · be1b4aa9
      Oran Agra authored
      - big keys are not defragged in one go from within the dict scan;
        instead they are scanned in parts after the main dict hash bucket is
        done.
      - add a latency monitor sample for defrag
      - change the default active-defrag-cycle-min to induce lower latency
      - make active defrag start a new scan right away if needed, so it's
        easier (for the test suite) to detect when it's done
      - make active defrag quit the current cycle after each db / big key
      - defrag some non-key long-term global allocations
      - some refactoring for smaller functions and more reusable code
      - during dict rehashing, one scan iteration of the dict can end up
        scanning one bucket in the smaller dict and many, many buckets in
        the larger dict, so waiting for 16 scan iterations before checking
        the time may be much too long.
      be1b4aa9
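
A rough sketch of reading the new allocator fragmentation fields from 806736cd and enabling the active defrag of be1b4aa9, assuming redis-py against a Redis 5.0 server; the field names are the ones Redis 5.0 reports in INFO MEMORY, so .get() is used in case a build or allocator does not expose them:

```python
import redis

r = redis.Redis(decode_responses=True)
mem = r.info("memory")

for field in ("mem_fragmentation_ratio",   # old RSS-based total, kept unchanged
              "allocator_frag_ratio",      # new: fragmentation inside the allocator
              "allocator_frag_bytes",
              "allocator_rss_ratio"):
    print(field, mem.get(field))

print(r.execute_command("MEMORY", "DOCTOR"))

# Active defrag requires a jemalloc build; CONFIG SET fails otherwise.
r.config_set("activedefrag", "yes")
print(r.config_get("active-defrag-cycle-min"))
```
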
  17. 19 Feb, 2018 1 commit
    • antirez's avatar
      Track number of logically expired keys still in memory. · ffde73c5
      antirez authored
      This commit adds two new fields in the INFO output, stats section:
      
      expired_stale_perc:0.34
      expired_time_cap_reached_count:58
      
      The first field is an estimate of the percentage of keys that are
      still in memory but are already logically expired. The reason why
      those keys are not yet reclaimed is that the active expire cycle can't
      spend more time on the process of reclaiming them, and at the same
      time nobody is accessing such keys. However, as the active expire
      cycle runs, even though it eventually has to return to the caller
      (because of the time limit, or because fewer than 25% of the keys in
      each given database are logically expired), it collects the stats
      needed to populate this INFO field.
      
      Note that expired_stale_perc is a running average, where the current
      sample accounts for 5% and the history for 95%, so you'll see it
      changing smoothly over time.
      
      The other field, expired_time_cap_reached_count, counts the number of
      times the expire cycle had to stop because of the time limit, even if
      it was still finding a sizeable number of keys yet to expire. This
      allows people handling operations to understand whether the Redis
      server, during mass-expiration events, is usually able to collect keys
      fast enough. It is normal for this field to increment during mass
      expires, but normally it should very rarely increment. When it
      constantly increments instead, it means that the current workload is
      using a very significant percentage of CPU time to expire keys.
      
      This feature was created thanks to the hints of Rashmi Ramesh and
      Bart Robinson from Twitter. In private email exchanges, they noted how
      it was important to improve the observability of this parameter in the
      Redis server. In big deployments, the number of keys that are yet to
      be reclaimed in each server, even if they are logically expired, may
      account for a very large amount of wasted memory.
      ffde73c5
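
A small sketch of polling the two new fields, assuming redis-py against a server that already contains ffde73c5:

```python
import time
import redis

r = redis.Redis(decode_responses=True)

for _ in range(3):
    stats = r.info("stats")
    print("stale %:", stats.get("expired_stale_perc"),
          "| time cap hits:", stats.get("expired_time_cap_reached_count"))
    time.sleep(1)
```
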
  18. 14 Feb, 2018 2 commits
  19. 11 Jan, 2018 1 commit
  20. 29 Dec, 2017 1 commit
  21. 08 Dec, 2017 2 commits
  22. 05 Dec, 2017 1 commit
    • antirez's avatar
      add linkClient(): adds the client and caches the list node. · 62a4b817
      antirez authored
      We have this operation in two places: when caching the master and
      when linking a new client after the client creation. By having an API
      for this we avoid incurring errors when modifying one of the two
      places and forgetting the other. The function is also a good place to
      document why we cache the linked list node.
      
      Related to #4497 and #4210.
      62a4b817
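
Not the Redis C code, just a small Python sketch of the idea behind 62a4b817: append the client to a doubly linked list and cache the resulting node on the client itself, so that a later unlink is O(1) instead of a full list scan:

```python
class Node:
    __slots__ = ("value", "prev", "next")
    def __init__(self, value):
        self.value, self.prev, self.next = value, None, None

class ClientList:
    def __init__(self):
        self.head = self.tail = None

    def link(self, client):
        """Append the client and cache the list node, as linkClient() does."""
        node = Node(client)
        if self.tail:
            self.tail.next, node.prev = node, self.tail
            self.tail = node
        else:
            self.head = self.tail = node
        client.list_node = node
        return node

    def unlink(self, client):
        """O(1) removal thanks to the cached node: no scan to find it."""
        node = client.list_node
        if node.prev: node.prev.next = node.next
        else: self.head = node.next
        if node.next: node.next.prev = node.prev
        else: self.tail = node.prev
        client.list_node = None

class Client:
    def __init__(self, name):
        self.name, self.list_node = name, None
```
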
  23. 04 Dec, 2017 1 commit
    • antirez's avatar
      Refactoring: improve luaCreateFunction() API. · 60d26acf
      antirez authored
      The function in its initial form, and after the fixes for the PSYNC2
      bugs, required code duplication in multiple spots. This commit modifies
      it in order to always compute the script name independently, and to
      return the SDS of the SHA of the body: this way it can be used in all
      the places, including for SCRIPT LOAD, without duplicating the code to
      create the Lua function name. Note that this requires re-computing the
      body SHA1 when EVAL sees a script for the first time, but this should
      not change scripting performance in any way, because defining a new
      script is a rare event that happens the first time a script is seen,
      and the SHA1 computation is anyway not a very slow process for the
      typical Redis script, especially compared to the actual Lua bytecode
      compilation of the body.
      
      Note that the function used to assert() if a duplicated script was
      loaded; however, in two cases out of three we now want the function to
      handle duplicated scripts just fine: this happens in SCRIPT LOAD and
      in RDB AUX "lua" loading. Moreover the assert was not defending
      against some obvious failure modes, so now the function always tests
      against already defined functions at the start.
      60d26acf
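
A hedged sketch of the SHA1-based script identity that luaCreateFunction() works with: SCRIPT LOAD returns the lowercase SHA1 hex of the script body, which is what EVALSHA expects and what the server uses to name the Lua function internally. Assumes redis-py and a local server:

```python
import hashlib
import redis

r = redis.Redis(decode_responses=True)

body = "return redis.call('GET', KEYS[1])"
sha = r.script_load(body)

# The script name is simply the SHA1 of its body.
assert sha == hashlib.sha1(body.encode()).hexdigest()

# Loading the same body again is fine and returns the same SHA, consistent
# with the commit above making duplicated scripts a non-error.
assert r.script_load(body) == sha

r.set("k", "v")
print(r.evalsha(sha, 1, "k"))   # -> 'v'
```
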
  24. 01 Dec, 2017 7 commits