1. 27 May, 2024 1 commit
  2. 22 May, 2024 1 commit
  3. 08 May, 2024 1 commit
    • Add listpack support, hgetf and hsetf commands (#13209) · ca4ed48d
      Ozan Tezcan authored
      **Changes:**
      - Adds listpack support to hash field expiration 
      - Implements hgetf/hsetf commands
      
      **Listpack support for hash field expiration**
      
      We keep field name and value pairs in a listpack for the hash type. With
      this PR, the first time one of the hash field expiration commands is called
      on a key, the listpack layout is converted to triplets holding
      field name, value and TTL per field. If a field does not have a TTL, we
      store zero as the TTL value. Zero is encoded as two bytes in the
      listpack, so once we convert the listpack to hold triplets, each field
      that doesn't have a TTL consumes those extra 2 bytes.
      Fields are ordered by TTL in the listpack so the field with the
      minimum expiry time can be found efficiently.
      
      **New command implementations as part of this PR:** 
      
      - HGETF command
      
      For each specified field, get its value and optionally set the field's
      expiration time in seconds / milliseconds / unix-seconds / unix-milliseconds:
        ```
        HGETF key
          [NX | XX | GT | LT]
          [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST]
          <FIELDS count field [field ...]>
        ```
      
      - HSETF command
      
      For each specified field-value pair, set the field to the value and optionally
      set the field's expiration time in seconds / milliseconds / unix-seconds / unix-milliseconds:
        ```
        HSETF key
          [DC]
          [DCF | DOF]
          [NX | XX | GT | LT]
          [GETNEW | GETOLD]
          [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL]
          <FVS count field value [field value ...]>
        ```
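      An illustrative invocation following the syntax above (a sketch: key,
      fields and TTL values are made up, and replies are omitted since the
      reply format is not described here):
        ```
        HSETF myhash EX 100 FVS 2 f1 v1 f2 v2
        HGETF myhash FIELDS 2 f1 f2
        HGETF myhash PERSIST FIELDS 1 f1
        ```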
      
      Todo:
      - Performance improvement.
      - rdb load/save
      - aof
      - defrag
      ca4ed48d
  4. 18 Apr, 2024 1 commit
    • Hash Field Expiration - Basic support · c18ff056
      Moti Cohen authored
      - Add ebuckets & mstr data structures
      - Integrate active & lazy expiration
      - Add most of the commands 
      - Add support for dict (listpack is missing)
      TODOs:  RDB, notification, listpack, HSET, HGETF, defrag, aof
      c18ff056
  5. 16 Apr, 2024 1 commit
    • Allocate Lua VM code with jemalloc instead of libc, and count it used memory (#13133) · 804110a4
      Binbin authored
      
      
      ## Background
      1. Currently Lua memory control does not pass through Redis's zmalloc.c.
      Redis maxmemory cannot limit memory problems caused by users abusing Lua,
      since this Lua VM memory is not part of used_memory.
      
      2. Since jemalloc is much better (fragmentation and speed), and we also
      know and trust it, we are
      going to use jemalloc instead of libc to allocate the Lua VM code and
      count its used memory.
      
      ## Process:
      In this PR, we use jemalloc for Lua.
      1. Create an arena for all Lua VMs (script and function), which is
      shared, in order to avoid blocking the defragger.
      2. Create a bound tcache for the Lua VM, since the Lua VM and the main
      thread are by default in the same tcache, and if there is no isolated
      tcache, Lua may request memory from the tcache which has just been freed
      by the main thread, and vice versa.
      On the other hand, since the Lua VM might be released in a bio thread, but the
      tcache is not thread-safe, we need to recreate
          the tcache every time we recreate the Lua VM.
      3. Remove Lua memory statistics from memory fragmentation statistics to
      avoid the effects of Lua memory fragmentation.
      
      ## Other
      Add the following new fields to `INFO DEBUG` (we may promote them to
      INFO MEMORY some day)
      1. allocator_allocated_lua: total number of bytes allocated in the lua arena
      2. allocator_active_lua: total number of bytes in active pages allocated
      in the lua arena
      3. allocator_resident_lua: maximum number of bytes in physically
      resident data pages mapped in the lua arena
      4. allocator_frag_bytes_lua: fragmentation bytes in the lua arena
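      A quick way to eyeball these fields (a sketch; it assumes the fields are
      exposed via the INFO DEBUG section named above, and values are placeholders):
      ```
      $ redis-cli INFO debug | grep lua
      allocator_allocated_lua:<bytes>
      allocator_active_lua:<bytes>
      allocator_resident_lua:<bytes>
      allocator_frag_bytes_lua:<bytes>
      ```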
      
      This is oranagra's idea, and i got some help from sundb.
      
      This solves the third point in #13102.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      804110a4
  6. 20 Mar, 2024 1 commit
  7. 01 Mar, 2024 1 commit
    • Add overhead of all DBs and rehashing dict count to info. (#12913) · 4cae99e7
      Chen Tianjie authored
      
      
      Sometimes we need to make a fast judgement about why Redis is suddenly
      taking more memory. One of the reasons is the main DB's dicts doing
      rehashing.
      
      We may use `MEMORY STATS` to monitor the overhead memory of each DB, but
      there is still no total sum to show an overall trend. So this PR adds
      the total overhead of all DBs to the `INFO MEMORY` section, together with
      the total count of rehashing DB dicts, providing some intuitive metrics
      about main dict rehashing.
      
      This PR adds the following metric to INFO MEMORY:
      * `mem_overhead_db_hashtable_rehashing` - only the size of ht[0] in
      dictionaries we're rehashing (i.e. the memory that is going to be released
      soon)
      
      and similar ones to MEMORY STATS:
      * `overhead.db.hashtable.lut` (complements the existing
      `overhead.hashtable.main` and `overhead.hashtable.expires`, which also
      count the `dictEntry` structs)
      * `overhead.db.hashtable.rehashing` - temporary rehashing overhead.
      * `db.dict.rehashing.count` - number of top level dictionaries being
      rehashed.
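      A hedged way to watch these metrics from the command line (field names as
      listed above; values are placeholders):
      ```
      $ redis-cli INFO memory | grep mem_overhead_db_hashtable_rehashing
      mem_overhead_db_hashtable_rehashing:<bytes>
      $ redis-cli MEMORY STATS    # includes overhead.db.hashtable.lut,
                                  # overhead.db.hashtable.rehashing and
                                  # db.dict.rehashing.count
      ```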
      
      ---------
      Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      4cae99e7
  8. 20 Feb, 2024 1 commit
    • Defragger improvements around large bins (#12996) · f6785df6
      debing.sun authored
      
      
      Implement #12963
      
      ## Changes
      1. Large bins don't have external fragmentation, or are at least
      non-defraggable, so we should ignore the effect of large bins when
      measuring fragmentation, and only measure fragmentation of small bins.
      This affects both the allocator_frag* metrics and the active-defrag trigger.
      2. Add INFO metrics for `muzzy` memory, which is memory returned to
      the OS but still shown as RSS until the OS reclaims it.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      f6785df6
  9. 08 Feb, 2024 1 commit
    • Fix SORT STORE quicklist with the right options (#13042) · 813327b2
      Binbin authored
      We forgot to call quicklistSetOptions after createQuicklistObject, so in
      the SORT STORE scenario we would create a quicklist with default
      fill and compress options.
      
      This PR adds fill and depth parameters to createQuicklistObject, so the
      options can be set when the quicklist is created rather than afterwards.
      
      This closes #12871.
      
      release notes:
      > Fix lists created by SORT STORE to respect list compression and
      packing configs.
      813327b2
  10. 05 Feb, 2024 1 commit
    • Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822) · 8cd62f82
      guybe7 authored
      # Description
      Gather most of the scattered `redisDb`-related code from the per-slot
      dict PR (#11695) and turn it to a new data structure, `kvstore`. i.e.
      it's a class that represents an array of dictionaries.
      
      # Motivation
      The main motivation is code cleanliness, the idea of using an array of
      dictionaries is very well-suited to becoming a self-contained data
      structure.
      This allowed cleaning some ugly code, among others: loops that run twice
      on the main dict and expires dict, and duplicate code for allocating and
      releasing this data structure.
      
      # Notes
      1. This PR reverts the part of https://github.com/redis/redis/pull/12848
      where the `rehashing` list is global (handling rehashing `dict`s is
      under the responsibility of `kvstore`, and should not be managed by the
      server)
      2. This PR also replaces the type of `server.pubsubshard_channels` from
      `dict**` to `kvstore` (original PR:
      https://github.com/redis/redis/pull/12804). After that was done,
      server.pubsub_channels was also chosen to be a `kvstore` (with only one
      `dict`, which seems odd) just to make the code cleaner by making it the
      same type as `server.pubsubshard_channels`, see
      `pubsubtype.serverPubSubChannels`
      3. the keys and expires kvstores are currently configured to allocate
      the individual dicts only when the first key is added (unlike before, in
      which they allocated them in advance), but they won't release them when
      the last key is deleted.
      
      Worth mentioning that due to this change, the reply of DEBUG
      HTSTATS changed in case no keys were ever added to the db.
      
      before:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      [Expires HT]
      Hash table 0 stats (main hash table):
      No stats available for empty dictionaries
      ```
      
      after:
      ```
      127.0.0.1:6379> DEBUG htstats 9
      [Dictionary HT]
      [Expires HT]
      ```
      8cd62f82
  11. 12 Jan, 2024 1 commit
    • Correct bytes_per_key computing. (#12897) · 87786342
      Chen Tianjie authored
      Change the calculation method of bytes_per_key to make it closer to
      the true average key size. The calculation method is as follows:
      
      mh->bytes_per_key = mh->total_keys ? (mh->dataset / mh->total_keys) : 0;
      87786342
  12. 10 Dec, 2023 1 commit
  13. 15 Oct, 2023 1 commit
    • Replace cluster metadata with slot specific dictionaries (#11695) · 0270abda
      Vitaly authored
      This is an implementation of https://github.com/redis/redis/issues/10589
      that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will not have a significant impact relative to the data stored. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
      
      ## Important changes
      * Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time, in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also instead of rehashing a single dictionary, cron job will now try to rehash as many as it can in 1ms.
      * getRandomKey - now needs to not only select a random key from a random bucket, but also to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
      * Iteration efficiency - when iterating dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using same binary index that is used for random key selection, this index allows us to find a slot for a specific key index. For example if there are 10 keys in the slot 0, then we can quickly find a slot that contains 11th key using binary search on top of the binary index tree.
      * scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSBs of the cursor so it can be passed around between the client and the server. This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as the cursor value (see the sketch after this list). The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
      * Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
      * Slot info in RDB - in order to resize individual dictionaries correctly, while loading RDB, it's not enough to know total number of keys (of course we could approximate number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains number of keys in each slot, which can be used as a hint during loading.
      * DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. To avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of the `key_count`. This way the DBSIZE operation stays O(1). The same is kept for O(1) expires computation as well.
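      A sketch of the undocumented SCAN side effect mentioned above (cluster
      mode; the slot id used as the cursor is arbitrary, and the reply is omitted):
      ```
      127.0.0.1:6379> SCAN 866 COUNT 100
      ```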
      
      ## Performance
      This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in the slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict.
      
      RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
      
      ## Interface changes
      * Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
      * Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
      * New RDB version to support the new op code for SLOT information. 
      
      ---------
      Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      0270abda
  14. 20 Jun, 2023 1 commit
    • use embedded string object and more efficient ll2string for long long value... · 93708c7f
      judeng authored
      
      use embedded string object and more efficient ll2string for long long value convert to string (#12250)
      
      A value of type long long is always shorter than 21 bytes when converted to a
      string, so it always meets the conditions for using an embedded string object,
      which brings memory reduction and a performance gain (fewer calls
      to the heap allocator).
      Additionally, for the conversion of the long long type to sds, we also use a faster
      algorithm (the one in util.c instead of the one that used to be in sds.c).
      
      For the DECR command on 32-bit Redis, we get about a 5.7% performance
      improvement. There will also be some performance gains for some commands
      that heavily use sdscatfmt to convert numbers, such as INFO.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      93708c7f
  15. 30 May, 2023 1 commit
  16. 24 May, 2023 1 commit
    • postpone the initialization of oject's lru&lfu until it is added to the db as... · d71478a8
      judeng authored
      postpone the initialization of oject's lru&lfu until it is added to the db as a value object (#11626)
      
      This PR brings two performance benefits:
      1. Stop redundant initialization when most robj objects are created
      2. LRU_CLOCK will no longer be called in io threads, so we can avoid the `atomicGet`
      
      Another code optimization:
      delete the redundant check in dbSetValue; no matter whether LFU or LRU is used, the lru field in the old
      robj is always the freshest (it is always updated in lookupKey), so we don't need to check for LFU
      d71478a8
  17. 07 Mar, 2023 1 commit
    • Always compact nodes in stream listpacks after creating new nodes (#11885) · 2bb29e4a
      Madelyn Olson authored
      This change attempts to alleviate a minor memory usage degradation for Redis 6.2 and onwards when using rather large objects (~2k) in streams. Introduced in #6281, we pre-allocate the head nodes of a stream to be 4kb, to limit the amount of unnecessary initial reallocations that are done. However, if we only ever allocate one object because two objects would exceed the max_stream_entry_size, we never actually shrink it to fit the single item. This can lead to a lot of excessive memory usage. For smaller item sizes this becomes less of an issue, as the overhead decreases as the items become smaller in size.
      
      This commit also changes the MEMORY USAGE of streams, since it was reporting the lpBytes instead of the allocated size. This introduced an observability issue when diagnosing the memory issue, since Redis reported the same amount of used bytes pre and post change, even though the new implementation allocated more memory.
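      A hedged way to observe the reporting change (key and payload are
      illustrative; the returned number now reflects allocated bytes rather than
      just lpBytes, replies omitted):
      ```
      127.0.0.1:6379> XADD mystream * payload some-2kb-ish-value
      127.0.0.1:6379> MEMORY USAGE mystream
      ```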
      2bb29e4a
  18. 28 Feb, 2023 1 commit
    • Try to trim strings only when applicable (#11817) · 9d336ac3
      uriyage authored
      
      
      Since `sdsRemoveFreeSpace` has an impact on performance even when it is a no-op (see details at #11508),
      only call the function when there is a possibility that the string contains free space.
      * For strings coming from the network, it's only if they're bigger than PROTO_MBULK_BIG_ARG
      * For strings coming from scripts, it's only if they're smaller than LUA_CMD_OBJCACHE_MAX_LEN
      * For strings coming from modules, it could be anything.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: sundb <sundbcn@gmail.com>
      9d336ac3
  19. 31 Jan, 2023 1 commit
    • Optimization: sdsRemoveFreeSpace to avoid realloc on noop (#11766) · 46393f98
      uriyage authored
      
      
      In #7875 (Redis 6.2), we changed the sds alloc to be the usable allocation
      size in order to:
      
      > reduce the need for realloc calls by making the sds implicitly take over
      the internal fragmentation
      
      This change was done to most sds functions, excluding `sdsRemoveFreeSpace` and
      `sdsResize`. The reason is that in some places (e.g. clientsCronResizeQueryBuffer)
      we call sdsRemoveFreeSpace when we see excessive free space and want to trim it,
      so if we don't trim it exactly to size, the caller may still see excessive free space and
      call it again and again.
      
      However, this resulted in some excessive calls to realloc, even when there's no need
      and it's gonna be a no-op (e.g. when reducing 15 bytes allocation to 13).
      
      It turns out that a call for realloc with jemalloc can be expensive even if it ends up
      doing nothing, so this PR adds a check using `je_nallocx`, which is cheap to avoid
      the call for realloc.
      
      In addition to that, this PR unifies sdsResize and sdsRemoveFreeSpace into common
      code. The difference between them was that sdsResize would avoid using SDS_TYPE_5,
      since it wants to keep the string ready to be resized again, while sdsRemoveFreeSpace
      would permit using SDS_TYPE_5 and get optimal memory consumption.
      Now both methods take a `would_regrow` argument that makes this explicit.
      
      The only actual impact of that is that in clientsCronResizeQueryBuffer we call sdsResize
      and sdsRemoveFreeSpace in different cases, and we now prevent the use of SDS_TYPE_5 in both.
      
      The new test that was added to cover this concern used to pass before this PR as well,
      this PR is just a performance optimization and cleanup.
      
      Benchmark:
      `redis-benchmark -c 100 -t set  -d 512 -P 10  -n  100000000`
      on an i7-9850H with jemalloc shows an improvement from 1021k ops/sec to 1067k (average of 3 runs),
      some 4.5% improvement.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      46393f98
  20. 11 Jan, 2023 2 commits
    • Remove the bucket-cb from dictScan and move dictEntry defrag to dictScanDefrag · b60d33c9
      Viktor Söderqvist authored
      This change deletes the dictGetNext and dictGetNextRef functions, so the
      dict API doesn't expose the next field at all.
      
      The bucket function in dictScan is deleted. A separate dictScanDefrag function
      is added which takes a defrag alloc function to defrag-reallocate the dict entries.
      
      "Dirty" code accessing the dict internals in active defrag is removed.
      
      An 'afterReplaceEntry' is added to dictType, which allows the dict user
      to keep the dictEntry metadata up to date after reallocation/defrag/move.
      
      Additionally, for updating the cluster slot-to-key mapping, after a dictEntry
      has been reallocated, we need to know which db a dict belongs to, so we store
      a pointer to the db in a new metadata section in the dict struct, which is
      a new mechanism similar to dictEntry metadata. This adds some complexity but
      provides better isolation.
      b60d33c9
    • Make dictEntry opaque · c84248b5
      Viktor Söderqvist authored
      Use functions for all accesses to dictEntry (except in dict.c). Dict abuses
      e.g. in defrag.c have been replaced by support functions provided by dict.
      c84248b5
  21. 05 Jan, 2023 1 commit
    • Fix issues with listpack encoded set (#11685) · d0cc3de7
      Oran Agra authored
      PR #11290 added listpack encoding for sets, but was missing two things:
      1. Correct handling of MEMORY USAGE (leading to an assertion).
      2. Had an uncontrolled scratch buffer size in SRANDMEMBER leading to
         OOM panic (reported in #11668). Fixed by copying logic from ZRANDMEMBER.
      
      note that both issues didn't exist in any redis release.
      d0cc3de7
  22. 09 Dec, 2022 1 commit
    • Fix zuiFind crash / RM_ScanKey hang on SET object listpack encoding (#11581) · 20854cb6
      Binbin authored
      
      
      In #11290, we added listpack encoding for the SET object,
      but forgot to support it in zuiFind, causing ZINTER, ZINTERSTORE,
      ZINTERCARD, ZDIFF, ZDIFFSTORE to crash,
      and forgot to support it in RM_ScanKey, causing it to hang.
      
      This PR adds support for the SET listpack encoding in zuiFind and in RM_ScanKey,
      and adds tests for the related commands to cover this case.
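      For reference, a minimal scenario exercising the fixed path (a small
      non-integer set is listpack encoded per #11290, then used in a sorted-set
      intersection; keys and members are illustrative):
      ```
      127.0.0.1:6379> SADD s a b c
      (integer) 3
      127.0.0.1:6379> ZADD z 1 a 2 b
      (integer) 2
      127.0.0.1:6379> ZINTERCARD 2 z s
      (integer) 2
      ```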
      
      Other changes:
      - There is no reason for zuiFind to go into the internals of the SET.
        It can simply use setTypeIsMember and not care about the encoding.
      - Remove the `#include "intset.h"` from server.h to reduce the chance of
        accidental intset API use.
      - Move the setTypeAddAux, setTypeRemoveAux and setTypeIsMemberAux
        interfaces to the header.
      - In scanGenericCommand, use setTypeInitIterator and setTypeNext
        to handle the OBJ_SET scan.
      - In RM_ScanKey, improve the hash scan mode and use lpGetValue like zset,
        so they can share code and get better performance.
      
      The zuiFind part fixes #11578
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      20854cb6
  23. 07 Dec, 2022 1 commit
    • Optimize client memory usage tracking operation while client eviction is disabled (#11348) · c0267b3f
      Harkrishn Patro authored
      
      
      ## Issue
      During the client input/output buffer processing, the memory usage is
      incrementally updated to keep track of clients going beyond a certain
      threshold `maxmemory-clients` to be evicted. However, this additional
      tracking activity leads to unnecessary CPU cycles wasted when no
      client-eviction is required. It is applicable in two cases.
      
      * `maxmemory-clients` is set to `0` which equates to no client eviction
        (applicable to all clients)
      * `CLIENT NO-EVICT` flag is set to `ON`, which equates to a particular
        client not being applicable for eviction.
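      The two cases above correspond to settings like the following (a sketch):
      ```
      127.0.0.1:6379> CONFIG SET maxmemory-clients 0
      OK
      127.0.0.1:6379> CLIENT NO-EVICT on
      OK
      ```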
      
      ## Solution
      * Disable client memory usage tracking during the read/write flow when
        `maxmemory-clients` is set to `0` or `client no-evict` is `on`.
        The memory usage is tracked only during the `clientCron` i.e. it gets
        periodically updated.
      * Cleanup the clients from the memory usage bucket when client eviction
        is disabled.
      * When the maxmemory-clients config is enabled or disabled at runtime,
        we immediately update the memory usage buckets for all clients (tested
        scanning 80000 took some 20ms)
      
      Benchmark shown that this can improve performance by about 5% in
      certain situations.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      c0267b3f
  24. 16 Nov, 2022 1 commit
    • Add listpack encoding for list (#11303) · 2168ccc6
      sundb authored
      Improve memory efficiency of list keys
      
      ## Description of the feature
      The new listpack encoding uses the old `list-max-listpack-size` config
      to perform the conversion, which we can think of as a node inside a
      quicklist, but without the ~80 bytes of overhead (internal fragmentation included)
      of the quicklist and quicklistNode structs.
      For example, a list key with 5 items of 10 chars each now takes 128 bytes
      instead of the 208 it used to take.
      
      ## Conversion rules
      * Convert listpack to quicklist
        When the listpack length or size reaches the `list-max-listpack-size` limit,
        it will be converted to a quicklist.
      * Convert quicklist to listpack
        When a quicklist has only one node, and its length or size is reduced to half
        of the `list-max-listpack-size` limit, it will be converted to a listpack.
        This is done to avoid frequent conversions when we add or remove at the bounding size or length.
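      A sketch of the conversion rules above, using a deliberately tiny limit
      (values are illustrative; the exact conversion boundary follows the rules
      described above):
      ```
      127.0.0.1:6379> CONFIG SET list-max-listpack-size 4
      OK
      127.0.0.1:6379> RPUSH mylist a b c
      (integer) 3
      127.0.0.1:6379> OBJECT ENCODING mylist
      "listpack"
      127.0.0.1:6379> RPUSH mylist d e f
      (integer) 6
      127.0.0.1:6379> OBJECT ENCODING mylist
      "quicklist"
      ```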
          
      ## Interface changes
      1. add list entry param to listTypeSetIteratorDirection
          When list encoding is listpack, `listTypeIterator->lpi` points to the next entry of current entry,
          so when changing the direction, we need to use the current node (listTypeEntry->p) to 
          update `listTypeIterator->lpi` to the next node in the reverse direction.
      
      ## Benchmark
      ### Listpack VS Quicklist with one node
      * LPUSH - roughly 0.3% improvement
      * LRANGE - roughly 13% improvement
      
      ### Both are quicklist
      * LRANGE - roughly 3% improvement
      * LRANGE without pipeline - roughly 3% improvement
      
      As we can see from the benchmark results:
      1. When the list is quicklist encoded, LRANGE improves performance by <5%.
      2. When the list is listpack encoded, LRANGE improves performance by ~13%;
         the main enhancement is brought by `addListListpackRangeReply()`.
      
      ## Memory usage
      1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each
      show memory usage down by 35.49%, from 214MB to 138MB.
      
      ## Note
      1. Add conversion callback to support doing some work before conversion
          Since the quicklist iterator decompresses the current node when it is released, we can 
          no longer decompress the quicklist after we convert the list.
      2168ccc6
  25. 09 Nov, 2022 1 commit
    • Listpack encoding for sets (#11290) · 4e472a1a
      Viktor Söderqvist authored
      Small sets whose elements are not all integers are listpack encoded, by default
      up to 128 elements and max 64 bytes per element, controlled by the new configs `set-max-listpack-entries`
      and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable.
      
      Sets with only integers, even very small sets, are still intset encoded (up to 1G
      limit, etc.). Larger sets are hashtable encoded.
      
      This PR increments the RDB version, and has an effect on OBJECT ENCODING
      
      Possible conversions when elements are added:
      
          intset -> listpack
          listpack -> hashtable
          intset -> hashtable
      
      Note: No conversion happens when elements are deleted. If all elements are
      deleted and then added again, the set is deleted and recreated, thus implicitly
      converted to a smaller encoding.
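      An illustrative redis-cli session showing the encodings and one of the
      conversions listed above (the config value is lowered just for the sketch):
      ```
      127.0.0.1:6379> SADD nums 1 2 3
      (integer) 3
      127.0.0.1:6379> OBJECT ENCODING nums
      "intset"
      127.0.0.1:6379> SADD mixed 1 2 three
      (integer) 3
      127.0.0.1:6379> OBJECT ENCODING mixed
      "listpack"
      127.0.0.1:6379> CONFIG SET set-max-listpack-entries 2
      OK
      127.0.0.1:6379> SADD mixed four
      (integer) 1
      127.0.0.1:6379> OBJECT ENCODING mixed
      "hashtable"
      ```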
      4e472a1a
  26. 06 Nov, 2022 1 commit
  27. 24 Aug, 2022 1 commit
  28. 19 Jul, 2022 1 commit
  29. 22 May, 2022 1 commit
    • Remove ziplist dead code in object.c (#10751) · 18cb4a7d
      Binbin authored
      Remove some dead code in object.c; ziplist is no longer used in 7.0.
      
      Some background:
      zipmap - hash: replaced by ziplist in #285
      ziplist - hash: replaced by listpack in #8887
      ziplist - zset: replaced by listpack in #9366
      ziplist - list: replaced by quicklist (listpack) in #2143 / #9740
      
      Moved the location of the ziplist.h include in server.c
      18cb4a7d
  30. 17 Apr, 2022 1 commit
    • Add RM_MallocSizeString, RM_MallocSizeDict (#10542) · fe1c096b
      guybe7 authored
      Add APIs to allow modules to compute the memory consumption of opaque objects owned by redis.
      Without these, the mem_usage callbacks of module data types are useless in many cases.
      
      Other changes:
      Fix streamRadixTreeMemoryUsage to include the size of the rax structure itself
      fe1c096b
  31. 29 Mar, 2022 1 commit
  32. 03 Jan, 2022 1 commit
    • Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788) · 87789fae
      chenyang8094 authored
      
      
      Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
      Introducing a folder with multiple AOF files tracked by a manifest file.
      
      The main issues with the original AOFRW mechanism are:
      * buffering of commands that are processed during rewrite (consuming a lot of RAM)
      * freezes of the main process when the AOFRW completes to drain the remaining part of the buffer and fsync it.
      * double disk IO for the data that arrives during AOFRW (had to be written to both the old and new AOF files)
      
      The main modifications of this PR:
      1. Remove the AOF rewrite buffer and related code.
      2. Divide the AOF into multiple files. They are classified into two types: one is the `BASE` type,
        which represents the full amount of data (may be AOF or RDB format) after each AOFRW; there is only
        one `BASE` file at most. The second is the `INCR` type, of which there may be more than one. They represent the
        incremental commands since the last AOFRW.
      3. Use a AOF manifest file to record and manage these AOF files mentioned above.
      4. The original configuration of `appendfilename` will be the base part of the new file name, for example:
        `appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`
      5. Add manifest-related TCL tests, and modified some existing tests that depend on the `appendfilename`
      6. Remove the `aof_rewrite_buffer_length` field in info.
      7. Add `aof-disable-auto-gc` configuration. By default we're automatically deleting HISTORY type AOFs.
        It also gives users the opportunity to preserve the history AOFs. just for testing use now.
      8. Add AOFRW limiting measure. When the AOFRW failures reaches the threshold (3 times now),
        we will delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it will be
        delayed by 2 minutes. The next is 4, 8, 16, the maximum delay is 60 minutes (1 hour). During the limit
        period, we can still use the 'bgrewriteaof' command to execute AOFRW immediately.
      9. Support upgrading (loading) data from an old version of redis.
      10. Add `appenddirname` configuration, as the directory name of the append only files. All AOF files and
        manifest file will be placed in this directory.
      11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise redis will exit even if
        `aof-load-truncated` is enabled.
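      For illustration, a typical resulting layout (a sketch; the directory name
      assumes the default `appenddirname`, and the sequence numbers are arbitrary):
      ```
      $ ls appendonlydir/
      appendonly.aof.1.base.rdb
      appendonly.aof.1.incr.aof
      appendonly.aof.manifest
      ```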
      Co-authored-by: Oran Agra <oran@redislabs.com>
      87789fae
  33. 02 Jan, 2022 1 commit
  34. 21 Dec, 2021 1 commit
    • Remove EVAL script verbatim replication, propagation, and deterministic execution logic (#9812) · 1b0968df
      zhugezy authored
      
      
      # Background
      
      The main goal of this PR is to remove the logic related to Lua script verbatim replication,
      keeping only the effect replication logic, which has been the default since Redis 5.0.
      As a result, Lua in Redis 7.0 acts the same as Redis 6.0 with the default
      configuration, from the users' point of view.
      
      There are lots of reasons to remove verbatim replication.
      Antirez has listed some of the benefits in Issue #5292:
      
      >1. No longer need to explain to users side effects into scripts.
          They can do whatever they want.
      >2. No need for a cache about scripts that we sent or not to the slaves.
      >3. No need to sort the output of certain commands inside scripts
          (SMEMBERS and others): this both simplifies and gains speed.
      >4. No need to store scripts inside the RDB file in order to startup correctly.
      >5. No problems about evicting keys during the script execution.
      
      When looking back at Redis 5.0, antirez and the core team decided to set the config
      `lua-replicate-commands yes` by default instead of removing verbatim replication
      directly, in case some bad situations happened. Three years later, ahead of Redis 7.0,
      it's time to remove it formally.
      
      # Changes
      
      - configuration for lua-replicate-commands removed
        - created config file stub for backward compatibility
      - Replication script cache removed
        - this is useless under script effects replication
        - relevant statistics also removed
      - script persistence in RDB files is also removed
      - Propagation of SCRIPT LOAD and SCRIPT FLUSH to replica / AOF removed
      - Deterministic execution logic in scripts removed (i.e. refusing to run write commands
        after random ones, and sorting the output of commands with random order)
        - the flags indicating which commands have non-deterministic results are kept as hints to clients.
      - `redis.replicate_commands()` & `redis.set_repl()` changed
        - now `redis.replicate_commands()` does nothing and returns 1 (see the sketch after this list)
        - ...and `redis.set_repl()` can now be issued before `redis.replicate_commands()`
      - Relevant TCL cases adjusted
      - DEBUG lua-always-replicate-commands removed
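      A quick sketch of the `redis.replicate_commands()` change noted above:
      ```
      127.0.0.1:6379> EVAL "return redis.replicate_commands()" 0
      (integer) 1
      ```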
      
      # Other changes
      - Fix a recent bug comparing CLIENT_ID_AOF to original_client->flags instead of id. (introduced in #9780)
      Co-authored-by: Oran Agra <oran@redislabs.com>
      1b0968df
  35. 17 Dec, 2021 1 commit
    • Introduce memory management on cluster link buffers (#9774) · 792afb44
      ny0312 authored
      Introduce memory management on cluster link buffers:
       * Introduce a new `cluster-link-sendbuf-limit` config that caps memory usage of cluster bus link send buffers.
       * Introduce a new `CLUSTER LINKS` command that displays current TCP links to/from peers.
       * Introduce a new `mem_cluster_links` field under `INFO` command output, which displays the overall memory usage by all current cluster links.
       * Introduce a new `total_cluster_links_buffer_limit_exceeded` field under `CLUSTER INFO` command output, which displays the accumulated count of cluster links freed due to `cluster-link-sendbuf-limit`.
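       A hedged sketch of poking at the new knobs and fields (cluster mode; the
       limit value is arbitrary, replies of the inspection commands omitted):
       ```
       127.0.0.1:6379> CONFIG SET cluster-link-sendbuf-limit 134217728
       OK
       127.0.0.1:6379> CLUSTER LINKS
       127.0.0.1:6379> CLUSTER INFO
       ```
       `INFO` then reports `mem_cluster_links`, and `CLUSTER INFO` includes
       `total_cluster_links_buffer_limit_exceeded`.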
      792afb44
  36. 02 Dec, 2021 1 commit
    • Redis Functions - Added redis function unit and Lua engine · cbd46317
      meir@redislabs.com authored
      The Redis function unit is located inside functions.c
      and contains the Redis Functions implementation:
      1. FUNCTION commands:
        * FUNCTION CREATE
        * FCALL
        * FCALL_RO
        * FUNCTION DELETE
        * FUNCTION KILL
        * FUNCTION INFO
      2. Register engine
      
      In addition, this commit introduces the first engine
      that uses the Redis Functions capabilities, the
      Lua engine.
      cbd46317
  37. 01 Dec, 2021 1 commit
    • Redis Functions - Move Lua related variable into luaCtx struct · e0cd580a
      meir@redislabs.com authored
      The following variables were renamed:
      1. lua_caller 			-> script_caller
      2. lua_time_limit 		-> script_time_limit
      3. lua_timedout 		-> script_timedout
      4. lua_oom 			-> script_oom
      5. lua_disable_deny_script 	-> script_disable_deny_script
      6. in_eval			-> in_script
      
      The following variables were moved to lctx under eval.c:
      1.  lua
      2.  lua_client
      3.  lua_cur_script
      4.  lua_scripts
      5.  lua_scripts_mem
      6.  lua_replicate_commands
      7.  lua_write_dirty
      8.  lua_random_dirty
      9.  lua_multi_emitted
      10. lua_repl
      11. lua_kill
      12. lua_time_start
      13. lua_time_snapshot
      
      This commit has a low risk of introducing any issues as it
      is just moving variables around without changing any logic.
      e0cd580a
  38. 24 Nov, 2021 1 commit
    • Replace ziplist with listpack in quicklist (#9740) · 45129059
      sundb authored
      
      
      Part three of implementing #8702, following #8887 and #9366.
      
      ## Description of the feature
      1. Replace the ziplist container of quicklist with listpack.
      2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.
      
      ## Interface changes
      1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
      2. Replace `debug ziplist` command with `debug listpack`.
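      A small sketch of the alias in interface change 1 (the value is arbitrary):
      ```
      127.0.0.1:6379> CONFIG SET list-max-ziplist-size 64
      OK
      127.0.0.1:6379> CONFIG GET list-max-listpack-size
      1) "list-max-listpack-size"
      2) "64"
      ```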
      
      ## Internal changes
      1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
      2. Add `lpRepr` to print info about a listpack, used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
      3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357).
          It represents that a quicklistNode is a packed node, as opposed to a plain node.
      4. Remove `createZiplistObject` method, which is never used.
      5. Calculate listpack entry size using overhead overestimation in `quicklistAllowInsert`.
          We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
      
      ## Improvements
      1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
      2. Optimize `quicklistAppendPlainNode` to avoid memcpy of the data.
      
      ## Bugfix
      1. Fix crash in `quicklistRepr` when ziplist is compressed, introduced from #9366.
      
      ## Test
      1. Add unittest for `lpMerge`.
      2. Modify the old quicklist ziplist corrupt dump test.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      45129059
  39. 03 Nov, 2021 1 commit
    • Add support for list type to store elements larger than 4GB (#9357) · f27083a4
      perryitay authored
      Redis lists are stored in a quicklist, which is currently a linked list of ziplists.
      Ziplists are limited to storing elements no larger than 4GB, so when bigger
      items are added they get truncated.
      This PR changes quicklists so that they're capable of storing large items
      in quicklist nodes that are plain string buffers rather than ziplists.
      
      As part of the PR there were a few other changes in redis: 
      1. new DEBUG sub-commands: 
         - QUICKLIST-PACKED-THRESHOLD - set the threshold for a node to be
           plain or ziplist. Default: 1GB.
         - QUICKLIST <key> - shows low level info about the quicklist encoding of <key>
      2. rdb format change:
         - A new type was added - RDB_TYPE_LIST_QUICKLIST_2 . 
         - container type (packed / plain) was added to the beginning of the rdb object
           (before the actual node list).
      3. testing:
         - Tests that require over 100MB will be skipped by default. A new flag was
           added to 'ru...
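      A brief sketch of the first new DEBUG sub-command above (threshold values
      are illustrative; the stated default is 1GB):
      ```
      127.0.0.1:6379> DEBUG QUICKLIST-PACKED-THRESHOLD 100
      OK
      127.0.0.1:6379> DEBUG QUICKLIST-PACKED-THRESHOLD 1K
      OK
      ```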
      f27083a4