1. 28 May, 2024 1 commit
    • Fix hscan return value (#13297) · 6a11d458
      Ozan Tezcan authored
      In the last step of hscan, while replying to the client, we assume all items
      in the result list are keys, which are mstr instances. However, the list may
      also contain values, which are sds instances.
      
      Added a check to avoid calling mstrlen() on value objects.
      
      To reproduce:
      ```
      127.0.0.1:6379> hset myhash1 a 11111111111111111111111111111111111111111111111111111111111111111
      (integer) 0
      127.0.0.1:6379> hscan myhash1 0
      1) "0"
      2) 1) "a"
         2) "11111111111111111111111111111111111111111111111111111111111111111\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
      ```
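      A minimal sketch of the idea behind the fix (not the actual patch; the reply
      loop shape is an assumption, though the listed helpers are real Redis APIs):
      ```c
      /* Sketch only: in the HSCAN reply, keys and values alternate in the
       * result list. Keys at even positions are mstr instances, values at
       * odd positions are plain sds strings, so each item needs the length
       * function matching its type. */
      listNode *node = listFirst(keys);
      for (long j = 0; node != NULL; j++, node = listNextNode(node)) {
          void *item = listNodeValue(node);
          if (j % 2 == 0)
              addReplyBulkCBuffer(c, item, mstrlen(item)); /* key: mstr */
          else
              addReplyBulkCBuffer(c, item, sdslen(item));  /* value: sds */
      }
      ```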
  2. 15 Oct, 2023 1 commit
    • Replace cluster metadata with slot specific dictionaries (#11695) · 0270abda
      Vitaly authored
      This is an implementation of https://github.com/redis/redis/issues/10589 that
      eliminates 16 bytes per entry in cluster mode, currently used to create a linked
      list between entries in the same slot. The main idea is to split the main
      dictionary into 16k smaller dictionaries (one per slot), so we can perform all
      slot-specific operations, such as iteration, without any additional info in the
      `dictEntry`. For Redis cluster, the expectation is that there will be a large
      number of keys, so the fixed overhead of 16k dictionaries will be negligible. The
      expire dictionary is also split up so that each slot is logically decoupled, so
      that in subsequent revisions we will be able to atomically flush a slot of data.
      
      ## Important changes
      * Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can within 1ms.
      * getRandomKey - now needs to not only select a random key from a random bucket, but also to select a random dictionary first. Fairness is a major concern here, as keys can be unevenly distributed across the slots. To address this, we introduced a binary index tree (see the sketch after this list). With that data structure we are able to efficiently find a random slot, weighted by key count, using binary search in O(log^2(slot count)) time.
      * Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index tree that is used for random key selection; it allows us to find the slot holding a specific key index. For example, if there are 10 keys in slot 0, we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
      * scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we pack the slot id into the LSBs of the cursor so it can be passed around between client and server (see the cursor-packing sketch under the interface changes below). This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as the cursor value. The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
      * Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the slot checksum multiple times, hence we rely on the slot id cached in the client during command execution. All operations that access random keys should either pass in the known slot or recompute it.
      * Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
      * DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This keeps the DBSIZE operation O(1); the same is done for O(1) computation of the number of expires.
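      A minimal, self-contained sketch of the binary index tree idea from the two
      items above (illustrative code, not the PR's implementation; all names here
      are ours):
      ```c
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>
      
      #define SLOTS 16384 /* Redis cluster slot count */
      
      /* Per-slot key counts, stored as a binary index (Fenwick) tree. */
      static long long tree[SLOTS + 1];
      
      /* Add delta keys to a slot's count: O(log SLOTS). */
      static void slot_add(int slot, long long delta) {
          for (int i = slot + 1; i <= SLOTS; i += i & -i) tree[i] += delta;
      }
      
      /* Cumulative number of keys in slots [0..slot]: O(log SLOTS). */
      static long long slot_prefix(int slot) {
          long long sum = 0;
          for (int i = slot + 1; i > 0; i -= i & -i) sum += tree[i];
          return sum;
      }
      
      /* Slot containing the key with global index k (0-based): binary search
       * over prefix sums, O(log^2 SLOTS) as described above. */
      static int slot_of_key_index(long long k) {
          int lo = 0, hi = SLOTS - 1;
          while (lo < hi) {
              int mid = lo + (hi - lo) / 2;
              if (slot_prefix(mid) > k) hi = mid; else lo = mid + 1;
          }
          return lo;
      }
      
      int main(void) {
          srand((unsigned)time(NULL));
          slot_add(0, 10);  /* 10 keys in slot 0 */
          slot_add(100, 1); /* 1 key in slot 100 */
          long long total = slot_prefix(SLOTS - 1);
      
          /* Fair random key selection: draw a uniform key index, then find
           * its slot; fuller slots are picked proportionally more often. */
          long long r = rand() % total;
          printf("random key #%lld is in slot %d\n", r, slot_of_key_index(r));
      
          /* Iteration helper: the 11th key (index 10) lies past slot 0. */
          printf("key index 10 is in slot %d\n", slot_of_key_index(10));
          return 0;
      }
      ```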
      
      ## Performance
      This change improves SET performance in cluster mode by ~5%; most of the gains come from no longer having to maintain linked lists for keys in a slot. Non-cluster mode performance is unchanged. For workloads that rely on evictions, performance is similar because of the extra overhead of finding keys to evict.
      
      RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
      
      ## Interface changes
      * Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
      * The SCAN API will now require 64 bits to store the cursor, even on 32-bit systems, as the slot information is stored in it (see the sketch below).
      * New RDB version to support the new op code for SLOT information. 
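      A self-contained sketch of one plausible cursor packing (helper names and the
      exact bit layout are our assumptions; the commit only states that the slot id
      goes into the cursor's LSBs):
      ```c
      #include <stdint.h>
      #include <stdio.h>
      
      #define SLOT_BITS 14                          /* 16384 = 2^14 slots */
      #define SLOT_MASK ((1ULL << SLOT_BITS) - 1)
      
      /* Hypothetical helpers: keep the slot id in the cursor's low bits so it
       * survives the client/server round trip unchanged. */
      static uint64_t cursor_pack(uint64_t dict_cursor, unsigned slot) {
          return (dict_cursor << SLOT_BITS) | (slot & SLOT_MASK);
      }
      static unsigned cursor_slot(uint64_t cursor) {
          return (unsigned)(cursor & SLOT_MASK);
      }
      static uint64_t cursor_inner(uint64_t cursor) {
          return cursor >> SLOT_BITS;
      }
      
      int main(void) {
          uint64_t c = cursor_pack(42, 1337);
          printf("cursor=%llu slot=%u inner=%llu\n", (unsigned long long)c,
                 cursor_slot(c), (unsigned long long)cursor_inner(c));
      
          /* The side effect noted above: a bare slot id used as a cursor
           * starts a scan at that slot, since the inner cursor decodes to 0. */
          printf("cursor 1337 -> slot %u, inner %llu\n", cursor_slot(1337),
                 (unsigned long long)cursor_inner(1337));
          return 0;
      }
      ```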
      
      ---------
      Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  3. 27 Jun, 2023 1 commit
    • improve performance for scan command when matching pattern or data type (#12209) · 07ed0eaf
      judeng authored
      Optimized the performance of the SCAN command in a few ways:
      1. Move the key filtering (by MATCH pattern) into the scan callback,
        so as to avoid collecting non-matching keys for later filtering
        (see the sketch after this list).
      2. Reduce many memory allocations and copies (use a reference to the
        original sds instead of creating an robj, saving two excessive mallocs
        and one string duplication).
      3. Compare the TYPE filter directly (as integers), instead of an
        inefficient string compare per key.
      4. Fixed a small bug: when scanning zset and hash types, maxiterations
        now uses a more accurate element count, avoiding a wrongly doubled
        maxiterations.
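      A hedged sketch of item 1 above, meant to be read against the Redis source
      rather than compiled standalone (the `scanData` struct is our illustration;
      `stringmatchlen()` and the dict scan callback shape are real Redis APIs):
      ```c
      typedef struct {
          list *keys;          /* result accumulator (adlist) */
          const char *pattern; /* MATCH pattern, or NULL for no filtering */
          int patlen;
      } scanData;
      
      static void scanCallback(void *privdata, const dictEntry *de) {
          scanData *data = privdata;
          sds key = dictGetKey(de);
      
          /* Filter by MATCH right here, so non-matching keys are never
           * collected and then discarded in a second pass. */
          if (data->pattern &&
              !stringmatchlen(data->pattern, data->patlen, key, sdslen(key), 0))
              return;
      
          /* Append a reference to the original sds: no robj wrapper, no
           * extra mallocs, no string duplication. */
          listAddNodeTail(data->keys, key);
      }
      ```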
      
      Changes **postponed** for a later version (8.0):
      1. Prepare to move the TYPE filtering into the scan callback as well. This
        was put on hold since it has a side effect that can be considered a
        breaking change: we would no longer attempt to lazily expire (delete) a
        key that was filtered out by not matching the TYPE (changing it would
        mean the TYPE filter starts behaving the same as the MATCH filter
        already does in that respect).
      2. When the specified TYPE filter is an unknown type, the server will
        reply with an error immediately, instead of doing a full scan that comes
        back empty-handed.
      
      Benchmark result:
      For different scenarios, we obtained about 30% or more performance improvement.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  4. 09 Nov, 2022 1 commit
    • Listpack encoding for sets (#11290) · 4e472a1a
      Viktor Söderqvist authored
      Small sets whose elements are not all integers are now listpack encoded: by
      default up to 128 elements, with a max of 64 bytes per element, controlled by the
      new configs `set-max-listpack-entries` and `set-max-listpack-value`. This saves
      memory for small sets compared to using a hashtable.
      
      Sets with only integers, even very small sets, are still intset encoded (up to 1G
      limit, etc.). Larger sets are hashtable encoded.
      
      This PR increments the RDB version and affects the output of OBJECT ENCODING.
      
      Possible conversions when elements are added:
      
          intset -> listpack
          listpack -> hashtable
          intset -> hashtable
      
      Note: No conversion happens when elements are deleted. If all elements are
      deleted and then added again, the set is deleted and recreated, thus implicitly
      converted to a smaller encoding.
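      A self-contained sketch of the resulting encoding choice (names here are
      illustrative, not the server's internal API; thresholds mirror the new
      configs, and the intset size limits are ignored for brevity):
      ```c
      #include <stdbool.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      
      enum set_enc { ENC_INTSET, ENC_LISTPACK, ENC_HASHTABLE };
      
      static bool is_integer(const char *s) {
          char *end;
          strtoll(s, &end, 10);
          return end != s && *end == '\0';
      }
      
      /* Pick an encoding for a set of n elements, given the equivalents of
       * set-max-listpack-entries and set-max-listpack-value (128 and 64 by
       * default). */
      static enum set_enc choose_encoding(const char **elems, size_t n,
                                          size_t max_entries, size_t max_value) {
          bool all_int = true, fits = (n <= max_entries);
          for (size_t i = 0; i < n; i++) {
              if (!is_integer(elems[i])) all_int = false;
              if (strlen(elems[i]) > max_value) fits = false;
          }
          if (all_int) return ENC_INTSET;   /* integer-only sets stay intset */
          if (fits)    return ENC_LISTPACK; /* small mixed sets */
          return ENC_HASHTABLE;             /* everything else */
      }
      
      int main(void) {
          const char *mixed[] = {"a", "b", "42"};
          const char *ints[]  = {"1", "2", "3"};
          printf("mixed small set -> %d (ENC_LISTPACK)\n",
                 choose_encoding(mixed, 3, 128, 64));
          printf("integer set     -> %d (ENC_INTSET)\n",
                 choose_encoding(ints, 3, 128, 64));
          return 0;
      }
      ```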
  5. 09 Sep, 2021 1 commit
    • Replace all usage of ziplist with listpack for t_zset (#9366) · 3ca6972e
      sundb authored
      Part two of implementing #8702 (zset), after #8887.
      
      ## Description of the feature
      Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.
      
      ## Rdb format changes
      New `RDB_TYPE_ZSET_LISTPACK` rdb type.
      
      ## Rdb loading improvements:
      1) Pre-expand the dict for validation of duplicate data for listpack and ziplist.
      2) Simplify the release of empty key objects during RDB loading.
      3) Unify the ziplist and listpack data-verify methods for zset and hash, and move the code to rdb.c.
      
      ## Interface changes
      1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
      2) OBJECT ENCODING will return listpack instead of ziplist.
      
      ## Listpack improvements:
      1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from listpack.
      2) Improve the performance of `lpCompare`, converting from string to integer is faster than converting from integer to string.
      3) Replace `snprintf` with `ll2string` to improve performance in converting numbers to strings in `lpGet()`.
      
      ## Zset improvements:
      1) Improve the performance of the `zzlFind` method: use `lpFind` instead of `lpCompare` in a loop (see the sketch after this list).
      2) Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to delete an element of a zset.
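      A hedged fragment illustrating item 1 (simplified, to be read against the
      Redis source; `lpFind`, `lpFirst`, `lpNext` and `zzlGetScore` are real APIs,
      and the function name marks this as a sketch):
      ```c
      /* Sketch: find a member in a zset listpack, where entries alternate
       * member, score, member, score, ... The skip=1 argument makes lpFind
       * examine every second entry, i.e. members only, in a single pass,
       * instead of calling lpCompare on each entry in a loop. */
      unsigned char *zzlFindSketch(unsigned char *lp, sds ele, double *score) {
          unsigned char *eptr = lpFind(lp, lpFirst(lp), (unsigned char *)ele,
                                       sdslen(ele), 1);
          if (eptr != NULL && score != NULL) {
              unsigned char *sptr = lpNext(lp, eptr); /* score follows member */
              *score = zzlGetScore(sptr);
          }
          return eptr;
      }
      ```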
      
      ## Tests
      1) Add some unit tests for the `lpDeleteRange` and `lpDeleteRangeWithEntry` functions.
      2) Add zset RDB loading test.
      3) Add benchmark test for `lpCompare` and `ziplistCompare`.
      4) Add empty listpack zset corrupt dump test.
  6. 10 Aug, 2021 1 commit
    • Replace all usage of ziplist with listpack for t_hash (#8887) · 02fd76b9
      sundb authored
      Part one of implementing #8702 (taking hashes first before other types)
      
      ## Description of the feature
      1. Change ziplist encoded hash objects to listpack encoding.
      2. Convert existing ziplists at RDB loading time (an O(n) operation).
      
      ## Rdb format changes
      1. Add RDB_TYPE_HASH_LISTPACK rdb type.
      2. Bump RDB_VERSION to 10
      
      ## Interface changes
      1. New `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`)
      2. OBJECT ENCODING will return `listpack` instead of `ziplist`
      
      ## Listpack improvements:
      1. Support direct insert and replace of integer elements (rather than converting back and forth from strings).
      2. Add more listpack capabilities to match the ziplist ones (like `lpFind`, `lpRandomPairs` and such).
      3. Optimize element length fetching, avoiding multiple calculations.
      4. Use inline functions to avoid function call overhead.
      
      ## Tests
      1. Add a new test for the RDB load-time conversion.
      2. Add listpack unit tests (based on the ones in ziplist.c).
      3. Add a few "corrupt payload: fuzzer findings" tests, and slightly modify existing ones.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  7. 09 Jun, 2021 1 commit
    • Improve test suite to handle external servers better. (#9033) · 8a86bca5
      Yossi Gottlieb authored
      This commit revives and improves the ability to run the test suite against
      external servers, instead of launching and managing `redis-server` processes as
      part of the test fixture.
      
      This capability existed in the past, using the `--host` and `--port` options.
      However, it was quite limited and mostly useful when running a specific test.
      Attempting to run larger chunks of the test suite surfaced many issues:
      
      * Many tests depend on being able to start and control `redis-server` themselves,
      and there's no clear distinction between external-server-compatible tests and
      other tests.
      * Cluster mode is not supported (resulting in `CROSSSLOT` errors).
      
      This PR cleans up many things and makes it possible to run the entire test suite
      against an external server. It also provides more fine grained controls to
      handle cases where the external server supports a subset of the Redis commands,
      limited number of databases, cluster mode, etc.
      
      The tests directory now contains a `README.md` file that describes how this
      works.
      
      This commit also includes additional cleanups and fixes:
      
      * Tests can now be tagged.
      * Tag-based selection is now unified across `start_server`, `tags` and `test`.
      * More information is provided about skipped or ignored tests.
      * Repeated patterns in tests have been extracted to common procedures, both at a
        global level and on a per-test file basis.
      * Cleaned up some cases where test setup was based on a previous test executing
        (a major anti-pattern that repeats itself in many places).
      * Cleaned up some cases where test teardown was not part of a test (in the
        future we should have dedicated teardown code that executes even when tests
        fail).
      * Fixed some tests that were flaky running on external servers.
  8. 17 Jan, 2021 1 commit
    • Add io-thread daily CI tests. (#8232) · 522d9360
      Yossi Gottlieb authored
      This adds basic coverage to IO threads by running the cluster tests and a few selected Redis test suite tests with IO threads enabled.
      
      Also provides some necessary additional improvements to the test suite:
      
      * Add --config to sentinel/cluster tests for arbitrary configuration.
      * Fix --tags whitelisting which was broken.
      * Add a `network` tag to some tests that are more network intensive. This is work in progress and more tests should be properly tagged in the future.
  9. 22 May, 2019 1 commit
    • Implement `SCAN cursor [TYPE type]` modifier suggested in issue #6107. · bf963253
      Angus Pearson authored
      Add tests to check basic functionality of this optional keyword, and also tested with
      a module (redisgraph). Checked quickly with valgrind, no issues.
      
      This copies the type-name canonicalisation code from `typeCommand`; perhaps this
      would be better factored out to prevent the two from diverging and both needing
      to be edited when new `OBJ_*` types are added, but this is a little fiddly with
      C strings (a sketch of the mapping follows).
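      A hedged sketch of what that canonicalisation looks like (illustrative, not the
      committed code; the real mapping lives in `typeCommand`'s switch):
      ```c
      /* Sketch: map an object's OBJ_* type to the canonical name that SCAN's
       * TYPE argument is matched against. Note that GEO keys map to "zset",
       * since GEO is backed by zsets (see the quirk noted below). */
      static const char *typeName(const robj *o) {
          switch (o->type) {
          case OBJ_STRING: return "string";
          case OBJ_LIST:   return "list";
          case OBJ_SET:    return "set";
          case OBJ_ZSET:   return "zset";   /* includes GEO keys */
          case OBJ_HASH:   return "hash";
          case OBJ_STREAM: return "stream";
          default:         return "unknown";
          }
      }
      ```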
      
      The [redis-doc](https://github.com/antirez/redis-doc/blob/master/commands.json) repo
      will need to be updated with this new arg if accepted.
      
      A quirk to be aware of here is that the GEO commands are backed by zsets not their own
      type, so they're not distinguishable from other zsets.
      
      Additionally, for sparse types this has the same behaviour as `MATCH` in that it may
      return many empty results before giving something, even for large `COUNT`s.