- 30 Jan, 2024 1 commit
-
-
Chen Tianjie authored
Add a way to HSCAN a hash key and get only the field names. Command syntax is now:
```
HSCAN key cursor [MATCH pattern] [COUNT count] [NOVALUES]
```
When `NOVALUES` is given, the command will only return the keys (field names) in the hash. --------- Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech>
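Below is a hedged usage sketch via the hiredis client (not part of this commit); the key name `myhash` and the single-iteration assumption are illustrative.
```
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;
    /* With NOVALUES, element[1] of the reply holds only field names. */
    redisReply *r = redisCommand(c, "HSCAN myhash 0 NOVALUES");
    if (r && r->type == REDIS_REPLY_ARRAY && r->elements == 2) {
        redisReply *fields = r->element[1];
        for (size_t i = 0; i < fields->elements; i++)
            printf("field: %s\n", fields->element[i]->str);
    }
    if (r) freeReplyObject(r);
    redisFree(c);
    return 0;
}
```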
-
- 29 Jan, 2024 1 commit
-
-
Chen Tianjie authored
The function `tryResizeHashTables` only attempts to shrink the dicts that have keys (a change from #11695). This was a serious problem until the change in #12850, since it meant that if all keys are deleted, we won't shrink the dict. But still, both dictShrink and dictExpand may be blocked by a fork child process, therefore the cron job needs to perform both dictShrink and dictExpand, for not just non-empty dicts, but all dicts in DBs. What this PR does: 1. Try to resize all dicts in DBs (not just non-empty ones, as it was since #12850) 2. Handle both shrink and expand (not just shrink, as it was since forever) 3. Refactor some APIs around dict resizing (get rid of `htNeedsShrink` and `dictShrinkToFit`, and expose `dictShrinkIfNeeded` and `dictExpandIfNeeded`, which already contain all the code of the functions we got rid of, to make the APIs neater) 4. In the `Don't rehash if redis has child process` test, now that cron does the resizing, we no longer need to write to the DB after the child process is killed, and can wait for the cron to expand the hash table.
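A minimal sketch of the cron-side pass this describes, assuming a per-slot dict array and that `dictShrinkIfNeeded`/`dictExpandIfNeeded` take just the dict; apart from those two helpers, the names are illustrative, not the actual serverCron code.
```
/* Walk every dict of the db and let the helpers decide whether a resize is
 * needed and allowed (e.g. not blocked by a fork child). Empty dicts are
 * visited too, so a fully emptied dict can still be shrunk. */
void tryResizeHashTablesSketch(redisDb *db, int slot_count) {
    for (int slot = 0; slot < slot_count; slot++) {
        dict *d = db->dict[slot];   /* assumed per-slot dict array */
        dictShrinkIfNeeded(d);
        dictExpandIfNeeded(d);
    }
}
```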
-
- 22 Jan, 2024 1 commit
-
-
zhaozhao.zz authored
Background: some modules need to know the `dbid` information, such as the function used during RDB loading:
```
robj *rdbLoadObject(int rdbtype, rio *rdb, sds key, int dbid, int *error) {
    ....
    moduleInitIOContext(io,mt,rdb,&keyobj,dbid);
```
However, during replication, the "tempDb" created for diskless RDB loading is not correctly set with the dbid. This leads to passing the wrong dbid to the `rdbLoadObject` function (as tempDb uses zcalloc, all ids are 0).
```
disklessLoadInitTempDb()->rdbLoadRioWithLoadingCtx()->
    /* Read value */
    val = rdbLoadObject(type,rdb,key,db->id,&error);
```
To fix it, set the correct ID (relative index) for the tempdb.
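A sketch of the fix as described (illustrative, not the exact diff): after allocating the temp databases, give each one the id of the database it stands in for.
```
static void initTempDbIdsSketch(redisDb *tempDb, int dbnum) {
    /* tempDb comes from zcalloc, so every id defaults to 0; assign the
     * relative index so rdbLoadObject() receives the right dbid. */
    for (int i = 0; i < dbnum; i++)
        tempDb[i].id = i;
}
```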
-
- 15 Dec, 2023 1 commit
-
-
zhaozhao.zz authored
After #11695, we added two functions `rehashingStarted` and `rehashingCompleted` to the dict structure. We also registered two handlers for the main database's dict and expire structures. This allows the main database to record the dict in the `rehashing` list when rehashing starts. Later, in `serverCron`, the `incrementallyRehash` function is continuously called to perform the rehashing operation. However, currently, when rehashing is completed, `rehashingCompleted` does not remove the dict from the `rehashing` list. This results in the `rehashing` list containing many invalid dicts. Although subsequent cron checks and removes dicts that don't require rehashing, it is still inefficient. This PR implements the functionality to remove the dict from the `rehashing` list in `rehashingCompleted`. This is achieved by adding `metadata` to the dict structure, which keeps track of its position in the `rehashing` list, allowing for quick removal. This approach avoids storing duplicate dicts in the `rehashing` list. Additionally, there are other modifications: 1. Whether in standalone or cluster mode, the dicts in the database are inserted into the rehashing linked list when rehashing starts. This eliminates the need to distinguish between standalone and cluster mode in `incrementallyRehash`. The function only needs to focus on the dicts in the `rehashing` list that require rehashing. 2. The `rehashing` list is moved from per-database to Redis server level. This decouples `incrementallyRehash` from the database ID, and in standalone mode, there is no need to iterate over all databases, avoiding unnecessary access to databases that do not require rehashing. In the future, even if unsharded-cluster mode supports multiple databases, there will be no risk involved. 3. The insertion and removal operations of dict structures in the `rehashing` list are decoupled from the `activerehashing` config. `activerehashing` only controls whether `incrementallyRehash` is executed in serverCron. There is no need for additional steps when modifying the `activerehashing` switch, as in #12705.
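A hedged sketch of that bookkeeping; the structure and function names below are illustrative, not the actual Redis code.
```
/* The dict's metadata remembers its node in the global rehashing list,
 * so completion can unlink it in O(1) instead of leaving a stale entry. */
typedef struct dbDictMetadataSketch {
    listNode *rehashing_node;   /* our entry in the rehashing list, or NULL */
} dbDictMetadataSketch;

static void rehashingStartedSketch(list *rehashing, dict *d, dbDictMetadataSketch *m) {
    listAddNodeTail(rehashing, d);
    m->rehashing_node = listLast(rehashing);
}

static void rehashingCompletedSketch(list *rehashing, dbDictMetadataSketch *m) {
    if (m->rehashing_node == NULL) return;
    listDelNode(rehashing, m->rehashing_node);  /* O(1) removal */
    m->rehashing_node = NULL;
}
```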
-
- 10 Dec, 2023 2 commits
-
-
Binbin authored
The change in dbSwapDatabases seems harmless, because in non-clustered mode dbBuckets calculations are strictly accurate, and in cluster mode we only have one DB. Modify it for uniformity (just like resize_cursor). The change in swapMainDbWithTempDb is needed in case we swap with the temp db, otherwise the overhead memory usage of the db can be miscalculated. In addition, we will swap all fields (including the rehashing list), just for completeness (and to reduce the chance of surprises in the future). Introduced in #12697.
-
Binbin authored
When dbExpand is called from rdb.c with try_expand set to 0, it will either panic on OOM, or be non-fatal (should not fail RDB loading). At the same time, the log text has been slightly adjusted to make it more unified.
-
- 07 Dec, 2023 2 commits
-
-
Chen Tianjie authored
If not in cluster mode, there is no need to compute the slot. A small optimization for #12754
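A hedged sketch of the idea (the real call site may differ; `getKeySlotSketch` is illustrative): only pay for the CRC16 slot computation when cluster mode is on.
```
static int getKeySlotSketch(sds key) {
    if (!server.cluster_enabled) return 0;   /* single dict, slot is irrelevant */
    return (int)keyHashSlot(key, (int)sdslen(key));
}
```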
-
zhaozhao.zz authored
When loading an RDB on cluster nodes, it is necessary to consider the scenario where a node is a replica. For example, during a rolling upgrade, new version instances are often mounted as replicas on old version instances. In this case, the legacy RDB received during full synchronization does not contain slot information, and the new version instance, acting as a replica, should be able to handle the legacy RDB correctly for `dbExpand`. Additionally, renaming `getMyClusterSlotCount` to `getMyShardSlotCount` would be appropriate. Introduced in #11695
-
- 06 Dec, 2023 1 commit
-
-
zhaozhao.zz authored
Additional optimizations for the eviction logic in #11695, to make the eviction logic clearer and decouple the number of sampled keys from the running mode (cluster or standalone). * When sampling in each database, we only care about the number of keys in the current database (not the dicts we sampled from). * If there is an insufficient number of keys in the current database (e.g. fewer than 10 times the value of `maxmemory_samples`), we can break out sooner (to avoid looping on a sparse database). * We'll never try to sample the db dicts more times than the number of non-empty dicts in the db (max 1 in non-cluster mode). This also ensures that each database has a sufficient amount of sampled keys, so even if unsharded-cluster supports multiple databases, there won't be any issues. Other changes: 1. Keep track of the number of non-empty dicts in each database. 2. Move key_count tracking into cumulativeKeyCountAdd rather than all its callers. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 05 Dec, 2023 1 commit
-
-
Chen Tianjie authored
In #12536 we made a similar optimization for SCAN when there are hashtags in patterns. When we can make sure all keys matching the pattern will be in the same slot, we can limit the iteration to run on only one slot.
-
- 28 Nov, 2023 2 commits
-
-
zhaozhao.zz authored
Introduced in #11695. The tryResizeHashTables function gets stuck on the last non-empty slot while iterating through dictionaries; it does not restart from the beginning. The reason for this issue is a problem with the usage of dbIteratorNextDict:

    /* Returns next dictionary from the iterator, or NULL if iteration is complete. */
    dict *dbIteratorNextDict(dbIterator *dbit) {
        if (dbit->next_slot == -1) return NULL;
        dbit->slot = dbit->next_slot;
        dbit->next_slot = dbGetNextNonEmptySlot(dbit->db, dbit->slot, dbit->keyType);
        return dbGetDictFromIterator(dbit);
    }

When iterating to the last non-empty slot, next_slot is set to -1, causing it to loop indefinitely on that slot. We need to modify the code to ensure that after iterating to the last non-empty slot, it returns to the first non-empty slot. BTW, the function tryResizeHashTables is actually iterating over slots that have keys. However, in its implementation, it leverages the dbIterator (which is a key iterator) to obtain slot and dictionary information. While this approach works fine, it is not very intuitive. This PR also improves readability by changing the iteration to directly iterate over slots, thereby enhancing clarity.
-
zhaozhao.zz authored
The current comment for `findSlotByKeyIndex` is a bit ambiguous and can be misleading, as it may be misunderstood as getting the next slot corresponding to target.
-
- 22 Nov, 2023 3 commits
-
-
Binbin authored
We meant to divide it by the number of slots, otherwise it will do dictExpand slots times. The bug was introduced in #11695.
-
Josh Hershberg authored
Simple rename, "GetSlotBit" is implementation specific. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move clusterState into cluster_legacy.h. In order to achieve this, some "accessor" methods needed to be added to the cluster API, along with some other minor refactors. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
- 16 Nov, 2023 1 commit
-
-
Binbin authored
In swapdb mode, the temp db does not init the rehashing list, the change added in #12764 caused cluster ci to fail.
-
- 15 Nov, 2023 1 commit
-
-
Binbin authored
This is currently harmless, since we have already cleared the dict before: that resets the rehashidx to -1, and in incrementallyRehash we call dictIsRehashing to check. It would still be nice to empty the list to avoid meaningless attempts, and the code is also unified to reduce misunderstandings.
-
- 14 Nov, 2023 1 commit
-
-
Binbin authored
When using the DB iterator, it will use dictInitSafeIterator to init an old safe dict iterator. When dbIteratorNext is used, it will jump to the next slot's dict when we are done with a dict. During this process, we do not have any calls to dictResumeRehashing, which causes the dict's pauserehash to always be > 0. In the end, dictRehashMilliseconds returns immediately (because pauserehash is still > 0), which leaves the slot dict in a state where rehashing can never be completed. In the "expire scan should skip dictionaries with lot's of empty buckets" test, adding a `keys *` can reproduce the problem stably. `keys *` will call dbIteratorNext to trigger a traversal of all slot dicts. Added dbReleaseIterator and dbIteratorInitNextSafeIterator methods to call dictResetIterator. Issue was introduced in #11695.
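A hedged sketch of the release path described above (the dbIterator field names are illustrative): resetting the underlying dict iterator is what resumes rehashing and rebalances pauserehash.
```
void dbReleaseIteratorSketch(dbIterator *dbit) {
    dictResetIterator(&dbit->di);   /* resumes rehashing paused by the safe iterator */
    zfree(dbit);
}
```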
-
- 12 Nov, 2023 1 commit
-
-
Roshan Khatri authored
This PR introduces a new macro, serverAssertWithInfoDebug, to do complex assertions only for debugging. The main intention is to allow running complex operations during tests without impacting runtime performance. This assertion is enabled when setting DEBUG_ASSERTIONS. The DEBUG_ASSERTIONS flag is set for the daily and CI variants of `test-sanitizer-address`.
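A hedged sketch of what such a macro can look like (illustrative, not the exact definition): it expands to a real assertion only when DEBUG_ASSERTIONS is defined and costs nothing otherwise.
```
#ifdef DEBUG_ASSERTIONS
#define serverAssertWithInfoDebug(_c, _o, _e) serverAssertWithInfo(_c, _o, _e)
#else
#define serverAssertWithInfoDebug(_c, _o, _e) ((void)0)
#endif
```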
-
- 10 Nov, 2023 1 commit
-
-
zhaozhao.zz authored
Introduced in #12697; we should reset bucket_count when emptying the db, or the overhead memory usage of the db can be miscalculated.
-
- 01 Nov, 2023 1 commit
-
-
Viktor Söderqvist authored
Optimize the performance of SCAN commands when a match pattern can only contain keys from a single slot in cluster mode. This can happen when the pattern contains a hash tag before any wildcard matchers, or when the pattern contains no matchers at all (i.e. it is a literal key).
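A hedged sketch of the pattern-to-slot reasoning; `patternHashSlotSketch` is illustrative, not the actual helper. It returns the single slot all matching keys must live in, or -1 when the pattern can span slots.
```
/* keyHashSlot() hashes only the "{...}" hashtag when one is present, so a
 * pattern like "{user1000}:*" pins every matching key to one slot. */
int patternHashSlotSketch(const char *pattern, int length) {
    for (int i = 0; i < length; i++) {
        char c = pattern[i];
        if (c == '*' || c == '?' || c == '[' || c == '\\') return -1; /* wildcard first */
        if (c == '{') {
            for (int j = i + 1; j < length; j++) {
                char t = pattern[j];
                if (t == '*' || t == '?' || t == '[' || t == '\\') return -1;
                if (t == '}')
                    return j > i + 1 ? (int)keyHashSlot((char *)pattern, length) : -1;
            }
            return -1; /* unterminated tag */
        }
    }
    /* No matchers at all: the pattern is a literal key. */
    return (int)keyHashSlot((char *)pattern, length);
}
```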
-
- 28 Oct, 2023 1 commit
-
-
Roshan Khatri authored
This PR optimizes the time complexity of findSlotByKeyIndex from O(log^2(N)) to O(log(N)) by using the tree structure of binary index tree to find a slot in one search of the index.
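A hedged sketch of the single-descent search over a 1-indexed binary index tree (Fenwick tree); the array layout is an assumption for illustration, with `size` a power of two (16384 slots).
```
/* Find the first slot whose cumulative key count reaches `target`
 * (1-based target key index), in one O(log N) descent. */
int findSlotByKeyIndexSketch(const unsigned long long *tree, int size,
                             unsigned long long target) {
    int pos = 0;
    for (int step = size; step > 0; step >>= 1) {
        if (pos + step <= size && tree[pos + step] < target) {
            pos += step;
            target -= tree[pos];
        }
    }
    return pos; /* 0-based slot holding the target key */
}
```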
-
- 27 Oct, 2023 1 commit
-
-
Harkrishn Patro authored
As part of #11695, independent dictionaries were introduced per slot. The time complexity to discover the total number of buckets across all dictionaries increased to O(N) with the straightforward implementation of iterating over all dictionaries and adding the dictBuckets of each. To optimize the time complexity, we now maintain a global counter at the db level to keep track of the bucket count and update it at the start and end of rehashing. --------- Co-authored-by:
Roshan Khatri <rvkhatri@amazon.com>
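A hedged, self-contained sketch of that bookkeeping; field and hook names are illustrative. The db-level counter is adjusted when a per-slot dict gains or loses a hash table (rehash start allocates a second table, rehash completion frees the old one), so the total no longer requires walking all dictionaries.
```
typedef struct bucketTrackedDbSketch {
    unsigned long long bucket_count;   /* running sum of buckets over all slot dicts */
} bucketTrackedDbSketch;

static void onRehashingStartedSketch(bucketTrackedDbSketch *db, unsigned long new_table_size) {
    db->bucket_count += new_table_size;    /* second table just allocated */
}

static void onRehashingCompletedSketch(bucketTrackedDbSketch *db, unsigned long old_table_size) {
    db->bucket_count -= old_table_size;    /* old table just freed */
}

static unsigned long long dbBucketsSketch(const bucketTrackedDbSketch *db) {
    return db->bucket_count;               /* O(1) instead of iterating 16k dicts */
}
```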
-
- 24 Oct, 2023 1 commit
-
-
Binbin authored
Fix some outdated comments and add comment for moduleNotifyKeyspaceEvent we added in #11084 since it seems a bit implicit. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 15 Oct, 2023 1 commit
-
-
Vitaly authored
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.

## Important changes

* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find a slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server. This has an interesting side effect: now you'll be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we are relying on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). The same is kept for O(1) expires computation as well.

## Performance

This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.

## Interface changes

* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* The SCAN API will now require 64 bits to store the cursor, even on 32-bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.

--------- Co-authored-by:
Vitaly Arbuzov <arvit@amazon.com> Co-authored-by:
Harkrishn Patro <harkrisp@amazon.com> Co-authored-by:
Roshan Khatri <rvkhatri@amazon.com> Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
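A hedged sketch of the cursor layout described in the scan API bullet above: the slot id is packed into the cursor's low bits and the per-dict position into the high bits, which is why passing a bare slot id as the cursor starts scanning that slot. The helper names and exact bit width are illustrative.
```
#define SLOT_BITS 14   /* 16384 slots */

static unsigned long long composeScanCursorSketch(unsigned long long dict_cursor, int slot) {
    return (dict_cursor << SLOT_BITS) | (unsigned long long)slot;
}

static void splitScanCursorSketch(unsigned long long cursor,
                                  unsigned long long *dict_cursor, int *slot) {
    *slot = (int)(cursor & ((1ULL << SLOT_BITS) - 1));
    *dict_cursor = cursor >> SLOT_BITS;
}
```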
-
- 12 Oct, 2023 1 commit
-
-
Ye Lin Aung authored
The function was renamed, but the comments were outdated.
-
- 08 Sep, 2023 1 commit
-
-
Binbin authored
and adjustments.
-
- 30 Aug, 2023 1 commit
-
-
bodong.ybd authored
Before:
```
127.0.0.1:6379> command getkeys sort_ro key
(empty array)
127.0.0.1:6379>
```
After:
```
127.0.0.1:6379> command getkeys sort_ro key
1) "key"
127.0.0.1:6379>
```
-
- 06 Jul, 2023 1 commit
-
-
zhaozhao.zz authored
This is an addition to #12380, to prevent potential bugs when collecting keys from multiple commands in the future. Note that this function also resets numkeys in some cases.
-
- 03 Jul, 2023 1 commit
-
-
Lior Lahav authored
When getKeysUsingKeySpecs processes a command with more than one key-spec, and called with a total of more than 256 keys, it'll call getKeysPrepareResult again, but since numkeys isn't updated, getKeysPrepareResult will not bother to copy key names from the old result (leaving these slots uninitialized). Furthermore, it did not consider the keys it already found when allocating more space. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 27 Jun, 2023 1 commit
-
-
judeng authored
Optimized the performance of the SCAN command in a few ways: 1. Move the key filtering (by MATCH pattern) into the scan callback, so as to avoid collecting keys for later filtering. 2. Reduce many memory allocations and copies (use a reference to the original sds instead of creating an robj, avoiding two excessive mallocs and one string duplication). 3. Compare the TYPE filter directly (as integers), instead of an inefficient string compare per key. 4. Fixed a small bug: when scanning zset and hash types, maxiterations now uses a more accurate number, to avoid a wrongly doubled maxiterations. Changes **postponed** for a later version (8.0): 1. Prepare to move the TYPE filtering into the scan callback as well. This was put on hold since it has side effects that can be considered a breaking change, which is that we will not attempt to lazy expire (delete) a key that was filtered by not matching the TYPE (changing it would mean the TYPE filter starts behaving the same as the MATCH filter already does in that respect). 2. When the specified key TYPE filter is an unknown type, the server will reply with an error immediately instead of doing a full scan that comes back empty handed. Benchmark result: For different scenarios, we obtained about 30% or more performance improvement. Co-authored-by:
Oran Agra <oran@redislabs.com>
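A hedged sketch of point 1 (filtering inside the scan callback); `scanDataSketch` and the callback name are illustrative, not the actual code, though `stringmatchlen`, `dictGetKey`, and the adlist API are existing Redis helpers.
```
typedef struct scanDataSketch {
    list *keys;     /* reply list of sds keys */
    sds pattern;    /* MATCH pattern, or NULL */
} scanDataSketch;

static void scanCallbackSketch(void *privdata, const dictEntry *de) {
    scanDataSketch *data = privdata;
    sds key = dictGetKey(de);
    /* Filter here, so rejected keys are never collected (and no robj copy is made). */
    if (data->pattern &&
        !stringmatchlen(data->pattern, (int)sdslen(data->pattern),
                        key, (int)sdslen(key), 0))
        return;
    listAddNodeTail(data->keys, key);
}
```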
-
- 14 Jun, 2023 1 commit
-
-
judeng authored
This change only affects keys with an expiry time. For SETEX, the average improvement is 5%, and for GET with an expiring key, we gain an improvement of 13%. When keys have an expiration time, Redis has an assertion to look up the main dict every time it touches the expires dict. This comes with a performance cost, and during rehash the damage is doubled. It looks like that assert was added some ten years ago, maybe out of paranoia, and there's probably no reason to keep it at that cost.
-
- 28 May, 2023 1 commit
-
-
Oran Agra authored
This is a redo of #11594 which got reverted in #11940. It improves performance by avoiding a double lookup of the key.
-
- 24 May, 2023 2 commits
-
-
mojh7 authored
functoin -> function
-
judeng authored
Postpone the initialization of an object's lru&lfu until it is added to the db as a value object (#11626). This PR gives two performance benefits: 1. Stop redundant initialization when most robj objects are created 2. LRU_CLOCK will no longer be called in io threads, so we can avoid the `atomicGet` Another code optimization: deleted the redundant judgment in dbSetValue; no matter whether in LFU or LRU, the lru field in the old robj is always the freshest (it is always updated in lookupkey), so we don't need to check whether we are in LFU.
-
- 09 May, 2023 1 commit
-
-
Leibale Eidelman authored
in GEO commands, `STORE` and `STOREDIST` are mutually exclusive. use `oneof` block to contain them
-
- 03 May, 2023 1 commit
-
-
Madelyn Olson authored
Technically, declaring a prototype with an empty declaration has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently be introducing a breaking change if you are using this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although not apparently a problem for any of the compilers we build on, it feels like the right thing is to properly adhere to the C standard and use (void).
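A minimal illustration of the change (the function name is made up):
```
int set_config();      /* empty declarator: deprecated, now warned by Clang 15 with -pedantic */
int set_config(void);  /* explicit (void): conforms to the C standard and to C2x */
```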
-
- 16 Apr, 2023 1 commit
-
-
judeng authored
Improve performance by avoiding redundant memory malloc/free.
-
- 23 Feb, 2023 1 commit
-
-
Chen Tianjie authored
When no-touch mode is enabled, the client will not touch the LRU/LFU of the keys it accesses, except when executing the command `TOUCH`. This allows inspecting or modifying the key-space without affecting eviction. Changes: - A command `CLIENT NO-TOUCH ON|OFF` to switch this mode on and off. - A client flag `#define CLIENT_NOTOUCH (1ULL<<45)`, which can be shown with `CLIENT INFO`, by the letter "T" in the "flags" field. - Clear the `NO-TOUCH` flag in `clearClientConnectionState`, which is used by the `RESET` command and when resetting temp clients used by modules. - Also clear the `NO-EVICT` flag in `clearClientConnectionState`; this might have been an oversight, spotted by @madolson. - A test using the `DEBUG OBJECT` command to verify that the LRU stat is not touched when no-touch mode is on. Co-authored-by:
chentianjie <chentianjie@alibaba-inc.com> Co-authored-by:
Madelyn Olson <34459052+madolson@users.noreply.github.com> Co-authored-by:
sundb <sundbcn@gmail.com>
-
- 14 Feb, 2023 1 commit
-
-
guybe7 authored
Starting from Redis 7.0 (#9890) we started wrapping everything a command propagates with MULTI/EXEC. The problem is that both SCAN and RANDOMKEY can lazy-expire arbitrary keys (similar behavior to active-expire), and put DELs in a transaction. Fix: When these commands are called without a parent exec-unit (e.g. not in EVAL or MULTI) we avoid wrapping their DELs in a transaction (for the same reasons active-expire and eviction avoids a transaction) This PR adds a per-command flag that indicates that the command may touch arbitrary keys (not the ones in the arguments), and uses that flag to avoid the MULTI-EXEC. For now, this flag is internal, since we're considering other solutions for the future. Note for cluster mode: if SCAN/RANDOMKEY is inside EVAL/MULTI it can still cause the same situation (as it always did), but it won't cause a CROSSSLOT because replicas and AOF do not perform slot checks. The problem with the above is mainly for 3rd party ecosystem tools that propagate commands from master to master, or feed an AOF file with redis-cli into a master. This PR aims to fix the regression in redis 7.0, and we opened #11792 to try to handle the bigger problem with lazy expire better for another release.
-