- 18 Feb, 2024 2 commits
-
-
Binbin authored
From CodeQL: the result of scanf is only checked against 0, but it can also return EOF. Reported in https://github.com/redis/redis/security/code-scanning/38 and https://github.com/redis/redis/security/code-scanning/39.
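A minimal C illustration of the pattern being fixed (generic code, not the actual Redis call sites): `scanf`-family functions return the number of items converted, or `EOF` on failure before any conversion, so a check against 0 alone misses the `EOF` case.
```c
#include <stdio.h>

int main(void) {
    int value;
    /* fscanf returns the number of successful conversions, or EOF if
     * input ends (or a read error occurs) before any conversion. */
    int rc = fscanf(stdin, "%d", &value);
    if (rc != 1) {  /* covers both 0 (no match) and EOF */
        fprintf(stderr, "failed to parse an integer\n");
        return 1;
    }
    printf("parsed %d\n", value);
    return 0;
}
```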
-
zhaozhao.zz authored
Redis has some special commands that mark the client's state, such as `subscribe` and `blpop`, which mark the client as `CLIENT_PUBSUB` or `CLIENT_BLOCKED`, and we have metrics for those special use cases. However, there are other special commands, like `WATCH`, which do not set a specific flag but should also be considered stateful client types. For stateful clients, in many scenarios the connections cannot be shared in a connection pool. For example, whenever the `WATCH` command is executed, a new connection is required to put the client into the "watch state", because the watched keys are stored in the client. If different business logic needs to watch different keys, separate connections must be used; otherwise there will be contamination. This also means that if a user's business relies heavily on the `WATCH` command, a large number of connections will be required. Recently we encountered this situation on our platform, where some users consume a significant number of connections because of `WATCH`. I hope we can have a way to observe these special use cases and special client connections. This commit adds a few monitoring metrics:
1. `watching_clients` in the `INFO` reply: the number of clients currently in the "watching" state.
2. `total_watched_keys` in the `INFO` reply: the total number of keys being watched.
3. `watch` in the `CLIENT LIST` reply: the number of keys each client is currently watching.
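A hedged sketch of how such counters could be maintained (hypothetical names, not the actual Redis implementation): server-wide totals updated whenever a client starts or stops watching keys, so INFO can report them in O(1).
```c
/* Hypothetical sketch; field and function names are illustrative only. */
typedef struct {
    long long watching_clients;   /* clients with at least one watched key */
    long long total_watched_keys; /* sum of keys watched across all clients */
} WatchMetrics;

static WatchMetrics watch_metrics = {0, 0};

/* Called when a client adds a key to its watch list. */
void onKeyWatched(long long keys_watched_before) {
    if (keys_watched_before == 0) watch_metrics.watching_clients++;
    watch_metrics.total_watched_keys++;
}

/* Called when a client's watch list is cleared (UNWATCH, EXEC, or reset). */
void onWatchesCleared(long long keys_watched) {
    if (keys_watched > 0) watch_metrics.watching_clients--;
    watch_metrics.total_watched_keys -= keys_watched;
}
```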
-
- 15 Feb, 2024 3 commits
-
-
Binbin authored
Usually, the probability that a dict exists is much greater than the probability that it does not. In kvstoreDictAddRaw, we call kvstoreGetDict multiple times. Based on this assumption, we change createDictIfNeeded into a get-or-create function:
```
before:
dict exist:     2 kvstoreGetDict calls
dict non-exist: 2 kvstoreGetDict calls

after:
dict exist:     1 kvstoreGetDict call
dict non-exist: 3 kvstoreGetDict calls
```
A possible 3% performance improvement was observed. In addition, some typos/comments I saw have been cleaned up.
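The pattern as a generic, self-contained sketch (the real kvstore code differs): the common "dict already exists" case costs a single lookup, and only the rare miss pays for the allocation.
```c
#include <stdlib.h>

/* Minimal stand-in for a per-slot hash table (illustration only). */
typedef struct { int nentries; } dictSketch;

/* Get-or-create in one call: one lookup when the dict exists,
 * lazy allocation on the first use of a missing one. */
static dictSketch *getOrCreateDict(dictSketch **slots, int index) {
    if (slots[index] == NULL)
        slots[index] = calloc(1, sizeof(dictSketch));
    return slots[index];
}
```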
-
Binbin authored
In low-memory situations, sending a big number of arguments (sets) may cause an OOM panic. Use ztrycalloc, as we do for LCS and XAUTOCLAIM, and fail gracefully. This change affects the following commands: ZUNION, ZINTER, ZDIFF, ZUNIONSTORE, ZINTERSTORE, ZDIFFSTORE, ZINTERCARD.
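A hedged sketch of the fail-gracefully pattern (plain calloc stands in for Redis's try-allocator, and the error reply is only mimicked; not the actual command code):
```c
#include <stdio.h>
#include <stdlib.h>

/* On allocation failure, reply with an error instead of panicking.
 * In Redis, ztrycalloc returns NULL on failure rather than aborting;
 * calloc is used here only as a stand-in for the sketch. */
static int zsetCommandSketch(size_t num_sets) {
    void **sets = calloc(num_sets, sizeof(void *));
    if (sets == NULL) {
        fprintf(stderr, "-ERR insufficient memory\n");  /* error-reply stand-in */
        return -1;
    }
    /* ... command logic using sets ... */
    free(sets);
    return 0;
}
```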
-
Binbin authored
These tests have all failed in daily CI:
```
*** [err]: Blocking XREADGROUP for stream key that has clients blocked on stream - reprocessing command in tests/unit/type/stream-cgroups.tcl
Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)

*** [err]: BLPOP unblock but the key is expired and then block again - reprocessing command in tests/unit/type/list.tcl
Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)

*** [err]: BZPOPMIN unblock but the key is expired and then block again - reprocessing command in tests/unit/type/zset.tcl
Expected '1103' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
```
Increase the range to avoid failures, and improve the comment to be clearer. The tests were introduced in #13004.
-
- 14 Feb, 2024 1 commit
-
-
Sankar authored
The receiver does not update any of its cluster state based on gossip about itself, so this commit explicitly avoids sending or processing gossip about the receiver. Currently, cluster bus gossip includes 10% of the nodes in the cluster, with a minimum of 3 nodes. For clusters of up to 30 nodes, this commit makes sure that one third of the gossip entries (1 out of 3) is never wasted. This should help the cluster state converge somewhat faster in general.
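A hedged sketch of the selection idea (hypothetical names and a deliberately simplified loop; the real clusterSendPing logic differs): when picking nodes for the gossip section, the receiver itself is skipped so that none of the limited gossip slots are spent on information it would ignore anyway.
```c
#include <stdlib.h>

/* Illustrative only: pick up to `wanted` gossip entries from `nodes`,
 * never picking the receiver (duplicate filtering omitted for brevity). */
static int pickGossipEntries(const int *nodes, int nnodes, int receiver,
                             int *out, int wanted) {
    int picked = 0, attempts = 0;
    while (picked < wanted && attempts < nnodes * 4) {
        int candidate = nodes[rand() % nnodes];
        attempts++;
        if (candidate == receiver) continue;  /* don't gossip about the receiver */
        out[picked++] = candidate;
    }
    return picked;
}
```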
-
- 12 Feb, 2024 3 commits
-
-
YaacovHazan authored
Following the changes introduced by 8cd62f82, dbExpandExpires used db_size instead of expires_size. Co-authored-by:
YaacovHazan <yaacov.hazan@redislabs.com>
-
YaacovHazan authored
Following the changes introduced by 8cd62f82, kvstoreDictExpand for the expires kvstore used slot_size instead of expires_slot_size. Co-authored-by:
YaacovHazan <yaacov.hazan@redislabs.com>
-
Binbin authored
This test fails occasionally:
```
*** [err]: CLIENT KILL maxAGE will kill old clients in tests/unit/introspection.tcl
Expected 2 == 1 (context: type eval line 14 cmd {assert {$res == 1}} proc ::test)
```
This test is very likely to produce a false positive if execution takes longer than the max age: for example, if the time between the sleep and the kill exceeds 1s, rd2 will also be killed because of the max age. The test could reorder its statements to increase the probability of passing, but that would still be a timing issue on slow machines, so we decided to give it a few more chances (retries). The test was introduced in #12299.
-
- 11 Feb, 2024 1 commit
-
-
debing.sun authored
Fail CI: https://github.com/redis/redis/actions/runs/7837608438/job/21387609715

## Why the defragment tests only failed under 32-bit
First of all, under 32-bit, jemalloc allocates more small bins and fewer large bins, which also leads to more external fragmentation. The fragmentation ratio is therefore higher under 32-bit than under 64-bit, so the defragment tests (`Active defrag eval scripts: cluster` and `Active defrag big keys: cluster`) always fail under 32-bit.

## Why the defragment tests only failed with cluster
The following is the result of the `Active defrag eval scripts: cluster` test.
1) Before #11695, the fragmentation ratio was 3.11%.
2) After #11695, the fragmentation ratio grew to 4.58%. Since we use a per-slot dictionary to manage slots, we only defragment the contents of these dictionaries (keys, values), but not the dictionaries' struct and ht_table, which means that frequent shrinking and expanding of the dictionaries creates more fragments.
3) After #12850 and #12948, in cluster mode a large number of cluster slot dicts are shrunk, creating additional fragmentation, and the dictionaries themselves are not defragged.

## Solution
* Add defragmentation of the per-slot dictionary's own structures: the dict struct and ht_table.

## Other change
* Increase the floating-point print precision of `frags` and `rss` in the defrag debug logs.
--------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 08 Feb, 2024 3 commits
-
-
Binbin authored
The test fails here and there:
```
*** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
scan didn't handle slot skipping logic.
```
There are two cases:
1. When the test passes, we use a child process to avoid the dict resize, but that cannot completely prevent it, since dictDelete still has a chance to trigger the resize (by hitting the force ratio). The reason our test passed before is that the expire dict was still in the middle of rehashing, so in dictDelete, dictShrinkIfNeeded could not trigger the resize.
2. When the test fails, the expire dict has finished rehashing, so the last dictDelete's dictShrinkIfNeeded triggers the dict resize because it hits the force ratio, and the skipping logic fails.
This PR adds a new DEBUG command to disable dict resizing.
-
Binbin authored
We forgot to call quicklistSetOptions after createQuicklistObject, so in the SORT STORE scenario we would create a quicklist with the default fill and compress options. This PR adds fill and depth parameters to createQuicklistObject so the options are set when the quicklist is created. This closes #12871. Release notes: > Fix lists created by SORT STORE to respect list compression and packing configs.
-
debing.sun authored
Fix two crashes introduced by #12955. When a quicklist node can't be inserted and split, we eventually merge the current node with its neighboring nodes after inserting, and compress the current node and its siblings.
1. When the current node is merged with another node, the current node may become invalid and can no longer be used. Solution: let `_quicklistMergeNodes()` return the merged nodes.
2. If the current node is an LZF quicklist node, its recompress flag will be 1. If the split node can be merged with a sibling node to become the head or tail, recompress may cause the head and tail to be compressed, which is not allowed. Solution: always reset recompress to 0 after merging.
-
- 07 Feb, 2024 2 commits
-
-
Binbin authored
After the fix for #13033, address sanitizer reports this heap-use-after-free error: when the pubsubshard_channels dict becomes empty we delete the dict, but dictReleaseIterator then calls dictResetIterator, which accesses the freed dict and triggers the error. This PR introduces a new struct, kvstoreDictIterator, that wraps dictIterator, and replaces the original dict iterator with the new kvstore dict iterator. --------- Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
guybe7 <guy.benoish@redislabs.com>
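A hedged sketch of what such a wrapper can look like (hypothetical layout; the real kvstoreDictIterator may store different fields): by remembering which kvstore and which dict index it iterates, the release path can be made safe even when the underlying dict has already been deleted.
```c
/* Illustrative only; not the actual kvstore.c definitions. */
typedef struct dict dict;
typedef struct dictIterator dictIterator;
typedef struct kvstore kvstore;

typedef struct {
    kvstore *kvs;      /* owning kvstore */
    int didx;          /* index of the dict being iterated */
    dictIterator *di;  /* underlying iterator; the dict it points at may be gone */
} kvstoreDictIteratorSketch;
```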
-
Binbin authored
The cause is the same as #13016: in #12819, in cron, in addition to trying to shrink we also try to expand. The dict was expanded by cron before we triggered the bgsave, since we had enough keys (4096) to hit the ratio. Now, before the bgsave, we only add 4095 keys to avoid this issue.
-
- 06 Feb, 2024 6 commits
-
-
debing.sun authored
Fix #12864. The main reason for this crash is that when replacing an element of a quicklist packed node with the lpReplace() method, if the final size is larger than 4GB, lpReplace() fails and returns NULL, causing `node->entry` to be incorrectly set to NULL. Since the inserted data is not a large element, we can't just replace it like a large element (first quicklistInsertAfter() and then quicklistDelIndex()), because the current node may be merged and invalidated in quicklistInsertAfter(). The solution in this PR: when replacing an entry fails (the listpack would exceed 4GB), split the current node, create a new node to put in the middle, and try to merge them. This is the same as inserting a large element. In the worst case, its size will not exceed 4GB.
-
Gann authored
This commit addresses a problem in connSocketBlockingConnect where different types of connection failures, including timeouts and other errors, were not consistently handled. Previously, the function did not return C_ERR immediately after detecting a connection failure, which could lead to inconsistent states and misinterpretation of the connection status. With this update, connSocketBlockingConnect now correctly returns C_ERR upon encountering any connection error, ensuring that all types of connection failures are handled consistently and the behavior of the function aligns with expected outcomes in case of connection issues. Closes #12900
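A hedged sketch of the consistent error handling described above (generic POSIX sockets on a non-blocking fd; not the actual connSocketBlockingConnect code): every failure path, including timeout, returns the error sentinel immediately.
```c
#include <errno.h>
#include <poll.h>
#include <sys/socket.h>

#define C_OK   0
#define C_ERR -1

/* Assumes `fd` was already set non-blocking. Any failure, whether an
 * immediate connect error, a poll timeout, or a deferred SO_ERROR,
 * returns C_ERR so callers see one consistent outcome. */
static int blockingConnectSketch(int fd, const struct sockaddr *addr,
                                 socklen_t addrlen, int timeout_ms) {
    if (connect(fd, addr, addrlen) == -1) {
        if (errno != EINPROGRESS) return C_ERR;            /* immediate failure */
        struct pollfd pfd = { .fd = fd, .events = POLLOUT, .revents = 0 };
        if (poll(&pfd, 1, timeout_ms) <= 0) return C_ERR;  /* timeout or poll error */
        int err = 0; socklen_t len = sizeof(err);
        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) == -1 || err != 0)
            return C_ERR;                                  /* deferred connect error */
    }
    return C_OK;
}
```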
-
Binbin authored
After #12822, when pubsubshard_channels becomes empty, kvstoreDictDelete deletes the dict (this is currently the only caller that deletes dicts when they become empty), and in the next loop iteration we would make an invalid call to dictNext. Now, after the dict becomes empty, we break out of the loop without calling dictNext.
-
Binbin authored
Currently, once active defrag starts, we cannot adjust active_defrag_running downwards. This is because active_defrag_running is dynamically computed based on the fragmentation, and we think we should not lower the effort when the fragmentation drops. However, active_defrag_running is also derived from the configuration, and in this case we were not respecting cycle-min or cycle-max. Some people may realize halfway through that defrag consumes a lot and want to tune it down; previously the only way to adjust active_defrag_running downwards was to turn activedefrag off and then on again. So in this PR, when an active defrag configuration change is made, we re-compute it. The configuration items are:
- active-defrag-cycle-min
- active-defrag-cycle-max
- active-defrag-threshold-upper
-
Binbin authored
Since a DB in cluster mode is now divided into 16384 dicts, we directly check kvstoreDictSize instead of kvstoreSize, which gives a higher probability of being able to skip the lookup. The other change is a cleanup: kvstoreGetHash should obviously be applied to the db->expires dicts.
-
Binbin authored
When the dict is NULL, we also need to advance resize_cursor; otherwise the loop keeps hitting a useless continue here, and there is no way to resize the dicts behind it. Introduced in #12822. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 05 Feb, 2024 2 commits
-
-
guybe7 authored
# Description
Gather most of the scattered `redisDb`-related code from the per-slot dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e. a class that represents an array of dictionaries.

# Motivation
The main motivation is code cleanliness; the idea of using an array of dictionaries is very well suited to becoming a self-contained data structure. This allowed cleaning up some ugly code, among others: loops that run twice over the main dict and the expires dict, and duplicate code for allocating and releasing this data structure.

# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848 where the `rehashing` list is global (handling rehashing `dict`s is the responsibility of `kvstore` and should not be managed by the server).
2. This PR also replaces the type of `server.pubsubshard_channels` from `dict**` to `kvstore` (original PR: https://github.com/redis/redis/pull/12804). After that was done, server.pubsub_channels was also changed to a `kvstore` (with only one `dict`, which seems odd) just to make the code cleaner by giving it the same type as `server.pubsubshard_channels`, see `pubsubtype.serverPubSubChannels`.
3. The keys and expires kvstores are currently configured to allocate the individual dicts only when the first key is added (unlike before, when they were allocated in advance), but they won't release them when the last key is deleted.

Worth mentioning that due to the recent change, the reply of DEBUG HTSTATS changed in case no keys were ever added to the db.

before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
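A hedged sketch of the shape of such a container (field names illustrative; the real kvstore keeps more bookkeeping, such as the rehashing list and a resize cursor): an array of dicts plus the metadata needed to treat them as a single keyspace.
```c
/* Illustrative only; not the actual kvstore.c layout. */
typedef struct dict dict;

typedef struct kvstoreSketch {
    dict **dicts;                     /* one dict per slot, allocated lazily */
    int num_dicts;                    /* e.g. 1 standalone, 16384 in cluster */
    unsigned long long key_count;     /* total keys across all dicts */
    unsigned long long bucket_count;  /* total hash buckets, for stats */
} kvstoreSketch;
```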
-
Binbin authored
When db->expires_cursor == 0, it means the DB has finished the scan, so we should exit the loop to avoid useless scanning. It is easy to see the active expire timeout with the modified test: for example, assume there is only 1 expired key in the DB and the size/buckets ratio is less than 1%, which means we skip it in isExpiryDictValidForSamplingCb and the returned expires_cursor is 0. Because `data.sampled == 0` is always true, `repeat` is also always true, so we keep scanning the DB, but every time it is skipped by the previous check (expires_cursor = 0), until the time limit is finally exhausted.
-
- 04 Feb, 2024 2 commits
-
-
Daz authored
The JSON files lack the following structural API changes:
- GEORADIUSBYMEMBER: added the ANY option for COUNT since 6.2.0.
- GEORADIUSBYMEMBER_RO: added the ANY option for COUNT since 6.2.0.
- GEORADIUS_RO: added support for uppercase unit names since 7.0.0.
- GEORADIUSBYMEMBER_RO: added support for uppercase unit names since 7.0.0.
--------- Signed-off-by:
daz-3ux <daz-3ux@proton.me> Co-authored-by:
bodong.ybd <bodong.ybd@alibaba-inc.com> Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by:
yangpengda.333 <yangpengda.333@bytedance.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Yanqi Lv authored
Currently, we compute `db->avg_ttl` after each short `dbScan` sweep (a few buckets, without checking the time limit). But after each `dbScan` sweep we don't have much data, and this makes db->avg_ttl less precise. For example, even if we scan the whole db, we can't get the exact avg_ttl because we split the data: because of the running average, if we issue 16 calls to scan, we give lower weight to the first one and higher weight to the last one. I think we should calculate `db->avg_ttl` only after completing more of the db iteration (when the time limit is hit, or when we start iterating the next db), because then we have more sample data for this db and can get a more accurate result. In the best case, if we scan the whole db, we can get the exact avg_ttl. In this PR, we postpone the avg_ttl calculation until the time-limit check or the iteration of the next db, so we can accumulate more data and get a more precise avg_ttl. Note that we still need to make sure to decay the old TTLs at the same speed as before, which is why we want to run the decay mechanism several times, or use the pow formula, see the comment in the code. In my experiments, this PR improves accuracy by 89% or 52% in different workloads. Co-authored-by:
Oran Agra <oran@redislabs.com>
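A hedged sketch of the accumulate-then-decay idea (not the actual activeExpireCycle code): samples gathered over the whole iteration are folded in once, while the old average is decayed as many times as it previously would have been, using a power instead of repeated per-sweep updates.
```c
#include <math.h>

/* Per-sweep update: avg = avg*(1-alpha) + sample*alpha.
 * Applying it `nsweeps` times with the same sampled average is equivalent
 * to one update with weight 1 - (1-alpha)^nsweeps, so the decay speed of
 * the old value is preserved while using all accumulated samples at once. */
static double updateAvgTtl(double old_avg, double sampled_avg,
                           int nsweeps, double alpha) {
    double keep = pow(1.0 - alpha, nsweeps);
    return old_avg * keep + sampled_avg * (1.0 - keep);
}
```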
-
- 01 Feb, 2024 1 commit
-
-
Yanqi Lv authored
In Redis, an RDB is mainly produced in three scenarios:
- backup, such as the `bgsave` and `save` commands
- full sync in replication
- AOF rewrite if `aof-use-rdb-preamble` is yes

We also have some RDB flags to identify the purpose of RDB saving:
```C
/* flags on the purpose of rdb save or load */
#define RDBFLAGS_NONE 0                 /* No special RDB loading. */
#define RDBFLAGS_AOF_PREAMBLE (1<<0)    /* Load/save the RDB as AOF preamble. */
#define RDBFLAGS_REPLICATION (1<<1)     /* Load/save for SYNC. */
```
But currently these flags and the purposes of RDB saving don't exactly match. I found this in `rdbSaveRioWithEOFMark`, which calls `startSaving` with `RDBFLAGS_REPLICATION` but `rdbSaveRio` with `RDBFLAGS_NONE`:
```C
int rdbSaveRioWithEOFMark(int req, rio *rdb, int *error, rdbSaveInfo *rsi) {
    char eofmark[RDB_EOF_MARK_SIZE];
    startSaving(RDBFLAGS_REPLICATION);
    getRandomHexChars(eofmark,RDB_EOF_MARK_SIZE);
    if (error) *error = 0;
    if (rioWrite(rdb,"$EOF:",5) == 0) goto werr;
    if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
    if (rioWrite(rdb,"\r\n",2) == 0) goto werr;
    if (rdbSaveRio(req,rdb,error,RDBFLAGS_NONE,rsi) == C_ERR) goto werr;
    if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
    stopSaving(1);
    return C_OK;

werr: /* Write error. */
    /* Set 'error' only if not already set by rdbSaveRio() call. */
    if (error && *error == 0) *error = errno;
    stopSaving(0);
    return C_ERR;
}
```
In this PR, I refine the purpose of RDB saving with accurate flags.
-
- 31 Jan, 2024 3 commits
-
-
Binbin authored
CI reports this failure:
```
*** [err]: Don't rehash if used memory exceeds maxmemory after rehash in tests/unit/maxmemory.tcl
Expected '4098' to equal or match '4002'

WARNING: the new maxmemory value set via CONFIG SET (1176088) is smaller than the current memory usage (1231083)
```
It can be seen from the log that used_memory changed before we set maxmemory. The reason is that in #12819, in cron, in addition to trying to shrink, we also try to expand. The dict was expanded by cron before we set maxmemory, causing the test to fail. Before setting maxmemory, we now only add 4095 keys to avoid triggering the resize.
-
Binbin authored
When we use a timer to unblock a client in a module, if the timer period and the block timeout are very close, they will unblock the client in the same event loop iteration and trigger the assertion. In moduleBlockedClientTimedOut we protect against re-processing, so we don't actually call updateStatsOnUnblock (see #12817) and therefore cannot reset c->duration. The root cause is that unblockClientOnTimeout() didn't realize that bc had already been unblocked. We add a module function to determine whether bc is still blocked, and use it in unblockClientOnTimeout() to exit early. This is the stack:
```
beforeSleep
blockedBeforeSleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
unblockClient
resetClient
-- assertion, crash the server
'c->duration == 0' is not true
```
-
Binbin authored
The block timeout is passed in the test case, but we do not pass in the timeout_callback, and it crashes when unblocking. In this case, moduleBlockedClientTimedOut now checks timeout_callback. This is the stack:
```
beforeSleep
blockedBeforeSleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
replyToBlockedClientTimedOut
moduleBlockedClientTimedOut
-- timeout_callback is NULL, invalidFunctionWasCalled
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
```
-
- 30 Jan, 2024 6 commits
-
-
Chen Tianjie authored
Add a way to HSCAN a hash key and get only the field names. The command syntax is now:
```
HSCAN key cursor [MATCH pattern] [COUNT count] [NOVALUES]
```
When `NOVALUES` is given, the command only returns the keys (field names) in the hash. --------- Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech>
-
Slava Koyfman authored
Adds an ability to kill clients older than a specified age. Also, fixed the age calculation in `catClientInfoString` to use `commandTimeSnapshot` instead of the old `server.unixtime`, and added missing documentation for `CLIENT KILL ID` to output of `CLIENT help`. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Oran Agra authored
Tests consistently fail on timeout (a sleep that's too short). They now take more time because in #12819 we iterate over all dicts, not just non-empty ones. The tests passed the PR's CI because it skips the `slow` tag, which might have been misplaced, but now it is probably required. With the fix, the tests take quite a lot of time:
```
[ok]: Redis can trigger resizing (1860 ms)
[ok]: Redis can rewind and trigger smaller slot resizing (744 ms)
```
Before #12819:
```
[ok]: Redis can trigger resizing (309 ms)
[ok]: Redis can rewind and trigger smaller slot resizing (295 ms)
```
Failure: https://github.com/redis/redis/actions/runs/7704158180/job/20995931735
```
*** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
scan didn't handle slot skipping logic.
*** [err]: Redis can trigger resizing in tests/unit/other.tcl
Expected '[Dictionary HT]
Hash table 0 stats (main hash table):
table size: 128
number of elements: 5
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
' to match '*table size: 8*' (context: type eval line 29 cmd {assert_match "*table size: 8*" [r debug HTSTATS 0]} proc ::test)
*** [err]: Redis can rewind and trigger smaller slot resizing in tests/unit/other.tcl
Expected '[Dictionary HT]
Hash table 0 stats (main hash table):
table size: 256
number of elements: 10
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
' to match '*table size: 16*' (context: type eval line 27 cmd {assert_match "*table size: 16*" [r debug HTSTATS 0]} proc ::test)
```
-
Binbin authored
This was introduced in #13004, which missed this assignment. It causes the timeout to be a random value (possibly less than now); then, in the `Unblock by timer` test, the client is unblocked and calls timeout_callback, and since the callback is NULL the server crashes. The crash stack is:
```
beforesleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
replyToBlockedClientTimedOut
moduleBlockedClientTimedOut
-- the timeout_callback is NULL, invalidFunctionWasCalled
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
```
-
Binbin authored
This allows specifying a timeout for opening the TCP connection to a server. The timeout defaults to 0, which means no limit (depending on the OS). It can be specified using the new `-t` switch. Revives #3764, fixes #3763. --------- Co-authored-by:
Itamar Haber <itamar@redislabs.com> Co-authored-by:
yoav-steinberg <yoav@redislabs.com>
-
Binbin authored
In #11012, we reprocess the command when a client is unblocked on keys. For some blocking commands, for example in the XREADGROUP BLOCK scenario, re-processing the command recalculates the block timeout, causing the blocking time to be reset. This commit adds a new CLIENT_REPROCESSING_COMMAND client flag to explicitly let the command know that it is being re-processed; later, in blockForKeys, we then do not reset the timeout. Affected BLOCK cases: list / zset / stream (test cases added for each). Unaffected cases: module (never re-processes commands) and WAIT / WAITAOF (never re-processed). Fixes #12998.
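A hedged sketch of how such a flag can gate the timeout reset (hypothetical names and flag value, not the actual blockForKeys code): the absolute deadline is computed only the first time the command is processed; a re-processed command keeps its original deadline.
```c
/* Illustrative only; flag value and struct layout are hypothetical. */
#define CLIENT_REPROCESSING_COMMAND (1ULL << 0)

typedef struct {
    unsigned long long flags;
    long long block_deadline_ms;  /* absolute time at which the block expires */
} clientSketch;

static void blockOnKeysSketch(clientSketch *c, long long now_ms,
                              long long timeout_ms) {
    /* Only compute the deadline on first processing; re-processing after an
     * unblock keeps the deadline that was set originally. */
    if (!(c->flags & CLIENT_REPROCESSING_COMMAND))
        c->block_deadline_ms = now_ms + timeout_ms;
}
```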
-
- 29 Jan, 2024 3 commits
-
-
Chen Tianjie authored
The function `tryResizeHashTables` only attempts to shrink dicts that have keys (a change from #11695); this was a serious problem until the change in #12850, since it meant that if all keys were deleted, we would never shrink the dict. Still, both dictShrink and dictExpand may be blocked by a forked child process, so the cron job needs to perform both dictShrink and dictExpand, not just for non-empty dicts but for all dicts in the DBs. What this PR does (see the sketch below):
1. Try to resize all dicts in the DBs (not just non-empty ones, as it has been since #12850).
2. Handle both shrink and expand (not just shrink, as it has been since forever).
3. Refactor some APIs around dict resizing (get rid of `htNeedsShrink` and `dictShrinkToFit`, and expose `dictShrinkIfNeeded` and `dictExpandIfNeeded`, which already contain all the code of the functions we removed, to make the APIs neater).
4. In the `Don't rehash if redis has child process` test, now that cron does the resizing, we no longer need to write to the DB after the child process is killed, and can wait for the cron to expand the hash table.
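A self-contained sketch of the cron pass described in points 1 and 2 (the resize conditions and names are illustrative, not the real dict.c logic): every dict, including empty ones, gets both a shrink check and an expand check, provided resizing is currently allowed.
```c
#include <stddef.h>

/* Minimal stand-in for a hash table (illustration only). */
typedef struct { long used, size; } dictSketch;

/* Illustrative resize checks; the real logic compares fill ratios and
 * respects forbid/avoid policies around fork(). */
static void shrinkIfNeeded(dictSketch *d) { if (d->size > 4 && d->used * 8 < d->size) d->size /= 2; }
static void expandIfNeeded(dictSketch *d) { if (d->used >= d->size) d->size *= 2; }

/* Cron pass: visit all dicts (empty ones too) and apply both checks,
 * unless resizing is disallowed (e.g. a fork child exists). */
static void tryResizeAllDicts(dictSketch **dicts, int ndicts, int resize_allowed) {
    if (!resize_allowed) return;
    for (int i = 0; i < ndicts; i++) {
        if (dicts[i] == NULL) continue;
        shrinkIfNeeded(dicts[i]);
        expandIfNeeded(dicts[i]);
    }
}
```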
-
Ozan Tezcan authored
Modules may want to handle allocation failures gracefully. Adding RM_TryCalloc() and RM_TryRealloc() for it. RM_TryAlloc() was added before: https://github.com/redis/redis/pull/10541
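A hedged usage sketch (assuming the new Try variants mirror the signatures of RedisModule_Calloc/RedisModule_Realloc and return NULL instead of aborting on failure):
```c
#include "redismodule.h"

/* A module command that degrades gracefully when memory is tight.
 * RedisModule_TryCalloc(nmemb, size) is assumed to return NULL on failure. */
int MyAlloc_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);

    size_t n = 1024 * 1024;
    long long *buf = RedisModule_TryCalloc(n, sizeof(long long));
    if (buf == NULL)
        return RedisModule_ReplyWithError(ctx, "ERR insufficient memory");

    /* ... use buf ... */
    RedisModule_Free(buf);
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}
```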
-
Binbin authored
Fix maxmemory-samples stack overflow crash in evictionPoolPopulate, limit its value to [1,64] (#13000). We have not limited the value of maxmemory-samples in the past, so it could be set very large. If it is, we get a stack overflow in evictionPoolPopulate when key eviction is triggered. There is no reason for this config to be set so high, so just limit its range to [1,64].
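A hedged sketch of why an unbounded sample count is dangerous (illustrative code, not the actual evictionPoolPopulate): the sampled entries live in a stack-allocated variable-length array sized by the config, so a huge value blows the stack, while clamping to [1,64] keeps the frame small.
```c
/* Illustrative only. A VLA sized by a config value lives on the stack,
 * so e.g. maxmemory-samples = 10000000 would overflow it; clamping the
 * config to [1,64] removes the risk. */
static void samplePoolSketch(int maxmemory_samples) {
    if (maxmemory_samples < 1) maxmemory_samples = 1;
    if (maxmemory_samples > 64) maxmemory_samples = 64;   /* the new limit */

    void *samples[maxmemory_samples];  /* stack usage grows with the config */
    for (int i = 0; i < maxmemory_samples; i++) samples[i] = 0;
    (void)samples;
}
```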
-
- 27 Jan, 2024 1 commit
-
-
Roshan Khatri authored
#### Problem Statement:
For any read/update operation during rehashing, we do ~10+ random DRAM lookups to perform the rehashing, as we use `rehashidx` to rehash 10 buckets whose dict entries most likely aren't cached in the CPU or near the bucket we are operating on. If those random buckets are empty, the rehashing step for that command execution is skipped.

#### Implementation:
To reduce the performance regression while a dict is rehashing, we determine the index at which the key would be stored in the 0th HT and check whether that index has already been rehashed; if not, we rehash the bucket containing the key, moving that bucket from the 0th HT to the 1st HT. If the key has already been rehashed, we perform the random-access bucket rehash (using `rehashidx`), and we again verify whether rehashing is still ongoing and look up the key in the respective HT. This ensures rehashing is not skipped in any command call, and that we rehash either the key's bucket or a random bucket in each call.

#### Changes in this PR:
- Added a new method `dictBucketRehash` to perform rehash on a single bucket.
- Helper function `moveKeysInBucketOldtoNew` for `dictRehash` and `dictBucketRehash` to move all the keys in a bucket from the old to the new HT.
- Helper function `verifyMoreRehashRequired` for `dictRehash` and `dictBucketRehash` to check whether we have already rehashed the whole table and whether more rehashing is required.

### Benchmark:
- This PR still shows a **~13%** improvement in latency during rehashing.
- Rehashing is now **~2%** faster in this PR compared to unstable.
--------- Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Madelyn Olson <34459052+madolson@users.noreply.github.com>
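A hedged sketch of the per-key bucket rehash idea (hypothetical names and a much simpler structure than the real dictBucketRehash): before looking a key up, migrate exactly the old-table bucket that would hold it, so the lookup that follows touches memory that was just brought into cache.
```c
#include <stddef.h>

/* Illustrative chained-hash entry; not the real dictEntry. */
typedef struct entrySketch {
    struct entrySketch *next;
    unsigned long hash;
} entrySketch;

/* Move every entry of the single old-table bucket the key hashes to
 * into the new table, then mark that bucket as migrated. */
static void rehashBucketForKey(entrySketch **oldtab, unsigned long oldmask,
                               entrySketch **newtab, unsigned long newmask,
                               unsigned long keyhash) {
    unsigned long idx = keyhash & oldmask;  /* bucket the key is in, if not yet moved */
    entrySketch *e = oldtab[idx];
    while (e) {
        entrySketch *next = e->next;
        unsigned long nidx = e->hash & newmask;
        e->next = newtab[nidx];             /* prepend into the new bucket */
        newtab[nidx] = e;
        e = next;
    }
    oldtab[idx] = NULL;                     /* this bucket is fully migrated */
}
```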
-
- 26 Jan, 2024 1 commit
-
-
judeng authored
The issue was introduced in #12799: the script cannot find the correct src and deps directories, so it always reports dirty as 0.
-