- 19 Jan, 2024 2 commits
-
-
Yanqi Lv authored
Before this change (most recently modified in https://github.com/redis/redis/pull/12850#discussion_r1421406393), the trigger for a normal expand was 100% utilization and the trigger for a normal shrink was 10% (HASHTABLE_MIN_FILL). During fork (DICT_RESIZE_AVOID), when we want to avoid rehashing, the trigger thresholds were multiplied by 5 (`dict_force_resize_ratio`), i.e. 500% for expand and 2% (100/10/5) for shrink. However, in `dictRehash` (the incremental rehashing), the rehashing threshold for shrinking during fork (DICT_RESIZE_AVOID) was mistakenly 20%. This meant that if a shrink was triggered while `dict_can_resize` is `DICT_RESIZE_ENABLE` (threshold 10%), the rehashing could continue after `dict_can_resize` became `DICT_RESIZE_AVOID`, causing unwanted copy-on-write damage. It would make sense for the thresholds of the rehash trigger and of the incremental rehashing to be the same; however, in one we compare the size of the hash table to the number of records, and in the other we compare the size of ht[0] to the size of ht[1], so the formula is not exactly the same. To make things easier, we change all the thresholds to powers of 2: the normal shrink threshold changes from 100/10 (i.e. 10%) to 100/8 (i.e. 12.5%), and the fork-time ratio changes from 5 to 4, i.e. from 500% to 400% for expand, and from 2% (100/10/5) to 3.125% (100/8/4). The resulting trigger conditions are sketched below.
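A minimal sketch of the new power-of-2 trigger conditions (illustrative constants and helper names, not the actual dict.c code):

```c
/* Expand at 100% fill (400% while resizing is avoided during fork);
 * shrink at 1/8 = 12.5% fill (1/32 = 3.125% while avoided). */
#define HASHTABLE_MIN_FILL      8  /* was 10 */
#define DICT_FORCE_RESIZE_RATIO 4  /* was 5  */

static int shouldExpand(unsigned long used, unsigned long size, int resize_avoid) {
    if (!resize_avoid) return used >= size;                 /* >= 100% */
    return used >= size * DICT_FORCE_RESIZE_RATIO;          /* >= 400% */
}

static int shouldShrink(unsigned long used, unsigned long size, int resize_avoid) {
    if (!resize_avoid) return used * HASHTABLE_MIN_FILL <= size;            /* <= 12.5%  */
    return used * HASHTABLE_MIN_FILL * DICT_FORCE_RESIZE_RATIO <= size;     /* <= 3.125% */
}
```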
-
debing.sun authored
Fix #12785 and other race condition issues. See the following isolated comments.

The following report was obtained using the thread sanitizer.
```sh
make SANITIZER=thread
./runtest-moduleapi --config io-threads 4 --config io-threads-do-reads yes --accurate
```

1. Fixed thread-safety issue in RM_UnblockClient()
Related discussion: https://github.com/redis/redis/pull/12817#issuecomment-1831181220
* When blocking a client in a module using `RM_BlockClientOnKeys()` or `RM_BlockClientOnKeysWithFlags()` with a timeout_callback, calling RM_UnblockClient() from module threads can lead to race conditions in `updateStatsOnUnblock()`.
  - Introduced: Version: 6.2, PR: #7491
  - Touches: `server.stat_numcommands`, `cmd->latency_histogram`, `server.slowlog`, and `server.latency_events`
  - Harm Level: High. Potentially corrupts the memory of `cmd->latency_histogram`, `server.slowlog`, and `server.latency_events`.
  - Solution: Differentiate whether the call to moduleBlockedClientTimedOut() comes from the module or the main thread. Since we can't know whether RM_UnblockClient() comes from a module thread, we always assume it does and let `updateStatsOnUnblock()` update the unblock status asynchronously.
* When an error reply is issued in timeout_callback(), ctx is not thread-safe, eventually leading to race conditions in `afterErrorReply`.
  - Introduced: Version: 6.2, PR: #8217
  - Touches: `server.stat_total_error_replies`, `server.errors`
  - Harm Level: High. Potentially corrupts the memory of `server.errors`.
  - Solution: Create the ctx in `timeout_callback()` with `REDISMODULE_CTX_THREAD_SAFE`, and reply errors to the client asynchronously.
2. Made the RM_Reply*() family of APIs thread-safe
Related discussion: https://github.com/redis/redis/pull/12817#discussion_r1408707239
Call chain: `RM_Reply*()` -> `_addReplyToBufferOrList()` -> touches server.current_client
- Introduced: Version: 7.2.0, PR: #12326
- Harm Level: None. Since the module fake client won't have the `CLIENT_PUSHING` flag, even if we touch server.current_client, we can still exit after the `c->flags & CLIENT_PUSHING` check.
- Solution: Check `c->flags & CLIENT_PUSHING` earlier.
3. Made freeClient() thread-safe
Fix #12785
- Introduced: Version: 4.0, Commit: https://github.com/redis/redis/commit/3fcf959e609e850a114d4016843e4c991066ebac
- Harm Level: Moderate
  * Triggers an assertion. It happens when a module thread calls freeClient while the io-thread is in progress; this only triggers an assertion and doesn't cause any race conditions.
  * Touches `server.current_client`, `server.stat_clients_type_memory`, and `clientMemUsageBucket->clients`. This happens between the main thread and the module threads and may cause data corruption:
    1. Erroneously resets `server.current_client` to NULL; theoretically this won't happen, because the module has already reset `server.current_client` to the old value before entering freeClient.
    2. Corrupts `clientMemUsageBucket->clients` in updateClientMemUsageAndBucket().
    3. Causes the server.stat_clients_type_memory memory statistics to be inaccurate.
- Solution:
  * No longer count memory usage for fake clients, to avoid updating `server.stat_clients_type_memory` in freeClient.
  * No longer reset `server.current_client` in unlinkClient, because the fake client won't be evicted or disconnected in the middle of the process.
  * Assert `io_threads_op == IO_THREADS_OP_IDLE` only if c is not a fake client.
4. Fixed freeing client args without the GIL
Related discussion: https://github.com/redis/redis/pull/12817#discussion_r1408706695
When freeing retained strings in a module thread (refcount decrement), or using them in some way (refcount increment), we should do so while holding the GIL; otherwise, they might be freed at the same time the main thread is processing the unblocked client state.
- Introduced: Version: 6.2.0, PR: #8141
- Harm Level: Low. Triggers an assertion, a double free, or a memory leak.
- Solution: Document that module API users need to ensure any access to these retained strings is done with the GIL locked (see the sketch after this entry).
5. Fix adding a fake client to server.clients_pending_write
It would incorrectly log the memory usage for the fake client.
Related discussion: https://github.com/redis/redis/pull/12817#issuecomment-1851899163
- Introduced: Version: 4.0, Commit: https://github.com/redis/redis/commit/9b01b64430fbc1487429144d2e4e72a4a7fd9db2
- Harm Level: None. Only results in a NOP.
- Solution:
  * Don't add fake clients to server.clients_pending_write.
  * Add a c->conn assertion to updateClientMemUsageAndBucket() and updateClientMemoryUsage() to avoid the same issue in the future. It is now the responsibility of their callers to avoid passing in fake clients.
6. Fix calling RM_BlockedClientMeasureTimeStart() and RM_BlockedClientMeasureTimeEnd() without the GIL
- Introduced: Version: 6.2, PR: #7491
- Harm Level: Low. Causes inaccuracies in the command latency histogram and slow logs, but does not corrupt memory.
- Solution: Module API users, if they know that non-thread-safe APIs will be used from multiple threads, need to take responsibility for protecting them with their own locks instead of the GIL, as using the GIL is too expensive.

### Other issue
1. RM_Yield is not thread-safe, fixed via #12905.

### Summary
1. Fix thread-safety issues for `RM_UnblockClient()`, `freeClient()` and `RM_Yield`, potentially preventing memory corruption, data disorder, or assertion failures.
2. Updated the docs and module tests to clarify module API users' responsibility for locking non-thread-safe APIs in multi-threading, such as RM_BlockedClientMeasureTimeStart/End(), RM_FreeString(), RM_RetainString(), and RM_HoldString().

### About backporting to 7.2
1. The implementation of (1) is not too satisfying; would like to get more eyes on it.
2. (2) and (3) can safely be backported.
3. (4) and (6) just modify the module tests and update the documentation; no need for a backport.
4. (5) is harmless; no need for a backport.
--------- Co-authored-by:
Oran Agra <oran@redislabs.com>
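To illustrate item 4, a minimal sketch of the documented locking rule, assuming a thread-safe context `ctx` and a string previously retained with `RedisModule_RetainString()` (the helper name is hypothetical):

```c
#include "redismodule.h"

/* Any refcount change on a retained string from a module thread must
 * happen under the GIL; otherwise the main thread may free the string
 * concurrently while processing the unblocked client. */
static void releaseRetainedString(RedisModuleCtx *ctx, RedisModuleString *retained) {
    RedisModule_ThreadSafeContextLock(ctx);    /* acquire the GIL */
    RedisModule_FreeString(ctx, retained);     /* refcount decrement */
    RedisModule_ThreadSafeContextUnlock(ctx);  /* release the GIL */
}
```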
-
- 18 Jan, 2024 1 commit
-
-
Binbin authored
Before #12850, we would only try to shrink the dict in serverCron, which we could control by using a child process, but now the shrink check is called every time we delete a key. In these tests (added in #12802), we meant to disable resizing, but during the deletes the dict meets the force-shrink condition, e.g. 2 / 128 = 0.015 < 0.2, so the delete triggers a forced resize and causes the test to fail. In this commit, we keep the load factor at 3 / 128 = 0.023, i.e. it does not meet the force-shrink condition.
-
- 17 Jan, 2024 1 commit
-
-
Binbin authored
The test has a race:
```
*** [err]: Redis can rewind and trigger smaller slot resizing in tests/unit/other.tcl
Expected '[Dictionary HT]
Hash table 0 stats (main hash table):
 table size: 12
 number of elements: 2
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
' to match '*table size: 8*' (context: type eval line 12 cmd {assert_match "*table size: 8*" [r debug HTSTATS 0]} proc ::test)
```
When `r del "{alice}$j"` is executed in the loop, once the number of keys drops into [9, 12] the load factor has met HASHTABLE_MIN_FILL; if serverCron happens to trigger a slot dict resize at that point, the test fails, because there is no way to meet HASHTABLE_MIN_FILL again in the subsequent dels. The solution is to avoid triggering the resize in advance: we can either use MULTI to delete the keys at once, or disable the resize. Since we disabled resize in the previous test, the fix also uses the method of disabling resize. The test was introduced in #12802.
-
- 15 Jan, 2024 2 commits
-
-
Yanqi Lv authored
When we insert entries into a dict, it may autonomously expand if needed. However, when we delete entries from a dict, it doesn't shrink to the proper size. If there are few entries in a very large dict, it may cause a huge waste of memory and inefficiency when iterating. The main keyspace dicts (keys and expires) are shrunk by cron (`tryResizeHashTables` calls `htNeedsResize` and `dictResize`), and some data structures such as zset and hash also do that (call `htNeedsResize`) right after a loop of calls to `dictDelete`, but many other dicts are completely missing that call (they can only expand). In this PR, we provide the ability to automatically shrink the dict when deleting. The conditions triggering the shrinking are the same as `htNeedsResize` used to have, i.e. we expand when we're over 100% utilization and shrink when we're below 10% utilization. Additionally:
* Add `dictPauseAutoResize` so that flows doing mass deletions only trigger shrinkage at the end (see the sketch below).
* Rename `dictResize` to `dictShrinkToFit` (same logic as before, but a name that better describes it).
* Rename `_dictExpand` to `_dictResize` (same logic as before, but a name that better describes it).
Related to discussion https://github.com/redis/redis/pull/12819#discussion_r1409293878 --------- Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
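A sketch of the mass-deletion pattern this enables (dictPauseAutoResize and dictShrinkToFit are named above; dictResumeAutoResize is assumed as the matching counterpart):

```c
#include "dict.h"
#include "sds.h"

/* Delete many keys with a single shrink check at the end instead of
 * one check per dictDelete(). */
static void deleteMany(dict *d, sds *keys, int count) {
    dictPauseAutoResize(d);       /* skip the shrink check on each delete */
    for (int i = 0; i < count; i++)
        dictDelete(d, keys[i]);
    dictResumeAutoResize(d);      /* assumed counterpart */
    dictShrinkToFit(d);           /* shrink once, if below the threshold */
}
```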
-
zhaozhao.zz authored
Regarding how to obtain the hash slot of a key, there is an optimization in `getKeySlot()`: it is used to avoid redundant hash calculations for keys. When the current client is in the process of executing a command, it can directly use the slot of the current client, because the slot to access has already been calculated in advance in `processCommand()`. However, scripts are a special case: in default mode or with `allow-cross-slot-keys` enabled, they are allowed to access keys beyond the pre-declared range. This means that the keys they operate on may not belong to the slot of the pre-declared keys. Currently, when the commands in a script are executed, the slot of the original client (i.e., the current client) is not correctly updated, leading to subsequent access to the wrong slot. This PR fixes the above issue. When checking the cluster constraints in a script, the slot to be accessed by the current command is set for the original client (i.e., the current client). This ensures that `getKeySlot()` gets the correct slot cache. Additionally, the following modifications are made: 1. The 'sort' and 'sort_ro' commands use `getKeySlot()` instead of `c->slot`, because the client could be an engine client in a script, which could lead to potential bugs. 2. `getKeySlot()` is also used in pubsub to obtain the slot for the channel, standardizing the way slots are retrieved.
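A simplified sketch of the cache described above (not the actual db.c code; keyHashSlot() is the crc16-based slot function):

```c
/* Reuse the slot pre-computed for the current client in
 * processCommand(); fall back to hashing the key otherwise. After this
 * PR, scripts update the cached slot before each command so this fast
 * path stays correct. */
static int getKeySlot(sds key) {
    if (server.current_client && server.current_client->slot >= 0)
        return server.current_client->slot;  /* cached in processCommand() */
    return keyHashSlot(key, (int)sdslen(key));
}
```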
-
- 11 Jan, 2024 1 commit
-
-
bentotten authored
When a shard has a sole primary node, it marks a potentially failed replica as FAIL instead of PFAIL (#12824). Fixes an issue where a single primary could not mark a replica as failed in a single-shard cluster.
-
- 10 Jan, 2024 1 commit
-
-
Binbin authored
The test was introduced in #10745, but we forgot to add it to test_helper.tcl, so our CI did not actually run it. This PR adds it and ensures it passes CI tests.
-
- 09 Jan, 2024 1 commit
-
-
Madelyn Olson authored
Fix a daily test failure caused by Alpine not supporting stack traces, and add an extra assertion related to making sure the stack trace is printed twice.
-
- 03 Jan, 2024 1 commit
-
-
Madelyn Olson authored
This change is trying to make two failure modes a bit easier to deep-dive: 1. If a serverPanic or serverAssert occurs during the info (or module) printing, it will recursively panic, which is a lot of fun as it will just keep printing recursively. It will eventually overflow the stack, but will generate a lot of text in the process. 2. When a segfault happens during the segfault handler, no information is communicated other than that it happened. This can be problematic because `info` may help diagnose the real issue, but without fixing the recursive crash it might be hard to get at that info.
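A minimal sketch of the kind of reentrancy guard that addresses the first failure mode (illustrative names, not the actual debug.c code):

```c
#include <stdatomic.h>

static atomic_int crash_report_depth = 0;

/* If a panic fires while we are already printing a crash report, bail
 * out instead of recursively printing until the stack overflows. */
void printCrashReport(void) {
    if (atomic_fetch_add(&crash_report_depth, 1) > 0) {
        /* Nested crash: stop here rather than recursing. */
        return;
    }
    /* ... print info sections, stack trace, etc. ... */
}
```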
-
- 27 Dec, 2023 2 commits
-
-
Chen Tianjie authored
We have already replaced the `slots_to_keys` radix tree with a key->slot linked list (#9356), and then replaced the list with slot-specific dictionaries for keys (#11695). Shard channels behave just like keys in many ways, and we also need a slots->channels mapping; currently this is still done with a radix tree. So we should split `server.pubsubshard_channels` into 16384 dicts and drop the radix tree, just like what we did to the DBs.

Some benefits (basically the benefits of what we've done to the DBs):
1. Optimize counting channels in a slot. This is currently used only when removing channels in a slot, but it is potentially more useful: sometimes we need to know how many channels there are in a specific slot when doing slot migration. Counting is now implemented by traversing the radix tree; with this PR it will be as simple as calling `dictSize`, going from O(n) to O(1).
2. The radix tree is removed from the cluster. The shard channel names no longer require additional storage, which saves memory.
3. Potentially useful in slot migration, as shard channels are logically split by slots, making it easier to migrate, remove or add them as a whole.
4. Avoid rehashing one big dict when there is a large number of channels.

Drawbacks:
1. Takes more memory than the radix tree when there are relatively few shard channels.

What this PR does:
1. In cluster mode, split `server.pubsubshard_channels` into 16384 dicts; in standalone mode, still use only one dict.
2. Drop the `slots_to_channels` radix tree.
3. To save memory (addressing the drawback above), all 16384 dicts are created lazily: a dict is initialized only when a channel is about to be inserted into it, and when all its channels are deleted, the dict deletes itself (see the sketch below).
4. Use `server.shard_channel_count` to keep track of the number of all shard channels.
--------- Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech>
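A sketch of the lazy creation scheme from point 3 (illustrative names; assumes a per-slot `dict *` array and a suitable dictType):

```c
/* The dict for a slot is created on first insert and freed again when
 * its last channel is removed, so empty slots cost only a pointer. */
static dict *getShardChannelDict(int slot, int create_if_missing) {
    dict **d = &server.pubsubshard_channels[slot];
    if (*d == NULL && create_if_missing)
        *d = dictCreate(&keylistDictType);
    return *d;
}

static void shardChannelDictMaybeFree(int slot) {
    dict **d = &server.pubsubshard_channels[slot];
    if (*d && dictSize(*d) == 0) {
        dictRelease(*d);  /* last channel gone: the dict deletes itself */
        *d = NULL;
    }
}
```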
-
sundb authored
Fix #12792. On Ubuntu 23 (Lunar), non-root users are not allowed to change the oom_score_adj of a process to a value that is too low. Since the terminal's default oom_score_adj is 200, if we run the test from a terminal, we won't be able to set the oom_score_adj of the redis process to 9 or 22, which is too low.

Reproduction on an Ubuntu 23 (Lunar) terminal:
```sh
$ cat /proc/`pgrep redis-server`/oom_score_adj
200
$ echo 100 > /proc/`pgrep redis-server`/oom_score_adj # success without error
$ echo 99 > /proc/`pgrep redis-server`/oom_score_adj
echo: write error: Permission denied
```
As the output above shows, we can only lower the oom score of redis processes to 100. The test is modified so that oom_score_adj is only ever increased, never decreased. --------- Co-authored-by:
debing.sun <debing.sun@redis.com>
-
- 24 Dec, 2023 2 commits
-
-
Slava Koyfman authored
The previous implementation would disconnect _all_ clients when running `ACL LOAD`, which wasn't very useful. This change brings the behavior in line with that of `ACL SETUSER` and `ACL DELUSER`: only clients whose user is deleted, or clients subscribed to channels to which they no longer have access, will be disconnected. --------- Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Madelyn Olson <34459052+madolson@users.noreply.github.com>
-
Binbin authored
In this PR we add -4 and -6 options to redis-cli to determine IPv4/IPv6 priority in DNS lookups. This was mentioned in https://github.com/redis/redis/pull/11151#issuecomment-1231570651 For now it's only used in CLUSTER MEET. The options also make it possible to reliably test DNS lookups in CI; using them, we can add some localhost tests for #11151. The commit was cherry-picked from #11151; back then we decided to split the PR. Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech>
-
- 17 Dec, 2023 1 commit
-
-
Wen Hui authored
We didn't have a test for HGETALL against a non-existing key, so this adds that test to the test suite, along with wrong-type cases for other commands that were missing them.
-
- 13 Dec, 2023 3 commits
-
-
Chen Tianjie authored
The by/get options of the sort/sort_ro commands used to be forbidden in cluster mode, since we were not sure which slot the pattern may be in. With the optimization done in #12536, patterns can now be mapped to slots, so we can allow by/get options in cluster mode when the pattern maps to the same slot as the key.
-
Binbin authored
In #11489, we considered the ACL username to be sensitive information and ACL GETUSER a sensitive command, and removed it from the redis-cli history file. This PR redacts username information in ACL GETUSER and ACL DELUSER from SLOWLOG, and also removes ACL DELUSER from the redis-cli history file. It also marks tls-key-file-pass and tls-client-key-file-pass as sensitive configs, redacting them from SLOWLOG and removing them from the redis-cli history file.
-
Chen Tianjie authored
In the INFO CLIENTS section, we already have blocked_clients and tracking_clients. We should add a new metric showing the number of pubsub connections, which helps with performance monitoring and troubleshooting.
-
- 11 Dec, 2023 1 commit
-
-
Binbin authored
This is a follow-up fix to #12733. We need to apply the same changes to delKeysInSlot. Refer to #12733 for more details. This PR contains some other minor cleanups / improvements to the test suite and docs. It uses the postnotifications test module in a cluster mode test which revealed a leak in the test module (fixed).
-
- 05 Dec, 2023 1 commit
-
-
Chen Tianjie authored
In #12536 we made a similar optimization for SCAN, using hashtags in patterns. When we can make sure all keys matching the pattern are in the same slot, we can limit the iteration to run over only that one slot.
-
- 30 Nov, 2023 1 commit
-
-
sundb authored
Fix compilation warning in KeySpace_ServerEventCallback and add CFLAGS=-Werror flag for module CI (#12786)

Warning:
```
postnotifications.c:216:77: warning: format specifies type 'long' but the argument has type 'uint64_t' (aka 'unsigned long long') [-Wformat]
        RedisModule_Log(ctx, "warning", "Got an unexpected subevent '%ld'", subevent);
                                                                     ~~~    ^~~~~~~~
                                                                     %llu
```
CI: https://github.com/redis/redis/actions/runs/6937308713/job/18871124342#step:6:115

## Other
Add the `CFLAGS=-Werror` flag for module CI. --------- Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech>
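A sketch of the portable way to silence this class of warning (assumed shape of the fix, not the exact patch): format a uint64_t with the PRIu64 macro from <inttypes.h> instead of %ld:

```c
#include <stdio.h>
#include <inttypes.h>

void log_subevent(uint64_t subevent) {
    /* PRIu64 expands to the correct conversion for uint64_t on every
     * platform, so -Wformat (and thus -Werror) stays quiet. */
    printf("Got an unexpected subevent '%" PRIu64 "'\n", subevent);
}
```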
-
- 29 Nov, 2023 1 commit
-
-
zhaozhao.zz authored
The following four configurations are renamed to align with Redis style:
1. server_cpulist renamed to server-cpulist
2. bio_cpulist renamed to bio-cpulist
3. aof_rewrite_cpulist renamed to aof-rewrite-cpulist
4. bgsave_cpulist renamed to bgsave-cpulist
The original names are retained as aliases to ensure compatibility with old configuration files. We recommend that users gradually transition to the new configuration names to maintain consistency in style.
-
- 28 Nov, 2023 1 commit
-
-
zhaozhao.zz authored
Introduced in #11695. The tryResizeHashTables function gets stuck on the last non-empty slot while iterating through dictionaries; it does not restart from the beginning. The reason for this issue is a problem with the usage of dbIteratorNextDict:
```c
/* Returns next dictionary from the iterator, or NULL if iteration is complete. */
dict *dbIteratorNextDict(dbIterator *dbit) {
    if (dbit->next_slot == -1) return NULL;
    dbit->slot = dbit->next_slot;
    dbit->next_slot = dbGetNextNonEmptySlot(dbit->db, dbit->slot, dbit->keyType);
    return dbGetDictFromIterator(dbit);
}
```
When iterating to the last non-empty slot, next_slot is set to -1, causing it to loop indefinitely on that slot. We need to modify the code to ensure that after iterating to the last non-empty slot, it returns to the first non-empty slot. BTW, tryResizeHashTables actually iterates over slots that have keys; however, its implementation leverages the dbIterator (which is a key iterator) to obtain slot and dictionary information. While this approach works fine, it is not very intuitive. This PR also improves readability by changing the iteration to directly iterate over slots, thereby enhancing clarity (see the sketch below).
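A sketch of the direct slot iteration described above (assumed shape, not the exact patch): walk the slots themselves, wrapping around so a fresh pass never gets stuck on the last non-empty slot.

```c
#define CLUSTER_SLOTS 16384

/* tryResizeDict() is a hypothetical helper standing in for the
 * htNeedsResize/dictResize pair. */
static void tryResizeAllSlots(redisDb *db, int start_slot) {
    for (int i = 0; i < CLUSTER_SLOTS; i++) {
        int slot = (start_slot + i) % CLUSTER_SLOTS;  /* wrap around */
        dict *d = db->dict[slot];
        if (d && dictSize(d) > 0) tryResizeDict(d);
    }
}
```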
-
- 27 Nov, 2023 1 commit
-
-
Binbin authored
When we register a notification or server event in RedisModule_OnLoad, but RedisModule_OnLoad eventually fails, triggering the notification or server event will cause the server to crash. If the loading fails at a later stage of moduleLoad, we do call moduleUnload, which handles all un-registration; but when it fails in the RedisModule_OnLoad call, we only un-registered several specific things, and these were missing:
- moduleUnsubscribeNotifications
- moduleUnregisterFilters
- moduleUnsubscribeAllServerEvents
Refactored the code to reuse the code from moduleUnload. Fixes #12808.
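A sketch of the added failure path (assumed shape; the three un-registration calls are named above, the surrounding function is illustrative):

```c
/* On RedisModule_OnLoad failure, undo every registration the module
 * may have made before failing, mirroring what moduleUnload does. */
static void moduleCleanupAfterOnLoadFailure(struct RedisModule *module) {
    moduleUnsubscribeNotifications(module);
    moduleUnregisterFilters(module);
    moduleUnsubscribeAllServerEvents(module);
}
```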
-
- 23 Nov, 2023 1 commit
-
-
meiravgri authored
See the discussion from after https://github.com/redis/redis/pull/12453 was merged.
----
This PR replaces calls that are not considered async-signal-safe (AS-safe) in the signal handlers with safe calls.

#### **1. serverLog() and serverLogFromHandler()**
`serverLog` uses unsafe calls. It was decided that we will **avoid** `serverLog` calls in the signal handlers when:
* The signal is not fatal, such as SIGALRM. In these cases, we prefer using `serverLogFromHandler`, which is the safe version of `serverLog`. Note they have different prompts:
`serverLog`: `62220:M 26 Oct 2023 14:39:04.526 # <msg>`
`serverLogFromHandler`: `62220:signal-handler (1698331136) <msg>`
* The code was added recently. Calls to `serverLog` by the signal handler have been there ever since Redis has existed and haven't caused problems so far. To avoid regressions, from now on we should use `serverLogFromHandler`.

#### **2. `snprintf`, `fgets` and `strtoul` (base = 16) --------> `_safe_snprintf`, `fgets_async_signal_safe`, `string_to_hex`**
The safe version of `snprintf` was taken from [here](https://github.com/twitter/twemcache/blob/8cfc4ca5e76ed936bd3786c8cc43ed47e7778c08/src/mc_util.c#L754)

#### **3. fopen(), fgets(), fclose() --------> open(), read(), close()**

#### **4. opendir(), readdir(), closedir() --------> open(), syscall(SYS_getdents64), close()**

#### **5. Threads_mngr sync mechanisms**
* Waiting for a thread to generate its stack trace: semaphore --------> busy-wait
* `globals_rw_lock` was removed: as we are not using malloc and the semaphore anymore, we don't need to protect `ThreadsManager_cleanups`.

#### **6. Stacktraces buffer**
The initial problem was that we were not able to safely call malloc within the signal handler. To solve that, we created a buffer on the stack of `writeStacktraces` and saved it in a global pointer, assuming that under normal circumstances the function `writeStacktraces` would complete before any thread attempted to write to it. However, **if threads lag behind, they might access this global pointer after it no longer belongs to the `writeStacktraces` stack, potentially corrupting memory.** To address this, various solutions were discussed [here](https://github.com/redis/redis/pull/12658#discussion_r1390442896). Eventually, we decided to **create a pipe** at server startup that will remain valid as long as the process is alive. We chose this solution due to its minimal memory usage, and because `write()` and `read()` are atomic operations, ensuring that stack traces from different threads won't mix.

**The stacktrace collection process is now as follows:**
* Clean the pipe to eliminate writes by late threads from previous runs.
* Each thread writes its stacktrace to the pipe.
* Wait for all the threads to mark completion, or until a timeout (2 sec) is reached.
* Read from the pipe to print the stacktraces.

#### **7. Changes that were considered and eventually dropped**
* Replace the watchdog timer with a POSIX timer: according to the [setitimer man page](https://linux.die.net/man/2/setitimer)
> POSIX.1-2008 marks getitimer() and setitimer() obsolete, recommending the use of the POSIX timers API ([timer_gettime](https://linux.die.net/man/2/timer_gettime)(2), [timer_settime](https://linux.die.net/man/2/timer_settime)(2), etc.) instead.

However, although it is supposed to conform to the POSIX standard, the POSIX timers API is not supported on Mac. You can take a look at the Linux implementation [here](https://github.com/redis/redis/commit/c7562ee13546e504977372fdf40d33c3f86775a5). To avoid messing up the code, and given the uncertainty regarding compatibility, it was decided to drop it for now.
* Avoid using sds (which uses malloc) in logConfigDebugInfo. It was considered to print the config info directly instead of using sds; however, `logConfigDebugInfo` apparently does more than just print the sds, so it was decided this fix is out of this issue's scope.

#### **8. Fix signal mask check**
The check `signum & sig_mask`, intended to indicate whether the signal is blocked by the thread, was incorrect: the bit position in the signal mask corresponds to the signal number. We fixed this by changing the condition to `sig_mask & (1L << (sig_num - 1))` (see the sketch below).

#### **9. Unrelated changes**
Both `fork.tcl` and `util.tcl` implemented a function called `count_log_message` expecting different parameters. This caused confusion when trying to run daily tests with additional test parameters to run a specific test. The `count_log_message` in `fork.tcl` was removed and the calls were replaced with calls to the `count_log_message` located in `util.tcl`. --------- Co-authored-by:
Ozan Tezcan <ozantezcan@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
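A sketch of the corrected check from item 8 (assumed shape of the fix):

```c
/* The bit for signal N sits at position N-1 of the mask word, so test
 * that bit rather than AND-ing the signal number with the mask. */
static int signalIsBlocked(unsigned long sig_mask, int sig_num) {
    return (sig_mask & (1UL << (sig_num - 1))) != 0;
}
```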
-
- 19 Nov, 2023 2 commits
-
-
Wen Hui authored
We have some test cases for SWAPDB with watched keys, but we were missing separate basic SWAPDB test cases, the unhappy path, and FLUSHDB after SWAPDB. This adds those test cases to keyspace.tcl.
-
Binbin authored
CI reports that this test failed; the reason is that during the command processing, the node processed PING/PONG, resulting in a ping_sent or pong_received mismatch. Changed to use MULTI to avoid the timing issue. The test was introduced in #12224.
-
- 14 Nov, 2023 1 commit
-
-
Binbin authored
When using the DB iterator, it uses dictInitSafeIterator to init an old safe dict iterator. When dbIteratorNext is used, it jumps to the next slot's dict when we are done with a dict. During this process, we never call dictResumeRehashing, which causes the dict's pauserehash to always stay > 0. In the end, dictRehashMilliseconds returns immediately, leaving slot dicts in a state where rehashing can never complete. In the "expire scan should skip dictionaries with lot's of empty buckets" test, adding a `keys *` can reproduce the problem stably; `keys *` calls dbIteratorNext, triggering a traversal of all slot dicts. Added dbReleaseIterator and dbIteratorInitNextSafeIterator methods that call dictResetIterator (see the sketch below). The issue was introduced in #11695.
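A sketch of the fix (names from the prose above; assumed shape, not the exact patch):

```c
/* Reset the old per-dict iterator before starting on the next slot's
 * dict; dictResetIterator() calls dictResumeRehashing(), bringing the
 * dict's pauserehash counter back to 0. */
static void dbIteratorInitNextSafeIterator(dbIterator *dbit, dict *d) {
    dictResetIterator(&dbit->di);       /* resumes rehashing on the old dict */
    dictInitSafeIterator(&dbit->di, d); /* pauses rehashing on the new one */
}
```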
-
- 11 Nov, 2023 1 commit
-
-
Harkrishn Patro authored
The recently added test fails with a timeout under Valgrind in GitHub Actions. Locally, with Valgrind, the test finishes within 1.5 seconds. Couldn't find any issue due to lack of reproducibility. Increasing the timeout and adding an additional log to the test to understand how many keys were left at the end.
-
- 08 Nov, 2023 1 commit
-
-
Meir Shpilraien (Spielrein) authored
Redis 7.2 (#9406) introduced a new module event, `RedisModuleEvent_Key`. This new event allows the module to read the key data just before it is removed from the database (whether deleted, expired, evicted, or overwritten). However, when the key was removed from the database by active expire or eviction, the new event was not called as part of an execution unit. This can cause an issue if the module registers a post-notification job inside the event: the job will not be executed atomically with the expiration/eviction operation and will not be replicated inside a MULTI/EXEC. Moreover, the post-notification job will be executed right after the event, where it is still not safe to perform any write operation; this violates the promise that a post-notification job will be called atomically with the operation that triggered it, and **only when it is safe to write**. This PR fixes the issue by wrapping each expiration/eviction of a key in an execution unit (see the sketch below). This makes sure the entire operation runs atomically, and all the post-notification jobs are executed at the end, where it is safe to write. Tests were modified to verify the fix.
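A sketch of the wrapping described above (assumed shape, not the exact patch; deleteKeyAndFireKeyEvent() is a hypothetical helper):

```c
/* Wrap an active-expire/eviction deletion in an execution unit so that
 * module post-notification jobs run atomically at its end, when it is
 * safe to write again. */
static void removeKeyAtomically(redisDb *db, robj *key) {
    enterExecutionUnit(1, 0);           /* open the execution unit */
    deleteKeyAndFireKeyEvent(db, key);  /* fires RedisModuleEvent_Key and may
                                           queue post-notification jobs */
    exitExecutionUnit();                /* queued jobs execute here */
}
```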
-
- 06 Nov, 2023 1 commit
-
-
Yossi Gottlieb authored
This change overcomes many stability issues experienced with the vmactions action. We need to limit VMs to 8GB for better stability, as the 13GB default seems to hang them occasionally. The shell code has been simplified, since this action seems to use `bash -e`, which will abort on non-zero exit codes anyway.
-
- 02 Nov, 2023 1 commit
-
-
Roshan Khatri authored
Reverts skipping the defrag tests in cluster mode (done in #12672); instead, it skips only some defrag tests that are not relevant for cluster mode. The tests now run well after investigating and making the changes in #12674 and #12694. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 01 Nov, 2023 1 commit
-
-
Viktor Söderqvist authored
Optimize the performance of SCAN commands when a match pattern can only contain keys from a single slot in cluster mode. This can happen when the pattern contains a hash tag before any wildcard matchers, or when the pattern contains no matchers at all.
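A simplified sketch of mapping a pattern to a slot (illustrative, not the exact #12536 code): a complete {tag} before any wildcard pins every matching key to one slot.

```c
/* Returns the slot the pattern is confined to, or -1 if matching keys
 * may live in multiple slots. keyHashSlot() already honors the {tag}
 * hashing rule, so passing the pattern prefix through it works. */
static int patternSlot(const char *pattern, int length) {
    for (int i = 0; i < length; i++) {
        switch (pattern[i]) {
        case '*': case '?': case '[': case '\\':
            return -1;                          /* wildcard before any tag */
        case '{':
            for (int j = i + 1; j < length; j++) {
                if (pattern[j] == '}') {
                    if (j == i + 1) return -1;  /* "{}" hashes the whole key */
                    return keyHashSlot((char *)pattern, j + 1);
                }
            }
            return -1;                          /* unterminated '{' */
        }
    }
    return keyHashSlot((char *)pattern, length); /* no wildcards: exact key */
}
```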
-
- 24 Oct, 2023 1 commit
-
-
Harkrishn Patro authored
Test failure on the FreeBSD CI:
```
*** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
scan didn't handle slot skipping logic.
```
Observation: expiry of the keys might happen before the empty buckets are formed, which won't help with validating the expiry-skip logic. Solution: disable expiration until the empty buckets are formed.
-
- 22 Oct, 2023 1 commit
-
-
Harkrishn Patro authored
Fixes issues that started after #11695, when the defrag tests began being executed in cluster mode too. For some reason, it looks like the defragmentation finishes too quickly, before the test is able to detect that it's running. So now, instead of waiting to see that it's active, we wait to see that it did some work:
```
[err]: Active defrag big list: cluster in tests/unit/memefficiency.tcl
defrag not started.
[err]: Active defrag big keys: cluster in tests/unit/memefficiency.tcl
defrag didn't stop.
```
-
- 19 Oct, 2023 2 commits
-
-
Harkrishn Patro authored
Temporarily disabling a few of the defrag tests in cluster mode to make the daily run stable:
- Active defrag eval scripts
- Active defrag big keys
- Active defrag big list
- Active defrag edge case
-
Harkrishn Patro authored
The dictionary iterator logic in the `tryResizeHashTables` method was picking the next (incorrect) dictionary while the cursor was at a given slot. This could lead to some dictionary/slot getting skipped from resizing. Also stabilizes the test. The problem was introduced recently in #11695.
-
- 15 Oct, 2023 1 commit
-
-
Vitaly authored
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be marginal. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.

## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree (see the sketch below). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSBs of the cursor so it can be passed around between client and server. This has an interesting side effect: you'll now be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the slot id cached in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places, and in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of the `key_count`. This way we keep the DBSIZE operation O(1). The same is kept for O(1) expires computation as well.

## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.

## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* The SCAN API will now require 64 bits to store the cursor, even on 32-bit systems, as the slot information will be stored.
* New RDB version to support the new opcode for SLOT information.
--------- Co-authored-by:
Vitaly Arbuzov <arvit@amazon.com> Co-authored-by:
Harkrishn Patro <harkrisp@amazon.com> Co-authored-by:
Roshan Khatri <rvkhatri@amazon.com> Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
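A sketch of the binary index tree (Fenwick tree) idea used for fair random-slot selection (illustrative, not the actual code; this variant finds the k-th key with a single descending-bit walk rather than the O(log^2 n) binary search over prefix sums described above):

```c
#define NUM_SLOTS 16384

static long long bit[NUM_SLOTS + 1]; /* 1-based Fenwick tree over per-slot key counts */

/* Add delta to the key count of a 0-based slot. */
static void bitAdd(int slot, long long delta) {
    for (int i = slot + 1; i <= NUM_SLOTS; i += i & -i) bit[i] += delta;
}

/* Return the 0-based slot holding the k-th key (k is 0-based): walk
 * down the powers of two, keeping pos at the largest index whose
 * prefix sum is still <= k. */
static int bitFindKthKey(long long k) {
    int pos = 0;
    for (int step = NUM_SLOTS; step > 0; step >>= 1) { /* NUM_SLOTS is 2^14 */
        if (pos + step <= NUM_SLOTS && bit[pos + step] <= k) {
            pos += step;
            k -= bit[pos];
        }
    }
    return pos;
}
```

A random key is then picked by drawing k uniformly in [0, total_keys) and descending the tree, rather than picking a slot uniformly, which would be biased toward sparsely populated slots.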
-
- 13 Oct, 2023 2 commits
-
-
Oran Agra authored
When a server in the test suite crashes and is restarted by restart_server, we didn't clean its pid from the list. We can see that when the corrupt-dump-fuzzer hangs, it has a long list of servers to clean, but in fact they're all already dead.
-
Harkrishn Patro authored
Unsubscribe all clients from the replica's shard channels if the master ownership changes.
-