- 28 Nov, 2023 2 commits
-
-
zhaozhao.zz authored
Introduced in #11695. The tryResizeHashTables function gets stuck on the last non-empty slot while iterating through dictionaries; it does not restart from the beginning. The cause is a problem with the usage of dbIteratorNextDict:

```
/* Returns next dictionary from the iterator, or NULL if iteration is complete. */
dict *dbIteratorNextDict(dbIterator *dbit) {
    if (dbit->next_slot == -1) return NULL;
    dbit->slot = dbit->next_slot;
    dbit->next_slot = dbGetNextNonEmptySlot(dbit->db, dbit->slot, dbit->keyType);
    return dbGetDictFromIterator(dbit);
}
```

When iterating to the last non-empty slot, next_slot is set to -1, causing it to loop indefinitely on that slot. We need to modify the code to ensure that after iterating to the last non-empty slot, it returns to the first non-empty slot.

BTW, the function tryResizeHashTables is actually iterating over slots that have keys. However, in its implementation, it leverages the dbIterator (which is a key iterator) to obtain slot and dictionary information. While this approach works, it is not very intuitive. This PR also improves readability by changing the iteration to directly iterate over slots, thereby enhancing clarity.
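A rough illustration of the intended behavior (a minimal sketch with hypothetical types and helper names, not the actual Redis code): the iteration wraps back to the first non-empty slot instead of sticking at the last one.

```
#define NUM_SLOTS 16

typedef struct exampleDb {
    int non_empty[NUM_SLOTS];   /* 1 if the slot's dict has keys */
    int resize_cursor;          /* where the next cron call resumes */
} exampleDb;

/* Return the next non-empty slot at or after 'start', wrapping to the
 * beginning once the end is reached, or -1 if every slot is empty. */
int getNextNonEmptySlot(exampleDb *db, int start) {
    for (int i = 0; i < NUM_SLOTS; i++) {
        int slot = (start + i) % NUM_SLOTS;   /* wrap instead of sticking at the end */
        if (db->non_empty[slot]) return slot;
    }
    return -1;
}

/* Resize a handful of slot dicts per call, resuming from the cursor so the
 * scan does not get stuck on the last non-empty slot forever. */
void tryResizeSketch(exampleDb *db, int slots_per_call) {
    for (int i = 0; i < slots_per_call; i++) {
        int slot = getNextNonEmptySlot(db, db->resize_cursor);
        if (slot == -1) return;               /* nothing to resize */
        /* ... try to resize the dict stored in 'slot' here ... */
        db->resize_cursor = (slot + 1) % NUM_SLOTS;
    }
}
```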
-
zhaozhao.zz authored
The current comment for `findSlotByKeyIndex` is a bit ambiguous and can be misleading, as it may be misunderstood as getting the next slot corresponding to the target.
-
- 27 Nov, 2023 3 commits
-
-
Binbin authored
When we register a notification or server event in RedisModule_OnLoad, but RedisModule_OnLoad eventually fails, triggering the notification or server event will cause the server to crash. If the loading fails at a later stage of moduleLoad, we do call moduleUnload, which handles all un-registration, but when it fails on the RedisModule_OnLoad call, we only un-register several specific things, and these were missing:

- moduleUnsubscribeNotifications
- moduleUnregisterFilters
- moduleUnsubscribeAllServerEvents

Refactored the code to reuse the code from moduleUnload. Fixes #12808.
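A minimal sketch of the error path described above, with stand-in stubs for the internal functions (only the three function names come from the commit; everything else is hypothetical):

```
#include <stdio.h>

/* Stand-in stubs; in Redis these are internal functions that undo what
 * RedisModule_OnLoad registered. */
typedef struct RedisModule RedisModule;

static void moduleUnsubscribeNotifications(RedisModule *m)   { (void)m; puts("drop keyspace notification subscriptions"); }
static void moduleUnregisterFilters(RedisModule *m)          { (void)m; puts("drop command filters"); }
static void moduleUnsubscribeAllServerEvents(RedisModule *m) { (void)m; puts("drop server-event callbacks"); }

/* Error path: if RedisModule_OnLoad fails, everything it registered must be
 * torn down so later notifications/events can't reach a half-loaded module. */
void onLoadFailedCleanupSketch(RedisModule *m) {
    moduleUnsubscribeNotifications(m);
    moduleUnregisterFilters(m);
    moduleUnsubscribeAllServerEvents(m);
}
```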
-
zhaozhao.zz authored
Currently, when the number of databases exceeds 16, the efficiency of cleaning expired keys is relatively low. The reason is that by default only 16 databases are scanned when attempting to clean expired keys (CRON_DBS_PER_CALL is 16). But users may configure more than 16 databases, such as 256, and that does not necessarily mean that all 256 databases have expirations set. If only one database has expirations set, that database needs 16 activeExpireCycle rounds in order to be scanned once, and 15 of those rounds are meaningless. To optimize the efficiency of expiration in such scenarios, we use dbs_per_call to control the number of databases with expired keys that get scanned. Additionally, add a condition to limit the maximum number of rounds to server.dbnum to prevent excessive spinning. This ensures that even if only one database has expired keys, it can be reached within a single cron call.
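A minimal sketch of the scanning loop described above, with hypothetical field and helper names: it counts only databases that actually hold expires against dbs_per_call and caps the total iterations at dbnum.

```
#define CRON_DBS_PER_CALL 16

typedef struct sketchDb { int has_expires; } sketchDb;

/* 'cursor' persists across cron calls so scanning resumes where it left off. */
void activeExpireSketch(sketchDb *dbs, int dbnum, int *cursor) {
    int dbs_per_call = CRON_DBS_PER_CALL;
    if (dbs_per_call > dbnum) dbs_per_call = dbnum;

    int scanned = 0;
    /* Visit at most 'dbnum' slots in total, but only databases that really
     * hold expires count toward dbs_per_call, so a lone db with expires is
     * always reached within a single cron call. */
    for (int i = 0; i < dbnum && scanned < dbs_per_call; i++) {
        sketchDb *db = &dbs[*cursor % dbnum];
        (*cursor)++;
        if (!db->has_expires) continue;
        scanned++;
        /* ... sample and delete expired keys in this db ... */
    }
}
```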
-
binfeng-xin authored
Fix the `signalModifiedKey()` order: call it after the key modification is completed, to ensure other systems observe the key's final state. When a key is modified, Redis calls `signalModifiedKey` to notify other systems, such as the watch system of transactions and the tracking system of client-side caching. However, in some commands, the `signalModifiedKey` call happens during the key modification process instead of after the key modification is completed. This can potentially cause issues, as systems relying on `signalModifiedKey` may observe the "write in flight" state of the key rather than its final state. These commands include:

1. PFADD
2. LSET, LMOVE, LREM
3. ZPOPMIN, ZPOPMAX, BZPOPMIN, BZPOPMAX, ZMPOP, BZMPOP

Currently there is no problem with Redis, but it is better to adjust the order of `signalModifiedKey()` to avoid issues in future development of Redis (a small ordering sketch follows this entry).

---------

Co-authored-by:
zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
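A minimal sketch of the intended ordering, with hypothetical stand-in types (not the actual Redis code): the notification fires only once the value is in its final state.

```
/* Hypothetical stand-in types; the real code passes a db and a key object
 * to signalModifiedKey(). */
typedef struct kvEntry { const char *key; long value; } kvEntry;

/* Stands in for signalModifiedKey(): wake WATCHers, invalidate client-side
 * caches, etc. The entry must already be in its final state when called. */
void notifyKeyModified(kvEntry *entry) { (void)entry; }

void setValueSketch(kvEntry *entry, long newval) {
    entry->value = newval;       /* 1. finish modifying the key... */
    notifyKeyModified(entry);    /* 2. ...then signal observers */
}
```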
-
- 23 Nov, 2023 6 commits
-
-
meiravgri authored
see discussion from after https://github.com/redis/redis/pull/12453 was merged

----

This PR replaces calls in the signal handlers that are not considered async-signal-safe (AS-safe) with safe calls.

#### **1. serverLog() and serverLogFromHandler()**

`serverLog` uses unsafe calls. It was decided that we will **avoid** `serverLog` calls by the signal handlers when:

* The signal is not fatal, such as SIGALRM. In these cases, we prefer using `serverLogFromHandler`, which is the safe version of `serverLog`. Note they have different prompts:
  `serverLog`: `62220:M 26 Oct 2023 14:39:04.526 # <msg>`
  `serverLogFromHandler`: `62220:signal-handler (1698331136) <msg>`
* The code was added recently. Calls to `serverLog` by the signal handler have been there ever since Redis has existed and haven't caused problems so far. To avoid regression, from now on we should use `serverLogFromHandler`.

#### **2. `snprintf`, `fgets` and `strtoul` (base = 16) --------> `_safe_snprintf`, `fgets_async_signal_safe`, `string_to_hex`**

The safe version of `snprintf` was taken from [here](https://github.com/twitter/twemcache/blob/8cfc4ca5e76ed936bd3786c8cc43ed47e7778c08/src/mc_util.c#L754)

#### **3. fopen(), fgets(), fclose() --------> open(), read(), close()**

#### **4. opendir(), readdir(), closedir() --------> open(), syscall(SYS_getdents64), close()**

#### **5. Threads_mngr sync mechanisms**

* Waiting for the thread to generate a stack trace: semaphore --------> busy-wait
* `globals_rw_lock` was removed: as we are not using malloc and the semaphore anymore, we don't need to protect `ThreadsManager_cleanups`.

#### **6. Stacktraces buffer**

The initial problem was that we were not able to safely call malloc within the signal handler. To solve that we created a buffer on the stack of `writeStacktraces` and saved it in a global pointer, assuming that under normal circumstances, the function `writeStacktraces` would complete before any thread attempted to write to it. However, **if threads lag behind, they might access this global pointer after it no longer belongs to the `writeStacktraces` stack, potentially corrupting memory.**

To address this, various solutions were discussed [here](https://github.com/redis/redis/pull/12658#discussion_r1390442896). Eventually, we decided to **create a pipe** at server startup that will remain valid as long as the process is alive. We chose this solution due to its minimal memory usage, and since `write()` and `read()` are atomic operations, it ensures that stack traces from different threads won't mix.

**The stacktraces collection process is now as follows:**

* Clean the pipe to eliminate writes of late threads from previous runs.
* Each thread writes its stacktrace to the pipe.
* Wait for all the threads to mark completion or until a timeout (2 sec) is reached.
* Read from the pipe to print the stacktraces.

#### **7. Changes that were considered and eventually were dropped**

* Replace the watchdog timer with a POSIX timer: according to the [setitimer man page](https://linux.die.net/man/2/setitimer)

> POSIX.1-2008 marks getitimer() and setitimer() obsolete, recommending the use of the POSIX timers API ([timer_gettime](https://linux.die.net/man/2/timer_gettime)(2), [timer_settime](https://linux.die.net/man/2/timer_settime)(2), etc.) instead.

However, although it is supposed to conform to the POSIX standard, the POSIX timers API is not supported on Mac. You can take a look at the Linux implementation [here](https://github.com/redis/redis/commit/c7562ee13546e504977372fdf40d33c3f86775a5). To avoid messing up the code, and uncertainty regarding compatibility, it was decided to drop it for now.

* Avoid using sds (uses malloc) in logConfigDebugInfo. It was considered to print the config info instead of using sds; however, apparently `logConfigDebugInfo` does more than just print the sds, so it was decided this fix is out of this issue's scope.

#### **8. Fix signal mask check**

The check `signum & sig_mask`, intended to indicate whether the signal is blocked by the thread, was incorrect. Actually, the bit position in the signal mask corresponds to the signal number. We fixed this by changing the condition to: `sig_mask & (1L << (sig_num - 1))` (a small standalone sketch follows this entry).

#### **9. Unrelated changes**

Both `fork.tcl` and `util.tcl` implemented a function called `count_log_message` expecting different parameters. This caused confusion when trying to run daily tests with additional test parameters to run a specific test. The `count_log_message` in `fork.tcl` was removed and the calls were replaced with calls to `count_log_message` located in `util.tcl`.

---------

Co-authored-by:
Ozan Tezcan <ozantezcan@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
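A small standalone sketch of the corrected mask test from point 8 above, assuming the mask layout described there (bit `sig_num - 1` is set when signal `sig_num` is blocked):

```
#include <signal.h>
#include <stdio.h>

/* 'sig_mask' is a word in which bit (sig_num - 1) is set when that signal is
 * blocked; the old test 'signum & sig_mask' compared the wrong things. */
int sigIsBlocked(unsigned long sig_mask, int sig_num) {
    return (sig_mask & (1UL << (sig_num - 1))) != 0;
}

int main(void) {
    unsigned long mask = 1UL << (SIGALRM - 1);  /* pretend only SIGALRM is blocked */
    printf("SIGALRM blocked: %d\n", sigIsBlocked(mask, SIGALRM)); /* prints 1 */
    printf("SIGSEGV blocked: %d\n", sigIsBlocked(mask, SIGSEGV)); /* prints 0 */
    return 0;
}
```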
-
Binbin authored
We don't have any debug level; change it to log level.
-
Moshe Kaplan authored
`open()` can return any non-negative integer on success, including zero. This change modifies the check on `open()`'s return value to also account for a return value of zero (e.g., if stdin were closed and then `open()` was called). Fixes Coverity 404720. This can't happen in Redis; it's just a cleanup.
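A generic sketch of the point (not the Redis code): `open()` reports failure only with -1, so 0 is a valid descriptor, e.g. when stdin was closed before the call.

```
#include <fcntl.h>
#include <unistd.h>

/* open() reports failure only with -1, so 0 is a perfectly valid descriptor,
 * e.g. when stdin was closed before the call. A check such as 'fd <= 0'
 * would mishandle that case. */
int readFirstByteSketch(const char *path, char *out) {
    int fd = open(path, O_RDONLY);
    if (fd == -1) return -1;      /* only -1 means failure */
    ssize_t n = read(fd, out, 1);
    close(fd);                    /* fd 0 is closed like any other descriptor */
    return n == 1 ? 0 : -1;
}
```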
-
Moshe Kaplan authored
If fopen() is successful, but redis_fstat() determines the file is zero bytes, the file handle stored in fp will leak. This change closes the file handle stored in fp if the file is zero bytes. An FD leak in a tool like redis-check-aof isn't an issue (it'll exit soon anyway); this is just a cleanup.
-
Moshe Kaplan authored
If fopen() is successful, but redis_fstat() fails, the file handle stored in fp will leak. This change closes the file handle stored in fp if redis_fstat() fails. Fixes Coverity 390029.
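A generic sketch of the leak-fix pattern in this and the previous entry (not the Redis code, which uses redis_fstat inside the AOF checker): close the handle on every early-exit path after a successful fopen().

```
#include <stdio.h>
#include <sys/stat.h>

/* After a successful fopen(), every early-exit path (stat failure, zero-byte
 * file) must fclose() the handle, otherwise it leaks. */
int openNonEmptyFileSketch(const char *path, FILE **out) {
    FILE *fp = fopen(path, "r");
    if (fp == NULL) return -1;

    struct stat sb;
    if (fstat(fileno(fp), &sb) == -1 || sb.st_size == 0) {
        fclose(fp);               /* the close that would otherwise be missed */
        return -1;
    }
    *out = fp;                    /* caller now owns the handle */
    return 0;
}
```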
-
Moshe Kaplan authored
If a recursive call to dirRemove() returns -1, the directory handle stored in dir will leak. This change closes the directory handle stored in dir even if the recursive call to dirRemove() returns -1. Fixes Coverity 371073.
-
- 22 Nov, 2023 16 commits
-
-
Binbin authored
dictExpandAllowed (for the main db dict and the expire dict) seems to involve a few function calls and memory accesses, and we can do it only after the threshold checks, gaining some performance improvement. A simple benchmark test: there are 11032768 fixed keys in the database; start a redis-server with `--maxmemory big_number --save ""`, start a redis-benchmark with `-P 100 -r 1000000000 -t set -n 5000000`, and collect the `throughput summary: n requests per second` result. After five rounds of testing:

```
unstable   this PR
848032.56  897988.56
854408.69  913408.88
858663.94  914076.81
871839.56  916758.31
882612.56  920640.75
```

We can see roughly a 5% performance improvement in the general case. But note we'll still hit the degradation once we go over the thresholds.
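A minimal sketch of the ordering change with hypothetical names: the cheap load-factor check runs first, and the comparatively expensive expand-allowed callback is consulted only when an expand is actually warranted.

```
/* Hypothetical, simplified dict; the real checks involve used/size ratios,
 * dict_can_resize, and a per-dict expandAllowed callback. */
typedef struct sketchDict {
    unsigned long used;
    unsigned long size;
    int (*expandAllowed)(unsigned long newsize);
} sketchDict;

int expandIfNeededSketch(sketchDict *d) {
    /* Cheap threshold check first: most calls return here. */
    if (d->used < d->size) return 0;

    unsigned long newsize = d->size ? d->size * 2 : 4;

    /* Only now pay for the comparatively expensive callback. */
    if (d->expandAllowed && !d->expandAllowed(newsize)) return 0;

    /* ... perform the actual expand to 'newsize' ... */
    d->size = newsize;
    return 1;
}
```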
-
Yehoshua (Josh) Hershberg authored
A follow-up PR for #12742. Add some brief comments explaining the purpose of the file to the head of cluster_legacy.c and cluster.c. Add a copyright notice to cluster.c. Signed-off-by:
Josh Hershberg <yehoshua@redis.com> Co-authored-by:
Josh Hershberg <yehoshua@redis.com>
-
Binbin authored
We meant to divide it by the number of slots; otherwise it will do dictExpand slots times. The bug was introduced in #11695.
-
Oran Agra authored
This PR reworks the clustering code to support multiple clustering implementations, specifically, the current "legacy" clustering implementation or, although not part of this PR, flotilla (see #10875). Which implementation is used could be a compile-time flag (will be added later). Legacy clustering functionality remains unchanged.

The basic idea is as follows. The header cluster.h now contains function declarations that define the "Cluster API." These are the contract and interface between any clustering implementation and the rest of the Redis source code. Some of the function definitions are shared between all clustering implementations; these functions are in cluster.c. The functions and data structures specific to legacy clustering are in cluster_legacy.c/h. One consequence of this is that the structs clusterNode and clusterState, which were previously "public" to the rest of Redis, are now hidden behind the Cluster API.

The PR is divided up into commits, each with a commit message explaining the changes. Some are just mass renames or moving code between files (may not require close inspection / review), others are manual changes.

One other, related change is:

- The "failover" command is now plugged into the Cluster API so that the clustering implementation can (a) enable/disable the command to begin with and, if enabled, (b) perform the actual failover. The "failover" command remains disabled for legacy clustering.
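A rough sketch of the idea with hypothetical names (not the exact Cluster API): the rest of the server sees only an opaque clusterNode plus accessor functions, while the concrete struct stays with the legacy implementation.

```
/* --- what cluster.h would expose (the shared contract) ----------------- */
typedef struct clusterNode clusterNode;          /* opaque to the rest of Redis */
const char *clusterNodeGetName(clusterNode *n);  /* accessor, no direct field access */
int clusterNodeIsMyself(clusterNode *n);

/* --- what cluster_legacy.c/h would keep to itself ---------------------- */
struct clusterNode {
    char name[41];
    int is_myself;
    /* ... legacy-specific fields ... */
};

const char *clusterNodeGetName(clusterNode *n) { return n->name; }
int clusterNodeIsMyself(clusterNode *n) { return n->is_myself; }
```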
-
Josh Hershberg authored
Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Up until now, the failover command has not been supported in cluster mode. This commit allows a cluster implementation to support the command. The legacy clustering implementation still does not support this command. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move primary functions used to implement datapath clustering into cluster.c, making them shared. This required adding "accessor" and other functions to abstract access to node details and cluster state. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Divide up clusterCommand into clusterCommand for shared sub-commands and clusterCommandSpecial for implementation-specific sub-commands. So too, the cluster command help sub-command has been divided into two implementations, clusterCommandHelp and clusterCommandHelpSpecial. Some common sub-command implementations have been extracted and their implementations either made shared or else implementation specific. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Simple rename: "GetSlotBit" is implementation specific. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move (but do not change) some items from cluster_legacy.c back into cluster.c. These items are shared code that all clustering implementations will use. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
More declarations can be moved into cluster_legacy.h as they are not required for the cluster API. The code was simply moved, not changed in any way. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move clusterNode into cluster_legacy.h. In order to achieve this, some accessor methods were added, along with a refactor of how debugCommand handles cluster-related subcommands. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move clusterState into cluster_legacy.h. In order to achieve this, some "accessor" methods needed to be added to the cluster API, along with some other minor refactors. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
- 21 Nov, 2023 4 commits
-
-
Binbin authored
When we are loading data, it is not safe to generate data through DEBUG POPULATE: POPULATE may generate keys that duplicate keys being loaded, causing a panic.
-
Josh Hershberg authored
Move some declarations from cluster.h to cluster_legacy.h. The items moved are specific to the legacy clustering implementation and DO NOT require any refactoring other than moving them from one file to another. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Create new cluster.c. Signed-off-by:
Josh Hershberg <yehoshua@redis.com> Forgot to #include cluster_legacy.h. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
- 20 Nov, 2023 1 commit
-
-
iKun authored
Make redis-cli --bigkeys and --memkeys usable on replicas in cluster mode, by sending the READONLY command. This is done only if -c is also given. We don't detect whether a node is a master or a replica, so we send READONLY in both cases; READONLY has no effect on masters. Release notes: Make redis-cli --bigkeys and --memkeys usable on cluster replicas --------- Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 19 Nov, 2023 3 commits
-
-
Hwang Si Yeon authored
Add documentation of the URI format to the `--help` output of `redis-cli` and `redis-benchmark`. In particular, it's good for users to know that they need to specify "default" as the username when authenticating without a username. Other details of the URI format are described too, like scheme and dbnum. It used to be possible to connect to Redis using a URL with an empty username, like `redis-cli -u redis://:PASSWORD@localhost:6379/0`. This was broken in 6.2 (#8048), and there was a discussion about it in #9186. Now, users need to specify "default" as the username, and it's better to document it. Refer to #12746 for more details.
-
Wen Hui authored
We have some test cases of swapdb with watched keys, but were missing separate basic swapdb test cases, the unhappy path, and flushdb after swapdb. So the test cases were added in keyspace.tcl.
-
Binbin authored
CI reports that this test failed; the reason is that during command processing, the node processed PING/PONG, resulting in a ping_sent or pong_received mismatch. Change to use MULTI to avoid the timing issue. The test was introduced in #12224.
-
- 17 Nov, 2023 2 commits
-
-
zhaozhao.zz authored
-
zhaozhao.zz authored
Introduced by #11695: we should skip an empty db to avoid an unnecessary SELECT.
-
- 16 Nov, 2023 1 commit
-
-
Binbin authored
In swapdb mode, the temp db does not init the rehashing list; the change added in #12764 caused cluster CI to fail.
-
- 15 Nov, 2023 1 commit
-
-
Binbin authored
This is currently harmless: since we have already cleared the dict, the rehashidx is reset to -1, and in incrementallyRehash we will call dictIsRehashing to check. It would still be nice to empty the list to avoid meaningless attempts, and the code is also unified to reduce misunderstandings.
-
- 14 Nov, 2023 1 commit
-
-
Binbin authored
When using the DB iterator, it will use dictInitSafeIterator to init an old safe dict iterator. When dbIteratorNext is used, it will jump to the next slot's dict when we are done with a dict. During this process, we do not have any calls to dictResumeRehashing, which causes the dict's pauserehash to always be > 0. In the end, dictRehashMilliseconds returns directly, leaving the slot dict in a state where rehash can never be completed.

In the "expire scan should skip dictionaries with lot's of empty buckets" test, adding a `keys *` can reproduce the problem stably. `keys *` will call dbIteratorNext to trigger a traversal of all slot dicts.

Added dbReleaseIterator and dbIteratorInitNextSafeIterator methods to call dictResetIterator. Issue was introduced in #11695.
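A minimal sketch of the pairing that was missing, using simplified stand-in types: a safe iterator pauses rehashing on its dict, so whoever jumps across dicts must reset the old iterator to drop pauserehash back to zero.

```
/* Simplified stand-ins for dict/dictIterator. */
typedef struct sketchDict { int pauserehash; } sketchDict;
typedef struct sketchIter { sketchDict *d; int started; } sketchIter;

void iterInitSafe(sketchIter *it, sketchDict *d) { it->d = d; it->started = 0; }

void iterNext(sketchIter *it) {
    if (!it->started) { it->d->pauserehash++; it->started = 1; } /* pause on first use */
    /* ... step to the next entry ... */
}

/* The counterpart that was effectively missing when jumping between slot
 * dicts: without it, pauserehash stays > 0 and rehashing never resumes. */
void iterReset(sketchIter *it) {
    if (it->started) it->d->pauserehash--;
}

/* When the db iterator is done with one slot's dict and moves to the next,
 * it must reset the old iterator before re-initializing it: */
void dbIterJumpSketch(sketchIter *it, sketchDict *next_dict) {
    iterReset(it);                /* resume rehashing on the finished dict */
    iterInitSafe(it, next_dict);
}
```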
-