- 14 May, 2024 1 commit
-
-
debing.sun authored
## Background 1. All hash objects that contain HFE are referenced by db->hexpires. 2. All fields in a dict hash object with HFE are referenced by an ebucket. So when we defrag the hash object or a field in a dict with HFE, we also need to update the references to them. ## Interface 1. Add a new interface `ebDefragItem`, which accepts a defrag callback to defrag items in ebuckets and simultaneously updates their references in the ebucket (a hedged sketch of this callback pattern follows below). ## Main changes 1. The key type of the dict of a hash object is no longer sds, so add a new `activeDefragHfieldDict()` to defrag the dict instead of `activeDefragSdsDict()`. 2. When we defrag the dict of a hash object using `dictScanDefrag()`, we always set the defrag callback `defragKey` of `dictDefragFunctions` to NULL, because we can't reallocate a field without updating its reference in ebuckets. Instead, we defrag the field of the dict and update its reference in the `dictScanDefrag` callback of dictScanFunction(). 3. When we defrag a hash robj with HFE, we use `ebDefragItem` to defrag the robj and update the reference in db->hexpires. ## TODO: Defrag the ebuckets structure incrementally, which will be handled in a future PR. --------- Co-authored-by:
Ozan Tezcan <ozantezcan@gmail.com> Co-authored-by:
Moti Cohen <moti.cohen@redis.com>
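The `ebDefragItem` interface described above boils down to a "defrag callback" pattern: the container hands each stored item to a callback that may reallocate it, and if the item moved, the container rewrites its own reference so owner and index stay consistent. The following is a minimal, self-contained C sketch of that pattern; the names (`bucket`, `defragItemCb`, `bucketsDefrag`) are illustrative and are not the actual ebuckets API.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical defrag callback: returns a new allocation for `item`
 * (the container must switch to it), or NULL if the item was not moved. */
typedef void *(*defragItemCb)(void *item);

typedef struct bucket {
    void **items;          /* references owned by the container */
    size_t count;
    struct bucket *next;
} bucket;

/* Walk every bucket and let the callback reallocate items; whenever an
 * item moves, update the reference stored in the bucket so the container
 * keeps pointing at the same (relocated) memory. */
void bucketsDefrag(bucket *head, defragItemCb cb) {
    for (bucket *b = head; b != NULL; b = b->next) {
        for (size_t i = 0; i < b->count; i++) {
            void *moved = cb(b->items[i]);
            if (moved != NULL) b->items[i] = moved; /* keep reference in sync */
        }
    }
}

/* Example callback: copy into a fresh block (a stand-in for the allocator's
 * "move to a less fragmented page" decision), assuming a NUL-terminated payload. */
static void *reallocIfWorthIt(void *item) {
    size_t sz = strlen((char *)item) + 1;
    char *copy = malloc(sz);
    if (!copy) return NULL;
    memcpy(copy, item, sz);
    free(item);
    return copy;
}
```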
-
- 08 May, 2024 1 commit
-
-
Ozan Tezcan authored
**Changes:** - Adds listpack support to hash field expiration - Implements hgetf/hsetf commands **Listpack support for hash field expiration** We keep field name and value pairs in a listpack for the hash type. With this PR, if one of the hash field expiration commands is called on a key for the first time, the listpack layout is converted to triplets holding field name, value and ttl per field. If a field does not have a TTL, we store zero as the ttl value. Zero is encoded as two bytes in the listpack, so once we convert the listpack to hold triplets, each field without a TTL consumes those extra 2 bytes. Fields are ordered by ttl in the listpack to find the field with the minimum expiry time efficiently. **New command implementations as part of this PR:** - HGETF command For each specified field get its value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec: ``` HGETF key [NX | XX | GT | LT] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST] <FIELDS count field [field ...]> ``` - HSETF command For each specified field value pair: set field to value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec: ``` HSETF key [DC] [DCF | DOF] [NX | XX | GT | LT] [GETNEW | GETOLD] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] <FVS count field value [field value …]> ``` Todo: - Performance improvement. - rdb load/save - aof - defrag
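The triplet layout described above (field, value, ttl, with ttl = 0 meaning "no expiration", and entries ordered by ttl so the minimum expiry is cheap to find) can be illustrated with a small stand-alone sketch. This is not the real listpack encoding; `hfield`, `tinyhash` and the fixed capacity are assumptions made only to show the ordering invariant.

```c
#include <stdint.h>

#define TINYHASH_CAP 128

/* Flattened stand-in for the listpack triplets: every field carries
 * (name, value, ttl), with ttl == 0 meaning "no TTL". */
typedef struct hfield {
    const char *name;
    const char *value;
    uint64_t ttl;              /* absolute expiry in ms, 0 = no expiry */
} hfield;

typedef struct tinyhash {
    hfield entries[TINYHASH_CAP];
    int len;
} tinyhash;

/* Insert keeping TTL-carrying entries first, ordered by ascending TTL, and
 * all no-TTL entries at the tail.  With this ordering the field with the
 * minimum expiry time is always entries[0]. */
int tinyhashInsert(tinyhash *h, hfield f) {
    if (h->len == TINYHASH_CAP) return -1;
    int pos = h->len;
    while (pos > 0) {
        uint64_t prev = h->entries[pos - 1].ttl;
        int prev_has_ttl = prev != 0, new_has_ttl = f.ttl != 0;
        /* keep shifting right while the previous entry should sort after us */
        if ((prev_has_ttl && new_has_ttl && prev > f.ttl) ||
            (!prev_has_ttl && new_has_ttl)) {
            h->entries[pos] = h->entries[pos - 1];
            pos--;
        } else break;
    }
    h->entries[pos] = f;
    h->len++;
    return 0;
}

/* Minimum-expiry lookup is O(1): it is the head entry, if it has a TTL. */
const hfield *tinyhashMinExpiry(const tinyhash *h) {
    return (h->len > 0 && h->entries[0].ttl != 0) ? &h->entries[0] : NULL;
}
```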
-
- 18 Apr, 2024 1 commit
-
-
Moti Cohen authored
- Add ebuckets & mstr data structures - Integrate active & lazy expiration - Add most of the commands - Add support for dict (listpack is missing) TODOs: RDB, notification, listpack, HSET, HGETF, defrag, aof
-
- 04 Apr, 2024 1 commit
-
-
debing.sun authored
Fix some issues found in thread sanitizer reports. 1. When the main thread is updating daylight_active, other threads (bio, module thread) may be writing logs at the same time. ``` WARNING: ThreadSanitizer: data race (pid=661064) Read of size 4 at 0x55c9a4d11c70 by thread T2: #0 serverLogRaw /home/sundb/data/redis_fork/src/server.c:116 (redis-server+0x8d797) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) #1 _serverLog.constprop.2 /home/sundb/data/redis_fork/src/server.c:146 (redis-server+0x2a3b14) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) #2 bioProcessBackgroundJobs /home/sundb/data/redis_fork/src/bio.c:329 (redis-server+0x1c24ca) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) Previous write of size 4 at 0x55c9a4d11c70 by main thread (mutexes: write M0, write M1, write M2, write M3): #0 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1102 (redis-server+0x925e7) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) #1 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1087 (redis-server+0x925e7) #2 updateCachedTime /home/sundb/data/redis_fork/src/server.c:1118 (redis-server+0x925e7) #3 afterSleep /home/sundb/data/redis_fork/src/server.c:1811 (redis-server+0x925e7) #4 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:389 (redis-server+0x85ae0) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) #5 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85ae0) #6 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85ae0) #7 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) ``` 2. Thread leaks in module tests ``` WARNING: ThreadSanitizer: thread leak (pid=668683) Thread T13 (tid=670041, finished) created by main thread at: #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:1036 (libtsan.so.2+0x3d179) (BuildId: 28a9f70061dbb2dfa2cef661d3b23aff4ea13536) #1 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:200 (blockonbackground.so+0x97fd) (BuildId: 9cd187906c57e88cdf896d121d1d96448b37a136) #2 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:169 (blockonbackground.so+0x97fd) #3 call /home/sundb/data/redis_fork/src/server.c:3546 (redis-server+0x9b7fb) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) #4 processCommand /home/sundb/data/redis_fork/src/server.c:4176 (redis-server+0xa091c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) #5 processCommandAndResetClient /home/sundb/data/redis_fork/src/networking.c:2468 (redis-server+0xd2b8e) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) #6 processInputBuffer /home/sundb/data/redis_fork/src/networking.c:2576 (redis-server+0xd2b8e) #7 readQueryFromClient /home/sundb/data/redis_fork/src/networking.c:2722 (redis-server+0xd358f) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) #8 callHandler /home/sundb/data/redis_fork/src/connhelpers.h:58 (redis-server+0x288a7b) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) #9 connSocketEventHandler /home/sundb/data/redis_fork/src/socket.c:277 (redis-server+0x288a7b) #10 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:417 (redis-server+0x85b45) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) #11 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85b45) #12 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85b45) #13 main /home/sundb/data/redis_fork/src/server.c:7211 
(redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0) ```
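The first report above is a plain data race on a cached flag: one writer (the main thread refreshing cached time information) and several readers (bio and module threads writing log lines). A minimal way to make such a flag race-free is a C11 atomic; the sketch below is illustrative only and does not claim to be the fix applied in this commit.

```c
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

/* A flag written by the main thread and read by logging threads, as in the
 * daylight_active race above.  Plain int loads/stores race under TSan;
 * relaxed atomics are enough here because only the value itself matters,
 * not its ordering against other memory. */
static _Atomic int daylight_active_flag;

/* Main thread: refresh the cached value on each time update. */
void updateDaylightFlag(void) {
    time_t now = time(NULL);
    struct tm tm_now;
    localtime_r(&now, &tm_now);
    atomic_store_explicit(&daylight_active_flag, tm_now.tm_isdst > 0,
                          memory_order_relaxed);
}

/* Any thread (bio, module): read the flag without racing the writer. */
void logWithTimezone(const char *msg) {
    int dst = atomic_load_explicit(&daylight_active_flag, memory_order_relaxed);
    fprintf(stderr, "[dst=%d] %s\n", dst, msg);
}
```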
-
- 02 Apr, 2024 1 commit
-
-
Moti Cohen authored
# Overview Users utilize the `FLUSHDB SYNC` and `FLUSHALL SYNC` commands for a variety of reasons. The main issue with these commands is that if the database becomes substantial in size, the server will be unresponsive for an extended period. Other than freezing application traffic, this may also lead some clients to make incorrect judgments about the server's availability. For instance, a watchdog may erroneously decide to terminate the process, resulting in potential adverse outcomes. While `FLUSH* ASYNC` can address these issues, it might not be used for two reasons: firstly, it's not the default, and secondly, in some cases, the client issuing the flush wants to wait for its completion before repopulating the database. Between the option of triggering FLUSH* asynchronously in the background without any indication of completion, and running it synchronously in the foreground on the main thread, there is another, more appealing option: block the client that requested the flush, execute the flush in the background, and once done, unblock the client and return a completion notification. This approach ensures the server remains responsive to other clients, and the blocked client receives the expected response only after the flush operation has been successfully carried out. # Implementation details Instead of defining yet another flavor of the flush command, we modify `FLUSHALL SYNC` and `FLUSHDB SYNC` to always run in this new mode. ## Extending BIO Threads capabilities Today, jobs carried out by BIO threads have no way to indicate completion to the main thread. We add this infrastructure with an additional dummy job, coined a completion-job, that will eventually be written by BIO threads to a response-queue. The main thread consumes items from the response-queue and calls the provided callback function of each completion-job. ## FLUSH* SYNC to run as blocking ASYNC Command `FLUSH* SYNC` is modified to create one or more async jobs to flush the DB(s) and afterward push an additional completion-job request. By sending the completion-job request only at the end, the main thread is called back only after all the preceding jobs have completed their tasks in the background. During that time, the client of the command is suspended and marked as `BLOCKED_LAZYFREE`, whereas any other client can communicate with the server without any issue.
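A rough sketch of the completion-job idea described above: the worker drains a job queue, and a trailing completion marker is pushed back to a response queue that the main thread drains each event-loop iteration, invoking the callback (which would unblock the client). All names here are hypothetical and the queues are simplified (no condition variable, no shutdown), so this is only a sketch of the mechanism, not the BIO implementation.

```c
#include <pthread.h>
#include <stdlib.h>

/* Background jobs run on a worker thread; the last job pushed is a
 * completion marker whose callback is *not* run by the worker, but handed
 * back to the main thread via a response queue. */
typedef struct job {
    void (*run)(void *);        /* NULL marks a completion job */
    void (*done_cb)(void *);    /* callback the main thread should invoke */
    void *arg;
    struct job *next;
} job;

typedef struct queue {
    job *head, *tail;
    pthread_mutex_t lock;
} queue;

static queue bio_jobs    = { .lock = PTHREAD_MUTEX_INITIALIZER };
static queue completions = { .lock = PTHREAD_MUTEX_INITIALIZER };

static void push(queue *q, job *j) {
    pthread_mutex_lock(&q->lock);
    j->next = NULL;
    if (q->tail) q->tail->next = j; else q->head = j;
    q->tail = j;
    pthread_mutex_unlock(&q->lock);
}

static job *pop(queue *q) {
    pthread_mutex_lock(&q->lock);
    job *j = q->head;
    if (j) { q->head = j->next; if (!q->head) q->tail = NULL; }
    pthread_mutex_unlock(&q->lock);
    return j;
}

/* Worker: run real jobs; forward completion markers to the response queue.
 * (A real worker would block on a condition variable instead of exiting.) */
static void *bioWorker(void *unused) {
    (void)unused;
    for (job *j; (j = pop(&bio_jobs)) != NULL; ) {
        if (j->run) { j->run(j->arg); free(j); }
        else push(&completions, j);     /* tell the main thread we're done */
    }
    return NULL;
}

/* Main thread, once per event-loop iteration: consume completion jobs and
 * run their callbacks (e.g. unblock the client that issued FLUSHALL SYNC). */
static void drainCompletions(void) {
    for (job *j; (j = pop(&completions)) != NULL; ) {
        j->done_cb(j->arg);
        free(j);
    }
}
```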
-
- 20 Mar, 2024 1 commit
-
-
Pieter Cailliau authored
[Read more about the license change here](https://redis.com/blog/redis-adopts-dual-source-available-licensing/) Live long and prosper
🖖
-
- 19 Mar, 2024 1 commit
-
-
Binbin authored
Users who abuse lua error_reply will generate a new error object on each error call, which can make server.errors grow without bound. This will cause the server to block when calling INFO (we also return errorstats by default). To prevent the damage this can cause, when misuse is detected we print a warning log and disable errorstats to avoid adding more new errors. It can be re-enabled via CONFIG RESETSTAT. Because server.errors may be very large (it may be better now since we have the limit), CONFIG RESETSTAT may block for a while. So in resetErrorTableStats, we try to lazyfree server.errors. See the related discussion at the end of #8217.
-
- 13 Mar, 2024 1 commit
-
-
Binbin authored
In some cases, users will abuse lua eval. Each EVAL call generates a new lua script, which is added to the lua interpreter and cached in redis-server, consuming a large amount of memory over time. Since EVAL is mostly the one that abuses the lua cache, and these won't have pipeline issues (i.e. the script won't disappear unexpectedly, and cause errors like it would with SCRIPT LOAD and EVALSHA), we implement a plain FIFO LRU eviction only for these (not for scripts loaded with SCRIPT LOAD). ### Implementation notes: When not abused we'll probably have less than 100 scripts, and when abused we'll have many thousands. So we use a hard-coded value of 500 scripts. And considering that we don't have many scripts, unlike keys we don't need to worry about the memory usage of keeping a true sorted LRU linked list. Since we compute the SHA of each script anyway and put the script in a dict, we can store a listNode there, and use it for quick removal and re-insertion into an LRU list each time the script is used. ### New interfaces: At the same time, a new `evicted_scripts` field is added to INFO, which represents the number of evicted eval scripts. Users can check it to see if they are abusing EVAL. ### benchmark: `./src/redis-benchmark -P 10 -n 1000000 -r 10000000000 eval "return __rand_int__" 0` This simple EVAL-abuse benchmark creates 1 million EVAL scripts. The performance has been improved by 50%, and the max latency has dropped from 500ms to 13ms (the old latency may have been caused by table expansion inside Lua when the number of scripts is large). And in INFO memory, it used to consume 120MB (server cache) + 310MB (lua engine), but now it only consumes 70KB (server cache) + 210KB (lua_engine) because of the script eviction. For the non-abusive case of about 100 EVAL scripts, there's no noticeable change in performance or memory usage. ### unlikely potentially breaking change: in theory, a user can maybe load a script with EVAL and then use EVALSHA to call it (by calculating the SHA1 value on the client side); if we read the docs carefully we may realize it's a valid scenario, but we suppose it's extremely rare. So it may happen that EVALSHA acts on a script created by EVAL, and the script is evicted and EVALSHA returns a NOSCRIPT error. That is, if you have more than 500 scripts being used in the same transaction / pipeline. This solves the second point in #13102.
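The "store a listNode in the dict entry" trick described above gives O(1) lookup by SHA plus O(1) LRU maintenance. Below is a hedged, self-contained sketch of just the LRU bookkeeping; the SHA -> entry dict is assumed but not shown, and the names (`scriptEntry`, `scriptLRU`) are illustrative rather than the actual eval.c structures.

```c
#include <stddef.h>

/* Intrusive LRU list node embedded next to the script body.  The (not shown)
 * SHA -> scriptEntry dict gives O(1) lookup; because the entry embeds its own
 * list links, moving it to the LRU head or unlinking it is also O(1). */
typedef struct scriptEntry {
    char sha[41];                       /* hex SHA1 + NUL */
    char *body;
    struct scriptEntry *prev, *next;    /* LRU list links */
} scriptEntry;

typedef struct scriptLRU {
    scriptEntry *head, *tail;           /* head = most recently used */
    size_t count, max;                  /* e.g. max = 500 as in the PR */
} scriptLRU;

static void lruUnlink(scriptLRU *l, scriptEntry *e) {
    if (e->prev) e->prev->next = e->next; else l->head = e->next;
    if (e->next) e->next->prev = e->prev; else l->tail = e->prev;
    e->prev = e->next = NULL;
    l->count--;
}

static void lruPushHead(scriptLRU *l, scriptEntry *e) {
    e->prev = NULL;
    e->next = l->head;
    if (l->head) l->head->prev = e; else l->tail = e;
    l->head = e;
    l->count++;
}

/* Called whenever a cached script is executed: O(1) bump to the head. */
void scriptTouched(scriptLRU *l, scriptEntry *e) {
    lruUnlink(l, e);
    lruPushHead(l, e);
}

/* Called after caching a new EVAL script: evict from the tail when over the
 * cap.  The caller must also delete the evicted sha from its dict. */
scriptEntry *scriptInsertAndMaybeEvict(scriptLRU *l, scriptEntry *e) {
    lruPushHead(l, e);
    if (l->count <= l->max) return NULL;
    scriptEntry *victim = l->tail;
    lruUnlink(l, victim);
    return victim;                      /* caller frees body + dict entry */
}
```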
-
- 04 Mar, 2024 1 commit
-
-
debing.sun authored
After #13013 ### This PR makes an effort to defrag the pubsub kvstore in the following ways: 1. Until now server.pubsub(shard)_channels only shared the channel name obj with the first subscribed client; now change it so that the clients and the pubsub kvstore share the channel name robj. This saves a lot of memory when there are many subscribers to the same channel. It also means that we only need to defrag the channel name robj in the pubsub kvstore, and then update all client references for the current channel, avoiding the need to iterate through all the clients to do the same thing. 2. Refactor the code to defragment pubsub(shard) in the same way as the defragmentation of keys and EXPIRES, with the exception that we only defragment pubsub (without shard) when the slot is zero. ### Other Fix an oversight in #11695: if defragmentation doesn't reach the end time, we should wait for the current db's keys and expires, pubsub and pubsubshard to finish before leaving; until now it was possible to exit early once just the keys were defragmented. --------- Co-authored-by:
oranagra <oran@redislabs.com>
-
- 01 Mar, 2024 1 commit
-
-
Chen Tianjie authored
Sometimes we need to make a fast judgement about why Redis is suddenly taking more memory. One of the reasons is the main DB's dicts doing rehashing. We may use `MEMORY STATS` to monitor the overhead memory of each DB, but it still lacks a total sum to show an overall trend. So this PR adds the total overhead of all DBs to the `INFO MEMORY` section, together with the total count of rehashing DB dicts, providing some intuitive metrics about main dict rehashing. This PR adds the following metrics to INFO MEMORY * `mem_overhead_db_hashtable_rehashing` - only the size of ht[0] in dictionaries we're rehashing (i.e. the memory that's going to get released soon) and similar ones to MEMORY STATS: * `overhead.db.hashtable.lut` (complements the existing `overhead.hashtable.main` and `overhead.hashtable.expires`, which also count the `dictEntry` structs) * `overhead.db.hashtable.rehashing` - temporary rehashing overhead. * `db.dict.rehashing.count` - number of top level dictionaries being rehashed. --------- Co-authored-by:
zhaozhao.zz <zhaozhao.zz@alibaba-inc.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 28 Feb, 2024 1 commit
-
-
Binbin authored
Even though we have SCRIPT FLUSH ASYNC now, when there are a lot of lua scripts, SCRIPT FLUSH ASYNC will still block the main thread. This is because lua_close is executed in the main thread, and the lua heap needs to release a lot of memory. In this PR, we take the current lua instance from lctx.lua and call lua_close on it in a background thread, closing it asynchronously. This is MeirShpilraien's idea.
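Conceptually the change is: detach the current `lua_State`, hand it to a background thread that calls `lua_close()` (a real Lua C API call), and immediately create a fresh interpreter on the main thread. The sketch below shows that shape with plain pthreads; the function names and the synchronous fallback are assumptions, not the actual script_lua.c code.

```c
#include <pthread.h>
#include <lua.h>        /* lua_close(); link with -llua */
#include <lauxlib.h>    /* luaL_newstate() */

/* Worker: closing the interpreter frees the whole Lua heap, which can take
 * a long time when many scripts were cached, so do it off the main thread. */
static void *closeLuaInBackground(void *arg) {
    lua_State *old = arg;
    lua_close(old);
    return NULL;
}

/* Detach the current interpreter and hand it to a background thread.
 * Nothing may touch `old` after this point; the caller serves further
 * script commands from the returned fresh interpreter. */
lua_State *scriptFlushAsync(lua_State *old) {
    pthread_t tid;
    if (pthread_create(&tid, NULL, closeLuaInBackground, old) == 0) {
        pthread_detach(tid);
    } else {
        lua_close(old);         /* fallback: close synchronously */
    }
    return luaL_newstate();     /* fresh, empty interpreter */
}
```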
-
- 20 Feb, 2024 1 commit
-
-
debing.sun authored
Implement #12963 ## Changes 1. Large bins don't have external fragmentation, or are at least non-defraggable, so we should ignore the effect of large bins when measuring fragmentation and only measure the fragmentation of small bins. This affects both the allocator_frag* metrics and the active-defrag trigger. 2. Add INFO metrics for `muzzy` memory, which is memory returned to the OS but still shown as RSS until the OS reclaims it. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 18 Feb, 2024 2 commits
-
-
Binbin authored
AOF_FSYNC_EVERYSEC higher resolution, change aof_last_fsync and aof_flush_postponed_start to use mstime (#13041) Currently aof_last_fsync uses a low-resolution unixtime, which is really bad: it checks whether the absolute number of (full) seconds changed by one, and depending on which side of the second barrier it falls, we can get very different results. This PR changes the resolution to use milliseconds instead of complete seconds (a minimal sketch of the difference follows below). In cases where event loop cycles are short and rapid (e.g. running many fast commands with a short pipeline, or a high `hz` config), this change will not make much difference, since either way we'll be quick to detect that we're on a "new second", and it's likely that these fsyncs will always be executed close to the second switch barrier. But in cases of rare or slow event loop cycles (e.g. either slow commands, or a very low rate of traffic to redis, and low `hz`), it could easily be that with the old code, in some cases we'd have over 1.5 seconds between fsyncs, and in others less than 0.5. See the discussion in #8612. This PR also handles aof_flush_postponed_start; the damage there is smaller since the threshold is 2 seconds, not 1. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
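A minimal sketch of the resolution change discussed above, contrasting the old whole-second comparison with a millisecond threshold (names are illustrative, not the aof.c ones):

```c
#include <time.h>
#include <sys/time.h>

typedef long long mstime_t;

static mstime_t mstime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (mstime_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* Old-style check: fires whenever the (whole) second number changed, so the
 * real interval between fsyncs could be anywhere from ~0 to ~2 seconds. */
int shouldFsyncEverysecOld(time_t unixtime, time_t last_fsync_sec) {
    return unixtime > last_fsync_sec;
}

/* Millisecond-resolution check: fsync once at least 1000 ms have elapsed,
 * independent of where we fall relative to the second boundary. */
int shouldFsyncEverysecMs(mstime_t now_ms, mstime_t last_fsync_ms) {
    return now_ms - last_fsync_ms >= 1000;
}
```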
-
zhaozhao.zz authored
Redis has some special commands that mark the client's state, such as `subscribe` and `blpop`, which mark the client as `CLIENT_PUBSUB` or `CLIENT_BLOCKED`, and we have metrics for these special use cases. However, there are also other special commands, like `WATCH`, which, although they do not have a specific flag, should also be considered stateful client types. For stateful clients, in many scenarios the connections cannot be shared in a "connection pool". For example, whenever the `WATCH` command is executed, a new connection is required to put the client into the "watch state" because the watched keys are stored in the client. If different business logic requires watching different keys, separate connections must be used; otherwise, there will be contamination. This also means that if a user's business heavily relies on the `WATCH` command, a large number of connections will be required. Recently we have encountered this situation on our platform, where some users consume a significant number of connections when using Redis because of `WATCH`. I hope we can have a way to observe these special use cases and special client connections. Here I add a few monitoring metrics: 1. `watching_clients` in `INFO` reply: The number of clients currently in the "watching" state. 2. `total_watched_keys` in `INFO` reply: The total number of keys being watched. 3. `watch` in `CLIENT LIST` reply: The number of keys each client is currently watching.
-
- 08 Feb, 2024 2 commits
-
-
Binbin authored
The test fails here and there: ``` *** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl scan didn't handle slot skipping logic. ``` There are two cases: 1. In the case of passing the test, we use a child process to avoid the dict resize, but it can not completely prevent it, since in dictDelete we still have a chance to trigger the resize (hitting the force ratio). The reason why our test passed before is that the expire dict is still in the rehashing process, so in dictDelete, dictShrinkIfNeeded can not trigger the resize. 2. In the case of failing the test, the expire dict finished rehashing, so in the last dictDelete, dictShrinkIfNeeded triggers the dict resize since it hits the force ratio, and the skipping logic fails. This PR adds a new DEBUG command to disable dict resizing.
-
Binbin authored
We forgot to call quicklistSetOptions after createQuicklistObject; in the SORT STORE scenario, we would create a quicklist with default fill or compress options. This PR adds fill and depth parameters to createQuicklistObject, removing the need to set these options separately after creating a quicklist. This closes #12871. release notes: > Fix lists created by SORT STORE to respect list compression and packing configs.
-
- 06 Feb, 2024 1 commit
-
-
Binbin authored
Currently, once active defrag starts, we can not adjust active_defrag_running downwards. This is because active_defrag_running is dynamically computed based on the fragmentation, and we think we should not lower the effort when the fragmentation drops. However, we need to note that active_defrag_running is also dynamically computed based on configurations; in this case, we are not respecting cycle-min or cycle-max. Some people may realize halfway through that defrag consumes a lot and want to adjust it. Previously we could only turn off activedefrag and then turn it on again to adjust active_defrag_running downwards. So in this PR, when an active defrag configuration change is made, we re-compute it. These configuration items are: - active-defrag-cycle-min - active-defrag-cycle-max - active-defrag-threshold-upper
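A sketch of the recompute-and-clamp idea: derive the effort from the fragmentation ramp between the thresholds, then clamp it to cycle-min/cycle-max, so re-running the computation after a CONFIG SET immediately honors a lowered cycle-max. The exact formula in Redis differs; this only illustrates the clamping.

```c
/* Map the current fragmentation percentage to a CPU effort, then clamp it
 * to the configured cycle-min/cycle-max. */
typedef struct defragConfig {
    int cycle_min;          /* e.g. active-defrag-cycle-min */
    int cycle_max;          /* e.g. active-defrag-cycle-max */
    int threshold_lower;    /* frag %, start of the ramp */
    int threshold_upper;    /* frag %, full effort */
} defragConfig;

int computeDefragCycle(const defragConfig *cfg, int frag_pct) {
    int span = cfg->threshold_upper - cfg->threshold_lower;
    int pos = frag_pct - cfg->threshold_lower;
    if (span <= 0) span = 1;
    if (pos < 0) pos = 0;
    if (pos > span) pos = span;
    /* linear ramp between cycle_min and cycle_max */
    int cycle = cfg->cycle_min +
                (cfg->cycle_max - cfg->cycle_min) * pos / span;
    if (cycle < cfg->cycle_min) cycle = cfg->cycle_min;
    if (cycle > cfg->cycle_max) cycle = cfg->cycle_max;
    return cycle;
}
```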
-
- 05 Feb, 2024 1 commit
-
-
guybe7 authored
# Description Gather most of the scattered `redisDb`-related code from the per-slot dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e. a class that represents an array of dictionaries. # Motivation The main motivation is code cleanliness; the idea of using an array of dictionaries is very well-suited to becoming a self-contained data structure. This allowed cleaning some ugly code, among others: loops that run twice on the main dict and expires dict, and duplicate code for allocating and releasing this data structure. # Notes 1. This PR reverts the part of https://github.com/redis/redis/pull/12848 where the `rehashing` list is global (handling rehashing `dict`s is under the responsibility of `kvstore`, and should not be managed by the server) 2. This PR also replaces the type of `server.pubsubshard_channels` from `dict**` to `kvstore` (original PR: https://github.com/redis/redis/pull/12804). After that was done, server.pubsub_channels was also chosen to be a `kvstore` (with only one `dict`, which seems odd) just to make the code cleaner by making it the same type as `server.pubsubshard_channels`, see `pubsubtype.serverPubSubChannels` 3. the keys and expires kvstores are currently configured to allocate the individual dicts only when the first key is added (unlike before, when they were allocated in advance), but they won't release them when the last key is deleted. Worth mentioning that due to the recent change the reply of DEBUG HTSTATS changed, in case no keys were ever added to the db. before: ``` 127.0.0.1:6379> DEBUG htstats 9 [Dictionary HT] Hash table 0 stats (main hash table): No stats available for empty dictionaries [Expires HT] Hash table 0 stats (main hash table): No stats available for empty dictionaries ``` after: ``` 127.0.0.1:6379> DEBUG htstats 9 [Dictionary HT] [Expires HT] ```
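The essence of `kvstore` is "one object owning an array of dicts, sized 1 or 16384, with per-slot dicts allocated lazily". The sketch below shows that shape with a stand-in dict type; all names (`kvstoreSketch`, `kvsGetOrCreateDict`, ...) are hypothetical and not the real kvstore.c API.

```c
#include <stdlib.h>

/* Stand-in for the real dict type; just enough to show the shape. */
typedef struct miniDict { void *impl; unsigned long used; } miniDict;

static miniDict *miniDictCreate(void) { return calloc(1, sizeof(miniDict)); }
static void miniDictRelease(miniDict *d) { free(d); }

typedef struct kvstoreSketch {
    miniDict **dicts;           /* one slot per entry, allocated lazily */
    int num_dicts;              /* 1 in standalone mode, 16384 in cluster */
    int allocated_dicts;        /* how many slots currently own a dict */
} kvstoreSketch;

kvstoreSketch *kvsCreate(int num_dicts) {
    kvstoreSketch *kvs = calloc(1, sizeof(*kvs));
    kvs->dicts = calloc(num_dicts, sizeof(miniDict *));
    kvs->num_dicts = num_dicts;
    return kvs;
}

/* Lazily allocate a slot's dict the first time a key lands in it, matching
 * the "allocate on first key" behavior described in the commit. */
miniDict *kvsGetOrCreateDict(kvstoreSketch *kvs, int slot) {
    if (kvs->dicts[slot] == NULL) {
        kvs->dicts[slot] = miniDictCreate();
        kvs->allocated_dicts++;
    }
    return kvs->dicts[slot];
}

void kvsRelease(kvstoreSketch *kvs) {
    for (int i = 0; i < kvs->num_dicts; i++)
        if (kvs->dicts[i]) miniDictRelease(kvs->dicts[i]);
    free(kvs->dicts);
    free(kvs);
}
```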
-
- 31 Jan, 2024 1 commit
-
-
Binbin authored
When we use a timer to unblock a client in a module, if the timer period and the block timeout are very close, they will unblock the client in the same event loop, and it will trigger the assertion. The reason is that in moduleBlockedClientTimedOut we protect against re-processing, so we don't actually call updateStatsOnUnblock (see #12817), and therefore we are not able to reset c->duration. In other words, unblockClientOnTimeout() didn't realize that bc had already been unblocked. We add a function to the module to determine if bc is blocked, and then use it in unblockClientOnTimeout() to exit early. Here is the stack: ``` beforeSleep blockedBeforeSleep handleBlockedClientsTimeout checkBlockedClientTimeout unblockClientOnTimeout unblockClient resetClient -- assertion, crash the server 'c->duration == 0' is not true ```
-
- 30 Jan, 2024 1 commit
-
-
Binbin authored
In #11012, we reprocess the command when a client is unblocked on keys. In some blocking commands, for example in the XREADGROUP BLOCK scenario, re-processing the command recalculates the block timeout, causing the blocking time to be reset. This commit adds a new CLIENT_REPROCESSING_COMMAND client flag to explicitly let the command know that it is being re-processed; later, in blockForKeys, we will not reset the timeout. Affected BLOCK cases: - list / zset / stream, added test cases for each. Unaffected cases: - module (never re-processes the commands). - WAIT / WAITAOF (never re-process the commands). Fixes #12998.
-
- 29 Jan, 2024 1 commit
-
-
Chen Tianjie authored
The function `tryResizeHashTables` only attempts to shrink the dicts that have keys (change from #11695); this was a serious problem until the change in #12850, since it meant that if all keys were deleted, we wouldn't shrink the dict. But still, both dictShrink and dictExpand may be blocked by a fork child process, therefore the cron job needs to perform both dictShrink and dictExpand, not just for non-empty dicts, but for all dicts in DBs. What this PR does: 1. Try to resize all dicts in DBs (not just non-empty ones, as it was since #12850) 2. Handle both shrink and expand (not just shrink, as it was since forever) 3. Refactor some APIs about dict resizing (get rid of `htNeedsShrink` and `dictShrinkToFit`, and expose `dictShrinkIfNeeded` and `dictExpandIfNeeded`, which already contain all the code of the functions we get rid of, to make the APIs neater) 4. In the `Don't rehash if redis has child process` test, now that cron does the resizing, we no longer need to write to the DB after the child process gets killed, and can wait for the cron to expand the hash table.
-
- 19 Jan, 2024 1 commit
-
-
debing.sun authored
Fix #12785 and other race condition issues. See the following isolated comments. The following report was obtained using SANITIZER=thread. ```sh make SANITIZER=thread ./runtest-moduleapi --config io-threads 4 --config io-threads-do-reads yes --accurate ``` 1. Fixed a thread-safety issue in RM_UnblockClient() Related discussion: https://github.com/redis/redis/pull/12817#issuecomment-1831181220 * When blocking a client in a module using `RM_BlockClientOnKeys()` or `RM_BlockClientOnKeysWithFlags()` with a timeout_callback, calling RM_UnblockClient() in module threads can lead to race conditions in `updateStatsOnUnblock()`. - Introduced: Version: 6.2 PR: #7491 - Touch: `server.stat_numcommands`, `cmd->latency_histogram`, `server.slowlog`, and `server.latency_events` - Harm Level: High Potentially corrupts the memory data of `cmd->latency_histogram`, `server.slowlog`, and `server.latency_events` - Solution: Differentiate whether the call to moduleBlockedClientTimedOut() comes from the module or the main thread. Since we can't know if RM_UnblockClient() comes from module threads, we always assume it does and let `updateStatsOnUnblock()` asynchronously update the unblock status. * When an error reply is issued in timeout_callback(), ctx is not thread-safe, eventually leading to race conditions in `afterErrorReply`. - Introduced: Version: 6.2 PR: #8217 - Touch: `server.stat_total_error_replies`, `server.errors` - Harm Level: High Potentially corrupts the memory data of `server.errors` - Solution: Create the ctx in `timeout_callback()` with `REDISMODULE_CTX_THREAD_SAFE`, and asynchronously reply errors to the client. 2. Made the RM_Reply*() family of APIs thread-safe Related discussion: https://github.com/redis/redis/pull/12817#discussion_r1408707239 Call chain: `RM_Reply*()` -> `_addReplyToBufferOrList()` -> touch server.current_client - Introduced: Version: 7.2.0 PR: #12326 - Harm Level: None Since the module fake client won't have the `CLIENT_PUSHING` flag, even if we touch server.current_client, we can still exit after `c->flags & CLIENT_PUSHING`. - Solution: Check `c->flags & CLIENT_PUSHING` earlier. 3. Made freeClient() thread-safe Fix #12785 - Introduced: Version: 4.0 Commit: https://github.com/redis/redis/commit/3fcf959e609e850a114d4016843e4c991066ebac - Harm Level: Moderate * Trigger assertion It happens when the module thread calls freeClient while the io-thread is in progress, which just triggers an assertion and doesn't cause any race conditions. * Touch `server.current_client`, `server.stat_clients_type_memory`, and `clientMemUsageBucket->clients`. It happens between the main thread and the module threads and may cause data corruption. 1. Erroneously resets `server.current_client` to NULL, but theoretically this won't happen, because the module has already reset `server.current_client` to the old value before entering freeClient. 2. Corrupts `clientMemUsageBucket->clients` in updateClientMemUsageAndBucket(). 3. Causes server.stat_clients_type_memory memory statistics to be inaccurate. - Solution: * No longer count memory usage for fake clients, to avoid updating `server.stat_clients_type_memory` in freeClient. * No longer reset `server.current_client` in unlinkClient, because the fake client won't be evicted or disconnected in the middle of the process. * Assert `io_threads_op == IO_THREADS_OP_IDLE` only if c is not a fake client. 4.
Fixed freeing client args without the GIL Related discussion: https://github.com/redis/redis/pull/12817#discussion_r1408706695 When freeing retained strings in the module thread (refcount decr), or using them in some way (refcount incr), we should do so while holding the GIL; otherwise, they might be simultaneously freed while the main thread is processing the unblock-client state. - Introduced: Version: 6.2.0 PR: #8141 - Harm Level: Low Triggers an assertion, a double free or a memory leak. - Solution: Document that module API users need to ensure any access to these retained strings is done with the GIL locked. 5. Fix adding a fake client to server.clients_pending_write It would incorrectly log the memory usage for the fake client. Related discussion: https://github.com/redis/redis/pull/12817#issuecomment-1851899163 - Introduced: Version: 4.0 Commit: https://github.com/redis/redis/commit/9b01b64430fbc1487429144d2e4e72a4a7fd9db2 - Harm Level: None Only results in a NOP - Solution: * Don't add fake clients into server.clients_pending_write * Add a c->conn assertion to updateClientMemUsageAndBucket() and updateClientMemoryUsage() to avoid the same issue in the future. So now it will be the responsibility of the callers of both of them to avoid passing in a fake client. 6. Fix calling RM_BlockedClientMeasureTimeStart() and RM_BlockedClientMeasureTimeEnd() without the GIL - Introduced: Version: 6.2 PR: #7491 - Harm Level: Low Causes inaccuracies in the command latency histogram and slow logs, but does not corrupt memory. - Solution: Module API users, if they know that non-thread-safe APIs will be used in multi-threading, need to take responsibility for protecting them with their own locks instead of the GIL, as using the GIL is too expensive. ### Other issues 1. RM_Yield is not thread-safe, fixed via #12905. ### Summary 1. Fix thread-safety issues for `RM_UnblockClient()`, `freeClient()` and `RM_Yield`, potentially preventing memory corruption, data disorder, or assertions. 2. Updated docs and module tests to clarify module API users' responsibility for locking non-thread-safe APIs in multi-threading, such as RM_BlockedClientMeasureTimeStart/End(), RM_FreeString(), RM_RetainString(), and RM_HoldString(). ### About backporting to 7.2 1. The implementation of (1) is not entirely satisfying; we would like to get more eyes on it. 2. (2), (3) can safely be backported. 3. (4), (6) just modify the module tests and update the documentation, no need for a backport. 4. (5) is harmless, no need for a backport. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 15 Jan, 2024 1 commit
-
-
Yanqi Lv authored
When we insert entries into a dict, it may autonomously expand if needed. However, when we delete entries from a dict, it doesn't shrink to the proper size. If there are few entries in a very large dict, it may cause a huge waste of memory and inefficiency when iterating. The main keyspace dicts (keys and expires) are shrunk by cron (`tryResizeHashTables` calls `htNeedsResize` and `dictResize`), and some data structures such as zset and hash also do that (call `htNeedsResize`) right after a loop of calls to `dictDelete`, but many other dicts are completely missing that call (they can only expand). In this PR, we provide the ability to automatically shrink the dict when deleting. The conditions triggering the shrinking are the same as `htNeedsResize` used to have, i.e. we expand when we're over 100% utilization and shrink when we're below 10% utilization (see the sketch below). Additionally: * Add `dictPauseAutoResize` so that flows that do mass deletions will only trigger shrinkage at the end. * Rename `dictResize` to `dictShrinkToFit` (same logic as it used to have, but a better name describing it) * Rename `_dictExpand` to `_dictResize` (same logic as it used to have, but a better name describing it) related to discussion https://github.com/redis/redis/pull/12819#discussion_r1409293878 --------- Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
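The 100% / 10% utilization thresholds mentioned above, plus the pause counter added by `dictPauseAutoResize`, can be expressed in a few lines. This is a simplified policy sketch (minimum table size and the fork-child check are omitted), not the actual dict.c logic.

```c
/* Utilization-based resize policy: grow when used/buckets goes above 100%,
 * shrink when it drops below 10% and auto-resize isn't paused
 * (e.g. during a mass delete). */
typedef struct dictStats {
    unsigned long used;     /* number of entries */
    unsigned long buckets;  /* current table size */
    int pause_auto_resize;  /* incremented by a dictPauseAutoResize()-style call */
} dictStats;

int needsExpand(const dictStats *d) {
    return d->buckets == 0 || d->used > d->buckets;           /* > 100% */
}

int needsShrink(const dictStats *d) {
    if (d->pause_auto_resize > 0) return 0;
    return d->buckets > 4 && d->used * 10 < d->buckets;        /* < 10% */
}
```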
-
- 08 Jan, 2024 1 commit
-
-
Yanqi Lv authored
I've been testing the performance of the Pub/Sub commands recently. I find that if many clients unsubscribe or are killed simultaneously, Redis needs a long time to deal with it. In my experiment, I set up 5000 clients and each client subscribes to 100 channels. Then I call `client kill type pubsub` to simulate the situation where clients unsubscribe from all channels at the same time and measure the execution time. The result shows that it takes about 23s. I use _perf_ and find that `listSearchKey` in `pubsubUnsubscribeChannel` costs more than 90% of the cpu time. I think we can optimize this situation. In this PR, I replace the list with a dict to track the clients subscribing to the channel more efficiently. It changes O(N) to O(1) in the search phase. Then I repeat the experiment as above. The results are as follows. | | Execution Time(s) | used_memory(MB) | | :---------------- | :------: | :----: | | unstable(1bd0b549) | 23.734 | 65.41 | | optimize-pubsub | 0.288 | 67.66 | Thanks to #11595, I use a no-value dict and the results show that the performance improves significantly while the memory usage only increases slightly. Notice: - This PR will cause a performance degradation of about 20% in the `[p|s]subscribe` commands but won't freeze Redis.
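Replacing the subscriber list with a (no-value) dict keyed by the client pointer is what turns the unsubscribe path from O(N) into O(1). The sketch below shows the idea with a tiny chained hash set; the fixed bucket count and names are assumptions, since the real code reuses the Redis dict.

```c
#include <stdint.h>
#include <stdlib.h>

/* Tiny pointer-keyed hash set standing in for the "no-value dict" of channel
 * subscribers: add/remove are O(1) on average, so unsubscribing (or killing)
 * a client no longer needs an O(N) scan of a subscriber list per channel.
 * Zero-initialize before use: subscriberSet s = {0}; */
#define SUB_BUCKETS 1024                 /* fixed size; the real dict rehashes */

typedef struct subNode {
    const void *client;
    struct subNode *next;
} subNode;

typedef struct subscriberSet {
    subNode *buckets[SUB_BUCKETS];
    size_t count;
} subscriberSet;

static size_t subHash(const void *client) {
    return (((uintptr_t)client) >> 3) & (SUB_BUCKETS - 1);
}

int subscriberAdd(subscriberSet *s, const void *client) {
    size_t h = subHash(client);
    for (subNode *n = s->buckets[h]; n; n = n->next)
        if (n->client == client) return 0;       /* already subscribed */
    subNode *n = malloc(sizeof(*n));
    n->client = client;
    n->next = s->buckets[h];
    s->buckets[h] = n;
    s->count++;
    return 1;
}

int subscriberRemove(subscriberSet *s, const void *client) {
    size_t h = subHash(client);
    for (subNode **link = &s->buckets[h]; *link; link = &(*link)->next) {
        if ((*link)->client == client) {
            subNode *dead = *link;
            *link = dead->next;
            free(dead);
            s->count--;
            return 1;
        }
    }
    return 0;                                    /* was not subscribed */
}
```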
-
- 07 Jan, 2024 1 commit
-
-
debing.sun authored
## Issues and solutions from #12817 1. Touch ProcessingEventsWhileBlocked and calling moduleCount() without GIL in afterSleep() - Introduced: Version: 7.0.0 PR: #9963 - Harm Level: Very High If the module thread calls `RM_Yield()` before the main thread enters afterSleep(), and modifies `ProcessingEventsWhileBlocked`(+1), it will cause the main thread to not wait for GIL, which can lead to all kinds of unforeseen problems, including memory data corruption. - Initial / Abandoned Solution: * Added `__thread` specifier for ProcessingEventsWhileBlocked. `ProcessingEventsWhileBlocked` is used to protect against nested event processing, but event processing in the main thread and module threads should be completely independent and unaffected, so it is safer to use TLS. * Adding a cached module count to keep track of the current number of modules, to avoid having to use `dictSize()`. - Related Warnings: ``` WARNING: ThreadSanitizer: data race (pid=1136) Write of size 4 at 0x0001045990c0 by thread T4 (mutexes: write M0): #0 processEventsWhileBlocked networking.c:4135 (redis-server:arm64+0x10006d124) #1 RM_Yield module.c:2410 (redis-server:arm64+0x10018b66c) #2 bg_call_worker <null>:83232836 (blockedclient.so:arm64+0x16a8) Previous read of size 4 at 0x0001045990c0 by main thread: #0 afterSleep server.c:1861 (redis-server:arm64+0x100024f98) #1 aeProcessEvents ae.c:408 (redis-server:arm64+0x10000fd64) #2 aeMain ae.c:496 (redis-server:arm64+0x100010f0c) #3 main server.c:7220 (redis-server:arm64+0x10003f38c) ``` 2. aeApiPoll() is not thread-safe When using RM_Yield to handle events in a module thread, if the main thread has not yet entered `afterSleep()`, both the module thread and the main thread may touch `server.el` at the same time. - Introduced: Version: 7.0.0 PR: #9963 - Old / Abandoned Solution: Adding a new mutex to protect timing between after beforeSleep() and before afterSleep(). Defect: If the main thread enters the ae loop without any IO events, it will wait until the next timeout or until there is any event again, and the module thread will always hang until the main thread leaves the event loop. - Related Warnings: ``` SUMMARY: ThreadSanitizer: data race ae_kqueue.c:55 in addEventMask ================== ================== WARNING: ThreadSanitizer: data race (pid=14682) Write of size 4 at 0x000100b54000 by thread T9 (mutexes: write M0): #0 aeApiPoll ae_kqueue.c:175 (redis-server:arm64+0x100010588) #1 aeProcessEvents ae.c:399 (redis-server:arm64+0x10000fb84) #2 processEventsWhileBlocked networking.c:4138 (redis-server:arm64+0x10006d3c4) #3 RM_Yield module.c:2410 (redis-server:arm64+0x10018b66c) #4 bg_call_worker <null>:16042052 (blockedclient.so:arm64+0x169c) Previous write of size 4 at 0x000100b54000 by main thread: #0 aeApiPoll ae_kqueue.c:175 (redis-server:arm64+0x100010588) #1 aeProcessEvents ae.c:399 (redis-server:arm64+0x10000fb84) #2 aeMain ae.c:496 (redis-server:arm64+0x100010da8) #3 main server.c:7238 (redis-server:arm64+0x10003f51c) ``` ## The final fix as the comments: https://github.com/redis/redis/pull/12817#discussion_r1436427232 Optimized solution based on the above comment: First, we add `module_gil_acquring` to indicate whether the main thread is currently in the acquiring GIL state. When the module thread starts to yield, there are two possibilities(we assume the caller keeps the GIL): 1. The main thread is in the mid of beforeSleep() and afterSleep(), that is, `module_gil_acquring` is not 1 now. 
At this point, the module thread will wake up the main thread through the pipe and leave the yield, waiting for the next yield, when the main thread may already be in the acquiring-GIL state. 2. The main thread is in the acquiring-GIL state. The module thread releases the GIL, yielding the CPU to give the main thread an opportunity to start event processing, and then acquires the GIL again once the main thread releases it. This is the direction mentioned in https://github.com/redis/redis/pull/12817#discussion_r1436427232. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 02 Jan, 2024 1 commit
-
-
AshMosh authored
There are situations (especially in TLS) in which the engine gets too occupied managing a large number of new connections. Existing connections may time out while the server is processing the new connections' initial TLS handshakes, which may cause new connections to be established, perpetuating the problem. To better manage the tradeoff between the new connection rate and other workloads, this change adds a new config to manage the maximum number of new connections per event loop cycle, instead of using a predetermined number (currently 1000); see the sketch below. This change introduces two new configurations, max-new-connections-per-cycle and max-new-tls-connections-per-cycle. The default for TCP connections is 10 per cycle and the default for TLS connections is 1 per cycle. --------- Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com>
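The mechanism is simply a cap on how many accept() calls one event-loop iteration may perform; pending connections stay in the listen backlog until the next cycle. A hedged sketch (not the actual anet.c/socket.c code):

```c
#include <sys/socket.h>
#include <errno.h>

/* Accept at most `max_per_cycle` new connections in one event-loop
 * iteration; anything still pending stays in the kernel backlog and is
 * picked up next cycle, so established clients keep getting served. */
int acceptUpToLimit(int listen_fd, int max_per_cycle,
                    void (*on_accept)(int cfd)) {
    int accepted = 0;
    while (accepted < max_per_cycle) {
        int cfd = accept(listen_fd, NULL, NULL);
        if (cfd == -1) {
            if (errno == EINTR) continue;     /* retry on signal */
            break;                            /* EAGAIN/EWOULDBLOCK: drained */
        }
        on_accept(cfd);                       /* e.g. start the TLS handshake */
        accepted++;
    }
    return accepted;
}
```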
-
- 27 Dec, 2023 1 commit
-
-
Chen Tianjie authored
We have achieved replacing the `slots_to_keys` radix tree with a key->slot linked list (#9356), and then replacing the list with slot-specific dictionaries for keys (#11695). Shard channels behave just like keys in many ways, and we also need a slots->channels mapping. Currently this is still done by using a radix tree. So we should split `server.pubsubshard_channels` into 16384 dicts and drop the radix tree, just like what we did to DBs. Some benefits (basically the benefits of what we've done to DBs): 1. Optimize counting channels in a slot. This is currently used only in removing channels in a slot. But this is potentially more useful: sometimes we need to know how many channels there are in a specific slot when doing slot migration. Counting is now implemented by traversing the radix tree, and with this PR it will be as simple as calling `dictSize`, from O(n) to O(1). 2. The radix tree in the cluster has been removed. The shard channel names no longer require additional storage, which can save memory. 3. Potentially useful in slot migration, as shard channels are logically split by slots, thus making it easier to migrate, remove or add as a whole. 4. Avoid rehashing a big dict when there is a large number of channels. Drawbacks: 1. Takes more memory than using a radix tree when there are relatively few shard channels. What this PR does: 1. In cluster mode, split `server.pubsubshard_channels` into 16384 dicts; in standalone mode, still use only one dict. 2. Drop the `slots_to_channels` radix tree. 3. To save memory (to solve the drawback above), all 16384 dicts are created lazily, which means the dict is initialized only when a channel is about to be inserted into it, and when all channels are deleted, the dict deletes itself. 4. Use `server.shard_channel_count` to keep track of the number of all shard channels. --------- Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech>
-
- 21 Dec, 2023 1 commit
-
-
Binbin authored
Let us see which version of redis this tool is part of. Similarly to redis-cli, redis-benchmark and redis-check-rdb, redis-rdb-check and redis-aof-check are actually symlinks to redis, so they will directly use getVersion in server; the format became: ``` {title} v={redis_version} sha={sha}:{dirty} malloc={malloc} bits={bits} build={build} ``` Move cliVersion into cli_common; redis-cli and redis-benchmark will use it, and the format is not changed: ``` {title} {redis_version} (git:{sha}) ```
-
- 15 Dec, 2023 1 commit
-
-
zhaozhao.zz authored
After #11695, we added two functions `rehashingStarted` and `rehashingCompleted` to the dict structure. We also registered two handlers for the main database's dict and expire structures. This allows the main database to record the dict in the `rehashing` list when rehashing starts. Later, in `serverCron`, the `incrementallyRehash` function is continuously called to perform the rehashing operation. However, currently, when rehashing is completed, `rehashingCompleted` does not remove the dict from the `rehashing` list. This results in the `rehashing` list containing many invalid dicts. Although subsequent cron checks and removes dicts that don't require rehashing, it is still inefficient. This PR implements the functionality to remove the dict from the `rehashing` list in `rehashingCompleted`. This is achieved by adding `metadata` to the dict structure, which keeps track of its position in the `rehashing` list, allowing for quick removal. This approach avoids storing duplicate dicts in the `rehashing` list. Additionally, there are other modifications: 1. Whether in standalone or cluster mode, the dicts in the database are inserted into the rehashing linked list when rehashing starts. This eliminates the need to distinguish between standalone and cluster mode in `incrementallyRehash`. The function only needs to focus on the dicts in the `rehashing` list that require rehashing. 2. The `rehashing` list is moved from the per-database level to the Redis server level. This decouples `incrementallyRehash` from the database ID, and in standalone mode, there is no need to iterate over all databases, avoiding unnecessary access to databases that do not require rehashing. In the future, even if unsharded-cluster mode supports multiple databases, there will be no risk involved. 3. The insertion and removal operations of dict structures in the `rehashing` list are decoupled from the `activerehashing` config. `activerehashing` only controls whether `incrementallyRehash` is executed in serverCron. There is no need for additional steps when modifying the `activerehashing` switch, as in #12705.
-
- 13 Dec, 2023 1 commit
-
-
Chen Tianjie authored
In the INFO CLIENTS section, we already have blocked_clients and tracking_clients. We should add a new metric showing the number of pubsub connections, which helps with performance monitoring and troubleshooting.
-
- 10 Dec, 2023 1 commit
-
-
Binbin authored
The change in dbSwapDatabases seems harmless. Because in non-clustered mode, dbBuckets calculations are strictly accurate and in cluster mode, we only have one DB. Modify it for uniformity (just like resize_cursor). The change in swapMainDbWithTempDb is needed in case we swap with the temp db, otherwise the overhead memory usage of db can be miscalculated. In addition we will swap all fields (including rehashing list), just for completeness (and reduce the chance of surprises in the future). Introduced in #12697.
-
- 06 Dec, 2023 1 commit
-
-
zhaozhao.zz authored
Additional optimizations for the eviction logic in #11695, to make the eviction logic clearer and decouple the number of sampled keys from the running mode (cluster or standalone). * When sampling in each database, we only care about the number of keys in the current database (not the dicts we sampled from). * If there is an insufficient number of keys in the current database (e.g. fewer than 10 times the value of `maxmemory_samples`), we can break out sooner (to avoid looping on a sparse database). * We'll never try to sample the db dicts more times than the number of non-empty dicts in the db (max 1 in non-cluster mode). This also ensures that each database has a sufficient amount of sampled keys, so even if unsharded-cluster supports multiple databases, there won't be any issues. Other changes: 1. Keep track of the number of non-empty dicts in each database. 2. Move key_count tracking into cumulativeKeyCountAdd rather than all its callers --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 28 Nov, 2023 1 commit
-
-
zhaozhao.zz authored
Introduced in #11695. The tryResizeHashTables function gets stuck on the last non-empty slot while iterating through dictionaries. It does not restart from the beginning. The reason for this issue is a problem with the usage of dbIteratorNextDict: /* Returns next dictionary from the iterator, or NULL if iteration is complete. */ dict *dbIteratorNextDict(dbIterator *dbit) { if (dbit->next_slot == -1) return NULL; dbit->slot = dbit->next_slot; dbit->next_slot = dbGetNextNonEmptySlot(dbit->db, dbit->slot, dbit->keyType); return dbGetDictFromIterator(dbit); } When iterating to the last non-empty slot, next_slot is set to -1, causing it to loop indefinitely on that slot. We need to modify the code to ensure that after iterating to the last non-empty slot, it returns to the first non-empty slot. BTW, the tryResizeHashTables function actually iterates over slots that have keys. However, in its implementation, it leverages the dbIterator (which is a key iterator) to obtain slot and dictionary information. While this approach works fine, it is not very intuitive. This PR also improves readability by changing the iteration to directly iterate over slots, thereby enhancing clarity.
-
- 23 Nov, 2023 1 commit
-
-
meiravgri authored
see discussion from after https://github.com/redis/redis/pull/12453 was merged ---- This PR replaces signals that are not considered async-signal-safe (AS-safe) with safe calls. #### **1. serverLog() and serverLogFromHandler()** `serverLog` uses unsafe calls. It was decided that we will **avoid** `serverLog` calls by the signal handlers when: * The signal is not fatal, such as SIGALRM. In these cases, we prefer using `serverLogFromHandler` which is the safe version of `serverLog`. Note they have different prompts: `serverLog`: `62220:M 26 Oct 2023 14:39:04.526 # <msg>` `serverLogFromHandler`: `62220:signal-handler (1698331136) <msg>` * The code was added recently. Calls to `serverLog` by the signal handler have been there ever since Redis exists and it hasn't caused problems so far. To avoid regression, from now we should use `serverLogFromHandler` #### **2. `snprintf` `fgets` and `strtoul`(base = 16) --------> `_safe_snprintf`, `fgets_async_signal_safe`, `string_to_hex`** The safe version of `snprintf` was taken from [here](https://github.com/twitter/twemcache/blob/8cfc4ca5e76ed936bd3786c8cc43ed47e7778c08/src/mc_util.c#L754) #### **3. fopen(), fgets(), fclose() --------> open(), read(), close()** #### **4. opendir(), readdir(), closedir() --------> open(), syscall(SYS_getdents64), close()** #### **5. Threads_mngr sync mechanisms** * waiting for the thread to generate stack trace: semaphore --------> busy-wait * `globals_rw_lock` was removed: as we are not using malloc and the semaphore anymore we don't need to protect `ThreadsManager_cleanups`. #### **6. Stacktraces buffer** The initial problem was that we were not able to safely call malloc within the signal handler. To solve that we created a buffer on the stack of `writeStacktraces` and saved it in a global pointer, assuming that under normal circumstances, the function `writeStacktraces` would complete before any thread attempted to write to it. However, **if threads lag behind, they might access this global pointer after it no longer belongs to the `writeStacktraces` stack, potentially corrupting memory.** To address this, various solutions were discussed [here](https://github.com/redis/redis/pull/12658#discussion_r1390442896) Eventually, we decided to **create a pipe** at server startup that will remain valid as long as the process is alive. We chose this solution due to its minimal memory usage, and since `write()` and `read()` are atomic operations. It ensures that stack traces from different threads won't mix. **The stacktraces collection process is now as follows:** * Cleaning the pipe to eliminate writes of late threads from previous runs. * Each thread writes to the pipe its stacktrace * Waiting for all the threads to mark completion or until a timeout (2 sec) is reached * Reading from the pipe to print the stacktraces. #### **7. Changes that were considered and eventually were dropped** * replace watchdog timer with a POSIX timer: according to [settimer man](https://linux.die.net/man/2/setitimer) > POSIX.1-2008 marks getitimer() and setitimer() obsolete, recommending the use of the POSIX timers API ([timer_gettime](https://linux.die.net/man/2/timer_gettime)(2), [timer_settime](https://linux.die.net/man/2/timer_settime)(2), etc.) instead. However, although it is supposed to conform to POSIX std, POSIX timers API is not supported on Mac. 
You can take a look at the Linux implementation [here](https://github.com/redis/redis/commit/c7562ee13546e504977372fdf40d33c3f86775a5 ) To avoid messing up the code, and uncertainty regarding compatibility, it was decided to drop it for now. * Avoid using sds (uses malloc) in logConfigDebugInfo It was considered to print config info instead of using sds, however apparently `logConfigDebugInfo` does more than just print the sds, so it was decided this fix is out of this issue's scope. #### **8. Fix signal mask check** The check `signum & sig_mask` intended to indicate whether the signal is blocked by the thread was incorrect. Actually, the bit position in the signal mask corresponds to the signal number. We fixed this by changing the condition to: `sig_mask & (1L << (sig_num - 1))` #### **9. Unrelated changes** Both `fork.tcl` and `util.tcl` implemented a function called `count_log_message` expecting different parameters. This caused confusion when trying to run daily tests with additional test parameters to run a specific test. The `count_log_message` in `fork.tcl` was removed and the calls were replaced with calls to the `count_log_message` located in `util.tcl`. --------- Co-authored-by:
Ozan Tezcan <ozantezcan@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
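The general shape of the replacement calls is: inside a signal handler, format into a stack buffer by hand and emit it with write(), both of which are async-signal-safe, instead of fprintf/snprintf/malloc. The sketch below is illustrative only and is not the logging code added by this commit.

```c
#include <unistd.h>
#include <string.h>
#include <time.h>

/* Signal handlers may only call async-signal-safe functions; write() is on
 * that list while fprintf()/snprintf()/malloc() are not.  This formats a
 * minimal "pid:signal-handler (timestamp) msg" line with no allocations. */
static size_t u64ToAscii(unsigned long long v, char *buf) {
    char tmp[20];
    size_t n = 0;
    do { tmp[n++] = '0' + (v % 10); v /= 10; } while (v);
    for (size_t i = 0; i < n; i++) buf[i] = tmp[n - 1 - i];  /* reverse digits */
    return n;
}

void logFromSignalHandler(const char *msg) {
    char line[256];
    size_t pos = 0;
    pos += u64ToAscii((unsigned long long)getpid(), line + pos);
    memcpy(line + pos, ":signal-handler (", 17); pos += 17;
    pos += u64ToAscii((unsigned long long)time(NULL), line + pos);
    memcpy(line + pos, ") ", 2); pos += 2;
    size_t len = strlen(msg);
    if (len > sizeof(line) - pos - 1) len = sizeof(line) - pos - 1;
    memcpy(line + pos, msg, len); pos += len;
    line[pos++] = '\n';
    (void)write(STDERR_FILENO, line, pos);    /* async-signal-safe */
}
```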
-
- 22 Nov, 2023 2 commits
-
-
Josh Hershberg authored
Move clusterNode into cluster_legacy.h. In order to achieve this some accessor methods were added and also a refactor of how debugCommand handles cluster related subcommands. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move clusterState into cluster_legacy.h. In order to achieve this some "accessor" methods needed to be added to the cluster API and some other minor refactors. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
- 14 Nov, 2023 1 commit
-
-
Binbin authored
When using the DB iterator, it will use dictInitSafeIterator to init an old safe dict iterator. When dbIteratorNext is used, it will jump to the next slot db dict when we are done with a dict. During this process, we do not have any calls to dictResumeRehashing, which causes the dict's pauserehash to always be > 0. And at last, it will be returned directly in dictRehashMilliseconds, which causes us to have a slot dict in a state where rehashing cannot be completed. In the "expire scan should skip dictionaries with lot's of empty buckets" test, adding a `keys *` can reproduce the problem stably. `keys *` will call dbIteratorNext to trigger a traversal of all slot dicts. Added dbReleaseIterator and dbIteratorInitNextSafeIterator methods to call dictResetIterator. The issue was introduced in #11695.
-
- 12 Nov, 2023 1 commit
-
-
Roshan Khatri authored
This PR introduces a new macro, serverAssertWithInfoDebug, to do complex assertions only for debugging. The main intention is to allow running complex operations during tests without impacting runtime performance. This assertion is enabled when setting DEBUG_ASSERTIONS. The DEBUG_ASSERTIONS flag is set for the daily and CI variants of `test-sanitizer-address`.
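One plausible shape for such a macro: the condition (which may be an expensive check) is compiled only when DEBUG_ASSERTIONS is defined, and otherwise expands to nothing. This is a sketch of the idea, not the exact macro added by the PR.

```c
#include <stdio.h>
#include <stdlib.h>

/* Debug-only assertion: the (potentially expensive) condition is compiled
 * out entirely unless DEBUG_ASSERTIONS is defined, so release builds pay
 * nothing for it. */
#ifdef DEBUG_ASSERTIONS
#define debugServerAssert(cond)                                           \
    do {                                                                  \
        if (!(cond)) {                                                    \
            fprintf(stderr, "=== DEBUG ASSERTION FAILED ===\n"            \
                    "%s:%d '%s' is not true\n",                           \
                    __FILE__, __LINE__, #cond);                           \
            abort();                                                      \
        }                                                                 \
    } while (0)
#else
#define debugServerAssert(cond) ((void)0)   /* no-op, condition not evaluated */
#endif
```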
-
- 01 Nov, 2023 2 commits
-
-
Viktor Söderqvist authored
Optimize the performance of SCAN commands when a match pattern can only contain keys from a single slot in cluster mode. This can happen when the pattern contains a hash tag before any wildcard matchers or when the key contains no matchers.
-
Chen Tianjie authored
In #12476 server.stat_client_qbuf_limit_disconnections was added. It is written in readQueryFromClient, which may be called by multiple threads when io-threads and io-threads-do-reads are turned on. Somehow we forgot to make it an atomic variable.
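Making such a counter safe for concurrent io-threads is a one-line change with C11 atomics; a relaxed fetch-add is enough for a pure statistic. Illustrative sketch (not the actual redisAtomic macros used in the codebase):

```c
#include <stdatomic.h>

/* A stat bumped from readQueryFromClient() can run on several io-threads at
 * once; a plain `long long` increment is a data race and can lose counts.
 * An atomic fetch-add fixes both. */
static _Atomic unsigned long long stat_qbuf_limit_disconnections;

void recordQbufLimitDisconnection(void) {
    atomic_fetch_add_explicit(&stat_qbuf_limit_disconnections, 1,
                              memory_order_relaxed);
}

unsigned long long getQbufLimitDisconnections(void) {
    return atomic_load_explicit(&stat_qbuf_limit_disconnections,
                                memory_order_relaxed);
}
```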
-