- 09 Jan, 2024 9 commits
-
-
Oran Agra authored
-
Binbin authored
Crash reported in #12695. When upgrading a cluster from 7.0 to 7.2, the 7.0 nodes will not gossip shard ids, while in 7.2 we rely on shard ids to build the server.cluster->shards dict. In some cases, for example a 7.0 master node with a 7.2 replica node, from the view of the 7.2 replica node the cluster->shards dictionary does not contain its master node. In this case, calling CLUSTER SHARDS on the 7.2 replica node may crash. We should fix the underlying assumption of updateShardId, which is that the shard dict should always be in sync with the node's shard_id. The fix was suggested by PingXie, see more details in #12695. (cherry picked from commit 5b0c6a82)
-
Binbin authored
If there are nodes in the cluster that do not support shard-id, they will not gossip a shard-id. From the perspective of nodes that support shard-id, the shard-id recorded for such nodes is meaningless (since a shard-id is randomly generated when we create a node). Nodes that support shard-id will save the shard-id information in nodes.conf. If the node is restarted from nodes.conf, the server will report a corrupted cluster config file error, because auxShardIdSetter rejects configurations with inconsistent master-replica shard-ids. A cluster-wide consensus on a node's shard_id is not necessary; the key is maintaining consistency of the shard_id on each individual 7.2 node. As the cluster progressively upgrades to version 7.2, we can expect the shard_ids across all nodes to naturally converge and align. In this PR, when processing gossip, if the sender is a replica and does not support shard-id, we set its shard_id to the shard_id of its master. (cherry picked from commit 4cae66f5)
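A hedged sketch of the gossip-processing change (the struct and helper here are simplified stand-ins for the cluster internals, not the actual diff):
```c
#include <string.h>

#define SHARD_ID_LEN 40 /* hex shard id length, as stored in nodes.conf */

typedef struct clusterNodeSketch {
    char shard_id[SHARD_ID_LEN];
    struct clusterNodeSketch *master; /* NULL if this node is a master */
    int supports_shard_id;            /* did it gossip a shard-id extension? */
} clusterNodeSketch;

/* On receiving gossip from `sender`: if it predates shard-id support and is
 * a replica whose master we know, inherit the master's shard_id so the
 * shard_ids recorded on this 7.2 node stay internally consistent. */
static void maybeInheritShardId(clusterNodeSketch *sender) {
    if (!sender->supports_shard_id && sender->master != NULL)
        memcpy(sender->shard_id, sender->master->shard_id, SHARD_ID_LEN);
}
```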
-
Binbin authored
When we register a notification or server event in RedisModule_OnLoad but RedisModule_OnLoad eventually fails, triggering the notification or server event will cause the server to crash. If the loading fails at a later stage of moduleLoad, we do call moduleUnload, which handles all un-registration, but when it fails on the RedisModule_OnLoad call itself, we only un-register a few specific things, and these were missing:
- moduleUnsubscribeNotifications
- moduleUnregisterFilters
- moduleUnsubscribeAllServerEvents
Refactored the code to reuse the code from moduleUnload. Fixes #12808. (cherry picked from commit d6f19539)
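As an illustration, a minimal sketch of the failure pattern using real module API calls (the module name and the deliberate failure are hypothetical): OnLoad subscribes to keyspace notifications and then fails, and before this fix that subscription was left registered even though the module never finished loading.
```c
#include "redismodule.h"

static int NotifyCallback(RedisModuleCtx *ctx, int type, const char *event,
                          RedisModuleString *key) {
    REDISMODULE_NOT_USED(ctx);
    REDISMODULE_NOT_USED(type);
    REDISMODULE_NOT_USED(event);
    REDISMODULE_NOT_USED(key);
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "demo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Registration succeeds... */
    RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_GENERIC,
                                          NotifyCallback);
    /* ...but OnLoad then fails (hypothetical failure for illustration).
     * Before the fix, the subscription above was never un-registered, so
     * the next keyspace event invoked a callback of an unloaded module. */
    return REDISMODULE_ERR;
}
```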
-
Meir Shpilraien (Spielrein) authored
Redis 7.2 (#9406) introduced a new module event, `RedisModuleEvent_Key`. This new event allows the module to read the key data just before it is removed from the database (either deleted, expired, evicted, or overwritten). However, when the key was removed from the database by active expire or eviction, the new event was not fired as part of an execution unit. This can cause an issue if the module registers a post notification job inside the event: the job will not be executed atomically with the expiration/eviction operation and will not be replicated inside a MULTI/EXEC. Moreover, the post notification job will be executed right after the event, where it is still not safe to perform any write operation; this violates the promise that a post notification job is called atomically with the operation that triggered it and **only when it is safe to write**. This PR fixes the issue by wrapping each expiration/eviction of a key in an execution unit. This makes sure the entire operation runs atomically and all the post notification jobs are executed at the end, where it is safe to write. Tests were modified to verify the fix. (cherry picked from commit 0ffb9d2e)
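For context, a hedged sketch of the module-side pattern that exposed the bug (the counter key is illustrative, not from the PR): subscribe to `RedisModuleEvent_Key` and queue a post notification job from the handler; the fix guarantees the job runs at the end of the same execution unit as the removal.
```c
#include "redismodule.h"

/* Runs at the end of the execution unit, where writes are safe. */
static void PostJob(RedisModuleCtx *ctx, void *pd) {
    REDISMODULE_NOT_USED(pd);
    RedisModuleCallReply *r =
        RedisModule_Call(ctx, "INCR", "c", "expired:counter");
    if (r) RedisModule_FreeCallReply(r);
}

static void KeyEventCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                             uint64_t sub, void *data) {
    REDISMODULE_NOT_USED(e);
    REDISMODULE_NOT_USED(data);
    /* Before the fix, jobs queued here during active expire/eviction did
     * not run inside the same execution unit as the key removal itself. */
    if (sub == REDISMODULE_SUBEVENT_KEY_EXPIRED)
        RedisModule_AddPostNotificationJob(ctx, PostJob, NULL, NULL);
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "keywatch", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Key, KeyEventCallback);
    return REDISMODULE_OK;
}
```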
-
Binbin authored
In the past, we did not call _dictNextExp frequently. It was only called when the dictionary was expanded. Later, dictTypeExpandAllowed was introduced in #7954 (6.2): for the data dict and the expire dict, we can check maxmemory before actually expanding the dict. This is a good optimization to avoid maxmemory being exceeded due to dict expansion. Then in #11692 we moved the dictTypeExpandAllowed check before the threshold check, which caused a bit of performance degradation: every time a key is added to the dict, dictTypeExpandAllowed is called. The main reason for the degradation is that in a large dict we need to call _dictNextExp frequently, that is, every time we add a key we call _dictNextExp once, and only then is the threshold checked to see whether the dict needs to be expanded. The order of these checks can be optimized, so in #12789 we moved the dictTypeExpandAllowed check back to after the threshold check. This way, before the dict is actually expanded (that is, before the threshold is reached), we do nothing extra compared to before, i.e. we do not call _dictNextExp frequently. Note that we still hit the degradation once we are over the threshold: when the threshold is reached, because of #7954 we may delay the dict expansion due to maxmemory limitations, and during that period we call _dictNextExp every time we add a key. This PR uses CLZ in _dictNextExp to get the next power of two; CLZ (count leading zeros) can easily give you the next power of two. Note that we already introduced the use of __builtin_clzl in #8687 (7.0), so I suppose all the platforms we use have it (even if the CPU doesn't have a dedicated instruction). Benchmark: we build 67108864 (2**26) keys through DEBUG POPULATE, which uses approximately 5.49G of memory (used_memory:5898522936). If expansion is triggered, the additional hash table will consume approximately 1G of memory (2**27 * 8). So we set maxmemory to 6871947673 (that is, 6.4G), which is less than 5.49G + 1G, so the dict rehash is delayed while adding the keys. After that, each time an element is added to the dict, an allow check is performed, so _dictNextExp is called frequently and we can compare before and after the optimization. DEBUG HTSTATS 0 was used to verify that the dict expansion is indeed delayed. Using `./src/redis-server redis.conf --save "" --maxmemory 6871947673` and `./src/redis-benchmark -P 100 -r 1000000000 -t set -n 5000000`, after ten rounds of testing:
```
unstable     this PR
769585.94    816860.00
771724.00    818196.69
775674.81    822368.44
781983.12    822503.69
783576.25    828088.75
784190.75    828637.75
791389.69    829875.50
794659.94    835660.69
798212.00    830013.25
801153.62    833934.56
```
We can see about a 4-5% performance improvement in this case. (cherry picked from commit 22cc9b51)
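For illustration, a minimal self-contained sketch of the CLZ trick (not the actual _dictNextExp code, which works with the exponent rather than the size):
```c
#include <stdio.h>

/* Round `size` up to the next power of two using count-leading-zeros.
 * If the highest set bit of (size - 1) is at position b, __builtin_clzl
 * returns 63 - b on a 64-bit long, so shifting 1 by (64 - clz) yields
 * the smallest power of two >= size. */
static unsigned long nextPowerOfTwo(unsigned long size) {
    if (size <= 1) return 1;
    return 1UL << (8 * sizeof(unsigned long) - __builtin_clzl(size - 1));
}

int main(void) {
    printf("%lu\n", nextPowerOfTwo(5));        /* 8 */
    printf("%lu\n", nextPowerOfTwo(64));       /* 64 */
    printf("%lu\n", nextPowerOfTwo(1000000));  /* 1048576 */
    return 0;
}
```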
-
Binbin authored
dictExpandAllowed (for the main db dict and the expire dict) seems to involve a few function calls and memory accesses; by running it only after the threshold checks we can get some performance improvement. A simple benchmark test: there are 11032768 fixed keys in the database; start a redis-server with `--maxmemory big_number --save ""`, start a redis-benchmark with `-P 100 -r 1000000000 -t set -n 5000000`, and collect the `throughput summary: n requests per second` result. After five rounds of testing:
```
unstable     this PR
848032.56    897988.56
854408.69    913408.88
858663.94    914076.81
871839.56    916758.31
882612.56    920640.75
```
We can see a 5% performance improvement in the general case. But note we'll still hit the degradation once we are over the thresholds. (cherry picked from commit 46347693)
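Schematically, the reordering looks like the following sketch (hypothetical condensed names, not the actual dict.c code); the point is that the cheap load-factor comparison now short-circuits the comparatively expensive expand-allowed callback:
```c
#include <stddef.h>

typedef int (*expandAllowedFunc)(size_t moreMem, double usedRatio);

/* Before: the callback (function calls + memory accesses) ran on every add. */
static int shouldExpandBefore(size_t used, size_t size, expandAllowedFunc allowed) {
    if (allowed && !allowed((size * 2) * sizeof(void *), (double)used / size))
        return 0;        /* expensive check, hit on every insert */
    return used >= size; /* cheap threshold check */
}

/* After: the cheap threshold check short-circuits the expensive callback. */
static int shouldExpandAfter(size_t used, size_t size, expandAllowedFunc allowed) {
    if (used < size) return 0; /* cheap check first: usually returns here */
    if (allowed && !allowed((size * 2) * sizeof(void *), (double)used / size))
        return 0;
    return 1;
}
```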
-
Oran Agra authored
#11766 introduced a bug in sdsResize where it could forget to update the sds type in the sds header and then cause an overflow in sdsalloc. It looks like the only implication of that is a possible assertion in HLL, but it's hard to rule out possible heap corruption issues with clientsCronResizeQueryBuffer.
-
- 01 Nov, 2023 3 commits
-
-
Oran Agra authored
-
Viktor Söderqvist authored
Add a defensive check to prevent double freeing a node from the cluster blacklist. (cherry picked from commit 8d675950)
-
- 18 Oct, 2023 10 commits
-
-
Oran Agra authored
-
Binbin authored
In #10536, we introduced the assert; some older server versions (like 7.0) don't gossip shard_id, so we will not add the node to cluster->shards, and node->shard_id is filled in randomly and may not be found here. As a result, if we add a 7.2 node to a 7.0 cluster and allocate slots to the 7.2 node, the 7.2 node will crash when it hits this assert. Somewhat like #12538. In this PR, we remove the assert and replace it with an unconditional removal. (cherry picked from commit e5ef1613)
-
Binbin authored
When entering pubsub mode and using the redis-cli-only `connect` command, we need to reset pubsub_mode because we switch to a different connection. This affects the prompt when the connection is successful, and redis-cli crashes when the connect fails:
```
127.0.0.1:6379> subscribe ch
1) "subscribe"
2) "ch"
3) (integer) 1
127.0.0.1:6379(subscribed mode)> connect 127.0.0.1 6380
127.0.0.1:6380(subscribed mode)> ping
PONG
127.0.0.1:6380(subscribed mode)> connect a b
Could not connect to Redis at a:0: Name or service not known
Segmentation fault
```
(cherry picked from commit 4de4fcf2)
-
guybe7 authored
If we set `fsynced_reploff_pending` in `startAppendOnly` and the fork doesn't start immediately (e.g. there's another fork active at the time), any subsequent commands will increment `server.master_repl_offset` but will not cause an fsync (given they were executed before the fork started, they just ended up in the RDB part of it). Therefore, any WAITAOF will wait on the new master_repl_offset, but it will time out because no fsync will be executed. Release notes:
```
WAITAOF could timeout in the absence of write traffic in case a new AOF is
created and an AOFRW can't immediately start. This can happen when the
appendonly config is changed at runtime, but also after FLUSHALL and a
replica full sync.
```
(cherry picked from commit bfa3931a)
-
Nir Rattner authored
The `retval` variable is defined as an `int`, so with 4 bytes, it cannot properly represent microsecond values greater than the equivalent of about 35 minutes. This bug shouldn't impact standard Redis behavior because Redis doesn't have timer events that are scheduled as far as 35 minutes out, but it may affect custom Redis modules which interact with the event timers via the RM_CreateTimer API. The impact is that `usUntilEarliestTimer` may return 0 for as long as `retval` is scaled to an overflowing value. While `usUntilEarliestTimer` continues to return `0`, `aeApiPoll` will have a zero timeout, and so Redis will use significantly more CPU iterating through its event loop without pause. For timers scheduled far enough into the future, Redis will cycle between ~35 minute periods of high CPU usage and ~35 minute periods of standard CPU usage. (cherry picked from commit 24187ed8)
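To make the arithmetic concrete, here is a small illustrative program (not Redis code; the 40-minute timer is an arbitrary example) showing how such a microsecond delta truncates in a 32-bit int:
```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* A timer 40 minutes out, in microseconds: 2,400,000,000 us. */
    long long us_until_timer = 40LL * 60 * 1000 * 1000;
    int retval = (int)us_until_timer; /* truncated: exceeds 32-bit range */

    printf("INT_MAX          = %d (~35.8 minutes in us)\n", INT_MAX);
    printf("actual delta     = %lld us\n", us_until_timer);
    printf("truncated retval = %d us\n", retval); /* negative! */
    /* A negative retval clamped to 0 gives aeApiPoll a zero timeout,
     * so the event loop spins without pause. */
    return 0;
}
```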
-
Chen Tianjie authored
Starting with a change in #12233 (released in 7.2), CLUSTER commands use the client's connection to decide whether to return the TLS port or the non-TLS port, but commands called by Lua scripts and a module's RM_Call don't have a real client with a connection and would currently be regarded as non-TLS connections. We can use server.current_client instead when it is available. When it is not (a module calls commands without a real client), we may see this as undefined behavior and return either null or the default port (currently this PR returns the default port, judged by server.tls_cluster). (cherry picked from commit 2aad03fa)
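A sketch of the resulting decision logic (the surrounding declarations are minimal stand-ins so the snippet compiles in isolation, not the real server structs):
```c
/* Minimal stand-ins for the sketch; Redis's real types differ. */
typedef struct connection connection;
extern int connIsTLS(connection *conn); /* assumed helper */
typedef struct client { connection *conn; } client;
struct { client *current_client; int tls_cluster; } server;

/* Should CLUSTER replies carry the TLS port? */
static int shouldReturnTlsInfo(void) {
    if (server.current_client && server.current_client->conn) {
        /* A real client is executing (scripts borrow the calling client):
         * follow its connection type. */
        return connIsTLS(server.current_client->conn);
    }
    /* No real client (e.g. module RM_Call): fall back to the default. */
    return server.tls_cluster;
}
```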
-
Jachin authored
Use the __MAC_OS_X_VERSION_MIN_REQUIRED macro to detect the macOS system version instead of using MAC_OS_X_VERSION_10_6. Starting with MacOSX14.0.sdk, the default definitions of MAC_OS_X_VERSION_xxx have been removed from usr/include/AvailabilityMacros.h. It now includes AvailabilityVersions.h, where the following condition must be met before MAC_OS_X_VERSION_xxx is defined: `#if (!defined(_POSIX_C_SOURCE) && !defined(_XOPEN_SOURCE)) || defined(_DARWIN_C_SOURCE)`. In the project, however, _DARWIN_C_SOURCE is not defined, which leads to the loss of the definition of MAC_OS_X_VERSION_10_6. (cherry picked from commit a2b0701d)
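A sketch of the kind of guard involved (the feature-gate macro here is hypothetical, not the actual Redis diff): prefer __MAC_OS_X_VERSION_MIN_REQUIRED, which the SDK defines whenever a macOS deployment target is set, over a constant newer SDKs may omit.
```c
#ifdef __APPLE__
#include <AvailabilityMacros.h>
#endif

/* Old, fragile check; breaks when the SDK stops defining the constant:
 *   #if defined(__APPLE__) && defined(MAC_OS_X_VERSION_10_6)
 * New check: compare the min-required version against 1060 (i.e. 10.6). */
#if defined(__APPLE__) && defined(__MAC_OS_X_VERSION_MIN_REQUIRED) && \
    __MAC_OS_X_VERSION_MIN_REQUIRED >= 1060
#define HAVE_MODERN_MACOS_APIS 1 /* hypothetical feature gate */
#endif
```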
-
Yossi Gottlieb authored
Before this commit, Unix socket setup performed chmod(2) on the socket file after calling listen(2). Depending on what umask is used, this could leave the file with the wrong permissions for a short period of time. As a result, another process could exploit this race condition and establish a connection that would otherwise not be possible. We now make sure the socket permissions are set up prior to calling listen(2).
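A minimal POSIX sketch of the fixed ordering (plain libc calls, not the actual anet.c code): permissions are set between bind(2) and listen(2), so there is no window in which a connectable socket has the wrong mode.
```c
#include <string.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/un.h>
#include <unistd.h>

int unixServerSketch(const char *path, mode_t perm, int backlog) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1) return -1;

    struct sockaddr_un sa;
    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == -1) goto err;
    /* Fix the mode BEFORE listen(2): the socket file exists but nobody
     * can connect yet, so the umask race is gone. */
    if (perm && chmod(path, perm) == -1) goto err;
    if (listen(fd, backlog) == -1) goto err;
    return fd;

err:
    close(fd);
    return -1;
}
```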
-
- 06 Sep, 2023 6 commits
-
-
Oran Agra authored
-
nihohit authored
Updated the command tips for ACL SAVE / SETUSER / DELUSER, CLIENT SETNAME / SETINFO, and LATENCY RESET. The tips now match CONFIG SET, since there's similar behavior for all of these commands: the user expects to update the various configurations and states on all nodes, not only on a single, random node. For LATENCY RESET the response tip is now agg_sum. Co-authored-by: Shachar Langbeheim <shachlan@amazon.com> (cherry picked from commit 90e9fc38)
-
secwall authored
When connecting a 7.0 cluster and a 7.2 cluster, the 7.0 side will not populate the shard_id field, which is expected on the 7.2 side. This is not intended behavior: the 7.2 cluster is supposed to use a temporary shard_id while the node is in the upgrading state, but it wasn't being correctly set in this case. (cherry picked from commit a2046c1e)
-
bodong.ybd authored
Before:
```
127.0.0.1:6379> command getkeys sort_ro key
(empty array)
127.0.0.1:6379>
```
After:
```
127.0.0.1:6379> command getkeys sort_ro key
1) "key"
127.0.0.1:6379>
```
(cherry picked from commit b59f53ef)
-
nihohit authored
Since the three commands have similar behavior (change config, return OK), the tips that govern how they should behave should be similar. Co-authored-by: Shachar Langbeheim <shachlan@amazon.com> (cherry picked from commit 4b281ce5)
-
- 15 Aug, 2023 4 commits
- 10 Aug, 2023 2 commits
-
-
Madelyn Olson authored
When a new ACL rule was added, an attempt was made to remove any "overlapping" rules. However, when a match was found, the search was not resumed at the right location, but instead after the original position of the original command. For example, if the current rules were `-config +config|get` and a rule `+config` was added, it would identify that `-config` was matched, but it would skip over `+config|get`, leaving the compacted rule `-config +config`. This would be evaluated safely, but looks weird. This bug can only be triggered with subcommands, since that is the only way to have sequential matching rules. Resolves #12470. This is also only present in 7.2. I think there was also a minor risk of removing another valid rule, since the search of the next command would start at an arbitrary point. I couldn't find a valid offset that would have caused a match using any of the existing commands that have subcommands with another command.
-
Binbin authored
After SENTINEL RESET, sometimes the sentinel can sense the master again, causing the test to fail. Here we give it a few more chances.
-
- 05 Aug, 2023 4 commits
-
-
zhaozhao.zz authored
If there are no subscribers, we can ignore the operation.
-
zhaozhao.zz authored
Fix the assertion when a busy script (timeout) signals ready keys (like LPUSH), and then an arbitrary client's `allow-busy` command steps into `handleClientsBlockedOnKeys` and tries to wake up clients blocked on keys (like BLPOP). Reproduction process:
1. start a redis with aof: `./redis-server --appendonly yes`
2. exec blpop: `127.0.0.1:6379> blpop a 0`
3. from another client, call a busy script that pushes the blocked key: `127.0.0.1:6379> eval "redis.call('lpush','a','b') while(1) do end" 0`
4. from a new client, call an allow-busy command like auth: `127.0.0.1:6379> auth a`
BTW, this issue also breaks the atomicity of scripts. This bug has been around for many years; the old versions only have the atomicity problem, only 7.0/7.2 have the assertion problem. Co-authored-by: Oran Agra <oran@redislabs.com>
-
sundb authored
This PR mainly fixes a possible integer overflow in `json_append_string()`. When we use `cjson.encode()` on a string larger than 2GB, under specific compilation flags an integer overflow may occur, leading to truncation: the part of the string beyond 2GB is not encoded. On the other hand, this overflow doesn't cause any out-of-range read or write, or a segmentation fault.
1) Using -O0 for lua_cjson (`make LUA_DEBUG=yes`): `i` overflows and leads to truncation. When `i` reaches `INT_MAX+1` it overflows to INT_MIN; when compared to `len`, `i` (1000000..00) is sign-extended to a 64-bit signed integer (1111111.....000000). At this point `i` is greater than `len` and we jump out of the loop, so `for (i = 0; i < len; i++)` loops at most 2^31 times, and the part beyond 2GB is truncated.
```asm
`i` => -0x24(%rbp)
<+253>: addl   $0x1,-0x24(%rbp)  ; overflows if i is larger than 2^31
<+257>: mov    -0x24(%rbp),%eax
<+260>: movslq %eax,%rdx         ; move a 32-bit value with sign extension into a 64-bit signed
<+263>: mov    -0x20(%rbp),%rax
<+267>: cmp    %rax,%rdx         ; check `i < len`
<+270>: jb     0x212600 <json_append_string+148>
```
2) Using -O2/-O3 for lua_cjson (`make LUA_DEBUG=no`, **the default**): because signed integer overflow is undefined behavior, `i` does not overflow; the compiler optimizes `i` to use a 64-bit register for all subsequent instructions.
```asm
<+180>: add    $0x1,%rbx         ; using 64-bit register `rbx` for i++
<+184>: lea    0x1(%rdx),%rsi
<+188>: mov    %rsi,0x10(%rbp)
<+192>: mov    %al,(%rcx,%rdx,1)
<+195>: cmp    %rbx,(%rsp)       ; check `i < len`
<+199>: ja     0x20b63a <json_append_string+154>
```
3) Using 32-bit: because `strbuf_ensure_empty_length()` preallocates memory of length (len * 6 + 2), in 32-bit `cjson.encode()` can only handle strings smaller than ((2^32) - 3) / 6, so 32-bit is not affected.
Also change `i` in `strbuf_append_string()` to `size_t`. Since its second argument `str` is taken from the `char2escape` string array, which is never larger than 6, `strbuf_append_string()` was not at risk of overflow (the bug was unreachable).
-
Binbin authored
GEOHASH / GEODIST / GEOPOS use zsetScore to get the score. In skiplist encoding, we use dictFind to get the score, which is O(1), the same as the ZSCORE command. It is not clear why these commands were documented as O(log(N)) and O(N) until now.
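As a reference point, a condensed self-contained sketch of why the lookup is O(1) (stand-in declarations; Redis's real zset pairs a member-to-score hash table with the skiplist):
```c
/* Stand-ins so the sketch compiles in isolation. */
typedef struct dict dict;
typedef struct dictEntry dictEntry;
extern dictEntry *dictFind(dict *d, const void *key);
extern void *dictGetVal(const dictEntry *de);

typedef struct zskiplist zskiplist;
typedef struct zset {
    dict *dict;     /* member -> score, O(1) lookups */
    zskiplist *zsl; /* ordered by score, for range queries */
} zset;

/* O(1): a single hash lookup, no skiplist traversal needed. */
static int zsetScoreSketch(zset *zs, const char *member, double *score) {
    dictEntry *de = dictFind(zs->dict, member);
    if (de == NULL) return 0;
    *score = *(double *)dictGetVal(de);
    return 1;
}
```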
-
- 02 Aug, 2023 2 commits
-
-
Meir Shpilraien (Spielrein) authored
Ensure that the function load timeout is disabled during loading from RDB/AOF and on replicas. (#12451) When loading a function from either RDB/AOF or a replica, it is essential not to fail on timeout errors. The loading time may vary due to various factors, such as hardware specifications or the system's workload during the loading process. Once a function has been successfully loaded, it should be allowed to load from persistence or on replicas without encountering a timeout failure. To maintain a clear separation between the engine and Redis internals, the implementation refrains from directly checking the state of Redis within the engine itself. Instead, the engine receives the desired timeout as part of the library creation and duly respects this timeout value. If Redis wishes to disable any timeout, it can simply send a value of 0.
-
zhaozhao.zz authored
When merging selectors, we should check whether the merge has started (i.e., whether open_bracket_start is -1) every time. Otherwise, an illegal selector pattern could be accepted and also cause memory leaks, for example:
```
acl setuser test1 (+PING (+SELECT (+DEL )
```
The above would leak memory and succeed with only DEL being applied, and now errors after the fix. Co-authored-by: Oran Agra <oran@redislabs.com>
-