- 19 May, 2024 14 commits
-
-
YaacovHazan authored
-
Yanqi Lv authored
In `beginResultEmission`, -1 means the result length is not known in advance. But after #12185, if we pass -1 to `zrangeResultBeginStore`, it is converted to SIZE_MAX in `zsetTypeCreate`, which then tries to `dictExpand`. Although `dictExpand` won't succeed because the size overflows, we'd better avoid this wrong conversion. This bug can be triggered when the source of `zrangestore` doesn't exist, or when the `zrangestore` command is used with `byscore` or `bylex`. The impact is that dst keys are converted to use a skiplist instead of a listpack. (cherry picked from commit bad33f87)
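A minimal sketch of the conversion pitfall described above (hypothetical function names, not the actual Redis code): a signed -1 passed through an unsigned `size_t` parameter silently becomes SIZE_MAX.
```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical pre-sizing hook standing in for zsetTypeCreate/dictExpand. */
static void create_with_size_hint(size_t size_hint) {
    printf("requested capacity: %zu\n", size_hint);  /* 18446744073709551615 on 64-bit */
}

int main(void) {
    long result_length = -1;               /* "length not known in advance" */
    create_with_size_hint(result_length);  /* implicit conversion to SIZE_MAX */
    return 0;
}
```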
-
Binbin authored
The check in fileIsManifest misjudged the manifest file. For example, if a RESP AOF contains "file", it will be considered a manifest file and the check will fail:
```
*3
$3
set
$4
file
$4
file
```
In #12951, if the preamble AOF also contains it, it will also fail. Fixes #12951. The bug was happening if the word "file" appeared in the first 1024 lines of the AOF. Now, as soon as the check finds a non-comment line, it stops scanning, whether or not that line contains "file". (cherry picked from commit da727ad4)
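A simplified sketch of the tightened check (illustrative only, not the real fileIsManifest): decide on the first non-comment line instead of matching "file" anywhere in the first 1024 lines.
```c
#include <stdio.h>
#include <string.h>

/* Returns 1 if the file looks like a manifest: the first non-comment
 * line must start with "file"; anything else means "not a manifest". */
static int looks_like_manifest(FILE *fp) {
    char line[1024];
    while (fgets(line, sizeof(line), fp)) {
        if (line[0] == '#') continue;           /* skip comment lines */
        return strncmp(line, "file", 4) == 0;   /* decide and stop here */
    }
    return 0;
}
```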
-
Matthew Douglass authored
Since lua_Number is not explicitly an integer or a double, we need to make an effort to convert it as an integer when that's possible, since the string could later be used in a context that doesn't support scientific notation (e.g. 1e9 instead of 1000000000). Since fpconv_dtoa converts numbers with the equivalent of `%f` or `%e`, whichever is shorter, this would break if we try to pass a long integer number to a command that takes an integer: we get an implicit conversion to string in Lua, and then the parsing in getLongLongFromObjectOrReply fails.
```
> eval "redis.call('hincrby', 'key', 'field', '1000000000')" 0
(nil)
> eval "redis.call('hincrby', 'key', 'field', tonumber('1000000000'))" 0
(error) ERR value is not an integer or out of range script: ac99c32e4daf7e300d593085b611de261954a946, on @user_script:1.
```
Switch to using ll2string if the number can be safely represented as a long long. The problem was introduced in #10587 (Redis 7.2). Closes #13113.
---------
Co-authored-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: debing.sun <debing.sun@redis.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 5fdaa53d)
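A hedged sketch of the idea behind the fix (simplified; the actual commit uses Redis' ll2string): format the lua_Number with an integer formatter whenever it holds an exact integer in safe range.
```c
#include <stdio.h>
#include <math.h>

/* Doubles represent integers exactly up to 2^53, so within that range an
 * integral value can be printed as a long long ("1000000000") instead of
 * falling back to a float formatter ("1e+09"). */
static void number_to_string(double num, char *buf, size_t len) {
    if (num == floor(num) &&
        num >= -9007199254740992.0 && num <= 9007199254740992.0) {
        snprintf(buf, len, "%lld", (long long)num);
    } else {
        snprintf(buf, len, "%.17g", num);
    }
}

int main(void) {
    char buf[32];
    number_to_string(1000000000.0, buf, sizeof(buf));
    printf("%s\n", buf);  /* 1000000000 */
    return 0;
}
```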
-
debing.sun authored
`CONFIG SET oom-score-adj handles configuration failures` test failed in some CI jobs today. Failed CI: https://github.com/redis/redis/actions/runs/8152519326 Not sure why the github action's docker image permissions have changed, but the issue is similar to #12887, where we can't assume the range of oom_score_adj that a user can change.
## Solution:
Check directly whether the current user lacks the privileges, instead of relying on whether the user id is 0 or not. (cherry picked from commit 9738ba98)
-
LiiNen authored
Fix redis-cli --count (for --scan, --bigkeys, etc) was ignored unless --pattern was also used (#13092) The --count option for redis-cli was released in redis 7.2: https://github.com/redis/redis/pull/12042 But I found in the code that some logic was missing for this 'count' option:
```
static redisReply *sendScan(unsigned long long *it) {
    redisReply *reply;

    if (config.pattern)
        reply = redisCommand(context, "SCAN %llu MATCH %b COUNT %d",
            *it, config.pattern, sdslen(config.pattern), config.count);
    else
        reply = redisCommand(context,"SCAN %llu",*it);
```
The intention was to support SCAN's COUNT option, but as written, --count is only applied when --pattern is also declared. So I fixed it, simply, so that it works properly even if the --pattern option is not used (see the sketch after this entry). I tested it with the time command several times, and it works as intended with this commit. Example test results:
```
# unstable build
time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan >/dev/null 2>/dev/null)
real    0m1.287s
user    0m0.011s
sys     0m0.022s

# count is not applied
time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 >/dev/null 2>/dev/null)
real    0m1.117s
user    0m0.011s
sys     0m0.020s

# count is applied with --pattern
time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 --pattern "hash:*" >/dev/null 2>/dev/null)
real    0m0.045s
user    0m0.002s
sys     0m0.002s
```
```
# fix-redis-cli-scan-count build
time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan >/dev/null 2>/dev/null)
real    0m1.084s
user    0m0.008s
sys     0m0.024s

# count is applied even if --pattern is not declared
time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 >/dev/null 2>/dev/null)
real    0m0.043s
user    0m0.000s
sys     0m0.004s

# of course it is also applied here
time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 --pattern "hash:*" >/dev/null 2>/dev/null)
real    0m0.031s
user    0m0.002s
sys     0m0.002s
```
Thanks a lot. (cherry picked from commit 763827c9)
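A plausible shape of the fix, sketched against the snippet above (assumes the same redis-cli globals `config` and `context` and hiredis' `redisCommand`; not necessarily the exact committed code):
```c
static redisReply *sendScan(unsigned long long *it) {
    redisReply *reply;

    /* Apply MATCH and COUNT independently, so --count works without --pattern. */
    if (config.pattern && config.count)
        reply = redisCommand(context, "SCAN %llu MATCH %b COUNT %d",
            *it, config.pattern, sdslen(config.pattern), config.count);
    else if (config.pattern)
        reply = redisCommand(context, "SCAN %llu MATCH %b",
            *it, config.pattern, sdslen(config.pattern));
    else if (config.count)
        reply = redisCommand(context, "SCAN %llu COUNT %d", *it, config.count);
    else
        reply = redisCommand(context, "SCAN %llu", *it);
    return reply;
}
```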
-
Binbin authored
These tests have all failed in daily CI:
```
*** [err]: Blocking XREADGROUP for stream key that has clients blocked on stream - reprocessing command in tests/unit/type/stream-cgroups.tcl
Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)

*** [err]: BLPOP unblock but the key is expired and then block again - reprocessing command in tests/unit/type/list.tcl
Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)

*** [err]: BZPOPMIN unblock but the key is expired and then block again - reprocessing command in tests/unit/type/zset.tcl
Expected '1103' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
```
Increase the range to avoid failures, and improve the comment to be clearer. The tests were introduced in #13004. (cherry picked from commit 32f44da5)
-
debing.sun authored
Fix two crashes introduced by #12955. When a quicklist node can't be inserted and split, we eventually merge the current node with its neighboring nodes after inserting, and compress the current node and its siblings.
1. When the current node is merged with another node, the current node may become invalid and can no longer be used. Solution: let `_quicklistMergeNodes()` return the merged node.
2. If the current node is an LZF quicklist node, its recompress will be 1. If the split node can be merged with a sibling node to become head or tail, recompress may cause the head and tail to be compressed, which is not allowed. Solution: always reset recompress to 0 after merging.
(cherry picked from commit 1e8dc1da)
-
debing.sun authored
Fix #12864. The main reason for this crash is that when replacing an element of a quicklist packed node with the lpReplace() method, if the final size is larger than 4GB, lpReplace() fails and returns NULL, causing `node->entry` to be incorrectly set to NULL. Since the inserted data is not a large element, we can't just replace it like a large element (first quicklistInsertAfter() and then quicklistDelIndex()), because the current node may be merged and invalidated in quicklistInsertAfter(). The solution of this PR: when replacing a node fails (listpack exceeds 4GB), split the current node, create a new node to put in the middle, and try to merge them. This is the same as inserting a large element. In the worst case, its size will not exceed 4GB. (cherry picked from commit 1f00c951)
-
Binbin authored
This was introduced in #13004, missing this assignment. It causes the timeout to be a random value (possibly less than now), and then in the `Unblock by timer` test, the client is unblocked and calls timeout_callback; since the callback is NULL, the server crashes. The crash stack is:
```
beforesleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
replyToBlockedClientTimedOut
moduleBlockedClientTimedOut
-- the timeout_callback is NULL, invalidFunctionWasCalled
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
```
(cherry picked from commit 45a35a79)
-
Binbin authored
In #11012, we reprocess the command when a client is unblocked on keys. In some blocking commands, for example in the XREADGROUP BLOCK scenario, the re-processing recalculates the block timeout, causing the blocking time to be reset. This commit adds a new CLIENT_REPROCESSING_COMMAND client flag, explicitly letting the command know that it is being re-processed; later, in blockForKeys, we will not reset the timeout.
Affected BLOCK cases:
- list / zset / stream, added test cases for each.
Unaffected cases:
- module (never re-processes the commands).
- WAIT / WAITAOF (never re-processes the commands).
Fixes #12998. (cherry picked from commit 492021db)
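A self-contained sketch of the idea (simplified types and names, not Redis' real client struct): the blocking deadline is armed only on the first execution, and a re-processed command keeps the deadline computed when it originally blocked.
```c
#include <stdint.h>

#define CLIENT_REPROCESSING_COMMAND (1ULL << 0)  /* hypothetical bit value */

typedef struct {
    uint64_t flags;
    int64_t block_deadline_ms;  /* absolute deadline; stand-in for bstate */
} fake_client;

/* Stand-in for blockForKeys: skip re-arming the timer on re-processing. */
static void block_for_keys(fake_client *c, int64_t now_ms, int64_t timeout_ms) {
    if (!(c->flags & CLIENT_REPROCESSING_COMMAND))
        c->block_deadline_ms = now_ms + timeout_ms;  /* first time only */
    /* else: keep the original deadline, so the BLOCK time isn't reset */
}
```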
-
- 09 Jan, 2024 9 commits
-
-
Oran Agra authored
-
Binbin authored
Crash reported in #12695. In the process of upgrading the cluster from 7.0 to 7.2, because the 7.0 nodes will not gossip shard id, in 7.2 we rely on shard id to build the server.cluster->shards dict. In some cases, for example with a 7.0 master node and a 7.2 replica node, from the view of the 7.2 replica node the cluster->shards dictionary does not contain its master node. In this case, calling CLUSTER SHARDS on the 7.2 replica node may crash. We should fix the underlying assumption of updateShardId, which is that the shard dict should always be in sync with the node's shard_id. The fix was suggested by PingXie; see more details in #12695. (cherry picked from commit 5b0c6a82)
-
Binbin authored
If there are nodes in the cluster that do not support shard-id, they will still gossip shard-id fields. From the perspective of nodes that support shard-id, those shard-ids are meaningless (since a shard-id is randomly generated when we create a node). Nodes that support shard-id will save the shard-id information in nodes.conf. If the node is restarted from nodes.conf, the server will report a corrupted cluster config file error, because auxShardIdSetter rejects configurations with inconsistent master-replica shard-ids. A cluster-wide consensus for the node's shard_id is not necessary; the key is maintaining consistency of the shard_id on each individual 7.2 node. As the cluster progressively upgrades to version 7.2, we can expect the shard_ids across all nodes to naturally converge and align. In this PR, when processing the gossip, if the sender is a replica and does not support shard-id, set its shard_id to the shard_id of its master. (cherry picked from commit 4cae66f5)
-
Binbin authored
When we register a notification or server event in RedisModule_OnLoad, but RedisModule_OnLoad eventually fails, triggering the notification or server event will cause the server to crash. If the loading fails at a later stage of moduleLoad, we do call moduleUnload, which handles all un-registration; but when it fails on the RedisModule_OnLoad call, we only un-register several specific things, and these were missing:
- moduleUnsubscribeNotifications
- moduleUnregisterFilters
- moduleUnsubscribeAllServerEvents
Refactored the code to reuse the code from moduleUnload. Fixes #12808. (cherry picked from commit d6f19539)
-
Meir Shpilraien (Spielrein) authored
Redis 7.2 (#9406) introduced a new modules event, `RedisModuleEvent_Key`. This new event allows the module to read the key data just before it is removed from the database (either deleted, expired, evicted, or overwritten). When the key was removed from the database by active expire or eviction, the new event was not called as part of an execution unit. This can cause an issue if the module registers a post notification job inside the event: the job will not be executed atomically with the expiration/eviction operation and will not be replicated inside a Multi/Exec. Moreover, the post notification job will be executed right after the event, where it is still not safe to perform any write operation; this violates the promise that a post notification job will be called atomically with the operation that triggered it and **only when it is safe to write**. This PR fixes the issue by wrapping each expiration/eviction of a key with an execution unit. This makes sure the entire operation runs atomically and all the post notification jobs are executed at the end, where it is safe to write. Tests were modified to verify the fix. (cherry picked from commit 0ffb9d2e)
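A generic, self-contained sketch of the execution-unit idea described above (illustrative C, not Redis' internals): jobs queued during an operation are deferred until the unit ends, so they run atomically with it and only at a point where writing is safe again.
```c
#include <stdio.h>

static int unit_depth = 0;
static void (*deferred_jobs[16])(void);
static int n_deferred = 0;

/* A module's event handler would call this to register a post-notification job. */
static void queue_post_job(void (*job)(void)) {
    if (n_deferred < 16) deferred_jobs[n_deferred++] = job;
}

static void begin_unit(void) { unit_depth++; }

static void end_unit(void) {
    if (--unit_depth == 0) {          /* outermost unit closes */
        for (int i = 0; i < n_deferred; i++)
            deferred_jobs[i]();       /* safe point: run deferred jobs now */
        n_deferred = 0;
    }
}

static void expire_key(void) {
    begin_unit();
    /* ...remove the key and fire the key-removal event; a module handler
       may call queue_post_job() here... */
    end_unit();  /* deferred jobs run here, after the removal completes */
}
```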
-
Binbin authored
In the past, we did not call _dictNextExp frequently. It was only called when the dictionary was expanded. Later, dictTypeExpandAllowed was introduced in #7954 (6.2). For the data dict and the expire dict, we can check maxmemory before actually expanding the dict. This is a good optimization to avoid maxmemory being exceeded due to the dict expansion. And in #11692, we moved the dictTypeExpandAllowed check before the threshold check; this caused a bit of performance degradation: every time a key is added to the dict, dictTypeExpandAllowed is called to check. The main reason for the degradation is that in a large dict, we need to call _dictNextExp frequently, that is, every time we add a key, we need to call _dictNextExp once. Then the threshold is checked to see if the dict needs to be expanded. We can see that the order of checks here can be optimized. So we moved the dictTypeExpandAllowed check back to after the threshold check in #12789. In this way, before the dict is actually expanded (that is, before the threshold is reached), we do not do anything extra compared to before; that is, we do not call _dictNextExp frequently. But note we'll still hit the degradation when we're over the threshold: when the threshold is reached, because of #7954, we may delay the dict expansion due to maxmemory limitations. In this case, we will call _dictNextExp every time we add a key during this period.

This PR uses CLZ in _dictNextExp to get the next power of two. CLZ (count leading zeros) can easily give you the next power of two. It should be noted that we already introduced the use of __builtin_clzl in #8687 (7.0), so I suppose all the platforms we use have it (even if the CPU doesn't have an instruction).

We build 67108864 (2**26) keys through DEBUG POPULATE, which uses approximately 5.49G memory (used_memory:5898522936). If expansion is triggered, the additional hash table will consume approximately 1G memory (2 ** 27 * 8). So we set maxmemory to 6871947673 (that is, 6.4G), which is less than 5.49G + 1G, so we will delay the dict rehash while adding the keys. After that, each time an element is added to the dict, an allow check is performed, so we can frequently call _dictNextExp to compare before and after the optimization. Use DEBUG HTSTATS 0 to check and make sure that the dict expansion is delayed. Using `./src/redis-server redis.conf --save "" --maxmemory 6871947673`. Using `./src/redis-benchmark -P 100 -r 1000000000 -t set -n 5000000`. After ten rounds of testing:
```
unstable:   this PR:
769585.94   816860.00
771724.00   818196.69
775674.81   822368.44
781983.12   822503.69
783576.25   828088.75
784190.75   828637.75
791389.69   829875.50
794659.94   835660.69
798212.00   830013.25
801153.62   833934.56
```
We can see there is about 4-5% performance improvement in this case. (cherry picked from commit 22cc9b51)
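A minimal sketch of the CLZ trick (assuming a 64-bit unsigned long and GCC/Clang's `__builtin_clzl`, which the commit notes Redis already relies on): the next power of two at or above `size` falls out of the position of the highest set bit of `size - 1`.
```c
#include <stdio.h>

static unsigned long next_power_of_two(unsigned long size) {
    if (size <= 1) return 1;
    /* __builtin_clzl(size - 1) counts leading zero bits; subtracting from
     * the word width gives the bit index of the next power of two. */
    return 1UL << (8 * sizeof(unsigned long) - __builtin_clzl(size - 1));
}

int main(void) {
    printf("%lu %lu %lu\n",
           next_power_of_two(5),      /* 8 */
           next_power_of_two(64),     /* 64 */
           next_power_of_two(1000));  /* 1024 */
    return 0;
}
```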
-
Binbin authored
dictExpandAllowed (for the main db dict and the expire dict) involves a few function calls and memory accesses, and we can run it only after the threshold checks, gaining some performance. A simple benchmark test: there are 11032768 fixed keys in the database; start a redis-server with `--maxmemory big_number --save ""`, start a redis-benchmark with `-P 100 -r 1000000000 -t set -n 5000000`, and collect the `throughput summary: n requests per second` result. After five rounds of testing:
```
unstable    this PR
848032.56   897988.56
854408.69   913408.88
858663.94   914076.81
871839.56   916758.31
882612.56   920640.75
```
We can see a 5% performance improvement in the general case, but note we'll still hit the degradation when we're over the thresholds. (cherry picked from commit 46347693)
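A sketch of the reordering (illustrative names, not the real dict.c code): run the cheap load-factor threshold test first and consult the potentially expensive expand-allowed callback only when an expansion is actually due.
```c
#include <stddef.h>

/* Returns 1 if the dict should expand now. The callback (which may walk
 * memory-accounting state) runs only after the cheap check passes. */
static int should_expand(size_t used, size_t buckets,
                         int (*expand_allowed)(size_t new_size)) {
    if (used < buckets) return 0;                        /* cheap check first */
    if (expand_allowed && !expand_allowed(buckets * 2))  /* costly check last */
        return 0;
    return 1;
}
```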
-
Oran Agra authored
#11766 introduced a bug in sdsResize where it could forget to update the sds type in the sds header and then cause an overflow in sdsalloc. It looks like the only implication of that is a possible assertion in HLL, but it's hard to rule out possible heap corruption issues with clientsCronResizeQueryBuffer.
-
- 01 Nov, 2023 3 commits
-
-
Oran Agra authored
-
Viktor Söderqvist authored
Add defensive checks to prevent double freeing a node from the cluster blacklist. (cherry picked from commit 8d675950)
-
- 18 Oct, 2023 10 commits
-
-
Oran Agra authored
-
Binbin authored
In #10536, we introduced the assert; some older versions of servers (like 7.0) don't gossip shard_id, so we will not add the node to cluster->shards, and node->shard_id is filled in randomly and may not be found here. As a result, if we add a 7.2 node to a 7.0 cluster and allocate slots to the 7.2 node, the 7.2 node will crash when it hits this assert, somewhat like #12538. In this PR, we remove the assert and replace it with an unconditional removal. (cherry picked from commit e5ef1613)
-
Binbin authored
When entering pubsub mode and using redis-cli's connect command (a redis-cli-only command), we need to reset pubsub_mode because we switch to a different connection. This affects the prompt when the connection succeeds, and redis-cli crashes when the connect fails:
```
127.0.0.1:6379> subscribe ch
1) "subscribe"
2) "ch"
3) (integer) 1
127.0.0.1:6379(subscribed mode)> connect 127.0.0.1 6380
127.0.0.1:6380(subscribed mode)> ping
PONG
127.0.0.1:6380(subscribed mode)> connect a b
Could not connect to Redis at a:0: Name or service not known
Segmentation fault
```
(cherry picked from commit 4de4fcf2)
-
guybe7 authored
If we set `fsynced_reploff_pending` in `startAppendOnly`, and the fork doesn't start immediately (e.g. there's another fork active at the time), any subsequent commands will increment `server.master_repl_offset` but will not cause a fsync (given they were executed before the fork started, they just ended up in the RDB part of it). Therefore, any WAITAOF will wait on the new master_repl_offset, but it will time out because no fsync will be executed. Release notes:
```
WAITAOF could timeout in the absence of write traffic in case a new AOF is created and an AOFRW can't immediately start.
This can happen when the appendonly config is changed at runtime, but also after FLUSHALL, and replica full sync.
```
(cherry picked from commit bfa3931a)
-
Nir Rattner authored
The `retval` variable is defined as an `int`, so with 4 bytes, it cannot properly represent microsecond values greater than the equivalent of about 35 minutes. This bug shouldn't impact standard Redis behavior because Redis doesn't have timer events that are scheduled as far as 35 minutes out, but it may affect custom Redis modules which interact with the event timers via the RM_CreateTimer API. The impact is that `usUntilEarliestTimer` may return 0 for as long as `retval` is scaled to an overflowing value. While `usUntilEarliestTimer` continues to return `0`, `aeApiPoll` will have a zero timeout, and so Redis will use significantly more CPU iterating through its event loop without pause. For timers scheduled far enough into the future, Redis will cycle between ~35 minute periods of high CPU usage and ~35 minute periods of standard CPU usage. (cherry picked from commit 24187ed8)
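A quick worked check of the 35-minute figure (a standalone snippet, not the module code): a signed 32-bit int holds at most 2^31 - 1 microseconds, about 35.8 minutes, so any longer timer interval overflows.
```c
#include <stdio.h>

int main(void) {
    long long max_int_us = 2147483647LL;  /* INT_MAX microseconds */
    printf("%.1f minutes\n", max_int_us / 1e6 / 60.0);  /* ~35.8 */

    long long sixty_min_us = 60LL * 60 * 1000 * 1000;   /* 3600000000 us */
    printf("fits in int: %s\n",
           sixty_min_us <= max_int_us ? "yes" : "no");  /* no: would wrap */
    return 0;
}
```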
-
Chen Tianjie authored
Starting with a change in #12233 (released in 7.2), CLUSTER commands use the client's connection to decide whether to return the TLS port or the non-TLS port, but commands called by Lua scripts and modules' RM_Call don't have a real client with a connection, and would currently be regarded as non-TLS connections. We can use server.current_client instead when it is available. When it is not (a module calls commands without a real client), we may see this as undefined behavior and return null or the default port (currently in this PR it returns the default port, judged by server.tls_cluster). (cherry picked from commit 2aad03fa)
-
Jachin authored
Use the __MAC_OS_X_VERSION_MIN_REQUIRED macro to detect the macOS system version instead of using MAC_OS_X_VERSION_10_6. Starting with MacOSX14.0.sdk, the default definitions of MAC_OS_X_VERSION_xxx have been removed in usr/include/AvailabilityMacros.h. It includes AvailabilityVersions.h, where the following condition must be met for MAC_OS_X_VERSION_xxx to be defined:
```
#if (!defined(_POSIX_C_SOURCE) && !defined(_XOPEN_SOURCE)) || defined(_DARWIN_C_SOURCE)
```
However, in the project, _DARWIN_C_SOURCE is not defined, which leads to the loss of the definition for MAC_OS_X_VERSION_10_6. (cherry picked from commit a2b0701d)
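A sketch of what the portable check can look like (hedged: the feature-flag name here is hypothetical; the literal 1060 corresponds to macOS 10.6):
```c
#ifdef __APPLE__
#include <AvailabilityMacros.h>

/* Compare __MAC_OS_X_VERSION_MIN_REQUIRED against a numeric literal instead
 * of MAC_OS_X_VERSION_10_6, which newer SDKs only define when
 * _DARWIN_C_SOURCE is set (or neither _POSIX_C_SOURCE nor _XOPEN_SOURCE). */
#if __MAC_OS_X_VERSION_MIN_REQUIRED >= 1060
#define HAVE_MACOS_10_6_APIS 1  /* hypothetical feature flag */
#endif
#endif
```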
-
Yossi Gottlieb authored
Before this commit, Unix socket setup performed chmod(2) on the socket file after calling listen(2). Depending on what umask is used, this could leave the file with the wrong permissions for a short period of time. As a result, another process could exploit this race condition and establish a connection that would otherwise not be possible. We now make sure the socket permissions are set up prior to calling listen(2).
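A self-contained sketch of the safe ordering (illustrative, not Redis' actual socket-setup code): fix the file mode after bind(2) but before listen(2), so no connection can be accepted while the umask-derived permissions are still in place.
```c
#include <string.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/un.h>
#include <unistd.h>

static int listen_unix(const char *path, mode_t perm) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1) return -1;

    struct sockaddr_un sa;
    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    unlink(path);  /* remove a stale socket file, if any */
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == -1) goto err;
    if (perm && chmod(path, perm) == -1) goto err;  /* before listen(2)! */
    if (listen(fd, 128) == -1) goto err;
    return fd;

err:
    close(fd);
    return -1;
}
```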
-
- 06 Sep, 2023 4 commits
-
-
Oran Agra authored
-
nihohit authored
Updated the command tips for ACL SAVE / SETUSER / DELUSER, CLIENT SETNAME / SETINFO, and LATENCY RESET. The tips now match CONFIG SET, since there's a similar behavior for all of these commands - the user expects to update the various configurations & states on all nodes, not only on a single, random node. For LATENCY RESET the response tip is now agg_sum.
Co-authored-by: Shachar Langbeheim <shachlan@amazon.com>
(cherry picked from commit 90e9fc38)
-
secwall authored
When connecting between a 7.0 and 7.2 cluster, the 7.0 cluster will not populate the shard_id field, which is expected on the 7.2 cluster. This is not intended behavior, as the 7.2 cluster is supposed to use a temporary shard_id while the node is in the upgrading state, but it wasn't being correctly set in this case. (cherry picked from commit a2046c1e)
-