- 03 Sep, 2024 3 commits
-
-
Filipe Oliveira (Redis) authored
# Overall improvement: TBD (currently approximately 6% on the achievable ops/sec), coming from:
- When no modules are loaded, we can skip the ~1.3% of CPU cycles spent on dict iterator creation/deletion.
- Use addReplyBulkCBuffer instead of addReplyBulkCString to avoid the runtime strlen overhead on string constants in the HELLO reply.

## Optimization 1: When no modules are loaded, skip the ~1.3% of CPU cycles spent on dict iterator creation/deletion.
## Optimization 2: Use addReplyBulkCBuffer instead of addReplyBulkCString to avoid the runtime strlen overhead on string constants in the HELLO reply.
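As an illustration of the second optimization, here is a minimal sketch (not the actual HELLO implementation, and assuming the `client` type and addReply* helpers declared in Redis's server.h) of how passing a known length avoids the runtime strlen that addReplyBulkCString performs:

```c
/* Minimal sketch, assuming the declarations from Redis's server.h. */
#include "server.h"

static void addHelloConstantSketch(client *c) {
    /* Before: strlen("proto") is computed at runtime on every HELLO call. */
    addReplyBulkCString(c, "proto");

    /* After: the length of the constant is passed explicitly, so the
     * reply path skips the runtime strlen() entirely. */
    addReplyBulkCBuffer(c, "proto", 5);
}
```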
-
cyy-tag authored
Currently, the aeApiPoll panic message does not record the error code. Added variadic (printf-style) formatting to _serverPanic to fix the issue. --------- Co-authored-by:
yingyin.chen <15816602944@163.com>
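A minimal sketch of the idea, assuming a hypothetical panic helper rather than the exact _serverPanic signature: a printf-style variadic parameter lets the aeApiPoll failure path embed errno information in the panic message.

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical variadic panic helper: formats the message with vsnprintf
 * so call sites can embed error codes, then aborts. */
static void panicFmt(const char *file, int line, const char *fmt, ...) {
    char msg[256];
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(msg, sizeof(msg), fmt, ap);
    va_end(ap);
    fprintf(stderr, "PANIC %s:%d: %s\n", file, line, msg);
    abort();
}
#define panic(...) panicFmt(__FILE__, __LINE__, __VA_ARGS__)

/* Example call site, mirroring what an aeApiPoll failure path can now do:
 *   panic("aeApiPoll: epoll_wait failed: %s", strerror(errno));
 */
```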
-
Ozan Tezcan authored
On a full sync, the replica starts by discarding its existing db. If the existing db is huge and the flush happens synchronously, the replica may become unresponsive. This change yields back to the event loop while flushing the db on a replica; the replica will reply -LOADING in this case. Note that while the replica is loading the new rdb, it may hit an error and start flushing the partial db. That step may take a long time as well, and the replica will similarly reply -LOADING.

To call processEventsWhileBlocked() and reply -LOADING, we need to:
- Set connSetReadHandler() to NULL so no further data from the master is processed
- Set the server.loading flag
- Call blockingOperationStarts()

rdbLoad() already does these steps and calls processEventsWhileBlocked() while loading the rdb. Added a new call, rdbLoadWithEmptyFunc(), which accepts a callback to flush the db before loading the rdb, or when an error happens while loading. For diskless replication, we do something similar and call emptyData() after setting the required flags.

Additional changes:
- Allow `appendonly` config changes during loading. The config can be changed while loading data on startup or on replication when the replica is loading the RDB. We allow the config change command to update `server.aof_enabled` and then lazily apply the change after the loading operation completes.
- Added a test for the `replica-lazy-flush` config
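A hedged sketch of the yielding idea (illustrative call site, assuming Redis's server.h declarations; the real change wires this through emptyData()/rdbLoadWithEmptyFunc()): dictEmpty() invokes its callback periodically while clearing a table, which gives the flush a hook to re-enter the event loop so clients see -LOADING instead of an unresponsive replica.

```c
/* Illustrative sketch assuming Redis's server.h declarations. */
#include "server.h"

/* dictEmpty() calls its callback periodically while deleting entries,
 * so we can use it to yield back to the event loop. */
static void yieldWhileFlushingCallback(dict *d) {
    UNUSED(d);
    /* server.loading is set and the master's read handler was removed by
     * the caller, so processing events here mainly serves -LOADING replies. */
    processEventsWhileBlocked();
}

/* Hypothetical call site in the replica's full-sync flush path:
 *   dictEmpty(db->dict, yieldWhileFlushingCallback);
 */
```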
-
- 28 Aug, 2024 1 commit
-
-
CoolThi authored
This change prevents a missed optimization in some compilers: https://godbolt.org/z/W66h86E13 (see the reduced intermediate form produced during optimization).
-
- 26 Aug, 2024 1 commit
-
-
Raz Monsonego authored
Currently, module commands are not returned by the `ACL CAT <category>` command; they are skipped instead. Now that modules can add ACL categories, they should no longer be skipped.
-
- 20 Aug, 2024 2 commits
-
-
Zihao Lin authored
Fixed an issue where the GETRANGE and SUBSTR commands returned unexpected results when `start` and `end` were outside the defined range of the string.

## Breaking change
Before this PR, when a negative `end` was out of range (i.e., end < -strlen), we clamped it to 0 to get the substring, which meant the first character was still returned for this kind of out-of-range index. After this PR, `GETRANGE` returns an empty bulk when the negative end index is out of range. Closes #11738 --------- Co-authored-by:
debing.sun <debing.sun@redis.com>
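A minimal sketch of the index normalization described above (illustrative helper, not the actual t_string.c code): the new behavior returns an empty result when a negative end falls entirely before the start of the string, instead of clamping it to 0.

```c
#include <stddef.h>

/* Normalize GETRANGE-style indexes over a string of length len.
 * Returns 0 (empty result) when the range is out of bounds, otherwise
 * stores the inclusive byte range in *from / *to. */
static int normalizeRangeSketch(long long start, long long end, size_t len,
                                size_t *from, size_t *to) {
    if (len == 0) return 0;
    /* New behavior: a negative end beyond -len means an empty reply
     * (previously it was clamped to 0, leaking the first character). */
    if (end < 0 && (unsigned long long)(-end) > len) return 0;
    if (start < 0) start = (long long)len + start;
    if (end < 0) end = (long long)len + end;
    if (start < 0) start = 0;
    if (end >= (long long)len) end = (long long)len - 1;
    if (start > end) return 0;
    *from = (size_t)start;
    *to = (size_t)end;
    return 1;
}
```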
-
judeng authored
Move the TYPE filtering into the scan callback so that the `lookupKey` operation is avoided. This is a follow-up to #12209. This PR introduces two breaking changes:
1. We will not attempt to lazily expire (delete) a key that was filtered out by a non-matching TYPE (as we already do for the MATCH pattern).
2. When the specified TYPE filter is an unknown type, the server replies with an error immediately instead of doing a full scan that comes back empty-handed.
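A hedged sketch of the idea (illustrative names and struct, not the actual scanCallback): the type filter is applied directly to the value held by the dict entry, so no per-key lookupKey call is needed and no lazy expiration is attempted for filtered-out keys.

```c
/* Illustrative sketch assuming Redis's dict/robj/list declarations. */
#include "server.h"

typedef struct {
    int type_filter;   /* OBJ_STRING, OBJ_LIST, ... or -1 for no filter */
    list *keys;        /* keys collected for the reply */
} scanTypeFilterData;

static void scanTypeFilterCallbackSketch(void *privdata, const dictEntry *de) {
    scanTypeFilterData *data = privdata;
    robj *val = dictGetVal(de);
    /* Filter on the value already stored in the dict entry: no lookupKey()
     * and no lazy-expire attempt for keys that don't match the TYPE. */
    if (data->type_filter != -1 && val->type != data->type_filter) return;
    listAddNodeTail(data->keys, dictGetKey(de));
}
```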
-
- 19 Aug, 2024 2 commits
-
-
Meir Shpilraien (Spielrein) authored
This PR attempts to avoid contention on the `used_memory` global variable when allocating or freeing memory from multiple threads at the same time. Each time a thread allocates or releases memory, it needs to update the `used_memory` global variable. This update may cause contention when done aggressively from multiple threads.

### The solution
Instead of having a single global variable that needs to be updated from multiple threads, we create an array of used_memory entries. Each entry in the array is updated by a single thread, and the main thread sums all the values to accumulate the memory usage. While this solution reduces the contention between threads on updating the `used_memory` global variable, it adds work for the main thread, which needs to sum all the entries of the `used_memory` array. To avoid increasing the main thread's work by too much, we limit the size of the used memory array to 16. This means that up to 16 threads can run without any contention between them. If there are more than 16 threads, we reuse entries of the used_memory array; in this case we may still have contention between threads, but it will be much less significant.

Notice that in order to really avoid contention, the entries of the `used_memory` array must reside on different cache lines. To achieve that, we create a struct with padding such that its size is exactly the cache-line size, and we make sure the address of the `used_memory` array is aligned to the cache-line size. A minimal sketch of this layout appears after the results tables below.

### Benchmark
Some benchmarks show improvement (up to 15%):

| Test Case |Baseline unstable (median obs. +- std.dev)|Comparison test_used_memory_per_thread_array (median obs. +- std.dev)|% change (higher-better)| Note |
|---|---|---:|---|---|
|memtier_benchmark-1key-list-100-elements-lrange-all-elements | 92657 +- 2.0% (2 datapoints) | 101445|9.5% |IMPROVEMENT |
|memtier_benchmark-1key-list-1K-elements-lrange-all-elements | 14965 +- 1.3% (2 datapoints) | 16296|8.9% |IMPROVEMENT |
|memtier_benchmark-1key-set-10-elements-smembers-pipeline-10 | 431019 +- 5.2% (2 datapoints) | 461039|7.0% |waterline=5.2%. IMPROVEMENT |
|memtier_benchmark-1key-set-100-elements-smembers | 74367 +- 0.0% (2 datapoints) | 80190|7.8% |IMPROVEMENT |
|memtier_benchmark-1key-set-1K-elements-smembers | 11730 +- 0.4% (2 datapoints) | 13519|15.3% |IMPROVEMENT |

Full results:

| Test Case |Baseline unstable (median obs. +- std.dev)|Comparison test_used_memory_per_thread_array (median obs. +- std.dev)|% change (higher-better)| Note |
|---|---|---:|---|---|
|memtier_benchmark-10Mkeys-load-hash-5-fields-with-1000B-values | 88613 +- 1.0% (2 datapoints) | 88688|0.1% |No Change |
|memtier_benchmark-10Mkeys-load-hash-5-fields-with-1000B-values-pipeline-10 | 124786 +- 1.2% (2 datapoints) | 123671|-0.9% |No Change |
|memtier_benchmark-10Mkeys-load-hash-5-fields-with-100B-values | 122460 +- 1.4% (2 datapoints) | 122990|0.4% |No Change |
|memtier_benchmark-10Mkeys-load-hash-5-fields-with-100B-values-pipeline-10 | 333384 +- 5.1% (2 datapoints) | 319221|-4.2% |waterline=5.1%. potential REGRESSION|
|memtier_benchmark-10Mkeys-load-hash-5-fields-with-10B-values | 137354 +- 0.3% (2 datapoints) | 138759|1.0% |No Change |
|memtier_benchmark-10Mkeys-load-hash-5-fields-with-10B-values-pipeline-10 | 401261 +- 4.3% (2 datapoints) | 398524|-0.7% |No Change |
|memtier_benchmark-1Mkeys-100B-expire-use-case | 179058 +- 0.4% (2 datapoints) | 180114|0.6% |No Change |
|memtier_benchmark-1Mkeys-10B-expire-use-case | 180390 +- 0.2% (2 datapoints) | 180401|0.0% |No Change |
|memtier_benchmark-1Mkeys-1KiB-expire-use-case | 175993 +- 0.7% (2 datapoints) | 175147|-0.5% |No Change |
|memtier_benchmark-1Mkeys-4KiB-expire-use-case | 165771 +- 0.0% (2 datapoints) | 164434|-0.8% |No Change |
|memtier_benchmark-1Mkeys-bitmap-getbit-pipeline-10 | 931339 +- 2.1% (2 datapoints) | 929487|-0.2% |No Change |
|memtier_benchmark-1Mkeys-generic-exists-pipeline-10 | 999462 +- 0.4% (2 datapoints) | 963226|-3.6% |potential REGRESSION |
|memtier_benchmark-1Mkeys-generic-expire-pipeline-10 | 905333 +- 1.4% (2 datapoints) | 896673|-1.0% |No Change |
|memtier_benchmark-1Mkeys-generic-expireat-pipeline-10 | 885015 +- 1.0% (2 datapoints) | 865010|-2.3% |No Change |
|memtier_benchmark-1Mkeys-generic-pexpire-pipeline-10 | 897115 +- 1.2% (2 datapoints) | 887544|-1.1% |No Change |
|memtier_benchmark-1Mkeys-generic-scan-pipeline-10 | 451103 +- 3.2% (2 datapoints) | 465571|3.2% |potential IMPROVEMENT |
|memtier_benchmark-1Mkeys-generic-touch-pipeline-10 | 996809 +- 0.6% (2 datapoints) | 984478|-1.2% |No Change |
|memtier_benchmark-1Mkeys-generic-ttl-pipeline-10 | 979570 +- 1.7% (2 datapoints) | 958752|-2.1% |No Change |
|memtier_benchmark-1Mkeys-hash-hget-hgetall-hkeys-hvals-with-100B-values | 180888 +- 0.5% (2 datapoints) | 182295|0.8% |No Change |
|memtier_benchmark-1Mkeys-hash-hmget-5-fields-with-100B-values-pipeline-10 | 717881 +- 1.0% (2 datapoints) | 724814|1.0% |No Change |
|memtier_benchmark-1Mkeys-hash-transactions-multi-exec-pipeline-20 | 1055447 +- 0.4% (2 datapoints) | 1065836|1.0% |No Change |
|memtier_benchmark-1Mkeys-lhash-hexists | 164332 +- 0.1% (2 datapoints) | 163636|-0.4% |No Change |
|memtier_benchmark-1Mkeys-lhash-hincbry | 171674 +- 0.3% (2 datapoints) | 172737|0.6% |No Change |
|memtier_benchmark-1Mkeys-list-lpop-rpop-with-100B-values | 180904 +- 1.1% (2 datapoints) | 179467|-0.8% |No Change |
|memtier_benchmark-1Mkeys-list-lpop-rpop-with-10B-values | 181746 +- 0.8% (2 datapoints) | 182416|0.4% |No Change |
|memtier_benchmark-1Mkeys-list-lpop-rpop-with-1KiB-values | 182004 +- 0.7% (2 datapoints) | 180237|-1.0% |No Change |
|memtier_benchmark-1Mkeys-load-hash-5-fields-with-1000B-values | 105191 +- 0.9% (2 datapoints) | 105058|-0.1% |No Change |
|memtier_benchmark-1Mkeys-load-hash-5-fields-with-1000B-values-pipeline-10 | 150683 +- 0.9% (2 datapoints) | 153597|1.9% |No Change |
|memtier_benchmark-1Mkeys-load-hash-hmset-5-fields-with-1000B-values | 104122 +- 0.7% (2 datapoints) | 105236|1.1% |No Change |
|memtier_benchmark-1Mkeys-load-list-with-100B-values | 149770 +- 0.9% (2 datapoints) | 150510|0.5% |No Change |
|memtier_benchmark-1Mkeys-load-list-with-10B-values | 165537 +- 1.9% (2 datapoints) | 164329|-0.7% |No Change |
|memtier_benchmark-1Mkeys-load-list-with-1KiB-values | 113315 +- 0.5% (2 datapoints) | 114110|0.7% |No Change |
|memtier_benchmark-1Mkeys-load-stream-1-fields-with-100B-values | 131201 +- 0.7% (2 datapoints) | 129545|-1.3% |No Change |
|memtier_benchmark-1Mkeys-load-stream-1-fields-with-100B-values-pipeline-10 | 352891 +- 2.8% (2 datapoints) | 348338|-1.3% |No Change |
|memtier_benchmark-1Mkeys-load-stream-5-fields-with-100B-values | 104386 +- 0.7% (2 datapoints) | 105796|1.4% |No Change |
|memtier_benchmark-1Mkeys-load-stream-5-fields-with-100B-values-pipeline-10 | 227593 +- 5.5% (2 datapoints) | 218783|-3.9% |waterline=5.5%. potential REGRESSION|
|memtier_benchmark-1Mkeys-load-string-with-100B-values | 167552 +- 0.2% (2 datapoints) | 170282|1.6% |No Change |
|memtier_benchmark-1Mkeys-load-string-with-100B-values-pipeline-10 | 646888 +- 0.5% (2 datapoints) | 639680|-1.1% |No Change |
|memtier_benchmark-1Mkeys-load-string-with-10B-values | 174891 +- 0.7% (2 datapoints) | 174382|-0.3% |No Change |
|memtier_benchmark-1Mkeys-load-string-with-10B-values-pipeline-10 | 749988 +- 5.1% (2 datapoints) | 769986|2.7% |waterline=5.1%. No Change |
|memtier_benchmark-1Mkeys-load-string-with-1KiB-values | 155929 +- 0.1% (2 datapoints) | 156387|0.3% |No Change |
|memtier_benchmark-1Mkeys-load-zset-with-10-elements-double-score | 92241 +- 0.2% (2 datapoints) | 92189|-0.1% |No Change |
|memtier_benchmark-1Mkeys-load-zset-with-10-elements-int-score | 114328 +- 1.3% (2 datapoints) | 113154|-1.0% |No Change |
|memtier_benchmark-1Mkeys-string-get-100B | 180685 +- 0.2% (2 datapoints) | 180359|-0.2% |No Change |
|memtier_benchmark-1Mkeys-string-get-100B-pipeline-10 | 991291 +- 3.1% (2 datapoints) | 1020086|2.9% |No Change |
|memtier_benchmark-1Mkeys-string-get-10B | 181183 +- 0.3% (2 datapoints) | 177868|-1.8% |No Change |
|memtier_benchmark-1Mkeys-string-get-10B-pipeline-10 | 1032554 +- 0.8% (2 datapoints) | 1023120|-0.9% |No Change |
|memtier_benchmark-1Mkeys-string-get-1KiB | 180479 +- 0.9% (2 datapoints) | 182215|1.0% |No Change |
|memtier_benchmark-1Mkeys-string-get-1KiB-pipeline-10 | 979286 +- 0.9% (2 datapoints) | 989888|1.1% |No Change |
|memtier_benchmark-1Mkeys-string-mget-1KiB | 121950 +- 0.4% (2 datapoints) | 120996|-0.8% |No Change |
|memtier_benchmark-1key-geo-60M-elements-geodist | 179404 +- 1.0% (2 datapoints) | 181232|1.0% |No Change |
|memtier_benchmark-1key-geo-60M-elements-geodist-pipeline-10 | 1023797 +- 0.5% (2 datapoints) | 1014980|-0.9% |No Change |
|memtier_benchmark-1key-geo-60M-elements-geohash | 180808 +- 1.2% (2 datapoints) | 180606|-0.1% |No Change |
|memtier_benchmark-1key-geo-60M-elements-geohash-pipeline-10 | 1056458 +- 1.6% (2 datapoints) | 1040050|-1.6% |No Change |
|memtier_benchmark-1key-geo-60M-elements-geopos | 181808 +- 0.2% (2 datapoints) | 175945|-3.2% |potential REGRESSION |
|memtier_benchmark-1key-geo-60M-elements-geopos-pipeline-10 | 1038180 +- 3.4% (2 datapoints) | 1033005|-0.5% |No Change |
|memtier_benchmark-1key-geo-60M-elements-geosearch-fromlonlat | 142614 +- 0.3% (2 datapoints) | 144259|1.2% |No Change |
|memtier_benchmark-1key-geo-60M-elements-geosearch-fromlonlat-bybox | 141008 +- 0.4% (2 datapoints) | 139602|-1.0% |No Change |
|memtier_benchmark-1key-geo-60M-elements-geosearch-fromlonlat-pipeline-10 | 560698 +- 0.8% (2 datapoints) | 548806|-2.1% |No Change |
|memtier_benchmark-1key-list-10-elements-lrange-all-elements | 166132 +- 0.9% (2 datapoints) | 170259|2.5% |No Change |
|memtier_benchmark-1key-list-100-elements-lrange-all-elements | 92657 +- 2.0% (2 datapoints) | 101445|9.5% |IMPROVEMENT |
|memtier_benchmark-1key-list-1K-elements-lrange-all-elements | 14965 +- 1.3% (2 datapoints) | 16296|8.9% |IMPROVEMENT |
|memtier_benchmark-1key-pfadd-4KB-values-pipeline-10 | 264156 +- 0.2% (2 datapoints) | 262582|-0.6% |No Change |
|memtier_benchmark-1key-set-10-elements-smembers | 138916 +- 1.7% (2 datapoints) | 138016|-0.6% |No Change |
|memtier_benchmark-1key-set-10-elements-smembers-pipeline-10 | 431019 +- 5.2% (2 datapoints) | 461039|7.0% |waterline=5.2%. IMPROVEMENT |
|memtier_benchmark-1key-set-10-elements-smismember | 173545 +- 1.1% (2 datapoints) | 173488|-0.0% |No Change |
|memtier_benchmark-1key-set-100-elements-smembers | 74367 +- 0.0% (2 datapoints) | 80190|7.8% |IMPROVEMENT |
|memtier_benchmark-1key-set-100-elements-smismember | 155682 +- 1.6% (2 datapoints) | 151367|-2.8% |No Change |
|memtier_benchmark-1key-set-1K-elements-smembers | 11730 +- 0.4% (2 datapoints) | 13519|15.3% |IMPROVEMENT |
|memtier_benchmark-1key-set-200K-elements-sadd-constant | 181070 +- 1.1% (2 datapoints) | 180214|-0.5% |No Change |
|memtier_benchmark-1key-set-2M-elements-sadd-increasing | 166364 +- 0.1% (2 datapoints) | 166944|0.3% |No Change |
|memtier_benchmark-1key-zincrby-1M-elements-pipeline-1 | 46071 +- 0.6% (2 datapoints) | 44979|-2.4% |No Change |
|memtier_benchmark-1key-zrank-1M-elements-pipeline-1 | 48429 +- 0.4% (2 datapoints) | 49265|1.7% |No Change |
|memtier_benchmark-1key-zrem-5M-elements-pipeline-1 | 48528 +- 0.4% (2 datapoints) | 48869|0.7% |No Change |
|memtier_benchmark-1key-zrevrangebyscore-256K-elements-pipeline-1 | 100580 +- 1.5% (2 datapoints) | 101782|1.2% |No Change |
|memtier_benchmark-1key-zrevrank-1M-elements-pipeline-1 | 48621 +- 2.0% (2 datapoints) | 48473|-0.3% |No Change |
|memtier_benchmark-1key-zset-10-elements-zrange-all-elements | 83485 +- 0.6% (2 datapoints) | 83095|-0.5% |No Change |
|memtier_benchmark-1key-zset-10-elements-zrange-all-elements-long-scores | 118673 +- 0.8% (2 datapoints) | 118006|-0.6% |No Change |
|memtier_benchmark-1key-zset-100-elements-zrange-all-elements | 19009 +- 1.1% (2 datapoints) | 19293|1.5% |No Change |
|memtier_benchmark-1key-zset-100-elements-zrangebyscore-all-elements | 18957 +- 0.5% (2 datapoints) | 19419|2.4% |No Change |
|memtier_benchmark-1key-zset-100-elements-zrangebyscore-all-elements-long-scores| 171693 +- 0.5% (2 datapoints) | 172432|0.4% |No Change |
|memtier_benchmark-1key-zset-1K-elements-zrange-all-elements | 3566 +- 0.6% (2 datapoints) | 3672|3.0% |No Change |
|memtier_benchmark-1key-zset-1M-elements-zcard-pipeline-10 | 1067713 +- 0.4% (2 datapoints) | 1071550|0.4% |No Change |
|memtier_benchmark-1key-zset-1M-elements-zrevrange-5-elements | 169195 +- 0.7% (2 datapoints) | 169620|0.3% |No Change |
|memtier_benchmark-1key-zset-1M-elements-zscore-pipeline-10 | 914338 +- 0.2% (2 datapoints) | 905540|-1.0% |No Change |
|memtier_benchmark-2keys-lua-eval-hset-expire | 88346 +- 1.7% (2 datapoints) | 87259|-1.2% |No Change |
|memtier_benchmark-2keys-lua-evalsha-hset-expire | 103273 +- 1.2% (2 datapoints) | 102393|-0.9% |No Change |
|memtier_benchmark-2keys-set-10-100-elements-sdiff | 15418 +- 10.9% UNSTABLE (2 datapoints) | 14369|-6.8% |UNSTABLE (very high variance) |
|memtier_benchmark-2keys-set-10-100-elements-sinter | 83601 +- 3.6% (2 datapoints) | 82508|-1.3% |No Change |
|memtier_benchmark-2keys-set-10-100-elements-sunion | 14942 +- 11.2% UNSTABLE (2 datapoints) | 14001|-6.3% |UNSTABLE (very high variance) |
|memtier_benchmark-2keys-stream-5-entries-xread-all-entries | 75938 +- 0.4% (2 datapoints) | 76565|0.8% |No Change |
|memtier_benchmark-2keys-stream-5-entries-xread-all-entries-pipeline-10 | 120781 +- 1.1% (2 datapoints) | 119142|-1.4% |No Change |
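A minimal sketch of the cache-line-padded per-thread counter layout described above (the constants, names, and slot-assignment scheme are illustrative; the actual implementation in Redis's allocation tracking code may differ):

```c
#include <stdatomic.h>
#include <stddef.h>

#define CACHE_LINE_SIZE 64   /* illustrative; real code detects/uses the target's line size */
#define MAX_THREADS_NUM 16

/* One counter per slot, padded so two slots never share a cache line. */
typedef struct {
    _Atomic long long used;
    char pad[CACHE_LINE_SIZE - sizeof(_Atomic long long)];
} used_memory_entry;

static _Alignas(CACHE_LINE_SIZE) used_memory_entry used_memory[MAX_THREADS_NUM];
static _Thread_local int my_slot = -1;
static _Atomic int next_slot = 0;

/* Each thread updates only its own slot; with more than 16 threads, slots
 * are reused and some contention returns, but far less than on one global. */
static void update_used_memory(long long delta) {
    if (my_slot == -1)
        my_slot = atomic_fetch_add(&next_slot, 1) % MAX_THREADS_NUM;
    atomic_fetch_add_explicit(&used_memory[my_slot].used, delta,
                              memory_order_relaxed);
}

/* The main thread sums all slots to get the total used memory. */
static long long get_used_memory(void) {
    long long sum = 0;
    for (int i = 0; i < MAX_THREADS_NUM; i++)
        sum += atomic_load_explicit(&used_memory[i].used, memory_order_relaxed);
    return sum;
}
```

Because each padded entry occupies its own cache line and the array itself is cache-line aligned, two threads writing to different slots never invalidate each other's cache line.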
-
debing.sun authored
This is a missing piece of PR https://github.com/redis/redis/pull/13383. We may call `functionsLibCtxClear()` in bio, so we shouldn't touch `curr_functions_lib_ctx` in it.
-
- 16 Aug, 2024 1 commit
-
-
debing.sun authored
## Describe
The `XTRIM` command does not update the maximal tombstone (`max_deleted_entry_id`) when it trims a stream. This leads to an issue where the lag calculation incorrectly assumes that there are no tombstones after the consumer group's last_id, resulting in an inaccurate lag.

The reason XTRIM doesn't need to update the maximal tombstone is that it always trims from the beginning of the stream. This means that it consistently changes the position of the first entry, leading to the following scenarios:
1) First entry trimmed past the maximal tombstone: if the first entry is trimmed to a position after the maximal tombstone, all tombstones will be before the first entry, so they won't affect the consumer group's lag.
2) First entry trimmed before the maximal tombstone: if the first entry is trimmed to a position before the maximal tombstone, the maximal tombstone will not be updated.

## Solution
Therefore, this PR optimizes the lag calculation by ensuring that when both the consumer group's last_id and the maximal tombstone are behind the first entry, the consumer group's lag is always equal to the number of remaining elements in the stream. Supplement to PR https://github.com/redis/redis/pull/13338
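A hedged sketch of the resulting lag rule (illustrative helper and types, not the actual stream lag code): when both the group's last-delivered id and the maximal tombstone fall before the stream's first entry, every remaining entry is undelivered, so the lag is simply the stream length.

```c
#include <stdint.h>

typedef struct { uint64_t ms, seq; } streamIDSketch;

static int idLess(streamIDSketch a, streamIDSketch b) {
    return a.ms < b.ms || (a.ms == b.ms && a.seq < b.seq);
}

/* Illustrative lag computation for one consumer group. */
static long long groupLagSketch(long long stream_length,
                                long long entries_added,
                                long long entries_read,
                                streamIDSketch first_id,
                                streamIDSketch group_last_id,
                                streamIDSketch max_deleted_id) {
    /* Both the group's last id and the maximal tombstone are behind the
     * first entry: nothing remaining was delivered, so lag == length. */
    if (idLess(group_last_id, first_id) && idLess(max_deleted_id, first_id))
        return stream_length;
    /* Otherwise fall back to the counter-based calculation. */
    return entries_added - entries_read;
}
```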
-
- 14 Aug, 2024 1 commit
-
-
debing.sun authored
Fixed an omission from #13117. When the number of streams is incorrect, the error message for `XREAD` needs to include the '+' symbol.
-
- 11 Aug, 2024 1 commit
-
-
Moti Cohen authored
Hash field expiration is optimized to avoid frequently updating the global HFE DS for each field deletion; eventually active-expiration will run and update or remove the hash from the global HFE DS gracefully. Nevertheless, the "subexpiry" statistic might report the wrong number of hashes with HFE to the user if HDEL deletes the last field with an expiration in a hash (while there are still fields without expiration). Following this change, if HDEL deletes the last field with an expiration in the hash, we take care to remove the hash from the global HFE DS as well.
-
- 08 Aug, 2024 2 commits
-
-
LuMingYinDetect authored
Fix a memory leak related to the variable slot_nodes in the clusterManagerFixSlotsCoverage() function. --------- Co-authored-by:
debing.sun <debing.sun@redis.com>
-
debing.sun authored
This PR is based on the commits from PR https://github.com/valkey-io/valkey/pull/52. Ref: https://github.com/redis/redis/pull/12760 Close https://github.com/redis/redis/issues/13401 This PR will replace https://github.com/redis/redis/pull/13449

Fixes compatibility of Redis cluster (7.2 - extensions enabled by default) with older Redis cluster (< 7.0 - extensions not handled). With some extensions enabled by default in 7.2, new nodes running 7.2 and above started sending out a larger cluster-bus message payload that includes the ping extensions. This caused an incompatibility with nodes running engine versions < 7.0: an old node (< 7.0) receiving the payload from a new node (7.2 and above) would observe a payload length (totlen) > (estlen), perform an early exit, and never process the message.

This fix does the following things:
1. Always set `CLUSTERMSG_FLAG0_EXT_DATA`, because during the meet phase we do not know whether the connected node supports ext data; we need to make sure that it knows and sends back its ext data if it has any.
2. If another node does not support ext data, we will not send it ext data, to avoid a handshake failure due to an incorrect payload length.

Note: a successful `PING`/`PONG` is required before a given sender node is marked as `CLUSTERMSG_FLAG0_EXT_DATA`; only then will extension messages be sent to it. This could cause a slight delay in receiving the extension message(s). --------- Signed-off-by:
Harkrishn Patro <harkrisp@amazon.com> Co-authored-by:
Harkrishn Patro <harkrisp@amazon.com>
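A hedged sketch of the send-side rule described in points 1 and 2 above (illustrative struct and field names, not the actual cluster_legacy.c code): we always advertise extension support in our own header, but only attach the extension payload once the peer has proven it understands extensions.

```c
#include <stdint.h>

#define CLUSTERMSG_FLAG0_EXT_DATA (1 << 2)   /* illustrative flag value */

typedef struct {
    uint16_t flags0;           /* sender capability flags in the header */
} clusterMsgHdrSketch;

typedef struct {
    int peer_supports_ext;     /* set once a PING/PONG from the peer
                                  carried CLUSTERMSG_FLAG0_EXT_DATA */
} clusterPeerSketch;

static void buildPingSketch(clusterMsgHdrSketch *hdr,
                            const clusterPeerSketch *peer,
                            int *attach_extensions) {
    /* 1. Always advertise that we speak extensions, even during MEET,
     *    so the peer replies with its own extension data if it can. */
    hdr->flags0 |= CLUSTERMSG_FLAG0_EXT_DATA;

    /* 2. Only attach the (larger) extension payload if the peer already
     *    told us it supports extensions; otherwise an old (< 7.0) node
     *    would see totlen > its estimated length and drop the message. */
    *attach_extensions = peer->peer_supports_ext;
}
```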
-
- 06 Aug, 2024 1 commit
-
-
debing.sun authored
First, we need to ensure that `curmaster` in `clusterUpdateSlotsConfigWith()` is not NULL at https://github.com/redis/redis/blob/82f00f5179720c8cee6cd650763d184ba943be92/src/cluster_legacy.c#L2320, otherwise it will crash at https://github.com/redis/redis/blob/82f00f5179720c8cee6cd650763d184ba943be92/src/cluster_legacy.c#L2395. So when loading the cluster node config, we need to ensure that the following conditions are met: 1. A node must be either a master or a replica. 2. If a node is a replica, its master can't be NULL.
-
- 05 Aug, 2024 3 commits
-
-
Josh Hershberg authored
This and the previous commit turn the cluster shards command into a generic implementation instead of a separate implementation per cluster API backend. This commit (a) adds functions to the cluster API and (b) modifies the cluster shards command implementation to use cluster API functions instead of directly accessing the legacy clustering implementation. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
This and the following commit turn the cluster shards command into a generic implementation instead of a separate implementation per cluster API backend. This commit simply moves the cluster shards implementation from cluster_legacy.c to cluster.c without changing it at all. The reason for doing so was to make the changes in the diff easier to review. Signed-off-by:
Josh Hershberg <yehoshua@redis.com>
-
Zhongxian Pan authored
## Replace bit shift with `__builtin_ctzll` in HyperLogLog
The builtin function `__builtin_ctzll` is more efficient than the bit-shift loop, even though, as the source file comment mentions, "in the average case there are high probabilities to find a 1 after a few iterations". --------- Co-authored-by:
debing.sun <debing.sun@redis.com>
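A minimal sketch of the change (the pattern mirrors HyperLogLog's run-length computation, with names simplified): the loop that shifted bit by bit to find the first set bit is replaced with a single count-trailing-zeros builtin.

```c
#include <stdint.h>

/* Count the run of trailing zero bits in 'bits' plus one, as HyperLogLog
 * does for the hash remainder. 'bits' is assumed non-zero because the
 * caller sets a sentinel bit to guarantee termination. */
static int runLengthShift(uint64_t bits) {
    int count = 1;
    uint64_t bit = 1;
    while ((bits & bit) == 0) {   /* old approach: shift until a 1 is found */
        count++;
        bit <<= 1;
    }
    return count;
}

static int runLengthCtz(uint64_t bits) {
    return __builtin_ctzll(bits) + 1;   /* new approach: one instruction */
}
```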
-
- 04 Aug, 2024 1 commit
-
-
Moti Cohen authored
H[P]TTL should be marked as NONDETERMINISTIC_OUTPUT just like [P]TTL.
-
- 03 Aug, 2024 1 commit
-
-
Vitah Lin authored
When the server restarts while the CLI is connected, the reconnection does not automatically select the previously selected db. This may lead users to believe they are still in the previous db when, in fact, they are in db0. This PR automatically resets the current dbnum and calls `cliSelect()` again when reconnecting. --------- Co-authored-by:
debing.sun <debing.sun@redis.com>
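A hedged sketch of the reconnect fix (illustrative names; redis-cli's actual reconnect path and config fields may differ): after reconnecting, the CLI re-issues SELECT for the previously selected db instead of silently staying on db0.

```c
/* Illustrative sketch: 'dbnum' stands in for the CLI's record of the
 * currently selected db, and selectDb() for re-sending SELECT. */
static int dbnum = 0;

static void selectDb(int target);   /* hypothetical: sends "SELECT <target>" */

static void onReconnectSketch(void) {
    int target = dbnum;   /* db selected before the server restarted */
    dbnum = 0;            /* a fresh connection always starts on db0 */
    if (target != 0)
        selectDb(target); /* restore the user's previous db selection */
}
```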
-
- 01 Aug, 2024 2 commits
-
-
c8ef authored
-
debing.sun authored
Close https://github.com/redis/redis/issues/13414 When the cluster's master node fails and is switched to another node, the first node in the shard node list (the old master) is no longer valid. Add a new method clusterGetMasterFromShard() to obtain the current master.
-
- 30 Jul, 2024 1 commit
-
-
debing.sun authored
Fix #13337 This PR fixes two bugs that caused lag calculation errors. 1. When the latest tombstone is before the first entry, the tombstone may still be after the last id of the consumer group. 2. When a tombstone is after the last id of the consumer group, the group's counter will be invalid; we should calculate entries_read by using estimates.
-
- 28 Jul, 2024 1 commit
-
-
Lior Kogan authored
-
- 25 Jul, 2024 1 commit
-
-
Moti Cohen authored
-
- 24 Jul, 2024 1 commit
-
-
Moti Cohen authored
Modify RDB_TYPE_HASH_METADATA layout to store expiration times relative to the minimum expiration time, which is written at the start as absolute time.
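A hedged sketch of the relative-encoding idea (illustrative writer/reader helpers, not the actual rdb.c functions): the minimum expiration is written once as an absolute time, and each field's TTL is stored as an offset from it.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative serialization of hash-field TTLs: one absolute minimum,
 * then per-field deltas. writeU64/readU64 stand in for the RDB I/O layer. */
static void saveFieldTTLs(void (*writeU64)(uint64_t),
                          const uint64_t *ttls, size_t n, uint64_t min_ttl) {
    writeU64(min_ttl);                    /* absolute ms time, written once */
    for (size_t i = 0; i < n; i++)
        writeU64(ttls[i] - min_ttl);      /* small relative values */
}

static void loadFieldTTLs(uint64_t (*readU64)(void),
                          uint64_t *ttls, size_t n) {
    uint64_t min_ttl = readU64();
    for (size_t i = 0; i < n; i++)
        ttls[i] = readU64() + min_ttl;    /* restore absolute times */
}
```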
-
- 22 Jul, 2024 1 commit
-
-
Oran Agra authored
Recently in #13361, I attempted to fix a race between FLUSHALL and BGSAVE, where despite calling killRDBChild, the backgroundSaveDoneHandler would terminate with success. It turns out that even if the child hasn't exited yet, there's a chance it will still miss our signal and exit with success. In that case, we would still mess up the dirty counter (deducting dirty_before_bgsave), which is reset by FLUSHALL, and override the synchronous rdb file we saved. Instead, we now set a flag to treat the next done handler as a failed one.
-
- 16 Jul, 2024 2 commits
-
-
debing.sun authored
Nowadays we do not trigger Lua GC after loading a Lua script. This means that when a large number of scripts are loaded, such as when functions are propagated from the master to the replica, if the Lua scripts are never touched on the replica the garbage might remain there indefinitely. Before this PR, we shared a gc_count between scripts and functions. This means that, under certain circumstances, the GC trigger for scripts and functions was not fair. For example, loading a large number of scripts followed by a small number of functions could result in the functions triggering GC. In this PR, we assign a unique `gc_count` to each of them, so their GC triggers no longer affect each other. On the other hand, this PR brings a regression for the script loading commands (`FUNCTION LOAD` and `SCRIPT LOAD`), but they are not on the hot path, so we can ignore it, and it will be replaced by https://github.com/redis/redis/pull/13375 in the future. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
debing.sun authored
When we terminate the diskless RDB saving child process and, at the same time, start a new BGSAVE for new replicas, we should not delete the RDB read event. Otherwise, these replicas will never receive a response. This is a result of the recent change in https://github.com/redis/redis/pull/13361 --------- Co-authored-by:
oranagra <oran@redislabs.com>
-
- 15 Jul, 2024 1 commit
-
-
debing.sun authored
This PR is based on the commits from PR https://github.com/valkey-io/valkey/pull/670. ## Description While profiling hotspots in some benchmark workloads, we noticed the high cycle ratio of `prepareClientToWrite`, taking about 9% of the CPU for the `smembers` and `lrange` commands. After a deep dive into the code logic, we found we can gain performance by reducing the redundant calls to `prepareClientToWrite` when addReply* is called repeatedly. For example, in https://github.com/valkey-io/valkey/blob/unstable/src/networking.c#L1080-L1082, `prepareClientToWrite` is called three times in a row. --------- Signed-off-by:
Lipeng Zhu <lipeng.zhu@intel.com> Co-authored-by:
Lipeng Zhu <lipeng.zhu@intel.com> Co-authored-by:
Wangyang Guo <wangyang.guo@intel.com>
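A hedged sketch of the pattern the PR targets (illustrative, not the final upstream implementation, and assuming the networking helpers declared in Redis's networking.c): each consecutive addReply* call re-runs prepareClientToWrite, so the check can be performed once per batch and the raw protocol appended directly.

```c
/* Illustrative sketch assuming Redis's server.h / networking.c declarations.
 * Before: each of these independently runs prepareClientToWrite():
 *   addReplyBulkCBuffer(c, "a", 1);
 *   addReplyBulkCBuffer(c, "b", 1);
 *   addReplyBulkCBuffer(c, "c", 1);
 */
#include "server.h"

static void replyThreeFieldsSketch(client *c) {
    if (prepareClientToWrite(c) != C_OK) return;   /* checked once */
    /* Raw RESP appends with the already-validated client. */
    _addReplyToBufferOrList(c, "$1\r\na\r\n", 7);
    _addReplyToBufferOrList(c, "$1\r\nb\r\n", 7);
    _addReplyToBufferOrList(c, "$1\r\nc\r\n", 7);
}
```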
-
- 14 Jul, 2024 1 commit
-
-
guybe7 authored
128 is not enough chars when we're talking about commands like RESTORE. Of course, it's impossible to find the perfect number, but 1024 is better than 128, and it's not obscenely large.
-
- 11 Jul, 2024 2 commits
-
-
guybe7 authored
To be more similar to EXPIRE-like commands, which emit a "del" notification if the expire-time is in the past
-
guybe7 authored
This commit reverts the deletion of the condition `!bc->blocked_on_keys` that was accidentally introduced by https://github.com/redis/redis/pull/12817. In case a blocked-on-keys module client is unblocked, both `moduleUnblockClientOnKey` and `moduleHandleBlockedClients` are called, which resulted in `updateStatsOnUnblock` being called twice. Now that `moduleHandleBlockedClients` doesn't call `updateStatsOnUnblock` for unblocked module key-blocked clients, in the unlikely event that the module decides to call `RM_UnblockClient` on a key-blocked client, we need to call `updateStatsOnUnblock` from within `moduleBlockedClientTimedOut`; but since `moduleBlockedClientTimedOut` is not thread-safe we can't call it directly from within `RM_UnblockClient`. Added a new flag `blocked_on_keys_explicit_unblock` for that specific case, which will cause `moduleBlockedClientTimedOut` to be called from `moduleHandleBlockedClients` (which is only called from the main thread). --------- Co-authored-by:
debing.sun <debing.sun@redis.com>
-
- 10 Jul, 2024 1 commit
-
-
debing.sun authored
### Issue The current implementation of the `FUNCTION FLUSH` command uses `lua_unref()` to unreference script closures in the Lua VM. However, invoking `lua_unref()` during lazy free (`ASYNC` argument) is risky since it is not thread-safe. Another issue is that using `lua_unref()` to unreference references does not trigger GC; this can leave a significant amount of garbage in the Lua VM, which may never be cleaned up if not properly GC'd. ### Solution The proposed solution is to completely rebuild the engines, resulting in a brand new Lua VM. --------- Co-authored-by:
meir <meir@redis.com>
-
- 09 Jul, 2024 1 commit
-
-
debing.sun authored
This PR is based on the commits from PR #11747. In the event of an assertion failure, hide command arguments from the operator. In some cases, private client information can be exposed when a redis instance crashes due to an assertion failure. This commit prevents unintentional client info exposure. Operators can still access the hidden data, but they must actively request it. All of the client info commands remain unchanged. ### Config Add a new config `hide-user-data-from-log` to turn this feature on and off; default off. --------- Co-authored-by:
naglera <anagler123@gmail.com> Co-authored-by:
naglera <58042354+naglera@users.noreply.github.com>
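A hedged sketch of the redaction idea (illustrative helper; the real config is `hide-user-data-from-log`, but the logging function shown here is hypothetical): when the config is enabled, the crash/assert report prints a placeholder instead of the argument contents.

```c
#include <stdio.h>

/* Illustrative: print one command argument into the assertion/crash log,
 * hiding its contents when hide-user-data-from-log is enabled. */
static void logCommandArgSketch(FILE *log, const char *arg, size_t arglen,
                                int hide_user_data_from_log) {
    if (hide_user_data_from_log)
        fprintf(log, "argv[..]: *redacted* (%zu bytes)\n", arglen);
    else
        fprintf(log, "argv[..]: \"%.*s\"\n", (int)arglen, arg);
}
```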
-
- 07 Jul, 2024 1 commit
-
-
Moti Cohen authored
* Following this feature, Redis (ROF) may implement a flow that allows hashes to be dumped directly from RDB to flash without parsing. In this scenario, it is still essential to determine when to update hashes due to expired fields. By writing and reading the next minimum hash-field expiration before serializing objects to and from RDB, we can effectively track and expire hash fields without the need to parse the hash during loading.

Before:
#define RDB_TYPE_HASH_METADATA 22
#define RDB_TYPE_HASH_LISTPACK_EX 23

After:
/* Hash with HFEs. Doesn't attach min TTL at start */
#define RDB_TYPE_HASH_METADATA_PRE_GA 22
/* Hash LP with HFEs. Doesn't attach min TTL at start */
#define RDB_TYPE_HASH_LISTPACK_EX_PRE_GA 23
/* Hash with HFEs. Attach min TTL at start */
#define RDB_TYPE_HASH_METADATA 24
/* Hash LP with HFEs. Attach min TTL at start */
#define RDB_TYPE_HASH_LISTPACK_EX 25

* Manually tested loading an RDB file from before the change and verified the hash and its HFEs are as expected.
* Added a `subexpires` counter to `redis-check-rdb`
-
- 04 Jul, 2024 1 commit
-
-
debing.sun authored
1. Add help for the `DEBUG SCRIPT` command. 2. Remove a duplicate `getLuaScripts()`, which is the same as `evalScriptsDict()`.
-
- 03 Jul, 2024 1 commit
-
-
Filipe Oliveira (Redis) authored
This PR makes the following changes based on CPU profile info. The `getNodeByQuery` function represents 8.2% of an overhead of 12.3% when comparing a single-shard cluster with standalone. Proposed changes:
- inline keyHashSlot to reduce the overhead of that function call
- reduce duplicate calls to getCommandFlags within getNodeByQuery
The above changes represent an improvement of approximately 5% on the achievable ops/sec. Co-authored-by:
filipecosta90 <filipecosta.90@gmail.com>
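For context, the function being inlined computes the CRC16-based hash slot with hashtag support; a sketch of its well-known logic (crc16() assumed to be provided by crc16.c, function name suffixed to mark it as a sketch) that can be marked static inline:

```c
#include <stdint.h>

uint16_t crc16(const char *buf, int len);  /* provided by crc16.c */

/* Hash slot of a key, honoring {hashtag} sections, as in cluster.c.
 * Marking it static inline lets getNodeByQuery avoid the call overhead. */
static inline unsigned int keyHashSlotSketch(const char *key, int keylen) {
    int s, e;
    for (s = 0; s < keylen; s++)
        if (key[s] == '{') break;
    if (s == keylen) return crc16(key, keylen) & 0x3FFF;    /* no '{' */
    for (e = s + 1; e < keylen; e++)
        if (key[e] == '}') break;
    if (e == keylen || e == s + 1) return crc16(key, keylen) & 0x3FFF;
    /* Hash only what is between the braces. */
    return crc16(key + s + 1, e - s - 1) & 0x3FFF;
}
```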
-
- 02 Jul, 2024 1 commit
-
-
Moti Cohen authored
* INFO command: rename `hashes_with_expiry_fields` to `subexpiry`
* INFO command: rename `expired_hash_fields` to `expired_subkeys`
* Fix the `expired_subkeys` statistic to also count lazily expired fields
* Remove leftover TODO comments in TCL
* Fix potential flaky test of rdb load of hash-field-expiration
-
- 01 Jul, 2024 1 commit
-
-
Oran Agra authored
If we run FLUSHALL when the 'save' config is set, and there's a forked child doing BGSAVE, there's a chance the child has already finished and the parent process is unaware of it. In that case the child will not get the kill signal and will finish successfully, but the parent process thinks it killed it and resets the dirty counter to 0; then the backgroundSaveDoneHandlerDisk method can set the dirty counter to a negative value.
-