- 15 Oct, 2024 1 commit
-
-
YaacovHazan authored
- Add a new 'EXPERIMENTAL' command flag, which causes the command generator to skip over it and makes the command unavailable for execution
- Skip experimental tests by default
- Move the SFLUSH tests from the old framework to the new one

---------
Co-authored-by: YaacovHazan <yaacov.hazan@redislabs.com>
-
- 12 Oct, 2024 1 commit
-
- 10 Oct, 2024 1 commit
-
-
guybe7 authored
1. `dbRandomKey`: remove an excessive call to `dbFindExpires` (it will always return 1 if `allvolatile`, and it is called inside `expireIfNeeded` anyway)
2. Add `deleteKeyAndPropagate`, which is used by both expiry and eviction
3. Change the order of calls in `expireIfNeeded` to save redundant calls to `keyIsExpired`
4. `expireIfNeeded`: move `OBJ_STATIC_REFCOUNT` to `deleteKeyAndPropagate`
5. `performEvictions` now uses `deleteEvictedKeyAndPropagate`
6. Active expire: move `postExecutionUnitOperations` inside `activeExpireCycleTryExpire`
7. `activeExpireCycleTryExpire`: less indentation, and expire a key if `now == t`
8. Rename `lazy_expire_disabled` to `allow_access_expired`
-
- 08 Oct, 2024 5 commits
-
-
Oran Agra authored
-
Oran Agra authored
The '%' rule must contain one or both of R/W
-
Oran Agra authored
INT_MIN value must be explicitly checked, and cannot be negated.
-
alonre24 authored
Update search target path and version from M02
-
chx9 authored
fix typo in test_helper.tcl: even driven => event driven
-
- 29 Sep, 2024 1 commit
-
-
Moti Cohen authored
This PR introduces a new `SFLUSH` command to cluster mode that allows partial flushing of nodes based on specified slot ranges. The current implementation is designed to flush all slots of a shard, but future extensions could allow for more granular flushing.

**Command Usage:**
`SFLUSH <start-slot> <end-slot> [<start-slot> <end-slot>]* [SYNC|ASYNC]`

This command removes all data from the specified slots, either synchronously or asynchronously depending on the optional SYNC/ASYNC argument.

**Functionality:**
The current implementation of `SFLUSH` verifies that the provided slot ranges are valid and cover all of the node's slots before proceeding. If slots are partially or incorrectly specified, the command will fail and return an error, ensuring that all slots of a node must be fully covered for the flush to proceed. The command supports both synchronous (default) and asynchronous flushing. In addition, when possible, SFLUSH SYNC will be run as a blocking ASYNC flush as an optimization.
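For illustration only, assuming a node that owns exactly slots 0-5460, a full-coverage asynchronous flush would look like this (any set of ranges that does not cover all of the node's slots is rejected with an error):

```
SFLUSH 0 5460 ASYNC
```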
-
- 25 Sep, 2024 2 commits
-
-
Ozan Tezcan authored
This PR is based on https://github.com/valkey-io/valkey/pull/996

Currently, for operations like SUNION or SDIFF, the temporary set object can be an intset or a listpack. Search operations are costly for these encodings. This patch sets the temporary set object to a hash table by default. It also tries to determine the correct encoding for the temporary set object to reduce unnecessary conversions.

This change is supposed to give a performance boost for tests like:
- [memtier_benchmark-2keys-set-10-100-elements-sdiff](https://github.com/redis/redis-benchmarks-specification/blob/main/redis_benchmarks_specification/test-suites/memtier_benchmark-2keys-set-10-100-elements-sdiff.yml): 66.2% IMPROVEMENT
- [memtier_benchmark-2keys-set-10-100-elements-sunion](https://github.com/redis/redis-benchmarks-specification/blob/main/redis_benchmarks_specification/test-suites/memtier_benchmark-2keys-set-10-100-elements-sunion.yml): 126.5% IMPROVEMENT

-------
Co-authored-by: Lipeng Zhu <lipeng.zhu@intel.com>
Co-authored-by: Wangyang Guo <wangyang.guo@intel.com>
-
Moti Cohen authored
This PR is based on valkey-io/valkey#829. Previously, the ZUNION and ZUNIONSTORE commands used a temporary accumulator dict and, at the end, copied it as-is to dstzset->dict. This PR removes the accumulator and stores directly into dstzset->dict, eliminating the extra copy.

Co-authored-by: Rayacoo <zisong.cw@alibaba-inc.com>
-
- 23 Sep, 2024 2 commits
-
-
Moti Cohen authored
Test 1 - Give more time for expiration
Test 2 - Evaluate expiration time boundaries [+1, +2] before setting expiration [+1]
Test 3 - Avoid a race in the test "HFEs propagated to replica"
-
debing.sun authored
#13258 Incorrect use of free instead of zfree
-
- 19 Sep, 2024 1 commit
-
-
Moti Cohen authored
The PR extends `RedisModule_OpenKey`'s flags to include `REDISMODULE_OPEN_KEY_ACCESS_EXPIRED`, which allows access to expired keys. It also allows access to expired subkeys. Currently this is relevant only for hash fields, and it affects `RM_HashGet` and `RM_Scan`.
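A minimal sketch of a module command that uses this flag to read a hash field even after it has expired; the command function name is illustrative and registration boilerplate (RedisModule_OnLoad) is omitted:

```c
#include "redismodule.h"

/* Illustrative command: <cmd> <key> <field> - read a hash field, even if expired. */
int GetExpiredField_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 3) return RedisModule_WrongArity(ctx);

    /* REDISMODULE_OPEN_KEY_ACCESS_EXPIRED lets the module see the key/field
     * even if its TTL has already elapsed. */
    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1],
        REDISMODULE_READ | REDISMODULE_OPEN_KEY_ACCESS_EXPIRED);
    if (key == NULL) return RedisModule_ReplyWithNull(ctx);

    RedisModuleString *value = NULL;
    RedisModule_HashGet(key, REDISMODULE_HASH_NONE, argv[2], &value, NULL);
    RedisModule_CloseKey(key);

    if (value) {
        RedisModule_ReplyWithString(ctx, value);
        RedisModule_FreeString(ctx, value);
    } else {
        RedisModule_ReplyWithNull(ctx);
    }
    return REDISMODULE_OK;
}
```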
-
- 18 Sep, 2024 1 commit
-
-
debing.sun authored
Since `\\` is only one character, we need to add an extra space to the right.
-
- 15 Sep, 2024 3 commits
-
-
adamiBs authored
This PR introduces the installation of the `musl`-based version of Rust, in order to support alpine-based runtime environments (Rust is used by [RedisJSON](https://github.com/RedisJSON/RedisJSON)).
-
Filipe Oliveira (Redis) authored
Replace usage of _addReplyLongLongWithPrefix with specific bulk/mbulk functions to reduce condition checks in the hot path (#13520).

Instead of adding runtime logic to decide which prefix/shared object to use when building the reply, we can simply use an inline method to avoid the runtime overhead of condition checks, and also keep the code change small. Preliminary data show improvements on commands that heavily rely on bulk/mbulk replies (LRANGE, for example).

---------
Co-authored-by: debing.sun <debing.sun@redis.com>
-
Filipe Oliveira (Personal) authored
Fixes #8825.

We're using the fast_float library[1] in our (compiled-in) floating-point fast_float_strtod implementation for faster and more portable parsing of 64-bit floating-point decimal strings.

The single file fast_float.h is an amalgamation of the entire library, which can be (re)generated with the amalgamate.py script (from the fast_float repository) via the command:

```
python3 ./script/amalgamate.py --license=MIT > $REDIS_SRC/deps/fast_float/fast_float.h
```

[1]: https://github.com/fastfloat/fast_float

The fast_float commit used was the one from https://github.com/fastfloat/fast_float/releases/tag/v3.10.1

---------
Co-authored-by: fcostaoliveira <filipe@redis.com>
-
- 13 Sep, 2024 2 commits
-
-
Filipe Oliveira (Redis) authored
Optimize HSCAN/ZSCAN command in case of listpack encoding: avoid the usage of an intermediate list (#13531).

Similar to #13530, applied to HSCAN and ZSCAN in case of listpack encoding. **Preliminary benchmark results showcase an improvement of 108% in the achievable ops/sec for ZSCAN and 65% for HSCAN.**

---------
Co-authored-by: debing.sun <debing.sun@redis.com>
-
debing.sun authored
In https://github.com/redis/redis/pull/13519, when `eb` is empty, `isRax` is not correctly initialized to 0, which can lead to `ebStop()` potentially entering the wrong rax branch.
-
- 12 Sep, 2024 6 commits
-
-
Filipe Oliveira (Redis) authored
Optimize SSCAN command in case of listpack or intset encoding: avoid the usage of an intermediate list, going from 2N to N iterations (#13530).

On SSCAN, in case of listpack and intset encoding we actually reply with the entire set, and always reply with cursor 0. For those cases, we don't need to accumulate the replies in a list and can completely avoid the overhead of list appending and then iterating over the list again -- meaning we do N iterations instead of 2N iterations over the SET and save intermediate memory as well. Preliminary benchmarks of `SSCAN set:100 0` showcased an improvement of 60% on a SET with 100 string elements (listpack encoded).
-
Moti Cohen authored
Add a basic iterator API for ebuckets: start, next, nextBucket, and stop.
-
Moti Cohen authored
spell check error : ./src/t_hash.c:1141: RESOTRE ==> RESTORE
-
Moti Cohen authored
If the hash previously had HFEs (hash-fields with expiration) but later no longer does, the key ref in the hash might become outdated after a MOVE, COPY, RENAME or RESTORE operation. These commands maintain the key ref only if HFEs are present. That is, we can only be sure that key ref is valid as long as the hash has HFEs.
-
Oran Agra authored
When a client in no-touch mode issues a TOUCH command on a key, the key's access time should be updated, but in scripts and in a module's RM_Call it isn't. The command proc should be matched to the executing client, not the current client.

Co-authored-by: Udi Ron <udi@speedb.io>
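As an illustration of the expected behaviour after the fix (the key name is arbitrary):

```
CLIENT NO-TOUCH ON
TOUCH mykey                                          # explicit TOUCH still updates the key's access time
EVAL "return redis.call('TOUCH', KEYS[1])" 1 mykey   # previously missed the update when run via a script
```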
-
Steve authored
PR #13428 doesn't fully resolve an issue where corruption errors can still occur on loading of the cluster.nodes file. This was seen on upgrade where there were no shard_ids (from old Redis): 7.2.5 generated new random ones and persisted them to the file before gossip/handshake could propagate the correct ones (or while some other nodes were unreachable). This results in a primary/replica pair having differing shard_ids in cluster.nodes, and then the server cannot start up - it reports corruption.

This PR builds on #13428 by simply ignoring the replica's shard_id in cluster.nodes (if it exists) and using the replica's primary's shard_id instead. Additional handling was necessary to cover the case where the replica appears before the primary in cluster.nodes: there it will first use a generated shard_id for the primary, and then correct it after it loads the primary's cluster.nodes entry.

---------
Co-authored-by: debing.sun <debing.sun@redis.com>
-
- 11 Sep, 2024 1 commit
-
-
debing.sun authored
Found by @oranagra.

Currently, when the size of a dict becomes 1, we do not check whether `delta` is positive or negative. As a result, `non_empty_dicts` is still incremented when the size of a dict changes from 2 to 1. We should only increment `non_empty_dicts` when `delta` is positive, as this indicates the first time an element is inserted into the dict.

---------
Co-authored-by: oranagra <oran@redislabs.com>
-
- 10 Sep, 2024 1 commit
-
-
Filipe Oliveira (Redis) authored
This is a very easy optimization that avoids duplicate computation of the object length for LREM, LPOS, LINSERT and LINDEX. We can see that sdslen takes 7.7% of the total CPU cycles of the benchmarks.

| Function Stack | CPU Time: Total | CPU Time: Self | Module | Function (Full) | Source File | Start Address |
| -- | -- | -- | -- | -- | -- | -- |
| listTypeEqual | 15.50% | 2.346s | redis-server | listTypeEqual | t_list.c | 0x845dd |
| sdslen | 7.70% | 2.300s | redis-server | sdslen | sds.h | 0x845e4 |

Preliminary data showcases a 4% improvement in the achievable ops/sec of LPOS with string elements, and 2% with int elements.
-
- 09 Sep, 2024 1 commit
-
-
YaacovHazan authored
A new BUILD_WITH_MODULES flag was added to the Makefile to control building the module directory. The new module directory includes a general Makefile that iterates over each module, fetches a specific version, and builds it.

Co-authored-by: YaacovHazan <yaacov.hazan@redislabs.com>
-
- 08 Sep, 2024 1 commit
-
-
Ozan Tezcan authored
#13495 introduced a change to reply -LOADING while flushing an existing db on a replica. Some of our tests are sensitive to this change and do not expect a -LOADING reply. This fixes a couple of tests that fail from time to time.
-
- 06 Sep, 2024 1 commit
-
-
Filipe Oliveira (Redis) authored
## Proposed improvement

This PR introduces the static inline function `clientTypeIsSlave`, which does only 1 condition check vs the 3 checks of `getClientType`, and also uses `unlikely` to tell the compiler that the most common outcome is for the client not to be a slave.

Preliminary data show a 3% improvement in the achievable ops/sec on the specific LRANGE benchmark. After running the entire suite we see up to 5% improvement in 2 tests. https://github.com/redis/redis/pull/13516#issuecomment-2331326052

## Context

This optimization effort comes from analyzing the profile info from the [memtier_benchmark-1key-list-1K-elements-lrange-all-elements](https://github.com/redis/redis-benchmarks-specification/blob/main/redis_benchmarks_specification/test-suites/memtier_benchmark-1key-list-1K-elements-lrange-all-elements.yml) benchmark. Going over it, we can see that `getClientType` consumes 2% of the CPU time, strictly to check whether the client is a slave (https://github.com/redis/redis/blob/unstable/src/networking.c#L397 and https://github.com/redis/redis/blob/unstable/src/networking.c#L1254).

| Function | CPU Time: Total | CPU Time: Self | Module | Function (Full) |
| -- | -- | -- | -- | -- |
| _addReplyToBufferOrList->getClientType | 1.20% | 0.728s | redis-server | getClientType |
| clientHasPendingReplies->getClientType | 0.80% | 0.482s | redis-server | getClientType |

---------
Co-authored-by: debing.sun <debing.sun@redis.com>
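A minimal sketch of the shape of such a helper, using simplified stand-ins for the server's client struct and flag constants (the real definitions live in server.h and the real bit values differ):

```c
#define CLIENT_SLAVE   (1ULL<<0)  /* stand-in values, not the server's real bits */
#define CLIENT_MONITOR (1ULL<<1)
#define unlikely(x) __builtin_expect(!!(x), 0)

typedef struct client { unsigned long long flags; } client;

/* One flag test instead of the full three-way classification done by
 * getClientType(); the branch hint marks "is a slave" as the rare case. */
static inline int clientTypeIsSlave(client *c) {
    return unlikely((c->flags & CLIENT_SLAVE) && !(c->flags & CLIENT_MONITOR));
}
```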
-
- 05 Sep, 2024 2 commits
-
-
Max Malekzadeh authored
-
Moti Cohen authored
-
- 04 Sep, 2024 4 commits
-
-
debing.sun authored
This PR is based on the commits from PRs https://github.com/valkey-io/valkey/pull/258, https://github.com/valkey-io/valkey/pull/593, and https://github.com/valkey-io/valkey/pull/639.

This PR optimizes client query buffer handling in Redis by introducing a reusable query buffer that is used by default for client reads. This reduces memory usage by ~20KB per client by avoiding allocations for most clients using short (<16KB) complete commands. For larger or partial commands, the client still gets its own private buffer.

The primary changes are:
* Adding a reusable query buffer `thread_shared_qb` that clients use by default.
* Modifying client querybuf initialization and reset logic.
* Freeing idle client query buffers when empty, to allow reuse of the reusable query buffer.
* Master client query buffers are kept private, as their contents need to be preserved for the replication stream.
* When nested commands are executed, only the first user uses the reusable buffer; subsequent users still use private buffers.

In addition to the memory savings, this change shows a 3% improvement in latency and throughput when running with 1000 active clients. The memory reduction may also help reduce the need to evict clients when reaching the maxmemory limit, as the query buffer is the main memory consumer per client.

This PR differs from https://github.com/valkey-io/valkey/pull/258 in two ways:
1. While a client holds the reused buffer, regardless of whether the query buffer has changed (expanded), we do not update the reused query buffer mid-way; we either return the reused query buffer (expanded or with data remaining) or reset it at the end.
2. Adding a new thread variable `thread_shared_qb_used` to avoid multiple clients acquiring the reusable query buffer at the same time.

---------
Signed-off-by: Uri Yagelnik <uriy@amazon.com>
Signed-off-by: Madelyn Olson <matolson@amazon.com>
Co-authored-by: Uri Yagelnik <uriy@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: oranagra <oran@redislabs.com>
-
debing.sun authored
After https://github.com/redis/redis/pull/13499, if the length set by `addReplySetLen()` does not match the actual number of elements in the reply, it will break the protocol and cause the client to hang.
-
Ozan Tezcan authored
RM_RdbLoad() disables AOF temporarily while loading the RDB. Later, it does not re-enable it, as it checks the AOF state (disabled by then) rather than the AOF config parameter. Added a change to restart AOF according to the config parameter.
-
Filipe Oliveira (Redis) authored
- Avoid addReplyLongLong (which converts back to string) for a value we already have as a robj, by using addReplyProto + addReply.
- Avoid doing dbFind twice for the same dictEntry on the INCR*/DECR*/SETRANGE/APPEND commands.
- Avoid multiple sdslen calls with the same input on setrangeCommand and appendCommand.
- Introduce setKeyWithDictEntry, which is like setKey() but accepts an optional dictEntry input: avoids the second dictFind in the SET command.

---------
Co-authored-by: debing.sun <debing.sun@redis.com>
-
- 03 Sep, 2024 3 commits
-
-
debing.sun authored
In #13279 (found by @filipecosta90), for custom lookups we introduced a comparison callback for `lpFind()` to compare entries, but it also introduces some overhead.

To avoid the overhead of function pointer calls:
1. Extract the lpFindCb() method into an lpFindCbInternal() method that is easier to inline.
2. Use unlikely to annotate the comparison, as it can only succeed once per search.

---------
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
-
Filipe Oliveira (Redis) authored
# Summary
- Addresses https://github.com/redis/redis/issues/11565
- Measured improvements of 30% and 37% on the simple use cases (GEOSEARCH and GEOPOS; see https://github.com/redis/redis/pull/13494#issuecomment-2313668934), and of 66% on a dataset with >60M datapoints benchmarked with a pipeline of 10.
-
Meir Shpilraien (Spielrein) authored
All the defrag allocation APIs expect to get a value and replace it, leaving the old value untouched. In some cases a value might be shared between multiple keys; in such cases we cannot simply replace it when the defrag callback is called.

To support such use cases, the PR adds two new APIs to the defrag API:
1. `RM_DefragAllocRaw` - allocate memory based on a given size.
2. `RM_DefragFreeRaw` - free the given pointer.

Those APIs avoid using tcache, so they operate just like `RM_DefragAlloc`, but allow the user to split the allocation and the memory-free operations into two stages and control when those happen.

In addition, the PR adds a new API to allow the module to receive notifications when defrag starts and ends: `RM_RegisterDefragCallbacks`. Those callbacks are the same as `RM_RegisterDefragFunc`, but are promised to be called at the start and the end of the defrag process.
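A rough sketch of how a module holding a blob shared by several keys might use the two-stage raw API; the exact signatures here are assumptions based on the description above, not copied from the module header:

```c
#include <string.h>
#include "redismodule.h"

typedef struct SharedBlob {
    size_t len;
    char data[];
} SharedBlob;

/* Called from the module's defrag callback for a value referenced by many keys:
 * stage 1 allocates the replacement, the module repoints every referrer, and
 * stage 2 frees the old block only once nothing uses it anymore. */
SharedBlob *defragSharedBlob(RedisModuleDefragCtx *ctx, SharedBlob *old) {
    size_t bytes = sizeof(SharedBlob) + old->len;
    SharedBlob *fresh = RedisModule_DefragAllocRaw(ctx, bytes); /* stage 1: allocate */
    memcpy(fresh, old, bytes);
    /* ... repoint all keys that reference `old` to `fresh` here ... */
    RedisModule_DefragFreeRaw(ctx, old);                        /* stage 2: free old block */
    return fresh;
}
```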
-