- 14 Nov, 2023 1 commit
-
-
Binbin authored
When using the DB iterator, it uses dictInitSafeIterator to init an old-style safe dict iterator. When dbIteratorNext is used, it jumps to the next slot's dict when we are done with a dict. During this process, we never call dictResumeRehashing, which causes the dict's pauserehash to always be > 0. As a result, dictRehashMilliseconds returns immediately, leaving the slot dict in a state where rehashing can never complete. In the "expire scan should skip dictionaries with lot's of empty buckets" test, adding a `keys *` can reproduce the problem reliably: `keys *` calls dbIteratorNext, triggering a traversal of all slot dicts. Added dbReleaseIterator and dbIteratorInitNextSafeIterator methods to call dictResetIterator. Issue was introduced in #11695.
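A minimal sketch of the fix's shape; dictInitSafeIterator/dictResetIterator are real dict.c APIs, while the wrapper and helper names here are hypothetical:

```c
/* Resetting the finished iterator is what calls dictResumeRehashing(),
 * letting pauserehash drop back to 0 before we move to the next slot. */
static void dbIteratorNextDictSketch(dbIterator *dbit) {
    dictResetIterator(&dbit->di);              /* resumes rehashing */
    dbit->slot = dbGetNextNonEmptySlot(dbit);  /* hypothetical helper */
    if (dbit->slot != -1)
        dictInitSafeIterator(&dbit->di, dbit->db->dict[dbit->slot]);
}
```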
-
- 12 Nov, 2023 1 commit
-
-
Roshan Khatri authored
This PR introduces a new macro, serverAssertWithInfoDebug, to do complex assertions only for debugging. The main intention is to allow running complex operations during tests without impacting runtime performance. This assertion is enabled when setting DEBUG_ASSERTIONS. The DEBUG_ASSERTIONS flag is set for the daily and CI variants of `test-sanitizer-address`.
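A minimal sketch of what such a macro can look like, assuming DEBUG_ASSERTIONS is a compile-time define (the exact macro body in the PR may differ):

```c
#ifdef DEBUG_ASSERTIONS
/* Debug builds: behaves exactly like serverAssertWithInfo. */
#define serverAssertWithInfoDebug(_c, _o, _e) serverAssertWithInfo(_c, _o, _e)
#else
/* Release builds: the (possibly expensive) expression is never evaluated. */
#define serverAssertWithInfoDebug(_c, _o, _e) ((void)0)
#endif
```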
-
- 10 Nov, 2023 1 commit
-
-
zhaozhao.zz authored
Introduced in #12697: we should reset bucket_count when emptying the db, or the db's overhead memory usage can be miscalculated.
-
- 01 Nov, 2023 1 commit
-
-
Viktor Söderqvist authored
Optimize the performance of SCAN commands when the match pattern can only match keys from a single slot in cluster mode. This can happen when the pattern contains a hash tag before any wildcard matchers, or when the pattern contains no matchers at all (i.e. it is a literal key).
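A hypothetical helper sketching the slot derivation (function name and edge-case handling are illustrative; crc16 is Redis's real cluster hash function). If no glob special appears before a complete non-empty "{tag}", or the pattern has no specials at all, every matching key shares one slot, and SCAN can visit only that slot's dictionary:

```c
#include <stdint.h>

uint16_t crc16(const char *buf, int len); /* Redis cluster hash (crc16.c) */

/* Returns 1 and sets *slot if every key matching `pattern` must live
 * in a single slot; returns 0 otherwise (conservatively). */
static int patternFixedSlot(const char *pattern, int len, int *slot) {
    int tagstart = -1;
    for (int i = 0; i < len; i++) {
        char c = pattern[i];
        /* Any glob special before the tag closes makes the slot vary. */
        if (c == '*' || c == '?' || c == '[' || c == '\\') return 0;
        if (c == '{' && tagstart < 0) { tagstart = i + 1; continue; }
        if (c == '}' && tagstart >= 0) {
            if (i == tagstart) return 0; /* "{}" hashes the whole key */
            *slot = crc16(pattern + tagstart, i - tagstart) & 16383;
            return 1;
        }
    }
    if (tagstart >= 0) return 0;             /* unterminated "{" */
    *slot = crc16(pattern, len) & 16383;     /* literal key, no matchers */
    return 1;
}
```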
-
- 28 Oct, 2023 1 commit
-
-
Roshan Khatri authored
This PR optimizes the time complexity of findSlotByKeyIndex from O(log^2(N)) to O(log(N)) by using the tree structure of the binary index tree to find a slot in a single descent of the index.
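For illustration, a sketch of the standard one-pass descent over a Fenwick (binary index) tree; the PR's actual function may differ in indexing details:

```c
/* 1-indexed Fenwick tree holding per-slot key counts; size must be a
 * power of two (e.g. 16384 slots). Returns the 1-based slot whose
 * cumulative key count first reaches n. */
static int findSlotByKeyIndexSketch(const unsigned long long *tree,
                                    int size, unsigned long long n) {
    int pos = 0;
    for (int mask = size; mask != 0; mask >>= 1) {
        int next = pos + mask;
        /* Descend right whenever the left subtree holds fewer than
         * the remaining n keys. */
        if (next <= size && tree[next] < n) {
            n -= tree[next];
            pos = next;
        }
    }
    return pos + 1;
}
```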
-
- 27 Oct, 2023 1 commit
-
-
Harkrishn Patro authored
As part of #11695, independent dictionaries were introduced per slot. With the straightforward implementation of iterating over all dictionaries and adding up the dictBuckets of each, the time complexity to discover the total number of buckets across all dictionaries grew to O(N). To optimize this, we maintain a global counter at the db level that keeps track of the bucket count, updated at the start and end of rehashing (see the sketch below). --------- Co-authored-by:
Roshan Khatri <rvkhatri@amazon.com>
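A minimal sketch of the counter's maintenance, with hypothetical names throughout: while a dict rehashes, its old and new tables coexist, so the new table's size is added up front and the old one removed when rehashing completes:

```c
typedef struct dbDictStats {
    unsigned long long bucket_count; /* total buckets across slot dicts */
} dbDictStats;

/* Hooks assumed to be wired into the dictType rehashing callbacks. */
static void dbDictRehashingStarted(dbDictStats *st, size_t new_table_size) {
    st->bucket_count += new_table_size;  /* old + new tables coexist */
}

static void dbDictRehashingCompleted(dbDictStats *st, size_t old_table_size) {
    st->bucket_count -= old_table_size;  /* old table is freed */
}
```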
-
- 24 Oct, 2023 1 commit
-
-
Binbin authored
Fix some outdated comments and add a comment for moduleNotifyKeyspaceEvent, which we added in #11084, since its behavior is a bit implicit. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 15 Oct, 2023 1 commit
-
-
Vitaly authored
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.

## Important changes

* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server (sketched after this message). This has an interesting side effect: you'll now be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). The same is done to keep the expires count O(1) as well.

## Performance

This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.

## Interface changes

* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* The Scan API will now require 64 bits to store the cursor, even on 32-bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.

--------- Co-authored-by:
Vitaly Arbuzov <arvit@amazon.com> Co-authored-by:
Harkrishn Patro <harkrisp@amazon.com> Co-authored-by:
Roshan Khatri <rvkhatri@amazon.com> Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
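A sketch of the cursor layout described above, assuming 14 bits are reserved for the 16384 slot ids (constant and function names are hypothetical):

```c
#define SCAN_SLOT_BITS 14                 /* 2^14 = 16384 cluster slots */
#define SCAN_SLOT_MASK ((1ULL << SCAN_SLOT_BITS) - 1)

/* Pack the dict-internal position and the slot id into one cursor. */
static inline unsigned long long buildScanCursor(unsigned long long pos,
                                                 int slot) {
    return (pos << SCAN_SLOT_BITS) |
           ((unsigned long long)slot & SCAN_SLOT_MASK);
}

/* Unpack the slot: a bare slot id (pos == 0) passed as the cursor
 * therefore starts a scan at that slot, as described above. */
static inline int scanCursorSlot(unsigned long long cursor) {
    return (int)(cursor & SCAN_SLOT_MASK);
}
```

This also explains the interface change noted above: even 32-bit builds now need a 64-bit cursor to hold both parts.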
-
- 12 Oct, 2023 1 commit
-
-
Ye Lin Aung authored
The function was renamed, but the comments were outdated.
-
- 08 Sep, 2023 1 commit
-
-
Binbin authored
and adjustments.
-
- 30 Aug, 2023 1 commit
-
-
bodong.ybd authored
Before:
```
127.0.0.1:6379> command getkeys sort_ro key
(empty array)
127.0.0.1:6379>
```
After:
```
127.0.0.1:6379> command getkeys sort_ro key
1) "key"
127.0.0.1:6379>
```
-
- 06 Jul, 2023 1 commit
-
-
zhaozhao.zz authored
This is an addition to #12380, to prevent potential bugs when collecting keys from multiple commands in the future. Note that this function also resets numkeys in some cases.
-
- 03 Jul, 2023 1 commit
-
-
Lior Lahav authored
When getKeysUsingKeySpecs processes a command with more than one key-spec, and is called with a total of more than 256 keys, it'll call getKeysPrepareResult again; but since numkeys isn't updated, getKeysPrepareResult will not bother to copy key names from the old result (leaving those slots uninitialized). Furthermore, it did not consider the keys it had already found when allocating more space. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 27 Jun, 2023 1 commit
-
-
judeng authored
Optimized the performance of the SCAN command in a few ways:
1. Move the key filtering (by MATCH pattern) into the scan callback (sketched after this message), so as to avoid collecting keys for later filtering.
2. Reduce many memory allocations and copies (use a reference to the original sds instead of creating an robj, saving an excessive 2 mallocs and one string duplication).
3. Compare the TYPE filter directly (as integers), instead of an inefficient string compare per key.
4. Fixed a small bug: when scanning zset and hash types, maxiterations now uses a more accurate number, avoiding a wrongly doubled maxiterations.

Changes **postponed** for a later version (8.0):
1. Prepare to move the TYPE filtering into the scan callback as well. This was put on hold since it has side effects that can be considered a breaking change: we would no longer attempt to lazy-expire (delete) a key that was filtered out by not matching the TYPE (changing it would mean the TYPE filter starts behaving the same as the MATCH filter already does in that respect).
2. When the specified TYPE filter is an unknown type, the server will reply with an error immediately instead of doing a full scan that comes back empty-handed.

Benchmark result: for different scenarios, we obtained about 30% or more performance improvement. Co-authored-by:
Oran Agra <oran@redislabs.com>
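A sketch of point 1 with an assumed privdata layout (the struct and callback names are illustrative; stringmatchlen is Redis's real glob matcher from util.c). Rejected keys never become robjs and are never collected:

```c
typedef struct scanData {
    sds pattern;   /* MATCH pattern, or NULL for no filtering */
    list *keys;    /* collected keys (references to the original sds) */
} scanData;

static void scanCallbackSketch(void *privdata, const dictEntry *de) {
    scanData *data = privdata;
    sds key = dictGetKey(de);
    if (data->pattern &&
        !stringmatchlen(data->pattern, sdslen(data->pattern),
                        key, sdslen(key), 0))
        return;                          /* filtered: no alloc, no copy */
    listAddNodeTail(data->keys, key);    /* a reference, not a duplicate */
}
```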
-
- 14 Jun, 2023 1 commit
-
-
judeng authored
This change only affects keys with an expiry time. For SETEX, the average improvement is 5%, and for GET on a key with an expiration we gain an improvement of 13%. When keys have an expiration time, Redis has an assertion that looks up the main dict every time it touches the expires dict. This comes with a performance cost, and during rehash the damage is doubled. It looks like that assert was added some ten years ago, maybe out of paranoia, and there's probably no reason to keep it at that cost.
-
- 28 May, 2023 1 commit
-
-
Oran Agra authored
This is a redo of #11594, which got reverted in #11940. It improves performance by avoiding a double lookup of the key.
-
- 24 May, 2023 2 commits
-
-
mojh7 authored
functoin -> function
-
judeng authored
postpone the initialization of object's lru&lfu until it is added to the db as a value object (#11626)

This PR brings two performance benefits:
1. Stop redundant initialization when most robj objects are created.
2. LRU_CLOCK will no longer be called in io threads, so we can avoid the `atomicGet`.

Another code optimization: removed the redundant check in dbSetValue. Whether in LFU or LRU mode, the lru field in the old robj is always the freshest (it is always updated in lookupkey), so we don't need that check.
-
- 09 May, 2023 1 commit
-
-
Leibale Eidelman authored
In GEO commands, `STORE` and `STOREDIST` are mutually exclusive. Use a `oneof` block to contain them.
-
- 03 May, 2023 1 commit
-
-
Madelyn Olson authored
Technically, declaring a prototype with an empty declaration has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently be introducing a breaking change if you are using this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although not apparently a problem for any of the compilers we build on, it feels like the right thing is to properly adhere to the C standard and use (void).
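For example (function name is illustrative):

```c
/* Deprecated: an empty parameter list means "unspecified arguments"
 * in C, not "no arguments"; C2x / Clang 15 -pedantic warns on it. */
int getRandomNumber();

/* Standard-conforming: explicitly takes no arguments. */
int getRandomNumber(void);
```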
-
- 16 Apr, 2023 1 commit
-
-
judeng authored
Improve performance by avoiding redundant memory malloc/free.
-
- 23 Feb, 2023 1 commit
-
-
Chen Tianjie authored
When no-touch mode is enabled, the client will not touch the LRU/LFU of the keys it accesses, except when executing the `TOUCH` command (see the sketch after this message). This allows inspecting or modifying the key-space without affecting eviction. Changes:
- A command `CLIENT NO-TOUCH ON|OFF` to switch this mode on and off.
- A client flag `#define CLIENT_NOTOUCH (1ULL<<45)`, which can be shown with `CLIENT INFO`, by the letter "T" in the "flags" field.
- Clear the `NO-TOUCH` flag in `clearClientConnectionState`, which is used by the `RESET` command and for resetting temp clients used by modules.
- Also clear the `NO-EVICT` flag in `clearClientConnectionState`; this might have been an oversight, spotted by @madolson.
- A test using the `DEBUG OBJECT` command to verify that LRU stats are not touched when no-touch mode is on.
Co-authored-by:
chentianjie <chentianjie@alibaba-inc.com> Co-authored-by:
Madelyn Olson <34459052+madolson@users.noreply.github.com> Co-authored-by:
sundb <sundbcn@gmail.com>
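A sketch of how the guard can look at key-lookup time (helper name is hypothetical; the flag and the TOUCH exemption are from the description above):

```c
/* Return 0 when LRU/LFU bookkeeping should be skipped for this access. */
static int shouldUpdateKeyAccessStats(client *c) {
    if (c && (c->flags & CLIENT_NOTOUCH) && c->cmd->proc != touchCommand)
        return 0;  /* no-touch mode: leave LRU clock / LFU counter alone */
    return 1;
}
```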
-
- 14 Feb, 2023 1 commit
-
-
guybe7 authored
Starting from Redis 7.0 (#9890) we started wrapping everything a command propagates with MULTI/EXEC. The problem is that both SCAN and RANDOMKEY can lazy-expire arbitrary keys (similar behavior to active-expire), putting DELs in a transaction. Fix: when these commands are called without a parent execution unit (e.g. not in EVAL or MULTI), we avoid wrapping their DELs in a transaction (for the same reasons active-expire and eviction avoid a transaction). This PR adds a per-command flag that indicates that the command may touch arbitrary keys (not the ones in its arguments), and uses that flag to avoid the MULTI-EXEC. For now, this flag is internal, since we're considering other solutions for the future. Note for cluster mode: if SCAN/RANDOMKEY is inside EVAL/MULTI it can still cause the same situation (as it always did), but it won't cause a CROSSSLOT error because replicas and AOF do not perform slot checks. The problem with the above is mainly for 3rd party ecosystem tools that propagate commands from master to master, or feed an AOF file with redis-cli into a master. This PR aims to fix the regression in Redis 7.0, and we opened #11792 to try to handle the bigger problem with lazy expire better in another release.
-
- 12 Feb, 2023 1 commit
-
-
Tian authored
# Background

The RDB file is usually generated and used once, and seldom used again, but its content resides in the page cache until the OS evicts it. A potential problem is that once free memory is exhausted, the OS has to reclaim memory from the page cache or swap anonymous pages out, which may cause jitter in the Redis service. Consider a concrete scenario: a high-capacity machine hosts many redis instances, and we're upgrading them together. The page cache on the host machine grows as RDBs are generated. Once free memory drops below the low watermark (which is more likely to happen on older Linux kernels like 3.10; before [watermark_scale_factor](https://lore.kernel.org/lkml/1455813719-2395-1-git-send-email-hannes@cmpxchg.org/) was introduced, the `low watermark` was linear in the `min watermark`, and there was not much buffer space for `kswapd` to wake up and reclaim memory), a `direct reclaim` happens, which means the process stalls waiting for memory allocation.

# What the PR does

The PR introduces a capability to reclaim the cache when the RDB is operated on. Generally there are two cases: reading and writing the RDB. For reads it's a little messy to do incremental reclaim, so the reclaim is done in one go, in the background, after the load is finished, to avoid blocking the work thread. For writes, incremental reclaim amortizes the work, so there's no need to put it in the background, and the peak watermark of the cache can be reduced this way. Two cases are addressed specially, replication and restart, for both of which the cache is leveraged to speed up processing, so the reclaim is postponed to the right time. To do this, a flag is added to `rdbSave` and `rdbLoad` to control whether the cache needs to be kept, with the default value false.

# Something worth noting

1. Though `posix_fadvise` is the POSIX standard, only a few platforms support it, e.g. Linux and FreeBSD 10.0.
2. On Linux, `posix_fadvise` only takes effect on written-back pages, so a `sync` (or `fsync`, `fdatasync`) is needed to flush dirty pages before `posix_fadvise` if we reclaim the write cache.

# About tests

A unit test is added to verify the effect of `posix_fadvise`. In the integration test the overall cache increase is checked, as well as the cache backed by the RDB, as a specific TCL test executed in an isolated GitHub Actions job.
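A minimal sketch of the reclaim primitive, assuming Linux semantics (helper name is hypothetical): dirty pages are flushed first because POSIX_FADV_DONTNEED only drops clean, written-back pages:

```c
#include <fcntl.h>
#include <unistd.h>

/* Advise the kernel to drop [offset, offset+len) of fd's page cache. */
static int reclaimFileCache(int fd, off_t offset, off_t len) {
    if (fdatasync(fd) == -1) return -1;  /* write back dirty pages first */
    return posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
}
```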
-
- 30 Jan, 2023 1 commit
-
-
Qu Chen authored
Redis 7.0 introduced new logic in expireIfNeeded() where a read-only replica would never consider a key as expired when replicating commands from the master. See acf3495e. This was done by comparing server.current_client with server.master. However, we should instead check for the CLIENT_MASTER flag for this logic to be more robust and consistent with the rest of the Redis code base.
-
- 16 Jan, 2023 1 commit
-
-
Oran Agra authored
Related to the hang reported in #11671. Currently, redis can disconnect a client due to reaching the output buffer limit; it'll also avoid feeding that output buffer with more data, but it will keep running the loop in the command (despite the client already being marked for disconnection). This PR is an attempt to mitigate the problem for commands that are easy to abuse: KEYS, HRANDFIELD, SRANDMEMBER, ZRANDMEMBER. The RAND family of commands can take a negative COUNT argument (which is not bound to the number of elements in the key), so it's enough to create a key with one field, and then these commands can be used to hang redis. For KEYS the caller can use the existing keyspace in redis (if big enough).
-
- 11 Jan, 2023 2 commits
-
-
Viktor Söderqvist authored
This change deletes the dictGetNext and dictGetNextRef functions, so the dict API doesn't expose the next field at all. The bucket function in dictScan is deleted. A separate dictScanDefrag function is added which takes a defrag alloc function to defrag-reallocate the dict entries. "Dirty" code accessing the dict internals in active defrag is removed. An 'afterReplaceEntry' is added to dictType, which allows the dict user to keep the dictEntry metadata up to date after reallocation/defrag/move. Additionally, for updating the cluster slot-to-key mapping, after a dictEntry has been reallocated, we need to know which db a dict belongs to, so we store a pointer to the db in a new metadata section in the dict struct, which is a new mechanism similar to dictEntry metadata. This adds some complexity but provides better isolation.
-
Viktor Söderqvist authored
Use functions for all accesses to dictEntry (except in dict.c). Dict abuses, e.g. in defrag.c, have been replaced by support functions provided by dict.
-
- 01 Jan, 2023 1 commit
-
-
ranshid authored
*TL;DR*
---------------------------------------
Following the discussion over issue [#7551](https://github.com/redis/redis/issues/7551) we decided to refactor the client blocking code, to eliminate some code duplication and to rebuild the infrastructure better for future key-blocking cases.

*In this PR*
---------------------------------------
1. Reprocess the command once a client becomes unblocked on a key (instead of running custom code for the unblocked path that's different from the one that would have run if blocking wasn't needed).
2. Eliminate some (now) irrelevant code for handling the unblocking of lists/zsets/streams etc...
3. Modify some tests to intercept the error in cases of an error on reprocess after unblock (see details in the notes section below).
4. Replace '$' on the client argv with the current stream id, since once we reprocess the stream XREAD we need to read from the last msg and not wait for a new msg, in order to prevent an endless block loop.
5. Added statistics to the info "Clients" section to report:
   * `total_blocking_keys` - number of blocking keys
   * `total_blocking_keys_on_nokey` - number of blocking keys which have at least 1 client that would like to be unblocked when the key is deleted.
6. Avoid expiring the unblocked key during unblock. Previously we used to look up the unblocked key, which might have been expired during the lookup. Now we look up the key using NOTOUCH and NOEXPIRE to avoid deleting it at this point, so propagating commands in blocked.c is no longer needed.
7. Deprecated command flags: we decided to remove CMD_CALL_STATS and CMD_CALL_SLOWLOG and make an explicit verification in the call() function in order to decide if a stats update should take place. This should simplify the logic and also mitigate existing issues: for example, module calls which are triggered as part of AOF loading might still report stats even though they are called during AOF loading.

*Behavior changes*
---------------------------------------------------
1. As this implementation avoids dedicated code for handling unblocked streams/lists/zsets, and we now re-process the command once the client is unblocked, some errors will be reported differently. The old implementation used to issue ``UNBLOCKED the stream key no longer exists`` in the following cases:
   - The stream key has been deleted (i.e. calling DEL)
   - The stream and group existed but the key type was changed by overriding it (i.e. with the set command)
   - The key no longer exists after we swapdb with a db which does not contain this key
   - After swapdb when the new db has this key but with a different type
   In the new implementation the reported errors will be the same as if the command was processed after the effect: **NOGROUP** in case the key no longer exists, or **WRONGTYPE** in case the key was overridden with a different type.
2. Reprocessing the command means that some checks will be reevaluated once the client is unblocked. For example, ACL rules might have changed since the command was originally executed, and will fail once the client is unblocked. Another example is the OOM condition check, which might let the command run and block but fail the command reprocess once the client is unblocked.
3. One of the changes in this PR is that no command stats are updated while the command is blocked (all stats will be updated once the client is unblocked). This implies that when many clients are blocked, users will no longer be able to get that information from the command stats. However, the information can still be gathered from the client list.

**Client blocking**
---------------------------------------------------
Blocking on a key will still be triggered the same way as it is done today. In order to block the current client on a list of keys, the call to blockForKeys will still need to be made, and it will perform the same as it does today:
* add the client to the list of blocked clients on each key
* keep the key with a matching list node (position in the global blocking clients list for that key) in the client's private blocking key dict
* flag the client with CLIENT_BLOCKED
* update blocking statistics
* register the client on the timeout table

**Key Unblock**
---------------------------------------------------
Unblocking a specific key will be triggered (same as today) by calling signalKeyAsReady. The implementation in that part will stay the same as today - adding the key to the global readyList. The reason to maintain the readyList (as opposed to iterating over all clients blocked on the specific key) is to keep the signal operation as short as possible, since it is called during command processing. The main change is that instead of going through a dedicated code path that operates the blocked command, we will just call processPendingCommandsAndResetClient.

**ClientUnblock (keys)**
---------------------------------------------------
Unblocking clients on keys will be triggered after the command is processed and during beforeSleep. The general schema is, for each key *k* in the readyList:
```
For each client *c* which is blocked on *k*:
    in case either:
    1. *k* exists AND the *k* type matches the current client blocking type OR
    2. *k* exists and *c* is blocked on a module command OR
    3. *k* does not exist and *c* was blocked with the flag unblock_on_deleted_key
    do:
    1. remove the client from the list of clients blocked on this key
    2. remove the blocking list node from the client blocking key dict
    3. remove the client from the timeout list
    4. queue the client on the unblocked_clients list
    5. *NEW*: call processCommandAndResetClient(c);
```
*NOTE:* for module blocked clients we will still call moduleUnblockClientByHandle, which will queue the client for processing in the moduleUnblockedClients list.

**Process Unblocked clients**
---------------------------------------------------
The processing of all unblocked clients is done in beforeSleep and no change is planned in that part. The general schema is, for each client *c* in server.unblocked_clients:
* remove the client from server.unblocked_clients
* set back the client read handler
* continue processing the pending command and input buffer.

*Some notes regarding the new implementation*
---------------------------------------------------
1. Although it was proposed, it is currently difficult to remove the read handler from the client while it is blocked. The reason is that a blocked client should be unblocked when it is disconnected, or we might consume data into the void.
2. While this PR mostly keeps the current blocking logic as-is, there are some future additions to the infrastructure that we would like to have:
   - allow non-preemptive blocking of a client - sometimes we can imagine a new kind of blocking that is expected not to be preempted. For example, imagine we hold some keys on disk; when a command needs to process them it will block until the keys are uploaded. In this case we will want the client not to be disconnected or unblocked until the process is completed (remove the client read handler, prevent client timeout, disable unblock via debug command etc...).
   - allow generic blocking based on command-declared keys - we might want to add a hook before command processing to check if any of the declared keys require the command to block. This way it would be easier to add new kinds of key-based blocking mechanisms.
Co-authored-by:
Oran Agra <oran@redislabs.com> Signed-off-by:
Ran Shidlansik <ranshid@amazon.com>
-
- 20 Dec, 2022 1 commit
-
-
guybe7 authored
1. Get rid of server.core_propagates - we can just rely on module/call nesting levels.
2. Rename in_nested_call to execution_nesting and update the comment.
3. Remove module_ctx_nesting (redundant, we can use execution_nesting).
4. Modify postExecutionUnitOperations according to the comment (the main purpose of this PR).
5. trackingHandlePendingKeyInvalidations: check the nesting level inside this function.
-
- 09 Dec, 2022 1 commit
-
-
Binbin authored
In #11290, we added listpack encoding for the SET object, but forgot to support it in zuiFind, which causes ZINTER, ZINTERSTORE, ZINTERCARD, ZDIFF, ZDIFFSTORE to crash, and forgot to support it in RM_ScanKey, which causes it to hang. This PR adds SET listpack support in zuiFind and in RM_ScanKey, and adds tests for the related commands to cover this case. Other changes:
- There is no reason for zuiFind to go into the internals of the SET. It can simply use setTypeIsMember and not care about encoding.
- Remove the `#include "intset.h"` from server.h to reduce the chance of accidental intset API use.
- Move the setTypeAddAux, setTypeRemoveAux and setTypeIsMemberAux interfaces to the header.
- In scanGenericCommand, use setTypeInitIterator and setTypeNext to handle OBJ_SET scan.
- In RM_ScanKey, improve hash scan mode: use lpGetValue like zset, so they can share code and get better performance.
The zuiFind part fixes #11578. Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech>
-
- 30 Nov, 2022 1 commit
-
-
Huang Zhw authored
Add a new module event `RedisModule_Event_Key`; this event is fired when a key is removed from the keyspace. The event includes an open key that can be used for reading the key before it is removed. Modules can also extract the key name, and use RM_Open or RM_Call to access the key from within that event, but shouldn't modify anything from within this event. The following sub events are available:
- `REDISMODULE_SUBEVENT_KEY_DELETED`
- `REDISMODULE_SUBEVENT_KEY_EXPIRED`
- `REDISMODULE_SUBEVENT_KEY_EVICTED`
- `REDISMODULE_SUBEVENT_KEY_OVERWRITE`
The data pointer can be cast to a RedisModuleKeyInfo structure with the following fields (a subscription sketch follows this message):
```
RedisModuleKey *key; // Opened Key
```
### Internals
* We add two dict functions: `dictTwoPhaseUnlinkFind` finds an element in the table and also gets the plink of the entry. The entry is returned if the element is found. The user should later call `dictTwoPhaseUnlinkFree` with it in order to unlink and release it. Otherwise, if the key is not found, NULL is returned. These two functions should be used in pairs: `dictTwoPhaseUnlinkFind` pauses rehash and `dictTwoPhaseUnlinkFree` resumes it.
* We change `dbOverwrite` to `dbReplaceValue`, which just replaces the value of the key and doesn't fire any events. The "overwrite" part (which emits events) happens only when called from `setKey`; the other places that called dbOverwrite were ones that just update the value in-place (INCR*, SPOP, and dbUnshareStringValue). This should not have any real impact since `moduleNotifyKeyUnlink` and `signalDeletedKeyAsReady` wouldn't have mattered in these cases anyway (i.e. module keys and stream keys didn't have direct calls to dbOverwrite).
* Since we allow doing RM_OpenKey from within these callbacks, we temporarily disable lazy expiry.
* We also temporarily disable lazy expiry when we are in the unlink/unlink2 callback and the keyspace notification callback.
* Move special definitions to the top of redismodule.h. This is needed to resolve compilation errors with RedisModuleKeyInfoV1, which carries a RedisModuleKey member.
Co-authored-by:
Oran Agra <oran@redislabs.com>
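A sketch of subscribing to the event from a module (the callback body is illustrative; the event, sub-events and info struct are the ones listed above):

```c
static void keyEventCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                             uint64_t subevent, void *data) {
    REDISMODULE_NOT_USED(e);
    RedisModuleKeyInfoV1 *ki = data;
    if (subevent == REDISMODULE_SUBEVENT_KEY_EXPIRED) {
        /* ki->key is an open key: read-only inspection before removal. */
        size_t len;
        RedisModule_StringPtrLen(
            RedisModule_GetKeyNameFromModuleKey(ki->key), &len);
    }
}

/* In RedisModule_OnLoad: */
RedisModule_SubscribeToServerEvent(ctx, RedisModule_Event_Key,
                                   keyEventCallback);
```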
-
- 09 Nov, 2022 1 commit
-
-
Viktor Söderqvist authored
Small sets with not only integer elements are listpack encoded, by default up to 128 elements, max 64 bytes per element, with new configs `set-max-listpack-entries` and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable. Sets with only integers, even very small sets, are still intset encoded (up to the 1G limit, etc.). Larger sets are hashtable encoded. This PR increments the RDB version, and has an effect on OBJECT ENCODING. Possible conversions when elements are added:
- intset -> listpack
- listpack -> hashtable
- intset -> hashtable
Note: no conversion happens when elements are deleted. If all elements are deleted and then added again, the set is deleted and recreated, thus implicitly converted to a smaller encoding.
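For instance, a session like the following would show the new encoding (assuming the default thresholds named above):

```
127.0.0.1:6379> sadd smallset apple banana cherry
(integer) 3
127.0.0.1:6379> object encoding smallset
"listpack"
127.0.0.1:6379> sadd numset 1 2 3
(integer) 3
127.0.0.1:6379> object encoding numset
"intset"
```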
-
- 27 Oct, 2022 1 commit
-
-
Moti Cohen authored
Renamed from "Pause Clients" to "Pause Actions" since the mechanism can pause several actions in redis, not just clients (e.g. eviction, expiration). Previously each pause purpose (which has a timeout that's tracked separately from others purposes), also implicitly dictated what it pauses (reads, writes, eviction, etc). Now it is explicit, and the actions that are paused (bit flags) are defined separately from the purpose. - Previously, when using feature pause-client it also implicitly means to make the server static: - Pause replica traffic - Pauses eviction processing - Pauses expire processing Making the server static is used also for failover and shutdown. This PR internally rebrand pause-client API to become pause-action API. It also Simplifies pauseClients structure by replacing pointers array with static array. The context of this PR is to add another trigger to pause-client which will activated in case of OOM as throttling mechanism ([see here](https://github.com/redis/redis/issues/10907)). In this case we want only to pause client, and eviction actions.
-
- 18 Oct, 2022 2 commits
-
-
guybe7 authored
The use case is a module that wants to implement a blocking command on a key that necessarily exists, and wants to unblock the client in case the key is deleted (much like what we implemented for XREADGROUP in #10306). New module API:
* RedisModule_BlockClientOnKeysWithFlags
Flags:
* REDISMODULE_BLOCK_UNBLOCK_NONE
* REDISMODULE_BLOCK_UNBLOCK_DELETED
A usage sketch follows this message.

### Detailed description of code changes

blocked.c:
1. Both module and stream functions are called whether the key exists or not, regardless of its type. We do that in order to allow modules/streams to unblock the client in case the key is no longer present or has changed type (the behavior for streams didn't change; the code just moved into serveClientsBlockedOnStreamKey).
2. Make sure afterCommand is called in serveClientsBlockedOnKeyByModule, in order to propagate actions from moduleTryServeClientBlockedOnKey.
3. handleClientsBlockedOnKeys: call propagatePendingCommands directly after lookupKeyReadWithFlags, to prevent a possible lazy-expire DEL from being mixed with any command propagated by the preceding functions.
4. blockForKeys: the caller can specify that it wants to be awakened if the key is deleted. Minor optimizations (use dictAddRaw).
5. signalKeyAsReady became signalKeyAsReadyLogic, which can take a boolean in case the key is deleted. It will only signal if there's at least one client that awaits key deletion (to save calls to handleClientsBlockedOnKeys). Minor optimizations (use dictAddRaw).

db.c:
1. scanDatabaseForDeletedStreams is now scanDatabaseForDeletedKeys and will signalKeyAsReady for any key that was removed from the database or changed type. It is the responsibility of the code in blocked.c to ignore or act on deleted/type-changed keys.
2. Use the new signalDeletedKeyAsReady where needed.

blockedonkey.c + tcl:
1. Added a test of the new capabilities (FSL.BPOPGT now requires the key to exist in order to work).
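A hypothetical command implementation using the new API (callback names and timeout are illustrative):

```c
int BlockOnKey_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv,
                            int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);
    /* Block on argv[1]; wake up on signal, timeout, or key deletion. */
    RedisModule_BlockClientOnKeysWithFlags(
        ctx, reply_callback, timeout_callback, NULL /* free_privdata */,
        1000 /* timeout_ms */, &argv[1], 1, NULL /* privdata */,
        REDISMODULE_BLOCK_UNBLOCK_DELETED);
    return REDISMODULE_OK;
}
```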
-
Shuning authored
Seems excessive to call getExpire if we don't need it. This may give some speedup during AOF file loading (saving a dictFind call). Co-authored-by:
lvshuning <lvshuning@meituan.com>
-
- 13 Oct, 2022 1 commit
-
-
C Charles authored
MIGRATE with AUTH that contains "keys" is getting wrong key names in migrateGetKeys, leading to ACL errors (#11253). When using MIGRATE with a destination Redis that has the user name or password set to the string "keys", Redis would determine the wrong set of key names the command is going to access, leading to ACL returning a wrong authentication result.

Destination instance:
```
127.0.0.1:6380> acl setuser default >keys
OK
127.0.0.1:6380> acl setuser keys on nopass ~* &* +@all
OK
```
Source instance:
```
127.0.0.1:6379> set a 123
OK
127.0.0.1:6379> acl setuser cc on nopass ~a* +@all
OK
127.0.0.1:6379> auth cc 1
OK
127.0.0.1:6379> migrate 127.0.0.1 6380 "" 0 1000 auth keys keys a
(error) NOPERM this user has no permissions to access one of the keys used as arguments
127.0.0.1:6379> migrate 127.0.0.1 6380 "" 0 1000 auth2 keys pswd keys a
(error) NOPERM this user has no permissions to access one of the keys used as arguments
```
Using `acl dryrun` we can see that the parameters of `auth` and `auth2` are mistaken for the `keys` option:
```
127.0.0.1:6379> acl dryrun cc migrate whatever whatever "" 0 1000 auth keys keys a
"This user has no permissions to access the 'keys' key"
127.0.0.1:6379> acl dryrun cc migrate whatever whatever "" 0 1000 auth2 keys pswd keys a
"This user has no permissions to access the 'pswd' key"
```
Fix the bug by editing the db.c/migrateGetKeys function so that it correctly finds the `keys` option and all the keys following it.
-
- 09 Oct, 2022 1 commit
-
-
Binbin authored
Freeze time during execution of scripts and all other commands. This means that a key is either expired or not, and doesn't change state during a script execution. Resolves #10182.

This PR adds a new `commandTimeSnapshot` function. The function logic is extracted from `keyIsExpired`, but the related calls to `fixed_time_expire` and `mstime()` are removed, see below. In commands, we will avoid calling `mstime()` multiple times and just use the one sampled in call. The background is that, e.g., using `PEXPIRE 1` with valgrind sometimes results in the key being deleted rather than expired. The reason is that both the `PEXPIRE` command and `checkAlreadyExpired` call `mstime()` separately.

There are other more important changes in this PR:
1. Eliminate `fixed_time_expire`; it is no longer needed. When we want to sample time we should always use a time snapshot. We will use `in_nested_call` instead to update the cached time in `call`.
2. Move the call to `updateCachedTime` from `serverCron` to `afterSleep`. Now that `commandTimeSnapshot` always returns the sampled time, the `lookupKeyReadWithFlags` call in `getNodeByQuery` would get an outdated cached time (because `processCommand` is outside the `call` context). We therefore put the call to `updateCachedTime` in `afterSleep`.
3. Cache the time each time a module locks Redis. Call `updateCachedTime` in `moduleGILAfterLock`, affecting `RM_ThreadSafeContextLock` and `RM_ThreadSafeContextTryLock`.

Currently the commandTimeSnapshot change affects the following TTL commands:
- SET EX / SET PX
- EXPIRE / PEXPIRE
- SETEX / PSETEX
- GETEX EX / GETEX PX
- TTL / PTTL
- EXPIRETIME / PEXPIRETIME
- RESTORE key TTL

Other commands just use the cached mstime (including TIME). This is considered a breaking change, since it can break a script that uses a loop to wait for a key to expire.
-
- 28 Sep, 2022 1 commit
-
-
guybe7 authored
The original idea behind auto-setting the default (first,last,step) spec was to use the most "open" flags when the user didn't provide any key-spec flag information. While that is a good approach, it really makes no sense to set CMD_KEY_VARIABLE_FLAGS if the user didn't provide the getkeys-api flag: in this case there's no way to retrieve these variable flags, so what's the point? Internally redis already had code to ignore this, so this fix doesn't change redis's behavior; it only affects the output of the COMMAND command.
-
- 24 Aug, 2022 1 commit
-
-
Oran Agra authored
Redis 7.0 has #9890, which added an assertion when the propagation queue was not flushed by the time we reach beforeSleep. But it turns out that when processCommand calls getNodeByQuery and decides to reject the command, it can lead to a key being lazy-expired and deleted without the propagation queue later being flushed. This change prevents lazy expiry from deleting the key at this stage (not as part of a command being processed in `call`).
-