- 24 Jun, 2024 1 commit
-
-
Moti Cohen authored
Considerations for the selected implementation of the HRANDFIELD & HFE feature:

HRANDFIELD might access any of the fields in the hash, and some of them might be expired. So the implementation of HRANDFIELD along with HFEs had to take one of two options:
1. Expire hash fields before diving into handling HRANDFIELD.
2. Refine the HRANDFIELD cases to deal with expired fields.

Regarding the first option, as a reference, the command RANDOMKEY also declares O(1) complexity, yet it might be stuck in a very long (but not infinite) loop trying to find non-expired keys. Furthermore, RANDOMKEY also evicts expired keys along the way, even though it is categorized as a read-only command. Note that the HRANDFIELD case is more lightweight than RANDOMKEY, since HFEs have much more effective and aggressive active expiration for fields behind the scenes.

The second option introduces additional implementation complexity to HRANDFIELD. We could further refine the HRANDFIELD cases to differentiate between scenarios with many expired fields versus few expired fields, and adjust based on the percentage of expired fields. However, this approach could still lead to long loops or necessitate expiring fields before selecting them. For the "lightweight" cases the expiration is also expected to be lightweight.

Considering the pros and cons, the fact that HRANDFIELD is an infrequent command (particularly with HFEs), and the fact that we have effective active expiration for hash fields behind the scenes, it is better to keep it simple and choose option 1.

Other changes:
* Don't mark the command dirty inside the internal hashTypeExpire(). It caused the read-only HRANDFIELD command to be accidentally propagated (this flag should be set at a higher level, by the command functions).
* Bring `hashTypeExpireIfNeeded()` and `hashTypeGetValue()` closer to the `expireIfNeeded()` logic of the keyspace.
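As a rough illustration of option 1, the command handler can actively expire due fields before the random selection, so the selection logic never has to reason about expired fields. A minimal sketch; `hashTypeExpire()` is named in this commit but its signature here is an assumption, and the surrounding details are simplified:

```c
/* Sketch of option 1: expire due fields up-front, then pick randomly.
 * Like RANDOMKEY, this may do some deletion work inside a read-only
 * command; aggressive active expiration keeps that work small. */
void hrandfieldCommand(client *c) {
    robj *hash = lookupKeyRead(c->db, c->argv[1]);
    if (hash == NULL) { addReplyNull(c); return; }
    if (checkType(c, hash, OBJ_HASH)) return;
    hashTypeExpire(c->db, hash);   /* hypothetical signature */
    /* ... proceed with the usual random-field selection ... */
}
```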
-
- 29 May, 2024 1 commit
-
-
Moti Cohen authored
* For the replica's sake, rewrite the commands `H*EXPIRE*`, `HSETF`, `HGETF` to carry an absolute unix time in msec.
* On active expiration of a field, propagate HDEL to the replica (`propagateHashFieldDeletion()`).
* On lazy expiration, propagate HDEL to the replica (`hashTypeGetValue()` now calls `hashTypeDelete()`, and takes care to call `propagateHashFieldDeletion()`).
* Fix the `H*EXPIRE*` commands such that if they get the flag `LT` and the field doesn't have any expiration, it will be considered a valid condition.

Note, replicas don't perform any active expiration and should avoid lazy expiration as well. On a replica, `hashTypeGetValue()` doesn't check expiration (as long as the master didn't request to delete the field, it is valid).

TODO:
* Attach `dbid` to HASH metadata. See [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)

---------

Co-authored-by:
debing.sun <debing.sun@redis.com>
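A minimal sketch of the lazy-expiration path described above; `hashTypeDelete()` and `propagateHashFieldDeletion()` follow the commit, while `hashTypeFetch()`, `hashFieldIsExpired()`, and all signatures are assumptions for illustration:

```c
/* Lazy expiration on read: only the master decides a field is gone;
 * replicas keep serving the value until the propagated HDEL arrives. */
sds hashTypeGetValue(redisDb *db, robj *key, robj *o, sds field) {
    sds value = hashTypeFetch(o, field);        /* hypothetical lookup */
    if (value == NULL) return NULL;
    if (server.masterhost == NULL &&            /* we are the master */
        hashFieldIsExpired(o, field))           /* hypothetical check */
    {
        hashTypeDelete(o, field);
        propagateHashFieldDeletion(db, key->ptr, field, sdslen(field));
        return NULL;                            /* field is logically gone */
    }
    return value;
}
```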
-
- 08 May, 2024 1 commit
-
-
Ozan Tezcan authored
**Changes:**
- Adds listpack support to hash field expiration
- Implements hgetf/hsetf commands

**Listpack support for hash field expiration**

We keep field name and value pairs in a listpack for the hash type. With this PR, if one of the hash field expiration commands is called on a key for the first time, the listpack layout is converted to triplets holding field name, value and TTL per field. If a field does not have a TTL, we store zero as the TTL value. Zero is encoded as two bytes in the listpack, so once we convert the listpack to hold triplets, fields without a TTL consume those extra 2 bytes per item. Fields are ordered by TTL in the listpack so that the field with the minimum expiry time can be found efficiently.

**New command implementations as part of this PR:**

- HGETF command

  For each specified field: get its value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
  ```
  HGETF key
    [NX | XX | GT | LT]
    [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST]
    <FIELDS count field [field ...]>
  ```

- HSETF command

  For each specified field-value pair: set field to value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
  ```
  HSETF key
    [DC] [DCF | DOF]
    [NX | XX | GT | LT]
    [GETNEW | GETOLD]
    [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL]
    <FVS count field value [field value …]>
  ```

Todo:
- Performance improvement.
- rdb load/save
- aof
- defrag
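Since the triplets are kept ordered by TTL, the minimum expiry can be read from a fixed position. A conceptual sketch using the real listpack iteration API (`lpFirst`/`lpNext`); the helper name and the placement of zero-TTL entries in the ordering are assumptions:

```c
/* Listpack layout after conversion (conceptual):
 *   [field1][value1][ttl1] [field2][value2][ttl2] ...
 * ttl == 0 means "no TTL" (and still costs 2 bytes per entry).
 * Assuming entries with a real TTL sort before no-TTL entries,
 * the minimum expiry is the third element of the first triplet. */
unsigned char *lpGetMinExpiryField(unsigned char *lp) {   /* hypothetical */
    unsigned char *p = lpFirst(lp);   /* field of the first triplet */
    if (p == NULL) return NULL;       /* empty hash */
    p = lpNext(lp, p);                /* value of the first triplet */
    return lpNext(lp, p);             /* ttl: the smallest, by ordering */
}
```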
-
- 18 Apr, 2024 1 commit
-
-
Moti Cohen authored
- Add ebuckets & mstr data structures
- Integrate active & lazy expiration
- Add most of the commands
- Add support for dict (listpack is missing)

TODOs: RDB, notification, listpack, HSET, HGETF, defrag, aof
-
- 20 Mar, 2024 1 commit
-
-
Pieter Cailliau authored
[Read more about the license change here](https://redis.com/blog/redis-adopts-dual-source-available-licensing/) Live long and prosper 🖖
-
- 12 Mar, 2024 1 commit
-
-
Binbin authored
The check in fileIsManifest misjudged the manifest file. For example, if a RESP AOF contains "file", it would be considered a manifest file and the check would fail:
```
*3
$3
set
$4
file
$4
file
```
In #12951, if the preamble AOF also contains it, it will also fail. Fixes #12951.

The bug happened when the word "file" appeared in the first 1024 lines of the AOF. Now, as soon as the check finds a non-comment line it breaks out of the loop, whether or not that line contains "file".
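A minimal sketch of the fixed check under the description above; the real function in aof.c differs in details:

```c
#include <stdio.h>
#include <string.h>

/* Only the leading '#' comment lines of a manifest may be skipped;
 * the first non-comment line decides, and we never scan past it. */
int fileIsManifest(const char *filename) {
    FILE *fp = fopen(filename, "r");
    char buf[1024];
    int is_manifest = 0;
    if (fp == NULL) return 0;
    while (fgets(buf, sizeof(buf), fp) != NULL) {
        if (buf[0] == '#') continue;            /* manifest comment line */
        is_manifest = strstr(buf, "file") != NULL;
        break;                                  /* first real line decides */
    }
    fclose(fp);
    return is_manifest;
}
```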
-
- 22 Feb, 2024 1 commit
-
-
Binbin authored
It seems to be a leak caused by the code refactoring in #11290. It's a small leak that only happens if there's an IO error.
-
- 18 Feb, 2024 1 commit
-
-
Binbin authored
AOF_FSYNC_EVERYSEC higher resolution, change aof_last_fsync and aof_flush_postponed_start to use mstime (#13041)

Currently aof_last_fsync uses a low-resolution unixtime, which is really bad: it checks whether the absolute number of (full) seconds changed by one, so depending on which side of the second barrier the flush falls, we can get very different results. This PR changes the resolution to milliseconds instead of complete seconds.

In cases where the event loop cycle duration is short and cycles are rapid (e.g. running many fast commands with a short pipeline, or a high `hz` config), this change will not make much difference, since either way we'll be quick to detect that we're on a "new second", and it's likely that these fsyncs will always be executed close to the second-switch barrier.

But in cases of rare or slow event loop cycles (e.g. either slow commands, or a very low rate of traffic to redis, and low `hz`), it could easily be that with the old code we'd have over 1.5 seconds between fsyncs in some cases, and less than 0.5 in others. See the discussion in #8612.

This PR also handles aof_flush_postponed_start; the damage there is smaller since the threshold is 2 seconds, not 1.

---------

Co-authored-by:
Oran Agra <oran@redislabs.com>
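The gist of the change, as a simplified sketch (the field names follow the commit; the surrounding flush logic is omitted):

```c
/* Before: whole-second comparison. Depending on where inside the
 * second the flush lands, the real gap between fsyncs could range
 * from ~0.5s to ~1.5s. */
if (server.unixtime > server.aof_last_fsync) { /* fsync */ }

/* After: millisecond comparison gives a stable ~1s cadence. */
if (server.mstime - server.aof_last_fsync >= 1000) { /* fsync */ }
```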
-
- 05 Feb, 2024 1 commit
-
-
guybe7 authored
# Description

Gather most of the scattered `redisDb`-related code from the per-slot dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e. a class that represents an array of dictionaries.

# Motivation

The main motivation is code cleanliness: the idea of using an array of dictionaries is very well-suited to becoming a self-contained data structure. This allowed cleaning up some ugly code, among others: loops that run twice, on the main dict and the expires dict, and duplicate code for allocating and releasing this data structure.

# Notes

1. This PR reverts the part of https://github.com/redis/redis/pull/12848 where the `rehashing` list is global (handling rehashing `dict`s is under the responsibility of `kvstore`, and should not be managed by the server).
2. This PR also replaces the type of `server.pubsubshard_channels` from `dict**` to `kvstore` (original PR: https://github.com/redis/redis/pull/12804). After that was done, server.pubsub_channels was also chosen to be a `kvstore` (with only one `dict`, which seems odd) just to make the code cleaner by making it the same type as `server.pubsubshard_channels`, see `pubsubtype.serverPubSubChannels`.
3. The keys and expires kvstores are currently configured to allocate the individual dicts only when the first key is added (unlike before, when they allocated them in advance), but they won't release them when the last key is deleted.

Worth mentioning that due to the recent change, the reply of DEBUG HTSTATS changed in case no keys were ever added to the db.

before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
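Conceptually, the new type looks something like the following sketch; the real struct in kvstore.c carries more bookkeeping, and these field names are assumptions:

```c
/* A self-contained "array of dictionaries" (conceptual sketch). */
typedef struct kvstore {
    dict **dicts;                 /* one dict per slot */
    int num_dicts;                /* 1 standalone, 16384 in cluster mode */
    unsigned long long key_count; /* total keys, for O(1) DBSIZE */
    list *rehashing;              /* dicts currently rehashing (not global) */
} kvstore;
```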
-
- 17 Nov, 2023 1 commit
-
-
zhaozhao.zz authored
Introduced by #11695: we should skip an empty db to avoid an unnecessary SELECT.
-
- 14 Nov, 2023 1 commit
-
-
Binbin authored
When using the DB iterator, it will use dictInitSafeIterator to init an old-style safe dict iterator. When dbIteratorNext is used, it will jump to the next slot's dict when we are done with a dict. During this process there are no calls to dictResumeRehashing, which causes the dict's pauserehash to always be > 0. As a result, dictRehashMilliseconds returns immediately, leaving the slot dict in a state where rehashing can never complete.

In the "expire scan should skip dictionaries with lot's of empty buckets" test, adding a `keys *` can reproduce the problem stably. `keys *` calls dbIteratorNext, triggering a traversal of all slot dicts.

Added dbReleaseIterator and dbIteratorInitNextSafeIterator methods to call dictResetIterator. The issue was introduced in #11695.
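A sketch of the fixed slot-advance path; `dbIteratorNextDict` and the `dbIterator` fields are assumptions for illustration, while the dict iterator calls are the real dict.h API:

```c
/* When a slot's dict is exhausted, reset the finished iterator first:
 * dictResetIterator releases the rehash pause taken by the safe
 * iterator, so incremental rehashing of that dict can proceed. */
dictEntry *dbIteratorNext(dbIterator *dbit) {
    dictEntry *de = dictNext(&dbit->di);
    while (de == NULL) {
        dict *d = dbIteratorNextDict(dbit);   /* hypothetical helper */
        if (d == NULL) return NULL;           /* all slots visited */
        dictResetIterator(&dbit->di);         /* the actual fix */
        dictInitSafeIterator(&dbit->di, d);
        de = dictNext(&dbit->di);
    }
    return de;
}
```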
-
- 15 Oct, 2023 1 commit
-
-
Vitaly authored
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be relatively small. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.

## Important changes

* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time (see the sketch at the end of this entry).
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find a slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server. This has an interesting side effect: now you'll be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we are relying on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places, and we want to avoid scanning all dictionaries and summing up their sizes in a loop. We've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). This is also kept for O(1) expires computation as well.

## Performance

This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.

## Interface changes

* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* The Scan API will now require 64 bits to store the cursor, even on 32-bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.

---------

Co-authored-by:
Vitaly Arbuzov <arvit@amazon.com> Co-authored-by:
Harkrishn Patro <harkrisp@amazon.com> Co-authored-by:
Roshan Khatri <rvkhatri@amazon.com> Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
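A minimal sketch of the binary index tree (Fenwick tree) approach to fair random-slot selection, under the assumption that per-slot key counts are maintained on every add/delete; the real implementation differs in naming and placement:

```c
#include <stdlib.h>

#define SLOTS 16384
static unsigned long long bit[SLOTS + 1];   /* Fenwick tree, 1-based */

/* Maintain the tree when a slot gains/loses keys. */
void bitAdd(int slot, long long delta) {
    for (int i = slot + 1; i <= SLOTS; i += i & -i) bit[i] += delta;
}

/* Sum of key counts of slots [0, count). */
unsigned long long bitPrefix(int count) {
    unsigned long long s = 0;
    for (int i = count; i > 0; i -= i & -i) s += bit[i];
    return s;
}

/* Pick r uniformly in [0, total_keys), then binary-search the prefix
 * sums: O(log^2(slot count)), matching the complexity stated above. */
int randomNonEmptySlot(unsigned long long total_keys) {
    if (total_keys == 0) return -1;          /* no keys anywhere */
    unsigned long long r = (unsigned long long)rand() % total_keys;
    int lo = 1, hi = SLOTS;                  /* smallest prefix > r */
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (bitPrefix(mid) > r) hi = mid; else lo = mid + 1;
    }
    return lo - 1;                           /* back to 0-based slot */
}
```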
-
- 28 Sep, 2023 2 commits
-
-
guybe7 authored
The problem is that WAITAOF could hang in case commands were propagated only to replicas. This can happen if a module uses RM_Call with the REDISMODULE_ARGV_NO_AOF flag. In that case, master_repl_offset would increase, but there would be nothing to fsync, so in the absence of other traffic, fsynced_reploff_pending would stay static, and WAITAOF could hang.

This commit updates fsynced_reploff_pending to the latest offset in flushAppendOnlyFile in case there's nothing to fsync, i.e. if it's behind because of the above-mentioned case, it'll be refreshed and release the WAITAOF.

Other changes: fix a race in wait.tcl (client getting blocked vs. the fsync thread).
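The essence of the fix, as a simplified sketch (the field names follow the commit; the real function handles more cases):

```c
/* In flushAppendOnlyFile: nothing was written to the AOF buffer, so
 * from the AOF's point of view everything propagated so far is
 * already covered -- refresh the pending fsynced offset so a blocked
 * WAITAOF is eventually released. */
if (sdslen(server.aof_buf) == 0) {
    atomicSet(server.fsynced_reploff_pending, server.master_repl_offset);
    return;
}
```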
-
guybe7 authored
If we set `fsynced_reploff_pending` in `startAppendOnly` and the fork doesn't start immediately (e.g. there's another fork active at the time), any subsequent commands will increment `server.master_repl_offset` but will not cause an fsync (given they were executed before the fork started, they just ended up in the RDB part of it). Therefore, any WAITAOF will wait on the new master_repl_offset, but it will time out because no fsync will be executed.

Release notes:
```
WAITAOF could timeout in the absence of write traffic in case a new AOF is created and an AOFRW can't immediately start.
This can happen when the appendonly config is changed at runtime, but also after FLUSHALL, and replica full sync.
```
-
- 03 May, 2023 1 commit
-
-
Madelyn Olson authored
Technically, declaring a prototype with an empty parameter list has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently be introducing a breaking change if you use this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although it's not apparently a problem for any of the compilers we build with, it feels like the right thing is to properly adhere to the C standard and use (void).
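For illustration, the difference between the two declarator forms (the function name here is just an example):

```c
/* Old style: an empty parameter list declares a function taking an
 * unspecified number of arguments, not zero arguments -- deprecated,
 * and Clang 15 now warns about it with -pedantic. */
int get_server_fd();

/* Conforming style: (void) explicitly declares zero parameters. */
int get_server_fd(void);
```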
-
- 29 Mar, 2023 1 commit
-
-
Binbin authored
This PR fixes several unrelated bugs that were discovered by the same set of tests (WAITAOF tests in #11713) and could make the `WAITAOF` test hang.

The change in `backgroundRewriteDoneHandler` is about MP-AOF. That leftover / old code assumes that we started a new AOF file just now (when we have a new base into which we're gonna incrementally write), but the fact is that with MP-AOF, the fork-done handler doesn't really affect the incremental file being maintained by the parent process; there's no reason to re-issue `SELECT`, and no reason to update any of the fsync variables in that flow. This should have been deleted with MP-AOF (introduced in #9788, 7.0). The damage is that the update to `aof_fsync_offset` would cause us to miss an fsync in `flushAppendOnlyFile`; that happens if we stop write commands in `AOF_FSYNC_EVERYSEC` while an AOFRW is in progress. This caused a new `WAITAOF` test to sometimes hang forever.

Also because of MP-AOF, we needed to change `aof_fsync_offset` to `aof_last_incr_fsync_offset` and match it to `aof_last_incr_size` in `flushAppendOnlyFile`. This is because in the past we compared `aof_fsync_offset` and `aof_current_size`, but with MP-AOF it could be that the total AOF file is smaller after AOFRW, while the (already existing) incr file still has data that needs to be fsynced.

The change in `flushAppendOnlyFile` about `AOF_FSYNC_ALWAYS` follows #6053 (the details are in #5985): we also check `AOF_FSYNC_ALWAYS` to handle a case where appendfsync is changed from everysec to always while there is data that's written but not yet fsynced.
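A simplified sketch of the corrected fsync condition (the field names follow the commit; everything else is abbreviated):

```c
/* Compare against the *incr* file's own size: after an AOFRW the total
 * AOF can shrink, yet the incr file may still hold unsynced bytes. */
int unsynced = server.aof_last_incr_fsync_offset != server.aof_last_incr_size;
if (unsynced &&
    (server.aof_fsync == AOF_FSYNC_ALWAYS ||
     (server.aof_fsync == AOF_FSYNC_EVERYSEC &&
      server.unixtime > server.aof_last_fsync)))
{
    /* ... fsync and set aof_last_incr_fsync_offset = aof_last_incr_size ... */
}
```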
-
- 14 Mar, 2023 1 commit
-
-
Slava Koyfman authored
Implementing the WAITAOF functionality, which allows the user to block until a specified number of Redises have fsynced all previous write commands to the AOF.

Syntax: `WAITAOF <num_local> <num_replicas> <timeout>`

Response: Array containing two elements: num_local, num_replicas.

num_local is always either 0 or 1, representing the local AOF on the master. num_replicas is the number of replicas that acknowledged the replication offset of the last write as fsynced to the AOF. Returns an error when called on replicas, or when called with a non-zero num_local on a master with AOF disabled; in all other cases the response just contains the number of fsynced copies.

Main changes:
* Added code to keep track of replication offsets that are confirmed to have been fsynced to disk.
* Keep advancing master_repl_offset even when replication is disabled (and there's no replication backlog), as long as there's an AOF enabled. This way we can use this command and its mechanisms even when replication is disabled.
* Extend REPLCONF ACK to `REPLCONF ACK <ofs> FACK <ofs>`; the FACK will be appended only if there's an AOF on the replica, and is ignored by old masters (thus backwards compatible).
* WAIT now no longer waits for the replication offset after your last command, but rather the replication offset after your last write (or read command that caused propagation, e.g. lazy expiry).

Unrelated changes:
* The WAIT command respects CLIENT_DENY_BLOCKING (not just CLIENT_MULTI).

Implementation details:
* Add an atomic var named `fsynced_reploff_pending` that's updated (usually by the bio thread) and later copied to the main `fsynced_reploff` variable (only if the AOF base file exists), i.e. during the initial AOF rewrite it will not be used as the fsynced offset since the AOF base is still missing.
* Replace the close+fsync bio job with a new BIO_CLOSE_AOF (AOF-specific) job that will also update the fsync offset field.
* Handle all AOF jobs (BIO_CLOSE_AOF, BIO_AOF_FSYNC) in the same bio worker thread, to impose ordering on their execution. This solves a race condition where a job could set `fsynced_reploff_pending` to a higher value than another pending fsync job, resulting in indicating an offset for which parts of the data have not yet actually been fsynced. Imposing an ordering on the jobs guarantees that fsync jobs are executed in increasing order of replication offset.
* Drain bio jobs when switching `appendfsync` to "always". This should prevent a write race between updates to `fsynced_reploff_pending` in the main thread (`flushAppendOnlyFile` when set to ALWAYS fsync) and those done in the bio thread.
* Drain the pending fsync when starting over a new AOF to avoid race conditions with the previous AOF offsets overriding the new one (e.g. after switching to replicate from a new master).
* Make sure to update the fsynced offset at the end of the initial AOF rewrite. A must in case there are no additional writes that trigger a periodic fsync, specifically for a replica that does a full sync.

Limitations:
It is possible to write a module or a Lua script that propagates to the AOF but doesn't propagate to the replication stream (see REDISMODULE_ARGV_NO_REPLICAS and luaRedisSetReplCommand). These features are incompatible with the WAITAOF command and can result in two bad cases. The scenario is that the user executes a command that only propagates to the AOF, then immediately issues a WAITAOF, and there are no further writes on the replication stream after that:
1. If the last thing that happened on the replication stream is a PING (which increased the replication offset but won't trigger an fsync on the replica), then the client would hang forever (waiting for a FACK that the replica will never send, since it doesn't trigger any fsyncs).
2. If the last thing that happened is a write command that got propagated properly, then WAITAOF will be released immediately, without waiting for an fsync (since the offset didn't change).

Refactoring:
* Plumbing to allow a bio worker to handle multiple job types. This introduces the infrastructure necessary to allow BIO workers to not have a 1-1 mapping of worker to job-type. This allows in the future assigning multiple job types to a single worker, either as a performance/resource optimization, or as a way of enforcing ordering between specific classes of jobs.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 19 Feb, 2023 1 commit
-
-
Binbin authored
We have cases where we print information (might be important, but by no means an error indicator) with the LL_WARNING level. Demoting these to LL_NOTICE:
- oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
- User requested shutdown...

This is also true for cases where we encounter a rare but normal situation. Demoting to LL_NOTICE. Examples:
- AOF was enabled but there is already another background operation. An AOF background was scheduled to start when possible.
- Connection with master lost.

Based on yoav-steinberg's https://github.com/redis/redis/pull/10650#issuecomment-1112280554 and yossigo's https://github.com/redis/redis/pull/10650#pullrequestreview-967677676
-
- 16 Feb, 2023 1 commit
-
-
Oran Agra authored
* Make it clear that current_client is the root client that was called by the external connection.
* Add executing_client, which is the client that runs the current command (can be a module or a script).
* Remove script_caller, which was used for commands that have CLIENT_SCRIPT to get the client that called the script. In most cases that's the current_client, and in others (when being called from a module) it could be an intermediate client, when we actually want the original one used by the external connection.

Bugfixes:
* RM_Call with the C flag should log ACL errors with the requested user rather than the one used by the original client; this also solves a crash when RM_Call is used with the C flag from a detached thread-safe context.
* addACLLogEntry would have logged info about the script_caller, but in case the script was issued by a module command we actually want the current_client. The exception is when RM_Call is called from a timer event, in which case we don't have a current_client.

Behavior changes:
* Client-side tracking for scripts now tracks the keys that are read by the script, instead of the keys that are declared by the caller of EVAL.

Other changes:
* Log both current_client and executing_client in the crash log.
* Remove prepareLuaClient and resetLuaClient, being dead code that was forgotten.
* Remove scriptTimeSnapshot and snapshot_time, and instead add cmd_time_snapshot that serves all commands and is reset only when execution nesting starts.
* Remove code to propagate CLIENT_FORCE_REPL from the executed command to the script caller, since scripts aren't propagated anyway these days, and anyway this flag wouldn't have had an effect since CLIENT_PREVENT_PROP is added by scriptResetRun.
* Fix a module GIL violation issue in afterSleep that was introduced in #10300 (unreleased).
-
- 12 Feb, 2023 1 commit
-
-
Tian authored
# Background

The RDB file is usually generated and used once and seldom used again, but its content resides in the page cache until the OS evicts it. A potential problem is that once free memory is exhausted, the OS has to reclaim some memory from the page cache or swap anonymous pages out, which may result in jitter for the Redis service.

Consider a concrete scenario: a high-capacity machine hosts many redis instances, and we're upgrading them together. The page cache on the host machine grows as RDBs are generated. Once free memory drops below the low watermark (which is more likely to happen on an older Linux kernel like 3.10, before [watermark_scale_factor](https://lore.kernel.org/lkml/1455813719-2395-1-git-send-email-hannes@cmpxchg.org/) was introduced, where the `low watermark` is linear to the `min watermark` and there is not much buffer space for `kswapd` to wake up and reclaim memory), a `direct reclaim` happens, which means a process stalls to wait for memory allocation.

# What the PR does

The PR introduces the capability to reclaim the page cache when the RDB is operated on. Generally there are two cases: reading and writing the RDB. For reads it's a little messy to address incremental reclaim, so the reclaim is done in one go, in the background, after the load is finished, to avoid blocking the work thread. For writes, incremental reclaim amortizes the work, so there's no need to put it into the background, and the peak watermark of the cache can be reduced this way. Two cases are addressed specially, replication and restart, for both of which the cache is leveraged to speed up processing, so the reclaim is postponed to the right time. To do this, a flag is added to `rdbSave` and `rdbLoad` to control whether the cache needs to be kept, with the default value false.

# Something worth noting

1. Though `posix_fadvise` is the POSIX standard, only a few platforms support it, e.g. Linux, FreeBSD 10.0.
2. On Linux, `posix_fadvise` only takes effect on written-back pages, so a `sync` (or `fsync`, `fdatasync`) is needed to flush the dirty pages before `posix_fadvise` if we reclaim the write cache.

# About testing

A unit test is added to verify the effect of `posix_fadvise`. In the integration test the overall cache increase is checked, as well as the cache backed by the RDB, as a specific TCL test executed in an isolated GitHub Actions job.
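A minimal sketch of the reclaim step for the write path, following point 2 above (flush dirty pages first, then drop them); the helper name is hypothetical:

```c
#include <fcntl.h>
#include <unistd.h>

/* Drop a written range of an RDB file from the page cache.
 * fdatasync() comes first because, on Linux, posix_fadvise with
 * POSIX_FADV_DONTNEED only affects pages already written back. */
int reclaimFileCache(int fd, off_t offset, off_t len) {
    if (fdatasync(fd) == -1) return -1;
    return posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
}
```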
-
- 22 Nov, 2022 1 commit
-
-
Wen Hui authored
Now, according to the comments, if the truncated file is not the last file, it is considered a fatal error, the return code is updated to AOF_FAILED, and the server exits without any error message to the client. Similar to other error situations, this PR adds an explicit error message for this case, so the client knows clearly what happened.
-
- 16 Nov, 2022 1 commit
-
-
sundb authored
Improve memory efficiency of list keys

## Description of the feature

The new listpack encoding uses the old `list-max-listpack-size` config to perform the conversion; we can think of it as a node inside a quicklist, but without the 80 bytes of overhead (internal fragmentation included) of the quicklist and quicklistNode structs. For example, a list key with 5 items of 10 chars each now takes 128 bytes instead of the 208 it used to take.

## Conversion rules

* Convert listpack to quicklist: when the listpack length or size reaches the `list-max-listpack-size` limit, it will be converted to a quicklist.
* Convert quicklist to listpack: when a quicklist has only one node, and its length or size is reduced to half of the `list-max-listpack-size` limit, it will be converted to a listpack. This is done to avoid frequent conversions when we add or remove at the bounding size or length.

## Interface changes

1. Add a list entry param to listTypeSetIteratorDirection. When the list encoding is listpack, `listTypeIterator->lpi` points to the next entry of the current entry, so when changing the direction we need to use the current node (listTypeEntry->p) to update `listTypeIterator->lpi` to the next node in the reverse direction.

## Benchmark

### Listpack VS Quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement

### Both are quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement

From the benchmark results:
1. When the list is quicklist-encoded, LRANGE improves performance by <5%.
2. When the list is listpack-encoded, LRANGE improves performance by ~13%; the main enhancement is brought by `addListListpackRangeReply()`.

## Memory usage

1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each shows memory usage down by 35.49%, from 214MB to 138MB.

## Note

1. Add a conversion callback to support doing some work before conversion. Since the quicklist iterator decompresses the current node when it is released, we can no longer decompress the quicklist after we convert the list.
-
- 09 Nov, 2022 1 commit
-
-
Viktor Söderqvist authored
Small sets with not only integer elements are listpack encoded, by default up to 128 elements, max 64 bytes per element, with new configs `set-max-listpack-entries` and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable.

Sets with only integers, even very small sets, are still intset encoded (up to the 1G limit, etc.). Larger sets are hashtable encoded.

This PR increments the RDB version and has an effect on OBJECT ENCODING. Possible conversions when elements are added:
* intset -> listpack
* listpack -> hashtable
* intset -> hashtable

Note: No conversion happens when elements are deleted. If all elements are deleted and then added again, the set is deleted and recreated, thus implicitly converted to a smaller encoding.
-
- 18 Aug, 2022 1 commit
-
-
Meir Shpilraien (Spielrein) authored
Fix replication inconsistency on modules that use keyspace notifications.

### The Problem

In general, keyspace notifications are invoked after the command logic was executed (this is not always the case; we will discuss later specific commands that do not follow this rule). For example, `set x 1` will trigger a `set` notification that will be invoked after the `set` logic was performed, so if the notification logic tries to fetch `x`, it will see the new data that was written.

Consider the scenario in which the notification logic performs some write commands, for example, the notification logic increases some counter, `incr x{counter}`, indicating how many times `x` was changed. The logical order in which the logic was executed is as follows:
```
set x 1
incr x{counter}
```
The issue is that the `set x 1` command is added to the replication buffer at the end of the command invocation (specifically after the keyspace notification logic was invoked and performed the `incr` command). The replication/aof sees the commands in the wrong order:
```
incr x{counter}
set x 1
```
In this specific example the order is less important. But if, for example, the notification had deleted `x`, then we would end up with primary-replica inconsistency.

### The Solution

Put the command that caused the notification in its rightful place. In the above example, the `set x 1` command logic was executed before the notification logic, so it should be added to the replication buffer before the commands that are invoked by the notification logic. To achieve this without a major code refactoring, we save a placeholder in the replication buffer; when we finish invoking the command logic we check whether the command needs to be replicated, and if it does, we use the placeholder to add it to the replication buffer instead of appending it to the end. To be efficient and not allocate memory on each command to save the placeholder, the replication buffer array was modified to reuse memory (instead of allocating it each time we want to replicate commands). Also, to avoid saving a placeholder when not needed, we do it only for WRITE or MAY_REPLICATE commands. See the sketch after this entry for the general shape of this mechanism.

#### Additional Fixes

* Expire and Eviction notifications:
  * The Expire/Eviction logical order was to first perform the Expire/Eviction and then the notification logic. The replication buffer got this the other way around (first the notification effect and then the `del` command). The PR fixes this issue.
  * The notification effect and the `del` command were not wrapped with `multi-exec` (if needed). The PR also fixes this issue.
* SPOP command:
  * On spop, the `spop` notification was fired before the command logic was executed. The change in this PR would have caused the replication order to change (first the `spop` command and then the notification logic), although the logical order is first the notification logic and then the `spop` logic. The right fix would have been to move the notification to be fired after the command was executed (like all the other commands), but this can be considered a breaking change. To overcome this, the PR keeps the current behavior and changes the `spop` code to keep the right logical order when pushing commands to the replication buffer. Another PR will follow to fix SPOP properly and match it to the other commands (we split it into 2 separate PRs so it will be easy to cherry-pick this PR to 7.0 if we choose to).

#### Unhandled Known Limitations

* Key miss event:
  * On a key miss event, if a module performed some write command during the event (using `RM_Call`), the `dirty` counter would increase, and the read command that caused the key miss event would be replicated to the replication stream and aof. This problem can also happen with a write command that opens some keys but eventually decides not to perform any action. We decided not to handle this problem in this PR because the solution is complex and would add risk if we later want to cherry-pick this PR. We should decide if we want to handle it in future PRs. For now, module writers are advised not to perform any write commands during a key miss event.

#### Testing

* We already have tests covering cases where a notification invokes write commands that are also added to the replication buffer; the tests were modified to verify that the replica gets the commands in the correct logical order.
* A test was added to verify that the `spop` behavior was kept unchanged.
* A test was added to verify the key miss event behaves as expected.
* A test was added to verify the changes do not break lazy expiration.

#### Additional Changes

* The `propagateNow` function can accept a special dbid, -1, indicating not to replicate `select`. We use this to replicate `multi/exec` in the `propagatePendingCommands` function. The side effect of this change is that now the `select` command will appear inside the `multi/exec` block on the replication stream (instead of outside of the `multi/exec` block). Tests were modified to match this new behavior.
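A conceptual sketch of the placeholder mechanism; every helper name here is hypothetical, while `call()` and the command flags are real, and the actual code lives around the command-call path:

```c
/* Reserve a slot in the replication buffer before running the command,
 * fill it afterwards: anything replicated by notification handlers
 * during the call then lands *after* the triggering command. */
int slot = -1;
if (cmd->flags & (CMD_WRITE | CMD_MAY_REPLICATE))
    slot = replBufReservePlaceholder();           /* hypothetical */
call(c, CMD_CALL_FULL);    /* may replicate e.g. incr x{counter} */
if (slot != -1) {
    if (commandMustBeReplicated(c))               /* hypothetical */
        replBufFillPlaceholder(slot, c->argv, c->argc);
    else
        replBufReleasePlaceholder(slot);          /* reused, not freed */
}
```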
-
- 04 Aug, 2022 1 commit
-
-
Binbin authored
This is an addition to #11039, which cleans up rdbLoad*-related errno. Remove the errno print from the outer message (it may be invalid since errno may have been overwritten). Our aim should be that the code that detects the error, and knows which system call triggered it, is the one to print errno, and not the code way up above (in some cases a result of a logical error and not a system one). Remove the code that updates errno in rdbLoadRioWithLoadingCtx, the signature check and the rdb version check; in these cases we do print the error message, and the caller does not have specific logic for handling EINVAL.

Small fix around rdb-preamble AOF: a truncated RDB is considered a failure, not handled the same as a truncated AOF file.
-
- 25 Jul, 2022 1 commit
-
-
Binbin authored
In rewriteAppendOnlyFileBackground, after flushAppendOnlyFile(1) and before openNewIncrAofForAppend, we should call redis_fsync to fsync the aof file. Because we may open a new INCR AOF in openNewIncrAofForAppend, in the case of using the everysec policy, the old AOF file may not be fsynced in time (or even at all). When using everysec, we don't want to pay the disk latency from the main thread, so we do a background fsync. Adding an argument to bioCreateCloseJob, a `need_fsync` flag, to indicate that an fsync is required before the file is closed; so we will fsync the old AOF file before we close it.

A cleanup: turn the job args struct into a union, since the free_* args and the fd/fsync args are never used together.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 03 Jul, 2022 2 commits
-
-
Binbin authored
We should also set aof_lastbgrewrite_status to C_ERR on these errors, because the aof rewrite did fail and we did not finish the manifest update. Also maintain the stat_aofrw_consecutive_failures.
-
Binbin authored
The `can_log` variable prevents us from outputting too many error logs, but it should not gate the modification of server.aof_last_write_errno. We are doing this because:
1. In the short-write case, we always set aof_last_write_errno to ENOSPC; we don't care about the `can_log` flag.
2. And we always set aof_last_write_status to C_ERR on an aof write error (except for FSYNC_ALWAYS, where we exit).

So there was a chance that `aof_last_write_errno` was not right. An innocent bug, or just a code cleanup.
-
- 23 Jun, 2022 1 commit
-
-
Binbin authored
When dirCreateIfMissing or openNewIncrAofForAppend fail, set aof_lastbgrewrite_status to err (#10775) It will be displayed in the `aof_last_bgrewrite_status` field of the INFO command.
-
- 20 Jun, 2022 1 commit
-
-
Tian authored
The current process to persist files is: `write` the data, `fsync`, and `rename` the file. But an underlying problem is that the rename may be lost during a sudden crash, like a power outage, if the directory hasn't been persisted. The article [Ensuring data reaches disk](https://lwn.net/Articles/457667/) mentions that a safe way to update a file should be:

1. create a new temp file (on the same file system!)
2. write data to the temp file
3. fsync() the temp file
4. rename the temp file to the appropriate name
5. fsync() the containing directory

This commit handles CONFIG REWRITE, the AOF manifest, and the RDB file (both for persistence, and the one the replica gets from the master). It doesn't handle (yet) ACL SAVE and Cluster configs, since these don't yet follow this pattern.
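A minimal POSIX sketch of the five steps above (error handling abbreviated; names hypothetical):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int safeWriteFile(const char *dir, const char *tmppath,
                  const char *path, const void *data, size_t len) {
    int fd = open(tmppath, O_WRONLY | O_CREAT | O_TRUNC, 0644); /* 1 */
    if (fd == -1) return -1;
    if (write(fd, data, len) != (ssize_t)len ||                 /* 2 */
        fsync(fd) == -1) { close(fd); return -1; }              /* 3 */
    close(fd);
    if (rename(tmppath, path) == -1) return -1;                 /* 4 */
    int dirfd = open(dir, O_RDONLY);                            /* 5 */
    if (dirfd == -1) return -1;
    int ret = fsync(dirfd);
    close(dirfd);
    return ret;
}
```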
-
- 01 Jun, 2022 1 commit
-
-
Oran Agra authored
The important part is that read-only scripts (not just EVAL_RO and FCALL_RO, but also ones with `no-writes` executed by normal EVAL or FCALL) will now be permitted to run during CLIENT PAUSE WRITE (unlike before, where only the _RO commands would be processed).

Other than that, some errors like OOM, READONLY, MASTERDOWN are now handled by processCommand rather than by the command itself, which affects the error string (and even the error code in some cases) and command stats.

Besides that, now the `may-replicate` commands, PFCOUNT and PUBLISH, will be considered `write` commands in scripts and will be blocked in all read-only scripts just like other write commands. They'll also be blocked in EVAL_RO (i.e. even for scripts without the `no-writes` shebang flag).

This commit also hides the `may_replicate` flag from the COMMAND command output. This is a **breaking change**. Background about may_replicate: we don't want to expose a no...
-
- 29 May, 2022 1 commit
-
-
Binbin authored
The ret value should be AOF_OK instead of C_OK. AOF_OK and C_OK are both 0, so this is just a cleanup. Also updated some outdated comments.
-
- 26 May, 2022 1 commit
-
-
chenyang8094 authored
Instead of printing a log when a folder or a manifest is missing (level reduced), we now print:
- the total time it took to load all the aof files
- when creating a new base or incr file
- when starting to write to an existing incr file on startup
-
- 27 Apr, 2022 1 commit
-
-
chenyang8094 authored
Followup fix for #10616
-
- 26 Apr, 2022 1 commit
-
-
chenyang8094 authored
Fix bug when AOF is enabled after startup: put the new incr file in the manifest only when AOFRW is done. (#10616)

Changes:
- When AOF is enabled **after** startup, the data accumulated during `AOF_WAIT_REWRITE` will only be stored in a temp INCR AOF file. Only after the first AOFRW is successful will we add it to the manifest file. Before this fix, the manifest referred to the temp file, which could cause a restart during that time to load it without its base.
- Add an `aof_rewrites_consecutive_failures` info field for the aofrw limiting implementation.

Now we can guarantee that these behaviors of MP-AOF are the same as before (past redis releases):
- When AOF is enabled after startup, the data accumulated during `AOF_WAIT_REWRITE` is only stored in a temporary place, not yet visible in the manifest; only after the first AOFRW is successful do we add it to the manifest file.
- When disabling AOF, we did not delete the AOF file in the past, so there's no need to change that behavior now (yet).
- When toggling AOF off and then on (could be as part of a full sync), a crash or restart before the first rewrite is completed would result in the previous version being loaded (might not be the right thing, but that's what we always had).
-
- 19 Apr, 2022 1 commit
-
-
judeng authored
Changes:
1. Check the failed-rewrite time threshold only when we actually consider triggering a rewrite, i.e. this should be the last condition tested, since the test has side effects (increasing the time threshold). Could have happened in some rare scenarios.
2. No limit in startup state (e.g. after restarting redis that previously failed and had many incr files).
3. The "triggered the limit" log is recorded only when the limit status is returned.
4. Remove the failure count from the log (could be misleading in some cases).

Co-authored-by:
chenyang8094 <chenyang8094@users.noreply.github.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 07 Apr, 2022 1 commit
-
-
chenyang8094 authored
The `auto-aof-rewrite-percentage` config defines at what growth percentage an automatic AOF rewrite is triggered. This normally works OK, since the size of the AOF file at the end of a rewrite is stored in `server.aof_rewrite_base_size`. However, on startup, redis used to store the entire size of the AOF file in that variable, resulting in a wrong automatic AOF rewrite trigger (it could have been triggered much later than desired). This issue would only affect the first AOFRW after startup; after that, future AOFRWs would be triggered correctly. This bug existed in all previous versions of Redis.

This PR unifies the meaning of `server.aof_rewrite_base_size`, which only represents the size of the BASE AOF. Note that after an AOFRW this size includes the size of the incremental file (all the commands that executed during the rewrite), so that auto-aof-rewrite-percentage is the ratio from the size of the AOF after rewrite. However, on startup it is complicated to know that size, and we compromised on taking just the size of the base file; this means that the first rewrite after startup can happen a little bit too soon.

Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
yoav-steinberg <yoav@redislabs.com>
-
- 05 Apr, 2022 1 commit
-
-
Meir Shpilraien (Spielrein) authored
## Move library metadata to be part of the library payload

Following the discussion on https://github.com/redis/redis/issues/10429 and the intention to add (in the future) library versioning support, we believe that the entire library metadata (like name and engine) should be part of the library payload and not provided by the `FUNCTION LOAD` command. The reasoning behind this is that the programmer who developed the library should be the one who sets those values (name, engine, and in the future also version). **It is not the responsibility of the admin who loads the library into the database.**

The PR moves all the library metadata (engine and library name) to be part of the library payload. The metadata needs to be provided on the first line of the payload using the shebang format (`#!<engine> name=<name>`), example:

```lua
#!lua name=test
redis.register_function('foo', function() return 1 end)
```

The above script will run on the Lua engine and will create a library called `test`.

## API Changes (compared to 7.0 rc2)

* The `FUNCTION LOAD` command was changed; now it simply gets the library payload and extracts the engine and name from the payload. In addition, the command will now return the library name, which can later be used with `FUNCTION DELETE` and `FUNCTION LIST`.
* The description field was completely removed from `FUNCTION LOAD` and `FUNCTION LIST`.

## Breaking Changes (compared to 7.0 rc2)

* The library description was removed (we can re-add it in the future, either as part of the shebang line or as an additional line).
* Loading an AOF file that was generated by either 7.0 rc1 or 7.0 rc2 will fail because the old command syntax is invalid.

## Notes

* Loading an RDB file that was generated by rc1/rc2 **is** supported; Redis will automatically add the shebang to the library payloads (we can probably delete that code after 7.0.3 or so, since there's no need to keep supporting upgrades from an RC build).
-
- 04 Apr, 2022 1 commit
-
-
judeng authored
Fix typos in the function aofRewriteLimited in aof.c: deley->delay and NAX->MAX.

Co-authored-by:
judeng <judeng@didiglobal.com>
-
- 23 Feb, 2022 1 commit
-
-
Itamar Haber authored
Adds the ability to track the lag of a consumer group (CG), that is, the number of entries yet to be delivered from the stream. The proposed constant-time solution is in the spirit of "best-effort." Partially addresses #8737.

## Description of approach

We add a new "entries_added" property to the stream. This starts at 0 for a new stream and is incremented by 1 with every `XADD`. It is essentially an all-time counter of the entries added to the stream. Given the stream's length and this counter value, we can trivially find the logical "entries_added" counter of the first ID if and only if the stream is contiguous. A fragmented stream contains one or more tombstones generated by `XDEL`s. The new "xdel_max_id" stream property tracks the latest tombstone.

The CG also tracks its last-delivered ID as an "entries_read" counter and increments it independently when delivering new messages, unless this read counter is invalid (-1 means invalid offset). When the CG's counter is available, the reported lag is the difference between the added and read counters (see the sketch after this entry). Lastly, this also adds a "first_id" field to the stream structure in order to make looking it up cheaper in most cases.

## Limitations

There are two cases in which the mechanism isn't able to track the lag. In these cases, `XINFO` replies with `null` in the "lag" field. The first case is when a CG is created with an arbitrary last-delivered ID that isn't "0-0", nor the first or the last entry of the stream. In this case, it is impossible to obtain a valid read counter (short of an O(N) operation). The second case is when there are one or more tombstones fragmenting the stream's entries range. In both cases, given enough time and assuming that the consumers are active (reading and acking) and advancing, the CG should be able to catch up with the tip of the stream and report zero lag. Once that's achieved, lag tracking would resume as normal (until the next tombstone is set).

## API changes

* `XGROUP CREATE` added with the optional named argument `[ENTRIESREAD entries-read]` for explicitly specifying the new CG's counter.
* `XGROUP SETID` added with an optional positional argument `[ENTRIESREAD entries-read]` for specifying the CG's counter.
* `XINFO` reports the maximal tombstone ID, the recorded first entry ID, and the total number of entries added to the stream.
* `XINFO` reports the current lag and logical read counter of CGs.
* `XSETID` is an internal command that's used in replication/aof. It has been added with the optional positional arguments `[ENTRIESADDED entries-added] [MAXDELETEDID max-deleted-entry-id]` for propagating the CG's offset and the maximal tombstone ID of the stream.

## The generic unsolved problem

The current stream implementation doesn't provide an efficient way to obtain the approximate/exact size of a range of entries. While it would have been nice to have that ability (#5813) in general, let alone specifically in the context of CGs, the risk and complexity involved in such an implementation are in all likelihood prohibitive.

## A refactoring note

The `streamGetEdgeID` has been refactored to accommodate both the existing seek of any entry as well as seeking non-deleted entries (the addition of the `skip_tombstones` argument). Furthermore, this refactoring also migrated the seek logic to use the `streamIterator` (rather than `raxIterator`), which was, in turn, extended with the `skip_tombstones` Boolean struct field to control the emission of these.

Co-authored-by:
Guy Benoish <guy.benoish@redislabs.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
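The core computation, as a sketch (the field names follow the commit; validity handling is simplified and assumes the no-tombstone case):

```c
/* Best-effort constant-time lag: valid only when the CG's read
 * counter is known and the entries range isn't fragmented. */
long long streamCGLag(stream *s, streamCG *cg) {
    if (cg->entries_read < 0) return -1;   /* reported as null by XINFO */
    return (long long)s->entries_added - cg->entries_read;
}
```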
-