- 18 Nov, 2021 2 commits
-
-
Binbin authored
Moves the ZPOP ... 0 fast exit path after the type check so it replies with WRONGTYPE. In the past it would return an empty array. Also, count is no longer allowed to be negative. See #9680.

Before:
```
127.0.0.1:6379> set zset str
OK
127.0.0.1:6379> zpopmin zset 0
(empty array)
127.0.0.1:6379> zpopmin zset -1
(empty array)
```
After:
```
127.0.0.1:6379> set zset str
OK
127.0.0.1:6379> zpopmin zset 0
(error) WRONGTYPE Operation against a key holding the wrong kind of value
127.0.0.1:6379> zpopmin zset -1
(error) ERR value is out of range, must be positive
```
-
Madelyn Olson authored
Unpause clients as soon as the manual failover ends, instead of waiting for the timed offset
-
- 16 Nov, 2021 5 commits
-
-
Axlgrep authored
In the `scan 0 match ""` case, `pat` is an empty sds (`patlen` is 0). We shouldn't access the first character directly in this case (even though that character is '\0'), so for code readability the two sides of the judgment logic are swapped.
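A minimal sketch of the reordered check, with context approximated from the commit message (the real code sits in the generic SCAN implementation):
```c
#include <stddef.h>

/* Decide whether a MATCH pattern actually filters anything
 * (a lone "*" matches everything, so no filtering is needed). */
static int use_pattern_old(const char *pat, size_t patlen) {
    return !(pat[0] == '*' && patlen == 1);  /* reads pat[0] even when patlen == 0 */
}

static int use_pattern_new(const char *pat, size_t patlen) {
    /* The length check short-circuits, so pat[0] is never read
     * for an empty pattern. */
    return !(patlen == 1 && pat[0] == '*');
}
```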
-
harleyliao authored
When an AOF rewrite fails in fork(), this is now indicated by the aof_last_bgrewrite_status INFO field, the same as when the forked child fails later on.
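For illustration, this is where the failure shows up (a truncated, hypothetical INFO excerpt):
```
127.0.0.1:6379> info persistence
...
aof_last_bgrewrite_status:err
...
```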
-
sundb authored
Redis supports inserting data over 4GB into a string (and recently into lists too, see #9357), but the LZF compression used in RDB files (see the `rdbcompression` config) and in quicklists (see the `list-compress-depth` config) does not support compressing/decompressing data over UINT32_MAX, which would corrupt the rdb after compression.

Internal changes:
1. Modify the `unsigned int` parameters of `lzf_compress`/`lzf_decompress` to `size_t`.
2. Modify the variable types in `lzf_compress` involving offsets and lengths to `size_t`.
3. Set LZF_USE_OFFSETS to 0. When LZF_USE_OFFSETS is 1, lzf stores an offset into `LZF_HSLOT` (32-bit). Even on 64-bit, `LZF_USE_OFFSETS` defaults to 1, because lzf assumes it only compresses and decompresses data smaller than UINT32_MAX. But now we need to make lzf support 64-bit; turning on `LZF_USE_OFFSETS` would make it impossible to store 64-bit offsets or pointers. BTW, disable LZF_USE_OFFS...
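A sketch of the widened prototypes, assuming the stock liblzf declarations as the starting point (parameter names per liblzf):
```c
#include <stddef.h>

/* Length parameters and return types widened from unsigned int to
 * size_t so buffers larger than UINT32_MAX can be handled. */
size_t lzf_compress(const void *const in_data, size_t in_len,
                    void *out_data, size_t out_len);
size_t lzf_decompress(const void *const in_data, size_t in_len,
                      void *out_data, size_t out_len);
```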
-
sundb authored
This is a preparation step for adding a new test in quicklist.c; see #9776.
-
guoxiang1996 authored
The client flags field is a 64-bit integer, but the temporary value cached on the stack in call() was 32-bit. Luckily this doesn't lead to any bugs, since the only flags checked against this variable are below 32 bits.
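A minimal sketch of the bug pattern (structure and variable names illustrative, not the exact call() code):
```c
#include <stdint.h>

struct client { uint64_t flags; };

void call_sketch(struct client *c) {
    /* Old: a 32-bit local silently drops the upper 32 flag bits. */
    /* int client_old_flags = c->flags; */

    /* Fixed: cache the flags at their full 64-bit width. */
    uint64_t client_old_flags = c->flags;
    (void)client_old_flags;
}
```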
-
- 11 Nov, 2021 1 commit
-
-
Ozan Tezcan authored
- Added sanitizer support. `address`, `undefined` and `thread` sanitizers are available.
- To build Redis with the desired sanitizer: `make SANITIZER=undefined`
- There were some sanitizer findings; cleaned up the codebase.
- Added tests with address and undefined behavior sanitizers to the daily CI.
- Added tests with the address sanitizer to the per-PR CI (smoke out memory leaks sooner).

Basically, there are three types of issues:

**1- Unaligned load/store**: This issue would most probably cause a crash on a platform that does not support unaligned access. Redis does unaligned access only on supported platforms.

**2- Signed integer overflow**: Although signed overflow can be problematic from time to time and change how the compiler generates code, the current findings are mostly about signed shifts or simple addition overflow. For most platforms Redis can be compiled for, this wouldn't cause any issue as far as I can tell (checked the generated code on godbolt.org).

**3- Minor leak** (redis-cli), **use-after-free** (just before calling exit()): UB means nothing is guaranteed and it's risky to reason about program behavior, but I don't think any of the fixes here are worth backporting. As sanitizers are now part of the CI, preventing new issues will be the real benefit.
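An illustrative example of the signed-shift kind of finding (not an excerpt from this commit):
```c
#include <stdint.h>

uint32_t make_high_bit_mask(void) {
    /* UB: the literal 1 is a signed int, so this shifts into the sign bit. */
    /* return 1 << 31; */

    /* Well-defined: shift an unsigned operand instead. */
    return UINT32_C(1) << 31;
}
```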
-
- 09 Nov, 2021 2 commits
-
-
Jim Brunner authored
-
Eduardo Semprebon authored
During diskless replication, the check for a broken EOF mark was misplaced and should be done earlier. Now we do not swap dbs; we do proper cleanup and correctly raise module events on this kind of failure. This issue existed prior to #9323, but before that, the side effect was not restoring the backup and not raising the correct module events on this failure.
-
- 08 Nov, 2021 3 commits
-
-
Wen Hui authored
There was a memory leak when TLS is used in Sentinel. The leak is noticeable when some of the replicas are offline.
-
Yossi Gottlieb authored
* Clean up EINTR handling so EINTR does not change the connection state to begin with. * On TLS, catch EINTR and return it as-is before going through the OpenSSL error handling (which seems not to distinguish it from EAGAIN).
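A rough sketch of the TLS-side idea, assuming the standard OpenSSL calls (the real connection code differs): surface EINTR before the OpenSSL error translation folds it into generic syscall-error handling.
```c
#include <errno.h>
#include <openssl/ssl.h>

int tls_read_sketch(SSL *ssl, void *buf, int len) {
    int ret = SSL_read(ssl, buf, len);
    if (ret <= 0 && errno == EINTR) {
        return -1; /* report EINTR as-is; the caller may simply retry */
    }
    /* ...otherwise go through SSL_get_error() handling as usual... */
    return ret;
}
```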
-
Huang Zhw authored
-
- 07 Nov, 2021 1 commit
-
-
yoav-steinberg authored
This refactors all `CONFIG SET`s and conf-file loading arguments to go through the generic config handling interface.

Refactoring changes:
- All config params go through the `standardConfig` interface (some stuff that is only related to the config file and not the `CONFIG` command still has special handling for rewrite/config file parsing, `loadmodule` for example).
- Added a `MULTI_ARG_CONFIG` flag for configs to signify they receive a variable number of arguments instead of a single argument. This is used to break up space-separated arguments to `CONFIG SET` so the generic setter interface can pass multiple arguments to the setter function. When parsing the config file we also break up anything after the config name into multiple arguments to the setter function.

Interface changes:
- A side effect of the above interface is that the `bind` argument in the config file can be empty (no argument at all); this is treated the same as passing a single empty string argument (same as `save` already used to work).
- Support rewrite and setting `watchdog-period` from the config file (it was only supported by the CONFIG command until now).
- Another side effect is that the `save T X` config argument now supports multiple time-changes pairs in a single line, like its `CONFIG SET` counterpart. So in the config file you can either do:
```
save 3600 1
save 600 10
```
or do:
```
save 3600 1 600 10
```

Co-authored-by:
Bjorn Svensson <bjorn.a.svensson@est.tech>
-
- 05 Nov, 2021 1 commit
-
-
Binbin authored
-
- 04 Nov, 2021 3 commits
-
-
Eduardo Semprebon authored
For diskless replication in swapdb mode, considering we already spend replica memory keeping a backup of the current db to restore in case of failure, we can get the following benefits by instead swapping databases only once we've succeeded in transferring the db from the master:

- Avoid the `LOADING` response during failed and successful synchronization for cases where the replica is already up and running with data.
- Faster total time of diskless replication, because we move from Transfer + Flush + Load time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could also be implemented for disk-based replication, with similar benefits, if consumers are willing to spend the extra memory.

General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode only kicks in if the replica is syncing from a master that has the same repl-id it had before, i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading`, to differentiate it from blocking loading.
- The slot-to-key mapping is now a field of `redisDb`, as it's more natural to access it from both server.db and the tempDb that is passed around.
- Because this affects replicas only, we assume that if they are not read-only and take write commands during replication, those writes are lost after SYNC the same way as before, but we still deny CONFIG SET here anyway to avoid complications.

Considerations for review:
- We have many cases where the server.loading flag is used, and even though I tried my best, there may be cases where async_loading should be checked as well and cases where it shouldn't (this would require a very good understanding of the whole code).
- Several places that had different behavior depending on the loading flag were actually meant to just handle commands coming from the AOF client differently than ones coming from real clients; these were changed to check CLIENT_ID_AOF instead.

**Additional for Release Notes**
- Bugfix: server.dirty was not incremented for any kind of diskless replication; as an effect, it wouldn't contribute to triggering the next database SAVE.
- New flag for the RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING.
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event. Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED, ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions, to allow modules to declare they support diskless replication with async loading (when absent, we fall back to disk-based loading); see the module sketch after this entry.

Co-authored-by:
Eduardo Semprebon <edus@saxobank.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
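A minimal module sketch of opting in to async loading, using the flag and event names from the notes above; the sub-event constants implied by the handler follow the usual REDISMODULE_SUBEVENT_* naming and, like the callback body, are illustrative:
```c
#include "redismodule.h"

static void onReplAsyncLoad(RedisModuleCtx *ctx, RedisModuleEvent e,
                            uint64_t sub, void *data) {
    REDISMODULE_NOT_USED(ctx);
    REDISMODULE_NOT_USED(e);
    REDISMODULE_NOT_USED(data);
    /* React to the STARTED / ABORTED / COMPLETED sub-events here, e.g.
     * pause background work that assumes a stable keyspace. */
    (void)sub;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "asyncload_demo", 1, REDISMODULE_APIVER_1)
            == REDISMODULE_ERR) return REDISMODULE_ERR;
    /* Declare support for diskless replication with async loading;
     * without this flag the server falls back to disk-based loading. */
    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);
    RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_ReplAsyncLoad,
                                       onReplAsyncLoad);
    return REDISMODULE_OK;
}
```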
-
Itamar Haber authored
Introduced in #8179, this fixes the command's replies in the count 0 edge case. [BREAKING] Changes the reply type when count is 0 to an empty array (instead of nil). Moves the LPOP ... 0 fast exit path after the type check to reply with WRONGTYPE.
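An illustrative transcript of the new behavior, in the style of the ZPOPMIN example above:
```
127.0.0.1:6379> rpush mylist a b c
(integer) 3
127.0.0.1:6379> lpop mylist 0
(empty array)
127.0.0.1:6379> set str foo
OK
127.0.0.1:6379> lpop str 0
(error) WRONGTYPE Operation against a key holding the wrong kind of value
```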
-
menwen authored
When repl-diskless-load is enabled, the connection is set to the blocking state. The connection may be interrupted by a signal during a system call. This would have resulted in a disconnection and possibly a reconnection loop. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 03 Nov, 2021 3 commits
-
-
perryitay authored
Redis lists are stored in a quicklist, which is currently a linked list of ziplists. Ziplists are limited to storing elements no larger than 4GB, so when bigger items are added they get truncated. This PR changes quicklists so that they're capable of storing large items, in quicklist nodes that are plain string buffers rather than ziplists.

As part of the PR there were a few other changes in Redis:
1. New DEBUG sub-commands (see the illustrative transcript after this entry):
   - QUICKLIST-PACKED-THRESHOLD - sets the size threshold above which a node is stored plain rather than as a ziplist (default 1GB).
   - QUICKLIST <key> - shows low-level info about the quicklist encoding of <key>.
2. RDB format change:
   - A new type was added: RDB_TYPE_LIST_QUICKLIST_2.
   - A container type (packed / plain) was added to the beginning of the rdb object (before the actual node list).
3. Testing:
   - Tests that require over 100MB are skipped by default; a new flag was added to 'runtest' to run the large memory tests (not used by default).

Co-authored-by:
sundb <sundbcn@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
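An illustrative use of the new threshold sub-command (the size value here is arbitrary):
```
127.0.0.1:6379> debug quicklist-packed-threshold 1K
OK
```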
-
guybe7 authored
Add a new no-mandatory-keys flag to support COMMAND GETKEYS for commands which have no mandatory keys. In the past we would have gotten this error:
```
127.0.0.1:6379> command getkeys eval "return 1" 0
(error) ERR Invalid arguments specified for command
```
-
perryitay authored
When using SETNX and SETXX we could end up doing the key lookup twice. This carries a small inefficiency cost. Also, once we have statistics for write hits and misses, they would be wrong (recording the same key hit twice).
-
- 02 Nov, 2021 3 commits
-
-
yiyuaner authored
Co-authored-by:
guoyiyuan <guoyiyuan@sbrella.com>
-
Wang Yuan authored
Since the loop in incrementalTrimReplicationBacklog checks the size of histlen, we cannot afford to update it only when the loop exits; doing so may delete many more replication blocks than needed, leaving the replication backlog smaller than the configured size. Introduced in #9166 (see the sketch below). Co-authored-by:
sundb <sundbcn@gmail.com>
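A rough sketch of the corrected loop shape (stand-in types and names; the real function also coordinates with the shared-buffer refcounting described later in this log):
```c
#include <stddef.h>

typedef struct block { struct block *next; size_t size; int refcount; } block;

/* Trim from the head while the backlog exceeds its target size. The fix:
 * histlen is decremented inside the loop so the condition always sees the
 * current size; updating it only after the loop exits could delete far
 * more blocks than needed. */
block *trim_backlog(block *head, size_t *histlen, size_t target) {
    while (head && *histlen > target && head->refcount == 0) {
        *histlen -= head->size;  /* update before the next check */
        head = head->next;       /* (the real code frees the node) */
    }
    return head;
}
```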
-
zhaozhao.zz authored
After PR #9166, the replication backlog is not a real block of memory; it just contains a reference pointing to a replication buffer block, plus an index of the blocks (to accelerate searching for an offset during partial sync). So we need to update both the replication buffer blocks' offsets and the replication backlog block index's offsets when the master restarts from an RDB, since `server.master_repl_offset` is changed. The implication of this bug was just a slow search, not a replication failure.
-
- 01 Nov, 2021 2 commits
-
-
Oran Agra authored
The module test in reply.tcl was introduced by #8521 but didn't run until recently (see #9639), and then it started failing with valgrind. This is because valgrind uses a 64-bit long double (unlike most other platforms, which have at least 80 bits). Besides valgrind, the tests were also incompatible with ARM32, which also uses 64-bit long doubles. We now use appropriate values to avoid issues with either valgrind or ARM32.

In all the double tests, I use 3.141, which is safe since addReplyDouble uses `%.17Lg`, which is able to represent this value without adding any digits due to precision loss. For the long double, since we use `%.17Lf` in ld2string, it preserves 17 digits after the decimal point, rather than 17 significant digits (as `%.17Lg` does). So to make these similar, I use a value lower than 1 (no digits left of the period).

Lastly, we have the same issue with TCL (no long doubles), so we read the raw protocol in that test.

Note that the only error before this fix (in both valgrind and ARM32) is this:
```
*** [err]: RM_ReplyWithLongDouble: a float reply in tests/unit/moduleapi/reply.tcl
Expected '3.141' to be equal to '3.14100000000000001' (context: type eval line 2 cmd {assert_equal 3.141 [r rw.longdouble 3.141]} proc ::test)
```
So the changes to debug.c and scripting.tcl aren't really needed, but I consider them a cleanup (i.e. scripting.c validated a different constant than the one that's sent to it from debug.c). Another unrelated change is to add the RESP version to the repeated tests in reply.tcl.
-
罗泽轩 authored
These definitions already exist in server.h.
-
- 31 Oct, 2021 3 commits
-
-
Binbin authored
The previous code did not check whether COUNT was already set, so something like `lmpop 2 key1 key2 left count 1 count 2` was accepted. This situation can occur in the LMPOP/BLMPOP/ZMPOP/BZMPOP commands. LMPOP/BLMPOP were introduced in #9373, ZMPOP/BZMPOP in #9484.
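An illustrative transcript of the fixed behavior, assuming the duplicate keyword is now rejected as a syntax error:
```
127.0.0.1:6379> lmpop 2 key1 key2 left count 1 count 2
(error) ERR syntax error
```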
-
lijinliang authored
Co-authored-by:
lijinliang <lijl@newdt.cn>
-
Rafi Einstein authored
-
- 27 Oct, 2021 2 commits
- 26 Oct, 2021 1 commit
-
-
Wen Hui authored
-
- 25 Oct, 2021 6 commits
-
-
Wang Yuan authored
Add timestamp annotations in the AOF, one part of #9325. Enabled with the new `aof-timestamp-enabled` config option. The timestamp annotation format is "#TS:${timestamp}\r\n"; "TS" is short for timestamp, and this short form saves extra bytes in the AOF. We can use timestamp annotations for some special functions:
- knowing the execution time of commands
- restoring data to a specific point in time (by using redis-check-aof to truncate the file)
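For illustration, an AOF fragment with a timestamp annotation preceding a command (timestamp value arbitrary):
```
#TS:1635734400
*3
$3
SET
$3
foo
$3
bar
```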
-
Oran Agra authored
Improve the code doc for allowed_firstargs (which used to be allowed_commands before #9504). I don't think the text in the code needs to refer to the history (it's not there just for backwards compatibility); instead it should just describe what it does.
-
Guy Korland authored
REDISMODULE_POSTPONED_ARRAY_LEN is deprecated; use REDISMODULE_POSTPONED_LEN instead.
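A minimal sketch of the usual postponed-length reply pattern using the new constant (command body illustrative):
```c
#include "redismodule.h"

int DemoReply_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    /* Postpone the length until we know how many elements we emit. */
    RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN);
    long count = 0;
    for (long i = 0; i < 3; i++) {
        RedisModule_ReplyWithLongLong(ctx, i);
        count++;
    }
    RedisModule_ReplySetArrayLength(ctx, count);
    return REDISMODULE_OK;
}
```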
-
Itamar Haber authored
Overlooked in #9504.
-
Shaya Potter authored
Lets modules use an additional type of RESP3 response (unused by Redis so far). Also fixes tests that were introduced in #8521 but didn't actually run. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Wang Yuan authored
## Background

For a Redis master, each replica uses its own copy of the replication buffer. That is a big waste of memory - the more replicas, the more waste - and allocating/freeing memory for every reply list also costs a lot. If we set client-output-buffer-limit small and the write traffic is heavy, the master may disconnect from replicas and never finish synchronization with them. If we set client-output-buffer-limit big, the master may OOM when there are many replicas that each keep a lot of memory. Because the replication buffers of different replica clients have the same content, one simple idea is to have all replicas use one shared replication buffer, which effectively saves memory.

Since the replication backlog content is the same as the replicas' output buffers, we can now discard the replication backlog memory and use the global shared replication buffer to implement the replication backlog mechanism.

## Implementation

I create one global "replication buffer" which contains the content of the replication stream. The structure of the "replication buffer" is similar to the reply list that exists in every client, but the node of the list is `replBufBlock`, which has `id`, `repl_offset` and `refcount` fields.

```c
/* Replication buffer blocks is the list of replBufBlock.
 *
 * +--------------+       +--------------+       +--------------+
 * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
 * +--------------+       +--------------+       +--------------+
 *      |                                            /       \
 *      |                                           /         \
 *      |                                          /           \
 *  Repl Backlog                              Replica_A     Replica_B
 *
 * Each replica or replication backlog increments only the refcount of the
 * 'ref_repl_buf_node' which it points to. So when replica walks to the next
 * node, it should first increase the next node's refcount, and when we trim
 * the replication buffer nodes, we remove node always from the head node which
 * refcount is 0. If the refcount of the head node is not 0, we must stop
 * trimming and never iterate the next node. */

/* Similar with 'clientReplyBlock', it is used for shared buffers between
 * all replica clients and replication backlog. */
typedef struct replBufBlock {
    int refcount;           /* Number of replicas or repl backlog using. */
    long long id;           /* The unique incremental number. */
    long long repl_offset;  /* Start replication offset of the block. */
    size_t size, used;
    char buf[];
} replBufBlock;
```

So now, when we feed the replication stream into the replication backlog and all replicas, we only need to feed the stream into the replication buffer (`feedReplicationBuffer`). In this function, we set some fields of the replication backlog and replicas to references of the global replication buffer blocks. We also need to check the replicas' output buffer limit and free replicas that exceed `client-output-buffer-limit`, and trim the replication backlog if it exceeds `repl-backlog-size`. When sending a reply to replicas, we also need to iterate the replication buffer blocks and send their content; when a block has been fully sent to a replica, we decrease the current node's refcount and increase the next node's refcount, and then free any block whose refcount is 0 from the head of the replication buffer blocks.

Since we now use a linked list to manage the replication backlog, it may cost a lot of time to iterate all list nodes to find the corresponding replication buffer node. So we create a rax tree to index some of the nodes, but to avoid the rax tree occupying too much memory, I record one node per 64 nodes in the index.

Currently, to make partial resynchronization possible as much as we can, we always let the replication backlog hold the last reference to the replication buffer blocks. The backlog size may exceed our setting if slow replicas reference vast replication buffer blocks, but this method doesn't increase memory usage since they share the replication buffer. To avoid freezing the server while freeing unreferenced replication buffer blocks when we need to trim the backlog for exceeding the backlog size setting, we trim the backlog incrementally (freeing 64 blocks per call now), and make it faster in `beforeSleep` (freeing 640 blocks).

### Other changes
- `mem_total_replication_buffers`: we add this field to the INFO command; it is the total memory used by replication buffers.
- `mem_clients_slaves`: even when a replica is slow to replicate and its output buffer memory is not 0, the reported value may still be 0, since the replication backlog and replicas share one global replication buffer. Only if the replication buffer memory is more than the configured repl backlog size do we consider the excess as the replicas' memory; otherwise, we consider the replication buffer memory to be the consumption of the repl backlog.
- Key eviction: since all replicas and the replication backlog share the global replication buffer, we consider only the part exceeding the backlog size as the extra, separate consumption of replicas. Because we trim the backlog incrementally in the background, the backlog size may exceed the setting if slow replicas that reference vast replication buffer blocks disconnect. To avoid a massive eviction loop, we don't count the delay-freed replication backlog into used memory even if there are no replicas, i.e. we also regard this memory as the replicas' memory.
- `client-output-buffer-limit` check for replica clients: it doesn't make sense to set the replica clients' output buffer limit lower than the repl-backlog-size config (partial sync would succeed and then the replica would get disconnected). Such a configuration is ignored (the value of repl-backlog-size is used instead). This has no memory consumption implications, since the replica client shares the backlog buffers' memory.
- Drop the replication backlog after loading data if needed: we always create a replication backlog if the server is a master - we need it because we put DELs in it when loading expired keys from the RDB - but if the RDB doesn't have replication info, or there is no RDB at all, partial resynchronization is impossible, so to avoid the extra memory of the replication backlog, we drop it.
- Multi IO threads: since all replicas and the replication backlog use the global replication buffer, if I/O threads are enabled, then to guarantee thread-safe data access, the main thread must handle sending the output buffer to all replicas. Before this change, other IO threads could handle sending the output buffers of all replicas.

## Other optimizations

This solution resolves some other problems:
- When replicas disconnect from the master for exceeding the output buffer limit, releasing the replicas' output buffers could freeze the server if we set a big `client-output-buffer-limit` for replicas; now it doesn't cause freezing.
- This implementation mitigates the cost of copying the reply list (which also freezes the server) when one replica has a huge reply buffer and another replica starts a full synchronization; now we just copy the reference info, which is very lightweight.
- If we set the replication backlog size big, it could also cost a lot of time to copy the replication backlog into a replica's output buffer; this commit eliminates that problem.
- Resizing the replication backlog doesn't empty the current replication backlog content.
-
- 24 Oct, 2021 2 commits
-
-
Oran Agra authored
I moved a bunch of stats updates in redisFork to be executed only on a successful fork, since it seems wrong to record them when it failed. I guess that when fork fails it returns immediately, so there's no latency spike to record.
-
Itamar Haber authored
Introduced via typo in #9504. Also adds a sanity test for coverage.
-