- 21 Dec, 2021 1 commit
-
-
zhugezy authored
# Background

The main goal of this PR is to remove the logic for Lua script verbatim replication, keeping only effect replication, which has been the default since Redis 5.0. As a result, Lua in Redis 7.0 behaves the same as Redis 6.0 with the default configuration, from the user's point of view.

There are many reasons to remove verbatim replication. Antirez listed some of the benefits in issue #5292:
>1. No longer need to explain to users side effects into scripts. They can do whatever they want.
>2. No need for a cache about scripts that we sent or not to the slaves.
>3. No need to sort the output of certain commands inside scripts (SMEMBERS and others): this both simplifies and gains speed.
>4. No need to store scripts inside the RDB file in order to startup correctly.
>5. No problems about evicting keys during the script execution.

Back in Redis 5.0, antirez and the core team decided to set `lua-replicate-commands yes` by default instead of removing verbatim replication outright, in case problems came up. Three years later, ahead of Redis 7.0, it's time to remove it formally.

# Changes

- Configuration for lua-replicate-commands removed
  - created a config file stub for backward compatibility
- Replication script cache removed
  - this is useless under script effects replication
  - relevant statistics also removed
- Script persistence in RDB files removed
- Propagation of SCRIPT LOAD and SCRIPT FLUSH to replica / AOF removed
- Deterministic execution logic in scripts removed (i.e. don't run write commands after random ones, and sorting output of commands with random order)
  - the flags indicating which commands have non-deterministic results are kept as hints to clients
- `redis.replicate_commands()` & `redis.set_repl()` changed
  - `redis.replicate_commands()` now does nothing and returns 1
  - `redis.set_repl()` can now be issued before `redis.replicate_commands()`
- Relevant TCL test cases adjusted
- DEBUG lua-always-replicate-commands removed

# Other changes

- Fix a recent bug comparing CLIENT_ID_AOF to original_client->flags instead of id (introduced in #9780).

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 20 Dec, 2021 1 commit
-
-
Binbin authored
Recent PRs have introduced some CI failures; this commit tries to fix them. Here are the changes:

1. Enable debug-command in the sentinel tests.
```
Master reboot in very short time: ERR DEBUG command not allowed. If the enable-debug-command option is set to "local", you can run it from a local connection, otherwise you need to set this option in the configuration file, and then restart the server.
```
2. Enable protected-config in the sentinel tests.
```
SDOWN is triggered by misconfigured instance replying with errors: ERR CONFIG SET failed (possibly related to argument 'dir') - can't set protected config
```
3. Enable debug-command in the cluster tests.
```
Verify slaves consistency: ERR DEBUG command not allowed. If the enable-debug-command option is set to "local", you can run it from a local connection, otherwise you need to set this option in the configuration file, and then restart the server.
```
4. quicklist fill should be a signed int. Changing `int fill: QL_FILL_BITS` to `signed int fill: QL_FILL_BITS` eliminates a compiler warning, since the signedness of a plain `int` bit-field is implementation-defined (see the sketch below).

The first three were introduced in #9920 (same issue), and the last one in #9962.
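For context on item 4, here is a minimal sketch (hypothetical struct names, not the real quicklist code) of why a plain `int` bit-field can warn: its signedness is implementation-defined, so a negative fill factor may not round-trip.

```c
#include <stdio.h>

/* Minimal sketch, not the actual quicklist struct: with a plain "int"
 * bit-field the compiler may treat the field as unsigned, so a negative
 * fill factor such as -2 could read back as a large positive value.
 * Declaring it "signed int" makes the behavior well defined. */
struct node_plain  { int        fill : 6; };
struct node_signed { signed int fill : 6; };

int main(void) {
    struct node_plain  p = { .fill = -2 }; /* implementation-defined signedness */
    struct node_signed s = { .fill = -2 }; /* guaranteed to hold -2 */
    printf("plain: %d, signed: %d\n", p.fill, s.fill);
    return 0;
}
```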
-
- 19 Dec, 2021 1 commit
-
-
YaacovHazan authored
Block sensitive configs and commands by default.

* `enable-protected-configs` - block modification of configs with the new `PROTECTED_CONFIG` flag. Currently we add this flag to the `dbfilename` and `dir` configs, all of which are non-mutable configs that can set a file redis will write to.
* `enable-debug-command` - block the `DEBUG` command
* `enable-module-command` - block the `MODULE` command

These have a default value of `no`, so that these features are not exposed by default to client connections, and can only be enabled by modifying the config file. Users can change each of these to either `yes` (allow all access) or `local` (allow access from local TCP connections and unix domain socket connections).

Note that this is a **breaking change** (specifically the part about the MODULE command being disabled by default). We don't consider blocking the DEBUG command an issue (people shouldn't have been using it), and the few configs we protected are unlikely to have been set at runtime anyway. On the other hand, it's reasonable to assume that some users who use modules load them from the config file anyway. Still, that's the whole point of this PR: making redis more secure by default and reducing the attack surface for innocent users, so secure defaults necessarily mean a breaking change. A minimal sketch of the gating logic is shown below.
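A minimal sketch of how such a yes/local/no gate could work, assuming hypothetical names (this is not the actual server code):

```c
#include <stdio.h>

/* Minimal sketch of the gating logic (hypothetical names): a protected
 * config/command is allowed when the corresponding enable-* option is
 * "yes", allowed only for connections coming from localhost or a unix
 * socket when it is "local", and rejected otherwise. */
typedef enum { PROTECTED_DENY, PROTECTED_LOCAL, PROTECTED_ALLOW } protectedMode;

static int protectedActionAllowed(protectedMode mode, int conn_is_local) {
    if (mode == PROTECTED_ALLOW) return 1;
    if (mode == PROTECTED_LOCAL && conn_is_local) return 1;
    return 0;
}

int main(void) {
    /* e.g. "enable-debug-command local" with a remote TCP client */
    printf("remote DEBUG allowed: %d\n", protectedActionAllowed(PROTECTED_LOCAL, 0));
    /* same setting, connection over a unix domain socket */
    printf("local DEBUG allowed:  %d\n", protectedActionAllowed(PROTECTED_LOCAL, 1));
    return 0;
}
```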
-
- 18 Dec, 2021 1 commit
-
-
guybe7 authored
Some languages can build a JSON-like object by parsing textual JSON, but this works poorly when attribute names contain hyphens. Example in JS:
```
let j = JSON.parse(json)
j['key-name'] // works
j.key-name    // illegal syntax
```
-
- 17 Dec, 2021 1 commit
-
-
ny0312 authored
Introduce memory management on cluster link buffers:
* Introduce a new `cluster-link-sendbuf-limit` config that caps memory usage of cluster bus link send buffers.
* Introduce a new `CLUSTER LINKS` command that displays current TCP links to/from peers.
* Introduce a new `mem_cluster_links` field under `INFO` command output, which displays the overall memory usage by all current cluster links.
* Introduce a new `total_cluster_links_buffer_limit_exceeded` field under `CLUSTER INFO` command output, which displays the accumulated count of cluster links freed due to `cluster-link-sendbuf-limit`.
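A minimal sketch of the cap, assuming a hypothetical link struct (this is not the actual cluster.c code): when a link's send buffer exceeds the configured limit, the link is dropped and the exceeded counter is incremented.

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch (hypothetical struct): a cluster bus link whose send
 * buffer grows beyond cluster-link-sendbuf-limit is freed (it will be
 * re-established later) and a counter exported via CLUSTER INFO grows. */
typedef struct clusterLinkSketch {
    size_t sendbuf_used;     /* bytes queued on this link */
} clusterLinkSketch;

static unsigned long long links_buffer_limit_exceeded = 0;

static int maybeFreeLinkOverLimit(clusterLinkSketch **link, size_t limit_bytes) {
    if (limit_bytes == 0) return 0;                 /* 0 is assumed to mean "no limit" */
    if ((*link)->sendbuf_used <= limit_bytes) return 0;
    free(*link);                                    /* drop the link and its buffer */
    *link = NULL;
    links_buffer_limit_exceeded++;
    return 1;
}

int main(void) {
    clusterLinkSketch *link = malloc(sizeof(*link));
    link->sendbuf_used = 2 * 1024 * 1024;                     /* 2MB queued */
    int freed = maybeFreeLinkOverLimit(&link, 1024 * 1024);   /* 1MB limit */
    printf("freed: %d, total exceeded: %llu\n", freed, links_buffer_limit_exceeded);
    return 0;
}
```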
-
- 16 Dec, 2021 1 commit
-
-
ranshid authored
* Fix too-long unix domain socket file path. Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com>
-
- 15 Dec, 2021 2 commits
-
-
guybe7 authored
Delete the hardcoded command table and replace it with an auto-generated table, based on a JSON file that describes the commands (each command must have a JSON file). These JSON files are the SSOT of everything there is to know about Redis commands, and this is reflected fully in COMMAND INFO. The JSON files are used to generate commands.c (using a python script), which is then committed to the repo and compiled.

The purpose is:
* Clients and proxies will be able to get much more info from redis, instead of relying on hard-coded logic.
* Drop the dependency between Redis-user and the commands.json in redis-doc.
* Delete help.h and have redis-cli learn everything it needs to know just by issuing COMMAND (will be done in a separate PR).
* redis.io should stop using commands.json and learn everything from Redis (ultimately one of the release artifacts should be a large JSON, containing all the information about all of the commands, which will be generated from COMMAND's reply).
* The byproduct of this is:
  * module commands will be able to provide that info and possibly be more of first-class citizens
  * in theory, one may be able to generate a redis client library for a strictly typed language by using this info.

### Interface changes

#### COMMAND INFO's reply change (and arg-less COMMAND)

Before this commit the reply at index 7 contained the key-specs list and the reply at index 8 contained the sub-commands list (both unreleased). Now, the reply at index 7 is a map of:
- summary - short command description
- since - debut version
- group - command group
- complexity - complexity string
- doc-flags - flags used for documentation (e.g. "deprecated")
- deprecated-since - if deprecated, from which version?
- replaced-by - if deprecated, which command replaced it?
- history - a list of (version, what-changed) tuples
- hints - a list of strings, meant to provide hints for clients/proxies. See https://github.com/redis/redis/issues/9876
- arguments - an array of arguments. Each element is a map, with the possibility of nesting (sub-arguments)
- key-specs - an array of key specs (already in unstable, just changed location)
- subcommands - a list of sub-commands (already in unstable, just changed location)
- reply-schema - will be added in the future (see https://github.com/redis/redis/issues/9845)

More details on these can be found in https://github.com/redis/redis-doc/pull/1697. Only the first three fields are mandatory.

#### API changes (unreleased API obviously)

The following now take a RedisModuleCommand opaque pointer instead of looking up the command by name:
- RM_CreateSubcommand
- RM_AddCommandKeySpec
- RM_SetCommandKeySpecBeginSearchIndex
- RM_SetCommandKeySpecBeginSearchKeyword
- RM_SetCommandKeySpecFindKeysRange
- RM_SetCommandKeySpecFindKeysKeynum

Currently, we did not add a module API to provide additional information about module commands because we couldn't agree on what the API should look like, see https://github.com/redis/redis/issues/9944.

### Somewhat related changes

1. Literals should be in uppercase while placeholders are in lowercase. Now all the GEO* commands will be documented with M|KM|FT|MI and can take both lowercase and uppercase.

### Unrelated changes

1. Bugfix: no_mandatory_keys was absent from COMMAND's reply
2. Expose CMD_MODULE as "module" via COMMAND
3. Have a dedicated uint64 for ACL categories (instead of having them in the same uint64 as command flags)

Co-authored-by:
Itamar Haber <itamar@garantiadata.com>
-
丽媛自己动 authored
to do here
-
- 08 Dec, 2021 1 commit
-
-
Binbin authored
`SENTINEL SET` can be used in these ways:
1. SENTINEL SET mymaster quorum 3
2. SENTINEL SET mymaster quorum 5 parallel-syncs 1

`SENTINEL SIMULATE-FAILURE`, although it is only used for testing, can be used in these ways:
1. SENTINEL SIMULATE-FAILURE CRASH-AFTER-ELECTION
2. SENTINEL SIMULATE-FAILURE CRASH-AFTER-ELECTION CRASH-AFTER-PROMOTION
-
- 07 Dec, 2021 1 commit
-
-
yoav-steinberg authored
When disabling redis oom-score-adj management we restore the base value read before enabling oom-score-adj management.

This fixes an issue introduced in #9748 where updating `oom-score-adj-values` while `oom-score-adj` was set to `no` would write the base oom score adj value read on startup to `/proc`. This is a bug, since while `oom-score-adj` is disabled we should never write to /proc and should let external processes manage it.

Added appropriate tests.
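A minimal sketch of the intended behavior, with hypothetical names and a variable standing in for the real `/proc` file (not the actual config handlers):

```c
#include <stdio.h>

/* Minimal sketch (hypothetical names): writes to /proc only happen while
 * management is enabled; turning it off writes back the base value read
 * before we first touched it, and further updates become no-ops. */
static int oom_score_adj_enabled = 0;
static int oom_score_adj_base = 0;      /* value remembered when management starts */
static int current_proc_value = 0;      /* stands in for /proc/<pid>/oom_score_adj */

static void writeOomScoreAdj(int value) { current_proc_value = value; }

static void applyOomScoreAdj(int enabled, int configured_value) {
    if (enabled && !oom_score_adj_enabled) {
        oom_score_adj_base = current_proc_value;   /* remember base before managing */
        oom_score_adj_enabled = 1;
    }
    if (!enabled) {
        if (oom_score_adj_enabled) {
            writeOomScoreAdj(oom_score_adj_base);  /* restore base on disable */
            oom_score_adj_enabled = 0;
        }
        return;                                    /* never write while disabled */
    }
    writeOomScoreAdj(configured_value);
}

int main(void) {
    current_proc_value = -100;        /* pretend the parent process set a base value */
    applyOomScoreAdj(0, 200);         /* disabled: must not touch /proc */
    printf("after disabled update: %d\n", current_proc_value);
    applyOomScoreAdj(1, 200);         /* enabled: manage the value */
    printf("after enable: %d\n", current_proc_value);
    applyOomScoreAdj(0, 200);         /* disable again: restore base */
    printf("after disable: %d\n", current_proc_value);
    return 0;
}
```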
-
- 02 Dec, 2021 1 commit
-
-
meir@redislabs.com authored
The Redis Functions unit is located inside functions.c and contains the Redis Functions implementation:

1. FUNCTION commands:
   * FUNCTION CREATE
   * FCALL
   * FCALL_RO
   * FUNCTION DELETE
   * FUNCTION KILL
   * FUNCTION INFO
2. Engine registration

In addition, this commit introduces the first engine that uses the Redis Functions capabilities, the Lua engine.
-
- 01 Dec, 2021 3 commits
-
-
meir@redislabs.com authored
The script unit is a new unit located in script.c. Its purpose is to provide an API for functions (and eval) to interact with Redis. Interaction mostly means executing commands, but also functionality like calling back into Redis on long-running scripts or checking whether the script was killed. The interaction is done using a scriptRunCtx object that needs to be created by the user and initialized using scriptPrepareForRun.

Detailed list of functionality exposed by the unit:
1. Calling commands (including all the validation checks such as acl, cluster, read only run, ...)
2. Set RESP
3. Set replication method (AOF/REPLICATION/NONE)
4. Call back into Redis on long-running scripts to allow Redis to reply to clients and perform script kill

The commit introduces the new unit and uses it in the eval commands to interact with Redis.
-
meir@redislabs.com authored
The following variables were renamed:
1. lua_caller -> script_caller
2. lua_time_limit -> script_time_limit
3. lua_timedout -> script_timedout
4. lua_oom -> script_oom
5. lua_disable_deny_script -> script_disable_deny_script
6. in_eval -> in_script

The following variables were moved to lctx under eval.c:
1. lua
2. lua_client
3. lua_cur_script
4. lua_scripts
5. lua_scripts_mem
6. lua_replicate_commands
7. lua_write_dirty
8. lua_random_dirty
9. lua_multi_emitted
10. lua_repl
11. lua_kill
12. lua_time_start
13. lua_time_snapshot

This commit has a low risk of introducing issues; it just moves variables around without changing any logic.
-
yoav-steinberg authored
We can now do: `config set maxmemory 10m repl-backlog-size 5m`

## Basic algorithm to support "transaction like" config sets:
1. Backup all relevant current values (via get).
2. Run "verify" and "set" on everything; if we fail, run "restore".
3. Run "apply" on everything (optional optimization: skip functions already run). If we fail, run "restore".
4. Return success.

### restore
1. Run set on everything in backup. If we fail, log it and continue (this puts us in an undefined state but we decided it's better than the alternative of panicking). This indicates either a bug or some unsupported external state.
2. Run apply on everything in backup (optimization: skip functions already run). If we fail, log it (see comment above).
3. Return error.

A minimal sketch of this backup/set/restore flow appears below.

## Implementation/design changes:
* Apply functions are idempotent (have no effect if they are run more than once for the same config).
* There is no indication in set functions whether we're reading the config file or running from the `CONFIG SET` command (removed the `update` argument).
* A set function should set some config variable and assume an (optional) apply function will use it later to apply the change. If we know a setting can be safely applied immediately, can always be reverted, and doesn't depend on any other configuration, we can apply it immediately from within the set function (and not store the setting anywhere). This is the case for the `dir` config, for example, which has no apply function. No apply function is needed either when setting the variable in the `server` struct is all that needs to be done to make the configuration take effect. Note that the original concept of `update_fn`, which received the old and new values, was removed and replaced by the optional apply function.
* Apply functions use settings written to the `server` struct and don't receive any inputs.
* For the generic (non-special) configs, if there's no change I avoid calling the setter (possible optimization: avoid calling the apply function as well).
* Passing the same config parameter more than once to `config set` will fail. You can't do `config set my-setting value1 my-setting value2`.

Note that getting `save` in the context of the conf file parsing to work here as before was a pain. The conf file supports an aggregate `save` definition, where each `save` line is added to the server's save params. This is unlike any other line in the config file where each line overwrites any previous configuration. Since we now support passing multiple save params in a single line (see top comments about `save` in https://github.com/redis/redis/pull/9644) we should deprecate the aggregate nature of this config line and perhaps reduce this ugly code in the future.
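A minimal sketch of the "transaction like" flow described above, using hypothetical types and names rather than the actual config.c code:

```c
#include <stdio.h>
#include <string.h>

/* Minimal sketch: back up current values, run set on all of them,
 * and on any failure restore the backup before returning an error. */
typedef struct {
    const char *name;
    char value[32];
} configEntry;

static int setConfig(configEntry *c, const char *v) {
    if (strlen(v) >= sizeof(c->value)) return 0;   /* "verify" failed */
    strcpy(c->value, v);                           /* "set" */
    return 1;
}

static int configSetMulti(configEntry *cfgs, const char **newvals, int n) {
    char backup[8][32];
    for (int i = 0; i < n; i++) strcpy(backup[i], cfgs[i].value);       /* 1. backup */
    for (int i = 0; i < n; i++) {
        if (!setConfig(&cfgs[i], newvals[i])) {                         /* 2. verify+set */
            for (int j = 0; j < n; j++) setConfig(&cfgs[j], backup[j]); /* restore */
            return 0;
        }
    }
    /* 3. the idempotent "apply" step would run here, once per apply function */
    return 1;                                                           /* 4. success */
}

int main(void) {
    configEntry cfgs[2] = { {"maxmemory", "0"}, {"repl-backlog-size", "1mb"} };
    const char *vals[2] = { "10mb", "5mb" };
    printf("ok=%d maxmemory=%s repl-backlog-size=%s\n",
           configSetMulti(cfgs, vals, 2), cfgs[0].value, cfgs[1].value);
    return 0;
}
```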
-
- 28 Nov, 2021 1 commit
-
-
sundb authored
Remove lcsGetKeys to clean up the remaining STRALGO leftovers after #9733; i.e. LCS still used a getkeys_proc which was still looking for the KEYS or STRINGS arguments.
-
- 24 Nov, 2021 1 commit
-
-
sundb authored
Part three of implementing #8702, following #8887 and #9366.

## Description of the feature
1. Replace the ziplist container of quicklist with listpack.
2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.

## Interface changes
1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
2. Replace the `debug ziplist` command with `debug listpack`.

## Internal changes
1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
2. Add `lpRepr` to print info of a listpack, used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357). It represents that a quicklistNode is a packed node, as opposed to a plain node.
4. Remove the `createZiplistObject` method, which is never used.
5. Calculate listpack entry size using an overhead overestimation in `quicklistAllowInsert`. We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.

## Improvements
1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
2. Optimize `quicklistAppendPlainNode` to avoid a memcpy of the data.

## Bugfix
1. Fix a crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.

## Test
1. Add a unit test for `lpMerge`.
2. Modify the old quicklist ziplist corrupt dump test.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 23 Nov, 2021 1 commit
-
-
guybe7 authored
Some people complain that QUIT is missing from the help/command table: it doesn't appear in the COMMAND command, command stats, ACL, etc., and instead there's a hack in processCommand with a comment that looks outdated. Note that it is [documented](https://redis.io/commands/quit).

At the same time, HOST: and POST are in the command table although these are not real commands. They would appear in the COMMAND command, and even in commandstats.

Other changes:
1. Initialize the static logged_time variable in securityWarningCommand
2. Add the `no-auth` flag to RESET so it can always be executed.
-
- 18 Nov, 2021 2 commits
-
-
Eduardo Semprebon authored
Currently PING returns a different status when the server is not serving data, for example when `LOADING` or `BUSY`. But the same was not true for `MASTERDOWN`. This commit makes PING reply with `MASTERDOWN` when replica-serve-stale-data=no and the link to the master is down.
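A minimal sketch of the new check, assuming hypothetical names rather than the actual server code:

```c
#include <stdio.h>

/* Minimal sketch: a replica that may not serve stale data and whose link
 * to its master is down answers -MASTERDOWN to PING instead of +PONG. */
typedef struct {
    int is_replica;
    int serve_stale_data;   /* replica-serve-stale-data */
    int master_link_up;
} serverState;

static const char *pingReply(const serverState *s) {
    if (s->is_replica && !s->serve_stale_data && !s->master_link_up)
        return "-MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'.";
    return "+PONG";
}

int main(void) {
    serverState s = { .is_replica = 1, .serve_stale_data = 0, .master_link_up = 0 };
    printf("%s\n", pingReply(&s));
    return 0;
}
```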
-
guybe7 authored
Drop the STRALGO command; now LCS is a command of its own and it only works on keys (not input strings).

The motivation is that STRALGO's syntax was really messed up:
- It assumes all (future) string algorithms will take similar arguments.
- It mixes a command that takes keys and one that doesn't in the same command.
- It makes it nearly impossible to expose the right key spec in COMMAND INFO (an issue for cluster clients).
- It's hard for cluster clients to determine the key names (firstkey, lastkey, etc).
- It's hard for ACL / flags (is it a read command?).

This is a breaking change.
-
- 16 Nov, 2021 2 commits
-
-
sundb authored
This is a preparation step in order to add a new test in quicklist.c, see #9776.
-
guoxiang1996 authored
The client flags field is a 64-bit integer, but the temporary value cached on the stack in call() is 32 bits. Luckily this doesn't lead to any bugs, since the only flags used against this variable fit below 32 bits.
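A minimal sketch of the bug class being fixed (hypothetical flag name, not the actual call() code):

```c
#include <stdint.h>
#include <stdio.h>

/* Caching a 64-bit flags field in a 32-bit local silently drops the high
 * bits, so any flag defined above bit 31 would be lost. */
#define CLIENT_SOME_HIGH_FLAG (1ULL << 40)   /* hypothetical flag above 32 bits */

int main(void) {
    uint64_t client_flags = CLIENT_SOME_HIGH_FLAG;

    int cached_narrow = (int)client_flags;   /* old code: high bits truncated (0 in practice) */
    uint64_t cached_wide = client_flags;     /* fixed code: full width kept */

    printf("narrow sees flag: %d\n", (cached_narrow & CLIENT_SOME_HIGH_FLAG) != 0);
    printf("wide sees flag:   %d\n", (cached_wide & CLIENT_SOME_HIGH_FLAG) != 0);
    return 0;
}
```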
-
- 07 Nov, 2021 1 commit
-
-
yoav-steinberg authored
This refactoring makes all `CONFIG SET`s and conf file loading arguments go through the generic config handling interface.

Refactoring changes:
- All config params go through the `standardConfig` interface (some things which are only related to the config file and not the `CONFIG` command still have special handling for rewrite/config file parsing, `loadmodule`, for example).
- Added a `MULTI_ARG_CONFIG` flag for configs to signify they receive a variable number of arguments instead of a single argument. This is used to break up space-separated arguments to `CONFIG SET` so the generic setter interface can pass multiple arguments to the setter function. When parsing the config file we also break up anything after the config name into multiple arguments to the setter function.

Interface changes:
- A side effect of the above interface is that the `bind` argument in the config file can be empty (no argument at all). This is treated the same as passing a single empty string argument (same as `save` already used to work).
- Support rewrite and setting `watchdog-period` from the config file (was only supported by the CONFIG command till now).
- Another side effect is that the `save T X` config argument now supports multiple time/changes pairs in a single line like its `CONFIG SET` counterpart. So in the config file you can either do:
```
save 3600 1
save 600 10
```
or do
```
save 3600 1 600 10
```

Co-authored-by:
Bjorn Svensson <bjorn.a.svensson@est.tech>
-
- 04 Nov, 2021 1 commit
-
-
Eduardo Semprebon authored
For diskless replication in swapdb mode, considering we already spend replica memory keeping a backup of the current db to restore in case of failure, we can get the following benefits by instead swapping databases only once we have succeeded in transferring the db from the master:

- Avoid a `LOADING` response during failed and successful synchronization for cases where the replica is already up and running with data.
- Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could be implemented also for disk-based replication with similar benefits, if consumers are willing to pay the extra memory usage.

General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode will only kick in if the replica is syncing from a master that has the same repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading`, to differentiate from blocking loading.
- The Slot to Key mapping is now a field of `redisDb`, as it's more natural to access it from both server.db and the tempDb that is passed around.
- Because this affects replicas only, we assume that if they are not readonly and accept write commands during replication, those writes are lost after SYNC, same as before, but we're still denying CONFIG SET here anyway to avoid complications.

Considerations for review:
- We have many cases where the server.loading flag is used, and even though I tried my best, there may be cases where async_loading should be checked as well and cases where it shouldn't (this would require a very good understanding of the whole code).
- Several places that had different behavior depending on the loading flag were actually meant to just handle commands coming from the AOF client differently than ones coming from real clients; these were changed to check CLIENT_ID_AOF instead.

**Additional for Release Notes**
- Bugfix: server.dirty was not incremented for any kind of diskless replication, so as a side effect it wouldn't contribute to triggering the next database SAVE.
- New flag for the RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event. Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED, ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions to allow modules to declare they support diskless replication with async loading (when absent, we fall back to disk-based loading).

Co-authored-by:
Eduardo Semprebon <edus@saxobank.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 03 Nov, 2021 1 commit
-
-
guybe7 authored
Add a new no-mandatory-keys flag to support COMMAND GETKEYS for commands which have no mandatory keys. In the past we would have got this error:
```
127.0.0.1:6379> command getkeys eval "return 1" 0
(error) ERR Invalid arguments specified for command
```
-
- 02 Nov, 2021 1 commit
-
-
zhaozhao.zz authored
After PR #9166, the replication backlog is no longer a real block of memory; it just contains a reference pointing to a replication buffer block plus a blocks index (to accelerate offset search for partial sync). So we need to update both the replication buffer blocks' offsets and the replication backlog blocks index's offsets when the master restarts from RDB, since `server.master_repl_offset` changes. The implication of this bug was just a slow search, not a replication failure.
-
- 27 Oct, 2021 1 commit
-
-
guybe7 authored
-
- 25 Oct, 2021 3 commits
-
-
Wang Yuan authored
Add timestamp annotations in the AOF, one part of #9325. Enabled with the new `aof-timestamp-enabled` config option.

The timestamp annotation format is "#TS:${timestamp}\r\n". "TS" is short for timestamp, and this short form saves extra bytes in the AOF.

We can use timestamp annotations for some special functions:
- know the executing time of commands
- restore data to a specific point in time (by using redis-check-aof to truncate the file)
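A minimal sketch of formatting such an annotation line, assuming a hypothetical helper (not the actual aof.c code):

```c
#include <stdio.h>
#include <time.h>

/* Annotation lines start with '#' so they are not valid RESP commands
 * and can be skipped when the AOF is loaded. */
static int formatAofTimestampAnnotation(char *buf, size_t len, long long unix_time) {
    return snprintf(buf, len, "#TS:%lld\r\n", unix_time);
}

int main(void) {
    char annotation[64];
    formatAofTimestampAnnotation(annotation, sizeof(annotation), (long long)time(NULL));
    fputs(annotation, stdout);   /* e.g. "#TS:1640000000" followed by CRLF */
    return 0;
}
```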
-
Itamar Haber authored
overlooked in #9504
-
Wang Yuan authored
## Background

For a redis master, each replica uses its own copy of the replication buffer. That is a big waste of memory: more replicas, more waste, and allocating/freeing memory for every reply list also costs a lot. If we set client-output-buffer-limit small and write traffic is heavy, the master may disconnect from replicas and never finish synchronization with them. If we set client-output-buffer-limit big, the master may go OOM when there are many replicas that each keep a lot of memory.

Because the replication buffers of the different replica clients contain the same data, one simple idea is to have all replicas use a single replication buffer, which effectively saves memory. Since the replication backlog content is the same as the replicas' output buffers, we can now also discard the replication backlog memory and use the global shared replication buffer to implement the replication backlog mechanism.

## Implementation

I create one global "replication buffer" which contains the content of the replication stream. The structure of the "replication buffer" is similar to the reply list that exists in every client, but the node of the list is `replBufBlock`, which has `id`, `repl_offset` and `refcount` fields.

```c
/* Replication buffer blocks is the list of replBufBlock.
 *
 * +--------------+       +--------------+       +--------------+
 * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
 * +--------------+       +--------------+       +--------------+
 *      |                                            /       \
 *      |                                           /         \
 *      |                                          /           \
 *  Repl Backlog                              Replica_A      Replica_B
 *
 * Each replica or replication backlog increments only the refcount of the
 * 'ref_repl_buf_node' which it points to. So when a replica walks to the next
 * node, it should first increase the next node's refcount, and when we trim
 * the replication buffer nodes, we always remove nodes from the head, whose
 * refcount is 0. If the refcount of the head node is not 0, we must stop
 * trimming and never iterate to the next node. */

/* Similar to 'clientReplyBlock', it is used for shared buffers between
 * all replica clients and the replication backlog. */
typedef struct replBufBlock {
    int refcount;           /* Number of replicas or repl backlog using. */
    long long id;           /* The unique incremental number. */
    long long repl_offset;  /* Start replication offset of the block. */
    size_t size, used;
    char buf[];
} replBufBlock;
```

So now when we feed the replication stream into the replication backlog and all replicas, we only need to feed the stream into the replication buffer (`feedReplicationBuffer`). In this function, we set some fields of the replication backlog and replicas to references of the global replication buffer blocks. We also need to check the replicas' output buffer limit, freeing replicas that exceed `client-output-buffer-limit`, and trim the replication backlog if it exceeds `repl-backlog-size`. When sending a reply to replicas, we also iterate the replication buffer blocks and send their content; when one block has been fully sent to a replica, we decrease the current node's refcount and increase the next node's refcount, and then free, from the head of the replication buffer blocks, any block whose refcount is 0.

Since we now use a linked list to manage the replication backlog, it may cost much time to iterate all linked list nodes to find the corresponding replication buffer node. So we create a rax tree to store some nodes for indexing, but to avoid the rax tree occupying too much memory, we record one node per 64 for the index.

Currently, to keep partial resynchronization possible as much as we can, we always let the replication backlog be the last reference of the replication buffer blocks; the backlog size may exceed our setting if slow replicas reference vast replication buffer blocks, but this method doesn't increase memory usage since they share the replication buffer. To avoid freezing the server when freeing unreferenced replication buffer blocks while trimming the backlog for exceeding the backlog size setting, we trim the backlog incrementally (free 64 blocks per call now), and make it faster in `beforeSleep` (free 640 blocks). A minimal sketch of this trimming loop is shown below.

### Other changes
- `mem_total_replication_buffers`: we add this field to the INFO command; it shows the total memory used by replication buffers.
- `mem_clients_slaves`: now even if a replica is slow to replicate and its output buffer memory is not 0, this field may still be 0, since the replication backlog and replicas share one global replication buffer. Only if the replication buffer memory is more than the repl backlog setting size do we consider the excess as the replicas' memory; otherwise, we consider the replication buffer memory as the consumption of the repl backlog.
- Key eviction: Since all replicas and the replication backlog share the global replication buffer, we consider only the part exceeding the backlog size as the extra, separate consumption of replicas. Because we trim the backlog incrementally in the background, the backlog size may exceed our setting if slow replicas that reference vast replication buffer blocks disconnect. To avoid a massive eviction loop, we don't count the delayed-freed replication backlog into used memory even if there are no replicas, i.e. we also regard this memory as the replicas' memory.
- `client-output-buffer-limit` check for replica clients: It doesn't make sense to set the replica clients' output buffer limit lower than the repl-backlog-size config (partial sync would succeed and then the replica would get disconnected). Such a configuration is ignored (the size of repl-backlog-size will be used). This doesn't have memory consumption implications since the replica client will share the backlog buffer memory.
- Drop the replication backlog after loading data if needed: We always create a replication backlog if the server is a master; we need it because we put DELs in it when loading expired keys from the RDB. But if the RDB doesn't have replication info, or there is no RDB, partial resynchronization is not possible, so to avoid the extra memory of the replication backlog, we drop it.
- Multi IO threads: Since all replicas and the replication backlog use the global replication buffer, when I/O threads are enabled we must let the main thread handle sending the output buffer to all replicas, to guarantee thread-safe data access. Before, other I/O threads could handle sending the output buffers of all replicas.

## Other optimizations

This solution resolves some other problems:
- When replicas were disconnected from the master for exceeding the output buffer limit, releasing the output buffers of the replicas could freeze the server if we set a big `client-output-buffer-limit` for replicas; now it doesn't cause freezing.
- This implementation mitigates the reply list copy cost (which also freezes the server) when one replica has a huge reply buffer and another replica needs to copy it for full synchronization. Now we just copy the reference info, which is very light.
- If we set the replication backlog size big, it also could cost much time to copy the replication backlog into a replica's output buffer. This commit eliminates that problem.
- Resizing the replication backlog doesn't empty the current replication backlog content.
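A minimal sketch of the incremental trimming described above, with a hypothetical list type rather than the actual replication.c code:

```c
#include <stdio.h>
#include <stdlib.h>

/* Starting from the head of the block list, free at most `max_blocks`
 * blocks whose refcount is 0 and that are no longer needed to satisfy
 * repl-backlog-size. */
typedef struct replBufBlockNode {
    int refcount;                      /* replicas / backlog referencing this block */
    size_t used;                       /* bytes of stream data in this block */
    struct replBufBlockNode *next;
} replBufBlockNode;

typedef struct {
    replBufBlockNode *head;
    size_t total_used;                 /* sum of `used` over all blocks */
} replBuffer;

static size_t incrementalTrimReplBuffer(replBuffer *rb, size_t backlog_limit, int max_blocks) {
    size_t freed = 0;
    while (max_blocks-- > 0 && rb->head &&
           rb->head->refcount == 0 &&                           /* nobody references the head */
           rb->total_used - rb->head->used >= backlog_limit) {  /* backlog stays big enough */
        replBufBlockNode *old = rb->head;
        rb->head = old->next;
        rb->total_used -= old->used;
        freed += old->used;
        free(old);
    }
    return freed;
}

int main(void) {
    replBuffer rb = { NULL, 0 };
    /* Build three 16-byte blocks; only the head block is unreferenced. */
    int refs[3] = { 0, 1, 2 };
    replBufBlockNode *prev = NULL;
    for (int i = 2; i >= 0; i--) {
        replBufBlockNode *n = malloc(sizeof(*n));
        n->refcount = refs[i];
        n->used = 16;
        n->next = prev;
        prev = n;
        rb.total_used += n->used;
    }
    rb.head = prev;
    size_t freed = incrementalTrimReplBuffer(&rb, 32, 64);
    printf("freed %zu bytes, remaining %zu\n", freed, rb.total_used);
    /* cleanup of remaining nodes omitted for brevity */
    return 0;
}
```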
-
- 24 Oct, 2021 2 commits
-
-
Oran Agra authored
I moved a bunch of stats updates in redisFork to be executed only on a successful fork, since it seems wrong to do them when the fork failed. I assume that when fork fails it does so immediately, with no latency spike.
-
Itamar Haber authored
Introduced via typo in #9504. Also adds a sanity test for coverage.
-
- 21 Oct, 2021 1 commit
-
-
guybe7 authored
-
- 20 Oct, 2021 2 commits
-
-
Oran Agra authored
Following #9483, the daily CI exposed a few problems:
* The cluster creation code (which uses redis-cli) is complicated to test with TLS enabled. For now I'm just skipping these tests, since the tests we run there don't really need that kind of coverage.
* Cluster port binding failures: note that `find_available_port` already looks for a free cluster port, but the code in `wait_server_started` couldn't detect the binding failure (the text it greps for wasn't found in the log).
-
guybe7 authored
## Intro

The purpose is to allow having different flags/ACL categories for subcommands (example: CONFIG GET is ok-loading but CONFIG SET isn't). We create a small command table for every command that has subcommands, and each subcommand has its own flags, etc. (same as a "regular" command). This commit also unites the Redis and the Sentinel command tables.

## Affected commands

CONFIG - used to have "admin ok-loading ok-stale no-script". Changes:
1. Dropped "ok-loading" in all except GET (this doesn't change behavior since there were checks in the code doing that)

XINFO - used to have "read-only random". Changes:
1. Dropped "random" in all except CONSUMERS

XGROUP - used to have "write use-memory". Changes:
1. Dropped "use-memory" in all except CREATE and CREATECONSUMER

COMMAND - no changes.

MEMORY - used to have "random read-only". Changes:
1. Dropped "random" in PURGE and USAGE

ACL - used to have "admin no-script ok-loading ok-stale". Changes:
1. Dropped "admin" in WHOAMI, GENPASS, and CAT

LATENCY - no changes.

MODULE - no changes.

SLOWLOG - used to have "admin random ok-loading ok-stale". Changes:
1. Dropped "random" in RESET

OBJECT - used to have "read-only random". Changes:
1. Dropped "random" in ENCODING and REFCOUNT

SCRIPT - used to have "may-replicate no-script". Changes:
1. Dropped "may-replicate" in all except FLUSH and LOAD

CLIENT - used to have "admin no-script random ok-loading ok-stale". Changes:
1. Dropped "random" in all except INFO and LIST
2. Dropped "admin" in ID, TRACKING, CACHING, GETREDIR, INFO, SETNAME, GETNAME, and REPLY

STRALGO - no changes.

PUBSUB - no changes.

CLUSTER - changes:
1. Dropped "admin" in countkeysinslots, getkeysinslot, info, nodes, keyslot, myid, and slots

SENTINEL - no changes.

(Note that DEBUG also fits, but we decided not to convert it since it's for debugging and anyway undocumented.)

## New sub-command

This commit adds another element to the per-command output of COMMAND, describing the list of subcommands, if any (in the same structure as "regular" commands). Also, it adds a new subcommand:
```
COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)]
```
which returns a set of all commands (unless filtered), but excluding subcommands.

## Module API

A new module API, RM_CreateSubcommand, was added, in order to allow module writers to define subcommands.

## ACL changes

1. Now that each subcommand is actually a command, each has its own ACL id.
2. The old mechanism of allowed_subcommands is redundant (blocking/allowing a subcommand is the same as blocking/allowing a regular command), but we had to keep it, to support the widespread usage of allowed_subcommands to block commands with certain args that aren't subcommands (e.g. "-select +select|0").
3. I have renamed allowed_subcommands to allowed_firstargs to emphasize the difference.
4. Because subcommands are commands in ACL too, you can now use "-" to block subcommands (e.g. "+client -client|kill"), which wasn't possible in the past.
5. It is also possible to use the allowed_firstargs mechanism with subcommands. For example: `+config -config|set +config|set|loglevel` will block all of CONFIG SET except for setting the log level.
6. All of the ACL changes above required some amount of refactoring.

## Misc

1. There are two approaches: either each subcommand has its own function, or all subcommands use the same function, determining what to do according to argv[0]. For now, I took the former approach only with CONFIG and COMMAND, while other commands use the latter approach (for a smaller blamelog diff).
2. Deleted memoryGetKeys: it is no longer needed because MEMORY USAGE now uses the "range" key spec.
4. Bugfix: GETNAME was missing from CLIENT's help message.
5. Sentinel and Redis now use the same table, with the same function pointer. Some commands have a different implementation in Sentinel, so we redirect them (these are ROLE, PUBLISH, and INFO).
6. Command stats now show the stats per subcommand (e.g. instead of stats just for "config" you will have stats for "config|set", "config|get", etc.)
7. It is now possible to use COMMAND directly on subcommands: COMMAND INFO CONFIG|GET (the pipe syntax was inspired by ACL, and can be used in the functions lookupCommandBySds and lookupCommandByCString).
8. STRALGO is now a container command (has "help").

## Breaking changes

1. Command stats now show the stats per subcommand (see the Misc item about command stats above)
-
- 19 Oct, 2021 1 commit
-
-
Bjorn Svensson authored
Since the size of mode_t is platform dependent, we handle the `unixsocketperm` configuration as a generic int type. mode_t is either an unsigned int or an unsigned short (macOS), and the config's range limits allow for a simple cast to mode_t.
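A minimal sketch of the idea, with hypothetical names rather than the actual config code: the value is stored as a plain int and cast to mode_t only at the point of use, which the 0..0777 range check makes safe.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

static int unixsocketperm = 0; /* generic int config storage */

static int setUnixSocketPerm(int value) {
    if (value < 0 || value > 0777) return 0;  /* range limit enforced by the config */
    unixsocketperm = value;
    return 1;
}

int main(void) {
    if (setUnixSocketPerm(0700)) {
        mode_t mode = (mode_t)unixsocketperm;  /* simple, safe cast at the use site */
        printf("would chmod the unix socket to %o\n", (unsigned)mode);
    }
    return 0;
}
```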
-
- 13 Oct, 2021 1 commit
-
-
Madelyn Olson authored
Improved the reliability of cluster replica sync tests
-
- 08 Oct, 2021 2 commits
-
-
Bjorn Svensson authored
Move config `logfile` to generic configs
-
Bjorn Svensson authored
-
- 07 Oct, 2021 1 commit
-
-
Huang Zhw authored
Tracking invalidation messages were sometimes sent in an inconsistent order, before the command's reply rather than after. In addition, they were sometimes embedded inside other commands' responses, like MULTI-EXEC and MGET.
-
- 06 Oct, 2021 1 commit
-
-
Andy Pan authored
Implement createPipe() to combine creating a pipe and setting its flags, and reduce system calls by preferring pipe2() over pipe(). Without createPipe(), we have to call pipe() to create a pipe and then call functions from anet.c (like anetCloexec() and anetNonBlock()) to set the flags, which leads to extra system calls. Now we can leverage pipe2() to combine them and make the process of creating a pipe more unified in createPipe(). Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by:
Oran Agra <oran@redislabs.com>
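A minimal sketch of the pipe2()-first approach, with a hypothetical helper name (not the actual anet.c code):

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Try pipe2() first so the pipe is created with O_CLOEXEC and O_NONBLOCK
 * in a single system call; fall back to pipe() + fcntl() where pipe2()
 * is unavailable. */
static int createPipeSketch(int fds[2]) {
#if defined(__linux__)
    if (pipe2(fds, O_CLOEXEC | O_NONBLOCK) == 0) return 0;
    if (errno != ENOSYS) return -1;   /* a real error, not "pipe2 unsupported" */
#endif
    if (pipe(fds) == -1) return -1;   /* fallback: extra fcntl() syscalls per fd */
    for (int i = 0; i < 2; i++) {
        if (fcntl(fds[i], F_SETFD, FD_CLOEXEC) == -1 ||
            fcntl(fds[i], F_SETFL, O_NONBLOCK) == -1) {
            close(fds[0]);
            close(fds[1]);
            return -1;
        }
    }
    return 0;
}

int main(void) {
    int fds[2];
    if (createPipeSketch(fds) == 0) {
        printf("pipe created: read fd %d, write fd %d\n", fds[0], fds[1]);
        close(fds[0]);
        close(fds[1]);
    }
    return 0;
}
```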
-