- 04 Feb, 2022 1 commit
-
-
Viktor Söderqvist authored
Adds RM_SetCommandInfo, allowing modules to provide the following command info:
* summary
* complexity
* since
* history
* hints
* arity
* key specs
* args

This information affects the output of `COMMAND`, `COMMAND INFO` and `COMMAND DOCS`, is used by Cluster and ACL, and is used to filter out commands with the wrong number of arguments before the call reaches the module code. The recently added API functions for key specs (never released) are removed.

A minimalist example would look like so:

```c
RedisModuleCommand *mycmd = RedisModule_GetCommand(ctx,"mymodule.mycommand");
RedisModuleCommandInfo mycmd_info = {
    .version = REDISMODULE_COMMAND_INFO_VERSION,
    .arity = -5,
    .summary = "some description",
};
if (RedisModule_SetCommandInfo(mycmd, &mycmd_info) == REDISMODULE_ERR)
    return REDISMODULE_ERR;
```

Notes:
* All the provided information (including strings) is copied, not keeping references to the API input data.
* The version field is actually a static struct that contains the sizes of the structs used in arrays, so we can extend these in the future and old versions will still be able to take the parts they can support.
-
- 01 Feb, 2022 1 commit
-
-
郭伟光 authored
This is done to avoid a crash when the timer fires after the module was unloaded, or a memory leak if we were to just ignore the timer. It causes the MODULE UNLOAD command to return with an error while such timers are pending. Co-authored-by:
sundb <sundbcn@gmail.com>
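A hedged sketch of the situation this guards against, using the existing module timer API (the callback and payload below are illustrative, not from this commit):

```c
/* Illustrative module timer: if the module were unloaded while this timer is
 * still pending, the callback would fire against freed module code. With this
 * change, MODULE UNLOAD returns an error instead of allowing that. */
static void TimerCallback(RedisModuleCtx *ctx, void *data) {
    RedisModule_Log(ctx, "notice", "timer fired: %s", (const char *)data);
}

int StartTimer(RedisModuleCtx *ctx) {
    /* One-shot timer, 10 seconds from now. */
    RedisModule_CreateTimer(ctx, 10000, TimerCallback, "payload");
    return REDISMODULE_OK;
}
```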
-
- 26 Jan, 2022 1 commit
-
-
Binbin authored
SET is an R+W command, because it can also do a `GET` on the data: SET without GET is a write-only command, while SET with GET is a read+write command. In #9974, we added ACL rules that let users define write-only access. So when the user uses SET with the GET option and doesn't have READ permission on the key, we need to reject it, but we'd rather not reject users with write-only permissions from using the SET command when they don't use GET. In this commit, we add a `getkeys_proc` function to control the key flags of the SET command. We also add a new key-spec flag (VARIABLE_FLAGS), which means that some keys might have different flags depending on the arguments. We also handle the BITFIELD command by adding a `bitfieldGetKeys` function: BITFIELD GET is a READ ONLY command, while BITFIELD SET and BITFIELD INCR are READ WRITE commands.

Other changes:
1. SET GET was added in 6.2; add the missing `since` field in set.json
2. Added tests to cover the changes in acl-v2.tcl
3. Fix some typos in server.h and cleanups in acl-v2.tcl

Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
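A rough sketch of what such a getkeys-style function for SET might look like; the helper names (`getKeysPrepareResult`, `keyReference`) and the exact flag combinations are assumptions based on the description above, not the committed implementation:

```c
/* Sketch only: decide the key's flags for SET depending on whether the
 * optional GET argument is present (SET <key> <value> [GET] ...). */
int setGetKeysSketch(struct redisCommand *cmd, robj **argv, int argc,
                     getKeysResult *result) {
    UNUSED(cmd);
    keyReference *keys = getKeysPrepareResult(result, 1); /* assumed helper */
    keys[0].pos = 1;                 /* the key is always the first argument */
    result->numkeys = 1;

    /* Scan the optional arguments after the value for a GET token. */
    for (int i = 3; i < argc; i++) {
        if (!strcasecmp(argv[i]->ptr, "GET")) {
            keys[0].flags = CMD_KEY_RW | CMD_KEY_ACCESS | CMD_KEY_UPDATE;
            return 1;
        }
    }
    keys[0].flags = CMD_KEY_OW | CMD_KEY_UPDATE; /* plain SET: overwrite only */
    return 1;
}
```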
-
- 24 Jan, 2022 1 commit
-
-
Viktor Söderqvist authored
The keyspec API is not yet released and there is a plan to change it in #10108, which is going to be included in RC2. Therefore, we hide it in RC1 to avoid introducing a breaking change in RC2. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 23 Jan, 2022 1 commit
-
-
Binbin authored
Summary of changes:
1. Rename `redisCommand->name` to `redisCommand->declared_name`; it is a const char * for native commands and an SDS for module commands.
2. Store the [sub]command fullname in `redisCommand->fullname` (sds).
3. List subcommands in `ACL CAT`
4. List subcommands in `COMMAND LIST`
5. `moduleUnregisterCommands` now also frees the module subcommands.
6. RM_GetCurrentCommandName returns the full command name

Other changes:
1. Add `addReplyErrorArity` and `addReplyErrorExpireTime`
2. Remove the `getFullCommandName` function that is now useless.
3. Some cleanups around `fullname` now that it is an SDS.
4. Delete the `populateSingleCommand` function from server.h that is useless.
5. Added tests to cover this change.
6. Add some module unload tests and fix the leaks
7. Make error messages uniform: make sure they always contain the full command name and that it's quoted.
8. Fixes some typos

See the history in #9504, fixes #10124

Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
guybe7 <guy.benoish@redislabs.com>
-
- 20 Jan, 2022 2 commits
-
-
Madelyn Olson authored
* Implemented selectors which provide multiple different sets of permissions to users
* Implemented key based permissions
* Added a new ACL dry-run command to test permissions before execution
* Updated module APIs to support checking key based permissions

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
perryitay authored
Adding a module API for processing commands during busy jobs, and allow flagging the commands that should be handled in this state (#9963)

Some modules might perform long-running logic in different stages of the Redis lifetime, for example:
* command execution
* RDB loading
* thread safe context

During this long-running logic Redis is not responsive. This PR offers:
1. An API to process events while a busy command is running (`RM_Yield`)
2. A new flag (`ALLOW_BUSY`) to mark the commands that should be handled during busy jobs, which can also be used by modules (`allow-busy`)
3. In slow commands and thread safe contexts, this flag will start rejecting commands with -BUSY only after `busy-reply-threshold`
4. During loading (`rdb_load` callback), it'll process events right away (not wait for `busy-reply-threshold`), but either way, the processing is throttled to the server hz rate.
5. Allow modules to yield to redis background tasks, but not to client commands

* rename `script-time-limit` to `busy-reply-threshold` (an alias to the pre-7.0 `lua-time-limit`)

Co-authored-by:
Oran Agra <oran@redislabs.com>
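A minimal sketch, assuming `RedisModule_Yield(ctx, flags, busy_reply)` and the `REDISMODULE_YIELD_FLAG_CLIENTS` flag, of how a long-running module command might yield periodically (the workload and iteration counts are illustrative):

```c
/* Sketch: a slow command that lets Redis process events (and serve
 * allow-busy commands) every million iterations instead of blocking
 * the server for the whole run. */
int SlowCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    for (long long i = 0; i < 100000000; i++) {
        if (i % 1000000 == 0)
            RedisModule_Yield(ctx, REDISMODULE_YIELD_FLAG_CLIENTS,
                              "A long-running module job is in progress");
        /* ... a slice of the heavy work would go here ... */
    }
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}
```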
-
- 19 Jan, 2022 1 commit
-
-
Ozan Tezcan authored
was eating too much memory, and taking too long with valgrind
-
- 18 Jan, 2022 2 commits
-
-
Oran Agra authored
The new ACL key based permissions in #9974 require the key-specs (#8324) to have more explicit flags rather than just READ and WRITE. See the discussion in #10040.

This PR defines two groups of flags: one about how redis internally handles the key (mutually exclusive), and the other about the logical operation done from the user's point of view (3 mutually exclusive write flags, and one read flag, all optional). In both groups, if we can't explicitly flag something as read-only, delete-only, or insert-only, we flag it as `RW` or `UPDATE`.

Here's the definition from the code:

```
/* Key-spec flags
 * -------------- */
/* The following refer to what the command actually does with the value or
 * metadata of the key, and not necessarily the user data or how it affects it.
 * Each key-spec must have exactly one of these. Any operation that's not
 * distinctly deletion, overwrite or read-only would be marked as RW. */
#define CMD_KEY_RO (1ULL<<0)     /* Read-Only - Reads the value of the key, but
                                  * doesn't necessarily return it. */
#define CMD_KEY_RW (1ULL<<1)     /* Read-Write - Modifies the data stored in the
                                  * value of the key or its metadata. */
#define CMD_KEY_OW (1ULL<<2)     /* Overwrite - Overwrites the data stored in
                                  * the value of the key. */
#define CMD_KEY_RM (1ULL<<3)     /* Deletes the key. */

/* The following refer to user data inside the value of the key, not the metadata
 * like LRU, type, cardinality. It refers to the logical operation on the user's
 * data (actual input strings / TTL), being used / returned / copied / changed.
 * It doesn't refer to modification or returning of metadata (like type, count,
 * presence of data). Any write that's not INSERT or DELETE would be an UPDATE.
 * Each key-spec may have one of the writes with or without access, or none: */
#define CMD_KEY_ACCESS (1ULL<<4) /* Returns, copies or uses the user data from
                                  * the value of the key. */
#define CMD_KEY_UPDATE (1ULL<<5) /* Updates data to the value, new value may
                                  * depend on the old value. */
#define CMD_KEY_INSERT (1ULL<<6) /* Adds data to the value with no chance of
                                  * modification or deletion of existing data. */
#define CMD_KEY_DELETE (1ULL<<7) /* Explicitly deletes some content
                                  * from the value of the key. */
```

Unrelated changes:
- generate-command-code.py is only compatible with python3 (modified the shebang)
- generate-command-code.py prints the file name on JSON parsing errors
- rename the `shard_channel` key-spec flag to just `channel`.
- add the INCOMPLETE flag to the input spec of SORT and SORT_RO
-
Ozan Tezcan authored
Modules can now register sockets/pipes with the Redis main thread event loop and do network operations asynchronously. Previously, modules had to maintain an event loop and another thread for asynchronous network operations. Also, if a module was calling API functions after doing some network operations, it had to synchronize its event loop thread's access with the Redis main thread by locking the GIL, causing contention on the lock. After this commit, no synchronization is needed as the module can operate in the Redis main thread context. So, this commit may improve the performance for some use cases.

Added three functions to the module API:
* RedisModule_EventLoopAdd(int fd, int mask, RedisModuleEventLoopFunc func, void *user_data)
* RedisModule_EventLoopDel(int fd, int mask)
* RedisModule_EventLoopAddOneShot(RedisModuleEventLoopOneShotFunc func, void *user_data) - This function can be called from other threads to trigger a callback on the Redis main thread. The callback will be triggered only once. If the Redis main thread is sleeping, this call will wake it up.

Event loop callbacks are called by the Redis main thread after locking the GIL. Inside callbacks, modules can operate as if they are holding the GIL.

Added the REDISMODULE_EVENT_EVENTLOOP event with two subevents:
* REDISMODULE_SUBEVENT_EVENTLOOP_BEFORE_SLEEP
* REDISMODULE_SUBEVENT_EVENTLOOP_AFTER_SLEEP

These events are for modules that want to participate in the before and after sleep actions. E.g. it might be useful to implement batching: read data from the network, then write it all to a file in one go on the BEFORE_SLEEP event.
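A minimal sketch, assuming the signatures listed above and a `REDISMODULE_EVENTLOOP_READABLE` mask constant (the callback body and error handling are illustrative):

```c
/* Sketch: register an already-connected socket with the Redis event loop.
 * The callback runs on the Redis main thread, so module API calls are safe
 * inside it without extra locking. */
static void OnReadable(int fd, void *user_data, int mask) {
    (void)user_data; (void)mask;
    char buf[512];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n <= 0) {
        /* Peer closed or error: stop watching the socket. */
        RedisModule_EventLoopDel(fd, REDISMODULE_EVENTLOOP_READABLE);
        close(fd);
        return;
    }
    /* ... otherwise hand `buf` to the module's protocol handling ... */
}

int WatchSocket(int fd) {
    return RedisModule_EventLoopAdd(fd, REDISMODULE_EVENTLOOP_READABLE,
                                    OnReadable, NULL);
}
```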
-
- 17 Jan, 2022 1 commit
-
-
Oran Agra authored
1. enable diskless replication by default
2. add a new config named repl-diskless-sync-max-replicas that enables replication to start before the full repl-diskless-sync-delay was reached.
3. put the replica online sooner on the master (see below)
4. the test suite uses a repl-diskless-sync-delay of 0 to be faster
5. a few tests that use multiple replicas on a pre-populated master are now using the new repl-diskless-sync-max-replicas
6. fix possible timing issues in a few cluster tests (see below)

put replica online sooner on the master
----------------------------------------------------
There were two tests that failed because they needed the master to realize that the replica is online, but the test code was actually only waiting for the replica to realize it's online, and in diskless it could have been before the master realized it.

Changes include two things:
1. the tests wait on the right thing
2. issues in the master, putting the replica online in two steps.

The master used to put the replica online in 2 steps. The first step was to mark it as online, and the second step was to enable the write event (only after getting an ACK), but in fact the first step didn't contain some of the tasks to put it online (like updating the good slave count, and sending the module event). This meant that if a test was waiting to see that the replica is online from the point of view of the master, and then confirm that the module got an event, or that the master has enough good replicas, it could fail due to timing issues. So now the full effect of putting the replica online happens at once, and only the part about enabling the writes is delayed till the ACK.

fix cluster tests
--------------------
I added some code to wait for the replica to sync and avoid race conditions. Later I realized the sentinel and cluster tests were using the original 5 second delay, so I changed it to 0. This means the other changes are probably not needed, but I suppose they're still better (avoid race conditions)
-
- 13 Jan, 2022 2 commits
-
-
Ozan Tezcan authored
Added RM_MonotonicMicroseconds(). Modules can use this monotonic timestamp counter for measurements.
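A small sketch of how a module might use the new counter to time a piece of work (the workload and log message are illustrative):

```c
/* Sketch: measure elapsed time in a module using the monotonic clock,
 * which is unaffected by wall-clock adjustments. */
void TimedWork(RedisModuleCtx *ctx) {
    uint64_t start = RedisModule_MonotonicMicroseconds();

    /* ... the work being measured would go here ... */

    uint64_t elapsed = RedisModule_MonotonicMicroseconds() - start;
    RedisModule_Log(ctx, "notice", "work took %llu microseconds",
                    (unsigned long long)elapsed);
}
```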
-
chenyang8094 authored
Force create a BASE file (use a foreground `rewriteAppendOnlyFile`) when redis starts from an empty data set and `appendonly` is yes. The reasoning is that normally, after redis has been running for some time and the AOF has gone through a few rewrites, there's always a base RDB file, and the scenario where the base file is missing is rather rare (it happens only at empty startup), so this change normalizes it. But more importantly, there are or could be some complex modules that are started with some configuration; when they create persistence they write that configuration to RDB AUX fields, so that they can always know with which configuration the persistence file they're loading was created (could be critical). There is (was) one scenario in which they could load their persisted data while that configuration was missing, and this change fixes it.

Add a new module event: REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START, similar to REDISMODULE_SUBEVENT_PERSISTENCE_AOF_START which is async.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 11 Jan, 2022 1 commit
-
-
Oran Agra authored
Syntax: `COMMAND DOCS [<command name> ...]`

Background: Apparently old versions of hiredis (and thus also redis-cli) can't support more than 7 levels of multi-bulk nesting. The solution is to move all the doc related metadata from COMMAND to a new COMMAND DOCS sub-command. The new DOCS sub-command returns a map of commands (not an array like COMMAND), and the same goes for the `subcommands` field inside it (it also contains a map). Besides that, the remaining new fields of COMMAND (hints, key-specs, and sub-commands) are placed in the outer array rather than a nested map. This was done mainly for consistency with the old format.

Other changes:
---
* Allow COMMAND INFO with no arguments, which returns all commands, so that we can some day deprecate the plain COMMAND (no args)
* Reduce the amount of deferred replies from both COMMAND and COMMAND DOCS, especially in the inner loops, since these create many small reply objects, which lead to many small write syscalls and many small TCP packets. To make this easier, when populating the command table, we count the history, args, and hints so we later know their size in advance. Additionally, the movablekeys flag was moved into the flags register.
* Update generate-commands-json.py to take the data from both commands; it now executes redis-cli directly, instead of taking input from stdin.
* Sub-commands in both COMMAND (and COMMAND INFO), and also COMMAND DOCS, show their full name. i.e. CONFIG GET will be shown as `config|get` rather than just `get`. This will be visible both when asking for `COMMAND INFO config` and `COMMAND INFO config|get`, but is especially important for the latter. i.e. imagine someone doing `COMMAND INFO slowlog|get config|get` not being able to distinguish between the two items in the array response.
-
- 10 Jan, 2022 1 commit
-
-
Madelyn Olson authored
Changed latency percentile output to omit trailing 0s and periods
-
- 06 Jan, 2022 1 commit
-
-
Ozan Tezcan authored
Callers of redisFork() are logging `strerror(errno)` on failure. `errno` is not set when there is already a child process, causing the current value of errno (which was set before the `redisFork()` call) to be printed. Set errno to EEXIST on this failure to provide a more meaningful error message.
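An illustrative sketch of the pattern (not the actual redis source; the `hasActiveChildProcess()` helper is assumed for illustration):

```c
#include <errno.h>
#include <unistd.h>

/* Sketch: refuse to fork while a child already exists. Since fork() itself
 * never runs in that case, errno would otherwise keep a stale value, so we
 * set it explicitly to make strerror(errno) meaningful for callers. */
pid_t redisForkSketch(void) {
    if (hasActiveChildProcess()) {   /* assumed helper */
        errno = EEXIST;              /* "File exists" -> a child already exists */
        return -1;
    }
    return fork();
}
```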
-
- 05 Jan, 2022 1 commit
-
-
filipe oliveira authored
Added INFO LATENCYSTATS section: latency by percentile distribution/latency by cumulative distribution of latencies (#9462)

# Short description
The Redis extended latency stats track per-command latencies and enable:
- exporting the per-command percentile distribution via the `INFO LATENCYSTATS` command. **(the percentile distribution is not mergeable between cluster nodes)**
- exporting the per-command cumulative latency distributions via the `LATENCY HISTOGRAM` command. Using the cumulative distribution of latencies we can merge several stats from different cluster nodes to calculate aggregate metrics.

By default, the extended latency monitoring is enabled since the overhead of keeping track of the command latency is very small. If you don't want to track extended latency metrics, you can easily disable it at runtime using the command:
- `CONFIG SET latency-tracking no`

By default, the exported latency percentiles are the p50, p99, and p999. You can alter them at runtime using the command:
- `CONFIG SET latency-tracking-info-percentiles "0.0 50.0 100.0"`

## Some details:
- The total size per histogram should sit around 40 KiB. We only allocate those 40 KiB when a command is called for the first time.
- With regards to the write overhead: as seen below, there is no measurable overhead on the achievable ops/sec or the full latency spectrum on the client. Also included is the measured redis-benchmark for unstable vs this branch.
- We track from 1 nanosecond to 1 second (everything above 1 second is considered +Inf).

## `INFO LATENCYSTATS` exposition format
- Format: `latency_percentiles_usec_<CMDNAME>:p0=XX,p50....`

## `LATENCY HISTOGRAM [command ...]` exposition format
Return a cumulative distribution of latencies in the format of a histogram for the specified command names. The histogram is composed of a map of time buckets:
- Each representing a latency range, between 1 nanosecond and roughly 1 second.
- Each bucket covers twice the previous bucket's range.
- Empty buckets are not printed.
- Everything above 1 sec is considered +Inf.
- At max there will be log2(1000000000) ≈ 30 buckets.

We reply with a map for each command in the format: `<command name>: { calls: <total command calls>, histogram: { <bucket 1>: latency, <bucket 2>: latency, ... } }`

Co-authored-by:
Oran Agra <oran@redislabs.com>
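Illustrative arithmetic only (not redis source): with each bucket covering twice the previous range, the bucket for a latency of `ns` nanoseconds is floor(log2(ns)), which caps the bucket count around log2(10^9) ≈ 30.

```c
#include <stdint.h>

/* Sketch: map a latency in nanoseconds to its histogram bucket index.
 * Bucket i covers roughly [2^i, 2^(i+1)) nanoseconds, so for latencies
 * up to 1 second (10^9 ns) there are at most ~30 buckets. */
static int latency_bucket(uint64_t ns) {
    int bucket = 0;
    while (ns > 1) {          /* floor(log2(ns)) */
        ns >>= 1;
        bucket++;
    }
    return bucket;
}
```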
-
- 04 Jan, 2022 1 commit
-
-
zhaozhao.zz authored
This would mean that the effects of `CONFIG SET maxmemory` may not be visible once the command returns. That could anyway happen since incremental eviction was added in redis 6.2 (see #7653). We do this to fix one of the propagation bugs about eviction, see #9890 and #10014.
-
- 28 Dec, 2021 1 commit
-
-
guybe7 authored
Preventing CONFIG SET maxmemory from propagating is just the tip of the iceberg. A module that performs a write operation in a notification can cause any command to be propagated, based on server.dirty. We need to come up with a better solution.
-
- 27 Dec, 2021 3 commits
-
-
chenyang8094 authored
It turns out that libc malloc can return an allocation of a different size on requests of the same size. This means that matching the MEMORY USAGE of one key to another copy of the same data can fail. Solution: Keep running the test that calls MEMORY USAGE, but ignore the response. We do that by introducing a new utility function to get the memory usage, which always returns 1 when the allocator is not jemalloc. Other changes: Some formatting for datatype2.tcl Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Itamar Haber authored
Following #9656, this script generates a "commands.json" file from the output of the new COMMAND. The output of this script is used in redis/redis-doc#1714 and by redis/redis-io#259. This also converts a couple of rogue dashes (in 'key-specs' and 'multiple-token' flags) to underscores (continues #9959).
-
guybe7 authored
There's a race between testing DBSIZE and the thread starting. If the thread hadn't started by the time we checked DBSIZE, no keys will have been evicted. The correct way is to check the evicted_keys stat.
-
- 22 Dec, 2021 1 commit
-
-
guybe7 authored
The mess: Some parts use alsoPropagate for late propagation, while others use an immediate one (propagate()), causing edge cases, ugly/hacky code, and a tendency for bugs.

The basic idea is that all commands are propagated via alsoPropagate (i.e. added to a list) and the top-most call() is responsible for going over that list and actually propagating them (and wrapping them in MULTI/EXEC if there's more than one command). This is done in the new function, propagatePendingCommands.

Callers of propagatePendingCommands:
1. the top-most call() (we want all nested call()s to add to the also_propagate array and just the top-most one to propagate them) - via `afterCommand`
2. handleClientsBlockedOnKeys: it is out of call() context and it may propagate stuff - via `afterCommand`.
3. handleClientsBlockedOnKeys edge case: if the looked-up key is already expired, we will propagate the expire but will not unblock any client, so `afterCommand` isn't called. In that case, we have to propagate the deletion explicitly.
4. cron stuff: active-expire and eviction may also propagate stuff
5. modules: the module API allows to propagate stuff from just about anywhere (timers, keyspace notifications, threads). I could have tried to catch all the out-of-call-context places but it seemed easier to handle it in one place: when we free the context. In the spirit of what was done in call(), only the top-most freeing of a module context may cause propagation.
6. modules: when using a thread-safe ctx it's not clear when/if the ctx will be freed. We do know that the module must lock the GIL before calling RM_Replicate/RM_Call, so we propagate the pending commands when releasing the GIL.

A "known limitation", which was actually a bug, was fixed because of this commit (see propagate.tcl): When using a mix of RM_Call with `!` and RM_Replicate, the commands would propagate out-of-order: first all the commands from RM_Call, and then the ones from RM_Replicate.

Another thing worth mentioning is that if, in the past, a client would issue a MULTI/EXEC with just one write command, the server would blindly propagate the MULTI/EXEC too, even though it's redundant. Not anymore.

This commit renames propagate() to propagateNow() in order to cause conflicts in pending PRs. propagatePendingCommands is the only caller of propagateNow, which is now a static, internal helper function.

Optimizations:
1. alsoPropagate will not add stuff to also_propagate if there's no AOF and no replicas
2. alsoPropagate reallocs also_propagate exponentially, to save calls to memmove

Bugfixes:
1. CONFIG SET can create evictions, sending notifications which can cause dirty++ with modules. We need to prevent it from propagating to AOF/replicas
2. We need to set current_client in RM_Call. Buggy scenario:
   - CONFIG SET maxmemory, eviction notifications, module hook calls RM_Call
   - assertion in lookupKey crashes, because current_client has CONFIG SET, which isn't CMD_WRITE
3. minor: in eviction, call propagateDeletion after the notification, like active-expire and all commands (we always send a notification before propagating the command)
-
- 19 Dec, 2021 2 commits
-
-
Oran Agra authored
- add needs:debug flag for some tests
- disable "save" in external tests (speedup?)
- use debug_digest proc instead of debug command directly so it can be skipped
- use OBJECT ENCODING instead of DEBUG OBJECT to get encoding
- add a proc for OBJECT REFCOUNT so it can be skipped
- move a bunch of tests in latency_monitor tests to happen later so that latency monitor has some values in it
- add missing close_replication_stream calls
- make sure to close the temp client if DEBUG LOG fails
-
YaacovHazan authored
Block sensitive configs and commands by default.
* `enable-protected-configs` - block modification of configs with the new `PROTECTED_CONFIG` flag. Currently we add this flag to the `dbfilename` and `dir` configs, all of which are non-mutable configs that can set a file redis will write to.
* `enable-debug-command` - block the `DEBUG` command
* `enable-module-command` - block the `MODULE` command

These have a default value set to `no`, so that these features are not exposed by default to client connections, and can only be set by modifying the config file. Users can change each of these to either `yes` (allow all access), or `local` (allow access from local TCP connections and unix domain connections).

Note that this is a **breaking change** (specifically the part about the MODULE command being disabled by default). I.e. we don't consider the DEBUG command being blocked as an issue (people shouldn't have been using it), and the few configs we protected are unlikely to have been set at runtime anyway. On the other hand, it's reasonable to assume some users who use modules load them from the config file anyway. Note that's the whole point of this PR: for redis to be more secure by default and reduce the attack surface on innocent users, so secure defaults will necessarily mean a breaking change.
-
- 18 Dec, 2021 1 commit
-
-
guybe7 authored
Some languages can build a JSON-like object by parsing textual JSON, but this works poorly when attributes contain hyphens. Example in JS:
```
let j = JSON.parse(json)
j['key-name']  <- works
j.key-name     <- illegal syntax
```
-
- 15 Dec, 2021 1 commit
-
-
guybe7 authored
Delete the hardcoded command table and replace it with an auto-generated table, based on a JSON file that describes the commands (each command must have a JSON file). These JSON files are the SSOT of everything there is to know about Redis commands, and it is reflected fully in COMMAND INFO. These JSON files are used to generate commands.c (using a python script), which is then committed to the repo and compiled.

The purpose is:
* Clients and proxies will be able to get much more info from redis, instead of relying on hard coded logic.
* drop the dependency between Redis-user and the commands.json in redis-doc.
* delete help.h and have redis-cli learn everything it needs to know just by issuing COMMAND (will be done in a separate PR)
* redis.io should stop using commands.json and learn everything from Redis (ultimately one of the release artifacts should be a large JSON, containing all the information about all of the commands, which will be generated from COMMAND's reply)
* the byproduct of this is:
  * module commands will be able to provide that info and possibly be more of a first-class citizen
  * in theory, one may be able to generate a redis client library for a strictly typed language, by using this info.

### Interface changes

#### COMMAND INFO's reply change (and arg-less COMMAND)

Before this commit the reply at index 7 contained the key-specs list and the reply at index 8 contained the sub-commands list (both unreleased). Now, the reply at index 7 is a map of:
- summary - short command description
- since - debut version
- group - command group
- complexity - complexity string
- doc-flags - flags used for documentation (e.g. "deprecated")
- deprecated-since - if deprecated, from which version?
- replaced-by - if deprecated, which command replaced it?
- history - a list of (version, what-changed) tuples
- hints - a list of strings, meant to provide hints for clients/proxies. see https://github.com/redis/redis/issues/9876
- arguments - an array of arguments. each element is a map, with the possibility of nesting (sub-arguments)
- key-specs - an array of key specs (already in unstable, just changed location)
- subcommands - a list of sub-commands (already in unstable, just changed location)
- reply-schema - will be added in the future (see https://github.com/redis/redis/issues/9845)

More details on these can be found in https://github.com/redis/redis-doc/pull/1697

Only the first three fields are mandatory.

#### API changes (unreleased API obviously)

Now they take a RedisModuleCommand opaque pointer instead of looking up the command by name:
- RM_CreateSubcommand
- RM_AddCommandKeySpec
- RM_SetCommandKeySpecBeginSearchIndex
- RM_SetCommandKeySpecBeginSearchKeyword
- RM_SetCommandKeySpecFindKeysRange
- RM_SetCommandKeySpecFindKeysKeynum

Currently, we did not add a module API to provide additional information about their commands because we couldn't agree on how the API should look, see https://github.com/redis/redis/issues/9944 .

### Somewhat related changes

1. Literals should be in uppercase while placeholders are in lowercase. Now all the GEO* commands will be documented with M|KM|FT|MI and can take both lowercase and uppercase.

### Unrelated changes

1. Bugfix: no_mandatory_keys was absent in COMMAND's reply
2. expose CMD_MODULE as "module" via COMMAND
3. have a dedicated uint64 for ACL categories (instead of having them in the same uint64 as command flags)

Co-authored-by:
Itamar Haber <itamar@garantiadata.com>
-
- 24 Nov, 2021 1 commit
-
-
Binbin authored
In #9323, when `repl-diskless-load` is enabled and set to `swapdb`, if the master replication ID hasn't changed, we can load the data-set asynchronously and serve read commands during the full resync. In the `diskless loading short read` test, after loading successfully, we wait for the loading to stop and continue the for loop. After the introduction of `async_loading`, we also need to check it. Otherwise the next loop will start too soon and may trigger a timing issue.
-
- 23 Nov, 2021 1 commit
-
-
guybe7 authored
Some people complain that QUIT is missing from the help/command table, not appearing in the COMMAND command, command stats, ACL, etc. Instead, there's a hack in processCommand with a comment that looks outdated. Note that it is [documented](https://redis.io/commands/quit). At the same time, HOST: and POST are there in the command table although these are not real commands. They would appear in the COMMAND command, and even in commandstats.

Other changes:
1. Initialize the static logged_time variable in securityWarningCommand
2. add the `no-auth` flag to RESET so it can always be executed.
-
- 09 Nov, 2021 1 commit
-
-
YaacovHazan authored
In both tests, "diskless loading short read" and "diskless loading short read with module", the timeout of waiting for the replica to respond to a short read and log it, is too short. Also, add --dump-logs in runtest-moduleapi for valgrind runs.
-
- 04 Nov, 2021 1 commit
-
-
Eduardo Semprebon authored
For diskless replication in swapdb mode, considering we already spend replica memory having a backup of the current db to restore in case of failure, we can have the following benefits by instead swapping the database only in case we succeeded in transferring the db from the master:

- Avoid a `LOADING` response during failed and successful synchronization for cases where the replica is already up and running with data.
- Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could be implemented also for disk replication with similar benefits if consumers are willing to spend the extra memory usage.

General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode will only kick in if the replica is syncing from a master that has the same repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading`, to differentiate from the blocking loading
- Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db and the tempDb that is passed around.
- Because this is affecting replicas only, we assume that if they are not read-only and receive write commands during replication, those are lost after SYNC the same way as before, but we're still denying CONFIG SET here anyway to avoid complications.

Considerations for review:
- We have many cases where the server.loading flag is used, and even though I tried my best, there may be cases where async_loading should be checked as well and cases where it shouldn't (this would require a very good understanding of the whole code).
- Several places that had different behavior depending on the loading flag were actually meant to just handle commands coming from the AOF client differently than ones coming from real clients; these were changed to check CLIENT_ID_AOF instead.

**Additional for Release Notes**
- Bugfix - server.dirty was not incremented for any kind of diskless replication; as an effect it wouldn't contribute to triggering the next database SAVE
- New flag for the RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event. Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED, ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions to allow modules to declare they support diskless replication with async loading (when absent, we fall back to disk-based loading).

Co-authored-by:
Eduardo Semprebon <edus@saxobank.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
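A hedged sketch of how a module might opt in to async loading and observe the new events; the subevent constant names below are assumptions derived from the STARTED/ABORTED/COMPLETED sub-events described above:

```c
/* Sketch: declare support for replication with async loading and react to
 * the replacement events for the deprecated RedisModuleEvent_ReplBackup. */
static void OnReplAsyncLoad(RedisModuleCtx *ctx, RedisModuleEvent e,
                            uint64_t subevent, void *data) {
    REDISMODULE_NOT_USED(e);
    REDISMODULE_NOT_USED(data);
    if (subevent == REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_STARTED) {        /* assumed name */
        RedisModule_Log(ctx, "notice", "async load started");
    } else if (subevent == REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_COMPLETED) { /* assumed name */
        RedisModule_Log(ctx, "notice", "async load completed");
    } else if (subevent == REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_ABORTED) {   /* assumed name */
        RedisModule_Log(ctx, "notice", "async load aborted");
    }
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "asyncload_demo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);
    return RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_ReplAsyncLoad,
                                              OnReplAsyncLoad);
}
```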
-
- 01 Nov, 2021 1 commit
-
-
Oran Agra authored
The module test in reply.tcl was introduced by #8521 but didn't run until recently (see #9639), and then it started failing with valgrind. This is because valgrind uses a 64 bit long double (unlike most other platforms that have at least 80 bits). But besides valgrind, the tests were also incompatible with ARM32, which also uses 64 bit long doubles. We now use appropriate values to avoid issues with either valgrind or ARM32.

In all the double tests, I use 3.141, which is safe since addReplyDouble uses `%.17Lg`, which is able to represent this value without adding any digits due to precision loss. In the long double tests, since we use `%.17Lf` in ld2string, it preserves 17 digits after the decimal point, rather than 17 significant digits (like in `%.17Lg`). So to make these similar, I use a value lower than 1 (no digits left of the period).

Lastly, we have the same issue with TCL (no long doubles), so we read raw protocol in that test.

Note that the only error before this fix (in both valgrind and ARM32) is this:
```
*** [err]: RM_ReplyWithLongDouble: a float reply in tests/unit/moduleapi/reply.tcl
Expected '3.141' to be equal to '3.14100000000000001' (context: type eval line 2 cmd {assert_equal 3.141 [r rw.longdouble 3.141]} proc ::test)
```
So the changes to debug.c and scripting.tcl aren't really needed, but I consider them a cleanup (i.e. scripting.c validated a different constant than the one that's sent to it from debug.c).

Another unrelated change is to add the RESP version to the repeated tests in reply.tcl
-
- 25 Oct, 2021 2 commits
-
-
Shaya Potter authored
Let modules use an additional type of RESP3 response (unused by redis so far). Also fix tests that were introduced in #8521 but didn't actually run. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Wang Yuan authored
## Background
For a redis master, each replica uses one copy of the replication buffer. That is a big waste of memory: more replicas, more waste, and allocating/freeing memory for every reply list also costs much. If we set client-output-buffer-limit small and the write traffic is heavy, the master may disconnect from replicas and can't finish synchronization with them. If we set client-output-buffer-limit big, the master may be OOM when there are many replicas that separately keep much memory. Because the replication buffers of different replica clients are the same, one simple idea is that all replicas use only one replication buffer, which will effectively save memory. Since the replication backlog content is the same as the replicas' output buffers, we can now discard the replication backlog memory and use the global shared replication buffer to implement the replication backlog mechanism.

## Implementation
I create one global "replication buffer" which contains the content of the replication stream. The structure of the "replication buffer" is similar to the reply list that exists in every client, but the node of the list is `replBufBlock`, which has `id, repl_offset, refcount` fields.
```c
/* Replication buffer blocks is the list of replBufBlock.
 *
 * +--------------+       +--------------+       +--------------+
 * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
 * +--------------+       +--------------+       +--------------+
 *      |                                            /       \
 *      |                                           /         \
 *      |                                          /           \
 *  Repl Backlog                              Replica_A     Replica_B
 *
 * Each replica or replication backlog increments only the refcount of the
 * 'ref_repl_buf_node' which it points to. So when a replica walks to the next
 * node, it should first increase the next node's refcount, and when we trim
 * the replication buffer nodes, we always remove nodes from the head node whose
 * refcount is 0. If the refcount of the head node is not 0, we must stop
 * trimming and never iterate the next node. */

/* Similar to 'clientReplyBlock', it is used for shared buffers between
 * all replica clients and the replication backlog. */
typedef struct replBufBlock {
    int refcount;           /* Number of replicas or repl backlog using. */
    long long id;           /* The unique incremental number. */
    long long repl_offset;  /* Start replication offset of the block. */
    size_t size, used;
    char buf[];
} replBufBlock;
```
So now when we feed the replication stream into the replication backlog and all replicas, we only need to feed the stream into the replication buffer (`feedReplicationBuffer`). In this function, we set some fields of the replication backlog and replicas to references of the global replication buffer blocks. We also need to check the replicas' output buffer limit to free them if they exceed `client-output-buffer-limit`, and trim the replication backlog if it exceeds `repl-backlog-size`. When sending a reply to replicas, we also need to iterate the replication buffer blocks and send their content; when one block has been fully sent to a replica, we decrease the current node's refcount and increase the next node's refcount, and then free the blocks whose refcount is 0 from the head of the replication buffer blocks. Since we now use a linked list to manage the replication backlog, it may cost much time to iterate all linked list nodes to find the corresponding replication buffer node, so we create a rax tree to store some nodes for indexing; to avoid the rax tree occupying too much memory, I record one node per 64 for indexing.

Currently, to make partial resynchronization possible as much as we can, we always let the replication backlog be the last reference of the replication buffer blocks. The backlog size may exceed our setting if slow replicas reference vast replication buffer blocks, but this method doesn't increase memory usage since they share the replication buffer. To avoid freezing the server when freeing unreferenced replication buffer blocks while trimming the backlog for exceeding the backlog size setting, we trim the backlog incrementally (free 64 blocks per call now), and make it faster in `beforeSleep` (free 640 blocks).

### Other changes
- `mem_total_replication_buffers`: we add this field in the INFO command; it means the total memory of replication buffers used.
- `mem_clients_slaves`: now even if a replica is slow to replicate and its output buffer memory is not 0, this field may still report 0, since the replication backlog and replicas share one global replication buffer. Only if the replication buffer memory is more than the repl backlog setting size do we consider the excess as the replicas' memory; otherwise, we consider the replication buffer memory as the consumption of the repl backlog.
- Key eviction: Since all replicas and the replication backlog share the global replication buffer, we consider only the part exceeding the backlog size as the extra separate consumption of replicas. Because we trim the backlog incrementally in the background, the backlog size may exceed our setting if slow replicas that reference vast replication buffer blocks disconnect. To avoid a massive eviction loop, we don't count the delayed-freed replication backlog into used memory even if there are no replicas, i.e. we also regard this memory as the replicas' memory.
- `client-output-buffer-limit` check for replica clients: It doesn't make sense to set the replica clients' output buffer limit lower than the repl-backlog-size config (partial sync will succeed and then the replica will get disconnected). Such a configuration is ignored (the size of repl-backlog-size will be used). This doesn't have memory consumption implications since the replica client will share the backlog buffers memory.
- Drop the replication backlog after loading data if needed: We always create a replication backlog if the server is a master. We need it because we put DELs in it when loading expired keys from the RDB, but if the RDB doesn't have replication info or there is no RDB, it is not possible to support partial resynchronization, so to avoid the extra memory of the replication backlog, we drop it.
- Multi IO threads: Since all replicas and the replication backlog use the global replication buffer, if I/O threads are enabled, to guarantee thread-safe data access we must let the main thread handle sending the output buffer to all replicas. Before, other IO threads could handle sending the output buffers of all replicas.

## Other optimizations
This solution resolves some other problems:
- When replicas disconnect from the master because of the output buffer limit, releasing the output buffer of replicas may freeze the server if we set a big `client-output-buffer-limit` for replicas, but now it doesn't cause freezing.
- This implementation may mitigate the reply list copy cost (which also freezes the server) when one replica has a huge reply buffer and another replica copies its buffer for full synchronization. Now, we just copy the reference info, which is very light.
- If we set the replication backlog size big, it also may cost much time to copy the replication backlog into the replica's output buffer. This commit eliminates that problem.
- Resizing the replication backlog size doesn't empty the current replication backlog content.
-
- 21 Oct, 2021 1 commit
-
-
Shaya Potter authored
Before this commit, module blocked clients did not carry through the original RESP version, resulting in RESP3 clients receiving unexpected RESP2 replies.
-
- 20 Oct, 2021 2 commits
-
-
Oran Agra authored
Following #9483 the daily CI exposed a few problems.
* The cluster creation code (which uses redis-cli) is complicated to test with TLS enabled. For now I'm just skipping these tests since the tests we run there don't really need that kind of coverage.
* Cluster port binding failures: note that `find_available_port` already looks for a free cluster port, but the code in `wait_server_started` couldn't detect the failure to bind (the text it greps for wasn't found in the log).
-
guybe7 authored
## Intro
The purpose is to allow having different flags/ACL categories for subcommands (Example: CONFIG GET is ok-loading but CONFIG SET isn't). We create a small command table for every command that has subcommands, and each subcommand has its own flags, etc. (same as a "regular" command). This commit also unites the Redis and the Sentinel command tables.

## Affected commands
CONFIG - Used to have "admin ok-loading ok-stale no-script". Changes:
1. Dropped "ok-loading" in all except GET (this doesn't change behavior since there were checks in the code doing that)

XINFO - Used to have "read-only random". Changes:
1. Dropped "random" in all except CONSUMERS

XGROUP - Used to have "write use-memory". Changes:
1. Dropped "use-memory" in all except CREATE and CREATECONSUMER

COMMAND - No changes.

MEMORY - Used to have "random read-only". Changes:
1. Dropped "random" in PURGE and USAGE

ACL - Used to have "admin no-script ok-loading ok-stale". Changes:
1. Dropped "admin" in WHOAMI, GENPASS, and CAT

LATENCY - No changes.

MODULE - No changes.

SLOWLOG - Used to have "admin random ok-loading ok-stale". Changes:
1. Dropped "random" in RESET

OBJECT - Used to have "read-only random". Changes:
1. Dropped "random" in ENCODING and REFCOUNT

SCRIPT - Used to have "may-replicate no-script". Changes:
1. Dropped "may-replicate" in all except FLUSH and LOAD

CLIENT - Used to have "admin no-script random ok-loading ok-stale". Changes:
1. Dropped "random" in all except INFO and LIST
2. Dropped "admin" in ID, TRACKING, CACHING, GETREDIR, INFO, SETNAME, GETNAME, and REPLY

STRALGO - No changes.

PUBSUB - No changes.

CLUSTER - Changes:
1. Dropped "admin" in countkeysinslots, getkeysinslot, info, nodes, keyslot, myid, and slots

SENTINEL - No changes.

(note that DEBUG also fits, but we decided not to convert it since it's for debugging and anyway undocumented)

## New sub-command
This commit adds another element to the per-command output of COMMAND, describing the list of subcommands, if any (in the same structure as "regular" commands). Also, it adds a new subcommand:
```
COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)]
```
which returns a set of all commands (unless filtered), but excluding subcommands.

## Module API
A new module API, RM_CreateSubcommand, was added, in order to allow module writers to define subcommands.

## ACL changes:
1. Now that each subcommand is actually a command, each has its own ACL id.
2. The old mechanism of allowed_subcommands is redundant (blocking/allowing a subcommand is the same as blocking/allowing a regular command), but we had to keep it, to support the widespread usage of allowed_subcommands to block commands with certain args, that aren't subcommands (e.g. "-select +select|0").
3. I have renamed allowed_subcommands to allowed_firstargs to emphasize the difference.
4. Because subcommands are commands in ACL too, you can now use "-" to block subcommands (e.g. "+client -client|kill"), which wasn't possible in the past.
5. It is also possible to use the allowed_firstargs mechanism with subcommands. For example: `+config -config|set +config|set|loglevel` will block all CONFIG SET except for setting the log level.
6. All of the ACL changes above required some amount of refactoring.

## Misc
1. There are two approaches: Either each subcommand has its own function or all subcommands use the same function, determining what to do according to argv[0]. For now, I took the former approach only with CONFIG and COMMAND, while other commands use the latter approach (for a smaller blamelog diff).
2. Deleted memoryGetKeys: It is no longer needed because MEMORY USAGE now uses the "range" key spec.
3. Bugfix: GETNAME was missing from CLIENT's help message.
4. Sentinel and Redis now use the same table, with the same function pointer. Some commands have a different implementation in Sentinel, so we redirect them (these are ROLE, PUBLISH, and INFO).
5. Command stats now show the stats per subcommand (e.g. instead of stats just for "config" you will have stats for "config|set", "config|get", etc.)
6. It is now possible to use COMMAND directly on subcommands: COMMAND INFO CONFIG|GET (the pipeline syntax was inspired by ACL, and can be used in the functions lookupCommandBySds and lookupCommandByCString)
7. STRALGO is now a container command (has "help")

## Breaking changes:
1. Command stats now show the stats per subcommand (see (5) above)
-
- 19 Oct, 2021 1 commit
-
-
qetu3790 authored
Prevent clients from being blocked forever in cluster mode when they block with their own module command and the hash slot is migrated to another master at the same time. These will get a redirection message when unblocked. Also, release clients blocked on module commands when the cluster is down (same as other blocked clients). This commit adds basic tests to the main (non-cluster) redis test infra that test the cluster. This was done because the cluster test infra can't handle some common test features, but most importantly we only build the test modules with the non-cluster test suite. Note that rather than really supporting cluster operations in the test infra, this was added (as duplicated code) in two files, one for module tests and one for non-module tests; maybe in the future we'll refactor that. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 18 Oct, 2021 1 commit
-
-
Viktor Söderqvist authored
The LRU of the key is not touched. Logically expired keys logically don't exist, so they're treated as such.
-
- 30 Sep, 2021 1 commit
-
-
Hanna Fadida authored
Adding an advanced API to enable loading data that was serialized with a specific encoding version.
-