- 06 Sep, 2023 1 commit
-
-
bodong.ybd authored
Before:
```
127.0.0.1:6379> command getkeys sort_ro key
(empty array)
127.0.0.1:6379>
```
After:
```
127.0.0.1:6379> command getkeys sort_ro key
1) "key"
127.0.0.1:6379>
```
(cherry picked from commit b59f53ef)
-
- 10 Jul, 2023 1 commit
-
-
Lior Lahav authored
When getKeysUsingKeySpecs processes a command with more than one key-spec and is called with a total of more than 256 keys, it calls getKeysPrepareResult again; but since numkeys isn't updated, getKeysPrepareResult does not bother to copy key names from the old result (leaving these slots uninitialized). Furthermore, it did not consider the keys it had already found when allocating more space. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit b7559d9f)
-
- 28 Feb, 2023 1 commit
-
-
guybe7 authored
Starting from Redis 7.0 (#9890) we started wrapping everything a command propagates with MULTI/EXEC. The problem is that both SCAN and RANDOMKEY can lazy-expire arbitrary keys (similar behavior to active-expire), and put DELs in a transaction.

Fix: when these commands are called without a parent exec-unit (e.g. not inside EVAL or MULTI), we avoid wrapping their DELs in a transaction (for the same reasons active-expire and eviction avoid a transaction). This PR adds a per-command flag that indicates the command may touch arbitrary keys (not only the ones in its arguments), and uses that flag to skip the MULTI/EXEC wrapping. For now, this flag is internal, since we're considering other solutions for the future.

Note for cluster mode: if SCAN/RANDOMKEY is inside EVAL/MULTI it can still cause the same situation (as it always did), but it won't cause a CROSSSLOT error because replicas and AOF do not perform slot checks. The problem above mainly affects 3rd-party ecosystem tools that propagate commands from master to master, or feed an AOF file into a master with redis-cli.

This PR aims to fix the regression in Redis 7.0; we opened #11792 to try to handle the bigger problem of lazy expire better in another release. (cherry picked from commit fd82bccd)
-
- 16 Jan, 2023 1 commit
-
-
Oran Agra authored
Related to the hang reported in #11671. Currently, Redis can disconnect a client for reaching the output buffer limit, and it also stops feeding that output buffer with more data, but it keeps running the loop in the command (despite the client already being marked for disconnection). This PR is an attempt to mitigate the problem for commands that are easy to abuse: KEYS, HRANDFIELD, SRANDMEMBER, ZRANDMEMBER. The RAND family of commands can take a negative COUNT argument (which is not bounded by the number of elements in the key), so it's enough to create a key with one field and then these commands can be used to hang Redis. For KEYS, the caller can use the existing keyspace in Redis (if big enough).
-
- 12 Dec, 2022 2 commits
-
-
C Charles authored
MIGRATE with AUTH that contains "keys" is getting wrong key names in migrateGetKeys, leads to ACL errors (#11253)

When using MIGRATE with a destination Redis that has its user name or password set to the string "keys", Redis would determine the wrong set of key names the command is going to access. This led to ACL returning a wrong authentication result.

Destination instance:
```
127.0.0.1:6380> acl setuser default >keys
OK
127.0.0.1:6380> acl setuser keys on nopass ~* &* +@all
OK
```
Source instance:
```
127.0.0.1:6379> set a 123
OK
127.0.0.1:6379> acl setuser cc on nopass ~a* +@all
OK
127.0.0.1:6379> auth cc 1
OK
127.0.0.1:6379> migrate 127.0.0.1 6380 "" 0 1000 auth keys keys a
(error) NOPERM this user has no permissions to access one of the keys used as arguments
127.0.0.1:6379> migrate 127.0.0.1 6380 "" 0 1000 auth2 keys pswd keys a
(error) NOPERM this user has no permissions to access one of the keys used as arguments
```
Using `acl dryrun` we can see that the parameters of `auth` and `auth2` are mistaken for the `keys` option:
```
127.0.0.1:6379> acl dryrun cc migrate whatever whatever "" 0 1000 auth keys keys a
"This user has no permissions to access the 'keys' key"
127.0.0.1:6379> acl dryrun cc migrate whatever whatever "" 0 1000 auth2 keys pswd keys a
"This user has no permissions to access the 'pswd' key"
```
Fix the bug by editing the migrateGetKeys function in db.c, which finds the `keys` option and all the keys following it. (cherry picked from commit 9ab873d9)
-
guybe7 authored
The original idea behind auto-setting the default (first,last,step) spec was to use the most "open" flags when the user didn't provide any key-spec flags information. While that is a good approach, it really makes no sense to set CMD_KEY_VARIABLE_FLAGS if the user didn't provide the getkeys-api flag: in this case there's no way to retrieve these variable flags, so what's the point? Internally, Redis already had code to ignore this, so this fix doesn't change Redis's behavior; it only affects the output of the COMMAND command. (cherry picked from commit 3330ea18)
-
- 21 Sep, 2022 2 commits
-
-
Oran Agra authored
Redis 7.0 has #9890, which added an assertion when the propagation queue is not flushed by the time we get to beforeSleep. But it turns out that when processCommand calls getNodeByQuery and decides to reject the command, this can lead to a key being lazy-expired and deleted without the propagation queue being flushed afterwards. This change prevents lazy expiry from deleting the key at this stage (i.e. when not part of a command being processed in `call`). (cherry picked from commit c789fb0a)
-
Huang Zhw authored
`bitfield` with `get` is not necessarily readonly: if the command also contains `set` or `incrby` it is a write command, yet before this fix adding a `get` made it pass read-only ACL checks:
```
127.0.0.1:6384> acl setuser hello on nopass %R~* +@all
OK
127.0.0.1:6384> auth hello 1
OK
127.0.0.1:6384> bitfield hello set i8 0 1
(error) NOPERM this user has no permissions to access one of the keys used as arguments
127.0.0.1:6384> bitfield hello set i8 0 1 get i8 0
1) (integer) 0
2) (integer) 1
```
Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit ec5034a2)
-
- 01 Jun, 2022 1 commit
-
-
Oran Agra authored
Fix broken protocol in MISCONF error, RM_Yield bugs, RM_Call(EVAL) OOM check bug, and new RM_Call checks. (#10786)

* Fix broken protocol when Redis can't persist to RDB (general commands, not modules): an excessive newline. Regression of #10372 (7.0 RC3).
* Fix broken protocol when Redis can't persist to AOF (modules and scripts): a missing newline.
* Fix a bug in the OOM check of EVAL scripts called from RM_Call: set the cached OOM state for scripts before executing module commands too, so that it can serve scripts that are executed by modules. In the past, EVAL executed by RM_Call could have either falsely failed or falsely succeeded because of a stale cached OOM state flag.
* Fix bugs with RM_Yield:
  1. SHUTDOWN should only accept the NOSAVE mode.
  2. Avoid eviction during yield command processing.
  3. Avoid processing master client commands while yielding from another client.
* Add two more checks to RM_Call script mode:
  1. READONLY - You can't write against a read only replica.
  2. MASTERDOWN - Link with MASTER is down and `replica-serve-stale-data` is set to `no`.
* Add a new RM_Call flag to let Redis automatically refuse `deny-oom` commands while over the memory limit.
* Add tests to cover various errors from scripts, modules, modules calling scripts, and modules calling commands in script mode.

Added tests:
* The MISCONF error was completely uncovered by the tests; add tests for it, including from scripts and modules.
* Add tests for NOREPLICAS from scripts.
* Add tests for the various errors in module RM_Call, including RM_Call that calls EVAL, and RM_Call in "eval mode". That includes: NOREPLICAS, READONLY, MASTERDOWN, MISCONF.
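For context, a minimal sketch (not from the commit) of what a module-side caller of these script-mode checks might look like. It assumes the "S" (script mode) and "E" (errors as call replies) RM_Call format flags documented for Redis 7.0; the module and command names are illustrative.
```
#include "redismodule.h"

/* Run SET through RM_Call in script mode so the READONLY / MASTERDOWN /
 * MISCONF checks described above are applied to the nested call. */
int SetViaScriptMode_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 3) return RedisModule_WrongArity(ctx);

    /* "E": surface errors (e.g. -READONLY, -MISCONF) as a call reply instead
     * of aborting; "S": apply script-mode checks; "!": propagate effects. */
    RedisModuleCallReply *reply =
        RedisModule_Call(ctx, "SET", "!ESss", argv[1], argv[2]);
    if (reply == NULL)
        return RedisModule_ReplyWithError(ctx, "ERR RM_Call failed");

    RedisModule_ReplyWithCallReply(ctx, reply); /* forward OK or the error */
    RedisModule_FreeCallReply(reply);
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "setscript", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return RedisModule_CreateCommand(ctx, "setscript.set",
            SetViaScriptMode_RedisCommand, "write deny-oom", 1, 1, 1);
}
```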
-
- 11 May, 2022 1 commit
-
-
Binbin authored
FLUSHDB and FLUSHALL add a call to forceCommandPropagation / FLUSHALL resets the dirty counter to 0 if save is enabled (#10691)

## FLUSHALL
We used to restore the dirty counter after `rdbSave` zeroed it if save is enabled; otherwise FLUSHALL would not be replicated nor put into the AOF. And then we incremented it again below; without that extra dirty++, when the db was already empty, FLUSHALL would not be replicated nor put into the AOF. We now replace all that dirty counter magic with a call to forceCommandPropagation (REPL and AOF), instead of all the messing around with the dirty counter. Added tests to cover the three parts (dirty counter, REPL, AOF). One benefit other than cleaner code is that `rdb_changes_since_last_save` is now correct in this case.

## FLUSHDB
FLUSHDB was not replicated nor put into the AOF when the db was already empty. Unlike DEL on a non-existing key, FLUSHDB always does something: it calls the module hook. So basically FLUSHDB is never a NOP, and thus it should always be propagated. Not doing that could mean that if a module does something in that hook and wants to avoid issues of that hook being missing on the replica when the db is empty, it would need to do complicated things. So now FLUSHDB also calls forceCommandPropagation and we always propagate it. Always propagating FLUSHDB seems like a safe approach that shouldn't have any drawbacks (other than looking odd). This was mentioned in #8972.

## Test section
We actually found it while solving a race condition in the BGSAVE test (other.tcl), found in extra_ci Daily Arm64 (test-libc-malloc):
```
[exception]: Executing test client: ERR Background save already in progress.
ERR Background save already in progress
```
It looks like `r flushdb` triggers (schedules) a bgsave right after `waitForBgsave r` and before `r save`. Changing flushdb to flushall helps: FLUSHALL does a foreground save and then sets the dirty counter to 0.
-
- 10 May, 2022 1 commit
-
-
guybe7 authored
Fix #10552. We no longer piggyback getkeys_proc to hold the RedisModuleCommand struct when it exists. Others: use `doesCommandHaveKeys` in `RM_GetCommandKeysWithFlags` and `getKeysSubcommandImpl`. This causes a very minor behavioral change in commands that don't have actual keys but have a spec with `CMD_KEY_NOT_KEY`. For example, before this change `COMMAND GETKEYS SPUBLISH` would return `Invalid arguments specified for command`, but now it returns `The command has no key arguments`.
-
- 13 Apr, 2022 1 commit
-
-
Luke Palmer authored
Add an optional keyspace event when new keys are added to the db. This is useful for applications where clients need to be aware of the redis keyspace. Such an application can SCAN once at startup and then listen for "new" events (plus others associated with DEL, RENAME, etc).
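As an illustration, here is a minimal sketch (not from the commit) of a module consuming this event. It assumes the notification class is exposed to modules as REDISMODULE_NOTIFY_NEW and that the corresponding notify-keyspace-events config flag is 'n'; verify both against your redismodule.h and redis.conf.
```
#include "redismodule.h"

/* Log every key creation reported through the new "new key" event class. */
static int OnNewKey(RedisModuleCtx *ctx, int type, const char *event,
                    RedisModuleString *key) {
    REDISMODULE_NOT_USED(type);
    REDISMODULE_NOT_USED(event);
    RedisModule_Log(ctx, "notice", "new key created: %s",
                    RedisModule_StringPtrLen(key, NULL));
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "newkeylog", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Subscribe only to the assumed "new key" notification class. */
    return RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_NEW, OnNewKey);
}
```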
-
- 08 Mar, 2022 1 commit
-
-
guybe7 authored
Deleting a stream while a client is blocked on XREADGROUP should unblock the client. The idea is that a client blocked via XREADGROUP is different from any other blocking type in the sense that it depends on the existence of both the key and the group. Even if the key is deleted and then revived with XADD, it won't help any clients blocked on XREADGROUP because the group no longer exists, so they would fail with -NOGROUP anyway. The conclusion is that it's better to unblock these clients (with an error) upon the deletion of the key, rather than waiting for the first XADD.

Other changes:
1. Slightly optimize all `serveClientsBlockedOn*` functions by checking `server.blocked_clients_by_type`.
2. All `serveClientsBlockedOn*` functions now use a list iterator rather than looking at `listFirst`, relying on `unblockClient` to delete the head of the list. Before this commit, only `serveClientsBlockedOnStreams` used to work like that.
3. Bugfix: CLIENT UNBLOCK ERROR should work even if the command doesn't have a timeout_callback (only relevant to module commands); see the sketch below.
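A minimal sketch of point (3), using only the standard blocking-client module API; the module and command names are made up for illustration.
```
#include "redismodule.h"

static int Wait_Reply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    return RedisModule_ReplyWithSimpleString(ctx, "unblocked");
}

/* Block the caller indefinitely and deliberately register no timeout
 * callback; the bugfix makes CLIENT UNBLOCK <id> ERROR work for this case. */
static int Wait_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    /* timeout_ms = 0 -> no timeout; timeout callback is NULL on purpose. */
    RedisModule_BlockClient(ctx, Wait_Reply, NULL, NULL, 0);
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "blockdemo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return RedisModule_CreateCommand(ctx, "blockdemo.wait", Wait_RedisCommand,
                                     "readonly", 0, 0, 0);
}
```
From another connection, `CLIENT UNBLOCK <client-id> ERROR` releases the blocked caller with an -UNBLOCKED error, which is what the bugfix makes work even though no timeout callback was registered.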
-
- 22 Feb, 2022 2 commits
-
-
Viktor Söderqvist authored
When WATCH is called on a key that's already logically expired, avoid discarding the transaction when the key is actually deleted. When WATCH is called, a flag is stored if the key is already expired at the time of the watch. The expired key is not deleted, only checked. When a key is "touched", if it is deleted and it was already expired when a client watched it, the client is not marked as dirty. Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
-
Madelyn Olson authored
This implements the following main pieces of functionality:
* Rename the key spec "CHANNEL" to "NOT_KEY", and update the documentation to indicate it's for cluster routing and not for any other key-related purpose.
* Add the getchannels-api, so that modules can now define commands that are subject to ACL channel permission checks.
* Add 4 new flags that describe how a command interacts with a channel (SUBSCRIBE, PUBLISH, UNSUBSCRIBE, and PATTERN). They are all technically composable; it's not clear how a command could both subscribe to and unsubscribe from a channel at once, but there was no reason to add explicit validation there.
* Add two new module APIs, RM_ChannelAtPosWithFlags and RM_IsChannelsPositionRequest, to mirror the functionality provided by the keys-position APIs.
* RM_ACLCheckChannelPermissions (only released in 7.0 RC1) was changed to take flags rather than a boolean literal.
* RM_ACLCheckKeyPermissions (only released in 7.0 RC1) was changed to take flags corresponding to keyspecs instead of custom permission flags. These keyspec flags mimic the flags for ACLCheckChannelPermissions.
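A hedged sketch of the module side: a command that publishes to the channel given in its first argument and reports that channel for ACL checks via the new getchannels mechanism. The constant and flag spellings (REDISMODULE_CMD_CHANNEL_PUBLISH, "getchannels-api") follow the commit text but should be verified against redismodule.h; all names here are illustrative.
```
#include "redismodule.h"

static int Pub_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (RedisModule_IsChannelsPositionRequest(ctx)) {
        /* Tell Redis which argument is a channel and how it is used. */
        if (argc >= 2)
            RedisModule_ChannelAtPosWithFlags(ctx, 1, REDISMODULE_CMD_CHANNEL_PUBLISH);
        return REDISMODULE_OK;
    }
    if (argc != 3) return RedisModule_WrongArity(ctx);
    RedisModuleCallReply *reply = RedisModule_Call(ctx, "PUBLISH", "ss", argv[1], argv[2]);
    if (reply == NULL) return RedisModule_ReplyWithError(ctx, "ERR publish failed");
    RedisModule_ReplyWithCallReply(ctx, reply);
    RedisModule_FreeCallReply(reply);
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "chandemo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return RedisModule_CreateCommand(ctx, "chandemo.pub", Pub_RedisCommand,
                                     "pubsub getchannels-api", 0, 0, 0);
}
```
With this declaration, ACL channel rules (`&pattern`) can be enforced against the command's first argument, which is the point of the getchannels-api.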
-
- 08 Feb, 2022 2 commits
-
-
Viktor Söderqvist authored
There's an assertion added recently to make sure that non-write commands don't use lookupKeyWrite. It was initially meant to be used only on read-only replicas, but we thought it wouldn't get enough coverage there, so we used it on masters too. We now realize that in some cases this can cause issues for modules, so we remove the assert. Other than that, we also make sure not to force expireIfNeeded on read-only replicas, even if they somehow run a write command. See https://github.com/redis/redis/pull/9572#discussion_r800179373
-
Oran Agra authored
- add COMMAND GETKEYSANDFLAGS sub-command
- add RM_KeyAtPosWithFlags and GetCommandKeysWithFlags
- RM_KeyAtPos and RM_CreateCommand set flags requiring full access for keys
- RM_CreateCommand set VARIABLE_FLAGS
- expose `variable_flags` flag in COMMAND INFO key-specs
- getKeysFromCommandWithSpecs prefers key-specs over getkeys-api
- add tests for all of these
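A hedged sketch of a getkeys-api module command reporting per-key flags through RM_KeyAtPosWithFlags, so they show up in COMMAND GETKEYSANDFLAGS. The flag constant spellings (REDISMODULE_CMD_KEY_RO / _ACCESS / _RW / _UPDATE) are taken from the keyspec naming in these commits and should be checked against redismodule.h; the module and command names are made up.
```
#include "redismodule.h"

static int CopyLike_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (RedisModule_IsKeysPositionRequest(ctx)) {
        if (argc >= 3) {
            /* First key is only read, second key is written. */
            RedisModule_KeyAtPosWithFlags(ctx, 1,
                REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS);
            RedisModule_KeyAtPosWithFlags(ctx, 2,
                REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE);
        }
        return REDISMODULE_OK;
    }
    if (argc != 3) return RedisModule_WrongArity(ctx);
    RedisModuleCallReply *reply =
        RedisModule_Call(ctx, "COPY", "!ssc", argv[1], argv[2], "REPLACE");
    if (reply == NULL) return RedisModule_ReplyWithError(ctx, "ERR copy failed");
    RedisModule_ReplyWithCallReply(ctx, reply);
    RedisModule_FreeCallReply(reply);
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "keyflags", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return RedisModule_CreateCommand(ctx, "keyflags.copy", CopyLike_RedisCommand,
                                     "write getkeys-api", 1, 2, 1);
}
```
Running `COMMAND GETKEYSANDFLAGS keyflags.copy src dst` should then list both keys together with the flags reported above.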
-
- 30 Jan, 2022 1 commit
-
-
guybe7 authored
Add optional `notes` to keyspecs.

Other changes:
1. Remove the "incomplete" flag from SORT and SORT_RO: it is misleading since "incomplete" means "this spec may not return all the keys it describes", but SORT and SORT_RO's specs (except the one for the input key) do not return any keys at all. So basically: if a spec's begin_search is "unknown" you should not use it at all, you must use COMMAND GETKEYS; if a spec itself is "incomplete", you can use it to get a partial list of keys, but if you want all of them you must use COMMAND GETKEYS; otherwise, the spec will return all the keys.
2. `getKeysUsingKeySpecs` handles incomplete specs internally.
-
- 26 Jan, 2022 1 commit
-
-
Binbin authored
SET is an R+W command, because it can also do a `GET` of the data: SET without GET is a write-only command, while SET with GET is a read+write command. In #9974 we added ACL support for write-only access, so when a user uses SET with the GET option and doesn't have READ permission on the key, we need to reject it; but we'd rather not reject users with write-only permissions from using the SET command when they don't use GET. In this commit, we add a `getkeys_proc` function to control key flags in the SET command. We also add a new key-spec flag (VARIABLE_FLAGS) meaning that some keys might have different flags depending on the arguments. We also handle the BITFIELD command, adding a `bitfieldGetKeys` function: BITFIELD GET is a read-only command, while BITFIELD SET or BITFIELD INCR are read+write commands.

Other changes:
1. SET GET was added in 6.2; add the missing `since` in set.json.
2. Added tests to cover the changes in acl-v2.tcl.
3. Fix some typos in server.h and cleanups in acl-v2.tcl.

Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 20 Jan, 2022 2 commits
-
-
Madelyn Olson authored
* Implemented selectors which provide multiple different sets of permissions to users
* Implemented key based permissions
* Added a new ACL dry-run command to test permissions before execution
* Updated module APIs to support checking key based permissions

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
perryitay authored
Adding a module API for processing commands during busy jobs, and allow flagging the commands that should be handled in this state (#9963)

Some modules might perform long-running logic in different stages of the Redis lifetime, for example:
* command execution
* RDB loading
* thread safe context

During this long-running logic Redis is not responsive. This PR offers:
1. An API to process events while a busy command is running (`RM_Yield`); see the sketch below.
2. A new flag (`ALLOW_BUSY`) to mark the commands that should be handled during busy jobs, which can also be used by modules (`allow-busy`).
3. In slow commands and thread safe contexts, this flag will start rejecting commands with -BUSY only after `busy-reply-threshold`.
4. During loading (the `rdb_load` callback), events are processed right away (without waiting for `busy-reply-threshold`), but either way the processing is throttled to the server hz rate.
5. Allow modules to yield to Redis background tasks, but not to client commands.
* Rename `script-time-limit` to `busy-reply-threshold` (an alias to the pre-7.0 `lua-time-limit`).

Co-authored-by:
Oran Agra <oran@redislabs.com>
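A hedged sketch of the module side: a deliberately slow command that periodically calls RM_Yield, plus a second command flagged allow-busy that stays serviceable while the first one runs. The flag constant REDISMODULE_YIELD_FLAG_CLIENTS is assumed from the 7.0 module docs; names are illustrative.
```
#include "redismodule.h"

static int Slow_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    for (long long i = 0; i < 100000000LL; i++) {
        if (i % 1000000 == 0) {
            /* Let Redis process events; clients that can't be served get
             * -BUSY with this reply once busy-reply-threshold is exceeded. */
            RedisModule_Yield(ctx, REDISMODULE_YIELD_FLAG_CLIENTS, "crunching numbers");
        }
    }
    return RedisModule_ReplyWithSimpleString(ctx, "done");
}

static int Ping_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    return RedisModule_ReplyWithSimpleString(ctx, "PONG from busy module");
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "slowmod", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    if (RedisModule_CreateCommand(ctx, "slowmod.crunch", Slow_RedisCommand,
                                  "readonly", 0, 0, 0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* "allow-busy" marks this command as serviceable during busy jobs. */
    return RedisModule_CreateCommand(ctx, "slowmod.ping", Ping_RedisCommand,
                                     "readonly allow-busy fast", 0, 0, 0);
}
```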
-
- 11 Jan, 2022 1 commit
-
-
Binbin authored
This commit adds tests in which keys that have an expiration set are accessed from within a script call; there was no test coverage for this part before. See #10080. There is also a test covering #1525: we block the time so that the key cannot expire in the middle of the script execution.

Other changes:
1. Delete `evalTimeSnapshot` and just use `scriptTimeSnapshot` in its place.
2. Some cleanups to scripting.tcl.
3. Better names for tests that run in a loop, to make them distinguishable.
-
- 06 Jan, 2022 1 commit
-
-
Meir Shpilraien (Spielrein) authored
# Redis Function Libraries

This PR implements Redis Function Libraries as described in: https://github.com/redis/redis/issues/9906. The purpose of libraries is to provide better code sharing between functions by allowing the creation of multiple functions in a single command. Functions that were created together can safely share code with each other without worrying about compatibility issues and versioning.

Creating a new library is done using the 'FUNCTION LOAD' command (the full API is described below). This PR introduces a new struct called libraryInfo, which holds information about a library:
* name - name of the library
* engine - engine used to create the library
* code - library code
* description - library description
* functions - the functions exposed by the library

When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo. Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo. As a result, the engine will create one or more functions by calling 'libraryCreateFunction'. The new function will be added to the newly created libraryInfo. So far everything happens locally on the libraryInfo, so it is easy to abort the operation (in case of an error) by simply freeing the libraryInfo. After the library info is fully constructed we start the joining phase, in which we join the new library to the other libraries that currently exist on Redis. The joining phase makes sure there is no function collision and adds the library to the librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact same way as functionCtx was used (with respect to RDB loading, replication, ...). The only difference is that apart from the function dictionary (mapping function name to functionInfo object), the librariesCtx also contains a libraries dictionary that maps library name to libraryInfo object.

## New API

### FUNCTION LOAD
`FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
Create a new library with the given parameters:
* ENGINE - Engine name to use to create the library.
* LIBRARY NAME - The new library name.
* REPLACE - If the library already exists, replace it.
* DESCRIPTION - Library description.
* CODE - Library code.

Returns "OK" on success, or an error in the following cases:
* Library name already taken and REPLACE was not used
* Name collision with another existing library (even if replace was used)
* Library registration failed by the engine (usually a compilation error)

## Changed API

### FUNCTION LIST
`FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
The command was modified to also allow getting library code (so the `FUNCTION INFO` command is no longer needed and was removed). In addition the command gets an optional argument, `LIBRARYNAME`, which allows you to only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.

### INFO MEMORY
Added the number of libraries to `INFO MEMORY`.

### Command flags
The `DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands as commands that add new data to the dataset (functions are data) and so we want to disallow running them on OOM.

## Removed API
* FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
* FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899

## Lua engine changes
When the Lua engine gets the code given on the `FUNCTION LOAD` command, it immediately runs it; we call this run the loading run. The loading run is not a usual script run: it is not possible to invoke any Redis command from within the load run. Instead there is a new API provided by the `library` object. The new APIs:
* `redis.log` - behaves the same as `redis.log`
* `redis.register_function` - registers a new function to the library

The loading run's purpose is to register functions using the new `redis.register_function` API. Any attempt to use any other API will result in an error. In addition, the load run has a time limit of 500ms; an error is raised on timeout and the entire operation is aborted.

### `redis.register_function`
`redis.register_function(<function_name>, <callback>, [<description>])`
This new API allows users to register a new function that will be linked to the newly created library. This API can only be called during the load run (see the definition above); any attempt to use it outside of the load run will result in an error. The parameters passed to the API are:
* function_name - Function name (must be a Lua string)
* callback - Lua function object that will be called when the function is invoked using FCALL/FCALL_RO
* description - Function description, optional (must be a Lua string)

### Example
The following example creates a library called `lib` with two functions, `f1` and `f2`, returning 1 and 2 respectively:
```
local function f1(keys, args)
    return 1
end

local function f2(keys, args)
    return 2
end

redis.register_function('f1', f1)
redis.register_function('f2', f2)
```
Notice: unlike `eval`, functions inside a library get KEYS and ARGV as arguments to the functions and not as globals.

### Technical Details
On the load run we only want the user to be able to call a whitelist of APIs. This way, in the future, if new APIs are added, they will not be available to the load run unless specifically added to this whitelist. We put the whitelist on the `library` object and make sure the `library` object is only available to the load run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set the `globals` of a function (and all the functions it creates). Before starting the load run we create a fresh Lua table (call it `g`) that only contains the `library` API (we make sure to set global protection on this table just like the general global protection that already exists today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) to set `g` as the global table of the load run. After the load run finishes we update `g`'s metatable and set the `__index` and `__newindex` functions to `_G` (Lua's default globals); we also pop out the `library` object as we do not need it anymore. This way, any function that was created in the load run (and will be invoked using `fcall`) will see the default globals as it expects to see them, and will not have the `library` API anymore.

An important outcome of this new approach is that now we can achieve a distinct global table for each library (it is not yet like that, but it is very easy to achieve now). In the future we can decide to remove global protection because globals of different libraries will not collide, or we can choose to give different APIs to different libraries based on some configuration or input.

Notice that this technique was meant to prevent errors and was not meant to prevent a malicious user from exploiting it. For example, the load run can still save the `library` object in some local variable and then use it in an `fcall` context. To prevent such malicious use, the C code also makes sure it is running in the right context and, if not, raises an error.
-
- 04 Jan, 2022 1 commit
-
-
Matthieu MOREL authored
This sets up dependabot to check weekly for updates to pip and github-actions dependencies. If it finds an update it will create a PR to update the dependency. More information can be found here. It includes the update of:
* vmactions/freebsd-vm from 0.1.4 to 0.1.5
* codespell from 2.0.0 to 2.1.0

Also includes spelling fixes found by the latest version of codespell, and a dedicated .codespell folder so dependabot can read a requirements.txt file and every file dedicated to codespell can be grouped in the same place. Co-Authored-By:
Matthieu MOREL <mmorel-35@users.noreply.github.com> Co-Authored-By:
MOREL Matthieu <matthieu.morel@cnp.fr>
-
- 02 Jan, 2022 2 commits
-
-
Viktor Söderqvist authored
To avoid data loss, this commit adds a grace period for lagging replicas to catch up the replication offset.

Done:
* Wait for replicas when shutdown is triggered by SIGTERM and SIGINT.
* Wait for replicas when shutdown is triggered by the SHUTDOWN command. A new blocked client type BLOCKED_SHUTDOWN is introduced, allowing multiple clients to call SHUTDOWN in parallel. Note that they don't expect a response unless an error happens and shutdown is aborted.
* Log a warning for each replica lagging behind when finishing shutdown.
* CLIENT_PAUSE_WRITE while waiting for replicas.
* Configurable grace period 'shutdown-timeout' in seconds (default 10).
* New flags for the SHUTDOWN command:
    - NOW disables the grace period for lagging replicas.
    - FORCE ignores errors writing the RDB or AOF files which would normally prevent a shutdown.
    - ABORT cancels an ongoing shutdown. Can't be combined with other flags.
* New field in the output of the INFO command: 'shutdown_in_milliseconds'. The value is the remaining maximum time to wait for lagging replicas before finishing the shutdown. This field is present in the Server section **only** during shutdown.

Not directly related:
* When shutting down, if there is an AOF saving child, it is killed **even** if AOF is disabled. This can happen if BGREWRITEAOF is used when AOF is off.
* Client pause now has an end time and type (WRITE or ALL) per purpose. The different pause purposes are *CLIENT PAUSE command*, *failover* and *shutdown*. If clients are unpaused for one purpose, it doesn't affect client pause for other purposes. For example, the CLIENT UNPAUSE command doesn't affect client pause initiated by the failover or shutdown procedures. A completed failover or a failed shutdown doesn't unpause clients paused by the CLIENT PAUSE command.

Notes:
* DEBUG RESTART doesn't wait for replicas.
* We already have a warning logged when a replica disconnects. This means that if any replica connection is lost during the shutdown, it is either logged as disconnected or as lagging at the time of exit.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
yoav-steinberg authored
This is needed in order to ease the deployment of functions for ephemeral cases, where a user needs to spin up a server with functions pre-loaded.

#### Details:
* Added a `--functions-rdb` option to _redis-cli_.
* Functions-only rdb via `REPLCONF rdb-filter-only functions`. This is a placeholder for a space separated inclusion filter for the RDB. In the future it can be `REPLCONF rdb-filter-only "functions db:3 key-pattern:user*"`, and a complementing `rdb-filter-exclude` `REPLCONF` can also be added.
* Handle a "slave requirements" specification in the RDB saving code so we can use the same RDB when different slaves express the same requirements (like functions-only) and not share the RDB when their requirements differ. This is currently just a flags `int`, but can be extended to a more complex structure with various filter fields.
* Make sure to support filters only in diskless replication mode (so as not to override the persistence file); we do that by forcing diskless (even if disabled by config).

Other changes:
* Some refactoring in rdb.c (extract a portion of a big function into a sub-function).
* rdb_key_save_delay is used in AOFRW too.
* sendChildInfo takes the number of updated keys (incremental, rather than absolute).

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 22 Dec, 2021 1 commit
-
-
guybe7 authored
The mess: some parts use alsoPropagate for late propagation, while others use an immediate one (propagate()), causing edge cases, ugly/hacky code, and a tendency for bugs.

The basic idea is that all commands are propagated via alsoPropagate (i.e. added to a list) and the top-most call() is responsible for going over that list and actually propagating the commands (wrapping them in MULTI/EXEC if there's more than one command). This is done in the new function, propagatePendingCommands.

Callers of propagatePendingCommands:
1. The top-most call() (we want all nested call()s to add to the also_propagate array and just the top-most one to propagate them) - via `afterCommand`.
2. handleClientsBlockedOnKeys: it is out of call() context and it may propagate stuff - via `afterCommand`.
3. handleClientsBlockedOnKeys edge case: if the looked-up key is already expired, we will propagate the expire but will not unblock any client, so `afterCommand` isn't called. In that case, we have to propagate the deletion explicitly.
4. Cron stuff: active-expire and eviction may also propagate stuff.
5. Modules: the module API allows propagating stuff from just about anywhere (timers, keyspace notifications, threads). I could have tried to catch all the out-of-call-context places, but it seemed easier to handle it in one place: when we free the context. In the spirit of what was done in call(), only the top-most freeing of a module context may cause propagation.
6. Modules: when using a thread-safe ctx it's not clear when/if the ctx will be freed. We do know that the module must lock the GIL before calling RM_Replicate/RM_Call, so we propagate the pending commands when releasing the GIL.

A "known limitation", which was actually a bug, was fixed because of this commit (see propagate.tcl): when using a mix of RM_Call with `!` and RM_Replicate, the commands would propagate out of order: first all the commands from RM_Call, and then the ones from RM_Replicate (see the sketch below). Another thing worth mentioning is that if, in the past, a client issued a MULTI/EXEC with just one write command, the server would blindly propagate the MULTI/EXEC too, even though it's redundant. Not anymore.

This commit renames propagate() to propagateNow() in order to cause conflicts in pending PRs. propagatePendingCommands is the only caller of propagateNow, which is now a static, internal helper function.

Optimizations:
1. alsoPropagate will not add stuff to also_propagate if there's no AOF and no replicas.
2. alsoPropagate reallocs also_propagate exponentially, to save calls to memmove.

Bugfixes:
1. CONFIG SET can create evictions, sending notifications which can cause dirty++ with modules. We need to prevent it from propagating to AOF/replicas.
2. We need to set current_client in RM_Call. Buggy scenario: CONFIG SET maxmemory triggers eviction notifications, a module hook calls RM_Call, and the assertion in lookupKey crashes, because current_client has CONFIG SET, which isn't CMD_WRITE.
3. Minor: in eviction, call propagateDeletion after the notification, like active-expire and all commands (we always send a notification before propagating the command).
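A hedged sketch of the ordering behavior fixed here: a module command that mixes RM_Call with `!` and RM_Replicate. After this commit, both propagations are queued and flushed together by the top-most call(), so replicas/AOF see INCR then SET, in issue order, inside one MULTI/EXEC. All names here are illustrative.
```
#include "redismodule.h"

static int Mixed_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    /* Effect #1: replicated because of the '!' flag. */
    RedisModuleCallReply *reply = RedisModule_Call(ctx, "INCR", "!c", "mixed:counter");
    if (reply) RedisModule_FreeCallReply(reply);
    /* Effect #2: explicit replication of a verbatim command, issued after the INCR. */
    RedisModule_Replicate(ctx, "SET", "cc", "mixed:last", "incr");
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "mixedrepl", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return RedisModule_CreateCommand(ctx, "mixedrepl.touch", Mixed_RedisCommand,
                                     "write deny-oom", 0, 0, 0);
}
```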
-
- 21 Dec, 2021 1 commit
-
-
Meir Shpilraien (Spielrein) authored
The issue with MAY_REPLICATE is that all the automatic mechanisms to handle write commands will not work. This requires special treatment to:
* Not allow those commands to be executed on a RO replica.
* Allow those commands to be executed on a RO replica from the primary connection.
* Allow those commands to be executed on a RO replica from the AOF.

By setting those commands as WRITE commands we get all those properties from Redis. A test was added to verify that those properties work as expected.

In addition, rearrange when and where functions are flushed. Before this PR, functions were flushed manually on `rdbLoadRio` and cleaned manually on failure. This contradicts the assumption that functions are data and need to be created/deleted alongside the data. A side effect of this, for example, was that `debug reload noflush` did not flush the data but did flush the functions, and `debug loadaof` flushed the data but not the functions. This PR moves function deletion into `emptyDb`. `emptyDb` (renamed to `emptyData`) now accepts an additional flag, `NOFUNCTIONS`, which specifically indicates that we do not want to flush the functions (in all other cases, functions will be flushed). The new flag is used on FLUSHALL and FLUSHDB only! Tests were added to `debug reload` and `debug loadaof` to verify that functions behave the same as the data.

Notice that because functions are now deleted alongside the data, we cannot allow `CLUSTER RESET` to be called from within a function (it would cause the function to be released while running), so this PR adds the `NO_SCRIPT` flag to `CLUSTER RESET` so it cannot be called from within a function. The other cluster commands are allowed from within a function (there are use-cases that use `GETKEYSINSLOT` to iterate over all the keys in a given slot). A test was added to verify that `CLUSTER RESET` is denied from within a script.

Another small change in this PR is that `RDBFLAGS_ALLOW_DUP` is also applicable to functions: when loading functions, if this flag is set, we will replace old functions with new ones on collisions.
-
- 02 Dec, 2021 1 commit
-
-
meir@redislabs.com authored
The Redis function unit is located inside functions.c and contains the Redis Functions implementation:
1. FUNCTION commands:
   * FUNCTION CREATE
   * FCALL
   * FCALL_RO
   * FUNCTION DELETE
   * FUNCTION KILL
   * FUNCTION INFO
2. Register engine

In addition, this commit introduces the first engine that uses the Redis Functions capabilities, the Lua engine.
-
- 01 Dec, 2021 2 commits
-
-
meir@redislabs.com authored
The script unit is a new unit located in script.c. Its purpose is to provide an API for functions (and eval) to interact with Redis. Interaction includes mostly executing commands, but also functionality like calling Redis back on long scripts or checking if the script was killed. The interaction is done using a scriptRunCtx object that needs to be created by the user and initialized using scriptPrepareForRun.

Detailed list of functionality exposed by the unit:
1. Calling commands (including all the validation checks such as acl, cluster, read only run, ...)
2. Set resp
3. Set replication method (AOF/REPLICATION/NONE)
4. Call Redis back on long-running scripts to allow Redis to reply to clients and perform script kill

The commit introduces the new unit and uses it in the eval commands to interact with Redis.
-
meir@redislabs.com authored
The following variables were renamed:
1. lua_caller -> script_caller
2. lua_time_limit -> script_time_limit
3. lua_timedout -> script_timedout
4. lua_oom -> script_oom
5. lua_disable_deny_script -> script_disable_deny_script
6. in_eval -> in_script

The following variables were moved to lctx under eval.c:
1. lua
2. lua_client
3. lua_cur_script
4. lua_scripts
5. lua_scripts_mem
6. lua_replicate_commands
7. lua_write_dirty
8. lua_random_dirty
9. lua_multi_emitted
10. lua_repl
11. lua_kill
12. lua_time_start
13. lua_time_snapshot

This commit carries a low risk of introducing any issues; it just moves variables around without changing any logic.
-
- 28 Nov, 2021 2 commits
-
-
Viktor Söderqvist authored
Writable replicas now no longer use the values of expired keys. Expired keys are deleted when lookupKeyWrite() is used, even on a writable replica. Previously, writable replicas could use the value of an expired key in write commands such as INCR, SUNIONSTORE, etc.. This commit also sorts out the mess around the functions lookupKeyRead() and lookupKeyWrite() so they now indicate what we intend to do with the key and are not affected by the command calling them. Multi-key commands like SUNIONSTORE, ZUNIONSTORE, COPY and SORT with the store option now use lookupKeyRead() for the keys they're reading from (which will not allow reading from logically expired keys). This commit also fixes a bug where PFCOUNT could return a value of an expired key. Test modules commands have their readonly and write flags updated to correctly reflect their lookups for reading or writing. Modules are not required to correctly reflect this in their command flags, but this change is made for consistency since the tests serve as usage examples. Fixes #6842. Fixes #7475.
-
sundb authored
Remove lcsGetKeys to clean up the remaining STRALGO leftovers after #9733, i.e. it still used a getkeys_proc which was still looking for the KEYS or STRINGS arguments.
-
- 24 Nov, 2021 1 commit
-
-
sundb authored
Part three of implementing #8702, following #8887 and #9366.

## Description of the feature
1. Replace the ziplist container of quicklist with listpack.
2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.

## Interface changes
1. The new `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
2. Replace the `debug ziplist` command with `debug listpack`.

## Internal changes
1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
2. Add `lpRepr` to print info about a listpack, which is used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357). It represents that a quicklistNode is a packed node, as opposed to a plain node.
4. Remove the `createZiplistObject` method, which is never used.
5. Calculate listpack entry size using overhead overestimation in `quicklistAllowInsert`. We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.

## Improvements
1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
2. Optimize `quicklistAppendPlainNode` to avoid memcpy of data.

## Bugfix
1. Fix a crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.

## Test
1. Add a unit test for `lpMerge`.
2. Modify the old quicklist ziplist corrupt dump test.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 16 Nov, 2021 1 commit
-
-
Axlgrep authored
In the `scan 0 match ""` case, pat is an empty sds (patlen is 0). We shouldn't access the first character directly in this case (even though that first character is '\0'), so for code readability the two parts of the check are swapped.
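A minimal sketch of the reordering (not the literal Redis source): the length is tested before the first byte is dereferenced, so an empty MATCH pattern never relies on reading pat[0].
```
#include <stddef.h>

/* Returns non-zero when the MATCH pattern actually filters keys.
 * Before: (pat[0] == '*' && patlen == 1) read pat[0] first.
 * After:  the length check short-circuits for "" and plain "*". */
static int use_pattern_filter(const char *pat, size_t patlen) {
    return !(patlen == 1 && pat[0] == '*');
}
```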
-
- 04 Nov, 2021 1 commit
-
-
Eduardo Semprebon authored
For diskless replication in swapdb mode, considering we already spend replica memory holding a backup of the current db to restore in case of failure, we can get the following benefits by instead swapping the database only once we have succeeded in transferring the db from the master:
- Avoid a `LOADING` response during failed and successful synchronization for cases where the replica is already up and running with data.
- Faster total time of diskless replication, because we're now moving from Transfer + Flush + Load time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could also be implemented for disk-based replication with similar benefits, if consumers are willing to spend the extra memory.

General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode will only kick in if the replica is syncing from a master that has the same repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading`, to differentiate from blocking loading.
- Slot to Key mapping is now a field of `redisDb`, as it's more natural to access it from both server.db and the tempDb that is passed around.
- Because this affects replicas only, we assume that if they are not read-only and write commands are issued during replication, those writes are lost after SYNC, same as before, but we're still denying CONFIG SET here anyway to avoid complications.

Considerations for review:
- We have many cases where the server.loading flag is used, and even though I tried my best, there may be cases where async_loading should be checked as well and cases where it shouldn't (this would require a very good understanding of the whole code).
- Several places that had different behavior depending on the loading flag were actually meant to just handle commands coming from the AOF client differently than ones coming from real clients; these were changed to check CLIENT_ID_AOF instead.

**Additional for Release Notes**
- Bugfix: server.dirty was not incremented for any kind of diskless replication; as an effect it wouldn't contribute to triggering the next database SAVE.
- New flag for the RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING.
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0 we don't fire this event. Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED, ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions to allow modules to declare they support diskless replication with async loading (when absent, we fall back to disk-based loading); see the sketch below.

Co-authored-by:
Eduardo Semprebon <edus@saxobank.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
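A hedged sketch of a module opting in to async loading, using the option and event names given in the commit (REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD, RedisModuleEvent_ReplAsyncLoad); the exact sub-event constant spellings are assumptions to be verified in redismodule.h.
```
#include "redismodule.h"

static void OnAsyncLoad(RedisModuleCtx *ctx, RedisModuleEvent e,
                        uint64_t sub, void *data) {
    REDISMODULE_NOT_USED(e); REDISMODULE_NOT_USED(data);
    if (sub == REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_STARTED)
        RedisModule_Log(ctx, "notice", "async load started; keep serving old data");
    else if (sub == REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_COMPLETED)
        RedisModule_Log(ctx, "notice", "async load completed; new dataset swapped in");
    else if (sub == REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_ABORTED)
        RedisModule_Log(ctx, "notice", "async load aborted; old dataset kept");
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "asyncload", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Without this option Redis falls back to disk-based loading. */
    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);
    return RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_ReplAsyncLoad, OnAsyncLoad);
}
```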
-
- 03 Nov, 2021 1 commit
-
-
perryitay authored
When using SETNX and SETXX we could end up doing the key lookup twice. This has a small inefficiency cost, and once we have statistics of write hits and misses, they would also be wrong (recording the same key hit twice).
-
- 31 Oct, 2021 1 commit
-
-
lijinliang authored
Co-authored-by:
lijinliang <lijl@newdt.cn>
-
- 20 Oct, 2021 1 commit
-
-
guybe7 authored
## Intro
The purpose is to allow having different flags/ACL categories for subcommands (example: CONFIG GET is ok-loading but CONFIG SET isn't). We create a small command table for every command that has subcommands, and each subcommand has its own flags, etc. (same as a "regular" command). This commit also unites the Redis and the Sentinel command tables.

## Affected commands
CONFIG - used to have "admin ok-loading ok-stale no-script". Changes:
1. Dropped "ok-loading" in all except GET (this doesn't change behavior since there were checks in the code doing that).

XINFO - used to have "read-only random". Changes:
1. Dropped "random" in all except CONSUMERS.

XGROUP - used to have "write use-memory". Changes:
1. Dropped "use-memory" in all except CREATE and CREATECONSUMER.

COMMAND - no changes.

MEMORY - used to have "random read-only". Changes:
1. Dropped "random" in PURGE and USAGE.

ACL - used to have "admin no-script ok-loading ok-stale". Changes:
1. Dropped "admin" in WHOAMI, GENPASS, and CAT.

LATENCY - no changes.

MODULE - no changes.

SLOWLOG - used to have "admin random ok-loading ok-stale". Changes:
1. Dropped "random" in RESET.

OBJECT - used to have "read-only random". Changes:
1. Dropped "random" in ENCODING and REFCOUNT.

SCRIPT - used to have "may-replicate no-script". Changes:
1. Dropped "may-replicate" in all except FLUSH and LOAD.

CLIENT - used to have "admin no-script random ok-loading ok-stale". Changes:
1. Dropped "random" in all except INFO and LIST.
2. Dropped "admin" in ID, TRACKING, CACHING, GETREDIR, INFO, SETNAME, GETNAME, and REPLY.

STRALGO - no changes.

PUBSUB - no changes.

CLUSTER - changes:
1. Dropped "admin" in COUNTKEYSINSLOT, GETKEYSINSLOT, INFO, NODES, KEYSLOT, MYID, and SLOTS.

SENTINEL - no changes.

(Note that DEBUG also fits, but we decided not to convert it since it's for debugging and anyway undocumented.)

## New sub-command
This commit adds another element to the per-command output of COMMAND, describing the list of subcommands, if any (in the same structure as "regular" commands). Also, it adds a new subcommand:
```
COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)]
```
which returns a set of all commands (unless filtered), but excluding subcommands.

## Module API
A new module API, RM_CreateSubcommand, was added in order to allow module writers to define subcommands; see the sketch below.

## ACL changes
1. Now that each subcommand is actually a command, each has its own ACL id.
2. The old mechanism of allowed_subcommands is redundant (blocking/allowing a subcommand is the same as blocking/allowing a regular command), but we had to keep it to support the widespread usage of allowed_subcommands to block commands with certain args that aren't subcommands (e.g. "-select +select|0").
3. I have renamed allowed_subcommands to allowed_firstargs to emphasize the difference.
4. Because subcommands are commands in ACL too, you can now use "-" to block subcommands (e.g. "+client -client|kill"), which wasn't possible in the past.
5. It is also possible to use the allowed_firstargs mechanism with subcommands. For example: `+config -config|set +config|set|loglevel` will block all CONFIG SET except for setting the log level.
6. All of the ACL changes above required some amount of refactoring.

## Misc
1. There are two approaches: either each subcommand has its own function, or all subcommands use the same function, determining what to do according to argv[0]. For now, I took the former approach only with CONFIG and COMMAND, while other commands use the latter approach (for a smaller blamelog diff).
2. Deleted memoryGetKeys: it is no longer needed because MEMORY USAGE now uses the "range" key spec.
4. Bugfix: GETNAME was missing from CLIENT's help message.
5. Sentinel and Redis now use the same table, with the same function pointers. Some commands have a different implementation in Sentinel, so we redirect them (these are ROLE, PUBLISH, and INFO).
6. Command stats now show the stats per subcommand (e.g. instead of stats just for "config" you will have stats for "config|set", "config|get", etc.).
7. It is now possible to use COMMAND directly on subcommands: COMMAND INFO CONFIG|GET (the pipe syntax was inspired by ACL, and can be used in the functions lookupCommandBySds and lookupCommandByCString).
8. STRALGO is now a container command (has "help").

## Breaking changes
1. Command stats now show the stats per subcommand (see (5) above).
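A hedged sketch of RM_CreateSubcommand: a container command with two subcommands that carry different flags, mirroring how e.g. CONFIG GET and CONFIG SET now differ. The module and command names are illustrative, not from the commit.
```
#include "redismodule.h"

static int Get_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    return RedisModule_ReplyWithSimpleString(ctx, "some value");
}

static int Set_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "cfgdemo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

    /* The container command itself has no implementation (NULL callback). */
    if (RedisModule_CreateCommand(ctx, "cfgdemo", NULL, "", 0, 0, 0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    RedisModuleCommand *parent = RedisModule_GetCommand(ctx, "cfgdemo");
    if (parent == NULL) return REDISMODULE_ERR;

    /* Each subcommand gets its own flags, ACL identity and stats entry. */
    if (RedisModule_CreateSubcommand(parent, "get", Get_RedisCommand,
                                     "readonly fast", 0, 0, 0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    if (RedisModule_CreateSubcommand(parent, "set", Set_RedisCommand,
                                     "write deny-oom", 0, 0, 0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```
With this in place, per-subcommand ACL rules like `+cfgdemo|get` and per-subcommand entries in COMMAND and command stats apply to the module's subcommands in the same way described above for core commands.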
-
- 07 Oct, 2021 1 commit
-
-
Huang Zhw authored
Tracking invalidation messages were sometimes sent in an inconsistent order, before the command's reply rather than after. In addition, they were sometimes embedded inside other commands' responses, like MULTI-EXEC and MGET.
-