- 20 Nov, 2022 1 commit
-
-
Binbin authored
The following example will create an empty set (listpack encoding):
```
> RESTORE key 0 "\x14\x25\x25\x00\x00\x00\x00\x00\x02\x01\x82\x5F\x37\x03\x06\x01\x82\x5F\x35\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x31\x03\x82\x5F\x39\x03\x04\xA9\x08\x01\xFF\x0B\x00\xA3\x26\x49\xB4\x86\xB0\x0F\x41"
OK
> SCARD key
(integer) 0
> SRANDMEMBER key
Error: Server closed the connection
```
In the spirit of #9297, skip empty sets when loading RDB_TYPE_SET_LISTPACK. Introduced in #11290
-
- 16 Nov, 2022 1 commit
-
-
sundb authored
Improve memory efficiency of list keys

## Description of the feature
The new listpack encoding uses the old `list-max-listpack-size` config to perform the conversion. We can think of it as a node inside a quicklist, but without the 80 bytes of overhead (internal fragmentation included) of the quicklist and quicklistNode structs. For example, a list key with 5 items of 10 chars each now takes 128 bytes instead of the 208 it used to take.

## Conversion rules (a rough sketch follows after this message)
* Convert listpack to quicklist: when the listpack length or size reaches the `list-max-listpack-size` limit, it will be converted to a quicklist.
* Convert quicklist to listpack: when a quicklist has only one node, and its length or size is reduced to half of the `list-max-listpack-size` limit, it will be converted to a listpack. This is done to avoid frequent conversions when we add or remove around the bounding size or length.

## Interface changes
1. Add a list entry param to listTypeSetIteratorDirection. When the list encoding is listpack, `listTypeIterator->lpi` points to the next entry of the current entry, so when changing the direction, we need to use the current node (listTypeEntry->p) to update `listTypeIterator->lpi` to the next node in the reverse direction.

## Benchmark
### Listpack VS Quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement

### Both are quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement

From the benchmark results we can see:
1. When the list is quicklist-encoded, LRANGE improves performance by <5%.
2. When the list is listpack-encoded, LRANGE improves performance by ~13%; the main enhancement is brought by `addListListpackRangeReply()`.

## Memory usage
1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each show memory usage down by 35.49%, from 214MB to 138MB.

## Note
1. Add a conversion callback to support doing some work before conversion. Since the quicklist iterator decompresses the current node when it is released, we can no longer decompress the quicklist after we convert the list.
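A minimal self-contained sketch of the conversion hysteresis described above (illustrative only; the constant values, byte limit, and struct fields are assumptions, not the actual Redis code):
```c
#include <stdbool.h>
#include <stddef.h>

#define LIST_MAX_LISTPACK_ENTRIES 128          /* stands in for list-max-listpack-size */
#define LIST_MAX_LISTPACK_BYTES   (64 * 1024)  /* assumed byte limit for this sketch   */

struct list_stats {
    size_t entries;          /* number of elements in the list              */
    size_t bytes;            /* serialized size of the listpack / only node */
    bool   is_quicklist;     /* current encoding                            */
    size_t quicklist_nodes;  /* node count when quicklist-encoded           */
};

/* Grow path: listpack -> quicklist once either limit is reached. */
static bool should_convert_to_quicklist(const struct list_stats *s) {
    return !s->is_quicklist &&
           (s->entries >= LIST_MAX_LISTPACK_ENTRIES ||
            s->bytes   >= LIST_MAX_LISTPACK_BYTES);
}

/* Shrink path: quicklist -> listpack only when a single node remains and the
 * list dropped to *half* the limit, so adding/removing right at the boundary
 * does not flip the encoding back and forth on every command. */
static bool should_convert_to_listpack(const struct list_stats *s) {
    return s->is_quicklist && s->quicklist_nodes == 1 &&
           s->entries <= LIST_MAX_LISTPACK_ENTRIES / 2 &&
           s->bytes   <= LIST_MAX_LISTPACK_BYTES / 2;
}
```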
-
- 12 Nov, 2022 1 commit
-
-
Oran Agra authored
Fix a few issues with the recent #11463:
* Use exitFromChild instead of exit.
* The test should ignore defunct processes, since that's what we expect to happen to these child processes when the parent dies.
* Fix a typo.

Co-authored-by:
Binbin <binloveplay1314@qq.com>
-
- 09 Nov, 2022 1 commit
-
-
Oran Agra authored
During a diskless sync, if the master main process crashes, the child would have hung in `write`. This fix closes the read fd on the child side, so that if the parent crashes, the child will get a write error and exit. This change also fixes disk-based replication, BGSAVE and AOFRW. In those cases the child wouldn't have hung, it would have just kept running until done, which may be pointless. There is a certain degree of risk here: in case there's a BGSAVE child that could maybe succeed and the parent dies for some reason, the old code would have let the child keep running, maybe succeed, and avoid data loss. On the other hand, if the parent is restarted, it would have loaded an old rdb file (or none), and then the child could reach the end and rename the rdb file (data conflicting with what the parent has), or race with another BGSAVE child started by the new parent. Note that I removed a comment saying a write error will be ignored in the child and handled by the parent (this comment was very old and I don't think it's relevant anymore).
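A minimal POSIX sketch of that mechanism (illustrative only, not Redis code): the child drops its copy of the pipe's read end, so once the parent, the only remaining reader, dies, the child's `write` fails with EPIPE instead of blocking forever:
```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                 /* child: the "RDB producer" side */
        signal(SIGPIPE, SIG_IGN);   /* get EPIPE from write() instead of dying on the signal */
        close(fds[0]);              /* key step: drop our reference to the read end */
        char buf[4096] = {0};
        for (;;) {
            if (write(fds[1], buf, sizeof(buf)) == -1) {
                fprintf(stderr, "child: write failed (%s), exiting\n", strerror(errno));
                _exit(1);           /* the real code uses a child-specific exit path */
            }
        }
    }

    /* parent: the "consumer"; simulate a crash by exiting without ever reading */
    close(fds[1]);
    sleep(1);
    return 0;                       /* read end closes; the blocked child soon gets EPIPE */
}
```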
-
- 15 Oct, 2022 1 commit
-
-
filipe oliveira authored
Optimizing d2string() and addReplyDouble() with grisu2: double to string conversion based on Florian Loitsch's Grisu algorithm (#10587)

All commands / use cases that heavily rely on double-to-string conversion (i.e. take a double-precision floating-point number like 1.5 and return a string like "1.5") could benefit from a performance boost by swapping `snprintf(buf,len,"%.17g",value)` with the equivalent [fpconv_dtoa](https://github.com/night-shift/fpconv) or any other algorithm that ensures 100% coverage of the conversion. This is a well-studied topic, and projects like MongoDB, RedPanda and PyTorch leverage libraries (fmtlib) that use an optimized double-to-string conversion underneath. The positive impact can be substantial. This PR uses the grisu2 approach (grisu is explained in section 5 of https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf).

Test suite changes: despite being compatible, in some cases it produces a different result from printf, and some tests had to be adjusted. One case is that `%.17g` (which means %e or %f, whichever is shorter) chose to use `5000000000` instead of 5e+9, which sounds like a bug? In other cases, we changed TCL to compare numbers instead of strings to ignore minor rounding issues (`expr 0.8 == 0.79999999999999999`).
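A hedged sketch of the swap (assuming fpconv's `int fpconv_dtoa(double, char dest[24])` interface, which returns the number of characters written without NUL-terminating; this is not the actual Redis d2string code):
```c
#include <stdio.h>
#include <string.h>
#include "fpconv.h"   /* from https://github.com/night-shift/fpconv */

/* Old approach: shortest round-trippable form via printf. */
static int double_to_string_printf(char *buf, size_t len, double value) {
    return snprintf(buf, len, "%.17g", value);
}

/* New approach: Grisu2-based conversion, same round-trip guarantee, much faster. */
static int double_to_string_grisu(char *buf, size_t len, double value) {
    if (len < 25) return 0;       /* fpconv needs up to 24 chars, plus our NUL */
    int n = fpconv_dtoa(value, buf);
    buf[n] = '\0';                /* fpconv does not NUL-terminate */
    return n;
}
```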
-
- 24 Aug, 2022 2 commits
-
-
Meir Shpilraien (Spielrein) authored
Fix `Test replication with lazy expire` test to not timeout the wait command. This fix will allow the test to pass on slow environments and when running with valgrind.
-
Meir Shpilraien (Spielrein) authored
The PR reverts the changes made in #10969. The revert was triggered by an occasional test failure that started after that PR was merged.

The issue is that if there is a lazy expire during the command invocation, the `del` command is added to the replication stream after the command placeholder. So the logical order on the primary is:
* Delete the key (lazy expiration)
* Command invocation

But the replication stream gets it the other way around:
* Command invocation (because the command is written into the placeholder)
* Delete the key (lazy expiration)

So if the command writes to the key that was just lazily expired, we get an inconsistency between primary and replica.

One solution we considered is to add another lazy-expire replication stream and write all the lazy expires there. Then when replicating, we would replicate the lazy-expire replication stream first. This would solve this specific test failure, but we realized that the issues do not end here, and the more we dig the more problems we find. One of the examples we thought about (that can actually crash Redis) is as follows:
* A user performs SINTERSTORE
* When Redis tries to fetch the second input key, it triggers a lazy expire
* The lazy expire triggers module logic that deletes the first input key
* Now Redis holds the robj of the first input key that was actually freed

We believe we took the wrong approach and we will come up with another PR that solves the problem differently; for now we revert the changes so we will not have the test failures.

Notice that not the entire code was reverted, some parts of the PR are changes that we would like to keep. The changes that **were** reverted are:
* Saving a placeholder for replication at the beginning of the command (`call` function)
* Order of the replication stream on active expire and eviction (we will decide how to handle it correctly in a follow-up PR)
* `Spop` changes are no longer needed (because we reverted the placeholder code)

Changes that were **not** reverted:
* On expire/eviction, wrap the `del` and the notification effect in a multi/exec.
* `PropagateNow` function can still accept a special dbid, -1, indicating not to replicate select.
* Keep the optimisation of reusing the `alsoPropagate` array instead of allocating it each time.

Tests:
* All tests were kept and only a few tests were modified to work correctly with the changes.
* A test was added to verify that the revert fixes the issues.
-
- 04 Aug, 2022 1 commit
-
- 27 Jul, 2022 1 commit
-
-
Binbin authored
The kill above is sometimes successful and sometimes already too late. The PING in the psync wrong offset test got rejected by bgsaveerr because lastbgsave_status is C_ERR. In theory, using diskless can avoid PING being affected, because when the replica is dropped, we will kill the child with SIGUSR1, and this will not affect lastbgsave_status. Anyway, this kill is not particularly needed here, and dropping it is the best option, since we do have waitForBgsave, so just let it take care of the bgsave. There is no need for fast termination.
-
- 13 Jul, 2022 1 commit
-
-
Oran Agra authored
The corrupt dump fuzzer uncovered a valgrind warning saying:
```
==76370== Argument 'size' of function malloc has a fishy (possibly negative) value: -3744781444216323815
```
This allocation would have failed (returning NULL) and been handled properly by redis (even before this change), but we also want to silence the valgrind warnings (which are checking that casting to ssize_t produces a non-negative value).

The solution I opted for is to explicitly fail these allocations (returning NULL) before even reaching `malloc` (which would have failed and returned NULL too). The implication is that we will not be able to support a single allocation of more than 2GB on a 32bit system (which I don't think is a realistic scenario); i.e. I do think we could be facing cases where redis consumes more than 2GB on a 32bit system, but not in a single allocation.

The byproduct of this is that I dropped the overflow assertions, since these will now lead to the same OOM panic we have for failed allocations.
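An illustrative sketch of the guard (not the actual allocator wrapper): reject any request whose size would look negative when cast to ssize_t, before it ever reaches `malloc`:
```c
#include <limits.h>
#include <stdlib.h>

static void *checked_malloc(size_t size) {
    /* On a 32-bit system this caps a single allocation at SSIZE_MAX (~2GB),
     * which is also what silences valgrind's "fishy (possibly negative) value"
     * warning about the malloc argument. */
    if (size >= (size_t)SSIZE_MAX) return NULL;
    return malloc(size);
}
```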
-
- 23 Jun, 2022 1 commit
-
-
judeng authored
The new test added in #10891 can fail with a different error. See the comment in networking.c saying:
```c
/* That's a best effort error message, don't check write errors.
 * Note that for TLS connections, no handshake was done yet so nothing
 * is written and the connection will just drop. */
```
-
- 22 Jun, 2022 1 commit
-
-
judeng authored
My maxclients config:
```
redis-cli config get maxclients
1) "maxclients"
2) "4064"
```
Before this bug was fixed, creating 4065 clients appeared to be successful, but only 4064 were actually created:
```
./redis-benchmark -c 4065 -I
Creating 4065 idle connections and waiting forever (Ctrl+C when done)
clients: 4065
```
Now:
```
./redis-benchmark -c 4065 -I
Creating 4065 idle connections and waiting forever (Ctrl+C when done)
Error from server: ERR max number of clients reached

./redis-benchmark -c 4064 -I
Creating 4064 idle connections and waiting forever (Ctrl+C when done)
clients: 4064
```
-
- 09 Jun, 2022 1 commit
-
-
Christian Krieg authored
The test calls `ldd` on `redis-server` in order to find out whether the binary was linked against `libmusl`; however, `ldd` returns a value different from `0` when statically linking the binaries against libc-musl, because `redis-server` is not a dynamic executable (as given by the exception thrown by the failing test), and `make test` terminates with an error:
```
$ ldd src/redis-server
        not a dynamic executable
$ echo $?
1
```
This commit fixes the test by ignoring such failures.

Co-authored-by:
Yossi Gottlieb <yossigo@gmail.com>
-
- 07 Jun, 2022 1 commit
-
-
Petr Vaněk authored
This change fixes the failing `integration/logging.tcl` test in Gentoo with musl libc, where `ldd` returns
```
libc.so => /lib/ld-musl-x86_64.so.1 (0x7f9d5f171000)
```
unlike Alpine's
```
libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f82cfa16000)
```
The solution is to extend the matching pattern introduced in #8532.
-
- 01 Jun, 2022 1 commit
-
-
Oran Agra authored
Fix broken protocol in MISCONF error, RM_Yield bugs, RM_Call(EVAL) OOM check bug, and new RM_Call checks. (#10786)

* Fix broken protocol when redis can't persist to RDB (general commands, not modules): excessive newline. Regression of #10372 (7.0 RC3).
* Fix broken protocol when Redis can't persist to AOF (modules and scripts): missing newline.
* Fix a bug in the OOM check of EVAL scripts called from RM_Call. Set the cached OOM state for scripts before executing module commands too, so that it can serve scripts that are executed by modules. i.e. in the past EVAL executed by RM_Call could have either falsely failed or falsely succeeded because of a wrong cached OOM state flag.
* Fix bugs with RM_Yield:
  1. SHUTDOWN should only accept the NOSAVE mode.
  2. Avoid eviction during yield command processing.
  3. Avoid processing master client commands while yielding from another client.
* Add two more checks to RM_Call script mode:
  1. READONLY - you can't write against a read only replica.
  2. MASTERDOWN - link with MASTER is down and `replica-serve-stale-data` is set to `no`.
* Add a new RM_Call flag to let redis automatically refuse `deny-oom` commands while over the memory limit.
* Add tests to cover various errors from Scripts, Modules, Modules calling scripts, and Modules calling commands in script mode.

Add tests:
* Looks like the MISCONF error was completely uncovered by the tests; add tests for it, including from scripts and modules.
* Add tests for NOREPLICAS from scripts.
* Add tests for the various errors in module RM_Call, including RM_Call that calls EVAL, and RM_Call in "eval mode". That includes: NOREPLICAS, READONLY, MASTERDOWN, MISCONF.
-
- 11 May, 2022 1 commit
-
-
Binbin authored
FLUSHDB and FLUSHALL add call forceCommandPropagation / FLUSHALL reset dirty counter to 0 if we enable save (#10691)

## FLUSHALL
We used to restore the dirty counter after `rdbSave` zeroed it if we enable save. Otherwise FLUSHALL would not be replicated nor put into the AOF. And then we do increment it again below. Without that extra dirty++, when the db was already empty, FLUSHALL would not be replicated nor put into the AOF.

We now replace all that dirty counter magic with a call to forceCommandPropagation (REPL and AOF), instead of all the messing around with the dirty counter. Added tests to cover three parts (dirty counter, REPL, AOF). One benefit other than cleaner code is that `rdb_changes_since_last_save` is correct in this case.

## FLUSHDB
FLUSHDB was not replicated nor put into the AOF when the db was already empty. Unlike DEL on a non-existing key, FLUSHDB always does something, and that's to call the module hook. So basically FLUSHDB is never a NOP, and thus it should always be propagated. Not doing that could mean that if a module does something in that hook, and wants to avoid issues of that hook being missing on the replica if the db is empty, it'll need to do complicated things.

So now FLUSHDB also calls forceCommandPropagation; we will always propagate FLUSHDB. Always propagating FLUSHDB seems like a safe approach that shouldn't have any drawbacks (other than looking odd). This was mentioned in #8972.

## Test section
We actually found it while solving a race condition in the BGSAVE test (other.tcl). It was found in extra_ci Daily Arm64 (test-libc-malloc).
```
[exception]: Executing test client: ERR Background save already in progress.
ERR Background save already in progress
```
It looks like `r flushdb` triggers (schedules) a bgsave right after `waitForBgsave r` and before `r save`. Changing flushdb to flushall, FLUSHALL will do a foreground save and then set the dirty counter to 0.
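A paraphrased, self-contained sketch of the new flow (the real `forceCommandPropagation(client*, int)` and the `PROPAGATE_*` flags live in the server code; here they are stubbed out so the sketch stands alone and does not claim to be the actual diff):
```c
#include <stdio.h>

#define PROPAGATE_AOF  (1 << 0)
#define PROPAGATE_REPL (1 << 1)

typedef struct client { long long dirty; } client;

/* Stub: the real function marks the current command for AOF/replica propagation. */
static void forceCommandPropagation(client *c, int flags) {
    (void)c;
    printf("propagate: aof=%d repl=%d\n",
           !!(flags & PROPAGATE_AOF), !!(flags & PROPAGATE_REPL));
}

static void flushall_like_command(client *c) {
    /* ... empty the dataset; an RDB save may zero the dirty counter here ... */

    /* Old approach: restore/increment the dirty counter so propagation happens
     * as a side effect. New approach: request propagation explicitly, so the
     * command reaches the AOF and replicas even when the db was already empty. */
    forceCommandPropagation(c, PROPAGATE_AOF | PROPAGATE_REPL);
}
```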
-
- 26 Apr, 2022 2 commits
-
-
chenyang8094 authored
Fix bug when AOF enabled after startup. Put the new incr file in the manifest only when AOFRW is done. (#10616)

Changes:
- When AOF is enabled **after** startup, the data accumulated during `AOF_WAIT_REWRITE` will only be stored in a temp INCR AOF file. Only after the first AOFRW is successful will we add it to the manifest file. Before this fix, the manifest referred to the temp file, which could cause a restart during that time to load it without its base.
- Add an `aof_rewrites_consecutive_failures` info field for the aofrw limiting implementation.

Now we can guarantee that these behaviors of MP-AOF are the same as before (past redis releases):
- When AOF is enabled after startup, the data accumulated during `AOF_WAIT_REWRITE` will only be stored in a visible place. Only after the first AOFRW is successful will we add it to the manifest file.
- When disabling AOF, we did not delete the AOF file in the past, so there's no need to change that behavior now (yet).
- When toggling AOF off and then on (could be as part of a full-sync), a crash or restart before the first rewrite is completed would result in the previous version being loaded (might not be the right thing, but that's what we always had).
-
Madelyn Olson authored
* Till now, replicas that were unable to persist would still execute the commands they got from the master; now they'll panic by default, and we add a new `replica-ignore-disk-errors` config to change that.
* Till now, when a command failed on a replica or during AOF-loading, it only logged a warning and a stat; we add a new `propagation-error-behavior` config to allow panicking in that state (may become the default one day).

Note that commands that fail on the replica can either indicate a bug that could cause data inconsistency between the replica and the master, or they could be in some cases (specifically in previous versions) a result of a command (e.g. EVAL) that failed on the master, but still had to be propagated to fail on the replica as well.
-
- 14 Apr, 2022 1 commit
-
-
Oran Agra authored
We had a panic in streamLastValidID when the stream metadata said it's not empty, but the rax is empty.
-
- 05 Apr, 2022 1 commit
-
-
Meir Shpilraien (Spielrein) authored
## Move library metadata to be part of the library payload

Following the discussion on https://github.com/redis/redis/issues/10429 and the intention to add (in the future) library versioning support, we believe that the entire library metadata (like name and engine) should be part of the library payload and not provided by the `FUNCTION LOAD` command. The reasoning behind this is that the programmer who developed the library should be the one who sets those values (name, engine, and in the future also version). **It is not the responsibility of the admin who loads the library into the database.**

The PR moves all the library metadata (engine and library name) to be part of the library payload. The metadata needs to be provided on the first line of the payload using the shebang format (`#!<engine> name=<name>`), example:
```lua
#!lua name=test
redis.register_function('foo', function() return 1 end)
```
The above script will run on the Lua engine and will create a library called `test`.

## API Changes (compared to 7.0 rc2)
* The `FUNCTION LOAD` command was changed and now it simply gets the library payload and extracts the engine and name from the payload. In addition, the command will now return the library name, which can later be used with `FUNCTION DELETE` and `FUNCTION LIST`.
* The description field was completely removed from `FUNCTION LOAD` and `FUNCTION LIST`.

## Breaking Changes (compared to 7.0 rc2)
* Library description was removed (we can re-add it in the future either as part of the shebang line or as an additional line).
* Loading an AOF file that was generated by either 7.0 rc1 or 7.0 rc2 will fail because the old command syntax is invalid.

## Notes
* Loading an RDB file that was generated by rc1 / rc2 **is** supported; Redis will automatically add the shebang to the libraries' payloads (we can probably delete that code after 7.0.3 or so since there's no need to keep supporting upgrades from an RC build).
-
- 06 Mar, 2022 1 commit
-
-
Yossi Gottlieb authored
Apparently using `\x` produces different results between tclsh 8.5 and 8.6, whereas `\u` is more consistent.
-
- 05 Mar, 2022 1 commit
-
-
Yuta Hongo authored
Normally, `redis-cli` escapes non-printable data received from Redis, using a custom scheme (which is also used to handle quoted input). When using `--json` this is not desired, as it is not compatible with RFC 7159, which specifies that JSON strings are assumed to be Unicode and how they should be escaped.

This commit changes `--json` to follow RFC 7159, which means that properly encoded Unicode strings in Redis will result in valid Unicode JSON. However, this introduces a new problem with `--json` and data that is not valid Unicode (e.g., random binary data, text that follows another encoding, etc.). To address this, we add `--quoted-json`, which produces JSON strings that follow the original redis-cli quoting scheme.

For example, a value that consists of only null (0x00) bytes will show up as:
* `"\u0000\u0000\u0000"` when using `--json`
* `"\\x00\\x00\\x00"` when using `--quoted-json`
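A toy illustration of the two output modes (not redis-cli's actual code): control bytes become RFC 7159 `\u00XX` escapes under `--json`, and redis-cli-style `\xNN` escapes under `--quoted-json`:
```c
#include <stdio.h>

static void print_json_string(const unsigned char *p, size_t len, int quoted_mode) {
    putchar('"');
    for (size_t i = 0; i < len; i++) {
        unsigned char c = p[i];
        if (c == '"' || c == '\\')       printf("\\%c", c);       /* JSON escapes for quote/backslash */
        else if (c >= 0x20 && c != 0x7f) putchar(c);              /* printable / UTF-8 byte passes through */
        else if (quoted_mode)            printf("\\\\x%02x", c);  /* --quoted-json: literal \xNN in the JSON string */
        else                             printf("\\u%04x", c);    /* --json: RFC 7159 \u00XX */
    }
    putchar('"');
}

int main(void) {
    const unsigned char data[] = {0x00, 0x00, 0x00};
    print_json_string(data, sizeof(data), 0); putchar('\n');  /* "\u0000\u0000\u0000" */
    print_json_string(data, sizeof(data), 1); putchar('\n');  /* "\\x00\\x00\\x00"   */
    return 0;
}
```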
-
- 23 Feb, 2022 1 commit
-
-
Itamar Haber authored
Adds the ability to track the lag of a consumer group (CG), that is, the number of entries yet to be delivered from the stream. The proposed constant-time solution is in the spirit of "best-effort." Partially addresses #8737.

## Description of approach
We add a new "entries_added" property to the stream. This starts at 0 for a new stream and is incremented by 1 with every `XADD`. It is essentially an all-time counter of the entries added to the stream. Given the stream's length and this counter value, we can trivially find the logical "entries_added" counter of the first ID if and only if the stream is contiguous. A fragmented stream contains one or more tombstones generated by `XDEL`s. The new "xdel_max_id" stream property tracks the latest tombstone.

The CG also tracks the read counter of its last delivered ID as an "entries_read" counter, and increments it independently when delivering new messages, unless this read counter is invalid (-1 means invalid offset). When the CG's counter is available, the reported lag is the difference between the added and read counters (a sketch of this rule follows after this message).

Lastly, this also adds a "first_id" field to the stream structure in order to make looking it up cheaper in most cases.

## Limitations
There are two cases in which the mechanism isn't able to track the lag. In these cases, `XINFO` replies with `null` in the "lag" field. The first case is when a CG is created with an arbitrary last delivered ID that isn't "0-0", nor the first or the last entry of the stream. In this case, it is impossible to obtain a valid read counter (short of an O(N) operation). The second case is when there are one or more tombstones fragmenting the stream's entries range.

In both cases, given enough time and assuming that the consumers are active (reading and acking) and advancing, the CG should be able to catch up with the tip of the stream and report zero lag. Once that's achieved, lag tracking would resume as normal (until the next tombstone is set).

## API changes
* `XGROUP CREATE` added with the optional named argument `[ENTRIESREAD entries-read]` for explicitly specifying the new CG's counter.
* `XGROUP SETID` added with an optional positional argument `[ENTRIESREAD entries-read]` for specifying the CG's counter.
* `XINFO` reports the maximal tombstone ID, the recorded first entry ID, and the total number of entries added to the stream.
* `XINFO` reports the current lag and logical read counter of CGs.
* `XSETID` is an internal command that's used in replication/aof. It has been added with the optional positional arguments `[ENTRIESADDED entries-added] [MAXDELETEDID max-deleted-entry-id]` for propagating the CG's offset and maximal tombstone ID of the stream.

## The generic unsolved problem
The current stream implementation doesn't provide an efficient way to obtain the approximate/exact size of a range of entries. While it could've been nice to have that ability (#5813) in general, let alone specifically in the context of CGs, the risk and complexities involved in such an implementation are in all likelihood prohibitive.

## A refactoring note
`streamGetEdgeID` has been refactored to accommodate both the existing seek of any entry as well as seeking non-deleted entries (the addition of the `skip_tombstones` argument). Furthermore, this refactoring also migrated the seek logic to use the `streamIterator` (rather than `raxIterator`), which was, in turn, extended with the `skip_tombstones` Boolean struct field to control the emission of these.

Co-authored-by:
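A small sketch of the constant-time lag rule described above (field names mirror the description, not necessarily the real structs):
```c
#include <stdint.h>

typedef struct {
    uint64_t entries_added;   /* all-time counter, incremented by every XADD          */
    int      fragmented;      /* 1 if XDEL left tombstones in the tracked range       */
} stream_info;

typedef struct {
    int64_t entries_read;     /* logical read counter, -1 when unknown/invalid        */
} group_info;

/* Returns the lag, or -1 when it cannot be tracked (reported as null by XINFO). */
static int64_t consumer_group_lag(const stream_info *s, const group_info *g) {
    if (g->entries_read < 0 || s->fragmented) return -1;
    return (int64_t)s->entries_added - g->entries_read;
}
```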
Guy Benoish <guy.benoish@redislabs.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 22 Feb, 2022 1 commit
-
-
YaacovHazan authored
Make sure the status returned from loading multiple AOF files reflects the overall result, not just that of the last file. When one of the AOF files succeeded to load, but the last AOF file was empty, loadAppendOnlyFiles would return AOF_EMPTY. This commit changes this behavior, and returns AOF_OK in that case. This can happen, for example, when loading an old AOF file where no more commands were processed: the manifest file will include a base AOF file with data, and an empty incr AOF file.

Co-authored-by:
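A rough sketch of the aggregation rule (illustrative; the status names mirror the description rather than the real loadAppendOnlyFiles code): once any file in the manifest loaded data successfully, a trailing empty INCR file must not downgrade the overall result to "empty":
```c
enum aof_status { AOF_LOAD_OK, AOF_LOAD_EMPTY, AOF_LOAD_FAILED };

static enum aof_status overall_aof_status(const enum aof_status *per_file, int nfiles) {
    enum aof_status ret = AOF_LOAD_EMPTY;               /* nothing loaded yet */
    for (int i = 0; i < nfiles; i++) {
        if (per_file[i] == AOF_LOAD_FAILED) return AOF_LOAD_FAILED;
        if (per_file[i] == AOF_LOAD_OK)     ret = AOF_LOAD_OK;   /* sticky: data was loaded */
    }
    return ret;
}
```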
chenyang8094 <chenyang8094@users.noreply.github.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 17 Feb, 2022 1 commit
-
-
chenyang8094 authored
Modifications of this PR:
1. Support the verification of `Multi Part AOF`, while still maintaining support for the old-style `AOF/RDB-preamble`. `redis-check-aof` will automatically choose which mode to use according to the incoming file format.
   `Usage: redis-check-aof [--fix|--truncate-to-timestamp $timestamp] <AOF/manifest>`
2. Refactor part of the code to make it easier to understand.
3. Currently only supports truncating (`--fix` or `--truncate-to-timestamp`) the last AOF file (may be `BASE` or `INCR`).

The reasons for 3 above:
- For `--fix`: only the last AOF may be truncated, this is guaranteed by redis.
- For `--truncate-to-timestamp`: normally, we only have `BASE` + `INCR` files at most, and `BASE` cannot be truncated (it only contains a timestamp annotation at the beginning of the file), so only `INCR` can be truncated. If we have `BASE+INCR1+INCR2` files (meaning we have an interrupted AOFRW), only the `INCR2` file can be truncated at this time. If we still insist on truncating `INCR1`, we need to manually delete `INCR2` and update the manifest file, then re-run `redis-check-aof`.
- If we want to support truncating any file, we need to add very complicated code to support the atomic modification of multiple file deletions and manifest updates; I think this is unnecessary.
-
- 14 Feb, 2022 1 commit
-
-
YaacovHazan authored
In order to make sure no more commands are processed, we wait for the 'load handlers' to disconnect. The test by mistake waited on the (last) slave instead of the master.
-
- 13 Feb, 2022 1 commit
-
-
Binbin authored
Added regression tests for #10020 / #10081 / #10243. The above PRs fixed some crashes due to an assertion, see the function `clientHasPendingReplies` (introduced in #9166). This commit adds some tests to cover the above scenario. These tests would all fail at #9166; although that is now fixed, there is value in adding these tests to cover and verify the changes. They also cover #8868 (verify the logs).

Other changes:
1. Reduce the wait time in `waitForBgsave` and `waitForBgrewriteaof` from 1s to 50ms, which should reduce the time for some tests.
2. Improve the test infra to print context when `assert_match` fails.
3. Improve the test infra to print `$error` when `assert_error` fails.
```
Expected an error matching 'ERR*' but got 'OK' (context: type eval line 4 cmd {assert_error "ERR*" {r set a b}} proc ::test)
```
-
- 03 Feb, 2022 1 commit
-
-
Vo Trong Phuc authored
Add a check for enough good slaves for write commands when evaluating scripts. This check is made before the script is executed if we have function flags, and per redis command if we don't. Co-authored-by:
Phuc. Vo Trong <phucvt@vng.com.vn> Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Meir Shpilraien (Spielrein) <meir@redis.com>
-
- 26 Jan, 2022 2 commits
-
-
Oran Agra authored
This PR attempts to solve two problems that happen sometimes in valgrind: `ERR Background save already in progress` and `not bgsave not aborted`. The test used to populate the database with DEBUG, which didn't increment the dirty counter, so it couldn't trigger an automatic bgsave. Then it used a manual bgsave and aborted it (when it got aborted, it populated the dirty counter), and then it tried to do another bgsave. That other bgsave could have failed if the automatic one had already started.
-
Oran Agra authored
Failed on a non-valgrind run, on this line:
```
assert_equal 0 [$slave exists k]
```
The condition in `keyIsExpired` is `now > when`, so if the test is really fast, maybe it can get to EXISTS exactly 1000 milliseconds after the expiration was set, and the key isn't yet gone.
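A tiny illustration of the boundary condition quoted above: with `now > when`, a lookup that lands exactly on the expiration millisecond still sees the key as live, which is what makes a fast test flaky:
```c
#include <stdio.h>

static int key_is_expired(long long now_ms, long long when_ms) {
    return now_ms > when_ms;               /* strictly greater, not >= */
}

int main(void) {
    long long when = 1000;                       /* e.g. PEXPIRE key 1000 issued at t=0 */
    printf("%d\n", key_is_expired(999,  when));  /* 0: not expired                      */
    printf("%d\n", key_is_expired(1000, when));  /* 0: exactly on time, still live      */
    printf("%d\n", key_is_expired(1001, when));  /* 1: expired                          */
    return 0;
}
```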
-
- 24 Jan, 2022 3 commits
-
-
Oran Agra authored
The edit I made to #10098 before merging was broken. (INFO is forbidden in LOADING)
-
chenyang8094 authored
* Refactor the EVAL timeout test: since the test used `r config set appendonly yes`, which generates a rewrite, it missed its purpose.
* Fix the bug that start_server returns before redis is ready, which affects when multiple tests share the same dir.
* Elapsed time tracking is no longer needed.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
chenyang8094 authored
-
- 18 Jan, 2022 1 commit
-
-
Yossi Gottlieb authored
This extends the previous fix (#10049) to address any form of non-printable or whitespace character (including newlines, quotes, non-printables, etc.) Also, removes the limitation on appenddirname, to align with the way filenames are handled elsewhere in Redis.
-
- 17 Jan, 2022 1 commit
-
-
Oran Agra authored
1. Enable diskless replication by default.
2. Add a new config named repl-diskless-sync-max-replicas that enables replication to start before the full repl-diskless-sync-delay was reached.
3. Put the replica online sooner on the master (see below).
4. The test suite uses a repl-diskless-sync-delay of 0 to be faster.
5. A few tests that use multiple replicas on a pre-populated master are now using the new repl-diskless-sync-max-replicas.
6. Fix possible timing issues in a few cluster tests (see below).

Put replica online sooner on the master
----------------------------------------------------
There were two tests that failed because they needed the master to realize that the replica is online, but the test code was actually only waiting for the replica to realize it's online, and in diskless it could have been before the master realized it. Changes include two things:
1. The tests wait on the right thing.
2. Issues in the master, putting the replica online in two steps.

The master used to put the replica online in 2 steps. The first step was to mark it as online, and the second step was to enable the write event (only after getting an ACK), but in fact the first step didn't contain some of the tasks to put it online (like updating the good slave count, and sending the module event). This meant that if a test was waiting to see that the replica is online from the point of view of the master, and then confirm that the module got an event, or that the master has enough good replicas, it could fail due to timing issues. So now the full effect of putting the replica online happens at once, and only the part about enabling the writes is delayed till the ACK.

Fix cluster tests
--------------------
I added some code to wait for the replica to sync and avoid race conditions. Later I realized the sentinel and cluster tests were using the original 5 second delay, so I changed it to 0. This means the other changes are probably not needed, but I suppose they're still better (avoid race conditions).
-
- 14 Jan, 2022 1 commit
-
-
Meir Shpilraien (Spielrein) authored
# Redis Functions Flags

Following the discussion on #10025, added Functions Flags support. The PR is divided into 2 sections:
* Add named argument support to the `redis.register_function` API.
* Add support for function flags.

## `redis.register_function` named argument support

The first part of the PR adds support for named arguments on `redis.register_function`, example:
```
redis.register_function{
    function_name='f1',
    callback=function() return 'hello' end,
    description='some desc'
}
```
The positional argument form is also kept, which means that it is still possible to write:
```
redis.register_function('f1', function() return 'hello' end)
```
But notice that it is no longer possible to pass the optional description argument in the positional argument version. Positional arguments were changed to allow passing only the mandatory arguments (function name and callback). To pass more arguments the user must use the named argument version.

As with positional arguments, `function_name` and `callback` are mandatory, and an error will be raised if those are missing. Also, an error will be raised if an unknown argument name is given or the argument's type is wrong. Tests were added to verify the new syntax.

## Functions Flags

The second part of the PR adds functions flags support. Flags are given to Redis when the engine calls `functionLibCreateFunction`. Supported flags are:
* `no-writes` - indicates the function performs no writes, which means that it is OK to run it on:
  * a read-only replica
  * using FCALL_RO
  * if a disk error is detected

  It will not be possible to run a function in those situations unless the function turns on the `no-writes` flag.
* `allow-oom` - indicates that it's OK to run the function even if Redis is in OOM state; if the function does not turn on this flag, it will not be possible to run it once OOM is reached (even if the function declares `no-writes` and even if `fcall_ro` is used). If this flag is set, any command will be allowed on OOM (even those marked with CMD_DENYOOM). The assumption is that this flag is for advanced users that know its meaning and understand what they are doing, and Redis trusts them not to increase the memory usage (e.g. it could be an INCR or a modification of an existing key, or a DEL command).
* `allow-stale` - indicates that it's OK to run the function on a stale replica; in this case we will also make sure the function only performs `stale` commands, and raise an error if not.
* `no-cluster` - indicates to disallow running the function if cluster is enabled.

Default behaviour of functions (if no flags are given):
1. Allow functions to read and write.
2. Do not run functions on OOM.
3. Do not run functions on a stale replica.
4. Allow functions on cluster.

### Lua API for functions flags

On the Lua engine, it is possible to give function flags as the `flags` named argument:
```
redis.register_function{function_name='f1', callback=function() return 1 end, flags={'no-writes', 'allow-oom'}, description='description'}
```
The function flags argument must be a Lua table that contains all the requested flags. The following will result in an error:
* Unknown flag
* Wrong flag type

Default behaviour is the same as if no flags are used. Tests were added to verify all flags functionality.

## Additional changes
* Mark FCALL and FCALL_RO with the CMD_STALE flag (unlike EVAL), so that they can run if the function was registered with the `allow-stale` flag.
* Verify `CMD_STALE` on `scriptCall` (`redis.call`), so it will not be possible to call commands from a script while stale unless the command is marked with the `CMD_STALE` flag; so even if the function is allowed while stale, we do not allow it to bypass the `CMD_STALE` flag of commands.
* A flags section was added to the `FUNCTION LIST` command to provide the set of flags for each function:
```
> FUNCTION list withcode
1) 1) "library_name"
   2) "test"
   3) "engine"
   4) "LUA"
   5) "description"
   6) (nil)
   7) "functions"
   8) 1) 1) "name"
         2) "f1"
         3) "description"
         4) (nil)
         5) "flags"
         6) (empty array)
   9) "library_code"
   10) "redis.register_function{function_name='f1', callback=function() return 1 end}"
```
* Added an API to get the Redis version from within a script. The redis version can be provided using:
  1. `redis.REDIS_VERSION` - string representation of the redis version in the format MAJOR.MINOR.PATCH
  2. `redis.REDIS_VERSION_NUM` - number representation of the redis version in the format `0x00MMmmpp` (`MM` - major, `mm` - minor, `pp` - patch).

  The number version can be used to check whether one version is greater or less than another. The string version can be used to return to the user or print as logs. This new API is provided to eval scripts and functions, and it is also possible to use this API during the functions loading phase.
-
- 13 Jan, 2022 1 commit
-
-
chenyang8094 authored
Force create a BASE file (use a foreground `rewriteAppendOnlyFile`) when redis starts from an empty data set and `appendonly` is yes.

The reasoning is that normally, after redis is running for some time and the AOF has gone through a few rewrites, there's always a base rdb file, and the scenario where the base file is missing is kinda rare (happens only at an empty startup), so this change normalizes it. But more importantly, there are or could be some complex modules that are started with some configuration; when they create persistence they write that configuration to RDB AUX fields, so they can always know with which configuration the persistence file they're loading was created (this could be critical). There is (was) one scenario in which they could load their persisted data with that configuration missing, and this change fixes it.

Add a new module event: REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START, similar to REDISMODULE_SUBEVENT_PERSISTENCE_AOF_START which is async.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 10 Jan, 2022 1 commit
-
-
chenyang8094 authored
1. Ban whitespace characters in `appenddirname`.
2. Handle the case where `appendfilename` contains spaces (for backwards compatibility).
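An illustrative sketch of such a validation rule (an assumption, not the actual config-parsing code): reject an `appenddirname` value that contains any whitespace character:
```c
#include <ctype.h>

/* Returns 1 when the directory name is acceptable, 0 otherwise. */
static int appenddirname_is_valid(const char *name) {
    if (name == NULL || *name == '\0') return 0;
    for (const char *p = name; *p; p++) {
        if (isspace((unsigned char)*p)) return 0;   /* ban spaces, tabs, newlines, ... */
    }
    return 1;
}
```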
-
- 06 Jan, 2022 1 commit
-
-
Meir Shpilraien (Spielrein) authored
# Redis Function Libraries

This PR implements Redis Functions Libraries as described in https://github.com/redis/redis/issues/9906. The purpose of libraries is to provide better code sharing between functions by allowing the creation of multiple functions in a single command. Functions that were created together can safely share code between each other without worrying about compatibility issues and versioning.

Creating a new library is done using the 'FUNCTION LOAD' command (the full API is described below).

This PR introduces a new struct called libraryInfo. libraryInfo holds information about a library:
* name - name of the library
* engine - engine used to create the library
* code - library code
* description - library description
* functions - the functions exposed by the library

When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo. Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo. As a result, the engine will create one or more functions by calling 'libraryCreateFunction'. The new function will be added to the newly created libraryInfo. So far everything is happening locally on the libraryInfo, so it is easy to abort the operation (in case of an error) by simply freeing the libraryInfo.

After the library info is fully constructed, we start the joining phase, by which we will join the new library to the other libraries that currently exist on Redis. The joining phase makes sure there is no function collision and adds the library to the librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact same way as functionCtx was used (with respect to RDB loading, replication, ...). The only difference is that apart from the function dictionary (which maps function name to functionInfo object), the librariesCtx also contains a libraries dictionary that maps library name to libraryInfo object.

## New API

### FUNCTION LOAD
`FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
Create a new library with the given parameters:
* ENGINE - Engine name to use to create the library.
* LIBRARY NAME - The new library name.
* REPLACE - If the library already exists, replace it.
* DESCRIPTION - Library description.
* CODE - Library code.

Returns "OK" on success, or an error in the following cases:
* Library name already taken and REPLACE was not used
* Name collision with another existing library (even if replace was used)
* Library registration failed by the engine (usually a compilation error)

## Changed API

### FUNCTION LIST
`FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
The command was modified to also allow getting the libraries' code (so the `FUNCTION INFO` command is no longer needed and was removed). In addition, the command gets an optional argument, `LIBRARYNAME`, which allows you to only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.

### INFO MEMORY
Added the number of libraries to `INFO MEMORY`.

### Commands flags
The `DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands as commands that add new data to the dataset (functions are data) and so we want to disallow running them on OOM.

## Removed API
* FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
* FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899

## Lua engine changes

When the Lua engine gets the code given on the `FUNCTION LOAD` command, it immediately runs it; we call this run the loading run. The loading run is not a usual script run: it is not possible to invoke any Redis command from within it. Instead there is a new API provided by the `library` object. The new APIs:
* `redis.log` - behaves the same as `redis.log`
* `redis.register_function` - registers a new function to the library

The loading run's purpose is to register functions using the new `redis.register_function` API. Any attempt to use any other API will result in an error. In addition, the loading run has a time limit of 500ms; an error is raised on timeout and the entire operation is aborted.

### `redis.register_function`
`redis.register_function(<function_name>, <callback>, [<description>])`
This new API allows users to register a new function that will be linked to the newly created library. This API can only be called during the loading run (see definition above). Any attempt to use it outside of the loading run will result in an error. The parameters passed to the API are:
* function_name - Function name (must be a Lua string)
* callback - Lua function object that will be called when the function is invoked using fcall/fcall_ro
* description - Function description, optional (must be a Lua string)

### Example
The following example creates a library called `lib` with 2 functions, `f1` and `f2`, which return 1 and 2 respectively:
```
local function f1(keys, args)
    return 1
end

local function f2(keys, args)
    return 2
end

redis.register_function('f1', f1)
redis.register_function('f2', f2)
```
Notice: unlike `eval`, functions inside a library get the KEYS and ARGV as arguments to the functions and not as globals.

### Technical Details

On the loading run we only want the user to be able to call a whitelist of APIs. This way, in the future, if new APIs are added, they will not be available to the loading run unless specifically added to this whitelist. We put the whitelist on the `library` object and make sure the `library` object is only available to the loading run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set the `globals` of a function (and all the functions it creates).

Before starting the loading run we create a fresh new Lua table (call it `g`) that only contains the `library` API (we make sure to set global protection on this table just like the general global protection that already exists today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) to set `g` as the global table of the loading run. After the loading run finishes we update `g`'s metatable and set the `__index` and `__newindex` functions to be `_G` (Lua's default globals); we also pop out the `library` object as we do not need it anymore. This way, any function that was created on the loading run (and will be invoked using `fcall`) will see the default globals as it expects to see them and will not have the `library` API anymore.

An important outcome of this new approach is that now we can achieve a distinct global table for each library (it is not yet like that, but it is very easy to achieve now). In the future we can decide to remove global protection because globals on different libraries will not collide, or we can choose to give a different API to different libraries based on some configuration or input.

Notice that this technique was meant to prevent errors and was not meant to prevent a malicious user from exploiting it. For example, the loading run can still save the `library` object in some local variable and then use it in the `fcall` context. To prevent such a malicious use, the C code also makes sure it is running in the right context, and if not, raises an error.
-
- 05 Jan, 2022 1 commit
-
-
sundb authored
Following #10038, this PR introduces two changes:
1. Show the elapsed time of a single test in the test output, in order to have a more detailed understanding of the changes in test run time.
2. Speed up two tests related to the `key-load-delay` configuration. Other tests do not seem to be affected by #10003.
-