- 18 Jan, 2021 1 commit
-
-
Raghav Muddur authored
-
- 17 Jan, 2021 2 commits
-
-
Yossi Gottlieb authored
This adds basic coverage to IO threads by running the cluster and a few selected Redis test suite tests with IO threads enabled. Also provides some necessary additional improvements to the test suite:
* Add --config to sentinel/cluster tests for arbitrary configuration.
* Fix --tags whitelisting, which was broken.
* Add a `network` tag to some tests that are more network intensive.
This is work in progress and more tests should be properly tagged in the future.
-
Wen Hui authored
Previously, invalid configuration errors were not very specific and in some cases hard to understand. Co-authored-by:
Yossi Gottlieb <yossigo@gmail.com>
-
- 15 Jan, 2021 3 commits
-
-
Yang Bodong authored
Add lazyfree-lazy-user-flush config to control the default behavior of FLUSH[ALL|DB] and SCRIPT FLUSH (#8258)
* Adds ASYNC and SYNC arguments to SCRIPT FLUSH
* Adds SYNC argument to FLUSHDB and FLUSHALL
* Adds a new config to control the default behavior of FLUSHDB, FLUSHALL and SCRIPT FLUSH. The new behavior is as follows:
  * FLUSH[ALL|DB], SCRIPT FLUSH: determine sync or async according to the value of lazyfree-lazy-user-flush.
  * FLUSH[ALL|DB], SCRIPT FLUSH ASYNC: always flushes the database in an async manner.
  * FLUSH[ALL|DB], SCRIPT FLUSH SYNC: always flushes the database in a sync manner.
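For illustration, a minimal sketch of the resulting behavior (the chosen config value is just an example):

    CONFIG SET lazyfree-lazy-user-flush yes
    FLUSHDB                  # no argument: async, per lazyfree-lazy-user-flush
    FLUSHALL SYNC            # explicit SYNC always flushes synchronously
    SCRIPT FLUSH ASYNC       # explicit ASYNC always flushes asynchronously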
-
Viktor Söderqvist authored
The prefix is changed from `RM_` to `module` on the following internal functions, to prevent them from appearing in the API docs:
RM_LogRaw -> moduleLogRaw
RM_FreeCallReplyRec -> moduleFreeCallReplyRec
RM_ZsetAddFlagsToCoreFlags -> moduleZsetAddFlagsToCoreFlags
RM_ZsetAddFlagsFromCoreFlags -> moduleZsetAddFlagsFromCoreFlags
-
Viktor Söderqvist authored
Fixes markdown formatting errors and some functions not showing up in the generated documentation at all.
Ruby script (gendoc.rb) fixes:
* Modified automatic insertion of backquotes:
  * Don't add backquotes around names which are already preceded by a backquote. Fixes for example \`RedisModule_Reply\*\`, which, turning into \`\`RedisModule_Reply\`\*\`, messes up the formatting.
  * Add backquotes around types such as RedisModuleString (in addition to function names `RedisModule_[A-z()]*` and macro names `REDISMODULE_[A-z]*`).
  * Require 4 spaces of indentation for disabling automatic backquotes, i.e. code blocks. Fixes continuations of list items (indented 2 spaces).
* More permissive extraction of doc comments:
  * Allow doc comments starting with `/**`.
  * Make the space before `*` on each line optional.
  * Make the space after `/*` and `/**` optional (needed when it appears on its own line).
Markdown fixes in module.c:
* Fix code blocks not indented enough (4 spaces needed).
* Add a blank line before code blocks and lists where missing (needed).
* Enclose special markdown characters `_*^<>` in backticks to prevent them from messing up formatting.
* Lists with `1)` changed to `1.` for proper markdown lists.
* Remove excessive indentation which causes text to be unintentionally rendered as code blocks.
* Other minor formatting fixes.
Other fixes in module.c:
* Remove blank lines between doc comment and function definition. A blank line here makes the Ruby script exclude the function from the docs.
-
- 14 Jan, 2021 1 commit
-
-
charsyam authored
Replace many calls to zrealloc with one zmalloc in sentinelResetMasterAndChangeAddress
-
- 13 Jan, 2021 4 commits
-
-
Wang Yuan authored
Optimize the performance of clusterGenNodesDescription by checking the ownership of each slot only once, instead of checking every slot for every node.
-
houzj.fnst authored
Remove several checks that always evaluate to true.
-
sundb authored
Fix use of lookupKeyRead and lookupKeyWrite in zrangeGenericCommand, zunionInterDiffGenericCommand (#8316)
* Change zunionInterDiffGenericCommand to use lookupKeyRead if dstkey is null
* Change zrangeGenericCommand to use lookupKeyWrite if dstkey isn't null
ZRANGESTORE and ZUNION, ZINTER, ZDIFF are all new commands (6.2 RC1 and RC2). In Redis 6.0, ZRANGE was using lookupKeyRead, and ZUNIONSTORE / ZINTERSTORE were using lookupKeyWrite, so these bugs were introduced in 6.2 and will be resolved before it is released. The implications of this bug are also not big: the sole difference between lookupKeyRead and lookupKeyWrite is for commands executed on a replica that are not received from its master client (for the master, and for the master client on the replica, these two functions behave the same).
-
sundb authored
-
- 12 Jan, 2021 4 commits
-
-
Oran Agra authored
We didn't wait for the commands executed on the master to reach the replica.
-
Madelyn Olson authored
This fixes three issues:
1. Using DEBUG SLEEP was impacting the subsequent test and causing it to pass reliably even though it should have failed. There was exactly 5 seconds of artificial pause (after 1000, wait 3000, wait 1000) between the debug sleep 5 and when we needed to unblock the client in the subsequent test. Now the test properly makes sure the client is unblocked, and the subsequent test is fixed.
2. Minor: the client pause types were using & comparisons instead of ==, since it was previously a flag.
3. The test is faster now that some of the hand-wavy time is removed.
-
Oran Agra authored
The test was trying to wait for the replica to start loading the rdb from the master before it kills the master, but it was actually waiting for ROLE to be in "sync" mode, which corresponds to REPL_STATE_TRANSFER and starts before the actual loading starts. Now instead it waits for the loading flag to be set. Besides, the test was dependent on the previous configuration of the servers, relying on the fact that the replica is configured to persist (either RDB or AOF); now this is set explicitly.
-
Bob Li authored
Saving a string of more than 2GB to the RDB file can result in a corrupt RDB, or a failure in rdbSave.
-
- 09 Jan, 2021 3 commits
-
-
sundb authored
Assert that clusterAddNode can't fail
-
Oran Agra authored
When the server state changes and blocked clients are being dropped, the paused clients should not be dropped; they're safe to keep since, unlike other blocked types, these commands are not halfway through processing, and the commands they sent may get rejected according to the new server state.
-
sundb authored
Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 08 Jan, 2021 6 commits
-
-
Oran Agra authored
- The last COW report wasn't always read from the pipe (receiveLastChildInfo wasn't used), but in fact there's no reason not to always try to drain that pipe, so I'm unifying receiveLastChildInfo with receiveChildInfo.
- Adjust the threshold of the COW test when run in accurate mode.
- Add some prints in case this test fails again.
- Fix indentation, page size, and PID in the macOS proc info. P.S. it seems that pri_pages_dirtied is always 0.
-
Yang Bodong authored
Support ANY option to return some results that match the criteria ASAP, without a complete search and implicit sorting.
-
guybe7 authored
This PR adds another trimming strategy to XADD and XTRIM named MINID (complements the existing MAXLEN). It also adds a new LIMIT argument that allows incremental trimming by repeated calls (rather than all at once). This provides the ability to trim all records older than a certain ID (which makes it possible for the user to trim by age too).
Example: XTRIM mystream MINID ~ 1608540753 will trim entries with id < 1608540753, but might not trim all (because of the ~ modifier).
The purpose is to ease the use of streams. Many users use streams as logs, and the common case is wanting a log of the last X seconds rather than a log that contains a maximum of X entries (new MINID vs existing MAXLEN).
The new LIMIT modifier is only supported when the trim strategy uses ~, i.e. when the user asked for exact trimming it all happens in one go (no possibility for incremental trimming). However, when ~ is provided, we trim full rax nodes, up to the limit number of records. The default limit is 100*stream_node_max_entries (used when LIMIT is not provided), i.e. this is a behavior change (even if the existing MAXLEN strategy is used). An explicit limit of 0 means unlimited (but note that it's not the default).
Other changes: refactor the arg parsing code for XADD and XTRIM to use common code.
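A short usage sketch of the new options (the stream name, IDs and counts are illustrative):

    XADD mystream MAXLEN ~ 1000 * field value       # existing approximate MAXLEN trimming on add
    XTRIM mystream MINID ~ 1608540753               # drop entries with id < 1608540753, approximately
    XTRIM mystream MINID ~ 1608540753 LIMIT 1000    # trim at most ~1000 entries in this call (LIMIT requires ~)
    XTRIM mystream MAXLEN 100                       # exact trimming, done in one go (no LIMIT allowed)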
-
Oran Agra authored
The defragger works well on systems with large memory pages, but the tests and their thresholds are not adjusted for these big pages, so the defragger isn't able to get the fragmentation down to the levels the test expects and it fails on "defrag didn't stop". Randomly choosing 8k as the threshold for skipping the test. Fixes #8265 (which had 65k pages).
-
Madelyn Olson authored
Throw an error if there are conflicting bcast tracking prefixes.
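A hedged example of the kind of conflict that is now rejected (the prefixes are hypothetical):

    CLIENT TRACKING ON BCAST PREFIX user:
    CLIENT TRACKING ON BCAST PREFIX user:name:   # rejected: overlaps the already-registered user: prefix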
-
Madelyn Olson authored
Implementation of CLIENT PAUSE WRITE and CLIENT UNPAUSE
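A minimal sketch of the new syntax (the timeout is in milliseconds and is just an example value):

    CLIENT PAUSE 10000 WRITE   # pause only write commands for up to 10 seconds
    CLIENT PAUSE 10000 ALL     # previous behavior: pause all commands
    CLIENT UNPAUSE             # lift the pause early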
-
- 07 Jan, 2021 4 commits
-
-
George Prekas authored
Older arm64 Linux kernels have a bug that could lead to data corruption during background save under the following scenario: 1) jemalloc uses MADV_FREE on a page, 2) jemalloc reuses and writes the page, 3) Redis forks the background save process, and 4) Linux performs page reclamation. Under these conditions, Linux will reclaim the page wrongfully and the background save process will read zeros when it tries to read the page. The bug has been fixed in Linux with commit ff1712f953e27f0b0718762ec17d0adb15c9fd0b ("arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect()"). This commit adds an ignore-warnings config; when the warning is not ignored there, Redis will print a warning and exit on startup (the default behavior). Co-authored-by:
Oran Agra <oran@redislabs.com>
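A sketch of the opt-out described above, assuming the warning name ARM64-COW-BUG is the one used by this check:

    # redis.conf: acknowledge the warning instead of exiting on startup
    ignore-warnings ARM64-COW-BUG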
-
YaacovHazan authored
Add an INFO field, rdb_active_cow_size, to report the COW of a live fork child while it's active.
- Once per 1024 keys, check the time, and if more than one second has passed since the last report, send a report to the parent via the pipe.
- Refactor the child_info_data struct; it's an implementation detail that shouldn't be in the server struct, and isn't used to communicate data between caller and callee.
- Remove the magic value from that struct (not sure what it was good for), and instead add handling of short reads.
- Add another value to the structure, cow_type, to indicate whether the report is for the new rdb_active_cow_size field or it's the last report of a successful operation.
- Add a new Module API to report the active COW.
- Add more assert variants to test.tcl.
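While a fork child is active the new field can be observed via INFO; the value below is purely illustrative:

    INFO persistence
    # ...while an RDB child is running, the output includes e.g.:
    # rdb_active_cow_size:8388608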
-
YaacovHazan authored
This is a refactoring commit and isn't supposed to have any actual impact. It does the following:
- Keep just one server struct fork child pid variable instead of 3.
- Have one server struct variable indicating the purpose of the current fork child.
- redisFork is now responsible for updating the server struct with the pid, which means it can be the one that calls updateDictResizePolicy.
- Move child info pipe handling into redisFork instead of having it repeated outside.
- There are two classes of fork purposes: a mutually exclusive group (AOF, RDB, Module), and one that can create several forks to coexist in parallel (LDB, but maybe Modules some day too; the Module API allows for that).
- Minor fix to killRDBChild: unlike killAppendOnlyChild and TerminateModuleForkChild, killRDBChild doesn't clear the pid variable or call wait4, so checkChildrenDone does the cleanup for it. This commit removes the explicit calls to rdbRemoveTempFile, closeChildInfoPipe and updateDictResizePolicy, which didn't do any harm but were unnecessary.
-
Jonah H. Harris authored
Add the ZRANGESTORE command, and improve the ZRANGE command to deprecate Z[REV]RANGE[BYSCORE|BYLEX].
Syntax for the new ZRANGESTORE command:
ZRANGESTORE <dst> <src> <min> <max> [BYSCORE | BYLEX] [REV] [LIMIT offset count]
New syntax for ZRANGE:
ZRANGE <key> <min> <max> [BYSCORE | BYLEX] [REV] [WITHSCORES] [LIMIT offset count]
Old syntax for ZRANGE:
ZRANGE <key> <start> <stop> [WITHSCORES]
Other ZRANGE commands remain unchanged. The implementation uses common code for all of these, by utilizing a consumer interface that in one command responds to the client, and in the other stores into a zset key. Co-authored-by:
Oran Agra <oran@redislabs.com>
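A few hedged usage examples of the new forms (key names and members are placeholders):

    ZADD src 1 a 2 b 3 c
    ZRANGESTORE dst src 0 -1          # store the whole range of src into dst
    ZRANGE src (1 3 BYSCORE           # replaces ZRANGEBYSCORE src (1 3
    ZRANGE src [b [a BYLEX REV        # replaces ZREVRANGEBYLEX src [b [a
    ZRANGE src 0 -1 REV WITHSCORES    # replaces ZREVRANGE src 0 -1 WITHSCORES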
-
- 06 Jan, 2021 2 commits
-
-
Wen Hui authored
This code path is normally executed only when v6.0 and above replicates from v2.4
-
guybe7 authored
New command: XAUTOCLAIM <key> <group> <consumer> <min-idle-time> <start> [COUNT <count>] [JUSTID]
The purpose is to claim entries from a stale consumer without the usual XPENDING+XCLAIM combo, which takes two round trips. The syntax for XAUTOCLAIM is similar to SCAN: a cursor (a stream ID) is returned by each call and should be used as the start for the next call; 0-0 means the scan is complete. This PR also extends the deferred reply mechanism to any bulk string (not just counts).
This PR carries some unrelated test code changes:
- Renames the term "client" to "consumer" in the stream-cgroups test
- Changes DEBUG SLEEP into "after"
Co-authored-by:
Oran Agra <oran@redislabs.com>
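A hedged sketch of a scan-like claiming loop (the key, group, consumer and IDs are placeholders):

    XAUTOCLAIM mystream mygroup consumer1 3600000 0-0 COUNT 25
    # the reply carries a cursor (a stream ID); pass it as <start> in the next call
    XAUTOCLAIM mystream mygroup consumer1 3600000 1609459200000-0 COUNT 25 JUSTID
    # repeat until the returned cursor is 0-0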
-
- 05 Jan, 2021 3 commits
-
-
huangzhw authored
Instead of asking only for the extra new space it wanted, it asked to grow the string by the size it already has too. I.e. a string of 1000 bytes, needing to grow by 10 bytes, would have been asking for an **additional** 1010 bytes.
-
Oran Agra authored
Turns out the RDB checksum in Redis 6.0 on big-endian is broken. It always returned 0, so RDB files are generated as if the checksum is disabled, and will load fine on little-endian and on big-endian. But it will not be able to load RDB files generated on little-endian or by older versions. Similarly, DUMP and RESTORE will work on the same version (0==0), but will be unable to exchange dump payloads with little-endian or old versions.
-
Oran Agra authored
When a Lua script returns a map to Redis (a feature added in Redis 6 together with RESP3), it would have returned the value first and the key second. If the client was using RESP2 it was getting them out of order, and if the client was using RESP3 it was getting a map of value => key. This was happening regardless of whether the Lua script used redis.setresp(3) or not. This also affects a case where the script was returning a map which it got from Redis by doing something like: redis.setresp(3); return redis.call()
This fix is a breaking change for Redis 6.0 users who happened to rely on the wrong order (either ones that used redis.setresp(3), or ones that returned a map explicitly).
This commit also includes two other changes in the tests:
1. The test suite now handles RESP3 maps as dicts rather than nested lists
2. Remove some redundant (duplicate) tests from tracking.tcl
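A minimal example of the affected pattern, mirroring the snippet above and assuming CONFIG GET, which replies with a map under RESP3:

    EVAL "redis.setresp(3); return redis.call('CONFIG', 'GET', 'maxmemory')" 0
    # with the fix, a RESP3 client gets a proper key => value map,
    # and a RESP2 client gets key then value, in that order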
-
- 04 Jan, 2021 5 commits
-
-
Oran Agra authored
-
Itamar Haber authored
* man-like consistent long formatting
* Uppercases commands, subcommands and options
* Adds 'HELP' to HELP for all
* Lexicographical order
* Uses value notation and other .md likeness
* Moves const char *help to top
* Keeps it under 80 chars
* Misc help typos, consistent conjugation (i.e. return and not returns)
* Uses addReplySubcommandSyntaxError(c) all over
Signed-off-by:
Itamar Haber <itamar@redislabs.com>
-
Yang Bodong authored
This PR not only fixes the problem that SWAPDB does not make the transaction fail, but also optimizes how the FLUSHALL and FLUSHDB commands set the CLIENT_DIRTY_CAS flag, to avoid unnecessary traversal of clients. FLUSHDB was changed to first iterate on all watched keys, and then on the clients watching each key, instead of iterating through all clients and, for each, iterating on its watched keys. Co-authored-by:
Oran Agra <oran@redislabs.com>
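A hedged sketch of the case now covered, using two connections and a placeholder key:

    # connection A
    WATCH mykey
    MULTI
    GET mykey
    # connection B, meanwhile
    SWAPDB 0 1
    # connection A
    EXEC    # now aborts (nil reply), since the database holding the watched key was swapped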
-
Meir Shpilraien (Spielrein) authored
If the AOF file contains a long Lua script that timed out, then `evalCommand` calls `blockingOperationEnds`, which sets `server.blocked_last_cron` to 0. Later on, the AOF `whileBlockedCron` function asserts that this value is not 0. The fix allows nesting calls to `blockingOperationStarts` and `blockingOperationEnds`. The issue was first introduced in commit 9ef8d2f6 (Redis 6.2 RC1).
-
huangzhw authored
This is a recent problem, introduced by 74717438 (Redis 6.0).
The sole difference between lookupKeyRead and lookupKeyWrite is for commands executed on a replica that are not received from its master client (for the master, and for the master client on the replica, these two functions behave the same). Since SORT is a write command, this bug only implicates a writable replica, and these are its implications:
- SORT STORE will behave as it did before the above mentioned commit (like before Redis 6.0): on a writable replica an already logically expired key would have appeared missing (the store dest key would be deleted, instead of being populated with the data from the already logically expired key).
- SORT (the non-store variant, which in theory could have been executed on a read-only replica if it weren't for the write flag) will (in Redis 6.0) have a new bug and return the data from the already logically expired key instead of an empty response.
-
- 03 Jan, 2021 2 commits
-
-
kukey authored
New command flags similar to what SADD already has. Co-authored-by:
huangwei03 <huangwei03@kuaishou.com> Co-authored-by:
Itamar Haber <itamar@redislabs.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Oran Agra authored
-