- 09 Sep, 2021 3 commits
-
-
Binbin authored
We want to add a COUNT option to BLPOP, but we can't do that without breaking compatibility due to the command argument syntax. So this commit introduces two new commands.

Syntax for the new LMPOP command: `LMPOP numkeys [<key> ...] LEFT|RIGHT [COUNT count]`
Syntax for the new BLMPOP command: `BLMPOP timeout numkeys [<key> ...] LEFT|RIGHT [COUNT count]`

Some background:
- LPOP takes one key, and can return multiple elements.
- BLPOP takes multiple keys, but returns one element from just one key.
- LMPOP can take multiple keys, and return multiple elements from just one key.

Note that although LMPOP/BLMPOP can take multiple keys, they eventually operate on just one key, and they propagate as LPOP or RPOP with the COUNT option. As a new command, it still returns NIL if we can't pop any elements. The normal response is a nested array in both RESP2 and RESP3, like:
```
LMPOP/BLMPOP
1) keyname
2) 1) element1
   2) element2
```
I.e. unlike BLPOP, which returns a key name and one element and so uses a flat array, and LPOP, which returns multiple elements with no key name and again uses a flat array, this one has to return a nested array, and it does so for both RESP2 and RESP3 (like SCAN does). Related discussion: #766 #8824
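A quick illustration of the new command (a hypothetical session; the key and element names are made up):
```
redis> RPUSH mylist "one" "two" "three"
(integer) 3
redis> LMPOP 2 nosuchlist mylist LEFT COUNT 2
1) "mylist"
2) 1) "one"
   2) "two"
```
The first key that is non-empty is the one popped from, which is why the missing `nosuchlist` is skipped and the reply names `mylist`.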
-
Wang Yuan authored
* Delay discarding the cached master until full synchronization is done
* Don't disconnect from replicas before loading the transferred RDB during full sync

Previously, once a replica needed to start a full synchronization with its master, it discarded the cached master regardless of whether the full synchronization succeeded or failed. Now we discard the cached master only when transferring the RDB has finished and we start to change the data space. This lets the replica start a partial resynchronization with another new master if the new master fails during the full synchronization.
-
chenyang8094 authored
When parsing an array-type reply, ctx was lost when recursively parsing its elements, which caused a memory leak in automemory mode. This is a result of the changes in #9202. Add a test for the callReplyParseCollection fix.
-
- 08 Sep, 2021 1 commit
-
-
zhaozhao.zz authored
When a replica is paused, it does not apply any commands, even commands that come from the master. If we feed the non-applied commands to the replication stream, the replication offset would be wrong, and data would be lost after failover (since the replica's `master_repl_offset` grows but the commands are not applied). To fix it, here are the changes:
* Don't update the replica's replication offset or propagate commands to sub-replicas when it's paused, in `commandProcessed`.
* Show `slave_read_repl_offset` in the INFO reply.
* Add an assert to make sure the master client is never blocked except by pause or by a module (some modules may use blocking to do background (parallel) processing and forward the original blocking module command to the replica; it's not a good way but it can work, so the assert excludes modules for now, but someday in the future all modules should rewrite blocking commands to propagate like `BLPOP` does).
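For reference, this is roughly how the new field might appear on a replica (a hypothetical, abbreviated `INFO replication` excerpt; `slave_read_repl_offset` comes from this commit, the surrounding fields are standard, and the values are made up):
```
redis> INFO replication
# Replication
role:slave
master_link_status:up
slave_read_repl_offset:41532
slave_repl_offset:41532
```
While the replica is paused, `slave_read_repl_offset` can advance past `slave_repl_offset`, since data is read but not yet applied.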
-
- 06 Sep, 2021 1 commit
-
-
Viktor Söderqvist authored
Until now, giving a negative index seeks from the end of a list and a positive one seeks from the beginning. This change makes it seek from the nearest end, regardless of the sign of the given index. quicklistIndex is used by all list commands which operate by index. LINDEX key 999999 in a list of 1M elements is greatly optimized by this change: latency is cut by 75%. LINDEX key -1000000 in a list of 1M elements, likewise. LRANGE key -1 -1 is affected by this, since LRANGE converts the indices to positive numbers before seeking. The tests for corrupt dumps are updated to make sure the corrupt data is seeked in the same direction as before.
-
- 05 Sep, 2021 1 commit
-
-
Wen Hui authored
Use SENTINEL DEBUG to reduce default timeouts and allow tests to execute faster.
-
- 03 Sep, 2021 1 commit
-
-
Madelyn Olson authored
-
- 30 Aug, 2021 1 commit
-
-
Oran Agra authored
Failed on a Raspberry Pi 3b, where that single test took about 170 seconds.
-
- 29 Aug, 2021 2 commits
-
-
Binbin authored
This one follows #9313 and goes deeper (validation of config file parsing). Move the check/update logic to a new updateClientOutputBufferLimit function, so that it can be used both in CONFIG SET and in config file parsing.
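For context, this is the config that both code paths now validate the same way (a hypothetical session; the limit values are just examples):
```
redis> CONFIG SET client-output-buffer-limit "normal 0 0 0 slave 268435456 67108864 60"
OK
redis> CONFIG GET client-output-buffer-limit
1) "client-output-buffer-limit"
2) "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60"
```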
-
Viktor Söderqvist authored
1. The output of --help:
   * On the Usage line, just write [OPTIONS] [COMMAND ARGS...] instead of listing only a few arbitrary options and no command.
   * For --cluster, describe that if the command is supplied on the command line, the key must contain "{tag}". Otherwise, the command will not be sent to the right cluster node.
   * For -r, add a note that if -r is omitted, all commands in a benchmark will use the same key. Also align the description.
   * For -t, describe that -t is ignored if a command is supplied on the command line.
2. Print a warning if -t is present when a specific command is supplied.
3. Print all warnings and errors to stderr.
4. Remove -e from calls in the redis-benchmark test suite.
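As an illustration of the --cluster note above (a hypothetical invocation; the key pattern and counts are made up, and `__rand_int__` is redis-benchmark's existing random-value placeholder used with -r):
```
$ redis-benchmark --cluster -r 1000 -n 100000 SET "key:{tag}:__rand_int__" value
```
The "{tag}" hash tag pins all generated keys to one slot, so the custom command is always routed to the right cluster node.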
-
- 22 Aug, 2021 2 commits
-
-
Binbin authored
Previously, we always increased server.dirty in SETBIT and BITFIELD SET, even when the command didn't really change anything. This commit makes sure SETBIT and BITFIELD SET only increase dirty when the value changed. Because of that, if the value is not changed, there are some other implications:
- Avoid adding useless AOF
- Reduce replication traffic
- Will not trigger keyspace notifications (setbit)
- Will not invalidate WATCH
- Will not send the invalidation message to the tracking client
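A sketch of the new behavior (a hypothetical session; per this commit, the second call leaves the value unchanged and no longer dirties the keyspace):
```
redis> SETBIT mykey 7 1
(integer) 0
redis> SETBIT mykey 7 1
(integer) 1
```
The second call still returns the old bit value (1) as usual, but since nothing changed it produces no AOF entry, replication traffic, or `setbit` notification.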
-
Viktor Söderqvist authored
-
- 20 Aug, 2021 1 commit
-
-
sundb authored
-
- 18 Aug, 2021 1 commit
-
-
Yossi Gottlieb authored
We only run OOM-related tests on x86_64 and aarch64, as jemalloc on other platforms (notably s390x) may actually succeed at very large allocations. As a result the test may hang for a very long time at the cleanup phase, iterating over as many as 2^61 hash table slots.
-
- 10 Aug, 2021 2 commits
-
-
yoav-steinberg authored
-
sundb authored
Part one of implementing #8702 (taking hashes first, before other types).

## Description of the feature
1. Change ziplist-encoded hash objects to listpack encoding.
2. Convert existing ziplists at RDB loading time (an O(n) operation).

## RDB format changes
1. Add the RDB_TYPE_HASH_LISTPACK rdb type.
2. Bump RDB_VERSION to 10.

## Interface changes
1. The new `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`).
2. OBJECT ENCODING will return `listpack` instead of `ziplist` (see the sketch after this entry).

## Listpack improvements
1. Support direct insert and replace of integer elements (rather than converting back and forth from string).
2. Add more listpack capabilities to match the ziplist ones (like `lpFind`, `lpRandomPairs` and such).
3. Optimize element length fetching, avoiding multiple calculations.
4. Use inline to avoid function call overhead.

## Tests
1. Add a new test for the RDB load time conversion.
2. Add the listpack unit tests (based on the ones in ziplist.c).
3. Add a few "corrupt payload: fuzzer findings" tests, and slightly modify existing ones.

Co-authored-by:
Oran Agra <oran@redislabs.com>
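A sketch of the user-visible encoding change (a hypothetical session on a post-change build; key and field names are made up):
```
redis> HSET myhash field value
(integer) 1
redis> OBJECT ENCODING myhash
"listpack"
redis> CONFIG SET hash-max-listpack-entries 0
OK
redis> HSET myhash2 field value
(integer) 1
redis> OBJECT ENCODING myhash2
"hashtable"
```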
-
- 09 Aug, 2021 3 commits
-
-
sundb authored
This commit mainly fixes empty keys due to RDB loading and the RESTORE command, which were omitted in #9297. 1) When loading a quicklist, if all the ziplists in the quicklist are empty, NULL will be returned. If only some of the ziplists are empty, we will skip the empty ziplists silently. 2) When loading a hash zipmap, if the zipmap is empty, the sanitization check will fail. 3) When loading a hash ziplist, if the ziplist is empty, NULL will be returned. 4) Add an RDB loading test with sanitization.
-
Eduardo Semprebon authored
Add a read-only variant of the STORE command, so it can be used in read-only workloads (replica, ACL, etc.).
-
Qu Chen authored
The replication client no longer checks the incoming command length against the client-query-buffer-limit. This makes the master able to replicate commands longer than the replica's configured client-query-buffer-limit.
-
- 07 Aug, 2021 1 commit
-
-
DarrenJiang13 authored
Add error counting for some missed behaviors.
-
- 06 Aug, 2021 1 commit
-
-
yoav-steinberg authored
Also update the qbuf tests to verify both idle- and peak-based resizing logic, and delete the unused function getClientsMaxBuffers.
-
- 05 Aug, 2021 6 commits
-
-
Oran Agra authored
The execution of the RPOPLPUSH command by the fuzzer created junk keys that were later being selected by RANDOMKEY and modified. This also meant that lists were statistically tested more than other types. Fix the fuzzer not to pass junk key names to RPOPLPUSH, and add a check verifying that no new keys are added by the fuzzer, to detect similar issues in the future.
-
Oran Agra authored
Recently we found two issues in the fuzzer tester: #9302 #9285. After fixing them, more problems surfaced and this PR (as well as #9297) aims to fix them. Here's a list of the fixes:
- Prevent an overflow when allocating a dict hashtable
- Prevent OOM when attempting to allocate a huge string
- Prevent a few invalid accesses in listpack
- Improve sanitization of the listpack first entry
- Validate the integrity of stream consumer group PELs
- Validate the integrity of stream listpack entry IDs
- Validate a ziplist tail followed by extra data which starts with 0xff

Co-authored-by:
sundb <sundbcn@gmail.com>
-
sundb authored
When we load an RDB or process a RESTORE command, encountering a length of 0 would result in the creation of an empty key. This could either be a corrupt payload, or the result of a bug (see #8453). This PR mainly fixes the following: 1) The RESTORE command will return a `Bad data format` error. 2) When loading an RDB, we will silently discard the key. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Binbin authored
The psync2 test has failed several times recently. In #9159 we only solved half of the problem, i.e. reordering of a replica that's already connected to the newly promoted master. Consider this scenario:
- 0 slaveof 2
- 1 slaveof 2
- 3 slaveof 2
- 4 slaveof 1
- 0 slaveof no one, became a new master, got a new replid
- 2 slaveof 0, partial resync and got the new replid
- 3 reconnect 2, inherit the new replid
- 3 slaveof 4, use the new replid and got a full resync

And another scenario:
- 1 slaveof 3
- 2 slaveof 4
- 3 slaveof 0
- 4 slaveof 0
- 4 slaveof no one, became a new master, got a new replid
- 2 reconnect 4, inherit the new replid
- 2 slaveof 1, use the new replid and got a full resync

So maybe we should reattach replicas in the right order. I.e. in the above example, if it had reattached 1, 3 and 0 to the new chain formed by 4 before trying to attach 2 to 1, it would succeed. This commit breaks the SLAVEOF loop into two loops (ideas from oran): a first loop that uses random to decide who replicates from whom, and a second loop that does the actual SLAVEOF commands. In the second loop, we make sure to execute them in the right order, and after each SLAVEOF, wait for the connection to be established before we proceed. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Wen Hui authored
This makes it possible to tune many parameters that were previously hard coded. We don't intend these to be user configurable; they are only used by tests to accelerate certain conditions which would otherwise take a long time and slow down the test suite. Co-authored-by:
Lucas Guang Yang <l84193800@china.huawei.com>
-
Viktor Söderqvist authored
-
- 04 Aug, 2021 3 commits
-
-
Wang Yuan authored
## Background
As we know, after `fork`, one process copies pages when it writes to them (copy-on-write), while the other process keeps the old pages, so together they cost more memory. With Redis, we've suffered Redis consuming a lot of memory while the fork child is serializing key/values, which may even cause OOM. But we actually found that in the fork child process, the child doesn't need to keep some of the memory the parent may write to or update; for example, the child process will never again access a key/value it has already serialized, while users may update it in the parent process. So we think it can reduce CoW if the child process releases the memory it no longer needs.

## Implementation
To release key/values in the child process, we might think of calling `decrRefCount` to free the memory, but we found that the fork child process still used a lot of memory even when we didn't write any data to Redis, and it took much more time, slowing down bgsave. That's likely because the memory allocator doesn't really release memory to the OS, and it may modify some of its internal data on free, especially when freeing small objects. Moreover, CoW is page-based, so an easy approach is to only free memory bulks that are no smaller than the kernel page size. madvise(MADV_DONTNEED) can quickly release the pages of a specified region to the OS, bypassing the memory allocator; the allocator still considers the memory in use and doesn't change its internal data.

There are some buffers we can release in the fork child process:
- **Serialized key-values**: the fork child process never accesses serialized key-values again, so we try to free them. Because we can only release big memory bulks, and it is time-consuming to iterate all items/members/fields/entries of complex data types, we decided to iterate them and try to release them only when the average size of an item/member/field/entry is larger than the OS page size.
- **Replication backlog**: because the replication backlog is a circular buffer, it changes quickly if Redis has heavy write traffic, but the fork child process doesn't need to access it.
- **Client buffers**: if clients send requests while the fork child process exists, the clients' buffers also change frequently. This memory includes the client query buffer, output buffer, and the memory used by the client struct.

To get the child process's peak private dirty memory, we need to track peak memory instead of last used memory, because the child process may now keep releasing memory as it goes (since CoW used to only grow, the last value used to be equivalent to the peak). We're also adding a new `current_cow_peak` info variable (to complement the existing `current_cow_size`); see the sketch after this entry.

Co-authored-by:
Oran Agra <oran@redislabs.com>
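For reference, a sketch of how the new field might show up during a bgsave (a hypothetical, abbreviated `INFO persistence` excerpt; `current_cow_peak` comes from this commit, `current_cow_size` already existed, and the values are made up):
```
redis> INFO persistence
# Persistence
rdb_bgsave_in_progress:1
current_cow_peak:12582912
current_cow_size:8388608
```
With the child now releasing memory mid-save, `current_cow_size` can drop below `current_cow_peak`, which is exactly why the peak needs its own counter.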
-
Meir Shpilraien (Spielrein) authored
Fix a test introduced in #9202 that failed on the 32bit CI. The failure was due to a wrong double comparison. The code was changed to stringify the double first and then compare.
-
Meir Shpilraien (Spielrein) authored
## Current state
1. Lua has its own parser that handles parsing `redis.call` replies and translates them to Lua objects that can be used by the user's Lua code. The parser partially handles resp3 (missing big number, verbatim, attribute, ...).
2. Modules have their own parser that handles parsing `RM_Call` replies and translates them to RedisModuleCallReply objects. The parser does not support resp3.

In addition, in the future we want to add Redis Functions (#8693) that will probably support more languages. At some point, maintaining so many parsers will stop scaling (bug fixes and protocol changes will need to be applied to all of them). We will probably end up with different parsers that support different parts of the resp protocol (like we already have today with Lua and modules).

## PR Changes
This PR attempts to unify the reply parsing of Lua and modules (and, in the future, Redis Functions) by introducing a new parser unit (`resp_parser.c`). The new parser handles parsing the reply and calls different callbacks to allow the users (another unit that uses the parser, i.e. Lua, modules, or Redis Functions) to analyze the reply.

### Lua API Additions
The code that handled reply parsing in `scripting.c` was removed. Instead, it uses the resp_parser to parse and create a Lua object out of the reply. As mentioned above, the Lua parser did not handle parsing big numbers, verbatim strings, and attributes. The new parser can handle those, so Lua also gets them for free. They are translated to Lua objects in the following way:
1. Big Number - Lua table `{'big_number':'<str representation for big number>'}`
2. Verbatim - Lua table `{'verbatim_string':{'format':'<verbatim format>', 'string':'<verbatim string value>'}}`
3. Attribute - currently ignored and not exposed to the Lua parser; another issue will be opened to decide how to expose it.

Tests were added to check resp3 reply parsing on Lua (a sketch of what this enables is shown after this entry).

### Modules API Additions
The reply parsing code in `module.c` was also removed and the new resp_parser is used instead. In addition, the RedisModuleCallReply was extracted to a separate unit located in `call_reply.c` (in the future, this unit will also be used by Redis Functions). A nice side effect of the unified parsing is that modules now also support resp3. Resp3 can be enabled by giving `3` as a parameter to the fmt argument of `RM_Call`. It is also possible to give `0`, which indicates an auto mode, i.e. Redis will automatically choose the reply protocol based on the current client set on the RedisModuleCtx (this mode will mostly be used when the module wants to pass the reply to the client as is).
In addition, the following RedisModuleAPI were added to allow analyzing resp3 replies:
* New RedisModuleCallReply types:
  * `REDISMODULE_REPLY_MAP`
  * `REDISMODULE_REPLY_SET`
  * `REDISMODULE_REPLY_BOOL`
  * `REDISMODULE_REPLY_DOUBLE`
  * `REDISMODULE_REPLY_BIG_NUMBER`
  * `REDISMODULE_REPLY_VERBATIM_STRING`
  * `REDISMODULE_REPLY_ATTRIBUTE`
* New RedisModuleAPI:
  * `RedisModule_CallReplyDouble` - get the double value from a resp3 double reply
  * `RedisModule_CallReplyBool` - get the boolean value from a resp3 boolean reply
  * `RedisModule_CallReplyBigNumber` - get the big number value from a resp3 big number reply
  * `RedisModule_CallReplyVerbatim` - get the format and value from a resp3 verbatim reply
  * `RedisModule_CallReplySetElement` - get an element from a resp3 set reply
  * `RedisModule_CallReplyMapElement` - get a key and value from a resp3 map reply
  * `RedisModule_CallReplyAttribute` - get a reply's attribute
  * `RedisModule_CallReplyAttributeElement` - get a key and value from a resp3 attribute reply
* New context flags:
  * `REDISMODULE_CTX_FLAGS_RESP3` - indicates that the client is using resp3

Tests were added to check the new RedisModuleAPI.

### Modules API Changes
* RM_ReplyWithCallReply might return REDISMODULE_ERR if the given CallReply is in resp3 but the client expects resp2. This is not a breaking change, because in order to get a resp3 CallReply one needs to specifically give `3` as a parameter to the fmt argument of `RM_Call` (as mentioned above). Tests were added to check this change.

### More small Additions
* Added `debug set-disable-deny-scripts`, which allows turning the commands' no-script flag protection on and off. This is used by the Lua resp3 tests, making it possible to run `debug protocol` and check the resp3 parsing code.

Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Yossi Gottlieb <yossigo@gmail.com>
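A sketch of the Lua-side gain (a hypothetical session; `redis.setresp` and `DEBUG PROTOCOL` are pre-existing facilities, and the `big_number` field name is taken from the translation table above):
```
redis> EVAL "redis.setresp(3); local r = redis.call('DEBUG','PROTOCOL','bignum'); return r.big_number" 0
"1234567999999999999999999999999999999"
```
Before this PR the Lua parser had no translation for a resp3 big number reply; with the unified resp_parser it arrives as the `{'big_number': ...}` table shown here.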
-
- 03 Aug, 2021 3 commits
-
-
Oran Agra authored
-
Jonah H. Harris authored
Add SINTERCARD and ZINTERCARD commands that are similar to SINTER and ZINTER respectively, but only return the cardinality, with minimal processing and memory overheads (see the sketch after this entry). Co-authored-by:
Oran Agra <oran@redislabs.com>
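A sketch of the intended usage (a hypothetical session; the numkeys-style syntax follows the current documentation and may differ slightly from this initial commit):
```
redis> SADD s1 a b c d
(integer) 4
redis> SADD s2 b c d e
(integer) 4
redis> SINTERCARD 2 s1 s2
(integer) 3
```
Unlike `SINTER 2 s1 s2`, no result set is built or returned; only the count is computed.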
-
Ariel Shtul authored
Add new Module APIs for RESP3 responses:
- RM_ReplyWithMap
- RM_ReplyWithSet
- RM_ReplyWithAttribute
- RM_ReplySetMapLength
- RM_ReplySetSetLength
- RM_ReplySetAttributeLength
- RM_ReplyWithBool

Deprecate REDISMODULE_POSTPONED_ARRAY_LEN in favor of a generic REDISMODULE_POSTPONED_LEN. Improve documentation. Add tests. Co-authored-by:
Guy Benoish <guy.benoish@redislabs.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 02 Aug, 2021 4 commits
-
-
Yossi Gottlieb authored
Loading and unloading the shared object does not initialize global vars on Alpine.
-
Huang Zhw authored
When redis-cli received ASK, it used the wrong string matching and didn't handle it. When we access a slot which is in migrating state, the server may return ASK. After redirecting to the new node, we need to send an ASKING command before retrying the command. In this PR, after redis-cli receives ASK, we reconnect and send an ASKING command before sending the original command (see the protocol sketch after this entry).

Other changes:
* Make redis-cli -u and -c (unix socket and cluster mode) incompatible with one another.
* When sending a command fails, we avoid a second reconnect retry and just print the error info; users will decide what to do next. See #9277.
* Add a test faking two redis nodes in Tcl that just send ASK and OK in redis protocol, to test the ASK behavior.

Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by:
Oran Agra <oran@redislabs.com>
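For reference, a sketch of the redirect flow the fix implements (an illustrative protocol trace; the slot number, address, and key are made up):
```
GET mykey
-ASK 3999 127.0.0.1:6381      <- slot is migrating; server redirects
ASKING                        <- redis-cli reconnects to 127.0.0.1:6381, sends ASKING first
+OK
GET mykey                     <- then retries the original command
```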
-
Ning Sun authored
Add NX, XX, GT, and LT flags to EXPIRE, PEXPIRE, EXPIREAT, and PEXPIREAT:
- NX - only set the TTL if no TTL is currently set
- XX - only set the TTL if there is a TTL currently set
- GT - only increase the TTL (considering non-volatile keys as having an infinite expire time)
- LT - only decrease the TTL (considering non-volatile keys as having an infinite expire time)

The return value of the command is 0 when the operation is skipped due to one of these flags (see the sketch after this entry).

Signed-off-by:
Ning Sun <sunng@protonmail.com>
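A sketch of the flag semantics (a hypothetical session):
```
redis> SET mykey value
OK
redis> EXPIRE mykey 100 NX
(integer) 1
redis> EXPIRE mykey 50 NX
(integer) 0
redis> EXPIRE mykey 200 GT
(integer) 1
redis> EXPIRE mykey 50 GT
(integer) 0
```
The second and fourth calls return 0: the NX call is skipped because a TTL is already set, and the final GT call because 50 would not increase the TTL.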
-
menwen authored
Fixes:
- When a consumer was created as a side effect, redis didn't issue a keyspace notification, nor did it increment server.dirty (which affects periodic snapshots). This was a bug in XREADGROUP, XCLAIM, and XAUTOCLAIM.
- When attempting to delete a non-existent consumer, don't issue a keyspace notification and don't increment server.dirty. This was a bug in XGROUP DELCONSUMER.

Other changes:
- Changed streamLookupConsumer() to only ever look up a consumer (never do implicit creation); its last-seen time is updated unless the SLC_NO_REFRESH flag is specified.
- Added streamCreateConsumer() to create a new consumer. When the creation is successful, it will notify and dirty++ unless the SCC_NO_NOTIFY or SCC_NO_DIRTIFY flags are specified.
- Changed streamDelConsumer() to only ever delete a consumer.
- Added keyspace notification tests for stream events.
-
- 01 Aug, 2021 3 commits
-
-
Binbin authored
With an empty src key, we need to deal with two situations:
1. non-STORE: we should return an empty array.
2. STORE: try to delete the store key and return 0.

This applies to both GEOSEARCHSTORE (new to v6.2) and GEORADIUS STORE (which was broken since forever). This PR tries to fix #9261: both STORE variants would have behaved like the non-STORE variants when the source key was missing, returning an empty array and not deleting the destination key, instead of returning 0 and deleting the destination key (see the sketch after this entry).

Also add more tests for some commands:
- GEORADIUS: wrong type src key, non-existing src key, empty search, store with non-existing src key, store with empty search
- GEORADIUSBYMEMBER: wrong type src key, non-existing src key, non-existing member, store with non-existing src key
- GEOSEARCH: wrong type src key, non-existing src key, empty search, frommember with non-existing member
- GEOSEARCHSTORE: wrong type key, non-existing src key, fromlonlat with empty search, frommember with non-existing member

Co-authored-by:
Oran Agra <oran@redislabs.com>
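A sketch of the fixed STORE behavior (a hypothetical session; key names and coordinates are made up):
```
redis> EXISTS src dest
(integer) 0
redis> GEOSEARCHSTORE dest src FROMLONLAT 15 37 BYRADIUS 200 km ASC
(integer) 0
redis> EXISTS dest
(integer) 0
```
The missing source now yields 0 and removes any previous destination key, rather than returning an empty-array-style result and leaving a stale `dest` behind.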
-
Yossi Gottlieb authored
In some cases, large replies on slow systems may only be partially read by the test suite, resulting in parsing errors. This fix is still timing-sensitive but should greatly reduce the chances of this happening.
-
Guy Korland authored
Co-authored-by:
Yossi Gottlieb <yossigo@gmail.com>
-