- 22 Aug, 2021 2 commits
-
-
Binbin authored
Previously, we always incremented server.dirty in SETBIT and BITFIELD SET, even when the command didn't actually change anything. This commit makes sure SETBIT and BITFIELD SET only increment it when the value has changed. Because of that, when the value is unchanged there are some other implications:
- No useless AOF entries are added
- Replication traffic is reduced
- No keyspace notification (setbit) is fired
- WATCH is not invalidated
- No invalidation message is sent to tracking clients
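A minimal sketch of the guard (illustrative, not the actual t_string.c code), assuming SETBIT's MSB-first bit layout: the old bit is read first, and the dirty counter is bumped only on a real change.

```c
#include <stddef.h>

/* Hedged sketch: set bit `bitpos` of `buf` to `on` (0 or 1) and increment
 * *dirty only if the stored value actually changed. Bit 0 is the most
 * significant bit of byte 0, as in SETBIT. */
static int setbit_if_changed(unsigned char *buf, size_t bitpos, int on,
                             long long *dirty) {
    size_t byte = bitpos >> 3;
    int shift = 7 - (bitpos & 0x7);
    int old = (buf[byte] >> shift) & 0x1;
    if (old != on) {
        buf[byte] = (unsigned char)((buf[byte] & ~(1 << shift)) | (on << shift));
        (*dirty)++;   /* only a real change propagates to AOF/replication */
    }
    return old;       /* SETBIT replies with the original bit value */
}
```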
-
Viktor Söderqvist authored
-
- 20 Aug, 2021 1 commit
-
-
sundb authored
-
- 18 Aug, 2021 2 commits
-
-
Yossi Gottlieb authored
We only run OOM-related tests on x86_64 and aarch64, as jemalloc on other platforms (notably s390x) may actually satisfy very large allocations. As a result, the test may hang for a very long time in the cleanup phase, iterating over as many as 2^61 hash table slots.
-
yoav-steinberg authored
Fix compilation warnings on s390x.
-
- 15 Aug, 2021 1 commit
-
-
Yossi Gottlieb authored
On systems where `char` is unsigned by default (s390x, arm), redis-server could crash as soon as it populated the command table.
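For illustration only (this is the class of bug, not the actual crash site in the command table): storing a negative sentinel in a plain `char` is implementation-defined, and comparisons against it silently fail where `char` is unsigned.

```c
#include <stdio.h>

int main(void) {
    char sentinel = -1;     /* implementation-defined: may become 255 */
    if (sentinel == -1)
        puts("char is signed here (e.g. x86)");
    else
        puts("char is unsigned here (e.g. arm, s390x): sentinel tests break");
    return 0;
}
```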
-
- 14 Aug, 2021 1 commit
-
-
Wang Yuan authored
If we want to check `defined(SYNC_FILE_RANGE_WAIT_BEFORE)`, we must include fcntl.h; otherwise SYNC_FILE_RANGE_WAIT_BEFORE is never defined and the `sync_file_range` system call is never used. Introduced by #8532.
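A sketch of the pattern involved, along the lines of Redis's config.h feature detection (the HAVE_ macro and helper name here are illustrative): without the `<fcntl.h>` include, the `defined()` check is always false and the fallback path is silently taken.

```c
#define _GNU_SOURCE
#include <fcntl.h>   /* defines SYNC_FILE_RANGE_WAIT_BEFORE on Linux */
#include <unistd.h>

#if defined(__linux__) && defined(SYNC_FILE_RANGE_WAIT_BEFORE)
#define HAVE_SYNC_FILE_RANGE 1
#endif

/* Flush a byte range of a file to disk, preferring sync_file_range(). */
static int flush_range(int fd, off_t off, off_t size) {
#ifdef HAVE_SYNC_FILE_RANGE
    return sync_file_range(fd, off, size,
                           SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE);
#else
    (void)off; (void)size;
    return fsync(fd);    /* fallback when the syscall is unavailable */
#endif
}
```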
-
- 12 Aug, 2021 3 commits
-
-
Madelyn Olson authored
-
Madelyn Olson authored
-
Yossi Gottlieb authored
The order of setting things up follows some reasoning: set up signal handlers first, because a signal could fire at any time; adjust the OOM score before everything else, to assist the OOM killer if memory resources are low. The trigger for this was a valgrind test failure in which the child caught a SIGUSR1 before initializing the handler.
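A hedged sketch of that ordering (helper names and bodies are illustrative, not Redis's actual child-setup code):

```c
#include <signal.h>
#include <string.h>

static void on_sigusr1(int sig) { (void)sig; /* e.g. mark "child killed" */ }

static void setup_child_signal_handlers(void) {
    struct sigaction act;
    memset(&act, 0, sizeof(act));
    act.sa_handler = on_sigusr1;
    sigaction(SIGUSR1, &act, NULL);
}

static void adjust_child_oom_score(void) {
    /* e.g. write to /proc/self/oom_score_adj so the OOM killer
     * prefers the child when memory is low */
}

void after_fork_child_setup(void) {
    setup_child_signal_handlers();  /* first: a signal could fire at any time */
    adjust_child_oom_score();       /* next: before any real work */
    /* ... everything else ... */
}
```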
-
- 11 Aug, 2021 1 commit
-
-
Yossi Gottlieb authored
Making sure Redis builds properly on older compilers is important given the wide range of systems it is built for. So far Ubuntu 16.04 has been used for this purpose, but as it's being phased out we'll move to Debian `oldoldstable` as the "old system" stand-in.
-
- 10 Aug, 2021 6 commits
-
-
sundb authored
-
Huang Zhw authored
Abort cli blocking modes with SIGINT without exiting the cli.
Co-authored-by: charsyam <charsyam@gmail.com>
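A minimal sketch of the mechanism, assuming the classic flag-from-handler pattern (names and loop body are illustrative, not redis-cli's exact code): Ctrl-C sets a flag the blocking loop polls, instead of terminating the process.

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t blocked_mode_aborted = 0;

static void sigint_handler(int sig) {
    (void)sig;
    blocked_mode_aborted = 1;   /* polled by the blocking loop below */
}

int main(void) {
    signal(SIGINT, sigint_handler);
    /* stands in for a blocking mode such as MONITOR or SUBSCRIBE */
    while (!blocked_mode_aborted) {
        sleep(1);               /* ... read and print replies here ... */
    }
    printf("\nblocking mode aborted, cli keeps running\n");
    return 0;
}
```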
-
yoav-steinberg authored
-
DarrenJiang13 authored
We only use MADV_DONTNEED on Linux; that's where it was tested.
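A sketch of the gating, assuming a compile-time guard (the helper name is invented):

```c
#include <stddef.h>

#if defined(__linux__)
#include <sys/mman.h>
/* Tested platform: hand the region's pages back to the kernel. */
static void release_region(void *ptr, size_t len) {
    madvise(ptr, len, MADV_DONTNEED);
}
#else
/* Untested elsewhere (and semantics differ on other systems): no-op. */
static void release_region(void *ptr, size_t len) {
    (void)ptr; (void)len;
}
#endif
```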
-
Meir Shpilraien (Spielrein) authored
Following the comments on #8659, this PR fixes some formatting and naming issues.
-
sundb authored
Part one of implementing #8702 (taking hashes first, before other types).

## Description of the feature
1. Change ziplist-encoded hash objects to listpack encoding.
2. Convert existing ziplists at RDB loading time (an O(n) operation).

## RDB format changes
1. Add the RDB_TYPE_HASH_LISTPACK rdb type.
2. Bump RDB_VERSION to 10.

## Interface changes
1. The new `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`).
2. OBJECT ENCODING will return `listpack` instead of `ziplist`.

## Listpack improvements
1. Support direct insert and replace of integer elements (rather than converting back and forth from string).
2. Add more listpack capabilities to match the ziplist ones (like `lpFind`, `lpRandomPairs` and such).
3. Optimize element length fetching, avoiding multiple calculations.
4. Use inline to avoid function call overhead.

## Tests
1. Add a new test for the RDB load-time conversion.
2. Add the listpack unit tests (based on the ones in ziplist.c).
3. Add a few "corrupt payload: fuzzer findings" tests, and slightly modify existing ones.

Co-authored-by: Oran Agra <oran@redislabs.com>
-
- 09 Aug, 2021 4 commits
-
-
sundb authored
This commit mainly fixes empty keys due to RDB loading and the RESTORE command, which was omitted in #9297.
1) When loading a quicklist, if all the ziplists in the quicklist are empty, NULL will be returned. If only some of the ziplists are empty, we skip the empty ziplists silently.
2) When loading a hash zipmap, if the zipmap is empty, the sanitization check will fail.
3) When loading a hash ziplist, if the ziplist is empty, NULL will be returned.
4) Add an RDB loading test with sanitization.
-
Qu Chen authored
AOF fake client creation (createAOFClient) was doing work similar to createClient, with some minor differences, most of them unintended. This was dangerous and meant that many changes to createClient should have always been reflected in aof.c. This cleanup changes createAOFClient to call createClient with NULL, like we do in module.c and elsewhere.
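A hedged sketch of the resulting shape, assuming the Redis source tree's declarations (the overrides shown are illustrative, not necessarily the exact set aof.c keeps):

```c
#include "server.h"

/* The fake client now goes through the shared init path. Passing NULL
 * means "no connection", as module.c does for its fake clients. */
client *createAOFClient(void) {
    client *c = createClient(NULL);
    c->id = CLIENT_ID_AOF;   /* so errors can be attributed to AOF replay */
    /* ... only AOF-specific overrides remain here ... */
    return c;
}
```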
-
Eduardo Semprebon authored
Add a readonly variant of the STORE command, so it can be used in read-only workloads (replica, ACL, etc.).
-
Qu Chen authored
The replication client no longer checks the incoming command length against the client-query-buffer-limit. This makes the master able to replicate commands longer than the replica's configured client-query-buffer-limit.
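A simplified sketch of the exemption (not the literal networking.c code), assuming the Redis source tree's declarations:

```c
#include "server.h"

/* Masters feed us already-verified data; their link is exempt from the
 * client-query-buffer-limit so replication can carry commands larger
 * than this replica's own configured limit. */
static int query_buffer_over_limit(client *c) {
    if (c->flags & CLIENT_MASTER) return 0;   /* replication link: no limit */
    return sdslen(c->querybuf) > server.client_max_querybuf_len;
}
```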
-
- 08 Aug, 2021 3 commits
-
-
Yossi Gottlieb authored
-
Binbin authored
Fixes undefined behavior, the same way our `ll2string` does.
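The usual UB in this area is negating LLONG_MIN; a hedged sketch of the special-casing pattern `ll2string` in util.c relies on:

```c
#include <limits.h>
#include <stdio.h>

/* -LLONG_MIN overflows (undefined behavior), so the magnitude of
 * LLONG_MIN is produced explicitly in unsigned arithmetic instead. */
static unsigned long long safe_abs(long long v) {
    if (v >= 0) return (unsigned long long)v;
    if (v != LLONG_MIN) return (unsigned long long)(-v);
    return ((unsigned long long)LLONG_MAX) + 1;   /* |LLONG_MIN| */
}

int main(void) {
    printf("%llu\n", safe_abs(LLONG_MIN));  /* 9223372036854775808 */
    return 0;
}
```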
-
Binbin authored
The test was meant to test `insert before 1 element`, but it used quicklistInsertAfter, a copy-paste typo. The commit also adds asserts that verify results in some tests, to make sure they are as expected.
-
- 07 Aug, 2021 1 commit
-
-
DarrenJiang13 authored
Add error counting for some missed behaviors.
-
- 06 Aug, 2021 1 commit
-
-
yoav-steinberg authored
Also update the qbuf tests to verify both the idle- and peak-based resizing logic, and delete the unused function getClientsMaxBuffers.
-
- 05 Aug, 2021 10 commits
-
-
Oran Agra authored
The execution of the RPOPLPUSH command by the fuzzer created junk keys that were later selected by RANDOMKEY and modified. This also meant that lists were statistically tested more than other types. Fix the fuzzer not to pass junk key names to RPOPLPUSH, and add a check that verifies the fuzzer does not add new keys, to detect similar issues in the future.
-
Oran Agra authored
Recently we found two issues in the fuzzer tester: #9302 #9285. After fixing them, more problems surfaced and this PR (as well as #9297) aims to fix them. Here's a list of the fixes:
- Prevent an overflow when allocating a dict hashtable
- Prevent OOM when attempting to allocate a huge string
- Prevent a few invalid accesses in listpack
- Improve sanitization of the listpack first entry
- Validate the integrity of stream consumer groups' PEL
- Validate the integrity of stream listpack entry IDs
- Validate a ziplist tail followed by extra data which starts with 0xff

Co-authored-by: sundb <sundbcn@gmail.com>
-
sundb authored
When we load an RDB or process a RESTORE command, encountering a length of 0 would result in the creation of an empty key. This could either be a corrupt payload, or the result of a bug (see #8453).
This PR mainly fixes the following:
1) The RESTORE command will return a `Bad data format` error.
2) When loading an RDB, we will silently discard the key.

Co-authored-by: Oran Agra <oran@redislabs.com>
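A hedged sketch of the guard (simplified; the real checks live in rdb.c's per-type loaders, and `rdbReportCorruptRDB` is private to that file):

```c
#include "server.h"
#include "rdb.h"   /* rio, rdbLoadLen; assumes the Redis source tree */

/* Illustrative only: a zero length means an empty key, which is treated
 * as a corrupt payload. RESTORE then answers "Bad data format"; plain
 * RDB loading discards the key silently. */
robj *loadHashSketch(rio *rdb) {
    uint64_t len = rdbLoadLen(rdb, NULL);
    if (len == 0) {
        rdbReportCorruptRDB("empty hash"); /* hypothetical message */
        return NULL;                       /* caller drops the key */
    }
    /* ... normal loading path elided ... */
    return NULL;
}
```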
-
Madelyn Olson authored
Add debug config flag to print certain config values on engine crash
-
Binbin authored
The psync2 test has failed several times recently. In #9159 we only solved half of the problem, i.e. the reordering of a replica that's already connected to the newly promoted master.

Consider this scenario:
0 slaveof 2
1 slaveof 2
3 slaveof 2
4 slaveof 1
0 slaveof no one, became a new master, got a new replid
2 slaveof 0, partial resync and got the new replid
3 reconnected to 2, inherited the new replid
3 slaveof 4, used the new replid and got a full resync

And another scenario:
1 slaveof 3
2 slaveof 4
3 slaveof 0
4 slaveof 0
4 slaveof no one, became a new master, got a new replid
2 reconnected to 4, inherited the new replid
2 slaveof 1, used the new replid and got a full resync

So maybe we should reattach replicas in the right order. That is, in the above example, if we had reattached 1, 3 and 0 to the new chain formed by 4 before trying to attach 2 to 1, it would succeed.

This commit breaks the SLAVEOF loop into two loops (ideas from Oran): a first loop that uses random to decide who replicates from whom, and a second loop that executes the actual SLAVEOF commands. In the second loop, we make sure to execute them in the right order, and after each SLAVEOF, wait for the connection to be established before we proceed.

Co-authored-by: Oran Agra <oran@redislabs.com>
-
Wen Hui authored
This makes it possible to tune many parameters that were previously hard-coded. We don't intend these to be user-configurable, but only to be used by tests to accelerate certain conditions that would otherwise take a long time and slow down the test suite.
Co-authored-by: Lucas Guang Yang <l84193800@china.huawei.com>
-
menwen authored
Fix the lack of latency samples when a key expires via expireIfNeeded(), plus some refactoring of shared code.
-
yoav-steinberg authored
-
yoav-steinberg authored
Reduce the dict struct memory overhead: on 64-bit, the dict size goes down from jemalloc's 96-byte bin to its 56-byte bin.

Summary of changes:
- Remove `privdata` from callbacks and dict creation (this affects many files, see "Interface change" below).
- Meld the `dictht` struct into the `dict` struct to eliminate struct padding (this affects just dict.c and defrag.c).
- Eliminate the `sizemask` field; it can be calculated from the size when needed.
- Convert the `size` field into `size_exp` (an exponent), using one byte instead of 8.

Interface change: pass the dict pointer to the dict type callback functions, instead of the removed privdata field. In the future, if we'd like to have private data in the callbacks, we can extract it from the dict type. We can extend dictType to include a custom dict struct allocator and use it to allocate more data at the end of the dict struct; this data can then be used to store private data later accessed by the callbacks.
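A sketch of the resulting layout (field names follow the PR description above and may not match the final source verbatim; sizes assume 64-bit):

```c
#include <stdint.h>

typedef struct dictType dictType;   /* callbacks now receive the dict itself */
typedef struct dictEntry dictEntry;

struct dict {
    dictType *type;
    dictEntry **ht_table[2];        /* melded dictht: no struct padding */
    unsigned long ht_used[2];
    long rehashidx;                 /* -1 when not rehashing */
    int16_t pauserehash;
    signed char ht_size_exp[2];     /* size == 1 << exp: one byte, not 8 */
};

/* sizemask is derived on demand instead of being stored. */
#define HT_SIZE(exp)      ((exp) == -1 ? 0 : (unsigned long)1 << (exp))
#define HT_SIZE_MASK(exp) ((exp) == -1 ? 0 : HT_SIZE(exp) - 1)
```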
-
Viktor Söderqvist authored
-
- 04 Aug, 2021 4 commits
-
-
Wang Yuan authored
## Background
As we know, after `fork`, one process copies pages when it writes to them (copy-on-write), while the other process keeps the old pages, so together they consume more memory. For Redis, we suffered from Redis consuming a lot of memory while the fork child was serializing key/values, which could even cause an OOM. But actually we found that in the Redis fork child process, the child doesn't need to keep some of that memory, while the parent may write to or update it. For example, the child process will never access a key-value pair it has already serialized, but users may update it in the parent process. So we think we can reduce CoW if the child process releases memory it no longer needs.

## Implementation
To release key-values in the child process, we might think of calling `decrRefCount` to free the memory, but we found that the fork child process still used much memory even when no data was written to Redis, and it cost much more time, slowing down bgsave. This is likely because the memory allocator doesn't really release memory to the OS, and it may modify some internal data for the free operation, especially when freeing small objects. Moreover, CoW is based on pages, so an easy approach is to only free memory bulks that are no smaller than the kernel page size. madvise(MADV_DONTNEED) can quickly release the pages of a specified region to the OS, bypassing the memory allocator, while the allocator still considers that memory in use and doesn't change its internal data.

There are some buffers we can release in the fork child process:
- **Serialized key-values**: the fork child process never accesses serialized key-values, so we try to free them. Because we can only release big memory bulks, and it is time-consuming to iterate all items/members/fields/entries of complex data types, we decided to iterate them and try to release them only when the average size per item/member/field/entry is larger than the OS page size.
- **Replication backlog**: because the replication backlog is a circular buffer, it changes quickly if Redis has heavy write traffic, but the fork child process doesn't need to access it.
- **Client buffers**: if clients send requests while the fork child process exists, client buffers also change frequently. This memory includes the client query buffer, the output buffer, and the memory used by the client struct itself.

To get the child process's peak private dirty memory, we need to count peak memory instead of last-used memory, because the child process may keep releasing memory (until now CoW only grew, so the last measurement was equivalent to the peak). We're also adding a new `current_cow_peak` info variable (to complement the existing `current_cow_size`).

Co-authored-by: Oran Agra <oran@redislabs.com>
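A minimal sketch of the page-granular release under plain POSIX assumptions (the helper name is invented): the buffer is rounded inward to page boundaries, and those whole pages are returned to the kernel without touching the allocator's bookkeeping.

```c
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Give whole pages of a no-longer-needed buffer back to the OS with
 * MADV_DONTNEED. Only full, page-aligned spans qualify; the allocator
 * still thinks the memory is in use, so its metadata stays untouched. */
static void release_pages_sketch(void *ptr, size_t size) {
    uintptr_t page  = (uintptr_t)sysconf(_SC_PAGESIZE);
    uintptr_t start = ((uintptr_t)ptr + page - 1) & ~(page - 1); /* round up */
    uintptr_t end   = ((uintptr_t)ptr + size) & ~(page - 1);     /* round down */
    if (end > start) madvise((void *)start, end - start, MADV_DONTNEED);
}
```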
-
Meir Shpilraien (Spielrein) authored
Fix a test introduced in #9202 that failed on the 32-bit CI. The failure was due to a wrong double comparison; change the code to stringify the double first and then compare.
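Illustrative only (the actual change is in the test code, not a C helper): comparing via a common string form instead of raw `==` absorbs last-bit differences between 32-bit and 64-bit builds.

```c
#include <stdio.h>
#include <string.h>

/* Both values go through the same bounded-precision formatting, so
 * representational noise below that precision vanishes. */
static int doubles_match(double a, double b) {
    char sa[32], sb[32];
    snprintf(sa, sizeof(sa), "%.10g", a);
    snprintf(sb, sizeof(sb), "%.10g", b);
    return strcmp(sa, sb) == 0;
}
```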
-
Meir Shpilraien (Spielrein) authored
## Current state
1. Lua has its own parser that handles parsing `redis.call` replies and translates them to Lua objects that can be used by the user's Lua code. The parser partially handles resp3 (missing big number, verbatim, attribute, ...).
2. Modules have their own parser that handles parsing `RM_Call` replies and translates them to RedisModuleCallReply objects. The parser does not support resp3.

In addition, in the future we want to add Redis Function (#8693), which will probably support more languages. At some point, maintaining so many parsers will stop scaling (bug fixes and protocol changes will need to be applied to all of them). We would probably end up with different parsers that support different parts of the resp protocol (like we already have today with Lua and modules).

## PR Changes
This PR attempts to unify the reply parsing of Lua and modules (and, in the future, Redis Function) by introducing a new parser unit (`resp_parser.c`). The new parser handles parsing the reply and invokes different callbacks that allow its users (other units that use the parser, i.e. Lua, modules, or Redis Function) to analyze the reply.

### Lua API Additions
The code that handled reply parsing in `scripting.c` was removed. Instead, it uses the resp_parser to parse and create a Lua object out of the reply. As mentioned above, the Lua parser did not handle parsing big numbers, verbatim strings, and attributes. The new parser can handle those, so Lua also gets them for free. They are translated to Lua objects in the following way:
1. Big Number - Lua table `{'big_number':'<str representation for big number>'}`
2. Verbatim - Lua table `{'verbatim_string':{'format':'<verbatim format>', 'string':'<verbatim string value>'}}`
3. Attribute - currently ignored and not exposed to the Lua parser; another issue will be opened to decide how to expose it.

Tests were added to check resp3 reply parsing on Lua.

### Modules API Additions
The reply parsing code in `module.c` was also removed and the new resp_parser is used instead. In addition, RedisModuleCallReply was extracted to a separate unit located in `call_reply.c` (in the future, this unit will also be used by Redis Function). A nice side effect of the unified parsing is that modules now also support resp3. Resp3 can be enabled by giving `3` as a parameter to the fmt argument of `RM_Call`. It is also possible to give `0`, which indicates auto mode, i.e. Redis will automatically choose the reply protocol based on the current client set on the RedisModuleCtx (this mode will mostly be used when the module wants to pass the reply to the client as is).

In addition, the following RedisModuleAPI were added to allow analyzing resp3 replies:
* New RedisModuleCallReply types:
  * `REDISMODULE_REPLY_MAP`
  * `REDISMODULE_REPLY_SET`
  * `REDISMODULE_REPLY_BOOL`
  * `REDISMODULE_REPLY_DOUBLE`
  * `REDISMODULE_REPLY_BIG_NUMBER`
  * `REDISMODULE_REPLY_VERBATIM_STRING`
  * `REDISMODULE_REPLY_ATTRIBUTE`
* New RedisModuleAPI:
  * `RedisModule_CallReplyDouble` - get the double value from a resp3 double reply
  * `RedisModule_CallReplyBool` - get the boolean value from a resp3 boolean reply
  * `RedisModule_CallReplyBigNumber` - get the big number value from a resp3 big number reply
  * `RedisModule_CallReplyVerbatim` - get the format and value from a resp3 verbatim reply
  * `RedisModule_CallReplySetElement` - get an element from a resp3 set reply
  * `RedisModule_CallReplyMapElement` - get a key and value from a resp3 map reply
  * `RedisModule_CallReplyAttribute` - get a reply's attribute
  * `RedisModule_CallReplyAttributeElement` - get a key and value from a resp3 attribute reply
* New context flags:
  * `REDISMODULE_CTX_FLAGS_RESP3` - indicates that the client is using resp3

Tests were added to check the new RedisModuleAPI.

### Modules API Changes
* RM_ReplyWithCallReply might return REDISMODULE_ERR if the given CallReply is in resp3 but the client expects resp2. This is not a breaking change, because in order to get a resp3 CallReply one needs to specifically pass `3` as a parameter to the fmt argument of `RM_Call` (as mentioned above).

Tests were added to check this change.

### More small Additions
* Added `debug set-disable-deny-scripts`, which allows turning the commands' no-script flag protection on and off. This is used by the Lua resp3 tests so it is possible to run `debug protocol` and check the resp3 parsing code.

Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
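A hedged sketch of the callback-driven design (all names are illustrative, not the actual resp_parser.c interface): the parser walks the wire bytes once and reports each element, so every consumer builds its own object model without re-implementing protocol parsing.

```c
#include <stddef.h>

/* One consumer (Lua, modules, ...) fills this table; the parser never
 * allocates reply objects itself. */
typedef struct ReplyParserCallbacks {
    void (*on_simple_string)(void *ctx, const char *s, size_t len);
    void (*on_error)(void *ctx, const char *s, size_t len);
    void (*on_long)(void *ctx, long long v);
    void (*on_array)(void *ctx, size_t n);
    /* resp3-only elements, now available to every consumer: */
    void (*on_double)(void *ctx, double v);
    void (*on_bool)(void *ctx, int v);
    void (*on_big_number)(void *ctx, const char *s, size_t len);
    void (*on_map)(void *ctx, size_t n);
    void (*on_set)(void *ctx, size_t n);
} ReplyParserCallbacks;

/* Drives the callbacks over a raw reply buffer (prototype only). */
int parse_reply(const char *proto, size_t len,
                const ReplyParserCallbacks *cb, void *ctx);
```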
-
sundb authored
Some background: this fixes a problem that used to be dead code until now, but became live (only in the unit tests, not in redis) when #9113 got merged. The problem it fixes doesn't actually cause any significant harm, but that PR also added a test that fails verification because of it. The test was merged with that problem due to human error; we didn't run it on the last modified version before merging. The fix in this PR existed in #8641 (closed because it was just dead code) and #4674 (still pending, but with other changes in it).

Now to the actual fix: on quicklist insertion, if the insertion offset is -1 or `-(quicklist->count)`, we can insert into the head of the next node rather than the tail of the current node. This is especially important when the current node is full, since adding anything to it will cause it to be split (or go over its fill limit setting).

The bug was that the code attempted to determine that we're adding to the tail of the current node by matching `offset == node->count`, when in fact it should have been `offset == node->count-1` (so it never entered that `if`). Also, since we take negative offsets too, we can also match `-1`. The same applies for the head, i.e. `0` and `-count`.

The bug caused the code to attempt inserting into the current node (thinking we have to insert into the middle of the node rather than the head or tail), and in case the current node was full it had to be split (something that also happens in valid cases). On top of that, since it calls _quicklistSplitNode with an edge case, it actually splits the node in a way that all the entries fall into one split and 0 into the other, and then still inserts the new entry into the first one, causing it to be populated beyond its intended fill limit.

This problem does not create any bug in redis, because the existing code does not iterate from tail to head, and the offset never has a negative value when inserting.

The other change this PR makes in the test code is just for some coverage: insertion at index 0 is tested a lot, so it's nice to test some negative offsets too.
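A sketch of the corrected boundary checks as the message describes them (the helper names are invented; the real logic is inline in quicklist.c):

```c
/* Insertion targets the tail of the node when the offset points at the
 * last element: node->count-1 positively, or -1 negatively. */
static int insert_at_tail(int offset, int node_count) {
    return offset == node_count - 1 || offset == -1;
}

/* Head is offset 0, or -(quicklist->count) when counting from the end. */
static int insert_at_head(int offset, long long quicklist_count) {
    return offset == 0 || offset == -quicklist_count;
}
```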
-