- 05 Aug, 2021 1 commit
-
-
yoav-steinberg authored
Reduce dict struct memory overhead: on 64-bit builds, dict size goes down from jemalloc's 96-byte bin to its 56-byte bin.

Summary of changes:
- Remove `privdata` from callbacks and dict creation (this affects many files; see "Interface change" below).
- Meld the `dictht` struct into the `dict` struct to eliminate struct padding (this affects just dict.c and defrag.c).
- Eliminate the `sizemask` field, which can be calculated from the size when needed.
- Convert the `size` field into `size_exp` (an exponent), using one byte instead of 8.

Interface change: pass the dict pointer to dict type callback functions instead of the removed `privdata` field. In the future, if we'd like to have private data in the callbacks, we can extract it from the dict type. We can extend dictType to include a custom dict struct allocator and use it to allocate more data at the end of the dict struct; that data can then be used to store private data later accessed by the callbacks.
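A minimal sketch of what the interface change means for a dict type callback, with an assumed key-destructor signature (not copied from dict.h):

```c
/* Callbacks now receive the dict pointer instead of the removed privdata
 * argument; any private data would be reached through the dict's type
 * (or through extra space allocated at the end of the dict struct). */
#include <stdlib.h>

typedef struct dict dict;   /* opaque for this sketch */

/* Old style: void (*keyDestructor)(void *privdata, void *key); */
static void myKeyDestructor(dict *d, void *key) {
    (void)d;   /* look up private data via d if needed */
    free(key);
}
```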
-
- 04 Aug, 2021 3 commits
-
-
Wang Yuan authored
## Background
After `fork`, a process copies pages when writing to them (CoW), while the other process keeps the old pages, so together they cost more memory. For Redis, we have suffered from high memory consumption while the fork child is serializing key/values, which may even cause OOM. However, the child process doesn't actually need to keep some of the memory that the parent may write to or update; for example, the child will never access a key-value it has already serialized, while users may still update it in the parent. So we can reduce CoW if the child process releases memory it no longer needs.

## Implementation
To release key-values in the child process, one might think of calling `decrRefCount` to free the memory, but the fork child still uses much memory even when no data is written to Redis, and the freeing costs enough time to slow down bgsave. This is likely because the memory allocator doesn't really release memory to the OS, and it may modify its internal data for each free operation, especially when freeing small objects. Moreover, CoW is based on pages, so an easy approach is to only free memory chunks that are no smaller than the kernel page size. madvise(MADV_DONTNEED) can quickly release the pages of a specified region to the OS, bypassing the memory allocator; the allocator still considers the memory in use and doesn't change its internal data (a minimal sketch follows this commit message). There are several buffers we can release in the fork child process:
- **Serialized key-values**: the fork child never accesses key-values it has already serialized, so we try to free them. Because we can only release large chunks, and iterating all items/members/fields/entries of complex data types is time-consuming, we only iterate and release them when the average size of an item/member/field/entry is larger than the OS page size.
- **Replication backlog**: the replication backlog is a circular buffer, so it changes quickly when Redis has heavy write traffic, but the fork child doesn't need to access it.
- **Client buffers**: if clients send requests while the fork child exists, the client buffers also change frequently. This memory includes the client query buffer, output buffer, and the memory used by the client struct.

To report the child process's peak private dirty memory, we need to track peak memory instead of last used memory, because the child process may keep releasing memory (until now CoW could only grow, so the last value was equivalent to the peak). We also add a new `current_cow_peak` info variable (to complement the existing `current_cow_size`). Co-authored-by:
Oran Agra <oran@redislabs.com>
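For illustration only, a minimal sketch of the page-aligned release idea described above, assuming a POSIX system with `madvise`/`MADV_DONTNEED` (this is not the actual Redis helper):

```c
/* Hint the kernel to drop a buffer's whole pages in the fork child,
 * bypassing the allocator; partial pages at either end are skipped. */
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static void release_pages_hint(void *ptr, size_t len) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    uintptr_t start = ((uintptr_t)ptr + page - 1) & ~(uintptr_t)(page - 1); /* round up */
    uintptr_t end   = ((uintptr_t)ptr + len) & ~(uintptr_t)(page - 1);      /* round down */
    if (end > start)
        madvise((void *)start, end - start, MADV_DONTNEED);
}
```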
-
Meir Shpilraien (Spielrein) authored
## Current state
1. Lua has its own parser that handles parsing `redis.call` replies and translates them to Lua objects that can be used by the user's Lua code. The parser only partially handles RESP3 (missing big number, verbatim, attribute, ...).
2. Modules have their own parser that handles parsing `RM_Call` replies and translates them to RedisModuleCallReply objects. This parser does not support RESP3.

In addition, in the future we want to add Redis Functions (#8693), which will probably support more languages. At some point maintaining so many parsers will stop scaling (bug fixes and protocol changes will need to be applied to all of them). We will probably end up with different parsers that support different parts of the RESP protocol (like we already have today with Lua and modules).

## PR Changes
This PR attempts to unify the reply parsing of Lua and modules (and, in the future, Redis Functions) by introducing a new parser unit (`resp_parser.c`). The new parser handles parsing the reply and calls different callbacks to allow its users (another unit that uses the parser, i.e. Lua, modules, or Redis Functions) to analyze the reply.

### Lua API Additions
The code that handled reply parsing in `scripting.c` was removed. Instead, it uses the resp_parser to parse the reply and create a Lua object out of it. As mentioned above, the Lua parser did not handle parsing big numbers, verbatim strings, and attributes. The new parser handles those, so Lua gets them for free. They are translated to Lua objects in the following way:
1. Big Number - Lua table `{'big_number':'<str representation for big number>'}`
2. Verbatim - Lua table `{'verbatim_string':{'format':'<verbatim format>', 'string':'<verbatim string value>'}}`
3. Attribute - currently ignored and not exposed to the Lua parser; another issue will be opened to decide how to expose it.

Tests were added to check RESP3 reply parsing on Lua.

### Modules API Additions
The reply parsing code in `module.c` was also removed and the new resp_parser is used instead. In addition, the RedisModuleCallReply code was extracted to a separate unit located in `call_reply.c` (in the future, this unit will also be used by Redis Functions). A nice side effect of the unified parsing is that modules now also support RESP3. RESP3 can be enabled by giving `3` as a parameter to the fmt argument of `RM_Call`. It is also possible to give `0`, which indicates auto mode, i.e. Redis will automatically choose the reply protocol based on the current client set on the RedisModuleCtx (this mode will mostly be used when the module wants to pass the reply to the client as is).
In addition, the following RedisModule APIs were added to allow analyzing RESP3 replies (a usage sketch follows this commit):
* New RedisModuleCallReply types:
  * `REDISMODULE_REPLY_MAP`
  * `REDISMODULE_REPLY_SET`
  * `REDISMODULE_REPLY_BOOL`
  * `REDISMODULE_REPLY_DOUBLE`
  * `REDISMODULE_REPLY_BIG_NUMBER`
  * `REDISMODULE_REPLY_VERBATIM_STRING`
  * `REDISMODULE_REPLY_ATTRIBUTE`
* New RedisModule APIs:
  * `RedisModule_CallReplyDouble` - get the double value from a RESP3 double reply
  * `RedisModule_CallReplyBool` - get the boolean value from a RESP3 boolean reply
  * `RedisModule_CallReplyBigNumber` - get the big number value from a RESP3 big number reply
  * `RedisModule_CallReplyVerbatim` - get the format and value from a RESP3 verbatim reply
  * `RedisModule_CallReplySetElement` - get an element from a RESP3 set reply
  * `RedisModule_CallReplyMapElement` - get a key and value from a RESP3 map reply
  * `RedisModule_CallReplyAttribute` - get a reply's attribute
  * `RedisModule_CallReplyAttributeElement` - get a key and value from a RESP3 attribute reply
* New context flags:
  * `REDISMODULE_CTX_FLAGS_RESP3` - indicates that the client is using RESP3

Tests were added to check the new RedisModule APIs.

### Modules API Changes
* RM_ReplyWithCallReply might return REDISMODULE_ERR if the given CallReply is in RESP3 but the client expects RESP2. This is not a breaking change, because in order to get a RESP3 CallReply one needs to specifically pass `3` as a parameter to the fmt argument of `RM_Call` (as mentioned above).

Tests were added to check this change.

### More small Additions
* Added `debug set-disable-deny-scripts`, which allows turning the commands' no-script flag protection on and off. This is used by the Lua RESP3 tests so it's possible to run `debug protocol` and check the RESP3 parsing code. Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Yossi Gottlieb <yossigo@gmail.com>
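A hedged usage sketch of the new module API (assumes a `RedisModuleCtx *ctx` inside a command handler; the accessor signatures are inferred from the list above and may differ slightly from module.c):

```c
/* Call CONFIG GET in RESP3 mode ('3' in the fmt string) and walk the
 * resulting map reply with the new accessors. */
RedisModuleCallReply *reply =
    RedisModule_Call(ctx, "CONFIG", "3cc", "GET", "maxmemory");
if (reply && RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_MAP) {
    size_t len = RedisModule_CallReplyLength(reply);
    for (size_t i = 0; i < len; i++) {
        RedisModuleCallReply *key, *val;
        RedisModule_CallReplyMapElement(reply, i, &key, &val);
        /* ... inspect key/val, e.g. via RedisModule_CallReplyStringPtr ... */
    }
}
if (reply) RedisModule_FreeCallReply(reply);
```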
-
sundb authored
Some background: this fixes a problem that used to be dead code until now, but became alive (only in the unit tests, not in redis) when #9113 got merged. The problem it fixes doesn't actually cause any significant harm, but that PR also added a test that fails verification because of it. That test was merged with the problem due to human error; we didn't run it on the last modified version before merging. The fix in this PR existed in #8641 (closed because it was just dead code) and #4674 (still pending, but has other changes in it).

Now to the actual fix: on quicklist insertion, if the insertion offset is -1 or `-(quicklist->count)`, we can insert into the head of the next node rather than the tail of the current node. This is especially important when the current node is full, since adding anything to it will cause it to be split (or go over its fill limit setting). The bug was that the code attempted to detect insertion at the tail of the current node by matching `offset == node->count`, when in fact it should have been `offset == node->count-1` (so it never entered that `if`); and also, since we accept negative offsets too, we should also match `-1`. The same applies for the head, i.e. `0` and `-count` (see the sketch below). The bug caused the code to attempt inserting into the current node (thinking it has to insert into the middle of the node rather than the head or tail), and in case the current node is full it has to be split (something that also happens in valid cases). On top of that, since it calls _quicklistSplitNode with an edge case, it actually splits the node in a way that all the entries fall into one half and none into the other, and then still inserts the new entry into the first one, populating it beyond its intended fill limit. This problem does not create any bug in redis, because the existing code does not iterate from tail to head, and the offset never has a negative value when inserting.

The other change this PR makes in the test code is just for some coverage: insertion at index 0 is tested a lot, so it's nice to test some negative offsets too.
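A minimal sketch of the corrected head/tail detection described above (helper names are assumed for illustration; this is not the exact quicklist.c code):

```c
/* An insert targets the tail of the current node when the offset points
 * at the last entry counting from either end, and the head when it points
 * at the first entry counting from either end. */
static int insert_at_tail(long offset, long node_count) {
    return offset == node_count - 1 || offset == -1;
}

static int insert_at_head(long offset, long node_count) {
    return offset == 0 || offset == -node_count;
}
```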
-
- 03 Aug, 2021 3 commits
-
-
filipe oliveira authored
Add the -x option (read the last argument from STDIN) to redis-benchmark. Other changes: to be able to reuse code from redis-cli, some helper methods were moved to cli_common.(h|c). Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Jonah H. Harris authored
Add SINTERCARD and ZINTERCARD commands that are similar to ZINTER and SINTER but only return the cardinality with minimum processing and memory overheads. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Ariel Shtul authored
Add new Module APIs for RESP3 responses (a usage sketch follows this commit):
- RM_ReplyWithMap
- RM_ReplyWithSet
- RM_ReplyWithAttribute
- RM_ReplySetMapLength
- RM_ReplySetSetLength
- RM_ReplySetAttributeLength
- RM_ReplyWithBool

Deprecate REDISMODULE_POSTPONED_ARRAY_LEN in favor of a generic REDISMODULE_POSTPONED_LEN. Improve documentation. Add tests. Co-authored-by:
Guy Benoish <guy.benoish@redislabs.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
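A hedged sketch of how the new reply APIs compose in a module command handler, using the postponed-length constant mentioned above (assumes a `RedisModuleCtx *ctx`; the map contents are arbitrary for the example):

```c
/* Reply with a map whose length isn't known up front, then backfill it. */
RedisModule_ReplyWithMap(ctx, REDISMODULE_POSTPONED_LEN);
long fields = 0;

RedisModule_ReplyWithCString(ctx, "enabled");   /* map key */
RedisModule_ReplyWithBool(ctx, 1);              /* map value */
fields++;

RedisModule_ReplySetMapLength(ctx, fields);
```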
-
- 02 Aug, 2021 6 commits
-
-
Huang Zhw authored
When redis-cli received ASK, its string matching was wrong, so it didn't handle it. When we access a slot that is in migrating state, the server may return ASK. After redirecting to the new node, we need to send an ASKING command before retrying the command. With this PR, after redis-cli receives ASK it reconnects and sends an ASKING command before sending the original command (see the sketch after this commit). Other changes:
* Make redis-cli -u and -c (unix socket and cluster mode) incompatible with one another.
* When sending a command fails, avoid the second reconnect retry and just print the error info; users will decide what to do next. See #9277.
* Add a TCL test faking two redis nodes that just send ASK and OK in the redis protocol, to test the ASK behavior. Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by:
Oran Agra <oran@redislabs.com>
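For illustration only, a hedged hiredis-based sketch of the redirect flow described above (this is not the actual redis-cli code; the error-string parsing is simplified):

```c
/* On an -ASK redirect, connect to the indicated node and send ASKING
 * before the original command is retried on that connection. */
#include <hiredis/hiredis.h>
#include <stdio.h>

static redisContext *follow_ask(const char *errstr) {
    char host[128]; int port = 0;
    /* errstr looks like: "ASK <slot> <host>:<port>" */
    if (sscanf(errstr, "ASK %*d %127[^:]:%d", host, &port) != 2) return NULL;
    redisContext *c = redisConnect(host, port);
    if (!c) return NULL;
    if (c->err) { redisFree(c); return NULL; }
    redisReply *r = redisCommand(c, "ASKING");
    if (r) freeReplyObject(r);
    /* ...the caller resends the original command on this connection... */
    return c;
}
```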
-
Binbin authored
1. In sendBulkToSlave, we used LL_VERBOSE in the past; changed to LL_WARNING (all the other places that do freeClient(slave) use LL_WARNING). 2. Changed the old-style LOG_WARNING to LL_WARNING; it was introduced in an old PR (#1690).
-
Ning Sun authored
Add NX, XX, GT, and LT flags to EXPIRE, PEXPIRE, EXPIREAT, PEXPIREAT.
- NX - only modify the TTL if no TTL is currently set
- XX - only modify the TTL if there is a TTL currently set
- GT - only increase the TTL (considering non-volatile keys as having an infinite expire time)
- LT - only decrease the TTL (considering non-volatile keys as having an infinite expire time)

The return value of the command is 0 when the operation was skipped due to one of these flags. Signed-off-by:
Ning Sun <sunng@protonmail.com>
-
menwen authored
Fixes:
- When a consumer was created as a side effect, redis didn't issue a keyspace notification, nor increment server.dirty (which affects periodic snapshots). This was a bug in XREADGROUP, XCLAIM, and XAUTOCLAIM.
- When attempting to delete a non-existent consumer, don't issue a keyspace notification and don't increment server.dirty. This was a bug in XGROUP DELCONSUMER.

Other changes (see the sketch below):
- Changed streamLookupConsumer() to only look up a consumer (never do implicit creation). Its last-seen time is updated unless the SLC_NO_REFRESH flag is specified.
- Added streamCreateConsumer() to create a new consumer. When the creation is successful, it notifies and increments server.dirty unless the SCC_NO_NOTIFY or SCC_NO_DIRTIFY flags are specified.
- Changed streamDelConsumer() to only delete a consumer.
- Added keyspace notification tests for stream events.
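A hedged sketch of how a caller uses the split described above; the signatures and types are assumed from the commit description, not copied from t_stream.c:

```c
/* Lookup no longer creates consumers implicitly; creation is explicit and
 * does the keyspace notification and server.dirty bookkeeping itself
 * (unless SCC_NO_NOTIFY / SCC_NO_DIRTIFY are passed). */
streamConsumer *consumer = streamLookupConsumer(group, name, SLC_NO_REFRESH);
if (consumer == NULL)
    consumer = streamCreateConsumer(group, name, key, dbid, 0 /* no flags */);
```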
-
cmemory authored
In _quicklistInsert, when `at_head` / `at_tail` is true but `prev` / `next` is NULL, the code was reaching the last if-else block at the bottom of the function and would have unnecessarily executed _quicklistSplitNode instead of just creating a new node. This was because the penultimate if-else was checking `node->next && full_next`, but in fact it is unnecessary to check whether `node->next` exists if we're going to create one anyway; we only care that it's not full, or doesn't exist, so the condition could have been `!node->next || full_next`. Instead, this PR makes a small refactor, negating `full_next` into a more meaningful variable `avail_next` that indicates whether the next node is available for pushing additional elements (true only if it exists and is not full).
-
SmartKeyerror authored
-
- 01 Aug, 2021 2 commits
-
-
Binbin authored
With an empty src key, we need to deal with two situations:
1. non-STORE: we should return an empty array.
2. STORE: try to delete the store key and return 0.

This applies to both GEOSEARCHSTORE (new to v6.2) and GEORADIUS STORE (which was broken since forever). This PR tries to fix #9261, i.e. both STORE variants would have behaved like the non-STORE variants when the source key was missing, returning an empty array and not deleting the destination key, instead of returning 0 and deleting the destination key.

Also add more tests for some commands:
- GEORADIUS: wrong type src key, non-existing src key, empty search, store with non-existing src key, store with empty search
- GEORADIUSBYMEMBER: wrong type src key, non-existing src key, non-existing member, store with non-existing src key
- GEOSEARCH: wrong type src key, non-existing src key, empty search, frommember with non-existing member
- GEOSEARCHSTORE: wrong type key, non-existing src key, fromlonlat with empty search, frommember with non-existing member Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Guy Korland authored
Co-authored-by:
Yossi Gottlieb <yossigo@gmail.com>
-
- 30 Jul, 2021 2 commits
-
-
Quinn Klassen authored
* Free unused capacity in the cluster send buffer. * Refactor cluster cron to include a dedicated loop for node-based cron jobs.
-
Ewg-c authored
Minor refactoring for rioConnRead and adding errno.
-
- 29 Jul, 2021 2 commits
-
-
Wen Hui authored
The issue is that when a sentinel with the same address and IP is turned on with a different runid, its port is set to 0 but it is still present in the master->sentinels dictionary, which contains all the sentinels for a master. This causes a problem with INFO SENTINEL, because it takes the size of the dictionary of sentinels. It may also cause a problem for failover if enough sentinels have their port set to 0, since the number of voters in a failover is also determined by the size of the dictionary of sentinels. This commit removes the sentinels whose port is set to zero from the dictionary of sentinels. Fixes #8786
-
Oran Agra authored
The `lru_clock` and `lru` bits in `robj` store the least significant 24 bits of the unixtime (seconds since 1/1/1970) and wrap around every 194 days. The `objectSetLRUOrLFU` function, which is used by RESTORE with the IDLETIME argument, by a replica or master loading an RDB that contains LRU, and by a module API, had a bug triggered when that wraparound happens. The scenario: the idle time that came from the user, say via the RESTORE command, is about 1000 seconds (e.g. in the `RESTORE can set LRU` test we have), and the current `lru_clock` has just wrapped around and is less than 1000 (i.e. a window of 1000 seconds once every ~6 months). The expression in that function would produce a negative value, and the code (and comment) specified that the best way to solve that is to push the idle time backwards into the past by 3 months, i.e. an idle time of 3 months instead of 1000 seconds. Instead, the right thing to do is to unwrap it and put it near LRU_CLOCK_MAX; since `lru_clock` is now smaller than `obj->lru`, it will be unwrapped again by `estimateObjectIdleTime` (a minimal sketch follows below). The bug was introduced by 052e0349, but the code before it also seemed wrong.
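A minimal sketch of the unwrap described above, with assumed variable names (this is not the exact objectSetLRUOrLFU code):

```c
/* When lru_clock has just wrapped and is smaller than the requested idle
 * time, wrap the computed LRU value back around LRU_CLOCK_MAX instead of
 * clamping it months into the past; estimateObjectIdleTime will unwrap it
 * again when it compares it against the (smaller) current clock. */
long lru_abs = lru_clock - lru_idle;        /* may go negative on wraparound */
if (lru_abs < 0)
    lru_abs += LRU_CLOCK_MAX;               /* place it near the top of the clock */
obj->lru = lru_abs;
```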
-
- 26 Jul, 2021 1 commit
-
-
Huang Zhw authored
Add two INFO metrics:
```
total_eviction_exceeded_time:69734
current_eviction_exceeded_time:10230
```
`current_eviction_exceeded_time`, when greater than 0, is how long the currently used memory has been greater than `maxmemory` (i.e. we are still over maxmemory); when used memory drops below `maxmemory`, this metric is reset to 0. `total_eviction_exceeded_time` is the total time used memory has been greater than `maxmemory` since server startup. Both metrics are in milliseconds. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 22 Jul, 2021 1 commit
-
-
Oran Agra authored
This fixes an issue in zslGetRank which happens only if the skiplist data structure has two entries added with the same element name. This can't happen in redis zsets (we use a dict), but in theory it is a bug in the underlying skiplist code. Fixes #3081 and #4032 Co-authored-by:
minjian.cai <cmjgithub@163.com>
-
- 21 Jul, 2021 1 commit
-
-
Huang Zhw authored
On 32-bit platforms, the bit position of GETBIT/SETBIT/BITFIELD/BITCOUNT/BITPOS may overflow (see CVE-2021-32761) (#9191). GETBIT and SETBIT may access the wrong address because of the wraparound. BITCOUNT and BITPOS may return wrapped results. BITFIELD may access the wrong address but also allocate insufficient memory and segfault (see CVE-2021-32761). This commit uses `uint64_t` or `long long` instead of `size_t`. Related: https://github.com/redis/redis/pull/8096

On a 32-bit platform:
> setbit bit 4294967295 1
(integer) 0
> config set proto-max-bulk-len 536870913
OK
> append bit "\xFF"
(integer) 536870913
> getbit bit 4294967296
(integer) 0

When the bit index is larger than 4294967295, size_t can't hold it. In the past, `proto-max-bulk-len` was limited to 536870912, so there was no problem. After this commit, the bit position is stored in `uint64_t` or `long long`, so when `proto-max-bulk-len > 536870912`, 32-bit platforms are still correct. For 64-bit platforms the problem still exists in theory; the major reason is that the bit position is 8 times the byte position, so when proto-max-bulk-len is very large, the bit position may overflow. But on 64-bit platforms we don't have such long strings, so this bug may never happen. Additionally, this commit adds a test that costs `512MB` of memory and is tagged as `large-memory`; the FreeBSD and Valgrind CI jobs ignore this test. A small sketch of the 32-bit wraparound follows below.
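A small C sketch of the 32-bit wraparound described above (illustrative only; a 32-bit `uint32_t` stands in for the 32-bit `size_t`):

```c
/* A bit index of 2^32 wraps to 0 when stored in a 32-bit type, so the
 * computed byte offset points at the wrong address; a 64-bit type keeps
 * the real index. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t bit32 = (uint32_t)4294967296ULL;   /* wraps to 0 */
    uint64_t bit64 = 4294967296ULL;             /* keeps the real index */
    printf("byte offset (32-bit): %u\n", bit32 >> 3);                 /* 0 */
    printf("byte offset (64-bit): %llu\n",
           (unsigned long long)(bit64 >> 3));                         /* 536870912 */
    return 0;
}
```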
-
- 20 Jul, 2021 1 commit
-
-
Oran Agra authored
- SELECT and WAIT don't read from or write to the keyspace (unlike DEL, EXISTS, EXPIRE, DBSIZE, KEYS, etc). They're more similar to AUTH and HELLO (and maybe PING and COMMAND): they only affect the current connection, not the server state, so they should be `@connection`, not `@keyspace`.
- ROLE, like LASTSAVE, is `@admin` (and `@dangerous`, like INFO).
- ASKING, READONLY, READWRITE are `@connection` too (not `@keyspace`).
- Additionally, I'm now documenting the exact meaning of each ACL category so it's clearer which commands belong where.
-
- 19 Jul, 2021 2 commits
-
-
Huang Zhw authored
Fix genModulesInfoStringRenderModulesList in module INFO, which lacked a separator when there's more than one module in the list. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Huang Zhw authored
Fix the src/modules make failure: testmodule.c was removed in #3718, but the Makefile was not updated.
-
- 18 Jul, 2021 3 commits
-
-
Binbin authored
If there are more than 2B entries in a zset, the calculated span will overflow.
-
Erik Dubbelboer authored
This doesn't have any real impact, just a cleanup.
-
Paul Kulchenko authored
-
- 17 Jul, 2021 1 commit
-
-
Binbin authored
In case the dest key already contains the member, the dest key isn't modified, so the command shouldn't invalidate WATCH.
-
- 16 Jul, 2021 1 commit
-
-
qetu3790 authored
Set TCP keepalive on inbound cluster bus connections to prevent a memory leak
-
- 15 Jul, 2021 1 commit
-
-
ZEEKLING authored
-
- 14 Jul, 2021 3 commits
-
-
Oran Agra authored
- Promote the code in DEBUG PROTOCOL to addReplyBigNum.
- DEBUG PROTOCOL ATTRIB skips the attribute when the client is RESP2.
- In networking.c, addReply for push and attribute types generates an assertion when called on a RESP2 client; anything else would produce a broken protocol that clients can't handle.
-
Yossi Gottlieb authored
-
gourav authored
-
- 13 Jul, 2021 1 commit
-
-
Binbin authored
This would have resulted in a missing newline in the help message.
-
- 11 Jul, 2021 3 commits
-
-
perryitay authored
There are two issues fixed in this commit:
1. We want to fail the EXEC command in case there is a watched key that's logically expired but not yet deleted by active expire or lazy expire.
2. We saw that currently the cached time is updated in every `call()` (including nested calls), and this time is also being used for the isKeyExpired comparison; we want to update the cached time only in the first call (execCommand). Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Huang Zhw authored
Do not install a file event to send data to the rewrite child when the parent has stopped sending diffs to the child in AOF rewrite. (#8767) In AOF rewrite, when the parent stops sending data to the child, if there is new rewrite data, the aofChildWriteDiffData write event is installed. Then this event fires and deletes the file event without doing anything. This happens over and over again until the AOF rewrite finishes. The bug used to waste a few system calls (epoll_ctl and epoll_wait) per excessive wake-up cycle, each cycle triggered by receiving a write command from a client.
-
Binbin authored
The `if` condition `nextdiff == -4 && reqlen < 4` in __ziplistInsert looks strange, but it's useful: without it there would be problems during a chained update. Until now these lines had no coverage in the tests, and there was a question of whether they are needed at all (#7170).
-
- 10 Jul, 2021 1 commit
-
-
Shogo Hayashi authored
-
- 09 Jul, 2021 1 commit
-
-
Huang Zhw authored
redis-check-aof/redis-check-rdb: related to #9176. Before this commit, redis-server would start as redis-check-aof/redis-check-rdb if the directory it was started from contained the string redis-check-aof/redis-check-rdb. Now we check the executable name instead of the directory (a minimal sketch follows below).
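A hedged sketch of the executable-name check described above (the helper name is assumed; this is not the exact main() code):

```c
/* Decide the check-tool mode from the program name in argv[0] rather
 * than from the current working directory. */
#include <libgen.h>
#include <string.h>

static int is_check_rdb_mode(char *argv0) {
    char *exec_name = basename(argv0);   /* strip any leading path */
    return strstr(exec_name, "redis-check-rdb") != NULL;
}
```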
-