- 24 Nov, 2022 5 commits
-
-
Meir Shpilraien (Spielrein) authored
### Summary of API additions

* `RedisModule_AddPostNotificationJob` - new API to call inside a key space notification (and in more locations in the future), allowing the module to add a post-notification job as described below.
* New module option, `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`, allows disabling the Redis protection against nested key-space notifications.
* `RedisModule_GetModuleOptionsAll` - gets the mask of all supported module options, so a module can check whether a given option is supported by the currently running Redis instance.

### Background

This PR is a proposal for handling write operations inside module key space notifications. After a lot of discussion we came to the conclusion that a module should not perform any write operations inside a key space notification. Some examples of issues that such write operations can cause are described in the following links:

* Bad replication order - https://github.com/redis/redis/pull/10969
* Use after free - https://github.com/redis/redis/pull/10969#issuecomment-1223771006
* Use after free - https://github.com/redis/redis/pull/9406#issuecomment-1221684054

There are probably more issues that are yet to be discovered. The underlying problem with writing inside a key space notification is that the notification runs synchronously; this means the notification code is executed in the middle of Redis logic (command logic, eviction, expire). Redis **does not assume** that the data might change while running that logic, and such changes can crash Redis or cause unexpected behaviour.

The solution is to state that modules **should not** perform any write command inside a key space notification (we can choose whether or not we want to enforce it). To still cover the use case where a module wants to perform a write operation as a reaction to key space notifications, we introduce a new API, `RedisModule_AddPostNotificationJob`, that allows registering a callback that will be called by Redis when the following conditions hold:

* It is safe to perform any write operation.
* The job will be called atomically alongside the operation that triggered it (in our case, a key space notification).

A module can use this new API to safely perform any write operation and still achieve atomicity between the notification and the write. Although the API is currently supported only for key space notifications, it is written in a generic way so that in the future we will be able to use it in other places (server events, for example).

### Technical Details

Whenever a module uses `RedisModule_AddPostNotificationJob`, the callback is added to a list of callbacks (called `modulePostExecUnitJobs`) that need to be invoked after the current execution unit ends (whether it is a command, eviction, or active expire). In order to trigger those callbacks atomically with the notification effect, we call them in `postExecutionUnitOperations` (which was `propagatePendingCommands` before this PR). The new function fires the post jobs and then calls `propagatePendingCommands`. If a callback performs more operations that trigger more key space notifications, those notifications might register more callbacks. Those callbacks are added to the end of the `modulePostExecUnitJobs` list and are invoked atomically after the current callback ends.
This raises a concern of entering infinite loops. We consider an infinite loop a logical bug that needs to be fixed in the module; an attempt to protect against infinite loops by halting the execution could violate the feature's correctness, and so **Redis will make no attempt to protect the module from infinite loops**.

In addition, key space notifications are currently not nested. Some modules might want to allow nesting of key-space notifications. To allow that and keep backward compatibility, we introduce a new module option called `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`. Setting this option disables the Redis key-space notification nesting protection and passes this responsibility to the module.

### Redis infrastructure

This PR promotes the existing `propagatePendingCommands` to an "Execution Unit" concept, which is called after each atomic unit of execution.

Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Yossi Gottlieb <yossigo@gmail.com> Co-authored-by:
Madelyn Olson <34459052+madolson@users.noreply.github.com>
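A minimal sketch of how a module might use the new API, assuming the standard module keyspace-notification callback signature; the module name, key names and helper names below are illustrative, not taken from the PR:
```c
#include "redismodule.h"

/* Post-notification job: runs after the current execution unit ends,
 * when it is safe to write, yet still atomically with the notification. */
static void expiredKeyJob(RedisModuleCtx *ctx, void *pd) {
    RedisModuleString *key = pd;
    /* Safe to perform writes here, e.g. record the expired key. */
    RedisModuleCallReply *reply =
        RedisModule_Call(ctx, "LPUSH", "cs", "expired:log", key);
    if (reply) RedisModule_FreeCallReply(reply);
}

static void freeJobData(void *pd) {
    /* Called by Redis when the job's private data is no longer needed. */
    RedisModule_FreeString(NULL, pd);
}

/* Keyspace-notification callback: must NOT write to the keyspace directly.
 * Instead, defer the write by registering a post-notification job. */
static int onKeyEvent(RedisModuleCtx *ctx, int type, const char *event,
                      RedisModuleString *key) {
    REDISMODULE_NOT_USED(type);
    REDISMODULE_NOT_USED(event);
    RedisModuleString *dup = RedisModule_CreateStringFromString(NULL, key);
    RedisModule_AddPostNotificationJob(ctx, expiredKeyJob, dup, freeJobData);
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "postjob", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_EXPIRED,
                                                 onKeyEvent);
}
```
The notification callback stays read-only; the LPUSH happens in the deferred job, which Redis runs atomically with the expiration that triggered it.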
-
filipe oliveira authored
GEOSEARCH BYBOX: Reduce wasteful computation in geohashGetDistanceIfInRectangle and geohashGetDistance (#11535). Optimize geohashGetDistanceIfInRectangle when there are many misses: it calls geohashGetDistance 3 times, and the first 2 calls only produce intermediate results. This PR focuses on optimizing those 2 intermediate calls: 1. Reduce the expensive computation in the intermediate geohashGetDistance call that uses the same longitude. 2. Avoid the expensive lon_distance calculation if the lat_distance check already fails. Co-authored-by:
Oran Agra <oran@redislabs.com>
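A simplified, self-contained illustration of the early-exit idea (not the actual Redis source; the function and parameter names below are made up for the sketch):
```c
#include <math.h>

#define EARTH_RADIUS_IN_METERS 6372797.560856
#define DEG_TO_RAD(deg) ((deg) * M_PI / 180.0)

/* Great-circle (haversine) distance between two lon/lat points, in meters. */
static double greatCircleDistance(double lon1d, double lat1d,
                                  double lon2d, double lat2d) {
    double lat1 = DEG_TO_RAD(lat1d), lon1 = DEG_TO_RAD(lon1d);
    double lat2 = DEG_TO_RAD(lat2d), lon2 = DEG_TO_RAD(lon2d);
    double u = sin((lat2 - lat1) / 2), v = sin((lon2 - lon1) / 2);
    return 2.0 * EARTH_RADIUS_IN_METERS *
           asin(sqrt(u * u + cos(lat1) * cos(lat2) * v * v));
}

/* Early-exit rectangle check: compute the cheap same-longitude (latitude)
 * distance first; only if that passes do we pay for the longitude distance
 * and the final full distance. Returns 1 if inside, 0 on an early miss. */
static int distanceIfInBoxSketch(double width_m, double height_m,
                                 double cx, double cy,  /* box center */
                                 double px, double py,  /* candidate point */
                                 double *distance) {
    /* Same longitude: the haversine longitude term vanishes. */
    double lat_distance = greatCircleDistance(cx, cy, cx, py);
    if (lat_distance > height_m / 2) return 0;  /* miss: skip lon_distance */
    double lon_distance = greatCircleDistance(cx, py, px, py);
    if (lon_distance > width_m / 2) return 0;
    *distance = greatCircleDistance(cx, cy, px, py);
    return 1;
}
```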
-
Binbin authored
In #11511 we introduced member_offset, which triggers a sanitizer warning:
```
multi.c:390:26: runtime error: member access within null pointer of type 'watchedKey' (aka 'struct watchedKey')
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior multi.c:390:26
```
We can use offsetof() from stddef.h, which is part of the standard library, to avoid this UB :) The sanitizer should not complain after this change.
1. Use offsetof instead of member_offset, which we can now delete.
2. Changed the (uint8_t*) cast to (char*). This does not matter much, but according to the standard we are only allowed to cast pointers to their own type, char* and void*. Let's try to follow the rules.
This change was suggested by tezc and the comments are also from him. Co-authored-by:
Ozan Tezcan <ozantezcan@gmail.com>
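A self-contained sketch of the offsetof()-based pointer math described above (the struct fields are simplified stand-ins, not the exact Redis definitions):
```c
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the real structures (illustration only). */
typedef struct listNode { struct listNode *prev, *next; void *value; } listNode;

typedef struct watchedKey {
    const char *key;   /* ...the real struct also tracks db, client, etc. */
    listNode node;     /* embedded node in the key's list of watching clients */
} watchedKey;

/* Recover the enclosing watchedKey from a pointer to its embedded listNode.
 * offsetof() from <stddef.h> avoids the null-pointer member access that a
 * hand-rolled member_offset macro relies on (the UB the sanitizer flagged),
 * and the arithmetic goes through char*, the pointer type the standard
 * permits for byte-wise pointer math. */
static watchedKey *watchedKeyFromNode(listNode *ln) {
    return (watchedKey *)((char *)ln - offsetof(watchedKey, node));
}

int main(void) {
    watchedKey wk = { "somekey", { NULL, NULL, NULL } };
    printf("%s\n", watchedKeyFromNode(&wk.node)->key);  /* prints "somekey" */
    return 0;
}
```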
-
sundb authored
Fix a compile warning for the SHA1Transform() method under Alpine with GCC 12. Warning:
```
In function 'SHA1Update',
    inlined from 'SHA1Final' at sha1.c:187:9:
sha1.c:144:13: error: 'SHA1Transform' reading 64 bytes from a region of size 0 [-Werror=stringop-overread]
  144 |             SHA1Transform(context->state, &data[i]);
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
sha1.c:144:13: note: referencing argument 2 of type 'const unsigned char[64]'
sha1.c: In function 'SHA1Final':
sha1.c:56:6: note: in a call to function 'SHA1Transform'
   56 | void SHA1Transform(uint32_t state[5], const unsigned char buffer[64])
      |      ^~~~~~~~~~~~~
```
This warning is a false positive, because the loop condition already guarantees that there are at least 64 bytes after position `i`:
```c
for ( ; i + 63 < len; i += 64) {
    SHA1Transform(context->state, &data[i]);
}
```
Reference: https://github.com/libevent/libevent/commit/e1d7d3e40a7fd50348d849046fbfd9bf976e643c
-
Wen Hui authored
The SENTINEL DEBUG command can be run with no arguments, in which case it displays all configurable parameters and their values. Update the command arguments in the docs (json file) to indicate that the arguments are optional.
-
- 23 Nov, 2022 1 commit
-
-
Mingyi Kang authored
In the unwatchAllKeys() function we traverse all the keys watched by the client, and for each key we need to remove the client from the list of clients watching that key. This is implemented with listSearchKey, which traverses the list of clients. If we can reach the node of the list of clients from the watchedKey in O(1) time, we no longer need to call listSearchKey. Changes in this PR: put the node of the db's list of clients for each watched key inside the watchedKey structure. This way, for every key watched by the client, we can get the watchedKey structure and then reach the node in the list of clients in db->watched_keys to remove it from that list. From the perspective of the list of clients watching the key, the list node is inside a watchedKey structure, so we can get to the watchedKey struct from the listnode by struct-member offset math. And since node->value is not otherwise used, we can point node->value at the list itself, so that we don't need to fetch the list of clients from the dict.
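A compact sketch of the idea, assuming simplified stand-in structures (the real Redis adlist and watchedKey definitions differ):
```c
/* Minimal doubly-linked list, standing in for Redis' adlist (sketch only). */
typedef struct listNode { struct listNode *prev, *next; void *value; } listNode;
typedef struct list { listNode *head, *tail; unsigned long len; } list;

static void listUnlinkNode(list *l, listNode *n) {
    if (n->prev) n->prev->next = n->next; else l->head = n->next;
    if (n->next) n->next->prev = n->prev; else l->tail = n->prev;
    n->prev = n->next = NULL;
    l->len--;
}

/* The watched-key record owns its node in the per-key list of watching
 * clients, so unwatching is a direct O(1) unlink instead of an O(N)
 * listSearchKey()-style scan for the client pointer. */
typedef struct watchedKey {
    const char *key;
    listNode node;   /* node in the list of clients watching this key */
} watchedKey;

static void unwatchOne(watchedKey *wk) {
    /* node->value was pointed at the owning list when the key was watched,
     * so no dict lookup is needed to find which list to unlink from. */
    list *clients = wk->node.value;
    listUnlinkNode(clients, &wk->node);
}
```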
-
- 22 Nov, 2022 4 commits
-
-
Itamar Haber authored
Technically, these commands were deprecated as of 2.6.12, with the introduction of the respective arguments to SET. In reality, the deprecation note will only be added in 7.2.0.
-
Binbin authored
This PR adds `assert_refcount_morethan`, and modifies `assert_refcount` to skip the `OBJECT REFCOUNT` check when the `needs:debug` flag is set. Use them for all `OBJECT REFCOUNT` calls, and also update tests/README to be more specific. The reasoning is that some of these tests could be testing something important, and along the way also check the refcount; it would be a shame to skip the whole test just because the refcount functionality is missing or blocked. This is much like the fact that some Redis variants may not support DEBUG, yet we still want to run the majority of the test for coverage and just skip the digest match.
-
Wen Hui authored
Currently, according to the comments, if the truncated file is not the last file, it is considered a fatal error: the return code is updated to AOF_FAILED and the server exits without any error message. Similar to other error situations, this PR adds an explicit error message for this case, so that it is clear what happened.
-
Binbin authored
This payload produces a set with duplicate elements (listpack encoding):
```
restore _key 0 "\x14\x25\x25\x00\x00\x00\x0A\x00\x06\x01\x82\x5F\x35\x03\x04\x01\x82\x5F\x31\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x39\x03\x82\x5F\x33\x03\x08\x01\x02\x01\xFF\x0B\x00\x31\xBE\x7D\x41\x01\x03\x5B\xEC"

smembers key
1) "6"
2) "_5"
3) "4"
4) "_1"
5) "_3"   ---> dup
6) "0"
7) "_9"
8) "_3"   ---> dup
9) "8"
10) "2"
```
Sets like this cause SDIFF to hang: SDIFF generated a broken protocol reply and left the client hanging (ten elements were announced, but only nine were sent due to the duplication). If we set `sanitize-dump-payload` to yes, we will be able to find the duplicate elements and report "ERR Bad data format". Discovered and discussed in #11290.

This PR also improves the output when the corrupt-dump-fuzzer hangs: it will print the commands and the payload, for example:
```
Testing integration/corrupt-dump-fuzzer
[TIMEOUT]: clients state report follows.
sock6 => (SPAWNED SERVER) pid:28884
Killing still running Redis server 28884
commands caused test to hang:
SDIFF __key
payload that caused test to hang: "\x14\balabala"
```
Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 21 Nov, 2022 1 commit
-
-
Binbin authored
Apparently we used to set `loglevel debug` for TLS in spawn_instance, i.e. cluster and sentinel tests used to run with debug logging, but only when TLS mode was enabled. This was probably a leftover from when the TLS mode tests were created. It caused a new test created for #11214 to fail in TLS mode. At the same time, in order to better distinguish the tests, rename `test-centos7-tls` to `test-centos7-tls-module` and `test-centos7-tls-no-tls` to `test-centos7-tls-module-no-tls`. Note that in `test-centos7-tls-module` we did not pass `--tls-module` in the sentinel test because it is not supported, see 4faddf18, added in #9320. So only `test-ubuntu-tls` fails in the daily CI. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 20 Nov, 2022 2 commits
-
-
Binbin authored
The following example will create an empty set (listpack encoding):
```
> RESTORE key 0 "\x14\x25\x25\x00\x00\x00\x00\x00\x02\x01\x82\x5F\x37\x03\x06\x01\x82\x5F\x35\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x31\x03\x82\x5F\x39\x03\x04\xA9\x08\x01\xFF\x0B\x00\xA3\x26\x49\xB4\x86\xB0\x0F\x41"
OK
> SCARD key
(integer) 0
> SRANDMEMBER key
Error: Server closed the connection
```
In the spirit of #9297, skip empty sets when loading RDB_TYPE_SET_LISTPACK. Introduced in #11290.
-
Wen Hui authored
Until now Sentinel allowed modifying the log level in the config file, but not at runtime. This makes it possible to tune the log level at runtime.
-
- 17 Nov, 2022 1 commit
-
-
Ping Xie authored
Introduce Shard IDs to logically group nodes in cluster mode.
1. Added a new "shard_id" field to "cluster nodes" output and nodes.conf after "hostname"
2. Added a new PING extension to propagate "shard_id"
3. Handled upgrade from pre-7.2 releases automatically
4. Refactored PING extension assembling/parsing logic
Behavior of Shard IDs: Replicas will always follow the shards of their reported primaries. If a primary updates its shard ID, the replica will follow. (This need not follow for cluster v2.) This is not an expected use case.
-
- 16 Nov, 2022 2 commits
-
-
sundb authored
Improve memory efficiency of list keys

## Description of the feature
The new listpack encoding uses the old `list-max-listpack-size` config to perform the conversion; we can think of it as a node inside a quicklist, but without the 80 bytes of overhead (internal fragmentation included) of the quicklist and quicklistNode structs. For example, a list key with 5 items of 10 chars each now takes 128 bytes instead of the 208 it used to take.

## Conversion rules
* Convert listpack to quicklist: when the listpack length or size reaches the `list-max-listpack-size` limit, it is converted to a quicklist.
* Convert quicklist to listpack: when a quicklist has only one node and its length or size is reduced to half of the `list-max-listpack-size` limit, it is converted to a listpack. This is done to avoid frequent conversions when we add or remove elements right at the bounding size or length.

## Interface changes
1. Add a list entry param to listTypeSetIteratorDirection. When the list encoding is listpack, `listTypeIterator->lpi` points to the next entry of the current entry, so when changing the direction we need to use the current node (listTypeEntry->p) to update `listTypeIterator->lpi` to the next node in the reverse direction.

## Benchmark
### Listpack VS Quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement
### Both are quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement

From the benchmark results we can see:
1. When the list is quicklist encoded, LRANGE improves performance by <5%.
2. When the list is listpack encoded, LRANGE improves performance by ~13%; the main enhancement comes from `addListListpackRangeReply()`.

## Memory usage
1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each show memory usage down by 35.49%, from 214MB to 138MB.

## Note
1. Add a conversion callback to support doing some work before conversion. Since the quicklist iterator decompresses the current node when it is released, we can no longer decompress the quicklist after we convert the list.
-
Madelyn Olson authored
Both functions and eval are marked as "no-monitor", since we want to explicitly feed in the script command before the commands generated by the script. Note that we want this behavior generally, so that commands can redact arguments before being added to the monitor.
-
- 15 Nov, 2022 1 commit
-
-
Binbin authored
The test introduced in #11482 fails on ARM (extra CI):
```
*** [err]: RESP2: RM_ReplyWithDouble: NaN in tests/unit/moduleapi/reply.tcl
Expected '-nan' to be equal to 'nan' (context: type eval line 3 cmd {assert_equal "-nan" [r rw.double 0 0]} proc ::test)
*** [err]: RESP3: RM_ReplyWithDouble: NaN in tests/unit/moduleapi/reply.tcl
Expected ',-nan' to be equal to ',nan' (context: type eval line 8 cmd {assert_equal ",-nan" [r rw.double 0 0]} proc ::test)
```
It looks like there is no negative nan on ARM.
-
- 14 Nov, 2022 2 commits
-
-
uriyage authored
In moduleFireServerEvent we change the real client's DB to 0 in freeClient when the event is REDISMODULE_EVENT_CLIENT_CHANGE. This results in a crash if the client is blocked on a key in a DB other than DB 0. The DB change is not necessary even for a module client, as we set its DB to 0 in either createClient or moduleReleaseTempClient. Co-authored-by:
Madelyn Olson <34459052+madolson@users.noreply.github.com> Co-authored-by:
Binbin <binloveplay1314@qq.com>
-
Binbin authored
The test introduced in #11482 fails on macOS:
```
*** [err]: RESP3: RM_ReplyWithDouble: inf in tests/unit/moduleapi/reply.tcl
Expected 'Inf' to be equal to 'inf' (context: type eval line 6 cmd {assert_equal Inf [r rw.double inf]} proc ::test)
```
It looks like the macOS platform returns inf instead of Inf in this case; this PR uses readraw to verify the protocol.
-
- 13 Nov, 2022 1 commit
-
-
Oran Agra authored
Adding a test to cover the already existing behavior of NAN replies, to accompany the PR that adds them to the RESP3 spec: https://github.com/redis/redis-specifications/pull/10 This PR also covers Inf replies that are already in the spec, as well as RESP2 coverage.
-
- 12 Nov, 2022 1 commit
-
-
Oran Agra authored
Fix a few issues with the recent #11463:
* use exitFromChild instead of exit
* the test should ignore defunct processes, since that's what we expect to happen to these child processes when the parent dies
* fix a typo
Co-authored-by:
Binbin <binloveplay1314@qq.com>
-
- 10 Nov, 2022 1 commit
-
-
Binbin authored
Add missing lpFree, introduced in #11290
-
- 09 Nov, 2022 3 commits
-
-
Viktor Söderqvist authored
Small sets that contain non-integer elements are now listpack encoded, by default up to 128 elements with max 64 bytes per element, controlled by the new configs `set-max-listpack-entries` and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable. Sets with only integers, even very small ones, are still intset encoded (up to the 1G limit, etc.). Larger sets are hashtable encoded. This PR increments the RDB version and has an effect on OBJECT ENCODING. Possible conversions when elements are added: intset -> listpack, listpack -> hashtable, intset -> hashtable. Note: No conversion happens when elements are deleted. If all elements are deleted and then added again, the set is deleted and recreated, thus implicitly converted to a smaller encoding.
-
Viktor Söderqvist authored
Clients should not use this command. Instead, clients should simply close the connection when it is no longer needed. Terminating a connection on the client side is preferable, as it eliminates `TIME_WAIT` lingering sockets on the server side.
-
Oran Agra authored
During a diskless sync, if the master's main process crashes, the child would have hung in `write`. This fix closes the read fd on the child side, so that if the parent crashes, the child gets a write error and exits. This change also fixes disk-based replication, BGSAVE and AOFRW; in those cases the child wouldn't have hung, it would have just kept running until done, which may be pointless. There is a certain degree of risk here: in case there's a BGSAVE child that could maybe succeed and the parent dies for some reason, the old code would have let the child keep running and maybe succeed and avoid data loss. On the other hand, if the parent is restarted, it would have loaded an old rdb file (or none), and then the child could reach the end and rename the rdb file (data conflicting with what the parent has), or also race with another BGSAVE child that the new parent started. Note that I removed a comment saying a write error will be ignored in the child and handled by the parent (this comment was very old and I don't think it is relevant anymore).
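A minimal, self-contained illustration of the pattern (not the Redis replication code): the child keeps only the write end of the pipe, so once the parent, the sole reader, is gone, write() fails with EPIPE instead of blocking forever.
```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int pipefd[2];
    if (pipe(pipefd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                    /* child: the "rdb writer" */
        signal(SIGPIPE, SIG_IGN);      /* get EPIPE instead of being killed */
        close(pipefd[0]);              /* crucial: drop the read end */
        char buf[4096] = {0};
        for (;;) {
            if (write(pipefd[1], buf, sizeof(buf)) == -1) {
                perror("write");       /* parent is gone: EPIPE, bail out */
                _exit(1);
            }
        }
    }
    /* parent: the reader; here it simply exits without draining the pipe */
    close(pipefd[1]);
    sleep(1);
    return 0;                          /* child now gets EPIPE on its next write */
}
```
Had the child kept its copy of the read end open, the pipe would still have a reader after the parent died and the child's write() would block indefinitely, which is the hang described above.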
-
- 08 Nov, 2022 1 commit
-
-
Ozan Tezcan authored
Add needs:save tag for the test introduced by #11376
-
- 07 Nov, 2022 1 commit
-
-
Oran Agra authored
Till now Redis attempted to avoid using jemalloc on ARM, but didn't do that properly (missing armv8l and aarch64), so in fact we did use jemalloc on these platforms without a problem.

Side notes: Some ARM platforms, which share an instruction set and can share binaries (docker images), may have different page sizes, and apparently jemalloc uses the page size of the build machine as the maximum page size to be supported by the build, see https://github.com/redis-stack/redis-stack/issues/187. To work around that, when building for ARM one can change the maximum page size to 64k (or greater, if present on the build machine). In recent versions of jemalloc this should not have any severe side effects (like VM map fragmentation), see:
https://github.com/jemalloc/jemalloc/issues/467
https://github.com/redis/redis/pull/11170#issuecomment-1236265230
To do that, one can use:
```
JEMALLOC_CONFIGURE_OPTS="--with-lg-page=16" make
```
Besides that, this PR fixes a messy makefile condition that was created here: f30b18f4
-
- 06 Nov, 2022 2 commits
-
-
yancz2000 authored
-
Hanif Ariffin authored
Since they're created with makeObjectShared, incrRefCount on them is a no-op. Signed-off-by:
Hanif Bin Ariffin <hanif.ariffin.4326@gmail.com>
-
- 04 Nov, 2022 2 commits
-
-
Binbin authored
Our FreeBSD daily has been failing recently:
```
Config file: freebsd-13.1.conf
cd: /Users/runner/work/redis/redis: No such file or directory
gmake: *** No targets specified and no makefile found.  Stop.
```
Upgrading vmactions/freebsd-vm to the latest version (0.3.0) makes it work. I've tested it; I don't know why it helps, but let's fix it first.
-
Binbin authored
Introduce socket `shutdown()` into the connection type, and use it on normal sockets if a fork is active. This allows us to close client connections when there are child processes sharing the file descriptors. Fixes #10077. The reason is that since the `fork()` child is holding the file descriptors, the `close` in `unlinkClient -> connClose` isn't sufficient: the client will not realize that the connection is disconnected until the child process ends. Let's be conservative and only use shutdown when a fork is active.
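A small sketch of the rationale (illustrative, not the new connection-type code; the helper name is made up): `close()` only drops this process' reference to the socket, while `shutdown()` tears down the connection itself even if a forked child still holds a copy of the descriptor.
```c
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical helper: force-disconnect a client socket. If an RDB/AOF fork
 * is running, the child shares the descriptor, so close() alone would leave
 * the TCP connection alive until the child exits; shutdown() does not. */
static void closeClientSocket(int fd, int fork_child_active) {
    if (fork_child_active)
        shutdown(fd, SHUT_RDWR);  /* terminate the connection right away */
    close(fd);                    /* then release our reference to the fd */
}
```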
-
- 03 Nov, 2022 2 commits
-
-
Madelyn Olson authored
Retain ACL categories used to generate ACL for displaying them later
-
Binbin authored
Today we don't place any specific restrictions on module command names. This can cause ambiguous scenarios; for example, someone might name a command "module|feature", which would be incorrectly parsed by the ACL system as a subcommand. In this PR we block some characters that we know can mess things up, specifically ones that can appear OK at first but cause problems in some cases (we'd rather surface the issue right away). These are the characters:
* ` ` (space) - issues with the old inline protocol.
* `\r`, `\n` (newlines) - can mess up the protocol on ACL error replies.
* `|` - subcommands.
* `@` - ACL categories.
* `=`, `,` - INFO and CLIENT LIST fields.
Note that we decided to leave `:` out, as it's handled by `getSafeInfoString` and is more likely to have already been used by existing modules.
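A tiny sketch of the kind of validation described above (the helper name is made up; the blocked character set is the one listed in the message):
```c
#include <string.h>

/* Reject module command names containing characters that confuse the inline
 * protocol (space), ACL error replies (\r, \n), subcommand parsing (|),
 * ACL categories (@), or INFO / CLIENT LIST field formatting (=, ,). */
static int moduleCommandNameIsValid(const char *name) {
    return strpbrk(name, " \r\n|@=,") == NULL;
}
```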
-
- 02 Nov, 2022 4 commits
-
-
Wen Hui authored
Resolve an edge case where the ID of a stream is updated retroactively to an ID lower than the already set max_deleted_entry_id.

Currently, if we run a command such as:
**xsetid mystream 1-1 MAXDELETEDID 1-2**
then we get the following error:
**(error) ERR The ID specified in XSETID is smaller than the provided max_deleted_entry_id**
because the provided MAXDELETEDID 1-2 is greater than the input last-id 1-1.

A similar situation can arise:
Step 1: we add three items to mystream:
**127.0.0.1:6381> xadd mystream 1-1 a 1
"1-1"
127.0.0.1:6381> xadd mystream 1-2 b 2
"1-2"
127.0.0.1:6381> xadd mystream 1-3 c 3
"1-3"**
Step 2: we check the mystream information:
**127.0.0.1:6381> xinfo stream mystream
1) "length"
2) (integer) 3
7) "last-generated-id"
8) "1-3"
9) "max-deleted-entry-id"
10) "0-0"**
Step 3: we delete items 1-2 and 1-3:
**127.0.0.1:6381> xdel mystream 1-2
(integer) 1
127.0.0.1:6381> xdel mystream 1-3
(integer) 1**
Step 4: we check the mystream information again:
127.0.0.1:6381> xinfo stream mystream
1) "length"
2) (integer) 1
7) "last-generated-id"
8) "1-3"
9) "max-deleted-entry-id"
10) "1-3"

Notice that **max-deleted-entry-id was updated to 1-3**. So at this point, running
**xsetid mystream 1-2**
has the same effect as
**xsetid mystream 1-2 MAXDELETEDID 1-3**
and we should therefore return an error to the client: **(error) ERR The ID specified in XSETID is smaller than current max_deleted_entry_id**
-
Wen Hui authored
According to the source code, the commands can be executed with only the key name, without any GET/SET/INCR operation arguments. Change the docs to reflect that by marking these arguments as optional. Also add tests.
-
Binbin authored
* Print IP and port in the cluster bus message sanity check. Add a print statement to indicate which IP/port is sending the error messages. That way we can at least check whether it is a node in the cluster or some other nefarious node. This was proposed in #11339. Unrelated change: the return check for connAddrPeerName should be against -1 instead of C_ERR, although the value of C_ERR is also -1. Co-authored-by:
Madelyn Olson <34459052+madolson@users.noreply.github.com>
-
Brennan authored
Re-design cluster link send queue to improve memory management
-
- 01 Nov, 2022 1 commit
-
-
Wen Hui authored
This PR adds a test case for PR #11417, calling GEOHASH and GEOPOS with only a key as argument.
-
- 29 Oct, 2022 1 commit
-
-
xbasel authored
All current usages of the function are correct, but that might change in the future.
-
- 27 Oct, 2022 1 commit
-
-
Moti Cohen authored
Renamed from "Pause Clients" to "Pause Actions" since the mechanism can pause several actions in redis, not just clients (e.g. eviction, expiration). Previously each pause purpose (which has a timeout that's tracked separately from others purposes), also implicitly dictated what it pauses (reads, writes, eviction, etc). Now it is explicit, and the actions that are paused (bit flags) are defined separately from the purpose. - Previously, when using feature pause-client it also implicitly means to make the server static: - Pause replica traffic - Pauses eviction processing - Pauses expire processing Making the server static is used also for failover and shutdown. This PR internally rebrand pause-client API to become pause-action API. It also Simplifies pauseClients structure by replacing pointers array with static array. The context of this PR is to add another trigger to pause-client which will activated in case of OOM as throttling mechanism ([see here](https://github.com/redis/redis/issues/10907)). In this case we want only to pause client, and eviction actions.
-