- 20 Jun, 2023 10 commits
-
-
guybe7 authored
Introduced by https://github.com/redis/redis/pull/11923 (Redis 7.2 RC2). It's very weird and counterintuitive that `RM_ReplyWithError` requires the error-code **without** a hyphen while `RM_ReplyWithErrorFormat` requires either the error-code **with** a hyphen or no error-code at all:
```
RedisModule_ReplyWithError(ctx, "BLA bla bla");
```
vs.
```
RedisModule_ReplyWithErrorFormat(ctx, "-BLA %s", "bla bla");
```
This commit aligns RM_ReplyWithErrorFormat to behave like RM_ReplyWithError. It's a breaking change, but it's done before 7.2 goes GA.
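For illustration, a minimal module sketch of the post-change behavior described above (the module and command names here are hypothetical, not from the commit): both reply functions now take the error code without a leading hyphen.
```
#include "redismodule.h"

/* Illustrative only: after this change, neither call uses a leading '-'. */
int BlaErr_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    if (argc == 1)
        /* Fixed error message, error code without a hyphen. */
        return RedisModule_ReplyWithError(ctx, "BLA bla bla");
    /* Formatted error message, now also without a hyphen. */
    return RedisModule_ReplyWithErrorFormat(ctx, "BLA %s", "bla bla");
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "blaerr", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    if (RedisModule_CreateCommand(ctx, "blaerr.fail", BlaErr_RedisCommand,
                                  "readonly", 0, 0, 0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```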
-
Oran Agra authored
When a connection that's subscribed to a channel emits PUBLISH inside MULTI-EXEC, the push notification messes up the EXEC response. e.g. MULTI, PING, PUBLISH foo bar, PING, EXEC: the EXEC's response will contain PONG, {message foo bar}, 1, and the second PONG will be delivered outside the EXEC's response. Additionally, this PR changes the order of responses in case of a plain PUBLISH (when the current client is also subscribed to it), by delivering the push after the command's response instead of before it. This also affects modules calling RM_PublishMessage in a similar way, so that we don't run the risk of getting that push mixed together with the module command's response.
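A hedged sketch of the module angle mentioned above (module and command names are hypothetical): a command that publishes via RedisModule_PublishMessage and then replies. After this change, a calling client that is itself subscribed to the channel receives the push message after this command's own reply, not before it.
```
#include "redismodule.h"

/* Illustrative only: "pubdemo.publish <channel> <message>" publishes a message
 * and replies with the number of receivers. With this change, the push
 * notification for the calling client is delivered after the command's reply
 * (and not mixed into an enclosing MULTI/EXEC reply). */
int PubDemo_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 3) return RedisModule_WrongArity(ctx);
    int receivers = RedisModule_PublishMessage(ctx, argv[1], argv[2]);
    return RedisModule_ReplyWithLongLong(ctx, receivers);
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "pubdemo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    if (RedisModule_CreateCommand(ctx, "pubdemo.publish", PubDemo_RedisCommand,
                                  "pubsub", 0, 0, 0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```
-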
judeng authored
use embedded string object and more efficient ll2string for long long value convert to string (#12250) A value of type long long is always less than 21 bytes when converted to a string, so it always meets the conditions for using an embedded string object, which always gives a memory reduction and a performance gain (fewer calls to the heap allocator). Additionally, for the conversion of the long long type to sds, we also use a faster algorithm (the one in util.c instead of the one that used to be in sds.c). For the DECR command on 32-bit Redis, we get about a 5.7% performance improvement. There will also be some performance gains for commands that heavily use sdscatfmt to convert numbers, such as INFO. Co-authored-by: Oran Agra <oran@redislabs.com>
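A small standalone C sketch of the size argument (illustrative only, not the Redis internals): the decimal form of any long long, sign and terminating NUL included, needs at most 21 bytes, which is comfortably under the 44-byte limit Redis uses for embedded (embstr) string objects.
```
#include <stdio.h>
#include <stddef.h>
#include <limits.h>

/* The longest decimal form of a 64-bit integer is "-9223372036854775808":
 * 20 characters plus a NUL terminator = 21 bytes. That is always below the
 * 44-byte embstr encoding limit, so a long long converted to a string can
 * always be stored as an embedded string object. */
int main(void) {
    char buf[21];
    long long values[] = {LLONG_MIN, -1, 0, 42, LLONG_MAX};
    for (size_t i = 0; i < sizeof(values) / sizeof(values[0]); i++) {
        int len = snprintf(buf, sizeof(buf), "%lld", values[i]);
        printf("%lld -> \"%s\" (%d chars)\n", values[i], buf, len);
    }
    return 0;
}
```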
-
Binbin authored
Now we will check the offset in zrangeGenericCommand. With a negative offset, we will throw an error and return. This also resolves the issue of zeroing the destination key in case of the "store" variant when we input a negative offset.
```
127.0.0.1:6379> set key value
OK
127.0.0.1:6379> zrangestore key myzset 0 10 byscore limit -1 10
(integer) 0
127.0.0.1:6379> exists key
(integer) 0
```
This change affects the following commands:
- ZRANGE / ZRANGESTORE / ZRANGEBYLEX / ZRANGEBYSCORE
- ZREVRANGE / ZREVRANGEBYSCORE / ZREVRANGEBYLEX
-
Wen Hui authored
For geosearch and georadius we already have test coverage for the wrong type, but we don't have it for the geodist, geohash, and geopos commands. So adding the wrong type test cases for the geodist, geohash, and geopos commands. In the existing code, we have the verify_geo_edge_response_bymember function for wrong type test cases, which takes member as an option. But that function is being called in other test cases where the output is not in line with these commands (geodist, geohash, geopos). So I could not include these commands (geodist, geohash, geopos) as part of the existing function, hence implemented a new function verify_geo_edge_response_generic and called it from the test case.
-
Binbin authored
The parameter name is WITHSCORE instead of WITHSCORES.
-
mstmdev authored
-
Wen Hui authored
Sanitizer reported memory leak for the '--invalid' option or when the port number is missing for redis-server. (#12322) Observed that the sanitizer reported a memory leak, as cleanup is not done before process termination in the following negative cases:

**- when we pass '--invalid' as an option to redis-server.**
```
-vm:~/mem-leak-issue/redis$ ./src/redis-server --invalid

*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 2
>>> 'invalid'
Bad directive or wrong number of arguments

=================================================================
==865778==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 8 byte(s) in 1 object(s) allocated from:
    #0 0x7f0985f65867 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
    #1 0x558ec86686ec in ztrymalloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:117
    #2 0x558ec86686ec in ztrymalloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:135
    #3 0x558ec86686ec in ztryrealloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:276
    #4 0x558ec86686ec in zrealloc /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:327
    #5 0x558ec865dd7e in sdssplitargs /home/ubuntu/mem-leak-issue/redis/src/sds.c:1172
    #6 0x558ec87a1be7 in loadServerConfigFromString /home/ubuntu/mem-leak-issue/redis/src/config.c:472
    #7 0x558ec87a13b3 in loadServerConfig /home/ubuntu/mem-leak-issue/redis/src/config.c:718
    #8 0x558ec85e6f15 in main /home/ubuntu/mem-leak-issue/redis/src/server.c:7258
    #9 0x7f09856e5d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

SUMMARY: AddressSanitizer: 8 byte(s) leaked in 1 allocation(s).
```

**- when we pass '--port' as an option and forget to add the port number for redis-server.**
```
vm:~/mem-leak-issue/redis$ ./src/redis-server --port

*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 2
>>> 'port'
wrong number of arguments

=================================================================
==865846==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 8 byte(s) in 1 object(s) allocated from:
    #0 0x7fdcdbb1f867 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
    #1 0x557e8b04f6ec in ztrymalloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:117
    #2 0x557e8b04f6ec in ztrymalloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:135
    #3 0x557e8b04f6ec in ztryrealloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:276
    #4 0x557e8b04f6ec in zrealloc /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:327
    #5 0x557e8b044d7e in sdssplitargs /home/ubuntu/mem-leak-issue/redis/src/sds.c:1172
    #6 0x557e8b188be7 in loadServerConfigFromString /home/ubuntu/mem-leak-issue/redis/src/config.c:472
    #7 0x557e8b1883b3 in loadServerConfig /home/ubuntu/mem-leak-issue/redis/src/config.c:718
    #8 0x557e8afcdf15 in main /home/ubuntu/mem-leak-issue/redis/src/server.c:7258
    #9 0x7fdcdb29fd8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

Indirect leak of 10 byte(s) in 1 object(s) allocated from:
    #0 0x7fdcdbb1fc18 in __interceptor_realloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:164
    #1 0x557e8b04f9aa in ztryrealloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:287
    #2 0x557e8b04f9aa in ztryrealloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:317
    #3 0x557e8b04f9aa in zrealloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:342
    #4 0x557e8b033f90 in _sdsMakeRoomFor /home/ubuntu/mem-leak-issue/redis/src/sds.c:271
    #5 0x557e8b033f90 in sdsMakeRoomFor /home/ubuntu/mem-leak-issue/redis/src/sds.c:295
    #6 0x557e8b033f90 in sdscatlen /home/ubuntu/mem-leak-issue/redis/src/sds.c:486
    #7 0x557e8b044e1f in sdssplitargs /home/ubuntu/mem-leak-issue/redis/src/sds.c:1165
    #8 0x557e8b188be7 in loadServerConfigFromString /home/ubuntu/mem-leak-issue/redis/src/config.c:472
    #9 0x557e8b1883b3 in loadServerConfig /home/ubuntu/mem-leak-issue/redis/src/config.c:718
    #10 0x557e8afcdf15 in main /home/ubuntu/mem-leak-issue/redis/src/server.c:7258
    #11 0x7fdcdb29fd8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

SUMMARY: AddressSanitizer: 18 byte(s) leaked in 2 allocation(s).
```
As part of the analysis, found that sdsfreesplitres is not called when these condition checks are hit.

Output after the fix:
```
vm:~/mem-leak-issue/redis$ ./src/redis-server --invalid

*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 2
>>> 'invalid'
Bad directive or wrong number of arguments
vm:~/mem-leak-issue/redis$

===========================================
vm:~/mem-leak-issue/redis$ ./src/redis-server --jdhg

*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 2
>>> 'jdhg'
Bad directive or wrong number of arguments

---------------------------------------------------------------------------
vm:~/mem-leak-issue/redis$ ./src/redis-server --port

*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 2
>>> 'port'
wrong number of arguments
```
Co-authored-by: Oran Agra <oran@redislabs.com>
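A minimal sketch of the cleanup pattern the fix refers to (assuming the sds string library bundled with the Redis source tree; the error handling is simplified): the token array returned by sdssplitargs must be released with sdsfreesplitres on every exit path, including the error ones.
```
#include <stdio.h>
#include "sds.h"   /* sds string library from the Redis source tree */

/* Illustrative only: parse one config-style line into arguments and free them
 * on every path, which is the pattern the fix applies to the error branches. */
static int parse_config_line(const char *line) {
    int argc;
    sds *argv = sdssplitargs(line, &argc);
    if (argv == NULL) return -1;          /* unbalanced quotes, etc. */

    if (argc < 2) {
        fprintf(stderr, "wrong number of arguments\n");
        sdsfreesplitres(argv, argc);      /* the cleanup that was missing */
        return -1;
    }

    printf("directive: %s (%d args)\n", argv[0], argc - 1);
    sdsfreesplitres(argv, argc);
    return 0;
}

int main(void) {
    parse_config_line("port 6379");
    parse_config_line("port");            /* triggers the error path */
    return 0;
}
```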
-
Shaya Potter authored
Adds API - RedisModule_CommandFilterGetClientId(). Includes an addition to the commandfilter test module to validate that it works by performing the same command from 2 different clients.
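A hedged sketch of how the new API might be used from a command filter (the module name is hypothetical): the filter fetches the id of the client issuing each command.
```
#include "redismodule.h"

/* Illustrative only: a command filter that reads the id of the client that
 * issued the command being filtered, via the new API. */
static void FilterCallback(RedisModuleCommandFilterCtx *filter) {
    unsigned long long client_id = RedisModule_CommandFilterGetClientId(filter);
    (void) client_id; /* e.g. log it or use it to look up per-client state */
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "filterdemo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* REDISMODULE_CMDFILTER_NOSELF skips commands the module itself issues
     * through RedisModule_Call(). */
    if (RedisModule_RegisterCommandFilter(ctx, FilterCallback,
                                          REDISMODULE_CMDFILTER_NOSELF) == NULL)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```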
-
Binbin authored
auxHumanNodenameGetter was limited to %.40s; since we did not limit the length of the cluster-announce-human-nodename config, %.40s will cause nodename data loss (we persist it in nodes.conf). Additionally, modified auxHumanNodenamePresent to use sdslen.
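A tiny standalone illustration (not the cluster code itself; the nodename below is made up) of why a fixed %.40s precision loses data once the configured name exceeds 40 characters, which is what motivated switching to sdslen-based handling.
```
#include <stdio.h>
#include <string.h>

int main(void) {
    /* A 49-character human nodename; anything beyond 40 characters is
     * silently dropped by the %.40s precision, so the persisted value
     * would be truncated. */
    const char *nodename = "pod-payments-prod-eu-west-1a-replica-000000000042";
    char out[128];

    snprintf(out, sizeof(out), "nodename=%.40s", nodename);
    printf("%s\n", out);   /* truncated to 40 characters */
    printf("original length: %zu, persisted length: %zu\n",
           strlen(nodename), strlen(out) - strlen("nodename="));
    return 0;
}
```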
-
- 19 Jun, 2023 1 commit
-
-
Binbin authored
In the original implementation, the time complexity of the commands is actually O(N*M), where N is the number of patterns the client is already subscribed to and M is the number of patterns to subscribe to. The docs are all wrong about this. Specifically, because the original client->pubsub_patterns is a list, we need to do listSearchKey, which is O(N). In this PR, we change it to a dict, so the search becomes O(1). At the same time, both pubsub_channels and pubsubshard_channels are already dicts. Changing pubsub_patterns to a dictionary improves the readability and maintainability of the code.
-
- 18 Jun, 2023 2 commits
-
-
Oran Agra authored
Apparently for large size classes Jemalloc allocates some extra memory (it can be up to 25% overhead for allocations of 16kb). see https://github.com/jemalloc/jemalloc/issues/1098#issuecomment-1589870476 p.s. from Redis's perspective that looks like external fragmentation (i.e. allocated bytes will be low, and active pages bytes will be large), which can cause active-defrag to eat CPU cycles in vain.

Some details about the mechanism we disable:
---------------------------------------------------------------
Disabling this mechanism only affects large allocations (above 16kb). Not only is it not expected to cause any performance regressions, it's actually recommended, unless you have a specific workload pattern and hardware that benefit from this feature -- by default it's enabled and adds address randomization to all large buffers, by over-allocating 1 page per large size class and offsetting into that page to make the starting address of the user buffer randomized. Workloads such as scientific computation often handle multiple big matrices at the same time, and the randomization makes sure that the cacheline-level accesses don't suffer bad conflicts (when they all start from page-aligned addresses). However, the downside is also quite noticeable: as observed, the extra page per large size class can cause memory overhead, plus the extra TLB entry. The other factor is that hardware in the last few years started doing the randomization at the hardware level, i.e. the address-to-cacheline mapping isn't a direct mapping anymore. So there's debate about disabling the randomization by default, but we are still hesitant because when it matters, it could matter a lot, and having it enabled by default limits that worst-case behavior, even though it means the majority of workloads suffer a regression. So in short, disabling it is safe and offers better performance in most cases.
-
Wen Hui authored
This PR adds a human readable name to a node in clusters, visible as part of error logs. This is useful so that admins and operators of a Redis cluster have better visibility into failures without having to cross-reference the generated ID with some logical identifier (such as pod-ID or EC2 instance ID). This is mentioned in #8948. Specific nodenames can be set by using the variable cluster-announce-human-nodename. The nodename is gossiped using the clusterbus extension in #9530. Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
-
- 16 Jun, 2023 4 commits
-
-
sundb authored
## Issue
When a dict has a long chain, or the length of the chain is longer than the number of samples, we will never be able to sample the elements at the end of the chain using dictGetSomeKeys(). This could mean that SRANDMEMBER can hang in an endless loop. The most severe case is the pathological one where someone uses SCAN+DEL or SSCAN+SREM to create an unevenly distributed dict. This was amplified by the recent change in #11692 which prevented down-sizing rehashing while there is a fork.

## Solution
1. Before, we would stop sampling when we reached the maximum number of samples, even if there was more data after the current chain. Now when we reach the maximum we use the Reservoir Sampling algorithm to fairly sample the end of the chain that could not otherwise be sampled.
2. Fix the rehashing code so that, just as it allows rehashing for up-sizing during a fork when the ratio is extreme, it will allow it for down-sizing as well.

Issue was introduced (or became more severe) by #11692

Co-authored-by: Oran Agra <oran@redislabs.com>
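A self-contained sketch of the Reservoir Sampling idea applied to a chain longer than the sample budget (illustrative only, not the dictGetSomeKeys code): every element of the chain, including those past the first k positions, ends up with an equal probability of being in the sample.
```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Reservoir-sample up to k elements from a chain of n elements (Algorithm R).
 * Each element, including those near the end of a long chain, is kept with
 * probability k/n. */
static size_t reservoir_sample(const int *chain, size_t n, int *sample, size_t k) {
    size_t stored = 0;
    for (size_t i = 0; i < n; i++) {
        if (stored < k) {
            sample[stored++] = chain[i];           /* fill the reservoir */
        } else {
            size_t j = (size_t) rand() % (i + 1);  /* keep with prob. k/(i+1) */
            if (j < k) sample[j] = chain[i];
        }
    }
    return stored;
}

int main(void) {
    int chain[100], sample[5];
    for (int i = 0; i < 100; i++) chain[i] = i;
    srand((unsigned) time(NULL));
    size_t got = reservoir_sample(chain, 100, sample, 5);
    for (size_t i = 0; i < got; i++) printf("%d ", sample[i]);
    printf("\n");
    return 0;
}
```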
-
Binbin authored
In SPOP, when COUNT is greater than or equal to the set's size, we will remove the set. In dbDelete, we will do DEL or UNLINK according to the lazy flag. This is also required for propagation. In RESTORE, we won't store expired keys into the db, see #7472. When used together with REPLACE, it should emit a DEL or UNLINK according to the lazy flag. This PR also adds tests to cover the propagation. The RESTORE test will also cover #7472.
-
YaacovHazan authored
In 4ba47d2d the following tests were added in both tracking.tcl and introspection.tcl:
- Coverage: Basic CLIENT CACHING
- Coverage: Basic CLIENT REPLY
- Coverage: Basic CLIENT TRACKINGINFO
- Coverage: Basic CLIENT GETREDIR
-
Binbin authored
* Add a test for the execution order when a command being unblocked causes another command to get unblocked

In #12301, we observed that if the `while(listLength(server.ready_keys) != 0)` in handleClientsBlockedOnKeys is changed to `if(listLength(server.ready_keys) != 0)`, the order of command execution will change. It is wrong to change that. It means that if a command being unblocked causes another command to get unblocked (like a BLMOVE would do), then the newly unblocked command will wait to be processed later rather than right away. It would not have any real implication if we changed that, since we do call handleClientsBlockedOnKeys in beforeSleep again and redis would still behave correctly, but we don't change it. An example:
1. $rd1 blmove src{t} dst{t} left right 0
2. $rd2 blmove dst{t} src{t} right left 0
3. $rd3 set key1{t}, $rd3 lpush src{t}, $rd3 set key2{t} in a pipeline

The correct order would be:
1. set key1{t}
2. lpush src{t}
3. lmove src{t} dst{t} left right
4. lmove dst{t} src{t} right left
5. set key2{t}

The wrong order would be:
1. set key1{t}
2. lpush src{t}
3. lmove src{t} dst{t} left right
4. set key2{t}
5. lmove dst{t} src{t} right left

This PR adds a corresponding test to cover it.

* Add a comment near while(listLength(server.ready_keys) != 0)
-
- 15 Jun, 2023 2 commits
-
-
Meir Shpilraien (Spielrein) authored
While Redis is loading data from disk (AOF or RDB), modules will get key space notifications. At that stage the module should not register any PEJ; the main reason this is forbidden is that a PEJ's purpose is to perform a write operation as a reaction to the key space notification. Write operations should not be performed while loading data, and so there is no reason to register a PEJ. The same argument also applies to a readonly replica: a module should not perform any writes as a reaction to key space notifications, and so it should not register a PEJ. If a module needs to perform some other task that does not involve writing, it can do so in the key space notification callback itself.
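A hedged module sketch of the rule described above (module, job, and key names are hypothetical, and the exact post-notification-job API shape should be checked against redismodule.h): a keyspace-notification callback that only registers a post-execution job (PEJ) when the server is neither loading nor a replica, since the job's purpose is to perform a write.
```
#include "redismodule.h"

/* Illustrative only: the job performs the follow-up write. */
static void PostJob(RedisModuleCtx *ctx, void *pd) {
    REDISMODULE_NOT_USED(pd);
    RedisModuleCallReply *r = RedisModule_Call(ctx, "INCR", "c", "pej:counter");
    if (r) RedisModule_FreeCallReply(r);
}

static int OnKeyEvent(RedisModuleCtx *ctx, int type, const char *event,
                      RedisModuleString *key) {
    REDISMODULE_NOT_USED(type);
    REDISMODULE_NOT_USED(event);
    REDISMODULE_NOT_USED(key);
    int flags = RedisModule_GetContextFlags(ctx);
    /* Don't register a write-performing job while loading (AOF/RDB) or on a
     * replica; read-only reactions can still happen right here. */
    if (flags & (REDISMODULE_CTX_FLAGS_LOADING | REDISMODULE_CTX_FLAGS_SLAVE))
        return REDISMODULE_OK;
    RedisModule_AddPostNotificationJob(ctx, PostJob, NULL, NULL);
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "pejdemo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_STRING, OnKeyEvent);
}
```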
-
Binbin authored
In the PXAT case, there is no need to do the rewriteClientCommandVector; a simple benchmark shows we gain an improvement of about 10%.
-
- 14 Jun, 2023 2 commits
-
-
judeng authored
This change only affects keys with an expiry time. For SETEX, the average improvement is 5%, and for GET on a key with an expiration, we gain an improvement of 13%. When keys have an expiration time, Redis has an assertion that looks up the main dict every time it touches the expires dict. This comes with a performance cost, especially during rehashing, when the damage is doubled. It looks like that assert was added some ten years ago, maybe out of paranoia, and there's probably no reason to keep it at that cost.
-
Wen Hui authored
Looks like the ZADD test case was copied to create the ZINCRBY test case, but the command was not changed.
-
- 13 Jun, 2023 2 commits
-
-
Harkrishn Patro authored
It would be helpful for clients to get cluster slots/shards information while a node is failing over or is loading data.
-
Binbin authored
For the XREADGROUP BLOCK > scenario, there is an endless loop. Due to #11012, it keeps going: reprocess command -> blockForKeys -> reprocess command. The right fix is to avoid the endless loop in handleClientsBlockedOnKey and handleClientsBlockedOnKeys; it looks like there was some attempt at that in handleClientsBlockedOnKeys, but it was maybe not sufficiently good, and it looks like using a similar trick in handleClientsBlockedOnKey is complicated, i.e. stashing the list on the stack and iterating on it after creating a fresh one for future use is problematic, since the code keeps accessing the global list. Co-authored-by: Oran Agra <oran@redislabs.com>
-
- 12 Jun, 2023 1 commit
-
-
Oran Agra authored
This will increase the size of an already large COB (one that has already passed the threshold for disconnection). This could also mean that we'll attempt to write that data to the socket and the replica will manage to read it, which will result in an undesired partial sync (undesired for the test).
-
- 11 Jun, 2023 4 commits
-
-
YaacovHazan authored
In 7.2, after 971b177f we make sure (assert) that the duration has been recorded when resetting the client. This is not true for rejected commands. The use case I found is a blocking command that an ACL rule changed before it was unblocked; while reprocessing it, the command was rejected and triggered the assert. The PR resets the command duration inside rejectCommand / rejectCommandSds. Co-authored-by: Oran Agra <oran@redislabs.com>
-
Chen Tianjie authored
In #11963, some new tests about eventloop duration were added, which include time measurements in TCL scripts. This has caused some unexpected CI failures, such as #12169 and #12177, due to slow test servers or performance jitter.
-
Wen Hui authored
Added missing test case coverage for the below scenarios:
1. The command only works if all the specified slots are, from the point of view of the node receiving the command, currently not assigned. A node will refuse to take ownership of slots that already belong to some other node (including itself).
2. The command fails if the same slot is specified multiple times.
-
Binbin authored
This leak will only happen in loadServerConfigFromString, that is, when we are loading a redis.conf and the user has an error in it. Because it happens in loadServerConfigFromString, redis will exit if there is an error, so this is actually just a cleanup.
-
- 08 Jun, 2023 1 commit
-
-
Binbin authored
We no longer propagate scripts (starting from 7.0), so this is a very rare issue in nearly-dead code. This is an oversight in #9780.
-
- 06 Jun, 2023 1 commit
-
-
Yossi Gottlieb authored
Refresh deps/hiredis to latest (unreleased) version.
-
- 05 Jun, 2023 1 commit
-
-
Yossi Gottlieb authored
Adding this as it's required by the latest version of libmusl (but not clear if it's a regression or an intentional change).
-
- 04 Jun, 2023 1 commit
-
-
Yossi Gottlieb authored
* We patch hiredis rather than rely on having a compatible sds version. * We now have better test coverage for redis-cli and redis-benchmark.
-
- 30 May, 2023 3 commits
-
-
Yossi Gottlieb authored
-
Yossi Gottlieb authored
b6a052fe0 Helper for setting TCP_USER_TIMEOUT socket option (#1188)
3fa9b6944 Add RedisModule adapter (#1182)
d13c091e9 Fix wincrypt symbols conflict
5d84c8cfd Add a test ensuring we don't clobber connection error.
3f95fcdae Don't attempt to set a timeout if we are in an error state.
aacb84b8d Fix typo in makefile.
563b062e3 Accept -nan per the RESP3 spec recommendation.
04c1b5b02 Fix colliding option values
4ca8e73f6 Rework searching for openssl
cd208812f Attempt to find the correct path for openssl.
011f7093c Allow specifying the keepalive interval
e9243d4f7 Cmake static or shared (#1160)
1cbd5bc76 Write a version file for the CMake package (#1165)
6f5bae8c6 fix typo
acd09461d CMakeLists.txt: respect BUILD_SHARED_LIBS
97fcf0fd1 Add sdevent adapter
ccff093bc Bump dev version for the next release cycle.
c14775b4e Prepare for v1.1.0 GA
f0bdf8405 Add support for nan in RESP3 double (#1133)
991b0b0b3 Add an example that calls redisCommandArgv (#1140)...
-
Oran Agra authored
This is a followup fix for #11817
-
- 29 May, 2023 3 commits
-
-
Binbin authored
We should emit DB_FLAG_KEY_EXPIRED instead of DB_FLAG_KEY_DELETED. This is an oversight in #9406.
-
Binbin authored
This test was introduced in #12079. It works well most of the time, but occasionally fails:
```
00:34:45> SENTINEL SIMULATE-FAILURE crash-after-election works: OK
00:34:45> SENTINEL SIMULATE-FAILURE crash-after-promotion works: FAILED: Sentinel set crash-after-promotion but did not exit
```
I don't know the reason; it may be affected by the exit of the previous crash-after-election test. Because it doesn't really make much sense to go deeper into it now, we re-source init-tests to get a clean environment before each test, to try to fix this. After applying this change, we found a new error:
```
16:39:33> SENTINEL SIMULATE-FAILURE crash-after-election works: FAILED: caught an error in the test
couldn't open socket: connection refused
couldn't open socket: connection refused
```
I am guessing the sentinel triggers a failover and exits before SENTINEL FAILOVER, so I added a new || condition in wait_for_condition to fix it.
-
Binbin authored
Try to lazyfree the temp zset in ZUNION / ZINTER / ZDIFF and optimize ZINTERCARD to avoid creating a temp zset (#12229) We check lazyfree_lazy_server_del in sunionDiffGenericCommand to see if we need to lazyfree the temp set. Now do the same in zunionInterDiffGenericCommand to lazyfree the temp zset. This is a minor change, following #5903. Also improved the comments. Additionally, avoid creating an unused zset object in ZINTERCARD, which results in some 10% performance improvement.
-
- 28 May, 2023 2 commits
-
-
Oran Agra authored
This is a redo of #11594, which got reverted in #11940. It improves performance by avoiding a double lookup of the key.
-
Oran Agra authored
So far clients being blocked and unblocked by a module command would not update the c->woff variable, and so WAIT was ineffective and got released without waiting for the command actions to propagate. This seems to have existed since forever, but not for RM_BlockClientOnKeys. It is problematic though to know if the module did or didn't propagate anything in that command, so for now, instead of adding an API, we'll just update the woff to the latest offset when unblocking. This will cause the client to possibly wait excessively, but that's not that bad.
-