- 28 Feb, 2023 1 commit
-
-
Oran Agra authored
The issue happens when passing a negative long value whose absolute value is greater than the maximum positive value that a long can store. (cherry picked from commit 41430af6a821c551abb862666ef896f2c196dea6)
-
- 16 Jan, 2023 2 commits
-
-
Oran Agra authored
Missing range check in ZRANDMEMBER and HRANDFIELD leading to a panic due to protocol limitations.
-
Oran Agra authored
Related to the hang reported in #11671. Currently, redis can disconnect a client due to reaching the output buffer limit, and it will also avoid feeding that output buffer with more data, but it keeps running the loop in the command (despite the client already being marked for disconnection). This PR is an attempt to mitigate the problem for commands that are easy to abuse: KEYS, HRANDFIELD, SRANDMEMBER, ZRANDMEMBER. The RAND family of commands can take a negative COUNT argument (which is not bound to the number of elements in the key), so it's enough to create a key with one field, and then these commands can be used to hang redis. For KEYS, the caller can use the existing keyspace in redis (if big enough).
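The core of the mitigation is bounding the reply loop by the client's output-buffer state rather than by the attacker-controlled COUNT. A minimal standalone sketch of that idea, with hypothetical names and sizes rather than the actual Redis internals:

```c
/* Toy sketch of the mitigation idea, not the Redis source: once the reply
 * buffer for a client crosses its configured limit, stop generating more
 * reply data instead of looping `count` more times. All names are hypothetical. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    size_t reply_bytes;      /* bytes queued for this client */
    size_t buffer_limit;     /* configured output buffer limit */
    int marked_for_close;    /* set once the limit is exceeded */
} toy_client;

static void emit_element(toy_client *c) {
    c->reply_bytes += 64;                      /* pretend each element is 64 bytes */
    if (c->reply_bytes > c->buffer_limit)
        c->marked_for_close = 1;               /* client will be disconnected */
}

/* HRANDFIELD-like loop asked for `count` picks (e.g. via a negative COUNT). */
static long run_random_picks(toy_client *c, long count) {
    long produced = 0;
    for (long i = 0; i < count; i++) {
        if (c->marked_for_close) break;        /* the fix: bail out early */
        emit_element(c);
        produced++;
    }
    return produced;
}

int main(void) {
    toy_client c = { .reply_bytes = 0, .buffer_limit = 1024, .marked_for_close = 0 };
    printf("produced %ld of 1000000 requested elements\n",
           run_random_picks(&c, 1000000));
    return 0;
}
```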
-
- 12 Dec, 2022 2 commits
-
-
Oran Agra authored
When we know the size of the zset we're going to store in advance, we can check whether it's greater than the listpack encoding threshold, in which case we can create a skiplist from the get-go and avoid converting the listpack to a skiplist later, after it was already populated. (cherry picked from commit 21891003)
-
Vitaly authored
When `zrangestore` is called, a destination container object is created. Before this PR we used to create a listpack-based object even if `zset-max-ziplist-entries` or the equivalent `zset-max-listpack-entries` was set to 0. This triggered an immediate conversion of the listpack into a skiplist in `zrangestore`, which hit an assertion, resulting in an engine crash. Added a TCL test that reproduces this issue. (cherry picked from commit 6461f09f)
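Both of the commits above hinge on choosing the destination encoding once, up front, from the expected element count and the configured threshold. A minimal sketch of that decision, using hypothetical names rather than the Redis API:

```c
/* Minimal sketch: when the number of elements to be stored is known in
 * advance, compare it against the listpack threshold once and pick the final
 * encoding directly, instead of building a listpack and converting it later.
 * A threshold of 0 therefore yields a skiplist immediately. */
#include <stdio.h>
#include <stddef.h>

enum zset_encoding { ENC_LISTPACK, ENC_SKIPLIST };

static enum zset_encoding pick_zset_encoding(size_t expected_elements,
                                             size_t max_listpack_entries) {
    /* Mirrors the "create a skiplist from the get-go" decision. */
    return (expected_elements > max_listpack_entries) ? ENC_SKIPLIST : ENC_LISTPACK;
}

int main(void) {
    /* e.g. zset-max-listpack-entries 128, destination will hold 1000 elements */
    printf("%s\n", pick_zset_encoding(1000, 128) == ENC_SKIPLIST
                   ? "skiplist" : "listpack");
    /* e.g. threshold configured as 0: even one element goes straight to skiplist */
    printf("%s\n", pick_zset_encoding(1, 0) == ENC_SKIPLIST
                   ? "skiplist" : "listpack");
    return 0;
}
```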
-
- 27 Apr, 2022 2 commits
-
-
filipe oliveira authored
Avoid a deferred array reply in genericZrangebyrankCommand() when the consumer type is client, i.e. any ZRANGE / ZREVRANGE (when rank is used). This was a performance regression introduced in #7844 (v6.2), mainly affecting pipelined workloads. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 1dc89e2d)
- 04 Oct, 2021 1 commit
-
-
Oran Agra authored
- Fix possible heap corruption in ziplist and listpack resulting from trying to allocate more than the maximum size of 4GB.
- Prevent ziplist (hash and zset) from reaching a size above 1GB; it would be converted to HT encoding instead, since that's not a useful size.
- Prevent listpack (stream) from reaching a size above 1GB.
- XADD will start a new listpack if the new record may cause the previous listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB.
- The List type (ziplist in quicklist) was truncating strings that were over 4GB; now it'll respond with an error.
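A rough sketch of the 1GB cap checks described above, as a standalone toy (hypothetical names; only the decision logic is shown): a single record larger than the cap is rejected outright, and an append that would push the current listpack over the cap triggers a new node instead:

```c
/* Toy illustration of the size-cap behavior, not the Redis implementation. */
#include <stdio.h>
#include <stddef.h>

#define NODE_SIZE_LIMIT (1024ULL * 1024ULL * 1024ULL)  /* 1GB cap per listpack */

typedef struct {
    size_t bytes;   /* current serialized size of the active listpack */
} toy_node;

typedef enum { APPEND_OK, APPEND_NEW_NODE, APPEND_TOO_BIG } append_result;

static append_result try_append(toy_node *node, size_t record_bytes) {
    if (record_bytes > NODE_SIZE_LIMIT)
        return APPEND_TOO_BIG;                           /* single record over the cap: error */
    if (node->bytes + record_bytes > NODE_SIZE_LIMIT)
        return APPEND_NEW_NODE;                          /* caller should open a fresh listpack */
    node->bytes += record_bytes;
    return APPEND_OK;
}

int main(void) {
    toy_node node = { .bytes = NODE_SIZE_LIMIT - 100 };
    printf("%d\n", try_append(&node, 200));                  /* prints 1: start a new node */
    printf("%d\n", try_append(&node, NODE_SIZE_LIMIT + 1));  /* prints 2: reject the record */
    return 0;
}
```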
-
- 21 Jul, 2021 4 commits
-
-
Jason Elbaum authored
When using RESP3, ZPOPMAX/ZPOPMIN should return nested arrays for consistency with other commands (e.g. ZRANGE). We do that only when the COUNT argument is present (similarly to how LPOP behaves). For the reasoning, see https://github.com/redis/redis/issues/8824#issuecomment-855427955 This is a breaking change only when RESP3 is used and the COUNT argument is present! (cherry picked from commit 7f342020)
-
Binbin authored
Due to a copy-paste bug, it used to reply with a null response rather than an empty array. This commit includes new tests that look at the RESP response directly, in order to be able to tell the difference between them. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit a418a2d3)
-
Leibale Eidelman authored
Mistakenly, it used to return an empty array rather than 0. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 95274f1f)
-
- 15 Apr, 2021 1 commit
-
-
guybe7 authored
-
- 14 Apr, 2021 1 commit
-
-
Bonsai authored
-
- 01 Apr, 2021 2 commits
-
-
guybe7 authored
If GT/LT fails the operation, we need to reply with nil (like a failure due to NX). Other changes: add the missing $encoding suffix to many zset tests. Note: there's a behavior change just in the case of INCR + GT/LT that fails. The old code was replying with the wrong (rejected) score, and now it'll reply with nil. Note that that's a corner case anyway, so this "behavior change" shouldn't have too much effect. Using GT/LT with INCR has a predictable result even before we run the command (INCR with GT will only / always fail if the increment is negative).
-
wuYin authored
There are 2 common range comparators for the skiplist: zslValueGteMin and zslValueLteMax, but they're not being reused in zslDeleteRangeByScore. This is a small change to make the code cleaner.
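A simplified, self-contained illustration of the refactor: the two boundary checks are written once and reused by any code that needs to decide whether a score falls inside a range. The struct and helpers below are stand-ins, not the actual Redis skiplist types:

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double min, max;
    bool minex, maxex;   /* exclusive bounds, as in "(1.0" vs "1.0" */
} rangespec;

/* Shared comparators, analogous to zslValueGteMin / zslValueLteMax. */
static bool value_gte_min(double value, const rangespec *spec) {
    return spec->minex ? (value > spec->min) : (value >= spec->min);
}

static bool value_lte_max(double value, const rangespec *spec) {
    return spec->maxex ? (value < spec->max) : (value <= spec->max);
}

/* A delete-by-score style loop can reuse the same helpers to decide which
 * elements fall inside the range, instead of re-implementing the checks. */
static int count_in_range(const double *scores, int n, const rangespec *spec) {
    int hits = 0;
    for (int i = 0; i < n; i++)
        if (value_gte_min(scores[i], spec) && value_lte_max(scores[i], spec))
            hits++;
    return hits;
}

int main(void) {
    double scores[] = { 1.0, 2.5, 3.0, 4.5 };
    rangespec spec = { .min = 2.5, .max = 4.5, .minex = false, .maxex = true };
    printf("%d\n", count_in_range(scores, 4, &spec));   /* prints 2 */
    return 0;
}
```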
-
- 10 Mar, 2021 1 commit
-
-
guybe7 authored
Have a clear separation between in and out flags. Other changes: delete dead code in RM_ZsetIncrby: if zsetAdd returned an error (which happens only if the result of the operation is NaN or the score is NaN), we return immediately, so there is no way that zsetAdd succeeded and returned NaN in the out-flags.
-
- 22 Feb, 2021 1 commit
-
-
Wen Hui authored
SRANDMEMBER with a negative count (non-unique) can return the same member multiple times, and the order of elements in the returned collection matters. For these reasons, returning a RESP3 Set type is not valid for the negative count, and not really valid for the positive (unique) variant either (the command returns an array of random picks, not a set). This PR also contains a minor optimization for SRANDMEMBER, HRANDFIELD, and ZRANDMEMBER, to avoid the temporary dict from being rehashed while it grows. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 07 Feb, 2021 1 commit
-
-
Oran Agra authored
It is inefficient to repeatedly pick a single random element from a ziplist. For CASE4, which is when the user requested a low number of unique random picks from the collection, we used that pattern. Now we use a different algorithm that picks unique elements from a ziplist and guarantees no duplicates, but doesn't provide random order (which is only needed in the non-unique random picks case). Unrelated changes: * change ziplist count and index variables to unsigned * solve compilation warnings about uninitialized vars in gcc 10.2 Co-authored-by:
xinluton <xinluton@qq.com>
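One way to get exactly `count` unique picks in a single pass over a sequentially encoded structure is selection sampling (Knuth's Algorithm S); it matches the property described above (no duplicates, output in listpack order), though this is only a plausible sketch of the approach, not the actual Redis code:

```c
/* Each element is taken with probability (still needed) / (still remaining),
 * which yields exactly `count` distinct picks in one forward pass. */
#include <stdio.h>
#include <stdlib.h>

static void sample_unique(const int *elems, int n, int count) {
    int needed = count;
    for (int i = 0; i < n && needed > 0; i++) {
        int remaining = n - i;
        /* take elems[i] with probability needed/remaining */
        if (rand() % remaining < needed) {
            printf("%d ", elems[i]);
            needed--;
        }
    }
    printf("\n");
}

int main(void) {
    int elems[] = { 10, 20, 30, 40, 50, 60, 70, 80 };
    sample_unique(elems, 8, 3);   /* prints 3 distinct elements, in original order */
    return 0;
}
```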
-
- 05 Feb, 2021 1 commit
-
-
sundb authored
RAND* commands: fix risk of OOM panic in hash and zset, use fair random in hash, and add tests for even distribution to all (#8429). Changes to HRANDFIELD and ZRANDMEMBER: * Fix risk of OOM panic when a client queries a very big negative count (avoid allocating a huge temporary buffer). * Fix uneven random distribution in HRANDFIELD with negative count (it wasn't using dictGetFairRandomKey). * Add tests to check an even random distribution (HRANDFIELD, SRANDMEMBER, ZRANDMEMBER). Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 29 Jan, 2021 1 commit
-
-
Yang Bodong authored
New commands: `HRANDFIELD [<count> [WITHVALUES]]` and `ZRANDMEMBER [<count> [WITHSCORES]]`. The algorithms are similar to the one in SRANDMEMBER. Both return a simple bulk response when no arguments are given, and an array otherwise. In case values/scores are requested, RESP2 returns a long flat array, and RESP3 a nested array. Note: in all 3 commands, the only option that also provides random order is the one with a negative count.
Changes to SRANDMEMBER:
* Optimization: when count is 1, we can use the more efficient non-unique random algorithm
* Optimization: work with sds strings rather than robj
Other changes:
* zzlGetScore: when the zset needs to convert a string to a double, we use a safer memcpy (in case the buffer is too small)
* Solve a "bug" in the SRANDMEMBER test: it intended to test a positive count (case 3 or case 4) and by accident used a negative count
Co-authored-by:
xinluton <xinluton@qq.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
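A standalone sketch of the bounded-copy conversion mentioned for zzlGetScore (hypothetical helper name and buffer size): the embedded string is not NUL-terminated, so the copy length is clamped to the local buffer before calling strtod():

```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

static double score_from_buf(const char *s, size_t len) {
    char buf[128];
    /* clamp the copy length so an oversized value cannot overflow the buffer */
    size_t n = len < sizeof(buf) - 1 ? len : sizeof(buf) - 1;
    memcpy(buf, s, n);
    buf[n] = '\0';
    return strtod(buf, NULL);
}

int main(void) {
    const char raw[] = { '3', '.', '1', '4', 'X' };   /* not NUL-terminated */
    printf("%g\n", score_from_buf(raw, 4));           /* prints 3.14 */
    return 0;
}
```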
-
- 26 Jan, 2021 2 commits
-
-
Oran Agra authored
It was confusing as to why these don't return a map type. The reason is that order matters, so we need to make sure the client library knows to respect it. Added comments in the implementation and tests to cover it.
-
Vladimir Maksimovski authored
Remove a check leading to duplicate branches, and an unused withscores parameter.
-
- 13 Jan, 2021 1 commit
-
-
sundb authored
Fix use of lookupKeyRead and lookupKeyWrite in zrangeGenericCommand and zunionInterDiffGenericCommand (#8316):
* Change zunionInterDiffGenericCommand to use lookupKeyRead if dstkey is null
* Change zrangeGenericCommand to use lookupKeyWrite if dstkey isn't null
ZRANGESTORE and ZUNION, ZINTER, ZDIFF are all new commands (6.2 RC1 and RC2). In redis 6.0, ZRANGE was using lookupKeyRead, and ZUNIONSTORE / ZINTERSTORE were using lookupKeyWrite, so these bugs were introduced in 6.2 and will be resolved before it is released. The implications of this bug are also not big: the sole difference between lookupKeyRead and lookupKeyWrite is for commands executed on a replica that are not received from its master client (for the master, and for the master client on the replica, these two functions behave the same).
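A toy illustration of the rule the fix enforces, with stand-in functions rather than the real lookupKeyRead/lookupKeyWrite: use the read-path lookup when there is no destination key, and the write-path lookup when a destination key will be stored:

```c
#include <stdio.h>

typedef const char *keyname;   /* stand-in for a Redis key object */

static void lookup_read(keyname key)  { printf("read-path lookup:  %s\n", key); }
static void lookup_write(keyname key) { printf("write-path lookup: %s\n", key); }

/* Shared range implementation: the presence of a destination key decides
 * which lookup path is used, as the fix above describes. */
static void zrange_generic(keyname srckey, keyname dstkey) {
    if (dstkey != NULL)
        lookup_write(srckey);   /* ZRANGESTORE-like: the command writes */
    else
        lookup_read(srckey);    /* plain ZRANGE: read only */
}

int main(void) {
    zrange_generic("myzset", NULL);     /* ZRANGE      -> read-path lookup  */
    zrange_generic("myzset", "dest");   /* ZRANGESTORE -> write-path lookup */
    return 0;
}
```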
-
- 07 Jan, 2021 1 commit
-
-
Jonah H. Harris authored
Add the ZRANGESTORE command, and extend ZRANGE to deprecate Z[REV]RANGE[BYSCORE|BYLEX]. Syntax for the new ZRANGESTORE command: ZRANGESTORE <dst> <src> <min> <max> [BYSCORE | BYLEX] [REV] [LIMIT offset count]. New syntax for ZRANGE: ZRANGE <key> <min> <max> [BYSCORE | BYLEX] [REV] [WITHSCORES] [LIMIT offset count]. Old syntax for ZRANGE: ZRANGE <key> <min> <max> [WITHSCORES]. The other ZRANGE commands remain unchanged. The implementation uses common code for all of these, by utilizing a consumer interface that in one command sends the response to the client, and in the other stores into a zset key. Co-authored-by:
Oran Agra <oran@redislabs.com>
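A hedged sketch of the consumer-interface idea (all names here are hypothetical): the range iteration is written once and hands each member/score pair to a consumer, where one consumer writes a client reply and another adds the pair to a destination zset:

```c
#include <stdio.h>

typedef struct {
    void (*emit)(void *ctx, const char *member, double score);
    void *ctx;
} range_consumer;

static void emit_to_client(void *ctx, const char *member, double score) {
    (void)ctx;
    printf("reply: %s %.17g\n", member, score);                        /* ZRANGE path */
}

static void emit_to_store(void *ctx, const char *member, double score) {
    printf("store into %s: %s %.17g\n", (const char *)ctx, member, score);  /* ZRANGESTORE path */
}

/* Stand-in for the shared range iteration used by both commands. */
static void run_range(const range_consumer *c) {
    c->emit(c->ctx, "a", 1.0);
    c->emit(c->ctx, "b", 2.0);
}

int main(void) {
    range_consumer reply = { emit_to_client, NULL };
    range_consumer store = { emit_to_store, "dstkey" };
    run_range(&reply);   /* ZRANGE: reply to the client */
    run_range(&store);   /* ZRANGESTORE: populate the destination key */
    return 0;
}
```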
-
- 24 Dec, 2020 1 commit
-
-
Madelyn Olson authored
Properly throw errors for invalid replication stream and support https://github.com/redis/redis/pull/8217
-
- 09 Dec, 2020 1 commit
-
-
Oran Agra authored
-
- 06 Dec, 2020 2 commits
-
-
Oran Agra authored
If RESTORE passes successfully with full sanitization, we can't afford to crash later on an assertion due to duplicate records in a hash when converting it from ziplist to dict. This means that when doing full sanitization, we must make sure there are no duplicate records in any of the collections.
-
Wang Yuan authored
As we know, redis may reject users' requests or evict some keys if used memory is over maxmemory. Dictionary expansion may make things worse: some big dictionaries, such as the main db and the expires dict, may eat huge amounts of memory at once when allocating a new big hash table, and end up far above maxmemory after expanding. Related issues: #4213 #4583.
In more detail: when a dict in redis expands, we allocate a new big ht[1] that is generally double the size of ht[0], so the size of ht[1] will be very big if ht[0] is already big. For the db dict, if we have more than 64 million keys, we need 1GB for ht[1] when the dict expands. If the sum of used memory and the new hash table the dict needs exceeds maxmemory, we shouldn't allow the dict to expand: even if we enable keys eviction, we still couldn't add many more keys after eviction and rehashing, and what's worse, redis will keep fewer keys when only a little memory remains, since it's spent on the new hash table instead of users' data. Moreover, users can't write data to redis at all if keys eviction is disabled.
What this commit changes: add a new member function, expandAllowed, to the dict type; it provides a way for the caller to allow or disallow expansion. We expose two parameters to this function: the additional memory needed for expanding and the dict's current load factor, and users can implement a function that makes a decision based on them. For the main db dict and the expires dict, which may be very big and cost huge amounts of memory to expand, we implement such a judgement function: we provisionally stop the dict from expanding if used memory would be over maxmemory after the expansion, but to preserve redis performance we still allow the dict to expand if its load factor exceeds the safe load factor. Test cases were added to verify that we don't allow the main db to expand when the remaining memory is not enough, so that keys eviction is avoided.
Other changes: the new hash table size used when expanding. Before this commit, the size was double the dict's used count, passed through _dictNextPower. Since we actually aim to keep the dict load factor between 0.5 and 1.0, we now replace *2 with +1; because the first check is used >= size, the outcome will usually be the same as _dictNextPower(used+1). The only case where it differs is when dict_can_resize is false during a fork, where _dictNextPower(used*2) would cause the dict to jump to *4 (i.e. _dictNextPower(1025*2) returns 4096). Rehash test cases were fixed due to the changed algorithm for the new hash table size on expand.
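A sketch of the expandAllowed idea in isolation (hypothetical names and thresholds): the dict asks a callback whether growing is acceptable, passing the extra memory the new table needs and the current load factor; the callback refuses the speculative expand when it would push usage past maxmemory, unless the load factor already exceeds a safe bound:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define SAFE_LOAD_FACTOR 4.0   /* illustrative bound: past this, expand anyway */

typedef struct {
    size_t used_memory;
    size_t maxmemory;          /* 0 means unlimited */
} memory_state;

/* Decide whether a dict may grow, given the extra memory the new hash table
 * would need and the dict's current load factor. */
static bool expand_allowed(const memory_state *m, size_t more_mem, double load_factor) {
    if (m->maxmemory == 0) return true;
    if (load_factor >= SAFE_LOAD_FACTOR) return true;        /* too dense: must grow */
    return m->used_memory + more_mem <= m->maxmemory;         /* otherwise stay under the cap */
}

int main(void) {
    memory_state m = { .used_memory = 900, .maxmemory = 1000 };
    printf("%d\n", expand_allowed(&m, 500, 1.2));   /* 0: expansion would exceed maxmemory */
    printf("%d\n", expand_allowed(&m, 500, 5.0));   /* 1: load factor forces the expand */
    return 0;
}
```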
-
- 03 Dec, 2020 1 commit
-
-
Felipe Machado authored
In the iterator for these functions, we'll traverse the sorted sets in reverse order so that the largest elements come first. We prefer this order because it's optimized for insertion into a skiplist, which is the destination of the elements being iterated over in these functions.
-
- 24 Nov, 2020 1 commit
-
-
sundb authored
Avoid multiple conditional judgments. Avoid allocating robj->ptr when we're going to replace it right after.
-
- 22 Nov, 2020 1 commit
-
-
xindoo authored
Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 17 Nov, 2020 3 commits
-
-
Meir Shpilraien (Spielrein) authored
Blocking commands should not be used with MULTI, Lua, and RM_Call. This is because the caller, who executes the command in this context, expects a reply.
Today, Lua and MULTI have a special (and different) treatment of blocking commands. Lua: most commands are marked with the no-script flag, which is checked when executing a command from Lua; commands that are not marked (like XREAD) verify that their blocking mode is not used inside Lua (by checking the CLIENT_LUA client flag). MULTI: a command that is going to block first verifies that the client is not inside MULTI (by checking the CLIENT_MULTI client flag); if the client is inside MULTI, it returns a result that matches the empty key with no timeout (for example, BLPOP inside MULTI acts like LPOP). For modules that perform RM_Call with a blocking command, the returned result type is REDISMODULE_REPLY_UNKNOWN and the caller cannot really know what happened.
Disadvantages of the current state: there is no unified approach, as Lua, MULTI, and RM_Call each have a different treatment; and a module cannot safely execute a blocking command (and get a reply or an error). While it's true that modules are not like Lua or MULTI and should be smart enough not to execute blocking commands via RM_Call, sometimes you want to execute a command based on client input (for example, if you create a module that provides a new scripting language like JavaScript or Python). While modules (in module commands) can check for REDISMODULE_CTX_FLAGS_LUA or REDISMODULE_CTX_FLAGS_MULTI to know not to block the client, there is no way to check if the command came from another module using RM_Call, so there is no way for a module to know not to block another module's RM_Call execution.
This commit adds a way to unify the treatment of blocking clients by introducing a new CLIENT_DENY_BLOCKING client flag. In Lua, MULTI, and RM_Call the new flag is turned on to signify that the client should not be blocked. A blocking command verifies that the flag is turned off before blocking. If a blocking command sees that the CLIENT_DENY_BLOCKING flag is on, it does not block and returns the result that matches an empty key with no timeout (as MULTI does today). The new flag is checked in the following commands: list blocking commands (BLPOP, BRPOP, BRPOPLPUSH, BLMOVE), zset blocking commands (BZPOPMIN, BZPOPMAX), stream blocking commands (XREAD, XREADGROUP), and SUBSCRIBE, PSUBSCRIBE, MONITOR. In addition, the new flag is turned on inside the AOF client; we do not want to block the AOF client, to prevent deadlocks and command-ordering issues (and there is also an existing assert in the code that verifies it).
To keep backward compatibility in Lua, all the no-script flags on existing commands were kept untouched, and the Lua special treatment of XREAD and XREADGROUP was kept. To keep backward compatibility in MULTI (which today allows SUBSCRIBE and PSUBSCRIBE), we added a special treatment for those commands to allow executing them in MULTI. The only backward compatibility issue that this PR introduces is that MONITOR is now not allowed inside MULTI.
Tests were added to verify that blocking commands do not block the client in Lua, MULTI, or RM_Call, and that a module can check for the CLIENT_DENY_BLOCKING flag. Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Itamar Haber <itamar@redislabs.com>
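A toy sketch of the unified check (the flag name comes from the commit; the bit value and helpers are illustrative): contexts that cannot accept a blocked client set a single deny-blocking flag, and each blocking command consults it before deciding whether to block or to reply as if the key were empty and the timeout expired:

```c
#include <stdbool.h>
#include <stdio.h>

#define CLIENT_DENY_BLOCKING (1 << 0)   /* illustrative bit, not the real value */

typedef struct { int flags; } toy_client;

/* BLPOP-like decision: serve data if present, otherwise either block or,
 * when blocking is denied, answer as an immediate timeout. */
static void blpop_like(toy_client *c, bool key_has_data) {
    if (key_has_data) {
        printf("pop and reply immediately\n");
    } else if (c->flags & CLIENT_DENY_BLOCKING) {
        printf("reply as if the timeout expired (nil), do not block\n");
    } else {
        printf("block the client until data arrives or timeout\n");
    }
}

int main(void) {
    toy_client in_multi = { .flags = CLIENT_DENY_BLOCKING };
    toy_client normal   = { .flags = 0 };
    blpop_like(&in_multi, false);  /* inside MULTI/Lua/RM_Call: never blocks */
    blpop_like(&normal, false);    /* regular client: blocks */
    return 0;
}
```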
-
thomaston authored
ZREVRANGEBYSCORE key max min [WITHSCORES] [LIMIT offset count]: when the offset is too large the query is very slow, especially when the offset is greater than the length of the zset. It is easy to determine up front whether the offset is greater than the length of the zset, and if it exceeds it, return directly. Co-authored-by:
Oran Agra <oran@redislabs.com>
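A minimal sketch of the early exit described above (hypothetical names): when LIMIT's offset is at least the number of elements in the sorted set, nothing can be returned, so reply with an empty result up front instead of walking the structure:

```c
#include <stdio.h>

/* True when the LIMIT offset already skips past every element. */
static int range_is_empty(long offset, long zset_len) {
    return offset >= zset_len;
}

int main(void) {
    /* e.g. ZREVRANGEBYSCORE key +inf -inf LIMIT 1000000 10 on a 100-element zset */
    printf("%s\n", range_is_empty(1000000, 100) ? "empty reply, skip iteration"
                                                : "iterate");
    return 0;
}
```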
-
swamp0407 authored
Syntax: COPY <key> <new-key> [DB <dest-db>] [REPLACE] No support for module keys yet. Co-authored-by: tmgauss Co-authored-by:
Itamar Haber <itamar@redislabs.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 16 Nov, 2020 1 commit
-
-
Oran Agra authored
-
- 15 Nov, 2020 1 commit
-
-
Felipe Machado authored
- Add ZDIFF and ZDIFFSTORE, which work similarly to SDIFF and SDIFFSTORE - Make sure the new WITHSCORES argument that was added for ZUNION isn't considered valid for ZUNIONSTORE Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 08 Oct, 2020 1 commit
-
-
Felipe Machado authored
Adding [B]LMOVE <src> <dst> RIGHT|LEFT RIGHT|LEFT, deprecating [B]RPOPLPUSH. Note that when receiving a BRPOPLPUSH we'll still propagate an RPOPLPUSH, but for BLMOVE RIGHT LEFT we'll propagate an LMOVE. Improvements to existing tests: - Replace "after 1000" with "wait_for_condition" when waiting for clients to block/unblock. - Add a pre-existing element to the target list in basic tests so that we can check if the new element was added to the correct side of the list. - Check command stats on the replica to make sure the right command was replicated. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 24 Sep, 2020 1 commit
-
-
bodong.ybd authored
Syntax: ZINTER/ZUNION numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE SUM|MIN|MAX] [WITHSCORES]. See #7624.
-
- 23 Sep, 2020 1 commit
-
-
alexronke-channeladvisor authored
Co-authored-by:
Alex Ronke <w.alex.ronke@gmail.com>
-