- 28 Feb, 2023 1 commit
-
Tom Levy authored
Authenticated users can use string matching commands with a specially crafted pattern to trigger a denial-of-service attack on Redis, causing it to hang and consume 100% CPU time. (cherry picked from commit e75f92047c22e659d49bba3a083cd0c9935f21e6) (cherry picked from commit e8a9d3f63aebf6065d69bd0125d4b9c367f88def)
-
- 17 Jan, 2023 4 commits
-
Yossi Gottlieb authored
Before this commit, TLS tests on Ubuntu 22.04 would fail, as dropped connections result in an ECONNABORTED error being thrown instead of an empty read. (cherry picked from commit 69d55768)
-
Oran Agra authored
Authenticated users issuing specially crafted SETRANGE and SORT(_RO) commands can trigger an integer overflow, resulting in Redis attempting to allocate an impossible amount of memory and aborting with an OOM panic.
-
Oran Agra authored
Related to the hang reported in #11671. Currently, Redis can disconnect a client due to reaching the output buffer limit, and it will avoid feeding that output buffer with more data, but it keeps running the loop in the command (despite the client already being marked for disconnection). This PR is an attempt to mitigate the problem for commands that are easy to abuse, namely SRANDMEMBER. The RAND family of commands can take a negative COUNT argument (which is not bound by the number of elements in the key), so it's enough to create a key with one field, and then these commands can be used to hang Redis (see the sketch below). NOTICE: in Redis 7.0 this fix handles KEYS as well, but in this branch it doesn't; details in #11676.
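A minimal redis-py sketch of the abuse vector described above (key name and count are illustrative; don't run this against a server you care about, since very large negative counts are exactly what used to hang it):
```python
import redis

r = redis.Redis()
r.sadd("s", "member")  # a single-element set is enough

# A negative COUNT allows repetitions and is not bound by the set's
# cardinality, so one stored member can be expanded into a huge reply.
# Before this fix, the command kept looping even after the client was
# marked for disconnection due to the output buffer limit.
reply = r.srandmember("s", -1000000)
print(len(reply))  # 1000000 copies of b'member'
```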
-
Meir Shpilraien (Spielrein) authored
This commit 0f8b634c (CVE-2021-32626, released in 6.2.6, 6.0.16, 5.0.14) fixes an invalid memory write issue by using the `lua_checkstack` API to make sure the Lua stack does not overflow. This fix was added in 3 places: 1. `luaReplyToRedisReply` 2. `ldbRedis` 3. `redisProtocolToLuaType` In the first 2 functions, `lua_checkstack` is handled gracefully, while the last is handled with an assert and a statement that this situation cannot happen (only with a misbehaving module): > the Redis reply might be deep enough to explode the LUA stack (notice that currently there is no such command in Redis that returns such a nested reply, but modules might do it) The issue that was discovered is that user arguments are also considered part of the stack, and so the following script (for example) makes the assertion reachable:
```
local a = {}
for i=1,7999 do
    a[i] = 1
end
return redis.call("lpush", "l", unpack(a))
```
This is a regression because such a script would have worked before, and now it crashes Redis. The solution is to clear the function arguments from the Lua stack, which makes the original assumption true and the assertion unreachable. (cherry picked from commit 6b0b04f1)
-
- 04 Oct, 2021 3 commits
-
Oran Agra authored
- Fix possible heap corruption in ziplist and listpack caused by trying to allocate more than the maximum size of 4GB.
- Prevent ziplist (hash and zset) from reaching a size above 1GB; such keys are converted to HT encoding instead, since that's not a useful ziplist size.
- Prevent listpack (stream) from reaching a size above 1GB.
- XADD will start a new listpack if the new record may cause the previous listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB.
- The List type (ziplist in quicklist) was truncating strings that were over 4GB; now it'll respond with an error.
-
meir@redislabs.com authored
The protocol parsing in 'ldbReplParseCommand' (Lua debugging) assumed protocol correctness. This means that if the following is given:
```
*1
$100
test
```
the parser would try to read an additional 94 unallocated bytes after the client buffer. This commit fixes the issue by validating that there are actually enough bytes to read. It also limits the amount of data that can be sent by the debugger client to 1M, so the client will not be able to exhaust memory.
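For illustration, a frame of that shape can be put on the wire like this (a raw-socket sketch; host/port are assumptions, and reaching 'ldbReplParseCommand' itself requires an active Lua debugging session):
```python
import socket

# The header declares one argument of 100 bytes, but only 'test' plus
# the trailing CRLF follows; a parser that trusts the declared length
# would read 94 bytes past what the client actually sent.
frame = b"*1\r\n$100\r\ntest\r\n"

s = socket.create_connection(("127.0.0.1", 6379))
s.sendall(frame)
s.close()
```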
-
Oran Agra authored
This change sets a low limit on multibulk and bulk length in the protocol for unauthenticated connections, so that they can't easily cause Redis to allocate massive amounts of memory by sending just a few characters on the network. The new limits are 10 arguments of 16kb each (instead of 1M arguments of 512mb each).
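A sketch of what the limit guards against, assuming a raw connection to a server that still requires AUTH (host/port and the exact server response are assumptions):
```python
import socket

s = socket.create_connection(("127.0.0.1", 6379))
# Announce a 1M-element multibulk before authenticating; with this
# change the server rejects the header up front instead of preparing
# to allocate room for a million arguments.
s.sendall(b"*1000000\r\n")
print(s.recv(1024))  # expect a protocol error (or disconnect) pre-AUTH
```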
-
- 21 Jul, 2021 8 commits
-
Huang Zhw authored
On 32-bit platforms, the bit position of GETBIT/SETBIT/BITFIELD/BITCOUNT/BITPOS may overflow (see CVE-2021-32761) (#9191). GETBIT and SETBIT may access the wrong address because of the wrap. BITCOUNT and BITPOS may return wrapped results. BITFIELD may access the wrong address but also allocate insufficient memory and segfault (see CVE-2021-32761). This commit uses `uint64_t` or `long long` instead of `size_t`. Related: https://github.com/redis/redis/pull/8096

On a 32-bit platform:
```
> setbit bit 4294967295 1
(integer) 0
> config set proto-max-bulk-len 536870913
OK
> append bit "\xFF"
(integer) 536870913
> getbit bit 4294967296
(integer) 0
```
When the bit index is larger than 4294967295, size_t can't hold the bit index. In the past, `proto-max-bulk-len` was limited to 536870912, so there was no problem. After this commit, the bit position is stored in `uint64_t` or `long long`, so when `proto-max-bulk-len > 536870912`, 32-bit platforms can still be correct. For 64-bit platforms this problem still exists. The major reason is that the bit position is 8 times the byte position, so when proto-max-bulk-len is very large, the bit position may overflow. But on a 64-bit platform we don't have such long strings, so this bug may never happen. Additionally, this commit adds a test that costs 512MB of memory and is tagged `large-memory`; the FreeBSD and Valgrind CI runs ignore this test.

* This test is disabled in this version since bitops don't rely on proto-max-bulk-len; some of the overflows can still occur, so we do want the fixes. (cherry picked from commit 71d45287)
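The wraparound itself is easy to see; a short Python illustration of the 32-bit arithmetic described above:
```python
# On a 32-bit build, the bit index was held in a 32-bit size_t,
# so indexes above 4294967295 silently wrap:
bit_index = 4294967296
print(bit_index & 0xFFFFFFFF)  # 0 -> GETBIT reads bit 0, as in the session above

# The bit position is 8x the byte position, so once proto-max-bulk-len
# may exceed 536870912 bytes, even in-range offsets overflow 32 bits:
print(536870913 * 8 > 0xFFFFFFFF)  # True
```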
-
Oran Agra authored
- Promote the code in DEBUG PROTOCOL to addReplyBigNum.
- DEBUG PROTOCOL ATTRIB skips the attribute when the client is RESP2.
- networking.c: addReply for push and attribute types now triggers an assertion when called on a RESP2 client; anything else would produce a broken protocol that clients can't handle.

(cherry picked from commit 6a5bac30) (cherry picked from commit 7f38aa8bc719f709acdcefc35a45a7aa6faa76fa)
-
Binbin authored
SINTERSTORE would have deleted the dest key right away, even when it was later bound to fail on a WRONGTYPE error. With this change it first picks up all the input keys, and only later deletes the dest key if one of the inputs is empty (see the sketch below). Also add more tests for some commands, mainly focusing on:
- `wrong type error`: expand the test case (based on the SINTER bug) in the non-store variant; add tests for the store variant (although it exists in the non-store variant, it's better to have the same tests)
- the dstkey result when we meet a `non-existent key (empty set)` in *STORE

sdiff:
- improve the test case about the wrong type error (the one we found in SINTER, although it is safe in SDIFF)
- add a test about using a non-existent key (treated like an empty set)

sdiffstore:
- following the SDIFF test cases, also add some tests about `wrong type error` and `non-existent key`
- the difference is that in SDIFFSTORE, we also consider the `dstkey` result

sunion/sunionstore: add more tests (same as above)
sinter/sinterstore: also same as above ...

(cherry picked from commit b8a5da80) (cherry picked from commit f4702b8b7a7da6cc661ddb6744cb322bc92e3267)
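A redis-py sketch of the fixed behavior (key names are illustrative):
```python
import redis

r = redis.Redis()
r.delete("dst", "s1", "str")
r.sadd("dst", "old")        # pre-existing destination
r.sadd("s1", "a")
r.set("str", "not-a-set")   # wrong type: forces the command to fail

try:
    r.sinterstore("dst", ["s1", "str"])
except redis.ResponseError as e:
    print(e)                # WRONGTYPE ...

# With this change the error is raised before the destination is
# touched, so "dst" keeps its old contents.
print(r.smembers("dst"))    # {b'old'}
```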
-
Jason Elbaum authored
When using RESP3, ZPOPMAX/ZPOPMIN should return nested arrays for consistency with other commands (e.g. ZRANGE). We do that only when the COUNT argument is present (similarly to how LPOP behaves); for the reasoning see https://github.com/redis/redis/issues/8824#issuecomment-855427955 This is a breaking change only when RESP3 is used and the COUNT argument is present! (cherry picked from commit 7f342020) (cherry picked from commit caaad2d686b2af0d13fbeda414e2b70e57635b5c)
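A sketch of the shape change, assuming redis-py 5+ where `protocol=3` selects RESP3 (client libraries may normalize replies, so the raw protocol is the authoritative view):
```python
import redis

r = redis.Redis(protocol=3)   # RESP3; redis-py 5+ assumption
r.zadd("z", {"a": 1, "b": 2})

# With COUNT, the RESP3 reply is now an array of [member, score]
# pairs (nested, like ZRANGE ... WITHSCORES); without COUNT the
# reply shape is unchanged. Under RESP2 nothing changes either.
print(r.zpopmax("z", 2))
```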
-
perryitay authored
There are two issues fixed in this commit: 1. We want to fail the EXEC command in case there is a watched key that's logically expired but not yet deleted by active expire or lazy expire (see the sketch below). 2. We saw that currently the cached time is updated in every `call()` (including nested calls); this time is also being used for the isKeyExpired comparison. We want to update the cached time only in the first call (execCommand). Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit ac8b1df8)
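A redis-py sketch of the first issue (timings and key names are illustrative):
```python
import time
import redis

r = redis.Redis()
r.set("watched", "v", px=50)     # will logically expire in 50ms

with r.pipeline() as pipe:
    pipe.watch("watched")
    time.sleep(0.2)              # let the TTL pass without touching the key
    pipe.multi()
    pipe.set("other", "x")
    try:
        # With this fix, EXEC fails even if the expired key hasn't
        # been removed yet by active or lazy expire.
        pipe.execute()
    except redis.WatchError:
        print("EXEC aborted")
```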
-
Oran Agra authored
The `Tracking gets notification of expired keys` test in tracking.tcl used to hang in the Valgrind CI quite a lot. It turns out the reason is that with Valgrind and a busy machine, the server cron active expire cycle could easily run in the same event loop as the command that created `mykey`, so that when the key got expired, there were two change events to broadcast: one that set the key and one that expired it. But since we used raxTryInsert, the client that was associated with the "last" change was the one that created the key, so NOLOOP filtered that event. This commit adds a test that reproduces the problem by using lazy expire in a multi-exec, which makes sure the key expires in the same event loop as the one that added it. (cherry picked from commit 9b564b52)
-
- 02 Mar, 2021 1 commit
-
- 22 Feb, 2021 1 commit
-
Viktor Söderqvist authored
Without this fix, RM_ZsetRem can leave empty sorted sets, which are not allowed to exist. Removing from a sorted set while iterating seems to work (while inserting causes failed assertions). RM_ZsetRangeEndReached is modified to return 1 if the key doesn't exist, to terminate iteration when the last element has been removed. (cherry picked from commit aea6e71e)
-
- 12 Jan, 2021 8 commits
-
Oran Agra authored
c4fdf09c added a test that now fails with Valgrind. It fails for two reasons: 1) the test samples the used memory and then limits maxmemory to that value, but it turns out this is not atomic, and on slow machines the background cron process that cleans out old query buffers reduces the memory, so that the setting doesn't cause eviction. 2) the dbsize was tested late, after reading some invalidation messages; by that time more and more keys got evicted, partially draining the db. This is not the focus of this fix (still a known limitation). (cherry picked from commit a102b21d)
-
Oran Agra authored
When client tracking is enabled, signalModifiedKey can increase memory usage; this can cause the loop in performEvictions to keep running, since it was measuring the memory usage impact of signalModifiedKey. The section that measures the memory impact of the eviction should cover just dbDelete, excluding keyspace notification, client tracking, and propagation to AOF and replicas. This resolves part of the problem described in #8069. P.S. the fix took 1 minute, the test took about 3 hours to write. (cherry picked from commit c4fdf09c)
-
Madelyn Olson authored
(cherry picked from commit 411bcf1a)
-
Yang Bodong authored
This PR not only fixes the problem that SWAPDB does not make the transaction fail (see the sketch below), but also optimizes the FLUSHALL and FLUSHDB commands to set the CLIENT_DIRTY_CAS flag, to avoid unnecessary traversal of clients. FLUSHDB was changed to first iterate over all watched keys, and then over the clients watching each key, instead of iterating over all clients and, for each one, iterating over its watched keys. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 10f94b0a)
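A redis-py sketch of the SWAPDB fix mentioned above (database indexes and key names are illustrative):
```python
import redis

watcher = redis.Redis(db=0)
other = redis.Redis(db=0)
watcher.set("k", "v")

with watcher.pipeline() as pipe:
    pipe.watch("k")
    other.swapdb(0, 1)           # swap databases from another connection
    pipe.multi()
    pipe.get("k")
    try:
        pipe.execute()
    except redis.WatchError:
        print("EXEC aborted: SWAPDB now dirties watched keys")
```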
-
Oran Agra authored
When a Lua script returns a map to Redis (a feature which was added in Redis 6 together with RESP3), it would have returned the value first and the key second. If the client was using RESP2, it was getting them out of order, and if the client was using RESP3, it was getting a map of value => key. This was happening regardless of whether the Lua script used redis.setresp(3). It also affected the case where the script returned a map which it got from Redis by doing something like: redis.setresp(3); return redis.call() This fix is a breaking change for Redis 6.0 users who happened to rely on the wrong order (either ones that used redis.setresp(3), or ones that returned a map explicitly). This commit also includes two other changes in the tests: 1. The test suite now handles RESP3 maps as dicts rather than nested lists. 2. Remove some redundant (duplicate) tests from tracking.tcl. (cherry picked from commit 2017407b)
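A small sketch of both cases, using redis-py to run the scripts (the `{map=...}` return form follows the Redis 6 Lua-to-RESP3 conversion rules; under RESP2 the map is flattened into key/value pairs, which is where the swapped order was visible):
```python
import redis

r = redis.Redis()  # RESP2 client

# An explicit map: before the fix, the flattened RESP2 reply came back
# as [value, key] instead of [key, value].
print(r.eval("return {map = {greeting = 'hello'}}", 0))

# A map passed through from redis.call after redis.setresp(3) was
# affected the same way (HGETALL returns a map under RESP3):
r.hset("h", "f", "v")
print(r.eval("redis.setresp(3); return redis.call('HGETALL', KEYS[1])", 1, "h"))
```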
-
Oran Agra authored
Module blocked clients cache the response in a temporary client. The reply list in this client would be affected by the recent fix in #7202, but when the reply is later copied into the real client, it bypassed all the checks for the output buffer limit, which resulted in both responding with a partial response to the client and not disconnecting it at all. (cherry picked from commit 48efc25f)
-
guybe7 authored
The bug was introduced by #5021, which only attempted to avoid EXISTS on an already expired key from returning 1 on a replica. Before that commit, dbExists was used instead of lookupKeyRead (which had an undesired side effect of "touching" the LRU/LFU). Other than that, this commit fixes OBJECT to also come back empty-handed on expired keys on a replica, and DEBUG DIGEST-VALUE to behave like DEBUG OBJECT (get the data from the key regardless of its expired state). (cherry picked from commit f8ae9917)
-
- 27 Oct, 2020 14 commits
-
Qu Chen authored
This wrong behavior was backed by a test, and also documentation, and dates back to 2010. But it makes no sense to anyone involved, so it was decided to change that. Note that 20eeddfb (invalidate watch on expire on access) was released in 6.0 RC2, and 2d1968f8 (invalidate watch when a key is evicted) was released in 6.0.0 GA; both make similar changes. (cherry picked from commit 556acefe)
-
Yossi Gottlieb authored
Useful for running tests on systems which may be way slower than usual. (cherry picked from commit 843a13e8)
-
Meir Shpilraien (Spielrein) authored
* Introduce new APIs: RM_GetContextFlagsAll and RM_GetKeyspaceNotificationFlagsAll, which return the full flags mask of each feature. The module writer can check, based on this value, whether the flags they need are supported.
* For each flag, introduce a new value in redismodule.h; this value represents the LAST value and should be there as a reminder to update it when a new value is added. It is also used in the code to calculate the full flags mask (assuming flags increase incrementally). In addition, state that the module writer should not use the LAST flag directly and should use the GetFlagsAll APIs instead.
* Introduce a new API: RM_IsSubEventSupported, which returns, for a given event and subevent, whether or not the subevent is supported.
* Introduce a new macro RMAPI_FUNC_SUPPORTED(func) that returns whether or not a function API is supported, by comparing it to NULL.
* Introduce a new API: int RM_GetServerVersion(), which returns the current Redis version in the format 0x00MMmmpp, e.g. 0x00060008 (see the worked example below).
* Changed the unstable version from 999.999.999 to 255.255.255.

Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Yossi Gottlieb <yossigo@gmail.com> (cherry picked from commit adc3183c)
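A worked example of the version format, using the value given above:
```python
# RM_GetServerVersion packs the version as 0x00MMmmpp:
v = 0x00060008
major = (v >> 16) & 0xFF
minor = (v >> 8) & 0xFF
patch = v & 0xFF
print(f"{major}.{minor}.{patch}")  # 6.0.8
```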
-
Yossi Gottlieb authored
This API function makes it possible to retrieve the X.509 certificate used by clients to authenticate TLS connections. (cherry picked from commit 0aec98dc)
-
Yossi Gottlieb authored
The main motivation here is to provide a way for modules to create a single, global context that can be used for logging. Currently, it is possible to obtain a thread-safe context that is not attached to any blocked client by using `RM_GetThreadSafeContext`. However, the attached context is not linked to the module identity, so log messages produced are not tagged with the module name. Ideally we'd fix this in `RM_GetThreadSafeContext` itself, but as it doesn't accept the current context as an argument, there's no way to do that in a backwards compatible manner. (cherry picked from commit 907da058)
-
Yossi Gottlieb authored
This is essentially the same as calling COMMAND GETKEYS but provides a more efficient interface that can be used in every context (i.e. not a Redis command). (cherry picked from commit 7d117d75)
-
Madelyn Olson authored
(cherry picked from commit 2127f7c8)
-
Oran Agra authored
Track and report memory used by clients' argv. This is very useful in case clients start sending a command and don't complete it, in which case the first args of the command are already trimmed from the query buffer. In an effort to avoid cache misses and overheads while keeping track of these, I avoid calling sdsZmallocSize and instead use sdslen / bulk-len, which can at least give some insight into the problem. This memory is now added to the total clients' memory usage, as well as to the client list. (cherry picked from commit bea40e6a)
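A sketch of where this shows up, assuming the new per-client field is exposed through CLIENT LIST (the `argv-mem` field name is my reading of the upstream commit; availability depends on the server version):
```python
import redis

r = redis.Redis()
for client in r.client_list():
    # argv-mem: memory held by the client's parsed command arguments
    print(client.get("name"), client.get("argv-mem"))
```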
-
nitaicaro authored
PROBLEM: [$rd1 read] reads invalidation messages one by one, so it's never going to see the second invalidation message produced after INCR b, whether or not it exists. Adding another read would block in case no invalidation message is produced. FIX: We switch the order of "INCR a" and "INCR b"; now "INCR b" comes first. We still only read the first invalidation message produced. If an invalidation message is wrongly produced for b, then it will be produced before that of a, since "INCR b" comes before "INCR a". Co-authored-by:
Nitai Caro <caronita@amazon.com> (cherry picked from commit 8fb89a57)
-
Wang Yuan authored
Before this commit, we would have continued to add replies to the reply buffer even if the client output buffer limit was reached, so the used memory would keep increasing over the configured limit. What's more, we shouldn't write any reply to a client if it has the 'CLIENT_CLOSE_ASAP' flag set, because that doesn't conform to its definition and we close all clients flagged with 'CLIENT_CLOSE_ASAP' in 'beforeSleep'. Because of code execution order, before this change we might first write part of the replies to the socket before disconnecting it, but in fact we can't guarantee sending the full replies to clients, since the OS socket buffer is limited. This unexpected behavior made some commands work well, for instance ACL DELUSER: if the client deletes the current user, we need to send a reply to the client and close the connection, but before this change we closed the client first and then wrote the reply to the reply buffer. Secondly, we shouldn't rely on this, despite the fact that it works well in most cases. We add a flag 'CLIENT_CLOSE_AFTER_COMMAND' to mark clients; this flag means we will close the client after executing commands and sending all the replies, so that we can write replies to the reply buffer during command execution, send the replies to the client, and close it later. We also fix some implicit problems: if the client output buffer limit is enforced in 'multi/exec', all commands will be executed completely in Redis and the client will not read any reply instead of partial replies. Even more, if the client executes 'ACL DELUSER' on the current user inside 'multi/exec', it will not read the replies after 'ACL DELUSER', just like a client executing 'CLIENT KILL' on itself in 'multi/exec'. We added some tests for output buffer limit breach during multi/exec, using a pipeline of many small commands rather than one with a big response. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 57709c4b)
-
Wang Yuan authored
We're already using bg_unlink in several places to delete the rdb file in the background, to avoid paying the cost of the deletion on our main thread. This commit uses bg_unlink to remove the temporary rdb file in the background too. However, in case we delete that rdb file just before exiting, we don't actually wait for the background thread or the main thread to delete it, and just let the OS clean up after us; i.e. we open the file, unlink it, and exit with the fd still open. Furthermore, rdbRemoveTempFile can be called from a thread and was using snprintf, which is not async-signal-safe; we now use ll2string instead. (cherry picked from commit b002d2b4)
-