- 04 Oct, 2021 9 commits
-
-
Oran Agra authored
Recently we found two issues in the fuzzer tester: #9302 #9285. After fixing them, more problems surfaced, and this PR (as well as #9297) aims to fix them. Here's a list of the fixes:
- Prevent an overflow when allocating a dict hashtable
- Prevent OOM when attempting to allocate a huge string
- Prevent a few invalid accesses in listpack
- Improve sanitization of the listpack first entry
- Validate the integrity of stream consumer group PELs
- Validate the integrity of stream listpack entry IDs
- Validate a ziplist tail followed by extra data which starts with 0xff

Co-authored-by:
sundb <sundbcn@gmail.com> (cherry picked from commit 0c90370e)
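Several of these validations belong to the deep payload sanitization path. A minimal client-side sketch, assuming redis-py and that the `sanitize-dump-payload` config (present since 6.2) is what opts a server into deep validation; the garbage bytes below only trip the generic payload check and are not one of the fuzzer's crafted payloads.

```python
import redis

r = redis.Redis()

# Assumption: this config is exposed in the running build and enables deep validation.
r.config_set("sanitize-dump-payload", "yes")

r.delete("k")   # avoid a BUSYKEY error from a previous run
try:
    # A corrupt payload is refused before any in-memory structure is built.
    r.execute_command("RESTORE", "k", 0, b"\x00garbage-not-a-dump")
except redis.ResponseError as e:
    print("rejected:", e)
```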
-
sundb authored
When we load an RDB or handle a RESTORE command, encountering a length of 0 will result in the creation of an empty key. This could either be a corrupt payload or the result of a bug (see #8453). This PR mainly fixes the following: 1) The RESTORE command will return a `Bad data format` error. 2) When loading an RDB, we will silently discard the key. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 8ea777a6)
-
Viktor Söderqvist authored
(cherry picked from commit 1c59567a)
-
Huang Zhw authored
When redis-cli received ASK, it used string matching incorrectly and didn't handle it. When we access a slot which is in a migrating state, the server may return ASK. After redirecting to the new node, we need to send an ASKING command before retrying the command. With this PR, after redis-cli receives ASK, it sends an ASKING command before resending the original command once it has reconnected. Other changes:
* Make redis-cli -u and -c (unix socket and cluster mode) incompatible with one another.
* When sending a command fails, avoid the second reconnect retry and just print the error info; users will decide what to do next. See #9277.
* Add a test faking two redis nodes in TCL that just send ASK and OK in the redis protocol, to test the ASK behavior. Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit cf61ad14)
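A minimal sketch of the redirect handshake described above, using raw sockets in Python (redis-cli itself does this in C). The hosts and ports are hypothetical; the point is only the order: connect to the target node, send ASKING, then retry the original command.

```python
import socket

def cmd(sock, *parts):
    """Encode one command in RESP and return the raw reply bytes."""
    out = f"*{len(parts)}\r\n".encode()
    for p in parts:
        p = p if isinstance(p, bytes) else str(p).encode()
        out += b"$%d\r\n%s\r\n" % (len(p), p)
    sock.sendall(out)
    return sock.recv(65536)

a = socket.create_connection(("127.0.0.1", 7000))   # hypothetical source node
reply = cmd(a, "GET", "somekey")
if reply.startswith(b"-ASK"):
    # "-ASK <slot> <host:port>": go to the target node, send ASKING, then retry.
    host, port = reply.split()[2].decode().split(":")
    b = socket.create_connection((host, int(port)))
    cmd(b, "ASKING")                                  # must precede the retried command
    print(cmd(b, "GET", "somekey"))
```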
-
Binbin authored
With an empty src key, we need to deal with two situations:
1. non-STORE: We should return an empty array.
2. STORE: Try to delete the store key and return 0.

This applies to both GEOSEARCHSTORE (new to v6.2) and GEORADIUS STORE (which was broken since forever). This PR tries to fix #9261, i.e. both STORE variants would have behaved like the non-STORE variants when the source key was missing, returning an empty array and not deleting the destination key, instead of returning 0 and deleting the destination key. Also add more tests for some commands:
- GEORADIUS: wrong type src key, non-existing src key, empty search, store with non-existing src key, store with empty search
- GEORADIUSBYMEMBER: wrong type src key, non-existing src key, non-existing member, store with non-existing src key
- GEOSEARCH: wrong type src key, non-existing src key, empty search, frommember with non-existing member
- GEOSEARCHSTORE: wrong type key, non-existing src key, fromlonlat with empty search, frommember with non-existing member

Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 86555ae0)
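A minimal sketch of the fixed behavior, assuming redis-py against a server with this patch; the key names and coordinates are hypothetical.

```python
import redis

r = redis.Redis()
r.delete("src")                                             # source key is missing
r.execute_command("GEOADD", "dst", 13.361389, 38.115556, "stale-member")  # pre-existing destination

# STORE variant with a missing source: returns 0 and deletes the destination.
n = r.execute_command("GEOSEARCHSTORE", "dst", "src",
                      "FROMLONLAT", 15, 37, "BYRADIUS", 200, "km", "ASC")
print(n, r.exists("dst"))                                   # expected after the fix: 0 0

# Non-STORE variant with a missing source: returns an empty array.
print(r.execute_command("GEOSEARCH", "src",
                        "FROMLONLAT", 15, 37, "BYRADIUS", 200, "km", "ASC"))
```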
-
Oran Agra authored
- Fix possible heap corruption in ziplist and listpack caused by trying to allocate more than the maximum size of 4GB.
- Prevent ziplist (hash and zset) from reaching a size above 1GB; it will be converted to HT encoding, since that's not a useful size.
- Prevent listpack (stream) from reaching a size above 1GB.
- XADD will start a new listpack if the new record may cause the previous listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB.
- The List type (ziplist in quicklist) was truncating strings that were over 4GB; now it'll respond with an error.
-
meir@redislabs.com authored
The protocol parsing in 'ldbReplParseCommand' (Lua debugging) assumed protocol correctness. This means that if the following is given: `*1 $100 test`, the parser will try to read an additional 94 unallocated bytes after the client buffer. This commit fixes the issue by validating that there are actually enough bytes to read. It also limits the amount of data that can be sent by the debugger client to 1M, so the client will not be able to blow up the memory.
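The fix boils down to never trusting a declared bulk length without checking how many bytes were actually received. A generic sketch of that check in Python, not the actual C code in `ldbReplParseCommand`:

```python
def read_bulk(buf: bytes, pos: int):
    """Parse one RESP bulk string ($<len>\r\n<payload>\r\n) with bounds checks."""
    end = buf.find(b"\r\n", pos)
    if end == -1 or buf[pos:pos + 1] != b"$":
        raise ValueError("incomplete or malformed header")
    length = int(buf[pos + 1:end])
    start = end + 2
    # The old code trusted `length`; with "*1 $100 test" it would read 94 bytes
    # past what the client actually sent. Validate before touching the payload.
    if start + length + 2 > len(buf):
        raise ValueError("declared length exceeds received data")
    return buf[start:start + length], start + length + 2

payload, _ = read_bulk(b"$4\r\ntest\r\n", 0)
print(payload)   # b'test'
```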
-
Oran Agra authored
This change sets a low limit for multibulk and bulk length in the protocol for unauthenticated connections, so that they can't easily cause Redis to allocate massive amounts of memory by sending just a few characters on the network. The new limits are 10 arguments of 16kb each (instead of 1M arguments of 512MB each).
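A raw-socket sketch of the kind of request these limits guard against, assuming a password-protected server (an unauthenticated connection): before the change, a tiny header could make the server prepare for the declared sizes; with the new limits such a header is refused.

```python
import socket

s = socket.create_connection(("127.0.0.1", 6379))
# Claim an absurd number of arguments before authenticating. With the new limits
# (roughly 10 arguments of 16kb each pre-auth), the server rejects this instead
# of allocating memory for one million 512MB bulks.
s.sendall(b"*1000000\r\n$100000\r\n")
print(s.recv(4096))   # expect an error reply / disconnect rather than a hang
```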
-
- 21 Jul, 2021 20 commits
-
-
Huang Zhw authored
On 32-bit platforms, the bit position of GETBIT/SETBIT/BITFIELD/BITCOUNT/BITPOS may overflow (see CVE-2021-32761) (#9191). GETBIT and SETBIT may access the wrong address because of the wrap-around; BITCOUNT and BITPOS may return wrapped results; BITFIELD may access the wrong address but also allocate insufficient memory and segfault (see CVE-2021-32761). This commit uses `uint64_t` or `long long` instead of `size_t`. Related: https://github.com/redis/redis/pull/8096

On a 32-bit platform:
> setbit bit 4294967295 1
(integer) 0
> config set proto-max-bulk-len 536870913
OK
> append bit "\xFF"
(integer) 536870913
> getbit bit 4294967296
(integer) 0

When the bit index is larger than 4294967295, size_t can't hold the bit index. In the past, `proto-max-bulk-len` was limited to 536870912, so there was no problem. After this commit, the bit position is stored in `uint64_t` or `long long`, so when `proto-max-bulk-len > 536870912`, 32-bit platforms can still behave correctly. On 64-bit platforms this problem still exists in theory: the main reason is that the bit position is 8 times the byte position, so when proto-max-bulk-len is very large, the bit position may overflow. But on 64-bit platforms we don't have such long strings, so this bug may never happen. Additionally, this commit adds a test that costs `512MB` of memory and is tagged as `large-memory`; the FreeBSD CI and Valgrind CI ignore this test. (cherry picked from commit 71d45287)
-
Oran Agra authored
- Introduce a new sdssubstr API as a building block for sdsrange. The API of sdsrange is often hard to work with and also has corner cases that cause bugs. sdssubstr is easier to work with and also simplifies the implementation of sdsrange.
- Revert the fix to RM_StringTruncate and just use sdssubstr instead of sdsrange.
- Solve valgrind warnings from the new tests introduced by the previous PR. (cherry picked from commit ae418eca)
-
Yossi Gottlieb authored
Modules that use background threads with thread safe contexts are likely to use RM_BlockClient() without a timeout function, because they do not set up a timeout. Before this commit, `CLIENT UNBLOCK` would result in a crash as the `NULL` timeout callback is called. Beyond just crashing, this is also logically wrong as it may throw the module into an unexpected client state. This commit makes `CLIENT UNBLOCK` on such clients behave the same as any other client that is not in a blocked state and therefore cannot be unblocked. (cherry picked from commit aa139e2f)
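From the client's point of view, the new behavior is simply that of unblocking a client that isn't blocked. A small sketch, assuming redis-py; the target id here is taken from an ordinary connected client rather than a module-blocked one.

```python
import redis

r = redis.Redis()
other = redis.Redis()
target_id = other.client_id()        # a connected client that is not blocked

# CLIENT UNBLOCK on a non-blocked client returns 0; after this fix, a client
# blocked by a module without a timeout callback is treated the same way
# instead of crashing the server.
print(r.execute_command("CLIENT", "UNBLOCK", target_id))   # -> 0
```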
-
Binbin authored
SINTERSTORE would have deleted the dest key right away, even when later on it is bound to fail on a (WRONGTYPE) error. With this change it first picks up all the input keys, and only later deletes the dest key if one is empty. Also add more tests for some commands, mainly focusing on:
- wrong type error: expand the test case (based on the sinter bug) in the non-store variant, and add tests for the store variant (although it exists in the non-store variant, it is better to have the same tests)
- the dstkey result when we meet a non-existing key (empty set) in *store

sdiff:
- improve the test case about the wrong type error (the one we found in sinter, although it is safe in sdiff)
- add a test about using a non-existing key (treat it like an empty set)

sdiffstore:
- following the sdiff test cases, also add some tests about wrong type errors and non-existing keys
- the difference is that in sdiffstore, we also consider the dstkey result

sunion/sunionstore: add more tests (same as above)
sinter/sinterstore: same as above ... (cherry picked from commit b8a5da80)
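A minimal sketch of the fixed behavior, assuming redis-py; the key names are hypothetical.

```python
import redis

r = redis.Redis()
r.sadd("dst", "old-result")      # pre-existing destination
r.sadd("s1", "a", "b")
r.set("s2", "not-a-set")         # wrong type on purpose

try:
    r.sinterstore("dst", ["s1", "s2"])
except redis.ResponseError as e:
    print(e)                     # WRONGTYPE ...

# Before the fix, dst was already deleted at this point; now it is kept,
# because the inputs are validated before the destination is touched.
print(r.exists("dst"))           # -> 1
```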
-
Jason Elbaum authored
When using RESP3, ZPOPMAX/ZPOPMIN should return nested arrays for consistency with other commands (e.g. ZRANGE). We do that only when the COUNT argument is present (similarly to how LPOP behaves). For the reasoning, see https://github.com/redis/redis/issues/8824#issuecomment-855427955 This is a breaking change only when RESP3 is used and the COUNT argument is present! (cherry picked from commit 7f342020)
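The difference is visible at the protocol level. A raw-socket sketch, assuming a RESP3-capable server (6.0+) with this change; it just prints the wire bytes so the nested framing of `ZPOPMAX ... COUNT` can be inspected.

```python
import socket

def cmd(sock, *parts):
    out = f"*{len(parts)}\r\n".encode()
    for p in parts:
        p = str(p).encode()
        out += b"$%d\r\n%s\r\n" % (len(p), p)
    sock.sendall(out)
    return sock.recv(65536)

s = socket.create_connection(("127.0.0.1", 6379))
cmd(s, "HELLO", "3")                       # switch the connection to RESP3
cmd(s, "ZADD", "z", "1", "a", "2", "b")
# With COUNT under RESP3 the reply is now an array of [member, score] pairs
# instead of one flat member/score list.
print(cmd(s, "ZPOPMAX", "z", "2"))
```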
-
Mikhail Fesenko authored
Direct redis-cli REPL prints to stderr, because --rdb can print to stdout; fflush stdout after responses (#9136)
1. redis-cli can output --rdb data to stdout, but redis-cli also writes some messages to stdout, which will mess up the RDB.
2. Make redis-cli flush stdout when printing a reply. This was needed in order to fix a hang in the redis-cli test that uses --replica. Note that printf does flush when there's a newline, but fwrite does not.
3. Fix the redis-cli --replica test, which used to pass previously because it didn't really care what it read, and because redis-cli used printf to print these other things to stdout.
4. Improve the redis-cli --replica test to run with both diskless and disk-based replication. Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Viktor Söderqvist <viktor@zuiderkwast.se> (cherry picked from commit 1eb4baa5)
-
Oran Agra authored
- Promote the code in DEBUG PROTOCOL to addReplyBigNum.
- DEBUG PROTOCOL ATTRIB skips the attribute when the client is RESP2.
- networking.c: addReply for push and attribute types generates an assertion when called on a RESP2 client; anything else would produce a broken protocol that clients can't handle. (cherry picked from commit 6a5bac30)
-
Binbin authored
Due to a copy-paste bug, it used to reply with a null response rather than an empty array. This commit includes new tests that look at the RESP response directly, in order to be able to tell the difference between them. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit a418a2d3)
-
Leibale Eidelman authored
mistakenly it used to return an empty array rather than 0. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 95274f1f)
-
Evan authored
Previously, passing 0 for newlen would not truncate the string at all. This adds handling of this case, freeing the old string and creating a new empty string. Other changes:
- Move `src/modules/testmodule.c` to `tests/modules/basics.c`
- Introduce that basic test into the test suite
- Add tests to cover StringTruncate
- Add `test-modules` build target for the main makefile
- Extend `distclean` build target to clean modules too (cherry picked from commit 1ccf2ca2)
-
Huang Zhw authored
The decision to stop trimming due to LIMIT in XADD and XTRIM was made only after the limit was already reached, i.e. the code was deleting **at least** that count of records (from the LIMIT argument's perspective, not the MAXLEN), instead of **up to** that count of records. See #9046. (cherry picked from commit eaa7a7bb)
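For reference, the LIMIT clause this fix concerns is the one attached to approximate trimming. A small sketch, assuming redis-py; the stream and field names are hypothetical.

```python
import redis

r = redis.Redis()
for i in range(10000):
    # MAXLEN ~ 1000 LIMIT 300: trim roughly to 1000 entries, deleting at most
    # ("up to", after this fix) 300 entries per call rather than at least 300.
    r.execute_command("XADD", "mystream", "MAXLEN", "~", 1000, "LIMIT", 300,
                      "*", "field", i)
print(r.xlen("mystream"))
```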
-
YaacovHazan authored
When tests stop the 'load handler' by killing the process that generates the load, some commands that are already in the input buffer might still be processed by the server. This may cause some instability in tests that count on no more commands being processed after we stop the 'load handler'. In this commit, a new proc 'wait_load_handlers_disconnected' is added, to verify that no more commands from any 'load handler' are processed, by checking that the clients that generate the load are disconnected. Also, replace the check of dbsize with wait_for_ofs_sync before comparing debug digests, as it would fail in case the last key the workload wrote was an overridden key (not a new one).

Affected tests
Race fix:
- failover command to specific replica works
- Connect multiple replicas at the same time (issue #141), master diskless=$mdl, replica diskless=$sdl
- AOF rewrite during write load: RDB preamble=$rdbpre

Cleanup and speedup:
- Test replication with blocking lists and sorted sets operations
- Test replication with parallel clients writing in different DBs
- Test replication partial resync: $descr (diskless: $mdl, $sdl, reconnect: $reconnect) (cherry picked from commit 32a2584e)
-
perryitay authored
There are two issues fixed in this commit:
1. We want to fail the EXEC command in case there is a watched key that's logically expired but not yet deleted by active expire or lazy expire.
2. We saw that the cached time is currently updated in every `call()` (including nested calls), and this time is also being used for the isKeyExpired comparison; we want to update the cached time only in the first call (execCommand). Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit ac8b1df8)
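A minimal sketch of the first point, assuming redis-py; the short TTL and the sleep are only there so the key is logically expired before EXEC while active/lazy expiry may not yet have removed it.

```python
import time
import redis

r = redis.Redis()
r.set("watched", "v", px=100)        # key with a 100ms TTL

with r.pipeline() as pipe:
    pipe.watch("watched")
    time.sleep(0.2)                  # key is now logically expired, maybe not yet deleted
    pipe.multi()
    pipe.set("other", "x")
    try:
        pipe.execute()               # with the fix, EXEC aborts in this case
    except redis.WatchError:
        print("transaction aborted")
```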
-
Oran Agra authored
The `Tracking gets notification of expired keys` test in tracking.tcl used to hang in valgrind CI quite a lot. It turns out the reason is that with valgrind and a busy machine, the server cron active expire cycle could easily run in the same event loop as the command that created `mykey`, so that when the key got expired, there were two change events to broadcast: one that set the key and one that expired it. But since we used raxTryInsert, the client that was associated with the "last" change was the one that created the key, so the NOLOOP filtered that event. This commit adds a test that reproduces the problem by using lazy expire in a multi-exec, which makes sure the key expires in the same event loop as the one that added it. (cherry picked from commit 9b564b52)
-
- 01 Jun, 2021 3 commits
-
-
YaacovHazan authored
In diskless replication, we create a read pipe for the RDB between the child and the parent. When we close this pipe (fd), the read handler also needs to be removed from the event loop (if it is still registered). Otherwise, the next time we use the same fd, the registration will fail (panic), because we will use EPOLL_CTL_MOD (the fd is still registered in the event loop) on an fd that was already removed from epoll_ctl. (cherry picked from commit 501d7755)
-
Madelyn Olson authored
Redact commands that include sensitive data from slowlog and monitor (cherry picked from commit a59e75a4)
-
yoav-steinberg authored
When a client breached the output buffer soft limit but then went idle, we didn't disconnect on the soft limit timeout; now we do. Note this also resolves some sporadic test failures due to Linux buffering data, which caused tests to fail if during the test we went back under the soft COB limit. Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
sundb <sundbcn@gmail.com> (cherry picked from commit 152fce5e)
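For context, the soft limit and its timeout are the existing client-output-buffer-limit settings; a sketch of setting them at runtime, assuming redis-py (the class and values below are illustrative).

```python
import redis

r = redis.Redis()
# Format: <class> <hard> <soft> <soft-seconds>. A pubsub client that stays over
# 1mb of pending output for more than 10 seconds gets disconnected. With this
# fix the timeout is also enforced if the client goes idle after breaching the
# soft limit.
r.config_set("client-output-buffer-limit", "pubsub 0 1mb 10")
```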
-
- 03 May, 2021 3 commits
-
-
Madelyn Olson authored
Interior rax pointers were not being freed (cherry picked from commit c73b4ddf)
- 19 Apr, 2021 4 commits
-
-
Hanna Fadida authored
Adding a new type mask for keyspace notifications, REDISMODULE_NOTIFY_MODULE, to enable unique notifications from commands on REDISMODULE_KEYTYPE_MODULE type keys (which is currently unsupported). Modules can subscribe to a module key keyspace notification via RM_SubscribeToKeyspaceEvents, and clients via the notify-keyspace-events option of redis.conf or via CONFIG SET, with the character 'd' or 'A' (the REDISMODULE_NOTIFY_MODULE type mask is part of the '**A**ll' notation for keyspace notifications). Refactor: move some pubsub test infra from pubsub.tcl to util.tcl to be re-used by other tests.
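Client-side, the new class is enabled with the 'd' character (or implicitly via 'A'). A sketch of the wiring, assuming redis-py; actual events only arrive once a loaded module emits notifications for its key type.

```python
import redis

r = redis.Redis()
# 'K' = keyspace channel prefix, 'd' = the new module key type events.
r.config_set("notify-keyspace-events", "Kd")

p = r.pubsub()
p.psubscribe("__keyspace@0__:*")
# First message is the psubscribe confirmation; module key events follow
# whenever a loaded module fires them.
print(p.get_message(timeout=1))
```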
-
guybe7 authored
Before this commit, using RM_Call without "!" could cause the master to lazy-expire a key (delete it) without replicating the deletion to replicas. This could cause the replica's memory usage to gradually grow and could also cause consistency issues if the master and replica have a clock diff. This bug was introduced in #8617. Added a test which demonstrates that scenario.
-
Harkrishn Patro authored
In the initial release of Redis 6.2, setting a user to only allow pubsub access to a specific channel, and doing ACL SAVE, resulted in an assertion when ACL LOAD was used. This was later changed by #8723 (not yet released), but still not properly resolved (now it errors instead of crashing). The problem is that the server that generates an ACL file doesn't know what the setting of the acl-pubsub-default config will be in the server that loads it, so ACL SAVE needs to always start with the resetchannels directive. This should still be compatible with old ACL files (from Redis 6.0), and ones from earlier versions of 6.2 that didn't mess with channels. Co-authored-by:
Harkrishn Patro <harkrisp@amazon.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
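The generated rules therefore put resetchannels ahead of any channel patterns, so the file loads the same way regardless of acl-pubsub-default. A sketch of an equivalent rule using redis-py; the user name, password and channel pattern are hypothetical.

```python
import redis

r = redis.Redis()
# Grant channels only after an explicit resetchannels, mirroring what ACL SAVE
# now always emits first for each user.
r.execute_command("ACL", "SETUSER", "app", "on", ">secret",
                  "resetchannels", "&events.*", "+subscribe")
print(r.execute_command("ACL", "LIST"))
```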
-
sundb authored
The tail size of c->reply is 16kb, but the test only publishes a few chars each time, and due to a change in #8699 the obuf limit is now checked only when a new memory allocation is made, so this test would sometimes fail to trigger a soft limit disconnection in time. The solution is to write bigger payloads to the output buffer, but still limit their rate (not more than 100k/s).
-
- 18 Apr, 2021 1 commit
-
-
Yossi Gottlieb authored
Disables #8649 and subsequent attempts to stabilize the test.
-