- 21 Jul, 2021 21 commits
-
-
Oran Agra authored
-
Huang Zhw authored
On 32 bit platforms, the bit position of GETBIT/SETBIT/BITFIELD/BITCOUNT/BITPOS may overflow (see CVE-2021-32761) (#9191). GETBIT and SETBIT may access the wrong address because of the wrap-around, BITCOUNT and BITPOS may return wrapped results, and BITFIELD may access the wrong address but also allocate insufficient memory and segfault (see CVE-2021-32761). This commit uses `uint64_t` or `long long` instead of `size_t`. Related: https://github.com/redis/redis/pull/8096
On a 32 bit platform:
> setbit bit 4294967295 1
(integer) 0
> config set proto-max-bulk-len 536870913
OK
> append bit "\xFF"
(integer) 536870913
> getbit bit 4294967296
(integer) 0
When the bit index is larger than 4294967295, size_t can't hold it. In the past `proto-max-bulk-len` was limited to 536870912, so there was no problem. After this commit the bit position is stored in `uint64_t` or `long long`, so when `proto-max-bulk-len > 536870912`, 32 bit platforms are still correct. For 64 bit platforms the problem theoretically still exists, the main reason being that the bit position is 8 times the byte position, so when proto-max-bulk-len is very large the bit position may overflow. But on 64 bit platforms we don't have strings that long, so this bug may never happen there. Additionally this commit adds a test that costs `512MB` of memory and is tagged `large-memory`; the freebsd and valgrind CIs are made to ignore it. * This test is disabled in this version since bitops doesn't rely on proto-max-bulk-len, but some of the overflows can still occur, so we do want the fixes. (cherry picked from commit 71d45287)
-
Huang Zhw authored
Fix module info: genModulesInfoStringRenderModulesList lacked a separator when there's more than one module in the list. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 1895e134)
-
Wen Hui authored
Make it possible for redis-cli cluster import to work with source and target nodes that require AUTH. Adds the flags --cluster-from-user, --cluster-from-pass and --cluster-askpass for source node authentication; target authentication uses the existing --user and --pass flags. Example:
./redis-cli --cluster import 127.0.0.1:7000 --cluster-from 127.0.0.1:6379 --pass 1234 --user default --cluster-from-user default --cluster-from-pass 123456
./redis-cli --cluster import 127.0.0.1:7000 --cluster-from 127.0.0.1:6379 --askpass --cluster-from-user default --cluster-from-askpass
(cherry picked from commit 639b73cd)
-
Oran Agra authored
- Promote the code in DEBUG PROTOCOL to addReplyBigNum.
- DEBUG PROTOCOL ATTRIB skips the attribute when the client is RESP2.
- networking.c: addReply for push and attribute types generates an assertion when called on a RESP2 client; anything else would produce a broken protocol that clients can't handle.
(cherry picked from commit 6a5bac30) (cherry picked from commit 7f38aa8bc719f709acdcefc35a45a7aa6faa76fa)
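For a quick manual check (a sketch; replies omitted, and using the attrib type is just one example):
./redis-cli -3 debug protocol attrib   (RESP3 client: the attribute reply is delivered)
./redis-cli -2 debug protocol attrib   (RESP2 client: the attribute is now skipped rather than producing a reply the client can't parse)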
-
Huang Zhw authored
In clusterManagerCommandImport, strcat was used to concatenate COPY and REPLACE onto the MIGRATE command, and the space may not be enough. Also, if we use --cluster-replace but not --cluster-copy, the MIGRATE command contained COPY instead of REPLACE. (cherry picked from commit a049f629) (cherry picked from commit d4771a995e99e61e2e8feb92ede06431195b0300)
-
Binbin authored
SINTERSTORE would have deleted the dest key right away, even when later on it is bound to fail on a (WRONGTYPE) error. With this change it first looks up all the input keys, and only later deletes the dest key if one is empty. Also adds more tests for some commands, mainly focused on:
- `wrong type error`: expand the test case (based on the sinter bug) in the non-store variant, and add tests for the store variant (although it exists in the non-store variant, it is better to have the same tests)
- the dstkey result when we meet a `non-existing key (empty set)` in the *store variants
sdiff: improve the test case about the wrong type error (the one we found in sinter, although it is safe in sdiff), and add a test about using a non-existing key (treated like an empty set)
sdiffstore: in line with the sdiff test cases, also add tests about `wrong type error` and `non-existing key`; the difference is that in sdiffstore we also check the `dstkey` result
sunion/sunionstore: add more tests (same as above)
sinter/sinterstore: same as above
(cherry picked from commit b8a5da80) (cherry picked from commit f4702b8b7a7da6cc661ddb6744cb322bc92e3267)
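A sketch of the scenario the fix addresses (hypothetical key names):
> sadd dest olddata
> set wrongtype not-a-set
> sinterstore dest missingkey wrongtype
(error) WRONGTYPE Operation against a key holding the wrong kind of value
> exists dest
(integer) 1
After the fix the WRONGTYPE error is reported and the pre-existing dest key is left intact; previously dest could be deleted before the type error was ever noticed.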
-
Jason Elbaum authored
When using RESP3, ZPOPMAX/ZPOPMIN should return nested arrays for consistency with other commands (e.g. ZRANGE). We do that only when the COUNT argument is present (similarly to how LPOP behaves). For the reasoning see https://github.com/redis/redis/issues/8824#issuecomment-855427955 This is a breaking change only when RESP3 is used and the COUNT argument is present! (cherry picked from commit 7f342020) (cherry picked from commit caaad2d686b2af0d13fbeda414e2b70e57635b5c)
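For illustration, a sketch with a hypothetical key (redis-cli output approximated):
> zadd myzset 1 a 2 b
(integer) 2
> zpopmax myzset 2
With RESP3 and a COUNT argument the reply is now a nested array of [member, score] pairs (matching what ZRANGE ... WITHSCORES returns under RESP3), instead of a flat array of alternating members and scores. Without COUNT the reply is unchanged.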
-
Rob Snyder authored
Adds a call to intrev16ifbe to ensure ZIPLIST_LENGTH is compared correctly. (cherry picked from commit eaa52719) (cherry picked from commit 4c181230855bb3f92e93b32a48db52e69bd7e509)
-
perryitay authored
There are two issues fixed in this commit:
1. We want to fail the EXEC command in case there is a watched key that's logically expired but not yet deleted by active expire or lazy expire.
2. We saw that currently the cached time is updated in every `call()` (including nested calls). This time is also used for the isKeyExpired comparison; we want to update the cached time only in the first call (execCommand).
Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit ac8b1df8)
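A sketch of the first issue (hypothetical key and TTL; timings are illustrative):
> set mykey somevalue px 50
> watch mykey
... wait more than 50ms without touching mykey, so it is logically expired but may not have been deleted yet ...
> multi
> get mykey
> exec
(nil)
After this fix, EXEC aborts (null reply) even when neither active nor lazy expire has removed the watched key yet.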
-
Huang Zhw authored
Do not install a file event to send data to the rewrite child when the parent has stopped sending the diff to the child in AOF rewrite. (#8767) In AOF rewrite, when the parent stops sending data to the child, if there is new rewrite data, the aofChildWriteDiffData write event will still be installed. This event then fires and deletes the file event without doing anything, and this happens over and over again until the AOF rewrite finishes. This bug used to waste a few system calls (epoll_ctl and epoll_wait) per excessive wake-up, each cycle triggered by receiving a write command from a client. (cherry picked from commit cb961d8c)
-
Omer Shadmi authored
Currently a replica is able to recover from a short read (when diskless loading is enabled) and avoid crashing/exiting; it replies to the master, and the rdb can then be sent again by the master for another load attempt by the replica. There were a few scenarios that were not behaving similarly, such as when there is no end-of-file marker, or when module aux data fails to load, both of which should be allowed to occur due to a short read. (cherry picked from commit f06d782f)
-
Oran Agra authored
The `Tracking gets notification of expired keys` test in tracking.tcl used to hang in the valgrind CI quite a lot. It turns out the reason is that with valgrind and a busy machine, the server cron active expire cycle could easily run in the same event loop as the command that created `mykey`, so that when the key got expired there were two change events to broadcast: one that set the key and one that expired it. But since we used raxTryInsert, the client associated with the "last" change was the one that created the key, so NOLOOP filtered that event. This commit adds a test that reproduces the problem by using lazy expire in a MULTI-EXEC, which makes sure the key expires in the same event loop as the one that added it. (cherry picked from commit 9b564b52)
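A rough sketch of the reproduction idea on a RESP3 connection (not the exact tracking.tcl test; the tracking flags and timings here are assumptions):
> client tracking on bcast noloop
> multi
> set mykey foo px 1
> debug sleep 0.1
> get mykey
> exec
Inside the EXEC the GET lazy-expires mykey, so the key is created and expired within the same event loop iteration, exercising the raxTryInsert/NOLOOP interaction described above.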
-
Maxim Galushka authored
Fixes #6792. Added support for REDIS_REPLY_SET in the raw and csv output of `./redis-cli`. To test, run: ./redis-cli -3 --csv COMMAND and ./redis-cli -3 --raw COMMAND. They now return results; before the change they failed with "Unknown reply type: 10". (cherry picked from commit 96bb0785)
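Any command whose RESP3 reply is a set can also be used, e.g. (a sketch with a hypothetical key; output approximated):
./redis-cli -3 sadd myset a b c
./redis-cli -3 --csv smembers myset
"a","b","c"
./redis-cli -3 --raw smembers myset
a
b
c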
-
YaacovHazan authored
In diskless replication, we create a read pipe for the RDB, between the child and the parent. When we close this pipe (fd), the read handler also needs to be removed from the event loop (if it is still registered). Otherwise, the next time we use the same fd, the registration will fail (panic), because we will use EPOLL_CTL_MOD (the fd is still registered in the event loop) on an fd that was already removed via epoll_ctl. (cherry picked from commit 501d7755)
-
guybe7 authored
Starting with Redis 6.0 (as part of the TLS feature), a diskless master uses a pipe from the fork child so that the parent is the one sending data to the replicas. This mechanism has an issue in which a hung replica will cause the master to wait forever for it to read the data sent to it, thus preventing the fork child from terminating and preventing the creation of any other forks. This PR adds a timeout mechanism: much like the ACK-based timeout, we disconnect replicas that aren't reading the RDB file fast enough. (cherry picked from commit d63d0260)
-
Oran Agra authored
Starting with Redis 6.0 and the changes we made to the diskless master to be suitable for TLS, I made the master avoid reaping (wait3) the pid of the child until we know all replicas are done reading their rdb. I did that in order to avoid a state where the rdb_child_pid is -1 but we don't yet want to start another fork (still busy serving that data to replicas). It turns out that the solution used so far was problematic in case the fork child was killed (e.g. by the kernel OOM killer); in that case there's a chance that we have currently disabled the read event on the rdb pipe, since we're waiting for a replica to become writable again. In that scenario the master would never realize the child exited, and the replica would remain hung too. Note that there's no mechanism to detect a hung replica while it's in the rdb transfer state. The solution here is to add another pipe which is used by the parent to tell the child it is safe to exit. This means that when the child exits, for whatever reason, it is safe to reap it. Besides that, I'm re-introducing an adjustment to REPLCONF ACK which was part of #6271 (Accelerate diskless master connections) but was dropped when that PR was rebased after the TLS fork/pipe changes (5a477946). Now that RdbPipeCleanup no longer calls checkChildrenDone, and the ACK has a chance to detect that the child exited, it should be the one to call it so that we don't have to wait for cron (server.hz) to do that. (cherry picked from commit 573246f7)
-
- 01 Jun, 2021 5 commits
-
-
M Sazzadul Hoque authored
-
Oran Agra authored
-
patpatbear authored
This patch fixes SINTERSTORE by adding the missing keyspace `del` event when any source set does not exist. Co-authored-by:
srzhao <srzhao@sysnew.com> (cherry picked from commit 46d9f31e)
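A sketch of the now-covered case (hypothetical key names; keyspace notifications must be enabled):
(subscriber connection)
> config set notify-keyspace-events KEA
> subscribe __keyevent@0__:del
(second connection)
> sadd dest a b
> sinterstore dest nosuchkey
(integer) 0
With this patch the subscriber now also receives the `del` event for dest when the destination is removed because a source set does not exist.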
-
- 30 May, 2021 1 commit
-
-
ikeberlein authored
-
- 03 May, 2021 9 commits
-
-
Oran Agra authored
-
Oran Agra authored
An integer overflow bug in Redis 6.2 could be exploited to corrupt the heap and potentially result in remote code execution. The vulnerability involves changing the default set-max-intset-entries configuration value, creating a large set key that consists of integer values and using the COPY command to duplicate it. The integer overflow bug exists in all versions of Redis starting with 2.6, where it could result in a corrupted RDB or DUMP payload, but could not be exploited through COPY (which did not exist before 6.2). (cherry picked from commit 29900d4e)
-
Madelyn Olson authored
(cherry picked from commit 98d2e001)
-
Oran Agra authored
We sometimes see the crash report saying we were killed by a random process even in cases where the crash was spontaneous in Redis, for instance crashes found by the corrupt-dump test. It looks like si_pid is sometimes left uninitialized, and a good way to tell whether the crash originated in Redis or was triggered from outside is to look at si_code: real signal codes are always > 0, and ones generated by kill have an si_code of 0 or below. (cherry picked from commit b45b0d81)
-
Yossi Gottlieb authored
listenToPort attempts to gracefully handle and ignore certain errors but does not store errno prior to logging, which in turn calls several libc functions that may overwrite errno. This was discovered due to libmusl strftime() always returning with errno set to EINVAL, which resulted in docker-library/redis#273. (cherry picked from commit df5f543b)
-
guybe7 authored
Scenario:
1. A module client is blocked on keys with a timeout.
2. Shortly before the timeout expires, the key is populated and signaled as ready.
3. Redis calls moduleTryServeClientBlockedOnKey (which replies to the client) and then moduleUnblockClient.
4. moduleUnblockClient doesn't really unblock the client; it writes to server.module_blocked_pipe and only marks the BC as unblocked.
5. beforeSleep kicks in; by this time the client still exists and has technically timed out. beforeSleep replies to the timed-out client (double reply), and only then is moduleHandleBlockedClients called, reading from module_blocked_pipe and calling unblockClient.
The solution is similar to what was done in moduleTryServeClientBlockedOnKey: we should avoid re-processing an already-unblocked client. (cherry picked from commit e58118cd)
-
- 02 Mar, 2021 4 commits
-
-
Oran Agra authored
-
Yossi Gottlieb authored
* Remove the linux/version.h dependency. It introduces unnecessary dependencies and is generally not a good idea, as the platform we build on may be different from the platform we run on. To determine whether sync_file_range exists we can simply rely on header file hints.
* Fix setproctitle() on libmusl. The previous ifdef checks were a bit too strict for no apparent reason.
* Fix test failures on Linux with no backtrace.
* Add an alpine daily CI job.
(cherry picked from commit 95ea7454)
-
Yossi Gottlieb authored
Also adds a new daily CI test, relying on the fact that we don't use malloc_size() on alpine libmusl. Fixes #8531 (cherry picked from commit dd885780)
-