- 10 Oct, 2021 8 commits
- Oran Agra authored
  * overflow in jemalloc fragmentation hint to the defragger
- Oran Agra authored
- Yoav Steinberg authored
  ./autogen.sh --with-version=5.2.1-0-g0
- Yoav Steinberg authored
- Yoav Steinberg authored
  git-subtree-dir: deps/jemalloc
  git-subtree-split: 886e40bb339ec1358a5ff2a52fdb782ca66461cb
- Yoav Steinberg authored
- menwen authored
  Looks like this field was never actually used, and the call to time() is excessive.
- 08 Oct, 2021 2 commits
- Bjorn Svensson authored
  Move config `logfile` to generic configs.
- Bjorn Svensson authored
- 07 Oct, 2021 3 commits
- yoav-steinberg authored
  The obuf-based eviction tests now run until eviction occurs, instead of assuming that a certain number of writes will fill the obuf enough for eviction to occur. This handles the kernel buffering written data and emptying the obuf even though no one actually reads from it. The tests have a new timeout of 20 seconds: if a test doesn't pass within 20 seconds, it fails. Hopefully this is enough for our slow CI targets. This also eliminates the need to skip some tests under TLS.
- Huang Zhw authored
  Tracking invalidation messages were sometimes sent in an inconsistent order, before the command's reply rather than after. They were also sometimes embedded inside other commands' responses, such as MULTI-EXEC and MGET.
- GutovskyMaria authored
  Hide empty and loading replicas from CLUSTER SLOTS responses.
- 06 Oct, 2021 5 commits
- Andy Pan authored
  Implement createPipe() to combine creating a pipe and setting its flags, and reduce system calls by preferring pipe2() over pipe(). Without createPipe() we had to call pipe() to create a pipe and then call anet.c helpers (like anetCloexec() and anetNonBlock()) to set the flags, which costs extra system calls; with pipe2() we can combine these steps and centralize pipe creation in createPipe(). See the sketch below.
  Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
  Co-authored-by: Oran Agra <oran@redislabs.com>
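A minimal sketch of such a helper, assuming POSIX and an illustrative HAVE_PIPE2 feature macro; the actual Redis implementation may differ in which flags it sets:

```c
#define _GNU_SOURCE  /* for pipe2() on glibc */
#include <unistd.h>
#include <fcntl.h>

/* Create a pipe with close-on-exec and non-blocking set on both ends. */
int createPipe(int pipefd[2]) {
#ifdef HAVE_PIPE2
    /* One system call: create the pipe and set both flags atomically. */
    return pipe2(pipefd, O_CLOEXEC | O_NONBLOCK);
#else
    /* Fallback: pipe() plus fcntl() calls per descriptor. */
    if (pipe(pipefd) == -1) return -1;
    for (int i = 0; i < 2; i++) {
        if (fcntl(pipefd[i], F_SETFD, FD_CLOEXEC) == -1 ||
            fcntl(pipefd[i], F_SETFL, O_NONBLOCK) == -1) {
            close(pipefd[0]);
            close(pipefd[1]);
            return -1;
        }
    }
    return 0;
#endif
}
```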
- yoav-steinberg authored
  Flush db and *then* wait for the bgsave to complete.
- yoav-steinberg authored
  When queuing a MULTI command we duplicated the argv (an alloc plus a memcpy). This isn't needed, since we can reuse the previously allocated argv and simply reset the client object's argv to NULL. This saves some memory and is a minor optimization under heavy MULTI/EXEC traffic, especially with many arguments. See the sketch below.
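An illustrative sketch of the idea, using simplified stand-in structs (the real client and multi-state types in Redis carry many more fields):

```c
#include <stddef.h>

typedef struct { char **argv; int argc; } queuedCmd;

typedef struct {
    char **argv;       /* current command's argument vector */
    int argc;
    queuedCmd *queue;  /* commands queued for EXEC */
    int qcount;
} client;

/* Queue the current command for EXEC by taking ownership of the
 * already-allocated argv instead of duplicating it. */
void queueMultiCommand(client *c) {
    queuedCmd *qc = &c->queue[c->qcount++];
    qc->argv = c->argv;  /* hand over the array: no alloc, no memcpy */
    qc->argc = c->argc;
    c->argv = NULL;      /* detach so the client won't free it */
    c->argc = 0;
}
```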
- Meir Shpilraien (Spielrein) authored
  The new value indicates how long Redis waited to acquire the GIL after sleeping. This can help identify problems where a module performs a long background operation while holding the GIL and blocks the Redis main thread.
- tzongw authored
  Scenario:
  1. A client blocks on the command `XREAD BLOCK 0 STREAMS mystream $`.
  2. In a module, `XADD mystream * field value` is called via Lua from a timer callback.
  3. The client receives the response only after a latency of up to 100ms.
  Reason: when `XADD` signals the key `mystream` as ready, `beforeSleep` in the next event loop calls `handleClientsBlockedOnKeys` to unblock the client and queue pending data to write, but it doesn't actually install a write handler, so Redis then blocks in `aeApiPoll` for up to 100ms (with the default `hz` of 10); the pending data is only sent one event loop later by `handleClientsWithPendingWritesUsingThreads`. Calling `handleClientsBlockedOnKeys` before `handleClientsWithPendingWritesUsingThreads` in `beforeSleep` solves the problem, as sketched below.
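A minimal sketch of the reordering in `beforeSleep()` (the real function does much more; the two calls are Redis internals, declared here only so the sketch is self-contained):

```c
/* Redis internals, declared for the sketch. */
void handleClientsBlockedOnKeys(void);
int handleClientsWithPendingWritesUsingThreads(void);

void beforeSleep(void) {
    /* First append replies for clients just unblocked on ready keys... */
    handleClientsBlockedOnKeys();
    /* ...then flush all pending output buffers, so those replies go out
     * in this event loop iteration instead of waiting out aeApiPoll(). */
    handleClientsWithPendingWritesUsingThreads();
}
```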
- 05 Oct, 2021 2 commits
- yoav-steinberg authored
  * Reduce the delay between publishes to allow less time to write the obufs.
  * More subscribed clients, to buffer more data per publish.
  * Make sure the main connection isn't evicted (it has a large qbuf).
- yoav-steinberg authored
  Changes in #9528 led to a memory leak if the command implementation used rewriteClientCommandArgument inside MULTI-EXEC. This adds an explicit test for that case, since the test that uncovered it didn't specifically target this scenario.
- 04 Oct, 2021 9 commits
- Meir Shpilraien (Spielrein) authored
  When Lua calls our C code, by default the Lua stack has room for 10 elements. In most cases this is more than enough, but sometimes it isn't, and the caller must verify the Lua stack size before pushing elements. In 3 places in the code there was no verification of the Lua stack size, and on specific inputs this missing verification could lead to an invalid memory write:
  1. In 'luaReplyToRedisReply', one might return a nested reply that will explode the Lua stack.
  2. In 'redisProtocolToLuaType', the Redis reply might be deep enough to explode the Lua stack (note that currently no Redis command returns such a nested reply, but modules might do it).
  3. In 'ldbRedis', one might give a command with enough arguments to explode the Lua stack (all the arguments are pushed to the Lua stack).
  This commit solves all 3 issues by calling 'lua_checkstack' to verify that there is enough room in the Lua stack to push elements. If 'lua_checkstack' returns an error (there is not enough room and the stack cannot be grown), we do the following:
  1. In 'luaReplyToRedisReply', return an error to the user.
  2. In 'redisProtocolToLuaType', exit with a panic (we assume this scenario is rare because it can only happen with a module).
  3. In 'ldbRedis', return an error.
  A sketch of the pattern follows.
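A hedged sketch of the pattern, using the real Lua C API call lua_checkstack(); the surrounding function is illustrative, not the actual Redis code:

```c
#include <lua.h>

/* Push one more element onto the Lua stack, but only after making
 * sure there is room for it. lua_checkstack() grows the stack up to
 * the requested extra slots and returns 0 if it cannot. */
static int pushFieldSafely(lua_State *lua, const char *field) {
    if (!lua_checkstack(lua, 1)) {
        /* The stack can't grow: fail gracefully instead of writing
         * past the end of the stack. */
        return 0;
    }
    lua_pushstring(lua, field);
    return 1;
}
```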
- Oran Agra authored
  A recently merged PR introduced a leak when loading AOF files. This was because argv_len wasn't set, so rewriteClientCommandArgument would shrink the argv array and update argc to a smaller value.
- Oran Agra authored
  The protocol parsing in 'ldbReplParseCommand' (Lua debugging) assumed protocol correctness. This means that if the following is given:
  *1
  $100
  test
  the parser would try to read an additional 94 unallocated bytes after the client buffer. This commit fixes the issue by validating that there are actually enough bytes to read. It also limits the amount of data that can be sent by the debugger client to 1MB, so the client cannot explode the memory. See the sketch below.
  Co-authored-by: meir@redislabs.com <meir@redislabs.com>
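An illustrative bounds check in this spirit (simplified; the function name and layout are assumptions, not the actual ldbReplParseCommand code):

```c
#include <stddef.h>
#include <string.h>

/* Before consuming a $<plen> bulk whose payload starts at buf[pos],
 * verify the buffer really holds plen payload bytes plus the
 * trailing CRLF, instead of trusting the declared length. */
static int bulkInBounds(const char *buf, size_t buflen,
                        size_t pos, size_t plen) {
    if (pos > buflen) return 0;                     /* offset out of range */
    size_t avail = buflen - pos;
    if (plen > avail || avail - plen < 2) return 0; /* payload or CRLF missing */
    return memcmp(buf + pos + plen, "\r\n", 2) == 0;
}
```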
- Oran Agra authored
  * Fix possible heap corruption in ziplist and listpack caused by trying to allocate more than the maximum size of 4GB.
  * Prevent ziplist (hash and zset) from reaching a size above 1GB; it will be converted to HT encoding (that's not a useful size anyway).
  * Prevent listpack (stream) from reaching a size above 1GB.
  * XADD will start a new listpack if the new record may cause the previous listpack to grow over 1GB.
  * XADD will respond with an error if a single stream record is over 1GB.
  * The List type (ziplist in quicklist) was truncating strings that were over 4GB; now it responds with an error.
  Co-authored-by: sundb <sundbcn@gmail.com>
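A sketch of the kind of guard involved (the 1GB cap is from the commit message; the helper name and exact form are assumptions):

```c
#include <stddef.h>

#define SIZE_SAFETY_LIMIT (1ULL << 30)  /* 1GB cap from the commit message */

/* Return 1 if growing a structure of size `cur` by `add` bytes stays
 * within the safety limit, checked without overflowing size_t. */
static int sizeSafeToAdd(size_t cur, size_t add) {
    if (add > SIZE_SAFETY_LIMIT) return 0;
    if (cur > SIZE_SAFETY_LIMIT - add) return 0;  /* would exceed the cap */
    return 1;
}
```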
- Oran Agra authored
  This change sets a low limit for multibulk and bulk length in the protocol for unauthenticated connections, so that they can't easily cause Redis to allocate massive amounts of memory by sending just a few characters on the network. The new limits are 10 arguments of 16KB each (instead of 1M arguments of 512MB each). See the sketch below.
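An illustrative pre-authentication check; the limits are those stated above, but the constant and function names are assumptions:

```c
/* Limits from the commit message; names are illustrative. */
#define PROTO_UNAUTH_MAX_MULTIBULK 10          /* max arguments */
#define PROTO_UNAUTH_MAX_BULK      (16*1024)   /* max bytes per argument */

/* Applied while parsing the multibulk header of a request. */
static int multibulkAllowed(long long nargs, int authenticated) {
    return authenticated || nargs <= PROTO_UNAUTH_MAX_MULTIBULK;
}

/* Applied while parsing each $<len> bulk header. */
static int bulkAllowed(long long blen, int authenticated) {
    return authenticated || blen <= PROTO_UNAUTH_MAX_BULK;
}
```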
- Oran Agra authored
  The redis-cli command line tool and redis-sentinel service may be vulnerable to integer overflow when parsing specially crafted large multi-bulk network replies. This is a result of a vulnerability in the underlying hiredis library, which does not perform an overflow check before calling the calloc() heap allocation function. The issue only impacts systems with heap allocators that do not perform their own overflow checks. Most modern systems do, and are therefore not likely to be affected. Furthermore, by default redis-sentinel uses the jemalloc allocator, which is also not vulnerable.
  Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
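The general class of bug, sketched (not the hiredis code): a count-times-size multiplication can wrap before the allocator ever sees it, so check first:

```c
#include <stdint.h>
#include <stdlib.h>

/* Allocate nmemb elements of `size` bytes each, refusing requests
 * whose total would overflow size_t (which a non-checking allocator
 * would otherwise silently truncate into a too-small allocation). */
static void *checkedArrayAlloc(size_t nmemb, size_t size) {
    if (size != 0 && nmemb > SIZE_MAX / size) return NULL; /* would wrap */
    return malloc(nmemb * size);
}
```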
- Oran Agra authored
  The vulnerability involves changing the default set-max-intset-entries configuration parameter to a very large value and constructing specially crafted commands to manipulate sets.
- yiyuaner authored
  The existing overflow checks handled the greedy growing, but didn't handle the case where adding the header size is what causes the overflow. See the sketch below.
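A sketch of the bug class, assuming an sds-style layout for illustration (the constant names and header size here are assumptions):

```c
#include <stddef.h>
#include <stdint.h>

#define GREEDY_PREALLOC (1024*1024)  /* greedy-growth threshold, illustrative */
#define HDR_SIZE 16                  /* illustrative header size */

/* Compute the allocation for a requested payload length, guarding both
 * the greedy growth and the later addition of the header: checking the
 * payload alone is not enough, since payload + header can still wrap. */
static size_t allocSizeFor(size_t newlen) {
    size_t greedy = (newlen < GREEDY_PREALLOC) ? newlen * 2
                                               : newlen + GREEDY_PREALLOC;
    if (greedy < newlen) return 0;              /* greedy growth overflowed */
    if (greedy > SIZE_MAX - HDR_SIZE) return 0; /* header pushes it over */
    return greedy + HDR_SIZE;
}
```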
- YaacovHazan authored
  Since we measure the COW size in this test by changing some keys and reading the reported COW size, we need to ensure that the "dismiss mechanism" (#8974) will not free memory and reduce the COW size. For that, this commit changes the size of the keys to 512B (less than a page), and because some keys may fall into the same page, we modify ten keys on each iteration and check for at least a 50% change in the COW size.
- 03 Oct, 2021 3 commits
- yoav-steinberg authored
  Note that this breaks compatibility: in the past, DECRBY x -9223372036854775808 would succeed (and create an invalid result), and now it returns an error. See the sketch below.
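Why this value is special: DECRBY works by negating its argument, and negating LLONG_MIN is undefined, since +9223372036854775808 doesn't fit in a long long. A sketch of the guard (illustrative, not the exact Redis code):

```c
#include <limits.h>

/* DECRBY is implemented as an increment by the negated argument.
 * LLONG_MIN has no positive counterpart, so reject it up front. */
static int decrToIncr(long long decr, long long *incr) {
    if (decr == LLONG_MIN) return 0;  /* -LLONG_MIN would overflow */
    *incr = -decr;
    return 1;
}
```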
- yoav-steinberg authored
  Remove the hard-coded multi-bulk limit (was 1,048,576); the new limit is INT_MAX. When a client sends an m-bulk that's larger than 1024, we initially allocate the argv array for only 1024 arguments and gradually grow that allocation as arguments are received. See the sketch below.
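A sketch of the gradual growth (the function and constant names are illustrative, not the actual Redis code):

```c
#include <stdlib.h>

#define ARGV_INITIAL_CHUNK 1024  /* initial cap from the commit message */

/* Grow the argv allocation on demand instead of trusting the
 * client-supplied argument count up front. Returns the (possibly
 * reallocated) array, or NULL on allocation failure, in which case
 * the caller still owns the old array. */
static void **argvEnsureRoom(void **argv, int *cap, int needed) {
    if (needed <= *cap) return argv;
    int newcap = *cap ? *cap * 2 : ARGV_INITIAL_CHUNK;
    if (newcap < needed) newcap = needed;
    void **grown = realloc(argv, sizeof(void *) * newcap);
    if (grown) *cap = newcap;
    return grown;
}
```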
- Binbin authored
  1. Remove forward declarations in header files for functions that do not exist: hmsetCommand and rdbSaveTime.
  2. Minor phrasing fixes from #9519.
  3. Add a missing sdsfree(title) and fix a typo in redis-benchmark.
  4. Modify some error comments in some zset commands.
  5. Fix a copy-paste bug in a comment in syncWithMaster about `ip-address`.
- 01 Oct, 2021 1 commit
- Viktor Söderqvist authored
  Just a cleanup to make the code easier to maintain and reduce the risk of something being overlooked.
- 30 Sep, 2021 3 commits
- Eduardo Semprebon authored
  It seems that this piece of doc was always wrong (no such error in the code).
- Yunier Pérez authored
  While the original issue was on Linux, this should work for other platforms as well.
- Hanna Fadida authored
  Add an advanced API to enable loading data that was serialized with a specific encoding version.
- 29 Sep, 2021 2 commits
- yoav-steinberg authored
- Wen Hui authored
- 27 Sep, 2021 1 commit
- Ozan Tezcan authored
- 26 Sep, 2021 1 commit
- Oran Agra authored
  This was recently broken in #9321, when we validated stream IDs to be integers but did so after stepping to the next record instead of before.