- 05 Oct, 2021 1 commit
-
-
yoav-steinberg authored
Changes in #9528 led to a memory leak if the command implementation used rewriteClientCommandArgument inside MULTI-EXEC. Adding an explicit test for that case, since the test that uncovered it didn't specifically target this scenario.
-
- 04 Oct, 2021 9 commits
-
-
Meir Shpilraien (Spielrein) authored
When Lua calls our C code, the Lua stack has room for 10 elements by default. In most cases this is more than enough, but sometimes it isn't, and the caller must verify the Lua stack size before pushing elements. In three places in the code there was no verification of the Lua stack size, and on specific inputs this missing verification could have led to an invalid memory write:
1. In 'luaReplyToRedisReply', one might return a nested reply that will explode the Lua stack.
2. In 'redisProtocolToLuaType', the Redis reply might be deep enough to explode the Lua stack (notice that currently there is no command in Redis that returns such a nested reply, but modules might do it).
3. In 'ldbRedis', one might give a command with enough arguments to explode the Lua stack (all the arguments are pushed to the Lua stack).
This commit solves all three issues by calling 'lua_checkstack' and verifying that there is enough room in the Lua stack to push elements. In case 'lua_checkstack' returns an error (there is not enough room in the Lua stack and it's not possible to grow the stack), we do the following:
1. In 'luaReplyToRedisReply', we return an error to the user.
2. In 'redisProtocolToLuaType', we exit with a panic (we assume this scenario is rare because it can only happen with a module).
3. In 'ldbRedis', we return an error.
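A minimal sketch of the guard pattern described above (illustrative only, not the actual Redis code): check the stack before pushing, and fail gracefully if it cannot grow.
```c
#include <lua.h>

/* Before pushing `n` elements, make sure the Lua stack has room for them.
 * lua_checkstack() tries to grow the stack and returns 0 if it cannot. */
static int pushElementsSafely(lua_State *lua, int n) {
    if (!lua_checkstack(lua, n)) {
        /* No room and the stack cannot grow: report failure instead of
         * writing past the stack (caller raises an error or panics). */
        return 0;
    }
    for (int i = 0; i < n; i++)
        lua_pushinteger(lua, i); /* stand-in for the real reply elements */
    return 1;
}
```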
-
Oran Agra authored
A recently merged PR introduced a leak when loading AOF files. This was because argv_len wasn't set, so rewriteClientCommandArgument would shrink the argv array and update argc to a small value.
-
Oran Agra authored
The protocol parsing in 'ldbReplParseCommand' (Lua debugging) assumed protocol correctness. This means that given the following input: `*1 $100 test`, the parser would try to read an additional 94 unallocated bytes beyond the client buffer. This commit fixes the issue by validating that there are actually enough bytes to read. It also limits the amount of data that can be sent by the debugger client to 1M, so the client will not be able to explode the memory. Co-authored-by:
meir@redislabs.com <meir@redislabs.com>
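A rough sketch of the kind of validation described (the helper name is an assumption, not the actual parser code): verify the declared bulk length against the bytes actually present before reading.
```c
#include <stddef.h>
#include <string.h>

/* Returns a pointer to the bulk payload, or NULL when the buffer does not
 * actually hold `bulklen` bytes plus the trailing "\r\n" that the protocol
 * header promised. */
static const char *readBulkChecked(const char *buf, size_t buflen,
                                   size_t bulklen) {
    if (bulklen > buflen || buflen - bulklen < 2) return NULL; /* short read */
    if (memcmp(buf + bulklen, "\r\n", 2) != 0) return NULL;    /* malformed */
    return buf;
}
```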
-
Oran Agra authored
- Fix possible heap corruption in ziplist and listpack caused by trying to allocate more than the maximum size of 4GB.
- Prevent ziplist (hash and zset) from reaching a size above 1GB; it will be converted to HT encoding, since that's not a useful size.
- Prevent listpack (stream) from reaching a size above 1GB.
- XADD will start a new listpack if the new record may cause the previous listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB.
- The List type (ziplist in quicklist) was truncating strings that were over 4GB; now it'll respond with an error.
Co-authored-by:
sundb <sundbcn@gmail.com>
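A sketch of the size guard these changes imply (the constant and helper are illustrative assumptions, not the exact Redis code): reject growth that would overflow the 4GB encoding, and treat 1GB as the cutoff for converting to another representation.
```c
#include <stdint.h>
#include <stddef.h>

#define SAFETY_LIMIT (1ULL << 30) /* assumed 1GB conversion/refusal threshold */

/* Returns 1 when growing a ziplist/listpack of `cur` bytes by `add` bytes
 * is safe, 0 when it would overflow the 4GB encoding or cross the 1GB cap. */
static int safeToAdd(size_t cur, size_t add) {
    if (add > UINT32_MAX - cur) return 0;   /* 4GB encoding would overflow */
    if (cur + add > SAFETY_LIMIT) return 0; /* convert / start a new node */
    return 1;
}
```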
-
Oran Agra authored
This change sets a low limit for multibulk and bulk length in the protocol for unauthenticated connections, so that they can't easily cause Redis to allocate massive amounts of memory by sending just a few characters on the network. The new limits are 10 arguments of 16KB each (instead of 1M arguments of 512MB each).
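A hedged sketch of what such a check could look like (the names and structure are assumptions; the commit message only specifies the limits themselves):
```c
#define PROTO_AUTH_MAX_MULTIBULK 10        /* args allowed before AUTH */
#define PROTO_AUTH_MAX_BULKLEN   (16*1024) /* per-arg bytes before AUTH */

/* Reject oversized protocol frames from clients that have not
 * authenticated yet; authenticated clients keep the full limits. */
static int multibulkWithinLimits(int authenticated, long long multibulklen,
                                 long long bulklen) {
    if (authenticated) return 1;
    return multibulklen <= PROTO_AUTH_MAX_MULTIBULK &&
           bulklen <= PROTO_AUTH_MAX_BULKLEN;
}
```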
-
Oran Agra authored
The redis-cli command line tool and redis-sentinel service may be vulnerable to integer overflow when parsing specially crafted large multi-bulk network replies. This is a result of a vulnerability in the underlying hiredis library, which does not perform an overflow check before calling the calloc() heap allocation function. This issue only impacts systems with heap allocators that do not perform their own overflow checks. Most modern systems do, and are therefore not likely to be affected. Furthermore, by default redis-sentinel uses the jemalloc allocator, which is also not vulnerable. Co-authored-by:
Yossi Gottlieb <yossigo@gmail.com>
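The class of bug here is the classic unchecked multiplication before allocation; a generic sketch of the missing guard (not the actual hiredis patch):
```c
#include <stdlib.h>
#include <stdint.h>

/* calloc() wrapper that refuses requests whose nmemb*size computation
 * would overflow, instead of relying on the allocator to notice. */
static void *calloc_checked(size_t nmemb, size_t size) {
    if (size != 0 && nmemb > SIZE_MAX / size) return NULL; /* would wrap */
    return calloc(nmemb, size);
}
```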
-
Oran Agra authored
The vulnerability involves changing the default set-max-intset-entries configuration parameter to a very large value and constructing specially crafted commands to manipulate sets.
-
yiyuaner authored
The existing overflow checks handled the greedy growing, but didn't handle the case where the addition of the header size is what causes the overflow.
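A minimal illustration of the scenario (names assumed for illustration): the growth check can pass while adding the header still wraps the total.
```c
#include <stddef.h>
#include <stdint.h>

/* Returns 1 when `len` payload bytes plus `hdrlen` header bytes can be
 * allocated without the sum wrapping around. */
static int allocLenOk(size_t len, size_t hdrlen) {
    return len <= SIZE_MAX - hdrlen; /* len + hdrlen would overflow otherwise */
}
```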
-
YaacovHazan authored
Since we measure the COW size in this test by changing some keys and reading the reported COW size, we need to ensure that the "dismiss mechanism" (#8974) will not free memory and reduce the COW size. For that, this commit changes the size of the keys to 512B (less than a page), and because some keys may fall into the same page, we modify ten keys on each iteration and check for at least a 50% change in the COW size.
-
- 03 Oct, 2021 3 commits
-
-
yoav-steinberg authored
Note that this breaks compatibility: in the past, `DECRBY x -9223372036854775808` would succeed (and create an invalid result), whereas now it returns an error.
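The underlying reason: negating the decrement to reuse the increment path is undefined for the most negative 64-bit value. A sketch of the guard (helper name assumed):
```c
#include <limits.h>

/* Converts a DECRBY argument into the increment it negates to.
 * Fails for LLONG_MIN, since -LLONG_MIN does not fit in a long long. */
static int decrementToIncrement(long long decr, long long *incr) {
    if (decr == LLONG_MIN) return 0; /* reject instead of overflowing */
    *incr = -decr;
    return 1;
}
```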
-
yoav-steinberg authored
Remove the hard-coded multi-bulk limit (was 1,048,576); the new limit is INT_MAX. When a client sends a multi-bulk longer than 1024, we initially allocate the argv array for only 1024 arguments, and gradually grow that allocation as arguments are received.
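A sketch of the gradual growth, under the assumption of a simple doubling strategy (names are illustrative):
```c
#include <stdlib.h>

/* Ensure argv can hold at least argc+1 entries, starting from a modest
 * initial allocation and doubling as more arguments stream in. */
static void **argvEnsureRoom(void **argv, int *argv_len, int argc) {
    if (argc < *argv_len) return argv;
    int newlen = *argv_len ? *argv_len * 2 : 1024; /* 1024 initial slots */
    void **newargv = realloc(argv, sizeof(void *) * (size_t)newlen);
    if (newargv == NULL) abort(); /* Redis' zrealloc aborts on OOM */
    *argv_len = newlen;
    return newargv;
}
```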
-
Binbin authored
1. Remove forward declarations from header files for functions that do not exist: hmsetCommand and rdbSaveTime.
2. Minor phrasing fixes in #9519.
3. Add a missing sdsfree(title) and fix a typo in redis-benchmark.
4. Modify some error comments in some zset commands.
5. Fix a copy-pasted comment in syncWithMaster about `ip-address`.
-
- 01 Oct, 2021 1 commit
-
-
Viktor Söderqvist authored
Just a cleanup to make the code easier to maintain and reduce the risk of something being overlooked.
-
- 30 Sep, 2021 3 commits
-
-
Eduardo Semprebon authored
It seems that this piece of doc was always wrong (there is no such error in the code).
-
Yunier Pérez authored
While the original issue was on Linux, this should work for other platforms as well.
-
Hanna Fadida authored
Adding an advanced API to enable loading data that was serialized with a specific encoding version.
-
- 29 Sep, 2021 2 commits
-
-
yoav-steinberg authored
-
Wen Hui authored
-
- 27 Sep, 2021 1 commit
-
-
Ozan Tezcan authored
-
- 26 Sep, 2021 4 commits
-
-
Oran Agra authored
This was recently broken in #9321, when we validated stream IDs to be integers but did so after stepping to the next record instead of before.
-
yoav-steinberg authored
Fixing CI test issues introduced in #8687:
- valgrind warnings in readQueryFromClient when the client was freed by processInputBuffer
- adding DEBUG pause-cron so tests are not time dependent
- skipping a test that depends on socket buffers / events and is not compatible with TLS
- making sure the client got subscribed by not using a deferring client
-
Yossi Gottlieb authored
Empty patterns are now ignored and skipped. Also, improve the help text.
-
chenyang8094 authored
This was a regression from #9012 (not released yet).
-
- 24 Sep, 2021 3 commits
-
-
Ozan Tezcan authored
-
sundb authored
In the `HRANDFIELD`, `SRANDMEMBER` and `ZRANDMEMBER` commands, some strategies could in rare cases return an unfair random result. These cases occur when a small dict happens to be hashed unevenly. Specifically, when `count*ZRANDMEMBER_SUB_STRATEGY_MUL > size`, using `dictGetRandomKey` to randomize from a dict will produce an unfair random result.
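A small numeric illustration of the unfairness (not Redis code): picking a random bucket and then a random entry in its chain under-samples keys that share a bucket.
```c
#include <stdio.h>

int main(void) {
    int buckets = 4; /* tiny dict: 4 buckets */
    int chain = 3;   /* 3 keys collided into one bucket, others hold 1 key */
    /* A key alone in its bucket is picked with probability 1/buckets,
     * while each key in the chained bucket is picked with
     * 1/(buckets*chain). */
    printf("lone key:    %.3f\n", 1.0 / buckets);           /* 0.250 */
    printf("chained key: %.3f\n", 1.0 / (buckets * chain)); /* 0.083 */
    return 0;
}
```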
-
Huang Zhw authored
This would cause the generated string to contain "\". Fixes a broken change in #8687.
-
- 23 Sep, 2021 5 commits
-
-
Huang Zhw authored
Minor optimization of getMaxmemoryState: when server.maxmemory is not set, don't count the AOF and replica buffers. Co-authored-by:
Viktor Söderqvist <viktor@zuiderkwast.se>
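A sketch of the short-circuit, with assumed names: if no limit is configured there is nothing to enforce, so the buffer accounting can be skipped entirely.
```c
#include <stddef.h>

/* Computes the memory figure used for maxmemory enforcement. With no
 * limit configured, return early instead of summing replica/AOF buffers. */
static size_t memoryForEviction(size_t used, size_t repl_buffers,
                                size_t aof_buffer, size_t maxmemory) {
    if (maxmemory == 0) return 0; /* no limit: skip the buffer accounting */
    size_t overhead = repl_buffers + aof_buffer;
    return used > overhead ? used - overhead : 0;
}
```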
-
Yossi Gottlieb authored
This commit makes it possible to explicitly trim the allocation of a RedisModuleString. Currently, Redis automatically trims strings that have been retained by a module command when it returns. However, this is not thread safe and may result in corruption in threaded modules. Supporting explicit trimming offers a backwards compatible workaround to this problem.
-
yoav-steinberg authored
### Description
A mechanism for disconnecting clients when the sum of all connected clients is above a configured limit. This prevents eviction or OOM caused by accumulated used memory between all clients. It's a complementary mechanism to the `client-output-buffer-limit` mechanism, which takes into account not only a single client and not only output buffers, but rather all memory used by all clients.

#### Design
The general design is as follows:
* We track the memory usage of each client, taking into account all memory used by the client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date after reading from the socket, after processing commands and after writing to the socket.
* Based on the used memory we sort all clients into buckets. Each bucket contains all clients using up to 2x the memory of the clients in the bucket below it. For example: up to 1m clients, up to 2m clients, up to 4m clients, ...
* Before processing a command and before sleep we check if we're over the configured limit. If we are, we start disconnecting clients from larger buckets downwards until we're under the limit.

#### Config
`maxmemory-clients` is the max memory all clients are allowed to consume; above this threshold we disconnect clients. This config can either be set to 0 (meaning no limit), a size in bytes (possibly with an MB/GB suffix), or as a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%` would mean 10% of `maxmemory`).

#### Important code changes
* During development I encountered yet more situations where our io-threads access global vars, and needed to fix them. I also had to keep the clients sorted into the (global) memory buckets while their memory usage changes in the io-thread. To achieve this I decided to simplify how we check if we're in an io-thread and make it much more explicit. I removed the `CLIENT_PENDING_READ` flag used for checking if the client is in an io-thread (it wasn't used for anything else) and just used the global `io_threads_op` variable the same way to check during writes.
* I optimized the cleanup of the client from the `clients_pending_read` list on client freeing. We now store a pointer to this list in the `client` struct so we don't need to search in it (`pending_read_list_node`).
* Added an `evicted_clients` stat to the `INFO` command.
* Added a `CLIENT NO-EVICT ON|OFF` subcommand to exclude a specific client from the client eviction mechanism, and a corresponding 'e' flag in the client info string.
* Added a `multi-mem` field to the client info string to show how much memory is used up by buffered multi commands.
* Client `tot-mem` now accounts for buffered multi-commands, pubsub patterns and channels (partially), and tracking prefixes (partially).
* The CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function, so clients will be disconnected between processing different clients and not only before sleep. This new function can be used in the future for work we want to do outside the command processing loop but don't want to wait for all clients to be processed before we get to it. Specifically, I wanted to handle output-buffer-limit related closing before we process client eviction, in case the two race with each other.
* Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction buckets.
* Each client now holds a pointer to the client eviction memory usage bucket it belongs to, and a listNode to itself in that bucket, for quick removal.
* The global `io_threads_op` variable can now contain an `IO_THREADS_OP_IDLE` value, indicating no io-threading is currently being executed.
* In order to track the memory used by each client in real time, we can't rely on updating these stats in `clientsCron()` alone anymore. So now I call `updateClientMemUsage()` (used to be `clientsCronTrackClientsMemUsage()`) after command processing, after writing data to pubsub clients, after writing the output buffer and after reading from the socket (and maybe other places too). The function is written to be fast.
* Clients are evicted if needed (with an appropriate log line) in `beforeSleep()` and before processing a command (before performing oom-checks and key-eviction).
* All client memory usage buckets are grouped as follows (see the sketch after this commit):
  * All clients using less than 64k.
  * 64K..128K
  * 128K..256K
  * ...
  * 2G..4G
  * All clients using 4g and up.
* Added client-eviction.tcl with a bunch of tests for the new mechanism.
* Extended maxmemory.tcl to test the interaction between the maxmemory and maxmemory-clients settings.
* Added an option to flag a numeric configuration variable as a "percent": if we encounter a '%' after the number in the config file (or a config set command), we consider it valid. Such a number is stored internally as a negative value. This way an integer value can be interpreted as either a percent (negative) or an absolute value (positive). This is useful, for example, when some numeric configuration can optionally be set to a percentage of something else.
Co-authored-by:
Oran Agra <oran@redislabs.com>
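A sketch of the bucket mapping described above (the loop is an illustrative stand-in for the actual log2-based computation):
```c
#include <stddef.h>

/* Maps a client's memory usage to its eviction bucket: bucket 0 holds
 * everything under 64KB, each following bucket doubles the range, and the
 * top bucket holds everything from 4GB up. */
static int clientMemUsageBucket(size_t mem) {
    int bucket = 0;
    size_t ceiling = 64 * 1024;             /* bucket 0 tops out at 64KB */
    while (mem >= ceiling && bucket < 17) { /* 17 = index of the 4GB+ bucket */
        ceiling *= 2;
        bucket++;
    }
    return bucket;
}
```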
-
YaacovHazan authored
This commit introduces a new flag to RM_Call: 'C' - check if the command can be executed according to the ACLs associated with it.
Also, three new APIs were added to check whether a command, key, or channel can be executed or accessed by a user, according to the ACLs associated with it:
- RM_ACLCheckCommandPerm
- RM_ACLCheckKeyPerm
- RM_ACLCheckChannelPerm
The user for these APIs is a RedisModuleUser object; for a module user it is returned by the RM_CreateModuleUser API, and for a general ACL user it can be retrieved by these two new APIs:
- RM_GetCurrentUserName - Retrieve the user name of the client connection behind the current context.
- RM_GetModuleUserFromUserName - Get a RedisModuleUser from a user name.
As a result of getting a RedisModuleUser from a name, it can now also access the general ACL users (not just ones created by the module). This means the already existing API RM_SetModuleUserACL() can be used to change the ACL rules for such users.
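A hedged sketch of how a module might combine these APIs; the exact signatures are assumptions inferred from the names above, so treat this as a sketch rather than the published API:
```c
#include "redismodule.h"

/* Verifies that the calling client's user may run this command before the
 * module acts on its behalf. */
int MyCmd_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv,
                       int argc) {
    RedisModuleString *name = RedisModule_GetCurrentUserName(ctx);
    RedisModuleUser *user = RedisModule_GetModuleUserFromUserName(name);
    if (user == NULL ||
        RedisModule_ACLCheckCommandPerm(user, argv, argc) != REDISMODULE_OK) {
        return RedisModule_ReplyWithError(ctx, "ERR ACL check failed");
    }
    /* ... execute the command, e.g. via RM_Call with the new 'C' flag ... */
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}
```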
-
Binbin authored
This is similar to the recent addition of LMPOP/BLMPOP (#9373), but for zsets.
Syntax for the new ZMPOP command: `ZMPOP numkeys [<key> ...] MIN|MAX [COUNT count]`
Syntax for the new BZMPOP command: `BZMPOP timeout numkeys [<key> ...] MIN|MAX [COUNT count]`
Some background:
- ZPOPMIN/ZPOPMAX take only one key, and can return multiple elements.
- BZPOPMIN/BZPOPMAX take multiple keys, but return only one element from just one key.
- ZMPOP/BZMPOP can take multiple keys, and can return multiple elements from just one key.
Note that although ZMPOP/BZMPOP can take multiple keys, they eventually operate on just one key, and they propagate as ZPOPMIN or ZPOPMAX with the COUNT option.
As new commands, if we cannot pop any elements, the response is:
- ZMPOP: Return NIL in both RESP2 and RESP3, unlike ZPOPMIN/ZPOPMAX which return an empty array.
- BZMPOP: Return NIL in both RESP2 and RESP3 when the timeout is reached, like BZPOPMIN/BZPOPMAX.
The normal response is nested arrays in RESP2 and RESP3:
```
ZMPOP/BZMPOP
1) keyname
2) 1) 1) member1
      2) score1
   2) 1) member2
      2) score2

In RESP2:
1) "myzset"
2) 1) 1) "three"
      2) "3"
   2) 1) "two"
      2) "2"

In RESP3:
1) "myzset"
2) 1) 1) "three"
      2) (double) 3
   2) 1) "two"
      2) (double) 2
```
-
- 22 Sep, 2021 2 commits
-
-
chenyang8094 authored
-
Oran Agra authored
I've seen this CI failure a couple of times on macOS: `*** [err]: lazy free a stream with all types of metadata in tests/unit/lazyfree.tcl lazyfree isn't done`. The only reason I can think of is that 500ms is sometimes not enough on slow systems.
-
- 20 Sep, 2021 2 commits
- 19 Sep, 2021 2 commits
- 16 Sep, 2021 1 commit
-
-
Binbin authored
Implements the [LIMIT limit] variant of SINTERCARD/ZINTERCARD. With LIMIT, we can stop the search when the cardinality reaches the limit, and return the cardinality ASAP. Note that in SINTERCARD the old syntax was: `SINTERCARD key [key ...]`. In order to add an optional parameter, we must break the old syntax, so the new syntax of SINTERCARD will be consistent with ZINTERCARD. New syntax: `SINTERCARD numkeys key [key ...] [LIMIT limit]`. Note that this means SINTERCARD has a different syntax than SINTER and SINTERSTORE (taking a numkeys argument). As for ZINTERCARD, we can easily add an optional parameter to it. New syntax: `ZINTERCARD numkeys key [key ...] [LIMIT limit]`
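A toy sketch of the early exit that LIMIT enables (plain arrays standing in for Redis sets; names are illustrative):
```c
#include <stddef.h>

/* Counts how many elements of a[0..an) also appear in b[0..bn), stopping
 * as soon as `limit` matches are found (limit == 0 means no limit). */
static long intersectCardLimited(const int *a, size_t an,
                                 const int *b, size_t bn, long limit) {
    long card = 0;
    for (size_t i = 0; i < an; i++) {
        for (size_t j = 0; j < bn; j++) {
            if (a[i] == b[j]) { card++; break; }
        }
        if (limit && card >= limit) return card; /* return ASAP */
    }
    return card;
}
```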
-
- 15 Sep, 2021 1 commit
-
-
guybe7 authored
Introduced by https://github.com/redis/redis/pull/9502
-