- 12 Dec, 2022 7 commits
-
-
Oran Agra authored
During a diskless sync, if the master main process crashes, the child would have hung in `write`. This fix closes the read fd on the child side, so that if the parent crashes, the child will get a write error and exit. This change also fixes disk-based replication, BGSAVE and AOFRW. In those cases the child wouldn't have hung, it would have just kept running until done, which may be pointless. There is a certain degree of risk here: if there's a BGSAVE child that could still succeed and the parent dies for some reason, the old code would have let the child keep running, possibly succeed, and avoid data loss. On the other hand, if the parent is restarted, it would have loaded an old rdb file (or none), and then the child could reach the end and rename the rdb file (data conflicting with what the parent has), or race with another BGSAVE child that the new parent started. Note that I removed a comment saying a write error will be ignored in the child and handled by the parent (this comment was very old and I don't think it is still relevant). (cherry picked from commit ccaef5c9)
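A minimal POSIX sketch (not the Redis code itself) of the mechanism this fix relies on: once the child closes its copy of the pipe's read end, the parent is the only remaining reader, so if the parent dies the child's `write` fails with EPIPE instead of blocking forever on a full pipe.
```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                      /* child: the "RDB producer" */
        signal(SIGPIPE, SIG_IGN);        /* get EPIPE instead of being killed */
        close(fds[0]);                   /* the fix: drop our copy of the read end */
        char buf[4096];
        memset(buf, 'x', sizeof(buf));
        for (;;) {
            if (write(fds[1], buf, sizeof(buf)) == -1) {
                fprintf(stderr, "child: write failed (%s), exiting\n",
                        strerror(errno));
                exit(1);                 /* instead of hanging forever */
            }
        }
    }
    close(fds[1]);                       /* parent keeps only the read end */
    sleep(1);
    exit(0);                             /* simulate the parent crashing */
}
```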
-
DarrenJiang13 authored
Fix a bug where scripts ignored client tracking NOLOOP and sent an invalidation message anyway. (cherry picked from commit 44859a41)
-
Meir Shpilraien (Spielrein) authored
Fix #11030, use lua_rawget to avoid triggering metatables. #11030 shows how returning `_G` from a Lua script (either function or eval) causes the Lua interpreter to panic and the Redis process to exit with error code 1. Though returning `_G` only panics on Redis 7 and 6.2.7, the underlying issue exists on older versions as well (6.0 and 6.2). The underlying issue is returning a table with a metatable such that the metatable raises an error. The following example demonstrates the issue:
```
127.0.0.1:6379> eval "local a = {}; setmetatable(a,{__index=function() foo() end}) return a" 0
Error: Server closed the connection
```
```
PANIC: unprotected error in call to Lua API (user_script:1: Script attempted to access nonexistent global variable 'foo')
```
The Lua panic happens because when returning the result to the client, Redis needs to introspect the returned table and transform it into a RESP reply. In order to scan the table, Redis uses the `lua_gettable` API, which might trigger the metatable (if one exists) and might raise an error. This code is not running inside `pcall` (a Lua protected call), so raising an error causes Lua to panic and exit. Notice that this is not a crash, it's a Lua panic that exits with error code 1. Returning `_G` panics on Redis 7 and 6.2.7 because on those versions `_G` has a metatable that raises an error when trying to fetch a non-existing key.
### Solution
Instead of using `lua_gettable`, which might raise an error and cause the issue, use `lua_rawget`, which simply returns the value from the table without triggering any metatable logic. This is guaranteed not to raise an error. The downside of this solution is that it might be considered a breaking change if someone relies on metatables in the returned value. An alternative solution is to wrap this entire logic with `pcall` (a Lua protected call), but this alternative requires a much bigger refactoring.
### Back Porting
The same fix will work on older versions as well (6.2, 6.0). Notice that on those versions, the issue can cause Redis to crash if inside the metatable logic there is an attempt to access Redis (`redis.call`). On 7.0, there is no crash and the `redis.call` is executed as if it was done from inside the script itself.
### Tests
Tests were added to verify the fix. (cherry picked from commit 020e046b)
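A small Lua C API sketch (written against plain Lua, not Redis's embedded interpreter) contrasting the two calls: `lua_gettable` may invoke an `__index` metamethod that raises an error, while `lua_rawget` reads the table directly and never errors. Here the erroring call is wrapped in `lua_pcall` only so the demo doesn't abort; Redis's reply-conversion path had no such protection, hence the panic.
```c
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

static int inspect(lua_State *L) {
    /* stack[1] is a table whose metatable's __index raises an error */
    lua_pushstring(L, "missing_field");
    lua_rawget(L, 1);              /* safe: bypasses the metatable */
    printf("rawget result type: %s\n", luaL_typename(L, -1)); /* nil */
    lua_pop(L, 1);

    lua_pushstring(L, "missing_field");
    lua_gettable(L, 1);            /* unsafe: __index runs and raises an error */
    return 0;                      /* never reached */
}

int main(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    luaL_dostring(L,
        "t = setmetatable({}, {__index = function() error('boom') end})");
    lua_pushcfunction(L, inspect);
    lua_getglobal(L, "t");
    if (lua_pcall(L, 1, 0, 0) != 0)    /* protected here; Redis's path was not */
        printf("gettable raised: %s\n", lua_tostring(L, -1));
    lua_close(L);
    return 0;
}
```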
-
Oran Agra authored
When we know the size of the zset we're going to store in advance, we can check if it's greater than the listpack encoding threshold, in which case we can create a skiplist from the start, and avoid converting the listpack to a skiplist later after it was already populated. (cherry picked from commit 21891003)
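A hedged sketch of that decision (the names are illustrative and the 128 constant merely stands in for the `zset-max-listpack-entries` default; this is not the actual Redis code): when the final cardinality is known before any insert, pick the final encoding once instead of populating a listpack and converting it afterwards.
```c
#include <stddef.h>
#include <stdio.h>

typedef enum { ENC_LISTPACK, ENC_SKIPLIST } zset_encoding;

/* Stand-in for the zset-max-listpack-entries config (128 by default). */
static const size_t max_listpack_entries = 128;

static zset_encoding pick_encoding(size_t expected_elements) {
    /* If the final size already exceeds the threshold, start with a
     * skiplist and skip the populate-then-convert step entirely. */
    return expected_elements > max_listpack_entries ? ENC_SKIPLIST
                                                    : ENC_LISTPACK;
}

int main(void) {
    printf("%d elements -> %s\n", 100,
           pick_encoding(100) == ENC_SKIPLIST ? "skiplist" : "listpack");
    printf("%d elements -> %s\n", 10000,
           pick_encoding(10000) == ENC_SKIPLIST ? "skiplist" : "listpack");
    return 0;
}
```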
-
Vitaly authored
When `zrangestore` is called, the destination container object is created. Before this PR we used to create a listpack-based object even if `zset-max-ziplist-entries` or the equivalent `zset-max-listpack-entries` was set to 0. This triggered an immediate conversion of the listpack into a skiplist in `zrangestore`, which hits an assertion, resulting in an engine crash. Added a TCL test that reproduces this issue. (cherry picked from commit 6461f09f)
-
- 27 Apr, 2022 21 commits
-
-
Yossi Gottlieb authored
(cherry picked from commit 8bf4c2e3)
-
Oran Agra authored
-
Yossi Gottlieb authored
(cherry picked from commit 8bf433dc)
-
Oran Agra authored
-
qetu3790 authored
Consider the following example:
1. `geoadd k1 -0.15307903289794921875 85 n1 0.3515625 85.00019260486917005437 n2`
2. `geodist k1 n1 n2` returns "4891.9380"
3. but `GEORADIUSBYMEMBER k1 n1 4891.94 m` only returns n1.
n2 is inside the bounding box but outside the search areas, so we make the search areas contain the bounding box in order to find n2. Co-authored-by:
Binbin <binloveplay1314@qq.com> (cherry picked from commit b2d393b9)
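For reference, a standalone haversine sketch (not Redis's geohash code; the Earth radius constant is the one Redis's geo code is believed to use) that approximately reproduces the ~4891.94 m distance between the two members in the example above:
```c
#include <math.h>
#include <stdio.h>

#define EARTH_RADIUS_M 6372797.560856   /* Earth radius used by Redis's geo code */
#define DEG_TO_RAD(d) ((d) * 3.14159265358979323846 / 180.0)

static double haversine_m(double lon1, double lat1, double lon2, double lat2) {
    double u = sin(DEG_TO_RAD(lat2 - lat1) / 2.0);
    double v = sin(DEG_TO_RAD(lon2 - lon1) / 2.0);
    double a = u * u + cos(DEG_TO_RAD(lat1)) * cos(DEG_TO_RAD(lat2)) * v * v;
    return 2.0 * EARTH_RADIUS_M * asin(sqrt(a));
}

int main(void) {
    double d = haversine_m(-0.15307903289794921875, 85.0,
                           0.3515625, 85.00019260486917005437);
    printf("n1..n2 distance: %.4f m\n", d);   /* roughly 4891.94 */
    return 0;
}
```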
-
Oran Agra authored
This PR handles several aspects:
1. Calls to RM_ReplyWithError from thread-safe contexts don't violate thread safety.
2. Errors returned from RM_Call to the module aren't counted in the statistics (they might be handled silently by the module).
3. When a module propagates a reply it got from RM_Call to its client, the error statistics are counted.
This is done by:
1. When appending an error reply to the output buffer, we avoid updating the global error statistics; instead we cache that error in a deferred list in the client struct.
2. When creating a RedisModuleCallReply object, the deferred error list is moved from the client into that object.
3. When a module calls RM_ReplyWithCallReply, we copy the deferred replies to the destination client (if that's a real client, that's when the error statistics are updated on the server).
Note about RM_ReplyWithCallReply: if the original reply had an array with errors, and the module replied with just a portion of the original reply and not the entire reply, the errors are currently not propagated and the error stats will not get propagated. Fix #10180 (cherry picked from commit b099889a)
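A minimal module sketch (the command name `proxy.incr` is made up for illustration) showing the path this change is about: the module forwards whatever reply RM_Call produced, error replies included, to its own client with RM_ReplyWithCallReply, and only at that point should such errors count toward the server's error statistics.
```c
#include "redismodule.h"

int ProxyIncr_Command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);

    /* Run INCR on the given key on behalf of the caller. */
    RedisModuleCallReply *reply = RedisModule_Call(ctx, "INCR", "s", argv[1]);
    if (reply == NULL)
        return RedisModule_ReplyWithError(ctx, "ERR INCR call failed");

    /* Forward the reply (which may itself be an error reply, e.g. WRONGTYPE)
     * to the caller; per this change, this is the point where such an error
     * starts counting toward the server's error stats. */
    RedisModule_ReplyWithCallReply(ctx, reply);
    RedisModule_FreeCallReply(reply);
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "proxyincr", 1, REDISMODULE_APIVER_1)
        == REDISMODULE_ERR) return REDISMODULE_ERR;
    if (RedisModule_CreateCommand(ctx, "proxy.incr", ProxyIncr_Command,
                                  "write", 1, 1, 1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```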
-
Binbin authored
It used to return `$-1` in RESP2; now we will return `*-1`. This is a bug in Redis 6.2 from when COUNT was added; the `COUNT` option was introduced in #8179. Fixes #10089. The documentation of [LPOP](https://redis.io/commands/lpop) says:
```
When called without the count argument:
Bulk string reply: the value of the first element, or nil when key does not exist.

When called with the count argument:
Array reply: list of popped elements, or nil when key does not exist.
```
(cherry picked from commit 39feee8e)
-
guybe7 authored
Fix #9410. Crucial for the ms and sequence deltas, but I changed all calls just in case (e.g. "flags"). Before this commit, `ms_delta` and `seq_delta` could have overflowed, causing `currid` to be wrong, which in turn would cause `streamTrim` to trim the entire rax node (see new test). (cherry picked from commit 7cd6a64d)
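A hedged, standalone illustration of the kind of overflow being guarded against (not the actual stream code): adding a stored delta to a base ID wraps silently on overflow and can yield a "current ID" smaller than the base, so a decoder should reject any delta that would overflow.
```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t base = 5;
    uint64_t delta = UINT64_MAX - 2;      /* a corrupted / oversized delta */
    uint64_t currid = base + delta;       /* wraps around to 2, i.e. < base */
    printf("base=%" PRIu64 " delta=%" PRIu64 " currid=%" PRIu64 "\n",
           base, delta, currid);

    /* A defensive decoder checks the addition before trusting the result. */
    if (delta > UINT64_MAX - base)
        printf("delta would overflow, reject the entry\n");
    return 0;
}
```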
-
Madelyn Olson authored
(cherry picked from commit c40d23b8)
-
Meir Shpilraien (Spielrein) authored
This commit 0f8b634c (CVE-2021-32626, released in 6.2.6, 6.0.16, 5.0.14) fixes an invalid memory write issue by using the `lua_checkstack` API to make sure the Lua stack does not overflow. This fix was added in 3 places:
1. `luaReplyToRedisReply`
2. `ldbRedis`
3. `redisProtocolToLuaType`
In the first 2 functions, `lua_checkstack` is handled gracefully, while the last is handled with an assert and a statement that this situation cannot happen (only with a misbehaving module):
> the Redis reply might be deep enough to explode the LUA stack (notice that currently there is no such command in Redis that returns such a nested reply, but modules might do it)
The issue that was discovered is that user arguments are also considered part of the stack, and so the following script (for example) makes the assertion reachable:
```
local a = {}
for i=1,7999 do
    a[i] = 1
end
return redis.call("lpush", "l", unpack(a))
```
This is a regression because such a script would have worked before and now it crashes Redis. The solution is to clear the function arguments from the Lua stack, which makes the original assumption true and the assertion unreachable. (cherry picked from commit 6b0b04f1)
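A sketch of the defensive pattern the commit text refers to, written against a standalone Lua host rather than Redis: grow the stack with `lua_checkstack` before pushing a large number of values (roughly the ~8000 arguments the example script produces via `unpack`), and fail gracefully when it cannot grow instead of asserting.
```c
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

int main(void) {
    lua_State *L = luaL_newstate();
    int wanted = 8000;                    /* e.g. unpack() with ~8000 args */

    if (!lua_checkstack(L, wanted)) {
        /* The graceful path: report an error instead of overflowing the
         * stack or hitting a "can't happen" assertion. */
        fprintf(stderr, "cannot grow Lua stack by %d slots\n", wanted);
        lua_close(L);
        return 1;
    }
    for (int i = 0; i < wanted; i++) lua_pushinteger(L, i);
    printf("pushed %d values, stack top = %d\n", wanted, lua_gettop(L));
    lua_close(L);
    return 0;
}
```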
-
Binbin authored
In #8287, some overflow checks were added. But when `when *= 1000` overflows, it will become a positive number, and the check is not able to catch it. The key will be added with a short expiration time and will be deleted a few seconds later. In #9601, we check the overflow after the `*=` and return an error first, avoiding this situation. In this commit, some tests were added to cover those code paths. Found in #9825, and closes it. (cherry picked from commit 9273d09d)
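A hedged sketch of the overflow-check idea (the helper name is illustrative and this variant checks before multiplying rather than exactly the way the Redis code does it): validate the seconds-to-milliseconds conversion so an absurdly large expire value is rejected up front instead of wrapping into a near-term timestamp.
```c
#include <limits.h>
#include <stdio.h>

/* Convert seconds to milliseconds, rejecting values that would overflow. */
static int mstime_from_seconds(long long when, long long *out) {
    if (when > LLONG_MAX / 1000 || when < LLONG_MIN / 1000)
        return -1;                        /* would overflow: reject early */
    *out = when * 1000;
    return 0;
}

int main(void) {
    long long ms;
    long long huge = LLONG_MAX / 500;     /* when * 1000 would overflow */
    if (mstime_from_seconds(huge, &ms) == -1)
        printf("ERR invalid expire time, rejected up front\n");
    if (mstime_from_seconds(1700000000LL, &ms) == 0)
        printf("expire at %lld ms\n", ms);
    return 0;
}
```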
-
Itamar Haber authored
Introduced in #8179, this fixes the command's replies in the 0 count edge case. [BREAKING] Changes the reply type when count is 0 to an empty array (instead of nil). Moves the `LPOP ... 0` fast exit path after the type check so it replies with WRONGTYPE. (cherry picked from commit 06dd202a)
-
Oran Agra authored
Following #9483, the daily CI exposed a few problems.
* The cluster creation code (uses redis-cli) is complicated to test with TLS enabled. For now I'm just skipping these, since the tests we run there don't really need that kind of coverage.
* Cluster port binding failures: note that `find_available_port` already looks for a free cluster port, but the code in `wait_server_started` couldn't detect the failure to bind (the text it greps for wasn't found in the log).
(cherry picked from commit 7d6744c7)
-
qetu3790 authored
Prevent clients from being blocked forever in cluster mode when they block with their own module command and the hash slot is migrated to another master at the same time. These clients will get a redirection message when unblocked. Also, release clients blocked on module commands when the cluster is down (same as other blocked clients). This commit adds basic cluster tests to the main (non-cluster) redis test infra. This was done because the cluster test infra can't handle some common test features, but most importantly, we only build the test modules with the non-cluster test suite. Note that rather than really supporting cluster operations in the test infra, the code was added (duplicated) in two files, one for module tests and one for non-module tests; maybe in the future we'll refactor that. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 4962c552)
-
meir authored
The allow list is implemented by setting a metatable on the global table before initializing any library. The metatable sets the `__newindex` field to a function that checks the allow list before adding the field to the table. Fields which are not on the allow list are simply ignored. After the initialization phase is done, we protect the global table and each table that might be reachable from the global table. For each table we also protect the table's metatable if one exists.
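A sketch of the allow-list idea in a standalone Lua host (the list contents and names here are illustrative, not the actual Redis allow list): a `__newindex` metamethod on `_G` accepts only whitelisted new globals and silently drops everything else.
```c
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

static const char *setup =
    "local allow = { my_helper = true }   -- illustrative allow list\n"
    "setmetatable(_G, {\n"
    "  __newindex = function(t, name, value)\n"
    "    -- only whitelisted new globals are stored; the rest are ignored\n"
    "    if allow[name] then rawset(t, name, value) end\n"
    "  end\n"
    "})\n"
    "my_helper = 42        -- on the allow list: kept\n"
    "leaked_state = 'oops' -- not on the allow list: silently dropped\n";

int main(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    if (luaL_dostring(L, setup) != 0) {
        fprintf(stderr, "error: %s\n", lua_tostring(L, -1));
        return 1;
    }
    lua_getglobal(L, "my_helper");
    lua_getglobal(L, "leaked_state");
    printf("my_helper is %s, leaked_state is %s\n",
           luaL_typename(L, -2), luaL_typename(L, -1)); /* number, nil */
    lua_close(L);
    return 0;
}
```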
-
meir authored
Use the new `lua_enablereadonlytable` Lua API to protect the global tables of eval scripts. The implementation is simple: we call `lua_enablereadonlytable` on the global table to turn it into a readonly table.
-
- 11 Apr, 2022 1 commit
-
-
Vo Trong Phuc authored
There was no check of the min-slaves-* config when evaluating a Lua script. Add a check that there are enough good slaves for write commands when evaluating scripts. Co-authored-by:
Phuc. Vo Trong <phucvt@vng.com.vn>
-
- 04 Oct, 2021 11 commits
-
-
Oran Agra authored
The execution of the RPOPLPUSH command by the fuzzer created junk keys that were later being selected by RANDOMKEY and modified. This also meant that lists were statistically tested more than other types. Fix the fuzzer not to pass junk key names to RPOPLPUSH, and add a check that verifies the fuzzer doesn't add new keys, in order to detect similar issues in the future. (cherry picked from commit 3f3f678a)
-
zhaozhao.zz authored
When a replica is paused, it would not apply any commands, even commands that come from its master. If we feed the non-applied commands to the replication stream, the replication offset would be wrong, and data would be lost after failover (since the replica's `master_repl_offset` grows but the commands are not applied). To fix it, here are the changes:
* Don't update the replica's replication offset or propagate commands to sub-replicas when it's paused in `commandProcessed`.
* Show `slave_read_repl_offset` in the info reply.
* Add an assert to make sure the master client should never be blocked unless paused or by a module (some modules may use the blocking mechanism to do background (parallel) processing and forward the original blocking module command to the replica; it's not a good way but it can work, so the assert excludes modules for now, but someday in the future all modules should rewrite blocking commands to propagate like what `BLPOP` does).
(cherry picked from commit 1b83353d)
-
Madelyn Olson authored
(cherry picked from commit 8b8f05c8)
-
sundb authored
This commit mainly fixes empty keys due to RDB loading and the RESTORE command, which was omitted in #9297.
1) When loading a quicklist, if all the ziplists in the quicklist are empty, NULL will be returned. If only some of the ziplists are empty, then we will skip the empty ziplists silently.
2) When loading a hash zipmap, if the zipmap is empty, the sanitization check will fail.
3) When loading a hash ziplist, if the ziplist is empty, NULL will be returned.
4) Add an RDB loading test with sanitize.
(cherry picked from commit cbda4929)
-
DarrenJiang13 authored
Add error counting for some missed behaviors. (cherry picked from commit 43eb0ce3)
-
Oran Agra authored
Recently we found two issues in the fuzzer tester: #9302 #9285. After fixing them, more problems surfaced and this PR (as well as #9297) aims to fix them. Here's a list of the fixes:
- Prevent an overflow when allocating a dict hashtable
- Prevent OOM when attempting to allocate a huge string
- Prevent a few invalid accesses in listpack
- Improve sanitization of listpack first entry
- Validate integrity of stream consumer groups PEL
- Validate integrity of stream listpack entry IDs
- Validate a ziplist tail followed by extra data which starts with 0xff
Co-authored-by:
sundb <sundbcn@gmail.com> (cherry picked from commit 0c90370e)
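As one concrete illustration of the first bullet above, a hedged sketch (types and names are illustrative, not the Redis dict code) of sizing a hash table defensively: validate the element count before computing the allocation size, so a corrupt length from an untrusted payload can't wrap the multiplication and leave an undersized buffer.
```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { void *key, *val; } entry;

static entry *alloc_table(size_t count) {
    if (count > SIZE_MAX / sizeof(entry)) return NULL;  /* would overflow */
    return malloc(count * sizeof(entry));
}

int main(void) {
    if (alloc_table((size_t)-1) == NULL)
        printf("rejected corrupt length\n");
    entry *t = alloc_table(1024);
    printf("allocation of 1024 entries %s\n", t ? "succeeded" : "failed");
    free(t);
    return 0;
}
```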
-
sundb authored
When we load an RDB or process a RESTORE command, if we encounter a length of 0, it will result in the creation of an empty key. This could either be a corrupt payload, or the result of a bug (see #8453). This PR mainly fixes the following:
1) The RESTORE command will return a `Bad data format` error.
2) When loading an RDB, we will silently discard the key.
Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 8ea777a6)
-
Viktor Söderqvist authored
(cherry picked from commit 1c59567a)
-