- 12 Dec, 2022 17 commits
-
-
Oran Agra authored
Fix a few issues with the recent #11463
* use exitFromChild instead of exit
* the test should ignore defunct processes, since that's what we expect to happen for these child processes when the parent dies
* fix typo

Co-authored-by:
Binbin <binloveplay1314@qq.com> (cherry picked from commit 4c54528f)
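As context for the first bullet: a forked child should leave via `_exit(2)` rather than `exit(3)`, so it doesn't run the parent's atexit handlers or flush stdio buffers it inherited. A sketch of what such a wrapper looks like (modeled on Redis' `exitFromChild` in server.c; the coverage guard is an assumption about why plain `exit` is ever kept):
```c
#include <stdlib.h>
#include <unistd.h>

/* Leave a forked child without running the parent's exit handlers or
 * double-flushing inherited stdio buffers. */
void exitFromChild(int retcode) {
#ifdef COVERAGE_TEST
    exit(retcode);   /* coverage builds need exit() to flush gcov data */
#else
    _exit(retcode);
#endif
}
```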
-
Oran Agra authored
During a diskless sync, if the master main process crashes, the child would have hung in `write`. This fix closes the read fd on the child side, so that if the parent crashes, the child gets a write error and exits. This change also covers disk-based replication, BGSAVE and AOFRW; in those cases the child wouldn't have hung, it would have just kept running until done, which may be pointless. There is a certain degree of risk here: in case there's a BGSAVE child that could maybe succeed and the parent dies for some reason, the old code would have let the child keep running and maybe succeed and avoid data loss. On the other hand, if the parent is restarted, it would have loaded an old rdb file (or none), and then the child could reach the end and rename the rdb file (data conflicting with what the parent has), or race with another BGSAVE child that the new parent started. Note that I removed a comment saying a write error will be ignored in the child and handled by the parent (this comment was very old and I don't think it's still relevant). (cherry picked from commit ccaef5c9)
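A minimal, self-contained POSIX sketch of the mechanism the fix relies on (illustrative, not the Redis source): if the child keeps a copy of the pipe's read end, the pipe still has a reader after the parent dies, so the child's `write` blocks forever; closing that copy turns the parent's death into an `EPIPE` the child can act on.
```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) exit(1);

    pid_t pid = fork();
    if (pid == 0) {                 /* child: stands in for the RDB writer */
        signal(SIGPIPE, SIG_IGN);   /* get EPIPE instead of being killed */
        close(fds[0]);              /* the fix: drop our read-end copy */
        char buf[4096] = {0};
        for (;;) {
            if (write(fds[1], buf, sizeof(buf)) == -1) {
                perror("child write");  /* EPIPE once the parent is gone */
                _exit(1);               /* exitFromChild() in Redis proper */
            }
        }
    }
    close(fds[1]);   /* parent keeps only the read end */
    sleep(1);        /* never reads; then "crashes" by exiting */
    return 0;        /* without the close above, the child hangs forever */
}
```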
-
Moti Cohen authored
The sentinel function sentinelAddrEqualsHostname() performs a DNS resolve and, based on the result, determines whether two addresses are equal. If the DNS resolve fails, the function simply returns 0, even when the hostnames are identical. This can become an issue during failover: sentinel may receive from a Redis instance a response to a regular INFO query it sent and, because of this function, wrongly decide that the instance points to a different leader than the one recorded, even though the hostnames are identical. In turn sentinel disconnects the link between sentinel and a valid slave, which leads to -failover-abort-no-good-slave. See issue #11241. I managed to reproduce only part of the flow, in which the function returns the wrong result and triggers +fix-slave-config. The fix: if the function fails to resolve, compare based on the hostnames. That is our best effort as long as resolution is unavailable for some reason, and it is fine since a Redis instance cannot have multiple hostnames for a given setup. (cherry picked from commit bd23b15a)
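A hedged sketch of the fallback (function and variable names are illustrative, not the sentinel source): resolve as before, but when resolution fails, compare the hostnames case-insensitively instead of reporting "not equal".
```c
#include <netdb.h>
#include <string.h>
#include <strings.h>
#include <sys/socket.h>

/* Resolve hostname to a numeric IP string; returns 0 on failure. */
static int resolveToIp(const char *hostname, char *ip, socklen_t iplen) {
    struct addrinfo hints = {0}, *res;
    hints.ai_family = AF_UNSPEC;
    if (getaddrinfo(hostname, NULL, &hints, &res) != 0) return 0;
    int ok = getnameinfo(res->ai_addr, res->ai_addrlen,
                         ip, iplen, NULL, 0, NI_NUMERICHOST) == 0;
    freeaddrinfo(res);
    return ok;
}

int addrEqualsHostname(const char *addr_ip, const char *hostname) {
    char ip[256];
    if (!resolveToIp(hostname, ip, sizeof(ip)))
        /* Best effort while DNS is unavailable: an instance has a single
         * hostname per setup, so a plain hostname match is safe. */
        return strcasecmp(addr_ip, hostname) == 0;
    return strcmp(addr_ip, ip) == 0;
}
```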
-
Meir Shpilraien (Spielrein) authored
If Redis crashes due to calling an invalid function pointer, the `backtrace` function will try to dereference this invalid pointer, which causes a crash inside the crash report itself and kills the process before all the crash report information is printed. Example:
```
=== REDIS BUG REPORT START: Cut & paste starting from here ===
198672:M 19 Sep 2022 18:06:12.936 # Redis 255.255.255 crashed by signal: 11, si_code: 1
198672:M 19 Sep 2022 18:06:12.936 # Accessing address: 0x1
198672:M 19 Sep 2022 18:06:12.936 # Crashed running the instruction at: 0x1
// here the process is crashing
```
This PR fixes the crash by:
1. Identifying the issue when it happens.
2. Replacing the invalid pointer with a pointer to a dummy function, so that `backtrace` will not crash.

The identification is done by comparing `eip` to `info->si_addr`: if they are the same, we know the crash happened on the same address it tried to access, and we can conclude that it tried to call an invalid function pointer. To replace the invalid pointer we introduce a new function, `setMcontextEip`, which is very similar to `getMcontextEip` and knows how to set the Eip for the different supported OSs. After printing the trace we restore the old `Eip` value. (cherry picked from commit 0bf90d94)
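A self-contained Linux/x86-64 sketch of the detect-patch-restore idea (the real code in debug.c is per-platform; `getMcontextEip`/`setMcontextEip` abstract the register access there):
```c
#define _GNU_SOURCE
#include <execinfo.h>
#include <signal.h>
#include <ucontext.h>
#include <unistd.h>

static void *getEip(ucontext_t *uc) {
    return (void *)uc->uc_mcontext.gregs[REG_RIP];
}
static void setEip(ucontext_t *uc, void *addr) {
    uc->uc_mcontext.gregs[REG_RIP] = (greg_t)addr;
}

/* Dummy target: shows up in the trace in place of the bad pointer. */
static void invalidFunctionWasCalled(void) {}

static void sigsegvHandler(int sig, siginfo_t *info, void *secret) {
    (void)sig;
    ucontext_t *uc = secret;
    void *eip = getEip(uc);
    /* eip == si_addr: we faulted while *executing* the accessed address,
     * i.e. a call through an invalid function pointer. */
    int bad_call = (eip != NULL && eip == info->si_addr);
    if (bad_call) setEip(uc, (void *)invalidFunctionWasCalled);
    void *trace[100];
    backtrace_symbols_fd(trace, backtrace(trace, 100), STDERR_FILENO);
    if (bad_call) setEip(uc, eip);  /* restore the original value */
    _exit(1);
}

int main(void) {
    struct sigaction act = {0};
    act.sa_flags = SA_SIGINFO;
    act.sa_sigaction = sigsegvHandler;
    sigaction(SIGSEGV, &act, NULL);
    ((void (*)(void))0x1)();        /* crash: si_addr == eip == 0x1 */
    return 0;
}
```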
-
zhaozhao.zz authored
This bug was introduced in #7653 (Redis 6.2.0). When `server.maxmemory_eviction_tenacity` is 100, `eviction_time_limit_us` is `ULONG_MAX`, and if we cannot find the best key to delete (e.g. maxmemory-policy is `volatile-lru` and all keys with a ttl have been evicted), then in `cant_free` Redis will sleep forever if items are being freed in the lazyfree thread. (cherry picked from commit 464aa041)
-
DarrenJiang13 authored
Fix a bug where scripts ignored client tracking NOLOOP and sent an invalidation message anyway. (cherry picked from commit 44859a41)
-
Meir Shpilraien (Spielrein) authored
Fix #11030, use lua_rawget to avoid triggering metatables. #11030 shows how returning `_G` from a Lua script (either function or eval) causes the Lua interpreter to panic and the Redis process to exit with error code 1. Though returning `_G` only panics on Redis 7 and 6.2.7, the underlying issue exists on older versions as well (6.0 and 6.2). The underlying issue is returning a table with a metatable such that the metatable raises an error. The following example demonstrates the issue:
```
127.0.0.1:6379> eval "local a = {}; setmetatable(a,{__index=function() foo() end}) return a" 0
Error: Server closed the connection
```
```
PANIC: unprotected error in call to Lua API (user_script:1: Script attempted to access nonexistent global variable 'foo')
```
The Lua panic happens because, when returning the result to the client, Redis needs to introspect the returned table and transform it into a RESP reply. In order to scan the table, Redis uses the `lua_gettable` API, which might trigger the metatable (if one exists) and might raise an error. This code is not running inside `pcall` (Lua protected call), so raising an error causes Lua to panic and exit. Notice that this is not a crash; it's a Lua panic that exits with error code 1. Returning `_G` panics on Redis 7 and 6.2.7 because on those versions `_G` has a metatable that raises an error when trying to fetch a nonexistent key.

### Solution

Instead of using `lua_gettable`, which might raise an error and cause the issue, use `lua_rawget`, which simply returns the value from the table without triggering any metatable logic. This is guaranteed not to raise an error. The downside of this solution is that it might be considered a breaking change, if someone relies on metatables in the returned value. An alternative solution is to wrap this entire logic with `pcall` (Lua protected call), but this alternative requires a much bigger refactoring.

### Back Porting

The same fix will work on older versions as well (6.2, 6.0). Notice that on those versions, the issue can cause Redis to crash if inside the metatable logic there is an attempt to access Redis (`redis.call`). On 7.0 there is no crash, and the `redis.call` is executed as if it was done from inside the script itself.

### Tests

Tests were added to verify the fix. (cherry picked from commit 020e046b)
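A self-contained illustration of the difference at the Lua C API level (plain Lua 5.1, not the Redis reply-conversion code itself):
```c
#include <lauxlib.h>
#include <lua.h>
#include <lualib.h>
#include <stdio.h>

int main(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    /* A table whose metatable raises on any missing-key lookup. */
    (void)luaL_dostring(L,
        "a = setmetatable({}, {__index = function() error('boom') end})");
    lua_getglobal(L, "a");         /* stack: a */
    lua_pushstring(L, "missing");  /* stack: a, "missing" */
    /* lua_gettable(L, -2) here would run __index and raise; outside a
     * protected call that's an unprotected error -> Lua panic (the bug). */
    lua_rawget(L, -2);             /* raw access: never runs metamethods */
    printf("rawget -> %s\n", lua_typename(L, lua_type(L, -1)));  /* nil */
    lua_close(L);
    return 0;
}
```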
-
chenyang8094 authored
RM_SetAbsExpire and RM_GetAbsExpire were not actually operational since they were introduced, due to an omission in the API registration. (cherry picked from commit 39d216a3)
-
Yossi Gottlieb authored
Use SSL_shutdown(), in a best-effort manner, when closing a TLS connection. This change better supports OpenSSL 3.x clients that will not silently ignore the socket-level EOF. (cherry picked from commit 45ae6053)
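A minimal sketch of the best-effort close (the wrapper name is an assumption): a single `SSL_shutdown()` queues our close_notify alert without waiting for the peer's reply, so OpenSSL 3.x peers see a clean TLS shutdown instead of a bare socket EOF.
```c
#include <openssl/ssl.h>
#include <unistd.h>

void tlsClose(SSL *ssl, int fd) {
    /* Returns 1 if the shutdown is complete, 0 if our close_notify was
     * sent but the peer's wasn't received yet. Best effort: we don't
     * loop or wait, and errors are ignored since we're closing anyway. */
    (void)SSL_shutdown(ssl);
    SSL_free(ssl);
    close(fd);
}
```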
-
Oran Agra authored
When we know the size of the zset we're going to store in advance, we can check whether it's greater than the listpack encoding threshold, in which case we can create a skiplist from the get-go and avoid converting the listpack to a skiplist later, after it was already populated. (cherry picked from commit 21891003)
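A self-contained sketch of the decision (stand-in constants mirroring `zset-max-listpack-entries`/`-value`; the real logic lives in t_zset.c):
```c
#include <stddef.h>

typedef enum { ENC_LISTPACK, ENC_SKIPLIST } zset_encoding;

/* Stand-ins for the server's configured thresholds. */
static const size_t max_listpack_entries = 128;
static const size_t max_listpack_value = 64;

/* With the final size known up front, pick the encoding once instead of
 * populating a listpack only to convert it afterwards. */
zset_encoding pickEncoding(size_t n_elems, size_t longest_elem) {
    if (n_elems > max_listpack_entries || longest_elem > max_listpack_value)
        return ENC_SKIPLIST;
    return ENC_LISTPACK;
}
```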
-
Vitaly authored
When `zrangestore` is called, a destination container object is created. Before this PR we used to create a listpack-based object even if `zset-max-ziplist-entries` or its equivalent `zset-max-listpack-entries` was set to 0. This triggered an immediate conversion of the listpack into a skiplist in `zrangestore`, which hit an assertion, resulting in an engine crash. Added a TCL test that reproduces this issue. (cherry picked from commit 6461f09f)
-
Madelyn Olson authored
Unpause clients after the manual failover ends, instead of waiting for the timed offset. (cherry picked from commit 32215e78)
-
- 06 Dec, 2022 1 commit
-
-
Meir Shpilraien (Spielrein) authored
On v6.2.7 a new mechanism was added to Lua scripts that allows filtering the globals of the Lua interpreter. This mechanism was added in the following commit: https://github.com/redis/redis/commit/11b602fbf8f9cdf8fc741c24625ab6287ab998a9 One of the globals that should have been filtered was `__redis__compare_helper`. This global was missed and was added neither to the allow list nor to the deny list. This is why we get the following warning when Redis starts: `A key '__redis__compare_helper' was added to Lua globals which is not on the globals allow list nor listed on the deny list.` After investigating the git blame log, the conclusion is that `__redis__compare_helper` is no longer needed; the PR deletes this function and fixes the warning.

Detailed explanation: `__redis__compare_helper` was added in this commit: https://github.com/redis/redis/commit/2c861050c1 Its purpose was to sort the replies of the `SORT` command when script replication is enabled, to keep the replies deterministic and avoid primary/replica synchronization issues. For the `SORT` command, there was a need for a special compare function able to compare boolean values. The need to sort the `SORT` command reply was removed in this commit: https://github.com/redis/redis/commit/36741b2c818a95e8ef167818271614ee6b1bc414 The sorting was moved to be part of the `SORT` command itself, and there was no longer a need to sort it in the Lua interpreter. That commit made `__redis__compare_helper` dead code but did not delete it.
-
- 27 Apr, 2022 22 commits
-
-
Oran Agra authored
-
Yossi Gottlieb authored
(cherry picked from commit 8bf4c2e3)
-
Oran Agra authored
-
filipe oliveira authored
Optimization: Use either monotonic or wall-clock to measure command execution time, to regain up to 4% execution time (#10502). In #7491 (part of Redis 6.2), we started using the monotonic timer instead of mstime to measure command execution time for stats. Apparently this meant sampling the clock 3 times per command rather than two (since we also need the wall-clock time), and in some cases this causes a significant overhead. This PR fixes that by avoiding the use of the monotonic timer, except for the cases where we know it should be extremely fast. This PR also adds a new INFO field called `monotonic_clock` that shows which clock Redis is using. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 3cd8baf6)
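For context, hedged stand-ins for the two samplers involved (Redis' versions live in server.c and monotonic.c; the monotonic one can also be TSC-backed, which is the "extremely fast" case). The saving comes from reusing the wall-clock sample the command path already takes, instead of paying for a third, monotonic sample:
```c
#include <stdint.h>
#include <sys/time.h>
#include <time.h>

static uint64_t ustime(void) {               /* wall clock, microseconds */
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000000 + tv.tv_usec;
}

static uint64_t getMonotonicUs(void) {       /* monotonic, microseconds */
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}
```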
-
Oran Agra authored
-
Yossi Gottlieb authored
(cherry picked from commit 8bf433dc)
-
Ozan Tezcan authored
Partial cherry pick from #9601 in order for the tests in #9601 to pass (cherry picked from commit b91d8b28)
-
Oran Agra authored
-
filipe oliveira authored
Avoid a deferred array reply on genericZrangebyrankCommand() when the consumer type is client, i.e. any ZRANGE / ZREVRANGE when rank is used. This was a performance regression introduced in #7844 (v6.2), mainly affecting pipelined workloads. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 1dc89e2d)
-
filipe oliveira authored
Avoid sprintf/ll2string on setDeferredAggregateLen()/addReplyLongLongWithPrefix() when we can use shared objects. In some pipelined workloads this achieves about a 10% improvement. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit b857928b)
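A self-contained sketch of the shared-object idea (Redis keeps similar precomputed headers, e.g. `shared.mbulkhdr`): small, frequent reply lengths reuse a preformatted RESP header instead of paying for a fresh `sprintf` each time.
```c
#include <stdio.h>

#define SHARED_HDR_MAX 32
static char shared_mbulk[SHARED_HDR_MAX][8];

/* Precompute "*<len>\r\n" for every small length, once at startup. */
void initSharedHeaders(void) {
    for (int i = 0; i < SHARED_HDR_MAX; i++)
        snprintf(shared_mbulk[i], sizeof(shared_mbulk[i]), "*%d\r\n", i);
}

const char *mbulkHeader(long len, char *scratch, size_t scratch_len) {
    if (len >= 0 && len < SHARED_HDR_MAX)
        return shared_mbulk[len];                     /* fast path */
    snprintf(scratch, scratch_len, "*%ld\r\n", len);  /* rare: format */
    return scratch;
}
```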
-
qetu3790 authored
Consider the following example:
1. `geoadd k1 -0.15307903289794921875 85 n1 0.3515625 85.00019260486917005437 n2`
2. `geodist k1 n1 n2` returns "4891.9380"
3. but `GEORADIUSBYMEMBER k1 n1 4891.94 m` only returns n1.

n2 is inside the bounding box but outside the search area, so we let the search area contain the bounding box in order to find n2. Co-authored-by:
Binbin <binloveplay1314@qq.com> (cherry picked from commit b2d393b9)
-
Yossi Gottlieb authored
* Drop obsolete initialization calls.
* Use the decoder API for DH parameters.
* Enable auto DH parameters if not explicitly used, which should be the preferred configuration going forward.

(cherry picked from commit 3881f785)
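A minimal sketch of the auto-DH point (the wrapper name is an assumption): with no explicit dhparams configured, let OpenSSL pick standardized parameters matched to the certificate key.
```c
#include <openssl/ssl.h>

int configureDhParams(SSL_CTX *ctx) {
    /* Automatic, well-known DH parameters (OpenSSL >= 1.1.0); preferred
     * over shipping a dhparams file going forward. Returns 1 on success. */
    return SSL_CTX_set_dh_auto(ctx, 1);
}
```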
-
Oran Agra authored
This PR handles several aspects:
1. Calls to RM_ReplyWithError from thread-safe contexts don't violate thread safety.
2. Errors returned from RM_Call to the module aren't counted in the statistics (they might be handled silently by the module).
3. When a module propagates a reply it got from RM_Call to its client, the error statistics are counted.

This is done by:
1. When appending an error reply to the output buffer, we avoid updating the global error statistics; instead we cache that error in a deferred list in the client struct.
2. When creating a RedisModuleCallReply object, the deferred error list is moved from the client into that object.
3. When a module calls RM_ReplyWithCallReply, we copy the deferred replies to the destination client (if that's a real client, that's when the error statistics are updated on the server).

Note about RM_ReplyWithCallReply: if the original reply had an array with errors, and the module replied with just a portion of the original reply rather than the entire reply, the errors are currently not propagated and the error stats will not get propagated. Fix #10180 (cherry picked from commit b099889a)
-
ivanstosic-janea authored
The protocol error was caused by the buggy `writeHandler` in `redis-benchmark.c`, which didn't handle one of the cases and thereby repeated data, leading to protocol errors when the values being sent are very long. This PR fixes #10233, an issue introduced by #7959. (cherry picked from commit bb875603)
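A self-contained sketch of the general bug class (not the benchmark's actual code): a write handler must carry its offset across short writes; restarting from the beginning resends bytes and corrupts the protocol stream exactly when payloads are long.
```c
#include <sys/types.h>
#include <unistd.h>

typedef struct {
    const char *buf;   /* full request(s) to send */
    size_t len;        /* total length */
    size_t written;    /* progress across writeHandler invocations */
} client;

/* Called whenever the socket becomes writable. */
int writeHandler(client *c, int fd) {
    while (c->written < c->len) {
        ssize_t n = write(fd, c->buf + c->written, c->len - c->written);
        if (n <= 0) return -1;     /* caller checks errno for EAGAIN */
        c->written += n;           /* the bug class: losing this offset */
    }
    c->written = 0;                /* done; ready for the next batch */
    return 0;
}
```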
-
Binbin authored
`PSYNC replicationid str_offset` will crash the server. The reason is that in `masterTryPartialResynchronization` we call `getLongLongFromObjectOrReply` to check the offset. With a wrong offset, it will add an error reply and then trigger a full SYNC, and the client becomes a replica, so we crash on the assertion `c->bufpos == 0 && listLength(c->reply) == 0`. In this commit, we check the psync_offset before entering `masterTryPartialResynchronization`, and return on error. Regardless of the crash, accepting the sync while also replying with an error would have corrupted the replication stream. (cherry picked from commit 344e41c9)
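A hedged sketch of the fix's shape (simplified from the commit's description; the real code lives in replication.c's `syncCommand` and is not reproduced here):
```c
/* Simplified sketch, not the actual replication.c code. */
long long psync_offset = -1;
/* Reject a malformed offset with a single error reply, *before* any
 * partial-resync handling can also write to the stream. */
if (getLongLongFromObjectOrReply(c, c->argv[2], &psync_offset, NULL) != C_OK)
    return;
if (masterTryPartialResynchronization(c, psync_offset) == C_OK)
    return;                /* partial resync accepted (+CONTINUE sent) */
/* ... otherwise fall through to a full SYNC ... */
```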
-
Moti Cohen authored
As Sentinel relies upon a consensus algorithm, all sentinel instances randomize the time of their next attempt to become the leader of the group. But time after time, they all drew the same value. The problem is in the line `srand(time(NULL)^getpid())`: all spun-up containers get the same time (in seconds) and the same pid, which is always 1. The fix adds `tv_usec` as extra seed material and verifies that even consecutive calls produce different values. (cherry picked from commit 52b2fbe9)
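A self-contained sketch of the seeding change (the wrapper name is illustrative): mixing in `tv_usec` makes seeds differ even for containers sharing the same boot second and pid 1.
```c
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

static void seedRandom(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    /* Old: srand(time(NULL) ^ getpid()) -- identical across containers
     * started within the same second, all with pid 1. */
    srand(tv.tv_sec ^ tv.tv_usec ^ getpid());
}
```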
-