- 28 Feb, 2023 6 commits
-
-
Harkrishn Patro authored
Currently, while a sharded pubsub message publish tries to propagate the message across the cluster, a NULL check is missing for clusterLink. clusterLink could be NULL if the link is consuming memory beyond the set threshold cluster-link-sendbuf-limit and the server terminates the link. This change introduces two things: (1) it avoids engine crashes on the publishing node if a message is sent to a node whose link is NULL; (2) it adds a debugging tool CLUSTERLINK KILL to terminate the clusterLink between two nodes. (cherry picked from commit fd397568)
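A minimal, self-contained sketch of the defensive pattern described above; all type and function names here are stand-ins, not the actual cluster.c API:
```c
#include <stddef.h>

/* Stand-in types: only the shape of the check matters here. */
typedef struct clusterLink clusterLink;
typedef struct clusterNode { clusterLink *link; } clusterNode;

static void sendMessageToLink(clusterLink *link, const char *msg) {
    (void)link; (void)msg; /* placeholder for the real send path */
}

void publishShardedToNode(clusterNode *node, const char *msg) {
    /* The link may have been freed, e.g. after exceeding
     * cluster-link-sendbuf-limit, so guard before dereferencing it. */
    if (node == NULL || node->link == NULL) return;
    sendMessageToLink(node->link, msg);
}
```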
-
Binbin authored
Change history:
- `user` added in 6.0.0, 0f42447a
- `argv-mem` and `tot-mem` added in 6.2.0, bea40e6a
- `redir` added in 6.2.0, dd1f20ed
- `resp` added in 7.0.0, 7c376398
- `multi-mem` added in 7.0.0, 2753429c
- `rbs` and `rbp` added in 7.0.0, 47c51d0c
- `ssub` added in 7.0.3, 35c2ee87

(cherry picked from commit e7f35edb)
-
uriyage authored
In #7875 (Redis 6.2), we changed the sds alloc to be the usable allocation size in order to:
> reduce the need for realloc calls by making the sds implicitly take over the internal fragmentation

This change was done to most sds functions, excluding `sdsRemoveFreeSpace` and `sdsResize`. The reason is that in some places (e.g. clientsCronResizeQueryBuffer) we call sdsRemoveFreeSpace when we see excessive free space and want to trim it, so if we don't trim it exactly to size, the caller may still see excessive free space and call it again and again.

However, this resulted in some excessive calls to realloc, even when there's no need and it's going to be a no-op (e.g. when reducing a 15 byte allocation to 13). It turns out that a call to realloc with jemalloc can be expensive even if it ends up doing nothing, so this PR adds a check using `je_nallocx`, which is cheap, to avoid the call to realloc.

In addition, this PR unifies sdsResize and sdsRemoveFreeSpace into common code. The difference between them was that sdsResize would avoid using SDS_TYPE_5, since it wants to keep the string ready to be resized again, while sdsRemoveFreeSpace would permit using SDS_TYPE_5 to get optimal memory consumption. Now both methods take a `would_regrow` argument that makes this more explicit. The only actual impact of that is that in clientsCronResizeQueryBuffer we call both sdsResize and sdsRemoveFreeSpace in different cases, and we now prevent the use of SDS_TYPE_5 in both.

The new test that was added to cover this concern used to pass before this PR as well; this PR is just a performance optimization and cleanup.

Benchmark: `redis-benchmark -c 100 -t set -d 512 -P 10 -n 100000000` on i7-9850H with jemalloc shows an improvement from 1021k ops/sec to 1067k (average of 3 runs), some 4.5% improvement.

Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 46393f98)
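A rough sketch of the kind of check described above, assuming a jemalloc build with the `je_` prefix (as the bundled jemalloc uses); the surrounding function is illustrative, not the actual sds code:
```c
#include <stdlib.h>

#if defined(USE_JEMALLOC)
#include <jemalloc/jemalloc.h>
#endif

/* Shrink an allocation to newsize, but skip the realloc when jemalloc would
 * keep the same size class anyway, since such a realloc is an expensive no-op. */
void *shrink_alloc(void *ptr, size_t newsize) {
#if defined(USE_JEMALLOC)
    /* je_nallocx() only computes the size class malloc(newsize) would use. */
    if (je_nallocx(newsize, 0) == je_malloc_usable_size(ptr))
        return ptr;
#endif
    return realloc(ptr, newsize);
}
```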
-
Madelyn Olson authored
This change improves the performance of cluster slots by removing the deferred length replies that are used. Deferred lengths are used in two contexts: the first is for determining the number of replicas that serve a slot (added in 6.2 as part of a different performance improvement) and the second is for determining the extra networking options for each node (added in 7.0). For continuous slots (e.g. 0-8196) this improvement is very negligible, however it becomes more significant when slots are not continuous (e.g. 0 2 4 6 etc.), which can happen in production for various users.

The `cluster slots` command is deprecated in favor of `cluster shards`, but since most clients don't support the new command yet I think it's important to not degrade performance here.

Benchmarking shows about 2x improvement, however I wasn't able to get a coherent TPS number since the benchmark process was being saturated long before Redis was, so I had to run with multiple benchmarks and merge results. If needed I can add this to our memtier framework. Instead, the next section shows the number of usec per call from the benchmark results, which shows significant improvement as well as having a more coherent response in the CoB.

| | New Code | Old Code | % Improvement |
|----|----|----|----|
| Uniform slots | usec_per_call=10.46 | usec_per_call=11.03 | 5.7% |
| Worst case (only even slots) | usec_per_call=963.80 | usec_per_call=2950.99 | 307% |

This change also removes some extra white space that I added when making a code change for adding hostnames.

(cherry picked from commit e74a1f3b)
-
guybe7 authored
We need to honor the post-execution-unit API and call it after each KSN. Note that this is an edge case that only happens when volatile keys are created directly on a writable replica, and that anyway nothing is propagated to sub-replicas. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit df327b8b)
-
- 16 Jan, 2023 11 commits
-
-
Oran Agra authored
-
Oran Agra authored
Missing range check in ZRANDMEMBER and HRANDFIELD leading to panic due to protocol limitations
-
Oran Agra authored
Authenticated users issuing specially crafted SETRANGE and SORT(_RO) commands can trigger an integer overflow, resulting in Redis attempting to allocate impossible amounts of memory and abort with an OOM panic.
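An illustrative guard for this class of bug (not the actual patch, which also enforces configured bulk-length limits): verify that offset + length cannot overflow before sizing any allocation.
```c
#include <limits.h>

/* Return 0 and set *total when offset+len fits in a long long, -1 otherwise. */
int checked_string_length(long long offset, long long len, long long *total) {
    if (offset < 0 || len < 0) return -1;
    if (offset > LLONG_MAX - len) return -1; /* offset + len would overflow */
    *total = offset + len;
    return 0;
}
```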
-
Oran Agra authored
Related to the hang reported in #11671. Currently, redis can disconnect a client due to reaching the output buffer limit; it'll also avoid feeding that output buffer with more data, but it will keep running the loop in the command (despite the client already being marked for disconnection). This PR is an attempt to mitigate the problem for commands that are easy to abuse, specifically: KEYS, HRANDFIELD, SRANDMEMBER, ZRANDMEMBER. The RAND family of commands can take a negative COUNT argument (which is not bound to the number of elements in the key), so it's enough to create a key with one field, and then these commands can be used to hang redis. For KEYS the caller can use the existing keyspace in redis (if big enough).
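A self-contained sketch of the mitigation idea, using stand-in types and a stand-in flag name rather than the real server internals: stop generating reply data once the client has already been marked for disconnection.
```c
/* Stand-ins: the real server tracks this state on its client struct. */
#define CLIENT_CLOSE_ASAP (1 << 0)
typedef struct client { int flags; } client;

static void add_reply_element(client *c, long i) { (void)c; (void)i; }

void reply_random_members(client *c, long count) {
    for (long i = 0; i < count; i++) {
        /* Once the output buffer limit was hit there is no point producing
         * more data the client will never receive. */
        if (c->flags & CLIENT_CLOSE_ASAP) break;
        add_reply_element(c, i);
    }
}
```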
-
Oran Agra authored
Turns out that a fork child calling getExpire while persisting keys (and possibly also as a result of some module fork tasks) could cause dictFind to do incremental rehashing in the child process, which is both a waste of time and also causes COW harm. (cherry picked from commit 2bec254d)
-
Gabi Ganam authored
Any value in the range [0, 1) turns to 0 when cast from double to long long. This change rounds up instead of down for values that can't be stored precisely as long doubles. (cherry picked from commit eef29b68)
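A small illustration (not the actual Redis code) of why truncation is a problem for sub-second values:
```c
#include <math.h>
#include <stdio.h>

int main(void) {
    long double ttl = 0.9L;                       /* e.g. a relative expire of 0.9 */
    long long truncated = (long long)ttl;         /* 0: would expire immediately */
    long long rounded_up = (long long)ceill(ttl); /* 1: the behavior after the fix */
    printf("truncated=%lld rounded_up=%lld\n", truncated, rounded_up);
    return 0;
}
```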
-
Oran Agra authored
TL;DR: solves a problem introduced in Redis 7.0.6 (#11541) with RM_CommandFilterArgInsert being called from scripts, which can lead to memory corruption.

Libc realloc can return the same pointer even if the size was changed. The code in freeLuaRedisArgv had an assumption that if the pointer didn't change, then the allocation didn't change, and the cache can still be reused. However, if rewriteClientCommandArgument or RM_CommandFilterArgInsert were used, it could be that we reallocated the argv array and the pointer didn't change; a consecutive command being executed from Lua could then use that argv cache and reach beyond its size. This was actually only possible with modules, since the decision to realloc was based on argc rather than argv_len. (cherry picked from commit c8052122)
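A minimal demonstration of the pitfall (not the freeLuaRedisArgv code itself): after a shrinking realloc the pointer may be unchanged, so a cache keyed only on the pointer value can later index past the smaller allocation; the cached length must be updated whenever the array is resized.
```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t cached_len = 8;                 /* slots the cache believes it owns */
    char *argv = malloc(cached_len);
    if (!argv) return 1;

    char *resized = realloc(argv, 4);      /* libc may return the same pointer */
    if (!resized) { free(argv); return 1; }

    /* Pointer equality does not mean the old capacity is still valid: reusing
     * cached_len (8) against a 4-byte allocation would read/write past it. */
    cached_len = 4;

    printf("same pointer: %s, cached_len=%zu\n",
           resized == argv ? "yes" : "no", cached_len);
    free(resized);
    return 0;
}
```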
-
sundb authored
This call was introduced in #8687, but became irrelevant in #11348, and is currently a no-op. The fact is that #11348 has an unintended side effect: even if the client eviction config is enabled, there are certain types of clients for which memory consumption is not accurately tracked, and so unlike normal clients, their memory isn't reported correctly in INFO. (cherry picked from commit af0a4fe2)
-
Moti Cohen authored
As Sentinel supports dynamic IP only when using hostnames, there is some leftover address comparison logic that doesn't take into account that the IP might change. Co-authored-by:
moticless <moticless@github.com> (cherry picked from commit 4a27aa48)
-
- 16 Dec, 2022 2 commits
-
-
Oran Agra authored
-
filipe oliveira authored
Fixes a regression introduced by #11552 in 7.0.6. It causes replies in the GEO commands to contain garbage when the result is a very small distance (less than 1). Includes a test to confirm that with junk in the buffer we now reply properly. (cherry picked from commit d7b4c917)
-
- 12 Dec, 2022 21 commits
-
-
Oran Agra authored
-
Binbin authored
In the replica, the key expired before the master's `INCR` arrived, so INCR creates a new key in the replica and the test failed.
```
*** [err]: Replication of an expired key does not delete the expired key in tests/integration/replication-4.tcl
Expected '0' to be equal to '1' (context: type eval line 13 cmd {assert_equal 0 [$slave exists k]} proc ::test)
```
This test is very likely to produce a false positive if `wait_for_ofs_sync` takes longer than the expiration time, so give it a few more chances. The test was introduced in #9572.

(cherry picked from commit 06b577aa)
-
Binbin authored
There is a race condition in the test:
```
*** [err]: redis-cli --cluster add-node with cluster-port in tests/unit/cluster/cli.tcl
Expected '5' to be equal to '4' {assert_equal 5 [CI 0 cluster_known_nodes]} proc ::test)
```
When using the cli to add a node, there can potentially be a race condition in which all nodes present cluster state o.k. even though the added node did not yet meet all cluster nodes. This comment and the fix were taken from #11221. Also apply it in several other similar places.

(cherry picked from commit a549b78c)
-
ranshid authored
When using the cli to add a node, there can potentially be a race condition in which all nodes present cluster state o.k. even though the added node did not yet meet all cluster nodes. This adds another utility function to wait until all cluster nodes see the same cluster size. (cherry picked from commit c0ce97fa)
-
Binbin authored
A timing issue like this was reported in the freebsd daily CI:
```
*** [err]: Sanity test push cmd after resharding in tests/unit/cluster/cli.tcl
Expected 'CLUSTERDOWN The cluster is down' to match '*MOVED*'
```
We additionally wait for each node to reach a consensus on the cluster state in wait_for_condition to avoid the cluster down error. The fix is just like #10495, quoting madolson's comment: Cluster check just verifies that the config state is self-consistent; waiting for cluster_state to be okay is an independent check that all the nodes actually believe each other are healthy. At the same time I noticed that unit/moduleapi/cluster.tcl has the exact same test, which may have the same problem, so I also modified it.

(cherry picked from commit 5ce64ab0)
-
David CARLIER authored
* Fixes a build warning when CACHE_LINE_SIZE is already defined
* Fixes wrong CACHE_LINE_SIZE on some FreeBSD systems where it could be set to 128 (e.g. on MIPS)
* Fixes wrong CACHE_LINE_SIZE on Apple M1 (use 128 instead of 64)

A wrong cache line size in that case can cause false sharing of array elements between threads, see #10892.

(cherry picked from commit 871cc200)
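A sketch of the guarded definition implied by the description above (the specific platform checks are assumptions, not the exact patch):
```c
/* Only define CACHE_LINE_SIZE when the build environment hasn't already. */
#ifndef CACHE_LINE_SIZE
#if defined(__APPLE__) && defined(__arm64__)
#define CACHE_LINE_SIZE 128   /* Apple M1 uses 128-byte cache lines */
#else
#define CACHE_LINE_SIZE 64
#endif
#endif
```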
-
Oran Agra authored
Clang Address Sanitizer runs started reporting unknown-crash on these tests due to the memcheck; disable the memcheck to avoid that noise. (cherry picked from commit 18ff6a3269a34b9bfe549b34bda9d83c3eae7e2a)
-
Oran Agra authored
This test sets the master ping interval to 1 hour, in order to avoid pings in the replication stream incrementing the replication offset. However, it didn't increase the repl-timeout, so on slow machines where the test took more than 60 seconds, the replicas would drop and reconnect.
```
*** [err]: PSYNC2: Partial resync after restart using RDB aux fields in tests/integration/psync2.tcl
Replica didn't partial sync
```
The test would detect 4 additional partial syncs where it expects only one.

(cherry picked from commit b0250b45)
-
Binbin authored
Our FreeBSD daily has been failing recently:
```
Config file: freebsd-13.1.conf
cd: /Users/runner/work/redis/redis: No such file or directory
gmake: *** No targets specified and no makefile found. Stop.
```
Upgrading vmactions/freebsd-vm to the latest version (0.3.0) works. I've tested it; I don't know why it helps, but let's fix it first.

(cherry picked from commit 5246bf45)
-
Binbin authored
The kill above is sometimes successful and sometimes already too late. The PING in the psync wrong offset test got rejected by bgsaveerr because lastbgsave_status is C_ERR. In theory, using diskless can avoid PING being affected, because when the replica is dropped, we kill the child with SIGUSR1, and this does not affect lastbgsave_status. Anyway, this kill is not particularly needed here; dropping the kill is the best option, since we do have waitForBgsave, so just let it take care of the bgsave. No need for fast termination. (cherry picked from commit e7144693)
-
filipe oliveira authored
There is overhead in the Redis 7.0 EXPIRE command that is not present in 6.2.7. We could see that on the unstable profile there are around 7% of CPU cycles spent on rewriteClientCommandVector that are not present in 6.2.7. This was introduced in #8474. This PR reduces the overhead by using two rewriteClientCommandArgument calls instead of rewriteClientCommandVector. In this scenario rewriteClientCommandVector creates 4 arguments; the above usage of rewriteClientCommandArgument reduces the overhead in half. This PR should also improve PEXPIREAT performance by avoiding rewriteClientCommandArgument usage altogether. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit c3fb48da)
-
Harkrishn Patro authored
## Issue

During the client input/output buffer processing, memory usage is incrementally updated to keep track of clients going beyond a certain threshold `maxmemory-clients` to be evicted. However, this additional tracking activity wastes CPU cycles when no client eviction is required. This applies in two cases:

* `maxmemory-clients` is set to `0`, which equates to no client eviction (applicable to all clients)
* The `CLIENT NO-EVICT` flag is set to `ON`, which means that particular client is not applicable for eviction.

## Solution

* Disable client memory usage tracking during the read/write flow when `maxmemory-clients` is set to `0` or `client no-evict` is `on`. The memory usage is tracked only during `clientCron`, i.e. it gets periodically updated.
* Clean up the clients from the memory usage bucket when client eviction is disabled.
* When the maxmemory-clients config is enabled or disabled at runtime, we immediately update the memory usage buckets for all clients (tested: scanning 80000 clients took some 20ms).

Benchmarks have shown that this can improve performance by about 5% in certain situations.

Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit c0267b3f)
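A self-contained sketch of the gating logic in the solution; all names below (config field, flag, tracking helper) are stand-ins rather than the real server code:
```c
/* Stand-in state: the real server keeps these on its global and client structs. */
struct { unsigned long long maxmemory_clients; } server = {0};

#define CLIENT_NO_EVICT (1 << 0)
typedef struct client { int flags; } client;

static void update_client_mem_usage_bucket(client *c) { (void)c; }

static int client_eviction_applies(const client *c) {
    if (server.maxmemory_clients == 0) return 0;  /* eviction disabled globally */
    if (c->flags & CLIENT_NO_EVICT) return 0;     /* this client is exempt */
    return 1;
}

/* Called from the read/write flow: skip the per-I/O tracking entirely when it
 * can never lead to an eviction; the cron still refreshes usage periodically. */
void track_client_mem_after_io(client *c) {
    if (!client_eviction_applies(c)) return;
    update_client_mem_usage_bucket(c);
}
```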
-
Binbin authored
There is an issue with --sentinel:
```
[root]# src/redis-server sentinel.conf --sentinel --loglevel verbose
*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 352
>>> 'sentinel "--loglevel" "verbose"'
Unrecognized sentinel configuration statement
```
This is because in #10660 (Redis 7.0.1), the `--` prefix change broke it. In this PR, we handle `--sentinel` the same as we did for `--save` in #10866, i.e. it's a pseudo config option with no value.

(cherry picked from commit 8f13ac10)
-
filipe oliveira authored
This is take 2 of the `GEOSEARCH BYBOX` optimizations based on the haversine distance formula when the longitude diff is 0. The first one was in #11535.
- Given the longitude diff is 0, the asin(sqrt(a)) in the haversine becomes asin(sin(abs(u))).
- arcsin(sin(x)) is equal to x when x ∈ [−𝜋/2, 𝜋/2].
- Given latitude is in [−𝜋/2, 𝜋/2], we can simplify arcsin(sin(x)) to x.

On the sample dataset with 60M datapoints, we've measured a 55% increase in the achievable ops/sec. (cherry picked from commit e48ac075)
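For reference, a short derivation of the simplification under the standard haversine formula (r is the sphere radius, φ latitude, λ longitude); this restates the bullet points above:
```latex
a = \sin^2\!\left(\tfrac{\Delta\varphi}{2}\right)
  + \cos\varphi_1 \cos\varphi_2 \,\sin^2\!\left(\tfrac{\Delta\lambda}{2}\right),
\qquad d = 2r \arcsin\sqrt{a}.

% When \Delta\lambda = 0 the second term vanishes, so
d = 2r \arcsin\!\left|\sin\tfrac{\Delta\varphi}{2}\right|
  = 2r \left|\tfrac{\Delta\varphi}{2}\right|
  = r\,|\Delta\varphi|,
\quad \text{since } \tfrac{\Delta\varphi}{2} \in \left[-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right].
```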
-
filipe oliveira authored
This mechanism aims to reduce calls to malloc and free when preparing the arguments the script sends to redis commands. This mechanism was originally implemented in 48c49c48 and 4f686555, and was removed in #10220 (thinking it's not needed and that it has no impact), but it now turns out it was wrong, and it indeed provides some 5% performance improvement.

The implementation is a little bit too simplistic; it assumes consecutive calls use the same size in the same arg index, but that's arguably sufficient since it's only aimed at caching very small things. We could even consider always pre-allocating args to the full LUA_CMD_OBJCACHE_MAX_LEN (64 bytes) rather than the right size for the argument, which would increase the chance they'll be able to be re-used. But in some way this is already happening, since we're using sdsalloc, which in turn uses s_malloc_usable and takes ownership of the full size of the allocation, so we are padded to the allocator bucket size. See the sketch after this entry.

Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
sundb <sundbcn@gmail.com> (cherry picked from commit 2d80cd78)
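A simplified, self-contained sketch of such a per-argument-index cache; LUA_CMD_OBJCACHE_MAX_LEN comes from the description above, while the cache size and helper names are assumptions:
```c
#include <stdlib.h>

#define LUA_CMD_OBJCACHE_SIZE 32      /* assumed number of cached arg slots */
#define LUA_CMD_OBJCACHE_MAX_LEN 64   /* only very small buffers are cached */

static void  *argcache[LUA_CMD_OBJCACHE_SIZE];
static size_t argcache_len[LUA_CMD_OBJCACHE_SIZE];

/* Reuse the buffer cached for this arg index when it is big enough. */
void *arg_buffer_get(int idx, size_t len) {
    if (idx < LUA_CMD_OBJCACHE_SIZE && argcache[idx] != NULL &&
        len <= LUA_CMD_OBJCACHE_MAX_LEN && argcache_len[idx] >= len)
    {
        void *buf = argcache[idx];
        argcache[idx] = NULL;
        return buf;
    }
    return malloc(len);
}

/* Keep small buffers for the next call instead of freeing them. */
void arg_buffer_put(int idx, void *buf, size_t len) {
    if (idx < LUA_CMD_OBJCACHE_SIZE && argcache[idx] == NULL &&
        len <= LUA_CMD_OBJCACHE_MAX_LEN)
    {
        argcache[idx] = buf;
        argcache_len[idx] = len;
        return;
    }
    free(buf);
}
```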
-
filipe oliveira authored
GEODIST used snprintf("%.4f") for the reply via addReplyDoubleDistance, which was slow. This PR optimizes it without breaking compatibility by following the approach of ll2string, with some changes to match the use case of distance and precision: we multiply it by 10000, format it as an integer, and then add a decimal point. This achieves about a 35% increase in the achievable ops/sec. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 61c85a2b)
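A rough sketch of the fixed-point idea (the real code follows ll2string rather than snprintf, and the helper below is hypothetical):
```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Format a non-negative distance with exactly 4 decimal digits using integer
 * math: scale by 10000, round, then print whole and fractional parts. */
static int format_distance(char *buf, size_t len, double d) {
    long long fixed = llround(d * 10000.0);
    long long whole = fixed / 10000;
    long long frac  = llabs(fixed % 10000);
    return snprintf(buf, len, "%lld.%04lld", whole, frac);
}

int main(void) {
    char buf[32];
    format_distance(buf, sizeof(buf), 123.456789);
    puts(buf); /* prints 123.4568 */
    return 0;
}
```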
-
Yossi Gottlieb authored
* Remove duplicate code, propagating SSL errors into connection state.
* Add missing error handling in synchronous IO functions.
* Fix connection error reporting in some replication flows.

(cherry picked from commit 155acef5)
-
filipe oliveira authored
Profiling EVALSHA, we see that luaReplyToRedisReply takes 8.73% out of the 56.90% of luaCallFunction CPU cycles. Using addReplyStatusLength instead of directly composing the protocol avoids sdscatprintf and addReplySds (which imply multiple sdslen calls). The new approach drops luaReplyToRedisReply CPU cycles to 3.77%. (cherry picked from commit 68e87eb0)
-
filipe oliveira authored
As discussed in #10981, we see a degradation in performance between v6.2 and v7.0 of Redis on the EVAL command. After profiling the current unstable branch we can see that we call the expensive function evalCalcFunctionName twice. The current "fix" is to basically avoid calling evalCalcFunctionName and even dictFind(lua_scripts) twice for the same command. Instead we cache the current script's dictEntry (for both Eval and Functions) in the current client, so we don't have to repeat these calls. The exception is when doing an EVAL on a new script that's not yet in the script cache; in that case we will call evalCalcFunctionName (and even evalExtractShebangFlags) twice. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 7dfd7b91)
-
zhaozhao.zz authored
redis-benchmark: when trying to get the CONFIG before the benchmark, avoid printing any warning on most errors (e.g. a NOPERM error), and avoid aborting the benchmark on NOPERM. Keep the warning only when we abort the benchmark on a NOAUTH error. (cherry picked from commit f0005b53)
-