- 10 Jun, 2020 2 commits
- 09 Jun, 2020 1 commit
antirez authored
- 03 Jun, 2020 1 commit
zhaozhao.zz authored
- 31 May, 2020 1 commit
Oran Agra authored
- 28 May, 2020 4 commits
antirez authored
This will likely avoid false positives due to trailing pings.
antirez authored
Oran Agra authored
These tests create several edge cases that are otherwise uncovered (at least not consistently) by the test suite, so although they're no longer testing what they were meant to test, it's still a good idea to keep them in the hope that they'll expose some issue in the future.
Oran Agra authored
- 27 May, 2020 4 commits
- 26 May, 2020 1 commit
Oran Agra authored
Apparently when running tests in parallel (the default of --clients 16), there's a chance for two tests to use the same port. Specifically, one test might shut down a master and still have the replica up, and then another test will re-use the port number of that master for a master of its own, and the replica will then connect to the master of the other test. This can cause a master to count too many full syncs and fail a test if we run the tests with --single integration/psync2 --loop --stop. See Problem 2 in #7314.
- 25 May, 2020 1 commit
antirez authored
- 22 May, 2020 1 commit
Qu Chen authored
Always disconnect chained replicas when the replica performs PSYNC with the master, to avoid a replication offset mismatch between the master and its chained replicas.
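A minimal sketch of the idea, with illustrative names (not the actual patch): when a replica starts a PSYNC with its master, it drops its own sub-replicas so they reconnect and re-negotiate their offsets against the new stream.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative model of a chained-replica list; names are hypothetical. */
struct subreplica {
    int fd;
    struct subreplica *next;
};

/* Called when this replica initiates PSYNC with its master: dropping
 * sub-replicas forces them to reconnect and PSYNC against the offset we
 * end up with, so master and chained replicas cannot silently diverge. */
static void disconnectSubReplicas(struct subreplica **head) {
    while (*head) {
        struct subreplica *r = *head;
        *head = r->next;
        printf("disconnecting sub-replica fd=%d\n", r->fd);
        free(r); /* a real server would also close(r->fd) */
    }
}

int main(void) {
    struct subreplica *head = malloc(sizeof(*head));
    head->fd = 42;
    head->next = NULL;
    disconnectSubReplicas(&head); /* as if a PSYNC to the master just began */
    return 0;
}
```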
- 20 May, 2020 1 commit
Oran Agra authored
There's a rare case which leads to stagnation in the defragger, causing it to keep scanning the keyspace and do nothing (not moving any allocation). This happens when all the allocator slabs of a certain bin have the same % utilization, but the slab from which new allocations are made has a lower utilization. This commit fixes it by removing the current slab from the overall average utilization of the bin, eliminating any precision loss in the utilization calculation, and moving the decision about the defrag to reside inside jemalloc. It also adds a test that consistently reproduces this issue.
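A toy model of the arithmetic involved (illustrative names, not jemalloc's API): including the lower-utilized current slab drags the bin's average down, so every other slab looks better than average and is never chosen for defrag; excluding it removes the bias.

```c
#include <stdio.h>

/* Toy model: a bin with `nslabs` slabs of `regs_per_slab` regions each;
 * used[i] is slab i's occupancy and slab 0 is the one new allocations
 * currently go to. */
static double bin_utilization(const int *used, int nslabs,
                              int regs_per_slab, int exclude_current) {
    int total_used = 0, total_regs = 0;
    for (int i = exclude_current ? 1 : 0; i < nslabs; i++) {
        total_used += used[i];
        total_regs += regs_per_slab;
    }
    return total_regs ? (double)total_used / total_regs : 0.0;
}

int main(void) {
    /* Three slabs at 50% utilization and a current slab at 25%. */
    int used[4] = {2, 4, 4, 4}; /* 8 regions per slab, slab 0 is current */
    /* 0.4375: the 50% slabs look better than average, nothing moves. */
    printf("avg incl. current slab: %.4f\n", bin_utilization(used, 4, 8, 0));
    /* 0.5000: the unbiased baseline the fix compares against. */
    printf("avg excl. current slab: %.4f\n", bin_utilization(used, 4, 8, 1));
    return 0;
}
```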
- 18 May, 2020 1 commit
- 17 May, 2020 2 commits
antirez authored
- 14 May, 2020 1 commit
antirez authored
- 12 May, 2020 1 commit
Oran Agra authored
This test, which has coverage for various flows of a diskless master, was failing randomly from time to time. The failure was:

[err]: diskless all replicas drop during rdb pipe in tests/integration/replication.tcl
log message of '*Diskless rdb transfer, last replica dropped, killing fork child*' not found

What seems to have happened is that the master didn't detect that all replicas had dropped by the time the replication ended; it thought that one replica was still connected. Now the test takes a few seconds longer, but it seems stable.
- 11 May, 2020 3 commits
Oran Agra authored
This bug was introduced by a recent change in which readQueryFromClient uses freeClientAsync, and despite the fact that freeClientsInAsyncFreeQueue is now in beforeSleep, that's not enough, since beforeSleep is not called during loading in processEventsWhileBlocked. Furthermore, afterSleep was called in that case but beforeSleep wasn't. This bug also caused slowness, since the level-triggered mode of epoll kept signaling these connections as readable, causing us to keep doing connRead again and again for all of them, and they kept accumulating. Now both beforeSleep and afterSleep are called, but not all of their actions are performed during loading; some are reserved for the main loop. Fixes issue #7215.
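A simplified sketch of the control flow described above (the flag parameter is illustrative; the actual code gates this differently): both hooks now run from processEventsWhileBlocked too, but only the subset of actions that is safe during loading, which crucially includes draining the async-free queue so level-triggered epoll stops re-firing on dead clients.

```c
#include <stdio.h>

/* Stand-ins for the real work; bodies are illustrative only. */
static void freeClientsInAsyncFreeQueue(void) { puts("drain async-freed clients"); }
static void handleClientsWithPendingWrites(void) { puts("flush pending replies"); }

/* during_loading is a hypothetical flag: while loading RDB/AOF only a
 * safe subset runs; the rest is reserved for the main event loop. */
static void beforeSleep(int during_loading) {
    freeClientsInAsyncFreeQueue(); /* must run even while loading, or
                                    * epoll keeps signaling dead conns */
    if (during_loading) return;
    handleClientsWithPendingWrites(); /* main-loop-only work */
}

int main(void) {
    beforeSleep(1); /* as invoked from processEventsWhileBlocked() */
    beforeSleep(0); /* as invoked from the main event loop */
    return 0;
}
```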
WuYunlong authored
WuYunlong authored
- 10 May, 2020 1 commit
Yossi Gottlieb authored
Seems like on some systems choosing specific TLS v1/v1.1 versions no longer works as expected. The test is reduced to v1.2 now, which is still good enough to test the mechanism, and matters most anyway.
- 05 May, 2020 1 commit
antirez authored
- 04 May, 2020 1 commit
Oran Agra authored
* Fix memory leaks with diskless replica short read.
* Fix a few timing issues with valgrind runs.
* Fix issue with valgrind and the watchdog schedule signal.

About the valgrind WD issue: the stack trace test in logging.tcl has issues with valgrind:

==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
==28808== too small or bad protection modes

It seems to be some valgrind bug with SA_ONSTACK. SA_ONSTACK seems unneeded since the WD is not recursive (SA_NODEFER was removed); also, not sure if it's even valid without a call to sigaltstack().
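A sketch of the signal setup the last point implies (illustrative, not the actual Redis code; the SIGALRM choice and handler body are assumptions): without a sigaltstack() call there is no alternate stack for SA_ONSTACK to use, and since SA_NODEFER was removed the handler cannot re-enter itself, so plain flags suffice.

```c
#include <signal.h>
#include <string.h>
#include <unistd.h>

/* Handler body is illustrative; the real watchdog logs a stack trace. */
static void watchdogSignalHandler(int sig) { (void)sig; }

/* Plain flags: no SA_ONSTACK (nothing to stand on without a prior
 * sigaltstack() call) and no SA_NODEFER (the handler is not recursive). */
static int setupWatchdog(void) {
    struct sigaction act;
    memset(&act, 0, sizeof(act));
    sigemptyset(&act.sa_mask);
    act.sa_flags = 0;
    act.sa_handler = watchdogSignalHandler;
    return sigaction(SIGALRM, &act, NULL); /* assuming SIGALRM drives the WD */
}

int main(void) {
    if (setupWatchdog() == 0) alarm(1); /* schedule one watchdog tick */
    pause();                            /* wait for the signal */
    return 0;
}
```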
- 02 May, 2020 1 commit
zhenwei pi authored
Currently, there are several types of threads/child processes of a redis server. Sometimes we need to deeply optimise the performance of redis, so we would like to isolate threads/processes. There was some discussion about cpu affinity cases in this issue: https://github.com/antirez/redis/issues/2863

This patch implements cpu affinity setting via redis.conf, so we can configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/bgsave_cpulist by cpu list. Examples of cpulist in redis.conf:

server_cpulist 0-7:2 means cpu affinity 0,2,4,6
bio_cpulist 1,3 means cpu affinity 1,3
aof_rewrite_cpulist 8-11 means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11 means cpu affinity 1,10,11

Tested on linux/freebsd, both work fine.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
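A standalone sketch of the cpulist expansion and pinning involved (hypothetical parser, Linux-only; not the patch's code): it expands forms like 0-7:2 and 1,10-11 into a cpu_set_t and applies it with sched_setaffinity.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical parser: expand a cpulist such as "0-7:2" (0,2,4,6)
 * or "1,10-11" (1,10,11) into a cpu_set_t. Returns -1 on bad input. */
static int cpulist_to_set(const char *list, cpu_set_t *set) {
    CPU_ZERO(set);
    char *dup = strdup(list), *save = NULL;
    for (char *tok = strtok_r(dup, ",", &save); tok != NULL;
         tok = strtok_r(NULL, ",", &save)) {
        char *p;
        long lo = strtol(tok, &p, 10), hi = lo, step = 1;
        if (*p == '-') hi = strtol(p + 1, &p, 10);   /* range end */
        if (*p == ':') step = strtol(p + 1, &p, 10); /* stride */
        if (*p != '\0' || lo < 0 || hi < lo || step <= 0) {
            free(dup);
            return -1;
        }
        for (long c = lo; c <= hi && c < CPU_SETSIZE; c += step)
            CPU_SET((int)c, set);
    }
    free(dup);
    return 0;
}

int main(void) {
    cpu_set_t set;
    /* Pin this process the way "server_cpulist 0-7:2" would. */
    if (cpulist_to_set("0-7:2", &set) == 0 &&
        sched_setaffinity(0, sizeof(set), &set) == 0)
        printf("pinned to cpus 0,2,4,6\n");
    return 0;
}
```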
- 28 Apr, 2020 2 commits
Guy Benoish authored
Introducing XINFO STREAM <key> FULL
Oran Agra authored
- 27 Apr, 2020 1 commit
Oran Agra authored
Now both master and replicas keep track of the last replication offset that contains meaningful data (ignoring the trailing pings), and both trim that tail from the replication backlog and from the offset they try to use for psync. The implication is that if someone missed some pings, or even has excessive pings that the promoted replica lacks, it'll still be able to psync (avoid full sync). The downside (which was already committed) is that replicas running old code may fail to psync, since the promoted replica trims pings from its backlog.

This commit adds a test that reproduces several cases of promotions and demotions with stale and non-stale pings.

Background: the meaningful offset on the master was added recently to solve a problem where the master is left all alone, injecting PINGs into its backlog when no one is listening, and then gets demoted and tries to replicate from a replica that didn't have any of the PINGs (or at least not the last ones). However, consider this case: master A has two replicas (B and C) replicating directly from it. There's no traffic at all, and also no network issues, just many pings in the tail of the backlog. Now B gets promoted, A becomes a replica of B, and C remains a replica of A. When A gets demoted, it trims the pings from its backlog and successfully replicates from B. However, C is still aware of these PINGs; when it disconnects and re-connects to A, it'll ask for something that's no longer in the backlog (since A trimmed the tail of its backlog) and be forced to do a full sync (something it didn't have to do before the meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and there; it turns out the reason was PINGs. Investigating it shows the following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is a replica of #1

Now we see that when #1 is demoted it prints:

17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.

and when #3 connects to the demoted #1, #1 says:

17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day for the demoted master (since it needs to sync from a replica that didn't get the last ping), but it didn't help one of the other replicas, which did get the last ping.
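The 14-byte difference in the first log line is exactly one PING as it appears in the replication stream ("*1\r\n$4\r\nPING\r\n", 14 bytes). A toy calculation of the offset the demoted node requests, with illustrative variable names:

```c
#include <stdio.h>

int main(void) {
    /* Values taken from the log excerpt above; names are illustrative. */
    long long master_repl_offset  = 3929977; /* end of the backlog */
    long long trailing_ping_bytes = 14;      /* one "*1\r\n$4\r\nPING\r\n" */
    long long meaningful_offset   = master_repl_offset - trailing_ping_bytes;

    /* PSYNC asks for the first byte it does NOT yet have, hence +1. */
    printf("PSYNC request offset: %lld\n", meaningful_offset + 1); /* 3929964 */
    return 0;
}
```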
- 24 Apr, 2020 1 commit
antirez authored
STRALGO should be a container for mostly read-only string algorithms in Redis. The algorithms should have two main characteristics:

1. They should be non-trivial to compute, and often not part of programming language standard libraries.
2. They should be fast enough that it is a good idea to have optimized C implementations.

Next thing I would love to see? A small string compression algorithm.
- 22 Apr, 2020 2 commits
- 21 Apr, 2020 1 commit
yanhui13 authored
- 19 Apr, 2020 1 commit
Guy Benoish authored
- 17 Apr, 2020 1 commit
antirez authored
- 16 Apr, 2020 1 commit
Oran Agra authored
This test is time sensitive and sometimes fails to pass below the latency threshold, even on strong machines. This test was the reason we were running just 2 parallel tests in the GitHub Actions CI; reverting this.
- 06 Apr, 2020 1 commit
antirez authored