- 05 May, 2020 1 commit

antirez authored

- 04 May, 2020 1 commit

Oran Agra authored
* Fix memory leaks with diskless replica short read.
* Fix a few timing issues with valgrind runs.
* Fix an issue with valgrind and the watchdog schedule signal.

About the valgrind watchdog (WD) issue: the stack trace test in logging.tcl has issues with valgrind:

==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
==28808== too small or bad protection modes

It seems to be some valgrind bug with SA_ONSTACK. SA_ONSTACK seems unneeded since the WD is not recursive (SA_NODEFER was removed); also, it's not clear it's even valid without a call to sigaltstack().
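
To illustrate the point, here is a minimal sketch (not the actual Redis code; names are illustrative) of registering such a watchdog signal handler without SA_ONSTACK:

```c
/* A minimal sketch, not the actual Redis code: registering a watchdog
 * signal handler without SA_ONSTACK. Names here are illustrative. */
#include <signal.h>
#include <string.h>

static void watchdogHandler(int sig, siginfo_t *info, void *secret) {
    (void)sig; (void)info; (void)secret;
    /* Log a stack trace here. The handler can't re-enter itself since
     * SA_NODEFER is not set, so SA_ONSTACK (which would also need a
     * prior sigaltstack() call) isn't required. */
}

void enableWatchdog(void) {
    struct sigaction act;
    memset(&act, 0, sizeof(act));
    sigemptyset(&act.sa_mask);
    act.sa_flags = SA_SIGINFO;          /* note: no SA_ONSTACK */
    act.sa_sigaction = watchdogHandler;
    sigaction(SIGALRM, &act, NULL);
}
```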

- 02 May, 2020 1 commit

zhenwei pi authored
Currently, there are several types of threads/child processes in a redis server. Sometimes we need to deeply optimise the performance of redis, so we would like to isolate threads/processes. There was some discussion about cpu affinity cases in this issue: https://github.com/antirez/redis/issues/2863

So this patch implements cpu affinity setting via redis.conf; server_cpulist/bio_cpulist/aof_rewrite_cpulist/bgsave_cpulist can each be configured with a cpu list. Examples of cpulist in redis.conf:

server_cpulist 0-7:2 means cpu affinity 0,2,4,6
bio_cpulist 1,3 means cpu affinity 1,3
aof_rewrite_cpulist 8-11 means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11 means cpu affinity 1,10,11

Tested on linux/freebsd; both work fine.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
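
As an illustration of what such a setting boils down to, a minimal sketch (Linux-specific, with illustrative names; parsing of the range/step syntax is omitted):

```c
/* A minimal sketch, assuming Linux: pin the calling thread to a set of
 * cpus, e.g. the expansion of "server_cpulist 0-7:2" -> 0,2,4,6.
 * Function and variable names are illustrative, not Redis's. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static int setSelfCpuAffinity(const int *cpus, int ncpus) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int i = 0; i < ncpus; i++) CPU_SET(cpus[i], &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void) {
    int cpus[] = {0, 2, 4, 6};  /* "0-7:2" expanded */
    return setSelfCpuAffinity(cpus, 4);
}
```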

- 28 Apr, 2020 2 commits

Guy Benoish authored
Introducing XINFO STREAM <key> FULL

Oran Agra authored

- 27 Apr, 2020 1 commit

Oran Agra authored
Now both masters and replicas keep track of the last replication offset that contains meaningful data (ignoring the trailing pings), and both trim that tail from the replication backlog and from the offset they use for psync. The implication is that if someone missed some pings, or even has excessive pings that the promoted replica doesn't have, it'll still be able to psync (avoid a full sync). The downside (which was already committed) is that replicas running old code may fail to psync, since the promoted replica trims pings from its backlog.

This commit adds a test that reproduces several cases of promotions and demotions with stale and non-stale pings.

Background: the meaningful offset on the master was added recently to solve a problem where the master is left all alone, injecting PINGs into its backlog when no one is listening, and then gets demoted and tries to replicate from a replica that didn't have any of the PINGs (or at least not the last ones).

However, consider this case: master A has two replicas (B and C) replicating directly from it. There's no traffic at all, and also no network issues, just many pings in the tail of the backlog. Now B gets promoted, A becomes a replica of B, and C remains a replica of A. When A gets demoted, it trims the pings from its backlog and successfully replicates from B. However, C is still aware of these PINGs; when it disconnects and re-connects to A, it'll ask for something that's no longer in the backlog (since A trimmed the tail of its backlog) and be forced to do a full sync (something it didn't have to do before the meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and there; it turns out the reason was PINGs. Investigating it shows the following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is a replica of #1

Now we see that when #1 is demoted it prints:

17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.

And when #3 connects to the demoted #2, #2 says:

17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day for the demoted master (since it needs to sync from a replica that didn't get the last ping), but it didn't help one of the other replicas which did get the last ping.
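
To make the mechanism concrete, a minimal sketch of the idea, with illustrative names rather than the actual Redis fields:

```c
/* A minimal sketch of the "meaningful offset" idea above; the names are
 * illustrative, not the actual Redis fields. */
typedef struct {
    long long master_repl_offset;  /* global offset, including trailing PINGs */
    long long meaningful_offset;   /* last offset carrying non-PING data */
} replState;

/* On demotion, the backlog tail past meaningful_offset (PINGs only) is
 * discarded, and PSYNC asks for the byte right after the meaningful data,
 * matching the log above: meaningful offset 3929963 -> request 3929964. */
long long psyncRequestOffset(replState *st) {
    return st->meaningful_offset + 1;
}
```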

- 24 Apr, 2020 1 commit

antirez authored
STRALGO should be a container for mostly read-only string algorithms in Redis. The algorithms should have two main characteristics:

1. They should be non-trivial to compute, and often not part of programming language standard libraries.
2. They should be fast enough that it is a good idea to have optimized C implementations.

Next thing I would love to see? A small string compression algorithm.
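
The first algorithm shipped under STRALGO was LCS (longest common subsequence). As a rough illustration of the kind of computation involved, a minimal sketch of the classic O(n*m) DP for the LCS length (Redis's actual implementation is more elaborate, with options such as IDX and MINMATCHLEN):

```c
/* A minimal sketch of the classic O(n*m) LCS length DP; not Redis's
 * actual implementation. */
#include <stdlib.h>
#include <string.h>

size_t lcsLength(const char *a, const char *b) {
    size_t n = strlen(a), m = strlen(b);
    /* Two rows are enough when only the length is needed. */
    size_t *prev = calloc(m + 1, sizeof(size_t));
    size_t *cur = calloc(m + 1, sizeof(size_t));
    for (size_t i = 1; i <= n; i++) {
        for (size_t j = 1; j <= m; j++) {
            if (a[i-1] == b[j-1])
                cur[j] = prev[j-1] + 1;
            else
                cur[j] = prev[j] > cur[j-1] ? prev[j] : cur[j-1];
        }
        memcpy(prev, cur, (m + 1) * sizeof(size_t));
    }
    size_t len = prev[m];
    free(prev);
    free(cur);
    return len;  /* lcsLength("ohmytext", "mynewtext") == 6 ("mytext") */
}
```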

- 22 Apr, 2020 2 commits

- 21 Apr, 2020 1 commit

yanhui13 authored

- 19 Apr, 2020 1 commit

Guy Benoish authored

- 17 Apr, 2020 1 commit

antirez authored

- 16 Apr, 2020 1 commit

Oran Agra authored
This test is time sensitive and sometimes fails to pass below the latency threshold, even on strong machines. This test was the reason we were running just 2 parallel tests in the github actions CI; reverting this.

- 06 Apr, 2020 3 commits

- 03 Apr, 2020 1 commit

Guy Benoish authored
There is an inherent race between the deferring client and the "main" client of the test: while the deferring client issues a blocking command, we can't know for sure that by the time the "main" client tries to issue another command (usually one that unblocks the deferring client) the deferring client is even blocked... For lack of a better choice this commit uses TCL's 'after' in order to give the deferring client some time to issue its blocking command before the "main" client does its thing. This problem probably exists in many other tests, but this commit only tries to fix blockonkeys.tcl.

- 02 Apr, 2020 1 commit

Guy Benoish authored

- 01 Apr, 2020 1 commit

Guy Benoish authored
By using a "circular BRPOPLPUSH"-like scenario it was possible to get the same client on db->blocking_keys twice (see the comment in moduleTryServeClientBlockedOnKey).

The fix was actually already implemented in moduleTryServeClientBlockedOnKey, but it had a bug: the function should return 0 or 1 (not OK or ERR).

Other changes:
1. Added two commands to the blockonkeys.c test module (to reproduce the case described above)
2. Simplified blockonkeys.c in order to make testing easier
3. Cast raxSize() to avoid a warning with the format spec

- 31 Mar, 2020 5 commits

Guy Benoish authored
Other changes: Support stream in serverLogObjectDebugInfo

Guy Benoish authored
Make sure call() doesn't wrap replicated commands with a redundant MULTI/EXEC. Other, unrelated changes:
1. Fix a formatting compiler warning in INFO CLIENTS
2. Use CLIENT_ID_AOF instead of UINT64_MAX

antirez authored

antirez authored

antirez authored
37a10cef introduced automatic wrapping of MULTI/EXEC for the alsoPropagate API. However this collides with the built-in mechanism already present in module.c. To avoid complex changes near Redis 6 GA, this commit introduces the ability to exclude call()'s MULTI/EXEC wrapping for alsoPropagate, in order to continue to use the old code paths in module.c.
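
A minimal sketch of what such an exclusion looks like; the flag name and function shapes here are hypothetical and only illustrate the idea, not the actual Redis API:

```c
/* Hypothetical names: a sketch of suppressing the implicit MULTI/EXEC
 * wrapping for a specific propagation path. */
#include <stdio.h>

#define CMD_CALL_NOWRAP (1<<0)  /* hypothetical flag: skip MULTI/EXEC wrap */

static void propagate(const char *cmd) { printf("%s\n", cmd); }

/* Propagate a batch of queued commands, optionally wrapped. */
static void propagateQueued(const char **cmds, int n, int flags) {
    int wrap = n > 1 && !(flags & CMD_CALL_NOWRAP);
    if (wrap) propagate("MULTI");
    for (int i = 0; i < n; i++) propagate(cmds[i]);
    if (wrap) propagate("EXEC");
}

int main(void) {
    const char *cmds[] = {"SET k v", "PEXPIRE k 100"};
    /* module.c would opt out and keep its own built-in wrapping: */
    propagateQueued(cmds, 2, CMD_CALL_NOWRAP);
    return 0;
}
```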

- 26 Mar, 2020 2 commits

Valentino Geron authored

Valentino Geron authored
First, we must parse the IDs, so that we abort ASAP. The return value of this command cannot be an error if the client successfully acknowledged some messages, so it should be executed in an "all or nothing" fashion.
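
A minimal sketch of that parse-first pattern (illustrative names, not the actual XACK code): validate every ID before acknowledging any, so a bad ID aborts the command before any partial effect:

```c
/* Illustrative names, not the actual XACK code: validate all stream IDs
 * before acknowledging any, so a parse error never follows partial acks. */
#include <stdio.h>

typedef struct { unsigned long long ms, seq; } streamID;

static int parseID(const char *s, streamID *id) {
    return sscanf(s, "%llu-%llu", &id->ms, &id->seq) == 2;
}

static int ackAll(const char **ids, int n, streamID *parsed) {
    /* Pass 1: parse every ID; abort ASAP on the first bad one. */
    for (int i = 0; i < n; i++)
        if (!parseID(ids[i], &parsed[i])) return -1;  /* reply with error */
    /* Pass 2: all IDs are valid; only now acknowledge, so the reply can
     * no longer be an error after messages start being acknowledged. */
    int acked = 0;
    for (int i = 0; i < n; i++) acked++;  /* ... ack parsed[i] here ... */
    return acked;
}

int main(void) {
    const char *ids[] = {"1588152471858-0", "1588152473121-0"};
    streamID parsed[2];
    printf("acked: %d\n", ackAll(ids, 2, parsed));
    return 0;
}
```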

- 25 Mar, 2020 2 commits

- 23 Mar, 2020 2 commits

Oran Agra authored
Redis refusing to run MULTI or EXEC during a script timeout may cause partial transactions to run:

1) If the client sends MULTI+commands+EXEC in a pipeline without waiting for a response, but these arrive at the shards partially while there's a busy script and partially after it eventually finishes, we'll end up running only part of the transaction (since MULTI was ignored and EXEC would fail).
2) Similarly, if EXEC arrives during a busy script, it'll be ignored and the client state remains in a transaction.

The 3rd test I added, for a case where MULTI and EXEC are ok and only the body arrives during a busy script, was already handled correctly, since processCommand calls flagTransaction.
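
For context, a simplified sketch of the flagTransaction mechanism mentioned above (names abbreviated from the real ones): when a command is rejected while a MULTI is open, the transaction is marked dirty so EXEC aborts as a whole:

```c
/* A simplified sketch of the flagTransaction idea: a rejected command
 * inside an open MULTI marks the transaction dirty, so the later EXEC
 * fails as a whole rather than running a partial transaction. */
#include <stdio.h>

#define CLIENT_MULTI      (1<<0)
#define CLIENT_DIRTY_EXEC (1<<1)

typedef struct { int flags; } client;

/* Called when a command is rejected while the client is inside MULTI. */
static void flagTransaction(client *c) {
    if (c->flags & CLIENT_MULTI) c->flags |= CLIENT_DIRTY_EXEC;
}

static void execCommand(client *c) {
    if (c->flags & CLIENT_DIRTY_EXEC)
        printf("-EXECABORT Transaction discarded because of previous errors.\n");
    else
        printf("...run the queued commands...\n");
    c->flags &= ~(CLIENT_MULTI | CLIENT_DIRTY_EXEC);
}

int main(void) {
    client c = { .flags = CLIENT_MULTI };
    flagTransaction(&c);  /* a queued command was rejected */
    execCommand(&c);      /* -> EXECABORT, nothing partially executed */
    return 0;
}
```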

antirez authored

- 20 Mar, 2020 1 commit

antirez authored

- 18 Mar, 2020 1 commit

WuYunlong authored

- 11 Mar, 2020 1 commit

bodong.ybd authored

- 05 Mar, 2020 1 commit

Oran Agra authored
*** [err]: PSYNC2: total sum of full synchronizations is exactly 4 in tests/integration/psync2.tcl Expected 5 == 4 (context: type eval line 6 cmd {assert {$sum == 4}} proc ::test)

The issue was that sometimes the test got an unexpected full sync, since it tried to switch to the replica before it was in sync with its master.

- 04 Mar, 2020 1 commit

bodong.ybd authored

- 27 Feb, 2020 1 commit

Oran Agra authored
It seems that running two clients at a time is ok too; this reduces action time from 20 minutes to 10. We'll use this for now, and if one day it isn't enough we'll have to run just the sensitive tests one by one, separately from the others. This commit also fixes an issue with the defrag test that appears to be very rare.

- 25 Feb, 2020 1 commit

Oran Agra authored
It seems that github actions are slow; using just one client to reduce false positives. Also adding verbose output, testing only on the latest ubuntu, and building on an older one. When doing that, I can reduce the test threshold back to something saner.

- 24 Feb, 2020 1 commit

antirez authored

- 23 Feb, 2020 2 commits

Oran Agra authored
In some cases we were trying to kill the fork before it got created.

Oran Agra authored
I saw that the new defrag test for list was failing in CI recently, so I relax its threshold from 12 to 60. Besides that, I add/improve the latency test for the other two defrag tests (adding a sensitive latency check and digest/save checks) and fix bad usage of DEBUG POPULATE (it can't override existing keys); this was the original intention, since it creates higher fragmentation.