1. 05 May, 2020 1 commit
  2. 04 May, 2020 1 commit
    • add daily github actions with libc malloc and valgrind · deee2c1e
      Oran Agra authored
      * fix memory leaks with diskless replica short read.
      * fix a few timing issues with valgrind runs
      * fix issue with valgrind and watchdog schedule signal
      
      about the valgrind WD issue:
      the stack trace test in logging.tcl has issues under valgrind:
      ==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
      ==28808==   too small or bad protection modes
      
      it seems to be a valgrind bug with SA_ONSTACK.
      SA_ONSTACK seems unneeded since the WD handler is not recursive
      (SA_NODEFER was removed); it's also unclear whether it's even valid
      without a call to sigaltstack().
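      
      For illustration, a minimal sketch of installing such a watchdog handler
      without SA_ONSTACK (names and details are illustrative, not the actual
      debug.c code):
      
        #include <signal.h>
        #include <string.h>
        
        static void watchdogSignalHandler(int sig, siginfo_t *info, void *ctx) {
            (void)sig; (void)info; (void)ctx;
            /* Log the current stack trace here. The handler is not
             * re-entered (no SA_NODEFER), so no alternate signal stack
             * is needed. */
        }
        
        void installWatchdog(void) {
            struct sigaction act;
            memset(&act, 0, sizeof(act));
            sigemptyset(&act.sa_mask);
            /* No SA_ONSTACK: avoids valgrind's "Can't extend stack ...
             * during signal delivery" failure seen in logging.tcl. */
            act.sa_flags = SA_SIGINFO;
            act.sa_sigaction = watchdogSignalHandler;
            sigaction(SIGALRM, &act, NULL);
        }
        
        int main(void) {
            installWatchdog();
            raise(SIGALRM); /* the handler runs on the normal stack */
            return 0;
        }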
  3. 02 May, 2020 1 commit
    • Support setcpuaffinity on linux/bsd · 1a0deab2
      zhenwei pi authored
      Currently, a Redis server has several types of threads and child
      processes. Sometimes we need to deeply optimise the performance of
      Redis, so we would like to isolate these threads/processes.
      
      There was some discussion about CPU affinity cases in this issue:
      https://github.com/antirez/redis/issues/2863
      
      So this patch implements CPU affinity setting via redis.conf; we can
      then configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
      bgsave_cpulist as CPU lists.
      
      Examples of cpulist in redis.conf:
      server_cpulist 0-7:2      means cpu affinity 0,2,4,6
      bio_cpulist 1,3           means cpu affinity 1,3
      aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
      bgsave_cpulist 1,10-11    means cpu affinity 1,10,11
      
      Tested on Linux and FreeBSD; both work fine.
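      
      For illustration, here is a rough sketch of how such a cpulist could be
      parsed and applied with sched_setaffinity() on Linux (FreeBSD's analogue
      is cpuset_setaffinity()). Function names are illustrative, not the
      patch's actual code:
      
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdlib.h>
        #include <string.h>
        
        /* Parse a cpulist such as "0-7:2" or "1,10-11" into a cpu_set_t.
         * Returns 0 on success, -1 on a malformed list. Illustrative only. */
        static int parseCpuList(const char *list, cpu_set_t *set) {
            char *copy = strdup(list), *tok, *save = NULL;
            CPU_ZERO(set);
            for (tok = strtok_r(copy, ",", &save); tok;
                 tok = strtok_r(NULL, ",", &save)) {
                int lo, hi, step = 1;
                if (sscanf(tok, "%d-%d:%d", &lo, &hi, &step) >= 2) {
                    /* a range, optionally with a step ("0-7:2") */
                } else if (sscanf(tok, "%d", &lo) == 1) {
                    hi = lo;                 /* a single cpu ("1") */
                } else {
                    free(copy); return -1;
                }
                if (lo < 0 || lo > hi || step <= 0) { free(copy); return -1; }
                for (int c = lo; c <= hi; c += step) CPU_SET(c, set);
            }
            free(copy);
            return 0;
        }
        
        int main(void) {
            cpu_set_t set;
            /* server_cpulist 0-7:2 -> pin this process to cpus 0,2,4,6 */
            if (parseCpuList("0-7:2", &set) == 0)
                sched_setaffinity(0, sizeof(set), &set);
            return 0;
        }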
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
  4. 28 Apr, 2020 2 commits
  5. 27 Apr, 2020 1 commit
    • Keep track of meaningful replication offset in replicas too · 4447ddc8
      Oran Agra authored
      Now both master and replicas keep track of the last replication offset
      that contains meaningful data (ignoring the trailing pings); both trim
      that tail from the replication backlog, and use that offset when
      attempting a psync.
      
      the implication is that a replica that missed some pings, or has extra
      pings that the promoted replica doesn't, will still be able to psync
      (avoiding a full sync).
      
      the downside (which was already committed) is that replicas running old
      code may fail to psync, since the promoted replica trims pings from its
      backlog.
      
      This commit adds a test that reproduces several cases of promotions and
      demotions with stale and non-stale pings.
      
      Background:
      The meaningful offset on the master was added recently to solve a problem
      where the master is left all alone, injecting PINGs into its backlog while
      no one is listening, and then gets demoted and tries to replicate from a
      replica that didn't have any of the PINGs (or at least not the last ones).
      
      however, consider this case:
      master A has two replicas (B and C) replicating directly from it.
      there's no traffic at all, and no network issues, just many pings in the
      tail of the backlog. now B gets promoted, A becomes a replica of B, and C
      remains a replica of A. when A gets demoted, it trims the pings from its
      backlog and successfully replicates from B. however, C is still aware of
      these PINGs: when it disconnects and re-connects to A, it asks for an
      offset that's no longer in the backlog (since A trimmed its tail), and is
      forced to do a full sync (something it didn't have to do before the
      meaningful offset fix).
      
      Besides that, the psync2 test was always failing randomly here and there;
      it turns out the reason was PINGs. Investigating it shows the following scenario:
      
      cycle 1: redis #1 is master, and all the rest are direct replicas of #1
      cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is a replica of #1
      now we see that when #1 is demoted it prints:
      17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
      17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
      17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
      and when #3 connects to the demoted #1, #1 says:
      17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964
      
      so the issue here is that the meaningful offset feature saved the day for
      the demoted master (since it needed to sync from a replica that didn't get
      the last ping), but it didn't help one of the other replicas, which did
      get the last ping.
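      
      To make the trimming idea concrete, here is a rough sketch with made-up
      field names (not the actual replication.c code); the numbers in main()
      mirror the log lines above:
      
        #include <stdio.h>
        
        /* Illustrative replication state; field names are invented. */
        typedef struct ReplState {
            long long master_repl_offset; /* all bytes written, PINGs included */
            long long meaningful_offset;  /* last offset holding non-PING data */
            long long backlog_histlen;    /* bytes currently in the backlog */
        } ReplState;
        
        /* Called on demotion: forget the trailing PINGs so the PSYNC request
         * matches what a promoted replica (that never saw them) can serve. */
        static void trimTrailingPings(ReplState *r) {
            long long tail = r->master_repl_offset - r->meaningful_offset;
            if (tail > 0 && tail <= r->backlog_histlen) {
                r->backlog_histlen -= tail;
                r->master_repl_offset = r->meaningful_offset;
            }
        }
        
        int main(void) {
            ReplState r = { 3929977, 3929963, 100000 };
            trimTrailingPings(&r); /* drops the 14 bytes of final PINGs */
            printf("PSYNC offset: %lld\n", r.master_repl_offset + 1);
            /* prints 3929964, matching the partial resync request above */
            return 0;
        }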
  6. 24 Apr, 2020 1 commit
    • LCS -> STRALGO LCS. · 8a7f255c
      antirez authored
      STRALGO should be a container for mostly read-only string
      algorithms in Redis. The algorithms should have two main
      characteristics:
      
      1. They should be non trivial to compute, and often not part of
      programming language standard libraries.
      2. They should be fast enough that it is a good idea to have optimized C
      implementations.
      
      Next thing I would love to see? A small strings compression algorithm.
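      
      As an illustration of both criteria, here is the textbook O(n*m)
      dynamic-programming LCS length (the quantity STRALGO LCS can report via
      its LEN option); a simplified sketch, not Redis's optimized code:
      
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        
        /* Classic two-row DP for the longest common subsequence length. */
        static size_t lcsLength(const char *a, const char *b) {
            size_t n = strlen(a), m = strlen(b);
            size_t *prev = calloc(m + 1, sizeof(size_t));
            size_t *cur  = calloc(m + 1, sizeof(size_t));
            for (size_t i = 1; i <= n; i++) {
                for (size_t j = 1; j <= m; j++) {
                    if (a[i-1] == b[j-1])
                        cur[j] = prev[j-1] + 1;
                    else
                        cur[j] = prev[j] > cur[j-1] ? prev[j] : cur[j-1];
                }
                size_t *tmp = prev; prev = cur; cur = tmp;
                memset(cur, 0, (m + 1) * sizeof(size_t));
            }
            size_t len = prev[m];
            free(prev); free(cur);
            return len;
        }
        
        int main(void) {
            /* LCS of "ohmytext" and "mynewtext" is "mytext": length 6. */
            printf("%zu\n", lcsLength("ohmytext", "mynewtext"));
            return 0;
        }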
  7. 22 Apr, 2020 2 commits
  8. 21 Apr, 2020 1 commit
  9. 19 Apr, 2020 1 commit
  10. 17 Apr, 2020 1 commit
  11. 16 Apr, 2020 1 commit
    • testsuite run the defrag latency test solo · b9fa42a1
      Oran Agra authored
      this test is time sensitive and sometimes fails to pass below the
      latency threshold, even on strong machines.
      
      this test was the reason we were running just 2 parallel tests in the
      github actions CI; this commit reverts that.
  12. 06 Apr, 2020 3 commits
  13. 03 Apr, 2020 1 commit
    • Try to fix time-sensitive tests in blockonkeys.tcl · 1b0d30ae
      Guy Benoish authored
      There is an inherent race between the deferring client and the
      "main" client of the test: while the deferring client issues a blocking
      command, we can't know for sure that by the time the "main" client
      tries to issue another command (usually one that unblocks the deferring
      client) the deferring client is even blocked...
      For lack of a better choice this commit uses Tcl's 'after' in order
      to give the deferring client some time to issue its blocking
      command before the "main" client does its thing.
      This problem probably exists in many other tests, but this commit
      tries to fix blockonkeys.tcl.
  14. 02 Apr, 2020 1 commit
  15. 01 Apr, 2020 1 commit
    • Fix memory corruption in moduleHandleBlockedClients · c4dc5b80
      Guy Benoish authored
      By using a "circular BRPOPLPUSH"-like scenario it was
      possible to get the same client on db->blocking_keys
      twice (see the comment in moduleTryServeClientBlockedOnKey).
      
      The fix was actually already implemented in
      moduleTryServeClientBlockedOnKey, but it had a bug:
      the function should return 0 or 1 (not OK or ERR).
      
      Other changes:
      1. Added two commands to the blockonkeys.c test module (to
         reproduce the case described above)
      2. Simplified blockonkeys.c in order to make testing easier
      3. Cast raxSize() to avoid a warning with the format spec
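      
      The return-value mixup is easy to see once you recall that in Redis
      C_OK is 0 and C_ERR is -1, so a caller testing the result as a boolean
      gets the logic inverted. A simplified sketch of the bug's shape (not
      the actual module.c code):
      
        #include <stdio.h>
        
        #define C_OK   0
        #define C_ERR -1
        
        /* Buggy shape: "success" is C_OK, which evaluates to false. */
        static int tryServeBuggy(int servable) { return servable ? C_OK : C_ERR; }
        
        /* Fixed shape: 1 if the blocked client was served, 0 otherwise. */
        static int tryServeFixed(int servable) { return servable ? 1 : 0; }
        
        int main(void) {
            /* A caller like `if (tryServe(...)) remove_from_blocking_keys();`
             * never removes the served client with the buggy version, which
             * is how a client could end up on db->blocking_keys twice. */
            printf("buggy says served: %s\n", tryServeBuggy(1) ? "yes" : "no");
            printf("fixed says served: %s\n", tryServeFixed(1) ? "yes" : "no");
            return 0;
        }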
  16. 31 Mar, 2020 5 commits
  17. 26 Mar, 2020 2 commits
  18. 25 Mar, 2020 2 commits
  19. 23 Mar, 2020 2 commits
    • MULTI/EXEC during LUA script timeout are messed up · ec007559
      Oran Agra authored
      Redis refusing to run MULTI or EXEC during script timeout may cause partial
      transactions to run:
      
      1) if the client sends MULTI+commands+EXEC in a pipeline without waiting for
      responses, and these arrive at the shard partially while there's a busy script
      and partially after it eventually finishes, we end up running only part of
      the transaction (since MULTI was ignored and EXEC would fail).
      
      2) similarly, if EXEC arrives during a busy script, it is ignored and
      the client's state remains inside a transaction.
      
      the 3rd test I added, for the case where MULTI and EXEC are ok and only
      the transaction body arrives during the busy script, was already handled
      correctly, since processCommand calls flagTransaction; a sketch of the
      idea follows.
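      
      A rough sketch of the dispatch rule, with made-up names (not the actual
      processCommand code): let MULTI/EXEC through during a busy script, and
      poison an open transaction whenever a body command is rejected, so EXEC
      aborts as a whole instead of running a partial transaction:
      
        #include <stdbool.h>
        #include <string.h>
        
        typedef struct client {
            bool in_multi;   /* client has sent MULTI */
            bool dirty_exec; /* a queued command was rejected: EXEC must abort */
        } client;
        
        /* Returns true if the command may run while a script is busy. */
        static bool allowDuringBusyScript(const char *cmd) {
            return strcmp(cmd, "MULTI") == 0 || strcmp(cmd, "EXEC") == 0 ||
                   strcmp(cmd, "SHUTDOWN") == 0; /* illustrative allow-list */
        }
        
        static bool processCommandSketch(client *c, const char *cmd) {
            if (!allowDuringBusyScript(cmd)) {
                /* Reply -BUSY; if inside MULTI, flag the transaction so a
                 * later EXEC fails instead of running only the commands
                 * that arrived after the script finished. */
                if (c->in_multi) c->dirty_exec = true;
                return false;
            }
            return true;
        }
        
        int main(void) {
            client c = { true, false };
            processCommandSketch(&c, "SET"); /* rejected: -BUSY */
            /* c.dirty_exec is now true, so EXEC will abort atomically. */
            return (int)!c.dirty_exec;
        }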
    • Fix BITFIELD_RO test. · 61de1c11
      antirez authored
  20. 20 Mar, 2020 1 commit
  21. 18 Mar, 2020 1 commit
  22. 11 Mar, 2020 1 commit
  23. 05 Mar, 2020 1 commit
    • fix for flaky psync2 test · 27641ee4
      Oran Agra authored
      *** [err]: PSYNC2: total sum of full synchronizations is exactly 4 in tests/integration/psync2.tcl
      Expected 5 == 4 (context: type eval line 6 cmd {assert {$sum == 4}} proc ::test)
      
      the issue was that sometimes the test got an unexpected full sync, since
      it tried to switch to the replica before it was in sync with its master.
  24. 04 Mar, 2020 1 commit
  25. 27 Feb, 2020 1 commit
    • fix github actions failing latency test for active defrag - part 2 · 2f1a1c38
      Oran Agra authored
      it seems that running two clients at a time is ok too; this reduces action
      time from 20 minutes to 10. we'll use this for now, and if one day it
      isn't enough we'll have to run just the sensitive tests one by one,
      separately from the others.
      
      this commit also fixes an issue with the defrag test that appears to be
      very rare.
  26. 25 Feb, 2020 1 commit
    • fix github actions failing latency test for active defrag · 53789342
      Oran Agra authored
      it seems that the github actions runners are slow, so we use just one
      client to reduce false positives.
      
      also adding verbose output, testing only on the latest ubuntu, and
      building on an older one.
      
      with that done, the test threshold can be reduced back to something saner.
  27. 24 Feb, 2020 1 commit
  28. 23 Feb, 2020 2 commits
    • fix race in module api test for fork · 0a643efa
      Oran Agra authored
      in some cases we were trying to kill the forked child before it was created
    • Fix latency sensitivity of new defrag test · 62adabd0
      Oran Agra authored
      I saw that the new defrag test for lists was failing in CI recently, so I
      relaxed its latency threshold from 12 to 60.
      
      besides that, I added / improved the latency checks for the other two
      defrag tests (adding a sensitive latency check and digest / save checks),
      
      and fixed bad usage of debug populate (it can't override existing keys);
      the fix restores the original intention, which creates higher fragmentation.