  1. 20 May, 2020 1 commit
    • fix a rare active defrag edge case bug leading to stagnation · 88d71f47
      Oran Agra authored
      There's a rare case that leads to stagnation in the defragger, causing
      it to keep scanning the keyspace and do nothing (not moving any
      allocation). It happens when all the allocator slabs of a certain bin
      have the same % utilization, but the slab from which new allocations
      are made has a lower utilization.
      
      This commit fixes it by removing the current slab from the overall
      average utilization of the bin, eliminating any precision loss in the
      utilization calculation, and moving the defrag decision inside
      jemalloc.

      It also adds a test that consistently reproduces the issue.
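      A minimal sketch of the fixed decision, with hypothetical names (the
      real logic lives inside jemalloc): the slab serving new allocations is
      excluded from the bin average, and the comparison is cross-multiplied
      to stay in integers, avoiding the precision loss of a percentage.

        #include <stddef.h>

        typedef struct bin_stats {
            size_t curslabs;     /* slabs in this bin */
            size_t curregs;      /* allocated regions across the whole bin */
            size_t curslab_regs; /* allocated regions in the slab serving new allocations */
        } bin_stats;

        /* Nonzero if an allocation living in a slab that holds 'slab_regs'
         * regions is worth moving to another slab of the bin. */
        static int should_defrag(const bin_stats *b, size_t slab_regs) {
            if (b->curslabs <= 1) return 0; /* nowhere better to move to */
            /* Exclude the serving slab so it cannot drag the average down. */
            size_t other_regs  = b->curregs - b->curslab_regs;
            size_t other_slabs = b->curslabs - 1;
            /* slab_regs/nregs < other_regs/(other_slabs*nregs), where nregs
             * is the per-slab capacity: cross-multiplied, nregs cancels and
             * there is no rounding. */
            return slab_regs * other_slabs < other_regs;
        }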
  2. 18 May, 2020 1 commit
  3. 17 May, 2020 2 commits
  4. 14 May, 2020 1 commit
  5. 12 May, 2020 1 commit
    • fix unstable replication test · b4416280
      Oran Agra authored
      This test, which has coverage for various flows of the diskless
      master, was failing randomly from time to time.
      
      the failure was:
      [err]: diskless all replicas drop during rdb pipe in tests/integration/replication.tcl
      log message of '*Diskless rdb transfer, last replica dropped, killing fork child*' not found
      
      What seems to have happened is that the master didn't detect that all
      replicas had dropped by the time the replication ended; it thought
      that one replica was still connected.

      Now the test takes a few seconds longer, but it seems stable.
  6. 11 May, 2020 3 commits
  7. 10 May, 2020 1 commit
    • TLS: Fix test failures on recent Debian/Ubuntu. · 4d1178cc
      Yossi Gottlieb authored
      It seems that on some systems choosing specific TLS v1/v1.1 versions
      no longer works as expected. The test is reduced to v1.2 now, which is
      still good enough to test the mechanism, and matters most anyway.
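      A minimal sketch, assuming OpenSSL 1.1.0 or newer, of pinning a
      context to TLSv1.2 only, the version the reduced test still exercises:

        #include <openssl/ssl.h>

        static SSL_CTX *make_tls12_ctx(void) {
            SSL_CTX *ctx = SSL_CTX_new(TLS_method());
            if (ctx == NULL) return NULL;
            /* Refuse anything below or above TLSv1.2. */
            SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
            SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
            return ctx;
        }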
  8. 05 May, 2020 1 commit
  9. 04 May, 2020 1 commit
    • add daily github actions with libc malloc and valgrind · deee2c1e
      Oran Agra authored
      * fix memory leaks with diskless replica short read.
      * fix a few timing issues with valgrind runs
      * fix issue with valgrind and watchdog schedule signal
      
      About the valgrind WD issue:
      the stack trace test in logging.tcl has issues with valgrind:
      ==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
      ==28808==   too small or bad protection modes

      It seems to be a valgrind bug with SA_ONSTACK. SA_ONSTACK seems
      unneeded since the WD is not recursive (SA_NODEFER was removed); also,
      it's not clear it is even valid without a call to sigaltstack().
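      A minimal sketch of the resulting setup (watchdogSignalHandler is a
      stand-in for the real handler): the watchdog signal is installed
      without SA_ONSTACK, since the handler is never re-entered and no
      alternate stack is ever registered with sigaltstack().

        #include <signal.h>
        #include <string.h>

        static void watchdogSignalHandler(int sig, siginfo_t *info, void *secret) {
            (void)sig; (void)info; (void)secret;
            /* log the current stack trace here */
        }

        static void installWatchdog(void) {
            struct sigaction act;
            memset(&act, 0, sizeof(act));
            sigemptyset(&act.sa_mask);
            act.sa_flags = SA_SIGINFO; /* crucially: no SA_ONSTACK, no SA_NODEFER */
            act.sa_sigaction = watchdogSignalHandler;
            sigaction(SIGALRM, &act, NULL);
        }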
  10. 02 May, 2020 1 commit
    • Support setcpuaffinity on linux/bsd · 1a0deab2
      zhenwei pi authored
      Currently, there are several types of threads/child processes of a
      redis server. Sometimes we need to deeply optimise the performance of
      redis, so we would like to isolate threads/processes.
      
      There was some discussion about CPU affinity cases in this issue:
      https://github.com/antirez/redis/issues/2863

      So this patch implements CPU affinity settings via redis.conf, letting
      us configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
      bgsave_cpulist as CPU lists.
      
      Examples of cpulist in redis.conf:
      server_cpulist 0-7:2      means cpu affinity 0,2,4,6
      bio_cpulist 1,3           means cpu affinity 1,3
      aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
      bgsave_cpulist 1,10-11    means cpu affinity 1,10,11
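      A minimal Linux-only sketch of the idea (the patch also covers the
      BSDs via cpuset_setaffinity; names and parsing here are simplified):
      turn a cpulist string like the examples above into an affinity mask
      and apply it to the calling thread.

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        static int set_cpu_affinity(const char *cpulist) {
            cpu_set_t set;
            CPU_ZERO(&set);
            char *dup = strdup(cpulist), *save = NULL;
            for (char *tok = strtok_r(dup, ",", &save); tok != NULL;
                 tok = strtok_r(NULL, ",", &save)) {
                int lo, hi, step = 1;
                if (sscanf(tok, "%d-%d:%d", &lo, &hi, &step) >= 2) {
                    /* "0-7:2" sets 0,2,4,6; "10-11" sets 10,11 */
                    for (int c = lo; c <= hi; c += step) CPU_SET(c, &set);
                } else if (sscanf(tok, "%d", &lo) == 1) {
                    CPU_SET(lo, &set); /* single cpu, e.g. "1" */
                }
            }
            free(dup);
            return sched_setaffinity(0, sizeof(set), &set); /* 0 = calling thread */
        }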
      
      Tested on Linux/FreeBSD; both work fine.
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
  11. 28 Apr, 2020 2 commits
  12. 27 Apr, 2020 1 commit
    • Keep track of meaningful replication offset in replicas too · 4447ddc8
      Oran Agra authored
      Now both the master and the replicas keep track of the last
      replication offset that contains meaningful data (ignoring the
      trailing pings), and both trim that tail from the replication backlog
      and use that offset when they try to psync.

      The implication is that if a replica missed some pings, or even has
      extra pings that the promoted replica lacks, it will still be able to
      psync (avoid a full sync).

      The downside (which was already committed) is that replicas running
      old code may fail to psync, since the promoted replica trims pings
      from its backlog.

      This commit adds a test that reproduces several cases of promotions
      and demotions with stale and non-stale pings.
      
      Background:
      The meaningful offset on the master was added recently to solve a
      problem where the master is left all alone, injecting PINGs into its
      backlog when no one is listening, and then gets demoted and tries to
      replicate from a replica that didn't have any of the PINGs (or at
      least not the last ones).
      
      However, consider this case:
      master A has two replicas (B and C) replicating directly from it.
      There's no traffic at all, and also no network issues, just many pings
      in the tail of the backlog. Now B gets promoted, A becomes a replica
      of B, and C remains a replica of A. When A gets demoted, it trims the
      pings from its backlog and successfully replicates from B. However, C
      is still aware of these PINGs; when it disconnects and re-connects to
      A, it'll ask for something that's no longer in the backlog (since A
      trimmed the tail of its backlog), and be forced to do a full sync
      (something it didn't have to do before the meaningful offset fix).
      
      Besides that, the psync2 test was always failing randomly here and
      there; it turns out the reason was PINGs. Investigating it shows the
      following scenario:

      cycle 1: redis #1 is master, and all the rest are direct replicas of #1
      cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is a replica of #1
      Now we see that when #1 is demoted it prints:
      17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
      17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
      17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
      and when #3 connects to the demoted #2, #2 says:
      17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964
      
      So the issue here is that the meaningful offset feature saved the day
      for the demoted master (since it needs to sync from a replica that
      didn't get the last ping), but it didn't help the other replica, which
      did get the last ping.
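      A minimal sketch of the bookkeeping described above, with hypothetical
      field names. Note that a replication-stream PING is the 14-byte RESP
      string "*1\r\n$4\r\nPING\r\n", which is exactly the 14-byte
      difference in the log above.

        typedef struct replState {
            long long master_repl_offset; /* everything written, PINGs included */
            long long meaningful_offset;  /* offset right after the last non-PING data */
        } replState;

        static void feedReplicationStream(replState *r, long long len, int is_ping) {
            r->master_repl_offset += len;
            if (!is_ping) r->meaningful_offset = r->master_repl_offset;
        }

        /* On demotion, ignore the trailing PINGs and ask the new master to
         * continue from the first byte after the meaningful data (the log
         * above requests 3929964 after trimming to 3929963). */
        static long long psyncRequestOffset(const replState *r) {
            return r->meaningful_offset + 1;
        }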
  13. 24 Apr, 2020 1 commit
    • LCS -> STRALGO LCS. · 8a7f255c
      antirez authored
      STRALGO should be a container for mostly read-only string
      algorithms in Redis. The algorithms should have two main
      characteristics:
      
      1. They should be non-trivial to compute, and often not part of
      programming language standard libraries.
      2. They should be fast enough that it is a good idea to have optimized C
      implementations.
      
      Next thing I would love to see? A small strings compression algorithm.
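      Not the Redis implementation, just a minimal illustration of why LCS
      fits both criteria: the textbook O(n*m) dynamic program for the length
      of the longest common subsequence, kept to a single DP row.

        #include <stdlib.h>
        #include <string.h>

        static size_t lcs_len(const char *a, const char *b) {
            size_t n = strlen(a), m = strlen(b);
            size_t *row = calloc(m + 1, sizeof(size_t)); /* row[j] = dp[i-1][j] */
            if (row == NULL) return 0;
            for (size_t i = 1; i <= n; i++) {
                size_t diag = 0; /* dp[i-1][j-1] */
                for (size_t j = 1; j <= m; j++) {
                    size_t up = row[j]; /* dp[i-1][j] before this update */
                    if (a[i-1] == b[j-1]) row[j] = diag + 1;
                    else if (row[j-1] > row[j]) row[j] = row[j-1];
                    diag = up;
                }
            }
            size_t res = row[m];
            free(row);
            return res;
        }

      For example, with "ohmytext" and "mynewtext" this returns 6, the
      length of the common subsequence "mytext".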
  14. 22 Apr, 2020 2 commits
  15. 21 Apr, 2020 1 commit
  16. 19 Apr, 2020 1 commit
  17. 17 Apr, 2020 1 commit
  18. 16 Apr, 2020 1 commit
    • testsuite run the defrag latency test solo · b9fa42a1
      Oran Agra authored
      This test is time sensitive and sometimes fails to pass below the
      latency threshold, even on strong machines.

      This test was the reason we were running just 2 parallel tests in the
      github actions CI; reverting that.
  19. 06 Apr, 2020 3 commits
  20. 03 Apr, 2020 1 commit
    • Try to fix time-sensitive tests in blockonkey.tcl · 1b0d30ae
      Guy Benoish authored
      There is an inherent race between the deferring client and the
      "main" client of the test: while the deferring client issues a
      blocking command, we can't know for sure that by the time the "main"
      client tries to issue another command (usually one that unblocks the
      deferring client) the deferring client is even blocked...
      For lack of a better choice this commit uses TCL's 'after' in order
      to give the deferring client some time to issue its blocking command
      before the "main" client does its thing.
      This problem probably exists in many other tests, but this commit
      tries to fix blockonkeys.tcl.
  21. 02 Apr, 2020 1 commit
  22. 01 Apr, 2020 1 commit
    • Fix memory corruption in moduleHandleBlockedClients · c4dc5b80
      Guy Benoish authored
      By using a "circular BRPOPLPUSH"-like scenario it was possible to get
      the same client on db->blocking_keys twice (see the comment in
      moduleTryServeClientBlockedOnKey).

      The fix was actually already implemented in
      moduleTryServeClientBlockedOnKey, but it had a bug:
      the function should return 0 or 1, not OK or ERR (see the sketch
      after the list below).

      Other changes:
      1. Added two commands to the blockonkeys.c test module (to
         reproduce the case described above)
      2. Simplified blockonkeys.c in order to make testing easier
      3. Cast raxSize() to avoid a warning with the format spec
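      A minimal sketch of why the 0/1 return matters (the helper names and
      bodies are illustrative, not the Redis code): C_ERR is -1, which is
      truthy, so returning status codes where the caller expects a boolean
      made a refused "serve" look like a successful one, and vice versa.

        #define C_OK   0
        #define C_ERR -1

        struct client { int served; };

        static int clientAlreadyServed(struct client *c) { return c->served; } /* stub */
        static void serveClient(struct client *c) { c->served = 1; }           /* stub */

        /* Must return 1 if the client was served, 0 otherwise. */
        static int moduleTryServeClientBlockedOnKey(struct client *c) {
            /* The circular BRPOPLPUSH case: the client is already queued to
             * be served in this cycle, so refuse to serve it twice. */
            if (clientAlreadyServed(c)) return 0; /* was: C_ERR (-1, truthy!) */
            serveClient(c);
            return 1;                             /* was: C_OK (0, falsy!) */
        }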
  23. 31 Mar, 2020 5 commits
  24. 26 Mar, 2020 2 commits
  25. 25 Mar, 2020 2 commits
  26. 23 Mar, 2020 2 commits
    • MULTI/EXEC during LUA script timeout are messed up · ec007559
      Oran Agra authored
      Redis refusing to run MULTI or EXEC during a script timeout may cause
      partial transactions to run.

      1) If the client sends MULTI+commands+EXEC in a pipeline without
      waiting for a response, but these arrive at the shards partially while
      there's a busy script and partially after it eventually finishes,
      we'll end up running only part of the transaction (since MULTI was
      ignored and EXEC would fail).

      2) Similar to the above, if EXEC arrives during a busy script, it'll
      be ignored and the client state remains in a transaction.

      The 3rd test I added, for a case where MULTI and EXEC are ok and only
      the body arrives during the busy script, was already handled
      correctly, since processCommand calls flagTransaction.
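      A minimal sketch of the intended behavior (stub names and bodies are
      illustrative, not the exact Redis code): MULTI and EXEC are exempt
      from the -BUSY rejection, so MULTI still opens the transaction, any
      rejected body command marks it dirty, and EXEC then aborts the whole
      transaction instead of letting part of it run.

        #include <strings.h>

        #define CLIENT_DIRTY_EXEC (1<<0)
        struct client { const char *cmdname; int flags; };

        static int scriptTimedOut(void) { return 1; } /* stub: a script is busy */
        static void addReplyError(struct client *c, const char *msg) { (void)c; (void)msg; }

        static void flagTransaction(struct client *c) {
            c->flags |= CLIENT_DIRTY_EXEC; /* a later EXEC will abort the MULTI */
        }

        /* Returns 1 if the command was rejected because of the busy script. */
        static int rejectedDuringBusyScript(struct client *c) {
            if (!scriptTimedOut()) return 0;
            /* The fix: MULTI and EXEC are processed even while busy. */
            if (!strcasecmp(c->cmdname, "multi") ||
                !strcasecmp(c->cmdname, "exec")) return 0;
            flagTransaction(c); /* a rejected body command dirties the transaction */
            addReplyError(c, "BUSY Redis is busy running a script.");
            return 1;
        }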
    • Fix BITFIELD_RO test. · 61de1c11
      antirez authored