1. 10 Sep, 2020 3 commits
  2. 01 Sep, 2020 5 commits
    • Tests: fix redis-cli with remote hosts. (#7693) · 8d79702d
      Yossi Gottlieb authored
      
      (cherry picked from commit f80f3f49)
    • fix new rdb test failing on timing issues (#7604) · 916b215f
      Oran Agra authored
      apparently on github actions sometimes 500ms is not enough
      
      (cherry picked from commit 824bd2ac)
    • Fix failing tests due to issues with wait_for_log_message (#7572) · 67750ce3
      Oran Agra authored
      - the test now waits for a specific set of log messages rather than waiting
        for a timeout while looking for just one message.
      - we don't want to sample the current length of the log after an action; due
        to a race, we need to start the search from the line number of the last
        message we were waiting for.
      - when attempting to trigger a full sync, use multi-exec to avoid a race
        where the replica manages to re-connect before we completed the set of
        actions that should force a full sync (see the sketch below).
      - fix verify_log_message, which was broken and unused
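
      A minimal C sketch of the multi-exec point above, using hiredis (an
      illustration only: the real test drives this from TCL, and the exact
      commands it wraps may differ; the ones below are placeholders):

        #include <hiredis/hiredis.h>

        int main(void) {
            redisContext *c = redisConnect("127.0.0.1", 6379);
            if (!c || c->err) return 1;

            /* Queue the actions and run them atomically, so the replica can't
             * manage to re-connect half way through and dodge the full sync
             * we're trying to force. */
            redisReply *r;
            r = redisCommand(c, "MULTI");                    freeReplyObject(r);
            r = redisCommand(c, "CLIENT KILL TYPE replica"); freeReplyObject(r); /* placeholder action */
            r = redisCommand(c, "DEBUG SLEEP 0");            freeReplyObject(r); /* placeholder action */
            r = redisCommand(c, "EXEC");                     freeReplyObject(r);

            redisFree(c);
            return 0;
        }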
      
      (cherry picked from commit 109b5ccd)
    • Stabilize bgsave test that sometimes fails with valgrind (#7559) · 6daa8b9a
      Oran Agra authored
      on ci.redis.io the test fails a lot, reporting that the bgsave didn't end.
      increasing the timeout we wait for that bgsave to get aborted.
      in addition to that, I also verify that it indeed got aborted by
      checking that the save counter wasn't reset.
      
      add another test to verify that a successful bgsave indeed resets the
      change counter.
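
      One way to observe that from a client (a rough sketch, not the test's
      actual TCL code) is through the rdb_changes_since_last_save field of
      INFO persistence, which a successful bgsave resets to 0:

        #include <stdio.h>
        #include <string.h>
        #include <hiredis/hiredis.h>

        int main(void) {
            redisContext *c = redisConnect("127.0.0.1", 6379);
            if (!c || c->err) return 1;

            redisReply *info = redisCommand(c, "INFO persistence");
            if (info && info->type == REDIS_REPLY_STRING) {
                /* after a successful bgsave the counter goes back to 0; if the
                 * bgsave was aborted it keeps its previous value. */
                if (strstr(info->str, "rdb_changes_since_last_save:0"))
                    printf("change counter was reset\n");
                else
                    printf("change counter was not reset\n");
            }
            freeReplyObject(info);
            redisFree(c);
            return 0;
        }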
      
      (cherry picked from commit 8a57969f)
    • Tests: drop TCL 8.6 dependency. (#7548) · f1d5d5d2
      Yossi Gottlieb authored
      This re-implements the redis-cli --pipe test so it no longer depends on a close feature available only in TCL 8.6.
      
      Basically what this test does is run redis-cli --pipe: it generates a bunch of commands, pipes them through redis-cli, and inspects the result both in Redis and in the redis-cli output.
      
      To do that, we need to close stdin for redis-cli to indicate we're done so it can flush its buffers and exit. TCL channels are bi-directional and only offer a way to "one-way close" a channel starting with TCL 8.6. To work around that, we now generate the commands into a file and feed that file to redis-cli directly.
      
      As we're writing to an actual file, the number of commands is now reduced.
      
      (cherry picked from commit f57e844b)
  3. 20 Jul, 2020 5 commits
    • redis-cli tests, fix valgrind timing issue (#7519) · 05f8975d
      Oran Agra authored
      this test when run with valgrind on github actions takes 160 seconds
      
      (cherry picked from commit 254c9625)
    • fix recently added time sensitive tests failing with valgrind (#7512) · aea4db2f
      Oran Agra authored
      interestingly, the latency monitor test fails because valgrind is slow
      enough that the time spent inside the PEXPIREAT command, from the moment
      of the first mstime() call that gets the base time until
      checkAlreadyExpired calls mstime() again, is more than 1ms, and that
      test was too sensitive.

      the fix is just a longer time passed to PEXPIRE; we also use this
      opportunity to speed up the test (unrelated to the failure).
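
      A self-contained sketch of the timing hazard described above (an
      illustration, not the actual Redis code; mstime below is a local
      stand-in for the server's helper of the same name):

        #include <stdio.h>
        #include <sys/time.h>

        static long long mstime(void) {
            struct timeval tv;
            gettimeofday(&tv, NULL);
            return ((long long)tv.tv_sec) * 1000 + tv.tv_usec / 1000;
        }

        int main(void) {
            long long ttl_ms = 1;                /* the short TTL the old test used */
            long long when = mstime() + ttl_ms;  /* first sample: the base time */
            /* ... under valgrind, getting from here to the expiry check can
             * easily take more than 1ms ... */
            if (mstime() >= when)                /* second sample */
                printf("key treated as already expired\n");
            return 0;
        }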
      
      (cherry picked from commit e5227aab)
    • TLS: Add missing redis-cli options. (#7456) · b057ff81
      Yossi Gottlieb authored
      * Tests: fix and reintroduce redis-cli tests.
      
      These tests have been broken and disabled for 10 years now!
      
      * TLS: add remaining redis-cli support.
      
      This adds support for the redis-cli --pipe, --rdb and --replica options
      previously unsupported in --tls mode.
      
      * Fix writeConn().
      
      (cherry picked from commit d9f970d8)
    • stabilize tests that look for log lines (#7367) · 2b5f2319
      Oran Agra authored
      tests were sensitive to additional log lines appearing in the log,
      causing the search to come up empty-handed.

      instead of just looking at the last n log lines, capture the current log
      position before performing the action, and then search from that offset
      (see the sketch below).
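
      A minimal C sketch of that offset-based search (the real helper lives in
      the TCL test suite; the names here are illustrative):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Return the 1-based line number of the first line at or after
         * from_line that contains pattern, or -1 if there is no such line.
         * The caller records the log's line count *before* the action and
         * passes it here, so lines printed earlier can never satisfy the
         * search by accident. */
        long find_in_log(const char *path, long from_line, const char *pattern) {
            FILE *f = fopen(path, "r");
            if (!f) return -1;
            char line[1024];
            long lineno = 0, found = -1;
            while (fgets(line, sizeof(line), f)) {
                lineno++;
                if (lineno >= from_line && strstr(line, pattern)) {
                    found = lineno;
                    break;
                }
            }
            fclose(f);
            return found;
        }

        int main(int argc, char **argv) {
            if (argc == 4)
                printf("%ld\n", find_in_log(argv[1], atol(argv[2]), argv[3]));
            return 0;
        }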
      
      (cherry picked from commit 8e76e134)
    • tests/valgrind: don't use debug restart (#7404) · 1104113c
      Oran Agra authored
      * tests/valgrind: don't use debug restart
      
      DEBUG RESTART causes two issues:
      1. it uses execve which replaces the original process, and valgrind doesn't
         have a chance to check for errors, so leaks go unreported.
      2. valgrind reports invalid calls to close() which we're unable to resolve.

      So now the tests use the restart_server mechanism, which terminates the
      old server and starts a new one (new PID, but same stdout and stderr).

      since the stderr can contain two or more valgrind reports, it is not enough
      to just check for the absence of leaks; we also need to check for some known
      errors. we do both, and fail if we either find an error or can't find a
      report saying there are no leaks (see the sketch below).
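
      A rough sketch of that dual check (an assumption about its shape, not
      the actual TCL helper; the error markers below are a simplified subset
      of what valgrind can print):

        #include <stdio.h>
        #include <string.h>

        /* The stderr file now holds reports from more than one valgrind run,
         * so require (a) no known error markers anywhere and (b) at least one
         * report that explicitly says no leaks are possible. */
        int valgrind_stderr_ok(const char *path) {
            FILE *f = fopen(path, "r");
            if (!f) return 0;
            char line[1024];
            int clean_reports = 0, errors = 0;
            while (fgets(line, sizeof(line), f)) {
                if (strstr(line, "no leaks are possible")) clean_reports++;
                if (strstr(line, "Invalid read") || strstr(line, "Invalid write") ||
                    strstr(line, "Invalid free"))
                    errors++;
            }
            fclose(f);
            return errors == 0 && clean_reports > 0;
        }

        int main(int argc, char **argv) {
            if (argc > 1)
                printf("%s\n", valgrind_stderr_ok(argv[1]) ? "ok" : "failed");
            return 0;
        }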
      
      other changes:
      - when killing a server that was already terminated we check for leaks too.
      - adding DEBUG LEAK which was used to test it.
      - adding --trace-children to valgrind, although no longer needed.
      - since the stdout contains two or more runs, we need a slightly different
        way of checking if the new process is up (explicitly looking for the new PID)
      - move the code that handles --wait-server to happen earlier (before
        watching the startup message in the log), and serve the restarted server too.
      
      * squashme - CR fixes
      
      (cherry picked from commit 69ade873)
  4. 06 Jun, 2020 1 commit
  5. 28 May, 2020 6 commits
  6. 22 May, 2020 4 commits
  7. 14 May, 2020 2 commits
    • fix redis 6.0 not freeing closed connections during loading. · 9da134cd
      Oran Agra authored
      This bug was introduced by a recent change in which readQueryFromClient
      is using freeClientAsync, and despite the fact that now
      freeClientsInAsyncFreeQueue is in beforeSleep, that's not enough since
      it's not called during loading in processEventsWhileBlocked.
      furthermore, afterSleep was called in that case but beforeSleep wasn't.
      
      This bug also caused slowness since the level-triggered mode of epoll
      kept signaling these connections as readable, causing us to keep doing
      connRead again and again on all of them as they kept accumulating.
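
      The level-triggered behaviour is easy to reproduce in isolation; a small
      stand-alone demo (not Redis code) of an fd whose peer has gone away but
      which keeps being reported as readable until it is actually closed:

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/epoll.h>
        #include <sys/socket.h>

        int main(void) {
            int sv[2];
            socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
            close(sv[1]);                        /* the peer "drops" the connection */

            int ep = epoll_create1(0);
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = sv[0] };
            epoll_ctl(ep, EPOLL_CTL_ADD, sv[0], &ev);

            for (int i = 0; i < 3; i++) {
                struct epoll_event out;
                int n = epoll_wait(ep, &out, 1, 100);
                char buf[16];
                ssize_t got = read(sv[0], buf, sizeof(buf));
                /* read() returns 0 (EOF) every time, yet epoll keeps reporting
                 * the fd as readable: if the client is never freed, the event
                 * loop just spins on it. */
                printf("iteration %d: events=%d read=%zd\n", i, n, got);
            }
            return 0;
        }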
      
      now both beforeSleep and afterSleep are called, but not all of their
      actions are performed during loading; some are reserved for the main loop only.
      
      fixes issue #7215
    • fix unstable replication test · 5c41802d
      Oran Agra authored
      this test, which has coverage for various flows of diskless master, was
      failing randomly from time to time.
      
      the failure was:
      [err]: diskless all replicas drop during rdb pipe in tests/integration/replication.tcl
      log message of '*Diskless rdb transfer, last replica dropped, killing fork child*' not found
      
      what seems to have happened is that the master didn't detect that all
      replicas had dropped by the time the replication ended; it thought that
      one replica was still connected.
      
      now the test takes a few seconds longer but it seems stable.
  8. 08 May, 2020 1 commit
    • add daily github actions with libc malloc and valgrind · 3d3861dd
      Oran Agra authored
      * fix memory leaks with diskless replica short read.
      * fix a few timing issues with valgrind runs
      * fix issue with valgrind and watchdog schedule signal
      
      about the valgrind WD issue:
      the stack trace test in logging.tcl has issues with valgrind:
      ==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
      ==28808==   too small or bad protection modes
      
      it seems to be some valgrind bug with SA_ONSTACK.
      SA_ONSTACK seems unneeded since the WD is not recursive (SA_NODEFER was removed);
      also, it's not clear it is even valid without a call to sigaltstack().
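
      For reference, a minimal sketch of installing a SIGALRM handler without
      SA_ONSTACK (not the actual Redis watchdog code, just the shape of setup
      the paragraph above is talking about):

        #include <signal.h>
        #include <string.h>
        #include <unistd.h>

        static void watchdog_handler(int sig, siginfo_t *info, void *ctx) {
            (void)sig; (void)info; (void)ctx;
            /* the real watchdog would log a stack trace here */
        }

        int main(void) {
            struct sigaction act;
            memset(&act, 0, sizeof(act));
            /* no SA_ONSTACK and no SA_NODEFER: the handler runs on the normal
             * stack, which is fine as long as the handler is not recursive. */
            act.sa_flags = SA_SIGINFO;
            act.sa_sigaction = watchdog_handler;
            sigaction(SIGALRM, &act, NULL);
            alarm(1);   /* stand-in for the watchdog's periodic timer */
            pause();    /* wait for the signal to be delivered */
            return 0;
        }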
  9. 28 Apr, 2020 1 commit
  10. 27 Apr, 2020 1 commit
    • Keep track of meaningful replication offset in replicas too · e4d2bb62
      Oran Agra authored
      Now both master and replicas keep track of the last replication offset
      that contains meaningful data (ignoring the trailing pings), and both
      trim that tail from the replication backlog and from the offset they
      use when trying to psync.
      
      the implication is that if someone missed some pings, or even has pings
      in excess of what the promoted replica has, it'll still be able to psync
      (avoid a full sync).
      
      the downside (which was already committed) is that replicas running old
      code may fail to psync, since the promoted replica trims pings from its
      backlog.
      
      This commit adds a test that reproduces several cases of promotions and
      demotions with stale and non-stale pings
      
      Background:
      The mearningful offset on the master was added recently to solve a problem were
      the master is left all alone, injecting PINGs into it's backlog when no one is
      listening and then gets demoted and tries to replicate from a replica that didn't
      have any of the PINGs (or at least not the last ones).
      
      however, consider this case:
      master A has two replicas (B and C) replicating directly from it.
      there's no traffic at all, and also no network issues, just many pings in
      the tail of the backlog. now B gets promoted, A becomes a replica of B,
      and C remains a replica of A. when A gets demoted, it trims the pings
      from its backlog, and successfully replicates from B. however, C is still
      aware of these PINGs; when it disconnects and re-connects to A, it'll ask
      for something that's no longer in the backlog (since A trimmed the tail
      of its backlog), and be forced to do a full sync (something it didn't
      have to do before the meaningful offset fix).
      
      Besides that, the psync2 test was always failing randomly here and there;
      it turns out the reason was PINGs. Investigating it shows the following scenario:
      
      cycle 1: redis #1 is master, and all the rest are direct replicas of #1
      cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
      now we see that when #1 is demoted it prints:
      17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
      17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
      17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
      and when #3 connects to the demoted #1, #1 says:
      17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964
      
      so the issue here is that the meaningful offset feature saved the day for the
      demoted master (since it needs to sync from a replica that didn't get the last
      ping), but it didn't help one of the other replicas which did get the last ping.
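
      For reference, a single PING propagated on the replication stream is the
      RESP payload "*1\r\n$4\r\nPING\r\n", i.e. 4 + 4 + 6 = 14 bytes, which
      matches both 14-byte gaps in the log excerpts above (3929977 - 3929963
      and 3929978 - 3929964).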
  11. 25 Mar, 2020 1 commit
  12. 12 Mar, 2020 1 commit
    • fix for flaky psync2 test · 61738154
      Oran Agra authored
      *** [err]: PSYNC2: total sum of full synchronizations is exactly 4 in tests/integration/psync2.tcl
      Expected 5 == 4 (context: type eval line 6 cmd {assert {$sum == 4}} proc ::test)
      
      the issue was that sometimes the test got an unexpected full sync since it
      tried to switch to the replica before it was in sync with its master.
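
      One hedged illustration of avoiding such a race (an assumption about the
      approach, not necessarily what the test ended up doing) is to block on
      WAIT until the replica has acknowledged the master's current offset
      before doing the switch:

        #include <stdio.h>
        #include <hiredis/hiredis.h>

        int main(void) {
            redisContext *master = redisConnect("127.0.0.1", 6379);
            if (!master || master->err) return 1;

            /* WAIT blocks until at least 1 replica has acknowledged the current
             * replication offset, or the 5000 ms timeout expires. */
            redisReply *r = redisCommand(master, "WAIT 1 5000");
            if (r && r->type == REDIS_REPLY_INTEGER && r->integer >= 1)
                printf("replica is in sync, safe to switch\n");
            freeReplyObject(r);
            redisFree(master);
            return 0;
        }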
  13. 18 Dec, 2019 1 commit
  14. 09 Oct, 2019 1 commit
  15. 07 Oct, 2019 4 commits
    • TLS: Configuration options. · 61733ded
      Yossi Gottlieb authored
      Add configuration options for TLS protocol versions, ciphers/cipher
      suites selection, etc.
    • TLS: Implement support for write barrier. · 6b629480
      Oran Agra authored
    • diskless replication rdb transfer uses pipe, and writes to sockets from the parent process. · 5a477946
      Oran Agra authored
      misc:
      - handle SSL_has_pending by iterating through these in beforeSleep, and setting a timeout of 0 to aeProcessEvents
      - fix issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers. (needed to detect the rdb pipe was closed)
      - add key-load-delay config for testing
      - trim connShutdown which is no longer needed
      - rioFdsetWrite -> rioFdWrite - simplified since there's no longer a need to write to multiple FDs
      - don't detect rdb child exited (don't call wait3) until we detect the pipe is closed
      - Cleanup bad optimization from rio.c, add another one
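
      A bare-bones sketch of that parent-writes-to-sockets pattern
      (illustrative only, not the Redis implementation): the fork child
      produces the payload into a pipe, and the parent is the only process
      touching the outgoing connections:

        #include <string.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        int main(void) {
            int pipefd[2];
            if (pipe(pipefd) == -1) return 1;

            pid_t pid = fork();
            if (pid == 0) {                  /* child: produces the "rdb" payload */
                close(pipefd[0]);
                const char *payload = "RDB-PAYLOAD";
                write(pipefd[1], payload, strlen(payload));
                close(pipefd[1]);
                _exit(0);
            }

            /* parent: reads from the pipe and forwards the data; in Redis this
             * would be a write to each replica socket, here just stdout. */
            close(pipefd[1]);
            char buf[64];
            ssize_t n;
            while ((n = read(pipefd[0], buf, sizeof(buf))) > 0)
                write(STDOUT_FILENO, buf, n);
            close(pipefd[0]);
            waitpid(pid, NULL, 0);  /* only reap the child once the pipe is closed */
            return 0;
        }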
    • TLS: Connections refactoring and TLS support. · b087dd1d
      Yossi Gottlieb authored
      * Introduce a connection abstraction layer for all socket operations and
      integrate it across the code base.
      * Provide an optional TLS connections implementation based on OpenSSL.
      * Pull a newer version of hiredis with TLS support.
      * Tests, redis-cli updates for TLS support.
  16. 26 Sep, 2019 1 commit
  17. 17 Jul, 2019 1 commit
    • prevent diskless replica from terminating on short read · c56b4ddc
      Oran Agra authored
      now that the replica can read the rdb directly from the socket, it should
      avoid exiting on a short read and instead try to re-sync.

      this commit tries to have minimal effect on non-diskless rdb reading,
      and includes a test that tries to trigger this scenario on various read cases.
  18. 08 Jul, 2019 1 commit
    • diskless replication on slave side (don't store rdb to file), plus some other related fixes · 2de544cf
      Oran Agra authored
      The implementation of diskless replication was so far diskless only on the master side.
      The slave side was still storing the received rdb file to disk before loading it back in and parsing it.
      
      This commit adds two modes to load the rdb directly from the socket:
      1) when-empty
      2) using "swapdb"
      The third mode, using a diskless slave by flushdb, is risky and currently not included.
      
      other changes:
      --------------
      distinguish between aof configuration and state, so that we can re-enable aof
      only when sync eventually succeeds (and not when exiting from readSyncBulkPayload
      after a failed attempt); also, a CONFIG GET and INFO during rdb loading would have lied
      
      When loading rdb from the network, don't kill the server on short read (that can be a network error)
      
      Fix rdb check when performed on preamble AOF
      
      tests:
      run replication tests for diskless slave too
      make replication test a bit more aggressive
      Add test for diskless load swapdb