1. 27 Apr, 2020 1 commit
    • Keep track of meaningful replication offset in replicas too · 4447ddc8
      Oran Agra authored
      Now both master and replicas keep track of the last replication offset
      that contains meaningful data (ignoring the trailing PINGs), and both
      trim that tail from the replication backlog and from the offset they
      use when attempting a PSYNC.
      
      The implication is that a replica that missed some PINGs, or that has
      extra PINGs the promoted replica lacks, will still be able to PSYNC
      (avoiding a full sync).
      
      The downside (introduced by the already-committed meaningful offset
      change) is that replicas running old code may fail to PSYNC, since the
      promoted replica trims PINGs from its backlog.
      
      This commit adds a test that reproduces several cases of promotions
      and demotions with stale and non-stale PINGs.
      
      Background:
      The meaningful offset on the master was added recently to solve a problem where
      the master is left all alone, injecting PINGs into its backlog when no one is
      listening, and then gets demoted and tries to replicate from a replica that
      didn't have any of the PINGs (or at least not the last ones).
      
      However, consider this case:
      master A has two replicas (B and C) replicating directly from it.
      There's no traffic at all, and also no network issues, just many PINGs in the
      tail of the backlog. Now B gets promoted, A becomes a replica of B, and C
      remains a replica of A. When A gets demoted, it trims the PINGs from its
      backlog and successfully replicates from B. However, C is still aware of
      these PINGs: when it disconnects and re-connects to A, it will ask for an
      offset that's no longer in the backlog (since A trimmed its tail), and be
      forced to do a full sync (something it didn't have to do before the
      meaningful offset fix).
      
      Besides that, the psync2 test kept failing randomly here and there; it
      turns out the reason was PINGs. Investigating it shows the following scenario:
      
      cycle 1: redis #1 is master, and all the rest are direct replicas of #1
      cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
      Now we see that when #1 is demoted, it prints:
      17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
      17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
      17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
      and when #3 connects to the demoted #2, #2 says:
      17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964
      
      So the issue here is that the meaningful offset feature saved the day for the
      demoted master (since it needs to sync from a replica that didn't get the last
      PING), but it didn't help one of the other replicas, which did get the last PING.
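      To make the trimming step concrete, here is a minimal C sketch of the
      idea (illustrative only, not the actual Redis code: the struct and its
      field names are assumptions mirroring the description above):

          /* Illustrative sketch only -- not the actual Redis implementation. */
          struct replState {
              long long master_repl_offset;   /* offset of the last byte produced */
              long long meaningful_offset;    /* offset of the last non-PING byte */
              long long repl_backlog_histlen; /* bytes currently held in the backlog */
              long long repl_backlog_idx;     /* write position in the circular buffer */
              long long repl_backlog_size;    /* total size of the circular buffer */
          };

          /* On demotion, drop the PING-only tail so that both the stored history
           * and the offset used for PSYNC end at meaningful data. */
          void trimBacklogToMeaningfulOffset(struct replState *st) {
              long long tail = st->master_repl_offset - st->meaningful_offset;
              if (tail <= 0 || tail > st->repl_backlog_histlen) return;
              st->repl_backlog_histlen -= tail;
              st->repl_backlog_idx = (st->repl_backlog_idx - tail +
                                      st->repl_backlog_size) % st->repl_backlog_size;
              st->master_repl_offset = st->meaningful_offset;
          }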
  2. 28 Mar, 2020 1 commit
  3. 27 Mar, 2020 1 commit
  4. 25 Mar, 2020 1 commit
    • PSYNC2: meaningful offset implemented. · 57fa355e
      antirez authored
      A very commonly reported operational problem with Redis master-replica
      sets is that, once the master becomes unavailable for some reason,
      especially because of network problems, it often won't be able to
      perform a partial resynchronization with the new master once it rejoins
      the partition, for the following reason:
      
      1. The master becomes isolated, but it keeps sending PINGs to the
      replicas. Such PINGs will never be received since the link is
      actually already severed.
      2. On the other side, one of the replicas will turn into the new master,
      setting its secondary replication ID offset to the one of the last
      command received from the old master: this offset will not include the
      PINGs sent by the master once the link was already disconnected.
      3. When the master rejoins the partition and is turned into a replica, its
      offset will be too advanced because of the PINGs, so a PSYNC will fail
      and a full synchronization will be required.
      
      Related to issue #7002 and other discussion we had in the past around
      this problem.
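      The core of the idea can be sketched in a few lines of C (illustrative
      only; the variable names are assumptions, not the actual Redis fields):
      whenever the master feeds the replication stream with anything other
      than a PING, it records the current offset as the last meaningful one,
      so a later demotion can fall back to it.

          /* Illustrative sketch -- names are assumptions, not the real code. */
          #include <stdbool.h>

          static long long master_repl_offset;     /* total stream produced so far */
          static long long meaningful_repl_offset; /* offset before trailing PINGs */

          /* Called after appending 'len' bytes to the replication stream. */
          void onReplStreamWrite(long long len, bool is_ping) {
              master_repl_offset += len;
              if (!is_ping) {
                  /* Anything that is not a PING makes the stream so far
                   * meaningful; PINGs after this point can be ignored. */
                  meaningful_repl_offset = master_repl_offset;
              }
          }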
  5. 23 Mar, 2020 1 commit
  6. 04 Mar, 2020 4 commits
  7. 03 Mar, 2020 1 commit
  8. 25 Feb, 2020 1 commit
  9. 06 Feb, 2020 3 commits
  10. 31 Dec, 2019 1 commit
  11. 19 Dec, 2019 1 commit
  12. 19 Nov, 2019 1 commit
    • Use libsystemd's sd_notify for communicating redis status to systemd · 641c64ad
      Johannes Truschnigg authored
      Instead of replicating a subset of libsystemd's sd_notify(3) internally,
      use the dynamic library provided by systemd to communicate with the
      service manager.
      
      When systemd supervision is auto-detected or configured, communicate
      the actual server status (i.e. "Loading dataset", "Waiting for
      master<->replica sync") to systemd, instead of declaring readiness right
      after initializing the server process.
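      For reference, the libsystemd calls involved look roughly like this (a
      minimal sd_notify(3) usage sketch matching the statuses quoted above,
      not the exact code of this commit; link with -lsystemd):

          /* Minimal sd_notify(3) sketch; not the literal commit. */
          #include <systemd/sd-daemon.h>

          void report_loading(void) { sd_notify(0, "STATUS=Loading dataset"); }
          void report_syncing(void) {
              sd_notify(0, "STATUS=Waiting for master<->replica sync");
          }
          void report_ready(void) {
              /* READY=1 tells systemd the service is fully up. */
              sd_notify(0, "READY=1\nSTATUS=Ready to accept connections");
          }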
  13. 29 Oct, 2019 1 commit
    • Modules hooks: complete missing hooks for the initial set of hooks · 51c3ff8d
      Oran Agra authored
      * replication hooks: role change, master link status, replica online/offline
      * persistence hooks: saving, loading, loading progress
      * misc hooks: cron loop, shutdown, module loaded/unloaded
      * change the way hooks test work, and add tests for all of the above
      
      startLoading() now gets a flag indicating what is being loaded.
      stopLoading() now gets an indication of success or failure.
      Adding startSaving() and stopSaving() with similar args and role.
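      From a module author's point of view these hooks surface through the
      server-events API; a hedged sketch of subscribing to the role-change
      hook (names follow the Redis 6 module API, check redismodule.h for the
      exact constants):

          /* Sketch of subscribing to a role-change server event. */
          #include "redismodule.h"

          static void roleChanged(RedisModuleCtx *ctx, RedisModuleEvent e,
                                  uint64_t sub, void *data) {
              REDISMODULE_NOT_USED(e); REDISMODULE_NOT_USED(data);
              RedisModule_Log(ctx, "notice", "role is now %s",
                  sub == REDISMODULE_SUBEVENT_REPLROLECHANGED_NOW_MASTER ?
                  "master" : "replica");
          }

          int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv,
                                 int argc) {
              REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
              if (RedisModule_Init(ctx, "rolewatch", 1, REDISMODULE_APIVER_1)
                  == REDISMODULE_ERR) return REDISMODULE_ERR;
              return RedisModule_SubscribeToServerEvent(ctx,
                  RedisModuleEvent_ReplicationRoleChanged, roleChanged);
          }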
  14. 10 Oct, 2019 1 commit
    • Cluster: fix memory leak of cached master. · 747be463
      antirez authored
      This is what happened:
      
      1. Instance starts, is a slave in the cluster configuration, but
      actually server.masterhost is not set, so technically the instance
      is acting like a master.
      
      2. loadDataFromDisk() calls replicationCacheMasterUsingMyself() even if
      the instance is acting as a master, in the case where it is logically a
      slave and cluster mode is enabled. So now we have a cached master even
      though the instance is practically configured as a master (from the POV
      of the server.masterhost value and so forth).
      
      3. clusterCron() sees that the instance needs to replicate from its
      master, because logically it is a slave, so it calls
      replicationSetMaster(), which will in turn call
      replicationCacheMasterUsingMyself(): before this commit, this call would
      overwrite the old cached master, creating a memory leak.
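      The fix boils down to never overwriting an existing cached master
      without freeing it first; a hedged sketch of the guard (illustrative,
      with hypothetical helper names, not the literal patch in replication.c):

          /* Illustrative sketch of the fix, not the literal patch. */
          typedef struct client client; /* opaque here */

          struct serverState {
              client *cached_master; /* master client cached across demotions */
          };

          extern void freeClient(client *c);
          extern client *createCachedMasterFromSelf(struct serverState *s);

          void cacheMasterUsingMyself(struct serverState *s) {
              /* Before the fix, an existing cached master was silently
               * overwritten here, leaking the old client object. */
              if (s->cached_master) {
                  freeClient(s->cached_master);
                  s->cached_master = NULL;
              }
              s->cached_master = createCachedMasterFromSelf(s);
          }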
  15. 07 Oct, 2019 4 commits
    • TLS: Configuration options. · 61733ded
      Yossi Gottlieb authored
      Add configuration options for TLS protocol versions, ciphers/cipher
      suites selection, etc.
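      The resulting directives look like the following redis.conf excerpt
      (a hedged example: the directive names match the TLS support shipped in
      Redis 6, the values are placeholders; see the shipped redis.conf for
      the authoritative list):

          # Example TLS configuration (placeholder paths and values).
          tls-port 6379
          tls-cert-file /etc/redis/redis.crt
          tls-key-file /etc/redis/redis.key
          tls-ca-cert-file /etc/redis/ca.crt
          tls-protocols "TLSv1.2 TLSv1.3"
          tls-ciphers DEFAULT:!MEDIUM
          tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256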
    • diskless replication rdb transfer uses pipe, and writes to sockets from the parent process. · 5a477946
      Oran Agra authored
      misc:
      - handle SSL_has_pending by iterating through these in beforeSleep, and setting a timeout of 0 to aeProcessEvents
      - fix an issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed)
      - add key-load-delay config for testing
      - trim connShutdown which is no longer needed
      - rioFdsetWrite -> rioFdWrite - simplified since there's no longer a need to write to multiple FDs
      - don't detect that the rdb child exited (don't call wait3) until we detect the pipe is closed
      - cleanup a bad optimization from rio.c, add another one
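      The heart of the change is the classic fork-plus-pipe pattern sketched
      below (illustrative only; helper names like writeRdbToFd and
      relayToReplicaSockets are made up for the example, and the real server
      reads the pipe from the event loop rather than in a blocking loop):

          /* Illustrative pipe pattern, not the actual Redis code: the RDB
           * child writes the payload into a pipe and the parent relays it. */
          #include <sys/types.h>
          #include <unistd.h>

          extern void writeRdbToFd(int fd);          /* runs in the child */
          extern void relayToReplicaSockets(const char *buf, ssize_t n);

          int forkRdbToPipe(void) {
              int fds[2];
              if (pipe(fds) == -1) return -1;
              pid_t pid = fork();
              if (pid == -1) return -1;
              if (pid == 0) {             /* child: produce the RDB payload */
                  close(fds[0]);
                  writeRdbToFd(fds[1]);
                  _exit(0);
              }
              close(fds[1]);              /* parent: consume and fan out */
              char buf[16384];
              ssize_t n;
              while ((n = read(fds[0], buf, sizeof(buf))) > 0)
                  relayToReplicaSockets(buf, n);
              close(fds[0]);              /* EOF also signals the child is done */
              return 0;
          }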
    • TLS: Connections refactoring and TLS support. · b087dd1d
      Yossi Gottlieb authored
      * Introduce a connection abstraction layer for all socket operations and
      integrate it across the code base.
      * Provide an optional TLS connections implementation based on OpenSSL.
      * Pull a newer version of hiredis with TLS support.
      * Tests, redis-cli updates for TLS support.
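      The abstraction is essentially a vtable over socket operations so that
      plain TCP and TLS implementations can be swapped; a rough sketch of the
      shape (field and function names here are illustrative, the real
      definition in connection.h differs in detail):

          /* Rough sketch of a connection vtable, not the real connection.h. */
          #include <sys/types.h>

          typedef struct connection connection;

          typedef struct ConnectionType {
              int     (*connect)(connection *c, const char *addr, int port);
              ssize_t (*write)(connection *c, const void *buf, size_t len);
              ssize_t (*read)(connection *c, void *buf, size_t len);
              void    (*close)(connection *c);
          } ConnectionType;

          struct connection {
              const ConnectionType *type; /* plain TCP or TLS implementation */
              int fd;
              void *private_data;         /* e.g. the SSL object for TLS */
          };

          /* Callers go through the vtable and never touch the fd directly. */
          static inline ssize_t connWrite(connection *c, const void *buf,
                                          size_t len) {
              return c->type->write(c, buf, len);
          }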
    • fix typo salves to slaves · bea0384f
      charsyam authored
  16. 27 Sep, 2019 1 commit
  17. 05 Aug, 2019 1 commit
  18. 30 Jul, 2019 2 commits
  19. 17 Jul, 2019 1 commit
    • Module API for Forking · 56258c6b
      Oran Agra authored
      * create a module API for forking child processes.
      * refactor duplicate code around creating and tracking forks by AOF and RDB.
      * child processes listen to SIGUSR1 and die via exitFromChild in order to
        eliminate a valgrind warning about an unhandled signal.
      * note that the BGSAVE error reply has changed.
      
      valgrind error is:
        Process terminating with default action of signal 10 (SIGUSR1)
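      Usage from a module looks roughly like this (a sketch based on the
      RedisModule_Fork / RedisModule_ExitFromChild API this commit introduces;
      check redismodule.h for the exact signatures):

          /* Sketch of the module fork API usage. */
          #include "redismodule.h"

          static void forkDone(int exitcode, int bysignal, void *user_data) {
              /* Runs in the parent once the child terminates. */
              REDISMODULE_NOT_USED(exitcode);
              REDISMODULE_NOT_USED(bysignal);
              REDISMODULE_NOT_USED(user_data);
          }

          int doBackgroundWork(RedisModuleCtx *ctx) {
              int pid = RedisModule_Fork(forkDone, NULL);
              if (pid == -1)
                  return RedisModule_ReplyWithError(ctx, "ERR fork failed");
              if (pid == 0) {
                  /* Child: do the heavy work, then exit through the API so
                   * the server can clean up after it. */
                  RedisModule_ExitFromChild(0);
              }
              return RedisModule_ReplyWithSimpleString(ctx, "OK");
          }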
  20. 10 Jul, 2019 2 commits
  21. 08 Jul, 2019 2 commits
    • antirez · 81b18fa3
    • diskless replication on slave side (don't store rdb to file), plus some other related fixes · 2de544cf
      Oran Agra authored
      The implementation of diskless replication was so far diskless only on the master side.
      The slave side was still storing the received rdb file to disk before loading it back in and parsing it.
      
      This commit adds two modes to load the rdb directly from the socket:
      1) when-empty
      2) using "swapdb"
      The third mode, a diskless slave via flushdb, is risky and currently not
      included; the two supported modes map to a configuration directive
      sketched at the end of this entry.
      
      other changes:
      --------------
      Distinguish between AOF configuration and state, so that we can re-enable AOF
      only when sync eventually succeeds (and not when exiting from
      readSyncBulkPayload after a failed attempt); previously a CONFIG GET or INFO
      during rdb loading would have lied.
      
      When loading the rdb from the network, don't kill the server on a short read
      (that can be a network error).
      
      Fix rdb check when performed on a preamble AOF.
      
      tests:
      run replication tests for the diskless slave too
      make the replication test a bit more aggressive
      add a test for diskless load with swapdb
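      The replica-side behavior is controlled by a configuration directive;
      a hedged redis.conf excerpt (directive and value names as they appear
      in Redis 6, where the "when-empty" mode surfaced as "on-empty-db"):

          # Hedged example; see the shipped redis.conf for authoritative docs.
          # disabled    -- write the rdb to disk first (the default)
          # on-empty-db -- parse straight from the socket, only when safe
          # swapdb      -- keep the old data aside in RAM while parsing
          repl-diskless-load on-empty-db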
  22. 15 May, 2019 1 commit
    • Narrow the effects of PR #6029 to the exact state. · 074d24df
      antirez authored
      CLIENT PAUSE may be used, in other contexts, for a long time, making all
      the slaves time out. Better for now to be more specific about what
      should disable sending PINGs.
      
      An alternative would be to virtually refresh the slave interactions
      when clients are paused; however, for now I went for this more
      conservative solution.
  23. 17 Apr, 2019 1 commit
  24. 21 Mar, 2019 2 commits
  25. 20 Mar, 2019 2 commits
  26. 18 Mar, 2019 1 commit
  27. 10 Mar, 2019 1 commit