1. 25 Mar, 2020 1 commit
    • PSYNC2: meaningful offset implemented. · 57fa355e
      antirez authored
      A very commonly reported operational problem with Redis master-replica
      sets is that, once the master becomes unavailable for some reason,
      especially because of network problems, it will often be unable to
      perform a partial resynchronization with the new master once it rejoins
      the partition, for the following reason:
      
      1. The master becomes isolated, however it keeps sending PINGs to the
      replicas. Such PINGs will never be received since the link is
      actually already severed.
      2. On the other side, one of the replicas is promoted to be the new master,
      setting the offset of its secondary replication ID to the one of the last
      command received from the old master: this offset will not include the
      PINGs sent by the old master once the link was already disconnected.
      3. When the old master rejoins the partition and is turned into a replica,
      its offset will be too advanced because of those PINGs, so a PSYNC will
      fail and a full synchronization will be required.
      
      Related to issue #7002 and other discussion we had in the past around
      this problem.
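
      A minimal sketch of the idea behind the fix (an illustration, not the
      actual patch; the names below are hypothetical): next to the normal
      replication offset, track the offset of the last non-PING data produced,
      and rewind to it when the old master is demoted to a replica.

          #include <stdio.h>
          #include <strings.h>

          /* Hypothetical sketch of the "meaningful offset" idea, not the Redis
           * code. The master remembers the offset *before* any trailing PINGs;
           * when demoted to a replica it rewinds to that offset, so the PINGs
           * it sent over a dead link no longer break a later PSYNC. */
          typedef struct {
              long long repl_offset;        /* everything appended to the backlog */
              long long meaningful_offset;  /* offset before trailing master PINGs */
          } ReplState;

          /* Account for one command written to the replication stream. */
          static void feed_repl_stream(ReplState *st, const char *cmd, long long len) {
              st->repl_offset += len;
              if (strcasecmp(cmd, "PING") != 0)
                  st->meaningful_offset = st->repl_offset;  /* PINGs don't advance it */
          }

          /* Reconfigure the old master as a replica after the failover. */
          static void demote_to_replica(ReplState *st) {
              if (st->repl_offset > st->meaningful_offset)
                  st->repl_offset = st->meaningful_offset;  /* drop trailing PINGs */
          }

          int main(void) {
              ReplState st = {0, 0};
              feed_repl_stream(&st, "SET", 31);
              feed_repl_stream(&st, "PING", 14);  /* sent while the link was dead */
              feed_repl_stream(&st, "PING", 14);
              demote_to_replica(&st);
              printf("offset after demotion: %lld\n", st.repl_offset);  /* 31 */
              return 0;
          }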
  2. 23 Mar, 2020 1 commit
  3. 04 Mar, 2020 4 commits
  4. 03 Mar, 2020 1 commit
  5. 25 Feb, 2020 1 commit
  6. 06 Feb, 2020 3 commits
  7. 31 Dec, 2019 1 commit
  8. 19 Dec, 2019 1 commit
  9. 19 Nov, 2019 1 commit
    • Use libsystemd's sd_notify for communicating redis status to systemd · 641c64ad
      Johannes Truschnigg authored
      Instead of replicating a subset of libsystemd's sd_notify(3) internally,
      use the dynamic library provided by systemd to communicate with the
      service manager.
      
      When systemd supervision was auto-detected or configured, communicate
      the actual server status (e.g. "Loading dataset", "Waiting for
      master<->replica sync") to systemd, instead of declaring readiness right
      after initializing the server process.
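
      For reference, the libsystemd calls this commit switches to look roughly
      like the sketch below (assuming the binary is built against libsystemd
      and linked with -lsystemd; the status strings are illustrative):

          #include <systemd/sd-daemon.h>  /* link with -lsystemd */

          /* Illustrative sketch of sd_notify(3) usage. sd_notify() is a no-op
           * when the process is not supervised by systemd ($NOTIFY_SOCKET unset). */
          static void report_status_to_systemd(void) {
              /* While the dataset is still being loaded: */
              sd_notify(0, "STATUS=Loading dataset");

              /* Later, only once the server can actually serve traffic: */
              sd_notify(0, "READY=1\nSTATUS=Ready to accept connections");
          }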
  10. 29 Oct, 2019 1 commit
    • Modules hooks: complete missing hooks for the initial set of hooks · 51c3ff8d
      Oran Agra authored
      * replication hooks: role change, master link status, replica online/offline
      * persistence hooks: saving, loading, loading progress
      * misc hooks: cron loop, shutdown, module loaded/unloaded
      * change the way hooks test work, and add tests for all of the above
      
      startLoading() now gets a flag indicating what is being loaded.
      stopLoading() now gets an indication of success or failure.
      Add startSaving() and stopSaving() with similar arguments and roles.
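
      A sketch of how a module might consume one of these hooks through the
      server-events API (the event and subevent constant names here are quoted
      from memory and should be treated as approximate):

          #include "redismodule.h"

          /* Sketch: a module subscribing to the loading server event. */
          static void loadingCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                                      uint64_t sub, void *data) {
              REDISMODULE_NOT_USED(e);
              REDISMODULE_NOT_USED(data);
              if (sub == REDISMODULE_SUBEVENT_LOADING_ENDED)
                  RedisModule_Log(ctx, "notice", "dataset finished loading");
          }

          int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
              REDISMODULE_NOT_USED(argv);
              REDISMODULE_NOT_USED(argc);
              if (RedisModule_Init(ctx, "hookdemo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
                  return REDISMODULE_ERR;
              RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Loading,
                                                 loadingCallback);
              return REDISMODULE_OK;
          }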
  11. 10 Oct, 2019 1 commit
    • Cluster: fix memory leak of cached master. · 747be463
      antirez authored
      This is what happened:
      
      1. The instance starts and is a slave in the cluster configuration, but
      server.masterhost is not actually set, so technically the instance
      is acting like a master.
      
      2. loadDataFromDisk() calls replicationCacheMasterUsingMyself() even if
      the instance is a master, in the case where it is logically a slave and
      cluster mode is enabled. So now we have a cached master even though the
      instance is practically configured as a master (from the point of view
      of the server.masterhost value and so forth).
      
      3. clusterCron() sees that the instance needs to replicate from its
      master, because logically it is a slave, so it calls
      replicationSetMaster(), which will in turn call
      replicationCacheMasterUsingMyself(): before this commit, this call would
      overwrite the old cached master, creating a memory leak.
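
      At its core this was the classic pattern of overwriting a heap pointer
      without freeing the previous object; a generic sketch of the guard the
      fix implies (hypothetical names, not the literal patch):

          #include <stdlib.h>

          /* Generic sketch of the leak and its guard; names are hypothetical
           * and cached_master stands in for server.cached_master. */
          typedef struct { int fd; /* ... */ } CachedMaster;

          static CachedMaster *cached_master = NULL;

          static void cache_master_using_myself(void) {
              /* Before the fix, a second call simply overwrote the pointer
               * below, leaking the object allocated by the first call. */
              if (cached_master != NULL) {
                  free(cached_master);      /* discard the stale cached master */
                  cached_master = NULL;
              }
              cached_master = calloc(1, sizeof(*cached_master));
          }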
  12. 07 Oct, 2019 4 commits
    • TLS: Configuration options. · 61733ded
      Yossi Gottlieb authored
      Add configuration options for TLS protocol versions, ciphers/cipher
      suites selection, etc.
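
      The new options presumably boil down to the usual OpenSSL context setup;
      a hedged sketch of the kind of calls involved (OpenSSL 1.1.1 API; the
      mapping to the actual redis.conf directives is an assumption):

          #include <openssl/ssl.h>

          /* Sketch of what "protocol versions / ciphers / cipher suites"
           * options typically translate to on an OpenSSL 1.1.1 context. */
          static int configure_tls_ctx(SSL_CTX *ctx) {
              /* Protocol versions: e.g. allow only TLSv1.2 and newer. */
              if (!SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION)) return 0;

              /* Ciphers for TLSv1.2 and below. */
              if (!SSL_CTX_set_cipher_list(ctx, "HIGH:!aNULL:!MD5")) return 0;

              /* Cipher suites for TLSv1.3. */
              if (!SSL_CTX_set_ciphersuites(ctx, "TLS_AES_256_GCM_SHA384")) return 0;

              return 1;
          }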
    • diskless replication rdb transfer uses pipe, and writes to sockets from the parent process. · 5a477946
      Oran Agra authored
      misc:
      - handle SSL_has_pending by iterating through these connections in beforeSleep, and passing a timeout of 0 to aeProcessEvents
      - fix issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed)
      - add key-load-delay config for testing
      - trim connShutdown which is no longer needed
      - rioFdsetWrite -> rioFdWrite - simplified since there's no longer a need to write to multiple FDs
      - don't detect that the rdb child exited (don't call wait3) until we detect the pipe is closed
      - clean up a bad optimization in rio.c, add another one
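
      The parent/child split described in the title follows the pattern
      sketched below (a generic illustration, not the Redis code): the forked
      rdb child writes the serialized payload into a pipe, and the parent
      reads from that pipe and fans the bytes out to the replica sockets.

          #include <stdio.h>
          #include <sys/types.h>
          #include <sys/wait.h>
          #include <unistd.h>

          /* Generic sketch: the child produces the RDB payload into a pipe,
           * the parent relays the bytes to the replicas (stdout stands in
           * for their sockets here). */
          int main(void) {
              int pipefd[2];
              if (pipe(pipefd) == -1) return 1;

              pid_t pid = fork();
              if (pid == 0) {                 /* child: the "rdb child" */
                  close(pipefd[0]);
                  const char payload[] = "RDB-PAYLOAD-BYTES";
                  write(pipefd[1], payload, sizeof(payload) - 1);
                  close(pipefd[1]);
                  _exit(0);
              }

              close(pipefd[1]);               /* parent keeps only the read end */
              char buf[4096];
              ssize_t n;
              while ((n = read(pipefd[0], buf, sizeof(buf))) > 0) {
                  /* The real server would write these bytes to every replica socket. */
                  fwrite(buf, 1, (size_t)n, stdout);
              }
              close(pipefd[0]);
              waitpid(pid, NULL, 0);          /* reap only after the pipe is closed */
              return 0;
          }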
    • TLS: Connections refactoring and TLS support. · b087dd1d
      Yossi Gottlieb authored
      * Introduce a connection abstraction layer for all socket operations and
      integrate it across the code base.
      * Provide an optional TLS connections implementation based on OpenSSL.
      * Pull a newer version of hiredis with TLS support.
      * Tests, redis-cli updates for TLS support.
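
      The abstraction is essentially a vtable over socket operations, so plain
      TCP and TLS can be swapped behind one interface; a minimal sketch of
      that shape (hypothetical names, not the actual connection.h):

          #include <stddef.h>

          /* Minimal sketch of a connection abstraction: one table of function
           * pointers per transport, so callers never touch the fd or the SSL
           * object directly. Names are hypothetical. */
          typedef struct Connection Connection;

          typedef struct ConnectionType {
              int  (*connect)(Connection *c, const char *host, int port);
              int  (*read)(Connection *c, void *buf, size_t len);
              int  (*write)(Connection *c, const void *buf, size_t len);
              void (*close)(Connection *c);
          } ConnectionType;

          struct Connection {
              const ConnectionType *type;  /* plain-TCP table or TLS table */
              int fd;
              void *priv;                  /* e.g. the SSL object for TLS */
          };

          /* Callers go through the table and never care which transport is in use. */
          static int connWrite(Connection *c, const void *buf, size_t len) {
              return c->type->write(c, buf, len);
          }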
    • fix typo: salves to slaves · bea0384f
      charsyam authored
  13. 27 Sep, 2019 1 commit
  14. 05 Aug, 2019 1 commit
  15. 30 Jul, 2019 2 commits
  16. 17 Jul, 2019 1 commit
    • Module API for Forking · 56258c6b
      Oran Agra authored
      * create a module API for forking child processes.
      * refactor duplicate code around creating and tracking forks by AOF and RDB.
      * child processes listen for SIGUSR1 and die via exitFromChild in order to
        eliminate a valgrind warning about an unhandled signal.
      * note that the BGSAVE error reply has changed.
      
      valgrind error is:
        Process terminating with default action of signal 10 (SIGUSR1)
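
      A sketch of the fork API from a module's point of view (signatures as
      documented for the module API; treat the exact names as approximate):

          #include "redismodule.h"

          /* Sketch of using the forking API from a module; illustrative only. */
          static void forkDone(int exitcode, int bysignal, void *user_data) {
              /* Runs in the parent when the child terminates. */
              REDISMODULE_NOT_USED(user_data);
              (void)exitcode; (void)bysignal;
          }

          static void doBackgroundWork(RedisModuleCtx *ctx) {
              int pid = RedisModule_Fork(forkDone, NULL);
              if (pid == 0) {
                  /* Child: do the heavy work here, then exit through the module
                   * API so the server accounts for the child (and SIGUSR1 is
                   * handled). */
                  RedisModule_ExitFromChild(0);
              } else if (pid > 0) {
                  RedisModule_Log(ctx, "notice", "started background child %d", pid);
              }
          }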
  17. 10 Jul, 2019 2 commits
  18. 08 Jul, 2019 2 commits
    • antirez · 81b18fa3
    • diskless replication on slave side (don't store rdb to file), plus some other related fixes · 2de544cf
      Oran Agra authored
      Until now the implementation of diskless replication was diskless only on the master side:
      the slave side was still storing the received rdb file to disk before loading it back in and parsing it.
      
      This commit adds two modes for loading the rdb directly from the socket:
      1) when-empty
      2) using "swapdb"
      A third mode, making the slave diskless by doing flushdb first, is risky and currently not included.
      
      other changes:
      --------------
      Distinguish between aof configuration and state, so that we re-enable aof only when the sync eventually
      succeeds (and not when exiting from readSyncBulkPayload after a failed attempt); also, a CONFIG GET or
      INFO issued during rdb loading would previously have lied.
      
      When loading the rdb from the network, don't kill the server on a short read (it can be a network error).
      
      Fix the rdb check when performed on an AOF with an rdb preamble.
      
      tests:
      run replication tests for diskless slave too
      make replication test a bit more aggressive
      Add test for diskless load swapdb
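
      A toy sketch of the "swapdb" load mode described above (not the Redis
      code, and the names are hypothetical): parse the incoming rdb into a
      side dataset and only swap it in once the whole transfer succeeded, so
      a failed or short read leaves the old data intact.

          #include <stdbool.h>

          /* Toy sketch of "swapdb" diskless loading; names are hypothetical. */
          typedef struct { int nkeys; /* ... */ } Dataset;

          /* Assume this parses an rdb stream from the socket and may fail midway. */
          extern bool load_rdb_from_socket(int fd, Dataset *into);

          static bool diskless_load_swapdb(int fd, Dataset *current) {
              Dataset incoming = {0};
              if (!load_rdb_from_socket(fd, &incoming)) {
                  /* Short read or network error: the existing dataset is
                   * untouched, so the server keeps its old data instead of
                   * being killed. */
                  return false;
              }
              Dataset backup = *current;  /* keep the old data until we are sure */
              *current = incoming;        /* swap the freshly loaded dataset in */
              (void)backup;               /* a real server would free it here */
              return true;
          }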
  19. 15 May, 2019 1 commit
    • Narrow the effects of PR #6029 to the exact state. · 074d24df
      antirez authored
      CLIENT PAUSE may be used, in other contexts, for a long time, making all
      the slaves time out. Better for now to be more specific about what
      should disable sending PINGs.
      
      An alternative to that would be to virtually refresh the slave
      interactions when clients are paused, however for now I went for this
      more conservative solution.
  20. 17 Apr, 2019 1 commit
  21. 21 Mar, 2019 2 commits
  22. 20 Mar, 2019 2 commits
  23. 18 Mar, 2019 1 commit
  24. 10 Mar, 2019 1 commit
  25. 09 Mar, 2019 1 commit
  26. 12 Feb, 2019 1 commit
    • ACL: add masteruser configuration for replication · ea9d3aef
      zhaozhao.zz authored
      In most production environments, normal users' behavior should be
      limited.
      
      Now, with the redis ACL mechanism, we can do it like this:
      
          user default on +@all ~* -@dangerous nopass
          user admin on +@all ~* >someSeriousPassword
      
      Then the default normal user cannot execute dangerous commands like
      FLUSHALL/KEYS.
      
      But some admin commands are in the dangerous category too, like PSYNC,
      and the configuration above will forbid the replica from syncing with the master.
      
      Finally I think we could add a new configuration option for replication,
      the masteruser option, like this:
      
          masteruser admin
          masterauth someSeriousPassword
      
      Then the replica will try AUTH admin someSeriousPassword and get the
      privilege to execute PSYNC. If masteruser is NULL, the replica will AUTH
      with only masterauth, as before.
  27. 25 Jan, 2019 1 commit