1. 31 May, 2022 1 commit
      Adds isolated netstats for replication. (#10062) · bb1de082
      DarrenJiang13 authored
      
      
      The counters `server.stat_net_output_bytes`/`server.stat_net_input_bytes`
      are actually the sum of the replication traffic and the users' data traffic.
      This can cause confusion such as:
      "Why does my server show such a large output_bytes while I am doing nothing?"
      
      After discussions and revisions, now here is the change about what this
      PR brings (final version before merge):
      - 2 server variables to count the network bytes during replication,
           covering both full sync and command propagation:
           - `server.stat_net_repl_output_bytes` / `server.stat_net_repl_input_bytes`
      - 3 info fields to report the input and output replication bytes and the
           instantaneous value of total replication traffic:
           - `total_net_repl_input_bytes` / `total_net_repl_output_bytes`
           - `instantaneous_repl_total_kbps`
      - 1 new API, `rioCheckType()`, to check the type of a rio, so it can be used
           to distinguish between diskless and disk-based replication (a sketch of
           the idea follows below)
      - 2 new counting items to keep the network statistics consistent between
           master and replica:
          - the RDB portion during a diskless replica load, in `rdbLoadProgressCallback()`
          - the first line of the full sync payload, in `readSyncBulkPayload()`
      Co-authored-by: Oran Agra <oran@redislabs.com>
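      
      A minimal sketch of the idea behind the split counters, in plain C; the struct,
      function, and flag names below are illustrative stand-ins, not the actual Redis
      internals, and the rio "type" check only mirrors in spirit what `rioCheckType()`
      is used for:
      
      ```c
      #include <stddef.h>
      #include <stdint.h>
      
      /* Illustrative sketch only: keep separate counters for user traffic and
       * replication traffic so INFO can report them independently. */
      typedef struct {
          uint64_t stat_net_output_bytes;       /* bytes sent to regular clients     */
          uint64_t stat_net_repl_output_bytes;  /* bytes sent as replication traffic */
      } netstats;
      
      /* Account one outgoing write. The caller decides whether the write belongs to
       * replication (full sync payload or propagated commands) or to a normal client
       * reply, e.g. by checking the type of the rio target, which is the kind of
       * decision rioCheckType() enables for diskless vs. disk-based RDB output. */
      static void account_output_bytes(netstats *s, int is_replication, size_t nwritten) {
          if (is_replication)
              s->stat_net_repl_output_bytes += nwritten;
          else
              s->stat_net_output_bytes += nwritten;
      }
      ```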
  2. 07 Oct, 2019 2 commits
      diskless replication rdb transfer uses a pipe, and writes to sockets from the parent process. · 5a477946
      Oran Agra authored
      misc:
      - handle SSL_has_pending by iterating through these connections in beforeSleep, and setting a timeout of 0 for aeProcessEvents
      - fix an issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed)
      - add a key-load-delay config for testing
      - trim connShutdown, which is no longer needed
      - rioFdsetWrite -> rioFdWrite - simplified since there's no longer a need to write to multiple FDs
      - don't detect that the rdb child exited (don't call wait3) until we detect that the pipe is closed
      - Clean up a bad optimization in rio.c, add another one
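      
      A rough, self-contained sketch of the pipe pattern this commit describes, assuming a
      single producer child and a parent that relays the bytes to the replica sockets; this
      is not the Redis code, just the POSIX shape of the idea:
      
      ```c
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/wait.h>
      
      /* Sketch only: the child stands in for the forked rdb-producing process,
       * the parent stands in for the server relaying the payload to replicas. */
      int main(void) {
          int pipefd[2];
          if (pipe(pipefd) == -1) { perror("pipe"); return 1; }
      
          pid_t pid = fork();
          if (pid == -1) { perror("fork"); return 1; }
      
          if (pid == 0) {                                /* child: serialize into the pipe */
              close(pipefd[0]);
              const char payload[] = "RDB-PAYLOAD";      /* stands in for the rdb stream   */
              write(pipefd[1], payload, sizeof(payload) - 1);
              close(pipefd[1]);
              _exit(0);
          }
      
          close(pipefd[1]);                              /* parent: read pipe, relay bytes */
          char buf[4096];
          ssize_t n;
          while ((n = read(pipefd[0], buf, sizeof(buf))) > 0)
              printf("would relay %zd bytes to the replica sockets\n", n);
          close(pipefd[0]);
          waitpid(pid, NULL, 0);                         /* reap only after the pipe closed */
          return 0;
      }
      ```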
      TLS: Connections refactoring and TLS support. · b087dd1d
      Yossi Gottlieb authored
      * Introduce a connection abstraction layer for all socket operations and
      integrate it across the code base.
      * Provide an optional TLS connection implementation based on OpenSSL.
      * Pull in a newer version of hiredis with TLS support.
      * Update tests and redis-cli for TLS support.
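      
      A hedged sketch of what a connection abstraction layer of this kind tends to look
      like in C; the type and function names here are made up for illustration and do not
      claim to match the actual connection.h API:
      
      ```c
      #include <stddef.h>
      #include <sys/types.h>
      
      /* Illustrative vtable-style connection abstraction; names are made up and do
       * not claim to match the real connection.h API. */
      typedef struct connection connection;
      
      typedef struct connection_type {
          ssize_t (*conn_read)(connection *c, void *buf, size_t len);
          ssize_t (*conn_write)(connection *c, const void *buf, size_t len);
          void    (*conn_close)(connection *c);
      } connection_type;
      
      struct connection {
          const connection_type *type;  /* plain-socket or TLS implementation  */
          int fd;                       /* underlying file descriptor          */
          void *priv;                   /* e.g. an SSL* for an OpenSSL backend */
      };
      
      /* Callers never touch the fd or SSL object directly; every read/write goes
       * through the vtable, so TLS can be enabled without changing call sites. */
      static ssize_t connRead(connection *c, void *buf, size_t len) {
          return c->type->conn_read(c, buf, len);
      }
      static ssize_t connWrite(connection *c, const void *buf, size_t len) {
          return c->type->conn_write(c, buf, len);
      }
      ```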
  3. 04 Sep, 2019 1 commit
  4. 17 Jul, 2019 2 commits
  5. 08 Jul, 2019 1 commit
      diskless replication on slave side (don't store rdb to file), plus some other related fixes · 2de544cf
      Oran Agra authored
      So far, the implementation of diskless replication was diskless only on the master side.
      The slave side still stored the received rdb file on disk before loading it back and parsing it.
      
      This commit adds two modes for loading the rdb directly from the socket:
      1) when-empty
      2) using "swapdb"
      The third mode, using a diskless slave via flushdb, is risky and currently not included.
      
      other changes:
      --------------
      distinguish between the aof configuration and its state, so that we can re-enable aof only when sync
      eventually succeeds (and not when exiting from readSyncBulkPayload after a failed attempt);
      otherwise CONFIG GET and INFO during rdb loading would have lied
      
      When loading the rdb from the network, don't kill the server on a short read (it can be a network error)
      
      Fix rdb check when performed on preamble AOF
      
      tests:
      run replication tests for the diskless slave too
      make the replication test a bit more aggressive
      add a test for diskless load with swapdb
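      
      A small, self-contained sketch of the two load modes described above; every helper
      here is a stand-in (not the actual readSyncBulkPayload() logic), shown only to
      illustrate how "when-empty" and "swapdb" differ in failure handling:
      
      ```c
      #include <stdio.h>
      
      /* Self-contained sketch; every helper below is a stand-in, not the actual
       * readSyncBulkPayload() logic. */
      static int db_empty = 1;
      
      static int  db_is_empty(void)            { return db_empty; }
      static void swap_in_temp_databases(void) { puts("swapdb: old data kept aside"); }
      static void restore_old_databases(void)  { puts("load failed: old data restored"); }
      static void drop_old_databases(void)     { puts("load ok: old backup dropped"); }
      static int  load_rdb_from_socket(void)   { return 0; /* pretend the load worked */ }
      
      /* "when-empty" refuses to overwrite an existing dataset; "swapdb" keeps the
       * old data aside and restores it if the socket load fails (for example on a
       * short read caused by a network error, which must not kill the server). */
      static int diskless_load(int use_swapdb) {
          if (!use_swapdb && !db_is_empty()) return -1;
          if (use_swapdb) swap_in_temp_databases();
          if (load_rdb_from_socket() != 0) {
              if (use_swapdb) restore_old_databases();
              return -1;
          }
          if (use_swapdb) drop_old_databases();
          return 0;
      }
      
      int main(void) { return diskless_load(1) == 0 ? 0 : 1; }
      ```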
  6. 29 Dec, 2017 1 commit
      fix processing of large bulks (above 2GB) · 60a4f12f
      Oran Agra authored
      - protocol parsing (processMultibulkBuffer) was limited to 32-bit positions in the buffer;
        potential overflow in readQueryFromClient
      - rioWriteBulkCount used int, although rioWriteBulkString passed it a size_t
      - several places in sds.c used int for string length or index
      - bugfix in RM_SaveAuxField (the return value was 1 or -1 rather than the length)
      - RM_SaveStringBuffer was limited to a 32-bit length
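      
      To make the class of bug concrete, here is a tiny illustration (not the Redis code,
      and assuming a 64-bit size_t with a 32-bit int) of how a length above 2GB does not
      survive an int parameter:
      
      ```c
      #include <stdio.h>
      #include <stddef.h>
      
      /* Illustration only: a >2GB length passed through an int parameter is
       * truncated/overflowed, which is the kind of limit this commit removes. */
      static void write_bulk_count(int count)          { printf("as int:    %d\n", count); }
      static void write_bulk_count_fixed(size_t count) { printf("as size_t: %zu\n", count); }
      
      int main(void) {
          size_t len = 3ULL * 1024 * 1024 * 1024;  /* a 3GB bulk length */
          write_bulk_count((int)len);              /* prints a wrong, typically negative, value */
          write_bulk_count_fixed(len);             /* prints 3221225472 */
          return 0;
      }
      ```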
  7. 03 Jun, 2016 1 commit
  8. 25 Apr, 2016 1 commit
  9. 17 Oct, 2014 1 commit
      Diskless replication: rio fdset target now supports buffering. · 10aafdad
      antirez authored
      Performing a socket write() for each RDB rio API write call was
      extremely inefficient, so now rio has minimal buffering capabilities.
      Writes are accumulated into a buffer, and only when a given limit is
      reached are they actually written to the N slaves' FDs.
      
      Trivia: rio lacked support for buffering since our targets were:
      
      1) Memory buffers.
      2) C standard I/O.
      
      Both were buffered already.
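      
      A minimal sketch of the buffering described above, with made-up names and a fixed
      limit; the real rio.c code differs, this only shows the accumulate-then-flush shape:
      
      ```c
      #include <string.h>
      #include <unistd.h>
      
      /* Illustrative only: writes accumulate in a buffer, and only once the buffer
       * reaches the limit is a single write() issued to each target FD. */
      #define FLUSH_LIMIT 4096
      
      struct fdset_writer {
          int    fds[16];          /* the slave sockets */
          int    numfds;
          char   buf[FLUSH_LIMIT];
          size_t buflen;
      };
      
      static int fdset_flush(struct fdset_writer *w) {
          for (int j = 0; j < w->numfds; j++) {
              if (write(w->fds[j], w->buf, w->buflen) != (ssize_t)w->buflen)
                  return -1;       /* the real code also tracks per-FD errors */
          }
          w->buflen = 0;
          return 0;
      }
      
      static int fdset_write(struct fdset_writer *w, const void *p, size_t len) {
          while (len) {
              size_t room  = FLUSH_LIMIT - w->buflen;
              size_t chunk = len < room ? len : room;
              memcpy(w->buf + w->buflen, p, chunk);
              w->buflen += chunk;
              p = (const char *)p + chunk;
              len -= chunk;
              if (w->buflen == FLUSH_LIMIT && fdset_flush(w) == -1) return -1;
          }
          return 0;
      }
      ```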
  10. 14 Oct, 2014 1 commit
      rio.c fdset target: tolerate (and report) a subset of FDs in error. · 2a436aae
      antirez authored
      The fdset target is used when we want to write an RDB file directly to the
      slaves' sockets. In this setup, as long as there is a single slave that
      is still receiving our payload, we want to continue sending instead of
      aborting. However, rio calls should abort if no FD is ok.
      
      We also want the errors reported so that we can signal the parent which slaves
      are ok and which are broken, so there is a new set of integers with the state of
      each fd. Zero is ok; non-zero is the errno of the failure, if available,
      or a generic EIO.
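      
      A hedged sketch of the per-FD state idea, again with illustrative names rather than
      the actual rio.c fdset code:
      
      ```c
      #include <errno.h>
      #include <unistd.h>
      
      /* Sketch only: one state integer per FD, 0 meaning ok, non-zero holding the
       * errno of the failure (or EIO if none is available); the whole write is
       * aborted only once every FD has failed. */
      static int fdset_write_all(const int *fds, int *state, int numfds,
                                 const void *buf, size_t len) {
          int broken = 0;
          for (int j = 0; j < numfds; j++) {
              if (state[j] != 0) { broken++; continue; }   /* already failed, skip it */
              ssize_t n = write(fds[j], buf, len);
              if (n != (ssize_t)len) {
                  state[j] = errno ? errno : EIO;          /* remember why it failed  */
                  broken++;
              }
          }
          return broken == numfds ? -1 : 0;                /* abort only if no FD is ok */
      }
      ```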
  11. 10 Oct, 2014 1 commit
  12. 16 Jul, 2013 2 commits
  13. 24 Apr, 2013 1 commit
  14. 03 Apr, 2013 1 commit
  15. 08 Nov, 2012 1 commit
  16. 11 Apr, 2012 1 commit
  17. 09 Apr, 2012 2 commits
  18. 22 Sep, 2011 2 commits
  19. 13 May, 2011 1 commit