1. 20 Mar, 2024 1 commit
  2. 12 Feb, 2023 1 commit
    • Reclaim page cache of RDB file (#11248) · 7dae142a
      Tian authored
      # Background
      The RDB file is usually generated and used once, and seldom used again, but its content stays in the page cache until the OS evicts it. A potential problem is that once free memory is exhausted, the OS has to reclaim memory from the page cache or swap anonymous pages out, which may cause jitter in the Redis service.
      
      Consider a concrete scenario: a high-capacity machine hosts many Redis instances, and we're upgrading them together. The page cache on the host machine grows as RDBs are generated. Once free memory drops below the low watermark (which is more likely to happen on older Linux kernels like 3.10; before [watermark_scale_factor](https://lore.kernel.org/lkml/1455813719-2395-1-git-send-email-hannes@cmpxchg.org/) was introduced, the low watermark was linear in the min watermark, leaving little headroom for `kswapd` to be woken up to reclaim memory), a direct reclaim happens, which means the process stalls waiting for memory allocation.
      
      # What the PR does
      The PR introduces the capability to reclaim the page cache when the RDB is operated on. Generally there are two cases: reading and writing the RDB. For reads it's a little messy to do incremental reclaim, so the reclaim is done in one go in the background after the load finishes, to avoid blocking the worker thread. For writes, incremental reclaim amortizes the work, so there is no need to push it into the background, and the peak cache watermark is reduced this way.
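
      A minimal sketch of the write-side idea, assuming a plain file descriptor and purely illustrative names (not the helpers added by this PR): every time another chunk of the RDB has been written, flush it and then tell the kernel it may drop those pages.
      ```c
      #define _POSIX_C_SOURCE 200112L
      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/types.h>

      #define RECLAIM_CHUNK (32 * 1024 * 1024)  /* illustrative: reclaim every 32MB written */

      /* Hypothetical helper, called after each chunk of the RDB is written to `fd`.
       * `*reclaimed` remembers how far we have already released the page cache. */
      static void reclaim_written_cache(int fd, off_t written, off_t *reclaimed) {
          if (written - *reclaimed < RECLAIM_CHUNK) return;
      #if defined(__linux__)
          /* POSIX_FADV_DONTNEED only drops clean pages, so flush the dirty ones
           * first (see the notes further down). */
          if (fdatasync(fd) == 0)
              posix_fadvise(fd, *reclaimed, written - *reclaimed, POSIX_FADV_DONTNEED);
      #endif
          *reclaimed = written;
      }
      ```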
      
      Two cases are addressed specially, replication and restart, for both of which the cache is leveraged to speed up the processing, so the reclaim is postponed to the right time. To do this, a flag is added to `rdbSave` and `rdbLoad` to control whether the cache needs to be kept, with the default value false.
      
      # Things worth noting
      1. Though `posix_fadvise` is a POSIX standard, only a few platforms support it, e.g. Linux and FreeBSD 10.0.
      2. On Linux `posix_fadvise` only takes effect on pages that have been written back, so a `sync` (or `fsync`, `fdatasync`) is needed to flush the dirty pages before calling `posix_fadvise` when we reclaim the write cache.
      
      # About the tests
      A unit test is added to verify the effect of `posix_fadvise`.
      In the integration tests the overall cache increase is checked, as well as the cache backed by the RDB; a specific TCL test is executed in an isolated GitHub Actions job.
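
      For reference, one way to observe whether pages of a file are still resident in the page cache (a sketch on Linux, not the actual test added here) is to mmap the file and ask the kernel with `mincore()`:
      ```c
      #include <stdlib.h>
      #include <unistd.h>
      #include <sys/mman.h>

      /* Count how many pages of a mapped region are currently in the page cache.
       * `map` must come from mmap() of the file being checked. */
      static size_t resident_pages(void *map, size_t len) {
          size_t page = (size_t)sysconf(_SC_PAGESIZE);
          size_t npages = (len + page - 1) / page;
          unsigned char *vec = malloc(npages);
          size_t resident = 0;
          if (vec && mincore(map, len, vec) == 0)
              for (size_t i = 0; i < npages; i++) resident += vec[i] & 1;
          free(vec);
          return resident;
      }
      ```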
  3. 31 May, 2022 1 commit
    • Adds isolated netstats for replication. (#10062) · bb1de082
      DarrenJiang13 authored
      
      
      The amount of `server.stat_net_output_bytes/server.stat_net_input_bytes`
      is actually the sum of replication flow and users' data flow. 
      It may cause confusion like this:
      "Why does my server get such a large output_bytes while I am doing nothing?"
      
      After discussions and revisions, here is what this PR brings
      (final version before merge); a rough sketch of the accounting split follows the list:
      - 2 server variables to count the network bytes during replication,
           including fullsync and propagate bytes.
           - `server.stat_net_repl_output_bytes`/`server.stat_net_repl_input_bytes`
      - 3 info fields to print the input and output of repl bytes and instantaneous
           value of total repl bytes.
           - `total_net_repl_input_bytes` / `total_net_repl_output_bytes`
           - `instantaneous_repl_total_kbps`
      - 1 new API `rioCheckType()` to check the type of rio, so we can use it
           to distinguish between diskless and disk-based replication
      - 2 new counting items to keep network statistics consistent between master
           and slave:
          - the RDB portion during diskless replication, in `rdbLoadProgressCallback()`
          - the first line of the full sync payload, in `readSyncBulkPayload()`
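
      A rough, self-contained sketch of the accounting split, with illustrative struct and function names (the real counters live in the Redis networking code):
      ```c
      /* Attribute written bytes to the replication counter when the destination
       * is a replica link, otherwise to the plain user-traffic counter. */
      typedef struct {
          long long stat_net_output_bytes;       /* user traffic */
          long long stat_net_repl_output_bytes;  /* replication traffic (added by this PR) */
      } NetStatsSketch;

      static void account_output(NetStatsSketch *stats, int is_replica_link, long long nwritten) {
          if (nwritten <= 0) return;
          if (is_replica_link)
              stats->stat_net_repl_output_bytes += nwritten;
          else
              stats->stat_net_output_bytes += nwritten;
      }
      ```
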
      Co-authored-by: Oran Agra <oran@redislabs.com>
  4. 07 Oct, 2019 2 commits
    • diskless replication rdb transfer uses a pipe, and writes to sockets from the parent process. · 5a477946
      Oran Agra authored
      misc:
      - handle SSL_has_pending by iterating through these in beforeSleep, and passing a timeout of 0 to aeProcessEvents
      - fix issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed)
      - add key-load-delay config for testing
      - trim connShutdown which is no longer needed
      - rioFdsetWrite -> rioFdWrite - simplified since there's no longer a need to write to multiple FDs
      - don't detect rdb child exited (don't call wait3) until we detect the pipe is closed
      - Clean up a bad optimization in rio.c, add another one
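
      A minimal sketch of the fan-out described in the title, with hypothetical names (the real code is driven by the ae event loop and keeps per-replica state): the fork child writes the RDB into a pipe, and a read handler in the parent relays each chunk to every replica socket.
      ```c
      #include <errno.h>
      #include <unistd.h>

      /* Hypothetical relay, called when the rdb pipe becomes readable in the parent.
       * Returns 0 on progress, -1 once the pipe is closed (the child finished). */
      static int relay_rdb_chunk(int pipe_fd, const int *replica_fds, int nreplicas) {
          char buf[16 * 1024];
          ssize_t n = read(pipe_fd, buf, sizeof(buf));
          if (n == 0) return -1;                        /* EOF: child closed its end */
          if (n < 0) return (errno == EAGAIN) ? 0 : -1;
          for (int i = 0; i < nreplicas; i++) {
              /* The real code handles short writes and marks failed replicas
               * instead of aborting the whole transfer. */
              if (write(replica_fds[i], buf, (size_t)n) < 0)
                  continue;  /* skip a broken replica, keep serving the rest */
          }
          return 0;
      }
      ```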
    • TLS: Connections refactoring and TLS support. · b087dd1d
      Yossi Gottlieb authored
      * Introduce a connection abstraction layer for all socket operations and
      integrate it across the code base (a minimal sketch of the idea follows this list).
      * Provide an optional TLS connections implementation based on OpenSSL.
      * Pull a newer version of hiredis with TLS support.
      * Tests, redis-cli updates for TLS support.
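
      A minimal sketch of what such an abstraction can look like, with illustrative names rather than the exact API introduced here: a small vtable of operations so that plain TCP sockets and TLS connections share one calling convention.
      ```c
      #include <sys/types.h>
      #include <stddef.h>

      typedef struct connection connection;

      /* Each backend (plain TCP, TLS) supplies its own implementation of these. */
      typedef struct ConnectionTypeSketch {
          ssize_t (*write)(connection *conn, const void *data, size_t len);
          ssize_t (*read)(connection *conn, void *buf, size_t len);
          void    (*close)(connection *conn);
      } ConnectionTypeSketch;

      struct connection {
          const ConnectionTypeSketch *type;  /* dispatch table: TCP or TLS */
          int fd;                            /* underlying socket */
          void *priv;                        /* e.g. an SSL* for the TLS backend */
      };

      /* Callers never touch the fd or the SSL object directly. */
      static inline ssize_t connWriteSketch(connection *c, const void *d, size_t n) {
          return c->type->write(c, d, n);
      }
      ```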
  5. 04 Sep, 2019 1 commit
  6. 17 Jul, 2019 2 commits
  7. 08 Jul, 2019 1 commit
    • diskless replication on slave side (don't store rdb to file), plus some other related fixes · 2de544cf
      Oran Agra authored
      Until now, the implementation of diskless replication was diskless only on the master side.
      The slave side still stored the received rdb file to disk before loading it back in and parsing it.
      
      This commit adds two modes to load an rdb directly from the socket (a rough sketch follows below):
      1) when-empty
      2) using "swapdb"
      The third mode, a diskless slave using flushdb, is risky and currently not included.
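
      A sketch of the control flow only, where every helper below is a hypothetical placeholder rather than the real Redis function: parse the RDB straight off the replication socket, and in "swapdb" mode keep the old data aside until the load succeeds.
      ```c
      /* Placeholders for this sketch, not the actual Redis API. */
      typedef struct { int fd; } RdbStream;        /* stand-in for a rio socket target */
      enum diskless_mode { LOAD_WHEN_EMPTY, LOAD_SWAPDB };

      extern int   parse_rdb_stream(RdbStream *s); /* hypothetical: parse RDB off the socket */
      extern void  empty_dataset(void);            /* hypothetical: drop the current keys */
      extern void *backup_dataset(void);           /* hypothetical: stash current keys aside */
      extern void  restore_dataset(void *backup);  /* hypothetical */
      extern void  discard_backup(void *backup);   /* hypothetical */

      static int diskless_load(int repl_fd, enum diskless_mode mode) {
          RdbStream s = { repl_fd };
          void *backup = NULL;

          if (mode == LOAD_SWAPDB) backup = backup_dataset(); /* keep old data until success */
          else empty_dataset();                               /* "when-empty" mode */

          if (parse_rdb_stream(&s) != 0) {  /* a short read here may just be a network error */
              if (backup) restore_dataset(backup);
              return -1;
          }
          if (backup) discard_backup(backup);
          return 0;
      }
      ```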
      
      other changes:
      --------------
      distinguish between aof configuration and state, so that we can re-enable aof only when the sync eventually
      succeeds (and not when exiting from readSyncBulkPayload after a failed attempt);
      also, CONFIG GET and INFO during rdb loading would have lied
      
      When loading the rdb from the network, don't kill the server on a short read (that can be a network error)
      
      Fix rdb check when performed on preamble AOF
      
      tests:
      run replication tests for diskless slave too
      make replication test a bit more aggressive
      Add test for diskless load swapdb
  8. 29 Dec, 2017 1 commit
    • fix processing of large bulks (above 2GB) · 60a4f12f
      Oran Agra authored
      - protocol parsing (processMultibulkBuffer) was limited to 32-bit positions in the buffer,
        with a potential overflow in readQueryFromClient (see the sketch below)
      - rioWriteBulkCount used int, although rioWriteBulkString gave it a size_t
      - several places in sds.c used int for string length or index
      - bugfix in RM_SaveAuxField (the return value was 1 or -1 and not the length)
      - RM_SaveStringBuffer was limited to a 32-bit length
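
      An illustrative reproduction of the class of bug being fixed (not the Redis code itself): storing a >2GB length in an `int` wraps it, while `size_t`/`long long` keeps it intact.
      ```c
      #include <stdio.h>
      #include <stddef.h>

      int main(void) {
          size_t bulklen = 3UL * 1024 * 1024 * 1024;  /* a 3GB bulk length */

          int pos32 = (int)bulklen;   /* the old pattern: 32-bit position/count */
          size_t pos64 = bulklen;     /* the fix: use size_t (or long long) */

          /* On common 64-bit platforms the int copy wraps and turns negative. */
          printf("as int: %d, as size_t: %zu\n", pos32, pos64);
          return 0;
      }
      ```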
  9. 03 Jun, 2016 1 commit
  10. 25 Apr, 2016 1 commit
  11. 17 Oct, 2014 1 commit
    • Diskless replication: rio fdset target now supports buffering. · 10aafdad
      antirez authored
      Performing a socket write() for each RDB rio API write call was
      extremely inefficient, so now rio has minimal buffering capabilities.
      Writes are accumulated into a buffer and are only actually written to
      the N slave FDs when a given limit is reached.
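
      A minimal sketch of the buffering idea, with illustrative names and a made-up threshold (the real target also has to handle short writes and per-FD errors):
      ```c
      #include <string.h>
      #include <unistd.h>

      #define BUF_LIMIT (256 * 1024)   /* illustrative flush threshold */

      /* Accumulate small writes and only hit the N replica sockets once the
       * buffer passes the limit. */
      typedef struct {
          char buf[BUF_LIMIT];
          size_t used;
          int *fds;      /* the N slave sockets */
          int nfds;
      } BufferedFdset;

      static int fdset_flush(BufferedFdset *w) {
          for (int i = 0; i < w->nfds; i++) {
              /* real code loops on short writes and records per-fd errors */
              if (write(w->fds[i], w->buf, w->used) < 0) return -1;
          }
          w->used = 0;
          return 0;
      }

      static int fdset_write(BufferedFdset *w, const void *p, size_t len) {
          if (w->used + len > sizeof(w->buf) && fdset_flush(w) != 0) return -1;
          if (len > sizeof(w->buf)) {            /* oversized write: send it directly */
              for (int i = 0; i < w->nfds; i++)
                  if (write(w->fds[i], p, len) < 0) return -1;
              return 0;
          }
          memcpy(w->buf + w->used, p, len);
          w->used += len;
          return 0;
      }
      ```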
      
      Trivia: rio lacked support for buffering since our targets were:
      
      1) Memory buffers.
      2) C standard I/O.
      
      Both were buffered already.
  12. 14 Oct, 2014 1 commit
    • rio.c fdset target: tolerate (and report) a subset of FDs in error. · 2a436aae
      antirez authored
      The fdset target is used when we want to write an RDB file directly to the
      slaves' sockets. In this setup, as long as there is a single slave that
      is still receiving our payload, we want to continue sending instead of
      aborting. However, rio calls should abort if no FD is ok.
      
      We also want the errors reported so that we can signal the parent which slave is
      ok and which is broken, so there is a new set of integers with the state of
      each fd. Zero is ok, non-zero is the errno of the failure, if available,
      or a generic EIO.
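
      A minimal sketch of that error-tracking idea, with illustrative names and short writes ignored: write to every fd that is still healthy, record per-fd state, and only fail the rio call when no fd is left ok.
      ```c
      #include <errno.h>
      #include <unistd.h>

      /* state[i] == 0 means fd i is ok; otherwise it holds the errno of its
       * first failure (or EIO when errno gave nothing useful). */
      static int fdset_write_sketch(const int *fds, int *state, int nfds,
                                    const void *p, size_t len) {
          int still_ok = 0;
          for (int i = 0; i < nfds; i++) {
              if (state[i] != 0) continue;                 /* already broken: skip it */
              if (write(fds[i], p, len) < 0)
                  state[i] = errno ? errno : EIO;          /* remember why it failed */
              else
                  still_ok++;
          }
          return still_ok > 0 ? 0 : -1;                    /* abort only if nobody is ok */
      }
      ```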
  13. 10 Oct, 2014 1 commit
  14. 16 Jul, 2013 2 commits
  15. 24 Apr, 2013 1 commit
  16. 03 Apr, 2013 1 commit
  17. 08 Nov, 2012 1 commit
  18. 11 Apr, 2012 1 commit
  19. 09 Apr, 2012 2 commits
  20. 22 Sep, 2011 2 commits
  21. 13 May, 2011 1 commit