1. 22 Dec, 2021 1 commit
    • Allow most CONFIG SET during loading, block some commands in async-loading (#9878) · 41e6e05d
      Oran Agra authored
      ## background
      Until now, CONFIG SET was blocked during loading.
      (In the not so distant past, CONFIG GET was disallowed too.)

      We recently (not released yet) added an async-loading mode, see #9323,
      and during that time it'll serve CONFIG SET and any other command.
      And now we realized (#9770) that some configs and commands are dangerous
      during async-loading.
      
      ## changes
      * Allow most CONFIG SET during loading (both on async-loading and normal loading)
      * Allow CONFIG REWRITE and CONFIG RESETSTAT during loading
      * Block a few configs during loading (`appendonly`, `repl-diskless-load`, and `dir`)
      * Block a few commands during loading (list below; a sketch of the gating follows the list)
      
      ## the blocked commands:
      * SAVE - obviously we don't wanna start a foreground save during loading 8-)
      * BGSAVE - we don't mind scheduling one, but we don't wanna fork now
      * BGREWRITEAOF - we don't mind scheduling one, but we don't wanna fork now
      * MODULE - we obviously don't wanna unload a module during replication / rdb loading
        (MODULE HELP and MODULE LIST are not blocked)
      * SYNC / PSYNC - we're in the middle of RDB loading from master, must not allow sync
        requests now.
      * REPLICAOF / SLAVEOF - we're in the middle of replicating, maybe it makes sense to let
        the user abort it, but that wasn't possible so far, and i don't wanna take any risk of bugs due to odd state.
      * CLUSTER - only allow [HELP, SLOTS, NODES, INFO, MYID, LINKS, KEYSLOT, COUNTKEYSINSLOT,
        GETKEYSINSLOT, RESET, REPLICAS, COUNT_FAILURE_REPORTS], for others, preserve the status quo
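
      As a rough illustration of that gating, here is a minimal, self-contained sketch.
      The types and flag names below are made up for the example; the real logic lives
      in Redis' command dispatch code.
      ```c
      /* Hypothetical per-command flags modeling "blocked during loading" and
       * "blocked during async-loading". */
      enum { CMD_DENY_LOADING = 1 << 0, CMD_DENY_ASYNC_LOADING = 1 << 1 };

      typedef struct { const char *name; int flags; } command;
      typedef struct { int loading; int async_loading; } server_state;

      /* Returns 1 if the command must be rejected with a -LOADING error. */
      int reject_during_loading(const server_state *s, const command *c) {
          if (s->loading && (c->flags & CMD_DENY_LOADING)) return 1;
          if (s->async_loading && (c->flags & CMD_DENY_ASYNC_LOADING)) return 1;
          return 0;
      }
      ```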
      
      ## other fixes
      * processEventsWhileBlocked had an issue when being nested. this could happen with a busy script
        during async loading (new), but also with a busy script during AOF loading (old). this led to a crash in
        the scenario described in #6988
  2. 02 Dec, 2021 1 commit
    • Redis Functions - Added redis function unit and Lua engine · cbd46317
      meir@redislabs.com authored
      The Redis Functions unit is located inside functions.c
      and contains the Redis Functions implementation:
      1. FUNCTION commands:
        * FUNCTION CREATE
        * FCALL
        * FCALL_RO
        * FUNCTION DELETE
        * FUNCTION KILL
        * FUNCTION INFO
      2. Register engine
      
      In addition, this commit introduces the first engine
      that uses the Redis Functions capabilities, the
      Lua engine.
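
      To make the "register engine" step concrete, here is a toy model of a
      name-to-engine registry where each engine supplies compile/call callbacks.
      This is purely illustrative; it is not the actual functions.c interface.
      ```c
      #include <string.h>

      typedef struct engine {
          const char *name;                            /* e.g. "LUA" */
          void *(*compile)(const char *code);          /* compile a function body */
          int (*call)(void *compiled, int argc, const char **argv);
      } engine;

      #define MAX_ENGINES 8
      static engine *engines[MAX_ENGINES];
      static int num_engines = 0;

      /* Returns 0 on success, -1 if the name is taken or the table is full. */
      int register_engine(engine *e) {
          if (num_engines == MAX_ENGINES) return -1;
          for (int i = 0; i < num_engines; i++)
              if (strcmp(engines[i]->name, e->name) == 0) return -1;
          engines[num_engines++] = e;
          return 0;
      }
      ```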
  3. 24 Nov, 2021 1 commit
    • Wait for `async_loading` to stop in `short read` test (#9841) · fb4f7be2
      Binbin authored
      In #9323, when `repl-diskless-load` is enabled and set to `swapdb`,
      if the master replication ID hasn't changed, we can load the data-set
      asynchronously, and serve read commands during the full resync.

      In the `diskless loading short read` test, after a successful loading,
      we wait for the loading to stop and continue the for loop.

      After the introduction of `async_loading`, we also need to check it.
      Otherwise the next loop will start too soon and may trigger a timing issue.
  4. 09 Nov, 2021 1 commit
    • fix short timeout in replication short read tests (#9763) · 03406fcb
      YaacovHazan authored
      In both tests, "diskless loading short read" and "diskless loading short read with module",
      the timeout for waiting for the replica to respond to a short read and log it was too short.
      
      Also, add --dump-logs in runtest-moduleapi for valgrind runs.
  5. 04 Nov, 2021 2 commits
    • Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323) · 91d0c758
      Eduardo Semprebon authored
      
      
      For diskless replication in swapdb mode, considering we already spend replica memory
      keeping a backup of the current db to restore in case of failure, we can get the following benefits
      by instead swapping the database only if we succeeded in transferring the db from the master
      (a toy sketch of this flow follows the list):

      - Avoid the `LOADING` response during failed and successful synchronization for cases where the
        replica is already up and running with data.
      - Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
        time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
      - This could be implemented also for disk-based replication with similar benefits if consumers are
        willing to spend the extra memory usage.
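
      All names in the sketch below are illustrative, not the actual Redis functions:
      ```c
      #include <stdlib.h>

      typedef struct db { /* key space, expires, ... */ int unused; } db;

      static db *serving_db;            /* the db the replica keeps serving reads from */

      /* Illustrative stand-in for handing a db to a background free thread. */
      static void free_db_async(db *d) { free(d); }

      void finish_full_sync(db *temp_db, int transfer_ok) {
          if (!transfer_ok) {
              free_db_async(temp_db);   /* failed: old data stays, no LOADING */
              return;
          }
          db *old = serving_db;
          serving_db = temp_db;         /* success: swap the new data set in */
          free_db_async(old);           /* flush the old db asynchronously */
      }
      ```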
      
      General notes:
      - The concept of `backupDb` becomes `tempDb` for clarity.
      - Async loading mode will only kick in if the replica is syncing from a master that has the same
        repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
      - New property in INFO: `async_loading` to differentiate from the blocking loading
      - Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
        and the tempDb that is passed around.
      - Because this is affecting replicas only, we assume that if they are not read-only and accept write
        commands during replication, those writes are lost after SYNC same way as before, but we're still
        denying CONFIG SET here anyway to avoid complications.
      
      Considerations for review:
      - We have many cases where the server.loading flag is used, and even though I tried my best, there
        may be cases where async_loading should be checked as well and cases where it shouldn't (this
        would require a very good understanding of the whole code).
      - Several places that had different behavior depending on the loading flag were actually meant to just
        handle commands coming from the AOF client differently than ones coming from real clients; changed
        to check CLIENT_ID_AOF instead.
      
      **Additional for Release Notes**
      - Bugfix - server.dirty was not incremented for any kind of diskless replication; as a result it wouldn't
        contribute to triggering the next database SAVE
      - New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
      - Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
        Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
        ABORTED and COMPLETED.
      - New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
        to allow modules to declare they support diskless replication with async loading (when absent, we fall
        back to disk-based loading); see the module sketch below.
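
      For module authors, usage would look roughly like the sketch below. It is based on the
      names listed above; consult redismodule.h for the exact signatures, which may differ.
      ```c
      #include "redismodule.h"

      static void replAsyncLoadCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                                        uint64_t sub, void *data) {
          REDISMODULE_NOT_USED(ctx); REDISMODULE_NOT_USED(e); REDISMODULE_NOT_USED(data);
          switch (sub) {
          case REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_STARTED:   /* back up module state */ break;
          case REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_ABORTED:   /* restore the backup   */ break;
          case REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_COMPLETED: /* discard the backup   */ break;
          }
      }

      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "asyncload_ex", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          /* Declare support; without this, Redis falls back to disk-based loading. */
          RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);
          RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_ReplAsyncLoad,
                                             replAsyncLoadCallback);
          return REDISMODULE_OK;
      }
      ```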
      Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • Retry when a blocked connection system call is interrupted by a signal (#9629) · ccf8a651
      menwen authored
      
      
      When repl-diskless-load is enabled, the connection is set to the blocking state.
      The connection may be interrupted by a signal during a system call, which would
      have resulted in a disconnection and possibly a reconnection loop.
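
      The classic fix is an EINTR retry loop; a minimal, self-contained sketch of the idea:
      ```c
      #include <errno.h>
      #include <unistd.h>

      /* Retry a blocking read that was interrupted by a signal instead of
       * treating EINTR as a connection error. */
      ssize_t read_retry_on_eintr(int fd, void *buf, size_t len) {
          ssize_t n;
          do {
              n = read(fd, buf, len);
          } while (n == -1 && errno == EINTR);
          return n;
      }
      ```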
      Co-authored-by: Oran Agra <oran@redislabs.com>
  6. 02 Nov, 2021 1 commit
    • Fix timing issue in replication test (#9719) · 58a1d16f
      Binbin authored
      
      
      So it looks like sampling set loglines [count_log_lines -2] was
      executed too late, and the replication managed to complete before that.
      
      ```
      *** [err]: diskless no replicas drop during rdb pipe in tests/integration/replication.tcl
      log message of '"*Diskless rdb transfer, done reading from pipe, 2 replicas still up*"' not found in ./tests/tmp/server.6124.69/stdout after line: 52 till line: 52
      ```
      
      Changes:
      1. when we search the master log file, we start the search from before we sent the REPLICAOF
        command, to prevent a race in which the replication completed before we sampled the log line count.
      2. we don't need to sample the replica loglines since it's a fresh replica that's just been started, so the
        message we're looking for is the first occurrence in the log; we can start the search from 0.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  7. 25 Oct, 2021 1 commit
    • Replication backlog and replicas use one global shared replication buffer (#9166) · c1718f9d
      Wang Yuan authored
      ## Background
      On a Redis master, each replica uses its own copy of the replication buffer. That is a big
      waste of memory: more replicas mean more waste, and allocating/freeing memory for every
      reply list also costs a lot. If we set client-output-buffer-limit small and write traffic is heavy,
      the master may disconnect replicas and can't finish synchronization with them. If we set
      client-output-buffer-limit big, the master may go OOM when there are many replicas that
      separately keep a lot of memory. Because the replication buffer content of different replica
      clients is the same, a simple idea is to have all replicas share one replication buffer, which
      effectively saves memory.

      Since the replication backlog content is the same as the replicas' output buffer, now we
      can discard the replication backlog memory and use the global shared replication buffer
      to implement the replication backlog mechanism.
      
      ## Implementation
      I created one global "replication buffer" which contains the content of the replication stream.
      The structure of the "replication buffer" is similar to the reply list that exists in every client,
      but the node of the list is `replBufBlock`, which has `id, repl_offset, refcount` fields.
      ```c
      /* Replication buffer blocks is the list of replBufBlock.
       *
       * +--------------+       +--------------+       +--------------+
       * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
       * +--------------+       +--------------+       +--------------+
       *      |                                            /       \
       *      |                                           /         \
       *      |                                          /           \
       *  Repl Backlog                              Replica_A     Replica_B
       * 
       * Each replica or replication backlog increments only the refcount of the
       * 'ref_repl_buf_node' which it points to. So when replica walks to the next
       * node, it should first increase the next node's refcount, and when we trim
       * the replication buffer nodes, we remove node always from the head node which
       * refcount is 0. If the refcount of the head node is not 0, we must stop
       * trimming and never iterate the next node. */
      
      /* Similar to 'clientReplyBlock', it is used for the buffers shared between
       * all replica clients and the replication backlog. */
      typedef struct replBufBlock {
          int refcount;           /* Number of replicas or repl backlog using. */
          long long id;           /* The unique incremental number. */
          long long repl_offset;  /* Start replication offset of the block. */
          size_t size, used;
          char buf[];
      } replBufBlock;
      ```
      So now when we feed the replication stream into the replication backlog and all replicas, we only
      need to feed the stream into the replication buffer (`feedReplicationBuffer`). In this function, we set
      some fields of the replication backlog and replicas to references of the global replication buffer
      blocks. We also need to check the replicas' output buffer limit, freeing replicas that exceed
      `client-output-buffer-limit`, and trim the replication backlog if it exceeds `repl-backlog-size`.

      When sending a reply to replicas, we also need to iterate the replication buffer blocks and send
      their content; when one block has been fully sent to a replica, we decrease the current node's
      refcount and increase the next node's refcount, and then free blocks whose refcount is 0 from
      the head of the replication buffer blocks.
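
      A self-contained sketch of those two refcount rules (take the next reference before
      dropping the current one; trim zero-refcount blocks from the head only); the names
      here are illustrative:
      ```c
      #include <stdlib.h>

      /* Simplified stand-in for replBufBlock in a singly linked list. */
      typedef struct blockNode {
          struct blockNode *next;
          int refcount;
          /* ... id, repl_offset, buf ... */
      } blockNode;

      /* Move one reference (a replica or the backlog) to the next block. */
      blockNode *advance_reference(blockNode *cur) {
          blockNode *next = cur->next;
          if (next) next->refcount++;  /* take the new reference first */
          cur->refcount--;             /* then drop the old one */
          return next;
      }

      /* Trim from the head, stopping at the first still-referenced block. */
      blockNode *trim_head(blockNode *head) {
          while (head && head->refcount == 0) {
              blockNode *next = head->next;
              free(head);
              head = next;
          }
          return head;
      }
      ```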
      
      Since we now use a linked list to manage the replication backlog, it may take a long time to
      iterate all the list nodes to find the corresponding replication buffer node. So we create a rax
      tree to index some of the nodes, but to avoid the rax tree occupying too much memory, i record
      one node per 64 as an index.
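
      A toy version of that sparse index, with a plain sorted array standing in for the rax tree
      (illustrative only):
      ```c
      #include <stddef.h>

      typedef struct idxBlock { struct idxBlock *next; long long repl_offset; } idxBlock;
      typedef struct { long long offset; idxBlock *node; } indexEntry;

      /* idx holds every 64th block, sorted by start offset: pick the greatest
       * indexed node <= target, then walk forward at most 63 nodes. */
      idxBlock *find_block(indexEntry *idx, size_t n, idxBlock *head, long long target) {
          idxBlock *cur = head;
          for (size_t i = 0; i < n && idx[i].offset <= target; i++)
              cur = idx[i].node;
          while (cur && cur->next && cur->next->repl_offset <= target)
              cur = cur->next;
          return cur;
      }
      ```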
      
      Currently, to make partial resynchronization possible as much as we can, we always let the
      replication backlog be the last reference of the replication buffer blocks, so the backlog size may
      exceed our setting if slow replicas reference vast replication buffer blocks; this method doesn't
      increase memory usage since they share the replication buffer. To avoid freezing the server when
      freeing unreferenced replication buffer blocks after trimming the backlog for exceeding the
      backlog size setting, we trim the backlog incrementally (free 64 blocks per call now), and make it
      faster in `beforeSleep` (free 640 blocks).
      
      ### Other changes
      - `mem_total_replication_buffers`: we add this field to the INFO command; it is the total
        memory used by replication buffers.
      - `mem_clients_slaves`: now even if a replica is slow to replicate and its output buffer memory
        is not 0, this field may still be 0, since the replication backlog and replicas share one global
        replication buffer. Only if the replication buffer memory is more than the repl backlog setting
        size do we consider the excess as the replicas' memory; otherwise we consider the replication
        buffer memory to be the consumption of the repl backlog.
      - Key eviction
        Since all replicas and the replication backlog share the global replication buffer, we count only
        the part exceeding the backlog size as the extra separate consumption of replicas.
        Because we trim the backlog incrementally in the background, the backlog size may exceed our
        setting if slow replicas that reference vast replication buffer blocks disconnect.
        To avoid a massive eviction loop, we don't count the delayed-freed replication backlog as
        used memory even if there are no replicas, i.e. we also regard this memory as the replicas' memory.
      - `client-output-buffer-limit` check for replica clients
        It doesn't make sense to set the replica clients output buffer limit lower than the repl-backlog-size
        config (partial sync will succeed and then replica will get disconnected). Such a configuration is
        ignored (the size of repl-backlog-size will be used). This doesn't have memory consumption
        implications since the replica client will share the backlog buffers memory.
      - Drop replication backlog after loading data if needed
        We always create the replication backlog if the server is a master; we need it because we put
        DELs in it when loading expired keys from the RDB. But if the RDB doesn't have replication info,
        or there is no RDB, partial resynchronization is not possible, so to avoid the extra memory of
        the replication backlog, we drop it.
      - Multi IO threads
        Since all replicas and the replication backlog use the global replication buffer, if I/O threads are
        enabled, then to guarantee thread-safe data access, the main thread must handle sending the
        output buffer to all replicas. Before, other I/O threads could handle sending the output buffer
        of all replicas.
      
      ## Other optimizations
      This solution resolves some other problems:
      - When replicas are disconnected from the master for exceeding the output buffer limit, releasing
        the replicas' output buffer may freeze the server if we set a big `client-output-buffer-limit` for
        replicas; now it doesn't cause freezing.
      - This implementation may mitigate the reply-list copy cost (which also freezes the server) when
        one replica has a huge reply buffer and another replica copies that buffer for full synchronization.
        Now we just copy the reference info, which is very light.
      - If we set a big replication backlog size, it may also take a long time to copy the replication
        backlog into the replica's output buffer. This commit eliminates that problem.
      - Resizing the replication backlog doesn't empty the current replication backlog content.
  8. 18 Oct, 2021 1 commit
    • Attempt to fix a valgrind test failure due to timing (#9643) · 276b460e
      Oran Agra authored
      in the past few days i've seen two failures in the valgrind daily test.
      
      *** [err]: slave fails full sync and diskless load swapdb recovers it in tests/integration/replication.tcl
      Replica didn't get into loading mode
      
      can't reproduce it, but i'm hoping it's just too slow (to start loading within 5 seconds)
  9. 19 Sep, 2021 1 commit
  10. 09 Sep, 2021 1 commit
    • Delay to discard cached master when full synchronization (#9398) · cee3d67f
      Wang Yuan authored
      * Delay discarding the cached master during full synchronization
      * Don't disconnect replicas before loading the transferred RDB during full sync

      Previously, once a replica needed to start full synchronization with its master,
      it would discard the cached master whether the full synchronization failed or not.
      Now we discard the cached master only when transferring the RDB is finished
      and we start to change the data space; this makes it possible for the replica
      to start partial resynchronization with another new master if the new master
      fails during full synchronization.
  11. 22 Jun, 2021 1 commit
    • solve test timing issues in replication tests (#9121) · d0819d61
      Oran Agra authored
      # replication-3.tcl
      had a test timeout failure with valgrind on daily CI:
      ```
      *** [err]: SLAVE can reload "lua" AUX RDB fields of duplicated scripts in tests/integration/replication-3.tcl
      Replication not started.
      ```
      replication took more than 70 seconds.
      https://github.com/redis/redis/runs/2854037905?check_suite_focus=true
      
      on my machine it takes only about 30, but i can see how 50 seconds isn't enough.
      
      # replication.tcl
      loading was over too quickly in freebsd daily CI:
      ```
      *** [err]: slave fails full sync and diskless load swapdb recovers it in tests/integration/replication.tcl
      Expected '0' to be equal to '1' (context: type eval line 44 cmd {assert_equal [s -1 loading] 1} proc ::start_server)
      ```
      
      # rdb.tcl
      loading was over too quickly.
      increase the time loading takes, and decrease the amount of work we try to achieve in that time.
  12. 10 Jun, 2021 1 commit
    • Fixed some typos, add a spell check ci and others minor fix (#8890) · 0bfccc55
      Binbin authored
      This PR adds a spell checker CI action that will fail future PRs if they introduce typos and spelling mistakes.
      This spell checker is based on a blacklist of common spelling mistakes, so it will not catch everything,
      but at least it is also unlikely to cause false positives.

      Besides that, the PR also fixes many spelling mistakes and typos, not all of which were found by the spell checker we use.
      
      Here's a summary of other changes:
      1. Scanned the entire source code and fixed all sorts of typos and spelling mistakes (including missing or extra spaces).
      2. Outdated function / variable / argument names in comments.
      3. Fix outdated keyspace masks error log when we check `config.notify-keyspace-events` in loadServerConfigFromString.
      4. Trim the white space at the end of lines in `module.c`. Check: https://github.com/redis/redis/pull/7751
      5. Some outdated https link URLs.
      6. Fix some outdated comments. Such as:
          - In README: about the rdb, we used to say it creates a `thread`; changed to `process`
          - dbRandomKey function comment (about dictGetRandomKey, changed to dictGetFairRandomKey)
          - notifyKeyspaceEvent function comment (add type arg)
          - Some other minor fixes in comments (most of them incorrectly quoted variable names)
      7. Modified the error log so that users can easily distinguish between TCP and TLS in `changeBindAddr`.
  13. 09 Jun, 2021 1 commit
    • Improve test suite to handle external servers better. (#9033) · 8a86bca5
      Yossi Gottlieb authored
      This commit revives and improves the ability to run the test suite against
      external servers, instead of launching and managing `redis-server` processes as
      part of the test fixture.

      This capability existed in the past, using the `--host` and `--port` options.
      However, it was quite limited and mostly useful when running a specific test.
      Attempting to run larger chunks of the test suite experienced many issues:
      
      * Many tests depend on being able to start and control `redis-server` themselves,
      and there's no clear distinction between external server compatible and other
      tests.
      * Cluster mode is not supported (resulting in `CROSSSLOT` errors).
      
      This PR cleans up many things and makes it possible to run the entire test suite
      against an external server. It also provides more fine grained controls to
      handle cases where the external server supports a subset of the Redis commands,
      limited number of databases, cluster mode, etc.
      
      The tests directory now contains a `README.md` file that describes how this
      works.
      
      This commit also includes additional cleanups and fixes:
      
      * Tests can now be tagged.
      * Tag-based selection is now unified across `start_server`, `tags` and `test`.
      * More information is provided about skipped or ignored tests.
      * Repeated patterns in tests have been extracted to common procedures, both at a
        global level and on a per-test file basis.
      * Cleaned up some cases where test setup was based on a previous test executing
        (a major anti-pattern that repeats itself in many places).
      * Cleaned up some cases where test teardown was not part of a test (in the
        future we should have dedicated teardown code that executes even when tests
        fail).
      * Fixed some tests that were flaky running on external servers.
  14. 26 May, 2021 1 commit
    • unregister AE_READABLE from the read pipe in backgroundSaveDoneHandlerSocket (#8991) · 501d7755
      YaacovHazan authored
      In diskless replication, we create a read pipe for the RDB, between the child and the parent.
      When we close this pipe (fd), the read handler also needs to be removed from the event loop (if it's still registered).
      Otherwise, the next time we use the same fd, the registration will fail (panic), because
      we will use EPOLL_CTL_MOD (the fd is still registered in the event loop) on an fd that was already removed from epoll_ctl.
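
      Assuming the ae.c event-loop API (aeDeleteFileEvent), the ordering the fix enforces
      looks roughly like this; the helper itself is hypothetical:
      ```c
      #include <unistd.h>
      #include "ae.h"

      /* Unregister the read handler before closing the fd, so a later
       * registration on a recycled fd number won't hit EPOLL_CTL_MOD on a
       * stale epoll entry. */
      static void closeRdbPipeReadEnd(aeEventLoop *el, int fd) {
          aeDeleteFileEvent(el, fd, AE_READABLE);
          close(fd);
      }
      ```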
  15. 20 May, 2021 1 commit
    • stabilize tests that involved with load handlers (#8967) · 32a2584e
      YaacovHazan authored
      When tests stop a 'load handler' by killing the process that generates the load,
      some commands that are already in the input buffer might still be processed by the server.
      This may cause some instability in tests that count on no more commands being
      processed after we stop the 'load handler'.

      In this commit, a new proc 'wait_load_handlers_disconnected' is added, to verify that no more
      commands from any 'load handler' are processed, by checking that the clients who
      generate the load are disconnected.

      Also, replacing the check of dbsize with wait_for_ofs_sync before comparing debug digests, as
      it would fail in case the last key the workload wrote was an overridden key (not a new one).
      
      Affected tests
      Race fix:
      - failover command to specific replica works
      - Connect multiple replicas at the same time (issue #141), master diskless=$mdl, replica diskless=$sdl
      - AOF rewrite during write load: RDB preamble=$rdbpre
      
      Cleanup and speedup:
      - Test replication with blocking lists and sorted sets operations
      - Test replication with parallel clients writing in different DBs
      - Test replication partial resync: $descr (diskless: $mdl, $sdl, reconnect: $reconnect)
  16. 18 May, 2021 1 commit
  17. 18 Apr, 2021 1 commit
    • Fix timing of new replication test (#8807) · a9897b00
      Oran Agra authored
      In github actions CI with valgrind, i saw that even the fast replica
      (one that wasn't paused), didn't get to complete the replication fast
      enough, and ended up getting disconnected by timeout.
      
      Additionally, due to a typo in uname, we didn't get to actually run the
      CPU efficiency part of the test.
  18. 15 Apr, 2021 1 commit
    • Add a timeout mechanism for replicas stuck in fullsync (#8762) · d63d0260
      guybe7 authored
      Starting with Redis 6.0 (as part of the TLS feature), a diskless master uses a pipe from the fork
      child so that the parent is the one sending data to the replicas.
      This mechanism has an issue in which a hung replica will cause the master to wait
      forever for it to read the data sent to it, thus preventing the fork child from terminating
      and preventing the creation of any other forks.

      This PR adds a timeout mechanism, much like the ACK-based timeout;
      we disconnect replicas that aren't reading the RDB file fast enough.
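
      Conceptually, the check is as simple as the sketch below (field and parameter names are made up):
      ```c
      #include <time.h>

      typedef struct replica {
          time_t last_rdb_read;   /* last time it consumed data from the RDB pipe */
      } replica;

      /* Run periodically while the fork child is transferring the RDB:
       * returns 1 if this replica should be disconnected. */
      int replica_rdb_read_timed_out(const replica *r, time_t now, int repl_timeout) {
          return (now - r->last_rdb_read) > repl_timeout;
      }
      ```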
  19. 24 Mar, 2021 1 commit
  20. 22 Mar, 2021 1 commit
    • Fix race in replication test (#8679) · a7c02b19
      Oran Agra authored
      Since redis 6.2, redis immediately tries to connect to the master, not
      waiting for replication cron.
      
      in the slow freebsd CI, this test failed and master_link_status was
      already "up" when INFO was called.
  21. 17 Jan, 2021 1 commit
    • Add io-thread daily CI tests. (#8232) · 522d9360
      Yossi Gottlieb authored
      This adds basic coverage to IO threads by running the cluster and a few selected Redis test suite tests with the IO threads enabled.
      
      Also provides some necessary additional improvements to the test suite:
      
      * Add --config to sentinel/cluster tests for arbitrary configuration.
      * Fix --tags whitelisting which was broken.
      * Add a `network` tag to some tests that are more network intensive. This is work in progress and more tests should be properly tagged in the future.
  22. 03 Nov, 2020 1 commit
  23. 27 Oct, 2020 1 commit
  24. 22 Oct, 2020 1 commit
  25. 08 Oct, 2020 1 commit
    • Adds new pop-push commands (LMOVE, BLMOVE) (#6929) · c3f9e017
      Felipe Machado authored
      
      
      Adding [B]LMOVE <src> <dst> RIGHT|LEFT RIGHT|LEFT, deprecating [B]RPOPLPUSH.
      
      Note that when receiving a BRPOPLPUSH we'll still propagate an RPOPLPUSH,
      but on BLMOVE RIGHT LEFT we'll propagate an LMOVE
      
      Improvements to existing tests:
      - Replace "after 1000" with "wait_for_condition" when wait for
        clients to block/unblock.
      - Add a pre-existing element to target list on basic tests so
        that we can check if the new element was added to the correct
        side of the list.
      - check command stats on the replica to make sure the right
        command was replicated
      Co-authored-by: Oran Agra <oran@redislabs.com>
  26. 22 Sep, 2020 1 commit
    • Kill disk-based fork child when all replicas drop and 'save' is not enabled (#7819) · 1bb5794a
      Wang Yuan authored
      When all replicas waiting for a bgsave get disconnected (possibly due to output buffer limits),
      it may be good to kill the bgsave child. In diskless replication it already happens, but in
      disk-based replication, the child may still serve some purpose (for persistence).

      By killing the child, we prevent it from eating COW memory in vain, and we also allow a new child fork sooner for the next full synchronization or bgsave.
      We do that only if rdb persistence wasn't enabled in the configuration.

      Btw, now rdbRemoveTempFile in killRDBChild won't block the server, so we can call killRDBChild safely.
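
      The condition boils down to something like this sketch (names hypothetical; the real
      check lives in Redis' replication code):
      ```c
      /* Kill the disk-based bgsave child once no replica is waiting for it,
       * unless RDB persistence ('save' points) is configured. */
      typedef struct {
          int rdb_child_running;      /* a disk-based bgsave child exists */
          int waiting_replicas;       /* replicas waiting for this bgsave */
          int save_points_configured; /* 'save' enabled in the config */
      } repl_state;

      int should_kill_bgsave_child(const repl_state *s) {
          return s->rdb_child_running &&
                 s->waiting_replicas == 0 &&
                 !s->save_points_configured;
      }
      ```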
  27. 06 Sep, 2020 1 commit
    • if diskless repl child is killed, make sure to reap the pid (#7742) · 573246f7
      Oran Agra authored
      Starting with Redis 6.0 and the changes we made to the diskless master to be
      suitable for TLS, I made the master avoid reaping (wait3) the pid of the
      child until we know all replicas are done reading their rdb.
      
      I did that in order to avoid a state where the rdb_child_pid is -1 but
      we don't yet want to start another fork (still busy serving that data to
      replicas).
      
      It turns out that the solution used so far was problematic in case the
      fork child was killed (e.g. by the kernel OOM killer). In that
      case there's a chance that we had disabled the read event on the
      rdb pipe, since we're waiting for a replica to become writable again,
      and in that scenario the master would never realize the child
      exited, and the replica would remain hung too.
      Note that there's no mechanism to detect a hung replica while it's in
      rdb transfer state.
      
      The solution here is to add another pipe which is used by the parent to
      tell the child it is safe to exit. This means that when the child exits,
      for whatever reason, it is safe to reap it.
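
      A minimal model of that exit pipe (illustrative; not the actual Redis code):
      ```c
      #include <errno.h>
      #include <unistd.h>

      static int exit_pipe[2]; /* created with pipe() before the fork:
                                  [0] = child's read end, [1] = parent's write end */

      /* Parent: all replicas are done reading, let the child exit. */
      void parent_allow_child_exit(void) {
          char ok = '!';
          while (write(exit_pipe[1], &ok, 1) == -1 && errno == EINTR);
      }

      /* Child: block until the parent says it's safe to exit. */
      void child_wait_for_exit_permission(void) {
          char buf;
          while (read(exit_pipe[0], &buf, 1) == -1 && errno == EINTR);
      }
      ```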
      
      Besides that, i'm re-introducing an adjustment to REPLCONF ACK which was
      part of #6271 (Accelerate diskless master connections) but was dropped
      when that PR was rebased after the TLS fork/pipe changes (5a477946).
      Now that RdbPipeCleanup no longer calls checkChildrenDone, and the ACK
      has a chance to detect that the child exited, it should be the one to call
      it so that we don't have to wait for cron (server.hz) to do that.
  28. 06 Aug, 2020 1 commit
    • Accelerate diskless master connections, and general re-connections (#6271) · c17e597d
      Oran Agra authored
      Diskless master has some inherent latencies.
      1) fork starts with a delay from cron rather than immediately
      2) replica is put online only after an ACK, but the ACK
         was sent only once a second.
      3) even if it would arrive immediately, it will not
         register in case cron didn't yet detect that the fork is done.

      Besides that, when a replica disconnects, it doesn't immediately
      attempt to re-connect; it waits for replication cron (once per second).
      In case it was already online, it may be important to try to re-connect
      as soon as possible, so that the backlog at the master doesn't vanish.

      In case it disconnected during rdb transfer, one can argue that it's
      not very important to re-connect immediately, but this is needed for the
      "diskless loading short read" test to be able to run 100 iterations in 5
      seconds, rather than 3 (waiting for replication cron re-connection)
      
      changes in this commit:
      1) sync command starts a fork immediately if no sync_delay is configured
      2) replica sends REPLCONF ACK when done reading the rdb (rather than on the 1s cron)
      3) when a replica unexpectedly disconnects, it immediately tries to
         re-connect rather than waiting 1s
      4) when a child exits, if there is another replica waiting, we spawn a new
         one right away, instead of waiting for the 1s replicationCron.
      5) added a call to connectWithMaster from replicationSetMaster, which is called
         from the REPLICAOF command but also in 3 places in cluster.c; in all of
         these the connection attempt will now be immediate instead of delayed by 1
         second.
      
      side note:
      we can add a call to rdbPipeReadHandler in replconfCommand when getting
      a REPLCONF ACK from the replica, to solve a race where the replica got
      the entire rdb and EOF marker before we detected that the pipe was
      closed.
      in the test i did see this race happen in about one out of some 300 runs,
      but i concluded that this race is unlikely in real life (where the
      replica is on another host and we're more likely to first detect the
      pipe was closed).
      the test runs 100 iterations in 3 seconds, so in some cases it'll take 4
      seconds instead (waiting for another REPLCONF ACK).
      
      Removing unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave:
      Now that checkChildrenDone is calling the new replicationStartPendingFork
      (extracted from serverCron) there's actually no need to call
      startBgsaveForReplication from updateSlavesWaitingForBgsave anymore,
      since as soon as updateSlavesWaitingForBgsave returns, checkChildrenDone
      calls replicationStartPendingFork which handles that anyway.
      The code in updateSlavesWaitingForBgsave had a bug in which it ignored
      repl-diskless-sync-delay, but removing that code shows that this bug was
      hiding another bug, which is that max_idle should have used >= and
      not >; this one-second delay has a big impact on my new test.
  29. 28 Jul, 2020 1 commit
    • Fix failing tests due to issues with wait_for_log_message (#7572) · 109b5ccd
      Oran Agra authored
      - the test now waits for a specific set of log messages rather than waiting with a
        timeout looking for just one message.
      - we don't wanna sample the current length of the log after an action; due
        to a race, we need to start the search from the line number of the last
        message we were waiting for.
      - when attempting to trigger a full sync, use multi-exec to avoid a race
        where the replica manages to re-connect before we completed the set of
        actions that should force a full sync.
      - fix verify_log_message which was broken and unused
  30. 10 Jul, 2020 1 commit
    • stabilize tests that look for log lines (#7367) · 8e76e134
      Oran Agra authored
      tests were sensitive to additional log lines appearing in the log,
      causing the search to come up empty-handed.
      
      instead of just looking for the n last log lines, capture the log lines
      before performing the action, and then search from that offset.
  31. 18 May, 2020 1 commit
  32. 17 May, 2020 1 commit
    • add regression test for the race in #7205 · 357aace8
      Oran Agra authored
      with the original version of 6.0.0, this test detects an excessive full
      sync.
      with the fix in 1a7cd2c0, this test detects memory corruption,
      especially when using libc allocator with or without valgrind.
  33. 12 May, 2020 1 commit
    • fix unstable replication test · b4416280
      Oran Agra authored
      this test, which has coverage for various flows of diskless master, was
      failing randomly from time to time.
      
      the failure was:
      [err]: diskless all replicas drop during rdb pipe in tests/integration/replication.tcl
      log message of '*Diskless rdb transfer, last replica dropped, killing fork child*' not found
      
      what seemed to have happened is that the master didn't detect that all
      replicas dropped by the time the replication ended; it thought that one
      replica was still connected.
      
      now the test takes a few seconds longer but it seems stable.
  34. 18 Dec, 2019 1 commit
  35. 09 Oct, 2019 1 commit
  36. 07 Oct, 2019 3 commits
    • TLS: Configuration options. · 61733ded
      Yossi Gottlieb authored
      Add configuration options for TLS protocol versions, ciphers/cipher
      suites selection, etc.
    • diskless replication rdb transfer uses pipe, and writes to sockets from the parent process. · 5a477946
      Oran Agra authored
      misc:
      - handle SSL_has_pending by iterating through these in beforeSleep, and setting a timeout of 0 to aeProcessEvents
      - fix issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed)
      - add key-load-delay config for testing
      - trim connShutdown which is no longer needed
      - rioFdsetWrite -> rioFdWrite - simplified since there's no longer a need to write to multiple FDs
      - don't detect that the rdb child exited (don't call wait3) until we detect the pipe is closed
      - Cleanup bad optimization from rio.c, add another one
    • TLS: Connections refactoring and TLS support. · b087dd1d
      Yossi Gottlieb authored
      * Introduce a connection abstraction layer for all socket operations and
      integrate it across the code base.
      * Provide an optional TLS connections implementation based on OpenSSL.
      * Pull a newer version of hiredis with TLS support.
      * Tests, redis-cli updates for TLS support.
  37. 17 Jul, 2019 1 commit
    • prevent diskless replica from terminating on short read · c56b4ddc
      Oran Agra authored
      now that the replica can read the rdb directly from the socket, it should avoid exiting
      on a short read and instead try to re-sync.

      this commit tries to have minimal effect on non-diskless rdb reading,
      and includes a test that tries to trigger this scenario in various read cases.