1. 29 Aug, 2022 1 commit
    • Fix style (#128) · 21844ec6
      Ozan Tezcan authored
      Fix style issues in raft_log.c, raft_node.c, and raft_server_properties.c
  2. 28 Aug, 2022 1 commit
  3. 25 Aug, 2022 1 commit
    • Add support to rebuild configuration after a restart (#125) · 9c09a6cc
      Ozan Tezcan authored
      Configuration changes are special in Raft: they take effect as soon as
      the related log entries are appended, not only once they are committed.
      On a restart, we should therefore go over the log entries and rebuild
      the configuration.
      
      Adding the raft_restore_log() function for this purpose.
      
      Overall, on a restart, the application should (see the sketch below):
      
      1) Load the snapshot and call raft_restore_snapshot()
      2) Load the log entries and call raft_restore_log()
      3) Read the metadata file and call raft_restore_metadata()
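      A minimal sketch of that restart sequence, assuming hypothetical
      app-side persistence helpers and assumed signatures for the
      raft_restore_*() functions (the commit names the functions but not
      their prototypes):
      
          #include "raft.h"
          
          /* App-specific persistence helpers (hypothetical). */
          int  load_snapshot_from_disk(raft_term_t *term, raft_index_t *idx);
          void load_log_entries_from_disk(raft_server_t *r);
          void read_metadata_from_disk(raft_term_t *term, raft_node_id_t *vote);
          
          void restart_node(raft_server_t *r)
          {
              /* 1) Load the snapshot, if any, and tell libraft about it. */
              raft_term_t snap_term;
              raft_index_t snap_idx;
              if (load_snapshot_from_disk(&snap_term, &snap_idx) == 0)
                  raft_restore_snapshot(r, snap_term, snap_idx);
          
              /* 2) Load and append the log entries, then let libraft rebuild
               *    the configuration from the config-change entries. */
              load_log_entries_from_disk(r);
              raft_restore_log(r);
          
              /* 3) Hand the persisted term and vote back to libraft. */
              raft_term_t term;
              raft_node_id_t voted_for;
              read_metadata_from_disk(&term, &voted_for);
              raft_restore_metadata(r, term, voted_for);
          }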
  4. 23 Aug, 2022 2 commits
  5. 22 Aug, 2022 2 commits
    • Fix return types (#122) · e8791a27
      Ozan Tezcan authored
      Use the typedefs instead of the actual underlying types
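      For illustration, the kind of change this implies (using
      raft_get_current_idx() as an example; treat the exact signature as an
      assumption):
      
          /* Before, the raw integer type leaked into the signature:
           *     int raft_get_current_idx(raft_server_t *me);
           * After, the library typedef is used, so the underlying width can
           * change without touching every caller: */
          raft_index_t raft_get_current_idx(raft_server_t *me);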
    • Improve snapshot support (#121) · 7d66a156
      Ozan Tezcan authored
      - raft_load_snapshot_f(): changed the order of the term and index
      arguments to be in line with raft_begin_load_snapshot().
      
      - Refactored raft_begin_load_snapshot(). Now the current server's node
      flags are cleared inside this function. Previously, the application had
      to take care of it, e.g. clear the inactive flag when loading a snapshot.
      
      - Added raft_restore_snapshot() to let libraft know that the application
      has loaded a snapshot.
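      A sketch of a raft_load_snapshot_f implementation under the new
      argument order; the callback signature, the raft_end_load_snapshot()
      pairing, and the restore_state_machine() helper are assumptions:
      
          /* App-specific state machine restore (hypothetical). */
          void restore_state_machine(void *udata);
          
          /* Hypothetical raft_load_snapshot_f; (term, idx) now matches the
           * raft_begin_load_snapshot() argument order. */
          static int app_load_snapshot(raft_server_t *r, void *udata,
                                       raft_term_t term, raft_index_t idx)
          {
              if (raft_begin_load_snapshot(r, term, idx) != 0)
                  return -1;
          
              restore_state_machine(udata);
          
              /* No manual node-flag handling here any more: the local server's
               * flags are cleared inside raft_begin_load_snapshot(). */
              return raft_end_load_snapshot(r);
          }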
  6. 18 Aug, 2022 1 commit
  7. 16 Aug, 2022 3 commits
  8. 08 Aug, 2022 1 commit
  9. 07 Jul, 2022 1 commit
  10. 30 Jun, 2022 1 commit
  11. 19 Jun, 2022 1 commit
  12. 19 May, 2022 1 commit
  13. 02 May, 2022 1 commit
  14. 26 Apr, 2022 1 commit
  15. 06 Apr, 2022 1 commit
  16. 03 Apr, 2022 1 commit
  17. 30 Mar, 2022 2 commits
  18. 24 Mar, 2022 1 commit
  19. 21 Mar, 2022 1 commit
  20. 13 Mar, 2022 4 commits
  21. 06 Mar, 2022 1 commit
  22. 07 Feb, 2022 1 commit
  23. 16 Nov, 2021 1 commit
    • Properly get coverage from all tests. (#70) · ada3d6a0
      Yossi Gottlieb authored
      * Add COVERAGE=1 Makefile option to build libraft with coverage.
      * Refactor CFFI module to use the existing shared object in an
        out-of-line configuration, so there's no need to build and re-build
        it.
      * Makefile adjustments to build the CFFI module as needed.
      * Remove raft_get_snapshot_entry_idx, a prototype for an undefined
        function that breaks CFFI.
      * Add a gcov Makefile target to fetch both unit tests and integration
        test coverage in one shot.
  24. 15 Nov, 2021 1 commit
  25. 07 Oct, 2021 1 commit
    • redo/tweak transfer leader in libraft (#61) · 63b0b1c1
      Shaya Potter authored
      this started off as a small tweak and grew a bit
      
      1) new callback for transfer leader result, along with its own enum of states
      
      2) change/remove raft_reset_transfer_leader() calls
      
      only call it in places where there's a new leader or a timeout (i.e.
      the transfer failed to complete within the period).
      
      so, we only reset in 3 places: 1) recv_appendentries, 2) raft_periodic
      (timeout), 3) raft_become_leader
      
      2a) raft_become_leader also got refactored a little to only issue the normal cb.notify_state_event() callback after it has set its state to leader (previously, it could fail after the callback had already been issued)
      
      2b) in raft_periodic, the raft_reset_transfer_leader() call is pulled out of the "if LEADER" block: the server will lose leadership when a vote is issued, and we still want the transfer to time out if leadership isn't gained
      
      3) raft_reset_transfer_leader() now does the logic for determining success/failure, but because a timeout is its own result, which can't be determined by just looking at leader/desired, add a new flag to it to note the timeout state
      
      4) leader stickiness in requestvote is only tested for prevote (in a timeout-now case, we want to drop the timeout-now flag after prevotes are sent). The idea is that we will only get to an actual election if the prevote passes.
      
      5) a bunch of new tests
      
      * fix issue with timeout flag to reset_transfer_leader
      
      also fix test to catch the original error
      
      * libraft/redisraft changes needed for e2e leader transfer
      
      * make leader stickiness only for prevote
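      A sketch of consuming the new transfer-result callback. The commit
      describes a callback with its own enum of states but doesn't show them,
      so the names below (app_transfer_result, raft_leader_transfer_e,
      RAFT_LEADER_TRANSFER_*) are assumptions:
      
          /* Hypothetical transfer-result callback; enum names are assumptions. */
          static void app_transfer_result(raft_server_t *r, void *udata,
                                          raft_leader_transfer_e result)
          {
              switch (result) {
              case RAFT_LEADER_TRANSFER_EXPECTED_LEADER:
                  /* The targeted node became leader: success. */
                  break;
              case RAFT_LEADER_TRANSFER_UNEXPECTED_LEADER:
                  /* Some other node became leader: failure. */
                  break;
              case RAFT_LEADER_TRANSFER_TIMEOUT:
                  /* No new leader within the period; the dedicated timeout
                   * flag distinguishes this from the leader/desired check. */
                  break;
              }
          }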
  26. 23 Sep, 2021 2 commits
    • Adding support for TimeoutNow RPC (#7) · 575335c3
      Shaya Potter authored
      these are the small modifications needed for raftlib to support the TimeoutNow RPC
      
      1) add a flag to the requestvote type to tell other nodes it's overriding the normal checks
      2) new send_timeoutnow callback to send a TimeoutNow RPC at the right time
      3) call said callback if we've marked the node as timeout-now and, in raft_appendentries_response, raft_get_current_idx(me_) == r->current_idx (i.e. the response says the node's idx is up to date with the leader)
      4) add helper functions to set/get/reset the timeout flag on the node struct of the targeted node.
      
      how it would be used by a client:
      
      1) client of libraft will call raft_transfer_leader(raft_server_t* me_, raft_node_id_t node_id) to target the node they want to transfer to.
      
      2) client of libraft will provide the send_timeoutnow() callback for the actual sending of the rpc
      
      3) client of libraft will modify their existing notify_state_event() callback to observe the result of the timeout now operation.
      
      if it receives a RAFT_STATE_LEADERSHIP_TRANSFER_FAILED, then the transfer failed
      
      if it receives a RAFT_STATE_FOLLOWER, then it inspects the actual leader to see if it's the expected one, and can report whether leadership was transferred to the expected node or not
      
      if it receives a RAFT_STATE_CANDIDATE, then the transfer also failed: the targeted node sent a requestvote which removed our leadership, but it wasn't able to win the election.
      
      With PreVote, this latter case should be very rare.
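      A sketch of the notify_state_event() handling outlined above. The
      RAFT_STATE_* values are the ones named in this commit; the callback
      signature, raft_get_leader_id(), and the on_transfer_done() /
      expected_target() helpers are assumptions:
      
          #include <stdbool.h>
          #include "raft.h"
          
          void on_transfer_done(void *udata, bool ok);   /* hypothetical */
          raft_node_id_t expected_target(void *udata);   /* hypothetical */
          
          static void app_notify_state_event(raft_server_t *r, void *udata,
                                             int state)
          {
              switch (state) {
              case RAFT_STATE_LEADERSHIP_TRANSFER_FAILED:
                  /* Transfer failed outright. */
                  on_transfer_done(udata, false);
                  break;
              case RAFT_STATE_FOLLOWER:
                  /* We stepped down; did the node we targeted take over? */
                  on_transfer_done(udata,
                                   raft_get_leader_id(r) == expected_target(udata));
                  break;
              case RAFT_STATE_CANDIDATE:
                  /* The target stripped our leadership via requestvote but
                   * lost the election; rare once PreVote is in use. */
                  on_transfer_done(udata, false);
                  break;
              }
          }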
    • Define separate callback function for node id retrieval (#55) · ce8b0f02
      Ozan Tezcan authored
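      The commit message doesn't show the callback's shape, so the signature
      below is an assumption: given a configuration-change log entry, the
      application returns the node id it encodes.
      
          /* App-specific decoding of the entry payload (hypothetical). */
          raft_node_id_t decode_node_id(raft_entry_t *entry);
          
          /* Hypothetical node-id retrieval callback. */
          static raft_node_id_t app_get_node_id(raft_server_t *r, void *udata,
                                                raft_entry_t *entry,
                                                raft_index_t entry_idx)
          {
              return decode_node_id(entry);
          }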
  27. 22 Sep, 2021 1 commit
    • fix read queue test to be more accurate (#59) · 36efee07
      Shaya Potter authored
      * fix read queue test to be more accurate
      
      1) each server's set of nodes keeps the max msg_id it has seen from that node while that node has been leader
      1a) because of this, get_max_seen now operates on a server's nodes, not on the server itself
      
      2) virtraft encodes the leader id into the "arg" (with a user-redefinable multiplier) it sends as part of the callback, so when we get the callback we can know which leader it's for and check its voters for correctness.
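      One plausible rendering of that arg encoding (the multiplier value and
      helper names are illustrative, not taken from the commit):
      
          /* Pack the leader id into the callback arg; assumes the multiplier
           * exceeds any node id (illustrative). */
          #define LEADER_ID_MULT 1000
          
          static long encode_arg(long seq, raft_node_id_t leader_id)
          {
              return seq * LEADER_ID_MULT + leader_id;
          }
          
          static raft_node_id_t arg_leader_id(long arg)
          {
              return (raft_node_id_t)(arg % LEADER_ID_MULT);
          }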
  28. 15 Sep, 2021 1 commit
  29. 14 Sep, 2021 1 commit
  30. 12 Sep, 2021 1 commit
    • add a read_queue test to virtraft (#41) · bdd0797f
      Shaya Potter authored
      * add a read_queue test to virtraft
      
      every iteration we push a read_queue request, and the handler we pass to it sets a variable when called. we can use this to make sure that the read_queue doesn't fall too far behind the iteration, i.e. we pass the leader's msg_id and can check that the leader's msg_id doesn't get too far from the msg_id (variable) we see in the read_queue test
      
      this is analogous to the current log-applying deadlock test.
      
      * implements msg_id checking in virtraft for verification of read_queue requests
      
      When we pop an entry off the read_queue with the can_read flag set, we verify that a majority of nodes have accepted a msg_id from the leader past what this read_queue entry needs.
      
      the problem with this is that, until now, msg_id was private to each server instance and had no relevance to the follower nodes, except to include it back in the response.
      
      we change that to have the followers store the max msg_id they've seen from their current leader
      
      using that, in the read_queue_handler we can verify that a quorum of the leader's voting nodes is past the msg_id variable the handler returns, ensuring that this read_queue handler call is correct.
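      An illustrative C rendering of the quorum check (the test itself lives
      in virtraft, in Python; the types and names here are hypothetical):
      
          #include <stdbool.h>
          
          typedef unsigned long raft_msg_id_t;   /* stand-in for the libraft typedef */
          
          typedef struct {
              raft_msg_id_t max_seen_msg_id;     /* max msg_id seen from the leader */
          } node_view_t;
          
          /* True if a strict majority of the leader's voters have seen a
           * msg_id at or past the one this read_queue entry needs. */
          static bool quorum_past(const node_view_t *voters, int n_voters,
                                  raft_msg_id_t needed)
          {
              int past = 0;
              for (int i = 0; i < n_voters; i++)
                  if (voters[i].max_seen_msg_id >= needed)
                      past++;
              return past > n_voters / 2;
          }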
  31. 09 Sep, 2021 1 commit