- 18 Apr, 2022 1 commit
  - Ozan Tezcan authored: Refactor
- 04 Apr, 2022 1 commit
  - Ozan Tezcan authored
- 03 Apr, 2022 1 commit
  - Ozan Tezcan authored
- 30 Mar, 2022 2 commits
  - Ozan Tezcan authored
  - Ozan Tezcan authored
- 24 Mar, 2022 1 commit
  - Ozan Tezcan authored: Delete private typedefs, use the raft_ namespace for exported symbols
- 21 Mar, 2022 1 commit
  - Ozan Tezcan authored
- 13 Mar, 2022 4 commits
  - Ozan Tezcan authored: Enabled compiler warnings and fixed issues
  - Ozan Tezcan authored
  - Ozan Tezcan authored
  - Ozan Tezcan authored
- 08 Mar, 2022 1 commit
  - Ozan Tezcan authored
- 06 Mar, 2022 1 commit
  - Ozan Tezcan authored
- 07 Feb, 2022 1 commit
  - Ozan Tezcan authored: Add support for async disk flush, backpressure and batching
- 07 Dec, 2021 1 commit
  - Shaya Potter authored: The only valid return values are 0 (success) or RAFT_ERR_SHUTDOWN; this is now enforced by an assert.
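
A minimal sketch of the enforced contract, assuming a call site like the entry-apply path; the callback name (applylog), its arguments, and the surrounding types are assumptions, not taken from the commit:

```c
#include <assert.h>

/* Sketch only: validate a user callback's return code as described above.
 * The applylog callback and its signature are assumed for illustration. */
static int apply_one(raft_server_private_t *me, raft_entry_t *ety, raft_index_t idx)
{
    int e = me->cb.applylog((raft_server_t *)me, me->udata, ety, idx);
    assert(e == 0 || e == RAFT_ERR_SHUTDOWN);   /* only legal return values */
    return e;                                   /* RAFT_ERR_SHUTDOWN propagates up */
}
```
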
- 15 Nov, 2021 1 commit
  - Ozan Tezcan authored: Added snapshot RPC to support large state machines.
- 03 Nov, 2021 1 commit
  - Shaya Potter authored: Avoid sending duplicate entries each time raft_send_appendentries() is called just because they have not been acked yet. next_idx is still reset backwards when the leader receives a raft_recv_appendentries_response() with a failure. Also adds a test for next_idx on append entry, which changes a number of tests that checked for the old behaviour; to do so, the existing mock callback was modified to return 0 instead of 1 (this needs to be verified). Fixes for raft tests that this change exposed:
    1) When issuing a sequence of entries, each was given an increasing term, producing entries whose term was higher than the leader's term, so that had to be fixed.
    2) Because the term of the appendentries response was set to differ from the leader's term (due to the fix above), we now fail on receiving those responses.
    Both are addressed by not allowing raft_append_entry() to append an entry that breaks Raft semantics (i.e. entry term > log term); this broke many tests that do not set a term, so those tests now set it.
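
A hedged sketch of the next_idx behaviour this commit describes: advance it when entries are shipped so repeated raft_send_appendentries() calls do not resend unacked entries, and move it backwards only on a failed response. The exact advancement and the response fields are assumptions:

```c
/* Leader send path (inside raft_send_appendentries, sketch): */
raft_index_t next = raft_node_get_next_idx(node);
/* ... load n_entries entries starting at `next` into the message and send ... */
raft_node_set_next_idx(node, next + n_entries);        /* optimistic advance, no resend */

/* Leader response path (inside raft_recv_appendentries_response, sketch): */
if (!r->success) {
    raft_node_set_next_idx(node, r->current_idx + 1);  /* reset backwards and retry */
    return 0;
}
```
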
- 12 Oct, 2021 1 commit
  - Shaya Potter authored: voting_cfg_change_log_idx was recorded in two places, inconsistently, when it only needed to be recorded once. It was set in raft_recv_entry() as
      if (raft_entry_is_voting_cfg_change(ety))
          me->voting_cfg_change_log_idx = raft_get_current_idx(me_);
    and in raft_append_entry() as
      me->voting_cfg_change_log_idx = raft_get_current_idx(me_) - 1;
    It should only be in raft_append_entry(), because raft_recv_entry() runs only on the leader, but the raft_append_entry() value was wrong while the raft_recv_entry() value was correct. This is also why it generally worked: if no leader election happened, the value was always correct, but if a new leader was elected and had to commit this entry, the value would be wrong.
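
A hedged sketch of the consolidation: record the index exactly once, in raft_append_entry(), with the index the entry actually received. The internal log_append_entry() helper and the surrounding code are assumptions:

```c
int raft_append_entry(raft_server_t *me_, raft_entry_t *ety)
{
    raft_server_private_t *me = (raft_server_private_t *)me_;

    int e = log_append_entry(me->log, ety);      /* hypothetical internal append */
    if (e != 0)
        return e;

    if (raft_entry_is_voting_cfg_change(ety))
        me->voting_cfg_change_log_idx = raft_get_current_idx(me_);  /* index of ety after append */

    return 0;
}
```
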
- 07 Oct, 2021 1 commit
  - Shaya Potter authored: This started off as a small tweak and grew a bit.
    1) New callback for the leader transfer result, along with its own enum of states.
    2) Change/remove raft_reset_transfer_leader() calls: only call it where there is a new leader or a timeout (i.e. the transfer failed to complete within the period), so we reset in only three places: recv_appendentries, raft_periodic (timeout), and raft_become_leader.
    2a) raft_become_leader was also refactored a little to issue the normal cb.notify_state_event() callback only after it has set its state to leader (before, it could fail after the callback was issued).
    2b) In raft_periodic, the raft_reset_transfer_leader() call is pulled out of the "if LEADER" block: since the node loses leadership when a vote is issued to it, we still want the transfer to time out if leadership is not regained.
    3) raft_reset_transfer_leader() now does the logic for determining success/failure; because timeout is its own result that cannot be determined by just looking at leader/desired, a new flag notes the timeout state.
    4) Leader stickiness in requestvote is only tested for prevote (in a timeout-now case, we want to drop the timeout-now flag after prevotes are sent). The idea is that we only get to an actual election if prevote passes.
    5) A bunch of new tests.
    * Fix an issue with the timeout flag passed to raft_reset_transfer_leader(); also fix the test so it catches the original error.
    * libraft/redisraft changes needed for end-to-end leader transfer.
    * Make leader stickiness apply only to prevote.
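
A hedged sketch of point 3, the result determination inside raft_reset_transfer_leader(). The field name, the result enum values, and the notify_transfer_event callback are illustrative assumptions, not the library's confirmed API:

```c
static void raft_reset_transfer_leader(raft_server_t *me_, int timed_out)
{
    raft_server_private_t *me = (raft_server_private_t *)me_;

    if (me->node_transferring_leader_to == -1)
        return;                                          /* no transfer in flight */

    int result;
    if (timed_out)
        result = RAFT_LEADER_TRANSFER_TIMEOUT;           /* assumed enum value */
    else if (me->node_transferring_leader_to == raft_get_leader_id(me_))
        result = RAFT_LEADER_TRANSFER_EXPECTED_LEADER;   /* desired node took over */
    else
        result = RAFT_LEADER_TRANSFER_UNEXPECTED_LEADER; /* someone else became leader */

    if (me->cb.notify_transfer_event)
        me->cb.notify_transfer_event(me_, me->udata, result);

    me->node_transferring_leader_to = -1;                /* clear the desired target */
}
```
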
- 26 Sep, 2021 1 commit
  - Shaya Potter authored: It has to happen after we notify the caller, not before, as the caller needs the state to know what happened. I had it before, then realized it had to be after, but while rebasing the previous PR it got moved back to before without me realizing. Integrating it into redisraft exposed the problem.
- 23 Sep, 2021 2 commits
  - Shaya Potter authored: These are the small modifications needed for raftlib to support the timeout-now RPC (a client-side sketch follows this date's commits):
    1) Add a flag to the requestvote type to tell other nodes it is overriding the normal checks.
    2) Add a new send_timeoutnow callback to send the timeout-now RPC at the right time.
    3) Call that callback if we have marked the node as timeout-now and, in the appendentries response handling, raft_get_current_idx(me_) == r->current_idx (i.e. the response says the node's index is up to date with the leader).
    4) Add helper functions to set/get/reset the timeout flag on the targeted node's node struct.
    How a client would use it:
    1) The libraft client calls raft_transfer_leader(raft_server_t* me_, raft_node_id_t node_id) to target the node it wants to transfer to.
    2) The client provides the send_timeoutnow() callback that actually sends the RPC.
    3) The client extends its existing notify_state_event() callback to observe the result of the timeout-now operation: RAFT_STATE_LEADERSHIP_TRANSFER_FAILED means the transfer failed; on RAFT_STATE_FOLLOWER it inspects the actual leader to see whether it is the expected one and can report whether the transfer reached the expected node; RAFT_STATE_CANDIDATE also means the transfer failed, as the targeted node sent a request vote that removed our leadership but could not win the election. With PreVote, this last case should be very rare.
  - Ozan Tezcan authored: Define separate callback function for node id retrieval
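
The client-side flow referenced in the timeout-now commit above, as a hedged sketch. The callback signatures and registration details are assumptions; only raft_transfer_leader(), send_timeoutnow, notify_state_event, and the RAFT_STATE_* values come from the commit message:

```c
#include "raft.h"   /* assumed header name */

/* (2) The callback a client supplies to actually ship the timeout-now RPC.
 *     The parameter list here is an assumption. */
static int my_send_timeoutnow(raft_server_t *raft, void *udata, raft_node_t *node)
{
    /* serialize a timeout-now message and send it to `node` over the app's transport */
    return 0;
}

/* (3) Observe the outcome in the existing state-event callback (signature assumed). */
static void my_notify_state_event(raft_server_t *raft, void *udata, int state)
{
    switch (state) {
    case RAFT_STATE_LEADERSHIP_TRANSFER_FAILED:
        /* transfer failed outright */
        break;
    case RAFT_STATE_FOLLOWER:
        /* compare the current leader id against the node we targeted to see
         * whether the transfer landed where we wanted */
        break;
    case RAFT_STATE_CANDIDATE:
        /* the target stripped us of leadership but lost the election: also a failure */
        break;
    }
}

/* (1) Kick off the transfer towards the chosen node. */
static void transfer_to(raft_server_t *raft, raft_node_id_t target)
{
    int e = raft_transfer_leader(raft, target);
    (void)e;   /* 0 on success; error handling is application-specific */
}
```
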
- 22 Sep, 2021 1 commit
  - Shaya Potter authored: Fix the read queue test to be more accurate.
    1) Each server's set of nodes keeps the max msg_id it has seen from that node when that node has been leader.
    1a) Because of this, get_max_seen now operates on a server's nodes, not on the server itself.
    2) virtraft encodes the leader id into the "arg" (with a user-redefinable multiplier) it sends as part of the callback, so when we get the callback we know which leader it is for and can check its voters for correctness.
- 15 Sep, 2021 2 commits
  - Ozan Tezcan authored
  - Ozan Tezcan authored
- 14 Sep, 2021 2 commits
  - Ozan Tezcan authored
  - Ozan Tezcan authored: Prevote implementation
- 12 Sep, 2021 1 commit
  - Shaya Potter authored:
    * Add a read_queue test to virtraft. Every iteration we push a read_queue request, and the handler we pass to it sets a variable when called. We can use this to make sure the read_queue does not fall too far behind the iteration: we pass the leader's msg_id and check that it does not get too far from the msg_id (variable) we see in the read_queue test. This is analogous to the existing log-applying deadlock test.
    * Implement msg_id checking in virtraft for verification of read_queue requests. When we pop an entry off the read_queue with the can_read flag, we verify across all nodes that a majority of nodes have accepted from the leader a msg_id past what this read_queue entry needs. The problem is that, until now, msg_id was private to each server instance and had no relevance to the follower nodes except to include it back in the response. We change that so followers store the max msg_id they have seen from their current leader; using that, the read_queue handler can verify that the leader's voting nodes have a quorum past the msg_id the handler returns, ensuring the read_queue handler call is correct.
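
A hedged C sketch of the quorum test described in the second bullet (the virtraft side itself is test-harness code). The struct fields, the raft_msg_id_t type name, and the per-node max-seen-msg_id accessor are illustrative assumptions:

```c
static int quorum_has_seen_msgid(raft_server_private_t *me, raft_msg_id_t required)
{
    int votes = raft_node_is_voting(me->node) ? 1 : 0;   /* leader counts itself if it votes */

    for (int i = 0; i < me->num_nodes; i++) {
        raft_node_t *node = me->nodes[i];
        if (node == me->node || !raft_node_is_voting(node))
            continue;                                     /* skip self and non-voters */
        if (raft_node_get_max_seen_msgid(node) >= required)   /* hypothetical accessor */
            votes++;
    }
    /* strict majority of the voting nodes */
    return 2 * votes > raft_get_num_voting_nodes((raft_server_t *)me);
}
```
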
- 09 Sep, 2021 1 commit
  - Ozan Tezcan authored: Unknown node handling
- 01 Sep, 2021 1 commit
  - Shaya Potter authored: The problem is that two appendentries can be in flight in parallel. However, the response handler does not expect this: it will reset next_idx for both, but it currently resets next_idx based on next_idx's current value, not the old value that was the basis of prev_log_idx.
    Solution: return prev_log_idx in the response.
    Problem with the solution: prev_log_idx is not always based on the node's next_idx; it can be based on a snapshot idx if the entry no longer exists. I have not coded this yet, but perhaps we can determine whether the returned prev_log_idx is the snapshot_idx and, if so, not increment by 1. We could even pass another flag in the appendentries struct to say it is a snapshot, but that is ugly.
    In addition, we increment msg_id on each appendentries, so that if a response comes back out of order after we have already accepted a later response, we just ignore it.
    Also, on an appendentries response, even if the appendentries was not a success, as long as it is for the same term as ours (i.e. we are still the leader) that is sufficient for leader/quorum purposes, so set_last_ack is moved up to right after the term test.
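
A hedged sketch of the msg_id ordering guard and the relocated last-ack bookkeeping. The response struct name, its msg_id field, and the per-node accessor are assumptions:

```c
int raft_recv_appendentries_response(raft_server_t *me_, raft_node_t *node,
                                     msg_appendentries_response_t *r)
{
    /* ... term checks; per the commit, record the node's last ack for
     *     leader/quorum purposes immediately after the term test ... */

    if (r->msg_id < raft_node_get_last_acked_msgid(node))   /* hypothetical accessor */
        return 0;       /* a newer response was already accepted: ignore this one */

    /* ... success/failure handling, next_idx adjustment, commit advancement ... */
    return 0;
}
```
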
- 31 Aug, 2021 1 commit
  - Ozan Tezcan authored
- 24 Aug, 2021 1 commit
  - Shaya Potter authored:
    1) Don't skip setting next_idx for an inactive node in raft_become_leader().
    2) In raft_send_appendentries(), short-circuit and don't send anything if the node is inactive.
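
A hedged sketch of change 2), the short-circuit for inactive nodes. raft_node_is_active() is the accessor as I understand the library, but the exact placement is my reading of the commit rather than confirmed code:

```c
int raft_send_appendentries(raft_server_t *me_, raft_node_t *node)
{
    if (!raft_node_is_active(node))
        return 0;                 /* inactive node: send nothing */

    /* ... build prev_log_idx/prev_log_term, attach entries, and invoke the
     *     send_appendentries callback as before ... */
    return 0;
}
```
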
- 19 Aug, 2021 1 commit
  - Ozan Tezcan authored: Code refactor
- 18 Aug, 2021 2 commits
  - Ozan Tezcan authored: Implemented check-quorum
  - Shaya Potter authored: We were using the wrong printf modifiers; format strings are now checked and all of them have been fixed, so there are no warnings/errors.
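
A generic, standalone illustration (not taken from the commit) of the kind of format-specifier fixes -Wformat reports, using the matching specifiers and the <inttypes.h> macros for fixed-width types:

```c
#include <stdio.h>
#include <inttypes.h>
#include <stddef.h>

int main(void)
{
    uint64_t term = 7;
    size_t   n    = 42;
    long     idx  = 1000;

    /* wrong: printf("term %d n %d idx %d\n", term, n, idx); */
    printf("term %" PRIu64 " n %zu idx %ld\n", term, n, idx);
    return 0;
}
```
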
- 11 Aug, 2021 2 commits
  - Ozan Tezcan authored: Fix: added missing va_end() and a naming change for the log function (a small generic example follows this date's commits)
  - Ozan Tezcan authored
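
The generic example promised above: every va_start() must be paired with a va_end(). The logging function shown here is hypothetical, not the library's actual logger:

```c
#include <stdarg.h>
#include <stdio.h>

static void log_msg(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);            /* must match every va_start() */
}
```
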
- 06 Aug, 2021 1 commit
  - Ozan Tezcan authored
- 04 Aug, 2021 1 commit
  - Shaya Potter authored: Create a self-documenting function name instead of throwing in a bunch of conditions
- 03 Aug, 2021 1 commit
  - Shaya Potter authored: Before, we only did an "auto commit" of entries if the number of voters == 1. As we can have a "non-voting leader", this is wrong: the single voter can be another node, not the leader. We therefore add a condition to ensure that the leader is a voter too, so that we know the leader is the single voter.
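
A hedged sketch of the corrected condition: auto-commit only when there is exactly one voter and the leader itself is that voter. The accessor names are my best understanding of the library and should be treated as assumptions:

```c
int single_voter_is_leader =
        raft_get_num_voting_nodes(me_) == 1 &&
        raft_node_is_voting(raft_get_my_node(me_));

if (single_voter_is_leader)
    raft_set_commit_idx(me_, raft_get_current_idx(me_));   /* safe to auto-commit */
```
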