Calling raft_periodic() periodically
------------------------------------
We need to call ``raft_periodic`` at periodic intervals.
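ticketd is built on libuv, so one natural way to do this is with a repeating ``uv_timer_t``. The sketch below is only illustrative: it assumes ``raft_periodic`` takes the number of milliseconds that have elapsed since the previous call, and a 1000 millisecond period; the function names are not from ticketd.

.. code-block:: c

   #include <uv.h>

   static void __periodic(uv_timer_t* handle)
   {
       /* tell the Raft server that 1000ms have passed; this drives
        * election timeouts and leader heartbeats */
       raft_periodic(handle->data, 1000);
   }

   static void __start_periodic_timer(uv_loop_t* loop, raft_server_t* raft)
   {
       static uv_timer_t timer;

       uv_timer_init(loop, &timer);
       timer.data = raft;
       uv_timer_start(&timer, __periodic, 0, 1000);
   }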
Receiving an entry from the client
----------------------------------

Our Raft application receives log entries from the client.
When this happens we need to:
* Redirect the client to the Raft cluster leader (if necessary)
* Append the entry to our log
* Block until the log entry has been committed [#]_
.. [#] When the log entry has been replicated across a majority of servers in the Raft cluster
Receiving the entry - Append the entry to our log
-------------------------------------------------
We call ``raft_recv_entry`` when we want to append the entry to the log.
.. code-block:: c
msg_entry_response_t response;
e = raft_recv_entry(raft, node_idx, &entry, &response);
You should populate the ``entry`` struct with the log entry the client has sent. After the call completes, the ``response`` parameter is populated and can be passed to ``raft_msg_entry_response_committed`` to check whether the log entry has been committed.
Receiving the entry - Blocking until the log entry has been committed
----------------------------------------------------------------------
When the server receives a log entry from the client, it has to block until the entry is committed. This is necessary because our Raft server has to replicate the log entry to the other peers of the Raft cluster before the entry can be committed.
The ``raft_recv_entry`` function does not block! This means you will need to implement the blocking functionality yourself.
*The example below is from ticketd's client thread. It shows that we need to block on client requests. ticketd does the blocking by waiting on a condition variable, which is signalled by the peer thread; that separate thread is responsible for handling traffic between Raft peers.*
.. code-block:: c
msg_entry_response_t response;
e = raft_recv_entry(sv->raft, sv->node_idx, &entry, &response);
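A minimal sketch of the blocking itself (variable declarations and error handling omitted). It assumes a libuv mutex/condition pair protects the Raft server: ``sv->raft_lock`` is an illustrative name, ``sv->appendentries_received`` is the condition variable signalled by the peer thread below, and ``raft_msg_entry_response_committed`` is assumed to return non-zero once the entry is committed.

.. code-block:: c

   uv_mutex_lock(&sv->raft_lock);

   e = raft_recv_entry(sv->raft, sv->node_idx, &entry, &response);

   /* wait until the peer thread signals that appendentries responses have
    * arrived, then re-check whether our entry has been committed */
   while (0 == raft_msg_entry_response_committed(sv->raft, &response))
       uv_cond_wait(&sv->appendentries_received, &sv->raft_lock);

   uv_mutex_unlock(&sv->raft_lock);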
*Example from ticketd's peer thread. When an appendentries response is received from a Raft peer, we signal the client thread that an entry might have been committed.*
.. code-block:: c
e = raft_recv_appendentries_response(sv->raft, conn->node_idx, &m.aer);
uv_cond_signal(&sv->appendentries_received);
Receiving the entry - Redirecting the client to the leader
------------------------------------------------------------
When we receive a log entry from the client, it's possible that we are not the leader.

If we aren't currently the leader of the Raft cluster, we MUST send a redirect error message to the client, so that the client can connect directly to the leader for future requests. We use the ``raft_get_current_leader`` function to find out who the current leader is.

*ticketd, for example, responds with a 301 HTTP redirect that points the client at the current leader.*
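A sketch of the leader check, assuming ``raft_get_current_leader`` returns the leader's node index (or -1 when no leader is known); ``conn``, ``sv`` and the ``__respond_*`` helpers are illustrative application names, not part of the Raft library:

.. code-block:: c

   int leader = raft_get_current_leader(sv->raft);

   if (-1 == leader)
       /* no leader is known yet; ask the client to retry later */
       __respond_error(conn, 503);
   else if (leader != sv->node_idx)
       /* we aren't the leader: send a 301 redirect to the node that is */
       __respond_redirect(conn, leader);
   else
       /* we are the leader: append the entry to our log */
       __handle_entry(conn, sv);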
Providing callbacks
-------------------

You provide your callbacks to the Raft server using ``raft_set_callbacks``.
We MUST implement the following callbacks: ``send_requestvote``, ``send_appendentries``, ``applylog``, ``persist_vote``, ``persist_term``, ``log_offer``, and ``log_pop``.
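A sketch of wiring the callbacks up. It assumes the callbacks struct is called ``raft_cbs_t``, that its members are named after the callbacks above, and that ``raft_set_callbacks`` takes the server, the struct, and a user-data pointer that is handed back to every callback; check ``raft.h`` for the exact definitions. The ``__``-prefixed functions are our own implementations.

.. code-block:: c

   raft_cbs_t raft_callbacks = {
       .send_requestvote   = __send_requestvote,
       .send_appendentries = __send_appendentries,
       .applylog           = __applylog,
       .persist_vote       = __persist_vote,
       .persist_term       = __persist_term,
       .log_offer          = __log_offer,
       .log_pop            = __log_pop,
   };

   /* sv is our application's server context; the Raft library passes it
    * back to every callback as the user-data pointer */
   raft_set_callbacks(raft, &raft_callbacks, sv);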
We tell the Raft server what the cluster configuration is by using the ``raft_add_node`` function. For example, if we have 5 servers in our cluster, we call ``raft_add_node`` 5 times [#]_.

.. [#] We have to also include the Raft server itself in the ``raft_add_node`` calls. When we call ``raft_add_node`` for the Raft server itself, we set ``peer_is_self`` to 1.
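A sketch of the idea for a 5 server cluster. The parameter list used here, ``raft_add_node(raft, user_data, peer_is_self)``, is an assumption based on the ``peer_is_self`` footnote above, and ``sv->peer_connections`` is an illustrative application array; check ``raft.h`` for the real signature.

.. code-block:: c

   int i;

   for (i = 0; i < 5; i++)
       /* pass 1 for peer_is_self on the entry that describes this server */
       raft_add_node(raft, &sv->peer_connections[i], i == sv->node_idx);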
**send_requestvote()**
For this callback we have to serialize a ``msg_requestvote_t`` struct, and then send it to the peer identified by ``node_idx``.
*Example from ticketd showing how the callback is implemented:*
.. code-block:: c

       e = uv_write(&conn->write, conn->stream, bufs, 1, __peer_write_cb);
       if (-1 == e)
           uv_fatal(e);
       return 0;
   }
**send_appendentries()**
For this callback we have to serialize a ``msg_appendentries_t`` struct, and then send it to the peer identified by ``node_idx``. This struct is more complicated to serialize because the ``m->entries`` array might be populated.
*Example from ticketd showing how the callback is implemented:*
.. code-block:: c

           /* inside the branch where m->entries is populated: dump the
            * serialized entries into a second buffer and send both buffers */
           e = tpl_dump(tn, TPL_MEM | TPL_PREALLOCD, ptr, RAFT_BUFLEN);
           assert(0 == e);
           bufs[1].len = sz;
           bufs[1].base = ptr;
           e = uv_write(&conn->write, conn->stream, bufs, 2, __peer_write_cb);
           if (-1 == e)
               uv_fatal(e);
           tpl_free(tn);
       }
       else
       {
           /* keep alive appendentries only */
           e = uv_write(&conn->write, conn->stream, bufs, 1, __peer_write_cb);
           if (-1 == e)
               uv_fatal(e);
       }
       return 0;
   }
**applylog()**
This callback is all that is needed to interface the finite state machine (FSM) with the Raft library.
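The exact callback signature is declared in ``raft.h``; the sketch below is only illustrative and assumes the callback receives the committed entry's data and length. ``server_t`` and ``__state_machine_apply`` are hypothetical application names.

.. code-block:: c

   static int __applylog(
       raft_server_t* raft,
       void* user_data,
       const unsigned char* data,
       const int len)
   {
       server_t* sv = user_data;

       /* apply the committed entry to our application's state machine */
       __state_machine_apply(sv, data, len);
       return 0;
   }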
**persist_vote() & persist_term()**
These callbacks simply save data to disk, so that when the Raft server is restarted it resumes with the correct term and vote.
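For example, the current term and the vote can each be written to a small file and flushed before the callback returns. The ``__persist_int`` helper below is illustrative, not part of the library; the persist_vote and persist_term callbacks can call it and only return once the value has reached disk.

.. code-block:: c

   #include <fcntl.h>
   #include <stdio.h>
   #include <unistd.h>

   /* durably write a single integer (the term or the vote) to a file */
   static int __persist_int(const char* path, int value)
   {
       char buf[32];
       int len = snprintf(buf, sizeof(buf), "%d\n", value);

       int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
       if (fd < 0)
           return -1;

       if (write(fd, buf, len) != len || 0 != fsync(fd))
       {
           close(fd);
           return -1;
       }

       return close(fd);
   }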
**log_offer()**
For this callback the user needs to append the received entry to their log. The log MUST be saved to disk before this callback returns.
**log_poll()**
For this callback the user needs to remove the oldest log entry [#]_. The log MUST be saved to disk before this callback returns.
This callback only needs to be implemented to support log compaction.
**log_pop()**
For this callback the user needs to remove the youngest log entry [#]_. The log MUST be saved to disk before this callback returns.
.. [#] The log entry at the front of the log
.. [#] The log entry at the back of the log
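The same "reach disk before returning" rule applies to all three log callbacks. The sketch below shows a durable append that a ``log_offer`` implementation could build on; ``log_fd`` and the append-only file layout are illustrative, and ``log_poll``/``log_pop`` would additionally have to drop entries from the front or back of the same store.

.. code-block:: c

   #include <unistd.h>

   /* append one serialized entry to an append-only log file and make sure
    * it has reached disk before returning */
   static int __log_append_durable(int log_fd, const void* buf, size_t len)
   {
       ssize_t written = write(log_fd, buf, len);

       if (written < 0 || (size_t)written != len)
           return -1;

       return fsync(log_fd);
   }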
Receiving traffic from peers
----------------------------
To receive ``Append Entries``, ``Append Entries response``, ``Request Vote``, and ``Request Vote response`` messages, you need to deserialize the bytes into the message's corresponding struct.
The table below shows the structs that you need to deserialize into or serialize from: