Unverified Commit 40e6131b authored by Brennan, committed by GitHub

Prevent repetitive backlog trimming (#12155)

When `replicationFeedSlaves()` serializes a command, it repeatedly calls
`feedReplicationBuffer()` to feed it to the replication backlog piece by piece.
It is unnecessary to call `incrementalTrimReplicationBacklog()` for every small
amount of data added with `feedReplicationBuffer()`: the chance of the trimming
conditions being met is very low, and these frequent calls add up to a notable
performance cost. Instead, we now attempt trimming only when a new block is
added to the replication backlog.
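
To illustrate the effect outside the real codebase, here is a minimal,
self-contained C sketch of the idea. `BLOCK_SIZE`, `append()`, and `try_trim()`
are hypothetical stand-ins for Redis's backlog blocks and
`incrementalTrimReplicationBacklog()`, used only to contrast trimming on every
append with trimming only when a new block is allocated:

    #include <stdio.h>

    #define BLOCK_SIZE 16384  /* hypothetical block capacity, not Redis's value */

    static size_t block_used = BLOCK_SIZE;  /* start "full" so the first append allocates */
    static long long trim_attempts = 0;

    /* Stand-in for incrementalTrimReplicationBacklog(): the real function walks
     * up to a few blocks and frees unreferenced ones; here we only count calls. */
    static void try_trim(void) { trim_attempts++; }

    /* Append len bytes to the block chain; return 1 if a new block was added. */
    static int append(size_t len) {
        int added_block = 0;
        while (len > 0) {
            if (block_used == BLOCK_SIZE) {  /* current block full: "allocate" another */
                block_used = 0;
                added_block = 1;
            }
            size_t room = BLOCK_SIZE - block_used;
            size_t n = len < room ? len : room;
            block_used += n;
            len -= n;
        }
        return added_block;
    }

    int main(void) {
        /* Old behavior: attempt a trim after every small append. */
        for (int i = 0; i < 100000; i++) { append(100); try_trim(); }
        printf("trim attempts, per-append: %lld\n", trim_attempts);

        /* New behavior: attempt a trim only when a new block was added. */
        block_used = BLOCK_SIZE;
        trim_attempts = 0;
        for (int i = 0; i < 100000; i++) { if (append(100)) try_trim(); }
        printf("trim attempts, per-block:  %lld\n", trim_attempts);
        return 0;
    }

With 16 KB blocks and 100-byte appends, the per-append strategy attempts a trim
100,000 times while the per-block strategy attempts roughly 600, which is the
same amortization this commit applies.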

Using redis-benchmark to saturate a local Redis server indicated a performance
improvement of around 3-3.5% for 100-byte SET commands with this change.
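
A workload like the one described can be reproduced with an invocation along
these lines (the exact flags used for the measurement are not recorded in this
commit, so treat this as an assumption):

    # Saturate a local server with SET commands carrying 100-byte values.
    redis-benchmark -t set -d 100 -n 1000000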
parent 49845c24
@@ -418,13 +418,13 @@ void feedReplicationBuffer(char *s, size_t len) {
         }
         if (add_new_block) {
             createReplicationBacklogIndex(listLast(server.repl_buffer_blocks));
+            /* It is important to trim after adding replication data to keep the backlog size close to
+             * repl_backlog_size in the common case. We wait until we add a new block to avoid repeated
+             * unnecessary trimming attempts when small amounts of data are added. See comments in
+             * freeMemoryGetNotCountedMemory() for details on replication backlog memory tracking. */
+            incrementalTrimReplicationBacklog(REPL_BACKLOG_TRIM_BLOCKS_PER_CALL);
         }
-        /* Try to trim replication backlog since replication backlog may exceed
-         * our setting when we add replication stream. Note that it is important to
-         * try to trim at least one node since in the common case this is where one
-         * new backlog node is added and one should be removed. See also comments
-         * in freeMemoryGetNotCountedMemory for details. */
-        incrementalTrimReplicationBacklog(REPL_BACKLOG_TRIM_BLOCKS_PER_CALL);
     }
 }