- 08 Jul, 2019 1 commit
-
-
Oran Agra authored
The implementation of diskless replication was so far diskless only on the master side: the slave side still stored the received RDB file to disk before loading it back in and parsing it. This commit adds two modes to load the RDB directly from the socket: 1) when-empty, 2) using "swapdb". A third mode, using a diskless slave by flushing the db first, is risky and currently not included.

Other changes:
- Distinguish between the AOF configuration and its state, so that AOF is re-enabled only when the sync eventually succeeds (and not when exiting from readSyncBulkPayload after a failed attempt); a CONFIG GET or INFO during RDB loading would also have lied.
- When loading an RDB from the network, don't kill the server on a short read (it can be a network error).
- Fix the RDB check when performed on a preamble AOF.
- Tests: run the replication tests for the diskless slave too, make the replication test a bit more aggressive, and add a test for diskless load with swapdb.
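A minimal sketch of the "swapdb" idea described above, under my own simplified assumptions (toy types, hypothetical names, not the Redis implementation): load the incoming dataset into temporary storage, swap it in only if loading succeeds, and restore the old dataset on failure.

```c
/* Toy illustration of the "swapdb" diskless-load idea: the new dataset is
 * loaded into a temporary database and only swapped in on success, so a
 * broken transfer never leaves the slave without data. All names are hypothetical. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { char *data; } db_t;           /* stand-in for a keyspace */

/* Pretend to parse an RDB stream arriving on a socket; fails if payload is NULL. */
static int load_from_socket(db_t *tmp, const char *payload) {
    if (payload == NULL) return -1;            /* simulated short read / bad payload */
    tmp->data = strdup(payload);
    return 0;
}

static void diskless_load_swapdb(db_t *db, const char *payload) {
    db_t backup = *db;                         /* keep the old dataset aside */
    db_t tmp = { NULL };

    if (load_from_socket(&tmp, payload) == 0) {
        free(backup.data);                     /* loading succeeded: discard old data */
        *db = tmp;
    } else {
        free(tmp.data);                        /* loading failed: restore old dataset */
        *db = backup;
        fprintf(stderr, "diskless load failed, old dataset restored\n");
    }
}

int main(void) {
    db_t db = { strdup("old dataset") };
    diskless_load_swapdb(&db, NULL);           /* simulated failure: keeps old data */
    diskless_load_swapdb(&db, "new dataset");  /* success: swaps in the new data */
    printf("db now holds: %s\n", db.data);
    free(db.data);
    return 0;
}
```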
-
- 29 Dec, 2017 1 commit
-
-
Oran Agra authored
- Protocol parsing (processMultibulkBuffer) was limited to 32-bit positions in the buffer; potential overflow in readQueryFromClient.
- rioWriteBulkCount used int, although rioWriteBulkString passed it a size_t.
- Several places in sds.c used int for string length or index.
- Bugfix in RM_SaveAuxField (the return value was 1 or -1, not the length).
- RM_SaveStringBuffer was limited to a 32-bit length.
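A small standalone illustration (my own example, not code from the commit) of the class of bug fixed here: storing a length that can exceed INT_MAX in an int truncates it, while size_t keeps the full value. It assumes a 64-bit build where size_t is wider than int.

```c
/* Demonstrates why int is the wrong type for buffer lengths and offsets:
 * values past 2^31-1 are truncated, while size_t holds them correctly.
 * Assumes a 64-bit build (size_t wider than int). */
#include <stdio.h>
#include <limits.h>
#include <stddef.h>

int main(void) {
    size_t big_len = (size_t)INT_MAX + 1000;   /* e.g. a >2GB bulk string */

    int    as_int    = (int)big_len;           /* truncated / implementation-defined */
    size_t as_size_t = big_len;                /* preserved */

    printf("real length : %zu\n", big_len);
    printf("as int      : %d  (overflowed)\n", as_int);
    printf("as size_t   : %zu (correct)\n", as_size_t);
    return 0;
}
```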
-
- 03 Jun, 2016 1 commit
-
-
antirez authored
-
- 25 Apr, 2016 1 commit
-
-
Oran Agra authored
-
- 17 Oct, 2014 1 commit
-
-
antirez authored
Performing a socket write() for each RDB rio API write call was extremely inefficient, so now rio has minimal buffering capabilities: writes are accumulated into a buffer, and only when a given limit is reached are they actually written to the N slave FDs. Trivia: rio lacked support for buffering since our targets were: 1) memory buffers, 2) C standard I/O. Both were buffered already.
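A minimal sketch of that buffering idea: a generic write-coalescing buffer, not the rio code itself; the names and the 16 KiB threshold are made up for illustration.

```c
/* Minimal write-coalescing buffer: small writes are accumulated and only
 * pushed to the file descriptor once a threshold is reached (or on flush).
 * Names and the 16 KiB threshold are illustrative, not taken from rio.c. */
#include <string.h>
#include <unistd.h>

#define OUTBUF_LIMIT (16*1024)

typedef struct {
    int fd;
    size_t pos;
    char buf[OUTBUF_LIMIT];
} outbuf;

/* Write everything accumulated so far to the fd. Returns 0 on success, -1 on error. */
static int outbuf_flush(outbuf *ob) {
    size_t off = 0;
    while (off < ob->pos) {
        ssize_t n = write(ob->fd, ob->buf + off, ob->pos - off);
        if (n <= 0) return -1;
        off += (size_t)n;
    }
    ob->pos = 0;
    return 0;
}

/* Append data, flushing whenever the buffer fills up. */
static int outbuf_write(outbuf *ob, const void *p, size_t len) {
    const char *src = p;
    while (len) {
        size_t space = OUTBUF_LIMIT - ob->pos;
        size_t chunk = len < space ? len : space;
        memcpy(ob->buf + ob->pos, src, chunk);
        ob->pos += chunk;
        src += chunk;
        len -= chunk;
        if (ob->pos == OUTBUF_LIMIT && outbuf_flush(ob) == -1) return -1;
    }
    return 0;
}

int main(void) {
    outbuf ob = { .fd = STDOUT_FILENO, .pos = 0 };
    for (int i = 0; i < 1000; i++)
        outbuf_write(&ob, "hello rdb payload\n", 18);   /* coalesced into few write()s */
    return outbuf_flush(&ob);
}
```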
-
- 14 Oct, 2014 1 commit
-
-
antirez authored
The fdset target is used when we want to write an RDB file directly to slaves' sockets. In this setup, as long as there is a single slave still receiving our payload, we want to continue sending instead of aborting. However rio calls should abort if no FD is ok. We also want the errors reported so that we can signal the parent who is ok and who is broken, so there is a new set of integers with the state of each fd: zero is ok, non-zero is the errno of the failure, if available, or a generic EIO.
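A rough sketch of that per-FD bookkeeping, simplified and with hypothetical names (the real fdset target lives in rio.c and also deals with non-blocking sockets):

```c
/* Sketch of a multi-fd write loop with per-fd error state: keep writing as
 * long as at least one destination is healthy, record errno for the ones
 * that failed (or EIO if errno is not meaningful), and report failure only
 * when all fds are broken. Not the actual rio fdset implementation. */
#include <errno.h>
#include <unistd.h>

typedef struct {
    int *fds;       /* destination sockets (e.g. one per slave) */
    int *state;     /* 0 = ok, otherwise the errno of the failure */
    int numfds;
} fdset_target;

/* Returns 1 if the buffer was delivered to at least one fd, 0 if all failed. */
static int fdset_write(fdset_target *t, const void *buf, size_t len) {
    int healthy = 0;
    for (int j = 0; j < t->numfds; j++) {
        if (t->state[j] != 0) continue;        /* already broken, skip it */

        size_t off = 0;
        while (off < len) {
            ssize_t n = write(t->fds[j], (const char *)buf + off, len - off);
            if (n <= 0) {
                t->state[j] = errno ? errno : EIO;   /* remember why it failed */
                break;
            }
            off += (size_t)n;
        }
        if (t->state[j] == 0) healthy++;
    }
    return healthy > 0;                        /* abort only if no fd is ok */
}

int main(void) {
    int fds[2] = { STDOUT_FILENO, STDERR_FILENO };
    int state[2] = { 0, 0 };
    fdset_target t = { fds, state, 2 };
    return fdset_write(&t, "payload\n", 8) ? 0 : 1;
}
```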
-
- 10 Oct, 2014 1 commit
-
-
antirez authored
-
- 16 Jul, 2013 2 commits
- 24 Apr, 2013 1 commit
-
-
antirez authored
-
- 03 Apr, 2013 1 commit
-
-
antirez authored
-
- 08 Nov, 2012 1 commit
-
-
antirez authored
-
- 11 Apr, 2012 1 commit
-
-
antirez authored
-
- 09 Apr, 2012 2 commits
- 22 Sep, 2011 2 commits
-
-
antirez authored
The rioInitWithFile and rioInitWithBuffer functions now take a rio structure pointer, to avoid copying a structure when returning a value to the caller.
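In API terms the change has roughly the following shape; this is a generic before/after sketch with toy types, not the exact rio structure or signatures.

```c
/* Why init-by-pointer: returning a large struct by value copies it back to
 * the caller, while an init function that fills caller-provided storage does
 * not. Toy types only; the real structure and functions are in rio.h/rio.c. */
#include <stdio.h>
#include <string.h>

typedef struct {
    char buf[4096];        /* large enough that copying it around matters */
    size_t pos;
} toy_rio;

/* Old style: build the struct locally and return it by value (a full copy,
 * unless the compiler happens to elide it). */
static toy_rio toy_rio_with_buffer_byvalue(void) {
    toy_rio r;
    memset(&r, 0, sizeof(r));
    return r;
}

/* New style: the caller owns the storage, we just initialize it in place. */
static void toy_rio_init_with_buffer(toy_rio *r) {
    memset(r, 0, sizeof(*r));
}

int main(void) {
    toy_rio a = toy_rio_with_buffer_byvalue();   /* copies ~4 KB back */
    toy_rio b;
    toy_rio_init_with_buffer(&b);                /* no copy, init in place */
    printf("%zu %zu\n", a.pos, b.pos);
    return 0;
}
```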
-
antirez authored
The comment on top of the _rio structure was modified for correctness: fwrite/fread semantics are actually different in general, but the return value was 0/1 in our old usage before rio.c, because we always used 1 as the number of items and the actual number of bytes to read as the item length.
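For context on that 0/1 behaviour (standard C stdio, not rio code): with an item count of 1, fread/fwrite report whole items only, so the return value collapses to 0 or 1.

```c
/* fread/fwrite return the number of *items* transferred. With nmemb == 1 and
 * size == len (the usage described above), that is always 0 or 1, regardless
 * of how many bytes the item actually contains. */
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *fp = tmpfile();
    if (!fp) return 1;

    const char payload[] = "hello";
    /* size = 5 bytes, nmemb = 1 item -> returns 1 on success, 0 on failure. */
    size_t w = fwrite(payload, strlen(payload), 1, fp);

    rewind(fp);
    char in[8] = {0};
    size_t r = fread(in, strlen(payload), 1, fp);

    printf("fwrite returned %zu, fread returned %zu, data: %s\n", w, r, in);
    fclose(fp);
    return 0;
}
```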
-
- 13 May, 2011 1 commit
-
-
Pieter Noordhuis authored
-