- 19 Nov, 2019 1 commit
Johannes Truschnigg authored
Instead of replicating a subset of libsystemd's sd_notify(3) internally, use the dynamic library provided by systemd to communicate with the service manager. When systemd supervision is auto-detected or configured, communicate the actual server status (e.g. "Loading dataset", "Waiting for master<->replica sync") to systemd, instead of declaring readiness right after initializing the server process.
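A minimal sketch of the reporting pattern described above, using libsystemd's sd_notify(3) API; the status strings mirror the ones quoted in the commit message, and the program layout is illustrative only:

    /* Build with: cc notify_sketch.c $(pkg-config --cflags --libs libsystemd) */
    #include <systemd/sd-daemon.h>

    int main(void) {
        /* sd_notify() is a no-op when $NOTIFY_SOCKET is unset, so calling it
         * is safe even when the process is not supervised by systemd. */
        sd_notify(0, "STATUS=Loading dataset");
        /* ... load the dataset, wait for master<->replica sync, etc. ... */
        sd_notify(0, "STATUS=Ready to accept connections");
        sd_notify(0, "READY=1");   /* only now tell systemd we are ready */
        return 0;
    }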
- 29 Oct, 2019 1 commit
Oran Agra authored
* replication hooks: role change, master link status, replica online/offline
* persistence hooks: saving, loading, loading progress
* misc hooks: cron loop, shutdown, module loaded/unloaded
* change the way the hook tests work, and add tests for all of the above

startLoading() now gets a flag indicating what is being loaded. stopLoading() now gets an indication of success or failure. Added startSaving() and stopSaving() with similar args and role.
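A hedged sketch of consuming such hooks from a module through the server-events API as it appears in the released modules interface; the event and subevent names below are recalled from that API, not taken from this commit's diff:

    #include "redismodule.h"

    /* Log when an RDB save starts and ends. */
    static void persistenceCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                                    uint64_t sub, void *data) {
        REDISMODULE_NOT_USED(e);
        REDISMODULE_NOT_USED(data);
        if (sub == REDISMODULE_SUBEVENT_PERSISTENCE_RDB_START)
            RedisModule_Log(ctx, "notice", "RDB save started");
        else if (sub == REDISMODULE_SUBEVENT_PERSISTENCE_ENDED)
            RedisModule_Log(ctx, "notice", "RDB save finished");
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "hookdemo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        return RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Persistence,
                                                  persistenceCallback);
    }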
- 10 Oct, 2019 1 commit
antirez authored
This is what happened:
1. The instance starts as a slave in the cluster configuration, but server.masterhost is not set, so technically the instance is acting like a master.
2. loadDataFromDisk() calls replicationCacheMasterUsingMyself() even if the instance is a master, for the case where it is logically a slave and cluster mode is enabled. So now we have a cached master even though the instance is practically configured as a master (from the POV of server.masterhost and so forth).
3. clusterCron() sees that the instance needs to replicate from its master, because logically it is a slave, so it calls replicationSetMaster(), which in turn calls replicationCacheMasterUsingMyself(): before this commit, this call would overwrite the old cached master, creating a memory leak.
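A hedged sketch of the kind of guard that closes this sort of leak; identifiers such as server.cached_master and replicationDiscardCachedMaster() follow the Redis source, but this is not the literal patch:

    void replicationCacheMasterUsingMyself(void) {
        /* Never overwrite an existing cached master: discard it first so
         * the old fake client cannot leak. */
        if (server.cached_master != NULL)
            replicationDiscardCachedMaster();
        /* ... build a fake client out of our own replication state and
         * store it in server.cached_master for a later partial resync ... */
    }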
- 07 Oct, 2019 4 commits
Yossi Gottlieb authored
Add configuration options for TLS protocol versions, cipher and cipher-suite selection, etc.
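A hedged sketch of what such options typically map to at the OpenSSL level; this is plain OpenSSL 1.1.1 API shown for illustration, not the Redis config-parsing code:

    #include <openssl/ssl.h>

    SSL_CTX *make_tls_ctx(void) {
        SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
        if (ctx == NULL) return NULL;
        /* Protocol versions: allow TLSv1.2 and newer only. */
        SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
        /* Ciphers for TLSv1.2 and below... */
        SSL_CTX_set_cipher_list(ctx, "HIGH:!aNULL:!MD5");
        /* ...and cipher suites for TLSv1.3. */
        SSL_CTX_set_ciphersuites(ctx,
            "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256");
        return ctx;
    }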
Oran Agra authored
misc:
- handle SSL_has_pending by iterating over such connections in beforeSleep, and passing a timeout of 0 to aeProcessEvents (sketched below)
- fix an issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed)
- add a key-load-delay config for testing
- trim connShutdown, which is no longer needed
- rioFdsetWrite -> rioFdWrite: simplified since there is no longer a need to write to multiple FDs
- don't detect that the rdb child exited (don't call wait3) until we detect the pipe is closed
- clean up a bad optimization from rio.c, and add another one
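A hedged sketch of the first item above; the helper and the way connections are tracked are hypothetical, not the actual diff. The point is that bytes OpenSSL has already read and decrypted will not trigger another epoll wakeup, so connections for which SSL_has_pending() is true must force an extra, non-blocking event-loop pass:

    /* Hypothetical helper: walk the TLS connections tracked by the TLS layer
     * and report whether any of them has buffered, undelivered data. */
    static int anyTLSConnectionHasPendingData(void) {
        return 0;   /* sketch: would check SSL_has_pending() per connection */
    }

    void beforeSleep(struct aeEventLoop *eventLoop) {
        /* ... existing beforeSleep() work ... */
        if (anyTLSConnectionHasPendingData()) {
            /* Do not block in the poll: process file events immediately
             * (AE_DONT_WAIT corresponds to a timeout of 0). */
            aeProcessEvents(eventLoop, AE_FILE_EVENTS | AE_DONT_WAIT);
        }
    }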
Yossi Gottlieb authored
* Introduce a connection abstraction layer for all socket operations and integrate it across the code base.
* Provide an optional TLS connections implementation based on OpenSSL.
* Pull a newer version of hiredis with TLS support.
* Tests, redis-cli updates for TLS support.
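A hedged sketch of what such a connection abstraction layer looks like: a vtable of function pointers that both the plain-socket and the TLS backends implement. The names below are illustrative, not the exact Redis connection API:

    #include <stddef.h>

    typedef struct connection connection;

    typedef struct ConnectionType {
        int  (*connect)(connection *conn, const char *addr, int port);
        int  (*write)(connection *conn, const void *data, size_t len);
        int  (*read)(connection *conn, void *buf, size_t len);
        void (*close)(connection *conn);
    } ConnectionType;

    struct connection {
        ConnectionType *type;   /* e.g. a plain-socket backend or a TLS backend */
        int fd;
        void *private_data;     /* for TLS: the SSL* object */
    };

    /* Callers never touch fd or SSL directly; they go through the vtable. */
    static inline int connWrite(connection *conn, const void *data, size_t len) {
        return conn->type->write(conn, data, len);
    }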
charsyam authored
- 27 Sep, 2019 1 commit
antirez authored
- 05 Aug, 2019 1 commit
antirez authored
- 30 Jul, 2019 2 commits
- 17 Jul, 2019 1 commit
Oran Agra authored
* create a module API for forking child processes
* refactor duplicate code around creating and tracking forks by AOF and RDB
* child processes listen for SIGUSR1 and exit via exitFromChild, in order to eliminate a valgrind warning about an unhandled signal (sketched below)
* note that the BGSAVE error reply has changed

The valgrind error is: Process terminating with default action of signal 10 (SIGUSR1)
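A hedged, standalone sketch of the child-side signal handling described in the third item, not the literal Redis code: the child installs a SIGUSR1 handler that exits cleanly, so the parent can terminate it without valgrind reporting the signal's default action.

    #include <signal.h>
    #include <sys/types.h>
    #include <unistd.h>

    static void exitFromChildHandler(int sig) {
        (void)sig;
        _exit(0);   /* exit immediately; the child owns no cleanup duties */
    }

    pid_t forkBackgroundChild(void) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: die gracefully when the parent sends SIGUSR1. */
            signal(SIGUSR1, exitFromChildHandler);
            /* ... do the background work (RDB save, AOF rewrite, ...) ... */
            _exit(0);
        }
        return pid;   /* parent: track the child, later kill(pid, SIGUSR1) if needed */
    }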
- 10 Jul, 2019 2 commits
- 08 Jul, 2019 2 commits
antirez authored
Oran Agra authored
Until now, the implementation of diskless replication was diskless only on the master side. The slave side was still storing the received rdb file to disk before loading it back in and parsing it.

This commit adds two modes to load the rdb directly from the socket (sketched below): 1) when-empty, 2) using "swapdb". A third mode, a diskless slave via flushdb, is risky and currently not included.

other changes:
- distinguish between aof configuration and state, so that we can re-enable aof only when sync eventually succeeds (and not when exiting from readSyncBulkPayload after a failed attempt); also, CONFIG GET and INFO during rdb loading would have lied
- when loading the rdb from the network, don't kill the server on a short read (that can be a network error)
- fix rdb check when performed on a preamble AOF

tests:
- run the replication tests for the diskless slave too
- make the replication test a bit more aggressive
- add a test for diskless load with swapdb
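A hedged sketch of the two socket-loading modes; the enum and helper names here are hypothetical, not taken from the diff:

    typedef enum {
        DISKLESS_LOAD_DISABLED,     /* classic path: save the rdb to disk, then load it */
        DISKLESS_LOAD_WHEN_EMPTY,   /* parse straight from the socket, but only when the
                                       dataset is empty, since there is no rollback */
        DISKLESS_LOAD_SWAPDB        /* keep the current dataset aside and restore it if
                                       loading from the socket fails */
    } disklessLoadMode;

    /* Returns 1 if the rdb should be parsed directly from the socket. */
    int useDisklessLoad(disklessLoadMode mode, int dataset_is_empty) {
        if (mode == DISKLESS_LOAD_SWAPDB) return 1;
        if (mode == DISKLESS_LOAD_WHEN_EMPTY && dataset_is_empty) return 1;
        return 0;   /* fall back to the on-disk path */
    }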
- 15 May, 2019 1 commit
antirez authored
CLIENT PAUSE may be used, in other contexts, for a long time, making all the slaves time out. Better for now to be more specific about what should disable sending PINGs. An alternative would be to virtually refresh the slave interactions when clients are paused; however, for now I went for this more conservative solution.
- 17 Apr, 2019 1 commit
chendianqiang authored
- 21 Mar, 2019 2 commits
- 20 Mar, 2019 2 commits
- 18 Mar, 2019 1 commit
antirez authored
- 10 Mar, 2019 1 commit
antirez authored
- 09 Mar, 2019 1 commit
John Sully authored
- 12 Feb, 2019 1 commit
zhaozhao.zz authored
In most production environments a normal user's behavior should be limited. With the Redis ACL mechanism we can now do that like this:

    user default on +@all ~* -@dangerous nopass
    user admin on +@all ~* >someSeriousPassword

Then the default normal user cannot execute dangerous commands like FLUSHALL/KEYS. But some admin commands are in the dangerous category too, like PSYNC, and the configuration above would forbid the replica from syncing with its master. So this commit adds a new configuration option for replication, the masteruser option:

    masteruser admin
    masterauth someSeriousPassword

The replica will then send AUTH admin someSeriousPassword and get the privilege to execute PSYNC. If masteruser is NULL, the replica will AUTH with only masterauth, as before.
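A hedged, standalone sketch of the handshake difference described above; printAuthCommand is a made-up helper used only to show the wire-level commands, it is not Redis code:

    #include <stdio.h>

    /* Print the AUTH command a replica would send to its master during the
     * replication handshake, depending on whether masteruser is configured. */
    void printAuthCommand(const char *masteruser, const char *masterauth) {
        if (masteruser != NULL)
            printf("AUTH %s %s\r\n", masteruser, masterauth);  /* ACL-style auth */
        else
            printf("AUTH %s\r\n", masterauth);                 /* legacy auth */
    }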
- 25 Jan, 2019 1 commit
ArkayZheng authored
- 21 Jan, 2019 3 commits
- 17 Jan, 2019 1 commit
antirez authored
- 09 Jan, 2019 2 commits
- 31 Oct, 2018 1 commit
antirez authored
This logs what happens in the context of the fix in PR #5367.
- 05 Oct, 2018 1 commit
antirez authored
- 27 Sep, 2018 1 commit
Andrey Bugaevskiy authored
- 19 Sep, 2018 1 commit
Andrey Bugaevskiy authored
During the full database resync we may still have unsaved changes on the receiving side. This causes a race condition between the rename/load of the synced data and the rename of the rdbSave tempfile.
- 11 Sep, 2018 2 commits
- 17 Jul, 2018 1 commit
Oran Agra authored
The slave sends \n keepalive messages to the master while parsing the rdb, and later sends REPLCONF ACK once a second. Rarely, the master receives both a linefeed char and a REPLCONF in the same read, \n*3\r\n$8\r\nREPLCONF\r\n..., and it tries to trim two chars (\r\n) from the query buffer, trimming the '*' from *3\r\n$8\r\nREPLCONF\r\n... The master then tries to process a command starting with '3' and replies to the slave with a bunch of -ERR and one +OK. Although the slave silently ignores these (it prints a log message), this corrupts the replication offset at the slave, since the slave increases its replication offset while the master did not.

Other than the fix in processInlineBuffer (sketched below), I did several other improvements while hunting this very rare bug:
- when redis replies with "unknown command" it now includes a portion of the arguments, not just the command name, so it is easier to understand what was received; in my case, on the slave side, it was -ERR, but the "arguments" were the interesting part (containing info on the error)
- about a year ago I added code in addReplyErrorLength to print the error to the log in case of a reply to a master (since this string isn't actually transmitted to the master); I now changed that block to print a similar log message to indicate an error being sent from the master to the slave. Note that the slave is marked as CLIENT_SLAVE only after PSYNC was received, so this will not cause any harm for REPLCONF, and will only indicate problems that are going to corrupt the replication stream anyway
- in two places where c->reply was emptied, I wanted to reset sentlen; this is a precaution (I did not actually see such a problem), since a non-zero sentlen will cause corruption to be transmitted on the socket
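A hedged, standalone sketch of the parsing rule at the heart of the fix; this is illustrative code, not the actual processInlineBuffer() diff. An inline line must consume a trailing "\r\n" or a bare "\n", never two bytes unconditionally, so that a lone '\n' keepalive never eats the first byte of the command that follows it:

    #include <stddef.h>
    #include <string.h>

    /* Returns the number of bytes consumed by one inline line (including the
     * newline), or 0 if no complete line is buffered yet. *linelen receives
     * the payload length without any trailing "\r\n" or "\n". */
    size_t inlineLineLength(const char *buf, size_t buflen, size_t *linelen) {
        const char *newline = memchr(buf, '\n', buflen);
        if (newline == NULL) return 0;
        size_t consumed = (size_t)(newline - buf) + 1;          /* include '\n' */
        *linelen = consumed - 1;
        if (*linelen > 0 && newline[-1] == '\r') (*linelen)--;  /* optional '\r' */
        return consumed;
    }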