- 25 Mar, 2020 1 commit
-
-
antirez authored
A very commonly signaled operational problem with Redis master-replica sets is that, once the master becomes unavailable for some reason, especially because of network problems, many times it won't be able to perform a partial resynchronization with the new master once it rejoins the partition, for the following reason:

1. The master becomes isolated, however it keeps sending PINGs to the replicas. Such PINGs will never be received since the link connection is actually already severed.
2. On the other side, one of the replicas will turn into the new master, setting its secondary replication ID offset to the one of the last command received from the old master: this offset will not include the PINGs sent by the master once the link was already disconnected.
3. When the old master rejoins the partition and is turned into a replica, its offset will be too advanced because of the PINGs, so a PSYNC will fail and a full synchronization will be required.

Related to issue #7002 and other discussions we had in the past around this problem.
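To make the failure concrete, here is a hedged, self-contained sketch of the eligibility test a master applies when a replica asks for a partial resync, loosely modeled on masterTryPartialResynchronization() in replication.c (names, fields, and values are illustrative, not the actual Redis code): an offset pushed past the new master's secondary-ID limit by trailing PINGs fails the check and forces a full sync.

```c
/* Illustrative sketch of the PSYNC eligibility test. A replica whose
 * offset was advanced by PINGs sent on a dead link fails the check. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    char replid[41];              /* current replication ID */
    char replid2[41];             /* previous replication ID (after failover) */
    int64_t second_replid_offset; /* up to where replid2 history is valid */
    int64_t backlog_off;          /* first offset still in the backlog */
    int64_t master_repl_offset;   /* last offset produced by this master */
} master_state;

/* Return 1 if a partial resync can be served, 0 if a full sync is needed. */
int can_psync(master_state *m, const char *req_replid, int64_t req_offset) {
    if (strcmp(req_replid, m->replid) != 0) {
        /* The replica may still follow our previous ID, but only up to the
         * offset at which we switched IDs. Trailing PINGs from the old
         * master push req_offset past this limit -> full sync. */
        if (strcmp(req_replid, m->replid2) != 0 ||
            req_offset > m->second_replid_offset) return 0;
    }
    /* The requested range must still be inside the backlog. */
    if (req_offset < m->backlog_off ||
        req_offset > m->master_repl_offset) return 0;
    return 1;
}

int main(void) {
    master_state m = {"new-id", "old-id", 1000, 500, 2000};
    /* Old master advanced to 1050 by PINGs sent on the dead link: */
    printf("psync ok: %d\n", can_psync(&m, "old-id", 1050)); /* 0: full sync */
    printf("psync ok: %d\n", can_psync(&m, "old-id", 1000)); /* 1: partial  */
    return 0;
}
```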
-
- 23 Mar, 2020 1 commit
-
-
antirez authored
-
- 04 Mar, 2020 4 commits
- 03 Mar, 2020 1 commit
-
-
antirez authored
-
- 25 Feb, 2020 1 commit
-
-
Hengjian Tang authored
-
- 06 Feb, 2020 3 commits
-
-
Guy Benoish authored
1. Call emptyDb even in the case of diskless-load: we want modules to get the same FLUSHDB event as in disk-based replication.
2. Do not fire any module events when flushing the backups array.
3. Delete the redundant call to signalFlushedDb (already called from emptyDb).
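For context, this is how a module would observe that event: a minimal sketch using the server-events module API (the module name and log messages are made up for illustration).

```c
/* Sketch of a module observing the FLUSHDB event that diskless-load
 * replication now fires as well. */
#include "redismodule.h"

static void flushCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                          uint64_t sub, void *data) {
    REDISMODULE_NOT_USED(e);
    RedisModuleFlushInfo *fi = data;
    if (sub == REDISMODULE_SUBEVENT_FLUSHDB_START)
        RedisModule_Log(ctx, "notice", "db %d about to be flushed", fi->dbnum);
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "flushwatch", 1, REDISMODULE_APIVER_1)
        == REDISMODULE_ERR) return REDISMODULE_ERR;
    return RedisModule_SubscribeToServerEvent(ctx,
        RedisModuleEvent_FlushDB, flushCallback);
}
```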
-
Oran Agra authored
replicationUnsetMaster can be called from other places, not just replicaofCommand, and all of these need to restart AOF.
-
Oran Agra authored
This function possibly iterates over the module list.
-
- 31 Dec, 2019 1 commit
-
-
ShooterIT authored
-
- 19 Dec, 2019 1 commit
-
-
Johannes Truschnigg authored
"Partial Resynchronization" is a special variant of replication success that we have to tell systemd about if it is managing redis-server via a Type=Notify service unit.
-
- 19 Nov, 2019 1 commit
-
-
Johannes Truschnigg authored
Instead of replicating a subset of libsystemd's sd_notify(3) internally, use the dynamic library provided by systemd to communicate with the service manager. When systemd supervision was auto-detected or configured, communicate the actual server status (i.e. "Loading dataset", "Waiting for master<->replica sync") to systemd, instead of declaring readiness right after initializing the server process.
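For illustration, a minimal sketch of the libsystemd calls involved: sd_notify(3) is the real API, while the helper names and status strings below are assumptions mirroring the states mentioned above.

```c
/* Minimal sketch of reporting server state to systemd via libsystemd,
 * as a Type=notify service would. Build with: -lsystemd */
#include <systemd/sd-daemon.h>

void report_loading(void) {
    /* Tell the service manager what we are doing, but not yet READY=1. */
    sd_notify(0, "STATUS=Loading dataset\n");
}

void report_ready(void) {
    /* Only declare readiness once the dataset is loaded (and, for a
     * replica, once the initial sync is done). */
    sd_notify(0, "STATUS=Ready to accept connections\nREADY=1\n");
}
```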
-
- 29 Oct, 2019 1 commit
-
-
Oran Agra authored
* replication hooks: role change, master link status, replica online/offline
* persistence hooks: saving, loading, loading progress
* misc hooks: cron loop, shutdown, module loaded/unloaded
* change the way hook tests work, and add tests for all of the above

startLoading() now gets a flag indicating what is loaded. stopLoading() now gets an indication of success or failure. Adding startSaving() and stopSaving() with similar args and role.
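As an example of the persistence hooks, a module can subscribe to the loading events like this (a sketch against the server-events API this commit introduces; subscribe from RedisModule_OnLoad as usual):

```c
/* Sketch: subscribing to the loading hooks added by this commit.
 * The subevent tells the module what is being loaded and how it ended. */
#include "redismodule.h"

static void loadingCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                            uint64_t sub, void *data) {
    REDISMODULE_NOT_USED(e); REDISMODULE_NOT_USED(data);
    switch (sub) {
    case REDISMODULE_SUBEVENT_LOADING_RDB_START:
        RedisModule_Log(ctx, "notice", "RDB loading started"); break;
    case REDISMODULE_SUBEVENT_LOADING_ENDED:
        RedisModule_Log(ctx, "notice", "loading finished"); break;
    case REDISMODULE_SUBEVENT_LOADING_FAILED:
        RedisModule_Log(ctx, "warning", "loading failed"); break;
    }
}

/* In RedisModule_OnLoad():
 *   RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Loading,
 *                                      loadingCallback);
 */
```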
-
- 10 Oct, 2019 1 commit
-
-
antirez authored
This is what happened:

1. The instance starts, and is a slave in the cluster configuration, but actually server.masterhost is not set, so technically the instance is acting like a master.
2. loadDataFromDisk() calls replicationCacheMasterUsingMyself() even if the instance is a master, in the case that it is logically a slave and the cluster is enabled. So now we have a cached master even if the instance is practically configured as a master (from the POV of the server.masterhost value and so forth).
3. clusterCron() sees that the instance requires to replicate from its master, because logically it is a slave, so it calls replicationSetMaster(), which will in turn call replicationCacheMasterUsingMyself(): before this commit, this call would overwrite the old cached master, creating a memory leak.
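A self-contained sketch of the leak and the kind of guard that fixes it; replicationCacheMasterUsingMyself() and server.cached_master are real names from replication.c, but everything here is stubbed and the actual fix may differ in detail.

```c
/* Sketch: freeing the old cached master before installing a new one,
 * instead of silently overwriting (and leaking) it. */
#include <stdio.h>
#include <stdlib.h>

typedef struct client { int fd; } client;

static struct { client *cached_master; } server;

static void discardCachedMaster(void) {
    free(server.cached_master);
    server.cached_master = NULL;
}

static void cacheMasterUsingMyself(void) {
    /* Before the fix this unconditionally overwrote server.cached_master,
     * leaking the old client when called a second time (once from
     * loadDataFromDisk(), once from replicationSetMaster()). */
    if (server.cached_master != NULL) discardCachedMaster();
    server.cached_master = calloc(1, sizeof(client));
}

int main(void) {
    cacheMasterUsingMyself();  /* from loadDataFromDisk() */
    cacheMasterUsingMyself();  /* from clusterCron() -> replicationSetMaster() */
    printf("no leak: old cached master freed before overwrite\n");
    discardCachedMaster();
    return 0;
}
```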
-
- 07 Oct, 2019 4 commits
-
-
Yossi Gottlieb authored
Add configuration options for TLS protocol versions, ciphers/cipher suites selection, etc.
-
Oran Agra authored
Misc:
- handle SSL_has_pending by iterating through these in beforeSleep, and setting a timeout of 0 to aeProcessEvents
- fix an issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed)
- add a key-load-delay config for testing
- trim connShutdown, which is no longer needed
- rioFdsetWrite -> rioFdWrite: simplified, since there's no longer a need to write to multiple FDs
- don't detect that the rdb child exited (don't call wait3) until we detect the pipe is closed
- clean up a bad optimization from rio.c, add another one
-
Yossi Gottlieb authored
* Introduce a connection abstraction layer for all socket operations and integrate it across the code base.
* Provide an optional TLS connections implementation based on OpenSSL.
* Pull a newer version of hiredis with TLS support.
* Tests, redis-cli updates for TLS support.
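The core idea of such an abstraction is a vtable of socket operations that plain TCP and TLS each implement, so callers never care which transport is underneath. A minimal self-contained sketch follows; the real interface in connection.h is considerably richer (connect, accept, event handlers, etc.).

```c
/* Minimal sketch of a connection abstraction: callers go through function
 * pointers, so plain TCP and an OpenSSL-backed TLS are interchangeable. */
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

typedef struct connection connection;

typedef struct ConnectionType {
    ssize_t (*write)(connection *conn, const void *buf, size_t len);
    ssize_t (*read)(connection *conn, void *buf, size_t len);
    void (*close)(connection *conn);
} ConnectionType;

struct connection {
    const ConnectionType *type;
    int fd;
    void *priv; /* e.g. an SSL* for the TLS implementation */
};

/* Plain TCP implementation. */
static ssize_t tcp_write(connection *c, const void *b, size_t n) { return write(c->fd, b, n); }
static ssize_t tcp_read(connection *c, void *b, size_t n) { return read(c->fd, b, n); }
static void tcp_close(connection *c) { close(c->fd); }

static const ConnectionType CT_Socket = { tcp_write, tcp_read, tcp_close };

/* Transport-agnostic entry point used by the rest of the code base. */
ssize_t connWrite(connection *c, const void *buf, size_t len) {
    return c->type->write(c, buf, len);
}
```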
-
charsyam authored
-
- 27 Sep, 2019 1 commit
-
-
antirez authored
-
- 05 Aug, 2019 1 commit
-
-
antirez authored
-
- 30 Jul, 2019 2 commits
- 17 Jul, 2019 1 commit
-
-
Oran Agra authored
* Create a module API for forking child processes.
* Refactor duplicate code around creating and tracking forks by AOF and RDB.
* Child processes listen to SIGUSR1 and die via exitFromChild, in order to eliminate a valgrind warning of an unhandled signal.
* Note that the BGSAVE error reply has changed.

The valgrind error was: Process terminating with default action of signal 10 (SIGUSR1)
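A short sketch of how a module would use the fork API this commit introduces; the signatures follow redismodule.h (RedisModule_Fork / RedisModule_ExitFromChild), while doBackgroundWork and the log text are illustrative.

```c
/* Sketch: forking a background worker through the module API so the
 * SIGUSR1/exit bookkeeping described above is handled by the server. */
#include "redismodule.h"

static void forkDone(int exitcode, int bysignal, void *user_data) {
    REDISMODULE_NOT_USED(exitcode);
    REDISMODULE_NOT_USED(bysignal);
    REDISMODULE_NOT_USED(user_data);
    /* Called in the parent when the child terminates. */
}

void doBackgroundWork(RedisModuleCtx *ctx) {
    int pid = RedisModule_Fork(forkDone, NULL);
    if (pid == 0) {
        /* Child: do the heavy work, then exit through the API rather
         * than calling exit() directly. */
        RedisModule_ExitFromChild(0);
    } else if (pid == -1) {
        RedisModule_Log(ctx, "warning", "fork failed");
    }
}
```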
-
- 10 Jul, 2019 2 commits
- 08 Jul, 2019 2 commits
-
-
antirez authored
-
Oran Agra authored
The implementation of diskless replication was so far diskless only on the master side: the slave side was still storing the received rdb file to disk before loading it back in and parsing it. This commit adds two modes to load an rdb directly from the socket: 1) when-empty, 2) using "swapdb". The third mode, using a diskless slave by flushdb, is risky and currently not included.

Other changes:
- Distinguish between aof configuration and state, so that we can re-enable aof only when sync eventually succeeds (and not when exiting from readSyncBulkPayload after a failed attempt). Also, a CONFIG GET and INFO during rdb loading would have lied.
- When loading an rdb from the network, don't kill the server on a short read (that can be a network error).
- Fix rdb check when performed on a preamble AOF.
- Tests: run the replication tests for the diskless slave too, make the replication test a bit more aggressive, and add a test for diskless load swapdb.
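A hedged sketch of the "swapdb" idea with the storage primitives left abstract: keep the old databases aside while the rdb streams in from the socket, and restore them if the transfer fails. All names here are illustrative, not the actual Redis code.

```c
/* Sketch of diskless-load "swapdb": load into fresh databases and only
 * promote them on success, so a failed transfer is not fatal. */
#include <stdbool.h>
#include <stddef.h>

typedef struct dict dict; /* stand-in for a Redis db */

typedef struct { dict **dbs; int num; } db_array;

/* Storage primitives assumed to exist elsewhere. */
extern db_array *make_empty_dbs(int num);
extern void free_dbs(db_array *d);
extern bool load_rdb_from_socket(int fd, db_array *target);

bool diskless_load_swapdb(int fd, db_array **server_dbs, int num) {
    db_array *backup = *server_dbs;          /* keep old data aside */
    db_array *fresh = make_empty_dbs(num);
    if (load_rdb_from_socket(fd, fresh)) {
        *server_dbs = fresh;                 /* promote the new data */
        free_dbs(backup);                    /* discard the backup */
        return true;
    }
    /* Short read or parse error: throw away the partial load and put
     * the original databases back, instead of killing the server. */
    free_dbs(fresh);
    *server_dbs = backup;
    return false;
}
```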
-
- 15 May, 2019 1 commit
-
-
antirez authored
CLIENT PAUSE may be used, in other contexts, for a long time, making all the slaves time out. Better for now to be more specific about what should disable sending PINGs. An alternative would be to virtually refresh the slave interactions when clients are paused, however for now I went for this more conservative solution.
-
- 17 Apr, 2019 1 commit
-
-
chendianqiang authored
-
- 21 Mar, 2019 2 commits
- 20 Mar, 2019 2 commits
- 18 Mar, 2019 1 commit
-
-
antirez authored
-
- 10 Mar, 2019 1 commit
-
-
antirez authored
-
- 09 Mar, 2019 1 commit
-
-
John Sully authored
-
- 12 Feb, 2019 1 commit
-
-
zhaozhao.zz authored
In most production environments, a normal user's behavior should be limited. With the Redis ACL mechanism we can now do it like this:

    user default on +@all ~* -@dangerous nopass
    user admin on +@all ~* >someSeriousPassword

Then the default normal user can not execute dangerous commands like FLUSHALL/KEYS. But some admin commands are in the dangerous category too, like PSYNC, and the configuration above will forbid the replica from syncing with the master. Finally, I think we could add a new configuration option for replication, the masteruser option, like this:

    masteruser admin
    masterauth someSeriousPassword

Then the replica will try AUTH admin someSeriousPassword and get the privilege to execute PSYNC. If masteruser is NULL, the replica will AUTH with only masterauth, like before.
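A simplified sketch of the replica-side decision described above, loosely modeled on syncWithMaster() in replication.c; the helper name is made up, and the real code speaks RESP rather than inline commands.

```c
/* Sketch: choose between the legacy one-argument AUTH and the ACL-style
 * two-argument AUTH, depending on whether masteruser is configured. */
#include <stdio.h>

void send_auth(FILE *master_link, const char *masteruser,
               const char *masterauth) {
    if (masterauth == NULL) return;          /* nothing to authenticate */
    if (masteruser != NULL) {
        /* ACL-style auth: AUTH <user> <password> */
        fprintf(master_link, "AUTH %s %s\r\n", masteruser, masterauth);
    } else {
        /* Legacy auth, as before: AUTH <password> */
        fprintf(master_link, "AUTH %s\r\n", masterauth);
    }
}
```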
-
- 25 Jan, 2019 1 commit
-
-
ArkayZheng authored
-