- 19 May, 2015 1 commit
-
antirez authored
A way for monitoring systems to check that Sentinel is technically able to reach the quorum and failover, using the currently visible Sentinels.
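For intuition, here is a minimal, illustrative sketch (hypothetical names, not the actual sentinel.c code) of what such a check boils down to: comparing the number of currently reachable Sentinels against both the configured quorum and the majority needed to authorize a failover.

    #include <stdio.h>

    /* Returns 1 if, with the Sentinels visible right now, this Sentinel
     * could both declare the master down (quorum) and get authorized to
     * fail over (majority of all known Sentinels). */
    int can_reach_quorum_and_failover(int usable, int quorum, int known) {
        int majority = known / 2 + 1;
        if (usable < quorum) {
            printf("NOQUORUM: %d usable Sentinels, quorum is %d\n",
                   usable, quorum);
            return 0;
        }
        if (usable < majority) {
            printf("NOQUORUM: %d usable Sentinels, but %d needed to "
                   "authorize a failover\n", usable, majority);
            return 0;
        }
        printf("OK: %d usable Sentinels\n", usable);
        return 1;
    }

    int main(void) {
        /* 5 known Sentinels, quorum 3, only 2 currently visible. */
        can_reach_quorum_and_failover(2, 3, 5);
        return 0;
    }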
-
- 04 May, 2015 3 commits
-
therealbill authored
Originally, only the +slave event that occurs when a slave is reconfigured during sentinelResetMasterAndChangeAddress triggered a flush of the config to disk. Newly discovered slaves, however, apparently don't trigger this flush even though they do trigger the +slave event. So if you start up a Sentinel, add a master, then add a slave to the master (as a way to reproduce it), you'll see the +slave event issued, but the Sentinel config won't be updated with the known-slave entry. This change makes Sentinel flush the config when a new slave is detected in sentinelRefreshInstanceInfo.
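A condensed, runnable sketch of the fix (stand-in types and helpers, not the real sentinel.c code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct slave { char addr[64]; struct slave *next; };
    struct master { struct slave *slaves; };

    static struct slave *lookup_slave(struct master *m, const char *addr) {
        for (struct slave *s = m->slaves; s; s = s->next)
            if (strcmp(s->addr, addr) == 0) return s;
        return NULL;
    }

    static void flush_config_to_disk(void) {
        puts("flush: rewrite sentinel.conf + fsync");
    }

    static void on_discovered_slave(struct master *m, const char *addr) {
        if (lookup_slave(m, addr) != NULL) return;   /* already known */
        struct slave *s = calloc(1, sizeof(*s));
        snprintf(s->addr, sizeof(s->addr), "%s", addr);
        s->next = m->slaves;
        m->slaves = s;
        printf("+slave %s\n", addr); /* event was already emitted pre-fix */
        flush_config_to_disk();      /* the flush this commit adds */
    }

    int main(void) {
        struct master m = { NULL };
        on_discovered_slave(&m, "10.0.0.2:6379"); /* new: event + flush */
        on_discovered_slave(&m, "10.0.0.2:6379"); /* known: no-op */
        return 0;
    }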
-
antirez authored
Rewriting the config in the loop that adds slaves back after a master reset, in order to handle switching to another master, is useless: it just adds latency, since there is an fsync call in the inner loop, without providing any additional guarantee. Quite the contrary: if the server crashes after the first loop iteration, we end up with just a single slave entry, losing all the other information. It is wiser to rewrite the config at the end, when the full new state is configured.
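The shape of the change, as an illustrative sketch (stand-in helpers, not the actual sentinel.c code):

    #include <stdio.h>

    static void add_slave(const char *addr) { printf("known-slave %s\n", addr); }
    static void flush_config_to_disk(void)  { puts("rewrite + fsync"); }

    static void readd_slaves(const char **slaves, int n) {
        for (int i = 0; i < n; i++) {
            add_slave(slaves[i]);
            /* old code flushed (and fsync-ed) here, once per iteration: pure
             * latency, and a crash mid-loop persisted a partial slave list */
        }
        flush_config_to_disk(); /* new code: persist the complete state once */
    }

    int main(void) {
        const char *slaves[] = { "10.0.0.2:6379", "10.0.0.3:6379" };
        readd_slaves(slaves, 2);
        return 0;
    }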
-
clark.kang authored
-
- 13 Mar, 2015 1 commit
-
Leandro López (inkel) authored
When trying to debug Sentinel connections or max connections errors, it would be very useful to be able to see the list of clients connected to a running Sentinel. At the same time it would be very helpful to be able to name each Sentinel connection or kill offending clients. This commit adds the already defined CLIENT commands back to Redis Sentinel.
-
- 06 Oct, 2014 1 commit
-
Matt Stancliff authored
- Remove trailing newlines from redis.conf
- Fix comment misspelling
- Clarifies zipEncodeLength usage and a C API mention (#1243, #1242)
- Fix cluster typos (inspired by @papanikge #1507)
- Fix rewite -> rewrite in a few places (inspired by #682)

Closes #1243, #1242, #1507
-
- 11 Sep, 2014 2 commits
-
antirez authored
-
antirez authored
The code that checks the number of voters was never updated to follow the new Sentinel specification, so the number of voters was computed using only the set of Sentinels that provided a vote. This means the majority could change across partitions, even if the issue is usually not triggered because of the configured quorum check (what was broken was the other, implicit check that in any case requires half of the known Sentinels to agree in order to start a failover).
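The corrected accounting, reduced to its essence (illustrative code, not the actual implementation):

    #include <stdio.h>

    static int leader_wins(int votes_received, int known_other_sentinels) {
        int voters = known_other_sentinels + 1; /* all known Sentinels + myself */
        int majority = voters / 2 + 1;          /* fixed: was computed over the
                                                 * subset that happened to vote */
        return votes_received >= majority;
    }

    int main(void) {
        /* 5 Sentinels total; only 2 replied and both voted for us. Under the
         * buggy accounting (majority of the 2 repliers) we would win; under
         * the fixed one (majority of all 5) we must not. */
        printf("wins: %d\n", leader_wins(2, 4)); /* prints 0 */
        return 0;
    }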
-
- 10 Sep, 2014 3 commits
-
antirez authored
-
antirez authored
The original implementation was modified in order to allow a different IP or port to be selectively announced, and to persist the two options in the config file when it is rewritten.
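The two options end up in sentinel.conf; a minimal example for announcing a public address from behind a NAT (addresses are illustrative):

    sentinel announce-ip 203.0.113.5
    sentinel announce-port 26379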
-
Dara Kong authored
There are environments, such as EC2, where the bind address is private (behind a NAT) and is not reachable from the WAN. https://groups.google.com/d/msg/redis-db/PVVvjO4nMd0/P3oWC036v3cJ
-
- 01 Sep, 2014 1 commit
-
Matt Stancliff authored
We can save a little work by aborting when we enter the function if we're disconnected.
-
- 26 Aug, 2014 3 commits
-
Matt Stancliff authored
Clearly ip[32] is wrong, but it's less clear that buf[32] was wrong without further reading.
-
Eiichi Sato authored
Closes #1914
-
antirez authored
-
- 23 Jun, 2014 2 commits
-
antirez authored
-
Matt Stancliff authored
Some deployments need traffic sent from a specific address. This change uses the same policy as Cluster where the first listed bindaddr becomes the source address for outgoing Sentinel communication. Fixes #1667
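The underlying socket technique is generic: binding the outgoing socket to a chosen local address before connect() makes the kernel use it as the source address. A self-contained illustration (not the actual anet.c code):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Connect to dst_ip:dst_port using src_ip as the source address. */
    int connect_from(const char *src_ip, const char *dst_ip, int dst_port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1) return -1;

        struct sockaddr_in src = {0};
        src.sin_family = AF_INET;
        inet_pton(AF_INET, src_ip, &src.sin_addr); /* first listed bind addr */
        if (bind(fd, (struct sockaddr *)&src, sizeof(src)) == -1) goto err;

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons((unsigned short)dst_port);
        inet_pton(AF_INET, dst_ip, &dst.sin_addr);
        if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == -1) goto err;
        return fd;
    err:
        close(fd);
        return -1;
    }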
-
- 21 Jun, 2014 3 commits
-
antirez authored
Eventual configuration convergence is guaranteed by our periodic hello messages to all the instances; however, when there are important notices to share, better to make a phone call. With this commit we force a hello message to the other Sentinel and Redis instances within the next 100 milliseconds after a config update, which in practice is much better than waiting a few seconds.
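An illustrative sketch of the mechanism (field and constant names are stand-ins, not the real sentinel.c fields): backdating the last hello timestamp makes the ordinary periodic check fire almost immediately.

    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    typedef long long mstime_t;

    static mstime_t mstime(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return (mstime_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
    }

    #define HELLO_PERIOD_MS 2000      /* normal hello period (illustrative) */
    #define FORCE_HELLO_DELAY_MS 100  /* the "phone call" deadline */

    struct instance { mstime_t last_hello_time; };

    /* Called right after a config change: backdate last_hello_time so the
     * periodic check below fires within FORCE_HELLO_DELAY_MS. */
    static void force_hello_soon(struct instance *ri) {
        ri->last_hello_time = mstime() + FORCE_HELLO_DELAY_MS - HELLO_PERIOD_MS;
    }

    static void periodic_hello_check(struct instance *ri) {
        if (mstime() - ri->last_hello_time >= HELLO_PERIOD_MS) {
            puts("PUBLISH __sentinel__:hello ...");
            ri->last_hello_time = mstime();
        }
    }

    int main(void) {
        struct instance ri = { mstime() };
        force_hello_soon(&ri);
        usleep(150 * 1000);        /* pretend the timer fires 150ms later */
        periodic_hello_check(&ri); /* hello goes out early, not ~2s later */
        return 0;
    }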
-
antirez authored
The lack of a check on the SRI_PROMOTED flag caused Sentinel to treat the promoted slave, turned into a master during failover, as if it were a normal instance. Normally this problem was not apparent, because during real failovers the old master is down, so the buggy code path was not entered; however, with manual failovers via the SENTINEL FAILOVER command, the problem was easily triggered. This commit prevents promoted slaves from getting reconfigured; moreover, we now explicitly check that during a failover the slave turning into a master is the one we selected for promotion and not a different one.
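Reduced to a sketch (the SRI_PROMOTED name is from the commit message; the bit value and surrounding code are illustrative):

    #include <stdio.h>

    #define SRI_PROMOTED (1 << 0) /* illustrative bit value */

    struct instance { int flags; const char *name; };

    /* The reconfiguration pass must leave the promoted slave alone: it is
     * the instance being turned into the new master. */
    static int should_reconfigure_slave(const struct instance *slave) {
        return (slave->flags & SRI_PROMOTED) == 0;
    }

    int main(void) {
        struct instance promoted = { SRI_PROMOTED, "10.0.0.2:6379" };
        struct instance normal   = { 0, "10.0.0.3:6379" };
        printf("%s: %s\n", promoted.name,
               should_reconfigure_slave(&promoted) ? "reconfigure" : "skip");
        printf("%s: %s\n", normal.name,
               should_reconfigure_slave(&normal) ? "reconfigure" : "skip");
        return 0;
    }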
-
antirez authored
This implements the Sentinel side of the new Sentinel-client protocol: instances are now reconfigured using a transaction that ensures the config is rewritten in the target instance and that clients lose their connection to it, so that they are forced to ask Sentinel, reconnect to the instance, and verify the instance role with the new ROLE command.
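On the wire, the reconfiguration roughly takes this shape (reconstructed from the description above, so treat it as a sketch rather than the exact command list Sentinel sends):

    MULTI
    SLAVEOF <new-master-ip> <new-master-port>
    CONFIG REWRITE
    CLIENT KILL TYPE normal
    EXEC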
-
- 28 May, 2014 1 commit
-
antirez authored
-
- 20 May, 2014 1 commit
-
antirez authored
-
- 08 May, 2014 2 commits
-
antirez authored
When a Sentinel performs a failover (successful or not), or when a Sentinel votes for a different Sentinel trying to start a failover, it sets a minimum delay before it will try to get elected for a failover again. While not strictly needed (if multiple Sentinels try to fail over the same master at the same time, only one configuration will eventually win), this serialization is very useful in practice. Normal failovers are cleaner this way: one Sentinel starts the failover, and the others update their config when the Sentinel performing it manages to move the selected slave from the role of slave to that of master. However this timeout was previously implicit, so after a failed failover users could see Sentinels not reacting for some time, without any feedback in the logs for the poor sysadmin waiting for clues. This commit makes Sentinels more verbose about the delay: when a master is down and a failover attempt is not performed because the delay has not yet elapsed, something like this is logged: Next failover delay: I will not start a failover before Thu May 8 16:48:59 2014
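A minimal sketch of the new log line (stand-in names; the message format is taken from the example above):

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    static void log_failover_delay(time_t failover_allowed_at) {
        char buf[64];
        ctime_r(&failover_allowed_at, buf);  /* "Thu May  8 16:48:59 2014\n" */
        buf[strcspn(buf, "\n")] = '\0';
        printf("Next failover delay: I will not start a failover before %s\n",
               buf);
    }

    int main(void) {
        log_failover_delay(time(NULL) + 60); /* e.g. one minute from now */
        return 0;
    }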
-
antirez authored
This event makes it clear, before the switch-master event is generated, that a Sentinel received a configuration update from another Sentinel.
-
- 24 Mar, 2014 3 commits
-
antirez authored
In sentinelFlushConfig() fd could be undefined when the following if statement was true: if (rewrite_status == -1) goto werr; This could cause random file descriptors to get closed.
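The shape of the bug and of the fix, simplified (stand-in helpers): with fd uninitialized, an early goto werr made close(fd) operate on stack garbage, possibly a live descriptor.

    #include <unistd.h>

    static int do_rewrite(void)       { return -1; } /* stand-in: rewrite fails */
    static int open_config_file(void) { return -1; } /* stand-in */

    static int flush_config_sketch(void) {
        int fd = -1;                        /* fix: initialize before any goto */
        if (do_rewrite() == -1) goto werr;  /* the early exit that hit the bug */
        fd = open_config_file();
        if (fd == -1) goto werr;
        /* ... write and fsync the config here ... */
        close(fd);
        return 0;
    werr:
        if (fd != -1) close(fd);            /* fix: never close a garbage fd */
        return -1;
    }

    int main(void) { return flush_config_sketch() == -1; }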
-
Matt Stancliff authored
-
Matt Stancliff authored
GCC-4.9 warned about this, but clang didn't. This commit fixes warning: sentinel.c: In function 'sentinelReceiveHelloMessages': sentinel.c:2156:43: warning: variable 'master' set but not used [-Wunused-but-set-variable] sentinelRedisInstance *ri = c->data, *master;
-
- 21 Mar, 2014 5 commits
-
antirez authored
Test the sentinel.tilt condition at the top and return if it is true. This makes it possible to remove the check for the tilt condition from the remaining code paths of the function.
-
antirez authored
-
antirez authored
addReplySentinelRedisInstance() was modified so that this field is displayed for all kinds of instances: Sentinels, masters, slaves.
-
antirez authored
Failure detection in Sentinel is ping-pong based. It used to work by remembering the last time a valid PONG reply was received, and checking if the reception time was too old compared to the current time. PINGs were sent at a fixed interval of 1 second. This works in a decent way, but does not scale well when we want to set very small values of "down-after-milliseconds" (this is the node timeout, basically). This commit reimplements the failure detection, making a number of changes. Some changes are inspired by the Redis Cluster failure detection code:

* A new last_ping_time field is added to the representation of instances. If non-zero, we have an active ping that was sent at the specified time. When a valid reply to the ping is received, the field is zeroed again.
* last_ping_time is not reset when we reconnect the link or send a new ping, so from our point of view it represents the time we started waiting for the instance to reply to our pings without receiving a reply.
* last_ping_time is now used to check if the instance is timed out. This means that we can have a node timeout of 100 milliseconds and yet the system will work well, since the new check is not bound to the period used to send pings.
* Pings are now sent every second, or more often if the value of down-after-milliseconds is less than one second, with a lower limit of 10 HZ ping frequency.
* The link reconnection code was improved. It is used to try to reconnect the link when we are at 50% of the node timeout without a valid reply received yet. However the old code triggered unnecessary reconnections when the node timeout was very small. Now that should be fixed.

The new code passes the tests, but more testing is needed, along with more unit tests stressing the failure detector, so currently this is merged only into the unstable branch.
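A condensed sketch of the new timeout check (the last_ping_time field name is from the description above; the rest is illustrative):

    typedef long long mstime_t;

    struct instance {
        mstime_t last_ping_time; /* 0 = no ping pending; otherwise the send
                                  * time of the oldest unanswered ping */
        mstime_t down_after_ms;  /* the node timeout */
    };

    /* The check no longer depends on how often pings are sent: it measures
     * how long we have been waiting for any reply at all. */
    int is_subjectively_down(const struct instance *ri, mstime_t now) {
        return ri->last_ping_time != 0 &&
               (now - ri->last_ping_time) > ri->down_after_ms;
    }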
-
antirez authored
This makes debugging / monitoring of Sentinels simpler since you can identify sentinels in CLIENT LIST output of Redis instances.
-
- 14 Mar, 2014 4 commits
-
Matt Stancliff authored
argc == 2, so reading argv[2] is out of bounds: a crash.
-
antirez authored
Sentinel's main safety argument is that there are no two configurations for the same master with the same version (configuration epoch). For this to be true, Sentinels must be authorized by a majority. Additionally, Sentinels must do two important things:

* Never vote again for the same epoch.
* Never exchange an old vote for a fresh one.

The first prerequisite, in a crash-recovery system model, requires persisting master->leader_epoch on durable storage before replying to messages. This was not the case. We also make sure to persist the current epoch, in order to never reply to stale vote requests from other Sentinels after a recovery. The configuration is persisted using fsync(); in the context of this code that is considered a good enough guarantee that after a restart our durable state is restored. However this may not always be the case, depending on the kind of hardware and operating system used.
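The discipline described above, as a sketch (stand-in helpers, simplified vote logic): the vote must hit durable storage before the reply leaves, otherwise a crash between the two events could let the Sentinel vote twice in the same epoch.

    #include <stdio.h>

    static void rewrite_config(void) { puts("rewrite sentinel.conf"); }
    static void fsync_config(void)   { puts("fsync()"); }

    /* Grant (or repeat) a vote for req_epoch. The persisted leader_epoch
     * is what prevents voting twice in an epoch across a crash+restart. */
    static long long vote(long long req_epoch, long long *leader_epoch) {
        if (req_epoch > *leader_epoch) {
            *leader_epoch = req_epoch;
            rewrite_config(); /* persist the new leader_epoch ... */
            fsync_config();   /* ... durably ... */
        }
        return *leader_epoch; /* ... and only then reply to the requester */
    }

    int main(void) {
        long long leader_epoch = 0;
        printf("voted epoch %lld\n", vote(5, &leader_epoch)); /* grants 5 */
        printf("voted epoch %lld\n", vote(5, &leader_epoch)); /* repeats 5 */
        return 0;
    }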
-
antirez authored
Now the way HELLO messages are received is unified. Sentinels no longer need to converge to the higher configuration for a master in order to chat via some Redis instance; they are able to directly exchange configurations. Note that this commit does not include the (trivial) change needed to send HELLO messages to Sentinel instances as well, since by mistake I committed that change in the previous commit, which refactored hello message processing into a separate function.
-
antirez authored
-
- 11 Mar, 2014 1 commit
-
Jan-Erik Rediger authored
-
- 05 Mar, 2014 1 commit
-
antirez authored
Sentinel needs to avoid split-brain conditions caused by multiple Sentinels trying to get voted at exactly the same time. So far, some desynchronization was provided by the fluctuating server.hz, that is, the frequency of the timer function calls. However the desynchronization provided in this way was not enough when using many Sentinel instances, especially when a large quorum value is used in order to force a greater degree of agreement (more than N/2+1). It was verified that this was likely to trigger a split-brain condition, forcing the system to try again after a timeout. Usually the system succeeds after a few retries, but this is not optimal. This commit desynchronizes instances in a more effective way, making it likely that the first attempt will be successful.
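An illustrative sketch of the idea (constants are made up): each Sentinel delays its next election attempt by a random offset, so simultaneous candidates are unlikely to split the vote again.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    typedef long long mstime_t;

    #define MAX_DESYNC_MS 1000 /* illustrative ~1s window */

    /* Each Sentinel schedules its next election attempt at a random point
     * in the window, instead of at the same tick as everyone else. */
    static mstime_t next_election_attempt(mstime_t now) {
        return now + rand() % MAX_DESYNC_MS;
    }

    int main(void) {
        srand((unsigned)time(NULL));
        printf("next attempt in %lld ms\n", next_election_attempt(0));
        return 0;
    }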
-
- 25 Feb, 2014 2 commits