- 14 May, 2015 4 commits
antirez authored
antirez authored
Otherwise pending command callbacks will fire with a reference that no longer exists.
antirez authored
antirez authored
The PING trigger was improved again by using two fields instead of a single one to remember when the last ping was sent:

1. The "active" ping is the time at which we sent the last ping that has still received no reply. However, we continue to ping non-replying instances even if they have an old active ping: the link may have been disconnected and reconnected in the meantime, so the older pings may get lost even on a TCP socket.
2. The "last" ping is the time at which we really sent the last ping on the wire, and this is used in order to throttle the amount of pings we send during failures (when no pong is received).

All in all the failure detector effectiveness should be identical, but we avoid flooding instances with pings during failures or when they are slow.
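A minimal sketch of the two-timestamp logic described above (field and function names are illustrative, not the actual sentinel.c identifiers):

    typedef struct {
        long long act_ping_time;  /* Oldest ping still awaiting a reply; 0 = none pending. */
        long long last_ping_time; /* Time the last ping was actually written to the wire.  */
        long long last_pong_time; /* Time we last received a reply.                        */
    } pingState;

    /* Decide whether to send a PING to this instance now. */
    int pingIsDue(pingState *s, long long now, long long period) {
        if (s->act_ping_time == 0)          /* Nothing pending: */
            return (now - s->last_pong_time) >= period;
        /* A ping is pending: keep pinging anyway (the link may have been
         * disconnected and reconnected, losing the older pings), but throttle
         * using the time the last ping really hit the wire. */
        return (now - s->last_ping_time) >= period;
    }

    void pingSent(pingState *s, long long now) {
        if (s->act_ping_time == 0) s->act_ping_time = now; /* Start of the unanswered window. */
        s->last_ping_time = now;                           /* Always refresh the wire time.   */
    }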
- 13 May, 2015 1 commit
antirez authored
- 12 May, 2015 3 commits
antirez authored
It's ok to ping as soon as the ping period has elapsed since we received the last PONG, but it's not good to ping again while a previous ping is still pending. With this change, if a ping is pending, we send a new one only once twice the ping period has elapsed since the pending ping was sent.
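A sketch of the resulting condition, with illustrative names (pending_ping_time is 0 when no ping is awaiting a reply):

    /* Send a new ping if none is pending and the period elapsed since the last
     * PONG, or if one is pending and twice the period elapsed since we sent it. */
    int pingIsDue(long long now, long long last_pong_time,
                  long long pending_ping_time, long long period) {
        if (pending_ping_time == 0)
            return (now - last_pong_time) >= period;
        return (now - pending_ping_time) >= 2 * period;
    }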
antirez authored
antirez authored
This is useful for debugging and logging activities: given a sentinelRedisInstance object, it returns a C string representing the instance type: master, slave, or sentinel.
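A plausible shape for such a helper, assuming the usual SRI_* instance flags (flag values here are illustrative):

    #define SRI_MASTER   (1<<0)   /* Values are illustrative. */
    #define SRI_SLAVE    (1<<1)
    #define SRI_SENTINEL (1<<2)

    /* Map an instance's flags to a printable type name for logs and events. */
    const char *instanceTypeStr(int flags) {
        if (flags & SRI_MASTER)   return "master";
        if (flags & SRI_SLAVE)    return "slave";
        if (flags & SRI_SENTINEL) return "sentinel";
        return "unknown";
    }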
- 11 May, 2015 2 commits
- 08 May, 2015 4 commits
- 07 May, 2015 1 commit
antirez authored
Since with a previous commit Sentinels now persist their unique ID, we no longer need to detect duplicated Sentinels and re-add them. We remove and re-add a Sentinel (using different events) only when the same Sentinel switches address, without generating a new +sentinel event.
- 06 May, 2015 1 commit
antirez authored
Previously, Sentinels changed their unique ID across restarts, relying on the server.runid field. This was not a good idea, and it forced Sentinel to rely on detection of duplicated Sentinels and on a potentially dangerous clean-up and re-add operation of the Sentinel instance that was rebooted. Now the ID is generated at the first start and persisted in the configuration file, so that a given Sentinel keeps its unique ID forever (unless the configuration is manually deleted or the filesystem is corrupted).
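A minimal sketch of the generate-once-then-persist idea (helper names and the entropy source are placeholders, not the real implementation):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Reuse the ID found in the config file if present; otherwise generate a
     * hex ID once and flag that it must be written back to the config. */
    void ensureSentinelId(char *myid, size_t buflen, const char *configured_id,
                          int *must_rewrite_config) {
        if (configured_id != NULL) {
            snprintf(myid, buflen, "%s", configured_id); /* Same ID forever. */
            return;
        }
        static const char hex[] = "0123456789abcdef";
        srand((unsigned) time(NULL));                    /* Placeholder entropy. */
        for (size_t j = 0; j + 1 < buflen; j++) myid[j] = hex[rand() % 16];
        myid[buflen - 1] = '\0';
        *must_rewrite_config = 1;  /* Persist it so restarts keep the same ID. */
    }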
- 05 May, 2015 3 commits
- 04 May, 2015 2 commits
therealbill authored
Originally, only the +slave event that occurs when a slave is reconfigured during sentinelResetMasterAndChangeAddress triggered a flush of the config to disk. Newly discovered slaves, however, did not trigger this flush even though they did trigger the +slave event. So if you started a Sentinel, added a master, and then added a slave to that master (as a way to reproduce it), you would see the +slave event issued, but the Sentinel config would not be updated with the known-slave entry. This change makes Sentinel flush the config when a new slave is detected in sentinelRefreshInstanceInfo.
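A sketch of the fix in pseudo-C (the names below are illustrative stand-ins, not the actual sentinel.c API):

    typedef struct masterState masterState;   /* Opaque here. */
    int  slaveIsKnown(masterState *m, const char *ip, int port);
    void addKnownSlave(masterState *m, const char *ip, int port); /* Emits +slave. */
    void flushConfigToDisk(void);

    /* Called while parsing INFO output from the master. */
    void onSlaveSeenInInfo(masterState *m, const char *ip, int port) {
        if (!slaveIsKnown(m, ip, port)) {
            addKnownSlave(m, ip, port);
            flushConfigToDisk(); /* The missing step: persist the known-slave entry. */
        }
    }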
antirez authored
Rewriting the config inside the loop that adds slaves back after a master reset (in order to handle switching to another master) is useless: it just adds latency, since there is an fsync call in the inner loop, without providing any additional guarantee. On the contrary, if the server crashes after the first loop iteration, we end up with just a single slave entry, losing all the other information. It is wiser to rewrite the config at the end, when the full new state is configured.
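An illustrative shape of the change (names are placeholders): the fsync-ing config rewrite moves out of the inner loop so it runs once over the final state:

    typedef struct masterState masterState;   /* Opaque here. */
    typedef struct { const char *ip; int port; } slaveAddr;
    void addKnownSlave(masterState *m, const char *ip, int port);
    void flushConfigToDisk(void);              /* Rewrites the config and fsyncs. */

    void resetMasterAndAddSlavesBack(masterState *m, slaveAddr *slaves, int n) {
        for (int j = 0; j < n; j++) {
            addKnownSlave(m, slaves[j].ip, slaves[j].port);
            /* Before: flushConfigToDisk() was called here, fsyncing once per
             * slave and risking a config with a single slave entry on crash. */
        }
        flushConfigToDisk();  /* After: one rewrite of the complete new state. */
    }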
- 29 Apr, 2015 1 commit
antirez authored
As suggested in #2543.
- 28 Apr, 2015 1 commit
clark.kang authored
- 27 Apr, 2015 1 commit
antirez authored
- 26 Apr, 2015 1 commit
Yossi Gottlieb authored
limit.
- 20 Apr, 2015 2 commits
FuGangqiang authored
FuGangqiang authored
- 19 Apr, 2015 1 commit
FuGangqiang authored
- 01 Apr, 2015 1 commit
antirez authored
When we fail to set up the write handler, it does not make sense to keep the client around: it is missing writes, and whether it is a normal client or a slave, the connection should be terminated ASAP. Moreover, what exactly the function does with its return value, and in which cases the write handler is installed on the socket, was not clear, so the function's comments are improved to make its goals more obvious. Also related to #2485.
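The intended behavior, as a hedged sketch (types and helpers below are illustrative declarations, not the real networking.c API):

    #define OK   0
    #define ERR -1
    typedef struct client client;               /* Opaque here. */
    int  clientHasWriteHandler(client *c);
    int  installWriteHandler(client *c);        /* Returns OK or ERR. */
    void freeClientAsync(client *c);

    /* Return OK if the caller may buffer the reply for this client;
     * on ERR the client is already scheduled to be closed. */
    int prepareToWrite(client *c) {
        if (clientHasWriteHandler(c)) return OK;
        if (installWriteHandler(c) == ERR) {
            freeClientAsync(c);  /* Normal client or slave: terminate ASAP,
                                  * it would only miss writes otherwise. */
            return ERR;
        }
        return OK;
    }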
- 31 Mar, 2015 2 commits
Oran Agra authored
The master was closing the connection if the RDB transfer took a long time, and it also sent PINGs to the slave before it got the initial ACK, in which case the slave wouldn't be able to find the EOF marker.
antirez authored
Segfault introduced during a refactoring / warning suppression a few commits ago. This particular call assumed that it is safe to pass NULL as the object pointer argument when we are sure the set has a given encoding. This can't be assumed, and it is now guaranteed to segfault because of the new API of setTypeNext().
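The safe calling pattern, sketched with illustrative declarations (the real t_set.c signatures may differ in detail):

    typedef struct setTypeIterator setTypeIterator;
    typedef struct robj robj;
    #define ENC_INTSET 1                         /* Illustrative constant. */
    int setTypeNext(setTypeIterator *si, robj **objele, long long *llele);

    void iterateSet(setTypeIterator *si) {
        robj *objele; long long llele; int encoding;
        /* Wrong under the new API: setTypeNext(si, NULL, &llele);
         * even when the set's encoding is "known", NULL gets dereferenced. */
        while ((encoding = setTypeNext(si, &objele, &llele)) != -1) {
            if (encoding == ENC_INTSET) { /* integer element is in llele */ }
            else                        { /* object element is in objele */ }
        }
    }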
- 30 Mar, 2015 3 commits
antirez authored
This change fixes several warnings when compiling at the -O3 level with GCC 4.8.2 and, at the same time, in case of misuse of the API, leaves the pointer initialized to NULL and the integer initialized to -123456789, which is easy to spot by the naked eye.
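The defensive pattern this describes, in a small sketch:

    #include <stddef.h>

    /* Out-parameters are preset to values that make any misuse loud: a NULL
     * pointer crashes on first dereference, and -123456789 stands out at a
     * glance in a debugger or a log line. */
    void resetOutParams(void **objptr, long long *intval) {
        *objptr = NULL;
        *intval = -123456789;
        /* Success paths overwrite these with the real results. */
    }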
antirez authored
Another one just to avoid a warning. Slightly more defensive code anyway.
antirez authored
Change done in order to remove a warning and improve code robustness. No actual bug here.
- 27 Mar, 2015 2 commits
antirez authored
No semantic changes, since to make dict.c truly able to scale over the 32-bit table size limit, the hash function and the other internals related to the hash function output should be 64-bit ready.
antirez authored
rehashidx is always positive in the two code paths, since the only negative value it could have is -1, when no rehashing is in progress, and that condition is explicitly checked.
- 24 Mar, 2015 2 commits
antirez authored
Bug as old as Redis and blocking operations. It's hard to trigger, since it only happens on an instance role switch, but the results are quite bad, since an inconsistency between master and slave is created. How to trigger the bug is a good description of the bug itself:

1. Client does "BLPOP mylist 0" in the master.
2. Master is turned into a slave that replicates from New-Master.
3. Client does "LPUSH mylist foo" in New-Master.
4. New-Master propagates the write to the slave.
5. Slave receives the LPUSH, and the blocked client gets served.

Now the master's "mylist" key has "foo", while the slave's "mylist" key is empty.

Highlights:

* At step 2 above, the client remains attached, basically escaping any check performed during command dispatch: the read-only slave check, in this case.
* At step 5, the slave (that was the master) serves the blocked client by consuming a list element, which is not consumed on the master side.

This scenario is technically likely to happen during failovers; however, since Redis Sentinel already disconnects clients using the CLIENT command when changing the role of the instance, the bug is avoided in Sentinel deployments. Closes #2473.
antirez authored
There was a bug in Redis Cluster caused by clients blocked in a blocking list pop operation, for keys no longer handled by the instance, or in a condition where the cluster became down after the client blocked. A typical situation is:

1. BLPOP <somekey> 0
2. <somekey> hash slot is resharded to another master.

The client will block forever in this case. A symmetrical, non-cluster-specific bug happens when an instance is turned from master to slave. In that case it is more serious, since it will desynchronize data between slaves and masters. This other bug was discovered as a side effect of thinking about the bug explained and fixed in this commit, but will be fixed in a separate commit.
- 22 Mar, 2015 1 commit
antirez authored
- 21 Mar, 2015 1 commit
antirez authored