- 21 May, 2014 2 commits
-
-
Matt Stancliff authored
Behrad Zari discovered [1] and Josiah reported [2]: if you block and wait for a list to exist, but the list is created by a non-push command, the blocked client never gets notified. This commit moves notification of blocked clients into the DB layer and away from individual commands. Lists can be created by [LR]PUSH, SORT..STORE, RENAME, MOVE, and RESTORE, but previously blocked client notifications were only triggered by [LR]PUSH: your client would never get notified if a list were created by SORT..STORE, RENAME, RESTORE, etc. Blocked client notification now happens in one unified place: dbAdd() triggers the notification when adding a list to the DB. Two new tests are added that fail prior to this commit. All tests pass. Fixes #1668 [1]: https://groups.google.com/forum/#!topic/redis-db/k4oWfMkN1NU [2]: #1668
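A minimal sketch of the unified notification point described above (treat the exact names, such as signalListAsReady(), as assumptions based on the message):

    /* Sketch: dbAdd() becomes the single place where clients blocked on a
     * list key are woken up, no matter which command created the list. */
    void dbAdd(redisDb *db, robj *key, robj *val) {
        sds copy = sdsdup(key->ptr);
        int retval = dictAdd(db->dict, copy, val);

        redisAssertWithInfo(NULL, key, retval == REDIS_OK);
        /* Unified notification: if the new value is a list, signal it as
         * ready so clients blocked in BLPOP/BRPOP are served. */
        if (val->type == REDIS_LIST) signalListAsReady(db, key);
    }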
-
antirez authored
This fixes issue #1765.
-
- 20 May, 2014 5 commits
-
-
antirez authored
-
antirez authored
-
Salvatore Sanfilippo authored
Fix LUA_OBJCACHE segfault.
-
antirez authored
-
antirez authored
-
- 19 May, 2014 6 commits
-
-
michael-grunder authored
When scanning the argument list inside a redis.call() invocation for pre-cached values, there was no check that the argument index was within the bounds of the cache size. So if a redis.call() command was ever executed with more than 32 arguments (the current cache size #define setting), redis-server could segfault.
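A hedged sketch of the bounds check (the cache define and array names below are assumptions based on the description):

    /* Sketch: only consult the pre-allocated object cache for argument
     * indexes that actually fit in it. Without the j < LUA_CMD_OBJCACHE_SIZE
     * check, a redis.call() with more than 32 arguments reads past the end
     * of the cache array and can crash the server. */
    if (j < LUA_CMD_OBJCACHE_SIZE && cached_objects[j] &&
        cached_objects_len[j] >= obj_len)
    {
        argv[j] = cached_objects[j];      /* Reuse the cached object. */
        cached_objects[j] = NULL;
    } else {
        argv[j] = createStringObject(obj_s, obj_len);  /* Out of cache range. */
    }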
-
antirez authored
-
Salvatore Sanfilippo authored
Correct the HyperLogLog stale cache flag to prevent unnecessary computations.
-
antirez authored
-
antirez authored
-
antirez authored
-
- 18 May, 2014 1 commit
-
-
Mike Trinkala authored
Set the MSB as documented.
-
- 15 May, 2014 5 commits
-
-
antirez authored
clusterHandleSlaveFailover() was reimplementing what clusterSetNodeAsMaster() does, without any good reason.
-
antirez authored
Thanks to this change, when there is code like:

    clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|...);
    ... and later, before returning to the event loop ...
    clusterUpdateState();

the clusterUpdateState() function will clear the flag, and the work will not be repeated in the clusterBeforeSleep() function. This is especially important for the config save/fsync flags, which are slow to execute and not a good idea to repeat without a good reason. This is implemented for all the CLUSTER_TODO flags.
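A small hedged sketch of the idea (flag and field names are assumptions):

    /* Sketch: a function that performs a TODO item clears the matching
     * flag, so clusterBeforeSleep() does not repeat the work before
     * returning to the event loop. */
    void clusterUpdateState(void) {
        server.cluster->todo_before_sleep &= ~CLUSTER_TODO_UPDATE_STATE;

        /* ... recompute the cluster state (ok / fail) as usual ... */
    }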
-
antirez authored
-
antirez authored
The new command is able to reset a cluster node so that it starts again as a fresh node. By default the command performs a soft reset (the same as calling it as CLUSTER RESET SOFT), and the following steps are performed:

1) All slots are set as unassigned.
2) The list of known nodes is flushed.
3) The node is set as master if it is a slave.

When a hard reset is performed with CLUSTER RESET HARD, the following additional operations are performed:

4) A new Node ID is created at random.
5) Epochs are set to 0.

CLUSTER RESET is useful both when the sysadmin wants to reconfigure a node with a different role (for example turning a slave into a master) and for testing purposes. It may also play a role in automatically provisioned Redis Clusters, since it allows resetting a node back to its initial state so that it can be reconfigured.
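A hedged sketch of the steps listed above (the helpers marked as hypothetical are illustrative names, not necessarily the real ones):

    /* Sketch: CLUSTER RESET [SOFT|HARD] as described above. */
    void clusterReset(int hard) {
        /* 1) Unassign all the slots served by this node. */
        clusterDelNodeSlots(myself);

        /* 2) Forget every other known node. */
        clusterFlushKnownNodes();            /* hypothetical helper */

        /* 3) Turn a slave back into a master. */
        if (nodeIsSlave(myself)) clusterSetNodeAsMaster(myself);

        if (hard) {
            /* 4) Pick a brand new random Node ID. */
            clusterRenameNode(myself);       /* hypothetical helper */
            /* 5) Reset the epochs to zero. */
            myself->configEpoch = 0;
            server.cluster->currentEpoch = 0;
        }
        clusterSaveConfigOrDie(1);
    }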
-
antirez authored
-
- 14 May, 2014 4 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
The previous code handling a lost slot (lost to another master with a higher configuration epoch for the slot) was defensive, considering it an error and putting the cluster in an odd state requiring a redis-cli fix. This was changed because it actually only happens either in a legitimate way, with failovers, or when the admin messed with the config in order to reconfigure the cluster. So the new code instead tries to make sure that the stored keys match the new slots map, by removing all the keys in the slots we lost ownership of. The function that deletes the keys from the lost slots is called only if the node does not lose all its slots (which would result in a reconfiguration as a slave of the node that got ownership). This is an optimization, since the replication code will anyway flush all the instance data in a faster way.
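A hedged sketch of the key-deletion pass (the helper names and the per-batch contract are assumptions based on the description):

    /* Sketch: drop every key that hashes to a slot we no longer own.
     * Assumes getKeysInSlot() fills keys[] with newly created string
     * objects for up to the requested number of keys in the slot. */
    void delKeysInSlot(unsigned int hashslot) {
        robj *keys[64];
        int j, numkeys;

        do {
            numkeys = getKeysInSlot(hashslot, keys, 64);
            for (j = 0; j < numkeys; j++) {
                dbDelete(&server.db[0], keys[j]);   /* Cluster only uses DB 0. */
                decrRefCount(keys[j]);
            }
        } while (numkeys);
    }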
-
antirez authored
-
- 13 May, 2014 2 commits
-
-
antirez authored
Better handling of connection errors in order to update the table and recover; also populate the startup nodes table after fetching the list of nodes. There is more work to do about it: it is still not as reliable as the redis-rb-cluster implementation, which is the minimal reference implementation for Redis Cluster clients.
-
antirez authored
-
- 12 May, 2014 10 commits
-
-
antirez authored
-
antirez authored
Using CLUSTER FAILOVER FORCE it is now possible to fail over a master in a forced way, which means:

1) No check to understand if the master is up is performed.
2) The data age of the slave is not checked. Even a slave with very old data can manually fail over a master in this way.
3) No chat with the master is attempted to reach its replication offset: the master can just be down.
-
antirez authored
Automatic failovers only happen in Redis Cluster if the slave trying to be elected was disconnected from its master for no more than 10 times the node-timeout value. However there should be no such check for manual failovers, since these are initiated by the sysadmin who, in theory, knows what she is doing when a slave is selected to be promoted.
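A hedged sketch of the check (the constants and fields are assumptions based on the description):

    /* Sketch: the data-age limit only blocks automatic failovers; a manual
     * failover requested by the sysadmin always proceeds. */
    static int slaveTooOldForAutoFailover(mstime_t data_age, int manual_failover) {
        mstime_t limit = server.cluster_node_timeout * 10;   /* 10 * node-timeout */

        return !manual_failover && data_age > limit;
    }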
-
Akos Vandra authored
(Note: commit message modified by @antirez for clarity).
-
Akos Vandra authored
-
antirez authored
This way there is no need for the conflict resolution algo to be used in order to start with a cluster where each node has a different configEpoch.
-
antirez authored
-
antirez authored
-
antirez authored
This will be configurable / adaptive at some point, but let's start with a saner value compared to 1 second, which is not a good idea for big data structures stored in a single key.
-
antirez authored
The error when the target key is busy was a generic one, while it makes sense to be able to easily distinguish the target-key-busy error from the others.
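A hedged sketch of the idea, using a dedicated error code that clients can match on (the exact wording below is an assumption):

    /* Sketch: in RESTORE, reply with a distinguishable error when the
     * target key already exists and REPLACE was not given. */
    if (!replace && lookupKeyWrite(c->db, c->argv[1]) != NULL) {
        addReplyError(c, "BUSYKEY Target key name already exists.");
        return;
    }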
-
- 10 May, 2014 1 commit
-
-
antirez authored
-
- 09 May, 2014 4 commits
-
-
antirez authored
Fixes issue #1734.
-
antirez authored
-
antirez authored
-
antirez authored
The same change was made for normal client connections. This is important for Cluster as well, since when a node rejoins the cluster, when a partition heals, or after a restart, it gets flooded with new connection attempts by all the other nodes trying to form a full mesh again.
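A hedged sketch of accepting cluster bus connections in a loop, mirroring what is already done for normal clients (names and the accept limit are assumptions):

    #define MAX_CLUSTER_ACCEPTS_PER_CALL 1000

    /* Sketch: accept up to MAX_CLUSTER_ACCEPTS_PER_CALL pending bus
     * connections per readable event, so a node that rejoins the cluster
     * can absorb the burst of connection attempts quickly. */
    void clusterAcceptHandler(aeEventLoop *el, int fd, void *privdata, int mask) {
        int cport, cfd, max = MAX_CLUSTER_ACCEPTS_PER_CALL;
        char cip[REDIS_IP_STR_LEN];

        while (max--) {
            cfd = anetTcpAccept(server.neterr, fd, cip, sizeof(cip), &cport);
            if (cfd == ANET_ERR) return;   /* No more pending connections. */
            /* ... create the cluster link and install the read handler ... */
        }
    }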
-