- 24 Feb, 2014 3 commits
- 23 Feb, 2014 4 commits
-
antirez authored
-
antirez authored
Pause the test with running instances available for state inspection on error.
-
antirez authored
-
Salvatore Sanfilippo authored
Deny SYNC and PSYNC in redis-cli
-
- 22 Feb, 2014 2 commits
-
antirez authored
This error was conceived for the older version of Sentinel that worked via master redirection and that was not able to get configuration updates from other Sentinels via the Pub/Sub channel of masters or slaves. This reply does not make sense today: every Sentinel should reply with the best information it currently has. The error will make even less sense in the future, since the plan is to allow Sentinels to update the configuration of other Sentinels via gossip with a direct chat, without the prerequisite that they have at least one monitored instance in common.
-
antirez authored
It is now possible to kill and restart Sentinel or Redis instances for more real-world testing. Unit 01 tests the ability of Sentinel to update the configuration of Sentinels rejoining the cluster; however, the test is pretty trivial and more tests should be added.
-
- 21 Feb, 2014 2 commits
-
Salvatore Sanfilippo authored
Add cluster or sentinel to proc title
-
Matt Stancliff authored
If you launch Redis with `redis-server --sentinel`, the `ps` output only says "redis-server IP:Port". This patch changes the proc title to include [sentinel] or [cluster] depending on the current server mode, e.g. "redis-server IP:Port [sentinel]" or "redis-server IP:Port [cluster]".
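For illustration, here is a minimal standalone C sketch of how such a title string could be composed from the server mode. The helper name, mode enum, and exact format are assumptions for this example, not the actual Redis functions, which presumably go through a setproctitle-style call.

```c
/* Minimal standalone sketch: compose a process title depending on the
 * server mode. Names and format are illustrative, not Redis' own code. */
#include <stdio.h>

enum server_mode { MODE_STANDALONE, MODE_SENTINEL, MODE_CLUSTER };

static void format_proc_title(char *buf, size_t len,
                              const char *ip, int port, enum server_mode mode)
{
    const char *suffix = "";
    if (mode == MODE_SENTINEL) suffix = " [sentinel]";
    else if (mode == MODE_CLUSTER) suffix = " [cluster]";
    snprintf(buf, len, "redis-server %s:%d%s", ip, port, suffix);
}

int main(void) {
    char title[128];
    format_proc_title(title, sizeof(title), "127.0.0.1", 26379, MODE_SENTINEL);
    printf("%s\n", title);   /* redis-server 127.0.0.1:26379 [sentinel] */
    return 0;
}
```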
-
- 20 Feb, 2014 3 commits
- 19 Feb, 2014 3 commits
- 18 Feb, 2014 8 commits
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
Rename define to match the new meaning.
-
antirez authored
If we can't reconfigure a slave in time during failover, go forward anyway: the slave will be fixed by Sentinels in the future, once they detect it is misconfigured. Otherwise a failover in progress may never terminate if, for some reason, the slave is unable to sync with the master while at the same time it is not disconnected.
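A minimal sketch of the "go forward on timeout" rule described above, as a standalone C model; the struct fields, timeout constant, and function names are hypothetical stand-ins rather than the real Sentinel failover state machine.

```c
/* Standalone model: if a slave is still not reconfigured when the failover
 * timeout fires, the failover proceeds anyway and later Sentinel runs are
 * expected to fix the misconfigured slave. Names are hypothetical. */
#include <stdio.h>
#include <time.h>

#define FAILOVER_TIMEOUT 180 /* seconds, illustrative value */

struct slave_state {
    int reconfigured;            /* 1 if the slave now points to the new master */
    time_t failover_start_time;  /* when the failover began */
};

/* Returns 1 when the failover state machine may advance past this slave. */
static int failover_can_proceed(const struct slave_state *sl, time_t now) {
    if (sl->reconfigured) return 1;
    if (now - sl->failover_start_time > FAILOVER_TIMEOUT) {
        printf("timeout: going forward, slave will be fixed later\n");
        return 1;
    }
    return 0; /* keep waiting */
}

int main(void) {
    struct slave_state sl = { .reconfigured = 0,
                              .failover_start_time = time(NULL) - 200 };
    printf("proceed=%d\n", failover_can_proceed(&sl, time(NULL)));
    return 0;
}
```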
-
- 17 Feb, 2014 5 commits
-
antirez authored
Nothing tested at all so far... Just the infrastructure spawning N Sentinels and N Redis instances that the test will use again and again.
-
antirez authored
-
antirez authored
Some inline tests were moved into the server_is_up procedure. Also, find_available_port was moved into util since it is going to be used for the Sentinel tests as well.
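The actual helpers live in the Tcl test framework; the standalone C sketch below only illustrates the idea behind a find_available_port-style helper: try to bind successive ports and return the first free one. The port range and function signature are assumptions for this example.

```c
/* Standalone sketch of a find_available_port-style helper: attempt to bind
 * successive TCP ports on loopback and return the first one that is free. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int find_available_port(int start, int count) {
    for (int port = start; port < start + count; port++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1) return -1;
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(port);
        sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        int free_port = bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0;
        close(fd);
        if (free_port) return port; /* nothing was listening there */
    }
    return -1; /* no free port in the range */
}

int main(void) {
    printf("first free port: %d\n", find_available_port(21111, 100));
    return 0;
}
```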
-
antirez authored
The code tried to obtain the configuration file's absolute path after processing the configuration file. However, if the config file was a relative path and a "dir" statement was processed while reading the config, the absolute path obtained was wrong. With this fix the absolute path is obtained before processing the configuration, while the server is still in the original directory where it was executed.
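A small standalone C illustration of the ordering fix: the absolute path must be computed from the original working directory before the config is processed, and therefore before any chdir() caused by a "dir" directive. The helper name and logic here are illustrative, not the exact Redis implementation.

```c
/* Illustration of the ordering fix: resolve the config file's absolute path
 * while still in the original working directory. Names are illustrative. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static char *abs_config_path(const char *filename) {
    if (filename[0] == '/') return strdup(filename);
    char cwd[PATH_MAX];
    if (getcwd(cwd, sizeof(cwd)) == NULL) return NULL;
    char *abspath = malloc(strlen(cwd) + 1 + strlen(filename) + 1);
    if (abspath) sprintf(abspath, "%s/%s", cwd, filename);
    return abspath;
}

int main(int argc, char **argv) {
    const char *cfg = argc > 1 ? argv[1] : "redis.conf";
    char *abspath = abs_config_path(cfg);  /* BEFORE loading the config... */
    /* ...so that a later chdir() triggered by a "dir" directive cannot
     * change what the stored path refers to. */
    printf("config file: %s\n", abspath ? abspath : "(unresolved)");
    free(abspath);
    return 0;
}
```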
-
antirez authored
Now it logs the file name if it is not accessible. There is also a different error for the missing config file case and for the non-writable file case.
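As a rough sketch of the distinction, assuming an access()-based check (the actual messages and code path in Redis may differ):

```c
/* Sketch: emit different errors for a missing config file versus one that
 * exists but is not writable. Wording and helper name are illustrative. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void check_config_file(const char *path) {
    if (access(path, F_OK) == -1) {
        fprintf(stderr, "Config file '%s' does not exist: %s\n",
                path, strerror(errno));
    } else if (access(path, W_OK) == -1) {
        fprintf(stderr, "Config file '%s' is not writable: %s\n",
                path, strerror(errno));
    }
}

int main(void) {
    check_config_file("/etc/redis/redis.conf");
    return 0;
}
```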
-
- 13 Feb, 2014 4 commits
-
antirez authored
server.unixtime and server.mstime are cached, less precise timestamps that we use every time we don't need an accurate time representation and a syscall would be too slow for the number of calls we require. One such example is the initialization and update of the last interaction time with the client, which is used for timeouts. However rdbLoad() can take some time to load the DB, and during that time it did not update the cached time. This resulted in the bug described in issue #1535, where during replication the slave loads the DB and creates the redisClient representation of its master, but the timestamp is so old that the master, under certain conditions, is sensed as already "timed out". Thanks to @yoav-steinberg and Redis Labs Inc for the bug report and analysis.
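A standalone C model of the cached-clock idea and of why a long blocking load has to refresh it; the names mirror the commit message (server.unixtime, updateCachedTime) but the code below is a simplified illustration, not the actual Redis implementation.

```c
/* Standalone model: most code reads a cached server.unixtime instead of
 * calling time(2), so a long blocking operation such as loading the RDB
 * file must refresh the cache itself, otherwise timestamps taken right
 * after loading look arbitrarily old. Simplified, not the Redis code. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static struct { time_t unixtime; } server;

static void updateCachedTime(void) { server.unixtime = time(NULL); }

static void rdbLoadChunk(void) { sleep(1); /* stand-in for slow disk I/O */ }

int main(void) {
    updateCachedTime();
    for (int i = 0; i < 3; i++) {
        rdbLoadChunk();
        updateCachedTime();   /* the fix: refresh the cache while loading */
    }
    /* A client created now gets a fresh last-interaction timestamp. */
    time_t lastinteraction = server.unixtime;
    printf("drift vs real time: %ld seconds\n",
           (long)(time(NULL) - lastinteraction));
    return 0;
}
```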
-
antirez authored
-
antirez authored
It was verified that, with the commit that fixes the bug reverted, the test no longer passes.
-
antirez authored
This commit fixes a serious Lua scripting replication issue, described by Github issue #1549. The root cause of the problem is that scripts were put inside the script cache, assuming that slaves and AOF already contained them, even when the scripts produced no changes in the data set and were not actually propagated to AOF/slaves.

Example:

    eval "if tonumber(KEYS[1]) > 0 then redis.call('incr', 'x') end" 1 0

Then:

    evalsha <sha1 step 1 script> 1 0

At this step the sha1 of the script is added to the replication script cache (the script is marked as known to the slaves) and the EVALSHA command is transformed to EVAL. However it is not dirty (there are no changes to the db), so it is not propagated to the slaves. Then the script is called again:

    evalsha <sha1 step 1 script> 1 1

At this step the master checks that the script already exists in the replication script cache and doesn't transform it to an EVAL command. It is dirty and propagated to the slaves, but they fail to evaluate the script as they don't have it in their script cache.

The fix is trivial and just uses the new API to force the propagation of the executed command regardless of the dirty state of the data set.

Thank you to @minus-infinity on Github for finding the issue, understanding the root cause, and fixing it.
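A toy standalone C model of the bug and the fix: once a script's SHA1 enters the replication script cache, its EVAL form must be propagated even if that particular call dirtied nothing. The cache, the "propagate" printout, and the SHA1 string below are simplified placeholders, not Redis code.

```c
/* Toy model: the first time a script's SHA1 is added to the replication
 * script cache (i.e. slaves are assumed to know it), force propagation of
 * the EVAL body even if the call made no changes. Placeholders only. */
#include <stdio.h>
#include <string.h>

#define MAX_SCRIPTS 16

static const char *script_cache[MAX_SCRIPTS];
static int cached_count = 0;

static int cache_contains(const char *sha) {
    for (int i = 0; i < cached_count; i++)
        if (strcmp(script_cache[i], sha) == 0) return 1;
    return 0;
}

static void evalsha(const char *sha, const char *body, int call_is_dirty) {
    if (!cache_contains(sha)) {
        script_cache[cached_count++] = sha;
        /* The fix: force propagation of the EVAL form right here, regardless
         * of call_is_dirty. Before the fix this happened only when dirty. */
        printf("propagate to slaves/AOF: EVAL %s\n", body);
    } else if (call_is_dirty) {
        /* Slaves already know the script, EVALSHA is safe to propagate. */
        printf("propagate to slaves/AOF: EVALSHA %s\n", sha);
    }
}

int main(void) {
    const char *body = "if tonumber(KEYS[1]) > 0 then redis.call('incr','x') end";
    const char *sha = "placeholder-sha1";          /* illustrative value only */
    evalsha(sha, body, 0); /* first call: KEYS[1] = 0, not dirty */
    evalsha(sha, body, 1); /* second call: KEYS[1] = 1, dirty    */
    return 0;
}
```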
-
- 12 Feb, 2014 2 commits
-
antirez authored
-
antirez authored
A system similar to the RDB write error handling is used: when we can't write to the AOF file, writes are no longer accepted until we are able to write again. For fsync == always we still abort on errors, since there is currently no easy way to avoid replying with success to the user otherwise, and this would violate the contract with the user of only acknowledging data already secured on disk.
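A simplified standalone C model of that behaviour, with illustrative names; the real logic lives in Redis' AOF code and uses its own flags and error reporting.

```c
/* Simplified model: after a failed AOF write the server refuses write
 * commands until a later write succeeds, while with fsync policy "always"
 * it aborts instead. Names and flow are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

enum { AOF_FSYNC_EVERYSEC, AOF_FSYNC_ALWAYS };

static int aof_fsync_policy = AOF_FSYNC_EVERYSEC;
static int aof_last_write_ok = 1;

static void after_aof_write(int write_succeeded) {
    if (!write_succeeded) {
        if (aof_fsync_policy == AOF_FSYNC_ALWAYS) {
            /* With fsync=always we cannot avoid having already acknowledged
             * data that never reached the disk, so abort. */
            fprintf(stderr, "AOF write error with fsync=always, exiting\n");
            exit(1);
        }
        aof_last_write_ok = 0; /* start rejecting write commands */
    } else {
        aof_last_write_ok = 1; /* recovered: accept writes again */
    }
}

static int can_accept_write_commands(void) { return aof_last_write_ok; }

int main(void) {
    after_aof_write(0);
    printf("writes accepted: %d\n", can_accept_write_commands());
    after_aof_write(1);
    printf("writes accepted: %d\n", can_accept_write_commands());
    return 0;
}
```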
-
- 11 Feb, 2014 4 commits
-
antirez authored
-
antirez authored
Logging them at WARNING level was of little utility and a sure source of noise.
-
antirez authored
-
antirez authored
Avoid trashing a configEpoch for every slot migrated if this node already has the max configEpoch across the cluster. There is still work to do in this area, but this avoids both ending up with a very high configEpoch without any reason and flooding the system with fsyncs.
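A standalone sketch of the rule, under the assumption that the decision reduces to "bump only if not already holding the greatest epoch"; names and signatures are illustrative, not the real cluster code.

```c
/* Standalone sketch: only generate (and fsync) a new configEpoch after a
 * slot migration if this node does not already hold the greatest
 * configEpoch seen in the cluster. Simplified illustration only. */
#include <stdio.h>
#include <stdint.h>

static uint64_t max_epoch_in_cluster(const uint64_t *epochs, int n) {
    uint64_t max = 0;
    for (int i = 0; i < n; i++)
        if (epochs[i] > max) max = epochs[i];
    return max;
}

/* Returns the node's (possibly updated) configEpoch. */
static uint64_t maybe_bump_config_epoch(uint64_t my_epoch,
                                        const uint64_t *others, int n) {
    uint64_t max = max_epoch_in_cluster(others, n);
    if (my_epoch >= max) return my_epoch; /* already the greatest: no bump,
                                             no config fsync needed */
    return max + 1;                       /* bump past everyone else */
}

int main(void) {
    uint64_t others[] = { 5, 9, 7 };
    printf("epoch after migration: %llu\n",
           (unsigned long long)maybe_bump_config_epoch(10, others, 3));
    printf("epoch after migration: %llu\n",
           (unsigned long long)maybe_bump_config_epoch(6, others, 3));
    return 0;
}
```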
-