- 06 Mar, 2014 3 commits
-
-
Matt Stancliff authored
REDIS_CLUSTER_IPLEN had the same value as REDIS_IP_STR_LEN. They were both #define'd to the same INET6_ADDRSTRLEN.
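A minimal sketch of the consolidation described above, with the surrounding headers assumed rather than copied from the actual cluster.h/anet.h layout:

```c
/* Sketch: both cluster and networking code size IP string buffers
 * from a single define instead of two identical ones. */
#include <netinet/in.h>              /* INET6_ADDRSTRLEN */

#define REDIS_IP_STR_LEN INET6_ADDRSTRLEN

/* Previously cluster.h carried an equivalent define:
 *   #define REDIS_CLUSTER_IPLEN INET6_ADDRSTRLEN
 * After the change, cluster code simply declares:
 *   char ip[REDIS_IP_STR_LEN];
 */
```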
-
Matt Stancliff authored
Function nodeIp2String in cluster.c is exactly anetPeerToString with a pre-extracted fd.
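A self-contained sketch of the kind of "peer to string" helper both functions boil down to; the real anetPeerToString() signature and error handling may differ:

```c
/* Given an already connected fd, fill ip/port with the peer address.
 * This mirrors what the shared helper does once the fd is extracted. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stddef.h>
#include <sys/socket.h>

static int peerToString(int fd, char *ip, size_t ip_len, int *port) {
    struct sockaddr_storage sa;
    socklen_t salen = sizeof(sa);

    if (getpeername(fd, (struct sockaddr *)&sa, &salen) == -1) return -1;
    if (sa.ss_family == AF_INET) {
        struct sockaddr_in *s = (struct sockaddr_in *)&sa;
        inet_ntop(AF_INET, &s->sin_addr, ip, ip_len);
        if (port) *port = ntohs(s->sin_port);
    } else {
        struct sockaddr_in6 *s = (struct sockaddr_in6 *)&sa;
        inet_ntop(AF_INET6, &s->sin6_addr, ip, ip_len);
        if (port) *port = ntohs(s->sin6_port);
    }
    return 0;
}
```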
-
Matt Stancliff authored
anetTcpAccept returns ANET_ERR, not AE_ERR. This was not an actual bug, since both ANET_ERR and AE_ERR are defined as -1, but it is better to be consistent.
-
- 05 Mar, 2014 1 commit
-
-
antirez authored
-
- 04 Mar, 2014 2 commits
-
-
antirez authored
Sentinel needs to avoid split brain conditions due to multiple Sentinels trying to get voted at the exact same time. So far some desynchronization was provided by fluctuating server.hz, that is, the frequency of the timer function call. However the desynchronization provided in this way was not enough when using many Sentinel instances, especially when a large quorum value is used in order to force a greater degree of agreement (more than N/2+1). It was verified that this was likely to trigger a split brain condition, forcing the system to try again after a timeout. Usually the system will succeed after a few retries, but this is not optimal. This commit desynchronizes instances in a more effective way to make it likely that the first attempt will be successful.
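A minimal sketch of the desynchronization idea, with purely illustrative constants and names (the actual Sentinel jitter values are not shown in the message):

```c
/* Each Sentinel delays the moment it starts asking for votes by a
 * random amount, so that peers are unlikely to request votes at the
 * exact same millisecond and split the vote. */
#include <stdint.h>
#include <stdlib.h>
#include <sys/time.h>

typedef int64_t mstime_t;

static mstime_t mstime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (mstime_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* Schedule the failover (and the first vote request) slightly in the
 * future, adding random jitter instead of relying on server.hz drift. */
static mstime_t scheduleFailoverStart(void) {
    return mstime() + (rand() % 1000);   /* up to 1s of jitter, illustrative */
}
```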
-
antirez authored
-
- 03 Mar, 2014 3 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
This is still code to rework in order to use agreement to obtain a new configEpoch when a slot is migrated; however this commit handles the special case that happens when the nodes are just started and everybody has a configEpoch of 0. In this special condition having the maximum configEpoch is not enough, as the special epoch 0 is not unique (all the others are). This does not fix the intrinsic race condition of a failover happening while we are resharding; that will be addressed later.
-
- 28 Feb, 2014 2 commits
-
-
Matt Stancliff authored
used_memory_peak is only updated in serverCron, which runs every 1/server.hz seconds, but Redis can use more memory and a user can request memory INFO before used_memory_peak gets updated in the next cron run. This patch updates used_memory_peak to the current memory usage if the current memory usage is higher than the recorded used_memory_peak value. (And it only calls zmalloc_used_memory() once instead of twice, as it did before.)
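A sketch of the described logic with a mocked-up server struct; only zmalloc_used_memory() is assumed from the real code base:

```c
/* Refresh the recorded peak on demand if current usage exceeds it,
 * calling the allocator accounting function only once. */
#include <stddef.h>

extern size_t zmalloc_used_memory(void);          /* provided by Redis' zmalloc */

static struct { size_t stat_peak_memory; } server;    /* mocked-up context */

static size_t getMemoryUsageAndUpdatePeak(void) {
    size_t used = zmalloc_used_memory();          /* single call, value reused */
    if (used > server.stat_peak_memory)
        server.stat_peak_memory = used;           /* peak never lags behind */
    return used;
}
```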
-
antirez authored
-
- 27 Feb, 2014 5 commits
-
-
michael-grunder authored
This commit reworks the redis-cli --bigkeys command to provide more information about progress, as well as summary information when the scan is done:
- An approximate completion percentage is shown as the scan proceeds
- Hiredis pipelining is used for TYPE and SIZE retrieval
- A summary of the keyspace distribution and an overall breakout is printed at the end
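A sketch of the hiredis pipelining pattern mentioned in the list above (TYPE shown; the size commands follow the same append/read pattern). Key names are placeholders:

```c
/* Queue one TYPE request per key, then read the replies back in order,
 * avoiding a network round trip per key. */
#include <stdio.h>
#include <hiredis/hiredis.h>

static void pipelineTypes(redisContext *c, char **keys, size_t nkeys) {
    for (size_t i = 0; i < nkeys; i++)
        redisAppendCommand(c, "TYPE %s", keys[i]);

    for (size_t i = 0; i < nkeys; i++) {
        redisReply *reply;
        if (redisGetReply(c, (void **)&reply) != REDIS_OK) return;
        printf("%s -> %s\n", keys[i], reply->str);   /* e.g. "mykey -> hash" */
        freeReplyObject(reply);
    }
}
```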
-
antirez authored
-
antirez authored
With the new behavior it is possible to specify just the start of the range (the end will be assumed to be the last byte), or to specify both start and end. This is useful to change the behavior of the command when looking for zeros inside a string:
1) If the user specifies both start and end, and no 0 is found inside the range, the command returns -1.
2) If instead no range is specified, or just the start is given, even if no 0 bit is found in the actual string, the command returns the first bit on the right after the end of the string.
So for example if the string stored at key foo is "\xff\xff":
BITPOS foo 0 (returns 16)
BITPOS foo 0 0 -1 (returns -1)
BITPOS foo 0 0 (returns 16)
The idea is that when no end is given the user is just looking for the first bit that is zero and can be set to 1 with SETBIT, as it is "available". Instead, when a specific range is given, we just look for a zero within the boundaries of the range.
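A small sketch of the rule above for the case where no clear bit was found in the scanned bytes (function name is illustrative):

```c
/* BITPOS looking for a 0 bit found only 0xff bytes: return -1 only if
 * the caller gave an explicit end, otherwise report the first bit
 * just past the end of the string, i.e. the first "available" bit. */
static long long bitposZeroNotFound(long long strlen_bytes, int end_given) {
    if (end_given) return -1;             /* explicit range, no match */
    return strlen_bytes * 8;              /* e.g. 16 for "\xff\xff" */
}
```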
-
antirez authored
It appears to work, but more stress testing, along with both unit tests and fuzz testing, is needed in order to ensure the implementation is sane.
-
antirez authored
-
- 25 Feb, 2014 7 commits
-
-
Matt Stancliff authored
-
michael-grunder authored
This commit changes the findBigKeys() function in redis-cli.c to use the new SCAN command for iterating the keyspace, rather than RANDOMKEY. Because SCAN tells us when the iteration is complete, the function now exits after exhausting the keyspace.
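A sketch of the SCAN cursor loop, using the hiredis synchronous API; the real findBigKeys() also issues TYPE/size lookups and keeps statistics:

```c
/* Iterate the whole keyspace: keep calling SCAN with the returned
 * cursor until it wraps back to 0. */
#include <stdio.h>
#include <stdlib.h>
#include <hiredis/hiredis.h>

static void scanAllKeys(redisContext *c) {
    unsigned long long cursor = 0;

    do {
        redisReply *reply = redisCommand(c, "SCAN %llu", cursor);
        if (reply == NULL) break;
        if (reply->type != REDIS_REPLY_ARRAY) { freeReplyObject(reply); break; }

        cursor = strtoull(reply->element[0]->str, NULL, 10);   /* next cursor */
        redisReply *keys = reply->element[1];                  /* batch of keys */
        for (size_t i = 0; i < keys->elements; i++)
            printf("%s\n", keys->element[i]->str);

        freeReplyObject(reply);
    } while (cursor != 0);                                     /* 0 == done */
}
```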
-
antirez authored
-
antirez authored
The computation is just something to keep the CPU busy, so there is no need to use a specific type. Since stdint.h was not included, this prevented compilation on certain systems.
-
antirez authored
-
antirez authored
-
antirez authored
-
- 24 Feb, 2014 4 commits
- 22 Feb, 2014 1 commit
-
-
antirez authored
This error was conceived for the older version of Sentinel that worked via master redirection and was not able to get configuration updates from other Sentinels via the Pub/Sub channel of masters or slaves. This reply does not make sense today: every Sentinel should reply with the best information it currently has. The error will make even less sense in the future, since the plan is to allow Sentinels to update the configuration of other Sentinels via gossip with a direct chat, without the prerequisite that they have at least one monitored instance in common.
-
- 21 Feb, 2014 1 commit
-
-
Matt Stancliff authored
If you launch Redis with `redis-server --sentinel`, the `ps` output only says "redis-server IP:Port". This patch changes the proc title to include [sentinel] or [cluster] depending on the current server mode, e.g. "redis-server IP:Port [sentinel]" or "redis-server IP:Port [cluster]".
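A sketch of composing the new title string; Redis carries its own setproctitle() implementation, so only the string building is shown here and the mode flags are illustrative stand-ins for the real server state:

```c
/* Build "redis-server IP:Port [sentinel|cluster]" for the proc title. */
#include <stdio.h>

static void buildProcTitle(char *buf, size_t buflen, const char *addr,
                           int port, int sentinel_mode, int cluster_enabled) {
    snprintf(buf, buflen, "redis-server %s:%d%s", addr, port,
             sentinel_mode   ? " [sentinel]" :
             cluster_enabled ? " [cluster]"  : "");
}
```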
-
- 20 Feb, 2014 1 commit
-
-
antirez authored
This is mostly useful for debugging issues.
-
- 18 Feb, 2014 2 commits
-
-
antirez authored
Rename define to match the new meaning.
-
antirez authored
If we can't reconfigure a slave in time during failover, go forward anyway, as the slave will eventually be fixed by Sentinels once they detect it is misconfigured. Otherwise a failover in progress may never terminate if for some reason the slave is unable to sync with the master while at the same time it is not disconnected.
-
- 17 Feb, 2014 2 commits
-
-
antirez authored
The code tried to obtain the configuration file absolute path after processing the configuration file. However, if the config file was given as a relative path and a "dir" statement was processed while reading the config, the absolute path obtained was wrong. With this fix the absolute path is obtained before processing the configuration, while the server is still in the original directory where it was executed.
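A sketch of the fix's idea: resolve the path against the startup working directory before the config is parsed (a later "dir" directive may chdir()). The helper name is illustrative:

```c
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Must be called before the config file is processed, while the server
 * is still in the directory it was started from. Returns a malloc'd path. */
static char *absolutePathAtStartup(const char *filename) {
    char cwd[PATH_MAX];
    char *abspath;

    if (filename[0] == '/') return strdup(filename);      /* already absolute */
    if (getcwd(cwd, sizeof(cwd)) == NULL) return NULL;

    abspath = malloc(strlen(cwd) + 1 + strlen(filename) + 1);
    if (abspath == NULL) return NULL;
    sprintf(abspath, "%s/%s", cwd, filename);
    return abspath;
}
```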
-
antirez authored
Now it logs the file name if it is not accessible. There is also a different error for the missing config file case and for the non-writable file case.
-
- 13 Feb, 2014 3 commits
-
-
antirez authored
server.unixtime and server.mstime are cached, less precise timestamps that we use every time we don't need an accurate time representation and a syscall would be too slow for the number of calls we require. One such example is the initialization and update of the last interaction time with the client, which is used for timeouts. However rdbLoad() can take some time to load the DB, and it did not update the cached time during DB loading. This resulted in the bug described in issue #1535, where in the replication process the slave loads the DB, creates the redisClient representation of its master, but the timestamp is so old that the master, under certain conditions, is sensed as already "timed out". Thanks to @yoav-steinberg and Redis Labs Inc for the bug report and analysis.
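A sketch of the idea with a mocked-up server struct: refresh the cached timestamps from the RDB loading path so objects created right after loading don't see an ancient time:

```c
#include <stdint.h>
#include <sys/time.h>
#include <time.h>

static struct { time_t unixtime; int64_t mstime; } server;   /* mocked-up */

static void updateCachedTime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    server.unixtime = tv.tv_sec;
    server.mstime = (int64_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* Invoked every N loaded keys from the rdbLoad() loop (illustrative hook),
 * so server.unixtime/mstime stay fresh while loading a large DB. */
static void rdbLoadProgressHook(void) {
    updateCachedTime();
}
```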
-
antirez authored
-
antirez authored
This commit fixes a serious Lua scripting replication issue, described by Github issue #1549. The root cause of the problem is that scripts were put inside the script cache, assuming that slaves and AOF already contained them, even if the scripts sometimes produced no changes in the data set and were not actually propagated to AOF/slaves. Example:
eval "if tonumber(KEYS[1]) > 0 then redis.call('incr', 'x') end" 1 0
Then:
evalsha <sha1 step 1 script> 1 0
At this step the sha1 of the script is added to the replication script cache (the script is marked as known to the slaves) and the EVALSHA command is transformed to EVAL. However it is not dirty (there are no changes to the DB), so it is not propagated to the slaves. Then the script is called again:
evalsha <sha1 step 1 script> 1 1
At this step the master checks that the script already exists in the replication script cache and doesn't transform it into an EVAL command. This time it is dirty and gets propagated to the slaves, but they fail to evaluate the script as they don't have it in their script cache. The fix is trivial and just uses the new API to force the propagation of the executed command regardless of the dirty state of the data set. Thank you to @minus-infinity on Github for finding the issue, understanding the root cause, and fixing it.
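The message refers to "the new API to force the propagation" without showing it; this is only a hedged sketch of the flag-based idea, with hypothetical names:

```c
/* After EVALSHA runs a script already present in the replication script
 * cache, mark the command for propagation even if the dataset was not
 * made dirty, so slaves/AOF receive it. Flag names are hypothetical. */
#define PROPAGATE_AOF  (1 << 0)
#define PROPAGATE_REPL (1 << 1)

typedef struct client { int force_propagate_flags; } client;

static void forcePropagation(client *c, int targets) {
    c->force_propagate_flags |= targets;
}

/* Illustrative call site in the EVALSHA path:
 *   forcePropagation(c, PROPAGATE_AOF | PROPAGATE_REPL);
 */
```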
-
- 12 Feb, 2014 2 commits
-
-
antirez authored
-
antirez authored
A system similar to the RDB write error handling is used, in which when we can't write to the AOF file, writes are no longer accepted until we are able to write again. For fsync == always we still abort on errors since there is currently no easy way to avoid replying with success to the user otherwise, and this would violate the contract with the user of only acknowledging data already secured on disk.
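A sketch of the status-flag approach described above, with mocked-up state and illustrative names:

```c
/* Track the last AOF write status: refuse new writes while in error,
 * clear the error once a write fully succeeds again. With
 * appendfsync=always the server still aborts instead. */
#include <stdlib.h>

enum { STATUS_OK, STATUS_ERR };
enum { FSYNC_EVERYSEC, FSYNC_ALWAYS };

static struct { int aof_last_write_status; int aof_fsync; } server;

static int writesAllowed(void) {
    return server.aof_last_write_status == STATUS_OK;
}

static void afterAofWrite(long written, long expected) {
    if (written != expected) {
        if (server.aof_fsync == FSYNC_ALWAYS) abort();      /* can't ack safely */
        server.aof_last_write_status = STATUS_ERR;           /* stop new writes */
    } else {
        server.aof_last_write_status = STATUS_OK;            /* recovered */
    }
}
```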
-
- 11 Feb, 2014 1 commit
-
-
antirez authored
-