- 10 Dec, 2014 8 commits
-
-
antirez authored
Slave key expiration is orchestrated by the master. Sometimes the master sends the synthesized DEL that expires a key on the slave with a non-trivial delay (when the key is not accessed, only the incremental expiry algorithm will expire it in the background). During that time the key is logically expired, but slaves still return it if you GET (or whatever) it. This is bad behavior. However we can't simply trust the slave's view of the key, since we need the master to be able to send write commands to update the slave data set, and DELs should only happen when the key is expired in the master in order to ensure consistency. However 99.99% of the issues with this behavior occur when a client which is not a master sends a read-only command. In this case we are safe and can consider the key as non-existing. This commit makes a few changes in order to make this sane:
1. lookupKeyRead() is modified in order to return NULL if the above conditions are met.
2. Calls to lookupKeyRead() in commands actually writing to the data set are replaced with calls to lookupKeyWrite().
The checks are redundant, so for example if something was overlooked in "2" we are still safe, since when the master writes, the behavior is to ignore what expireIfNeeded() returns anyway. This commit is related to #1768, #1770, #2131.
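A minimal standalone sketch of the read-path rule described above, using stand-in types and names rather than the actual Redis lookupKeyRead() internals:

```c
#include <stddef.h>

typedef struct {
    long long expire_at_ms;   /* -1 means no TTL */
    void *value;
} entry;

typedef struct {
    int is_replica;           /* the server has a master configured */
    int caller_is_master;     /* the command arrived over the master link */
    long long now_ms;
} read_ctx;

/* Return the value for read-only access, or NULL if the key must be
 * treated as missing. */
void *lookup_for_read(const read_ctx *ctx, const entry *e) {
    if (e == NULL) return NULL;
    int logically_expired =
        e->expire_at_ms != -1 && e->expire_at_ms <= ctx->now_ms;
    if (logically_expired) {
        if (!ctx->is_replica)
            return NULL;          /* on the master the key is simply gone */
        if (!ctx->caller_is_master)
            return NULL;          /* replica + normal client: hide the key */
        /* replica + master link: keep serving the old value; only the
         * master's synthesized DEL is allowed to actually remove it. */
    }
    return e->value;
}
```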
-
antirez authored
Changed in order to make them more review-friendly, based on my own experience reviewing the code.
-
antirez authored
I guess the initial goal of the initialization was to suppress a GCC warning, but if we have to initialize, we can do it with the base-case value instead of NULL, which is never retained anyway.
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
Brochen authored
In the case where all chars of the string s are found in 'cset', line 573 will no longer redo the work line 572 already did. This is faster, especially when the string s is very long and all of its chars are found in 'cset'.
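A standalone sketch (with hypothetical names, not the actual sds.c code) of the optimization: bounding the backward scan by the forward cursor instead of by the start of the string means the second loop never rescans bytes the first loop already consumed.

```c
#include <string.h>

/* Trim characters in 'cset' from both ends of s[0..len), returning the
 * length of the trimmed region whose start is stored in *start_out. */
size_t trim_bounds(char *s, size_t len, const char *cset, char **start_out) {
    char *sp = s;
    char *ep = s + len - 1;
    while (sp <= ep && strchr(cset, *sp)) sp++;
    /* Stopping at 'sp' rather than at 's' is the point of the change: when
     * the forward scan consumed the whole string, this loop does no work. */
    while (ep > sp && strchr(cset, *ep)) ep--;
    *start_out = sp;
    return (sp > ep) ? 0 : (size_t)(ep - sp + 1);
}
```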
-
- 09 Dec, 2014 1 commit
-
-
antirez authored
-
- 08 Dec, 2014 3 commits
-
-
Jan-Erik Rediger authored
This allows shell pipes to correctly end redis-cli. Ref #2066
-
Sun He authored
-
Sun He authored
-
- 05 Dec, 2014 1 commit
-
-
- 04 Dec, 2014 1 commit
-
-
antirez authored
-
- 03 Dec, 2014 4 commits
-
-
antirez authored
Track bandwidth used by clients and replication (but diskless replication is not tracked since the actual transfer happens in the child process). This includes a refactoring that makes tracking new instantaneous metrics simpler.
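A standalone sketch, with hypothetical names, of the kind of bookkeeping such instantaneous metrics need: sample a monotonically growing counter at a fixed interval and average the per-second deltas over a short window. The actual Redis helper may differ in detail.

```c
#define METRIC_SAMPLES 16

typedef struct {
    long long last_value;               /* counter value at the last sample */
    long long last_sample_ms;           /* time of the last sample */
    long long samples[METRIC_SAMPLES];  /* recent per-second rates */
    int idx;
} inst_metric;

/* Call periodically (e.g. every 100 ms) with the current time and the
 * current value of the counter being tracked (ops served, bytes moved...). */
void metric_sample(inst_metric *m, long long now_ms, long long current_value) {
    long long dt = now_ms - m->last_sample_ms;
    long long dv = current_value - m->last_value;
    if (dt > 0) {
        m->samples[m->idx] = dv * 1000 / dt;   /* normalize to "per second" */
        m->idx = (m->idx + 1) % METRIC_SAMPLES;
    }
    m->last_value = current_value;
    m->last_sample_ms = now_ms;
}

/* The instantaneous value is just the mean of the recent samples. */
long long metric_get(const inst_metric *m) {
    long long sum = 0;
    for (int i = 0; i < METRIC_SAMPLES; i++) sum += m->samples[i];
    return sum / METRIC_SAMPLES;
}
```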
-
antirez authored
Closes issue #1935.
-
antirez authored
-
Sun He authored
-
- 02 Dec, 2014 4 commits
-
-
antirez authored
Ref: issue #2175
-
antirez authored
-
antirez authored
PFCOUNT is, technically speaking, a write command, since the cached value of the HLL is exposed in the data structure (design error, mea culpa) and can be modified by PFCOUNT. However if we flag PFCOUNT as "w", read-only slaves can't execute the command, which is a problem since there are environments where slaves are used to scale PFCOUNT reads. Nor is it possible to simply prevent PFCOUNT from modifying the data structure in slaves, since without the cache we lose too much efficiency. So while this commit allows slaves to create a temporary inconsistency (the strings representing the HLLs in the master and slave can differ at certain moments), it is actually harmless. In the long run this should probably be fixed by turning the HLL into a more opaque representation, for example by storing the cached value in a part of the string which is not exposed (this should be possible with SDS strings).
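For context, a sketch of the value layout involved: the HLL string begins with a small header that embeds the cached cardinality, so refreshing a stale cache means rewriting bytes of the value even though PFCOUNT is conceptually a read. Field names and sizes here are illustrative, not a guaranteed match of the actual format.

```c
#include <stdint.h>

/* Illustrative layout of an HLL value string. PFCOUNT may rewrite the
 * 'card' bytes when the cached cardinality is stale, which is why the
 * command behaves like a write from the replication point of view. */
struct hll_value {
    char magic[4];        /* "HYLL" */
    uint8_t encoding;     /* dense or sparse */
    uint8_t notused[3];
    uint8_t card[8];      /* cached cardinality, invalidated on PFADD */
    uint8_t registers[];  /* the actual HLL registers follow */
};
```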
-
Sun He authored
-
- 01 Dec, 2014 3 commits
-
-
Deepak Verma authored
-
Jan-Erik Rediger authored
-
azure provisioned user authored
-
- 28 Nov, 2014 3 commits
-
-
antirez authored
The bulk_data field size was not removed from the count. It is not possible to declare it simply as 'char bulk_data[]' since the structure is nested inside another structure.
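A minimal standalone illustration (hypothetical struct names, not the actual cluster message layout) of the constraint mentioned above and of the length accounting it forces: standard C allows a flexible array member only in a struct that is not itself embedded in another struct, so a 1-byte placeholder array is used and must then be subtracted from the total size.

```c
#include <stddef.h>

struct payload {
    int len;
    char bulk_data[1];   /* placeholder byte; real data extends past the struct */
};

struct message {
    int type;
    struct payload data; /* a 'char bulk_data[]' payload could not be nested
                            here: standard C forbids embedding a struct that
                            ends with a flexible array member */
};

/* The placeholder byte has to be removed when computing the total length,
 * which is the accounting the commit fixes. */
size_t message_len(size_t bulk_len) {
    return sizeof(struct message)
           - sizeof(((struct message *)0)->data.bulk_data)
           + bulk_len;
}
```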
-
antirez authored
10000 iterations complete in too short a time and may easily produce unreliable figures because of the tiny duration.
-
Matthias Petschick authored
-
- 26 Nov, 2014 1 commit
-
-
antirez authored
Because of (not so) recent Redis changes, the LRU idle time is now internally reported in milliseconds, not seconds, but the DEBUG OBJECT output was still claiming seconds while providing milliseconds. However OBJECT IDLETIME, which is the correct API to use, was working as expected.
-
- 25 Nov, 2014 3 commits
-
-
antirez authored
-
Sun He authored
-
antirez authored
zmalloc(0) actually triggers a non-zero allocation, since with the standard libc malloc we prepend our own zmalloc header for memory tracking, and at the same time the returned pointer ends up at the very end of the block rather than in the middle. This triggers a false positive when testing with Valgrind. When the inline protocol args count is 0, we now avoid reallocating c->argv, preventing the issue from happening.
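A standalone sketch, simplified from the idea the message describes (not the actual zmalloc.c), of why zmalloc(0) still allocates memory and why the pointer it returns sits at the very edge of the block:

```c
#include <stdlib.h>

#define PREFIX_SIZE sizeof(size_t)

/* Even a zero-byte request produces a real allocation: an accounting header
 * is stored in front of the returned pointer, so for size == 0 the pointer
 * handed back points exactly at the end of the underlying block, which is
 * what made Valgrind complain. */
void *zmalloc_sketch(size_t size) {
    void *block = malloc(size + PREFIX_SIZE);
    if (block == NULL) abort();
    *((size_t *)block) = size;           /* record the requested size */
    return (char *)block + PREFIX_SIZE;  /* usable area starts after the header */
}
```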
-
- 20 Nov, 2014 1 commit
-
-
Matt Stancliff authored
Sentinel queries INFO from every master and from every replica of every master. We can cache the INFO results in Sentinel so that Sentinel becomes a single place to quickly get all INFO output for an entire Sentinel monitoring group. This commit adds SENTINEL INFO-CACHE in two forms:
- SENTINEL INFO-CACHE — returns all masters and all replicas
- SENTINEL INFO-CACHE master0 master1 ... masterN — vararg to specify masters
Results are returned as a multibulk reply with two top-level entries for each master. The first entry for each master is the name of the master. The second entry is a nested multibulk reply with the contents of INFO, first for the master, then an additional entry for each of the replicas.
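A small client-side example, assuming hiredis and a Sentinel listening on the conventional port 26379: it issues the new command and walks whatever nesting the multibulk reply contains, without hard-coding the exact shape described above.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

/* Recursively print a reply, indenting nested multibulk entries. */
static void print_reply(const redisReply *r, int depth) {
    if (r == NULL) return;
    if (r->type == REDIS_REPLY_ARRAY) {
        for (size_t i = 0; i < r->elements; i++)
            print_reply(r->element[i], depth + 1);
    } else if (r->type == REDIS_REPLY_STRING || r->type == REDIS_REPLY_STATUS) {
        printf("%*s%s\n", depth * 2, "", r->str);
    } else if (r->type == REDIS_REPLY_INTEGER) {
        printf("%*s%lld\n", depth * 2, "", r->integer);
    }
}

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 26379);  /* Sentinel port */
    if (c == NULL || c->err) return 1;
    redisReply *r = redisCommand(c, "SENTINEL INFO-CACHE");
    if (r) {
        print_reply(r, 0);
        freeReplyObject(r);
    }
    redisFree(c);
    return 0;
}
```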
-
- 14 Nov, 2014 1 commit
-
-
antirez authored
-
- 12 Nov, 2014 3 commits
- 11 Nov, 2014 3 commits
-
-
antirez authored
RDB EOF detection relied on the final part of the RDB transfer being a magic 40-byte EOF marker. However, as the slave is put online immediately, and because of socket timeouts, the replication stream is actually contiguous with the RDB file. This means that to detect the EOF correctly we should either:
1) Scan the whole stream searching for the mark. Sucks CPU-wise.
2) Start sending the replication stream only after an acknowledge.
3) Implement a proper chunked encoding.
For now solution "2" was picked, so the master does not start sending the stream of commands ASAP in the case of diskless replication. We wait for the first REPLCONF ACK command from the slave, which certifies that the slave correctly loaded the RDB file and is ready to receive more data.
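For context, a standalone sketch (my own naming, not the actual replication code) of the 40-byte EOF mark check itself: the receiver only has to keep the last 40 bytes seen, and solution "2" guarantees that the mark really is the last thing on the socket before the command stream begins.

```c
#include <string.h>

#define EOF_MARK_LEN 40

typedef struct {
    char mark[EOF_MARK_LEN];        /* mark announced by the master ($EOF:<mark>) */
    char lastbytes[EOF_MARK_LEN];   /* rolling window over the received stream */
    size_t seen;                    /* bytes received so far */
} eof_detector;

/* Feed a newly received chunk; returns 1 once the stream ends with the mark. */
int eof_detector_feed(eof_detector *d, const char *buf, size_t len) {
    if (len >= EOF_MARK_LEN) {
        memcpy(d->lastbytes, buf + len - EOF_MARK_LEN, EOF_MARK_LEN);
    } else {
        /* Shift the window left and append the new bytes at the end. */
        memmove(d->lastbytes, d->lastbytes + len, EOF_MARK_LEN - len);
        memcpy(d->lastbytes + (EOF_MARK_LEN - len), buf, len);
    }
    d->seen += len;
    return d->seen >= EOF_MARK_LEN &&
           memcmp(d->lastbytes, d->mark, EOF_MARK_LEN) == 0;
}
```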
-
antirez authored
-
Charles Hooper authored
-