- 12 Dec, 2014 1 commit
-
-
clark.kang authored
-
- 10 Dec, 2014 1 commit
-
-
antirez authored
Key expiration on slaves is orchestrated by the master. Sometimes the master will send the synthesized DEL to expire a key on the slave with a non-trivial delay (when the key is not accessed, only the incremental expiry algorithm will expire it in the background). During that time the key is logically expired, but slaves still return it if you GET (or whatever) it. This is a bad behavior. However we can't simply trust the slave's view of the key, since we need the master to be able to send write commands to update the slave data set, and DELs should only happen when the key is expired in the master in order to ensure consistency. However, 99.99% of the issues with this behavior arise when a client that is not a master sends a read-only command. In this case we are safe and can consider the key as non existing. This commit makes a few changes in order to make this sane:
1. lookupKeyRead() is modified in order to return NULL if the above conditions are met.
2. Calls to lookupKeyRead() in commands actually writing to the data set are replaced with calls to lookupKeyWrite().
There are redundant checks, so for example, if in "2" something was overlooked, we should still be safe, since anyway, when the master writes, the behavior is not to care about what expireIfNeeded() returns.
This commit is related to #1768, #1770, #2131.
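A condensed sketch of the new lookupKeyRead() behavior described above (hit/miss statistics and NULL-pointer checks are omitted, and the field names follow the Redis source of that era):

```c
/* Hide a logically expired key from read-only clients on a slave, while
 * still letting writes replicated from the master see it. */
robj *lookupKeyRead(redisDb *db, robj *key) {
    if (expireIfNeeded(db, key) == 1) {
        /* On the master the key was actually deleted: report it as missing. */
        if (server.masterhost == NULL) return NULL;

        /* On a slave only the master may delete it, but a normal client
         * running a read-only command should not see the stale value. */
        if (server.current_client != server.master &&
            server.current_client->cmd->flags & REDIS_CMD_READONLY)
        {
            return NULL;
        }
    }
    return lookupKey(db, key);
}
```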
-
- 02 Dec, 2014 1 commit
-
-
antirez authored
Ref: issue #2175
-
- 09 Sep, 2014 1 commit
-
-
xiaost authored
*SCAN would cause the Redis server to hang for seconds after millions of keys were deleted by SCAN/DEL pairs.
-
- 13 Aug, 2014 1 commit
-
-
antirez authored
-
- 08 Aug, 2014 1 commit
-
-
Matt Stancliff authored
Previously, "MOVE key somestring" would move the key to DB 0 which is just unexpected and wrong. String as DB == error. Test added too. Modified by @antirez in order to use the getLongLongFromObject() API instead of strtol(). Fixes #1428
-
- 07 Aug, 2014 1 commit
-
-
Matt Stancliff authored
We only want to use the last STORE key, but we have to record that we actually found a STORE key so we can increment the final return key count. Test added to prevent further regression. Closes #1883, #1645, #1647
-
- 26 Jun, 2014 1 commit
-
-
antirez authored
-
- 21 May, 2014 1 commit
-
-
Matt Stancliff authored
Behrad Zari discovered [1] and Josiah reported [2]: if you block and wait for a list to exist, but the list is created by a non-push command, the blocked client never gets notified. This commit adds notification of blocked clients into the DB layer and away from individual commands. Lists can be created by [LR]PUSH, SORT..STORE, RENAME, MOVE, and RESTORE. Previously, blocked client notifications were only triggered by [LR]PUSH. Your client would never get notified if a list were created by SORT..STORE or RENAME or a RESTORE, etc. Blocked client notification now happens in one unified place:
- dbAdd() triggers notification when adding a list to the DB
Two new tests are added that fail prior to this commit. All tests pass. Fixes #1668
[1]: https://groups.google.com/forum/#!topic/redis-db/k4oWfMkN1NU
[2]: #1668
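A minimal sketch of the unified hook (close to the description above; signalListAsReady() is the helper the blocking-list code already uses to wake BLPOP/BRPOP waiters, and assertions are simplified):

```c
/* Add the key to the DB; if the value is a list, signal any clients
 * blocked waiting for that key, regardless of which command created it. */
void dbAdd(redisDb *db, robj *key, robj *val) {
    sds copy = sdsdup(key->ptr);
    int retval = dictAdd(db->dict, copy, val);

    redisAssertWithInfo(NULL, key, retval == REDIS_OK);
    if (val->type == REDIS_LIST) signalListAsReady(db, key);
}
```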
-
- 14 May, 2014 1 commit
-
-
antirez authored
The previous code handling a lost slot (because another master claimed it with a higher configuration for the slot) was defensive, considering it an error and putting the cluster in an odd state requiring a redis-cli fix. This was changed, because actually this only happens either in a legitimate way, with failovers, or when the admin messed with the config in order to reconfigure the cluster. So the new code instead will try to make sure that the stored keys match the new slots map, by removing all the keys in the slots we lost ownership of. The function that deletes the keys from the lost slots is called only if the node does not lose all its slots (which would result in a reconfiguration as a slave of the node that got ownership). This is an optimization, since in that case the replication code will flush all the instance data anyway, in a faster way.
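A condensed sketch of the cleanup step (getKeysInSlot() and dbDelete() are existing Redis helpers; the real function may iterate the cluster's slot-to-keys index directly):

```c
/* When another master takes over a slot, drop the local keys belonging
 * to that slot so the data set matches the new slots map. */
unsigned int delKeysInSlot(unsigned int hashslot) {
    unsigned int deleted = 0;
    robj *key;

    /* getKeysInSlot() returns up to 'count' keys mapped to the given
     * hash slot; here we take them one at a time. */
    while (getKeysInSlot(hashslot, &key, 1) != 0) {
        incrRefCount(key);              /* Keep the object alive while deleting. */
        dbDelete(&server.db[0], key);   /* Cluster nodes only use DB 0. */
        decrRefCount(key);
        deleted++;
    }
    return deleted;
}
```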
-
- 17 Apr, 2014 1 commit
-
-
antirez authored
-
- 10 Apr, 2014 1 commit
-
-
Matt Stancliff authored
Deleting an expired key should return 0, not success. Fixes #1648
-
- 30 Mar, 2014 1 commit
-
-
antirez authored
All the Redis functions that need to modify the string value of a key in a destructive way (APPEND, SETBIT, SETRANGE, ...) need to make the object unshared (if refcount > 1) and encoded in raw format (if the encoding is not already REDIS_ENCODING_RAW). This logic was cut & pasted in multiple places of the code. This commit puts the small logic needed into a function called dbUnshareStringValue().
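A condensed sketch of what such a helper looks like, following the description above (the real function in db.c may differ in details):

```c
/* Return a version of the key's string value that is safe to modify in
 * place: not shared with other references, and raw (sds) encoded. */
robj *dbUnshareStringValue(redisDb *db, robj *key, robj *o) {
    redisAssert(o->type == REDIS_STRING);
    if (o->refcount != 1 || o->encoding != REDIS_ENCODING_RAW) {
        robj *decoded = getDecodedObject(o);
        o = createRawStringObject(decoded->ptr, sdslen(decoded->ptr));
        decrRefCount(decoded);
        dbOverwrite(db, key, o);
    }
    return o;
}
```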
-
- 20 Mar, 2014 2 commits
-
-
antirez authored
-
antirez authored
For testing purposes it is handy to have a very high resolution of the LRU clock, so that it is possible to experiment, with scripts running in just a few seconds, with how the eviction algorithm works. This commit allows Redis to use the cached LRU clock, or a value computed on demand, depending on the resolution. So normally we have the good performance of a precomputed value, and a clock that wraps in many days using the normal resolution, but if needed, changing a define will switch behavior to a high resolution LRU clock.
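A sketch of the switch, modeled on the REDIS_LRU_CLOCK_* defines (constant values are illustrative):

```c
#define REDIS_LRU_BITS 24
#define REDIS_LRU_CLOCK_MAX ((1<<REDIS_LRU_BITS)-1)   /* Max value of obj->lru */
#define REDIS_LRU_CLOCK_RESOLUTION 1000               /* Milliseconds; lower = finer. */

/* Compute the LRU clock on demand from the wall clock. */
unsigned int getLRUClock(void) {
    return (mstime()/REDIS_LRU_CLOCK_RESOLUTION) & REDIS_LRU_CLOCK_MAX;
}

/* If the cached clock (refreshed server.hz times per second) is at least
 * as fine as the configured resolution, use it; otherwise pay the cost
 * of computing it on demand. */
#define LRU_CLOCK() ((1000/server.hz <= REDIS_LRU_CLOCK_RESOLUTION) ? \
                      server.lruclock : getLRUClock())
```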
-
- 10 Mar, 2014 8 commits
-
-
antirez authored
It does not make sense to pass multiple STORE options, so it is better to handle it ;-)
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
Previously we used zunionInterGetKeys(); however, after this function was fixed to account for the destination key (not needed when the API was designed for "diskstore"), the two sets of commands can no longer be served by a single keys-extraction function.
-
antirez authored
-
antirez authored
This API originated from the "diskstore" experiment, not from Redis Cluster itself, so there were legacy/useless things trying to differentiate between keys that are going to be overwritten and keys that need to be fetched from disk (preloaded). All of this is useless with Cluster, so it was removed, with the result of simpler code.
-
antirez authored
Everything was pretty clear again from the initial statements.
-
- 07 Mar, 2014 1 commit
-
-
Matt Stancliff authored
The previous implementation wasn't taking into account that the storage key in position 1 is a requirement (it was only counting the source keys in positions 3 to N). Fixes antirez/redis#1581
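A condensed sketch of the corrected keys-extraction function for Z{UNION,INTER}STORE, based on the description above (allocation failure handling is omitted):

```c
/* Report both the destination key (argv[1]) and the source keys
 * (argv[3] .. argv[2+numkeys]) as the keys used by the command. */
int *zunionInterGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys) {
    int i, num, *keys;
    REDIS_NOTUSED(cmd);

    num = atoi(argv[2]->ptr);
    /* The command itself will reply with a syntax error: report no keys. */
    if (num > (argc - 3)) {
        *numkeys = 0;
        return NULL;
    }
    keys = zmalloc(sizeof(int) * (num + 1));
    for (i = 0; i < num; i++) keys[i] = 3 + i;  /* Source keys. */
    keys[num] = 1;                              /* Destination (STORE) key. */
    *numkeys = num + 1;
    return keys;
}
```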
-
- 07 Feb, 2014 1 commit
-
-
antirez authored
-
- 03 Feb, 2014 1 commit
-
-
antirez authored
Keys expiring in the middle of the execution of Lua scripts can create inconsistencies in masters and / or AOF files. See the following example:

if redis.call("exists",KEYS[1]) == 1 then
  redis.call("incr","mycounter")
end

if redis.call("exists",KEYS[1]) == 1 then
  return redis.call("incr","mycounter")
end

The script executes the same *if key exists then increment counter* logic twice. However, the two executions will work differently in the master and the slaves, provided some unlucky timing happens. In the master, the first time the key may still exist, while the second time the key may no longer exist. This will result in the counter being incremented just once. However, as a side effect, the master will generate a synthetic `DEL` command in the replication channel in order to force the slaves to expire the key (given that key expiration is master-driven). When the same script runs on the slave, the key will no longer be there, so the script will not increment the counter at all. The key idea used to implement the expire-at-first-lookup semantics was provided by Marc Gravell.
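One way to obtain such script-consistent expiration is to freeze the reference time used by the expire check for the whole duration of a script. This is only a sketch of the idea, not necessarily the exact mechanism of this commit; server.lua_caller and server.lua_time_start are assumed to exist as in the Redis source, and 'when' is the key's stored expire time inside expireIfNeeded():

```c
/* Inside expireIfNeeded(): while a Lua script is running, use the time
 * sampled at script start, so every lookup performed by the same script
 * agrees on which keys are expired. */
mstime_t now = server.lua_caller ? server.lua_time_start : mstime();
if (now <= when) return 0;  /* Not expired from this caller's point of view. */
```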
-
- 10 Dec, 2013 1 commit
-
-
antirez authored
The Redis hash table implementation has many non-blocking features like incremental rehashing; however, while deleting a large hash table there was no way to have a callback called to do some incremental work. This commit adds this support, as an optional callback argument to dictEmpty() that is currently called at a fixed interval (one time every 65k deletions).
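A condensed sketch of the idea (the function name here is illustrative, and the interval is expressed as a bucket count; the real dict.c internals differ in detail):

```c
/* Destroy every entry of one hash table, invoking the optional callback
 * at a fixed interval so the caller can perform incremental work
 * (serving events, cron tasks) while a huge table is being emptied. */
static int dictClearWithCallback(dict *d, dictht *ht, void (callback)(void *)) {
    unsigned long i;

    for (i = 0; i < ht->size && ht->used > 0; i++) {
        dictEntry *he, *next;

        if (callback && (i & 65535) == 0) callback(d->privdata);
        for (he = ht->table[i]; he != NULL; he = next) {
            next = he->next;
            dictFreeKey(d, he);
            dictFreeVal(d, he);
            zfree(he);
            ht->used--;
        }
    }
    zfree(ht->table);
    return DICT_OK;
}
```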
-
- 05 Dec, 2013 1 commit
-
-
antirez authored
-
- 06 Nov, 2013 1 commit
-
-
antirez authored
This change makes Sentinel less fragile about a number of failure modes. This commit also fixes a different bug as a side effect: the SLAVEOF command was sent multiple times without incrementing the pending commands count.
-
- 05 Nov, 2013 6 commits
-
-
antirez authored
Thanks to @badboy for reporting it.
-
antirez authored
The previous implementation of SCAN parsed the cursor in the generic function implementing SCAN, SSCAN, HSCAN and ZSCAN. The actual higher-level command implementation only checked for empty keys and returned ASAP in that case. The result was that inverting the arguments of, for instance, SSCAN, and writing:

SSCAN 0 key

instead of:

SSCAN key 0

resulted in no error, since "0" is very likely a non-existing key name. The iterator just returned no elements at all. In order to fix this issue the code was refactored to extract the function that parses the cursor and returns an error if it is invalid. Every higher-level command implementation now parses the cursor first, and only later checks whether the key exists or not.
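A condensed sketch of the extracted helper (modeled on the description above; the error message text is illustrative):

```c
/* Parse the SCAN cursor before touching the key, so a swapped
 * "SSCAN 0 key" fails loudly instead of silently iterating nothing. */
int parseScanCursorOrReply(redisClient *c, robj *o, unsigned long *cursor) {
    char *eptr;

    /* strtoul() is used because the cursor is an *unsigned* long. */
    errno = 0;
    *cursor = strtoul(o->ptr, &eptr, 10);
    if (isspace(((char *)o->ptr)[0]) || eptr[0] != '\0' || errno == ERANGE) {
        addReplyError(c, "invalid cursor");
        return REDIS_ERR;
    }
    return REDIS_OK;
}
```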
-
antirez authored
The previous implementation assumed that the first call always happens with the cursor set to 0; this may not be the case, and we want to return 0 anyway, otherwise the (broken) client code will loop forever.
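A sketch of the equivalent check at the top of the command (the real code may use a precomputed shared reply object instead of building it inline):

```c
/* If the key is missing, reply as if the iteration is complete: cursor
 * "0" plus an empty list, whatever cursor the client passed in. */
if ((o = lookupKeyRead(c->db, c->argv[1])) == NULL) {
    addReplyMultiBulkLen(c, 2);
    addReplyBulkCString(c, "0");   /* Next cursor: 0 = iteration finished. */
    addReplyMultiBulkLen(c, 0);    /* No elements. */
    return;
}
```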
-
antirez authored
-
antirez authored
This fixes issues #1360 and #1362.
-
antirez authored
-
- 31 Oct, 2013 2 commits
- 28 Oct, 2013 2 commits
- 25 Oct, 2013 1 commit
-
-
antirez authored
-