- 01 Oct, 2015 12 commits
- 30 Sep, 2015 10 commits
-
antirez authored
-
antirez authored
After the introduction of the list of clients with pending writes, in order to process clients incrementally outside of the event loop, we also need to process the pending writes list.
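A minimal, self-contained sketch of the idea, not the actual Redis code: clients with queued replies sit on a list that is drained once per event-loop iteration, before blocking in the poll call. The names pending_clients and drain_pending_writes are hypothetical.

```c
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

typedef struct client {
    int fd;
    char buf[1024];        /* queued reply bytes */
    size_t buflen;         /* bytes still to send */
    struct client *next;
} client;

static client *pending_clients = NULL;   /* clients with pending writes */

/* Called once per event-loop iteration, before sleeping in poll/epoll:
 * try to flush each queued client directly. */
void drain_pending_writes(void) {
    client *c = pending_clients;
    pending_clients = NULL;
    while (c) {
        client *next = c->next;
        ssize_t n = write(c->fd, c->buf, c->buflen);
        if (n > 0) {
            memmove(c->buf, c->buf + n, c->buflen - (size_t)n);
            c->buflen -= (size_t)n;
        }
        /* If buflen is still non-zero here, the real code would fall
         * back to installing a writable event handler for this client. */
        c = next;
    }
}
```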
-
antirez authored
-
antirez authored
-
antirez authored
May potentially improve locality... it is not exactly clear whether this makes a difference or not, but for sure it is harmless.
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
The code was broken and resulted in redis-cli --pipe, most of the time, writing everything received on the standard input to the Redis connection socket without ever reading back the replies, until all the content to write was written. This meant that Redis had to accumulate all the output in the output buffers of the client, consuming a lot of memory. Fixed thanks to the original report of anomalies in the behavior provided by Twitter user @fsaintjacques.
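A hedged sketch of the intended interleaving, assuming a plain select() loop rather than the actual redis-cli internals (pipe_loop is a hypothetical name): replies are drained while input is still being written, so the server's output buffers stay small.

```c
#include <sys/select.h>
#include <unistd.h>

void pipe_loop(int sock) {
    int stdin_open = 1;
    char buf[4096];
    /* Loop until stdin is exhausted; the real redis-cli also tracks
     * how many replies are still owed and keeps reading until then. */
    while (stdin_open) {
        fd_set rfds, wfds;
        FD_ZERO(&rfds); FD_ZERO(&wfds);
        FD_SET(sock, &rfds);              /* always willing to read replies */
        FD_SET(STDIN_FILENO, &rfds);
        FD_SET(sock, &wfds);
        int maxfd = sock > STDIN_FILENO ? sock : STDIN_FILENO;
        if (select(maxfd + 1, &rfds, &wfds, NULL, NULL) <= 0) break;

        /* Reading replies has priority: this is exactly what the
         * broken code failed to do while input was still pending. */
        if (FD_ISSET(sock, &rfds)) {
            if (read(sock, buf, sizeof(buf)) <= 0) break;
        }
        if (FD_ISSET(STDIN_FILENO, &rfds) && FD_ISSET(sock, &wfds)) {
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
            if (n <= 0) { stdin_open = 0; continue; }
            if (write(sock, buf, (size_t)n) < 0) break;
        }
    }
}
```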
-
- 29 Sep, 2015 1 commit
-
antirez authored
-
- 15 Sep, 2015 1 commit
-
antirez authored
The HINCRBY* tests later used the value "tmp", which was sometimes generated by the random key generation function. The result was overwriting what Tcl expected to be inside Redis with another value, causing the next HSTRLEN test to fail.
-
- 14 Sep, 2015 4 commits
-
antirez authored
GEORADIUS works by computing the center + neighbor squares covering the whole area of the specified position and radius. A distance filter is then used to remove elements that are actually outside the range. When a huge radius is used, like 5000 km or more, adjacent neighbors may collide and be the same, leading to the same element being reported multiple times. This only happens in the edge case of a huge radius, but it is not ideal. A robust but slow solution would involve qsorting the range to remove all the duplicates. However, since the collisions only occur between adjacent boxes, given the way they are ordered in the code, it is much faster to just check whether the current box is the same as the previous one processed. This commit adds a regression test for the bug. Fixes #2767.
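A simplified illustration of the adjacent-duplicate check described above; GeoHashBits is declared locally here as a stand-in for the structure in geohash.h, and process_boxes is a hypothetical driver.

```c
#include <stdint.h>

typedef struct {
    uint64_t bits;
    uint8_t step;
} GeoHashBits;

int process_boxes(GeoHashBits *boxes, int count) {
    int matched = 0;
    GeoHashBits prev = { 0, 0 };
    for (int i = 0; i < count; i++) {
        /* Skip a box identical to the one just processed: colliding
         * boxes are adjacent in this ordering, so no sort is needed. */
        if (i > 0 && boxes[i].bits == prev.bits && boxes[i].step == prev.step)
            continue;
        prev = boxes[i];
        /* ... fetch members inside boxes[i], apply the distance filter ... */
        matched++;
    }
    return matched;
}
```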
-
antirez authored
Related to #2765.
-
antirez authored
getExpire() returns -1 when no expire exists. Related to #2765.
-
antirez authored
MOVE was not able to move the TTL: when a key was moved to a different database number, it became persistent, as if PERSIST had been used. In some incredible way (I guess almost nobody uses Redis MOVE) this bug remained unnoticed inside Redis internals for many years. Finally Andy Grunwald discovered it and opened an issue. This commit fixes the bug and adds a regression test. Close #2765.
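A reduced sketch of the shape of the fix, written against the internal helpers named in these commits (getExpire()/setExpire()); moveKeyWithTTL is a hypothetical wrapper, and the real MOVE implementation does more bookkeeping.

```c
/* Move a key between databases while preserving its TTL. */
void moveKeyWithTTL(redisDb *src, redisDb *dst, robj *key, robj *val) {
    long long expire = getExpire(src, key);        /* -1 when no expire exists */
    incrRefCount(val);                             /* dst now references val too */
    dbAdd(dst, key, val);
    if (expire != -1) setExpire(dst, key, expire); /* the part MOVE was missing */
    dbDelete(src, key);                            /* drops the src reference */
}
```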
-
- 08 Sep, 2015 2 commits
-
antirez authored
-
Salvatore Sanfilippo authored
Fix redis-sentinel crash when the CKQUORUM command is executed without args
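The class of fix, sketched: a subcommand that reads argv[2] must validate argc first. The handler below is a simplified stand-in, not the actual Sentinel source.

```c
/* Reject the subcommand before touching c->argv[2]. */
void ckquorumSketch(client *c) {
    if (c->argc != 3) {
        addReplyError(c, "wrong number of arguments for 'ckquorum'");
        return;
    }
    /* only now is c->argv[2] (the master name) safe to dereference */
}
```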
-
- 07 Sep, 2015 4 commits
-
antirez authored
As Oran Agra suggested, in startBgsaveForReplication(), when the BGSAVE attempt returns an error, we scan the list of slaves in order to remove them, since there is no way to serve them currently. However we check for the replication state WAIT_BGSAVE_START, which was modified by rdbSaveToSlavesSockets() before forking. So when the fork fails, the slaves remain in the WAIT_BGSAVE_END state and no cleanup is performed. This commit fixes the problem by making rdbSaveToSlavesSockets() able to undo the state change on fork failure.
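A reduced sketch of the undo logic, assuming the state constants of the Redis source of this era (SLAVE_STATE_WAIT_BGSAVE_*); the fragment is simplified from the adlist-based loop in the replication code.

```c
/* On fork() failure, revert the slaves that were flipped to
 * WAIT_BGSAVE_END back to WAIT_BGSAVE_START, so the caller's error
 * path still finds them in the state it scans for. */
if ((childpid = fork()) == -1) {
    listIter li;
    listNode *ln;
    listRewind(server.slaves, &li);
    while ((ln = listNext(&li))) {
        client *slave = ln->value;
        if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END)
            slave->replstate = SLAVE_STATE_WAIT_BGSAVE_START;   /* undo */
    }
    return C_ERR;   /* cleanup now proceeds as expected */
}
```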
-
Salvatore Sanfilippo authored
SCAN cursor parsing changed from atoi to strtoull
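Why strtoull matters here: SCAN cursors are unsigned 64-bit values, and atoi() silently truncates or overflows on large ones. A self-contained sketch of range-checked parsing follows; parse_scan_cursor is a hypothetical helper name.

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Returns 0 on success, -1 on invalid input. */
int parse_scan_cursor(const char *s, uint64_t *cursor) {
    char *end;
    errno = 0;
    unsigned long long v = strtoull(s, &end, 10);
    if (errno == ERANGE || end == s || *end != '\0') return -1;
    /* note: strtoull still accepts a leading '-' by wrapping; a
     * stricter parser would reject that explicitly */
    *cursor = (uint64_t)v;
    return 0;
}
```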
-
ubuntu authored
-
antirez authored
This additional info may provide more clues about the test failing randomly from time to time. Probably the failure is due to some previous test overwriting the logical content of the Tcl variable, but this will make the problem more obvious.
-
- 21 Aug, 2015 1 commit
-
antirez authored
-
- 20 Aug, 2015 1 commit
-
antirez authored
Before this commit, after triggering a BGSAVE, it was up to the caller of startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in order to update them accordingly. However, when the replication target is the socket, this is not possible, since the process of updating the slaves and sending the FULLRESYNC reply must be coupled with the process of starting an RDB save (the reason is, we need to send the FULLRESYNC reply and spawn a child that will start to send RDB data to the slaves ASAP).

This commit moves the responsibility of handling slaves in WAIT_BGSAVE_START to startBgsaveForReplication(), so that for both diskless and disk-based replication we have the same chain of responsibility. To accommodate such a change, syncCommand() also needs to put the client in the slave list ASAP (just after the initial checks) and not at the end, so that startBgsaveForReplication() can find the new slave already in the list.

Another related change is what happens if the BGSAVE fails because of fork() or other errors: we now remove the slave from the list of slaves, send an error, and schedule the slave connection to be terminated.

As a side effect of this change, the following errors found by Oran Agra are fixed (thanks!):

1. rdbSaveToSlavesSockets() on a failed fork will get the slaves cleaned up, otherwise they remain in a wrong state forever, since we set them up for full resync before actually trying to fork.

2. updateSlavesWaitingBgsave() with the replication target set as "socket" was broken, since the function changed the slaves' state from WAIT_BGSAVE_START to WAIT_BGSAVE_END via replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets() would not find any slave in the right state (WAIT_BGSAVE_START) to feed.
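A heavily reduced sketch of the new chain of responsibility described above; startTheBgsave() is a hypothetical stand-in for the disk/socket save attempt, while the other names echo the functions mentioned in the message.

```c
int startBgsaveForReplicationSketch(void) {
    int retval = startTheBgsave();   /* hypothetical: disk or socket target */
    listIter li;
    listNode *ln;
    /* The same function that attempts the BGSAVE also walks slaves in
     * WAIT_BGSAVE_START, for both disk and diskless targets. */
    listRewind(server.slaves, &li);
    while ((ln = listNext(&li))) {
        client *slave = ln->value;
        if (slave->replstate != SLAVE_STATE_WAIT_BGSAVE_START) continue;
        if (retval == C_OK) {
            /* move the slave to WAIT_BGSAVE_END and send +FULLRESYNC */
            replicationSetupSlaveForFullResync(slave, getPsyncInitialOffset());
        } else {
            /* failed fork or other error: drop the slave */
            listDelNode(server.slaves, ln);
            addReplyError(slave, "BGSAVE failed, replication can't continue");
            slave->flags |= CLIENT_CLOSE_AFTER_REPLY;
        }
    }
    return retval;
}
```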
-
- 07 Aug, 2015 1 commit
-
antirez authored
It is simpler if removing the read event handler from the FD is up to slaveTryPartialResynchronization(); after all, it is only called in the context of syncWithMaster(). This commit also makes sure that on error all the event handlers are removed from the socket before closing it.
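The handler ownership, sketched with the real ae.c API (aeDeleteFileEvent); the surrounding syncWithMaster() flow is omitted.

```c
/* In slaveTryPartialResynchronization(), once the PSYNC reply has
 * been read, the readable handler is no longer needed: */
aeDeleteFileEvent(server.el, fd, AE_READABLE);

/* On error paths, drop every handler before closing the socket: */
aeDeleteFileEvent(server.el, fd, AE_READABLE | AE_WRITABLE);
close(fd);
```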
-
- 06 Aug, 2015 3 commits
-
antirez authored
-
antirez authored
Talking with @oranagra, we had to reason a little bit to understand whether this function could ever flush the output buffers of the wrong slaves: slaves that have online state but are not actually ready to receive writes before the first ACK is received from them (this happens with diskless replication). Next time we'll just read this comment.
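A sketch of the subtlety the comment documents, assuming the field and helper names of the replication code of this era (repl_put_online_on_ack, clientHasPendingReplies(), writeToClient()); the loop is simplified.

```c
/* ONLINE state alone is not enough: with diskless replication a slave
 * is flagged online before its first ACK arrives, and must not be
 * written to yet. */
listIter li;
listNode *ln;
listRewind(server.slaves, &li);
while ((ln = listNext(&li))) {
    client *slave = ln->value;
    if (slave->replstate == SLAVE_STATE_ONLINE &&
        !slave->repl_put_online_on_ack &&
        clientHasPendingReplies(slave))
    {
        writeToClient(slave->fd, slave, 0);   /* safe: truly online */
    }
}
```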
-
antirez authored
-