- 30 Oct, 2015 10 commits
-
-
antirez authored
Make sure to flush the AOF output buffers before reloading. Result: fewer timing-related false positives in AOF tests.
-
antirez authored
Currently this feature is only accessible via DEBUG for testing, since otherwise, depending on the instance configuration, a given script would either work or be broken, which is against the Redis philosophy.
-
antirez authored
-
antirez authored
This commit also inverts two stanzas of the code just because they are more logical that way, not because it currently makes any difference.
-
antirez authored
-
antirez authored
-
antirez authored
By calling redis.replicate_commands(), the scripting engine of Redis switches to commands replication instead of replicating whole scripts. This is useful when the script execution is costly but only results in a few writes performed to the dataset. Moreover, in this mode it is possible to call functions with side effects freely, since the script execution does not need to be deterministic: we capture the outcome from the point of view of changes to the dataset anyway. In this mode math.random() returns different sequences at every call. If redis.replicate_commands() is called after the script has already performed writes, the call returns false and the script sticks to whole-script replication instead.
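A minimal sketch of the intended usage, assuming the redis-py client and an arbitrary key name (both are illustrative, not part of the commit): the script switches to effects replication before doing a non-deterministic computation, so only the resulting SET is propagated.

    # illustrative sketch, assuming the redis-py client (not part of the commit)
    import redis

    r = redis.Redis()
    script = """
    if redis.replicate_commands() then                 -- must be called before any write
        local x = 0
        for i = 1, 1000 do x = x + math.random() end   -- side effects are fine in this mode
        redis.call('SET', KEYS[1], tostring(x))        -- only this write is replicated
        return 1
    end
    return 0  -- called too late: the script sticks to whole-script replication
    """
    print(r.eval(script, 1, "mykey"))

Note that, as the commit above mentions, at this point the feature is only reachable via DEBUG for testing.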
-
antirez authored
-
antirez authored
-
antirez authored
Sometimes it can be useful for clients to completely disable replies from the Redis server, for example when the client sends fire-and-forget commands or performs a mass loading of data, or in caching contexts where new data is streamed constantly. In such contexts, using server time and bandwidth to send back replies that the client is going to ignore is a waste.

Multiple mechanisms could implement such a feature. For example it could be a feature of MULTI/EXEC, a command prefix such as "NOREPLY SADD myset foo", or a mechanism to switch replies on/off via the CLIENT command. The MULTI/EXEC approach has the problem that transactions are not strictly part of the no-reply semantics, and if we want to insert a lot of data in a bulk way, creating a huge MULTI/EXEC transaction in the server memory is bad. The prefix is the best fit for this specific use case, since it does not allow desynchronizations and is semantically clear; however Redis internals and client libraries are currently not prepared to handle it.

So the implementation uses the CLIENT command, providing a new REPLY subcommand with three options:

CLIENT REPLY OFF disables the replies, and does not reply itself.
CLIENT REPLY ON re-enables the replies, replying +OK.
CLIENT REPLY SKIP discards only the reply of the next command, and like OFF does not reply anything itself.

The reason to add the SKIP option is that it provides an easy way to send a conceptually "single" command that does not need a reply, as the sum of two pipelined commands:

CLIENT REPLY SKIP
SET key value

Note that CLIENT REPLY ON replies with +OK, so it should be used when sending multiple commands that don't need a reply. However, since it replies with +OK, the client can check that the connection is still active and that all the previous commands were received.

This is currently only in Redis "unstable", so the proposal may be modified or abandoned based on user input.
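As an illustration, here is a sketch using a raw socket and the inline command protocol (host, port, and key names are assumptions, not part of the commit): a bulk of fire-and-forget writes is sent with replies disabled, and replies are re-enabled at the end so the single +OK confirms the connection is still alive.

    # illustrative sketch: raw socket, inline command protocol; host/port/keys are assumptions
    import socket

    s = socket.create_connection(("localhost", 6379))
    payload = b"CLIENT REPLY OFF\r\n"                               # no reply is sent for this
    for i in range(1000):
        payload += ("SET key:%d value:%d\r\n" % (i, i)).encode()    # no replies for these either
    payload += b"CLIENT REPLY ON\r\n"                               # replies +OK, and only this
    s.sendall(payload)
    print(s.recv(64))                                               # expect b'+OK\r\n' and nothing else
    s.close()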
-
- 15 Oct, 2015 6 commits
-
-
David Thomson authored
-
David Thomson authored
-
antirez authored
-
antirez authored
-
antirez authored
This new function is able to restart the server "in place": the current Redis process re-executes the same executable it was started with, using the same arguments and configuration file.
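As a rough illustration of the restart-in-place idea (not the Redis C implementation), a process can replace its own image by re-executing its binary with the original arguments:

    # illustrative sketch of restart-in-place (not the Redis C implementation)
    import os, sys

    def restart_in_place():
        # re-execute the same interpreter/script with the original arguments;
        # the process image is replaced while the PID stays the same
        os.execv(sys.executable, [sys.executable] + sys.argv)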
-
antirez authored
Kinda related to #2770.
-
- 09 Oct, 2015 1 commit
-
-
antirez authored
The check for valid lat/long ranges was performed inside the for loop, two times instead of one, and the first time before the second element of the array, xy[1], was populated. This resulted in issue #2799. Closes #2799.
-
- 06 Oct, 2015 1 commit
-
-
antirez authored
-
- 01 Oct, 2015 9 commits
-
-
antirez authored
-
antirez authored
After the introduction of the list of clients with pending writes, in order to process clients incrementally outside of the event loop, we also need to process the pending writes list.
-
antirez authored
-
antirez authored
-
antirez authored
May potentially improve locality... it is not exactly clear whether this makes a difference or not, but it is certainly harmless.
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
- 30 Sep, 2015 1 commit
-
-
antirez authored
The code was broken and resulted in redis-cli --pipe, most of the time, writing everything received on standard input to the Redis connection socket without ever reading back the replies, until all the content to write had been written. This meant that Redis had to accumulate all the output in the output buffers of the client, consuming a lot of memory. Fixed thanks to the original report of anomalies in the behavior provided by Twitter user @fsaintjacques.
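The fix amounts to interleaving reads and writes on the connection; a rough sketch of the idea (in Python rather than the redis-cli C code, with the termination handling elided):

    # illustrative sketch of interleaved write/read during mass insertion
    # (not the redis-cli C code); draining replies keeps the server's
    # per-client output buffer from growing unbounded
    import select, socket

    def pipe_to_redis(payload, host="localhost", port=6379):
        s = socket.create_connection((host, port))
        s.setblocking(False)
        sent = 0
        while sent < len(payload):
            readable, writable, _ = select.select([s], [s], [])
            if readable:
                s.recv(65536)                    # drain replies as they arrive
            if writable:
                sent += s.send(payload[sent:])   # keep writing the remaining commands
        # (redis-cli also keeps reading until a final marker reply is seen)
        s.close()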
-
- 14 Sep, 2015 3 commits
-
-
antirez authored
GEORADIUS works by computing the center plus the neighbor squares covering all the area of the specified position and radius. Then a distance filter is used to remove elements which are actually outside the range. When a huge radius is used, like 5000 km or more, adjacent neighbors may collide and be the same, leading to the same element being reported multiple times. This only happens in the edge case of a huge radius, but it is not ideal. A robust but slow solution would involve sorting the range to remove all the duplicates. However, since the collisions only occur between adjacent boxes, given the way they are ordered in the code, it is much faster to just check whether the current box is the same as the previous one processed. This commit adds a regression test for the bug. Fixes #2767.
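An illustrative sketch of the dedup idea (not the Redis C code): because colliding boxes appear adjacent in processing order, comparing each box with the previously processed one is enough, with no sorting needed.

    # illustrative sketch of the adjacent-duplicate check (not the Redis C code)
    def dedup_adjacent(boxes):
        last = None
        for box in boxes:
            if box == last:      # same square as the one just processed: skip it
                continue
            last = box
            yield box

    # with a huge radius, the center and some neighbors can collapse to the same square:
    print(list(dedup_adjacent([(5, 3), (5, 3), (5, 4), (5, 4), (6, 3)])))
    # -> [(5, 3), (5, 4), (6, 3)]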
-
antirez authored
getExpire() returns -1 when no expire exists. Related to #2765.
-
antirez authored
MOVE was not able to move the TTL: when a key was moved into a different database number, it became persistent, as if PERSIST had been used. In some incredible way (I guess almost nobody uses Redis MOVE) this bug remained unnoticed inside Redis internals for many years. Finally Andy Grunwald discovered it and opened an issue. This commit fixes the bug and adds a regression test. Closes #2765.
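A small check of the fixed behavior (a sketch assuming the redis-py client; the key name and database numbers are arbitrary and not from the commit):

    # illustrative check that MOVE preserves the TTL after the fix
    # (assumes the redis-py client; key name and DB numbers are arbitrary)
    import redis

    r0 = redis.Redis(db=0)
    r1 = redis.Redis(db=1)
    r0.set("mykey", "hello", ex=100)   # key with a 100 second TTL in DB 0
    r0.move("mykey", 1)                # move it to DB 1
    print(r1.ttl("mykey"))             # expected: ~100, not -1 (persistent) as before the fix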
-
- 08 Sep, 2015 1 commit
-
-
antirez authored
-
- 07 Sep, 2015 2 commits
-
-
antirez authored
As Oran Agra suggested, in startBgsaveForReplication(), when the BGSAVE attempt returns an error we scan the list of slaves in order to remove them, since there is no way to serve them currently. However we check for the replication state BGSAVE_START, which was modified by rdbSaveToSlavesSockets() before forking. So when fork fails, the state of the slaves remains BGSAVE_END and no cleanup is performed. This commit fixes the problem by making rdbSaveToSlavesSockets() able to undo the state change on fork failure.
-
ubuntu authored
-
- 21 Aug, 2015 1 commit
-
-
antirez authored
-
- 20 Aug, 2015 1 commit
-
-
antirez authored
Before this commit, after triggering a BGSAVE it was up to the caller of startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in order to update them accordingly. However when the replication target is the socket, this is not possible, since the process of updating the slaves and sending the FULLRESYNC reply must be coupled with the process of starting an RDB save (the reason is, we need to send the FULLRESYNC reply and spawn a child that will start to send RDB data to the slaves ASAP).

This commit moves the responsibility of handling slaves in WAIT_BGSAVE_START to startBgsaveForReplication(), so that for both diskless and disk-based replication we have the same chain of responsibility. In order to accommodate this change, syncCommand() also needs to put the client in the slave list ASAP (just after the initial checks) and not at the end, so that startBgsaveForReplication() can find the new slave already in the list.

Another related change is what happens if the BGSAVE fails because of fork() or other errors: we now remove the slave from the list of slaves and send an error, scheduling the slave connection to be terminated.

As a side effect of this change the following errors found by Oran Agra are fixed (thanks!):

1. rdbSaveToSlavesSockets() on failed fork will get the slaves cleaned up, otherwise they remain in a wrong state forever, since we set them up for full resync before actually trying to fork.

2. updateSlavesWaitingBgsave() with the replication target set to "socket" was broken, since the function changed the slaves state from WAIT_BGSAVE_START to WAIT_BGSAVE_END via replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets() would not find any slave in the right state (WAIT_BGSAVE_START) to feed.
-
- 07 Aug, 2015 1 commit
-
-
antirez authored
It is simpler if removing the read event handler from the FD is up to slaveTryPartialResynchronization(); after all, it is only called in the context of syncWithMaster(). This commit also makes sure that on error all the event handlers are removed from the socket before closing it.
-
- 06 Aug, 2015 3 commits
-
-
antirez authored
-
antirez authored
Talking with @oranagra, we had to reason a little bit to understand whether this function could ever flush the output buffers of the wrong slaves: slaves that have online state but are actually not yet ready to receive writes before the first ACK is received from them (this happens with diskless replication). Next time we'll just read this comment.
-
antirez authored
-