- 05 Jul, 2013 1 commit
-
-
antirez authored
-
- 04 Jul, 2013 1 commit
-
-
antirez authored
-
- 02 Jul, 2013 1 commit
-
-
antirez authored
-
- 28 Jun, 2013 1 commit
-
-
antirez authored
-
- 27 Jun, 2013 1 commit
-
-
antirez authored
-
- 24 Jun, 2013 3 commits
-
-
antirez authored
It should be called just once at startup, and not every time the Lua scripting engine is re-initialized; otherwise memory is leaked.
-
antirez authored
This commit uses the Replication Script Cache in order to avoid translating EVALSHA into EVAL whenever possible for both the AOF and slaves.
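A sketch of the resulting decision, with hypothetical helper names (the cache itself is described in the next entry below):

```c
/* Hypothetical helpers; the cache tracks which script bodies every
 * replica is already guaranteed to have. */
int  replCacheContains(const char *sha);
void replCacheAdd(const char *sha);

/* Decide how a script call should reach the AOF and the slaves. */
const char *scriptPropagationCommand(const char *sha) {
    if (replCacheContains(sha))
        return "EVALSHA";   /* safe: all slaves know the script body */
    replCacheAdd(sha);      /* after this full EVAL, they will */
    return "EVAL";          /* translate: ship the body this one time */
}
```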
-
antirez authored
This code is only responsible for maintaining an LRU-evicted, fixed-length cache of SHA1 digests that we are sure all the slaves have received. In this commit only the implementation is provided; the Redis core does not yet use it to actually send EVALSHA to slaves when possible.
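A minimal self-contained sketch of such a cache, with illustrative names and a deliberately tiny capacity (the real code uses Redis' own dict and list types and a much larger cap):

```c
#include <string.h>

#define CACHE_SIZE 4      /* illustrative; the real cap is far larger */
#define SHA1_HEX 40

/* Fixed-capacity cache of SHA1 hex digests kept in recency order:
 * slot 0 is the most recently used, the last slot is the eviction
 * victim. */
static char cache[CACHE_SIZE][SHA1_HEX + 1];
static int  cache_len = 0;

int replCacheContains(const char *sha) {
    for (int i = 0; i < cache_len; i++) {
        if (memcmp(cache[i], sha, SHA1_HEX) == 0) {
            /* Refresh: move the hit to the front. */
            char tmp[SHA1_HEX + 1];
            memcpy(tmp, cache[i], sizeof(tmp));
            memmove(cache[1], cache[0], sizeof(cache[0]) * i);
            memcpy(cache[0], tmp, sizeof(tmp));
            return 1;
        }
    }
    return 0;
}

void replCacheAdd(const char *sha) {
    if (replCacheContains(sha)) return;
    if (cache_len < CACHE_SIZE) cache_len++;
    /* Shift everything down one slot, dropping the LRU tail if full. */
    memmove(cache[1], cache[0], sizeof(cache[0]) * (cache_len - 1));
    memcpy(cache[0], sha, SHA1_HEX);
    cache[0][SHA1_HEX] = '\0';
}
```

Keeping entries in recency order makes eviction trivial: the tail slot is always the victim.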
-
- 21 Jun, 2013 1 commit
-
-
antirez authored
The old REDIS_CMD_FORCE_REPLICATION flag was removed from the implementation of Redis. There is now a new API to force specific executions of a command to be propagated to the AOF / replication link:
void forceCommandPropagation(int flags);
The new API is also compatible with Lua scripting, so a script that executes commands that are forced to be propagated will itself be propagated accordingly, even if it performs no change to the data. As a side effect, this new design fixes the issue of scripts being unable to propagate PUBLISH to slaves (issue #873).
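A sketch of how a command such as PUBLISH might use the hook; the flag names and surrounding code are illustrative, not the actual Redis internals:

```c
/* Illustrative flag names for the two propagation targets. */
#define PROPAGATE_AOF  (1<<0)
#define PROPAGATE_REPL (1<<1)

void forceCommandPropagation(int flags);   /* the new API from this commit */

void publishCommandSketch(void) {
    /* ... deliver the message to local subscribers ... */

    /* PUBLISH changes no keyspace data, so by default it would not be
     * propagated when executed from a Lua script. Force it, so
     * subscribers attached to slaves see the message too. */
    forceCommandPropagation(PROPAGATE_AOF | PROPAGATE_REPL);
}
```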
-
- 20 Jun, 2013 1 commit
-
-
antirez authored
Currently it implements three subcommands:

PUBSUB CHANNELS [<pattern>]    List channels with non-zero subscribers.
PUBSUB NUMSUB [channel_1 ...]  List the number of subscribers for the specified channels.
PUBSUB NUMPAT                  Return the number of subscribed patterns.
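An illustrative session (channel names and counts are invented):

```
> PUBSUB CHANNELS news.*
1) "news.tech"
2) "news.sport"
> PUBSUB NUMSUB news.tech news.sport
1) "news.tech"
2) (integer) 3
3) "news.sport"
4) (integer) 1
> PUBSUB NUMPAT
(integer) 2
```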
-
- 30 May, 2013 3 commits
-
-
antirez authored
When the min-slaves-to-write feature is active, this field reports the number of slaves considered good (online state, lag within the specified range).
-
antirez authored
I guess I needed another coffee...
-
antirez authored
This feature allows the user to specify the minimum number of connected replicas, with a lag less than or equal to the specified number of seconds, that must be present for writes to be accepted.
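In redis.conf this corresponds to the min-slaves-to-write / min-slaves-max-lag pair. A sketch of the acceptance check it implies, with illustrative names rather than the actual Redis internals:

```c
/* Count replicas that are online and not lagging too much, and refuse
 * writes when there are not enough of them. Parameter names are
 * illustrative. */
int writeCommandsAllowed(int min_slaves_to_write, int min_slaves_max_lag,
                         const int *slave_online, const int *slave_lag,
                         int num_slaves) {
    if (min_slaves_to_write == 0) return 1;   /* feature disabled */
    int good = 0;
    for (int i = 0; i < num_slaves; i++) {
        if (slave_online[i] && slave_lag[i] <= min_slaves_max_lag) good++;
    }
    return good >= min_slaves_to_write;
}
```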
-
- 29 May, 2013 2 commits
-
-
antirez authored
-
antirez authored
There is a new 'lag' field in the list of slaves, in the "replication" section of the INFO output. The format was also changed in a backward-incompatible way to make it easier to parse if new fields are added in the future: the new format is comma separated with named fields (no longer positional fields).
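An illustrative slave line in the new style (values invented; the exact field set may differ):

```
slave0:ip=127.0.0.1,port=6380,state=online,lag=0
```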
-
- 28 May, 2013 1 commit
-
-
antirez authored
-
- 27 May, 2013 2 commits
- 17 May, 2013 1 commit
-
-
YAMAMOTO Takashi authored
time_t is always 64-bit on recent versions of NetBSD.
-
- 15 May, 2013 1 commit
-
-
antirez authored
The logfile option was also modified to always have an explicit value, and to log to stdout when an empty string is used as the log file. Previously there was special handling of the string "stdout" that set the logfile to NULL, which always required special casing.
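Illustrative redis.conf usage under the new rule:

```
logfile ""                     # empty string: log to stdout
logfile "/var/log/redis.log"   # any other value: log to that file
```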
-
- 13 May, 2013 2 commits
- 09 May, 2013 1 commit
-
-
antirez authored
-
- 24 Apr, 2013 1 commit
-
-
antirez authored
-
- 19 Apr, 2013 1 commit
-
-
antirez authored
-
- 04 Apr, 2013 1 commit
-
-
antirez authored
-
- 02 Apr, 2013 1 commit
-
-
antirez authored
When a BGSAVE fails, Redis used to flood itself trying to BGSAVE at every next cron call, that is, either 10 or 100 times per second depending on configuration and server version. This commit does not allow a new automatic BGSAVE attempt to be performed before a few seconds delay (currently 5). This avoids both the auto-flood problem and filling the disk with logs at a serious rate. With the five seconds limit, and a log entry of about 200 bytes, the failure log will use less than 4 MB of disk space per day (86400 / 5 attempts x 200 bytes ≈ 3.5 MB), which is reasonable: the sysadmin should notice before any catastrophic event, especially since by default Redis stops serving write queries after the first failed BGSAVE. This fixes issue #849.
-
- 28 Mar, 2013 1 commit
-
-
antirez authored
-
- 27 Mar, 2013 1 commit
-
-
antirez authored
We need the ability to disable the activeExpireCycle() call (active collection of expired keys) for testing purposes.
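A sketch of the guard this implies (the field name is illustrative; in later Redis versions the toggle is exposed through the DEBUG SET-ACTIVE-EXPIRE subcommand):

```c
void activeExpireCycle(void);   /* the existing background expiry pass */

/* A server-level flag the test suite can flip off. The field name is
 * illustrative, not necessarily the real one. */
static struct { int active_expire_enabled; } server = { 1 };

void serverCronSketch(void) {
    if (server.active_expire_enabled)
        activeExpireCycle();    /* skipped entirely while disabled */
}
```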
-
- 26 Mar, 2013 2 commits
- 13 Mar, 2013 1 commit
-
-
antirez authored
server.repl_down_since used to be initialized to the current time at startup. This is wrong since replication never started: clients testing this field to check whether data is up to date should never believe the data is recent if we never connected to our master.
-
- 12 Mar, 2013 2 commits
-
-
Damian Janowski authored
This fixes cases where the RDB file does exist but can't be accessed for any reason, for instance when the Redis process doesn't have enough permissions on the file.
-
antirez authored
It was placed by mistake in initServer(), which is called after the configuration is already loaded, causing issue #1000.
-
- 11 Mar, 2013 3 commits
-
-
antirez authored
activeExpireCycle() tries to test just a few DBs per iteration so that it scales even if there are many configured DBs in the Redis instance. However, this commit makes it a bit smarter when a few of those DBs are under expiration pressure and there are many, many keys to expire. What we do is to remember whether the last iteration had to return because it ran out of time. In that case, the next iteration tests all the configured DBs, so that we are sure to test the DB under pressure again. Before this commit, after a mass-expire in a given DB the function tested just a few of the next DBs, possibly empty ones, a few per iteration, so it took a long time for the function to reach the DB under pressure again. This resulted in a lot of memory being used by already expired keys that were never accessed by clients.
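A sketch of the heuristic, with illustrative names:

```c
int ranOutOfTime(void);   /* hypothetical: has the time budget expired? */

#define DBS_PER_CALL 16   /* illustrative per-iteration DB budget */

/* Normally visit only a few DBs per call, but if the previous call hit
 * its time limit, visit them all so the DB under expiration pressure
 * is reached again immediately instead of many iterations later. */
void activeExpireCycleSketch(int dbnum) {
    static int timelimit_exit = 0;   /* did the last run time out? */
    int dbs_to_test = timelimit_exit ? dbnum : DBS_PER_CALL;
    if (dbs_to_test > dbnum) dbs_to_test = dbnum;

    timelimit_exit = 0;
    for (int i = 0; i < dbs_to_test; i++) {
        /* ... round-robin to the next DB and expire a batch of keys ... */
        if (ranOutOfTime()) {
            timelimit_exit = 1;   /* remembered for the next call */
            return;
        }
    }
}
```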
-
antirez authored
-
antirez authored
-
- 09 Mar, 2013 2 commits
- 08 Mar, 2013 1 commit
-
-
antirez authored
This small number of DBs is set to 16, so in the default configuration Redis behaves exactly as in the past. The difference is that when the user configures a very large number of DBs, we no longer perform an O(N) operation consuming a non-trivial amount of CPU per serverCron() iteration.
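The cap itself is a one-liner; a sketch with an illustrative constant name:

```c
#define CRON_DBS_PER_CALL 16   /* illustrative name for the fixed cap */

/* Never scan more than a fixed number of DBs per serverCron() call,
 * no matter how many DBs the user configured. */
int dbsToTestThisCron(int configured_dbs) {
    int n = CRON_DBS_PER_CALL;
    if (n > configured_dbs) n = configured_dbs;
    return n;
}
```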
-