- 21 Jun, 2014 1 commit
-
-
antirez authored
This commit adds peer ID caching in the client structure, plus an API change and the use of sdsMakeRoomFor() to improve the reallocation pattern when generating the CLIENT LIST output. Both changes account for a very significant speedup.
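A minimal sketch of the two ideas; the struct and the formatPeerId callback are illustrative, not the actual Redis definitions, while sdsMakeRoomFor() and sdscatprintf() are real sds.c functions:

    #include "sds.h"   /* sdsMakeRoomFor(), sdscatprintf() */

    typedef struct client {
        int fd;
        sds peerid;                 /* cached "ip:port"; NULL until first use */
    } client;

    sds catClientInfo(sds o, client *c, sds (*formatPeerId)(int fd)) {
        if (c->peerid == NULL)
            c->peerid = formatPeerId(c->fd);   /* compute once, reuse on every CLIENT LIST */
        o = sdsMakeRoomFor(o, 200);            /* grow the reply in one step, not per field */
        return sdscatprintf(o, "addr=%s fd=%d\n", c->peerid, c->fd);
    }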
-
- 09 Jun, 2014 1 commit
-
-
Matt Stancliff authored
Behrad Zari discovered [1] and Josiah reported [2]: if you block and wait for a list to exist, but the list is created by a non-push command, the blocked client never gets notified.

This commit moves notification of blocked clients into the DB layer and away from individual commands. Lists can be created by [LR]PUSH, SORT..STORE, RENAME, MOVE, and RESTORE. Previously, blocked client notifications were only triggered by [LR]PUSH, so your client would never get notified if a list were created by SORT..STORE, RENAME, RESTORE, etc.

Blocked client notification now happens in one unified place:
- dbAdd() triggers notification when adding a list to the DB

Two new tests are added that fail prior to this commit. All tests pass. Fixes #1668.

[1]: https://groups.google.com/forum/#!topic/redis-db/k4oWfMkN1NU
[2]: #1668
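A rough sketch of the unified notification point; the real dbAdd() in db.c has more bookkeeping, and signalListAsReady() is the existing helper that wakes clients blocked on a list key:

    void dbAdd(redisDb *db, robj *key, robj *val) {
        sds copy = sdsdup(key->ptr);
        int retval = dictAdd(db->dict, copy, val);

        redisAssertWithInfo(NULL, key, retval == DICT_OK);
        if (val->type == REDIS_LIST)
            signalListAsReady(db, key);   /* wake any client blocked on this key,
                                             no matter which command created the list */
    }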
-
- 22 May, 2014 1 commit
-
-
antirez authored
When we are blocked and a few events are processed from time to time, it is smarter to call the event handler a few times in order to handle the accept, read, write, close cycle of a client in a single pass; otherwise too much latency is added before clients receive a reply while the server is busy in some way (for example during DB loading).
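Sketched roughly, the idea is to drain a handful of ready file events in one pass while blocked; aeProcessEvents() and the AE_* flags are the real ae.c API, while the iteration count here is illustrative:

    void processEventsWhileBlocked(void) {
        int iterations = 4;   /* enough for an accept -> read -> write -> close cycle */
        while (iterations--) {
            if (!aeProcessEvents(server.el, AE_FILE_EVENTS|AE_DONT_WAIT)) break;
        }
    }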
-
- 18 Apr, 2014 2 commits
- 16 Apr, 2014 12 commits
-
-
antirez authored
Like ZCOUNT for lexicographical ranges.
-
antirez authored
-
antirez authored
PFDEBUG will be the interface for debugging tasks on a key containing an HLL object.
-
antirez authored
-
antirez authored
The new command allows getting a dump of the registers stored in a HyperLogLog data structure, for testing / debugging purposes.
-
antirez authored
Use both of Philippe Flajolet's initials instead of just "P".
-
antirez authored
-
antirez authored
Merge N HLL data structures by selecting the max value for every register M[i] among the set of HLLs.
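A self-contained sketch of the merge rule; the register count and the plain 8-bit register array are illustrative, not the actual HLL encoding:

    #include <stdint.h>

    #define HLL_REGISTERS 16384   /* assumed register count for the sketch */

    void hllMergeMax(uint8_t *dst, uint8_t **src, int numsrc) {
        for (int i = 0; i < HLL_REGISTERS; i++) {
            uint8_t max = dst[i];
            for (int j = 0; j < numsrc; j++)
                if (src[j][i] > max) max = src[j][i];
            dst[i] = max;   /* the union of HLLs is the per-register maximum */
        }
    }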
-
antirez authored
All the Redis functions that need to modify the string value of a key in a destructive way (APPEND, SETBIT, SETRANGE, ...) require making the object unshared (if refcount > 1) and encoded in raw format (if the encoding is not already REDIS_ENCODING_RAW). This logic was cut & pasted in multiple places of the code. This commit moves the small amount of logic needed into a function called dbUnshareStringValue().
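A hedged sketch of what such a helper looks like; dbUnshareStringValue() is the name given above, while the body and the surrounding helper calls are my reconstruction:

    robj *dbUnshareStringValue(redisDb *db, robj *key, robj *o) {
        redisAssert(o->type == REDIS_STRING);
        if (o->refcount != 1 || o->encoding != REDIS_ENCODING_RAW) {
            robj *decoded = getDecodedObject(o);
            o = createRawStringObject(decoded->ptr, sdslen(decoded->ptr));
            decrRefCount(decoded);
            dbOverwrite(db, key, o);   /* replace the shared/encoded object in the DB */
        }
        return o;
    }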
-
antirez authored
-
antirez authored
-
antirez authored
Testing the set/get macros for the bitfield array of counters from the Redis Tcl suite is hard, so a specialized command able to exercise the internals was developed.
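For reference, a self-contained illustration of the kind of packed bitfield such macros manipulate (6-bit counters in a byte array; this is not the actual Redis code):

    #include <stdint.h>

    #define REG_BITS 6
    #define REG_MAX  ((1 << REG_BITS) - 1)

    /* Read the i-th 6-bit counter out of a packed byte array. */
    static int getRegister(const uint8_t *regs, int i) {
        int bit = i * REG_BITS;
        int byte = bit / 8, shift = bit % 8;
        int val = regs[byte] >> shift;
        if (shift > 8 - REG_BITS)                 /* counter spans two bytes */
            val |= regs[byte + 1] << (8 - shift);
        return val & REG_MAX;
    }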
-
- 25 Mar, 2014 2 commits
-
-
Matt Stancliff authored
Also update the original REDIS_EVENTLOOP_FDSET_INCR to include REDIS_MIN_RESERVED_FDS. REDIS_EVENTLOOP_FDSET_INCR exists to make sure more than (maxclients+RESERVED) entries are allocated, but we can only guarantee that if we include the current value of REDIS_MIN_RESERVED_FDS as a minimum for the INCR size.
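An illustrative sketch of the relationship; the numeric values are assumptions, not taken from the commit:

    #define REDIS_MIN_RESERVED_FDS 32                               /* log file, listening sockets, ... */
    #define REDIS_EVENTLOOP_FDSET_INCR (REDIS_MIN_RESERVED_FDS+96)  /* event loop slots beyond maxclients */

With the INCR defined on top of the reserved-FD minimum, the event loop is created with maxclients + REDIS_EVENTLOOP_FDSET_INCR slots, so the reserved descriptors always fit.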
-
Matt Stancliff authored
Everywhere in the Redis code base, maxclients is treated as an int with (int)maxclients or `maxclients = atoi(source)`, so let's make maxclients an int. This fixes a bug where someone could specify a negative maxclients on startup and it would work (as well as set maxclients very high) because:

    unsigned int maxclients;
    char *update = "-300";
    maxclients = atoi(update);
    if (maxclients < 1) goto fail;

But, (maxclients < 1) can only catch the case when maxclients is exactly 0. maxclients happily sets itself to -300, which isn't -300, but rather 4294966996, which isn't < 1, so... everything "worked."

maxclients config parsing checks for the case of < 1, but maxclients CONFIG SET parsing was checking for the case of < 0 (allowing maxclients to be set to 0). CONFIG SET parsing is now updated to match config parsing of < 1.

It's tempting to add a MINIMUM_CLIENTS define, but... I didn't.

These changes were inspired by antirez#356, but this doesn't fix that issue.
-
- 24 Mar, 2014 1 commit
-
-
antirez authored
Obtaining the RSS (Resident Set Size) info is slow in Linux and OSX, and this slowed down the generation of the INFO 'memory' section. Since the RSS does not need to be a real-time measurement, we now sample it at server.hz frequency (10 times per second by default) and use this value both to show the INFO rss field and to compute the fragmentation ratio. In practice this makes no difference for memory profiling of Redis but speeds up the INFO call significantly.
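A minimal sketch of the sampling pattern; zmalloc_get_rss() is the real zmalloc.c helper, while the surrounding names are illustrative and the cron function is assumed to be invoked server.hz times per second:

    #include <stddef.h>

    static size_t cached_rss = 0;                  /* refreshed server.hz times per second */

    void cronSampleRSS(void) {
        cached_rss = zmalloc_get_rss();            /* slow: /proc on Linux, task_info() on OSX */
    }

    float fragmentationRatio(size_t used_memory) {
        return (float)cached_rss / used_memory;    /* INFO reads the cached sample */
    }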
-
- 21 Mar, 2014 4 commits
-
-
antirez authored
Two spare bits inside the Redis object structure are now used to enlarge the range of the LRU field 4x. At the same time the resolution was improved from 10 seconds to 1 second: this still provides 194 days before the LRU counter overflows (restarting from zero). This is not a problem since it only causes a lack of eviction precision for objects not touched for a very long time, and the lack of precision is only temporary.
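The resulting object header, sketched with the field widths implied by the description; REDIS_LRU_BITS = 24 is an assumption consistent with the 194-day figure (2^24 seconds at 1 second resolution is about 194 days):

    #define REDIS_LRU_BITS 24

    typedef struct redisObject {
        unsigned type:4;
        unsigned encoding:4;
        unsigned lru:REDIS_LRU_BITS;   /* 1 s resolution: 2^24 s =~ 194 days before wrap */
        int refcount;
        void *ptr;
    } robj;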
-
antirez authored
The padding field was totally useless: removed.
-
antirez authored
-
antirez authored
Now CONFIG RESETSTAT makes sure to reset all the fields, and in the future it will be simpler to avoid missing new fields.
-
- 11 Mar, 2014 1 commit
-
-
zhanghailei authored
According to updateLRUClock's comment, REDIS_LRU_CLOCK_MAX is 22 bits, but #define REDIS_LRU_CLOCK_MAX ((1<<21)-1) is only 21 bits.
-
- 27 Feb, 2014 2 commits
-
-
antirez authored
It appears to work, but more stress testing, unit tests, and fuzz testing are needed to ensure the implementation is sane.
-
antirez authored
It appears to work, but more stress testing, unit tests, and fuzz testing are needed to ensure the implementation is sane.
-
- 13 Feb, 2014 1 commit
-
-
antirez authored
server.unixtime and server.mstime are cached, less precise timestamps that we use whenever we don't need an accurate time representation and a syscall would be too slow for the number of calls we require. One example is the initialization and update of the client's last interaction time, which is used for timeouts. However, rdbLoad() can take some time to load the DB, and it did not update the cached time during loading. This resulted in the bug described in issue #1535, where in the replication process the slave loads the DB and creates the redisClient representation of its master, but the timestamp is so old that the master, under certain conditions, is sensed as already "timed out". Thanks to @yoav-steinberg and Redis Labs Inc for the bug report and analysis.
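A hedged sketch of the fix's shape: refresh the cached clock from the code path that runs periodically during RDB loading. The function name here is illustrative; server.unixtime and server.mstime are the cached fields named above, and mstime() is the real millisecond-time helper:

    #include <time.h>

    void rdbLoadProgressTick(void) {
        server.unixtime = time(NULL);   /* keep lastinteraction timestamps from going stale */
        server.mstime = mstime();
    }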
-
- 12 Feb, 2014 1 commit
-
-
antirez authored
A system similar to the RDB write error handling is used: when we can't write to the AOF file, writes are no longer accepted until we are able to write again. For fsync == always we still abort on errors, since there is currently no easy way to avoid replying with success to the user otherwise, and this would violate the contract with the user of only acknowledging data already secured on disk.
-
- 03 Feb, 2014 2 commits
-
-
antirez authored
The define is now used in other parts of the Redis 2.8 tree instead of long long. A nice side effect is that the 2.8 and unstable sentinel.c files are now identical, as they should be.
-
antirez authored
server.lua_time_start is expressed in milliseconds. Use mstime_t instead of long long, and populate it with mstime() instead of ustime()/1000. Functionally identical but more natural.
-
- 31 Jan, 2014 2 commits
-
-
antirez authored
This is especially important since we already have a concept of backlog (the replication backlog).
-
Nenad Merdanovic authored
In high RPS environments, the default listen backlog is not sufficient, so giving users the power to configure it is the right approach, especially since it requires only minor modifications to the code.
-
- 14 Jan, 2014 1 commit
-
-
antirez authored
64mb is the default value in redis.conf. For some reason the hard-coded default was instead 1mb, which is too small.
-
- 08 Jan, 2014 1 commit
-
-
antirez authored
Masters not understanding REPLCONF ACK will reply with errors to our requests, causing a number of possible issues. This commit detects a global replication offset set to -1 at the end of the replication and marks the client representing the master with the REDIS_PRE_PSYNC flag. Note that this flag was called REDIS_PRE_PSYNC_SLAVE but is now just REDIS_PRE_PSYNC, as starting with this commit it is used for both slaves and masters. This commit fixes issue #1488.
-
- 19 Dec, 2013 1 commit
-
-
Yubao Liu authored
Without this patch, these options would be thrown away: include, rename-command, min-slaves-to-write, min-slaves-max-lag, appendfilename.
-
- 11 Dec, 2013 1 commit
-
-
Yossi Gottlieb authored
-
- 10 Dec, 2013 2 commits
-
-
antirez authored
The previous fix for the false positive timeout detected by the master was not complete. There is another blocking stage while loading data for the first synchronization with the master: flushing away the current data from the DB memory. This commit uses the newly introduced dict.c callback in order to do some incremental work (sending "\n" heartbeats to the master) while flushing the old data from memory. Unfortunately it is hard to write a regression test for this issue; more debugging support in the Redis core would be needed, in terms of functionality to simulate slow DB loading / deletion.
-
antirez authored
The Redis hash table implementation has many non-blocking features, like incremental rehashing; however, while deleting a large hash table there was no way to have a callback invoked to do some incremental work. This commit adds this support as an optional callback argument to dictEmpty(), currently invoked at a fixed interval (once every 65k deletions).
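Consistent with the description above, the hook roughly looks like this; a simplified sketch rather than the actual dict.c code, with the 65535 mask matching the "every 65k deletions" cadence:

    /* Simplified: walk the buckets, freeing entries, and give the caller a
     * chance to do incremental work from time to time. */
    static void dictClearWithCallback(dict *d, dictht *ht, void (*callback)(void *privdata)) {
        for (unsigned long i = 0; i < ht->size && ht->used > 0; i++) {
            if (callback && (i & 65535) == 0)
                callback(d->privdata);   /* e.g. send "\n" keepalives to the master */
            /* ... free every entry in bucket i ... */
        }
    }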
-
- 03 Dec, 2013 1 commit
-
-
antirez authored
-