- 08 Jan, 2015 1 commit
-
-
antirez authored
-
- 02 Jan, 2015 5 commits
-
-
Matt Stancliff authored
This removes:
- list-max-ziplist-entries
- list-max-ziplist-value
This adds:
- list-max-ziplist-size
- list-compress-depth
Also updates config file with new sections and updates tests to use quicklist settings instead of old list settings.
-
Matt Stancliff authored
Let the user set how many nodes to *not* compress. We can specify a compression "depth": how many nodes to leave uncompressed on each end of the quicklist.
- Depth 0 = disable compression.
- Depth 1 = leave only head/tail uncompressed (read as: "skip 1 node on each end of the list before compressing").
- Depth 2 = leave head, head->next, tail->prev, tail uncompressed ("skip 2 nodes on each end of the list before compressing").
- Depth 3 = Depth 2 + head->next->next + tail->prev->prev ("skip 3 nodes..."), etc.
This also:
- updates RDB storage to use native quicklist compression (if a node is already compressed) instead of uncompressing, generating the RDB string, then re-compressing the quicklist node.
- internalizes the "fill" parameter for the quicklist so we don't need to pass it to _every_ function; now it's just a property of the list.
- allows a runtime-configurable compression option, so we can expose a compression parameter in the configuration file for people who want to trade slight request-per-second performance for up to 90%+ memory savings in some situations.
- updates the quicklist tests to do multiple passes: 200k+ tests now.
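A minimal sketch of the depth rule in C (not the actual Redis source; `nodeMayCompress` and its parameters are hypothetical, with `depth` standing in for the list-compress-depth setting):

```c
#include <stdbool.h>

/* Decide whether the quicklist node at position `idx` (0-based from the
 * head) may be compressed, given `len` total nodes and the configured
 * compress depth. depth == 0 disables compression entirely. */
static bool nodeMayCompress(unsigned long idx, unsigned long len,
                            unsigned int depth) {
    if (depth == 0) return false;           /* compression disabled */
    if (len <= 2 * (unsigned long)depth)    /* list too short: every node
                                               is near an end */
        return false;
    /* Skip `depth` nodes on each end before compressing. */
    return idx >= depth && idx < len - depth;
}
```

For depth 1 and a 5-node list this leaves nodes 0 and 4 (head and tail) uncompressed, matching the description above.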
-
Matt Stancliff authored
Turns out it's a huge improvement during save/reload/migrate/restore because, with compression enabled, we're compressing 4k or 8k chunks of data consisting of multiple elements in one ziplist, instead of compressing a series of smaller individual elements.
-
Matt Stancliff authored
This saves us an unnecessary zmalloc, memcpy, and two frees.
-
Matt Stancliff authored
This replaces individual ziplist vs. linkedlist representations for Redis list operations. Big thanks for all the reviews and feedback from everybody in https://github.com/antirez/redis/pull/2143
-
- 23 Dec, 2014 2 commits
-
-
Matt Stancliff authored
-
antirez authored
1. Server unixtime may not be updated while loading the AOF, so the ETA is not updated correctly.
2. The number of processed bytes was not initialized.
3. Possible division by zero condition (likely cause of issue #1932).
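A sketch of the division guard (hypothetical names; the real code tracks these values in the server's loading state):

```c
#include <sys/types.h>
#include <time.h>

/* Compute a loading ETA only when both the elapsed time and the number
 * of processed bytes are non-zero, guarding the division that issue
 * #1932 most likely tripped over. */
double loadingEtaSeconds(off_t processed, off_t total, time_t start) {
    time_t elapsed = time(NULL) - start;
    if (elapsed <= 0 || processed <= 0) return -1;      /* ETA unknown */
    double rate = (double)processed / (double)elapsed;  /* bytes/sec  */
    return (double)(total - processed) / rate;
}
```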
-
- 21 Dec, 2014 1 commit
-
-
Alon Diamant authored
-
- 27 Oct, 2014 1 commit
-
-
antirez authored
-
- 23 Oct, 2014 1 commit
-
-
antirez authored
Child now reports full info to the parent, including the IDs of slaves in failure state and the exit code.
-
- 22 Oct, 2014 1 commit
-
-
antirez authored
We need to prevent a child -> slaves transfer from continuing forever. We use the same value as the global replication timeout, which is documented to also affect I/O operations during bulk transfers.
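One way to bound such a transfer, sketched here under the assumption of blocking slave sockets (not necessarily how the internals do it): give each socket a send timeout equal to the replication timeout, so write(2) to a stuck slave eventually fails instead of blocking the child forever.

```c
#include <sys/socket.h>
#include <sys/time.h>

/* Make write(2) on `fd` fail with EAGAIN/EWOULDBLOCK after
 * `timeout_secs` of no progress; mirrors the repl-timeout value. */
static int setSendTimeout(int fd, long timeout_secs) {
    struct timeval tv = { .tv_sec = timeout_secs, .tv_usec = 0 };
    return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
}
```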
-
- 17 Oct, 2014 3 commits
-
-
antirez authored
Even when the socket is set in blocking mode, we can still get short writes when writing to it.
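A sketch of the usual fix, a write-all loop (generic C, not the Redis source):

```c
#include <errno.h>
#include <unistd.h>

/* Even a blocking socket may accept fewer bytes than requested, so loop
 * until the whole buffer is written or a real error occurs. */
ssize_t writeFully(int fd, const char *buf, size_t len) {
    size_t done = 0;
    while (done < len) {
        ssize_t n = write(fd, buf + done, len - done);
        if (n == -1) {
            if (errno == EINTR) continue;   /* retry interrupted writes */
            return -1;
        }
        done += (size_t)n;
    }
    return (ssize_t)done;
}
```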
-
antirez authored
Performing a socket write() for each RDB rio API write call was extremely inefficient, so now rio has minimal buffering capabilities. Writes are accumulated into a buffer and actually written to the N slave FDs only when a given limit is reached. Trivia: rio lacked support for buffering since our targets were 1) memory buffers and 2) C standard I/O, both of which were already buffered.
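A sketch of the accumulate-then-flush idea (hypothetical names and buffer limit; the real rio code differs):

```c
#include <string.h>
#include <unistd.h>

#define RIO_FLUSH_LIMIT (64 * 1024)   /* hypothetical flush threshold */

typedef struct {
    char buf[RIO_FLUSH_LIMIT];
    size_t used;
    int *fds;                          /* the N slave sockets */
    int numfds;
} riofdset;

/* Flush the buffer to every slave fd. A real implementation loops on
 * short writes, as in the writeFully() sketch above. */
static int rioFlush(riofdset *r) {
    for (int i = 0; i < r->numfds; i++)
        if (write(r->fds[i], r->buf, r->used) != (ssize_t)r->used)
            return 0;
    r->used = 0;
    return 1;
}

/* Accumulate into the buffer; only a full buffer triggers real I/O. */
int rioWriteBuffered(riofdset *r, const void *data, size_t len) {
    const char *p = data;
    while (len) {
        size_t space = sizeof(r->buf) - r->used;
        size_t chunk = len < space ? len : space;
        memcpy(r->buf + r->used, p, chunk);
        r->used += chunk;
        p += chunk;
        len -= chunk;
        if (r->used == sizeof(r->buf) && !rioFlush(r)) return 0;
    }
    return 1;
}
```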
-
antirez authored
-
- 15 Oct, 2014 2 commits
- 14 Oct, 2014 2 commits
- 08 Oct, 2014 1 commit
-
-
antirez authored
We need to remember the saving strategy of the current RDB child process, since the configuration may be modified at runtime via CONFIG SET, yet when the child exits we still need to understand what to do and for what goal the process was initiated: to create an RDB file on disk, or to write directly to the slaves' sockets.
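A sketch of recording the target at fork time (hypothetical names; the real field lives in the server state):

```c
/* Recorded when the child is forked, so a later CONFIG SET cannot
 * change how the parent interprets the child's exit. */
typedef enum {
    RDB_CHILD_NONE,     /* no RDB child active */
    RDB_CHILD_DISK,     /* child is writing an RDB file on disk */
    RDB_CHILD_SOCKET    /* child streams the RDB to slave sockets */
} rdbChildTarget;

/* On child exit, act on the recorded target, not the current config. */
void onRdbChildExit(rdbChildTarget target, int exitcode) {
    if (target == RDB_CHILD_DISK) {
        /* rename the temp file, update dirty counters, ... */
    } else if (target == RDB_CHILD_SOCKET) {
        /* notify slaves waiting on the diskless transfer, ... */
    }
    (void)exitcode;     /* drives success/failure handling */
}
```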
-
- 07 Oct, 2014 1 commit
-
-
antirez authored
-
- 29 Sep, 2014 1 commit
-
-
zionwu authored
error != success; and 0 != number of bytes written Closes #1806
-
- 18 Aug, 2014 1 commit
-
-
yoav authored
Closes #857
-
- 08 Aug, 2014 1 commit
-
-
Matt Stancliff authored
dictAdd returns DICT_OK, not REDIS_OK. They both have the same underlying values, so it works even though the code is technically wrong. Fixes #1512
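A minimal illustration of the corrected comparison (assuming Redis's dict.h is included; `d`, `key`, and `val` are placeholders):

```c
/* dict.c return values must be checked against dict.c constants.
 * REDIS_OK happens to equal DICT_OK (both 0), so the old check worked
 * by accident; the intent-correct version is: */
if (dictAdd(d, key, val) != DICT_OK) {
    /* the key already existed, or the add failed */
}
```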
-
- 07 Aug, 2014 1 commit
-
-
antirez authored
Also quit ASAP when we are still loading the DB, since no special care is needed in this condition, especially for a SIGINT.
-
- 28 Jul, 2014 1 commit
-
-
Yossi Gottlieb authored
-
- 16 Jul, 2014 1 commit
-
-
antirez authored
-
- 08 Jul, 2014 1 commit
-
-
antirez authored
-
- 01 Jul, 2014 1 commit
-
-
antirez authored
-
- 26 Jun, 2014 1 commit
-
-
antirez authored
-
- 12 May, 2014 2 commits
-
-
Akos Vandra authored
(Note: commit message modified by @antirez for clarity).
-
Akos Vandra authored
-
- 24 Apr, 2014 1 commit
-
-
antirez authored
When we are blocked and a few events are processed from time to time, it is smarter to call the event handler a few times in order to handle the accept, read, write, close cycle of a client in a single pass; otherwise too much latency is added for clients to receive a reply while the server is busy in some way (for example during DB loading).
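A sketch of the batched approach (the constant and function name are hypothetical; assumes Redis's ae.h event loop API):

```c
#include "ae.h"     /* aeEventLoop, aeProcessEvents, AE_* flags */

#define EVENTS_PER_PASS 4   /* hypothetical batch size */

/* While blocked (e.g. loading the DB), drain a small batch of pending
 * events per call so a client's accept/read/write/close cycle can
 * complete in one pass instead of one step per invocation. */
void processEventsBatch(aeEventLoop *el) {
    for (int j = 0; j < EVENTS_PER_PASS; j++) {
        /* AE_DONT_WAIT: poll without blocking; stop when nothing fires */
        if (aeProcessEvents(el, AE_FILE_EVENTS | AE_DONT_WAIT) == 0)
            break;
    }
}
```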
-
- 24 Mar, 2014 1 commit
-
-
Matt Stancliff authored
Previously, the (!fp) check would only catch lack of free space under OS X. Linux waits to discover it can't write until it actually flushes contents to disk. (fwrite() returns success even if the underlying file has no free space to write into; the errors only show up at flush/sync/close time.) Fixes antirez/redis#1604
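A sketch of the stricter error handling (generic C; the real code writes the RDB, this just shows the principle):

```c
#include <stdio.h>

/* fwrite() into a full filesystem can report success on Linux; the
 * error only surfaces at flush/sync/close time, so check all of them. */
int saveBufferToFile(const char *path, const char *buf, size_t len) {
    FILE *fp = fopen(path, "w");
    if (!fp) return -1;             /* catches e.g. the OS X no-space case */
    int err = 0;
    if (len && fwrite(buf, len, 1, fp) != 1) err = 1; /* may still "pass" */
    if (fflush(fp) == EOF) err = 1; /* buffered-write errors surface here */
    if (fclose(fp) == EOF) err = 1; /* ...or here */
    return err ? -1 : 0;
}
```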
-
- 13 Feb, 2014 1 commit
-
-
antirez authored
server.unixtime and server.mstime are cached, less precise timestamps that we use whenever we don't need an accurate time representation and a syscall would be too slow for the number of calls we require. One such example is the initialization and update of a client's last interaction time, which is used for timeouts. However rdbLoad() can take some time to load the DB, and it did not update the cached time during loading. This resulted in the bug described in issue #1535, where during replication the slave loads the DB and creates the redisClient representation of its master, but the timestamp is so old that the master, under certain conditions, is sensed as already "timed out". Thanks to @yoav-steinberg and Redis Labs Inc for the bug report and analysis.
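A sketch of the fix's shape (hypothetical names; in Redis the cached value is server.unixtime and the loop lives in rdbLoad()):

```c
#include <time.h>

extern time_t cached_unixtime;   /* stands in for server.unixtime */

/* Refresh the cached timestamp periodically inside the load loop, so
 * clients created right after loading (like the slave's redisClient for
 * its master) get a fresh last-interaction time. */
void loadingTick(unsigned long long items_loaded) {
    if ((items_loaded & 1023) == 0)      /* every 1024 items is plenty */
        cached_unixtime = time(NULL);
}
```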
-
- 10 Dec, 2013 2 commits
-
-
antirez authored
The previous fix for the false positive timeout detected by the master was not complete. There is another blocking stage while loading data for the first synchronization with the master, that is, flushing away the current data from the DB memory. This commit uses the newly introduced dict.c callback in order to do some incremental work (sending "\n" heartbeats to the master) while flushing the old data from memory. Unfortunately it is hard to write a regression test for this issue; the Redis core would need more debugging support, such as the ability to simulate slow DB loading/deletion.
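A sketch of the callback's shape (hypothetical names; the actual dict.c hook signature may differ):

```c
#include <unistd.h>

/* Invoked periodically by dict.c while the old dataset is emptied; the
 * replication code uses it to push a "\n" heartbeat to the master so
 * the blocking flush cannot trip the master's timeout detection. */
void emptyDbHeartbeat(void *privdata) {
    int masterfd = *(int *)privdata;
    (void)write(masterfd, "\n", 1);  /* all-or-nothing, cannot desync */
}
```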
-
antirez authored
-
- 09 Dec, 2013 1 commit
-
-
antirez authored
Starting with Redis 2.8, masters are able to detect timed out slaves, while before 2.8 only slaves were able to detect a timed out master. Now that timeout detection is bi-directional, the following problem happens, as described "in the field" by issue #1449:
1) Master and slave set up with a big dataset.
2) Slave performs the first synchronization, or a full sync after a failed partial resync.
3) Master sends the RDB payload to the slave.
4) Slave loads this payload.
5) Master detects the slave as timed out since it does not receive the REPLCONF ACK acknowledgements back.
The problem is that the master has no way to know how long the slave will take to load the RDB file into memory. The obvious solution is a greater replication timeout setting, but this is a shame, since for the 0.1% of operation time we would be forced to use a timeout that is not suited for the other 99.9%. This commit tries to fix the problem with a solution that is a bit of a hack, but that modifies little of the replication internals, so it can be backported to 2.8 safely. During RDB loading, we send the master newlines to avoid being sensed as timed out. This is the same thing the master already does while saving the RDB file, to keep signaling its presence to the slave. A single newline is used because:
1) It can't desync the protocol, as it is transmitted all or nothing.
2) It can be safely sent even when we don't have a client structure for the master, or in similar situations, with just write(2).
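A sketch of the heartbeat (hypothetical names; in the real code this happens inside the slave's sync read loop):

```c
#include <time.h>
#include <unistd.h>

/* While loading the RDB from the master, periodically send a single
 * "\n": a lone newline cannot desync the protocol and needs no client
 * structure, just write(2) on the master's fd. */
void replSendNewlineIfNeeded(int masterfd, time_t *last_newline) {
    time_t now = time(NULL);
    if (now - *last_newline > 1) {       /* at most one per second */
        (void)write(masterfd, "\n", 1);  /* failures surface later on
                                            the replication link */
        *last_newline = now;
    }
}
```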
-
- 05 Dec, 2013 1 commit
-
-
antirez authored
-
- 07 Nov, 2013 1 commit
-
-
antirez authored
Thanks to @PhoneLi for reporting.
-