- 26 Jul, 2015 6 commits
- 17 Jul, 2015 1 commit
-
-
Yongyue Sun authored
Signed-off-by: Yongyue Sun <abioy.sun@gmail.com>
-
- 03 Feb, 2015 1 commit
-
-
antirez authored
-
- 28 Jan, 2015 1 commit
-
-
Matt Stancliff authored
Previously, if we loaded a corrupt RDB, Redis printed an error report with a big "REPORT ON GITHUB" message at the bottom. But we know RDB load failures mean corrupt data, not corrupt code. Now, when an RDB failure is detected (duplicate keys or unknown data types in the file), we run check-rdb against the RDB and then exit. The automatic check-rdb run hopefully gives the user instant feedback about what is wrong instead of a mysterious stack trace.
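To make the behaviour concrete, here is a minimal, hypothetical sketch of the failure path in C; checkRdbFile() and rdbExitReportCorrupt() are stand-in names for illustration, not the actual Redis functions.

/* Hypothetical sketch of the new failure path: on corrupt data, re-check
 * the file and exit instead of asking for a bug report. checkRdbFile()
 * stands in for the bundled check-rdb logic. */
#include <stdio.h>
#include <stdlib.h>

static void checkRdbFile(const char *path) {
    /* In Redis this would scan the RDB and print a human-readable report. */
    printf("[offline check] scanning %s for duplicate keys / bad types...\n", path);
}

static void rdbExitReportCorrupt(const char *path, const char *reason) {
    fprintf(stderr, "Bad data format reading %s: %s\n", path, reason);
    checkRdbFile(path);  /* corrupt data, not corrupt code: no GitHub report */
    exit(1);
}

int main(void) {
    rdbExitReportCorrupt("dump.rdb", "Unknown object type");
    return 0; /* never reached */
}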
-
- 21 Jan, 2015 1 commit
-
-
antirez authored
-
- 19 Jan, 2015 1 commit
-
-
Matt Stancliff authored
It's possible for large objects to be larger than 'int', so let's upgrade all size counters to ssize_t. This also fixes the rdbSaveObject serialized-bytes calculation: since entire serializations of data structures can be large, we don't want to limit their calculated size to a 32-bit signed max. This commit widens the object size calculation and cascades the change back up to the serializedlength output.
Before: 127.0.0.1:6379> debug object hihihi ... encoding:quicklist serializedlength:-2147483559 ...
After: 127.0.0.1:6379> debug object hihihi ... encoding:quicklist serializedlength:2147483737 ...
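A small standalone C illustration of the overflow being fixed (the numbers are made up; this is not Redis source): summing multi-gigabyte serialized sizes into a 32-bit counter wraps negative, while ssize_t keeps the real total on 64-bit builds.

#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>   /* ssize_t */

int main(void) {
    size_t chunk = 300UL * 1024 * 1024;   /* say, 300 MB of serialized ziplists */
    uint32_t as32 = 0;
    ssize_t wide = 0;
    for (int i = 0; i < 8; i++) {         /* ~2.4 GB in total */
        as32 += (uint32_t)chunk;          /* wraps modulo 2^32 */
        wide += (ssize_t)chunk;           /* keeps the real total on 64-bit builds */
    }
    printf("as 32-bit signed: %d\n", (int)as32);   /* prints a negative count */
    printf("as ssize_t:       %zd\n", wide);       /* prints the correct byte count */
    return 0;
}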
-
- 09 Jan, 2015 1 commit
-
-
Matt Stancliff authored
-
- 08 Jan, 2015 7 commits
-
-
antirez authored
Thx to @badboy.
-
antirez authored
-
antirez authored
This commit introduces a new RDB data type called 'aux'. It is used to insert into an RDB file key-value pairs that may serve different needs, without breaking backward compatibility when new information is embedded in an RDB file. The contract between Redis versions is to ignore unknown aux fields when they are encountered. Aux fields can be used to:
1. Augment the RDB file with info such as the version of Redis that created it, the creation time, the memory used while the RDB was created, and so forth.
2. Add state about Redis inside the RDB file that we need to reload later: replication offset, previous master run ID, in order to improve failover safety and allow partial resynchronization after a slave restart.
3. Anything else we may want to add to RDB files without breaking the ability of past versions of Redis to load the file.
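A hedged, standalone sketch of the aux-field idea; the opcode value and the length-prefix encoding below are illustrative only and are not the real RDB wire format.

/* Sketch of the aux-field idea: a dedicated opcode followed by a
 * length-prefixed key and value. Readers that know the opcode but not
 * the key can simply skip the pair. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define OPCODE_AUX 0xFA   /* illustrative value */

static void writeAux(FILE *fp, const char *key, const char *val) {
    uint32_t klen = (uint32_t)strlen(key), vlen = (uint32_t)strlen(val);
    fputc(OPCODE_AUX, fp);
    fwrite(&klen, sizeof(klen), 1, fp); fwrite(key, 1, klen, fp);
    fwrite(&vlen, sizeof(vlen), 1, fp); fwrite(val, 1, vlen, fp);
}

int main(void) {
    FILE *fp = fopen("aux-demo.bin", "wb");
    if (!fp) return 1;
    writeAux(fp, "redis-ver", "3.0.0");   /* version that created the file */
    writeAux(fp, "repl-offset", "12345"); /* replication state for safer failover */
    fclose(fp);
    return 0;
}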
-
antirez authored
-
antirez authored
The new opcode is a hint about the size of the dataset (number of keys and number of expires) we are going to load for a given Redis database inside the RDB file. Since hash tables are resized accordingly ASAP, useless rehashing is avoided, speeding up load times significantly, on the order of ~20% or more for larger data sets. Related issue: #1719
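A hedged sketch of the size-hint round trip; the opcode value, encoding and presizeTables() helper are illustrative stand-ins, not the real RDB format or dict API.

/* Sketch: before a DB's keys are emitted, write how many keys and expires
 * follow, so the loader can size its hash tables once up front instead of
 * rehashing repeatedly while loading. */
#include <stdio.h>
#include <stdint.h>

#define OPCODE_RESIZEDB 0xFB   /* illustrative opcode value */

static void writeDbSizeHint(FILE *fp, uint64_t keys, uint64_t expires) {
    fputc(OPCODE_RESIZEDB, fp);
    fwrite(&keys, sizeof(keys), 1, fp);
    fwrite(&expires, sizeof(expires), 1, fp);
}

/* Loader side: a stand-in for dict presizing. */
static void presizeTables(uint64_t keys, uint64_t expires) {
    printf("presizing main dict for %llu keys, expires dict for %llu keys\n",
           (unsigned long long)keys, (unsigned long long)expires);
}

int main(void) {
    FILE *fp = fopen("resizedb-demo.bin", "wb");
    if (!fp) return 1;
    writeDbSizeHint(fp, 1000000, 250000);  /* example dataset sizes */
    fclose(fp);
    presizeTables(1000000, 250000);        /* what the loader would do */
    return 0;
}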
-
antirez authored
Before, we needed to create a string object with an embedded SDS and then basically duplicate the SDS part into a plain zmalloc() allocation.
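A plain-C illustration of the allocation pattern this change avoids (not Redis internals): reading straight into the final buffer saves a temporary allocation, a memcpy and a free per loaded string.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Before (conceptually): temp buffer -> copy -> free temp. */
static char *loadStringCopying(FILE *fp, size_t len) {
    char *tmp = malloc(len), *out = malloc(len);
    if (!tmp || !out || fread(tmp, 1, len, fp) != len) { free(tmp); free(out); return NULL; }
    memcpy(out, tmp, len);
    free(tmp);
    return out;
}

/* After (conceptually): read directly into the destination allocation. */
static char *loadStringPlain(FILE *fp, size_t len) {
    char *out = malloc(len);
    if (!out || fread(out, 1, len, fp) != len) { free(out); return NULL; }
    return out;
}

int main(void) {
    FILE *fp = tmpfile();
    if (!fp) return 1;
    const char msg[] = "hello";
    fwrite(msg, 1, sizeof(msg), fp);
    rewind(fp);
    char *a = loadStringCopying(fp, sizeof(msg));
    rewind(fp);
    char *b = loadStringPlain(fp, sizeof(msg));
    if (a && b) printf("%s %s\n", a, b);
    free(a); free(b);
    return 0;
}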
-
antirez authored
-
- 02 Jan, 2015 5 commits
-
-
Matt Stancliff authored
This removes:
- list-max-ziplist-entries
- list-max-ziplist-value
This adds:
- list-max-ziplist-size
- list-compress-depth
Also updates the config file with new sections and updates the tests to use quicklist settings instead of the old list settings.
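For reference, a redis.conf excerpt with the new directives; the values are examples only, not necessarily the shipped defaults.

# Quicklist-backed lists (example values; see redis.conf for the defaults)
list-max-ziplist-size 128     # max entries per quicklist node
list-compress-depth 0         # 0 = no compression; N = leave N nodes per end uncompressed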
-
Matt Stancliff authored
Let the user set how many nodes to *not* compress. We can specify a compression "depth": how many nodes to leave uncompressed on each end of the quicklist.
- Depth 0 = disable compression.
- Depth 1 = only leave head/tail uncompressed (read as: "skip 1 node on each end of the list before compressing").
- Depth 2 = leave head, head->next, tail->prev, tail uncompressed ("skip 2 nodes on each end of the list before compressing").
- Depth 3 = Depth 2 + head->next->next + tail->prev->prev ("skip 3 nodes..."), etc.
This also:
- updates RDB storage to use native quicklist compression (if a node is already compressed) instead of uncompressing it, generating the RDB string, then re-compressing the quicklist node;
- internalizes the "fill" parameter of the quicklist so we don't need to pass it to _every_ function; now it's just a property of the list;
- allows a runtime-configurable compression option, so we can expose a compression parameter in the configuration file if people want to trade slight requests-per-second performance for up to 90%+ memory savings in some situations;
- updates the quicklist tests to do multiple passes: 200k+ tests now.
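A standalone illustration of the depth rule above; the helper name is invented here and is not a quicklist internal.

#include <stdio.h>

/* Returns 1 if the node at 0-based position `pos` in a list of `len` nodes
 * should be compressed for the given compress depth, 0 otherwise. */
static int nodeShouldCompress(long pos, long len, long depth) {
    if (depth == 0) return 0;                 /* depth 0: compression disabled */
    return pos >= depth && pos < len - depth; /* interior nodes only */
}

int main(void) {
    long len = 8, depth = 2;                  /* depth 2: 0,1 and 6,7 stay plain */
    for (long i = 0; i < len; i++)
        printf("node %ld: %s\n", i,
               nodeShouldCompress(i, len, depth) ? "compressed" : "plain");
    return 0;
}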
-
Matt Stancliff authored
It turns out this is a huge improvement during save/reload/migrate/restore because, with compression enabled, we're compressing 4k or 8k chunks of data consisting of multiple elements in one ziplist instead of compressing a series of smaller individual elements.
-
Matt Stancliff authored
This saves us an unnecessary zmalloc, memcpy, and two frees.
-
Matt Stancliff authored
This replaces individual ziplist vs. linkedlist representations for Redis list operations. Big thanks for all the reviews and feedback from everybody in https://github.com/antirez/redis/pull/2143
-
- 23 Dec, 2014 2 commits
-
-
Matt Stancliff authored
-
antirez authored
1. The server unixtime may not be updated while loading an AOF, so the ETA is not updated correctly.
2. The number of processed bytes was not initialized.
3. Possible division-by-zero condition (likely the cause of issue #1932).
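An illustrative guard for the third point (names and formula are mine, not the Redis loading-status code): if the clock has not advanced or nothing has been processed yet, the rate is zero and the ETA division must be skipped.

#include <stdio.h>
#include <time.h>

static double loadEtaSeconds(long long processed, long long total,
                             time_t start, time_t now) {
    double elapsed = difftime(now, start);
    if (elapsed <= 0 || processed <= 0) return -1;  /* ETA unknown yet: avoid /0 */
    double rate = (double)processed / elapsed;      /* bytes per second */
    return (double)(total - processed) / rate;
}

int main(void) {
    time_t start = time(NULL);
    printf("eta: %.1f\n", loadEtaSeconds(0, 1000, start, start));       /* -1: guarded */
    printf("eta: %.1f\n", loadEtaSeconds(250, 1000, start, start + 5)); /* 15.0 */
    return 0;
}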
-
- 21 Dec, 2014 1 commit
-
-
Alon Diamant authored
-
- 27 Oct, 2014 1 commit
-
-
antirez authored
-
- 23 Oct, 2014 1 commit
-
-
antirez authored
The child now reports full info to the parent, including the IDs of slaves in a failure state and the exit code.
-
- 22 Oct, 2014 1 commit
-
-
antirez authored
We need to prevent a child -> slaves transfer from continuing forever. We use the same timeout as the global replication timeout, which is documented to also affect I/O operations during bulk transfers.
-
- 17 Oct, 2014 3 commits
-
-
antirez authored
Even though the socket is set in blocking mode, we can still get short writes when writing to it.
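For reference, the usual way to cope with short writes on a blocking descriptor; this is a generic POSIX sketch, not the Redis helper.

#include <unistd.h>
#include <errno.h>
#include <sys/types.h>

/* Keep calling write() until every byte is out or a real error occurs. */
static ssize_t writeAll(int fd, const void *buf, size_t len) {
    const char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR) continue;  /* interrupted: retry */
            return -1;                     /* genuine error */
        }
        p += n;                            /* short write: advance and retry */
        left -= (size_t)n;
    }
    return (ssize_t)len;
}

int main(void) {
    const char msg[] = "short writes handled\n";
    return writeAll(1, msg, sizeof(msg) - 1) < 0;  /* fd 1 = stdout */
}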
-
antirez authored
Performing a socket write() for each RDB rio API write call was extremely inefficient, so now rio has minimal buffering capabilities: writes are accumulated into a buffer and only actually written to the N slave FDs when a given limit is reached. Trivia: rio lacked buffering support because our original targets were: 1) memory buffers, 2) C standard I/O. Both were already buffered.
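A minimal sketch of the accumulate-then-flush idea, assuming a single destination fd; the struct and function names are invented and do not mirror the rio API.

#include <string.h>
#include <unistd.h>
#include <sys/types.h>

#define BUF_LIMIT 4096   /* flush threshold, illustrative */

typedef struct {
    int fd;
    size_t used;
    char buf[BUF_LIMIT];
} bufwriter;

/* Push the buffered bytes to the fd, handling short writes. */
static int bw_flush(bufwriter *w) {
    size_t off = 0;
    while (off < w->used) {
        ssize_t n = write(w->fd, w->buf + off, w->used - off);
        if (n < 0) return -1;
        off += (size_t)n;
    }
    w->used = 0;
    return 0;
}

/* Accumulate small writes; only hit the fd when the buffer fills up. */
static int bw_write(bufwriter *w, const void *data, size_t len) {
    if (w->used + len > BUF_LIMIT && bw_flush(w) < 0) return -1;
    if (len > BUF_LIMIT) {                   /* oversized payload: write through */
        size_t off = 0;
        while (off < len) {
            ssize_t n = write(w->fd, (const char *)data + off, len - off);
            if (n < 0) return -1;
            off += (size_t)n;
        }
        return 0;
    }
    memcpy(w->buf + w->used, data, len);
    w->used += len;
    return 0;
}

int main(void) {
    bufwriter w = { .fd = 1, .used = 0 };
    for (int i = 0; i < 3; i++) bw_write(&w, "buffered line\n", 14);
    return bw_flush(&w) < 0;                 /* final flush to stdout */
}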
-
antirez authored
-
- 15 Oct, 2014 2 commits
- 14 Oct, 2014 2 commits
- 08 Oct, 2014 1 commit
-
-
antirez authored
We need to remember the saving strategy of the current RDB child process, since the configuration may be modified at runtime via CONFIG SET; when the child exits we still need to know what to do and for what goal the process was started: to create an RDB file on disk, or to write the data directly to the slaves' sockets.
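A sketch of that bookkeeping, with invented enum and function names: record the target when the child is started and consult the record, not the live config, when the child exits.

#include <stdio.h>

typedef enum {
    RDB_CHILD_TYPE_NONE,
    RDB_CHILD_TYPE_DISK,    /* child is producing an RDB file on disk */
    RDB_CHILD_TYPE_SOCKET   /* child is streaming the RDB to slave sockets */
} rdb_child_type;

static rdb_child_type rdb_child_target = RDB_CHILD_TYPE_NONE;

static void startRdbChild(int to_sockets) {
    /* Capture the goal at fork time, before CONFIG SET can change it. */
    rdb_child_target = to_sockets ? RDB_CHILD_TYPE_SOCKET : RDB_CHILD_TYPE_DISK;
    /* ... fork() and run the save in the child ... */
}

static void onRdbChildExit(void) {
    if (rdb_child_target == RDB_CHILD_TYPE_DISK)
        printf("background save to disk finished\n");
    else if (rdb_child_target == RDB_CHILD_TYPE_SOCKET)
        printf("diskless transfer to slaves finished\n");
    rdb_child_target = RDB_CHILD_TYPE_NONE;
}

int main(void) {
    startRdbChild(1);
    onRdbChildExit();
    return 0;
}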
-
- 07 Oct, 2014 1 commit
-
-
antirez authored
-
- 29 Sep, 2014 1 commit
-
-
zionwu authored
error != success; and 0 != number of bytes written. Closes #1806
-