- 15 Mar, 2018 (16 commits)
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
    We need to check whether the request will be served via the PEL before inserting a deferred array length in the client output buffer.
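    For context, a minimal control-flow sketch of the ordering this fixes; every type and function name below is a placeholder, not the actual Redis internals:

    ```c
    /* Sketch only: placeholder names, not the real Redis code.
     * The point is the ordering: decide whether the PEL will serve
     * the request *before* reserving a deferred array length in the
     * client output buffer, so only one path emits a length header. */
    typedef struct client client;
    typedef struct streamCG streamCG;

    void *addDeferredArrayLen(client *c);               /* placeholder */
    void setDeferredArrayLen(client *c, void *p, long long n);
    int requestServedByPEL(client *c, streamCG *cg);    /* placeholder */
    void replyWithPELEntries(client *c, streamCG *cg);  /* placeholder */
    long long emitNewEntries(client *c, streamCG *cg);  /* placeholder */

    void xreadgroupSketch(client *c, streamCG *cg) {
        if (requestServedByPEL(c, cg)) {
            /* The PEL path builds its own reply, length included. */
            replyWithPELEntries(c, cg);
            return;
        }
        /* Only the new-entries path needs the deferred length. */
        void *lenptr = addDeferredArrayLen(c);
        setDeferredArrayLen(c, lenptr, emitNewEntries(c, cg));
    }
    ```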
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
- 01 Mar, 2018 (1 commit)
  - antirez authored
- 04 Dec, 2017 (1 commit)
  - antirez authored
- 01 Dec, 2017 (22 commits)
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
    After checking with the community via Twitter (here: https://twitter.com/antirez/status/915130876861788161) the verdict was to use ":". However I later realized, after users lamented that it's hard to copy IDs with just a double click, that this was the reason I had moved to "." in the first place. Fortunately "-", the other option with the most votes, also gets selected with a double click in most terminal applications on Linux and macOS. So my reasoning was:
    1) We can't retain "." because it's actually confusing to newcomers: it looks like a floating-point number, and people may be tricked into thinking they can order IDs numerically as floats.
    2) Moving to a double-click-to-select format is much better. People will work with such IDs for a long time while coding and debugging. Why make a choice now that will impact this for years to come?
    The only other viable option was "-", and that's what I did. Thanks.
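    For reference, the resulting textual ID format is `<milliseconds>-<sequence>`; a tiny self-contained parsing sketch (parseStreamID is illustrative, not the actual Redis parser, which is stricter):

    ```c
    #include <stdio.h>

    /* Stream IDs in their final textual form: <ms-time>-<seq>.
     * The "-" separator is double-click friendly in most terminals. */
    static int parseStreamID(const char *s,
                             unsigned long long *ms,
                             unsigned long long *seq) {
        return sscanf(s, "%llu-%llu", ms, seq) == 2;
    }

    int main(void) {
        unsigned long long ms, seq;
        if (parseStreamID("1526919030474-55", &ms, &seq))
            printf("ms=%llu seq=%llu\n", ms, seq);
        return 0;
    }
    ```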
  - antirez authored
    Clang should be more willing to emit warnings by default when a variable shadows another of the same name. GCC does this, and it helps avoid bugs like that.
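    A minimal example of the class of bug this is about; both GCC and Clang report it with -Wshadow (which -Wall does not enable):

    ```c
    #include <stddef.h>

    /* Build with: cc -Wshadow -c shadow.c
     * The inner `len` silently shadows the accumulator, so the
     * function always returns 0; -Wshadow flags the redeclaration. */
    size_t total_len(const char **strs, size_t n) {
        size_t len = 0;
        for (size_t i = 0; i < n; i++) {
            size_t len = 0;               /* shadows the outer `len` */
            while (strs[i][len] != '\0') len++;
            /* the outer accumulator was never updated */
        }
        return len;                       /* always 0: the bug */
    }
    ```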
  - antirez authored
  - antirez authored
  - antirez authored
    The core of this change is the implementation of stream trimming, with the MAXLEN option of XADD following trivially from having trimming capabilities. MAXLEN already works, but for it to be more efficient, listpack GC should be implemented; this is currently marked as a TODO item in the comments.
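    A minimal sketch of the trimming idea, assuming placeholder helpers that the real radix-tree/listpack code does not expose in this form:

    ```c
    #include <stdint.h>

    /* Sketch only: the real implementation walks the radix tree of
     * listpacks and, as noted above, still needs listpack GC to make
     * partially-trimmed nodes cheap to reclaim. */
    typedef struct stream stream;

    uint64_t streamLength(stream *s);         /* placeholder */
    void streamDeleteOldestEntry(stream *s);  /* placeholder */

    void streamTrimByLength(stream *s, uint64_t maxlen) {
        while (streamLength(s) > maxlen)
            streamDeleteOldestEntry(s);
    }
    ```

    With trimming in place, XADD just trims after appending, so for example `XADD mystream MAXLEN 1000 * field value` keeps the stream capped at 1000 entries.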
  - antirez authored
    Listpack max size is a tradeoff between space and time. A 2k maximum puts the memory usage at approximately the same order of magnitude (5 million entries went from 96 to 120 MB), but range query speed doubled (because there are half as many entries to scan in the average case). Lower values could be considered, or maybe this parameter should be made tunable.
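    A sketch of what such a knob could look like; both the constant name and the value here are hypothetical, not the actual Redis configuration:

    ```c
    #include <stddef.h>

    /* Hypothetical compile-time knob: larger nodes amortize per-node
     * overhead better (space), smaller nodes mean fewer entries to
     * scan per range-query step (time). */
    #define STREAM_NODE_MAX_BYTES 2048

    /* When appending would push the tail listpack past the limit,
     * a new node would be started instead. */
    int nodeIsFull(size_t node_bytes, size_t entry_bytes) {
        return node_bytes + entry_bytes > STREAM_NODE_MAX_BYTES;
    }
    ```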
  - antirez authored
    We used to have the master ID stored at the start of the listpack; however, using the key directly makes more sense in order to create a space-efficient representation: the key in the radix tree is very unlikely to change anyway, because of how the stream is implemented. Moreover, when merging nodes, rewriting the merged listpacks is in any case the most sensible operation, and we can use the iterator and the append-to-stream function to avoid re-implementing the code needed for merging. This commit also adds two items at the start of the listpack: the number of valid items inside the listpack, and the number of items marked as deleted. This means there is no need to scan a listpack to understand whether it's a good candidate for garbage collection: the ratio between valid and deleted items can trigger the GC directly.
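    Conceptually, the two new counters look like this (the real representation stores them as the first listpack elements, not as a C struct; names and the threshold are illustrative):

    ```c
    #include <stdint.h>

    /* What the two extra items at the head of each listpack track. */
    typedef struct {
        uint32_t valid_count;    /* entries still readable */
        uint32_t deleted_count;  /* entries flagged as deleted */
    } lpStreamCountsSketch;

    /* GC candidacy can be decided without scanning the payload;
     * the 50% threshold here is a hypothetical policy. */
    int isGCCandidate(const lpStreamCountsSketch *c) {
        return c->deleted_count > c->valid_count;
    }
    ```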
  - antirez authored
  - antirez authored
    The approach used is to set a fixed header at the start of every listpack blob (each of which contains many entries). The header contains a "master" ID and master fields, initially taken from the first entry inserted in the listpack, so that the first entry is always well compressed. Every later entry is checked against these fields, and if they match, the SAMEFIELD flag is set in the entry so that we know to just reuse the master entry fields. The IDs are always delta-encoded against the first entry. This approach avoids cascading effects in which entries are encoded depending on the previous entries, in order to avoid complexity and rewrites of the data when data is removed in the middle (which is a planned feature).
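    A sketch of the per-entry decision described above; the flag name, bit value, and struct are illustrative, not the actual listpack encoding:

    ```c
    #include <string.h>

    #define ENTRY_FLAG_SAMEFIELDS (1 << 0)  /* hypothetical bit */

    /* Master data taken from the first entry of the listpack. */
    typedef struct {
        const char **fields;  /* master field names */
        int numfields;
    } MasterEntrySketch;

    /* If a new entry's field names match the master's exactly, only
     * the SAMEFIELDS flag and the values need to be stored; its ID is
     * in any case delta-encoded against the master ID. */
    int entryFlags(const MasterEntrySketch *m,
                   const char **fields, int numfields) {
        if (numfields != m->numfields) return 0;
        for (int i = 0; i < numfields; i++)
            if (strcmp(fields[i], m->fields[i]) != 0) return 0;
        return ENTRY_FLAG_SAMEFIELDS;
    }
    ```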
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored
  - antirez authored