- 04 Dec, 2017 6 commits
-
zhaozhao.zz authored
-
antirez authored
-
antirez authored
The function in its initial form, and after the fixes for the PSYNC2 bugs, required code duplication in multiple spots. This commit modifies it in order to always compute the script name independently, and to return the SDS of the SHA of the body: this way it can be used in all the places, including for SCRIPT LOAD, without duplicating the code to create the Lua function name.

Note that this requires re-computing the body SHA1 when EVAL sees a script for the first time, but this should not change scripting performance in any way, because defining a new script is a rare event, happening only the first time a script is seen, and the SHA1 computation is not a particularly slow process for the typical Redis script, especially compared to the actual Lua byte compiling of the body.

Note that the function used to assert() if a duplicated script was loaded; however, in two out of three use cases we actually want the function to handle duplicated scripts just fine: this happens in SCRIPT LOAD and in RDB AUX "lua" loading. Moreover the assert was not defending against some obvious failure modes, so now the function always tests against already defined functions at the start.
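To make the described flow concrete, here is a minimal sketch in C, assuming a toy stand-in hash (fakeSha1Hex is NOT SHA1), a stubbed script-dictionary lookup, and plain char* in place of SDS; none of these names are the actual scripting.c API:

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for SHA1 hex digest, for illustration only. */
static void fakeSha1Hex(char *out, const char *body) {
    unsigned long h = 5381;
    for (const char *p = body; *p; p++) h = h * 33 + (unsigned char)*p;
    snprintf(out, 41, "%040lx", h);
}

/* Stubbed "already defined?" lookup (Redis keeps a dict of scripts). */
static int scriptIsDefined(const char *sha) { (void)sha; return 0; }

/* Computes the digest itself, tolerates duplicates, and always returns
 * the caller-owned digest string, mirroring the design described above. */
char *luaCreateFunctionSketch(const char *body) {
    char *sha = malloc(41);
    fakeSha1Hex(sha, body);
    if (scriptIsDefined(sha)) return sha;   /* duplicated script: fine */
    char funcname[48];
    snprintf(funcname, sizeof(funcname), "f_%s", sha);
    printf("defining %s\n", funcname);      /* here the body is compiled */
    return sha;
}

int main(void) {
    char *sha = luaCreateFunctionSketch("return 1");
    printf("script sha: %s\n", sha);
    free(sha);
    return 0;
}
```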
-
antirez authored
The block is already inside if (allow_dup).
-
antirez authored
Unfortunately, as outlined by @soloestoy in #4505, "lua" AUX RDB field loading in the case of a duplicated script was still broken. This commit fixes the problem, and also a memory leak introduced by the previous commit. Note that we now have a regression test able to reproduce the issue, so this commit was actually tested against the regression. The original PR also had a valid fix, but I prefer to hide the details of scripting.c outside scripting.c, and later "SCRIPT LOAD" should also be able to use the function luaCreateFunction() instead of redoing the work.
-
antirez authored
With PSYNC2, forcing a full SYNC in tests is hard. With this new DEBUG subcommand we just need to call it and then CLIENT KILL TYPE master in the slave.
-
- 01 Dec, 2017 34 commits
-
antirez authored
In the case of slaves loading the RDB from the master, and in other similar cases, the script is already defined, and the function registering the script should not fail in the assert() call.
-
antirez authored
-
antirez authored
It's a bit of black magic without actually tracking it inside rax.c; however, Redis' usage of the radix tree for the stream data structure is quite consistent, so a few magic constants apparently produce results that make sense.
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
Note that streams produced by XADD in previous broken versions, containing elements of 4096 bytes or more, will be permanently broken and must be created again from scratch. Fix #4428. Fix #4349.
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
After checking with the community via Twitter (here: https://twitter.com/antirez/status/915130876861788161) the verdict was to use ":". However I later realized, after users lamented the fact that it's hard to copy IDs with just a double click, that this was the reason why I moved to "." in the first place. Fortunately "-", which was the other option with the most votes, also gets selected with a double click in most terminal applications on Linux and macOS. So my reasoning was:

1) We can't retain "." because it's actually confusing to newcomers: it looks like a floating point number, and people may be tricked into thinking they can order IDs numerically as floats.

2) Moving to a double-click-to-select format is much better. People will work with such IDs for a long time when coding / debugging. Why make a choice now that will impact this for years to come?

The only other viable option was "-", and that's what I did. Thanks.
-
antirez authored
Clang should be more prone to emitting warnings by default when a variable shadows another with the same name. GCC does this, and it can avoid bugs like this one.
-
antirez authored
-
antirez authored
-
antirez authored
The core of this change is the implementation of stream trimming, and the resulting MAXLEN option of XADD is a trivial consequence of having the trimming functionality. MAXLEN already works, but in order to be more efficient, listpack GC should be implemented; it is currently marked as a TODO item in the comments.
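A minimal sketch of one MAXLEN-style trimming strategy, assuming a chain of listpack-like blocks where only whole blocks are dropped; the names block, stream and streamTrim are illustrative, not the Redis internals, and per-entry deletion inside a block is exactly where the listpack GC mentioned above would help:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct block {
    int count;            /* entries stored in this block */
    struct block *next;
} block;

typedef struct { block *head; long total; } stream;

/* Drop whole head (oldest) blocks while doing so still leaves at least
 * maxlen entries in the stream. */
void streamTrim(stream *s, long maxlen) {
    while (s->head && s->total - s->head->count >= maxlen) {
        block *old = s->head;
        s->total -= old->count;
        s->head = old->next;
        free(old);
    }
}

int main(void) {
    stream s = {0};
    for (int i = 0; i < 3; i++) {       /* three blocks of 100 entries */
        block *b = malloc(sizeof(*b));
        b->count = 100; b->next = s.head;
        s.head = b; s.total += 100;
    }
    streamTrim(&s, 150);                /* keeps 200: whole blocks only */
    printf("entries left: %ld\n", s.total);
    while (s.head) { block *b = s.head; s.head = b->next; free(b); }
    return 0;
}
```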
-
antirez authored
Listpack max size is a tradeoff between space and time. A 2k max size puts the memory usage approximately at a similar order of magnitude (5 million entries went from 96 to 120 MB), but the speed of range queries doubled (because there are half as many entries to scan in the average case). Lower values could be considered, or maybe this parameter should be made tunable.
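For scale, the numbers above work out to 96 MB / 5M ≈ 20 bytes per entry before, versus 120 MB / 5M = 24 bytes per entry after: roughly 25% more memory in exchange for about 2x range query speed.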
-
antirez authored
We used to have the master ID stored at the start of the listpack; however, using the key directly makes more sense in order to create a space efficient representation: the key in the radix tree is very unlikely to change anyway, because of how the stream is implemented. Moreover, when merging nodes, rewriting the merged listpacks is the most sensible operation anyway, and we can use the iterator and the append-to-stream function in order to avoid re-implementing the code needed for merging.

This commit also adds two items at the start of the listpack: the number of valid items inside the listpack, and the number of items marked as deleted. This means that there is no need to scan a listpack in order to understand if it's a good candidate for garbage collection: the GC can be triggered when the ratio between valid and deleted items crosses a threshold.
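A minimal sketch of the header-only GC check this enables, assuming an invented 50% threshold and simplified names (lpHeader and lpNeedsGc are illustrative, not the actual code):

```c
#include <stdio.h>

typedef struct {
    long valid;     /* entries still alive in the listpack */
    long deleted;   /* entries flagged as deleted */
} lpHeader;

/* No scan needed: the header alone tells us how many entries are dead. */
int lpNeedsGc(const lpHeader *h) {
    long total = h->valid + h->deleted;
    return total > 0 && h->deleted * 2 >= total;  /* assumed 50% threshold */
}

int main(void) {
    lpHeader h = { 40, 60 };
    printf("needs gc: %d\n", lpNeedsGc(&h));   /* prints 1 */
    return 0;
}
```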
-
antirez authored
-
antirez authored
The approach used is to set a fixed header at the start of every listpack blob (each of which contains many entries). The header contains a "master" ID and master fields, initially obtained from the first entry inserted in the listpack, so that the first entry is always well compressed. Later, every new entry is checked against these fields, and if it matches, the SAMEFIELD flag is set in the entry, so that we know to just use the master entry fields. The IDs are always delta-encoded against the first entry. This approach avoids cascading effects in which entries are encoded depending on the previous entries, in order to avoid complexity and rewriting of the data when data is removed in the middle (which is a planned feature).
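A compact sketch of the encoding decision, assuming fields are compared as a single string and a made-up flag layout; encodeEntry and lpMaster are illustrative names, not the on-disk format:

```c
#include <stdio.h>
#include <string.h>

#define FLAG_SAMEFIELD (1<<0)   /* flag name from the message; layout invented */

typedef struct {
    unsigned long long master_ms;   /* ID of the first (master) entry */
    const char *master_fields;      /* e.g. "temperature,humidity" */
} lpMaster;

/* Returns entry flags and the delta-encoded ID part. */
int encodeEntry(const lpMaster *m, unsigned long long ms,
                const char *fields, unsigned long long *id_delta) {
    *id_delta = ms - m->master_ms;          /* IDs delta-encoded vs. master */
    int flags = 0;
    if (strcmp(fields, m->master_fields) == 0)
        flags |= FLAG_SAMEFIELD;            /* reuse the master entry fields */
    return flags;
}

int main(void) {
    lpMaster m = { 1512000000000ULL, "temperature,humidity" };
    unsigned long long delta;
    int f = encodeEntry(&m, 1512000000123ULL, "temperature,humidity", &delta);
    printf("flags=%d delta=%llu\n", f, delta);  /* flags=1 delta=123 */
    return 0;
}
```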
-
antirez authored
blockForKeys() was not freeing the allocation holding the ID when the key was found already busy. Fortunately the unit test explicitly checked blocking multiple times for the same key (a check copied from the blocking lists regression tests), so the bug was detected by the Redis test leak checker.
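The leak pattern and its fix, sketched under heavy simplification; keyAlreadyBlocked and the storage variable are stubs, and the real code uses the client blocking structures:

```c
#include <stdlib.h>
#include <string.h>

static char *stored_id;   /* stand-in for the real blocking structures */

static int keyAlreadyBlocked(const char *key) { (void)key; return 1; }

void blockForKeySketch(const char *key, const char *id) {
    char *idcopy = strdup(id);        /* per-key allocation holding the ID */
    if (keyAlreadyBlocked(key)) {
        free(idcopy);                 /* the fix: this branch used to leak */
        return;
    }
    stored_id = idcopy;               /* ownership moves to the structures */
}

int main(void) {
    blockForKeySketch("mystream", "1512000000000-0");
    return 0;
}
```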
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
XADD was suboptimal in its first incarnation: it was not able to accept an ID (very useful for replication) nor options for capped streams, and the keyspace notification for streams was not implemented.
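For example, after this change an insertion can specify an explicit ID or a cap, along the lines of XADD mystream 1512000000000-0 field value or XADD mystream MAXLEN 1000 * field value (illustrative invocations, not taken from the commit itself).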
-
antirez authored
A client may lose a lot of time between invocations of blocking XREAD, for example because it is busy processing the messages, or for any other reason. When it comes back, it may provide a message ID low enough that the server would have to send an unreasonable number of messages in a single call. For this reason we set a COUNT when the client is blocked by an XREAD call, even if no COUNT is given. It is arbitrarily set to 1000 because that is enough to avoid slowing down the reception of many messages, but low enough to avoid blocking.
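A tiny sketch of that default-COUNT rule; the constant name and the helper are assumptions for illustration, not the actual server code:

```c
#include <stdio.h>

#define XREAD_BLOCKED_DEFAULT_COUNT 1000   /* the 1000 cap from the message */

long effectiveCount(long user_count, int client_was_blocked) {
    if (user_count > 0) return user_count;          /* explicit COUNT wins */
    if (client_was_blocked) return XREAD_BLOCKED_DEFAULT_COUNT;
    return 0;                                       /* 0 = no limit */
}

int main(void) {
    printf("%ld\n", effectiveCount(0, 1));   /* prints 1000 */
    return 0;
}
```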
-
antirez authored
-