- 11 Jul, 2017 1 commit
antirez authored
See issue #3844 for more information.
- 10 Jul, 2017 1 commit
Guy Benoish authored
- 06 Jul, 2017 1 commit
antirez authored
- 05 Jul, 2017 1 commit
antirez authored
They are technically like commands executed from external clients one after the other, and do not constitute a single atomic entity.
- 03 Jul, 2017 1 commit
Dvir Volk authored
- 27 Jun, 2017 1 commit
antirez authored
The original RDB serialization format was not parsable without the module loaded, because the structure was managed only by the module itself. Moreover RDB is a streaming protocol in the sense that it is both produced in an append-only fashion, and is also sometimes directly sent to the socket (in the case of diskless replication).

The fact that module values cannot be parsed without the relevant module loaded is a problem in many ways: RDB checking tools must have the modules loaded even for doing things not involving the value at all, like splitting an RDB into N RDBs by key or alike, or just checking the RDB for sanity.

In theory module values could be just a blob of data with a prefixed length in order for us to be able to skip them. However prefixing the values with a length would mean one of the following:

1. To be able to write some data at a previous offset. This breaks streaming.
2. To buffer values before outputting them. This breaks performance.
3. To have some chunked RDB output format. This breaks simplicity.

Moreover, the above solution still makes module values a totally opaque matter, with the following problems:

1. The RDB check tool can just skip the value without being able to at least check the general structure. For datasets composed mostly of module values this means to just check the outer level of the RDB, not actually doing any check on most of the data itself.
2. It is not possible to do any recovering or processing of data for which a module no longer exists in the future, or is unknown.

So this commit implements a different solution. The module RDB serialization API is composed of well defined calls to store integers, floats, doubles or strings. After this commit, the parts generated by the module API have a one-byte prefix for each of the above emitted parts, and there is a final EOF byte as well. So even if we don't know exactly how to interpret a module value, we can always parse it at a high level, check the overall structure, understand the types used to store the information, and easily skip the whole value.

The change is backward compatible: older RDB files can still be loaded since the new encoding has a new RDB type: MODULE_2 (of value 7). The commit also implements the ability to check RDB files for sanity taking advantage of the new feature.
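For illustration only, a checking tool could walk such a value roughly as in the sketch below. The opcode names and values, the readLength() helper, and its fixed 8-byte integer encoding are made up for brevity; they are not the actual constants and functions used by rdb.c.

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative opcodes: one byte before every part emitted by the
     * module API, plus a terminating EOF opcode. Values are made up. */
    enum { OP_EOF = 0, OP_SINT, OP_UINT, OP_FLOAT, OP_DOUBLE, OP_STRING };

    /* Stand-in for the real RDB length/integer decoder: here a length is
     * just 8 little-endian bytes (no EOF handling, for brevity). */
    static long long readLength(FILE *fp) {
        uint64_t v = 0;
        for (int i = 0; i < 8; i++) v |= (uint64_t)fgetc(fp) << (8*i);
        return (long long)v;
    }

    /* Skip (or structurally check) a MODULE_2 value without knowing how
     * the module interprets it. */
    int skipModuleValue(FILE *fp) {
        for (;;) {
            int opcode = fgetc(fp);
            switch (opcode) {
            case OP_EOF:    return 0;                        /* end of value */
            case OP_SINT:
            case OP_UINT:   readLength(fp); break;           /* integer part */
            case OP_FLOAT:  fseek(fp, 4, SEEK_CUR); break;   /* 32 bit float */
            case OP_DOUBLE: fseek(fp, 8, SEEK_CUR); break;   /* 64 bit double */
            case OP_STRING: fseek(fp, (long)readLength(fp), SEEK_CUR); break;
            default:        return -1;                       /* corrupt value */
            }
        }
    }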
- 03 May, 2017 3 commits
- 02 May, 2017 3 commits
- 28 Apr, 2017 1 commit
antirez authored
- 10 Apr, 2017 2 commits
antirez authored
antirez authored
If a thread unblocks a client blocked in a module command, by using the RedisModule_UnblockClient() API, the event loop may not be awakened until the next timeout of the multiplexing API or the next unrelated I/O operation on other clients. We actually want the client to be served ASAP, so a mechanism is needed in order for the unblocking API to inform Redis that there is a client to serve ASAP.

This commit fixes the issue using the old trick of the pipe: when a client needs to be unblocked, a byte is written to a pipe. When we run the list of clients blocked in modules, we consume all the bytes written to the pipe. Writes and reads are performed inside the context of the mutex, so no race is possible in which we consume the bytes that are actually related to an awake request for a client that should still be put into the list of clients to unblock.

It was verified that after the fix the server handles the blocked clients with the expected short delay. Thanks to @dvirsky for understanding there was such a problem and reporting it.
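The general shape of this "self-pipe" trick looks roughly like the sketch below; this is not the actual Redis code and the names are made up, it only illustrates the mechanism described above.

    #include <unistd.h>
    #include <fcntl.h>
    #include <errno.h>

    static int wakeup_pipe[2];   /* [0]: read end watched by the event loop,
                                    [1]: write end used by unblocking threads. */

    static int setupWakeupPipe(void) {
        if (pipe(wakeup_pipe) == -1) return -1;
        /* Non-blocking read end, so draining stops once the pipe is empty. */
        return fcntl(wakeup_pipe[0], F_SETFL, O_NONBLOCK);
    }

    /* Called (under the mutex) by the thread unblocking a client: the byte
     * makes the poll/epoll call in the event loop return immediately. */
    static void signalClientUnblocked(void) {
        char b = 'x';
        while (write(wakeup_pipe[1], &b, 1) == -1 && errno == EINTR);
    }

    /* Called (under the mutex) by the event loop handler registered on the
     * read end, right before processing the list of unblocked clients. */
    static void drainWakeupPipe(void) {
        char buf[128];
        while (read(wakeup_pipe[0], buf, sizeof(buf)) > 0);
    }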
- 06 Mar, 2017 1 commit
itamar authored
- 01 Mar, 2017 1 commit
Dvir Volk authored
- 20 Feb, 2017 1 commit
antirez authored
This change attempts to switch to a hash function which mitigates the effects of the HashDoS attack (a denial of service attack trying to force data structures into worst case behavior) while at the same time providing Redis with a hash function that does not expect the input data to be word aligned, a condition no longer true now that sds.c strings have a variable length header.

Note that even using a hash function for which collisions cannot be generated without knowing the seed, special implementation details or the exposure of the seed in an indirect way (for example the ability to add elements to a Set and check the order in which Redis returns them with SMEMBERS) may sometimes make the attacker's life simpler in the process of trying to guess the correct seed. However the next step would be to switch to a log(N) data structure when too many items in a single bucket are detected: this seems like overkill in the case of Redis.

SPEED REGRESSION TESTS: In order to verify that switching from MurmurHash to SipHash had no impact on speed, a set of benchmarks involving fast insertion of 5 million keys was performed. The results show Redis with SipHash in high pipelining conditions to be about 4% slower compared to using the previous hash function. However this could partially be related to the fact that the current implementation does not attempt to hash whole words at a time but reads single bytes, in order to have an output which is endian-neutral and at the same time works on systems where unaligned memory accesses are a problem. Further x86 specific optimizations should be tested; the function may easily get to the same level as MurmurHash2 if a few optimizations are performed.
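The resulting hashing interface has roughly the shape sketched below. This is not the actual implementation (which lives in the SipHash source file), and the seed handling is simplified; it only shows why a secret, per-process seed and byte-at-a-time input address the two problems described above.

    #include <stdint.h>
    #include <stddef.h>

    /* Provided elsewhere: a keyed hash that reads its input one byte at a
     * time, so 'in' does not have to be word aligned. */
    uint64_t siphash(const uint8_t *in, size_t inlen, const uint8_t *k);

    static uint8_t dict_hash_seed[16];   /* filled with random bytes at startup */

    /* Without the 128-bit secret seed an attacker cannot precompute
     * colliding keys, which is what mitigates the HashDoS attack. */
    uint64_t dictGenHashFunction(const void *key, size_t len) {
        return siphash(key, len, dict_hash_seed);
    }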
- 12 Jan, 2017 1 commit
antirez authored
As a side effect of supporting it, we no longer crash when MEMORY USAGE is called against a module data type. Close #3637.
- 14 Dec, 2016 1 commit
Dvir Volk authored
- 13 Dec, 2016 1 commit
antirez authored
BACKGROUND AND USE CASE

Redis slaves are normally write only, however they support a "writable" mode which is very handy when scaling reads on slaves that actually need write operations in order to access data. For instance imagine having slaves replicating certain Sets keys from the master. When accessing the data on the slave, we want to perform intersections between such Sets values. However we don't want to intersect each time: caching the intersection for some time is often a good idea. To do so, it is possible to set up a slave as a writable slave, and perform the intersection on the slave side, perhaps setting a TTL on the resulting key so that it will expire after some time.

THE BUG

Problem: in order to have consistent replication, expiring of keys in Redis replication is up to the master, which synthesizes DEL operations to send in the replication stream. However slaves logically expire keys by hiding them from read attempts from clients, so that if the master did not promptly send a DEL, clients still see logically expired keys as non existing. Because slaves don't actively expire keys by actually evicting them but just mask them from the point of view of read operations, if a key is created in a writable slave, and an expire is set, the key will be leaked forever:

1. No DEL will be received from the master, which does not know about such a key at all.
2. No eviction will be performed by the slave, since it needs to disable eviction because it's up to masters, otherwise consistency of data is lost.

THE FIX

In order to fix the problem, the slave should be able to tag, in some way, keys that were created on the slave side and have an expire set. My solution involves using a unique additional dictionary, created by the writable slave only if needed. The dictionary is obviously keyed by the key name that we need to track: all the keys that are set with an expire directly by a client writing to the slave are tracked. The value in the dictionary is a bitmap of all the DBs where such a key name needs to be tracked, so that we can use a single dictionary to track keys in all the DBs used by the slave (actually this limits the solution to the first 64 DBs, but the default with Redis is to use 16 DBs). This solution has a small complexity and CPU penalty, which is zero when the feature is not used. The slave-side eviction is encapsulated in code which is not coupled with the rest of the Redis core, except for the hook needed to track the keys.

TODO

I'm doing the first smoke tests to see if the feature works as expected: so far so good. Unit tests should be added before merging into the 4.0 branch.
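A condensed sketch of the tracking step described above is shown below. Identifier names are illustrative, and it assumes Redis's dict and sds types with a dict type that has no value destructor, so the 64-bit bitmap can be stored directly as the entry value.

    #include <stdint.h>

    static dict *slaveKeysWithExpire = NULL;  /* created lazily by a writable slave */

    /* Remember that 'keyname' got an expire set directly on this slave in
     * DB number 'dbid', by setting the corresponding bit in the per-key bitmap. */
    void trackSlaveKeyWithExpire(int dbid, sds keyname) {
        if (dbid >= 64) return;                      /* the bitmap covers 64 DBs */
        dictEntry *de = dictFind(slaveKeysWithExpire, keyname);
        uint64_t dbs = de ? (uint64_t)(uintptr_t)dictGetVal(de) : 0;
        dbs |= (uint64_t)1 << dbid;                  /* mark this DB as tracked */
        if (de) dictSetVal(slaveKeysWithExpire, de, (void*)(uintptr_t)dbs);
        else dictAdd(slaveKeysWithExpire, sdsdup(keyname), (void*)(uintptr_t)dbs);
    }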
- 30 Nov, 2016 2 commits
- 24 Nov, 2016 1 commit
antirez authored
We already have a reference to the client pointer, so there is no need to access the already freed structure. Close #3634.
- 31 Oct, 2016 1 commit
Dvir Volk authored
- 13 Oct, 2016 2 commits
- 07 Oct, 2016 4 commits
- 06 Oct, 2016 2 commits
antirez authored
It was noted by @dvirsky that it is not possible to use string functions when writing the AOF file. This is sometimes critical since the command rewriting may need to be built in the context of the AOF callback, and without access to the context, and with the limited types that the AOF production functions will accept, this can be an issue. Moreover there are other needs that we can't anticipate regarding the ability to use Redis Modules APIs using the context in order to build representations to emit AOF / RDB.

Because of this a new API was added that allows the user to get a temporary context from the IO context. If obtained, the context is automatically released when the RDB / AOF callback returns. Calling the function multiple times always returns the same context, since it is invalid to have more than a single context.
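A minimal sketch of an AOF rewrite callback using the new call might look as follows; the module type, command name and counter field are hypothetical, and the exact signatures should be checked against redismodule.h.

    #include "redismodule.h"

    struct MyType { long long counter; };   /* hypothetical module value */

    /* aof_rewrite callback: obtain a temporary context from the IO object
     * so that string functions can be used; the context is released
     * automatically when the callback returns. */
    void MyTypeAofRewrite(RedisModuleIO *aof, RedisModuleString *key, void *value) {
        struct MyType *mt = value;
        RedisModuleCtx *ctx = RedisModule_GetContextFromIO(aof);
        RedisModuleString *payload =
            RedisModule_CreateStringPrintf(ctx, "%lld", mt->counter);
        RedisModule_EmitAOF(aof, "MYTYPE.SET", "ss", key, payload);
    }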
antirez authored
- 02 Oct, 2016 2 commits
- 21 Sep, 2016 1 commit
Dvir Volk authored
- 14 Sep, 2016 1 commit
oranagra authored
Notes by @antirez: This patch was picked from a larger commit by Oran and adapted to change the API a bit. The basic idea is to avoid double lookups when there is a need to use the value of the deleted entry.

BEFORE:

    entry = dictFind( ... );       /* 1st lookup. */
    /* Do something with the entry. */
    dictDelete(...);               /* 2nd lookup. */

AFTER:

    entry = dictUnlink( ... );     /* 1st lookup. */
    /* Do something with the entry. */
    dictFreeUnlinkedEntry(entry);  /* No lookups! */
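As a concrete, hypothetical use of the pattern, assuming a dict with sds keys, pointer-sized integer values, no value destructor, and the two-argument form of the free function: a value can be consumed and its key deleted with a single lookup.

    #include <stdint.h>

    long long takeCounter(dict *d, sds key) {
        dictEntry *de = dictUnlink(d, key);                /* the only lookup */
        if (de == NULL) return 0;
        long long value = (long long)(uintptr_t)dictGetVal(de); /* use the entry... */
        dictFreeUnlinkedEntry(d, de);                      /* ...then free it */
        return value;
    }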
- 09 Sep, 2016 1 commit
wyx authored
- 03 Aug, 2016 2 commits