- 14 Apr, 2017 3 commits
-
-
antirez authored
The gossip section times are 32 bit, so they cannot store the millisecond time, just the seconds approximation, which is good enough for our uses. At the same time however, when comparing the gossip section times of other nodes with our node's view, we need to convert back to milliseconds. Related to #3929. Without this change the patch to reduce the traffic in the bus messages does not work.
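For illustration, a minimal sketch of the two conversions (hypothetical helper names; the real code does this inline in cluster.c, and network byte order handling is omitted):

    #include <stdint.h>

    typedef long long mstime_t;  /* Milliseconds, as used in cluster.c. */

    /* Packing: the millisecond precision is dropped when writing the
     * 32 bit gossip field. */
    uint32_t gossipTimeFromMs(mstime_t ms) { return (uint32_t)(ms / 1000); }

    /* Unpacking: convert back to milliseconds before comparing against
     * our own millisecond-resolution view of the node. */
    mstime_t msFromGossipTime(uint32_t secs) { return ((mstime_t)secs) * 1000; }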
-
antirez authored
-
antirez authored
Clusters of bigger sizes tend to have a lot of traffic in the cluster bus just for failure detection: a node will try to get a ping reply from another node no later than half the node timeout has elapsed, in order to avoid a false positive.

However this means that if we have N nodes and the node timeout is set to, for instance, M seconds, each node will ping all the other N nodes every M/2 seconds, that is, about 2*N/M pings per second. Each ping receives a pong, so each node generates about 4*N/M packets per second, and with N nodes doing this the total is on the order of 4*N*N/M messages per second. In a 100 node cluster with a timeout of 60 seconds, this translates to a total of about 4*100*100/60, roughly 666 packets per second, summing all the packets exchanged by all the nodes. This is, as you can guess, a lot...

So this patch changes the implementation in a very simple way, in order to trust the reports of other nodes: if a node A reports a node B as alive at least up to a given time, we update our view accordingly. The problem with this approach is that it could result in a subset of nodes being able to reach a given node X, preventing the others from detecting that X is actually not reachable from the majority of nodes. So the above algorithm is refined by trusting other nodes only if we do not currently have a ping pending for node X, and if there are no failure reports for that node. Since each node anyway pings 10 other nodes every second (one node every 100 milliseconds), even when trusting the other nodes' reports we will eventually detect if a given node is down from our POV.

To understand the number of packets the cluster would exchange for failure detection with the patch, we can take the random PINGs the cluster sends anyway as a baseline: each node sends 10 packets per second, so if no additional packets were sent, the total traffic, including PONG packets, would be:

    Total messages per second = N*10*2

However trusting other nodes' gossip sections will not always prevent pinging nodes for the "half timeout reached" rule. The math involved in computing the actual rate as N and M change is quite complex, and depends on another parameter as well: the number of entries in the gossip section of PING and PONG packets. However it is possible to compare what happens in clusters of different sizes experimentally. After applying this patch, a very important reduction in the number of packets exchanged is trivial to observe, without apparent impact on failure detection performance. Actual numbers for different cluster sizes should be published in the Redis Cluster documentation in the future. Related to #3929.
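A minimal sketch of the trust rule described above (hypothetical field and function names, loosely modeled on what cluster.c does while processing a gossip section; not the actual Redis code):

    #include <stdint.h>
    #include <sys/time.h>

    typedef long long mstime_t;

    #define NODE_PFAIL (1<<0)       /* Possible failure, from our POV. */
    #define NODE_FAIL  (1<<1)       /* Failure agreed by the majority. */

    typedef struct clusterNode {
        int flags;
        mstime_t ping_sent;         /* 0 if no ping is currently pending. */
        mstime_t pong_received;     /* Our view of the last pong, in ms. */
        int failure_reports;        /* Nodes reporting this one as failing. */
    } clusterNode;

    static mstime_t mstime(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return ((mstime_t)tv.tv_sec) * 1000 + tv.tv_usec / 1000;
    }

    /* Trust the sender's fresher pong time only if, from our POV, the
     * node is not suspected at all: no FAIL/PFAIL flags, no ping of ours
     * in flight, no failure reports filed by other nodes. */
    void maybeTrustGossipPongTime(clusterNode *node, uint32_t gossip_pong_secs) {
        if (node->flags & (NODE_FAIL | NODE_PFAIL)) return;
        if (node->ping_sent != 0) return;
        if (node->failure_reports != 0) return;

        mstime_t pongtime = ((mstime_t)gossip_pong_secs) * 1000; /* Back to ms. */
        /* Accept only sane times: not in the future (beyond a small skew
         * allowance) and newer than what we already know. */
        if (pongtime <= mstime() + 500 && pongtime > node->pong_received)
            node->pong_received = pongtime;
    }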
-
- 13 Apr, 2017 1 commit
-
-
antirez authored
First step in changing Cluster to use fewer messages. Related to issue #3929.
-
- 12 Apr, 2017 2 commits
- 11 Apr, 2017 3 commits
- 10 Apr, 2017 2 commits
-
-
antirez authored
-
antirez authored
If a thread unblocks a client blocked in a module command via the RedisModule_UnblockClient() API, the event loop may not be awakened until the next timeout of the multiplexing API or the next unrelated I/O operation on another client. We actually want the client to be served ASAP, so a mechanism is needed for the unblocking API to inform Redis that there is a client to serve ASAP.

This commit fixes the issue using the old pipe trick: when a client needs to be unblocked, a byte is written to a pipe. When we run the list of clients blocked in modules, we consume all the bytes written to the pipe. Writes and reads are performed inside the context of the mutex, so no race is possible in which we consume the bytes related to an awake request for a client that should still be put into the list of clients to unblock.

It was verified that after the fix the server handles the blocked clients with the expected short delay.

Thanks to @dvirsky for understanding there was such a problem and reporting it.
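A minimal sketch of the self-pipe trick (hypothetical names; the real implementation lives in module.c, with the read end registered with the event loop in ae.c):

    #include <unistd.h>
    #include <fcntl.h>
    #include <pthread.h>

    static int module_pipe[2];  /* [0] = read end, [1] = write end. */
    static pthread_mutex_t unblocked_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void createModulePipe(void) {
        (void)pipe(module_pipe);
        /* Non-blocking, so draining the pipe never stalls the loop. */
        fcntl(module_pipe[0], F_SETFL, O_NONBLOCK);
        fcntl(module_pipe[1], F_SETFL, O_NONBLOCK);
    }

    /* Called from the module's thread: queue the client, wake the loop. */
    void unblockClient(void *bc) {
        char byte = 0;
        pthread_mutex_lock(&unblocked_mutex);
        /* ... append bc to the list of clients to unblock ... */
        (void)write(module_pipe[1], &byte, 1); /* Wakes the event loop. */
        pthread_mutex_unlock(&unblocked_mutex);
    }

    /* Event loop handler for the pipe's read end: drain and serve. The
     * reads happen under the same mutex as the writes, so a wakeup byte
     * is never consumed before its client is visible in the list. */
    void handleUnblockedClients(void) {
        char buf[128];
        pthread_mutex_lock(&unblocked_mutex);
        while (read(module_pipe[0], buf, sizeof(buf)) > 0);
        /* ... serve every client in the list, then empty it ... */
        pthread_mutex_unlock(&unblocked_mutex);
    }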
-
- 08 Apr, 2017 2 commits
-
-
antirez authored
Important bugs fixed.
-
- 07 Apr, 2017 1 commit
-
-
antirez authored
-
- 27 Mar, 2017 1 commit
-
-
antirez authored
-
- 01 Mar, 2017 1 commit
-
-
Dvir Volk authored
-
- 23 Feb, 2017 3 commits
-
-
Salvatore Sanfilippo authored
-
Salvatore Sanfilippo authored
-
Salvatore Sanfilippo authored
-
- 22 Feb, 2017 1 commit
-
-
antirez authored
Testing with the Solaris C compiler (SunOS 5.11 11.2 sun4v sparc sun4v) there were issues compiling due to atomicvar.h, and running the tests also failed because of "tail" usage not conforming to the Solaris tail implementation. This commit fixes both issues.
-
- 21 Feb, 2017 2 commits
-
-
antirez authored
For performance reasons we use a reduced-rounds variant of SipHash. This should still provide enough protection, and the effects on the hash table distribution are nonexistent. If some real world attack on SipHash 1-2 is ever found, we can trivially switch to something more secure. Anyway it is a big step forward from MurmurHash, for which it is trivial to generate *seed independent* colliding keys... The speed penalty introduced by SipHash 2-4, around 4%, was too big a price to pay compared to the effectiveness of the HashDoS attack against SipHash 1-2, considering that so far in Redis history no such incident has ever happened, even while using hash functions that are trivial to collide.
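For reference, a compact sketch following the structure of the public SipHash reference implementation, with the compression and finalization round counts as parameters: SipHash-2-4 uses 2 and 4, the reduced variant discussed here uses 1 and 2. This is not Redis's actual siphash.c, which differs in details:

    #include <stdint.h>
    #include <stddef.h>

    #define cROUNDS 1 /* Compression rounds: 2 in SipHash-2-4, 1 in 1-2. */
    #define dROUNDS 2 /* Finalization rounds: 4 in SipHash-2-4, 2 in 1-2. */

    #define ROTL(x, b) ((uint64_t)(((x) << (b)) | ((x) >> (64 - (b)))))
    #define SIPROUND do {                                               \
            v0 += v1; v1 = ROTL(v1, 13); v1 ^= v0; v0 = ROTL(v0, 32);  \
            v2 += v3; v3 = ROTL(v3, 16); v3 ^= v2;                     \
            v0 += v3; v3 = ROTL(v3, 21); v3 ^= v0;                     \
            v2 += v1; v1 = ROTL(v1, 17); v1 ^= v2; v2 = ROTL(v2, 32);  \
        } while (0)

    /* Endian-neutral little-endian load, one byte at a time. */
    static uint64_t u8to64_le(const uint8_t *p) {
        uint64_t v = 0;
        for (int i = 0; i < 8; i++) v |= ((uint64_t)p[i]) << (8 * i);
        return v;
    }

    uint64_t siphash(const uint8_t *in, size_t inlen, const uint8_t k[16]) {
        uint64_t v0 = 0x736f6d6570736575ULL, v1 = 0x646f72616e646f6dULL;
        uint64_t v2 = 0x6c7967656e657261ULL, v3 = 0x7465646279746573ULL;
        uint64_t k0 = u8to64_le(k), k1 = u8to64_le(k + 8);
        uint64_t b = ((uint64_t)inlen) << 56;
        const uint8_t *end = in + inlen - (inlen % 8);

        v3 ^= k1; v2 ^= k0; v1 ^= k1; v0 ^= k0;
        for (; in != end; in += 8) {               /* Compression. */
            uint64_t m = u8to64_le(in);
            v3 ^= m;
            for (int i = 0; i < cROUNDS; i++) SIPROUND;
            v0 ^= m;
        }
        for (int i = 0; i < (int)(inlen & 7); i++) /* Last 0..7 bytes. */
            b |= ((uint64_t)in[i]) << (8 * i);
        v3 ^= b;
        for (int i = 0; i < cROUNDS; i++) SIPROUND;
        v0 ^= b;
        v2 ^= 0xff;                                /* Finalization. */
        for (int i = 0; i < dROUNDS; i++) SIPROUND;
        return v0 ^ v1 ^ v2 ^ v3;
    }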
-
antirez authored
1. Refactor the memory overhead computation into a function.
2. Every 10 keys evicted, check directly whether memory usage has already reached the target value, since otherwise we don't count the memory that the background thread has reclaimed in the meantime.
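A sketch of point 2, with hypothetical hooks standing in for the real evict.c machinery:

    #include <stddef.h>

    extern size_t evictOneKey(void);              /* Estimated bytes freed. */
    extern size_t usedMemoryMinusOverhead(void);  /* Direct measurement. */

    void performEviction(size_t mem_tofree, size_t maxmemory) {
        size_t mem_freed = 0;
        long keys_freed = 0;
        while (mem_freed < mem_tofree) {
            mem_freed += evictOneKey();
            keys_freed++;
            /* Every 10 keys, measure directly: the per-key estimates miss
             * memory that the lazyfree background thread has meanwhile
             * reclaimed, so we may be able to stop earlier. */
            if ((keys_freed % 10) == 0 &&
                usedMemoryMinusOverhead() <= maxmemory) break;
        }
    }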
-
- 20 Feb, 2017 5 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
This change attempts to switch to a hash function which mitigates the effects of the HashDoS attack (a denial of service attack trying to force data structures into worst case behavior), while at the same time providing Redis with a hash function that does not expect the input data to be word aligned, a condition no longer true now that sds.c strings have a variable length header.

Note that even using a hash function for which collisions cannot be generated without knowing the seed, implementation details or an indirect exposure of the seed (for example the ability to add elements to a Set and check the order in which Redis returns them with SMEMBERS) may make the attacker's life simpler in the process of trying to guess the correct seed. However the next step would be to switch to a log(N) data structure when too many items in a single bucket are detected: this seems like overkill in the case of Redis.

SPEED REGRESSION TESTS: In order to verify that switching from MurmurHash to SipHash had no impact on speed, a set of benchmarks involving fast insertion of 5 million keys were performed. The result shows Redis with SipHash in high pipelining conditions to be about 4% slower compared to the previous hash function. However this could partially be related to the fact that the current implementation does not attempt to hash whole words at a time, but reads single bytes, in order to have an output which is endian-neutral and at the same time works on systems where unaligned memory accesses are a problem. Further X86 specific optimizations should be tested; the function may easily reach the same level as MurmurHash2 if a few optimizations are performed.
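A sketch of the trade-off the last paragraph describes (illustrative helper names, not the actual siphash.c code):

    #include <stdint.h>
    #include <string.h>

    /* Byte-at-a-time load: legal at any address and returns the same
     * value on big and little endian hosts (bytes interpreted as little
     * endian), which is what keeps the hash output portable. */
    static uint64_t load64Bytes(const unsigned char *p) {
        uint64_t v = 0;
        for (int i = 0; i < 8; i++) v |= ((uint64_t)p[i]) << (8 * i);
        return v;
    }

    /* Word-at-a-time alternative, the kind of X86 specific optimization
     * mentioned above: compilers reduce the memcpy to one load, but the
     * result is only correct as-is on little endian hosts. */
    static uint64_t load64WordLE(const unsigned char *p) {
        uint64_t v;
        memcpy(&v, p, 8);
        return v;
    }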
-
John.Koepi authored
-
antirez authored
Close #3804.
-
- 19 Feb, 2017 4 commits
-
-
Salvatore Sanfilippo authored
GCC will produce certain unaligned multi-word load/store instructions that are trapped by the Linux kernel, since ARM v6 cannot handle them with unaligned addresses. Better to use the slower but safer implementation than to generate the exception, which would be very slow anyway.
-
Salvatore Sanfilippo authored
I'm not sure how much testing Jemalloc gets on ARM; moreover, compiling Redis with Jemalloc support on not very powerful devices, like most of the ARM boards people will build Redis on, is extremely slow. It is still possible to enable the Jemalloc build if needed by using "make MALLOC=jemalloc".
-
Salvatore Sanfilippo authored
However note that on architectures supporting 64 bit unaligned accesses, memcpy(...,...,8) is likely translated to a simple word memory move anyway.
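A minimal sketch of the memcpy-based accessors this refers to (illustrative names):

    #include <stdint.h>
    #include <string.h>

    /* Alignment-safe accessors: dereferencing a misaligned uint64_t*
     * can fault (or trap into the kernel) on ARM v5/v6, while memcpy is
     * always legal. On architectures with fast unaligned accesses the
     * compiler emits the same single load/store instruction anyway. */
    static uint64_t getUnalignedU64(const void *p) {
        uint64_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }

    static void putUnalignedU64(void *p, uint64_t v) {
        memcpy(p, &v, sizeof(v));
    }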
-
Salvatore Sanfilippo authored
-
- 09 Feb, 2017 1 commit
-
-
antirez authored
After investigating issue #3796, it was discovered that MIGRATE could call migrateCloseSocket() after the original MIGRATE c->argv was already rewritten as a DEL operation. As a result the host/port passed to migrateCloseSocket() could be anything, often a NULL pointer that gets dereferenced, crashing the server. Now, when there is a socket error at a stage where no retry will be performed, the socket is closed earlier, before we rewrite the argument vector. Moreover a check was added so that later, in the socket_err label, there is no further attempt at closing the socket if the arguments were rewritten. This fix should resolve the bug reported in #3796.
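A sketch of the two-part fix (hypothetical names and control flow, loosely following the description above; not the actual migrateCommand() code):

    typedef struct client { void **argv; } client;
    void migrateCloseSocket(void *host, void *port);  /* Assumed helper. */
    void rewriteAsDel(client *c);                     /* Assumed helper. */

    void migrateSketch(client *c, int io_error, int may_retry) {
        int argv_rewritten = 0;

        if (io_error && !may_retry) {
            /* Part 1: definitive error, close NOW while c->argv[1] and
             * c->argv[2] are still the original host and port. */
            migrateCloseSocket(c->argv[1], c->argv[2]);
            return;
        }

        if (!io_error) {
            /* Success: MIGRATE is rewritten as a DEL for the replication
             * stream, so argv[1]/argv[2] are no longer host/port. */
            rewriteAsDel(c);
            argv_rewritten = 1;
        }

        /* Part 2 (the socket_err path in the real code): never pass the
         * rewritten argv to migrateCloseSocket(). */
        if (io_error && !argv_rewritten)
            migrateCloseSocket(c->argv[1], c->argv[2]);
    }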
-
- 01 Feb, 2017 2 commits
-
-
antirez authored
-
antirez authored
Ziplists had a bug that was discovered while investigating a different issue, resulting in a corrupted ziplist representation and a likely segmentation fault and/or data corruption of the last element of the ziplist, once the ziplist is accessed again.

The bug happens when a specific set of insertions/deletions is performed so that an entry is encoded to have a "prevlen" field (the length of the previous entry) of 5 bytes, but with a length that could be encoded in a "prevlen" field of a single byte. This can happen when the "cascading update" process called by ziplistInsert()/ziplistDelete() in certain conditions forces the prevlen to be bigger than necessary in order to avoid moving too much data around.

Once such an entry is generated, inserting a very small entry immediately before it will result in a resizing of the ziplist to a size smaller than the current ziplist length (which is a violation: the inserting code expects the ziplist to actually get bigger). So an FF byte is written in a misplaced position. Moreover a realloc() is performed with a count smaller than the current ziplist length, so the final bytes could be trashed as well.

SECURITY IMPLICATIONS: Currently it looks like an attacker can only crash a Redis server by providing specifically chosen commands. However an FF byte is written, and there are other memory operations that depend on a wrong count, so even if it is not immediately apparent how to mount an attack in order to execute code remotely, it is not impossible at all that this could be done. Attacks always get better... and we did not spend enough time thinking about how to exploit this issue, but security researchers or malicious attackers could.
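For context, a sketch of the "prevlen" encoding rule involved (loosely modeled on ziplist.c; the real code also byte-swaps the 4-byte length on big endian hosts):

    #include <stdint.h>
    #include <string.h>

    #define ZIP_BIG_PREVLEN 254 /* 254 marks the 5-byte form; FF is the end byte. */

    /* Lengths below 254 fit in one byte; larger ones use a 0xFE marker
     * followed by a 4-byte length. The bug involved entries left with the
     * 5-byte form for a length that fits in a single byte, a state later
     * insertions did not expect. */
    static unsigned int storePrevEntryLength(unsigned char *p, uint32_t len) {
        if (len < ZIP_BIG_PREVLEN) {
            if (p) p[0] = (unsigned char)len;
            return 1;
        } else {
            if (p) {
                p[0] = ZIP_BIG_PREVLEN;
                memcpy(p + 1, &len, sizeof(len));
            }
            return 5;
        }
    }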
-
- 30 Jan, 2017 1 commit
-
-
antirez authored
-
- 27 Jan, 2017 1 commit
-
-
Jan-Erik Rediger authored
Previously Redis crashed on `MEMORY DOCTOR` when it had no slaves attached. Fixes #3783.
-
- 26 Jan, 2017 1 commit
-
-
miter authored
-
- 18 Jan, 2017 2 commits
-
-
antirez authored
This header file is for libs, like ziplist.c, that we want to keep almost separated from the core. The panic() calls will be easy to delete in order to use such files outside of Redis, but the debugging info we gain is very valuable compared to simple assertions, where it is not possible to print debugging info.
-
antirez authored
This is of great interest because it allows us to print debugging information when it is most needed, like in the following example: serverPanic("Unexpected encoding for object %d, %d", obj->type, obj->encoding);
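A minimal sketch of a printf-style panic along these lines (the real _serverPanic in debug.c also dumps stack traces and server state before aborting):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* The macro captures file and line; the function formats the message. */
    #define serverPanic(...) _serverPanic(__FILE__, __LINE__, __VA_ARGS__)

    void _serverPanic(const char *file, int line, const char *fmt, ...) {
        va_list ap;
        va_start(ap, fmt);
        fprintf(stderr, "PANIC at %s:%d: ", file, line);
        vfprintf(stderr, fmt, ap);
        fputc('\n', stderr);
        va_end(ap);
        abort();
    }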
-
- 13 Jan, 2017 1 commit
-
-
antirez authored
-