- 09 May, 2017 1 commit
antirez authored
More work to do with server.unixtime and similar. Need to write a Helgrind suppression file in order to suppress the false positives.
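
For reference, a Helgrind suppression file is a plain-text list of entries passed to Valgrind via --suppressions=<file>. A sketch of one entry; the entry name and the suppressed function are illustrative, not the actual suppressions:

    {
       server-unixtime-benign-race
       Helgrind:Race
       fun:updateCachedTime
    }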
- 04 May, 2017 2 commits
- 03 May, 2017 8 commits
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
Instead of giving the module background operations only a small window of time to run in the beforeSleep() function, we can keep the lock released for the whole time we are blocked in the multiplexing syscall.
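
A minimal sketch of the idea, with a plain pthread mutex standing in for the module GIL and sleep() standing in for the blocking multiplexing syscall (illustrative, not the actual ae.c/module.c code):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* A stand-in for the module GIL. */
    static pthread_mutex_t module_gil = PTHREAD_MUTEX_INITIALIZER;

    static void moduleAcquireGIL(void) { pthread_mutex_lock(&module_gil); }
    static void moduleReleaseGIL(void) { pthread_mutex_unlock(&module_gil); }

    /* One event loop iteration: the lock is dropped for the whole time
     * we are blocked waiting for I/O, not just for a short window in
     * beforeSleep(), so module background threads can run meanwhile. */
    static void eventLoopIteration(void) {
        moduleReleaseGIL();  /* module threads may run from here... */
        sleep(1);            /* ...stand-in for the multiplexing syscall */
        moduleAcquireGIL();  /* regain exclusive access before events */
        printf("processing events with the GIL held\n");
    }

    int main(void) {
        moduleAcquireGIL();  /* the main thread normally holds the GIL */
        for (int i = 0; i < 3; i++) eventLoopIteration();
        moduleReleaseGIL();
        return 0;
    }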
- 02 May, 2017 3 commits
- 28 Apr, 2017 2 commits
antirez authored
antirez authored
Experimentally verified that the test can trigger the issue when the fix is reverted. At least on my system... Since the bug is time/backlog dependent, it is very hard to tell if this test will be able to trigger the problem consistently; however, even if it triggers the problem only once in a while, we'll see it in the CI environment at http://ci.redis.io.
- 27 Apr, 2017 1 commit
antirez authored
The master client cleanup was incomplete: resetClient() was missing and the output buffer of the client was not reset, so pending commands related to the previous connection could still be sent. The first problem caused the client argument vector to be, at times, half populated, so that when the correct replication stream arrived the protocol got mixed with the stale arguments, creating invalid commands that nobody called. Thanks to @yangsiran for also investigating this problem, after already providing important design / implementation hints for the original PSYNC2 issues (see the referenced Github issue). Note that this commit adds a new function to the list library of Redis in order to be able to reset a list without destroying it. Related to issue #3899.
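
The new list-reset function works along these lines; the sketch is modeled on Redis's adlist.c but simplified, with libc free() in place of the Redis allocator and illustrative naming:

    #include <stdlib.h>

    typedef struct listNode {
        struct listNode *prev, *next;
        void *value;
    } listNode;

    typedef struct list {
        listNode *head, *tail;
        void (*free)(void *ptr);    /* per-list value destructor */
        unsigned long len;
    } list;

    /* Remove all the elements from the list without destroying the
     * list itself: after the call, the same list header can be
     * reused as an empty list. */
    void listEmptySketch(list *l) {
        listNode *current = l->head, *next;
        while (current) {
            next = current->next;
            if (l->free) l->free(current->value);
            free(current);
            current = next;
        }
        l->head = l->tail = NULL;
        l->len = 0;
    }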
- 22 Apr, 2017 4 commits
antirez authored
Related to #3786.
antirez authored
Apparently 1.4 is too low compared to what you get in certain setups (including mine). I raised it to 1.55, which hopefully is still enough to test that the fragmentation went down from 1.7 without incurring issues; however, the test setup may still be fragile, so at times this may lead to false positives again. It's hard to test for these things in a deterministic way. Related to #3786.
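
For reference, the metric the test asserts on is the ratio between resident set size and allocator-reported memory. A toy sketch of the check, with illustrative names and the thresholds discussed above:

    #include <stdio.h>
    #include <stddef.h>

    /* mem_fragmentation_ratio ~= RSS / allocated. The test expects the
     * ratio to drop below the tolerance (raised from 1.4 to 1.55) once
     * active defragmentation has compacted memory from ~1.7. */
    int fragmentation_ok(size_t used_memory_rss, size_t used_memory) {
        double ratio = (double)used_memory_rss / (double)used_memory;
        printf("mem_fragmentation_ratio: %.2f\n", ratio);
        return ratio < 1.55;
    }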
oranagra authored
- 21 Apr, 2017 1 commit
antirez authored
Normally we never check for OOM conditions inside Redis, since the allocator will always return a pointer or abort the program on OOM conditions. However we have no control over epoll_create(), which may fail for kernel OOM (according to the manual page) even if all the parameters are correct, so the function aeCreateEventLoop() may indeed return NULL and this condition must be checked.
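
A sketch of the pattern, using Linux epoll directly and a stripped-down event loop struct (illustrative, not the actual ae.c code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/epoll.h>

    typedef struct aeEventLoop { int epfd; } aeEventLoop;

    aeEventLoop *aeCreateEventLoopSketch(void) {
        aeEventLoop *el = malloc(sizeof(*el));
        if (el == NULL) return NULL;
        el->epfd = epoll_create(1024); /* may fail for kernel OOM */
        if (el->epfd == -1) {
            free(el);
            return NULL;
        }
        return el;
    }

    int main(void) {
        aeEventLoop *el = aeCreateEventLoopSketch();
        if (el == NULL) {   /* the condition this commit starts checking */
            fprintf(stderr, "Failed creating the event loop.\n");
            exit(1);
        }
        return 0;
    }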
- 20 Apr, 2017 1 commit
Salvatore Sanfilippo authored
update block->free after some diff data are written to the child process
- 19 Apr, 2017 3 commits
antirez authored
Close #3940.
antirez authored
During the review of the fix for #3899, @yangsiran identified an implementation bug: given that the offset is now relative to the applied part of the replication log, when we cache a master, the subsequent PSYNC2 request will be made in order to *include* the transaction that was not completely processed. This means that we need to discard any pending transaction from our replication buffer: it will be re-executed.
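
Conceptually, the cleanup amounts to something like the following sketch; the struct and field names are simplified stand-ins for the real client and MULTI state:

    #include <stdlib.h>

    #define CLIENT_MULTI (1<<3)          /* illustrative flag value */

    typedef struct client {
        int flags;
        void *mstate_commands;           /* queued MULTI commands */
        int mstate_count;
        long long reploff;               /* offset of the *applied* stream */
    } client;

    /* When caching the master, drop any half-received transaction:
     * reploff already points before the MULTI, so after PSYNC2 the
     * master will re-send the whole transaction and it will be
     * re-executed from scratch. */
    void discardCachedMasterTransaction(client *master) {
        if (master->flags & CLIENT_MULTI) {
            free(master->mstate_commands);
            master->mstate_commands = NULL;
            master->mstate_count = 0;
            master->flags &= ~CLIENT_MULTI;
        }
    }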
antirez authored
This bug was discovered by @kevinmcgehee and constituted a major hidden bug in the PSYNC2 implementation, caused by the propagation from the master of incomplete commands to slaves. The bug had several results:

1. Borrowing from Kevin's text in the issue: "Given that slaves blindly copy over their master's input into their own replication backlog over successive read syscalls, it's possible that with large commands or small TCP buffers, partial commands are present in this buffer. If the master were to fail before successfully propagating the entire command to a slave, the slaves will never execute the partial command (since the client is invalidated) but will copy it to replication backlog which may relay those invalid bytes to its slaves on PSYNC2, corrupting the backlog and possibly other valid commands that follow the failover. Simple command boundaries aren't sufficient to capture this, either, because in the case of a MULTI/EXEC block, if the master successfully propagates a subset of the commands but not the EXEC, then the transaction in the backlog becomes corrupt and could corrupt other slaves that consume this data."

2. As identified by @yangsiran later, there is another effect of the bug. Through the same mechanism as the first problem, a slave having another slave could receive a full resynchronization request with an already half-applied command in the backlog. Once the RDB is ready, it will be sent to the slave, and the replication will continue by sending the sub-slave the other half of the command, which is not valid.

The fix, designed by @yangsiran and @antirez, and implemented by @antirez, uses a secondary buffer in order to feed the sub-slaves and update the replication backlog and offsets only when a given part of the query buffer is actually *applied* to the state of the instance, that is, when the command gets processed and is not pending in the Redis transaction buffer because of the CLIENT_MULTI state. Given that the backlog and offset representation is now in agreement with the commands actually processed, both issues 1 and 2 should no longer be possible.

Thanks to @kevinmcgehee, @yangsiran and @oranagra for their work in identifying and designing a fix for this problem.
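
A sketch of the resulting data flow, with heavily simplified types and a hypothetical helper name (replicationFeedBacklog): bytes read from the master accumulate in a pending buffer, and only the prefix that has actually been applied is fed to the backlog and the sub-slaves:

    #include <string.h>

    typedef struct masterLink {
        char  *pending;      /* bytes received but not yet applied */
        size_t pending_len;
    } masterLink;

    /* Hypothetical stand-in: in Redis this would append to the
     * replication backlog and write to the connected sub-slaves. */
    static void replicationFeedBacklog(const char *buf, size_t len) {
        (void)buf; (void)len;
    }

    /* Called with the number of bytes of fully processed commands
     * (none of them pending in a MULTI): propagate only that prefix,
     * and keep the incomplete tail for later. */
    void feedAppliedBytes(masterLink *m, size_t applied) {
        replicationFeedBacklog(m->pending, applied);
        memmove(m->pending, m->pending + applied, m->pending_len - applied);
        m->pending_len -= applied;
    }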
- 18 Apr, 2017 8 commits
Salvatore Sanfilippo authored
Reorder to make dict-benchmark compile on Linux
antirez authored
antirez authored
antirez authored
Salvatore Sanfilippo authored
fix #3847: close the socket before returning ANET_ERR.
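
The pattern of the fix, in an illustrative anet.c-style function (not the actual patched code):

    #include <unistd.h>
    #include <sys/socket.h>

    #define ANET_ERR -1

    int anetCreateAndConfigureSketch(void) {
        int yes = 1;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1) return ANET_ERR;
        if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR,
                       &yes, sizeof(yes)) == -1) {
            close(fd);       /* the fix: don't leak the descriptor */
            return ANET_ERR;
        }
        return fd;
    }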
张文康 authored
antirez authored
Also get rid of variables that are now kinda redundant, since the dictionary iterator was removed. This is related to PR #3949.
Salvatore Sanfilippo authored
rdb: saving skiplist in reversed order to accelerate the deserialisation process
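
The speedup comes from the fact that emitting elements in descending score order means each element inserted at load time lands at the head of the skiplist, so the insertion search path is trivial. A sketch with simplified stand-in types (not the actual rdb.c/t_zset.c code):

    #include <stddef.h>

    typedef struct zslNodeSketch {
        double score;
        struct zslNodeSketch *backward;   /* level-0 back pointer */
    } zslNodeSketch;

    typedef struct zslSketch { zslNodeSketch *tail; } zslSketch;

    /* Walk from the tail using the backward pointers, emitting the
     * highest score first. */
    void saveSkiplistReversed(zslSketch *zsl, void (*emit)(double score)) {
        for (zslNodeSketch *n = zsl->tail; n != NULL; n = n->backward)
            emit(n->score);
    }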
- 17 Apr, 2017 2 commits
Jan-Erik Rediger authored
Fixes #3944
spinlock authored
- 15 Apr, 2017 1 commit
antirez authored
However we allow for 500 milliseconds of tolerance, in order to avoid frequently discarding semantically valid info (the node is up) because of the natural desync of a few milliseconds among servers, even when NTP is used. Note that we should anyway ping the node from time to time regardless and discover if it's actually down from our point of view, since no update is accepted while we have an active ping on the node. Related to #3929.
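
A sketch of the acceptance check, with illustrative names; the 500 ms constant is the tolerance discussed above:

    #define CLOCK_TOLERANCE_MS 500

    /* Accept a pong time learned via gossip only if it is fresher than
     * our local view and not in the future by more than the tolerance
     * (clocks are never perfectly in sync, even with NTP). */
    int acceptGossipPongTime(long long pongtime, long long local_pong,
                             long long now) {
        return pongtime > local_pong &&
               pongtime <= now + CLOCK_TOLERANCE_MS;
    }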
- 14 Apr, 2017 3 commits
antirez authored
And many other related Github issues... all reporting the same problem. There was probably just not enough backlog in certain unlucky runs. I'll ask the people that can reproduce the issue whether they now see it as fixed as well.
antirez authored
Relying on nodes in PFAIL state being shared around by randomly adding them to the gossip section is a weak assumption, especially after the changes related to sending fewer ping/pong packets. We want to always include gossip entries for all the nodes that are in PFAIL state, so that the PFAIL -> FAIL state promotion can happen much faster and more reliably. Related to #3929.
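
Schematically, with simplified stand-in types (not the actual cluster.c code), the gossip builder gains a pass that force-includes every PFAIL node on top of the random sample:

    #define NODE_PFAIL (1<<1)            /* illustrative flag value */

    typedef struct nodeSketch { int flags; } nodeSketch;

    /* After the usual random gossip entries have been added, append an
     * entry for every node we flag as PFAIL, so receivers can collect
     * failure reports and promote PFAIL -> FAIL quickly. */
    int gossipAppendPfailNodes(nodeSketch **nodes, int count,
                               void (*addEntry)(nodeSketch *n)) {
        int added = 0;
        for (int i = 0; i < count; i++) {
            if (nodes[i]->flags & NODE_PFAIL) {
                addEntry(nodes[i]);
                added++;
            }
        }
        return added;
    }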
antirez authored
The gossip section times are 32 bit, so they cannot store the millisecond time but just the seconds approximation, which is good enough for our uses. At the same time, however, when comparing the gossip section times of other nodes with our node's view, we need to convert back to milliseconds. Related to #3929. Without this change, the patch to reduce the traffic in the bus messages does not work.
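
A sketch of the two conversions (byte-order handling for the packet fields is omitted here):

    #include <stdint.h>

    /* Milliseconds -> 32-bit seconds, when building the gossip section. */
    uint32_t toGossipSeconds(long long mstime) {
        return (uint32_t)(mstime / 1000);
    }

    /* 32-bit seconds -> milliseconds, when comparing a received gossip
     * time against our local millisecond clock; sub-second precision is
     * lost, which is fine for failure detection. */
    long long fromGossipSeconds(uint32_t secs) {
        return (long long)secs * 1000;
    }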