- 29 Aug, 2013 2 commits
-
-
antirez authored
-
antirez authored
Multiple missing calls to lua_pop prevented the error handler function pushed on the stack for lua_pcall() from being popped before returning, causing a memory leak in almost all the code paths of EVAL (both successful calls and calls returning errors). This caused two issues: Lua leaking memory (very visible in the INFO memory output, as the 'used_memory_lua' field reported an ever increasing amount of memory used), and, as a result, slower and slower GC cycles that ended up consuming all the CPU. Thanks to Tanguy Le Barzic for noticing something was wrong with his 2.8 slave, and for creating a testing EC2 environment where I was able to investigate the issue.
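A minimal standalone sketch of the pattern the fix restores; names such as script_error_handler and run_with_handler are illustrative, not the actual scripting.c code. The handler pushed before lua_pcall() must be popped on every exit path, or it accumulates on the Lua stack:

```c
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* Illustrative error handler: keep the error object as the single result. */
static int script_error_handler(lua_State *lua) {
    return 1;
}

/* Run a chunk with an error handler, popping the handler on every path. */
static int run_with_handler(lua_State *lua, const char *code) {
    lua_pushcfunction(lua, script_error_handler);
    int handler_idx = lua_gettop(lua);

    if (luaL_loadstring(lua, code) != 0) {
        lua_pop(lua, 2);      /* pop the compile error AND the handler */
        return -1;
    }
    int err = lua_pcall(lua, 0, 1, handler_idx);
    lua_pop(lua, 1);          /* pop the result or the error object */
    lua_pop(lua, 1);          /* pop the handler: the missing pop was the leak */
    return err;
}

int main(void) {
    lua_State *lua = luaL_newstate();
    luaL_openlibs(lua);
    run_with_handler(lua, "return 1 + 1");
    run_with_handler(lua, "error('boom')");
    printf("stack size after calls: %d\n", lua_gettop(lua)); /* 0: nothing left behind */
    lua_close(lua);
    return 0;
}
```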
-
- 27 Aug, 2013 9 commits
-
-
yihuang authored
-
antirez authored
-
antirez authored
We are sure the string is large: when the sds optimization branch is entered, it means the string could not be encoded as EMBSTR because of its size.
-
antirez authored
When no encoding is possible, at least try to reallocate the sds string so that it does not waste memory (as free space at the end of the buffer), when the string is large enough.
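A self-contained sketch of the idea using plain realloc() and hypothetical names, rather than the actual sds internals: once the string will not grow further, shrink the allocation to exactly the bytes in use so no free space is left at the end.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical string buffer: 'len' bytes used out of 'alloc' allocated. */
struct strbuf {
    size_t len;
    size_t alloc;
    char *data;
};

/* Shrink the allocation to len + 1 bytes (data plus terminator),
 * giving back any free space wasted at the end of the buffer. */
static void strbuf_trim(struct strbuf *sb) {
    if (sb->alloc <= sb->len + 1) return;   /* nothing wasted */
    char *p = realloc(sb->data, sb->len + 1);
    if (p == NULL) return;                  /* keep the larger buffer on failure */
    sb->data = p;
    sb->alloc = sb->len + 1;
}

int main(void) {
    struct strbuf sb = { 5, 1024, malloc(1024) };   /* over-sized allocation */
    strcpy(sb.data, "hello");
    strbuf_trim(&sb);                               /* alloc shrinks from 1024 to 6 */
    free(sb.data);
    return 0;
}
```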
-
antirez authored
We are sure that a string longer than 21 chars cannot be represented by a 64-bit signed integer, as even -(2^64) is only 21 chars: strlen("-18446744073709551616") => 21 (the widest value that actually fits in a signed 64-bit integer, -2^63, is just 20 chars).
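A standalone illustration of that bound, with a hypothetical parse_int64() helper rather than the actual Redis code: reject long strings up front, then let strtoll() handle range checking for the rest.

```c
#include <stdio.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: try to parse 's' as a 64-bit signed integer.
 * Strings longer than 21 chars are rejected up front: the widest
 * representable value, -9223372036854775808, is only 20 chars. */
static int parse_int64(const char *s, long long *out) {
    size_t len = strlen(s);
    if (len == 0 || len > 21) return 0;
    char *end;
    errno = 0;
    long long v = strtoll(s, &end, 10);
    if (errno == ERANGE || end == s || *end != '\0') return 0;
    *out = v;
    return 1;
}

int main(void) {
    long long v;
    printf("%d\n", parse_int64("-18446744073709551616", &v)); /* 0: out of range */
    printf("%d\n", parse_int64("9223372036854775807", &v));   /* 1: fits */
    return 0;
}
```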
-
antirez authored
The call to sdsMakeRoomFor() did not account for the amount of data already present in the query buffer, resulting in over-allocation.
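A tiny standalone illustration of the accounting involved, with made-up numbers: the buffer should only be asked to grow by the bytes still missing, not by the whole payload size, otherwise the data already buffered is counted twice.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical client reading a 64 KB bulk argument. */
    size_t bulklen = 64 * 1024;   /* total payload announced by the client */
    size_t buffered = 16 * 1024;  /* bytes of it already in the query buffer */

    /* Buggy request: room for the full payload on top of what is already
     * there, so the buffer ends up sized for buffered + bulklen bytes. */
    size_t buggy_extra = bulklen;

    /* Fixed request: only the missing part still has to fit. */
    size_t fixed_extra = bulklen - buffered;

    printf("over-allocation: %zu bytes\n", buggy_extra - fixed_extra);
    return 0;
}
```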
-
antirez authored
This command is only useful for low-level debugging of memory issues caused by sds wasting memory as free space at the end of the string buffer.
-
antirez authored
-
antirez authored
-
- 26 Aug, 2013 1 commit
-
-
antirez authored
This feature was implemented in the initial days of the Redis Cluster implementation but is not a good idea at all. 1) It depends on clocks being synchronized, which is already very bad. 2) Moreover it introduces a bug: the pong time is updated via gossip, so no new PING is ever sent by the current node, with the effect of no PONG received, no update of tables, and no clearing of the PFAIL flag. In general, trusting other nodes about the reachability of other nodes is a broken distributed programming model.
-
- 22 Aug, 2013 6 commits
-
-
antirez authored
The commit using listenToPort() introduced this bug by no longer creating the event handler to handle incoming messages from the cluster bus.
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
This was broken by the IPv6 support patches.
-
antirez authored
Somehow a previous commit broke this, so CLUSTER MEET was no longer working.
-
- 21 Aug, 2013 3 commits
-
-
antirez authored
-
antirez authored
-
Salvatore Sanfilippo authored
Fixed initServer() failure problem.
-
- 20 Aug, 2013 2 commits
- 19 Aug, 2013 8 commits
-
-
antirez authored
-
antirez authored
Also a warning was suppressed by including unistd.h in redisassert.h (needed for _exit()).
-
antirez authored
By using the redisassert.h version of assert(), you get stack traces in the log instead of the process disappearing on assertions.
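A sketch of the pattern, not the actual redisassert.h contents; the macro and helper names are illustrative. Overriding assert() to route through a reporting function lets the failure be logged (with a stack trace, in the real code) before the process exits.

```c
#include <stdio.h>
#include <unistd.h>

/* Hypothetical reporting hook: a real server would log the failed
 * expression plus a stack trace here before exiting. */
static void report_failed_assertion(const char *expr, const char *file, int line) {
    fprintf(stderr, "Assertion failed: %s, file %s, line %d\n", expr, file, line);
    /* ... dump a stack trace to the log ... */
}

/* assert() replacement in the spirit of redisassert.h: report, then _exit(). */
#undef assert
#define assert(_e) \
    ((_e) ? (void)0 : (report_failed_assertion(#_e, __FILE__, __LINE__), _exit(1)))

int main(void) {
    assert(1 + 1 == 2); /* passes */
    assert(0);          /* reports the failure and exits with status 1 */
    return 0;
}
```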
-
antirez authored
The previous hashing used the trivial algorithm of xoring the integers together. This is not optimal, as it is very likely that different hash table setups will hash to the same value: for instance, a hash table at the start of the rehashing process and at the end will have the same fingerprint. Now we hash the N integers in a smarter way, by adding every integer to the previous hash and hashing the result again as an integer (see the code for further details). This way it is a lot less likely that we get a collision. Moreover, this way of hashing explicitly protects against the same set of integers in a different order hashing to the same number. This commit is related to issue #1240.
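A self-contained sketch of this style of mixing, using Thomas Wang's 64-bit integer hash as the per-step function; the exact fields hashed by dict.c are omitted here. Each value is added to the running hash and the sum is re-hashed, so both the values and their order affect the result.

```c
#include <stdint.h>
#include <stdio.h>

/* Thomas Wang's 64-bit integer hash. */
static uint64_t int64_hash(uint64_t key) {
    key = (~key) + (key << 21);
    key = key ^ (key >> 24);
    key = (key + (key << 3)) + (key << 8);
    key = key ^ (key >> 14);
    key = (key + (key << 2)) + (key << 4);
    key = key ^ (key >> 28);
    key = key + (key << 31);
    return key;
}

/* Order-sensitive fingerprint of N integers: hash = int64_hash(hash + v). */
static uint64_t fingerprint(const uint64_t *vals, int n) {
    uint64_t hash = 0;
    for (int i = 0; i < n; i++) hash = int64_hash(hash + vals[i]);
    return hash;
}

int main(void) {
    uint64_t a[] = {1, 2, 3}, b[] = {3, 2, 1};
    /* Unlike a plain XOR, reordering the inputs changes the result. */
    printf("%llu %llu\n",
           (unsigned long long)fingerprint(a, 3),
           (unsigned long long)fingerprint(b, 3));
    return 0;
}
```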
-
antirez authored
Related to issue #1240.
-
antirez authored
This commit does mainly two things: 1) It fixes zunionInterGenericCommand() by removing the mass-initialization of all the iterators used, so that we don't violate the unsafe iterator API of dictionaries. This fixes issue #1240. 2) Since the zui* APIs required the iterator to be initialized in the zsetopsrc structure in order to use even non-iterator related APIs, this commit removes this strict requirement by accessing objects directly via the op->subject->ptr pointer we have to the object.
-
antirez authored
dict.c allows the user to create unsafe iterators, that is, iterators that will not touch the dictionary data structure in any way, preventing copy on write, but that are at the same time limited in their usage. The limitation is that while iterating with an unsafe iterator, no call to other dictionary functions must be done inside the iteration loop, otherwise the dictionary may be incrementally rehashed, resulting in missing elements in the set of elements returned by the iterator. However, after introducing this kind of iterator a number of bugs were found due to misuses of the API, and we are still finding bugs about this issue. The bugs are not trivial to track because the effect is just missing elements during the iteration. This commit introduces auto-detection of the API misuse. The idea is that an unsafe iterator has a contract: from the initialization to the release of the iterator the dictionary should not change. So we take a fingerprint of the dictionary state, xoring a few important dict properties, when the unsafe iterator is initialized, and we check again when the iterator is released whether the fingerprint is still the same. If it is not, we found a misuse of the iterator: API calls that are not allowed changed the internal state of the dictionary. This code was checked against a real bug, issue #1240. This is what Redis prints (aborting) when a misuse is detected: Assertion failed: (iter->fingerprint == dictFingerprint(iter->d)), function dictReleaseIterator, file dict.c, line 587.
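A stripped-down sketch of the detection pattern with illustrative types and names, not dict.c itself: the fingerprint is recorded when an unsafe iterator is created and re-checked on release, so any call that mutated the table in between trips the assertion. A real implementation would mix the fields with a proper integer hash rather than a plain XOR.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-ins for the real structures. */
typedef struct table {
    void *buckets;
    long size;
    long used;
} table;

typedef struct table_iter {
    table *t;
    long index;
    uint64_t fingerprint; /* state snapshot, only meaningful for unsafe iterators */
} table_iter;

/* Combine the fields that must not change during unsafe iteration. */
static uint64_t table_fingerprint(table *t) {
    return (uint64_t)(uintptr_t)t->buckets ^ (uint64_t)t->size ^ (uint64_t)t->used;
}

static void iter_init_unsafe(table_iter *it, table *t) {
    it->t = t;
    it->index = -1;
    it->fingerprint = table_fingerprint(t); /* snapshot at creation */
}

static void iter_release(table_iter *it) {
    /* If the table was modified while iterating, this fires. */
    assert(it->fingerprint == table_fingerprint(it->t));
}

int main(void) {
    table t = { 0, 16, 3 };
    table_iter it;
    iter_init_unsafe(&it, &t);
    /* ... iterate without touching the table ... */
    t.used++;           /* simulated illegal modification during iteration */
    iter_release(&it);  /* assertion fires: fingerprint no longer matches */
    return 0;
}
```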
- 12 Aug, 2013 6 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
The previous code using a static buffer as an optimization was lame: 1) Premature optimization; it was actually *slower* than the naive code, because it resulted in the creation/destruction of the object encapsulating the output buffer. 2) The code was very hard to test, since specific tests were needed for command lines exceeding the size of the static buffer. 3) As a result of "2", the code was buggy, as the current tests were not able to stress specific corner cases. It was replaced with easy to understand code that is safer and faster.
-
antirez authored
-
antirez authored
-
antirez authored
During the replication full resynchronization process, the RDB file is transferred from the master to the slave. However there is a short preamble to send first, which is currently just the bulk payload length of the file in the usual Redis form $..length..<CR><LF>. This preamble used to be sent with a direct write call, assuming there was always room in the socket output buffer to hold the few bytes needed; however this does not scale in case we need to send more stuff, and it is not very robust code in general. This commit introduces a more general mechanism to send a preamble of up to 2GB in size (the max length of an sds string) in a non-blocking way.
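For reference, the preamble is just the RESP bulk header announcing the length of the RDB payload; a tiny sketch of building it, with buffer handling simplified and names made up rather than taken from the replication code:

```c
#include <stdio.h>

/* Build the "$<length>\r\n" preamble announcing an RDB payload of
 * 'rdb_size' bytes; returns the number of preamble bytes written. */
static int build_rdb_preamble(char *buf, size_t buflen, long long rdb_size) {
    return snprintf(buf, buflen, "$%lld\r\n", rdb_size);
}

int main(void) {
    char preamble[64];
    int n = build_rdb_preamble(preamble, sizeof(preamble), 104857600);
    printf("%d bytes: %s", n, preamble); /* "$104857600\r\n" for a 100 MB RDB */
    return 0;
}
```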
-
- 08 Aug, 2013 2 commits
-
-
antirez authored
Before this commit redis-benchmark supported random arguments in the form :rand:000000000000. In every string of that form, the zeros were replaced with a random 12-digit number at every command invocation. However this was far from perfect, as it did not allow generating simply random numbers as arguments: there was always the :rand: prefix. Now instead every argument in the form __rand_int__ is replaced with a 12-digit number. Note that "__rand_int__" is itself 12 characters. In order to implement the new semantics, a few things had to change in the internals of redis-benchmark, as new clients are created by cloning old clients, so without a stable prefix such as ":rand:" the old way of cloning the client was no longer able to understand, from the old command line, where the random strings to substitute were located. Now instead a client structure is passed as a reference for cloning, so that we can directly clone the offsets inside the command line.
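A simplified standalone sketch of the substitution mechanism, not the redis-benchmark internals: the offsets of every 12-character __rand_int__ token are recorded once, then each invocation overwrites those 12 bytes in place with a zero-padded random number.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define RAND_TOKEN  "__rand_int__"  /* exactly 12 characters */
#define RAND_DIGITS 12

int main(void) {
    char cmd[] = "SET key:__rand_int__ val:__rand_int__";
    size_t offsets[8];
    int noffsets = 0;

    /* Record every token position once; a cloned client can reuse these. */
    for (char *p = cmd; (p = strstr(p, RAND_TOKEN)) != NULL; p += RAND_DIGITS)
        offsets[noffsets++] = (size_t)(p - cmd);

    /* Per invocation: overwrite each token with a zero-padded 12-digit number. */
    for (int i = 0; i < noffsets; i++) {
        char digits[RAND_DIGITS + 1];
        snprintf(digits, sizeof(digits), "%012d", rand() % 1000000000);
        memcpy(cmd + offsets[i], digits, RAND_DIGITS);
    }
    printf("%s\n", cmd);
    return 0;
}
```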
-
antirez authored
This change was profiler-driven, but the actual effect is hard to measure in real-world redis benchmark runs.
-
- 07 Aug, 2013 1 commit
-
-
Salvatore Sanfilippo authored
Little typo
-