- 18 Sep, 2017 1 commit
Oran Agra authored
When the SHUTDOWN command is received it is possible that some of the most recent commands were not yet flushed from the AOF buffer, and the server experiences data loss at shutdown.
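A minimal sketch of the idea behind the fix, assuming Redis internals such as flushAppendOnlyFile(), the server global and the prepareForShutdown() path; the actual patch may differ:

    /* Before exiting, force a flush of whatever is still pending in the
     * AOF buffer, so recently received commands are not lost. */
    int prepareForShutdown(int flags) {
        if (server.aof_state != AOF_OFF) {
            /* force = 1: write even if a background fsync is in progress
             * and the write would normally be postponed. */
            flushAppendOnlyFile(1);
        }
        /* ... close listening sockets, save the RDB if configured ... */
        return C_OK;
    }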
- 02 Aug, 2017 2 commits
antirez authored
Itamar Haber authored
With the addition of modules, looping over the redisCommandTable misses any added commands. Moving to dictionary iteration resolves this.
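A sketch of the change, assuming Redis' dict iterator API and the server.commands dictionary, which holds native and module commands alike; the loop body is illustrative:

    /* Iterate the live command dictionary instead of the static
     * redisCommandTable[] array, so that commands registered by
     * modules at runtime are included as well. */
    dictIterator *di = dictGetIterator(server.commands);
    dictEntry *de;
    while ((de = dictNext(di)) != NULL) {
        struct redisCommand *cmd = dictGetVal(de);
        /* ... inspect or emit cmd->name, cmd->flags, and so on ... */
    }
    dictReleaseIterator(di);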
- 24 Jul, 2017 9 commits
antirez authored
Jan-Erik Rediger authored
Fixes #2258
WuYunlong authored
Chris Lamb authored
Byron Grobe authored
Jan-Erik Rediger authored
It's not POSIX (BSD systems have -E instead) and we don't actually need it. Closes #1922
Leon Chen authored
Leon Chen authored
liangsijian authored
- 23 Jul, 2017 4 commits
antirez authored
Lua scripting does not support calling blocking commands; however, all the native Redis commands that may block are flagged with the "s" flag (no scripting), so this is not possible at all. With modules there is no such mechanism to flag a command as not callable by the Lua scripting engine, and moreover we cannot trust module authors to comply all the time: it is likely that modules with blocking commands will be released without such commands being flagged correctly, even if we provide a way to signal this fact.

This commit addresses the problem in a short-term way, by detecting that a module is trying to block in the context of the Lua scripting engine client, and preventing it from doing so. The module will actually believe it is blocking as usual, but what happens is that the Lua script receives an error immediately, and the background call is ignored by the Redis engine (except for the cleanup callbacks, once it unblocks).

Long term, the more likely solution is to introduce a new call, RedisModule_GetClientFlags(), so that a command can detect whether the caller is a Lua script, and return an error or avoid blocking at all. The blocking API being experimental right now, more work is needed in this regard in order to reach a state where blocking module commands and all the other Redis subsystems interact peacefully.

Now the effect is like the following:

    127.0.0.1:6379> eval "redis.call('hello.block',1,5000)" 0
    (error) ERR Error running script (call to f_b5ba35ff97bc1ef23debc4d6e9fd802da187ed53): @user_script:1: ERR Blocking module command called from Lua script

This commit fixes issue #4127 in the short term.
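A hypothetical sketch of the long-term plan: neither RedisModule_GetClientFlags() nor the flag constant below exists at the time of this commit, so both names are assumptions; RedisModule_ReplyWithError() and RedisModule_BlockClient() are part of the current modules API:

    /* Hypothetical: ask for the caller's flags and refuse to block when
     * the caller is the Lua scripting engine client. */
    int HelloBlock_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv,
                                int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_GetClientFlags(ctx) & REDISMODULE_CLIENTFLAG_LUA) {
            /* Called from a Lua script: fail fast instead of blocking. */
            return RedisModule_ReplyWithError(ctx,
                "ERR this command cannot block inside a Lua script");
        }
        /* Otherwise it is safe to block with RedisModule_BlockClient()
         * as usual. */
        return REDISMODULE_OK;
    }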
antirez authored
antirez authored
This function failed when an internal-only flag was set as the only flag in a node: the string was trimmed expecting a final comma before exiting the function, causing a crash. See issue #4142. Moreover the generation of the flags representation, which is only needed at DEBUG log level, was always performed: a waste of CPU time. This is fixed as well by this commit.
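A sketch of the trimming pitfall using the sds API Redis uses internally, assuming the function in question is cluster.c's representClusterNodeFlags() (the one issue #4142 points at); the body below is illustrative, not the actual patch:

    /* Append a comma separated, human readable flags representation.
     * Internal-only flags append nothing, so only trim the trailing
     * comma if at least one flag was actually written. */
    sds representClusterNodeFlags(sds ci, uint16_t flags) {
        size_t orig_len = sdslen(ci);
        if (flags & CLUSTER_NODE_MASTER) ci = sdscat(ci, "master,");
        if (flags & CLUSTER_NODE_SLAVE) ci = sdscat(ci, "slave,");
        /* ... other public flags ... */
        if (sdslen(ci) == orig_len)
            ci = sdscat(ci, "noflags");   /* nothing was appended */
        else
            sdsrange(ci, 0, -2);          /* remove the final comma */
        return ci;
    }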
antirez authored
The function cache was not working at all, and the function returned wrong values if there were two or more modules exporting native data types. See issue #4131 for more details.
- 14 Jul, 2017 13 commits
antirez authored
Those calls may be subject to change in the future, so the user should acknowledge they are using a non-stable API.
antirez authored
antirez authored
Before this fix the DB currently selected by the blocked client was not respected, and operations were always performed on DB 0.
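A sketch of the idea, assuming the real selectDb() helper; the structure and field names below are illustrative stand-ins for the module blocked-client internals:

    /* Illustrative stand-in for the real structure in module.c. */
    typedef struct RedisModuleBlockedClient {
        client *c;     /* the blocked client */
        int dbid;      /* DB selected when the client blocked (assumed name) */
        /* ... reply/timeout/free callbacks, private data ... */
    } RedisModuleBlockedClient;

    void unblockClientFromModule(RedisModuleBlockedClient *bc) {
        /* Restore the DB the client had selected when it blocked, so the
         * module callbacks do not implicitly operate on DB 0. */
        selectDb(bc->c, bc->dbid);
        /* ... invoke the module's reply callback ... */
    }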
antirez authored
Moving to the redis-doc repository to publish via Redis.io.
antirez authored
antirez authored
In Redis 4.0 replication, with the introduction of PSYNC2, masters and slaves replicate commands to cascading slaves and to the replication backlog itself in a different way compared to the past. Masters actually replicate the effects of client commands. Slaves just propagate what they receive from masters. This mechanism can cause problems when the configuration of an instance is changed from master to slave inside a transaction. For instance we could send to a master instance the following sequence:

    MULTI
    SLAVEOF 127.0.0.1 0
    EXEC
    SLAVEOF NO ONE

Before the fixes in this commit, the MULTI command used to be propagated into the replication backlog, however after the SLAVEOF command the instance is a slave, so the EXEC implementation failed to also propagate the EXEC command. When the slaves of the above instance reconnected, they were incrementally synchronized just sending a "MULTI". This put the master client (in the slaves) into MULTI st...
antirez authored
Close #3911.
antirez authored
Close #3766.
antirez authored
See issue #3844 for more information.
antirez authored
In general we do not want the before/after sleep() callbacks to be called when we re-enter the event loop, since those calls are only designed to perform operations once per main iteration of the event loop, and re-entering is often just a way to incrementally serve clients with error messages or other auxiliary operations. If we called the callbacks on re-entry, we would be forced to think of the before/after sleep callbacks as re-entrant, which is much harder, without any good reason.

However here there was also a clear bug: beforeSleep() was actually never called when re-entering the event loop, but the new afterSleep() callback was. This is broken, and in this instance re-entering afterSleep() caused a deadlock on the modules GIL.
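A sketch loosely modeled on Redis' ae event loop: only the top-level loop asks for the after-sleep callback, so re-entrant calls skip it; the AE_CALL_AFTER_SLEEP flag reflects this design, though the code here is simplified:

    void aeMain(aeEventLoop *eventLoop) {
        eventLoop->stop = 0;
        while (!eventLoop->stop) {
            if (eventLoop->beforesleep != NULL)
                eventLoop->beforesleep(eventLoop);
            /* Only the main loop passes AE_CALL_AFTER_SLEEP. */
            aeProcessEvents(eventLoop, AE_ALL_EVENTS|AE_CALL_AFTER_SLEEP);
        }
    }

    int aeProcessEvents(aeEventLoop *eventLoop, int flags) {
        int processed = 0;
        /* ... poll for events and dispatch their handlers ... */
        if (eventLoop->aftersleep != NULL && (flags & AE_CALL_AFTER_SLEEP))
            eventLoop->aftersleep(eventLoop); /* skipped when re-entering */
        return processed;
    }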
antirez authored
Guy Benoish authored
antirez authored
- 06 Jul, 2017 11 commits
sunweinan authored
antirez authored
Thanks to @oranagra for spotting this bug.
antirez authored
spinlock authored
spinlock authored
antirez authored
antirez authored
They are technically like commands executed from external clients one after the other, and do not constitute a single atomic entity.
antirez authored
Redis clients need to have an instantaneous idea of the amount of memory they are consuming (if the number is not exact it should at least be proportional to the actual memory usage). We do that by adding and subtracting the SDS length when pushing / popping from the client->reply list. However it is quite easy to introduce bugs in such a setup, by not keeping the objects in the list and the count in sync. For that reason, Redis has an assertion to track counts near 2^64: those are always the result of the counter wrapping around because we subtract more than we add.

This commit adds the symmetrical assertion: when the list is empty, since we sent everything, the reply_bytes count should be zero. Thanks to the new assertion it should be simple to also detect the other problem, where the count slowly increases because of over-counting. The assertion adds a conditional in the code that sends the buffer to the socket, but it should not create any measurable performance slowdown: listLength() just accesses a structure field, and this code path is totally dominated by write(2).

Related to #4100.
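A sketch of the accounting and the two symmetrical assertions, assuming the client struct fields (reply, reply_bytes) and serverAssert(); the helper names and the use of plain sds values in the list are illustrative:

    /* Keep reply_bytes in sync with the reply list. */
    void pushReply(client *c, sds s) {
        listAddNodeTail(c->reply, s);
        c->reply_bytes += sdslen(s);
    }

    void popReply(client *c) {
        listNode *ln = listFirst(c->reply);
        c->reply_bytes -= sdslen((sds)listNodeValue(ln));
        listDelNode(c->reply, ln);
        /* Existing check: a count near 2^64 means the counter wrapped
         * below zero, i.e. we subtracted more than we added. */
        serverAssert(c->reply_bytes < (1ULL << 62));
        /* New symmetrical check: everything sent implies zero pending
         * bytes, catching slow over-counting as well. */
        if (listLength(c->reply) == 0)
            serverAssert(c->reply_bytes == 0);
    }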
Dvir Volk authored
antirez authored
This commit closes issue #3698, at least for now, since the root cause was not fixed: the bounding box function, for huge radiuses, does not return a correct bounding box; there are points still within the radius that are left outside. So when using GEORADIUS queries with radiuses in the order of 5000 km or more, it was possible to see, at the edge of the area, certain points not correctly reported. Because the bounding box was so far used just as an optimization, and such huge radiuses are not common, for now the optimization is simply switched off when the radius approaches that magnitude. Three test cases found by the Continuous Integration test were added, so that we can easily trigger the bug again, both for regression testing and in order to properly fix it at some point in the future.
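A sketch of the workaround; the threshold constant and the function below are assumptions for illustration, not the actual patch:

    /* Hypothetical: near the problematic magnitude the bounding box is
     * known to be too small, so stop using it as a pre-filter and rely
     * on the exact distance check alone. */
    #define GEO_HUGE_RADIUS_METERS (4000.0 * 1000.0) /* assumed threshold */

    int memberWithinRadius(double exact_dist, double radius, int inside_bbox) {
        if (radius >= GEO_HUGE_RADIUS_METERS)
            return exact_dist <= radius;            /* optimization off */
        return inside_bbox && exact_dist <= radius; /* normal fast path */
    }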
antirez authored
This feature was proposed by @rosmo in PR #2643 and later redesigned in order to fit better with the other options for non-interactive modes of redis-cli. The idea is basically to allow collecting latency information in scripts, cron jobs or whatever, just running for a limited time and then producing a single output.