- 05 Dec, 2017 1 commit
-
-
WuYunlong authored
-
- 01 Dec, 2017 7 commits
-
-
antirez authored
-
antirez authored
XADD was suboptimal in the first incarnation of the command: it was not able to accept an explicit ID (very useful for replication), nor options for having capped streams. The keyspace notification for streams was also not implemented.
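A quick illustration of the two additions using hiredis from C (the key name, fields, and connection details below are just examples, not part of the commit):

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Capped stream: keep roughly the last 1000 entries, let Redis pick the ID ("*"). */
    redisReply *r = redisCommand(c, "XADD mystream MAXLEN 1000 * sensor 1234 temp 19.8");
    if (r) { printf("generated ID: %s\n", r->str); freeReplyObject(r); }

    /* Explicit ID (ms-seq), the form a replicating master would propagate. */
    r = redisCommand(c, "XADD mystream 1512470400000-0 sensor 1234 temp 20.1");
    if (r) freeReplyObject(r);

    redisFree(c);
    return 0;
}
```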
-
antirez authored
With lists we need to signal only on key creation, but streams can provide data to clients listening at every new item added. To make this slightly more efficient we now track different classes of blocked clients to avoid signaling keys when there is nobody listening. A typical case is when the stream is used as a time series DB and accessed only by range with XRANGE.
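A minimal standalone sketch of the idea (names and types are illustrative, not the actual Redis internals): keep a per-class counter of blocked clients so the stream append path can skip the ready-keys signaling entirely when nobody is blocked on a stream.

```c
#include <stdio.h>

/* Illustrative only: count blocked clients per type so hot paths can
 * skip signaling keys as ready when there is nobody listening. */
enum { BLOCKED_LIST, BLOCKED_STREAM, BLOCKED_NUM };
static int blocked_by_type[BLOCKED_NUM];

static void on_stream_append(const char *key) {
    if (blocked_by_type[BLOCKED_STREAM] == 0) return;  /* nobody listening: no work */
    printf("signal key %s as ready\n", key);
}

int main(void) {
    on_stream_append("temps");          /* skipped: no blocked clients */
    blocked_by_type[BLOCKED_STREAM]++;  /* one client blocks, e.g. XREAD BLOCK */
    on_stream_append("temps");          /* now the key is signaled */
    return 0;
}
```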
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
- 30 Nov, 2017 2 commits
-
-
antirez authored
Doing the following ended with a broken server.executable:

1. Start Redis with src/redis-server.
2. Send CONFIG SET DIR /tmp/.
3. Send DEBUG RESTART.

At this point we called execve() with an argv[0] that is no longer related to the new path, so after the restart the absolute path of the executable is recomputed in the wrong way. With this fix we pass the already computed absolute path as argv[0].
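A standalone sketch of the approach (the flow and names are illustrative): resolve the executable's absolute path once at startup and reuse it as argv[0] for the restart, so a later directory change cannot break it.

```c
#include <limits.h>
#include <stdlib.h>
#include <unistd.h>

extern char **environ;

int main(int argc, char **argv) {
    (void)argc;
    /* Resolve the absolute path once, before any chdir() can happen. */
    char exe_abspath[PATH_MAX];
    if (realpath(argv[0], exe_abspath) == NULL) return 1;

    /* ... server runs; CONFIG SET DIR /tmp/ changes the working directory ... */

    /* On restart, exec with the saved absolute path as argv[0]. */
    char *exec_argv[] = { exe_abspath, NULL };
    if (0 /* DEBUG RESTART requested */) execve(exe_abspath, exec_argv, environ);
    return 0;
}
```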
-
antirez authored
-
- 28 Nov, 2017 2 commits
-
-
Itamar Haber authored
This adds a new `addReplyHelp` helper that's used by commands when returning help text. The following commands have been touched: DEBUG, OBJECT, COMMAND, PUBSUB, SCRIPT and SLOWLOG. It also fixes the command table entry for OBJECT to account for the HELP option (after #4472 the command may have just 2 arguments) and improves the OBJECT HELP descriptions. See #4472.
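A standalone sketch of the pattern (the exact signature of the real helper in Redis may differ, and the help strings are paraphrased): each command keeps a NULL-terminated array of help lines and hands it to a single helper that formats the <COMMAND> HELP reply.

```c
#include <stdio.h>

static void addReplyHelp(const char *cmdname, const char **help) {
    printf("%s <subcommand> arg arg ... arg. Subcommands are:\n", cmdname);
    while (*help) printf("%s\n", *help++);
}

int main(void) {
    const char *object_help[] = {
        "ENCODING <key> -- Return the kind of internal representation used to store the value.",
        "REFCOUNT <key> -- Return the number of references of the value associated with the key.",
        "IDLETIME <key> -- Return the number of seconds since the key was last accessed.",
        NULL
    };
    addReplyHelp("OBJECT", object_help);
    return 0;
}
```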
-
Itamar Haber authored
-
- 27 Nov, 2017 1 commit
-
-
antirez authored
After #4472 the command may have just 2 arguments.
-
- 23 Nov, 2017 1 commit
-
-
antirez authored
See issue #4466 / #4467.
-
- 21 Nov, 2017 1 commit
-
-
zhaozhao.zz authored
-
- 02 Nov, 2017 1 commit
-
-
zhaozhao.zz authored
-
- 19 Sep, 2017 1 commit
-
-
antirez authored
This commit attempts to fix a number of bugs reported in #4316. They are related to the way replication info like the replication ID, offsets, and the currently selected DB in the master client are stored and loaded by Redis. In order to avoid inconsistencies the changes in this commit try to enforce that:

1. Replication information is only stored when the RDB file is generated by a slave that has a valid 'master' client, so that we can always extract the currently selected DB.
2. When replication information is persisted in the RDB file, either all the info needed for a successful PSYNC is persisted, or nothing is.
3. The RDB replication information is only loaded if the instance is configured as a slave, otherwise a master could start with IDs that relate to a different history of the data set, and still retain such IDs in the future while receiving unrelated writes.
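A small illustrative sketch of rule 3 at load time (struct and field names below are hypothetical placeholders, not the RDB aux field names): replication ID, offset and selected DB found in the RDB are adopted only when the instance is configured as a slave.

```c
#include <stdbool.h>
#include <stdio.h>

struct rdb_repl_info { const char *replid; long long repl_offset; int selected_db; bool present; };

static void maybe_adopt_repl_info(bool configured_as_slave, const struct rdb_repl_info *info) {
    if (!configured_as_slave || !info->present) return;  /* a master ignores the saved info */
    printf("adopting replid=%s offset=%lld db=%d for a possible partial resync\n",
           info->replid, info->repl_offset, info->selected_db);
}

int main(void) {
    struct rdb_repl_info info = { "1b5f3f1c", 1234567, 0, true };  /* placeholder values */
    maybe_adopt_repl_info(false, &info);  /* master: saved info is ignored */
    maybe_adopt_repl_info(true,  &info);  /* slave: info is adopted */
    return 0;
}
```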
-
- 17 Sep, 2017 1 commit
-
-
Oran Agra authored
When the SHUTDOWN command is received it is possible that some of the most recent commands have not yet been flushed from the AOF buffer, so the server experiences data loss at shutdown.
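A standalone sketch of the fix (buffer layout and names are illustrative): on SHUTDOWN, force whatever is still sitting in the in-memory AOF buffer onto the file descriptor and fsync it, instead of relying on a normal event-loop flush that will never run again.

```c
#include <stddef.h>
#include <unistd.h>

static char aof_buf[4096];     /* pending AOF writes not yet on disk */
static size_t aof_buf_len;
static int aof_fd = -1;

static void flushAofBufferOnShutdown(void) {
    if (aof_fd == -1 || aof_buf_len == 0) return;
    ssize_t n = write(aof_fd, aof_buf, aof_buf_len);  /* error handling omitted for brevity */
    (void)n;
    aof_buf_len = 0;
    fsync(aof_fd);
}

int main(void) {
    flushAofBufferOnShutdown();  /* called from the SHUTDOWN path before exiting */
    return 0;
}
```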
-
- 10 Jul, 2017 1 commit
-
-
antirez authored
-
- 05 Jul, 2017 1 commit
-
-
antirez authored
-
- 30 Jun, 2017 1 commit
-
-
antirez authored
Issue #4084 shows how, due to a design error, GEORADIUS is a write command because of the STORE option. Because of this it does not work on read-only slaves, gets redirected to masters in Redis Cluster even when the connection is in READONLY mode, and so forth. Breaking backward compatibility at this stage, with Redis 4.0 in an advanced RC state, is problematic for the user base. The API can be fixed in the unstable branch soon if we decide to do so in order to be more consistent, and Redis 5.0 released with this incompatibility in the future; this is still unclear. However, the ability to easily scale GEO queries on slaves is too important, so this commit adds two read-only variants of the GEORADIUS and GEORADIUSBYMEMBER commands: GEORADIUS_RO and GEORADIUSBYMEMBER_RO. The commands are exactly like the original commands, but they do not accept the STORE and STOREDIST options.
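Usage sketch with hiredis (host/port, key and points are just examples): the _RO variant takes the same arguments as GEORADIUS minus STORE/STOREDIST, so it can be served by read-only slaves.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    redisReply *r = redisCommand(c, "GEOADD Sicily 13.361389 38.115556 Palermo");
    if (r) freeReplyObject(r);

    /* Read-only query: no STORE/STOREDIST allowed, usable on READONLY connections. */
    r = redisCommand(c, "GEORADIUS_RO Sicily 15 37 200 km WITHCOORD WITHDIST");
    if (r) { printf("matches: %zu\n", r->elements); freeReplyObject(r); }

    redisFree(c);
    return 0;
}
```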
-
- 29 Jun, 2017 1 commit
-
-
antirez authored
This is the first step towards getting rid of HMSET, a command that does not make much sense once HSET is variadic and has a saner return value.
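A short hiredis sketch of the difference (key and fields are just examples): variadic HSET covers the HMSET use case and replies with the number of newly created fields instead of a bare +OK.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Multiple field/value pairs in one HSET call. */
    redisReply *r = redisCommand(c, "HSET user:1 name Alice age 30 city Oslo");
    if (r) {
        printf("new fields created: %lld\n", r->integer);  /* 3 on a fresh key */
        freeReplyObject(r);
    }
    redisFree(c);
    return 0;
}
```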
-
- 23 Jun, 2017 1 commit
-
-
Suraj Narkhede authored
-
- 16 Jun, 2017 1 commit
-
-
xuzhou authored
-
- 15 Jun, 2017 1 commit
-
-
antirez authored
-
- 14 Jun, 2017 1 commit
-
-
Qu Chen authored
commands.
-
- 19 May, 2017 1 commit
-
-
antirez authored
-
- 10 May, 2017 2 commits
-
-
antirez authored
This avoids Helgrind complaining, but we are actually not using atomicGet() to get the unixtime value for now: it is used in too many places, and given that time_t is word-sized it should be safe as it is on all the architectures we support. On the other hand, when Redis is compiled with "make helgrind" in order to force the __sync macros, Helgrind will see the write in updateCachedTime() performed through atomic functions and will not complain about races. This commit also includes minor refactoring of mutex initializations and a "helgrind" target in the Makefile.
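A standalone sketch of the __sync flavour of such macros (the real ones live in Redis's atomicvar.h and also have C11 and pthread-mutex variants; this is only an approximation):

```c
#include <stdio.h>
#include <time.h>

/* Atomic read expressed as an add of zero; atomic set as a CAS loop. */
#define atomicGet(var, dstvar) do { (dstvar) = __sync_add_and_fetch(&(var), 0); } while (0)
#define atomicSet(var, value)  do { while (!__sync_bool_compare_and_swap(&(var), (var), (value))); } while (0)

static time_t cached_unixtime;

int main(void) {
    atomicSet(cached_unixtime, time(NULL));  /* writer: what updateCachedTime() would do */
    time_t now;
    atomicGet(cached_unixtime, now);         /* reader: not used for unixtime yet, per the note above */
    printf("cached unixtime: %ld\n", (long)now);
    return 0;
}
```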
-
antirez authored
The __sync builtins can be correctly detected by Helgrind, so forcing them is useful for testing. The API in the INFO output can be useful for debugging after problems are reported.
-
- 09 May, 2017 3 commits
- 03 May, 2017 1 commit
-
-
antirez authored
Instead of giving the module background operations just a small window of time to run in the beforeSleep() function, we can keep the lock released for the whole time we are blocked in the multiplexing syscall.
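A standalone sketch of the locking pattern (mutex and function names are illustrative, not the real module GIL API): drop the lock right before blocking in the multiplexing call and take it back right after, so module threads can run for the whole duration of the wait.

```c
#include <poll.h>
#include <pthread.h>

static pthread_mutex_t module_gil = PTHREAD_MUTEX_INITIALIZER;

static void event_loop_iteration(struct pollfd *fds, int nfds) {
    pthread_mutex_unlock(&module_gil);  /* beforeSleep(): let module threads run */
    poll(fds, nfds, 100);               /* blocked in the multiplexing syscall */
    pthread_mutex_lock(&module_gil);    /* afterSleep(): commands run under the lock again */
}

int main(void) {
    pthread_mutex_lock(&module_gil);    /* the main thread normally holds the lock */
    struct pollfd fds[1] = {{ .fd = 0, .events = POLLIN }};
    event_loop_iteration(fds, 1);
    return 0;
}
```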
-
- 28 Apr, 2017 1 commit
-
-
antirez authored
-
- 21 Apr, 2017 1 commit
-
-
antirez authored
Normally we never check for OOM conditions inside Redis since the allocator will always return a pointer or abort the program on OOM conditions. However we have no control over epoll_create(), which may fail because of kernel OOM (according to the manual page) even if all the parameters are correct, so the function aeCreateEventLoop() may indeed return NULL and this condition must be checked.
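A compressed, standalone sketch of the pattern the commit adds (the stand-in struct below is not the real ae.c type): the constructor can return NULL because epoll_create() can fail, so the caller checks and exits with a message instead of dereferencing NULL.

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>

typedef struct aeEventLoop { int epfd; } aeEventLoop;   /* illustrative stand-in */

static aeEventLoop *aeCreateEventLoop(int setsize) {
    (void)setsize;
    aeEventLoop *el = malloc(sizeof(*el));
    if (el == NULL) return NULL;
    el->epfd = epoll_create(1024);        /* may fail on kernel OOM or fd limits */
    if (el->epfd == -1) { free(el); return NULL; }
    return el;
}

int main(void) {
    aeEventLoop *el = aeCreateEventLoop(1024);
    if (el == NULL) {
        fprintf(stderr, "Failed creating the event loop: %s\n", strerror(errno));
        exit(1);
    }
    printf("event loop ready (epoll fd %d)\n", el->epfd);
    return 0;
}
```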
-
- 13 Apr, 2017 1 commit
-
-
Itamar Haber authored
With the addition of modules, looping over the redisCommandTable misses any added commands. By moving to dictionary iteration this is resolved.
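A tiny standalone illustration of the reasoning (plain arrays stand in for the static table and the live command dictionary; the real code iterates server.commands with the dict API): commands registered at runtime by modules only show up when iterating the live registry.

```c
#include <stdio.h>

struct command { const char *name; };

static struct command static_table[] = { {"get"}, {"set"} };  /* built-ins only */
static const char *registry[16];                              /* stand-in for the command dict */
static int registry_len;

static void register_command(const char *name) { registry[registry_len++] = name; }

int main(void) {
    /* Built-ins are copied into the registry at startup... */
    for (size_t i = 0; i < sizeof(static_table)/sizeof(*static_table); i++)
        register_command(static_table[i].name);
    register_command("mymodule.hello");   /* ...and modules add more later */

    /* Iterating the live registry does not miss module commands. */
    for (int i = 0; i < registry_len; i++) printf("%s\n", registry[i]);
    return 0;
}
```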
-
- 11 Apr, 2017 1 commit
-
-
antirez authored
Otherwise, as it was, it would overwrite whatever the user set. Close #3703.
-
- 10 Apr, 2017 1 commit
-
-
antirez authored
If a thread unblocks a client blocked in a module command, by using the RedisModule_UnblockClient() API, the event loop may not be awakened until the next timeout of the multiplexing API or the next unrelated I/O operation on other clients. We actually want the client to be served ASAP, so a mechanism is needed in order for the unblocking API to inform Redis that there is a client to serve ASAP.

This commit fixes the issue using the old trick of the pipe: when a client needs to be unblocked, a byte is written into a pipe. When we run the list of clients blocked in modules, we consume all the bytes written into the pipe. Writes and reads are performed inside the context of the mutex, so no race is possible in which we consume bytes that are actually related to a wake-up request for a client that should still be put into the list of clients to unblock.

It was verified that after the fix the server handles the blocked clients with the expected short delay. Thanks to @dvirsky for understanding there was such a problem and reporting it.
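A self-contained demonstration of the pipe trick described above (threading and names are simplified; the real code keeps the pipe writes and reads inside the module mutex): the unblocking thread writes one byte, and the read end, registered in the event loop, makes the multiplexing call return immediately.

```c
#include <poll.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int wakeup_pipe[2];

static void *unblocking_thread(void *arg) {
    (void)arg;
    usleep(100000);                          /* ...module background work completes... */
    ssize_t n = write(wakeup_pipe[1], "x", 1);  /* ask the event loop to wake up */
    (void)n;
    return NULL;
}

int main(void) {
    if (pipe(wakeup_pipe) == -1) return 1;
    pthread_t t;
    pthread_create(&t, NULL, unblocking_thread, NULL);

    struct pollfd pfd = { .fd = wakeup_pipe[0], .events = POLLIN };
    poll(&pfd, 1, -1);                       /* would otherwise sleep until the next timeout */

    char buf[128];
    ssize_t n = read(wakeup_pipe[0], buf, sizeof(buf));  /* drain, then serve unblocked clients */
    (void)n;
    printf("woke up to serve unblocked clients\n");
    pthread_join(t, NULL);
    return 0;
}
```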
-
- 20 Feb, 2017 1 commit
-
-
antirez authored
This change attempts to switch to a hash function which mitigates the effects of the HashDoS attack (a denial of service attack trying to force data structures into worst-case behavior) while at the same time providing Redis with a hash function that does not expect the input data to be word aligned, a condition no longer true now that sds.c strings have a variable-length header.

Note that it is sometimes possible that, even using a hash function for which collisions cannot be generated without knowing the seed, special implementation details or the exposure of the seed in an indirect way (for example the ability to add elements to a Set and check the order in which Redis returns them with SMEMBERS) may make the attacker's life simpler in the process of trying to guess the correct seed. However the next step would be to switch to a log(N) data structure when too many items in a single bucket are detected: this seems like overkill in the case of Redis.

SPEED REGRESSION TESTS: In order to verify that switching from MurmurHash to SipHash had no impact on speed, a set of benchmarks involving fast insertion of 5 million keys was performed. The result shows Redis with SipHash in high pipelining conditions to be about 4% slower compared to using the previous hash function. However this could partially be related to the fact that the current implementation does not attempt to hash whole words at a time but reads single bytes, in order to have an output which is endian-neutral and at the same time works on systems where unaligned memory accesses are a problem. Further x86-specific optimizations should be tested; the function may easily get to the same level as MurmurHash2 if a few optimizations are performed.
-
- 30 Dec, 2016 1 commit
-
-
oranagra authored
-