- 09 Jan, 2019 1 commit
antirez authored
- 06 Dec, 2018 1 commit
lsytj0413 authored
- 24 Oct, 2018 1 commit
antirez authored
- 20 Oct, 2018 1 commit
hujie authored
- 19 Oct, 2018 2 commits
zhaozhao.zz authored
zhaozhao.zz authored
- 31 Jul, 2018 1 commit
zhaozhao.zz authored
- 25 Jul, 2018 3 commits
antirez authored
Related to #4852.
antirez authored
zhaozhao.zz authored
- 03 Jul, 2018 1 commit
Jack Drogon authored
- 28 Jun, 2018 1 commit
zhaozhao.zz authored
- 21 Jun, 2018 1 commit
shenlongxing authored
- 18 Jun, 2018 2 commits
antirez authored
The old version could not handle the fact that "STREAMS" is a valid key name for streams. Now we really try to parse the command like the command implementation would do. Related to #5028 and #4857.
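A minimal sketch of the parsing strategy this describes: options are consumed in order, exactly like the command implementation would do, so a key literally named "STREAMS" cannot be mistaken for the option token. All names here are illustrative stand-ins, not the actual Redis internals.

```c
#include <stdio.h>
#include <string.h>
#include <strings.h>

/* Extract key positions for:
 *   XREAD [COUNT n] [BLOCK ms] STREAMS key [key ...] id [id ...]
 * Stores argv indexes of the keys into keypos (caller-provided, at
 * least (argc-2)/2 slots). Returns the number of keys, -1 on a
 * syntax error. */
int xread_get_keys(int argc, char **argv, int *keypos) {
    int streams_pos = -1;
    for (int i = 1; i < argc; i++) {
        if (!strcasecmp(argv[i], "count") || !strcasecmp(argv[i], "block")) {
            i++; /* Skip the option's argument. */
        } else if (!strcasecmp(argv[i], "streams")) {
            streams_pos = i;
            break; /* Everything after this token is keys + IDs. */
        } else {
            return -1; /* Unknown option: syntax error. */
        }
    }
    if (streams_pos == -1) return -1;
    int remaining = argc - streams_pos - 1;
    /* Keys and IDs must come in two matching halves. */
    if (remaining <= 0 || remaining % 2 != 0) return -1;
    int numkeys = remaining / 2;
    for (int j = 0; j < numkeys; j++) keypos[j] = streams_pos + 1 + j;
    return numkeys;
}

int main(void) {
    /* A stream actually named "STREAMS" no longer confuses the
     * parser, because we stop at the STREAMS token reached while
     * consuming options in order. */
    char *argv[] = {"XREAD", "COUNT", "10", "STREAMS", "STREAMS", "$"};
    int keypos[8];
    int n = xread_get_keys(6, argv, keypos);
    for (int i = 0; i < n; i++)
        printf("key at argv[%d]: %s\n", keypos[i], argv[keypos[i]]);
    return 0;
}
```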
antirez authored
The loop allocated a buffer for the right number of key positions, then overflowed it by going past the limit. Related to #4857 and the cause of the memory violation seen in #5028.
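A short illustration of the fix, under the assumption of a buffer sized for the computed key count: the fill loop must be bounded by that count as well, not only by the argument scan. Names are hypothetical, not the actual Redis code.

```c
#include <stdio.h>
#include <stdlib.h>

/* Collect the argv positions of the keys of a command whose keys
 * start at 'first' and repeat every 'step' arguments. */
int *get_keys_fixed(int argc, int first, int step, int *numkeys) {
    int count = (argc - first) / step;  /* Slots allocated below. */
    int *keys = malloc(sizeof(int) * count);
    if (!keys) { *numkeys = 0; return NULL; }
    int i = 0;
    /* 'i < count' is the essential bound: the buggy pattern only
     * checked the argument index, so the writes could run past the
     * allocation. */
    for (int pos = first; i < count && pos < argc; pos += step)
        keys[i++] = pos;
    *numkeys = i;
    return keys;
}

int main(void) {
    int n;
    int *keys = get_keys_fixed(7, 1, 2, &n); /* Key args at 1, 3, 5. */
    printf("%d key positions\n", n);
    free(keys);
    return 0;
}
```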
- 14 Jun, 2018 1 commit
antirez authored
Thanks to @kevinmcgehee for signaling the issue and reasoning about the consequences and potential fixes. Issue #5015.
- 13 Jun, 2018 1 commit
zhaozhao.zz authored
- 12 Jun, 2018 1 commit
antirez authored
The introduction of ZPOP makes this needed. Thanks to @oranagra for reporting.
- 04 Jun, 2018 1 commit
赵磊 authored
- 29 Apr, 2018 1 commit
Itamar Haber authored
An implementation of the [Ze POP Redis Module](https://github.com/itamarhaber/zpop) as core Redis commands. Fixes #1861.
- 27 Feb, 2018 2 commits
- 11 Feb, 2018 1 commit
赵磊 authored
- 12 Jan, 2018 1 commit
antirez authored
This fixes a crash with Redis Cluster when OBJECT is mis-used, because getKeysUsingCommandTable() will call serverPanic() after detecting that we are accessing an invalid argument in the case "OBJECT foo" is called.

This bug was introduced when OBJECT HELP was introduced, because the key argument is set at the fixed index 2 in the command table, however now OBJECT may be called with too few arguments to extract the key.

The "Right Thing" would be to have a specific function to extract keys from the OBJECT command, however this is kind of an overkill, so I preferred to make getKeysUsingCommandTable() more robust and just return no keys when it's not possible to honor the command table: new commands are added often, a number of them have a HELP subcommand violating the normal form, and crashing for this trivial reason, or maintaining many command-specific key extraction functions, is not great.
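A hedged sketch of the fallback described here, using an illustrative command-table struct rather than the real Redis types: when the declared key index falls beyond the actual arguments, report zero keys instead of treating it as a fatal inconsistency.

```c
#include <stdio.h>

struct command {
    const char *name;
    int firstkey, lastkey, keystep; /* As in a command table. */
};

/* Stores key argv indexes into 'result' (up to maxkeys) and returns
 * how many were found; 0 when the table cannot be honored. */
int get_keys_using_command_table(struct command *cmd, int argc,
                                 int *result, int maxkeys) {
    if (cmd->firstkey == 0) return 0; /* Command takes no keys. */
    int last = (cmd->lastkey < 0) ? argc + cmd->lastkey : cmd->lastkey;
    int count = 0;
    for (int j = cmd->firstkey; j <= last; j += cmd->keystep) {
        /* The argument may simply be missing, as with "OBJECT HELP":
         * return no keys rather than panicking. */
        if (j >= argc) return 0;
        if (count < maxkeys) result[count++] = j;
    }
    return count;
}

int main(void) {
    struct command object = {"object", 2, 2, 1};
    int keys[4];
    /* OBJECT HELP: argc == 2, key declared at index 2 -> 0 keys. */
    printf("%d keys\n", get_keys_using_command_table(&object, 2, keys, 4));
    /* OBJECT ENCODING mykey: argc == 3 -> 1 key. */
    printf("%d keys\n", get_keys_using_command_table(&object, 3, keys, 4));
    return 0;
}
```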
- 01 Dec, 2017 5 commits
antirez authored
antirez authored
With lists we need to signal only on key creation, but streams can provide data to clients listening for every new item added. To make this slightly more efficient we now track different classes of blocked clients, so we avoid signaling keys when nobody is listening. A typical case is when the stream is used as a time series DB and accessed only by range with XRANGE.
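A rough sketch, with hypothetical names, of per-class blocked-client counters that make the signaling skippable when no client of the relevant class is blocked.

```c
#include <stdio.h>

enum { BLOCKED_NONE, BLOCKED_LIST, BLOCKED_STREAM, BLOCKED_NUM };

static int blocked_clients_by_type[BLOCKED_NUM];

void block_client(int type)   { blocked_clients_by_type[type]++; }
void unblock_client(int type) { blocked_clients_by_type[type]--; }

void signal_key_as_ready(const char *key, int type) {
    /* The cheap early return: an XADD to a stream used purely as a
     * time series DB (readers only use XRANGE) never pays for the
     * ready-keys machinery, since no client is blocked on streams. */
    if (blocked_clients_by_type[type] == 0) return;
    printf("queueing ready key: %s\n", key);
}

int main(void) {
    signal_key_as_ready("mystream", BLOCKED_STREAM); /* Skipped. */
    block_client(BLOCKED_STREAM);
    signal_key_as_ready("mystream", BLOCKED_STREAM); /* Queued. */
    return 0;
}
```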
antirez authored
antirez authored
antirez authored
- 27 Nov, 2017 1 commit
zhaozhao.zz authored
First, use the access time to replace the decrement time of the LFU. The LFUDecrAndReturn() function should only try to get the decremented counter, without updating the LFU fields; we update them in an explicit way. The counter is halved once for every server.lfu_decay_time period elapsed since the last access. Every time a key is accessed we should update the LFU, updating the access time and incrementing the counter after calling LFUDecrAndReturn(). If a key is overwritten, the LFU should also be updated. Then we can use the `OBJECT freq` command to get a key's frequency, and LFUDecrAndReturn() should be called in `OBJECT freq` as well, in case the key has not been accessed for a long time, because we update the access time only when the key is read or overwritten.
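A self-contained sketch of the flow described above, assuming an illustrative decay constant and a simplified (non-probabilistic) increment: the decay is computed read-only, and the fields only change in the explicit update step.

```c
#include <stdio.h>
#include <time.h>

#define LFU_INIT_VAL 5

struct lfu {
    time_t access_time; /* Last access time. */
    unsigned counter;   /* Frequency counter. */
};

static long lfu_decay_time = 60; /* Halving period, in seconds. */

/* Only computes the decayed value; does not touch the fields. */
unsigned lfu_decr_and_return(struct lfu *o, time_t now) {
    long periods = (now - o->access_time) / lfu_decay_time;
    unsigned counter = o->counter;
    while (periods-- > 0 && counter) counter /= 2; /* Halve per period. */
    return counter;
}

/* The explicit update, called when the key is read or overwritten. */
void lfu_update_on_access(struct lfu *o, time_t now) {
    unsigned counter = lfu_decr_and_return(o, now);
    if (counter < 255) counter++;
    o->counter = counter;
    o->access_time = now; /* Decay restarts from now. */
}

/* OBJECT FREQ must apply the decay too, since access_time only moves
 * on reads/overwrites; a long-idle key would otherwise look hot. */
unsigned object_freq(struct lfu *o) {
    return lfu_decr_and_return(o, time(NULL));
}

int main(void) {
    struct lfu o = { time(NULL), LFU_INIT_VAL };
    lfu_update_on_access(&o, time(NULL));
    printf("freq: %u\n", object_freq(&o));
    return 0;
}
```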
- 20 Sep, 2017 1 commit
zhaozhao.zz authored
This commit is a reinforcement of commit c1c99e9f.

1. Replication information can be stored when the RDB file is generated by a master: use server.slaveseldb when server.repl_backlog is not NULL, otherwise set repl_stream_db to -1. That's safe, because a NULL server.repl_backlog will trigger a full synchronization, and the master will then send a SELECT command in the replication stream.
2. Only do rdbSave* when rsiptr is not NULL; if we do rdbSave* without rdbSaveInfo, the slave will miss repl-stream-db.
3. Save the replication information also in the case of the SAVE command, FLUSHALL, and DEBUG reload.
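A small sketch of point 1 above, using stand-in types for the server state and rdbSaveInfo; it only shows how the recorded stream DB could be chosen.

```c
#include <stdio.h>

struct server_state {
    void *repl_backlog; /* NULL -> slaves will full-sync anyway. */
    int slaveseldb;     /* DB currently selected in the repl stream. */
};

struct rdb_save_info {
    int repl_stream_db; /* -1 means "no valid stream DB recorded". */
};

void populate_save_info(struct server_state *srv, struct rdb_save_info *rsi) {
    /* With no backlog a partial resync is impossible, so -1 is safe:
     * the full sync that follows re-sends SELECT on the stream. */
    rsi->repl_stream_db = srv->repl_backlog ? srv->slaveseldb : -1;
}

int main(void) {
    struct server_state srv = { NULL, 9 };
    struct rdb_save_info rsi;
    populate_save_info(&srv, &rsi);
    printf("repl_stream_db: %d\n", rsi.repl_stream_db); /* -1 */
    return 0;
}
```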
- 14 Jun, 2017 1 commit
Qu Chen authored
commands.
- 13 Jun, 2017 1 commit
antirez authored
- 19 Apr, 2017 1 commit
antirez authored
Close #3940.
- 07 Apr, 2017 1 commit
antirez authored
- 27 Mar, 2017 1 commit
antirez authored
- 30 Dec, 2016 1 commit
oranagra authored
- 13 Dec, 2016 1 commit
antirez authored
BACKGROUND AND USE CASE

Redis slaves are normally write only, however they support a "writable" mode which is very handy when scaling reads on slaves that actually need write operations in order to access data. For instance imagine having slaves replicating certain Sets keys from the master. When accessing the data on the slave, we want to perform intersections between such Sets values. However we don't want to intersect each time: caching the intersection for some time is often a good idea.

To do so, it is possible to setup a slave as a writable slave, and perform the intersection on the slave side, perhaps setting a TTL on the resulting key so that it will expire after some time.

THE BUG

Problem: in order to have consistent replication, expiring of keys in Redis replication is up to the master, which synthesizes DEL operations to send in the replication stream. However slaves logically expire keys by hiding them from read attempts from clients, so that if the master did not promptly send a DEL, the client still sees logically expired keys as non existing.

Because slaves don't actively expire keys by actually evicting them, but just mask them from the POV of read operations, if a key is created in a writable slave and an expire is set, the key will be leaked forever:

1. No DEL will be received from the master, which does not know about such a key at all.
2. No eviction will be performed by the slave, since it needs to disable eviction because it's up to masters, otherwise consistency of data is lost.

THE FIX

In order to fix the problem, the slave should be able to tag keys that were created on the slave side and have an expire set. My solution involved using a unique additional dictionary, created by the writable slave only if needed. The dictionary is obviously keyed by the key name that we need to track: all the keys that are set with an expire directly by a client writing to the slave are tracked.

The value in the dictionary is a bitmap of all the DBs where such a key name needs to be tracked, so that we can use a single dictionary to track keys in all the DBs used by the slave (actually this limits the solution to the first 64 DBs, but the default with Redis is to use 16 DBs). This solution has a small complexity and CPU penalty, which is zero when the feature is not used. The slave-side eviction is encapsulated in code which is not coupled with the rest of the Redis core, if not for the hook to track the keys.

TODO

I'm doing the first smoke tests to see if the feature works as expected: so far so good. Unit tests should be added before merging into the 4.0 branch.
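A sketch of the tracking structure described in THE FIX: one dictionary keyed by key name, whose value is a 64-bit bitmap of the DB indexes where that name has a slave-side expire to enforce. The tiny linear-scan table below is a stand-in for the real dict, just to keep the example self-contained.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_TRACKED 128

struct tracked { const char *name; uint64_t dbs; };
static struct tracked slave_keys_with_expire[MAX_TRACKED];
static int ntracked = 0;

static struct tracked *lookup(const char *name) {
    for (int i = 0; i < ntracked; i++)
        if (!strcmp(slave_keys_with_expire[i].name, name))
            return &slave_keys_with_expire[i];
    return NULL;
}

/* Called when a client writing to the writable slave sets an expire.
 * Only the first 64 DBs fit in one machine word. */
void track_slave_key_with_expire(const char *name, int dbid) {
    if (dbid >= 64) return;
    struct tracked *t = lookup(name);
    if (!t && ntracked < MAX_TRACKED) {
        t = &slave_keys_with_expire[ntracked++];
        t->name = name;
        t->dbs = 0;
    }
    if (t) t->dbs |= (uint64_t)1 << dbid;
}

/* The slave-side expire cycle only needs to visit tracked names. */
int is_tracked_in_db(const char *name, int dbid) {
    struct tracked *t = lookup(name);
    return t && (t->dbs & ((uint64_t)1 << dbid)) != 0;
}

int main(void) {
    track_slave_key_with_expire("sets:intersection", 0);
    printf("%d\n", is_tracked_in_db("sets:intersection", 0)); /* 1 */
    printf("%d\n", is_tracked_in_db("sets:intersection", 1)); /* 0 */
    return 0;
}
```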
- 09 Nov, 2016 1 commit
antirez authored
The gist of the changes is that now, partial resynchronizations between slaves and masters (without the need of a full resync with RDB transfer and so forth) work in a number of cases where it was impossible in the past. For instance:

1. When a slave is promoted to master, the slaves of the old master can partially resynchronize with the new master.
2. Chained slaves (slaves of slaves) can be moved to replicate from other slaves or the master itself, without requiring a full resync.
3. The master itself, after being turned into a slave, is able to partially resynchronize with the new master, when it joins replication again.

In order to obtain this, the following main changes were operated:

* Slaves also take a replication backlog, not just masters.
* Same stream replication for all the slaves and sub-slaves. The replication stream is identical from the top level master to its slaves, and is also the same from the slaves to their sub-slaves and so forth. This means that if a slave is later promoted to master, it has the same replication backlog, and can partially resynchronize with its slaves (that were previously slaves of the old master).
* A given replication history is no longer identified by the `runid` of a Redis node. There is instead a `replication ID` which changes every time the instance has a new history no longer coherent with the past one. So, for example, slaves publish the same replication history as their master, however when they are turned into masters, they publish a new replication ID, but still remember the old ID, so that they are able to partially resynchronize with slaves of the old master (up to a given offset).
* The replication protocol was slightly modified so that a new extended +CONTINUE reply from the master is able to inform the slave of a replication ID change.
* REPLCONF CAPA is used in order to notify masters that a slave is able to understand the new +CONTINUE reply.
* The RDB file was extended with an auxiliary field that is able to select a given DB after loading in the slave, so that the slave can continue receiving the replication stream from the point it was disconnected without requiring the master to insert "SELECT" statements. This is useful in order to guarantee the "same stream" property, because the slave must be able to accumulate an identical backlog.
* Slave pings to sub-slaves are now sent in a special form, when the top-level master is disconnected, in order not to interfere with the replication stream. We just use out of band "\n" bytes as in other parts of the Redis protocol.

An old design document is available here: https://gist.github.com/antirez/ae068f95c0d084891305 However the implementation is not identical to the description, because during the work to implement it different changes were needed in order to make things work well.
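A sketch of the replication-ID bookkeeping the third bullet describes; names and the partial-resync check are illustrative (the real code also checks backlog coverage). On promotion, the current ID is shifted into a "previous" slot together with the offset up to which it is valid, so slaves of the old master can still be served a partial resync via the extended +CONTINUE reply.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ID_LEN 40

struct repl_state {
    char replid[ID_LEN + 1];        /* Current history. */
    char replid2[ID_LEN + 1];       /* Previous history, if any. */
    long long master_repl_offset;
    long long second_replid_offset; /* replid2 is valid up to here. */
};

static void random_hex_id(char *dst) {
    for (int i = 0; i < ID_LEN; i++)
        dst[i] = "0123456789abcdef"[rand() & 15];
    dst[ID_LEN] = '\0';
}

/* Called on promotion: shift the current ID into the "previous" slot
 * instead of forgetting it. */
void shift_replication_id(struct repl_state *st) {
    memcpy(st->replid2, st->replid, sizeof(st->replid2));
    st->second_replid_offset = st->master_repl_offset + 1;
    random_hex_id(st->replid);
}

/* A PSYNC request can continue if it matches the current history, or
 * the previous one up to the recorded offset. */
int can_partial_resync(struct repl_state *st, const char *id, long long off) {
    if (!strcmp(id, st->replid)) return 1;
    return !strcmp(id, st->replid2) && off <= st->second_replid_offset;
}

int main(void) {
    struct repl_state st = {0};
    random_hex_id(st.replid);
    st.master_repl_offset = 1000;
    char old_id[ID_LEN + 1];
    memcpy(old_id, st.replid, sizeof(old_id));
    shift_replication_id(&st); /* Promotion to master. */
    printf("%d\n", can_partial_resync(&st, old_id, 900));  /* 1 */
    printf("%d\n", can_partial_resync(&st, old_id, 5000)); /* 0 */
    return 0;
}
```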
- 14 Oct, 2016 1 commit
antirez authored
This new command swaps two Redis databases, so that immediately all the clients connected to a given DB will see the data of the other DB, and the other way around. Example: SWAPDB 0 1. This will swap DB 0 with DB 1. All the clients connected with DB 0 will immediately see the new data, exactly like all the clients connected with DB 1 will see the data that was formerly of DB 0.

MOTIVATION AND HISTORY
---

The command was recently demanded by Pedro Melo, but was suggested in the past multiple times, and always refused by me. The reason why it was asked: imagine you have clients operating in DB 0. At the same time, you create a new version of the dataset in DB 1. When the new version of the dataset is available, you immediately want to swap the two views, so that the clients will transparently use the new version of the data. At the same time you'll likely destroy the DB 1 dataset (that contains the old data) and start to build a new version, to repeat the process.

This is an interesting pattern, but the reason why I always opposed implementing it was that FLUSHDB was a blocking command in Redis before the Redis 4.0 improvements. Now we have FLUSHDB ASYNC that releases the old data in O(1) from the point of view of the client, reclaiming memory incrementally in a different thread. At this point, the pattern can really be supported without latency spikes, so I'm providing this implementation for the users to comment on. In case a very compelling argument is made against this new command, it may be removed.

BEHAVIOR WITH BLOCKING OPERATIONS
---

If a client is blocking for a list in a given DB, after the swap it will still be blocked in the same DB ID, since this is the most logical thing to do: if I was blocked for a list push to list "foo", even after the swap I still want an LPUSH to reach the key "foo" in the same DB in order to unblock.

However an interesting thing happens when a client is, for instance, blocked waiting for new elements in list "foo" of DB 0, the DBs 0 and 1 are swapped with SWAPDB, and DB 1 happened to have a list called "foo" containing elements. When this happens, this implementation can correctly unblock the client. It is possible that there are subtle corner cases that are not covered in the implementation, but since the command is self-contained from the POV of the implementation and the Redis core, it cannot cause anything bad if not used.

Tests and documentation are yet to be provided.
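A sketch of why the swap is O(1) from the client's point of view: only the per-DB top level structures are exchanged, while clients keep pointing at the same DB index. The struct is a stand-in for the real redisDb.

```c
#include <stdio.h>

struct db {
    void *dict;    /* Keyspace. */
    void *expires; /* Expires table. */
};

void swap_dbs(struct db *dbs, int id1, int id2) {
    struct db tmp = dbs[id1];
    dbs[id1] = dbs[id2];
    dbs[id2] = tmp;
    /* In the real command, clients blocked on keys of either DB must
     * also be rechecked, since the swapped-in DB may already contain
     * the keys they are waiting for. */
}

int main(void) {
    int keyspace0 = 0, keyspace1 = 1;
    struct db dbs[2] = { { &keyspace0, NULL }, { &keyspace1, NULL } };
    swap_dbs(dbs, 0, 1);
    printf("db0 now holds keyspace %d\n", *(int *)dbs[0].dict); /* 1 */
    return 0;
}
```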