- 03 Sep, 2018 1 commit
antirez authored
Related to #5305.
- 14 Aug, 2018 1 commit
zhaozhao.zz authored
- 14 Jul, 2018 1 commit
dejun.xdj authored
- 10 Jul, 2018 3 commits
antirez authored
antirez authored
Detecting when the group (or the whole key) is destroyed, in order to send an error to the consumers blocked on that group, is a problem, so we leave the consumers listening: the sysadmin is free to create or destroy groups, assuming she/he knows what to do. However a client may be blocked on a given consumer group that is later destroyed, and then the stream receives new elements. In that case there is no sane way to serve the consumer other than reporting an error about the group no longer existing. More about detecting this synchronously, and why it is not done:
1. Normally we don't do that: we leave clients blocked for other data types, such as lists, as well.
2. When we free a stream object there is no longer any information about what key it was associated with, so while destroying the consumer groups we lack the information needed to unblock the clients at that moment.
3. Objects can be reclaimed in other threads, where it is no longer safe to perform client operations.
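A minimal sketch of the lazy, serve-time behavior described above, using simplified stand-in types and names rather than the actual Redis structures:

```c
#include <stdio.h>

/* Simplified stand-ins for the real structures in t_stream.c. */
typedef struct consumerGroup { const char *name; } consumerGroup;
typedef struct blockedClient { const char *group_name; } blockedClient;

/* Stand-in lookup: returns NULL when the group was destroyed. */
static consumerGroup *lookup_group(const char *name) {
    (void)name;
    return NULL; /* pretend the sysadmin destroyed the group */
}

/* Called lazily, when the stream receives new entries: only at this
 * point do we discover that the group the client blocked on is gone,
 * and the only sane thing left is to report an error. */
static void serve_blocked_client(blockedClient *bc) {
    consumerGroup *cg = lookup_group(bc->group_name);
    if (cg == NULL) {
        printf("-NOGROUP the consumer group this client was blocked "
               "on no longer exists\n");
        return;
    }
    /* ... otherwise deliver the new entries to the consumer ... */
}

int main(void) {
    blockedClient bc = { "mygroup" };
    serve_blocked_client(&bc);
    return 0;
}
```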
antirez authored
When a client blocks on a consumer group, we don't know the actual ID we want to be served: other clients blocked on the same consumer group may be served first, so the consumer group's last delivered ID changes. This was not handled correctly: all the clients in the consumer group but the first were unblocked without data.
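A minimal sketch of the fixed serving loop, with illustrative types (the real logic lives in blocked.c/t_stream.c):

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t ms, seq; } streamID;
typedef struct { streamID last_id; } consumerGroup;

/* The ID each blocked client is served from must be the group's
 * last delivered ID *at serve time*: snapshotting it once, before
 * the loop, would hand every client after the first an already
 * advanced ID, so they would be unblocked with no data, which is
 * exactly the bug being fixed. */
static void serve_blocked_clients(consumerGroup *cg, int nclients) {
    for (int i = 0; i < nclients; i++) {
        streamID start = cg->last_id;   /* re-read for every client */
        printf("client %d served from %llu-%llu\n", i,
               (unsigned long long)start.ms,
               (unsigned long long)start.seq);
        /* Delivering entries advances the group's last delivered ID,
         * so the next blocked client starts from the new value. */
        cg->last_id.seq++;
    }
}

int main(void) {
    consumerGroup cg = { { 1526569495631ULL, 0 } };
    serve_blocked_clients(&cg, 3);
    return 0;
}
```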
- 09 Jul, 2018 1 commit
dejun.xdj authored
Save NOACK option into client.blockingState structure.
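A sketch of the data-structure side of this change; the struct and field names are illustrative of the idea, not necessarily the exact identifiers in server.h:

```c
/* The blocking state a client carries while blocked on XREADGROUP
 * must remember whether NOACK was given, so that when the client is
 * finally served, the delivered entries are (or are not) added to
 * the group's pending entries list (PEL). */
typedef struct blockingState {
    /* ... timeout, keys the client is waiting on, stream IDs ... */
    int xread_group_noack;  /* 1 if XREADGROUP was given NOACK */
} blockingState;
```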
- 11 Jun, 2018 1 commit
antirez authored
We unblocked the client too early, when the group name object was no longer valid in client->bpop, so propagating XCLAIM later in streamPropagateXCLAIM() dereferenced a field already set to NULL.
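A minimal sketch of the ordering constraint behind the fix, with stand-in types (names are illustrative, not the actual Redis code):

```c
#include <stdio.h>

/* Simplified stand-ins; the real flow is in blocked.c/t_stream.c. */
typedef struct client {
    struct { const char *xread_group; } bpop;
} client;

static void propagate_xclaim(client *c) {
    /* Needs the group name: crashes if c->bpop was already cleared. */
    printf("XCLAIM propagated for group %s\n", c->bpop.xread_group);
}

static void unblock_client(client *c) {
    c->bpop.xread_group = NULL;  /* blocking state is released here */
}

/* The fix, in essence: everything that reads the blocking state,
 * like propagating XCLAIM for the entries just delivered, must run
 * before unblock_client() releases it. Unblocking first is exactly
 * the NULL dereference described above. */
static void serve_and_unblock(client *c) {
    propagate_xclaim(c);   /* uses c->bpop while still valid */
    unblock_client(c);     /* only now release the blocking state */
}

int main(void) {
    client c = { { "mygroup" } };
    serve_and_unblock(&c);
    return 0;
}
```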
- 31 May, 2018 1 commit
zhaozhao.zz authored
- 15 May, 2018 1 commit
antirez authored
- 11 May, 2018 2 commits
antirez authored
Usually blocking operations make a lot of sense with multiple keys, so that we can listen to multiple queues (or whatever the app models) with a single connection. However in the synchronous case it is more useful to be able to ask for N elements. This is a change that I also wanted to perform sooner or later in the blocking list variant, but here it is more natural since there is no reply type difference.
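A usage sketch with hiredis, assuming a local Redis 5.0+ instance and a sorted set at the illustrative key "myzset":

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* The synchronous variant takes a single key plus a count. */
    redisReply *r = redisCommand(c, "ZPOPMIN myzset 3");
    if (r && r->type == REDIS_REPLY_ARRAY) {
        /* The reply is a flat member,score,member,score,... array. */
        for (size_t i = 0; i + 1 < r->elements; i += 2)
            printf("%s => %s\n", r->element[i]->str,
                   r->element[i+1]->str);
    }
    if (r) freeReplyObject(r);

    /* The blocking variant instead takes multiple keys plus a
     * timeout, and pops a single element from the first non-empty
     * sorted set: BZPOPMIN key [key ...] timeout. */
    redisReply *b = redisCommand(c, "BZPOPMIN myzset otherzset 1");
    if (b) freeReplyObject(b);

    redisFree(c);
    return 0;
}
```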
antirez authored
This commit also adds a top comment about a subtle behavior of mixing blocking operations of different types on the same key.
- 29 Apr, 2018 1 commit
Itamar Haber authored
An implementation of the [Ze POP Redis Module](https://github.com/itamarhaber/zpop) as core Redis commands. Fixes #1861.
- 22 Mar, 2018 1 commit
Guy Benoish authored
- 19 Mar, 2018 1 commit
antirez authored
- 15 Mar, 2018 5 commits
- 01 Dec, 2017 9 commits
antirez authored
antirez authored
blockForKeys() was not freeing the allocation holding the ID when the key was already found busy. Fortunately the unit test checked explicitly for blocking multiple times for the same key (copying a regression in the blocking lists tests), so the bug was detected by the Redis test leak checker.
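The leak pattern in miniature, with a stand-in for the dictionary insertion (illustrative names, not the actual blockForKeys() code):

```c
#include <stdlib.h>
#include <string.h>

typedef struct { unsigned long long ms, seq; } streamID;

/* Stand-in for dictAdd(): returns 0 when the key already exists. */
static int dict_add(const char *key, void *val) {
    (void)key; (void)val;
    return 0;  /* pretend the client already blocked on this key */
}

/* The essence of the fix: a heap-allocated copy of the stream ID is
 * made for each key, but when the key is already in the client's
 * blocked-keys dict, the copy must be freed, because the dict keeps
 * the allocation made on the first call. */
static void block_for_key(const char *key, const streamID *id) {
    streamID *copy = malloc(sizeof(*copy));
    if (copy == NULL) return;
    memcpy(copy, id, sizeof(*copy));
    if (dict_add(key, copy) == 0)
        free(copy);  /* key already tracked: don't leak the copy */
}

int main(void) {
    streamID id = { 0, 0 };
    block_for_key("mystream", &id);
    block_for_key("mystream", &id);  /* second call must not leak */
    return 0;
}
```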
antirez authored
antirez authored
With lists we need to signal readiness only on key creation, but streams can provide data to listening clients every time a new item is added. To make this slightly more efficient we now track different classes of blocked clients, to avoid signaling keys when nobody is listening. A typical case is when the stream is used as a time series DB and accessed only by range with XRANGE.
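A minimal sketch of the optimization, with illustrative identifiers mirroring the idea of per-type blocked-client counters:

```c
#include <stdio.h>

/* Illustrative classes of blocked clients; the server keeps a
 * similar per-type counter so signaling can be skipped cheaply. */
typedef enum { BLOCKED_LIST, BLOCKED_STREAM, BLOCKED_NUM } blockedType;

static long blocked_clients_by_type[BLOCKED_NUM];

static void signal_key_as_ready(const char *key, blockedType type) {
    /* Typical time-series usage: the stream is only read with
     * XRANGE, nobody is blocked, so every XADD takes this fast
     * path and skips the ready-keys bookkeeping entirely. */
    if (blocked_clients_by_type[type] == 0) return;
    printf("key %s queued for serving blocked clients\n", key);
}

int main(void) {
    signal_key_as_ready("mystream", BLOCKED_STREAM); /* no-op */
    blocked_clients_by_type[BLOCKED_STREAM] = 1;
    signal_key_as_ready("mystream", BLOCKED_STREAM); /* signals */
    return 0;
}
```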
antirez authored
antirez authored
antirez authored
antirez authored
antirez authored
- 07 Oct, 2016 1 commit
antirez authored
Just a draft to align the main ideas; the code was never executed, but it compiles.
- 27 Jul, 2015 1 commit
antirez authored
- 26 Jul, 2015 4 commits
- 05 May, 2015 3 commits
- 24 Mar, 2015 2 commits
antirez authored
A bug as old as Redis and blocking operations. It's hard to trigger since it only happens on an instance role switch, but the results are quite bad since an inconsistency between master and slave is created. How to trigger the bug is a good description of the bug itself:
1. Client does "BLPOP mylist 0" in master.
2. Master is turned into a slave, which replicates from New-Master.
3. Client does "LPUSH mylist foo" in New-Master.
4. New-Master propagates the write to the slave.
5. The slave receives the LPUSH and the blocked client gets served.
Now the master's "mylist" key has "foo", while the slave's "mylist" key is empty.
Highlights:
* At step 2 above, the client remains attached, basically escaping any check performed during command dispatch: read-only slave, in this case.
* At step 5 the slave (that was the master) serves the blocked client, consuming a list element which is not consumed on the master side.
This scenario is technically likely to happen during failovers, however since Redis Sentinel already disconnects clients using the CLIENT command when changing the role of the instance, the bug is avoided in Sentinel deployments. Closes #2473.
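A sketch of one way to close this window: on a role switch, drop every blocked client so it must reconnect and pass through command dispatch (where the read-only-slave check lives) again, much as Sentinel achieves externally with the CLIENT command. All names here are illustrative, not the actual fix as committed:

```c
#include <stdio.h>

/* Minimal stand-ins: a linked list of clients, some blocked. */
typedef struct client {
    int id;
    int blocked;
    struct client *next;
} client;

/* Called when the instance's role changes: unblock and drop every
 * blocked client, so no client keeps a stale view of its role. */
static void disconnect_blocked_clients(client *clients) {
    for (client *c = clients; c != NULL; c = c->next) {
        if (c->blocked) {
            printf("client %d: -UNBLOCKED, closing connection\n",
                   c->id);
            c->blocked = 0;
        }
    }
}

int main(void) {
    client c2 = { 2, 1, NULL };
    client c1 = { 1, 0, &c2 };
    disconnect_blocked_clients(&c1);  /* role just switched */
    return 0;
}
```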
antirez authored
There was a bug in Redis Cluster caused by clients blocked in a blocking list pop operation on keys no longer handled by the instance, or in a condition where the cluster became down after the client blocked. A typical situation is:
1) BLPOP <somekey> 0
2) <somekey> hash slot is resharded to another master.
The client will block forever in this case. A symmetrical, non-cluster-specific bug happens when an instance is turned from master into slave. In that case it is more serious, since it desynchronizes data between slaves and masters. This other bug was discovered as a side effect of thinking about the bug explained and fixed in this commit, but will be fixed in a separate commit.
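A sketch of the shape of the cluster-side fix: when the cluster state or slot map changes, re-check each blocked client's keys and unblock, with a redirection error, any client whose key migrated away. The functions below are stand-ins (the real key hashing is CRC16 of the key modulo 16384, in cluster.c):

```c
#include <stdio.h>

/* Stand-ins for the real slot table and key hashing. */
static int key_hash_slot(const char *key) { (void)key; return 42; }
static int slot_owned_by_me(int slot)     { (void)slot; return 0; }

/* Re-check a blocked client's key instead of leaving the client
 * parked forever on a slot this node no longer serves. */
static void recheck_blocked_client(const char *blocked_key) {
    int slot = key_hash_slot(blocked_key);
    if (!slot_owned_by_me(slot)) {
        printf("-MOVED %d: unblocking client waiting on %s\n",
               slot, blocked_key);
    }
}

int main(void) {
    recheck_blocked_client("somekey");
    return 0;
}
```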