- 10 Jun, 2020 3 commits
- 04 May, 2020 1 commit
Guy Benoish authored
The same goes for XGROUP DELCONSUMER (but in this case it doesn't have any visible effect).
- 27 Apr, 2020 1 commit
Oran Agra authored
Now both the master and the replicas keep track of the last replication offset that contains meaningful data (ignoring the trailing pings), and both trim that tail from the replication backlog and from the offset they use when attempting a psync. The implication is that if someone missed some pings, or even has excessive pings that the promoted replica lacks, it will still be able to psync (i.e. avoid a full sync). The downside (which was already committed) is that replicas running old code may fail to psync, since the promoted replica trims pings from its backlog.

This commit adds a test that reproduces several cases of promotions and demotions with stale and non-stale pings.

Background: the meaningful offset on the master was added recently to solve a problem where the master is left all alone, injecting PINGs into its backlog while no one is listening, and then gets demoted and tries to replicate from a replica that didn't get any of the PINGs (or at least not the last ones). However, consider this case: master A has two replicas (B and C) replicating directly from it. There is no traffic at all and no network issues, just many pings in the tail of the backlog. Now B gets promoted, A becomes a replica of B, and C remains a replica of A. When A gets demoted, it trims the pings from its backlog and successfully replicates from B. However, C is still aware of these PINGs: when it disconnects and re-connects to A, it asks for something that is no longer in the backlog (since A trimmed its tail) and is forced to do a full sync, something it didn't have to do before the meaningful offset fix.

Besides that, the psync2 test was always failing randomly here and there, and it turns out the reason was PINGs. Investigating it shows the following scenario:

    cycle 1: redis #1 is master, and all the rest are direct replicas of #1
    cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is a replica of #1

Now we see that when #1 is demoted it prints:

    17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
    17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
    17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.

And when #3 connects to the demoted #1, #1 says:

    17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day for the demoted master (since it needs to sync from a replica that didn't get the last ping), but it didn't help one of the other replicas, which did get the last ping.
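As an illustration of the mechanism described above, here is a minimal sketch, not the actual Redis code: the names replState, feedBacklog and psyncOffsetAfterDemotion are hypothetical, and the real fields live in the server struct and replication.c.

```c
#include <stddef.h>

/* Hypothetical, simplified replication state. */
typedef struct replState {
    long long master_repl_offset;  /* Global offset, PINGs included. */
    long long meaningful_offset;   /* Offset of the last non-PING data. */
} replState;

/* Feed a chunk into the replication stream: only non-PING data
 * advances the meaningful offset. */
void feedBacklog(replState *rs, const char *buf, size_t len, int is_ping) {
    (void)buf;
    rs->master_repl_offset += (long long)len;
    if (!is_ping) rs->meaningful_offset = rs->master_repl_offset;
}

/* On demotion, the trailing PINGs (the gap between the two offsets) are
 * trimmed from the backlog, and the PSYNC request uses the meaningful
 * offset, so a promoted replica that never saw those PINGs can still
 * accept a partial resynchronization. */
long long psyncOffsetAfterDemotion(const replState *rs) {
    return rs->meaningful_offset;
}
```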
- 08 Apr, 2020 1 commit
antirez authored
See #7071.
- 27 Mar, 2020 5 commits
antirez authored
antirez authored
Now that this mechanism is the sole one used for blocked client timeouts, it is wiser to clean up the table when the client unblocks for any reason. We use a flag, CLIENT_IN_TO_TABLE, in order to avoid a radix tree lookup when the client was already removed from the table because we processed it while scanning the radix tree.
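A minimal sketch of the flag-guarded cleanup described above, with simplified types; the actual flag value and the rax key encoding in Redis differ.

```c
#include <stdint.h>

#define CLIENT_IN_TO_TABLE (1ULL<<0)  /* Illustrative bit, not the real value. */

typedef struct client {
    uint64_t flags;
    /* ... */
} client;

/* Called on every unblock path: the flag tells us whether the client is
 * still in the timeout table, so we can skip the radix tree lookup when
 * it was already removed by the tree-scanning code. */
void removeClientFromTimeoutTable(client *c) {
    if (!(c->flags & CLIENT_IN_TO_TABLE)) return;
    c->flags &= ~CLIENT_IN_TO_TABLE;
    /* raxRemove(server.clients_timeout_table, key, keylen, NULL); */
}
```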
antirez authored
antirez authored
antirez authored
- 26 Dec, 2019 1 commit
Guy Benoish authored
This commit solves several edge cases related to exhausting the streamID limits: we should correctly calculate the succeeding streamID instead of blindly incrementing 'seq'. This affects both XREAD and XADD. Other (unrelated) changes: reply with a better error message when trying to add an entry to a stream that has exhausted last_id.
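A sketch along the lines of the fix (a hypothetical, simplified take on a streamIncrID-style helper): incrementing 'seq' must carry into 'ms' and fail once both are saturated, rather than silently wrapping.

```c
#include <stdint.h>
#include <errno.h>

typedef struct streamID {
    uint64_t ms;    /* Milliseconds part. */
    uint64_t seq;   /* Sequence part. */
} streamID;

/* Set 'id' to the smallest ID strictly greater than itself. Returns 0 on
 * success, -1 with errno = EDOM when the ID space is exhausted, instead
 * of blindly incrementing 'seq' and wrapping around. */
int streamIncrID(streamID *id) {
    if (id->seq == UINT64_MAX) {
        if (id->ms == UINT64_MAX) {
            errno = EDOM;
            return -1;
        }
        id->ms++;
        id->seq = 0;
    } else {
        id->seq++;
    }
    return 0;
}
```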
- 19 Nov, 2019 2 commits
- 08 Nov, 2019 1 commit
zhaozhao.zz authored
- 31 Oct, 2019 3 commits
antirez authored
antirez authored
antirez authored
Using the is_key_ready() callback plus the reply callback later creates different issues AFAIK:

1. A more complex API.
2. We need to call the reply callback ASAP if the is_key_ready() interface returned success; however the internals do not work that way, so when the reply callback is called the setup could be different. To fix that, we would have to break the current design that handles the unblocked clients asynchronously, and run the list ASAP.
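For illustration, a sketch of the single-callback direction argued for here, assuming the shape the modules API eventually took (a reply callback that itself checks readiness, with REDISMODULE_ERR meaning "keep me blocked"); MyReply_Callback is a hypothetical module function.

```c
#include "redismodule.h"

/* The one callback does both jobs: it checks whether the key is actually
 * ready and, if so, builds the reply. Returning REDISMODULE_ERR keeps the
 * client blocked, removing the need for a separate is_key_ready() hook. */
int MyReply_Callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argc);
    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);
    int ready = RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_EMPTY;
    RedisModule_CloseKey(key);
    if (!ready) return REDISMODULE_ERR;   /* Not ready: remain blocked. */
    return RedisModule_ReplyWithSimpleString(ctx, "OK"); /* Serve the client. */
}
```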
- 30 Oct, 2019 1 commit
antirez authored
- 06 Sep, 2019 1 commit
antirez authored
- 05 Sep, 2019 1 commit
antirez authored
- 09 Jan, 2019 3 commits
- 11 Sep, 2018 1 commit
antirez authored
- 03 Sep, 2018 2 commits
- 14 Aug, 2018 1 commit
zhaozhao.zz authored
- 14 Jul, 2018 1 commit
dejun.xdj authored
- 10 Jul, 2018 3 commits
antirez authored
antirez authored
Detecting when the group (or the whole key) is destroyed in order to send an error to the consumers blocked on that group is a problem, so we leave the consumers listening: the sysadmin is free to create or destroy groups assuming she/he knows what to do. However a client may be blocked on a given consumer group that is later destroyed, and then the stream receives new elements. In that case there is no sane behavior to serve the consumer... other than reporting an error about the group no longer existing.

More about detecting this synchronously and why it is not done:
1. Normally we don't do that; we leave clients blocked for other data types such as lists.
2. When we free a stream object there is no longer any information about which key it was associated with, so while destroying the consumer groups we miss the info needed to unblock the clients at that moment.
3. Objects can be reclaimed in other threads where it is no longer safe to do client operations.
antirez authored
When a client blocks for a consumer group, we don't know the actual ID we want to be served: other clients blocked on the same consumer group may be served first, so the consumer group's last delivered ID changes. This was not handled correctly: all the clients in the consumer group except the first were unblocked without data.
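A sketch of the corrected logic, with hypothetical names: whether a blocked consumer-group client can be served must be decided against the group's current last-delivered ID, not the one observed when it blocked.

```c
#include <stdint.h>

typedef struct streamID { uint64_t ms, seq; } streamID;
typedef struct streamCG { streamID last_delivered_id; } streamCG;

static int streamIDGreater(streamID a, streamID b) {
    return a.ms > b.ms || (a.ms == b.ms && a.seq > b.seq);
}

/* Returns 1 if the stream's last entry is newer than what the group has
 * already delivered (possibly to other clients unblocked just before this
 * one); otherwise the client must stay blocked rather than being woken
 * up with no data. */
int canServeBlockedOnGroup(const streamCG *group, streamID last_entry) {
    return streamIDGreater(last_entry, group->last_delivered_id);
}
```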
- 09 Jul, 2018 1 commit
dejun.xdj authored
Save NOACK option into client.blockingState structure.
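An abridged sketch of the structure this touches (surrounding fields omitted, and the exact layout of that period's source is assumed rather than quoted): the NOACK flag is recorded at block time so that, when the blocked XREADGROUP is finally served, entries can be delivered without creating pending-entries-list records.

```c
typedef struct redisObject robj;

/* Abridged blockingState sketch. */
typedef struct blockingState {
    /* ... generic blocking fields omitted ... */
    robj *xread_group;       /* XREADGROUP group name. */
    robj *xread_consumer;    /* XREADGROUP consumer name. */
    int xread_group_noack;   /* XREADGROUP NOACK option. */
} blockingState;
```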
- 11 Jun, 2018 1 commit
antirez authored
We unblocked the client too early, so the group name object in client->bpop was no longer valid, and propagating XCLAIM later in streamPropagateXCLAIM() dereferenced a field already set to NULL.
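In sketch form (hypothetical, simplified signatures standing in for the real streamPropagateXCLAIM() and unblockClient()), the ordering constraint behind the fix is that anything reading client->bpop must run before the unblock clears it.

```c
typedef struct robj robj;
typedef struct client {
    struct { robj *xread_group; /* ... */ } bpop;
} client;

/* Hypothetical stubs: the point is only the call order. */
void propagateXCLAIMForServedEntries(client *c); /* Reads c->bpop.xread_group. */
void unblockClient(client *c);                   /* Frees/NULLs c->bpop fields. */

/* The fix's invariant: propagate XCLAIM while c->bpop is still valid,
 * and only then unblock the client, which tears down that state. */
void serveClientBlockedOnGroup(client *c) {
    propagateXCLAIMForServedEntries(c);
    unblockClient(c);
}
```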
- 31 May, 2018 1 commit
zhaozhao.zz authored
- 15 May, 2018 1 commit
antirez authored
- 11 May, 2018 2 commits
antirez authored
Usually blocking operations make a lot of sense with multiple keys, so that we can listen to multiple queues (or whatever the app models) with a single connection. However in the synchronous case it is more useful to be able to ask for N elements. This is a change that I also wanted to perform sooner or later in the blocking list variant, but here it is more natural since there is no reply type difference.
antirez authored
This commit also adds a top comment about a subtle behavior of mixing blocking operations of different types on the same key.
- 29 Apr, 2018 1 commit
Itamar Haber authored
An implementation of the [Ze POP Redis Module](https://github.com/itamarhaber/zpop) as core Redis commands. Fixes #1861.
- 22 Mar, 2018 1 commit
Guy Benoish authored