- 04 May, 2020 1 commit
Guy Benoish authored
The same goes for XGROUP DELCONSUMER (but in this case it doesn't have any visible effect).
- 26 Dec, 2019 1 commit
Guy Benoish authored
This commit solves several edge cases related to exhausting the streamID limits: we should correctly calculate the succeeding streamID instead of blindly incrementing 'seq'. This affects both XREAD and XADD. Other (unrelated) changes: reply with a better error message when trying to add an entry to a stream whose last_id is exhausted.
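For context, a minimal sketch of what "calculating the succeeding streamID" involves, assuming the usual two-part ID (a 64-bit milliseconds field plus a 64-bit sequence field): when 'seq' is already at its maximum, the next ID has to bump 'ms' and reset 'seq' to 0, and when both parts are exhausted there is no valid successor at all. The helper name below is hypothetical, not the one used in the Redis source.

```c
#include <stdint.h>

/* Hypothetical helper (not the actual Redis function): compute the stream ID
 * that immediately follows 'id'. Returns 0 on success, -1 if the whole ID
 * space is exhausted and no successor exists. */
typedef struct {
    uint64_t ms;    /* Milliseconds part of the stream ID. */
    uint64_t seq;   /* Sequence part of the stream ID. */
} streamID;

int streamSucceedingID(const streamID *id, streamID *next) {
    if (id->seq < UINT64_MAX) {
        /* Common case: just bump the sequence part. */
        next->ms = id->ms;
        next->seq = id->seq + 1;
        return 0;
    }
    if (id->ms < UINT64_MAX) {
        /* Sequence exhausted: move to the next millisecond, sequence restarts at 0. */
        next->ms = id->ms + 1;
        next->seq = 0;
        return 0;
    }
    /* last_id is already the maximum possible ID: nothing can follow it. */
    return -1;
}
```

A blind 'seq + 1' would silently wrap to 0 and produce an ID smaller than the one before it, which is the kind of edge case the commit describes for XREAD and XADD.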
- 06 Nov, 2019 1 commit
Guy Benoish authored
Fixes GitHub issue #6492. Added stream support in RM_KeyType and RM_ValueLength. moduleDelKeyIfEmpty was also updated, even though it has no effect yet (it will become relevant once the direct stream-type API is implemented, i.e. RM_StreamAdd).
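As a rough illustration of the module-API change, here is a sketch of a module command that relies on RM_KeyType reporting REDISMODULE_KEYTYPE_STREAM and on RM_ValueLength for a stream key. The command name "mystream.len", and the assumption that ValueLength reports the number of entries for a stream, are mine rather than taken from the commit.

```c
#include "redismodule.h"

/* Reply with the length of a stream key, using the key-space API only. */
int StreamLen_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);

    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);
    int type = RedisModule_KeyType(key);

    if (type == REDISMODULE_KEYTYPE_STREAM) {
        /* Assumption: for a stream key, ValueLength reports the number of entries. */
        RedisModule_ReplyWithLongLong(ctx, RedisModule_ValueLength(key));
    } else if (type == REDISMODULE_KEYTYPE_EMPTY) {
        RedisModule_ReplyWithLongLong(ctx, 0);
    } else {
        RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);
    }
    RedisModule_CloseKey(key);
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "mystream", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    if (RedisModule_CreateCommand(ctx, "mystream.len", StreamLen_RedisCommand,
                                  "readonly", 1, 1, 1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```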
- 07 Oct, 2019 1 commit
Jamison Judge authored
- 17 Jul, 2019 1 commit
Oran Agra authored
Now that the replica can read the RDB directly from the socket, it should avoid exiting on a short read and instead try to re-sync. This commit tries to have minimal effect on non-diskless RDB reading, and includes a test that tries to trigger this scenario in various read cases.
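For reference, a hedged redis.conf sketch of the kind of setup in which this code path is exercised; the directives below exist in Redis, but treating them as the relevant configuration for this commit is my reading, not something quoted from it.

```conf
# Master sends the RDB over the socket without writing it to disk first.
repl-diskless-sync yes
repl-diskless-sync-delay 5

# Replica parses the RDB straight from the socket. A short read (e.g. the
# master dies mid-transfer) should trigger a re-sync attempt rather than a
# replica exit, which is what the commit above addresses.
repl-diskless-load on-empty-db
```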
- 14 Jul, 2018 1 commit
dejun.xdj authored
- 17 Apr, 2018 1 commit
antirez authored
- 23 Mar, 2018 1 commit
antirez authored
- 19 Mar, 2018 1 commit
antirez authored
- 15 Mar, 2018 8 commits
- 01 Dec, 2017 5 commits
antirez authored
antirez authored
The approach used is to set a fixed header at the start of every listpack blob (which contains many entries). The header contains a "master" ID and fields, initially obtained from the first entry inserted in the listpack, so that the first entry is always well compressed. Every later entry is checked against these fields, and if it matches, the SAMEFIELD flag is set in the entry so that we know to just use the master entry fields. The IDs are always delta-encoded against the first entry. This approach avoids cascading effects in which entries are encoded depending on the previous entries, in order to avoid complexity and rewriting of the data when data is removed in the middle (which is a planned feature).
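A small illustrative sketch of the encoding idea described above, in the spirit of the commit rather than the actual listpack code: the struct and helper names are made up, and only the ID-delta and SAMEFIELD parts are modeled.

```c
#include <stdint.h>

/* Simplified stream ID: milliseconds + sequence. */
typedef struct {
    uint64_t ms;
    uint64_t seq;
} streamID;

/* Illustrative per-entry header: the ID is stored as a delta against the
 * "master" (first) entry of the listpack, and a SAMEFIELD-style flag records
 * whether the entry reuses the master fields instead of repeating them. */
#define ENTRY_FLAG_NONE       0
#define ENTRY_FLAG_SAMEFIELDS (1 << 0)

typedef struct {
    int flags;          /* ENTRY_FLAG_SAMEFIELDS if fields match the master. */
    int64_t ms_diff;    /* Non-negative: IDs inside a listpack only grow. */
    int64_t seq_diff;   /* May be negative, e.g. ms advanced and seq restarted. */
} encodedEntryHeader;

encodedEntryHeader encodeEntryHeader(const streamID *master_id, const streamID *id,
                                     int same_fields_as_master) {
    encodedEntryHeader h;
    h.flags = same_fields_as_master ? ENTRY_FLAG_SAMEFIELDS : ENTRY_FLAG_NONE;
    /* Deltas are always taken against the master entry, never the previous one,
     * so removing an entry from the middle never forces re-encoding its neighbours. */
    h.ms_diff = (int64_t)(id->ms - master_id->ms);
    h.seq_diff = (int64_t)(id->seq - master_id->seq);
    return h;
}
```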
antirez authored
antirez authored
antirez authored