- 13 Jan, 2020 1 commit
-
-
antirez authored
Related to #6110.
-
- 10 Jan, 2020 1 commit
-
-
antirez authored
We exit later, so no bug is fixed, but it is more correct. See #6054; thanks to @ShooterIT for finding the issue.
-
- 08 Jan, 2020 2 commits
- 07 Jan, 2020 2 commits
-
-
Leo Murillo authored
-
yz1509 authored
-
- 04 Jan, 2020 1 commit
-
-
Itamar Haber authored
Instead of 512, use the defined max from networking.c
-
- 01 Jan, 2020 2 commits
-
-
antirez authored
Likely fixes #6723. This is what happens AFAIK: we enter the main loop and keep expiring keys as long as more than a given percentage of the sampled keys is found to be logically expired. There are, however, other potential exit conditions. The problem is that the "sampled" variable is not always incremented inside the loop, because we may find no valid slot as we scan the hash table, only NULL dict entries. So when the do/while condition is evaluated at the end, we compute (expired*100/sampled), dividing by zero if we sampled 0 keys.
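A minimal sketch of the division-by-zero scenario and one way to guard the divisor (illustrative C with hypothetical helpers, not the actual activeExpireCycle() code):

    long expired = 0, sampled = 0;
    do {
        void *de = nextBucket();                 /* hypothetical scan helper */
        if (de != NULL) {                        /* empty bucket: sampled stays 0 */
            sampled++;
            if (keyIsLogicallyExpired(de)) expired++;  /* hypothetical check */
        }
        if (timeLimitReached()) break;           /* the real loop has more exit conditions */
    } while (sampled != 0 &&                     /* guard: never compute expired*100/0 */
             (expired * 100 / sampled) > 25);    /* e.g. 25% acceptable-stale threshold */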
-
John Sully authored
-
- 31 Dec, 2019 2 commits
-
-
ShooterIT authored
-
WuYunlong authored
Function adjustOpenFilesLimit() has an implicit parameter, server.maxclients. It aims to adjust the maximum file descriptor number according to server.maxclients on a best-effort basis: the resulting "bestlimit" may be lower than "maxfiles" but greater than "oldlimit".

When we try to increase "maxclients" with the CONFIG SET command, we could raise the maximum file descriptor number to a bigger value without calling aeResizeSetSize at the same time. When later more and more clients connect to the server, the allocated fds grow bigger and bigger, eventually exceeding the size of aeEventLoop.events. When a new node joins the cluster, a new link is created together with a new fd, but the return value of aeCreateFileEvent was not checked. In this case, we have a non-null "link" but the associated fd is not registered. So when we dynamically set "maxclients" we could end up with an inconsistency between the process's maximum file descriptor number and server.maxclients, which could later cause an inconsistency between cluster links and link fds.

Now, when setting "maxclients" dynamically, we consider it failed if the resulting "maxclients" is not the same as requested, and in that case we restore the previous maximum file descriptor number, so that server.maxclients keeps acting as a guard, as before.
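A hedged sketch of that guard (illustrative, not the exact config.c code; it only relies on the behavior described above, namely that adjustOpenFilesLimit() may lower server.maxclients when the fd limit cannot be raised enough):

    static int setMaxclients(unsigned int requested) {
        unsigned int old = server.maxclients;

        server.maxclients = requested;
        adjustOpenFilesLimit();               /* best effort; may lower server.maxclients */
        if (server.maxclients != requested) {
            server.maxclients = old;          /* roll back so maxclients keeps acting as a guard */
            adjustOpenFilesLimit();
            return 0;                         /* CONFIG SET reports an error */
        }
        /* The event loop should also be resized (aeResizeSetSize()) so that
         * fds up to the new limit always fit in aeEventLoop.events. */
        return 1;
    }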
-
- 30 Dec, 2019 1 commit
-
-
Guy Benoish authored
This commit solves the following bug:

127.0.0.1:6379> XGROUP CREATE x grp $ MKSTREAM
OK
127.0.0.1:6379> XADD x 666 f v
"666-0"
127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 0 STREAMS x >
1) 1) "x"
   2) 1) 1) "666-0"
         2) 1) "f"
            2) "v"
127.0.0.1:6379> XADD x 667 f v
"667-0"
127.0.0.1:6379> XDEL x 667
(integer) 1
127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 0 STREAMS x >
1) 1) "x"
   2) (empty array)

The root cause is that we use s->last_id in streamCompareID while we should use the last *valid* ID.
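A hedged sketch of the idea; the helper name below is an assumption for illustration, not necessarily what this commit adds:

    /* Compare the group's last delivered ID against the ID of the last entry
     * that still exists, rather than s->last_id, which may refer to an entry
     * removed by XDEL. */
    streamID last_valid;
    streamLastValidID(s, &last_valid);                     /* assumed helper */
    if (streamCompareID(&group->last_id, &last_valid) < 0) {
        /* there really is undelivered data: serve the blocked XREADGROUP */
    }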
-
- 29 Dec, 2019 2 commits
-
-
antirez authored
Happened when we set the name to "" to cancel it. The bug was introduced during the RESP3 refactoring. See #6036.
-
antirez authored
This bug is from the first version of Redis. Probably the problem here is that before, we used an SDS split function that created empty strings for additional spaces (as in "SET  foo bar" typed with extra spaces between arguments). AFAIK we later replaced it with the current sdssplitargs() API, which does not have this problem. As a result, we introduced a bug: it was no longer possible to do something like SET foo "" using the inline protocol. Now it is fixed.
-
- 26 Dec, 2019 2 commits
-
-
Oran Agra authored
- make lua-replicate-commands mutable (it never was, but I don't see why not)
- make tcp-backlog immutable (fix a recent refactoring mistake)
- increase the max limit of a few configs to match what they were before the recent refactoring
-
Guy Benoish authored
This commit solves several edge cases related to exhausting the streamID limits: we should correctly calculate the succeeding streamID instead of blindly incrementing 'seq'. This affects both XREAD and XADD. Other (unrelated) changes: reply with a better error message when trying to add an entry to a stream whose last_id is exhausted.
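A small self-contained sketch of what "correctly calculate the succeeding streamID" means (illustrative C mirroring the description, not necessarily the code added here):

    #include <stdint.h>

    typedef struct { uint64_t ms, seq; } streamID;   /* same shape as Redis's streamID */

    /* Successor of an ID: bump "ms" only when "seq" is exhausted, instead of
     * blindly incrementing "seq" and letting it overflow. */
    void streamNextID(const streamID *last, streamID *next) {
        if (last->seq == UINT64_MAX) {
            next->ms = last->ms + 1;   /* caller still rejects ms == UINT64_MAX (stream exhausted) */
            next->seq = 0;
        } else {
            next->ms = last->ms;
            next->seq = last->seq + 1;
        }
    }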
-
- 23 Dec, 2019 1 commit
-
-
Yossi Gottlieb authored
-
- 20 Dec, 2019 1 commit
-
-
antirez authored
-
- 18 Dec, 2019 6 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
-
zhaozhao.zz authored
-
zhaozhao.zz authored
-
- 17 Dec, 2019 11 commits
-
-
antirez authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
- 16 Dec, 2019 1 commit
-
-
antirez authored
-
- 13 Dec, 2019 1 commit
-
-
antirez authored
-
- 12 Dec, 2019 3 commits
-
-
Yossi Gottlieb authored
With the previous API, a NULL return value was ambiguous and could represent either an old value of NULL or an error condition. The new API returns a status code and allows the old value to be returned by reference. This commit also includes test coverage based on tests/modules/datatype.c, which did not exist at the time of the original commit.
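A usage sketch of the new pattern; assuming the affected call is RedisModule_ModuleTypeReplaceValue(), with a signature inferred from the description rather than copied from the commit:

    void *old = NULL;
    if (RedisModule_ModuleTypeReplaceValue(key, MyType, newval, &old) == REDISMODULE_OK) {
        /* old == NULL now unambiguously means "there was no previous value". */
        if (old) MyTypeFree(old);
    } else {
        /* Error (e.g. the key does not hold this module type); old is untouched. */
    }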
-
Oran Agra authored
Since the refactoring of config.c, it was initialized from config_hz in initServer, but apparently that's too late, since config file loading creates objects which call LRU_CLOCK.
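A rough sketch of the ordering fix implied above, assuming "it" refers to server.hz being copied from config_hz (LRU_CLOCK() divides by server.hz); the exact sequence is an assumption, not taken from the commit:

    initServerConfig();
    server.hz = server.config_hz;            /* set early, before the config file is parsed */
    loadServerConfig(configfile, options);   /* creates objects that call LRU_CLOCK() */
    initServer();                            /* previously the only place the value was set */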
-
antirez authored
-