- 04 May, 2020 4 commits
-
-
antirez authored
See #7188.
-
antirez authored
-
Guy Benoish authored
The same goes for XGROUP DELCONSUMER (but in this case it doesn't have any visible effect).
-
Oran Agra authored
* Fix memory leaks with diskless replica short read.
* Fix a few timing issues with valgrind runs.
* Fix an issue with valgrind and the watchdog schedule signal.

About the valgrind WD issue: the stack trace test in logging.tcl has issues with valgrind:

==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
==28808== too small or bad protection modes

It seems to be some valgrind bug with SA_ONSTACK. SA_ONSTACK seems unneeded since the WD is not recursive (SA_NODEFER was removed); also, it's not clear it is even valid without a call to sigaltstack().
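For context, the shape of such a watchdog handler installation is roughly the following. This is a minimal, hedged sketch (function names are illustrative, not Redis's actual debug code), assuming the watchdog fires via SIGALRM; the point is simply that neither SA_ONSTACK nor SA_NODEFER is set:

```c
#include <signal.h>
#include <string.h>

/* Illustrative watchdog handler: logs a stack trace and returns. */
static void watchdogHandler(int sig, siginfo_t *info, void *ucontext) {
    (void) sig; (void) info; (void) ucontext;
    /* ... dump the current stack trace to the log ... */
}

static void installWatchdogSignal(void) {
    struct sigaction act;
    memset(&act, 0, sizeof(act));
    sigemptyset(&act.sa_mask);
    act.sa_flags = SA_SIGINFO;       /* note: no SA_ONSTACK, no SA_NODEFER */
    act.sa_sigaction = watchdogHandler;
    sigaction(SIGALRM, &act, NULL);  /* delivered periodically by a timer */
}
```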
-
- 03 May, 2020 1 commit
-
-
Deliang Yang authored
-
- 02 May, 2020 2 commits
-
-
zhenwei pi authored
Currently, there are several types of threads/child processes of a Redis server. Sometimes we need to deeply optimise the performance of Redis, so we would like to isolate threads/processes. There was some discussion about CPU affinity cases in this issue: https://github.com/antirez/redis/issues/2863

So this patch implements CPU affinity setting via redis.conf: server_cpulist/bio_cpulist/aof_rewrite_cpulist/bgsave_cpulist can now be configured as CPU lists. Examples of cpulist in redis.conf:

server_cpulist 0-7:2     means CPU affinity 0,2,4,6
bio_cpulist 1,3          means CPU affinity 1,3
aof_rewrite_cpulist 8-11 means CPU affinity 8,9,10,11
bgsave_cpulist 1,10-11   means CPU affinity 1,10,11

Tested on Linux and FreeBSD; both work fine.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
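To make the cpulist syntax concrete, here is a minimal, self-contained Linux sketch (not the patch's actual code) that parses a list such as 0-7:2 and pins the calling process with sched_setaffinity(2); on FreeBSD the equivalent call is cpuset_setaffinity(2):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse a cpulist such as "0-7:2" or "1,10-11" into a cpu_set_t.
 * Returns 0 on success, -1 on a parse error. */
static int parseCpuList(const char *cpulist, cpu_set_t *set) {
    char *copy = strdup(cpulist), *save = NULL;
    if (!copy) return -1;
    CPU_ZERO(set);
    for (char *tok = strtok_r(copy, ",", &save); tok; tok = strtok_r(NULL, ",", &save)) {
        int start, end, step = 1;
        if (sscanf(tok, "%d-%d:%d", &start, &end, &step) >= 2) {
            if (step <= 0) step = 1;     /* range with optional step: "0-7:2" -> 0,2,4,6 */
        } else if (sscanf(tok, "%d", &start) == 1) {
            end = start;                 /* single CPU: "3" */
        } else {
            free(copy);
            return -1;
        }
        for (int cpu = start; cpu <= end; cpu += step) CPU_SET(cpu, set);
    }
    free(copy);
    return 0;
}

int main(void) {
    cpu_set_t set;
    if (parseCpuList("0-7:2", &set) == 0 &&
        sched_setaffinity(0, sizeof(set), &set) != 0)   /* pin to CPUs 0,2,4,6 */
        perror("sched_setaffinity");
    return 0;
}
```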
-
Oran Agra authored
When a deferred reply is added, the previous reply node cannot be used, so all the extra space we allocated in it is wasted. If someone uses deferred replies in a loop, each time adding a small reply, each of these reply nodes (the small string reply) would have consumed a 16k block. Now, when we add another deferred reply node, we trim the unused portion of the previous reply block. See #7123.

Cherry picked from commit fb732f7a, with a fix to handle a crash with the LIBC allocator, which apparently can return the same pointer despite changing its size, i.e. shrinking an allocation of 16k into 56 bytes without changing the pointer.
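A minimal sketch of the trimming idea, using hypothetical struct and function names (the real change operates on Redis's client output buffer list, not on this toy type):

```c
#include <stdlib.h>

/* Hypothetical reply node; field names are illustrative only. */
typedef struct replyNode {
    size_t size;   /* allocated payload size */
    size_t used;   /* payload bytes actually written */
    char buf[];
} replyNode;

/* Before appending a new deferred reply node, shrink the previous node's
 * allocation down to what it actually uses, so that a mostly-empty 16k
 * block is not kept alive per small reply. */
static replyNode *trimReplyNode(replyNode *node) {
    size_t needed = sizeof(replyNode) + node->used;
    replyNode *trimmed = realloc(node, needed);
    if (trimmed == NULL) return node;  /* shrink failed: keep the old block */
    /* Note: the allocator may shrink in place and hand back the *same*
     * pointer (the libc-allocator case mentioned above), so callers must
     * not use pointer equality to infer whether anything changed. */
    trimmed->size = trimmed->used;
    return trimmed;
}
```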
-
- 01 May, 2020 3 commits
-
-
antirez authored
We could use uint64_t specific macros, but after all it's simpler to just use an obvious equivalent type plus casting: this will be a no-op and is simpler than the fixed-size types' printf macros.
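For illustration, the two options side by side (a trivial standalone example, not code from the commit):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t value = 1234567890123ULL;

    /* Fixed-size-type macro: precise but noisy to read. */
    printf("value = %" PRIu64 "\n", value);

    /* Cast to an obvious equivalent type: unsigned long long is guaranteed
     * to be at least 64 bits, so the cast is effectively a no-op. */
    printf("value = %llu\n", (unsigned long long) value);
    return 0;
}
```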
-
antirez authored
Probably no performance changes, but the code should be trivial to read as in "No threading? Use the normal function and return".
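A sketch of the guard-clause shape being described, with hypothetical function names rather than Redis's actual networking code:

```c
/* Illustrative guard clause only; names are hypothetical. */
static int processPendingSync(void) {
    /* ... plain single-threaded path ... */
    return 0;
}

static int processPending(int io_threads_active) {
    /* No threading? Use the normal function and return. */
    if (!io_threads_active) return processPendingSync();

    /* ... threaded fan-out path, free of single-threaded special cases ... */
    return 0;
}
```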
-
- 30 Apr, 2020 6 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
zhaozhao.zz authored
1. Add the eviction-lazyfree latency monitor.
2. Put eviction-del & eviction-lazyfree into eviction-cycle; that means eviction-cycle contains all the latency in the eviction cycle, including del and lazyfree.
3. Use getMaxmemoryState to check whether we can break out in lazyfree-evict.
-
- 29 Apr, 2020 5 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
Also improve the message to make it clear that there is no *clear* owner, not that there is no owner at all.
-
antirez authored
-
srzhao authored
If the client gets blocked again in `processUnblockedClients`, Redis will not send `REPLCONF GETACK *` to the slaves until the next event loop, so the client will be blocked for 100ms by default (10 Hz) if no other file event fires. Move the server.get_ack_from_slaves snippet after `processUnblockedClients`, so that both the first WAIT command that puts the client in the blocked context and the following WAIT command processed in processUnblockedClients trigger redis-server to send `REPLCONF GETACK *`. That way the event loop gets `REPLCONF ACK <reploffset>` from the slaves and unblocks the client ASAP.
-
- 28 Apr, 2020 3 commits
-
-
Guy Benoish authored
-
Guy Benoish authored
Introducing XINFO STREAM <key> FULL
-
Oran Agra authored
Come to think of it, in theory (not in practice) getDecodedObject can return the same original object with the refcount incremented, so the pointer comparison in the previous commit was invalid. So now, instead of checking the encoding, we explicitly check the refcount.
-
- 27 Apr, 2020 4 commits
-
-
antirez authored
-
Oran Agra authored
Since the recent addition of OBJ_STATIC_REFCOUNT and the assertion in incrRefCount, it is now impossible to use dictFind with a static robj, because dictEncObjKeyCompare will call getDecodedObject, which tries to increment the refcount just in order to decrement it later.
-
Oran Agra authored
Now both the master and the replicas keep track of the last replication offset that contains meaningful data (ignoring the trailing pings), and both trim that tail from the replication backlog and from the offset they try to use for psync. The implication is that if someone missed some pings, or even has excessive pings that the promoted replica has, it will still be able to psync (avoid a full sync). The downside (which was already committed) is that replicas running old code may fail to psync, since the promoted replica trims pings from its backlog. This commit adds a test that reproduces several cases of promotions and demotions with stale and non-stale pings.

Background: the meaningful offset on the master was added recently to solve a problem where the master is left all alone, injecting PINGs into its backlog when no one is listening, and then gets demoted and tries to replicate from a replica that didn't have any of the PINGs (or at least not the last ones). However, consider this case: master A has two replicas (B and C) replicating directly from it. There is no traffic at all, and also no network issues, just many pings in the tail of the backlog. Now B gets promoted, A becomes a replica of B, and C remains a replica of A. When A gets demoted, it trims the pings from its backlog and successfully replicates from B. However, C is still aware of these PINGs; when it disconnects and re-connects to A, it will ask for something that is no longer in the backlog (since A trimmed the tail of its backlog) and be forced to do a full sync (something it didn't have to do before the meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and there; it turns out the reason was PINGs. Investigating it shows the following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is a replica of #1

Now we see that when #1 is demoted it prints:

17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.

And when #3 connects to the demoted #1, #1 says:

17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day for the demoted master (since it needs to sync from a replica that didn't get the last ping), but it didn't help one of the other replicas, which did get the last ping.
-
antirez authored
-
- 25 Apr, 2020 3 commits
-
-
Madelyn Olson authored
-
Madelyn Olson authored
-
Madelyn Olson authored
-
- 24 Apr, 2020 3 commits
-
-
antirez authored
STRALGO should be a container for mostly read-only string algorithms in Redis. The algorithms should have two main characteristics:

1. They should be non-trivial to compute, and often not part of programming language standard libraries.
2. They should be fast enough that it is a good idea to have optimized C implementations.

Next thing I would love to see? A small strings compression algorithm.
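The first STRALGO subcommand is LCS, the longest common subsequence. As a self-contained illustration of the kind of algorithm meant here (not Redis's implementation, which also supports returning match indexes), the core dynamic program looks roughly like this, with the example strings borrowed from the LCS documentation example, whose LCS is "mytext":

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Length of the longest common subsequence of a and b, via the classic
 * O(len(a) * len(b)) dynamic program using two rolling rows. */
static size_t lcsLength(const char *a, const char *b) {
    size_t la = strlen(a), lb = strlen(b);
    size_t *prev = calloc(lb + 1, sizeof(size_t));
    size_t *curr = calloc(lb + 1, sizeof(size_t));
    if (!prev || !curr) { free(prev); free(curr); return 0; }
    for (size_t i = 1; i <= la; i++) {
        for (size_t j = 1; j <= lb; j++) {
            if (a[i-1] == b[j-1])
                curr[j] = prev[j-1] + 1;                       /* extend a match */
            else
                curr[j] = prev[j] > curr[j-1] ? prev[j] : curr[j-1];
        }
        size_t *tmp = prev; prev = curr; curr = tmp;           /* roll the rows */
        memset(curr, 0, (lb + 1) * sizeof(size_t));
    }
    size_t result = prev[lb];
    free(prev); free(curr);
    return result;
}

int main(void) {
    /* "ohmytext" vs "mynewtext": the LCS is "mytext", so this prints 6. */
    printf("%zu\n", lcsLength("ohmytext", "mynewtext"));
    return 0;
}
```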
-
Oran Agra authored
When a deferred reply is added, the previous reply node cannot be used, so all the extra space we allocated in it is wasted. If someone uses deferred replies in a loop, each time adding a small reply, each of these reply nodes (the small string reply) would have consumed a 16k block. Now, when we add another deferred reply node, we trim the unused portion of the previous reply block. See #7123.
-
antirez authored
-
- 23 Apr, 2020 5 commits
-
-
antirez authored
-
antirez authored
After all, I changed my mind again: enabled/disabled should have a clearer meaning, and it only means this: you can't authenticate with such a user on new connections; however, old connections continue to work as expected.
-
antirez authored
Now that we have an interface to use this API directly, via ACL GENPASS, we are no longer sure what people could do with it. So why not make it a strong primitive exported by Redis in order to create unique IDs and so forth? The implementation was tested against the test vectors that can be found in RFC 4231.
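Since the commit mentions the RFC 4231 test vectors, here is a small standalone check of an HMAC-SHA256 implementation against Test Case 1 of that RFC. It uses OpenSSL purely for illustration and is not Redis's bundled code:

```c
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* RFC 4231, Test Case 1: key = 20 bytes of 0x0b, data = "Hi There".
     * Expected HMAC-SHA-256:
     * b0344c61d8db38535ca8afceaf0bf12b881dc200c9833da726e9376c2e32cff7 */
    unsigned char key[20];
    memset(key, 0x0b, sizeof(key));
    const char *data = "Hi There";

    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int maclen = 0;
    HMAC(EVP_sha256(), key, sizeof(key),
         (const unsigned char *) data, strlen(data), mac, &maclen);

    for (unsigned int i = 0; i < maclen; i++) printf("%02x", mac[i]);
    printf("\n");
    return 0;
}
```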
-
antirez authored
-
antirez authored
-
- 22 Apr, 2020 1 commit
-
-
antirez authored
-