- 27 May, 2020 1 commit
-
-
antirez authored
After a closer look, the Redis core developers all believe that this was too fragile: it caused many bugs that we didn't expect and that were very hard to track. Better to find an alternative solution that is simpler.
-
- 22 May, 2020 1 commit
-
-
antirez authored
Otherwise we run into this:

Backtrace:
src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
src/redis-server 127.0.0.1:21322[0x4552d4]
src/redis-server 127.0.0.1:21322[0x4c5593]
src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]

This happens because in certain replication paths we disconnect all the replicas and then free the replication backlog, and the code that frees the replication backlog expects that no replica is connected. However we still need to free the replicas asynchronously in certain cases, as documented in the top comment of disconnectSlaves().
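The assertion that fires in the trace above guards exactly that invariant. A minimal self-contained sketch of the relationship (the field names are simplified stand-ins, not the actual Redis source):

    #include <assert.h>
    #include <stdlib.h>

    /* Toy stand-ins for the real server state. */
    struct {
        int connected_replicas;  /* listLength(server.slaves) in Redis */
        char *repl_backlog;
    } server = {0, NULL};

    /* The backlog feeds connected replicas, so freeing it is only legal
     * once every replica is already gone: if the disconnect was deferred
     * (asynchronous), this assertion is what fires in the backtrace. */
    void freeReplicationBacklog(void) {
        assert(server.connected_replicas == 0);
        free(server.repl_backlog);
        server.repl_backlog = NULL;
    }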
-
- 15 May, 2020 1 commit
-
-
antirez authored
The context is issue #7205: since the introduction of threaded I/O we close clients asynchronously by default from readQueryFromClient(). So we should no longer prevent the caching of the master client (for a later incremental PSYNC) when such close flags are set. However we also don't want the master client to be cached with such flags, since it would be closed immediately after being restored. And yet we want a way to understand whether a master was closed because of a protocol error, and in that case prevent the caching.
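A minimal sketch of the resulting logic, assuming a dedicated protocol-error flag (the flag and function names below are illustrative, not necessarily the ones used in the commit):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative flag values, not the real Redis constants. */
    #define CLIENT_CLOSE_ASAP        (1ULL << 0)
    #define CLIENT_CLOSE_AFTER_REPLY (1ULL << 1)
    #define CLIENT_PROTOCOL_ERROR    (1ULL << 2)

    typedef struct { uint64_t flags; } client;

    /* A protocol error still forbids caching: a later incremental PSYNC
     * from a desynchronized stream would corrupt the dataset. A plain
     * asynchronous close (the default with threaded I/O) no longer does. */
    bool canCacheMaster(client *master) {
        if (master->flags & CLIENT_PROTOCOL_ERROR) return false;
        /* Strip the deferred-close flags so the restored master is not
         * closed immediately after a successful partial resync. */
        master->flags &= ~(CLIENT_CLOSE_ASAP | CLIENT_CLOSE_AFTER_REPLY);
        return true;
    }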
-
- 14 May, 2020 1 commit
-
-
antirez authored
Related to #7234.
-
- 05 May, 2020 1 commit
-
-
Titouan Christophe authored
This works because the struct is never referenced by its tag name, but always through its typedef. This prevents a conflict with struct user from <sys/user.h> when compiling against uClibc. Signed-off-by:
Titouan Christophe <titouan.christophe@railnova.eu>
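In other words, dropping the struct tag is enough to avoid the clash. A tiny illustration (the fields are invented for the example):

    #include <sys/user.h>  /* on uClibc this also declares a 'struct user' */

    /* Before: 'typedef struct user { ... } user;' - the tag collides with
     * the one from <sys/user.h>. After: the struct is anonymous and code
     * only ever names the typedef, so there is nothing left to collide. */
    typedef struct {
        char *name;  /* invented fields, for illustration only */
        int flags;
    } user;

    user *createUser(void);  /* callers say 'user', never 'struct user' */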
-
- 02 May, 2020 2 commits
-
-
hwware authored
-
zhenwei pi authored
Currently, there are several types of threads/child processes in a Redis server. Sometimes we need to deeply optimise the performance of Redis, so we would like to isolate these threads/processes. There was some discussion about CPU affinity cases in this issue: https://github.com/antirez/redis/issues/2863

So this patch implements CPU affinity setting via redis.conf: we can now configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/bgsave_cpulist with a CPU list. Examples of cpulist in redis.conf:

server_cpulist 0-7:2 means CPU affinity 0,2,4,6
bio_cpulist 1,3 means CPU affinity 1,3
aof_rewrite_cpulist 8-11 means CPU affinity 8,9,10,11
bgsave_cpulist 1,10-11 means CPU affinity 1,10,11

Tested on Linux and FreeBSD; both work fine. Signed-off-by:
zhenwei pi <pizhenwei@bytedance.com>
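Under the hood this boils down to parsing the cpulist syntax and handing the resulting mask to the OS scheduler. A rough Linux-only sketch of the idea (the helper below only handles the a-b:step and comma forms shown above, and is not the actual Redis implementation):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Parse "0-7:2" / "1,3" / "1,10-11" style lists into a cpu_set_t. */
    static int parse_cpulist(const char *list, cpu_set_t *set) {
        char *copy = strdup(list), *tok, *save = NULL;
        CPU_ZERO(set);
        for (tok = strtok_r(copy, ",", &save); tok;
             tok = strtok_r(NULL, ",", &save)) {
            int lo, hi, step = 1;
            if (sscanf(tok, "%d-%d:%d", &lo, &hi, &step) >= 2) {
                for (int c = lo; c <= hi; c += step) CPU_SET(c, set);
            } else if (sscanf(tok, "%d", &lo) == 1) {
                CPU_SET(lo, set);
            } else { free(copy); return -1; }
        }
        free(copy);
        return 0;
    }

    /* Pin the calling task, e.g. the child right after fork() for bgsave. */
    int set_cpu_affinity(const char *cpulist) {
        cpu_set_t set;
        if (parse_cpulist(cpulist, &set) == -1) return -1;
        return sched_setaffinity(0, sizeof(set), &set); /* 0 = this task */
    }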
-
- 27 Apr, 2020 1 commit
-
-
Oran Agra authored
Now both master and replicas keep track of the last replication offset that contains meaningful data (ignoring the trailing pings), and both trim that tail from the replication backlog and from the offset they use for PSYNC. The implication is that if a replica missed some pings, or has extra pings that the promoted replica lacks, it will still be able to PSYNC (avoiding a full sync). The downside (already introduced by the earlier commit) is that replicas running old code may fail to PSYNC, since the promoted replica trims pings from its backlog. This commit adds a test that reproduces several cases of promotions and demotions with stale and non-stale pings.

Background: the meaningful offset on the master was added recently to solve a problem where the master is left all alone, injecting PINGs into its backlog when no one is listening, and then gets demoted and tries to replicate from a replica that didn't receive any of the PINGs (or at least not the last ones). However, consider this case: master A has two replicas (B and C) replicating directly from it. There's no traffic at all and no network issues, just many pings in the tail of the backlog. Now B gets promoted, A becomes a replica of B, and C remains a replica of A. When A gets demoted, it trims the pings from its backlog and successfully replicates from B. However C is still aware of these PINGs: when it disconnects and re-connects to A, it will ask for something that's no longer in the backlog (since A trimmed its tail), and be forced to do a full sync (something it didn't have to do before the meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and there; it turns out the reason was PINGs. Investigating it shows the following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is a replica of #1

Now we see that when #1 is demoted it prints:

17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.

And when #3 connects to the demoted #1, #1 says:

17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day for the demoted master (since it needs to sync from a replica that didn't get the last ping), but it didn't help one of the other replicas, which did get the last ping.
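The trim both sides perform looks roughly like this. A simplified self-contained sketch (field names modeled on, but not copied from, the Redis source), seeded with the offsets from the log above:

    #include <stdio.h>

    /* Toy stand-ins for the replication state. */
    struct {
        long long master_repl_offset; /* offset including trailing PINGs */
        long long meaningful_offset;  /* offset of the last non-PING byte */
        long long backlog_histlen;    /* bytes currently in the backlog */
    } repl = {3929977, 3929963, 100000};

    /* On demotion, drop the trailing PING bytes so the PSYNC request is
     * made from an offset the new master can actually serve. */
    void trimTrailingPings(void) {
        long long delta = repl.master_repl_offset - repl.meaningful_offset;
        if (delta <= 0) return;
        printf("Using the meaningful offset %lld instead of %lld "
               "(%lld bytes difference)\n",
               repl.meaningful_offset, repl.master_repl_offset, delta);
        repl.master_repl_offset = repl.meaningful_offset;
        repl.backlog_histlen -= delta; /* the tail is all PINGs: discard */
    }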
-
- 24 Apr, 2020 1 commit
-
-
antirez authored
STRALGO should be a container for mostly read-only string algorithms in Redis. The algorithms should have two main characteristics:

1. They should be non-trivial to compute, and often not part of programming language standard libraries.
2. They should be fast enough that it is a good idea to have optimized C implementations.

Next thing I would love to see? A small strings compression algorithm.
-
- 21 Apr, 2020 1 commit
-
-
antirez authored
-
- 09 Apr, 2020 4 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
Reloading of the RDB generated by DEBUG POPULATE 5000000 SAVE is now 25% faster. This commit also prepares the ability to have more flexibility when loading stuff from the RDB, since we no longer use dbAdd() but can control exactly how things are added to the database.
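A load-specific insertion path can skip the bookkeeping that dbAdd() performs for live writes. A rough self-contained sketch of the idea (names and structure are illustrative, not the actual commit):

    #include <stdbool.h>

    /* Toy stand-ins so the sketch compiles; the real code works on dicts. */
    typedef struct { int entries; } db_t;
    static bool hashInsert(db_t *db, const char *key) {
        (void)key; db->entries++; return true;
    }
    static void signalKeyAsReady(db_t *db, const char *key) {
        (void)db; (void)key; /* wake clients blocked on this key */
    }

    /* Live-write path: fires the blocking-keys hook after insertion. */
    bool dbAdd(db_t *db, const char *key) {
        if (!hashInsert(db, key)) return false;
        signalKeyAsReady(db, key);
        return true;
    }

    /* RDB-load path: while loading, no client can be blocked on a key, so
     * the hook is skipped and the caller controls exactly how entries are
     * added (encoding, expire, and so forth). */
    bool dbAddRDBLoad(db_t *db, const char *key) {
        return hashInsert(db, key);
    }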
-
- 07 Apr, 2020 1 commit
-
-
antirez authored
Related to #5145. Design note: clients may change type when they turn into replicas or are moved into the Pub/Sub category and so forth. Moreover the recomputation of the bytes used is problematic for obvious reasons: it changes continuously. So, as a conservative way to avoid accumulating errors, each client remembers the contribution it gave to the sum, and removes that contribution when the client is freed or before updating it with the new memory usage.
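The accounting pattern described above, as a small self-contained sketch (type and field names are illustrative):

    #include <stddef.h>

    enum { TYPE_NORMAL, TYPE_REPLICA, TYPE_PUBSUB, TYPE_COUNT };

    static size_t mem_by_type[TYPE_COUNT]; /* the global per-type sums */

    typedef struct {
        int last_type;     /* type the last contribution was counted under */
        size_t last_usage; /* contribution currently included in the sum */
    } client;

    /* Subtract the remembered contribution, add the fresh one. Because
     * each client knows exactly what it added (and under which type),
     * type changes and continuous fluctuation never accumulate errors. */
    void updateClientMemUsage(client *c, int cur_type, size_t cur_usage) {
        mem_by_type[c->last_type] -= c->last_usage;
        mem_by_type[cur_type] += cur_usage;
        c->last_type = cur_type;
        c->last_usage = cur_usage;
    }

    /* On free, just remove the remembered contribution. */
    void unlinkClientMemUsage(client *c) {
        mem_by_type[c->last_type] -= c->last_usage;
        c->last_usage = 0;
    }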
-
- 01 Apr, 2020 1 commit
-
-
antirez authored
-
- 31 Mar, 2020 2 commits
-
-
Guy Benoish authored
Make sure call() doesn't wrap replicated commands with a redundant MULTI/EXEC. Other, unrelated changes:
1. Fix a formatting compiler warning in INFO CLIENTS.
2. Use CLIENT_ID_AOF instead of UINT64_MAX.
-
antirez authored
37a10cef introduced automatic wrapping of MULTI/EXEC for the alsoPropagate API. However this collides with the built-in mechanism already present in module.c. To avoid complex changes near Redis 6 GA, this commit introduces the ability to exclude the call() MULTI/EXEC wrapping for alsoPropagate, in order to continue to use the old code paths in module.c.
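Presumably the opt-out is just a flag checked at the point where call() would emit the wrapper. A minimal sketch of that shape (the flag name and helpers below are assumptions, not quoted from the commit):

    /* Assumed flag: don't wrap the alsoPropagate array in MULTI/EXEC. */
    #define CMD_CALL_NOWRAP (1 << 0)

    /* Stubs standing in for the real propagation code. */
    static void propagateWrapped(int ncmds) { (void)ncmds; /* MULTI..EXEC */ }
    static void propagatePlain(int ncmds)   { (void)ncmds; /* bare cmds */ }

    /* module.c already wraps its own propagation in MULTI/EXEC, so it can
     * pass CMD_CALL_NOWRAP and keep its old code path, avoiding a double
     * (nested) transaction in the replication stream. */
    void flushAlsoPropagate(int flags, int ncmds) {
        if (ncmds > 1 && !(flags & CMD_CALL_NOWRAP))
            propagateWrapped(ncmds);
        else
            propagatePlain(ncmds);
    }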
-
- 27 Mar, 2020 6 commits
-
-
antirez authored
-
antirez authored
Now that this mechanism is the only one used for blocked client timeouts, it is wiser to clean up the table when the client unblocks for any reason. We use a flag, CLIENT_IN_TO_TABLE, in order to avoid a radix tree lookup when the client was already removed from the table because we processed it by scanning the radix tree.
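The flag turns the common case into a cheap bit test. A small self-contained sketch of the pattern (simplified: the real table is a radix tree keyed by expire time plus client ID):

    #include <stdbool.h>

    #define CLIENT_IN_TO_TABLE (1 << 0)

    typedef struct { unsigned flags; unsigned long long id; } client;

    /* Stub standing in for raxRemove() on the timeouts radix tree. */
    static void timeoutTableRemove(unsigned long long id) { (void)id; }

    /* Called whenever a client unblocks, for any reason. If the periodic
     * scan of the radix tree already removed it (clearing the flag), we
     * return immediately instead of doing a useless tree lookup. */
    void removeClientFromTimeoutTable(client *c) {
        if (!(c->flags & CLIENT_IN_TO_TABLE)) return;
        timeoutTableRemove(c->id);
        c->flags &= ~CLIENT_IN_TO_TABLE;
    }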
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
- 25 Mar, 2020 1 commit
-
-
antirez authored
A very commonly signaled operational problem with Redis master-replica sets is that, once the master becomes unavailable for some reason, especially because of network problems, many times it won't be able to perform a partial resynchronization with the new master once it rejoins the partition, for the following reason:

1. The master becomes isolated, however it keeps sending PINGs to the replicas. Such PINGs will never be received since the link connection is actually already severed.
2. On the other side, one of the replicas will turn into the new master, setting its secondary replication ID offset to the one of the last command received from the old master: this offset will not include the PINGs sent by the master once the link was already disconnected.
3. When the master rejoins the partition and is turned into a replica, its offset will be too advanced because of the PINGs, so a PSYNC will fail and a full synchronization will be required.

Related to issue #7002 and other discussion we had in the past around this problem.
-
- 18 Mar, 2020 1 commit
-
-
WuYunlong authored
Before this commit, when upgrading a replica, expired keys would not be loaded, thus causing the replica to have fewer keys in its db. Up to this point, the master's and replica's keys are logically consistent. However, before the keys in master and replica become physically consistent (that is, before they have the same dbsize), if the master has a problem and the replica gets promoted and becomes the new master of that partition, and the new master updates a key which does not exist on it but physically exists on the old master (now a replica), the old master would refuse to update the key, thus making master and replica data inconsistent.

How could this happen? It's all because of the wrong judgement of roles while starting up the server. We cannot use server.masterhost to judge whether the server is a master or a replica, since it fails in cluster mode. When we start the server and load the RDB, we do want to load expired keys, and do not want to have the ability to actively expire keys, if it is a replica.
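So the role check at startup has to be cluster-aware. A hedged sketch of such a helper (modeled on the idea described above, not quoted from the commit):

    #include <stdbool.h>
    #include <stddef.h>

    /* Toy stand-ins for the relevant server state. */
    struct {
        bool cluster_enabled;
        const char *masterhost;      /* NULL when not replicating anyone */
        bool cluster_node_is_master; /* role according to the cluster bus */
    } server = {false, NULL, true};

    /* server.masterhost alone is wrong in cluster mode, where the role
     * comes from the cluster state; a correct check consults both. */
    bool iAmMaster(void) {
        if (server.cluster_enabled) return server.cluster_node_is_master;
        return server.masterhost == NULL;
    }

    /* At startup, a replica loads expired keys verbatim and must not
     * actively expire them; only a real master drops them at load time. */
    bool shouldDropExpiredAtLoad(void) { return iAmMaster(); }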
-
- 16 Mar, 2020 1 commit
-
-
antirez authored
Note that, as a side effect, this fixes Sentinel "requirepass" mode.
-
- 04 Mar, 2020 2 commits
-
-
antirez authored
-
bodong.ybd authored
-
- 03 Mar, 2020 1 commit
-
-
antirez authored
-
- 24 Feb, 2020 1 commit
-
-
antirez authored
-
- 19 Feb, 2020 1 commit
-
-
antirez authored
-
- 11 Feb, 2020 2 commits
- 10 Feb, 2020 1 commit
-
-
antirez authored
-
- 07 Feb, 2020 2 commits
- 06 Feb, 2020 1 commit
-
-
Guy Benoish authored
1. Call emptyDb even in case of diskless-load: we want modules to get the same FLUSHDB event as disk-based replication.
2. Do not fire any module events when flushing the backups array.
3. Delete redundant call to signalFlushedDb (called from emptyDb).
-
- 05 Feb, 2020 1 commit
-
-
Oran Agra authored
-
- 04 Feb, 2020 1 commit
-
-
antirez authored
-