- 28 May, 2020 3 commits
- 25 May, 2020 6 commits
- antirez authored
- zhaozhao.zz authored
After adjustMeaningfulReplOffset(), all the other related variables should be updated too, including server.second_replid_offset. Otherwise an old-version Redis such as 5.0 may receive wrong data from the replication stream, because Redis 5.0 can sync with Redis 6.0 but knows nothing about the meaningful offset.
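A minimal sketch of the invariant, with a simplified stand-in for the real server structure (the actual logic lives in replication.c; the `+1` relationship shown is an assumption about how the secondary replid offset tracks the primary one):

```c
#include <stdio.h>

/* Simplified stand-ins for the real server fields. */
struct {
    long long master_repl_offset;   /* current replication offset      */
    long long second_replid_offset; /* offset up to which the old
                                       replication ID remains valid    */
} server;

/* After trimming the primary offset to the last "meaningful" byte,
 * every offset derived from it must be trimmed too, or an old replica
 * (e.g. Redis 5.0, which knows nothing about meaningful offsets) can
 * be served stale backlog data. */
void adjustMeaningfulReplOffset(long long meaningful) {
    if (server.master_repl_offset > meaningful) {
        server.master_repl_offset = meaningful;
        server.second_replid_offset = server.master_repl_offset + 1;
    }
}

int main(void) {
    server.master_repl_offset = 100;
    server.second_replid_offset = 101;
    adjustMeaningfulReplOffset(90);
    printf("offset=%lld second_replid_offset=%lld\n",
           server.master_repl_offset, server.second_replid_offset);
    return 0;
}
```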
- Oran Agra authored
- antirez authored
Otherwise we run into this:

Backtrace:
src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
src/redis-server 127.0.0.1:21322[0x4552d4]
src/redis-server 127.0.0.1:21322[0x4c5593]
src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]

This happens because we disconnect all the replicas and free the replication backlog in certain replication paths, and the code that frees the replication backlog expects that no replica is connected. However we still need to free the replicas asynchronously in certain cases, as documented in the top comment of disconnectSlaves().
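A minimal sketch of the invariant and the fix, under simplified types (the real code uses Redis's client and list structures; the names here mirror but do not reproduce them):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct client { struct client *next; } client;

static client *slaves;           /* connected replicas            */
static client *clients_to_close; /* freed later by the event loop */
static char *repl_backlog;

/* Queue every replica for asynchronous freeing instead of freeing it
 * in place, which is the safer option now that this can be reached
 * from deep inside freeClient(). */
void disconnectSlaves(void) {
    while (slaves) {
        client *c = slaves;
        slaves = c->next;
        c->next = clients_to_close;
        clients_to_close = c;
    }
}

/* The assertion that fired in the backtrace above: no replica may
 * still be attached when the backlog is released. */
void freeReplicationBacklog(void) {
    assert(slaves == NULL);
    free(repl_backlog);
    repl_backlog = NULL;
}
```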
- ShooterIT authored
- antirez authored
Citing from the issue: btw I suggest we change this fix to something else:

* We revert the fix.
* We add a call that disconnects chained replicas in the place where we trim the replica (that is a master in this case) offset. This way we can avoid disconnections when there is no trimming of the backlog.

Note that we now want to disconnect replicas asynchronously in disconnectSlaves(), because it's in general safer now that we can call it from freeClient(). Otherwise, for instance, the command CLIENT KILL TYPE master may crash: clientCommand() starts iterating the linked list of clients, looking for clients to kill. However it finds the master and kills it by calling freeClient(), but this in turn calls replicationCacheMaster(), which may now also call disconnectSlaves(). So the linked list iterator of clientCommand() would no longer be valid.
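A minimal sketch of why deferring the free keeps the clientCommand() scan safe (a simplified singly linked list stands in for Redis's adlist; names are illustrative):

```c
#include <stddef.h>

#define CLIENT_CLOSE_ASAP (1<<0)

typedef struct client {
    int flags;
    struct client *next;
} client;

static client *clients; /* all connected clients */

/* Deferred free: only mark the client. Actually freeing it here could
 * pull nodes out from under a caller that is walking `clients`. */
void freeClientAsync(client *c) {
    c->flags |= CLIENT_CLOSE_ASAP;
}

/* CLIENT KILL style scan: because the kill is deferred, the `next`
 * pointer read by the loop stays valid for the whole walk, even if
 * killing one client would (synchronously) have disconnected others. */
void killMatchingClients(int (*match)(client *)) {
    for (client *c = clients; c != NULL; c = c->next)
        if (match(c)) freeClientAsync(c);
}
```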
- 22 May, 2020 23 commits
- Madelyn Olson authored
- Qu Chen authored
Always disconnect chained replicas when the replica performs a PSYNC with its master, to avoid replication offset mismatches between the master and the chained replicas.
- hwware authored
- hwware authored
- ShooterIT authored
- Yossi Gottlieb authored
- ShooterIT authored
Fix #7275.
- zhaozhao.zz authored
- Oran Agra authored
There's a rare case which leads to stagnation in the defragger, causing it to keep scanning the keyspace while doing nothing (not moving any allocation). This happens when all the allocator slabs of a certain bin have the same % utilization, but the slab from which new allocations are made has a lower utilization. This commit fixes it by removing the current slab from the overall average utilization of the bin, eliminating any precision loss in the utilization calculation, and moving the defrag decision to reside inside jemalloc. It also adds a test that consistently reproduces this issue.
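A minimal sketch of such a decision using integer cross-multiplication (the real logic lives inside jemalloc and the Redis defrag code; this parameterization is an assumption for illustration):

```c
#include <stdbool.h>

/* Decide whether an allocation is worth moving. `bin_*` counts cover
 * the whole bin, `cur_*` the slab currently serving new allocations,
 * `slab_*` the slab holding the allocation under examination. */
bool shouldDefrag(long bin_used, long bin_total,
                  long cur_used, long cur_total,
                  long slab_used, long slab_total) {
    /* A full slab cannot be improved by moving things out of it. */
    if (slab_used == slab_total) return false;
    /* Exclude the current slab from the bin average (the fix for the
     * stagnation case), and compare via cross-multiplication so no
     * floating point precision is lost:
     *   slab_used/slab_total < (bin_used-cur_used)/(bin_total-cur_total) */
    return slab_used * (bin_total - cur_total) <
           (bin_used - cur_used) * slab_total;
}
```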
- Oran Agra authored
Also support: debug mallctl-str thread.tcache.flush VOID
- hujie authored
In ACLSetUserCommandBit(), when the command bit overflows, no operation is performed, so there is no need to clear the USER_FLAG_ALLCOMMANDS flag. In ACLSetUser(), when adding a subcommand, we don't need to call ACLGetCommandID() ahead of time, since the subcommand may be empty.
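A minimal sketch of the first point, with a simplified stand-in for the real ACL user structure (the flag-handling details are illustrative, not the exact Redis code):

```c
#include <stdint.h>

#define USER_COMMAND_BITS_COUNT 1024
#define USER_FLAG_ALLCOMMANDS   (1ULL<<0)

typedef struct user {
    uint64_t flags;
    uint64_t allowed_commands[USER_COMMAND_BITS_COUNT/64];
} user;

void ACLSetUserCommandBit(user *u, unsigned long id, int value) {
    /* The fix: on overflow nothing is modified, so the ALLCOMMANDS
     * shortcut flag must survive untouched. */
    if (id >= USER_COMMAND_BITS_COUNT) return;
    u->flags &= ~USER_FLAG_ALLCOMMANDS;
    if (value)
        u->allowed_commands[id/64] |= 1ULL << (id%64);
    else
        u->allowed_commands[id/64] &= ~(1ULL << (id%64));
}
```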
- ShooterIT authored
The function for generating random data was designed by antirez. See #7196.
- hwware authored
- WuYunlong authored
- WuYunlong authored
- ShooterIT authored
- Madelyn Olson authored
- Benjamin Sergeant authored
- Benjamin Sergeant authored
- Benjamin Sergeant authored
This makes it so that all prompts for all redis-cli --cluster commands are automatically answered with a yes.
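Usage note: in redis-cli this behavior is exposed as the `--cluster-yes` option (assuming this is the commit that introduced it), e.g. `redis-cli --cluster fix 127.0.0.1:6379 --cluster-yes`, which is handy when driving cluster maintenance from scripts.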
- antirez authored
- 16 May, 2020 2 commits
- antirez authored
- antirez authored
This was broken in 1a7cd2c0: we identified a crash in the CI. What was happening before the fix was roughly this:

1. The client gets put in the async free list.
2. However, freeClient() gets called again against the same client, which is a master.
3. The client arrives in freeClient() with the CLOSE_ASAP flag set.
4. The master gets cached, but is NOT removed from the CLOSE_ASAP linked list.
5. The cached master is then immediately freed, since it was still in the list.
6. Redis accessed a freed cached master.

This is how the crash looked:

=== REDIS BUG REPORT START: Cut & paste starting from here ===
1092:S 16 May 2020 11:44:09.731 # Redis 999.999.999 crashed by signal: 11
1092:S 16 May 2020 11:44:09.731 # Crashed running the instruction at: 0x447e18
1092:S 16 May 2020 11:44:09.731 # Accessing address: 0xffffffffffffffff
1092:S 16 May 2020 11:44:09.731 # Failed assertion: (:0)

------ STACK TRACE ------
EIP:
src/redis-server 127.0.0.1:21300(readQueryFromClient+0x48)[0x447e18]

And the 0xffff... address access likely comes from accessing an SDS that is set to NULL (we read at offset -1 to access the header).
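A minimal sketch of the fix implied by step 4, under simplified types (the real code manipulates Redis's clients_to_close list inside replicationCacheMaster(); names here are illustrative):

```c
#include <stddef.h>

#define CLIENT_CLOSE_ASAP (1<<0)

typedef struct client {
    int flags;
    struct client *next;
} client;

static client *clients_to_close; /* reaped later by the event loop */
static client *cached_master;    /* kept around for a later PSYNC  */

/* Pull a client back out of the async-close list. */
static void unlinkFromCloseList(client *c) {
    client **p = &clients_to_close;
    while (*p && *p != c) p = &(*p)->next;
    if (*p) *p = c->next;
    c->flags &= ~CLIENT_CLOSE_ASAP;
}

void replicationCacheMaster(client *master) {
    /* Without this unlink (steps 4-5 above), the deferred reaper
     * frees memory the cache still points at. */
    if (master->flags & CLIENT_CLOSE_ASAP) unlinkFromCloseList(master);
    cached_master = master;
}
```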
- 15 May, 2020 6 commits
- antirez authored
- Yossi Gottlieb authored
It seems that on some systems, choosing the specific TLS v1/v1.1 versions no longer works as expected. The test is reduced to v1.2 now, which is still good enough to test the mechanism, and matters most anyway.
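(For reference, the related server-side setting is the `tls-protocols` directive in redis.conf, e.g. `tls-protocols "TLSv1.2"`; presumably the test pins the negotiated version through the equivalent client and server options.)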
- Yossi Gottlieb authored
This is really required only for older OpenSSL versions. Also, at the moment Redis does not use OpenSSL from multiple threads, so this will only be useful if modules end up doing that.
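For context, a minimal sketch of what such thread-safety setup looks like for pre-1.1.0 OpenSSL, where the application must install locking callbacks (this is the stock CRYPTO_* pattern, not necessarily the exact code of this commit):

```c
#include <pthread.h>
#include <openssl/crypto.h>

#if OPENSSL_VERSION_NUMBER < 0x10100000L
static pthread_mutex_t *ssl_locks;

static void sslLockingCallback(int mode, int n, const char *file, int line) {
    (void)file; (void)line;
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&ssl_locks[n]);
    else
        pthread_mutex_unlock(&ssl_locks[n]);
}

/* OpenSSL >= 1.1.0 locks internally; older versions are not safe
 * under concurrent use unless the application provides callbacks. */
void initSslLocks(void) {
    ssl_locks = OPENSSL_malloc(CRYPTO_num_locks() * sizeof(*ssl_locks));
    for (int i = 0; i < CRYPTO_num_locks(); i++)
        pthread_mutex_init(&ssl_locks[i], NULL);
    CRYPTO_set_locking_callback(sslLockingCallback);
}
#endif
```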
- David Carlier authored
This platform supports CPU affinity (but not OpenBSD).
- Madelyn Olson authored
- antirez authored
The context is issue #7205: since the introduction of threaded I/O we close clients asynchronously by default from readQueryFromClient(), so we should no longer prevent the caching of the master client, for a later incremental PSYNC, when such flags are set. However we also don't want the master client to be cached with such flags, since it would be closed immediately after being restored. And yet we want a way to understand whether a master was closed because of a protocol error, and in that case prevent the caching.
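A minimal sketch of the resulting flag logic, under simplified types (the protocol-error flag name here is hypothetical; the real change lives around replicationCacheMaster() and freeClient()):

```c
#define CLIENT_CLOSE_ASAP        (1<<0)
#define CLIENT_CLOSE_AFTER_REPLY (1<<1)
#define CLIENT_PROTOCOL_ERROR    (1<<2) /* hypothetical flag name */

typedef struct client { int flags; } client;

static client *cached_master;

void replicationCacheMaster(client *master) {
    /* A protocol error means the replication stream is unusable, so
     * a partial resync makes no sense: refuse to cache. */
    if (master->flags & CLIENT_PROTOCOL_ERROR) return;
    /* Strip the async-close flags so the restored master is not
     * freed the moment it is brought back. */
    master->flags &= ~(CLIENT_CLOSE_ASAP|CLIENT_CLOSE_AFTER_REPLY);
    cached_master = master;
}
```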