- 10 Jun, 2020 10 commits
- 09 Jun, 2020 1 commit
antirez authored
- 08 Jun, 2020 3 commits
Oran Agra authored
The recent change in that loop (iterating rather than waiting for it to become empty) was intended to avoid an endless loop in case some slave refuses to be freed. But the lookup of the first client remained, which caused it to try the first one again and again instead of moving on.
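A minimal standalone sketch of the pattern (hypothetical list and field names, not the actual Redis code): re-fetching the head of the list on every pass means a single unfreeable client blocks progress forever, whereas walking the list lets the loop skip it and move on.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for Redis' client list. */
typedef struct client {
    int id;
    int refuses_free;      /* simulates a replica that cannot be freed yet */
    struct client *next;
} client;

/* Buggy shape: while (list not empty) { c = first client; try to free c; }
 * If the first client refuses to be freed, this never makes progress.
 * Fixed shape below: advance through the list so a stubborn client
 * is skipped rather than retried forever. */
void free_clients(client **head) {
    client **link = head;
    while (*link) {
        client *c = *link;
        if (c->refuses_free) {   /* skip it and move on */
            link = &c->next;
        } else {
            *link = c->next;     /* unlink and free */
            free(c);
        }
    }
}

int main(void) {
    client *head = NULL;
    for (int i = 0; i < 3; i++) {
        client *c = malloc(sizeof(*c));
        c->id = i;
        c->refuses_free = (i == 1);
        c->next = head;
        head = c;
    }
    free_clients(&head);
    for (client *c = head; c; c = c->next)
        printf("still connected: client %d\n", c->id);
    return 0;
}
```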
Oran Agra authored
Oran Agra authored
Much like MULTI/EXEC/DISCARD, WATCH and UNWATCH do not actually operate on the database or server state; they operate on the client state. The client may send them all in one long pipeline and check all the responses only at the end, so failing them may lead to a mismatch between the client state on the server and the one on the client end, and to executing the wrong commands (ones that were meant to be discarded). The watched keys are not actually stored in the client struct, but they are in fact part of the client state: for instance, they are not cleared or moved by SWAPDB or FLUSHDB.
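A sketch of the pipelining pattern the commit describes, using hiredis (the key name and connection details are placeholders): every command is appended first and the replies are read only afterwards, so a server that rejected WATCH mid-pipeline would leave the client with a stale view of its own transaction state.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Queue the whole transaction as one pipeline... */
    redisAppendCommand(c, "WATCH key");
    redisAppendCommand(c, "MULTI");
    redisAppendCommand(c, "INCR key");
    redisAppendCommand(c, "EXEC");

    /* ...and only now read the four replies back. If the server had
     * failed the WATCH, the client would not find out until here,
     * after the commands it meant to guard were already queued. */
    for (int i = 0; i < 4; i++) {
        redisReply *reply;
        if (redisGetReply(c, (void **)&reply) != REDIS_OK) break;
        freeReplyObject(reply);
    }
    redisFree(c);
    return 0;
}
```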
- 07 Jun, 2020 1 commit
xhe authored
HELLO should return the client's current protocol version, but the code hardcoded 3.
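A minimal sketch of the shape of the fix (mock client type, not the actual Redis reply machinery): the HELLO reply should echo the protocol version stored on the client rather than a literal 3.

```c
#include <stdio.h>

/* Mock of the relevant client field. */
typedef struct client { int resp; } client;

/* Before: reply with the constant 3 (always claims RESP3).
 * After:  reply with c->resp, the version actually negotiated. */
int hello_proto_version(client *c) {
    return c->resp;
}

int main(void) {
    client c2 = { .resp = 2 }, c3 = { .resp = 3 };
    printf("proto: %d\n", hello_proto_version(&c2)); /* 2 */
    printf("proto: %d\n", hello_proto_version(&c3)); /* 3 */
    return 0;
}
```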
- 06 Jun, 2020 2 commits
- 03 Jun, 2020 1 commit
zhaozhao.zz authored
- 02 Jun, 2020 1 commit
zhaozhao.zz authored
Related to #7234.
- 29 May, 2020 1 commit
antirez authored
Now it is also possible for ACL SETUSER to accept empty strings as valid operations (doing nothing), so for instance ACL SETUSER myuser "" will have just the effect of creating a user in the default state. This should fix #7329.
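For instance, in redis-cli (user name hypothetical):

```
127.0.0.1:6379> ACL SETUSER myuser ""
OK
```

The call succeeds, and myuser now exists in the default state, exactly as if no rules had been passed.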
- 28 May, 2020 1 commit
antirez authored
- 27 May, 2020 5 commits
antirez authored
Kevin Fwu authored
This impacts client certificate verification for chained certificates (such as Let's Encrypt certificates): client verification requires the full chain in order to properly verify the certificate.
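A hedged redis.conf sketch (paths hypothetical): point tls-cert-file at the full-chain PEM rather than the leaf certificate alone, so peers that verify the certificate also receive the intermediates.

```
# Serve the full chain (leaf + intermediates), not just the leaf:
tls-port 6379
tls-cert-file /etc/redis/tls/fullchain.pem
tls-key-file /etc/redis/tls/privkey.pem
tls-ca-cert-file /etc/redis/tls/ca.pem
tls-auth-clients yes
```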
antirez authored
After a closer look, the Redis core developers all believe that this was too fragile, caused many bugs that we didn't expect and that were very hard to track. Better to find an alternative solution that is simpler.
antirez authored
We want to react a bit more aggressively if we sense that the master is sending us a corrupted stream. By setting the protocol error we both ensure that the replica will disconnect, and avoid caching the master so that a full SYNC will be required. This is protective against replication bugs.
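A rough standalone sketch of the control flow (mock names, not the actual replication code): flagging the protocol error both closes the link and skips caching the master, so the next connection cannot partially resync over corrupted state.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the replica's link state. */
typedef struct master_link {
    bool protocol_error;
    bool cached;          /* a cached master allows a partial resync */
} master_link;

/* On a corrupted stream, mark the error: the link will be closed. */
void on_corrupt_stream(master_link *m) {
    m->protocol_error = true;
}

void free_master_link(master_link *m) {
    if (!m->protocol_error)
        m->cached = true;  /* normal path: keep state for PSYNC */
    else
        m->cached = false; /* protective path: require full SYNC */
}

int main(void) {
    master_link m = {0};
    on_corrupt_stream(&m);
    free_master_link(&m);
    printf("cached=%d -> %s\n", m.cached,
           m.cached ? "PSYNC possible" : "full SYNC required");
    return 0;
}
```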
Liu Zhen authored
`clusterStartHandshake` will start a handshake and eventually send a CLUSTER MEET message, which is strictly prohibited by the Redis Cluster specification: only a system administrator can initiate a CLUSTER MEET message. Further, according to the spec, only node IDs can be trusted, rather than IP/port pairs.
- 26 May, 2020 2 commits
- 25 May, 2020 2 commits
antirez authored
zhaozhao.zz authored
After adjustMeaningfulReplOffset(), all the other related variables should be updated too, including server.second_replid_offset. Otherwise an older Redis such as 5.0 may receive wrong data from the replication stream, because Redis 5.0 can sync with Redis 6.0 but does not know about the meaningful offset.
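A simplified sketch of the invariant (hypothetical field set, not the actual server struct): when the meaningful offset trims the main replication offset back, every offset derived from it has to be capped in the same place, otherwise an older replica is handed a stale boundary.

```c
#include <stdio.h>

/* Simplified stand-ins for the server's replication offsets. */
struct repl_state {
    long long master_repl_offset;
    long long second_replid_offset;
};

/* When the meaningful offset trims master_repl_offset back, every
 * offset derived from it must be updated in the same place. */
void adjust_offsets(struct repl_state *s, long long meaningful) {
    if (s->master_repl_offset > meaningful)
        s->master_repl_offset = meaningful;
    if (s->second_replid_offset > meaningful)
        s->second_replid_offset = meaningful;  /* the missing update */
}

int main(void) {
    struct repl_state s = { .master_repl_offset = 150,
                            .second_replid_offset = 140 };
    adjust_offsets(&s, 120);
    printf("master=%lld second=%lld\n",
           s.master_repl_offset, s.second_replid_offset);
    return 0;
}
```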
- 22 May, 2020 3 commits
antirez authored
Otherwise we run into this:

Backtrace:
src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
src/redis-server 127.0.0.1:21322[0x4552d4]
src/redis-server 127.0.0.1:21322[0x4c5593]
src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]

We disconnect all the replicas and free the replication backlog in certain replication paths, and the code that frees the replication backlog expects that no replica is connected when it runs. However we still need to free the replicas asynchronously in certain cases, as documented in the top comment of disconnectSlaves().
antirez authored
Citing from the issue: btw I suggest we change this fix to something else:
- We revert the fix.
- We add a call that disconnects chained replicas in the place where we trim the replica (that is a master in this case) offset. This way we can avoid disconnections when there is no trimming of the backlog.

Note that we now want to disconnect replicas asynchronously in disconnectSlaves(), because it is in general safer now that we can call it from freeClient(). Otherwise, for instance, the command CLIENT KILL TYPE master may crash: clientCommand() starts walking the linked list of clients, looking for clients to kill. However it finds the master and kills it by calling freeClient(), but this in turn calls replicationCacheMaster(), which may now also call disconnectSlaves(). So the linked list iterator of clientCommand() would no longer be valid.
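A standalone sketch of the hazard and the asynchronous fix (generic linked list, not Redis' adlist): freeing nodes from inside a nested call while an outer loop is iterating the same list invalidates the outer iterator, whereas marking them and freeing in a later pass keeps the iteration safe.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct node {
    int id;
    bool close_asap;       /* deferred: freed later, not mid-iteration */
    struct node *next;
} node;

/* Instead of freeing peers from inside the iteration (which could
 * invalidate the caller's iterator), just mark them. */
void disconnect_all(node *head) {
    for (node *n = head; n; n = n->next) n->close_asap = true;
}

/* A later, dedicated pass does the actual freeing safely. */
node *reap(node *head) {
    node **link = &head;
    while (*link) {
        node *n = *link;
        if (n->close_asap) { *link = n->next; free(n); }
        else link = &n->next;
    }
    return head;
}

int main(void) {
    node *head = NULL;
    for (int i = 0; i < 3; i++) {
        node *n = calloc(1, sizeof(*n));
        n->id = i; n->next = head; head = n;
    }
    /* The outer iteration may trigger disconnect_all() without risk,
     * because nothing is freed until reap() runs afterwards. */
    for (node *n = head; n; n = n->next)
        if (n->id == 1) disconnect_all(head);
    head = reap(head);
    printf("remaining: %s\n", head ? "some" : "none");
    return 0;
}
```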
Qu Chen authored
Always disconnect chained replicas when the replica performs a PSYNC with the master, to avoid a replication offset mismatch between the master and the chained replicas.
- 21 May, 2020 6 commits
Madelyn Olson authored
hwware authored
hwware authored
ShooterIT authored
ShooterIT authored
Fix #7275.
zhaozhao.zz authored
- 20 May, 2020 1 commit
Oran Agra authored
There's a rare case which leads to stagnation in the defragger, causing it to keep scanning the keyspace and do nothing (not moving any allocation). This happens when all the allocator slabs of a certain bin have the same % utilization, but the slab from which new allocations are made has a lower utilization. This commit fixes it by removing the current slab from the overall average utilization of the bin, eliminating any precision loss in the utilization calculation, and moving the defrag decision to reside inside jemalloc. It also adds a test that consistently reproduces the issue.
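A toy sketch of the decision (invented numbers and names, not jemalloc's real accounting): an allocation is a defrag candidate when its slab sits below the bin's average utilization, and the slab currently serving new allocations has to be excluded from that average or the comparison can stall.

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy model: used regions per slab in one bin. */
#define NSLABS 4

/* Decide whether an allocation living in `slab` should be moved.
 * Excluding the slab that serves new allocations (`cur`) from the
 * average avoids the stagnation case where every slab looks
 * "average" and nothing ever qualifies. */
bool should_defrag(const int used[NSLABS], int slab, int cur) {
    int total = 0, count = 0;
    for (int i = 0; i < NSLABS; i++) {
        if (i == cur) continue;           /* the fix: skip current slab */
        total += used[i];
        count++;
    }
    /* move if this slab is below the average of the others
     * (integer cross-multiplication, so no precision loss) */
    return used[slab] * count < total;
}

int main(void) {
    /* Three equally full slabs; the current slab (index 3) is emptier. */
    int used[NSLABS] = { 8, 8, 8, 2 };
    for (int s = 0; s < NSLABS; s++)
        printf("slab %d: %s\n", s,
               should_defrag(used, s, 3) ? "move" : "keep");
    return 0;
}
```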