- 08 Jun, 2020 1 commit
-
-
Oran Agra authored
Much like MULTI/EXEC/DISCARD, WATCH and UNWATCH do not actually operate on the database or server state, but rather on the client state. The client may send them all in one long pipeline and check all the responses only at the end, so failing them may lead to a mismatch between the client state on the server and the one on the client end, causing the wrong commands to be executed (ones that were meant to be discarded). The watched keys are not actually stored in the client struct, but they are in fact part of the client state; for instance, they are not cleared or moved by SWAPDB or FLUSHDB.
-
- 27 May, 2020 1 commit
-
-
antirez authored
After a closer look, the Redis core developers all believe that this was too fragile: it caused many bugs that we didn't expect and that were very hard to track. Better to find an alternative solution that is simpler.
-
- 14 May, 2020 2 commits
- 12 May, 2020 1 commit
-
-
antirez authored
-
- 11 May, 2020 1 commit
-
-
Oran Agra authored
This bug was introduced by a recent change in which readQueryFromClient uses freeClientAsync, and despite the fact that freeClientsInAsyncFreeQueue is now in beforeSleep, that's not enough, since it's not called during loading in processEventsWhileBlocked. Furthermore, afterSleep was called in that case but beforeSleep wasn't. This bug also caused slowness, since the level-triggered mode of epoll kept signaling these connections as readable, causing us to call connRead again and again for all of them as they kept accumulating. Now both beforeSleep and afterSleep are called, but not all of their actions are performed during loading; some are reserved for the main loop. Fixes issue #7215
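A rough sketch of the shape this takes, based on the Redis 6 sources (the exact set of actions kept during loading is abbreviated here):

```c
/* Sketch only: when beforeSleep() is reached from processEventsWhileBlocked()
 * (e.g. while loading an RDB file), run a reduced set of actions instead of
 * skipping it entirely, so clients queued by freeClientAsync() are freed. */
void beforeSleep(struct aeEventLoop *eventLoop) {
    UNUSED(eventLoop);

    if (ProcessingEventsWhileBlocked) {
        handleClientsWithPendingReadsUsingThreads();
        handleClientsWithPendingWrites();
        freeClientsInAsyncFreeQueue();
        return;
    }

    /* ... the full set of actions, reserved for the main event loop ... */
}
```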
-
- 05 May, 2020 2 commits
- 02 May, 2020 2 commits
-
-
hwware authored
-
zhenwei pi authored
Currently, there are several types of threads/child processes of a Redis server. Sometimes we need to deeply optimize the performance of Redis, so we would like to isolate these threads/processes. There was some discussion about CPU affinity cases in this issue: https://github.com/antirez/redis/issues/2863 This patch implements CPU affinity setting via redis.conf, so we can configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/bgsave_cpulist with a CPU list. Examples of a cpulist in redis.conf: server_cpulist 0-7:2 means CPU affinity 0,2,4,6; bio_cpulist 1,3 means CPU affinity 1,3; aof_rewrite_cpulist 8-11 means CPU affinity 8,9,10,11; bgsave_cpulist 1,10-11 means CPU affinity 1,10,11 (a parsing sketch follows below). Tested on Linux and FreeBSD; both work fine. Signed-off-by:
zhenwei pi <pizhenwei@bytedance.com>
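To illustrate the cpulist format used above, here is a small self-contained, Linux-only sketch (not the parser actually added to Redis) that turns such a string into an affinity mask and applies it to the calling thread:

```c
/* Hypothetical helper: parse "a", "a-b" or "a-b:step" tokens separated by
 * commas into a cpu_set_t, then pin the calling thread to those CPUs.
 * Compile with: cc -pthread cpulist.c */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>

static int cpulist_to_set(const char *list, cpu_set_t *set) {
    CPU_ZERO(set);
    char *copy = strdup(list), *save = NULL;
    for (char *tok = strtok_r(copy, ",", &save); tok;
         tok = strtok_r(NULL, ",", &save)) {
        int start, end, step = 1;
        if (sscanf(tok, "%d-%d:%d", &start, &end, &step) >= 2) {
            /* "a-b" range, with an optional ":step". */
        } else if (sscanf(tok, "%d", &start) == 1) {
            end = start;                      /* single CPU */
        } else {
            free(copy);
            return -1;                        /* malformed token */
        }
        for (int cpu = start; cpu <= end; cpu += step) CPU_SET(cpu, set);
    }
    free(copy);
    return 0;
}

int main(void) {
    cpu_set_t set;
    if (cpulist_to_set("0-7:2", &set) == 0)   /* CPUs 0,2,4,6 */
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    return 0;
}
```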
-
- 29 Apr, 2020 2 commits
-
-
antirez authored
-
srzhao authored
If a client gets blocked again in `processUnblockedClients`, Redis will not send `REPLCONF GETACK *` to the slaves until the next event loop, so the client will stay blocked for up to 100ms by default (10 Hz) if no other file event fires. Move the server.get_ack_from_slaves snippet after `processUnblockedClients`, so that both the first WAIT command that puts the client in a blocked context and a following WAIT command processed in processUnblockedClients trigger redis-server to send `REPLCONF GETACK *`; the event loop then receives `REPLCONF ACK <reploffset>` from the slaves and unblocks the client ASAP.
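Roughly, the relevant fragment of beforeSleep() after this change looks as follows (a sketch based on the Redis sources, shown out of context):

```c
/* Serve clients unblocked during the previous iteration first, so a WAIT
 * processed here can still set server.get_ack_from_slaves below. */
if (listLength(server.unblocked_clients))
    processUnblockedClients();

/* Ask replicas for an ACK in the same event loop iteration, instead of
 * waiting for the next one. */
if (server.get_ack_from_slaves) {
    robj *argv[3];
    argv[0] = createStringObject("REPLCONF",8);
    argv[1] = createStringObject("GETACK",6);
    argv[2] = createStringObject("*",1); /* Not used argument. */
    replicationFeedSlaves(server.slaves, server.slaveseldb, argv, 3);
    decrRefCount(argv[0]);
    decrRefCount(argv[1]);
    decrRefCount(argv[2]);
    server.get_ack_from_slaves = 0;
}
```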
-
- 28 Apr, 2020 1 commit
-
-
Oran Agra authored
Come to think of it, in theory (not in practice) getDecodedObject can return the same original object with its refcount incremented, so the pointer comparison in the previous commit was invalid. So now, instead of checking the encoding, we explicitly check the refcount.
-
- 27 Apr, 2020 3 commits
-
-
antirez authored
-
Oran Agra authored
Since the recent addition of OBJ_STATIC_REFCOUNT and the assertion in incrRefCount, it is now impossible to use dictFind with a static robj, because dictEncObjKeyCompare calls getDecodedObject, which tries to increment the refcount just in order to decrement it later.
-
antirez authored
-
- 25 Apr, 2020 1 commit
-
-
Madelyn Olson authored
-
- 24 Apr, 2020 1 commit
-
-
antirez authored
STRALGO should be a container for mostly read-only string algorithms in Redis. The algorithms should have two main characteristics: 1. They should be non-trivial to compute, and often not part of programming language standard libraries. 2. They should be fast enough that it is a good idea to have optimized C implementations. Next thing I would love to see? A small string compression algorithm.
-
- 20 Apr, 2020 1 commit
-
-
Dave-in-lafayette authored
If Redis crashes early, before Lua is set up (for example, if file descriptor 0 is closed before exec), it will crash again while trying to print memory statistics.
-
- 17 Apr, 2020 1 commit
-
-
antirez authored
See issue #7105.
-
- 08 Apr, 2020 1 commit
-
-
antirez authored
See #7071.
-
- 07 Apr, 2020 1 commit
-
-
antirez authored
Related to #5145. Design note: clients may change type when they turn into replicas or are moved into the Pub/Sub category, and so forth. Moreover, the recomputation of the bytes used is problematic for obvious reasons: it changes continuously. So, as a conservative way to avoid accumulating errors, each client remembers the contribution it gave to the sum and removes it when it is freed, or before updating it with the new memory usage.
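A self-contained sketch of that accounting pattern (the field and function names here are illustrative, not the exact ones used in the Redis source):

```c
#include <stddef.h>

#define CLIENT_TYPE_COUNT 4   /* e.g. normal, replica, pubsub, master */

typedef struct client {
    int type;                 /* current client category */
    int last_memory_type;     /* category the last contribution was added to */
    size_t last_memory_usage; /* contribution currently reflected in the sum */
} client;

static size_t mem_usage_by_type[CLIENT_TYPE_COUNT];

/* Stub: in Redis this would sum the query buffer, output buffers, etc. */
static size_t getClientMemoryUsage(client *c) { (void)c; return 0; }

void updateClientMemUsage(client *c) {
    /* Remove the old contribution, then add the fresh one, possibly to a
     * different category if the client changed type in the meantime. */
    mem_usage_by_type[c->last_memory_type] -= c->last_memory_usage;
    size_t mem = getClientMemoryUsage(c);
    mem_usage_by_type[c->type] += mem;
    c->last_memory_usage = mem;
    c->last_memory_type = c->type;
}

void unlinkClientMemUsage(client *c) {
    /* On free, simply undo whatever this client last contributed. */
    mem_usage_by_type[c->last_memory_type] -= c->last_memory_usage;
    c->last_memory_usage = 0;
}
```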
-
- 02 Apr, 2020 1 commit
-
-
Guy Benoish authored
Example: a client uses a pipeline to send the following to a stale replica: MULTI ... do something ... DISCARD. The replica will reply to the MULTI with -MASTERDOWN and execute the rest of the commands. A client using a pipeline might not be aware that MULTI failed until it's too late. I can't think of a reason why MULTI/EXEC/DISCARD should not be executed on stale replicas. Also, enable MULTI/EXEC/DISCARD during loading.
-
- 01 Apr, 2020 2 commits
-
-
antirez authored
-
Guy Benoish authored
By using a "circular BRPOPLPUSH"-like scenario it was possible the get the same client on db->blocking_keys twice (See comment in moduleTryServeClientBlockedOnKey) The fix was actually already implememnted in moduleTryServeClientBlockedOnKey but it had a bug: the funxction should return 0 or 1 (not OK or ERR) Other changes: 1. Added two commands to blockonkeys.c test module (To reproduce the case described above) 2. Simplify blockonkeys.c in order to make testing easier 3. cast raxSize() to avoid warning with format spec
-
- 31 Mar, 2020 4 commits
-
-
antirez authored
-
antirez authored
-
Guy Benoish authored
Make sure call() doesn't wrap replicated commands with a redundant MULTI/EXEC. Other, unrelated changes: 1. Fix a formatting compiler warning in INFO CLIENTS. 2. Use CLIENT_ID_AOF instead of UINT64_MAX.
-
antirez authored
37a10cef introduced automatic wrapping of MULTI/EXEC for the alsoPropagate API. However, this collides with the built-in mechanism already present in module.c. To avoid complex changes near Redis 6 GA, this commit introduces the ability to exclude call() MULTI/EXEC wrapping for alsoPropagate, in order to continue using the old code paths in module.c.
-
- 27 Mar, 2020 8 commits
-
-
antirez authored
-
antirez authored
Now that this mechanism is the sole one used for blocked client timeouts, it is wiser to clean up the table when the client unblocks for any reason. We use a flag, CLIENT_IN_TO_TABLE, in order to avoid a radix tree lookup when the client was already removed from the table because we processed it while scanning the radix tree.
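A simplified sketch of that flag-guarded cleanup (the bit value and helper bodies are illustrative; the radix tree calls are elided as comments):

```c
#define CLIENT_IN_TO_TABLE (1ULL<<38)  /* illustrative bit: client is indexed
                                          in the timeouts radix tree */

typedef struct client { unsigned long long flags; /* ... */ } client;

void addClientToTimeoutTable(client *c) {
    if (c->flags & CLIENT_IN_TO_TABLE) return;     /* already indexed */
    /* raxTryInsert(server.clients_timeout_table, key, keylen, NULL, NULL); */
    c->flags |= CLIENT_IN_TO_TABLE;
}

void removeClientFromTimeoutTable(client *c) {
    if (!(c->flags & CLIENT_IN_TO_TABLE)) return;  /* nothing to clean up:
                                                      the scan removed it */
    /* raxRemove(server.clients_timeout_table, key, keylen, NULL); */
    c->flags &= ~CLIENT_IN_TO_TABLE;
}
```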
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
- 26 Mar, 2020 1 commit
-
-
Valentino Geron authored
-
- 25 Mar, 2020 2 commits
-
-
antirez authored
A very commonly reported operational problem with Redis master-replica sets is that, once the master becomes unavailable for some reason, especially because of network problems, many times it won't be able to perform a partial resynchronization with the new master once it rejoins the partition, for the following reason: 1. The master becomes isolated, but it keeps sending PINGs to the replicas. Such PINGs will never be received, since the link connection is actually already severed. 2. On the other side, one of the replicas will turn into the new master, setting its secondary replication ID offset to the one of the last command received from the old master: this offset will not include the PINGs sent by the master once the link was already disconnected. 3. When the old master rejoins the partition and is turned into a replica, its offset will be too advanced because of the PINGs, so a PSYNC will fail and a full synchronization will be required. Related to issue #7002 and other discussion we had in the past around this problem.
-
antirez authored
Related to #7022.
-