- 08 Aug, 2020 5 commits
-
-
Wen Hui authored
-
WuYunlong authored
-
Wang Yuan authored
-
Wen Hui authored
-
Wang Yuan authored
Before this fix we were attempting to select a DB before the DBs were created, see: #7323. This issue doesn't seem to have any implications: since the selected DB index is 0, the db pointer remains NULL and will later be correctly set before this dummy client is used for the first time. As we know, we call 'moduleInitModulesSystem()' before 'initServer()'. We allocate memory for server.db in 'initServer()', but 'moduleInitModulesSystem()' calls 'createClient()', which calls 'selectDb()', before the databases are created. Instead, we should call 'createClient()' for moduleFreeContextReusedClient after 'initServer()'.
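A minimal C sketch of the reordering described above; the surrounding startup code is elided and the exact call sites are an assumption based on the commit message:

```c
/* Before the fix, moduleInitModulesSystem() created the reused client
 * itself, so createClient() -> selectDb() ran before server.db existed. */
moduleInitModulesSystem();   /* no longer creates the reused client here */
initServer();                /* allocates server.db[] */
/* Only now is it safe to create the dummy client: createClient() calls
 * selectDb(client, 0), which needs server.db to be allocated. */
moduleFreeContextReusedClient = createClient(NULL);
```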
-
- 07 Aug, 2020 2 commits
-
-
xuannianz authored
The else block would be executed only when newlen == 0, in which case memmove() won't be called, so there's no need to reset start.
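This most likely refers to the tail of sdsrange() in sds.c; a sketch under that assumption:

```c
/* When newlen == 0 the memmove() is skipped entirely, so the removed
 * `else { start = 0; }` branch never influenced the result. */
if (start && newlen) memmove(s, s + start, newlen);
s[newlen] = '\0';
sdssetlen(s, newlen);
```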
-
fayadexinqing authored
-
- 06 Aug, 2020 3 commits
-
-
Oran Agra authored
Diskless master has some inherent latencies:
1) The fork starts with a delay from cron rather than immediately.
2) The replica is put online only after an ACK, but the ACK was sent only once a second.
3) Even if the ACK arrived immediately, it would not register if cron hadn't yet detected that the fork is done.
Besides that, when a replica disconnects, it doesn't immediately attempt to re-connect; it waits for the replication cron (once per second). If it was already online, it may be important to try to re-connect as soon as possible, so that the backlog at the master doesn't vanish. If it disconnected during the RDB transfer, one can argue that it's not very important to re-connect immediately, but this is needed for the "diskless loading short read" test to be able to run 100 iterations in 5 seconds, rather than 3 (waiting for replication cron re-connection).
Changes in this commit (see the config sketch after this list):
1) The SYNC command starts a fork immediately if no sync delay is configured.
2) The replica sends REPLCONF ACK when done reading the RDB (rather than on the 1s cron).
3) When a replica unexpectedly disconnects, it immediately tries to re-connect rather than waiting 1s.
4) When a child exits, if there is another replica waiting, we spawn a new one right away instead of waiting for the 1s replicationCron.
5) Added a call to connectWithMaster from replicationSetMaster, which is called from the REPLICAOF command but also in 3 places in cluster.c; in all of these the connection attempt will now be immediate instead of delayed by 1 second.
Side note: we could add a call to rdbPipeReadHandler in replconfCommand when getting a REPLCONF ACK from the replica, to solve a race where the replica got the entire RDB and EOF marker before we detected that the pipe was closed. In testing I saw this race happen in about one out of some 300 runs, but I concluded that it is unlikely in real life (where the replica is on another host and we're more likely to first detect that the pipe was closed). The test runs 100 iterations in 3 seconds, so in some cases it'll take 4 seconds instead (waiting for another REPLCONF ACK).
Also removed the unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave. Now that checkChildrenDone is calling the new replicationStartPendingFork (extracted from serverCron), there's actually no need to call startBgsaveForReplication from updateSlavesWaitingForBgsave anymore: as soon as updateSlavesWaitingForBgsave returns, checkChildrenDone calls replicationStartPendingFork, which handles that anyway. The code in updateSlavesWaitingForBgsave had a bug in which it ignored repl-diskless-sync-delay, and removing that code showed that this bug was hiding another one: the max_idle comparison should have used >= and not >. This one-second delay has a big impact on my new test.
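A redis.conf sketch of the no-delay setup that change 1 targets; the values are illustrative:

```conf
# With no sync delay configured, a SYNC/PSYNC request now forks the RDB
# child immediately instead of waiting for the next replicationCron tick.
repl-diskless-sync yes
repl-diskless-sync-delay 0
```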
-
Oran Agra authored
This race would only happen when two threads panicked at the same time, and even then the only consequence is some extra log lines. Race reported in #7391.
-
Oran Agra authored
This makes it possible to add tests that generate assertions and run them with valgrind, making sure that there are no memory violations prior to the assertion.
New config options:
- crash-log-enabled - can be disabled for cleaner core dumps
- crash-memcheck-enabled - useful for faster termination after a crash
- use-exit-on-panic - to be used by the test suite so that valgrind can detect leaks and memory corruptions
Other changes:
- The crash log is printed even on systems that don't HAVE_BACKTRACE, i.e. on both SIGSEGV and assert / panic.
- Assertions and panics won't print registers and the code around EIP (which was useless), but will do a fast memory test (which may still indicate that the assertion was due to memory corruption).
I had to reshuffle code in order to re-use it, so I extracted some code into functions without actually changing it:
- logServerInfo
- logModulesInfo
- doFastMemoryTest (with the exception of it being conditional)
- dumpCodeAroundEIP
Changes to the crash report on segfault:
- logRegisters is called right after the stack trace (before INFO), just in order to have more re-usable code.
- The stack trace skips the first two items on the stack (the crash log and signal handler functions).
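A redis.conf sketch of the options named above; the values shown as defaults are an assumption:

```conf
crash-log-enabled yes        # "no" gives cleaner core dumps (assumed default: yes)
crash-memcheck-enabled yes   # "no" terminates faster after a crash (assumed default: yes)
use-exit-on-panic no         # the test suite turns this on so valgrind can report leaks
```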
-
- 05 Aug, 2020 4 commits
-
-
Itamar Haber authored
Prevents the default save configuration from being reset...
-
Oran Agra authored
This internal flag exists so that some commands do not comply with `--cluster-yes`.
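For context, a hedged redis-cli sketch; the subcommand, host, and port are illustrative:

```sh
# --cluster-yes auto-answers the interactive confirmation prompts of
# redis-cli's cluster subcommands; commands carrying the internal flag
# described above keep prompting regardless.
redis-cli --cluster reshard 127.0.0.1:7000 --cluster-yes
```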
-
Frank Meier authored
The variable was introduced only in the 5.0 branch in #5879 (bc6c1c40).
-
Frank Meier authored
-
- 04 Aug, 2020 3 commits
-
-
WuYunlong authored
-
Tyson Andre authored
Syntax: `ZMSCORE KEY MEMBER [MEMBER ...]`
This is an extension of #2359, amended by Tyson Andre to work with the changed unstable API, add more tests, and consistently return an array (it seemed more likely to get reviewed after updating the implementation). Currently, MULTI commands or Lua scripting that call ZSCORE multiple times would almost certainly be less efficient than a native ZMSCORE, for the following reasons:
- The sorted set must be fetched from the key string every time instead of reusing the C pointer.
- Pipelining or multi-commands result in more bytes sent by the client for the repeated `ZMSCORE KEY` sections.
- Lua-based solutions need to specially encode the data and decode it on the client.
- The fastest solution I've seen for large sets (thousands or millions of members) involves Lua and a variadic ZADD, then a ZINTERSTORE, then a ZRANGE 0 -1, then UNLINK of a temporary set (or Lua). This is still inefficient.
Co-authored-by:
Tyson Andre <tysonandre775@hotmail.com>
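A quick usage sketch; the key and member names are illustrative, and missing members come back as nil:

```
redis> ZADD myzset 1 "one" 2 "two"
(integer) 2
redis> ZMSCORE myzset "one" "two" "nosuchmember"
1) "1"
2) "2"
3) (nil)
```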
-
Oran Agra authored
Apparently on GitHub Actions, 500ms is sometimes not enough.
-
- 02 Aug, 2020 1 commit
-
-
hujie authored
-
- 31 Jul, 2020 3 commits
-
-
Yossi Gottlieb authored
-
Oran Agra authored
Besides that, the hooks test was time sensitive: when the replica managed to reconnect quickly after the client kill, the test would fail.
-
- 30 Jul, 2020 4 commits
-
-
Yossi Gottlieb authored
-
WuYunlong authored
-
fayadexinqing authored
Broadcast a PONG message when a slot's migration is over, which may reduce the number of MOVED redirects clients receive (#7571).
-
Kevin McGehee authored
Use a higher-level API to funnel all generic propagation through a single function call.
-
- 29 Jul, 2020 4 commits
-
-
Yossi Gottlieb authored
-
Arun Ranganathan authored
Co-authored-by:
Oran Agra <oran@redislabs.com>
-
namtsui authored
The Redis Sentinel would crash with a segfault after a few minutes because it tried to read from a page without read permissions. Check up front whether the sds is long enough to contain "redis:slave" or "redis:master" before calling memcmp(), as is done everywhere else in sentinelRefreshInstanceInfo(). Bug report and commit message from Theo Buehler; fix from Nam Nguyen. Co-authored-by:
Nam Nguyen <namn@berkeley.edu>
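A C sketch of the guard described above; the variable name and exact call site in sentinelRefreshInstanceInfo() are assumptions:

```c
/* Check the sds length before memcmp() so the comparison never reads past
 * the end of the allocation ("l" is assumed to be one parsed INFO line). */
if (sdslen(l) >= 11 && memcmp(l, "redis:slave", 11) == 0) {
    /* ... handle the replica case ... */
}
```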
-
Wen Hui authored
-
- 28 Jul, 2020 5 commits
-
-
Wen Hui authored
valsize is not modified in the for loop below; it comes from c->argv[4], so there is no need to check it inside the loop. Moreover, putting the check outside the loop also avoids a memory leak: had the check stayed inside the loop, the original code would have needed to call decrRefCount(key).
-
Yossi Gottlieb authored
Fix the consistency test added in af5167b7, which was written without considering the TLS redis-cli configuration.
-
Yossi Gottlieb authored
The connection API may create an accepted connection object in an error state, and callers are expected to check it before attempting to use it. Co-authored-by:
mrpre <mrpre@163.com>
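A C sketch of the caller-side check; the names follow the conn API as used elsewhere in the codebase, but treat the exact snippet as an assumption:

```c
/* After accepting, verify the connection isn't already in an error state
 * before installing handlers or reading from it. */
connection *conn = connCreateAcceptedSocket(cfd);
if (connGetState(conn) != CONN_STATE_ACCEPTING) {
    serverLog(LL_WARNING, "Error accepting client connection: %s",
              connGetLastError(conn));
    connClose(conn);
    return;
}
```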
-
Oran Agra authored
- The test now waits for a specific set of log messages rather than waiting for a timeout looking for just one message.
- We don't want to sample the current length of the log after an action; due to a race, we need to start the search from the line number of the last message we were waiting for.
- When attempting to trigger a full sync, use MULTI/EXEC to avoid a race where the replica manages to re-connect before we completed the set of actions that should force a full sync.
- Fix verify_log_message, which was broken and unused.
-
Jiayuan Chen authored
Adds an `optional` value to the previously boolean `tls-auth-clients` configuration keyword. Co-authored-by:
Yossi Gottlieb <yossigo@gmail.com>
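A redis.conf sketch of the new value; the behavioral description is my reading of "optional":

```conf
# "optional": clients may connect without a certificate, but any certificate
# they do present must be valid. Previously only yes/no were accepted.
tls-auth-clients optional
```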
-
- 27 Jul, 2020 3 commits
-
-
Yossi Gottlieb authored
Initialize and configure OpenSSL even when tls-port is not used, because we may still have tls-cluster or tls-replication. Also, make sure to reconfigure OpenSSL when these parameters are changed as TLS could have been enabled for the first time.
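A redis.conf sketch of a setup where this matters; the values are illustrative:

```conf
port 6379            # plain TCP for clients
tls-port 0           # no TLS listening port...
tls-replication yes  # ...but replication links still need TLS
tls-cluster yes      # ...and so does the cluster bus
```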
-
Oran Agra authored
-
Yossi Gottlieb authored
Initialize and configure OpenSSL even when tls-port is not used, because we may still have tls-cluster or tls-replication. Also, make sure to reconfigure OpenSSL when these parameters are changed as TLS could have been enabled for the first time.
-
- 26 Jul, 2020 1 commit
-
-
grishaf authored
-
- 24 Jul, 2020 1 commit
-
-
zhaozhao.zz authored
-
- 23 Jul, 2020 1 commit
-
-
Oran Agra authored
On ci.redis.io the test fails a lot, reporting that the bgsave didn't end, so we increase the timeout we wait for that bgsave to get aborted. In addition, I also verify that it indeed got aborted by checking that the save counter wasn't reset. Also add another test to verify that a successful bgsave indeed resets the change counter.
-