- 08 Sep, 2024 1 commit
Ozan Tezcan authored
#13495 introduced a change to reply -LOADING while flushing the existing db on a replica. Some of our tests are sensitive to this change and do not expect a -LOADING reply. This fixes a couple of tests that failed from time to time.
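For context, here is a hedged sketch of the kind of client-side tolerance the fixed tests need: retry a command while the server is answering -LOADING. This is illustrative only (the actual tests are Tcl); the helper name and backoff are made up, and only standard hiredis calls are used.

```c
#include <string.h>
#include <unistd.h>
#include <hiredis/hiredis.h>

/* Hypothetical helper: issue a command, backing off while the server
 * replies -LOADING (as the replica now does while flushing its db). */
static redisReply *commandRetryOnLoading(redisContext *c, const char *cmd) {
    for (;;) {
        redisReply *r = redisCommand(c, cmd);
        if (r && r->type == REDIS_REPLY_ERROR &&
            strncmp(r->str, "LOADING", 7) == 0) {
            freeReplyObject(r);
            usleep(100 * 1000); /* wait 100ms and retry */
            continue;
        }
        return r; /* NULL on connection error, otherwise the real reply */
    }
}
```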
- 03 Sep, 2024 1 commit
Ozan Tezcan authored
On a full sync, the replica starts by discarding its existing db. If the existing db is huge and the flush happens synchronously, the replica may become unresponsive. This change yields back to the event loop while flushing the db on a replica; the replica replies -LOADING in the meantime. Note that while the replica is loading the new rdb, it may hit an error and start flushing the partial db. That step may take a long time as well, and the replica likewise replies -LOADING while it runs.

To call processEventsWhileBlocked() and reply -LOADING, we need to:
- Set connSetReadHandler() to null so no further data from the master is processed
- Set the server.loading flag
- Call blockingOperationStarts()

rdbLoad() already does these steps and calls processEventsWhileBlocked() while loading the rdb. Added a new call, rdbLoadWithEmptyFunc(), which accepts a callback to flush the db before loading the rdb, or when an error happens while loading. For diskless replication, we do something similar and call emptyData() after setting the required flags.

Additional changes:
- Allow `appendonly` config changes during loading. The config can be changed while loading data on startup, or on replication while the slave is loading the RDB. We allow the config change command to update `server.aof_enabled` and then lazily apply the change after the loading operation completes.
- Added a test for the `replica-lazy-flush` config
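As a rough illustration of the sequence above, here is a minimal sketch, not the actual Redis source: connSetReadHandler, blockingOperationStarts, and processEventsWhileBlocked are real server internals (stubbed here), while flushOneBatch and the driver function are hypothetical stand-ins.

```c
#include <stdbool.h>

/* Stubs standing in for server internals, so the shape is visible. */
void connSetReadHandlerNull(void);   /* stands in for connSetReadHandler(conn, NULL) */
void blockingOperationStarts(void);
void blockingOperationEnds(void);
void processEventsWhileBlocked(void);
extern int server_loading;           /* stands in for server.loading */

bool flushOneBatch(void);            /* hypothetical: flush some keys, true when done */

/* Flush the replica's db without going unresponsive: mark the server as
 * loading (so commands get -LOADING) and yield to the event loop between
 * batches. */
void replicaFlushDbYielding(void) {
    connSetReadHandlerNull();        /* stop consuming further data from the master */
    server_loading = 1;              /* commands now receive -LOADING */
    blockingOperationStarts();
    while (!flushOneBatch())
        processEventsWhileBlocked(); /* serve events; clients see -LOADING */
    blockingOperationEnds();
    server_loading = 0;
}
```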
- 17 Jan, 2022 1 commit
Oran Agra authored
1. Enable diskless replication by default.
2. Add a new config named repl-diskless-sync-max-replicas that lets replication start before the full repl-diskless-sync-delay is reached.
3. Put the replica online sooner on the master (see below).
4. The test suite uses a repl-diskless-sync-delay of 0 to be faster.
5. A few tests that use multiple replicas on a pre-populated master now use the new repl-diskless-sync-max-replicas.
6. Fix possible timing issues in a few cluster tests (see below).

put replica online sooner on the master
----------------------------------------------------
Two tests failed because they needed the master to realize that the replica is online, but the test code was actually only waiting for the replica to realize it's online, and in diskless replication that could happen before the master realized it. The changes include two things:
1. The tests now wait on the right thing.
2. Fix issues in the master, which put the replica online in two steps.

The master used to put the replica online in two steps: the first was to mark it as online, and the second was to enable the write event (only after getting an ACK). But the first step didn't contain some of the tasks needed to put it online (like updating the good-slave count and sending the module event). This meant that if a test waited to see that the replica is online from the point of view of the master, and then confirmed that the module got an event, or that the master has enough good replicas, it could fail due to timing issues. Now the full effect of putting the replica online happens at once, and only enabling the writes is delayed until the ACK.

fix cluster tests
--------------------
I added some code to wait for the replica to sync and avoid race conditions. I later realized the sentinel and cluster tests were using the original 5-second delay, so I changed it to 0. This means the other changes are probably not needed, but I suppose they're still better (they avoid race conditions).
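A hedged sketch of the one-step change described above: refreshGoodSlavesCount and the replica-change module event are real Redis concepts, but the function bodies and stubs here are illustrative, not the actual source.

```c
/* Stubs standing in for server internals. */
typedef struct client client;
void markReplicaOnline(client *replica);              /* sets the ONLINE repl state */
void refreshGoodSlavesCount(void);
void fireReplicaChangeModuleEvent(client *replica);   /* hypothetical wrapper */

/* Before: marking the replica online and the related bookkeeping happened
 * at different times, so a test observing "online" could still see a stale
 * good-slave count or a missing module event. After: everything lands at
 * once; only the write-event installation is deferred until the first ACK. */
void putReplicaOnline(client *replica) {
    markReplicaOnline(replica);
    refreshGoodSlavesCount();              /* good-slave count updated now */
    fireReplicaChangeModuleEvent(replica); /* module event sent now */
    /* NOTE: the write handler is installed later, on the replica's ACK */
}
```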
- 31 Oct, 2021 1 commit
Oran Agra authored
Fix failures introduced by #9695, which was an attempt to solve failures introduced by #9679. An alternative to #9703 (I didn't like the extra argument to kill_instance). Reverting #9695: instead of stopping AOF on all terminations, stop it only on the two tests which need it, and do it as part of the test rather than the infra (it was odd that kill_instance used `R` to communicate with the instance). Note that the original purpose of these tests was to trigger a crash, but that upsets valgrind, so in redis 6.2 I changed them to use SIGTERM; I now rename the tests accordingly (removing "kill" and "crash"). Also add some colors to failures, and the word "FAILED", so that it's searchable. And solve a semi-related race condition in 14-consistency-check.tcl.
- 30 May, 2021 1 commit
ny0312 authored
Till now, on a replica full sync we used to transfer an absolute time for the TTL. However, when a command arrived (EXPIRE or EXPIREAT), we used to propagate it as-is to replicas (possibly with a relative time), while always translating it to EXPIREAT (absolute time) for the AOF. This commit changes that and always uses absolute time for propagation; see the discussion in #8433. Furthermore, we introduce new commands, `EXPIRETIME`/`PEXPIRETIME`, that allow extracting the absolute TTL time of a key.
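A quick sketch of the relative-to-absolute translation (the time helper mirrors the idea of Redis's mstime(); the conversion function name here is hypothetical):

```c
#include <sys/time.h>

/* Current unix time in milliseconds, same idea as Redis's mstime(). */
static long long mstime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return ((long long)tv.tv_sec) * 1000 + tv.tv_usec / 1000;
}

/* EXPIRE key <seconds> arrives on the master; what gets propagated to
 * replicas and the AOF is the absolute form, PEXPIREAT key <when_ms>,
 * so every consumer sees the same deadline regardless of when it
 * applies the command. */
static long long expireToAbsoluteMs(long long seconds) {
    return mstime() + seconds * 1000;
}
```

`EXPIRETIME` then returns that absolute deadline in seconds, and `PEXPIRETIME` in milliseconds.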
- 07 Sep, 2020 1 commit
Oran Agra authored
This test was failing from time to time; see the discussion at the bottom of #7635. This was probably due to timing: the DEBUG SLEEP executed by redis-cli didn't sleep for enough time. This commit changes:
1) Use SET-ACTIVE-EXPIRE instead of DEBUG SLEEP.
2) Replace many `after` sleeps with retry loops to speed up the test.
3) Add many comments explaining the different steps of the test and its purpose.
4) Configure appendonly before populating the volatile keys, so that they'll be part of the AOF command stream rather than the preamble RDB portion.

Other complications: recently kill_instance switched from SIGKILL to SIGTERM, and this would sometimes fail since there was an AOFRW running in the background. Now we wait for it to end before attempting the kill.
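The retry-loop idea from item 2 looks like this (the real tests do it in Tcl via the suite's wait helpers; this is just the pattern sketched in C, with hypothetical names, for consistency with the other sketches):

```c
#include <stdbool.h>
#include <unistd.h>

/* Poll a condition instead of sleeping a fixed time and hoping it holds:
 * fast when the condition becomes true early, and far less flaky than a
 * hard-coded `after <ms>` sleep on a slow machine. */
static bool waitForCondition(bool (*cond)(void), int max_tries, int delay_ms) {
    for (int i = 0; i < max_tries; i++) {
        if (cond()) return true;
        usleep((useconds_t)delay_ms * 1000);
    }
    return false; /* timed out; the test should fail loudly here */
}
```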
- 30 Jul, 2020 1 commit
WuYunlong authored
- 28 Jul, 2020 1 commit
Yossi Gottlieb authored
Fix the consistency test added in af5167b7, which did not take the TLS redis-cli configuration into account.
- 18 Mar, 2020 1 commit
WuYunlong authored