1. 30 Apr, 2020 6 commits
    • fix pipelined WAIT performance issue. · ee627bb6
      srzhao authored
      If the client gets blocked again in `processUnblockedClients`, Redis will
      not send `REPLCONF GETACK *` to the slaves until the next event loop, so
      the client stays blocked for 100ms by default (10hz) if no other file
      event fires.

      Move the server.get_ack_from_slaves snippet after `processUnblockedClients`,
      so that both the first WAIT command that puts the client into the blocked
      context and the following WAIT command processed in processUnblockedClients
      trigger redis-server to send `REPLCONF GETACK *`. The event loop then
      receives `REPLCONF ACK <reploffset>` from the slaves and unblocks the
      client ASAP (see the sketch below).
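      For context, a rough sketch of the reordering in beforeSleep() (names as
      in server.c of that era; this illustrates the ordering only, not the
      exact diff):

      ```c
      /* Process clients unblocked during the previous event loop iteration
       * first, so a pipelined WAIT executed here can raise the
       * server.get_ack_from_slaves flag ... */
      if (listLength(server.unblocked_clients))
          processUnblockedClients();

      /* ... and the GETACK broadcast below still goes out in the same
       * iteration, instead of waiting up to 100ms for the next cron cycle. */
      if (server.get_ack_from_slaves) {
          robj *argv[3];
          argv[0] = createStringObject("REPLCONF",8);
          argv[1] = createStringObject("GETACK",6);
          argv[2] = createStringObject("*",1); /* Not used argument. */
          replicationFeedSlaves(server.slaves, server.slaveseldb, argv, 3);
          decrRefCount(argv[0]);
          decrRefCount(argv[1]);
          decrRefCount(argv[2]);
          server.get_ack_from_slaves = 0;
      }
      ```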
    • Fix create-cluster BIN_PATH. · 47b8a7f9
      antirez authored
    • Extend XINFO STREAM output · 6c0bc608
      Guy Benoish authored
      Introducing XINFO STREAM <key> FULL (a usage sketch follows below).
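      A minimal usage sketch via hiredis; the connection details and the key
      name "mystream" are placeholders, not part of the commit:

      ```c
      #include <stdio.h>
      #include <hiredis/hiredis.h>

      int main(void) {
          redisContext *c = redisConnect("127.0.0.1", 6379);
          if (c == NULL || c->err) { fprintf(stderr, "connection error\n"); return 1; }

          /* XINFO STREAM <key> FULL returns the detailed form of the stream
           * info, including entries and consumer groups, as one nested reply. */
          redisReply *r = redisCommand(c, "XINFO STREAM %s FULL", "mystream");
          if (r && r->type == REDIS_REPLY_ARRAY)
              printf("top-level fields: %zu\n", r->elements);
          if (r) freeReplyObject(r);
          redisFree(c);
          return 0;
      }
      ```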
    • Fix unused macro in cluster.c · 5bfc1895
      hwware authored
    • Update create-cluster · 56d628f8
      Itamar Haber authored
    • Adds `BIN_PATH` to create-cluster · cac9d7cf
      Itamar Haber authored
      Allows setting the path to the binaries when used outside the upstream repo.
      
      Also documents `call` in the usage clause (TODO: port to
      `redis-cli --cluster call` or just deprecate it).
  2. 28 Apr, 2020 7 commits
  3. 27 Apr, 2020 6 commits
    • optimize memory usage of deferred replies · 8110ba88
      Oran Agra authored
      When a deferred reply is added, the previous reply node can no longer be
      appended to, so all the extra space we allocated in it is wasted. If
      someone uses deferred replies in a loop, each time adding a small reply,
      each of these reply nodes (the small string reply) would have consumed a
      16k block.
      Now when we add another deferred reply node, we trim the unused portion
      of the previous reply block (see the sketch below).
      
      see #7123
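      The idea, roughly, as a simplified sketch using the networking.c
      reply-list types; the actual patch differs in its trimming heuristics:

      ```c
      /* Before appending a new deferred reply node, shrink the previous tail
       * block to its used size so the unused part of the 16k chunk is not
       * kept allocated. */
      static void trimReplyUnusedTailSpace(client *c) {
          listNode *ln = listLast(c->reply);
          clientReplyBlock *tail = ln ? listNodeValue(ln) : NULL;
          if (!tail || tail->size == tail->used) return;

          size_t old_size = tail->size;
          tail = zrealloc(tail, sizeof(clientReplyBlock) + tail->used);
          tail->size = tail->used;                 /* block now exactly fits */
          c->reply_bytes -= old_size - tail->size; /* account for freed space */
          listNodeValue(ln) = tail;
      }
      ```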
    • Keep track of meaningful replication offset in replicas too · e4d2bb62
      Oran Agra authored
      Now both the master and the replicas keep track of the last replication
      offset that contains meaningful data (ignoring the trailing PINGs), and
      both trim that tail from the replication backlog and from the offset they
      use when attempting a psync.

      The implication is that if a replica missed some PINGs, or has extra
      PINGs that the promoted replica doesn't, it will still be able to psync
      (avoiding a full sync).

      The downside (already introduced by the earlier commit) is that replicas
      running old code may fail to psync, since the promoted replica trims
      PINGs from its backlog.
      
      This commit adds a test that reproduces several cases of promotions and
      demotions with stale and non-stale PINGs.
      
      Background:
      The meaningful offset on the master was added recently to solve a problem
      where the master is left all alone, injecting PINGs into its backlog when
      no one is listening, and then gets demoted and tries to replicate from a
      replica that didn't have any of the PINGs (or at least not the last ones).
      
      However, consider this case:
      master A has two replicas (B and C) replicating directly from it.
      There is no traffic at all, and no network issues, just many PINGs in the
      tail of the backlog. Now B gets promoted, A becomes a replica of B, and C
      remains a replica of A. When A gets demoted, it trims the PINGs from its
      backlog and successfully replicates from B. However, C is still aware of
      these PINGs; when it disconnects and re-connects to A, it asks for
      something that is no longer in the backlog (since A trimmed the tail of
      its backlog), and is forced to do a full sync (something it didn't have
      to do before the meaningful offset fix).
      
      Besides that, the psync2 test was always failing randomly here and there;
      it turns out the reason was PINGs. Investigating it shows the following
      scenario:

      cycle 1: redis #1 is the master, and all the rest are direct replicas of #1
      cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is a replica of #1
      Now we see that when #1 is demoted it prints:
      17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
      17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
      17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
      and when #3 connects to the demoted #1, #1 says:
      17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964
      
      So the issue here is that the meaningful offset feature saved the day for
      the demoted master (which needed to sync from a replica that didn't get
      the last PING), but it didn't help the other replica, which did get the
      last PING (see the worked example below).
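      To make the 14-byte gap in the log concrete, a small standalone example,
      assuming the PING is propagated to replicas in its RESP multibulk form:

      ```c
      #include <stdio.h>
      #include <string.h>

      int main(void) {
          /* A PING propagated to replicas is this RESP multibulk command,
           * exactly 14 bytes on the wire (4 + 4 + 6). */
          const char ping[] = "*1\r\n$4\r\nPING\r\n";

          long long meaningful_ofs = 3929963;  /* offset excluding the trailing PING */
          long long real_ofs = meaningful_ofs + (long long)strlen(ping);  /* 3929977 */

          /* The demoted node requests meaningful_ofs + 1 = 3929964 and partially
           * resyncs; the chained replica that did receive the PING requests
           * real_ofs + 1 = 3929978, past what the trimmed backlog can serve
           * (up to 3929964), so it falls back to a full sync. */
          printf("PING size: %zu, request after trim: %lld, request with PING: %lld\n",
                 strlen(ping), meaningful_ofs + 1, real_ofs + 1);
          return 0;
      }
      ```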
    • Fix STRALGO command flags. · fea9788c
      antirez authored
    • fix for unintended crash during panic response · 2144047e
      Dave-in-lafayette authored
      If Redis crashes early, before Lua is set up (for example, if file descriptor 0 is closed before exec), it will crash again while trying to print memory statistics.
    • Add the stream tag to XSETID tests · 43329c9b
      Guy Benoish authored
    • fix for crash during panic before all threads are up · 1e17d3de
      Dave-in-lafayette authored
      If there's a panic before all threads have been started (say, if file descriptor 0 is closed at exec), the panic response will crash here again.
  4. 24 Apr, 2020 21 commits