1. 22 May, 2020 4 commits
    • Fix #7306 less aggressively. · b407590c
      antirez authored
      Citing from the issue:
      
      btw I suggest we change this fix to something else:
      * We revert the fix.
      * We add a call that disconnects chained replicas in the place where we trim the offset of the replica (which is a master in this case).
      This way we can avoid disconnections when there is no trimming of the backlog.
      
      Note that we now want to disconnect replicas asynchronously in
      disconnectSlaves(), because that is generally safer now that it can be
      called from freeClient(). Otherwise, for instance, the command:
      
          CLIENT KILL TYPE master
      
      may crash: clientCommand() starts scanning the linked list of clients,
      looking for clients to kill. However it finds the master and kills it
      by calling freeClient(), but this in turn calls replicationCacheMaster(),
      which may now also call disconnectSlaves(). At that point the linked
      list iterator used by clientCommand() is no longer valid.
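      To illustrate the hazard described above, here is a minimal sketch in C
      (hypothetical client_t structure and helper names, not the actual Redis
      code): instead of freeing a client while some caller is still walking
      the client list, the client is only flagged for asynchronous close, and
      a separate pass that owns the list performs the real free, so no
      iterator is invalidated.

          #include <stdio.h>
          #include <stdlib.h>

          /* Hypothetical client node; Redis uses its own list and client
           * structures, this is only an illustration of the pattern. */
          typedef struct client_t {
              int id;
              int is_master;
              int close_asap;              /* deferred-close flag */
              struct client_t *next;
          } client_t;

          /* SAFE: mark the client for asynchronous close instead of freeing
           * it in place while another function may be iterating the list. */
          static void free_client_async(client_t *c) {
              c->close_asap = 1;
          }

          /* A later pass that owns the list unlinks and frees the flagged
           * clients, so earlier iterators were never invalidated. */
          static client_t *reap_closed_clients(client_t *head) {
              client_t **pp = &head;
              while (*pp) {
                  if ((*pp)->close_asap) {
                      client_t *dead = *pp;
                      *pp = dead->next;    /* unlink first, then free */
                      free(dead);
                  } else {
                      pp = &(*pp)->next;
                  }
              }
              return head;
          }

          int main(void) {
              /* Tiny list: a master followed by one replica. */
              client_t *replica = calloc(1, sizeof(*replica));
              client_t *master  = calloc(1, sizeof(*master));
              replica->id = 2;
              master->id = 1; master->is_master = 1; master->next = replica;

              /* "CLIENT KILL TYPE master": flag instead of freeing while
               * still iterating the very list being scanned. */
              for (client_t *c = master; c; c = c->next)
                  if (c->is_master) free_client_async(c);

              client_t *head = reap_closed_clients(master);
              printf("remaining head id: %d\n", head ? head->id : -1);
              while (head) { client_t *n = head->next; free(head); head = n; }
              return 0;
          }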
    • Merge pull request #7305 from madolson/unstable-connection · 285817b2
      Salvatore Sanfilippo authored
      EAGAIN not handled for TLS during diskless load
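      The merged fix concerns the TLS connection layer; as a generic
      illustration only (hypothetical names, not the code of the PR), a
      non-blocking read should treat EAGAIN/EWOULDBLOCK as "no data yet" and
      retry from the event loop instead of aborting the diskless load:

          #include <errno.h>
          #include <sys/types.h>
          #include <unistd.h>

          typedef enum { READ_OK, READ_RETRY, READ_ERROR, READ_EOF } read_status;

          /* Read from a non-blocking descriptor: EAGAIN/EWOULDBLOCK (and
           * EINTR) mean "try again later", not a fatal error. */
          static read_status try_read(int fd, void *buf, size_t len,
                                      ssize_t *nread) {
              ssize_t n = read(fd, buf, len);
              if (n > 0) { *nread = n; return READ_OK; }
              if (n == 0) return READ_EOF;       /* peer closed the connection */
              if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR)
                  return READ_RETRY;             /* no data yet, retry later */
              return READ_ERROR;                 /* real I/O error */
          }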
    • Merge pull request #7306 from QuChen88/chained-replica-offset · ee93a70e
      Salvatore Sanfilippo authored
      Always disconnect chained replicas when the replica performs PSYNC with the master, to avoid a replication offset mismatch between the master and its chained replicas.
    • Disconnect chained replicas when the replica performs PSYNC with the master... · 42f5da5d
      Qu Chen authored
      Always disconnect chained replicas when the replica performs PSYNC with the master, to avoid a replication offset mismatch between the master and its chained replicas.
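      The rationale, sketched below with hypothetical names (this is not the
      Redis implementation): when a replica that has sub-replicas attached
      performs a full resync with its master, its replication stream and
      offset restart from the master's values, so the offsets acknowledged by
      the sub-replicas refer to a stream that no longer exists and the only
      safe option is to drop them so they resync and agree on the new offset.

          #include <stddef.h>

          typedef struct sub_replica {
              long long ack_offset;     /* offset acked against the OLD stream */
              int must_reconnect;       /* flagged instead of freed in place */
              struct sub_replica *next;
          } sub_replica;

          /* After a full resync the parent's stream restarts at the master's
           * offset, so every attached sub-replica is counting against a
           * stream that is gone: mark them all for (asynchronous)
           * disconnection so they perform a fresh PSYNC. */
          static int drop_sub_replicas_after_full_resync(sub_replica *subs) {
              int dropped = 0;
              for (sub_replica *s = subs; s != NULL; s = s->next) {
                  s->must_reconnect = 1;
                  dropped++;
              }
              return dropped;
          }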
  2. 21 May, 2020 11 commits
  3. 20 May, 2020 4 commits
  4. 19 May, 2020 5 commits
  5. 18 May, 2020 7 commits
  6. 17 May, 2020 4 commits
  7. 16 May, 2020 1 commit
    • Remove the client from CLOSE_ASAP list before caching the master. · 624742d9
      antirez authored
      This was broken in 1a7cd2c0: we identified a crash in the CI. What
      was happening before the fix was likely the following:
      
      1. The client gets into the async free list.
      2. However freeClient() gets called again against the same client,
         which is a master.
      3. The client arrives in freeClient() with the CLOSE_ASAP flag set.
      4. The master gets cached, but is NOT removed from the CLOSE_ASAP
         linked list.
      5. The cached master is then freed anyway, since it is still in the
         CLOSE_ASAP list.
      6. Redis accesses a freed cached master.
      
      This is what the crash looked like:
      
      === REDIS BUG REPORT START: Cut & paste starting from here ===
      1092:S 16 May 2020 11:44:09.731 # Redis 999.999.999 crashed by signal: 11
      1092:S 16 May 2020 11:44:09.731 # Crashed running the instruction at: 0x447e18
      1092:S 16 May 2020 11:44:09.731 # Accessing address: 0xffffffffffffffff
      1092:S 16 May 2020 11:44:09.731 # Failed assertion:  (:0)
      
      ------ STACK TRACE ------
      EIP:
      src/redis-server 127.0.0.1:21300(readQueryFromClient+0x48)[0x447e18]
      
      And the 0xffffffffffffffff address access likely comes from accessing
      an SDS string that is set to NULL (the SDS header is read at a -1
      offset from the pointer).
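      The fix itself can be sketched as follows (hypothetical structure and
      flag names, not the actual Redis functions): before a client is turned
      into the cached master it must also be unlinked from the deferred-close
      list and stripped of the CLOSE_ASAP state, so the asynchronous free
      pass can no longer reach it.

          #include <stddef.h>

          #define FLAG_CLOSE_ASAP (1 << 0)

          typedef struct client {
              int flags;
              struct client *next_to_close;   /* close-ASAP list membership */
          } client;

          typedef struct server_state {
              client *close_asap_head;   /* clients scheduled for async free */
              client *cached_master;     /* master kept around for PSYNC */
          } server_state;

          /* Unlink the client from the close-ASAP list and clear the flag. */
          static void unlink_from_close_asap(server_state *srv, client *c) {
              client **pp = &srv->close_asap_head;
              while (*pp) {
                  if (*pp == c) { *pp = c->next_to_close; break; }
                  pp = &(*pp)->next_to_close;
              }
              c->next_to_close = NULL;
              c->flags &= ~FLAG_CLOSE_ASAP;
          }

          static void cache_master(server_state *srv, client *master) {
              /* The fix: drop the deferred-close state BEFORE caching, so the
               * asynchronous free pass cannot free the cached master later. */
              if (master->flags & FLAG_CLOSE_ASAP)
                  unlink_from_close_asap(srv, master);
              srv->cached_master = master;
          }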
  8. 15 May, 2020 2 commits
    • Cache master without checking of deferred close flags. · 1a7cd2c0
      antirez authored
      The context is issue #7205: since the introduction of threaded I/O we
      close clients asynchronously by default from readQueryFromClient(). So
      we should no longer refuse to cache the master client (for a later
      incremental PSYNC) just because such deferred close flags are set.
      However we also don't want the master to be cached while still carrying
      those flags (it would be closed immediately after being restored). And
      yet we want a way to tell whether a master was closed because of a
      protocol error, and in that case prevent the caching.
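      The resulting policy can be sketched like this (hypothetical flag and
      function names, not the actual Redis code): deferred close flags alone
      must not prevent caching the master, a protocol error must, and the
      cached master must not keep the deferred close flags.

          #include <stdbool.h>

          #define FLAG_CLOSE_ASAP        (1 << 0)
          #define FLAG_CLOSE_AFTER_REPLY (1 << 1)
          #define FLAG_PROTO_ERROR       (1 << 2)

          struct client { int flags; };

          /* A master closed because of a protocol error cannot be trusted
           * for a later incremental PSYNC, so it is not cached. */
          static bool should_cache_master(const struct client *master) {
              return (master->flags & FLAG_PROTO_ERROR) == 0;
          }

          /* Strip the deferred-close flags before caching, otherwise the
           * restored master would be closed again immediately. */
          static void prepare_master_for_caching(struct client *master) {
              master->flags &= ~(FLAG_CLOSE_ASAP | FLAG_CLOSE_AFTER_REPLY);
          }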
  9. 14 May, 2020 2 commits