1. 16 May, 2020 2 commits
    • Improve the PSYNC2 test reliability. · a5608cc6
      antirez authored
    • Remove the client from CLOSE_ASAP list before caching the master. · 624742d9
      antirez authored
      This was broken in 1a7cd2c0: we identified a crash in the CI. What
      was happening before the fix was roughly the following:
      
      1. The client gets in the async free list.
      2. However freeClient() gets called again against the same client
         which is a master.
      3. The client arrived in freeClient() with the CLOSE_ASAP flag set.
      4. The master gets cached, but NOT removed from the CLOSE_ASAP linked
         list.
      5. The cached master client was then immediately freed, since it was
         still in the list.
      6. Redis accessed a freed cached master.
      
      This is what the crash looked like:
      
      === REDIS BUG REPORT START: Cut & paste starting from here ===
      1092:S 16 May 2020 11:44:09.731 # Redis 999.999.999 crashed by signal: 11
      1092:S 16 May 2020 11:44:09.731 # Crashed running the instruction at: 0x447e18
      1092:S 16 May 2020 11:44:09.731 # Accessing address: 0xffffffffffffffff
      1092:S 16 May 2020 11:44:09.731 # Failed assertion:  (:0)
      
      ------ STACK TRACE ------
      EIP:
      src/redis-server 127.0.0.1:21300(readQueryFromClient+0x48)[0x447e18]
      
      The 0xffffffffffffffff address access likely comes from accessing an SDS
      string that is set to NULL (we read at a -1 offset to reach the header).
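      A minimal sketch of the idea behind the fix (hedged: it reuses
      Redis-style names such as server.clients_to_close, listSearchKey() and
      replicationCacheMaster(), but it is illustrative rather than the
      verbatim upstream patch): before caching a master that was already
      queued for asynchronous close, unlink it from the async-close list so
      that the later free cycle can no longer touch the cached client.

      /* Inside freeClient(), sketch only; protocol error handling omitted. */
      if (server.master && c->flags & CLIENT_MASTER) {
          if (c->flags & CLIENT_CLOSE_ASAP) {
              /* The client was queued for asynchronous close: unlink it from
               * the queue now, otherwise freeClientsInAsyncFreeQueue() would
               * later free the cached master. */
              listNode *ln = listSearchKey(server.clients_to_close, c);
              serverAssert(ln != NULL);
              listDelNode(server.clients_to_close, ln);
          }
          replicationCacheMaster(c);
          return;
      }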
  2. 15 May, 2020 2 commits
    • Cache master without checking of deferred close flags. · 1a7cd2c0
      antirez authored
      The context is issue #7205: since the introduction of threaded I/O we
      close clients asynchronously by default from readQueryFromClient(). So
      we should no longer refuse to cache the master client (needed in order
      to later PSYNC incrementally) just because the deferred close flags are
      set. However we also don't want the master to be cached with such flags
      still set (it would be closed immediately after being restored). And yet
      we want a way to tell whether a master was closed because of a protocol
      error, and in that case prevent the caching.
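      A hedged sketch of the resulting freeClient() logic (the
      CLIENT_PROTOCOL_ERROR flag name is an assumption here, standing for a
      dedicated flag raised where protocol errors are detected; the other
      names follow the Redis codebase):

      /* Sketch only, not the verbatim patch: cache the master even if it was
       * scheduled for a deferred close, but clear those flags first, and skip
       * caching entirely when the close was caused by a protocol error. */
      if (server.master && c->flags & CLIENT_MASTER) {
          serverLog(LL_WARNING, "Connection with master lost.");
          if (!(c->flags & CLIENT_PROTOCOL_ERROR)) {
              c->flags &= ~(CLIENT_CLOSE_ASAP | CLIENT_CLOSE_AFTER_REPLY);
              replicationCacheMaster(c);
              return;
          }
      }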
  3. 14 May, 2020 10 commits
  4. 12 May, 2020 4 commits
    • NetBSD build update. · 4715ce59
      David Carlier authored
      This platform supports CPU affinity (OpenBSD does not).
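      For illustration, a minimal standalone sketch (an assumption-labeled
      example using NetBSD's native affinity API, not the Redis code) of
      pinning the calling thread to a CPU:

      /* Build on NetBSD, e.g.: cc -o pin pin.c -lpthread */
      #include <pthread.h>
      #include <sched.h>
      #include <stdio.h>

      int main(void) {
          cpuset_t *set = cpuset_create();      /* dynamically sized CPU set */
          if (set == NULL) return 1;
          cpuset_zero(set);
          cpuset_set(0, set);                   /* pin to CPU 0 */
          int err = pthread_setaffinity_np(pthread_self(),
                                           cpuset_size(set), set);
          cpuset_destroy(set);
          printf("affinity %s\n", err == 0 ? "set" : "not set");
          return err ? 1 : 0;
      }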
    • Some rework of #7234. · 27e25e9d
      antirez authored
    • Merge pull request #7240 from oranagra/fix_replication_test · b726d642
      Salvatore Sanfilippo authored
      fix unstable replication test
    • fix unstable replication test · b4416280
      Oran Agra authored
      This test, which covers various flows of the diskless master, was
      failing randomly from time to time.
      
      The failure was:
      [err]: diskless all replicas drop during rdb pipe in tests/integration/replication.tcl
      log message of '*Diskless rdb transfer, last replica dropped, killing fork child*' not found
      
      What seems to have happened is that the master didn't detect that all
      replicas had dropped by the time the replication ended; it thought that
      one replica was still connected.
      
      Now the test takes a few seconds longer, but it seems stable.
  5. 11 May, 2020 1 commit
    • fix redis 6.0 not freeing closed connections during loading. · 905e28ee
      Oran Agra authored
      This bug was introduced by a recent change in which readQueryFromClient
      uses freeClientAsync, and despite the fact that
      freeClientsInAsyncFreeQueue is now called from beforeSleep, that's not
      enough, since beforeSleep is not called during loading in
      processEventsWhileBlocked. Furthermore, afterSleep was called in that
      case but beforeSleep wasn't.
      
      This bug also caused slowness, since the level-triggered mode of epoll
      kept signaling these connections as readable, causing us to keep calling
      connRead again and again for all of them as they kept accumulating.
      
      Now both beforeSleep and afterSleep are called, but not all of their
      actions are performed during loading; some are reserved for the main
      loop only.
      
      Fixes issue #7215.
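      A hedged sketch of the resulting flow (names such as
      ProcessingEventsWhileBlocked and the AE_CALL_*_SLEEP event-loop flags
      follow the Redis codebase, but this is illustrative, not the verbatim
      patch): the blocked-loading loop asks the event loop to run the sleep
      callbacks too, and beforeSleep() performs only a small subset of its
      work, crucially freeing the async-close queue, while in that mode.

      int ProcessingEventsWhileBlocked = 0;

      void processEventsWhileBlocked(void) {
          ProcessingEventsWhileBlocked = 1;
          /* Also run the sleep callbacks, so beforeSleep()/afterSleep()
           * get a chance to run while loading. */
          aeProcessEvents(server.el,
              AE_FILE_EVENTS|AE_DONT_WAIT|
              AE_CALL_BEFORE_SLEEP|AE_CALL_AFTER_SLEEP);
          ProcessingEventsWhileBlocked = 0;
      }

      void beforeSleep(struct aeEventLoop *eventLoop) {
          if (ProcessingEventsWhileBlocked) {
              /* Loading mode: only the essential actions, most importantly
               * releasing connections queued by freeClientAsync(). */
              handleClientsWithPendingReadsUsingThreads();
              freeClientsInAsyncFreeQueue();
              return;
          }
          /* ... full set of beforeSleep() actions for the main loop ... */
      }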
  6. 10 May, 2020 2 commits
  7. 09 May, 2020 4 commits
  8. 08 May, 2020 1 commit
  9. 06 May, 2020 6 commits
  10. 05 May, 2020 8 commits