1. 20 Jul, 2020 4 commits
    • correct error msg for num connections reaching maxclients in cluster mode (#7444) · 0f75036c
      Wen Hui authored
      
      (cherry picked from commit d85af4d6)
    • EXEC always fails with EXECABORT and multi-state is cleared · 05e483cb
      Oran Agra authored
      In order to support the use of multi-exec in pipelines, it is important
      that MULTI and EXEC are never rejected and that it is easy for the
      client to know if the connection is still in multi state.
      
      It was easy to make sure MULTI and DISCARD never fail (done by previous
      commits) since these only change the client state and don't make any
      actual change in the server, but EXEC is a different story.
      
      Since in the past it was possible for clients to handle some EXEC errors
      and retry the EXEC, we now can't afford to return any error on EXEC other
      than EXECABORT, which now carries with it the real reason for the abort too.
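
      As a consequence, a client pipelining a transaction can simply read the
      replies in order and check the EXEC reply for the -EXECABORT prefix.
      A minimal client-side sketch of this, using hiredis (illustrative, not
      part of this commit):

          /* Pipeline MULTI ... EXEC and rely on the guarantee above: any
           * EXEC failure is -EXECABORT and leaves the connection out of
           * multi state, so no cleanup DISCARD is needed. */
          #include <stdio.h>
          #include <string.h>
          #include <hiredis/hiredis.h>

          int main(void) {
              redisContext *c = redisConnect("127.0.0.1", 6379);
              if (!c || c->err) return 1;

              /* Queue the whole transaction as one pipeline. */
              redisAppendCommand(c, "MULTI");
              redisAppendCommand(c, "INCR counter");
              redisAppendCommand(c, "BADCOMMAND");   /* queuing error */
              redisAppendCommand(c, "EXEC");

              redisReply *r;
              for (int i = 0; i < 4; i++) {
                  if (redisGetReply(c, (void **)&r) != REDIS_OK) break;
                  if (i == 3 && r->type == REDIS_REPLY_ERROR &&
                      strncmp(r->str, "EXECABORT", 9) == 0) {
                      /* Aborted; multi state is already cleared server-side. */
                      printf("aborted: %s\n", r->str);
                  }
                  freeReplyObject(r);
              }
              redisFree(c);
              return 0;
          }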
      
      Other fixes in this commit:
      - Some checks that were performed at the time of queuing need to be re-
        validated when EXEC runs; for instance, if the transaction contains
        write commands, it may need to be aborted. There was one check that
        was already done in execCommand (-READONLY), but other checks were
        missing: -OOM, -MISCONF, -NOREPLICAS, -MASTERDOWN.
      - When a command was rejected by processCommand it was rejected with
        addReply, which was not recognized as an error in case the bad
        command came from the master. This change makes it possible to count
        or MONITOR these errors in the future.
      - Make it easier for tests to create additional (non-deferred) clients.
      - Add tests for the fixes in this commit.
      
      (cherry picked from commit 65a3307b)
    • Include cluster.h for getClusterConnectionsCount(). · c8f250f8
      antirez authored
      (cherry picked from commit 21f62c33)
    • Use cluster connections too, to limit maxclients. · 0ebbc360
      antirez authored
      See #7401.
      
      (cherry picked from commit 4b8d8826)
  2. 09 Jun, 2020 3 commits
  3. 28 May, 2020 3 commits
    • Replication: showLatestBacklog() refactored out. · cc549b46
      antirez authored
    • Remove the meaningful offset feature. · 2112a570
      antirez authored
      After a closer look, the Redis core developers all believe that this was
      too fragile, caused many bugs that we didn't expect and that were very
      hard to track. Better to find a simpler alternative solution.
    • Set a protocol error if the master uses the inline protocol. · d2eb6e0b
      antirez authored
      We want to react a bit more aggressively if we sense that the master is
      sending us some corrupted stream. By setting the protocol error we both
      ensure that the replica will disconnect, and avoid caching the master so
      that a full SYNC will be required. This is protective against
      replication bugs.
  4. 25 May, 2020 2 commits
    • Make disconnectSlaves() synchronous in the base case. · e3f864b5
      antirez authored
      Otherwise we run into the following crash:
      
      Backtrace:
      src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
      src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
      /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
      src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
      src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
      src/redis-server 127.0.0.1:21322[0x4552d4]
      src/redis-server 127.0.0.1:21322[0x4c5593]
      src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
      src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
      src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
      /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
      src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]
      
      This happens because we disconnect all the replicas and free the
      replication backlog in certain replication paths, and the code that
      frees the replication backlog expects that no replica is connected.
      
      However we still need to free the replicas asynchronously in certain
      cases, as documented in the top comment of disconnectSlaves().
    • Fix #7306 less aggressively. · 3c21418c
      antirez authored
      Citing from the issue:
      
      btw I suggest we change this fix to something else:
      * We revert the fix.
      * We add a call that disconnects chained replicas in the place where we trim the replica (that is a master in this case) offset.
      This way we can avoid disconnections when there is no trimming of the backlog.
      
      Note that we now want to disconnect replicas asynchronously in
      disconnectSlaves(), because it's in general safer now that we can call
      it from freeClient(). Otherwise, for instance, the command:

          CLIENT KILL TYPE master

      may crash: clientCommand() starts iterating the linked list of clients,
      looking for clients to kill. However it finds the master and kills it
      by calling freeClient(), but this in turn calls replicationCacheMaster(),
      which may now also call disconnectSlaves(). So the linked list iterator
      of clientCommand() will no longer be valid.
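
      The underlying hazard is generic: freeing list nodes from inside a
      callback while an outer frame is iterating the same list. A
      self-contained sketch of the deferred-free pattern applied here
      (illustrative, not the actual server code):

          #include <stdlib.h>

          typedef struct client {
              int close_asap;              /* flagged for asynchronous free */
              struct client *next;
          } client;

          static client *clients;          /* global list, as in the server */

          /* Safe to call from anywhere, even mid-iteration. */
          static void freeClientAsync(client *c) { c->close_asap = 1; }

          /* Called from the event loop, when no iteration is in progress. */
          static void freeClientsInAsyncFreeQueue(void) {
              client **link = &clients;
              while (*link) {
                  client *c = *link;
                  if (c->close_asap) { *link = c->next; free(c); }
                  else link = &c->next;
              }
          }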
  5. 22 May, 2020 2 commits
  6. 16 May, 2020 1 commit
    • Remove the client from CLOSE_ASAP list before caching the master. · 1eab62f7
      antirez authored
      This was broken in 1a7cd2c0: we identified a crash in the CI; what
      was happening before the fix was the following:
      
      1. The client gets in the async free list.
      2. However freeClient() gets called again against the same client
         which is a master.
      3. The client arrived in freeClient() with the CLOSE_ASAP flag set.
      4. The master gets cached, but NOT removed from the CLOSE_ASAP linked
         list.
      5. The cached master client was then immediately freed, since it was
         still in the CLOSE_ASAP list.
      6. Redis accessed a freed cached master.
      
      This is what the crash looked like:
      
      === REDIS BUG REPORT START: Cut & paste starting from here ===
      1092:S 16 May 2020 11:44:09.731 # Redis 999.999.999 crashed by signal: 11
      1092:S 16 May 2020 11:44:09.731 # Crashed running the instruction at: 0x447e18
      1092:S 16 May 2020 11:44:09.731 # Accessing address: 0xffffffffffffffff
      1092:S 16 May 2020 11:44:09.731 # Failed assertion:  (:0)
      
      ------ STACK TRACE ------
      EIP:
      src/redis-server 127.0.0.1:21300(readQueryFromClient+0x48)[0x447e18]
      
      And the 0xffffffffffffffff address access likely comes from accessing
      an SDS pointer that is set to NULL (we read at a -1 offset to get the
      header).
  7. 15 May, 2020 1 commit
    • Cache master without checking deferred close flags. · 80c906bd
      antirez authored
      The context is issue #7205: since the introduction of threaded I/O we
      close clients asynchronously by default from readQueryFromClient(). So
      we should no longer prevent the caching of the master client, for a
      later incremental PSYNC, if such flags are set. However we also don't
      want the master client to be cached with such flags (it would be closed
      immediately after being restored). And yet we want a way to understand
      if a master was closed because of a protocol error, and in that case
      prevent the caching.
  8. 14 May, 2020 2 commits
  9. 08 May, 2020 3 commits
    • 84d9766d
    • Support setcpuaffinity on linux/bsd · d6436eb7
      zhenwei pi authored
      Currently, there are several types of threads/child processes in a
      redis server. Sometimes we need to deeply optimise the performance of
      redis, so we would like to isolate threads/processes.

      There was some discussion about CPU affinity use cases in this issue:
      https://github.com/antirez/redis/issues/2863

      So this patch implements CPU affinity setting via redis.conf: we can
      now configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
      bgsave_cpulist as CPU lists.
      
      Examples of cpulist in redis.conf:
      server_cpulist 0-7:2      means cpu affinity 0,2,4,6
      bio_cpulist 1,3           means cpu affinity 1,3
      aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
      bgsave_cpulist 1,10-11    means cpu affinity 1,10,11
      
      Tested on linux/freebsd; both work fine.
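
      A hedged sketch of what the Linux side of this boils down to (the
      actual patch parses the cpulist strings from redis.conf; here the set
      for "0-7:2" is hardcoded for brevity, and the helper name is made up):

          #define _GNU_SOURCE
          #include <pthread.h>
          #include <sched.h>

          /* Pin the calling thread to CPUs 0,2,4,6, i.e. "0-7:2". */
          static int pin_cpulist_0_7_step2(void) {
              cpu_set_t set;
              CPU_ZERO(&set);
              for (int cpu = 0; cpu <= 7; cpu += 2) CPU_SET(cpu, &set);
              /* On FreeBSD the analogous call is cpuset_setaffinity(). */
              return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
          }
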
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
    • optimize memory usage of deferred replies - fixed · 75addb4f
      Oran Agra authored
      When a deferred reply is added, the previous reply node cannot be used,
      so all the extra space we allocated in it is wasted. In case someone
      uses deferred replies in a loop, each time adding a small reply, each
      of these reply nodes (the small string reply) would have consumed a
      16k block.
      Now, when we add another deferred reply node, we trim the unused
      portion of the previous reply block.
      
      see #7123
      
      Cherry picked from commit fb732f7a, with a fix to handle a crash with
      the LIBC allocator, which apparently can return the same pointer
      despite changing the allocation's size, i.e. shrinking an allocation
      from 16k to 56 bytes without changing the pointer.
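
      A minimal sketch of the trimming idea (not the server's actual reply
      list code), including the caveat above that realloc() may shrink in
      place and return the very same pointer:

          #include <stdlib.h>

          typedef struct replyblock {
              size_t size, used;            /* allocated vs actually used */
              char *buf;
          } replyblock;

          /* Called when a new deferred reply node makes this block the
           * previous one, so it can never be appended to again. */
          static void trim_reply_block(replyblock *b) {
              if (b->used >= b->size) return;       /* nothing to reclaim */
              char *p = realloc(b->buf, b->used ? b->used : 1);
              if (p) {                              /* p may equal b->buf */
                  b->buf = p;
                  b->size = b->used;
              }
          }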
  10. 01 May, 2020 2 commits
  11. 30 Apr, 2020 1 commit
  12. 27 Apr, 2020 2 commits
    • optimize memory usage of deferred replies · 8110ba88
      Oran Agra authored
      When a deferred reply is added, the previous reply node cannot be used,
      so all the extra space we allocated in it is wasted. In case someone
      uses deferred replies in a loop, each time adding a small reply, each
      of these reply nodes (the small string reply) would have consumed a
      16k block.
      Now, when we add another deferred reply node, we trim the unused
      portion of the previous reply block.
      
      see #7123
    • Keep track of meaningful replication offset in replicas too · e4d2bb62
      Oran Agra authored
      Now both the master and the replicas keep track of the last replication
      offset that contains meaningful data (ignoring the trailing PINGs), and
      both trim that tail from the replication backlog and from the offset
      they use when trying to PSYNC.

      The implication is that if a replica missed some PINGs, or even has
      PINGs in excess of those the promoted replica has, it'll still be able
      to PSYNC (avoiding a full sync).

      The downside (which was already committed) is that replicas running old
      code may fail to PSYNC, since the promoted replica trims PINGs from its
      backlog.

      This commit adds a test that reproduces several cases of promotions and
      demotions with stale and non-stale PINGs.
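
      A rough sketch of the bookkeeping this commit extends to replicas (the
      names here are illustrative, not the server's):

          #include <strings.h>

          typedef struct replstate {
              long long master_repl_offset;  /* offset after everything fed */
              long long meaningful_offset;   /* offset after last non-PING  */
          } replstate;

          /* Called for every command fed to the replication stream. */
          static void feed_repl_stream(replstate *st, const char *cmd,
                                       long long len) {
              st->master_repl_offset += len;
              if (strcasecmp(cmd, "PING") != 0)
                  st->meaningful_offset = st->master_repl_offset;
              /* On demotion, PSYNC asks from meaningful_offset, after the
               * trailing PING bytes have been trimmed from the backlog. */
          }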
      
      Background:
      The meaningful offset on the master was added recently to solve a
      problem where the master is left all alone, injecting PINGs into its
      backlog when no one is listening, and then gets demoted and tries to
      replicate from a replica that didn't have any of the PINGs (or at
      least not the last ones).
      
      However, consider this case: master A has two replicas (B and C)
      replicating directly from it. There's no traffic at all, and also no
      network issues, just many PINGs in the tail of the backlog. Now B gets
      promoted, A becomes a replica of B, and C remains a replica of A. When
      A gets demoted, it trims the PINGs from its backlog, and successfully
      replicates from B. However, C is still aware of these PINGs: when it
      disconnects and re-connects to A, it'll ask for something that's not
      in the backlog anymore (since A trimmed the tail of its backlog), and
      be forced to do a full sync (something it didn't have to do before the
      meaningful offset fix).
      
      Besides that, the psync2 test was randomly failing here and there; it
      turns out the reason was PINGs. Investigating it shows the following
      scenario:
      
      Cycle 1: redis #1 is master, and all the rest are direct replicas of #1.
      Cycle 2: redis #2 is promoted to master, #1 is a replica of #2, and #3
      is a replica of #1.
      Now we see that when #1 is demoted it prints:
      17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
      17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
      17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
      and when #3 connects to the demoted #1, #1 says:
      17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964
      
      So the issue here is that the meaningful offset feature saved the day
      for the demoted master (since it needs to sync from a replica that
      didn't get the last PING), but it didn't help one of the other
      replicas, which did get the last PING.
  13. 24 Apr, 2020 2 commits
    • e63bb7ec
    • Threaded IO: set thread name for redis-server · 3575b870
      zhenwei pi authored
      
      
      Set a thread name for each thread of redis-server; this helps us to
      monitor the utilization and optimise the performance.

      As suggested by Salvatore, the feature is implemented for multiple
      platforms. Currently Linux and BSD are supported; other OSes are
      ignored.
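
      A hedged sketch of the per-platform naming call (the helper name is
      illustrative; on Linux thread names are capped at 15 characters plus
      the terminating NUL):

          #define _GNU_SOURCE
          #include <pthread.h>
          #if defined(__FreeBSD__) || defined(__OpenBSD__)
          #include <pthread_np.h>
          #endif

          static void set_thread_title(const char *name) {
          #if defined(__linux__)
              pthread_setname_np(pthread_self(), name);  /* Linux variant */
          #elif defined(__FreeBSD__) || defined(__OpenBSD__)
              pthread_set_name_np(pthread_self(), name); /* BSD variant */
          #else
              (void)name;                                /* other OS: ignore */
          #endif
          }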
      
      An example on Linux:
       # top -d 5 -p `pidof redis-server ` -H
      
          PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
      3682671 root      20   0  227744   8248   3836 R 99.2  0.0   0:19.53 redis-server
      3682677 root      20   0  227744   8248   3836 S 26.4  0.0   0:04.15 io_thd_3
      3682675 root      20   0  227744   8248   3836 S 23.6  0.0   0:03.98 io_thd_1
      3682676 root      20   0  227744   8248   3836 S 23.6  0.0   0:03.97 io_thd_2
      3682672 root      20   0  227744   8248   3836 S  0.2  0.0   0:00.02 bio_close_file
      3682673 root      20   0  227744   8248   3836 S  0.2  0.0   0:00.02 bio_aof_fsync
      3682674 root      20   0  227744   8248   3836 S  0.0  0.0   0:00.00 bio_lazy_free
      3682678 root      20   0  227744   8248   3836 S  0.0  0.0   0:00.00 jemalloc_bg_thd
      3682682 root      20   0  227744   8248   3836 S  0.0  0.0   0:00.00 jemalloc_bg_thd
      3682683 root      20   0  227744   8248   3836 S  0.0  0.0   0:00.00 jemalloc_bg_thd
      3682684 root      20   0  227744   8248   3836 S  0.0  0.0   0:00.00 jemalloc_bg_thd
      3682685 root      20   0  227744   8248   3836 S  0.0  0.0   0:00.00 jemalloc_bg_thd
      3682687 root      20   0  227744   8248   3836 S  0.0  0.0   0:00.00 jemalloc_bg_thd
      
      Another example on FreeBSD-12.1:
        PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
       5212 root        100    0    48M  7280K CPU2     2   0:26  99.52% redis-server{redis-server}
       5212 root         38    0    48M  7280K umtxn    4   0:06  26.94% redis-server{io_thd_3}
       5212 root         36    0    48M  7280K umtxn    6   0:06  26.84% redis-server{io_thd_1}
       5212 root         39    0    48M  7280K umtxn    1   0:06  25.30% redis-server{io_thd_2}
       5212 root         20    0    48M  7280K uwait    3   0:00   0.00% redis-server{redis-server}
       5212 root         21    0    48M  7280K uwait    2   0:00   0.00% redis-server{bio_close_file}
       5212 root         21    0    48M  7280K uwait    3   0:00   0.00% redis-server{bio_aof_fsync}
       5212 root         21    0    48M  7280K uwait    0   0:00   0.00% redis-server{bio_lazy_free}
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
  14. 16 Apr, 2020 1 commit
  15. 15 Apr, 2020 1 commit
  16. 07 Apr, 2020 3 commits
  17. 31 Mar, 2020 2 commits
  18. 25 Mar, 2020 5 commits