1. 09 Jun, 2020 1 commit
  2. 08 Jun, 2020 3 commits
    • fix disconnectSlaves, to try to free each slave. · 12504105
      Oran Agra authored
      The recent change to that loop (iterating over the slaves rather than
      waiting for the list to become empty) was intended to avoid an endless
      loop in case some slave refused to be freed.

      However, the lookup of the first client remained, which would have
      caused the loop to retry the first slave again and again instead of
      moving on.
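      The shape of the fix, as a minimal self-contained C sketch (stand-in
      types and names, not the actual Redis list API or freeClient()):

          #include <stdio.h>
          #include <stdlib.h>

          /* Stand-in for a node in the slaves list; names are illustrative only. */
          typedef struct slave { struct slave *next; int can_free; } slave;

          /* A slave that "refuses" to be freed simply survives the call
           * (think: it will be freed asynchronously later). */
          static void free_slave_maybe(slave *s) { if (s->can_free) free(s); }

          /* Fixed shape of the loop: iterate over the list once, saving the
           * next pointer before freeing. The bug was re-fetching the first
           * element on every pass, so a slave that could not be freed was
           * retried forever instead of being skipped. */
          static void disconnect_slaves(slave *head) {
              while (head) {
                  slave *next = head->next;   /* remember next before freeing */
                  free_slave_maybe(head);
                  head = next;                /* always make progress */
              }
          }

          int main(void) {
              slave *s2 = malloc(sizeof(*s2)); *s2 = (slave){ NULL, 1 };
              slave *s1 = malloc(sizeof(*s1)); *s1 = (slave){ s2, 0 };  /* won't free */
              disconnect_slaves(s1);
              puts("loop terminated");  /* terminates even though s1 stayed alive */
              free(s1);
              return 0;
          }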
    • 2bb297b1
      Oran Agra authored
    • Avoid rejecting WATCH / UNWATCH, like MULTI/EXEC/DISCARD · 2fa077b0
      Oran Agra authored
      Much like MULTI/EXEC/DISCARD, WATCH and UNWATCH do not actually operate
      on the database or server state; they operate on the client state. The
      client may send them all in one long pipeline and check all the
      responses only at the end, so rejecting them may lead to a mismatch
      between the client state on the server and the one on the client end,
      and to executing the wrong commands (ones that were meant to be
      discarded).

      The watched keys are not actually stored in the client struct, but they
      are in fact part of the client state; for instance, they're not cleared
      or moved by SWAPDB or FLUSHDB.
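      A minimal sketch of the idea, assuming a processCommand-style gate;
      is_client_state_command() and should_reject() are hypothetical helpers,
      not Redis functions:

          #include <strings.h>

          /* Hypothetical helper: commands that only touch client state
           * (transaction bookkeeping, watched keys) must not be rejected by
           * server-condition checks, or a pipelined client ends up with a
           * different view of its own MULTI/WATCH state than the server. */
          static int is_client_state_command(const char *name) {
              return strcasecmp(name, "multi")   == 0 ||
                     strcasecmp(name, "exec")    == 0 ||
                     strcasecmp(name, "discard") == 0 ||
                     strcasecmp(name, "watch")   == 0 ||
                     strcasecmp(name, "unwatch") == 0;
          }

          /* Reject "real" commands while some server condition holds, but
           * always let the client-state commands through. */
          static int should_reject(const char *name, int rejecting_commands) {
              return rejecting_commands && !is_client_state_command(name);
          }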
  3. 07 Jun, 2020 1 commit
  4. 06 Jun, 2020 2 commits
  5. 03 Jun, 2020 1 commit
  6. 02 Jun, 2020 1 commit
  7. 29 May, 2020 1 commit
    • Fix handling of special chars in ACL LOAD. · 1f8ea99b
      antirez authored
      Now it is also possible for ACL SETUSER to accept empty strings
      as valid operations (doing nothing), so for instance
      
          ACL SETUSER myuser ""
      
      will just have the effect of creating the user in its default state.
      
      This should fix #7329.
  8. 28 May, 2020 1 commit
  9. 27 May, 2020 5 commits
    • Drop useless line from replicationCacheMaster(). · 484af8ed
      antirez authored
    • Fix TLS certificate loading for chained certificates. · 151b12a8
      Kevin Fwu authored
      This impacts client verification for chained certificates (such as
      Let's Encrypt certificates): client verification requires the full
      chain in order to properly verify the certificate.
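      For context, the relevant OpenSSL distinction is roughly the following:
      SSL_CTX_use_certificate_chain_file() loads the leaf plus its
      intermediates from a single PEM file, while SSL_CTX_use_certificate_file()
      loads only the leaf. An illustrative sketch (not the actual tls.c code):

          #include <stdio.h>
          #include <openssl/ssl.h>
          #include <openssl/err.h>

          /* Load a PEM file containing the leaf certificate followed by its
           * intermediates (e.g. a Let's Encrypt fullchain.pem), so the whole
           * chain is sent during the handshake and the peer can verify it. */
          int load_cert_chain(SSL_CTX *ctx, const char *cert, const char *key) {
              if (SSL_CTX_use_certificate_chain_file(ctx, cert) <= 0) {
                  ERR_print_errors_fp(stderr);
                  return -1;
              }
              if (SSL_CTX_use_PrivateKey_file(ctx, key, SSL_FILETYPE_PEM) <= 0) {
                  ERR_print_errors_fp(stderr);
                  return -1;
              }
              if (!SSL_CTX_check_private_key(ctx)) {
                  ERR_print_errors_fp(stderr);
                  return -1;
              }
              return 0;
          }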
    • Remove the meaningful offset feature. · 22472fe5
      antirez authored
      After a closer look, the Redis core developers all believe that this
      feature was too fragile: it caused many bugs that we didn't expect and
      that were very hard to track. Better to find an alternative solution
      that is simpler.
    • Set a protocol error if the master uses the inline protocol. · 325409a0
      antirez authored
      We want to react a bit more aggressively if we sense that the master is
      sending us a corrupted stream. By setting the protocol error we both
      ensure that the replica will disconnect, and avoid caching the master,
      so that a full SYNC will be required. This is protective against
      replication bugs.
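      A minimal sketch of the intent (stand-in types and flag names, not the
      actual networking.c code):

          /* Illustrative stand-ins; the real code lives in networking.c and
           * uses Redis' own client struct and flag names. */
          typedef struct client { int flags; } client;

          #define CLIENT_MASTER          (1 << 0)
          #define CLIENT_PROTOCOL_ERROR  (1 << 1)   /* hypothetical flag name */

          /* While parsing an inline (space-separated) command, notice that the
           * peer is our master. Masters speak RESP only, so an inline command
           * means the stream is corrupted; marking a protocol error both drops
           * the link and prevents caching the master, forcing a full SYNC
           * instead of trusting a broken partial stream. */
          static int handle_inline_from_master(client *c) {
              if (c->flags & CLIENT_MASTER) {
                  c->flags |= CLIENT_PROTOCOL_ERROR;
                  return -1;  /* caller aborts parsing and closes the connection */
              }
              return 0;
          }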
    • fix clusters mixing accidentally by gossip · 3984dc65
      Liu Zhen authored
      `clusterStartHandshake` will start a handshake and eventually send a
      CLUSTER MEET message, which is strictly prohibited by the Redis Cluster
      specification: only the system administrator can initiate a CLUSTER
      MEET. Further, according to the spec, only node IDs can be trusted,
      not IP/port pairs.
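      One plausible shape of such a guard, using hypothetical names (the real
      logic lives in cluster.c around clusterStartHandshake()); the only point
      illustrated is that gossip from an untrusted source must not start a
      handshake by itself:

          #include <stddef.h>

          /* Illustrative stand-ins; not the real cluster structures. */
          typedef struct cluster_node { char node_id[41]; } cluster_node;
          typedef struct gossip_entry { char node_id[41]; char ip[46]; int port; } gossip_entry;

          static cluster_node *lookup_node(const char *id) { (void)id; return NULL; }
          static void start_handshake(const char *ip, int port) { (void)ip; (void)port; }

          /* Only consider handshaking with a node learned from gossip when the
           * sender of that gossip is itself a node we already know: gossip from
           * an unknown source must not be able to pull a foreign cluster in.
           * MEET remains an administrator action, and only node IDs (never bare
           * IP/port pairs) are treated as trustworthy identities. */
          static void process_gossip_entry(cluster_node *sender, const gossip_entry *g) {
              if (lookup_node(g->node_id) != NULL) return;  /* already known */
              if (sender == NULL) return;                   /* untrusted source */
              start_handshake(g->ip, g->port);
          }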
  10. 26 May, 2020 2 commits
  11. 25 May, 2020 2 commits
  12. 22 May, 2020 3 commits
    • Make disconnectSlaves() synchronous in the base case. · adc5df1b
      antirez authored
      Otherwise we run into the following crash:
      
      Backtrace:
      src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
      src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
      /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
      src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
      src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
      src/redis-server 127.0.0.1:21322[0x4552d4]
      src/redis-server 127.0.0.1:21322[0x4c5593]
      src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
      src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
      src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
      /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
      src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]
      
      In certain replication paths we disconnect all the replicas and free
      the replication backlog, and the code that frees the replication
      backlog expects that no replica is connected.
      
      However we still need to free the replicas asynchronously in certain
      cases, as documented in the top comment of disconnectSlaves().
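      The invariant behind that assertion looks roughly like this
      (illustrative stand-ins, not the real freeReplicationBacklog()):

          #include <assert.h>
          #include <stdlib.h>

          /* Illustrative stand-in for the replication state. */
          struct repl_state {
              int connected_replicas;
              char *backlog;
          };

          /* The backlog may only be released once no replica is attached,
           * because attached replicas may still be fed from it; this is the
           * assertion that fired in the backtrace above. */
          static void free_replication_backlog(struct repl_state *r) {
              assert(r->connected_replicas == 0);
              free(r->backlog);
              r->backlog = NULL;
          }

          /* Hence the path that is about to drop the backlog must disconnect
           * replicas synchronously, not merely schedule them for async close. */
          static void drop_backlog(struct repl_state *r) {
              r->connected_replicas = 0;   /* synchronous disconnect first */
              free_replication_backlog(r);
          }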
    • Fix #7306 less aggressively. · b407590c
      antirez authored
      Citing from the issue:
      
      btw I suggest we change this fix to something else:
      * We revert the fix.
      * We add a call that disconnects chained replicas in the place where we trim the replica (that is a master in this case) offset.
      This way we can avoid disconnections when there is no trimming of the backlog.
      
      Note that we now want to disconnect replicas asynchronously in
      disconnectSlaves(), because it's in general safer now that we can call
      it from freeClient(). Otherwise for instance the command:
      
          CLIENT KILL TYPE master
      
      may crash: clientCommand() starts scanning the linked list of clients,
      looking for clients to kill. However it finds the master, kills it by
      calling freeClient(), but this in turn calls replicationCacheMaster(),
      which may now also call disconnectSlaves(). So the linked list iterator
      used by clientCommand() would no longer be valid.
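      A minimal sketch of why the asynchronous variant is safe here
      (illustrative stand-ins for the clients list and freeClientAsync(), not
      the real code):

          #include <stddef.h>

          /* Illustrative singly linked list of clients; names stand in for
           * Redis' server.clients list and freeClient()/freeClientAsync(). */
          typedef struct client {
              struct client *next;
              int close_asap;   /* marked to be freed later by a safe sweep */
              int is_master;
          } client;

          /* Async "free": just mark the client; an outer loop (beforeSleep in
           * Redis) frees marked clients when nobody is iterating the list. */
          static void free_client_async(client *c) { c->close_asap = 1; }

          /* The dangerous pattern described above: while walking the clients
           * list, freeing a client in place may (indirectly, via
           * replicationCacheMaster -> disconnectSlaves) free *other* nodes of
           * the same list, invalidating the iterator. Marking instead of
           * freeing keeps the iteration valid. */
          static void client_kill_type_master(client *head) {
              for (client *c = head; c != NULL; c = c->next) {
                  if (c->is_master) free_client_async(c);  /* list untouched */
              }
          }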
    • Disconnect chained replicas when the replica performs PSYNC with the master... · 42f5da5d
      Qu Chen authored
      Always disconnect chained replicas when the replica performs PSYNC with
      the master, to avoid a replication offset mismatch between the master
      and the chained replicas.
  13. 21 May, 2020 6 commits
  14. 20 May, 2020 2 commits
    • fix a rare active defrag edge case bug leading to stagnation · 88d71f47
      Oran Agra authored
      There's a rare case which leads to stagnation in the defragger, causing
      it to keep scanning the keyspace and do nothing (not moving any
      allocation). This happens when all the allocator slabs of a certain bin
      have the same % utilization, but the slab from which new allocations
      are made has a lower utilization.

      This commit fixes it by removing the current slab from the overall
      average utilization of the bin, eliminating any precision loss in the
      utilization calculation, and moving the defrag decision inside
      jemalloc.

      It also adds a test that consistently reproduces this issue.
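      A toy numeric sketch of the stagnation (made-up utilization numbers;
      the real decision lives in the jemalloc defrag hint):

          #include <stdio.h>

          /* Say a bin has 4 slabs: three at 50% utilization and the slab new
           * allocations are served from at 30%. */
          int main(void) {
              double util[] = { 0.30, 0.50, 0.50, 0.50 };
              int current = 0;   /* slab new allocations are served from */
              int n = 4;

              /* Average over all slabs, including the current one. */
              double avg_all = 0;
              for (int i = 0; i < n; i++) avg_all += util[i];
              avg_all /= n;                  /* 0.45 */

              /* Average excluding the current slab, as the fix does. */
              double avg_rest = 0;
              for (int i = 0; i < n; i++) if (i != current) avg_rest += util[i];
              avg_rest /= (n - 1);           /* 0.50 */

              /* With the low-utilization current slab included, the bin
               * average is dragged down to 0.45, so none of the 50% slabs
               * ever looks worth defragmenting and the scan keeps doing
               * nothing; excluding the current slab gives 0.50 and restores
               * a meaningful comparison. */
              printf("avg including current slab: %.2f\n", avg_all);
              printf("avg excluding current slab: %.2f\n", avg_rest);
              return 0;
          }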
    • improve DEBUG MALLCTL to be able to write to write-only fields. · 5d83e9e1
      Oran Agra authored
      Also supports:
        debug mallctl-str thread.tcache.flush VOID
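      For reference, driving a write-only jemalloc control from C looks
      roughly like this (depending on the build the symbol may be je_mallctl,
      and the program must be linked against jemalloc):

          #include <stdio.h>
          #include <jemalloc/jemalloc.h>

          /* "thread.tcache.flush" returns nothing (VOID) and here takes no
           * input either, so both the old-value and new-value pointers are
           * NULL; this mirrors the DEBUG MALLCTL-STR example above. */
          int main(void) {
              int ret = mallctl("thread.tcache.flush", NULL, NULL, NULL, 0);
              if (ret != 0) {
                  fprintf(stderr, "mallctl failed: %d\n", ret);
                  return 1;
              }
              puts("thread tcache flushed");
              return 0;
          }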
  15. 19 May, 2020 1 commit
  16. 18 May, 2020 2 commits
    • fix clearing of the USER_FLAG_ALLCOMMANDS flag in ACL · edc1f7b1
      hujie authored
      In ACLSetUserCommandBit, when the command bit overflows, no operation
      is performed, so there is no need to clear the USER_FLAG_ALLCOMMANDS
      flag.

      In ACLSetUser, when adding a subcommand, we don't need to call
      ACLGetCommandID ahead of time since the subcommand may be empty.
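      A minimal sketch of the first point (illustrative constants and struct,
      not the real ACL code): when the command id does not fit in the bitmap,
      nothing is modified, so there is nothing for the caller to undo:

          #include <stddef.h>
          #include <stdint.h>

          #define COMMAND_BITS_COUNT 1024
          #define FLAG_ALLCOMMANDS   (1 << 0)

          typedef struct user {
              uint64_t allowed[COMMAND_BITS_COUNT / 64];  /* per-command bitmap */
              int flags;
          } user;

          /* Refuse out-of-range command ids without touching the user at all,
           * so the "all commands" flag does not need to be cleared on this path. */
          static int set_user_command_bit(user *u, unsigned long id, int value) {
              if (id >= COMMAND_BITS_COUNT) return -1;  /* overflow: no operation */
              size_t word = id / 64;
              uint64_t bit = 1ULL << (id % 64);
              if (value) u->allowed[word] |= bit;
              else       u->allowed[word] &= ~bit;
              return 0;
          }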
    • Redis Benchmark: generate random test data · abff2640
      ShooterIT authored
      The function that generates random data was designed by antirez. See #7196.
  17. 16 May, 2020 1 commit
    • Remove the client from CLOSE_ASAP list before caching the master. · 624742d9
      antirez authored
      This was broken in 1a7cd2c0: we identified a crash in the CI. What was
      happening before the fix was roughly the following:
      
      1. The client gets into the async free list.
      2. However freeClient() gets called again against the same client,
         which is a master.
      3. The client arrives in freeClient() with the CLOSE_ASAP flag set.
      4. The master gets cached, but NOT removed from the CLOSE_ASAP linked
         list.
      5. The cached master client is then immediately freed, since it is
         still in the list.
      6. Redis accesses a freed cached master.
      
      This is what the crash looked like:
      
      === REDIS BUG REPORT START: Cut & paste starting from here ===
      1092:S 16 May 2020 11:44:09.731 # Redis 999.999.999 crashed by signal: 11
      1092:S 16 May 2020 11:44:09.731 # Crashed running the instruction at: 0x447e18
      1092:S 16 May 2020 11:44:09.731 # Accessing address: 0xffffffffffffffff
      1092:S 16 May 2020 11:44:09.731 # Failed assertion:  (:0)
      
      ------ STACK TRACE ------
      EIP:
      src/redis-server 127.0.0.1:21300(readQueryFromClient+0x48)[0x447e18]
      
      And the 0xffff address access likely comes from accessing an SDS that is
      set to NULL (we go -1 offset to read the header).
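      A minimal sketch of the fix's idea (stand-in list and flag names, not
      Redis' real server.clients_to_close handling): before caching the
      master, pull it out of the pending-close list and clear the flag, so
      the async close sweep cannot free it:

          #include <string.h>

          #define CLIENT_CLOSE_ASAP (1 << 0)

          typedef struct client { int flags; } client;

          /* Illustrative fixed-size pending-close list. */
          typedef struct to_close_list {
              client *items[16];
              int len;
          } to_close_list;

          static void list_remove(to_close_list *l, client *c) {
              for (int i = 0; i < l->len; i++) {
                  if (l->items[i] == c) {
                      memmove(&l->items[i], &l->items[i + 1],
                              (l->len - i - 1) * sizeof(client *));
                      l->len--;
                      return;
                  }
              }
          }

          /* Caching a master that is still queued for async close is the bug
           * in the scenario above: the sweep that processes the list frees it
           * and the cached pointer dangles. So unqueue it and clear the flag
           * before caching. */
          static void cache_master(to_close_list *pending, client **cached, client *m) {
              if (m->flags & CLIENT_CLOSE_ASAP) {
                  list_remove(pending, m);
                  m->flags &= ~CLIENT_CLOSE_ASAP;
              }
              *cached = m;
          }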
  18. 15 May, 2020 1 commit
    • Cache master without checking deferred close flags. · 1a7cd2c0
      antirez authored
      The context is issue #7205: since the introduction of threaded I/O we
      close clients asynchronously by default from readQueryFromClient(). So
      we should no longer prevent the caching of the master client, for a
      later incremental PSYNC, if such flags are set. However we also don't
      want the master client to be cached with such flags set (it would be
      closed immediately after being restored). And yet we want a way to
      understand whether a master was closed because of a protocol error, and
      in that case prevent the caching.
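      A minimal sketch of the policy described here (hypothetical flag names,
      not the actual replicationCacheMaster()):

          #define CLIENT_CLOSE_ASAP        (1 << 0)
          #define CLIENT_CLOSE_AFTER_REPLY (1 << 1)
          #define CLIENT_PROTOCOL_ERROR    (1 << 2)

          typedef struct client { int flags; } client;

          /* A master queued for deferred close (normal with threaded I/O) may
           * still be cached for a later partial resync, but the deferred-close
           * flags must be stripped first or the cached master would be closed
           * right after being restored. A protocol error, however, still
           * disqualifies the master from being cached at all. */
          static int should_cache_master(client *master) {
              if (master->flags & CLIENT_PROTOCOL_ERROR) return 0;  /* corrupt stream */
              master->flags &= ~(CLIENT_CLOSE_ASAP | CLIENT_CLOSE_AFTER_REPLY);
              return 1;
          }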
  19. 14 May, 2020 4 commits