1. 12 Jun, 2018 1 commit
  2. 11 Jun, 2018 2 commits
    • Use a less aggressive query buffer resize policy. · cec404f0
      antirez authored
      A user with many connections (10 thousand) on a single Redis server
      reports in issue #4983 that sometimes Redis is idle because, with the
      old policy, many clients need to resize their query buffer at the same
      time.
      
      It looks like this is caused by the fact that we normally allow the
      query buffer to grow without problems up to PROTO_MBULK_BIG_ARG, but as
      soon as the client is idle we become much stricter: a query buffer
      greater than 1024 bytes is already enough to trigger the resize. So,
      for instance, if most of the clients stop sending commands at the same
      time, this issue is easily triggered.
      
      This behavior looks odd: there should be a single clear limit above
      which we say, let's look at this query buffer and check whether it's
      time to resize it. This commit puts that limit at PROTO_MBULK_BIG_ARG,
      and above it the check is performed either when the current usage is
      too big compared to the peak usage, or when the client is idle.
      
      Then, when the check is performed, wasting just a few kbytes is
      considered enough to proceed with the resize. This should fix the
      issue; a sketch of the new check follows.
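
      A minimal sketch of the policy described above, assuming a hypothetical
      helper name and thresholds (the 2 second idle time and the 4 KB "few
      kbytes" figure are assumptions of the sketch, while PROTO_MBULK_BIG_ARG
      is the real 32 KB Redis constant); it is not the actual networking.c
      code:

        #include <stddef.h>
        #include <time.h>

        #define PROTO_MBULK_BIG_ARG (1024*32)  /* Real Redis constant: 32 KB. */

        /* Return 1 if the client's query buffer should be shrunk. */
        int querybufNeedsResize(size_t querybuf_size, size_t querybuf_peak,
                                size_t querybuf_avail, time_t idletime)
        {
            /* Below the limit the buffer is never touched anymore. */
            if (querybuf_size <= PROTO_MBULK_BIG_ARG) return 0;

            /* Above the limit, consider resizing when the current size is
             * too big compared to the recent peak, or the client is idle. */
            if (querybuf_size / (querybuf_peak+1) > 2 || idletime > 2) {
                /* Wasting just a few kbytes is already enough to proceed. */
                return querybuf_avail > 1024*4;
            }
            return 0;
        }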
    • Fix client unblocking for XREADGROUP, issue #4978. · 34bd4418
      antirez authored
      We unblocked the client too early, so by the time XCLAIM was propagated
      in streamPropagateXCLAIM() the group name object in client->bpop was no
      longer valid, and the function dereferenced a field already set to NULL
      (see the sketch below).
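
      A minimal sketch of the ordering issue, using simplified stand-ins for
      the real Redis structures and helpers (the names below are hypothetical,
      not the actual t_stream.c / blocked.c code):

        #include <stdio.h>

        typedef struct client {
            struct {
                char *xread_group;  /* Group name; cleared on unblock. */
            } bpop;
        } client;

        /* May end up calling streamPropagateXCLAIM(), which needs c->bpop. */
        static void deliverPendingStreamEntries(client *c) {
            printf("propagating XCLAIM for group %s\n", c->bpop.xread_group);
        }

        /* Clears the blocking state, like unblockClient() does. */
        static void doUnblockClient(client *c) {
            c->bpop.xread_group = NULL;
        }

        static void onStreamDataArrived(client *c) {
            /* Buggy order (the cause of issue #4978):
             *   doUnblockClient(c); deliverPendingStreamEntries(c);
             * would dereference the already cleared group field.
             * Fixed order: serve and propagate while c->bpop is still
             * valid, then unblock. */
            deliverPendingStreamEntries(c);
            doUnblockClient(c);
        }

        int main(void) {
            client c = { .bpop = { .xread_group = "mygroup" } };
            onStreamDataArrived(&c);
            return 0;
        }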
  3. 10 Jun, 2018 3 commits
  4. 09 Jun, 2018 1 commit
  5. 08 Jun, 2018 1 commit
  6. 07 Jun, 2018 5 commits
  7. 06 Jun, 2018 2 commits
  8. 05 Jun, 2018 2 commits
  9. 04 Jun, 2018 3 commits
    • XGROUP SETID implemented + consumer groups core fixes. · 36b392a0
      antirez authored
      Now that we have SETID, the internals of consumer groups should be able
      to handle the case of the same message being delivered multiple times
      just as a side effect of calling XREADGROUP. Normally this should never
      happen, but if the admin manually runs "XGROUP SETID mykey mygroup 0",
      messages will get re-delivered to clients waiting for the ">" special
      ID. The consumer group internals were not able to handle a message
      re-delivered in these circumstances while already assigned to another
      owner (see the sketch below).
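
      A minimal sketch of the re-delivery handling described above, using a
      toy array-based pending entries list (PEL) instead of the real
      rax-based structures in t_stream.c; all names are illustrative:

        #include <stdio.h>
        #include <string.h>

        typedef struct consumer { const char *name; } consumer;

        typedef struct pendingEntry {
            char id[32];              /* Stream ID, e.g. "1526569495631-0". */
            consumer *owner;          /* Consumer the entry is assigned to. */
            unsigned long delivery_count;
        } pendingEntry;

        typedef struct group {
            pendingEntry pel[16];     /* Toy group-level PEL. */
            int numpel;
        } group;

        static pendingEntry *pelLookup(group *g, const char *id) {
            for (int j = 0; j < g->numpel; j++)
                if (strcmp(g->pel[j].id, id) == 0) return &g->pel[j];
            return NULL;
        }

        /* Called when XREADGROUP with the ">" ID delivers `id` to `c`. After
         * "XGROUP SETID ... 0" the same ID can be served again while it is
         * already pending for another consumer: ownership is transferred
         * instead of treating it as a brand new entry. */
        static void deliverToConsumer(group *g, const char *id, consumer *c) {
            pendingEntry *pe = pelLookup(g, id);
            if (pe == NULL) {            /* Fresh delivery: create the entry. */
                pe = &g->pel[g->numpel++];
                snprintf(pe->id, sizeof(pe->id), "%s", id);
                pe->delivery_count = 0;
            } else if (pe->owner != c) { /* Re-delivered, owned by another. */
                printf("moving %s from %s to %s\n", id, pe->owner->name, c->name);
            }
            pe->owner = c;
            pe->delivery_count++;
        }

        int main(void) {
            group g = {0};
            consumer alice = {"alice"}, bob = {"bob"};
            deliverToConsumer(&g, "1526569495631-0", &alice);
            /* Simulate "XGROUP SETID mykey mygroup 0" followed by Bob
             * reading with ">": the same entry is delivered again. */
            deliverToConsumer(&g, "1526569495631-0", &bob);
            return 0;
        }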
    • Rax library updated. · 05a29966
      antirez authored
    • XGROUP DESTROY implemented. · 7c6f1be5
      antirez authored
  10. 03 Jun, 2018 2 commits
  11. 01 Jun, 2018 1 commit
  12. 31 May, 2018 3 commits
  13. 30 May, 2018 1 commit
  14. 29 May, 2018 2 commits
    • Don't expire keys while loading RDB from AOF preamble. · 49147f36
      antirez authored
      The AOF tail of a combined RDB+AOF is based on the premise of applying
      the AOF commands to the exact state that was in the server while the
      RDB was persisted. By expiring keys while loading the RDB file, we
      change that state, so applying the AOF tail later may lead to an
      inconsistent final state.
      
      Test case:
      
      * Time1: SET a 10
      * Time2: EXPIREAT a $time5
      * Time3: INCR a
      * Time4: PERSIST a. Start BGREWRITEAOF with RDB preamble. The value of a is 11 without expire time.
      * Time5: Restart Redis from the RDB+AOF: consistency violation.
      
      Thanks to @soloestoy for providing the patch.
      Thanks to @trevor211 for the original issue report and the initial fix.
      
      Check issue #4950 for more info.
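
      A minimal sketch of the idea, not the actual rdb.c code; the helper and
      parameter names are assumptions, but the decision mirrors the
      description above:

        /* Decide whether a key read from an RDB stream should be discarded
         * as already expired while loading. */
        int dropExpiredKeyWhileLoading(long long expire_ms, long long now_ms,
                                       int is_master, int loading_aof_preamble)
        {
            if (expire_ms == -1 || expire_ms >= now_ms) return 0; /* Not expired. */

            /* Replicas keep expired keys (the master drives deletion), and
             * with this fix the AOF preamble does too, so that the AOF tail
             * is applied to exactly the state the RDB captured. */
            if (!is_master || loading_aof_preamble) return 0;

            return 1; /* Plain RDB load on a master: safe to discard the key. */
        }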
    • Fix rdb save by allowing dumping of expire keys, so that when · 2a887bd5
      WuYunlong authored
      we add a new slave and do a failover, whether manual or not, other
      local slaves will delete the expired keys properly.
  15. 27 May, 2018 1 commit
  16. 25 May, 2018 8 commits
  17. 24 May, 2018 1 commit
  18. 23 May, 2018 1 commit
    • Sentinel: fix delay in detecting ODOWN. · 8631e647
      antirez authored
      See issue #2819 for details. The gist is that when we want to send an
      INFO command because we are past the scheduled time, we used to send
      only the INFO command, no longer sending PING commands. However, if a
      master fails exactly when we are about to send an INFO command, the
      pending PING time will be zero because the PONG reply was already
      received, and we'll fail to send more PINGs since we keep trying to
      send only INFO commands: the failure detector will be delayed until the
      connection is closed and re-opened because of the "long timeout".
      
      This commit changes the logic so that we can send the three kinds of
      messages regardless of whether another one was already sent in the same
      code path. It could happen that we go over the message limit for the
      link by a few messages, but this is not significant. However, we no
      longer introduce delays in sending commands just because there was
      something else to send at the same time (see the sketch below).
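
      A minimal sketch of the new behaviour with simplified names (the real
      logic lives in sentinelSendPeriodicCommands() in sentinel.c); each of
      the three message types is evaluated on its own, so a due INFO no
      longer suppresses a due PING or hello PUBLISH:

        #include <stdint.h>

        typedef int64_t mstime_t;

        /* Illustrative per-instance state: when each message was last sent
         * and how often it should be sent. */
        typedef struct instance {
            mstime_t last_info_time, info_period;
            mstime_t last_ping_time, ping_period;
            mstime_t last_pub_time,  pub_period;
        } instance;

        static void sendInfo(instance *ri)  { (void)ri; /* send INFO */ }
        static void sendPing(instance *ri)  { (void)ri; /* send PING */ }
        static void sendHello(instance *ri) { (void)ri; /* publish hello */ }

        /* Previously, sending one kind of message in this code path could
         * prevent the others from being sent in the same iteration; now
         * every type is checked independently. */
        static void sendPeriodicCommands(instance *ri, mstime_t now) {
            if (now - ri->last_info_time > ri->info_period) sendInfo(ri);
            if (now - ri->last_ping_time > ri->ping_period) sendPing(ri);
            if (now - ri->last_pub_time  > ri->pub_period)  sendHello(ri);
        }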