1. 11 Mar, 2016 1 commit
  2. 26 Jan, 2016 1 commit
  3. 18 Dec, 2015 1 commit
  4. 17 Dec, 2015 1 commit
    • Fix a race that may lead to the active (slave) client being freed. · d999f5a6
      antirez authored
      In issue #2948 a crash was reported in processCommand(). Later Oran Agra
      (@oranagra) traced the bug (in private chat) to the following sequence
      of events:
      
      1. Some maxmemory is set.
      2. The slave is the currently active client and is executing PING or
         REPLCONF or whatever a slave can send to its master.
      3. freeMemoryIfNeeded() is called since maxmemory is set.
      4. flushSlavesOutputBuffers() is called by freeMemoryIfNeeded().
      5. During the flush of the slaves' output buffers, a write error could
         be encountered in writeToClient() or sendReplyToClient(), depending
         on the version of Redis. This triggers freeClient() against the
         currently active client, so a segmentation fault will likely happen
         in processCommand() immediately after the call to
         freeMemoryIfNeeded().
      
      There are different possible fixes:
      
      1. Add flags to writeToClient() (recent versions code base) so that
         we can ignore the write errors, and use this flag in
         flushSlavesOutputBuffers(). However this is not simple to do in older
         versions of Redis.
      2. Use freeClientAsync() during write errors. This works, but changes
         the current behavior of releasing clients ASAP when possible.
         Normally we write to clients during normal event loop processing,
         in the writable event handler, where there is no active client, so
         no special care is needed.
      3. The fix of this commit: detect that the current client is no longer
         valid. This fix is a bit "ad hoc", but works across all the
         versions and has the advantage of not changing the remaining
         behavior: hopefully it only alters what happens during this race
         condition (see the sketch below).
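
      A minimal sketch of option 3, assuming a global server.current_client
      pointer that tracks the client being served, as in the Redis code base
      (details simplified; this is not the literal patch):

          /* In freeClient(): if the client being freed is the one currently
           * being served, clear the global reference so that callers can
           * detect that the client is gone. */
          void freeClient(client *c) {
              if (server.current_client == c) server.current_client = NULL;
              /* ... release buffers, unlink from lists, free the struct ... */
          }

          /* In processCommand(): freeMemoryIfNeeded() may flush the slaves'
           * output buffers and free clients on write errors, so check that
           * the client being served survived before touching it again. */
          if (server.maxmemory) {
              int retval = freeMemoryIfNeeded();
              /* Bail out instead of dereferencing freed memory. */
              if (server.current_client == NULL) return C_ERR;
              if (retval == C_ERR) {
                  /* ... reply with an OOM error for write commands ... */
              }
          }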
  5. 13 Dec, 2015 1 commit
  6. 27 Nov, 2015 1 commit
    • Handle wait3() errors. · 3626699f
      antirez authored
      My guess was that wait3() with WNOHANG could never return -1 with an
      error. However issue #2897 may indicate that this can happen under
      unclear conditions. While we try to understand this better, it is
      better to handle a return value of -1 explicitly: otherwise, when an
      AOF background rewrite is in progress and wait3() returns -1, the
      first branch of the if/else block matches, since server.rdb_child_pid
      is -1, and backgroundSaveDoneHandler() is called without a good
      reason, which in turn crashes the Redis server with an assertion
      (see the sketch below).
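
      A hedged sketch of the child-reaping logic in serverCron() with the
      explicit -1 check added (names follow the Redis code base; the log
      message is an assumption):

          #include <errno.h>
          #include <string.h>
          #include <sys/wait.h>

          int statloc;
          pid_t pid = wait3(&statloc, WNOHANG, NULL);

          if (pid == -1) {
              /* Unexpected: log and do nothing this cycle, so that pid == -1
               * cannot accidentally match server.rdb_child_pid == -1 below. */
              serverLog(LL_WARNING, "wait3() returned an error: %s",
                  strerror(errno));
          } else if (pid != 0) {
              int exitcode = WEXITSTATUS(statloc);
              int bysignal = WIFSIGNALED(statloc) ? WTERMSIG(statloc) : 0;
              if (pid == server.rdb_child_pid)
                  backgroundSaveDoneHandler(exitcode, bysignal);
              else if (pid == server.aof_child_pid)
                  backgroundRewriteDoneHandler(exitcode, bysignal);
          }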
  7. 17 Nov, 2015 2 commits
    • Remove "s" flag for MIGRATE in command table. · 8e491b17
      antirez authored
      There may be legitimate use cases for MIGRATE inside Lua scripts, at
      least for now. Once the command is executed in an asynchronous fashion
      (as planned), it is possible we'll no longer be able to permit it from
      within Lua scripts.
    • Fix MIGRATE entry in command table. · d4f55990
      antirez authored
      Thanks to Oran Agra (@oranagra) for reporting. Key extraction would
      not work otherwise, and it does not make sense to keep wrong data in
      the command table. A sketch of the corrected entry follows below.
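
      A hedged sketch of what the corrected command-table entry looks like;
      the layout follows the redisCommand table, but the exact fields shown
      here are illustrative assumptions rather than the literal patch:

          /* name, handler, arity, flags, ...,
           * first_key, last_key, key_step, ... */
          {"migrate", migrateCommand, -6, "w", 0, NULL, 3, 3, 1, 0, 0},

      With first_key = last_key = 3 and key_step = 1, key extraction picks
      argv[3], matching MIGRATE host port key destination-db timeout.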
  8. 09 Nov, 2015 1 commit
  9. 20 Aug, 2015 1 commit
  10. 17 Jul, 2015 2 commits
  11. 16 Jul, 2015 2 commits
    • Client timeout handling improved. · b029ff11
      antirez authored
      The previous attempt to process each client at least once every ten
      seconds was not a good idea, because:
      
      1. Usually, because the minimum number of iterations was set to 50,
      you got a much better processing period most of the time.
      
      2. However, with many clients and a normal setting for server.hz, the
      edge case was triggered, and waiting 10 seconds for a BLPOP that asked
      for 1 second is not ok.
      
      3. Moreover, because of the high min-iterations limit of 50, when HZ
      was set to a high value, the actual behavior was to process a lot of
      clients per second.
      
      Also, the function checking for timeouts called gettimeofday() at each
      iteration, which can be costly.
      
      The new implementation tries to process each client once per second,
      receives the current time as an argument, and does not attempt to
      process more than 5 clients per iteration if not needed (see the
      sketch below).
      
      So now:
      
      1. The CPU usage of an idle Redis process is the same or better.
      2. The CPU usage of a busy Redis process is the same or better.
      3. However a non-trivial amount of work may be performed per iteration
      when there are many, many clients. In this particular case the user
      may want to raise the "HZ" value if needed.
      
      Btw, with 4000 clients it was still not possible to notice any actual
      latency created by processing 400 clients per second, since the work
      performed for each client is pretty small.
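
      A hedged sketch of the scheduling idea, following the clientsCron()
      pattern in this code base (the constant name and exact bookkeeping are
      assumptions):

          #define CLIENTS_CRON_MIN_ITERATIONS 5

          void clientsCron(void) {
              /* Target: visit every client once per second. With server.hz
               * calls per second, handle numclients/server.hz clients per
               * call, but never more than CLIENTS_CRON_MIN_ITERATIONS
               * unless needed to keep the once-per-second guarantee. */
              int numclients = listLength(server.clients);
              int iterations = numclients / server.hz;
              mstime_t now = mstime(); /* sampled once, not per client */

              if (iterations < CLIENTS_CRON_MIN_ITERATIONS)
                  iterations = (numclients < CLIENTS_CRON_MIN_ITERATIONS) ?
                               numclients : CLIENTS_CRON_MIN_ITERATIONS;

              while (listLength(server.clients) && iterations--) {
                  /* Rotate the list so every client is eventually visited,
                   * and pass the pre-sampled time to the timeout check. */
                  listRotate(server.clients);
                  listNode *head = listFirst(server.clients);
                  client *c = listNodeValue(head);
                  if (clientsCronHandleTimeout(c, now)) continue;
                  clientsCronResizeQueryBuffer(c);
              }
          }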
    • Clarify a comment in clientsCron(). · 5dcba26b
      antirez authored
  12. 13 Jul, 2015 1 commit
    • EXISTS is now variadic. · 7ae1d4d6
      antirez authored
      The new return value is the number of existing keys among the ones
      specified on the command line, counting the same key multiple times
      if it is given multiple times (and exists).
      
      See PR #2667.
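
      A minimal sketch of a variadic EXISTS along these lines, assuming the
      usual lookupKeyRead() and addReplyLongLong() helpers from this code
      base (not the literal patch):

          void existsCommand(client *c) {
              long long count = 0;
              int j;

              /* Count every argument that resolves to an existing key; a
               * key repeated N times contributes N to the total. */
              for (j = 1; j < c->argc; j++) {
                  if (lookupKeyRead(c->db, c->argv[j]) != NULL) count++;
              }
              addReplyLongLong(c, count);
          }

      For example, with only somekey set, EXISTS somekey somekey nosuchkey
      replies 2.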
  13. 24 Mar, 2015 1 commit
    • Cluster: redirection refactoring + handling of blocked clients. · 3468cd36
      antirez authored
      There was a bug in Redis Cluster caused by clients blocked in a
      blocking list pop operation on keys no longer handled by the instance,
      or in a condition where the cluster became down after the client
      blocked.
      
      A typical situation is:
      
      1) BLPOP <somekey> 0
      2) <somekey> hash slot is resharded to another master.
      
      The client will block forever in this case. A sketch of the kind of
      check this fix implies follows below.
      
      A symmetrical non-cluster-specific bug happens when an instance is
      turned from master to slave. In that case it is more serious, since it
      will desynchronize data between slaves and masters. This other bug was
      discovered as a side effect of thinking about the bug explained and
      fixed in this commit, but will be fixed in a separate commit.
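
      A hedged sketch of the check this implies: when the cluster state
      changes, blocked clients are revisited and unblocked with an error if
      their keys' slots moved away (function and field names are assumptions
      in the spirit of the code base):

          /* If a blocked client waits on keys whose hash slot is no longer
           * served by this node, unblock it with a redirection error (or
           * with -CLUSTERDOWN if the cluster is down). */
          void clusterRedirectBlockedClientIfNeeded(client *c) {
              if (!(c->flags & CLIENT_BLOCKED)) return;
              if (server.cluster->state == CLUSTER_FAIL) {
                  /* ... unblock with a -CLUSTERDOWN error ... */
                  return;
              }
              dictIterator *di = dictGetIterator(c->bpop.keys);
              dictEntry *de;
              while ((de = dictNext(di)) != NULL) {
                  robj *key = dictGetKey(de);
                  int slot = keyHashSlot(key->ptr, sdslen(key->ptr));
                  if (server.cluster->slots[slot] != server.cluster->myself) {
                      /* ... unblock with a -MOVED/-ASK redirection ... */
                      break;
                  }
              }
              dictReleaseIterator(di);
          }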
  14. 22 Mar, 2015 1 commit
  15. 21 Mar, 2015 1 commit
  16. 20 Mar, 2015 2 commits
    • Cluster: better cluster state transition handling. · 62893f5b
      antirez authored
      Before, when getNodeByQuery() was called, we relied on the global
      cluster state to make sure all the hash slots were linked to some
      node, so finding an unbound hash slot was only checked with an
      assertion. However this is fragile. The cluster state is often updated
      in the clusterBeforeSleep() function, and not ASAP on state change, so
      it may happen that we process clients with a cluster state that is
      'ok' but with certain hash slots still set to NULL.
      
      With this commit the condition is also checked in getNodeByQuery()
      and reported with the same -CLUSTERDOWN error code but a slightly
      different error message, so that we have more debugging clues in the
      future (see the sketch below).
      
      Root cause of issue #2288.
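
      A minimal sketch of the added check, assuming getNodeByQuery()
      resolves the slot owner from server.cluster->slots[] (the error-code
      name is an assumption):

          /* Inside getNodeByQuery(), after computing the key's hash slot:
           * report an unbound slot instead of asserting. */
          clusterNode *n = server.cluster->slots[slot];
          if (n == NULL) {
              if (error_code) *error_code = CLUSTER_REDIR_DOWN_UNBOUND;
              return NULL; /* the caller replies with -CLUSTERDOWN */
          }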
    • Cluster: move clusterBeforeSleep() call before unblocked clients processing. · 585f68ac
      antirez authored
      Related to issue #2288.
  17. 18 Mar, 2015 1 commit
  18. 11 Feb, 2015 3 commits
  19. 23 Dec, 2014 1 commit
    • INFO loading stats: three fixes. · 1e8f1577
      antirez authored
      1. The server's cached unixtime may remain stale while loading the
      AOF, so the ETA is not updated correctly.
      
      2. The number of processed bytes was not initialized.
      
      3. Possible division-by-zero condition (likely cause of issue #1932);
      see the sketch below.
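
      A hedged sketch of the ETA computation with the three fixes applied
      (the server.loading_* field names follow the code base's convention;
      the details are assumptions):

          /* Refresh the cached time explicitly, since serverCron() may not
           * run while the AOF is loading. */
          server.unixtime = time(NULL);

          time_t elapsed = server.unixtime - server.loading_start_time;
          off_t remaining = server.loading_total_bytes -
                            server.loading_loaded_bytes;

          /* loading_loaded_bytes must start at 0, and the divisions must
           * be guarded so a zero elapsed time or rate cannot divide by
           * zero. */
          if (elapsed > 0 && server.loading_loaded_bytes > 0) {
              double rate = (double)server.loading_loaded_bytes / elapsed;
              time_t eta = (time_t)(remaining / rate);
              /* ... report eta in the INFO "loading" section ... */
          }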
  20. 19 Dec, 2014 1 commit
  21. 13 Dec, 2014 2 commits
  22. 04 Dec, 2014 1 commit
  23. 03 Dec, 2014 2 commits
  24. 02 Dec, 2014 1 commit
    • Mark PFCOUNT as read-only, even if not true. · 69efb59a
      antirez authored
      PFCOUNT is, technically speaking, a write command, since the cached
      value of the HLL is exposed in the data structure (design error, mea
      culpa) and can be modified by PFCOUNT (see the sketch below).
      
      However if we flag PFCOUNT as "w", read-only slaves can't execute the
      command, which is a problem since there are environments where slaves
      are used to scale PFCOUNT reads.
      
      Nor is it possible to simply prevent PFCOUNT from modifying the data
      structure on slaves, since without the cache we lose too much
      efficiency.
      
      So while this commit allows slaves to create a temporary inconsistency
      (the strings representing the HLLs in the master and slave can differ
      at certain moments), it is actually harmless.
      
      In the long run this should probably be fixed by turning the HLL into
      a more opaque representation, for example by storing the cached value
      in the part of the string which is not exposed (this should be
      possible with SDS strings).
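
      A hedged sketch of why PFCOUNT writes: the HLL string begins with a
      header that embeds a cached cardinality, which PFCOUNT refreshes when
      the estimate is stale (field layout modeled on hyperloglog.c; treat
      the details as assumptions):

          struct hllhdr {
              char magic[4];       /* "HYLL" */
              uint8_t encoding;    /* dense or sparse */
              uint8_t notused[3];
              uint8_t card[8];     /* cached cardinality, little endian;
                                    * refreshed by PFCOUNT when stale, which
                                    * is what makes PFCOUNT a writer */
              uint8_t registers[]; /* the actual HLL registers */
          };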
  25. 12 Nov, 2014 1 commit
  26. 29 Oct, 2014 5 commits
  27. 06 Oct, 2014 2 commits