1. 27 Sep, 2012 22 commits
    • Sentinel: client reconfiguration script execution. · 26a34009
      antirez authored
      This commit adds support to optionally execute a script when one of the
      following events happens:
      
      * The failover starts (with a slave already promoted).
      * The failover ends.
      * The failover is aborted.
      
      The script is called with enough parameters (documented in the example
      sentinel.conf file) to provide information about the old and new ip:port
      pair of the master, the role of the sentinel (leader or observer) and
      the name of the master.
      
      The goal of the script is to inform clients of the configuration change
      in a way that is specific to the environment Sentinel is running in,
      something that can't be implemented in a general way inside Sentinel
      itself.
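      As a rough sketch of how the hook is wired up in sentinel.conf (the
      master name and script path are hypothetical; the directive and
      argument order are the ones documented in the example file shipped
      with this commit):

          # The script receives the master name, the sentinel role
          # (leader|observer), the failover state and the old and new
          # ip:port pairs as positional arguments.
          sentinel client-reconfig-script mymaster /var/redis/reconfig-clients.sh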
    • Sentinel: when leader in wait-start, sense another leader as race. · 524b79d2
      antirez authored
      When we are in wait-start, if another leader (or any other external
      entity) turns a slave into a master, we abort the failover and detect
      the change as an observer would.
      
      Note that the wait-start state is mainly there for this reason, but the
      abort was not yet implemented.
      
      This adds a new sentinel event -failover-abort-race.
    • 7c9bfe10
    • Sentinel: abort failover when in wait-start if master is back. · 3da75e2c
      antirez authored
      When we are a Leader Sentinel in wait-start state, starting with this
      commit the failover is aborted if the master returns online.
      
      This improves the way we handle a notable case of net split, the split
      between Sentinels and Redis servers, which will be a very common kind
      of split because Sentinels will often be installed in the client's
      network while the servers can be in a different arm of the network.
      
      When Sentinels and Redis servers are isolated from each other, the
      master is in ODOWN condition since the Sentinels can agree about this
      state; however the failover does not start since there are no good
      slaves to promote (in this specific case all the slaves are
      unreachable).
      
      However when the split is resolved, Sentinels may sense the slaves back
      a moment before they sense that the master is back, so a failover may
      start without a good reason (since the master is actually working too).
      
      Now this condition is reversible: the failover is aborted immediately
      if the master is detected to be working again, that is, neither in
      SDOWN nor in ODOWN condition.
    • Sentinel: scripts execution engine improved. · e328e41a
      antirez authored
      We no longer use a vanilla fork+execve but maintain a queue of script
      jobs to execute, with retries on error, timeouts, and so forth.
      
      Currently this is used only for notifications, but the ability to also
      call client reconfiguration scripts will be added soon.
    • Include sys/wait.h to avoid compiler warning · 8a8e560b
      Jan-Erik Rediger authored
      gcc warned about an implicit declaration of function 'wait3'.
      Including this header fixes this.
    • comment fix · af41f6cf
      Jeremy Zawodny authored
      improve English a bit. :-)
    • 999fe0d3
    • Fix warning in redis.c for sentinel config load · f1057534
      mrb authored
    • Some cleanup in sentinel.conf · fcc8bf99
      mrb authored
    • Sentinel: abort failover if no good slave is available. · 374eed7d
      antirez authored
      The previous behavior of the state machine was to wait some time and
      retry the slave selection, but this is not robust enough against drastic
      changes in the conditions of the monitored instances.
      
      What we do now when the slave selection fails is to abort the failover
      and go back to monitoring the master. If the ODOWN condition is still
      present, a new failover will be triggered, and so forth.
      
      This commit also refactors the code we use to abort a failover.
    • 2085fdb1
    • Prevent a spurious +sdown event on switch. · f8a19e32
      antirez authored
      When we reset the master we should start with clean timestamps for ping
      replies, otherwise we'll detect a spurious +sdown event, because at the
      time of the +switch-master event the previous master instance was
      probably in +sdown condition. Since we updated the address we should
      count time from scratch again.
      
      This commit also makes sure to explicitly reset the count of pending
      commands; we can do this now because of the new way the hiredis link
      is closed.
    • Sentinel: debugging message removed. · 7c39b55d
      antirez authored
    • Sentinel: changes to connection handling and redirection. · e47236d8
      antirez authored
      We now disconnect the hiredis link of Redis instances in a more robust
      way. We also change the way we perform the redirection for the
      +switch-master event, which is not just an instance reset with an
      address change.
      
      Using the same system we now implement the +redirect-to-master event,
      which is triggered by an instance that is configured to be a master but
      is found to be a slave at the first INFO reply. In that case we monitor
      the master instead, logging the incident as an event.
    • Sentinel: check that instance still exists in reply callbacks. · 8ab7e998
      antirez authored
      We can't be sure the instance object still exists when the reply
      callback is called.
    • Sentinel: more robust failover detection as observer. · e01a415d
      antirez authored
      Sentinel observers detect a failover by checking whether a slave
      attached to the monitored master changes its replication state from
      slave to master. However, while this change may in theory only happen
      after a SLAVEOF NO ONE command, in practice it is very easy to reboot a
      slave instance with a wrong configuration that turns it into a master,
      especially if it was a master in the past, before a successful failover.
      
      This commit changes the detection policy so that if an instance goes
      from slave to master, but at the same time the runid has changed, we
      sense a reboot, and in that case we don't detect a failover at all.
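      Both signals come from the periodic INFO replies; roughly (field names
      as they appear in the INFO output, values hypothetical):

          redis> INFO server
          run_id:3f0ad5f9...        # changes on every restart
          redis> INFO replication
          role:master               # was role:slave before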
      
      This commit also introduces the "reboot" sentinel event, that is logged
      at "warning" level (so this will trigger an admin notification).
      
      The commit also fixes a problem in the disconnect handler, which
      assumed that the instance object always exists; that is not the case.
      Now we no longer assume that redisAsyncFree() will call the
      disconnection handler before returning.
    • Fixed an error in the example sentinel.conf. · d26a8fb4
      antirez authored
    • Typo. · 5b5eb192
      antirez authored
    • First implementation of Redis Sentinel. · 120ba392
      antirez authored
      This commit contains the first, beta quality implementation of Redis
      Sentinel, a distributed monitoring system for Redis with notification
      and automatic failover capabilities.
      
      More info at http://redis.io/topics/sentinel
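      A minimal sketch of how a Sentinel is configured (master name, address
      and quorum are hypothetical; the directives follow the example
      sentinel.conf and the documentation linked above):

          # Monitor the master called "mymaster" at 127.0.0.1:6379,
          # requiring 2 Sentinels to agree before declaring it down.
          sentinel monitor mymaster 127.0.0.1 6379 2
          sentinel down-after-milliseconds mymaster 30000

      Sentinel itself is started by running redis-server with the --sentinel
      option pointing at such a configuration file.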
  2. 21 Sep, 2012 3 commits
    • Test for SRANDMEMBER with <count>. · 2812b945
      antirez authored
    • SRANDMEMBER <count> leak fixed. · 31fe053a
      antirez authored
      For "CASE 4" (see code) we need to free the element if it's already in
      the result dictionary and adding it failed.
    • Added the SRANDMEMBER key <count> variant. · dd947715
      antirez authored
      SRANDMEMBER called with just the key argument can only return a single
      random element from a Redis Set. However, many users need to return
      multiple unique elements from a Set; this is not a trivial problem to
      handle on the client side, and for truly good performance a C
      implementation was required.
      
      After many requests for this feature it was finally implemented.
      
      The problem in implementing this command is the strategy to follow when
      the number of elements the user asks for is close to the number of
      elements already inside the set. In this case, asking the dictionary
      API for random elements and trying to add them to a temporary set may
      result in extremely poor performance, as most add operations will be
      wasted on duplicate elements.
      
      For this reason this implementation uses a different strategy in this
      case: the Set is copied, and random elements are returned to reach the
      specified count.
      
      The code actually uses 4 different algorithms optimized for the
      different cases.
      
      If the count is negative, the command changes behavior and allows for
      duplicated elements in the returned subset.
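      A quick illustration of the new variant (key and members are
      hypothetical; the returned elements are random):

          redis> SADD myset a b c d e
          (integer) 5
          redis> SRANDMEMBER myset 3        # positive count: unique elements
          1) "b"
          2) "e"
          3) "a"
          redis> SRANDMEMBER myset -5       # negative count: duplicates allowed
          1) "c"
          2) "c"
          3) "a"
          4) "e"
          5) "b"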
  3. 17 Sep, 2012 4 commits
    • 8b6b1b27
    • Redis 2.5.13 (2.6.0 RC7). · 44038626
      antirez authored
    • 174518ff
    • A reimplementation of blocking operation internals. · f444e2af
      antirez authored
      Redis provides support for blocking operations such as BLPOP or BRPOP.
      These operations are identical to the normal LPOP and RPOP operations
      as long as there are elements in the target list, but if the list is
      empty they block waiting for new data to arrive in the list.
      
      All the clients blocked waiting for the same list are served in a FIFO
      way, so the first that blocked is the first to be served when more data
      is pushed by another client into the list.
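      For example (two redis-cli sessions; key and value are hypothetical):

          # Client A blocks on an empty list:
          redis> BLPOP foo 0
          (client A blocks)

          # Client B pushes a value; client A is unblocked and served first:
          redis> LPUSH foo somevalue
          (integer) 1

          # Client A now receives:
          1) "foo"
          2) "somevalue"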
      
      The previous implementation of blocking operations was conceived to
      serve clients in the context of push operations. For instance:
      
      1) There is a client "A" blocked on list "foo".
      2) The client "B" performs `LPUSH foo somevalue`.
      3) The client "A" is served in the context of the "B" LPUSH,
      synchronously.
      
      Processing things in a synchronous way was useful because if "B" pushes
      a value that is immediately served to "A", from the point of view of
      the database it is a NOP (no operation): nothing is replicated, nothing
      is written to the AOF file, and so forth.
      
      However later we implemented two things:
      
      1) Variadic LPUSH that could add multiple values to a list in the
      context of a single call.
      2) BRPOPLPUSH that was a version of BRPOP that also provided a "PUSH"
      side effect when receiving data.
      
      This forced us to make the synchronous implementation more complex: if
      client "A" is waiting for data and "B" pushes three elements in a
      single call, we needed to propagate an LPUSH with a missing argument
      on the AOF and replication link. We also needed to make sure to
      replicate the LPUSH side of BRPOPLPUSH, but only if it did not in turn
      happen to serve another client blocked on a different list ;)
      
      This was complex, but with a few mutually recursive functions
      everything worked as expected... until one day we introduced scripting
      in Redis.
      
      Scripting + synchronous blocking operations = Issue #614.
      
      Basically you can't "rewrite" a script to have just a partial effect on
      the replicas and AOF file if the script happened to serve a few blocked
      clients.
      
      The solution to all these problems, implemented by this commit, is to
      change the way we serve blocked clients. Instead of serving the blocked
      clients synchronously, in the context of the command performing the
      PUSH operation, it is now an asynchronous and iterative process:
      
      1) If a key that has clients blocked waiting for data is the subject of
      a list push operation, we simply mark the key as "ready" and put it
      into a queue.
      2) Every command pushing stuff onto lists, be it a variadic LPUSH, a
      script, or whatever it is, is replicated verbatim without any rewriting
      (see the sketch after this list).
      3) Every time a Redis command, a MULTI/EXEC block, or a script
      completes its execution, we run the list of keys ready to serve blocked
      clients (as more data arrived), and process this list serving the
      blocked clients.
      4) As a result of "3" maybe more keys are ready again for other clients
      (as a result of BRPOPLPUSH we may have push operations), so we iterate
      back to step "3" if needed.
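      As a sketch of the difference (exactly which element the old code
      stripped from the rewritten command is an illustration, not the
      precise rewriting rule):

          # Client A is blocked on: BLPOP foo 0
          # Client B runs:
          LPUSH foo a b c
          # Old behavior: the command was rewritten before being propagated
          # to the AOF/replicas, with the element served to "A" removed
          # (e.g. "LPUSH foo a b").
          # New behavior: "LPUSH foo a b c" is propagated verbatim.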
      
      The new code has much simpler semantics and an easier to understand
      implementation, with the disadvantage of not being able to "optimize
      out" a PUSH+BPOP as a NOP.
      
      This commit will be tested with care before the final merge; more tests
      will likely be added.
  4. 11 Sep, 2012 1 commit
    • Make sure that SELECT argument is an integer or return an error. · b58f03a0
      antirez authored
      Unfortunately we still had the lame atoi() without any error checking
      in place, so "SELECT foo" would work as "SELECT 0". This was not a huge
      problem per se, but some people expected that DBs could be identified
      by strings and not just numbers, and without an error you get the
      feeling that they can be, even though the actual behavior says
      otherwise.
      
      Now getLongFromObjectOrReply() is used, as almost everywhere else
      across the code, generating an error if the argument is not an integer
      or overflows the long type.
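      Roughly, the observable change (the exact error text is whatever
      getLongFromObjectOrReply() produces):

          # Before the fix:
          redis> SELECT foo
          OK                  # silently interpreted as SELECT 0

          # After the fix:
          redis> SELECT foo
          (error) ERR ...     # non-integer index is now rejected
          redis> SELECT 1
          OK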
      
      Thanks to @mipearson for reporting that on Twitter.
  5. 10 Sep, 2012 1 commit
  6. 05 Sep, 2012 3 commits
    • BITCOUNT regression test for #582 fixed for 32 bit target. · 58889867
      antirez authored
      Bug #582 was not present in 32 bit builds of Redis as
      getLongFromObjectOrReply() will return an error for the overflow.
      
      This commit makes sure that the test does not fail because of the error
      returned when running against 32 bit builds.
    • BITCOUNT: fix segmentation fault. · 4c3d4190
      Haruto Otake authored
      Remove an unsafe and unnecessary cast. Until now, this cast could lead
      to a segmentation fault when end > UINT_MAX:
      
          setbit foo 0 1
          bitcount foo 0 4294967295
          => ok
          bitcount foo 0 4294967296
          => causes a segmentation fault.
      
      Note by @antirez: the commit was modified a bit to also change the
      string length type to long, since it's guaranteed to be at max 512 MB
      in size, so we can work with the same type across all the code paths.
      
      A regression test was also added.
    • Bug fix: slaves being pinged every second · 0671d88c
      Saj Goonatilleke authored
      REDIS_REPL_PING_SLAVE_PERIOD controls how often the master should
      transmit a heartbeat (PING) to its slaves.  This period, which defaults
      to 10, is measured in seconds.
      
      Redis 2.4 masters used to ping their slaves every ten seconds, just like
      it says on the tin.
      
      The Redis 2.6 masters I have been experimenting with, on the other hand,
      ping their slaves *every second*.  (master_last_io_seconds_ago never
      approaches 10.)  I think the ping period was inadvertently slashed to
      one-tenth of its nominal value around the time REDIS_HZ was introduced.
      This commit reintroduces correct ping schedule behaviour.
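      For reference, the period is the one exposed in redis.conf (a sketch,
      assuming the 2.6-era directive name; 10 is the default):

          # Master pings its slaves every N seconds.
          repl-ping-slave-period 10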
  7. 04 Sep, 2012 1 commit
    • Scripting: Force SORT BY constant determinism inside SORT itself. · 5ddee9b7
      antirez authored
      SORT is able to return (faster than when ordering) unordered output if
      the "BY" clause is used with a constant value. However, we try to play
      well with the scripting requirement of determinism by always providing
      sorted output when SORT (and other similar commands) is called from Lua
      scripts.
      
      However, we used the general mechanism already in place in scripting to
      reorder the SORT output: if the command has the "S" flag set, the Lua
      scripting engine takes an additional step when converting a multi bulk
      reply into a Lua value, calling a Lua sorting function.
      
      This is suboptimal as we can do it faster inside SORT itself. It is
      also broken, as issue #545 shows: basically when SORT is used with a
      constant BY, and additionally GET is also used, the Lua scripting
      engine was trying to order the output as a flat array, while it was
      actually a list of key-value pairs.
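      A sketch of the kind of SORT call that, when issued from a Lua script
      via redis.call(), triggered the issue (key names and patterns are
      hypothetical; "nosort" is just a constant string that does not match
      any key, so no ordering is performed):

          SORT mylist BY nosort GET # GET weight_*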
      
      What we do now is to recognize whether the caller of SORT is the Lua
      client (we can check this using the REDIS_LUA_CLIENT flag). If so, and
      if a "don't sort" condition is triggered by the BY option with a
      constant string, we force lexicographical sorting.
      
      This commit fixes this bug and improves the performance, and at the same
      time simplifies the implementation. This does not mean I'm smart today,
      it means I was stupid when I committed the original implementation ;)
  8. 03 Sep, 2012 1 commit
    • Send an async PING before starting replication with master. · fd2a8951
      antirez authored
      During the first synchronization step of the replication process, a
      Redis slave connects to the master in a non blocking way. However, once
      the connection is established, the replication continues by sending the
      REPLCONF command, and sometimes the AUTH command if needed. Those
      commands are sent in a partially blocking way (blocking with a timeout
      in the order of seconds).
      
      Because it is common for a blocked master to accept connections even if
      it is actually not able to reply to the slave's requests, it was easy
      for a slave to block if the master had serious issues but was still
      able to accept connections on the listening socket.
      
      For this reason we now send an asynchronous PING request just after the
      non blocking connection is established successfully, and wait for the
      reply before continuing with the replication process. It is very
      unlikely that a master able to reply to PING can't reply to the other
      commands.
      
      This solution was proposed by Didier Spezia (thanks!), so that we don't
      need to turn the whole replication process into a non blocking affair,
      while still keeping the probability of a blocked slave minimal even in
      the event of a failing master.
      
      Also we now use getsockopt(SO_ERROR) in order to check errors ASAP
      in the event handler, instead of waiting for actual I/O to return an
      error.
      
      This commit fixes issue #632.
  9. 31 Aug, 2012 4 commits
    • Scripting: Reset Lua fake client reply_bytes after command execution. · 42a239b8
      antirez authored
      Lua scripting uses a fake client in order to run commands in the
      context of a client, accumulate the reply, and convert it into a Lua
      object to return to the caller. This client is reused again and again,
      and is referenced by the globally accessible server.lua_client pointer.
      
      However, after every call to redis.call() or redis.pcall(), which is
      handled by the luaRedisGenericCommand() function, the reply_bytes field
      of the client was not set back to zero. This field is used to estimate
      the amount of memory currently used by the reply. Because of the
      missing reset, as script after script executed, this value kept getting
      bigger and bigger, and in the end on 32 bit systems it triggered the
      following assertion:
      
          redisAssert(c->reply_bytes < ULONG_MAX-(1024*64));
      
      On 64 bit systems this does not happen because it takes too much time to
      reach values near to 2^64 for users to see the practical effect of the
      bug.
      
      Now in the cleanup stage of luaRedisGenericCommand() we reset the
      reply_bytes counter to zero, avoiding the issue. It is not practical to
      add a test for this bug, but the fix was manually tested using a
      debugger.
      
      This commit fixes issue #656.
    • Sentinel: Redis-side support for slave priority. · 48d26a48
      antirez authored
      A Redis slave can now be configured with a priority, an integer number
      that is shown in the INFO output and can be read and set using the
      redis.conf file or the CONFIG GET/SET commands.
      
      This field is used by Sentinel during slave election. A slave with lower
      priority is preferred. A slave with priority zero is never elected (and
      is considered to be impossible to elect even if it is the only slave
      available).
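      A minimal sketch of setting and inspecting the priority (the value is
      arbitrary; the directive is the slave-priority option this commit
      adds):

          # redis.conf
          slave-priority 10

          # at runtime
          redis> CONFIG SET slave-priority 10
          OK
          redis> CONFIG GET slave-priority
          1) "slave-priority"
          2) "10"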
      
      A later commit will add support on the Sentinel side as well.
    • Scripting: require at least one argument for redis.call(). · edfaa64f
      antirez authored
      Redis used to crash with a call like the following:
      
          EVAL "redis.call()" 0
      
      Now the explicit check for at least one argument prevents the problem.
      
      This commit fixes issue #655.