1. 14 May, 2014 1 commit
    • antirez's avatar
      Cluster: better handling of stolen slots. · 6baac558
      antirez authored
      The previous code handling a lost slot (taken over by another master
      with a higher configuration epoch for the slot) was defensive: it
      considered this an error and put the cluster in an odd state requiring
      a redis-cli fix.
      
      This was changed because in practice this only happens either in a
      legitimate way, with failovers, or when the admin altered the config
      in order to reconfigure the cluster. So the new code instead tries to
      make sure that the keys stored match the new slots map, by removing
      all the keys in the slots the node lost ownership of.
      
      The function that deletes the keys from the lost slots is called only
      if the node does not lose all of its slots (losing them all results in
      a reconfiguration of the node as a slave of the node that got
      ownership). This is an optimization, since in that case the
      replication code will flush all the instance data anyway, and in a
      faster way.
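      A minimal sketch of the key-deletion step described above, assuming
      Redis-internal helpers along the lines of getKeysInSlot() and
      dbDelete(); names, signatures and ownership details here are
      illustrative, not necessarily the exact code introduced by this
      commit:
      
          /* Sketch: drop every key hashing to a slot this node no longer
           * owns. Helper names are illustrative. */
          void delKeysInLostSlot(unsigned int hashslot) {
              robj *keys[256];
              unsigned int numkeys, j;
      
              do {
                  /* Fetch up to 256 keys mapped to the slot, delete them. */
                  numkeys = getKeysInSlot(hashslot, keys, 256);
                  for (j = 0; j < numkeys; j++)
                      dbDelete(&server.db[0], keys[j]);
              } while (numkeys == 256); /* The slot may hold more keys. */
          }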
      6baac558
  2. 12 May, 2014 9 commits
  3. 10 May, 2014 1 commit
  4. 09 May, 2014 5 commits
  5. 08 May, 2014 2 commits
    • antirez's avatar
      Sentinel: log when a failover will be attempted again. · 21027786
      antirez authored
      When a Sentinel performs a failover (successful or not), or when a
      Sentinel votes for a different Sentinel trying to start a failover, it
      sets a minimum delay before it will try to get elected for a failover
      again.
      
      While not strictly needed (if multiple Sentinels try to fail over the
      same master at the same time, only one configuration will eventually
      win), this serialization is very useful in practice. Normal failovers
      are cleaner: one Sentinel starts the failover, and the others update
      their config once the Sentinel performing the failover manages to
      promote the selected slave from the role of slave to the role of
      master.
      
      However, this timeout was previously implicit, so users could see
      Sentinels not reacting for some time after a failed failover, without
      any feedback in the logs for the poor sysadmin waiting for clues.
      
      This commit makes Sentinels more verbose about the delay: when a
      master is down and a failover attempt is not performed because the
      delay has not yet elapsed, something like the following is logged:
      
          Next failover delay: I will not start a failover
          before Thu May  8 16:48:59 2014
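      A rough sketch of the check producing that log line, assuming fields
      such as failover_start_time and failover_timeout on the master
      structure; the exact field names and the 2x multiplier are
      assumptions, not necessarily what the commit implements:
      
          /* Sketch: refuse to start a new failover until the delay has
           * elapsed, and tell the sysadmin when the next attempt is due. */
          static int sentinelFailoverDelayElapsed(sentinelRedisInstance *master) {
              mstime_t delay = master->failover_timeout * 2; /* assumed */
      
              if (mstime() - master->failover_start_time < delay) {
                  time_t clock = (master->failover_start_time + delay) / 1000;
                  char ctimebuf[26];
      
                  ctime_r(&clock, ctimebuf);
                  ctimebuf[24] = '\0'; /* Strip the trailing newline. */
                  redisLog(REDIS_WARNING,
                      "Next failover delay: I will not start a failover "
                      "before %s", ctimebuf);
                  return 0;
              }
              return 1;
          }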
      21027786
    • antirez's avatar
      Sentinel: generate +config-update-from event when a new config is received. · 931beae9
      antirez authored
      This event makes clear, before the switch-master event is generated,
      that a Sentinel received a configuration update from another Sentinel.
      931beae9
  6. 07 May, 2014 10 commits
  7. 29 Apr, 2014 1 commit
    • antirez's avatar
      CLUSTER SET-CONFIG-EPOCH implemented. · 11d9ecb7
      antirez authored
      Initially Redis Cluster accepted that after cluster creation all the
      nodes were at configEpoch 0, evolving from zero as failovers happen.
      
      However the semantics were later made stricter in order to make sure
      that a cluster always has all its master nodes at different
      configEpochs, which is more robust in some corner cases (especially
      those resulting from errors by the system administrator).
      
      Assigning different configEpochs to different nodes at startup was a
      task performed naturally by the configEpoch conflict resolution
      algorithm (see the Cluster specification). However this works well
      only for small clusters, or when there are just a few collisions,
      since it is designed for exceptional cases.
      
      When a large cluster is created, hundreds of nodes can be at epoch 0,
      so the conflict resolution code is slow to provide a unique
      configEpoch to each node. For this reason this new command was
      introduced. It can be called only when a node is completely fresh (no
      other nodes known, and configEpoch set to zero), so it is safe even
      against misuse.
      
      redis-trib will use the new command in order to start the cluster
      with an incremental, unique configEpoch already assigned to every
      node.
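      A sketch of the safety checks the description implies for the command
      handler, reusing common Redis reply helpers; the structure of the real
      implementation may differ:
      
          /* Sketch: CLUSTER SET-CONFIG-EPOCH <epoch> is accepted only on a
           * completely fresh node: no other nodes known, configEpoch 0. */
          void clusterSetConfigEpochCommand(redisClient *c) {
              long long epoch;
      
              if (getLongLongFromObjectOrReply(c, c->argv[2], &epoch, NULL)
                  != REDIS_OK) return;
              if (epoch < 0) {
                  addReplyErrorFormat(c,
                      "Invalid config epoch specified: %lld", epoch);
              } else if (dictSize(server.cluster->nodes) > 1) {
                  addReplyError(c, "The user can assign a config epoch only "
                                   "when the node does not know any other "
                                   "node.");
              } else if (myself->configEpoch != 0) {
                  addReplyError(c, "Node config epoch is already non-zero");
              } else {
                  myself->configEpoch = epoch; /* Assign the new epoch. */
                  addReply(c, shared.ok);
              }
          }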
      11d9ecb7
  8. 28 Apr, 2014 4 commits
  9. 24 Apr, 2014 6 commits
    • antirez's avatar
      Process events with processEventsWhileBlocked() when blocked. · e29d3307
      antirez authored
      When we are blocked and a few events are processed from time to time,
      it is smarter to call the event handler a few times in order to
      handle the accept, read, write, close cycle of a client in a single
      pass; otherwise too much latency is added before clients receive a
      reply while the server is busy in some way (for example during DB
      loading).
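      A minimal sketch of the idea, using the ae.c event loop API; the
      iteration count is an assumption:
      
          /* Sketch: while blocked in a long operation, drain a few rounds
           * of file events without sleeping, so a client can go through
           * accept, read, write and close in a single pass. */
          void processEventsWhileBlocked(void) {
              int iterations = 4; /* Enough for accept+read+write+close. */
      
              while (iterations--) {
                  int events = aeProcessEvents(server.el,
                                               AE_FILE_EVENTS|AE_DONT_WAIT);
                  if (!events) break; /* Nothing left, back to the caller. */
              }
          }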
      e29d3307
    • antirez's avatar
      Accept multiple clients per iteration. · 3a3458ee
      antirez authored
      When the listening socket's readable event fires, we have the chance
      to accept multiple clients instead of accepting a single one. This
      makes Redis more responsive when there is a mass-connect event (for
      example after the server startup), and in workloads where a
      connect-disconnect pattern is used often, so that multiple clients
      are continuously waiting to be accepted.
      
      As a side effect, this commit makes the LOADING, BUSY, and similar
      errors much faster to deliver to the client, making Redis more
      responsive when it has to return errors informing clients that the
      server is blocked in a non-interruptible operation.
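      A sketch of the accept loop the commit describes, with an assumed
      per-call cap so a flood of connections cannot starve the rest of the
      event loop:
      
          #define MAX_ACCEPTS_PER_CALL 1000 /* assumed cap */
      
          /* Sketch: keep accepting until the backlog is drained or the
           * cap is hit, instead of accepting one client per event. */
          void acceptTcpHandler(aeEventLoop *el, int fd, void *privdata,
                                int mask) {
              int cport, cfd, max = MAX_ACCEPTS_PER_CALL;
              char cip[REDIS_IP_STR_LEN];
      
              while (max--) {
                  cfd = anetTcpAccept(server.neterr, fd, cip, sizeof(cip),
                                      &cport);
                  if (cfd == ANET_ERR) {
                      if (errno != EWOULDBLOCK)
                          redisLog(REDIS_WARNING,
                              "Accepting client connection: %s",
                              server.neterr);
                      return; /* No more pending connections. */
                  }
                  /* Create the client; may immediately reply -LOADING etc. */
                  acceptCommonHandler(cfd, 0);
              }
          }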
      3a3458ee
    • antirez's avatar
      AE_ERR -> ANET_ERR in acceptUnixHandler(). · cac4bae1
      antirez authored
      No actual changes since the value is the same.
      cac4bae1
    • antirez's avatar
      7d9b45b4
    • antirez's avatar
      clusterLoadConfig() REDIS_ERR retval semantics refined. · e3cf812c
      antirez authored
      We should return REDIS_ERR to signal that we can't read the
      configuration because there is no config file only after checking
      errno; otherwise we risk rewriting an existing file that was not
      accessible for some other reason.
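      A sketch of the intended check, assuming the loading happens via
      fopen(); only an illustration of the errno handling:
      
          /* Sketch: only a missing file means "no config yet"; any other
           * error must not be treated as such, or we could later overwrite
           * a file that simply was not accessible. */
          int clusterLoadConfig(char *filename) {
              FILE *fp = fopen(filename, "r");
      
              if (fp == NULL) {
                  if (errno == ENOENT) return REDIS_ERR; /* Fresh node. */
                  redisLog(REDIS_WARNING,
                      "Loading the cluster node config from %s: %s",
                      filename, strerror(errno));
                  exit(1); /* Don't clobber an existing, unreadable file. */
              }
              /* ... parse the file ... */
              fclose(fp);
              return REDIS_OK;
          }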
      e3cf812c
    • antirez's avatar
      Lock nodes.conf to avoid multiple processes using the same file. · db06108b
      antirez authored
      This was a common source of problems among users.
      The solution adopted is not bullet-proof: if the user deletes the
      nodes.conf file manually and starts a new instance with the same
      nodes.conf file path, two instances will use the same file. However,
      following this reasoning, the user may as well drop a nuclear bomb on
      the datacenter.
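      A sketch of one way to implement such a lock with flock(2); the real
      code may differ in details:
      
          #include <fcntl.h>     /* open() */
          #include <sys/file.h>  /* flock() */
      
          /* Sketch: take an exclusive, non-blocking lock on nodes.conf and
           * keep the descriptor open (and locked) for the whole life of
           * the process. */
          int clusterLockConfig(char *filename) {
              /* O_CREAT so the lock also works before the file exists. */
              int fd = open(filename, O_WRONLY|O_CREAT, 0644);
      
              if (fd == -1) return REDIS_ERR;
              if (flock(fd, LOCK_EX|LOCK_NB) == -1) {
                  if (errno == EWOULDBLOCK)
                      redisLog(REDIS_WARNING,
                          "Sorry, the cluster configuration file %s is "
                          "already used by a different Redis Cluster node.",
                          filename);
                  close(fd);
                  return REDIS_ERR;
              }
              /* Intentionally never close 'fd': the lock is released only
               * when the process exits. */
              return REDIS_OK;
          }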
      db06108b
  10. 23 Apr, 2014 1 commit
    • Glauber Costa's avatar
      fix null pointer access with no file pointer · 7dd44327
      Glauber Costa authored
      I happen to be working on a system that lacks urandom. While the code
      does try to handle this case and artificially create some bytes if
      the file pointer is NULL, it then tries to close the file
      unconditionally, leading to a segfault.
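      A self-contained sketch of the failure mode and the fix; the real
      function and its fallback seeding are more elaborate:
      
          #include <stdio.h>
          #include <stdlib.h>
      
          /* Sketch: fill 'p' with 'len' hex characters from /dev/urandom,
           * falling back to a weak pseudo-random source when the device is
           * missing. The fix is the guarded fclose(): closing a NULL FILE*
           * is what segfaulted. */
          void getRandomHexCharsSketch(char *p, size_t len) {
              static const char *charset = "0123456789abcdef";
              FILE *fp = fopen("/dev/urandom", "r");
              size_t j;
      
              if (fp == NULL || fread(p, len, 1, fp) != 1) {
                  /* No urandom (or short read): weak fallback, for
                   * illustration only. */
                  for (j = 0; j < len; j++) p[j] = rand() & 0xff;
              }
              for (j = 0; j < len; j++) p[j] = charset[p[j] & 0x0f];
              if (fp) fclose(fp); /* Only close what was actually opened. */
          }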
      7dd44327