1. 01 Mar, 2018 1 commit
  2. 27 Feb, 2018 1 commit
    • Cluster: improve crash-recovery safety after failover auth vote. · 533d0e03
      antirez authored
      Add AE_BARRIER to the writable event so that slaves requesting
      votes can't be served before we re-enter the event loop in the next
      iteration; this way clusterBeforeSleep() will fsync to disk in time.
      Also add a call to explicitly fsync, given that we modified the last
      vote epoch variable.
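
      A minimal sketch of the mechanism, assuming the cluster.c conventions of
      that era (clusterWriteHandler and the link->fd field come from the
      surrounding code, they are not introduced here): the write handler is
      registered with AE_BARRIER, so the reply is never flushed in the same
      event loop iteration that read the request, and clusterBeforeSleep()
      gets its chance to fsync first.

          /* Register the cluster link write handler with AE_BARRIER: the
           * WRITABLE event will not fire in the same iteration in which the
           * READABLE event fired, so beforeSleep() runs (and fsyncs) before
           * the failover auth vote is actually sent. */
          aeCreateFileEvent(server.el, link->fd, AE_WRITABLE|AE_BARRIER,
                            clusterWriteHandler, link);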
  3. 18 Jan, 2018 1 commit
  4. 09 Jan, 2018 1 commit
  5. 13 Dec, 2017 1 commit
  6. 06 Dec, 2017 1 commit
    • Change indentation and other minor details of PR #4489. · 522760fa
      antirez authored
      The main change introduced by this commit is treating help
      arrays as more text than code, thus indenting them at level 0. This
      improves readability, and is an old practice when defining arrays of
      C strings describing text.
      
      Additionally a few useless return statements are removed, and the HELP
      subcommand is capitalized when printed to the user.
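
      For illustration only (a hypothetical subcommand, not code from the PR),
      this is what the convention looks like: the array body sits at
      indentation level 0 even though the surrounding function is indented,
      and addReplyHelp() is the helper that prints it.

          void myCommandHelp(client *c) {
              /* Help strings are kept at indentation level 0 on purpose:
               * they are text, not code. */
              const char *help[] = {
          "DOIT <key> -- Do the thing to <key>.",
          "STATS -- Return statistics about the thing.",
          NULL
              };
              addReplyHelp(c, help);
          }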
  7. 03 Dec, 2017 1 commit
  8. 31 Oct, 2017 1 commit
    • Fix buffer overflows occurring reading redis.conf. · ffcf7d5a
      antirez authored
      There was not enough sanity checking in the code loading the slots of
      Redis Cluster from the nodes.conf file; this resulted in the
      attacker's ability to write data at random addresses in the process
      memory by manipulating the index of the array. The bug seems
      exploitable using the following technique: the config file may be altered so
      that one of the nodes gets, as node ID (which is the first field inside the
      structure), some data that is actually executable; then, by writing this
      address in selected places, this node ID part can be executed after a
      jump. So it is mostly just a matter of effort to exploit the
      bug. In practice however the issue is not very critical, because the
      bug requires an unprivileged user to be able to modify the Redis Cluster
      nodes configuration, and at the same time this should result in some
      gain. However Redis normally runs unprivileged as well. Yet it is much
      better to have this fixed indeed.
      
      Fix #4278.
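
      A hedged sketch of the kind of sanity check involved (the actual parsing
      code in clusterLoadConfig() differs; the variable names here are
      illustrative): a slot number read from nodes.conf must be validated
      before it is used to index the 16384-entry slot arrays.

          /* Illustrative only: reject out-of-range slots while parsing
           * nodes.conf, before touching any slot array. */
          int slot = atoi(slotstr);
          if (slot < 0 || slot >= CLUSTER_SLOTS) {
              serverLog(LL_WARNING, "Invalid slot %d in cluster config", slot);
              goto fmterr;   /* Treat the whole file as corrupted. */
          }
          clusterAddSlot(node, slot);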
  9. 27 Jul, 2017 1 commit
  10. 20 Jul, 2017 2 commits
    • Make representClusterNodeFlags() more robust. · a3778f3b
      antirez authored
      This function failed when an internal-only flag was set as the only flag
      in a node: the string was trimmed expecting a final comma before
      exiting the function, causing a crash. See issue #4142.
      Moreover, the generation of the flags representation, which is only needed
      at the DEBUG log level, was always performed: a waste of CPU time. This is
      fixed as well by this commit.
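
      A simplified sketch of a robust way to build the flags string, close in
      spirit to the fixed function but not a verbatim copy (the CLUSTER_NODE_*
      constants and the sds calls are real; the set of flags shown is
      abbreviated): nothing is trimmed unless something was appended.

          sds representClusterNodeFlags(sds ci, uint16_t flags) {
              size_t orig_len = sdslen(ci);
              if (flags & CLUSTER_NODE_MYSELF) ci = sdscat(ci, "myself,");
              if (flags & CLUSTER_NODE_MASTER) ci = sdscat(ci, "master,");
              if (flags & CLUSTER_NODE_SLAVE)  ci = sdscat(ci, "slave,");
              if (flags & CLUSTER_NODE_PFAIL)  ci = sdscat(ci, "fail?,");
              if (flags & CLUSTER_NODE_FAIL)   ci = sdscat(ci, "fail,");
              /* If only internal flags were set, emit a placeholder so the
               * string is never left empty. */
              if (sdslen(ci) == orig_len) ci = sdscat(ci, "noflags,");
              sdsIncrLen(ci, -1);   /* Remove the trailing comma. */
              return ci;
          }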
    • Fix wrong return value of clusterDelNodeSlots · 9e7a8c02
      Leon Chen authored
  11. 18 Jul, 2017 1 commit
  12. 11 Jul, 2017 1 commit
  13. 23 Jun, 2017 2 commits
  14. 04 Jun, 2017 1 commit
  15. 15 Apr, 2017 1 commit
    • Cluster: discard pong times in the future. · 271733f4
      antirez authored
      However we allow for 500 milliseconds of tolerance, in order to
      avoid frequently discarding semantically valid info (the node is up)
      because of the natural few milliseconds of desync among servers, even
      when NTP is used.
      
      Note that we should anyway ping the node from time to time regardless, and
      discover if it's actually down from our point of view, since no update
      is accepted while we have an active ping pending for the node.
      
      Related to #3929.
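
      A minimal sketch of the check while processing a gossip entry (pongtime
      is the reported pong time already converted to milliseconds; server.mstime
      and node->pong_received are the real fields):

          /* Accept the reported pong time only if it is newer than our own
           * view and not in the future, with 500 ms of tolerance, from the
           * POV of our clock. */
          if (pongtime <= (server.mstime+500) &&
              pongtime > node->pong_received)
          {
              node->pong_received = pongtime;
          }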
  16. 14 Apr, 2017 4 commits
    • Cluster: always add PFAIL nodes at end of gossip section. · 02777bb2
      antirez authored
      Relying on the fact that nodes in PFAIL state will be shared around by
      randomly adding them to the gossip section is a weak assumption,
      especially after the changes related to sending less ping/pong packets.
      
      We want to always include gossip entries for all the nodes that are in
      PFAIL state, so that the PFAIL -> FAIL state promotion can happen much
      faster and more reliably.
      
      Related to #3929.
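
      A hedged sketch of how the gossip section can be built in
      clusterSendPing() (simplified; clusterSetGossipEntry() stands for the
      helper that fills one gossip slot): after the usual random entries, walk
      the nodes table and force an entry for every node currently in PFAIL.

          /* Append a gossip entry for every PFAIL node, so that receivers
           * collect failure reports as fast as possible. */
          dictIterator *di = dictGetSafeIterator(server.cluster->nodes);
          dictEntry *de;
          while ((de = dictNext(di)) != NULL) {
              clusterNode *node = dictGetVal(de);
              if (!(node->flags & CLUSTER_NODE_PFAIL)) continue;
              if (node->flags & (CLUSTER_NODE_HANDSHAKE|CLUSTER_NODE_NOADDR))
                  continue;
              clusterSetGossipEntry(hdr, gossipcount++, node);
          }
          dictReleaseIterator(di);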
    • Cluster: fix gossip section ping/pong times encoding. · 8c829d9e
      antirez authored
      The gossip section times are 32 bit, so they cannot store the milliseconds
      time but just the approximation in seconds, which is good enough for our
      uses. At the same time however, when comparing the gossip section times
      of other nodes with our node's view, we need to convert back to
      milliseconds.
      
      Related to #3929. Without this change the patch to reduce the traffic in
      the cluster bus messages does not work.
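
      A minimal sketch of the two directions of the conversion (following the
      clusterMsgDataGossip layout, where ping_sent and pong_received are 32 bit
      network-order fields; the surrounding code is elided):

          /* Sending: store second resolution into the 32 bit gossip fields. */
          gossip->ping_sent = htonl(node->ping_sent / 1000);
          gossip->pong_received = htonl(node->pong_received / 1000);

          /* Receiving: convert back to milliseconds before comparing with
           * our own mstime_t view of the node. */
          mstime_t pongtime = ntohl(g->pong_received);
          pongtime *= 1000;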
    • 6878a3fe
    • Cluster: decrease ping/pong traffic by trusting other nodes reports. · 8f7bf284
      antirez authored
      Clusters of bigger sizes tend to have a lot of traffic in the cluster bus
      just for failure detection: a node will try to get a ping reply from
      another node no later than when half the node timeout has elapsed,
      in order to avoid a false positive.
      
      However this means that if we have N nodes and the node timeout is set
      to, for instance, M seconds, we'll have to ping N nodes every M/2
      seconds. These pings will receive the same number of pongs, so
      a total of N*M packets per node. However given that we have a total of N
      nodes doing this, the total number of messages will be N*N*M.
      
      In a 100 node cluster with a timeout of 60 seconds, this translates
      to a total on the order of 100*100*30 packets per second, summing all the
      packets exchanged by all the nodes.
      
      This is, as you can guess, a lot... So this patch changes the
      implementation in a very simple way in order to trust the reports of
      other nodes: if a node A reports a node B as alive at least up to
      a given time, we update our view accordingly.
      
      The problem with this approach is that it could result in a subset of
      nodes being able to reach a given node X, preventing the others from
      detecting that it is actually not reachable from the majority of nodes.
      So the above algorithm is refined by trusting other nodes only if we do
      not currently have a ping pending for node X, and if there are no
      failure reports for that node.
      
      Since each node anyway pings 10 other nodes every second (one node
      every 100 milliseconds), eventually, even trusting the other nodes'
      reports, we will detect if a given node is down from our POV.
      
      Now to understand the number of packets that the cluster would exchange
      for failure detection with the patch, we can start by considering the
      random PINGs that the cluster sends anyway as a baseline:
      each node sends 10 packets per second, so the total traffic, if no
      additional packets were sent, including PONG packets, would be:
      
          Total messages per second = N*10*2
      
      However trusting other nodes' gossip sections will not always prevent
      pinging nodes for the "half timeout reached" rule all the times. The
      math involved in computing the actual rate as N and M change is quite
      complex and depends also on another parameter, which is the number of
      entries in the gossip section of PING and PONG packets. However it is
      possible to compare what happens in clusters of different sizes
      experimentally. After applying this patch a very important reduction in
      the number of packets exchanged is trivial to observe, without apparent
      impact on the failure detection performance.
      
      Actual numbers with different cluster sizes should be published in the
      Redis Cluster documentation in the future.
      
      Related to #3929.
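
      A hedged sketch of the refined rule as it could appear in
      clusterProcessGossipSection() (simplified; the three guard conditions are
      the important part, the update itself follows the two commits above):

          /* Trust the gossip entry only if, from our POV, the node is not
           * already failing, we have no ping pending for it, and nobody
           * filed failure reports about it. */
          if (!(node->flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL)) &&
              node->ping_sent == 0 &&
              clusterNodeFailureReportsCount(node) == 0)
          {
              /* Update node->pong_received from the gossip entry, applying
               * the seconds-to-milliseconds conversion and the 500 ms
               * future-tolerance check described above. */
          }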
  17. 13 Apr, 2017 1 commit
  18. 27 Mar, 2017 1 commit
  19. 09 Feb, 2017 1 commit
    • Fix MIGRATE closing of cached socket on error. · f917e0da
      antirez authored
      After investigating issue #3796, it was discovered that MIGRATE
      could call migrateCloseSocket() after the original MIGRATE c->argv
      was already rewritten as a DEL operation. As a result the host/port
      passed to migrateCloseSocket() could be anything, often a NULL pointer
      that gets dereferenced, crashing the server.
      
      Now the socket is closed at an earlier time, when there is a socket
      error in a later stage where no retry will be performed, before we
      rewrite the argument vector. Moreover a check was added so that later,
      in the socket_err label, there is no further attempt at closing the
      socket if the argument vector was rewritten.
      
      This fix should resolve the bug reported in #3796.
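
      A hedged sketch of the resulting control flow inside migrateCommand()
      (the argv_rewritten flag illustrates the guard described above; treat the
      exact variable and call sites as an assumption):

          int argv_rewritten = 0;

          /* On a socket error in a stage where no retry will be performed,
           * close the cached socket while c->argv still holds host/port. */
          migrateCloseSocket(c->argv[1], c->argv[2]);

          /* Only afterwards rewrite MIGRATE into DEL for propagation. */
          argv_rewritten = 1;

          /* ... */

      socket_err:
          /* Don't try to close the socket again using an argument vector
           * that no longer contains the host and port. */
          if (!argv_rewritten) migrateCloseSocket(c->argv[1], c->argv[2]);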
  20. 14 Dec, 2016 1 commit
    • MIGRATE: Remove upfront ttl initialization. · b53e73e1
      antirez authored
      After the fix for #3673 the ttl var is always initialized inside the
      loop itself, so the early initialization is not needed.
      
      Variable declarations were also moved to a more local scope.
  21. 13 Dec, 2016 1 commit
    • Replication: fix the infamous key leakage of writable slaves + EXPIRE. · 04542cff
      antirez authored
      BACKGROUND AND USE CASE
      
      Redis slaves are normally read only, however they support a "writable"
      mode which is very handy when scaling reads on slaves that actually
      need write operations in order to access data. For instance, imagine
      having slaves replicating certain Sets keys from the master. When
      accessing the data on the slave, we want to perform intersections between
      such Sets values. However we don't want to intersect each time: it is
      often a good idea to cache the intersection for some time.
      
      To do so, it is possible to set up a slave as a writable slave, and
      perform the intersection on the slave side, perhaps setting a TTL on the
      resulting key so that it will expire after some time.
      
      THE BUG
      
      Problem: in order to have consistent replication, the expiration of keys
      in Redis replication is up to the master, which synthesizes DEL operations
      to send in the replication stream. However slaves logically expire keys
      by hiding them from read attempts from clients, so that if the master did
      not promptly send a DEL, the client still sees logically expired keys
      as non existing.
      
      Because slaves don't actively expire keys by actually evicting them but
      just mask them from the POV of read operations, if a key is created in a
      writable slave, and an expire is set, the key will be leaked forever:
      
      1. No DEL will be received from the master, which does not know about
      such a key at all.
      
      2. No eviction will be performed by the slave, since eviction must be
      disabled on slaves: it's up to the master, otherwise consistency of data
      is lost.
      
      THE FIX
      
      In order to fix the problem, the slave should be able to tag, in some way,
      keys that were created on the slave side and have an expire set.
      
      My solution involves an additional dictionary, created by the writable
      slave only if needed. The dictionary is keyed by the key names that we
      need to track: all the keys that are set with an expire directly by a
      client writing to the slave are tracked.
      
      The value in the dictionary is a bitmap of all the DBs where such a key
      name needs to be tracked, so that we can use a single dictionary to track
      keys in all the DBs used by the slave (this actually limits the solution
      to the first 64 DBs, but the default with Redis is to use 16 DBs).
      
      This solution has a small complexity and CPU cost, which is actually zero
      when the feature is not used. The slave-side eviction is encapsulated in
      code which is not coupled with the rest of the Redis core, except for the
      hook needed to track the keys.
      
      TODO
      
      I'm doing the first smoke tests to see if the feature works as expected:
      so far so good. Unit tests should be added before merging into the
      4.0 branch.
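
      A simplified sketch of the tracking dictionary and the hook, assuming an
      sds-keyed dict type (keyDictType here is a placeholder name, and the rest
      of the machinery, such as consuming the dictionary to actually expire
      keys, is omitted):

          /* Created lazily, by writable slaves only.
           * key   -> sds key name
           * value -> 64 bit bitmap of the DB ids where the key has a
           *          slave-side expire to enforce. */
          dict *slaveKeysWithExpire = NULL;

          void rememberSlaveKeyWithExpire(redisDb *db, robj *key) {
              uint64_t dbids = 0;
              dictEntry *de;

              if (db->id > 63) return;   /* The bitmap covers 64 DBs only. */
              if (slaveKeysWithExpire == NULL)
                  slaveKeysWithExpire = dictCreate(&keyDictType, NULL);

              de = dictFind(slaveKeysWithExpire, key->ptr);
              if (de == NULL) {
                  /* First time we see this key name: store a private copy. */
                  dictAdd(slaveKeysWithExpire, sdsdup(key->ptr), NULL);
                  de = dictFind(slaveKeysWithExpire, key->ptr);
              } else {
                  dbids = dictGetUnsignedIntegerVal(de);
              }
              dictSetUnsignedIntegerVal(de, dbids | (1ULL << db->id));
          }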
  22. 08 Dec, 2016 1 commit
    • Reset the ttl for additional keys · 2a32f037
      Jan-Erik Rediger authored
      Before, if a previous key had a TTL set but the current one didn't, the
      TTL was reused, resulting in wrong expirations being set.
      
      This behaviour was experienced when `MigrateDefaultPipeline` in
      redis-trib was set to >1.
      
      Fixes #3655
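
      A minimal sketch of the fixed MIGRATE key loop (simplified; kv[] holds
      the keys being migrated and the serialization step is elided). The
      important part is that ttl restarts from 0 for every key:

          for (int j = 0; j < num_keys; j++) {
              long long ttl = 0;                       /* Reset per key. */
              long long expireat = getExpire(c->db, kv[j]);

              if (expireat != -1) {
                  ttl = expireat - mstime();
                  if (ttl < 1) ttl = 1;
              }
              /* ... serialize the RESTORE command for kv[j] using this
               * key's ttl (0 means no expire) ... */
          }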
  23. 29 Nov, 2016 1 commit
  24. 16 Nov, 2016 1 commit
  25. 16 Jun, 2016 1 commit
    • RESTORE: accept RDB dumps with older versions. · a3f893b8
      antirez authored
      Reference issue #3218.
      
      Checking the code I can't find a reason why the original RESTORE
      code was so opinionated about restoring only the current version. The
      code in `rdb.c` appears to be as capable as ever of restoring data from
      older versions of Redis, and the only place where the current version is
      needed in order to correctly restore data is while loading the opcodes,
      not the values themselves, which is what happens in the case of RESTORE.
      
      For the above reasons, this commit enables RESTORE to accept older
      versions of value payloads.
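
      A minimal sketch of the relaxed check in the DUMP payload verification
      (the footer carries a 2 byte little-endian RDB version followed by a
      CRC64; surrounding code elided):

          /* Accept any payload produced by the current RDB version or an
           * older one, instead of requiring an exact match. */
          rdbver = (footer[1] << 8) | footer[0];
          if (rdbver > RDB_VERSION) return C_ERR;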
  26. 05 May, 2016 1 commit
  27. 02 May, 2016 1 commit
    • New masters with slots are now targets of migration if others are. · 4fdde78c
      antirez authored
      This fixes issue #3043.
      
      Before this fix, after a complete resharding of a master's slots
      to other nodes, the master remains empty and its slaves migrate away
      to other masters with non-zero slots. However the old master, now empty,
      is no longer considered a target for migration, because the system has
      no way to tell it had slaves in the past.
      
      This fix leaves the algorithm used in the past untouched, but adds a
      new rule: when a new or old master which is empty and without slaves
      is assigned its first slot, if other masters in the cluster have
      slaves, it is automatically considered to be a target for replica
      migration.
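
      A hedged sketch of where the new rule can live, following cluster.c
      naming (bitmapTestBit/bitmapSetBit operate on the node's slot bitmap;
      clusterMastersHaveSlaves() stands for a helper that reports whether any
      master currently has slaves):

          /* Set the slot bit and return the old value. */
          int clusterNodeSetSlotBit(clusterNode *n, int slot) {
              int old = bitmapTestBit(n->slots, slot);
              bitmapSetBit(n->slots, slot);
              if (!old) {
                  n->numslots++;
                  /* When a master gets its first slot, flag it as a valid
                   * target for replica migration, but only if at least one
                   * other master currently has slaves. */
                  if (n->numslots == 1 && clusterMastersHaveSlaves())
                      n->flags |= CLUSTER_NODE_MIGRATE_TO;
              }
              return old;
          }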
  28. 02 Feb, 2016 1 commit
  29. 01 Feb, 2016 1 commit
  30. 29 Jan, 2016 6 commits