1. 13 Dec, 2016 1 commit
    • antirez's avatar
      Replication: fix the infamous key leakage of writable slaves + EXPIRE. · 04542cff
      antirez authored
      BACKGROUND AND USE CASE
      
      Redis slaves are normally read only, however they support a
      "writable" mode which is very handy when scaling reads on slaves
      that actually need write operations in order to access data. For
      instance, imagine having slaves replicating certain Sets keys from
      the master. When accessing the data on the slave, we want to
      perform intersections between such Sets values. However we don't
      want to compute the intersection each time: caching it for some
      time is often a good idea.
      
      To do so, it is possible to set up a slave as a writable slave and
      perform the intersection on the slave side, perhaps setting a TTL
      on the resulting key so that it will expire after some time.
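
      For illustration, here is a minimal sketch of this use case using
      hiredis. The slave address, key names and TTL below are made up
      for the example (they are not part of the original commit), and
      the slave is assumed to be configured with "slave-read-only no":

          #include <stdio.h>
          #include <hiredis/hiredis.h>

          int main(void) {
              /* Connect to a slave that was made writable
               * (e.g. "slave-read-only no" in its configuration). */
              redisContext *c = redisConnect("127.0.0.1", 6380);
              if (c == NULL || c->err) {
                  fprintf(stderr, "connection error\n");
                  return 1;
              }

              /* Cache the intersection of two replicated Sets into a
               * key that exists only on the slave... */
              redisReply *r = redisCommand(c, "SINTERSTORE cache:inter set:a set:b");
              if (r) freeReplyObject(r);

              /* ...and let it expire after 60 seconds. This is exactly
               * the kind of key that, before this fix, would leak
               * forever on the slave. */
              r = redisCommand(c, "EXPIRE cache:inter 60");
              if (r) freeReplyObject(r);

              redisFree(c);
              return 0;
          }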
      
      THE BUG
      
      Problem: in order to have consistent replication, expiring keys in
      Redis replication is up to the master, which synthesizes DEL
      operations to send in the replication stream. However slaves
      logically expire keys by hiding them from clients' read attempts,
      so that if the master did not promptly send a DEL, the client
      still sees logically expired keys as non existing.
      
      Because slaves don't actively expire keys by actually evicting
      them, but just mask them from the point of view of read
      operations, if a key is created in a writable slave and an expire
      is set, the key will be leaked forever:
      
      1. No DEL will be received from the master, which does not know about
      such a key at all.
      
      2. No eviction will be performed by the slave, since eviction must
      be disabled there: it's up to the master, otherwise data
      consistency is lost.
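
      To make the masking concrete, here is a minimal standalone sketch
      of the read-path behavior described above. It is not the actual
      Redis code: the names and structures are illustrative only.

          /* Sketch of slave-side "logical" expiration on the read path. */
          #include <stddef.h>
          #include <time.h>

          typedef struct kvEntry {
              const char *key;
              void *value;
              long long expire_at_ms;   /* -1 if no TTL is set */
          } kvEntry;

          static long long now_ms(void) {
              return (long long)time(NULL) * 1000;
          }

          static int keyIsLogicallyExpired(const kvEntry *e) {
              return e->expire_at_ms != -1 && e->expire_at_ms <= now_ms();
          }

          /* On a master an expired key would be evicted here and a DEL
           * propagated to the slaves. On a slave we must NOT delete: we
           * only hide the key from the reader and wait for the master's
           * DEL to arrive in the replication stream. */
          void *lookupForRead(kvEntry *e, int is_slave) {
              if (e == NULL) return NULL;
              if (keyIsLogicallyExpired(e)) {
                  if (is_slave) return NULL;   /* mask only, no eviction */
                  /* master: evict and propagate DEL (omitted here) */
                  return NULL;
              }
              return e->value;
          }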
      
      THE FIX
      
      In order to fix the problem, the slave should be able to tag, in
      some way, keys that were created on the slave side and have an
      expire set.
      
      My solution involved using a dedicated additional dictionary,
      created by the writable slave only if needed. The dictionary is
      obviously keyed by the key name we need to track: all the keys
      that are set with an expire directly by a client writing to the
      slave are tracked.
      
      The value in the dictionary is a bitmap of all the DBs where such
      a key name needs to be tracked, so that we can use a single
      dictionary to track keys in all the DBs used by the slave (this
      actually limits the solution to the first 64 DBs, but the default
      in Redis is to use 16 DBs).
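
      Here is a minimal sketch of the idea. The names are illustrative
      and a real implementation would use a proper hash table; a tiny
      fixed-size array stands in for the dictionary so the example is
      self-contained:

          #include <stdint.h>
          #include <string.h>
          #include <stdio.h>

          #define MAX_TRACKED 1024

          typedef struct trackedKey {
              char name[64];
              uint64_t dbs;    /* bit N set => track this name in DB N */
          } trackedKey;

          static trackedKey tracked[MAX_TRACKED];
          static int tracked_count = 0;

          /* Called from the EXPIRE / SET-with-TTL path when the instance
           * is a writable slave: remember that this key, in this DB, may
           * need slave-side expiration later. Only the first 64 DBs fit
           * in the bitmap. */
          void rememberSlaveKeyWithExpire(const char *key, int dbid) {
              if (dbid >= 64) return;
              for (int i = 0; i < tracked_count; i++) {
                  if (strcmp(tracked[i].name, key) == 0) {
                      tracked[i].dbs |= (1ULL << dbid);
                      return;
                  }
              }
              if (tracked_count < MAX_TRACKED) {
                  snprintf(tracked[tracked_count].name,
                           sizeof(tracked[0].name), "%s", key);
                  tracked[tracked_count].dbs = 1ULL << dbid;
                  tracked_count++;
              }
          }

          /* Periodically called on the slave: for every tracked name,
           * check each DB whose bit is set and delete the key there if
           * its TTL has elapsed (the actual expire check and deletion
           * are omitted from this sketch). */
          void expireSlaveKeys(void) {
              for (int i = 0; i < tracked_count; i++) {
                  for (int db = 0; db < 64; db++) {
                      if (tracked[i].dbs & (1ULL << db)) {
                          /* if (keyIsExpiredInDb(db, tracked[i].name))
                           *     deleteKeyInDb(db, tracked[i].name); */
                      }
                  }
              }
          }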
      
      This solution has both a small complexity and a small CPU penalty,
      which is actually zero when the feature is not used. The
      slave-side eviction is encapsulated in code which is not coupled
      with the rest of the Redis core, except for the hook used to track
      the keys.
      
      TODO
      
      I'm doing the first smoke tests to see if the feature works as expected:
      so far so good. Unit tests should be added before merging into the
      4.0 branch.
      04542cff
  2. 12 Dec, 2016 2 commits
  3. 06 Dec, 2016 2 commits
  4. 05 Dec, 2016 5 commits
    • antirez's avatar
      Modules: types doc updated to new API. · 16cce320
      antirez authored
      16cce320
    • antirez's avatar
      Modules: API doc updated (auto generated). · 37b6e16a
      antirez authored
      37b6e16a
    • antirez's avatar
      Geo: improve fuzz test. · b1fc06f7
      antirez authored
      The test now uses more diverse radius sizes: in particular, sizes
      near or greater than the whole Earth's surface are used, which are
      known to trigger edge cases. Moreover the PRNG seeding was
      probably resulting in the same sequence being tested over and over
      again, so it now seeds using the current unix time in
      milliseconds.
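
      The test itself lives in the Tcl test suite; the following is just
      a small C sketch of the seeding idea, with an illustrative radius
      range: reseeding with millisecond resolution avoids replaying the
      same pseudo-random sequence across runs, and radii up to roughly
      half the Earth's circumference exercise the planetary-scale edge
      cases:

          #include <stdio.h>
          #include <stdlib.h>
          #include <sys/time.h>

          /* Seed the PRNG with the current unix time in milliseconds,
           * so that two runs started within the same second still get
           * different sequences. */
          static void seed_with_ms(void) {
              struct timeval tv;
              gettimeofday(&tv, NULL);
              srand((unsigned)((long long)tv.tv_sec * 1000 +
                               tv.tv_usec / 1000));
          }

          /* Pick a radius in meters anywhere from one meter up to about
           * half the Earth's circumference (the exact range here is
           * illustrative). */
          static double random_radius_m(void) {
              return 1.0 + (rand() / (double)RAND_MAX) * 20000000.0;
          }

          int main(void) {
              seed_with_ms();
              for (int i = 0; i < 5; i++)
                  printf("radius: %.0f m\n", random_radius_m());
              return 0;
          }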
      
      Related to #3631.
      b1fc06f7
    • antirez's avatar
      Geo: fix computation of bounding box. · 001138ae
      antirez authored
      A bug was reported in the context of issue #3631. The root cause of the
      bug was that certain neighbor boxes were zeroed after the "inside the
      bounding box or not" check, simply because the bounding box computation
      function was wrong.
      
      A few debugging messages were improved and moved to other parts of
      the code. A check to avoid steps=0 was added, but it is unrelated
      to this issue and I did not verify that it was an actual bug in
      practice.
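
      For reference, here is a standalone sketch of how a
      latitude/longitude bounding box can be derived from a center and a
      radius. It is an approximation for illustration, not the fixed
      Redis function; note how the longitude delta must be scaled by the
      cosine of the latitude:

          #include <math.h>
          #include <stdio.h>

          #ifndef M_PI
          #define M_PI 3.14159265358979323846
          #endif

          #define EARTH_RADIUS_M 6372797.560856
          #define DEG_TO_RAD(d) ((d) * M_PI / 180.0)
          #define RAD_TO_DEG(r) ((r) * 180.0 / M_PI)

          /* Compute the min/max longitude and latitude (degrees) of the
           * box containing every point within 'radius' meters of
           * (lon, lat). The longitude delta must be scaled by the cosine
           * of the latitude, otherwise the box is too narrow away from
           * the equator; that kind of error makes the later "is this
           * neighbor inside the bounding box" checks go wrong. */
          void bounding_box(double lon, double lat, double radius,
                            double *min_lon, double *max_lon,
                            double *min_lat, double *max_lat) {
              double lat_delta = RAD_TO_DEG(radius / EARTH_RADIUS_M);
              double lon_delta =
                  RAD_TO_DEG(radius / (EARTH_RADIUS_M * cos(DEG_TO_RAD(lat))));
              *min_lon = lon - lon_delta;
              *max_lon = lon + lon_delta;
              *min_lat = lat - lat_delta;
              *max_lat = lat + lat_delta;
          }

          int main(void) {
              double min_lon, max_lon, min_lat, max_lat;
              bounding_box(15.087269, 37.502669, 200000,
                           &min_lon, &max_lon, &min_lat, &max_lat);
              printf("lon [%f, %f] lat [%f, %f]\n",
                     min_lon, max_lon, min_lat, max_lat);
              return 0;
          }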
      001138ae
  5. 02 Dec, 2016 1 commit
  6. 01 Dec, 2016 2 commits
  7. 30 Nov, 2016 4 commits
  8. 29 Nov, 2016 5 commits
  9. 28 Nov, 2016 2 commits
    • antirez's avatar
      PSYNC2: stop sending newlines to sub-slaves when master is down. · eab865a0
      antirez authored
      This actually includes two changes:
      
      1) No newlines are sent to keep the master-slave link up when the
      upstream master is down. Doing this is dangerous because the
      sub-slave has often received the replication protocol only up to
      half a command, so it can't receive newlines without desyncing the
      replication link, even with the code in place to cancel out the
      bytes that PSYNC2 was using. Moreover this is probably also not
      needed/sane, because the slave can keep serving requests anyway,
      and if it's configured not to serve stale data, it's actually a
      good idea to break the link.
      
      2) When a +CONTINUE with a different replication ID is received,
      we now break the connection with the sub-slaves: they need to be
      notified as well. This was part of the original specification but
      for some reason it was not implemented in the code, and was later
      found as a PSYNC2 bug during the integration testing. A sketch of
      this handling follows below.
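
      The following is a small illustrative sketch of the second point;
      the function and state names are hypothetical, not the actual
      Redis code:

          #include <stdio.h>
          #include <string.h>

          #define REPLID_LEN 40

          /* Replication ID this slave is currently replicating
           * (illustrative state). */
          static char current_replid[REPLID_LEN + 1] =
              "0000000000000000000000000000000000000000";

          /* Hypothetical hook: force our own sub-slaves to reconnect. */
          static void disconnectSubSlaves(void) {
              printf("dropping sub-slave links so they learn the new ID\n");
          }

          /* Handle an extended "+CONTINUE <replid>" reply from the
           * master. Returns 0 on a malformed reply, 1 otherwise. */
          int handleContinueReply(const char *reply) {
              char newid[REPLID_LEN + 1];
              if (sscanf(reply, "+CONTINUE %40s", newid) != 1) return 0;
              if (strcmp(newid, current_replid) != 0) {
                  /* The master switched to a new history: adopt the new
                   * ID and drop sub-slaves so they resync in turn. */
                  snprintf(current_replid, sizeof(current_replid), "%s", newid);
                  disconnectSubSlaves();
              }
              return 1;
          }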
      eab865a0
    • antirez's avatar
      PSYNC2: Test (WIP). · 16559a02
      antirez authored
      This is the PSYNC2 test that helped find issues in the code, and
      that can still show a protocol desync from time to time. Work is
      in progress in order to find the issue. For now the test is not
      enabled in "make test" and must be run manually.
      16559a02
  10. 25 Nov, 2016 1 commit
  11. 24 Nov, 2016 2 commits
  12. 23 Nov, 2016 1 commit
    • antirez's avatar
      PSYNC2: bugfixing pre release. · 5b7d42ff
      antirez authored
      1. The master replication offset was cleared after switching the
      configuration to some other slave, since it was assumed you can't
      PSYNC after a switch. This is not the case anymore, and when we
      successfully PSYNC we need to have our offset untouched.
      
      2. Secondary replication ID was not reset to "000..." pattern at
      startup.
      
      3. A master in an error state, replying -LOADING or other
      transient errors, forced the slave to discard the cached master
      and do a full resync. This is now fixed (see the sketch after this
      list).
      
      4. Better logging of what's happening on failed PSYNCs.
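
      A minimal sketch of the idea behind point 3, with hypothetical
      names; the exact set of transient errors shown is illustrative:

          #include <string.h>
          #include <stdio.h>

          /* Return 1 if a PSYNC error reply is transient and worth
           * retrying later, 0 if it really rules out a partial
           * resynchronization. */
          static int psyncErrorIsTransient(const char *reply) {
              return strncmp(reply, "-LOADING", 8) == 0 ||
                     strncmp(reply, "-MASTERDOWN", 11) == 0;
          }

          /* Hypothetical decision point in the slave sync state machine. */
          void onPsyncError(const char *reply) {
              if (psyncErrorIsTransient(reply)) {
                  printf("transient error, keeping the cached master\n");
              } else {
                  printf("discarding cached master, will full resync\n");
              }
          }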
      5b7d42ff
  13. 18 Nov, 2016 3 commits
  14. 17 Nov, 2016 1 commit
  15. 16 Nov, 2016 3 commits
  16. 10 Nov, 2016 1 commit
    • antirez's avatar
      PSYNC2: Save replication ID/offset on RDB file. · 28c96d73
      antirez authored
      This means that stopping a slave and restarting it will still make
      it able to PSYNC with the master. Moreover the master itself will
      retain its ID/offset, in case it gets turned into a slave, or if a
      slave tries to PSYNC with it with an exactly up-to-date offset
      (otherwise there is no backlog).
      
      This change was possible thanks to PSYNC v2 that makes saving the current
      replication state much simpler.
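
      A sketch of the idea: the replication ID and offset are written as
      auxiliary fields when the RDB is produced (loading them back into
      the replication state at startup is symmetrical and omitted). The
      field names and the writeAuxField() helper below are illustrative,
      not the exact RDB code:

          #include <stdio.h>

          #define REPLID_LEN 40

          /* Illustrative replication state that must survive a restart. */
          typedef struct replState {
              char replid[REPLID_LEN + 1];
              long long master_repl_offset;
          } replState;

          /* Hypothetical helper: write one aux key/value pair. */
          static void writeAuxField(FILE *rdb, const char *key,
                                    const char *val) {
              fprintf(rdb, "%s=%s\n", key, val); /* stand-in encoding */
          }

          /* Persist the ID/offset so that, after a restart, the instance
           * can still attempt a partial resync instead of a full one. */
          void saveReplicationInfo(FILE *rdb, const replState *st) {
              char buf[32];
              writeAuxField(rdb, "repl-id", st->replid);
              snprintf(buf, sizeof(buf), "%lld", st->master_repl_offset);
              writeAuxField(rdb, "repl-offset", buf);
          }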
      28c96d73
  17. 09 Nov, 2016 2 commits
    • antirez's avatar
      PSYNC2: Wrap debugging code with if(0) · 4e5e366e
      antirez authored
      4e5e366e
    • antirez's avatar
      PSYNC2: different improvements to Redis replication. · 2669fb83
      antirez authored
      The gist of the changes is that now, partial resynchronizations
      between slaves and masters (without the need of a full resync with
      RDB transfer and so forth) work in a number of cases where this
      was impossible in the past. For instance:
      
      1. When a slave is promoted to master, the slaves of the old
      master can partially resynchronize with the new master.
      
      2. Chained slaves (slaves of slaves) can be moved to replicate to
      other slaves or the master itself, without requiring a full
      resync.
      
      3. The master itself, after being turned into a slave, is able to
      partially resynchronize with the new master, when it joins replication
      again.
      
      In order to obtain this, the following main changes were made:
      
      * Slaves also take a replication backlog, not just masters.
      
      * Same stream replication for all the slaves and sub-slaves. The
      replication stream is identical from the top level master to its
      slaves and is also the same from the slaves to their sub-slaves
      and so forth. This means that if a slave is later promoted to
      master, it has the same replication backlog, and can partially
      resynchronize with its slaves (that were previously slaves of the
      old master).
      
      * A given replication history is no longer identified by the
      `runid` of a Redis node. There is instead a `replication ID` which
      changes every time the instance has a new history no longer
      coherent with the past one. So, for example, slaves publish the
      same replication history as their master, however when they are
      turned into masters, they publish a new replication ID, but still
      remember the old ID, so that they are able to partially
      resynchronize with slaves of the old master (up to a given
      offset). See the sketch after this list.
      
      * The replication protocol was slightly modified so that a new extended
      +CONTINUE reply from the master is able to inform the slave of a
      replication ID change.
      
      * REPLCONF CAPA is used in order to notify masters that a slave is able
      to understand the new +CONTINUE reply.
      
      * The RDB file was extended with an auxiliary field that is able to
      select a given DB after loading in the slave, so that the slave can
      continue receiving the replication stream from the point it was
      disconnected without requiring the master to insert "SELECT" statements.
      This is useful in order to guarantee the "same stream" property, because
      the slave must be able to accumulate an identical backlog.
      
      * Slave pings to sub-slaves are now sent in a special form, when
      the top-level master is disconnected, in order not to interfere
      with the replication stream. We just use out-of-band "\n" bytes as
      in other parts of the Redis protocol.
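
      Below is a minimal sketch of the replication ID bookkeeping
      described in the list above. The structure and function names are
      illustrative, not the actual Redis ones: an instance keeps its
      current ID plus a secondary ID valid up to a given offset, and a
      promotion shifts the old ID into the secondary slot:

          #include <stdlib.h>
          #include <string.h>

          #define REPLID_LEN 40

          /* Illustrative replication identity of one instance. */
          typedef struct replIdent {
              char replid[REPLID_LEN + 1];    /* current history */
              char replid2[REPLID_LEN + 1];   /* previous history */
              long long second_replid_offset; /* validity limit of replid2 */
              long long master_repl_offset;   /* our current offset */
          } replIdent;

          /* Fill 'id' with a random 40-char hex string (illustrative). */
          static void genRandomId(char *id) {
              static const char *hex = "0123456789abcdef";
              for (int i = 0; i < REPLID_LEN; i++) id[i] = hex[rand() % 16];
              id[REPLID_LEN] = '\0';
          }

          /* Called when a slave is promoted to master: keep the old ID
           * around as the secondary ID, so slaves of the old master can
           * still partially resynchronize with us up to the recorded
           * offset. */
          void shiftReplicationId(replIdent *r) {
              memcpy(r->replid2, r->replid, sizeof(r->replid2));
              r->second_replid_offset = r->master_repl_offset + 1;
              genRandomId(r->replid);
          }

          /* A PSYNC request for 'reqid' at 'offset' can be served
           * partially if it matches either our current history, or the
           * previous one up to the point where it diverged. */
          int canPartialResync(const replIdent *r, const char *reqid,
                               long long offset) {
              if (strcmp(reqid, r->replid) == 0) return 1;
              if (strcmp(reqid, r->replid2) == 0 &&
                  offset <= r->second_replid_offset) return 1;
              return 0;
          }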
      
      An old design document is available here:
      
      https://gist.github.com/antirez/ae068f95c0d084891305
      
      However the implementation is not identical to the description
      because, during the work to implement it, different changes were
      needed in order to make things work well.
      2669fb83
  18. 02 Nov, 2016 2 commits