1. 28 Jul, 2016 1 commit
  2. 27 Jul, 2016 6 commits
    • Multiple GEORADIUS bugs fixed. · fdafe233
      antirez authored
      By grepping the continuous integration errors log, a number of
      GEORADIUS test failures were detected.
      
      Fortunately when a GEORADIUS failure happens, the test suite logs enough
      information in order to reproduce the problem: the PRNG seed,
      coordinates and radius of the query.
      
      By reproducing the issues, three different bugs were discovered and
      fixed in this commit. This commit also improves the already good
      reporting of the fuzzer and adds the failure vectors as regression
      tests.
      
      The issues found:
      
      1. We need larger squares around the poles in order to cover the area
      requested by the user. There were already checks to use a smaller
      step (larger squares), but the limit set (+/- 67 degrees) is not
      enough in certain edge cases, so 66 is used now (see the sketch at
      the end of this message).
      
      2. Even near the equator, when the search area center is very near the
      edge of the square, the north, south, east or west square may not be
      able to fully cover the specified radius. Now a test is performed at
      the edge of the initially guessed search area, and larger squares are
      used in case the test fails.
      
      3. Because of rounding errors between Redis and Tcl, sometimes the test
      signaled false positives. This is now addressed.
      
      Whenever possible the original code was improved a bit in other ways. A
      debugging example stanza was added in order to make the next debugging
      session simpler when the next bug is found.
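
      The following is a minimal standalone C sketch of check "1". The
      step-halving loop mirrors the general idea of the geohash step
      estimation, but the function name, formula and constants here are
      illustrative, not the exact geohash_helper.c code:

        #include <stdio.h>

        #define GEO_STEP_MAX 26

        /* Estimate the geohash step (bits per coordinate) so that a
         * single square covers 'radius_meters'. Illustrative only. */
        int geohashEstimateStep(double radius_meters, double lat) {
            const double MERCATOR_MAX = 20037726.37; /* meters */
            int step = 1;
            while (radius_meters < MERCATOR_MAX) {
                radius_meters *= 2;
                step++;
            }
            step -= 2; /* make the squares a bit larger, to be safe */

            /* The fix: squares get much smaller near the poles, so
             * beyond +/- 66 degrees (it was 67) decrease the step
             * further, i.e. use larger squares. */
            if (lat > 66 || lat < -66) {
                step--;
                if (lat > 80 || lat < -80) step--;
            }

            if (step < 1) step = 1;
            if (step > GEO_STEP_MAX) step = GEO_STEP_MAX;
            return step;
        }

        int main(void) {
            printf("step at the equator: %d\n",
                   geohashEstimateStep(5000, 0));
            printf("step at latitude 70: %d\n",
                   geohashEstimateStep(5000, 70));
            return 0;
        }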
    • Replication: when possible start RDB saving ASAP. · a1bfe22a
      antirez authored
      In a previous commit the replication code was changed in order to
      centralize the triggering of BGSAVE for replication in
      replicationCron(); however, after further testing, the 1 second delay
      imposed by this change is not acceptable.
      
      So now the BGSAVE is only delayed if the AOF rewriting process is
      active. However the previous commits made sure that replicationCron()
      is always able to trigger the BGSAVE when needed, making the code
      generally more robust.
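
      A minimal C sketch of the new decision, assuming simplified state;
      the field and function names are illustrative, not the actual
      replication.c symbols:

        #include <stdio.h>

        struct server_state {
            int rdb_child_pid;        /* -1: no RDB save is running */
            int aof_child_pid;        /* -1: no AOF rewrite is running */
            int rdb_bgsave_scheduled; /* picked up by replicationCron() */
        };

        static void startBgsaveForReplication(struct server_state *s) {
            (void)s;
            printf("forking the RDB save for replication now\n");
        }

        /* Called when a slave needs a full resynchronization. */
        void fullResyncNeeded(struct server_state *s) {
            if (s->aof_child_pid != -1) {
                /* AOF rewrite active: postpone. replicationCron() will
                 * start the BGSAVE as soon as the rewrite completes. */
                s->rdb_bgsave_scheduled = 1;
            } else if (s->rdb_child_pid == -1) {
                /* Otherwise start saving ASAP, without waiting for the
                 * next replicationCron() call. (If an RDB save is
                 * already in progress, the slave waits for it or
                 * attaches to it; omitted here.) */
                startBgsaveForReplication(s);
            }
        }

        int main(void) {
            struct server_state s = { -1, -1, 0 };
            fullResyncNeeded(&s); /* no AOF rewrite: starts at once */
            return 0;
        }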
      
      The new code is more similar to the initial @oranagra patch where the
      BGSAVE was delayed only if an AOF rewrite was in progress.
      
      Trivia: delaying the BGSAVE uncovered a minor Sentinel issue that is now
      fixed.
    • 7ca69aff
    • Sentinel: check Slave INFO state more often when disconnected. · 5b5e6520
      antirez authored
      During the initial handshake with the master, a slave will report a
      very high disconnection time from its master (since technically it
      was disconnected since forever, the current UNIX time in seconds is
      reported).
      
      However when the slave is connected again, Sentinel may re-scan the
      INFO output only after 10 seconds, which is a long time. During this
      time Sentinels will consider the instance unable to fail over, so a
      useless delay is introduced.
      
      Actually this hardly happened in practice, because when a slave's
      master is down the INFO period for slaves changes to 1 second.
      However when a manual failover is attempted immediately after adding
      slaves (as in the case of the Sentinel unit test), this problem may
      happen.
      
      This commit changes the INFO period to 1 second even when the slave's
      master is not down, but the slave reported being disconnected from
      the master (by publishing, the last time we checked, a master
      disconnection time field in INFO).
      
      This change is required as a result of an unrelated change in the
      replication code that adds a small delay in the master-slave first
      synchronization.
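
      A C sketch of the new period selection, with illustrative constants
      and field names rather than sentinel.c verbatim:

        #include <stdio.h>

        #define INFO_PERIOD      10000 /* ms, the default */
        #define INFO_PERIOD_FAST  1000 /* ms, for "urgent" slaves */

        struct sentinel_instance {
            int is_slave;
            int master_is_down;              /* its master is down */
            long long master_link_down_time; /* nonzero when the last
                                                INFO reported a master
                                                disconnection */
        };

        long long infoPeriod(struct sentinel_instance *ri) {
            /* Previously only a down master triggered the 1 second
             * period; now a slave reporting itself disconnected from
             * its master is also polled every second, so its recovery
             * is noticed quickly. */
            if (ri->is_slave &&
                (ri->master_is_down || ri->master_link_down_time > 0))
                return INFO_PERIOD_FAST;
            return INFO_PERIOD;
        }

        int main(void) {
            struct sentinel_instance ri = { 1, 0, 12345 };
            printf("INFO period: %lld ms\n", infoPeriod(&ri));
            return 0;
        }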
    • Avoid simultaneous RDB and AOF child process. · 21cffc26
      antirez authored
      This patch, written in collaboration with Oran Agra (@oranagra), is a
      companion to 780a8b1d. Together the two patches should prevent the
      AOF and RDB saving processes from being spawned at the same time.
      Previously, the conditions that could lead to two saving processes at
      the same time were:
      
      1. When AOF is enabled via CONFIG SET and an RDB saving process is
         already active.
      
      2. When the SYNC command decides to start an RDB saving process ASAP in
         order to serve a new slave that cannot partially resynchronize (but
         only if we have a disk target for replication; for diskless
         replication there is no such problem).
      
      Condition "1" is not very severe but "2" can happen often and is
      definitely good at degrading Redis performances in an unexpected way.
      
      The two commits have the effect of always spawning the RDB save for
      replication in replicationCron() instead of attempting to start an
      RDB save synchronously. Moreover, when a BGSAVE or AOF rewrite must
      be performed, they are instead just postponed using flags that will
      try to perform such operations ASAP.
      
      Finally the BGSAVE command was modified in order to accept a SCHEDULE
      option so that if an AOF rewrite is in progress, when this option is
      given, the command no longer returns an error, but instead schedules
      an RDB save for when it will be possible to start it.
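
      A hedged C sketch of that control flow; the struct, function shape
      and reply strings are illustrative, not the rdb.c implementation:

        #include <stdio.h>

        struct server_state {
            int aof_child_pid;        /* -1 when no AOF rewrite runs */
            int rdb_bgsave_scheduled; /* re-checked by the server cron */
        };

        /* Returns a reply string for the client. */
        const char *bgsaveCommand(struct server_state *s, int schedule) {
            if (s->aof_child_pid != -1) {
                if (schedule) {
                    /* With SCHEDULE: no error, just remember to start
                     * the RDB save once the AOF rewrite terminates. */
                    s->rdb_bgsave_scheduled = 1;
                    return "+Background saving scheduled";
                }
                return "-ERR can't BGSAVE while an AOF rewrite is in "
                       "progress";
            }
            /* ... fork the RDB saving child here as usual ... */
            return "+Background saving started";
        }

        int main(void) {
            struct server_state s = { 123, 0 }; /* AOF rewrite active */
            printf("%s\n", bgsaveCommand(&s, 1)); /* scheduled, no error */
            return 0;
        }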
    • Replication: start BGSAVE for replication always in replicationCron(). · 017378ec
      antirez authored
      This makes the replication code conceptually simpler by removing the
      synchronous BGSAVE trigger in syncCommand(). This also means that
      socket and disk BGSAVE targets are handled by the same code.
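
      As a rough sketch, assuming simplified state (the names paraphrase
      replication.c and are not the exact symbols), the centralized
      trigger looks like this:

        #include <stdio.h>

        struct repl_state {
            int rdb_child_pid;         /* -1 if no RDB child is active */
            int aof_child_pid;         /* -1 if no AOF child is active */
            int slaves_waiting_bgsave; /* slaves waiting for a full sync */
            int diskless;              /* socket instead of disk target */
        };

        /* The single place that starts a BGSAVE for replication, for
         * both the disk and the diskless (socket) target. */
        void replicationCron(struct repl_state *s) {
            if (s->rdb_child_pid == -1 && s->aof_child_pid == -1 &&
                s->slaves_waiting_bgsave > 0)
            {
                printf("starting BGSAVE with %s target\n",
                       s->diskless ? "socket" : "disk");
            }
        }

        int main(void) {
            struct repl_state s = { -1, -1, 2, 0 };
            replicationCron(&s); /* one save serving both slaves */
            return 0;
        }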
  3. 06 Jul, 2016 2 commits
  4. 05 Jul, 2016 2 commits
  5. 04 Jul, 2016 12 commits
  6. 30 Jun, 2016 3 commits
  7. 27 Jun, 2016 1 commit
    • Fix quicklistReplaceAtIndex() by updating the quicklist ziplist size. · 70419679
      antirez authored
      The quicklist keeps a cached copy of the ziplist representation size
      in bytes. The implementation must update this length every time the
      underlying ziplist changes. However quicklistReplaceAtIndex() failed
      to update the length.
      
      During LSET calls, the size of the ziplist blob and the cached size
      inside the quicklist diverged. Later, when this size is used in an
      authoritative way, for example during node splitting in order to copy
      the nodes, we end up with a duplicated node that may contain random
      garbage.
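
      A sketch of the missing step, assuming simplified types
      (quicklistNode and ziplistBlobLen mirror the real names, the rest
      is illustrative):

        #include <stdint.h>

        struct quicklistNode {
            unsigned char *zl; /* the node's ziplist blob */
            uint32_t sz;       /* cached ziplist size in bytes */
        };

        /* A ziplist stores its total byte size in the first 32-bit
         * header field, little endian. */
        uint32_t ziplistBlobLen(unsigned char *zl) {
            return (uint32_t)zl[0] | ((uint32_t)zl[1] << 8) |
                   ((uint32_t)zl[2] << 16) | ((uint32_t)zl[3] << 24);
        }

        /* The step quicklistReplaceAtIndex() was missing: after the
         * ziplist is modified in place, refresh the cached size, or it
         * diverges from the blob and later node splits copy the wrong
         * amount of memory. */
        void quicklistNodeUpdateSz(struct quicklistNode *node) {
            node->sz = ziplistBlobLen(node->zl);
        }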
      
      This commit should fix issue #3343, however several problems were
      found while reviewing the quicklist.c code in search of this bug, and
      they should be addressed sooner or later.
      
      For example:
      
      1. Keeping a cached ziplist length is fragile, since failing to update
      it leads to this kind of issue.
      
      2. The node splitting code needs auditing. For example it works only
      because of a side effect: ziplistDeleteRange() is able to cope with a
      wrong count of elements to remove. The code inside quicklist.c
      assumes that -1 means "delete till the end", while actually it is
      just a count of how many elements to delete, and an unsigned one at
      that. So -1 gets converted into the maximum integer, and just by
      chance the ziplist code stops deleting elements once there are no
      more to delete (see the small demonstration after this list).
      
      3. Node splitting is extremely inefficient: it copies the node and
      removes elements from both nodes even when only a single entry needs
      to move from one node to the other, or when the new resulting node is
      completely empty, so that there is nothing to copy and only a new
      node needs to be created.
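
      A tiny standalone demonstration of the pitfall in point "2", with an
      illustrative function name:

        #include <stdio.h>

        static void deleteRange(unsigned int num) {
            /* -1 arrives here as the maximum unsigned value: the real
             * deletion loop terminates only because the ziplist runs
             * out of entries to delete first. */
            printf("asked to delete %u entries\n", num);
        }

        int main(void) {
            deleteRange(-1); /* implicit conversion: on 32-bit unsigned
                                int this prints 4294967295 */
            return 0;
        }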
      
      However, at least for Redis 3.2, introducing fresh code inside
      quicklist.c may be even riskier, so instead I'm writing a better
      fuzzy tester to stress the internals a bit more, in order to
      anticipate other possible bugs.
      
      This bug was found using a fuzzy tester written after having some
      clue about where the bug could be. The tester eventually created a
      ~2000 command sequence able to always crash Redis. I then wrote a
      better version of the tester that automatically searched for the
      smallest sequence able to crash Redis. The resulting sequence was
      minimized further by removing random commands while checking that
      the server still crashed, down to a sequence of 7 commands. With
      this small sequence it was just a matter of filling the code with
      enough printf() calls to understand enough state to fix the bug.
  8. 17 Jun, 2016 2 commits
  9. 16 Jun, 2016 9 commits
    • Fix Sentinel pending commands counting. · 6ad0371c
      antirez authored
      The most commonly experienced effect of this bug was an inability of
      Redis to reconfigure old masters back to slaves after they became
      reachable again following a failover. This was due to a failure to
      reset the count of pending commands properly, so the master appeared
      forever down.
      
      The bug was introduced with the new Sentinel connection sharing
      feature of Redis 3.2, which is a lot more complex than the 3.0 code,
      but more scalable.
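
      A hedged sketch of the bookkeeping involved; the pending_commands
      field name matches sentinel.c, the rest is illustrative:

        struct instance_link {
            int pending_commands; /* commands sent, not yet answered */
            int disconnected;
        };

        /* When the shared link drops, the replies to the in-flight
         * commands will never arrive. Without resetting the counter the
         * instance looks permanently too busy to be queried, and is
         * never reported back as reachable. */
        void linkClosed(struct instance_link *link) {
            link->pending_commands = 0;
            link->disconnected = 1;
        }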
      
      Many thanks to people reporting the issue, and especially to
      @sskorgal for investigating the issue in depth.
      
      Hopefully closes #3285.
    • redis-cli: really connect to the right server. · 58f1d446
      antirez authored
      I recently introduced populating the autocomplete help array with the
      COMMAND command, if available. However this was performed before
      parsing the arguments, defaulting to instance 6379. Once the
      connection is performed, it remains stable.
      
      The effect is that if there is an instance running on port 6379,
      whatever port you specify is ignored and 6379 is connected to
      instead. The right port is selected only after a reconnection.
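
      A sketch of the ordering fix, with hypothetical stand-ins for the
      real redis-cli routines:

        #include <stdio.h>
        #include <stdlib.h>

        static int port = 6379; /* the default that masked the bug */

        static void parseOptions(int argc, char **argv) {
            if (argc > 1) port = atoi(argv[1]); /* user-specified port */
        }

        static void connectAndPopulateHelp(void) {
            /* Only now is it safe to issue COMMAND to build the
             * autocomplete entries: the target is the server the user
             * actually asked for. */
            printf("connecting to 127.0.0.1:%d\n", port);
        }

        int main(int argc, char **argv) {
            /* The fix is pure ordering: parse the arguments first,
             * then connect and populate the help array. */
            parseOptions(argc, argv);
            connectAndPopulateHelp();
            return 0;
        }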
      
      Close #3314.
    • Remove debug printing · b6007b32
      Jan-Erik Rediger authored
    • RESTORE: accept RDB dumps with older versions. · f592b4d3
      antirez authored
      Reference issue #3218.
      
      Checking the code I can't find a reason why the original RESTORE code
      was so opinionated about restoring only the current version. The code
      in `rdb.c` appears to be as capable as ever of restoring data from
      older versions of Redis, and the only place where the current version
      is needed in order to correctly restore data is while loading the
      opcodes, not the values themselves, as happens in the case of
      RESTORE.
      
      For the above reasons, this commit enables RESTORE to accept older
      versions of value payloads.
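
      A sketch of the relaxed check; the 2-byte little endian version
      footer matches the DUMP payload format, while RDB_VERSION here is
      just an example value and the function shape is illustrative:

        #include <stdint.h>

        #define RDB_VERSION 7 /* example: the version current at the time */

        /* Returns 1 if the DUMP payload version is acceptable. */
        int dumpVersionOk(const unsigned char *footer) {
            uint16_t rdbver = footer[0] | (footer[1] << 8);
            /* Before the change, (rdbver != RDB_VERSION) rejected older
             * payloads too. Now only payloads generated by a *future*
             * version are rejected. */
            return rdbver <= RDB_VERSION;
        }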
    • CLIENT error message was out of date · 047ced44
      oranagra authored
    • fix georadius returns multiple replies · 14e04847
      oranagra authored
    • Minor aesthetic fixes to PR #3264. · bd23ea3f
      antirez authored
      Comment format fixed + local variable renamed from camel case to
      underscore separators, as the Redis code base normally does (camel
      case is mostly used for global symbols like structure names, function
      names, global vars, ...).
    • check WRONGTYPE in BITFIELD before looping on the operations. · 2a3ee58e
      oranagra authored
      Optimization: look up the key only once, and grow the string at once
      to the maximum size needed. Fixes #3259 and #3221, and also adds an
      early return if a wrong type is discovered by SET.
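
      A hedged sketch of the shape of this optimization, not the actual
      t_string.c code: validate all operations first and compute the
      highest byte touched, so that the key lookup, the WRONGTYPE check
      and the string growth each happen once:

        #include <stddef.h>

        struct bitfieldOp {
            size_t offset_bits, width_bits;
            int is_write; /* SET/INCRBY as opposed to GET */
        };

        /* First pass over the parsed operations: no key access yet. */
        size_t maxByteNeeded(const struct bitfieldOp *ops, size_t numops,
                             int *has_write) {
            size_t maxbyte = 0;
            *has_write = 0;
            for (size_t i = 0; i < numops; i++) {
                size_t lastbit = ops[i].offset_bits +
                                 ops[i].width_bits - 1;
                size_t lastbyte = lastbit / 8;
                if (lastbyte + 1 > maxbyte) maxbyte = lastbyte + 1;
                if (ops[i].is_write) *has_write = 1;
            }
            /* The caller then looks the key up a single time, replies
             * with WRONGTYPE before executing any queued operation if
             * needed, and grows the string to 'maxbyte' in a single
             * reallocation. */
            return maxbyte;
        }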
    • fix crash in BITFIELD GET on non existing key or wrong type see #3259 · a2e27b81
      oranagra authored
      this was a bug in the recent refactoring: bee963c4
  10. 15 Jun, 2016 2 commits