1. 17 Sep, 2017 1 commit
    • Flush append only buffers before exiting. · b122cadc
      Oran Agra authored
      When the SHUTDOWN command is received it is possible that some of the
      recent commands were not yet flushed from the AOF buffer, and the
      server experiences data loss at shutdown.
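      A minimal sketch of the idea, assuming the helpers from aof.c
      (flushAppendOnlyFile() and the aof_fsync macro); this is an
      illustration of the shutdown path, not the verbatim patch:

          /* Before exiting on SHUTDOWN: if AOF is enabled, write out
           * whatever is still sitting in the AOF buffer and fsync it,
           * so the most recent commands are not lost. The argument 1
           * forces the flush even if an fsync is already in progress. */
          if (server.aof_state != AOF_OFF) {
              flushAppendOnlyFile(1);
              aof_fsync(server.aof_fd);
          }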
  2. 03 Aug, 2017 1 commit
  3. 28 Jul, 2017 1 commit
  4. 24 Jul, 2017 1 commit
  5. 23 Jul, 2017 2 commits
    • Modules: don't crash when Lua calls a module blocking command. · 31404355
      antirez authored
      Lua scripting does not support calling blocking commands; however, all
      the native Redis blocking commands are flagged with "s" (the no-script
      flag), so this is not possible at all. With modules there is no such
      mechanism to flag a command as not callable by the Lua scripting
      engine; moreover we cannot trust module users to comply every time: it
      is likely that modules with blocking commands will be released without
      such commands being flagged correctly, even if we provide a way to
      signal this fact.
      
      This commit attempts to address the problem in a short-term way, by
      detecting that a module is trying to block in the context of the Lua
      scripting engine client, and preventing it from doing so. The module
      will actually believe it is blocking as usual, but what happens is that
      the Lua script receives an error immediately, and the background call
      is ignored by the Redis engine (except for the cleanup callbacks, once
      it unblocks).
      
      Long term, the more likely solution is to introduce a new call,
      RedisModule_GetClientFlags(), so that a command can detect whether the
      caller is a Lua script, and return an error or avoid blocking at all.
      
      Since the blocking API is experimental right now, more work is needed
      in this regard in order to reach a point where blocking module commands
      and all the other Redis subsystems interact peacefully.
      
      Now the effect is like the following:
      
          127.0.0.1:6379> eval "redis.call('hello.block',1,5000)" 0
          (error) ERR Error running script (call to
          f_b5ba35ff97bc1ef23debc4d6e9fd802da187ed53): @user_script:1: ERR
          Blocking module command called from Lua script
      
      This commit fixes issue #4127 in the short term.
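      A minimal sketch of the short-term detection (a hedged illustration,
      not the verbatim patch: it assumes the internal CLIENT_LUA flag, the
      RedisModuleBlockedClient handle bc, and the usual networking helpers):

          /* When a module asks to block, refuse if the caller is the Lua
           * scripting engine client: do not block the real client, reply
           * with an error to the script, and let the module's callbacks be
           * ignored except for the cleanup ones. */
          if (c->flags & CLIENT_LUA) {
              bc->client = NULL;   /* No real client to unblock later. */
              addReplyError(c,
                  "Blocking module command called from Lua script");
          } else {
              bc->client = c;
              blockClient(c, BLOCKED_MODULE);
          }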
    • 5bfdfbe1
      antirez authored
  6. 20 Jul, 2017 3 commits
    • Make representClusterNodeFlags() more robust. · a3778f3b
      antirez authored
      This function failed when an internal-only flag was the only flag set
      in a node: the string was trimmed expecting a final comma before
      exiting the function, causing a crash. See issue #4142.
      Moreover, the generation of the flags representation, which is only
      needed at the DEBUG log level, was always performed: a waste of CPU
      time. This is fixed as well by this commit.
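      A minimal sketch of the robust approach (a hedged illustration with a
      reduced set of flags, not the verbatim patch):

          /* Append one token per printable flag; only strip the trailing
           * comma if something was actually written, otherwise emit
           * "noflags" instead of blindly removing the buffer's last byte. */
          sds representClusterNodeFlags(sds ci, uint16_t flags) {
              size_t orig_len = sdslen(ci);
              if (flags & CLUSTER_NODE_MYSELF) ci = sdscat(ci,"myself,");
              if (flags & CLUSTER_NODE_MASTER) ci = sdscat(ci,"master,");
              if (flags & CLUSTER_NODE_SLAVE)  ci = sdscat(ci,"slave,");
              if (flags & CLUSTER_NODE_PFAIL)  ci = sdscat(ci,"fail?,");
              if (flags & CLUSTER_NODE_FAIL)   ci = sdscat(ci,"fail,");
              if (flags & CLUSTER_NODE_NOADDR) ci = sdscat(ci,"noaddr,");
              if (sdslen(ci) == orig_len) ci = sdscat(ci,"noflags,");
              sdsIncrLen(ci,-1); /* Remove trailing comma: always there now. */
              return ci;
          }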
    • Fix two bugs in moduleTypeLookupModuleByID(). · b1c2e1a1
      antirez authored
      The function cache was not working at all, and the function returned
      wrong values if there were two or more modules exporting native data
      types.
      
      See issue #4131 for more details.
    • Fix wrong return value of clusterDelNodeSlots. · 9e7a8c02
      Leon Chen authored
  7. 18 Jul, 2017 1 commit
  8. 15 Jul, 2017 1 commit
  9. 14 Jul, 2017 5 commits
  10. 12 Jul, 2017 1 commit
    • Fix replication of SLAVEOF inside transaction. · e74f0aa6
      antirez authored
      In Redis 4.0 replication, with the introduction of PSYNC2, masters and
      slaves replicate commands to cascading slaves and to the replication
      backlog itself in a different way compared to the past.
      
      Masters actually replicate the effects of client commands.
      Slaves just propagate what they receive from masters.
      
      This mechanism can cause problems when the configuration of an instance
      is changed from master to slave inside a transaction. For instance
      we could send to a master instance the following sequence:
      
          MULTI
          SLAVEOF 127.0.0.1 0
          EXEC
          SLAVEOF NO ONE
      
      Before the fixes in this commit, the MULTI command used to be
      propagated into the replication backlog; however, after the SLAVEOF
      command the instance is a slave, so the EXEC implementation failed to
      also propagate the EXEC command. When the slaves of the above instance
      reconnected, they were incrementally synchronized by just sending a
      "MULTI". This put the master client (in the slaves) into MULTI state,
      breaking the replication.
      
      Notably even Redis Sentinel uses the above approach in order to
      guarantee that configuration changes are always performed together with
      rewrites of the configuration and with client disconnections. Sentinel
      does:
      
          MULTI
          SLAVEOF ...
          CONFIG REWRITE
          CLIENT KILL TYPE normal
          EXEC
      
      So this was a really problematic issue. However, even with the fix in
      this commit, which adds the final EXEC to the replication stream in
      case the instance was switched from master to slave during the
      transaction, the result is to increment the slave replication offset,
      so a successive reconnection with the new master will not permit a
      successful partial resynchronization: there is no way the new master
      can provide us with the backlog needed, since we incremented our offset
      to a value that the new master cannot have.
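      A minimal sketch of that extra propagation step (hedged: it assumes a
      was_master flag captured when EXEC starts and the
      feedReplicationBacklog() helper from replication.c; not the verbatim
      patch):

          /* At the end of EXEC: if a MULTI was already emitted into the
           * replication stream, but this instance stopped being a master
           * while inside the transaction (e.g. because of SLAVEOF), close
           * the pending MULTI in the backlog with an explicit EXEC. */
          if (must_propagate) {
              int is_master = server.masterhost == NULL;
              if (server.repl_backlog && was_master && !is_master) {
                  char *execcmd = "*1\r\n$4\r\nEXEC\r\n";
                  feedReplicationBacklog(execcmd, strlen(execcmd));
              }
          }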
      
      However the EXEC implementation waits to emit the MULTI, so that if the
      commands inside the transaction actually do not need to be replicated,
      no command propagation happens at all. From multi.c:
      
          if (!must_propagate && !(c->cmd->flags & (CMD_READONLY|CMD_ADMIN))) {
              execCommandPropagateMulti(c);
              must_propagate = 1;
          }
      
      The above code is the version already modified by this commit you are
      reading: now ADMIN commands also do not trigger the emission of MULTI.
      It is actually not clear why we do not just check for CMD_WRITE...
      Probably I wrote it this way in order to make the code more reliable:
      better to over-emit MULTI than to not emit it in time.
      
      So this commit should indeed fix issue #3836 (verified), however it looks
      like some reconsideration of this code path is needed in the long term.
      
      BONUS POINT: The reverse bug.
      
      Even in a read-only slave "B", in a replication setup like:

          A -> B -> C

      there are commands without the READONLY or the ADMIN flag that are also
      not flagged as WRITE commands. An example is just the PING command.
      
      So if we send B the following sequence:
      
          MULTI
          PING
          SLAVEOF NO ONE
          EXEC
      
      The result will be the reverse bug, where only the EXEC is emitted, but
      not the previous MULTI. However this apparently does not create
      problems in practice, but it is yet another acknowledgment of the fact
      that some work is needed here in order to make this code path less
      surprising.
      
      Note that there are many different approaches we could follow. For
      instance MULTI/EXEC blocks containing administrative commands may be
      allowed ONLY if all the commands are administrative ones, otherwise
      they could be denied. When allowed, the commands could simply never be
      replicated at all.
  11. 11 Jul, 2017 3 commits
  12. 10 Jul, 2017 4 commits
  13. 06 Jul, 2017 3 commits
  14. 05 Jul, 2017 4 commits
  15. 04 Jul, 2017 1 commit
    • Add symmetrical assertion to track c->reply_buffer infinite growth. · eddd8d34
      antirez authored
      Redis clients need to have an instantaneous idea of the amount of
      memory they are consuming (if the number is not exact it should at
      least be proportional to the actual memory usage). We do that by adding
      and subtracting the SDS length when pushing / popping from the
      client->reply list. However it is quite simple to introduce bugs in
      such a setup, by not keeping the objects in the list and the count in
      sync. For this reason, Redis has an assertion to track counts near
      2^64: those are always the result of the counter wrapping around
      because we subtract more than we add. This commit adds the symmetrical
      assertion: when the list is empty because we sent everything, the
      reply_bytes count should be zero. Thanks to the new assertion it should
      be simple to also detect the other problem, where the count slowly
      increases because of over-counting. The assertion adds a conditional in
      the code that sends the buffer to the socket, but it should not create
      any measurable performance slowdown: listLength() just accesses a
      structure field, and this code path is totally dominated by write(2).
      
      Related to #4100.
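      A minimal sketch of the symmetrical check, placed where the reply is
      written to the socket (hedged: field and helper names follow
      networking.c, but this is an illustration, not the verbatim patch):

          /* Once the reply list is empty, the accounted bytes must be
           * exactly zero; anything else means the counter drifted because
           * of over-counting. listLength() is O(1), so this costs nothing
           * compared to the dominating write(2). */
          if (listLength(c->reply) == 0)
              serverAssert(c->reply_bytes == 0);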
  16. 03 Jul, 2017 2 commits
    • fixed #4100 · 86e564e9
      Dvir Volk authored
    • Fix GEORADIUS edge case with huge radius. · b2cd9fca
      antirez authored
      This commit closes issue #3698, at least for now, since the root cause
      was not fixed: the bounding box function, for huge radiuses, does not
      return a correct bounding box, so there are points still within the
      radius that are left outside.
      
      So when using GEORADIUS queries with radiuses in the order of 5000 km or
      more, it was possible to see, at the edge of the area, certain points
      not correctly reported.
      
      Because the bounding box was used just as an optimization, and such
      huge radiuses are not common, for now the optimization is simply
      switched off when the radius approaches such magnitude.
      
      Three test cases found by the Continuous Integration test were added,
      so that we can easily trigger the bug again, both for regression
      testing and in order to properly fix it at some point in the future.
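      A minimal sketch of the workaround (a hedged illustration with made-up
      names; the threshold and helper are not the literal ones in geo.c):

          /* Above roughly 5000 km the computed bounding box may exclude
           * points that are actually inside the radius, so the pruning it
           * enables is simply turned off and every candidate point is
           * checked against the exact distance instead. */
          #define GEO_HUGE_RADIUS_METERS (5000.0 * 1000.0)

          static int geoBoundingBoxUsable(double radius_meters) {
              return radius_meters < GEO_HUGE_RADIUS_METERS;
          }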
  17. 30 Jun, 2017 3 commits
    • redis-cli --latency: ability to run non-interactively. · 26e638a8
      antirez authored
      This feature was proposed by @rosmo in PR #2643 and later redesigned in
      order to fit better with the other options for non-interactive modes of
      redis-cli. The idea is basically to allow collecting latency
      information in scripts, cron jobs or whatever, running for a limited
      time and then producing a single output.
    • Fix abort typo in Lua debugger help screen. · 7bad78bd
      antirez authored
    • Added GEORADIUS(BYMEMBER)_RO variants for read-only operations. · f8547e53
      antirez authored
      Issue #4084 shows how, because of a design error, GEORADIUS is a write
      command due to the STORE option. Because of this it does not work on
      read-only slaves, gets redirected to masters in Redis Cluster even when
      the connection is in READONLY mode, and so forth.
      
      Breaking backward compatibility at this stage, with Redis 4.0 in an
      advanced RC state, is problematic for the user base. The API may be
      fixed in the unstable branch soon, if we decide to do so in order to be
      more consistent, releasing Redis 5.0 with this incompatibility in the
      future. This is still unclear.
      
      However, the ability to easily scale GEO queries in slaves is too
      important, so this commit adds two read-only variants of the GEORADIUS
      and GEORADIUSBYMEMBER commands: GEORADIUS_RO and GEORADIUSBYMEMBER_RO.
      The commands are exactly like the original commands, but they do not
      accept the STORE and STOREDIST options.
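      For example (a hedged illustration; the key and coordinates are made
      up, following the usual GEOADD syntax):

          127.0.0.1:6379> GEOADD Sicily 13.361389 38.115556 "Palermo"
          (integer) 1
          127.0.0.1:6379> GEORADIUS_RO Sicily 15 37 200 km
          1) "Palermo"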
  18. 29 Jun, 2017 1 commit
  19. 27 Jun, 2017 1 commit
    • RDB modules values serialization format version 2. · 365dd037
      antirez authored
      The original RDB serialization format was not parsable without the
      module loaded, because the structure was managed only by the module
      itself. Moreover RDB is a streaming protocol in the sense that it is
      both produced in an append-only fashion, and is also sometimes directly
      sent to the socket (in the case of diskless replication).
      
      The fact that module values cannot be parsed without the relevant
      module loaded is a problem in many ways: RDB checking tools must have
      the modules loaded even for doing things not involving the value at
      all, like splitting an RDB into N RDBs by key or the like, or just
      checking the RDB for sanity.
      
      In theory module values could be just a blob of data with a prefixed
      length, in order for us to be able to skip them. However prefixing the
      values with a length would mean one of the following:
      
      1. To be able to write some data at a previous offset. This breaks
      streaming.
      2. To buffer values before outputting them. This breaks performance.
      3. To have some chunked RDB output format. This breaks simplicity.
      
      Moreover, the above solution still makes module values a totally opaque
      matter, with the following problems:
      
      1. The RDB check tool can just skip the value without being able to at
      least check the general structure. For datasets composed mostly of
      module values this means just checking the outer level of the RDB,
      without actually doing any check on most of the data itself.
      2. It is not possible to do any recovery or processing of data for
      which a module no longer exists in the future, or is unknown.
      
      So this commit implements a different solution. The modules RDB
      serialization API is composed of well defined calls to store integers,
      floats, doubles or strings. After this commit, the parts generated by
      the module API have a one-byte prefix for each of the above emitted
      parts, and there is a final EOF byte as well. So even if we don't know
      exactly how to interpret a module value, we can always parse it at a
      high level, check the overall structure, understand the types used to
      store the information, and easily skip the whole value.
      
      The change is backward compatible: older RDB files can still be loaded
      since the new encoding uses a new RDB type: MODULE_2 (of value 7).
      The commit also implements the ability to check RDB files for sanity,
      taking advantage of the new feature.
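      A minimal sketch of how a reader can skip such a value without the
      module loaded (hedged: the opcode names, their values and the rdb.c
      helpers are assumptions used for illustration, not the verbatim code):

          #define RDB_MODULE_OPCODE_EOF    0   /* End of module value. */
          #define RDB_MODULE_OPCODE_SINT   1   /* Signed integer. */
          #define RDB_MODULE_OPCODE_UINT   2   /* Unsigned integer. */
          #define RDB_MODULE_OPCODE_FLOAT  3   /* Float. */
          #define RDB_MODULE_OPCODE_DOUBLE 4   /* Double. */
          #define RDB_MODULE_OPCODE_STRING 5   /* String. */

          /* Walk the parts of an unknown module value: each part is
           * prefixed by its opcode, and the value ends with EOF, so the
           * structure can be validated and skipped without the module. */
          uint64_t opcode;
          while ((opcode = rdbLoadLen(rdb,NULL)) != RDB_MODULE_OPCODE_EOF) {
              if (opcode == RDB_MODULE_OPCODE_SINT ||
                  opcode == RDB_MODULE_OPCODE_UINT) {
                  rdbLoadLen(rdb,NULL);               /* Skip the integer. */
              } else if (opcode == RDB_MODULE_OPCODE_STRING) {
                  robj *o = rdbLoadStringObject(rdb); /* Load the string... */
                  decrRefCount(o);                    /* ...and discard it. */
              } else if (opcode == RDB_MODULE_OPCODE_DOUBLE) {
                  double d;
                  rdbLoadBinaryDoubleValue(rdb,&d);   /* Skip the double. */
              } else if (opcode == RDB_MODULE_OPCODE_FLOAT) {
                  float f;
                  rdbLoadBinaryFloatValue(rdb,&f);    /* Skip the float. */
              }
          }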
  20. 26 Jun, 2017 1 commit