1. 15 Mar, 2018 6 commits
  2. 14 Mar, 2018 1 commit
    • Cluster: ability to prevent slaves from failing over their masters. · 432bf477
      antirez authored
      This commit, in some parts derived from PR #3041 (which is no longer
      possible to merge because the user deleted the original branch),
      implements the ability of slaves to have a special configuration
      that prevents them from trying to start a failover when the master
      is failing.

      There are multiple reasons for wanting this, and the feature was
      requested in issue #3021 some time ago.
      
      The differences between this patch and the original PR are the
      following:
      
      1. The flag is saved/loaded in the node's configuration.
      2. The 'myself' node is now flag-aware; the flag is updated as needed
         when the configuration is changed via CONFIG SET.
      3. The flag name uses NOFAILOVER instead of NO_FAILOVER, for
         consistency with the existing NOADDR flag.
      4. The redis.conf documentation was rewritten.
      
      Thanks to @deep011 for the original patch.
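
      For reference, a hedged sketch of how the flag is controlled (the
      option name follows the redis.conf documentation mentioned above):

          # In redis.conf: this slave never attempts to fail over its master.
          cluster-slave-no-failover yes

          # At runtime; this also updates the NOFAILOVER flag of 'myself'.
          CONFIG SET cluster-slave-no-failover yes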
  3. 19 Feb, 2018 1 commit
    • Track number of logically expired keys still in memory. · ffde73c5
      antirez authored
      This commit adds two new fields in the INFO output, stats section:
      
      expired_stale_perc:0.34
      expired_time_cap_reached_count:58
      
      The first field is an estimate of the number of keys that are still
      in memory but are already logically expired. The reason why those
      keys are not yet reclaimed is that the active expire cycle can't
      spend more time on the process of reclaiming them, and at the same
      time nobody is accessing them. However, as the active expire cycle
      runs, it collects the stats needed to populate this INFO field, even
      though it will eventually have to return to the caller, either
      because of the time limit or because fewer than 25% of the keys in
      each given database are logically expired.
      
      Note that expired_stale_perc is a running average, where the current
      sample accounts for 5% and the history for 95%, so you'll see it
      changing smoothly over time.
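
      A minimal sketch of this kind of exponential moving average (the
      function name is illustrative, not the actual Redis identifier):

          /* Blend the latest sample into the running average:
           * 5% weight for the new sample, 95% for the history. */
          double update_stale_perc(double avg, double sample) {
              return sample * 0.05 + avg * 0.95;
          }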
      
      The other field, expired_time_cap_reached_count, counts the number
      of times the expire cycle had to stop because of the time limit,
      even if it was still finding a sizeable number of keys to expire.
      This allows people handling operations to understand whether the
      Redis server is usually able to collect keys fast enough during
      mass-expiration events. It is normal for this field to increment
      during mass expires, but otherwise it should very rarely increment.
      When instead it constantly increments, it means that the current
      workload is using a very significant percentage of CPU time to
      expire keys.
      
      This feature was created thanks to the hints of Rashmi Ramesh and
      Bart Robinson from Twitter. In private email exchanges, they noted how
      it was important to improve the observability of this parameter in the
      Redis server. In big deployments, the amount of keys in each server
      that are logically expired but not yet reclaimed may account for a
      very big amount of wasted memory.
  4. 14 Feb, 2018 2 commits
  5. 11 Jan, 2018 1 commit
  6. 29 Dec, 2017 1 commit
  7. 05 Dec, 2017 1 commit
    • add linkClient(): adds the client and caches the list node. · 62a4b817
      antirez authored
      We have this operation in two places: when caching the master and
      when linking a new client after client creation. By having an API
      for this we avoid errors caused by modifying one of the two places
      and forgetting the other. The function is also a good place to
      document why we cache the linked list node.
      
      Related to #4497 and #4210.
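
      A minimal sketch of the idea (assuming the usual server.clients list
      and a client_list_node field in the client structure):

          /* Link the client into server.clients and cache the list node
           * pointer, so that unlinking it later is O(1) instead of a
           * linear list scan. */
          void linkClient(client *c) {
              listAddNodeTail(server.clients, c);
              c->client_list_node = listLast(server.clients);
          }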
  8. 04 Dec, 2017 1 commit
    • Refactoring: improve luaCreateFunction() API. · 60d26acf
      antirez authored
      The function in its initial form, and after the fixes for the PSYNC2
      bugs, required code duplication in multiple spots. This commit
      modifies it in order to always compute the script name independently,
      and to return the SDS of the SHA of the body: this way it can be used
      in all the places, including SCRIPT LOAD, without duplicating the
      code that creates the Lua function name. Note that this requires
      re-computing the body SHA1 when EVAL sees a script for the first
      time, but this should not change scripting performance in any way,
      because defining a new script is a rare event that happens only the
      first time a script is seen, and the SHA1 computation is not a very
      slow process anyway for the typical Redis script, especially compared
      to the actual Lua byte compiling of the body.
      
      Note that the function used to assert() if a duplicated script was
      loaded; however, in two of the three call sites we now want the
      function to handle duplicated scripts just fine: this happens in
      SCRIPT LOAD and in RDB AUX "lua" loading. Moreover the assert was
      not defending against some obvious failure modes, so now the
      function always tests against already defined functions at start.
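
      A hedged sketch of the refactored shape (simplified and not the
      exact Redis source):

          /* Compute the f_<sha1> name from the body, return the SHA as an
           * SDS, and treat an already defined script as a no-op success. */
          sds luaCreateFunction(client *c, lua_State *lua, robj *body) {
              char funcname[43];
              funcname[0] = 'f'; funcname[1] = '_';
              sha1hex(funcname+2, body->ptr, sdslen(body->ptr));
              sds sha = sdsnewlen(funcname+2, 40);
              if (dictFind(server.lua_scripts, sha) != NULL)
                  return sha;   /* Duplicated script: nothing to do. */
              /* ... compile 'body' into a Lua function named funcname,
               * register it in server.lua_scripts, handle errors ... */
              return sha;
          }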
  9. 01 Dec, 2017 14 commits
  10. 30 Nov, 2017 2 commits
  11. 28 Nov, 2017 1 commit
    • Standardizes the 'help' subcommand · 59d52f7f
      Itamar Haber authored
      This adds a new `addReplyHelp` helper that's used by commands
      when returning a help text. The following commands have been
      touched: DEBUG, OBJECT, COMMAND, PUBSUB, SCRIPT and SLOWLOG.

      It also fixes the command table entry of OBJECT for the HELP
      option (after #4472 the command may have just 2 arguments) and
      improves the OBJECT HELP descriptions.

      See #4472.
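
      A minimal sketch of the intended usage pattern (the help lines below
      are illustrative, and the exact helper signature may differ):

          /* The help text is a NULL terminated array of lines handed to
           * the shared helper, instead of each command formatting its
           * own reply. */
          const char *help[] = {
              "ENCODING <key> -- Return the kind of internal representation.",
              "FREQ <key> -- Return the access frequency index of the key.",
              NULL
          };
          addReplyHelp(c, help);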
  12. 27 Nov, 2017 2 commits
    • LFU: do some changes about LFU to find hotkeys · 583c3147
      zhaozhao.zz authored
      Firstly, store the key's last access time in place of the LFU
      decrement time. The function LFUDecrAndReturn should only compute
      the decremented counter without updating the LFU fields; we update
      them in an explicit way. The counter is halved once for every
      multiple of server.lfu_decay_time that has elapsed.
      Every time a key is accessed we should update the LFU, which
      includes updating the access time and incrementing the counter,
      after calling the function LFUDecrAndReturn.
      If a key is overwritten, the LFU should also be updated.
      Then we can use the `OBJECT freq` command to get a key's frequency.
      LFUDecrAndReturn should be called in the `OBJECT freq` command as
      well, in case the key has not been accessed for a long time, because
      we update the access time only when the key is read or overwritten.
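
      A hedged sketch of the decrement logic described above (simplified;
      LFUTimeElapsed() and the field layout are assumptions based on the
      commit message, not the exact Redis source):

          /* Return the counter decayed by the elapsed time, halving it
           * once per elapsed server.lfu_decay_time period. The caller
           * updates the LFU fields explicitly. */
          unsigned long LFUDecrAndReturn(robj *o) {
              unsigned long ldt = o->lru >> 8;       /* Last access time. */
              unsigned long counter = o->lru & 255;  /* 8 bit counter. */
              unsigned long periods = server.lfu_decay_time ?
                  LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;
              while (periods-- && counter) counter /= 2;
              return counter;
          }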
    • LFU: change lfu* parameters to int · 53cea972
      zhaozhao.zz authored
  13. 12 Jul, 2017 1 commit
    • Fix replication of SLAVEOF inside transaction. · e74f0aa6
      antirez authored
      In Redis 4.0 replication, with the introduction of PSYNC2, masters and
      slaves replicate commands to cascading slaves and to the replication
      backlog itself in a different way compared to the past.
      
      Masters actually replicate the effects of client commands.
      Slaves just propagate what they receive from masters.
      
      This mechanism can cause problems when the configuration of an
      instance is changed from master to slave inside a transaction. For
      instance, we could send a master instance the following sequence:
      
          MULTI
          SLAVEOF 127.0.0.1 0
          EXEC
          SLAVEOF NO ONE
      
      Before the fixes in this commit, the MULTI command used to be
      propagated into the replication backlog; however, after the SLAVEOF
      command the instance is a slave, so the EXEC implementation failed
      to also propagate the EXEC command. When the slaves of the above
      instance reconnected, they were incrementally synchronized by just
      sending a "MULTI". This put the master client (in the slaves) into
      MULTI st...
  14. 10 Jul, 2017 1 commit
  15. 06 Jul, 2017 1 commit
  16. 30 Jun, 2017 1 commit
    • Added GEORADIUS(BYMEMBER)_RO variants for read-only operations. · f8547e53
      antirez authored
      Issue #4084 shows how, due to a design error, GEORADIUS is a write
      command because of the STORE option. Because of this it does not work
      on read-only slaves, gets redirected to masters in Redis Cluster even
      when the connection is in READONLY mode, and so forth.

      Breaking backward compatibility at this stage, with Redis 4.0 in an
      advanced RC state, is problematic for the user base. The API could be
      fixed in the unstable branch soon, if we decide to do so in order to
      be more consistent, with Redis 5.0 released with this incompatibility
      in the future. This is still unclear.
      
      However, the ability to easily scale GEO queries on slaves is too
      important, so this commit adds two read-only variants of the GEORADIUS
      and GEORADIUSBYMEMBER commands: GEORADIUS_RO and GEORADIUSBYMEMBER_RO.
      The commands behave exactly like the originals, but they do not
      accept the STORE and STOREDIST options.
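
      For illustration, a hedged usage sketch (the key name and the
      coordinates are made up):

          GEOADD Sicily 13.361389 38.115556 "Palermo"
          GEORADIUS_RO Sicily 15 37 200 km WITHCOORD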
  17. 27 Jun, 2017 1 commit
    • RDB modules values serialization format version 2. · 365dd037
      antirez authored
      The original RDB serialization format was not parsable without the
      module loaded, because the structure was managed only by the module
      itself. Moreover RDB is a streaming protocol in the sense that it is
      both produced in an append-only fashion, and is also sometimes
      directly sent to the socket (in the case of diskless replication).
      
      The fact that module values cannot be parsed without the relevant
      module loaded is a problem in many ways: RDB checking tools must have
      the modules loaded even for doing things not involving the values at
      all, like splitting an RDB into N RDBs by key or alike, or just
      checking the RDB for sanity.
      
      In theory module values could be just a blob of data with a prefixed
      length, in order for us to be able to skip them. However prefixing
      the values with a length would mean one of the following:

      1. Being able to write some data at a previous offset. This breaks
      streaming.
      2. Buffering values before outputting them. This breaks performance.
      3. Having some chunked RDB output format. This breaks simplicity.
      
      Moreover, the above solution still makes module values a totally
      opaque matter, with the following problems:

      1. The RDB check tool can just skip the value without being able to
      at least check the general structure. For datasets composed mostly of
      module values this means checking just the outer level of the RDB,
      without actually doing any check on most of the data itself.
      2. It is not possible to do any recovery or processing of data for
      which a module no longer exists in the future, or is unknown.
      
      So this commit implements a different solution. The modules RDB
      serialization API is composed of well defined calls to store integers,
      floats, doubles or strings. After this commit, the parts generated by
      the module API have a one-byte prefix for each of the above emitted
      parts, and there is a final EOF byte as well. So even if we don't know
      exactly how to interpret a module value, we can always parse it at a
      high level, check the overall structure, understand the types used to
      store the information, and easily skip the whole value.
      
      The change is backward compatible: older RDB files can still be loaded
      since the new encoding uses a new RDB type, MODULE_2 (of value 7).
      The commit also implements the ability to check RDB files for sanity
      taking advantage of the new feature.
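
      A hedged sketch of what the per-part opcodes could look like (the
      names mirror the description above and may not match the Redis
      source exactly):

          /* One-byte prefixes emitted before each part of a module value,
           * plus a terminating EOF opcode, so that a parser can walk and
           * skip the value without knowing the module's semantics. */
          #define RDB_MODULE_OPCODE_EOF    0   /* End of module value. */
          #define RDB_MODULE_OPCODE_SINT   1   /* Signed integer. */
          #define RDB_MODULE_OPCODE_UINT   2   /* Unsigned integer. */
          #define RDB_MODULE_OPCODE_FLOAT  3   /* 32 bit float. */
          #define RDB_MODULE_OPCODE_DOUBLE 4   /* 64 bit double. */
          #define RDB_MODULE_OPCODE_STRING 5   /* String. */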
  18. 16 Jun, 2017 1 commit
  19. 14 Jun, 2017 1 commit