1. 22 Jul, 2012 1 commit
    • Allow Pub/Sub in contexts where other commands are blocked. · 5d73073f
      antirez authored
      Redis loading data from disk, and a Redis slave disconnected from its
      master with serve-stale-data disabled, are two conditions where
      commands are normally refused by Redis, returning an error.
      
      However there is no reason to disable Pub/Sub commands as well, given
      that this layer does not interact with the dataset. To allow Pub/Sub in
      as many contexts as possible is especially interesting now that Redis
      Sentinel uses Pub/Sub of a Redis master as a communication channel
      between Sentinels.
      
      This commit allows Pub/Sub to be used in the above two contexts where
      it was previously denied.
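
      A minimal sketch of the kind of gate involved (hypothetical names, not
      the actual Redis code): while loading, or on a stale slave, only
      commands belonging to the Pub/Sub layer are let through.

          #include <stdio.h>
          #include <strings.h>

          /* Is this one of the commands of the Pub/Sub layer? */
          static int isPubSubCommand(const char *name) {
              const char *pubsub[] = {"subscribe", "unsubscribe", "psubscribe",
                                      "punsubscribe", "publish"};
              for (unsigned i = 0; i < sizeof(pubsub)/sizeof(*pubsub); i++)
                  if (strcasecmp(name, pubsub[i]) == 0) return 1;
              return 0;
          }

          /* Commands are normally refused while loading data from disk, or on
           * a disconnected slave with serve-stale-data disabled, unless they
           * are Pub/Sub commands, which never touch the dataset. */
          static int shouldReject(int loading, int stale_slave, const char *cmd) {
              return (loading || stale_slave) && !isPubSubCommand(cmd);
          }

          int main(void) {
              printf("GET     -> reject=%d\n", shouldReject(1, 0, "GET"));
              printf("PUBLISH -> reject=%d\n", shouldReject(1, 0, "PUBLISH"));
              return 0;
          }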
  2. 27 Jun, 2012 1 commit
    • REPLCONF internal command introduced. · 3a328978
      antirez authored
      The REPLCONF command is an internal command (not designed to be directly
      used by normal clients) that allows a slave to set some replication
      related state in the master before issuing SYNC to start the
      replication.
      
      The initial motivation for this command, and the only reason it is
      currently used by the implementation, is to let the slave instance
      communicate its listening port to the master, so that the master can
      show all the slaves with their listening ports in the "replication"
      section of the INFO output.
      
      This allows clients to auto-discover and query all the slaves attached
      to a master.
      
      Currently only a single option of the REPLCONF command is supported, and
      it is called "listening-port", so the slave now starts the replication
      process with something like the following chat:
      
          REPLCONF listening-port 6380
          SYNC
      
      Note that this works even if the master is an older version of Redis and
      does not understand REPLCONF, because the slave ignores the REPLCONF
      error.
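
      A minimal sketch of the slave-side logic (hypothetical helper names,
      not the actual Redis code): format the REPLCONF command sent before
      SYNC, and treat any error reply to it as non-fatal.

          #include <stdio.h>

          /* Build the REPLCONF command advertising the slave listening port. */
          static int formatReplconf(char *buf, size_t len, int listening_port) {
              return snprintf(buf, len, "REPLCONF listening-port %d\r\n",
                              listening_port);
          }

          /* Old masters reply with an error to the unknown REPLCONF command;
           * the slave ignores it and proceeds with SYNC anyway. */
          static int replconfErrorIsFatal(const char *error_reply) {
              (void)error_reply;
              return 0;
          }

          int main(void) {
              char cmd[64];
              formatReplconf(cmd, sizeof(cmd), 6380);
              printf("%s", cmd);   /* then "SYNC\r\n" is sent on the same link */
              printf("fatal=%d\n",
                     replconfErrorIsFatal("-ERR unknown command 'REPLCONF'"));
              return 0;
          }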
      
      In the future REPLCONF can be used for partial replication and other
      replication related features where there is the need to exchange
      information between master and slave.
      
      NOTE: This commit also fixes a bug: the INFO output already carried
      information about slaves, but the port reported was wrong, as it was
      obtained with getpeername(2), so it was actually just the ephemeral
      port used by the slave to connect to the master as a client.
  3. 21 Jun, 2012 1 commit
    • Fixed a timing attack on AUTH (Issue #560). · 31a1439b
      antirez authored
      The way we compared the authentication password using strcmp() allowed
      an attacker to gain information about the password using a well known
      class of attacks called "timing attacks".
      
      The bug appears to be hardly exploitable in practice on most modern
      systems running Redis, since even changing multiple bytes of the input
      at a time instead of one, the difference in running time is in the
      order of 10 nanoseconds, making it hard to exploit even on a LAN.
      However attacks always get better, so we are providing a fix ASAP.
      
      The new implementation uses two fixed length buffers and a constant time
      comparison function, with the goal of:
      
      1) Completely avoid leaking information about the content of the
      password, since the comparison is always performed between 512
      characters and without conditionals.
      2) Partially avoid leaking information about the length of the
      password.
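
      A minimal sketch of such a constant-time check (illustrative only and
      not the actual Redis code; the 512-byte buffer size follows the text
      above, the function name is an assumption):

          #include <string.h>

          /* Copy both strings into fixed 512-byte buffers and always compare
           * every byte, accumulating differences with OR, so the running time
           * does not depend on where the strings first differ. */
          static int constant_time_password_cmp(const char *a, const char *b) {
              char bufa[512] = {0}, bufb[512] = {0};
              unsigned char diff = 0;
              size_t alen = strlen(a), blen = strlen(b);

              /* Inputs longer than the buffers can never match. */
              if (alen >= sizeof(bufa) || blen >= sizeof(bufb)) return 1;
              memcpy(bufa, a, alen);
              memcpy(bufb, b, blen);
              for (size_t j = 0; j < sizeof(bufa); j++)
                  diff |= (unsigned char)(bufa[j] ^ bufb[j]);
              diff |= (alen != blen);   /* lengths must match too */
              return diff != 0;         /* 0 means "equal", like strcmp() */
          }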
      
      About "2" we still have a stage in the code where the real password and
      the user provided password are copied in the static buffers, we also run
      two strlen() operations against the two inputs, so the running time
      of the comparison is a fixed amount plus a time proportional to
      LENGTH(A)+LENGTH(B). This means that the absolute time of the operation
      performed is still related to the length of the password in some way,
      but there is no way to change the input in order to get a difference in
      the execution time in the comparison that is not just proportional to
      the string provided by the user (because the password length is fixed).
      
      Thus in practical terms the attacker should try to discover
      LENGTH(PASSWORD) by looking at the whole execution time of the AUTH
      command and trying to guess a proportionality between the whole
      execution time and the password length: this appears to be mostly
      unfeasible in the real world.
      
      Also, protecting against this attack is not very useful in the case of
      Redis: a brute force attack is feasible anyway if the password is too
      short, while with a long password it is not an issue that the attacker
      knows its length.
  4. 11 Jun, 2012 1 commit
    • Dump ziplist hex value on failed assertion. · ee789e15
      antirez authored
      The ziplist -> hashtable conversion code is triggered every time a hash
      value must be promoted to a full hash table because the number or size
      of its elements reached the threshold.
      
      If a problem in the ziplist causes the same field to be present
      multiple times, the assertion of successful addition of the element
      inside the hash table will fail, crashing the server with a failed
      assertion, but providing little information about the problem.
      
      This code adds a new logging function to perform the hex dump of binary
      data, and makes sure that the ziplist -> hashtable conversion code uses
      this new logging facility to dump the content of the ziplist when the
      assertion fails.
      
      This change was originally made in order to investigate issue #547.
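
      A minimal sketch of such a hex-dump logging helper (hypothetical names
      and sample bytes, not the actual Redis code):

          #include <stdio.h>

          /* Log a description followed by the buffer content as hexadecimal
           * byte values, so a corrupted ziplist can be inspected after a
           * failed assertion. */
          static void logHexDump(const char *descr, const void *buf, size_t len) {
              const unsigned char *p = buf;
              fprintf(stderr, "%s (%zu bytes): ", descr, len);
              for (size_t i = 0; i < len; i++) fprintf(stderr, "%02x", p[i]);
              fprintf(stderr, "\n");
          }

          int main(void) {
              unsigned char zl[] = {0x0b, 0x00, 0x00, 0x00, 0x0a, 0x00};
              logHexDump("ziplist being converted to hash table", zl, sizeof(zl));
              return 0;
          }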
  5. 25 May, 2012 1 commit
    • Four new persistence fields in INFO. A few renamed. · 33e1db36
      antirez authored
      The 'persistence' section of INFO output now contains four additional
      fields related to RDB and AOF persistence:
      
       rdb_last_bgsave_time_sec       Duration of latest BGSAVE in sec.
       rdb_current_bgsave_time_sec    Duration of current BGSAVE in sec.
       aof_last_rewrite_time_sec      Duration of latest AOF rewrite in sec.
       aof_current_rewrite_time_sec   Duration of current AOF rewrite in sec.
      
      The 'current' fields are set to -1 if a BGSAVE / AOF rewrite is not in
      progress. The 'last' fields are set to -1 if no previous BGSAVE / AOF
      rewrites were performed.
      
      Additionally a few fields in the persistence section were renamed for
      consistency:
      
       changes_since_last_save -> rdb_changes_since_last_save
       bgsave_in_progress -> rdb_bgsave_in_progress
       last_save_time -> rdb_last_save_time
       last_bgsave_status -> rdb_last_bgsave_status
       bgrewriteaof_in_progress -> aof_rewrite_in_progress
       bgrewriteaof_scheduled -> aof_rewrite_scheduled
      
      After the renaming, fields in the persistence section start with rdb_ or
      aof_ prefix depending on the persistence method they describe.
      The field 'loading' and related fields are not prefixed because they are
      unique for both the persistence methods.
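
      A minimal sketch of how a 'current duration' field of this kind can be
      computed (hypothetical names, not the actual Redis code):

          #include <stdio.h>
          #include <time.h>

          /* -1 when no BGSAVE is in progress, otherwise the number of seconds
           * elapsed since the background save started. */
          static long currentBgsaveTimeSec(int in_progress, time_t start) {
              return in_progress ? (long)(time(NULL) - start) : -1;
          }

          int main(void) {
              printf("rdb_current_bgsave_time_sec:%ld\n",
                     currentBgsaveTimeSec(0, 0));
              printf("rdb_current_bgsave_time_sec:%ld\n",
                     currentBgsaveTimeSec(1, time(NULL) - 12));
              return 0;
          }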
  6. 24 May, 2012 2 commits
    • New commands: BITOP and BITCOUNT. · 0bd6d68e
      antirez authored
      The motivation for these new commands is to be found in the usage of
      Redis for real time statistics. See the article "Fast real time metrics
      using Redis".
      
      http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/
      
      In general Redis strings, when used as bitmaps with the SETBIT/GETBIT
      commands, provide a very space-efficient and fast way to store
      statistics.
      For instance in a web application with users, every user can be
      associated with a key that shows every day in which the user visited the
      web service. This information can be really valuable to extract user
      behaviour information.
      
      With Redis bitmaps doing this is very simple: just say that a given day
      is 0 (the date the service was put online) and all the next days are 1,
      2, 3, and so forth, so with SETBIT it is possible to set the bit
      corresponding to the current day every time the user visits the site.
      
      It is possible to count the set bits on the fly, and this is extremely
      easy using a Lua script. However a fast native bit count operation can
      be useful, especially if it can operate on ranges, or when the string
      is small like in the case of days (even if you consider many years it
      is still very little data).
      
      For this reason BITCOUNT was introduced. The command counts the number
      of bits set to 1 in a string, with an optional range:
      
      BITCOUNT key [start end]
      
      The start/end parameters are similar to GETRANGE. If omitted the whole
      string is tested.
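
      A minimal sketch of the counting core behind BITCOUNT (illustrative
      only; the real implementation is expected to use a faster algorithm):

          #include <stdio.h>
          #include <string.h>

          /* Count the bits set to 1 in the byte range [start,end] of a string. */
          static long long bitcountRange(const unsigned char *s, size_t len,
                                         size_t start, size_t end) {
              long long count = 0;
              if (len == 0) return 0;
              if (end >= len) end = len - 1;
              for (size_t i = start; i <= end; i++) {
                  unsigned char byte = s[i];
                  while (byte) { count += byte & 1; byte >>= 1; }
              }
              return count;
          }

          int main(void) {
              const char *s = "foobar";
              printf("%lld\n", bitcountRange((const unsigned char *)s,
                                             strlen(s), 0, strlen(s) - 1));
              return 0;   /* prints 26 */
          }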
      
      Population counting is more useful when bit-level operations like AND,
      OR and XOR are available. For instance it is possible to check on how
      many days three given users all visited the site: just take the AND of
      their bitmaps, and then count the set bits.
      
      For this reason the BITOP command was introduced:
      
      BITOP [AND|OR|XOR|NOT] dest_key src_key1 src_key2 src_key3 ... src_keyN
      
      In the special case of NOT (that inverts the bits) only one source key
      can be passed.
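
      A minimal sketch of the AND case (hypothetical helper, equal-length
      inputs assumed for brevity):

          #include <stdio.h>

          /* dest = a AND b: counting the set bits of dest afterwards gives the
           * number of positions (e.g. days) set in both source bitmaps. */
          static void bitopAnd(unsigned char *dest, const unsigned char *a,
                               const unsigned char *b, size_t len) {
              for (size_t i = 0; i < len; i++) dest[i] = a[i] & b[i];
          }

          int main(void) {
              unsigned char user1[] = {0xF0, 0x0F};
              unsigned char user2[] = {0xFF, 0x01};
              unsigned char both[2];
              bitopAnd(both, user1, user2, sizeof(both));
              printf("%02x %02x\n", both[0], both[1]);   /* f0 01 */
              return 0;
          }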
      
      The judicious use of BITCOUNT and BITOP combined can lead to interesting
      use cases with very space efficient representation of data.
      
      The implementation provided is still not tested and optimized for speed,
      next commits will introduce unit tests. Later the implementation will be
      profiled to see if it is possible to gain an important amount of speed
      without making the code much more complex.
    • Allow an AOF rewrite buffer > 2GB (Fix for issue #504). · 47ca4b6e
      antirez authored
      During the AOF rewrite process, the parent process needs to accumulate
      the new writes in an in-memory buffer: when the child terminates the
      AOF rewriting process this buffer (that is, the difference between the
      dataset when the rewrite was started and the current dataset) is
      flushed to the new AOF file.
      
      We used to implement this buffer using an sds.c string, but sds.c has a
      2GB limit. Sometimes the dataset can be big enough, the amount of writes
      so high, and the rewrite process slow enough that we overflow the 2GB
      limit, causing a crash, documented on github by issue #504.
      
      In order to prevent this from happening, this commit introduces a new
      system to accumulate writes, implemented by a linked list of blocks of
      10 MB each, so that we also avoid paying the reallocation cost.
      
      Note that theoretically modern operating systems may implement realloc()
      simply as a remapping of the old pages, thus with very good performance,
      see for instance the mremap() syscall on Linux. However this is not
      always true, and jemalloc by default avoids doing this because there are
      issues with the current implementation of mremap().
      
      For this reason we are using a linked list of blocks instead of a single
      block that gets reallocated again and again.
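
      A minimal sketch of the block-list idea (illustrative only; names,
      struct layout and the omitted error handling are assumptions, only the
      10 MB block size comes from the text above):

          #include <stdlib.h>
          #include <string.h>

          #define BLOCK_SIZE (1024*1024*10)   /* 10 MB per block */

          typedef struct rwblock {
              size_t used;                    /* bytes used in buf[] */
              struct rwblock *next;
              char buf[BLOCK_SIZE];
          } rwblock;

          /* Append data to the tail block, allocating a fresh block when the
           * current one is full: no allocation is ever resized, and the total
           * buffered data is not limited to 2GB. Returns the new tail. */
          static rwblock *rwbufAppend(rwblock *tail, const char *data, size_t len) {
              while (len) {
                  if (tail == NULL || tail->used == BLOCK_SIZE) {
                      rwblock *b = calloc(1, sizeof(*b));  /* OOM check omitted */
                      if (tail) tail->next = b;
                      tail = b;
                  }
                  size_t n = len < BLOCK_SIZE - tail->used ? len
                                                           : BLOCK_SIZE - tail->used;
                  memcpy(tail->buf + tail->used, data, n);
                  tail->used += n; data += n; len -= n;
              }
              return tail;
          }

          int main(void) {
              rwblock *tail = rwbufAppend(NULL, "SET k v\r\n", 9);
              return tail->used == 9 ? 0 : 1;
          }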
      
      The changes in this commit lack testing, which will be performed before
      merging into the unstable branch. This fix will not enter 2.4 because it
      is too invasive. However 2.4 will log a warning when the AOF rewrite
      buffer is near the 2GB limit.
  7. 13 May, 2012 2 commits
    • Improvements for: Redis timer, hashes rehashing, keys collection. · 61daf891
      antirez authored
      A previous commit introduced the REDIS_HZ define that changes the
      frequency of the calls to the serverCron() Redis function. This commit
      improves different related things:
      
      1) Software watchdog: now the minimal period can be set according to
      REDIS_HZ. The minimal period is two times the timer period, that is:
      
          (1000/REDIS_HZ)*2 milliseconds
      
      2) The incremental rehashing is now performed in the expires dictionary
      as well.
      
      3) The activeExpireCycle() function was improved in different ways:
      
      - Now it checks if it already used too much time using microseconds
        instead of milliseconds for better precision.
      - The time limit is now calculated correctly, in the previous version
        the division was performed before the multiplication, resulting in a
        time limit of 0 if HZ was big enough (see the sketch after this
        list).
      - Databases with less than 1% of the hash table buckets filled are
        skipped, because getting random keys is too expensive in this
        condition.
      
      4) tryResizeHashTables() is now called at every timer call, so that we
         match the number of calls we do to the expired keys collection cycle.
      
      5) REDIS_HZ was raised to 100.
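
      A hedged illustration of the two calculations above (the exact original
      expressions may have differed; REDIS_EXPIRE_CPU_PERC is a hypothetical
      name for the percentage of CPU allowed per expire cycle):

          #include <stdio.h>

          #define REDIS_HZ 100
          #define REDIS_EXPIRE_CPU_PERC 25

          int main(void) {
              /* Software watchdog: minimal period is twice the timer period. */
              int watchdog_min_ms = (1000/REDIS_HZ)*2;                /* 20 ms */

              /* Expire cycle time limit in microseconds: with integer math,
               * dividing before multiplying collapses the percentage to 0. */
              long long period_us = 1000000/REDIS_HZ;                 /* 10000 */
              long long wrong = REDIS_EXPIRE_CPU_PERC/100*period_us;  /* 0     */
              long long right = period_us*REDIS_EXPIRE_CPU_PERC/100;  /* 2500  */

              printf("%d %lld %lld\n", watchdog_min_ms, wrong, right);
              return 0;
          }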
    • Redis timer interrupt frequency configurable as REDIS_HZ. · 94343492
      antirez authored
      Redis uses a function called serverCron() that is very similar to the
      timer interrupt of an operating system. This function is used to handle
      a number of asynchronous things, like active expired keys collection,
      clients timeouts, update of statistics, things related to the cluster
      and replication, triggering of BGSAVE and AOF rewrite process, and so
      forth.
      
      In the past the timer was called 1 time per second. At some point it was
      raised to 10 times per second, but it still was fixed and could not be
      changed even at compile time, because different functions called from
      serverCron() assumed a given fixed frequency.
      
      This commit makes the frequency configurable, so that it is simpler to
      pick a good tradeoff between the overhead of this function (that is
      usually very small) and the responsiveness of Redis during the few
      critical circumstances where a lot of work is done inside the timer.
      
      An example of such a critical condition is the mass-expire of a lot of
      keys in the same second. Up to a given percentage of CPU time is used to
      perform expired keys collection per expire cycle. Now, by changing the
      REDIS_HZ macro, it is possible to do less work per call but more times
      per second, in order to block the server for less time.
      
      If this patch works well in our tests it will enter Redis 2.6-final.
  8. 11 May, 2012 1 commit
    • More incremental active expired keys collection process. · 1dcc95d0
      antirez authored
      If a large amount of keys are all expiring at about the same time, the
      "active" expired keys collection cycle used to block as long as the
      percentage of already expired keys was >= 25% of the total population of
      keys with an expire set.
      
      This could block the server even for many seconds in order to reclaim
      memory ASAP. The new algorithm uses at most a small amount of
      milliseconds per cycle: even if this means reclaiming the memory less
      promptly, it also means a more responsive server.
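
      A minimal sketch of a time-bounded expire cycle (hypothetical names;
      the clock is checked every few iterations rather than on each one):

          #include <stdio.h>
          #include <sys/time.h>

          static long long ustime(void) {
              struct timeval tv;
              gettimeofday(&tv, NULL);
              return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
          }

          /* Delete up to 'pending' expired keys, but stop as soon as the time
           * budget for this cycle is exceeded, keeping the server responsive. */
          static int expireCycleSketch(int pending, long long timelimit_us) {
              long long start = ustime();
              int processed = 0;
              while (pending > 0) {
                  /* ... sample and delete one expired key here ... */
                  pending--; processed++;
                  if ((processed & 1023) == 0 &&
                      ustime() - start > timelimit_us) break;
              }
              return processed;
          }

          int main(void) {
              printf("processed %d keys\n", expireCycleSketch(1000000, 1000));
              return 0;
          }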
  9. 02 May, 2012 1 commit
  10. 21 Apr, 2012 1 commit
    • Limit memory used by big SLOWLOG entries. · d3701d27
      antirez authored
      Two limits are added:
      
      1) Up to SLOWLOG_ENTRY_MAX_ARGV arguments are logged.
      2) Up to SLOWLOG_ENTRY_MAX_STRING bytes per argument are logged.
      
      Additionally, slowlog-max-len now defaults to 128 (it was 1024).
      
      The number of remaining arguments / bytes is logged in the entry
      so that the user can understand better the nature of the logged command.
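
      A minimal sketch of the truncation logic (illustrative only; the
      constant values and output format are assumptions):

          #include <stdio.h>
          #include <string.h>

          #define SLOWLOG_ENTRY_MAX_ARGV   32
          #define SLOWLOG_ENTRY_MAX_STRING 128

          /* Keep at most MAX_ARGV arguments and MAX_STRING bytes per argument,
           * recording what was dropped so the entry still describes the
           * original command. */
          static void logSlowEntry(int argc, char **argv) {
              int keep = argc < SLOWLOG_ENTRY_MAX_ARGV ? argc
                                                       : SLOWLOG_ENTRY_MAX_ARGV;
              for (int i = 0; i < keep; i++) {
                  size_t len = strlen(argv[i]);
                  if (len > SLOWLOG_ENTRY_MAX_STRING)
                      printf("%.*s... (%zu more bytes)\n",
                             SLOWLOG_ENTRY_MAX_STRING, argv[i],
                             len - SLOWLOG_ENTRY_MAX_STRING);
                  else
                      printf("%s\n", argv[i]);
              }
              if (argc > keep) printf("... (%d more arguments)\n", argc - keep);
          }

          int main(void) {
              char *argv[] = {"SET", "key", "a-very-long-value"};
              logSlowEntry(3, argv);
              return 0;
          }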
  11. 13 Apr, 2012 2 commits
    • Stop access to global vars. Not configurable. · 6663653f
      antirez authored
      After considering the interaction between the ability to declare
      globals in scripts using the 'global' function, and the complexities
      related to handling replication and AOF in a sane way with globals AND
      the ability to turn protection on and off, we reconsidered the design.
      The new design makes clear that there is only one good way to write
      Redis scripts, that is, not using globals. In the rare cases where state
      must be retained across calls, a Redis key can be used.
    • 37b29ef2
  12. 10 Apr, 2012 1 commit
  13. 09 Apr, 2012 1 commit
  14. 07 Apr, 2012 1 commit
  15. 02 Apr, 2012 2 commits
  16. 31 Mar, 2012 1 commit
  17. 30 Mar, 2012 1 commit
  18. 29 Mar, 2012 1 commit
  19. 28 Mar, 2012 1 commit
  20. 27 Mar, 2012 2 commits
  21. 25 Mar, 2012 1 commit
    • New INFO field aof_delayed_fsync introduced. · c1d01b3c
      antirez authored
      This new field counts all the times that Redis, configured with AOF
      enabled and the fsync policy 'everysec', finds that the previous fsync
      performed by the background thread was not able to complete within two
      seconds, forcing Redis to perform a write against the AOF file while
      the fsync is still in progress (likely a blocking operation).
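
      A minimal sketch of how such a counter can be updated in the write path
      (hypothetical names and simplified logic, not the actual Redis code):

          /* With appendfsync everysec the fsync runs in a background thread.
           * If a write must be issued while that fsync has been pending for
           * two seconds or more, count the event. */
          static long long aof_delayed_fsync = 0;

          static void beforeAofWrite(int fsync_in_progress,
                                     long long now_sec, long long waiting_since) {
              if (fsync_in_progress && now_sec - waiting_since >= 2)
                  aof_delayed_fsync++;   /* reported by INFO as aof_delayed_fsync */
              /* ... proceed with write(2) against the AOF file ... */
          }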
  22. 20 Mar, 2012 1 commit
    • Support for read-only slaves. Semantic fixes. · f3fd419f
      antirez authored
      This commit introduces support for read-only slaves via redis.conf and
      the CONFIG GET/SET commands. Various semantic fixes are also
      implemented here:
      
      1) MULTI/EXEC with only read commands now works when the server is in a
      state where writes (or commands increasing memory usage) are not
      allowed. Before this patch everything inside a transaction would fail
      in these conditions.
      
      2) Scripts just calling read-only commands will work against read-only
      slaves, when the server is out of memory, or when persistence is in an
      error condition. Before the patch EVAL always failed in these
      conditions.
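
      A minimal sketch of the read-only slave gate (hypothetical flag and
      function names, not the actual Redis code):

          #include <stdio.h>

          #define CMD_WRITE (1<<0)   /* hypothetical per-command flag */

          /* Refuse a command only if this instance is a slave, slave-read-only
           * is enabled, the command writes to the dataset, and the caller is
           * not the master replication link. */
          static int rejectOnReadOnlySlave(int is_slave, int read_only,
                                           int cmd_flags, int from_master) {
              return is_slave && read_only &&
                     (cmd_flags & CMD_WRITE) && !from_master;
          }

          int main(void) {
              printf("%d\n", rejectOnReadOnlySlave(1, 1, CMD_WRITE, 0)); /* 1 */
              printf("%d\n", rejectOnReadOnlySlave(1, 1, 0, 0));         /* 0 */
              return 0;
          }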
  23. 14 Mar, 2012 1 commit
  24. 13 Mar, 2012 1 commit
  25. 08 Mar, 2012 3 commits
    • Instantaneous ops/sec figure in INFO output. · 250e7f69
      antirez authored
    • run_id added to INFO output. · 91d664d6
      antirez authored
      The Run ID is a field that identifies a single execution of the Redis
      server. It can be useful for many purposes as it makes it easy to detect
      whether the instance we are talking to is the same one, a different one,
      or one that was rebooted. An application of run_id will be in the
      partial synchronization of replication, where a slave may request a
      partial sync from a given offset only if it is talking with the same
      master. Another application is in failover and monitoring scripts.
    • clusterGetRandomName() generalized into getRandomHexChars() so that we can use... · 44f508f1
      antirez authored
      clusterGetRandomName() generalized into getRandomHexChars() so that we can use it for the run_id field as well.
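
      A minimal sketch of a getRandomHexChars()-style helper (illustrative
      only; the real function may use different random sources):

          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>
          #include <unistd.h>

          /* Fill a buffer with random hexadecimal characters, e.g. to build a
           * 40-character run_id or cluster node name. */
          static void getRandomHexCharsSketch(char *p, unsigned int len) {
              static const char charset[] = "0123456789abcdef";
              while (len--) *p++ = charset[rand() & 0x0F];
          }

          int main(void) {
              char run_id[41] = {0};
              srand((unsigned)(time(NULL) ^ getpid()));
              getRandomHexCharsSketch(run_id, 40);
              printf("run_id:%s\n", run_id);
              return 0;
          }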
  26. 07 Mar, 2012 4 commits
  27. 29 Feb, 2012 1 commit
  28. 28 Feb, 2012 3 commits
    • Better system for additional commands replication. · 78d6a22d
      antirez authored
      The new code uses a more generic data structure to describe redis
      operations. The new design allows for multiple alsoPropagate() calls
      within the scope of a single command, which is useful in different
      contexts. For instance, when there are multiple clients blocked in
      BRPOPLPUSH against the same list, and a variadic LPUSH is performed
      against this list, the blocked clients will all be served, and we
      should correctly replicate multiple LPUSH commands after the
      replication of the current command.
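
      A minimal sketch of such a generic operation record (hypothetical names
      and layout, not the actual Redis structures):

          #include <stdlib.h>

          #define PROPAGATE_AOF  (1<<0)
          #define PROPAGATE_REPL (1<<1)

          /* One additional operation to propagate after the current command. */
          typedef struct redisOpSketch {
              char **argv;        /* command name and arguments */
              int argc;
              int dbid;           /* database the operation applies to */
              int target;         /* PROPAGATE_AOF and/or PROPAGATE_REPL */
          } redisOpSketch;

          typedef struct {
              redisOpSketch *ops;
              int numops;
          } redisOpArraySketch;

          /* alsoPropagate()-style append: several extra commands can be queued
           * while a single command executes, then emitted after it. */
          static int opArrayAppend(redisOpArraySketch *oa, char **argv, int argc,
                                   int dbid, int target) {
              redisOpSketch *ops = realloc(oa->ops, sizeof(*ops)*(oa->numops+1));
              if (ops == NULL) return -1;
              oa->ops = ops;
              oa->ops[oa->numops] = (redisOpSketch){argv, argc, dbid, target};
              oa->numops++;
              return 0;
          }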
    • Added a new API to replicate an additional command after the replication of... · eeb34eff
      antirez authored
      Added a new API to replicate an additional command after the replication
      of the currently executed command, in order to propagate the LPUSH
      originating from RPOPLPUSH and indirectly from BRPOPLPUSH.
    • propagate() prototype added to redis.h · d8b1228b
      antirez authored