1. 29 Aug, 2018 6 commits
  2. 02 Aug, 2018 4 commits
  3. 30 Jul, 2018 4 commits
  4. 25 Jul, 2018 1 commit
  5. 23 Jul, 2018 1 commit
    • Adds memory information about the script's cache to INFO · faf3dbfc
      Itamar Haber authored
      Implementation notes: as INFO is "already broken", I didn't want to break it further. Instead of computing the server.lua_script dict size on every call, I'm keeping a running sum of the body's length and dict overheads.
      
      This implementation is naive as it **does not** take into consideration dict rehashing, but that inaccuracy pays off in speed ;)
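
      A minimal sketch of the running-sum idea in C (names and the per-entry
      overhead constant are illustrative, not the actual patch): the counter is
      adjusted whenever a script enters or leaves the cache, so INFO can report
      it without walking the dict.

      ```c
      #include <stddef.h>
      #include <string.h>

      /* Hypothetical per-entry overhead; the real value depends on the
       * dict implementation, and rehashing is deliberately ignored. */
      #define DICT_ENTRY_OVERHEAD 64

      static size_t lua_scripts_mem = 0;  /* running sum reported by INFO */

      /* Account for a script entering the cache (EVAL / SCRIPT LOAD). */
      static void scriptMemAdd(const char *body) {
          lua_scripts_mem += strlen(body) + DICT_ENTRY_OVERHEAD;
      }

      /* Account for the cache being emptied (SCRIPT FLUSH). */
      static void scriptMemReset(void) {
          lua_scripts_mem = 0;
      }
      ```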
      
      Demo time:
      
      ```bash
      $ redis-cli info memory | grep "script"
      used_memory_scripts:96
      used_memory_scripts_human:96B
      number_of_cached_scripts:0
      $ redis-cli eval "" 0 ; redis-cli info memory | grep "script"
      (nil)
      used_memory_scripts:120
      used_memory_scripts_human:120B
      number_of_cached_scripts:1
      $ redis-cli script flush ; redis-cli info memory | grep "script"
      OK
      used_memory_scripts:96
      used_memory_scripts_human:96B
      number_of_cached_scripts:0
      $ redis-cli eval "return('Hello, Script Cache :)')" 0 ; redis-cli info memory | grep "script"
      "Hello, Script Cache :)"
      used_memory_scripts:152
      used_memory_scripts_human:152B
      number_of_cached_scripts:1
      $ redis-cli eval "return redis.sha1hex(\"return('Hello, Script Cache :)')\")" 0 ; redis-cli info memory | grep "script"
      "1be72729d43da5114929c1260a749073732dc822"
      used_memory_scripts:232
      used_memory_scripts_human:232B
      number_of_cached_scripts:2
      $ redis-cli evalsha 1be72729d43da5114929c1260a749073732dc822 0
      "Hello, Script Cache :)"
      ```
  6. 20 Jul, 2018 2 commits
  7. 19 Jul, 2018 8 commits
  8. 17 Jul, 2018 1 commit
    • fix rare replication stream corruption with disk-based replication · d5559898
      Oran Agra authored
      The slave sends \n keepalive messages to the master while parsing the rdb,
      and later sends REPLCONF ACK once a second. Rarely, the master receives both
      a linefeed char and a REPLCONF in the same read, \n*3\r\n$8\r\nREPLCONF\r\n...,
      and it tries to trim two chars (\r\n) from the query buffer,
      trimming the '*' from *3\r\n$8\r\nREPLCONF\r\n...

      The master then tries to process a command starting with '3' and replies to
      the slave with a bunch of -ERR and one +OK. Although the slave silently
      ignores these (it only prints a log message), this corrupts the replication
      offset at the slave, since the slave increases its replication offset while
      the master does not.
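
      A simplified sketch of the kind of fix involved (the real change is in
      processInlineBuffer in networking.c; the function below is illustrative):
      trim exactly as many terminator bytes as were actually present, instead
      of always assuming \r\n.

      ```c
      #include <stddef.h>
      #include <string.h>

      /* Return how many bytes of buf to discard after consuming one inline
       * line, or 0 if no complete line arrived yet. A bare '\n' keepalive
       * must consume exactly one byte, not two, otherwise the first byte
       * of the following command gets eaten. */
      size_t consume_inline_line(const char *buf, size_t buflen) {
          const char *newline = memchr(buf, '\n', buflen);
          if (newline == NULL) return 0;       /* line not complete yet */

          size_t linefeed_chars = 1;           /* the '\n' itself */
          if (newline != buf && newline[-1] == '\r')
              linefeed_chars++;                /* optional preceding '\r' */

          size_t linelen = (size_t)(newline - buf) - (linefeed_chars - 1);
          /* ...parse the linelen bytes of the line here... */
          (void)linelen;

          return linelen + linefeed_chars;     /* line + real terminator */
      }
      ```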
      
      Other than the fix in processInlineBuffer, I made several other improvements
      while hunting this very rare bug.

      - When Redis replies with "unknown command", it now includes a portion of
        the arguments, not just the command name, so it is easier to understand
        what was received. In my case, on the slave side, it was -ERR, but the
        "arguments" were the interesting part (containing info on the error).
      - About a year ago I added code in addReplyErrorLength to print the error
        to the log in case of a reply to a master (since this string isn't
        actually transmitted to the master); that block now also prints a similar
        log message to indicate an error being sent from the master to the slave.
        Note that the slave is marked as CLIENT_SLAVE only after PSYNC is
        received, so this will not cause any harm for REPLCONF, and will only
        indicate problems that would corrupt the replication stream anyway.
      - In two places where c->reply was emptied, I also reset sentlen. This is
        a precaution (I did not actually see such a problem), since a non-zero
        sentlen would cause corruption to be transmitted on the socket.
  9. 16 Jul, 2018 1 commit
    • slave buffers were wasteful and incorrectly counted causing eviction · bf680b6f
      Oran Agra authored
      A) Slave buffers didn't count internal fragmentation and sds unused space,
         which caused them to induce eviction although we didn't mean for that.

      B) Slave buffers were consuming about twice the memory they actually needed.
      - This was mainly due to sdsMakeRoomFor growing to twice as much as needed
        each time, while networking.c stored no more than 16k per buffer
        (partially fixed recently in 237a38737).
      - Besides, it wasn't able to store half of a new string in one buffer and
        the other half in the next (so the above-mentioned fix helped mainly for
        small items).
      - Lastly, the sds buffers had up to 30% internal fragmentation: memory
        consumed but never used.
      
      C) Inefficient performance due to starting from a small string and
         reallocating many times.
      
      What I changed (see the sketch after this list):
      - Create dedicated buffers for the reply list, counting their size with
        zmalloc_size.
      - When creating a new reply node, preallocate it to at least 16k.
      - When appending a new reply to the buffer, first fill all the unused
        space of the previous node before starting a new one.
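
      A minimal sketch of that append logic (the node struct and list plumbing
      here are illustrative; the patch's real node is a clientReplyBlock whose
      size is measured with zmalloc_size):

      ```c
      #include <stdlib.h>
      #include <string.h>

      #define REPLY_CHUNK_BYTES (16*1024)  /* preallocate nodes to >= 16k */

      typedef struct replyBlock {
          struct replyBlock *next;
          size_t size;                     /* payload capacity */
          size_t used;                     /* bytes already filled */
          char buf[];                      /* payload follows the header */
      } replyBlock;

      typedef struct replyList { replyBlock *head, *tail; } replyList;

      /* Append a reply: first fill the unused tail space of the previous
       * node, then put the remainder into a fresh, preallocated node. */
      void replyAppend(replyList *l, const char *s, size_t len) {
          replyBlock *tail = l->tail;
          if (tail) {
              size_t avail = tail->size - tail->used;
              size_t copy = avail < len ? avail : len;
              memcpy(tail->buf + tail->used, s, copy);
              tail->used += copy;
              s += copy;
              len -= copy;
          }
          if (len) {
              size_t size = len < REPLY_CHUNK_BYTES ? REPLY_CHUNK_BYTES : len;
              replyBlock *node = malloc(sizeof(*node) + size);
              node->next = NULL;
              node->size = size;           /* real code uses zmalloc_size */
              node->used = len;
              memcpy(node->buf, s, len);
              if (l->tail) l->tail->next = node; else l->head = node;
              l->tail = node;
          }
      }
      ```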
      
      Other changes:
      - Expose the mem_not_counted_for_evict info field for the benefit of the
        test suite.
      - Add a test to make sure slave buffers are counted correctly and that
        they don't cause eviction.
  10. 14 Jul, 2018 1 commit
  11. 04 Jul, 2018 3 commits
  12. 03 Jul, 2018 2 commits
  13. 02 Jul, 2018 1 commit
  14. 01 Jul, 2018 1 commit
  15. 27 Jun, 2018 1 commit
  16. 21 Jun, 2018 1 commit
  17. 12 Jun, 2018 2 commits
    • Use a less aggressive query buffer resize policy. · 093ec57d
      antirez authored
      A user with many connections (10 thousand) on a single Redis server
      reports in issue #4983 that sometimes Redis is idle because at the same
      time many clients need to resize their query buffer according to the old
      policy.

      It looks like this was caused by the fact that we normally allow the
      query buffer to grow without problems up to PROTO_MBULK_BIG_ARG, but as
      soon as the client is idle we are immediately much stricter: a query
      buffer greater than 1024 bytes is already enough to trigger the resize.
      So, for instance, if most of the clients stop at the same time, this
      issue is easily triggered.
      
      This behavior looks odd; there should be a single clear limit past which
      we examine a query buffer to check whether it's time to resize it. This
      commit puts that limit at PROTO_MBULK_BIG_ARG, and performs the check
      either when the current usage is too big compared to the peak usage, or
      when the client is idle.

      Once the check is performed, wasting just a few kbytes is considered
      enough to proceed with the resize. This should fix the issue.
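
      A sketch of the resulting policy (simplified; field names and the exact
      thresholds are illustrative, loosely following a cron-based resize
      check):

      ```c
      #include <stddef.h>
      #include <time.h>

      #define PROTO_MBULK_BIG_ARG (32*1024)

      /* Illustrative client state; the real fields live in server.h. */
      typedef struct client {
          size_t querybuf_alloc;   /* current query buffer allocation */
          size_t querybuf_avail;   /* unused bytes inside the buffer  */
          size_t querybuf_peak;    /* recent peak usage               */
          time_t lastinteraction;  /* for idle detection              */
      } client;

      /* Return 1 if the query buffer should be shrunk. Buffers at or
       * below PROTO_MBULK_BIG_ARG are never touched, so idle clients
       * with ordinary buffers no longer trigger mass resizing. */
      int should_resize_querybuf(const client *c, time_t now) {
          if (c->querybuf_alloc <= PROTO_MBULK_BIG_ARG) return 0;

          int too_big_vs_peak = c->querybuf_alloc / (c->querybuf_peak + 1) > 2;
          int idle = (now - c->lastinteraction) > 2;
          if (!too_big_vs_peak && !idle) return 0;

          /* Only proceed if shrinking reclaims at least a few kbytes. */
          return c->querybuf_avail > 4*1024;
      }
      ```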
    • Streams: use non static macro node limits. · d01af7ab
      antirez authored
      Also add the concept of a size/items limit, instead of having only the
      number of bytes as a limit.
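
      A minimal sketch of the dual-limit idea (struct and function names are
      illustrative; in Redis these limits became server configuration fields
      along the lines of stream-node-max-bytes / stream-node-max-entries): a
      node is considered full when either non-zero limit is exceeded.

      ```c
      #include <stddef.h>
      #include <stdint.h>

      /* Runtime-configurable limits replacing compile-time macros. */
      struct stream_limits {
          size_t node_max_bytes;      /* 0 = no byte limit  */
          int64_t node_max_entries;   /* 0 = no entry limit */
      };

      /* Decide whether the current listpack node is full and a new
       * node should be started for the next entry. */
      int stream_node_full(const struct stream_limits *lim,
                           size_t node_bytes, int64_t node_entries) {
          if (lim->node_max_bytes && node_bytes >= lim->node_max_bytes)
              return 1;
          if (lim->node_max_entries && node_entries >= lim->node_max_entries)
              return 1;
          return 0;
      }
      ```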