1. 22 Aug, 2021 2 commits
  2. 20 Aug, 2021 1 commit
  3. 18 Aug, 2021 2 commits
  4. 15 Aug, 2021 1 commit
  5. 14 Aug, 2021 1 commit
  6. 12 Aug, 2021 3 commits
  7. 11 Aug, 2021 1 commit
      Add debian:oldoldstable build target for CI. (#9358) · 08c46f2b
      Yossi Gottlieb authored
      Making sure Redis builds properly on older compilers is important given the wide range of systems it is built for. So far Ubuntu 16.04 has been used for this purpose, but as it's getting phased out we'll move to `oldoldstable` Debian as an "old system" precursor.
  8. 10 Aug, 2021 6 commits
  9. 09 Aug, 2021 4 commits
      Sanitize dump payload: handle remaining empty key when RDB loading and restore command (#9349) · cbda4929
      sundb authored
      This commit mainly fixes empty keys due to RDB loading and restore command,
      which was omitted in #9297.
      
      1) When loading a quicklist, if all the ziplists in the quicklist are empty, NULL will be returned.
          If only some of the ziplists are empty, then we will skip the empty ziplists silently.
      2) When loading hash zipmap, if zipmap is empty, sanitization check will fail.
      3) When loading hash ziplist, if ziplist is empty, NULL will be returned.
      4) Add RDB loading test with sanitize.
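      A minimal C sketch of the loading behavior described above (helper names such as rdbLoadZiplist are illustrative, not the exact rdb.c code):

      ```c
      /* Illustrative sketch, not the exact loading code: empty ziplists are
       * skipped silently; if every ziplist turned out to be empty, the whole
       * object is dropped and NULL is returned to the caller. */
      while (len--) {
          unsigned char *zl = rdbLoadZiplist(rdb);   /* hypothetical helper */
          if (zl == NULL) return NULL;
          if (ziplistLen(zl) == 0) {                 /* empty ziplist: skip it */
              zfree(zl);
              continue;
          }
          quicklistAppendZiplist(o->ptr, zl);
      }
      if (quicklistCount(o->ptr) == 0) {             /* all ziplists were empty */
          decrRefCount(o);
          return NULL;
      }
      ```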
      Cleanup: createAOFClient uses createClient to avoid overlooked mismatches (#9338) · 0b643e93
      Qu Chen authored
      AOF fake client creation (createAOFClient) was doing similar work to createClient,
      with some minor differences, most of them unintended. This was dangerous and
      meant that many changes to createClient should also have been reflected in aof.c.
      
      This cleanup changes createAOFClient to call createClient with NULL, like we
      do in module.c and elsewhere.
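      A minimal sketch of the shape of this change (the flags shown are illustrative, not necessarily the exact set used):

      ```c
      /* Simplified sketch: the AOF fake client is built on top of createClient()
       * with a NULL connection, so new client fields get initialized in one place. */
      client *createAOFClient(void) {
          client *c = createClient(NULL);     /* no real connection behind it */
          c->id = CLIENT_ID_AOF;              /* dedicated id for the AOF client */
          c->flags |= CLIENT_DENY_BLOCKING;   /* commands replayed from AOF must not block */
          return c;
      }
      ```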
      Add SORT_RO command (#9299) · d3356bf6
      Eduardo Semprebon authored
      Add SORT_RO, a read-only variant of the SORT command (SORT without the STORE
      option), so it can be used in read-only workloads (replica, ACL, etc)
      Allow master to replicate command longer than replica's query buffer limit (#9340) · e8eeba7b
      Qu Chen authored
      The replication client no longer checks incoming command length against the client-query-buffer-limit. This makes the master able to replicate commands longer than the replica's configured client-query-buffer-limit.
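      A minimal sketch of the idea (simplified; the real check lives in the query buffer handling code):

      ```c
      /* Simplified sketch: the query buffer length check is skipped for the master
       * client, so replicated commands larger than the limit are still accepted. */
      if (!(c->flags & CLIENT_MASTER) &&
          sdslen(c->querybuf) > server.client_max_querybuf_len)
      {
          /* too large: close the client, as before */
          freeClientAsync(c);
          return;
      }
      ```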
  10. 08 Aug, 2021 3 commits
  11. 07 Aug, 2021 1 commit
  12. 06 Aug, 2021 1 commit
  13. 05 Aug, 2021 10 commits
      corrupt-dump-fuzzer test, avoid creating junk keys (#9302) · 3f3f678a
      Oran Agra authored
      The execution of the RPOPLPUSH command by the fuzzer created junk keys,
      which were later selected by RANDOMKEY and modified.
      This also meant that lists were statistically tested more than other
      types.
      
      Fix the fuzzer not to pass junk key names to RPOPLPUSH, and add a check
      that verifies the fuzzer does not add new keys, so that similar issues are
      detected in the future.
      Improvements to corrupt payload sanitization (#9321) · 0c90370e
      Oran Agra authored
      
      
      Recently we found two issues in the fuzzer tester: #9302 #9285
      After fixing them, more problems surfaced and this PR (as well as #9297) aims to fix them.
      
      Here's a list of the fixes
      - Prevent an overflow when allocating a dict hashtable
      - Prevent OOM when attempting to allocate a huge string
      - Prevent a few invalid accesses in listpack
      - Improve sanitization of listpack first entry
      - Validate integrity of stream consumer groups PEL
      - Validate integrity of stream listpack entry IDs
      - Validate ziplist tail followed by extra data which starts with 0xff
      Co-authored-by: sundb <sundbcn@gmail.com>
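      A rough C sketch of the first two items above (the helper name presizeHashTable and the exact bound are assumptions for illustration):

      ```c
      #include <limits.h>
      #include <stdint.h>

      /* Illustrative sketch: validate an untrusted element count read from an
       * RDB/RESTORE payload before pre-sizing a hash table, so the size math
       * cannot overflow and a huge allocation fails gracefully instead of aborting. */
      static int presizeHashTable(dict *d, uint64_t len) {
          if (len > LONG_MAX / sizeof(void *)) return C_ERR;    /* size would overflow */
          if (dictTryExpand(d, len) != DICT_OK) return C_ERR;   /* allocation may fail: report it */
          return C_OK;
      }
      ```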
      Sanitize dump payload: fix empty keys when RDB loading and restore command (#9297) · 8ea777a6
      sundb authored
      
      
      When we load an RDB or process a RESTORE command, if we encounter a length of 0, it will result in the creation of an empty key.
      This could either be a corrupt payload, or the result of a bug (see #8453)
      
      This PR mainly fixes the following:
      1) The RESTORE command will return a `Bad data format` error.
      2) When loading an RDB, we will silently discard the key.
      Co-authored-by: Oran Agra <oran@redislabs.com>
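      A minimal sketch of the two behaviors (names like objectIsEmpty and in_restore_command are hypothetical, for illustration only):

      ```c
      /* Illustrative sketch, not the exact code: a zero-length aggregate from a
       * RESTORE payload is rejected, while during RDB loading the key is silently
       * discarded instead of being created empty. */
      if (objectIsEmpty(o)) {                        /* hypothetical emptiness check */
          decrRefCount(o);
          if (in_restore_command) {
              addReplyError(c, "Bad data format");   /* RESTORE path: reject the payload */
              return;
          }
          continue;                                  /* RDB path: skip this key */
      }
      ```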
      Add debug config flag to print certain config values on engine crash (#9304) · 39a4a44d
      Madelyn Olson authored
      Add debug config flag to print certain config values on engine crash
      Make sure to execute SLAVEOF commands in the right order in the psync2 test. (#9316) · d0244bfc
      Binbin authored
      
      
      The psync2 test has failed several times recently.
      In #9159 we only solved half of the problem,
      i.e. reordering of the replica that's already connected to
      the newly promoted master.
      
      Consider this scenario:
      0 slaveof 2
      1 slaveof 2
      3 slaveof 2
      4 slaveof 1
      0 slaveof no one, became a new master got a new replid
      2 slaveof 0, partial resync and got the new replid
      3 reconnect 2, inherit the new replid
      3 slaveof 4, use the new replid and got a full resync
      
      And another scenario:
      1 slaveof 3
      2 slaveof 4
      3 slaveof 0
      4 slaveof 0
      4 slaveof no one, became a new master got a new replid
      2 reconnect 4, inherit the new replid
      2 slaveof 1, use the new replid and got a full resync
      
      So we should reattach the replicas in the right order,
      i.e. in the above example, if we had reattached 1, 3 and 0 to
      the new chain formed by 4 before trying to attach 2 to 1, it would have succeeded.
      
      This commit breaks the SLAVEOF loop into two loops (ideas from Oran).
      
      The first loop uses random to decide who replicates from whom.
      The second loop issues the actual SLAVEOF commands.
      In the second loop, we make sure to execute them in the right order,
      and after each SLAVEOF, wait for the link to be established before we proceed.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Add sentinel debug option command (#9291) · 63e2a6d2
      Wen Hui authored
      
      
      This makes it possible to tune many parameters that were previously hard coded.
      We don't intend these to be user configurable, but only used by tests to accelerate certain conditions which would otherwise take a long time and slow down the test suite.
      Co-authored-by: Lucas Guang Yang <l84193800@china.huawei.com>
      Add latency monitor sample when key is deleted via lazy expire (#9317) · ca559819
      menwen authored
      Fix the missing latency sample when a key is deleted by lazy expiration via expireIfNeeded().
      Also some refactoring of shared code.
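      A simplified sketch of the idea (the event name "expire-del" is shown for illustration):

      ```c
      /* Simplified sketch: wrap the deletion performed on lazy expiration with the
       * same latency sampling the active expire cycle already does. */
      mstime_t expire_latency;
      latencyStartMonitor(expire_latency);
      if (server.lazyfree_lazy_expire)
          dbAsyncDelete(db, key);
      else
          dbSyncDelete(db, key);
      latencyEndMonitor(expire_latency);
      latencyAddSampleIfNeeded("expire-del", expire_latency);
      ```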
      fix dict access broken by #9228 (#9319) · d32f8641
      yoav-steinberg authored
      dict struct memory optimizations (#9228) · 5e908a29
      yoav-steinberg authored
      Reduce the dict struct memory overhead:
      on 64-bit builds the dict size goes down from jemalloc's 96 byte bin to its 56 byte bin.
      
      summary of changes:
      - Remove `privdata` from callbacks and dict creation. (this affects many files, see "Interface change" below).
      - Meld `dictht` struct into the `dict` struct to eliminate struct padding. (this affects just dict.c and defrag.c)
      - Eliminate the `sizemask` field; it can be calculated from the size when needed.
      - Convert the `size` field into `size_exp` (an exponent), which uses one byte instead of 8.
      
      Interface change: pass the dict pointer to the dict type callback functions.
      This is instead of passing the removed privdata field. In the future, if
      we'd like to have private data in the callbacks we can extract it from
      the dict type. We can extend dictType to include a custom dict struct
      allocator and use it to allocate more data at the end of the dict
      struct. This data can then be used to store private data later accessed
      by the callbacks.
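      A sketch of the resulting layout (field names approximate, not the exact dict.h):

      ```c
      /* Sketch: both hash tables live directly inside the dict, sizemask is gone,
       * and each table size is stored as a one-byte exponent of two. */
      struct dict {
          dictType *type;               /* callbacks now receive the dict itself, not privdata */
          dictEntry **ht_table[2];
          unsigned long ht_used[2];
          long rehashidx;               /* -1 when not rehashing */
          signed char ht_size_exp[2];   /* size = 1 << exp; 1 byte instead of 8 */
      };
      #define DICTHT_SIZE(exp)      ((exp) == -1 ? 0 : (unsigned long)1 << (exp))
      #define DICTHT_SIZE_MASK(exp) ((exp) == -1 ? 0 : DICTHT_SIZE(exp) - 1)
      ```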
  14. 04 Aug, 2021 4 commits
      Use madvise(MADV_DONTNEED) to release memory to reduce COW (#8974) · d4bca53c
      Wang Yuan authored
      
      
      ## Background
      As we know, after `fork` one process copies pages when it writes to them (CoW),
      while the other process still keeps the old pages, so together they use more memory.
      For Redis, we have seen it consume a lot of memory while the fork child is serializing
      key/values, which may even cause an OOM.
      
      But in fact, in the fork child process there is memory the child does not need to keep,
      even though the parent process may write to or update it; for example, the child will never
      access a key-value pair it has already serialized, while users may still update it in the parent.
      So we think CoW can be reduced if the child process releases the memory it does not need.
      
      ## Implementation
      To release key values in the child process, one might think of calling `decrRefCount` to free the memory,
      but we found that the fork child still uses a lot of memory when we don't write any data to Redis,
      and it costs so much time that it slows down bgsave. This is probably because the memory allocator doesn't
      really release memory to the OS, and it may modify some internal data for the free operation, especially
      when freeing small objects.
      
      Moreover, CoW is page based, so an easy approach is to only free memory blocks that are
      not smaller than the kernel page size. madvise(MADV_DONTNEED) can quickly release the pages of a
      specified region to the OS, bypassing the memory allocator; the allocator still considers this memory
      in use and doesn't change its internal data.
      
      There are some buffers we can release in the fork child process:
      - **Serialized key-values**
        The fork child process never accesses serialized key-values again, so we try to free them.
        Because we can only release big memory blocks, and it is time consuming to iterate all
        items/members/fields/entries of complex data types, we decide to iterate them and
        try to release them only when the average size of an item/member/field/entry is larger
        than the OS page size.
      - **Replication backlog**
        Because the replication backlog is a circular buffer, it changes quickly if Redis has heavy
        write traffic, but the fork child process doesn't need to access it.
      - **Client buffers**
        If clients send requests while the fork child exists, the clients' buffers also change
        frequently. This memory includes the client query buffer, the output buffer, and the memory used by the client struct.
      
      To get the child process peak private dirty memory, we need to track the peak memory instead
      of the last used memory, because the child process may continue to release memory (until now
      CoW could only grow, so the last value was equivalent to the peak).
      We also add a new `current_cow_peak` info field (to complement the existing
      `current_cow_size`).
      Co-authored-by: Oran Agra <oran@redislabs.com>
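      A simplified C sketch of the page-release idea (the function name dismissBuffer is illustrative):

      ```c
      #include <stdint.h>
      #include <sys/mman.h>
      #include <unistd.h>

      /* Sketch: in the fork child, hand a buffer's whole pages back to the OS,
       * bypassing the allocator. madvise() works on whole pages, so the range
       * is trimmed to page boundaries first. */
      static void dismissBuffer(void *ptr, size_t size) {
          uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);
          uintptr_t start = ((uintptr_t)ptr + page - 1) & ~(page - 1);  /* round up   */
          uintptr_t end   = ((uintptr_t)ptr + size) & ~(page - 1);      /* round down */
          if (end > start) madvise((void *)start, end - start, MADV_DONTNEED);
      }
      ```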
      Fix tests failure on 32bit build (#9318) · 56eb7f7d
      Meir Shpilraien (Spielrein) authored
      Fix a test introduced in #9202 that failed on the 32-bit CI.
      The failure was due to a wrong double comparison.
      Change the code to stringify the double first and then compare.
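      An illustrative sketch of the idea (the precision used here is an assumption):

      ```c
      #include <stdio.h>
      #include <string.h>

      /* Sketch: compare doubles through their printed form so that 32-bit and
       * 64-bit builds agree on the result despite tiny precision differences. */
      static int doubles_match(double expected, double actual) {
          char a[64], b[64];
          snprintf(a, sizeof(a), "%g", expected);
          snprintf(b, sizeof(b), "%g", actual);
          return strcmp(a, b) == 0;
      }
      ```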
      Unified Lua and modules reply parsing and added RESP3 support to RM_Call (#9202) · 2237131e
      Meir Shpilraien (Spielrein) authored
      
      
      ## Current state
      1. Lua has its own parser that handles parsing `redis.call` replies and translates them
        to Lua objects that can be used by the user's Lua code. The parser partially handles
        resp3 (missing big number, verbatim, attribute, ...)
      2. Modules have their own parser that handles parsing `RM_Call` replies and translates
        them to RedisModuleCallReply objects. The parser does not support resp3.
      
      In addition, in the future, we want to add Redis Function (#8693) that will probably
      support more languages. At some point maintaining so many parsers will stop
      scaling (bug fixes and protocol changes will need to be applied on all of them).
      We will probably end up with different parsers that support different parts of the
      resp protocol (like we already have today with Lua and modules)
      
      ## PR Changes
      This PR attempts to unify the reply parsing of Lua and modules (and in the future
      Redis Function) by introducing a new parser unit (`resp_parser.c`). The new parser
      handles parsing the reply and calls different callbacks to allow its users (another
      unit that uses the parser, i.e. Lua, modules, or Redis Function) to analyze the reply.
      
      ### Lua API Additions
      The code that handled reply parsing in `scripting.c` was removed. Instead, it uses
      the resp_parser to parse the reply and create a Lua object out of it. As mentioned
      above the Lua parser did not handle parsing big numbers, verbatim, and attribute.
      The new parser can handle those and so Lua also gets it for free.
      Those are translated to Lua objects in the following way:
      1. Big Number - Lua table `{'big_number':'<str representation for big number>'}`
      2. Verbatim - Lua table `{'verbatim_string':{'format':'<verbatim format>', 'string':'<verbatim string value>'}}`
      3. Attribute - currently ignored and not exposed to the Lua script; another issue will be opened to decide how to expose it.
      
      Tests were added to check resp3 reply parsing on Lua
      
      ### Modules API Additions
      The reply parsing code in `module.c` was also removed and the new resp_parser is used instead.
      In addition, the RedisModuleCallReply was also extracted to a separate unit located in `call_reply.c`
      (in the future, this unit will also be used by Redis Function). A nice side effect of the unified parsing is
      that modules now also support resp3. Resp3 can be enabled by giving `3` as a parameter to the
      fmt argument of `RM_Call` (see the sketch after the API list below). It is also possible to give `0`,
      which indicates an auto mode, i.e. Redis will automatically choose the reply protocol based on the
      current client set on the RedisModuleCtx (this mode will mostly be used when the module wants to
      pass the reply to the client as is).
      In addition, the following RedisModuleAPI were added to allow analyzing resp3 replies:
      
      * New RedisModuleCallReply types:
         * `REDISMODULE_REPLY_MAP`
         * `REDISMODULE_REPLY_SET`
         * `REDISMODULE_REPLY_BOOL`
         * `REDISMODULE_REPLY_DOUBLE`
         * `REDISMODULE_REPLY_BIG_NUMBER`
         * `REDISMODULE_REPLY_VERBATIM_STRING`
         * `REDISMODULE_REPLY_ATTRIBUTE`
      
      * New RedisModuleAPI:
         * `RedisModule_CallReplyDouble` - getting double value from resp3 double reply
         * `RedisModule_CallReplyBool` - getting boolean value from resp3 boolean reply
         * `RedisModule_CallReplyBigNumber` - getting big number value from resp3 big number reply
         * `RedisModule_CallReplyVerbatim` - getting format and value from resp3 verbatim reply
         * `RedisModule_CallReplySetElement` - getting element from resp3 set reply
         * `RedisModule_CallReplyMapElement` - getting key and value from resp3 map reply
         * `RedisModule_CallReplyAttribute` - getting a reply attribute
         * `RedisModule_CallReplyAttributeElement` - getting key and value from resp3 attribute reply
         
      * New context flags:
         * `REDISMODULE_CTX_FLAGS_RESP3` - indicate that the client is using resp3
      
      Tests were added to check the new RedisModuleAPI
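      A simplified sketch of a module command using the new API (the command and config key queried here are just an example):

      ```c
      #include "redismodule.h"

      /* Sketch: ask RM_Call for a RESP3 reply by adding '3' to the fmt string,
       * then read the resulting map reply with the new accessors. */
      int GetMaxmemory_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          RedisModuleCallReply *reply =
              RedisModule_Call(ctx, "CONFIG", "3cc", "GET", "maxmemory");
          if (reply && RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_MAP) {
              RedisModuleCallReply *key, *val;
              RedisModule_CallReplyMapElement(reply, 0, &key, &val);
              RedisModule_ReplyWithCallReply(ctx, val);  /* may return REDISMODULE_ERR for a RESP2 client */
          } else {
              RedisModule_ReplyWithNull(ctx);
          }
          if (reply) RedisModule_FreeCallReply(reply);
          return REDISMODULE_OK;
      }
      ```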
      
      ### Modules API Changes
      * RM_ReplyWithCallReply might return REDISMODULE_ERR if the given CallReply is in resp3
        but the client expects resp2. This is not a breaking change, because in order to get a resp3
        CallReply one needs to explicitly pass `3` as a parameter to the fmt argument of
        `RM_Call` (as mentioned above).
      
      Tests were added to check this change
      
      ### More small Additions
      * Added `debug set-disable-deny-scripts`, which allows turning the commands' no-script
      flag protection on and off. This is used by the Lua resp3 tests so that it is possible to run `debug protocol`
      and check the resp3 parsing code.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
      Fix head and tail check with negative offset in _quicklistInsert (#9311) · b4eda142
      sundb authored
      Some background:
      This fixes a problem that used to be dead code till now,
      but became alive (only in the unit tests, not in redis) when #9113 got merged.
      The problem it fixes doesn't actually cause any significant harm,
      but that PR also added a test that fails verification because of that.
      This test was merged with that problem due to human error; we didn't run it
      on the last modified version before merging.
      The fix in this PR existed in #8641 (closed because it's just dead code)
      and #4674 (still pending but has other changes in it).
      
      Now to the actual fix:
      On quicklist insertion, if the insertion offset is -1 or `-(quicklist->count)`,
      we can insert into the head of the next node rather than the tail of the
      current node. This is especially important when the current node is full,
      since adding anything to it will cause it to be split (or go over its fill limit setting).
      
      The bug was that the code attempted to detect that we're adding to
      the tail of the current node by matching `offset == node->count`, when in
      fact it should have been `offset == node->count-1` (so it never entered that `if`).
      And since we take negative offsets too, we should also match `-1`.
      The same applies to the head, i.e. `0` and `-count`.
      
      The bug will cause the code to attempt inserting into the current node (thinking
      we have to insert into the middle of the node rather than head or tail), and
      in case the current node is full it'll have to be split (something that also
      happens in valid cases).
      On top of that, since it calls _quicklistSplitNode with an edge case, it actually
      splits the node in a way that all the entries fall into one split and none into the other,
      and then still inserts the new entry into the first one, causing it to be populated
      beyond its intended fill limit.
      
      This problem does not create any bug in redis, because the existing code does
      not iterate from tail to head, and the offset never has a negative value when inserting.
      
      The other change this PR makes in the test code is just for some coverage: insertion
      at index 0 is tested a lot, so it's nice to test some negative offsets too.
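      A simplified sketch of the corrected check (helper names are illustrative, not the exact _quicklistInsert code):

      ```c
      /* Sketch: with `count` entries in a node, both positive and negative offsets
       * can refer to its head or its tail. The old code compared offset against
       * count (never true) instead of count-1, and ignored the negative forms. */
      static int offset_is_tail(int count, int offset) {
          return offset == count - 1 || offset == -1;
      }
      static int offset_is_head(int count, int offset) {
          return offset == 0 || offset == -count;
      }
      ```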