1. 04 Oct, 2021 2 commits
    • Fix ziplist and listpack overflows and truncations (CVE-2021-32627, CVE-2021-32628) (#9589) · c5e6a620
      Oran Agra authored
      
      
      - fix possible heap corruption in ziplist and listpack caused by trying to
        allocate more than the maximum size of 4GB.
      - prevent ziplists (hash and zset) from growing above 1GB; they are converted
        to HT encoding instead, since that's not a useful size for a ziplist.
      - prevent listpacks (stream) from growing above 1GB.
      - XADD will start a new listpack if the new record would cause the previous
        listpack to grow over 1GB.
      - XADD will respond with an error if a single stream record is over 1GB.
      - The List type (ziplist in quicklist) was truncating strings that were over
        4GB; now it responds with an error.
      Co-authored-by: sundb <sundbcn@gmail.com>
    • Prevent unauthenticated client from easily consuming lots of memory (CVE-2021-32675) (#9588) · fba15850
      Oran Agra authored
      This change sets a low limit for multibulk and bulk length in the
      protocol for unauthenticated connections, so that they can't easily
      cause redis to allocate massive amounts of memory by sending just a few
      characters on the network.
      The new limits are 10 arguments of 16KB each (instead of 1M arguments of 512MB each).
  2. 03 Oct, 2021 2 commits
    • Remove argument count limit, dynamically grow argv. (#9528) · 93e85347
      yoav-steinberg authored
      Remove the hard coded multi-bulk limit (was 1,048,576); the new limit is INT_MAX.
      When a client sends an m-bulk larger than 1024, we initially allocate
      the argv array for only 1024 arguments, and gradually grow that allocation as arguments
      are received.
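      As a rough illustration of the growth strategy (a sketch, not the actual networking.c code; the helper name is hypothetical):
      ```
      #include <stdlib.h>

      /* Hypothetical sketch: grow the argv array geometrically as arguments
       * arrive, instead of pre-allocating for the full declared count.
       * Assumes *len starts at a non-zero value (e.g. 1024). */
      static void **argv_ensure(void **argv, size_t *len, size_t needed) {
          if (needed <= *len) return argv;
          while (*len < needed) *len *= 2; /* double until large enough */
          return realloc(argv, *len * sizeof(void *));
      }
      ```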
    • Cleanup typos, incorrect comments, and fixed small memory leak in redis-cli (#9153) · dd3ac97f
      Binbin authored
      1. Remove forward declarations from header files for functions that no longer exist:
      hmsetCommand and rdbSaveTime.
      2. Minor phrasing fixes in #9519.
      3. Add a missing sdsfree(title) and fix a typo in redis-benchmark.
      4. Fix some erroneous comments in some zset commands.
      5. Fix a copy-paste bug in a comment in syncWithMaster about `ip-address`.
  3. 26 Sep, 2021 1 commit
    • Client eviction ci issues (#9549) · 66002530
      yoav-steinberg authored
      Fixing CI test issues introduced in #8687:
      - valgrind warnings in readQueryFromClient when the client was freed by processInputBuffer
      - adding DEBUG pause-cron so tests are not time dependent
      - skipping a test that depends on socket buffers / events and is not compatible with TLS
      - making sure the client got subscribed by not using a deferring client
  4. 23 Sep, 2021 3 commits
    • Client eviction (#8687) · 2753429c
      yoav-steinberg authored
      
      
      ### Description
      A mechanism for disconnecting clients when the total memory used by all connected clients is above a
      configured limit. This prevents eviction or OOM caused by memory accumulated
      across all clients. It's complementary to the `client-output-buffer-limit`
      mechanism, but instead of looking at a single client and only its output buffers,
      it considers all memory used by all clients.
      
      #### Design
      The general design is as following:
      * We track memory usage of each client, taking into account all memory used by the
        client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date
        after reading from the socket, after processing commands and after writing to the socket.
      * Based on the memory used we sort all clients into buckets. Each bucket contains all
        clients using up to 2x the memory of the clients in the bucket below it. For example:
        clients using up to 1MB, up to 2MB, up to 4MB, and so on (see the bucket sketch after this list).
      * Before processing a command and before sleep we check if we're over the configured
        limit. If we are, we start disconnecting clients from the larger buckets downwards until we're
        under the limit.
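      A minimal sketch of the bucket mapping described above (an illustration, not the actual code):
      ```
      /* Sketch: map a client's memory usage to its eviction bucket.
       * Bucket 0 holds clients using less than 64KB, each following bucket
       * doubles the upper bound, and the last bucket holds 4GB and up. */
      #define MIN_BUCKET_BITS 16 /* 64KB */
      #define MAX_BUCKET_BITS 33 /* one past 4GB: the "4GB and up" bucket */

      static int client_mem_bucket(unsigned long long mem) {
          int bits = MIN_BUCKET_BITS;
          while (bits < MAX_BUCKET_BITS && mem >= (1ULL << bits))
              bits++;
          return bits - MIN_BUCKET_BITS; /* 0..17 */
      }
      ```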
      
      #### Config
      `maxmemory-clients` is the maximum memory all clients are allowed to consume; above this threshold
      we disconnect clients.
      This config can either be set to 0 (meaning no limit), a size in bytes (possibly with an MB/GB
      suffix), or as a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%`
      would mean 10% of `maxmemory`).
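      For illustration, the accepted forms might look like this in redis.conf (hypothetical values):
      ```
      maxmemory-clients 0     # no limit
      maxmemory-clients 1g    # absolute limit, with optional MB/GB suffix
      maxmemory-clients 10%   # 10% of maxmemory
      ```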
      
      #### Important code changes
      * During development I encountered yet more situations where our io-threads access
        global vars and needed to fix them. I also had to handle keeping the clients sorted into the
        (global) memory buckets while their memory usage changes in the io-thread.
        To achieve this I decided to simplify how we check if we're in an io-thread and make it
        much more explicit. I removed the `CLIENT_PENDING_READ` flag used for checking
        if the client is in an io-thread (it wasn't used for anything else) and just used the global
        `io_threads_op` variable the same way to check during writes.
      * I optimized the cleanup of the client from the `clients_pending_read` list on client freeing.
        We now store a pointer in the `client` struct to this list so we don't need to search in it
        (`pending_read_list_node`).
      * Added `evicted_clients` stat to `INFO` command.
      * Added the `CLIENT NO-EVICT ON|OFF` sub command to exclude a specific client from the
        client eviction mechanism, with a corresponding 'e' flag in the client info string.
      * Added `multi-mem` field in the client info string to show how much memory is used up
        by buffered multi commands.
      * Client `tot-mem` now accounts for buffered multi-commands, pubsub patterns and
        channels (partially), tracking prefixes (partially).
      * CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function so
        clients will be disconnected between processing different clients and not only before sleep.
        This new function can be used in the future for work we want to do outside the command
        processing loop but don't want to wait for all clients to be processed before we get to it.
        Specifically I wanted to handle output-buffer-limit related closing before we process client
        eviction in case the two race with each other.
      * Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction
        buckets.
      * Each client now holds a pointer to the client eviction memory usage bucket it belongs to
        and listNode to itself in that bucket for quick removal.
      * Global `io_threads_op` variable now can contain a `IO_THREADS_OP_IDLE` value
        indicating no io-threading is currently being executed.
      * In order to track memory used by each client in real time we can no longer rely on updating
        these stats in `clientsCron()` alone. So now I call `updateClientMemUsage()`
        (formerly `clientsCronTrackClientsMemUsage()`) after command processing, after
        writing data to pubsub clients, after writing the output buffer and after reading from the
        socket (and maybe other places too). The function is written to be fast.
      * Clients are evicted if needed (with appropriate log line) in `beforeSleep()` and before
        processing a command (before performing oom-checks and key-eviction).
      * All clients memory usage buckets are grouped as follows:
        * All clients using less than 64KB.
        * 64KB..128KB
        * 128KB..256KB
        * ...
        * 2GB..4GB
        * All clients using 4GB and up.
      * Added client-eviction.tcl with a bunch of tests for the new mechanism.
      * Extended maxmemory.tcl to test the interaction between maxmemory and
        maxmemory-clients settings.
      * Added an option to flag a numeric configuration variable as a "percent", meaning that
        if we encounter a '%' after the number in the config file (or config set command) we
        consider it valid. Such a number is stored internally as a negative value. This way an
        integer value can be interpreted as either a percent (negative) or an absolute value (positive).
        This is useful, for example, when some numeric configuration can optionally be set to a percentage
        of something else (see the sketch below).
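      A minimal sketch of the negative-value encoding just described (illustrative, not the actual config code):
      ```
      /* Percent configs are stored as negative values: "10%" -> -10.
       * A positive stored value is an absolute value. */
      static long long resolve_numeric_config(long long stored, long long base) {
          if (stored < 0) return base * (-stored) / 100; /* percent of base, e.g. maxmemory */
          return stored;                                 /* absolute value */
      }
      ```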
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • Adding ACL support for modules (#9309) · a56d4533
      YaacovHazan authored
      This commit introduces a new flag to RM_Call:
      'C' - Check if the command can be executed according to the ACLs associated with it.
      
      Also, three new APIs were added to check if a command, key, or channel can be executed or accessed
      by a user, according to the ACLs associated with it.
      - RM_ACLCheckCommandPerm
      - RM_ACLCheckKeyPerm
      - RM_ACLCheckChannelPerm
      
      These APIs take a RedisModuleUser object, which for a module user is returned by the RM_CreateModuleUser API, and for a general ACL user can be retrieved by these two new APIs:
      - RM_GetCurrentUserName - Retrieve the user name of the client connection behind the current context.
      - RM_GetModuleUserFromUserName - Get a RedisModuleUser from a user name
      
      Since a RedisModuleUser can now be obtained from a user name, modules can also access the general ACL users (not just the ones created by the module).
      This means the already existing API RM_SetModuleUserACL() can be used to change the ACL rules for such users.
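      A usage sketch combining these APIs (the exact signatures are assumptions based on the description above, not verified against redismodule.h):
      ```
      /* Sketch: reject a module command when the ACL user behind the current
       * connection may not run it. Signatures are assumed, not authoritative. */
      int MyCommand_Guarded(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          RedisModuleString *name = RedisModule_GetCurrentUserName(ctx);
          RedisModuleUser *user = RedisModule_GetModuleUserFromUserName(name);
          if (user == NULL ||
              RedisModule_ACLCheckCommandPerm(user, argv, argc) != REDISMODULE_OK)
              return RedisModule_ReplyWithError(ctx, "ERR command denied by ACL");
          /* ... actual command logic ... */
          return RedisModule_ReplyWithSimpleString(ctx, "OK");
      }
      ```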
    • Add ZMPOP/BZMPOP commands. (#9484) · 14d6abd8
      Binbin authored
      This is similar to the recent addition of LMPOP/BLMPOP (#9373), but zset.
      
      Syntax for the new ZMPOP command:
      `ZMPOP numkeys [<key> ...] MIN|MAX [COUNT count]`
      
      Syntax for the new BZMPOP command:
      `BZMPOP timeout numkeys [<key> ...] MIN|MAX [COUNT count]`
      
      Some background:
      - ZPOPMIN/ZPOPMAX take only one key, and can return multiple elements.
      - BZPOPMIN/BZPOPMAX take multiple keys, but return only one element from just one key.
      - ZMPOP/BZMPOP can take multiple keys, and can return multiple elements from just one key.
      
      Note that while ZMPOP/BZMPOP can take multiple keys, they eventually operate on just one key.
      And they will propagate as ZPOPMIN or ZPOPMAX with the COUNT option.
      
      As new commands, when we can not pop any elements the responses are:
      - ZMPOP: returns NIL in both RESP2 and RESP3, unlike ZPOPMIN/ZPOPMAX which return an empty array.
      - BZMPOP: returns NIL in both RESP2 and RESP3 when the timeout is reached, like BZPOPMIN/BZPOPMAX.
      
      The normal response is a nested array in both RESP2 and RESP3:
      ```
      ZMPOP/BZMPOP
      1) keyname
      2) 1) 1) member1
            2) score1
         2) 1) member2
            2) score2
      
      In RESP2:
      1) "myzset"
      2) 1) 1) "three"
            2) "3"
         2) 1) "two"
            2) "2"
      
      In RESP3:
      1) "myzset"
      2) 1) 1) "three"
            2) (double) 3
         2) 1) "two"
            2) (double) 2
      ```
  5. 16 Sep, 2021 1 commit
    • Adds limit to SINTERCARD/ZINTERCARD. (#9425) · f898a9e9
      Binbin authored
      Implements the [LIMIT limit] variant of SINTERCARD/ZINTERCARD.
      With the LIMIT, we can stop the search when the cardinality
      reaches the limit, and return the cardinality ASAP.
      
      Note that the old syntax of SINTERCARD was: `SINTERCARD key [key ...]`.
      In order to add an optional parameter, we must break the old syntax,
      so the new syntax of SINTERCARD is made consistent with ZINTERCARD.
      New syntax: `SINTERCARD numkeys key [key ...] [LIMIT limit]`.
      
      Note that this means that SINTERCARD has a different syntax than
      SINTER and SINTERSTORE (taking a numkeys argument).
      
      As for ZINTERCARD, we can simply add an optional parameter to it.
      New syntax: `ZINTERCARD numkeys key [key ...] [LIMIT limit]` (see the example below).
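      A hypothetical session showing the LIMIT behavior:
      ```
      > SADD s1 a b c d
      (integer) 4
      > SADD s2 b c d e
      (integer) 4
      > SINTERCARD 2 s1 s2
      (integer) 3
      > SINTERCARD 2 s1 s2 LIMIT 2
      (integer) 2
      ```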
  6. 15 Sep, 2021 2 commits
    • Cleanup: propagate and alsoPropagate do not need redisCommand (#9502) · 7759ec7c
      guybe7 authored
      The `cmd` argument was completely unused, and all the code that bothered to pass it was unnecessary.
      This is a preparation for a future commit that treats subcommands as commands.
    • A better approach for COMMAND INFO for movablekeys commands (#8324) · 03fcc211
      guybe7 authored
      Fix #7297
      
      The problem:
      
      Today, there is no way for a client library or app to know the key name indexes for commands such as
      ZUNIONSTORE/EVAL and others with "numkeys", since COMMAND INFO returns no useful info for them.
      
      For cluster-aware redis clients, this requires 'patching' the client library code specifically for each of these commands, or
      resolving each execution of these commands with COMMAND GETKEYS.
      
      The solution:
      
      Introducing key specs other than the legacy "range" (first,last,step)
      
      The 8th element of the command info array, if it exists, holds an array of key specs. The array may be empty, which indicates
      the command doesn't take any key arguments, or it may contain one or more key specs, each of which may lead to the discovery
      of 0 or more key arguments.
      
      A client library that doesn't support this key-spec feature will keep using the first,last,step and movablekeys flag which will
      obviously remain unchanged.
      
      A client that supports this key-specs feature needs only to look at the key-specs array. If it finds an unrecognized spec, it
      must resort to using COMMAND GETKEYS if it wishes to get all key name arguments, but if all it needs is one key in order
      to know which cluster node to use, then maybe another spec (if the command has several) can supply that, and there's no
      need to use GETKEYS.
      
      Each spec is an array of arguments: the first is the spec name, the second is an array of flags, and the third is an array
      containing details about the spec (with a specific meaning for each spec type).
      The initial flags we support are "read" and "write", indicating whether the keys that this key-spec finds are used for reads or for writes.
      Clients should ignore any unfamiliar flags.
      
      In order to easily find the positions of keys in a given array of args we introduce key specs. There are two logical steps to
      key specs:
      1. `start_search`: Given an array of args, indicate where we should start searching for keys
      2. `find_keys`: Given the output of start_search and an array of args, indicate all possible indices of keys.
      
      ### start_search step specs
      - `index`: specify an argument index explicitly
        - `index`: 0-based index into argv (1 means the first command argument, since argv[0] is the command name)
      - `keyword`: specify a string to match in `argv`. We should start searching for keys just after the keyword appears.
        - `keyword`: the string to search for
        - `start_search`: an index from which to start the keyword search (can be negative, which means to search from the end)
      
      Examples:
      - `SET` has start_search of type `index` with value `1`
      - `XREAD` has start_search of type `keyword` with value `["STREAMS",1]`
      - `MIGRATE` has start_search of type `keyword` with value `["KEYS",-2]`
      
      ### find_keys step specs
      - `range`: specify `[lastkey, step, limit]` (see the sketch after the examples below).
        - `lastkey`: index of the last key, relative to the index returned from `start_search`. -1 indicates up to the last argument, -2 one before the last
        - `step`: how many args should we skip after finding a key, in order to find the next one
        - `limit`: if `lastkey` is -1, we use `limit` to stop the search by a factor. 0 and 1 mean no limit. 2 means ½ of the remaining args, 3 means ⅓, and so on.
      - `keynum`: specify `[keynum_index, first_key_index, step]`.
        - `keynum_index`: is relative to the return of the `start_search` spec.
        - `first_key_index`: is relative to `keynum_index`.
        - `step`: how many args should we skip after finding a key, in order to find the next one
      
      Examples:
      - `SET` has `range` of `[0,1,0]`
      - `MSET` has `range` of `[-1,2,0]`
      - `XREAD` has `range` of `[-1,1,2]`
      - `ZUNION` has `start_search` of type `index` with value `1` and `find_keys` of type `keynum` with value `[0,1,1]`
      - `AI.DAGRUN` has `start_search` of type `keyword` with value `["LOAD",1]` and `find_keys` of type `keynum` with value
        `[0,1,1]` (see https://oss.redislabs.com/redisai/master/commands/#aidagrun)
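      To make the two-step model concrete, here is a minimal sketch (an illustration of the semantics described above, not the actual Redis code) of applying a `range` find_keys spec:
      ```
      /* Apply a "range" find_keys spec of [lastkey, step, limit], starting from
       * the index produced by the start_search step. Writes key indices to `keys`
       * and returns how many were found. Illustrative sketch only. */
      static int range_find_keys(int argc, int begin, int lastkey, int step,
                                 int limit, int *keys) {
          int last, numkeys = 0;
          if (lastkey >= 0) {
              last = begin + lastkey;  /* relative to the start_search index */
          } else if (lastkey == -1 && limit > 1) {
              /* limit stops the search by a factor: 2 = half the remaining args */
              last = begin + (argc - begin) / limit - 1;
          } else {
              last = argc + lastkey;   /* -1 = last arg, -2 = one before it, ... */
          }
          for (int i = begin; i <= last && i < argc; i += step)
              keys[numkeys++] = i;
          return numkeys;
      }
      ```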
      
      Note: this solution is not perfect as the module writers can come up with anything, but at least we will be able to find the key
      args of the vast majority of commands.
      If one of the above specs can’t describe the key positions, the module writer can always fall back to the `getkeys-api` option.
      
      Some keys cannot be found easily (`KEYS` in `MIGRATE`: imagine the argument for `AUTH` is the string "KEYS" - we will
      start searching at the wrong index).
      The guarantee is that the specs may be incomplete (`incomplete` will be specified in the spec to denote that) but we never
      report false information (assuming the command syntax is correct).
      For `MIGRATE` we start searching from the end (`start_search=-1`), and if one of the keys is actually called "keys" we will
      report only a subset of all keys - hence the `incomplete` flag.
      Some `incomplete` specs can be completely empty (i.e. unknown start_search), which tells the client that
      COMMAND GETKEYS (or any other way to get the keys) must be used (example: for `SORT` there is no way to describe
      the STORE keyword spec, as the word "store" can appear anywhere in the command).
      
      We will expose these key specs in the `COMMAND` command so that clients can learn, on startup, where the keys are for
      all commands instead of holding hardcoded tables or use `COMMAND GETKEYS` in runtime.
      
      Comments:
      1. Redis doesn't internally use the new specs, they are only used for COMMAND output.
      2. In order to support the current COMMAND INFO format (reply array indices 4, 5, 6) we created a synthetic range, called
         legacy_range, that, if possible, is built according to the new specs.
      3. Redis currently uses only getkeys_proc or the legacy_range to get the keys indices (in COMMAND GETKEYS for
         example).
      
      "incomplete" specs:
      the command we have issues with are MIGRATE, STRALGO, and SORT
      for MIGRATE, because the token KEYS, if exists, must be the last token, we can search in reverse. it one of the keys is
      actually the string "keys" will return just a subset of the keys (hence, it's "incomplete")
      for SORT and STRALGO we can use this heuristic (the keys can be anywhere in the command) and therefore we added a
      key spec that is both "incomplete" and of "unknown type"
      
      if a client encounters an "incomplete" spec it means that it must find a different way (either COMMAND GETKEYS or have
      its own parser) to retrieve the keys.
      please note that all commands, apart from the three mentioned above, have "complete" key specs
  7. 14 Sep, 2021 1 commit
    • Modules: Add remaining list API functions (#8439) · ea36d4de
      Viktor Söderqvist authored
      List functions operating on elements by index:
      
      * RM_ListGet
      * RM_ListSet
      * RM_ListInsert
      * RM_ListDelete
      
      Iteration is done using a simple for loop over indices.
      The index based functions use an internal iterator as an optimization.
      This is explained in the docs:
      
      ```
       * Many of the list functions access elements by index. Since a list is in
       * essence a doubly-linked list, accessing elements by index is generally an
       * O(N) operation. However, if elements are accessed sequentially or with
       * indices close together, the functions are optimized to seek the index from
       * the previous index, rather than seeking from the ends of the list.
       *
       * This enables iteration to be done efficiently using a simple for loop:
       *
       *     long n = RM_ValueLength(key);
       *     for (long i = 0; i < n; i++) {
       *         RedisModuleString *elem = RedisModule_ListGet(key, i);
       *         // Do stuff...
       *     }
      ```
  8. 13 Sep, 2021 1 commit
    • PSYNC2: make partial sync possible after master reboot (#8015) · 794442b1
      zhaozhao.zz authored
      The main idea is to allow a master to load replication info from the RDB file when rebooting. If the master can load the replication info, replicas may have the chance to psync with the master, which can save much traffic.
      
      The key point is that we need to guarantee safety and consistency, so there
      are two differences between master and replica:
      
      1. the master loads the replication info as its secondary ID and
         offset, in case other masters have the same replid.
      2. when the master is loading the RDB, it propagates expired keys as DEL
         commands to the replication backlog, so replicas can receive these
         commands and delete stale keys.
         p.s. the keys that expire during RDB loading are useful info for users, so
         we show them as `rdb_last_load_keys_expired` and `rdb_last_load_keys_loaded` in INFO persistence (see the example below).
      
      Moreover, after loading the replication info, the master should update
      `no_replica_time`, in case loading the RDB took a long time.
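      The new INFO persistence fields might look like this (hypothetical values):
      ```
      rdb_last_load_keys_expired:13
      rdb_last_load_keys_loaded:10987
      ```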
  9. 09 Sep, 2021 3 commits
    • Replace all usage of ziplist with listpack for t_zset (#9366) · 3ca6972e
      sundb authored
      Part two of implementing #8702 (zset), after #8887.
      
      ## Description of the feature
      Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.
      
      ## Rdb format changes
      New `RDB_TYPE_ZSET_LISTPACK` rdb type.
      
      ## Rdb loading improvements:
      1) Pre-expansion of the dict for validation of duplicate data in listpacks and ziplists.
      2) Simplified the release of empty key objects during RDB loading.
      3) Unified the ziplist and listpack data verification methods for zset and hash, and moved the code to rdb.c.
      
      ## Interface changes
      1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
      2) OBJECT ENCODING will return listpack instead of ziplist.
      
      ## Listpack improvements:
      1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from a listpack.
      2) Improve the performance of `lpCompare`: converting from string to integer is faster than converting from integer to string.
      3) Replace `snprintf` with `ll2string` to improve the performance of converting numbers to strings in `lpGet()`.
      
      ## Zset improvements:
      1) Improve the performance of the `zzlFind` method: use `lpFind` instead of `lpCompare` in a loop.
      2) Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to delete an element of a zset.
      
      ## Tests
      1) Add some unit tests for the `lpDeleteRange` and `lpDeleteRangeWithEntry` functions.
      2) Add a zset RDB loading test.
      3) Add a benchmark test for `lpCompare` and `ziplistCompare`.
      4) Add an empty listpack zset corrupt dump test.
    • Add LMPOP/BLMPOP commands. (#9373) · c50af0ae
      Binbin authored
      We want to add a COUNT option to BLPOP,
      but we can't do that without breaking compatibility, due to the command arguments syntax.
      So this commit introduces two new commands.
      
      Syntax for the new LMPOP command:
      `LMPOP numkeys [<key> ...] LEFT|RIGHT [COUNT count]`
      
      Syntax for the new BLMPOP command:
      `BLMPOP timeout numkeys [<key> ...] LEFT|RIGHT [COUNT count]`
      
      Some background:
      - LPOP takes one key, and can return multiple elements.
      - BLPOP takes multiple keys, but returns one element from just one key.
      - LMPOP can take multiple keys and return multiple elements from just one key.
      
      Note that while LMPOP/BLMPOP can take multiple keys, they eventually operate on just one key.
      And they will propagate as LPOP or RPOP with the COUNT option.
      
      As new commands, they still return NIL if we can't pop any elements.
      The normal response is a nested array in both RESP2 and RESP3, like:
      ```
      LMPOP/BLMPOP 
      1) keyname
      2) 1) element1
         2) element2
      ```
      I.e. unlike BLPOP, which returns a key name and one element and so uses a flat array,
      and LPOP, which returns multiple elements with no key name and again uses a flat array,
      this command has to return a nested array, and it does so for both RESP2 and RESP3 (like SCAN does).
      
      Some discussion can be seen in: #766 #8824
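      A hypothetical session (LMPOP skips the empty key and pops from the first non-empty one):
      ```
      > RPUSH mylist one two three
      (integer) 3
      > LMPOP 2 nosuchlist mylist LEFT COUNT 2
      1) "mylist"
      2) 1) "one"
         2) "two"
      ```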
    • Add INFO total_active_defrag_time and current_active_defrag_time (#9377) · 216f168b
      Huang Zhw authored
      Add two INFO metrics:
      ```
      total_active_defrag_time:12345
      current_active_defrag_time:456
      ```
      `current_active_defrag_time`, if greater than 0, indicates how much time has
      passed since active defrag started running. If active defrag stops, this metric is reset to 0.
      `total_active_defrag_time` is the total time the fragmentation
      was over the defrag threshold since the server started.
      
      This is a followup PR for #9031
  10. 31 Aug, 2021 1 commit
    • Slot-to-keys using dict entry metadata (#9356) · f24c63a2
      Viktor Söderqvist authored
      
      
      * Enhance dict to support arbitrary metadata carried in dictEntry
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      
      * Rewrite slot-to-keys mapping to linked lists using dict entry metadata
      
      This is a memory enhancement for Redis Cluster.
      
      The radix tree slots_to_keys (which duplicated all key names, prefixed with their
      slot number) is replaced with a linked list for each slot. The dict entries of
      the same cluster slot form a linked list, and the pointers are stored as metadata
      in each dict entry of the main DB dict (see the sketch below).
      
      This commit also moves the slot-to-key API from db.c to cluster.c.
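      A minimal sketch of the idea (the actual struct layout in the commit may differ):
      ```
      /* Hypothetical per-entry metadata: dict entries whose keys hash to the same
       * cluster slot are chained into a doubly-linked list. */
      typedef struct slotToKeysMeta {
          struct dictEntry *prev; /* previous entry with a key in the same slot */
          struct dictEntry *next; /* next entry with a key in the same slot */
      } slotToKeysMeta;
      ```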
      Co-authored-by: Jim Brunner <brunnerj@amazon.com>
  11. 30 Aug, 2021 1 commit
    • Use sync_file_range to optimize fsync if possible (#9409) · 9a0c0617
      Wang Yuan authored
      We implement incremental data sync in rio.c by calling fsync; on a slow disk that may cost a lot of time.
      sync_file_range can provide an async fsync, so we can serialize key/values and sync file data at the same time.
      
      > one tip for sync_file_range usage: http://lkml.iu.edu/hypermail/linux/kernel/1005.2/01845.html
      
      Additionally, this change avoids a single large write, which can result in a mass of dirty
      pages in the kernel (increasing the risk of blocking someone else's write).
      
      On HDD, the current solution reduces RDB dump time by roughly half:
      this PR takes 50s to dump a 7.7GB rdb, while the unstable branch takes 93s.
      On NVMe SSD, this PR can't reduce much time: this PR takes 40s, the unstable branch 48s.
      
      Moreover, I found that calling data sync every 4MB is better than every 32MB.
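      A minimal sketch of the pattern (illustrative, not the actual rio.c code), assuming Linux's sync_file_range(2):
      ```
      #define _GNU_SOURCE
      #include <fcntl.h>

      #define AUTOSYNC_BYTES (4 * 1024 * 1024) /* sync every 4MB, per the finding above */

      /* Kick off asynchronous writeback of the newly written range instead of a
       * blocking fsync, so serialization and disk flushing overlap. */
      static void autosync(int fd, off_t written, off_t *last_synced) {
          if (written - *last_synced >= AUTOSYNC_BYTES) {
              sync_file_range(fd, *last_synced, written - *last_synced,
                              SYNC_FILE_RANGE_WRITE);
              *last_synced = written;
          }
      }
      ```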
  12. 10 Aug, 2021 2 commits
    • Format fixes and naming. SentReplyOnKeyMiss -> addReplyOrErrorObject (#9346) · 8f8117f7
      Meir Shpilraien (Spielrein) authored
      Following the comments on #8659, this PR fixes some formatting
      and naming issues.
    • Replace all usage of ziplist with listpack for t_hash (#8887) · 02fd76b9
      sundb authored
      
      
      Part one of implementing #8702 (taking hashes first before other types)
      
      ## Description of the feature
      1. Change ziplist encoded hash objects to listpack encoding.
      2. Convert existing ziplists at RDB loading time (an O(n) operation).
      
      ## Rdb format changes
      1. Add RDB_TYPE_HASH_LISTPACK rdb type.
      2. Bump RDB_VERSION to 10
      
      ## Interface changes
      1. New `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`)
      2. OBJECT ENCODING will return `listpack` instead of `ziplist`
      
      ## Listpack improvements:
      1. Support direct insertion and replacement of integer elements (rather than converting back and forth from strings).
      2. Add more listpack capabilities to match the ziplist ones (like `lpFind`, `lpRandomPairs` and such).
      3. Optimize element length fetching, avoiding multiple calculations.
      4. Use inline to avoid function call overhead.
      
      ## Tests
      1. Add a new test for the RDB load time conversion.
      2. Add the listpack unit tests (based on the ones in ziplist.c).
      3. Add a few "corrupt payload: fuzzer findings" tests, and slightly modify existing ones.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  13. 09 Aug, 2021 1 commit
  14. 07 Aug, 2021 1 commit
  15. 06 Aug, 2021 1 commit
  16. 05 Aug, 2021 4 commits
    • Improvements to corrupt payload sanitization (#9321) · 0c90370e
      Oran Agra authored
      
      
      Recently we found two issues in the fuzzer tester: #9302 #9285
      After fixing them, more problems surfaced and this PR (as well as #9297) aims to fix them.
      
      Here's a list of the fixes
      - Prevent an overflow when allocating a dict hashtable
      - Prevent OOM when attempting to allocate a huge string
      - Prevent a few invalid accesses in listpack
      - Improve sanitization of listpack first entry
      - Validate integrity of stream consumer groups PEL
      - Validate integrity of stream listpack entry IDs
      - Validate a ziplist tail followed by extra data which starts with 0xff
      Co-authored-by: sundb <sundbcn@gmail.com>
    • Add debug config flag to print certain config values on engine crash (#9304) · 39a4a44d
      Madelyn Olson authored
    • Add latency monitor sample when key is deleted via lazy expire (#9317) · ca559819
      menwen authored
      Fix the lack of a latency sample when a key expires via expireIfNeeded().
      Some refactoring of shared code.
    • dict struct memory optimizations (#9228) · 5e908a29
      yoav-steinberg authored
      Reduce dict struct memory overhead:
      on 64bit the dict size goes down from jemalloc's 96 byte bin to its 56 byte bin.
      
      summary of changes:
      - Remove `privdata` from callbacks and dict creation (this affects many files, see "Interface change" below).
      - Meld the `dictht` struct into the `dict` struct to eliminate struct padding (this affects just dict.c and defrag.c).
      - Eliminate the `sizemask` field; it can be calculated from the size when needed.
      - Convert the `size` field into `size_exp` (exponent), utilizing one byte instead of 8 (see the sketch below).
      
      Interface change: pass the dict pointer to dict type callback functions.
      This is instead of passing the removed privdata field. In the future, if
      we'd like to have private data in the callbacks, we can extract it from
      the dict type. We can extend dictType to include a custom dict struct
      allocator and use it to allocate more data at the end of the dict
      struct. This data can then be used to store private data later accessed
      by the callbacks.
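      A sketch of how the derived values can replace the stored fields (modeled on the description above; the actual macros may differ):
      ```
      /* With only the size exponent stored, size and sizemask are derived
       * on demand instead of occupying 8 bytes each in the struct.
       * An exponent of -1 denotes an unallocated table. */
      #define HT_SIZE(exp) ((exp) == -1 ? 0 : (unsigned long)1 << (exp))
      #define HT_SIZE_MASK(exp) ((exp) == -1 ? 0 : HT_SIZE(exp) - 1)
      ```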
  17. 04 Aug, 2021 2 commits
    • Use madvise(MADV_DONTNEED) to release memory to reduce COW (#8974) · d4bca53c
      Wang Yuan authored
      
      
      ## Background
      As we know, after `fork`, one process copies pages when writing data to them (CoW)
      while the other process keeps the old pages, so together they consume more memory.
      For redis, we suffered from redis consuming a lot of memory while the fork child was serializing
      key/values, which could even cause OOM.
      
      But actually we found that in the redis fork child process, the child doesn't need to keep some of the
      memory that the parent process may write to or update. For example, the child process will never
      access a key-value that has already been serialized, while users may update it in the parent process.
      So we think we can reduce COW if the child process releases memory it no longer needs.
      
      ## Implementation
      For releasing key values in the child process, we might think of calling `decrRefCount` to free the memory,
      but I found that the fork child process still used much memory even when we didn't write any data to redis,
      and it cost much more time, slowing down bgsave. Maybe that's because the memory allocator doesn't
      really release memory to the OS, and it may modify some inner data for the free operation, especially
      when we free small objects.
      
      Moreover, CoW is page-based, so an easy approach is to only free memory chunks that are not
      smaller than the kernel page size. madvise(MADV_DONTNEED) can quickly release the pages of a specified
      region to the OS, bypassing the memory allocator, while the allocator still considers the memory in use
      and doesn't change its inner data (see the sketch below).
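      A minimal sketch of the page-aligned release just described (illustrative; the helper name is hypothetical):
      ```
      #include <stdint.h>
      #include <sys/mman.h>
      #include <unistd.h>

      /* Release the whole pages covered by [ptr, ptr+size) back to the OS,
       * bypassing the allocator. Partial pages at the edges are kept. */
      static void release_pages(void *ptr, size_t size) {
          size_t page = (size_t)sysconf(_SC_PAGESIZE);
          uintptr_t start = ((uintptr_t)ptr + page - 1) & ~(page - 1); /* round up */
          uintptr_t end = ((uintptr_t)ptr + size) & ~(page - 1);       /* round down */
          if (end > start) madvise((void *)start, end - start, MADV_DONTNEED);
      }
      ```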
      
      There are some buffers we can release in the fork child process:
      - **Serialized key-values**
        the fork child process never accesses serialized key-values, so we try to free them.
        Because we can only release big chunks of memory, and it is time consuming to iterate all
        items/members/fields/entries of complex data types, we decided to iterate them and
        try to release them only when the average size of an item/member/field/entry is more
        than the page size of the OS.
      - **Replication backlog**
        Because the replication backlog is a cyclic buffer, it changes quickly when redis has heavy
        write traffic, but the fork child process doesn't need to access it.
      - **Client buffers**
        If clients send requests while the fork child process exists, client buffers also change
        frequently. This memory includes the client query buffer, output buffer, and memory used by the client struct.
      
      To get the child process's peak private dirty memory, we need to track peak memory instead
      of last used memory, because the child process may continue to release memory (since
      COW used to only grow till now, the last measurement was equivalent to the peak).
      We also add a new `current_cow_peak` info variable (to complement the existing
      `current_cow_size`).
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • Unified Lua and modules reply parsing and added RESP3 support to RM_Call (#9202) · 2237131e
      Meir Shpilraien (Spielrein) authored
      
      
      ## Current state
      1. Lua has its own parser that handles parsing `redis.call` replies and translates them
        to Lua objects that can be used by the user's Lua code. The parser partially handles
        resp3 (missing big number, verbatim, attribute, ...)
      2. Modules have their own parser that handles parsing `RM_Call` replies and translates
        them to RedisModuleCallReply objects. The parser does not support resp3.
      
      In addition, in the future we want to add Redis Functions (#8693), which will probably
      support more languages. At some point, maintaining so many parsers will stop
      scaling (bug fixes and protocol changes will need to be applied to all of them).
      We would probably end up with different parsers that support different parts of the
      resp protocol (as we already have today with Lua and modules).
      
      ## PR Changes
      This PR attempts to unify the reply parsing of Lua and modules (and, in the future,
      Redis Functions) by introducing a new parser unit (`resp_parser.c`). The new parser
      handles parsing the reply and calls different callbacks to allow its users (another
      unit that uses the parser, i.e. Lua, modules, or Redis Functions) to analyze the reply.
      
      ### Lua API Additions
      The code that handled reply parsing in `scripting.c` was removed. Instead, it uses
      the resp_parser to parse the reply and create a Lua object out of it. As mentioned
      above, the Lua parser did not handle parsing big numbers, verbatim strings, and attributes.
      The new parser can handle those, so Lua gets them for free.
      They are translated to Lua objects in the following way:
      1. Big Number - Lua table `{'big_number':'<str representation for big number>'}`
      2. Verbatim - Lua table `{'verbatim_string':{'format':'<verbatim format>', 'string':'<verbatim string value>'}}`
      3. Attribute - currently ignored and not exposed to the Lua parser; another issue will be opened to decide how to expose it.
      
      Tests were added to check resp3 reply parsing on Lua
      
      ### Modules API Additions
      The reply parsing code in `module.c` was also removed and the new resp_parser is used instead.
      In addition, the RedisModuleCallReply was extracted to a separate unit located in `call_reply.c`
      (in the future, this unit will also be used by Redis Functions). A nice side effect of unified parsing is
      that modules now also support resp3. Resp3 can be enabled by giving `3` as a parameter in the
      fmt argument of `RM_Call` (see the sketch after the API list below). It is also possible to give `0`,
      which indicates auto mode, i.e. Redis will automatically choose the reply protocol based on the
      current client set on the RedisModuleCtx (this mode will mostly be used when the module wants
      to pass the reply to the client as is).
      In addition, the following RedisModuleAPI were added to allow analyzing resp3 replies:
      
      * New RedisModuleCallReply types:
         * `REDISMODULE_REPLY_MAP`
         * `REDISMODULE_REPLY_SET`
         * `REDISMODULE_REPLY_BOOL`
         * `REDISMODULE_REPLY_DOUBLE`
         * `REDISMODULE_REPLY_BIG_NUMBER`
         * `REDISMODULE_REPLY_VERBATIM_STRING`
         * `REDISMODULE_REPLY_ATTRIBUTE`
      
      * New RedisModuleAPI:
         * `RedisModule_CallReplyDouble` - getting double value from resp3 double reply
         * `RedisModule_CallReplyBool` - getting boolean value from resp3 boolean reply
         * `RedisModule_CallReplyBigNumber` - getting big number value from resp3 big number reply
         * `RedisModule_CallReplyVerbatim` - getting format and value from resp3 verbatim reply
         * `RedisModule_CallReplySetElement` - getting element from resp3 set reply
         * `RedisModule_CallReplyMapElement` - getting key and value from resp3 map reply
         * `RedisModule_CallReplyAttribute` - getting a reply attribute
         * `RedisModule_CallReplyAttributeElement` - getting key and value from resp3 attribute reply
         
      * New context flags:
         * `REDISMODULE_CTX_FLAGS_RESP3` - indicate that the client is using resp3
      
      Tests were added to check the new RedisModuleAPI
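      
      For illustration, a module might request a resp3 reply and walk a map like this (a sketch based on the description above; the element-getter signature is an assumption):
      ```
      /* Sketch: issue HGETALL with resp3 ('3' in the fmt string) and iterate
       * the resulting map reply. */
      void inspect_hash(RedisModuleCtx *ctx, RedisModuleString *key) {
          RedisModuleCallReply *r = RedisModule_Call(ctx, "HGETALL", "3s", key);
          if (r && RedisModule_CallReplyType(r) == REDISMODULE_REPLY_MAP) {
              size_t len = RedisModule_CallReplyLength(r);
              for (size_t i = 0; i < len; i++) {
                  RedisModuleCallReply *field, *value;
                  RedisModule_CallReplyMapElement(r, i, &field, &value);
                  /* ... inspect field/value ... */
              }
          }
          if (r) RedisModule_FreeCallReply(r);
      }
      ```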
      
      ### Modules API Changes
      * RM_ReplyWithCallReply might return REDISMODULE_ERR if the given CallReply is in resp3
        but the client expects resp2. This is not a breaking change, because in order to get a resp3
        CallReply one needs to explicitly give `3` as a parameter in the fmt argument of
        `RM_Call` (as mentioned above).
      
      Tests were added to check this change
      
      ### More small Additions
      * Added `debug set-disable-deny-scripts`, which allows turning the commands' no-script
      flag protection on and off. This is used by the Lua resp3 tests so that it is possible to run `debug protocol`
      and check the resp3 parsing code.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
  18. 03 Aug, 2021 1 commit
  19. 26 Jul, 2021 1 commit
    • Add INFO stat total_eviction_exceeded_time and current_eviction_exceeded_time (#9031) · 17511df5
      Huang Zhw authored
      
      
      Add two INFO metrics:
      ```
      total_eviction_exceeded_time:69734
      current_eviction_exceeded_time:10230
      ```
      `current_eviction_exceeded_time`, if greater than 0, indicates how long used memory has been greater than `maxmemory` (i.e. we are still over the limit). If used memory drops below `maxmemory`, this metric is reset to 0.
      `total_eviction_exceeded_time` is the total time used memory was greater than `maxmemory` since server startup.
      The units of these two metrics are ms.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  20. 21 Jul, 2021 1 commit
    • On 32 bit platform, the bit position of GETBIT/SETBIT/BITFIELD/BITCOUNT/BITPOS may overflow (see CVE-2021-32761) (#9191) · 71d45287
      Huang Zhw authored
      
      GETBIT and SETBIT may access the wrong address because of wrapping.
      BITCOUNT and BITPOS may return wrapped results.
      BITFIELD may access the wrong address, and may also allocate insufficient memory and segfault (see CVE-2021-32761).
      
      This commit uses `uint64_t` or `long long` instead of `size_t`.
      Related: https://github.com/redis/redis/pull/8096
      
      On a 32 bit platform:
      > setbit bit 4294967295 1
      (integer) 0
      > config set proto-max-bulk-len 536870913
      OK
      > append bit "\xFF"
      (integer) 536870913
      > getbit bit 4294967296
      (integer) 0
      
      When the bit index is larger than 4294967295, size_t can't hold the bit index. In the past, `proto-max-bulk-len` was limited to 536870912, so there was no problem.
      
      After this commit, the bit position is stored in `uint64_t` or `long long`, so when `proto-max-bulk-len > 536870912`, 32 bit platforms can still be correct.
      
      For 64 bit platforms this problem still exists. The main reason is that the bit position is 8 times the byte position,
      so when proto-max-bulk-len is very large, the bit position may overflow.
      But on 64 bit platforms we don't have such long strings, so this bug may never happen.
      
      Additionally, this commit adds a test that costs `512MB` of memory and is tagged `large-memory`. The FreeBSD and Valgrind CIs ignore this test.
  21. 14 Jul, 2021 1 commit
    • Test infra, handle RESP3 attributes and big-numbers and bools (#9235) · 6a5bac30
      Oran Agra authored
      - promote the code in DEBUG PROTOCOL to addReplyBigNum
      - DEBUG PROTOCOL ATTRIB skips the attribute when the client is RESP2
      - networking.c's addReply for push and attribute types generates an assertion when
        called on a RESP2 client; anything else would produce a broken
        protocol that clients can't handle.
  22. 11 Jul, 2021 1 commit
    • Fail EXEC command in case a watched key is expired (#9194) · ac8b1df8
      perryitay authored
      
      
      There are two issues fixed in this commit:
      1. we want to fail the EXEC command in case there is a watched key that's logically
         expired but not yet deleted by active expire or lazy expire.
      2. we saw that the cached time is currently updated in every `call()` (including nested calls).
         Since this time is also used for the isKeyExpired comparison, we want to update
         the cached time only in the first call (execCommand).
      Co-authored-by: Oran Agra <oran@redislabs.com>
  23. 03 Jul, 2021 1 commit
  24. 01 Jul, 2021 1 commit
    • Fix CLIENT UNBLOCK crashing modules. (#9167) · aa139e2f
      Yossi Gottlieb authored
      Modules that use background threads with thread safe contexts are likely
      to use RM_BlockClient() without a timeout function, because they do not
      set up a timeout.
      
      Before this commit, `CLIENT UNBLOCK` would result with a crash as the
      `NULL` timeout callback is called. Beyond just crashing, this is also
      logically wrong as it may throw the module into an unexpected client
      state.
      
      This commit makes `CLIENT UNBLOCK` on such clients behave the same as on
      any other client that is not in a blocked state and therefore cannot be
      unblocked.
  25. 24 Jun, 2021 1 commit
    • Add bind-source-addr configuration argument. (#9142) · f233c4c5
      Yossi Gottlieb authored
      In the past, the first bind address that was explicitly specified was
      also used to bind outgoing connections. This could result in some
      problems. For example: on some systems, using `bind 127.0.0.1` would
      result in outgoing connections also binding to `127.0.0.1` and failing
      to connect to remote addresses.
      
      With the recent change to the way `bind` is handled, this presented
      other issues:
      
      * The default first bind address is '*' which is not a valid address.
      * We make no distinction between user-supplied config that is identical
      to the default, and the default config.
      
      This commit addresses both these issues by introducing an explicit
      configuration parameter to control the bind address on outgoing
      connections.
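      For example, in redis.conf (hypothetical addresses):
      ```
      bind 127.0.0.1 10.0.0.5      # where we listen
      bind-source-addr 10.0.0.5    # source address for outgoing connections
      ```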
  26. 22 Jun, 2021 1 commit
    • Improve bind and protected-mode config handling. (#9034) · 07b0d144
      Yossi Gottlieb authored
      * Specifying an empty `bind ""` configuration prevents Redis from listening on any TCP port. Before this commit, such configuration was not accepted.
      * Using `CONFIG GET bind` will always return an explicit configuration value. Before this commit, if a bind address was not specified the returned value was empty (which was an anomaly).
      
      Another behavior change is that modifying the `bind` configuration to a non-default value will NO LONGER implicitly disable protected-mode.
  27. 16 Jun, 2021 2 commits
    • Remove gopher protocol support. (#9057) · 362786c5
      yoav-steinberg authored
      Gopher support was added mainly because it was simple (trivial to add).
      But apparently even something that was trivial at the time can cause complications
      down the line when adding more features.
      We recently ran into a few issues with io-threads conflicting with the gopher support.
      We had to either complicate the code further in order to solve them, or drop gopher.
      AFAIK it's completely unused, so we'd rather chuck it than keep supporting it.
    • Enhance mem_usage/free_effort/unlink/copy callbacks and add GetDbFromIO api. (#8999) · e0cd3ad0
      chenyang8094 authored
      Create new module-type enhanced callbacks: mem_usage2, free_effort2, unlink2, copy2.
      These are given a context pointer from which the module can obtain the key name and database id.
      In addition, the digest and defrag contexts can now be used to obtain the key name and database id.