1. 12 Dec, 2022 1 commit
    • Meir Shpilraien (Spielrein)'s avatar
      Avoid crash on crash report when a bad function pointer was called (#11298) · b1ede212
      Meir Shpilraien (Spielrein) authored
      If Redis crashes due to calling an invalid function pointer,
      the `backtrace` function will try to dereference this invalid pointer,
      which will cause a crash inside the crash report and kill
      the process before all the crash report information is printed.
      
      Example:
      
      ```
      === REDIS BUG REPORT START: Cut & paste starting from here ===
      198672:M 19 Sep 2022 18:06:12.936 # Redis 255.255.255 crashed by signal: 11, si_code: 1
      198672:M 19 Sep 2022 18:06:12.936 # Accessing address: 0x1
      198672:M 19 Sep 2022 18:06:12.936 # Crashed running the instruction at: 0x1
      // here the process crashes
      ```
      
      This PR tries to fix this crash by:
      1. Identify the issue when it happened.
      2. Replace the invalid pointer with a pointer to some dummy function
         so that `backtrace` will not crash.
      
      The identification is done by comparing `eip` to `info->si_addr`; if they
      are the same, we know the crash happened at the same address it was trying to
      access, so we can conclude that it tried to call an invalid function pointer.
      
      To replace the invalid pointer we introduce a new function, `setMcontextEip`,
      which is very similar to `getMcontextEip` and knows how to set the `eip` for the
      different supported OS's. After printing the trace we restore the old `eip` value.
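      
      A minimal sketch of the idea (not the exact Redis code; the `getMcontextEip`/`setMcontextEip`
      signatures and the dummy-function name are assumptions for illustration):
      
      ```c
      #include <signal.h>
      #include <ucontext.h>
      
      /* Per-OS instruction-pointer accessors from debug.c; signatures assumed here. */
      void *getMcontextEip(ucontext_t *uc);
      void setMcontextEip(ucontext_t *uc, void *eip);
      
      /* Hypothetical dummy target so backtrace() has a valid address to walk. */
      static void invalidFunctionWasCalled(void) {}
      
      static void safeLogStackTrace(siginfo_t *info, ucontext_t *uc) {
          void *eip = getMcontextEip(uc);
          int bad_call = (eip != NULL && eip == info->si_addr); /* crashed on the call itself */
          if (bad_call) setMcontextEip(uc, (void *)invalidFunctionWasCalled);
          /* ... run backtrace()/backtrace_symbols_fd() here as the crash report does ... */
          if (bad_call) setMcontextEip(uc, eip); /* put the original value back */
      }
      ```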
      
      (cherry picked from commit 0bf90d94)
      b1ede212
  2. 21 Sep, 2022 1 commit
    • sundb's avatar
      Fix crash due to delete entry from compress quicklistNode and wrongly split quicklistNode (#11242) · 01358df3
      sundb authored
      This PR mainly deals with 2 crashes introduced in #9357,
      and fixes the QUICKLIST-PACKED-THRESHOLD mess in external test mode.
      
      1. Fix crash due to deleting an entry from a compressed quicklistNode
         When inserting a large element, we need to create a new quicklistNode first
         and then delete its previous element; if the node containing the deleted element
         is compressed, this will cause a crash.
         We now add a `dont_compress` flag to quicklistNode; if we want to keep using a
         quicklistNode after some operation, we can use this flag as follows:
      
          ```c
          node->dont_compress = 1; /* Prevent this node from being compressed */
          some_operation(node); /* This operation might try to compress this node */
          some_other_operation(node); /* We can use this node without decompressing it */
          node->dont_compress = 0; /* Re-enable compression */
          quicklistCompressNode(node);
          ```
      
         Perhaps in the future, we could simply prevent the node holding the current entry
         from being compressed during the iterator loop, but that would require more work.
      
      2. Fix crash due to wrongly splitting a quicklistNode
         Before #9357, the offset param of `_quicklistSplitNode()` could not be negative.
         Now, when offset is negative, the split extent will be wrong, as in the
         following example (a possible normalization is sketched after this list):
          ```c
          int orig_start = after ? offset + 1 : 0;
          int orig_extent = after ? -1 : offset;
          int new_start = after ? 0 : offset;
          int new_extent = after ? offset + 1 : -1;
        /* offset: -2, after: 1, node->count: 2 */
        /* current wrong range: [-1,-1] [0,-1]  */
        /* correct range:       [1,-1]  [0,1]   */
          ```
      
         Because only `_quicklistInsert()` splits the quicklistNode, and only
         `quicklistInsertAfter()` and `quicklistInsertBefore()` call `_quicklistInsert()`,
         only `quicklistReplaceEntry()` and `listTypeInsert()` could trigger this crash.
         But the iterator of `listTypeInsert()` always goes from head to tail (`iter->offset`
         is always positive), so it is not affected.
         The final conclusion is that this crash only occurs when we insert a large element
         at a negative index into a list, which affects the `LSET` command and the
         `RM_ListSet` module API.
           
      3. In external test mode, we need to restore the quicklist packed threshold
         at the end of the test.
      4. Show `node->count` in quicklistRepr().
      5. Add new tcl proc `config_get_set` to support restoring config in tests.
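      
      A minimal sketch of one possible fix for the split-range issue, assuming the patch
      simply normalizes a negative offset against `node->count` before computing the
      ranges (the actual change in this commit may differ):
      
      ```c
      /* With offset -2 and count 2, normalization yields offset 0 and the
       * ranges become [1,-1] and [0,1], matching the "correct range" above. */
      if (offset < 0) offset += node->count;
      int orig_start  = after ? offset + 1 : 0;
      int orig_extent = after ? -1 : offset;
      int new_start   = after ? 0 : offset;
      int new_extent  = after ? offset + 1 : -1;
      ```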
      
      (cherry picked from commit 13d25dd9)
      01358df3
  3. 06 Jun, 2022 1 commit
  4. 26 Apr, 2022 1 commit
    • Madelyn Olson's avatar
      Set replicas to panic on disk errors, and optionally panic on replication errors (#10504) · 6fa8e4f7
      Madelyn Olson authored
      * Till now, replicas that were unable to persist would still execute the commands
        they got from the master; now they'll panic by default, and we add a new
        `replica-ignore-disk-errors` config to change that.
      * Till now, when a command failed on a replica or during AOF loading, it only logged a
        warning and a stat; we add a new `propagation-error-behavior` config to allow
        panicking in that state (may become the default one day).
      
      Note that commands that fail on the replica can either indicate a bug that could
      cause data inconsistency between the replica and the master, or they could, in some
      cases (specifically in previous versions), be the result of a command (e.g. EVAL)
      that failed on the master but still had to be propagated so it fails on the replica as well.
      6fa8e4f7
  5. 28 Mar, 2022 1 commit
  6. 22 Mar, 2022 1 commit
  7. 01 Mar, 2022 1 commit
    • ranshid's avatar
      Introduce debug command to disable reply buffer resizing (#10360) · 9b15dd28
      ranshid authored
      In order to resolve some flaky tests which rely heavily on examining the memory
      footprint, we introduce the following fixes:
      
      # Fix in client-eviction test - by @yoav-steinberg 
      Sometimes the libc allocator can use different sizes for client struct allocations;
      this may cause unexpected memory calculations and fail the test.
      
      # Introduce new DEBUG command for disabling reply buffer resizing
      In order to eliminate reply buffer resizing during specific tests,
      we introduce the ability to disable (and enable) the resizing cron job.
      
      Co-authored-by: yoav-steinberg yoav@redislabs.com
      9b15dd28
  8. 22 Feb, 2022 1 commit
    • ranshid's avatar
      introduce dynamic client reply buffer size - save memory on idle clients (#9822) · 47c51d0c
      ranshid authored
      
      
      In the current implementation, a simple idle client which serves no traffic still
      uses ~17Kb of memory. This is mainly due to a fixed-size reply buffer,
      currently set to 16kb.
      
      We have encountered some cases in which the server operates in low memory environments.
      In such cases, a user who wishes to create large connection pools to support potential burst periods
      will exhaust a large amount of memory to maintain connected idle clients.
      Some users may choose to "sacrifice" performance in order to save memory.
      
      This commit introduces a dynamic mechanism to shrink and expand the client reply buffer based on
      a periodically observed peak.
      The algorithm works as follows (a self-contained sketch follows the list):
      1. Each time a client reply buffer has been fully written, the last recorded peak is updated:
      new peak = MAX(last peak, current written size)
      2. During clients cron we check for each client if the last observed peak was:
           a. matching the current buffer size - in which case we expand (resize) the buffer size by 100%
           b. less than half the buffer size - in which case we shrink the buffer size by 50%
      3. In any case we will **not** resize the buffer in case:
          a. the current buffer peak is less than the current buffer usable size and higher than 1/2 the
            current buffer usable size
          b. the value of (current buffer usable size/2) is less than 1KiB
          c. the value of (current buffer usable size*2) is larger than 16KiB
      4. The peak value is reset to the current buffer position once every **5** seconds. We maintain a new
         field in the client structure (buf_peak_last_reset_time) which is used to keep track of how long has
         passed since the last buffer peak reset.
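      
      A self-contained sketch of the resize decision described above; the struct and
      function names here are illustrative, not the actual Redis fields:
      
      ```c
      #include <stddef.h>
      #include <stdint.h>
      
      typedef struct {
          size_t buf_usable_size;           /* current reply buffer size */
          size_t buf_peak;                  /* highest fill level since the last reset */
          size_t bufpos;                    /* current write position */
          int64_t buf_peak_last_reset_time; /* ms timestamp of the last peak reset */
      } reply_buf_state;
      
      /* Returns the buffer size the cron would pick, per rules 2-4 above. */
      static size_t cron_resize_decision(reply_buf_state *s, int64_t now_ms) {
          size_t size = s->buf_usable_size, new_size = size;
          if (s->buf_peak >= size && size * 2 <= 16 * 1024)
              new_size = size * 2;          /* peak filled the buffer: expand by 100% */
          else if (s->buf_peak < size / 2 && size / 2 >= 1024)
              new_size = size / 2;          /* peak under half the buffer: shrink by 50% */
          if (now_ms - s->buf_peak_last_reset_time >= 5000) {
              s->buf_peak = s->bufpos;      /* reset the peak once every 5 seconds */
              s->buf_peak_last_reset_time = now_ms;
          }
          return new_size;
      }
      ```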
      
      ### **Interface changes:**
      **CLIENT LIST** - now contains 2 new extra fields:
      rbs=<the current size in bytes of the client reply buffer>
      rbp=<the current value in bytes of the last observed buffer peak position>
      
      **INFO STATS** - now contains 2 new statistics:
      reply_buffer_shrinks = <total number of buffer shrinks performed>
      reply_buffer_expands = <total number of buffer expands performed>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Yoav Steinberg <yoav@redislabs.com>
      47c51d0c
  9. 16 Feb, 2022 1 commit
  10. 10 Feb, 2022 1 commit
  11. 08 Feb, 2022 1 commit
    • Wen Hui's avatar
      Make INFO command variadic (#6891) · 2e1bc942
      Wen Hui authored
      
      
      This is an enhancement for the INFO command. Previously INFO only supported one argument
      for a single info section; if a user wanted information from more categories, they had to either
      perform INFO all / default or call INFO multiple times.
      
      **Description of the feature**
      
      The goal of adding this feature is to let the user retrieve multiple categories via the INFO
      command, and still avoid emitting the same section twice.
      
      A use case for this is Redis Sentinel, which periodically calls the INFO command to refresh
      info from monitored Masters/Slaves, yet only the Server and Replication categories are used for
      parsing the information. If the INFO command can return just the categories the client side
      needs, it can save a lot of client-side parsing time as well as network bandwidth.
      
      **Implementation**
      To share code between redis, sentinel, and other users of INFO (DEBUG and modules),
      we have a new `genInfoSectionDict` function that returns a dict and some boolean flags
      (e.g. `all`) to the caller (built from user input).
      Sentinel later purges unwanted sections from it, and the result is then forwarded to `genRedisInfoString`.
      
      **Usage Examples**
      INFO Server Replication   
      INFO CPU Memory
      INFO default commandstats
      Co-authored-by: Oran Agra <oran@redislabs.com>
      2e1bc942
  12. 06 Feb, 2022 1 commit
  13. 04 Feb, 2022 1 commit
    • Binbin's avatar
      Fix SENTINEL SET config rewrite test (#10232) · d7fcb3c5
      Binbin authored
      Change the sentinel config file to a directory in the SENTINEL SET test,
      so it will now fail on the `rename` in `rewriteConfigOverwriteFile`.
      
      The test used to set the sentinel config file permissions to `000` to
      simulate failure, but that fails on centos7 / freebsd / alpine. (introduced in #10151)
      
      Other changes:
      1. More error messages after a config rewrite failure.
      2. Modify the arg name `force_all` in `rewriteConfig` to `force_write`. (was renamed in #9304)
      3. Fix a typo in debug quicklist-packed-threshold, then -> than. (#9357)
      d7fcb3c5
  14. 18 Jan, 2022 1 commit
    • Wang Yuan's avatar
      Use const char pointer in redismodule.h as far as possible (#10064) · d697daa7
      Wang Yuan authored
      When I used C++ to develop a Redis module, I used `string.data()` as the second parameter `ele`
      of `RedisModule_DigestAddStringBuffer`, but there was a warning. Since we never change `ele`,
      I think we should use `const char` for it.
      
      This PR adds const to just a handful of module APIs that required it, all not very widely used.
      The implication is a breaking change in terms of compilation error that's easy to resolve, and no ABI impact.
      The affected APIs are around Digest, Info injection, and Cluster bus messages.
      d697daa7
  15. 03 Jan, 2022 2 commits
    • chenyang8094's avatar
      Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788) · 87789fae
      chenyang8094 authored
      
      
      Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
      Introducing a folder with multiple AOF files tracked by a manifest file.
      
      The main issues with the original AOFRW mechanism are:
      * buffering of commands that are processed during rewrite (consuming a lot of RAM)
      * freezes of the main process when the AOFRW completes to drain the remaining part of the buffer and fsync it.
      * double disk IO for the data that arrives during AOFRW (had to be written to both the old and new AOF files)
      
      The main modifications of this PR:
      1. Remove the AOF rewrite buffer and related code.
      2. Divide the AOF into multiple files, classified into two types: the `BASE` type
        represents the full amount of data (maybe in AOF or RDB format) after each AOFRW, and there is
        at most one `BASE` file; the `INCR` type, of which there may be more than one, represents the
        incremental commands since the last AOFRW.
      3. Use an AOF manifest file to record and manage the AOF files mentioned above.
      4. The original `appendfilename` configuration will be the base part of the new file names, for example:
        `appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof` (an illustrative layout is shown after this list)
      5. Add manifest-related TCL tests, and modify some existing tests that depend on the `appendfilename`
      6. Remove the `aof_rewrite_buffer_length` field in info.
      7. Add `aof-disable-auto-gc` configuration. By default we automatically delete HISTORY type AOFs.
        It also gives users the opportunity to preserve the history AOFs; just for testing use for now.
      8. Add an AOFRW limiting measure. When the number of AOFRW failures reaches the threshold (3 times for now),
        we will delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it will be
        delayed by 2 minutes, then 4, 8, 16, up to a maximum delay of 60 minutes (1 hour). During the limit
        period, we can still use the 'bgrewriteaof' command to execute AOFRW immediately.
      9. Support upgrading (loading) data from older Redis versions.
      10. Add `appenddirname` configuration, as the directory name of the append only files. All AOF files and
        manifest file will be placed in this directory.
      11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise redis will exit even if
        `aof-load-truncated` is enabled.
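      
      An illustrative layout of the resulting files (the directory and manifest names below are
      assumptions based on the defaults; only the two data file names appear in this commit message):
      
      ```
      appendonlydir/                     # directory named by `appenddirname`
        appendonly.aof.1.base.rdb        # BASE: full data set from the last AOFRW
        appendonly.aof.2.incr.aof        # INCR: commands written since that AOFRW
        appendonly.aof.manifest          # manifest tracking the files above
      ```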
      Co-authored-by: Oran Agra <oran@redislabs.com>
      87789fae
    • Madelyn Olson's avatar
      Implement clusterbus message extensions and cluster hostname support (#9530) · 5460c100
      Madelyn Olson authored
      Implement the ability for cluster nodes to advertise their location with extension messages.
      5460c100
  16. 02 Jan, 2022 1 commit
    • yoav-steinberg's avatar
      Generate RDB with Functions only via redis-cli --functions-rdb (#9968) · 1bf6d6f1
      yoav-steinberg authored
      
      
      This is needed in order to ease the deployment of functions for ephemeral cases, where a user
      needs to spin up a server with functions pre-loaded.
      
      #### Details:
      
      * Added a `--functions-rdb` option to _redis-cli_ (usage sketched after this list).
      * Functions-only RDB via `REPLCONF rdb-filter-only functions`. This is a placeholder for a space
        separated inclusion filter for the RDB. In the future it can be `REPLCONF rdb-filter-only
        "functions db:3 key-pattern:user*"` and a complementing `rdb-filter-exclude` `REPLCONF`
        can also be added.
      * Handle "slave requirements" specification to RDB saving code so we can use the same RDB
        when different slaves express the same requirements (like functions-only) and not share the
        RDB when their requirements differ. This is currently just a flags `int`, but can be extended to
        a more complex structure with various filter fields.
      * Make sure to support filters only in diskless replication mode (so as not to override the persistence file);
        we do that by forcing diskless replication (even if disabled by config).
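      
      A possible invocation of the new option (the output path is illustrative):
      
      ```
      # Dump only the functions (no keys) from a running server into an RDB file.
      redis-cli --functions-rdb /tmp/functions.rdb
      ```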
      
      other changes:
      * some refactoring in rdb.c (extract portion of a big function to a sub-function)
      * rdb_key_save_delay used in AOFRW too
      * sendChildInfo takes the number of updated keys (incremental, rather than absolute)
      Co-authored-by: Oran Agra <oran@redislabs.com>
      1bf6d6f1
  17. 21 Dec, 2021 2 commits
    • Meir Shpilraien (Spielrein)'s avatar
      Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953) · 3bcf1084
      Meir Shpilraien (Spielrein) authored
      The issue with MAY_REPLICATE is that all automatic mechanisms to handle
      write commands will not work. This requires special treatment for:
      * Not allowing those commands to be executed on an RO replica.
      * Allowing those commands to be executed on an RO replica from the primary connection.
      * Allowing those commands to be executed on an RO replica from the AOF.
      
      By setting those commands as WRITE commands we are getting all those properties from Redis.
      Test was added to verify that those properties work as expected.
      
      In addition, rearrange when and where functions are flushed. Before this PR functions were
      flushed manually on `rdbLoadRio` and cleaned manually on failure. This contradicts the
      assumption that functions are data and need to be created/deleted alongside the
      data. As a side effect of this, for example, `debug reload noflush` did not flush the data but
      did flush the functions, while `debug loadaof` flushed the data but not the functions.
      This PR moves function deletion into `emptyDb`. `emptyDb` (renamed to `emptyData`) will
      now accept an additional flag, `NOFUNCTIONS`, which specifically indicates that we do not
      want to flush the functions (in all other cases, functions will be flushed). The new flag is used
      on FLUSHALL and FLUSHDB only! Tests were added to `debug reload` and `debug loadaof`
      to verify that functions behave the same as the data.
      
      Notice that because functions are now deleted alongside the data, we cannot allow
      `CLUSTER RESET` to be called from within a function (it would cause the function to be released
      while running); this PR adds the `NO_SCRIPT` flag to `CLUSTER RESET` so it will not be possible
      to call it from within a function. The other cluster commands are allowed from within a
      function (there are use cases that use `GETKEYSINSLOT` to iterate over all the keys in a
      given slot). Tests were added to verify that `CLUSTER RESET` is denied from within a script.
      
      Another small change on this PR is that `RDBFLAGS_ALLOW_DUP` is also applicable on functions.
      When loading functions, if this flag is set, we will replace old functions with new ones on collisions. 
      3bcf1084
    • zhugezy's avatar
      Remove EVAL script verbatim replication, propagation, and deterministic execution logic (#9812) · 1b0968df
      zhugezy authored
      
      
      # Background
      
      The main goal of this PR is to remove the logic related to Lua script verbatim replication,
      keeping only the effects replication logic, which has been the default since Redis 5.0.
      As a result, Lua in Redis 7.0 behaves the same as Redis 6.0 with the default
      configuration, from the users' point of view.
      
      There are lots of reasons to remove verbatim replication.
      Antirez has listed some of the benefits in Issue #5292:
      
      >1. No longer need to explain to users side effects into scripts.
          They can do whatever they want.
      >2. No need for a cache about scripts that we sent or not to the slaves.
      >3. No need to sort the output of certain commands inside scripts
          (SMEMBERS and others): this both simplifies and gains speed.
      >4. No need to store scripts inside the RDB file in order to startup correctly.
      >5. No problems about evicting keys during the script execution.
      
      Looking back at Redis 5.0, antirez and the core team decided to set the config
      `lua-replicate-commands yes` by default instead of removing verbatim replication
      outright, in case bad situations happened. Three years later, ahead of Redis 7.0,
      it's time to remove it formally.
      
      # Changes
      
      - configuration for lua-replicate-commands removed
        - created config file stub for backward compatibility
      - Replication script cache removed
        - this is useless under script effects replication
        - relevant statistics also removed
      - script persistence in RDB files is also removed
      - Propagation of SCRIPT LOAD and SCRIPT FLUSH to replica / AOF removed
      - Deterministic execution logic in scripts removed (i.e. don't run write commands
        after random ones, and sorting output of commands with random order)
        - the flags indicating which commands have non-deterministic results are kept as hints to clients.
      - `redis.replicate_commands()` & `redis.set_repl()` changed
        - now `redis.replicate_commands()` does nothing and returns 1
        - ...and `redis.set_repl()` can now be issued before `redis.replicate_commands()`
      - Relevant TCL cases adjusted
      - DEBUG lua-always-replicate-commands removed
      
      # Other changes
      - Fix a recent bug comparing CLIENT_ID_AOF to original_client->flags instead of id. (introduced in #9780)
      Co-authored-by: Oran Agra <oran@redislabs.com>
      1b0968df
  18. 01 Dec, 2021 1 commit
    • meir@redislabs.com's avatar
      Redis Functions - Move Lua related variable into luaCtx struct · e0cd580a
      meir@redislabs.com authored
      The following variables were renamed:
      1. lua_caller 			-> script_caller
      2. lua_time_limit 		-> script_time_limit
      3. lua_timedout 		-> script_timedout
      4. lua_oom 			-> script_oom
      5. lua_disable_deny_script 	-> script_disable_deny_script
      6. in_eval			-> in_script
      
      The following variables were moved into `lctx` under eval.c:
      1.  lua
      2.  lua_client
      3.  lua_cur_script
      4.  lua_scripts
      5.  lua_scripts_mem
      6.  lua_replicate_commands
      7.  lua_write_dirty
      8.  lua_random_dirty
      9.  lua_multi_emitted
      10. lua_repl
      11. lua_kill
      12. lua_time_start
      13. lua_time_snapshot
      
      This commit has a low risk of introducing any issues, as it
      just moves variables around and does not change any logic.
      e0cd580a
  19. 30 Nov, 2021 1 commit
  20. 24 Nov, 2021 1 commit
    • sundb's avatar
      Replace ziplist with listpack in quicklist (#9740) · 45129059
      sundb authored
      
      
      Part three of implementing #8702, following #8887 and #9366 .
      
      ## Description of the feature
      1. Replace the ziplist container of quicklist with listpack.
      2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.
      
      ## Interface changes
      1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
      2. Replace `debug ziplist` command with `debug listpack`.
      
      ## Internal changes
      1. Add `lpMerge` to merge two listpacks. (same as `ziplistMerge`)
      2. Add `lpRepr` to print info about a listpack; used in debugCommand and `quicklistRepr`. (same as `ziplistRepr`)
      3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357).
          It represents that a quicklistNode is a packed node, as opposed to a plain node.
      4. Remove `createZiplistObject` method, which is never used.
      5. Calculate listpack entry size using overhead overestimation in `quicklistAllowInsert`.
          We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
      
      ## Improvements
      1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
      2. Optimize `quicklistAppendPlainNode` to avoid memcpy'ing data.
      
      ## Bugfix
      1. Fix crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.
      
      ## Test
      1. Add unittest for `lpMerge`.
      2. Modify the old quicklist ziplist corrupt dump test.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      45129059
  21. 21 Nov, 2021 1 commit
    • Oran Agra's avatar
      Fix false positive leak reported by GCC ASAN (#9816) · 183b90a6
      Oran Agra authored
      Leak found by the corrupt-dump-fuzzer when using GCC ASAN, which seems
      to falsely report leaks on pointers kept only on the stack when calling exit.
      Instead we now use _exit on panic / assert to skip these leak checks.
      
      Additionally, check for sanitizer warnings in the corrupt-dump-fuzzer between iterations,
      so that when something is found we know which test to relate it to (and it prints the reproduction command list).
      183b90a6
  22. 11 Nov, 2021 1 commit
    • Ozan Tezcan's avatar
      Add sanitizer support and clean up sanitizer findings (#9601) · b91d8b28
      Ozan Tezcan authored
      - Added sanitizer support. `address`, `undefined` and `thread` sanitizers are available.
      - To build Redis with the desired sanitizer: `make SANITIZER=undefined` (all three invocations are shown after this list).
      - There were some sanitizer findings; cleaned up the codebase.
      - Added tests with address and undefined behavior sanitizers to daily CI.
      - Added tests with address sanitizer to the per-PR CI (smoke out mem leaks sooner).
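      
      The three supported invocations (per the bullet above):
      
      ```
      make SANITIZER=address
      make SANITIZER=undefined
      make SANITIZER=thread
      ```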
      
      Basically, there are three types of issues:
      
      **1- Unaligned load/store**: This issue may cause a crash on a platform that
      does not support unaligned access. Redis does unaligned access only on supported platforms.
      
      **2- Signed integer overflow.** Although signed overflow issues can be problematic from time to time
      and change how the compiler generates code, the current findings are mostly about signed shift or simple
      addition overflow. For most platforms Redis can be compiled for, this wouldn't cause any issue
      as far as I can tell (checked generated code on godbolt.org).
      
      **3- Minor leak** (redis-cli), **use-after-free** (just before calling exit()).
      
      UB means nothing is guaranteed and it is risky to reason about program behavior, but I don't think any
      of the fixes here are worth backporting. As sanitizers are now part of the CI, preventing new issues
      will be the real benefit.
      b91d8b28
  23. 07 Nov, 2021 1 commit
    • yoav-steinberg's avatar
      Refactor config.c for generic setter interface (#9644) · 79ac5756
      yoav-steinberg authored
      
      
      This refactors the code so that all `CONFIG SET`s and conf file loading arguments go through
      the generic config handling interface.
      
      Refactoring changes:
      - All config params go through the `standardConfig` interface (some stuff which
        is only related to the config file and not the `CONFIG` command still has special
        handling for rewrite/config file parsing, `loadmodule` for example).
      - Added `MULTI_ARG_CONFIG` flag for configs to signify they receive a variable
        number of arguments instead of a single argument. This is used to break up space
        separated arguments to `CONFIG SET` so the generic setter interface can pass
        multiple arguments to the setter function. When parsing the config file we also break
        up anything after the config name into multiple arguments to the setter function.
      
      Interface changes:
      - A side effect of the above interface is that the `bind` argument in the config file can
        be empty (no argument at all); this is treated the same as passing a single empty
        string argument (the same way `save` already worked).
      - Support rewrite and setting `watchdog-period` from config file (was only supported
        by the CONFIG command till now).
      - Another side effect is that the `save T X` config argument now supports multiple
        Time-Changes pairs in a single line like its `CONFIG SET` counterpart. So in the
        config file you can either do:
        ```
        save 3600 1
        save 600 10
        ```
        or do
        ```
        save 3600 1 600 10
        ```
      Co-authored-by: Bjorn Svensson <bjorn.a.svensson@est.tech>
      79ac5756
  24. 03 Nov, 2021 1 commit
    • perryitay's avatar
      Add support for list type to store elements larger than 4GB (#9357) · f27083a4
      perryitay authored
      
      
      Redis lists are stored in a quicklist, which is currently a linked list of ziplists.
      Ziplists are limited to storing elements no larger than 4GB, so when bigger
      items are added they get truncated.
      This PR changes quicklists so that they're capable of storing large items
      in quicklist nodes that are plain string buffers rather than ziplists.
      
      As part of the PR there were a few other changes in Redis:
      1. new DEBUG sub-commands (usage example after this list):
         - QUICKLIST-PACKED-THRESHOLD - set the threshold for the node type to be
           plain or ziplist; default 1GB
         - QUICKLIST <key> - shows low-level info about the quicklist encoding of <key>
      2. rdb format change:
         - A new type was added - RDB_TYPE_LIST_QUICKLIST_2 . 
         - container type (packed / plain) was added to the beginning of the rdb object
           (before the actual node list).
      3. testing:
         - Tests that require over 100MB will be skipped by default; a new flag was
           added to 'runtest' to run the large memory tests (not used by default)
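      
      A hypothetical usage example of the new DEBUG sub-commands (the threshold values and
      key name below are illustrative):
      
      ```
      DEBUG QUICKLIST-PACKED-THRESHOLD 1K    # elements above ~1KB go into plain nodes
      DEBUG QUICKLIST-PACKED-THRESHOLD 0     # back to the default threshold
      DEBUG QUICKLIST mylist                 # low-level info about the quicklist of "mylist"
      ```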
      Co-authored-by: sundb <sundbcn@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      f27083a4
  25. 01 Nov, 2021 1 commit
    • Oran Agra's avatar
      fix valgrind issues with long double module test (#9709) · f1f3cceb
      Oran Agra authored
      The module test in reply.tcl was introduced by #8521 but didn't run until recently (see #9639),
      and then it started failing with valgrind.
      This is because valgrind uses a 64 bit long double (unlike most other platforms that have at least 80 bits).
      But besides valgrind, the tests were also incompatible with ARM32, which also uses 64 bit long doubles.
      
      We now use an appropriate value to avoid issues with either valgrind or ARM32.
      
      In all the double tests, I use 3.141, which is safe since addReplyDouble uses
      `%.17Lg`, which is able to represent this value without adding any digits due to precision loss.
      
      In the long double tests, since we use `%.17Lf` in ld2string, it preserves 17 digits
      after the decimal point, rather than 17 significant digits (like `%.17Lg` does).
      So to make these similar, I use a value lower than 1 (no digits left of
      the period).
      
      Lastly, we have the same issue with TCL (no long doubles), so we read
      the raw protocol in that test.
      
      Note that the only error before this fix (in both valgrind and ARM32) is this:
      ```
      *** [err]: RM_ReplyWithLongDouble: a float reply in tests/unit/moduleapi/reply.tcl
      Expected '3.141' to be equal to '3.14100000000000001' (context: type eval line 2 cmd {assert_equal 3.141 [r rw.longdouble 3.141]} proc ::test)
      ```
      so the changes to debug.c and scripting.tcl aren't really needed, but I consider them a cleanup
      (i.e. scripting.c validated a different constant than the one that's sent to it from debug.c).
      
      Another unrelated change is to add the RESP version to the repeated tests in reply.tcl
      f1f3cceb
  26. 26 Sep, 2021 1 commit
    • yoav-steinberg's avatar
      Client eviction ci issues (#9549) · 66002530
      yoav-steinberg authored
      Fixing CI test issues introduced in #8687
      - valgrind warnings in readQueryFromClient when client was freed by processInputBuffer
      - adding DEBUG pause-cron for tests not to be time dependent.
      - skipping a test that depends on socket buffers / events not compatible with TLS
      - making sure client got subscribed by not using deferring client
      66002530
  27. 23 Sep, 2021 1 commit
    • yoav-steinberg's avatar
      Client eviction (#8687) · 2753429c
      yoav-steinberg authored
      
      
      ### Description
      A mechanism for disconnecting clients when the total memory used by all connected clients is above a
      configured limit. This prevents eviction or OOM caused by accumulated memory used
      by all clients. It's a complementary mechanism to the `client-output-buffer-limit`
      mechanism, which takes into account only a single client and only output buffers;
      this one considers all memory used by all clients.
      
      #### Design
      The general design is as following:
      * We track the memory usage of each client, taking into account all memory used by the
        client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date
        after reading from the socket, after processing commands and after writing to the socket.
      * Based on the used memory we sort all clients into buckets. Each bucket contains all
        clients using up to twice the memory of the clients in the bucket below it. For example,
        clients using up to 1MB, up to 2MB, up to 4MB, ... (a bucketing sketch follows this list).
      * Before processing a command and before sleep we check if we're over the configured
        limit. If we are, we start disconnecting clients from larger buckets downwards until we're
        under the limit.
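      
      A self-contained sketch of the size-doubling bucket idea; the function name and
      constants are illustrative, not the actual Redis code:
      
      ```c
      #include <stddef.h>
      
      /* Bucket 0 holds clients using < 64KiB, bucket 1 up to 128KiB, and so on,
       * with the last bucket holding everything from 4GiB up. */
      static int clientMemBucket(size_t mem) {
          int bucket = 0;
          size_t ceiling = 64 * 1024;
          while (mem >= ceiling && ceiling < (size_t)4 * 1024 * 1024 * 1024) {
              ceiling *= 2;
              bucket++;
          }
          return bucket;
      }
      ```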
      
      #### Config
      `maxmemory-clients` is the max memory all clients are allowed to consume; above this threshold
      we disconnect clients.
      This config can either be set to 0 (meaning no limit), a size in bytes (possibly with MB/GB
      suffix), or as a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%`
      would mean 10% of `maxmemory`).
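      
      Illustrative settings (the values are examples, not defaults):
      
      ```
      maxmemory-clients 0      # no limit
      maxmemory-clients 1g     # absolute limit
      maxmemory-clients 10%    # 10% of maxmemory
      ```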
      
      #### Important code changes
      * During the development I encountered yet more situations where our io-threads access
        global vars and needed to fix them. I also had to keep the clients sorted into the
        memory buckets (which are global) while their memory usage changes in the io-thread.
        To achieve this I decided to simplify how we check if we're in an io-thread and make it
        much more explicit. I removed the `CLIENT_PENDING_READ` flag used for checking
        if the client is in an io-thread (it wasn't used for anything else) and just used the global
        `io_threads_op` variable the same way to check during writes.
      * I optimized the cleanup of the client from the `clients_pending_read` list on client freeing.
        We now store a pointer in the `client` struct to this list so we don't need to search in it
        (`pending_read_list_node`).
      * Added `evicted_clients` stat to `INFO` command.
      * Added `CLIENT NO-EVICT ON|OFF` sub command to exclude a specific client from the
        client eviction mechanism. Added a corresponding 'e' flag in the client info string.
      * Added `multi-mem` field in the client info string to show how much memory is used up
        by buffered multi commands.
      * Client `tot-mem` now accounts for buffered multi-commands, pubsub patterns and
        channels (partially), tracking prefixes (partially).
      * CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function so
        clients will be disconnected between processing different clients and not only before sleep.
        This new function can be used in the future for work we want to do outside the command
        processing loop but don't want to wait for all clients to be processed before we get to it.
        Specifically I wanted to handle output-buffer-limit related closing before we process client
        eviction in case the two race with each other.
      * Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction
        buckets.
      * Each client now holds a pointer to the client eviction memory usage bucket it belongs to
        and listNode to itself in that bucket for quick removal.
      * Global `io_threads_op` variable now can contain a `IO_THREADS_OP_IDLE` value
        indicating no io-threading is currently being executed.
      * In order to track memory used by each clients in real-time we can't rely on updating
        these stats in `clientsCron()` alone anymore. So now I call `updateClientMemUsage()`
        (used to be `clientsCronTrackClientsMemUsage()`) after command processing, after
        writing data to pubsub clients, after writing the output buffer and after reading from the
        socket (and maybe other places too). The function is written to be fast.
      * Clients are evicted if needed (with appropriate log line) in `beforeSleep()` and before
        processing a command (before performing oom-checks and key-eviction).
      * All clients memory usage buckets are grouped as follows:
        * All clients using less than 64k.
        * 64K..128K
        * 128K..256K
        * ...
        * 2G..4G
        * All clients using 4g and up.
      * Added client-eviction.tcl with a bunch of tests for the new mechanism.
      * Extended maxmemory.tcl to test the interaction between maxmemory and
        maxmemory-clients settings.
      * Added an option to flag a numeric configuration variable as a "percent", this means that
        if we encounter a '%' after the number in the config file (or config set command) we
        consider it as valid. Such a number is store internally as a negative value. This way an
        integer value can be interpreted as either a percent (negative) or absolute value (positive).
        This is useful for example if some numeric configuration can optionally be set to a percentage
        of something else.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      2753429c
  28. 09 Sep, 2021 1 commit
    • sundb's avatar
      Replace all usage of ziplist with listpack for t_zset (#9366) · 3ca6972e
      sundb authored
      Part two of implementing #8702 (zset), after #8887.
      
      ## Description of the feature
      Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.
      
      ## Rdb format changes
      New `RDB_TYPE_ZSET_LISTPACK` rdb type.
      
      ## Rdb loading improvements:
      1) Pre-expansion of the dict for validation of duplicate data for listpack and ziplist.
      2) Simplify the release of empty key objects during RDB loading.
      3) Unify the ziplist and listpack data verification methods for zset and hash, and move the code to rdb.c.
      
      ## Interface changes
      1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
      2) OBJECT ENCODING will return listpack instead of ziplist.
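      
      For example (illustrative redis-cli session):
      
      ```
      127.0.0.1:6379> ZADD myzset 1 a
      (integer) 1
      127.0.0.1:6379> OBJECT ENCODING myzset
      "listpack"
      ```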
      
      ## Listpack improvements:
      1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from listpack.
      2) Improve the performance of `lpCompare`, converting from string to integer is faster than converting from integer to string.
      3) Replace `snprintf` with `ll2string` to improve performance in converting numbers to strings in `lpGet()`.
      
      ## Zset improvements:
      1) Improve the performance of the `zzlFind` method: use `lpFind` instead of `lpCompare` in a loop.
      2) Use `lpDeleteRangeWithEntry` instead of `lpDelete` twice to delete an element of a zset.
      
      ## Tests
      1) Add some unit tests for the `lpDeleteRange` and `lpDeleteRangeWithEntry` functions.
      2) Add a zset RDB loading test.
      3) Add a benchmark test for `lpCompare` and `ziplistCompare`.
      4) Add an empty listpack zset corrupt dump test.
      3ca6972e
  29. 18 Aug, 2021 1 commit
  30. 05 Aug, 2021 1 commit
  31. 04 Aug, 2021 1 commit
    • Meir Shpilraien (Spielrein)'s avatar
      Unified Lua and modules reply parsing and added RESP3 support to RM_Call (#9202) · 2237131e
      Meir Shpilraien (Spielrein) authored
      
      
      ## Current state
      1. Lua has its own parser that handles parsing `redis.call` replies and translates them
        to Lua objects that can be used by the user's Lua code. The parser only partially handles
        resp3 (missing big number, verbatim, attribute, ...)
      2. Modules have their own parser that handles parsing `RM_Call` replies and translates
        them to RedisModuleCallReply objects. The parser does not support resp3.
      
      In addition, in the future, we want to add Redis Function (#8693) that will probably
      support more languages. At some point maintaining so many parsers will stop
      scaling (bug fixes and protocol changes will need to be applied on all of them).
      We will probably end up with different parsers that support different parts of the
      resp protocol (like we already have today with Lua and modules)
      
      ## PR Changes
      This PR attempts to unify the reply parsing of Lua and modules (and, in the future,
      Redis Function) by introducing a new parser unit (`resp_parser.c`). The new parser
      handles parsing the reply and calls different callbacks to allow the users (another
      unit that uses the parser, i.e., Lua, modules, or Redis Function) to analyze the reply.
      
      ### Lua API Additions
      The code that handles reply parsing in `scripting.c` was removed. Instead, it uses
      the resp_parser to parse and create a Lua object out of the reply. As mentioned
      above, the Lua parser did not handle parsing big numbers, verbatim, and attribute.
      The new parser can handle those, so Lua also gets them for free.
      Those are translated to Lua objects in the following way:
      1. Big Number - Lua table `{'big_number':'<str representation for big number>'}`
      2. Verbatim - Lua table `{'verbatim_string':{'format':'<verbatim format>', 'string':'<verbatim string value>'}}`
      3. Attribute - currently ignored and not exposed to the Lua script; another issue will be opened to decide how to expose it.
      
      Tests were added to check resp3 reply parsing on Lua
      
      ### Modules API Additions
      The reply parsing code in `module.c` was also removed and the new resp_parser is used instead.
      In addition, the RedisModuleCallReply was extracted to a separate unit located in `call_reply.c`
      (in the future, this unit will also be used by Redis Function). A nice side effect of unified parsing is
      that modules now also support resp3. Resp3 can be enabled by giving `3` as a parameter to the
      fmt argument of `RM_Call` (a short sketch follows the API list below). It is also possible to give `0`,
      which indicates an auto mode, i.e., Redis will automatically choose the reply protocol based on the
      current client set on the RedisModuleCtx (this mode will mostly be used when the module wants to
      pass the reply to the client as is).
      In addition, the following RedisModuleAPI were added to allow analyzing resp3 replies:
      
      * New RedisModuleCallReply types:
         * `REDISMODULE_REPLY_MAP`
         * `REDISMODULE_REPLY_SET`
         * `REDISMODULE_REPLY_BOOL`
         * `REDISMODULE_REPLY_DOUBLE`
         * `REDISMODULE_REPLY_BIG_NUMBER`
         * `REDISMODULE_REPLY_VERBATIM_STRING`
         * `REDISMODULE_REPLY_ATTRIBUTE`
      
      * New RedisModuleAPI:
         * `RedisModule_CallReplyDouble` - getting double value from resp3 double reply
         * `RedisModule_CallReplyBool` - getting boolean value from resp3 boolean reply
         * `RedisModule_CallReplyBigNumber` - getting big number value from resp3 big number reply
         * `RedisModule_CallReplyVerbatim` - getting format and value from resp3 verbatim reply
         * `RedisModule_CallReplySetElement` - getting element from resp3 set reply
         * `RedisModule_CallReplyMapElement` - getting key and value from resp3 map reply
         * `RedisModule_CallReplyAttribute` - getting a reply attribute
         * `RedisModule_CallReplyAttributeElement` - getting key and value from resp3 attribute reply
         
      * New context flags:
         * `REDISMODULE_CTX_FLAGS_RESP3` - indicate that the client is using resp3
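      
      A rough sketch of requesting a RESP3 reply from a module (names are from the lists above;
      the fmt string and reply handling are simplified and should be checked against redismodule.h):
      
      ```c
      /* Pass '3' in the fmt string so the reply is parsed as RESP3. */
      RedisModuleCallReply *reply =
          RedisModule_Call(ctx, "CONFIG", "3cc", "GET", "maxmemory");
      if (reply && RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_MAP) {
          RedisModuleCallReply *key = NULL, *val = NULL;
          RedisModule_CallReplyMapElement(reply, 0, &key, &val); /* first key/value pair */
      }
      if (reply) RedisModule_FreeCallReply(reply);
      ```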
      
      Tests were added to check the new RedisModuleAPI
      
      ### Modules API Changes
      * RM_ReplyWithCallReply might return REDISMODULE_ERR if the given CallReply is in resp3
        but the client expects resp2. This is not a breaking change because in order to get a resp3
        CallReply one needs to specifically specify `3` as a parameter to the fmt argument of
        `RM_Call` (as mentioned above).
      
      Tests were added to check this change
      
      ### More small Additions
      * Added `debug set-disable-deny-scripts` that allows turning the no-script flag protection for
      commands on and off. This is used by the Lua resp3 tests so it is possible to run `debug protocol`
      and check the resp3 parsing code.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
      2237131e
  32. 18 Jul, 2021 1 commit
  33. 14 Jul, 2021 1 commit
    • Oran Agra's avatar
      Test infra, handle RESP3 attributes and big-numbers and bools (#9235) · 6a5bac30
      Oran Agra authored
      - promote the code in DEBUG PROTOCOL to addReplyBigNum
      - DEBUG PROTOCOL ATTRIB skips the attribute when the client is RESP2
      - networking.c addReply for push and attributes generates an assertion when
        called on a RESP2 client; anything else would produce a broken
        protocol that clients can't handle.
      6a5bac30
  34. 20 Jun, 2021 1 commit
  35. 16 Jun, 2021 1 commit
  36. 14 Jun, 2021 1 commit
    • YaacovHazan's avatar
      cleanup around loadAppendOnlyFile (#9012) · 1677efb9
      YaacovHazan authored
      Today when we load the AOF on startup, loadAppendOnlyFile checks whether
      the file can be opened for reading.
      This check is redundant (dead code) as we open the AOF file for writing in initServer,
      so the file will always exist for loadAppendOnlyFile.
      
      In this commit:
      - remove all the exit(1) calls from loadAppendOnlyFile, as it is the caller's
        responsibility to decide what to do in case of failure.
      - move the opening of the AOF file for writing to after we load it.
      - avoid returning -ERR in DEBUG LOADAOF when the AOF exists but is empty
      1677efb9
  37. 10 Jun, 2021 1 commit
    • Binbin's avatar
      Fixed some typos, add a spell check ci and others minor fix (#8890) · 0bfccc55
      Binbin authored
      This PR adds a spell checker CI action that will fail future PRs if they introduce typos and spelling mistakes.
      This spell checker is based on a blacklist of common spelling mistakes, so it will not catch everything,
      but at least it is also unlikely to cause false positives.
      
      Besides that, the PR also fixes many spelling mistakes and typos; not all of them are a result of the spell checker we use.
      
      Here's a summary of other changes:
      1. Scanned the entire source code and fixed all sorts of typos and spelling mistakes (including missing or extra spaces).
      2. Outdated function / variable / argument names in comments.
      3. Fix outdated keyspace masks error log when we check `config.notify-keyspace-events` in loadServerConfigFromString.
      4. Trim the white space at the end of lines in `module.c`. Check: https://github.com/redis/redis/pull/7751
      5. Some outdated https link URLs.
      6. Fix some outdated comments, such as:
          - In README: about the RDB, we used to say we create a `thread`; changed to `process`
          - dbRandomKey function comment (about dictGetRandomKey, changed to dictGetFairRandomKey)
          - notifyKeyspaceEvent function comment (add the type arg)
          - Some other minor fixes in comments (most of them incorrectly quoted variable names)
      7. Modified the error log so that users can easily distinguish between TCP and TLS in `changeBindAddr`
      0bfccc55
  38. 31 Mar, 2021 1 commit