1. 27 Apr, 2022 1 commit
  2. 26 Apr, 2022 1 commit
    • Fix bug when AOF enabled after startup. Put the new incr file in the manifest only when AOFRW is done. (#10616) · 46ec6ad9
      chenyang8094 authored
      
      Changes:
      
      - When AOF is enabled **after** startup, the data accumulated during `AOF_WAIT_REWRITE`
        will only be stored in a temp INCR AOF file. Only after the first AOFRW is successful do we
        add it to the manifest file.
        Before this fix, the manifest referred to the temp file, which could cause a restart during that
        time to load it without its base.
      - Add an `aof_rewrites_consecutive_failures` info field for the AOFRW limiting implementation.
      
      Now we can guarantee that these behaviors of MP-AOF are the same as before (in past redis releases):
      - When AOF is enabled after startup, the data accumulated during `AOF_WAIT_REWRITE` is not yet
        stored in a visible place. Only after the first AOFRW is successful do we add it to the manifest file.
      - When AOF is disabled, we did not delete the AOF file in the past, so there's no need to change that
        behavior now (yet).
      - When toggling AOF off and then on (could be as part of a full-sync), a crash or restart before the
        first rewrite is completed would result in the previous version being loaded (might not be the right thing,
        but that's what we always had).
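      
      A minimal sketch of that flow, with hypothetical helper names (this is not the actual Redis code, just the ordering the fix enforces):
      
      ```c
      #include <stdbool.h>
      #include <stdio.h>
      
      /* Hypothetical helpers, for illustration only. */
      static void open_temp_incr_aof(void)            { puts("writing traffic to a temp INCR file"); }
      static bool run_first_aofrw(void)               { return true; /* pretend the rewrite succeeded */ }
      static void add_base_and_incr_to_manifest(void) { puts("BASE + INCR now referenced by the manifest"); }
      
      /* Enabling AOF at runtime: the temp INCR file becomes visible in the
       * manifest only after the first AOFRW produced a BASE file, so a restart
       * in between can never load the INCR data without its base. */
      int main(void) {
          open_temp_incr_aof();                 /* AOF_WAIT_REWRITE: data goes here only */
          if (run_first_aofrw())
              add_base_and_incr_to_manifest();
          /* on failure, the temp file stays out of the manifest and the rewrite is retried */
          return 0;
      }
      ```
      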
      46ec6ad9
  3. 19 Apr, 2022 1 commit
    • Fixes around AOF failed rewrite rate limiting (#10582) · d4cbd814
      judeng authored
      
      
      Changes:
      1. Check the failed-rewrite time threshold only when we actually consider triggering a rewrite,
        i.e. this should be the last condition tested, since the test has side effects (increasing the time
        threshold). This mattered only in some rare scenarios.
      2. No limit in the startup state (e.g. after restarting a redis that previously failed and had many incr files).
      3. The “triggered the limit” log is recorded only when the limit status is returned.
      4. Remove the failure count from the log (could be misleading in some cases).
      Co-authored-by: chenyang8094 <chenyang8094@users.noreply.github.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      d4cbd814
  4. 07 Apr, 2022 1 commit
    • Fix auto-aof-rewrite-percentage based AOFRW trigger after restart (#10550) · 625bdaf3
      chenyang8094 authored
      
      
      The `auto-aof-rewrite-percentage` config defines at what growth percentage
      an automatic AOF rewrite is triggered.
      This normally works OK since the size of the AOF file at the end of a rewrite
      is stored in `server.aof_rewrite_base_size`.
      However, on startup, redis used to store the entire size of the AOF file in that
      variable, resulting in a wrong automatic AOF rewrite trigger (it could have been
      triggered much later than desired).
      This issue only affected the first AOFRW after startup; after that, future AOFRWs
      would be triggered correctly.
      This bug existed in all previous versions of Redis.
      
      This PR unifies the meaning of `server.aof_rewrite_base_size`, which now only represents
      the size of the BASE AOF.
      Note that after an AOFRW this size includes the size of the incremental file (all the
      commands executed during the rewrite), so that auto-aof-rewrite-percentage is the
      growth ratio relative to the size of the AOF right after the rewrite.
      However, on startup it is complicated to know that size, and we compromised on
      taking just the size of the base file; this means that the first rewrite after startup can
      happen a little bit too soon.
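      
      For illustration, a small sketch of how such a growth-percentage check works once the base size means "size at the end of the last rewrite" (not the actual serverCron code; names and defaults here are assumptions):
      
      ```c
      #include <stdio.h>
      
      /* Rewrite when the AOF grew by at least `percentage` percent over the size
       * recorded at the end of the last rewrite (aof_rewrite_base_size). Using the
       * whole AOF size as the base on startup makes this fire much later. */
      static int should_trigger_aofrw(long long current_size, long long base_size,
                                      int percentage, long long min_size) {
          if (percentage == 0 || current_size < min_size) return 0;
          long long base = base_size ? base_size : 1;          /* avoid division by zero */
          long long growth = (current_size * 100 / base) - 100;
          return growth >= percentage;
      }
      
      int main(void) {
          /* 100 MB base, 250 MB current, default 100% threshold and 64 MB min size. */
          printf("trigger=%d\n",
                 should_trigger_aofrw(250LL << 20, 100LL << 20, 100, 64LL << 20));
          return 0;
      }
      ```
      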
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: yoav-steinberg <yoav@redislabs.com>
      625bdaf3
  5. 05 Apr, 2022 1 commit
    • Functions: Move library meta data to be part of the library payload. (#10500) · ae020e3d
      Meir Shpilraien (Spielrein) authored
      ## Move library meta data to be part of the library payload.
      
      Following the discussion on https://github.com/redis/redis/issues/10429 and the intention to add (in the future) library versioning support, we believe that the entire library metadata (like name and engine) should be part of the library payload and not provided by the `FUNCTION LOAD` command. The reasoning behind this is that the programmer who developed the library should be the one who sets those values (name, engine, and in the future also version). **It is not the responsibility of the admin who loads the library into the database.**
      
      The PR moves all the library metadata (engine and function name) to be part of the library payload. The metadata needs to be provided on the first line of the payload using the shebang format (`#!<engine> name=<name>`), example:
      
      ```lua
      #!lua name=test
      redis.register_function('foo', function() return 1 end)
      ```
      
      The above script will run on the Lua engine and will create a library called `test`.
      
      ## API Changes (compare to 7.0 rc2)
      
      * The `FUNCTION LOAD` command was changed: it now simply gets the library payload and extracts the engine and name from the payload. In addition, the command now returns the library name, which can later be used with `FUNCTION DELETE` and `FUNCTION LIST`.
      * The description field was completely removed from `FUNCTION LOAD` and `FUNCTION LIST`.
      
      
      ## Breaking Changes (compare to 7.0 rc2)
      
      * Library description was removed (we can re-add it in the future either as part of the shebang line or an additional line).
      * Loading an AOF file that was generated by either 7.0 rc1 or 7.0 rc2 will fail because the old command syntax is invalid.
      
      ## Notes
      
      * Loading an RDB file that was generated by rc1 / rc2 **is** supported; Redis will automatically add the shebang to the library payloads (we can probably delete that code after 7.0.3 or so, since there's no need to keep supporting upgrades from an RC build).
      ae020e3d
  6. 04 Apr, 2022 1 commit
  7. 23 Feb, 2022 1 commit
    • Add stream consumer group lag tracking and reporting (#9127) · c81c7f51
      Itamar Haber authored
      
      
      Adds the ability to track the lag of a consumer group (CG), that is, the number
      of entries yet-to-be-delivered from the stream.
      
      The proposed constant-time solution is in the spirit of "best-effort."
      
      Partially addresses #8737.
      
      ## Description of approach
      
      We add a new "entries_added" property to the stream. This starts at 0 for a new
      stream and is incremented by 1 with every `XADD`.  It is essentially an all-time
      counter of the entries added to the stream.
      
      Given the stream's length and this counter value, we can trivially find the logical
      "entries_added" counter of the first ID if and only if the stream is contiguous.
      A fragmented stream contains one or more tombstones generated by `XDEL`s.
      The new "xdel_max_id" stream property tracks the latest tombstone.
      
      The CG also tracks its last delivered ID as an "entries_read" counter and
      increments it independently when delivering new messages, unless this
      read counter is invalid (-1 means invalid offset). When the CG's counter is
      available, the reported lag is the difference between the added and read counters.
      
      Lastly, this also adds a "first_id" field to the stream structure in order to make
      looking it up cheaper in most cases.
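      
      A minimal sketch of that counter arithmetic (not the actual stream code; field names are illustrative, and the tombstone-related null-lag cases described below are ignored here):
      
      ```c
      #include <stdio.h>
      #include <stdint.h>
      #include <stdbool.h>
      
      typedef struct {
          uint64_t entries_added;   /* all-time counter, incremented on every XADD */
      } stream;
      
      typedef struct {
          int64_t entries_read;     /* logical read counter, -1 when unknown */
      } cgroup;
      
      /* Lag is only reported when the group's read counter is valid; otherwise
       * XINFO returns a null lag until the group catches up with the stream tip. */
      static bool cgroup_lag(const stream *s, const cgroup *g, uint64_t *lag) {
          if (g->entries_read < 0) return false;
          *lag = s->entries_added - (uint64_t)g->entries_read;
          return true;
      }
      
      int main(void) {
          stream s = { .entries_added = 10 };
          cgroup g = { .entries_read = 7 };
          uint64_t lag;
          if (cgroup_lag(&s, &g, &lag))
              printf("lag=%llu\n", (unsigned long long)lag);   /* prints lag=3 */
          return 0;
      }
      ```
      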
      
      ## Limitations
      
      There are two cases in which the mechanism isn't able to track the lag.
      In these cases, `XINFO` replies with `null` in the "lag" field.
      
      The first case is when a CG is created with an arbitrary last delivered ID,
      one that isn't "0-0", nor the first or last entry of the stream. In this case,
      it is impossible to obtain a valid read counter (short of an O(N) operation).
      The second case is when there are one or more tombstones fragmenting
      the stream's entries range.
      
      In both cases, given enough time and assuming that the consumers are
      active (reading and acking) and advancing, the CG should be able to
      catch up with the tip of the stream and report zero lag.
      Once that's achieved, lag tracking resumes as normal (until the
      next tombstone is set).
      
      ## API changes
      
      * `XGROUP CREATE` added with the optional named argument `[ENTRIESREAD entries-read]`
        for explicitly specifying the new CG's counter.
      * `XGROUP SETID` added with an optional positional argument `[ENTRIESREAD entries-read]`
        for specifying the CG's counter.
      * `XINFO` reports the maximal tombstone ID, the recorded first entry ID, and total
        number of entries added to the stream.
      * `XINFO` reports the current lag and logical read counter of CGs.
      * `XSETID` is an internal command that's used in replication/aof. It has been added with
        the optional positional arguments `[ENTRIESADDED entries-added] [MAXDELETEDID max-deleted-entry-id]`
        for propagating the CG's offset and maximal tombstone ID of the stream.
      
      ## The generic unsolved problem
      
      The current stream implementation doesn't provide an efficient way to obtain the
      approximate/exact size of a range of entries. While it could've been nice to have
      that ability (#5813) in general, let alone specifically in the context of CGs, the risk
      and complexities involved in such implementation are in all likelihood prohibitive.
      
      ## A refactoring note
      
      The `streamGetEdgeID` has been refactored to accommodate both the existing seek
      of any entry as well as seeking non-deleted entries (the addition of the `skip_tombstones`
      argument). Furthermore, this refactoring also migrated the seek logic to use the
      `streamIterator` (rather than `raxIterator`) that was, in turn, extended with the
      `skip_tombstones` Boolean struct field to control the emission of these.
      Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      c81c7f51
  8. 22 Feb, 2022 1 commit
    • fix return value of loadAppendOnlyFiles (#10295) · 65e4bce0
      YaacovHazan authored
      
      
      Make sure the status return from loading multiple AOF files reflects the overall
      result, not just the one of the last file.
      
      When one of the AOF files succeeded to load but the last AOF file
      was empty, loadAppendOnlyFiles would return AOF_EMPTY.
      This commit changes this behavior and returns AOF_OK in that case.
      
      This can happen, for example, when loading an old AOF file and no more commands were processed:
      the manifest file will include a base AOF file with data and an empty incr AOF file.
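      
      A sketch of the corrected aggregation (illustrative only, not the actual aof.c code; just the status names mirror the real AOF_* constants):
      
      ```c
      #include <stdio.h>
      
      enum { AOF_OK, AOF_EMPTY, AOF_FAILED, AOF_TRUNCATED };
      
      static int load_aof_files(const int *status, int nfiles) {
          int ret = AOF_EMPTY;                    /* nothing loaded yet */
          for (int i = 0; i < nfiles; i++) {
              if (status[i] == AOF_FAILED || status[i] == AOF_TRUNCATED) return status[i];
              if (status[i] == AOF_OK) ret = AOF_OK;   /* remember that some data was loaded */
              /* an empty file no longer overrides an earlier AOF_OK */
          }
          return ret;
      }
      
      int main(void) {
          int statuses[] = { AOF_OK /* base with data */, AOF_EMPTY /* empty incr */ };
          printf("%s\n", load_aof_files(statuses, 2) == AOF_OK ? "AOF_OK" : "AOF_EMPTY");
          return 0;
      }
      ```
      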
      Co-authored-by: chenyang8094 <chenyang8094@users.noreply.github.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      65e4bce0
  9. 17 Feb, 2022 2 commits
    • aof rewrite and rdb save counters in info (#10178) · 56fa48ff
      yoav-steinberg authored
      Add aof_rewrites and rdb_snapshots counters to info.
      This is useful to figure out if a rewrite or snapshot happened since the last check.
      This was part of the (ongoing) effort to provide a safe backup solution for multipart-aof backups.
      56fa48ff
    • Adapt redis-check-aof tool for Multi Part Aof (#10061) · a50aa29b
      chenyang8094 authored
      Modifications of this PR:
      1. Support the verification of `Multi Part AOF`, while still maintaining support for the
        old-style `AOF/RDB-preamble`. `redis-check-aof` will automatically choose which
        mode to use according to the incoming file format.
         
      `Usage: redis-check-aof [--fix|--truncate-to-timestamp $timestamp] <AOF/manifest>`
       
      2. Refactor part of the code to make it easier to understand
      3. Currently we only support truncating (`--fix` or `--truncate-to-timestamp`) the last AOF
        file (which may be `BASE` or `INCR`)
      
      The reasons for 3 above:
      - for `--fix`: Only the last AOF may be truncated, this is guaranteed by redis
      - for `--truncate-to-timestamp`: Normally, we only have `BASE` + `INCR` files
        at most, and `BASE` cannot be truncated (it only contains a timestamp annotation
        at the beginning of the file), so only `INCR` can be truncated. If we have a
        `BASE+INCR1+INCR2` set of files (meaning we have an interrupted AOFRW), only `INCR2`
        can be truncated at this time. If we still insist on truncating `INCR1`, we need to
        manually delete `INCR2` and update the manifest file, then re-run `redis-check-aof`
      - If we want to support truncating any file, we need to add very complicated code to support
        the atomic modification of multiple file deletions and manifest updates; I think this is unnecessary
      a50aa29b
  10. 16 Feb, 2022 1 commit
  11. 11 Feb, 2022 1 commit
    • Modify AOF preamble related logs, and change the RDB aux field (#10283) · a2f2b6f5
      chenyang8094 authored
      In multi-part AOF, we no longer have the concept of `RDB-preamble`, so the related logs should be removed.
      However, in order to print compatible logs when loading old-style AOFs, we also have to keep the relevant code.
      Additionally, when saving an RDB, change the RDB aux field from "aof-preamble" to "aof-base".
      a2f2b6f5
  12. 30 Jan, 2022 1 commit
  13. 25 Jan, 2022 1 commit
  14. 19 Jan, 2022 1 commit
  15. 18 Jan, 2022 1 commit
    • Fix additional AOF filename issues. (#10110) · 25e6d4d4
      Yossi Gottlieb authored
      This extends the previous fix (#10049) to address any form of
      non-printable or whitespace character (including newlines, quotes,
      non-printables, etc.)
      
      Also, removes the limitation on appenddirname, to align with the way
      filenames are handled elsewhere in Redis.
      25e6d4d4
  16. 17 Jan, 2022 1 commit
  17. 13 Jan, 2022 1 commit
    • Always create base AOF file when redis start from empty. (#10102) · e9bff797
      chenyang8094 authored
      
      
      Force create a BASE file (use a foreground `rewriteAppendOnlyFile`) when redis starts from an
      empty data set and  `appendonly` is  yes.
      
      The reasoning is that normally, after redis is running for some time and the AOF has gone through
      a few rewrites, there's always a base rdb file, and the scenario where the base file is missing is
      kinda rare (it happens only at empty startup), so this change normalizes it.
      But more importantly, there are or could be some complex modules that are started with some
      configuration; when they create persistence they write that configuration to RDB AUX fields, so
      that they can always know with which configuration the persistence file they're loading was
      created (could be critical). There is (was) one scenario in which they could load their persisted data
      and that configuration was missing, and this change fixes it.
      
      Add a new module event: REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START, similar to
      REDISMODULE_SUBEVENT_PERSISTENCE_AOF_START which is async.
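      
      A hedged module-side sketch of subscribing to that event (the module name is made up; check redismodule.h for the exact constants and signatures):
      
      ```c
      #include "redismodule.h"
      
      static void persistenceCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                                      uint64_t sub, void *data) {
          REDISMODULE_NOT_USED(e);
          REDISMODULE_NOT_USED(data);
          if (sub == REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START)
              RedisModule_Log(ctx, "notice", "foreground (sync) AOF rewrite is starting");
      }
      
      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "aofwatch", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          /* Register for persistence server events; the callback filters on the sub-event. */
          RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Persistence, persistenceCallback);
          return REDISMODULE_OK;
      }
      ```
      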
      Co-authored-by: Oran Agra <oran@redislabs.com>
      e9bff797
  18. 10 Jan, 2022 1 commit
  19. 05 Jan, 2022 1 commit
  20. 04 Jan, 2022 1 commit
    • Ban snapshot-creating commands and other admin commands from transactions (#10015) · ac84b1cd
      guybe7 authored
      
      
      Creating a fork (or even a foreground SAVE) during a transaction breaks the atomicity of the transaction.
      In addition to that, it could mess up the propagated transaction to the AOF file.
      
      This change blocks SAVE, PSYNC, SYNC and SHUTDOWN from being executed inside MULTI-EXEC.
      It does that by adding a command flag, so that modules can flag their commands with that flag too.
      
      Besides that, it changes BGSAVE, BGREWRITEAOF, and CONFIG SET appendonly to turn on the
      scheduled flag instead of forking right away.
      
      Other changes:
      * expose `protected`, `no-async-loading`, and `no_multi` flags in COMMAND command
      * add a test to validate propagation of FLUSHALL inside a transaction.
      * add a test to validate how a CONFIG SET that errors reacts in a transaction
      Co-authored-by: Oran Agra <oran@redislabs.com>
      ac84b1cd
  21. 03 Jan, 2022 1 commit
    • Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788) · 87789fae
      chenyang8094 authored
      
      
      Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
      Introducing a folder with multiple AOF files tracked by a manifest file.
      
      The main issues with the original AOFRW mechanism are:
      * buffering of commands that are processed during rewrite (consuming a lot of RAM)
      * freezes of the main process when the AOFRW completes to drain the remaining part of the buffer and fsync it.
      * double disk IO for the data that arrives during AOFRW (had to be written to both the old and new AOF files)
      
      The main modifications of this PR:
      1. Remove the AOF rewrite buffer and related code.
      2. Divide the AOF into multiple files, classified into two types. The first is the `BASE` type:
        it represents the full data set (in either AOF or RDB format) after each AOFRW, and there is at most
        one `BASE` file. The second is the `INCR` type, of which there may be more than one; they represent
        the incremental commands since the last AOFRW.
      3. Use an AOF manifest file to record and manage the AOF files mentioned above.
      4. The original configuration of `appendfilename` will be the base part of the new file name, for example:
        `appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`
      5. Add manifest-related TCL tests, and modify some existing tests that depend on `appendfilename`
      6. Remove the `aof_rewrite_buffer_length` field in info.
      7. Add `aof-disable-auto-gc` configuration. By default we're automatically deleting HISTORY type AOFs.
        It also gives users the opportunity to preserve the history AOFs, just for testing use now.
      8. Add an AOFRW limiting measure (see the sketch after this list). When the number of consecutive
        AOFRW failures reaches the threshold (3 times now), we will delay the execution of the next AOFRW
        by 1 minute. If the next AOFRW also fails, it will be delayed by 2 minutes, then 4, 8, 16, up to a
        maximum delay of 60 minutes (1 hour). During the limit period, we can still use the 'bgrewriteaof'
        command to execute AOFRW immediately.
      9. Support upgrading (loading) data from an old redis version.
      10. Add `appenddirname` configuration, as the directory name of the append only files. All AOF files and
        the manifest file will be placed in this directory.
      11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise redis will exit even if
        `aof-load-truncated` is enabled.
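      
      A rough sketch of the back-off from item 8 (not the actual code; the exact progression and cap in Redis may differ slightly):
      
      ```c
      #include <stdio.h>
      
      /* Below the failure threshold there is no delay; from the threshold on, the
       * delay doubles per additional consecutive failure, capped at one hour. */
      static int next_aofrw_delay_minutes(int consecutive_failures) {
          const int threshold = 3, max_delay = 60;
          if (consecutive_failures < threshold) return 0;
          int delay = 1;
          for (int i = threshold; i < consecutive_failures; i++) {
              delay *= 2;
              if (delay >= max_delay) return max_delay;
          }
          return delay;
      }
      
      int main(void) {
          for (int f = 1; f <= 10; f++)
              printf("failures=%d -> delay=%d min\n", f, next_aofrw_delay_minutes(f));
          return 0;
      }
      ```
      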
      Co-authored-by: Oran Agra <oran@redislabs.com>
      87789fae
  22. 02 Jan, 2022 1 commit
    • Generate RDB with Functions only via redis-cli --functions-rdb (#9968) · 1bf6d6f1
      yoav-steinberg authored
      
      
      This is needed in order to ease the deployment of functions for ephemeral cases, where a user
      needs to spin up a server with functions pre-loaded.
      
      #### Details:
      
      * Added `--functions-rdb` option to _redis-cli_.
      * Functions-only rdb via `REPLCONF rdb-filter-only functions`. This is a placeholder for a space
        separated inclusion filter for the RDB. In the future it can be `REPLCONF rdb-filter-only
        "functions db:3 key-pattern:user*"` and a complementing `rdb-filter-exclude` `REPLCONF`
        can also be added.
      * Handle "slave requirements" specification to RDB saving code so we can use the same RDB
        when different slaves express the same requirements (like functions-only) and not share the
        RDB when their requirements differ. This is currently just a flags `int`, but can be extended to
        a more complex structure with various filter fields.
      * Make sure to support filters only in diskless replication mode (so as not to override the persistence file);
        we do that by forcing diskless replication (even if disabled by config).
      
      other changes:
      * some refactoring in rdb.c (extract portion of a big function to a sub-function)
      * rdb_key_save_delay used in AOFRW too
      * sendChildInfo takes the number of updated keys (incremental, rather than absolute)
      Co-authored-by: Oran Agra <oran@redislabs.com>
      1bf6d6f1
  23. 22 Dec, 2021 1 commit
    • Sort out mess around propagation and MULTI/EXEC (#9890) · 7ac21307
      guybe7 authored
      The mess:
      Some parts use alsoPropagate for late propagation, others use an immediate one (propagate()),
      causing edge cases, ugly/hacky code, and a tendency for bugs.
      
      The basic idea is that all commands are propagated via alsoPropagate (i.e. added to a list) and the
      top-most call() is responsible for going over that list and actually propagating them (and wrapping
      them in MULTI/EXEC if there's more than one command). This is done in the new function,
      propagatePendingCommands.
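      
      A toy sketch of that idea (not the actual Redis code): nested call()s only queue their effects, the top-most call() flushes them, and the MULTI/EXEC wrapper appears only when more than one command is pending:
      
      ```c
      #include <stdio.h>
      #include <string.h>
      
      static const char *pending[16];
      static int npending = 0, call_depth = 0;
      
      static void also_propagate(const char *cmd) { pending[npending++] = cmd; }
      
      static void propagate_pending_commands(void) {
          if (npending == 0) return;
          if (npending > 1) puts("MULTI");            /* wrap only when needed */
          for (int i = 0; i < npending; i++) puts(pending[i]);
          if (npending > 1) puts("EXEC");
          npending = 0;
      }
      
      static void exec_command(const char *cmd) {
          also_propagate(cmd);
          /* e.g. the command touched an expired key, so a DEL is propagated too */
          if (strcmp(cmd, "INCR counter") == 0) also_propagate("DEL stale-key");
      }
      
      static void call(const char *cmd) {
          call_depth++;
          exec_command(cmd);
          call_depth--;
          if (call_depth == 0) propagate_pending_commands();  /* only the top-most call flushes */
      }
      
      int main(void) {
          call("SET k v");          /* single effect: propagated without MULTI/EXEC */
          call("INCR counter");     /* two effects: wrapped in MULTI ... EXEC */
          return 0;
      }
      ```
      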
      
      Callers to propagatePendingCommands:
      1. top-most call() (we want all nested call()s to add to the also_propagate array and just the top-most
         one to propagate them) - via `afterCommand`
      2. handleClientsBlockedOnKeys: it is out of call() context and it may propagate stuff - via `afterCommand`. 
      3. handleClientsBlockedOnKeys edge case: if the looked-up key is already expired, we will propagate the
         expire but will not unblock any client so `afterCommand` isn't called. in that case, we have to propagate
         the deletion explicitly.
      4. cron stuff: active-expire and eviction may also propagate stuff
      5. modules: the module API allows to propagate stuff from just about anywhere (timers, keyspace notifications,
         threads). I could have tried to catch all the out-of-call-context places but it seemed easier to handle it in one
         place: when we free the context. in the spirit of what was done in call(), only the top-most freeing of a module
         context may cause propagation.
      6. modules: when using a thread-safe ctx it's not clear when/if the ctx will be freed. we do know that the module
         must lock the GIL before calling RM_Replicate/RM_Call so we propagate the pending commands when
         releasing the GIL.
      
      A "known limitation", which were actually a bug, was fixed because of this commit (see propagate.tcl):
         When using a mix of RM_Call with `!` and RM_Replicate, the command would propagate out-of-order:
         first all the commands from RM_Call, and then the ones from RM_Replicate
      
      Another thing worth mentioning is that, in the past, if a client issued a MULTI/EXEC with just one
      write command, the server would blindly propagate the MULTI/EXEC too, even though it's redundant.
      Not anymore.
      
      This commit renames propagate() to propagateNow() in order to cause conflicts in pending PRs.
      propagatePendingCommands is the only caller of propagateNow, which is now a static, internal helper function.
      
      Optimizations:
      1. alsoPropagate will not add stuff to also_propagate if there's no AOF and no replicas
      2. alsoPropagate reallocs also_propagate exponentially, to save calls to memmove
      
      Bugfixes:
      1. CONFIG SET can create evictions, sending notifications which can cause dirty++ with modules.
         We need to prevent it from propagating to AOF/replicas.
      2. We need to set current_client in RM_Call. buggy scenario:
         - CONFIG SET maxmemory, eviction notifications, module hook calls RM_Call
         - assertion in lookupKey crashes, because current_client has CONFIG SET, which isn't CMD_WRITE
      3. minor: in eviction, call propagateDeletion after notification, like active-expire and all commands
         (we always send a notification before propagating the command)
      7ac21307
  24. 21 Dec, 2021 1 commit
    • Remove EVAL script verbatim replication, propagation, and deterministic execution logic (#9812) · 1b0968df
      zhugezy authored
      
      
      # Background
      
      The main goal of this PR is to remove the relevant logic for Lua script verbatim replication,
      keeping only the effect replication logic, which has been the default since Redis 5.0.
      As a result, Lua in Redis 7.0 acts the same as Redis 6.0 with the default
      configuration, from the users' point of view.
      
      There are lots of reasons to remove verbatim replication.
      Antirez has listed some of the benefits in Issue #5292:
      
      >1. No longer need to explain to users side effects into scripts.
          They can do whatever they want.
      >2. No need for a cache about scripts that we sent or not to the slaves.
      >3. No need to sort the output of certain commands inside scripts
          (SMEMBERS and others): this both simplifies and gains speed.
      >4. No need to store scripts inside the RDB file in order to startup correctly.
      >5. No problems about evicting keys during the script execution.
      
      When looking back at Redis 5.0, antirez and core team decided to set the config
      `lua-replicate-commands yes` by default instead of removing verbatim replication
      directly, in case some bad situations happened. Three years later, ahead of Redis 7.0,
      it's time to remove it formally.
      
      # Changes
      
      - configuration for lua-replicate-commands removed
        - created config file stub for backward compatibility
      - Replication script cache removed
        - this is useless under script effects replication
        - relevant statistics also removed
      - script persistence in RDB files is also removed
      - Propagation of SCRIPT LOAD and SCRIPT FLUSH to replica / AOF removed
      - Deterministic execution logic in scripts removed (i.e. don't run write commands
        after random ones, and sorting output of commands with random order)
        - the flags indicating which commands have non-deterministic results are kept as hints to clients.
      - `redis.replicate_commands()` & `redis.set_repl()` changed
        - now `redis.replicate_commands()` does nothing and returns 1
        - ...and `redis.set_repl()` can now be issued before `redis.replicate_commands()`
      - Relevant TCL cases adjusted
      - DEBUG lua-always-replicate-commands removed
      
      # Other changes
      - Fix a recent bug comparing CLIENT_ID_AOF to original_client->flags instead of id. (introduced in #9780)
      Co-authored-by: Oran Agra <oran@redislabs.com>
      1b0968df
  25. 02 Dec, 2021 1 commit
    • Redis Functions - Added redis function unit and Lua engine · cbd46317
      meir@redislabs.com authored
      The Redis Functions unit is located inside functions.c
      and contains the Redis Functions implementation:
      1. FUNCTION commands:
        * FUNCTION CREATE
        * FCALL
        * FCALL_RO
        * FUNCTION DELETE
        * FUNCTION KILL
        * FUNCTION INFO
      2. Register engine
      
      In addition, this commit introduces the first engine
      that uses the Redis Functions capabilities, the
      Lua engine.
      cbd46317
  26. 28 Nov, 2021 1 commit
    • Sort out the mess around writable replicas and lookupKeyRead/Write (#9572) · acf3495e
      Viktor Söderqvist authored
      Writable replicas now no longer use the values of expired keys. Expired keys are
      deleted when lookupKeyWrite() is used, even on a writable replica. Previously,
      writable replicas could use the value of an expired key in write commands such
      as INCR, SUNIONSTORE, etc.
      
      This commit also sorts out the mess around the functions lookupKeyRead() and
      lookupKeyWrite() so they now indicate what we intend to do with the key and
      are not affected by the command calling them.
      
      Multi-key commands like SUNIONSTORE, ZUNIONSTORE, COPY and SORT with the
      store option now use lookupKeyRead() for the keys they're reading from (which will
      not allow reading from logically expired keys).
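      
      A tiny sketch of that read/write lookup intent (not the actual db.c code; the data structures are made up):
      
      ```c
      #include <stdio.h>
      #include <stddef.h>
      #include <time.h>
      
      typedef struct { const char *val; time_t expire_at; } entry;   /* expire_at == 0: no TTL */
      
      static int logically_expired(const entry *e, time_t now) {
          return e->val && e->expire_at != 0 && e->expire_at <= now;
      }
      
      /* Read intent: a logically expired value is never visible, even on a writable replica. */
      static const char *lookup_key_read(entry *e, time_t now) {
          return logically_expired(e, now) ? NULL : e->val;
      }
      
      /* Write intent: the expired key is deleted first, then the write sees an empty key. */
      static const char *lookup_key_write(entry *e, time_t now) {
          if (logically_expired(e, now)) { e->val = NULL; e->expire_at = 0; }
          return e->val;
      }
      
      int main(void) {
          entry e = { "42", 100 };
          time_t now = 200;                              /* the key is already expired */
          const char *r = lookup_key_read(&e, now);
          printf("read:  %s\n", r ? r : "(nil)");        /* (nil) */
          const char *w = lookup_key_write(&e, now);
          printf("write: %s\n", w ? w : "(nil)");        /* (nil), and the key is now gone */
          return 0;
      }
      ```
      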
      
      This commit also fixes a bug where PFCOUNT could return a value of an
      expired key.
      
      Test modules commands have their readonly and write flags updated to correctly
      reflect their lookups for reading or writing. Modules are not required to
      correctly reflect this in their command flags, but this change is made for
      consistency since the tests serve as usage examples.
      
      Fixes #6842. Fixes #7475.
      acf3495e
  27. 24 Nov, 2021 1 commit
  28. 16 Nov, 2021 1 commit
  29. 11 Nov, 2021 1 commit
    • Add sanitizer support and clean up sanitizer findings (#9601) · b91d8b28
      Ozan Tezcan authored
      - Added sanitizer support; `address`, `undefined` and `thread` sanitizers are available.
      - To build Redis with the desired sanitizer: `make SANITIZER=undefined`
      - There were some sanitizer findings; cleaned up the codebase.
      - Added tests with address and undefined behavior sanitizers to daily CI.
      - Added tests with address sanitizer to the per-PR CI (smoke out mem leaks sooner).
      
      Basically, there are three types of issues:
      
      **1- Unaligned load/store**: Most probably, this issue may cause a crash on a platform that
      does not support unaligned access. Redis does unaligned access only on supported platforms.
      
      **2- Signed integer overflow:** Although signed overflow can be problematic at times
      and change how the compiler generates code, the current findings are mostly about signed shift or
      simple addition overflow. For most platforms Redis can be compiled for, this wouldn't cause any issue
      as far as I can tell (checked generated code on godbolt.org).
      
      **3- Minor leak** (redis-cli) and **use-after-free** (just before calling exit());
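      
      For reference, two tiny examples of the kind of code UBSan flags in a `make SANITIZER=undefined` build (deliberately undefined behavior; not taken from the Redis sources):
      
      ```c
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>
      
      int main(void) {
          /* 1. Unaligned load: dereferencing a uint32_t pointer at an odd offset. */
          unsigned char buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};
          uint32_t v = *(uint32_t *)(buf + 1);       /* flagged: misaligned access */
          uint32_t safe;
          memcpy(&safe, buf + 1, sizeof(safe));      /* the portable alternative */
      
          /* 2. Signed integer overflow: shifting into the sign bit of an int. */
          int shifted = 1 << 31;                     /* flagged: signed shift overflow */
      
          printf("%u %u %d\n", v, safe, shifted);
          return 0;
      }
      ```
      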
      
      UB means nothing is guaranteed and it's risky to reason about program behavior, but I don't think any
      of the fixes here are worth backporting. As sanitizers are now part of the CI, preventing new issues
      will be the real benefit.
      b91d8b28
  30. 04 Nov, 2021 1 commit
    • Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323) · 91d0c758
      Eduardo Semprebon authored
      
      
      For diskless replication in swapdb mode, considering we already spend replica memory
      on keeping a backup of the current db to restore in case of failure, we can gain the following benefits
      by instead swapping the database only once we've succeeded in transferring the db from the master:
      
      - Avoid `LOADING` response during failed and successful synchronization for cases where the
        replica is already up and running with data.
      - Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
        time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
      - This could be implemented also for disk replication with similar benefits if consumers are willing
        to spend the extra memory usage.
      
      General notes:
      - The concept of `backupDb` becomes `tempDb` for clarity.
      - Async loading mode will only kick in if the replica is syncing from a master that has the same
        repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
      - New property in INFO: `async_loading` to differentiate from the blocking loading
      - Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
        and the tempDb that is passed around.
      - Because this is affecting replicas only, we assume that if they are not read-only and accept write
        commands during replication, those writes are lost after SYNC the same way as before, but we're still
        denying CONFIG SET here anyway to avoid complications.
      
      Considerations for review:
      - We have many cases where server.loading flag is used and even though I tried my best, there may
        be cases where async_loading should be checked as well and cases where it shouldn't (would require
        very good understanding of whole code)
      - Several places that had different behavior depending on the loading flag were actually meant to just
        handle commands coming from the AOF client differently than ones coming from real clients; these were
        changed to check CLIENT_ID_AOF instead.
      
      **Additional for Release Notes**
      - Bugfix - server.dirty was not incremented for any kind of diskless replication; as an effect it wouldn't
        contribute to triggering the next database SAVE
      - New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
      - Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
        Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
        ABORTED and COMPLETED.
      - New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
        to allow modules to declare they support the diskless replication with async loading (when absent, we fall
        back to disk-based loading).
      Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      91d0c758
  31. 02 Nov, 2021 1 commit
  32. 25 Oct, 2021 1 commit
    • Add timestamp annotations in AOF (#9326) · 9ec3294b
      Wang Yuan authored
      Add timestamp annotation in AOF, one part of #9325.
      
      Enabled with the new `aof-timestamp-enabled` config option.
      
      The timestamp annotation format is "#TS:${timestamp}\r\n". "TS" is short for timestamp,
      and this method saves extra bytes in the AOF.
      
      We can use timestamp annotations for some special functions:
      - know the executing time of commands
      - restore data to a specific point-in-time (by using redis-check-aof to truncate the file)
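      
      A small sketch of emitting and parsing that annotation format (illustrative only, not the actual aof.c code):
      
      ```c
      #include <stdio.h>
      #include <time.h>
      
      /* Write one "#TS:<unix-time>\r\n" annotation line into a buffer. */
      static int write_ts_annotation(char *buf, size_t len, time_t now) {
          return snprintf(buf, len, "#TS:%lld\r\n", (long long)now);
      }
      
      /* Recognize an annotation line and extract the timestamp. */
      static int read_ts_annotation(const char *line, long long *ts) {
          return sscanf(line, "#TS:%lld", ts) == 1;
      }
      
      int main(void) {
          char buf[64];
          long long ts;
          write_ts_annotation(buf, sizeof(buf), time(NULL));
          if (read_ts_annotation(buf, &ts))
              printf("annotation timestamp: %lld\n", ts);
          return 0;
      }
      ```
      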
      9ec3294b
  33. 20 Oct, 2021 1 commit
    • Treat subcommands as commands (#9504) · 43e736f7
      guybe7 authored
      ## Intro
      
      The purpose is to allow having different flags/ACL categories for
      subcommands (Example: CONFIG GET is ok-loading but CONFIG SET isn't)
      
      We create a small command table for every command that has subcommands
      and each subcommand has its own flags, etc. (same as a "regular" command)
      
      This commit also unites the Redis and the Sentinel command tables
      
      ## Affected commands
      
      CONFIG
      Used to have "admin ok-loading ok-stale no-script"
      Changes:
      1. Dropped "ok-loading" in all except GET (this doesn't change behavior since
      there were checks in the code doing that)
      
      XINFO
      Used to have "read-only random"
      Changes:
      1. Dropped "random" in all except CONSUMERS
      
      XGROUP
      Used to have "write use-memory"
      Changes:
      1. Dropped "use-memory" in all except CREATE and CREATECONSUMER
      
      COMMAND
      No changes.
      
      MEMORY
      Used to have "random read-only"
      Changes:
      1. Dropped "random" in PURGE and USAGE
      
      ACL
      Used to have "admin no-script ok-loading ok-stale"
      Changes:
      1. Dropped "admin" in WHOAMI, GENPASS, and CAT
      
      LATENCY
      No changes.
      
      MODULE
      No changes.
      
      SLOWLOG
      Used to have "admin random ok-loading ok-stale"
      Changes:
      1. Dropped "random" in RESET
      
      OBJECT
      Used to have "read-only random"
      Changes:
      1. Dropped "random" in ENCODING and REFCOUNT
      
      SCRIPT
      Used to have "may-replicate no-script"
      Changes:
      1. Dropped "may-replicate" in all except FLUSH and LOAD
      
      CLIENT
      Used to have "admin no-script random ok-loading ok-stale"
      Changes:
      1. Dropped "random" in all except INFO and LIST
      2. Dropped "admin" in ID, TRACKING, CACHING, GETREDIR, INFO, SETNAME, GETNAME, and REPLY
      
      STRALGO
      No changes.
      
      PUBSUB
      No changes.
      
      CLUSTER
      Changes:
      1. Dropped "admin in countkeysinslots, getkeysinslot, info, nodes, keyslot, myid, and slots
      
      SENTINEL
      No changes.
      
      (note that DEBUG also fits, but we decided not to convert it since it's for
      debugging and anyway undocumented)
      
      ## New sub-command
      This commit adds another element to the per-command output of COMMAND,
      describing the list of subcommands, if any (in the same structure as "regular" commands)
      Also, it adds a new subcommand:
      ```
      COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)]
      ```
      which returns a set of all commands (unless filtered), but excluding subcommands.
      
      ## Module API
      A new module API, RM_CreateSubcommand, was added in order to allow
      module writers to define subcommands
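      
      A hedged sketch of what a module using it might look like (command and module names are made up; check the module API docs for the exact signature of RedisModule_CreateSubcommand):
      
      ```c
      #include "redismodule.h"
      
      static int GetCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          return RedisModule_ReplyWithSimpleString(ctx, "get-reply");
      }
      
      static int SetCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          return RedisModule_ReplyWithSimpleString(ctx, "set-reply");
      }
      
      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "excfg", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          /* A container command with no handler of its own... */
          if (RedisModule_CreateCommand(ctx, "excfg.cfg", NULL, "", 0, 0, 0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          RedisModuleCommand *parent = RedisModule_GetCommand(ctx, "excfg.cfg");
          /* ...and two subcommands, each with its own flags (like CONFIG GET vs. SET). */
          if (RedisModule_CreateSubcommand(parent, "get", GetCmd, "readonly", 0, 0, 0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          if (RedisModule_CreateSubcommand(parent, "set", SetCmd, "write", 0, 0, 0) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          return REDISMODULE_OK;
      }
      ```
      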
      
      ## ACL changes:
      1. Now, that each subcommand is actually a command, each has its own ACL id.
      2. The old mechanism of allowed_subcommands is redundant
      (blocking/allowing a subcommand is the same as blocking/allowing a regular command),
      but we had to keep it, to support the widespread usage of allowed_subcommands
      to block commands with certain args, that aren't subcommands (e.g. "-select +select|0").
      3. I have renamed allowed_subcommands to allowed_firstargs to emphasize the difference.
      4. Because subcommands are commands in ACL too, you can now use "-" to block subcommands
      (e.g. "+client -client|kill"), which wasn't possible in the past.
      5. It is also possible to use the allowed_firstargs mechanism with subcommand.
      For example: `+config -config|set +config|set|loglevel` will block all CONFIG SET except
      for setting the log level.
      6. All of the ACL changes above required some amount of refactoring.
      
      ## Misc
      1. There are two approaches: Either each subcommand has its own function or all
         subcommands use the same function, determining what to do according to argv[0].
         For now, I took the former approach only with CONFIG and COMMAND,
         while other commands use the latter approach (for a smaller blamelog diff).
      2. Deleted memoryGetKeys: It is no longer needed because MEMORY USAGE now uses the "range" key spec.
      4. Bugfix: GETNAME was missing from CLIENT's help message.
      5. Sentinel and Redis now use the same table, with the same function pointer.
         Some commands have a different implementation in Sentinel, so we redirect
         them (these are ROLE, PUBLISH, and INFO).
      6. Command stats now show the stats per subcommand (e.g. instead of stats just
         for "config" you will have stats for "config|set", "config|get", etc.)
      7. It is now possible to use COMMAND directly on subcommands:
         COMMAND INFO CONFIG|GET (The pipeline syntax was inspired by ACL, and
         can be used in the functions lookupCommandBySds and lookupCommandByCString)
      8. STRALGO is now a container command (has "help")
      
      ## Breaking changes:
      1. Command stats now show the stats per subcommand (see (5) above)
      43e736f7
  34. 06 Oct, 2021 1 commit
  35. 04 Oct, 2021 1 commit
  36. 26 Sep, 2021 1 commit
  37. 15 Sep, 2021 1 commit
  38. 09 Sep, 2021 1 commit
    • Replace all usage of ziplist with listpack for t_zset (#9366) · 3ca6972e
      sundb authored
      Part two of implementing #8702 (zset), after #8887.
      
      ## Description of the feature
      Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.
      
      ## Rdb format changes
      New `RDB_TYPE_ZSET_LISTPACK` rdb type.
      
      ## Rdb loading improvements:
      1) Pre-expansion of the dict for validation of duplicate data for listpack and ziplist.
      2) Simplified the release of empty key objects during RDB loading.
      3) Unified the ziplist and listpack data verification methods for zset and hash, and moved the code to rdb.c.
      
      ## Interface changes
      1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
      2) OBJECT ENCODING will return listpack instead of ziplist.
      
      ## Listpack improvements:
      1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from listpack.
      2) Improve the performance of `lpCompare`, converting from string to integer is faster than converting from integer to string.
      3) Replace `snprintf` with `ll2string` to improve performance in converting numbers to strings in `lpGet()`.
      
      ## Zset improvements:
      1) Improve the performance of `zzlFind` method, use `lpFind` instead of `lpCompare` in a loop.
      2) Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to delete an element of a zset.
      
      ## Tests
      1) Add some unittests for `lpDeleteRange` and `lpDeleteRangeWithEntry` function.
      2) Add zset RDB loading test.
      3) Add a benchmark test for `lpCompare` and `ziplistCompare`.
      4) Add empty listpack zset corrupt dump test.
      3ca6972e
  39. 10 Aug, 2021 1 commit
    • Replace all usage of ziplist with listpack for t_hash (#8887) · 02fd76b9
      sundb authored
      
      
      Part one of implementing #8702 (taking hashes first before other types)
      
      ## Description of the feature
      1. Change ziplist encoded hash objects to listpack encoding.
      2. Convert existing ziplists at RDB loading time (an O(n) operation).
      
      ## Rdb format changes
      1. Add RDB_TYPE_HASH_LISTPACK rdb type.
      2. Bump RDB_VERSION to 10
      
      ## Interface changes
      1. New `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`)
      2. OBJECT ENCODING will return `listpack` instead of `ziplist`
      
      ## Listpack improvements:
      1. Support direct insert, replace integer element (rather than convert back and forth from string)
      3. Add more listpack capabilities to match the ziplist ones (like `lpFind`, `lpRandomPairs` and such)
      4. Optimize element length fetching, avoid multiple calculations
      5. Use inline to avoid function call overhead.
      
      ## Tests
      1. Add a new test to the RDB load time conversion
      2. Adding the listpack unit tests. (based on the one in ziplist.c)
      3. Add a few "corrupt payload: fuzzer findings" tests, and slightly modify existing ones.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      02fd76b9