1. 20 Jun, 2024 1 commit
    • Fix rdbLoadObject() empty hash (#13347) · e18a173a
      Moti Cohen authored
      As part of the HFE feature, the logic of rdbLoadObject() was wrongly
      modified to treat an empty hash loaded from RDB as a hash whose fields
      had all expired. Roll back to the `emptykey` logic: this function should
      blindly load all fields, expired or not. Manually verified.
      
      A few more minor fixes:
      - Remove the duplicate emptyKey check for hashes.
      - Change `sds` to `hfield` in rdbLoadObject() (not really a bug; both
        are of type char*).
      - Revert rdbLoadObject() to take a dbid instead of a db.
  2. 29 May, 2024 1 commit
    • HFE to support AOF and replicas (#13285) · 33fc0fbf
      Moti Cohen authored
      * For the replica's sake, rewrite the `H*EXPIRE*`, `HSETF` and `HGETF`
      commands to carry absolute unix time in msec (see the sketch after this
      list).
      * On active expiration of a field, propagate HDEL to the replica
      (`propagateHashFieldDeletion()`).
      * On lazy expiration, propagate HDEL to the replica (`hashTypeGetValue()`
      now calls `hashTypeDelete()` and also takes care to call
      `propagateHashFieldDeletion()`).
      * Fix the `H*EXPIRE*` commands such that if they get the `LT` flag and
      the field has no expiration, the condition is considered valid.
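
      For illustration, here is a hedged sketch of the absolute-time rewrite
      mentioned in the first bullet (the argv position and the helper function
      name are assumptions, not the actual patch; the calls used do exist in
      the Redis codebase):
      ```C
      #include "server.h"

      /* Rewrite a relative TTL argument into an absolute unix time in msec so
       * that replicas and the AOF see a deterministic value. Assumes the TTL
       * sits at argv[2]; this is illustrative only. */
      static void rewriteRelativeTtlToAbsolute(client *c, long long relative_ms) {
          long long when = mstime() + relative_ms;           /* absolute expiry */
          robj *abs = createStringObjectFromLongLong(when);
          rewriteClientCommandArgument(c, 2, abs);           /* propagated form */
          decrRefCount(abs);                                 /* rewrite keeps its own ref */
      }
      ```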
      
      Note that replicas don't perform any active expiration and should avoid
      lazy expiration. On a replica, `hashTypeGetValue()` doesn't check
      expiration (as long as the master didn't request to delete the field, it
      is considered valid).
      
      TODO: 
      * Attach `dbid` to HASH metadata. See
      [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
  3. 17 May, 2024 1 commit
    • Hfe serialization listpack (#13243) · 323be4d6
      Ronen Kalish authored
      Add RDB de/serialization for HFE
      
      This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and
      `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data.
      When the hash RAM encoding is dict, it will be saved in the former, and
      when it is listpack it will be saved in the latter.
      Both formats just add the TTL value for each field after the data that
      was previously saved, i.e. HASH_METADATA saves the number of entries
      and, for each entry, the key, value and TTL, whereas a listpack is saved
      as a blob.
      On read, the usual dict <--> listpack conversion takes place if
      required.
      In addition, when reading a hash that was saved as a dict, fields are
      actively expired if their expiry is due. Currently this also holds for
      the listpack encoding, but it is supposed to be removed.
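
      For illustration, a sketch of the RDB_TYPE_HASH_METADATA layout
      described above (the writer API here is hypothetical, not rdb.c):
      ```C
      #include <stddef.h>

      typedef struct rioWriter rioWriter;                      /* hypothetical stream writer */
      int writeLength(rioWriter *w, unsigned long long v);     /* hypothetical */
      int writeString(rioWriter *w, const char *s);            /* hypothetical */

      typedef struct { const char *key, *value; long long ttl_ms; } hashField;

      /* Entry count first, then key, value and TTL for every field. */
      static int saveHashWithTTLs(rioWriter *w, const hashField *f, size_t count) {
          if (writeLength(w, count) == -1) return -1;          /* number of entries */
          for (size_t i = 0; i < count; i++) {
              if (writeString(w, f[i].key) == -1) return -1;
              if (writeString(w, f[i].value) == -1) return -1;
              if (writeLength(w, (unsigned long long)f[i].ttl_ms) == -1) return -1;
          }
          return 0;
      }
      ```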
      
      TODO:
      Remove active expiry on load when loading from listpack format (unless
      we'll decide to keep it)
  4. 18 Apr, 2024 1 commit
    • Hash Field Expiration - Basic support · c18ff056
      Moti Cohen authored
      - Add ebuckets & mstr data structures
      - Integrate active & lazy expiration
      - Add most of the commands 
      - Add support for dict (listpack is missing)
      TODOs:  RDB, notification, listpack, HSET, HGETF, defrag, aof
  5. 20 Mar, 2024 1 commit
  6. 01 Feb, 2024 1 commit
    • Refine the purpose of rdb saving with accurate flags (#12925) · 62153b3b
      Yanqi Lv authored
      In Redis, RDB files are mainly produced in three scenarios:
      
      - backup, such as `bgsave` and `save` command
      - full sync in replication
      - aof rewrite if `aof-use-rdb-preamble` is yes
      
      We also have some RDB flags to identify the purpose of rdb saving.
      ```C
      /* flags on the purpose of rdb save or load */
      #define RDBFLAGS_NONE 0                 /* No special RDB loading. */
      #define RDBFLAGS_AOF_PREAMBLE (1<<0)    /* Load/save the RDB as AOF preamble. */
      #define RDBFLAGS_REPLICATION (1<<1)     /* Load/save for SYNC. */
      ```
      
      But currently, these flags don't exactly match the purpose of the RDB
      save. For example, `rdbSaveRioWithEOFMark` calls `startSaving` with
      `RDBFLAGS_REPLICATION` but `rdbSaveRio` with `RDBFLAGS_NONE`.
      ```C
      int rdbSaveRioWithEOFMark(int req, rio *rdb, int *error, rdbSaveInfo *rsi) {
          char eofmark[RDB_EOF_MARK_SIZE];
      
          startSaving(RDBFLAGS_REPLICATION);
          getRandomHexChars(eofmark,RDB_EOF_MARK_SIZE);
          if (error) *error = 0;
          if (rioWrite(rdb,"$EOF:",5) == 0) goto werr;
          if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
          if (rioWrite(rdb,"\r\n",2) == 0) goto werr;
          if (rdbSaveRio(req,rdb,error,RDBFLAGS_NONE,rsi) == C_ERR) goto werr;
          if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
          stopSaving(1);
          return C_OK;
      
      werr: /* Write error. */
          /* Set 'error' only if not already set by rdbSaveRio() call. */
          if (error && *error == 0) *error = errno;
          stopSaving(0);
          return C_ERR;
      }
      ```
      
      In this PR, I refine the purpose of rdb saving with accurate flags.
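
      Presumably the refinement for the example above is to keep the flag
      passed to `rdbSaveRio` consistent with the purpose declared to
      `startSaving` (an excerpt-style sketch, not the exact patch):
      ```C
      /* Sketch of the presumed fix inside rdbSaveRioWithEOFMark(): */
      startSaving(RDBFLAGS_REPLICATION);
      /* ... write the $EOF: prefix and eofmark as before ... */
      if (rdbSaveRio(req,rdb,error,RDBFLAGS_REPLICATION,rsi) == C_ERR) goto werr;
      ```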
  7. 23 Jan, 2024 1 commit
  8. 15 Oct, 2023 1 commit
    • Replace cluster metadata with slot specific dictionaries (#11695) · 0270abda
      Vitaly authored
      This is an implementation of https://github.com/redis/redis/issues/10589
      that eliminates 16 bytes per entry in cluster mode, which are currently
      used to create a linked list between entries in the same slot. The main
      idea is splitting the main dictionary into 16k smaller dictionaries (one
      per slot), so we can perform all slot-specific operations, such as
      iteration, without any additional info in the `dictEntry`. For Redis
      cluster, the expectation is that there will be a larger number of keys,
      so the fixed overhead of 16k dictionaries will be acceptable. The expire
      dictionary is also split up so that each slot is logically decoupled, so
      that in subsequent revisions we will be able to atomically flush a slot
      of data.
      
      ## Important changes
      * Incremental rehashing - one big change here is that it's not one, but
      rather up to 16k dictionaries that can be rehashing at the same time. In
      order to keep track of them, we introduce a separate queue for
      dictionaries that are rehashing. Also, instead of rehashing a single
      dictionary, the cron job will now try to rehash as many as it can in 1ms.
      * getRandomKey - now needs to not only select a random key from a random
      bucket, but also to select a random dictionary. Fairness is a major
      concern here, as keys can be unevenly distributed across the slots. To
      address this, we introduced a binary index tree (see the sketch after
      this list). With that data structure we are able to efficiently find a
      random slot using binary search in O(log^2(slot count)) time.
      * Iteration efficiency - when iterating a dictionary with a lot of empty
      slots, we want to skip them efficiently. We can do this using the same
      binary index that is used for random key selection; this index allows us
      to find the slot for a specific key index. For example, if there are 10
      keys in slot 0, we can quickly find the slot that contains the 11th key
      using binary search on top of the binary index tree.
      * scan API - in order to perform a scan across the entire DB, the cursor
      now needs to save not only the position within the dictionary but also
      the slot id. In this change we append the slot id into the LSB of the
      cursor so it can be passed around between the client and the server.
      This has an interesting side effect: you can now start scanning a
      specific slot by simply providing the slot id as the cursor value. The
      plan is to not document this as defined behavior, however. It's also
      worth noting that the SCAN API is now technically incompatible with
      previous versions, although practically we don't believe it's an issue.
      * Checksum calculation optimizations - during command execution, we know
      that all of the keys are from the same slot (outside of a few notable
      exceptions such as cross-slot scripts and modules). We don't want to
      compute the checksum multiple times, hence we rely on the cached slot id
      in the client during command execution. All operations that access
      random keys should either pass in the known slot or recompute the slot.
      * Slot info in RDB - in order to resize individual dictionaries correctly, while loading RDB, it's not enough to know total number of keys (of course we could approximate number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains number of keys in each slot, which can be used as a hint during loading.
      * DB size - besides the `DBSIZE` API, we need to know the size of the DB
      in many places. In order to avoid scanning all dictionaries and summing
      up their sizes in a loop, we've introduced a new field into `redisDb`
      that keeps track of `key_count`. This way the DBSIZE operation stays
      O(1). The same is kept for O(1) expires computation as well.
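
      As referenced above, a minimal sketch of the binary index (Fenwick) tree
      idea for mapping a key index to a slot (hypothetical names, not the
      actual Redis data structure):
      ```C
      #define NUM_SLOTS 16384

      typedef struct {
          long long tree[NUM_SLOTS + 1]; /* Fenwick tree over per-slot key counts */
          long long total;               /* total number of keys in all slots */
      } slotIndex;

      /* Add 'delta' keys to 0-based 'slot'. */
      void slotIndexAdd(slotIndex *idx, int slot, long long delta) {
          idx->total += delta;
          for (int i = slot + 1; i <= NUM_SLOTS; i += i & -i)
              idx->tree[i] += delta;
      }

      /* Return the 0-based slot containing the k-th key (1 <= k <= total) by
       * descending the tree. Picking k uniformly at random gives a fair
       * random slot for getRandomKey(). */
      int slotIndexFindSlot(slotIndex *idx, long long k) {
          int pos = 0;
          for (int step = NUM_SLOTS; step > 0; step >>= 1) {
              if (pos + step <= NUM_SLOTS && idx->tree[pos + step] < k) {
                  pos += step;
                  k -= idx->tree[pos];
              }
          }
          return pos;
      }
      ```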
      
      ## Performance
      This change improves SET performance in cluster mode by ~5%; most of the
      gains come from not having to maintain linked lists for keys in a slot.
      Non-cluster mode has the same performance. For workloads that rely on
      evictions, the performance is similar because of the extra overhead of
      finding keys to evict.
      
      RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
      
      ## Interface changes
      * Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
      * Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
      * New RDB version to support the new op code for SLOT information. 
      
      ---------
      Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  9. 16 Aug, 2023 1 commit
    • change return type to be consistant (#12479) · 965dc90b
      Wen Hui authored
      Currently, the return type of the rdbSaveMillisecondTime and
      rdbSaveDoubleValue APIs is int, but they return the value directly from
      the rdbWriteRaw function, which has a return type of ssize_t. As this
      may overflow an int, the return type is changed to ssize_t.
  10. 09 Apr, 2023 1 commit
    • Add RM_RdbLoad and RM_RdbSave module API functions (#11852) · e55568ed
      Ozan Tezcan authored
      Add `RM_RdbLoad()` and `RM_RdbSave()` to load/save RDB files from the module API. 
      
      In our use case, we have our clustering implementation as a module. As part of this
      implementation, the module needs to trigger RDB save operation at specific points.
      Also, this module delivers RDB files to other nodes (not using Redis' replication).
      When a node receives an RDB file, it should be able to load the RDB. Currently,
      there is no module API to save/load RDB files. 
      
      
      This PR adds four new APIs:
      ```c
      RedisModuleRdbStream *RM_RdbStreamCreateFromFile(const char *filename);
      void RM_RdbStreamFree(RedisModuleRdbStream *stream);
      
      int RM_RdbLoad(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags);
      int RM_RdbSave(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags);
      ```
      
      The first step is to create a `RedisModuleRdbStream` object. This PR provides a function to
      create RedisModuleRdbStream from the filename. (You can load/save RDB with the filename).
      In the future, this API can be extended if needed: 
      e.g., `RM_RdbStreamCreateFromFd()`, `RM_RdbStreamCreateFromSocket()` to save/load
      RDB from an `fd` or a `socket`. 
      
      
      Usage:
      ```c
      /* Save RDB */
      RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
      RedisModule_RdbSave(ctx, stream, 0);
      RedisModule_RdbStreamFree(stream);
      
      /* Load RDB */
      RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
      RedisModule_RdbLoad(ctx, stream, 0);
      RedisModule_RdbStreamFree(stream);
      ```
  11. 12 Feb, 2023 1 commit
    • Reclaim page cache of RDB file (#11248) · 7dae142a
      Tian authored
      # Background
      The RDB file is usually generated and used once and seldom used again, but its content resides in the page cache until the OS evicts it. A potential problem is that once free memory is exhausted, the OS has to reclaim memory from the page cache or swap anonymous pages out, which may cause jitter in the Redis service.
      
      Consider a concrete scenario: a high-capacity machine hosts many Redis instances, and we upgrade them together. The page cache on the host machine grows as RDBs are generated. Once free memory drops below the low watermark (which is more likely to happen on older Linux kernels like 3.10, before [watermark_scale_factor](https://lore.kernel.org/lkml/1455813719-2395-1-git-send-email-hannes@cmpxchg.org/) was introduced, when the `low watermark` was linear in the `min watermark` and there was not much buffer space before `kswapd` woke up to reclaim memory), a `direct reclaim` happens, which means the process stalls waiting for memory allocation.
      
      # What the PR does
      The PR introduces the capability to reclaim the page cache when an RDB is read or written. For reads, it's a little messy to do the reclaim incrementally, so it is done in one go in the background after the load has finished, to avoid blocking the worker thread. For writes, incremental reclaim amortizes the work, so there is no need to push it into the background, and the peak cache watermark is reduced this way.
      
      Two cases are addressed specially, replication and restart, for both of which the cache is leveraged to speed up processing, so the reclaim is postponed to the right time. To do this, a flag is added to `rdbSave` and `rdbLoad` to control whether the cache needs to be kept, with the default value false.
      
      # Something deserve noting
      1. Though `posix_fadvise` is part of the POSIX standard, only a few platforms support it, e.g. Linux and FreeBSD 10.0.
      2. On Linux, `posix_fadvise` only takes effect on written-back pages, so a `sync` (or `fsync`, `fdatasync`) is needed to flush dirty pages before calling `posix_fadvise` when reclaiming the write cache (see the sketch below).
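
      A minimal sketch of that flush-then-advise pattern (the helper name is
      an assumption, not the Redis function):
      ```C
      #define _POSIX_C_SOURCE 200112L
      #include <fcntl.h>
      #include <unistd.h>

      /* Ask the kernel to drop the page cache backing [offset, offset+len) of
       * 'fd'. POSIX_FADV_DONTNEED ignores dirty pages on Linux, so flush them
       * first when reclaiming the cache of a file we have just written. */
      static int reclaimFileCache(int fd, off_t offset, off_t len) {
          if (fdatasync(fd) == -1) return -1;
          return posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
      }
      ```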
      
      # About test
      A unit test is added to verify the effect of `posix_fadvise`.
      In the integration tests, the overall cache increase is checked, as well as the cache backed by the RDB; a specific TCL test is executed in an isolated GitHub Actions job.
  12. 30 Nov, 2022 1 commit
    • Stream consumers: Re-purpose seen-time, add active-time (#11099) · 72e90695
      guybe7 authored
      1. "Fixed" the current code so that seen-time/idle actually refers to interaction
        attempts (as documented; breaking change)
      2. Added active-time/inactive to refer to successful interaction (what
        seen-time/idle used to be)
      
      At first, I tried to avoid changing the behavior of seen-time/idle but then realized
      that, in this case, the odds are that people read the docs and implemented their
      code based on the docs (which didn't match the behavior).
      For the most part, that would work fine, except that issue #9996 was found.
      
      I was working under the assumption that people relied on the docs, and for
      the most part, it could have worked well enough. So instead of fixing the docs,
      as I would usually do, I fixed the code to match the docs in this particular case.
      
      Note that, in case the consumer has never read any entries, the values
      for both "active-time" (XINFO FULL) and "inactive" (XINFO CONSUMERS) will
      be -1, meaning here that the consumer was never active.
      
      Note that seen/active time is only affected by XREADGROUP / X[AUTO]CLAIM, not
      by XPENDING, XINFO, and other "read-only" stream CG commands (always has been,
      even before this PR)
      
      Other changes:
      * Another behavioral change (arguably a bugfix) is that XREADGROUP and X[AUTO]CLAIM
        create the consumer regardless of whether it was able to perform some reading/claiming
      * RDB format change to save the `active_time`, and set it to the same value of `seen_time` in old rdb files.
  13. 20 Nov, 2022 1 commit
    • sanitize dump payload: fix crash with empty set with listpack encoding (#11519) · 51887e61
      Binbin authored
      The following example will create an empty set (listpack encoding):
      ```
      > RESTORE key 0
      "\x14\x25\x25\x00\x00\x00\x00\x00\x02\x01\x82\x5F\x37\x03\x06\x01\x82\x5F\x35\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x31\x03\x82\x5F\x39\x03\x04\xA9\x08\x01\xFF\x0B\x00\xA3\x26\x49\xB4\x86\xB0\x0F\x41"
      OK
      > SCARD key
      (integer) 0
      > SRANDMEMBER key
      Error: Server closed the connection
      ```
      
      In the spirit of #9297, skip an empty set when loading RDB_TYPE_SET_LISTPACK.
      The issue was introduced in #11290.
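
      A hedged sketch of the kind of guard implied above (not the exact patch;
      the surrounding rdbLoadObject() context is assumed):
      ```C
      /* While loading RDB_TYPE_SET_LISTPACK, treat a zero-length set like the
       * other "empty key" cases instead of keeping a broken empty object. */
      if (setTypeSize(o) == 0) {
          decrRefCount(o);
          goto emptykey;
      }
      ```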
  14. 09 Nov, 2022 1 commit
    • Listpack encoding for sets (#11290) · 4e472a1a
      Viktor Söderqvist authored
      Small sets whose elements are not all integers are listpack encoded, by default
      up to 128 elements with a max of 64 bytes per element, controlled by the new configs
      `set-max-listpack-entries` and `set-max-listpack-value`. This saves memory for small
      sets compared to using a hashtable.
      
      Sets with only integers, even very small sets, are still intset encoded (up to 1G
      limit, etc.). Larger sets are hashtable encoded.
      
      This PR increments the RDB version, and has an effect on OBJECT ENCODING
      
      Possible conversions when elements are added:
      
          intset -> listpack
          listpack -> hashtable
          intset -> hashtable
      
      Note: No conversion happens when elements are deleted. If all elements are
      deleted and then added again, the set is deleted and recreated, thus implicitly
      converted to a smaller encoding.
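
      An illustrative sketch of the encoding decision described above (the
      helper and its parameters are hypothetical, not Redis source; the
      thresholds are the configs named above):
      ```C
      #include <stddef.h>

      typedef enum { ENC_INTSET, ENC_LISTPACK, ENC_HASHTABLE } setEncoding;

      /* 'all_integers' is non-zero when every element is an integer string. */
      setEncoding chooseSetEncoding(size_t count, size_t max_elem_len, int all_integers,
                                    size_t max_intset_entries,    /* set-max-intset-entries   */
                                    size_t max_listpack_entries,  /* set-max-listpack-entries */
                                    size_t max_listpack_value)    /* set-max-listpack-value   */
      {
          if (all_integers && count <= max_intset_entries)
              return ENC_INTSET;
          if (count <= max_listpack_entries && max_elem_len <= max_listpack_value)
              return ENC_LISTPACK;
          return ENC_HASHTABLE;
      }
      ```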
  15. 18 Oct, 2022 1 commit
    • Avoid saving module aux on RDB if no aux data was saved by the module. (#11374) · b43f2548
      Meir Shpilraien (Spielrein) authored
      ### Background
      
      The issue is that when saving an RDB with module AUX data, the module AUX metadata
      (moduleid, when, ...) is saved to the RDB even though the module did not save any actual data.
      This prevents loading the RDB in the absence of the module (although there is no actual data in
      the RDB that requires the module to be loaded).
      
      ### Solution
      
      The solution suggested in this PR is that module AUX will be saved to the RDB only if the module
      actually saved something during the `aux_save` function.
      
      To support backward compatibility, we introduce an `aux_save2` callback that acts the same as
      `aux_save`, with the tiny change that it avoids saving the aux field if no data was actually saved
      by the module. Modules can use the new API to make sure that if they have no data to save,
      then it will be possible to load the created RDB even without the module.
      
      ### Concerns
      
      A module may register for the aux load and save hooks just in order to be notified when
      saving or loading starts or completes (there are better ways to do that, but it is still possible
      that someone used it).
      
      However, if a module didn't save a single field in the save callback, it means it's not allowed
      to read in the read callback, since it has no way to distinguish between empty and non-empty
      payloads. Furthermore, it means that if the module did that, it must never change it, since it'll
      break compatibility with its old RDB files, so this is really not a valid use case.
      
      Since some modules (ones that currently save one field indicating an empty payload) need
      to know whether saving an empty payload is valid, and whether Redis is going to ignore an empty
      payload or store it, we opted to add a new API (rather than change the behavior of an existing
      API and expect modules to check the Redis version).
      
      ### Technical Details
      
      To avoid saving AUX data to the RDB, we change the code to first save the AUX metadata
      (moduleid, when, ...) into a temporary buffer. The buffer is then flushed to the rio the first
      time the module makes a write operation inside the `aux_save` function. If the module saves
      nothing (and `aux_save2` was used), the entire temporary buffer is simply dropped and no
      data about this AUX field is saved to the RDB. This makes it possible to load the RDB even in
      the absence of the module.
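
      A minimal sketch of that deferred-flush pattern (all names are
      illustrative, not the Redis implementation):
      ```C
      #include <stdio.h>
      #include <stddef.h>

      typedef struct {
          char meta[128];      /* staged AUX header (moduleid, when, ...) */
          size_t meta_len;
          int meta_flushed;    /* set once the header has hit the stream */
      } deferredAux;

      /* Called for every module write inside aux_save2: the staged header is
       * flushed lazily, so an aux section appears only if the module wrote
       * something. */
      static int auxWrite(deferredAux *aux, FILE *out, const void *buf, size_t len) {
          if (!aux->meta_flushed) {
              if (fwrite(aux->meta, 1, aux->meta_len, out) != aux->meta_len) return -1;
              aux->meta_flushed = 1;
          }
          return fwrite(buf, 1, len, out) == len ? 0 : -1;
      }
      /* If aux_save2 returns without ever calling auxWrite(), the staged
       * header is dropped and nothing about this AUX field lands in the RDB. */
      ```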
      
      Test was added to verify the fix.
  16. 15 Aug, 2022 1 commit
  17. 03 Aug, 2022 1 commit
    • Adding parentheses and do-while(0) to macros (#11080) · 1aa6c4ab
      Moti Cohen authored
      Fix a few macros that don't follow the most basic safety conventions:
      wrap any usage of a passed variable in parentheses, and if the macro
      body contains more than one statement, wrap it in do-while(0)
      (or parentheses).
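
      An illustrative before/after of those conventions (not taken from the
      patch):
      ```C
      /* Unsafe: 'max' and 'x' are not parenthesized, and the two statements
       * fall apart when the macro is used as the body of an if/else. */
      #define UPDATE_MAX_BAD(max, x) if (x > max) max = x; stats_updates++

      /* Safe: every argument use is parenthesized, and the body is wrapped in
       * do { ... } while(0) so the macro behaves like a single statement. */
      #define UPDATE_MAX(max, x) do {            \
              if ((x) > (max)) (max) = (x);      \
              stats_updates++;                   \
          } while (0)
      ```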
  18. 05 Apr, 2022 1 commit
    • Functions: Move library meta data to be part of the library payload. (#10500) · ae020e3d
      Meir Shpilraien (Spielrein) authored
      ## Move library meta data to be part of the library payload.
      
      Following the discussion on https://github.com/redis/redis/issues/10429 and the intention to add (in the future) library versioning support, we believe that the entire library metadata (like name and engine) should be part of the library payload and not provided by the `FUNCTION LOAD` command. The reasoning behind this is that the programmer who developed the library should be the one who sets those values (name, engine, and in the future also version). **It is not the responsibility of the admin who loads the library into the database.**
      
      The PR moves all the library metadata (engine and function name) to be part of the library payload. The metadata needs to be provided on the first line of the payload using the shebang format (`#!<engine> name=<name>`), example:
      
      ```lua
      #!lua name=test
      redis.register_function('foo', function() return 1 end)
      ```
      
      The above script will run on the Lua engine and will create a library called `test`.
      
      ## API Changes (compare to 7.0 rc2)
      
      * The `FUNCTION LOAD` command was changed and now simply gets the library payload and extracts the engine and name from the payload. In addition, the command will now return the function name, which can later be used in `FUNCTION DELETE` and `FUNCTION LIST`.
      * The description field was completely removed from `FUNCTION LOAD` and `FUNCTION LIST`.
      
      
      ## Breaking Changes (compare to 7.0 rc2)
      
      * Library description was removed (we can re-add it in the future either as part of the shebang line or an additional line).
      * Loading an AOF file that was generated by either 7.0 rc1 or 7.0 rc2 will fail because the old command syntax is invalid.
      
      ## Notes
      
      * Loading an RDB file that was generated by rc1 / rc2 **is** supported; Redis will automatically add the shebang to the library payloads (we can probably delete that code after 7.0.3 or so since there's no need to keep supporting upgrades from an RC build).
  19. 23 Feb, 2022 1 commit
    • Add stream consumer group lag tracking and reporting (#9127) · c81c7f51
      Itamar Haber authored
      
      
      Adds the ability to track the lag of a consumer group (CG), that is, the number
      of entries yet-to-be-delivered from the stream.
      
      The proposed constant-time solution is in the spirit of "best-effort."
      
      Partially addresses #8737.
      
      ## Description of approach
      
      We add a new "entries_added" property to the stream. This starts at 0 for a new
      stream and is incremented by 1 with every `XADD`.  It is essentially an all-time
      counter of the entries added to the stream.
      
      Given the stream's length and this counter value, we can trivially find the logical
      "entries_added" counter of the first ID if and only if the stream is contiguous.
      A fragmented stream contains one or more tombstones generated by `XDEL`s.
      The new "xdel_max_id" stream property tracks the latest tombstone.
      
      The CG also tracks its last delivered ID as an "entries_read" counter and
      increments it independently when delivering new messages, unless this
      read counter is invalid (-1 means invalid offset). When the CG's counter is
      available, the reported lag is the difference between added and read counters.
      
      Lastly, this also adds a "first_id" field to the stream structure in order to make
      looking it up cheaper in most cases.
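
      A small sketch of the lag computation described above (hypothetical
      struct names, not the Redis ones):
      ```C
      #include <stdint.h>

      typedef struct { int64_t entries_added; } streamInfo;
      typedef struct { int64_t entries_read; } groupInfo;   /* -1 == invalid */

      /* Returns the lag, or -1 when it cannot be tracked (XINFO reports null). */
      static int64_t consumerGroupLag(const streamInfo *s, const groupInfo *g) {
          if (g->entries_read < 0) return -1;               /* invalid read counter */
          return s->entries_added - g->entries_read;
      }
      ```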
      
      ## Limitations
      
      There are two cases in which the mechanism isn't able to track the lag.
      In these cases, `XINFO` replies with `null` in the "lag" field.
      
      The first case is when a CG is created with an arbitrary last delivered ID,
      that isn't "0-0", nor the first or the last entries of the stream. In this case,
      it is impossible to obtain a valid read counter (short of an O(N) operation).
      The second case is when there are one or more tombstones fragmenting
      the stream's entries range.
      
      In both cases, given enough time and assuming that the consumers are
      active (reading and claiming) and advancing, the CG should be able to
      catch up with the tip of the stream and report zero lag.
      Once that's achieved, lag tracking would resume as normal (until the
      next tombstone is set).
      
      ## API changes
      
      * `XGROUP CREATE` added with the optional named argument `[ENTRIESREAD entries-read]`
        for explicitly specifying the new CG's counter.
      * `XGROUP SETID` added with an optional positional argument `[ENTRIESREAD entries-read]`
        for specifying the CG's counter.
      * `XINFO` reports the maximal tombstone ID, the recorded first entry ID, and total
        number of entries added to the stream.
      * `XINFO` reports the current lag and logical read counter of CGs.
      * `XSETID` is an internal command that's used in replication/aof. It has been added with
        the optional positional arguments `[ENTRIESADDED entries-added] [MAXDELETEDID max-deleted-entry-id]`
        for propagating the CG's offset and maximal tombstone ID of the stream.
      
      ## The generic unsolved problem
      
      The current stream implementation doesn't provide an efficient way to obtain the
      approximate/exact size of a range of entries. While it could've been nice to have
      that ability (#5813) in general, let alone specifically in the context of CGs, the risk
      and complexities involved in such implementation are in all likelihood prohibitive.
      
      ## A refactoring note
      
      The `streamGetEdgeID` has been refactored to accommodate both the existing seek
      of any entry as well as seeking non-deleted entries (the addition of the `skip_tombstones`
      argument). Furthermore, this refactoring also migrated the seek logic to use the
      `streamIterator` (rather than `raxIterator`) that was, in turn, extended with the
      `skip_tombstones` Boolean struct field to control the emission of these.
      Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  20. 06 Jan, 2022 1 commit
    • Redis Function Libraries (#10004) · 885f6b5c
      Meir Shpilraien (Spielrein) authored
      # Redis Function Libraries
      
      This PR implements Redis Function Libraries as described in: https://github.com/redis/redis/issues/9906.
      
      The purpose of libraries is to provide better code sharing between functions by allowing the creation
      of multiple functions in a single command. Functions that were created together can safely share code
      with each other without worrying about compatibility issues and versioning.
      
      Creating a new library is done using the 'FUNCTION LOAD' command (the full API is described below).
      
      This PR introduces a new struct called libraryInfo, which holds information about a library:
      * name - name of the library
      * engine - engine used to create the library
      * code - library code
      * description - library description
      * functions - the functions exposed by the library
      
      When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo.
      Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo.
      As a result, the engine will create one or more functions by calling 'libraryCreateFunction'.
      The new function will be added to the newly created libraryInfo. So far, everything is happening
      locally on the libraryInfo, so it is easy to abort the operation (in case of an error) by simply
      freeing the libraryInfo. After the library info is fully constructed, we start the joining phase, in
      which we join the new library to the other libraries that currently exist on Redis.
      The joining phase makes sure there is no function collision and adds the library to the
      librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact
      same way as functionCtx was used (with respect to RDB loading, replication, ...).
      The only difference is that apart from the function dictionary (which maps function name to functionInfo
      object), the librariesCtx also contains a libraries dictionary that maps library name to libraryInfo object.
      
      ## New API
      ### FUNCTION LOAD
      `FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
      Create a new library with the given parameters:
      * ENGINE - Engine name to use to create the library.
      * LIBRARY NAME - The new library name.
      * REPLACE - If the library already exists, replace it.
      * DESCRIPTION - Library description.
      * CODE - Library code.
      
      Return "OK" on success, or an error in the following cases:
      * The library name is already taken and REPLACE was not used
      * Name collision with another existing library (even if REPLACE was used)
      * Library registration failed by the engine (usually a compilation error)
      
      ## Changed API
      ### FUNCTION LIST
      `FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
      The command was modified to also allow getting the libraries' code (so the `FUNCTION INFO` command is no
      longer needed and was removed). In addition, the command gets an optional argument, `LIBRARYNAME`, which
      allows you to only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.
      
      ### INFO MEMORY
      Added number of libraries to `INFO MEMORY`
      
      ### Commands flags
      The `DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands
      as commands that add new data to the dataset (functions are data) and so we want to disallow
      running those commands on OOM.
      
      ## Removed API
      * FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
      * FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899
      
      ## Lua engine changes
      When the Lua engine gets the code given to the `FUNCTION LOAD` command, it immediately runs it; we call
      this run the loading run. The loading run is not a usual script run: it is not possible to invoke any
      Redis command from within the load run.
      Instead, there is a new API provided by the `library` object. The new APIs:
      * `redis.log` - behaves the same as the regular `redis.log`
      * `redis.register_function` - register a new function to the library
      
      The purpose of the loading run is to register functions using the new `redis.register_function` API.
      Any attempt to use any other API will result in an error. In addition, the load run has a time
      limit of 500ms; an error is raised on timeout and the entire operation is aborted.
      
      ### `redis.register_function`
      `redis.register_function(<function_name>, <callback>, [<description>])`
      This new API allows users to register a new function that will be linked to the newly created library.
      This API can only be called during the load run (see definition above). Any attempt to use it outside
      of the load run will result in an error.
      The parameters passed to the API are:
      * function_name - Function name (must be a Lua string)
      * callback - Lua function object that will be called when the function is invoked using fcall/fcall_ro
      * description - Function description, optional (must be a Lua string).
      
      ### Example
      The following example creates a library called `lib` with 2 functions, `f1` and `f2`, returning 1 and 2 respectively:
      ```
      local function f1(keys, args)
          return 1
      end
      
      local function f2(keys, args)
          return 2
      end
      
      redis.register_function('f1', f1)
      redis.register_function('f2', f2)
      ```
      
      Notice: Unlike `eval`, functions inside a library get the KEYS and ARGV as arguments to the
      functions and not as global.
      
      ### Technical Details
      
      During the load run we only want the user to be able to call a whitelist of APIs. This way, in
      the future, if new APIs are added, they will not be available to the load run
      unless specifically added to this whitelist. We put the whitelist on the `library` object and
      make sure the `library` object is only available to the load run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set
      the `globals` of a function (and all the functions it creates). Before starting the load run we
      create a fresh Lua table (call it `g`) that only contains the `library` API (we make sure
      to set global protection on this table, just like the general global protection that already exists
      today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv)
      to set `g` as the global table of the load run. After the load run finishes we update `g`'s
      metatable and set its `__index` and `__newindex` functions to be `_G` (Lua default globals),
      and we also pop out the `library` object as we do not need it anymore.
      This way, any function that was created during the load run (and will be invoked using `fcall`) will
      see the default globals as it expects to see them and will not have the `library` API anymore.
      
      An important outcome of this new approach is that now we can achieve a distinct global table
      for each library (it is not yet like that, but it is very easy to achieve now). In the future we can
      decide to remove global protection, because globals of different libraries will not collide, or we
      can choose to give a different API to different libraries based on some configuration or input.
      
      Notice that this technique was meant to prevent errors and was not meant to prevent a malicious
      user from exploiting it. For example, the load run can still save the `library` object in some local
      variable and then use it in an `fcall` context. To prevent such malicious use, the C code also makes
      sure it is running in the right context and, if not, raises an error.
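
      A minimal sketch of sandboxing a loading run with `lua_setfenv` (Lua 5.1
      C API; the function name and the omitted population of the `library`
      table are assumptions, not the Redis code):
      ```C
      #include <lua.h>
      #include <lauxlib.h>

      static void run_loading_sandboxed(lua_State *L, const char *code) {
          if (luaL_loadstring(L, code) != 0) return; /* compile error left on stack */
          lua_newtable(L);                 /* fresh globals table 'g'              */
          lua_newtable(L);                 /* the 'library' API table              */
          /* ... populate it with redis.log / redis.register_function ...         */
          lua_setfield(L, -2, "library");  /* g.library = { ... }                  */
          lua_setfenv(L, -2);              /* pop 'g', make it the chunk's globals */
          lua_pcall(L, 0, 0, 0);           /* execute the loading run              */
      }
      ```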
  21. 02 Jan, 2022 1 commit
    • Generate RDB with Functions only via redis-cli --functions-rdb (#9968) · 1bf6d6f1
      yoav-steinberg authored
      
      
      This is needed in order to ease the deployment of functions for ephemeral cases, where user
      needs to spin up a server with functions pre-loaded.
      
      #### Details:
      
      * Added `--functions-rdb` option to _redis-cli_.
      * Functions only rdb via `REPLCONF rdb-filter-only functions`. This is a placeholder for a space
        separated inclusion filter for the RDB. In the future can be `REPLCONF rdb-filter-only
        "functions db:3 key-patten:user*"` and a complementing `rdb-filter-exclude` `REPLCONF`
        can also be added.
      * Handle "slave requirements" specification to RDB saving code so we can use the same RDB
        when different slaves express the same requirements (like functions-only) and not share the
        RDB when their requirements differ. This is currently just a flags `int`, but can be extended to
        a more complex structure with various filter fields.
      * Make sure to support filters only in diskless replication mode (so as not to override the
        persistence file); we do that by forcing diskless replication (even if disabled by config).
      
      other changes:
      * some refactoring in rdb.c (extract portion of a big function to a sub-function)
      * rdb_key_save_delay used in AOFRW too
      * sendChildInfo takes the number of updated keys (incremental, rather than absolute)
      Co-authored-by: Oran Agra <oran@redislabs.com>
  22. 26 Dec, 2021 1 commit
    • Add FUNCTION DUMP and RESTORE. (#9938) · 365cbf46
      Meir Shpilraien (Spielrein) authored
      Follow the conclusions to support Functions in redis cluster (#9899)
      
      Added 2 new FUNCTION sub-commands:
      1. `FUNCTION DUMP` - dump a binary payload representation of all the functions.
      2. `FUNCTION RESTORE <PAYLOAD> [FLUSH|APPEND|REPLACE]` - given a binary payload extracted
         using `FUNCTION DUMP`, restore all the functions in that payload. A restore policy can be given to
         control how to handle existing functions (default is APPEND):
         * FLUSH: delete all existing functions.
         * APPEND: appends the restored functions to the existing functions. On collision, abort.
         * REPLACE: appends the restored functions to the existing functions. On collision,
           replace the old function with the new function.
      
      Modify `redis-cli --cluster add-node` to use `FUNCTION DUMP` to get existing functions from
      one of the nodes in the cluster, and `FUNCTION RESTORE` to load the same set of functions
      to the new node. `redis-cli` will execute this step before sending the `CLUSTER MEET` command
      to the new node. If `FUNCTION DUMP` returns an error, assume the current Redis version does not
      support functions and skip `FUNCTION RESTORE`. If `FUNCTION RESTORE` fails, abort and do not send
      the `CLUSTER MEET` command. If the new node already contains functions (before the `FUNCTION RESTORE`
      is sent), abort and do not add the node to the cluster. Test was added to verify
      `redis-cli --cluster add-node` works as expected. 
  23. 02 Dec, 2021 1 commit
    • Redis Functions - Added redis function unit and Lua engine · cbd46317
      meir@redislabs.com authored
      Redis function unit is located inside functions.c
      and contains Redis Function implementation:
      1. FUNCTION commands:
        * FUNCTION CREATE
        * FCALL
        * FCALL_RO
        * FUNCTION DELETE
        * FUNCTION KILL
        * FUNCTION INFO
      2. Register engine
      
      In addition, this commit introduces the first engine
      that uses the Redis Functions capabilities, the
      Lua engine.
  24. 04 Nov, 2021 1 commit
    • Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323) · 91d0c758
      Eduardo Semprebon authored
      
      
      For diskless replication in swapdb mode, considering we already spend replica memory
      holding a backup of the current db to restore in case of failure, we can get the following benefits
      by instead swapping the database only when we have succeeded in transferring the db from the master:
      
      - Avoid `LOADING` response during failed and successful synchronization for cases where the
        replica is already up and running with data.
      - Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
        time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
      - This could be implemented also for disk replication with similar benefits if consumers are willing
        to spend the extra memory usage.
      
      General notes:
      - The concept of `backupDb` becomes `tempDb` for clarity.
      - Async loading mode will only kick in if the replica is syncing from a master that has the same
        repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
      - New property in INFO: `async_loading` to differentiate from the blocking loading
      - Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
        and the tempDb that is passed around.
      - Because this is affecting replicas only, we assume that if they are not read-only and accept write
        commands during replication, those writes are lost after SYNC the same way as before, but we're still
        denying CONFIG SET here anyway to avoid complications.
      
      Considerations for review:
      - We have many cases where server.loading flag is used and even though I tried my best, there may
        be cases where async_loading should be checked as well and cases where it shouldn't (would require
        very good understanding of whole code)
      - Several places that had different behavior depending on the loading flag were actually meant to just
        handle commands coming from the AOF client differently than ones coming from real clients; these were
        changed to check CLIENT_ID_AOF instead.
      
      **Additional for Release Notes**
      - Bugfix - server.dirty was not incremented for any kind of diskless replication; as an effect it wouldn't
        contribute to triggering the next database SAVE.
      - New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
      - Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
        Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
        ABORTED and COMPLETED.
      - New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
        to allow modules to declare they support the diskless replication with async loading (when absent, we fall
        back to disk-based loading).
      Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  25. 03 Nov, 2021 1 commit
    • Add support for list type to store elements larger than 4GB (#9357) · f27083a4
      perryitay authored
      
      
      Redis lists are stored in a quicklist, which is currently a linked list of ziplists.
      Ziplists are limited to storing elements no larger than 4GB, so when bigger
      items are added they get truncated.
      This PR changes quicklists so that they're capable of storing large items
      in quicklist nodes that are plain string buffers rather than ziplists.
      
      As part of the PR there were a few other changes in Redis:
      1. New DEBUG sub-commands:
         - QUICKLIST-PACKED-THRESHOLD - set the threshold for a node to be stored as a
           plain buffer instead of a ziplist (default 1GB)
         - QUICKLIST <key> - Shows low level info about the quicklist encoding of <key>
      2. rdb format change:
         - A new type was added - RDB_TYPE_LIST_QUICKLIST_2 . 
         - container type (packed / plain) was added to the beginning of the rdb object
           (before the actual node list).
      3. testing:
         - Tests that require over 100MB will be skipped by default. A new flag was
           added to 'runtest' to run the large memory tests (not used by default)
      Co-authored-by: sundb <sundbcn@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  26. 03 Oct, 2021 1 commit
    • Cleanup typos, incorrect comments, and fixed small memory leak in redis-cli (#9153) · dd3ac97f
      Binbin authored
      1. Remove forward declarations from header files to functions that do not exist:
      hmsetCommand and rdbSaveTime.
      2. Minor phrasing fixes in #9519
      3. Add missing sdsfree(title) and fix typo in redis-benchmark.
      4. Modify some error comments in some zset commands.
      5. Fix copy-paste bug comment in syncWithMaster about `ip-address`.
  27. 13 Sep, 2021 1 commit
    • PSYNC2: make partial sync possible after master reboot (#8015) · 794442b1
      zhaozhao.zz authored
      The main idea is to allow a master to load replication info from the RDB file when rebooting. If the master can load replication info, replicas may have the chance to psync with the master, which can save much traffic.
      
      The key point is that we need to guarantee safety and consistency, so there
      are two differences between master and replica:
      
      1. master would load the replication info as secondary ID and
         offset, in case other masters have the same replid.
      2. when master loading RDB, it would propagate expired keys as DEL
         command to replication backlog, then replica can receive these
         commands to delete stale keys.
         p.s. the keys expired during RDB loading are useful info for users, so
         we show them as `rdb_last_load_keys_expired` and `rdb_last_load_keys_loaded` in info persistence.
      
      Moreover, after loading the replication info, the master should update
      `no_replica_time` in case loading the RDB took too long.
  28. 09 Sep, 2021 1 commit
    • Replace all usage of ziplist with listpack for t_zset (#9366) · 3ca6972e
      sundb authored
      Part two of implementing #8702 (zset), after #8887.
      
      ## Description of the feature
      Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.
      
      ## Rdb format changes
      New `RDB_TYPE_ZSET_LISTPACK` rdb type.
      
      ## Rdb loading improvements:
      1) Pre-expansion of dict for validation of duplicate data for listpack and ziplist.
      2) Simplifying the release of empty key objects when RDB loading.
      3) Unify ziplist and listpack data verify methods for zset and hash, and move code to rdb.c.
      
      ## Interface changes
      1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
      2) OBJECT ENCODING will return listpack instead of ziplist.
      
      ## Listpack improvements:
      1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from listpack.
      2) Improve the performance of `lpCompare`, converting from string to integer is faster than converting from integer to string.
      3) Replace `snprintf` with `ll2string` to improve performance in converting numbers to strings in `lpGet()`.
      
      ## Zset improvements:
      1) Improve the performance of `zzlFind` method, use `lpFind` instead of `lpCompare` in a loop.
      2) Use `lpDeleteRangeWithEntry` instead of `lpDelete` twice to delete an element of a zset.
      
      ## Tests
      1) Add some unittests for `lpDeleteRange` and `lpDeleteRangeWithEntry` function.
      2) Add zset RDB loading test.
      3) Add benchmark tests for `lpCompare` and `ziplistCompare`.
      4) Add empty listpack zset corrupt dump test.
  29. 10 Aug, 2021 1 commit
    • Replace all usage of ziplist with listpack for t_hash (#8887) · 02fd76b9
      sundb authored
      
      
      Part one of implementing #8702 (taking hashes first before other types)
      
      ## Description of the feature
      1. Change ziplist encoded hash objects to listpack encoding.
      2. Convert existing ziplists at RDB loading time, an O(n) operation.
      
      ## Rdb format changes
      1. Add RDB_TYPE_HASH_LISTPACK rdb type.
      2. Bump RDB_VERSION to 10
      
      ## Interface changes
      1. New `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`)
      2. OBJECT ENCODING will return `listpack` instead of `ziplist`
      
      ## Listpack improvements:
      1. Support direct insertion and replacement of integer elements (rather than converting back and forth from string)
      2. Add more listpack capabilities to match the ziplist ones (like `lpFind`, `lpRandomPairs` and such)
      3. Optimize element length fetching, avoiding multiple calculations
      4. Use inline to avoid function call overhead.
      
      ## Tests
      1. Add a new test to the RDB load time conversion
      2. Adding the listpack unit tests. (based on the one in ziplist.c)
      3. Add a few "corrupt payload: fuzzer findings" tests, and slightly modify existing ones.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  30. 05 Aug, 2021 1 commit
  31. 16 Jun, 2021 1 commit
  32. 08 Feb, 2021 1 commit
  33. 22 Sep, 2020 1 commit
  34. 17 Sep, 2020 1 commit
    • Remove tmp rdb file in background thread (#7762) · b002d2b4
      Wang Yuan authored
      We're already using bg_unlink in several places to delete the rdb file in the background,
      and avoid paying the cost of the deletion from our main thread.
      This commit uses bg_unlink to remove the temporary rdb file in the background too.
      
      However, in case we delete that rdb file just before exiting, we don't actually wait for the
      background thread or the main thread to delete it, and just let the OS clean up after us.
      i.e. we open the file, unlink it and exit with the fd still open.
      
      Furthermore, rdbRemoveTempFile can be called from a thread and was using snprintf, which is
      not async-signal-safe; we now use ll2string instead.
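
      A minimal sketch of an async-signal-safe integer-to-string conversion in
      the spirit of ll2string (illustrative, not the Redis implementation): no
      locale, no heap allocation, no stdio, so it is safe to call from a signal
      handler or a forked child.
      ```C
      #include <stddef.h>

      /* Writes 'value' as decimal text into dst (dstlen includes the NUL).
       * Returns the number of characters written, or 0 if dst is too small. */
      size_t ll2string_sketch(char *dst, size_t dstlen, long long value) {
          char buf[32];
          size_t len = 0;
          unsigned long long v = value < 0 ?
              (unsigned long long)(-(value + 1)) + 1 : (unsigned long long)value;
          do {
              buf[len++] = '0' + (char)(v % 10);   /* digits, least significant first */
              v /= 10;
          } while (v);
          if (value < 0) buf[len++] = '-';
          if (len >= dstlen) return 0;
          for (size_t i = 0; i < len; i++) dst[i] = buf[len - 1 - i];
          dst[len] = '\0';
          return len;
      }
      ```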
  35. 09 Apr, 2020 3 commits
  36. 30 Jan, 2020 1 commit
  37. 29 Oct, 2019 1 commit
    • Modules hooks: complete missing hooks for the initial set of hooks · 51c3ff8d
      Oran Agra authored
      * replication hooks: role change, master link status, replica online/offline
      * persistence hooks: saving, loading, loading progress
      * misc hooks: cron loop, shutdown, module loaded/unloaded
      * change the way hooks test work, and add tests for all of the above
      
      startLoading() now gets a flag indicating what is being loaded.
      stopLoading() now gets an indication of success or failure.
      Adding startSaving() and stopSaving() with similar args and role.
  38. 22 Jul, 2019 1 commit