1. 21 Nov, 2024 3 commits
    • Fix memory leak of jemalloc tcache on function flush command (#13661) · 9ebf80a2
      Ozan Tezcan authored
      Starting from https://github.com/redis/redis/pull/13133, we allocate a
      jemalloc thread cache and use it for the Lua VM.
      In certain cases, like the `script flush` or `function flush` command, we
      free the existing thread cache and create a new one.
      
      However, for `function flush`, we were not actually destroying the
      existing thread cache. Each call created a new thread cache in
      jemalloc, leaking the previous instances. jemalloc allows a maximum of
      4096 thread cache instances; if we reach this limit, Redis prints a
      "Failed creating the lua jemalloc tcache" log and aborts.
      
      There are other cases that can cause this memory leak, including
      replication scenarios when emptyData() is called.
      
      The implication is that Redis `used_memory` looks low, while
      `allocator_allocated` and RSS remain high.
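      
      For illustration, a minimal sketch of this kind of fix, assuming jemalloc's
      standard `mallctl` tcache controls (the real change lives in the
      script/function flush path, and Redis calls the `je_`-prefixed symbols):
      ```
      /* Sketch only, not the actual Redis code: destroy the previous explicit
       * tcache before creating a new one, otherwise every flush leaks one of
       * jemalloc's 4096 tcache slots. */
      #include <jemalloc/jemalloc.h>
      
      static unsigned lua_tcache;
      static int lua_tcache_valid = 0;
      
      static int recreateLuaTcache(void) {
          size_t sz = sizeof(lua_tcache);
          if (lua_tcache_valid) {
              /* "tcache.destroy" flushes the cache and releases its identifier. */
              if (mallctl("tcache.destroy", NULL, NULL, &lua_tcache, sz) != 0)
                  return -1;
              lua_tcache_valid = 0;
          }
          if (mallctl("tcache.create", &lua_tcache, &sz, NULL, 0) != 0)
              return -1;
          lua_tcache_valid = 1;
          return 0;
      }
      ```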
      Co-authored-by: debing.sun <debing.sun@redis.com>
      9ebf80a2
    • modules API: Support register unprefixed config parameters (#13656) · 15563450
      Moti Cohen authored
      PR #10285 introduced support for modules to register four types of
      configurations (Bool, Numeric, String, and Enum), accessible through the
      Redis config file and the CONFIG command.
      
      With this PR, it is possible to register configuration parameters
      without automatically prefixing the parameter names. This provides
      greater flexibility in configuration naming, enabling, for instance,
      either `bf-initial-size` or `initial-size` to be defined in the module
      without the automatic `<MODULE-NAME>.` prefix. In addition, it is also
      possible to create a single additional alias via the same API. This
      brings us another step closer to integrating modules into the Redis
      core.
      
      **Example:** Register a configuration parameter `bf-initial-size` with
      an alias `initial-size`, without the automatic module name prefix, by
      setting the new `REDISMODULE_CONFIG_UNPREFIXED` flag:
      ```
      RedisModule_RegisterBoolConfig(ctx, "bf-initial-size|initial-size", default_val, optflags | REDISMODULE_CONFIG_UNPREFIXED, getfn, setfn, applyfn, privdata);
      ```
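      
      For context, a minimal `OnLoad` sketch of how such a registration might
      look, following the existing module-config callback shapes (the numeric
      variant, value range, and getter/setter below are illustrative
      assumptions, not part of this PR):
      ```
      /* Sketch only: registers "bf-initial-size" with alias "initial-size",
       * without the automatic "<MODULE-NAME>." prefix. */
      #include "redismodule.h"
      
      static long long bf_initial_size = 8;
      
      static long long getInitialSize(const char *name, void *privdata) {
          REDISMODULE_NOT_USED(name); REDISMODULE_NOT_USED(privdata);
          return bf_initial_size;
      }
      
      static int setInitialSize(const char *name, long long val, void *privdata,
                                RedisModuleString **err) {
          REDISMODULE_NOT_USED(name); REDISMODULE_NOT_USED(privdata); REDISMODULE_NOT_USED(err);
          bf_initial_size = val;
          return REDISMODULE_OK;
      }
      
      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "bf", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          if (RedisModule_RegisterNumericConfig(ctx, "bf-initial-size|initial-size", 8,
                  REDISMODULE_CONFIG_DEFAULT | REDISMODULE_CONFIG_UNPREFIXED, 1, 1 << 20,
                  getInitialSize, setInitialSize, NULL, NULL) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          return RedisModule_LoadConfigs(ctx);
      }
      ```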
      # API changes
      Related functions that now support the unprefixed configuration flag
      (`REDISMODULE_CONFIG_UNPREFIXED`) along with an optional alias:
      ```
      RedisModule_RegisterBoolConfig
      RedisModule_RegisterEnumConfig
      RedisModule_RegisterNumericConfig
      RedisModule_RegisterStringConfig
      ```
      
      # Implementation Details:
      `config.c`: While loading the server configuration, the function
      `loadServerConfigFromString()` collects all unknown configurations into
      the `module_configs_queue` dictionary. These may include valid module
      configurations or invalid ones. They will be validated later by
      `loadModuleConfigs()` against the configurations declared by the loaded
      module(s).
      `module.c`: The `ModuleConfig` structure has been modified to store
      (1) the full configuration name, (2) the alias, and (3) the unprefixed
      flag status, ensuring that configurations retain their original
      registration format when triggered in notifications.
      
      Added error printout:
      This change introduces an error printout for unresolved configurations,
      detailing each unresolved parameter detected during startup. The last
      line in the output existed prior to this change and has been retained
      for systems that rely on it:
      ```
      595011:M 18 Nov 2024 08:26:23.616 # Unresolved Configuration(s) Detected:
      595011:M 18 Nov 2024 08:26:23.616 #  >>> 'bf-initiel-size 8'
      595011:M 18 Nov 2024 08:26:23.616 #  >>> 'search-sizex 32'
      595011:M 18 Nov 2024 08:26:23.616 # Module Configuration detected without loadmodule directive or no ApplyConfig call: aborting
      ```
      
      # Backward Compatibility:
      Existing modules will function without modification, as the new
      functionality only applies if REDISMODULE_CONFIG_UNPREFIXED is
      explicitly set.
      
      # Module vs. Core API Conflict Behavior
      The new API allows a loading module to duplicate an existing
      configuration name or configuration alias, just as the Redis core
      configuration allows (i.e., the user sets two configs with different
      values, but these two configs are actually the same one). Unlike the
      Redis core, given a name and its alias, it does not allow both
      configurations to be present at load time. Supporting that would require
      modifying the `module_configs_queue` data structure to reflect the order
      of loading and, later on during `loadModuleConfigs()`, resolving pairs of
      names and aliases to determine which one is applied last. "Relaxing"
      this limitation can be deferred to a future update if necessary, but for
      now, we return an error in this case.
      15563450
    • Fix module loadex command crash due to invalid config (#13653) · 5b84dc96
      nafraf authored
      Fix for https://github.com/redis/redis/issues/13650:
      providing an invalid config to a module with a datatype crashes when
      Redis tries to unload the module due to the invalid config.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      5b84dc96
  2. 11 Nov, 2024 1 commit
    • Print command tokens on a crash when hide-user-data-from-log is enabled (#13639) · 54038811
      Ozan Tezcan authored
      If the `hide-user-data-from-log` config is enabled, we don't print client
      argv in the crash log to avoid leaking user data.
      However, debugging a crash becomes harder as we don't see the command
      arguments that caused it.
      
      With this PR, we'll be printing command tokens to the log. As we have
      command tokens defined in the JSON schema for each command, we can use
      this data to find the tokens in the client argv.
      
      e.g. `SET key value GET EX 10` ---> we'll print `SET * * GET EX *` in
      the log.
      
      Modules should introduce their command structure via
      `RM_SetCommandInfo()`.
      Then, on a crash, we'll be able to know module command tokens as well.
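      
      As a rough sketch of the module side, assuming the command-info
      structures of the Modules API (the command name and argument layout here
      are made up), declaring a token such as `EX` lets the crash log keep it
      while masking the user-supplied values:
      ```
      /* Sketch only: declare args/tokens for a hypothetical "mymod.set" command. */
      static int declareCommandTokens(RedisModuleCtx *ctx) {
          RedisModuleCommand *cmd = RedisModule_GetCommand(ctx, "mymod.set");
          if (cmd == NULL) return REDISMODULE_ERR;
          RedisModuleCommandInfo info = {
              .version = REDISMODULE_COMMAND_INFO_VERSION,
              .arity = -3,
              .args = (RedisModuleCommandArg[]){
                  {.name = "key", .type = REDISMODULE_ARG_TYPE_STRING},
                  {.name = "value", .type = REDISMODULE_ARG_TYPE_STRING},
                  /* "EX" is a token, so a sanitized crash log can show it instead of "*". */
                  {.name = "seconds", .type = REDISMODULE_ARG_TYPE_INTEGER,
                   .token = "EX", .flags = REDISMODULE_CMD_ARG_OPTIONAL},
                  {0}
              },
          };
          return RedisModule_SetCommandInfo(cmd, &info);
      }
      ```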
      54038811
  3. 29 Oct, 2024 1 commit
    • Add KEYSIZES section to INFO (#13592) · 2ec78d26
      Moti Cohen authored
      This PR adds a new section to the `INFO` command output, called
      `keysizes`. This section provides detailed statistics on the
      distribution of key sizes for each data type (strings, lists, sets,
      hashes and zsets) within the dataset. The distribution is tracked using
      a base-2 logarithmic histogram.
      
      # Motivation
      Currently, Redis lacks a built-in feature to track key sizes and item
      sizes per data type at a granular level. Understanding the distribution
      of key sizes is critical for monitoring memory usage and optimizing
      performance, particularly in large datasets. This enhancement will allow
      users to inspect the size distribution of keys directly from the `INFO`
      command, assisting with performance analysis and capacity planning.
      
      # Changes
      New Section in `INFO` Command: A new section called `keysizes` has been
      added to the `INFO` command output. This section reports a per-database,
      per-type histogram of key sizes. It provides insights into how many keys
      fall into specific size ranges (represented in powers of 2).
      
      **Example output:**
      ```
      127.0.0.1:6379> INFO keysizes
      # Keysizes
      db0_distrib_strings_sizes:1=19,2=655,512=100899,1K=31,2K=29,4K=23,8K=16,16K=3,32K=2
      db0_distrib_lists_items:1=5784492,32=3558,64=1047,128=676,256=533,512=218,4K=1,8K=42
      db0_distrib_sets_items:1=735564=50612,8=21462,64=1365,128=974,2K=292,4K=154,8K=89,
      db0_distrib_hashes_items:2=1,4=544,32=141169,64=207329,128=4349,256=136226,1K=1
      ```
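      
      For illustration, the base-2 bucketing behind such a histogram can be
      sketched as follows (names are made up; this is not the actual server
      code):
      ```
      /* Sketch: bucket i counts sizes v with 2^i <= v < 2^(i+1), matching the
       * 1, 2, 4, ..., 512, 1K, 2K, ... labels shown above. */
      #include <stdint.h>
      
      #define KEYSIZES_BUCKETS 64
      
      static void keysizesAddSample(uint64_t hist[KEYSIZES_BUCKETS], uint64_t size) {
          if (size == 0) return;
          int bucket = 63 - __builtin_clzll(size);  /* floor(log2(size)) */
          hist[bucket]++;
      }
      ```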
      ## Future Use Cases:
      The key size distribution is collected per slot as well, laying the
      groundwork for future enhancements related to Redis Cluster.
      2ec78d26
  4. 22 Oct, 2024 1 commit
  5. 15 Oct, 2024 1 commit
  6. 12 Oct, 2024 1 commit
  7. 08 Oct, 2024 2 commits
  8. 23 Sep, 2024 1 commit
    • Fix race in HFE tests (#13563) · 5f28bd96
      Moti Cohen authored
      Test 1 - give more time for expiration
      Test 2 - Evaluate expiration time boundaries [+1,+2] before setting expiration [+1]
      Test 3 - Avoid race on test HFEs propagated to replica
      5f28bd96
  9. 19 Sep, 2024 1 commit
    • Extend modules API to read also expired keys and subkeys (#13526) · 3a3cacfe
      Moti Cohen authored
      The PR extends `RedisModule_OpenKey`'s flags to include
      `REDISMODULE_OPEN_KEY_ACCESS_EXPIRED`, which allows access to expired
      keys.
      
      It also allows access to expired subkeys. This is currently relevant
      only for hash fields and affects `RM_HashGet` and `RM_Scan`.
      3a3cacfe
  10. 12 Sep, 2024 2 commits
  11. 08 Sep, 2024 1 commit
    • Fix flaky replication tests (#13518) · ac03e372
      Ozan Tezcan authored
      #13495 introduced a change to reply -LOADING while flushing the existing
      db on a replica. Some of our tests are sensitive to this change and do
      not expect a -LOADING reply.
      
      Fixing a couple of tests that fail from time to time.
      ac03e372
  12. 05 Sep, 2024 1 commit
  13. 04 Sep, 2024 2 commits
    • Introduce reusable query buffer for client reads (#13488) · ea3e8b79
      debing.sun authored
      This PR is based on the commits from
      https://github.com/valkey-io/valkey/pull/258,
      https://github.com/valkey-io/valkey/pull/593, and
      https://github.com/valkey-io/valkey/pull/639.
      
      This PR optimizes client query buffer handling in Redis by introducing
      a reusable query buffer that is used by default for client reads. This
      reduces memory usage by ~20KB per client by avoiding allocations for
      most clients using short (<16KB) complete commands. For larger or
      partial commands, the client still gets its own private buffer.
      
      The primary changes are:
      
      * Adding a reusable query buffer `thread_shared_qb` that clients use by
      default.
      * Modifying client querybuf initialization and reset logic.
      * Freeing idle client query buffers when empty to allow reuse of the
      reusable query buffer.
      * Master client query buffers are kept private, as their contents need
      to be preserved for the replication stream.
      * When nested commands are executed, only the first user gets the
      reusable buffer; subsequent users still use a private buffer.
      
      In addition to the memory savings, this change shows a 3% improvement in
      latency and throughput when running with 1000 active clients.
      
      The memory reduction may also help reduce the need to evict clients when
      reaching max memory limit, as the query buffer is the main memory
      consumer per client.
      
      This PR differs from https://github.com/valkey-io/valkey/pull/258
      in two ways (see the sketch below):
      
      1. Between a client acquiring the reusable buffer and returning it,
      regardless of whether the query buffer has changed (expanded), we do not
      update the reusable query buffer in the middle; we hand back the reusable
      query buffer (expanded or with data remaining) or reset it at the end.
      2. A new thread variable `thread_shared_qb_used` prevents multiple
      clients from acquiring the reusable query buffer at the same time.
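      
      A rough sketch of the acquire/release flow described above (names and
      types are illustrative assumptions, not the actual networking code):
      ```
      /* Sketch only: one reusable buffer per thread, lent to at most one client. */
      #include <stdlib.h>
      
      typedef struct { char *buf; size_t len, cap; } QueryBuf;
      
      static __thread QueryBuf *thread_shared_qb = NULL;
      static __thread int thread_shared_qb_used = 0;  /* at most one borrower */
      
      QueryBuf *acquireQueryBuffer(void) {
          if (!thread_shared_qb_used) {
              if (thread_shared_qb == NULL)
                  thread_shared_qb = calloc(1, sizeof(QueryBuf));
              thread_shared_qb_used = 1;
              return thread_shared_qb;            /* default path for most clients */
          }
          /* Nested command execution: only the first user gets the shared buffer. */
          return calloc(1, sizeof(QueryBuf));
      }
      
      void releaseQueryBuffer(QueryBuf *qb) {
          if (qb == thread_shared_qb) {
              thread_shared_qb_used = 0;
              if (qb->len > 0)
                  thread_shared_qb = NULL;        /* partial command: client keeps it */
              /* otherwise the (possibly expanded) buffer stays for the next client */
          } else {
              free(qb->buf);
              free(qb);
          }
      }
      ```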
      
      ---------
      Signed-off-by: Uri Yagelnik <uriy@amazon.com>
      Signed-off-by: Madelyn Olson <matolson@amazon.com>
      Co-authored-by: Uri Yagelnik <uriy@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      Co-authored-by: oranagra <oran@redislabs.com>
      ea3e8b79
    • Fix RM_RdbLoad() to enable AOF after loading is completed (#13510) · ea05c6ac
      Ozan Tezcan authored
      RM_RdbLoad() disables AOF temporarily while loading the RDB.
      Later, it does not re-enable it, because it checks the AOF state
      (disabled by then) rather than the AOF config parameter.
      
      Added a change to restart AOF according to the config parameter.
      ea05c6ac
  14. 03 Sep, 2024 1 commit
    • Added new defrag API to allocate and free raw memory. (#13509) · d3d94ccf
      Meir Shpilraien (Spielrein) authored
      All the defrag allocation APIs expect to get a value and replace it,
      leaving the old value untouchable. In some cases a value might be shared
      between multiple keys; in such cases we cannot simply replace it when the
      defrag callback is called.
      
      To support such use cases, the PR adds two new APIs to the defrag API:
      
      1. `RM_DefragAllocRaw` - allocate memory based on a given size.
      2. `RM_DefragFreeRaw` - free the given pointer.
      
      These APIs avoid using the tcache, so they operate just like
      `RM_DefragAlloc`, but they allow the user to split the allocation and the
      free into two stages and to control when each happens.
      
      In addition, the PR adds a new API that allows the module to receive
      notifications when defrag starts and ends: `RM_RegisterDefragCallbacks`.
      These callbacks are the same as `RM_RegisterDefragFunc`, but they are
      guaranteed to be called at the start and at the end of the defrag
      process.
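      
      A minimal sketch of how a module might use the new calls for a value
      shared by several keys (signatures are assumed from the description
      above; `SharedValue` and `repointAllOwners` are hypothetical module-side
      helpers):
      ```
      /* Sketch only: allocate the new block, repoint every owner, and only then
       * free the old block, with the module controlling when each step happens. */
      #include <string.h>
      #include "redismodule.h"
      
      typedef struct { size_t len; char data[]; } SharedValue;      /* hypothetical */
      
      static void repointAllOwners(SharedValue *old, SharedValue *fresh) {
          (void)old; (void)fresh;  /* module-specific bookkeeping goes here */
      }
      
      static SharedValue *defragSharedValue(RedisModuleDefragCtx *ctx, SharedValue *old) {
          SharedValue *fresh = RedisModule_DefragAllocRaw(ctx, sizeof(*old) + old->len);
          memcpy(fresh, old, sizeof(*old) + old->len);
          repointAllOwners(old, fresh);
          RedisModule_DefragFreeRaw(ctx, old);   /* free the old block only at the end */
          return fresh;
      }
      
      static void onDefragStart(RedisModuleDefragCtx *ctx) { REDISMODULE_NOT_USED(ctx); }
      static void onDefragEnd(RedisModuleDefragCtx *ctx)   { REDISMODULE_NOT_USED(ctx); }
      /* In OnLoad, to be told when a defrag cycle starts and ends:
       *   RedisModule_RegisterDefragCallbacks(ctx, onDefragStart, onDefragEnd); */
      ```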
      d3d94ccf
  15. 20 Aug, 2024 2 commits
    • Improve GETRANGE command behavior (#12272) · 6ceadfb5
      Zihao Lin authored
      
      
      Fixed the issue where the GETRANGE and SUBSTR commands returned
      unexpected results caused by `start` and `end` falling outside the
      defined range of the string.
      
      ---
      ## Breaking change
      Before this PR, when a negative `end` was out of range (i.e., end <
      -strlen), we would clamp it to 0 to get the substring, which also
      resulted in the first character still being returned for this kind of
      out-of-range index.
      After this PR, we ensure that `GETRANGE` returns an empty bulk when the
      negative end index is out of range.
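      
      For illustration, with a hypothetical 5-character value (before this PR
      the last call would have returned "H", since the end index was clamped
      to 0):
      ```
      127.0.0.1:6379> SET mykey "Hello"
      OK
      127.0.0.1:6379> GETRANGE mykey 0 -10
      ""
      ```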
      
      Closes #11738
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      6ceadfb5
    • improve performance for scan command when matching data type (#12395) · 7f0a7f0a
      judeng authored
      Move the TYPE filtering to the scan callback so that the `lookupKey`
      operation is avoided. This is the follow-up to #12209. In this thread we
      introduced two breaking changes:
      1. We will not attempt to lazily expire (delete) a key that was filtered
      out by not matching the TYPE (as we already do for the MATCH pattern).
      2. When the specified key TYPE filter is an unknown type, the server will
      reply with an error immediately instead of doing a full scan that comes
      back empty-handed.
      7f0a7f0a
  16. 16 Aug, 2024 1 commit
    • Fix incorrect lag due to trimming stream via XTRIM command (#13473) · 2b88db90
      debing.sun authored
      ## Description
      When using the `XTRIM` command to trim a stream, it does not update the
      maximal tombstone (`max_deleted_entry_id`). This leads to an issue where
      the lag calculation incorrectly assumes that there are no tombstones
      after the consumer group's last_id, resulting in an inaccurate lag.
      
      The reason XTRIM doesn't need to update the maximal tombstone is that it
      always trims from the beginning of the stream. This means that it
      consistently changes the position of the first entry, leading to the
      following scenarios:
      
      1) First entry trimmed after maximal tombstone:
      If the first entry is trimmed to a position after the maximal tombstone,
      all tombstones will be before the first entry, so they won't affect the
      consumer group's lag.
      
      2) First entry trimmed before maximal tombstone:
      If the first entry is trimmed to a position before the maximal
      tombstone, the maximal tombstone will not be updated.
      
      ## Solution
      Therefore, this PR optimizes the lag calculation by ensuring that when
      both the consumer group's last_id and the maximal tombstone are behind
      the first entry, the consumer group's lag is always equal to the number
      of remaining elements in the stream.
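      
      Roughly, the shortcut can be sketched as follows (illustrative only, not
      the actual t_stream.c code):
      ```
      /* Sketch: stream IDs compare as (ms, seq) pairs. When both the group's
       * last-delivered ID and the maximal tombstone precede the first entry,
       * no tombstone can affect the group, so lag == remaining entries. */
      #include <stdint.h>
      
      typedef struct { uint64_t ms, seq; } StreamID;
      
      static int idBefore(StreamID a, StreamID b) {
          return a.ms < b.ms || (a.ms == b.ms && a.seq < b.seq);
      }
      
      static int lagIsExact(StreamID group_last_id, StreamID max_deleted_entry_id,
                            StreamID first_entry_id, uint64_t stream_length,
                            uint64_t *lag) {
          if (idBefore(group_last_id, first_entry_id) &&
              idBefore(max_deleted_entry_id, first_entry_id)) {
              *lag = stream_length;   /* number of remaining entries in the stream */
              return 1;
          }
          return 0;                   /* fall back to the regular lag computation */
      }
      ```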
      
      Supplement to PR https://github.com/redis/redis/pull/13338
      2b88db90
  17. 14 Aug, 2024 1 commit
  18. 11 Aug, 2024 1 commit
    • On HDEL last field with expiry, update global HFE DS (#13470) · 806459f4
      Moti Cohen authored
      Hash field expiration is optimized to avoid frequently updating the
      global HFE DS for each field deletion. Eventually active expiration will
      run and update or remove the hash from the global HFE DS gracefully.
      Nevertheless, the "subexpiry" statistic might report a wrong number of
      hashes with HFE to the user if HDEL deletes the last field with an
      expiration in a hash (while there are more fields without expiration).
      
      Following this change, if HDEL deletes the last field with an expiration
      in the hash, we take care to remove the hash from the global HFE DS as
      well.
      806459f4
  19. 08 Aug, 2024 1 commit
  20. 31 Jul, 2024 1 commit
  21. 30 Jul, 2024 1 commit
  22. 22 Jul, 2024 1 commit
    • solve race conditions in tests (#13433) · 447ce11a
      Oran Agra authored
      [exception]: Executing test client: ERR FAILOVER target replica is not
      online.. ERR FAILOVER target replica is not online.
          while executing
      "$node_0 failover to $node_1_host $node_1_port"
          ("uplevel" body line 16)
          invoked from within
      "uplevel 1 $code"
          (procedure "test" line 58)
          invoked from within
      "test {failover command to specific replica works} {
      
      [err]: client evicted due to percentage of maxmemory in
      tests/unit/client-eviction.tcl
      Expected 33622 >= 220200 && 33622 < 440401 (context: type eval line 17
      cmd {assert {$tot_mem >= $n && $tot_mem < $maxmemory_clients_actual}}
      proc ::test)
      447ce11a
  23. 17 Jul, 2024 1 commit
    • Fix external test hang in redis-cli test when run in a certain order (#13423) · a3319785
      Oran Agra authored
      When the tests are run against an external server in this order:
      `--single unit/introspection --single unit/moduleapi/blockonbackground
      --single integration/redis-cli`
      the test would hang when the "ASK redirect test" test attempts to create
      a listening socket (it fails, and then redis-cli itself hangs waiting
      for a non-responsive socket created by the introspection test).
      
      The reasons are:
      1. The blockonbackground test includes util.tcl and resets the
      `::last_port_attempted` variable.
      2. The test in introspection didn't close the listening server, so it's
      still alive.
      3. find_available_port doesn't properly detect the busy port, and it
      thinks that the port is free even though it's busy.
      
      Fixing all 3 of these problems, even though fixing just one would be
      enough to let the test pass.
      a3319785
  24. 16 Jul, 2024 1 commit
    • Trigger Lua GC after script loading (#13407) · 88af96c7
      debing.sun authored
      Nowadays we do not trigger Lua GC after loading a Lua script. This means
      that when a large number of scripts are loaded, such as when functions
      are propagated from the master to the replica, if the Lua scripts are
      never touched on the replica, the garbage might remain there
      indefinitely.
      
      Before this PR, we shared a gc_count between scripts and functions.
      This meant that, under certain circumstances, the GC trigger for scripts
      and functions was not fair.
      For example, loading a large number of scripts followed by a small
      number of functions could result in the functions triggering GC.
      In this PR, we assign a unique `gc_count` to each of them, so the GC
      triggers between them will no longer affect each other.
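      
      Conceptually, the per-VM trigger looks something like this (illustrative
      only; the counter placement and threshold are assumptions):
      ```
      /* Sketch: each engine (EVAL scripts vs. FUNCTIONs) keeps its own gc_count,
       * so loads on one VM no longer trigger collection cycles on the other. */
      #include <lua.h>
      
      #define LOADS_PER_GC_STEP 50             /* assumed threshold */
      
      typedef struct { lua_State *lua; long long gc_count; } LuaEngine;
      
      static void engineAfterLoad(LuaEngine *e) {
          if (++e->gc_count >= LOADS_PER_GC_STEP) {
              lua_gc(e->lua, LUA_GCSTEP, 0);   /* run an incremental GC step */
              e->gc_count = 0;
          }
      }
      ```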
      
      On the other hand, this PR brings a regression for the script loading
      commands (`FUNCTION LOAD` and `SCRIPT LOAD`), but they are not a hot
      path, so we can ignore it, and it will be replaced by
      https://github.com/redis/redis/pull/13375 in the future.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      88af96c7
  25. 12 Jul, 2024 1 commit
    • Avoid starting defrag after config resetstat for defrag test (#13399) · d39548c8
      debing.sun authored
      
      
      If `config resetstat` is executed and a defrag is started after it, the
      `total_active_defrag_time` will not be 0.
      When we start the defrag again, we will skip the following steps:
      1. Waiting for the defrag to start (as `total_active_defrag_time` equals 0).
      2. Waiting for the test to complete (as `active_defrag_running` equals 0).
      This results in the test failing.
      
      ---------
      Co-authored-by: oranagra <oran@redislabs.com>
      d39548c8
  26. 11 Jul, 2024 2 commits
  27. 10 Jul, 2024 1 commit
    • Rebuild function engines for function flush command (#13383) · ffff7fea
      debing.sun authored
      
      
      ### Issue
      The current implementation of the `FUNCTION FLUSH` command uses
      `lua_unref()` to unreference script closures in the Lua VM. However,
      invoking `lua_unref()` during lazy free (the `ASYNC` argument) is risky
      since it is not thread-safe.
      
      Another issue is that using `lua_unref()` to unreference references does
      not trigger GC. This can leave a significant amount of garbage in the Lua
      VM, which may never be cleaned up if it is not properly collected.
      
      ### Solution
      The proposed solution is to completely rebuild the engines, resulting in
      a brand new Lua VM.
      
      ---------
      Co-authored-by: meir <meir@redis.com>
      ffff7fea
  28. 02 Jul, 2024 1 commit
  29. 01 Jul, 2024 2 commits
    • Solve a race between BGSAVE and FLUSHALL messing up the dirty counter (#13361) · 799c5e5f
      Oran Agra authored
      If we run FLUSHALL when the 'save' config is set, and there's a forked
      child doing BGSAVE, there's a chance the child has already finished and
      the parent process is unaware of it. In that case the child will not get
      the kill signal and will finish successfully, but the parent process
      thinks it killed it and will reset the dirty counter to 0; then the
      backgroundSaveDoneHandlerDisk method can set the dirty counter to a
      negative value.
      799c5e5f
    • Fix possible crash due to OOM panic on invalid command (#13380) · 69b7137d
      Oran Agra authored
      getKeysUsingKeySpecs had the range check AFTER the allocation of the
      keys buffer, which could lead to an OOM panic when invalid arguments are
      provided, leading to an overflow.
      The allocated memory is only used after the range check, so there's no
      risk of a buffer overrun.
      The OOM panic can happen on 32-bit builds, or on 64-bit builds running on
      systems with less than 4GB of RAM, and is reachable via COMMAND
      GETKEYSANDFLAGS and ACL key name validation.
      69b7137d
  30. 26 Jun, 2024 2 commits
    • HFE - count in command must match actual number of fields (#13369) · a9267137
      Moti Cohen authored
      There was a wrong preliminary assumption that we can optionally provide
      a vector of arguments longer than the count.
      This error-prone approach led to an actual error in that case.
      This PR enforces that the vector of arguments matches the count.
      
      Also fixed a flaky HRANDFIELD test.
      a9267137
    • Don't keep global replication buffer reference for replicas marked CLIENT_CLOSE_ASAP (#13363) · 52e12d8b
      debing.sun authored
      
      
      In certain situations, we might generate a large number of propagations
      (e.g., multi/exec, a Lua script, or a single command generating tons of
      propagations) within an event loop.
      During the process of propagating to a replica, if the replica is
      disconnected (marked as CLIENT_CLOSE_ASAP) due to exceeding the output
      buffer limit, we should remove its reference to the global replication
      buffer, so that the global replication buffer is not prevented from
      being properly trimmed because it is still referenced.
      
      ---------
      Co-authored-by: oranagra <oran@redislabs.com>
      52e12d8b
  31. 25 Jun, 2024 1 commit
    • Fix H(P)EXPIREAT command to propagate HDEL as well (#13364) · 5eac99c3
      Moti Cohen authored
      H(P)EXPIREAT command might delete fields in case the absolute time is in the 
      past. Those HDELs need to be propagated as well.
       
      In general, as we need to propagate H(P)EXPIRE(AT) command to the replica, each 
      field that is mentioned in the command should be categorized into one of the four
      options:
      1. Managed to update field’s expiration time - propagate it to replica as part 
         of the HPEXPIREAT command.
      2. Deleted the field because the time is in the past - propagate also HDEL command
         to delete the field and remove the field from the propagated HPEXPIREAT.
      3. Condition not met for the field - Remove the field from the propagated
         HPEXPIREAT command.
      4. Field does not exist - Remove the field from the propagated HPEXPIREAT command.
      
      If none of the provided fields matches option 1, then we also avoid
      propagating the HPEXPIREAT command to the replica.
      
      This approach is aligned with the EXPIRE command:
      if a given key has already expired, then DEL is propagated instead of the
      EXPIRE command. If the condition is not met, the command is rejected.
      Otherwise, the EXPIRE command is propagated for the given key.
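      
      As a rough illustration of these rules (hypothetical key and fields; the
      exact form of the replicated commands may differ, e.g. the HDELs may be
      merged):
      ```
      HPEXPIREAT myhash 123 FIELDS 3 f1 f2 f3
      (123 ms since the epoch is in the past; f1 and f2 exist, f3 does not)
      
      Propagated to replicas/AOF, roughly:
      HDEL myhash f1
      HDEL myhash f2
      (no HPEXPIREAT is propagated, since no field got an updated expiration time)
      ```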
      5eac99c3