1. 03 Jan, 2024 2 commits
    • Update CONTRIBUTING.md (#12907) · 51898383
      Lior Kogan authored
      - Referring to Redis Discord channel instead of the mailing list.
      - Referring to the licensing instead of repeating it.
    • Handle recursive serverAsserts and provide more information for recursive segfaults (#12857) · 068051e3
      Madelyn Olson authored
      This change tries to make two failure modes a bit easier to investigate:
      1. If a serverPanic or serverAssert occurs during the info (or module)
      printing, it will recursively panic, which is a lot of fun as it will
      just keep recursively printing. It will eventually stack overflow, but
      will generate a lot of text in the process.
      2. When a segfault happens during the segfault handler, no information
      is communicated other than it happened. This can be problematic because
      `info` may help diagnose the real issue, but without fixing the
      recursive crash it might be hard to get at that info.
  2. 02 Jan, 2024 1 commit
    • Manage number of new connections per cycle (#12178) · c3f8b542
      AshMosh authored
      There are situations (especially with TLS) in which the engine gets too occupied managing a large number of new connections. Existing connections may time out while the server is processing the initial TLS handshakes of new connections, which may cause new connections to be established, perpetuating the problem. To better manage the tradeoff between the new-connection rate and other workloads, this change adds a new config to manage the maximum number of new connections per event loop cycle, instead of using a predetermined number (currently 1000).
      
      This change introduces two new configurations, max-new-connections-per-cycle and max-new-tls-connections-per-cycle. The default is 10 new TCP connections per cycle and 1 new TLS connection per cycle.
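      As a rough illustration of the mechanism (a minimal sketch; the function and parameter names are invented, not the real Redis networking code):
      ```c
      #include <sys/socket.h>

      /* Cap how many pending connections one event-loop cycle will accept.
       * Names are illustrative; defaults would mirror the new configs
       * (10 for TCP, 1 for TLS). */
      void acceptPendingConnections(int listen_fd, int max_new_conns_per_cycle) {
          for (int accepted = 0; accepted < max_new_conns_per_cycle; accepted++) {
              int fd = accept(listen_fd, NULL, NULL);
              if (fd == -1) break; /* nothing left to accept this cycle */
              /* hand fd off to connection setup (and the TLS handshake, if any) */
          }
          /* Anything still pending stays in the kernel backlog and is picked
           * up on a later cycle, so existing clients keep getting served. */
      }
      ```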
      ---------
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
  3. 28 Dec, 2023 5 commits
  4. 27 Dec, 2023 3 commits
    • Replace slots_to_channels radix tree with slot specific dictionaries for shard channels. (#12804) · 85279595
      Chen Tianjie authored
      We have already replaced the `slots_to_keys` radix tree with a key->slot
      linked list (#9356), and then replaced the list with slot specific
      dictionaries for keys (#11695).
      
      Shard channels behave just like keys in many ways, and we also need a
      slots->channels mapping. Currently this is still done by using a radix
      tree. So we should split `server.pubsubshard_channels` into 16384 dicts
      and drop the radix tree, just like what we did to DBs.
      
      Some benefits (basically the benefits of what we've done to DBs):
      1. Optimize counting channels in a slot. This is currently used only in
      removing channels in a slot. But this is potentially more useful:
      sometimes we need to know how many channels there are in a specific slot
      when doing slot migration. Counting is now implemented by traversing the
      radix tree, and with this PR it will be as simple as calling `dictSize`,
      from O(n) to O(1).
      2. The radix tree in the cluster has been removed. The shard channel
      names no longer require additional storage, which can save memory.
      3. Potentially useful in slot migration, as shard channels are logically
      split by slots, thus making it easier to migrate, remove or add as a
      whole.
      4. Avoid rehashing a big dict when there is a large number of channels.
      
      Drawbacks:
      1. Takes more memory than using radix tree when there are relatively few
      shard channels.
      
      What this PR does:
      1. In cluster mode, split `server.pubsubshard_channels` into 16384
      dicts; in standalone mode, still use only one dict.
      2. Drop the `slots_to_channels` radix tree.
      3. To save memory (addressing the drawback above), all 16384 dicts are
      created lazily: a dict is initialized only when a channel is about to be
      inserted into it, and when all its channels are deleted, the dict deletes
      itself (see the sketch after this list).
      4. Use `server.shard_channel_count` to keep track of the total number of
      shard channels.
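      A minimal sketch of the lazy creation idea in point 3 (illustrative only; the real code uses the Redis dict API, not this stand-in struct):
      ```c
      #include <stdlib.h>

      #define CLUSTER_SLOTS 16384

      /* Stand-in for the real Redis dict, just to show the lifecycle. */
      typedef struct { unsigned long used; } dict;

      static dict *shard_channels[CLUSTER_SLOTS]; /* all NULL until first use */

      /* Called when a channel is about to be inserted into a slot's dict. */
      static dict *getSlotChannels(int slot) {
          if (shard_channels[slot] == NULL)
              shard_channels[slot] = calloc(1, sizeof(dict)); /* lazy init */
          return shard_channels[slot];
      }

      /* Called after a channel is removed from a slot's dict. */
      static void releaseSlotChannelsIfEmpty(int slot) {
          if (shard_channels[slot] != NULL && shard_channels[slot]->used == 0) {
              free(shard_channels[slot]); /* empty dict deletes itself */
              shard_channels[slot] = NULL;
          }
      }
      ```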
      
      ---------
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
    • config.c: Avoid leaking file handle if file is 0 bytes (#12828) · fa751f9b
      Moshe Kaplan authored
      If fopen() succeeds and redis_fstat determines that the file is 0
      bytes, the file handle stored in fp will leak. This change closes the
      file handle stored in fp if the file is 0 bytes.
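      A simplified sketch of the fixed pattern (the real code uses redis_fstat rather than a plain fstat):
      ```c
      #include <stdio.h>
      #include <sys/stat.h>

      /* Return an open handle only for non-empty files; simplified sketch. */
      FILE *openNonEmptyFile(const char *path) {
          FILE *fp = fopen(path, "r");
          if (fp == NULL) return NULL;

          struct stat sb;
          if (fstat(fileno(fp), &sb) == -1 || sb.st_size == 0) {
              fclose(fp); /* this close was missing for 0-byte files */
              return NULL;
          }
          return fp;
      }
      ```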
      
      Second attempt at fixing Coverity 390029
      
      This is a follow-up to #12796
    • Fix oom-score-adj test due to no permission (#12887) · bef57153
      sundb authored
      Fix #12792
      
      On Ubuntu 23 (Lunar), non-root users are not allowed to change the
      oom_score_adj of a process to a value that is too low.
      Since the terminal's default oom_score_adj is 200, if we run the test
      from a terminal we cannot set the oom_score_adj of the redis process
      to 9 or 22, which is too low.
      
      Reproduction in an Ubuntu 23 (Lunar) terminal:
      ```sh
      $ cat /proc/`pgrep redis-server`/oom_score_adj
      200
      $ echo 100 > /proc/`pgrep redis-server`/oom_score_adj
      # success without error
      $ echo 99 > /proc/`pgrep redis-server`/oom_score_adj
      echo: write error: Permission denied
      ```
      
      As the output above shows, we can lower the oom score of redis
      processes only as far as 100.
      The test is modified so that oom_score_adj only ever increases and
      never decreases.
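      The idea behind the adjusted test, as a minimal C sketch (the actual test is written in Tcl; this just shows the "only move upwards" rule):
      ```c
      #include <stdio.h>

      /* Raise oom_score_adj but never lower it, since unprivileged users
       * may not be allowed to decrease the value. */
      int setOomScoreAdjAtLeast(int desired) {
          int current = 0;
          FILE *f = fopen("/proc/self/oom_score_adj", "r");
          if (f != NULL) {
              if (fscanf(f, "%d", &current) != 1) current = 0;
              fclose(f);
          }
          if (desired <= current) return current; /* only move upwards */
          f = fopen("/proc/self/oom_score_adj", "w");
          if (f == NULL) return -1;
          fprintf(f, "%d", desired);
          fclose(f);
          return desired;
      }
      ```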
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
  5. 26 Dec, 2023 4 commits
  6. 24 Dec, 2023 2 commits
  7. 21 Dec, 2023 1 commit
    • Move cliVersion to cli_common and add --version support for redis-check-aof (#10856) · 23e980e7
      Binbin authored
      This lets us see which version of Redis the tool is part of, similarly
      to redis-cli, redis-benchmark and redis-check-rdb.

      redis-check-rdb and redis-check-aof are actually symlinks to
      redis-server, so they directly use getVersion in server; the format
      becomes:
      ```
      {title} v={redis_version} sha={sha}:{dirty} malloc={malloc} bits={bits} build={build}
      ```
      
      Move cliVersion into cli_common; redis-cli and redis-benchmark will
      use it, and the format is unchanged:
      ```
      {title} {redis_version} (git:{sha})
      ```
  8. 17 Dec, 2023 1 commit
  9. 15 Dec, 2023 2 commits
    • Always keep an in-memory history of all commands in redis-cli (#12862) · adbb534f
      Binbin authored
      redis-cli avoids saving sensitive commands in its history (it doesn't
      persist them to the history file). This means that if you make a typo
      and want to re-run the command, you can't easily do that.
      This PR changes that: it keeps an in-memory history of all the redacted
      commands and just doesn't persist them to disk. This way we can press
      the up arrow and retry the command freely, and it simply won't survive
      a redis-cli restart.
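      A minimal sketch of the split behavior (illustrative; redis-cli actually uses linenoise for its history):
      ```c
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      #define HISTORY_MAX 100

      /* Tiny in-memory ring of recent lines; a stand-in for linenoise. */
      static char *mem_history[HISTORY_MAX];
      static int mem_len;

      static void addToInMemoryHistory(const char *line) {
          if (mem_len == HISTORY_MAX) {
              free(mem_history[0]);
              memmove(mem_history, mem_history + 1,
                      (HISTORY_MAX - 1) * sizeof(char *));
              mem_len--;
          }
          mem_history[mem_len++] = strdup(line);
      }

      /* Every command reaches the in-memory history (so up-arrow works),
       * but sensitive ones are never written to the history file. */
      void historyAdd(const char *line, int sensitive, FILE *history_file) {
          addToInMemoryHistory(line);
          if (sensitive) return; /* redacted: kept in memory only */
          if (history_file) fprintf(history_file, "%s\n", line);
      }
      ```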
    • Unified db rehash method for both standalone and cluster (#12848) · d8a21c57
      zhaozhao.zz authored
      After #11695, we added two functions `rehashingStarted` and
      `rehashingCompleted` to the dict structure. We also registered two
      handlers for the main database's dict and expire structures. This allows
      the main database to record the dict in `rehashing` list when rehashing
      starts. Later, in `serverCron`, the `incrementallyRehash` function is
      continuously called to perform the rehashing operation. However,
      currently, when rehashing is completed, `rehashingCompleted` does not
      remove the dict from the `rehashing` list. This results in the
      `rehashing` list containing many invalid dicts. Although a subsequent
      cron check removes dicts that don't require rehashing, it is still
      inefficient.
      
      This PR implements the functionality to remove the dict from the
      `rehashing` list in `rehashingCompleted`. This is achieved by adding
      `metadata` to the dict structure, which keeps track of its position in
      the `rehashing` list, allowing for quick removal. This approach avoids
      storing duplicate dicts in the `rehashing` list.
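      A sketch of the O(1) removal mechanism (field and function names are approximations, not the exact dict.c code):
      ```c
      #include <stdlib.h>

      typedef struct listNode { struct listNode *prev, *next; void *value; } listNode;
      typedef struct { listNode *head; } list;

      typedef struct dict {
          listNode *rehashing_node; /* "metadata": our node in the rehashing list */
          /* ... hash tables ... */
      } dict;

      static list rehashing; /* server-level list, shared by all databases */

      static void rehashingStarted(dict *d) {
          listNode *n = malloc(sizeof(*n));
          n->value = d;
          n->prev = NULL;
          n->next = rehashing.head;
          if (rehashing.head) rehashing.head->prev = n;
          rehashing.head = n;
          d->rehashing_node = n; /* remember our position for O(1) removal */
      }

      static void rehashingCompleted(dict *d) {
          listNode *n = d->rehashing_node;
          if (n == NULL) return;
          if (n->prev) n->prev->next = n->next; else rehashing.head = n->next;
          if (n->next) n->next->prev = n->prev;
          free(n);
          d->rehashing_node = NULL; /* no longer in the rehashing list */
      }
      ```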
      
      Additionally, there are other modifications:
      
      1. Whether in standalone or cluster mode, the dict in database is
      inserted into the rehashing linked list when rehashing starts. This
      eliminates the need to distinguish between standalone and cluster mode
      in `incrementallyRehash`. The function only needs to focus on the dicts
      in the `rehashing` list that require rehashing.
      2. `rehashing` list is moved from per-database to Redis server level.
      This decouples `incrementallyRehash` from the database ID, and in
      standalone mode, there is no need to iterate over all databases,
      avoiding unnecessary access to databases that do not require rehashing.
      In the future, even if unsharded-cluster mode supports multiple
      databases, there will be no risk involved.
      3. The insertion and removal operations of dict structures in the
      `rehashing` list are decoupled from `activerehashing` config.
      `activerehashing` only controls whether `incrementallyRehash` is
      executed in serverCron. There is no need for additional steps when
      modifying the `activerehashing` switch, as in #12705.
  10. 14 Dec, 2023 1 commit
    • Extend rax usage by allowing any long long value (#12837) · 967fb3c6
      Guillaume Koenig authored
      The raxFind implementation uses a special pointer value (the address of
      a static string) as the "not found" value. This works as long as actual
      pointers are used. However, we've seen usages where long long,
      non-pointer values are stored, creating a risk that one of the long
      long values is precisely the address of the special "not found" value.
      This commit changes raxFind to return 1 or 0 to indicate membership,
      and to take a new void **value out-parameter that optionally returns
      the associated value.
      
      By extension, this also allows the RedisModule_DictSet/Replace
      operations to safely insert integers instead of just pointers.
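      A minimal sketch of the new calling convention (the rax internals are omitted; only the membership/out-parameter pattern is shown):
      ```c
      #include <stddef.h>

      /* Old style: a sentinel pointer meant "not found", which becomes unsafe
       * once arbitrary long long values are stored. New style: return 1/0 for
       * membership and pass the value out through a void**. */
      typedef struct { void *value; int present; } entry_t; /* stand-in for a rax node */

      int entryFind(const entry_t *e, void **value) {
          if (!e->present) return 0;    /* no sentinel value needed */
          if (value) *value = e->value; /* any bit pattern is safe here, even
                                           one that collides with the old
                                           sentinel address */
          return 1;
      }
      ```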
  11. 13 Dec, 2023 3 commits
    • Support by/get options for sort(_ro) in cluster mode when pattern implies slot. (#12728) · e95a5d48
      Chen Tianjie authored
      The by/get options of sort/sort_ro command used to be forbidden in
      cluster mode, since we are not sure which slot the pattern may be in.
      
      With the optimization done in #12536, patterns can now be mapped to
      slots, so we should allow by/get options in cluster mode when the
      pattern maps to the same slot as the key.
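      A sketch of the check (simplified; patternHashSlot here is a stub standing in for the pattern-to-slot mapping from #12536):
      ```c
      /* Stub for illustration: returns the slot a pattern is pinned to, or -1
       * when the pattern can match keys in more than one slot. The real
       * mapping hashes the pattern's fixed hash tag, as done for keys. */
      static int patternHashSlot(const char *pattern) { (void)pattern; return -1; }

      /* BY/GET is allowed in cluster mode only when the pattern maps to the
       * same slot as the key being sorted. */
      static int sortByGetAllowed(const char *pattern, int key_slot) {
          int pattern_slot = patternHashSlot(pattern);
          return pattern_slot != -1 && pattern_slot == key_slot;
      }
      ```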
    • Redact ACL username information and mark *-key-file-pass configs as sensitive (#12860) · 3c0fd252
      Binbin authored
      In #11489, we considered the ACL username to be sensitive information
      and ACL GETUSER a sensitive command, removing it from the redis-cli
      history file.

      This PR redacts username information in ACL GETUSER and ACL DELUSER
      from SLOWLOG, and also removes ACL DELUSER from the redis-cli history
      file.

      This PR also marks tls-key-file-pass and tls-client-key-file-pass as
      sensitive configs; they will be redacted from SLOWLOG and removed from
      the redis-cli history file.
    • Add metric to INFO CLIENTS: pubsub_clients. (#12849) · f9cc25c1
      Chen Tianjie authored
      In the INFO CLIENTS section, we already have blocked_clients and
      tracking_clients. We should add a new metric showing the number of
      pubsub connections, which helps with performance monitoring and
      troubleshooting.
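      Illustrative output (the counter values here are invented) showing where the new metric appears:
      ```
      127.0.0.1:6379> INFO clients
      # Clients
      connected_clients:5
      blocked_clients:0
      tracking_clients:0
      pubsub_clients:2
      ...
      ```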
  12. 11 Dec, 2023 1 commit
    • Fix delKeysInSlot server events are not executed inside an execution unit (#12745) · c85a9b78
      Binbin authored
      This is a follow-up fix to #12733. We need to apply the same changes to
      delKeysInSlot. Refer to #12733 for more details.
      
      This PR contains some other minor cleanups / improvements to the test
      suite and docs.
      It uses the postnotifications test module in a cluster mode test which
      revealed a leak in the test module (fixed).
  13. 10 Dec, 2023 4 commits
    • Handle missing fields in dbSwapDatabases and swapMainDbWithTempDb (#12763) · 62419c01
      Binbin authored
      The change in dbSwapDatabases seems harmless, because in non-clustered
      mode dbBuckets calculations are strictly accurate, and in cluster mode
      we only have one DB. Modify it for uniformity (just like resize_cursor).
      
      The change in swapMainDbWithTempDb is needed in case we swap with the
      temp db, otherwise the overhead memory usage of db can be miscalculated.
      
      In addition we will swap all fields (including rehashing list), just for
      completeness (and reduce the chance of surprises in the future).
      
      Introduced in #12697.
    • Fix rehashingStarted miscalculating bucket_count in dict initialization (#12846) · e6423b7a
      Binbin authored
      In the old dictRehashingInfo implementation, for the initialization
      scenario, it mistakenly set to_size directly to
      DICTHT_SIZE(DICT_HT_INITIAL_EXP), which is 4 in our code by default.

      In scenarios where dictExpand directly passes the target size at
      initialization, the code will calculate bucket_count incorrectly. For
      example, in DEBUG POPULATE or RDB load scenarios, it will cause the
      final bucket_count to be initialized to 65536 (16384 * 4), see:
      ```
      before:
      DB 0: 10000000 keys (0 volatile) in 65536 slots HT.
      
      it should be:
      DB 0: 10000000 keys (0 volatile) in 16777216 slots HT.
      ```
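      Where those numbers come from, as a quick check (assuming 16384 per-slot dicts and the default initial exponent of 2):
      ```c
      #include <stdio.h>

      #define DICT_HT_INITIAL_EXP 2          /* DICTHT_SIZE(2) == 4 */
      #define DICTHT_SIZE(exp) (1UL << (exp))

      int main(void) {
          unsigned long dicts = 16384;
          /* buggy report: every dict counted at its initial size */
          printf("%lu\n", dicts * DICTHT_SIZE(DICT_HT_INITIAL_EXP)); /* 65536 */
          /* correct report for 10M keys: ~610 keys per dict, expanded to
           * the next power of two, i.e. 1024 buckets each */
          printf("%lu\n", dicts * 1024UL); /* 16777216 */
          return 0;
      }
      ```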
      
      In this PR, the new ht is also initialized before calling
      rehashingStarted in _dictExpand, so that the calls in dictRehashingInfo
      can be unified.
      
      Bug was introduced in #12697.
    • Remove dead code around should_expand_db (#12767) · a3ae2ed3
      Binbin authored
      When dbExpand is called from rdb.c with try_expand set to 0, it will
      either panic on OOM, or be non-fatal (should not fail RDB loading).
      
      At the same time, the log text has been slightly adjusted to make it
      more unified.
    • Remove overhead.hashtable.slot-to-keys from memory-stats reply_schema (#12784) · 7410d985
      Binbin authored
      overhead.hashtable.slot-to-keys was added in 7.0 in #10017, then removed
      in #11695. Now remove it from reply_schema.
  14. 08 Dec, 2023 2 commits
  15. 07 Dec, 2023 2 commits
    • Avoid unnecessary slot computing in KEYS command. (#12843) · f2d59c4f
      Chen Tianjie authored
      If not in cluster mode, there is no need to compute slot.
      
      A small optimization for #12754.
    • Fix replica node cannot expand dicts when loading legacy RDB (#12839) · 8e11f84d
      zhaozhao.zz authored
      When loading RDB on cluster nodes, it is necessary to consider the
      scenario where a node is a replica.
      
      For example, during a rolling upgrade, new version instances are often
      mounted as replicas on old version instances. In this case, the full
      synchronization legacy RDB does not contain slot information, and the
      new version instance, acting as a replica, should be able to handle the
      legacy RDB correctly for `dbExpand`.
      
      Additionally, renaming `getMyClusterSlotCount` to `getMyShardSlotCount`
      would be appropriate.
      
      Introduced in #11695
  16. 06 Dec, 2023 5 commits
    • coverity.yml: Upload should go to project redis-unstable (#12841) · e2a3f309
      Moshe Kaplan authored
      The Coverity project name was changed from redis to redis-unstable.
      Fix the upload destination to also point at redis-unstable.
      
      Continuation of #12807
    • Fix outdated LFU comments to eliminate confusion (#12244) · 2f6d4dab
      Binbin authored
      The decrement time was replaced by access time in
      583c3147.
      
      The halved and doubled LFU_INIT_VAL logic has been changed in
      06ca9d68.
      Now we just decrement the counter by num_periods. This has been
      previously fixed in redis.conf, #11108.
    • GH Workflows: Create CI job for Coverity scan (#12807) · 77e69d88
      Moshe Kaplan authored
      I've noticed that https://scan.coverity.com/projects/redis already
      exists, but it appears to be updated only on an ad-hoc basis, so I
      created the
      [redis-unstable](https://scan.coverity.com/projects/redis-unstable?tab=project_settings)
      project in Coverity for this CI.
      
      This PR adds a GitHub Action-based CI job to create a new Coverity build
      once daily, so that there is always a recent scan available.
      
      This is within the limit, as Redis is ~150K LOC and per
      https://scan.coverity.com/faq#frequency :
      
      > Up to 21 builds per week, with a maximum of 3 builds per day, for
      projects with 100K to 500K lines of code
      
      Before this is merged in, two new secrets will need to be created:
      
      * COVERITY_SCAN_EMAIL with the email address used for accessing Coverity
      * COVERITY_SCAN_TOKEN with the Project token from
      https://scan.coverity.com/projects/redis-unstable?tab=project_settings
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • Fix multi dbs donot dbExpand when loading RDB (#12840) · b730404c
      zhaozhao.zz authored
      Currently, during RDB loading, once a `dbExpand` is performed, the
      `should_expand_db` flag is set to 0. This leaves the remaining DBs
      unable to do `dbExpand` when there are multiple DBs.
      
      To fix this issue, we need to set `should_expand_db` back to 1 whenever
      we encounter `RDB_OPCODE_RESIZEDB`. This ensures that each DB can
      perform `dbExpand` correctly.
      
      Additionally, the initial value of `should_expand_db` should also be set
      to 0 to prevent invalid `dbExpand` in older versions of RDB where
      `RDB_OPCODE_RESIZEDB` is not present.
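      A toy sketch of the corrected control flow (not the real rdb.c, which reads the declared sizes from the opcode and calls dbExpand):
      ```c
      #include <stddef.h>

      typedef enum { OP_SELECTDB, OP_RESIZEDB, OP_KEY, OP_EOF } opcode;

      void loadToy(const opcode *ops, size_t n) {
          int should_expand_db = 0; /* start disarmed: legacy RDBs without
                                       RESIZEDB must never trigger dbExpand */
          for (size_t i = 0; i < n; i++) {
              switch (ops[i]) {
              case OP_RESIZEDB:
                  should_expand_db = 1; /* re-armed for every DB: the fix */
                  break;
              case OP_KEY:
                  if (should_expand_db) {
                      /* dbExpand(db, declared_size, ...); */
                      should_expand_db = 0; /* expand once per DB */
                  }
                  break;
              default:
                  break;
              }
          }
      }
      ```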
      
      Problem introduced in #11695.
    • Make the sampling logic in eviction clearer (#12781) · 9ee1cc33
      zhaozhao.zz authored
      Additional optimizations for the eviction logic in #11695, to make it
      clearer and to decouple the number of sampled keys from the running
      mode (cluster or standalone):
      * When sampling in each database, we only care about the number of keys
      in the current database (not the dicts we sampled from).
      * If there is an insufficient number of keys in the current database
      (e.g. fewer than 10 times the value of `maxmemory_samples`), we can
      break out sooner (to avoid looping on a sparse database).
      * We'll never try to sample the db dicts more times than the number of
      non-empty dicts in the db (max 1 in non-cluster mode).
      
      This also ensures that each database has a sufficient number of
      sampled keys, so even if unsharded-cluster mode supports multiple
      databases, there won't be any issues; see the sketch below.
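      A minimal sketch of those bounds (names illustrative; the real loop lives in evict.c):
      ```c
      /* Sample up to maxmemory_samples keys from one database, honoring the
       * bounds described above. */
      void sampleDatabase(unsigned long db_keys, int non_empty_dicts,
                          int maxmemory_samples) {
          /* sparse database: cap the work instead of looping forever */
          unsigned long max_visits = (unsigned long)maxmemory_samples * 10;
          if (db_keys < max_visits) max_visits = db_keys;

          unsigned long visited = 0;
          int sampled = 0, dict_probes = 0;

          while (sampled < maxmemory_samples && visited < max_visits &&
                 dict_probes < non_empty_dicts) { /* max 1 probe standalone */
              /* pick the next non-empty dict and sample keys from it ... */
              dict_probes++;
              visited++;
              sampled++; /* placeholder for real sampling */
          }
      }
      ```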
      
      other changes:
      1. Keep track of the number of non-empty dicts in each database.
      2. Move key_count tracking into cumulativeKeyCountAdd rather than all
      its callers.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
  17. 05 Dec, 2023 1 commit