1. 31 Oct, 2024 1 commit
    • Use cross-platform-actions for FreeBSD support. (#12732) · e1b3dcd7
      Yossi Gottlieb authored
      This change overcomes many stability issues experienced with the
      vmactions action.
      
      We need to limit VMs to 8GB for better stability, as the 13GB default
      seems to hang them occasionally.
      
      Shell code has been simplified since this action seems to use `bash -e`,
      which aborts on non-zero exit codes anyway.
      e1b3dcd7
  2. 29 Oct, 2024 12 commits
    • Avoid cluster.nodes load corruption due to shard-id generation (#13468) · 3a17679f
      Steve authored
      
      
      PR #13428 doesn't fully resolve an issue where corruption errors can
      still occur when loading the cluster.nodes file. It was seen on upgrade
      when there were no shard_ids (from old Redis): 7.2.5 generated new
      random ones on load and persisted them to the file before
      gossip/handshake could propagate the correct ones (or while some other
      nodes were unreachable). This results in a primary and replica having
      differing shard_ids in cluster.nodes, and then the server cannot start
      up and reports corruption.
      
      This PR builds on #13428 by simply ignoring the replica's shard_id in
      cluster.nodes (if it exists) and using the replica's primary's shard_id
      instead. Additional handling was necessary to cover the case where the
      replica appears before the primary in cluster.nodes: it will first use
      a generated shard_id for the primary, and then correct it after it
      loads the primary's cluster.nodes entry.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      3a17679f
    • Fix incorrect lag due to trimming stream via XTRIM command (#13473) · 11c40358
      debing.sun authored
      ## Description
      When using the `XTRIM` command to trim a stream, it does not update the
      maximal tombstone (`max_deleted_entry_id`). This leads to an issue where
      the lag calculation incorrectly assumes that there are no tombstones
      after the consumer group's last_id, resulting in an inaccurate lag.
      
      The reason XTRIM doesn't need to update the maximal tombstone is that it
      always trims from the beginning of the stream. This means that it
      consistently changes the position of the first entry, leading to the
      following scenarios:
      
      1) First entry trimmed after maximal tombstone:
      If the first entry is trimmed to a position after the maximal tombstone,
      all tombstones will be before the first entry, so they won't affect the
      consumer group's lag.
      
      2) First entry trimmed before maximal tombstone:
      If the first entry is trimmed to a position before the maximal
      tombstone, the maximal tombstone will not be updated.
      
      ## Solution
      Therefore, this PR optimizes the lag calculation by ensuring that when
      both the consumer group's last_id and the maximal tombstone are behind
      the first entry, the consumer group's lag is always equal to the number
      of remaining elements in the stream.
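      
      A minimal, self-contained sketch of that rule (the ID struct and
      function names here are illustrative, not the actual stream
      internals):
      
      ```c
      #include <stdint.h>
      
      /* Illustrative stream ID; Redis uses a similar ms/seq pair. */
      typedef struct { uint64_t ms, seq; } sketchStreamID;
      
      static int idBefore(sketchStreamID a, sketchStreamID b) {
          return a.ms < b.ms || (a.ms == b.ms && a.seq < b.seq);
      }
      
      /* If both the group's last-delivered ID and the maximal tombstone
       * fall before the first entry, every remaining entry is undelivered,
       * so the lag is simply the stream length; otherwise use the exact
       * counters. */
      static int64_t groupLag(sketchStreamID first, sketchStreamID last_id,
                              sketchStreamID max_deleted, int64_t length,
                              int64_t entries_added, int64_t entries_read) {
          if (idBefore(last_id, first) && idBefore(max_deleted, first))
              return length;
          return entries_added - entries_read;
      }
      ```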
      
      Supplement to PR https://github.com/redis/redis/pull/13338
      11c40358
    • Pass extensions to node if extension processing is handled by it (#13465) · 7e49f53e
      debing.sun authored
      This PR is based on the commits from PR
      https://github.com/valkey-io/valkey/pull/52.
      Ref: https://github.com/redis/redis/pull/12760
      Close https://github.com/redis/redis/issues/13401
      This PR will replace https://github.com/redis/redis/pull/13449
      
      
      
      Fixes compatibility of Redis cluster (7.2 - extensions enabled by
      default) with older Redis cluster (< 7.0 - extensions not handled).
      
      With some of the extensions enabled by default in version 7.2, new
      nodes running 7.2 and above start sending out a larger cluster bus
      message payload that includes the ping extensions. This caused an
      incompatibility with nodes running engine versions < 7.0. Old nodes
      (< 7.0) receiving the payload from new nodes (>= 7.2) would observe a
      payload length (totlen) greater than the estimated length (estlen),
      perform an early exit, and never process the message.
      
      This fix does the following things (see the sketch below):
      1. Always set `CLUSTERMSG_FLAG0_EXT_DATA`, because during the meet
      phase we do not know whether the connected node supports ext data; we
      need to make sure that it knows, and that it sends back its ext data
      if it has any.
      2. If another node does not support ext data, we will not send it ext
      data, to avoid a handshake failure due to an incorrect payload length.
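      
      A rough sketch of the two rules in C (the flag macro is from the
      message itself; the helper name and surrounding code shape are
      assumptions, not the actual patch):
      
      ```c
      /* Rule 1: always advertise extension support in the header flags.
       * Rule 2: attach the extension payload only once the peer has shown,
       * via a successful PING/PONG, that it understands extensions. */
      hdr->mflags[0] |= CLUSTERMSG_FLAG0_EXT_DATA;        /* always set */
      if (node != NULL && nodeSupportsExtensions(node)) { /* peer opted in */
          totlen += writePingExtensions(hdr, node);       /* assumed helper */
      }
      ```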
      
      Note: a successful `PING`/`PONG` from a given node is required before
      it is marked as `CLUSTERMSG_FLAG0_EXT_DATA`, and only then will
      extension messages be sent to it. This could cause a slight delay in
      receiving the extension message(s).
      
      ---------
      Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      7e49f53e
    • Ensure validity of myself as master or replica when loading cluster config (#13443) · 5d919a49
      debing.sun authored
      First, we need to ensure that `curmaster` in
      `clusterUpdateSlotsConfigWith()` is not NULL at
      https://github.com/redis/redis/blob/82f00f5179720c8cee6cd650763d184ba943be92/src/cluster_legacy.c#L2320
      otherwise, it will crash at
      https://github.com/redis/redis/blob/82f00f5179720c8cee6cd650763d184ba943be92/src/cluster_legacy.c#L2395
      
      So when loading the cluster node config, we need to ensure that the
      following conditions are met (sketched below):
      1. A node must be at least one of master or replica.
      2. If a node is a replica, its master can't be NULL.
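      
      As a sketch, the two checks might look like this inside the config
      loader (`goto fmterr` is the loader's usual way of rejecting a corrupt
      file; treat the exact field names as assumptions):
      
      ```c
      /* A node must be a master or a replica, and a replica must point at
       * a master; otherwise curmaster can later be NULL in
       * clusterUpdateSlotsConfigWith() and crash the server. */
      if (!nodeIsMaster(n) && !nodeIsSlave(n)) goto fmterr;
      if (nodeIsSlave(n) && n->slaveof == NULL) goto fmterr;
      ```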
      5d919a49
    • Fix CLUSTER SHARDS command returns empty array (#13422) · 99fba9a9
      debing.sun authored
      Close https://github.com/redis/redis/issues/13414
      
      When the cluster's master node fails and is switched to another node,
      the first node in the shard node list (the old master) is no longer
      valid.
      Add a new method clusterGetMasterFromShard() to obtain the current
      master.
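      
      A hedged guess at what such a lookup looks like (the list helpers are
      from Redis' adlist; the body is illustrative, not the actual patch):
      
      ```c
      /* Walk the shard's node list and return a live master, instead of
       * assuming the first entry (possibly the failed old master) is it. */
      static clusterNode *clusterGetMasterFromShard(list *nodes) {
          listIter li;
          listNode *ln;
          listRewind(nodes, &li);
          while ((ln = listNext(&li)) != NULL) {
              clusterNode *n = ln->value;
              if (!nodeFailed(n) && nodeIsMaster(n)) return n;
          }
          return NULL; /* no live master in this shard */
      }
      ```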
      99fba9a9
    • Fix incorrect lag field in XINFO when tombstone is after the last_id of consume group (#13338) · 0d41ce34
      debing.sun authored
      Fix #13337
      
      This PR fixes two bugs that caused lag calculation errors.
      1. When the latest tombstone is before the first entry, the tombstone
      may still be after the last_id of the consumer group.
      2. When a tombstone is after the last_id of the consumer group, the
      group's counter will be invalid, and we should calculate the
      entries_read using estimates.
      0d41ce34
    • Fix possible crash due to OOM panic on invalid command (#13380) · ad7a6f59
      Oran Agra authored
      getKeysUsingKeySpecs had the range check AFTER the allocation of the
      keys buffer, so invalid arguments could cause an overflow in the
      computed size and an OOM panic.
      The allocated memory is only used after the range check, so there's no
      risk of buffer overrun.
      The OOM panic can happen on 32-bit builds, or 64-bit builds running on
      systems with less than 4GB of RAM, and is reachable via COMMAND
      GETKEYSANDFLAGS and ACL key name validation.
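      
      Schematically, the fix is an ordering change (getKeysPrepareResult is
      the real allocator; the count helper below is a placeholder):
      
      ```c
      int count = computeKeySpecCount(cmd, argv, argc); /* placeholder */
      if (count <= 0 || count > argc) return -1;        /* range check FIRST */
      keys = getKeysPrepareResult(result, count);       /* allocate after */
      ```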
      ad7a6f59
    • Don't keep global replication buffer reference for replicas marked CLIENT_CLOSE_ASAP (#13363) · e5a42280
      debing.sun authored
      
      
      In certain situations, we might generate a large number of propagates
      (e.g., multi/exec, Lua script, or a single command generating tons of
      propagations) within an event loop.
      During the process of propagating to a replica, if the replica is
      disconnected (marked as CLIENT_CLOSE_ASAP) due to exceeding the output
      buffer limit, we should remove its reference to the global replication
      buffer, to avoid the global replication buffer being unable to be
      properly trimmed while it is still referenced.
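      
      The shape of the change, as a sketch (freeReplicaReferencedReplBuffer
      is the replication-buffer helper as I understand it; its placement
      here is an assumption):
      
      ```c
      /* A replica that blew its output buffer limit is flagged
       * CLIENT_CLOSE_ASAP; release its reference into the global
       * replication buffer so trimming isn't pinned by a dying client. */
      if (replica->flags & CLIENT_CLOSE_ASAP) {
          freeReplicaReferencedReplBuffer(replica);
          continue; /* don't write to a client about to be closed */
      }
      ```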
      
      ---------
      Co-authored-by: oranagra <oran@redislabs.com>
      e5a42280
    • Fix crash due to unblock client during slot migration (#13311) · b6fac2f3
      gms authored
      
      
      In #13224, we found a crash during cluster slot migration but didn't
      know why. So I checked all the `return C_OK` paths in processCommand
      to see if we were missing a duration reset, and found this.
      
      This fix is like #12247: when we reject the command, we should reset
      the duration. I tested it and verified it fixes #13224.
      
      So the reason may be that we were using stream blocking, and then
      during the slot migration it got a redirect and crashed the server.
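      
      A sketch of the pattern from #12247 that this extends (the condition
      is schematic; the duration field is as described there):
      
      ```c
      /* On an early-return rejection path in processCommand(), clear the
       * accumulated command duration before redirecting the client. */
      if (slot_redirect_needed) {  /* schematic condition */
          c->duration = 0;         /* the missing reset */
          clusterRedirectClient(c, n, hashslot, CLUSTER_REDIR_MOVED);
          return C_OK;
      }
      ```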
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      b6fac2f3
    • Log the real reason for why posix_fadvise failed (#13246) · 18d1ec5c
      Ted Lyngmo authored
      
      
      `reclaimFilePageCache` did not set `errno`, but `rdbSaveInternal`,
      which logs the error, assumed it did. This makes sure `errno` is set.
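      
      The underlying POSIX detail is that posix_fadvise() returns the error
      number instead of setting errno, so a self-contained sketch of the fix
      looks like this (the function name is illustrative):
      
      ```c
      #include <errno.h>
      #include <fcntl.h>
      
      /* Copy posix_fadvise()'s return value into errno so the caller's
       * strerror(errno) logging shows the real reason for the failure. */
      static int reclaimPageCacheSketch(int fd, off_t offset, off_t len) {
          int ret = posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
          if (ret != 0) {
              errno = ret;
              return -1;
          }
          return 0;
      }
      ```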
      
      Fixes #13245
      Signed-off-by: Ted Lyngmo <ted@lyncon.se>
      18d1ec5c
    • Have consistent behavior of SPUBLISH within multi/exec like regular command (#13276) · 6d2f2281
      debing.sun authored
      
      
      This PR is based on the commits from PR #12944.
      
      Allow SPUBLISH command within multi/exec on replica
      
      Behavior on unstable:
      
      ```
      127.0.0.1:6380> CLUSTER NODES
      39ce8aa20f1f0d91f1a88d976ee1926dfefcdf1a 127.0.0.1:6380@16380 myself,slave 8b0feb120b68aac489d6a5af9c77dc40d71bc792 0 0 0 connected
      8b0feb120b68aac489d6a5af9c77dc40d71bc792 127.0.0.1:6379@16379 master - 0 1705091681202 0 connected 0-16383
      127.0.0.1:6380> SPUBLISH hello world
      (integer) 0
      127.0.0.1:6380> MULTI
      OK
      127.0.0.1:6380(TX)> SPUBLISH hello world
      QUEUED
      127.0.0.1:6380(TX)> EXEC
      (error) MOVED 866 127.0.0.1:6379
      ```
      
      With this change:
      
      ```
      127.0.0.1:6380> SPUBLISH hello world
      (integer) 0
      127.0.0.1:6380> MULTI
      OK
      127.0.0.1:6380(TX)> SPUBLISH hello world
      QUEUED
      127.0.0.1:6380(TX)> EXEC
      1) (integer) 0
      ```
      
      ---------
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: oranagra <oran@redislabs.com>
      6d2f2281
    • Fix oom-score-adj test due to no permission (#12887) · 585d83c2
      sundb authored
      
      
      Fix #12792
      
      On Ubuntu 23 (Lunar), non-root users are not allowed to change the
      oom_score_adj of a process to a value that is too low.
      Since the terminal's default oom_score_adj is 200, if we run the test
      from a terminal, we won't be able to set the oom_score_adj of the
      redis process to 9 or 22, which is too low.
      
      Reproduction on an Ubuntu 23 (Lunar) terminal:
      ```sh
      $ cat /proc/`pgrep redis-server`/oom_score_adj
      200
      $ echo 100 > /proc/`pgrep redis-server`/oom_score_adj
      # success without error
      $ echo 99 > /proc/`pgrep redis-server`/oom_score_adj
      echo: write error: Permission denied
      ```
      
      As seen from the output above, we can only lower the oom score of the
      redis processes to 100.
      Modify the test so that oom_score_adj only increases and never
      decreases.
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      585d83c2
  3. 02 Oct, 2024 4 commits
  4. 18 Jun, 2024 1 commit
  5. 19 May, 2024 14 commits
    • Redis 7.2.5 · f60370ce
      YaacovHazan authored
      f60370ce
    • fix wrong data type conversion in zrangeResultBeginStore (#13148) · 464aad9e
      Yanqi Lv authored
      In `beginResultEmission`, -1 means the result length is not known in
      advance. But after #12185, if we pass -1 to `zrangeResultBeginStore`,
      it is converted to SIZE_MAX in `zsetTypeCreate`, which then tries to
      `dictExpand`. Although `dictExpand` won't succeed because the size
      overflows, we'd better avoid this wrong conversion.
      
      This bug can be triggered when the source of `zrangestore` doesn't
      exist or when we use the `zrangestore` command with `byscore` or
      `bylex`.
      The impact is that dst keys are converted to use a skiplist instead of
      a listpack.
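      
      A one-line guard captures the idea (schematic; the real patch may
      plumb the length through differently, and `maxelelen` is an assumed
      second argument):
      
      ```c
      /* -1 means "result length unknown"; don't let it become SIZE_MAX
       * when passed on as an unsigned presize hint for the dest zset. */
      if (length < 0) length = 0;               /* unknown: skip presizing */
      zobj = zsetTypeCreate(length, maxelelen); /* names assumed */
      ```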
      
      (cherry picked from commit bad33f87)
      464aad9e
    • Fix redis-check-aof incorrectly considering data in manifest format as MP-AOF (#12958) · 439b8da4
      Binbin authored
      The check in fileIsManifest misjudged the manifest file. For example,
      if a RESP AOF contains "file", it will be considered a manifest file
      and the check will fail:
      ```
      *3
      $3
      set
      $4
      file
      $4
      file
      ```
      
      In #12951, if the preamble AOF also contains it, it will also fail.
      Fixes #12951.
      
      The bug happened when the word "file" appeared anywhere in the first
      1024 lines of the AOF. Now, as soon as the check finds a non-comment
      line, it breaks (whether that line contains "file" or not).
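      
      A self-contained sketch of the tightened check (the real
      fileIsManifest handles more details; this shows only the "decide on
      the first non-comment line" logic):
      
      ```c
      #include <stdio.h>
      #include <string.h>
      
      /* Skip manifest comment lines, then decide based on the FIRST
       * payload line only, instead of matching "file" anywhere. */
      static int looksLikeManifestSketch(FILE *fp) {
          char line[1024];
          while (fgets(line, sizeof(line), fp) != NULL) {
              if (line[0] == '#') continue;          /* comment lines */
              return strstr(line, "file") != NULL;   /* decide and stop */
          }
          return 0;
      }
      ```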
      
      (cherry picked from commit da727ad4)
      439b8da4
    • Fix conversion of numbers in lua args to redis args (#13115) · 1caaf581
      Matthew Douglass authored
      
      
      Since lua_Number is not explicitly an integer or a double, we need to
      make an effort to convert it to an integer when possible, since the
      string could later be used in a context that doesn't support
      scientific notation (e.g. 1e9 instead of 1000000000).
      
      Since fpconv_dtoa converts numbers with the equivalent of `%f` or
      `%e`, whichever is shorter, this would break if we try to pass a long
      integer to a command that takes an integer: we get an implicit
      conversion to string in Lua, and then the parsing in
      getLongLongFromObjectOrReply fails.
      
      ```
      > eval "redis.call('hincrby', 'key', 'field', '1000000000')" 0
      (nil)
      > eval "redis.call('hincrby', 'key', 'field', tonumber('1000000000'))" 0
      (error) ERR value is not an integer or out of range script: ac99c32e4daf7e300d593085b611de261954a946, on @user_script:1.
      ```
      
      Switch to using ll2string if the number can be safely represented as a
      long long.
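      
      A conservative sketch of the decision (ll2string and fpconv_dtoa are
      real helpers in the codebase; the guard and buffer handling here are
      assumptions):
      
      ```c
      #include <math.h>
      
      /* Print integral values that round-trip exactly (|num| < 2^53) with
       * ll2string so integer-parsing commands accept them; otherwise keep
       * the shortest floating-point form from fpconv_dtoa. */
      lua_Number num = lua_tonumber(lua, idx);
      char buf[64];
      size_t len;
      if (num == floor(num) && fabs(num) < 9007199254740992.0) {
          len = ll2string(buf, sizeof(buf), (long long)num);
      } else {
          len = fpconv_dtoa(num, buf);
          buf[len] = '\0';
      }
      ```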
      
      The problem was introduced in #10587 (Redis 7.2).
      Closes #13113.
      
      ---------
      Co-authored-by: Binbin <binloveplay1314@qq.com>
      Co-authored-by: debing.sun <debing.sun@redis.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit 5fdaa53d)
      1caaf581
    • Check user's oom_score_adj write permission for oom-score-adj test (#13111) · e46ddde2
      debing.sun authored
      `CONFIG SET oom-score-adj handles configuration failures` test failed in
      some CI jobs today.
      Failed CI: https://github.com/redis/redis/actions/runs/8152519326
      
      Not sure why the github action's docker image permissions have
      changed, but the issue is similar to #12887: we can't assume the range
      of oom_score_adj that a user can change.
      
      ## Solution:
      Determine whether the current user lacks write permission on
      oom_score_adj, instead of relying on whether the user id is 0 or not.
      
      (cherry picked from commit 9738ba98)
      e46ddde2
    • Fix redis-cli --count (for --scan, --bigkeys, etc) was ignored unless... · 8f70fcc6
      LiiNen authored
      Fix redis-cli --count (for --scan, --bigkeys, etc) was ignored unless --pattern was also used (#13092)
      
      The --count option for redis-cli was released in redis 7.2:
      https://github.com/redis/redis/pull/12042
      But I found in the code that some logic was missing for this 'count'
      option:
      ```
      static redisReply *sendScan(unsigned long long *it) {
          redisReply *reply;
      
          if (config.pattern)
              reply = redisCommand(context, "SCAN %llu MATCH %b COUNT %d",
                  *it, config.pattern, sdslen(config.pattern), config.count);
          else
              reply = redisCommand(context,"SCAN %llu",*it);
      ```
      
      The intention was to be able to use the scan count, but --count was
      only applied when 'pattern' was also declared.
      So I fixed it simply, so that it works properly even if the --pattern
      option is not used; a sketch of the corrected branch follows.
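      
      The fix essentially passes COUNT in the other branch too (sketch; the
      actual commit may differ in detail):
      
      ```c
      static redisReply *sendScan(unsigned long long *it) {
          redisReply *reply;
      
          if (config.pattern)
              reply = redisCommand(context, "SCAN %llu MATCH %b COUNT %d",
                  *it, config.pattern, sdslen(config.pattern), config.count);
          else
              reply = redisCommand(context, "SCAN %llu COUNT %d",
                  *it, config.count);
      ```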
      
      I tested it simply with the `time` command several times, and I could
      see it works as intended with this commit.
      The examples of test results are below:
      ```
      # unstable build
      
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan >/dev/null 2>/dev/null)
      
      real    0m1.287s
      user    0m0.011s
      sys     0m0.022s
      
      # count is not applied
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 >/dev/null 2>/dev/null)
      
      real    0m1.117s
      user    0m0.011s
      sys     0m0.020s
      
      # count is applied with --pattern
      
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 --pattern "hash:*" >/dev/null 2>/dev/null)
      
      real    0m0.045s
      user    0m0.002s
      sys     0m0.002s
      ```
      
      ```
      # fix-redis-cli-scan-count build
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan >/dev/null 2>/dev/null)
      
      real    0m1.084s
      user    0m0.008s
      sys     0m0.024s
      
      # count is applied even if --pattern is not declared
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 >/dev/null 2>/dev/null)
      
      real    0m0.043s
      user    0m0.000s
      sys     0m0.004s
      
      # of course this also applied
      time(./redis-cli -a $AUTH -p $PORT -h $HOST --scan --count 1000 --pattern "hash:*" >/dev/null 2>/dev/null)
      
      real    0m0.031s
      user    0m0.002s
      sys     0m0.002s
      ```
      
      Thanks a lot.
      
      (cherry picked from commit 763827c9)
      8f70fcc6
    • Increase tolerance range to block reprocess tests to avoid timing issues (#13053) · 8d3a1c97
      Binbin authored
      These tests have all failed in daily CI:
      ```
      *** [err]: Blocking XREADGROUP for stream key that has clients blocked on stream - reprocessing command in tests/unit/type/stream-cgroups.tcl
      Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
      
      *** [err]: BLPOP unblock but the key is expired and then block again - reprocessing command in tests/unit/type/list.tcl
      Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
      
      *** [err]: BZPOPMIN unblock but the key is expired and then block again - reprocessing command in tests/unit/type/zset.tcl
      Expected '1103' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
      ```
      
      Increase the range to avoid failures, and improve the comment to be
      clearer.
      The tests were introduced in #13004.
      
      (cherry picked from commit 32f44da5)
      8d3a1c97
    • Fix crash due to merge of quicklist node introduced by #12955 (#13040) · c34e6484
      debing.sun authored
      Fix two crashes introduced by #12955.
      
      When a quicklist node can't be inserted and split, we eventually merge
      the current node with its neighboring
      nodes after inserting, and compress the current node and its siblings.
      
      1. When the current node is merged with another node, the current node
      may become invalid and can no longer be used.
      
         Solution: let `_quicklistMergeNodes()` return the merged nodes.
      
      2. If the current node is an LZF quicklist node, its recompress flag
      will be 1. If the split node can be merged with a sibling node to
      become head or tail, recompressing may cause the head and tail to be
      compressed, which is not allowed.
      
          Solution: always reset recompress to 0 after merging (see the
          combined sketch below).
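      
      Sketched together, the two solutions (names follow the message; the
      exact signatures are assumptions):
      
      ```c
      /* 1. The merge may free `node`, so keep using what it returns. */
      node = _quicklistMergeNodes(quicklist, node);
      /* 2. The merged node may now be head or tail; force recompress off
       *    so head/tail never end up compressed. */
      node->recompress = 0;
      ```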
      
      (cherry picked from commit 1e8dc1da)
      c34e6484
    • Prevent LSET command from causing quicklist plain node size to exceed 4GB (#12955) · 1be66b98
      debing.sun authored
      Fix #12864
      
      The main reason for this crash is that when replacing an element of a
      quicklist packed node with the lpReplace() method,
      if the final size is larger than 4GB, lpReplace() fails and returns
      NULL, causing `node->entry` to be incorrectly set to NULL.
      
      Since the inserted data is not a large element, we can't just replace
      it like a large element, first quicklistInsertAfter()
      and then quicklistDelIndex(), because the current node may be merged
      and invalidated in quicklistInsertAfter().
      
      The solution of this PR:
      When replacing a node fails (the listpack would exceed 4GB), split the
      current node, create a new node to put in the middle, and try to merge
      them.
      This is the same as inserting a large element.
      In the worst case, its size will not exceed 4GB.
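      
      In schematic C, the replacement path when lpReplace() would overflow
      (entirely illustrative: the split helper exists in quicklist.c, but
      the node-creation helper and the signatures are hypothetical):
      
      ```c
      /* Split at the entry, put the new value in its own node between the
       * halves, then let the usual merge logic shrink things back. */
      quicklistNode *tail_half = _quicklistSplitNode(node, idx, 1);
      quicklistNode *mid = createNodeFromValue(value, sz); /* hypothetical */
      __quicklistInsertNode(quicklist, node, mid, 1);
      __quicklistInsertNode(quicklist, mid, tail_half, 1);
      _quicklistMergeNodes(quicklist, mid); /* try merging neighbors */
      ```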
      
      (cherry picked from commit 1f00c951)
      1be66b98
    • Fix timeout not being set in module blockClient case (#13011) · 423d1909
      Binbin authored
      This was introduced in #13004, missing this assignment.
      It causes the timeout to be a random value (which may be less than
      now), and then in the `Unblock by timer` test, the client is unblocked
      and calls timeout_callback; since the callback is NULL, the server
      will crash.
      
      The crash stack is:
      ```
      beforesleep
      handleBlockedClientsTimeout
      checkBlockedClientTimeout
      unblockClientOnTimeout
      replyToBlockedClientTimedOut
      moduleBlockedClientTimedOut
      -- the timeout_callback is NULL, invalidFunctionWasCalled
      bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
      ```
      
      (cherry picked from commit 45a35a79)
      423d1909
    • Fix blocking commands timeout is reset due to re-processing command (#13004) · 1bda797d
      Binbin authored
      In #11012, we reprocess the command when a client is unblocked on
      keys. In some blocking commands, for example in the XREADGROUP BLOCK
      scenario, the re-processed command recalculates the block timeout,
      causing the blocking time to be reset.
      
      This commit adds a new CLIENT_REPROCESSING_COMMAND client flag,
      explicitly letting the command know that it is being re-processed;
      later, in blockForKeys, we will not reset the timeout.
      
      Affected BLOCK cases:
      - list / zset / stream, added test cases for each.
      
      Unaffected cases:
      - module (never re-process the commands).
      - WAIT / WAITAOF (never re-process the commands).
      
      Fixes #12998.
      
      (cherry picked from commit 492021db)
      1bda797d
    • update redis-check-rdb types (#12969) · d5bae505
      Oran Agra authored
      Seems that we forgot to update the array in redis-check-rdb.
      
      (cherry picked from commit f9a0eb60)
      d5bae505
    • When one shard, sole primary node marks potentially failed replica as FAIL... · 099a2f40
      bentotten authored
      When one shard, sole primary node marks potentially failed replica as FAIL instead of PFAIL (#12824)
      
      Fixes issue where a single primary cannot mark a replica as failed in a
      single-shard cluster.
      
      (cherry picked from commit b3aaa0a1)
      099a2f40
    • Add announced-endpoints test to all_tests and fix tls related tests (#12927) · 4bd614c5
      Binbin authored
      The test was introduced in #10745, but we forgot to add it to the
      test_helper.tcl, so our CI did not actually run it. This PR adds it
      and ensures it passes CI tests.
      
      (cherry picked from commit b351a04b)
      4bd614c5
  6. 09 Jan, 2024 8 commits
    • Redis 7.2.4 · d2c8a4b9
      Oran Agra authored
      d2c8a4b9
    • Fix CLUSTER SHARDS crash in 7.0/7.2 mixed clusters where shard ids are not sync (#12832) · 85408b73
      Binbin authored
      Crash reported in #12695. In the process of upgrading the cluster from
      7.0 to 7.2, because the 7.0 nodes will not gossip shard id, in 7.2 we
      will rely on shard id to build the server.cluster->shards dict.
      
      In some cases, for example with a 7.0 master node and a 7.2 replica
      node, the cluster->shards dictionary, from the view of the 7.2 replica
      node, does not contain its master node. In this case, calling CLUSTER
      SHARDS on the 7.2 replica node may crash.
      
      We should fix the underlying assumption of updateShardId, which is
      that the shard dict should always be in sync with the node's shard_id.
      The fix was suggested by PingXie; see more details in #12695.
      
      (cherry picked from commit 5b0c6a82)
      85408b73
    • Use shard-id of the master if the replica does not support shard-id (#12805) · 5a2f4a1e
      Binbin authored
      If there are nodes in the cluster that do not support shard-id, they
      will gossip shard-id. From the perspective of nodes that support shard-id,
      their shard-id is meaningless (since shard-id is randomly generated when
      we create a node.)
      
      Nodes that support shard-id will save the shard-id information in nodes.conf.
      If the node is restarted according to nodes.conf, the server will report a
      corrupted cluster config file error. Because auxShardIdSetter will reject
      configurations with inconsistent master-replica shard-ids.
      
      A cluster-wide consensus for the node's shard_id is not necessary. The key
      is maintaining consistency of the shard_id on each individual 7.2 node.
      As the cluster progressively upgrades to version 7.2, we can expect the
      shard_ids across all nodes to naturally converge and align.
      
      In this PR, when processing the gossip, if the sender is a replica and
      does not support shard-id, set its shard_id to the shard_id of its
      master.
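      
      As a sketch (nodeSupportsExtensions and updateShardId are existing
      helpers as far as I know; the condition's exact shape is an
      assumption):
      
      ```c
      /* A replica that doesn't gossip shard-ids carries a meaningless
       * random shard_id: inherit its master's instead. */
      if (nodeIsSlave(sender) && sender->slaveof &&
          !nodeSupportsExtensions(sender))
      {
          updateShardId(sender, sender->slaveof->shard_id);
      }
      ```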
      
      (cherry picked from commit 4cae66f5)
      5a2f4a1e
    • Un-register notification and server event when RedisModule_OnLoad fails (#12809) · c4776caf
      Binbin authored
      When we register a notification or server event in RedisModule_OnLoad,
      but RedisModule_OnLoad eventually fails, triggering the notification
      or server event will cause the server to crash.
      
      If the loading fails at a later stage of moduleLoad, we do call
      moduleUnload, which handles all un-registration, but when it fails on
      the RedisModule_OnLoad call, we only un-register several specific
      things, and these were missing:
      
      - moduleUnsubscribeNotifications
      - moduleUnregisterFilters
      - moduleUnsubscribeAllServerEvents
      
      Refactored the code to reuse the code from moduleUnload.
      
      Fixes #12808.
      
      (cherry picked from commit d6f19539)
      c4776caf
    • Before evicted and before expired server events are not executed inside an execution unit. (#12733) · 4cbf9030
      Meir Shpilraien (Spielrein) authored
      Redis 7.2 (#9406) introduced a new module event, `RedisModuleEvent_Key`.
      This new event allows the module to read the key data just before it is
      removed from the database (either deleted, expired, evicted, or
      overwritten).
      
      When the key was removed from the database by active expire or
      eviction, the new event was not called as part of an execution unit.
      This can cause an issue if the module registers a post notification job
      inside the event: that job will not be executed atomically with the
      expiration/eviction operation and will not be replicated inside a
      Multi/Exec. Moreover, the post notification job will be executed right
      after the event, where it is still not safe to perform any write
      operation; this violates the promise that a post notification job will
      be called atomically with the operation that triggered it and **only
      when it is safe to write**.
      
      This PR fixes the issue by wrapping each expiration/eviction of a key
      in an execution unit. This makes sure the entire operation runs
      atomically, and all the post notification jobs are executed at the
      end, where it is safe to write.
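      
      Conceptually, the fix brackets the removal like this
      (enterExecutionUnit/exitExecutionUnit are the real helpers; the call
      site shown is schematic):
      
      ```c
      /* Run the key event, the deletion, its propagation, and any module
       * post-notification jobs as one atomic execution unit. */
      enterExecutionUnit(1, 0);
      deleteExpiredKeyAndPropagate(db, keyobj);
      exitExecutionUnit();
      ```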
      
      Tests were modified to verify the fix.
      
      (cherry picked from commit 0ffb9d2e)
      4cbf9030
    • Clear owner_not_claiming_slot bit for the slot in clusterDelSlot (#12564) · a91b57ef
      Sankar authored
      Clear owner_not_claiming_slot bit for the slot in clusterDelSlot to keep it
      consistent with slot ownership information.
      
      (cherry picked from commit 8cdeddc8)
      a91b57ef
    • Use CLZ in _dictNextExp to get the next power of two (#12815) · 8359ce26
      Binbin authored
      In the past, we did not call _dictNextExp frequently. It was only
      called when the dictionary was expanded.
      
      Later, dictTypeExpandAllowed was introduced in #7954, which is 6.2.
      For the data dict and the expire dict, we can check maxmemory before
      actually expanding the dict. This is a good optimization to avoid
      maxmemory being exceeded due to the dict expansion.
      
      And in #11692, we moved the dictTypeExpandAllowed check before the
      threshold check; this caused a bit of performance degradation: every
      time a key is added to the dict, dictTypeExpandAllowed is called to
      check.
      
      The main reason for degradation is that in a large dict, we need to
      call _dictNextExp frequently, that is, every time we add a key, we
      need to call _dictNextExp once. Then the threshold is checked to see
      if the dict needs to be expanded. We can see that the order of checks
      here can be optimized.
      
      So we moved the dictTypeExpandAllowed check back to after the threshold
      check in #12789. In this way, before the dict is actually expanded (that
      is, before the threshold is reached), we will not do anything extra
      compared to before, that is, we will not call _dictNextExp frequently.
      
      But note we'll still hit the degradation once we cross the thresholds.
      When the threshold is reached, because #7954, we may delay the dict
      expansion due to maxmemory limitations. In this case, we will call
      _dictNextExp every time we add a key during this period.
      
      This PR uses CLZ in _dictNextExp to get the next power of two. CLZ
      (count leading zeros) can easily give you the next power of two. It
      should be noted that we actually introduced the use of __builtin_clzl
      in #8687, which is 7.0. So I suppose all the platforms we use have it
      (even if the CPU doesn't have the instruction).
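      
      The trick itself is compact; a self-contained illustration (the real
      _dictNextExp returns the exponent in essentially this way, though the
      exact signature may differ):
      
      ```c
      #include <stdio.h>
      
      /* Next power-of-two exponent via count-leading-zeros: for size > 1,
       * it is 64 minus the leading zeros of (size - 1). */
      static int nextExp(unsigned long long size) {
          if (size <= 1) return 0; /* 2^0 == 1 */
          return 64 - __builtin_clzll(size - 1);
      }
      
      int main(void) {
          /* 4 -> 2 (2^2), 5 -> 3 (2^3), 64 -> 6 (2^6) */
          printf("%d %d %d\n", nextExp(4), nextExp(5), nextExp(64));
          return 0;
      }
      ```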
      
      We build 67108864 (2**26) keys through DEBUG POPULATE, which will use
      approximately 5.49G memory (used_memory:5898522936). If expansion is
      triggered, the additional hash table will consume approximately 1G
      memory (2 ** 27 * 8). So we set maxmemory to 6871947673 (that is,
      6.4G), which is less than 5.49G + 1G, so the dict rehash will be
      delayed while adding the keys.
      
      After that, each time an element is added to the dict, an allow check
      will be performed, so we can frequently call _dictNextExp to compare
      behavior before and after the optimization. We used DEBUG HTSTATS 0 to
      check and make sure that our dict expansion is delayed.
      
      Using `./src/redis-server redis.conf --save "" --maxmemory 6871947673`.
      Using `./src/redis-benchmark -P 100 -r 1000000000 -t set -n 5000000`.
      After ten rounds of testing:
      ```
      unstable:           this PR:
      769585.94           816860.00
      771724.00           818196.69
      775674.81           822368.44
      781983.12           822503.69
      783576.25           828088.75
      784190.75           828637.75
      791389.69           829875.50
      794659.94           835660.69
      798212.00           830013.25
      801153.62           833934.56
      ```
      
      We can see there is about 4-5% performance improvement in this case.
      
      (cherry picked from commit 22cc9b51)
      8359ce26
    • Optimize dict expand check, move allow check after the thresholds check (#12789) · 856c8a47
      Binbin authored
      dictExpandAllowed (for the main db dict and the expire dict) seems to
      involve a few function calls and memory accesses; we can run it only
      after the threshold checks and get some performance improvements.
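      
      An order-of-checks sketch (helper names here are entirely schematic;
      the point is only which check runs first):
      
      ```c
      /* Run the cheap load-factor threshold test first; consult the dict
       * type's potentially expensive expandAllowed callback only when we
       * are actually about to expand. */
      static int expandIfNeededSketch(dict *d) {
          if (!overLoadFactorThreshold(d)) return DICT_OK; /* common case */
          if (!typeExpandAllowed(d)) return DICT_OK;       /* rare case */
          return dictExpand(d, dictSize(d) + 1);
      }
      ```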
      
      A simple benchmark test: there are 11032768 fixed keys in the database,
      start a redis-server with `--maxmemory big_number --save ""`,
      start a redis-benchmark with `-P 100 -r 1000000000 -t set -n 5000000`,
      collect `throughput summary: n requests per second` result.
      
      After five rounds of testing:
      ```
      unstable     this PR
      848032.56    897988.56
      854408.69    913408.88
      858663.94    914076.81
      871839.56    916758.31
      882612.56    920640.75
      ```
      
      We can see a 5% performance improvement in general condition.
      But note we'll still hit the degradation when we over the thresholds.
      
      (cherry picked from commit 46347693)
      856c8a47