1. 15 Aug, 2023 1 commit
  2. 16 Jul, 2023 1 commit
    • Hide the comma after cport when there is no hostname. (#12411) · 91011100
      Chen Tianjie authored
      According to the format shown in https://redis.io/commands/cluster-nodes/
      ```
      <ip:port@cport[,hostname[,auxiliary_field=value]*]>
      ```
      when there is no hostname, and the auxiliary fields are hidden, the cluster topology should be
      ```
      <ip:port@cport>
      ```
      However, in the code we always print the hostname even when it is an empty string, leaving an unnecessary comma trailing after cport, which is weird and conflicts with the doc.
      ```
      94ca2f6cf85228a49fde7b738ee1209de7bee325 127.0.0.1:6379@16379, myself,master - 0 0 0 connected 0-16383
      ```
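      A minimal sketch of the fix (illustrative, using plain C strings rather than Redis' sds): append the comma and hostname only when a hostname is actually set.
      ```c
      #include <stdio.h>

      /* Illustrative sketch of the fix: emit ",hostname" only when the
       * hostname is non-empty, so "<ip:port@cport>" stands alone otherwise. */
      void format_node_line(char *buf, size_t len, const char *addr, const char *hostname) {
          if (hostname != NULL && hostname[0] != '\0')
              snprintf(buf, len, "%s,%s", addr, hostname);
          else
              snprintf(buf, len, "%s", addr);
      }
      ```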
  3. 07 Jul, 2023 1 commit
    • Initialize cluster owner_not_claiming_slot to avoid warning (#12391) · 14f802b3
      Binbin authored
      Valgrind reports an uninitialised-value warning:
      ```
      ==25508==  Uninitialised value was created by a heap allocation
      ==25508==    at 0x4848899: malloc (in
      /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==25508==    by 0x1A35A1: ztrymalloc_usable_internal (zmalloc.c:117)
      ==25508==    by 0x1A368D: zmalloc (zmalloc.c:145)
      ==25508==    by 0x21FDEA: clusterInit (cluster.c:973)
      ==25508==    by 0x19DC09: main (server.c:7306)
      ```
      
      Introduced in #12344
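      A hedged sketch of the kind of fix Valgrind asks for here (the field name comes from the commit title; the allocation helper is illustrative): zero the freshly allocated bitmap so no later read sees uninitialised bytes.
      ```c
      #include <stdlib.h>
      #include <string.h>

      #define CLUSTER_SLOTS 16384

      /* Zero the bitmap at allocation time so Valgrind never sees a read of
       * uninitialised heap memory (illustrative stand-in for the zmalloc
       * call in clusterInit). */
      unsigned char *alloc_owner_not_claiming_slot(void) {
          size_t len = CLUSTER_SLOTS / 8;
          unsigned char *bitmap = malloc(len);
          if (bitmap) memset(bitmap, 0, len);
          return bitmap;
      }
      ```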
  4. 06 Jul, 2023 1 commit
    • Process loss of slot ownership in cluster bus (#12344) · 1190f25c
      Sankar authored
      When a node no longer owns a slot, it clears the bit corresponding
      to the slot in the cluster bus messages. The receiving nodes
      currently don't record the fact that the sender stopped claiming
      a slot until some other node in the cluster starts claiming the slot.
      This can cause a slot to go missing during slot migration when subjected
      to an inopportune race with the addition of new shards or a failover.
      This fix forces the receiving nodes to process the loss of ownership
      to avoid spreading wrong information.
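      A hedged sketch of the receiving-side logic described above (identifiers are illustrative, not the actual Redis names): compare the sender's claimed-slots bitmap against what we previously recorded, and process a cleared bit as a loss of ownership immediately.
      ```c
      #define CLUSTER_SLOTS 16384

      static int bitmap_test(const unsigned char *bitmap, int slot) {
          return (bitmap[slot / 8] >> (slot % 8)) & 1;
      }

      /* For every slot the sender used to own but no longer claims in its
       * cluster bus message, record the loss right away instead of waiting
       * for another node to claim the slot. */
      void process_slot_claims(const unsigned char *claimed_now,
                               const unsigned char *owned_before,
                               void (*clear_slot_owner)(int slot)) {
          for (int slot = 0; slot < CLUSTER_SLOTS; slot++) {
              if (bitmap_test(owned_before, slot) && !bitmap_test(claimed_now, slot))
                  clear_slot_owner(slot);
          }
      }
      ```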
  5. 26 Jun, 2023 1 commit
    • Support TLS service when "tls-cluster" is not enabled and persist both plain and TLS port in nodes.conf (#12233) · 22a29935
      Chen Tianjie authored
      
      Originally, when "tls-cluster" is enabled, `port` is set to the TLS port. In order to support non-TLS clients, `pport` is used to propagate the TCP port across cluster nodes. However, when "tls-cluster" is disabled, `port` is set to the TCP port, and `pport` is not used, which means the cluster cannot provide TLS service unless "tls-cluster" is on.
      ```
      typedef struct {
          // ...
          uint16_t port;  /* Latest known clients port (TLS or plain). */
          uint16_t pport; /* Latest known clients plaintext port. Only used if the main clients port is for TLS. */
          // ...
      } clusterNode;
      ```
      ```
      typedef struct {
          // ...
          uint16_t port;   /* TCP base port number. */
          uint16_t pport;  /* Sender TCP plaintext port, if base port is TLS */
          // ...
      } clusterMsg;
      ```
      This PR renames `port` and `pport` in `clusterNode` to `tcp_port` and `tls_port`, recording both ports regardless of whether "tls-cluster" is enabled.
      
      This allows the server to provide TLS service to clients even when "tls-cluster" is disabled: when displaying the cluster topology, or giving a `MOVED` error, the server can provide the TLS or TCP port according to the client's connection type, no matter what type of connection the cluster bus is using.
      
      For backwards compatibility, `port` and `pport` in `clusterMsg` are preserved: when "tls-cluster" is enabled, `port` is set to the TLS port and `pport` to the TCP port; when "tls-cluster" is disabled, `port` is set to the TCP port and `pport` to the TLS port (instead of 0).
      
      Also, in the nodes.conf file, a new aux field displaying the extra port is added to complete the persisted info. We may have `tls-port=xxxxx` or `tcp-port=xxxxx` in the aux fields to complete the cluster topology, while the other port is stored in the normal `<ip>:<port>` field. The format is shown below.
      ```
      <node-id> <ip>:<tcp_port>@<cport>,<hostname>,shard-id=...,tls-port=6379 myself,master - 0 0 0 connected 0-1000
      ```
      Alternatively, the positions of the two ports can be switched; both forms are correctly resolved.
      ```
      <node-id> <ip>:<tls_port>@<cport>,<hostname>,shard-id=...,tcp-port=6379 myself,master - 0 0 0 connected 0-1000
      ```
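      A small sketch of the port-selection rule this enables (types and names are illustrative): with both ports recorded on the node, the server can answer each client with the port matching its own connection type.
      ```c
      #include <stdint.h>

      typedef struct {
          uint16_t tcp_port; /* plaintext client port */
          uint16_t tls_port; /* TLS client port */
      } node_ports;

      /* Pick the port to advertise (in topology output or a MOVED error)
       * based on the requesting client's connection type, independent of
       * whether the cluster bus itself uses TLS. */
      uint16_t client_facing_port(const node_ports *n, int client_uses_tls) {
          return client_uses_tls ? n->tls_port : n->tcp_port;
      }
      ```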
  6. 20 Jun, 2023 1 commit
    • Fix cluster human_nodename Getter data loss in nodes.conf (#12325) · cd4f3e20
      Binbin authored
      auxHumanNodenameGetter was limited to %.40s; since we did not limit the
      length of the cluster-announce-human-nodename config, %.40s would cause
      nodename data loss (we persist it in nodes.conf).
      
      Additionally, auxHumanNodenamePresent was modified to use sdslen.
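      An illustrative before/after of the formatting bug (plain snprintf instead of Redis' sds functions): "%.40s" silently truncates any nodename longer than 40 bytes before it reaches nodes.conf, while "%s" preserves it.
      ```c
      #include <stdio.h>

      void persist_nodename(char *out, size_t outlen, const char *nodename) {
          /* Buggy: truncates to 40 bytes even though the config length is
           * unbounded, losing data in nodes.conf:
           *   snprintf(out, outlen, "nodename=%.40s", nodename); */
          snprintf(out, outlen, "nodename=%s", nodename);
      }
      ```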
  7. 18 Jun, 2023 1 commit
    • Cluster human readable nodename feature (#9564) · 070453ee
      Wen Hui authored
      This PR adds a human readable name to a node in clusters, visible as part of error logs. This is useful so that admins and operators of Redis clusters have better visibility into failures without having to cross-reference the generated ID with some logical identifier (such as pod-ID or EC2 instance ID). This is mentioned in #8948. Specific nodenames can be set via the cluster-announce-human-nodename config. The nodename is gossiped using the cluster bus extension introduced in #9530.
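      For example, the name might be configured like this (the value is illustrative; the config name comes from the text above):
      ```
      cluster-announce-human-nodename "pod-7-us-east-1a"
      ```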
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
  8. 16 Jun, 2023 1 commit
    • Fix SPOP/RESTORE propagation when doing lazy free (#12320) · 439b0315
      Binbin authored
      In SPOP, when COUNT is greater than or equal to the set's size,
      we remove the set. In dbDelete, we do DEL or UNLINK
      according to the lazy flag. This also needs to be reflected in propagation.
      
      In RESTORE, we won't store expired keys into the db, see #7472.
      When used together with REPLACE, it should emit a DEL or UNLINK
      according to the lazy flag.
      
      This PR also adds tests to cover the propagation. The RESTORE
      test will also cover #7472.
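      A hedged sketch of the propagation rule (the emit callback is an illustrative stand-in for Redis' replication rewrite machinery):
      ```c
      /* When SPOP empties the set, or RESTORE REPLACE discards an expired
       * key, replicate the deletion the same way dbDelete performed it
       * locally: UNLINK under lazy free, DEL otherwise. */
      void propagate_key_removal(int lazyfree_enabled,
                                 void (*emit)(const char *cmd, const char *key),
                                 const char *key) {
          emit(lazyfree_enabled ? "UNLINK" : "DEL", key);
      }
      ```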
  9. 23 May, 2023 1 commit
  10. 03 May, 2023 1 commit
    • Remove prototypes with empty declarations (#12020) · 5e3be1be
      Madelyn Olson authored
      Technically, declaring a prototype with an empty parameter list has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently be introducing a breaking change for this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although not apparently a problem for any of the compilers we build with, it feels like the right thing is to properly adhere to the C standard and use (void).
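      For illustration (the function name is made up), the change amounts to:
      ```c
      /* Deprecated style: an empty parameter list declares a function taking
       * unspecified arguments; C2x removes this, and Clang 15 warns under
       * -pedantic. */
      int get_cluster_size();

      /* Conforming style: (void) states explicitly that there are no
       * parameters. */
      int get_cluster_size(void);
      ```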
  11. 19 Feb, 2023 1 commit
  12. 16 Feb, 2023 1 commit
  13. 14 Feb, 2023 1 commit
    • Update codes (#11804) · a7051845
      Wen Hui authored
      In this PR, we use the function pointer *isPresent to replace the variable "present" in auxFieldHandler, so that in the future, when we have more aux fields, we can decide whether each aux field is displayed or not.
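      A hedged sketch of the shape this gives the handler table (the field names follow the commit text; everything else is illustrative):
      ```c
      typedef struct clusterNode clusterNode;

      /* Each aux field carries a predicate instead of a fixed flag, so new
       * fields can decide at runtime whether they should be displayed. */
      typedef struct {
          const char *field;
          int (*isPresent)(clusterNode *n); /* non-zero => display this field */
      } auxFieldHandler;
      ```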
  14. 02 Feb, 2023 1 commit
    • Propagate message to a node only if the cluster link is healthy. (#11752) · fd397568
      Harkrishn Patro authored
      Currently, while a sharded pubsub message publish tries to propagate the message across the cluster, a NULL check is missing for clusterLink. clusterLink could be NULL if the link consumes memory beyond the configured threshold cluster-link-sendbuf-limit and the server terminates the link.
      
      This change introduces two things:
      
      1. Avoids a crash on the publishing node when a message is sent to a node whose link is NULL.
      2. Adds a debugging tool, CLUSTERLINK KILL, to terminate the clusterLink between two nodes.
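      A hedged sketch of the missing guard (types reduced to the essentials; names are illustrative):
      ```c
      typedef struct clusterLink clusterLink;
      typedef struct clusterNode { clusterLink *link; } clusterNode;

      /* The link may already have been freed, e.g. after exceeding
       * cluster-link-sendbuf-limit, so check before writing to it. */
      int can_propagate_to(const clusterNode *node) {
          return node != NULL && node->link != NULL;
      }
      ```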
  15. 30 Jan, 2023 1 commit
    • Optimize the performance of cluster slots for non-continuous slots (#11745) · e74a1f3b
      Madelyn Olson authored
      This change improves the performance of cluster slots by removing the deferred lengths that were used. Deferred lengths are used in two contexts: the first is determining the number of replicas that serve a slot (added in 6.2 as part of a different performance improvement) and the second is determining the extra networking options for each node (added in 7.0). For continuous slots (e.g. 0-8196) this improvement is negligible, but it becomes more significant when slots are not continuous (e.g. 0 2 4 6 etc.), which can happen in production for various users.
      
      The `cluster slots` command is deprecated in favor of `cluster shards`, but since most clients don't support the new command yet I think it's important to not degrade performance here.
      
      Benchmarking shows about a 2x improvement; however, I wasn't able to get a coherent TPS number since the benchmark process was saturated long before Redis was, so I had to run multiple benchmarks and merge the results. If needed I can add this to our memtier framework. Instead, the next section shows the number of usec per call from the benchmark results, which shows significant improvement as well as a more coherent response in the CoB.
      
      | | New Code | Old Code | % Improvement |
      |----|----|-----|-----|
      | Uniform slots | usec_per_call=10.46 | usec_per_call=11.03 | 5.7% |
      | Worst case (only even slots) | usec_per_call=963.80 | usec_per_call=2950.99 | 307% |
      
      This change also removes some extra white space that I added when making a code change for adding hostnames.
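      A hedged sketch of the technique (helper names are illustrative): instead of emitting a deferred length and patching it afterwards, count first and emit a fixed length.
      ```c
      typedef struct replica replica;
      int replica_is_available(const replica *r); /* illustrative predicate */

      /* Pre-count the replicas serving a slot so the reply array length can
       * be written up front, avoiding the deferred-length bookkeeping that
       * dominated the non-continuous-slot case. */
      int count_available_replicas(replica *const *replicas, int n) {
          int count = 0;
          for (int i = 0; i < n; i++)
              if (replica_is_available(replicas[i])) count++;
          return count;
      }
      ```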
  16. 12 Jan, 2023 1 commit
  17. 11 Jan, 2023 2 commits
    • Remove the bucket-cb from dictScan and move dictEntry defrag to dictScanDefrag · b60d33c9
      Viktor Söderqvist authored
      This change deletes the dictGetNext and dictGetNextRef functions, so the
      dict API doesn't expose the next field at all.
      
      The bucket function in dictScan is deleted. A separate dictScanDefrag function
      is added which takes a defrag alloc function to defrag-reallocate the dict entries.
      
      "Dirty" code accessing the dict internals in active defrag is removed.
      
      An 'afterReplaceEntry' is added to dictType, which allows the dict user
      to keep the dictEntry metadata up to date after reallocation/defrag/move.
      
      Additionally, for updating the cluster slot-to-key mapping, after a dictEntry
      has been reallocated, we need to know which db a dict belongs to, so we store
      a pointer to the db in a new metadata section in the dict struct, which is
      a new mechanism similar to dictEntry metadata. This adds some complexity but
      provides better isolation.
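      A hedged sketch of the new scan entry point's shape (the signature is simplified; the real callback struct carries more than one defrag function):
      ```c
      typedef struct dict dict;
      typedef struct dictEntry dictEntry;
      typedef void (dictScanFunction)(void *privdata, const dictEntry *de);

      /* The defrag variant takes an allocator callback: when it returns a
       * new address for an entry, the scan relocates the entry and fixes up
       * the links, keeping defrag logic out of callers' hands. */
      typedef struct {
          void *(*defragAlloc)(void *ptr); /* returns new ptr, or NULL to keep */
      } dictDefragAllocFunctions;

      unsigned long dictScanDefrag(dict *d, unsigned long cursor,
                                   dictScanFunction *fn,
                                   dictDefragAllocFunctions *defragfns,
                                   void *privdata);
      ```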
    • Make dictEntry opaque · c84248b5
      Viktor Söderqvist authored
      Use functions for all accesses to dictEntry (except in dict.c). Dict abuses
      e.g. in defrag.c have been replaced by support functions provided by dict.
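      For illustration, call sites now go through accessors (dictGetKey/dictGetVal are the existing dict API; the entry layout itself is private to dict.c):
      ```c
      #include "dict.h"

      void inspect_entry(dictEntry *de) {
          void *key = dictGetKey(de); /* instead of reaching into the struct */
          void *val = dictGetVal(de);
          (void)key; (void)val;
      }
      ```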
  18. 02 Jan, 2023 1 commit
  19. 01 Jan, 2023 1 commit
    • reprocess command when client is unblocked on keys (#11012) · 383d902c
      ranshid authored
      *TL;DR*
      ---------------------------------------
      Following the discussion over issue [#7551](https://github.com/redis/redis/issues/7551),
      we decided to refactor the client blocking code to eliminate some of the code duplication
      and to rebuild the infrastructure better for future key blocking cases.
      
      
      *In this PR*
      ---------------------------------------
      1. reprocess the command once a client becomes unblocked on a key (instead of running
         custom code for the unblocked path that's different from the one that would have run if
         blocking wasn't needed)
      2. eliminate some (now) irrelevant code for handling unblocking of lists/zsets/streams etc.
      3. modify some tests to intercept the error in case of an error on reprocess after unblock (see
         details in the notes section below)
      4. replace '$' in the client argv with the current stream id, since once we reprocess the stream
         XREAD we need to read from the last msg and not wait for a new msg, in order to prevent
         an endless block loop.
      5. Added statistics to the info "Clients" section reporting:
         * `total_blocking_keys` - number of blocking keys
         * `total_blocking_keys_on_nokey` - number of blocking keys with at least one client
           that would like to be unblocked when the key is deleted.
      6. Avoid expiring the unblocked key during unblock. Previously we used to look up the unblocked
         key, which might have been expired during the lookup. Now we look up the key using NOTOUCH and
         NOEXPIRE to avoid deleting it at this point, so propagating commands in blocked.c is no longer needed.
      7. Deprecated command flags: we decided to remove CMD_CALL_STATS and CMD_CALL_SLOWLOG
         and make an explicit verification in the call() function in order to decide if a stats update should
         take place. This should simplify the logic and also mitigate existing issues: for example, module
         calls which are triggered as part of AOF loading might still report stats even though they are
         called during AOF loading.
      
      *Behavior changes*
      ---------------------------------------------------
      
      1. As this implementation avoids dedicated code for handling unblocked streams/lists/zsets,
      and we now re-process the command once the client is unblocked, some errors will be reported differently.
      The old implementation used to issue
      ``UNBLOCKED the stream key no longer exists``
      in the following cases:
         - The stream key has been deleted (i.e. calling DEL)
         - The stream and group existed but the key type was changed by overriding it (i.e. with the set command)
         - The key no longer exists after we swapdb with a db which does not contain this key
         - After swapdb when the new db has this key but with a different type.
         
      In the new implementation the reported errors will be the same as if the command was processed at that point:
      **NOGROUP** - in case the key no longer exists, or **WRONGTYPE** in case the key was overridden with a different type.
      
      2. Reprocessing the command means that some checks will be reevaluated once the
      client is unblocked.
      For example, ACL rules might have changed since the command was originally executed,
      and the command will fail once the client is unblocked.
      Another example is OOM condition checks, which might allow the command to run and
      block but fail the command reprocess once the client is unblocked.
      
      3. One of the changes in this PR is that no command stats are updated while the
      command is blocked (all stats will be updated once the client is unblocked). This implies
      that when many clients are blocked, users will no longer be able to get that information
      from the command stats. However, the information can still be gathered from the client list.
      
      **Client blocking**
      ---------------------------------------------------
      
      Blocking on a key will still be triggered the same way as it is done today.
      In order to block the current client on a list of keys, the call to
      blockForKeys will still need to be made, and it will perform the same as it does today:
      
      *  add the client to the list of blocked clients on each key
      *  keep the key with a matching list node (position in the global blocking clients list for that key)
         in the client private blocking key dict.
      *  flag the client with CLIENT_BLOCKED
      *  update blocking statistics
      *  register the client on the timeout table
      
      **Key Unblock**
      ---------------------------------------------------
      
      Unblocking a specific key will be triggered (same as today) by calling signalKeyAsReady.
      The implementation in that part will stay the same as today - adding the key to the global readyList.
      The reason to maintain the readyList (as opposed to iterating over all clients blocked on the specific key)
      is to keep the signal operation as short as possible, since it is called during command processing.
      The main change is that instead of going through a dedicated code path that operates the blocked command,
      we will just call processPendingCommandsAndResetClient.
      
      **ClientUnblock (keys)**
      ---------------------------------------------------
      
      1. Unblocking clients on keys will be triggered after the command is
         processed and during beforeSleep.
      2. The general schema is, for each key *k* in the readyList:
      ```
      For each client *c* which is blocked on *k*:
          in case either:
              1. *k* exists AND the *k* type matches the current client blocking type
                 OR
              2. *k* exists and *c* is blocked on a module command
                 OR
              3. *k* does not exist and *c* was blocked with the flag
                 unblock_on_deleted_key
          do:
              1. remove the client from the list of clients blocked on this key
              2. remove the blocking list node from the client blocking key dict
              3. remove the client from the timeout list
              4. queue the client on the unblocked_clients list
              5. *NEW*: call processCommandAndResetClient(c);
      ```
      *NOTE:* for module blocked clients we will still call moduleUnblockClientByHandle,
      which will queue the client for processing in the moduleUnblockedClients list.
      
      **Process Unblocked clients**
      ---------------------------------------------------
      
      The processing of all unblocked clients is done in beforeSleep, and no change is planned
      in that part.
      
      The general schema will be:
      For each client *c* in server.unblocked_clients:
      
              * remove client from the server.unblocked_clients
              * set back the client readHandler
              * continue processing the pending command and input buffer.
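      A hedged sketch of that loop (the list types and the read-handler helper are illustrative; processPendingCommandsAndResetClient is named in the text above):
      ```c
      typedef struct client client;
      typedef struct clientNode { client *c; struct clientNode *next; } clientNode;

      void processPendingCommandsAndResetClient(client *c); /* per the schema above */
      void installReadHandler(client *c);                   /* illustrative helper */

      /* Drain the unblocked-clients list in beforeSleep: restore the read
       * handler, then continue processing the pending command and input buffer. */
      void processUnblockedClients(clientNode **unblocked) {
          while (*unblocked) {
              clientNode *node = *unblocked;
              *unblocked = node->next;     /* remove client from the list */
              installReadHandler(node->c); /* set back the client read handler */
              processPendingCommandsAndResetClient(node->c);
          }
      }
      ```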
      
      *Some notes regarding the new implementation*
      ---------------------------------------------------
      
      1. Although it was proposed, it is currently difficult to remove the
         read handler from the client while it is blocked.
         The reason is that a blocked client should be unblocked when it is
         disconnected, or we might consume data into the void.
      
      2. While this PR mainly keeps the current blocking logic as-is, there
         might be some future additions to the infrastructure that we would
         like to have:
         - allow non-preemptive blocking of a client - sometimes we can imagine
           a new kind of blocking that is expected not to be preempted. For
           example, imagine we hold some keys on disk and when a command
           needs to process them it blocks until the keys are uploaded.
           In this case we will want the client to not disconnect or be
           unblocked until the process is completed (remove the client read
           handler, prevent client timeout, disable unblock via debug command etc.).
         - allow generic blocking based on command-declared keys - we might
           want to add a hook before command processing to check if any of the
           declared keys require the command to block. This way it would be
           easier to add new kinds of key-based blocking mechanisms.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
  20. 20 Dec, 2022 1 commit
    • Cleanup: Get rid of server.core_propagates (#11572) · 9c7c6924
      guybe7 authored
      1. Get rid of server.core_propagates - we can just rely on module/call nesting levels
      2. Rename in_nested_call  to execution_nesting and update the comment
      3. Remove module_ctx_nesting (redundant, we can use execution_nesting)
      4. Modify postExecutionUnitOperations according to the comment (The main purpose of this PR)
      5. trackingHandlePendingKeyInvalidations: Check the nesting level inside this function
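      A hedged sketch of the resulting control flow (names follow the commit text where given; the rest is illustrative):
      ```c
      static int execution_nesting; /* replaces server.core_propagates */

      void postExecutionUnitOperations(void); /* fires post jobs, then propagates */

      void enterExecutionUnit(void) { execution_nesting++; }

      void exitExecutionUnit(void) {
          /* Only the outermost unit (command, eviction, expire) flushes. */
          if (--execution_nesting == 0)
              postExecutionUnitOperations();
      }
      ```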
  21. 27 Nov, 2022 1 commit
  22. 26 Nov, 2022 1 commit
  23. 25 Nov, 2022 1 commit
  24. 24 Nov, 2022 1 commit
    • Module API to allow writes after key space notification hooks (#11199) · abc345ad
      Meir Shpilraien (Spielrein) authored
      ### Summary of API additions
      
      * `RedisModule_AddPostNotificationJob` - new API to call inside a key space
        notification (and in more locations in the future), allowing to add a post job as described above.
      * New module option, `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`,
        allows disabling Redis' protection against nested key-space notifications.
      * `RedisModule_GetModuleOptionsAll` - gets the mask of all supported module options, so a module
        can check whether a given option is supported by the currently running Redis instance.
      
      ### Background
      
      The following PR is a proposal for handling write operations inside module key space notifications.
      After a lot of discussion we came to the conclusion that modules should not perform any write
      operations on key space notifications.
      
      Some examples of issues that such write operations can cause are described in the following links:
      
      * Bad replication order - https://github.com/redis/redis/pull/10969
      * Use after free - https://github.com/redis/redis/pull/10969#issuecomment-1223771006
      * Use after free - https://github.com/redis/redis/pull/9406#issuecomment-1221684054
      
      
      
      There are probably more issues yet to be discovered. The underlying problem with writing
      inside a key space notification is that the notification runs synchronously; this means that the notification
      code will be executed in the middle of Redis logic (command logic, eviction, expire).
      Redis **does not assume** that the data might change while running the logic, and such changes
      can crash Redis or cause unexpected behaviour.
      
      The solution is to state that modules **should not** perform any write command inside a key space
      notification (we can choose whether or not we want to enforce it). To still cover the use case where a
      module wants to perform a write operation as a reaction to key space notifications, we introduce
      a new API, `RedisModule_AddPostNotificationJob`, that allows registering a callback that will be
      called by Redis when the following conditions hold:
      
      * It is safe to perform any write operation.
      * The job will be called atomically alongside the operation that triggered it (in our case, the key
        space notification).
      
      A module can use this new API to safely perform any write operation and still achieve atomicity
      between the notification and the write.
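      A hedged sketch of how a module might use the API (the callback signatures follow the module API; the key handling, event registration, and memory management details are illustrative and omitted or simplified):
      ```c
      #include "redismodule.h"

      /* Runs later, when writes are safe, atomically with the notification. */
      static void ExpireJob(RedisModuleCtx *ctx, void *pd) {
          RedisModuleString *key = pd;
          RedisModuleCallReply *reply =
              RedisModule_Call(ctx, "PEXPIRE", "sc", key, "100");
          if (reply) RedisModule_FreeCallReply(reply);
      }

      static void FreeJobData(void *pd) {
          /* illustrative: drop our reference to the retained key */
          RedisModule_FreeString(NULL, pd);
      }

      /* Inside the notification itself no writes are performed; the write is
       * deferred to a post-notification job instead. */
      static int OnKeyEvent(RedisModuleCtx *ctx, int type, const char *event,
                            RedisModuleString *key) {
          RedisModule_RetainString(ctx, key);
          RedisModule_AddPostNotificationJob(ctx, ExpireJob, key, FreeJobData);
          return REDISMODULE_OK;
      }
      ```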
      
      Although currently the API is supported only for key space notifications, it is written in a generic
      way, so that in the future we will be able to use it in other places (server events, for example).
      
      ### Technical Details
      
      Whenever a module uses `RedisModule_AddPostNotificationJob`, the callback is added to a list
      of callbacks (called `modulePostExecUnitJobs`) that need to be invoked after the current execution
      unit ends (whether it's a command, eviction, or active expire). In order to trigger those callbacks
      atomically with the notification effect, we call them in `postExecutionUnitOperations`
      (which was `propagatePendingCommands` before this PR). The new function fires the post jobs
      and then calls `propagatePendingCommands`.
      
      If the callback performs more operations that trigger more key space notifications, those
      notifications might register more callbacks. Those callbacks will be added to the end
      of the `modulePostExecUnitJobs` list and will be invoked atomically after the current callback ends.
      This raises a concern of entering an infinite loop; we consider an infinite loop a logical bug
      that needs to be fixed in the module. An attempt to protect against infinite loops by halting the
      execution could violate the feature's correctness, so **Redis will make no attempt
      to protect the module from infinite loops**.
      
      In addition, currently key space notifications are not nested. Some modules might want to allow
      nesting key-space notifications. To allow that and keep backward compatibility, we introduce a
      new module option called `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`.
      Setting this option disables the Redis key-space notification nesting protection and passes
      this responsibility to the module.
      
      ### Redis infrastructure
      
      This PR promotes the existing `propagatePendingCommands` to an "Execution Unit" concept,
      which is called after each atomic unit of execution.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
      Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
  25. 17 Nov, 2022 1 commit
    • Introduce Shard IDs to logically group nodes in cluster mode (#10536) · 203b12e4
      Ping Xie authored
      1. Added a new "shard_id" field to "cluster nodes" output and nodes.conf after "hostname"
      2. Added a new PING extension to propagate "shard_id"
      3. Handled upgrade from pre-7.2 releases automatically
      4. Refactored PING extension assembling/parsing logic
      
      Behavior of Shard IDs:
      
      Replicas will always follow the shards of their reported primaries. If a primary updates its shard ID, the replica will follow. (This need not hold for cluster v2.) This is not an expected use case.
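      Following the nodes.conf format shown earlier in this log, a line with the new aux field might look like this (all values are illustrative):
      ```
      <node-id> <ip>:<port>@<cport>,<hostname>,shard-id=3246546cd47c9274dc6cc85f4b0e5bd0a9ba46be myself,master - 0 0 0 connected 0-5460
      ```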
  26. 02 Nov, 2022 2 commits
  27. 27 Oct, 2022 1 commit
    • Refactor and (internally) rebrand from pause-clients to pause-actions (#11098) · c0d72262
      Moti Cohen authored
      Renamed from "Pause Clients" to "Pause Actions" since the mechanism can pause
      several actions in redis, not just clients (e.g. eviction, expiration).
      
      Previously, each pause purpose (which has a timeout that's tracked separately from other purposes)
      also implicitly dictated what it pauses (reads, writes, eviction, etc.). Now it is explicit, and
      the actions that are paused (bit flags) are defined separately from the purpose.
      
      - Previously, when using the pause-clients feature, it also implicitly meant making the server static:
        - Pause replica traffic
        - Pauses eviction processing
        - Pauses expire processing
      
      Making the server static is used also for failover and shutdown. This PR internally rebrands
      the pause-clients API as the pause-actions API. It also simplifies the pauseClients structure
      by replacing the pointer array with a static array.
      
      The context of this PR is to add another trigger to pause-clients, which will be activated in case
      of OOM as a throttling mechanism ([see here](https://github.com/redis/redis/issues/10907)).
      In this case we want to pause only the client and eviction actions.
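      A hedged sketch of the explicit action bits described above (constant names are illustrative, not the exact Redis definitions):
      ```c
      #define PAUSE_ACTION_CLIENT_WRITE (1 << 0)
      #define PAUSE_ACTION_CLIENT_ALL   (1 << 1) /* implies CLIENT_WRITE */
      #define PAUSE_ACTION_EXPIRE       (1 << 2)
      #define PAUSE_ACTION_EVICT        (1 << 3)
      #define PAUSE_ACTION_REPLICA      (1 << 4)

      /* "Making the server static" (failover, shutdown) pauses everything. */
      #define PAUSE_ACTIONS_STATIC \
          (PAUSE_ACTION_CLIENT_ALL | PAUSE_ACTION_EXPIRE | \
           PAUSE_ACTION_EVICT | PAUSE_ACTION_REPLICA)

      /* The OOM-throttling purpose pauses only clients and eviction. */
      #define PAUSE_ACTIONS_OOM \
          (PAUSE_ACTION_CLIENT_ALL | PAUSE_ACTION_EVICT)
      ```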
  28. 16 Oct, 2022 1 commit
  29. 12 Oct, 2022 1 commit
    • Fix crash on RM_Call inside module load (#11346) · eb6accad
      Meir Shpilraien (Spielrein) authored
      PR #9320 introduced initialization order changes: the cluster is now initialized after modules.
      This change causes a crash if a module uses RM_Call inside the load function
      in cluster mode (the code will try to access `server.cluster`, which at this point is NULL).
      
      To solve it, cluster initialization is separated into 2 phases:
      1. Structure initialization, which happens before module initialization.
      2. Listener initialization, which happens after.
      
      A test was added to verify the fix.
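      A hedged sketch of the split (the second function's name and both bodies are illustrative):
      ```c
      /* Phase 1: runs before modules load, so RM_Call during module load can
       * already see server.cluster. */
      void clusterInit(void) {
          /* allocate server.cluster, load nodes.conf, build data structures */
      }

      /* Phase 2: runs after modules load. */
      void clusterInitListeners(void) {
          /* bind the cluster bus port and install its event handlers */
      }
      ```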
  30. 09 Oct, 2022 1 commit
    • Freeze time sampling during command execution, and scripts (#10300) · 35b3fbd9
      Binbin authored
      Freeze time during execution of scripts and all other commands.
      This means that a key is either expired or not, and doesn't change
      state during a script execution. resolves #10182
      
      This PR adds a new `commandTimeSnapshot` function.
      The function logic is extracted from `keyIsExpired`, but the related
      calls to `fixed_time_expire` and `mstime()` are removed; see below.
      
      In commands, we avoid calling `mstime()` multiple times
      and just use the one sampled in call(). The background is that,
      e.g., using `PEXPIRE 1` with valgrind sometimes results in the key
      being deleted rather than expired. The reason is that both the `PEXPIRE`
      command and `checkAlreadyExpired` call `mstime()` separately.
      
      There are other more important changes in this PR:
      1. Eliminate `fixed_time_expire`, it is no longer needed. 
         When we want to sample time we should always use a time snapshot. 
         We will use `in_nested_call` instead to update the cached time in `call`.
      2. Move the call to `updateCachedTime` from `serverCron` to `afterSleep`.
          Now `commandTimeSnapshot` will always return the sample time; the
          `lookupKeyReadWithFlags` call in `getNodeByQuery` would otherwise get an outdated
          cached time (because `processCommand` is outside the `call` context).
          We put the call to `updateCachedTime` in `afterSleep`.
      3. Cache the time each time the module locks Redis.
          Call `updateCachedTime` in `moduleGILAfterLock`, affecting `RM_ThreadSafeContextLock`
          and `RM_ThreadSafeContextTryLock`.
      
      Currently the commandTimeSnapshot change affects the following TTL commands:
      - SET EX / SET PX
      - EXPIRE / PEXPIRE
      - SETEX / PSETEX
      - GETEX EX / GETEX PX
      - TTL / PTTL
      - EXPIRETIME / PEXPIRETIME
      - RESTORE key TTL
      
      And other commands just use the cached mstime (including TIME).
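      A hedged, self-contained sketch of the snapshot idea (Redis' actual code lives in call() and keyIsExpired(); these helpers are illustrative):
      ```c
      #include <sys/time.h>

      static long long cached_mstime; /* sampled once per command call */

      static long long mstime_now(void) {
          struct timeval tv;
          gettimeofday(&tv, NULL);
          return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
      }

      /* Sample once when the command starts... */
      void on_command_start(void) { cached_mstime = mstime_now(); }

      /* ...so every expiry check inside the same command (or script) sees
       * the same clock: a key is either expired or not, never both. */
      int key_is_expired(long long expire_at_ms) {
          return expire_at_ms >= 0 && expire_at_ms < cached_mstime;
      }
      ```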
      
      This is considered to be a breaking change since it can break a script
      that uses a loop to wait for a key to expire.
  31. 03 Oct, 2022 1 commit
    • Stabilize cluster hostnames tests (#11307) · 663fbd34
      Madelyn Olson authored
      This PR introduces a couple of changes to improve cluster test stability:
      1. Increase the cluster node timeout to 3 seconds, which is similar to the
         normal cluster tests, but introduce a new mechanism to increase the ping
         period so that the tests are still fast. This new config is a debug config.
      2. Set `cluster-replica-no-failover yes` on a wider array of tests which are
         sensitive to failovers. This was occurring on the ARM CI.
  32. 02 Oct, 2022 1 commit
    • code, typo and comment cleanups (#11280) · 3c02d1ac
      Binbin authored
      - fix `the the` typo
      - `LPOPRPUSH` does not exist, should be `RPOPLPUSH`
      - `CLUSTER GETKEYINSLOT`'s time complexity should be O(N)
      - `there bytes` should be `three bytes`, this closes #11266
      - change the `slave` word to `replica` in a log message (the front had been modified but the back was missed)
      - remove useless aofReadDiffFromParent in server.h
      - `trackingHandlePendingKeyInvalidations` method adds a void parameter
  33. 30 Sep, 2022 1 commit
  34. 22 Sep, 2022 1 commit
    • Fix CLUSTER SHARDS showing empty hostname (#11297) · 1de675b3
      Binbin authored
      In #10290, we changed the clusterNode hostname from `char*`
      to `sds`, and the old `node->hostname` check was changed to
      `sdslen(node->hostname)!=0`.
      
      But this change was missed in `addNodeDetailsToShardReply`,
      so the CLUSTER SHARDS command returns an empty string
      hostname when it is unavailable.
      
      Like this (note that we listed it as optional in the doc):
      ```
       9) "hostname"
      10) ""
      ```
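      A hedged sketch of the missing check (the reply helpers are the usual Redis ones; surrounding code is omitted):
      ```c
      /* Inside addNodeDetailsToShardReply: only emit the optional "hostname"
       * field when one is actually set, mirroring the sdslen() checks added
       * elsewhere in #10290. */
      if (sdslen(node->hostname) != 0) {
          addReplyBulkCString(c, "hostname");
          addReplyBulkCBuffer(c, node->hostname, sdslen(node->hostname));
      }
      ```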
  35. 14 Sep, 2022 1 commit
  36. 13 Sep, 2022 1 commit
  37. 05 Sep, 2022 1 commit
  38. 28 Aug, 2022 1 commit