1. 04 Jan, 2023 1 commit
  2. 01 Jan, 2023 1 commit
• reprocess command when client is unblocked on keys (#11012) · 383d902c
      ranshid authored
      *TL;DR*
      ---------------------------------------
Following the discussion in issue [#7551](https://github.com/redis/redis/issues/7551),
we decided to refactor the client blocking code to eliminate some of the code duplication
and to rebuild the infrastructure so it better supports future key blocking cases.
      
      
      *In this PR*
      ---------------------------------------
1. Reprocess the command once a client becomes unblocked on a key (instead of running
   custom code for the unblocked path that differs from the one that would have run if
   blocking wasn't needed).
2. Eliminate some (now) irrelevant code for handling unblocking of lists/zsets/streams etc...
3. Modify some tests to intercept errors on reprocessing after unblock (see
   details in the notes section below).
4. Replace '$' in the client argv with the current stream id, since once we reprocess the stream
   XREAD we need to read from the last message and not wait for a new one, in order to prevent
   an endless block loop.
5. Added statistics to the info "Clients" section to report:
   * `total_blocking_keys` - number of blocking keys
   * `total_blocking_keys_on_nokey` - number of blocking keys which have at least 1 client
     that would like to be unblocked when the key is deleted.
6. Avoid expiring the unblocked key during unblock. Previously we used to look up the unblocked key,
   which might have been expired during the lookup. Now we look up the key using NOTOUCH and
   NOEXPIRE to avoid deleting it at this point, so propagating commands in blocked.c is no longer needed.
7. Deprecated command flags. We decided to remove CMD_CALL_STATS and CMD_CALL_SLOWLOG
   and make an explicit check in the call() function in order to decide whether a stats update should take place.
   This should simplify the logic and also mitigate existing issues: for example, module calls which are
   triggered as part of AOF loading might still report stats even though they are called during AOF loading.
      
      *Behavior changes*
      ---------------------------------------------------
      
1. As this implementation avoids dedicated code for handling unblocked streams/lists/zsets,
and instead re-processes the command once the client is unblocked, some errors will be reported differently.
The old implementation used to issue
``UNBLOCKED the stream key no longer exists``
in the following cases:
   - The stream key has been deleted (i.e. by calling DEL)
   - The stream and group existed but the key type was changed by overriding it (i.e. with the SET command)
   - The key no longer exists after a SWAPDB with a db which does not contain this key
   - After SWAPDB, when the new db has this key but with a different type.
   
In the new implementation the reported errors will be the same as if the command were processed after the fact:
**NOGROUP** - in case the key no longer exists, or **WRONGTYPE** in case the key was overridden with a different type.
      
2. Reprocessing the command means that some checks will be re-evaluated once the
client is unblocked.
For example, ACL rules might have changed since the command was originally executed,
causing it to fail once the client is unblocked.
Another example is the OOM condition check, which might allow the command to run and
block but fail the command on reprocessing once the client is unblocked.
      
3. One of the changes in this PR is that no command stats are updated while the
command is blocked (all stats will be updated once the client is unblocked). This implies
that when many clients are blocked, users will no longer be able to get that information
from the command stats. However, the information can still be gathered from the client list.
      
      **Client blocking**
      ---------------------------------------------------
      
Blocking on a key will still be triggered the same way as it is done today.
In order to block the current client on a list of keys, the call to
blockForKeys will still need to be made, and it will perform the same steps as it does today
(see the sketch after the list):
      
      *  add the client to the list of blocked clients on each key
      *  keep the key with a matching list node (position in the global blocking clients list for that key)
         in the client private blocking key dict.
      *  flag the client with CLIENT_BLOCKED
      *  update blocking statistics
      *  register the client on the timeout table
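
A minimal sketch of those steps in C for illustration only; lookupBlockedClientsList() is a
hypothetical helper, and the real blockForKeys() in blocked.c differs in details:

```c
#include "server.h"  /* sketch assumes Redis internals: client, dict, list, robj */

extern list *lookupBlockedClientsList(redisDb *db, robj *key); /* hypothetical helper */

/* Sketch only: illustrates the five steps above, not the actual code. */
static void blockForKeysSketch(client *c, robj **keys, int numkeys) {
    for (int i = 0; i < numkeys; i++) {
        /* Add the client to the per-key list of blocked clients. */
        list *clients = lookupBlockedClientsList(c->db, keys[i]);
        listAddNodeTail(clients, c);
        /* Keep the key -> list node mapping in the client's private dict. */
        dictAdd(c->bpop.keys, keys[i], listLast(clients));
    }
    c->flags |= CLIENT_BLOCKED;          /* flag the client as blocked     */
    server.blocked_clients++;            /* update blocking statistics     */
    addClientToTimeoutTable(c);          /* register on the timeout table  */
}
```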
      
      **Key Unblock**
      ---------------------------------------------------
      
Unblocking a specific key will be triggered (same as today) by calling signalKeyAsReady.
The implementation in that part will stay the same as today - adding the key to the global readyList.
The reason to maintain the readyList (as opposed to iterating over all clients blocked on the specific key)
is to keep the signal operation as short as possible, since it is called during command processing.
The main change is that instead of going through a dedicated code path that executes the blocked command,
we will just call processPendingCommandsAndResetClient.
      
      **ClientUnblock (keys)**
      ---------------------------------------------------
      
1. Unblocking clients on keys will be triggered after the command is
   processed and during beforeSleep.
2. The general schema is, for each key *k* in the readyList:
```
For each client *c* which is blocked on *k*:
    in case either:
        1. *k* exists AND the *k* type matches the current client blocking type
           OR
        2. *k* exists and *c* is blocked on a module command
           OR
        3. *k* does not exist and *c* was blocked with the flag
           unblock_on_deleted_key
    do:
        1. remove the client from the list of clients blocked on this key
        2. remove the blocking list node from the client blocking key dict
        3. remove the client from the timeout list
        4. queue the client on the unblocked_clients list
        5. *NEW*: call processCommandAndResetClient(c);
```
*NOTE:* for module blocked clients we will still call moduleUnblockClientByHandle,
              which will queue the client for processing in the moduleUnblockedClients list.
      
      **Process Unblocked clients**
      ---------------------------------------------------
      
The processing of all unblocked clients is done in beforeSleep and no change is planned
in that part.
      
The general schema will be (see the sketch after this list):
      For each client *c* in server.unblocked_clients:
      
              * remove client from the server.unblocked_clients
              * set back the client readHandler
              * continue processing the pending command and input buffer.
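
A minimal sketch of that loop in C, assuming the existing server.unblocked_clients list and the
helpers named elsewhere in this description (processPendingCommandAndInputBuffer is the renamed
helper from PR #10413 below); the real processUnblockedClients() differs in details:

```c
#include "server.h"  /* sketch assumes Redis internals */

/* Sketch only: conceptually mirrors the schema above. */
void processUnblockedClientsSketch(void) {
    while (listLength(server.unblocked_clients)) {
        listNode *ln = listFirst(server.unblocked_clients);
        client *c = listNodeValue(ln);
        /* Remove the client from the unblocked list and clear the flag. */
        listDelNode(server.unblocked_clients, ln);
        c->flags &= ~CLIENT_UNBLOCKED;
        /* Re-install the read handler so new input is processed again. */
        if (c->conn) connSetReadHandler(c->conn, readQueryFromClient);
        /* Continue with the pending command and any buffered input. */
        if (!(c->flags & CLIENT_BLOCKED))
            processPendingCommandAndInputBuffer(c);
    }
}
```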
      
      *Some notes regarding the new implementation*
      ---------------------------------------------------
      
1. Although it was proposed, it is currently difficult to remove the
   read handler from the client while it is blocked.
   The reason is that a blocked client should be unblocked when it is
   disconnected, or we might consume data into the void.
      
2. While this PR mainly keeps the current blocking logic as-is, there
   are some future additions to the infrastructure that we would
   like to have:
   - allow non-preemptive blocking of a client - sometimes a new kind of
     blocking can be expected not to be preempted. For example, imagine
     we hold some keys on disk, and when a command needs to process them
     it blocks until the keys are loaded. In this case we will want the
     client to not be disconnected or unblocked until the process is
     completed (remove the client read handler, prevent client timeout,
     disable unblock via the DEBUG command, etc...).
   - allow generic blocking based on command-declared keys - we might
     want to add a hook before command processing to check if any of the
     declared keys require the command to block. This way it would be
     easier to add new kinds of key-based blocking mechanisms.
Co-authored-by: Oran Agra <oran@redislabs.com>
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
      383d902c
  3. 08 Dec, 2022 1 commit
  4. 07 Dec, 2022 1 commit
• Optimize client memory usage tracking operation while client eviction is disabled (#11348) · c0267b3f
      Harkrishn Patro authored
      
      
      ## Issue
During client input/output buffer processing, the memory usage is
incrementally updated in order to track clients going beyond the
`maxmemory-clients` threshold and evict them. However, this additional
tracking activity wastes CPU cycles when no client eviction is
required. That is the case in two situations.

* `maxmemory-clients` is set to `0`, which equates to no client eviction
  (applicable to all clients)
* `CLIENT NO-EVICT` is set to `ON`, which makes a particular
  client exempt from eviction.
      
      ## Solution
      * Disable client memory usage tracking during the read/write flow when
        `maxmemory-clients` is set to `0` or `client no-evict` is `on`.
        The memory usage is tracked only during the `clientCron` i.e. it gets
        periodically updated.
      * Cleanup the clients from the memory usage bucket when client eviction
        is disabled.
* When the maxmemory-clients config is enabled or disabled at runtime,
  we immediately update the memory usage buckets for all clients (in testing,
  scanning 80000 clients took some 20ms).

Benchmarks showed that this can improve performance by about 5% in
certain situations.
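
A minimal sketch of the gating logic, using a hypothetical clientEvictionAllowedSketch() helper;
the real condition in the PR may be expressed differently:

```c
#include "server.h"  /* sketch assumes Redis internals */

/* Sketch only: skip per-read/write memory tracking when eviction can't happen. */
static int clientEvictionAllowedSketch(client *c) {
    if (server.maxmemory_clients == 0) return 0;   /* eviction disabled globally   */
    if (c->flags & CLIENT_NO_EVICT) return 0;      /* CLIENT NO-EVICT ON           */
    return 1;
}

void afterReadOrWriteSketch(client *c) {
    /* Only pay the tracking cost when the result can actually be used;
     * otherwise the usage is refreshed periodically by clientCron. */
    if (clientEvictionAllowedSketch(c))
        updateClientMemUsage(c);
}
```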
Co-authored-by: Oran Agra <oran@redislabs.com>
      c0267b3f
  5. 29 Nov, 2022 1 commit
• Reduce eval related overhead introduced in v7.0 by evalCalcFunctionName (#11521) · 7dfd7b91
      filipe oliveira authored
      
      
As discussed in #10981, we see a performance degradation between
Redis v6.2 and v7.0 on the EVAL command.
      
      After profiling the current unstable branch we can see that we call the
      expensive function evalCalcFunctionName twice. 
      
      The current "fix" is to basically avoid calling evalCalcFunctionName and
      even dictFind(lua_scripts) twice for the same command.
      Instead we cache the current script's dictEntry (for both Eval and Functions)
      in the current client so we don't have to repeat these calls.
      The exception would be when doing an EVAL on a new script that's not yet
      in the script cache. in that case we will call evalCalcFunctionName (and even
      evalExtractShebangFlags) twice.
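
A minimal sketch of the caching idea; cached_script stands in for the per-client field added by
the PR (the name and exact location are assumptions), and dict/sds are Redis's own types:

```c
#include "dict.h"
#include "sds.h"

/* Sketch only: cache the script's dictEntry so the expensive SHA -> script
 * lookup (and function-name calculation) happens once per command. */
dictEntry *lookupScriptCachedSketch(dict *lua_scripts, sds sha,
                                    dictEntry **cached_script) {
    if (*cached_script &&
        sdscmp((sds)dictGetKey(*cached_script), sha) == 0)
        return *cached_script;                   /* fast path: reuse the entry */
    dictEntry *de = dictFind(lua_scripts, sha);  /* slow path, taken once      */
    *cached_script = de;                         /* remembered on the client   */
    return de;
}
```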
Co-authored-by: Oran Agra <oran@redislabs.com>
      7dfd7b91
  6. 04 Nov, 2022 1 commit
• Introduce socket shutdown into connection type, used if a fork is active (#11376) · fac188b4
      Binbin authored
      Introduce socket `shutdown()` into connection type, and use it
      on normal socket if a fork is active. This allows us to close
      client connections when there are child processes sharing the
      file descriptors.
      
      Fixes #10077. The reason is that since the `fork()` child is holding
      the file descriptors, the `close` in `unlinkClient -> connClose`
      isn't sufficient. The client will not realize that the connection is
      disconnected until the child process ends.
      
      Let's try to be conservative and only use shutdown when the fork is active.
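
A standalone sketch of the idea: shutdown() tears the connection down for every process sharing
the descriptor, while close() only drops this process's reference (the fork_is_active check stands
in for whatever condition the real code uses):

```c
#include <sys/socket.h>
#include <unistd.h>

/* Sketch only: close a client socket that may also be open in a forked child. */
static void closeClientFdSketch(int fd, int fork_is_active) {
    /* close() alone won't end the TCP connection while a forked child still
     * holds a duplicate of this descriptor; shutdown() affects all copies. */
    if (fork_is_active)
        shutdown(fd, SHUT_RDWR);
    close(fd);
}
```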
      fac188b4
  7. 27 Oct, 2022 1 commit
• Refactor and (internally) rebrand from pause-clients to pause-actions (#11098) · c0d72262
      Moti Cohen authored
      Renamed from "Pause Clients" to "Pause Actions" since the mechanism can pause
      several actions in redis, not just clients (e.g. eviction, expiration).
      
      Previously each pause purpose (which has a timeout that's tracked separately from others purposes),
      also implicitly dictated what it pauses (reads, writes, eviction, etc). Now it is explicit, and
      the actions that are paused (bit flags) are defined separately from the purpose.
      
- Previously, using the pause-clients feature also implicitly meant making the server static:
  - Pause replica traffic
  - Pause eviction processing
  - Pause expire processing

Making the server static is also used for failover and shutdown. This PR internally rebrands
the pause-clients API to a pause-actions API. It also simplifies the pauseClients structure
by replacing a pointers array with a static array.

The context of this PR is to add another trigger to pause clients, which will be activated in case
of OOM as a throttling mechanism ([see here](https://github.com/redis/redis/issues/10907)).
In that case we want to pause only the client and eviction actions (see the sketch below).
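
A minimal sketch of the "explicit actions as bit flags" idea described above; the flag names and
the pauseActionsSketch() prototype are illustrative, not the PR's actual API:

```c
#include <stdint.h>

/* Sketch only: pause purposes and paused actions are decoupled. */
#define PAUSE_ACTION_CLIENT_WRITE  (1 << 0)
#define PAUSE_ACTION_CLIENT_ALL    (1 << 1)
#define PAUSE_ACTION_EXPIRE        (1 << 2)
#define PAUSE_ACTION_EVICT         (1 << 3)
#define PAUSE_ACTION_REPLICA       (1 << 4)

/* "Make the server static" (failover/shutdown) pauses everything ... */
#define PAUSE_ACTIONS_ALL (PAUSE_ACTION_CLIENT_ALL | PAUSE_ACTION_EXPIRE | \
                           PAUSE_ACTION_EVICT | PAUSE_ACTION_REPLICA)
/* ... while the planned OOM-throttling trigger would pause only these: */
#define PAUSE_ACTIONS_OOM (PAUSE_ACTION_CLIENT_ALL | PAUSE_ACTION_EVICT)

/* Hypothetical call: each purpose tracks its own end time, but now states
 * explicitly which actions it pauses. */
void pauseActionsSketch(int purpose, long long end_ms, uint32_t actions_bitmask);
```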
      c0d72262
  8. 16 Oct, 2022 1 commit
• Fixes build warning when CACHE_LINE_SIZE is already defined. (#11389) · 871cc200
      David CARLIER authored
      * Fixes build warning when CACHE_LINE_SIZE is already defined
      * Fixes wrong CACHE_LINE_SIZE on some FreeBSD systems where it could be set to 128 (e.g. on MIPS)
      * Fixes wrong CACHE_LINE_SIZE on Apple M1 (use 128 instead of 64)
      
A wrong cache line size in those cases can cause false sharing of array elements between threads, see #10892
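
A standalone sketch of the kind of guard involved; the exact platform checks in the commit may differ:

```c
/* Sketch only: define CACHE_LINE_SIZE only if the platform headers didn't,
 * and pick a larger value where the hardware line is known to be 128 bytes. */
#ifndef CACHE_LINE_SIZE
#if defined(__aarch64__) && defined(__APPLE__)
#define CACHE_LINE_SIZE 128   /* Apple M1: 128-byte cache lines */
#else
#define CACHE_LINE_SIZE 64
#endif
#endif

/* Padding structures to CACHE_LINE_SIZE avoids false sharing of adjacent
 * array elements that are updated by different threads. */
struct padded_counter {
    long value;
    char pad[CACHE_LINE_SIZE - sizeof(long)];
};
```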
      871cc200
  9. 15 Oct, 2022 1 commit
• optimizing d2string() and addReplyDouble() with grisu2: double to string... · 29380ff7
      filipe oliveira authored
      optimizing d2string() and addReplyDouble() with grisu2: double to string conversion based on Florian Loitsch's Grisu-algorithm (#10587)
      
All commands / use cases that heavily rely on double-to-string conversion
(e.g. taking a double-precision floating-point number like 1.5 and returning a string like "1.5")
could benefit from a performance boost by swapping snprintf(buf,len,"%.17g",value) for the
equivalent [fpconv_dtoa](https://github.com/night-shift/fpconv) or any other algorithm that ensures
100% conversion coverage.

This is a well-studied topic, and projects like MongoDB, Redpanda, and PyTorch leverage libraries
(fmtlib) that use an optimized double-to-string conversion underneath.
      
      
The positive impact can be substantial. This PR uses the grisu2 approach (grisu is explained in
section 5 of https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf).
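
A standalone sketch of the swap being described, assuming the fpconv_dtoa() interface of the
linked library (int fpconv_dtoa(double, char*), returning the number of characters written
without NUL termination); the actual d2string() integration handles more edge cases:

```c
#include <stdio.h>
#include <string.h>

/* Assumed interface of the linked fpconv library (not included here). */
int fpconv_dtoa(double d, char dest[24]);

/* Sketch only: shortest-round-trip formatting instead of "%.17g". */
size_t d2string_sketch(char *buf, size_t len, double value) {
    char tmp[24];
    if (len == 0) return 0;
    int n = fpconv_dtoa(value, tmp);      /* grisu2-based, no NUL terminator */
    if ((size_t)n >= len) n = (int)len - 1;
    memcpy(buf, tmp, (size_t)n);
    buf[n] = '\0';
    return (size_t)n;
    /* Previously: return snprintf(buf, len, "%.17g", value); */
}
```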
      
      test suite changes:
      Despite being compatible, in some cases it produces a different result from printf, and...
      29380ff7
  10. 22 Sep, 2022 1 commit
• Add RM_SetContextUser to support acl validation in RM_Call (and scripts) (#10966) · 6e993a5d
      Shaya Potter authored
Adds a number of user management/ACL validation/command execution functions to improve a
Redis module's ability to enforce ACLs correctly and easily.
      
      * RM_SetContextUser - sets a RedisModuleUser on the context, which RM_Call will use to both
        validate ACLs (if requested and set) as well as assign to the client so that scripts executed via
        RM_Call will have proper ACL validation.
* RM_SetModuleUserACLString - Enables one to pass an entire ACL string, not just a single op,
  and have it applied to the user
      * RM_GetModuleUserACLString - returns a stringified version of the user's ACL (same format as dump
        and list).  Contains an optimization to cache the stringified version until the underlying ACL is modified.
* Slightly re-purpose the "C" flag to RM_Call from just being an ACL check before calling the
  command, to actually running the command as the right user, so that it also affects commands
  inside EVAL scripts (see #11231 and the usage sketch below).
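
A minimal usage sketch of a module command that runs RM_Call under a restricted user; the exact
signatures (notably RedisModule_SetContextUser) and the ACL ops are assumptions based on the API
names above, and a real module would typically keep the user alive for its whole lifetime:

```c
#include "redismodule.h"

int RestrictedGet_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);

    /* Hypothetical ACL: read-only access to keys starting with "public:". */
    RedisModuleUser *user = RedisModule_CreateModuleUser("sketch-user");
    RedisModule_SetModuleUserACL(user, "on");
    RedisModule_SetModuleUserACL(user, "+get");
    RedisModule_SetModuleUserACL(user, "~public:*");

    /* Attach the user to the context; with the "C" flag, RM_Call (and any
     * script it runs) is ACL-checked and executed as this user. */
    RedisModule_SetContextUser(ctx, user);
    RedisModuleCallReply *reply = RedisModule_Call(ctx, "GET", "Cs", argv[1]);
    if (reply) {
        RedisModule_ReplyWithCallReply(ctx, reply);
        RedisModule_FreeCallReply(reply);
    } else {
        RedisModule_ReplyWithError(ctx, "NOPERM sketch: ACL check failed");
    }
    RedisModule_FreeModuleUser(user);
    return REDISMODULE_OK;
}
```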
      6e993a5d
  11. 15 Sep, 2022 1 commit
  12. 26 Aug, 2022 1 commit
  13. 23 Aug, 2022 1 commit
• [PERF] use snprintf once in addReplyDouble (#11093) · 90223759
      Ariel Shtul authored
The previous implementation calls `snprintf` twice, using the second call to
'memcpy' the output of the first, which could be a very large string.
The new implementation reserves space for the protocol header ahead
of the formatted double, and then prepends the string length ahead of it.

Measured improvement on a simple ZADD of some 25%.
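
A standalone sketch of the trick: format the double once into a buffer that reserves room in
front of it for the RESP header, then place the length prefix in that reserved space so the
(potentially long) digit string is never re-formatted or re-copied by a second large snprintf.
The real addReplyDouble also handles RESP3 and infinities:

```c
#include <stdio.h>
#include <string.h>

/* Sketch only: build "$<len>\r\n<double>\r\n" with a single double formatting. */
size_t format_bulk_double_sketch(char *out, size_t outlen, double d) {
    char buf[64];
    const int prefix = 7;  /* worst-case room for "$NN\r\n" in front of the digits */
    /* 1. Format the double once, directly after the reserved header area. */
    int dlen = snprintf(buf + prefix, sizeof(buf) - prefix - 2, "%.17g", d);
    /* 2. Build the short header and place it flush against the digits. */
    char hdr[16];
    int hlen = snprintf(hdr, sizeof(hdr), "$%d\r\n", dlen);
    char *start = buf + prefix - hlen;
    memcpy(start, hdr, (size_t)hlen);
    /* 3. Terminate the bulk string with CRLF. */
    memcpy(buf + prefix + dlen, "\r\n", 2);
    size_t total = (size_t)hlen + (size_t)dlen + 2;
    if (total > outlen) return 0;
    memcpy(out, start, total);   /* stands in for appending to the reply buffer */
    return total;
}
```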
      90223759
  14. 22 Aug, 2022 4 commits
• Introduce unix socket connection type · eb94d6d3
      zhenwei pi authored
      
      
Unix sockets use a different accept handler / listener creation than TCP;
to hide these differences and avoid hard-coded special cases, use a new
unix socket connection type. Also move 'acceptUnixHandler' into unix.c.

Currently, the connection framework looks like the following:
      
                         uplayer
                            |
                     connection layer
                       /    |     \
                     TCP   Unix   TLS
      
      It's possible to build Unix socket support as a shared library, and
      load it dynamically. Because TCP and Unix socket don't require any
      heavy dependencies or overheads, we build them into Redis statically.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      eb94d6d3
• Abstract accept handler · 0ae02ce9
      zhenwei pi authored
      
      
Abstract the accept handler for socket & TLS, and add a helper function
'connAcceptHandler' to get the accept handler of a specified type.
      
      Also move acceptTcpHandler into socket.c, and move
      acceptTLSHandler into tls.c.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      0ae02ce9
• Fully abstract connection type · 1234e3a5
      zhenwei pi authored
      
      
Abstract a common interface for connection types, so Redis can hide the
implementation and the upper layer only calls the connection API, without macros.
      
                     uplayer
                        |
                 connection layer
                   /          \
                socket        TLS
      
      Currently, for both socket and TLS, all the methods of connection type
      are declared as static functions.
      
It's possible to build TLS (or even the socket type) as a shared library, and have Redis
load it dynamically, as a next step.
      
      Also add helper function connTypeOfCluster() and
      connTypeOfReplication() to simplify the code:
      link->conn = server.tls_cluster ? connCreateTLS() : connCreateSocket();
      -> link->conn = connCreate(connTypeOfCluster());
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      1234e3a5
• Introduce connAddr · bff7ecc7
      zhenwei pi authored
      
      
Originally, connPeerToString was designed to get the address info from
a socket only (for both TCP & TLS), and the API 'connPeerToString' is
oriented to operating on an FD, like:
      int connPeerToString(connection *conn, char *ip, size_t ip_len, int *port) {
          return anetFdToString(conn ? conn->fd : -1, ip, ip_len, port, FD_TO_PEER_NAME);
      }
      
      Introduce connAddr and implement .addr method for socket and TLS,
      thus the API 'connAddr' and 'connFormatAddr' become oriented to a
      connection like:
      static inline int connAddr(connection *conn, char *ip, size_t ip_len, int *port, int remote) {
          if (conn && conn->type->addr) {
              return conn->type->addr(conn, ip, ip_len, port, remote);
          }
      
          return -1;
      }
      
      Also remove 'FD_TO_PEER_NAME' & 'FD_TO_SOCK_NAME', use a boolean type
      'remote' to get local/remote address of a connection.
      
With these changes, it's possible to support other connection
types which do not use a socket (e.g. RDMA).
      
      Thanks to Oran for suggestions!
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      bff7ecc7
  15. 04 Aug, 2022 1 commit
  16. 18 Jul, 2022 1 commit
• Avoid using unsafe C functions (#10932) · eacca729
      ranshid authored
      replace use of:
      sprintf --> snprintf
      strcpy/strncpy  --> redis_strlcpy
      strcat/strncat  --> redis_strlcat
      
**why are we making this change?**
Much of the code uses unsafe variants or deprecated buffer handling
functions.
While most cases are probably not presenting any issue on the known paths,
programming errors and unterminated strings might lead to potential
buffer overflows which are not covered by tests.
      
      **As part of this PR we change**
1. added an implementation for redis_strlcpy and redis_strlcat based on the strl implementation: https://linux.die.net/man/3/strl (see the sketch after this list)
      2. change all occurrences of use of sprintf with use of snprintf
      3. change occurrences of use of  strcpy/strncpy with redis_strlcpy
      4. change occurrences of use of strcat/strncat with redis_strlcat
5. change the behavior of ll2string/ull2string/ld2string so that they always place a null
  terminator ('\0') in the first index of the output buffer. This was done in order to make
  the use of these functions safer in cases where the caller does not check the output
  returned by them (for example in rdbRemoveTempFile)
6. added a compiler directive to issue a deprecation error in case a use of
  sprintf/strcpy/strcat is found during compilation, which will result in an error at compile time.
  However, keep in mind that since the deprecation attribute is not supported on all compilers,
  this is expected to fail during push workflows.
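
A minimal sketch of strlcpy-style semantics (always NUL-terminate when the destination has any
space, and return the length the caller wanted so truncation can be detected); the actual
redis_strlcpy added by the PR may differ in details:

```c
#include <stddef.h>

/* Sketch only: strlcpy-like copy.  Unlike strncpy, the destination is always
 * NUL-terminated (when dsize > 0), and the return value lets callers detect
 * truncation: the copy was truncated iff the return value >= dsize. */
size_t redis_strlcpy_sketch(char *dst, const char *src, size_t dsize) {
    size_t srclen = 0;
    while (src[srclen] != '\0') srclen++;            /* strlen(src) */
    if (dsize != 0) {
        size_t copylen = srclen < dsize - 1 ? srclen : dsize - 1;
        for (size_t i = 0; i < copylen; i++) dst[i] = src[i];
        dst[copylen] = '\0';
    }
    return srclen;                                   /* length caller wanted */
}
```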
      
      
**NOTE:** this is only an initial milestone. We might also consider
using the *_s implementations provided by the C11 extensions (however these are not
yet widely supported). I would also suggest starting to
look at static code analyzers to track unsafe use cases.
For example the LLVM clang checker supports security.insecureAPI.DeprecatedOrUnsafeBufferHandling,
which can help locate unsafe function usage.
https://clang.llvm.org/docs/analyzer/checkers.html#security-insecureapi-deprecatedorunsafebufferhandling-c
The main reason not to onboard it at this stage is that the alternative
accepted by clang is to use the C11 extensions, which are not always
supported by the stdlib.
      eacca729
  17. 12 Jul, 2022 1 commit
  18. 04 Jul, 2022 1 commit
  19. 28 Jun, 2022 1 commit
• Add sharded pubsub keychannel count to client info (#10895) · 35c2ee87
      jonnyomerredis authored
      When calling CLIENT INFO/LIST, and in various debug prints, Redis is printing
      the number of pubsub channels / patterns the client is subscribed to.
      With the addition of sharded pubsub, it would be useful to print the number of
      keychannels the client is subscribed to as well.
      35c2ee87
  20. 26 Jun, 2022 2 commits
  21. 01 Jun, 2022 1 commit
• Fix broken protocol in MISCONF error, RM_Yield bugs, RM_Call(EVAL) OOM check... · b2061de2
      Oran Agra authored
      Fix broken protocol in MISCONF error, RM_Yield bugs, RM_Call(EVAL) OOM check bug, and new RM_Call checks. (#10786)
      
      * Fix broken protocol when redis can't persist to RDB (general commands, not
        modules), excessive newline. regression of #10372 (7.0 RC3)
      * Fix broken protocol when Redis can't persist to AOF (modules and
        scripts), missing newline.
      * Fix bug in OOM check of EVAL scripts called from RM_Call.
        set the cached OOM state for scripts before executing module commands too,
        so that it can serve scripts that are executed by modules.
  i.e. in the past EVAL executed by RM_Call could have either falsely
  failed or falsely succeeded because of a wrong cached OOM state flag.
      * Fix bugs with RM_Yield:
        1. SHUTDOWN should only accept the NOSAVE mode
        2. Avoid eviction during yield command processing.
        3. Avoid processing master client commands while yielding from another client
* Add two new checks to RM_Call script mode.
        1. READONLY You can't write against a read only replica
        2. MASTERDOWN Link with MASTER is down and `replica-serve-stale-data` is set to `no`
      * Add new RM_Call flag to let redis automatically refuse `deny-oom` commands
        while over the memory limit. 
      * Add tests to cover various errors from Scripts, Modules, Modules
        calling scripts, and Modules calling commands in script mode.
      
      Add tests:
      * Looks like the MISCONF error was completely uncovered by the tests,
        add tests for it, including from scripts, and modules
      * Add tests for NOREPLICAS from scripts
      * Add tests for the various errors in module RM_Call, including RM_Call that
        calls EVAL, and RM_call in "eval mode". that includes:
        NOREPLICAS, READONLY, MASTERDOWN, MISCONF
      b2061de2
  22. 31 May, 2022 1 commit
• Adds isolated netstats for replication. (#10062) · bb1de082
      DarrenJiang13 authored
      
      
      The amount of `server.stat_net_output_bytes/server.stat_net_input_bytes`
      is actually the sum of replication flow and users' data flow. 
It may cause confusion like this:
"Why does my server get such a large output_bytes while I am doing nothing?".
      
After discussions and revisions, here is what this PR brings (final version before merge):
      - 2 server variables to count the network bytes during replication,
           including fullsync and propagate bytes.
           - `server.stat_net_repl_output_bytes`/`server.stat_net_repl_input_bytes`
      - 3 info fields to print the input and output of repl bytes and instantaneous
           value of total repl bytes.
           - `total_net_repl_input_bytes` / `total_net_repl_output_bytes`
           - `instantaneous_repl_total_kbps`
      - 1 new API `rioCheckType()` to check the type of rio. So we can use this
           to distinguish between diskless and diskbased replication
      - 2 new counting items to keep network statistics consistent between master
           and slave
          - rdb portion during diskless replica. in `rdbLoadProgressCallback()`
          - first line of the full sync payload. in `readSyncBulkPayload()`
Co-authored-by: Oran Agra <oran@redislabs.com>
      bb1de082
  23. 15 May, 2022 1 commit
  24. 26 Apr, 2022 2 commits
• Set replicas to panic on disk errors, and optionally panic on replication errors (#10504) · 6fa8e4f7
      Madelyn Olson authored
* Till now, replicas that were unable to persist would still execute the commands
  they got from the master; now they'll panic by default, and we add a new
  `replica-ignore-disk-errors` config to change that.
* Till now, when a command failed on a replica or during AOF-loading, it only logged a
  warning and a stat; we add a new `propagation-error-behavior` config to allow
  panicking in that state (it may become the default one day)
      
      Note that commands that fail on the replica can either indicate a bug that could
      cause data inconsistency between the replica and the master, or they could be
      in some cases (specifically in previous versions), a result of a command (e.g. EVAL)
      that failed on the master, but still had to be propagated to fail on the replica as well.
      6fa8e4f7
• By default prevent cross slot operations in functions and scripts with # (#10615) · efcd1bf3
      Madelyn Olson authored
      Adds the `allow-cross-slot-keys` flag to Eval scripts and Functions to allow
      scripts to access keys from multiple slots.
      The default behavior is now that they are not allowed to do that (unlike before).
      This is a breaking change for 7.0 release candidates (to be part of 7.0.0), but
      not for previous redis releases since EVAL without shebang isn't doing this check.
      
      Note that the check is done on both the keys declared by the EVAL / FCALL command
      arguments, and also the ones used by the script when making a `redis.call`.
      
A note about the implementation: there seems to have been some confusion
about allowing access to non-local keys. I thought I missed something in our
wider conversation, but Redis scripts do block access to non-local keys.
So the issue was just about cross-slot keys being accessed.
      efcd1bf3
  25. 25 Apr, 2022 2 commits
  26. 11 Apr, 2022 1 commit
• Durability enhancement for appendfsync=always policy (#9678) · 1a7765cb
      zhaozhao.zz authored
Durability of the database is a big and old topic. In this regard Redis uses AOF to
support it, and the `appendfsync=always` policy is the strictest level, guaranteeing that
all data is both written and synced to disk before replying success to the client.

But some cases have been overlooked, and could lead to broken durability.
      
1. The clearest one is about threaded-io mode:
   we should also set the client's write handler with `ae_barrier` in
   `handleClientsWithPendingWritesUsingThreads`, or the write handler would be
   called after the read handler in the next event loop, meaning the write command's result
   could be replied to the client before the flush to AOF.
2. About blocked clients (mostly by modules):
   in `beforeSleep()`, `handleClientsBlockedOnKeys()` should be called before
   `flushAppendOnlyFile()`, in case the unblocked clients modify data and send a reply
   without persistence.
3. When handling `ProcessingEventsWhileBlocked`:
   normally this takes place when a lua/function/module times out, and we give users a chance
   to kill the slow operation, but we should call `flushAppendOnlyFile()` before
   `handleClientsWithPendingWrites()`, in case the other clients from the last event loop get
   an acknowledgement before data persistence.
   For instance:
         ```
         in the same event loop
         client A executes set foo bar
         client B executes eval "for var=1,10000000,1 do end" 0
         ```
   after the script timeout, client A will get `OK` but lose data after a restart (killing redis at
   the timeout) if we don't flush the write command to the AOF.
4. A more complex case about `ProcessingEventsWhileBlocked`:
   a lua timeout in a transaction, for example
   `MULTI; set foo bar; eval "for var=1,10000000,1 do end" 0; EXEC`, where the client will get the SET
   command's result before the whole transaction is done; that breaks atomicity too.
   Fortunately, it's already fixed by #5428 (although that wasn't the original purpose, just a side
   effect : ) ), but the module timeout should be fixed too.
      
cases 1, 2, 3 are fixed in this commit (see the ordering sketch below); the module issue in case 4 needs a follow-up PR.
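
A minimal sketch of the ordering constraint inside beforeSleep() implied by cases 2 and 3; only
the relative order of the calls matters here, and the function names are the ones already
mentioned above:

```c
#include "server.h"  /* sketch assumes Redis internals */

/* Sketch only: the durability-relevant ordering in beforeSleep(). */
void beforeSleepOrderingSketch(void) {
    /* Case 2: unblocked clients may modify data, so serve them before
     * flushing the AOF, not after. */
    handleClientsBlockedOnKeys();

    /* Flush (and with appendfsync=always, fsync) the AOF buffer ... */
    flushAppendOnlyFile(0);

    /* ... and only then send replies, so no client is acknowledged for a
     * write that is not yet persisted (also relevant for case 3, where the
     * same order must hold in ProcessingEventsWhileBlocked). */
    handleClientsWithPendingWrites();
}
```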
      1a7765cb
  27. 10 Apr, 2022 1 commit
  28. 25 Mar, 2022 1 commit
• optimize(remove) usage of client's pending_querybuf (#10413) · 78bef6e1
      zhaozhao.zz authored
      To remove `pending_querybuf`, the key point is reusing `querybuf`, it means master client's `querybuf` is not only used to parse command, but also proxy to sub-replicas.
      
1. add a new variable `repl_applied` for the master client to record how much data has been applied (propagated via `replicationFeedStreamFromMasterStream()`) but not yet trimmed from `querybuf`.
      
      2. don't sdsrange `querybuf` in `commandProcessed()`, we trim it to `repl_applied` after the whole replication pipeline processed to avoid fragmented `sdsrange`. And here are some scenarios we cannot trim to `qb_pos`:
          * we don't receive complete command from master
          * master client blocked because of client pause
          * IO threads operate read, master client flagged with CLIENT_PENDING_COMMAND
      
          In these scenarios, `qb_pos` points to the part of the current command or the beginning of next command, and the current command is not applied yet, so the `repl_applied` is not equal to `qb_pos`.
      
      Some other notes:
      * Do not do big arg optimization on master client, since we can only sdsrange `querybuf` after data sent to replicas.
      * Set `qb_pos` and `repl_applied` to 0 when `freeClient` in `replicationCacheMaster`.
* Rewrite `processPendingCommandsAndResetClient` into `processPendingCommandAndInputBuffer`, letting `processInputBuffer` be called successively after `processCommandAndResetClient`.
      78bef6e1
  29. 16 Mar, 2022 1 commit
  30. 15 Mar, 2022 1 commit
• Optimization: remove `updateClientMemUsage` from i/o threads. (#10401) · cf6dcb7b
      yoav-steinberg authored
      In a benchmark we noticed we spend a relatively long time updating the client
      memory usage leading to performance degradation.
      Before #8687 this was performed in the client's cron and didn't affect performance.
But since introducing client eviction we need to perform this after filling the input
buffers and after processing commands. This also led me to write this code to be
thread safe and to perform it in the i/o threads.
      
      It turns out that the main performance issue here is related to atomic operations
      being performed while updating the total clients memory usage stats used for client
      eviction (`server.stat_clients_type_memory[]`). This update needed to be atomic
      because `updateClientMemUsage()` was called from the IO threads.
      
      In this commit I make sure to call `updateClientMemUsage()` only from the main thread.
      In case of threaded IO I call it for each client during the "fan-in" phase of the read/write
      operation. This also means I could chuck the `updateClientMemUsageBucket()` function
      which was called during this phase and embed it into `updateClientMemUsage()`.
      
      Profiling shows this makes `updateClientMemUsage()` (on my x86_64 linux) roughly x4 faster.
      cf6dcb7b
  31. 13 Mar, 2022 1 commit
  32. 09 Mar, 2022 1 commit
  33. 08 Mar, 2022 1 commit
• XREADGROUP: Unblock client if stream is deleted (#10306) · 2a295408
      guybe7 authored
Deleting a stream while a client is blocked on XREADGROUP should unblock the client.
      
The idea is that a client blocked via XREADGROUP is different from
any other blocking type in the sense that it depends on the existence of both
the key and the group. Even if the key is deleted and then revived with XADD,
it won't help any clients blocked on XREADGROUP because the group no longer
exists, so they would fail with -NOGROUP anyway.
      The conclusion is that it's better to unblock these clients (with error) upon
      the deletion of the key, rather than waiting for the first XADD. 
      
      Other changes:
      1. Slightly optimize all `serveClientsBlockedOn*` functions by checking `server.blocked_clients_by_type`
2. All `serveClientsBlockedOn*` functions now use a list iterator rather than looking at `listFirst` and relying
  on `unblockClient` to delete the head of the list (see the iterator sketch below). Before this commit, only `serveClientsBlockedOnStreams`
  used to work like that.
      3. bugfix: CLIENT UNBLOCK ERROR should work even if the command doesn't have a timeout_callback
        (only relevant to module commands)
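
A minimal sketch of the iteration pattern described in change 2, using the adlist iterator API;
the body of the real serveClientsBlockedOn* functions is of course more involved:

```c
#include "server.h"  /* sketch assumes Redis internals: list, listIter, client */

/* Sketch only: iterate blocked clients with a list iterator instead of
 * repeatedly reading listFirst() and relying on unblockClient() to pop it. */
void serveBlockedClientsSketch(list *clients) {
    listIter li;
    listNode *ln;
    listRewind(clients, &li);
    while ((ln = listNext(&li)) != NULL) {
        client *c = listNodeValue(ln);
        /* ... try to serve c; unblockClient(c) may remove ln from the list,
         * which is safe because the iterator has already advanced past it. */
        (void)c;
    }
}
```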
      2a295408
  34. 27 Feb, 2022 1 commit
• Sort out the mess around Lua error messages and error stats (#10329) · aa856b39
      Meir Shpilraien (Spielrein) authored
      
      
      This PR fix 2 issues on Lua scripting:
      * Server error reply statistics (some errors were counted twice).
      * Error code and error strings returning from scripts (error code was missing / misplaced).
      
      ## Statistics
A Lua script is considered part of the user application, a sophisticated transaction,
so we want to count an error even if it is handled silently by the script, but when it is
propagated outwards from the script we don't want to count it twice. On the other hand,
if the script decides to throw an error on its own (using `redis.error_reply`), we want to
count that too.
Besides, since we count the `calls` in command statistics for the commands the script calls,
we should certainly also count `failed_calls`.
So when a simple `eval "return redis.call('set','x','y')" 0` fails, it should count the failed call
to both SET and EVAL, but the `errorstats` and `total_error_replies` should be counted only once.
      
      The PR changes the error object that is raised on errors. Instead of raising a simple Lua
      string, Redis will raise a Lua table in the following format:
      
      ```
      {
          err='<error message (including error code)>',
          source='<User source file name>',
    line='<line where the error happened>',
          ignore_error_stats_update=true/false,
      }
      ```
      
The `luaPushError` function was modified to construct the new error table as described above.
`luaRaiseError` was renamed to `luaError` and now simply calls `lua_error` to raise
the table on the top of the Lua stack as the error object.
      The reason is that since its functionality is changed, in case some Redis branch / fork uses it,
      it's better to have a compilation error than a bug.
      
The `source` and `line` fields are enriched by the error handler (if possible), and
`ignore_error_stats_update` is optional; if it is not present, the default value is `false`.
If `ignore_error_stats_update` is true, the error will not be counted in the error stats.
      
When parsing a Redis call reply, each error is translated to a Lua table in the format described
above, and the `ignore_error_stats_update` field is set to `true` so we will not count errors
twice (we counted this error when we invoked the command).
      
The changes in this PR might be considered a breaking change for users of the Lua
`pcall` function. Before, the error was a string, and now it's a table. To keep backward
compatibility the PR overrides the `pcall` implementation, extracts the error message from
the error table, and returns it.
      
      Example of the error stats update:
      
      ```
      127.0.0.1:6379> lpush l 1
      (integer) 2
      127.0.0.1:6379> eval "return redis.call('get', 'l')" 0
      (error) WRONGTYPE Operation against a key holding the wrong kind of value. script: e471b73f1ef44774987ab00bdf51f21fd9f7974a, on @user_script:1.
      
      127.0.0.1:6379> info Errorstats
      # Errorstats
      errorstat_WRONGTYPE:count=1
      
      127.0.0.1:6379> info commandstats
      # Commandstats
      cmdstat_eval:calls=1,usec=341,usec_per_call=341.00,rejected_calls=0,failed_calls=1
      cmdstat_info:calls=1,usec=35,usec_per_call=35.00,rejected_calls=0,failed_calls=0
      cmdstat_lpush:calls=1,usec=14,usec_per_call=14.00,rejected_calls=0,failed_calls=0
      cmdstat_get:calls=1,usec=10,usec_per_call=10.00,rejected_calls=0,failed_calls=1
      ```
      
      ## error message
      We can now construct the error message (sent as a reply to the user) from the error table,
      so this solves issues where the error message was malformed and the error code appeared
      in the middle of the error message:
      
      ```diff
      127.0.0.1:6379> eval "return redis.call('set','x','y')" 0
      -(error) ERR Error running script (call to 71e6319f97b0fe8bdfa1c5df3ce4489946dda479): @user_script:1: OOM command not allowed when used memory > 'maxmemory'.
      +(error) OOM command not allowed when used memory > 'maxmemory' @user_script:1. Error running script (call to 71e6319f97b0fe8bdfa1c5df3ce4489946dda479)
      ```
      
      ```diff
      127.0.0.1:6379> eval "redis.call('get', 'l')" 0
      -(error) ERR Error running script (call to f_8a705cfb9fb09515bfe57ca2bd84a5caee2cbbd1): @user_script:1: WRONGTYPE Operation against a key holding the wrong kind of value
      +(error) WRONGTYPE Operation against a key holding the wrong kind of value script: 8a705cfb9fb09515bfe57ca2bd84a5caee2cbbd1, on @user_script:1.
      ```
      
Notice that `redis.pcall` was not changed:
      ```
      127.0.0.1:6379> eval "return redis.pcall('get', 'l')" 0
      (error) WRONGTYPE Operation against a key holding the wrong kind of value
      ```
      
      
      ## other notes
Notice that some commands (like GEOADD) change the cmd variable on the client struct, so we
cannot count on it to update the command stats. In order to be able to update those stats correctly
we needed to promote the `realcmd` variable to be located on the client struct.
      
Tests were added and modified to verify the changes.
      
      Related PR's: #10279, #10218, #10278, #10309
Co-authored-by: Oran Agra <oran@redislabs.com>
      aa856b39