1. 01 Jan, 2023 1 commit
    • reprocess command when client is unblocked on keys (#11012) · 383d902c
      ranshid authored
      *TL;DR*
      ---------------------------------------
      Following the discussion in issue [#7551](https://github.com/redis/redis/issues/7551),
      we decided to refactor the client blocking code to eliminate some of the code duplication
      and to rebuild the infrastructure to better support future key-blocking cases.
      
      
      *In this PR*
      ---------------------------------------
      1. Reprocess the command once a client becomes unblocked on a key (instead of running
         custom code for the unblocked path that's different from the one that would have run if
         blocking wasn't needed).
      2. Eliminate some (now) irrelevant code for handling unblocking of lists/zsets/streams, etc.
      3. Modify some tests to intercept errors that are now reported when the command is
         reprocessed after unblocking (see details in the notes section below).
      4. Replace '$' in the client argv with the current stream ID, since once we reprocess the
         XREAD we need to read from the last message and not wait for a new one, in order to
         prevent an endless blocking loop.
      5. Added statistics to the INFO "Clients" section to report:
         * `total_blocking_keys` - number of blocking keys
         * `total_blocking_keys_on_nokey` - number of blocking keys that have at least one client
           waiting to be unblocked when the key is deleted.
      6. Avoid expiring the unblocked key during unblock. Previously we used to look up the unblocked key,
         which might have been expired during the lookup. Now we look up the key using NOTOUCH and
         NOEXPIRE to avoid deleting it at this point, so propagating commands in blocked.c is no longer needed.
      7. Deprecate command flags. We decided to remove CMD_CALL_STATS and CMD_CALL_SLOWLOG
         and perform an explicit check in the call() function to decide whether a stats update should take place.
         This simplifies the logic and also mitigates existing issues: for example, module calls
         triggered as part of AOF loading could previously report stats even though they are called during AOF loading.
      
      *Behavior changes*
      ---------------------------------------------------
      
      1. Because this implementation removes the dedicated code that handled unblocked streams/lists/zsets
      (we now re-process the command once the client is unblocked), some errors will be reported differently.
      The old implementation used to issue
      ``UNBLOCKED the stream key no longer exists``
      in the following cases:
         - The stream key has been deleted (i.e. by calling DEL)
         - The stream and group existed but the key type was changed by overriding it (i.e. with the SET command)
         - The key no longer exists after we SWAPDB with a db which does not contain this key
         - After SWAPDB when the new db has this key but with a different type.
      
      In the new implementation the reported errors will be the same as if the command were executed at that point:
      **NOGROUP** - in case the key no longer exists, or **WRONGTYPE** in case the key was overridden with a different type.
      
      2. Reprocessing the command means that some checks will be re-evaluated once the
      client is unblocked.
      For example, ACL rules might have changed since the command was originally executed,
      so it will fail once the client is unblocked.
      Another example is the OOM condition check, which might allow the command to run and
      block, but then fail the reprocessed command once the client is unblocked.
      
      3. One of the changes in this PR is that no command stats are updated once the
      command is blocked (all stats will be updated once the client is unblocked). This implies
      that when we have many blocked clients, users will no longer be able to get that information
      from the command stats. However, the information can still be gathered from the client list.
      
      **Client blocking**
      ---------------------------------------------------
      
      Blocking on a key will still be triggered the same way as it is done today.
      In order to block the current client on a list of keys, the call to
      blockForKeys will still need to be made, and it will perform the same steps as it does today
      (a minimal, illustrative sketch of this bookkeeping follows the list):
      
      *  add the client to the list of blocked clients on each key
      *  keep the key with a matching list node (its position in the global list of clients blocked on that key)
         in the client's private blocking-keys dict
      *  flag the client with CLIENT_BLOCKED
      *  update blocking statistics
      *  register the client on the timeout table
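      
      The following is a minimal, self-contained C sketch of this bookkeeping (illustrative only, not the
      actual Redis code; names such as `toy_key` and `toy_client` are hypothetical). It shows why keeping
      the list node on the client side makes later removal O(1), with no scan of the per-key list:
      
      ```
      /* Minimal sketch (not Redis source): each key keeps a doubly linked list of
       * clients blocked on it, and each client remembers its own list node per key,
       * so unblocking/removal never has to scan the list. All names are hypothetical. */
      #include <stdio.h>
      #include <stdlib.h>
      
      typedef struct blocked_node {
          struct toy_client *client;
          struct blocked_node *prev, *next;
      } blocked_node;
      
      typedef struct toy_key {
          const char *name;
          blocked_node *blocked_head;      /* clients blocked on this key */
      } toy_key;
      
      typedef struct toy_client {
          int id;
          int flag_blocked;                /* stands in for CLIENT_BLOCKED */
          toy_key *bkey;                   /* key this client is blocked on */
          blocked_node *bnode;             /* our node in bkey->blocked_head */
      } toy_client;
      
      /* blockForKeys-like step: register the client on the key's blocked list. */
      static void block_on_key(toy_client *c, toy_key *k) {
          blocked_node *n = malloc(sizeof(*n));
          n->client = c;
          n->prev = NULL;
          n->next = k->blocked_head;
          if (k->blocked_head) k->blocked_head->prev = n;
          k->blocked_head = n;
          c->bkey = k;
          c->bnode = n;                    /* back-reference for O(1) removal */
          c->flag_blocked = 1;
      }
      
      /* Unblock step: unlink using the stored node, no list scan needed. */
      static void unblock_client(toy_client *c) {
          blocked_node *n = c->bnode;
          if (n->prev) n->prev->next = n->next; else c->bkey->blocked_head = n->next;
          if (n->next) n->next->prev = n->prev;
          free(n);
          c->bnode = NULL;
          c->bkey = NULL;
          c->flag_blocked = 0;
      }
      
      int main(void) {
          toy_key k = { "mylist", NULL };
          toy_client c1 = { 1, 0, NULL, NULL }, c2 = { 2, 0, NULL, NULL };
          block_on_key(&c1, &k);
          block_on_key(&c2, &k);
          unblock_client(&c1);             /* O(1), independent of list length */
          printf("head client after unblock: %d\n", k.blocked_head->client->id);
          unblock_client(&c2);
          return 0;
      }
      ```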
      
      **Key Unblock**
      ---------------------------------------------------
      
      Unblocking a specific key will be triggered (same as today) by calling signalKeyAsReady.
      The implementation in that part will stay the same as today - adding the key to the global readyList.
      The reason to maintain the readyList (as opposed to iterating over all clients blocked on the specific key)
      is to keep the signal operation as short as possible, since it is called during command processing.
      The main change is that instead of going through a dedicated code path that operates the blocked command,
      we will just call processPendingCommandsAndResetClient.
      
      **ClientUnblock (keys)**
      ---------------------------------------------------
      
      1. Unblocking clients blocked on keys is triggered after the command is
         processed, during beforeSleep.
      2. The general schema is, for each key *k* in the readyList:
      ```
      For each client *c* which is blocked on *k*:
          if either:
              1. *k* exists AND the type of *k* matches the current client blocking type, OR
              2. *k* exists and *c* is blocked on a module command, OR
              3. *k* does not exist and *c* was blocked with the unblock_on_deleted_key flag
          then:
              1. remove the client from the list of clients blocked on this key
              2. remove the blocking list node from the client's blocking-keys dict
              3. remove the client from the timeout list
              4. queue the client on the unblocked_clients list
              5. *NEW*: call processCommandAndResetClient(c);
      ```
      *NOTE:* for module-blocked clients we will still call moduleUnblockClientByHandle,
      which will queue the client for processing in the moduleUnblockedClients list.
      
      **Process Unblocked clients**
      ---------------------------------------------------
      
      Processing of all unblocked clients is done in beforeSleep, and no change is planned
      in that part.
      
      The general schema will be:
      For each client *c* in server.unblocked_clients:
      
              * remove the client from server.unblocked_clients
              * restore the client's read handler
              * continue processing the pending command and input buffer.
      
      *Some notes regarding the new implementation*
      ---------------------------------------------------
      
      1. Although it was proposed, it is currently difficult to remove the
         read handler from the client while it is blocked.
         The reason is that a blocked client should be unblocked when it is
         disconnected, or we might consume data into the void.
      
      2. While this PR mainly keeps the current blocking logic as-is, there
         might be some future additions to the infrastructure that we would
         like to have:
         - Allow non-preemptive blocking of a client - sometimes a new kind of
           blocking can be expected not to be preempted. For example, imagine
           we hold some keys on disk, and when a command needs to process them
           it blocks until the keys are loaded. In this case we would want the
           client to not be disconnected or unblocked until the process is
           completed (remove the client's read handler, prevent client timeout,
           disable unblocking via the DEBUG command, etc.).
         - Allow generic blocking based on command-declared keys - we might
           want to add a hook before command processing to check if any of the
           declared keys require the command to block. This way it would be
           easier to add new kinds of key-based blocking mechanisms.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
      383d902c
  2. 22 Dec, 2022 1 commit
    • Fix flaky PTTL time to live in milliseconds test on slow machines (#11651) · 9b20d598
      Binbin authored
      This test failed in FreeBSD:
      ```
      *** [err]: PTTL returns time to live in milliseconds in tests/unit/expire.tcl
      Expected 836 > 900 && 836 <= 1000 (context: type eval line 5 cmd {assert {$ttl > 900 && $ttl <= 1000}} proc ::test)
      ```
      
      On some slow machines, the test sometimes takes close to 200ms
      to finish. We only set aside 100ms, so that caused the failure.
      Since the failure was around 800, change the condition to be >500.
      9b20d598
  3. 20 Dec, 2022 1 commit
    • Cleanup: Get rid of server.core_propagates (#11572) · 9c7c6924
      guybe7 authored
      1. Get rid of server.core_propagates - we can just rely on module/call nesting levels
      2. Rename in_nested_call  to execution_nesting and update the comment
      3. Remove module_ctx_nesting (redundant, we can use execution_nesting)
      4. Modify postExecutionUnitOperations according to the comment (The main purpose of this PR)
      5. trackingHandlePendingKeyInvalidations: Check the nesting level inside this function
      9c7c6924
  4. 18 Dec, 2022 2 commits
    • fix race in list test with blocking commands (#11627) · 669688a3
      Oran Agra authored
      I've seen the `BRPOPLPUSH with multiple blocked clients` test hang.
      This probably happened because rd2 blocked before rd1 and then it was
      also released first, and rd1 remained blocked.
      
      ```
              r del blist{t} target1{t} target2{t}
              r set target1{t} nolist
              $rd1 brpoplpush blist{t} target1{t} 0
              $rd2 brpoplpush blist{t} target2{t} 0
              r lpush blist{t} foo
      
              assert_error "WRONGTYPE*" {$rd1 read}
              assert_equal {foo} [$rd2 read]
              assert_equal {foo} [r lrange target2{t} 0 -1]
      ```
      changes:
      * added all missing calls for wait_for_blocked_client after issuing blocking commands
      * removed some excessive `after 100`
      * fix undetected crossslot error in BRPOPLPUSH test
      * rollback changes to proto-max-bulk-len so external tests can be rerun
      669688a3
    • fix flaky latency test (#11636) · 60f7111b
      Oran Agra authored
      Fix a flaky test that probably fails on overload timing issues.
      
      This unit starts with
      ```
          # Set a threshold high enough to avoid spurious latency events.
          r config set latency-monitor-threshold 200
      ```
      
      but later the test measuring expire events changes the threshold.
      This fix reverts it to 200 after that test.
      
      Got this error (ARM+TLS)
      ```
      *** [err]: LATENCY RESET is able to reset events in tests/unit/latency-monitor.tcl
      Expected [r latency latest] eq {} (context: type eval line 3 cmd {assert {[r latency latest] eq {}}} proc ::test)
      ```
      60f7111b
  5. 15 Dec, 2022 1 commit
  6. 09 Dec, 2022 2 commits
    • Fix zuiFind crash / RM_ScanKey hang on SET object listpack encoding (#11581) · 20854cb6
      Binbin authored
      
      
      In #11290, we added listpack encoding for the SET object.
      But we forgot to support it in zuiFind, causing ZINTER, ZINTERSTORE,
      ZINTERCARD, ZDIFF, and ZDIFFSTORE to crash.
      And we forgot to support it in RM_ScanKey, causing it to hang.
      
      This PR adds support for SET listpack in zuiFind and in RM_ScanKey,
      and adds tests for the related commands to cover this case.
      
      Other changes:
      - There is no reason for zuiFind to go into the internals of the SET.
        It can simply use setTypeIsMember and not care about the encoding.
      - Remove the `#include "intset.h"` from server.h to reduce the chance of
        accidental intset API use.
      - Move the setTypeAddAux, setTypeRemoveAux and setTypeIsMemberAux
        interfaces to the header.
      - In scanGenericCommand, use setTypeInitIterator and setTypeNext
        to handle the OBJ_SET scan.
      - In RM_ScanKey, improve the hash scan mode to use lpGetValue like zset,
        so they can share code and get better performance.
      
      The zuiFind part fixes #11578
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      20854cb6
    • Solve issues with active defrag test failing on fast machines (#11598) · 528bb11d
      Oran Agra authored
      We do defrag during AOF loading, but aim to detect fragmentation only
      once a second, so this test aims to slow down the AOF loading and mimic
      loading of a large file.
      On fast machines, the sleep plus the actual work we did was insufficient,
      so make it sleep longer so the test won't fail.
      
      The error we used to get is this one:
      Expected 0 > 100000 (context: type eval line 106 cmd {assert {$hits > 100000}} proc ::test)
      528bb11d
  7. 08 Dec, 2022 1 commit
  8. 07 Dec, 2022 1 commit
    • Optimize client memory usage tracking operation while client eviction is disabled (#11348) · c0267b3f
      Harkrishn Patro authored
      
      
      ## Issue
      During the client input/output buffer processing, the memory usage is
      incrementally updated to keep track of clients going beyond a certain
      threshold `maxmemory-clients` so they can be evicted. However, this additional
      tracking activity wastes CPU cycles unnecessarily when no
      client eviction is required. This applies in two cases:
      
      * `maxmemory-clients` is set to `0`, which equates to no client eviction
        (applicable to all clients)
      * The `CLIENT NO-EVICT` flag is set to `ON`, which makes a particular
        client not applicable for eviction.
      
      ## Solution
      * Disable client memory usage tracking during the read/write flow when
        `maxmemory-clients` is set to `0` or `client no-evict` is `on`.
        The memory usage is tracked only during the `clientCron` i.e. it gets
        periodically updated.
      * Cleanup the clients from the memory usage bucket when client eviction
        is disabled.
      * When the maxmemory-clients config is enabled or disabled at runtime,
        we immediately update the memory usage buckets for all clients (tested
        scanning 80000 took some 20ms)
      
      Benchmarks show that this can improve performance by about 5% in
      certain situations.
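      
      As an illustration of the gating described above (hypothetical names, not the actual Redis flow),
      here is a minimal C sketch of skipping per-command tracking when eviction cannot apply:
      
      ```
      /* Illustrative sketch only (hypothetical names): skip incremental memory-usage
       * tracking when client eviction cannot apply; the periodic cron still refreshes
       * the value for observability. */
      #include <stdio.h>
      
      typedef struct {
          long long mem_usage;
          int no_evict;                 /* CLIENT NO-EVICT ON */
      } client_t;
      
      static long long maxmemory_clients = 0;   /* 0 means client eviction disabled */
      
      static int client_eviction_applies(const client_t *c) {
          return maxmemory_clients != 0 && !c->no_evict;
      }
      
      /* Called from the read/write path: do nothing unless eviction can apply. */
      static void maybe_track_client_memory(client_t *c, long long new_usage) {
          if (!client_eviction_applies(c)) return;
          c->mem_usage = new_usage;     /* plus bucket placement in the real flow */
      }
      
      int main(void) {
          client_t c = { 0, 0 };
          maybe_track_client_memory(&c, 1024);   /* ignored: eviction disabled */
          maxmemory_clients = 1 << 20;
          maybe_track_client_memory(&c, 1024);   /* tracked now */
          printf("tracked usage: %lld\n", c.mem_usage);
          return 0;
      }
      ```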
      Co-authored-by: Oran Agra <oran@redislabs.com>
      c0267b3f
  9. 05 Dec, 2022 1 commit
    • Reintroduce lua argument cache in luaRedisGenericCommand removed in v7.0 (#11541) · 2d80cd78
      filipe oliveira authored
      This mechanism aims to reduce calls to malloc and free when
      preparing the arguments the script sends to redis commands.
      This mechanism was originally implemented in 48c49c48
      and 4f686555, and was removed in #10220 (thinking it's not needed
      and that it has no impact), but it now turns out that was wrong, and it
      indeed provides some 5% performance improvement.
      
      The implementation is a little bit simplistic: it assumes consecutive
      calls use the same size in the same arg index, but that's arguably
      sufficient since it's only aimed at caching very small things.
      
      We could even consider always pre-allocating args to the full
      LUA_CMD_OBJCACHE_MAX_LEN (64 bytes) rather than the right size for the argument;
      that would increase the chance they'll be able to be re-used.
      But in some way this is already happening, since we're using
      sdsalloc, which in turn uses s_malloc_usable and takes ownership
      of the full size of the allocation, so we are padded to the allocator
      bucket size.
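      
      To make the idea concrete, here is a small, self-contained C sketch of a per-argument-index buffer
      cache (hypothetical names; not the actual luaRedisGenericCommand code): reuse the previously kept
      buffer for the same argv slot when it is large enough, otherwise fall back to malloc:
      
      ```
      /* Illustrative sketch of a tiny per-arg-index buffer cache (hypothetical names).
       * Consecutive calls tend to use similar sizes per argument slot, so keep the
       * last small buffer per slot and reuse it instead of malloc/free. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      
      #define CACHE_SLOTS 8
      #define CACHE_MAX_LEN 64          /* only cache very small buffers */
      
      static char *cache_buf[CACHE_SLOTS];
      static size_t cache_len[CACHE_SLOTS];
      
      static char *get_arg_buf(int idx, const char *data, size_t len) {
          char *buf;
          if (idx < CACHE_SLOTS && cache_buf[idx] && cache_len[idx] >= len + 1) {
              buf = cache_buf[idx];          /* reuse cached buffer, no allocation */
              cache_buf[idx] = NULL;
          } else {
              buf = malloc(len + 1);
          }
          memcpy(buf, data, len);
          buf[len] = '\0';
          return buf;
      }
      
      static void release_arg_buf(int idx, char *buf, size_t len) {
          if (idx < CACHE_SLOTS && len + 1 <= CACHE_MAX_LEN && cache_buf[idx] == NULL) {
              cache_buf[idx] = buf;          /* keep for the next call */
              cache_len[idx] = len + 1;
          } else {
              free(buf);
          }
      }
      
      int main(void) {
          char *a = get_arg_buf(0, "SET", 3);      /* first call: malloc */
          release_arg_buf(0, a, 3);
          char *b = get_arg_buf(0, "GET", 3);      /* second call: reuses the buffer */
          printf("%s (reused: %s)\n", b, a == b ? "yes" : "no");
          release_arg_buf(0, b, 3);
          return 0;
      }
      ```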
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: sundb <sundbcn@gmail.com>
      2d80cd78
  10. 04 Dec, 2022 1 commit
  11. 01 Dec, 2022 1 commit
  12. 30 Nov, 2022 2 commits
    • Stream consumers: Re-purpose seen-time, add active-time (#11099) · 72e90695
      guybe7 authored
      1. "Fixed" the current code so that seen-time/idle actually refers to interaction
        attempts (as documented; breaking change)
      2. Added active-time/inactive to refer to successful interaction (what
        seen-time/idle used to be)
      
      At first, I tried to avoid changing the behavior of seen-time/idle, but then realized
      that, in this case, the odds are that people read the docs and implemented their
      code based on the docs (which didn't match the behavior).
      For the most part, that would work fine, except that issue #9996 was found.
      
      I was working under the assumption that people relied on the docs, and for
      the most part, it could have worked well enough. So instead of fixing the docs,
      as I would usually do, I fixed the code to match the docs in this particular case.
      
      Note that, in case the consumer has never read any entries, the values
      for both "active-time" (XINFO FULL) and "inactive" (XINFO CONSUMERS) will
      be -1, meaning here that the consumer was never active.
      
      Note that seen/active time is only affected by XREADGROUP / X[AUTO]CLAIM, not
      by XPENDING, XINFO, and other "read-only" stream CG commands (always has been,
      even before this PR)
      
      Other changes:
      * Another behavioral change (arguably a bugfix) is that XREADGROUP and X[AUTO]CLAIM
        create the consumer regardless of whether it was able to perform some reading/claiming
      * RDB format change to save the `active_time`, and set it to the same value of `seen_time` in old rdb files.
      72e90695
    • Add a special notification unlink available only for modules (#9406) · c8181314
      Huang Zhw authored
      
      
      Add a new module event `RedisModule_Event_Key`. This event is fired
      when a key is removed from the keyspace.
      The event includes an open key that can be used for reading the key before
      it is removed. Modules can also extract the key name, and use RM_Open
      or RM_Call to access the key from within that event, but shouldn't modify anything
      from within this event.
      
      The following sub events are available:
        - `REDISMODULE_SUBEVENT_KEY_DELETED`
        - `REDISMODULE_SUBEVENT_KEY_EXPIRED`
        - `REDISMODULE_SUBEVENT_KEY_EVICTED`
        - `REDISMODULE_SUBEVENT_KEY_OVERWRITE`
      
      The data pointer can be cast to a RedisModuleKeyInfo structure
      with the following fields:
      ```
           RedisModuleKey *key;    // Opened Key
      ```
      
      ### internals
      
      * We also add two dict functions (see the sketch after this list for an illustration of the pattern):
        `dictTwoPhaseUnlinkFind` finds an element in the table and also gets the plink of the entry.
        The entry is returned if the element is found. The user should later call `dictTwoPhaseUnlinkFree`
        with it in order to unlink and release it. Otherwise, if the key is not found, NULL is returned.
        These two functions should be used as a pair. `dictTwoPhaseUnlinkFind` pauses rehash and
        `dictTwoPhaseUnlinkFree` resumes rehash.
      * We change `dbOverwrite` to `dbReplaceValue` which just replaces the value of the key and
        doesn't fire any events. The "overwrite" part (which emits events) is just when called from `setKey`,
        the other places that called dbOverwrite were ones that just update the value in-place (INCR*, SPOP,
        and dbUnshareStringValue). This should not have any real impact since `moduleNotifyKeyUnlink` and
        `signalDeletedKeyAsReady` wouldn't have mattered in these cases anyway (i.e. module keys and
        stream keys didn't have direct calls to dbOverwrite)
      * Since we allow doing RM_OpenKey from within these callbacks, we temporarily disable lazy expiry.
      * We also temporarily disable lazy expiry when we are in unlink/unlink2 callback and keyspace 
        notification callback.
      * Move special definitions to the top of redismodule.h
        This is needed to resolve compilation errors with RedisModuleKeyInfoV1
        that carries a RedisModuleKey member.
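      
      To illustrate the two-phase find/unlink pattern mentioned above (find returns the entry together with
      the link that points to it, so the later unlink needs no second lookup), here is a minimal,
      self-contained C sketch on a toy singly linked list; it is not the Redis dict code, and all names
      are hypothetical:
      
      ```
      /* Toy two-phase unlink on a singly linked list (hypothetical names, not dict.c).
       * Phase 1 returns both the node and the link (pointer-to-pointer) that refers
       * to it; phase 2 unlinks and frees using that link, without searching again. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      
      typedef struct node {
          char key[16];
          struct node *next;
      } node;
      
      /* Phase 1: find the node, and also return *plink = address of the pointer
       * (head or a ->next field) that currently points at it. */
      static node *two_phase_find(node **head, const char *key, node ***plink) {
          node **link = head;
          while (*link) {
              if (strcmp((*link)->key, key) == 0) {
                  *plink = link;
                  return *link;
              }
              link = &(*link)->next;
          }
          return NULL;
      }
      
      /* Phase 2: unlink the previously found node via its link and free it. */
      static void two_phase_free(node *n, node **plink) {
          *plink = n->next;
          free(n);
      }
      
      static void push(node **head, const char *key) {
          node *n = calloc(1, sizeof(*n));
          snprintf(n->key, sizeof(n->key), "%s", key);
          n->next = *head;
          *head = n;
      }
      
      int main(void) {
          node *head = NULL;
          push(&head, "a"); push(&head, "b"); push(&head, "c");
      
          node **plink;
          node *found = two_phase_find(&head, "b", &plink);
          if (found) {
              /* ...the caller may inspect the node or emit events here... */
              two_phase_free(found, plink);
          }
          for (node *n = head; n; n = n->next) printf("%s ", n->key);
          printf("\n");   /* prints: c a */
          return 0;
      }
      ```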
      Co-authored-by: Oran Agra <oran@redislabs.com>
      c8181314
  13. 28 Nov, 2022 2 commits
    • Hyperloglog avoid allocate more than 'server.hll_sparse_max_bytes' bytes of... · f8ac5a65
      Mingyi Kang authored
      Hyperloglog avoid allocate more than 'server.hll_sparse_max_bytes' bytes of memory for sparse representation (#11438)
      
      Before this PR, we used sdsMakeRoomFor() to expand the size of the hyperloglog
      string (sparse representation). And because sdsMakeRoomFor() uses a greedy
      strategy (allocating about twice what we need), the memory allocated for the
      hyperloglog may be more than `server.hll_sparse_max_bytes` bytes.
      Any memory beyond `server.hll_sparse_max_bytes` is wasted.
      
      In this pull request, tone down the greediness of the allocation growth, and also
      make sure it'll never request more than `server.hll_sparse_max_bytes`.
      
      This could in theory mean the size of the hyperloglog string is insufficient for the
      increment we need; that should be fine, since in this case we promote the hyperloglog
      to the dense representation, and an assertion was added to make sure (a small sketch of
      the capped growth follows).
      
      This PR also adds some tests and fixes some typo and indentation issues.
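      
      A small, self-contained C sketch of such a capped growth policy (hypothetical names, not the actual
      hllSparseSet/sdsMakeRoomFor code):
      
      ```
      /* Illustrative growth policy only (hypothetical names): grow, but never ask
       * for more than sparse_max_bytes; if even the capped size is not enough, the
       * caller would promote to the dense representation instead. */
      #include <stdio.h>
      #include <stddef.h>
      
      static size_t capped_growth(size_t curlen, size_t needed, size_t sparse_max_bytes) {
          size_t newlen = curlen + needed;
          /* mild over-allocation instead of doubling, to limit waste */
          newlen += 16;
          if (newlen > sparse_max_bytes) newlen = sparse_max_bytes;
          return newlen;
      }
      
      int main(void) {
          size_t sparse_max = 3000;   /* stands in for server.hll_sparse_max_bytes */
          size_t len = 2998;
          size_t grown = capped_growth(len, 5, sparse_max);
          if (grown < len + 5) {
              printf("capped at %zu bytes: promote to dense representation\n", grown);
          } else {
              printf("grew sparse HLL to %zu bytes\n", grown);
          }
          return 0;
      }
      ```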
      f8ac5a65
    • Add withscore option to ZRANK and ZREVRANK. (#11235) · eeca7f29
      C Charles authored
      Add an option "withscores" to ZRANK and ZREVRANK.
      
      Add `[withscore]` option to both `zrank` and `zrevrank`, like this:
      ```
      z[rev]rank key member [withscore]
      ```
      eeca7f29
  14. 27 Nov, 2022 1 commit
  15. 26 Nov, 2022 1 commit
  16. 24 Nov, 2022 1 commit
    • Module API to allow writes after key space notification hooks (#11199) · abc345ad
      Meir Shpilraien (Spielrein) authored
      ### Summary of API additions
      
      * `RedisModule_AddPostNotificationJob` - new API to call inside a key space
        notification (and in more locations in the future) that allows adding a post job as described below.
      * New module option, `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`,
        allows disabling Redis' protection of nested key-space notifications.
      * `RedisModule_GetModuleOptionsAll` - gets the mask of all supported module options so a module
        will be able to check if a given option is supported by the currently running Redis instance.
      
      ### Background
      
      The following PR is a proposal for handling write operations inside module key space notifications.
      After a lot of discussion we came to the conclusion that a module should not perform any write
      operations inside a key space notification.
      
      Some examples of issues that such write operations can cause are described in the following links:
      
      * Bad replication order - https://github.com/redis/redis/pull/10969
      * Use after free - https://github.com/redis/redis/pull/10969#issuecomment-1223771006
      * Use after free - https://github.com/redis/redis/pull/9406#issuecomment-1221684054
      
      
      
      There are probably more issues that are yet to be discovered. The underlying problem with writing
      inside a key space notification is that the notification runs synchronously; this means that the notification
      code will be executed in the middle of Redis logic (command logic, eviction, expire).
      Redis **does not assume** that the data might change while running the logic, and such changes
      can crash Redis or cause unexpected behaviour.
      
      The solution is to state that modules **should not** perform any write command inside a key space
      notification (we can choose whether or not we want to enforce it). To still cover the use case where
      a module wants to perform a write operation as a reaction to key space notifications, we introduce
      a new API, `RedisModule_AddPostNotificationJob`, that allows registering a callback that will be
      called by Redis when the following conditions hold:
      
      * It is safe to perform any write operation.
      * The job will be called atomically alongside the operation that triggered it (in our case, the key
        space notification).
      
      A module can use this new API to safely perform any write operation and still achieve atomicity
      between the notification and the write.
      
      Although currently the API is only supported for key space notifications, it is written in a generic
      way so that in the future we will be able to use it in other places (server events, for example).
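      
      As an illustration, the sketch below shows a module that reacts to an expired key by deferring its
      write to a post-notification job. It assumes the prototypes described in this PR (in particular
      `RedisModule_AddPostNotificationJob(ctx, callback, privdata, free_privdata)`); treat the exact
      signatures as assumptions and check redismodule.h before relying on them:
      
      ```
      /* Sketch of a module reacting to an expired key by deferring its write to a
       * post-notification job (prototypes assumed per this PR, verify against the
       * actual redismodule.h before use). */
      #include "redismodule.h"
      
      static void log_expired_job(RedisModuleCtx *ctx, void *pd) {
          /* It is now safe to write: record the expired key name in a list. */
          RedisModuleString *keyname = pd;
          RedisModuleCallReply *r =
              RedisModule_Call(ctx, "RPUSH", "cs", "expired:log", keyname);
          if (r) RedisModule_FreeCallReply(r);
      }
      
      static void free_job_pd(void *pd) {
          /* Release the retained key name once the job has run. */
          RedisModule_FreeString(NULL, pd);
      }
      
      static int on_expired(RedisModuleCtx *ctx, int type, const char *event,
                            RedisModuleString *key) {
          REDISMODULE_NOT_USED(type);
          REDISMODULE_NOT_USED(event);
          /* No writes here: just schedule a job that runs atomically with this
           * notification, once writes are safe again. */
          RedisModule_RetainString(ctx, key);
          RedisModule_AddPostNotificationJob(ctx, log_expired_job, key, free_job_pd);
          return REDISMODULE_OK;
      }
      
      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "postjob", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          if (RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_EXPIRED,
                                                    on_expired) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          return REDISMODULE_OK;
      }
      ```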
      
      ### Technical Details
      
      Whenever a module uses `RedisModule_AddPostNotificationJob`, the callback is added to a list
      of callbacks (called `modulePostExecUnitJobs`) that need to be invoked after the current execution
      unit ends (whether it's a command, eviction, or active expire). In order to trigger those callbacks
      atomically with the notification effect, we call them in `postExecutionUnitOperations`
      (which was `propagatePendingCommands` before this PR). The new function fires the post jobs
      and then calls `propagatePendingCommands`.
      
      If the callback performs more operations that trigger more key space notifications, those
      notifications might register more callbacks. Those callbacks will be added to the end
      of the `modulePostExecUnitJobs` list and will be invoked atomically after the current callback ends.
      This raises a concern about entering an infinite loop; we consider an infinite loop a logical bug
      that needs to be fixed in the module. An attempt to protect against infinite loops by halting the
      execution could result in a violation of the feature's correctness, and so **Redis will make no attempt
      to protect the module from infinite loops**.
      
      In addition, currently key space notifications are not nested. Some modules might want to allow
      nesting key-space notifications. To allow that and keep backward compatibility, we introduce a
      new module option called `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`.
      Setting this option will disable the Redis key-space notifications nesting protection and will
      pass this responsibility to the module.
      
      ### Redis infrastructure
      
      This PR promotes the existing `propagatePendingCommands` to an "Execution Unit" concept,
      which is called after each atomic unit of execution.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
      Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
      abc345ad
  17. 22 Nov, 2022 1 commit
    • Make assert_refcount skip the OBJECT REFCOUNT check with needs:debug tag (#11487) · 543e0daa
      Binbin authored
      This PR adds `assert_refcount_morethan`, and modifies `assert_refcount` to skip
      the `OBJECT REFCOUNT` check with the `needs:debug` flag. Use them to modify all
      `OBJECT REFCOUNT` calls and also update tests/README to be more specific.
      
      The reasoning is that some of these tests could be testing something important,
      and only add a check for the refcount along the way, so it would be a shame to skip
      the whole test just because the refcount functionality is missing or blocked.
      This is much like the fact that some Redis variants may not support DEBUG,
      and we still want to run the majority of the test for coverage and just skip the digest match.
      543e0daa
  18. 16 Nov, 2022 2 commits
    • Add listpack encoding for list (#11303) · 2168ccc6
      sundb authored
      Improve memory efficiency of list keys
      
      ## Description of the feature
      The new listpack encoding uses the old `list-max-listpack-size` config
      to perform the conversion. We can think of it as a single node inside a
      quicklist, but without the ~80 bytes of overhead (internal fragmentation included)
      of the quicklist and quicklistNode structs.
      For example, a list key with 5 items of 10 chars each now takes 128 bytes
      instead of the 208 it used to take.
      
      ## Conversion rules
      * Convert listpack to quicklist
        When the listpack length or size reaches the `list-max-listpack-size` limit,
        it will be converted to a quicklist.
      * Convert quicklist to listpack
        When a quicklist has only one node, and its length or size is reduced to half
        of the `list-max-listpack-size` limit, it will be converted to a listpack.
        This is done to avoid frequent conversions when we add or remove at the bounding size or length
        (a small sketch of this hysteresis follows the list).
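      
      A minimal, self-contained C sketch of this hysteresis (hypothetical names, not the Redis list code):
      convert up at the limit, and only convert back down when a single-node quicklist shrinks to half of
      it, so sizes hovering around the limit don't flip back and forth:
      
      ```
      /* Illustrative hysteresis only (hypothetical names): encode as listpack below
       * the limit, convert to quicklist at the limit, and convert back only when a
       * single-node quicklist drops to half the limit. */
      #include <stdio.h>
      
      typedef enum { ENC_LISTPACK, ENC_QUICKLIST } list_enc;
      
      static list_enc next_encoding(list_enc cur, long entries, long limit, int single_node) {
          if (cur == ENC_LISTPACK && entries >= limit)
              return ENC_QUICKLIST;                  /* grow past the limit: convert up */
          if (cur == ENC_QUICKLIST && single_node && entries <= limit / 2)
              return ENC_LISTPACK;                   /* shrink to half: convert back */
          return cur;                                /* otherwise keep the encoding */
      }
      
      int main(void) {
          long limit = 128;
          list_enc enc = ENC_LISTPACK;
          enc = next_encoding(enc, 128, limit, 1);   /* hits the limit -> quicklist */
          enc = next_encoding(enc, 100, limit, 1);   /* above half -> stays quicklist */
          enc = next_encoding(enc, 64, limit, 1);    /* at half -> back to listpack */
          printf("final encoding: %s\n", enc == ENC_LISTPACK ? "listpack" : "quicklist");
          return 0;
      }
      ```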
          
      ## Interface changes
      1. Add a list entry param to listTypeSetIteratorDirection.
          When the list encoding is listpack, `listTypeIterator->lpi` points to the next entry after the current entry,
          so when changing the direction, we need to use the current node (listTypeEntry->p) to
          update `listTypeIterator->lpi` to the next node in the reverse direction.
      
      ## Benchmark
      ### Listpack VS Quicklist with one node
      * LPUSH - roughly 0.3% improvement
      * LRANGE - roughly 13% improvement
      
      ### Both are quicklist
      * LRANGE - roughly 3% improvement
      * LRANGE without pipeline - roughly 3% improvement
      
      As we can see from the benchmark results:
      1. When the list is quicklist encoded, LRANGE improves performance by <5%.
      2. When the list is listpack encoded, LRANGE improves performance by ~13%;
         the main enhancement is brought by `addListListpackRangeReply()`.
      
      ## Memory usage
      1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each
      show memory usage down by 35.49%, from 214MB to 138MB.
      
      ## Note
      1. Add conversion callback to support doing some work before conversion
          Since the quicklist iterator decompresses the current node when it is released, we can 
          no longer decompress the quicklist after we convert the list.
      2168ccc6
    • Explicitly send function commands to monitor (#11510) · d136bf28
      Madelyn Olson authored
      Both functions and eval are marked as "no-monitor", since we want to explicitly feed in the script command before the commands generated by the script. Note that we want this behavior generally, so that commands can redact arguments before being added to the monitor.
      d136bf28
  19. 15 Nov, 2022 1 commit
    • Fix double negative nan test, ignoring sign (#11506) · a4bcdbcf
      Binbin authored
      The test introduced in #11482 fails on ARM (extra CI):
      ```
      *** [err]: RESP2: RM_ReplyWithDouble: NaN in tests/unit/moduleapi/reply.tcl
      Expected '-nan' to be equal to 'nan' (context: type eval line 3 cmd
      {assert_equal "-nan" [r rw.double 0 0]} proc ::test)
      
      *** [err]: RESP3: RM_ReplyWithDouble: NaN in tests/unit/moduleapi/reply.tcl
      Expected ',-nan' to be equal to ',nan' (context: type eval line 8 cmd
      {assert_equal ",-nan" [r rw.double 0 0]} proc ::test)
      ```
      
      It looks like there is no negative nan on ARM. 
      a4bcdbcf
  20. 14 Nov, 2022 2 commits
  21. 13 Nov, 2022 1 commit
  22. 09 Nov, 2022 1 commit
    • Listpack encoding for sets (#11290) · 4e472a1a
      Viktor Söderqvist authored
      Small sets whose elements are not all integers are listpack encoded, by default
      up to 128 elements, max 64 bytes per element, with new configs `set-max-listpack-entries`
      and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable.
      
      Sets with only integers, even very small sets, are still intset encoded (up to 1G
      limit, etc.). Larger sets are hashtable encoded.
      
      This PR increments the RDB version, and has an effect on OBJECT ENCODING.
      
      Possible conversions when elements are added:
      
          intset -> listpack
          listpack -> hashtable
          intset -> hashtable
      
      Note: No conversion happens when elements are deleted. If all elements are
      deleted and then added again, the set is deleted and recreated, thus implicitly
      converted to a smaller encoding.
      4e472a1a
  23. 08 Nov, 2022 1 commit
  24. 04 Nov, 2022 1 commit
    • Introduce socket shutdown into connection type, used if a fork is active (#11376) · fac188b4
      Binbin authored
      Introduce socket `shutdown()` into connection type, and use it
      on normal socket if a fork is active. This allows us to close
      client connections when there are child processes sharing the
      file descriptors.
      
      Fixes #10077. The reason is that since the `fork()` child is holding
      the file descriptors, the `close` in `unlinkClient -> connClose`
      isn't sufficient. The client will not realize that the connection is
      disconnected until the child process ends.
      
      Let's try to be conservative and only use shutdown when the fork is active.
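      
      A small, self-contained POSIX C sketch of why `shutdown()` helps here (illustrative only, not the
      Redis connection-type code): `shutdown()` tears down the connection itself, so the peer sees EOF
      even while a forked child still holds a duplicate of the descriptor, whereas `close()` alone only
      drops this process's reference:
      
      ```
      /* Illustrative only: shutdown() acts on the connection, so it takes effect even
       * if a fork()ed child still holds a copy of the fd, while close() merely
       * releases this process's reference to it. */
      #include <stdio.h>
      #include <signal.h>
      #include <sys/socket.h>
      #include <sys/types.h>
      #include <unistd.h>
      
      /* Hypothetical helper mirroring the idea in this PR: when a child process is
       * active, shut the connection down before closing our descriptor. */
      static void close_client_fd(int fd, int child_process_active) {
          if (child_process_active) {
              /* Force the peer to see EOF immediately; the child's duplicate fd can
               * no longer keep the connection alive. */
              shutdown(fd, SHUT_RDWR);
          }
          close(fd);
      }
      
      int main(void) {
          int sv[2];
          if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) return 1;
          pid_t pid = fork();
          if (pid == 0) { pause(); _exit(0); }   /* child keeps both fds open */
          close_client_fd(sv[0], 1);             /* parent: shutdown + close */
          char buf[1];
          ssize_t n = read(sv[1], buf, 1);       /* sees EOF despite the child's fds */
          printf("read returned %zd (0 == EOF)\n", n);
          kill(pid, SIGTERM);
          return 0;
      }
      ```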
      fac188b4
  25. 03 Nov, 2022 1 commit
  26. 02 Nov, 2022 3 commits
    • Fix XSETID with max_deleted_entry_id issue (#11444) · 7395e370
      Wen Hui authored
      Resolve an edge case where the ID of a stream is updated retroactively
      to an ID lower than the already set max_deleted_entry_id.
      
      Currently, if we have a command as below:
      **xsetid mystream 1-1 MAXDELETEDID 1-2**
      Then we will get the following error:
      **(error) ERR The ID specified in XSETID is smaller than the provided max_deleted_entry_id**
      Because the provided MAXDELETEDID 1-2 is greater than the input last-id: 1-1
      
      Then we could assume there is a similar situation:
      step 1: we add three items to mystream
      
      **127.0.0.1:6381> xadd mystream 1-1 a 1
      "1-1"
      127.0.0.1:6381> xadd mystream 1-2 b 2
      "1-2"
      127.0.0.1:6381> xadd mystream 1-3 c 3
      "1-3"**
      
      step 2: we check the mystream information as below:
      **127.0.0.1:6381> xinfo stream mystream
       1) "length"
       2) (integer) 3
       7) "last-generated-id"
       8) "1-3"
       9) "max-deleted-entry-id"
      10) "0-0"**
      
      step 3: we delete the item id 1-2 and 1-3 as below:
      **127.0.0.1:6381> xdel mystream 1-2
      (integer) 1
      127.0.0.1:6381> xdel mystream 1-3
      (integer) 1**
      
      step 4: we check the mystream information:
      127.0.0.1:6381> xinfo stream mystream
       1) "length"
       2) (integer) 1
       7) "last-generated-id"
       8) "1-3"
       9) "max-deleted-entry-id"
      10) "1-3"
      
      We can notice that the **max-deleted-entry-id was updated to 1-3**, so right now, if we just run
      **xsetid mystream 1-2**
      the command has the same effect as **xsetid mystream 1-2 MAXDELETEDID 1-3**.
      
      So we should return an error to the client: **(error) ERR The ID specified in XSETID is smaller than current max_deleted_entry_id**
      7395e370
    • Fix command BITFIELD_RO and BITFIELD argument json file, add some test cases for them (#11445) · fea9bbbe
      Wen Hui authored
      According to the source code, the commands can be executed with only the key name,
      and no GET/SET/INCR operation arguments.
      Change the docs to reflect that by marking these arguments as optional.
      Also add tests.
      fea9bbbe
    • Re-design cluster link send buffer to improve memory management (#11343) · 47c493e0
      Brennan authored
      Re-design cluster link send queue to improve memory management
      47c493e0
  27. 01 Nov, 2022 1 commit
  28. 27 Oct, 2022 2 commits
    • Refactor and (internally) rebrand from pause-clients to pause-actions (#11098) · c0d72262
      Moti Cohen authored
      Renamed from "Pause Clients" to "Pause Actions" since the mechanism can pause
      several actions in redis, not just clients (e.g. eviction, expiration).
      
      Previously each pause purpose (which has a timeout that's tracked separately from others purposes),
      also implicitly dictated what it pauses (reads, writes, eviction, etc). Now it is explicit, and
      the actions that are paused (bit flags) are defined separately from the purpose.
      
      - Previously, when using feature pause-client it also implicitly means to make the server static:
        - Pause replica traffic
        - Pauses eviction processing
        - Pauses expire processing
      
      Making the server static is used also for failover and shutdown. This PR internally rebrand
      pause-client API to become pause-action API. It also Simplifies pauseClients structure
      by replacing pointers array with static array.
      
      The context of this PR is to add another trigger to pause-client which will activated in case
      of OOM as throttling mechanism ([see here](https://github.com/redis/redis/issues/10907)).
      In this case we want only to pause client, and eviction actions.
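      
      A tiny C sketch of the purpose-vs-action split described above (hypothetical names and flag values,
      not the actual Redis pause API): each purpose carries its own deadline plus an explicit bitmask of
      paused actions, and a check tests the union of the still-active masks:
      
      ```
      /* Illustrative only (hypothetical names/values): pause purposes each carry an
       * explicit action bitmask and their own deadline; an action is paused if any
       * still-active purpose includes it in its mask. */
      #include <stdio.h>
      
      #define PAUSE_ACTION_CLIENT_WRITE  (1 << 0)
      #define PAUSE_ACTION_CLIENT_ALL    (1 << 1)
      #define PAUSE_ACTION_EXPIRE        (1 << 2)
      #define PAUSE_ACTION_EVICT         (1 << 3)
      #define PAUSE_ACTION_REPLICA       (1 << 4)
      
      typedef enum { PURPOSE_CLIENT_CMD = 0, PURPOSE_FAILOVER, PURPOSE_OOM, NUM_PURPOSES } pause_purpose;
      
      typedef struct {
          long long until_ms;   /* deadline per purpose, tracked separately */
          int actions;          /* explicit bitmask of what this purpose pauses */
      } pause_entry;
      
      static pause_entry pauses[NUM_PURPOSES];
      
      static void pause_actions(pause_purpose p, long long until_ms, int actions) {
          pauses[p].until_ms = until_ms;
          pauses[p].actions = actions;
      }
      
      static int is_action_paused(int action, long long now_ms) {
          int active = 0;
          for (int p = 0; p < NUM_PURPOSES; p++)
              if (pauses[p].until_ms > now_ms) active |= pauses[p].actions;
          return (active & action) != 0;
      }
      
      int main(void) {
          /* e.g. an OOM-throttling trigger pauses only clients and eviction */
          pause_actions(PURPOSE_OOM, 1000, PAUSE_ACTION_CLIENT_ALL | PAUSE_ACTION_EVICT);
          printf("eviction paused: %d, expire paused: %d\n",
                 is_action_paused(PAUSE_ACTION_EVICT, 500),
                 is_action_paused(PAUSE_ACTION_EXPIRE, 500));
          return 0;
      }
      ```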
      c0d72262
    • RM_Call - only enforce OOM on scripts if 'M' flag is sent (#11425) · 38028dab
      Shaya Potter authored
      
      
      RM_Call is designed to let modules call redis commands disregarding the
      OOM state (the module is responsible to declare its command flags to redis,
      or perform the necessary checks).
      The other (new) alternative is to pass the "M" flag to RM_Call so that redis can
      OOM reject commands implicitly.
      
      However, currently RM_Call enforces OOM on scripts (excluding scripts that
      declared `allow-oom`) in all cases, regardless of the RM_Call "M" flag being present.
      
      This PR fixes scripts to be consistent with other commands being executed by RM_Call.
      It modifies the flow so that, in effect, scripts are treated as if they have the ALLOW_OOM script
      flag when the "M" flag is not passed (i.e. no OOM checking is being performed by RM_Call,
      so no OOM checking should be done on the script either).
      Co-authored-by: Oran Agra <oran@redislabs.com>
      38028dab
  29. 25 Oct, 2022 1 commit
  30. 24 Oct, 2022 1 commit
    • Set errno in case XADD with partial ID fails (#11424) · 737a0905
      guybe7 authored
      This is a rare failure mode of a new feature of redis 7 introduced in #9217
      (when the incremental part of the ID overflows).
      Till now, the outcome of that error was undetermined (could easily result in
      `Elements are too large to be stored` wrongly, due to unset `errno`).
      737a0905
  31. 22 Oct, 2022 1 commit
    • Make PFMERGE source key optional in docs, add tests with one input key, add... · 9e1b879f
      Binbin authored
      Make PFMERGE source key optional in docs, add tests with one input key, add tests on missing source keys (#11205)
      
      The following two cases will create an empty destkey HLL:
      1. called with no source keys, like `pfmerge destkey`
      2. called with non-existing source keys, like `pfmerge destkey non-existing-source-key`
      
      In the first case, in `PFMERGE`, the dest key is actually one of the source keys too.
      So `PFMERGE k1 k2` is equivalent to `SUNIONSTORE k1 k1 k2`,
      and `PFMERGE k1` is equivalent to `SUNIONSTORE k1 k1`.
      So the first case is reasonable; the source key is actually optional.
      
      As for the second case, `PFMERGE` on missing keys should succeed and create an empty dest.
      This is consistent with `PFCOUNT` and also with `SUNIONSTORE`, so no change is needed.
      9e1b879f