1. 30 Nov, 2022 1 commit
    • Huang Zhw's avatar
      Add a special notification unlink available only for modules (#9406) · c8181314
      Huang Zhw authored
      
      
      Add a new module event, `RedisModule_Event_Key`. This event is fired
      when a key is removed from the keyspace.
      The event includes an open key that can be used for reading the key before
      it is removed. Modules can also extract the key name and use RM_Open
      or RM_Call to access the key from within that event, but they shouldn't
      modify anything from within it.
      
      The following sub events are available:
        - `REDISMODULE_SUBEVENT_KEY_DELETED`
        - `REDISMODULE_SUBEVENT_KEY_EXPIRED`
        - `REDISMODULE_SUBEVENT_KEY_EVICTED`
        - `REDISMODULE_SUBEVENT_KEY_OVERWRITE`
      
      The data pointer can be cast to a RedisModuleKeyInfo structure
      with the following fields:
      ```
           RedisModuleKey *key;    // Opened Key
       ```
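      
      For illustration, a minimal sketch of a module subscribing to this event (assuming the
      event constant follows the usual `RedisModuleEvent_*` naming; the module name and log
      message are hypothetical):
      ```c
      #include "redismodule.h"
      
      /* Hypothetical example: log the size of every key that expires.
       * The opened key must be treated as read-only here. */
      void KeyRemovalCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                              uint64_t sub, void *data) {
          REDISMODULE_NOT_USED(e);
          RedisModuleKeyInfo *ki = data;            /* cast of the data pointer */
          if (sub == REDISMODULE_SUBEVENT_KEY_EXPIRED) {
              size_t len = RedisModule_ValueLength(ki->key);
              RedisModule_Log(ctx, "notice", "expired key had %zu elements", len);
          }
      }
      
      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "keylog", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          return RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Key,
                                                    KeyRemovalCallback);
      }
      ```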
      
      ### internals
      
      * We also add two dict functions (see the usage sketch after this list):
        `dictTwoPhaseUnlinkFind` finds an element in the table and also gets the plink of the entry.
        If the element is found, the entry is returned; the user should later call `dictTwoPhaseUnlinkFree`
        with it in order to unlink and release it. Otherwise, if the key is not found, NULL is returned.
        These two functions should be used as a pair: `dictTwoPhaseUnlinkFind` pauses rehashing and
        `dictTwoPhaseUnlinkFree` resumes it.
      * We change `dbOverwrite` to `dbReplaceValue`, which just replaces the value of the key and
        doesn't fire any events. The "overwrite" part (which emits events) now happens only when called from `setKey`;
        the other callers of dbOverwrite were ones that just update the value in place (INCR*, SPOP,
        and dbUnshareStringValue). This should not have any real impact, since `moduleNotifyKeyUnlink` and
        `signalDeletedKeyAsReady` wouldn't have mattered in these cases anyway (i.e. module keys and
        stream keys didn't have direct calls to dbOverwrite).
      * Since we allow doing RM_OpenKey from within these callbacks, we temporarily disable lazy expiry.
      * We also temporarily disable lazy expiry when we are in the unlink/unlink2 callback and the keyspace
        notification callback.
      * Move special definitions to the top of redismodule.h
        This is needed to resolve compilation errors with RedisModuleKeyInfoV1
        that carries a RedisModuleKey member.
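      
      A rough usage sketch of the two-phase unlink pair (assuming the find call takes the dict,
      the key, and output parameters for the plink and table index; the observer helper is
      hypothetical):
      ```c
      #include "dict.h"
      
      void notifyValueRemoved(void *val);   /* hypothetical observer */
      
      /* Sketch: delete `key` from dict `d`, letting an observer read the value
       * between the find and the actual unlink + free. */
      void deleteWithObserver(dict *d, void *key) {
          dictEntry **plink;
          int table;
          dictEntry *de = dictTwoPhaseUnlinkFind(d, key, &plink, &table);
          if (de == NULL) return;                 /* key not found */
          /* Rehashing is paused here, so the entry is stable and can be read. */
          notifyValueRemoved(dictGetVal(de));
          /* Unlink and release the entry; this also resumes rehashing. */
          dictTwoPhaseUnlinkFree(d, de, plink, table);
      }
      ```
      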
      Co-authored-by: Oran Agra <oran@redislabs.com>
      c8181314
  2. 29 Nov, 2022 1 commit
    • filipe oliveira's avatar
      Reduce eval related overhead introduced in v7.0 by evalCalcFunctionName (#11521) · 7dfd7b91
      filipe oliveira authored
      
      
      As discussed in #10981, we see a degradation in performance
      between v6.2 and v7.0 of Redis on the EVAL command.
      
      After profiling the current unstable branch we can see that we call the
      expensive function evalCalcFunctionName twice.
      
      The current "fix" is basically to avoid calling evalCalcFunctionName and
      even dictFind(lua_scripts) twice for the same command.
      Instead, we cache the current script's dictEntry (for both Eval and Functions)
      in the current client so we don't have to repeat these calls.
      The exception is when doing an EVAL on a new script that's not yet
      in the script cache; in that case we will call evalCalcFunctionName (and even
      evalExtractShebangFlags) twice.
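      
      The caching idea, reduced to a sketch (the struct and helper below are illustrative,
      not the actual names used in the patch):
      ```c
      #include "sds.h"
      #include "dict.h"
      
      /* Illustrative only: remember the lua_scripts dictEntry of the script the
       * client is running, so repeated EVALSHA of the same script can skip the
       * name calculation and dictFind() on the hot path. */
      typedef struct cachedScript {
          sds sha;              /* sha1 of the cached script, or NULL */
          dictEntry *entry;     /* cached entry from the lua_scripts dict */
      } cachedScript;
      
      dictEntry *lookupScriptCached(cachedScript *cache, dict *lua_scripts, sds sha) {
          if (cache->sha && sdscmp(cache->sha, sha) == 0)
              return cache->entry;                    /* hit: no extra lookup */
          dictEntry *de = dictFind(lua_scripts, sha); /* miss: a single lookup */
          if (cache->sha) sdsfree(cache->sha);
          cache->sha = sdsdup(sha);
          cache->entry = de;
          return de;
      }
      ```
      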
      Co-authored-by: Oran Agra <oran@redislabs.com>
      7dfd7b91
  3. 28 Nov, 2022 5 commits
    • Mingyi Kang's avatar
      Hyperloglog avoid allocate more than 'server.hll_sparse_max_bytes' bytes of... · f8ac5a65
      Mingyi Kang authored
      Hyperloglog avoid allocate more than 'server.hll_sparse_max_bytes' bytes of memory for sparse representation (#11438)
      
      Before this PR, we used sdsMakeRoomFor() to expand the size of the hyperloglog
      string (sparse representation). And because sdsMakeRoomFor() uses a greedy
      strategy (allocating about twice what we need), the memory we allocated for the
      hyperloglog could exceed `server.hll_sparse_max_bytes` bytes.
      The memory beyond `server.hll_sparse_max_bytes` would be wasted.
      
      In this pull request, we tone down the greediness of the allocation growth, and also
      make sure it never requests more than `server.hll_sparse_max_bytes`.
      
      This could in theory mean the size of the hyperloglog string is insufficient for the
      increment we need, but that should be OK, since in this case we promote the hyperloglog
      to the dense representation; an assertion was added to make sure.
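      
      A sketch of the capped growth policy (the helper name and the headroom factor are
      illustrative; the real code operates on the sds allocation):
      ```c
      #include <stddef.h>
      
      /* Grow with modest headroom instead of doubling, and never let the sparse
       * HLL allocation exceed the configured maximum. */
      size_t hllSparseGrowTarget(size_t curlen, size_t needed, size_t sparse_max) {
          size_t target = curlen + needed;
          target += target / 4;                     /* modest headroom, not 2x */
          if (target > sparse_max) target = sparse_max;
          /* If the cap leaves target < curlen + needed, the caller promotes the
           * HLL to the dense representation instead (guarded by an assertion). */
          return target;
      }
      ```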
      
      This PR also adds some tests and fixes some typos and indentation issues.
      f8ac5a65
    • zhaozhao.zz's avatar
      benchmark getRedisConfig exit only when meet NOAUTH error (#11096) · f0005b53
      zhaozhao.zz authored
      redis-benchmark: when trying to get the CONFIG before a benchmark,
      avoid printing any warning on most errors (e.g. a NOPERM error),
      and avoid aborting the benchmark on NOPERM.
      Keep the warning only when we abort the benchmark on a NOAUTH error.
      f0005b53
    • Binbin's avatar
      Fix replication on expired key test timing issue, give it more chances (#11548) · 06b577aa
      Binbin authored
      In the replica, the key expired before the master's `INCR` arrived, so INCR
      created a new key in the replica and the test failed.
      ```
      *** [err]: Replication of an expired key does not delete the expired key in tests/integration/replication-4.tcl
      Expected '0' to be equal to '1' (context: type eval line 13 cmd {assert_equal 0 [$slave exists k]} proc ::test)
      ```
      
      This test is very likely to produce a false positive if `wait_for_ofs_sync`
      takes longer than the expiration time, so give it a few more chances.
      
      The test was introduced in #9572.
      06b577aa
    • C Charles's avatar
      Add withscore option to ZRANK and ZREVRANK. (#11235) · eeca7f29
      C Charles authored
      Add a "withscore" option to ZRANK and ZREVRANK.
      
      Add the `[withscore]` option to both `zrank` and `zrevrank`, like this:
      ```
      z[rev]rank key member [withscore]
      ```
      eeca7f29
    • filipe oliveira's avatar
      Simplified geoAppendIfWithinShape() and removed spurious calls do sdsdup and sdsfree (#11522) · 376b689b
      filipe oliveira authored
      
      
      In scenarios where we have large datasets and the elements are not
      contained within the range, we make spurious calls to sdsdup and sdsfree.
      I.e. instead of pre-creating an sds before we know whether we're going to use it
      or not, change the role of geoAppendIfWithinShape to just do geoWithinShape,
      and let the caller create the string only when needed.
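      
      The resulting caller-side pattern, as a sketch (the append helper and the meaning of the
      return value of geoWithinShape are assumptions here, not the exact geo.c code):
      ```c
      /* Test first, allocate only on a hit: the shape test itself allocates nothing,
       * and the member sds is duplicated only for points inside the shape. */
      static int appendIfWithinShape(geoArray *ga, GeoShape *shape,
                                     double score, double *xy, sds member) {
          double distance;
          if (geoWithinShape(shape, score, xy, &distance) != 0)
              return 0;                        /* miss: no sdsdup/sdsfree at all */
          geoPoint *gp = geoArrayAppend(ga);   /* hypothetical append helper */
          gp->dist = distance;
          gp->score = score;
          gp->member = sdsdup(member);         /* sds created only when needed */
          return 1;
      }
      ```
      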
      Co-authored-by: Oran Agra <oran@redislabs.com>
      376b689b
  4. 27 Nov, 2022 4 commits
  5. 26 Nov, 2022 3 commits
  6. 25 Nov, 2022 1 commit
  7. 24 Nov, 2022 5 commits
    • Meir Shpilraien (Spielrein)'s avatar
      Module API to allow writes after key space notification hooks (#11199) · abc345ad
      Meir Shpilraien (Spielrein) authored
      ### Summary of API additions
      
      * `RedisModule_AddPostNotificationJob` - a new API to call inside a key-space
        notification (and in more locations in the future) that allows adding a post job as described below.
      * A new module option, `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`,
        allows disabling Redis' protection against nested key-space notifications.
      * `RedisModule_GetModuleOptionsAll` - gets the mask of all supported module options, so a module
        can check whether a given option is supported by the currently running Redis instance.
      
      ### Background
      
      This PR is a proposal for handling write operations inside module key-space notifications.
      After a lot of discussion we came to the conclusion that a module should not perform any write
      operations from within a key-space notification.
      
      Some examples of issues that such write operations can cause are described in the following links:
      
      * Bad replication order - https://github.com/redis/redis/pull/10969
      * Use after free - https://github.com/redis/redis/pull/10969#issuecomment-1223771006
      * Use after free - https://github.com/redis/redis/pull/9406#issuecomment-1221684054
      
      
      
      There are probably more issues that are yet to be discovered. The underlying problem with writing
      inside a key-space notification is that the notification runs synchronously; this means that the notification
      code is executed in the middle of Redis logic (command logic, eviction, expiry).
      Redis **does not expect** the data to change while running its logic, and such changes
      can crash Redis or cause unexpected behaviour.
      
      The solution is to state that modules **should not** perform any write command inside a key-space
      notification (we can choose whether or not we want to enforce it). To still cover the use case where a
      module wants to perform a write operation as a reaction to key-space notifications, we introduce
      a new API, `RedisModule_AddPostNotificationJob`, that allows registering a callback that will be
      called by Redis when the following conditions hold:
      
      * It is safe to perform any write operation.
      * The job will be called atomically alongside the operation that triggered it (in our case, a key-space
        notification).
      
      A module can use this new API to safely perform any write operation and still achieve atomicity
      between the notification and the write.
      
      Although the API is currently only supported for key-space notifications, it is written in a generic
      way so that in the future we will be able to use it in other places (server events, for example).
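      
      For illustration, a sketch of how a module might use the new API from a keyspace
      notification handler (registration via `RedisModule_SubscribeToKeyspaceEvents` is
      omitted, and the "last_expired" key is hypothetical):
      ```c
      #include "redismodule.h"
      
      /* Runs later, atomically with the notification, when writes are safe. */
      static void RecordExpired(RedisModuleCtx *ctx, void *pd) {
          RedisModuleString *keyname = pd;
          RedisModuleCallReply *rep =
              RedisModule_Call(ctx, "SET", "cs", "last_expired", keyname);
          if (rep) RedisModule_FreeCallReply(rep);
      }
      
      static void FreeJobData(void *pd) {
          RedisModule_FreeString(NULL, pd);
      }
      
      /* Keyspace notification handler: no writes here, only schedule the job. */
      static int OnKeyEvent(RedisModuleCtx *ctx, int type, const char *event,
                            RedisModuleString *key) {
          REDISMODULE_NOT_USED(type); REDISMODULE_NOT_USED(event);
          RedisModuleString *dup = RedisModule_CreateStringFromString(NULL, key);
          RedisModule_AddPostNotificationJob(ctx, RecordExpired, dup, FreeJobData);
          return REDISMODULE_OK;
      }
      ```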
      
      ### Technical Details
      
      Whenever a module uses `RedisModule_AddPostNotificationJob`, the callback is added to a list
      of callbacks (called `modulePostExecUnitJobs`) that need to be invoked after the current execution
      unit ends (whether it's a command, an eviction, or an active expire). In order to trigger those callbacks
      atomically with the notification effect, we call them in `postExecutionUnitOperations`
      (which was `propagatePendingCommands` before this PR). The new function fires the post jobs
      and then calls `propagatePendingCommands`.
      
      If a callback performs more operations that trigger more key-space notifications, those
      notifications might register more callbacks. Those callbacks will be added to the end
      of the `modulePostExecUnitJobs` list and will be invoked atomically after the current callback ends.
      This raises a concern of entering an infinite loop. We consider an infinite loop a logical bug
      that needs to be fixed in the module; an attempt to protect against infinite loops by halting the
      execution could violate the feature's correctness, so **Redis will make no attempt
      to protect the module from infinite loops**.
      
      In addition, key-space notifications are currently not nested. Some modules might want to allow
      nesting key-space notifications. To allow that and keep backward compatibility, we introduce a
      new module option called `REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS`.
      Setting this option disables the Redis key-space notification nesting protection and
      passes this responsibility to the module.
      
      ### Redis infrastructure
      
      This PR promotes the existing `propagatePendingCommands` to an "Execution Unit" concept,
      which is called after each atomic unit of execution.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
      Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
      abc345ad
    • filipe oliveira's avatar
      GEOSEARCH BYBOX: Reduce wastefull computation on... · ae1de549
      filipe oliveira authored
      
      GEOSEARCH BYBOX: Reduce wasteful computation in geohashGetDistanceIfInRectangle and geohashGetDistance (#11535)
      
      Optimize geohashGetDistanceIfInRectangle for the case where there are many misses.
      It calls geohashGetDistance 3 times; the first 2 calls produce intermediate results.
      This PR focuses on optimizing those 2 intermediate results:
      
      1. Reduce the expensive computation in the intermediate geohashGetDistance call where both points share the same longitude.
      2. Avoid the expensive lon_distance calculation if the lat_distance check fails beforehand.
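      
      A simplified sketch of the second point (argument order for geohashGetDistance is assumed
      to be lon/lat pairs; the first point, the same-longitude fast path, is not shown):
      ```c
      /* Check the cheap latitude distance first and bail out before doing
       * any longitude-distance work at all. */
      int withinRectangle(double lon1, double lat1, double lon2, double lat2,
                          double width_m, double height_m, double *distance) {
          double lat_distance = geohashGetDistance(lon2, lat2, lon2, lat1);
          if (lat_distance > height_m / 2) return 0;   /* reject without lon math */
          double lon_distance = geohashGetDistance(lon2, lat2, lon1, lat2);
          if (lon_distance > width_m / 2) return 0;
          *distance = geohashGetDistance(lon1, lat1, lon2, lat2);
          return 1;
      }
      ```
      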
      Co-authored-by: Oran Agra <oran@redislabs.com>
      ae1de549
    • Binbin's avatar
      Fix sanitizer warning, use offsetof instread of member_offset (#11539) · ca174e1d
      Binbin authored
      
      
      In #11511 we introduced member_offset which has a sanitizer warning:
      ```
      multi.c:390:26: runtime error: member access within null pointer of type 'watchedKey' (aka 'struct watchedKey')
      SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior multi.c:390:26
      ```
      
      We can use offsetof() from stddef.h, which is part of the standard library,
      to avoid this UB :) The sanitizer should not complain after we change this.
      
      1. Use offsetof instead of member_offset, so we can delete member_offset now.
      2. Change the (uint8_t*) cast to (char*).
      
      This does not matter much, but according to the standard we are only allowed
      to cast a pointer to its own type, char*, or void*. Let's try to follow
      the rules.
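      
      For illustration, the offsetof-based pointer recovery with a simplified struct layout
      (the real watchedKey has more fields):
      ```c
      #include <stddef.h>
      
      typedef struct listNode { struct listNode *prev, *next; void *value; } listNode;
      
      typedef struct watchedKey {
          listNode node;   /* embedded node in the per-key clients list */
          void *key;       /* illustrative extra field */
      } watchedKey;
      
      /* Recover the enclosing watchedKey from its embedded list node using
       * offsetof() from stddef.h and a char* cast. */
      static inline watchedKey *watchedKeyFromNode(listNode *ln) {
          return (watchedKey *)((char *)ln - offsetof(watchedKey, node));
      }
      ```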
      
      This change was suggested by tezc, and the comments are also from him.
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      ca174e1d
    • sundb's avatar
      Ignore -Wstringop-overread warning for SHA1Transform() on GCC 12 (#11538) · fd808185
      sundb authored
      Fix a compile warning for the SHA1Transform() function under Alpine with GCC 12.
      
      Warning:
      ```
      In function 'SHA1Update',
          inlined from 'SHA1Final' at sha1.c:187:9:
      sha1.c:144:13: error: 'SHA1Transform' reading 64 bytes from a region of size 0 [-Werror=stringop-overread]
        144 |             SHA1Transform(context->state, &data[i]);
            |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      sha1.c:144:13: note: referencing argument 2 of type 'const unsigned char[64]'
      sha1.c: In function 'SHA1Final':
      sha1.c:56:6: note: in a call to function 'SHA1Transform'
         56 | void SHA1Transform(uint32_t state[5], const unsigned char buffer[64])
            |      ^~~~~~~~~~~~~
      ```
      
      This warning is a false positive, because the loop condition already guarantees that there are at least 64 bytes after position `i`:
      ```c
      for ( ; i + 63 < len; i += 64) {
          SHA1Transform(context->state, &data[i]);
      }
      ```
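      
      A sketch of the pragma-based suppression the title refers to, wrapping only that call site
      on GCC 12 and newer (the exact guard used in the commit may differ):
      ```c
      #if defined(__GNUC__) && __GNUC__ >= 12 && !defined(__clang__)
      #pragma GCC diagnostic push
      #pragma GCC diagnostic ignored "-Wstringop-overread"
      #endif
          SHA1Transform(context->state, &data[i]);
      #if defined(__GNUC__) && __GNUC__ >= 12 && !defined(__clang__)
      #pragma GCC diagnostic pop
      #endif
      ```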
      
      Reference: https://github.com/libevent/libevent/commit/e1d7d3e40a7fd50348d849046fbfd9bf976e643c
      fd808185
    • Wen Hui's avatar
      Update Sentinel Debug command json file and add test case for it (#11513) · 75c66fb0
      Wen Hui authored
      The SENTINEL DEBUG command can be called with no arguments, in which case it displays all
      configurable parameters and their values.
      Update the command arguments in the docs (json file) to indicate that
      the arguments are optional.
      75c66fb0
  8. 23 Nov, 2022 1 commit
    • Mingyi Kang's avatar
      optimize unwatchAllKeys() (#11511) · 3b462ce5
      Mingyi Kang authored
      In the unwatchAllKeys() function, we traverse all the keys watched by the client,
      and for each key we need to remove the client from the list of clients watching that key.
      This is implemented with listSearchKey, which traverses the list of clients.
      
      If we can reach the node of the list of clients from watchedKey in O(1) time,
      then we do not need to call listSearchKey anymore.
      
      Changes in this PR: put the node of each watched key's clients list in the
      db inside the watchedKey structure. In this way, for every key watched by the client,
      we can get the watchedKey structure and then reach the node in the clients list in
      db->watched_keys to remove it from that list.
      From the perspective of the list of clients watching the key, the list node is inside a
      watchedKey structure, so we can get to the watchedKey struct from the listnode by
      struct member offset math. And because of this, node->value is no longer needed, so we can point
      node->value to the list itself, so that we don't need to fetch the list of clients from the dict.
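      
      A sketch of the resulting O(1) unwatch (struct layout simplified; `unwatchOne` is a
      hypothetical stand-in for the loop body of unwatchAllKeys):
      ```c
      #include "adlist.h"   /* listNode, list, listUnlinkNode */
      
      typedef struct watchedKey {
          listNode node;    /* this client's node in the per-key clients list */
          void *key;        /* illustrative; the real struct has more fields */
      } watchedKey;
      
      static void unwatchOne(watchedKey *wk) {
          /* node.value points back at the clients list itself, so we don't have
           * to fetch it from db->watched_keys, and no listSearchKey scan is needed. */
          list *clients = wk->node.value;
          listUnlinkNode(clients, &wk->node);   /* O(1) removal */
      }
      ```
      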
      3b462ce5
  9. 22 Nov, 2022 4 commits
    • Itamar Haber's avatar
      Deprecates SETEX, PSETEX and SETNX (#11512) · f36eb5a1
      Itamar Haber authored
      Technically, these commands were deprecated as of 2.6.12, with the
      introduction of the respective arguments to SET.
      In reality, the deprecation note will only be added in 7.2.0.
      f36eb5a1
    • Binbin's avatar
      Make assert_refcount skip the OBJECT REFCOUNT check with needs:debug tag (#11487) · 543e0daa
      Binbin authored
      This PR adds `assert_refcount_morethan`, and modifies `assert_refcount` to skip
      the `OBJECT REFCOUNT` check with the `needs:debug` flag. Use them for all
      `OBJECT REFCOUNT` calls, and also update tests/README to be more specific.
      
      The reasoning is that some of these tests could be testing something important,
      and along the way they also add a check for the refcount, and it would be a shame to skip
      the whole test just because the refcount functionality is missing or blocked.
      It's much like the fact that some Redis variants may not support DEBUG:
      we still want to run the majority of the test for coverage, and just skip the digest match.
      543e0daa
    • Wen Hui's avatar
      Add explicit error log message for AOF_TRUNCATED status when server load AOF file (#11484) · 6e9724cb
      Wen Hui authored
      Now, according to the comments, if the truncated file is not the last file,
      it is considered a fatal error.
      The return code is then updated to AOF_FAILED, and the server exits
      without any error message to the client.
      
      Similar to other error situations, this PR adds an explicit error message
      for this case and makes it clear to the client what happened.
      6e9724cb
    • Binbin's avatar
      Fix set with duplicate elements causes sdiff to hang (#11530) · 3f8756a0
      Binbin authored
      
      
      This payload produces a set with duplicate elements (listpack encoding):
      ```
      restore _key 0 "\x14\x25\x25\x00\x00\x00\x0A\x00\x06\x01\x82\x5F\x35\x03\x04\x01\x82\x5F\x31\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x39\x03\x82\x5F\x33\x03\x08\x01\x02\x01\xFF\x0B\x00\x31\xBE\x7D\x41\x01\x03\x5B\xEC"
      
      smembers key
      1) "6"
      2) "_5"
      3) "4"
      4) "_1"
      5) "_3"  ---> dup
      6) "0"
      7) "_9"
      8) "_3"  ---> dup
      9) "8"
      10) "2"
      ```
      
      This kind of set will cause SDIFF to hang: SDIFF generates a broken
      protocol reply and leaves the client hanging. (Ten elements are expected, but only
      nine are returned due to the duplication.)
      
      If we set `sanitize-dump-payload` to yes, we will be able to find
      the duplicate elements and report "ERR Bad data format".
      
      Discovered and discussed in #11290.
      
      This PR also improves the printouts when the corrupt-dump-fuzzer hangs: it will
      print the commands and the payload, for example:
      ```
      Testing integration/corrupt-dump-fuzzer
      [TIMEOUT]: clients state report follows.
      sock6 => (SPAWNED SERVER) pid:28884
      Killing still running Redis server 28884
      commands caused test to hang:
      SDIFF __key 
      payload that caused test to hang: "\x14\balabala"
      ```
      Co-authored-by: Oran Agra <oran@redislabs.com>
      3f8756a0
  10. 21 Nov, 2022 1 commit
    • Binbin's avatar
      Fix sentinel update loglevel tls test (#11528) · 0f857131
      Binbin authored
      Apparently we used to set `loglevel debug` for TLS in spawn_instance,
      i.e. cluster and sentinel tests used to run with debug logging, but only when TLS mode was enabled.
      This was probably a leftover from when the TLS mode tests were created.
      It caused a new test created for #11214 to fail in TLS mode.
      
      At the same time, in order to better distinguish the tests, change the
      name of `test-centos7-tls` to `test-centos7-tls-module`, change the name
      of `test-centos7-tls-no-tls` to `test-centos7-tls-module-no-tls`.
      
      Note that in `test-centos7-tls-module`, we did not pass `--tls-module`
      in the sentinel test because it is not supported there; see 4faddf18, added in #9320.
      So only `test-ubuntu-tls` fails in the daily CI.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      0f857131
  11. 20 Nov, 2022 2 commits
    • Binbin's avatar
      sanitize dump payload: fix crash with empty set with listpack encoding (#11519) · 51887e61
      Binbin authored
      The following example will create an empty set (listpack encoding):
      ```
      > RESTORE key 0
      "\x14\x25\x25\x00\x00\x00\x00\x00\x02\x01\x82\x5F\x37\x03\x06\x01\x82\x5F\x35\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x31\x03\x82\x5F\x39\x03\x04\xA9\x08\x01\xFF\x0B\x00\xA3\x26\x49\xB4\x86\xB0\x0F\x41"
      OK
      > SCARD key
      (integer) 0
      > SRANDMEMBER key
      Error: Server closed the connection
      ```
      
      In the spirit of #9297, skip the empty set when loading RDB_TYPE_SET_LISTPACK.
      The issue was introduced in #11290.
      51887e61
    • Wen Hui's avatar
      Add CONFIG SET and GET loglevel feature in Sentinel (#11214) · 2f411770
      Wen Hui authored
      Until now, Sentinel allowed modifying the log level in the config file, but not at runtime.
      This makes it possible to tune the log level at runtime.
      2f411770
  12. 17 Nov, 2022 1 commit
    • Ping Xie's avatar
      Introduce Shard IDs to logically group nodes in cluster mode (#10536) · 203b12e4
      Ping Xie authored
      Introduce Shard IDs to logically group nodes in cluster mode.
      1. Added a new "shard_id" field to "cluster nodes" output and nodes.conf after "hostname"
      2. Added a new PING extension to propagate "shard_id"
      3. Handled upgrade from pre-7.2 releases automatically
      4. Refactored PING extension assembling/parsing logic
      
      Behavior of Shard IDs:
      
      Replicas will always follow the shards of their reported primaries. If a primary updates its shard ID, the replica will follow. (This need not follow for cluster v2) This is not an expected use case.
      203b12e4
  13. 16 Nov, 2022 2 commits
    • sundb's avatar
      Add listpack encoding for list (#11303) · 2168ccc6
      sundb authored
      Improve memory efficiency of list keys
      
      ## Description of the feature
      The new listpack encoding uses the old `list-max-listpack-size` config
      to perform the conversion. We can think of it as a single node of a
      quicklist, but without the 80 bytes of overhead (internal fragmentation included)
      of the quicklist and quicklistNode structs.
      For example, a list key with 5 items of 10 chars each now takes 128 bytes
      instead of the 208 it used to take.
      
      ## Conversion rules
      * Convert listpack to quicklist
        When the listpack length or size reaches the `list-max-listpack-size` limit,
        it will be converted to a quicklist.
      * Convert quicklist to listpack
        When a quicklist has only one node, and its length or size is reduced to half
        of the `list-max-listpack-size` limit, it will be converted to a listpack.
        This is done to avoid frequent conversions when we add or remove elements near the boundary size or length.
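      
      A sketch of the hysteresis in these rules (treating the limit as separate entry and byte
      caps for clarity; the real config value can encode either):
      ```c
      #include <stddef.h>
      
      /* Convert up as soon as either cap is exceeded... */
      static int shouldConvertToQuicklist(size_t entries, size_t bytes,
                                          size_t max_entries, size_t max_bytes) {
          return entries > max_entries || bytes > max_bytes;
      }
      
      /* ...but convert back down only at half the caps, and only when the
       * quicklist has a single node, to avoid flip-flopping at the boundary. */
      static int shouldConvertToListpack(unsigned long nodes, size_t entries, size_t bytes,
                                         size_t max_entries, size_t max_bytes) {
          return nodes == 1 && entries <= max_entries / 2 && bytes <= max_bytes / 2;
      }
      ```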
          
      ## Interface changes
      1. Add a list entry param to listTypeSetIteratorDirection.
          When the list encoding is listpack, `listTypeIterator->lpi` points to the next entry of the current entry,
          so when changing the direction, we need to use the current node (listTypeEntry->p) to
          update `listTypeIterator->lpi` to the next node in the reverse direction.
      
      ## Benchmark
      ### Listpack VS Quicklist with one node
      * LPUSH - roughly 0.3% improvement
      * LRANGE - roughly 13% improvement
      
      ### Both are quicklist
      * LRANGE - roughly 3% improvement
      * LRANGE without pipeline - roughly 3% improvement
      
      As we can see from the benchmark results:
      1. When the list is quicklist encoded, LRANGE improves performance by <5%.
      2. When the list is listpack encoded, LRANGE improves performance by ~13%;
         the main enhancement is brought by `addListListpackRangeReply()`.
      
      ## Memory usage
      1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each
      show memory usage down by 35.49%, from 214MB to 138MB.
      
      ## Note
      1. Add a conversion callback to support doing some work before conversion.
          Since the quicklist iterator decompresses the current node when it is released, we can
          no longer decompress the quicklist after we convert the list.
      2168ccc6
    • Madelyn Olson's avatar
      Explicitly send function commands to monitor (#11510) · d136bf28
      Madelyn Olson authored
      Both functions and eval are marked as "no-monitor", since we want to explicitly feed
      in the script command before the commands generated by the script. Note that we want
      this behavior generally, so that commands can redact arguments before being added to the monitor.
      d136bf28
  14. 15 Nov, 2022 1 commit
    • Binbin's avatar
      Fix double negative nan test, ignoring sign (#11506) · a4bcdbcf
      Binbin authored
      The test introduced in #11482 fails on ARM (extra CI):
      ```
      *** [err]: RESP2: RM_ReplyWithDouble: NaN in tests/unit/moduleapi/reply.tcl
      Expected '-nan' to be equal to 'nan' (context: type eval line 3 cmd
      {assert_equal "-nan" [r rw.double 0 0]} proc ::test)
      
      *** [err]: RESP3: RM_ReplyWithDouble: NaN in tests/unit/moduleapi/reply.tcl
      Expected ',-nan' to be equal to ',nan' (context: type eval line 8 cmd
      {assert_equal ",-nan" [r rw.double 0 0]} proc ::test)
      ```
      
      It looks like there is no negative nan on ARM. 
      a4bcdbcf
  15. 14 Nov, 2022 2 commits
  16. 13 Nov, 2022 1 commit
  17. 12 Nov, 2022 1 commit
  18. 10 Nov, 2022 1 commit
  19. 09 Nov, 2022 3 commits
    • Viktor Söderqvist's avatar
      Listpack encoding for sets (#11290) · 4e472a1a
      Viktor Söderqvist authored
      Small sets whose elements are not all integers are now listpack encoded: by default
      up to 128 elements, with at most 64 bytes per element, controlled by the new configs `set-max-listpack-entries`
      and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable.
      
      Sets with only integers, even very small sets, are still intset encoded (up to 1G
      limit, etc.). Larger sets are hashtable encoded.
      
      This PR increments the RDB version, and has an effect on OBJECT ENCODING
      
      Possible conversions when elements are added:
      
          intset -> listpack
          listpack -> hashtable
          intset -> hashtable
      
      Note: No conversion happens when elements are deleted. If all elements are
      deleted and then added again, the set is deleted and recreated, thus implicitly
      converted to a smaller encoding.
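      
      A sketch of the resulting encoding choice on insertion (simplified; the intset size limit
      and the exact conversion points are omitted, and the enum names are illustrative):
      ```c
      #include <stddef.h>
      
      enum { ENC_INTSET, ENC_LISTPACK, ENC_HASHTABLE };
      
      static int chooseSetEncoding(int all_integers, size_t num_elems, size_t max_elem_len,
                                   size_t max_listpack_entries, size_t max_listpack_value) {
          if (all_integers)
              return ENC_INTSET;               /* still intset, up to its own limits */
          if (num_elems <= max_listpack_entries && max_elem_len <= max_listpack_value)
              return ENC_LISTPACK;             /* small set with short, non-integer elements */
          return ENC_HASHTABLE;
      }
      ```
      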
      4e472a1a
    • Viktor Söderqvist's avatar
      Deprecate QUIT (#11439) · 07d18706
      Viktor Söderqvist authored
      Clients should not use this command.
      Instead, clients should simply close the connection when they're not used anymore.
      Terminating a connection on the client side is preferable, as it eliminates `TIME_WAIT`
      lingering sockets on the server side.
      07d18706
    • Oran Agra's avatar
      diskless master, avoid bgsave child hung when fork parent crashes (#11463) · ccaef5c9
      Oran Agra authored
      During a diskless sync, if the master main process crashes, the child would
      have hung in `write`. This fix closes the read fd on the child side, so that if the
      parent crashes, the child will get a write error and exit.
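      
      A minimal illustration of the principle (not the Redis code): as long as the child keeps
      the pipe's read end open, its own writes can never fail, so the child should close the
      read fd right after the fork:
      ```c
      #include <sys/types.h>
      #include <unistd.h>
      #include <signal.h>
      
      pid_t spawn_writer_child(int pipefds[2]) {
          pid_t pid = fork();
          if (pid == 0) {                  /* child: produces the payload */
              signal(SIGPIPE, SIG_IGN);    /* turn SIGPIPE into an EPIPE write error */
              close(pipefds[0]);           /* the fix: drop our copy of the read end */
              /* ... write(pipefds[1], ...) now fails if the parent is gone ... */
              _exit(0);
          }
          close(pipefds[1]);               /* parent only reads from pipefds[0] */
          return pid;
      }
      ```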
      
      This change also fixes disk-based replication, BGSAVE and AOFRW.
      In those cases the child wouldn't have been hung; it would have just kept
      running until done, which may be pointless.
      
      There is a certain degree of risk here: in case there's a BGSAVE child that could
      maybe succeed and the parent dies for some reason, the old code would have let
      the child keep running and maybe succeed and avoid data loss.
      On the other hand, if the parent is restarted, it would have loaded an old rdb file
      (or none), and then the child could reach the end and rename the rdb file (data
      conflicting with what the parent has), or also have a race with another BGSAVE
      child that the new parent started.
      
      Note that I removed a comment saying a write error will be ignored in the child
      and handled by the parent (this comment was very old and I don't think it's relevant).
      ccaef5c9