1. 17 Jul, 2024 1 commit
    • Fix external test hang in redis-cli test when run in a certain order (#13423) · a3319785
      Oran Agra authored
      When the tests are run against an external server in this order:
      `--single unit/introspection --single unit/moduleapi/blockonbackground
      --single integration/redis-cli`
      the test would hang when the "ASK redirect test" test attempts to create
      a listening socket (it fails, and then redis-cli itself hangs waiting
      for a non-responsive socket created by the introspection test).
      
      The reasons are:
      1. the blockonbackground test includes util.tcl and resets the
      `::last_port_attempted` variable
      2. the test in introspection didn't close the listening server, so it's
      still alive.
      3. find_available_port doesn't properly detect the busy port, and
      thinks the port is free even though it's busy (see the sketch below).
      
      This fixes all 3 of these problems, even though fixing just one would be
      enough to let the test pass.
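      For context, a reliable way to detect a busy port is to actually try binding it. A minimal C sketch of the idea (find_available_port itself is a Tcl proc, so this is only an illustration, not the suite's code):
      ```c
      #include <string.h>
      #include <unistd.h>
      #include <arpa/inet.h>
      #include <sys/socket.h>

      /* Returns 1 if the port can be bound (i.e. looks free), 0 otherwise.
       * Deliberately without SO_REUSEADDR: with it, a port in lingering
       * TIME_WAIT state can wrongly appear to be free. */
      int port_is_free(int port) {
          int fd = socket(AF_INET, SOCK_STREAM, 0);
          if (fd == -1) return 0;
          struct sockaddr_in sa;
          memset(&sa, 0, sizeof(sa));
          sa.sin_family = AF_INET;
          sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
          sa.sin_port = htons(port);
          int ok = bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0;
          close(fd);
          return ok;
      }
      ```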
  2. 16 Jul, 2024 1 commit
    • Test infra adjustments for external CI runs (#13421) · fa46aa4d
      Oran Agra authored
      - when uploading server logs, make sure they don't overwrite each other.
      - sort the test units to get consistent order between them (following
      #13220)
      - backup and restore the entire server configuration, to protect one
      unit from config changes another unit performs
  3. 09 Jul, 2024 1 commit
    • Hide user data from log (#13400) · 69b480cb
      debing.sun authored
      This PR is based on the commits from PR #11747.
      
      In the event of an assertion failure, hide command arguments from the
      operator.
      
      In some cases, private client information can be unintentionally exposed
      when a redis instance crashes due to an assertion failure.
      This commit prevents unintentional client info exposure.
      Operators can still access the hidden data, but they must actively
      request it.
      The client info commands themselves remain unchanged.
      
      ### Config
      Add a new config `hide-user-data-from-log` to turn this feature on and
      off, default off.
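      A minimal sketch of what such redaction can look like when dumping client arguments on an assertion failure (hypothetical helper and flag wiring, not the actual patch):
      ```c
      #include <stdio.h>

      /* Hypothetical sketch: dump a client's argv to the crash log, masking
       * everything but the command name when hide-user-data-from-log is on. */
      void logClientArgs(int hide_user_data, int argc, const char **argv) {
          for (int i = 0; i < argc; i++) {
              if (hide_user_data && i > 0)
                  printf("argv[%d]: *redacted*\n", i);   /* user data hidden */
              else
                  printf("argv[%d]: '%s'\n", i, argv[i]);
          }
      }
      ```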
      
      ---------
      Co-authored-by: naglera <anagler123@gmail.com>
      Co-authored-by: naglera <58042354+naglera@users.noreply.github.com>
  4. 02 Jul, 2024 1 commit
  5. 30 May, 2024 1 commit
    • dynamically list test files (#13220) · 5a3534f9
      jonghoonpark authored
      **Related issue**
      https://github.com/redis/redis/issues/13219
      
      **Motivation**
      Currently we have to manually update the all_tests variable when
      introducing new test files.
      
      **Modification**
      I have modified it to list test files dynamically. Instead of adding all
      test files, it only picks up test files from the following 4 paths,
      
      - unit
      - unit/type
      - unit/cluster
      - integration
      
      so that it doesn't deviate too much from what we already do.
      
      **Result**
      - dynamically list test files into the all_tests variable
      - close issue https://github.com/redis/redis/issues/13219
      
      **Additional information**
      - removed the `list-common.tcl` file and added a
      `generate_largevalue_test_array` proc in `util.tcl`, because
      `list-common.tcl` is not a test file
      - There is an order dependency, so I added code to the "Is a ziplist
      encoded Hash promoted on big payload?" test that resets
      hash-max-listpack-value to the default (64).
      
      ---------
      Signed-off-by: jonghoonpark <dev@jonghoonpark.com>
      Co-authored-by: debing.sun <debing.sun@redis.com>
  6. 29 May, 2024 1 commit
    • HFE to support AOF and replicas (#13285) · 33fc0fbf
      Moti Cohen authored
      * For the replicas' sake, rewrite the commands `H*EXPIRE*`, `HSETF`, `HGETF` to
      have absolute unix time in msec (see the sketch below).
      * On active expiration of a field, propagate HDEL to the replica
      (`propagateHashFieldDeletion()`)
      * On lazy expiration, propagate HDEL to the replica (`hashTypeGetValue()`
      now calls `hashTypeDelete()`. It also takes care to call
      `propagateHashFieldDeletion()`).
      * Fix the `H*EXPIRE*` commands such that if they get the `LT` flag and the
      field doesn't have any expiration, it is considered a valid
      condition.
      
      Note that replicas don't perform any active expiration and should avoid lazy
      expiration. `hashTypeGetValue()` doesn't check expiration on them (as long
      as the master didn't request to delete the field, it is valid)
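      Conceptually, the rewrite to absolute time is just "now + relative TTL", so the replicated command is deterministic. A sketch (illustrative names, not the actual Redis helpers):
      ```c
      #include <sys/time.h>

      /* Current unix time in milliseconds. */
      static long long now_ms(void) {
          struct timeval tv;
          gettimeofday(&tv, NULL);
          return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
      }

      /* Convert a user-supplied relative TTL to the absolute unix time (msec)
       * that is written to the replication stream / AOF instead. */
      long long relative_to_absolute_ms(long long ttl, int unit_is_seconds) {
          if (unit_is_seconds) ttl *= 1000;
          return now_ms() + ttl;
      }
      ```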
      
      TODO:
      * Attach `dbid` to HASH metadata. See
      [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
  7. 18 May, 2024 1 commit
  8. 17 May, 2024 1 commit
    • Hfe serialization listpack (#13243) · 323be4d6
      Ronen Kalish authored
      Add RDB de/serialization for HFE
      
      This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and
      `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data.
      When the hash RAM encoding is dict, it will be saved in the former, and
      when it is listpack it will be saved in the latter.
      Both formats just add the TTL value for each field after the data that
      was previously saved, i.e. HASH_METADATA will save the number of entries
      and, for each entry, key, value and TTL, whereas the listpack is saved as a
      blob.
      On read, the usual dict <--> listpack conversion takes place if
      required.
      In addition, when reading a hash that was saved as a dict, fields are
      actively expired if their expiry is due. Currently this also holds for the
      listpack encoding, but it is supposed to be removed.
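      Schematically, the dict-encoded format appends the TTL after each field/value pair. A toy sketch of the layout (illustrative types and plain-text output, not the real rdb.c encodings):
      ```c
      #include <stdio.h>

      typedef struct {
          const char *field, *value;
          long long ttl_ms;            /* absolute unix time in ms, 0 = none */
      } hfield;

      /* Sketch of the RDB_TYPE_HASH_METADATA layout: the number of entries,
       * then for each entry its key, value and TTL. */
      void sketch_save_hash_with_ttl(FILE *out, const hfield *f, size_t n) {
          fprintf(out, "%zu\n", n);
          for (size_t i = 0; i < n; i++)
              fprintf(out, "%s %s %lld\n", f[i].field, f[i].value, f[i].ttl_ms);
      }
      ```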
      
      TODO:
      Remove active expiry on load when loading from listpack format (unless
      we'll decide to keep it)
  9. 20 Mar, 2024 1 commit
  10. 20 Feb, 2024 1 commit
    • Fix watched client test timing issue caused by late close (#13062) · 3c2ea1ea
      Binbin authored
      There is a timing issue in the test: the close may arrive late, or
      freeClientAsync may free the client asynchronously, which leads to
      errors in the watching_clients statistics, since we only
      unwatch all keys when we truly freeClient.
      
      Add a wait here to avoid this problem. Also fixed some outdated
      comments I saw. The test was introduced in #12966.
  11. 11 Jan, 2024 1 commit
  12. 09 Jan, 2024 1 commit
  13. 11 Dec, 2023 1 commit
    • Fix delKeysInSlot server events are not executed inside an execution unit (#12745) · c85a9b78
      Binbin authored
      This is a follow-up fix to #12733. We need to apply the same changes to
      delKeysInSlot. Refer to #12733 for more details.
      
      This PR contains some other minor cleanups / improvements to the test
      suite and docs.
      It uses the postnotifications test module in a cluster mode test which
      revealed a leak in the test module (fixed).
  14. 15 Oct, 2023 1 commit
    • Replace cluster metadata with slot specific dictionaries (#11695) · 0270abda
      Vitaly authored
      This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
      
      ## Important changes
      * Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
      * getRandomKey - now needs to not only select a random key from the random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree (see the sketch after this list). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
      * Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
      * scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server. This has an interesting side effect: now you'll be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
      * Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we are relying on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
      * Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
      * DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places; in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way the DBSIZE operation stays O(1). The same is kept for O(1) expires computation as well.
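      For context, the binary index tree (Fenwick tree) over per-slot key counts is what makes both the fair random-slot pick and the empty-slot skip cheap. A minimal self-contained sketch, assuming 16384 slots (the real code lives in the server; this only shows the idea):
      ```c
      #define SLOTS 16384

      /* Fenwick (binary index) tree over per-slot key counts:
       * point update in O(log n), and "which slot holds the k-th key?"
       * by descending the tree in O(log n). */
      static long long bit[SLOTS + 1];

      void slot_add(int slot, long long delta) {   /* slot is 0-based */
          for (int i = slot + 1; i <= SLOTS; i += i & -i) bit[i] += delta;
      }

      /* Returns the 0-based slot containing the k-th key (k is 1-based). */
      int slot_of_kth_key(long long k) {
          int pos = 0;
          for (int step = SLOTS; step > 0; step >>= 1) {   /* SLOTS is 2^14 */
              if (pos + step <= SLOTS && bit[pos + step] < k) {
                  pos += step;
                  k -= bit[pos];
              }
          }
          return pos;   /* slots 0..pos-1 together hold fewer than k keys */
      }
      ```
      Picking a fair random key is then: draw k uniformly in [1, total keys] and take slot_of_kth_key(k); the same search is what lets iteration jump directly to the slot holding the next key index, skipping empty slots.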
      
      ## Performance
      This change improves SET performance in cluster mode by ~5%; most of the gains come from us not having to maintain linked lists for keys in a slot, and non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. 
      
      RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
      
      ## Interface changes
      * Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
      * Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
      * New RDB version to support the new op code for SLOT information. 
      
      ---------
      Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
      Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
      Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  15. 13 Oct, 2023 1 commit
    • test suite: clean server pids after server crashed (#12639) · f0c1c730
      Oran Agra authored
      when a server in the test suite crashes and is restarted by restart_server, we didn't clean its pid from the list.
      We can see that when the corrupt-dump-fuzzer hangs, it has a long list of servers to clean, but in fact they're all already dead.
  16. 02 Oct, 2023 1 commit
    • Stabilization and improvements around aof tests (#12626) · 2e0f6724
      YaacovHazan authored
      In some tests, the code manually searches for a log message, and it
      uses tail -1 with a delay of 1 second, which can miss the expected line.
      
      Also, because the aof tests use start_server_aof and not start_server,
      the test name isn't logged into the server log.
      
      To fix the above, I made the following changes:
      - Change the start_server_aof to wrap the start_server.
        This will add the created aof server to the servers list, and make
        srv() and wait_for_log_messages() available for the tests.
      
      - Introduce a new option for start_server.
        'wait_ready' - an option to let the caller start the test code without
        waiting for the server to be ready. Useful for tests on a server that
        is expected to exit on startup.
      
      - Create a new start_server_aof_ex.
        The new proc also accepts options as an argument and makes use of the
        new 'short_life' option for tests that are expected to exit on startup
        because of some error in the aof file(s).
      
      Because of the above, I had to change many lines and replace every
      usage of the local srv variable (a server config) with srv().
  17. 30 Jul, 2023 1 commit
  18. 26 Jun, 2023 1 commit
    • Support TLS service when "tls-cluster" is not enabled and persist both plain and TLS port in nodes.conf (#12233) · 22a29935
      Chen Tianjie authored
      
      Originally, when "tls-cluster" is enabled, `port` is set to TLS port. In order to support non-TLS clients, `pport` is used to propagate TCP port across cluster nodes. However when "tls-cluster" is disabled, `port` is set to TCP port, and `pport` is not used, which means the cluster cannot provide TLS service unless "tls-cluster" is on.
      ```
      typedef struct {
          // ...
          uint16_t port;  /* Latest known clients port (TLS or plain). */
          uint16_t pport; /* Latest known clients plaintext port. Only used if the main clients port is for TLS. */
          // ...
      } clusterNode;
      ```
      ```
      typedef struct {
          // ...
          uint16_t port;   /* TCP base port number. */
          uint16_t pport;  /* Sender TCP plaintext port, if base port is TLS */
          // ...
      } clusterMsg;
      ```
      This PR renames `port` and `pport` in `clusterNode` to `tcp_port` and `tls_port`, to record both ports regardless of whether "tls-cluster" is enabled or disabled.
      
      This allows the server to provide TLS service to clients when "tls-cluster" is disabled: when displaying cluster topology, or giving a `MOVED` error, the server can provide the TLS or TCP port according to the client's connection type, no matter what type of connection the cluster bus is using.
      
      For backwards compatibility, `port` and `pport` in `clusterMsg` are preserved, when "tls-cluster" is enabled, `port` is set to TLS port and `pport` is set to TCP port, when "tls-cluster" is disabled, `port` is set to TCP port and `pport` is set to TLS port (instead of 0).
      
      Also, in the nodes.conf file, a new aux field displaying an extra port is added to complete the persisted info. We may have `tls_port=xxxxx` or `tcp_port=xxxxx` in the aux field, to complete the cluster topology, while the other port is stored in the normal `<ip>:<port>` field. The format is shown below.
      ```
      <node-id> <ip>:<tcp_port>@<cport>,<hostname>,shard-id=...,tls-port=6379 myself,master - 0 0 0 connected 0-1000
      ```
      Or we can switch the position of the two ports; both can be correctly resolved.
      ```
      <node-id> <ip>:<tls_port>@<cport>,<hostname>,shard-id=...,tcp-port=6379 myself,master - 0 0 0 connected 0-1000
      ```
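      With both ports recorded, the client-facing choice reduces to something like this sketch (field names follow the renamed struct; the connection-type check is a hypothetical stand-in):
      ```c
      /* Sketch: pick the port to report in MOVED errors / topology output
       * based on the client's own connection type, independent of whether
       * the cluster bus itself uses TLS. */
      typedef struct {
          unsigned short tcp_port;   /* plaintext clients port, 0 if disabled */
          unsigned short tls_port;   /* TLS clients port, 0 if disabled */
      } node_ports;

      int client_facing_port(const node_ports *n, int client_is_tls) {
          return client_is_tls ? n->tls_port : n->tcp_port;
      }
      ```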
  19. 21 Jun, 2023 1 commit
    • Make nodename test more consistent (#12330) · 73cf0243
      Madelyn Olson authored
      To determine when everything was stable, we couldn't just query the nodename, since nodenames aren't API-visible by design. Instead, we were using a proxy piece of information: bumping the epoch and waiting for everyone to observe that. This made sure Node 0 and Node 1 had pinged, and Node 0 and Node 2 had pinged, but did not guarantee that Node 1 and Node 2 had pinged. Although unlikely, this can cause this failure message. To fix it I hijacked hostnames and used its validation that it has been propagated, since we know that it is stable.
      
      I also noticed while stress testing this sometimes the test took almost 4.5 seconds to finish, which is really close to the current 5 second limit of the log check, so I bumped that up as well just to make it a bit more consistent.
  20. 18 Jun, 2023 1 commit
    • Cluster human readable nodename feature (#9564) · 070453ee
      Wen Hui authored
      This PR adds a human readable name to a node in clusters, which is visible as part of error logs. This is useful so that admins and operators of Redis clusters have better visibility into failures without having to cross-reference the generated ID with some logical identifier (such as pod ID or EC2 instance ID). This is mentioned in #8948. Specific nodenames can be set by using the variable cluster-announce-human-nodename. The nodename is gossiped using the cluster bus extension in #9530.
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
  21. 12 Jun, 2023 1 commit
  22. 18 Apr, 2023 1 commit
    • Fix some compile warnings and errors when building with gcc-12 or clang (#12035) · 42c8c618
      sundb authored
      This PR is to fix the compilation warnings and errors generated by the latest
      compiler toolchain, and to add a new runner with the latest toolchain for daily CI.
      
      ## Fix various compilation warnings and errors
      
      1) jemalloc.c
      
      COMPILER: clang-14 with FORTIFY_SOURCE
      
      WARNING:
      ```
      src/jemalloc.c:1028:7: warning: suspicious concatenation of string literals in an array initialization; did you mean to separate the elements with a comma? [-Wstring-concatenation]
                          "/etc/malloc.conf",
                          ^
      src/jemalloc.c:1027:3: note: place parentheses around the string literal to silence warning
                      "\"name\" of the file referenced by the symbolic link named "
                      ^
      ```
      
      REASON: the compiler alerts developers to potential issues with string concatenation
      that may be missing a comma,
      just like #9534, which missed a comma.
      
      SOLUTION: use `()` to tell the compiler that the two string literals are deliberately continuous, as shown below.
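      A reduced example of the pattern and the parenthesized fix (illustrative, not the jemalloc source):
      ```c
      /* In an array initializer, two adjacent string literals may be a
       * missing comma, which is what -Wstring-concatenation flags: */
      const char *suspicious[] = {
          "\"name\" of the file referenced by the symbolic link named "
          "/etc/malloc.conf",        /* one element -- comma intended? */
      };

      /* The fix: parentheses mark the concatenation as deliberate. */
      const char *silenced[] = {
          ("\"name\" of the file referenced by the symbolic link named "
           "/etc/malloc.conf"),
      };
      ```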
      
      2) config.h
      
      COMPILER: clang-14 with FORTIFY_SOURCE
      
      WARNING:
      ```
      In file included from quicklist.c:36:
      ./config.h:319:76: warning: attribute declaration must precede definition [-Wignored-attributes]
      char *strcat(char *restrict dest, const char *restrict src) __attribute__((deprecated("please avoid use of unsafe C functions. prefer use of redis_strlcat instead")));
      ```
      
      REASON: Enabling _FORTIFY_SOURCE causes the compiler to use `strcpy()` with checks,
      which results in a deprecated attribute declaration after including <features.h>.
      
      SOLUTION: move the deprecated attribute declaration from config.h to fmacro.h before "#include <features.h>".
      
      3) networking.c
      
      COMPILER: GCC-12
      
      WARNING: 
      ```
      networking.c: In function ‘addReplyDouble.part.0’:
      networking.c:876:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
        876 |         dbuf[start] = '$';
            |                     ^
      networking.c:868:14: note: at offset -5 into destination object ‘dbuf’ of size 5152
        868 |         char dbuf[MAX_LONG_DOUBLE_CHARS+32];
            |              ^
      networking.c:876:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
        876 |         dbuf[start] = '$';
            |                     ^
      networking.c:868:14: note: at offset -6 into destination object ‘dbuf’ of size 5152
        868 |         char dbuf[MAX_LONG_DOUBLE_CHARS+32];
      ```
      
      REASON: GCC-12 predicts that digits10() may return 9 or 10 through `return 9 + (v >= 1000000000UL)`.
      
      SOLUTION: add an assert to let the compiler know the possible length.
      
      4) redis-cli.c & redis-benchmark.c
      
      COMPILER: clang-14 with FORTIFY_SOURCE
      
      WARNING:
      ```
      redis-benchmark.c:1621:2: warning: embedding a directive within macro arguments has undefined behavior [-Wembedded-directive] #ifdef USE_OPENSSL
      redis-cli.c:3015:2: warning: embedding a directive within macro arguments has undefined behavior [-Wembedded-directive] #ifdef USE_OPENSSL
      ```
      
      REASON: when _FORTIFY_SOURCE is enabled, the compiler will use the print() with
      checks, which is a macro. This may result in the use of directives within the macro
      arguments, which is undefined behavior.
      
      SOLUTION: move the directives-related code out of `print()`.
      
      5) server.c
      
      COMPILER: gcc-13 with FORTIFY_SOURCE
      
      WARNING:
      ```
      In function 'lookupCommandLogic',
          inlined from 'lookupCommandBySdsLogic' at server.c:3139:32:
      server.c:3102:66: error: '*(robj **)argv' may be used uninitialized [-Werror=maybe-uninitialized]
       3102 |     struct redisCommand *base_cmd = dictFetchValue(commands, argv[0]->ptr);
            |                                                              ~~~~^~~
      ```
      
      REASON: The compiler thinks that the `argc` returned by `sdssplitlen()` could be 0,
      resulting in an empty array of size 0 being passed to lookupCommandLogic.
      This should be a false positive; `argc` can't be 0 when the strings are not NULL.
      
      SOLUTION: add an assert to let the compiler know that `argc` is positive.
      
      6) sha1.c
      
      COMPILER: gcc-12
      
      WARNING:
      ```
      In function ‘SHA1Update’,
          inlined from ‘SHA1Final’ at sha1.c:195:5:
      sha1.c:152:13: warning: ‘SHA1Transform’ reading 64 bytes from a region of size 0 [-Wstringop-overread]
        152 |             SHA1Transform(context->state, &data[i]);
            |             ^
      sha1.c:152:13: note: referencing argument 2 of type ‘const unsigned char[64]’
      sha1.c: In function ‘SHA1Final’:
      sha1.c:56:6: note: in a call to function ‘SHA1Transform’
         56 | void SHA1Transform(uint32_t state[5], const unsigned char buffer[64])
            |      ^
      In function ‘SHA1Update’,
          inlined from ‘SHA1Final’ at sha1.c:198:9:
      sha1.c:152:13: warning: ‘SHA1Transform’ reading 64 bytes from a region of size 0 [-Wstringop-overread]
        152 |             SHA1Transform(context->state, &data[i]);
            |             ^
      sha1.c:152:13: note: referencing argument 2 of type ‘const unsigned char[64]’
      sha1.c: In function ‘SHA1Final’:
      sha1.c:56:6: note: in a call to function ‘SHA1Transform’
         56 | void SHA1Transform(uint32_t state[5], const unsigned char buffer[64])
      ```
      
      REASON: due to the bug [https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80922], when
      LTO is enabled, gcc-12 does not see `diagnostic ignored "-Wstringop-overread"`, resulting in a warning.
      
      SOLUTION: temporarily set SHA1Update to noinline to avoid compiler warnings due
      to LTO being enabled until the above gcc bug is fixed.
      
      7) zmalloc.h
      
      COMPILER: GCC-12
      
      WARNING: 
      ```
      In function ‘memset’,
          inlined from ‘moduleCreateContext’ at module.c:877:5,
          inlined from ‘RM_GetDetachedThreadSafeContext’ at module.c:8410:5:
      /usr/include/x86_64-linux-gnu/bits/string_fortified.h:59:10: warning: ‘__builtin_memset’ writing 104 bytes into a region of size 0 overflows the destination [-Wstringop-overflow=]
         59 |   return __builtin___memset_chk (__dest, __ch, __len,
      ```
      
      REASON: due to the GCC-12 bug [https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96503],
      GCC-12 cannot see alloc_size, which causes GCC to think that the actual size of memory
      is 0 when checking with __glibc_objsize0().
      
      SOLUTION: temporarily set malloc-related interfaces to `noinline` to avoid compiler warnings
      due to LTO being enabled until the above gcc bug is fixed.
      
      ## Other changes
      1) Fixed `ps -p [pid]` not outputting `<defunct>` when using procps 4.x, which caused the
        `replication child dies when parent is killed - diskless` test to fail.
      2) Add a new fortify CI with GCC-13 and ubuntu-lunar docker image.
  23. 12 Apr, 2023 1 commit
    • Attempt to solve MacOS CI issues in GH Actions (#12013) · 997fa41e
      Oran Agra authored
      The MacOS CI in github actions often hangs without any logs. GH argues that
      it's due to resource utilization, either running out of disk space, memory, or CPU
      starvation, and thus the runner is terminated.
      
      This PR contains multiple attempts to resolve this:
      1. introducing pause_process instead of SIGSTOP, which waits for the process
        to stop before resuming the test, possibly resolving race conditions in some tests;
        this was a suspect since there was one test that could result in an infinite loop in that
        case. In practice this didn't help, but it's still a good idea to keep (see the sketch after this list).
      2. disable the `save` config in many tests that don't need it, specifically ones that use
        heavy writes and could create large files.
      3. change the `populate` proc to use short pipeline rather than an infinite one.
      4. use `--clients 1` in the macos CI so that we don't risk running multiple resource
        demanding tests in parallel.
      5. enable `--verbose` to be repeated to elevate verbosity and print more info to stdout
        when a test or a server starts.
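      For reference, the "signal, then wait until actually stopped" idea behind pause_process looks roughly like this in C (a sketch using waitpid on a child process; the real helper is Tcl and polls the process state instead):
      ```c
      #include <signal.h>
      #include <sys/types.h>
      #include <sys/wait.h>

      /* Sketch: after SIGSTOP, don't assume the process already stopped;
       * block until the kernel reports it as stopped. */
      int pause_child(pid_t pid) {
          if (kill(pid, SIGSTOP) == -1) return -1;
          int status;
          if (waitpid(pid, &status, WUNTRACED) == -1) return -1;
          return WIFSTOPPED(status) ? 0 : -1;
      }
      ```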
  24. 26 Mar, 2023 1 commit
    • Fix redis-cli cluster test timing issue (#11887) · aa2403ca
      Binbin authored
      This test fails sporadically:
      ```
      *** [err]: Migrate the last slot away from a node using redis-cli in tests/unit/cluster/cli.tcl
      cluster size did not reach a consistent size 4
      ```
      
      I guess the time (5s) of wait_for_cluster_size is not enough;
      usually, the waiting time for our other tests for cluster
      consistency is 50s, so change it to 50s as well.
  25. 15 Mar, 2023 1 commit
    • Fix WAITAOF reply when using last_offset and last_numreplicas (#11917) · 70b2c4f5
      Binbin authored
      WAITAOF was added in #11713; its reply is an array.
      But we forgot to handle WAITAOF in last_offset and last_numreplicas,
      causing WAITAOF to return a WAIT-like reply.
      
      Tests were added to validate that case (both WAIT and WAITAOF).
      This PR also refactored processClientsWaitingReplicas a bit for better
      maintainability and readability.
  26. 14 Mar, 2023 1 commit
    • Implementing the WAITAOF command (issue #10505) (#11713) · 9344f654
      Slava Koyfman authored
      Implementing the WAITAOF functionality which would allow the user to
      block until a specified number of Redises have fsynced all previous write
      commands to the AOF.
      
      Syntax: `WAITAOF <num_local> <num_replicas> <timeout>`
      Response: Array containing two elements: num_local, num_replicas
      num_local is always either 0 or 1 representing the local AOF on the master.
      num_replicas is the number of replicas that acknowledged the replication
      offset of the last write being fsynced to the AOF.
      
      Returns an error when called on replicas, or when called with non-zero
      num_local on a master with AOF disabled, in all other cases the response
      just contains number of fsync copies.
      
      Main changes:
      * Added code to keep track of replication offsets that are confirmed to have
        been fsynced to disk.
      * Keep advancing master_repl_offset even when replication is disabled (and
        there's no replication backlog, only if there's an AOF enabled).
        This way we can use this command and its mechanisms even when replication
        is disabled.
      * Extend REPLCONF ACK to `REPLCONF ACK <ofs> FACK <ofs>`, the FACK
        will be appended only if there's an AOF on the replica, and already ignored on
        old masters (thus backwards compatible)
      * WAIT now no longer waits for the replication offset after your last command, but
        rather the replication offset after your last write (or read command that caused
        propagation, e.g. lazy expiry).
      
      Unrelated changes:
      * WAIT command respects CLIENT_DENY_BLOCKING (not just CLIENT_MULTI)
      
      Implementation details:
      * Add an atomic var named `fsynced_reploff_pending` that's updated
        (usually by the bio thread) and later copied to the main `fsynced_reploff`
        variable (only if the AOF base file exists); see the sketch after this list.
        I.e. during the initial AOF rewrite it will not be used as the fsynced offset
        since the AOF base is still missing.
      * Replace close+fsync bio job with new BIO_CLOSE_AOF (AOF specific)
        job that will also update the fsync offset field.
      * Handle all AOF jobs (BIO_CLOSE_AOF, BIO_AOF_FSYNC) in the same bio
        worker thread, to impose ordering on their execution. This solves a
        race condition where a job could set `fsynced_reploff_pending` to a higher
        value than another pending fsync job, resulting in indicating an offset
        for which parts of the data have not yet actually been fsynced.
        Imposing an ordering on the jobs guarantees that fsync jobs are executed
        in increasing order of replication offset.
      * Drain bio jobs when switching `appendfsync` to "always"
        This should prevent a write race between updates to `fsynced_reploff_pending`
        in the main thread (`flushAppendOnlyFile` when set to ALWAYS fsync), and
        those done in the bio thread.
      * Drain the pending fsync when starting over a new AOF to avoid race conditions
        with the previous AOF offsets overriding the new one (e.g. after switching to
        replicate from a new master).
      * Make sure to update the fsynced offset at the end of the initial AOF rewrite.
        a must in case there are no additional writes that trigger a periodic fsync,
        specifically for a replica that does a full sync.
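      The pending/authoritative split around `fsynced_reploff_pending` can be sketched like this (a minimal sketch of the pattern, not the actual server code):
      ```c
      #include <stdatomic.h>

      /* The bio thread publishes the latest fsynced replication offset;
       * the main thread folds it into the authoritative variable only
       * once the AOF base file exists. */
      static _Atomic long long fsynced_reploff_pending;
      static long long fsynced_reploff;                  /* main thread only */

      void bio_after_aof_fsync(long long reploff) {      /* bio thread */
          atomic_store(&fsynced_reploff_pending, reploff);
      }

      void main_thread_cron(int aof_base_exists) {       /* main thread */
          if (aof_base_exists)
              fsynced_reploff = atomic_load(&fsynced_reploff_pending);
      }
      ```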
      
      Limitations:
      It is possible to write a module and a Lua script that propagate to the AOF and doesn't
      propagate to the replication stream. see REDISMODULE_ARGV_NO_REPLICAS and luaRedisSetReplCommand.
      These features are incompatible with the WAITAOF command, and can result
      in two bad cases. The scenario is that the user executes command that only
      propagates to AOF, and then immediately
      issues a WAITAOF, and there's no further writes on the replication stream after that.
      1. if the last thing that happened on the replication stream is a PING
        (which increased the replication offset but won't trigger an fsync on the replica),
        then the client would hang forever (it will wait for an fack that the replica will never
        send since it doesn't trigger any fsyncs).
      2. if the last thing that happened is a write command that got propagated properly,
        then WAITAOF will be released immediately, without waiting for an fsync (since
        the offset didn't change)
      
      Refactoring:
      * Plumbing to allow bio worker to handle multiple job types
        This introduces infrastructure necessary to allow BIO workers to
        not have a 1-1 mapping of worker to job-type. This allows in the
        future to assign multiple job types to a single worker, either as
        a performance/resource optimization, or as a way of enforcing
        ordering between specific classes of jobs.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  27. 11 Mar, 2023 1 commit
    • Add reply_schema to command json files (internal for now) (#10273) · 4ba47d2d
      guybe7 authored
      Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845
      Since ironing the details of the reply schema of each and every command can take a long time, we
      would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch.
      Meanwhile the changes of this PR are internal; they are part of the repo, but do not affect the produced build.
      
      ### Background
      In #9656 we added a lot of information about Redis commands, but we are missing information about the replies
      
      ### Motivation
      1. Documentation. This is the primary goal.
      2. It should be possible, based on the output of COMMAND, to generate client code in typed
        languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
      3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
        testsuite, see the "Testing" section)
      
      ### Schema
      The idea is to supply some sort of schema for the various replies of each command.
      The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
      Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
      and without the `FULL` modifier)
      We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.
      
      Example for `BZPOPMIN`:
      ```
      "reply_schema": {
          "oneOf": [
              {
                  "description": "Timeout reached and no elements were popped.",
                  "type": "null"
              },
              {
                  "description": "The keyname, popped member, and its score.",
                  "type": "array",
                  "minItems": 3,
                  "maxItems": 3,
                  "items": [
                      {
                          "description": "Keyname",
                          "type": "string"
                      },
                      {
                          "description": "Member",
                          "type": "string"
                      },
                      {
                          "description": "Score",
                          "type": "number"
                      }
                  ]
              }
          ]
      }
      ```
      
      #### Notes
      1.  It is ok that some commands' reply structure depends on the arguments and it's the caller's responsibility
        to know which is the relevant one. This comes after looking at other request-reply systems like OpenAPI,
        where the reply schema can also be oneOf and the caller is responsible to know which schema is the relevant one.
      2. The reply schemas will describe RESP3 replies only. Even though RESP3 is structured, we want to use reply
        schema for documentation (and possibly to create a fuzzer that validates the replies)
      3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
        including any relation to arguments. for example, for `ZRANGE`'s two schemas we will need to state that one
        is with `WITHSCORES` and the other is without.
      4. For documentation, there will be another optional field "notes" in which we will add a short description of
        the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
        array, for example)
      
      Given the above:
      1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
        (given that "description" and "notes" are comprehensive enough)
      2. We can generate a client in a strongly typed language (but the return type could be a conceptual
        `union` and the caller needs to know which schema is relevant). see the section below for RESP2 support.
      3. We can create a fuzzer for RESP3.
      
      ### Limitations (because we are using the standard json-schema)
      The problem is that Redis' replies are more diverse than what the json format allows. This means that,
      when we convert the reply to a json (in order to validate the schema against it), we lose information (see
      the "Testing" section below).
      The other option would have been to extend the standard json-schema (and json format) to include stuff
      like sets, bulk-strings, error-string, etc. but that would mean also extending the schema-validator - and that
      seemed like too much work, so we decided to compromise.
      
      Examples:
      1. We cannot tell the difference between an "array" and a "set"
      2. We cannot tell the difference between simple-string and bulk-string
      3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
        case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
        compares (member,score) tuples and not just the member name. 
      
      ### Testing
      This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
      are indeed correct (i.e. describe the actual response of Redis).
      To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
      it executed and their replies.
      For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
      `--reg-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
      `--log-req-res --force-resp3`)
      You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
      `.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
      These files are later on processed by `./utils/req-res-log-validator.py` which does:
      1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
      2. For each request-response pair, it validates the response against the request's reply_schema
        (obtained from the extended COMMAND DOCS)
      3. In order to get good coverage of the Redis commands, and all their different replies, we chose to use
        the existing redis test suite, rather than attempt to write a fuzzer.
      
      #### Notes about RESP2
      1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to
        accept RESP3 as the future RESP)
      2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3
        so that we can validate it, we will need to know how to convert the actual reply to the one expected.
         - number and boolean are always strings in RESP2 so the conversion is easy
         - objects (maps) are always a flat array in RESP2
         - others (nested array in RESP3's `ZRANGE` and others) will need some special per-command
           handling (so the client will not be totally auto-generated)
      
      Example for ZRANGE:
      ```
      "reply_schema": {
          "anyOf": [
              {
                  "description": "A list of member elements",
                  "type": "array",
                  "uniqueItems": true,
                  "items": {
                      "type": "string"
                  }
              },
              {
                  "description": "Members and their scores. Returned in case `WITHSCORES` was used.",
                  "notes": "In RESP2 this is returned as a flat array",
                  "type": "array",
                  "uniqueItems": true,
                  "items": {
                      "type": "array",
                      "minItems": 2,
                      "maxItems": 2,
                      "items": [
                          {
                              "description": "Member",
                              "type": "string"
                          },
                          {
                              "description": "Score",
                              "type": "number"
                          }
                      ]
                  }
              }
          ]
      }
      ```
      
      ### Other changes
      1. Some tests that behave differently depending on the RESP are now being tested for both RESP,
        regardless of the special log-req-res mode ("Pub/Sub PING" for example)
      2. Update the history field of CLIENT LIST
      3. Added basic tests for commands that were not covered at all by the testsuite
      
      ### TODO
      
      - [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g.
        when `SET` return NULL, the condition is `arguments.get||arguments.condition`, for `OK` the condition
        is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
      - [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
      - [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
      - [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output
        of the tests - https://github.com/redis/redis/issues/11897
      - [x] (probably a separate PR) add all missing schemas
      - [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
      - [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to
        fight with the tcl including mechanism a bit)
      - [x] issue: module API - https://github.com/redis/redis/issues/11898
      - [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899
      
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Shaya Potter <shaya@redislabs.com>
  28. 08 Mar, 2023 1 commit
    • Fix test and improve assert_replication_stream print the whole stream (#11793) · a7c9e505
      Binbin authored
      This PR has two parts:
      
      1. Fix a flaky test case: the previous tests set a lot of volatile keys,
      which injects an unexpected DEL command into the replication stream during
      the later test, causing it to fail. Add a flushall to avoid it.
      
      2. Improve assert_replication_stream: now it can print the whole stream
      rather than just the failing line.
  29. 22 Nov, 2022 2 commits
    • Make assert_refcount skip the OBJECT REFCOUNT check with needs:debug tag (#11487) · 543e0daa
      Binbin authored
      This PR adds `assert_refcount_morethan`, and modifies `assert_refcount` to skip
      the `OBJECT REFCOUNT` check with the `needs:debug` flag. Use them to modify all
      `OBJECT REFCOUNT` calls and also update tests/README to be more specific.
      
      The reasoning is that some of these tests could be testing something important,
      and along the way also add a check for the refcount, and it would be a shame to skip
      the whole test just because the refcount functionality is missing or blocked.
      This is much like the fact that some redis variants may not support DEBUG,
      and still we want to run the majority of the test for coverage, and just skip the digest match.
    • Fix set with duplicate elements causes sdiff to hang (#11530) · 3f8756a0
      Binbin authored
      This payload produces a set with duplicate elements (listpack encoding):
      ```
      restore _key 0 "\x14\x25\x25\x00\x00\x00\x0A\x00\x06\x01\x82\x5F\x35\x03\x04\x01\x82\x5F\x31\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x39\x03\x82\x5F\x33\x03\x08\x01\x02\x01\xFF\x0B\x00\x31\xBE\x7D\x41\x01\x03\x5B\xEC"
      
      smembers key
      1) "6"
      2) "_5"
      3) "4"
      4) "_1"
      5) "_3"  ---> dup
      6) "0"
      7) "_9"
      8) "_3"  ---> dup
      9) "8"
      10) "2"
      ```
      
      This kind of set causes SDIFF to hang: SDIFF generated a broken
      protocol and left the client hanging. (It expected ten elements, but only
      got nine due to the duplication.)
      
      If we set `sanitize-dump-payload` to yes, we will be able to find
      the duplicate elements and report "ERR Bad data format".
      
      Discovered and discussed in #11290.
      
      This PR also improves the prints when the corrupt-dump-fuzzer hangs: it will
      print the commands and the payload, for example:
      ```
      Testing integration/corrupt-dump-fuzzer
      [TIMEOUT]: clients state report follows.
      sock6 => (SPAWNED SERVER) pid:28884
      Killing still running Redis server 28884
      commands caused test to hang:
      SDIFF __key 
      payload that caused test to hang: "\x14\balabala"
      ```
      Co-authored-by: Oran Agra <oran@redislabs.com>
  30. 12 Nov, 2022 1 commit
  31. 09 Nov, 2022 1 commit
    • diskless master, avoid bgsave child hung when fork parent crashes (#11463) · ccaef5c9
      Oran Agra authored
      During a diskless sync, if the master main process crashes, the child would
      have hung in `write`. This fix closes the read fd on the child side, so that if the
      parent crashes, the child will get a write error and exit.
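      The mechanism is plain pipe semantics: once the last read end is closed, a blocked write fails with EPIPE instead of waiting forever. A minimal stand-alone illustration (not the Redis code itself):
      ```c
      #include <signal.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          int fds[2];
          if (pipe(fds) == -1) return 1;
          signal(SIGPIPE, SIG_IGN);     /* get EPIPE instead of being killed */
          pid_t pid = fork();
          if (pid == 0) {               /* child: the writer */
              close(fds[0]);            /* the fix: drop our copy of the read end */
              char buf[4096] = {0};
              while (write(fds[1], buf, sizeof(buf)) != -1);
              perror("write");          /* EPIPE once the parent is gone */
              return 0;
          }
          close(fds[1]);                /* parent: the reader... */
          return 0;                     /* ...which "crashes" immediately */
      }
      ```
      Without the `close(fds[0])` in the child, the child itself would keep a read end open and its blocked write would hang forever, which is exactly the bug described above.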
      
      This change also fixes disk-based replication, BGSAVE and AOFRW.
      In that case the child wouldn't have hung; it would have just kept
      running until done, which may be pointless.
      
      There is a certain degree of risk here. In case there's a BGSAVE child that could
      maybe succeed and the parent dies for some reason, the old code would have let
      the child keep running and maybe succeed and avoid data loss.
      On the other hand, if the parent is restarted, it would have loaded an old rdb file
      (or none), and then the child could reach the end and rename the rdb file (data
      conflicting with what the parent has), or also have a race with another BGSAVE
      child that the new parent started.
      
      Note that I removed a comment saying a write error will be ignored in the child
      and handled by the parent (this comment was very old and I don't think it's relevant).
  32. 02 Nov, 2022 1 commit
  33. 03 Oct, 2022 1 commit
    • Stabilize cluster hostnames tests (#11307) · 663fbd34
      Madelyn Olson authored
      This PR introduces a couple of changes to improve cluster test stability:
      1. Increase the cluster node timeout to 3 seconds, which is similar to the
         normal cluster tests, but introduce a new mechanism to increase the ping
         period so that the tests are still fast. This new config is a debug config.
      2. Set `cluster-replica-no-failover yes` on a wider array of tests which are
         sensitive to failovers. This was occurring on the ARM CI.
  34. 19 Sep, 2022 1 commit
    • Fix crash due to delete entry from compress quicklistNode and wrongly split quicklistNode (#11242) · 13d25dd9
      sundb authored
      This PR mainly deals with 2 crashes introduced in #9357,
      and fixes the QUICKLIST-PACKED-THRESHOLD mess in external test mode.
      
      1. Fix crash due to deleting an entry from a compressed quicklistNode
         When inserting a large element, we need to create a new quicklistNode first,
         and then delete its previous element; if the node where the deleted element is
         located is compressed, it will cause a crash.
         Now we add `dont_compress` to quicklistNode; if we want to use a quicklistNode
         after some operation, we can use this flag as follows:
      
          ```c
          node->dont_compress = 1; /* Prevent to be compressed */
          some_operation(node); /* This operation might try to compress this node */
          some_other_operation(node); /* We can use this node without decompress it */
          node->dont_compress = 0; /* Re-enable compression */
          quicklistCompressNode(node);
          ```
      
         Perhaps in the future, we could just disable the current entry from being
         compressed during the iterator loop, but that would require more work.
      
      2. Fix crash due to wrongly splitting the quicklist
         Before #9357, the offset param of _quicklistSplitNode() was never negative.
         Now, when offset is negative, the split extent will be wrong, as in the
         following example:
          ```c
          int orig_start = after ? offset + 1 : 0;
          int orig_extent = after ? -1 : offset;
          int new_start = after ? 0 : offset;
          int new_extent = after ? offset + 1 : -1;
          # offset: -2, after: 1, node->count: 2
          # current wrong range: [-1,-1] [0,-1]
          # correct range: [1,-1] [0, 1]
          ```
      
         Because only `_quicklistInsert()` splits the quicklistNode, and only
         `quicklistInsertAfter()` and `quicklistInsertBefore()` call _quicklistInsert(),
         `quicklistReplaceEntry()` and `listTypeInsert()` might trigger this crash.
         But the iterator of `listTypeInsert()` always goes from head to tail (iter->offset is
         always positive), so it is not affected.
         The final conclusion is that this crash only occurs when we insert a large element
         with a negative index into a list, which affects the `LSET` command and the `RM_ListSet`
         module API.
           
      3. In external test mode, we need to restore the quicklist packed threshold
         at the end of the test.
      4. Show `node->count` in quicklistRepr().
      5. Add new tcl proc `config_get_set` to support restoring config in tests.
  35. 06 Sep, 2022 1 commit
    • fix test Migrate the last slot away from a node using redis-cli (#11221) · c0ce97fa
      ranshid authored
      When using the cli to add a node, there can potentially be a race condition in
      which all nodes present cluster state OK even though the added node
      did not yet meet all cluster nodes.
      This adds another utility function to wait until all cluster nodes see the same cluster size.
  36. 24 Aug, 2022 1 commit
  37. 23 Aug, 2022 1 commit
    • Build TLS as a loadable module · 4faddf18
      Oran Agra authored
      * Support BUILD_TLS=module to be loaded as a module via config file or
        command line. e.g. redis-server --loadmodule redis-tls.so
      * Updates to redismodule.h to allow it to be used side by side with
        server.h by defining REDISMODULE_CORE_MODULE
      * Changes to server.h, redismodule.h and module.c to avoid repeated
        type declarations (gcc 4.8 doesn't like these)
      * Add a mechanism for non-ABI neutral modules (ones who include
        server.h) to refuse loading if they detect not being built together with
        redis (release.c)
      * Fix wrong signature of RedisModuleDefragFunc, this could break
        compilation of a module, but not the ABI
      * Move initialization of listeners in server.c to be after loading
        the modules
      * Config TLS after initialization of listeners
      * Init cluster after initialization of listeners
      * Add TLS module to CI
      * Fix a test suite race condition:
        Now that the listeners are initialized later, it's not sufficient to
        wait for the PID message in the log, we need to wa...
  38. 12 Jul, 2022 1 commit
  39. 11 Jul, 2022 1 commit
    • Add cluster-port support to redis-cli --cluster (#10344) · 35e8ae3e
      Binbin authored
      In #9389, we added a new `cluster-port` config to make the cluster bus port configurable,
      and currently redis-cli --cluster create/add-node doesn't support instances with a configurable `cluster-port`,
      because redis-cli uses the old way (port + 10000) to send the `CLUSTER MEET` command.
      
      Now we add this support on redis-cli `--cluster`, note we don't need to explicitly pass in the
      `cluster-port` parameter, we can get the real `cluster-port` of the node in `clusterManagerNodeLoadInfo`,
      so the `--cluster create` and `--cluster add-node` interfaces have not changed.
      
      We will use the `cluster-port` when we are doing `CLUSTER MEET`; also note that the `CLUSTER MEET` bus-port
      parameter was added in 4.0, so if the bus_port (the one in redis-cli) is 0, or equal to (port + 10000),
      we just call `CLUSTER MEET` with 2 arguments, using the old form (see the sketch below).
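      That decision can be sketched as follows (illustrative helper, not the actual clusterManager code):
      ```c
      #include <stdio.h>

      /* Sketch: only pass the third CLUSTER MEET argument (the bus port)
       * when it differs from the legacy port+10000 convention; otherwise
       * keep the old 2-argument form for compatibility. */
      void cluster_meet(const char *ip, int port, int bus_port) {
          if (bus_port == 0 || bus_port == port + 10000)
              printf("CLUSTER MEET %s %d\n", ip, port);
          else
              printf("CLUSTER MEET %s %d %d\n", ip, port, bus_port);
      }
      ```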
      Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>