1. 23 Sep, 2024 1 commit
  2. 13 Sep, 2024 1 commit
  3. 12 Sep, 2024 1 commit
    • Optimize SSCAN command in case of listpack or intset encoding: avoid the usage... · f2f85ba3
      Filipe Oliveira (Redis) authored
      Optimize SSCAN command in case of listpack or intset encoding: avoid the usage of intermediate list. From 2N to N iterations (#13530)
      
      On SSCAN, in the case of listpack and intset encodings we actually reply
      with the entire set and always return cursor 0.
      
      For those cases, we don't need to accumulate the replies in a list and
      can completely avoid the overhead of list appending and then iterating
      over the list again -- meaning we do N iterations instead of 2N
      iterations over the SET and save intermediate memory as well.
      
      Preliminary benchmarks, `SSCAN set:100 0`, showcased an improvement of
      60%, as visible below, on a SET with 100 string elements (listpack
      encoded).
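
      A minimal sketch of the single-pass idea for the intset case (simplified, not the exact patch; the listpack path is analogous):
      ```
      /* Sketch (simplified): since listpack/intset sets are returned whole with
       * cursor 0, stream the reply in one pass over the set instead of building
       * an intermediate list and iterating it again. */
      if (o->encoding == OBJ_ENCODING_INTSET) {
          int64_t v;
          uint32_t pos = 0;
          addReplyArrayLen(c, intsetLen(o->ptr));   /* element count known upfront */
          while (intsetGet(o->ptr, pos++, &v))
              addReplyBulkLongLong(c, v);           /* emit directly, no temp list */
      }
      ```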
      f2f85ba3
  4. 04 Sep, 2024 1 commit
  5. 03 Sep, 2024 1 commit
    • Reply LOADING on replica while flushing the db (#13495) · a7afd1d2
      Ozan Tezcan authored
      On a full sync, the replica starts discarding the existing db. If the existing
      db is huge and the flush happens synchronously, the replica may become
      unresponsive.
      
      Adding a change to yield back to the event loop while flushing the db on
      a replica. The replica will reply -LOADING in this case. Note that while the
      replica is loading the new rdb, it may get an error and start flushing
      the partial db. This step may take a long time as well. Similarly, the
      replica will reply -LOADING in that case.
      
      To call processEventsWhileBlocked() and reply -LOADING, we need to:
      - Set the read handler to NULL via connSetReadHandler() so we don't process further data from the master
      - Set the server.loading flag
      - Call blockingOperationStarts()
      
      rdbLoad() already does these steps and calls processEventsWhileBlocked()
      while loading the rdb. Added a new call, rdbLoadWithEmptyFunc(), which
      accepts a callback to flush the db before loading the rdb or when an error
      happens while loading.
      
      For diskless replication, we do something similar, calling emptyData()
      after setting the required flags.
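
      A rough sketch of that diskless path (function names as referenced above; the flag handling and `empty_db_flags` are simplified assumptions):
      ```
      /* Rough sketch (simplified/assumed flag handling): prepare the state that
       * lets emptyData() yield to the event loop and answer clients with
       * -LOADING while the old db is being discarded. */
      connSetReadHandler(conn, NULL);          /* stop consuming master input      */
      server.loading = 1;                      /* clients now get -LOADING replies */
      blockingOperationStarts();
      emptyData(-1, empty_db_flags, replicationEmptyDbCallback);
      blockingOperationEnds();
      server.loading = 0;
      connSetReadHandler(conn, readSyncBulkPayload);  /* resume reading the rdb payload */
      ```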
      
      Additional changes:
      - Allow `appendonly` config change during loading.
       The config can be changed while loading data on startup or during
       replication when the replica is loading the RDB. We allow the config
       change command to update `server.aof_enabled` and then lazily apply the
       change after the loading operation completes.
       
       - Added a test for `replica-lazy-flush` config
      a7afd1d2
  6. 03 Aug, 2024 1 commit
  7. 31 Jul, 2024 1 commit
  8. 25 Jul, 2024 1 commit
    • solve races in replication lpop tests (#13445) · e74550dd
      Oran Agra authored
      * some tests didn't wait for replication offset sync
      * tests that used a deferring client didn't wait for it to get blocked,
      and in some cases the replication offset sync ended before the deferring
      client finished, so the digest match failed.
      * some tests used deferring clients excessively
      * the tests didn't read the client response
      * the tests didn't close the client (fd leak)
      e74550dd
  9. 22 Jul, 2024 1 commit
    • solve race conditions in tests (#13433) · 447ce11a
      Oran Agra authored
      [exception]: Executing test client: ERR FAILOVER target replica is not
      online.. ERR FAILOVER target replica is not online.
          while executing
      "$node_0 failover to $node_1_host $node_1_port"
          ("uplevel" body line 16)
          invoked from within
      "uplevel 1 $code"
          (procedure "test" line 58)
          invoked from within
      "test {failover command to specific replica works} {
      
      [err]: client evicted due to percentage of maxmemory in
      tests/unit/client-eviction.tcl
      Expected 33622 >= 220200 && 33622 < 440401 (context: type eval line 17
      cmd {assert {$tot_mem >= $n && $tot_mem < $maxmemory_clients_actual}}
      proc ::test)
      447ce11a
  10. 15 Jul, 2024 1 commit
    • solve redis-cli test failures due to local history file (#13419) · 880e147d
      Oran Agra authored
      test failure:
      ```
      [err]: Interactive CLI: should find second search result if user presses ctrl+s in tests/integration/redis-cli.tcl
      Expected '1' to be equal to '0' (context: type eval line 10 cmd {assert_equal 1 [regexp {\(i-search\): \x1B\[0mk\x1B\[1mey\x1B\[0ms one} $result]} proc ::test)
      ```
      
      this test (introduced in #12543) depends on the local history file, so
      it can fail if there's some match there.
      the fix is to use a different history file, and delete it before each
      run.
      880e147d
  11. 09 Jul, 2024 1 commit
    • Hide user data from log (#13400) · 69b480cb
      debing.sun authored
      
      
      This PR is based on the commits from PR #11747.
      
      In the event of an assertion failure, hide command arguments from the
      operator.
      
      In some cases, private client information can be unintentionally exposed
      when a redis instance crashes due to an assertion failure.
      This commit prevents unintentional client info exposure.
      Operators can still access the hidden data, but they must actively
      request it.
      The client info commands themselves remain unchanged.
      
      ### Config
      Add a new config `hide-user-data-from-log` to turn this feature on and
      off, default off.
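
      A minimal sketch of the idea (the `server.hide_user_data_from_log` field name and log format here are assumptions, not the exact code):
      ```
      /* Minimal sketch (assumed config field name): on an assertion failure,
       * include command arguments in the log only when the operator did not
       * enable hide-user-data-from-log. */
      if (server.hide_user_data_from_log) {
          serverLog(LL_WARNING, "ASSERTION FAILED: command '%s' (user data hidden)",
                    c->cmd ? c->cmd->fullname : "<unknown>");
      } else {
          /* previous behavior: log the full argv, including user-supplied values */
      }
      ```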
      
      ---------
      Co-authored-by: naglera <anagler123@gmail.com>
      Co-authored-by: naglera <58042354+naglera@users.noreply.github.com>
      69b480cb
  12. 02 Jul, 2024 1 commit
  13. 10 Jun, 2024 1 commit
    • Reserve 2 bits out of EB_EXPIRE_TIME_MAX for possible future use (#13331) · f01fdc39
      Moti Cohen authored
      Reserve 2 bits out of hash-field expiration time (`EB_EXPIRE_TIME_MAX`)
      for possible future lightweight indexing/categorizing of fields. It can
      be achieved by hacking HFE as follows:
      ```
      HPEXPIREAT key [ 2^47 + USER_INDEX ] FIELDS numfields field [field …]
      ```
      
      Redis will also need to expose some kind of `HEXPIRESCAN` and `HEXPIRECOUNT`
      commands for this idea; that is yet to be defined.
      
      `HFE_MAX_ABS_TIME_MSEC` constraint must be enforced only at API level.
      Internally, the expiration time can be up to `EB_EXPIRE_TIME_MAX` for
      future readiness.
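
      A sketch of the intended relationship between the two limits (the exact constant values are an assumption inferred from the 2^47 example above):
      ```
      /* Sketch; constants inferred from the 2^47 example: the internal encoding
       * keeps the full range, while the API limit leaves the top 2 bits free for
       * future lightweight field tagging. */
      #define EB_EXPIRE_TIME_MAX     ((1ULL << 48) - 1)   /* internal capacity     */
      #define HFE_MAX_ABS_TIME_MSEC  ((1ULL << 46) - 1)   /* enforced at API level */
      ```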
      f01fdc39
  14. 29 May, 2024 1 commit
    • HFE to support AOF and replicas (#13285) · 33fc0fbf
      Moti Cohen authored
      * For the replica's sake, rewrite the commands `H*EXPIRE*`, `HSETF`, `HGETF` to
      have an absolute unix time in msec.
      * On active-expiration of a field, propagate HDEL to the replica
      (`propagateHashFieldDeletion()`)
      * On lazy-expiration, propagate HDEL to the replica (`hashTypeGetValue()`
      now calls `hashTypeDelete()`. It also takes care to call
      `propagateHashFieldDeletion()`).
      * Fix the `H*EXPIRE*` commands such that if they get the flag `LT` and the
      field doesn't have any expiration, it is considered a valid
      condition.
      
      Note that replicas don't do any active expiration and should avoid lazy
      expiration. In `hashTypeGetValue()` they don't check expiration (as long
      as the master didn't request to delete the field, it is valid).
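
      A sketch of the lazy-expiration path on the master (simplified signatures; `hashFieldIsExpired` is a stand-in name, not a real helper):
      ```
      /* Sketch (simplified signatures): when a read on the master finds an
       * expired field, delete it locally and replicate an explicit HDEL, so
       * replicas never expire fields on their own. */
      if (iAmMaster() && hashFieldIsExpired(expire_at_ms, commandTimeSnapshot())) {
          hashTypeDelete(o, field);                                     /* local removal        */
          propagateHashFieldDeletion(c->db, key, field, sdslen(field)); /* HDEL to replicas/AOF */
      }
      ```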
      
      TODO:
      * Attach `dbid` to HASH metadata. See
      [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)
      
      ---------
      Co-authored-by: debing.sun <debing.sun@redis.com>
      33fc0fbf
  15. 22 May, 2024 1 commit
  16. 18 May, 2024 1 commit
  17. 17 May, 2024 1 commit
    • Hfe serialization listpack (#13243) · 323be4d6
      Ronen Kalish authored
      Add RDB de/serialization for HFE
      
      This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and
      `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data.
      When the hash RAM encoding is dict, it will be saved in the former, and
      when it is listpack it will be saved in the latter.
      Both formats just add the TTL value for each field after the data that
      was previously saved, i.e., HASH_METADATA will save the number of entries
      and, for each entry, the key, value and TTL, whereas the listpack is saved as a
      blob.
      On read, the usual dict <--> listpack conversion takes place if
      required.
      In addition, when reading a hash that was saved as a dict, fields are
      actively expired if their expiry is due. Currently this also holds for the
      listpack encoding, but it is supposed to be removed.
      
      TODO:
      Remove active expiry on load when loading from listpack format (unless
      we'll decide to keep it)
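
      A hedged layout sketch of the dict-encoded format (hypothetical write helpers, not the real rdb.c API):
      ```
      /* Layout sketch: RDB_TYPE_HASH_METADATA stores the entry count and then,
       * for every field, its key, value and absolute TTL in milliseconds
       * (0 = no expiration). The listpack-encoded variant instead dumps the
       * TTL-carrying listpack as a blob. */
      #include <stdint.h>
      #include <stddef.h>

      typedef struct { const char *key; const char *value; uint64_t ttl_ms; } hash_field;

      typedef struct {
          void (*write_len)(uint64_t len);     /* stand-ins for the RDB write helpers */
          void (*write_str)(const char *s);
          void (*write_u64)(uint64_t v);
      } rdb_writer;

      static void save_hash_metadata(const rdb_writer *w, const hash_field *f, size_t n) {
          w->write_len(n);                     /* number of entries            */
          for (size_t i = 0; i < n; i++) {
              w->write_str(f[i].key);          /* field name                   */
              w->write_str(f[i].value);        /* field value                  */
              w->write_u64(f[i].ttl_ms);       /* per-field TTL, 0 when unset  */
          }
      }
      ```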
      323be4d6
  18. 10 May, 2024 1 commit
    • Add reverse history search in redis-cli (linenoise) (#12543) · 8a05f009
      ClaytonNorthey92 authored
      Added reverse history search to redis-cli; use it as follows:
      
      * CTRL+R : enable backward search mode, and search for the next match when
      pressing CTRL+R again, until index 0 is reached.
      ```
      127.0.0.1:6379> keys one
      127.0.0.1:6379> keys two
      (reverse-i-search):                   # press CTRL+R
      (reverse-i-search): keys two          # input `keys`
      (reverse-i-search): keys one          # press CTRL+R again
      (reverse-i-search): keys one          # press CTRL+R again, still `keys one` due to reaching index 0
      (i-search): keys two                  # press CTRL+S, enable search forward
      (i-search): keys two                  # press CTRL+S, still `keys two` due to reaching index 1
      ```
      
      * CTRL+S : enable forward search mode, and search for the next match when
      pressing CTRL+S again, until index 0 is reached.
      ```
      127.0.0.1:6379> keys one
      127.0.0.1:6379> keys two
      (i-search):                       # press CTRL+S
      (i-search): keys one              # input `keys`
      (i-search): keys two              # press CTRL+S again
      (i-search): keys two              # press CTRL+S again, still `keys two` due to reaching index 0
      (reverse-i-search): keys one      # press CTRL+R, enable search backward
      (reverse-i-search): keys one      # press CTRL+S, still `keys one` due to reaching index 1
      ```
      
      * CTRL+G : disable
      ```
      127.0.0.1:6379> keys one
      127.0.0.1:6379> keys two
      (reverse-i-search):                   # press CTRL+R
      (reverse-i-search): keys two          # input `keys`
      127.0.0.1:6379>                       # press CTRL+G
      ```
      
      * CTRL+C : disable
      ```
      127.0.0.1:6379> keys one
      127.0.0.1:6379> keys two
      (reverse-i-search):                   # press CTRL+R
      (reverse-i-search): keys two          # input `keys`
      127.0.0.1:6379>                       # press CTRL+C
      ```
      
      * TAB : use the current search result and exit search mode
      ```
      127.0.0.1:6379> keys one
      127.0.0.1:6379> keys two
      (reverse-i-search):                # press CTRL+R
      (reverse-i-search): keys two       # input `keys`
      127.0.0.1:6379> keys two           # press TAB
      ```
      
      * ENTER : use the current search result and execute the command
      ```
      127.0.0.1:6379> keys one
      127.0.0.1:6379> keys two
      (reverse-i-search):                 # press CTRL+R
      (reverse-i-search): keys two        # input `keys`
      127.0.0.1:6379> keys two            # press ENTER
      (empty array)
      127.0.0.1:6379>
      ```
      
      * any arrow key will disable reverse search
      
      Your result will have the search match bolded; you can press ENTER to
      execute the full result.
      
      note: I have _only added this for multi-line mode_, as it seems to be
      forced that way when `repl` is called
      
      Closes: https://github.com/redis/redis/issues/8277
      
      
      
      ---------
      Co-authored-by: Clayton Northey <clayton@knowbl.com>
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      Co-authored-by: debing.sun <debing.sun@redis.com>
      Co-authored-by: Bjorn Svensson <bjorn.a.svensson@est.tech>
      Co-authored-by: Viktor Söderqvist <viktor@zuiderkwast.se>
      8a05f009
  19. 12 Mar, 2024 1 commit
    • Fix redis-check-aof incorrectly considering data in manifest format as MP-AOF (#12958) · da727ad4
      Binbin authored
      The check in fileIsManifest misjudged the manifest file. For example,
      if a RESP AOF contains "file", it will be considered a manifest file and
      the check will fail:
      ```
      *3
      $3
      set
      $4
      file
      $4
      file
      ```
      
      In #12951, if the preamble aof also contains it, it will also fail.
      Fixes #12951.
      
      The bug was happening if the word "file" was mentioned
      in the first 1024 lines of the AOF. Now, as soon as it finds
      a non-comment line it breaks (whether it contains "file" or not).
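
      A sketch of the corrected logic (simplified, not the exact redis-check-aof code):
      ```
      /* Sketch: decide based on the first non-comment, non-empty line only,
       * instead of scanning up to 1024 lines for the word "file". */
      #include <stdio.h>
      #include <string.h>

      static int fileLooksLikeManifest(FILE *fp) {
          char buf[1024];
          while (fgets(buf, sizeof(buf), fp) != NULL) {
              if (buf[0] == '#' || buf[0] == '\n') continue;   /* skip comments/blanks */
              return strncmp(buf, "file ", 5) == 0;            /* first real line decides */
          }
          return 0;
      }
      ```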
      da727ad4
  20. 20 Feb, 2024 1 commit
    • xinfo-stream add minimum to seen-time, skip logreqres in fuzzer (#13056) · ca5cac99
      Binbin authored
      
      
      Recently I saw in CI that reply-schemas-validator fails here:
      ```
      Failed validating 'minimum' in schema[1]['properties']['groups']['items']['properties']['consumers']['items']['properties']['active-time']:
          {'description': 'Last time this consumer was active (successful '
                          'reading/claiming).',
           'minimum': 0,
           'type': 'integer'}
      
      On instance['groups'][0]['consumers'][0]['active-time']:
          -1729380548878722639
      ```
      
      The reason is that in the fuzzer we may restore a corrupted active-time,
      which will cause the reply schema CI to fail.
      
      The fuzzer can corrupt the state in many places, which can cause
      bugs that mess up the reply, so we decided to skip logreqres.
      
      Also, seen-time is the same type as active-time, so we add the minimum to it as well.
      
      ---------
      Co-authored-by: Oran Agra <oran@redislabs.com>
      ca5cac99
  21. 09 Jan, 2024 1 commit
  22. 23 Nov, 2023 1 commit
    • Fix async safety in signal handlers (#12658) · 2e854bcc
      meiravgri authored
      see discussion from after https://github.com/redis/redis/pull/12453 was
      merged
      ----
      This PR replaces signals that are not considered async-signal-safe
      (AS-safe) with safe calls.
      
      #### **1. serverLog() and serverLogFromHandler()**
      `serverLog` uses unsafe calls. It was decided that we will **avoid**
      `serverLog` calls by the signal handlers when:
      * The signal is not fatal, such as SIGALRM. In these cases, we prefer
      using `serverLogFromHandler` which is the safe version of `serverLog`.
      Note they have different prompts:
      `serverLog`: `62220:M 26 Oct 2023 14:39:04.526 # <msg>`
      `serverLogFromHandler`: `62220:signal-handler (1698331136) <msg>`
      * The code was added recently. Calls to `serverLog` by the signal
      handlers have been there ever since Redis has existed and they haven't caused
      problems so far. To avoid regressions, from now on we should use
      `serverLogFromHandler`.
      
      #### **2. `snprintf` `fgets` and `strtoul`(base = 16) -------->
      `_safe_snprintf`, `fgets_async_signal_safe`, `string_to_hex`**
      The safe version of `snprintf` was taken from
      [here](https://github.com/twitter/twemcache/blob/8cfc4ca5e76ed936bd3786c8cc43ed47e7778c08/src/mc_util.c#L754)
      
      #### **3. fopen(), fgets(), fclose() --------> open(), read(), close()**
      
      #### **4. opendir(), readdir(), closedir() --------> open(),
      syscall(SYS_getdents64), close()**
      
      #### **5. Threads_mngr sync mechanisms**
      * waiting for the thread to generate stack trace: semaphore -------->
      busy-wait
      * `globals_rw_lock` was removed: as we are not using malloc and the
      semaphore anymore we don't need to protect `ThreadsManager_cleanups`.
      
      #### **6. Stacktraces buffer**
      The initial problem was that we were not able to safely call malloc
      within the signal handler.
      To solve that we created a buffer on the stack of `writeStacktraces` and
      saved it in a global pointer, assuming that under normal circumstances,
      the function `writeStacktraces` would complete before any thread
      attempted to write to it. However, **if threads lag behind, they might
      access this global pointer after it no longer belongs to the
      `writeStacktraces` stack, potentially corrupting memory.**
      To address this, various solutions were discussed
      [here](https://github.com/redis/redis/pull/12658#discussion_r1390442896)
      Eventually, we decided to **create a pipe** at server startup that will
      remain valid as long as the process is alive.
      We chose this solution due to its minimal memory usage, and since
      `write()` and `read()` are atomic operations. It ensures that stack
      traces from different threads won't mix.
      
      **The stacktraces collection process is now as follows:**
      * Cleaning the pipe to eliminate writes of late threads from previous
      runs.
      * Each thread writes to the pipe its stacktrace
      * Waiting for all the threads to mark completion or until a timeout (2
      sec) is reached
      * Reading from the pipe to print the stacktraces (a minimal sketch of the pipe hand-off follows).
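
      A minimal sketch of the pipe hand-off (simplified names; the real implementation lives in the stacktrace collection code):
      ```
      /* Sketch: the pipe is created once at startup; each thread reports its
       * formatted stacktrace with a single write() (atomic for buffers up to
       * PIPE_BUF), and the collecting thread reads everything back once the
       * threads are done or a timeout is reached. */
      #include <unistd.h>
      #include <limits.h>

      static int stacktrace_pipe[2];

      void initStacktracePipe(void) {               /* called at server startup */
          if (pipe(stacktrace_pipe) == -1) stacktrace_pipe[0] = stacktrace_pipe[1] = -1;
      }

      void reportStacktrace(const char *buf, size_t len) {
          if (len > PIPE_BUF) len = PIPE_BUF;       /* keep each write atomic   */
          write(stacktrace_pipe[1], buf, len);      /* async-signal-safe        */
      }
      ```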
      
      #### **7. Changes that were considered and eventually were dropped**
      * replace the watchdog timer with a POSIX timer:
      according to the [setitimer man page](https://linux.die.net/man/2/setitimer)
      
      > POSIX.1-2008 marks getitimer() and setitimer() obsolete, recommending
      the use of the POSIX timers API
      ([timer_gettime](https://linux.die.net/man/2/timer_gettime)(2),
      [timer_settime](https://linux.die.net/man/2/timer_settime)(2), etc.)
      instead.
      
      However, although it is supposed to conform to the POSIX standard, the POSIX
      timers API is not supported on Mac.
      You can take a look at the Linux implementation
      [here](https://github.com/redis/redis/commit/c7562ee13546e504977372fdf40d33c3f86775a5).
      To avoid messing up the code, and given the uncertainty regarding compatibility,
      it was decided to drop it for now.
      
      * avoid using sds (which uses malloc) in logConfigDebugInfo:
      it was considered to print the config info without sds, however,
      `logConfigDebugInfo` apparently does more than just print the sds, so
      it was decided this fix is out of this issue's scope.
      
      #### **8. fix Signal mask check**
      The check `signum & sig_mask` intended to indicate whether the signal is
      blocked by the thread was incorrect. Actually, the bit position in the
      signal mask corresponds to the signal number. We fixed this by changing
      the condition to: `sig_mask & (1L << (sig_num - 1))`
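
      A sketch of the corrected check:
      ```
      /* Sketch: bit (sig_num - 1) of the mask indicates whether the thread
       * blocks that signal. */
      static int signalIsBlocked(unsigned long sig_mask, int sig_num) {
          return (sig_mask & (1UL << (sig_num - 1))) != 0;
      }
      ```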
      
      #### **9. Unrelated changes**
      Both `fork.tcl` and `util.tcl` implemented a function called
      `count_log_message` expecting different parameters. This caused
      confusion when trying to run daily tests with additional test parameters
      to run a specific test.
      The `count_log_message` in `fork.tcl` was removed and the calls were
      replaced with calls to `count_log_message` located in `util.tcl`.
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      2e854bcc
  23. 08 Oct, 2023 1 commit
  24. 03 Oct, 2023 1 commit
  25. 02 Oct, 2023 2 commits
    • fix crash in crash-report and other improvements (#12623) · 4ba9e18e
      meiravgri authored
      
      
      ## Crash fix
      ### Current behavior
      We might crash if we fail to collect some of the threads' outputs, for example if collection exceeds the timeout.
      
      The threads mngr API guarantees that the output array length will be `tids_len`, however, some
      indices can be NULL, in case it fails to collect some of the threads' outputs.
      
      When we use the threads mngr to collect the threads' stacktraces, we rely on this and skip NULL
      entries. But since the output array was allocated with malloc, it contained garbage instead of NULL,
      so we got a segmentation fault when trying to read this garbage (in debug.c:writeStacktraces()).
      
      ### fix
      Allocate the global output array with zcalloc.
      
      ### To reproduce the bug, you'll have to change the code:
      **in threadsmngr:ThreadsManager_runOnThreads():**
      make sure the g_output_array allocation is initialized with garbage and not 0s 
      (add `memset(g_output_array, 2, sizeof(void*) * tids_len);` below the allocation).
      
      Force one of the threads to write to the array:
      add a global var: `static redisAtomic size_t return_now = 0;` 
      add to `invoke_callback()` before writing to the output array:
      ```
          size_t i_return;
          atomicGetIncr(return_now, i_return, 1);
          if(i_return == 1) return;
      ```
      Compile, start the server with `--enable-debug-command local` and run `redis-cli debug assert`.
      The assertion triggers the stacktrace collection.
      Expect to get 2 prints of the stack trace - since we get the segmentation fault after we return from
      the threads mngr, it can be safely triggered again.
      
      ## Added global variables r/w lock in ThreadsManager
      To avoid a situation where the main thread runs `ThreadsManager_cleanups` while threads are still
      invoking the signal handler, we use a r/w lock.
      For cleanups, we will acquire the write lock.
      The threads will acquire the read lock to enable them to write simultaneously.
      If we fail to acquire the read lock, it means cleanups are in progress and we return immediately.
      After acquiring the lock we can safely check that the global output array wasn't nullified and proceed
      to write to it.
      This way we ensure the threads are not modifying the global variables or trying to write to the output
      array after they were zeroed/nullified/destroyed (the semaphore).
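
      A sketch of the locking scheme described above (pthread names, simplified):
      ```
      /* Sketch: cleanups take the write lock; reporting threads try the read
       * lock and bail out if cleanups have already started. */
      #include <pthread.h>

      static pthread_rwlock_t g_rwlock = PTHREAD_RWLOCK_INITIALIZER;

      void threadsManagerCleanups(void) {
          pthread_rwlock_wrlock(&g_rwlock);
          /* zero/nullify the global output array, destroy the semaphore ... */
          pthread_rwlock_unlock(&g_rwlock);
      }

      void threadWritesOutput(void) {
          if (pthread_rwlock_tryrdlock(&g_rwlock) != 0)
              return;                 /* cleanups in progress, nothing to write to */
          /* safe: check the global array is still valid and write this entry */
          pthread_rwlock_unlock(&g_rwlock);
      }
      ```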
      
      ## other minor logging change
      1. removed logging if the semaphore times out, because the threads can still write to the output array
        after this check. Instead, we print the total number of printed stacktraces compared to the expected
        number (len_tids).
      2. use noinline attribute to make sure the uplevel number of ignored stack trace entries stays correct.
      3. improve testing
      Co-authored-by: Oran Agra <oran@redislabs.com>
      4ba9e18e
    • Stabilization and improvements around aof tests (#12626) · 2e0f6724
      YaacovHazan authored
      In some tests, the code manually searches for a log message, and it
      uses tail -1 with a delay of 1 second, which can miss the expected line.
      
      Also, because the aof tests use start_server_aof and not start_server,
      the test name isn't logged into the server log.
      
      To fix the above, I made the following changes:
      - Change the start_server_aof to wrap the start_server.
        This will add the created aof server to the servers list, and make
        srv() and wait_for_log_messages() available for the tests.
      
      - Introduce a new option for start_server.
        'wait_ready' - an option to let the caller start the test code without
        waiting for the server to be ready. useful for tests on a server that
        is expected to exit on startup.
      
      - Create a new start_server_aof_ex.
        The new proc also accepts options as an argument and makes use of the
        new 'short_life' option for tests that are expected to exit on startup
        because of some error in the aof file(s).
      
      Because of the above, I had to change many lines and replace every
      local srv variable (a server config) usage with the srv().
      2e0f6724
  26. 27 Sep, 2023 1 commit
  27. 24 Sep, 2023 1 commit
    • Print stack trace from all threads in crash report (#12453) · cc2be639
      meiravgri authored
      In this PR we are adding the functionality to collect all the process's threads' backtraces.
      
      ## Changes made in this PR
      
      ### **introduce threads mngr API**
      The **threads mngr API** which has 2 abilities:
      * `ThreadsManager_init()` - registers to SIGUSR2. Called on server start-up.
      * `ThreadsManager_runOnThreads()` - receives a list of pid_t and a callback, tells every
        thread in the list to invoke the callback, and returns the output collected by each invocation.
      **Elaborating the atomicvar API**
      * `atomicIncrGet(var,newvalue_var,count)` -- Increment and get the atomic counter's new value
      * `atomicFlagGetSet` -- Get and set the atomic counter value to 1
      
      ### **Always set SIGALRM handler**
      The SIGALRM handler prints the process's stacktrace to the log file. Up until now, it was set only if
      `server.watchdog_period` > 0. This can also be useful if debugging is needed. However, in situations
      where the server can't get requests (a deadlock, for example), we weren't able to change the signal handler.
      To make it available at run time we set the SIGALRM handler on server startup. The signal handler name was
      changed to a more general `sigalrmSignalHandler`.
      
      ### **Print all the process' threads' stacktraces**
      
      `logStackTrace()` now calls `writeStacktraces()`, instead of logging the current thread stacktrace.
      `writeStacktraces()`:
      * On Linux systems we use the threads manager API to collect the backtraces of all the process' threads.
        To get the `tids` list (thread ids) we read the `/proc/<redis-server-pid>/task` directory, which includes a list of directories.
        Each directory name corresponds to one tid (including the main thread); a minimal sketch of this enumeration appears after this list. For each thread, we also need to check if it
        can get the signal from the threads manager (meaning it is not blocking/ignoring that signal). We send the threads
        manager this tids list and the `collect_stacktrace_data()` callback, which collects the thread's backtrace addresses,
        its name, and tid.
      * On other systems, the behavior remained as it was (writing only the current thread stacktrace to the log file).
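
      A hedged sketch of the tid enumeration used by this commit's readdir-based approach (simplified, standalone):
      ```
      /* Sketch: enumerate the process's thread ids by listing /proc/<pid>/task;
       * each directory name is one tid (including the main thread). */
      #include <dirent.h>
      #include <stdlib.h>
      #include <sys/types.h>

      static size_t getProcessTids(pid_t *tids, size_t max) {
          size_t n = 0;
          DIR *d = opendir("/proc/self/task");
          if (!d) return 0;
          struct dirent *e;
          while (n < max && (e = readdir(d)) != NULL) {
              if (e->d_name[0] == '.') continue;   /* skip "." and ".." */
              tids[n++] = (pid_t)atoi(e->d_name);
          }
          closedir(d);
          return n;
      }
      ```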
      
      ## compatibility notes
      1. **The threads mngr API is only supported in linux.** 
      2. glibc earlier than 2.3 We use `syscall(SYS_gettid)` and `syscall(SYS_tgkill...)` because their dedicated
        alternatives (`gettid()` and `tgkill`) were added in glibc 2.3.
      
      ## Output example
      
      Each thread backtrace will have the following format:
      `<tid> <thread_name> [additional_info]`
      * **tid**: as read from the `/proc/<redis-server-pid>/task` directory
      * **thread_name**: the thread name as it is registered in the OS.
      * **additional_info**: Sometimes we want to add specific information about one of the threads. Currently,
        it is only used to mark the thread that handles the backtraces collection by adding "*".
        In case of a crash - this also indicates which thread caused the crash. The handling thread won't
        necessarily appear first.
      
      ```
      ------ STACK TRACE ------
      EIP:
      /lib/aarch64-linux-gnu/libc.so.6(epoll_pwait+0x9c)[0xffffb9295ebc]
      
      67089 redis-server *
      linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffffb9437790]
      /lib/aarch64-linux-gnu/libc.so.6(epoll_pwait+0x9c)[0xffffb9295ebc]
      redis-server *:6379(+0x75e0c)[0xaaaac2fe5e0c]
      redis-server *:6379(aeProcessEvents+0x18c)[0xaaaac2fe6c00]
      redis-server *:6379(aeMain+0x24)[0xaaaac2fe7038]
      redis-server *:6379(main+0xe0c)[0xaaaac3001afc]
      /lib/aarch64-linux-gnu/libc.so.6(+0x273fc)[0xffffb91d73fc]
      /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0x98)[0xffffb91d74cc]
      redis-server *:6379(_start+0x30)[0xaaaac2fe0370]
      
      67093 bio_lazy_free
      /lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
      /lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
      redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
      /lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
      /lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]
      
      67091 bio_close_file
      /lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
      /lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
      redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
      /lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
      /lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]
      
      67092 bio_aof
      /lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
      /lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
      redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
      /lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
      /lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]
      67089:signal-handler (1693824528) --------
      ```
      cc2be639
  28. 02 Sep, 2023 1 commit
    • redis-benchmark - add the support for binary strings (#9414) · 044e29dd
      alonre24 authored
      
      
      Recently, the option of sending an argument from stdin using the `-x` flag
      was added to redis-benchmark (this option is available in redis-cli as well).
      However, using the `-x` option for sending a blob that contains null characters
      doesn't work as expected - the argument is trimmed at the first occurrence of
      `\x00` (unlike in redis-cli).
      This PR aims to fix this issue and add support for every binary string input,
      by sending the argument lengths to `redisFormatCommandArgv` when processing
      the redis-benchmark command, so we won't treat the arguments as C-strings.
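
      A sketch of that idea using the hiredis API (`blob`/`blob_len` stand in for the data read from stdin via `-x`):
      ```
      /* Sketch: pass explicit argument lengths so embedded NUL bytes are
       * preserved instead of treating arguments as C-strings. */
      #include <string.h>
      #include <hiredis/hiredis.h>

      static char *format_set_with_blob(const char *blob, size_t blob_len, long long *out_len) {
          const char *argv[3] = {"SET", "key:__rand_int__", blob};
          size_t argvlen[3]   = {3, strlen(argv[1]), blob_len};  /* blob may contain '\0' */
          char *cmd = NULL;
          *out_len = redisFormatCommandArgv(&cmd, 3, argv, argvlen);
          return cmd;  /* caller writes it out, then releases it with redisFreeCommand() */
      }
      ```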
      
      Additionally, we add simple test coverage for `-x` (without binary strings),
      remove an excessive server started in tests, and make sure to select db 0
      so that `r` and the benchmark work on the same db.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      044e29dd
  29. 16 Aug, 2023 1 commit
  30. 12 Jun, 2023 1 commit
  31. 22 May, 2023 1 commit
    • optimize spopwithcount propagation (#12082) · 38e284f1
      binfeng-xin authored
      
      
      A single SPOP command with a count argument resulted in many SPOP
      commands being propagated to the replica.
      This is inefficient because the key name is repeated many times, and is also
      being looked up many times.
      It also results in high QPS metrics on the replica.
      To solve that, we flush batches of 1024 fields per SPOP command.
      Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
      38e284f1
  32. 06 May, 2023 1 commit
    • Free backlog only if rsi is invalid when master reboot (#12088) · b0dd7b32
      zhaozhao.zz authored
      When the master reboots from an RDB, if the rsi in the RDB is valid we should not free the replication backlog, even if master_repl_offset or repl-offset is 0.
      
      If the master doesn't send any data to replicas, master_repl_offset stays 0, which is a valid number.
      
      A clear example:
      
      1. Start a master and apply some write commands; the master's master_repl_offset is 0 since it has no replicas.
      2. Stop write commands on the master, start another instance and replicaof the master, triggering a FULLRESYNC.
      3. The master's master_repl_offset is still 0 (set a large number for repl-ping-replica-period); do BGSAVE and restart the master.
      4. The master loads master_repl_offset from the RDB's rsi and it's still 0, and we should make sure the replica can partially resync with the master.
      b0dd7b32
  33. 18 Apr, 2023 1 commit
  34. 12 Apr, 2023 1 commit
    • Attempt to solve MacOS CI issues in GH Actions (#12013) · 997fa41e
      Oran Agra authored
      The MacOS CI in github actions often hangs without any logs. GH argues that
      it's due to resource utilization, either running out of disk space, memory, or CPU
      starvation, and thus the runner is terminated.
      
      This PR contains multiple attempts to resolve this:
      1. introducing pause_process instead of SIGSTOP, which waits for the process
        to stop before resuming the test, possibly resolving race conditions in some tests.
        This was a suspect since there was one test that could result in an infinite loop in that
        case; in practice this didn't help, but it's still a good idea to keep.
      2. disable the `save` config in many tests that don't need it, specifically ones that use
        heavy writes and could create large files.
      3. change the `populate` proc to use short pipeline rather than an infinite one.
      4. use `--clients 1` in the macos CI so that we don't risk running multiple resource
        demanding tests in parallel.
      5. enable `--verbose` to be repeated to elevate verbosity and print more info to stdout
        when a test or a server starts.
      997fa41e
  35. 30 Mar, 2023 1 commit
    • Reimplement cli hints based on command arg docs (#10515) · 1f76bb17
      Jason Elbaum authored
      
      
      Now that the command argument specs are available at runtime (#9656), this PR addresses
      #8084 by implementing a complete solution for command-line hinting in `redis-cli`.
      
      It correctly handles nearly every case in Redis's complex command argument definitions, including
      `BLOCK` and `ONEOF` arguments, reordering of optional arguments, and repeated arguments
      (even when followed by mandatory arguments). It also validates numerically-typed arguments.
      It may not correctly handle all possible combinations of those, but overall it is quite robust.
      
      Arguments are only matched after the space bar is typed, so partial word matching is not
      supported - that proved to be more confusing than helpful. When the user's current input
      cannot be matched against the argument specs, hinting is disabled.
      
      Partial support has been implemented for legacy (pre-7.0) servers that do not support
      `COMMAND DOCS`, by falling back to a statically-compiled command argument table.
      On startup, if the server does not support `COMMAND DOCS`, `redis-cli` will now issue
      an `INFO SERVER` command to retrieve the server version (unless `HELLO` has already
      been sent, in which case the server version will be extracted from the reply to `HELLO`).
      The server version will be used to filter the commands and arguments in the command table,
      removing those not supported by that version of the server. However, the static table only
      includes core Redis commands, so with a legacy server hinting will not be supported for
      module commands. The auto generated help.h and the scripts that generates it are gone.
      
      Command and argument tables for the server and CLI use different structs, due primarily
      to the need to support different runtime data. In order to generate code for both, macros
      have been added to `commands.def` (previously `commands.c`) to make it possible to
      configure the code generation differently for different use cases (one linked with redis-server,
      and one with redis-cli).
      
      Also adding a basic testing framework for the command hints based on new (undocumented)
      command line options to `redis-cli`: `--test_hint 'INPUT'` prints out the command-line hint for
      a given input string, and `--test_hint_file <filename>` runs a suite of test cases for the hinting
      mechanism. The test suite is in `tests/assets/test_cli_hint_suite.txt`, and it is run from
      `tests/integration/redis-cli.tcl`.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
      1f76bb17
  36. 20 Mar, 2023 1 commit
    • Fix new subscribe mode test in reply-schemas-validator (#11939) · c9124145
      Binbin authored
      The reason is that in the reply-schemas-validator, the RESP version of the
      client we create will be client_default_resp (currently 3):
      ```
      client *createClient(connection *conn) {
          client *c = zmalloc(sizeof(client));
       #ifdef LOG_REQ_RES
          reqresReset(c, 0);
          c->resp = server.client_default_resp;
       #else
          c->resp = 2;
       #endif
      }
      ```
      
      But current_resp3 in redis-cli will be inconsistent with it,
      so the test adds a simple hello 3 to avoid this failure; the test
      was added in #11873.
      
      Also added a help description for the dont-pre-clean option, which was
      added in #10273.
      c9124145
  37. 19 Mar, 2023 1 commit
    • redis-cli: Accept commands in subscribed mode (#11873) · bbf364a4
      Viktor Söderqvist authored
      The message "Reading messages... (press Ctrl-C to quit)" is replaced by
      "Reading messages... (press Ctrl-C to quit or any key to type command)".
      
      This allows users to subscribe to more channels, to try out UNSUBSCRIBE and to
      combine pubsub with other features such as push messages from client tracking.
      
      The "Reading messages" info message is displayed in the bottom of the output in a
      distinct style and moves downward as more messages appear. When any key is pressed,
      the info message is replaced by the prompt for entering commands.
      After entering a command and the reply is displayed, the "Reading messages" info
      messages appears again. This is added to the repl loop in redis-cli and in the
      corresponding place for non-interactive mode.
      
      An indication "(subscribed mode)" is included in the prompt when entering commands
      in subscribed mode.
      
      Also:
      * Fixes a problem that UNSUBSCRIBE hanged when used with RESP3 and push callback,
        without first entering subscribe mode. It hanged because UNSUBSCRIBE gets one or
        more push replies but no in-band reply.
      * Exit subscribed mode after RESET.
      bbf364a4
  38. 13 Mar, 2023 1 commit
    • Fix tail->repl_offset update in feedReplicationBuffer (#11905) · 7997874f
      Binbin authored
      
      
      In #11666, we added a while loop that splits a big reply
      node into multiple nodes. The update of tail->repl_offset may
      be wrong: before #11666 we would have created at most
      one new reply node, and now we create multiple nodes if
      it is a big reply node.
      
      Now that we are creating more than one node, the tail->repl_offset
      of all the nodes except the last one is incorrect, because we
      update master_repl_offset at the beginning and then use it to
      update the tail->repl_offset. This would have led to an assertion
      during PSYNC; a test was added to validate that case.
      
      Besides that, the calculation of the node size was adjusted to fix
      tests that failed due to a combination of a very low backlog size
      and thresholds that get violated because of the relatively
      high overhead of replBufBlock. So now if the backlog size / 16 is too
      small, we'll take PROTO_REPLY_CHUNK_BYTES instead.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      7997874f
  39. 12 Mar, 2023 1 commit
    • Large blocks of replica client output buffer could lead to psync loops and... · 7be7834e
      xbasel authored
      
      Large blocks of replica client output buffer could lead to psync loops and unnecessary memory usage (#11666)
      
      This can happen when a key almost equal to or larger than the
      client output buffer limit of the replica is written.
      
      Example:
      1. DB is empty
      2. Backlog size is 1 MB
      3. Client output buffer limit is 2 MB
      4. Client writes a 3 MB key
      5. The shared replication buffer will have a single node which contains
      the key written above, and it exceeds the backlog size.
      
      At this point the client output buffer usage calculation will report the
      replica buffer to be 3 MB (or more) even after sending all the data to
      the replica.
      The primary drops the replica connection for exceeding the limits,
      the replica reconnects and successfully executes partial sync but the
      primary will drop the connection again because the buffer usage is still
      3 MB. This happens over and over.
      
      To mitigate the problem, this fix limits the maximum size of a single
      backlog node to be (repl_backlog_size/16). This way a single node can't
      exceed the limits of the COB (the COB has to be larger than the
      backlog).
      It also means that if the backlog has some excessive data it can't trim,
      it would be at most about 6% overuse.
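
      A sketch of the cap (variable names are illustrative, not the exact feedReplicationBuffer code):
      ```
      /* Sketch: each new replication buffer node holds at most
       * max(repl_backlog_size / 16, PROTO_REPLY_CHUNK_BYTES) bytes, so one huge
       * value is split into nodes the backlog can actually trim. */
      size_t node_limit = server.repl_backlog_size / 16;
      if (node_limit < PROTO_REPLY_CHUNK_BYTES) node_limit = PROTO_REPLY_CHUNK_BYTES;
      size_t copy_len = (remaining < node_limit) ? remaining : node_limit;  /* min(len, max(...)) */
      ```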
      
      other notes:
      1. a loop was added in feedReplicationBuffer which caused a massive LOC
        change due to indentation; the actual changes are just the `min(max` and the loop.
      2. an unrelated change in an existing test to speed up a server termination which took 10 seconds.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      7be7834e