1. 09 Jul, 2024 1 commit
    • Hide user data from log (#13400) · 69b480cb
      debing.sun authored
      
      
      This PR is based on the commits from PR #11747.
      
      In the event of an assertion failure, hide command arguments from the
      operator.
      
      In some cases, private client information can be unintentionally exposed
      when a Redis instance crashes due to an assertion failure.
      This commit prevents unintentional client info exposure.
      Operators can still access the hidden data, but they must actively
      request it.
      The client info commands remain unchanged.
      
      ### Config
      Add a new config `hide-user-data-from-log` to turn this feature on and
      off, default off.
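      For example, assuming the standard boolean config syntax, this could presumably be
      enabled either in redis.conf (`hide-user-data-from-log yes`) or at runtime with
      `CONFIG SET hide-user-data-from-log yes`.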
      
      ---------
      Co-authored-by: naglera <anagler123@gmail.com>
      Co-authored-by: naglera <58042354+naglera@users.noreply.github.com>
  2. 09 Jan, 2024 1 commit
  3. 23 Nov, 2023 1 commit
    • Fix async safety in signal handlers (#12658) · 2e854bcc
      meiravgri authored
      See the discussion that followed after
      https://github.com/redis/redis/pull/12453 was merged.
      ----
      This PR replaces calls that are not considered async-signal-safe
      (AS-safe) with safe alternatives.
      
      #### **1. serverLog() and serverLogFromHandler()**
      `serverLog` uses unsafe calls. It was decided that we will **avoid**
      `serverLog` calls by the signal handlers when:
      * The signal is not fatal, such as SIGALRM. In these cases, we prefer
      using `serverLogFromHandler` which is the safe version of `serverLog`.
      Note they have different prompts:
      `serverLog`: `62220:M 26 Oct 2023 14:39:04.526 # <msg>`
      `serverLogFromHandler`: `62220:signal-handler (1698331136) <msg>`
      * The code was added recently. Calls to `serverLog` by the signal
      handler have been there for as long as Redis has existed and haven't
      caused problems so far. To avoid regression, from now on we should use
      `serverLogFromHandler`.
      
      #### **2. `snprintf` `fgets` and `strtoul`(base = 16) -------->
      `_safe_snprintf`, `fgets_async_signal_safe`, `string_to_hex`**
      The safe version of `snprintf` was taken from
      [here](https://github.com/twitter/twemcache/blob/8cfc4ca5e76ed936bd3786c8cc43ed47e7778c08/src/mc_util.c#L754)
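      A minimal sketch of the kind of allocation-free, async-signal-safe helper this refers
      to (illustrative only; the actual helper names and signatures in Redis may differ):
      ```
      #include <stddef.h>

      /* Parse an unsigned hex string (e.g. the SigBlk field of
       * /proc/<pid>/task/<tid>/status) without strtoul(). No locks, no allocation
       * and no errno, so it is safe to call from a signal handler. */
      static unsigned long hex_string_to_ulong(const char *s, size_t len) {
          unsigned long val = 0;
          for (size_t i = 0; i < len; i++) {
              char c = s[i];
              if (c >= '0' && c <= '9') val = (val << 4) | (unsigned long)(c - '0');
              else if (c >= 'a' && c <= 'f') val = (val << 4) | (unsigned long)(c - 'a' + 10);
              else if (c >= 'A' && c <= 'F') val = (val << 4) | (unsigned long)(c - 'A' + 10);
              else break; /* stop at the first non-hex character */
          }
          return val;
      }
      ```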
      
      #### **3. fopen(), fgets(), fclose() --------> open(), read(), close()**
      
      #### **4. opendir(), readdir(), closedir() --------> open(),
      syscall(SYS_getdents64), close()**
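      For the directory walk, a rough Linux-only sketch of the raw-syscall approach (the
      struct layout is the standard `getdents64` record, nothing Redis-specific):
      ```
      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      /* Matches the kernel's getdents64 record layout. */
      struct linux_dirent64 {
          uint64_t       d_ino;
          int64_t        d_off;
          unsigned short d_reclen;
          unsigned char  d_type;
          char           d_name[];
      };

      int main(void) {
          char buf[4096];
          int fd = open("/proc/self/task", O_RDONLY | O_DIRECTORY);
          if (fd == -1) return 1;
          long nread;
          /* open()/read()/close() and raw syscalls are async-signal-safe, unlike
           * opendir()/readdir(), which may allocate internally. */
          while ((nread = syscall(SYS_getdents64, fd, buf, sizeof(buf))) > 0) {
              for (long off = 0; off < nread;) {
                  struct linux_dirent64 *d = (struct linux_dirent64 *)(buf + off);
                  printf("entry: %s\n", d->d_name); /* a real handler would avoid stdio too */
                  off += d->d_reclen;
              }
          }
          close(fd);
          return 0;
      }
      ```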
      
      #### **5. Threads_mngr sync mechanisms**
      * waiting for the thread to generate stack trace: semaphore -------->
      busy-wait
      * `globals_rw_lock` was removed: since we are no longer using malloc or the
      semaphore, we don't need to protect `ThreadsManager_cleanups`.
      
      #### **6. Stacktraces buffer**
      The initial problem was that we were not able to safely call malloc
      within the signal handler.
      To solve that we created a buffer on the stack of `writeStacktraces` and
      saved it in a global pointer, assuming that under normal circumstances,
      the function `writeStacktraces` would complete before any thread
      attempted to write to it. However, **if threads lag behind, they might
      access this global pointer after it no longer belongs to the
      `writeStacktraces` stack, potentially corrupting memory.**
      To address this, various solutions were discussed
      [here](https://github.com/redis/redis/pull/12658#discussion_r1390442896)
      Eventually, we decided to **create a pipe** at server startup that will
      remain valid as long as the process is alive.
      We chose this solution due to its minimal memory usage, and since
      `write()` and `read()` are atomic operations. It ensures that stack
      traces from different threads won't mix.
      
      **The stacktrace collection process is now as follows** (see the sketch after the list):
      * Cleaning the pipe to eliminate writes of late threads from previous
      runs.
      * Each thread writes its stacktrace to the pipe
      * Waiting for all the threads to mark completion or until a timeout (2
      sec) is reached
      * Reading from the pipe to print the stacktraces.
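      A condensed sketch of this pipe-based scheme (illustrative only; the real code uses
      Redis' own atomics, signal delivery and formatting helpers). It relies on single
      write() calls of at most PIPE_BUF bytes being atomic, so traces from different
      threads cannot interleave:
      ```
      #include <fcntl.h>
      #include <limits.h>
      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>

      static int stacktrace_pipe[2];     /* created once at startup, lives for the whole process */
      static atomic_size_t threads_done;

      static void write_one_stacktrace(const char *who) {
          char buf[PIPE_BUF];            /* <= PIPE_BUF, so the single write below is atomic */
          /* The real code formats with its async-signal-safe snprintf replacement. */
          int len = snprintf(buf, sizeof(buf), "=== %s ===\n...frames...\n", who);
          if (len > 0) write(stacktrace_pipe[1], buf, (size_t)len);
          atomic_fetch_add(&threads_done, 1);
      }

      static void *thread_main(void *arg) { write_one_stacktrace(arg); return NULL; }

      int main(void) {
          pipe(stacktrace_pipe);
          /* Step 1 (cleaning leftovers from late writers of a previous run) is omitted here. */
          pthread_t t1, t2;
          pthread_create(&t1, NULL, thread_main, "bio_close_file");
          pthread_create(&t2, NULL, thread_main, "bio_lazy_free");

          /* Busy-wait until both threads reported, or ~2 seconds passed: no semaphore, no malloc. */
          time_t start = time(NULL);
          while (atomic_load(&threads_done) < 2 && time(NULL) - start < 2) ;

          /* Drain whatever was written; non-blocking, so a timed-out thread can't hang us. */
          fcntl(stacktrace_pipe[0], F_SETFL, O_NONBLOCK);
          char out[PIPE_BUF];
          ssize_t n;
          while ((n = read(stacktrace_pipe[0], out, sizeof(out))) > 0)
              fwrite(out, 1, (size_t)n, stdout);

          pthread_join(t1, NULL); pthread_join(t2, NULL);
          return 0;
      }
      ```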
      
      #### **7. Changes that were considered and eventually were dropped**
      * replace watchdog timer with a POSIX timer:
      according to the [setitimer man page](https://linux.die.net/man/2/setitimer)
      
      > POSIX.1-2008 marks getitimer() and setitimer() obsolete, recommending
      the use of the POSIX timers API
      ([timer_gettime](https://linux.die.net/man/2/timer_gettime)(2),
      [timer_settime](https://linux.die.net/man/2/timer_settime)(2), etc.)
      instead.
      
      However, although it is supposed to conform to the POSIX standard, the
      POSIX timers API is not supported on Mac.
      You can take a look at a Linux-only implementation
      [here](https://github.com/redis/redis/commit/c7562ee13546e504977372fdf40d33c3f86775a5).
      To avoid messing up the code, and given the uncertainty regarding
      compatibility, it was decided to drop it for now.
      
      * avoid using sds (which uses malloc) in logConfigDebugInfo
      It was considered to print the config info directly instead of using sds;
      however, apparently `logConfigDebugInfo` does more than just print the sds, so
      it was decided this fix is out of scope for this issue.
      
      #### **8. fix Signal mask check**
      The check `signum & sig_mask`, intended to indicate whether the signal is
      blocked by the thread, was incorrect: the bit position in the signal mask
      corresponds to the signal number. We fixed this by changing the condition
      to `sig_mask & (1L << (sig_num - 1))`.
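      In code, the corrected check looks roughly like this (the mask being the per-thread
      blocked-signal bitmask, e.g. parsed from the SigBlk line of
      /proc/<pid>/task/<tid>/status):
      ```
      /* Bit (sig_num - 1) of the mask corresponds to signal number sig_num,
       * e.g. SIGUSR2 (12 on Linux) is bit 11, not bit 12. */
      static int signal_is_blocked(unsigned long sig_mask, int sig_num) {
          return (sig_mask & (1UL << (sig_num - 1))) != 0;
      }
      ```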
      
      #### **9. Unrelated changes**
      Both `fork.tcl` and `util.tcl` implemented a function called
      `count_log_message`, each expecting different parameters. This caused
      confusion when trying to run daily tests with additional test parameters
      to run a specific test.
      The `count_log_message` in `fork.tcl` was removed and its calls were
      replaced with calls to the `count_log_message` located in `util.tcl`.
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
  4. 02 Oct, 2023 1 commit
    • fix crash in crash-report and other improvements (#12623) · 4ba9e18e
      meiravgri authored
      
      
      ## Crash fix
      ### Current behavior
      We might crash if we fail to collect some of the threads' output, for example if a thread exceeds the timeout.
      
      The threads mngr API guarantees that the output array length will be `tids_len`, however, some
      indices can be NULL, in case it fails to collect some of the threads' outputs.
      
      When we use the threads mngr to collect the threads' stacktraces, we rely on this and skip NULL
      entries. However, since the output array was allocated with malloc, uncollected entries contained
      garbage instead of NULL, so we got a segmentation fault when trying to read them (in debug.c:writeStacktraces()).
      
      ### fix
      Allocate the global output array with zcalloc.
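      A minimal sketch of why the zeroed allocation matters (plain calloc() is used here to
      keep it self-contained; Redis uses zcalloc() from zmalloc.h):
      ```
      #include <stdio.h>
      #include <stdlib.h>

      int main(void) {
          size_t tids_len = 4;
          /* Zero-initialized: slots no thread ever filled stay NULL. */
          void **output = calloc(tids_len, sizeof(void *));
          output[0] = "stacktrace of thread 0"; /* only one thread responded in time */
          for (size_t i = 0; i < tids_len; i++) {
              if (output[i] == NULL) continue; /* safely skipped, not garbage */
              printf("%s\n", (char *)output[i]);
          }
          free(output);
          return 0;
      }
      ```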
      
      ### To reproduce the bug, you'll have to change the code:
      **in threadsmngr:ThreadsManager_runOnThreads():**
      make sure the g_output_array allocation is initialized with garbage and not 0s 
      (add `memset(g_output_array, 2, sizeof(void*) * tids_len);` below the allocation).
      
      Force one of the threads to write to the array:
      add a global var: `static redisAtomic size_t return_now = 0;` 
      add to `invoke_callback()` before writing to the output array:
      ```
          size_t i_return;
          atomicGetIncr(return_now, i_return, 1);
          if(i_return == 1) return;
      ```
      compile, start the server with `--enable-debug-command local` and run `redis-cli debug assert`
      The assertion triggers the stacktrace collection.
      Expect to get 2 prints of the stack trace - since we get the segmentation fault after we return from
      the threads mngr, it can be safely triggered again.
      
      ## Added global variables r/w lock in ThreadsManager
      To avoid a situation where the main thread runs `ThreadsManager_cleanups` while threads are still
      invoking the signal handler, we use a r/w lock.
      For cleanups, we will acquire the write lock.
      The threads will acquire the read lock to enable them to write simultaneously.
      If we fail to acquire the read lock, it means cleanups are in progress and we return immediately.
      After acquiring the lock we can safely check that the global output array wasn't nullified and proceed
      to write to it.
      This way we ensure the threads are not modifying the global variables / trying to write to the output
      array after they were zeroed/nullified/destroyed (the semaphore).
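      A sketch of the described locking scheme (illustrative names only; note that this
      mechanism was later removed in the async-safety PR #12658 above):
      ```
      #include <pthread.h>
      #include <stddef.h>

      static pthread_rwlock_t globals_rw_lock = PTHREAD_RWLOCK_INITIALIZER;
      static void **g_output_array;             /* shared with the signaled threads */

      /* Writer side: the main thread tears the globals down. */
      static void threads_mngr_cleanups(void) {
          pthread_rwlock_wrlock(&globals_rw_lock);
          g_output_array = NULL;                /* nullify/destroy the shared state */
          pthread_rwlock_unlock(&globals_rw_lock);
      }

      /* Reader side: each signaled thread fills its own slot. Readers do not block each
       * other, and if cleanups already started we simply give up and return. */
      static void write_my_slot(size_t my_index, void *my_output) {
          if (pthread_rwlock_tryrdlock(&globals_rw_lock) != 0) return; /* cleanup in progress */
          if (g_output_array != NULL) g_output_array[my_index] = my_output;
          pthread_rwlock_unlock(&globals_rw_lock);
      }
      ```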
      
      ## Other minor logging changes
      1. removed logging if the semaphore times out, because the threads can still write to the output array
        after this check. Instead, we print the total number of printed stacktraces compared to the expected
        number (tids_len).
      2. use noinline attribute to make sure the uplevel number of ignored stack trace entries stays correct.
      3. improve testing
      Co-authored-by: Oran Agra <oran@redislabs.com>
  5. 24 Sep, 2023 1 commit
    • Print stack trace from all threads in crash report (#12453) · cc2be639
      meiravgri authored
      In this PR we are adding the functionality to collect all the process's threads' backtraces.
      
      ## Changes made in this PR
      
      ### **introduce threads mngr API**
      The **threads mngr API** has 2 abilities:
      * `ThreadsManager_init()` - registers to SIGUSR2; called on server start-up.
      * `ThreadsManager_runOnThreads()` - receives a list of pid_t and a callback, tells every
        thread in the list to invoke the callback, and returns the output collected by each
        invocation (a hedged usage sketch follows this list).
      **Elaborating the atomicvar API**
      * `atomicIncrGet(var,newvalue_var,count)` -- increment the atomic counter and get its new value
      * `atomicFlagGetSet` -- get the atomic counter value and set it to 1
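      A hedged usage sketch; the exact prototypes are an assumption based on the
      description above (an array of tids plus a callback, returning the per-thread
      outputs):
      ```
      #include <stddef.h>
      #include <sys/types.h>

      /* Assumed shapes, based on the description above. */
      typedef void *(*run_on_thread_cb)(void);
      void ThreadsManager_init(void);   /* registers the SIGUSR2 handler at startup */
      void **ThreadsManager_runOnThreads(pid_t *tids, size_t tids_len, run_on_thread_cb cb);

      static void *collect_stacktrace_data(void) {
          /* capture this thread's backtrace addresses, name and tid */
          return NULL; /* placeholder */
      }

      void collect_all(pid_t *tids, size_t tids_len) {
          void **outputs = ThreadsManager_runOnThreads(tids, tids_len, collect_stacktrace_data);
          (void)outputs; /* one entry per tid; an entry may be missing if a thread didn't respond */
      }
      ```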
      
      ### **Always set SIGALRM handler**
      The SIGALRM handler prints the process's stacktrace to the log file. Up until now, it was set only if
      `server.watchdog_period` > 0. This can also be useful when debugging is needed. However, in situations
      where the server can't serve requests (a deadlock, for example), we weren't able to change the signal handler.
      To make it available at run time, we set the SIGALRM handler on server startup. The signal handler name was
      changed to the more general `sigalrmSignalHandler`.
      
      ### **Print all the process' threads' stacktraces**
      
      `logStackTrace()` now calls `writeStacktraces()`, instead of logging the current thread stacktrace.
      `writeStacktraces()`:
      * On Linux systems we use the threads manager API to collect the backtraces of all the process' threads.
        To get the `tids` list (threads ids) we read the `/proc/<redis-server-pid>/tasks` file which includes a list of directories.
        Each directory name corresponds to one tid (including the main thread). For each thread, we also need to check if it
        can get the signal from the threads manager (meaning it is not blocking/ignoring that signal). We send the threads
        manager this tids list and `collect_stacktrace_data()` callback, which collects the thread's backtrace addresses,
        its name, and tid.
      * On other systems, the behavior remained as it was (writing only the current thread stacktrace to the log file).
      
      ## compatibility notes
      1. **The threads mngr API is only supported in linux.** 
      2. glibc earlier than 2.3 We use `syscall(SYS_gettid)` and `syscall(SYS_tgkill...)` because their dedicated
        alternatives (`gettid()` and `tgkill`) were added in glibc 2.3.
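      For reference, the raw-syscall forms look roughly like this (a small sketch; the
      threads manager signal is SIGUSR2, as noted above):
      ```
      #define _GNU_SOURCE
      #include <signal.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      static void on_usr2(int sig) { (void)sig; /* placeholder handler */ }

      int main(void) {
          signal(SIGUSR2, on_usr2);
          pid_t pid = getpid();
          pid_t tid = (pid_t)syscall(SYS_gettid);  /* kernel thread id of the caller */
          /* Deliver a signal to one specific thread of one specific process; this form
           * also works on glibc builds that lack the gettid()/tgkill() wrappers. */
          syscall(SYS_tgkill, pid, tid, SIGUSR2);
          return 0;
      }
      ```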
      
      ## Output example
      
      Each thread backtrace will have the following format:
      `<tid> <thread_name> [additional_info]`
      * **tid**: as read from the `/proc/<redis-server-pid>/tasks` file
      * **thread_name**: the thread name as it is registered in the OS.
      * **additional_info**: Sometimes we want to add specific information about one of the threads. Currently,
        it is only used to mark the thread that handles the backtrace collection by adding "*".
        In case of a crash, this also indicates which thread caused the crash. The handling thread won't
        necessarily appear first.
      
      ```
      ------ STACK TRACE ------
      EIP:
      /lib/aarch64-linux-gnu/libc.so.6(epoll_pwait+0x9c)[0xffffb9295ebc]
      
      67089 redis-server *
      linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffffb9437790]
      /lib/aarch64-linux-gnu/libc.so.6(epoll_pwait+0x9c)[0xffffb9295ebc]
      redis-server *:6379(+0x75e0c)[0xaaaac2fe5e0c]
      redis-server *:6379(aeProcessEvents+0x18c)[0xaaaac2fe6c00]
      redis-server *:6379(aeMain+0x24)[0xaaaac2fe7038]
      redis-server *:6379(main+0xe0c)[0xaaaac3001afc]
      /lib/aarch64-linux-gnu/libc.so.6(+0x273fc)[0xffffb91d73fc]
      /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0x98)[0xffffb91d74cc]
      redis-server *:6379(_start+0x30)[0xaaaac2fe0370]
      
      67093 bio_lazy_free
      /lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
      /lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
      redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
      /lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
      /lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]
      
      67091 bio_close_file
      /lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
      /lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
      redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
      /lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
      /lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]
      
      67092 bio_aof
      /lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
      /lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
      redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
      /lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
      /lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]
      67089:signal-handler (1693824528) --------
      ```
  6. 11 Dec, 2022 1 commit
  7. 09 Jun, 2022 1 commit
    • Fixing test to consider statically linked binaries (#10835) · 032619b8
      Christian Krieg authored
      
      
      The test calls `ldd` on `redis-server` in order to find out whether the binary
      was linked against `libmusl`. However, `ldd` returns a value different from `0`
      when the binaries are statically linked against libc-musl, because `redis-server` is
      not a dynamic executable (as shown by the exception thrown by the failing test),
      and `make test` terminates with an error:
      
         $ ldd src/redis-server
             not a dynamic executable
         $ echo $?
         1
      
      This commit fixes the test by ignoring such failures.
      Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
  8. 07 Jun, 2022 1 commit
    • Update musl libc detection pattern (#10826) · f22bfe86
      Petr Vaněk authored
      This change fixes failing `integration/logging.tcl` test in Gentoo with
      musl libc, where `ldd` returns
      ```
      libc.so => /lib/ld-musl-x86_64.so.1 (0x7f9d5f171000)
      ```
      unlike Alpine's
      ```
      libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f82cfa16000)
      ```
      The solution is to extend the matching pattern introduced in #8532.
  9. 07 Dec, 2021 1 commit
    • Fix timing issue in logging.tcl with FreeBSD (#9910) · b947049f
      Binbin authored
      A test failure was reported in Daily CI.
      `Crash report generated on SIGABRT` with FreeBSD.
      
      ```
      *** [err]: Crash report generated on SIGABRT in tests/integration/logging.tcl
      Expected [string match *crashed by signal* ### Starting...(logs) in tests/integration/logging.tcl]
      ```
      
      It looks like `tail -1000` was executed too early, before it
      printed out all the crash logs. We can give it a few more
      chances by using `wait_for_log_messages`.
      
      Other changes:
      1. In `Server is able to generate a stack trace on selected systems`,
      use `wait_for_log_messages` to reduce the lines of code. And if it
      fails, there are more detailed logs that can be printed.
      
      2. In `Crash report generated on DEBUG SEGFAULT`, we also use
      `wait_for_log_messages` to avoid possible timing issues.
  10. 11 Nov, 2021 1 commit
    • Add sanitizer support and clean up sanitizer findings (#9601) · b91d8b28
      Ozan Tezcan authored
      - Added sanitizer support. `address`, `undefined` and `thread` sanitizers are available.  
      - To build Redis with desired sanitizer : `make SANITIZER=undefined`
      - There were some sanitizer findings, cleaned up codebase
      - Added tests with address and undefined behavior sanitizers to daily CI.
      - Added tests with address sanitizer to the per-PR CI (smoke out mem leaks sooner).
      
      Basically, there are three types of issues : 
      
      **1- Unaligned load/store** : Most probably, this issue may cause a crash on a platform that
      does not support unaligned access. Redis does unaligned access only on supported platforms.
      
      **2- Signed integer overflow.** Although signed overflow can be problematic from time to time
      and change how the compiler generates code, the current findings are mostly about signed shifts or simple
      addition overflows. For most platforms Redis can be compiled for, this wouldn't cause any issue
      as far as I can tell (checked generated code on godbolt.org).

      **3- Minor leak** (redis-cli), **use-after-free** (just before calling exit());
      
      UB means nothing is guaranteed and it is risky to reason about program behavior, but I don't think any
      of the fixes here are worth backporting. As sanitizers are now part of the CI, preventing new issues
      will be the real benefit.
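      For illustration, the two most common classes of findings look roughly like this
      (generic examples, not the actual offending Redis code):
      ```
      #include <stdint.h>

      uint32_t examples(const char *p, int shift_amount) {
          /* Unaligned load: undefined behavior per the C standard; may fault on strict
           * platforms. The portable form is memcpy() into an aligned local variable. */
          uint32_t unaligned = *(const uint32_t *)(p + 1);

          /* Signed shift overflow: shifting into (or past) the sign bit of a 32-bit int
           * is undefined behavior. */
          int overflow = 1 << shift_amount; /* UB when shift_amount == 31 */

          return unaligned + (uint32_t)overflow;
      }
      ```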
  11. 09 Jun, 2021 1 commit
    • Improve test suite to handle external servers better. (#9033) · 8a86bca5
      Yossi Gottlieb authored
      This commit revives and improves the ability to run the test suite against
      external servers, instead of launching and managing `redis-server` processes as
      part of the test fixture.
      
      This capability existed in the past, using the `--host` and `--port` options.
      However, it was quite limited and mostly useful when running a specific test.
      Attempting to run larger chunks of the test suite ran into many issues:
      
      * Many tests depend on being able to start and control `redis-server` themselves,
      and there's no clear distinction between external-server-compatible tests and
      other tests.
      * Cluster mode is not supported (resulting in `CROSSSLOT` errors).
      
      This PR cleans up many things and makes it possible to run the entire test suite
      against an external server. It also provides more fine-grained controls to
      handle cases where the external server supports a subset of the Redis commands,
      a limited number of databases, cluster mode, etc.
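      As an illustrative invocation (flags beyond `--host`/`--port` may differ), running the
      suite against an already-running instance would look something like
      `./runtest --host 127.0.0.1 --port 6379`.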
      
      The tests directory now contains a `README.md` file that describes how this
      works.
      
      This commit also includes additional cleanups and fixes:
      
      * Tests can now be tagged.
      * Tag-based selection is now unified across `start_server`, `tags` and `test`.
      * More information is provided about skipped or ignored tests.
      * Repeated patterns in tests have been extracted to common procedures, both at a
        global level and on a per-test file basis.
      * Cleaned up some cases where test setup was based on a previous test executing
        (a major anti-pattern that repeats itself in many places).
      * Cleaned up some cases where test teardown was not part of a test (in the
        future we should have dedicated teardown code that executes even when tests
        fail).
      * Fixed some tests that were flaky running on external servers.
  12. 23 Feb, 2021 1 commit
    • Fix failed tests on Linux Alpine and add a CI job. (#8532) · 95ea7454
      Yossi Gottlieb authored
      * Remove linux/version.h dependency.
      
      This introduces unnecessary dependencies and is generally not a good idea,
      as the platform we build on may be different from the platform we run
      on.
      
      To determine if `sync_file_range` exists, we can simply rely on header
      file hints (see the sketch after this list of changes).
      
      * Fix setproctitle() on libmusl.
      
      The previous ifdef checks were a bit too strict for no apparent
      reason.
      
      * Fix tests failure on Linux with no backtrace.
      
      * Add alpine daily CI job.
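      A sketch of the header-hint approach for the first bullet (the exact macro keyed on
      is an assumption here):
      ```
      #define _GNU_SOURCE
      #include <fcntl.h>

      /* Rely on the libc headers declaring the feature instead of consulting
       * <linux/version.h>, whose version reflects the build host rather than the
       * kernel we will actually run on. */
      #if defined(__linux__) && defined(SYNC_FILE_RANGE_WAIT_BEFORE)
      #define HAVE_SYNC_FILE_RANGE 1
      #endif

      #ifdef HAVE_SYNC_FILE_RANGE
      static int flush_write_range(int fd, off_t offset, off_t nbytes) {
          return sync_file_range(fd, offset, nbytes, SYNC_FILE_RANGE_WRITE);
      }
      #endif
      ```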
  13. 04 Nov, 2020 1 commit
  14. 03 Nov, 2020 1 commit
    • Added crash report on SIGABRT (#8004) · f210e197
      Meir Shpilraien (Spielrein) authored
      The reason we want to get a full crash report on SIGABRT
      is that jemalloc, when detecting a corruption, calls abort().
      This would cause Redis to exit silently, without any report
      and without any way to analyze what happened.
  15. 10 Feb, 2015 1 commit