- 15 Sep, 2024 1 commit
-
-
Filipe Oliveira (Personal) authored
Fixes #8825

We're using the fast_float library[1] in our (compiled-in) floating-point fast_float_strtod implementation for faster and more portable parsing of 64-bit decimal strings. The single file fast_float.h is an amalgamation of the entire library, which can be (re)generated with the amalgamate.py script (from the fast_float repository) via the command:
```
python3 ./script/amalgamate.py --license=MIT > $REDIS_SRC/deps/fast_float/fast_float.h
```
[1]: https://github.com/fastfloat/fast_float

The commit used from the fast_float library is the one from https://github.com/fastfloat/fast_float/releases/tag/v3.10.1

---------

Co-authored-by: fcostaoliveira <filipe@redis.com>
-
- 03 Sep, 2024 1 commit
-
-
Ozan Tezcan authored
On a full sync, the replica starts by discarding its existing db. If the existing db is huge and the flush happens synchronously, the replica may become unresponsive. This change yields back to the event loop while flushing the db on a replica; the replica will reply -LOADING in this case. Note that while the replica is loading the new rdb, it may hit an error and start flushing the partial db. This step may take a long time as well, and similarly the replica will reply -LOADING in this case.

To call processEventsWhileBlocked() and reply -LOADING, we need to:
- Set the read handler to NULL with connSetReadHandler(), so no further data from the master is processed
- Set the server.loading flag
- Call blockingOperationStarts()

rdbLoad() already does these steps and calls processEventsWhileBlocked() while loading the rdb. Added a new call, rdbLoadWithEmptyFunc(), which accepts a callback to flush the db before loading the rdb or when an error happens while loading. For diskless replication, we do something similar and call emptyData() after setting the required flags.

Additional changes:
- Allow `appendonly` config change during loading. The config can be changed while loading data on startup or on replication when the replica is loading the RDB. We allow the config change command to update `server.aof_enabled` and then lazily apply the change after the loading operation is completed.
- Added a test for the `replica-lazy-flush` config
-
- 14 Jul, 2024 1 commit
-
-
guybe7 authored
128 is not enough chars when we're talking about commands like RESTORE. Of course, it's impossible to find the perfect number, but 1024 is better than 128, and it's not obscenely large.
-
- 09 Jul, 2024 1 commit
-
-
debing.sun authored
This PR is based on the commits from PR #11747.

In the event of an assertion failure, hide command arguments from the operator. In some cases, private client information can be unintentionally exposed when a redis instance crashes due to an assertion failure. This commit prevents unintentional client info exposure. Operators can still access the hidden data, but they must actively request it. The client info commands themselves remain unchanged.

### Config
Add a new config `hide-user-data-from-log` to turn this feature on and off, default off.

---------

Co-authored-by: naglera <anagler123@gmail.com>
Co-authored-by: naglera <58042354+naglera@users.noreply.github.com>
-
- 04 Jul, 2024 1 commit
-
-
debing.sun authored
1. Add help for the `DEBUG SCRIPT` command.
2. Remove the duplicate `getLuaScripts()`, which is the same as `evalScriptsDict()`.
-
- 21 Jun, 2024 1 commit
-
-
AcherTT authored
Add two new debug commands for outputting scripts:
1. `DEBUG SCRIPT LIST` - output all scripts.
2. `DEBUG SCRIPT <sha1>` - output a specific script.

Close #3846
-
- 29 May, 2024 1 commit
-
-
Moti Cohen authored
* For the replica's sake, rewrite the commands `H*EXPIRE*`, `HSETF`, `HGETF` to have absolute unix time in msec.
* On active expiration of a field, propagate HDEL to the replica (`propagateHashFieldDeletion()`).
* On lazy expiration, propagate HDEL to the replica (`hashTypeGetValue()` now calls `hashTypeDelete()`; it also takes care to call `propagateHashFieldDeletion()`).
* Fix the `H*EXPIRE*` commands such that if they get the `LT` flag and the field doesn't have any expiration, it is considered a valid condition.

Note, replicas don't do any active expiration, and should avoid lazy expiration. In `hashTypeGetValue()` they don't check expiration (as long as the master didn't request to delete the field, it is valid).

TODO:
* Attach `dbid` to HASH metadata. See [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)

---------

Co-authored-by: debing.sun <debing.sun@redis.com>
-
- 08 May, 2024 1 commit
-
-
Ozan Tezcan authored
**Changes:**
- Adds listpack support to hash field expiration
- Implements hgetf/hsetf commands

**Listpack support for hash field expiration**

We keep field name and value pairs in a listpack for the hash type. With this PR, if one of the hash field expiration commands is called on the key for the first time, it converts the listpack layout to triplets to hold field name, value and ttl per field. If a field does not have a TTL, we store zero as the ttl value. Zero is encoded as two bytes in the listpack, so once we convert the listpack to hold triplets, each field without a TTL consumes those extra 2 bytes. Fields are ordered by ttl in the listpack to find the field with the minimum expiry time efficiently.

**New command implementations as part of this PR:**

- HGETF command: for each specified field, get its value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
```
HGETF key [NX | XX | GT | LT] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST] <FIELDS count field [field ...]>
```
- HSETF command: for each specified field-value pair, set field to value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
```
HSETF key [DC] [DCF | DOF] [NX | XX | GT | LT] [GETNEW | GETOLD] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] <FVS count field value [field value …]>
```

Todo:
- Performance improvement
- rdb load/save
- aof
- defrag
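As a rough, comment-only illustration of the layout change described above (entry boundaries simplified; the real listpack encoding differs):

```c
/*
 * Regular hash listpack (no per-field TTLs):
 *   [ field1 ][ value1 ][ field2 ][ value2 ] ...
 *
 * After the first hash-field-expiration command on the key, entries become
 * triplets kept ordered by ttl, so the minimum expiry is cheap to find
 * (ttl == 0 encodes "no TTL" and costs 2 extra bytes for that field):
 *   [ field ][ value ][ ttl ][ field ][ value ][ ttl ] ...
 */
```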
-
- 18 Apr, 2024 1 commit
-
-
Moti Cohen authored
- Add ebuckets & mstr data structures
- Integrate active & lazy expiration
- Add most of the commands
- Add support for dict (listpack is missing)

TODOs: RDB, notification, listpack, HSET, HGETF, defrag, aof
-
- 20 Mar, 2024 1 commit
-
-
Pieter Cailliau authored
[Read more about the license change here](https://redis.com/blog/redis-adopts-dual-source-available-licensing/) Live long and prosper
🖖
-
- 22 Feb, 2024 1 commit
-
-
debing.sun authored
Following #12568.

In issue #9357, when inserting an element larger than 1GB, we currently store it in a plain node instead of a listpack. Presently, when we insert an element that exceeds the maximum size of a packed node, it cannot be accommodated in any other nodes, thus ending up isolated like a large element, i.e. a node with only one element, but listpack encoded rather than a plain buffer.

This PR lowers the threshold for considering an element as 'large' from 1GB to the maximum size of a node. While this change doesn't completely resolve the bug mentioned in the previous PR, it does mitigate its potential impact. As a result of this change, we can now only use LSET to replace an element with another element that falls below the maximum size threshold.

In the worst-case scenario, with a fill of -5, the largest packed node we can create is 2GB (32k * 64k):
* 32k: The smallest element in a listpack is 2 bytes, which allows us to store up to 32k elements.
* 64k: This is the maximum size for a single quicklist node.

## Others
To fully fix #9357, we need more work. As discussed in #12568, when we insert an element into a quicklistNode, it may be created in a new node, put into another node, or merged, and we can't correctly delete the node that was supposed to be deleted. I'm not sure it's worth it, since it involves a lot of modifications.
-
- 08 Feb, 2024 1 commit
-
-
Binbin authored
The test fails here and there:
```
*** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
scan didn't handle slot skipping logic.
```
There are two cases:
1. In the case of passing the test, we use a child process to avoid the dict resize, but it cannot completely prevent it, since in dictDelete we still have a chance to trigger the resize (hitting the force ratio). The reason our test passed before is that the expire dict is still in the rehashing process, so in dictDelete, dictShrinkIfNeeded cannot trigger the resize.
2. In the case of failing the test, the expire dict has finished rehashing, so the last dictDelete's dictShrinkIfNeeded triggers the dict resize since it hits the force ratio, and the skipping logic fails.

This PR adds a new DEBUG command to disable the dict resize.
-
- 05 Feb, 2024 1 commit
-
-
guybe7 authored
# Description
Gather most of the scattered `redisDb`-related code from the per-slot dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e. a class that represents an array of dictionaries.

# Motivation
The main motivation is code cleanliness; the idea of using an array of dictionaries is very well-suited to becoming a self-contained data structure. This allowed cleaning up some ugly code, among others: loops that run twice, on the main dict and the expires dict, and duplicate code for allocating and releasing this data structure.

# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848 where the `rehashing` list is global (handling rehashing `dict`s is under the responsibility of `kvstore`, and should not be managed by the server).
2. This PR also replaces the type of `server.pubsubshard_channels` from `dict**` to `kvstore` (original PR: https://github.com/redis/redis/pull/12804). After that was done, server.pubsub_channels was also chosen to be a `kvstore` (with only one `dict`, which seems odd) just to make the code cleaner by making it the same type as `server.pubsubshard_channels`, see `pubsubtype.serverPubSubChannels`.
3. The keys and expires kvstores are currently configured to allocate the individual dicts only when the first key is added (unlike before, in which they were allocated in advance), but they won't release them when the last key is deleted.

Worth mentioning that due to the recent change, the reply of DEBUG HTSTATS changed in case no keys were ever added to the db.

before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
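A minimal conceptual sketch of what the description above calls "an array of dictionaries"; the field names are assumptions for illustration, not the actual kvstore.c layout (which keeps more bookkeeping, such as the rehashing list and per-dict metadata):

```c
typedef struct dict dict;   /* opaque here; defined in dict.h */

/* An array of dictionaries behind one interface: one dict per slot in
 * cluster mode, a single dict otherwise. Dicts are allocated lazily on
 * first insertion, and a cached key count keeps DBSIZE O(1). */
typedef struct kvstore {
    dict **dicts;
    int num_dicts;
    unsigned long long key_count;
} kvstore;
```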
-
- 14 Jan, 2024 1 commit
-
-
Binbin authored
The open function returns an fd on success or -1 on failure; here we should check fd != -1, otherwise -1 will be judged as success. This closes #12938.
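The bug pattern described above, in a standalone illustration (not the actual Redis call site):

```c
#include <fcntl.h>

int openConfigFile(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd == -1) {   /* correct: open() signals failure with -1, not 0 */
        return -1;
    }
    /* a check like `if (!fd)` would treat the -1 error value as success */
    return fd;
}
```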
-
- 03 Jan, 2024 1 commit
-
-
Madelyn Olson authored
This change is trying to make two failure modes a bit easier to deep dive:
1. If a serverPanic or serverAssert occurs during the info (or module) printing, it will recursively panic, which is a lot of fun as it will just keep recursively printing. It will eventually stack overflow, but will generate a lot of text in the process.
2. When a segfault happens during the segfault handler, no information is communicated other than that it happened. This can be problematic because `info` may help diagnose the real issue, but without fixing the recursive crash it might be hard to get at that info.
-
- 23 Nov, 2023 1 commit
-
-
meiravgri authored
See discussion from after https://github.com/redis/redis/pull/12453 was merged.

----

This PR replaces signals that are not considered async-signal-safe (AS-safe) with safe calls.

#### **1. serverLog() and serverLogFromHandler()**
`serverLog` uses unsafe calls. It was decided that we will **avoid** `serverLog` calls by the signal handlers when:
* The signal is not fatal, such as SIGALRM. In these cases, we prefer using `serverLogFromHandler`, which is the safe version of `serverLog`. Note they have different prompts:
  `serverLog`: `62220:M 26 Oct 2023 14:39:04.526 # <msg>`
  `serverLogFromHandler`: `62220:signal-handler (1698331136) <msg>`
* The code was added recently. Calls to `serverLog` by the signal handler have been there ever since Redis exists and haven't caused problems so far. To avoid regression, from now on we should use `serverLogFromHandler`.

#### **2. `snprintf`, `fgets` and `strtoul` (base = 16) --------> `_safe_snprintf`, `fgets_async_signal_safe`, `string_to_hex`**
The safe version of `snprintf` was taken from [here](https://github.com/twitter/twemcache/blob/8cfc4ca5e76ed936bd3786c8cc43ed47e7778c08/src/mc_util.c#L754)

#### **3. fopen(), fgets(), fclose() --------> open(), read(), close()**

#### **4. opendir(), readdir(), closedir() --------> open(), syscall(SYS_getdents64), close()**

#### **5. Threads_mngr sync mechanisms**
* Waiting for the thread to generate a stack trace: semaphore --------> busy-wait
* `globals_rw_lock` was removed: as we are not using malloc and the semaphore anymore, we don't need to protect `ThreadsManager_cleanups`.

#### **6. Stacktraces buffer**
The initial problem was that we were not able to safely call malloc within the signal handler. To solve that we created a buffer on the stack of `writeStacktraces` and saved it in a global pointer, assuming that under normal circumstances, the function `writeStacktraces` would complete before any thread attempted to write to it. However, **if threads lag behind, they might access this global pointer after it no longer belongs to the `writeStacktraces` stack, potentially corrupting memory.** To address this, various solutions were discussed [here](https://github.com/redis/redis/pull/12658#discussion_r1390442896). Eventually, we decided to **create a pipe** at server startup that will remain valid as long as the process is alive. We chose this solution due to its minimal memory usage, and since `write()` and `read()` are atomic operations, it ensures that stack traces from different threads won't mix.

**The stacktraces collection process is now as follows:**
* Clean the pipe to eliminate writes of late threads from previous runs.
* Each thread writes its stacktrace to the pipe.
* Wait for all the threads to mark completion or until a timeout (2 sec) is reached.
* Read from the pipe to print the stacktraces.

#### **7. Changes that were considered and eventually dropped**
* Replace the watchdog timer with a POSIX timer: according to the [setitimer man page](https://linux.die.net/man/2/setitimer)
> POSIX.1-2008 marks getitimer() and setitimer() obsolete, recommending the use of the POSIX timers API ([timer_gettime](https://linux.die.net/man/2/timer_gettime)(2), [timer_settime](https://linux.die.net/man/2/timer_settime)(2), etc.) instead.

  However, although it is supposed to conform to the POSIX std, the POSIX timers API is not supported on Mac. You can take a look at the Linux implementation [here](https://github.com/redis/redis/commit/c7562ee13546e504977372fdf40d33c3f86775a5). To avoid messing up the code, and uncertainty regarding compatibility, it was decided to drop it for now.
* Avoid using sds (uses malloc) in logConfigDebugInfo: it was considered to print the config info instead of using sds, however apparently `logConfigDebugInfo` does more than just print the sds, so it was decided this fix is out of this issue's scope.

#### **8. Fix signal mask check**
The check `signum & sig_mask`, intended to indicate whether the signal is blocked by the thread, was incorrect. Actually, the bit position in the signal mask corresponds to the signal number. We fixed this by changing the condition to: `sig_mask & (1L << (sig_num - 1))` (see the sketch below).

#### **9. Unrelated changes**
Both `fork.tcl` and `util.tcl` implemented a function called `count_log_message` expecting different parameters. This caused confusion when trying to run daily tests with additional test parameters to run a specific test. The `count_log_message` in `fork.tcl` was removed and the calls were replaced with calls to `count_log_message` located in `util.tcl`.

---------

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
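The signal-mask fix from section 8, as a small standalone check (variable names are illustrative):

```c
/* Returns nonzero if sig_num is blocked according to a mask where bit
 * (sig_num - 1) corresponds to signal number sig_num.
 * The old, wrong check was simply `sig_num & sig_mask`. */
static int signalIsBlocked(long sig_mask, int sig_num) {
    return (sig_mask & (1L << (sig_num - 1))) != 0;
}
```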
-
- 22 Nov, 2023 2 commits
-
-
Josh Hershberg authored
Signed-off-by: Josh Hershberg <yehoshua@redis.com>
-
Josh Hershberg authored
Move clusterNode into cluster_legacy.h. In order to achieve this, some accessor methods were added, and how debugCommand handles cluster-related subcommands was refactored.
Signed-off-by: Josh Hershberg <yehoshua@redis.com>
-
- 21 Nov, 2023 1 commit
-
-
Binbin authored
When we are loading data, it is not safe to generate data through DEBUG POPULATE. POPULATE may generate keys that already exist in the data being loaded, causing a panic when loading data with duplicate keys.
-
- 14 Nov, 2023 1 commit
-
-
Binbin authored
When using the DB iterator, it will use dictInitSafeIterator to init an old-style safe dict iterator. When dbIteratorNext is used, it will jump to the next slot dict when we are done with a dict. During this process, we do not have any calls to dictResumeRehashing, which causes the dict's pauserehash to always be > 0. And in the end, it will be returned directly in dictRehashMilliseconds, which leaves us with a slot dict in a state where rehash cannot be completed.

In the "expire scan should skip dictionaries with lot's of empty buckets" test, adding a `keys *` can reproduce the problem stably. `keys *` will call dbIteratorNext to trigger a traversal of all slot dicts.

Added dbReleaseIterator and dbIteratorInitNextSafeIterator methods to call dictResetIterator. Issue was introduced in #11695.
-
- 08 Nov, 2023 1 commit
-
-
Binbin authored
This function now will only be called in printCrashReport, so this is just a cleanup.
-
- 16 Oct, 2023 1 commit
-
-
meiravgri authored
Using heap allocation during signal handlers is unsafe. This PR's purpose is to replace all the heap allocations done within the signal handlers raised upon server crash and assertions. These were added in #12453.

writeStacktraces(): allocates the stacktraces output array on the calling thread's stack and assigns the address to a global variable. It calls `ThreadsManager_runOnThreads()`, which invokes `collect_stacktrace_data()` on each thread: each thread writes to a different location in the above array, so threads can write concurrently without extra synchronization.

get_ready_to_signal_threads_tids(): instead of allocating the `tids` array, it receives it as a fixed-size array parameter, allocated on the stack of the calling function, and returns the number of valid threads. The array size is hard-coded to 50.

`ThreadsManager_runOnThreads()`: to avoid the output array allocation, the **callback signature** was changed. Now it should return void. This function's return type has also changed to int - it returns 1 if successful, and 0 otherwise.

Other unsafe calls will be handled in following PRs.
-
- 15 Oct, 2023 1 commit
-
-
Vitaly authored
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.

## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find a slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* Scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server (see the sketch after this entry). This has an interesting side effect: now you'll be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we are relying on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). This is also kept for O(1) expires computation as well.

## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from us not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.

## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* The Scan API will now require 64 bits to store the cursor, even on 32-bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.

---------

Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
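A minimal standalone sketch of the cursor/slot packing idea mentioned in the Scan API item above; the helper names and exact bit layout are illustrative assumptions, not the actual functions in the Redis source:

```c
#include <stdint.h>

#define SLOT_BITS 14  /* 2^14 = 16384 cluster slots */

/* Pack the slot id into the low bits of the cursor before returning it to
 * the client, and unpack it again on the next SCAN call. */
static uint64_t cursorPackSlot(uint64_t dict_cursor, unsigned slot) {
    return (dict_cursor << SLOT_BITS) | slot;
}

static unsigned cursorUnpackSlot(uint64_t cursor, uint64_t *dict_cursor) {
    *dict_cursor = cursor >> SLOT_BITS;
    return (unsigned)(cursor & ((1 << SLOT_BITS) - 1));
}
```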
-
- 08 Oct, 2023 1 commit
-
-
Jachin authored
Use the __MAC_OS_X_VERSION_MIN_REQUIRED macro to detect the macOS system version instead of using MAC_OS_X_VERSION_10_6.

Starting from MacOSX14.0.sdk, the default definitions of MAC_OS_X_VERSION_xxx have been removed from usr/include/AvailabilityMacros.h. It includes AvailabilityVersions.h, where the following condition must be met for MAC_OS_X_VERSION_xxx to be defined: `#if (!defined(_POSIX_C_SOURCE) && !defined(_XOPEN_SOURCE)) || defined(_DARWIN_C_SOURCE)`. However, in the project, _DARWIN_C_SOURCE is not defined, which leads to the loss of the definition for MAC_OS_X_VERSION_10_6.
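A minimal sketch of the kind of guard the commit describes; the header choice and surrounding structure are assumptions, not the actual Redis code (1060 is the version constant for macOS 10.6):

```c
#ifdef __APPLE__
#include <Availability.h>
/* __MAC_OS_X_VERSION_MIN_REQUIRED is provided by the SDK's availability
 * headers regardless of _POSIX_C_SOURCE/_XOPEN_SOURCE, unlike
 * MAC_OS_X_VERSION_10_6, which newer SDKs may leave undefined. */
#if __MAC_OS_X_VERSION_MIN_REQUIRED >= 1060
/* ... use APIs that require macOS 10.6 or newer ... */
#endif
#endif /* __APPLE__ */
```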
-
- 02 Oct, 2023 1 commit
-
-
meiravgri authored
## Crash fix
### Current behavior
We might crash if we fail to collect some of the threads' output, for example if it exceeds the timeout. The threads mngr API guarantees that the output array length will be `tids_len`; however, some indices can be NULL, in case it fails to collect some of the threads' outputs.

When we use the threads mngr to collect the threads' stacktraces, we rely on this and skip NULL entries. Since the output array was allocated with malloc, instead of NULL it contained garbage, so we got a segmentation fault when trying to read this garbage (in debug.c:writeStacktraces()).

### Fix
Allocate the global output array with zcalloc.

### To reproduce the bug, you'll have to change the code:
**In threadsmngr:ThreadsManager_runOnThreads():**
Make sure the g_output_array allocation is initialized with garbage and not 0s (add `memset(g_output_array, 2, sizeof(void*) * tids_len);` below the allocation).

Force one of the threads to write to the array:
add a global var: `static redisAtomic size_t return_now = 0;`
add to `invoke_callback()` before writing to the output array:
```
size_t i_return;
atomicGetIncr(return_now, i_return, 1);
if(i_return == 1) return;
```
Compile, start the server with `--enable-debug-command local` and run `redis-cli debug assert`. The assertion triggers the stacktrace collection. Expect to get 2 prints of the stack trace - since we get the segmentation fault after we return from the threads mngr, it can be safely triggered again.

## Added global variables r/w lock in ThreadsManager
To avoid a situation where the main thread runs `ThreadsManager_cleanups` while threads are still invoking the signal handler, we use a r/w lock. For cleanups, we will acquire the write lock. The threads will acquire the read lock to enable them to write simultaneously. If we fail to acquire the read lock, it means cleanups are in progress and we return immediately. After acquiring the lock we can safely check that the global output array wasn't nullified and proceed to write to it. This way we ensure the threads are not modifying the global variables / trying to write to the output array after they were zeroed/nullified/destroyed (the semaphore).

## Other minor logging changes
1. Removed logging if the semaphore times out, because the threads can still write to the output array after this check. Instead, we print the total number of printed stacktraces compared to the expected number (len_tids).
2. Use the noinline attribute to make sure the uplevel number of ignored stack trace entries stays correct.
3. Improve testing.

Co-authored-by: Oran Agra <oran@redislabs.com>
-
- 24 Sep, 2023 1 commit
-
-
meiravgri authored
In this PR we are adding the functionality to collect all the process's threads' backtraces.

## Changes made in this PR

### **Introduce threads mngr API**
The **threads mngr API** has 2 abilities:
* `ThreadsManager_init()` - register to SIGUSR2. Called on server start-up.
* `ThreadsManager_runOnThreads()` - receives a list of pid_t and a callback, tells every thread in the list to invoke the callback, and returns the output collected by each invocation.

**Elaborating the atomicvar API**
* `atomicIncrGet(var, newvalue_var, count)` - increment and get the atomic counter's new value
* `atomicFlagGetSet` - get and set the atomic counter value to 1

### **Always set SIGALRM handler**
The SIGALRM handler prints the process's stacktrace to the log file. Up until now, it was set only if `server.watchdog_period` > 0. This can also be useful if debugging is needed. However, in situations where the server can't get requests (a deadlock, for example), we weren't able to change the signal handler. To make it available at run time we set the SIGALRM handler on server startup. The signal handler name was changed to a more general `sigalrmSignalHandler`.

### **Print all the process's threads' stacktraces**
`logStackTrace()` now calls `writeStacktraces()`, instead of logging the current thread stacktrace.

`writeStacktraces()`:
* On Linux systems we use the threads manager API to collect the backtraces of all the process's threads. To get the `tids` list (thread ids) we read the `/proc/<redis-server-pid>/tasks` file, which includes a list of directories. Each directory name corresponds to one tid (including the main thread). For each thread, we also need to check if it can get the signal from the threads manager (meaning it is not blocking/ignoring that signal). We send the threads manager this tids list and the `collect_stacktrace_data()` callback, which collects the thread's backtrace addresses, its name, and tid.
* On other systems, the behavior remained as it was (writing only the current thread stacktrace to the log file).

## Compatibility notes
1. **The threads mngr API is only supported on Linux.**
2. glibc earlier than 2.3: we use `syscall(SYS_gettid)` and `syscall(SYS_tgkill...)` because their dedicated alternatives (`gettid()` and `tgkill`) were added in glibc 2.3.

## Output example
Each thread backtrace will have the following format: `<tid> <thread_name> [additional_info]`
* **tid**: as read from the `/proc/<redis-server-pid>/tasks` file
* **thread_name**: the thread name as it is registered in the os
* **additional_info**: sometimes we want to add specific information about one of the threads. Currently it is only used to mark the thread that handles the backtrace collection by adding "*". In case of a crash, this also indicates which thread caused the crash. The handling thread won't necessarily appear first.
```
------ STACK TRACE ------
EIP:
/lib/aarch64-linux-gnu/libc.so.6(epoll_pwait+0x9c)[0xffffb9295ebc]

67089 redis-server *
linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffffb9437790]
/lib/aarch64-linux-gnu/libc.so.6(epoll_pwait+0x9c)[0xffffb9295ebc]
redis-server *:6379(+0x75e0c)[0xaaaac2fe5e0c]
redis-server *:6379(aeProcessEvents+0x18c)[0xaaaac2fe6c00]
redis-server *:6379(aeMain+0x24)[0xaaaac2fe7038]
redis-server *:6379(main+0xe0c)[0xaaaac3001afc]
/lib/aarch64-linux-gnu/libc.so.6(+0x273fc)[0xffffb91d73fc]
/lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0x98)[0xffffb91d74cc]
redis-server *:6379(_start+0x30)[0xaaaac2fe0370]

67093 bio_lazy_free
/lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
/lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
/lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
/lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]

67091 bio_close_file
/lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
/lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
/lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
/lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]

67092 bio_aof
/lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
/lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
/lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
/lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]

67089:signal-handler (1693824528) --------
```
-
- 08 Sep, 2023 1 commit
-
-
Binbin authored
and adjustments.
-
- 20 Aug, 2023 1 commit
-
-
meiravgri authored
This PR's purpose is to make the crash report process thread safe. Main changes include:

1. `setupSigSegvHandler()` is introduced to initialize the signal handler. This function first initializes the signal handler mutex (if not initialized yet) and then registers the process to the signal handler.

2. **sigsegvHandler** flags:
   - SA_NODEFER - don't add the signal to the process signal mask. We use this flag because we want to be able to handle a second call to the signal manually.
   - Removed SA_RESETHAND: this flag resets the signal handler function upon the first entrance to the registered function. The reason to use this flag is to protect from recursively entering the signal handler by the same thread. But it also means that if a second thread crashes while handling a signal, the process will be terminated immediately and we won't get the crash report. In this PR we discard this flag; the purpose of the signal handler guard described below is to solve the above issues.

3. Add a **signal handler lock** with ERRORCHECK attributes. The lock's purpose is to ensure that only one thread generates a crash report. Once a second thread enters the signal handler it will be blocked. We use the ERRORCHECK lock in order to protect from a possible deadlock in case the thread handling the crash gets a signal. In the latter scenario, we log what we have collected until the handler crashed.

At the end of the crash report we reset the signal handler to SIG_DFL, with no flags, and rethrow the signal to generate a core dump (if enabled) and exit the process.

During the work on this PR we wanted to understand the historical reasons for how crashes are handled. With respect to the choice of flags, we believe **SA_RESETHAND** was not added for any specific purpose. **SA_ONSTACK**, which is removed here from bugReportEnd(), was originally also set in the initial registration to the signal handler, but removed in 3ada43e7. In addition, it was removed from another location in deee2c1e with the following description, which is also relevant to why it should be removed from bugReportEnd:
> it seems to be some valgrind bug with SA_ONSTACK.
> SA_ONSTACK seems unneeded since WD is not recursive (SA_NODEFER was removed),
> also, not sure if it's even valid without a call to sigaltstack()
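A minimal sketch of the handler registration described above; SA_SIGINFO and the zero-initialization are assumptions, and the real setupSigSegvHandler() also initializes the mutex and registers additional signals:

```c
#include <signal.h>
#include <string.h>

static void sigsegvHandler(int sig, siginfo_t *info, void *ucontext) {
    (void)sig; (void)info; (void)ucontext;
    /* ... take the signal handler lock and generate the crash report ... */
}

static void setupSigSegvHandler(void) {
    struct sigaction act;
    memset(&act, 0, sizeof(act));
    sigemptyset(&act.sa_mask);
    /* SA_NODEFER kept so a second delivery can be handled manually;
     * SA_RESETHAND deliberately NOT set, so a crash in another thread
     * still reaches this handler and produces a report. */
    act.sa_flags = SA_NODEFER | SA_SIGINFO;
    act.sa_sigaction = sigsegvHandler;
    sigaction(SIGSEGV, &act, NULL);
}
```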
-
- 01 Jul, 2023 1 commit
-
-
Binbin authored
Change to use dictTryExpand, return error on OOM.
-
- 27 Jun, 2023 1 commit
-
-
michalbiesek authored
- Add support for `getAndSetMcontextEip`
- Add support for `logRegisters`
-
- 24 May, 2023 1 commit
-
-
Oran Agra authored
The light version only shows the table sizes, while the pre-existing version that shows chain length stats is reachable with the `full` argument. This should allow looking into rehashing state, even on huge dicts, on which we're afraid to run the command for fear of causing a server freeze. Also, fix a possible overflow in dictGetStats.
-
- 23 May, 2023 1 commit
-
-
Ping Xie authored
This commit excludes aux fields from the output of the `cluster nodes` and `cluster replicas` command. We may decide to re-introduce them in some form or another in the future, but not in v7.2.
-
- 08 May, 2023 1 commit
-
-
Oran Agra authored
The new mallctl seems to set the output sz when EINVAL occurs. That messes up the retry mechanism that does /2 on each iteration.
-
- 03 May, 2023 1 commit
-
-
Madelyn Olson authored
Technically, declaring a prototype with an empty declaration has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently be introducing a breaking change if you are using this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although not apparently a problem for any of the compilers we build on, it feels like the right thing is to properly adhere to the C standard and use (void).
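The change in declarator style, illustrated with a placeholder function name (not a function from the codebase):

```c
void doSomething();      /* deprecated: unspecified parameter list, warned about by Clang 15 -pedantic */
void doSomething(void);  /* correct: a prototype that explicitly takes no arguments */
```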
-
- 10 Apr, 2023 1 commit
-
-
sundb authored
## Issue
When we use GCC 12 or later, or clang 9.0 or later, to build with `-D_FORTIFY_SOURCE=3`, we can see the following buffer overflow:
```
=== REDIS BUG REPORT START: Cut & paste starting from here ===
6263:M 06 Apr 2023 08:59:12.915 # Redis 255.255.255 crashed by signal: 6, si_code: -6
6263:M 06 Apr 2023 08:59:12.915 # Crashed running the instruction at: 0x7f03d59efa7c

------ STACK TRACE ------
EIP:
/lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7f03d59efa7c]

Backtrace:
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f03d599b520]
/lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7f03d59efa7c]
/lib/x86_64-linux-gnu/libc.so.6(raise+0x16)[0x7f03d599b476]
/lib/x86_64-linux-gnu/libc.so.6(abort+0xd3)[0x7f03d59817f3]
/lib/x86_64-linux-gnu/libc.so.6(+0x896f6)[0x7f03d59e26f6]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x2a)[0x7f03d5a8f76a]
/lib/x86_64-linux-gnu/libc.so.6(+0x1350c6)[0x7f03d5a8e0c6]
src/redis-server 127.0.0.1:25111(+0xd5e80)[0x557cddd3be80]
src/redis-server 127.0.0.1:25111(feedReplicationBufferWithObject+0x78)[0x557cddd3c768]
src/redis-server 127.0.0.1:25111(replicationFeedSlaves+0x1a4)[0x557cddd3cbc4]
src/redis-server 127.0.0.1:25111(+0x8721a)[0x557cddced21a]
src/redis-server 127.0.0.1:25111(call+0x47a)[0x557cddcf38ea]
src/redis-server 127.0.0.1:25111(processCommand+0xbf4)[0x557cddcf4aa4]
src/redis-server 127.0.0.1:25111(processInputBuffer+0xe6)[0x557cddd22216]
src/redis-server 127.0.0.1:25111(readQueryFromClient+0x3a8)[0x557cddd22898]
src/redis-server 127.0.0.1:25111(+0x1b9134)[0x557cdde1f134]
src/redis-server 127.0.0.1:25111(aeMain+0x119)[0x557cddce5349]
src/redis-server 127.0.0.1:25111(main+0x466)[0x557cddcd6716]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f03d5982d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f03d5982e40]
src/redis-server 127.0.0.1:25111(_start+0x25)[0x557cddcd7025]
```
The main reason is that when FORTIFY_SOURCE is enabled, GCC or clang will enhance some common functions, such as `strcpy`, `memcpy`, `fgets`, etc, so that they can detect buffer overflow errors and stop program execution, thus improving the safety of the program.

We use `zmalloc_usable_size()` everywhere to use memory blocks, but that is an abuse since malloc_usable_size() isn't meant for this kind of use; it is for diagnostics only. That is also why the behavior is flaky when built with _FORTIFY_SOURCE: the compiler can sense that we reach outside the allocated block and SIGABRT.

### Solution
If we need to use the additional memory we got, we need to use a dummy realloc with the `alloc_size` attribute and no inlining (see `extend_to_usable`, sketched after this entry) to let the compiler see the size of memory we need to use. This can either be an implicit call inside `z*usable` that returns the size, so that the caller doesn't have any other worry, or it can be a normal zmalloc call, which means that if the caller wants to use zmalloc_usable_size it must also use extend_to_usable.

### Changes
This PR does the following:
1) rename the current z[try]malloc_usable family to z[try]malloc_internal and don't expose them to users outside zmalloc.c,
2) expose a new set of `z[*]_usable` family that use z[*]_internal and `extend_to_usable()` implicitly; the caller gets the size of the allocation and it is safe to use,
3) go over all the users of `zmalloc_usable_size` and convert them to use the `z[*]_usable` family if possible,
4) in the places where the caller can't use `z[*]_usable` and store the real size, and must still rely on zmalloc_usable_size, we still make sure that the allocation used `z[*]_usable` (which has a call to `extend_to_usable()`) and ignores the returned size; this way a later call to `zmalloc_usable_size` is still safe.

[4] was done for module.c and listpack.c; all the other places (sds, reply proto list, replication backlog, client->buf) are using [3].

Co-authored-by: Oran Agra <oran@redislabs.com>
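A sketch of the `extend_to_usable()` idea described in the Solution section above, assuming GCC/clang attribute syntax; the real helper in zmalloc.c may differ in details:

```c
#include <stddef.h>

/* A "dummy realloc": it just returns the same pointer, but the alloc_size
 * attribute (combined with never being inlined) tells the compiler that
 * `size` bytes behind the returned pointer are valid to access, so
 * _FORTIFY_SOURCE checks accept writes into the extra usable space. */
__attribute__((alloc_size(2), noinline))
void *extend_to_usable(void *ptr, size_t size) {
    (void)size;
    return ptr;
}
```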
-
- 20 Mar, 2023 1 commit
-
-
polaris-alioth authored
When the server crashes during the AUTH command, or another command with an AUTH argument, the password would be recorded in the log. Now, when the `auth` keyword is detected (could be in HELLO or MIGRATE, etc.), the loop exits before printing any additional arguments.
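A standalone illustration of the redaction rule described above; the function and variable names are hypothetical, not the actual crash-log code:

```c
#include <stdio.h>
#include <strings.h>

static void logCommandArgs(int argc, const char **argv) {
    for (int i = 0; i < argc; i++) {
        printf("argv[%d]: '%s'\n", i, argv[i]);
        /* Once an "auth"-looking argument is seen (AUTH itself, or the AUTH
         * option of HELLO/MIGRATE), stop before printing what follows it. */
        if (strcasecmp(argv[i], "auth") == 0) {
            printf("(%d remaining argument(s) hidden)\n", argc - i - 1);
            break;
        }
    }
}
```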
-
- 12 Mar, 2023 1 commit
-
-
Binbin authored
This bug seems to have been there forever. CLIENT REPLY OFF|SKIP will mark the client with the CLIENT_REPLY_OFF or CLIENT_REPLY_SKIP flags. With these flags, prepareClientToWrite, called by addReply*, will return C_ERR directly, so the client can't receive the Pub/Sub messages and any other push notifications, e.g. client side tracking.

In this PR, we add a CLIENT_PUSHING flag that disables the reply silencing flags. When adding push replies, set the flag; after the reply, clear the flag. Then add the flag check in prepareClientToWrite.

Fixes #11874

Note, the SUBSCRIBE command response is a bit awkward, see https://github.com/redis/redis-doc/pull/2327

Co-authored-by: Oran Agra <oran@redislabs.com>
-
- 19 Feb, 2023 1 commit
-
-
Binbin authored
We have cases where we print information (might be important but by no means an error indicator) with the LL_WARNING level. Demoting these to LL_NOTICE:
- oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
- User requested shutdown...

This is also true for cases where we encounter a rare but normal situation. Demoting to LL_NOTICE. Examples:
- AOF was enabled but there is already another background operation. An AOF background was scheduled to start when possible.
- Connection with master lost.

Based on yoav-steinberg's https://github.com/redis/redis/pull/10650#issuecomment-1112280554 and yossigo's https://github.com/redis/redis/pull/10650#pullrequestreview-967677676
-
- 16 Feb, 2023 1 commit
-
-
Oran Agra authored
* Make it clear that current_client is the root client that was called by an external connection.
* Add executing_client, which is the client that runs the current command (can be a module or a script).
* Remove script_caller, which was used for commands that have CLIENT_SCRIPT to get the client that called the script. In most cases that's the current_client, and in others (when being called from a module) it could be an intermediate client when we actually want the original one used by the external connection.

Bugfixes:
* RM_Call with the C flag should log ACL errors with the requested user rather than the one used by the original client. This also solves a crash when RM_Call is used with the C flag from a detached thread safe context.
* addACLLogEntry would have logged info about the script_caller, but in case the script was issued by a module command we actually want the current_client. The exception is when RM_Call is called from a timer event, in which case we don't have a current_client.

Behavior changes:
* Client side tracking for scripts now tracks the keys that are read by the script instead of the keys that are declared by the caller for EVAL.

Other changes:
* Log both current_client and executing_client in the crash log.
* Remove prepareLuaClient and resetLuaClient, being dead code that was forgotten.
* Remove scriptTimeSnapshot and snapshot_time and instead add cmd_time_snapshot that serves all commands and is reset only when execution nesting starts.
* Remove code to propagate CLIENT_FORCE_REPL from the executed command to the script caller, since scripts aren't propagated anyway these days, and anyway this flag wouldn't have had an effect since CLIENT_PREVENT_PROP is added by scriptResetRun.
* Fix a module GIL violation issue in afterSleep that was introduced in #10300 (unreleased).
-
- 12 Feb, 2023 1 commit
-
-
Tian authored
# Background
The RDB file is usually generated and used once and seldom used again, but its content would reside in the page cache until the OS evicts it. A potential problem is that once the free memory is exhausted, the OS has to reclaim some memory from the page cache or swap anonymous pages out, which may result in jitter to the Redis service.

Suppose a concrete scenario: a high-capacity machine hosts many redis instances, and we're upgrading Redis on all of them together. The page cache in the host machine increases as RDBs are generated. Once the free memory drops into the low watermark (which is more likely to happen in older Linux kernels like 3.10, before [watermark_scale_factor](https://lore.kernel.org/lkml/1455813719-2395-1-git-send-email-hannes@cmpxchg.org/) was introduced, when the `low watermark` is linear to the `min watermark` and there is not much buffer space for `kswapd` to be woken up to reclaim memory), a `direct reclaim` happens, which means the process stalls waiting for memory allocation.

# What the PR does
The PR introduces the capability to reclaim the page cache when the RDB is operated on. Generally there are two cases, reading and writing the RDB. For reads it's a little messy to address incremental reclaim, so the reclaim is done in one go in the background after the load is finished, to avoid blocking the work thread. For writes, incremental reclaim amortizes the work, so there is no need to put it into the background, and the peak watermark of the cache can be reduced this way.

Two cases are addressed specially, replication and restart, for both of which the cache is leveraged to speed up the processing, so the reclaim is postponed to the right time. To do this, a flag is added to `rdbSave` and `rdbLoad` to control whether the cache needs to be kept, with the default value false.

# Something worth noting
1. Though `posix_fadvise` is the POSIX standard, only a few platforms support it, e.g. Linux, FreeBSD 10.0.
2. In Linux `posix_fadvise` only takes effect on writeback-ed pages, so a `sync` (or `fsync`, `fdatasync`) is needed to flush the dirty pages before `posix_fadvise` if we reclaim the write cache.

# About testing
A unit test is added to verify the effect of `posix_fadvise`. In the integration test the overall cache increase is checked, as well as the cache backed by the RDB, as a specific TCL test executed in an isolated Github action job.
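A minimal sketch of the reclaim step the PR describes, using the standard posix_fadvise() interface; the flag plumbing and call sites in rdb.c are not shown, and the helper name is illustrative:

```c
#include <fcntl.h>
#include <unistd.h>

/* Ask the kernel to drop the page cache backing [offset, offset+len) of fd.
 * Dirty pages are flushed first, because POSIX_FADV_DONTNEED only affects
 * pages that have already been written back. */
static int reclaimFileCache(int fd, off_t offset, off_t len) {
#if defined(POSIX_FADV_DONTNEED)
    if (fdatasync(fd) == -1) return -1;
    return posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
#else
    (void)fd; (void)offset; (void)len;
    return 0;  /* not supported on this platform; nothing to do */
#endif
}
```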
-