- 08 Oct, 2023 1 commit
-
-
Jachin authored
Use the __MAC_OS_X_VERSION_MIN_REQUIRED macro to detect the macOS system version instead of using MAC_OS_X_VERSION_10_6. Starting with MacOSX14.0.sdk, the default definitions of MAC_OS_X_VERSION_xxx have been removed from usr/include/AvailabilityMacros.h. It includes AvailabilityVersions.h, where the following condition must be met for MAC_OS_X_VERSION_xxx to be defined: `#if (!defined(_POSIX_C_SOURCE) && !defined(_XOPEN_SOURCE)) || defined(_DARWIN_C_SOURCE)`. However, in the project, _DARWIN_C_SOURCE is not defined, which leads to the loss of the definition of MAC_OS_X_VERSION_10_6.
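For illustration, a hedged sketch of the kind of guard this change implies (not the exact Redis diff):
```
/* Illustrative guard only: gate 10.6+ code on the compiler-provided
 * minimum-version macro, which stays defined even when the per-release
 * MAC_OS_X_VERSION_xxx constants are not. */
#include <AvailabilityMacros.h>

#if defined(__MAC_OS_X_VERSION_MIN_REQUIRED) && __MAC_OS_X_VERSION_MIN_REQUIRED >= 1060
/* code that relies on macOS 10.6+ facilities */
#endif
```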
-
- 05 Oct, 2023 1 commit
-
-
Oran Agra authored
Recently we added a way for a module to declare that it wishes to receive nested KSN, by setting ALLOW_NESTED_KEYSPACE_NOTIFICATIONS. But it looks like this flow has a bug: it clears the `active` member when it was previously set. However, since nesting is permitted, this bug has no implications, because regardless of the active member, the notification is permitted.
-
- 03 Oct, 2023 1 commit
-
-
Madelyn Olson authored
We use the C standard assert() in various places in the codebase, which requires NDEBUG to be undefined. We introduced the redisassert.h file in order to allow low level files to access the assert that maps to serverPanic, but this was only applied tactically and is not available broadly. This PR removes all usage of the standard library asserts and replaces them with an assert that maps to serverPanic. It makes us immune to accidentally setting the NDEBUG flag and thereby disabling assertions. I also marked the server asserts as "likely" to not execute. I spot checked various points in the code, and it didn't change the code layout on my x86 mac, but it is more consistent with redisassert.h and seems more correct overall.
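A paraphrased sketch of the redisassert.h idea (not the verbatim header; `_serverAssert` is the existing reporter the text refers to):
```
/* Route assertion failures to the server's own panic reporting so they can
 * never be compiled out by NDEBUG. */
#define likely(x) __builtin_expect(!!(x), 1)

void _serverAssert(const char *estr, const char *file, int line); /* from debug.c */

#undef assert
#define assert(_e) (likely(_e) ? (void)0 : _serverAssert(#_e, __FILE__, __LINE__))
```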
-
- 02 Oct, 2023 2 commits
-
-
Madelyn Olson authored
Fixed some usages of tabs which caused weird indentation in the code. Tried to find all of the places so there was one PR. I ignored all of the usages of tabs which don't really affect readability.
-
meiravgri authored
## Crash fix

### Current behavior
We might crash if we fail to collect some of the threads' output, for example if collection exceeds the timeout. The threads mngr API guarantees that the output array length will be `tids_len`; however, some indices can be NULL in case it fails to collect some of the threads' outputs. When we use the threads mngr to collect the threads' stacktraces, we rely on this and skip NULL entries. Since the output array was allocated with malloc, instead of NULL it contained garbage, so we got a segmentation fault when trying to read this garbage (in debug.c:writeStacktraces()).

### Fix
Allocate the global output array with zcalloc.

### To reproduce the bug, you'll have to change the code
**In threadsmngr:ThreadsManager_runOnThreads():**
* Make sure the g_output_array allocation is initialized with garbage and not 0s (add `memset(g_output_array, 2, sizeof(void*) * tids_len);` below the allocation).
* Force one of the threads to write to the array: add a global var `static redisAtomic size_t return_now = 0;` and add to `invoke_callback()` before writing to the output array:
```
size_t i_return;
atomicGetIncr(return_now, i_return, 1);
if(i_return == 1) return;
```
Compile, start the server with `--enable-debug-command local` and run `redis-cli debug assert`. The assertion triggers the stacktrace collection. Expect to get 2 prints of the stack trace - since we get the segmentation fault after we return from the threads mngr, it can be safely triggered again.

## Added global variables r/w lock in ThreadsManager
To avoid a situation where the main thread runs `ThreadsManager_cleanups` while threads are still invoking the signal handler, we use a r/w lock. For cleanups, we will acquire the write lock. The threads will acquire the read lock to enable them to write simultaneously. If we fail to acquire the read lock, it means cleanups are in progress and we return immediately. After acquiring the lock we can safely check that the global output array wasn't nullified and proceed to write to it. This way we ensure the threads are not modifying the global variables / trying to write to the output array after they were zeroed/nullified/destroyed (the semaphore).

## Other minor logging changes
1. Removed logging if the semaphore times out, because the threads can still write to the output array after this check. Instead, we print the total number of printed stacktraces compared to the expected number (len_tids).
2. Use the noinline attribute to make sure the uplevel number of ignored stack trace entries stays correct.
3. Improve testing.

Co-authored-by:
Oran Agra <oran@redislabs.com>
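A minimal, self-contained sketch of the read/write-lock discipline described above (names like `g_output_array` follow the text; the helper functions are illustrative):
```
/* Worker threads take the read lock to fill their slot; cleanup takes the
 * write lock before freeing/nullifying the shared array. */
#include <pthread.h>
#include <stdlib.h>

static pthread_rwlock_t g_lock = PTHREAD_RWLOCK_INITIALIZER;
static void **g_output_array = NULL;

/* Called on each thread's signal-handler path. */
void thread_write_slot(size_t idx, void *value) {
    if (pthread_rwlock_tryrdlock(&g_lock) != 0) return; /* cleanups in progress */
    if (g_output_array != NULL) g_output_array[idx] = value;
    pthread_rwlock_unlock(&g_lock);
}

/* Called once by the main thread when collection is done. */
void threads_mngr_cleanups(void) {
    pthread_rwlock_wrlock(&g_lock);
    free(g_output_array);
    g_output_array = NULL;
    pthread_rwlock_unlock(&g_lock);
}
```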
-
- 28 Sep, 2023 3 commits
-
-
guybe7 authored
The problem is that WAITAOF could have hung in case commands were propagated only to replicas. This can happen if a module uses RM_Call with the REDISMODULE_ARGV_NO_AOF flag. In that case, master_repl_offset would increase, but there would be nothing to fsync, so in the absence of other traffic, fsynced_reploff_pending would stay static, and WAITAOF could hang. This commit updates fsynced_reploff_pending to the latest offset in flushAppendOnlyFile in case there's nothing to fsync, i.e. in case it's behind because of the above-mentioned case, it'll be refreshed and release the WAITAOF. Other changes: fix a race in wait.tcl (client getting blocked vs. the fsync thread).
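A fragment-level sketch of the idea (atomicSet, fsynced_reploff_pending and master_repl_offset are the names from the text; the surrounding condition is a placeholder, not the PR's exact diff):
```
/* In flushAppendOnlyFile(): when there is nothing to fsync, advance the
 * pending fsynced offset ourselves so WAITAOF waiters are not stuck behind
 * master_repl_offset forever. */
if (nothing_left_to_fsync) {
    atomicSet(server.fsynced_reploff_pending, server.master_repl_offset);
}
```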
-
guybe7 authored
If we set `fsynced_reploff_pending` in `startAppendOnly`, and the fork doesn't start immediately (e.g. there's another fork active at the time), any subsequent commands will increment `server.master_repl_offset`, but will not cause an fsync (given they were executed before the fork started, they just ended up in the RDB part of it). Therefore, any WAITAOF will wait on the new master_repl_offset, but it will time out because no fsync will be executed.

Release notes:
```
WAITAOF could timeout in the absence of write traffic in case a new AOF is created and an AOFRW can't immediately start.
This can happen when the appendonly config is changed at runtime, but also after FLUSHALL, and replica full sync.
```
-
Viktor Söderqvist authored
In a long printf call with many placeholders, it's hard to see which argument belongs to which placeholder. The long printf-like calls in the INFO and CLIENT commands are rewritten into pairs of (format, argument). These pairs are then rewritten to a single call with a long format string and a long list of arguments, using a macro called FMTARGS. The file `fmtargs.h` is added to the repo. Co-authored-by:
Madelyn Olson <34459052+madolson@users.noreply.github.com>
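A hedged illustration of the FMTARGS pattern (the two INFO fields are just examples; treat the exact call as a sketch rather than the PR's diff):
```
/* Each format fragment sits right next to its argument; the FMTARGS macro
 * (from fmtargs.h) folds them back into one format string plus one argument
 * list for sdscatprintf. */
info = sdscatprintf(info, FMTARGS(
    "connected_clients:%lu\r\n", listLength(server.clients),
    "blocked_clients:%d\r\n", server.blocked_clients));
```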
-
- 26 Sep, 2023 1 commit
-
-
Sankar authored
Clear owner_not_claiming_slot bit for the slot in clusterDelSlot to keep it consistent with slot ownership information.
-
- 24 Sep, 2023 2 commits
-
-
Nir Rattner authored
The `retval` variable is defined as an `int`, so with 4 bytes, it cannot properly represent microsecond values greater than the equivalent of about 35 minutes. This bug shouldn't impact standard Redis behavior because Redis doesn't have timer events that are scheduled as far as 35 minutes out, but it may affect custom Redis modules which interact with the event timers via the RM_CreateTimer API. The impact is that `usUntilEarliestTimer` may return 0 for as long as `retval` is scaled to an overflowing value. While `usUntilEarliestTimer` continues to return `0`, `aeApiPoll` will have a zero timeout, and so Redis will use significantly more CPU iterating through its event loop without pause. For timers scheduled far enough into the future, Redis will cycle between ~35 minute periods of high CPU usage and ~35 minute periods of standard CPU usage.
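A quick self-contained check of the arithmetic behind the ~35 minute figure:
```
/* A signed 32-bit int overflows past INT_MAX microseconds,
 * i.e. roughly 35 minutes. */
#include <limits.h>
#include <stdio.h>

int main(void) {
    printf("INT_MAX us = %.1f minutes\n", INT_MAX / 1e6 / 60); /* ~35.8 */
    return 0;
}
```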
-
meiravgri authored
In this PR we are adding the functionality to collect all the process's threads' backtraces.

## Changes made in this PR

### Introduce threads mngr API
The **threads mngr API** has 2 abilities:
* `ThreadsManager_init()` - registers to SIGUSR2. Called on server start-up.
* `ThreadsManager_runOnThreads()` - receives a list of pid_t and a callback, tells every thread in the list to invoke the callback, and returns the output collected by each invocation.

### Elaborating the atomicvar API
* `atomicIncrGet(var, newvalue_var, count)` - increment and get the atomic counter's new value.
* `atomicFlagGetSet` - get and set the atomic counter value to 1.

### Always set SIGALRM handler
The SIGALRM handler prints the process's stacktrace to the log file. Up until now, it was set only if `server.watchdog_period` > 0. This can also be useful if debugging is needed. However, in situations where the server can't get requests (a deadlock, for example) we weren't able to change the signal handler. To make it available at run time we set the SIGALRM handler on server startup. The signal handler name was changed to the more general `sigalrmSignalHandler`.

### Print all the process's threads' stacktraces
`logStackTrace()` now calls `writeStacktraces()`, instead of logging the current thread's stacktrace.

`writeStacktraces()`:
* On Linux systems we use the threads manager API to collect the backtraces of all the process's threads. To get the `tids` list (thread ids) we read the `/proc/<redis-server-pid>/tasks` file, which includes a list of directories. Each directory name corresponds to one tid (including the main thread). For each thread, we also need to check if it can get the signal from the threads manager (meaning it is not blocking/ignoring that signal). We send the threads manager this tids list and the `collect_stacktrace_data()` callback, which collects the thread's backtrace addresses, its name, and its tid.
* On other systems, the behavior remains as it was (writing only the current thread's stacktrace to the log file).

## Compatibility notes
1. **The threads mngr API is only supported on Linux.**
2. glibc earlier than 2.3: we use `syscall(SYS_gettid)` and `syscall(SYS_tgkill...)` because their dedicated alternatives (`gettid()` and `tgkill`) were added in glibc 2.3.

## Output example
Each thread backtrace will have the following format: `<tid> <thread_name> [additional_info]`
* **tid**: as read from the `/proc/<redis-server-pid>/tasks` file
* **thread_name**: the thread name as it is registered in the OS
* **additional_info**: Sometimes we want to add specific information about one of the threads. Currently it is only used to mark the thread that handles the backtraces collection by adding "*". In case of a crash, this also indicates which thread caused the crash. The handling thread won't necessarily appear first.
```
------ STACK TRACE ------

EIP:
/lib/aarch64-linux-gnu/libc.so.6(epoll_pwait+0x9c)[0xffffb9295ebc]

67089 redis-server *
linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffffb9437790]
/lib/aarch64-linux-gnu/libc.so.6(epoll_pwait+0x9c)[0xffffb9295ebc]
redis-server *:6379(+0x75e0c)[0xaaaac2fe5e0c]
redis-server *:6379(aeProcessEvents+0x18c)[0xaaaac2fe6c00]
redis-server *:6379(aeMain+0x24)[0xaaaac2fe7038]
redis-server *:6379(main+0xe0c)[0xaaaac3001afc]
/lib/aarch64-linux-gnu/libc.so.6(+0x273fc)[0xffffb91d73fc]
/lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0x98)[0xffffb91d74cc]
redis-server *:6379(_start+0x30)[0xaaaac2fe0370]

67093 bio_lazy_free
/lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
/lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
/lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
/lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]

67091 bio_close_file
/lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
/lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
/lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
/lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]

67092 bio_aof
/lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
/lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
/lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
/lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]

67089:signal-handler (1693824528) --------
```
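The per-thread signaling relies on raw syscalls, as noted in the compatibility notes; a minimal, self-contained sketch of that piece (the wrapper name is illustrative):
```
/* Signal one thread of this process by tid using raw syscalls, for
 * portability to older glibc. */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

static int signal_thread(pid_t tid, int sig) {
    pid_t tgid = getpid();                     /* thread group = the process */
    return (int) syscall(SYS_tgkill, tgid, tid, sig);
}

int main(void) {
    pid_t self = (pid_t) syscall(SYS_gettid);  /* main thread's tid */
    printf("tgkill to self: %d\n", signal_thread(self, 0)); /* sig 0 = existence check */
    return 0;
}
```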
-
- 21 Sep, 2023 1 commit
-
-
Chen Tianjie authored
Starting with a change in #12233 (released in 7.2), CLUSTER commands use the client's connection to decide whether to return the TLS port or the non-TLS port, but commands called by Lua scripts and a module's RM_Call don't have a real client with a connection, and are currently regarded as non-TLS connections. We can use server.current_client instead when it is available. When it is not (a module calls commands without a real client), we may see this as undefined behavior and return null or the default port (currently in this PR it returns the default port, judged by server.tls_cluster).
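A fragment-level sketch of the fallback order described above (the helper name and the connIsTLS call are illustrative, not necessarily the PR's exact code):
```
/* Prefer the calling client's connection, else the script/module caller's
 * current_client, else the server default judged by tls-cluster. */
static int use_tls_port(client *caller) {
    if (caller && caller->conn) return connIsTLS(caller->conn);
    if (server.current_client && server.current_client->conn)
        return connIsTLS(server.current_client->conn);
    return server.tls_cluster; /* no client at all: fall back to the default */
}
```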
-
- 10 Sep, 2023 1 commit
-
-
Binbin authored
An unintentional change was introduced in #10536: we used to use addReplyLongLong and now it is addReplyBulkLongLong; revert it back to the previous behavior.
-
- 08 Sep, 2023 1 commit
-
-
Binbin authored
and adjustments.
-
- 04 Sep, 2023 1 commit
-
-
nihohit authored
Updated the command tips for ACL SAVE / SETUSER / DELUSER, CLIENT SETNAME / SETINFO, and LATENCY RESET. The tips now match CONFIG SET, since there's a similar behavior for all of these commands - the user expects to update the various configurations & states on all nodes, not only on a single, random node. For LATENCY RESET the response tip is now agg_sum. Co-authored-by:
Shachar Langbeheim <shachlan@amazon.com>
-
- 03 Sep, 2023 1 commit
-
-
secwall authored
When connecting between a 7.0 and a 7.2 cluster, the 7.0 cluster will not populate the shard_id field, which is expected by the 7.2 cluster. This is not intended behavior, as the 7.2 cluster is supposed to use a temporary shard_id while the node is in the upgrading state, but it wasn't being correctly set in this case.
-
- 02 Sep, 2023 1 commit
-
-
alonre24 authored
Recently, the option of sending an argument from stdin using the `-x` flag was added to redis-benchmark (this option is available in redis-cli as well). However, using the `-x` option to send a blob that contains null characters doesn't work as expected - the argument is trimmed at the first occurrence of `\x00` (unlike in redis-cli). This PR aims to fix this issue and add support for any binary string input, by sending the arguments' lengths to `redisFormatCommandArgv` when processing the redis-benchmark command, so we won't treat the arguments as C strings. Additionally, we add simple test coverage for `-x` (without binary strings), remove an excessive server started in tests, and make sure to select db 0 so that `r` and the benchmark work on the same db. Co-authored-by:
Oran Agra <oran@redislabs.com>
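A small self-contained hiredis example of the fix's core idea - passing explicit lengths so a NUL byte doesn't truncate the argument (key and value here are made up):
```
#include <hiredis/hiredis.h>
#include <stdio.h>

int main(void) {
    const char *argv[] = {"SET", "key", "a\0b"};  /* value contains a NUL byte */
    size_t argvlen[]   = {3, 3, 3};               /* lengths given explicitly */
    char *cmd = NULL;
    long long len = redisFormatCommandArgv(&cmd, 3, argv, argvlen);
    printf("formatted %lld bytes\n", len);        /* full 3-byte value is kept */
    redisFreeCommand(cmd);
    return 0;
}
```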
-
- 31 Aug, 2023 4 commits
-
-
Roshan Khatri authored
Found that in moduleConfigValidityCheck and isModuleConfigNameRegistered, sds is not required. This also allowed us to remove an unnecessary memcpy from some of the config registering APIs.
-
icy17 authored
-
Chen Tianjie authored
ZRANGE BYSCORE/BYLEX with the [LIMIT offset count] option was using every level in the skiplist to jump to the first/last node in range, but only used level[0] of the skiplist to locate the node at offset, resulting in sub-optimal performance when using LIMIT:
```
while (ln && offset--) {
    if (reverse) {
        ln = ln->backward;
    } else {
        ln = ln->level[0].forward;
    }
}
```
It could be slow when the offset is very big. We can get the total rank of the offset location and use the skiplist to jump to it. It is an improvement from O(offset) to O(log rank).

Below shows how this is implemented (if the offset is positive): use the skiplist to search for the first element in the range and record its rank `rank_0`, so we can have the rank of the target node `rank_t`. Meanwhile we record the last node we visited which has zsl->level-1 levels and its rank `rank_1`. Then we start from the zsl->level-1 node and use the skiplist to go forward `rank_t-rank_1` nodes to reach the target node. It is very similar when the offset is reversed.

Note that if `rank_t` is very close to `rank_0`, we just start from the first element in range and go node by node; this is for the case when the zsl->level-1 node is too far away and it is quicker to reach the target node by node.

Here is a test using a randomly generated zset including 10000 elements (with different positive scores), running a benchmark which compares how fast the `ZRANGE` command is executed before and after the optimization. The start score is set to 0 and the count is set to 1 to make sure that most of the time is spent on locating the offset.
```
memtier_benchmark -h 127.0.0.1 -p 6379 --command="zrange test 0 +inf byscore limit <offset> 1"
```
| offset | QPS(unstable) | QPS(optimized) |
|--------|--------|--------|
| 10 | 73386.02 | 74819.82 |
| 1000 | 48084.96 | 73177.73 |
| 2000 | 31156.79 | 72805.83 |
| 5000 | 10954.83 | 71218.21 |

With the results above, we can see that the original code is greatly slowed down as the offset gets bigger, and with the optimization the speed is almost not affected.

Similar results are generated when testing a reversed offset:
```
memtier_benchmark -h 127.0.0.1 -p 6379 --command="zrange test +inf 0 byscore rev limit <offset> 1"
```
| offset | QPS(unstable) | QPS(optimized) |
|--------|--------|--------|
| 10 | 74505.14 | 71653.67 |
| 1000 | 46829.25 | 72842.75 |
| 2000 | 28985.48 | 73669.01 |
| 5000 | 11066.22 | 73963.45 |

And the same conclusion is drawn from the tests of ZRANGE BYLEX.
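A fragment-level sketch of the rank-based jump (zslGetRank / zslGetElementByRank are existing t_zset.c helpers; the variable names are illustrative):
```
/* Compute the target rank once, then let the skiplist jump there, instead of
 * stepping offset nodes at level[0]. */
unsigned long rank_0 = zslGetRank(zsl, first->score, first->ele);   /* rank of first node in range */
unsigned long rank_t = reverse ? rank_0 - offset : rank_0 + offset; /* rank of the target node */
zskiplistNode *ln = zslGetElementByRank(zsl, rank_t);               /* O(log rank) instead of O(offset) */
```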
-
Binbin authored
Also added a test to cover this case, so this can cover the reply schemas check.
-
- 30 Aug, 2023 4 commits
-
-
Roshan Khatri authored
This PR adds a new Module API int RM_AddACLCategory(RedisModuleCtx *ctx, const char *category_name) to add a new ACL command category. Here, we initialize the ACLCommandCategories array by allocating space for 64 categories and duplicating the 21 default categories from the predefined array 'ACLDefaultCommandCategories' into the ACLCommandCategories array during ACL initialization. Valid ACL category names can only contain alphanumeric characters, underscores, and dashes. When called, the API checks for the onload flag, category name validity, and for a duplicate category name. If the conditions are satisfied, the API adds the new category to the trailing end of the ACLCommandCategories array and assigns the acl_categories flag bit according to the index at which the category is added. If any error is encountered, errno is set accordingly by the API. --------- Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com>
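An illustrative module OnLoad using the new API (the module and category names here are hypothetical):
```
#include "redismodule.h"

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "mymodule", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Must be called during OnLoad; on failure errno describes the reason. */
    if (RedisModule_AddACLCategory(ctx, "my-category") == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```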
-
bodong.ybd authored
Before:
```
127.0.0.1:6379> command getkeys sort_ro key
(empty array)
127.0.0.1:6379>
```
After:
```
127.0.0.1:6379> command getkeys sort_ro key
1) "key"
127.0.0.1:6379>
```
-
Chen Tianjie authored
Add these INFO metrics:
* client_query_buffer_limit_disconnections
* client_output_buffer_limit_disconnections

Sometimes it is useful to monitor whether clients reach the size limit of the query buffer and output buffer, to decide whether we need to adjust the buffer size limit or reduce the client query payload.
-
nihohit authored
Since the three commands have similar behavior (change config, return OK), the tips that govern how they should behave should be similar. Co-authored-by:
Shachar Langbeheim <shachlan@amazon.com>
-
- 21 Aug, 2023 4 commits
-
-
Binbin authored
BITCOUNT and BITPOS with a non-existing key returned 0 even when the arguments were invalid. Before this commit:
```
> flushall
OK
> bitcount s 0
(integer) 0
> bitpos s 0 0 1 hello
(integer) 0
> set s 1
OK
> bitcount s 0
(error) ERR syntax error
> bitpos s 0 0 1 hello
(error) ERR syntax error
```
The reason is that we judged non-existence before parameter checking and returned. This PR fixes it; after this commit:
```
> flushall
OK
> bitcount s 0
(error) ERR syntax error
> bitpos s 0 0 1 hello
(error) ERR syntax error
```
Also BITPOS got the same fix as #12394: check for wrong arguments before checking for the key.
```
> lpush mylist a b c
(integer) 3
> bitpos mylist 1 a b
(error) WRONGTYPE Operation against a key holding the wrong kind of value
```
-
Wen Hui authored
Generally, in any command we first check the arguments and then check if the key exists. Some examples are:
```
127.0.0.1:6379> getrange no-key invalid1 invalid2
(error) ERR value is not an integer or out of range
127.0.0.1:6379> setbit no-key 1 invalid
(error) ERR bit is not an integer or out of range
127.0.0.1:6379> xrange no-key invalid1 invalid2
(error) ERR Invalid stream ID specified as stream command argument
```
**Before change**
```
bitcount no-key invalid1 invalid2
0
```
**After change**
```
bitcount no-key invalid1 invalid2
(error) ERR value is not an integer or out of range
```
-
Binbin authored
Limit the range of LREM count to -LONG_MAX ~ LONG_MAX. Before the fix, passing -LONG_MAX would cause an overflow and would effectively be the same as passing 0 (because the condition `toremove && removed == toremove` could never be satisfied). This is a minor fix as it shouldn't really affect users; it's more like a cleanup.
-
Yves LeBras authored
Initializing `memkeys` to 0 for consistency and clarity. The config struct is zeroed anyway, but the other fields are explicitly initialized.
-
- 20 Aug, 2023 1 commit
-
-
meiravgri authored
This PR's purpose is to make the crash report process thread safe. Main changes include:

1. `setupSigSegvHandler()` is introduced to initialize the signal handler. This function first initializes the signal handler mutex (if not initialized yet) and then registers the process to the signal handler.

2. **sigsegvHandler** flags:
   * SA_NODEFER - don't add the signal to the process signal mask. We use this flag because we want to be able to handle a second call to the signal manually.
   * Removed SA_RESETHAND: this flag resets the signal handler function upon the first entrance to the registered function. The reason to use this flag is to protect from recursively entering the signal handler by the same thread. But it also means that if a second thread crashes while handling a signal, the process will be terminated immediately and we won't get the crash report. In this PR we discard this flag. The purpose of the signal handler guard described below is to solve the above issues.

3. Add a **signal handler lock** with ERRORCHECK attributes. The lock's purpose is to ensure that only one thread generates a crash report. Once a second thread enters the signal handler it will be blocked. We use the ERRORCHECK lock in order to protect from a possible deadlock in case the thread handling the crash gets a signal. In the latter scenario, we log what we have collected until the handler crashed.

At the end of the crash report we reset the signal handler to SIG_DFL, with no flags, and rethrow the signal to generate a core dump (if enabled) and exit the process.

During the work on this PR we wanted to understand the historical reasons for how the crash is handled. With respect to the choice of flags, we believe **SA_RESETHAND** was not added for any specific purpose. **SA_ONSTACK**, which is removed here from bugReportEnd(), was originally also set in the initial registration to the signal handler, but removed in 3ada43e7. In addition, it was removed from another location in deee2c1e with the following description, which is also relevant to why it should be removed from bugReportEnd:
> it seems to be some valgrind bug with SA_ONSTACK.
> SA_ONSTACK seems unneeded since WD is not recursive (SA_NODEFER was removed),
> also, not sure if it's even valid without a call to sigaltstack()
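A minimal, self-contained sketch of an ERRORCHECK mutex as used for the guard described in point 3 (names are illustrative):
```
/* A second lock attempt by the thread that already holds the mutex returns
 * EDEADLK instead of deadlocking, so the handler can bail out and log what
 * it has collected so far. */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sig_lock;

int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&sig_lock, &attr);

    pthread_mutex_lock(&sig_lock);           /* first crash-handling thread */
    int rc = pthread_mutex_lock(&sig_lock);  /* same thread re-enters the handler */
    printf("second lock: %s\n",
           rc == EDEADLK ? "EDEADLK (bail out, log partial report)" : "other");
    return 0;
}
```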
-
- 16 Aug, 2023 4 commits
-
-
Binbin authored
In the past we hardcoded it to 20, causing it to not count keys for more databases.
-
Tyler Bream (Event pipeline) authored
When running cluster info, and the number of keys overflows the integer value, the summary no longer makes sense. This fixes it by using an appropriate type to handle values over the max int value.
-
WangYu authored
If a dict is rehashing, the entries at the head of table[0] are moved to table[1] and all entries in `table[0][0:rehashidx]` are NULL. `dictNext` starts looking for a non-NULL entry from table 0 index 0, so the first call of `dictNext` on a rehashing dict will iterate many times just to skip those NULL entries. We can easily skip those entries by setting `iter->index` to `iter->d->rehashidx` when the dict is rehashing and it's the first call of dictNext (`iter->index == -1 && iter->table == 0`). Co-authored-by:
sundb <sundbcn@gmail.com>
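A fragment-level sketch of the skip described above (names follow dict.c; treat it as illustrative rather than the exact diff):
```
/* On the first dictNext() call for a rehashing dict, start from rehashidx
 * instead of 0, since the buckets below it in table[0] are already NULL. */
if (iter->index == -1 && iter->table == 0 && dictIsRehashing(iter->d))
    iter->index = iter->d->rehashidx - 1;   /* the ++ that follows lands on rehashidx */
```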
-
Wen Hui authored
Currently the rdbSaveMillisecondTime and rdbSaveDoubleValue APIs' return type is int, but they return the value directly from the rdbWriteRaw function, which has a return type of ssize_t. As this may overflow an int, the return type was changed to ssize_t.
-
- 15 Aug, 2023 1 commit
-
-
Binbin authored
We iterate over all replicas to get the result, the time complexity should be O(N), like CLUSTER NODES complexity is O(N).
-
- 10 Aug, 2023 1 commit
-
-
Madelyn Olson authored
When a new ACL rule was added, an attempt was made to remove any "overlapping" rules. However, when a match was found, the search was not resumed at the right location, but instead after the original position of the original command. For example, if the current rules were `-config +config|get` and a rule `+config` was added, it would identify that `-config` was matched, but it would skip over `+config|get`, leaving the compacted rule `-config +config`. This would be evaluated safely, but looks weird. This bug can only be triggered with subcommands, since that is the only way to have sequential matching rules. Resolves #12470. This is also only present in 7.2. I think there was also a minor risk of removing another valid rule, since it would start the search of the next command at an arbitrary point. I couldn't find a valid offset that would have caused a match using any of the existing commands that have subcommands with another command.
-
- 05 Aug, 2023 3 commits
-
-
zhaozhao.zz authored
If there are no subscribers, we can ignore the operation.
-
zhaozhao.zz authored
Fix the assertion when a busy script (timeout) signals ready keys (like LPUSH), and then an arbitrary client's `allow-busy` command steps into `handleClientsBlockedOnKeys` and tries to wake up clients blocked on keys (like BLPOP).

Reproduction process:
1. Start a redis with aof: `./redis-server --appendonly yes`
2. Exec blpop: `127.0.0.1:6379> blpop a 0`
3. Use another client to call a busy script that pushes the blocked key: `127.0.0.1:6379> eval "redis.call('lpush','a','b') while(1) do end" 0`
4. Use a new client to call an allow-busy command like auth: `127.0.0.1:6379> auth a`

BTW, this issue also breaks the atomicity of scripts. This bug has been around for many years; the old versions only have the atomicity problem, while 7.0/7.2 also have the assertion problem. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Binbin authored
GEOHASH / GEODIST / GEOPOS use zsetScore to get the score; in skiplist encoding, we use dictFind to get the score, which is O(1), the same as the ZSCORE command. It is not clear why these commands were listed as O(log(N)) and O(N) until now.
-
- 02 Aug, 2023 1 commit
-
-
Meir Shpilraien (Spielrein) authored
Ensure that the function load timeout is disabled during loading from RDB/AOF and on replicas. (#12451) When loading a function from either RDB/AOF or a replica, it is essential not to fail on timeout errors. The loading time may vary due to various factors, such as hardware specifications or the system's workload during the loading process. Once a function has been successfully loaded, it should be allowed to load from persistence or on replicas without encountering a timeout failure. To maintain a clear separation between the engine and Redis internals, the implementation refrains from directly checking the state of Redis within the engine itself. Instead, the engine receives the desired timeout as part of the library creation and duly respects this timeout value. If Redis wishes to disable any timeout, it can simply send a value of 0.
-