- 15 Oct, 2023 1 commit
-
-
Vitaly authored
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.

## Important changes

* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time; in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this we introduced a binary index tree (a.k.a. Fenwick tree). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time (a minimal illustrative sketch follows this entry).
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index tree that is used for random key selection; this index allows us to find a slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to not only save the position within the dictionary, but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server. This has an interesting side effect: you'll now be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places; in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). The same is done to keep the expires computation O(1) as well.

## Performance

This change improves SET performance in cluster mode by ~5%; most of the gains come from us not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.

## Interface changes

* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* The SCAN API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.

---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
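A minimal sketch of the binary index tree idea mentioned in the getRandomKey note above. The names (`bit_update`, `slot_of_kth_key`) and the plain-C layout are illustrative, not the actual Redis code; it only shows how per-slot key counts support O(log n) updates/prefix sums, with a binary search over the prefix sums finding the slot holding the k-th key:

```c
/* Sketch (illustrative names, not the Redis implementation) of a Fenwick
 * tree over per-slot key counts, used to pick a slot with probability
 * proportional to its number of keys. */
#include <stdio.h>
#include <stdlib.h>

#define NUM_SLOTS 16384

static long long bit[NUM_SLOTS + 1]; /* 1-based Fenwick tree */

/* Add delta to a slot's key count: O(log NUM_SLOTS). */
void bit_update(int slot, long long delta) {
    for (int i = slot + 1; i <= NUM_SLOTS; i += i & -i) bit[i] += delta;
}

/* Cumulative count of keys in slots [0..slot]: O(log NUM_SLOTS). */
long long bit_prefix(int slot) {
    long long sum = 0;
    for (int i = slot + 1; i > 0; i -= i & -i) sum += bit[i];
    return sum;
}

/* Find the slot containing the k-th key (0-based) by binary search over
 * prefix sums: the O(log^2(slot count)) lookup described above. */
int slot_of_kth_key(long long k) {
    int lo = 0, hi = NUM_SLOTS - 1;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (bit_prefix(mid) <= k) lo = mid + 1; else hi = mid;
    }
    return lo;
}

int main(void) {
    bit_update(0, 10);    /* pretend slot 0 holds 10 keys...        */
    bit_update(5000, 1);  /* ...and slot 5000 holds the 11th key    */
    long long total = bit_prefix(NUM_SLOTS - 1);
    long long k = rand() % total;  /* uniform over all keys, so fair */
    printf("key #%lld lives in slot %d\n", k, slot_of_kth_key(k));
    printf("11th key is in slot %d\n", slot_of_kth_key(10)); /* 5000 */
    return 0;
}
```

The same structure serves the iteration note above: `slot_of_kth_key(10)` jumps straight from the 10 keys of slot 0 to the slot holding the 11th key, skipping all the empty slots in between.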
-
- 13 Oct, 2023 2 commits
-
-
Oran Agra authored
when a server in the test suite crashes and is restarted by restart_server, we didn't clean its pid from the list. We can see that when the corrupt-dump-fuzzer hangs, it has a long list of servers to clean, but in fact they're all already dead.
-
Harkrishn Patro authored
Unsubscribe all clients from replica for shard channel if the master ownership changes
-
- 12 Oct, 2023 1 commit
-
-
zhaozhao.zz authored
In #11568 we removed the NOSCRIPT flag from commands and kept the BLOCKING flag, aiming to allow them in scripts and let them implicitly behave in the non-blocking way. In that sense, the old behavior was to allow LPOP and reject BLPOP, and the new behavior is to allow BLPOP too, and fail it only in case it ends up blocking. So likewise, so far we allowed XREAD and rejected XREAD BLOCK, and we will now allow that too, and only reject it if it ends up blocking.
-
- 08 Oct, 2023 1 commit
-
-
Oran Agra authored
Recently there have been some incidents of hung tests in the CI. When we try to reproduce them, we get an assertion, not a hang. Maybe the server logs will reveal some info.
-
- 03 Oct, 2023 1 commit
-
-
YaacovHazan authored
After the change in #12626 (2e0f6724), the is_alive proc gets a pid and not a server config. This PR aligns it in the 'load corrupted rdb with no CRC' test.
-
- 02 Oct, 2023 2 commits
-
-
meiravgri authored
## Crash fix

### Current behavior

We might crash if we fail to collect some of the threads' output, for example if it exceeds the timeout. The threads mngr API guarantees that the output array length will be `tids_len`; however, some indices can be NULL, in case it fails to collect some of the threads' outputs. When we use the threads mngr to collect the threads' stacktraces, we rely on this and skip NULL entries. Since the output array was allocated with malloc, instead of NULL it contained garbage, so we got a segmentation fault when trying to read this garbage (in debug.c:writeStacktraces()).

### Fix

Allocate the global output array with zcalloc.

### To reproduce the bug, you'll have to change the code

**In threadsmngr:ThreadsManager_runOnThreads():** make sure the g_output_array allocation is initialized with garbage and not 0s (add `memset(g_output_array, 2, sizeof(void*) * tids_len);` below the allocation). Force one of the threads to skip writing to the array: add a global var `static redisAtomic size_t return_now = 0;` and add to `invoke_callback()` before writing to the output array:
```
size_t i_return;
atomicGetIncr(return_now, i_return, 1);
if(i_return == 1) return;
```
Compile, start the server with `--enable-debug-command local` and run `redis-cli debug assert`. The assertion triggers the stacktrace collection. Expect to get 2 prints of the stack trace - since we get the segmentation fault after we return from the threads mngr, it can safely be triggered again.

## Added global variables r/w lock in ThreadsManager

To avoid a situation where the main thread runs `ThreadsManager_cleanups` while threads are still invoking the signal handler, we use a r/w lock. For cleanups, we acquire the write lock. The threads acquire the read lock to enable them to write simultaneously. If we fail to acquire the read lock, it means cleanups are in progress and we return immediately. After acquiring the lock we can safely check that the global output array wasn't nullified and proceed to write to it. This way we ensure the threads are not modifying the global variables / trying to write to the output array after they were zeroed/nullified/destroyed (the semaphore).

## Other minor logging changes

1. Removed logging if the semaphore times out, because the threads can still write to the output array after this check. Instead, we print the total number of printed stacktraces compared to the expected number (tids_len).
2. Use the noinline attribute to make sure the uplevel number of ignored stack trace entries stays correct.
3. Improve testing.

Co-authored-by: Oran Agra <oran@redislabs.com>
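A minimal sketch of why the zcalloc fix matters for the skip-NULL pattern described above. The names (`run_on_threads`, `tids_len`) mirror the description but the code is illustrative, using plain `calloc` in place of Redis's `zcalloc`:

```c
/* Sketch: consumers skip NULL entries in the output array, so the
 * array must be zero-initialized, or a missing entry looks like a
 * valid pointer full of garbage. */
#include <stdio.h>
#include <stdlib.h>

void **run_on_threads(size_t tids_len) {
    /* BUG: with malloc() the slots hold garbage, and an entry for a
     * thread that never answered looks like a valid pointer. */
    /* void **out = malloc(tids_len * sizeof(void *)); */
    void **out = calloc(tids_len, sizeof(void *)); /* fix: zeroed slots */
    /* ... threads that respond in time fill in out[i] ... */
    return out;
}

int main(void) {
    size_t tids_len = 4;
    void **out = run_on_threads(tids_len);
    for (size_t i = 0; i < tids_len; i++) {
        if (out[i] == NULL) continue; /* safe only if zero-initialized */
        /* ... read the collected stacktrace ... */
    }
    free(out);
    return 0;
}
```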
-
YaacovHazan authored
In some tests, the code manually searches for a log message, and it uses tail -1 with a delay of 1 second, which can miss the expected line. Also, because the aof tests use start_server_aof and not start_server, the test name doesn't get logged into the server log. To fix the above, I made the following changes:
- Changed start_server_aof to wrap start_server. This will add the created aof server to the servers list, and make srv() and wait_for_log_messages() available for the tests.
- Introduced a new option for start_server, 'wait_ready' - an option to let the caller start the test code without waiting for the server to be ready. Useful for tests on a server that is expected to exit on startup.
- Created a new start_server_aof_ex. The new proc also accepts options as an argument and makes use of the new 'wait_ready' option for tests that are expected to exit on startup because of some error in the aof file(s).

Because of the above, I had to change many lines and replace every local srv variable (a server config) usage with srv().
-
- 28 Sep, 2023 2 commits
-
-
guybe7 authored
The problem is that WAITAOF could hang in case commands were propagated only to replicas. This can happen if a module uses RM_Call with the REDISMODULE_ARGV_NO_AOF flag. In that case, master_repl_offset would increase, but there would be nothing to fsync, so in the absence of other traffic, fsynced_reploff_pending would stay static, and WAITAOF can hang. This commit updates fsynced_reploff_pending to the latest offset in flushAppendOnlyFile in case there's nothing to fsync, i.e. in case it's behind because of the above-mentioned case, it'll be refreshed and release the WAITAOF. Other changes: fix a race in wait.tcl (client getting blocked vs. the fsync thread).
-
guybe7 authored
If we set `fsynced_reploff_pending` in `startAppendOnly`, and the fork doesn't start immediately (e.g. there's another fork active at the time), any subsequent commands will increment `server.master_repl_offset`, but will not cause a fsync (given they were executed before the fork started, they just ended up in the RDB part of it). Therefore, any WAITAOF will wait on the new master_repl_offset, but it will time out because no fsync will be executed.

Release notes:
```
WAITAOF could timeout in the absence of write traffic in case a new AOF is created and an AOFRW can't immediately start. This can happen when the appendonly config is changed at runtime, but also after FLUSHALL, and replica full sync.
```
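A minimal sketch of the invariant both of these WAITAOF fixes maintain, using hypothetical simplified globals (the real logic lives in Redis's AOF code): when there is nothing to fsync, the fsynced offset must still catch up to the replication offset, otherwise WAITAOF waits on an offset that no fsync will ever cover.

```c
/* Sketch (hypothetical simplified globals, not the Redis code). */
#include <stdio.h>

long long master_repl_offset = 0;      /* grows on every propagated write */
long long fsynced_reploff_pending = 0; /* what the next fsync will cover  */
long long aof_buf_len = 0;             /* bytes waiting to be written     */

void flush_append_only_file(void) {
    if (aof_buf_len == 0) {
        /* Nothing to write/fsync: refresh the pending offset anyway,
         * so a WAITAOF on master_repl_offset can be released. */
        fsynced_reploff_pending = master_repl_offset;
        return;
    }
    /* ... write aof_buf, schedule the fsync, then advance the offset ... */
}

int main(void) {
    master_repl_offset = 100; /* e.g. replica-only or NO_AOF traffic */
    flush_append_only_file();
    printf("fsynced offset: %lld\n", fsynced_reploff_pending); /* 100 */
    return 0;
}
```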
-
- 27 Sep, 2023 1 commit
-
-
Binbin authored
Recently we found some signal crashes, but are unable to reproduce them. It is a good idea to dump the server logs when a failure happens.
-
- 24 Sep, 2023 1 commit
-
-
meiravgri authored
In this PR we are adding the functionality to collect all the process's threads' backtraces.

## Changes made in this PR

### Introduce threads mngr API

The threads mngr API has 2 abilities:
* `ThreadsManager_init()` - registers to SIGUSR2. Called on server start-up.
* `ThreadsManager_runOnThreads()` - receives a list of pid_t and a callback, tells every thread in the list to invoke the callback, and returns the output collected by each invocation.

### Elaborating the atomicvar API

* `atomicIncrGet(var,newvalue_var,count)` - increment and get the atomic counter's new value
* `atomicFlagGetSet` - get and set the atomic counter value to 1

### Always set SIGALRM handler

The SIGALRM handler prints the process's stacktrace to the log file. Up until now, it was set only if `server.watchdog_period` > 0. This can also be useful if debugging is needed. However, in situations where the server can't get requests (a deadlock, for example), we weren't able to change the signal handler. To make it available at run time we set the SIGALRM handler on server startup. The signal handler name was changed to a more general `sigalrmSignalHandler`.

### Print all the process's threads' stacktraces

`logStackTrace()` now calls `writeStacktraces()`, instead of logging the current thread's stacktrace. `writeStacktraces()`:
* On Linux systems we use the threads manager API to collect the backtraces of all the process's threads. To get the `tids` list (thread ids) we read the `/proc/<redis-server-pid>/task` directory, which includes a list of directories. Each directory name corresponds to one tid (including the main thread). For each thread, we also need to check if it can get the signal from the threads manager (meaning it is not blocking/ignoring that signal). We send the threads manager this tids list and the `collect_stacktrace_data()` callback, which collects the thread's backtrace addresses, its name, and tid. A sketch of this mechanism follows the output example below.
* On other systems, the behavior remained as it was (writing only the current thread's stacktrace to the log file).

## Compatibility notes

1. **The threads mngr API is only supported in Linux.**
2. glibc earlier than 2.3: we use `syscall(SYS_gettid)` and `syscall(SYS_tgkill...)` because their dedicated alternatives (`gettid()` and `tgkill()`) were only added in glibc 2.3.

## Output example

Each thread backtrace will have the following format: `<tid> <thread_name> [additional_info]`
* **tid**: as read from the `/proc/<redis-server-pid>/task` directory
* **thread_name**: the thread name as it is registered in the OS
* **additional_info**: sometimes we want to add specific information about one of the threads. Currently it is only used to mark the thread that handles the backtrace collection by adding "*". In case of crash - this also indicates which thread caused the crash. The handling thread won't necessarily appear first.
```
------ STACK TRACE ------
EIP:
/lib/aarch64-linux-gnu/libc.so.6(epoll_pwait+0x9c)[0xffffb9295ebc]

67089 redis-server *
linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffffb9437790]
/lib/aarch64-linux-gnu/libc.so.6(epoll_pwait+0x9c)[0xffffb9295ebc]
redis-server *:6379(+0x75e0c)[0xaaaac2fe5e0c]
redis-server *:6379(aeProcessEvents+0x18c)[0xaaaac2fe6c00]
redis-server *:6379(aeMain+0x24)[0xaaaac2fe7038]
redis-server *:6379(main+0xe0c)[0xaaaac3001afc]
/lib/aarch64-linux-gnu/libc.so.6(+0x273fc)[0xffffb91d73fc]
/lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0x98)[0xffffb91d74cc]
redis-server *:6379(_start+0x30)[0xaaaac2fe0370]

67093 bio_lazy_free
/lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
/lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
/lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
/lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]

67091 bio_close_file
/lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
/lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
/lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
/lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]

67092 bio_aof
/lib/aarch64-linux-gnu/libc.so.6(+0x79dfc)[0xffffb9229dfc]
/lib/aarch64-linux-gnu/libc.so.6(pthread_cond_wait+0x208)[0xffffb922c8fc]
redis-server *:6379(bioProcessBackgroundJobs+0x174)[0xaaaac30976e8]
/lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffb922d5c8]
/lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffb9295d1c]

67089:signal-handler (1693824528) --------
```
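A minimal sketch of the Linux mechanism described above: enumerate the process's thread ids from `/proc/self/task` and deliver a signal to each one via `syscall(SYS_tgkill)` (used instead of the glibc wrappers, per the compatibility notes). This is illustrative and trims all the real error handling, tid filtering, and output collection:

```c
/* Sketch: list thread ids from /proc/self/task and signal each one. */
#define _GNU_SOURCE
#include <dirent.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

void signal_all_threads(int sig) {
    DIR *dir = opendir("/proc/self/task");
    if (!dir) return;
    pid_t tgid = getpid();
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (ent->d_name[0] == '.') continue; /* skip "." and ".." */
        pid_t tid = (pid_t)atoi(ent->d_name); /* directory name == tid */
        /* syscall() because gettid()/tgkill() wrappers need glibc >= 2.3 */
        syscall(SYS_tgkill, tgid, tid, sig);
    }
    closedir(dir);
}

static void handler(int sig) { (void)sig; /* collect backtrace here */ }

int main(void) {
    signal(SIGUSR2, handler);       /* as ThreadsManager_init registers */
    signal_all_threads(SIGUSR2);    /* every thread runs the callback   */
    return 0;
}
```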
-
- 08 Sep, 2023 1 commit
-
-
Binbin authored
and adjustments.
-
- 02 Sep, 2023 1 commit
-
-
alonre24 authored
Recently, the option of sending an argument from stdin using the `-x` flag was added to redis-benchmark (this option is available in redis-cli as well). However, using the `-x` option for sending blobs that contain null characters doesn't work as expected - the argument is trimmed at the first occurrence of `\x00` (unlike in redis-cli). This PR aims to fix this issue and add support for every binary string input, by sending argument lengths to `redisFormatCommandArgv` when processing the redis-benchmark command, so we won't treat the arguments as C-strings. Additionally, we add simple test coverage for `-x` (without binary strings), remove an excessive server started in tests, and make sure to select db 0 so that `r` and the benchmark work on the same db.

Co-authored-by: Oran Agra <oran@redislabs.com>
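A small sketch of the idea behind the fix, using the hiredis API named above: passing explicit argument lengths (`argvlen`) keeps embedded NUL bytes intact instead of treating arguments as C strings. The example payload is made up; treat it as an illustration rather than the benchmark's actual code path:

```c
/* Sketch: explicit lengths preserve binary-safe arguments. */
#include <hiredis/hiredis.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char blob[] = "be\0ef";                 /* contains a NUL byte */
    const char *argv[] = {"SET", "key", blob};
    size_t argvlen[] = {3, 3, sizeof(blob) - 1};  /* 5 bytes, not 2      */

    char *cmd = NULL;
    long len = redisFormatCommandArgv(&cmd, 3, argv, argvlen);
    /* With argvlen the RESP payload says $5, so the full blob is sent;
     * treating blob as a C string would send $2, truncated at the NUL. */
    printf("formatted %ld bytes\n", len);
    free(cmd);
    return 0;
}
```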
-
- 01 Sep, 2023 1 commit
-
-
Binbin authored
The new test added in #12476 causes reply-schemas-validator to fail. When doing `catch {r get key}`, the req-res output is:
```
3
get
3
key
12
__argv_end__
$100000
aaaaaaaaaaaaaaaaaaaa...4
info
5
stats
12
__argv_end__
=1670
txt:# Stats
...
```
And we can see that in the line after `$100000` there is a 4 at the end, which breaks the req-res-log-validator script since the format is wrong. I guess the reason is that after the client reconnection (after the output buf limit), we will not add newlines, but append args directly. Since obuf-limits.tcl is doing the same thing, and it had the logreqres:skip flag, this PR is following it.
-
- 31 Aug, 2023 2 commits
-
-
Chen Tianjie authored
ZRANGE BYSCORE/BYLEX with the [LIMIT offset count] option was using every level in the skiplist to jump to the first/last node in range, but only used level[0] in the skiplist to locate the node at offset, resulting in sub-optimal performance using LIMIT:
```
while (ln && offset--) {
    if (reverse) {
        ln = ln->backward;
    } else {
        ln = ln->level[0].forward;
    }
}
```
It could be slow when offset is very big. We can get the total rank of the offset location and use the skiplist to jump to it. It is an improvement from O(offset) to O(log rank).

Below shows how this is implemented (if the offset is positive): use the skiplist to search for the first element in the range, and record its rank `rank_0`, so we can have the rank of the target node `rank_t`. Meanwhile we record the last node we visited which has zsl->level-1 levels and its rank `rank_1`. Then we start from the zsl->level-1 node, and use the skiplist to go forward `rank_t-rank_1` nodes to reach the target node. It is very similar when the offset is reversed. Note that if `rank_t` is very close to `rank_0`, we just start from the first element in range and go node by node; this is for the case when the zsl->level-1 node is too far away and it is quicker to reach the target node by walking node by node. (A sketch of the rank-based jump follows this message.)

Here is a test using a randomly generated zset including 10000 elements (with different positive scores), doing a benchmark which compares how fast the `ZRANGE` command is executed before and after the optimization. The start score is set to 0 and the count is set to 1 to make sure that most of the time is spent on locating the offset.
```
memtier_benchmark -h 127.0.0.1 -p 6379 --command="zrange test 0 +inf byscore limit <offset> 1"
```
| offset | QPS(unstable) | QPS(optimized) |
|--------|--------|--------|
| 10 | 73386.02 | 74819.82 |
| 1000 | 48084.96 | 73177.73 |
| 2000 | 31156.79 | 72805.83 |
| 5000 | 10954.83 | 71218.21 |

With the result above, we can see that the original code is greatly slowed down when offset gets bigger, and with the optimization the speed is almost not affected.

Similar results are generated when testing reversed offset:
```
memtier_benchmark -h 127.0.0.1 -p 6379 --command="zrange test +inf 0 byscore rev limit <offset> 1"
```
| offset | QPS(unstable) | QPS(optimized) |
|--------|--------|--------|
| 10 | 74505.14 | 71653.67 |
| 1000 | 46829.25 | 72842.75 |
| 2000 | 28985.48 | 73669.01 |
| 5000 | 11066.22 | 73963.45 |

And the same conclusion is drawn from the tests of ZRANGE BYLEX.
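A simplified sketch of the rank-based jump, modeled on the span bookkeeping Redis skiplists already use (compare `zslGetElementByRank`); the struct layout and names here are illustrative, not the actual zset code:

```c
/* Sketch: each forward pointer stores a span (nodes it skips), so we
 * can reach the node at a given rank in O(log rank) instead of
 * stepping level-0 links one by one. */
#include <stdio.h>

#define MAXLEVEL 32

typedef struct node {
    double score;
    struct {
        struct node *forward;
        unsigned long span;   /* nodes skipped by this link */
    } level[MAXLEVEL];
} node;

/* Return the node at 1-based rank, starting from the header. */
node *get_by_rank(node *header, int levels, unsigned long rank) {
    node *x = header;
    unsigned long traversed = 0;
    for (int i = levels - 1; i >= 0; i--) {
        while (x->level[i].forward &&
               traversed + x->level[i].span <= rank) {
            traversed += x->level[i].span; /* jump whole spans at once */
            x = x->level[i].forward;
        }
        if (traversed == rank) return x;
    }
    return NULL;
}

int main(void) {
    /* Toy 1-level list 1.0 -> 2.0 -> 3.0 (all spans are 1). */
    node header = {0}, n1 = {0}, n2 = {0}, n3 = {0};
    n1.score = 1.0; n2.score = 2.0; n3.score = 3.0;
    header.level[0].forward = &n1; header.level[0].span = 1;
    n1.level[0].forward = &n2; n1.level[0].span = 1;
    n2.level[0].forward = &n3; n2.level[0].span = 1;
    node *x = get_by_rank(&header, 1, 3);
    printf("rank 3 -> score %.1f\n", x ? x->score : -1.0); /* 3.0 */
    return 0;
}
```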
-
Binbin authored
Also added a test to cover this case, so this can cover the reply schemas check.
-
- 30 Aug, 2023 3 commits
-
-
Roshan Khatri authored
This PR adds a new Module API `int RM_AddACLCategory(RedisModuleCtx *ctx, const char *category_name)` to add a new ACL command category. Here, we initialize the ACLCommandCategories array by allocating space for 64 categories, and duplicate the 21 default categories from the predefined array 'ACLDefaultCommandCategories' into the ACLCommandCategories array during ACL initialization. Valid ACL category names can only contain alphanumeric characters, underscores, and dashes. When called, the API checks for the onload flag, category name validity, and for a duplicate category name. If the conditions are satisfied, the API adds the new category to the trailing end of the ACLCommandCategories array and assigns the acl_categories flag bit according to the index at which the category is added. If any error is encountered, the errno is set accordingly by the API.

---------
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
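A sketch of how a module might call this from its load function, assuming the conventional `RedisModule_` prefix mapping for `RM_AddACLCategory`; the "example" module name and "mycategory" are illustrative, not from the PR:

```c
/* Sketch: registering a custom ACL category at module load time. */
#include "redismodule.h"

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv,
                       int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "example", 1, REDISMODULE_APIVER_1) ==
        REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Must be called during OnLoad; fails (with errno set) on an
     * invalid or duplicate name, or if the category table is full. */
    if (RedisModule_AddACLCategory(ctx, "mycategory") == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```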
-
bodong.ybd authored
Before:
```
127.0.0.1:6379> command getkeys sort_ro key
(empty array)
127.0.0.1:6379>
```
After:
```
127.0.0.1:6379> command getkeys sort_ro key
1) "key"
127.0.0.1:6379>
```
-
Chen Tianjie authored
Add these INFO metrics:
* client_query_buffer_limit_disconnections
* client_output_buffer_limit_disconnections

Sometimes it is useful to monitor whether clients reach the size limits of the query buffer and output buffer, to decide whether we need to adjust the buffer size limit or reduce the client query payload.
-
- 27 Aug, 2023 1 commit
-
-
Binbin authored
This test failed several times:
```
*** [err]: LATENCY GRAPH can output the event graph in tests/unit/latency-monitor.tcl
Expected '478' to be more than or equal to '500' (context: type eval line 8 cmd {assert_morethan_equal $high 500} proc ::test)
```
Not sure why; adding some verbose printing that'll print the command result the next time it fails.
-
- 21 Aug, 2023 2 commits
-
-
Binbin authored
BITCOUNT and BITPOS with a non-existing key would return 0 even if the arguments are in error. Before this commit:
```
> flushall
OK
> bitcount s 0
(integer) 0
> bitpos s 0 0 1 hello
(integer) 0
> set s 1
OK
> bitcount s 0
(error) ERR syntax error
> bitpos s 0 0 1 hello
(error) ERR syntax error
```
The reason is that we checked for the non-existing key before parameter checking and returned. This PR fixes it; after this commit:
```
> flushall
OK
> bitcount s 0
(error) ERR syntax error
> bitpos s 0 0 1 hello
(error) ERR syntax error
```
Also BITPOS got the same fix as #12394: check for wrong arguments before checking for the key.
```
> lpush mylist a b c
(integer) 3
> bitpos mylist 1 a b
(error) WRONGTYPE Operation against a key holding the wrong kind of value
```
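A generic sketch of the ordering this fix (and the one below) applies. The helper names are hypothetical stand-ins for the real parsing/lookup code: validate arguments first, so a missing key no longer hides a syntax error.

```c
/* Sketch: argument validation before the missing-key early return. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stubs standing in for the real parsing/lookup (illustrative only). */
static bool parse_long(const char *s, long *out) {
    char *end;
    *out = strtol(s, &end, 10);
    return *s != '\0' && *end == '\0';
}
static void *lookup_key(const char *key) { (void)key; return NULL; }

void bitcount_command(const char *key, const char *start, const char *end) {
    long s, e;
    /* Fix: validate arguments *before* the missing-key early return,
     * so "bitcount no-key 0 hello" errors instead of returning 0. */
    if (!parse_long(start, &s) || !parse_long(end, &e)) {
        printf("(error) ERR value is not an integer or out of range\n");
        return;
    }
    if (lookup_key(key) == NULL) {
        printf("(integer) 0\n"); /* empty key, but only with valid args */
        return;
    }
    (void)s; (void)e; /* ... count the bits in the range [s, e] ... */
}

int main(void) {
    bitcount_command("no-key", "0", "hello"); /* now an error */
    return 0;
}
```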
-
Wen Hui authored
Generally, in any command we first check the arguments and then check if the key exists. Some examples are:
```
127.0.0.1:6379> getrange no-key invalid1 invalid2
(error) ERR value is not an integer or out of range
127.0.0.1:6379> setbit no-key 1 invalid
(error) ERR bit is not an integer or out of range
127.0.0.1:6379> xrange no-key invalid1 invalid2
(error) ERR Invalid stream ID specified as stream command argument
```
**Before change**
```
bitcount no-key invalid1 invalid2
0
```
**After change**
```
bitcount no-key invalid1 invalid2
(error) ERR value is not an integer or out of range
```
-
- 20 Aug, 2023 1 commit
-
-
Wen Hui authored
In our test suite, we currently miss some test coverage for client sub-commands. This PR's goal is to add some test coverage for the following commands: CLIENT CACHING, CLIENT KILL, CLIENT NO-EVICT, CLIENT PAUSE, CLIENT REPLY, CLIENT TRACKING, CLIENT SETNAME. At the very least, this is useful to make sure there are no leaks and crashes in these code paths.
-
- 16 Aug, 2023 1 commit
-
-
Oran Agra authored
Add 7.2, drop 6.0 as per https://redis.io/docs/about/releases/ Also replace a few occurrences of the `’` char with the standard `'`.
-
- 10 Aug, 2023 2 commits
-
-
Madelyn Olson authored
When a new ACL rule was added, an attempt was made to remove any "overlapping" rules. However, when a match was found, the search was not resumed at the right location, but instead after the original position of the original command. For example, if the current rules were `-config +config|get` and a rule `+config` was added, it would identify that `-config` was matched, but it would skip over `+config|get`, leaving the compacted rule `-config +config`. This would be evaluated safely, but looks weird. This bug can only be triggered with subcommands, since that is the only way to have sequential matching rules. Resolves #12470. This is also only present in 7.2. I think there was also a minor risk of removing another valid rule, since it would start the search of the next command at an arbitrary point. I couldn't find a valid offset that would have caused a match using any of the existing commands that have subcommands with another command.
-
Binbin authored
After SENTINEL RESET, sometimes the sentinel can sense the master again, causing the test to fail. Here we give it a few more chances.
-
- 05 Aug, 2023 2 commits
-
-
zhaozhao.zz authored
Fix the assertion when a busy script (timeout) signals ready keys (like LPUSH), and then an arbitrary client's `allow-busy` command steps into `handleClientsBlockedOnKeys` and tries to wake up clients blocked on keys (like BLPOP).

Reproduction process:
1. start a redis with aof: `./redis-server --appendonly yes`
2. exec blpop: `127.0.0.1:6379> blpop a 0`
3. use another client to call a busy script that pushes the blocked key: `127.0.0.1:6379> eval "redis.call('lpush','a','b') while(1) do end" 0`
4. use a new client to call an allow-busy command like auth: `127.0.0.1:6379> auth a`

BTW, this issue also breaks the atomicity of scripts. This bug has been around for many years; the old versions only have the atomicity problem, only 7.0/7.2 has the assertion problem.

Co-authored-by: Oran Agra <oran@redislabs.com>
-
sundb authored
This PR mainly fixes a possible integer overflow in `json_append_string()`. When we use `cjson.encode()` to encode a string larger than 2GB, under specific compilation flags, an integer overflow may occur, leading to truncation: the part of the string larger than 2GB is not encoded. On the other hand, this overflow doesn't cause any out-of-range read or write, or segmentation fault.

1) using -O0 for lua_cjson (`make LUA_DEBUG=yes`)

In this case, `i` will overflow, which leads to truncation. When `i` reaches `INT_MAX+1` it overflows to INT_MIN; when compared to len, `i` (1000000..00) is expanded to a 64-bit signed integer (1111111.....000000). At this point `i` will be greater than len (in the unsigned comparison) and jump out of the loop, so `for (i = 0; i < len; i++)` will loop up to 2^31 times, and the part larger than 2GB will be truncated.
```asm
`i` => -0x24(%rbp)
<+253>: addl $0x1,-0x24(%rbp)   ; overflow if i larger than 2^31
<+257>: mov -0x24(%rbp),%eax
<+260>: movslq %eax,%rdx        ; move a 32-bit value with sign extension into a 64-bit signed
<+263>: mov -0x20(%rbp),%rax
<+267>: cmp %rax,%rdx           ; check `i < len`
<+270>: jb 0x212600 <json_append_string+148>
```

2) using -O2/-O3 for lua_cjson (`make LUA_DEBUG=no`, **the default**)

In this case, because signed integer overflow is undefined behavior, `i` will not overflow. `i` will be optimized by the compiler to use 64-bit registers for all subsequent instructions.
```asm
<+180>: add $0x1,%rbx           ; using 64-bit register `rbx` for i++
<+184>: lea 0x1(%rdx),%rsi
<+188>: mov %rsi,0x10(%rbp)
<+192>: mov %al,(%rcx,%rdx,1)
<+195>: cmp %rbx,(%rsp)         ; check `i < len`
<+199>: ja 0x20b63a <json_append_string+154>
```

3) using 32bit

Because `strbuf_ensure_empty_length()` preallocates memory of length (len * 6 + 2), in 32-bit `cjson.encode()` can only handle strings smaller than ((2^32) - 3) / 6, so 32bit is not affected.

Also changed `i` in `strbuf_append_string()` to `size_t`. Since its second argument `str` is taken from the `char2escape` string array, which is never larger than 6, `strbuf_append_string()` is not at risk of overflow (the bug was unreachable).
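A self-contained illustration of the comparison failure in case 1 above. It reproduces the wrapped value directly (via unsigned arithmetic, to avoid the undefined signed overflow) rather than running the 2^31-iteration loop:

```c
/* Sketch: a 32-bit index that wrapped to INT_MIN, sign-extended to
 * 64 bits, compares as *greater* than a >2GB unsigned length, so the
 * encode loop exits early and the string is truncated. */
#include <limits.h>
#include <stdio.h>

int main(void) {
    size_t len = (size_t)INT_MAX + 100; /* length of a >2GB string */
    int i = INT_MAX;
    /* What i++ wraps to at 2^31 on typical two's-complement targets: */
    i = (int)((unsigned)i + 1);         /* INT_MIN */
    /* (size_t)i sign-extends to 0xFFFFFFFF80000000, a huge value:   */
    printf("i < len ? %s\n", (size_t)i < len ? "yes" : "no"); /* no */
    /* The fix: a size_t index can represent the whole length. */
    for (size_t j = 0; j < 3; j++) printf("size_t j = %zu is fine\n", j);
    return 0;
}
```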
-
- 02 Aug, 2023 1 commit
-
-
zhaozhao.zz authored
When doing merge selector, we should check whether the merge has started (i.e., whether open_bracket_start is -1) every time. Otherwise, encountering an illegal selector pattern could succeed and also cause memory leaks, for example:
```
acl setuser test1 (+PING (+SELECT (+DEL )
```
The above would leak memory and succeed with only DEL being applied, and would now error after the fix.

Co-authored-by: Oran Agra <oran@redislabs.com>
-
- 01 Aug, 2023 1 commit
-
-
Diego Lopez Recas authored
Changing the masterauth while turning into a replica is racy. Turn into a replica after changing the masterauth instead.
-
- 30 Jul, 2023 1 commit
-
-
DarrenJiang13 authored
In the tcl foreach loop, the function should compare each line rather than the whole file.
-
- 25 Jul, 2023 2 commits
-
-
Harkrishn Patro authored
Additional test coverage for the incr/decr operation. An integer value could be present in raw encoding format due to an operation like APPEND. An incr/decr operation following it optimizes the string to int encoding format.
-
Harkrishn Patro authored
Add test coverage to validate the object encoding update on APPEND command usage on an integer value.
-
- 20 Jul, 2023 1 commit
-
-
Makdon authored
When the redis server cluster is running in cluster-preferred-endpoint-type unknown-endpoint mode, and receives a request that should be redirected to another redis server node, it does not reply with the host/IP, but with an empty host, like `MOVED 3999 :6381`. redis-cli would then try to connect to an address without a host, which causes the issue:
```
127.0.0.1:7002> set bar bar
-> Redirected to slot [5061] located at :7000
Could not connect to Redis at :7000: No address associated with hostname
Could not connect to Redis at :7000: No address associated with hostname
not connected> exit
```
In this case, redis-cli should use the previous host/IP when there's no host provided by the server.

---------
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Madelyn Olson <madelynolson@gmail.com>
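A sketch of the client-side fallback described above; `parse_moved` and its handling are hypothetical simplified stand-ins for redis-cli's redirect parsing, not the actual code:

```c
/* Sketch: if a -MOVED reply carries an empty host like ":7000", reuse
 * the host of the current connection and only switch the port. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void parse_moved(const char *reply, const char *cur_host,
                 char *host, size_t hostlen, int *port) {
    /* reply looks like "MOVED 3999 127.0.0.1:6381" or "MOVED 3999 :6381" */
    const char *addr = strrchr(reply, ' ') + 1;
    const char *colon = strrchr(addr, ':');
    size_t n = (size_t)(colon - addr);
    if (n == 0) {
        /* Empty host: fall back to the host we're already talking to. */
        snprintf(host, hostlen, "%s", cur_host);
    } else {
        snprintf(host, hostlen, "%.*s", (int)n, addr);
    }
    *port = atoi(colon + 1);
}

int main(void) {
    char host[128];
    int port;
    parse_moved("MOVED 3999 :7000", "127.0.0.1", host, sizeof(host), &port);
    printf("redirect to %s:%d\n", host, port); /* 127.0.0.1:7000 */
    return 0;
}
```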
-
- 10 Jul, 2023 1 commit
-
-
Oran Agra authored
* Fix integer overflows due to using wrong integer size.
* Add assertions / panic when overflow still happens.
* Delete dead code to avoid the need to maintain it.
* Some changes are not because of bugs, but rather paranoia.
* Improve cmsgpack and cjson test coverage.

Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
-
- 06 Jul, 2023 1 commit
-
-
Sankar authored
Process loss of slot ownership in the cluster bus. When a node no longer owns a slot, it clears the bit corresponding to the slot in the cluster bus messages. The receiving nodes currently don't record the fact that the sender stopped claiming a slot until some other node in the cluster starts claiming the slot. This can cause a slot to go missing during slot migration when subjected to an inopportune race with the addition of new shards or a failover. This fix forces the receiving nodes to process the loss of ownership, to avoid spreading wrong information.
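A sketch of the receiving-side behavior this describes, using hypothetical simplified structures (a flat ownership table and a claim bitmap) rather than Redis's actual clusterNode/clusterMsg layout:

```c
/* Sketch: when a sender that owned a slot stops claiming it, clear the
 * ownership right away instead of waiting for another claimant. */
#include <stdio.h>
#include <string.h>

#define SLOTS 16384
unsigned char claimed[SLOTS / 8]; /* slot bitmap from the bus message */
int slot_owner[SLOTS];            /* node id owning each slot, -1 = none */

int bitmap_test(const unsigned char *bm, int slot) {
    return (bm[slot / 8] >> (slot % 8)) & 1;
}

void process_sender_slots(int sender_id) {
    for (int slot = 0; slot < SLOTS; slot++) {
        if (slot_owner[slot] == sender_id && !bitmap_test(claimed, slot)) {
            /* The fix: record the loss of ownership immediately. */
            slot_owner[slot] = -1;
        }
    }
}

int main(void) {
    memset(slot_owner, -1, sizeof(slot_owner));
    slot_owner[42] = 7;                  /* node 7 owned slot 42        */
    memset(claimed, 0, sizeof(claimed)); /* node 7 no longer claims it  */
    process_sender_slots(7);
    printf("slot 42 owner: %d\n", slot_owner[42]); /* -1 (unowned) */
    return 0;
}
```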
-
- 05 Jul, 2023 1 commit
-
-
Binbin authored
The test fails on the freebsd CI:
```
*** [err]: stats: eventloop metrics in tests/unit/info.tcl
Expected '31777' to be less than '16183' (context: type eval line 17 cmd {assert_lessthan $el_sum2 [expr $el_sum1+10000]} proc ::test)
```
The test added in #11963 fails on the freebsd CI, which is slow. Increase the tolerance and also add some verbose logs; now we can see these logs in verbose mode (for better views):
```
eventloop metrics cycle1: 12, cycle2: 15
eventloop metrics el_sum1: 315, el_sum2: 411
eventloop metrics cmd_sum1: 126, cmd_sum2: 137
[ok]: stats: eventloop metrics (111 ms)
instantaneous metrics instantaneous_eventloop_cycles_per_sec: 8
instantaneous metrics instantaneous_eventloop_duration_usec: 55
[ok]: stats: instantaneous metrics (1603 ms)
[ok]: stats: debug metrics (112 ms)
```
-
- 03 Jul, 2023 1 commit
-
-
Lior Lahav authored
When getKeysUsingKeySpecs processes a command with more than one key-spec, and is called with a total of more than 256 keys, it'll call getKeysPrepareResult again, but since numkeys isn't updated, getKeysPrepareResult will not bother to copy key names from the old result (leaving these slots uninitialized). Furthermore, it did not consider the keys it already found when allocating more space.

Co-authored-by: Oran Agra <oran@redislabs.com>
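A sketch of the growth pattern the fix needs, with hypothetical simplified names (the real code lives in Redis's key-spec handling): keep the found-keys count up to date before growing, size the new buffer to include the keys already found, and copy them across the reallocation.

```c
/* Sketch: growing a keys result while preserving existing entries. */
#include <stdlib.h>
#include <string.h>

typedef struct { int pos; } keyRef;
typedef struct { keyRef *keys; int numkeys; int size; } keysResult;

keyRef *grow_result(keysResult *r, int count) {
    if (count <= r->size) return r->keys;
    keyRef *newkeys = malloc(sizeof(keyRef) * count);
    /* The bug class described above: if r->numkeys was never updated,
     * this copies nothing and the old entries stay uninitialized. */
    if (r->keys) {
        memcpy(newkeys, r->keys, sizeof(keyRef) * r->numkeys);
        free(r->keys);
    }
    r->keys = newkeys;
    r->size = count;
    return newkeys;
}

int main(void) {
    keysResult r = {0};
    grow_result(&r, 256);
    r.numkeys = 200;              /* found 200 keys from key-spec #1 */
    /* key-spec #2 needs room for the 200 already found + 100 more: */
    grow_result(&r, r.numkeys + 100);
    free(r.keys);
    return 0;
}
```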
-
- 02 Jul, 2023 1 commit
-
-
Binbin authored
The negative offset check was added in #9052. We realized that this is a non-mandatory breaking change and we would like to add it only in 8.0. This reverts PR #9052; it will be re-introduced later in 8.0.
-