- 30 May, 2024 1 commit
jonghoonpark authored

**Related issue** https://github.com/redis/redis/issues/13219

**Motivation**
Currently we have to manually update the all_tests variable when introducing new test files.

**Modification**
I have modified it to list test files dynamically. Rather than adding all test files, it only adds test files from the following 4 paths:
- unit
- unit/type
- unit/cluster
- integration

so that it doesn't deviate too much from what we already do.

**Result**
- dynamically list test files in the all_tests variable
- close issue https://github.com/redis/redis/issues/13219

**Additional information**
- Removed the `list-common.tcl` file and added a `generate_largevalue_test_array` proc in `util.tcl`, because `list-common.tcl` is not a test file.
- There is an order dependency, so I added code to the "Is a ziplist encoded Hash promoted on big payload?" test that resets hash-max-listpack-value to the default (64).

Signed-off-by: jonghoonpark <dev@jonghoonpark.com>
Co-authored-by: debing.sun <debing.sun@redis.com>
-
- 29 May, 2024 1 commit
Moti Cohen authored

* For replica sake, rewrite commands `H*EXPIRE*`, `HSETF`, `HGETF` to have absolute unix time in msec.
* On active-expiration of a field, propagate HDEL to the replica (`propagateHashFieldDeletion()`).
* On lazy-expiration, propagate HDEL to the replica (`hashTypeGetValue()` now calls `hashTypeDelete()`; it also takes care to call `propagateHashFieldDeletion()`).
* Fix `H*EXPIRE*` commands such that if given the `LT` flag and the field doesn't have any expiration, it will be considered a valid condition.

Note, replicas don't perform any active expiration and should avoid lazy expiration. In `hashTypeGetValue()` they don't check expiration (as long as the master didn't request to delete the field, it is valid).

TODO:
* Attach `dbid` to HASH metadata. See [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)

Co-authored-by: debing.sun <debing.sun@redis.com>
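A minimal C sketch of the rewrite-to-absolute-time idea from the first bullet, with hypothetical helper names (this is not the Redis source): a relative TTL is converted to an absolute unix time in msec before propagation, so master and replica agree on the expiry no matter when the replica applies the command.

```c
#include <stdio.h>
#include <stdint.h>
#include <sys/time.h>

static int64_t now_ms(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (int64_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* Convert a relative expire (e.g. a seconds argument) into the
 * absolute msec value that would be replicated (HPEXPIREAT-style). */
static int64_t to_absolute_ms(int64_t relative_sec) {
    return now_ms() + relative_sec * 1000;
}

int main(void) {
    printf("HPEXPIREAT mykey %lld FIELDS 1 f1\n",
           (long long)to_absolute_ms(10)); /* 10s from now, absolute */
    return 0;
}
```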
-
- 28 May, 2024 1 commit
Ozan Tezcan authored

In the last step of hscan, while replying to the client, we assume all items in the result list are keys, which are mstr instances. However, there might be values, which are sds instances. Added a check to avoid calling mstrlen() for value objects.

To reproduce:
```
127.0.0.1:6379> hset myhash1 a 11111111111111111111111111111111111111111111111111111111111111111
(integer) 0
127.0.0.1:6379> hscan myhash1 0
1) "0"
2) 1) "a"
   2) "11111111111111111111111111111111111111111111111111111111111111111\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
```
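A small C demonstration of why the wrong length function corrupts the reply. Both mstr and sds keep metadata in a header before the char pointer, but the headers differ, so reading the wrong header yields a bogus length (hence the trailing \x00 bytes in the repro above). The layouts below are simplified stand-ins, not the real Redis ones.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Made-up layouts: "sds" header is 4 bytes of length, "mstr" header
 * is 2 bytes of length. Real Redis headers are more involved. */
static char *make_sds(const char *s) {
    uint32_t len = (uint32_t)strlen(s);
    char *p = malloc(4 + len + 1);
    memcpy(p, &len, 4);
    memcpy(p + 4, s, len + 1);
    return p + 4;                 /* callers hold the char pointer */
}
static size_t sdslen_(const char *p)  { uint32_t l; memcpy(&l, p - 4, 4); return l; }
static size_t mstrlen_(const char *p) { uint16_t l; memcpy(&l, p - 2, 2); return l; }

int main(void) {
    char *val = make_sds("hello");
    printf("sdslen:  %zu\n", sdslen_(val));   /* 5: correct header */
    printf("mstrlen: %zu\n", mstrlen_(val));  /* wrong header, bogus length */
    free(val - 4);
    return 0;
}
```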
-
- 26 May, 2024 2 commits
Ozan Tezcan authored

Changes:
- Delete hsetf and hgetf commands
- HFE commands will return an empty array instead of nil.

Co-authored-by: Moti Cohen <moticless@gmail.com>
-
Moti Cohen authored
-
- 23 May, 2024 1 commit
Moti Cohen authored

Added hashes_with_expiry_fields. Optimally it would be better to have a statistic that counts all fields with expiry, but that requires careful logic and computation to follow, and a deep dive into listpacks and hashes. This statistic is trivial to achieve and is reflected by the global HFE DS, which has builtin enumeration of all the hashes that are registered in it.
-
- 22 May, 2024 1 commit
debing.sun authored

Add the following validations:
1. Get TTL using the lpGetIntegerValue() method instead of lpGetValue(), Ref https://github.com/redis/redis/pull/13209#discussion_r1602569422
2. The TTL of listpackex is a number in the valid range (0~EB_EXPIRE_TIME_MAX).
3. The TTL fields of listpackex are ordered.
4. The TTL of hashtable is within the valid range (0~EB_EXPIRE_TIME_MAX).

Other:
Fix the missing handling of OBJ_ENCODING_LISTPACK_EX in dismissHashObject().

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
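A hedged C sketch of validations 2-4 (not the Redis validator itself): each TTL must be inside [0, EB_EXPIRE_TIME_MAX] and TTLs must be non-decreasing. The real EB_EXPIRE_TIME_MAX value is defined in the Redis source; a placeholder bound is used here.

```c
#include <stdint.h>
#include <stdio.h>

#define EB_EXPIRE_TIME_MAX ((1ULL << 48) - 1) /* placeholder bound */

static int validate_ttls(const uint64_t *ttls, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (ttls[i] > EB_EXPIRE_TIME_MAX) return 0;  /* out of range */
        if (i > 0 && ttls[i] < ttls[i-1]) return 0;  /* not ordered  */
    }
    return 1;
}

int main(void) {
    uint64_t ok[] = {10, 20, 20, 30}, bad[] = {10, 5};
    printf("%d %d\n", validate_ttls(ok, 4), validate_ttls(bad, 2)); /* 1 0 */
    return 0;
}
```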
-
- 21 May, 2024 1 commit
debing.sun authored

This PR is based on the commits from PR #12944.

Allow SPUBLISH command within multi/exec on replica.

Behavior on unstable:
```
127.0.0.1:6380> CLUSTER NODES
39ce8aa20f1f0d91f1a88d976ee1926dfefcdf1a 127.0.0.1:6380@16380 myself,slave 8b0feb120b68aac489d6a5af9c77dc40d71bc792 0 0 0 connected
8b0feb120b68aac489d6a5af9c77dc40d71bc792 127.0.0.1:6379@16379 master - 0 1705091681202 0 connected 0-16383
127.0.0.1:6380> SPUBLISH hello world
(integer) 0
127.0.0.1:6380> MULTI
OK
127.0.0.1:6380(TX)> SPUBLISH hello world
QUEUED
127.0.0.1:6380(TX)> EXEC
(error) MOVED 866 127.0.0.1:6379
```

With this change:
```
127.0.0.1:6380> SPUBLISH hello world
(integer) 0
127.0.0.1:6380> MULTI
OK
127.0.0.1:6380(TX)> SPUBLISH hello world
QUEUED
127.0.0.1:6380(TX)> EXEC
1) (integer) 0
```

Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: oranagra <oran@redislabs.com>
-
- 18 May, 2024 1 commit
Ozan Tezcan authored

FIELDS keyword was added as part of [#13270](https://github.com/redis/redis/pull/13270). It was missing in [#13243](https://github.com/redis/redis/pull/13243).
-
- 17 May, 2024 1 commit
Ronen Kalish authored

Add RDB de/serialization for HFE.

This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data. When the hash RAM encoding is dict, it will be saved in the former, and when it is listpack it will be saved in the latter. Both formats just add the TTL value for each field after the data that was previously saved, i.e. HASH_METADATA will save the number of entries and, for each entry, key, value and TTL, whereas listpack is saved as a blob.

On read, the usual dict <--> listpack conversion takes place if required. In addition, when reading a hash that was saved as a dict, fields are actively expired if expiry is due. Currently this also holds for listpack encoding, but it is supposed to be removed.

TODO: Remove active expiry on load when loading from listpack format (unless we'll decide to keep it).
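A rough C illustration of the dict-encoded layout described above (entry count, then key, value, TTL per field). The writer helpers and on-disk encoding here are made up for the sketch and are not the rdb.c API.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static void save_len(FILE *f, uint64_t v)    { fwrite(&v, sizeof v, 1, f); }
static void save_str(FILE *f, const char *s) {
    uint64_t l = strlen(s);
    save_len(f, l);
    fwrite(s, 1, l, f);
}

struct field { const char *key, *val; uint64_t ttl; };

static void save_hash_metadata(FILE *f, const struct field *fs, uint64_t n) {
    save_len(f, n);                       /* number of entries */
    for (uint64_t i = 0; i < n; i++) {    /* key, value, TTL per entry */
        save_str(f, fs[i].key);
        save_str(f, fs[i].val);
        save_len(f, fs[i].ttl);           /* 0 could mean "no TTL" */
    }
}

int main(void) {
    struct field fs[] = {{"f1", "v1", 1717000000000ULL}, {"f2", "v2", 0}};
    FILE *f = fopen("/tmp/hfe-demo.bin", "wb");
    if (!f) return 1;
    save_hash_metadata(f, fs, 2);
    fclose(f);
    return 0;
}
```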
-
- 16 May, 2024 1 commit
Moti Cohen authored

The same goes for: HPEXPIRE, HEXPIREAT, HPEXPIREAT, HEXPIRETIME, HPEXPIRETIME, HPTTL, HTTL, HPERSIST.
-
- 14 May, 2024 2 commits
debing.sun authored

This test was introduced by #13251. Normally we auto transform the reply format of XREADGROUP to array under RESP3 (see trasformer_funcs). But when we execute the XREADGROUP command in MULTI it can't work, which caused the new test to fail. The solution is to verify the reply of XREADGROUP in advance rather than in MULTI.

Failed validate schema CI: https://github.com/redis/redis/actions/runs/9025128323/job/24800285684

Co-authored-by: guybe7 <guy.benoish@redislabs.com>
-
debing.sun authored

## Background
1. All hash objects that contain HFE are referenced by db->hexpires.
2. All fields in a dict hash object with HFE are referenced by an ebucket.

So when we defrag the hash object or a field in a dict with HFE, we also need to update the references in them.

## Interface
1. Add a new interface `ebDefragItem`, which can accept a defrag callback to defrag items in ebuckets, and simultaneously update their references in the ebucket.

## Main changes
1. The key type of the dict of a hash object is no longer sds, so add a new `activeDefragHfieldDict()` to defrag the dict instead of `activeDefragSdsDict()`.
2. When we defrag the dict of a hash object by using `dictScanDefrag()`, we always set the defrag callback `defragKey` of `dictDefragFunctions` to NULL, because we can't reallocate a field without updating its reference in ebuckets. Instead, we will defrag the field of the dict and update its reference in the callback `dictScanDefrag` of dictScanFunction().
3. When we defrag the hash robj with HFE, we will use `ebDefragItem` to defrag the robj and update the reference in db->hexpires.

## TODO
Defrag the ebucket structure incrementally, which will be handled in a future PR.

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Moti Cohen <moti.cohen@redis.com>
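A C sketch of the general pattern behind an `ebDefragItem`-style interface, with hypothetical signatures (not the ebuckets API): a defrag pass may move an item to a fresh allocation, so the owner's reference must be rewritten in the same step, or it dangles.

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

typedef void *(*defrag_fn)(void *ptr);

/* Stand-in for "reallocate if it reduces fragmentation": here we
 * always move, to exercise the reference update. */
static void *demo_defrag_alloc(void *ptr) {
    size_t len = strlen((char *)ptr) + 1;
    void *np = malloc(len);
    memcpy(np, ptr, len);
    free(ptr);
    return np;
}

/* Defrag the item *and* rewrite the owner's reference in one step,
 * which is the point of an ebDefragItem-style interface. */
static void defrag_item(void **ref, defrag_fn cb) {
    void *np = cb(*ref);
    if (np) *ref = np; /* owner now points at the new allocation */
}

int main(void) {
    char *field = malloc(6);
    memcpy(field, "hello", 6);
    void *owner_ref = field;             /* e.g. a slot in an ebucket */
    defrag_item(&owner_ref, demo_defrag_alloc);
    printf("%s\n", (char *)owner_ref);
    free(owner_ref);
    return 0;
}
```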
-
- 13 May, 2024 1 commit
Ozan Tezcan authored

If encoding is listpack, hgetf and hsetf commands reply with the field value type as integer. This PR fixes it by returning string.

Problematic cases:
```
127.0.0.1:6379> hset hash one 1
(integer) 1
127.0.0.1:6379> hgetf hash fields 1 one
1) (integer) 1
127.0.0.1:6379> hsetf hash GETOLD fvs 1 one 2
1) (integer) 1
127.0.0.1:6379> hsetf hash DOF GETNEW fvs 1 one 2
1) (integer) 2
```

Additional fixes:
- hgetf/hsetf command description text

Fixes #13261, #13262
-
- 10 May, 2024 1 commit
ClaytonNorthey92 authored

Added reverse history search to redis-cli. Use it with the following:

* CTRL+R : enable search backward mode, and search the next one when pressing CTRL+R again, until reaching index 0.
```
127.0.0.1:6379> keys one
127.0.0.1:6379> keys two
(reverse-i-search):            # press CTRL+R
(reverse-i-search): keys two   # input `keys`
(reverse-i-search): keys one   # press CTRL+R again
(reverse-i-search): keys one   # press CTRL+R again, still `keys one` due to reaching index 0
(i-search): keys two           # press CTRL+S, enable search forward
(i-search): keys two           # press CTRL+S, still `keys two` due to reaching index 1
```

* CTRL+S : enable search forward mode, and search the next one when pressing CTRL+S again, until reaching index 0.
```
127.0.0.1:6379> keys one
127.0.0.1:6379> keys two
(i-search):                    # press CTRL+S
(i-search): keys one           # input `keys`
(i-search): keys two           # press CTRL+S again
(i-search): keys two           # press CTRL+S again, still `keys two` due to reaching index 0
(reverse-i-search): keys one   # press CTRL+R, enable search backward
(reverse-i-search): keys one   # press CTRL+R, still `keys one` due to reaching index 1
```

* CTRL+G : disable
```
127.0.0.1:6379> keys one
127.0.0.1:6379> keys two
(reverse-i-search):            # press CTRL+R
(reverse-i-search): keys two   # input `keys`
127.0.0.1:6379>                # press CTRL+G
```

* CTRL+C : disable
```
127.0.0.1:6379> keys one
127.0.0.1:6379> keys two
(reverse-i-search):            # press CTRL+R
(reverse-i-search): keys two   # input `keys`
127.0.0.1:6379>                # press CTRL+C
```

* TAB : use the current search result and exit search mode
```
127.0.0.1:6379> keys one
127.0.0.1:6379> keys two
(reverse-i-search):            # press CTRL+R
(reverse-i-search): keys two   # input `keys`
127.0.0.1:6379> keys two       # press TAB
```

* ENTER : use the current search result and execute the command
```
127.0.0.1:6379> keys one
127.0.0.1:6379> keys two
(reverse-i-search):            # press CTRL+R
(reverse-i-search): keys two   # input `keys`
127.0.0.1:6379> keys two       # press ENTER
(empty array)
127.0.0.1:6379>
```

* Any arrow key will disable reverse search.

Your result will have the search match bolded, and you can press enter to execute the full result.

Note: I have _only added this for multi-line mode_, as it seems to be forced that way when `repl` is called.

Closes: https://github.com/redis/redis/issues/8277

Co-authored-by: Clayton Northey <clayton@knowbl.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: debing.sun <debing.sun@redis.com>
Co-authored-by: Bjorn Svensson <bjorn.a.svensson@est.tech>
Co-authored-by: Viktor Söderqvist <viktor@zuiderkwast.se>
-
- 09 May, 2024 1 commit
debing.sun authored

1. Add `hpersist` notification for `hpersist` command.
2. Add `pexpire` notification for `hexpire`, `hexpireat` and `hpexpire`.
-
- 08 May, 2024 1 commit
Ozan Tezcan authored

**Changes:**
- Adds listpack support to hash field expiration
- Implements hgetf/hsetf commands

**Listpack support for hash field expiration**

We keep field name and value pairs in a listpack for the hash type. With this PR, if one of the hash field expiration commands is called on the key for the first time, it converts the listpack layout to triplets to hold field name, value and TTL per field. If a field does not have a TTL, we store zero as the TTL value. Zero is encoded as two bytes in the listpack. So, once we convert the listpack to hold triplets, the fields that don't have a TTL will be consuming those extra 2 bytes per item. Fields are ordered by TTL in the listpack to find the field with the minimum expiry time efficiently.

**New command implementations as part of this PR:**

- HGETF command

For each specified field get its value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
```
HGETF key
  [NX | XX | GT | LT]
  [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST]
  <FIELDS count field [field ...]>
```

- HSETF command

For each specified field value pair: set field to value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
```
HSETF key
  [DC] [DCF | DOF]
  [NX | XX | GT | LT]
  [GETNEW | GETOLD]
  [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL]
  <FVS count field value [field value ...]>
```

Todo:
- Performance improvement
- rdb load/save
- aof
- defrag
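A C sketch of the triplet idea above, using a plain array as a stand-in for the listpack (the ordering of zero-TTL fields is an assumption of this sketch, not taken from the PR): entries hold (field, value, ttl), ttl == 0 means "no TTL", and TTL-bearing entries are kept sorted so the minimum expiry is found without scanning everything.

```c
#include <stdio.h>
#include <stdint.h>

struct triplet { const char *field, *value; uint64_t ttl; };

/* With TTL-bearing entries sorted ascending and zero-TTL entries kept
 * at the tail, the first nonzero ttl is the soonest expiry. */
static const struct triplet *min_expiry(const struct triplet *t, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (t[i].ttl != 0) return &t[i];
    return NULL; /* no field has a TTL */
}

int main(void) {
    struct triplet hash[] = {
        {"f2", "v2", 1717000000000ULL},
        {"f3", "v3", 1718000000000ULL},
        {"f1", "v1", 0},                 /* no TTL: costs 2 extra bytes
                                            in the real listpack form */
    };
    const struct triplet *m = min_expiry(hash, 3);
    if (m) printf("next to expire: %s at %llu\n", m->field,
                  (unsigned long long)m->ttl);
    return 0;
}
```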
-
- 06 May, 2024 1 commit
guybe7 authored

Because it does not cause any propagation (arguably it should, see the comment in the tcl file). The motivation for this fix is that in 6.2, if dirty changed without propagation inside MULTI/EXEC, it would cause propagation of EXEC only, which would result in the replica sending errors to its master.
-
- 03 May, 2024 1 commit
debing.sun authored
-
- 18 Apr, 2024 1 commit
Moti Cohen authored

- Add ebuckets & mstr data structures
- Integrate active & lazy expiration
- Add most of the commands
- Add support for dict (listpack is missing)

TODOs: RDB, notification, listpack, HSET, HGETF, defrag, aof
-
- 16 Apr, 2024 1 commit
Binbin authored

## Background
1. Currently Lua memory control does not pass through Redis's zmalloc.c. Redis maxmemory cannot limit memory problems caused by users abusing Lua, since this Lua VM memory is not part of used_memory.
2. Since jemalloc is much better (fragmentation and speed), and also we know it and trust it, we are going to use jemalloc instead of libc to allocate the Lua VM code and count its used memory.

## Process
In this PR, we will use jemalloc in Lua.
1. Create an arena for all Lua VMs (script and function), which is shared, in order to avoid blocking the defragger.
2. Create a bound tcache for the Lua VM, since the Lua VM and the main thread are by default in the same tcache, and if there is no isolated tcache, Lua may request memory from the tcache which has just been freed by the main thread, and vice versa. On the other hand, since the Lua VM might be released in a bio thread, but the tcache is not thread-safe, we need to recreate the tcache every time we recreate the Lua VM.
3. Remove Lua memory statistics from memory fragmentation statistics to avoid the effects of Lua memory fragmentation.

## Other
Add the following new fields to `INFO DEBUG` (we may promote them to INFO MEMORY some day):
1. allocator_allocated_lua: total number of bytes allocated of the Lua arena
2. allocator_active_lua: total number of bytes in active pages allocated in the Lua arena
3. allocator_resident_lua: maximum number of bytes in physically resident data pages mapped in the Lua arena
4. allocator_frag_bytes_lua: fragment bytes in the Lua arena

This is oranagra's idea, and I got some help from sundb. This solves the third point in #13102.

Co-authored-by: debing.sun <debing.sun@redis.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
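A minimal C sketch of the jemalloc mechanics described above, assuming you link against jemalloc directly; this is not the Redis integration code. It creates a dedicated arena plus an explicit tcache and routes allocations to them via mallocx() flags.

```c
#include <stdio.h>
#include <string.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    unsigned arena, tcache;
    size_t sz = sizeof(unsigned);

    /* Dedicated arena, so these allocations don't mix with the main
     * thread's arenas (and a defragger can skip them). */
    if (mallctl("arenas.create", &arena, &sz, NULL, 0)) return 1;

    /* Explicit tcache: isolates caching from the main thread's tcache.
     * A tcache is not thread-safe, hence recreating it per VM lifecycle. */
    if (mallctl("tcache.create", &tcache, &sz, NULL, 0)) return 1;

    int flags = MALLOCX_ARENA(arena) | MALLOCX_TCACHE(tcache);
    char *p = mallocx(64, flags);   /* allocation charged to our arena */
    memcpy(p, "lua", 4);
    printf("%s from arena %u\n", p, arena);
    dallocx(p, flags);

    /* Drop the tcache before e.g. releasing the VM in another thread. */
    mallctl("tcache.destroy", NULL, NULL, &tcache, sizeof tcache);
    return 0;
}
```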
-
- 04 Apr, 2024 1 commit
debing.sun authored

Fix some issues that came from the sanitizer thread report.

1. When the main thread is updating daylight_active, other threads (bio, module thread) may be writing logs at the same time.
```
WARNING: ThreadSanitizer: data race (pid=661064)
  Read of size 4 at 0x55c9a4d11c70 by thread T2:
    #0 serverLogRaw /home/sundb/data/redis_fork/src/server.c:116 (redis-server+0x8d797) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
    #1 _serverLog.constprop.2 /home/sundb/data/redis_fork/src/server.c:146 (redis-server+0x2a3b14) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
    #2 bioProcessBackgroundJobs /home/sundb/data/redis_fork/src/bio.c:329 (redis-server+0x1c24ca) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)

  Previous write of size 4 at 0x55c9a4d11c70 by main thread (mutexes: write M0, write M1, write M2, write M3):
    #0 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1102 (redis-server+0x925e7) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
    #1 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1087 (redis-server+0x925e7)
    #2 updateCachedTime /home/sundb/data/redis_fork/src/server.c:1118 (redis-server+0x925e7)
    #3 afterSleep /home/sundb/data/redis_fork/src/server.c:1811 (redis-server+0x925e7)
    #4 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:389 (redis-server+0x85ae0) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
    #5 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85ae0)
    #6 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85ae0)
    #7 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
```

2. Thread leaks in module tests
```
WARNING: ThreadSanitizer: thread leak (pid=668683)
  Thread T13 (tid=670041, finished) created by main thread at:
    #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:1036 (libtsan.so.2+0x3d179) (BuildId: 28a9f70061dbb2dfa2cef661d3b23aff4ea13536)
    #1 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:200 (blockonbackground.so+0x97fd) (BuildId: 9cd187906c57e88cdf896d121d1d96448b37a136)
    #2 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:169 (blockonbackground.so+0x97fd)
    #3 call /home/sundb/data/redis_fork/src/server.c:3546 (redis-server+0x9b7fb) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
    #4 processCommand /home/sundb/data/redis_fork/src/server.c:4176 (redis-server+0xa091c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
    #5 processCommandAndResetClient /home/sundb/data/redis_fork/src/networking.c:2468 (redis-server+0xd2b8e) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
    #6 processInputBuffer /home/sundb/data/redis_fork/src/networking.c:2576 (redis-server+0xd2b8e)
    #7 readQueryFromClient /home/sundb/data/redis_fork/src/networking.c:2722 (redis-server+0xd358f) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
    #8 callHandler /home/sundb/data/redis_fork/src/connhelpers.h:58 (redis-server+0x288a7b) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
    #9 connSocketEventHandler /home/sundb/data/redis_fork/src/socket.c:277 (redis-server+0x288a7b)
    #10 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:417 (redis-server+0x85b45) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
    #11 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85b45)
    #12 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85b45)
    #13 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
```
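A small illustration of the class of fix for data race 1: make the shared flag atomic so the main thread's writes and the log writers' reads are race-free. Redis has its own atomic wrappers; this just shows the plain C11 form, not the patch itself.

```c
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int daylight_active; /* written by main, read by others */

void update_daylight(int active) {              /* main thread */
    atomic_store_explicit(&daylight_active, active, memory_order_relaxed);
}

int read_daylight(void) {                       /* bio/module threads */
    return atomic_load_explicit(&daylight_active, memory_order_relaxed);
}

int main(void) {
    update_daylight(1);
    printf("daylight_active=%d\n", read_daylight());
    return 0;
}
```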
-
- 02 Apr, 2024 1 commit
Moti Cohen authored

# Overview
Users utilize the `FLUSHDB SYNC` and `FLUSHALL SYNC` commands for a variety of reasons. The main issue with this command is that if the database becomes substantial in size, the server will be unresponsive for an extended period. Other than freezing application traffic, this may also lead some clients to make incorrect judgments about the server's availability. For instance, a watchdog may erroneously decide to terminate the process, resulting in potential adverse outcomes. While a `FLUSH* ASYNC` can address these issues, it might not be used for two reasons: firstly, it's not the default, and secondly, in some cases, the client issuing the flush wants to wait for its completion before repopulating the database.

Between the option of triggering FLUSH* asynchronously in the background without indication of completion, versus running it synchronously in the foreground by the main thread, there is another more appealing option. We can block the client that requested the flush, execute the flush command in the background, and once done, unblock the client and return notification of completion. This approach ensures the server remains responsive to other clients, and the blocked client receives the expected response only after the flush operation has been successfully carried out.

# Implementation details
Instead of defining yet another flavor of the flush command, we can modify `FLUSHALL SYNC` and `FLUSHDB SYNC` to always run in this new mode.

## Extending BIO Threads capabilities
Today, jobs that are carried out by BIO threads don't have the capability to indicate completion to the main thread. We can add this infrastructure by having an additional dummy job, coined as completion-job, that eventually will be written by BIO threads to a response-queue. The main thread will take care to consume items from the response-queue and call the provided callback function of each completion-job.

## FLUSH* SYNC to run as blocking ASYNC
Command `FLUSH* SYNC` will be modified to create one or more async jobs to flush DB(s) and afterward will push an additional completion-job request. By sending the completion-job request only at the end, the main thread will be called back only after all the preceding jobs completed their task in the background. During that time, the client of the command is suspended and marked as `BLOCKED_LAZYFREE`, whereas any other client will be able to communicate with the server without any issue.
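A toy C model of the completion-job idea, with no real threads and hypothetical names (not the bio.c code): background jobs are queued, and a final completion job carries a callback that runs only once everything ahead of it has been consumed, which is the point at which the blocked client can be unblocked.

```c
#include <stdio.h>

typedef void (*comp_fn)(void *arg);

struct job {
    const char *name;
    comp_fn on_done;  /* non-NULL only for the completion-job */
    void *arg;
};

static void unblock_client(void *arg) {
    printf("unblock client %s, reply +OK\n", (const char *)arg);
}

int main(void) {
    /* FLUSHALL SYNC: one flush job per DB, then the completion-job. */
    struct job queue[] = {
        {"flush db0", NULL, NULL},
        {"flush db1", NULL, NULL},
        {"completion", unblock_client, "client-42"},
    };
    /* The queue is drained in order, so the completion callback fires
     * only after every preceding flush job has finished. */
    for (int i = 0; i < 3; i++) {
        printf("bio: %s\n", queue[i].name);
        if (queue[i].on_done) queue[i].on_done(queue[i].arg);
    }
    return 0;
}
```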
-
- 20 Mar, 2024 1 commit
Pieter Cailliau authored

[Read more about the license change here](https://redis.com/blog/redis-adopts-dual-source-available-licensing/)

Live long and prosper 🖖
-
- 19 Mar, 2024 2 commits
Yanqi Lv authored

In `beginResultEmission`, -1 means the result length is not known in advance. But after #12185, if we pass -1 to `zrangeResultBeginStore`, it will convert to SIZE_MAX in `zsetTypeCreate` and try to `dictExpand`. Although `dictExpand` won't succeed because the size overflows, I think we'd better avoid this wrong conversion.

This bug can be triggered when the source of `zrangestore` doesn't exist or we use the `zrangestore` command with `byscore` or `bylex`. The impact is that dst keys will be converted to use skiplist instead of listpack.
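A two-line C illustration of the conversion at fault: passing a -1 sentinel through a size_t parameter silently becomes SIZE_MAX.

```c
#include <stdio.h>
#include <stdint.h>

static void expand(size_t hint) { printf("expand to %zu\n", hint); }

int main(void) {
    long len = -1;        /* "length unknown" sentinel */
    expand((size_t)len);  /* prints 18446744073709551615 on 64-bit */
    if ((size_t)len == SIZE_MAX)
        printf("-1 became SIZE_MAX; guard before converting\n");
    return 0;
}
```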
-
Binbin authored

Users who abuse lua error_reply will generate a new error object on each error call, which can make server.errors get bigger and bigger. This will cause the server to block when calling INFO (we also return errorstats by default). To prevent the damage it can cause, when a misuse is detected, we will print a warning log and disable the errorstats to avoid adding more new errors. It can be re-enabled via CONFIG RESETSTAT.

Because server.errors may be very large (it may be better now since we have the limit), CONFIG RESETSTAT may block for a while. So in resetErrorTableStats, we will try to lazyfree server.errors.

See the related discussion at the end of #8217.
-
- 18 Mar, 2024 1 commit
Binbin authored

After #13072, there is a use-after-free error. In expireScanCallback, we will delete the dict, and then in dictScan we will continue to use the dict, like doing `dictResumeRehashing(d)` at the end, which caused an error.

In this PR, in freeDictIfNeeded, if the dict's pauserehash is set, don't delete the dict yet; then when scan returns, try to delete it again.

At the same time, we noticed that there will be similar problems in the iterator. We may also delete elements during the iteration process, causing the dict to be deleted, so the part related to iter in the PR has also been modified. dictResetIterator was also missing from the previous kvstoreIteratorNextDict; we currently have no scenario where elements will be deleted in the kvstoreIterator process, but deal with it together to avoid future problems. Added some simple tests to verify the changes.

In addition, the modification in #13072 omitted initTempDb and emptyDbAsync, and they were also added. This PR also removes the slow flag from the expire test (consumes 1.3s) so that problems can be found in CI in the future.
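A C sketch of the deferred-free pattern behind the fix, with simplified types (not the kvstore.c code): while a scan has rehashing paused, deletion is only recorded; the actual free happens after the scan resumes, so the scan never touches freed memory.

```c
#include <stdlib.h>
#include <stdio.h>

struct dict {
    int pauserehash;   /* >0 while a scan is in progress */
    int marked_free;   /* deletion requested during the scan */
};

static void free_dict_if_needed(struct dict **slot) {
    if ((*slot)->pauserehash > 0) {
        (*slot)->marked_free = 1;   /* defer: scan still using it */
        return;
    }
    free(*slot);
    *slot = NULL;
}

int main(void) {
    struct dict *d = calloc(1, sizeof *d);
    d->pauserehash = 1;          /* scan begins */
    free_dict_if_needed(&d);     /* deletion requested mid-scan */
    printf("deferred: %d\n", d->marked_free);
    d->pauserehash = 0;          /* scan ends, rehashing resumed */
    free_dict_if_needed(&d);     /* now it is safe to free */
    printf("freed: %s\n", d == NULL ? "yes" : "no");
    return 0;
}
```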
-
- 13 Mar, 2024 2 commits
Binbin authored

In some cases, users will abuse lua eval. Each EVAL call generates a new lua script, which is added to the lua interpreter and cached in redis-server, consuming a large amount of memory over time. Since EVAL is mostly the one that abuses the lua cache, and these won't have pipeline issues (i.e. the script won't disappear unexpectedly, and cause errors like it would with SCRIPT LOAD and EVALSHA), we implement a plain FIFO LRU eviction only for these (not for scripts loaded with SCRIPT LOAD).

### Implementation notes:
When not abused we'll probably have less than 100 scripts, and when abused we'll have many thousands. So we use a hard coded value of 500 scripts. And considering that we don't have many scripts, then unlike keys, we don't need to worry about the memory usage of keeping a true sorted LRU linked list. We compute the SHA of each script anyway, and put the script in a dict; we can store a listNode there, and use it for quick removal and re-insertion into an LRU list each time the script is used.

### New interfaces:
At the same time, a new `evicted_scripts` field is added to INFO, which represents the number of evicted eval scripts. Users can check it to see if they are abusing EVAL.

### Benchmark:
`./src/redis-benchmark -P 10 -n 1000000 -r 10000000000 eval "return __rand_int__" 0`

This simple abuse-of-eval benchmark test will create 1 million EVAL scripts. The performance has been improved by 50%, and the max latency has dropped from 500ms to 13ms (this may be caused by table expansion inside Lua when the number of scripts is large). And in INFO memory, it used to consume 120MB (server cache) + 310MB (lua engine), but now it only consumes 70KB (server cache) + 210KB (lua_engine) because of the script eviction. For the non-abusive case of about 100 EVAL scripts, there's no noticeable change in performance or memory usage.

### Unlikely potentially breaking change:
In theory, a user can load a script with EVAL and then use EVALSHA to call it (by calculating the SHA1 value on the client side). If we read the docs carefully we may realize it's a valid scenario, but we suppose it's extremely rare. So it may happen that EVALSHA acts on a script created by EVAL, and the script is evicted and EVALSHA returns a NOSCRIPT error; that is, if you have more than 500 scripts being used in the same transaction / pipeline.

This solves the second point in #13102.
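A compact C sketch of the eviction scheme described above, illustrative only: a fixed cap of 3 instead of 500, and a plain linked list with linear lookup instead of the dict+listNode combination. Each use moves a script to the LRU head; inserting past the cap evicts the tail.

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define CAP 3

struct node { char sha[8]; struct node *prev, *next; };
static struct node *head, *tail;
static int count;

static void unlink_node(struct node *n) {
    if (n->prev) n->prev->next = n->next; else head = n->next;
    if (n->next) n->next->prev = n->prev; else tail = n->prev;
}
static void push_head(struct node *n) {
    n->prev = NULL; n->next = head;
    if (head) head->prev = n; else tail = n;
    head = n;
}

/* On each use: re-insert at head. On insert past cap: evict tail. */
static void use_script(const char *sha) {
    for (struct node *n = head; n; n = n->next)
        if (!strcmp(n->sha, sha)) { unlink_node(n); push_head(n); return; }
    if (count == CAP) {              /* evict least recently used */
        struct node *lru = tail;
        printf("evicted %s\n", lru->sha);
        unlink_node(lru); free(lru); count--;
    }
    struct node *n = calloc(1, sizeof *n);
    strncpy(n->sha, sha, sizeof n->sha - 1);
    push_head(n); count++;
}

int main(void) {
    use_script("s1"); use_script("s2"); use_script("s3");
    use_script("s1");      /* s1 becomes most recent */
    use_script("s4");      /* evicts s2, the LRU */
    return 0;
}
```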
-
Ronen Kalish authored

Allow using `+` as a special ID for the last item in a stream on the XREAD command. This allows iterating on a stream with XREAD, starting with the last available message instead of the next one to arrive, which `$` is used for. I.e. the caller can use `BLOCK` and `+` on the first call, and change to `$` on the next call.

Closes #7388

Co-authored-by: Felipe Machado <462154+felipou@users.noreply.github.com>
-
- 12 Mar, 2024 2 commits
Viktor Söderqvist authored

Sometimes it's useful to compute a key's cluster slot in a module. This API function is just like the command CLUSTER KEYSLOT (but faster). A "reverse" API is also added: `RedisModule_ClusterCanonicalKeyNameInSlot`. Given a slot, it returns a short string that we can call a canonical key for the slot.
-
Binbin authored

The check in fileIsManifest misjudged the manifest file. For example, if a RESP AOF contains "file", it will be considered a manifest file and the check will fail:
```
*3
$3
set
$4
file
$4
file
```

In #12951, if the preamble AOF also contains it, it will also fail. Fixes #12951.

The bug was happening if the word "file" is mentioned in the first 1024 lines of the AOF. Now, as soon as the check finds a non-comment line it'll break (whether it contains "file" or not).
-
- 10 Mar, 2024 1 commit
Matthew Douglass authored

Since lua_Number is not explicitly an integer or a double, we need to make an effort to convert it as an integer when that's possible, since the string could later be used in a context that doesn't support scientific notation (e.g. 1e9 instead of 1000000000). Since fpconv_dtoa converts numbers with the equivalent of `%f` or `%e`, whichever is shorter, this would break if we try to pass a long integer number to a command that takes an integer: we'll get an implicit conversion to string in Lua, and then the parsing in getLongLongFromObjectOrReply will fail.

```
> eval "redis.call('hincrby', 'key', 'field', '1000000000')" 0
(nil)
> eval "redis.call('hincrby', 'key', 'field', tonumber('1000000000'))" 0
(error) ERR value is not an integer or out of range script: ac99c32e4daf7e300d593085b611de261954a946, on @user_script:1.
```

Switch to using ll2string if the number can be safely represented as a long long.

The problem was introduced in #10587 (Redis 7.2). Closes #13113.

Co-authored-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: debing.sun <debing.sun@redis.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
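A C sketch of the conversion rule (not the fpconv/ll2string code itself, and the range guard is an assumption of this sketch): if the double is integral and fits in a long long, print it as an integer; otherwise fall back to float formatting.

```c
#include <stdio.h>

static void num_to_string(double d, char *buf, size_t n) {
    /* Guard the range before casting: converting an out-of-range
     * double to long long is undefined behavior. */
    if (d >= -9.2e18 && d <= 9.2e18 && (double)(long long)d == d)
        snprintf(buf, n, "%lld", (long long)d);  /* integral: no 1e9 form */
    else
        snprintf(buf, n, "%.17g", d);
}

int main(void) {
    char buf[32];
    num_to_string(1e9, buf, sizeof buf);
    printf("%s\n", buf);                /* 1000000000, not 1e+09 */
    num_to_string(0.5, buf, sizeof buf);
    printf("%s\n", buf);                /* 0.5 */
    return 0;
}
```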
-
- 05 Mar, 2024 1 commit
debing.sun authored

The `CONFIG SET oom-score-adj handles configuration failures` test failed in some CI jobs today.

Failed CI: https://github.com/redis/redis/actions/runs/8152519326

Not sure why the github action's docker image permissions have changed, but the issue is similar to #12887, where we can't assume the range of oom_score_adj that a user can change.

## Solution
Modify the way of determining whether the current user lacks privileges, instead of relying on whether the user id is 0 or not.
-
- 04 Mar, 2024 1 commit
debing.sun authored

After #13013

### This PR makes an effort to defrag the pubsub kvstore in the following ways:
1. Until now, server.pubsub(shard)_channels only shared the channel name obj with the first subscribed client. Now change it so that the clients and the pubsub kvstore share the channel name robj. This saves a lot of memory when there are many subscribers to the same channel. It also means that we only need to defrag the channel name robj in the pubsub kvstore, and then update all client references for the current channel, avoiding the need to iterate through all the clients to do the same things.
2. Refactor the code to defragment pubsub(shard) in the same way as the defragment of keys and EXPIRES, with the exception that we only defragment pubsub (without shard) when the slot is zero.

### Other
Fix an oversight in #11695: if defragment doesn't reach the end time, we should wait for the current db's keys and expires, pubsub and pubsubshard to finish before leaving; previously it was possible to exit early when the keys were defragmented.

Co-authored-by: oranagra <oran@redislabs.com>
-
- 01 Mar, 2024 1 commit
Chen Tianjie authored

Sometimes we need to make a fast judgement about why Redis is suddenly taking more memory. One of the reasons is the main DB's dicts doing rehashing. We may use `MEMORY STATS` to monitor the overhead memory of each DB, but there is still no total sum to show an overall trend. So this PR adds the total overhead of all DBs to the `INFO MEMORY` section, together with the total count of rehashing DB dicts, providing some intuitive metrics about main dict rehashing.

This PR adds the following metric to INFO MEMORY:
* `mem_overhead_db_hashtable_rehashing` - only the size of ht[0] in dictionaries we're rehashing (i.e. the memory that's going to get released soon)

and similar ones to MEMORY STATS:
* `overhead.db.hashtable.lut` (complements the existing `overhead.hashtable.main` and `overhead.hashtable.expires`, which also count the `dictEntry` structs)
* `overhead.db.hashtable.rehashing` - temporary rehashing overhead.
* `db.dict.rehashing.count` - number of top level dictionaries being rehashed.

Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
-
- 29 Feb, 2024 1 commit
Binbin authored

In XREADGROUP ACK, because streamPropagateXCLAIM does not propagate entries-read, entries-read will be inconsistent between master and replicas. I.e. if no entries were claimed, it would have propagated correctly, but if some were claimed, then the entries-read field would be inconsistent on the replica.

The fix was suggested by guybe7: call streamPropagateGroupID unconditionally, so that we will normalize entries_read on the replicas. In the past, we would only set propagate_last_id when NOACK was specified. And in #9127, XCLAIM did not propagate entries_read in ACK, which would cause entries_read to be inconsistent between master and replicas.

Another approach is to add another arg to XCLAIM and let it propagate entries_read, but we decided not to use it, because we want minimal damage in case there's an old target and a new source (in the worst case scenario, the new source doesn't recognize XGROUP SETID ... ENTRIES READ and the lag is lost. If we change XCLAIM, the damage is much more severe).

In this patch, if the user uses XREADGROUP .. COUNT 1 there will be an additional overhead of MULTI, EXEC and XGROUPSETID. We assume the extra commands in the case of COUNT 1 (4x factor, changing from one XCLAIM to MULTI+XCLAIM+XSETID+EXEC) are probably ok, since reading just one entry is in any case very inefficient (a client round trip per record), so we're hoping it's not a common case.

Issue was introduced in #9127.
-
- 22 Feb, 2024 2 commits
debing.sun authored

Implement #12699.

This PR exposes the Lua os.clock() api for getting the elapsed time of Lua code execution.

Usage:
```lua
local start = os.clock()
...
do something
...
local elapsed = os.clock() - start
```

Co-authored-by: Meir Shpilraien (Spielrein) <meir@redis.com>
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
-
debing.sun authored

Following #12568.

In issue #9357, when inserting an element larger than 1GB, we currently store it in a plain node instead of a listpack. Presently, when we insert an element that exceeds the maximum size of a packed node, it cannot be accommodated in any other node, thus ending up isolated like a large element, i.e. a node with only one element, but listpack encoded rather than a plain buffer.

This PR lowers the threshold for considering an element as 'large' from 1GB to the maximum size of a node. While this change doesn't completely resolve the bug mentioned in the previous PR, it does mitigate its potential impact. As a result of this change, we can now only use LSET to replace an element with another element that falls below the maximum size threshold.

In the worst-case scenario, with a fill of -5, the largest packed node we can create is 2GB (32k * 64k):
* 32k: The smallest element in a listpack is 2 bytes, which allows us to store up to 32k elements.
* 64k: This is the maximum size for a single quicklist node.

## Others
To fully fix #9357, we need more work. As discussed in #12568, when we insert an element into a quicklistNode, it may be created in a new node, put into another node, or merged, and we can't correctly delete the node that was supposed to be deleted. I'm not sure it's worth it, since it involves a lot of modifications.
-
- 20 Feb, 2024 2 commits
Binbin authored

Recently I saw in CI that reply-schemas-validator fails here:
```
Failed validating 'minimum' in schema[1]['properties']['groups']['items']['properties']['consumers']['items']['properties']['active-time']:

{'description': 'Last time this consumer was active (successful reading/claiming).',
 'minimum': 0,
 'type': 'integer'}

On instance['groups'][0]['consumers'][0]['active-time']:

-1729380548878722639
```

The reason is that in the fuzzer, we may restore a corrupted active-time, which will cause the reply schema CI to fail. The fuzzer can corrupt the state in many places, which can cause bugs that mess up the reply, so we decided to skip logreqres.

Also, seen-time is the same type as active-time, so the minimum was added there too.

Co-authored-by: Oran Agra <oran@redislabs.com>
-
Binbin authored

There is a timing issue in the test: close may arrive late, or in freeClientAsync we will free the client in an async way, which will lead to errors in the watching_clients statistics, since we will only unwatch all keys when we truly freeClient. Add a wait here to avoid this problem.

Also fixed some outdated comments I saw. The test was introduced in #12966.
-