- 11 Jan, 2023 3 commits
-
-
Viktor Söderqvist authored
This change deletes the dictGetNext and dictGetNextRef functions, so the dict API doesn't expose the next field at all. The bucket function in dictScan is deleted. A separate dictScanDefrag function is added which takes a defrag alloc function to defrag-reallocate the dict entries. "Dirty" code accessing the dict internals in active defrag is removed. An 'afterReplaceEntry' is added to dictType, which allows the dict user to keep the dictEntry metadata up to date after reallocation/defrag/move. Additionally, for updating the cluster slot-to-key mapping, after a dictEntry has been reallocated, we need to know which db a dict belongs to, so we store a pointer to the db in a new metadata section in the dict struct, which is a new mechanism similar to dictEntry metadata. This adds some complexity but provides better isolation.
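A minimal sketch of the mechanism, with toy stand-in types (toyEntry, toyDictType and defragEntry are illustrative names, not the actual dict.c API):

```c
/* Hedged sketch: an "afterReplaceEntry"-style dictType callback lets the
 * dict user fix up per-entry state (e.g. slot-to-key list pointers kept
 * in entry metadata) after defrag reallocates an entry. Simplified types. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct toyEntry {
    void *key, *val;
    struct toyEntry *next;   /* bucket chain, internal to the dict */
} toyEntry;

typedef struct toyDictType {
    void (*afterReplaceEntry)(struct toyDictType *t, toyEntry *de);
} toyDictType;

static void afterReplace(toyDictType *t, toyEntry *de) {
    (void)t;
    printf("entry moved to %p, metadata re-linked\n", (void *)de);
}

/* What a dictScanDefrag-style walk would do per entry: */
static toyEntry *defragEntry(toyDictType *t, toyEntry *de) {
    toyEntry *newde = malloc(sizeof(*newde));  /* defrag-alloc stand-in */
    memcpy(newde, de, sizeof(*newde));
    free(de);
    if (t->afterReplaceEntry) t->afterReplaceEntry(t, newde);
    return newde;
}

int main(void) {
    toyDictType t = { afterReplace };
    toyEntry *de = calloc(1, sizeof(*de));
    de = defragEntry(&t, de);
    free(de);
    return 0;
}
```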
-
Viktor Söderqvist authored
Also delete unused function activeDefragSdsListAndDict
-
Viktor Söderqvist authored
Use functions for all accesses to dictEntry (except in dict.c). Dict abuses, e.g. in defrag.c, have been replaced by support functions provided by dict.
-
- 16 Nov, 2022 1 commit
-
-
sundb authored
Improve memory efficiency of list keys

## Description of the feature
The new listpack encoding uses the old `list-max-listpack-size` config to perform the conversion. We can think of it as a node inside a quicklist, but without the ~80 bytes of overhead (internal fragmentation included) of the quicklist and quicklistNode structs. For example, a list key with 5 items of 10 chars each now takes 128 bytes instead of the 208 it used to take.

## Conversion rules
* Convert listpack to quicklist: when the listpack length or size reaches the `list-max-listpack-size` limit, it is converted to a quicklist.
* Convert quicklist to listpack: when a quicklist has only one node and its length or size drops to half of the `list-max-listpack-size` limit, it is converted to a listpack. This avoids frequent conversions when adding or removing at the boundary size or length.

## Interface changes
1. Add a list entry param to listTypeSetIteratorDirection. When the list encoding is listpack, `listTypeIterator->lpi` points to the next entry of the current entry, so when changing the direction we need to use the current node (listTypeEntry->p) to update `listTypeIterator->lpi` to the next node in the reverse direction.

## Benchmark
### Listpack vs. quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement

### Both quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement

From the benchmark results:
1. When the list is quicklist encoded, LRANGE improves performance by <5%.
2. When the list is listpack encoded, LRANGE improves performance by ~13%; the main enhancement comes from `addListListpackRangeReply()`.

## Memory usage
1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each: memory usage is down by 35.49%, from 214MB to 138MB.

## Note
1. Add a conversion callback to support doing some work before conversion. Since the quicklist iterator decompresses the current node when it is released, we can no longer decompress the quicklist after we convert the list.
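The hysteresis in the conversion rules can be sketched like this (an assumed, simplified check on entry count only; the real code also tracks total byte size):

```c
#include <stdio.h>

#define LIST_MAX_LISTPACK_SIZE 128   /* stand-in for the config value */

/* grow: listpack -> quicklist at the limit */
static int should_convert_to_quicklist(long entries) {
    return entries >= LIST_MAX_LISTPACK_SIZE;
}

/* shrink: quicklist -> listpack only at half the limit, so commands
 * that hover around the boundary don't convert back and forth */
static int should_convert_to_listpack(int quicklist_nodes, long entries) {
    return quicklist_nodes == 1 && entries <= LIST_MAX_LISTPACK_SIZE / 2;
}

int main(void) {
    printf("%d\n", should_convert_to_quicklist(128));    /* 1: grow   */
    printf("%d\n", should_convert_to_listpack(1, 64));   /* 1: shrink */
    printf("%d\n", should_convert_to_listpack(1, 100));  /* 0: stay   */
    return 0;
}
```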
-
- 09 Nov, 2022 1 commit
-
-
Viktor Söderqvist authored
Small sets with not only integer elements are listpack encoded, by default up to 128 elements with max 64 bytes per element, controlled by the new configs `set-max-listpack-entries` and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable. Sets with only integers, even very small sets, are still intset encoded (up to the 1G limit, etc.). Larger sets are hashtable encoded. This PR increments the RDB version and affects OBJECT ENCODING.

Possible conversions when elements are added:
* intset -> listpack
* listpack -> hashtable
* intset -> hashtable

Note: No conversion happens when elements are deleted. If all elements are deleted and then added again, the set is deleted and recreated, thus implicitly converted to a smaller encoding.
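The resulting encoding choice on add can be sketched as follows (assumed simplified logic, not the actual t_set.c; the config stand-ins mirror set-max-listpack-entries and set-max-listpack-value):

```c
#include <stddef.h>
#include <stdio.h>

#define SET_MAX_LISTPACK_ENTRIES 128
#define SET_MAX_LISTPACK_VALUE   64

typedef enum { ENC_INTSET, ENC_LISTPACK, ENC_HASHTABLE } setenc;

static setenc encoding_after_add(setenc cur, int is_integer,
                                 size_t elem_len, long new_size) {
    if (new_size > SET_MAX_LISTPACK_ENTRIES) return ENC_HASHTABLE;
    if (cur == ENC_INTSET && is_integer) return ENC_INTSET;
    if (elem_len <= SET_MAX_LISTPACK_VALUE) return ENC_LISTPACK;
    return ENC_HASHTABLE;
}

int main(void) {
    /* intset -> listpack: a small non-integer element arrives */
    printf("%d\n", encoding_after_add(ENC_INTSET, 0, 5, 10));     /* 1 */
    /* listpack -> hashtable: element longer than the value limit */
    printf("%d\n", encoding_after_add(ENC_LISTPACK, 0, 100, 10)); /* 2 */
    return 0;
}
```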
-
- 18 Oct, 2022 1 commit
-
-
Binbin authored
Also fix a few newly detected typos. Closes #11394
-
- 21 Feb, 2022 1 commit
-
-
yoav-steinberg authored
This includes two fixes:
* We forgot to count non-key reallocs in the defragmentation stats.
* Fix the script defrag tests to make dict entries less significant in fragmentation, by making the scripts larger. This assures active defrag will complete and reach the desired results. Some inherent fragmentation might exist in dict entries, which we need to ignore; it led to occasional CI failures.
-
- 11 Feb, 2022 1 commit
-
-
yoav-steinberg authored
Remove the scripts defragger, since it has been broken since #10126 (released in 7.0 RC1) and would crash the server if the defragger started on a server that contains EVAL scripts. In #10126 the global `lua_scripts` dict became a dict mapping to a custom `luaScript` struct with an internal `robj` in it, instead of a generic `sds` -> `robj` dict. This means we would need custom code to defrag it, and since scripts should never really cause much fragmentation, it makes more sense to simply remove the defrag code for scripts.
-
- 21 Dec, 2021 1 commit
-
-
zhugezy authored
# Background

The main goal of this PR is to remove the relevant logic for Lua script verbatim replication, keeping only the effects replication logic, which has been the default since Redis 5.0. As a result, from the users' point of view, Lua in Redis 7.0 acts the same as Redis 6.0 with the default configuration.

There are lots of reasons to remove verbatim replication. Antirez listed some of the benefits in issue #5292:
> 1. No longer need to explain to users side effects into scripts. They can do whatever they want.
> 2. No need for a cache about scripts that we sent or not to the slaves.
> 3. No need to sort the output of certain commands inside scripts (SMEMBERS and others): this both simplifies and gains speed.
> 4. No need to store scripts inside the RDB file in order to startup correctly.
> 5. No problems about evicting keys during the script execution.

When looking back at Redis 5.0, antirez and the core team decided to set the config `lua-replicate-commands yes` by default instead of removing verbatim replication directly, in case bad situations happened. Three years later, before Redis 7.0, it's time to remove it formally.

# Changes

- Configuration `lua-replicate-commands` removed
  - a config file stub was created for backward compatibility
- Replication script cache removed
  - it is useless under script effects replication
  - relevant statistics also removed
- Script persistence in RDB files removed
- Propagation of SCRIPT LOAD and SCRIPT FLUSH to replica / AOF removed
- Deterministic execution logic in scripts removed (i.e. refusing to run write commands after random ones, and sorting the output of commands with random order)
  - the flags indicating which commands have non-deterministic results are kept as hints to clients
- `redis.replicate_commands()` & `redis.set_repl()` changed
  - `redis.replicate_commands()` now does nothing and returns 1
  - ...and `redis.set_repl()` can now be issued before `redis.replicate_commands()`
- Relevant TCL test cases adjusted
- DEBUG lua-always-replicate-commands removed

# Other changes

- Fix a recent bug comparing CLIENT_ID_AOF to original_client->flags instead of its id (introduced in #9780).

Co-authored-by: Oran Agra <oran@redislabs.com>
-
- 01 Dec, 2021 1 commit
-
-
meir@redislabs.com authored
The following variables were renamed:
1. lua_caller -> script_caller
2. lua_time_limit -> script_time_limit
3. lua_timedout -> script_timedout
4. lua_oom -> script_oom
5. lua_disable_deny_script -> script_disable_deny_script
6. in_eval -> in_script

The following variables were moved to lctx under eval.c:
1. lua
2. lua_client
3. lua_cur_script
4. lua_scripts
5. lua_scripts_mem
6. lua_replicate_commands
7. lua_write_dirty
8. lua_random_dirty
9. lua_multi_emitted
10. lua_repl
11. lua_kill
12. lua_time_start
13. lua_time_snapshot

This commit carries a low risk of introducing any issues; it just moves variables around without changing any logic.
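As a rough illustration of the refactor, a file-scope context struct in eval.c might look like the sketch below (field types here are simplified stand-ins, and only the first few fields from the list are shown):

```c
/* Hedged sketch: gather file-scope Lua state into one static context
 * struct in eval.c instead of many globals. */
#include <stddef.h>

typedef struct lua_State lua_State;  /* opaque, from the Lua library */
typedef struct client client;
typedef struct dict dict;

static struct luaCtx {
    lua_State *lua;        /* the Lua interpreter */
    client *lua_client;    /* the "fake client" used to run scripts */
    char *lua_cur_script;  /* SHA1 of the currently running script */
    dict *lua_scripts;     /* sha -> script object dict */
    unsigned long long lua_scripts_mem;  /* cached memory of lua_scripts */
    /* ...plus the remaining fields from the list above... */
} lctx;

int main(void) { (void)lctx; return 0; }
```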
-
- 04 Nov, 2021 1 commit
-
-
Eduardo Semprebon authored
For diskless replication in swapdb mode, considering we already spend replica memory holding a backup of the current db to restore in case of failure, we can get the following benefits by instead swapping the database only if we succeeded in transferring the db from the master:
- Avoid a `LOADING` response during failed and successful synchronization for cases where the replica is already up and running with data.
- Faster total time of diskless replication: we move from Transfer + Flush + Load to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could also be implemented for disk-based replication, with similar benefits, if consumers are willing to spend the extra memory usage.

General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode only kicks in if the replica is syncing from a master that has the same repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading`, to differentiate it from blocking loading.
- The slot-to-key mapping is now a field of `redisDb`, as it's more natural to access it from both server.db and the tempDb that is passed around.
- Because this affects replicas only, we assume that if they are not read-only and take write commands during replication, those writes are lost after SYNC, same as before; but we still deny CONFIG SET here anyway, to avoid complications.

Considerations for review:
- We have many cases where the server.loading flag is used, and even though I tried my best, there may be cases where async_loading should be checked as well, and cases where it shouldn't (this would require a very good understanding of the whole code).
- Several places that had different behavior depending on the loading flag were actually meant to just handle commands coming from the AOF client differently from ones coming from real clients; these were changed to check CLIENT_ID_AOF instead.

**Additional for Release Notes**
- Bugfix: server.dirty was not incremented for any kind of diskless replication; as an effect, it wouldn't contribute to triggering the next database SAVE.
- New flag for the RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING.
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event. Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED, ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions to allow modules to declare they support diskless replication with async loading (when absent, we fall back to disk-based loading).

Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
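A hedged sketch of the load-then-swap flow described above (toy stand-ins for redisDb and the load step; the real code paths live in replication.c):

```c
#include <stdio.h>

typedef struct redisDb { int keys; } redisDb;   /* toy stand-in */

static redisDb mainDb = { 42 };                 /* replica's current data */

static int loadRdbIntoDb(redisDb *db) { db->keys = 100; return 1; }

static void swapMainDbWithTempDb(redisDb *tempDb) {
    redisDb tmp = mainDb; mainDb = *tempDb; *tempDb = tmp;
}

static void disklessLoadSwapDb(void) {
    redisDb tempDb = { 0 };                /* empty temporary database */
    if (loadRdbIntoDb(&tempDb)) {
        swapMainDbWithTempDb(&tempDb);     /* success: serve new data */
        /* old contents (now in tempDb) are freed asynchronously */
    }
    /* on failure tempDb is simply discarded; mainDb was never flushed,
     * so clients never see an empty dataset or a LOADING response */
}

int main(void) {
    disklessLoadSwapDb();
    printf("keys after sync: %d\n", mainDb.keys);   /* 100 */
    return 0;
}
```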
-
- 03 Nov, 2021 1 commit
-
-
perryitay authored
Redis lists are stored in a quicklist, which is currently a linked list of ziplists. Ziplists are limited to storing elements no larger than 4GB, so when bigger items are added they get truncated. This PR changes quicklists so that they're capable of storing large items, in quicklist nodes that are plain string buffers rather than ziplists.

As part of the PR there were a few other changes in Redis:
1. New DEBUG sub-commands:
   - QUICKLIST-PACKED-THRESHOLD - set the threshold for storing a node as a plain buffer rather than a ziplist (default 1GB).
   - QUICKLIST <key> - shows low-level info about the quicklist encoding of <key>.
2. RDB format change:
   - A new type was added - RDB_TYPE_LIST_QUICKLIST_2.
   - The container type (packed / plain) was added to the beginning of the RDB object (before the actual node list).
3. Testing:
   - Tests that require over 100MB are skipped by default; a new flag was added to 'runtest' to run the large memory tests (not used by default).

Co-authored-by: sundb <sundbcn@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
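The packed-vs-plain node decision described above can be sketched as follows (assumed simplified logic; the real code lives in quicklist.c):

```c
#include <stdio.h>

#define PACKED_THRESHOLD (1ULL << 30)   /* default 1GB, per the commit */

typedef enum { NODE_PACKED, NODE_PLAIN } nodetype;

/* elements below the threshold go into a packed (ziplist) node;
 * larger ones get a plain string-buffer node of their own */
static nodetype node_type_for(unsigned long long elem_size) {
    return elem_size >= PACKED_THRESHOLD ? NODE_PLAIN : NODE_PACKED;
}

int main(void) {
    printf("%d\n", node_type_for(100));          /* 0: packed */
    printf("%d\n", node_type_for(5ULL << 30));   /* 1: plain  */
    return 0;
}
```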
-
- 09 Sep, 2021 2 commits
-
-
sundb authored
Part two of implementing #8702 (zset), after #8887.

## Description of the feature
Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.

## RDB format changes
New `RDB_TYPE_ZSET_LISTPACK` rdb type.

## RDB loading improvements
1. Pre-expansion of the dict for validation of duplicate data for listpack and ziplist.
2. Simplified the release of empty key objects during RDB loading.
3. Unified the ziplist and listpack data verification methods for zset and hash, and moved the code to rdb.c.

## Interface changes
1. The new `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
2. OBJECT ENCODING will return listpack instead of ziplist.

## Listpack improvements
1. Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from a listpack.
2. Improve the performance of `lpCompare`; converting from string to integer is faster than converting from integer to string.
3. Replace `snprintf` with `ll2string` to improve the performance of number-to-string conversion in `lpGet()`.

## Zset improvements
1. Improve the performance of the `zzlFind` method: use `lpFind` instead of `lpCompare` in a loop.
2. Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to delete an element of a zset.

## Tests
1. Add some unit tests for the `lpDeleteRange` and `lpDeleteRangeWithEntry` functions.
2. Add a zset RDB loading test.
3. Add benchmark tests for `lpCompare` and `ziplistCompare`.
4. Add an empty-listpack zset corrupt dump test.
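To see why a single find helper beats a compare-per-element loop, here is a toy, array-based illustration (not the real listpack API; `lp_find_member` merely stands in for `lpFind` with its skip parameter):

```c
#include <stdio.h>
#include <string.h>

/* one pass over the flat sequence, skipping scores, since zset
 * listpacks store alternating member,score entries */
static int lp_find_member(const char **lp, int n, const char *member) {
    for (int i = 0; i < n; i += 2)     /* skip=1 in the real lpFind */
        if (strcmp(lp[i], member) == 0) return i;
    return -1;
}

int main(void) {
    const char *zl[] = { "alice", "1", "bob", "2", "carol", "3" };
    printf("%d\n", lp_find_member(zl, 6, "bob"));   /* 2 */
    return 0;
}
```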
-
Huang Zhw authored
Add two INFO metrics:
```
total_active_defrag_time:12345
current_active_defrag_time:456
```
`current_active_defrag_time`, if greater than 0, shows how much time has passed since active defrag started running. When active defrag stops, this metric is reset to 0. `total_active_defrag_time` is the total time the fragmentation was over the defrag threshold since the server started. This is a follow-up PR for #9031.
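A hedged sketch of the bookkeeping such metrics imply (simplified; the real implementation lives in the defrag cycle and INFO code):

```c
#include <stdio.h>
#include <time.h>

static time_t defrag_start = 0;        /* 0 => defrag not running */
static long long total_defrag_time = 0;

static void defrag_started(time_t now) { defrag_start = now; }

static void defrag_stopped(time_t now) {
    total_defrag_time += now - defrag_start;
    defrag_start = 0;                  /* current_... reads as 0 again */
}

static long long current_defrag_time(time_t now) {
    return defrag_start ? now - defrag_start : 0;
}

int main(void) {
    defrag_started(100);
    printf("current=%lld\n", current_defrag_time(105));  /* 5 */
    defrag_stopped(110);
    printf("total=%lld current=%lld\n", total_defrag_time,
           current_defrag_time(120));                    /* 10, 0 */
    return 0;
}
```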
-
- 31 Aug, 2021 1 commit
-
-
Viktor Söderqvist authored
* Enhance dict to support arbitrary metadata carried in dictEntry

* Rewrite slot-to-keys mapping to linked lists using dict entry metadata

This is a memory enhancement for Redis Cluster. The radix tree slots_to_keys (which duplicates all key names prefixed with their slot number) is replaced with a linked list for each slot. The dict entries of the same cluster slot form a linked list, and the pointers are stored as metadata in each dict entry of the main DB dict. This commit also moves the slot-to-key API from db.c to cluster.c.

Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Jim Brunner <brunnerj@amazon.com>
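A minimal sketch of the data-structure idea, with toy stand-in types (entry and slotList are illustrative, not the actual Redis names):

```c
/* Each main-dict entry carries prev/next pointers in its metadata,
 * chaining all keys of the same cluster slot into a doubly linked
 * list, so no separate radix tree of key names is needed. */
#include <stdio.h>

typedef struct entry {
    const char *key;
    struct entry *slot_prev, *slot_next;   /* the "metadata" */
} entry;

typedef struct { entry *head; } slotList;

static void slot_add(slotList *sl, entry *e) {   /* O(1) insert at head */
    e->slot_prev = NULL;
    e->slot_next = sl->head;
    if (sl->head) sl->head->slot_prev = e;
    sl->head = e;
}

int main(void) {
    static slotList slots[16384];           /* one list per cluster slot */
    entry a = { "user:1", 0, 0 }, b = { "user:2", 0, 0 };
    slot_add(&slots[5], &a);
    slot_add(&slots[5], &b);
    for (entry *e = slots[5].head; e; e = e->slot_next)
        printf("%s\n", e->key);
    return 0;
}
```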
-
- 10 Aug, 2021 1 commit
-
-
sundb authored
Part one of implementing #8702 (taking hashes first before other types).

## Description of the feature
1. Change ziplist-encoded hash objects to listpack encoding.
2. Convert existing ziplists at RDB loading time, an O(n) operation.

## RDB format changes
1. Add the RDB_TYPE_HASH_LISTPACK rdb type.
2. Bump RDB_VERSION to 10.

## Interface changes
1. The new `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`).
2. OBJECT ENCODING will return `listpack` instead of `ziplist`.

## Listpack improvements
1. Support direct insert and replace of integer elements (rather than converting back and forth from strings).
2. Add more listpack capabilities to match the ziplist ones (like `lpFind`, `lpRandomPairs` and such).
3. Optimize element length fetching, avoiding multiple calculations.
4. Use inline to avoid function call overhead.

## Tests
1. Add a new test for the RDB-load-time conversion.
2. Add the listpack unit tests (based on the ones in ziplist.c).
3. Add a few "corrupt payload: fuzzer findings" tests, and slightly modify existing ones.

Co-authored-by: Oran Agra <oran@redislabs.com>
-
- 05 Aug, 2021 1 commit
-
-
yoav-steinberg authored
Reduce dict struct memory overhead on 64-bit: the dict size goes down from jemalloc's 96-byte bin to its 56-byte bin.

Summary of changes:
- Remove `privdata` from callbacks and dict creation (this affects many files; see "Interface change" below).
- Meld the `dictht` struct into the `dict` struct to eliminate struct padding (this affects just dict.c and defrag.c).
- Eliminate the `sizemask` field; it can be calculated from the size when needed.
- Convert the `size` field into `size_exp` (an exponent), utilizing one byte instead of 8.

Interface change: pass the dict pointer to dict type callback functions, instead of the removed privdata field. In the future, if we'd like to have private data in the callbacks, we can extract it from the dict type. We can extend dictType to include a custom dict struct allocator and use it to allocate more data at the end of the dict struct. This data can then be used to store private data later accessed by the callbacks.
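The `size_exp` idea in a nutshell (the macro names follow dict.h's convention; treat this as a sketch):

```c
#include <stdio.h>

/* store only the exponent in one byte; derive size and mask on demand */
#define DICTHT_SIZE(exp)      ((exp) == -1 ? 0 : 1UL << (exp))
#define DICTHT_SIZE_MASK(exp) ((exp) == -1 ? 0 : DICTHT_SIZE(exp) - 1)

int main(void) {
    signed char size_exp = 10;  /* one byte replaces an 8-byte size
                                   field and the 8-byte sizemask field */
    printf("size=%lu mask=%lu\n",
           DICTHT_SIZE(size_exp), DICTHT_SIZE_MASK(size_exp)); /* 1024 1023 */
    return 0;
}
```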
-
- 16 Jun, 2021 1 commit
-
-
chenyang8094 authored
Create new module type enhanced callbacks: mem_usage2, free_effort2, unlink2, copy2. These are given a context pointer from which the module can obtain the key name and database id. In addition, the digest and defrag contexts can now be used to obtain the key name and database id.
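For illustration, a module-side callback could look like the hedged sketch below (assumed signatures and accessor names, inferred from the commit message; check your redismodule.h before relying on them):

```c
#include "redismodule.h"

typedef struct MyType { size_t payload_bytes; } MyType;

/* Unlike the old mem_usage, mem_usage2 receives a key-opt context
 * from which the key name and db id can be read. */
static size_t MyTypeMemUsage2(RedisModuleKeyOptCtx *ctx, const void *value,
                              size_t sample_size) {
    REDISMODULE_NOT_USED(sample_size);
    int dbid = RedisModule_GetDbIdFromOptCtx(ctx);  /* now available */
    REDISMODULE_NOT_USED(dbid);
    return sizeof(MyType) + ((const MyType *)value)->payload_bytes;
}

/* registered via RedisModuleTypeMethods with
 *   .version = REDISMODULE_TYPE_METHOD_VERSION,
 *   .mem_usage2 = MyTypeMemUsage2, ... */
```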
-
- 14 Jun, 2021 2 commits
-
-
ZhaolongLi authored
Co-authored-by: lizhaolong.lzl <lizhaolong.lzl@B-54MPMD6R-0221.local>
-
ZhaolongLi authored
Co-authored-by: lizhaolong.lzl <lizhaolong.lzl@B-54MPMD6R-0221.local>
-
- 18 May, 2021 1 commit
-
-
Binbin authored
Fix incrementing the defragged hits counter twice in activeDefragSdsListAndDict
-
- 28 Mar, 2021 1 commit
-
-
Huang Zhw authored
In activeDefragSdsListAndDict, when defragging a list key succeeds, the key in the dict is also replaced, and the corresponding dictEntry is then defragged. But the dictEntry will be defragged again in the next step, when the dict values are defragged too.
-
- 08 Feb, 2021 1 commit
-
-
Huang Zw authored
Fix typos and some out-of-date comments
-
- 27 Jan, 2021 1 commit
-
-
Huang Zw authored
In activeDefragSdsListAndDict, when dict_val_type is DEFRAG_SDS_DICT_VAL_VOID_PTR, it should update de->v.val, not ln->value. Because this code path is never executed, the bug never actually happened.
-
- 04 Jan, 2021 1 commit
-
-
Oran Agra authored
-
- 13 Dec, 2020 1 commit
-
-
Yossi Gottlieb authored
Add a new set of defrag functions that take a defrag context and allow defragmenting memory blocks and RedisModuleStrings. Modules can register a defrag callback which will be invoked when the defrag process handles globals. Modules with custom data types can also register a datatype-specific defrag callback which is invoked for keys that require defragmentation. The callback and associated functions support both one-step and multi-step options, depending on the complexity of the key as exposed by the free_effort callback.
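A hedged sketch of a one-step datatype defrag callback (the callback signature is assumed from the description above; RedisModule_DefragAlloc is part of the module defrag API, but verify exact semantics against redismodule.h):

```c
#include "redismodule.h"

typedef struct MyType { char *payload; } MyType;

static int MyTypeDefrag(RedisModuleDefragCtx *ctx, RedisModuleString *key,
                        void **value) {
    REDISMODULE_NOT_USED(key);
    MyType *o = *value;
    /* Ask the defrag allocator for a better-placed copy of each block;
     * NULL means "this block should not move". */
    char *moved = RedisModule_DefragAlloc(ctx, o->payload);
    if (moved) o->payload = moved;            /* inner block moved */
    MyType *movedo = RedisModule_DefragAlloc(ctx, o);
    if (movedo) *value = movedo;              /* top-level struct moved */
    return 0;   /* 0 = done; multi-step defrag uses the return value /
                   cursor to signal unfinished work on large keys */
}
```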
-
- 06 Dec, 2020 1 commit
-
-
Oran Agra authored
When loading an encoded payload, we will at least do a shallow validation to check that the size that's encoded in the payload matches the size of the allocation. This lets us later use the encoded size to make sure the various offsets inside the encoded payload don't reach outside the allocation; if they do, we'll assert/panic, but at least we won't segfault or smear memory.

We can also do 'deep' validation, which runs over all the records of the encoded payload and validates that they don't contain invalid offsets. This lets us detect corruptions early and reject a RESTORE command rather than accepting it and asserting (crashing) later when accessing that payload via some command.

Configuration:
- adding ACL flag skip-sanitize-payload
- adding config sanitize-dump-payload [yes/no/clients]

For now, we don't have a good way to ensure MIGRATE in cluster resharding isn't being slowed down by this sanitization, so I'm setting the default value to `no`, but later on it should be set to `clients` by default.

Changes:
- changing rdbReportError not to `exit` in the RESTORE command
- adding a new stat to be able to later check if cluster MIGRATE isn't being slowed down by sanitization
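A toy illustration of the 'shallow' check (an invented header format, not the real RDB encoding):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* the size recorded inside an encoded payload must match the
 * allocation, so later offset checks can bound against it */
static int shallow_validate(const unsigned char *buf, size_t alloc_len) {
    if (alloc_len < 4) return 0;
    uint32_t declared;                   /* toy header: total byte count */
    memcpy(&declared, buf, 4);
    return declared == alloc_len;        /* reject before any deref */
}

int main(void) {
    unsigned char payload[16] = {0};
    uint32_t len = sizeof(payload);
    memcpy(payload, &len, 4);
    printf("%d\n", shallow_validate(payload, sizeof(payload))); /* 1 */
    printf("%d\n", shallow_validate(payload, 8));               /* 0 */
    return 0;
}
```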
-
- 10 Sep, 2020 1 commit
-
-
Oran Agra authored
List of squashed commits or PRs
===============================

commit 66801ea
Author: hwware <wen.hui.ware@gmail.com>
Date: Mon Jan 13 00:54:31 2020 -0500

    typo fix in acl.c

commit 46f55db
Author: Itamar Haber <itamar@redislabs.com>
Date: Sun Sep 6 18:24:11 2020 +0300

    Updates a couple of comments

    Specifically:
    * RM_AutoMemory completed instead of pointing to docs
    * Updated link to custom type doc

commit 61a2aa0
Author: xindoo <xindoo@qq.com>
Date: Tue Sep 1 19:24:59 2020 +0800

    Correct errors in code comments

commit a5871d1
Author: yz1509 <pro-756@qq.com>
Date: Tue Sep 1 18:36:06 2020 +0800

    fix typos in module.c

commit 41eede7
Author: bookug <bookug@qq.com>
Date: Sat Aug 15 01:11:33 2020 +0800

    docs: fix typos in comments

commit c303c84
Author: lazy-snail <ws.niu@outlook.com>
Date: Fri Aug 7 11:15:44 2020 +0800

    fix spelling in redis.conf

commit 1e...
-
- 21 Jul, 2020 1 commit
-
-
Wen Hui authored
Since the dynamic allocations in raxIterator are only used for deep walks, memory leak due to missing call to raxStop can only happen for rax with key names longer than 32 bytes. Out of all the missing calls, the only ones that may lead to a leak are the rax for consumer groups and consumers, and these were only in AOFRW and rdbSave, which normally only happen in fork or at shutdown.
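The iteration pattern the fix enforces, using the real rax API (rax.h is part of the Redis source tree; this is a usage sketch, not the patched code):

```c
#include "rax.h"

void walk(rax *r) {
    raxIterator it;
    raxStart(&it, r);
    raxSeek(&it, "^", NULL, 0);        /* position at the first key */
    while (raxNext(&it)) {
        /* ... use it.key / it.key_len / it.data ... */
    }
    raxStop(&it);   /* must pair every raxStart: frees the heap buffers
                       allocated for deep walks (keys > 32 bytes) */
}
```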
-
- 10 Jul, 2020 1 commit
-
-
huangzhw authored
When an sds has been defragged and we later use it to calculate the hash, we should use newsds.
-
- 20 May, 2020 1 commit
-
-
Oran Agra authored
There's a rare case which leads to stagnation in the defragger, causing it to keep scanning the keyspace and do nothing (not moving any allocation). This happens when all the allocator slabs of a certain bin have the same % utilization, but the slab from which new allocations are made has a lower utilization.

This commit fixes it by removing the current slab from the overall average utilization of the bin, eliminating any precision loss in the utilization calculation, and moving the decision about the defrag to reside inside jemalloc. It also adds a test that consistently reproduces this issue.
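A toy, numbers-only illustration of why excluding the current slab matters (not the jemalloc code):

```c
#include <stdio.h>

/* bin utilization with the currently-allocating slab excluded */
static double bin_util_excluding_current(long used, long total,
                                         long cur_used, long cur_total) {
    long u = used - cur_used, t = total - cur_total;
    return t ? (double)u / (double)t : 0.0;
}

int main(void) {
    /* three stable slabs 60% full, plus the current slab at 20% full */
    long used = 60 * 3 + 20, total = 100 * 4;
    printf("avg incl. current: %.2f\n", (double)used / total);   /* 0.50 */
    printf("avg excl. current: %.2f\n",
           bin_util_excluding_current(used, total, 20, 100));     /* 0.60 */
    /* a region in a 55%-full slab is now correctly below average and
     * worth moving; with the old math it appeared above average, so
     * the defragger scanned forever without moving anything */
    return 0;
}
```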
-
- 18 Feb, 2020 1 commit
-
-
Oran Agra authored
When active defrag kicks in and finds a big list, it will create a bookmark to a node so that it is able to resume iteration from that node later. The quicklist manages that bookmark, and updates it in case that node is deleted. This increases memory usage by 16 bytes, and only on lists of over 1000 quicklist nodes (1000 ziplists, not 1000 items; see active-defrag-max-scan-fields). In a 32-bit build, this change reduces the maximum effective config of list-compress-depth and list-max-ziplist-size (from 32767 to 8191).
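A hedged sketch of how a defrag pass can use the bookmark API (the quicklistBookmark* functions are from quicklist.h; the bookmark name and resume logic here are illustrative, not the actual defrag.c code):

```c
#include "quicklist.h"

void defragListStep(quicklist **ql, long max_nodes) {
    /* resume from where the previous pass stopped, if bookmarked */
    quicklistNode *node = quicklistBookmarkFind(*ql, "defrag");
    if (!node) node = (*ql)->head;              /* first visit */
    long seen = 0;
    while (node && seen++ < max_nodes) {
        /* ... defrag-reallocate the node and its ziplist here ... */
        node = node->next;
    }
    if (node)
        quicklistBookmarkCreate(ql, "defrag", node); /* resume here later */
    else
        quicklistBookmarkDelete(*ql, "defrag");      /* finished the list */
}
```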
-
- 12 Nov, 2019 1 commit
-
-
Oran Agra authored
Reduce the default minimum effort, so that when fragmentation is just detected, the impact on latency will be minor. Reduce the default maximum effort, mainly to prevent a case where sudden massive deletions trigger an aggressive defrag that causes latency.

When activedefrag is disabled mid-run, reset the 'running' info field and clear the scan cursor, so that when it's re-enabled a fresh scan will start. Clearing the 'running' variable is important since lowering the defragger tunables mid-scan won't help: the defragger only considers new thresholds when a new scan starts, and during a scan it can only become more aggressive (when more severe fragmentation is detected), never less aggressive. So by temporarily disabling activedefrag, one can lower the tunables.

Also removes the experimental warning.
-
- 07 Oct, 2019 1 commit
-
-
Oran Agra authored
- cluster.c: stack buffer memory alignment; the pointer 'buf' is cast to a more strictly aligned pointer type
- evict.c: lazyfree_lazy_eviction, lazyfree_lazy_eviction always called
- defrag.c: bug in dead code
- server.c: casting was missing parenthesis
- rax.c: indentation / newline suggested an 'else if' was intended
-
- 27 Sep, 2019 1 commit
-
-
antirez authored
-
- 17 Jul, 2019 1 commit
-
-
Oran Agra authored
* Create a module API for forking child processes.
* Refactor duplicate code around creating and tracking forks by AOF and RDB.
* Child processes listen to SIGUSR1 and die via exitFromChild, in order to eliminate a valgrind warning of an unhandled signal.
* Note that the BGSAVE error reply has changed.

The valgrind error was: Process terminating with default action of signal 10 (SIGUSR1)
-
- 08 May, 2019 1 commit
-
-
yongman authored
-
- 18 Jul, 2018 1 commit
-
-
Oran Agra authored
On slower machines, the active defrag test tended to fail: although the fragmentation ratio was below the threshold, the defragger was still in the middle of a scan cycle.

This commit changes:
- The defragger uses the current fragmentation state, rather than the cached one that is updated by server cron every 100ms. This actually fixes a bug of starting one excess scan cycle.
- The test lets the defragger use more CPU cycles, in the hope that the defrag will be faster, but also gives it more time before we give up.
-