- 22 May, 2024 2 commits
-
-
Ozan Tezcan authored
This PR contains a few optimizations for the hfe listpack.

- Hfe fields are ordered by TTL in the listpack. There are two cases where we want to search the listpack according to TTLs:
  - As part of active expiry, we need to find the fields that are expired, e.g. find fields that have smaller TTLs than a given timestamp.
  - When we want to add a new field, we need to find the correct position to maintain the order by TTL, e.g. find the field that has a higher TTL than the one we want to insert.

  Iterating with lpNext() to compare TTLs has a performance cost, as lpNext() calls lpValidateIntegrity() for each entry. Instead, this PR adds `lpFindCb()` to the listpack, which accepts a comparator callback (a sketch follows this message). It preserves the same validation logic as lpFind(), which is faster than searching with lpNext().
- We have a field name, value and TTL for a single hfe field. Inserting these items one by one into the listpack is costly. Especially as we place fields according to TTL, most additions will end up in the middle of the listpack, and each insert causes a realloc + memmove. This PR introduces `lpBatchInsert()` to add multiple items in one go.
- For hsetf, if we are going to update the value and TTL at the same time, we currently update the value first and later update the TTL (two distinct listpack operations). This PR improves this by doing both in a single update operation.

--------- Co-authored-by:
debing.sun <debing.sun@redis.com>
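For illustration, a minimal C sketch of the comparator-callback idea behind `lpFindCb()`. The callback type and its arguments are assumptions based on the message above, not the actual listpack.h signature.

```c
#include <stdint.h>

/* Hypothetical callback shape: lpFindCb() is assumed to walk entries
 * and stop when the callback returns nonzero. */
typedef int (*lpFindCallback)(const unsigned char *entry,
                              int64_t entry_ttl, void *user_data);

/* Match the first field whose TTL is >= the TTL we want to insert,
 * i.e. the position that keeps the listpack ordered by TTL. */
static int findInsertPosCb(const unsigned char *entry,
                           int64_t entry_ttl, void *user_data) {
    int64_t new_ttl = *(int64_t *)user_data;
    (void)entry;
    return entry_ttl >= new_ttl; /* nonzero stops the search */
}
```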
-
debing.sun authored
Add the following validations (a sketch of the range/order check follows this message):
1. Get the TTL using the lpGetIntegerValue() method instead of lpGetValue(). Ref https://github.com/redis/redis/pull/13209#discussion_r1602569422
2. The TTL of listpackex is a number in the valid range (0~EB_EXPIRE_TIME_MAX).
3. The TTL fields of listpackex are ordered.
4. The TTL of hashtable is within the valid range (0~EB_EXPIRE_TIME_MAX).

Other: Fix the missing handling of OBJ_ENCODING_LISTPACK_EX in dismissHashObject().

--------- Co-authored-by:
Ozan Tezcan <ozantezcan@gmail.com>
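A minimal sketch of the range-and-order check from points 2-3, assuming the TTLs have already been extracted into an array; the `EB_EXPIRE_TIME_MAX` value here is defined only for illustration.

```c
#include <assert.h>
#include <stdint.h>

#define EB_EXPIRE_TIME_MAX ((1ULL << 48) - 1) /* assumption, for the sketch */

static void validateTtls(const int64_t *ttls, size_t n) {
    int64_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        /* each TTL must be in the valid range... */
        assert(ttls[i] >= 0 && (uint64_t)ttls[i] <= EB_EXPIRE_TIME_MAX);
        /* ...and fields must be ordered by TTL */
        assert(ttls[i] >= prev);
        prev = ttls[i];
    }
}
```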
-
- 17 May, 2024 1 commit
-
-
Ronen Kalish authored
Add RDB de/serialization for HFE

This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data. When the hash RAM encoding is dict, it will be saved in the former, and when it is listpack it will be saved in the latter. Both formats just add the TTL value for each field after the data that was previously saved, i.e. HASH_METADATA will save the number of entries and, for each entry, key, value and TTL, whereas listpack is saved as a blob. On read, the usual dict <--> listpack conversion takes place if required. In addition, when reading a hash that was saved as a dict, fields are actively expired if expiry is due. Currently this also holds for the listpack encoding, but it is supposed to be removed.

TODO: Remove active expiry on load when loading from listpack format (unless we'll decide to keep it)
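A sketch of the `RDB_TYPE_HASH_METADATA` layout described above (entry count, then key/value/TTL per field). The `hash`/`hashIter` iteration helpers are made up for the sketch; `rdbSaveLen()`/`rdbSaveRawString()` mirror the real rdb.c helpers, but this is an illustration, not the actual code.

```c
/* Hypothetical sketch, not the actual rdb.c code. */
static long rdbSaveHashMetadataSketch(rio *rdb, hash *h) {
    if (rdbSaveLen(rdb, hashLength(h)) == -1) return -1;   /* entry count */
    for (hashIter it = hashBegin(h); !hashDone(&it); hashNext(&it)) {
        if (rdbSaveRawString(rdb, it.field, it.field_len) == -1) return -1;
        if (rdbSaveRawString(rdb, it.value, it.value_len) == -1) return -1;
        if (rdbSaveLen(rdb, it.ttl) == -1) return -1;      /* TTL appended */
    }
    return 0;
}
```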
-
- 16 May, 2024 1 commit
-
-
Moti Cohen authored
The same goes for: HPEXPIRE, HEXPIREAT, HPEXPIREAT, HEXPIRETIME, HPEXPIRETIME, HPTTL, HTTL, HPERSIST
-
- 14 May, 2024 1 commit
-
-
debing.sun authored
## Background
1. All hash objects that contain HFE are referenced by db->hexpires.
2. All fields in a dict hash object with HFE are referenced by an ebucket.

So when we defrag a hash object or a field in a dict with HFE, we also need to update the references to them.

## Interface
1. Add a new interface `ebDefragItem`, which can accept a defrag callback to defrag items in ebuckets, and simultaneously update their references in the ebucket (see the sketch after this message).

## Main changes
1. The key type of the dict of a hash object is no longer sds, so add a new `activeDefragHfieldDict()` to defrag the dict instead of `activeDefragSdsDict()`.
2. When we defrag the dict of a hash object using `dictScanDefrag()`, we always set the defrag callback `defragKey` of `dictDefragFunctions` to NULL, because we can't reallocate a field without updating its reference in ebuckets. Instead, we defrag the field of the dict and update its reference in the callback `dictScanDefrag` of dictScanFunction().
3. When we defrag a hash robj with HFE, we use `ebDefragItem` to defrag the robj and update the reference in db->hexpires.

## TODO
Defrag the ebucket structure incrementally; this will be handled in a future PR.

--------- Co-authored-by:
Ozan Tezcan <ozantezcan@gmail.com> Co-authored-by:
Moti Cohen <moti.cohen@redis.com>
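As a sketch, the `ebDefragItem` interface could look like the following; only the name comes from the message above, the callback shape and argument order are assumptions (the real declaration lives in ebuckets.h).

```c
/* Hypothetical shape: the callback reallocates an item (returning the
 * new pointer, or NULL if it was not moved), and ebDefragItem patches
 * the reference kept inside the ebuckets structure. */
typedef void *(*ebDefragFunction)(void *item);

int ebDefragItem(ebuckets *eb, void *item, ebDefragFunction fn);
```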
-
- 13 May, 2024 1 commit
-
-
Ozan Tezcan authored
If the encoding is listpack, the hgetf and hsetf commands reply with the field value as an integer. This PR fixes it by returning a string. Problematic cases:
```
127.0.0.1:6379> hset hash one 1
(integer) 1
127.0.0.1:6379> hgetf hash fields 1 one
1) (integer) 1
127.0.0.1:6379> hsetf hash GETOLD fvs 1 one 2
1) (integer) 1
127.0.0.1:6379> hsetf hash DOF GETNEW fvs 1 one 2
1) (integer) 2
```
Additional fixes:
- hgetf/hsetf command description text

Fixes #13261, #13262
-
- 09 May, 2024 1 commit
-
-
debing.sun authored
1. Add an `hpersist` notification for the `hpersist` command.
2. Add a `pexpire` notification for the `hexpire`, `hexpireat` and `hpexpire` commands.
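These events are presumably fired through the standard `notifyKeyspaceEvent()` helper; a sketch of what the call sites could look like (the exact flags and placement are assumptions, only the event names come from the message):

```c
/* after HPERSIST successfully clears a field's TTL */
notifyKeyspaceEvent(NOTIFY_HASH, "hpersist", c->argv[1], c->db->id);

/* after HEXPIRE / HEXPIREAT / HPEXPIRE set a field's TTL */
notifyKeyspaceEvent(NOTIFY_HASH, "pexpire", c->argv[1], c->db->id);
```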
-
- 08 May, 2024 2 commits
-
-
Ozan Tezcan authored
**Changes:**
- Adds listpack support to hash field expiration
- Implements hgetf/hsetf commands

**Listpack support for hash field expiration**

We keep field name and value pairs in a listpack for the hash type. With this PR, if one of the hash field expiration commands is called on a key for the first time, the listpack layout is converted to triplets holding field name, value and TTL per field. If a field does not have a TTL, we store zero as the TTL value. Zero is encoded as two bytes in the listpack, so once we convert the listpack to hold triplets, fields that don't have a TTL consume those extra 2 bytes per item. Fields are ordered by TTL in the listpack so that the field with the minimum expiry time can be found efficiently (a sketch of scanning this layout follows this message).

**New command implementations as part of this PR:**

- HGETF command

For each specified field: get its value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
```
HGETF key
  [NX | XX | GT | LT]
  [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST]
  <FIELDS count field [field ...]>
```

- HSETF command

For each specified field value pair: set field to value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
```
HSETF key
  [DC] [DCF | DOF]
  [NX | XX | GT | LT]
  [GETNEW | GETOLD]
  [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL]
  <FVS count field value [field value …]>
```

Todo:
- Performance improvement
- rdb load/save
- aof
- defrag
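A sketch of walking the `[field, value, ttl, ...]` triplet layout with the public listpack API (`lpFirst`/`lpNext`/`lpGet`); the real t_hash.c helpers differ, and the sketch assumes a well-formed triplet listpack.

```c
#include <stdio.h>
#include <stdint.h>
#include "listpack.h"

void dumpHfeListpack(unsigned char *lp) {
    unsigned char buf[LP_INTBUF_SIZE];
    unsigned char *p = lpFirst(lp);
    while (p) {
        int64_t len;
        unsigned char *field = lpGet(p, &len, buf); /* field name */
        p = lpNext(lp, p);                          /* skip the value */
        p = lpNext(lp, p);                          /* TTL entry */
        int64_t ttl = 0;
        lpGet(p, &ttl, NULL);   /* NULL buf: integer returned via ttl */
        /* a TTL of 0 means the field has no expiry */
        printf("%.*s ttl=%lld\n", (int)len, (char *)field, (long long)ttl);
        p = lpNext(lp, p);                          /* next field */
    }
}
```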
-
Moti Cohen authored
- On ebExpire(), verify the logic of updating an expired value to a new time rather than removing it.
- Refine the ebuckets benchmark.
-
- 25 Apr, 2024 1 commit
-
-
Moti Cohen authored
Unify the infra of `HSETF`, `HEXPIRE` and `HSET`, and provide an API for RDB load as well. Whereas setting plain fields is rather straightforward, setting expiration times on fields can be time-consuming and complex, since each update of an expiration time not only updates the `ebuckets` of the corresponding hash but might also update the `ebuckets` of the global HFE DS. A sequence of field updates with expiration for a given hash needs to be optimized such that the global HFE DS gets updated only once, at the end. To do so, follow this scheme (sketched below):

1. Call `hashTypeSetExInit()` to initialize the HashTypeSetEx struct.
2. Call `hashTypeSetEx()` one time or more, for each field/expiration update.
3. Call `hashTypeSetExDone()` for notification and update of the global HFE.

If expiration is not required, then avoid this API and use hashTypeSet() instead.
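A usage sketch of the three-step scheme in C; the function names come from the message above, but the argument lists and surrounding variables are assumptions, not the real t_hash.c signatures.

```c
/* Sketch only: set several fields with TTLs on one hash, updating the
 * global HFE DS once at the end. Argument lists are assumed. */
HashTypeSetEx ex;

hashTypeSetExInit(db, key, hashObj, &ex);               /* 1. init once */
for (int i = 0; i < numFields; i++)
    hashTypeSetEx(&ex, fields[i], values[i], expireAt[i]); /* 2. per field */
hashTypeSetExDone(&ex);            /* 3. notify + update global HFE once */
```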
-
- 18 Apr, 2024 1 commit
-
-
Moti Cohen authored
- Add ebuckets & mstr data structures
- Integrate active & lazy expiration
- Add most of the commands
- Add support for dict (listpack is missing)

TODOs: RDB, notification, listpack, HSET, HGETF, defrag, aof
-
- 20 Mar, 2024 1 commit
-
-
Pieter Cailliau authored
[Read more about the license change here](https://redis.com/blog/redis-adopts-dual-source-available-licensing/) Live long and prosper
🖖
-
- 15 Jan, 2024 1 commit
-
-
Yanqi Lv authored
When we insert entries into a dict, it may autonomously expand if needed. However, when we delete entries from a dict, it doesn't shrink to the proper size. If there are few entries in a very large dict, it may cause a huge waste of memory and inefficiency when iterating.

The main keyspace dicts (keys and expires) are shrunk by cron (`tryResizeHashTables` calls `htNeedsResize` and `dictResize`), and some data structures such as zset and hash also do that (call `htNeedsResize`) right after a loop of calls to `dictDelete`. But many other dicts are completely missing that call (they can only expand).

In this PR, we provide the ability to automatically shrink the dict when deleting. The conditions triggering the shrinking are the same as `htNeedsResize` used to have: we expand when we're over 100% utilization and shrink when we're below 10% utilization (a sketch of these predicates appears after this message).

Additionally:
* Add `dictPauseAutoResize` so that flows that do mass deletions will only trigger shrinkage at the end.
* Rename `dictResize` to `dictShrinkToFit` (same logic as it used to have, but a better name describing it).
* Rename `_dictExpand` to `_dictResize` (same logic as it used to have, but a better name describing it).

related to discussion https://github.com/redis/redis/pull/12819#discussion_r1409293878

--------- Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
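A minimal sketch of the utilization thresholds described above, in assumed helper form (the real dict.c logic also honors `dict_can_resize` and paused auto-resize):

```c
#include <stddef.h>

/* Sketch: expand at >= 100% utilization, shrink below 10%
 * (used = stored entries, size = bucket count). */
static int needsExpand(size_t used, size_t size) {
    return used >= size;
}

static int needsShrink(size_t used, size_t size) {
    return size > 0 && used * 100 < size * 10;
}
```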
-
- 15 Oct, 2023 1 commit
-
-
Vitaly authored
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.

## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find a slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server (a sketch of the cursor encoding appears after this message). This has an interesting side effect: you'll now be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we are relying on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). This is kept for O(1) expires computation as well.

## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.

## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* The SCAN API will now require 64 bits to store the cursor, even on 32-bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.

--------- Co-authored-by:
Vitaly Arbuzov <arvit@amazon.com> Co-authored-by:
Harkrishn Patro <harkrisp@amazon.com> Co-authored-by:
Roshan Khatri <rvkhatri@amazon.com> Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
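A sketch of packing the slot id into the low bits of the SCAN cursor, as described above. 16384 slots fit in 14 bits; the exact bit layout in Redis is an implementation detail, so treat this as an illustration only.

```c
#include <stdint.h>

#define SLOT_BITS 14                        /* 2^14 = 16384 slots */
#define SLOT_MASK ((1ULL << SLOT_BITS) - 1)

static uint64_t buildCursor(uint64_t dict_cursor, int slot) {
    return (dict_cursor << SLOT_BITS) | ((uint64_t)slot & SLOT_MASK);
}

static int cursorSlot(uint64_t cursor) {
    return (int)(cursor & SLOT_MASK);       /* slot id lives in the LSBs */
}
```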
-
- 22 May, 2023 1 commit
-
-
Binbin authored
Optimized the HRANDFIELD and ZRANDMEMBER commands as in #8444, CASE 3, under listpack encoding. Boosted the optimization to CASE 2.5 (listpack only: sampling unique elements, in non-random order). Listpack-encoded hashes / zsets are meant to be relatively small, so HRANDFIELD_SUB_STRATEGY_MUL / ZRANDMEMBER_SUB_STRATEGY_MUL isn't necessary and we'd rather not make copies of the entries. Instead, we emit them directly to the output buffer. Simple benchmarks show it provides some 400% improvement in both HRANDFIELD and ZRANGESTORE in CASE 3.

Unrelated changes: remove the useless setTypeRandomElements and fix a typo.
-
- 16 May, 2023 1 commit
-
-
Binbin authored
In the judgment in setTypeCreate, we should judge size_hint <= max_entries. This results in the following inconsistencies:
```
127.0.0.1:6379> config set set-max-intset-entries 5 set-max-listpack-entries 5
OK
127.0.0.1:6379> sadd intset_set1 1 2 3 4 5
(integer) 5
127.0.0.1:6379> object encoding intset_set1
"hashtable"
127.0.0.1:6379> sadd intset_set2 1 2 3 4
(integer) 4
127.0.0.1:6379> sadd intset_set2 5
(integer) 1
127.0.0.1:6379> object encoding intset_set2
"intset"
127.0.0.1:6379> sadd listpack_set1 a 1 2 3 4
(integer) 5
127.0.0.1:6379> object encoding listpack_set1
"hashtable"
127.0.0.1:6379> sadd listpack_set2 a 1 2 3
(integer) 4
127.0.0.1:6379> sadd listpack_set2 4
(integer) 1
127.0.0.1:6379> object encoding listpack_set2
"listpack"
```
This was introduced in #12019; added corresponding tests.
-
- 08 May, 2023 1 commit
-
-
Madelyn Olson authored
For sets and hashes that will eventually be stored as the hash encoding, it's much faster to immediately convert them to their hash encoding and then perform the insertions, since it avoids the O(N) search and frequent reallocations. This change checks the number of arguments in the incoming command and converts the data structure if the number of new entries exceeds the listpack-max-entries configuration (a sketch follows this message). This can cause us to over-allocate memory if there are duplicate entries in the input, which is unexpected.

unstable:
  Summary:
    throughput summary: 805.54 requests per second
    latency summary (msec):
            avg       min       p50       p95       p99       max
         61.908    25.680    68.351    73.279    75.967    79.295

hset-improvement:
  Summary:
    throughput summary: 4701.46 requests per second
    latency summary (msec):
            avg       min       p50       p95       p99       max
         10.546     0.832    11.959    12.471    13.119    14.967
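A sketch of the up-front conversion check described above, with assumed helper names (the real logic is in t_hash.c, with an analogous check for sets):

```c
/* Sketch: convert a listpack-encoded hash to the hashtable encoding
 * before a bulk insert when the resulting size would exceed the
 * configured listpack-max-entries. Helper names are assumptions. */
void maybeConvertBeforeBulkInsert(robj *o, unsigned long new_entries) {
    if (o->encoding == OBJ_ENCODING_LISTPACK &&
        hashTypeLength(o) + new_entries > server.hash_max_listpack_entries)
    {
        hashTypeConvert(o, OBJ_ENCODING_HT); /* convert once, insert after */
    }
}
```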
-
- 28 Feb, 2023 1 commit
-
-
Oran Agra authored
The issue happens when passing a negative long value whose magnitude is greater than the max positive value that a long can store.
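The classic instance of this bug class is negating `LLONG_MIN`, whose magnitude exceeds `LLONG_MAX`; a standalone C illustration (not the Redis code):

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    long long v = LLONG_MIN;  /* magnitude is LLONG_MAX + 1 */
    /* -v cannot be represented in a long long, so the range check
     * must happen before negation, not after. */
    if (v == LLONG_MIN) {
        printf("cannot negate LLONG_MIN safely\n");
        return 1;
    }
    printf("%lld\n", -v);
    return 0;
}
```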
-
- 16 Jan, 2023 2 commits
-
-
Oran Agra authored
Related to the hang reported in #11671. Currently, redis can disconnect a client due to reaching the output buffer limit; it'll also avoid feeding that output buffer with more data, but it will keep running the loop in the command (despite the client already being marked for disconnection).

This PR is an attempt to mitigate the problem, specifically for commands that are easy to abuse: KEYS, HRANDFIELD, SRANDMEMBER, ZRANDMEMBER. The RAND family of commands can take a negative COUNT argument (which is not bound to the number of elements in the key), so it's enough to create a key with one field, and then these commands can be used to hang redis. For KEYS the caller can use the existing keyspace in redis (if big enough).
-
Oran Agra authored
Missing range check in ZRANDMEMBER and HRANDFIELD, leading to a panic due to protocol limitations.
-
- 28 Aug, 2022 1 commit
-
-
chendianqiang authored
Check the validity of the value before performing the create operation; this prevents new data from being generated even if the request fails to execute.

Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
chendianqiang <chendianqiang@meituan.com> Co-authored-by:
Binbin <binloveplay1314@qq.com>
-
- 14 Aug, 2022 1 commit
-
-
kmy2001 authored
Optimization in t_hash.c: Avoid looking up the same field twice by using dictAddRaw() instead of dictFind() and dictAdd() (#11110)

Before this change, in the hashTypeSet() function we first used dictFind() to look for the field, and if it did not exist we used dictAdd() to add it. Inside dictAdd() the dictionary looks for the field again, which is meaningless as we already know that the field does not exist. The optimization is to use dictAddRaw() instead of dictFind() plus dictAdd(): with dictAddRaw(), a new entry is added when the field does not exist, and all we then need to do is set the value of that entry, and set its key to 'sdsdup(field)' in the case that the 'HASH_SET_TAKE_FIELD' flag wasn't set (see the sketch after this message).
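A sketch of the single-lookup insert-or-update pattern, using the real dict.h API (dictAddRaw() returns NULL and reports the existing entry when the key is already present); the surrounding hash-type logic is simplified.

```c
static void hashSetSketch(dict *ht, sds field, sds value) {
    dictEntry *existing;
    dictEntry *de = dictAddRaw(ht, field, &existing);
    if (de) {
        /* field was absent: one lookup performed both find and add */
        dictSetKey(ht, de, sdsdup(field)); /* unless HASH_SET_TAKE_FIELD */
        dictSetVal(ht, de, value);
    } else {
        /* field exists: just replace the value */
        dictSetVal(ht, existing, value);
    }
}
```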
-
- 24 Apr, 2022 1 commit
-
-
Lu JJ authored
-
- 11 Apr, 2022 1 commit
-
-
Ernesto Rodriguez Reina authored
* Extending the use of hashTypeGetValue. The functions hashTypeExists, hashTypeGetValueLength and addHashFieldToReply had a similar pattern of calling hashTypeGetFromHashTable or hashTypeGetFromZipList depending on the underlying data structure. What those functions were doing is exactly what hashTypeGetValue does, so they were changed to use the existing hashTypeGetValue, making the code more consistent.

Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com>
-
- 05 Apr, 2022 1 commit
-
-
Lu JJ authored
Fixed a bug where using the `hincrbyfloat` or `hincrby` commands to make the field or value exceed `hash_max_listpack_value` did not change the object encoding of the hash structure.

Add a length check for field and value (a sketch follows this message): check the length of the value first; if it does not exceed `hash_max_listpack_value`, then check the length of the field. If the length of the field or value is too long, it reduces the efficiency of the listpack, and the object encoding would become hashtable after an AOF restart anyway, so this also keeps the encoding the same before and after an AOF restart.
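A sketch of the length check described above, with assumed helper names; the real check lives in the hincrby/hincrbyfloat paths in t_hash.c.

```c
/* Sketch: after an increment produces a new value, convert the hash
 * to the hashtable encoding if either field or value is too long for
 * the listpack encoding. */
void maybeConvertAfterIncr(robj *o, sds field, sds newval) {
    if (o->encoding == OBJ_ENCODING_LISTPACK &&
        (sdslen(newval) > server.hash_max_listpack_value ||
         sdslen(field)  > server.hash_max_listpack_value))
    {
        hashTypeConvert(o, OBJ_ENCODING_HT);
    }
}
```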
-
- 23 Jan, 2022 1 commit
-
-
Binbin authored
Summary of changes:
1. Rename `redisCommand->name` to `redisCommand->declared_name`; it is a const char * for native commands and SDS for module commands.
2. Store the [sub]command fullname in `redisCommand->fullname` (sds).
3. List subcommands in `ACL CAT`.
4. List subcommands in `COMMAND LIST`.
5. `moduleUnregisterCommands` now also frees the module subcommands.
6. RM_GetCurrentCommandName returns the full command name.

Other changes:
1. Add `addReplyErrorArity` and `addReplyErrorExpireTime`.
2. Remove the `getFullCommandName` function that is now useless.
3. Some cleanups around `fullname` since now it is SDS.
4. Delete the `populateSingleCommand` function from server.h that is useless.
5. Added tests to cover this change.
6. Add some module unload tests and fix the leaks.
7. Make error messages uniform; make sure they always contain the full command name and that it's quoted.
8. Fix some typos.

See the history in #9504, fixes #10124

Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
guybe7 <guy.benoish@redislabs.com>
-
- 21 Nov, 2021 1 commit
-
-
Oran Agra authored
Leak found by the corrupt-dump-fuzzer when using GCC ASAN, which seems to falsely report leaks on pointers kept only on the stack when calling exit. Instead, we now use _exit on panic / assert to skip these leak checks. Additionally, check for sanitizer warnings in the corrupt-dump-fuzzer between iterations, so that when something is found we know which test to relate it to (and it prints the reproduction command list).
-
- 04 Oct, 2021 1 commit
-
-
Oran Agra authored
- Fix possible heap corruption in ziplist and listpack resulting from trying to allocate more than the maximum size of 4GB.
- Prevent ziplist (hash and zset) from reaching a size above 1GB; it will be converted to HT encoding, as that's not a useful size.
- Prevent listpack (stream) from reaching a size above 1GB.
- XADD will start a new listpack if the new record may cause the previous listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB.
- The List type (ziplist in quicklist) was truncating strings that were over 4GB; now it'll respond with an error. A sketch of the kind of size guard involved appears below.

Co-authored-by:
sundb <sundbcn@gmail.com>
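A sketch of guarding listpack growth with `lpSafeToAdd()` (a helper added around this change); the exact call sites and fallback behavior differ per type in the real code.

```c
/* Sketch: refuse to grow the listpack past the allowed size and
 * switch encodings instead. Returns 1 if the append may proceed. */
static int hashTryAppend(robj *o, size_t field_len, size_t value_len) {
    unsigned char *lp = o->ptr;
    if (!lpSafeToAdd(lp, field_len + value_len)) {
        hashTypeConvert(o, OBJ_ENCODING_HT);
        return 0; /* caller should insert via the dict path */
    }
    return 1;
}
```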
-
- 24 Sep, 2021 1 commit
-
-
sundb authored
In the `HRANDFIELD`, `SRANDMEMBER` and `ZRANDMEMBER` commands, there are some strategies that could in some rare cases return an unfair random. These cases are where a small dict happens to be hashed unevenly. Specifically, when `count*ZRANDMEMBER_SUB_STRATEGY_MUL > size`, using `dictGetRandomKey` to randomize from a dict will result in an unfair random result.
-
- 09 Sep, 2021 1 commit
-
-
sundb authored
Part two of implementing #8702 (zset), after #8887.

## Description of the feature
Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.

## Rdb format changes
New `RDB_TYPE_ZSET_LISTPACK` rdb type.

## Rdb loading improvements
1) Pre-expansion of dict for validation of duplicate data for listpack and ziplist.
2) Simplifying the release of empty key objects when RDB loading.
3) Unify ziplist and listpack data verify methods for zset and hash, and move the code to rdb.c.

## Interface changes
1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
2) OBJECT ENCODING will return listpack instead of ziplist.

## Listpack improvements
1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from a listpack.
2) Improve the performance of `lpCompare`; converting from string to integer is faster than converting from integer to string.
3) Replace `snprintf` with `ll2string` to improve performance when converting numbers to strings in `lpGet()`.

## Zset improvements
1) Improve the performance of the `zzlFind` method: use `lpFind` instead of `lpCompare` in a loop (see the sketch after this message).
2) Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to delete an element of a zset.

## Tests
1) Add some unit tests for the `lpDeleteRange` and `lpDeleteRangeWithEntry` functions.
2) Add a zset RDB loading test.
3) Add a benchmark test for `lpCompare` and `ziplistCompare`.
4) Add an empty listpack zset corrupt dump test.
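A sketch of looking up a zset member in a listpack with `lpFind()` (real listpack API; `skip=1` hops over the score entry that follows each member); the real zzlFind in t_zset.c does more bookkeeping.

```c
#include <string.h>
#include <stdint.h>
#include "listpack.h"

unsigned char *findMember(unsigned char *zl, const char *member) {
    return lpFind(zl, lpFirst(zl), (unsigned char *)member,
                  (uint32_t)strlen(member), 1 /* skip the score entry */);
}
```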
-
- 10 Aug, 2021 1 commit
-
-
sundb authored
Part one of implementing #8702 (taking hashes first before other types).

## Description of the feature
1. Change ziplist-encoded hash objects to listpack encoding.
2. Convert existing ziplists at RDB loading time; an O(n) operation.

## Rdb format changes
1. Add RDB_TYPE_HASH_LISTPACK rdb type.
2. Bump RDB_VERSION to 10.

## Interface changes
1. New `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`).
2. OBJECT ENCODING will return `listpack` instead of `ziplist`.

## Listpack improvements
1. Support direct insert and replace of integer elements (rather than converting back and forth from string).
2. Add more listpack capabilities to match the ziplist ones (like `lpFind`, `lpRandomPairs` and such).
3. Optimize element length fetching, avoiding multiple calculations.
4. Use inline to avoid function call overhead.

## Tests
1. Add a new test for the RDB load time conversion.
2. Add the listpack unit tests (based on the one in ziplist.c).
3. Add a few "corrupt payload: fuzzer findings" tests, and slightly modify existing ones.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 05 Aug, 2021 1 commit
-
-
yoav-steinberg authored
Reduce dict struct memory overhead on 64-bit: the dict size goes down from jemalloc's 96 byte bin to its 56 byte bin.

Summary of changes:
- Remove `privdata` from callbacks and dict creation (this affects many files, see "Interface change" below).
- Meld the `dictht` struct into the `dict` struct to eliminate struct padding (this affects just dict.c and defrag.c).
- Eliminate the `sizemask` field; it can be calculated from the size when needed.
- Convert the `size` field into `size_exp` (exponent), utilizing one byte instead of 8 (sketched below).

Interface change: pass the dict pointer to dict type callback functions, instead of the removed privdata field. In the future, if we'd like to have private data in the callbacks, we can extract it from the dict type. We can extend dictType to include a custom dict struct allocator and use it to allocate more data at the end of the dict struct. This data can then be used to store private data later accessed by the callbacks.
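A simplified sketch of the `size_exp` idea (not the exact dict.h layout): store the table size as an exponent and derive size and sizemask on demand.

```c
#include <stdint.h>

struct dictht_sketch {
    void **table;
    int8_t size_exp;   /* 1 byte replaces two 8-byte fields */
};

/* size and mask derived on demand; -1 means "no table yet" */
#define DICTHT_SIZE(exp)      ((exp) == -1 ? 0 : (unsigned long)1 << (exp))
#define DICTHT_SIZE_MASK(exp) ((exp) == -1 ? 0 : DICTHT_SIZE(exp) - 1)
```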
-
- 05 Jul, 2021 1 commit
-
-
Binbin authored
Due to a copy-paste bug, it used to reply with a null response rather than an empty array. This commit includes new tests that look at the RESP response directly, in order to be able to tell the difference between them.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 30 Jun, 2021 1 commit
-
-
luvine authored
1. Add one key-value pair to myhash, where the lengths of both key and value are less than hash-max-ziplist-value, for example:
   > hset myhash key value
2. Then execute the following command (with a value whose length is greater than hash-max-ziplist-value):
   > hsetnx myhash key value1
3. This will add nothing, but the encoding of "myhash" changes from ziplist to dict, even though there is only one key-value pair in "myhash" and both key and value are shorter than hash-max-ziplist-value.
-
- 22 Jun, 2021 1 commit
-
-
Binbin authored
Remove extra semicolon.
-
- 15 Jun, 2021 1 commit
-
-
DarrenJiang13 authored
-
- 27 Apr, 2021 1 commit
-
-
Andy Pan authored
-
- 14 Apr, 2021 1 commit
-
-
Bonsai authored
-
- 22 Feb, 2021 1 commit
-
-
Wen Hui authored
SRANDMEMBER with a negative count (non-unique) can return the same member multiple times, and the order of elements in the returned collection matters. For these reasons, returning a RESP3 Set type is not valid for the negative count, but also not really valid for the positive (unique) variant either (the command returns an array of random picks, not a set).

This PR also contains a minor optimization for SRANDMEMBER, HRANDFIELD, and ZRANDMEMBER, to avoid the temporary dict from being rehashed while it grows.

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 16 Feb, 2021 1 commit
-
-
Viktor Söderqvist authored
Avoids memmove and reallocs when replacing a ziplist element of the same encoded size as the new value. Affects HSET, HINCRBY, HINCRBYFLOAT (via hashTypeSet) and LSET (via quicklistReplaceAtIndex).
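A sketch of the same-size in-place replace described above; `encodedSize()` and `writeEntryInPlace()` are hypothetical helpers, while `ziplistDelete()`/`ziplistInsert()` are the real ziplist API used on the fallback path.

```c
size_t oldsz = encodedSize(old_entry);               /* hypothetical */
size_t newsz = encodedSize(new_value, new_len);      /* hypothetical */

if (newsz == oldsz) {
    /* same encoded size: overwrite in place, no realloc/memmove */
    writeEntryInPlace(old_entry, new_value, new_len);
} else {
    /* sizes differ: fall back to delete + insert */
    zl = ziplistDelete(zl, &old_entry);
    zl = ziplistInsert(zl, old_entry, new_value, new_len);
}
```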
-