- 12 Sep, 2024 1 commit
Moti Cohen authored
Add a basic iterator API for ebuckets: start, next, nextBucket, and stop.
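A minimal usage sketch of such an iterator, assuming hypothetical prototypes (the function names follow the commit message, but the exact signatures and the helper calls are illustrative, not the real API):

```c
/* Illustrative only: the real iterator lives in ebuckets.c/ebuckets.h and its
 * prototypes may differ. processItem()/shouldSkipRestOfBucket() are hypothetical. */
EbucketsIterator iter;
eItem item;

ebStart(&iter, eb);                          /* begin iterating ebuckets `eb` */
while ((item = ebNext(&iter)) != NULL) {     /* next item, in expiration order */
    if (shouldSkipRestOfBucket(item))
        ebNextBucket(&iter);                 /* jump ahead to the next bucket */
    else
        processItem(item);
}
ebStop(&iter);                               /* finish and release iterator state */
```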
- 03 Jun, 2024 1 commit
Moti Cohen authored
In the `ebuckets` structure, on `ebExpire()`, if the callback indicates that the item's expiration time should be updated and the item returned back to ebuckets (`ACT_UPDATE_EXP_ITEM`), then the returned value `nextExpireTime` should be updated as needed. The invalid value of `nextExpireTime` was also changed from 0 to `EB_EXPIRE_TIME_INVALID`.
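For context, a hedged sketch of an on-expire callback that takes the `ACT_UPDATE_EXP_ITEM` path; only that action name is taken from the commit text, while the item type, its fields, the helpers, and the "remove" action name are assumptions for illustration:

```c
/* Hypothetical callback passed to ebExpire(). MyItem, itemShouldLiveLonger(),
 * deleteItem() and ACT_REMOVE_EXP_ITEM are assumptions, not the real API. */
static ExpireAction onExpireCb(eItem item, void *ctx) {
    MyItem *it = item;
    if (itemShouldLiveLonger(it)) {
        it->expireAt += 1000;            /* push the deadline 1 second forward */
        return ACT_UPDATE_EXP_ITEM;      /* hand the item back to ebuckets; after the
                                          * fix, ebExpire() refreshes nextExpireTime
                                          * accordingly instead of leaving it stale */
    }
    deleteItem(it);
    return ACT_REMOVE_EXP_ITEM;          /* assumed name of the "remove item" action */
}
```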
- 29 May, 2024 1 commit
Moti Cohen authored
* For the replica's sake, rewrite the commands `H*EXPIRE*`, `HSETF`, and `HGETF` to carry absolute unix time in msec.
* On active expiration of a field, propagate HDEL to the replica (`propagateHashFieldDeletion()`).
* On lazy expiration, propagate HDEL to the replica (`hashTypeGetValue()` now calls `hashTypeDelete()`; it also takes care to call `propagateHashFieldDeletion()`).
* Fix the `H*EXPIRE*` commands such that if the `LT` flag is given and the field has no expiration, the condition is considered valid.

Note: replicas do not perform any active expiration and should avoid lazy expiration. On a replica, `hashTypeGetValue()` does not check expiration (as long as the master did not request to delete the field, it is valid).

TODO:
* Attach `dbid` to the HASH metadata. See [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850).

---------

Co-authored-by: debing.sun <debing.sun@redis.com>
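A hedged sketch of the underlying idea: a relative TTL is converted into an absolute unix-time deadline in msec before propagation, so the replica applies the same deadline no matter when it executes the command. The helper below is hypothetical and self-contained; it is not the code used in the commit:

```c
#include <sys/time.h>

/* Current unix time in milliseconds. */
static long long unixTimeMs(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* Turn "expire this hash field in ttlMs from now" into the absolute msec
 * timestamp that a rewritten command would carry to the replica. */
static long long relativeTtlToAbsoluteMs(long long ttlMs) {
    return unixTimeMs() + ttlMs;
}
```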
- 14 May, 2024 1 commit
debing.sun authored
## Background
1. All hash objects that contain HFE are referenced by db->hexpires.
2. All fields in a dict-encoded hash object with HFE are referenced by an ebucket.

So when we defrag a hash object, or a field in a dict with HFE, we also need to update the references to them.

## Interface
1. Add a new interface `ebDefragItem`, which can accept a defrag callback to defrag items in ebuckets and simultaneously update their references in the ebucket.

## Main changes
1. The key type of the dict of a hash object is no longer sds, so add a new `activeDefragHfieldDict()` to defrag the dict instead of `activeDefragSdsDict()`.
2. When we defrag the dict of a hash object using `dictScanDefrag()`, we always set the `defragKey` callback of `dictDefragFunctions` to NULL, because we can't reallocate a field without updating its reference in ebuckets. Instead, we defrag the field and update its reference in the scan callback (`dictScanFunction`) passed to `dictScanDefrag()`.
3. When we defrag a hash robj with HFE, we use `ebDefragItem` to defrag the robj and update its reference in db->hexpires.

## TODO
Defrag the ebuckets structure incrementally; this will be handled in a future PR.

---------

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Moti Cohen <moti.cohen@redis.com>
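A hedged fragment of how the new interface might be used; the argument list of `ebDefragItem` and the variable names are assumptions based on the description above, not the actual code:

```c
/* Hypothetical defrag callback: try to move the item to a fresher allocation.
 * activeDefragAlloc() returns NULL when the pointer was not moved. */
static eItem hashDefragCb(eItem item) {
    void *newptr = activeDefragAlloc(item);
    return newptr ? newptr : item;
}

/* Illustrative call site (argument list assumed): defrag a hash robj with HFE
 * and, in the same step, fix the reference to it held by db->hexpires. */
ebDefragItem(&db->hexpires, hashobj, hashDefragCb);
```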
- 25 Apr, 2024 1 commit
Moti Cohen authored
Unify the infra of `HSETF`, `HEXPIRE`, and `HSET`, and provide an API for RDB load as well. Whereas setting plain fields is rather straightforward, setting expiration times on fields can be time-consuming and complex, since each update of an expiration time updates not only the `ebuckets` of the corresponding hash but possibly also the `ebuckets` of the global HFE DS. A sequence of field updates with expiration for a given hash therefore needs to be batched, such that the global HFE DS is updated only once the sequence is done. To do so, follow this scheme (see the sketch below):
1. Call `hashTypeSetExInit()` to initialize the HashTypeSetEx struct.
2. Call `hashTypeSetEx()` one or more times, once per field/expiration update.
3. Call `hashTypeSetExDone()` for notification and for updating the global HFE.
If expiration is not required, avoid this API and use `hashTypeSet()` instead.
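A hedged sketch of the three-step scheme; the real prototypes in t_hash.c take more (and different) arguments, so treat the argument lists, `numFields`, `fields`, and `expireTimes` as placeholders:

```c
HashTypeSetEx exInfo;

/* 1. Initialize the batch for this hash object (arguments simplified). */
hashTypeSetExInit(key, hashObj, &exInfo);

/* 2. Apply each field/expiration update. The hash's own ebuckets are updated
 *    here, but the global HFE DS is not touched yet. */
for (int i = 0; i < numFields; i++)
    hashTypeSetEx(db, hashObj, fields[i], expireTimes[i], &exInfo);

/* 3. Finish: fire notifications and update the global HFE DS exactly once. */
hashTypeSetExDone(&exInfo);
```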
- 18 Apr, 2024 1 commit
Moti Cohen authored
- Add ebuckets & mstr data structures
- Integrate active & lazy expiration
- Add most of the commands
- Add support for dict (listpack is missing)

TODOs: RDB, notification, listpack, HSET, HGETF, defrag, aof