- 04 Feb, 2024 2 commits
-
-
Daz authored
The JSON file lacks the following structural API changes:
- GEORADIUSBYMEMBER: add the ANY option for COUNT since 6.2.0.
- GEORADIUSBYMEMBER_RO: add the ANY option for COUNT since 6.2.0.
- GEORADIUS_RO: add support for uppercase unit names since 7.0.0.
- GEORADIUSBYMEMBER_RO: add support for uppercase unit names since 7.0.0.
---------
Signed-off-by: daz-3ux <daz-3ux@proton.me>
Co-authored-by: bodong.ybd <bodong.ybd@alibaba-inc.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: yangpengda.333 <yangpengda.333@bytedance.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
-
Yanqi Lv authored
Currently, we compute `db->avg_ttl` after each short `dbScan` sweep (a few buckets, without checking the time limit). But after each sweep we don't have much data, which makes `db->avg_ttl` less precise. For example, even if we scan the whole db, we can't get the exact avg_ttl because we split the data: due to the running average, if we issue 16 calls to scan, the first call gets a lower weight and the last one a higher weight. We should instead calculate `db->avg_ttl` after completing more of the db iteration (at the time-limit check, or when starting to iterate the next db), because by then we have more sample data for this db and can get a more accurate result. In the best case, if we scan the whole db, we get the exact avg_ttl. In this PR, we postpone the avg_ttl calculation until the time-limit check or the iteration of the next db, so we can accumulate more data and get a more precise avg_ttl. Note that we still need to make sure to decay the old TTLs at the same speed as before, which is why we run the decay mechanism several times, or use the pow formula; see the comment in the code. In my experiments, this PR improves accuracy by 89% or 52% under different workloads.
Co-authored-by: Oran Agra <oran@redislabs.com>
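A minimal sketch of the batched decay described above (the function name and the 2% per-sample weight are assumptions for illustration, not the exact Redis code):
```C
#include <math.h>

/* Fold a whole batch of TTL samples into the running average at once,
 * decaying the old average at the same per-sample rate as before. */
void updateAvgTtl(long long *avg_ttl, long long ttl_sum, long long ttl_samples) {
    if (ttl_samples == 0) return;
    long long batch_avg = ttl_sum / ttl_samples;
    if (*avg_ttl == 0) {
        *avg_ttl = batch_avg;
    } else {
        /* pow(0.98, n) applies n single-sample 2% decays in one step. */
        double keep = pow(0.98, (double)ttl_samples);
        *avg_ttl = (long long)(*avg_ttl * keep + batch_avg * (1.0 - keep));
    }
}
```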
-
- 01 Feb, 2024 1 commit
-
-
Yanqi Lv authored
In Redis, RDB is produced mainly in three scenarios:
- backup, such as the `bgsave` and `save` commands
- full sync in replication
- AOF rewrite if `aof-use-rdb-preamble` is yes

We also have some RDB flags to identify the purpose of RDB saving:
```C
/* flags on the purpose of rdb save or load */
#define RDBFLAGS_NONE 0                 /* No special RDB loading. */
#define RDBFLAGS_AOF_PREAMBLE (1<<0)    /* Load/save the RDB as AOF preamble. */
#define RDBFLAGS_REPLICATION (1<<1)     /* Load/save for SYNC. */
```
But currently, these flags don't exactly match the purposes of RDB saving. For example, `rdbSaveRioWithEOFMark` calls `startSaving` with `RDBFLAGS_REPLICATION` but `rdbSaveRio` with `RDBFLAGS_NONE`:
```C
int rdbSaveRioWithEOFMark(int req, rio *rdb, int *error, rdbSaveInfo *rsi) {
    char eofmark[RDB_EOF_MARK_SIZE];

    startSaving(RDBFLAGS_REPLICATION);
    getRandomHexChars(eofmark,RDB_EOF_MARK_SIZE);
    if (error) *error = 0;
    if (rioWrite(rdb,"$EOF:",5) == 0) goto werr;
    if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
    if (rioWrite(rdb,"\r\n",2) == 0) goto werr;
    if (rdbSaveRio(req,rdb,error,RDBFLAGS_NONE,rsi) == C_ERR) goto werr;
    if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
    stopSaving(1);
    return C_OK;

werr: /* Write error. */
    /* Set 'error' only if not already set by rdbSaveRio() call. */
    if (error && *error == 0) *error = errno;
    stopSaving(0);
    return C_ERR;
}
```
In this PR, I refine the purpose of RDB saving with accurate flags.
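Under that refinement, the inner save would carry the same purpose flag as `startSaving()`; a one-line sketch of the idea (not necessarily the exact final diff):
```C
if (rdbSaveRio(req,rdb,error,RDBFLAGS_REPLICATION,rsi) == C_ERR) goto werr;
```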
-
- 31 Jan, 2024 2 commits
-
-
Binbin authored
When we use a timer to unblock a client in a module, if the timer period and the block timeout are very close, they will unblock the client in the same event loop, and it will trigger the assertion. The reason is that in moduleBlockedClientTimedOut we protect against re-processing, so we don't actually call updateStatsOnUnblock (see #12817), and thus we are not able to reset c->duration. The root cause is that unblockClientOnTimeout() didn't realize that bc had already been unblocked. We add a function to the module to determine whether bc is blocked, and use it in unblockClientOnTimeout() to exit early (see the sketch below). Here is the stack:
```
beforeSleep
blockedBeforeSleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
unblockClient
resetClient
-- assertion, crash the server
'c->duration == 0' is not true
```
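A minimal sketch of the early-return guard, where `moduleBlockedClientIsUnblocked` is a hypothetical helper name standing in for the predicate this commit adds:
```C
/* Inside unblockClientOnTimeout(): exit early if the module has
 * already unblocked this client in the same event loop; unblocking
 * again would hit the c->duration assertion. */
if (c->bstate.btype == BLOCKED_MODULE) {
    RedisModuleBlockedClient *bc = c->bstate.module_blocked_handle;
    if (bc && moduleBlockedClientIsUnblocked(bc)) return; /* hypothetical helper */
}
```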
-
Binbin authored
The block timeout is passed in the test case, but we do not pass in the timeout_callback, and it will crash when unblocking. For this case, moduleBlockedClientTimedOut now checks timeout_callback. Here is the stack:
```
beforeSleep
blockedBeforeSleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
replyToBlockedClientTimedOut
moduleBlockedClientTimedOut
-- timeout_callback is NULL, invalidFunctionWasCalled
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
```
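A minimal sketch of the guard, assuming the check sits right before the callback invocation:
```C
/* Don't invoke a timeout callback that was never registered. */
if (bc->timeout_callback == NULL) return;
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
```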
-
- 30 Jan, 2024 5 commits
-
-
Chen Tianjie authored
Add a way to HSCAN a hash key and get only the field names. The command syntax is now:
```
HSCAN key cursor [MATCH pattern] [COUNT count] [NOVALUES]
```
When `NOVALUES` is given, the command will only return keys in the hash.
---------
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
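For example (key and field names illustrative): on a hash `myhash` with fields `f1` and `f2`, `HSCAN myhash 0 NOVALUES` returns the next cursor plus `f1` and `f2` only, instead of the interleaved `f1`, `v1`, `f2`, `v2`.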
-
Slava Koyfman authored
Adds the ability to kill clients older than a specified age. Also fixes the age calculation in `catClientInfoString` to use `commandTimeSnapshot` instead of the old `server.unixtime`, and adds missing documentation for `CLIENT KILL ID` to the output of `CLIENT help`.
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
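For example (shown illustratively, with the age in seconds): `CLIENT KILL MAXAGE 3600` would close every connection older than one hour.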
-
Binbin authored
This was introduced in #13004, which missed this assignment. It causes the timeout to be a random value (possibly less than now), and then in the `Unblock by timer` test the client is unblocked and calls the timeout_callback; since the callback is NULL, the server will crash. The crash stack is:
```
beforeSleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
replyToBlockedClientTimedOut
moduleBlockedClientTimedOut
-- the timeout_callback is NULL, invalidFunctionWasCalled
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
```
-
Binbin authored
This allows specifying the timeout value for opening the TCP connection to a server. The timeout defaults to 0, meaning no limit (depending on the OS). It can be specified using the new `-t` switch. Revives #3764, fixes #3763.
---------
Co-authored-by: Itamar Haber <itamar@redislabs.com>
Co-authored-by: yoav-steinberg <yoav@redislabs.com>
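For example, `redis-cli -t 5 -h example.host ping` (host name illustrative) would give up on the connection attempt after roughly five seconds instead of waiting for the OS default.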
-
Binbin authored
In #11012, we reprocess the command when a client is unblocked on keys. In some blocking commands, for example in the XREADGROUP BLOCK scenario, this re-processing recalculates the block timeout, causing the blocking time to be reset. This commit adds a new CLIENT_REPROCESSING_COMMAND client flag to explicitly let the command know that it is being re-processed; later, in blockForKeys, we will not reset the timeout (see the sketch below). Affected BLOCK cases:
- list / zset / stream, added test cases for each.

Unaffected cases:
- module (never re-processes the commands).
- WAIT / WAITAOF (never re-processes the commands).

Fixes #12998.
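A minimal sketch of the idea (surrounding code simplified, not the exact diff):
```C
void blockForKeys(client *c, int btype, robj **keys, int numkeys,
                  mstime_t timeout, int unblock_on_nokey) {
    /* Only take the timeout on the first processing of the command;
     * on re-processing, keep the original deadline. */
    if (!(c->flags & CLIENT_REPROCESSING_COMMAND))
        c->bstate.timeout = timeout;
    /* ... register the client on the keys as before ... */
}
```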
-
- 29 Jan, 2024 3 commits
-
-
Chen Tianjie authored
The function `tryResizeHashTables` only attempts to shrink the dicts that have keys (a change from #11695). This was a serious problem until the change in #12850, since it meant that if all keys were deleted, we wouldn't shrink the dict. But still, both dictShrink and dictExpand may be blocked by a fork child process, therefore the cron job needs to perform both dictShrink and dictExpand, for not just non-empty dicts, but all dicts in DBs. What this PR does:
1. Try to resize all dicts in DBs (not just non-empty ones, as it was since #12850).
2. Handle both shrink and expand (not just shrink, as it was since forever).
3. Refactor some APIs about dict resizing (get rid of `htNeedsShrink` and `dictShrinkToFit`, and expose `dictShrinkIfNeeded` and `dictExpandIfNeeded`, which already contain all the code of the functions we got rid of, to make the APIs neater).
4. In the `Don't rehash if redis has child process` test, now that cron does the resizing, we no longer need to write to the DB after the child process is killed, and can wait for the cron to expand the hash table.
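A sketch of the resulting cron flow, using the exposed APIs (simplified, and assuming the per-db `dict`/`expires` fields of that era):
```C
/* Called from serverCron: let every DB dict both shrink and expand.
 * Resizing is avoided internally while a fork child is running. */
void tryResizeHashTables(int dbid) {
    redisDb *db = &server.db[dbid];
    dictShrinkIfNeeded(db->dict);
    dictExpandIfNeeded(db->dict);
    dictShrinkIfNeeded(db->expires);
    dictExpandIfNeeded(db->expires);
}
```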
-
Ozan Tezcan authored
Modules may want to handle allocation failures gracefully. This adds RM_TryCalloc() and RM_TryRealloc() for that. RM_TryAlloc() was added before: https://github.com/redis/redis/pull/10541
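A sketch of the intended usage in a module command (error message illustrative):
```C
/* Unlike RedisModule_Calloc(), the Try variant returns NULL on failure
 * instead of panicking, so the module can degrade gracefully. */
long long *counters = RedisModule_TryCalloc(n, sizeof(long long));
if (counters == NULL)
    return RedisModule_ReplyWithError(ctx, "ERR not enough memory");
```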
-
Binbin authored
Fix maxmemory-samples stack overflow crash in evictionPoolPopulate, limit its value to [1,64] (#13000)
We have not limited the value of maxmemory-samples in the past, so it could be set very large. If it is, we get a stack overflow in evictionPoolPopulate when key eviction is triggered. There is no reason for this config to be set that high, so just limit its range to [1,64].
-
- 27 Jan, 2024 1 commit
-
-
Roshan Khatri authored
#### Problem Statement:
For any read/update operation during rehashing, we do ~10+ random DRAM lookups for the rehashing itself, because we use `rehashidx` to rehash 10 buckets whose dict entries most likely aren't cached in the CPU or near the bucket we are operating on. If those random buckets are empty, the rehashing step during that command execution is effectively skipped.

#### Implementation:
To reduce the performance regression while the dict is rehashing, we determine the index at which the key would be stored in the 0th HT and check whether that index has already been rehashed. If not, we rehash the bucket containing the key, moving it from the 0th HT to the 1st HT. If the key has already been rehashed, we perform the random-access bucket rehash (using `rehashidx`), then verify whether rehashing is still ongoing and look up the key in the respective HT. This ensures rehashing is not skipped in any command call, and that each call rehashes either the bucket of the key or a random bucket. A sketch of the lookup path follows below.

#### Changes in this PR:
- Added a new method `dictBucketRehash` to perform rehash on a single bucket.
- Helper function `moveKeysInBucketOldtoNew` for `dictRehash` and `dictBucketRehash` to move all the keys in a bucket from the old to the new HT.
- Helper function `verifyMoreRehashRequired` for `dictRehash` and `dictBucketRehash` to check if we have already rehashed the whole table and if more rehashing is required.

### Benchmark:
- This PR still shows **~13%** improvement in the latency during rehashing.
- Rehashing is now **~2%** faster for this PR when compared to unstable.
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
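A minimal sketch of the per-key path described above (simplified; the field names follow the dict internals of that era, and `dictBucketRehash`'s exact signature is an assumption):
```C
/* Inside a find/add path, while dictIsRehashing(d): */
uint64_t idx = hash & DICTHT_SIZE_MASK(d->ht_size_exp[0]);
if ((long)idx >= d->rehashidx) {
    /* The key's own bucket hasn't been rehashed yet: rehash exactly
     * that bucket, which we are about to touch anyway (cache-friendly). */
    dictBucketRehash(d, idx);
} else {
    /* Already moved to ht[1]: fall back to one rehashidx-based step. */
    _dictRehashStep(d);
}
```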
-
- 26 Jan, 2024 1 commit
-
-
judeng authored
The issue was introduced in #12799: the script cannot find the correct src and deps directories, so it always reports dirty as 0.
-
- 25 Jan, 2024 2 commits
-
-
Binbin authored
Code incorrectly set the limit value to 1024MB. Introduced in #12961.
-
zhaozhao.zz authored
Fix #9926, and introduce an alternative method to prevent abuse of transactions:
1. Revert #5454 (which was blocking read-only transactions in OOM state), and break the tie between MULTI state memory usage and the server OOM state. Meaning that we'll limit the total memory a single client can queue, and do that unconditionally, regardless of the server being OOM or not.
2. To prevent abuse of transactions, we use the `client-query-buffer-limit` to restrict the size of the transaction. Because the commands cached in the MULTI/EXEC queue have not been executed yet, they are also considered a part of the "query buffer" in a broader sense. In other words, the commands in the MULTI queue and the `querybuf` of the client together constitute the "query buffer". When they exceed the limit, the connection is disconnected.

The reasoning is that it's sensible to send a single command with a huge (1GB) argument, and it's sensible to send a transaction with many small commands, but it's probably not common to send a long transaction with many huge arguments (which would consume a lot of memory before even being executed). If anyone runs into that, they can simply increase the `client-query-buffer-limit` config.

P.S. To prevent DoS attacks, unauthenticated clients have a separate hard limit: their query buffer must not exceed a maximum of 1MB. In other words, if the query buffer of an unauthenticated client exceeds 1MB, or the `client-query-buffer-limit` (if it is set to a value smaller than 1MB), the connection is disconnected.
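A sketch of the combined check this implies (field names per the multiState of that era; the error text and exact placement are illustrative):
```C
/* The queued MULTI commands and the unparsed input together form the
 * "query buffer" in the broader sense described above. */
if (sdslen(c->querybuf) + c->mstate.argv_len_sums > server.client_max_querybuf_len) {
    addReplyError(c, "transaction exceeds client-query-buffer-limit"); /* illustrative */
    freeClientAsync(c);
}
```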
-
- 23 Jan, 2024 5 commits
-
-
Binbin authored
In the following case the sender may be unknown, so we need to add a NULL check for sender:
```C
/* If this is a MEET packet from an unknown node, we still process
 * the gossip section here since we have to trust the sender because
 * of the message type. */
if (!sender && type == CLUSTERMSG_TYPE_MEET)
    clusterProcessGossipSection(hdr,link);
```
-
Binbin authored
In #11568 we removed the NOSCRIPT flag from commands, e.g. removing the NOSCRIPT flag from WAIT, aiming to allow them in scripts and let them implicitly behave in the non-blocking way. This PR removes the NOSCRIPT flag from WAITAOF, just like WAIT (to be symmetrical). It also adds the BLOCKING flag to WAIT and WAITAOF.
-
Binbin authored
This PR does some cleanups around functions:
- Drop the comment about Libraries Ctx, since we have the comment in functionsLibCtx; no need to maintain multiple copies.
- Remove the outdated comment about the dropped library description.
- Remove the unused desc and code vars in functionExtractLibMetaData.
- Fix the engines_nemory typo, changed it to engines_memory.
- Remove the outdated comment about FUNCTION CREATE and FUNCTION INFO; FUNCTION CREATE was renamed to FUNCTION LOAD.
- Check in initServer whether the return of functionsInit is OK.
-
Oran Agra authored
Seems that we forgot to update the array in redis-check-rdb.
-
Harkrishn Patro authored
Currently slowlog gets disabled if slowlog-log-slower-than is set to less than zero. I think we should also disable it if slowlog-max-len is set to zero. We apply the same logic to acllog-max-len.
-
- 22 Jan, 2024 2 commits
-
-
Brennan authored
There have been occasional instances of memory corruption (through code bugs or bit flips) leading to invalid node information being gossiped around. To prevent this invalid information from spreading, we verify that the node IDs in received gossip are in an acceptable format, and disregard any gossiped nodes with invalid IDs. This PR uses the existing verifyClusterNodeId function to check the validity of the gossiped node IDs, and if an invalid one is encountered, logs raw byte information to help debug the corruption.
---------
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
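A sketch of the check inside the gossip-processing loop, assuming `verifyClusterNodeId(name, len)` returns C_ERR on invalid input as described (log text illustrative):
```C
/* Inside clusterProcessGossipSection(), for each gossip entry g: */
if (verifyClusterNodeId(g->nodename, CLUSTER_NAMELEN) == C_ERR) {
    /* Don't propagate corrupted node IDs; log raw bytes to aid debugging. */
    serverLog(LL_WARNING, "Ignoring gossip about node with invalid ID");
    continue;  /* skip this entry, keep processing the rest */
}
```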
-
zhaozhao.zz authored
Background: some modules need to know the `dbid` information, such as the function used during RDB loading:
```C
robj *rdbLoadObject(int rdbtype, rio *rdb, sds key, int dbid, int *error) {
....
    moduleInitIOContext(io,mt,rdb,&keyobj,dbid);
```
However, during replication, the "tempDb" created for diskless RDB loading is not correctly set with the dbid. This leads to passing the wrong dbid to the `rdbLoadObject` function (as tempDb uses zcalloc, all ids are 0).
```C
disklessLoadInitTempDb()->rdbLoadRioWithLoadingCtx()->
    /* Read value */
    val = rdbLoadObject(type,rdb,key,db->id,&error);
```
To fix it, set the correct ID (relative index) for the tempdb.
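A minimal sketch of the fix, assuming the usual tempDb allocation shape:
```C
redisDb *tempDb = zcalloc(sizeof(redisDb) * server.dbnum);
for (int i = 0; i < server.dbnum; i++) {
    tempDb[i].id = i;  /* zcalloc left every id at 0; set the real index */
}
```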
-
- 19 Jan, 2024 3 commits
-
-
Yanqi Lv authored
In #12838, we misused the safe iterator of the client dict, so we can't catch a synchronous release of the client if there is a bug. Since we realize that clients (even subscribers) are released with async free, we change the safe iterators of the client dict into unsafe iterators in `pubsub.c`. I also removed redundant code.
-
Yanqi Lv authored
Before this change (most recently modified in https://github.com/redis/redis/pull/12850#discussion_r1421406393), the trigger for the normal expand threshold was 100% utilization and the trigger for the normal shrink threshold was 10% (HASHTABLE_MIN_FILL). During fork (DICT_RESIZE_AVOID), when we want to avoid rehashing, the trigger thresholds were multiplied by 5 (`dict_force_resize_ratio`), meaning 500% for expand and 2% (100/10/5) for shrink.

However, in `dictRehash` (the incremental rehashing), the rehashing threshold for shrinking during fork (DICT_RESIZE_AVOID) was 20% by mistake. This meant that if shrinking was triggered while `dict_can_resize` was `DICT_RESIZE_ENABLE` (threshold 10%), the rehashing could continue when `dict_can_resize` became `DICT_RESIZE_AVOID`. This would cause unwanted copy-on-write damage.

It would make sense to make the thresholds of the rehash trigger and of the incremental rehashing the same. However, in one we compare the size of the hash table to the number of records, and in the other we compare the size of ht[0] to the size of ht[1], so the formula is not exactly the same. To make things easier, we change all the thresholds to powers of 2: the normal shrinking threshold is changed from 100/10 (i.e. 10%) to 100/8 (i.e. 12.5%), and we change the fork-time multiplier from 5 to 4, i.e. from 500% to 400% for expand, and from 2% (100/10/5) to 3.125% (100/8/4) for shrink. The resulting thresholds are summarized below.
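In tabular form (derived from the numbers above):

| Mode | Expand trigger | Shrink trigger |
| --- | --- | --- |
| Normal (DICT_RESIZE_ENABLE) | used/size >= 1 (100%) | used/size <= 1/8 (12.5%) |
| During fork (DICT_RESIZE_AVOID) | used/size >= 4 (400%) | used/size <= 1/32 (3.125%) |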
-
debing.sun authored
Fix #12785 and other race condition issues. See the following isolated comments. The following report was obtained using the thread SANITIZER:
```sh
make SANITIZER=thread
./runtest-moduleapi --config io-threads 4 --config io-threads-do-reads yes --accurate
```

1. Fixed thread-safety issue in RM_UnblockClient(). Related discussion: https://github.com/redis/redis/pull/12817#issuecomment-1831181220
   * When blocking a client in a module using `RM_BlockClientOnKeys()` or `RM_BlockClientOnKeysWithFlags()` with a timeout_callback, calling RM_UnblockClient() in module threads can lead to race conditions in `updateStatsOnUnblock()`.
     - Introduced: Version: 6.2, PR: #7491
     - Touches: `server.stat_numcommands`, `cmd->latency_histogram`, `server.slowlog`, and `server.latency_events`
     - Harm Level: High. Potentially corrupts the memory data of `cmd->latency_histogram`, `server.slowlog`, and `server.latency_events`.
     - Solution: Differentiate whether the call to moduleBlockedClientTimedOut() comes from the module or the main thread. Since we can't know if RM_UnblockClient() comes from module threads, we always assume it does and let `updateStatsOnUnblock()` asynchronously update the unblock status.
   * When an error reply is issued in timeout_callback(), ctx is not thread-safe, eventually leading to race conditions in `afterErrorReply`.
     - Introduced: Version: 6.2, PR: #8217
     - Touches: `server.stat_total_error_replies`, `server.errors`
     - Harm Level: High. Potentially corrupts the memory data of `server.errors`.
     - Solution: Make the ctx in `timeout_callback()` `REDISMODULE_CTX_THREAD_SAFE`, and asynchronously reply errors to the client.

2. Made the RM_Reply*() family of APIs thread-safe. Related discussion: https://github.com/redis/redis/pull/12817#discussion_r1408707239
   Call chain: `RM_Reply*()` -> `_addReplyToBufferOrList()` -> touches server.current_client
   - Introduced: Version: 7.2.0, PR: #12326
   - Harm Level: None. Since the module fake client won't have the `CLIENT_PUSHING` flag, even if we touch server.current_client, we can still exit on the `c->flags & CLIENT_PUSHING` check.
   - Solution: Check `c->flags & CLIENT_PUSHING` earlier.

3. Made freeClient() thread-safe. Fix #12785.
   - Introduced: Version: 4.0, Commit: https://github.com/redis/redis/commit/3fcf959e609e850a114d4016843e4c991066ebac
   - Harm Level: Moderate
     * Triggers an assertion. It happens when the module thread calls freeClient while the io-thread is in progress, which just triggers an assertion and doesn't create any race conditions.
     * Touches `server.current_client`, `server.stat_clients_type_memory`, and `clientMemUsageBucket->clients`. It happens between the main thread and the module threads, and may cause data corruption:
       1. Erroneously resets `server.current_client` to NULL, but theoretically this won't happen, because the module has already reset `server.current_client` to the old value before entering freeClient.
       2. Corrupts `clientMemUsageBucket->clients` in updateClientMemUsageAndBucket().
       3. Causes the server.stat_clients_type_memory memory statistics to be inaccurate.
   - Solution:
     * No longer count memory usage on fake clients, to avoid updating `server.stat_clients_type_memory` in freeClient.
     * No longer reset `server.current_client` in unlinkClient, because the fake client won't be evicted or disconnected in the middle of the process.
     * Assert `io_threads_op == IO_THREADS_OP_IDLE` only if c is not a fake client.

4. Fixed freeing client args without the GIL. Related discussion: https://github.com/redis/redis/pull/12817#discussion_r1408706695
   When freeing retained strings in the module thread (refcount decr), or using them in some way (refcount incr), we should do so while holding the GIL; otherwise, they might be freed while the main thread is simultaneously processing the unblock-client state.
   - Introduced: Version: 6.2.0, PR: #8141
   - Harm Level: Low. Triggers an assertion, a double free, or a memory leak.
   - Solution: Document that module API users need to ensure any access to these retained strings is done with the GIL locked.

5. Fixed adding a fake client to server.clients_pending_write, which would incorrectly log the memory usage for the fake client. Related discussion: https://github.com/redis/redis/pull/12817#issuecomment-1851899163
   - Introduced: Version: 4.0, Commit: https://github.com/redis/redis/commit/9b01b64430fbc1487429144d2e4e72a4a7fd9db2
   - Harm Level: None. Only results in a NOP.
   - Solution:
     * Don't add fake clients to server.clients_pending_write.
     * Add a c->conn assertion to updateClientMemUsageAndBucket() and updateClientMemoryUsage() to avoid the same issue in the future. So now it is the responsibility of the callers of both of them to avoid passing in a fake client.

6. Fixed calling RM_BlockedClientMeasureTimeStart() and RM_BlockedClientMeasureTimeEnd() without the GIL.
   - Introduced: Version: 6.2, PR: #7491
   - Harm Level: Low. Causes inaccuracies in the command latency histogram and slow logs, but does not corrupt memory.
   - Solution: Module API users who know that non-thread-safe APIs will be used in multi-threading need to take responsibility for protecting them with their own locks instead of the GIL, as using the GIL is too expensive.

### Other issues
1. RM_Yield is not thread-safe; fixed via #12905.

### Summary
1. Fix thread-safety issues for `RM_UnblockClient()`, `freeClient()` and `RM_Yield`, potentially preventing memory corruption, data disorder, or assertions.
2. Updated docs and module tests to clarify module API users' responsibility for locking non-thread-safe APIs in multi-threading, such as RM_BlockedClientMeasureTimeStart/End(), RM_FreeString(), RM_RetainString(), and RM_HoldString().

### About backporting to 7.2
1. The implementation of (1) is not too satisfying; would like to get more eyes on it.
2. (2) and (3) can safely be backported.
3. (4) and (6) just modify the module tests and update the documentation; no need for a backport.
4. (5) is harmless; no need for a backport.
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
-
- 18 Jan, 2024 4 commits
-
-
Chen Tianjie authored
When doing dict resizing, dictTypeResizeAllowed is used to judge whether the newly allocated memory for rehashing would cause OOM. However, when shrinking, we allocate `_dictNextExp(d->ht_used[0])` bytes of memory, while in `dictTypeResizeAllowed` we still used `_dictNextExp(d->ht_used[0]+1)` as the new allocated memory size. This overestimates the memory used by shrinking under special conditions, causing a false OOM judgment.
-
Binbin authored
Introduced in #12952, reported by valgrind.
-
Binbin authored
We already do this in diskless on-empty-db mode: when diskless loading fails, we call emptyData to remove the half-loaded data, in case we started with an empty replica. Now when a disk-based sync rdbLoad fails, we call emptyData too, in case it loaded partially incomplete data. When the replica attempts another re-sync, it'll empty the dataset again anyway, so this affects two things:
1. memory consumption in the time gap until the next rdb loading begins
2. if the unsynced replica is for some reason promoted, it would have kept the partial dataset instead of being empty
-
Binbin authored
In the past we used integer division to compare ratios. Assume we have the following data when expanding:
```
used / size > 5
`80 / 16 > 5` is false
`81 / 16 > 5` is false
`95 / 16 > 5` is false
`96 / 16 > 5` is true
```
Because the integer result is rounded down, our resize breaks the ratio constraint. This has existed since the beginning, which resulted in us not strictly following the ratio (shrink has the same issue). This PR changes it to multiplication to avoid floating point calculations.
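A minimal sketch of the change (variable names illustrative):
```C
/* old: integer division rounds down, so 81/16 == 5 and the check fails */
if (used / size > dict_force_resize_ratio) { /* resize */ }
/* new: multiplication is exact, so 81 > 16*5 == 80 passes as intended */
if (used > size * dict_force_resize_ratio) { /* resize */ }
```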
-
- 15 Jan, 2024 3 commits
-
-
Binbin authored
The new shrink was added in #12850. Also updated outdated comments, see #11692.
-
Yanqi Lv authored
When we insert entries into a dict, it may autonomously expand if needed. However, when we delete entries from a dict, it doesn't shrink to the proper size. If there are few entries in a very large dict, it may cause huge waste of memory and inefficiency when iterating.

The main keyspace dicts (keys and expires) are shrunk by cron (`tryResizeHashTables` calls `htNeedsResize` and `dictResize`), and some data structures such as zset and hash also do that (call `htNeedsResize`) right after a loop of calls to `dictDelete`. But many other dicts are completely missing that call (they can only expand). In this PR, we provide the ability to automatically shrink the dict when deleting. The conditions triggering the shrinking are the same as `htNeedsResize` used to have, i.e. we expand when we're over 100% utilization, and shrink when we're below 10% utilization.

Additionally:
* Add `dictPauseAutoResize` so that flows that do mass deletions only trigger shrinkage at the end (see the sketch below).
* Rename `dictResize` to `dictShrinkToFit` (same logic as it used to have, but a better name describing it).
* Rename `_dictExpand` to `_dictResize` (same logic as it used to have, but a better name describing it).

Related to discussion https://github.com/redis/redis/pull/12819#discussion_r1409293878
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
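A minimal sketch of the mass-deletion flow (the name of the resume counterpart is an assumption):
```C
dictPauseAutoResize(d);              /* suppress per-delete shrinking */
for (long i = 0; i < numkeys; i++)
    dictDelete(d, keys[i]);
dictResumeAutoResize(d);             /* counterpart name assumed */
dictShrinkToFit(d);                  /* one shrink at the end, if needed */
```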
-
zhaozhao.zz authored
Regarding how to obtain the hash slot of a key, there is an optimization in `getKeySlot()` used to avoid redundant hash calculations for keys: when the current client is in the process of executing a command, it can directly use the slot of the current client, because the slot to access has already been calculated in advance in `processCommand()`. However, scripts are a special case: in default mode, or with `allow-cross-slot-keys` enabled, they are allowed to access keys beyond the pre-declared range. This means that the keys they operate on may not belong to the slot of the pre-declared keys. Currently, when the commands in a script are executed, the slot of the original client (i.e., the current client) is not correctly updated, leading to subsequent access to the wrong slot.

This PR fixes the above issue. When checking the cluster constraints in a script, the slot to be accessed by the current command is set for the original client (i.e., the current client). This ensures that `getKeySlot()` gets the correct slot cache. Additionally, the following modifications are made:
1. The `sort` and `sort_ro` commands use `getKeySlot()` instead of `c->slot`, because the client could be an engine client in a script, which can lead to potential bugs.
2. `getKeySlot()` is also used in pubsub to obtain the slot for the channel, standardizing the way slots are retrieved.
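A simplified sketch of the `getKeySlot()` optimization described above (not the exact source):
```C
static int getKeySlot(sds key) {
    if (!server.cluster_enabled) return 0;
    /* Reuse the slot computed in advance in processCommand() when the
     * current client is mid-command -- this is the cache being fixed. */
    if (server.current_client && server.current_client->slot >= 0)
        return server.current_client->slot;
    return keyHashSlot(key, (int)sdslen(key));
}
```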
-
- 14 Jan, 2024 1 commit
-
-
Binbin authored
The open function returns a fd on success or -1 on failure, so here we should check fd != -1; otherwise -1 will be judged as success. This closes #12938.
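A minimal sketch of the corrected pattern (path illustrative):
```C
int fd = open("/path/to/file", O_RDONLY);
if (fd != -1) {  /* previously `if (fd)`, which treats -1 as success */
    /* ... use fd ... */
    close(fd);
}
```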
-
- 12 Jan, 2024 1 commit
-
-
Chen Tianjie authored
Change the calculation method of bytes_per_key to make it closer to the true average key size. The calculation method is as follows: `mh->bytes_per_key = mh->total_keys ? (mh->dataset / mh->total_keys) : 0;`
-
- 11 Jan, 2024 2 commits
-
-
Harkrishn Patro authored
Avoid a crash while performing `DEBUG CLUSTERLINK KILL` multiple times (the cluster link might not be created/valid).
-
bentotten authored
When one shard, sole primary node marks potentially failed replica as FAIL instead of PFAIL (#12824) Fixes issue where a single primary cannot mark a replica as failed in a single-shard cluster.
-
- 09 Jan, 2024 1 commit
-
-
Oran Agra authored
#11766 introduced a bug in sdsResize where it could forget to update the sds type in the sds header and then cause an overflow in sdsalloc. It looks like the only implication of that is a possible assertion in HLL, but it's hard to rule out possible heap corruption issues with clientsCronResizeQueryBuffer.
-
- 08 Jan, 2024 1 commit
-
-
Binbin authored
We should close server.rdb_child_exit_pipe when redisFork fails, otherwise the pipe fd will be leaked. Just a cleanup.
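A minimal sketch of the cleanup (surrounding error handling simplified):
```C
if ((childpid = redisFork(CHILD_TYPE_RDB)) == -1) {
    /* The pipe was created before forking; close it on failure,
     * otherwise the fd leaks. */
    close(server.rdb_child_exit_pipe);
    /* ... existing error handling ... */
}
```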
-