- 06 Feb, 2024 3 commits
-
-
Binbin authored
Currently, once active defrag starts, we cannot adjust active_defrag_running downwards. This is because active_defrag_running is dynamically computed based on the fragmentation, and we think we should not lower the effort when the fragmentation drops. However, note that active_defrag_running is also dynamically computed based on configurations, and in that case we are not respecting cycle-min or cycle-max. Some people may realize halfway through that defrag consumes a lot and want to dial it down. Previously, the only way to adjust active_defrag_running downwards was to turn activedefrag off and then on again. So in this PR, when an active defrag configuration change is made, we re-compute it; see the example below. These configuration items are:
- active-defrag-cycle-min
- active-defrag-cycle-max
- active-defrag-threshold-upper
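For illustration, dialing the effort down at runtime now takes effect immediately (the values here are arbitrary; the config names are the real ones listed above):
```
127.0.0.1:6379> CONFIG SET active-defrag-cycle-max 25
OK
127.0.0.1:6379> CONFIG SET active-defrag-cycle-min 1
OK
```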
-
Binbin authored
Since a DB in cluster mode is now divided into 16384 dicts, here we directly check kvstoreDictSize instead of kvstoreSize, which makes it more likely that we can skip the lookup. The other change is a cleanup: obviously kvstoreGetHash should be applied to the db->expires dicts.
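A sketch of the shape of the check, assuming a `slot` index already computed for the key (not the exact call site):
```c
/* The keyspace is 16384 dicts; if the single dict owning this key's
 * slot is empty we can skip the lookup entirely, whereas kvstoreSize()
 * would only help when the whole keyspace is empty. */
if (kvstoreDictSize(db->expires, slot) == 0) return NULL;
```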
-
Binbin authored
When the dict is NULL, we also need to advance resize_cursor, otherwise it will keep hitting a useless continue here, and there is no way to resize the other dicts behind it. Introduced in #12822. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 05 Feb, 2024 2 commits
-
-
guybe7 authored
# Description

Gather most of the scattered `redisDb`-related code from the per-slot dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e. a class that represents an array of dictionaries.

# Motivation

The main motivation is code cleanliness: the idea of using an array of dictionaries is very well-suited to becoming a self-contained data structure. This allowed cleaning up some ugly code, among others: loops that run twice, once on the main dict and once on the expires dict, and duplicate code for allocating and releasing this data structure.

# Notes

1. This PR reverts the part of https://github.com/redis/redis/pull/12848 where the `rehashing` list is global (handling rehashing `dict`s is under the responsibility of `kvstore`, and should not be managed by the server).
2. This PR also replaces the type of `server.pubsubshard_channels` from `dict**` to `kvstore` (original PR: https://github.com/redis/redis/pull/12804). After that was done, `server.pubsub_channels` was also made a `kvstore` (with only one `dict`, which seems odd) just to make the code cleaner by giving it the same type as `server.pubsubshard_channels`, see `pubsubtype.serverPubSubChannels`.
3. The keys and expires kvstores are currently configured to allocate the individual dicts only when the first key is added (unlike before, when they were allocated in advance), but they won't release them when the last key is deleted.

Worth mentioning that due to this change, the reply of DEBUG HTSTATS changed, in case no keys were ever added to the db.

before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
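As a rough mental model, the shape of the structure might be sketched like this (field names are illustrative, not the exact ones in kvstore.c):
```c
/* Illustrative sketch of a kvstore: an array of dicts behind one API.
 * Cluster-mode keyspaces use 16384 dicts (one per slot); structures
 * like server.pubsub_channels use a kvstore with a single dict. */
typedef struct kvstore {
    dictType *dtype;
    dict **dicts;                 /* the array of dictionaries */
    int num_dicts;                /* 1, or 16384 in cluster mode */
    unsigned long long key_count; /* total keys across all dicts */
    list *rehashing;              /* dicts currently rehashing (owned
                                   * here, no longer a global list) */
} kvstore;
```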
-
Binbin authored
When db->expires_cursor==0, it means the DB is done being scanned, and we should exit the loop to avoid useless scanning. It is easy to see the active expire timeout in the modified test. For example, let's assume that there is only 1 expired key in the DB, and the size / buckets ratio is less than 1%, which means that we will skip it in isExpiryDictValidForSamplingCb, and the return value of expires_cursor is 0. Because `data.sampled == 0` is always true, `repeat` is also always true, so we keep scanning the DB, but every time it is skipped by the previous judgment (expires_cursor = 0), until the timelimit is finally exhausted.
-
- 04 Feb, 2024 2 commits
-
-
Daz authored
The JSON file lacks the following structural API changes:
- GEORADIUSBYMEMBER: added the ANY option for COUNT since 6.2.0.
- GEORADIUSBYMEMBER_RO: added the ANY option for COUNT since 6.2.0.
- GEORADIUS_RO: added support for uppercase unit names since 7.0.0.
- GEORADIUSBYMEMBER_RO: added support for uppercase unit names since 7.0.0.
--------- Signed-off-by:
daz-3ux <daz-3ux@proton.me> Co-authored-by:
bodong.ybd <bodong.ybd@alibaba-inc.com> Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by:
yangpengda.333 <yangpengda.333@bytedance.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Yanqi Lv authored
Currently, we compute `db->avg_ttl` after each short `dbScan` sweep (a few buckets without checking the time limit). But after each `dbScan` sweep we don't have much data, and this makes db->avg_ttl less precise. For example, even if we scan the whole db, we can't get the exact avg_ttl, because we split the data: due to the running average, if we issue 16 calls to scan, we give lower weight to the first one and higher weight to the last one. I think we should calculate `db->avg_ttl` only after completing more of the db iteration (at the time-limit check, or when starting to iterate the next db), because then we have more sample data for this db and can get a more accurate result. In the best case, if we scan the whole db, we can get the exact avg_ttl. In this PR, we postpone the avg_ttl calculation until the time-limit check or the iteration of the next db, so we can accumulate more data and get a more precise avg_ttl. Note that we still need to make sure to decay the old TTLs at the same speed as before, which is why we want to run the decay mechanism several times, or use the pow formula, see the comment in the code. In my experiment, this PR improves accuracy by 89% or 52% in different workloads. Co-authored-by:
Oran Agra <oran@redislabs.com>
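A minimal sketch of the pow-based decay idea, assuming an illustrative per-sweep smoothing factor of 49/50 (the factor, names, and signature here are not taken from the actual code):
```c
#include <math.h>

/* Fold `sweeps` rounds of the running average into one update, so that
 * postponing the calculation still decays old TTLs at the same speed
 * as updating once per sweep would. */
void updateAvgTtl(double *db_avg_ttl, double sampled_avg_ttl, int sweeps) {
    double keep = pow(49.0 / 50.0, (double)sweeps); /* weight left on the old value */
    *db_avg_ttl = *db_avg_ttl * keep + sampled_avg_ttl * (1.0 - keep);
}
```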
-
- 01 Feb, 2024 1 commit
-
-
Yanqi Lv authored
In Redis, rdb is produced mainly in three scenarios:
- backup, such as the `bgsave` and `save` commands
- full sync in replication
- aof rewrite if `aof-use-rdb-preamble` is yes

We also have some RDB flags to identify the purpose of rdb saving.
```C
/* flags on the purpose of rdb save or load */
#define RDBFLAGS_NONE 0                 /* No special RDB loading. */
#define RDBFLAGS_AOF_PREAMBLE (1<<0)    /* Load/save the RDB as AOF preamble. */
#define RDBFLAGS_REPLICATION (1<<1)     /* Load/save for SYNC. */
```
But currently, it seems that these flags and the purposes of rdb saving don't exactly match. I found this in `rdbSaveRioWithEOFMark`, which calls `startSaving` with `RDBFLAGS_REPLICATION` but `rdbSaveRio` with `RDBFLAGS_NONE`.
```C
int rdbSaveRioWithEOFMark(int req, rio *rdb, int *error, rdbSaveInfo *rsi) {
    char eofmark[RDB_EOF_MARK_SIZE];

    startSaving(RDBFLAGS_REPLICATION);
    getRandomHexChars(eofmark,RDB_EOF_MARK_SIZE);
    if (error) *error = 0;
    if (rioWrite(rdb,"$EOF:",5) == 0) goto werr;
    if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
    if (rioWrite(rdb,"\r\n",2) == 0) goto werr;
    if (rdbSaveRio(req,rdb,error,RDBFLAGS_NONE,rsi) == C_ERR) goto werr;
    if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
    stopSaving(1);
    return C_OK;

werr: /* Write error. */
    /* Set 'error' only if not already set by rdbSaveRio() call. */
    if (error && *error == 0) *error = errno;
    stopSaving(0);
    return C_ERR;
}
```
In this PR, I refine the purpose of rdb saving with accurate flags.
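Presumably the fix at this call site amounts to passing the replication flag through; a sketch, not necessarily the exact diff:
```C
if (rdbSaveRio(req,rdb,error,RDBFLAGS_REPLICATION,rsi) == C_ERR) goto werr;
```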
-
- 31 Jan, 2024 3 commits
-
-
Binbin authored
CI reports this failure:
```
*** [err]: Don't rehash if used memory exceeds maxmemory after rehash in tests/unit/maxmemory.tcl
Expected '4098' to equal or match '4002'
WARNING: the new maxmemory value set via CONFIG SET (1176088) is smaller than the current memory usage (1231083)
```
It can be seen from the log that used_memory changed before we set maxmemory. The reason is that in #12819, in cron, in addition to trying to shrink, we also try to expand. The dict was expanded by cron before we set maxmemory, causing the test to fail. Before setting maxmemory, we now only add 4095 keys, to avoid triggering a resize.
-
Binbin authored
When we use a timer to unblock a client in a module, if the timer period and the block timeout are very close, they will unblock the client in the same event loop, and it will trigger the assertion. The reason is that in moduleBlockedClientTimedOut we protect against re-processing, so we don't actually call updateStatsOnUnblock (see #12817), and therefore we are not able to reset c->duration. In other words, unblockClientOnTimeout() didn't realize that bc had been unblocked. We add a function to the module side to determine whether bc is blocked, and then use it in unblockClientOnTimeout() to exit early. This is the stack:
```
beforeSleep
blockedBeforeSleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
unblockClient
resetClient
-- assertion, crash the server
'c->duration == 0' is not true
```
-
Binbin authored
The block timeout is passed in the test case, but we do not pass in the timeout_callback, and it will crash when unblocking. For this case, in moduleBlockedClientTimedOut we now check timeout_callback. This is the stack:
```
beforeSleep
blockedBeforeSleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
replyToBlockedClientTimedOut
moduleBlockedClientTimedOut
-- timeout_callback is NULL, invalidFunctionWasCalled
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
```
-
- 30 Jan, 2024 6 commits
-
-
Chen Tianjie authored
Add a way to HSCAN a hash key and get only the field names. The command syntax is now:
```
HSCAN key cursor [MATCH pattern] [COUNT count] [NOVALUES]
```
When `NOVALUES` is given, the command will only return keys in the hash. --------- Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech>
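A quick illustration of the expected behavior (hypothetical session):
```
127.0.0.1:6379> HSET h f1 v1 f2 v2
(integer) 2
127.0.0.1:6379> HSCAN h 0 NOVALUES
1) "0"
2) 1) "f1"
   2) "f2"
```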
-
Slava Koyfman authored
Adds the ability to kill clients older than a specified age. Also fixes the age calculation in `catClientInfoString` to use `commandTimeSnapshot` instead of the old `server.unixtime`, and adds missing documentation for `CLIENT KILL ID` to the output of `CLIENT HELP`. --------- Co-authored-by:
Oran Agra <oran@redislabs.com>
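The new filter is presumably exposed as a MAXAGE option on CLIENT KILL (the option name is assumed, not stated in the message above); a hypothetical session:
```
127.0.0.1:6379> CLIENT KILL MAXAGE 3600
(integer) 2
```
This would close every connection older than 3600 seconds and reply with the number of clients killed.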
-
Oran Agra authored
Tests consistently fail on timeout (a sleep that's too short). It now takes more time because in #12819 we iterate on all dicts, not just non-empty ones. It passed the PR's CI because that skips the `slow` tag, which might have been misplaced, but now it is probably required. With the fix, the tests take quite a lot of time:
```
[ok]: Redis can trigger resizing (1860 ms)
[ok]: Redis can rewind and trigger smaller slot resizing (744 ms)
```
before #12819:
```
[ok]: Redis can trigger resizing (309 ms)
[ok]: Redis can rewind and trigger smaller slot resizing (295 ms)
```
failure: https://github.com/redis/redis/actions/runs/7704158180/job/20995931735
```
*** [err]: expire scan should skip dictionaries with lot's of empty buckets in tests/unit/expire.tcl
scan didn't handle slot skipping logic.
*** [err]: Redis can trigger resizing in tests/unit/other.tcl
Expected '[Dictionary HT]
Hash table 0 stats (main hash table):
table size: 128
number of elements: 5
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
' to match '*table size: 8*' (context: type eval line 29 cmd {assert_match "*table size: 8*" [r debug HTSTATS 0]} proc ::test)
*** [err]: Redis can rewind and trigger smaller slot resizing in tests/unit/other.tcl
Expected '[Dictionary HT]
Hash table 0 stats (main hash table):
table size: 256
number of elements: 10
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
' to match '*table size: 16*' (context: type eval line 27 cmd {assert_match "*table size: 16*" [r debug HTSTATS 0]} proc ::test)
```
-
Binbin authored
This was introduced in #13004, missing this assignment. It causes timeout to be a random value (which may be less than now); then, in the `Unblock by timer` test, the client is unblocked and calls the timeout_callback, and since the callback is NULL, the server will crash. The crash stack is:
```
beforesleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
replyToBlockedClientTimedOut
moduleBlockedClientTimedOut
-- the timeout_callback is NULL, invalidFunctionWasCalled
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
```
-
Binbin authored
This allows specifying the timeout value for opening the TCP connection to a server. The timeout defaults to 0, which means no limit (depending on the OS). It can be specified using the new `-t` switch. revive #3764, fixes #3763 --------- Co-authored-by:
Itamar Haber <itamar@redislabs.com> Co-authored-by:
yoav-steinberg <yoav@redislabs.com>
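For example, connecting with a 5-second TCP connect timeout might look like this (the host is hypothetical; only the `-t` switch comes from this commit):
```
$ redis-cli -t 5 -h 10.0.0.5 ping
PONG
```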
-
Binbin authored
In #11012, we reprocess the command when a client is unblocked on keys. In some blocking commands, for example in the XREADGROUP BLOCK scenario, the re-processing of the command recalculates the block timeout, causing the blocking time to be reset. This commit adds a new CLIENT_REPROCESSING_COMMAND client flag, explicitly letting the command know that it is being re-processed; later, in blockForKeys, we will not reset the timeout.

Affected BLOCK cases:
- list / zset / stream (added test cases for each).

Unaffected cases:
- module (never re-processes the commands).
- WAIT / WAITAOF (never re-processes the commands).

Fixes #12998.
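A minimal sketch of the idea in blockForKeys, assuming the flag name from this commit (the surrounding field names are paraphrased):
```c
/* Only take the caller-supplied timeout the first time the command is
 * processed; on re-processing after an unblock, keep the original
 * deadline instead of resetting it. */
if (!(c->flags & CLIENT_REPROCESSING_COMMAND))
    c->bstate.timeout = timeout;
```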
-
- 29 Jan, 2024 3 commits
-
-
Chen Tianjie authored
The function `tryResizeHashTables` only attempts to shrink the dicts that have keys (a change from #11695). This was a serious problem until the change in #12850, since it meant that if all keys are deleted, we won't shrink the dict. But still, both dictShrink and dictExpand may be blocked by a forked child process; therefore, the cron job needs to perform both dictShrink and dictExpand, for not just non-empty dicts, but all dicts in DBs. What this PR does:
1. Try to resize all dicts in DBs (not just non-empty ones, as it was since #12850).
2. Handle both shrink and expand (not just shrink, as it was since forever).
3. Refactor some APIs around dict resizing (get rid of `htNeedsShrink` and `dictShrinkToFit`, and expose `dictShrinkIfNeeded` and `dictExpandIfNeeded`, which already contain all the code of the functions we got rid of, to make the APIs neater).
4. In the `Don't rehash if redis has child process` test, now that cron does the resizing, we no longer need to write to the DB after the child process is killed, and can just wait for the cron to expand the hash table.
-
Ozan Tezcan authored
Modules may want to handle allocation failures gracefully. Adding RM_TryCalloc() and RM_TryRealloc() for it. RM_TryAlloc() was added before: https://github.com/redis/redis/pull/10541
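A minimal sketch of how a module might use these, assuming the usual RedisModule_* prefixes for the APIs named above (the helper itself is hypothetical):
```c
#include "redismodule.h"

/* Grow a module-owned buffer, handling allocation failure gracefully
 * instead of letting the server abort on OOM. */
static int growBuffer(RedisModuleCtx *ctx, char **buf, size_t newlen) {
    char *grown = RedisModule_TryRealloc(*buf, newlen);
    if (grown == NULL) {
        RedisModule_ReplyWithError(ctx, "ERR not enough memory");
        return REDISMODULE_ERR;
    }
    *buf = grown;
    return REDISMODULE_OK;
}
```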
-
Binbin authored
Fix maxmemory-samples stack overflow crash in evictionPoolPopulate, limit its value to [1,64] (#13000) We have not limited the value of maxmemory-samples in the past, so it can be set very large. If it is, we get a stack overflow in evictionPoolPopulate when key eviction is triggered. There is no reason for this config to be set so high, so just limit its range to [1,64].
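The likely mechanics of the crash, sketched (this assumes the sampling code keeps its candidate array on the stack, which is what the crash site suggests):
```c
/* A variable-length array sized by a config lives on the stack, so a
 * huge maxmemory-samples overflows it on the first eviction attempt. */
dictEntry *samples[server.maxmemory_samples];
```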
-
- 27 Jan, 2024 1 commit
-
-
Roshan Khatri authored
#### Problem Statement:
For any read/update operation during rehashing, we're doing ~10+ random DRAM lookups to do the rehashing, as we are using the `rehashidx` to rehash 10 buckets, whose dict entries most likely aren't cached in the CPU or near the bucket we are operating on. If these random buckets are empty, the rehashing process during that command execution is skipped.

#### Implementation:
To reduce the performance regression while the dict is rehashing, we determine the index at which the key would be stored in the 0th HT and check whether that index has already been rehashed; if not, we rehash the bucket containing the key, and the bucket is moved from the 0th HT to the 1st HT. If the key has already been rehashed, we perform the random-access bucket rehash (using `rehashidx`), and we again verify whether rehashing is still ongoing and look up the key in the respective HT. This ensures rehashing is not skipped in any command call, and that we rehash either a particular bucket or a random bucket in each call.

#### Changes in this PR:
- Added a new method `dictBucketRehash` to perform rehash on a single bucket.
- Helper function `moveKeysInBucketOldtoNew` for `dictRehash` and `dictBucketRehash` to move all the keys in a bucket from the old HT to the new HT.
- Helper function `verifyMoreRehashRequired` for `dictRehash` and `dictBucketRehash` to check if we have already rehashed the whole table and if more rehashing is required.

### Benchmark:
- This PR still shows **~13%** improvement in the latency during rehashing.
- Rehashing is now **~2%** faster in this PR when compared to unstable.
--------- Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Madelyn Olson <34459052+madolson@users.noreply.github.com>
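A rough sketch of the lookup-path idea, assuming the usual dict convention that buckets below `rehashidx` have already been migrated (everything except `dictBucketRehash` is paraphrased from dict.c internals):
```c
uint64_t hash = dictHashKey(d, key);
unsigned long idx = hash & DICTHT_SIZE_MASK(d->ht_size_exp[0]);
if (dictIsRehashing(d)) {
    if (idx >= (unsigned long)d->rehashidx) {
        /* The key's own bucket hasn't moved yet: rehash exactly that
         * bucket, so the entries we are about to touch are the ones
         * being migrated (new in this PR). */
        dictBucketRehash(d, idx);
    } else {
        /* Key's bucket already migrated: fall back to the random
         * rehashidx-driven step so rehash progress is never skipped. */
        _dictRehashStep(d);
    }
}
```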
-
- 26 Jan, 2024 1 commit
-
-
judeng authored
The issue was introduced in #12799: the script cannot find the correct src and deps directories, so it always returns dirty as 0.
-
- 25 Jan, 2024 2 commits
-
-
Binbin authored
Code incorrectly set the limit value to 1024MB. Introduced in #12961.
-
zhaozhao.zz authored
Fix #9926, and introduce an alternative method to prevent abuse of transactions:
1. Revert #5454 (which was blocking read-only transactions in OOM state), and break the tie between MULTI state memory usage and the server OOM state. Meaning that we'll limit the total memory a single client can queue, and do that unconditionally regardless of the server being OOM or not.
2. To prevent abuse of transactions, we use the `client-query-buffer-limit` to restrict the size of the transaction. Because the commands cached in the MULTI/EXEC queue have not been executed yet, they are also considered a part of the "query buffer" in a broader sense. In other words, the commands in the MULTI queue and the `querybuf` of the client together constitute the "query buffer". When they exceed the limit, the connection is disconnected.

The reasoning is that it's sensible to send a single command with a huge (1GB) argument, and it's sensible to send a transaction with many small commands, but it's probably not common to send a long transaction with many huge arguments (which would consume a lot of memory before even being executed). If anyone runs into that, they can simply increase the `client-query-buffer-limit` config.

P.S. To prevent DDoS attacks, unauthenticated clients have a separate hard limit: their query buffer should not exceed a maximum of 1MB. In other words, if the query buffer of an unauthenticated client exceeds 1MB, or the `client-query-buffer-limit` (if it is set to a value smaller than 1MB), the connection will be disconnected.
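Operationally, the knob stays the same one that already guards the query buffer (the value here is arbitrary):
```
127.0.0.1:6379> CONFIG SET client-query-buffer-limit 512mb
OK
```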
-
- 23 Jan, 2024 7 commits
-
-
Binbin authored
In the following case sender may be unknown, so we need to set up a NULL check for sender:
```
/* If this is a MEET packet from an unknown node, we still process
 * the gossip section here since we have to trust the sender because
 * of the message type. */
if (!sender && type == CLUSTERMSG_TYPE_MEET)
    clusterProcessGossipSection(hdr,link);
```
-
Wen Hui authored
We have test cases for incr-related commands with no existing key, spaces in the key, and the wrong type of key. However, we don't have test cases covering INCRBY INCRBYFLOAT DECRBY INCR DECR HINCRBY HINCRBYFLOAT ZINCRBY with a valid key and an invalid value as argument, nor a float value to incrby and decrby. So added test cases for these scenarios in incr.tcl. Thank you!
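For instance, one of the newly covered scenarios, a valid key with a non-numeric value (output per Redis's standard error):
```
127.0.0.1:6379> SET foo bar
OK
127.0.0.1:6379> INCR foo
(error) ERR value is not an integer or out of range
```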
-
Binbin authored
In #11568 we removed the NOSCRIPT flag from commands, e.g. removing the NOSCRIPT flag from WAIT, aiming to allow them in scripts and let them implicitly behave in the non-blocking way. This PR removes the NOSCRIPT flag from WAITAOF just like WAIT (to be symmetrical). This PR also adds the BLOCKING flag for WAIT and WAITAOF.
-
Binbin authored
This PR did some cleanups around functions:
- drop the comment about Libraries Ctx, since we do have a comment in functionsLibCtx; no need to maintain multiple copies.
- remove the outdated comment about the dropped Library description.
- remove the unused desc and code vars in functionExtractLibMetaData.
- fix the engines_nemory typo, changing it to engines_memory.
- remove the outdated comment about FUNCTION CREATE and FUNCTION INFO; FUNCTION CREATE was renamed to FUNCTION LOAD.
- check in initServer whether the return of functionsInit is OK.
-
Oran Agra authored
Seems that we forgot to update the array in redis-check-rdb.
-
dependabot[bot] authored
Bumps [actions/cache](https://github.com/actions/cache) from 3 to 4. Full changelog: https://github.com/actions/cache/compare/v3...v4 Signed-off-by:
dependabot[bot] <support@github.com> Co-authored-by:
dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
Harkrishn Patro authored
Currently slowlog gets disabled if slowlog-log-slower-than is set to less than zero. I think we should also disable it if slowlog-max-len is set to zero. We apply the same logic to acllog-max-len.
-
- 22 Jan, 2024 2 commits
-
-
Brennan authored
There have been occasional instances of memory corruption (through code bugs or bit flips) leading to invalid node information being gossiped around. To prevent this invalid information from spreading, we verify that the node IDs in received gossip are in an acceptable format, and disregard any gossiped nodes with invalid IDs. This PR uses the existing verifyClusterNodeId function to check the validity of the gossiped node IDs, and if an invalid one is encountered, logs raw byte information to help debug the corruption. --------- Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com>
-
zhaozhao.zz authored
Background: some modules need to know the `dbid` information, such as the function used during RDB loading:
```
robj *rdbLoadObject(int rdbtype, rio *rdb, sds key, int dbid, int *error) {
    ....
    moduleInitIOContext(io,mt,rdb,&keyobj,dbid);
```
However, during replication, the "tempDb" created for diskless RDB loading is not correctly set with the dbid. This leads to passing the wrong dbid to the `rdbLoadObject` function (as tempDb uses zcalloc, all ids are 0).
```
disklessLoadInitTempDb()->rdbLoadRioWithLoadingCtx()->
    /* Read value */
    val = rdbLoadObject(type,rdb,key,db->id,&error);
```
To fix it, set the correct ID (relative index) for the tempdb.
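A minimal sketch of the described fix, assuming tempDb is the array allocated by disklessLoadInitTempDb() (the loop is paraphrased, not the exact diff):
```c
/* zcalloc() left every tempDb[i].id at 0; assign each slot its
 * relative index so rdbLoadObject() receives the real dbid. */
for (int i = 0; i < server.dbnum; i++)
    tempDb[i].id = i;
```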
-
- 19 Jan, 2024 3 commits
-
-
Yanqi Lv authored
In #12838, we misused the safe iterator of the client dict, so we can't catch a synchronous release of the client if there is a bug. Since we realized that clients (even subscribers) are released with async free, we change the safe iterators of the client dict into unsafe iterators in `pubsub.c`. I also removed redundant code.
-
Yanqi Lv authored
Before this change (most recently modified in https://github.com/redis/redis/pull/12850#discussion_r1421406393), the trigger for the normal expand threshold was 100% utilization, and the trigger for the normal shrink threshold was 10% (HASHTABLE_MIN_FILL). During fork (DICT_RESIZE_AVOID), when we want to avoid rehashing, the trigger thresholds were multiplied by 5 (`dict_force_resize_ratio`), meaning 500% for expand and 2% (100/10/5) for shrink. However, in `dictRehash` (the incremental rehashing), the rehashing threshold for shrinking during fork (DICT_RESIZE_AVOID) was 20% by mistake. This meant that if a shrink was triggered when `dict_can_resize` is `DICT_RESIZE_ENABLE` (where the threshold is 10%), the rehashing could continue when `dict_can_resize` is `DICT_RESIZE_AVOID`. This would cause unwanted CopyOnWrite damage. It would make sense to make the thresholds of the rehash trigger and the thresholds of the incremental rehashing the same; however, in one we compare the size of the hash table to the number of records, and in the other we compare the size of ht[0] to the size of ht[1], so the formula is not exactly the same. To make things easier, we change all the thresholds to powers of 2, so the normal shrink threshold is changed from 100/10 (i.e. 10%) to 100/8 (i.e. 12.5%), and we change the factor during forks from 5 to 4, i.e. from 500% to 400% for expand, and from 2% (100/10/5) to 3.125% (100/8/4). The resulting thresholds are summarized below.
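For quick reference, the resize triggers after this change, derived from the numbers above (`used/size` is the fill ratio of the table):
```
             expand when             shrink when
normal:      used/size >= 1 (100%)   used/size < 1/8  (12.5%)
during fork: used/size >= 4 (400%)   used/size < 1/32 (3.125%)
```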
-
debing.sun authored
Fix #12785 and other race condition issues. See the following isolated comments. The following report was obtained using the thread SANITIZER:
```sh
make SANITIZER=thread
./runtest-moduleapi --config io-threads 4 --config io-threads-do-reads yes --accurate
```

1. Fixed thread-safety issue in RM_UnblockClient()
Related discussion: https://github.com/redis/redis/pull/12817#issuecomment-1831181220
* When blocking a client in a module using `RM_BlockClientOnKeys()` or `RM_BlockClientOnKeysWithFlags()` with a timeout_callback, calling RM_UnblockClient() in module threads can lead to race conditions in `updateStatsOnUnblock()`.
  - Introduced: Version: 6.2, PR: #7491
  - Touches: `server.stat_numcommands`, `cmd->latency_histogram`, `server.slowlog`, and `server.latency_events`
  - Harm Level: High. Potentially corrupts the memory data of `cmd->latency_histogram`, `server.slowlog`, and `server.latency_events`.
  - Solution: Differentiate whether the call to moduleBlockedClientTimedOut() comes from the module or the main thread. Since we can't know if RM_UnblockClient() comes from module threads, we always assume it does and let `updateStatsOnUnblock()` asynchronously update the unblock status.
* When an error reply is issued in timeout_callback(), ctx is not thread-safe, eventually leading to race conditions in `afterErrorReply`.
  - Introduced: Version: 6.2, PR: #8217
  - Touches: `server.stat_total_error_replies`, `server.errors`
  - Harm Level: High. Potentially corrupts the memory data of `server.errors`.
  - Solution: Make the ctx in `timeout_callback()` `REDISMODULE_CTX_THREAD_SAFE`, and asynchronously reply errors to the client.

2. Made the RM_Reply*() family of APIs thread-safe
Related discussion: https://github.com/redis/redis/pull/12817#discussion_r1408707239
Call chain: `RM_Reply*()` -> `_addReplyToBufferOrList()` -> touches server.current_client
- Introduced: Version: 7.2.0, PR: #12326
- Harm Level: None. Since the module fake client won't have the `CLIENT_PUSHING` flag, even if we touch server.current_client, we can still exit after `c->flags & CLIENT_PUSHING`.
- Solution: Check `c->flags & CLIENT_PUSHING` earlier.

3. Made freeClient() thread-safe
Fix #12785
- Introduced: Version: 4.0, Commit: https://github.com/redis/redis/commit/3fcf959e609e850a114d4016843e4c991066ebac
- Harm Level: Moderate
  * Triggers an assertion. It happens when the module thread calls freeClient while the io-thread is in progress, which just triggers an assertion and doesn't cause any race conditions.
  * Touches `server.current_client`, `server.stat_clients_type_memory`, and `clientMemUsageBucket->clients`. It happens between the main thread and the module threads, and may cause data corruption:
    1. Erroneously resets `server.current_client` to NULL, but theoretically this won't happen, because the module has already reset `server.current_client` to the old value before entering freeClient.
    2. Corrupts `clientMemUsageBucket->clients` in updateClientMemUsageAndBucket().
    3. Causes the server.stat_clients_type_memory memory statistics to be inaccurate.
- Solution:
  * No longer count memory usage on fake clients, to avoid updating `server.stat_clients_type_memory` in freeClient.
  * No longer reset `server.current_client` in unlinkClient, because the fake client won't be evicted or disconnected in the middle of the process.
  * Assert `io_threads_op == IO_THREADS_OP_IDLE` only if c is not a fake client.

4. Fixed freeing client args without the GIL
Related discussion: https://github.com/redis/redis/pull/12817#discussion_r1408706695
When freeing retained strings in the module thread (refcount decr), or using them in some way (refcount incr), we should do so while holding the GIL; otherwise, they might be simultaneously freed while the main thread is processing the unblock-client state.
- Introduced: Version: 6.2.0, PR: #8141
- Harm Level: Low. Triggers an assertion, a double free, or a memory leak.
- Solution: Document that module API users need to ensure any access to these retained strings is done with the GIL locked.

5. Fix adding a fake client to server.clients_pending_write
It will incorrectly log the memory usage for the fake client. Related discussion: https://github.com/redis/redis/pull/12817#issuecomment-1851899163
- Introduced: Version: 4.0, Commit: https://github.com/redis/redis/commit/9b01b64430fbc1487429144d2e4e72a4a7fd9db2
- Harm Level: None. Only results in a NOP.
- Solution:
  * Don't add the fake client to server.clients_pending_write.
  * Add a c->conn assertion to updateClientMemUsageAndBucket() and updateClientMemoryUsage() to avoid the same issue in the future, so it is now the responsibility of the callers of both of them to avoid passing in a fake client.

6. Fix calling RM_BlockedClientMeasureTimeStart() and RM_BlockedClientMeasureTimeEnd() without the GIL
- Introduced: Version: 6.2, PR: #7491
- Harm Level: Low. Causes inaccuracies in the command latency histogram and slow logs, but does not corrupt memory.
- Solution: Module API users, if they know that non-thread-safe APIs will be used in multi-threading, need to take responsibility for protecting them with their own locks instead of the GIL, as using the GIL is too expensive.

### Other issues
1. RM_Yield is not thread-safe; fixed via #12905.

### Summary
1. Fix thread-safety issues for `RM_UnblockClient()`, `freeClient()` and `RM_Yield`, potentially preventing memory corruption, data disorder, or assertions.
2. Updated the docs and module tests to clarify module API users' responsibility for locking non-thread-safe APIs in multi-threading, such as RM_BlockedClientMeasureTimeStart/End(), RM_FreeString(), RM_RetainString(), and RM_HoldString().

### About backporting to 7.2
1. The implementation of (1) is not too satisfying; would like to get more eyes on it.
2. (2) and (3) are safe to backport.
3. (4) and (6) just modify the module tests and update the documentation, so no backport is needed.
4. (5) is harmless, no backport needed.
--------- Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 18 Jan, 2024 4 commits
-
-
Chen Tianjie authored
When doing dict resizing, dictTypeResizeAllowed is used to judge whether the newly allocated memory for rehashing would cause OOM. However, when shrinking, we allocate `_dictNextExp(d->ht_used[0])` bytes of memory, while in `dictTypeResizeAllowed` we still use `_dictNextExp(d->ht_used[0]+1)` as the newly allocated memory size. This overestimates the memory used by shrinking under special conditions, causing a false OOM judgement.
-
Binbin authored
Introduced in #12952, reported by valgrind.
-
Binbin authored
We do this in diskless on-empty-db mode: when diskless loading fails, we call emptyData to remove the half-loaded data, in case we started with an empty replica. Now, when a disk-based sync rdbLoad fails, we call emptyData too, in case it loaded partially incomplete data. When the replica attempts another re-sync it'll empty the dataset again anyway, so this affects two things:
1. memory consumption in the time gap until the next rdb loading begins
2. if the unsynced replica is for some reason promoted, it would have kept the partial dataset instead of being empty.
-
Binbin authored
Before #12850, we would only try to shrink the dict in serverCron, which we can control by using a child process; but now, every time we delete a key, the shrink check is called. In these tests (added in #12802), we meant to disable resizing, but during the delete the dict meets the force-shrink condition, e.g. 2 / 128 = 0.015 < 0.2, so the delete triggers a force resize and causes the test to fail. In this commit, we try to keep the load factor at 3 / 128 = 0.023, that is, not meeting the force shrink.
-