- 18 Jun, 2024 1 commit
debing.sun authored
This PR fixes two crashes:
1. Fix missing slotToKeyInit() when using `flushdb async` under cluster mode. https://github.com/redis/redis/issues/13205
2. Fix missing expires_cursor reset when stopping active defrag in the middle of defragmenting. https://github.com/redis/redis/issues/13307

If we stop active defrag in the middle of defragging db->expires and `expires_cursor` is not reset to 0, then the next time active defrag is enabled, defragLaterStep(db, ...) will be entered. However, at that point `db` has already been reset to NULL, which results in a crash. The affected code was removed in unstable by #11695 and #13058, so we only need to backport this to 7.2.
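A minimal sketch of the state handling behind fix 2, with made-up names (the actual Redis fields and functions differ):
```
/* Sketch only: state that must be cleared whenever active defrag is
 * stopped in the middle of a cycle. */
typedef struct defragState {
    void *db;                     /* db currently being defragged */
    unsigned long expires_cursor; /* scan cursor into db->expires */
} defragState;

static void stopActiveDefrag(defragState *st) {
    st->db = NULL;
    /* Without this reset, the next defrag cycle resumes from a
     * non-zero cursor and dereferences the NULL db pointer. */
    st->expires_cursor = 0;
}
```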
- 04 Mar, 2023 1 commit
Binbin authored
This test is very sensitive and fragile. It often fails in the Daily CI, in most cases in test-ubuntu-32bit (the AOF loading one), with a max latency in the range (31, 40):
```
[err]: Active defrag in tests/unit/memefficiency.tcl
Expected 38 <= 30 (context: type eval line 113 cmd {assert {$max_latency <= 30}} proc ::test)
```
The AOF loading part isn't tightly tied to the cron hz; it calls processEventsWhileBlocked once every 1024 command calls:
```
/* Serve the clients from time to time */
if (!(loops++ % 1024)) {
    off_t progress_delta = ftello(fp) - last_progress_report_size;
    loadingIncrProgress(progress_delta);
    last_progress_report_size += progress_delta;
    processEventsWhileBlocked();
    processModuleLoadingProgressEvent(1);
}
```
In this case we can either decrease the 1024 or increase the threshold of just the AOF part of that test. Considering that the test machines are sometimes slow, that all sorts of quirks can happen (which do not indicate a bug), and that the threshold is already set to 30, we can set it a little higher: 40. We can do this instead of adding another testing config (we can add one when we really need it). Fixes #11868
- 09 Dec, 2022 1 commit
Oran Agra authored
We do defrag during AOF loading, but aim to detect fragmentation only once per second, so this test aims to slow down the AOF loading and mimic loading of a large file. On fast machines the sleep plus the actual work we did was insufficient, so this change makes it sleep longer so the test won't fail. The error we used to get was:
```
Expected 0 > 100000 (context: type eval line 106 cmd {assert {$hits > 100000}} proc ::test)
```
- 09 Mar, 2022 1 commit
蔡相跃 authored
- 21 Feb, 2022 1 commit
yoav-steinberg authored
This includes two fixes:
* We forgot to count non-key reallocs in the defragmentation stats.
* Fix the script defrag tests to make dict entries less significant in fragmentation, by making the scripts larger. This assures active defrag will complete and reach the desired results. Some inherent fragmentation might exist in dict entries, which we need to ignore; this led to occasional CI failures.
- 11 Feb, 2022 1 commit
yoav-steinberg authored
Remove the scripts defragger, since it has been broken since #10126 (released in 7.0 RC1): it would crash the server if the defragger started on a server containing eval scripts. In #10126 the global `lua_script` dict became a dict mapping to a custom `luaScript` struct with an internal `robj` in it, instead of a generic `sds` -> `robj` dict. This means we'd need custom code to defrag it, and since scripts should never really cause much fragmentation, it makes more sense to simply remove the defrag code for scripts.
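Roughly, the structural change looks like this (illustrative declarations, not the exact Redis definitions):
```
typedef char *sds;          /* Redis dynamic string */
typedef struct robj robj;   /* Redis object */

/* Before #10126 the dict mapped sds (sha) -> robj * (script body),
 * which the generic dict defrag code could relocate. After #10126 the
 * value is a custom struct, so defragging it would require dedicated
 * code aware of the embedded robj: */
typedef struct luaScript {
    robj *body;             /* script source */
    /* ... other bookkeeping fields ... */
} luaScript;
```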
- 19 Dec, 2021 1 commit
Oran Agra authored
- add needs:debug flag for some tests
- disable "save" in external tests (speedup?)
- use debug_digest proc instead of debug command directly so it can be skipped
- use OBJECT ENCODING instead of DEBUG OBJECT to get encoding
- add a proc for OBJECT REFCOUNT so it can be skipped
- move a bunch of tests in latency_monitor tests to happen later so that latency monitor has some values in it
- add missing close_replication_stream calls
- make sure to close the temp client if DEBUG LOG fails
- 21 Nov, 2021 1 commit
Oran Agra authored
Background: Following the upgrade to jemalloc 5.2, a test that used to be flaky started failing consistently (on 32bit), so we disabled it (see #9645). This is a test I introduced in #7289 when I attempted to solve a rare stagnation problem. It later turned out I had failed to solve it; what's more, I added a test that caused the problem to be not so rare, and as mentioned, in jemalloc 5.2 it became consistent on 32bit. Stagnation can happen when all the slabs of the bin are equally utilized, so the decision to move an allocation from a relatively empty slab to a relatively full one will never happen; in that test all the slabs are at 50% utilization, so the defragger could just keep scanning the keyspace and not move anything.

What this PR changes (see the sketch below):
* First, jemalloc 5.2 finally gives us the count of non-full slabs, so when we look at the utilization of the current slab, we can compare it to the average utilization of the non-full slabs in our bin, instead of the total average of our bin. This takes the full slabs out of the game, since they're not candidates for migration (neither source nor target).
* Secondly, we add some 12.5% (1/8) to the decision to defrag an allocation. This is the part that aims to avoid stagnation, and it's especially important since the above change can get us closer to stagnation.
* Thirdly, since jemalloc 5.2 adds sharded bins, we take all shards into account (something that was missing from the original PR that merged it). This isn't expected to make any difference, since there should anyway be just one shard.

How this was benchmarked: I ran the memefficiency test unit with `--verbose` and compared the defragger hits and misses the tests reported. At first, when I took only the non-full slabs into consideration, it got a lot worse (I got into stagnation, or just got a lot of misses and a lot of hits), but once I added the extra margin I got back to results slightly better than those of the jemalloc 5.1 branch, i.e. full defragmentation was achieved with fewer hits (relocations) and fewer misses (keyspace scans).
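A minimal sketch of the resulting decision rule, assuming the bin statistics are summarized as below (the real logic lives inside jemalloc and uses its internal stats):
```
#include <stdbool.h>

/* Should we move an allocation out of its current slab? */
static bool should_defrag(unsigned long curr_slab_used_regs,
                          unsigned long nonfull_slab_count,
                          unsigned long nonfull_used_regs) {
    if (nonfull_slab_count == 0) return false;
    /* Full slabs are excluded: they're neither source nor target.
     * Cross-multiplying avoids integer-division precision loss, and
     * the extra 1/8 (~12.5%) margin avoids stagnation when all slabs
     * are almost equally utilized. */
    return curr_slab_used_regs * nonfull_slab_count
           <= nonfull_used_regs + nonfull_used_regs / 8;
}
```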
- 02 Nov, 2021 1 commit
menwen authored
The latency event was renamed in #7726, and the outcome was that this test became ineffective (unable to measure the max latency, always seeing 0).
- 18 Oct, 2021 1 commit
yoav-steinberg authored
Test started failing consistently in 32bit builds after upgrading to jemalloc 5.2.1 (#9623).
- 30 Aug, 2021 1 commit
Oran Agra authored
Failed on a Raspberry Pi 3b, where that single test took about 170 seconds.
- 10 Jun, 2021 1 commit
Binbin authored
This PR adds a spell checker CI action that will fail future PRs if they introduce typos and spelling mistakes. This spell checker is based on a blacklist of common spelling mistakes, so it will not catch everything, but at least it is also unlikely to cause false positives. Besides that, the PR also fixes many spelling mistakes and typos, not all of which are a result of the spell checker we use. Here's a summary of the other changes:
1. Scanned the entire source code and fixed all sorts of typos and spelling mistakes (including missing or extra spaces).
2. Outdated function / variable / argument names in comments.
3. Fix the outdated keyspace masks error log when we check `config.notify-keyspace-events` in loadServerConfigFromString.
4. Trim the whitespace at the end of lines in `module.c`. Check: https://github.com/redis/redis/pull/7751
5. Some outdated https link URLs.
6. Fix some outdated comments. Such as:
   - In the README: about the rdb, we used to say it creates a `thread`; changed to `process`
   - dbRandomKey function comment (about dictGetRandomKey; changed to dictGetFairRandomKey)
   - notifyKeyspaceEvent function comment (add the type arg)
   - Some other minor fixes in comments (most of them incorrectly quoted variable names)
7. Modified the error log so that users can easily distinguish between TCP and TLS in `changeBindAddr`
- 09 Jun, 2021 1 commit
Yossi Gottlieb authored
This commit revives and improves the ability to run the test suite against external servers, instead of launching and managing `redis-server` processes as part of the test fixture. This capability existed in the past, using the `--host` and `--port` options. However, it was quite limited and mostly useful when running specific tests. Attempting to run larger chunks of the test suite ran into many issues:
* Many tests depend on being able to start and control `redis-server` themselves, and there's no clear distinction between external-server-compatible tests and other tests.
* Cluster mode is not supported (resulting in `CROSSSLOT` errors).

This PR cleans up many things and makes it possible to run the entire test suite against an external server. It also provides more fine-grained controls to handle cases where the external server supports a subset of the Redis commands, a limited number of databases, cluster mode, etc. The tests directory now contains a `README.md` file that describes how this works. This commit also includes additional cleanups and fixes:
* Tests can now be tagged.
* Tag-based selection is now unified across `start_server`, `tags` and `test`.
* More information is provided about skipped or ignored tests.
* Repeated patterns in tests have been extracted into common procedures, both at a global level and on a per-test-file basis.
* Cleaned up some cases where test setup depended on a previous test executing (a major anti-pattern that repeats itself in many places).
* Cleaned up some cases where test teardown was not part of a test (in the future we should have dedicated teardown code that executes even when tests fail).
* Fixed some tests that were flaky when running on external servers.
- 08 Jan, 2021 1 commit
Oran Agra authored
The defragger works well on these systems, but the tests and their thresholds are not adjusted for these big pages, so the defragger isn't able to get the fragmentation down to the levels the test expects, and it fails on "defrag didn't stop". 8k was chosen somewhat arbitrarily as the threshold for skipping. Fixes #8265 (which had 65k pages).
- 14 Dec, 2020 1 commit
Oran Agra authored
Additionally, the older defrag tests were using an obsolete way to check if the defragger is supported (the error no longer contains "DISABLED"). This doesn't usually make a difference, since these tests are completely skipped if the allocator is not jemalloc, but it would fail if the allocator is a jemalloc build that doesn't support defrag.
- 04 Nov, 2020 1 commit
Yossi Gottlieb authored
Because save was not disabled, slower systems would begin a background save that did not complete in time, resulting in SAVE failing with "ERR Background save already in progress".
- 22 Oct, 2020 1 commit
Yossi Gottlieb authored
Useful for running tests on systems which may be way slower than usual.
- 03 Sep, 2020 1 commit
Oran Agra authored
During long-running scripts or while loading an RDB/AOF, we may need to do some defragging. Since processEventsWhileBlocked is called periodically at unknown intervals, and many cron jobs either depend on run_with_period (including active defrag) or rely on being called at the server.hz rate (i.e. active defrag knows how much time to run by looking at server.hz), whileBlockedCron may have to run a loop triggering the cron jobs in it (currently only active defrag) several times. Other changes:
- Adding a test for defrag during AOF loading.
- Changing the key-load-delay config to take negative values, for fractions of a microsecond sleep
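The catch-up idea sketched in C, with hypothetical names (the real whileBlockedCron tracks timing differently):
```
#include <stdint.h>

static void activeDefragStep(void) { /* stand-in for the defrag job */ }

static uint64_t next_tick_ms; /* when the next cron tick is due */

/* Called from processEventsWhileBlocked() at unpredictable intervals:
 * since an unknown amount of time may have passed, run one tick per
 * missed server.hz period so run_with_period-style jobs keep pace. */
void whileBlockedCronSketch(uint64_t now_ms, int hz) {
    uint64_t period_ms = 1000 / hz;
    if (period_ms == 0) period_ms = 1;
    if (next_tick_ms == 0) next_tick_ms = now_ms; /* first call */
    while (next_tick_ms <= now_ms) {
        activeDefragStep();
        next_tick_ms += period_ms;
    }
}
```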
- 20 May, 2020 1 commit
Oran Agra authored
There's a rare case which leads to stagnation in the defragger, causing it to keep scanning the keyspace and do nothing (not moving any allocation). This happens when all the allocator slabs of a certain bin have the same % utilization, but the slab from which new allocations are made has a lower utilization. This commit fixes it by removing the current slab from the overall average utilization of the bin, and also eliminates any precision loss in the utilization calculation and moves the defrag decision to reside inside jemalloc. It also adds a test that consistently reproduces this issue.
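Conceptually the fix looks like this sketch (made-up names; as noted, the real decision now lives inside jemalloc):
```
#include <stdbool.h>

static bool defrag_hint(unsigned long bin_used_regs,  /* whole bin */
                        unsigned long bin_slab_count,
                        unsigned long curslab_used,   /* slab serving new allocs */
                        unsigned long slab_used) {    /* ptr's own slab */
    if (bin_slab_count <= 1) return false;
    /* Exclude the current slab: its lower utilization would drag the
     * average down and make every other slab look fuller than average,
     * so nothing would ever be moved. */
    unsigned long adj_used  = bin_used_regs - curslab_used;
    unsigned long adj_slabs = bin_slab_count - 1;
    /* Cross-multiply instead of dividing to avoid precision loss. */
    return slab_used * adj_slabs <= adj_used;
}
```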
- 16 Apr, 2020 1 commit
Oran Agra authored
This test is time sensitive and sometimes fails to pass below the latency threshold, even on strong machines. This test was the reason we were running just 2 parallel tests in the GitHub Actions CI; reverting that.
- 27 Feb, 2020 1 commit
Oran Agra authored
It seems that running two clients at a time is ok too; this reduces the action time from 20 minutes to 10. We'll use this for now, and if one day it isn't enough, we'll have to run just the sensitive tests one by one, separately from the others. This commit also fixes an issue with the defrag test that appears to be very rare.
- 25 Feb, 2020 1 commit
Oran Agra authored
It seems that the GitHub Actions machines are slow, so use just one client to reduce false positives. Also adding verbose output, testing only on the latest Ubuntu, and building on an older one. With that done, the test threshold can be reduced back to something saner.
- 23 Feb, 2020 1 commit
Oran Agra authored
I saw that the new defrag test for list was failing in CI recently, so I relaxed its threshold from 12 to 60. Besides that, I added / improved the latency test for the other two defrag tests (adding a sensitive latency check and digest / save checks), and fixed bad usage of debug populate (it can't override existing keys) to restore the original intention, which creates higher fragmentation.
- 18 Feb, 2020 1 commit
Oran Agra authored
When active defrag kicks in and finds a big list, it will create a bookmark to a node so that it can resume iteration from that node later. The quicklist manages that bookmark and updates it in case the node is deleted. This will increase memory usage by 16 bytes, and only on lists of over 1000 quicklist nodes (see active-defrag-max-scan-fields; that's 1000 ziplists, not 1000 items). In 32-bit builds, this change reduces the maximum effective config of list-compress-depth and list-max-ziplist-size (from 32767 to 8191).
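A simplified sketch of the bookmark record (the real definition in quicklist.h differs in details):
```
typedef struct quicklistNode quicklistNode;

/* One bookmark pins a name to a node so a long defrag scan can resume
 * where it left off; the quicklist updates or drops it if the node is
 * deleted. Two pointers account for the 16 bytes on 64-bit builds. */
typedef struct bookmarkSketch {
    quicklistNode *node; /* node to resume iteration from */
    char *name;          /* identifier chosen by the owner */
} bookmarkSketch;
```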
- 12 Nov, 2018 1 commit
Oran Agra authored
- 21 Aug, 2018 1 commit
Oran Agra authored
A few tests had borderline thresholds that were adjusted. The slave buffers test had two issues preventing the slave buffer from growing: 1) the slave didn't necessarily go to sleep on time, or woke up too early; now using SIGSTOP to make sure it goes to sleep exactly when we want. 2) the master disconnected the slave on timeout.
- 18 Jul, 2018 1 commit
Oran Agra authored
On slower machines, the active defrag test tended to fail: although the fragmentation ratio was below the threshold, the defragger was still in the middle of a scan cycle. This commit changes:
- the defragger uses the current fragmentation state, rather than the cached one that is updated by server cron every 100ms. This actually fixes a bug of starting one excess scan cycle
- the test lets the defragger use more CPU cycles, in the hope that the defrag will be faster, but also gives it more time before we give up
- 27 Jun, 2018 2 commits
- 24 May, 2018 1 commit
antirez authored
They failed when active defrag could not be activated because the jemalloc version does not include the additional APIs.
- 17 May, 2018 1 commit
Oran Agra authored
Problems fixed:
* failing to read fragmentation information from jemalloc
* overflow in the jemalloc fragmentation hint to the defragger
* test suite not triggering eviction after population
- 12 Mar, 2018 1 commit
Oran Agra authored
Other fixes / improvements:
- Lua script memory isn't taken from zmalloc (it's taken from libc malloc), so it can cause a high fragmentation ratio to be displayed (which is false)
- there was a problem with the "fragmentation" info being calculated from RSS and used_memory sampled at different times (now sampling them together)

Other details:
- adding a few more allocator info fields to the INFO and MEMORY commands
- improve the defrag test to measure the defrag latency of big keys
- increase the accuracy of the defrag test (by looking at real frag info); this way we can use an even lower threshold and still avoid false positives
- keep the old (total) "fragmentation" field unchanged, but add new ones for specific things
- add these to the MEMORY DOCTOR command
- deduct Lua memory from the RSS in the case of a non-jemalloc allocator (one for which we don't have "allocator active/used")
- reduce the sampling rate of the RSS and allocator info
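For reference, the "fragmentation" number discussed here is essentially RSS over allocator-used memory, which is why sampling the two at different times skews it; a minimal sketch (names are illustrative):
```
/* Sample both values at the same moment so the ratio isn't skewed by
 * the two drifting apart between samples. */
typedef struct memSample {
    unsigned long long rss;  /* resident set size, from the OS */
    unsigned long long used; /* bytes the allocator handed out */
} memSample;

static double frag_ratio(memSample s) {
    /* e.g. rss 1.7GB over used 1.0GB -> ratio 1.7 */
    return s.used ? (double)s.rss / (double)s.used : 0.0;
}
```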
- 22 Apr, 2017 3 commits
antirez authored
Related to #3786.
antirez authored
Apparently 1.4 is too low compared to what you get in certain setups (including mine). I raised it to 1.55, which hopefully is still enough to test that the fragmentation went down from 1.7, but without running into issues. However, the test setup may still be fragile, so at times this may lead to false positives again; it's hard to test for these things in a deterministic way. Related to #3786.
oranagra authored
- 10 Feb, 2015 1 commit
antirez authored
This test was extremely slow on Linux, since in Tcl we can't easily enable tcp-nodelay, so the busy loop used to take *a lot* of time with bigger writes. Fixed using pipelining.
- 25 Nov, 2013 1 commit
antirez authored
Fixes issue #1298.
- 29 Aug, 2013 1 commit
antirez authored