- 12 Dec, 2022 1 commit
-
-
filipe oliveira authored
GEODIST used snprintf("%.4f") for the reply using addReplyDoubleDistance, which was slow. This PR optimizes it without breaking compatibility by following the approach of ll2string with some changes to match the use case of distance and precision. I.e. we multiply it by 10000 format it as an integer, and then add a decimal point. This can achieve about 35% increase in the achievable ops/sec. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 61c85a2b)
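The fixed-point idea described above can be sketched roughly as follows (a minimal illustration; `format_distance` is a hypothetical helper, and the real code generates the digits ll2string-style instead of calling snprintf for them):
```
#include <stdio.h>

/* Hypothetical sketch: format a non-negative distance with 4 decimal
 * digits without going through the slow "%.4f" floating-point path.
 * Scale by 10000, round to a long long, then print the integer and
 * fractional parts around a decimal point. */
int format_distance(char *buf, size_t len, double d) {
    long long scaled = (long long)(d * 10000.0 + 0.5); /* round half up */
    return snprintf(buf, len, "%lld.%04lld", scaled / 10000, scaled % 10000);
}

int main(void) {
    char buf[64];
    format_distance(buf, sizeof(buf), 166274.1516);
    printf("%s\n", buf); /* 166274.1516 */
    return 0;
}
```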
-
- 20 Jun, 2022 1 commit
-
-
Tian authored
The current process to persist files is: `write` the data, `fsync`, and `rename` the file. But an underlying problem is that the rename may be lost on a sudden crash (like a power outage) if the containing directory hasn't been persisted. The article [Ensuring data reaches disk](https://lwn.net/Articles/457667/) mentions that a safe way to update a file is:
1. create a new temp file (on the same file system!)
2. write data to the temp file
3. fsync() the temp file
4. rename the temp file to the appropriate name
5. fsync() the containing directory
This commit handles CONFIG REWRITE, the AOF manifest, and RDB files (both for persistence and the one the replica gets from the master). It doesn't handle (yet) ACL SAVE and Cluster configs, since these don't yet follow this pattern.
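A minimal sketch of that pattern (error paths shortened; `safe_write_file` is a hypothetical helper, not the actual Redis code):
```
#include <fcntl.h>
#include <libgen.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical sketch of the crash-safe update sequence described above:
 * write to a temp file on the same filesystem, fsync it, rename it over
 * the target, then fsync the containing directory so the rename itself
 * survives a crash. */
int safe_write_file(const char *path, const char *data, size_t len) {
    char tmp[4096], dir[4096];
    snprintf(tmp, sizeof(tmp), "%s.tmp", path);
    snprintf(dir, sizeof(dir), "%s", path); /* dirname() may modify its arg */

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) == -1) {
        close(fd);
        return -1;
    }
    close(fd);

    if (rename(tmp, path) == -1) return -1;   /* atomic replacement */

    int dfd = open(dirname(dir), O_RDONLY);   /* persist the rename itself */
    if (dfd == -1) return -1;
    int ret = fsync(dfd);
    close(dfd);
    return ret;
}
```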
-
- 10 May, 2022 1 commit
-
-
Mariya Markova authored
I suggest using [fpclassify](https://en.cppreference.com/w/cpp/numeric/math/fpclassify) for float comparison with zero, because the expression "value == 0" can be considered true for values very close to zero under some compiler performance optimizations. Note: this code was introduced by 9d520a7f to accept zset scores that get ERANGE in conversion due to precision loss near 0. But with the Intel compilers ICC and ICX, where optimizations for the zero check are more aggressive, "== 0" evaluates to true in the mentioned functions when it should not. The behavior is seen starting from -O2, and it leads to a failure in the ZSCAN test in scan.tcl.
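The suggested check, as a small standalone sketch:
```
#include <math.h>
#include <stdio.h>

/* Sketch: test for zero via fpclassify() instead of "value == 0", so an
 * aggressively optimizing compiler cannot treat a value that is merely
 * very close to zero as if it compared equal to zero. */
int is_zero(double value) {
    return fpclassify(value) == FP_ZERO; /* true only for +0.0 and -0.0 */
}

int main(void) {
    printf("%d %d %d\n", is_zero(0.0), is_zero(-0.0), is_zero(1e-310));
    /* expected: 1 1 0 (1e-310 is subnormal, not zero) */
    return 0;
}
```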
-
- 18 Apr, 2022 1 commit
-
-
Binbin authored
There is an implicit conversion warning in clang:
```
util.c:574:23: error: implicit conversion from 'long long' to 'double' changes value from -4611686018427387903 to -4611686018427387904 [-Werror,-Wimplicit-const-int-float-conversion]
    if (d < -LLONG_MAX/2 || d > LLONG_MAX/2)
```
It was introduced in #10486. Co-authored-by:
sundb <sundbcn@gmail.com>
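One way this kind of warning can be silenced is by making the int-to-double conversion explicit; this is a sketch only, not necessarily the fix that was applied in the PR:
```
#include <limits.h>
#include <stdio.h>

/* Sketch: -LLONG_MAX/2 is not exactly representable as a double, so the
 * implicit conversion in the comparison triggers
 * -Wimplicit-const-int-float-conversion. An explicit cast documents that
 * the rounding is intentional. */
int in_safe_range(double d) {
    return d >= (double)(-LLONG_MAX / 2) && d <= (double)(LLONG_MAX / 2);
}

int main(void) {
    printf("%d\n", in_safe_range(123.0)); /* 1 */
    return 0;
}
```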
-
- 17 Apr, 2022 1 commit
-
-
Oran Agra authored
When the score doesn't have a fractional part and can be stored as an integer, we use the integer capabilities of listpack to store it, rather than converting it to a string. This already existed before this PR (lpInsert does that conversion implicitly), but to do that we would first convert the score from double to string (calling `d2string`), then pass the string to `lpAppend`, which identified it as an integer and converted it back to an int. Now, instead of converting it to a string, we store it using `lpAppendInteger`.
Unrelated:
* Fix the double2ll range check (the negative and positive ranges, and also the comparison operands, were slightly off; but also, the range could be made much larger, see comment).
* Unify the double-to-string conversion code in rdb.c with the one in util.c.
* Small optimization in lpStringToInt64: don't attempt to convert strings that are obviously too long.
Benchmark:
Up to 20% improvement in certain tight loops doing zzlInsert with large integers (if the listpack is pre-allocated to avoid realloc, and insertion is sorted from largest to smallest).
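The decision described in the first paragraph can be sketched like this (illustrative only; the real code then calls `lpAppendInteger` on the listpack, which is elided here, and the range check only mirrors the idea behind double2ll):
```
#include <math.h>
#include <stdio.h>

/* Sketch: a zset score with no fractional part that fits in a long long
 * can be handed to the listpack as an integer (lpAppendInteger) instead
 * of being rendered to a string first (d2string + lpAppend). */
int score_as_integer(double score, long long *out) {
    if (!isfinite(score)) return 0;
    if (score != floor(score)) return 0; /* has a fractional part */
    /* within +/- 2^53 every integer is exactly representable as a double */
    if (score < -9007199254740992.0 || score > 9007199254740992.0) return 0;
    *out = (long long)score;
    return 1;
}

int main(void) {
    long long v;
    printf("%d\n", score_as_integer(42.0, &v)); /* 1, v == 42 */
    printf("%d\n", score_as_integer(42.5, &v)); /* 0 */
    return 0;
}
```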
-
- 14 Mar, 2022 1 commit
-
-
DarrenJiang13 authored
For an integer string like "123456789012345678901", which would cause an overflow failure in the string2ll() conversion, we can compare its length at the beginning to avoid the extra work.
* Move LONG_STR_SIZE to be declared in util.h, next to MAX_LONG_DOUBLE_CHARS.
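The length pre-check amounts to something like this sketch (`looks_like_longlong` is a hypothetical wrapper; the real check lives inside string2ll() itself):
```
#include <stdio.h>
#include <string.h>

/* A long long rendered as a string needs at most 21 bytes: an optional
 * sign, up to 19 digits, and the terminating '\0'
 * (e.g. "-9223372036854775808"). Anything at least 21 characters long
 * therefore cannot fit and can be rejected up front. */
#define LONG_STR_SIZE 21

int looks_like_longlong(const char *s, size_t slen) {
    if (slen == 0 || slen >= LONG_STR_SIZE) return 0;
    if (s[0] != '-' && (s[0] < '0' || s[0] > '9')) return 0;
    return 1; /* the caller would now run the full digit-by-digit conversion */
}

int main(void) {
    const char *s = "123456789012345678901"; /* 21 digits: too long */
    printf("%d\n", looks_like_longlong(s, strlen(s))); /* 0 */
    return 0;
}
```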
-
- 18 Jan, 2022 1 commit
-
-
Yossi Gottlieb authored
This extends the previous fix (#10049) to address any form of non-printable or whitespace character (newlines, quotes, etc.). Also, it removes the limitation on appenddirname, aligning with the way filenames are handled elsewhere in Redis.
-
- 10 Jan, 2022 2 commits
-
-
chenyang8094 authored
1. Ban whitespace characters in `appenddirname`.
2. Handle the case where `appendfilename` contains spaces (for backwards compatibility).
-
Madelyn Olson authored
Changed latency percentile output to omit trailing 0s and periods
-
- 03 Jan, 2022 1 commit
-
-
chenyang8094 authored
Implement a Multi-Part AOF mechanism to avoid overheads during AOFRW, introducing a folder with multiple AOF files tracked by a manifest file.
The main issues with the original AOFRW mechanism are:
* buffering of commands that are processed during rewrite (consuming a lot of RAM)
* freezes of the main process when the AOFRW completes, to drain the remaining part of the buffer and fsync it
* double disk IO for the data that arrives during AOFRW (it had to be written to both the old and new AOF files)
The main modifications of this PR:
1. Remove the AOF rewrite buffer and related code.
2. Divide the AOF into multiple files, classified into two types: the `BASE` type represents the full amount of data (maybe AOF or RDB format) after each AOFRW, and there is at most one `BASE` file; the `INCR` type, of which there may be more than one, represents the incremental commands since the last AOFRW.
3. Use an AOF manifest file to record and manage the AOF files mentioned above.
4. The original `appendfilename` configuration will be the base part of the new file names, for example: `appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`.
5. Add manifest-related TCL tests, and modify some existing tests that depend on `appendfilename`.
6. Remove the `aof_rewrite_buffer_length` field from INFO.
7. Add the `aof-disable-auto-gc` configuration. By default we automatically delete HISTORY type AOFs; this gives users the opportunity to preserve the history AOFs. Just for testing use now.
8. Add an AOFRW limiting measure. When the number of AOFRW failures reaches the threshold (3 times for now), we delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it is delayed by 2 minutes, then 4, 8, 16, with a maximum delay of 60 minutes (1 hour). During the limit period, we can still use the `bgrewriteaof` command to execute an AOFRW immediately. (A sketch of this backoff follows below.)
9. Support upgrading (loading) data from an old version of Redis.
10. Add the `appenddirname` configuration as the directory name of the append only files. All AOF files and the manifest file will be placed in this directory.
11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise Redis will exit, even if `aof-load-truncated` is enabled.
Co-authored-by:
Oran Agra <oran@redislabs.com>
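Point 8 above, as a simplified sketch (numbers taken from the message; the actual implementation is structured differently):
```
#include <stdio.h>

/* Sketch of the AOFRW limiting: after 3 consecutive failures the next
 * automatic rewrite is delayed by 1 minute, then 2, 4, 8, 16, ...
 * capped at 60 minutes. */
#define AOFRW_FAILURE_THRESHOLD 3
#define AOFRW_MAX_DELAY_MINUTES 60

int aofrw_delay_minutes(int consecutive_failures) {
    if (consecutive_failures < AOFRW_FAILURE_THRESHOLD) return 0;
    int delay = 1;
    for (int i = AOFRW_FAILURE_THRESHOLD; i < consecutive_failures; i++) {
        delay *= 2;
        if (delay >= AOFRW_MAX_DELAY_MINUTES) return AOFRW_MAX_DELAY_MINUTES;
    }
    return delay;
}

int main(void) {
    for (int f = 1; f <= 10; f++)
        printf("failures=%d delay=%d min\n", f, aofrw_delay_minutes(f));
    return 0;
}
```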
-
- 24 Nov, 2021 1 commit
-
-
sundb authored
Part three of implementing #8702, following #8887 and #9366.
## Description of the feature
1. Replace the ziplist container of quicklist with listpack.
2. Convert existing quicklist ziplists at RDB loading time (an O(n) operation).
## Interface changes
1. The new `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
2. Replace the `debug ziplist` command with `debug listpack`.
## Internal changes
1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
2. Add `lpRepr` to print info about a listpack; used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357). It represents that a quicklistNode is a packed node, as opposed to a plain node.
4. Remove the `createZiplistObject` method, which is never used.
5. Calculate the listpack entry size using an overhead overestimation in `quicklistAllowInsert`. We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
## Improvements
1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
2. Optimize `quicklistAppendPlainNode` to avoid memcpy'ing the data.
## Bugfix
1. Fix a crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.
## Test
1. Add a unit test for `lpMerge`.
2. Modify the old quicklist ziplist corrupt dump test.
Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 16 Nov, 2021 1 commit
-
-
sundb authored
This is a preparation step in order to add a new test in quicklist.c; see #9776.
-
- 23 Sep, 2021 1 commit
-
-
yoav-steinberg authored
### Description
A mechanism for disconnecting clients when the sum of all connected clients is above a configured limit. This prevents eviction or OOM caused by memory accumulated across all clients. It's a complementary mechanism to the `client-output-buffer-limit` mechanism, which takes into account not only a single client and not only output buffers, but rather all memory used by all clients.
#### Design
The general design is as follows:
* We track the memory usage of each client, taking into account all memory used by the client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date after reading from the socket, after processing commands, and after writing to the socket.
* Based on the used memory we sort all clients into buckets. Each bucket contains all clients using up to 2x the memory of the clients in the bucket below it. For example: up to 1m clients, up to 2m clients, up to 4m clients, ...
* Before processing a command and before sleep we check if we're over the configured limit. If we are, we start disconnecting clients from larger buckets downwards until we're under the limit.
#### Config
`maxmemory-clients` is the max memory all clients are allowed to consume; above this threshold we disconnect clients. This config can either be set to 0 (meaning no limit), a size in bytes (possibly with an MB/GB suffix), or as a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%` would mean 10% of `maxmemory`).
#### Important code changes
* During the development I encountered yet more situations where our io-threads access global vars, and needed to fix them. I also had to keep the clients sorted into the (global) memory buckets while their memory usage changes in the io-thread. To achieve this I decided to simplify how we check if we're in an io-thread and make it much more explicit. I removed the `CLIENT_PENDING_READ` flag used for checking if the client is in an io-thread (it wasn't used for anything else) and just used the global `io_threads_op` variable the same way to check during writes.
* I optimized the cleanup of the client from the `clients_pending_read` list on client freeing. We now store a pointer to this list node in the `client` struct so we don't need to search the list (`pending_read_list_node`).
* Added an `evicted_clients` stat to the `INFO` command.
* Added a `CLIENT NO-EVICT ON|OFF` subcommand to exclude a specific client from the client eviction mechanism, and a corresponding 'e' flag in the client info string.
* Added a `multi-mem` field in the client info string to show how much memory is used up by buffered multi commands.
* Client `tot-mem` now accounts for buffered multi-commands, pubsub patterns and channels (partially), and tracking prefixes (partially).
* The CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function, so clients will be disconnected between processing different clients and not only before sleep. This new function can be used in the future for work we want to do outside the command processing loop but don't want to wait for all clients to be processed before we get to it. Specifically, I wanted to handle output-buffer-limit related closing before we process client eviction, in case the two race with each other.
* Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction buckets.
* Each client now holds a pointer to the client eviction memory usage bucket it belongs to, and a listNode to itself in that bucket, for quick removal.
* The global `io_threads_op` variable can now contain an `IO_THREADS_OP_IDLE` value, indicating no io-threading is currently being executed.
* In order to track the memory used by each client in real time, we can't rely on updating these stats in `clientsCron()` alone anymore. So now I call `updateClientMemUsage()` (used to be `clientsCronTrackClientsMemUsage()`) after command processing, after writing data to pubsub clients, after writing the output buffer, and after reading from the socket (and maybe other places too). The function is written to be fast.
* Clients are evicted if needed (with an appropriate log line) in `beforeSleep()` and before processing a command (before performing oom-checks and key-eviction).
* All client memory usage buckets are grouped as follows (see the sketch below):
  * All clients using less than 64k.
  * 64K..128K
  * 128K..256K
  * ...
  * 2G..4G
  * All clients using 4g and up.
* Added client-eviction.tcl with a bunch of tests for the new mechanism.
* Extended maxmemory.tcl to test the interaction between the maxmemory and maxmemory-clients settings.
* Added an option to flag a numeric configuration variable as a "percent": if we encounter a '%' after the number in the config file (or CONFIG SET command) we consider it valid. Such a number is stored internally as a negative value, so an integer value can be interpreted as either a percent (negative) or an absolute value (positive). This is useful, for example, if some numeric configuration can optionally be set to a percentage of something else.
Co-authored-by:
Oran Agra <oran@redislabs.com>
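The bucket layout from the list above, as a standalone sketch (the real implementation also keeps per-client list nodes inside each bucket; this only shows how a memory amount could map to a bucket index):
```
#include <stdio.h>

/* Sketch: bucket 0 holds clients using less than 64K; each following
 * bucket covers up to 2x the memory of the one below it (64K..128K,
 * 128K..256K, ..., 2G..4G), and the last bucket holds 4G and up. */
#define MIN_BUCKET_BYTES (64ULL * 1024)
#define MAX_BUCKET_BYTES (4ULL * 1024 * 1024 * 1024)

int client_mem_bucket(unsigned long long used) {
    int bucket = 0;
    unsigned long long ceiling = MIN_BUCKET_BYTES;
    while (used >= ceiling && ceiling < MAX_BUCKET_BYTES) {
        ceiling *= 2;
        bucket++;
    }
    return bucket;
}

int main(void) {
    printf("%d\n", client_mem_bucket(10ULL * 1024));              /* 0 */
    printf("%d\n", client_mem_bucket(100ULL * 1024));             /* 1 */
    printf("%d\n", client_mem_bucket(5ULL * 1024 * 1024 * 1024)); /* top */
    return 0;
}
```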
-
- 23 Aug, 2021 1 commit
-
-
Wen Hui authored
This aims to solve the issue that CONFIG SET maxmemory could only set maxmemory up to 9223372036854775807 (2^63-1), while maxmemory should be a ULLONG. Added a memtoull function to convert a string representing an amount of memory into the number of bytes (similar to memtoll, but for unsigned long long). Also added ull2string to convert an unsigned long long to a string (similar to ll2string).
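A rough idea of what a memtoull-style conversion does (an illustrative sketch only, not the actual util.c function, which also reports an error flag and distinguishes 1000-based from 1024-based suffixes):
```
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>

/* Sketch: convert strings like "1024", "64kb" or "2gb" into a byte
 * count as an unsigned long long. */
unsigned long long parse_memory(const char *s, int *err) {
    char *end;
    *err = 0;
    unsigned long long val = strtoull(s, &end, 10);
    if (end == s) { *err = 1; return 0; }
    if (*end == '\0' || !strcasecmp(end, "b")) return val;
    if (!strcasecmp(end, "kb")) return val * 1024ULL;
    if (!strcasecmp(end, "mb")) return val * 1024ULL * 1024;
    if (!strcasecmp(end, "gb")) return val * 1024ULL * 1024 * 1024;
    *err = 1;
    return 0;
}

int main(void) {
    int err;
    printf("%llu\n", parse_memory("64kb", &err)); /* 65536 */
    printf("%llu\n", parse_memory("2gb", &err));  /* 2147483648 */
    return 0;
}
```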
-
- 10 Mar, 2021 1 commit
-
-
sundb authored
1. Add `redis-server test all` support to run all tests.
2. Add the Redis tests to the daily CI.
3. Add an `--accurate` option to run slow tests for more iterations (so that by default we run fewer cycles: shorter time, and fewer prints).
4. Move the dict benchmark to REDIS_TEST.
5. Fix some leaks in tests.
6. Make quicklist tests run on a specific set of fill options rather than huge ranges.
7. Move some prints in the quicklist test outside their loops to reduce prints.
8. Remove sds.h from dict.c, since dict.c is now used in both redis-server and redis-cli (which uses hiredis sds).
-
- 15 Feb, 2021 1 commit
-
-
Yossi Gottlieb authored
Fixes #8489
-
- 18 Jan, 2021 1 commit
-
-
Raghav Muddur authored
-
- 13 Dec, 2020 1 commit
-
-
Yossi Gottlieb authored
* Allow runtest-moduleapi to use a different 'make', for systems where GNU Make is 'gmake'. * Fix issue with builds on Solaris re-building everything from scratch due to CFLAGS/LDFLAGS not being stored. * Fix compile failure on Solaris due to atomicvar and a bunch of warnings. * Fix garbled log timestamps on Solaris.
-
- 06 May, 2020 1 commit
-
-
antirez authored
-
- 05 May, 2020 1 commit
-
-
Brad Dunbar authored
-
- 23 Apr, 2020 1 commit
-
-
antirez authored
Now that we have an interface to use this API directly, via ACL GENPASS, we are no longer sure what people could do with it. So why not make it a strong primitive exported by Redis, in order to create unique IDs and so forth? The implementation was tested against the test vectors that can be found in RFC4231.
-
- 30 Jan, 2020 1 commit
-
-
Guy Benoish authored
This bug affected RM_StringToLongDouble and HINCRBYFLOAT. I added tests for both cases. Main changes: 1. Fixed string2ld to fail if the string contains a \0 in the middle. 2. Use string2ld in getLongDoubleFromObject - no point in having duplicated code here. The two changes above broke RM_SaveLongDouble/RM_LoadLongDouble, because the long double string was saved with length+1 (an innocent mistake, but it's actually a bug - the length passed to RM_SaveLongDouble should not include the trailing \0).
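The first change can be illustrated with a sketch like this (not the actual util.c code; the point is that the conversion must consume exactly the given length, so an embedded '\0' makes it fail):
```
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of a string2ld-style parser that rejects input containing '\0'
 * in the middle: strtold stops at the embedded '\0', so the "consumed
 * exactly slen bytes" check fails. */
int parse_long_double(const char *s, size_t slen, long double *out) {
    char buf[256];
    if (slen >= sizeof(buf)) return 0;
    memcpy(buf, s, slen);
    buf[slen] = '\0';

    char *end;
    errno = 0;
    long double v = strtold(buf, &end);
    if (end != buf + slen || errno == ERANGE) return 0;
    *out = v;
    return 1;
}

int main(void) {
    long double v;
    printf("%d\n", parse_long_double("3.14", 4, &v));     /* 1 */
    printf("%d\n", parse_long_double("3.1\0004", 5, &v)); /* 0 */
    return 0;
}
```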
-
- 05 Nov, 2019 1 commit
-
-
Guy Benoish authored
-
- 04 Nov, 2019 1 commit
-
-
Oran Agra authored
Rename RM_ServerInfoGetFieldNumerical to RM_ServerInfoGetFieldSigned. Move string2ull to util.c. Fix a leak in RM_GetServerInfo when duplicate info fields exist.
-
- 03 Nov, 2019 2 commits
-
-
Oran Agra authored
It looks like each platform implements long double differently (different bit count), so we can't save them as binary. We also want to avoid creating a new RDB format version, so we save these as hex strings using "%La". This commit includes a change in the arguments of ld2string to support this, as well as tests for coverage and short reads. Coded by @guybe7.
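The "%La" trick round-trips exactly, which a small sketch can show:
```
#include <stdio.h>
#include <stdlib.h>

/* Sketch: long double has a different bit width per platform, so instead
 * of dumping raw bytes, render it with the "%La" hexadecimal float format
 * and parse it back with strtold. */
int main(void) {
    long double original = 3.14159265358979323846L;
    char buf[64];
    snprintf(buf, sizeof(buf), "%La", original);

    long double restored = strtold(buf, NULL);
    printf("hex form: %s\n", buf);
    printf("round-trip exact: %d\n", original == restored); /* 1 */
    return 0;
}
```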
-
Oran Agra authored
- Add RM_GetServerInfo and friends.
- Add auto memory for the new opaque struct.
- Add tests for the new APIs.
Other minor fixes:
- Add const to various char pointers.
- requested_section in modulesCollectInfo was actually not sds but char*.
- Extract the new string2d out of getDoubleFromObject for code reuse.
Add module API for
-
- 28 Jan, 2019 1 commit
-
-
Guy Benoish authored
The string representation of `long double` may take up to ~5000 chars (see PR #3745). Before this fix HINCRBYFLOAT would never overflow (since the string could not exceed 256 chars). Now it can.
-
- 11 Dec, 2018 2 commits
- 15 Nov, 2018 1 commit
-
-
Weiliang Li authored
fix comment typo in util.c
-
- 26 Oct, 2018 2 commits
-
-
David Carlier authored
-
David Carlier authored
The timezone global is a Linux-ism, whereas under BSD timezone is a function. Here is a helper to get the timezone value in a more portable manner.
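One possible shape of such a helper (a sketch that assumes tm_gmtoff is available, as it is on glibc and the BSDs; the helper Redis actually added may be implemented differently):
```
#include <stdio.h>
#include <time.h>

/* Sketch: derive the offset from localtime_r()/tm_gmtoff instead of
 * relying on the `timezone` global (a Linux-ism; on the BSDs `timezone`
 * is a function). */
long seconds_west_of_utc(void) {
    time_t now = time(NULL);
    struct tm local;
    tzset();
    localtime_r(&now, &local);
    return -local.tm_gmtoff; /* tm_gmtoff is seconds east of UTC */
}

int main(void) {
    printf("seconds west of UTC: %ld\n", seconds_west_of_utc());
    return 0;
}
```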
-
- 24 Jul, 2018 2 commits
- 23 Jul, 2018 2 commits
- 21 Jul, 2018 1 commit
-
-
dsomeshwar authored
-
- 03 Jul, 2018 1 commit
-
-
Jack Drogon authored
-
- 05 Apr, 2018 1 commit
-
-
antirez authored
-
- 12 Dec, 2017 1 commit
-
-
nashe authored
-