1. 12 Feb, 2023 1 commit
    • Reclaim page cache of RDB file (#11248) · 7dae142a
      Tian authored
      # Background
      The RDB file is usually generated and used once, and seldom used again, but its content resides in the page cache until the OS evicts it. A potential problem is that once free memory is exhausted, the OS has to reclaim memory from the page cache or swap anonymous pages out, which may cause jitter in the Redis service.
      
      Consider a concrete scenario: a high-capacity machine hosts many Redis instances, and we're upgrading them together. The page cache on the host machine grows as RDBs are generated. Once free memory drops below the low watermark (which is more likely to happen on older Linux kernels like 3.10; before [watermark_scale_factor](https://lore.kernel.org/lkml/1455813719-2395-1-git-send-email-hannes@cmpxchg.org/) was introduced, the `low watermark` was linear in the `min watermark`, leaving little buffer space for `kswapd` to be woken up to reclaim memory), a `direct reclaim` happens, which means the process stalls waiting for memory allocation.
      
      # What the PR does
      The PR introduces the capability to reclaim the page cache when the RDB is operated on. Generally there are two cases: reading and writing the RDB. For reads, it's a little messy to do the reclaim incrementally, so the reclaim is done in one go, in the background after the load finishes, to avoid blocking the worker thread. For writes, incremental reclaim amortizes the work, so there is no need to push it into the background, and the peak cache watermark is reduced this way.
      
      Two cases are addressed specially: replication and restart. For both, the cache is leveraged to speed up processing, so the reclaim is postponed until the right time. To do this, a flag is added to `rdbSave` and `rdbLoad` to control whether the cache needs to be kept, with a default value of false.
      
      # Things worth noting
      1. Though `posix_fadvise` is part of the POSIX standard, only a few platforms support it, e.g. Linux and FreeBSD 10.0.
      2. On Linux, `posix_fadvise` only takes effect on pages that have already been written back, so a `sync` (or `fsync`, `fdatasync`) is needed to flush the dirty pages before `posix_fadvise` when reclaiming the write cache (see the sketch below).
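      
      A minimal sketch of the reclaim step (not the PR's exact code; the helper name is illustrative):
      
      ```c
      #include <fcntl.h>
      #include <unistd.h>
      
      /* Reclaim the page cache backing the byte range [offset, offset+len)
       * of an already-written file. Dirty pages are flushed first, because
       * on Linux POSIX_FADV_DONTNEED only drops clean (written-back) pages. */
      static int reclaimFileCache(int fd, off_t offset, off_t len) {
          if (fdatasync(fd) == -1) return -1;
          return posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
      }
      ```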
      
      # About tests
      A unit test is added to verify the effect of `posix_fadvise`.
      In the integration tests, the overall cache increase is checked, as well as the cache backed by the RDB; a dedicated TCL test is executed in an isolated GitHub Actions job.
  2. 15 Oct, 2022 1 commit
    • optimizing d2string() and addReplyDouble() with grisu2: double to string conversion based on Florian Loitsch's Grisu-algorithm (#10587) · 29380ff7
      filipe oliveira authored
      
      All commands and use cases that heavily rely on double-to-string conversion
      (i.e. taking a double-precision floating-point number like 1.5 and returning a string like "1.5")
      can benefit from a performance boost by replacing snprintf(buf,len,"%.17g",value) with the
      equivalent [fpconv_dtoa](https://github.com/night-shift/fpconv) or any other algorithm that guarantees
      a correct conversion for 100% of values.
      
      This is a well-studied topic, and projects like MongoDB, Redpanda, and PyTorch leverage libraries
      (e.g. fmtlib) that use an optimized double-to-string conversion underneath.
      
      
      The positive impact can be substantial. This PR uses the grisu2 approach (Grisu is explained in
      section 5 of https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf).
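      
      A sketch of the swap, assuming fpconv's documented interface `int fpconv_dtoa(double fp, char dest[24])` (returns the character count and does not NUL-terminate); the wrapper name is illustrative:
      
      ```c
      #include <string.h>
      #include "fpconv.h" /* assumed vendored header declaring fpconv_dtoa() */
      
      /* Drop-in style replacement for snprintf(buf, len, "%.17g", value):
       * writes the shortest string that round-trips to the same double. */
      int d2string_grisu(char *buf, size_t len, double value) {
          char tmp[24];                    /* fpconv's documented maximum output */
          int n = fpconv_dtoa(value, tmp); /* output is not NUL-terminated */
          if ((size_t)n >= len) return 0;  /* caller's buffer is too small */
          memcpy(buf, tmp, n);
          buf[n] = '\0';
          return n;
      }
      ```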
      
      Test suite changes:
      Despite being compatible, in some cases it produces a different result from printf, and some tests
      had to be adjusted.
      One case is that `%.17g` (which means %e or %f, whichever is shorter) chose to print `5000000000`
      instead of `5e+9`, which sounds like a bug.
      In other cases, we changed the TCL tests to compare numbers instead of strings, to ignore minor rounding
      differences (`expr 0.8 == 0.79999999999999999`)
  3. 18 Jul, 2022 1 commit
    • Avoid using unsafe C functions (#10932) · eacca729
      ranshid authored
      replace use of:
      sprintf --> snprintf
      strcpy/strncpy  --> redis_strlcpy
      strcat/strncat  --> redis_strlcat
      
      **Why are we making this change?**
      Much of the code uses unsafe variants or deprecated buffer-handling
      functions.
      While most call sites probably don't present any issue on known paths,
      programming errors and unterminated strings might lead to potential
      buffer overflows which are not covered by tests.
      
      **As part of this PR we:**
      1. Added implementations of redis_strlcpy and redis_strlcat based on the strl* implementation (https://linux.die.net/man/3/strl); a sketch of the strl* semantics follows this list.
      2. Changed all occurrences of sprintf to snprintf.
      3. Changed occurrences of strcpy/strncpy to redis_strlcpy.
      4. Changed occurrences of strcat/strncat to redis_strlcat.
      5. Changed the behavior of ll2string/ull2string/ld2string so that they always place a null
        terminator ('\0') at the first index of the output buffer. This was done to make
        these functions safer in cases where the caller does not check their return
        value (for example in rdbRemoveTempFile).
      6. Added a compiler directive that raises a deprecation error whenever a use of
        sprintf/strcpy/strcat is found during compilation.
        However, keep in mind that since the deprecation attribute is not supported on all compilers,
        violations are expected to fail only in the push workflows.
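      
      A minimal sketch of the strl* semantics that redis_strlcpy follows (the OpenBSD strlcpy contract; not the PR's exact code):
      
      ```c
      #include <string.h>
      
      /* Copy at most dsize-1 bytes, always NUL-terminate when dsize > 0,
       * and return strlen(src) so the caller can detect truncation by
       * comparing the return value against dsize. */
      size_t redis_strlcpy(char *dst, const char *src, size_t dsize) {
          size_t srclen = strlen(src);
          if (dsize != 0) {
              size_t n = (srclen >= dsize) ? dsize - 1 : srclen;
              memcpy(dst, src, n);
              dst[n] = '\0';
          }
          return srclen; /* >= dsize means the result was truncated */
      }
      ```
      
      A caller can then detect truncation with `if (redis_strlcpy(buf, src, sizeof(buf)) >= sizeof(buf)) ...`.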
      
      
      **NOTE:** this is only an initial milestone. We might also consider
      using the *_s implementations provided by the C11 extensions (however, they are not
      yet widely supported). I would also suggest starting to
      look at static code analyzers to track unsafe usage.
      For example, the LLVM clang checker supports security.insecureAPI.DeprecatedOrUnsafeBufferHandling,
      which can help locate unsafe function usage.
      https://clang.llvm.org/docs/analyzer/checkers.html#security-insecureapi-deprecatedorunsafebufferhandling-c
      The main reason not to onboard it at this stage is that the alternative
      accepted by clang is to use the C11 extensions, which are not always
      supported by the stdlib.
  4. 20 Jun, 2022 1 commit
    • Fsync directory while persisting AOF manifest, RDB file, and config file (#10737) · 99a425d0
      Tian authored
      The current process to persist files is to `write` the data, `fsync`, and `rename` the file,
      but an underlying problem is that the rename may be lost after a sudden crash (like a
      power outage) if the containing directory hasn't been persisted.
      
      The article [Ensuring data reaches disk](https://lwn.net/Articles/457667/) mentions that
      the safe way to update a file is (a sketch of step 5 follows the list):
      
      1. create a new temp file (on the same file system!)
      2. write data to the temp file
      3. fsync() the temp file
      4. rename the temp file to the appropriate name
      5. fsync() the containing directory
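      
      A minimal sketch of step 5, assuming a POSIX system (the helper name is illustrative):
      
      ```c
      #include <fcntl.h>
      #include <unistd.h>
      
      /* fsync the directory that contains a freshly renamed file, so the
       * rename itself survives a crash. Returns 0 on success, -1 on error. */
      static int fsyncDirectory(const char *dirpath) {
          int fd = open(dirpath, O_RDONLY); /* a directory can be opened read-only */
          if (fd == -1) return -1;
          if (fsync(fd) == -1) { close(fd); return -1; }
          return close(fd);
      }
      ```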
      
      This commit handles CONFIG REWRITE, the AOF manifest, and the RDB file (both for persistence
      and for the one the replica gets from the master).
      It doesn't (yet) handle ACL SAVE and cluster configs, since these don't yet follow this pattern.
  5. 10 May, 2022 1 commit
    • Replace float zero comparison to FP_ZERO comparison (#10675) · c2d8d4e6
      Mariya Markova authored
      I suggest using [fpclassify](https://en.cppreference.com/w/cpp/numeric/math/fpclassify) for float
      comparison with zero, because the expression "value == 0", with a value very close to zero, can be
      evaluated as true under some performance-oriented compiler optimizations.
      
      Note: this code was introduced by 9d520a7f to accept zset scores that get ERANGE in conversion
      due to precision loss near 0.
      But with the Intel compilers ICC and ICX, where optimizations around the zero check are more aggressive,
      "==0" evaluates as true for the mentioned values when it should not. The behavior is seen starting from -O2.
      This led to a failure in the ZSCAN test in scan.tcl.
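      
      A minimal sketch of the suggested check (the helper name is illustrative):
      
      ```c
      #include <math.h>
      
      /* True only for a genuine +0.0 / -0.0. Unlike `value == 0`, the
       * classification is not affected by the aggressive optimizations
       * described above. */
      static int isZero(double value) {
          return fpclassify(value) == FP_ZERO;
      }
      ```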
  6. 18 Apr, 2022 1 commit
    • Fix long long to double implicit conversion warning (#10595) · fe4b4806
      Binbin authored
      
      
      There is an implicit conversion warning in clang:
      ```
      util.c:574:23: error: implicit conversion from 'long long' to 'double'
      changes value from -4611686018427387903 to -4611686018427387904
      [-Werror,-Wimplicit-const-int-float-conversion]
          if (d < -LLONG_MAX/2 || d > LLONG_MAX/2)
      ```
      
      introduced in #10486
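      
      One way to silence the warning is to make the conversion explicit; a sketch, not necessarily the exact fix that landed:
      
      ```c
      #include <limits.h>
      
      /* -LLONG_MAX/2 (-4611686018427387903) is not exactly representable
       * as a double; the explicit casts tell the compiler that rounding
       * to -4611686018427387904.0 is intentional. */
      static int inHalfLongLongRange(double d) {
          return d >= (double)(-LLONG_MAX/2) && d <= (double)(LLONG_MAX/2);
      }
      ```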
      Co-authored-by: sundb <sundbcn@gmail.com>
  7. 17 Apr, 2022 1 commit
    • Optimize integer zset scores in listpack (converting to string and back) (#10486) · 0c4733c8
      Oran Agra authored
      When the score doesn't have a fractional part and can be stored as an integer,
      we use the integer capabilities of listpack to store it, rather than converting it to a string.
      This already existed before this PR (lpInsert does that conversion implicitly).
      
      But to do that, we would first convert the score from double to string (calling `d2string`),
      then pass the string to `lpAppend`, which identified it as an integer and converted it back to an int.
      Now, instead of converting it to a string, we store it using `lpAppendInteger`.
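      
      A sketch of the before/after; the declarations below are assumptions based on listpack.h and util.h, and the helper name is illustrative:
      
      ```c
      #include <stddef.h>
      #include <stdint.h>
      
      /* Assumed Redis-internal APIs: */
      unsigned char *lpAppend(unsigned char *lp, unsigned char *ele, uint32_t size);
      unsigned char *lpAppendInteger(unsigned char *lp, long long lval);
      int d2string(char *buf, size_t len, double value);
      int double2ll(double d, long long *out);
      
      unsigned char *zsetAppendScore(unsigned char *lp, double score) {
          long long ll;
          if (double2ll(score, &ll)) {
              /* After this PR: append the integer directly, no d2string. */
              return lpAppendInteger(lp, ll);
          }
          /* Old path (still used for non-integer scores):
           * double -> string -> listpack. */
          char buf[128];
          int len = d2string(buf, sizeof(buf), score);
          return lpAppend(lp, (unsigned char*)buf, len);
      }
      ```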
      
      Unrelated:
      ---
      * Fix the double2ll range check (the negative and positive ranges, and also the comparison operands,
        were slightly off; note that the range could also be made much larger, see the comment).
      * Unify the double-to-string conversion code in rdb.c with the one in util.c.
      * Small optimization in lpStringToInt64: don't attempt to convert strings that are obviously too long.
      
      Benchmark:
      ---
      Up to 20% improvement in certain tight loops doing zzlInsert with large integers
      (if the listpack is pre-allocated to avoid realloc, and insertion is sorted from largest to smallest).
  8. 18 Jan, 2022 1 commit
    • Fix additional AOF filename issues. (#10110) · 25e6d4d4
      Yossi Gottlieb authored
      This extends the previous fix (#10049) to address any form of
      non-printable or whitespace character (including newlines, quotes,
      etc.)
      
      Also, removes the limitation on appenddirname, to align with the way
      filenames are handled elsewhere in Redis.
  9. 03 Jan, 2022 1 commit
    • Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788) · 87789fae
      chenyang8094 authored
      
      
      Implement the Multi-Part AOF mechanism to avoid overheads during AOFRW.
      This introduces a folder with multiple AOF files, tracked by a manifest file.
      
      The main issues with the original AOFRW mechanism are:
      * buffering of commands that are processed during the rewrite (consuming a lot of RAM)
      * freezes of the main process when the AOFRW completes, to drain the remaining part of the buffer and fsync it
      * double disk IO for the data that arrives during AOFRW (it had to be written to both the old and new AOF files)
      
      The main modifications of this PR:
      1. Remove the AOF rewrite buffer and related code.
      2. Divide the AOF into multiple files, classified as two types: the `BASE` type
        represents the full dataset (in either AOF or RDB format) as of the last AOFRW; there is
        at most one `BASE` file. The `INCR` type, of which there may be more than one, represents the
        incremental commands since the last AOFRW.
      3. Use an AOF manifest file to record and manage the AOF files mentioned above (an illustrative
        manifest appears after this list).
      4. The original `appendfilename` configuration becomes the base part of the new file names, for example:
        `appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`.
      5. Add manifest-related TCL tests, and modify some existing tests that depend on `appendfilename`.
      6. Remove the `aof_rewrite_buffer_length` field from INFO.
      7. Add the `aof-disable-auto-gc` configuration. By default we automatically delete HISTORY type AOFs.
        This option gives users the opportunity to preserve the history AOFs (just for testing use for now).
      8. Add an AOFRW limiting measure. When the number of AOFRW failures reaches the threshold (currently 3),
        we delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it is
        delayed by 2 minutes, then 4, 8, 16, up to a maximum delay of 60 minutes (1 hour). During the limit
        period, we can still use the `bgrewriteaof` command to execute an AOFRW immediately.
      9. Support upgrading (loading) data from older Redis versions.
      10. Add the `appenddirname` configuration as the directory name for the append-only files. All AOF files and
        the manifest file are placed in this directory.
      11. Only the last AOF file (BASE or INCR) may be truncated. Otherwise Redis will exit, even if
        `aof-load-truncated` is enabled.
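      
      For illustration, a manifest for the layout described above would look roughly like this (the exact field layout is a sketch inferred from the description; `b` = BASE, `i` = INCR):
      
      ```
      file appendonly.aof.1.base.rdb seq 1 type b
      file appendonly.aof.2.incr.aof seq 2 type i
      ```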
      Co-authored-by: Oran Agra <oran@redislabs.com>
  10. 24 Nov, 2021 1 commit
    • Replace ziplist with listpack in quicklist (#9740) · 45129059
      sundb authored
      
      
      Part three of implementing #8702, following #8887 and #9366.
      
      ## Description of the feature
      1. Replace the ziplist container of quicklist with listpack.
      2. Convert existing quicklist ziplists at RDB loading time (an O(n) operation).
      
      ## Interface changes
      1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
      2. Replace `debug ziplist` command with `debug listpack`.
      
      ## Internal changes
      1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
      2. Add `lpRepr` to print info about a listpack, used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
      3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357).
          It represents that a quicklistNode is a packed node, as opposed to a plain node.
      4. Remove the `createZiplistObject` method, which is never used.
      5. Calculate listpack entry size using an overhead overestimation in `quicklistAllowInsert`.
          We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
      
      ## Improvements
      1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
      2. Optimize `quicklistAppendPlainNode` to avoid memcpy'ing the data.
      
      ## Bugfix
      1. Fix crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.
      
      ## Test
      1. Add unittest for `lpMerge`.
      2. Modify the old quicklist ziplist corrupt dump test.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  11. 23 Sep, 2021 1 commit
    • Client eviction (#8687) · 2753429c
      yoav-steinberg authored
      
      
      ### Description
      A mechanism for disconnecting clients when the sum of all connected clients' memory usage is above a
      configured limit. This prevents eviction or OOM caused by the memory accumulated
      across all clients. It's a complementary mechanism to the `client-output-buffer-limit`
      mechanism, in that it takes into account not just a single client and not just output buffers,
      but all memory used by all clients.
      
      #### Design
      The general design is as follows:
      * We track the memory usage of each client, taking into account all memory used by the
        client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date
        after reading from the socket, after processing commands, and after writing to the socket.
      * Based on the used memory we sort all clients into buckets. Each bucket contains all
        clients using up to 2x the memory of the clients in the bucket below it, for example: up
        to 1mb, up to 2mb, up to 4mb, ...
      * Before processing a command, and before sleep, we check whether we're over the configured
        limit. If we are, we start disconnecting clients from the larger buckets downwards until we're
        under the limit.
      
      #### Config
      `maxmemory-clients` is the maximum memory all clients together are allowed to consume; above this
      threshold we disconnect clients.
      This config can be set to 0 (meaning no limit), to a size in bytes (possibly with an MB/GB
      suffix), or to a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%`
      would mean 10% of `maxmemory`).
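      
      For example, in redis.conf (illustrative values; pick one form):
      
      ```
      maxmemory-clients 0      # default: no limit
      maxmemory-clients 1g     # absolute cap on total client memory
      maxmemory-clients 10%    # cap at 10% of maxmemory
      ```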
      
      #### Important code changes
      * During development I encountered yet more situations where our io-threads access
        global vars, and needed to fix them. I also had to handle keeping the clients sorted into the
        memory buckets (which are global) while their memory usage changes in the io-thread.
        To achieve this I decided to simplify how we check whether we're in an io-thread and make it
        much more explicit. I removed the `CLIENT_PENDING_READ` flag used for checking
        whether the client is in an io-thread (it wasn't used for anything else) and just used the global
        `io_threads_op` variable the same way to check during writes.
      * I optimized the cleanup of the client from the `clients_pending_read` list on client freeing.
        We now store a pointer in the `client` struct to this list so we don't need to search in it
        (`pending_read_list_node`).
      * Added `evicted_clients` stat to `INFO` command.
      * Added a `CLIENT NO-EVICT ON|OFF` subcommand to exclude a specific client from the
        client eviction mechanism. Added a corresponding 'e' flag in the client info string.
      * Added `multi-mem` field in the client info string to show how much memory is used up
        by buffered multi commands.
      * Client `tot-mem` now accounts for buffered multi commands, pubsub patterns and
        channels (partially), and tracking prefixes (partially).
      * The CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function, so
        clients will be disconnected between processing different clients, and not only before sleep.
        This new function can be used in the future for work we want to do outside the command
        processing loop but don't want to wait for all clients to be processed before we get to it.
        Specifically, I wanted to handle output-buffer-limit-related closing before we process client
        eviction, in case the two race with each other.
      * Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction
        buckets.
      * Each client now holds a pointer to the client eviction memory usage bucket it belongs to,
        and a listNode to itself in that bucket, for quick removal.
      * The global `io_threads_op` variable can now contain an `IO_THREADS_OP_IDLE` value,
        indicating that no io-threading is currently being executed.
      * In order to track the memory used by each client in real time, we can't rely on updating
        these stats in `clientsCron()` alone anymore. So now I call `updateClientMemUsage()`
        (formerly `clientsCronTrackClientsMemUsage()`) after command processing, after
        writing data to pubsub clients, after writing the output buffer, and after reading from the
        socket (and maybe other places too). The function is written to be fast.
      * Clients are evicted if needed (with appropriate log line) in `beforeSleep()` and before
        processing a command (before performing oom-checks and key-eviction).
      * All clients memory usage buckets are grouped as follows:
        * All clients using less than 64k.
        * 64K..128K
        * 128K..256K
        * ...
        * 2G..4G
        * All clients using 4G and up.
      * Added client-eviction.tcl with a bunch of tests for the new mechanism.
      * Extended maxmemory.tcl to test the interaction between maxmemory and
        maxmemory-clients settings.
      * Added an option to flag a numeric configuration variable as a "percent", which means that
        if we encounter a '%' after the number in the config file (or in a CONFIG SET command) we
        consider it valid. Such a number is stored internally as a negative value. This way an
        integer value can be interpreted as either a percent (negative) or an absolute value (positive).
        This is useful, for example, when some numeric configuration can optionally be set to a percentage
        of something else.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  12. 23 Aug, 2021 1 commit
    • config memory limits: handle values larger than (signed) LLONG_MAX (#9313) · 641780a9
      Wen Hui authored
      This aims to solve the issue that CONFIG SET maxmemory could only set maxmemory up
      to 9223372036854775807 (2^63-1), while maxmemory should be a ULLONG.
      Added a memtoull function to convert a string representing an amount of memory
      into a number of bytes (similar to memtoll but for unsigned long long). Also added ull2string to
      convert a ULLong to a string (similar to ll2string).
  13. 10 Mar, 2021 1 commit
    • Add run all test support with define REDIS_TEST (#8570) · 95d6297d
      sundb authored
      1. Add `redis-server test all` support to run all tests.
      2. Add the redis test to the daily CI.
      3. Add an `--accurate` option to run slow tests for more iterations (by default we
         run fewer cycles: shorter time and fewer prints).
      4. Move the dict benchmark to REDIS_TEST.
      5. Fix some leaks in tests.
      6. Make quicklist tests run on a specific set of fill options rather than huge ranges.
      7. Move some prints in the quicklist test outside their loops to reduce printing.
      8. Remove sds.h from dict.c, since dict.c is now used in both redis-server and
         redis-cli (which uses the hiredis sds).
  14. 13 Dec, 2020 1 commit
    • Several (mostly Solaris-related) cleanups (#8171) · 86e3395c
      Yossi Gottlieb authored
      * Allow runtest-moduleapi to use a different 'make', for systems where GNU Make is 'gmake'.
      * Fix issue with builds on Solaris re-building everything from scratch because CFLAGS/LDFLAGS were not stored.
      * Fix compile failure on Solaris due to atomicvar, and a bunch of warnings.
      * Fix garbled log timestamps on Solaris.
  15. 23 Apr, 2020 1 commit
    • getRandomBytes(): use HMAC-SHA256. · 9ae8254e
      antirez authored
      Now that we have an interface to use this API directly, via ACL GENPASS,
      we are no longer sure what people may do with it. So why not make it
      a strong primitive exported by Redis, for creating unique IDs and
      so forth?
      
      The implementation was tested against the test vectors that can
      be found in RFC 4231.
  16. 30 Jan, 2020 1 commit
    • ld2string should fail if string contains \0 in the middle · 2deb5551
      Guy Benoish authored
      This bug affected RM_StringToLongDouble and HINCRBYFLOAT.
      I added tests for both cases.
      
      Main changes:
      1. Fixed string2ld to fail if the string contains \0 in the middle.
      2. Use string2ld in getLongDoubleFromObject; no point in
         having duplicated code here.
      
      The two changes above broke RM_SaveLongDouble/RM_LoadLongDouble,
      because the long double string was saved with length+1 (an innocent
      mistake, but it's actually a bug: the length passed to
      RM_SaveLongDouble should not include the trailing \0).
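      
      A minimal sketch of the guard (buffer size and helper name are illustrative):
      
      ```c
      #include <stdlib.h>
      #include <string.h>
      
      /* strtold() stops at the first '\0'; if it consumed fewer than slen
       * bytes, the input had an embedded '\0' (or trailing garbage), so we
       * reject it. */
      static int string2ld_sketch(const char *s, size_t slen, long double *dp) {
          char buf[256];
          char *eptr;
          if (slen == 0 || slen >= sizeof(buf)) return 0;
          memcpy(buf, s, slen);
          buf[slen] = '\0';
          long double v = strtold(buf, &eptr);
          if (eptr != buf + slen) return 0;
          *dp = v;
          return 1;
      }
      ```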
  17. 04 Nov, 2019 1 commit
    • Add RM_ServerInfoGetFieldUnsigned · 04233097
      Oran Agra authored
      Rename RM_ServerInfoGetFieldNumerical to RM_ServerInfoGetFieldSigned.
      Move string2ull to util.c.
      Fix a leak in RM_GetServerInfo when duplicate info fields exist.
  18. 03 Nov, 2019 2 commits
    • Module API for loading and saving long double · 779aebc9
      Oran Agra authored
      It looks like each platform implements long double differently (with a different bit count),
      so we can't save them as binary, and we also want to avoid creating a new RDB
      format version, so we save them as hex strings using "%La".
      
      This commit includes a change to the arguments of ld2string to support this,
      as well as tests for coverage and short reads.
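      
      A minimal sketch of the hex round trip (a standalone example, not the PR's code):
      
      ```c
      #include <stdio.h>
      #include <stdlib.h>
      
      int main(void) {
          long double ld = 3.14159265358979323846L;
          char buf[64];
      
          /* "%La" prints an exact hexadecimal representation, e.g.
           * "0x1.921fb54442d1846ap+1", so nothing is lost even if the
           * reading platform uses a different long double bit width. */
          snprintf(buf, sizeof(buf), "%La", ld);
          long double back = strtold(buf, NULL);
          return (back == ld) ? 0 : 1; /* round trip is exact */
      }
      ```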
      
      coded by @guybe7
    • Add module api for looking into INFO fields · 4d580438
      Oran Agra authored
      - Add RM_GetServerInfo and friends
      - Add auto memory for the new opaque struct
      - Add tests for the new APIs
      
      Other minor fixes:
      - add const to various char pointers
      - requested_section in modulesCollectInfo was actually char* and not sds
      - extract a new string2d out of getDoubleFromObject, for code reuse