- 10 Apr, 2023 1 commit

sundb authored

## Issue

When we use GCC 12 or later, or clang 9.0 or later, to build with `-D_FORTIFY_SOURCE=3`, we can see the following buffer overflow:

```
=== REDIS BUG REPORT START: Cut & paste starting from here ===
6263:M 06 Apr 2023 08:59:12.915 # Redis 255.255.255 crashed by signal: 6, si_code: -6
6263:M 06 Apr 2023 08:59:12.915 # Crashed running the instruction at: 0x7f03d59efa7c

------ STACK TRACE ------
EIP:
/lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7f03d59efa7c]

Backtrace:
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f03d599b520]
/lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7f03d59efa7c]
/lib/x86_64-linux-gnu/libc.so.6(raise+0x16)[0x7f03d599b476]
/lib/x86_64-linux-gnu/libc.so.6(abort+0xd3)[0x7f03d59817f3]
/lib/x86_64-linux-gnu/libc.so.6(+0x896f6)[0x7f03d59e26f6]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x2a)[0x7f03d5a8f76a]
/lib/x86_64-linux-gnu/libc.so.6(+0x1350c6)[0x7f03d5a8e0c6]
src/redis-server 127.0.0.1:25111(+0xd5e80)[0x557cddd3be80]
src/redis-server 127.0.0.1:25111(feedReplicationBufferWithObject+0x78)[0x557cddd3c768]
src/redis-server 127.0.0.1:25111(replicationFeedSlaves+0x1a4)[0x557cddd3cbc4]
src/redis-server 127.0.0.1:25111(+0x8721a)[0x557cddced21a]
src/redis-server 127.0.0.1:25111(call+0x47a)[0x557cddcf38ea]
src/redis-server 127.0.0.1:25111(processCommand+0xbf4)[0x557cddcf4aa4]
src/redis-server 127.0.0.1:25111(processInputBuffer+0xe6)[0x557cddd22216]
src/redis-server 127.0.0.1:25111(readQueryFromClient+0x3a8)[0x557cddd22898]
src/redis-server 127.0.0.1:25111(+0x1b9134)[0x557cdde1f134]
src/redis-server 127.0.0.1:25111(aeMain+0x119)[0x557cddce5349]
src/redis-server 127.0.0.1:25111(main+0x466)[0x557cddcd6716]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f03d5982d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f03d5982e40]
src/redis-server 127.0.0.1:25111(_start+0x25)[0x557cddcd7025]
```

The main reason is that when FORTIFY_SOURCE is enabled, GCC or clang will harden some common functions, such as `strcpy`, `memcpy`, `fgets`, etc., so that they can detect buffer overflow errors and stop program execution, thus improving the safety of the program.

We use `zmalloc_usable_size()` everywhere to use memory blocks, but that is an abuse, since malloc_usable_size() isn't meant for this kind of use; it is for diagnostics only. That is also why the behavior is flaky when built with _FORTIFY_SOURCE: the compiler can sense that we reach outside the allocated block and SIGABRT.

### Solution

If we need to use the additional memory we got, we need to use a dummy realloc with the `alloc_size` attribute and no inlining (see `extend_to_usable`, sketched below) to let the compiler see the size of the memory we intend to use. This can either be an implicit call inside `z*usable` that returns the size, so that the caller doesn't have anything else to worry about, or it can be a normal zmalloc call, which means that if the caller wants to use zmalloc_usable_size it must also use extend_to_usable.

### Changes

This PR does the following:

1) Rename the current z[try]malloc_usable family to z[try]malloc_internal and don't expose them to users outside zmalloc.c.
2) Expose a new set of `z[*]_usable` family functions that use z[*]_internal and `extend_to_usable()` implicitly; the caller gets the size of the allocation and it is safe to use.
3) Go over all the users of `zmalloc_usable_size` and convert them to use the `z[*]_usable` family if possible.
4) In the places where the caller can't use `z[*]_usable` and store the real size, and must still rely on zmalloc_usable_size, we still make sure that the allocation used `z[*]_usable` (which has a call to `extend_to_usable()`) and ignore the returned size; this way a later call to `zmalloc_usable_size` is still safe.

[4] was done for module.c and listpack.c; all the other places (sds, reply proto list, replication backlog, client->buf) use [3].

Co-authored-by: Oran Agra <oran@redislabs.com>
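A minimal sketch of the pattern the Solution describes, assuming a glibc-style `malloc_usable_size()`; the names follow the commit's description, but this is not the actual zmalloc.c code:

```c
#include <stdlib.h>
#include <malloc.h> /* malloc_usable_size(), glibc */

/* A no-op "realloc": it returns its argument unchanged, but the
 * alloc_size attribute tells the compiler the caller may use `size`
 * bytes, and noinline stops the compiler from seeing through the call
 * and re-applying the original, smaller bound. */
__attribute__((alloc_size(2), noinline))
void *extend_to_usable(void *ptr, size_t size) {
    (void)size;
    return ptr;
}

/* Allocate `size` bytes, report the real usable size, and widen the
 * pointer's compiler-visible bound so using the extra bytes is safe
 * under -D_FORTIFY_SOURCE=3. */
void *zmalloc_usable(size_t size, size_t *usable) {
    void *ptr = malloc(size);
    if (!ptr) return NULL;
    *usable = malloc_usable_size(ptr); /* may exceed `size` */
    return extend_to_usable(ptr, *usable);
}
```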
- 20 Jan, 2023 1 commit

Viktor Söderqvist authored

If a dict has only keys, and no use of values, then a key can be stored directly in a dict's hashtable. The key replaces the dictEntry. To distinguish between a key and a dictEntry, we only use this optimization if the key is odd, i.e. if the key has the least significant bit set. This is true for sds strings, since the sds header is always an odd number of bytes.

Dict entries are used as a fallback when there is a hash collision. A special dict entry without a value (only key and next) is used, so we save one word in this case too.

This saves 24 bytes per set element for large sets, and also gains some speed as a side effect (fewer allocations and cache misses). A quick test adding 1M elements to a set using the command below resulted in memory usage of 28.83M, compared to 46.29M on unstable. That's 18 bytes per set element on average.

    eval 'for i=1,1000000,1 do redis.call("sadd", "myset", "x"..i) end' 0

Other changes: Allocations are ensured to have at least 8 bytes alignment on all systems. This affects 32-bit builds compiled without HAVE_MALLOC_SIZE (not jemalloc or glibc), in which Redis stores the size of each allocation; after this change the size is stored in 8 bytes instead of the previous 4 bytes per allocation. This is done so we can reliably use the 3 least significant bits of a pointer to encode stuff.
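An illustrative sketch of the low-bit tagging this relies on; the helper names are hypothetical and the actual dict.c code differs:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct dictEntry {
    void *key;
    struct dictEntry *next;
} dictEntry;

/* Allocator pointers are at least 8-byte aligned, so a real dictEntry*
 * always has its 3 low bits clear. An sds string points just past its
 * header, which has an odd size, so an sds key stored directly in the
 * bucket has bit 0 set. */
static inline int entryIsKey(const void *bucket) {
    return (uintptr_t)bucket & 1;
}

static inline void *bucketGetKey(void *bucket) {
    if (entryIsKey(bucket)) return bucket;  /* key stored inline */
    return ((dictEntry *)bucket)->key;      /* fall back to the entry */
}
```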
- 18 Jul, 2022 1 commit

ranshid authored

Replace use of:

- sprintf --> snprintf
- strcpy/strncpy --> redis_strlcpy
- strcat/strncat --> redis_strlcat

**Why are we making this change?**

Much of the code uses unsafe variants or deprecated buffer handling functions. While most cases probably do not present any issue on the known paths, programming errors and unterminated strings might lead to potential buffer overflows which are not covered by tests.

**As part of this PR we change:**

1. Added an implementation for redis_strlcpy and redis_strlcat based on the strl implementation (a sketch follows after this entry): https://linux.die.net/man/3/strl
2. Changed all occurrences of sprintf to snprintf.
3. Changed occurrences of strcpy/strncpy to redis_strlcpy.
4. Changed occurrences of strcat/strncat to redis_strlcat.
5. Changed the behavior of ll2string/ull2string/ld2string so that they always place a null terminator ('\0') in the first index of the output buffer. This was done to make these functions safer to use in cases where the caller does not check their return value (for example in rdbRemoveTempFile).
6. Added a compiler directive to issue a deprecation error in case a use of sprintf/strcpy/strcat is found during compilation, which will result in an error at compile time. However, keep in mind that since the deprecation attribute is not supported on all compilers, this is expected to fail during push workflows.

**NOTE:** This is only an initial milestone. We might also consider using the *_s implementations provided by the C11 extensions (however, they are not yet widely supported). I would also suggest starting to look at static code analyzers to track unsafe use cases. For example, the LLVM clang checker supports security.insecureAPI.DeprecatedOrUnsafeBufferHandling, which can help locate unsafe function usage: https://clang.llvm.org/docs/analyzer/checkers.html#security-insecureapi-deprecatedorunsafebufferhandling-c The main reason not to onboard it at this stage is that the alternative accepted by clang is to use the C11 extensions, which are not always supported by stdlib.
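A minimal strlcpy-style sketch of what `redis_strlcpy` is described to provide, based on the classic strlcpy semantics; the actual helper may differ in details:

```c
#include <stddef.h>

/* Copy up to dsize-1 bytes from src to dst, always NUL-terminating
 * when dsize > 0. Returns strlen(src); if the return value is >= dsize,
 * the copy was truncated. */
size_t redis_strlcpy(char *dst, const char *src, size_t dsize) {
    const char *s = src;
    size_t n = dsize;

    if (n != 0) {
        while (--n != 0) { /* copy as much as fits, minus the NUL */
            if ((*dst++ = *s++) == '\0')
                return (size_t)(s - src - 1);
        }
    }
    /* Not enough room: terminate dst and measure the rest of src. */
    if (dsize != 0) *dst = '\0';
    while (*s++) ;
    return (size_t)(s - src - 1);
}
```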
- 13 Jul, 2022 1 commit

Oran Agra authored

The corrupt dump fuzzer uncovered a valgrind warning saying:

```
==76370== Argument 'size' of function malloc has a fishy (possibly negative) value: -3744781444216323815
```

This allocation would have failed (returning NULL) and been handled properly by redis (even before this change), but we also want to silence the valgrind warnings (which check that casting to ssize_t produces a non-negative value).

The solution I opted for is to explicitly fail these allocations (returning NULL) before even reaching `malloc` (which would have failed and returned NULL too). The implication is that we will not be able to support a single allocation of more than 2GB on a 32-bit system (which I don't think is a realistic scenario); i.e. I do think we could be facing cases where redis consumes more than 2GB on a 32-bit system, but not in a single allocation.

The byproduct of this is that I dropped the overflow assertions, since these will now lead to the same OOM panic we have for failed allocations.
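A sketch of the kind of guard described, assuming the check happens in the zmalloc wrappers; the function name is hypothetical:

```c
#include <stdlib.h>
#include <stdint.h>

/* Reject sizes that would look negative when cast to ssize_t
 * (i.e. >= 2GB on 32-bit, >= 2^63 on 64-bit) before ever calling
 * malloc; this both fails fast and silences the valgrind warning. */
static void *ztrymalloc_guarded(size_t size) {
    if (size >= SIZE_MAX / 2) return NULL;
    return malloc(size);
}
```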
- 22 May, 2022 1 commit

yoav-steinberg authored

This PR does 2 main things:

1) Add a warning for a suspected slow system clocksource setting. This is Linux specific.
2) Add a `--check-system` argument to redis which runs all system checks and prints a report.

## System checks

Add a command line option `--check-system` which runs all known system checks and provides a report to stdout of which system checks have failed, with details on how to reconfigure the system for optimized redis performance. The `--check-system` mode exits with an appropriate error code after running all the checks.

## Slow clocksource details

We check the system's clocksource performance by running `clock_gettime()` in a loop and then checking how much time was spent in a system call (via `getrusage()`). If we spend more than 10% of the time in the kernel then we print a warning. I verified that using the slow clock sources: `acpi_pm` (~90% in the kernel on my laptop) and `xen` (~30% in the kernel on ...
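A rough sketch of the measurement described above, assuming the stated 10% threshold; this is illustrative, not the exact syscheck code:

```c
#include <time.h>
#include <sys/resource.h>

/* Spin on clock_gettime() and compare system CPU time (getrusage)
 * against wall time; a fast vDSO clocksource should spend almost
 * nothing in the kernel. */
int clocksource_looks_slow(void) {
    struct timespec ts, start, end;
    struct rusage ru_start, ru_end;

    getrusage(RUSAGE_SELF, &ru_start);
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < 1000000; i++)
        clock_gettime(CLOCK_MONOTONIC, &ts);
    clock_gettime(CLOCK_MONOTONIC, &end);
    getrusage(RUSAGE_SELF, &ru_end);

    double wall = (end.tv_sec - start.tv_sec) +
                  (end.tv_nsec - start.tv_nsec) / 1e9;
    double sys  = (ru_end.ru_stime.tv_sec - ru_start.ru_stime.tv_sec) +
                  (ru_end.ru_stime.tv_usec - ru_start.ru_stime.tv_usec) / 1e6;
    return sys > wall * 0.10; /* more than 10% of the time in the kernel */
}
```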
- 08 May, 2022 1 commit

David CARLIER authored

Also fixing an "already defined constants" build warning while at it.

Co-authored-by: Oran Agra <oran@redislabs.com>
- 19 Jan, 2022 1 commit

David CARLIER authored

Add support for getting the RSS in OpenBSD.
- 16 Jan, 2022 1 commit

David CARLIER authored

Seems like the previous implementation was broken (always returning 0): since kinfo_proc2 is used, the KERN_PROC2 sysctl oid is more appropriate, and the query's length was not necessarily accurate (it is 6 here).
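A sketch of the NetBSD `KERN_PROC2` query described; the helper is hypothetical, with field and constant names taken from the NetBSD headers:

```c
#include <sys/param.h>
#include <sys/sysctl.h>
#include <unistd.h>

/* Ask the kernel for our own process record via KERN_PROC2 and derive
 * the RSS from the resident page count. */
long long get_rss_netbsd(void) {
    struct kinfo_proc2 info;
    size_t infolen = sizeof(info);
    int mib[6] = {
        CTL_KERN, KERN_PROC2, KERN_PROC_PID, getpid(),
        sizeof(struct kinfo_proc2), 1
    };
    /* The name length must be 6 to match the KERN_PROC2 oid layout. */
    if (sysctl(mib, 6, &info, &infolen, NULL, 0) == -1) return 0;
    return (long long)info.p_vm_rssize * sysconf(_SC_PAGESIZE);
}
```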
- 05 Jan, 2022 1 commit

filipe oliveira authored

Added INFO LATENCYSTATS section: latency by percentile distribution / latency by cumulative distribution of latencies (#9462)

# Short description

The Redis extended latency stats track per-command latencies and enable:

- exporting the per-command percentile distribution via the `INFO LATENCYSTATS` command. **(The percentile distribution is not mergeable between cluster nodes.)**
- exporting the per-command cumulative latency distributions via the `LATENCY HISTOGRAM` command. Using the cumulative distribution of latencies we can merge stats from different cluster nodes to calculate aggregate metrics.

By default, the extended latency monitoring is enabled, since the overhead of keeping track of the command latency is very small. If you don't want to track extended latency metrics, you can easily disable it at runtime using the command:

- `CONFIG SET latency-tracking no`

By default, the exported latency percentiles are the p50, p99, and p999. You can alter them at runtime using the command:

- `CONFIG SET latency-tracking-info-percentiles "0.0 50.0 100.0"`

## Some details:

- The total size per histogram should sit around 40 KiB. We only allocate those 40 KiB when a command is called for the first time.
- With regard to the WRITE overhead: as seen below, there is no measurable overhead on the achievable ops/sec or the full latency spectrum on the client. Also included is the measured redis-benchmark for unstable vs this branch.
- We track from 1 nanosecond to 1 second (everything above 1 second is considered +Inf).

## `INFO LATENCYSTATS` exposition format

- Format: `latency_percentiles_usec_<CMDNAME>:p0=XX,p50....`

## `LATENCY HISTOGRAM [command ...]` exposition format

Return a cumulative distribution of latencies in the format of a histogram for the specified command names. The histogram is composed of a map of time buckets (a toy sketch of the bucketing follows below):

- Each represents a latency range, between 1 nanosecond and roughly 1 second.
- Each bucket covers twice the previous bucket's range.
- Empty buckets are not printed.
- Everything above 1 sec is considered +Inf.
- At most there will be log2(1000000000) ≈ 30 buckets.

We reply with a map for each command in the format: `<command name>: { calls: <total command calls>, histogram: { <bucket 1>: latency, <bucket 2>: latency, ... } }`

Co-authored-by: Oran Agra <oran@redislabs.com>
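For intuition, a toy sketch of the doubling-bucket mapping described above; Redis's implementation uses a histogram library, so this is purely illustrative:

```c
#include <stdint.h>

/* Map a latency in nanoseconds to a doubling bucket index:
 * bucket 0 covers [1,2) ns, bucket 1 covers [2,4) ns, and so on;
 * anything >= 1 second lands in the +Inf bucket. */
#define INF_BUCKET 30

static int latency_bucket(uint64_t ns) {
    if (ns >= 1000000000ULL) return INF_BUCKET;
    int idx = 0;
    while (ns >>= 1) idx++; /* floor(log2(ns)) */
    return idx;
}
```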
- 16 Nov, 2021 1 commit

sundb authored

This is a preparation step in order to add a new test in quicklist.c; see #9776.
- 10 Aug, 2021 1 commit

DarrenJiang13 authored

We only use MADV_DONTNEED on Linux; that's where it was tested.
- 04 Aug, 2021 1 commit

Wang Yuan authored

## Background

As we know, after `fork`, one process will copy pages when writing data to those pages (CoW), while the other process still keeps the old pages, so together they cost more memory. For redis, we suffered from redis consuming a lot of memory while the fork child is serializing key/values, which may even cause OOM.

But actually we found that in the redis fork child process, the child doesn't need to keep some of the memory, while the parent process may write or update it. For example, the child process will never access a key-value pair after it has been serialized, but users may update it in the parent process. So we think it may reduce CoW if the child process releases the memory that it no longer needs.

## Implementation

For releasing key-values in the child process, we might think of calling `decrRefCount` to free the memory, but we found the fork child process still used much memory even when we didn't write any data to redis, and it cost much more time, slowing down bgsave. Maybe this is because the memory allocator doesn't really release memory to the OS, and it may modify some of its internal data for the free operation, especially when freeing small objects.

Moreover, CoW is based on pages, so an easy approach is to only free memory chunks that are not smaller than the kernel page size. `madvise(MADV_DONTNEED)` can quickly release the pages of a specified region to the OS, bypassing the memory allocator; the allocator still considers the memory in use and doesn't change its internal data. (A release sketch follows at the end of this entry.)

There are some buffers we can release in the fork child process:

- **Serialized key-values**: the fork child process never accesses serialized key-values, so we try to free them. Because we can only release big memory chunks, and it is time-consuming to iterate all items/members/fields/entries of complex data types, we decided to iterate them and try to release them only when the average size of an item/member/field/entry is larger than the page size of the OS.
- **Replication backlog**: because the replication backlog is a cyclic buffer, it changes quickly if redis has heavy write traffic, but the fork child process doesn't need to access it.
- **Client buffers**: if clients issue requests while the fork child process exists, the clients' buffers also change frequently. This memory includes the client query buffer, output buffer, and the memory used by the client struct.

To get the child process's peak private dirty memory, we need to track peak memory instead of the last used memory, because the child process may keep releasing memory (since CoW used to only grow until now, the last value was equivalent to the peak). We're also adding a new `current_cow_peak` info variable (to complement the existing `current_cow_size`).

Co-authored-by: Oran Agra <oran@redislabs.com>
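A minimal sketch of the page-aligned `madvise(MADV_DONTNEED)` release described above; Linux-specific, and the helper name is hypothetical:

```c
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* In the fork child, hand whole pages inside [ptr, ptr+len) back to
 * the OS without telling the allocator. Only full pages can be
 * released, so round the start up and the end down to page bounds. */
void release_pages_in_child(void *ptr, size_t len) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    uintptr_t start = ((uintptr_t)ptr + page - 1) & ~(page - 1);
    uintptr_t end   = ((uintptr_t)ptr + len) & ~(page - 1);
    if (end > start)
        madvise((void *)start, end - start, MADV_DONTNEED);
}
```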
- 22 Mar, 2021 1 commit

Yossi Gottlieb authored

Reading CoW from /proc/<pid>/smaps can be slow with large processes on some platforms. This measures the time it takes to read CoW info and limits the duty cycle of future updates to roughly 1/100. As current_cow_size no longer represents a current, fixed-interval value, there is also a new current_cow_size_age field that provides information about the age of the size value, in seconds.
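A sketch of the 1/100 duty cycle, with hypothetical names; the actual logic lives in the child-info/CoW tracking code:

```c
/* Limit how often we read CoW info: if one read takes T microseconds,
 * don't read again for 100*T, so reads consume at most ~1% of time. */
static long long next_cow_read_us = 0;

static int should_read_cow(long long now_us) {
    return now_us >= next_cow_read_us;
}

static void after_cow_read(long long now_us, long long read_cost_us) {
    next_cow_read_us = now_us + read_cost_us * 100; /* duty cycle 1/100 */
}
```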
- 10 Mar, 2021 1 commit

sundb authored

1. Add `redis-server test all` support to run all tests.
2. Add redis test to daily ci.
3. Add `--accurate` option to run slow tests for more iterations (so that by default we run fewer cycles: shorter time, and fewer prints).
4. Move dict benchmark to REDIS_TEST.
5. Fix some leaks in tests.
6. Make quicklist tests run on a specific fill set of options rather than huge ranges.
7. Move some prints in quicklist test outside their loops to reduce prints.
8. Remove sds.h from dict.c since it is now used in both redis-server and redis-cli (which uses hiredis sds).
- 09 Mar, 2021 1 commit

Yossi Gottlieb authored

The obtained process_rss was incorrect (the OS reports pages, not bytes), resulting in many other fields getting corrupted. This has been tested on FreeBSD but not on other platforms.
- 25 Feb, 2021 1 commit

Yossi Gottlieb authored

* Add better control of malloc_usable_size() usage.
* Use malloc_usable_size on the alpine libc daily job.
* Add no-malloc-usable-size daily jobs.
* Fix zmalloc(0) when HAVE_MALLOC_SIZE is undefined. In order to align with the jemalloc behavior, this should never return NULL or OOM panic (sketched below).
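A hypothetical sketch of the last bullet's zmalloc(0) behavior, assuming the PREFIX_SIZE size-header layout used when HAVE_MALLOC_SIZE is undefined; this is not the actual zmalloc.c code:

```c
#include <stdlib.h>

#define PREFIX_SIZE sizeof(size_t) /* size header kept before the block */

/* Like jemalloc, zmalloc(0) must return a valid, freeable pointer.
 * Allocating size+PREFIX_SIZE means malloc() is never called with 0,
 * so a zero-length request cannot yield NULL except on real OOM. */
void *zmalloc_sketch(size_t size) {
    void *ptr = malloc(size + PREFIX_SIZE);
    if (!ptr) abort();                /* stand-in for the OOM panic */
    *(size_t *)ptr = size;            /* record the requested size */
    return (char *)ptr + PREFIX_SIZE; /* caller sees the payload */
}
```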
- 23 Feb, 2021 1 commit

Yossi Gottlieb authored

Also adds a new daily CI test, relying on the fact that we don't use malloc_size() on alpine libmusl. Fixes #8531.
- 22 Feb, 2021 1 commit

Yossi Gottlieb authored

On 32-bit systems, setting the proto-max-bulk-len config parameter to a high value may result in an integer overflow and a subsequent heap overflow when parsing an input bulk (CVE-2021-21309).

This fix has two parts:

- Set a reasonable limit on the config parameter.
- Add additional checks to prevent the problem in other potential but unknown code paths (see the sketch below).
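A sketch of the kind of overflow check the second part implies; the helper is hypothetical, not the actual parsing code:

```c
#include <stdint.h>
#include <stddef.h>

/* On 32-bit builds, a bulk length that fits in long long may still
 * overflow size_t arithmetic; reject it before computing buffer sizes. */
static int bulk_len_is_safe(long long len, size_t extra) {
    if (len < 0) return 0;
    if ((unsigned long long)len > SIZE_MAX - extra) return 0; /* would wrap */
    return 1;
}
```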
- 08 Jan, 2021 1 commit

Oran Agra authored

- The last COW report wasn't always read from the pipe (receiveLastChildInfo wasn't used); but in fact there's no reason not to always try to drain that pipe, so I'm unifying receiveLastChildInfo with receiveChildInfo.
- Adjust the threshold of the COW test when run in accurate mode.
- Add some prints in case this test fails again.
- Fix indentation, page size, and PID(!) in MacOS proc info.

P.S. it seems that pri_pages_dirtied is always 0.
- 13 Dec, 2020 1 commit

Yossi Gottlieb authored

* Allow runtest-moduleapi to use a different 'make', for systems where GNU Make is 'gmake'.
* Fix issue with builds on Solaris re-building everything from scratch due to CFLAGS/LDFLAGS not being stored.
* Fix compile failure on Solaris due to atomicvar and a bunch of warnings.
* Fix garbled log timestamps on Solaris.
- 06 Dec, 2020 2 commits

David CARLIER authored

Oran Agra authored

When RDB input attempts to make a huge memory allocation that fails, RESTORE should fail gracefully rather than die with a panic.
- 08 Nov, 2020 1 commit

David CARLIER authored
- 26 Oct, 2020 1 commit

Yossi Gottlieb authored

When using a system with no malloc_usable_size(), zmalloc_size() assumed that the heap allocator always returns blocks that are long-padded. This may not always be the case, and will result in zmalloc_size() returning a size that is bigger than what was allocated. In at least one case this leads to an out-of-bounds write, a process crash, and a potential security vulnerability. Effectively this does not affect the vast majority of users, who use jemalloc or glibc.

This problem, along with a (different) fix, was reported by Drew DeVault.
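An illustrative reconstruction of the unsafe assumption being removed, for the HAVE_MALLOC_SIZE-undefined case (simplified; not the literal zmalloc.c code):

```c
#include <stddef.h>

/* Old, unsafe assumption: the allocator pads every block to a multiple
 * of sizeof(long), so the stored size can be rounded up. If the
 * allocator does NOT pad, the rounded-up size overstates the block and
 * callers may write out of bounds. */
static size_t zmalloc_size_old(size_t stored_size) {
    size_t size = stored_size;
    if (size & (sizeof(long) - 1))
        size += sizeof(long) - (size & (sizeof(long) - 1));
    return size; /* the fix is to stop rounding and report stored_size */
}
```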
- 02 Oct, 2020 1 commit

Oran Agra authored

This commit has two aspects:

1) Improve memory reporting for all the places that use sdsAllocSize to compute memory used by a string; it will now include the internal fragmentation.
2) Reduce the need for realloc calls by making the sds implicitly take over the internal fragmentation of the block it allocated.
- 29 Sep, 2020 1 commit

David CARLIER authored
- 17 Sep, 2020 1 commit

Wang Yuan authored

Redis 6.0 introduced I/O threads; they are cool and efficient, and we use C11 _Atomic to establish inter-thread synchronization without a mutex. But only a compiler that supports C11 _Atomic can compile the redis code, which brings a lot of inconvenience, since some common platforms such as CentOS 7 don't support it by default. So we want to implement a redis atomic type to make the code more portable.

We had already implemented an atomic variable for redis that only has 'relaxed' operations in src/atomicvar.h; we now implement some operations with 'sequentially-consistent' ordering, matching the default behavior of C11 _Atomic, which can establish inter-thread synchronization. And we replace all uses of C11 _Atomic with the redis atomic variable.

Our implementation of the redis atomic variable uses C11 _Atomic, __atomic or __sync macros if available (sketched below); it supports most common platforms, and we automatically detect which feature to use. In the Makefile we use a dummy file to detect whether the compiler supports C11 _Atomic. Now for gcc, we can theoretically compile the redis code if your gcc version is not less than 4.1.2 (which started to support the __sync_xxx operations). Otherwise, we removed the mutex fallback previously used to implement the redis atomic variable, for performance and testing reasons; you will get compile errors if your compiler doesn't support any of the features above.

To cover the redis atomic variable with tests, we add CI jobs that build redis on CentOS 6 and CentOS 7, and daily workflow jobs that run the tests on them. For these, we just install gcc by default in order to cover different compiler versions: gcc is 4.4.7 by default on CentOS 6 and 4.8.5 on CentOS 7.

We restore the feature that we can test redis with Helgrind to find data race errors. But you need to install Valgrind in the default path configuration before running your tests, since we use macros in helgrind.h to tell Helgrind about inter-thread happens-before relationships explicitly, to avoid false positives. Please open an issue on github if you find data race errors related to this commit.

Unrelated:

- Fix redefinition of typedef 'RedisModuleUserChangedFunc'. Some older compilers report errors or warnings if a function type is re-defined.
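A condensed sketch of the feature-detection cascade described above, modeled on what atomicvar.h does; it is simplified to a single operation, and `HAVE_ATOMIC` stands for the flag the Makefile's dummy-file check would set:

```c
/* Pick the strongest atomic primitive the compiler offers. */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L && \
    !defined(__STDC_NO_ATOMICS__)
#include <stdatomic.h>
#define redisAtomic _Atomic
#define atomicIncr(var, count) \
    atomic_fetch_add_explicit(&(var), (count), memory_order_relaxed)
#elif defined(__ATOMIC_RELAXED)
#define redisAtomic
#define atomicIncr(var, count) \
    __atomic_fetch_add(&(var), (count), __ATOMIC_RELAXED)
#elif defined(HAVE_ATOMIC) /* legacy gcc >= 4.1.2 __sync builtins */
#define redisAtomic
#define atomicIncr(var, count) __sync_add_and_fetch(&(var), (count))
#else
#error "Unable to determine atomic operations for your platform"
#endif
```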
- 31 Jul, 2020 1 commit
- 02 Dec, 2019 1 commit

antirez authored
- 02 Oct, 2019 1 commit

Oran Agra authored

It seems that since I added the creation of the jemalloc thread, redis sometimes fails to start with the following error:

    Inconsistency detected by ld.so: dl-tls.c: 493: _dl_allocate_tls_init: Assertion `listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed!

This seems to be due to a race bug in ld.so, in which TLS creation on the thread collides with dlopen. Move the creation of the BIO and jemalloc threads to after modules are loaded.

Plus a small bugfix when trying to disable the jemalloc thread at runtime.
- 20 Sep, 2019 1 commit

David Carlier authored
- 15 Sep, 2019 1 commit

David Carlier authored
- 28 Jul, 2019 1 commit

David Carlier authored
- 02 Jun, 2019 1 commit

Oran Agra authored

jemalloc 5 doesn't immediately release memory back to the OS; instead there's a decaying mechanism, which doesn't work when there's no traffic (no allocations). This is most evident if there's no traffic after flushdb: the RSS will remain high. So (both sketched below):

1) Enable jemalloc background purging.
2) Explicitly purge in flushdb.
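A sketch of the two jemalloc knobs involved, via jemalloc's `mallctl()` interface; this assumes a stock jemalloc 5 build (Redis builds jemalloc with a `je_` symbol prefix) and simplifies error handling:

```c
#include <stdbool.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* 1) Turn on jemalloc's background purge thread, so decay happens even
 *    when the application makes no allocations. */
static int enable_background_purging(void) {
    bool enabled = true;
    return mallctl("background_thread", NULL, NULL,
                   &enabled, sizeof(enabled));
}

/* 2) Force an immediate purge of all arenas (e.g. right after flushdb). */
static int purge_all_arenas(void) {
    char cmd[32];
    snprintf(cmd, sizeof(cmd), "arena.%d.purge", MALLCTL_ARENAS_ALL);
    return mallctl(cmd, NULL, NULL, NULL, 0);
}
```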
- 21 Mar, 2019 1 commit

antirez authored
- 30 Sep, 2018 1 commit

Bruce Merry authored

When HAVE_MALLOC_SIZE is false, each call to zrealloc causes used_memory to increase by PREFIX_SIZE more than it should, due to mismatched accounting between the original zmalloc (which includes PREFIX_SIZE in its increment) and zrealloc (which omits it from its decrement).

I've also supplied a command-line test to easily demonstrate the problem. It's not wired into the test framework, because I don't know TCL, so I'm not sure how to automate it.
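An illustrative model of the accounting mismatch and its fix, assuming the PREFIX_SIZE header layout used when HAVE_MALLOC_SIZE is false (NULL checks elided; not the literal zmalloc.c code):

```c
#include <stdlib.h>

#define PREFIX_SIZE sizeof(size_t)
static size_t used_memory = 0;

/* zmalloc accounts for the size header... */
void *zmalloc_sketch(size_t size) {
    void *ptr = malloc(size + PREFIX_SIZE);
    *(size_t *)ptr = size;
    used_memory += size + PREFIX_SIZE;    /* header counted on alloc */
    return (char *)ptr + PREFIX_SIZE;
}

/* ...so zrealloc must subtract the header-inclusive size too,
 * otherwise used_memory drifts up by PREFIX_SIZE on every call. */
void *zrealloc_sketch(void *ptr, size_t size) {
    void *realptr = (char *)ptr - PREFIX_SIZE;
    size_t oldsize = *(size_t *)realptr;
    void *newptr = realloc(realptr, size + PREFIX_SIZE);
    *(size_t *)newptr = size;
    used_memory -= oldsize + PREFIX_SIZE; /* the fix: include PREFIX_SIZE */
    used_memory += size + PREFIX_SIZE;
    return (char *)newptr + PREFIX_SIZE;
}
```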
- 22 Jul, 2018 1 commit

Oran Agra authored
- 16 Jul, 2018 1 commit

Oran Agra authored

A) Slave buffers didn't count internal fragmentation and sds unused space; this caused them to induce eviction although we didn't mean for it.

B) Slave buffers were consuming about twice the memory they actually needed. This was mainly due to:
- sdsMakeRoomFor growing to twice as much as needed each time, while networking.c doesn't store more than 16k (partially fixed recently in 237a38737);
- besides, it wasn't able to store half of the new string in one buffer and the other half in the next (so the above-mentioned fix helped mainly for small items);
- lastly, the sds buffers had up to 30% internal fragmentation that was wasted: consumed but not used.

C) Inefficient performance due to starting from a small string and reallocing many times.

What I changed (see the sketch after this entry):
- Create dedicated buffers for the reply list, counting their size with zmalloc_size.
- When creating a new reply node, preallocate it to at least 16k.
- When appending a new reply to the buffer, first fill all the unused space of the previous node before starting a new one.

Other changes:
- Expose the mem_not_counted_for_evict info field for the benefit of the test suite.
- Add a test to make sure slave buffers are counted correctly and that they don't cause eviction.
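An illustrative sketch of the dedicated reply buffers and the fill-before-append rule described above; the names are hypothetical, modeled on the description rather than the actual networking.c code:

```c
#include <string.h>

#define PROTO_REPLY_CHUNK_BYTES (16*1024) /* preallocate nodes to >= 16k */

typedef struct replyBlock {
    size_t size, used; /* allocated vs written bytes */
    char buf[];        /* counted with zmalloc_size in the real code */
} replyBlock;

/* Append data to the tail block first; only whatever doesn't fit goes
 * into a freshly allocated node, so unused tail space is never wasted. */
size_t fill_tail_block(replyBlock *tail, const char *s, size_t len) {
    size_t avail = tail->size - tail->used;
    size_t copy = len < avail ? len : avail;
    memcpy(tail->buf + tail->used, s, copy);
    tail->used += copy;
    return len - copy; /* remainder for the next (new) node */
}
```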
- 03 Jul, 2018 1 commit

Jack Drogon authored
- 14 Jun, 2018 1 commit

Fuxin Hao authored