- 06 Dec, 2020 9 commits
-
-
Oran Agra authored
When RDB input attempts to make a huge memory allocation that fails, RESTORE should fail gracefully rather than die with a panic.
-
Oran Agra authored
If RESTORE passes successfully with full sanitization, we can't afford to crash later on an assertion due to duplicate records in a hash when converting it from ziplist to dict. This means that when doing full sanitization, we must make sure there are no duplicate records in any of the collections.
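As a minimal illustration of that rule (not the actual rdb.c validation; it assumes the hash fields were already decoded into an array of C strings), deep sanitization can simply refuse a payload whose fields repeat:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Sketch: reject a hash payload whose field names repeat. The real deep
 * sanitization walks the ziplist entries; here the fields are assumed to
 * have been extracted into an array already. */
static bool fields_are_unique(const char **fields, size_t n) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (strcmp(fields[i], fields[j]) == 0)
                return false; /* duplicate found: RESTORE should be rejected */
    return true;
}
```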
-
Oran Agra authored
When using --baseport to run two test suites in parallel (in different folders), we need to also make sure the port used by the test suite to communicate with its workers is unique. Otherwise the attempt to find a free port connects to the other test suite and messes it up. Maybe one day we should attempt to bind, instead of connect, when trying to find a free port.
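A minimal sketch of the bind-based probe suggested above, in C (the test suite itself is Tcl, so this only illustrates the idea): binding tells us whether the port is genuinely free, whereas connecting can be fooled by another test suite listening on it.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Return 1 if we can bind the TCP port on localhost (i.e. it's free), else 0. */
static int port_is_free(int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == -1) return 0;
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons((unsigned short)port);
    int ok = bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}
```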
-
Oran Agra authored
The test creates keys with various encodings, DUMPs them, corrupts the payload, and RESTOREs it. It utilizes the recently added use-exit-on-panic config to distinguish between asserts and segfaults. If the restore succeeds, it runs random commands on the key to attempt to trigger a crash. It runs in two modes, one with deep sanitization enabled and one without. In the first one we don't expect any assertions or segfaults; in the second one we expect assertions, but no segfaults. We also check for leaks and invalid reads using valgrind, and if we find them we print the commands that led to the issue.

Changes in the code (other than the test):
- Replace a few NPD (null pointer dereference) flows and division by zero with an assertion, so that they don't fail the test (since we set the server to use `exit` rather than `abort` on assertion).
- Fix quite a lot of flows in rdb.c that could have led to memory leaks in the RESTORE command (since it now responds with an error rather than panic).
- Add a DEBUG flag for SET-SKIP-CHECKSUM-VALIDATION so that the test doesn't need to bother with faking a valid checksum.
- Remove a pile of code in serverLogObjectDebugInfo which is actually unsafe to run in the crash report (see comments in the code).
- Fix a missing boundary check in lzf_decompress.

Test suite infra improvements:
- Be able to run valgrind checks before the process terminates.
- Rotate log files when restarting servers.
-
Oran Agra authored
- improve the stream RDB encoding test to include more types of stream metadata
- add a test to cover various ziplist encoding entries (although it does look like the stress test above it is able to find some too)
- add another test for ziplist encoding for hash with full sanitization
- add similar ziplist encoding tests for list
-
Oran Agra authored
When loading an encoded payload we will at least do a shallow validation to check that the size that's encoded in the payload matches the size of the allocation. This lets us later use this encoded size to make sure the various offsets inside the encoded payload don't reach outside the allocation; if they do, we'll assert/panic, but at least we won't segfault or smear memory. We can also do 'deep' validation which runs on all the records of the encoded payload and validates that they don't contain invalid offsets. This lets us detect corruptions early and reject a RESTORE command rather than accepting it and asserting (crashing) later when accessing that payload via some command.

Configuration:
- adding ACL flag skip-sanitize-payload
- adding config sanitize-dump-payload [yes/no/clients]

For now, we don't have a good way to ensure MIGRATE in cluster resharding isn't being slowed down by this sanitization, so I'm setting the default value to `no`, but later on it should be set to `clients` by default.

Changes:
- changing rdbReportError not to `exit` in the RESTORE command
- adding a new stat to be able to later check if cluster MIGRATE isn't being slowed down by sanitization
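A rough sketch of the 'shallow' check (a hypothetical helper and a simplified header layout, not the actual rdb.c code): the size recorded in the payload's header must equal the number of bytes actually allocated before any offset inside it is trusted.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch: a ziplist-like blob starts with a 4-byte field holding the total
 * encoded size (read here in host byte order for simplicity). Shallow
 * validation checks that this recorded size equals the allocation size, so
 * later offset checks can use it as a trusted upper bound. */
static int shallow_validate(const unsigned char *buf, size_t alloc_len) {
    if (alloc_len < sizeof(uint32_t)) return 0;   /* too small to hold the header */
    uint32_t encoded_len;
    memcpy(&encoded_len, buf, sizeof(encoded_len));
    return (size_t)encoded_len == alloc_len;      /* mismatch => reject RESTORE */
}
```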
-
Oran Agra authored
When client tracking is enabled, signalModifiedKey can increase memory usage; this can cause the loop in performEvictions to keep running, since it was measuring the memory usage impact of signalModifiedKey. The section that measures the memory impact of the eviction should cover just dbDelete, excluding keyspace notification, client tracking, and propagation to AOF and replicas. This resolves part of the problem described in #8069. P.S. the fix took 1 minute, the test took about 3 hours to write.
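Roughly, the accounting now looks like the sketch below (the helper names are stand-ins for zmalloc_used_memory, dbDelete and the notification/tracking/propagation calls; this is not the actual performEvictions code):

```c
#include <stddef.h>

/* Stand-ins for the server's memory accounting and deletion paths. */
extern size_t used_memory(void);
extern void delete_key(const char *key);
extern void post_delete_side_effects(const char *key); /* notify, tracking, propagate */

/* Returns the bytes freed by the deletion itself; side effects that may
 * *grow* memory (e.g. client tracking invalidation) run outside the
 * measured window on purpose. */
static size_t evict_one(const char *key) {
    size_t before = used_memory();
    delete_key(key);                       /* measured: only the actual delete */
    size_t after = used_memory();
    size_t freed = before > after ? before - after : 0;
    post_delete_side_effects(key);         /* not measured */
    return freed;
}
```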
-
guybe7 authored
One way this was happening is when a module issued an RM_Call which would inject MULTI. If the module command that does that was itself issued by something else that had already added MULTI (e.g. another module, or a Lua script), it would have caused nested MULTI. In fact the MULTI state in the client or the MULTI_EMITTED flag in the context isn't the right indication of whether we need to propagate MULTI or not, because on nested calls (possibly a module action called by a keyspace event of another module action), these flags aren't retained / reflected. Instead there's now a global propagate_in_transaction flag for that. In addition to that, we now have global in_eval and in_exec flags, to serve the flags of RM_GetContextFlags, since their dependence on the current client is wrong for the same reasons mentioned above.
-
Wang Yuan authored
As we know, Redis may reject users' requests or evict some keys if used memory is over maxmemory. Dictionary expansion may make things worse: some big dictionaries, such as the main db and the expires dict, may eat huge memory at once to allocate a new big hash table and end up far over maxmemory after expanding. Related issues: #4213 #4583

In more detail, when expanding a dict in Redis, we allocate a new big ht[1] that is generally double the size of ht[0], so the size of ht[1] will be very big if ht[0] is already big. For the db dict, if we have more than 64 million keys, ht[1] costs 1GB when the dict expands. If the sum of used memory and the new hash table the dict needs exceeds maxmemory, we shouldn't allow the dict to expand. Because if we enable key eviction, we still couldn't add many more keys after eviction and rehashing; what's worse, Redis will keep fewer keys when it only has a little memory left for storing the new hash table instead of users' data. Moreover, users can't write data to Redis at all if key eviction is disabled.

What this commit changes:
Add a new member function expandAllowed to the dict type; it provides a way for the caller to allow the expansion or not. We expose two parameters to this function: the extra memory needed for expanding and the dict's current load factor; users can implement a function that makes a decision based on them. For the main db dict and expires dict types, these dictionaries may be very big and cost huge memory to expand, so we implement a judgement function: we can provisionally stop the dict from expanding if used memory would be over maxmemory after the dict expands, but to guarantee the performance of Redis, we still allow the dict to expand if its load factor exceeds the safe load factor. Add test cases to verify that we don't allow the main db to expand when the remaining memory is not enough, so as to avoid key eviction.

Other changes:
New hash table size when expanding. Before this commit, the size was double the used size of the dict, later passed to _dictNextPower. Actually we aim to keep the dict load factor between 0.5 and 1.0. Now we replace *2 with +1; since the first check is that used >= size, the outcome will usually be the same as _dictNextPower(used+1). The only case where it'll differ is when dict_can_resize is false during fork, so that later _dictNextPower(used*2) would cause the dict to jump to *4 (i.e. _dictNextPower(1025*2) will return 4096). Fix rehash test cases due to the changed algorithm for the new hash table size when expanding.
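A condensed sketch of the hook described above (the signature follows the description; the memory-accounting helpers and the safe load factor value here are placeholders, not the exact Redis code):

```c
#include <stddef.h>

#define SAFE_LOAD_FACTOR 1.618  /* illustrative threshold */

/* Stand-ins for the server's memory accounting. */
extern size_t used_memory(void);
extern size_t maxmemory(void);   /* 0 means "no limit" */

/* The dict type hook: given the extra bytes the resized hash table would
 * need and the dict's current load factor, decide whether the expansion
 * may go ahead. */
typedef int (*expandAllowedFunc)(size_t moreMem, double usedRatio);

static int dbDictExpandAllowed(size_t moreMem, double usedRatio) {
    if (usedRatio <= SAFE_LOAD_FACTOR) {
        /* Normal case: refuse if the new table would push us over maxmemory. */
        return maxmemory() == 0 || used_memory() + moreMem <= maxmemory();
    }
    /* Load factor already too high: allow the expansion to protect performance. */
    return 1;
}
```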
-
- 03 Dec, 2020 3 commits
-
-
Itamar Haber authored
Adds the ability to use exclusive (open) start and end query intervals in XRANGE and XREVRANGE queries. Fixes #6562
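As a small hedged sketch (a hypothetical helper, not the actual XRANGE parser), an exclusive bound can be recognized by a leading '(' on the start/end argument before the stream ID is parsed:

```c
#include <stdbool.h>

/* Strip an optional '(' prefix marking the bound as exclusive.
 * Returns the string to parse as a stream ID and sets *exclusive. */
static const char *parse_range_bound(const char *arg, bool *exclusive) {
    if (arg[0] == '(') {
        *exclusive = true;
        return arg + 1;
    }
    *exclusive = false;
    return arg;
}
```

For example, a query such as `XRANGE key (1526985054069-0 +` would then return entries strictly after that ID.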
-
Felipe Machado authored
In the iterator for these functions, we traverse the sorted sets in reverse order so that the largest elements come first. We prefer this order because it's optimized for insertion into a skiplist, which is the destination of the elements being iterated in these functions.
- 02 Dec, 2020 4 commits
-
-
Itamar Haber authored
-
Wang Yuan authored
Backup keys to slots map and restore when fail to sync if diskless-load type is swapdb in cluster mode (#8108). When the replica diskless-load type is swapdb in cluster mode, we didn't back up the keys-to-slots map, so we would lose it if the sync failed. Now we back up the keys-to-slots map first, and restore it properly on failure. This commit includes a refactoring/cleanup of the backup mechanism (moving it to db.c and restructuring it a bit). Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Yossi Gottlieb authored
-
luhuachao authored
As described in the redis-benchmark help message, 'The test names are the same as the ones produced as output.' However, in redis-benchmark output we can only see PING_BULK, while `redis-benchmark -t ping_bulk` is not supported; we have to run it with ping_mbulk, which is not user friendly.
-
- 01 Dec, 2020 3 commits
-
-
Madelyn Olson authored
* Fixed SET GET executing on the wrong type Co-authored-by:
Madelyn Olson <madelyneolson@gmail.com>
-
sundb authored
SELECT used to read the index into a `long` variable, and then pass it to a function that takes an `int`, possibly causing an overflow before the range check. Now all these commands use a better and cleaner range check, which also results in a slight change of the error response in case of an invalid database index.
SELECT: in the past it would have returned either `-ERR invalid DB index` (if not a number), or `-ERR DB index is out of range` (if not between 1..16 or alike). Now it'll return either `-ERR value is out of range` (if not a number), or `-ERR value is out of range, value must between -2147483648 and 2147483647` (if not in the range for an int), or `-ERR DB index is out of range` (if not between 0..16 or alike).
MOVE: in the past it would only fail with `-ERR index out of range` no matter the reason. Now it returns the same errors as the new ones for SELECT mentioned above (i.e. unlike SELECT, even for a value like 17 we changed the error message).
COPY: it doesn't really matter how it behaved in the past (it's a new command); the new behavior is like the above two.
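A minimal sketch of that kind of check (a hypothetical helper, not the actual Redis code): parse into a wider integer first, verify it fits an int, and only then apply the DB range check, so no overflow can happen before the range test:

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Returns 0 on success and stores the index, -1 on any parse/range error. */
static int parse_db_index(const char *s, int num_dbs, int *out) {
    errno = 0;
    char *end;
    long long v = strtoll(s, &end, 10);
    if (errno || end == s || *end != '\0') return -1;   /* not a number */
    if (v < INT_MIN || v > INT_MAX) return -1;          /* doesn't fit an int */
    if (v < 0 || v >= num_dbs) return -1;               /* DB index out of range */
    *out = (int)v;
    return 0;
}
```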
-
Itamar Haber authored
Fixes #7923. This PR appropriates the special `&` symbol (because `@` and `*` are taken), followed by a literal value or pattern, for describing the Pub/Sub patterns that an ACL user can interact with. It is similar to the existing key patterns mechanism in function (additive) and implementation (copy-pasta). It also adds the allchannels and resetchannels ACL keywords, naturally.

The default user is given allchannels permissions, whereas new users get whatever is defined by the acl-pubsub-default configuration directive. For backward compatibility in 6.2, the default of this directive is allchannels, but this is likely to be changed to resetchannels in the next major version for stronger default security settings.

Unless allchannels is set for the user, channel access permissions are checked as follows:
* Calls to both PUBLISH and SUBSCRIBE will fail unless a pattern matching the channel name(s) given as arguments exists for the user.
* Calls to PSUBSCRIBE will fail unless the pattern(s) provided as an argument literally exist(s) in the user's list.

Such failures are logged to the ACL log. Runtime changes to channel permissions for a user with existing subscribing clients cause said clients to disconnect unless the new permissions permit the connections to continue. Note, however, that PSUBSCRIBErs' patterns are matched literally, so given the change bar:* -> b*, pattern subscribers to bar:* will be disconnected.

Notes/questions:
* UNSUBSCRIBE, PUNSUBSCRIBE and PUBSUB remain unprotected due to lack of reasons for touching them.
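A rough sketch of the two checks above (the struct layout is hypothetical, and fnmatch() stands in for Redis's own glob matcher): PUBLISH/SUBSCRIBE match the channel against the user's patterns, while PSUBSCRIBE requires the requested pattern to appear literally in the list.

```c
#include <fnmatch.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical per-user channel permissions. */
typedef struct {
    const char **patterns;  /* e.g. {"news.*", "chat"} */
    size_t count;
    bool allchannels;
} user_channels;

/* PUBLISH / SUBSCRIBE: the channel must match one of the user's patterns. */
static bool can_access_channel(const user_channels *u, const char *channel) {
    if (u->allchannels) return true;
    for (size_t i = 0; i < u->count; i++)
        if (fnmatch(u->patterns[i], channel, 0) == 0) return true;
    return false;
}

/* PSUBSCRIBE: the requested pattern must literally exist in the list. */
static bool can_subscribe_pattern(const user_channels *u, const char *pattern) {
    if (u->allchannels) return true;
    for (size_t i = 0; i < u->count; i++)
        if (strcmp(u->patterns[i], pattern) == 0) return true;
    return false;
}
```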
-
- 30 Nov, 2020 3 commits
-
-
Wang Yuan authored
On FLUSHDB or a full sync, reset the old average TTL stat. This stat is incrementally collected by the master over time when it searches for expired keys.
-
sundb authored
When performing the AND operation, if the output is 0, we can jump out of the loop. When performing an OR operation, if the output is 0xff, we can jump out of the loop.
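A small sketch of the short-circuit (a simplified byte-at-a-time BITOP, not the word-sized fast path Redis actually uses): once the accumulated AND byte is 0, or the accumulated OR byte is 0xff, the remaining inputs can't change that byte of the result.

```c
#include <stddef.h>

/* One output byte of BITOP AND over 'n' input strings at offset 'pos'. */
static unsigned char bitop_and_byte(const unsigned char **srcs, size_t n, size_t pos) {
    unsigned char out = 0xff;
    for (size_t i = 0; i < n; i++) {
        out &= srcs[i][pos];
        if (out == 0) break;      /* ANDing anything else keeps it 0 */
    }
    return out;
}

/* One output byte of BITOP OR over 'n' input strings at offset 'pos'. */
static unsigned char bitop_or_byte(const unsigned char **srcs, size_t n, size_t pos) {
    unsigned char out = 0;
    for (size_t i = 0; i < n; i++) {
        out |= srcs[i][pos];
        if (out == 0xff) break;   /* ORing anything else keeps it 0xff */
    }
    return out;
}
```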
-
Itamar Haber authored
See https://github.com/redis/redis-doc/pull/1443
Also allows nameless commands.
-
- 29 Nov, 2020 1 commit
-
-
guybe7 authored
Used to filter stream pending entries by their idle-time, useful for XCLAIMing entries that have not been processed for some time
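A minimal illustration of the filter (hypothetical struct and field names, not the stream internals): an entry is kept only if the time since its last delivery is at least the requested minimum idle time.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t id_ms;              /* stream entry ID, ms part (illustrative) */
    uint64_t delivery_time_ms;   /* last delivery time in ms */
} pending_entry;

/* Copy into 'out' only the entries idle for at least 'min_idle_ms'. */
static size_t filter_by_idle(const pending_entry *in, size_t n,
                             uint64_t now_ms, uint64_t min_idle_ms,
                             pending_entry *out) {
    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t idle = now_ms - in[i].delivery_time_ms;
        if (idle >= min_idle_ms) out[kept++] = in[i];
    }
    return kept;
}
```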
-
- 26 Nov, 2020 1 commit
-
-
Oran Agra authored
We recently did that for SETRANGE and APPEND.
-
- 25 Nov, 2020 4 commits
-
-
Oran Agra authored
This metric already includes the argv bytes, like what clientsCronTrackClientsMemUsage does, but it was missing the array itself. P.S. for the purpose of tracking expensive clients we don't need to include the size of the client struct and the static reply buffer in it.
-
kukey authored
Merge two aeDeleteFileEvent refs into one
-
Dipankar Achinta authored
-
David CARLIER authored
__ILP32__ is a 32-bit ABI and does not imply x86; this patch resolves that.
-
- 24 Nov, 2020 2 commits
-
-
sundb authored
Avoid multiple conditional judgments. Avoid allocating robj->ptr when we're going to replace it right after.
-
Yossi Gottlieb authored
Seems to have gone unnoticed for a long time, because at least with glibc it will only be triggered if setenv() was called before spt_init, which Redis doesn't do. Fixes #8064.
-
- 23 Nov, 2020 2 commits
-
-
David CARLIER authored
-
Itamar Haber authored
-
- 22 Nov, 2020 5 commits
-
-
xindoo authored
Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Yossi Gottlieb authored
When USE_SYSTEMD=yes is specified, try to use pkg-config to determine libsystemd linker flags. If not found, silently fall back to simply using "-lsystemd". We now use a LIBSYSTEMD_LIBS variable so users can explicitly override it and specify their own library. If USE_SYSTEMD is unspecified the old behavior of auto-enabling it if both pkg-config and libsystemd are available is retained.
-
Wang Yuan authored
If we enable diskless replication, set repl-diskless-sync-delay to 0, and the master has a non-RDB child process such as an AOF rewrite child, the master will try to start a new BGSAVE but fail immediately (before fork) when replicas ask for full synchronization, and it keeps failing to start a new BGSAVE and disconnects the replicas until the non-RDB child process exits. This bug was introduced in #6271 (not yet released in 6.0.x).
-
Oran Agra authored
This is hopefully usually harmless. The server.ready_keys will usually be empty, so the code after releasing the GIL will soon be done. The only case where it'll actually process things is when a module releases a client (or module) blocked on a key, by triggering this NOT from within a command (e.g. from a timer event). This bug was introduced in Redis 6.0.9, see #7903.
-
Oran Agra authored
Fix: when oom-score-adj-values is provided in the config file after oom-score-adj yes, it would take immediate action before the base value was acquired by readOOMScoreAdj, resulting in an error (an out-of-range score due to an uninitialized value). Delay the reaction until the real call is made by main(). Since the values are clamped to -1000..1000, and they're applied as an offset from the value at startup (which may be -1000), we need to allow the offsets to reach +2000 so that a value of +1000 is achievable in case the value at startup was -1000. Also adds an option for absolute values rather than relative ones.
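A small sketch of the relative-mode arithmetic described above (a hypothetical helper, not the actual setOOMScoreAdj code): the configured offset, now allowed to span -2000..2000, is added to the value read at startup and the result is clamped to the kernel's -1000..1000 range.

```c
/* Relative mode: apply a configured offset on top of the value the process
 * had at startup, clamping to the kernel's valid oom_score_adj range. */
static int effective_oom_score_adj(int value_at_startup, int configured_offset) {
    int v = value_at_startup + configured_offset;  /* offset may be -2000..2000 */
    if (v < -1000) v = -1000;
    if (v > 1000) v = 1000;
    return v;  /* e.g. startup -1000 + offset 2000 => 1000 */
}
```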
-
- 20 Nov, 2020 1 commit
-
-
Rosen Penev authored
backtrace support can be disabled at compile time.
-
- 18 Nov, 2020 1 commit
-
-
guybe7 authored
The bug was introduced by #5021, which only attempted to prevent EXISTS on an already expired key from returning 1 on a replica. Before that commit, dbExists was used instead of lookupKeyRead (which had the undesired effect of "touching" the LRU/LFU). Other than that, this commit fixes OBJECT to also come up empty handed on expired keys on a replica, and DEBUG DIGEST-VALUE to behave like DEBUG OBJECT (get the data from the key regardless of its expired state).
-
- 17 Nov, 2020 1 commit
-
-
Meir Shpilraien (Spielrein) authored
Blocking commands should not be used with MULTI, Lua, and RM_Call. This is because the caller, who executes the command in this context, expects a reply. Today, Lua and MULTI have a special (and different) treatment of blocking commands:
Lua - most commands are marked with the no-script flag, which is checked when executing a command from Lua; commands that are not marked (like XREAD) verify that their blocking mode is not used inside Lua (by checking the CLIENT_LUA client flag).
MULTI - a command that is going to block first verifies that the client is not inside MULTI (by checking the CLIENT_MULTI client flag). If the client is inside MULTI, it returns a result matching an empty key with no timeout (for example, BLPOP inside MULTI will act as LPOP).
For modules that perform RM_Call with a blocking command, the returned result type is REDISMODULE_REPLY_UNKNOWN and the caller can not really know what happened.

Disadvantages of the current state:
- No unified approach; Lua, MULTI, and RM_Call each have a different treatment.
- A module can not safely execute a blocking command (and get a reply or an error). Though it is true that modules are not like Lua or MULTI and should be smart enough not to execute blocking commands via RM_Call, sometimes you want to execute a command based on client input (for example if you create a module that provides a new scripting language like JavaScript or Python). While modules (in module commands) can check for REDISMODULE_CTX_FLAGS_LUA or REDISMODULE_CTX_FLAGS_MULTI to know not to block the client, there is no way to check if the command came from another module using RM_Call. So there is no way for a module to know not to block another module's RM_Call execution.

This commit adds a way to unify the treatment of blocking clients by introducing a new CLIENT_DENY_BLOCKING client flag. In Lua, MULTI, and RM_Call the new flag is turned on to signify that the client should not be blocked. A blocking command verifies that the flag is turned off before blocking. If a blocking command sees that the CLIENT_DENY_BLOCKING flag is on, it does not block and returns a result matching an empty key with no timeout (as MULTI does today). The new flag is checked in the following commands:
- List blocking commands: BLPOP, BRPOP, BRPOPLPUSH, BLMOVE
- Zset blocking commands: BZPOPMIN, BZPOPMAX
- Stream blocking commands: XREAD, XREADGROUP
- SUBSCRIBE, PSUBSCRIBE, MONITOR

In addition, the new flag is turned on inside the AOF client; we do not want to block the AOF client to prevent deadlocks and command ordering issues (and there is also an existing assert in the code that verifies it).

To keep backward compatibility on Lua, all the no-script flags on existing commands were kept untouched. In addition, the Lua special treatment of XREAD and XREADGROUP was kept. To keep backward compatibility on MULTI (which today allows SUBSCRIBE and PSUBSCRIBE), we added a special treatment of those commands to allow executing them in MULTI. The only backward compatibility issue that this PR introduces is that MONITOR is now not allowed inside MULTI.

Tests were added to verify blocking commands do not block the client in Lua, MULTI, or RM_Call. Tests were added to verify the module can check for the CLIENT_DENY_BLOCKING flag. Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Itamar Haber <itamar@redislabs.com>
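A minimal sketch of the CLIENT_DENY_BLOCKING check described in the entry above (the struct, flag value, and helpers here are placeholders, not the server.h definitions): when the flag is set, a would-be blocking command falls back to the behavior of its non-blocking counterpart on an empty key.

```c
#include <stdbool.h>

/* Hypothetical client struct and flag; the real definitions live in server.h. */
#define CLIENT_DENY_BLOCKING (1 << 0)

typedef struct {
    int flags;
} client;

/* Unified check a blocking command performs: clients serving MULTI, Lua,
 * RM_Call or the AOF loader have the flag set and must never block. */
static bool may_block(const client *c) {
    return (c->flags & CLIENT_DENY_BLOCKING) == 0;
}

/* Example: BLPOP-like logic. */
static void blpop_like(client *c, bool key_has_elements) {
    if (key_has_elements) {
        /* pop and reply as usual */
    } else if (may_block(c)) {
        /* register the client as blocked on the key, with its timeout */
    } else {
        /* reply as LPOP would on a missing key: a null reply, no blocking */
    }
}
```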
-