- 21 Sep, 2022 24 commits
-
-
sundb authored
This PR mainly deals with two crashes introduced in #9357, and fixes the QUICKLIST-PACKED-THRESHOLD handling in external test mode.

1. Fix a crash caused by deleting an entry from a compressed quicklistNode. When inserting a large element, we need to create a new quicklistNode first and then delete the previous element; if the node holding the deleted element is compressed, this causes a crash. We now add a `dont_compress` flag to quicklistNode; if we want to keep using a quicklistNode after some operation, we can use the flag like this:
```c
node->dont_compress = 1; /* Prevent this node from being compressed */
some_operation(node); /* This operation might try to compress this node */
some_other_operation(node); /* We can use this node without decompressing it */
node->dont_compress = 0; /* Re-enable compression */
quicklistCompressNode(node);
```
Perhaps in the future we could simply prevent the current entry from being compressed during the iterator loop, but that would require more work.

2. Fix a crash caused by wrongly splitting the quicklist. Before #9357, the offset param of _quicklistSplitNode() could never be negative. Now, when offset is negative, the split extents are wrong (a standalone sketch of the corrected range computation follows after this message). For example:
```c
int orig_start = after ? offset + 1 : 0;
int orig_extent = after ? -1 : offset;
int new_start = after ? 0 : offset;
int new_extent = after ? offset + 1 : -1;

/* offset: -2, after: 1, node->count: 2 */
/* current (wrong) ranges: [-1,-1] [0,-1] */
/* correct ranges: [1,-1] [0,1] */
```
Because only `_quicklistInsert()` splits the quicklistNode, and only `quicklistInsertAfter()` and `quicklistInsertBefore()` call _quicklistInsert(), only `quicklistReplaceEntry()` and `listTypeInsert()` can hit this crash. But the iterator of `listTypeInsert()` always runs from head to tail (iter->offset is always positive), so it is not affected. The conclusion is that this crash only occurs when inserting a large element at a negative index into a list, which affects the `LSET` command and the `RM_ListSet` module API.

3. In external test mode, restore the quicklist packed threshold at the end of the test.

4. Show `node->count` in quicklistRepr().

5. Add a new Tcl proc `config_get_set` to support restoring configs in tests. (cherry picked from commit 13d25dd9)
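To make the range arithmetic in point 2 concrete, here is a small standalone sketch (illustrative only, not the upstream patch) showing how normalizing a negative offset against `node->count` produces the correct ranges from the example above:
```c
#include <stdio.h>

/* Minimal stand-in for quicklistNode; only the field this sketch needs. */
typedef struct { int count; } quicklistNode;

/* Illustrative only: normalize a negative offset before computing the split
 * ranges. With offset -2 and count 2, this yields [1,-1] / [0,1] as expected. */
static void split_ranges(quicklistNode *node, int offset, int after) {
    if (offset < 0) offset += node->count; /* -2 + 2 == 0 */
    int orig_start = after ? offset + 1 : 0;
    int orig_extent = after ? -1 : offset;
    int new_start = after ? 0 : offset;
    int new_extent = after ? offset + 1 : -1;
    printf("orig [%d,%d] new [%d,%d]\n",
           orig_start, orig_extent, new_start, new_extent);
}

int main(void) {
    quicklistNode node = { .count = 2 };
    split_ranges(&node, -2, 1); /* prints: orig [1,-1] new [0,1] */
    return 0;
}
```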
-
zhaozhao.zz authored
This bug was introduced in #7653 (Redis 6.2.0). When `server.maxmemory_eviction_tenacity` is 100, `eviction_time_limit_us` is `ULONG_MAX`, and if we cannot find a best key to delete (e.g. the maxmemory-policy is `volatile-lru` and all keys with a TTL have already been evicted), Redis will sleep forever in `cant_free` while items are still being freed by the lazyfree thread. (cherry picked from commit 464aa041)
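For context, a rough standalone sketch of the tenacity-to-time-limit mapping described above (the exact curve in the server may differ; the point is that tenacity 100 maps to `ULONG_MAX`, i.e. no bound on the wait):
```c
#include <limits.h>
#include <math.h>
#include <stdio.h>

/* Illustrative mapping only: low tenacity gives a small eviction time budget,
 * and tenacity 100 removes the bound entirely, which is what allowed the
 * "cant_free" wait to spin forever while the lazyfree thread was busy. */
static unsigned long eviction_time_limit_us(int tenacity) {
    if (tenacity < 100)
        return (unsigned long)(500.0 * pow(1.15, tenacity - 10.0));
    return ULONG_MAX; /* no limit */
}

int main(void) {
    printf("tenacity 10  -> %lu us\n", eviction_time_limit_us(10));
    printf("tenacity 100 -> %lu us (ULONG_MAX)\n", eviction_time_limit_us(100));
    return 0;
}
```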
-
Viktor Söderqvist authored
(cherry picked from commit 42e4241e)
-
Madelyn Olson authored
(cherry picked from commit 6c03786b)
-
chendianqiang authored
EVAL scripts are by default not considered `write` commands, so they were allowed on a replica. But when a shebang is added, they become `write` commands (unless the `no-writes` flag is added). With this change we handle them as write commands and reply with MOVED instead of READONLY when they are executed on a Redis cluster replica. Co-authored-by:
chendianqiang <chendianqiang@meituan.com> (cherry picked from commit e42d98ed)
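A toy sketch of the classification described above (entirely illustrative; names are not the server's): a shebang script counts as a write unless it declares `no-writes`, and a write script on a cluster replica is answered with MOVED rather than READONLY.
```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: shebang scripts are writes unless flagged no-writes. */
static bool script_is_write(bool has_shebang, bool no_writes_flag) {
    if (!has_shebang) return false; /* legacy EVAL keeps the old behavior */
    return !no_writes_flag;
}

int main(void) {
    /* On a cluster replica: 1 -> reply MOVED, 0 -> script may run locally. */
    printf("%d\n", script_is_write(true, false));
    printf("%d\n", script_is_write(true, true));
    return 0;
}
```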
-
Oran Agra authored
Redis 7.0 has #9890, which added an assertion that fires when the propagation queue was not flushed by the time we reach beforeSleep. But it turns out that when processCommand calls getNodeByQuery and decides to reject the command, a key can be lazily expired and deleted without the propagation queue later being flushed. This change prevents lazy expiry from deleting the key at this stage (i.e. when not processing a command inside `call`). (cherry picked from commit c789fb0a)
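A minimal sketch of the idea, under the assumption (names are hypothetical, not the server's expire code) that expiry lookups outside of command execution only report the key as gone instead of deleting it:
```c
#include <stdio.h>

typedef enum { EXPIRE_DELETE, EXPIRE_DONT_DELETE } expire_mode;

/* Hypothetical illustration: outside of `call` (e.g. while routing in
 * getNodeByQuery) the key is treated as missing but NOT deleted, so nothing
 * has to be appended to the propagation queue at that point. */
static int key_is_live(long long expire_at_ms, long long now_ms, expire_mode mode) {
    if (expire_at_ms >= 0 && now_ms >= expire_at_ms) {
        if (mode == EXPIRE_DELETE) {
            /* delete the key and propagate DEL/UNLINK here */
        }
        return 0; /* logically expired either way */
    }
    return 1;
}

int main(void) {
    printf("%d\n", key_is_live(1000, 2000, EXPIRE_DONT_DELETE)); /* 0: expired, untouched */
    return 0;
}
```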
-
Itamar Haber authored
(cherry picked from commit 407b5c91)
-
Itamar Haber authored
This change was part of #9656 (Redis 7.0) (cherry picked from commit 31ef410e)
-
DarrenJiang13 authored
Fix a bug where scripts ignored the client tracking NOLOOP option and sent an invalidation message anyway. (cherry picked from commit 44859a41)
-
Huang Zhw authored
`BITFIELD` that contains a `GET` is not necessarily readonly: when write subcommands (`SET`/`INCRBY`) are also present, the command must still require write access to the key. Before this fix, appending a `get` let a user with read-only key permissions perform the write (a toy sketch of the intended check follows after this message):
```
127.0.0.1:6384> acl setuser hello on nopass %R~* +@all
OK
127.0.0.1:6384> auth hello 1
OK
127.0.0.1:6384> bitfield hello set i8 0 1
(error) NOPERM this user has no permissions to access one of the keys used as arguments
127.0.0.1:6384> bitfield hello set i8 0 1 get i8 0
1) (integer) 0
2) (integer) 1
```
Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit ec5034a2)
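Below is a toy illustration (not the server's key-spec logic) of the rule the fix enforces: BITFIELD only gets read-only key access when every subcommand is GET.
```c
#include <stdbool.h>
#include <stdio.h>
#include <strings.h>

/* Illustrative only: any SET or INCRBY subcommand makes the whole BITFIELD
 * call a read-write access of the key, even if a GET is also present. */
static bool bitfield_is_readonly(int argc, const char **argv) {
    for (int i = 0; i < argc; i++) {
        if (!strcasecmp(argv[i], "SET") || !strcasecmp(argv[i], "INCRBY"))
            return false;
    }
    return true;
}

int main(void) {
    const char *args[] = {"SET", "i8", "0", "1", "GET", "i8", "0"};
    printf("%d\n", bitfield_is_readonly(7, args)); /* 0: must require write access */
    return 0;
}
```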
-
Rudi Floren authored
The docs state that there is a new and an old argument format. The current state of the arguments allows mixing the old and new formats, hence the need for two additional oneof blocks: one to differentiate the new format from the old, and one to allow setting multiple filters using the new format. (cherry picked from commit 4ce3fd51)
-
Meir Shpilraien (Spielrein) authored
Fix #11030: use lua_rawget to avoid triggering metatables.

#11030 shows how returning `_G` from a Lua script (either a function or eval) causes the Lua interpreter to panic and the Redis process to exit with error code 1. Though returning `_G` only panics on Redis 7 and 6.2.7, the underlying issue exists on older versions as well (6.0 and 6.2). The underlying issue is returning a table with a metatable such that the metatable raises an error. The following example demonstrates the issue:
```
127.0.0.1:6379> eval "local a = {}; setmetatable(a,{__index=function() foo() end}) return a" 0
Error: Server closed the connection
```
```
PANIC: unprotected error in call to Lua API (user_script:1: Script attempted to access nonexistent global variable 'foo')
```
The Lua panic happens because when returning the result to the client, Redis needs to introspect the returned table and transform it into a RESP reply. In order to scan the table, Redis uses the `lua_gettable` API, which might trigger the metatable (if one exists) and might raise an error. This code is not running inside `pcall` (a Lua protected call), so raising an error causes Lua to panic and exit. Notice that this is not a crash; it is a Lua panic that exits with error code 1. Returning `_G` panics on Redis 7 and 6.2.7 because on those versions `_G` has a metatable that raises an error when trying to fetch a nonexistent key.

### Solution

Instead of using `lua_gettable`, which might raise an error and cause the issue, use `lua_rawget`, which simply returns the value from the table without triggering any metatable logic (a minimal sketch of the API difference follows below). This is guaranteed not to raise an error. The downside of this solution is that it might be considered a breaking change if someone relies on metatables in the returned value. An alternative solution is to wrap this entire logic in `pcall` (a Lua protected call), but that alternative requires a much bigger refactoring.

### Back Porting

The same fix will work on older versions as well (6.2, 6.0). Notice that on those versions, the issue can cause Redis to crash if the metatable logic attempts to access Redis (`redis.call`). On 7.0, there is no crash and the `redis.call` is executed as if it were done from inside the script itself.

### Tests

Tests were added to verify the fix. (cherry picked from commit 020e046b)
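For reference, a minimal sketch of the difference between the two Lua C API calls (assuming the Lua 5.1 headers Redis embeds; `t` must be an absolute stack index for the first helper):
```c
#include <lua.h>

/* lua_gettable may invoke the table's __index metamethod; if that metamethod
 * raises an error outside of a protected call, Lua panics (the bug above). */
static void get_slot_unsafe(lua_State *L, int t, int i) {
    lua_pushnumber(L, i);
    lua_gettable(L, t);
}

/* The raw accessor reads the slot directly, never runs metamethods, and
 * therefore can never raise from within this conversion loop. */
static void get_slot_raw(lua_State *L, int t, int i) {
    lua_rawgeti(L, t, i);
}
```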
-
Binbin authored
In rewriteAppendOnlyFileBackground, after flushAppendOnlyFile(1) and before openNewIncrAofForAppend, we should call redis_fsync to fsync the AOF file, because we may open a new INCR AOF in openNewIncrAofForAppend, and with the everysec policy the old AOF file might otherwise not be fsynced in time (or even at all). When using everysec we don't want to pay the disk latency from the main thread, so we do a background fsync instead. Add an argument to bioCreateCloseJob, a `need_fsync` flag, to indicate that an fsync is required before the file is closed, so the old AOF file is fsynced before we close it. As a cleanup, make the bio job arguments an actual union, since the free_* args and the fd / fsync args are never used together. Co-authored-by:
Oran Agra <oran@redislabs.com> (cherry picked from commit 03fff10a)
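A rough sketch of the mechanism (field and function names are illustrative, not the actual bio.c code): the close job records whether an fsync must happen first, and the background thread performs it before closing the old AOF fd.
```c
#include <unistd.h>

typedef struct {
    int fd;
    int need_fsync; /* set when the old AOF may still have unsynced writes */
} close_job;

/* Runs on the bio thread, so the everysec fsync latency never hits the main
 * thread, yet the last writes to the old AOF are on disk before close(). */
static void process_close_job(close_job *job) {
    if (job->need_fsync) fsync(job->fd);
    close(job->fd);
}
```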
-
chenyang8094 authored
RM_SetAbsExpire and RM_GetAbsExpire have not been operational since they were introduced, due to an omission in the API registration. (cherry picked from commit 39d216a3)
-
Pavel Krush authored
Swap M and N in the complexity formula of [B]ZMPOP. Co-authored-by:
Pavel Krush <neon@pushwoosh.com> (cherry picked from commit 5879e490)
-
Wen Hui authored
These are missing from the RO_ commands, but present in the other ones. Co-authored-by:
Ubuntu <lucas.guang.yang1@huawei.com> (cherry picked from commit 56828bab)
-
Wen Hui authored
According to the Redis Functions documentation, the FCALL command format is: FCALL function_name numberOfKeys [key1 key2 key3 ...] [arg1 arg2 arg3 ...]. So in the JSON files of fcall and fcall_ro, the key and arg parts should be marked optional, just like EVAL. Co-authored-by:
Binbin <binloveplay1314@qq.com> (cherry picked from commit 2d3240f3)
-
- 18 Jul, 2022 3 commits
-
-
Oran Agra authored
-
Oran Agra authored
The temporary array for the deleted-entries reply of XAUTOCLAIM was insufficiently sized; moreover, the COUNT argument should be the thing controlling the size of the reply, so instead of terminating the loop by counting only the claimed entries, we now count deleted entries as well. Fixes #10968. Addresses CVE-2022-31144. (cherry picked from commit 2825b605)
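A toy model of the fixed loop shape (illustrative only, not the server code): deleted entries now consume the COUNT budget just like claimed ones, so the reply array sized from COUNT can no longer overflow.
```c
#include <stdio.h>

static void autoclaim(const int *exists, int nentries, int count) {
    int claimed = 0, deleted = 0;
    for (int i = 0; i < nentries && count > 0; i++, count--) {
        if (!exists[i]) { deleted++; continue; } /* now counted against COUNT too */
        claimed++;
    }
    printf("claimed=%d deleted=%d\n", claimed, deleted);
}

int main(void) {
    int exists[] = { 0, 1, 0, 1, 1 };
    autoclaim(exists, 5, 3); /* visits only 3 entries in total: claimed=1 deleted=2 */
    return 0;
}
```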
-
Valentino Geron authored
If we do not use jemalloc (mostly when running with Valgrind) and use an old compiler that does not support C11, we get a compilation error. Co-authored-by:
Valentino Geron <valentino@redis.com> (cherry picked from commit 82b82035)
-
- 11 Jul, 2022 5 commits
-
-
Oran Agra authored
-
Oran Agra authored
-
Binbin authored
In #9389 we added a new `cluster-port` config to make the cluster bus port configurable, but redis-cli --cluster create/add-node did not support instances with a configurable `cluster-port`, because redis-cli used the old convention (port + 10000) to send the `CLUSTER MEET` command. Now we add this support to redis-cli `--cluster`. Note that we don't need to explicitly pass in a `cluster-port` parameter: we can get the real `cluster-port` of the node in `clusterManagerNodeLoadInfo`, so the `--cluster create` and `--cluster add-node` interfaces are unchanged. We use the `cluster-port` when sending `CLUSTER MEET`. Also note that the `CLUSTER MEET` bus-port parameter was added in 4.0, so if the bus_port (the one in redis-cli) is 0 or equal to (port + 10000), we just call `CLUSTER MEET` with 2 arguments, using the old form. Co-authored-by:
Madelyn Olson <34459052+madolson@users.noreply.github.com>
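A small sketch of the argument choice described above (illustrative, not the redis-cli source):
```c
#include <stdio.h>

/* Send the bus port to CLUSTER MEET only when it differs from the legacy
 * port+10000 convention; otherwise keep the old two-argument form. */
static void cluster_meet(const char *ip, int port, int bus_port) {
    if (bus_port == 0 || bus_port == port + 10000)
        printf("CLUSTER MEET %s %d\n", ip, port);
    else
        printf("CLUSTER MEET %s %d %d\n", ip, port, bus_port);
}

int main(void) {
    cluster_meet("127.0.0.1", 7000, 0);      /* old form */
    cluster_meet("127.0.0.1", 7000, 18000);  /* explicit cluster-port (>= 4.0) */
    return 0;
}
```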
-
Binbin authored
-
Madelyn Olson authored
* Fix an engine crash when there are nodes in the handshake state and a user calls CLUSTER SHARDS
-
- 10 Jul, 2022 1 commit
-
-
Madelyn Olson authored
* Only print ACL syntax errors once and include command names in errors
-
- 07 Jul, 2022 1 commit
-
-
Binbin authored
* Fix some outdated comments and some typos
-
- 06 Jul, 2022 1 commit
-
-
Binbin authored
We already have `pubsub_channels` and `pubsub_patterns` in INFO stats; now add `pubsubshard_channels` for symmetry. Sharded pubsub was added in #8621.
-
- 05 Jul, 2022 1 commit
-
-
Yossi Gottlieb authored
Use SSL_shutdown(), in a best-effort manner, when closing a TLS connection. This change better supports OpenSSL 3.x clients that will not silently ignore the socket-level EOF.
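A minimal sketch of the close path under that policy (illustrative; the actual connection layer in Redis differs):
```c
#include <openssl/ssl.h>
#include <unistd.h>

/* Best-effort: send the TLS close_notify alert once and ignore the result,
 * then tear down. OpenSSL 3.x peers otherwise treat the bare socket EOF as
 * an error instead of silently ignoring it. */
static void tls_close(SSL *ssl, int fd) {
    SSL_shutdown(ssl);
    SSL_free(ssl);
    close(fd);
}
```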
-
- 04 Jul, 2022 4 commits
-
-
Harkrishn Patro authored
-
Harkrishn Patro authored
## Issue

During the MULTI/EXEC flow, each command gets queued until the `EXEC` command is received, and every queued command triggers a `realloc` of the queue. This can be expensive depending on the realloc behavior (i.e. whether it copies to a new memory location).

## Solution

To reduce the number of allocations, two optimizations are used:
1. By default, reserve memory for at least two commands; a `MULTI/EXEC` around a single command has no real significance, so that case is not worth optimizing for.
2. For further growth, increase the allocation exponentially (powers of 2), which reduces the number of `realloc` calls from `N` to `log(N)` (a standalone sketch follows below).

## Other changes

* Include the MULTI/EXEC queued command array in the client memory consumption calculation (this affects client eviction too).
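A standalone sketch of the growth policy (names are hypothetical, not the server's multiState fields):
```c
#include <stdlib.h>

typedef struct {
    void **commands; /* queued commands (opaque here) */
    int count;       /* commands queued so far */
    int alloc;       /* slots currently allocated */
} multi_state;

/* Reserve room for at least two commands up front, then double the array,
 * so queuing N commands costs O(log N) reallocations instead of N. */
static int queue_command(multi_state *ms, void *cmd) {
    if (ms->count == ms->alloc) {
        int new_alloc = ms->alloc ? ms->alloc * 2 : 2;
        void **p = realloc(ms->commands, sizeof(void *) * new_alloc);
        if (!p) return -1;
        ms->commands = p;
        ms->alloc = new_alloc;
    }
    ms->commands[ms->count++] = cmd;
    return 0;
}
```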
-
Qu Chen authored
Currently in cluster mode, the Redis process locks the cluster config file at startup and holds the lock for the entire lifetime of the process. When the server shuts down, it doesn't explicitly release the lock on the cluster config file. We noticed a problem during restart testing: if you shut down a very large redis-server process (i.e. with several hundred GB of data stored), it takes the OS a while to free the resources and unlock the cluster config file, so an immediate attempt to restart the redis-server process might fail to acquire the lock and fail to come up. This fix explicitly releases the lock on the cluster config file upon shutdown rather than relying on the OS, which is a cleaner and safer way to free up the acquired resources.
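A minimal sketch of an explicit unlock on the shutdown path (illustrative; assumes the advisory lock was taken with flock at startup):
```c
#include <sys/file.h>
#include <unistd.h>

/* Drop the advisory lock on the cluster config file before exiting, instead
 * of waiting for the OS to reclaim it once the large process is torn down. */
static void release_cluster_config_lock(int lock_fd) {
    if (lock_fd == -1) return;
    flock(lock_fd, LOCK_UN);
    close(lock_fd);
}
```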
-
Harkrishn Patro authored
Account for the memory consumed by sharded pubsub channels in the client memory usage computation, so that clients are accurately evicted based on the configured `maxmemory-clients` threshold.
-