- 11 Sep, 2021 1 commit
-
-
David CARLIER authored
-
- 10 Sep, 2021 1 commit
-
-
zhaozhao.zz authored
-
- 09 Sep, 2021 11 commits
-
-
Meir Shpilraien (Spielrein) authored
-
sundb authored
Part two of implementing #8702 (zset), after #8887.

## Description of the feature
Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.

## RDB format changes
New `RDB_TYPE_ZSET_LISTPACK` rdb type.

## RDB loading improvements
1) Pre-expansion of the dict for validation of duplicate data for listpack and ziplist.
2) Simplified the release of empty key objects during RDB loading.
3) Unified the ziplist and listpack data verification methods for zset and hash, and moved the code to rdb.c.

## Interface changes
1) The new `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
2) OBJECT ENCODING will return listpack instead of ziplist.

## Listpack improvements
1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from a listpack.
2) Improve the performance of `lpCompare`; converting from string to integer is faster than converting from integer to string.
3) Replace `snprintf` with `ll2string` to improve the performance of number-to-string conversion in `lpGet()`.

## Zset improvements
1) Improve the performance of `zzlFind`: use `lpFind` instead of `lpCompare` in a loop.
2) Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to delete an element of a zset.

## Tests
1) Add some unit tests for the `lpDeleteRange` and `lpDeleteRangeWithEntry` functions.
2) Add a zset RDB loading test.
3) Add a benchmark test for `lpCompare` and `ziplistCompare`.
4) Add an empty listpack zset corrupt dump test.
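As a rough illustration of the encoding decision implied by the configs above, here is a minimal standalone C sketch (thresholds are the usual defaults and the function name is invented; this is not the actual t_zset.c logic): small sorted sets keep the compact listpack encoding and switch to skiplist once either limit is exceeded.

```c
#include <stdio.h>
#include <stddef.h>

/* Illustrative defaults, corresponding to zset-max-listpack-entries/value. */
#define ZSET_MAX_LISTPACK_ENTRIES 128
#define ZSET_MAX_LISTPACK_VALUE    64

/* Decide which encoding a zset of the given size/member length would use. */
static const char *zset_encoding(size_t entries, size_t longest_member_len) {
    if (entries > ZSET_MAX_LISTPACK_ENTRIES ||
        longest_member_len > ZSET_MAX_LISTPACK_VALUE)
        return "skiplist";
    return "listpack";
}

int main(void) {
    printf("%s\n", zset_encoding(10, 8));     /* small set: listpack */
    printf("%s\n", zset_encoding(1000, 8));   /* too many entries: skiplist */
    return 0;
}
```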
-
Madelyn Olson authored
Throw an error when a user is provided multiple times on the command line instead of silently throwing one of them away. Remove the unneeded validation of users on ACL load.
-
yancz2000 authored
Add make test-cluster option
-
yvette903 authored
A write request may be paused unexpectedly because `server.client_pause_end_time` is stale.

**Recreate this:**
redis-cli -p 6379
127.0.0.1:6379> client pause 500000000 write
OK
127.0.0.1:6379> client unpause
OK
127.0.0.1:6379> client pause 10000 write
OK
127.0.0.1:6379> set key value

The write request `set key value` is paused until the timeout of 500000000 milliseconds is reached.

**Fix:** reset `server.client_pause_end_time` to 0 in `unpauseClients`.
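A minimal standalone C sketch of the bug and fix described above (field and function names are illustrative, not the actual server code): if unpause does not clear the recorded deadline, a later short pause inherits the stale, much larger one.

```c
#include <stdio.h>

static long long client_pause_end_time = 0;   /* ms, 0 = no pause recorded */

static void pause_clients(long long now_ms, long long duration_ms) {
    long long end = now_ms + duration_ms;
    /* the pause logic only ever extends the deadline */
    if (end > client_pause_end_time) client_pause_end_time = end;
}

static void unpause_clients(void) {
    client_pause_end_time = 0;    /* the fix: drop the stale deadline */
}

static int clients_are_paused(long long now_ms) {
    return client_pause_end_time != 0 && now_ms < client_pause_end_time;
}

int main(void) {
    long long now = 1000;
    pause_clients(now, 500000000);   /* CLIENT PAUSE 500000000 WRITE */
    unpause_clients();               /* CLIENT UNPAUSE */
    pause_clients(now, 10000);       /* CLIENT PAUSE 10000 WRITE */
    /* With the reset in unpause_clients(), writes resume after 10s. */
    printf("paused after 20s? %d\n", clients_are_paused(now + 20000));
    return 0;
}
```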
-
Kamil Cudnik authored
For many long strings that share a prefix extending beyond the hashing limit, there will be a lot of hash collisions, which degrade the performance of commands like KEYS.
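A standalone C demonstration of the collision problem described above (FNV-1a and the 32-byte limit are purely illustrative; this is not the hash Redis uses): when only the first N bytes are hashed, every key sharing an N-byte prefix gets the same hash.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define HASH_LIMIT 32   /* illustrative prefix limit */

static uint64_t prefix_hash(const char *key, size_t len) {
    if (len > HASH_LIMIT) len = HASH_LIMIT;   /* the source of the collisions */
    uint64_t h = 0xcbf29ce484222325ULL;       /* FNV-1a 64-bit */
    for (size_t i = 0; i < len; i++) {
        h ^= (uint8_t)key[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

int main(void) {
    /* Both keys share a >32-byte prefix, so the truncated hash is identical. */
    const char *k1 = "stats:2021-09-09:region-eu-west-1:page-views";
    const char *k2 = "stats:2021-09-09:region-eu-west-1:unique-users";
    printf("equal hashes: %d\n",
           prefix_hash(k1, strlen(k1)) == prefix_hash(k2, strlen(k2)));
    return 0;
}
```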
-
Binbin authored
We want to add a COUNT option to BLPOP, but we can't do it without breaking compatibility due to the command's argument syntax. So this commit introduces two new commands.

Syntax for the new LMPOP command: `LMPOP numkeys [<key> ...] LEFT|RIGHT [COUNT count]`
Syntax for the new BLMPOP command: `BLMPOP timeout numkeys [<key> ...] LEFT|RIGHT [COUNT count]`

Some background:
- LPOP takes one key, and can return multiple elements.
- BLPOP takes multiple keys, but returns one element from just one key.
- LMPOP can take multiple keys and return multiple elements from just one key.

Note that although LMPOP/BLMPOP can take multiple keys, they eventually operate on just one key, and they propagate as LPOP or RPOP with the COUNT option.

As a new command, it still returns NIL if we can't pop any elements. The normal response is a nested array in both RESP2 and RESP3, like:
```
LMPOP/BLMPOP
1) keyname
2) 1) element1
   2) element2
```
I.e. unlike BLPOP, which returns a key name and one element and so uses a flat array, and LPOP, which returns multiple elements with no key name and again uses a flat array, this one has to return a nested array, and it does so for both RESP2 and RESP3 (like SCAN does).

Related discussion: #766 #8824
-
Huang Zhw authored
Add two INFO metrics:
```
total_active_defrag_time:12345
current_active_defrag_time:456
```
`current_active_defrag_time`, if greater than 0, indicates how much time has passed since the current active defrag run started. If active defrag stops, this metric is reset to 0. `total_active_defrag_time` is the total time the fragmentation was over the defrag threshold since the server started. This is a followup PR for #9031.
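A minimal standalone C sketch of how two such counters can be maintained (names and the helper functions are illustrative, not the actual defrag.c code): remember when the current run started, and fold the elapsed time into a lifetime total when it stops.

```c
#include <stdio.h>

static long long total_active_defrag_time = 0;   /* ms, since server start */
static long long current_defrag_start = 0;       /* 0 = defrag not running */

static void defrag_started(long long now_ms) {
    if (current_defrag_start == 0) current_defrag_start = now_ms;
}

static void defrag_stopped(long long now_ms) {
    if (current_defrag_start) {
        total_active_defrag_time += now_ms - current_defrag_start;
        current_defrag_start = 0;   /* current metric now reads as 0 */
    }
}

static long long current_active_defrag_time(long long now_ms) {
    return current_defrag_start ? now_ms - current_defrag_start : 0;
}

int main(void) {
    defrag_started(1000);
    printf("current=%lld\n", current_active_defrag_time(6000));   /* 5000 */
    defrag_stopped(6000);
    printf("current=%lld total=%lld\n",
           current_active_defrag_time(7000), total_active_defrag_time);
    return 0;
}
```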
-
Wang Yuan authored
* Delay discarding the cached master during full synchronization
* Don't disconnect replicas before loading the transferred RDB when doing a full sync

Previously, once a replica needed to start full synchronization with its master, it discarded the cached master regardless of whether the full synchronization succeeded or failed. Now we discard the cached master only when the RDB transfer is finished and we start to change the data space; this lets the replica start a partial resynchronization with another new master if the current master fails during full synchronization.
-
chenyang8094 authored
When parsing an array-type reply, ctx was lost when recursively parsing its elements, which caused a memory leak in automemory mode. This is a result of the changes in #9202. Also add a test for the callReplyParseCollection fix.
-
Itamar Haber authored
-
- 08 Sep, 2021 2 commits
-
-
chenyang8094 authored
-
zhaozhao.zz authored
When a replica is paused, it does not apply any commands, even if the command comes from its master. If we feed the non-applied commands to the replication stream, the replication offset would be wrong, and data would be lost after failover (since the replica's `master_repl_offset` grows but the commands are not applied).

To fix it, here are the changes:
* Don't update the replica's replication offset or propagate commands to sub-replicas when it's paused in `commandProcessed`.
* Show `slave_read_repl_offset` in the info reply.
* Add an assert to make sure the master client is never blocked unless paused or in a module (some modules may use blocking to do background (parallel) processing and forward the original blocking module command to the replica; it's not a good way but it can work, so the assert excludes modules for now, but someday in the future all modules should rewrite blocking commands to propagate like `BLPOP` does).
-
- 07 Sep, 2021 1 commit
-
-
gavinshark authored
-
- 06 Sep, 2021 1 commit
-
-
Viktor Söderqvist authored
Until now, giving a negative index seeks from the end of a list and a positive one seeks from the beginning. This change makes it seek from the nearest end, regardless of the sign of the given index. quicklistIndex is used by all list commands which operate by index.

LINDEX key 999999 in a list of 1M elements is greatly optimized by this change: latency is cut by 75%. LINDEX key -1000000 in a list of 1M elements, likewise. LRANGE key -1 -1 is affected by this, since LRANGE converts the indices to positive numbers before seeking.

The tests for corrupt dumps are updated to make sure the corrupt data is seeked in the same direction as before.
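A standalone C sketch of the "seek from the nearest end" idea, using a plain doubly linked list rather than a quicklist (function and type names are illustrative): normalize the possibly negative index, then walk from the head or the tail, whichever is closer.

```c
#include <stdio.h>

typedef struct node { int value; struct node *prev, *next; } node;

static node *index_from_nearest_end(node *head, node *tail, long len, long index) {
    long pos = index >= 0 ? index : len + index;   /* normalize negative index */
    if (pos < 0 || pos >= len) return NULL;

    if (pos <= len - 1 - pos) {                    /* head is closer */
        node *n = head;
        while (pos--) n = n->next;
        return n;
    } else {                                       /* tail is closer */
        long back = len - 1 - pos;
        node *n = tail;
        while (back--) n = n->prev;
        return n;
    }
}

int main(void) {
    node a = {0}, b = {1}, c = {2};
    a.next = &b; b.prev = &a; b.next = &c; c.prev = &b;
    printf("%d %d\n",
           index_from_nearest_end(&a, &c, 3, 2)->value,     /* walks from tail */
           index_from_nearest_end(&a, &c, 3, -3)->value);    /* walks from head */
    return 0;
}
```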
-
- 05 Sep, 2021 1 commit
-
-
Wen Hui authored
Use sentinel debug to reduce default timeouts and allow tests to execute faster.
-
- 03 Sep, 2021 1 commit
-
-
Madelyn Olson authored
-
- 02 Sep, 2021 3 commits
-
-
guybe7 authored
1. MIGRATE has a potential key arg in argv[3]. It should be reflected in the command table.
2. getKeysUsingCommandTable should never free getKeysResult; it is always freed by the caller. The reason we never encountered this double-free bug is that getKeysResult almost always uses the static buffer and doesn't allocate a new one.
-
sundb authored
Normally we execute the read event first and then the write event. When the barrier is set, we do it in reverse. However, under `kqueue`, if an `fd` has both read and write events, reading the events with `kevent` will generate two separate events, which results in uncontrolled read/write ordering. This also means that the guarantees of AOF `appendfsync` = `always` are not met on macOS without this fix.

The main change in this PR is to cache the events already obtained when reading them, so that if the same `fd` occurs again, only the mask in the cache is updated rather than a new event being generated.

This was exposed by the following test failure on macOS:
```
*** [err]: AOF fsync always barrier issue in tests/integration/aof.tcl
Expected 544 != 544 (context: type eval line 26 cmd {assert {$size1 != $size2}} proc ::test)
```
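A minimal standalone C sketch of the de-duplication idea described above (names, constants, and the per-fd index array are illustrative, not the ae_kqueue.c code): when the poller reports the same fd twice, merge the second report into the mask already recorded for that fd instead of emitting a second event.

```c
#include <stdio.h>

#define MAX_FD   1024
#define EV_READ  1
#define EV_WRITE 2

typedef struct { int fd; int mask; } fired_event;

static int fd_seen_index[MAX_FD];   /* 0 = unseen, otherwise slot index + 1 */

static int add_fired(fired_event *fired, int numevents, int fd, int mask) {
    if (fd_seen_index[fd]) {
        fired[fd_seen_index[fd] - 1].mask |= mask;   /* merge into cached entry */
        return numevents;
    }
    fired[numevents].fd = fd;
    fired[numevents].mask = mask;
    fd_seen_index[fd] = numevents + 1;
    return numevents + 1;
}

int main(void) {
    fired_event fired[8];
    int n = 0;
    n = add_fired(fired, n, 5, EV_READ);
    n = add_fired(fired, n, 5, EV_WRITE);   /* same fd: mask merged, no new slot */
    printf("events=%d fd=%d mask=%d\n", n, fired[0].fd, fired[0].mask);
    return 0;
}
```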
-
Yossi Gottlieb authored
This is considered a safer approach as it prevents a race condition that could lead to chmod being executed on a different file. Not a major risk, but CodeQL alerted on this, so it makes sense to fix.
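For context, a common way to avoid this class of race is to operate on the already-open file descriptor instead of the path. The sketch below shows that pattern in isolation (it is an assumption about the general technique, not the exact change made here): a path swap after open cannot redirect the permission change to another file.

```c
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    /* Open first, then change permissions via the descriptor (fchmod),
     * not via the path (chmod), so the mode applies to this exact file. */
    FILE *fp = fopen("example.conf", "w");
    if (!fp) return 1;
    if (fchmod(fileno(fp), 0600) == -1) { fclose(fp); return 1; }
    fputs("# sensitive settings\n", fp);
    fclose(fp);
    return 0;
}
```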
-
- 31 Aug, 2021 1 commit
-
-
Viktor Söderqvist authored
* Enhance dict to support arbitrary metadata carried in dictEntry
* Rewrite slot-to-keys mapping to linked lists using dict entry metadata

This is a memory enhancement for Redis Cluster. The radix tree slots_to_keys (which duplicates all key names prefixed with their slot number) is replaced with a linked list for each slot. The dict entries of the same cluster slot form a linked list and the pointers are stored as metadata in each dict entry of the main DB dict. This commit also moves the slot-to-key API from db.c to cluster.c.

Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Jim Brunner <brunnerj@amazon.com>
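A standalone C sketch of the data-structure change described above (names and layout are illustrative, not the actual dict.c/cluster.c definitions): each entry carries metadata with prev/next pointers, so all keys of one cluster slot form an intrusive doubly linked list threaded through the main dict entries.

```c
#include <stdio.h>

typedef struct entry {
    const char *key;
    /* per-entry metadata, used only in cluster mode: */
    struct entry *slot_prev, *slot_next;
} entry;

typedef struct { entry *head; unsigned long count; } slot_keys;

static void slot_add(slot_keys *slot, entry *e) {
    e->slot_prev = NULL;               /* push onto the slot's list head */
    e->slot_next = slot->head;
    if (slot->head) slot->head->slot_prev = e;
    slot->head = e;
    slot->count++;
}

static void slot_remove(slot_keys *slot, entry *e) {
    if (e->slot_prev) e->slot_prev->slot_next = e->slot_next;
    else slot->head = e->slot_next;
    if (e->slot_next) e->slot_next->slot_prev = e->slot_prev;
    slot->count--;
}

int main(void) {
    slot_keys slot = {0};
    entry a = {"user:1"}, b = {"user:2"};
    slot_add(&slot, &a);
    slot_add(&slot, &b);
    slot_remove(&slot, &a);
    printf("keys in slot: %lu, head=%s\n", slot.count, slot.head->key);
    return 0;
}
```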
-
- 30 Aug, 2021 2 commits
-
-
Oran Agra authored
Failed on a Raspberry Pi 3b, where that single test took about 170 seconds.
-
Wang Yuan authored
We implement incremental data sync in rio.c by calling fsync; on a slow disk that may cost a lot of time. sync_file_range can provide an asynchronous fsync, so we can serialize key/value pairs and sync file data at the same time.
> one tip for sync_file_range usage: http://lkml.iu.edu/hypermail/linux/kernel/1005.2/01845.html

Additionally, this change avoids issuing a single large write, which can result in a mass of dirty pages in the kernel (increasing the risk of blocking someone else's write).

On HDD, the current solution roughly halves the RDB dump time: this PR takes 50s to dump a 7.7GB RDB while the unstable branch takes 93s. On NVMe SSD, this PR can't reduce much time: it takes 40s versus 48s on the unstable branch. Moreover, I found that calling data sync every 4MB is better than every 32MB.
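A standalone, Linux-only C sketch of the idea described above (file name, chunk size handling, and flag combination are simplified assumptions, not the rio.c code): while streaming a file, kick off writeback of each finished 4MB chunk with sync_file_range() so serialization and disk flushing overlap, then do one final fsync.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define AUTOSYNC_BYTES (4 * 1024 * 1024)   /* flush every 4MB, as in the PR */

int main(void) {
    int fd = open("dump-sketch.rdb", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd == -1) return 1;

    char buf[8192];
    memset(buf, 'x', sizeof(buf));
    off_t written = 0, last_synced = 0;

    for (int i = 0; i < 4096; i++) {              /* pretend to serialize data */
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) return 1;
        written += sizeof(buf);
        if (written - last_synced >= AUTOSYNC_BYTES) {
            /* Initiate writeback of the new chunk; wait only for writeback
             * that was already submitted in earlier iterations. */
            sync_file_range(fd, last_synced, written - last_synced,
                            SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE);
            last_synced = written;
        }
    }
    fsync(fd);   /* one final full fsync before closing */
    close(fd);
    return 0;
}
```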
-
- 29 Aug, 2021 2 commits
-
-
Binbin authored
This one follows #9313 and goes deeper (validation of config file parsing). Move the check/update logic to a new updateClientOutputBufferLimit function, so that it can be used in both CONFIG SET and config file parsing.
-
Viktor Söderqvist authored
1. The output of --help:
   * On the Usage line, just write [OPTIONS] [COMMAND ARGS...] instead of listing only a few arbitrary options and no command.
   * For --cluster, describe that if the command is supplied on the command line, the key must contain "{tag}". Otherwise, the command will not be sent to the right cluster node.
   * For -r, add a note that if -r is omitted, all commands in a benchmark will use the same key. Also align the description.
   * For -t, describe that -t is ignored if a command is supplied on the command line.
2. Print a warning if -t is present when a specific command is supplied.
3. Print all warnings and errors to stderr.
4. Remove -e from calls in the redis-benchmark test suite.
-
- 25 Aug, 2021 1 commit
-
-
Huang Zhw authored
In multi-threaded mode, every thread outputs throughput info. This causes some problems:
- a bug in https://github.com/redis/redis/pull/8615;
- the throughput display is called too frequently;
- showThroughput, which updates a shared variable, lacks a synchronization mechanism.

This commit also reverts the changes in #8615 and changes the time event interval to a macro.
-
- 24 Aug, 2021 1 commit
-
-
Garen Chan authored
When `decr_step` is greater than `oldlimit`, the final `bestlimit` may be invalid. For example, with oldlimit = 10 and decr_step = 16: the current bestlimit is 15 and setrlimit() fails; since bestlimit is less than decr_step, the loop exits, and the final bestlimit is larger than oldlimit but invalid. Note that this only matters if the system fd limit is below 16, so it is unlikely to have any actual effect.
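A simplified standalone C sketch of the descending setrlimit loop described above (not the exact Redis adjustOpenFilesLimit code). The `bestlimit < decr_step` clause is the point of the fix: without it, when the original limit is smaller than the step, the loop could exit with a bestlimit that setrlimit() never accepted.

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    rlim_t maxfiles = 100000, decr_step = 16;
    struct rlimit limit;
    if (getrlimit(RLIMIT_NOFILE, &limit) == -1) return 1;
    rlim_t oldlimit = limit.rlim_cur;

    rlim_t bestlimit = maxfiles;
    while (bestlimit > oldlimit) {
        limit.rlim_cur = bestlimit;
        limit.rlim_max = bestlimit;
        if (setrlimit(RLIMIT_NOFILE, &limit) != -1) break;   /* accepted */
        if (bestlimit < decr_step) {
            bestlimit = oldlimit;   /* give up: fall back to a known-valid value */
            break;
        }
        bestlimit -= decr_step;
    }
    if (bestlimit < oldlimit) bestlimit = oldlimit;
    printf("old=%llu best=%llu\n",
           (unsigned long long)oldlimit, (unsigned long long)bestlimit);
    return 0;
}
```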
-
- 23 Aug, 2021 1 commit
-
-
Wen Hui authored
This aims to solve the issue that CONFIG SET maxmemory can only set maxmemory up to 9223372036854775807 (2^63 - 1), while maxmemory should be an unsigned long long. Added a memtoull function to convert a string representing an amount of memory into the number of bytes (similar to memtoll but for unsigned long long). Also added ull2string to convert an unsigned long long to a string (similar to ll2string).
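A hedged C sketch of what a memtoull()-style helper could look like (the function name is suffixed `_sketch` on purpose; the actual Redis util.c implementation differs in details): parse a string such as "256mb" or "12gb" into bytes as an unsigned long long, with an overflow check.

```c
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>

static int memtoull_sketch(const char *p, unsigned long long *out) {
    char *end;
    errno = 0;
    unsigned long long val = strtoull(p, &end, 10);
    if (errno || end == p) return -1;

    unsigned long long mul;
    if (*end == '\0' || !strcasecmp(end, "b")) mul = 1;
    else if (!strcasecmp(end, "k"))  mul = 1000ULL;
    else if (!strcasecmp(end, "kb")) mul = 1024ULL;
    else if (!strcasecmp(end, "m"))  mul = 1000ULL * 1000;
    else if (!strcasecmp(end, "mb")) mul = 1024ULL * 1024;
    else if (!strcasecmp(end, "g"))  mul = 1000ULL * 1000 * 1000;
    else if (!strcasecmp(end, "gb")) mul = 1024ULL * 1024 * 1024;
    else return -1;

    if (val > ULLONG_MAX / mul) return -1;   /* would overflow */
    *out = val * mul;
    return 0;
}

int main(void) {
    unsigned long long bytes;
    if (memtoull_sketch("12gb", &bytes) == 0)
        printf("12gb = %llu bytes\n", bytes);
    return 0;
}
```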
-
- 22 Aug, 2021 3 commits
-
-
Viktor Söderqvist authored
Also add another assert to make sure the function can't return NULL.
-
Binbin authored
In the old way, we always increased server.dirty in SETBIT and BITFIELD SET, even when the command didn't really change anything. This commit makes sure SETBIT and BITFIELD SET only increase the dirty counter when the value changed. When the value is unchanged, this has some other implications:
- Avoid adding useless AOF entries
- Reduce replication traffic
- Will not trigger keyspace notifications (setbit)
- Will not invalidate WATCH
- Will not send the invalidation message to tracking clients
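A standalone C sketch of the dirty-counter logic described above (not the actual t_string.c/bitops code; the global and function are illustrative): only count a write as a change when the stored bit actually flips.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static long long server_dirty = 0;

static void setbit_sketch(uint8_t *buf, size_t bitoffset, int on) {
    size_t byte = bitoffset >> 3;
    int bit = 7 - (bitoffset & 0x7);
    int oldbit = (buf[byte] >> bit) & 1;

    buf[byte] &= ~(1 << bit);
    buf[byte] |= (!!on) << bit;

    /* Only mark the dataset dirty (AOF / replication / notifications / WATCH)
     * when the value really changed. */
    if (oldbit != !!on) server_dirty++;
}

int main(void) {
    uint8_t value[8] = {0};
    setbit_sketch(value, 7, 1);   /* flips a bit: dirty becomes 1 */
    setbit_sketch(value, 7, 1);   /* same value again: dirty stays 1 */
    printf("dirty = %lld\n", server_dirty);
    return 0;
}
```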
-
Viktor Söderqvist authored
-
- 20 Aug, 2021 1 commit
-
-
sundb authored
-
- 18 Aug, 2021 2 commits
-
-
Yossi Gottlieb authored
We only run OOM-related tests on x86_64 and aarch64, as jemalloc on other platforms (notably s390x) may actually succeed at very large allocations. As a result the test may hang for a very long time in the cleanup phase, iterating over as many as 2^61 hash table slots.
-
yoav-steinberg authored
Fixes compilation warnings seen on s390x.
-
- 15 Aug, 2021 1 commit
-
-
Yossi Gottlieb authored
On systems that have unsigned char by default (s390x, arm), redis-server could crash as soon as it populates the command table.
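The exact struct involved isn't named here, but the underlying portability issue can be shown in isolation with this standalone C sketch (the `firstkey` field is a hypothetical example): plain `char` is unsigned by default on some ABIs, so a negative sentinel such as -1 stored in it never compares equal to -1 again.

```c
#include <stdio.h>

struct cmd { char firstkey; };   /* illustrative: should be a signed type */

int main(void) {
    struct cmd c = { .firstkey = -1 };
    if (c.firstkey == -1)
        printf("signed-char ABI: sentinel matches\n");
    else
        printf("unsigned-char ABI: sentinel became %d, lookups break\n",
               (int)c.firstkey);
    return 0;
}
```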
-
- 14 Aug, 2021 1 commit
-
-
Wang Yuan authored
If we want to check `defined(SYNC_FILE_RANGE_WAIT_BEFORE)`, we should include fcntl.h; otherwise SYNC_FILE_RANGE_WAIT_BEFORE is not defined and the `sync_file_range` system call is never used. Introduced by #8532.
-
- 12 Aug, 2021 2 commits
-
-
Madelyn Olson authored
-
Madelyn Olson authored
-