- 10 Mar, 2022 1 commit
-
-
rangerzhang authored
* fix-replication-comments: The described capability, `and to schedule a new BGSAVE if there are slaves that attached while a BGSAVE was in progress`, was moved to `checkChildrenDone()`, but the comment left on `replicationStartPendingFork` was not updated and may mislead readers.
* remove-misleading-comments: The described capabilities, `to schedule a new BGSAVE if there are slaves that attached while a BGSAVE was in progress` and `or when the replication RDB transfer strategy is modified from disk to socket or the other way around`, are no longer accurate.
-
- 09 Mar, 2022 4 commits
-
-
Oran Agra authored
* stats and latency commands have non-deterministic output.
* the ones about latency should be sent to ALL_NODES (considering reads from replicas).
* the ones about running scripts and memory usage go only to masters.
* stats aggregation is SPECIAL (like in INFO).
-
a2tt authored
-
蔡相跃 authored
-
sundb authored
c->buf is not an sds string, so we should use dismissMemory instead of dismissSds to dismiss it. This is a recent regression from #10371.
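A minimal sketch of the distinction, assuming the `dismissMemory()`/`dismissSds()` helpers and the client's `buf`/`buf_usable_size` fields as they exist in this era of the codebase (the wrapper function name here is hypothetical):

```c
#include "server.h"  /* client struct, dismissMemory(), dismissSds() */

/* dismissSds() derives the allocation size from the sds header that
 * precedes the pointer, so calling it on a plain buffer would read bogus
 * memory. c->buf is a plain allocation with a known usable size, so it
 * must be dismissed with an explicit size instead. */
static void dismissClientReplyBuffer(client *c) {
    dismissMemory(c->buf, c->buf_usable_size);  /* plain buffer + explicit size */
    /* NOT dismissSds(c->buf): c->buf carries no sds header */
}
```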
-
- 08 Mar, 2022 6 commits
-
-
Ronald Petty authored
Typo in conf file comment.
-
guybe7 authored
Deleting a stream while a client is blocked on XREADGROUP should unblock the client. The idea is that a client blocked via XREADGROUP differs from any other blocking type in the sense that it depends on the existence of both the key and the group. Even if the key is deleted and then revived with XADD, it won't help any clients blocked on XREADGROUP because the group no longer exists, so they would fail with -NOGROUP anyway. The conclusion is that it's better to unblock these clients (with an error) upon deletion of the key, rather than waiting for the first XADD. Other changes:
1. Slightly optimize all `serveClientsBlockedOn*` functions by checking `server.blocked_clients_by_type`.
2. All `serveClientsBlockedOn*` functions now use a list iterator rather than looking at `listFirst`, relying on `unblockClient` to delete the head of the list (see the sketch after this entry). Before this commit, only `serveClientsBlockedOnStreams` worked like that.
3. Bugfix: CLIENT UNBLOCK ERROR should work even if the command doesn't have a timeout_callback (only relevant to module commands).
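A minimal sketch of the iteration pattern from point 2, using the adlist iterator API; the surrounding logic is simplified and the `is_served()` predicate is hypothetical:

```c
#include <stdbool.h>
#include "adlist.h"   /* list, listIter, listNode, listRewind, listNext */

typedef struct client client;          /* opaque here */
extern bool is_served(client *c);      /* hypothetical: data arrived or key deleted */
extern void unblockClient(client *c);  /* removes c's node from the blocked list */

/* Walk a blocked-clients list with an iterator instead of repeatedly
 * taking listFirst(): unblockClient() removes the client's node, and
 * since the iterator has already advanced past that node, deletion
 * during iteration is safe. */
static void serve_blocked_clients(list *clients) {
    listIter li;
    listNode *ln;
    listRewind(clients, &li);
    while ((ln = listNext(&li)) != NULL) {
        client *c = listNodeValue(ln);
        if (is_served(c))
            unblockClient(c);  /* deletes ln; iteration continues safely */
    }
}
```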
-
zhaozhao.zz authored
In some special commands like eval_ro / fcall_ro we allow no-writes commands. But may-replicate commands are flagged as no-writes too, which led to a crash when clients were paused for writes.
-
Oran Agra authored
since #9822, the static reply buffer is no longer part of the client structure, so we need to dismiss it.
-
zhugezy authored
introduced in #10147 since we blocked the first-arg mechanism on subcommands
-
Yossi Gottlieb authored
The parsing of CLUSTER NODES output was not done correctly for IPv6 addresses.
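One common way to parse an `address:port` pair that stays correct for IPv6 (where the address itself contains colons) is to split on the last colon; a hedged sketch of the technique, not claimed to be the exact redis-cli fix:

```c
#include <string.h>

/* Split "host:port" in place. For IPv6 like "2001:db8::1:6379", splitting
 * on the LAST colon keeps the address intact; splitting on the first one
 * would truncate it. Returns 0 on success, -1 if no separator is found. */
static int split_addr_port(char *addr, char **host, char **port) {
    char *p = strrchr(addr, ':');
    if (p == NULL) return -1;
    *p = '\0';
    *host = addr;
    *port = p + 1;
    return 0;
}
```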
-
- 07 Mar, 2022 3 commits
-
-
Binbin authored
Add `DEPRECATED` doc_flag.
-
Shaya Potter authored
Add a new REDISMODULE_EVENT_CONFIG event type for notifying modules when Redis configuration changes.
-
Binbin authored
`Expected '*table size: 4096*' to match '*table size: 8192*'` This test failed once on the daily macOS run. The reason is that the bgsave had not stopped after the kill and the `after 200`, so there was still a child process and no rehash was triggered. This commit uses `waitForBgsave` to wait for it to finish.
-
- 06 Mar, 2022 1 commit
-
-
Yossi Gottlieb authored
Apparently using `\x` produces different results between tclsh 8.5 and 8.6, whereas `\u` is more consistent.
-
- 05 Mar, 2022 1 commit
-
-
Yuta Hongo authored
Normally, `redis-cli` escapes non-printable data received from Redis using a custom scheme (which is also used to handle quoted input). When using `--json` this is not desired, as it is not compatible with RFC 7159, which specifies that JSON strings are assumed to be Unicode and how they should be escaped. This commit changes `--json` to follow RFC 7159, which means that properly encoded Unicode strings in Redis will result in valid Unicode JSON. However, this introduces a new problem with `--json` and data that is not valid Unicode (e.g., random binary data, text in other encodings, etc.). To address this, we add `--quoted-json`, which produces JSON strings that follow the original redis-cli quoting scheme. For example, a value that consists of only null (0x00) bytes will show up as:
* `"\u0000\u0000\u0000"` when using `--json`
* `"\\x00\\x00\\x00"` when using `--quoted-json`
-
- 03 Mar, 2022 1 commit
-
-
Binbin authored
The cluster node name is not null terminated, so it needs to be length-constrained.
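A short sketch of the kind of constraint this requires when printing, assuming the fixed 40-byte name width used by Redis cluster nodes (the exact call sites are not claimed here):

```c
#include <stdio.h>

#define CLUSTER_NAMELEN 40  /* fixed-width node name, not NUL-terminated */

/* Bound the printed length explicitly; a plain "%s" would run past the
 * buffer looking for a terminator that isn't there. */
static void print_node_name(const char *name) {
    printf("%.*s\n", CLUSTER_NAMELEN, name);
}
```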
-
- 02 Mar, 2022 1 commit
-
-
Henry authored
1. Since ZSKIPLIST_P is a float, using it directly inside the condition caused floating point code to be used (gcc/x86).
2. On some operating systems (e.g. Windows), the largest value returned from random() is 0x7FFF (15 bits), so after the bitwise AND with 0xFFFF, the probability of the comparison in the while loop's condition returning true is no longer equal to ZSKIPLIST_P.
3. If some library has random() returning values in the range [0, ZSKIPLIST_P*65535], the while loop becomes an infinite loop.
4. On Linux, where RAND_MAX is higher than 0xFFFF, this change actually improves precision (despite not matching the result against a float value).
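A minimal sketch of the fixed approach, using the constants as they appear in Redis (ZSKIPLIST_P = 0.25, ZSKIPLIST_MAXLEVEL = 32): the threshold is computed once as an integer against RAND_MAX, so the loop compares integers and no longer depends on random()'s platform-specific range:

```c
#include <stdlib.h>

#define ZSKIPLIST_MAXLEVEL 32
#define ZSKIPLIST_P 0.25   /* probability of promoting a node one level */

/* Returns a random level for a new skiplist node. Comparing random()
 * against a precomputed integer threshold avoids both the float compare
 * and the 0xFFFF masking assumptions described above. */
static int zslRandomLevel(void) {
    static const int threshold = ZSKIPLIST_P * RAND_MAX;
    int level = 1;
    while (random() < threshold)
        level += 1;
    return (level < ZSKIPLIST_MAXLEVEL) ? level : ZSKIPLIST_MAXLEVEL;
}
```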
-
- 01 Mar, 2022 3 commits
-
-
ranshid authored
In order to resolve some flaky tests which rely heavily on examining the memory footprint, we introduce the following fixes:
# Fix in client-eviction test - by @yoav-steinberg
Sometimes the libc allocator can use different sizes for client struct allocations. This may cause unexpected memory calculations to fail the test.
# Introduce new DEBUG command for disabling reply buffer resizing
In order to eliminate reply buffer resizing during specific tests, we introduce the ability to disable (and enable) the resizing cron job.
Co-authored-by: yoav-steinberg yoav@redislabs.com
-
Madelyn Olson authored
* Moved configuration storage from a list to a hash table.
* Configs are returned in a non-deterministic order. It's possible that a client was relying on the order (hopefully not).
* Fixed an esoteric bug where, if you did a set with an alias that had an error, it would throw an error indicating a bug with the preferred name for that config.
-
Harkrishn Patro authored
-
- 28 Feb, 2022 6 commits
-
-
Vitah Lin authored
* Fix memory leak in RM_StreamIteratorStop
* Fix memory leak in moduleFreeKeyIterator
-
Oran Agra authored
-
ranshid authored
After introducing #9822 we need to prevent the client reply buffer from shrinking in order to maintain correct client memory math. Also add a missing needs:debug tag on one test. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Oran Agra authored
re-generate help.h from commands.json
-
Binbin authored
* The type of node-id should be string, not integer.
* Also improve the CLUSTER SETSLOT help message.
-
chenyang8094 authored
-
- 27 Feb, 2022 3 commits
-
-
Meir Shpilraien (Spielrein) authored
This PR fixes 2 issues on Lua scripting:
* Server error reply statistics (some errors were counted twice).
* Error code and error strings returned from scripts (the error code was missing / misplaced).

## Statistics

A Lua script is considered part of the user application, a sophisticated transaction, so we want to count an error even if it is handled silently by the script, but when it is propagated outwards from the script we don't wanna count it twice. On the other hand, if the script decides to throw an error on its own (using `redis.error_reply`), we wanna count that too. Besides, we do count the `calls` in command statistics for the commands the script calls, so we should certainly also count `failed_calls`. So when a simple `eval "return redis.call('set','x','y')" 0` fails, it should count the failed call to both SET and EVAL, but the `errorstats` and `total_error_replies` should be counted only once.

The PR changes the error object that is raised on errors. Instead of raising a simple Lua string, Redis will raise a Lua table in the following format:
```
{
    err='<error message (including error code)>',
    source='<user source file name>',
    line='<line where the error happened>',
    ignore_error_stats_update=true/false,
}
```
The `luaPushError` function was modified to construct the new error table as described above. `luaRaiseError` was renamed to `luaError` and now simply calls `lua_error` to raise the table on the top of the Lua stack as the error object. The reason for the rename is that since its functionality changed, in case some Redis branch / fork uses it, it's better to have a compilation error than a bug.

The `source` and `line` fields are enriched by the error handler (if possible), and `ignore_error_stats_update` is optional; if it's not present, the default value is `false`. If `ignore_error_stats_update` is true, the error will not be counted in the error stats.

When parsing a Redis call reply, each error is translated to a Lua table in the format described above, with the `ignore_error_stats_update` field set to `true` so we will not count errors twice (we counted this error when we invoked the command).

The changes in this PR might be considered a breaking change for users that used the Lua `pcall` function. Before, the error was a string; now it's a table. To keep backward compatibility, the PR overrides the `pcall` implementation, extracts the error message from the error table, and returns it.

Example of the error stats update:
```
127.0.0.1:6379> lpush l 1
(integer) 2
127.0.0.1:6379> eval "return redis.call('get', 'l')" 0
(error) WRONGTYPE Operation against a key holding the wrong kind of value. script: e471b73f1ef44774987ab00bdf51f21fd9f7974a, on @user_script:1.
127.0.0.1:6379> info Errorstats
# Errorstats
errorstat_WRONGTYPE:count=1
127.0.0.1:6379> info commandstats
# Commandstats
cmdstat_eval:calls=1,usec=341,usec_per_call=341.00,rejected_calls=0,failed_calls=1
cmdstat_info:calls=1,usec=35,usec_per_call=35.00,rejected_calls=0,failed_calls=0
cmdstat_lpush:calls=1,usec=14,usec_per_call=14.00,rejected_calls=0,failed_calls=0
cmdstat_get:calls=1,usec=10,usec_per_call=10.00,rejected_calls=0,failed_calls=1
```

## Error message

We can now construct the error message (sent as a reply to the user) from the error table, which solves issues where the error message was malformed and the error code appeared in the middle of the error message:
```diff
127.0.0.1:6379> eval "return redis.call('set','x','y')" 0
-(error) ERR Error running script (call to 71e6319f97b0fe8bdfa1c5df3ce4489946dda479): @user_script:1: OOM command not allowed when used memory > 'maxmemory'.
+(error) OOM command not allowed when used memory > 'maxmemory' @user_script:1. Error running script (call to 71e6319f97b0fe8bdfa1c5df3ce4489946dda479)
```
```diff
127.0.0.1:6379> eval "redis.call('get', 'l')" 0
-(error) ERR Error running script (call to f_8a705cfb9fb09515bfe57ca2bd84a5caee2cbbd1): @user_script:1: WRONGTYPE Operation against a key holding the wrong kind of value
+(error) WRONGTYPE Operation against a key holding the wrong kind of value script: 8a705cfb9fb09515bfe57ca2bd84a5caee2cbbd1, on @user_script:1.
```
Notice that `redis.pcall` was not changed:
```
127.0.0.1:6379> eval "return redis.pcall('get', 'l')" 0
(error) WRONGTYPE Operation against a key holding the wrong kind of value
```

## Other notes

Notice that some commands (like GEOADD) change the cmd variable on the client, so we cannot rely on it to update the command stats. In order to update those stats correctly, we needed to promote the `realcmd` variable onto the client struct.

Tests were added and modified to verify the changes.

Related PRs: #10279, #10218, #10278, #10309 Co-authored-by:
Oran Agra <oran@redislabs.com>
-
filipe oliveira authored
Adds a `-3` option to cause redis-benchmark to send a `HELLO 3` so it can benchmark the effects of RESP3 on the server.
-
Madelyn Olson authored
-
- 24 Feb, 2022 3 commits
-
-
filipe oliveira authored
Avoid deferred array reply on genericZrangebyrankCommand() when the consumer type is client, i.e. any ZRANGE / ZREVRANGE (when rank is used). This was a performance regression introduced in #7844 (v6.2), mainly affecting pipelined workloads. Co-authored-by:
Oran Agra <oran@redislabs.com>
-
Yossi Gottlieb authored
This is basically just a subtree pull of the latest (unreleased) hiredis. Unfortunately, the `sds -> hisds` patch was pulled as a subtree update from a remote branch rather than a local redis change. Because of that, it goes away on every subtree update. It is now applied as a local commit so it should survive in the future.
-
Binbin authored
Add a missing comma; without it, a newline would be missing in the message. Forgot to add it in #9127.
-
- 23 Feb, 2022 4 commits
-
-
Itamar Haber authored
Adds the ability to track the lag of a consumer group (CG), that is, the number of entries yet to be delivered from the stream. The proposed constant-time solution is in the spirit of "best-effort." Partially addresses #8737.

## Description of approach

We add a new "entries_added" property to the stream. This starts at 0 for a new stream and is incremented by 1 with every `XADD`. It is essentially an all-time counter of the entries added to the stream. Given the stream's length and this counter value, we can trivially find the logical "entries_added" counter of the first ID if and only if the stream is contiguous. A fragmented stream contains one or more tombstones generated by `XDEL`s. The new "xdel_max_id" stream property tracks the latest tombstone.

The CG also tracks an "entries_read" counter alongside its last-delivered ID and increments it independently when delivering new messages, unless this read counter is invalid (-1 means invalid offset). When the CG's counter is available, the reported lag is the difference between the added and read counters.

Lastly, this also adds a "first_id" field to the stream structure in order to make looking it up cheaper in most cases.

## Limitations

There are two cases in which the mechanism isn't able to track the lag. In these cases, `XINFO` replies with `null` in the "lag" field. The first case is when a CG is created with an arbitrary last delivered ID that isn't "0-0", nor the first or the last entry of the stream. In this case, it is impossible to obtain a valid read counter (short of an O(N) operation). The second case is when there are one or more tombstones fragmenting the stream's entries range. In both cases, given enough time and assuming that the consumers are active (reading and acking) and advancing, the CG should be able to catch up with the tip of the stream and report zero lag. Once that's achieved, lag tracking resumes as normal (until the next tombstone is set). A sketch of the lag computation follows this entry.

## API changes

* `XGROUP CREATE` gains the optional named argument `[ENTRIESREAD entries-read]` for explicitly specifying the new CG's counter.
* `XGROUP SETID` gains an optional positional argument `[ENTRIESREAD entries-read]` for specifying the CG's counter.
* `XINFO` reports the maximal tombstone ID, the recorded first entry ID, and the total number of entries added to the stream.
* `XINFO` reports the current lag and logical read counter of CGs.
* `XSETID` is an internal command used in replication/AOF. It gains the optional positional arguments `[ENTRIESADDED entries-added] [MAXDELETEDID max-deleted-entry-id]` for propagating the stream's entries-added counter and maximal tombstone ID.

## The generic unsolved problem

The current stream implementation doesn't provide an efficient way to obtain the approximate/exact size of a range of entries. While it would've been nice to have that ability (#5813) in general, let alone specifically in the context of CGs, the risk and complexity involved in such an implementation are in all likelihood prohibitive.

## A refactoring note

`streamGetEdgeID` has been refactored to accommodate both the existing seek of any entry and seeking non-deleted entries (the addition of the `skip_tombstones` argument). Furthermore, this refactoring migrated the seek logic to use the `streamIterator` (rather than `raxIterator`), which was, in turn, extended with the `skip_tombstones` Boolean struct field to control the emission of these. Co-authored-by:
Guy Benoish <guy.benoish@redislabs.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
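A minimal sketch of the constant-time lag computation described above (struct and function names here are hypothetical, not the actual stream code):

```c
#include <stdint.h>

typedef struct {
    int64_t entries_added;  /* all-time count of entries XADDed to the stream */
} stream_view;

typedef struct {
    int64_t entries_read;   /* logical read counter; -1 means invalid/unknown */
} cg_view;

/* Returns the CG's lag, or -1 when it cannot be tracked (invalid read
 * counter, e.g. a group created at an arbitrary ID, or tombstones
 * fragmenting the entries range). Constant time, no range scans. */
static int64_t cg_lag(const stream_view *s, const cg_view *g) {
    if (g->entries_read < 0) return -1;  /* XINFO reports this as null */
    return s->entries_added - g->entries_read;
}
```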
-
filipe oliveira authored
Avoid sprintf/ll2string in setDeferredAggregateLen()/addReplyLongLongWithPrefix() when we can use shared objects. In some pipelined workloads this achieves about a 10% improvement. A sketch of the shared-header idea follows this entry. Co-authored-by:
Oran Agra <oran@redislabs.com>
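A sketch of the shared-header idea, assuming nothing beyond what's described above (names and the cutoff are illustrative): precompute the small `*<n>\r\n` headers once so the hot path avoids sprintf/ll2string entirely.

```c
#include <stdio.h>
#include <string.h>

#define SHARED_BULKHDR_LEN 32
static char *shared_mbulkhdr[SHARED_BULKHDR_LEN];  /* "*0\r\n", "*1\r\n", ... */

/* Build the shared headers once at startup. */
static void init_shared_headers(void) {
    char buf[32];
    for (int i = 0; i < SHARED_BULKHDR_LEN; i++) {
        int n = snprintf(buf, sizeof(buf), "*%d\r\n", i);
        shared_mbulkhdr[i] = strndup(buf, n);
    }
}

/* Hot path: small aggregate lengths reuse a shared header; only large
 * lengths fall back to formatting into the caller's buffer. */
static const char *mbulk_header(long len, char *fallback, size_t size) {
    if (len >= 0 && len < SHARED_BULKHDR_LEN) return shared_mbulkhdr[len];
    snprintf(fallback, size, "*%ld\r\n", len);
    return fallback;
}
```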
-
Moti Cohen authored
When a HELLO message is received from another sentinel with the same address as an instance registered in the past but with a different runid, there was cumbersome logic to set the instance's port to 0 in order to mark it as invalid, and to delete it later on. But the deletion happened during the update of instances in such a way that we might end up accessing an instance that was deleted just before. We didn't find a good reason to postpone the deletion of an obsolete instance (deletion takes place instantly in other cases) -> let's delete it at once. There was also a mixture of the logic for updating a Sentinel's address with the logic for deleting Sentinels that match a given address -> split it in two!
-
Binbin authored
The test will fail on slow machines (valgrind or FreeBSD). Because in #10256, when WATCH is called on a key that's already logically expired, we add an `expired` flag and skip the key in the `isWatchedKeyExpired` check. Apparently we need to increase the expiration time so that the key cannot logically expire before WATCH is called. Also added retries to make sure it doesn't fail. I suppose 100ms is enough in valgrind; tested locally, no need to retry.
-
- 22 Feb, 2022 3 commits
-
-
Wen Hui authored
argument was missing, affecting redis.io docs
-
Andy Pan authored
There are scenarios that result in many small objects in the reply list, such as commands that make heavy use of deferred array replies (`addReplyDeferredLen`), e.g. what the COMMAND command and CLUSTER SLOTS used to do (see #10056, #7123), but also a transaction or a pipeline of commands that each use just one deferred array reply. With the previous code we had to run multiple loops, along with multiple calls to `write()`, to send data back to the peer; by means of `writev()` we can gather those scattered objects in the reply list, include the static reply buffer as well, and send it all with one system call, which ought to achieve higher performance. In the case of TLS, we simply check and concatenate buffers into one big buffer and send it away with one call to `connTLSWrite()`; if the total amount of all buffers exceeds `NET_MAX_WRITES_PER_EVENT`, we invoke `connTLSWrite()` multiple times to avoid a massive amount of memory copies. Note that aside from reducing system calls, this change also reduces the number of small TCP packets sent. A sketch of the gathering idea follows this entry.
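A minimal sketch of the `writev()` gathering idea (not the actual networking code; the function, the fixed batch size, and the buffer layout are all illustrative):

```c
#include <sys/types.h>
#include <sys/uio.h>

enum { SKETCH_IOV_BATCH = 16 };  /* small fixed batch; real code honors IOV_MAX */

/* Gather the static reply buffer plus the scattered reply-list nodes into
 * one iovec array and flush them with a single system call, instead of one
 * write() per buffer. Short-write handling is omitted for brevity. */
static ssize_t flush_replies(int fd, const char *static_buf, size_t static_len,
                             const char **nodes, const size_t *lens, int nnodes) {
    struct iovec iov[SKETCH_IOV_BATCH];
    int cnt = 0;
    if (static_len > 0) {
        iov[cnt].iov_base = (void *)static_buf;
        iov[cnt].iov_len  = static_len;
        cnt++;
    }
    for (int i = 0; i < nnodes && cnt < SKETCH_IOV_BATCH; i++, cnt++) {
        iov[cnt].iov_base = (void *)nodes[i];
        iov[cnt].iov_len  = lens[i];
    }
    return writev(fd, iov, cnt);  /* one syscall for all gathered buffers */
}
```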
-
Viktor Söderqvist authored
When WATCH is called on a key that's already logically expired, avoid discarding the transaction when the key is actually deleted later. When WATCH is called, a flag is stored if the key is already expired at the time of the watch. The expired key is not deleted, only checked. When a key is "touched", if it is deleted and it was already expired when the client watched it, the client is not marked as dirty. A sketch of this bookkeeping follows this entry. Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
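A hedged sketch of the bookkeeping described above (struct and field names are illustrative, not the exact Redis code): WATCH records whether the key was already logically expired, and the touch path consults that flag before dirtying the client.

```c
#include <stdbool.h>

/* Illustrative per-watch record: remembers whether the key was already
 * logically expired when WATCH was issued. */
typedef struct watched_key {
    const char *key;
    bool expired_at_watch;  /* set at WATCH time if the key's TTL had passed */
} watched_key;

/* Decide whether deleting this key should dirty the watching client:
 * if the key was already logically expired at WATCH time, its actual
 * deletion changes nothing the client could have observed. */
static bool deletion_dirties_client(const watched_key *wk) {
    return !wk->expired_at_watch;
}
```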
-