- 03 Sep, 2024 1 commit
-
-
Filipe Oliveira (Redis) authored
Created specific SMEMBERS command logic which avoids sinterGenericCommand, and minimizes processing and memory overhead (#13499)

This PR introduces a dedicated implementation for the SMEMBERS command that avoids using the more generalized sinterGenericCommand function. By tailoring the logic specifically for SMEMBERS, we reduce unnecessary processing and memory overhead that was previously incurred by handling more complex cases like set intersections.

---------

Co-authored-by:
debing.sun <debing.sun@redis.com>
-
- 04 Aug, 2024 1 commit
-
-
Moti Cohen authored
H[P]TTL should be marked as NONDETERMINISTIC_OUTPUT just like [P]TTL.
-
- 14 Jun, 2024 1 commit
-
-
Jo authored
I reviewed the `XREAD` command syntax:

```
XREAD [COUNT count] [BLOCK milliseconds] STREAMS key [key ...] id [id ...]
```

Here's the structure for `XREAD`:

```json
"arguments": [
    {
        "token": "COUNT",
        "name": "count",
        "type": "integer",
        "optional": true
    },
    {
        "token": "BLOCK",
        "name": "milliseconds",
        "type": "integer",
        "optional": true
    },
    {
        "name": "streams",
        "token": "STREAMS",
        "type": "block",
        "arguments": [
            {
                "name": "key",
                "type": "key",
                "key_spec_index": 0,
                "multiple": true
            },
            {
                "name": "ID",
                "type": "string",
                "multiple": true
            }
        ]
    }
]
```

Now, consider the `HEXPIRE` syntax:

```
HEXPIRE key seconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
```

Since the `FIELDS` token functions similarly to `STREAMS`, and given that `STREAMS` is defined as a block, I believe `FIELDS` in `HEXPIRE` should also be defined as a block.
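For comparison, a rough sketch of what a block-style `FIELDS` definition for `HEXPIRE` could look like, mirroring the `STREAMS` block above (the exact names and types here are assumptions, not the actual schema):

```json
{
    "name": "fields",
    "token": "FIELDS",
    "type": "block",
    "arguments": [
        {
            "name": "numfields",
            "type": "integer"
        },
        {
            "name": "field",
            "type": "string",
            "multiple": true
        }
    ]
}
```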
-
- 29 May, 2024 1 commit
-
-
Ozan Tezcan authored
Fix position of numfields in H(P)EXPIRE json files
-
- 27 May, 2024 2 commits
-
-
Ozan Tezcan authored
In https://github.com/redis/redis/pull/13291, we changed the HFE commands to return an empty array if the key does not exist, but forgot to update the JSON schemas.
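For example, with the new behavior (a hypothetical session; `HTTL` is one of the HFE commands covered by that PR):

```
127.0.0.1:6379> HTTL no-such-key FIELDS 1 f1
(empty array)
```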
-
debing.sun authored
-
- 26 May, 2024 1 commit
-
-
Ozan Tezcan authored
Changes:
- Delete the hsetf and hgetf commands.
- HFE commands will return an empty array instead of nil.

---------

Co-authored-by:
Moti Cohen <moticless@gmail.com>
-
- 16 May, 2024 1 commit
-
-
Moti Cohen authored
The same goes for: HPEXPIRE, HEXPIREAT, HPEXPIREAT, HEXPIRETIME, HPEXPIRETIME, HPTTL, HTTL, HPERSIST.
-
- 13 May, 2024 1 commit
-
-
Ozan Tezcan authored
If the encoding is listpack, the hgetf and hsetf commands reply with the field value as an integer. This PR fixes it by returning a string.

Problematic cases:
```
127.0.0.1:6379> hset hash one 1
(integer) 1
127.0.0.1:6379> hgetf hash fields 1 one
1) (integer) 1
127.0.0.1:6379> hsetf hash GETOLD fvs 1 one 2
1) (integer) 1
127.0.0.1:6379> hsetf hash DOF GETNEW fvs 1 one 2
1) (integer) 2
```

Additional fixes:
- hgetf/hsetf command description text

Fixes #13261, #13262
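After the fix, the same lookups reply with strings instead, e.g. (a sketch of the expected output):

```
127.0.0.1:6379> hgetf hash fields 1 one
1) "1"
```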
-
- 09 May, 2024 1 commit
-
-
debing.sun authored
1. Add `hpersist` notification for the `hpersist` command.
2. Add `pexpire` notification for `hexpire`, `hexpireat` and `hpexpire`.
-
- 08 May, 2024 1 commit
-
-
Ozan Tezcan authored
**Changes:**
- Adds listpack support to hash field expiration
- Implements hgetf/hsetf commands

**Listpack support for hash field expiration**

We keep field name and value pairs in a listpack for the hash type. With this PR, if one of the hash field expiration commands is called on the key for the first time, it converts the listpack layout to triplets to hold field name, value and TTL per field. If a field does not have a TTL, we store zero as the TTL value. Zero is encoded as two bytes in the listpack, so once we convert the listpack to hold triplets, fields that don't have a TTL consume those extra 2 bytes per item. Fields are ordered by TTL in the listpack to find the field with the minimum expiry time efficiently.

**New command implementations as part of this PR:**

- HGETF command

  For each specified field, get its value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
  ```
  HGETF key
    [NX | XX | GT | LT]
    [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST]
    <FIELDS count field [field ...]>
  ```

- HSETF command

  For each specified field-value pair, set the field to the value and optionally set the field's expiration time in sec/msec/unix-sec/unix-msec:
  ```
  HSETF key
    [DC] [DCF | DOF]
    [NX | XX | GT | LT]
    [GETNEW | GETOLD]
    [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL]
    <FVS count field value [field value ...]>
  ```

Todo:
- Performance improvement
- rdb load/save
- aof
- defrag
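For reference, a minimal sketch of the new syntax in use (key, field and value names are invented, and replies are omitted):

```
HSETF myhash EX 100 FVS 2 f1 v1 f2 v2
HGETF myhash GT EXAT 1700000000 FIELDS 1 f1
HGETF myhash PERSIST FIELDS 2 f1 f2
```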
-
- 03 May, 2024 1 commit
-
-
debing.sun authored
-
- 18 Apr, 2024 1 commit
-
-
Moti Cohen authored
- Add ebuckets & mstr data structures
- Integrate active & lazy expiration
- Add most of the commands
- Add support for dict (listpack is missing)

TODOs: RDB, notification, listpack, HSET, HGETF, defrag, aof
-
- 01 Mar, 2024 1 commit
-
-
Chen Tianjie authored
Sometimes we need to make a fast judgement about why Redis is suddenly taking more memory. One of the reasons is the main DB's dicts doing rehashing. We may use `MEMORY STATS` to monitor the overhead memory of each DB, but it still lacks a total sum to show an overall trend. So this PR adds the total overhead of all DBs to the `INFO MEMORY` section, together with the total count of rehashing DB dicts, providing some intuitive metrics about main dict rehashing.

This PR adds the following metric to INFO MEMORY:
* `mem_overhead_db_hashtable_rehashing` - only the size of ht[0] in dictionaries we're rehashing (i.e. the memory that's going to get released soon)

and similar ones to MEMORY STATS:
* `overhead.db.hashtable.lut` - complements the existing `overhead.hashtable.main` and `overhead.hashtable.expires`, which also count the `dictEntry` structs
* `overhead.db.hashtable.rehashing` - temporary rehashing overhead.
* `db.dict.rehashing.count` - number of top-level dictionaries being rehashed.

---------

Co-authored-by:
zhaozhao.zz <zhaozhao.zz@alibaba-inc.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
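For illustration, the new INFO field shows up in the Memory section (the value here is invented), while the three new `MEMORY STATS` fields appear as regular name/value pairs in that command's reply:

```
127.0.0.1:6379> INFO memory
# Memory
...
mem_overhead_db_hashtable_rehashing:1048576
...
```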
-
- 29 Feb, 2024 1 commit
-
-
Binbin authored
In XREADGROUP ACK, because streamPropagateXCLAIM does not propagate entries-read, entries-read will be inconsistent between master and replicas. I.e. if no entries were claimed, it would have propagated correctly, but if some were claimed, then the entries-read field would be inconsistent on the replica.

The fix was suggested by guybe7: call streamPropagateGroupID unconditionally, so that we normalize entries_read on the replicas. In the past, we would only set propagate_last_id when NOACK was specified. And in #9127, XCLAIM did not propagate entries_read in ACK, which would cause entries_read to be inconsistent between master and replicas.

Another approach is to add another arg to XCLAIM and let it propagate entries_read, but we decided not to use it, because we want minimal damage in case there's an old target and a new source (in the worst-case scenario, the new source doesn't recognize XGROUP SETID ... ENTRIES READ and the lag is lost; if we change XCLAIM, the damage is much more severe).

In this patch, if the user uses XREADGROUP .. COUNT 1 there will be an additional overhead of MULTI, EXEC and XGROUP SETID. We assume the extra commands in the COUNT 1 case (a 4x factor, changing from one XCLAIM to MULTI+XCLAIM+XSETID+EXEC) are probably ok, since reading just one entry is in any case very inefficient (a client round trip per record), so we're hoping it's not a common case.

Issue was introduced in #9127.
-
- 22 Feb, 2024 1 commit
-
-
guybe7 authored
Now it matches the information in xinfo-stream.json
-
- 21 Feb, 2024 1 commit
-
-
Binbin authored
This field was added in #12996, but we forgot to add it to the JSON file. This also causes reply-schemas-validator to fail.
-
- 20 Feb, 2024 1 commit
-
-
Binbin authored
Recently I saw in CI that reply-schemas-validator fails here:
```
Failed validating 'minimum' in schema[1]['properties']['groups']['items']['properties']['consumers']['items']['properties']['active-time']:
    {'description': 'Last time this consumer was active (successful reading/claiming).',
     'minimum': 0,
     'type': 'integer'}

On instance['groups'][0]['consumers'][0]['active-time']:
    -1729380548878722639
```
The reason is that in the fuzzer, we may restore a corrupted active-time, which will cause the reply schema CI to fail. The fuzzer can corrupt the state in many places, which can lead to bugs that mess up the reply, so we decided to skip logreqres. Also, seen-time is the same type as active-time, so the minimum is added there as well.

---------

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 19 Feb, 2024 1 commit
-
-
guybe7 authored
Add a README about the command JSON folder: what it does, and who should (not) use it. See discussion https://github.com/redis/redis/issues/9359#issuecomment-1936420698

---------

Co-authored-by:
Oran Agra <oran@redislabs.com> Co-authored-by:
Binbin <binloveplay1314@qq.com>
-
- 04 Feb, 2024 1 commit
-
-
Daz authored
The JSON files lack the following structural API changes:
- GEORADIUSBYMEMBER: add the ANY option for COUNT since 6.2.0.
- GEORADIUSBYMEMBER_RO: add the ANY option for COUNT since 6.2.0.
- GEORADIUS_RO: add support for uppercase unit names since 7.0.0.
- GEORADIUSBYMEMBER_RO: add support for uppercase unit names since 7.0.0.

---------

Signed-off-by:
daz-3ux <daz-3ux@proton.me> Co-authored-by:
bodong.ybd <bodong.ybd@alibaba-inc.com> Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by:
yangpengda.333 <yangpengda.333@bytedance.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 30 Jan, 2024 2 commits
-
-
Chen Tianjie authored
Add a way to HSCAN a hash key and get only the field names.

Command syntax is now:
```
HSCAN key cursor [MATCH pattern] [COUNT count] [NOVALUES]
```
When `NOVALUES` is on, the command will only return keys in the hash.

---------

Co-authored-by:
Viktor Söderqvist <viktor.soderqvist@est.tech>
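A quick illustration of the new option (a made-up session):

```
127.0.0.1:6379> HSET myhash f1 v1 f2 v2
(integer) 2
127.0.0.1:6379> HSCAN myhash 0 NOVALUES
1) "0"
2) 1) "f1"
   2) "f2"
```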
-
Slava Koyfman authored
Adds the ability to kill clients older than a specified age. Also fixed the age calculation in `catClientInfoString` to use `commandTimeSnapshot` instead of the old `server.unixtime`, and added missing documentation for `CLIENT KILL ID` to the output of `CLIENT help`.

---------

Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 23 Jan, 2024 1 commit
-
-
Binbin authored
In #11568 we removed the NOSCRIPT flag from commands, e.g. removing the NOSCRIPT flag from WAIT, aiming to allow them in scripts and let them implicitly behave in the non-blocking way. This PR removes the NOSCRIPT flag from WAITAOF just like WAIT (to be symmetrical), and also adds the BLOCKING flag for WAIT and WAITAOF.
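For instance, a script can now call WAIT, which succeeds as long as it does not end up blocking (a sketch, assuming a standalone instance with no replicas, so `WAIT 0 0` returns immediately):

```
127.0.0.1:6379> EVAL "return redis.call('WAIT', 0, 0)" 0
(integer) 0
```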
-
- 08 Jan, 2024 1 commit
-
-
debing.sun authored
In #10122, we set the destination key's flag of SINTERSTORE to `RW`, however, this command doesn't actually read or modify the destination key, just overwrites it. Therefore, we change it to `OW` similarly to all other *STORE commands.
-
- 10 Dec, 2023 1 commit
-
-
Binbin authored
overhead.hashtable.slot-to-keys was added in 7.0 in #10017, then removed in #11695. Now remove it from reply_schema.
-
- 12 Oct, 2023 1 commit
-
-
zhaozhao.zz authored
In #11568 we removed the NOSCRIPT flag from commands and kept the BLOCKING flag, aiming to allow them in scripts and let them implicitly behave in the non-blocking way. In that sense, the old behavior was to allow LPOP and reject BLPOP; the new behavior is to allow BLPOP too, and fail it only in case it ends up blocking. So likewise, so far we allowed XREAD and rejected XREAD BLOCK, and we will now allow that too, and only reject it if it ends up blocking.
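For example, the following call now succeeds because the stream already has an entry at the requested ID, so it does not end up blocking (the entry ID shown is invented):

```
127.0.0.1:6379> XADD mystream * field value
"1700000000000-0"
127.0.0.1:6379> EVAL "return redis.call('XREAD', 'BLOCK', 0, 'STREAMS', KEYS[1], '0')" 1 mystream
1) 1) "mystream"
   2) 1) 1) "1700000000000-0"
         2) 1) "field"
            2) "value"
```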
-
- 10 Oct, 2023 1 commit
-
-
Binbin authored
The current commands.json doesn't mention the special NO ONE arguments. This change is also applied to SLAVEOF
-
- 10 Sep, 2023 1 commit
-
-
Binbin authored
An unintentional change was introduced in #10536: we used to use addReplyLongLong and now it is addReplyBulkLongLong; revert it back to the previous behavior.
-
- 04 Sep, 2023 1 commit
-
-
nihohit authored
Updated the command tips for ACL SAVE / SETUSER / DELUSER, CLIENT SETNAME / SETINFO, and LATENCY RESET. The tips now match CONFIG SET, since there's a similar behavior for all of these commands - the user expects to update the various configurations & states on all nodes, not only on a single, random node. For LATENCY RESET the response tip is now agg_sum. Co-authored-by:
Shachar Langbeheim <shachlan@amazon.com>
-
- 31 Aug, 2023 1 commit
-
-
Binbin authored
Also added a test to cover this case, so that it is covered by the reply schemas check.
-
- 30 Aug, 2023 1 commit
-
-
nihohit authored
Since the three commands have similar behavior (change config, return OK), the tips that govern how they should behave should be similar. Co-authored-by:
Shachar Langbeheim <shachlan@amazon.com>
-
- 15 Aug, 2023 1 commit
-
-
Binbin authored
We iterate over all replicas to get the result, so the time complexity should be O(N), just like CLUSTER NODES, whose complexity is O(N).
-
- 05 Aug, 2023 1 commit
-
-
Binbin authored
GEOHASH / GEODIST / GEOPOS use zsetScore to get the score; in skiplist encoding, we use dictFind to get the score, which is O(1), the same as the ZSCORE command. It is not clear why these commands had O(log(N)) and O(N) until now.
-
- 25 Jul, 2023 1 commit
-
-
nihohit authored
Changing the response and request policy of a few commands, see https://redis.io/docs/reference/command-tips

1. RANDOMKEY used to have no response policy, which means that when sent to multiple shards, the responses should be aggregated. This normally applies to commands that return arrays, but since RANDOMKEY replies with a simple string, it actually requires a SPECIAL response policy (for the client to select just one).
2. SCAN used to have no response policy, but although the key names part of the response can be aggregated, the cursor part certainly can't.
3. MSETNX had a request policy of MULTI_SHARD and a response policy of AGG_MIN, but in fact the contract with MSETNX is that when one key exists, it returns 0 and doesn't set any key. Routing it to multiple shards would mean that if one failed and another succeeded, its atomicity is broken and it's impossible to return a valid response to the caller.

Co-authored-by:
Shachar Langbeheim <shachlan@amazon.com> Co-authored-by:
Oran Agra <oran@redislabs.com>
-
- 20 Jun, 2023 1 commit
-
-
Binbin authored
The parameter name is WITHSCORE instead of WITHSCORES.
-
- 19 Jun, 2023 1 commit
-
-
Binbin authored
In the original implementation, the time complexity of the commands is actually O(N*M), where N is the number of patterns the client is already subscribed to and M is the number of patterns to subscribe to. The docs are all wrong about this. Specifically, because the original client->pubsub_patterns is a list, we need to do listSearchKey, which is O(N). In this PR, we change it to a dict, so the search becomes O(1). At the same time, both pubsub_channels and pubsubshard_channels are dicts. Changing pubsub_patterns to a dictionary improves the readability and maintainability of the code.
-
- 13 Jun, 2023 1 commit
-
-
Harkrishn Patro authored
It would be helpful for clients to get cluster slots/shards information during a node failover and while it is loading data.
-
- 26 May, 2023 1 commit
-
-
Binbin authored
It was missing in #12223, and the reply-schemas daily was failing:
```
jsonschema.exceptions.ValidationError: 'nothing' is not valid under any of the given schemas

Failed validating 'oneOf' in schema[0]['properties']['loglevel']:
    {'oneOf': [{'const': 'debug'},
               {'const': 'verbose'},
               {'const': 'notice'},
               {'const': 'warning'},
               {'const': 'unknown'}]}

On instance['loglevel']:
    'nothing'
```
-
- 17 May, 2023 1 commit
-
-
Wen Hui authored
Extend SENTINEL CONFIG SET and SENTINEL CONFIG GET to be compatible with variadic CONFIG SET and CONFIG GET and allow multiple parameters to be modified in a single call atomically. Co-authored-by:
Oran Agra <oran@redislabs.com>
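For example, several parameters can now be set and fetched in one call (a sketch; the parameter names are existing Sentinel config parameters, and the exact reply formatting may vary):

```
127.0.0.1:26379> SENTINEL CONFIG SET resolve-hostnames yes announce-hostnames yes
OK
127.0.0.1:26379> SENTINEL CONFIG GET resolve-hostnames announce-hostnames
1) "resolve-hostnames"
2) "yes"
3) "announce-hostnames"
4) "yes"
```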
-
- 10 May, 2023 1 commit
-
-
Binbin authored
This pattern is from COMMAND INFO: Returns information about one, multiple or all commands. Also re-generate commands.def; the GEO change was missing in #12151.
-