- 05 Jun, 2016 3 commits

Pierre Chapuis authored

Pierre Chapuis authored

Pierre Chapuis authored
- 02 Feb, 2016 1 commit

Itamar Haber authored
- 27 Jul, 2015 1 commit

antirez authored
- 26 Jul, 2015 5 commits
- 02 Jan, 2015 3 commits

Matt Stancliff authored
This removes:
- list-max-ziplist-entries
- list-max-ziplist-value

This adds:
- list-max-ziplist-size
- list-compress-depth

Also updates the config file with new sections and updates the tests to use the quicklist settings instead of the old list settings.
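
For illustration, the new settings as they might appear in redis.conf (values are illustrative, not necessarily the shipped defaults; negative sizes select byte-based node limits):

    list-max-ziplist-size -2      # cap each quicklist node at 8 KB
    list-compress-depth 0         # 0 = don't compress any nodes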

Matt Stancliff authored
Let the user set how many nodes to *not* compress. We can specify a compression "depth": how many nodes to leave uncompressed on each end of the quicklist.

- Depth 0 = disable compression.
- Depth 1 = only leave head/tail uncompressed.
  (read as: "skip 1 node on each end of the list before compressing")
- Depth 2 = leave head, head->next, tail->prev, tail uncompressed.
  ("skip 2 nodes on each end of the list before compressing")
- Depth 3 = Depth 2 + head->next->next + tail->prev->prev.
  ("skip 3 nodes...")
- etc.

This also:
- updates RDB storage to use native quicklist compression (if a node is already compressed) instead of uncompressing, generating the RDB string, then re-compressing the quicklist node.
- internalizes the "fill" parameter for the quicklist so we don't need to pass it to _every_ function. Now it's just a property of the list.
- allows a runtime-configurable compression option, so we can expose a compression parameter in the configuration file if people want to trade slight request-per-second performance for up to 90%+ memory savings in some situations.
- updates the quicklist tests to do multiple passes: 200k+ tests now.
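
A minimal C sketch of the depth rule described above; the function name and exact bounds checks are illustrative assumptions, not the actual quicklist internals:

    /* May the node at distance `idx` (0-based, measured from the nearer end
     * of a quicklist with `len` nodes) be compressed at depth `depth`? */
    static int node_may_compress(unsigned long idx, unsigned long len,
                                 unsigned int depth) {
        if (depth == 0) return 0;        /* depth 0 disables compression */
        if (len <= 2 * depth) return 0;  /* every node is within the ends */
        return idx >= depth;             /* skip `depth` nodes on each end */
    }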

Matt Stancliff authored
This replaces the separate ziplist and linkedlist representations for Redis list operations with the quicklist. Big thanks to everybody for the reviews and feedback in https://github.com/antirez/redis/pull/2143

- 10 Dec, 2014 1 commit

antirez authored
Key expiration on slaves is orchestrated by the master: sometimes the master will send the synthesized DEL to expire a key on the slave with a non-trivial delay (when the key is not accessed, only the incremental expiry algorithm will expire it in the background). During that time the key is logically expired, but slaves still return the key if you GET (or whatever) it. This is bad behavior.

However, we can't simply trust the slave's view of the key, since we need the master to be able to send write commands to update the slave data set, and DELs should only happen when the key is expired in the master in order to ensure consistency. Still, 99.99% of the issues with this behavior happen when a client which is not a master sends a read-only command. In this case we are safe and can consider the key as non-existing.

This commit makes a few changes in order to make this sane:

1. lookupKeyRead() is modified to return NULL if the above conditions are met.
2. Calls to lookupKeyRead() in commands that actually write to the data set are replaced with calls to lookupKeyWrite().

The checks are redundant, so, for example, if something was overlooked in "2", we are still safe anyway, since when the master writes the behavior is to not care about what expireIfNeeded() returns.

This commit is related to #1768, #1770, #2131.
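
A minimal sketch of change "1", assuming a hypothetical keyIsLogicallyExpired() helper (lookupKey() and server.masterhost are existing internals of that era):

    /* On a replica, a read-only lookup treats a logically expired key as
     * missing, while the actual deletion still waits for the DEL
     * synthesized by the master. */
    robj *lookupKeyReadSketch(redisDb *db, robj *key) {
        if (server.masterhost != NULL && keyIsLogicallyExpired(db, key))
            return NULL;              /* pretend the key does not exist */
        return lookupKey(db, key);    /* normal lookup path */
    }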

- 26 Jun, 2014 1 commit

antirez authored

- 21 May, 2014 1 commit

Matt Stancliff authored
Behrad Zari discovered [1] and Josiah reported [2]: if you block and wait for a list to exist, but the list is created by a non-push command, the blocked client never gets notified. This commit moves notification of blocked clients into the DB layer and away from individual commands.

Lists can be created by [LR]PUSH, SORT..STORE, RENAME, MOVE, and RESTORE. Previously, blocked client notifications were only triggered by [LR]PUSH, so your client would never get notified if a list were created by SORT..STORE, RENAME, RESTORE, etc. Blocked client notification now happens in one unified place, as shown in the sketch below:

- dbAdd() triggers notification when adding a list to the DB

Two new tests are added that fail prior to this commit. All tests pass.

Fixes #1668

[1]: https://groups.google.com/forum/#!topic/redis-db/k4oWfMkN1NU
[2]: #1668
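
A sketch of the unified hook; the shape follows the description above, with signalListAsReady() standing in for whatever helper the commit actually wires into dbAdd():

    void dbAdd(redisDb *db, robj *key, robj *val) {
        sds copy = sdsdup(key->ptr);
        int retval = dictAdd(db->dict, copy, val);

        redisAssert(retval == DICT_OK);
        /* The one place where "a list key now exists" is signaled, so every
         * creator ([LR]PUSH, SORT..STORE, RENAME, MOVE, RESTORE) wakes
         * blocked clients. */
        if (val->type == REDIS_LIST) signalListAsReady(db, key);
    }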

- 10 Dec, 2013 1 commit

antirez authored
The Redis hash table implementation has many non-blocking features, like incremental rehashing; however, while deleting a large hash table there was no way to have a callback invoked in order to do some incremental work. This commit adds that support, as an optional callback argument to dictEmpty() that is currently called at a fixed interval (once every 65k deletions).
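
A sketch of the interval logic inside the bucket-freeing loop (simplified; the real loop also frees the chained entries and handles both hash tables of the dict). (i & 65535) == 0 fires once every 65536 buckets visited:

    for (i = 0; i < size; i++) {
        if (callback && (i & 65535) == 0) callback(d->privdata);
        /* ... free all entries chained in bucket i ... */
    }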

- 05 Dec, 2013 1 commit

antirez authored

- 03 Dec, 2013 1 commit

antirez authored

- 22 Jul, 2013 1 commit

antirez authored
Previously two string encodings were used for string objects:

1) REDIS_ENCODING_RAW: a string object with obj->ptr pointing to an sds string.
2) REDIS_ENCODING_INT: a string object where the obj->ptr void pointer is cast to a long.

This commit introduces an experimental new encoding called REDIS_ENCODING_EMBSTR that implements an object represented by an sds string that is not modifiable but allocated in the same memory chunk as the robj structure itself. The chunk looks like the following:

    +--------------+-----------+------------+--------+----+
    | robj data... | robj->ptr | sds header | string | \0 |
    +--------------+-----+-----+------------+--------+----+
                         |                   ^
                         +-------------------+

robj->ptr points to the contiguous sds string data, so the object can be manipulated with the same functions used to manipulate plain string objects; however, we need just one malloc and one free in order to allocate or release this kind of object. Moreover, it has better cache locality.

This new allocation strategy should benefit both memory usage and performance. A performance gain between 60 and 70% was observed during micro-benchmarks; however, there is more work to do to evaluate the performance impact and the memory usage behavior.
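
A sketch of the single-allocation constructor implied by the layout above (simplified; sdshdr here is the len/free/buf header of that era, and some robj fields are omitted):

    robj *createEmbeddedStringObjectSketch(const char *ptr, size_t len) {
        robj *o = zmalloc(sizeof(robj) + sizeof(struct sdshdr) + len + 1);
        struct sdshdr *sh = (void*)(o + 1);  /* header right after the robj */

        o->type = REDIS_STRING;
        o->encoding = REDIS_ENCODING_EMBSTR;
        o->ptr = sh->buf;                    /* points into the same chunk */
        o->refcount = 1;

        sh->len = len;
        sh->free = 0;                        /* embedded strings are immutable */
        memcpy(sh->buf, ptr, len);
        sh->buf[len] = '\0';
        return o;
    }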

- 28 Jan, 2013 3 commits

antirez authored
When keyspace events are enabled, the overhead is not severe but noticeable, so this commit introduces the ability to select subclasses of events in order to avoid generating events the user is not interested in. The events can be selected using redis.conf or CONFIG SET / GET.
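
For illustration, a possible selection in redis.conf (the flag string is an example: 'E' enables keyevent notifications, 'g' generic commands, 'l' list commands):

    notify-keyspace-events "Egl"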

antirez authored

antirez authored
decrRefCount used to take its argument as a void* pointer in order to be usable as a destructor where a 'void free_object(void*)' prototype is expected. However, this made it easier to introduce bugs by freeing the wrong pointer. This commit fixes the argument type and introduces a new wrapper called decrRefCountVoid() that can be used when the void* argument is needed.
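
The wrapper is small enough to sketch in full (treat this as a sketch of the shape described above rather than the exact committed code):

    /* Type-safe refcount drop for robj pointers. */
    void decrRefCount(robj *o);

    /* void* adapter, for dict/list destructor slots that expect a
     * 'void free_object(void*)' prototype. */
    void decrRefCountVoid(void *o) {
        decrRefCount(o);
    }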

- 19 Jan, 2013 1 commit

guiquanz authored

- 02 Dec, 2012 2 commits

antirez authored
To store the keys we block on during a blocking pop operation, in the case where the client is blocked waiting for more data to arrive, we used a simple linear array of redis objects in the blockingState structure:

    robj **keys;
    int count;

However, in order to fix issue #801 we also used a dictionary, to avoid ending up in the blocked clients queue for the same key multiple times with the same client. The dictionary was only temporary, just to avoid duplicates, but since we create / destroy it anyway there is no point in doing this duplicated work, so this commit simply uses a dictionary as the main structure to store the keys we are blocked on. So instead of the previous fields we now just have:

    dict *keys;

This simplifies the code and reduces the work done by the server during a blocking POP operation.
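
A sketch of the resulting structure (the real blockingState carries more fields, such as the timeout and the BRPOPLPUSH target; only the keys field is the point here):

    typedef struct blockingState {
        dict *keys;   /* keys we are blocked on; a dict gives O(1) membership
                       * checks, so duplicate keys are rejected for free */
        /* ... timeout, target, and other fields omitted ... */
    } blockingState;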

antirez authored
Sending a command like:

    BLPOP foo foo foo foo 0

resulted in a crash before this commit, since the client ended up being inserted into the waiting list for this key multiple times. This caused the function handleClientsBlockedOnLists() to fail, because we have code like this:

    if (de) {
        list *clients = dictGetVal(de);
        int numclients = listLength(clients);

        while(numclients--) {
            listNode *clientnode = listFirst(clients);
            /* serve clients here... */
        }
    }

The code to serve clients used to remove the served client from the waiting list, so if a client is blocking multiple times, eventually the call to listFirst() will return NULL, or worse, will access random memory, since the list may no longer exist as it is removed by the function unblockClientWaitingData() if there are no more clients waiting for this list.

To avoid making the rest of the implementation more complex, this commit modifies blockForKeys() so that a client is put just a single time into the waiting list for a given key.

Since it is Saturday, I hope this fixes issue #801.
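
A sketch of the dedup inside blockForKeys(), assuming the per-client set of blocked keys is the dict introduced by the previous commit; dictAdd() failing on an existing key is what skips the duplicate:

    for (j = 0; j < numkeys; j++) {
        /* dictAdd() returns DICT_ERR if the key is already present, so a
         * key repeated on the command line is registered only once. */
        if (dictAdd(c->bpop.keys, keys[j], NULL) != DICT_OK) continue;
        incrRefCount(keys[j]);
        /* ... also add the client to the waiting list for keys[j] ... */
    }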

- 08 Nov, 2012 1 commit

antirez authored

- 17 Sep, 2012 1 commit

antirez authored
Redis provides support for blocking operations such as BLPOP or BRPOP. These operations are identical to normal LPOP and RPOP operations as long as there are elements in the target list, but if the list is empty they block waiting for new data to arrive.

All the clients blocked waiting for the same list are served in a FIFO way, so the first that blocked is the first to be served when more data is pushed by another client into the list.

The previous implementation of blocking operations was conceived to serve clients in the context of push operations. For instance:

1) There is a client "A" blocked on list "foo".
2) The client "B" performs `LPUSH foo somevalue`.
3) The client "A" is served in the context of the "B" LPUSH, synchronously.

Processing things in a synchronous way was useful: if "B" pushes a value that is instantly served to "A", from the point of view of the database it is a NOP (no operation), that is, nothing is replicated, nothing is written to the AOF file, and so forth.

However, later we implemented two things:

1) Variadic LPUSH, which could add multiple values to a list in the context of a single call.
2) BRPOPLPUSH, a version of BRPOP that also provides a "PUSH" side effect when receiving data.

This forced us to make the synchronous implementation more complex. If client "B" is waiting for data, and "A" pushes three elements in a single call, we needed to propagate an LPUSH with a missing argument in the AOF and replication link. We also needed to make sure to replicate the LPUSH side of BRPOPLPUSH, but only if it did not in turn happen to serve another blocking client on another list ;)

This was complex, but with a few mutually recursive functions everything worked as expected... until one day we introduced scripting in Redis.

Scripting + synchronous blocking operations = Issue #614.

Basically you can't "rewrite" a script to have just a partial effect on the replicas and the AOF file if the script happened to serve a few blocked clients.

The solution to all these problems, implemented by this commit, is to change the way we serve blocked clients. Instead of serving the blocked clients synchronously, in the context of the command performing the PUSH operation, it is now an asynchronous and iterative process:

1) If a key that has clients blocked waiting for data is the subject of a list push operation, we simply mark the key as "ready" and put it into a queue.
2) Every command pushing stuff onto lists, such as a variadic LPUSH, a script, or whatever it is, is replicated verbatim without any rewriting.
3) Every time a Redis command, a MULTI/EXEC block, or a script completes its execution, we run the list of keys that are ready to serve blocked clients (as more data arrived), and process this list serving the blocked clients.
4) As a result of "3" more keys may be ready again for other clients (as a result of BRPOPLPUSH we may have push operations), so we iterate back to step "3" if needed.

The new code has much simpler semantics and an easier-to-understand implementation, with the disadvantage of not being able to "optimize out" a PUSH+BPOP as a no-op.

This commit will be tested with care before the final merge; more tests will likely be added.
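
A sketch of the iterative draining in steps "3" and "4" (the structure follows the description above; treat the names and body as a simplification):

    void handleClientsBlockedOnListsSketch(void) {
        while (listLength(server.ready_keys) != 0) {
            list *l = server.ready_keys;       /* keys made ready so far */
            server.ready_keys = listCreate();  /* new pushes land here */

            /* ... for each ready key, serve blocked clients FIFO;
             * serving a BRPOPLPUSH may mark more keys ready ... */

            listRelease(l);
        }
    }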

- 18 Apr, 2012 1 commit

antirez authored

- 27 Mar, 2012 1 commit

Premysl Hruby authored

- 29 Feb, 2012 2 commits
- 28 Feb, 2012 2 commits
- 31 Jan, 2012 1 commit

antirez authored

- 19 Dec, 2011 1 commit

BigCat authored
Using `getLongFromObjectOrReply` instead of `atoi` if possible. The following functions are modified:

* lrangeCommand
* ltrimCommand
* lremCommand
* lindexCommand
* lsetCommand
* zunionInterGenericCommand
* genericZrangebyscoreCommand
* sortCommand
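
A sketch of the pattern in one of the listed commands (lrangeCommand shown; REDIS_OK-era return codes assumed). A malformed index now produces an error reply instead of whatever atoi() made of it:

    long start, end;

    if (getLongFromObjectOrReply(c, c->argv[2], &start, NULL) != REDIS_OK ||
        getLongFromObjectOrReply(c, c->argv[3], &end, NULL) != REDIS_OK)
        return;  /* an error reply has already been sent to the client */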

- 08 Nov, 2011 1 commit

antirez authored

- 04 Oct, 2011 1 commit

antirez authored

- 14 Sep, 2011 1 commit

antirez authored
Optimize LRANGE to scan the list starting from the head or the tail in order to traverse the minimal number of elements. Thanks to Didier Spezia for noticing the problem and providing a patch.
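
A sketch of the trick for the linked-list encoding; listIndex() accepting negative indexes that count from the tail is what makes it a one-liner:

    /* If `start` lies in the second half of a list of `llen` elements,
     * convert it to the equivalent negative index so listIndex() walks
     * from the tail, traversing at most llen/2 nodes. */
    if (start > llen/2) start -= llen;   /* e.g. start 900 of 1000 -> -100 */
    ln = listIndex(o->ptr, start);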

- 12 Sep, 2011 1 commit

antirez authored