- 01 Oct, 2015 5 commits
- 30 Sep, 2015 5 commits
- 06 Aug, 2015 3 commits
antirez authored
antirez authored
antirez authored
Add the concept of slave capabilities to Redis: the slave now presents itself to the Redis master with a set of capabilities in the form: REPLCONF capa SOMECAPA capa OTHERCAPA ... This has the effect of setting slave->slave_capa with the corresponding SLAVE_CAPA macros that the master can test later to understand whether the slave will understand certain formats and protocols of the replication process. This makes it much simpler to introduce new replication capabilities in the future in a way that doesn't break old slaves or masters. This patch was designed and implemented together with Oran Agra (@oranagra).
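Below is a standalone sketch, not the actual replication.c code, of how such capability arguments could be folded into a bitmask; the SLAVE_CAPA_EOF flag and the parsing shape are illustrative assumptions:

```c
#include <stdio.h>
#include <string.h>
#include <strings.h>

#define SLAVE_CAPA_NONE 0
#define SLAVE_CAPA_EOF (1<<0)   /* hypothetical capability bit for illustration */

/* Fold "capa <name> [capa <name> ...]" arguments into a bitmask. Unknown
 * capability names are simply ignored, which is what keeps old masters and
 * old slaves compatible with each other. */
static int parseSlaveCapa(int argc, char **argv) {
    int capa = SLAVE_CAPA_NONE;
    for (int i = 0; i + 1 < argc; i += 2) {
        if (strcasecmp(argv[i], "capa") != 0) continue;
        if (strcasecmp(argv[i+1], "eof") == 0) capa |= SLAVE_CAPA_EOF;
    }
    return capa;
}

int main(void) {
    char *args[] = {"capa", "eof", "capa", "somefuturecapa"};
    int capa = parseSlaveCapa(4, args);
    /* Later the master can test the mask to pick a protocol variant. */
    printf("slave_capa=%d, eof capability: %s\n",
           capa, (capa & SLAVE_CAPA_EOF) ? "yes" : "no");
    return 0;
}
```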
- 05 Aug, 2015 1 commit
antirez authored
In previous commits we moved the FULLRESYNC to the moment we start the BGSAVE, so that the offset we provide is the right one. However this also means that we need to re-emit the SELECT statement every time a new slave starts to accumulate the changes. To obtain this effect in a cleaner way, the function that sends the FULLRESYNC reply was overloaded with the more important role of also doing this and changing the slave state. So it was renamed to replicationSetupSlaveForFullResync() to better reflect what it does now.
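As a rough standalone illustration of the SELECT re-emission part (names here are illustrative, this is not the actual replication.c logic): the master remembers which database is currently selected in the replication stream, and setting up a slave for full resync resets that state so the next propagated command is preceded by SELECT again.

```c
#include <stdio.h>

static int repl_seldb = -1;   /* db currently selected in the replication stream */

static void feedReplicationStream(int dbid, const char *cmd) {
    if (dbid != repl_seldb) {
        printf("SELECT %d\n", dbid);   /* re-emit SELECT when the db changes */
        repl_seldb = dbid;
    }
    printf("%s\n", cmd);
}

/* Sketch of the bookkeeping side of replicationSetupSlaveForFullResync():
 * forget the selected db so the new slave gets a SELECT before anything else. */
static void setupSlaveForFullResync(void) {
    repl_seldb = -1;
}

int main(void) {
    feedReplicationStream(0, "SET foo bar");
    setupSlaveForFullResync();                 /* a new slave starts accumulating */
    feedReplicationStream(0, "SET key2 val2"); /* SELECT 0 is emitted again */
    return 0;
}
```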
- 04 Aug, 2015 1 commit
antirez authored
This commit attempts to fix a bug involving PSYNC and diskless replication (currently experimental) found by Yuval Inbar from Redis Labs, which was later found to have even more far reaching effects (the bug also exists when diskless replication is off).

The gist of the bug is that a Redis master replies with +FULLRESYNC to a PSYNC attempt that fails and requires a full resynchronization. However, the baseline offset sent along with FULLRESYNC was always the current master replication offset. This is not ok, because there are many reasons that may delay the RDB file creation. And... guess what, the master offset we communicate must be the one of the time the RDB was created. So for example:

1) When the BGSAVE for replication is delayed since there is already one in progress, but it is not good for replication.
2) When the BGSAVE is not needed, as we attach the slave to one currently ongoing.
3) When, because of diskless replication, the BGSAVE is delayed.

In all the above cases the PSYNC reply is wrong and the slave may reconnect later claiming to need a wrong offset: this may cause data corruption later.
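A minimal standalone sketch of the invariant being fixed, under a much-simplified model of the replication stream: the +FULLRESYNC offset must be captured when the RDB snapshot is started, not read at reply time.

```c
#include <stdio.h>

static long long master_repl_offset = 0;   /* advances with every propagated write */
static long long psync_initial_offset = -1;

static void propagateWrite(long long nbytes) { master_repl_offset += nbytes; }

/* Snapshot point: everything up to this offset is contained in the RDB. */
static void startBgsaveForReplication(void) {
    psync_initial_offset = master_repl_offset;
}

static void replyFullResync(void) {
    /* The buggy version used master_repl_offset here instead. */
    printf("+FULLRESYNC <replid> %lld\r\n", psync_initial_offset);
}

int main(void) {
    propagateWrite(120);
    startBgsaveForReplication();   /* the RDB corresponds to offset 120 */
    propagateWrite(75);            /* writes keep arriving while the RDB is built */
    replyFullResync();             /* must advertise 120, not 195 */
    return 0;
}
```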
- 28 Jul, 2015 2 commits
- 27 Jul, 2015 1 commit
antirez authored
- 26 Jul, 2015 6 commits
- 29 Jun, 2015 3 commits
- 24 Jun, 2015 1 commit
antirez authored
- 23 Jun, 2015 2 commits
antirez authored
1. We no longer use a fake client, but just rewriting.
2. We group all the inserts into a single ZADD dispatch (big speed win).
3. As a side effect of the correct implementation, replication works.
4. The return value of the command is now correct.
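A standalone sketch of the "single ZADD dispatch" idea in point 2, using a stand-in score function (the real encoding lives in deps/geohash-int/): GEOADD's lon/lat/member triples are rewritten into one variadic ZADD instead of N separate insertions.

```c
#include <stdio.h>

/* Stand-in for the real 52-bit interleaved geohash score; NOT a real
 * geohash, illustration only. */
static double geohashScore(double lon, double lat) {
    return lat * 1000.0 + lon;
}

/* GEOADD key lon1 lat1 m1 [lon2 lat2 m2 ...] rewritten into one
 * "ZADD key score1 m1 [score2 m2 ...]" dispatch. */
static void geoaddAsSingleZadd(const char *key, int npoints,
                               const double *lon, const double *lat,
                               const char **member) {
    printf("ZADD %s", key);
    for (int i = 0; i < npoints; i++)
        printf(" %.17g %s", geohashScore(lon[i], lat[i]), member[i]);
    printf("\n");    /* one dispatch: one insert command, one reply to check */
}

int main(void) {
    double lon[] = {13.361389, 15.087269};
    double lat[] = {38.115556, 37.502669};
    const char *members[] = {"Palermo", "Catania"};
    geoaddAsSingleZadd("Sicily", 2, lon, lat, members);
    return 0;
}
```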
antirez authored
This commit simplifies the implementation in a few ways:

1. The zsetScore implementation was improved a bit and moved into t_zset.c, where it is now also used to implement the ZSCORE command.
2. Range extraction from the sorted set remains a separate implementation from the one in t_zset.c, but was hyper-specialized in order to avoid accumulating results into a list and then removing the ones outside the radius.
3. A new type is introduced: geoArray, which can accumulate geoPoint structures in a vector with a power-of-two expansion policy. This is useful since we have to call qsort() against it before returning the result to the user.
4. As a result of 1, 2, 3, the two files zset.c and zset.h are now removed, including the function to merge two lists (now handled with functions that can add elements to existing geoArray arrays) and the machinery used in order to pass zset results.
5. The geoPoint structure was simplified because of the general code structure simplification, so we no longer need to take references to objects.
6. Not counting the JSON removal, the refactoring removes 200 lines of code for the same functionality, with a simpler to read implementation.
7. GEORADIUS is now 2.5 times faster when testing with 10k elements and a radius resulting in 124 elements returned. However this is mostly a side effect of the refactoring and simplification. More speed gains can be achieved by trying to optimize the code.
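A self-contained sketch of a geoArray-like container with the power-of-two expansion policy described in point 3; field and function names here are illustrative rather than the exact geo.c definitions, but the flat vector is what makes the final qsort() call cheap.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct geoPoint {
    double longitude, latitude, dist;
    char *member;
} geoPoint;

typedef struct geoArray {
    geoPoint *array;
    size_t buckets;   /* allocated slots */
    size_t used;      /* slots in use */
} geoArray;

static geoArray *geoArrayCreate(void) {
    geoArray *ga = malloc(sizeof(*ga));
    ga->array = NULL;
    ga->buckets = 0;
    ga->used = 0;
    return ga;
}

/* Return a pointer to a fresh slot, growing the backing vector with a
 * power-of-two policy when it is full. */
static geoPoint *geoArrayAppend(geoArray *ga) {
    if (ga->used == ga->buckets) {
        ga->buckets = (ga->buckets == 0) ? 8 : ga->buckets * 2;
        ga->array = realloc(ga->array, sizeof(geoPoint) * ga->buckets);
    }
    return &ga->array[ga->used++];
}

static void geoArrayFree(geoArray *ga) {
    for (size_t i = 0; i < ga->used; i++) free(ga->array[i].member);
    free(ga->array);
    free(ga);
}

/* The flat vector makes sorting by distance a single qsort() call. */
static int distAsc(const void *a, const void *b) {
    const geoPoint *pa = a, *pb = b;
    return (pa->dist > pb->dist) - (pa->dist < pb->dist);
}

int main(void) {
    geoArray *ga = geoArrayCreate();
    const char *names[] = {"far", "near", "mid"};
    double dists[] = {120.0, 3.5, 40.2};
    for (int i = 0; i < 3; i++) {
        geoPoint *p = geoArrayAppend(ga);
        p->member = strdup(names[i]);
        p->dist = dists[i];
        p->longitude = p->latitude = 0;
    }
    qsort(ga->array, ga->used, sizeof(geoPoint), distAsc);
    for (size_t i = 0; i < ga->used; i++)
        printf("%s %.1f\n", ga->array[i].member, ga->array[i].dist);
    geoArrayFree(ga);
    return 0;
}
```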
- 22 Jun, 2015 2 commits
antirez authored
Now used both in geo.c and in t_zset.c to provide ZSCORE.
Matt Stancliff authored
Current todo:
- Replace functions in zset.{c,h} with a new unified Redis zset access API.

Once we get the zset interface fixed, we can squash relevant commits in this branch and have one nice commit to merge into unstable.

This commit adds:
- Geo commands
- Tests; runnable with: ./runtest --single unit/geo
- Geo helpers in deps/geohash-int/
- src/geo.{c,h} and src/geojson.{c,h} implementing geo commands
- Updated build configurations to get everything working
- TEMPORARY: src/zset.{c,h} implementing zset score and zset range reading without writing to client output buffers
- Modified linkage of one t_zset.c function for use in zset.c

Conflicts: src/Makefile src/redis.c
- 24 Mar, 2015 1 commit
antirez authored
Bug as old as Redis and blocking operations. It's hard to trigger since it only happens on an instance role switch, but the results are quite bad since an inconsistency between master and slave is created.

How to trigger the bug is a good description of the bug itself:

1. Client does "BLPOP mylist 0" in master.
2. Master is turned into slave, that replicates from New-Master.
3. Client does "LPUSH mylist foo" in New-Master.
4. New-Master propagates write to slave.
5. Slave receives the LPUSH, the blocked client gets served.

Now Master "mylist" key has "foo", Slave "mylist" key is empty.

Highlights:

* At step "2" above, the client remains attached, basically escaping any check performed during command dispatch: read-only slave, in that case.
* At step "5" the slave (that was the master) serves the blocked client, consuming a list element which is not consumed on the master side.

This scenario is technically likely to happen during failovers, however since Redis Sentinel already disconnects clients using the CLIENT command when changing the role of the instance, the bug is avoided in Sentinel deployments. Closes #2473.
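One plausible way to close this window, sketched standalone (the real fix may be structured differently): when the instance is switched to the slave role, unblock every client blocked on keys, so it has to reissue the command and hit the normal read-only-slave check.

```c
#include <stdio.h>

#define MAX_BLOCKED 16

typedef struct client {
    const char *name;
    int blocked;     /* waiting on a blocking op such as BLPOP */
} client;

static client *blocked_clients[MAX_BLOCKED];
static int blocked_count = 0;

static void blockClient(client *c) {
    c->blocked = 1;
    blocked_clients[blocked_count++] = c;
}

static void disconnectAllBlockedClients(void) {
    for (int i = 0; i < blocked_count; i++) {
        blocked_clients[i]->blocked = 0;
        printf("-UNBLOCKED sent to %s: instance state changed\n",
               blocked_clients[i]->name);
    }
    blocked_count = 0;
}

static void becomeSlaveOf(const char *host, int port) {
    printf("now replicating from %s:%d\n", host, port);
    /* Without this, a client blocked before the role switch would later be
     * served by a write arriving from the replication stream. */
    disconnectAllBlockedClients();
}

int main(void) {
    client c = {"client-1", 0};
    blockClient(&c);                  /* BLPOP mylist 0 on the master */
    becomeSlaveOf("10.0.0.5", 6379);  /* master demoted to slave */
    return 0;
}
```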
- 11 Mar, 2015 1 commit
antirez authored
Still many things to convert inside config.c in the next commits. Some const safety added in string object creation and in the addReply() family of functions.
- 27 Feb, 2015 1 commit
antirez authored
1. HVSTRLEN -> HSTRLEN. It's unlikely one needs the length of the key; it's not clear how the API would work ("by value" does not make sense) and there will be better names anyway.
2. Default is to return 0 when the field is missing.
3. Default is to return 0 when the key is missing.
4. The implementation was slower than needed and produced unnecessary COW.

Related issue #2415.
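A toy standalone illustration of the semantics in points 2 and 3 (the stand-in lookup below is not the real hash type API): a missing field and a missing key both yield 0.

```c
#include <stdio.h>
#include <string.h>

/* Toy lookup standing in for the real t_hash.c API. */
static const char *hashGet(const char *key, const char *field) {
    if (!strcmp(key, "myhash") && !strcmp(field, "name")) return "antirez";
    return NULL;  /* missing key or missing field */
}

static size_t hstrlen(const char *key, const char *field) {
    const char *value = hashGet(key, field);
    return value ? strlen(value) : 0;
}

int main(void) {
    printf("%zu\n", hstrlen("myhash", "name"));    /* 7 */
    printf("%zu\n", hstrlen("myhash", "missing")); /* 0: missing field */
    printf("%zu\n", hstrlen("nokey", "name"));     /* 0: missing key */
    return 0;
}
```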
- 24 Feb, 2015 1 commit
Matt Stancliff authored
Revert some size_t back to off_t.
reply_bytes needs to be 64 bits everywhere.
Revert bufpos to int since it's at most 16k into buf[].
- 21 Feb, 2015 1 commit
Jason Roth authored
The hvstrlen command returns the length of a hash field value.
- 11 Feb, 2015 1 commit
antirez authored
Several problems are addressed, but a few are still missing. Since replication of this command was more complex than others, as it needs to replicate multiple SREM commands, an old API able to do this was reused (it was kept inside the implementation since it was pretty obvious that sooner or later it would be useful). The API was improved a bit, so that now a command may opt out of the standard command replication performed when the server.dirty counter is incremented, in order to "manually" replicate what it wants.
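A standalone sketch of that opt-out mechanism, with illustrative names rather than the real server.c API: a command that bumps the dirty counter is normally propagated verbatim, unless it flags itself as handling replication manually and queues its own commands (here, several SREM calls) instead.

```c
#include <stdio.h>

static long long dirty = 0;
static int prevent_propagation = 0;

static void alsoPropagate(const char *cmd) {
    printf("replicate: %s\n", cmd);   /* queued for the replication stream */
}

static void spopWithCount(void) {
    /* ... remove N random members from the set ... */
    dirty += 3;              /* the dataset did change ... */
    prevent_propagation = 1; /* ... but don't propagate SPOP itself */
    alsoPropagate("SREM myset a");
    alsoPropagate("SREM myset b");
    alsoPropagate("SREM myset c");
}

static void callCommand(void (*proc)(void), const char *name) {
    long long dirty_before = dirty;
    prevent_propagation = 0;
    proc();
    if (dirty != dirty_before && !prevent_propagation)
        printf("replicate: %s\n", name);   /* standard verbatim propagation */
}

int main(void) {
    callCommand(spopWithCount, "SPOP myset 3");
    return 0;
}
```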
- 03 Feb, 2015 1 commit
antirez authored
This also makes the usage backward compatible, except for the command name. However, the old command name was less obvious, so it is probably worth breaking it. With the new setup the program's main() can perform argument parsing and everything else useful for an RDB check regardless of the Redis server itself.
- 28 Jan, 2015 1 commit
Matt Stancliff authored
redis-check-dump is now named redis-check-rdb and it runs as a mode of redis-server instead of an independent binary. You can now use 'redis-server redis.conf --check-rdb' to check the RDB defined in redis.conf. Using the --check-rdb argument checks the RDB and exits. We could potentially also allow the server to continue starting if the RDB check succeeds. This change also enables us to use RDB checking programmatically from inside Redis for certain failure conditions.
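A standalone sketch of the dispatch idea, with hypothetical helper names: the same main() either runs the RDB check and exits or continues with normal server startup, depending on the executable name or a --check-rdb argument.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int redis_check_rdb_main(int argc, char **argv) {
    (void)argc; (void)argv;
    printf("checking RDB file...\n");   /* placeholder for the real check */
    return 0;
}

/* Either invoked as redis-check-rdb, or as redis-server ... --check-rdb. */
static int checkRdbMode(int argc, char **argv) {
    if (strstr(argv[0], "redis-check-rdb") != NULL) return 1;
    for (int i = 1; i < argc; i++)
        if (!strcmp(argv[i], "--check-rdb")) return 1;
    return 0;
}

int main(int argc, char **argv) {
    if (checkRdbMode(argc, argv))
        exit(redis_check_rdb_main(argc, argv));  /* check the RDB and exit */
    printf("starting server...\n");              /* normal startup path */
    return 0;
}
```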