- 07 Sep, 2016 1 commit
-
antirez authored
-
- 01 Sep, 2016 2 commits
-
antirez authored
Technically, as soon as 64-bit Redis gets proper support for loading collections and/or DBs with more than 2^32 elements, the 32-bit version should be modified to check whether the length read from rdbLoadLen() overflows. This would only apply to huge RDB files created with a 64-bit instance and later loaded into a 32-bit instance.
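A minimal sketch of the kind of guard described, with a hypothetical helper name (rdbLoadLen() is the real function; only the check itself is illustrated here):

    #include <stdint.h>

    /* Hypothetical guard: on a 32-bit build, a length read from an RDB
     * produced by a 64-bit instance may exceed what size_t can address. */
    int rdbLenFitsPlatform(uint64_t len) {
        if (len > SIZE_MAX) return 0;  /* would overflow on this build */
        return 1;
    }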
-
antirez authored
-
- 11 Aug, 2016 1 commit
-
antirez authored
-
- 05 Aug, 2016 1 commit
-
Salvatore Sanfilippo authored
Display the nodes' proper summary once the cluster is created using redis-trib
-
- 04 Aug, 2016 2 commits
-
Salvatore Sanfilippo authored
Use the standard predefined identifier __func__ (since C99)
-
Guo Xiao authored
Fix warning: ISO C does not support '__FUNCTION__' predefined identifier [-Wpedantic]
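For illustration, the portable form (function name hypothetical):

    #include <stdio.h>

    void logCallerExample(void) {
        /* __func__ is standard since C99; __FUNCTION__ is a
         * compiler-specific extension that -Wpedantic flags. */
        printf("called from %s\n", __func__);
    }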
-
- 03 Aug, 2016 5 commits
-
antirez authored
-
antirez authored
After all, crashing at every API misuse makes everybody's life more complex.
-
antirez authored
This is an attempt at mitigating problems due to cross protocol scripting, an attack targeting services that speak line-oriented protocols, like Redis, and can accept an HTTP request as valid protocol by discarding its invalid parts and executing the payload sent, for example, via a POST request. For this to be effective, when we detect POST or Host: we terminate the connection asynchronously, and the networking code was modified so that no further input from that client is ever processed. It was later verified that in a pipelined request containing a POST command, the successive commands are not executed.
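A minimal sketch of the idea, with hypothetical helper names and a simplified client struct (the real networking code differs):

    #include <strings.h>

    #define CLIENT_CLOSE_AFTER_REPLY (1<<6)  /* illustrative flag value */

    typedef struct client { int flags; } client;

    /* If the first token of an inline request looks like HTTP, flag the
     * client so it is closed asynchronously and no further input from
     * the connection is ever processed. */
    void rejectCrossProtocol(client *c, const char *firsttoken) {
        if (!strcasecmp(firsttoken, "POST") ||
            !strcasecmp(firsttoken, "Host:"))
        {
            c->flags |= CLIENT_CLOSE_AFTER_REPLY;
        }
    }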
-
antirez authored
-
antirez authored
-
- 02 Aug, 2016 3 commits
-
antirez authored
RedisModule_StringRetain() allows, when automatic memory management is on, to keep string objects alive after the callback returns. It can also be used to leverage Redis' reference counting of objects inside modules. This is useful because sometimes, when implementing new data types, we want to reference RedisModuleString objects inside the module's private data structures, so those string objects must remain valid after the callback returns even if they are not referenced inside the Redis key space.
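A sketch of the intended usage pattern inside a module command callback, assuming RedisModule_StringRetain() takes the context and the string (the struct and command names are hypothetical):

    #include "redismodule.h"

    typedef struct MyPrivateData { RedisModuleString *label; } MyPrivateData;

    int MyCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 2) return RedisModule_WrongArity(ctx);
        MyPrivateData *pd = RedisModule_Alloc(sizeof(*pd));
        /* Without the retain, automatic memory management would free
         * argv[1] when this callback returns. */
        RedisModule_StringRetain(ctx, argv[1]);
        pd->label = argv[1];
        /* ... link pd into the module's private data structures ... */
        return RedisModule_ReplyWithSimpleString(ctx, "OK");
    }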
-
Qu Chen authored
-
antirez authored
-
- 29 Jul, 2016 1 commit
-
antirez authored
The problem was fixed in the antirez/linenoise repository by applying a patch contributed by @lamby. Here the updated version is merged into the Redis source tree. Close #1418. Close #3322.
-
- 28 Jul, 2016 1 commit
-
antirez authored
-
- 27 Jul, 2016 2 commits
-
antirez authored
This feature is useful, especially in deployments using Sentinel to set up Redis HA, where the slave runs behind NAT or port forwarding, so that the auto-detected port/ip addresses, as listed in the "INFO replication" output of the master or as provided by the "ROLE" command, don't match the real addresses at which the slave is reachable for connections.
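The configuration this refers to, on the slave side (the addresses shown are placeholders):

    # redis.conf of the slave behind NAT/port forwarding:
    slave-announce-ip 203.0.113.5
    slave-announce-port 6380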
-
antirez authored
By grepping the continuous integration errors log, a number of GEORADIUS test failures were detected. Fortunately when a GEORADIUS failure happens, the test suite logs enough information to reproduce the problem: the PRNG seed, coordinates and radius of the query. By reproducing the issues, three different bugs were discovered and fixed in this commit. This commit also improves the already good reporting of the fuzzer and adds the failure vectors as regression tests.

The issues found:

1. We need larger squares around the poles in order to cover the area requested by the user. There were already checks to use a smaller step (larger squares), but the limit set (+/- 67 degrees) was not enough in certain edge cases, so 66 is used now.

2. Even near the equator, when the search area center is very near the edge of the square, the north, south, east or west square may not be able to fully cover the specified radius. Now a test is performed at the edge of the initially guessed search area, and larger squares are used in case the test fails.

3. Because of rounding errors between Redis and Tcl, the test sometimes signaled false positives. This is now addressed.

Whenever possible the original code was improved a bit in other ways. A debugging example stanza was added to make the next debugging session simpler when the next bug is found.
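A sketch of the safeguard from point 1, with hypothetical names (the real geohash step estimation differs):

    /* Above +/-66 degrees of latitude use one step less, i.e. larger
     * squares, so that the search area still covers the requested
     * radius. The previous +/-67 limit missed edge cases near the poles. */
    int adjustStepNearPoles(int step, double lat) {
        if (lat > 66.0 || lat < -66.0) step--;
        return step;
    }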
-
- 22 Jul, 2016 3 commits
-
antirez authored
In a previous commit the replication code was changed to centralize the BGSAVE-for-replication trigger in replicationCron(); however after further testing, the 1 second delay imposed by this change was not acceptable. So now the BGSAVE is only delayed if the AOF rewriting process is active. However the past commits made sure that replicationCron() is always able to trigger the BGSAVE when needed, making the code generally more robust. The new code is more similar to the initial @oranagra patch, where the BGSAVE was delayed only if an AOF rewrite was in progress. Trivia: delaying the BGSAVE uncovered a minor Sentinel issue that is now fixed.
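A sketch of the resulting decision, with simplified fields (-1 meaning no child process is active):

    typedef struct {
        long aof_child_pid;  /* -1 if no AOF rewrite is in progress */
        long rdb_child_pid;  /* -1 if no RDB save is in progress */
    } server_t;

    /* Start the replication BGSAVE at once unless an AOF rewrite is
     * active; otherwise replicationCron() retries on a later run. */
    int canStartReplicationBgsave(const server_t *s) {
        return s->aof_child_pid == -1 && s->rdb_child_pid == -1;
    }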
-
antirez authored
-
antirez authored
During the initial handshake with the master, a slave reports a very high disconnection time from its master (since technically it was disconnected since forever, the current UNIX time in seconds is reported). However when the slave is connected again, Sentinel may rescan the INFO output only after 10 seconds, which is a long time. During this time Sentinels consider the instance unable to fail over, so a useless delay is introduced.

Actually this hardly happened in practice, because when a slave's master is down the INFO period for slaves changes to 1 second. However when a manual failover is attempted immediately after adding slaves (as in the Sentinel unit test), this problem may happen.

This commit changes the INFO period to 1 second even when the slave's master is not down, but the slave reports to be disconnected from the master (by publishing, last time we checked, a master disconnection time field in INFO). This change is required as a result of an unrelated change in the replication code that adds a small delay in the master-slave first synchronization.
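A sketch of the period selection after this change, with illustrative names and constants in milliseconds:

    #define INFO_PERIOD      10000
    #define INFO_PERIOD_FAST  1000

    /* Slaves of a down master, slaves of a master with a failover in
     * progress, and now also slaves reporting a non-zero master link
     * down time are polled with INFO every second. */
    long long slaveInfoPeriod(int master_down, int failover_in_progress,
                              long long master_link_down_time) {
        if (master_down || failover_in_progress ||
            master_link_down_time != 0) return INFO_PERIOD_FAST;
        return INFO_PERIOD;
    }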
-
- 21 Jul, 2016 3 commits
-
antirez authored
This patch, written in collaboration with Oran Agra (@oranagra), is a companion to 780a8b1d. Together the two patches should avoid that the AOF and RDB saving processes can be spawned at the same time. Previously the conditions that could lead to two saving processes at the same time were:

1. AOF is enabled via CONFIG SET while an RDB saving process is already active.

2. The SYNC command decides to start an RDB saving process ASAP in order to serve a new slave that cannot partially resynchronize (but only if we have a disk target for replication; for diskless replication there is no such problem).

Condition "1" is not very severe, but "2" can happen often and is definitely good at degrading Redis performance in an unexpected way.

The two commits have the effect of always spawning RDB saves for replication in replicationCron() instead of attempting to start an RDB save synchronously. Moreover, when a BGSAVE or AOF rewrite must be performed, they are instead just postponed using flags that will try to perform such operations ASAP. Finally, the BGSAVE command was modified to accept a SCHEDULE option, so that if an AOF rewrite is in progress and this option is given, the command no longer returns an error but instead schedules an RDB save for when it will be possible to start it.
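An illustrative redis-cli session while an AOF rewrite is running (the exact reply wording may differ):

    127.0.0.1:6379> BGSAVE
    (error) ERR An AOF log rewriting in progress: can't BGSAVE right now.
    127.0.0.1:6379> BGSAVE SCHEDULE
    Background saving scheduled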
-
antirez authored
This makes the replication code conceptually simpler by removing the synchronous BGSAVE trigger in syncCommand(). This also means that socket and disk BGSAVE targets are handled by the same code.
-
antirez authored
-
- 20 Jul, 2016 3 commits
- 18 Jul, 2016 5 commits
-
antirez authored
Verified to have better real-world performance with power-law access patterns, because of the data accumulated across calls.
-
antirez authored
-
antirez authored
It is possible to get better results by using the pool like in the LRU case. Also, from tests run during the morning, I believe the current implementation has issues in the frequency decay function, which should decrease the counter at periodic intervals.
-
antirez authored
This way an observer can tell when the key is replaced with a new one having the same name.
-
antirez authored
-
- 15 Jul, 2016 1 commit
-
antirez authored
Implementation of LFU maxmemory policy for anything related to Redis objects. Still no actual eviction implemented.
-
- 14 Jul, 2016 4 commits
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
We have 24 total bits of space in each object to implement an LFU (Least Frequently Used) eviction policy. We split the 24 bits into two fields:

      8 bits        16 bits
    +--------+----------------+
    | LOG_C  | Last decr time |
    +--------+----------------+

LOG_C is a logarithmic counter that provides an indication of the access frequency. However this field must also be decremented, otherwise what used to be a frequently accessed key in the past would remain ranked like that forever, while we want the algorithm to adapt to access pattern changes. So the remaining 16 bits are used to store the "decrement time", a reduced-precision unix time (we take 16 bits of the time converted in minutes, since we don't care about wrapping around) at which the LOG_C counter is halved if it has a high value, or just decremented if it has a low value.

New keys don't start at zero, in order to have the ability to collect some accesses before being trashed away, so they start at COUNTER_INIT_VAL. The logarithmic increment performed on LOG_C takes COUNTER_INIT_VAL into account when incrementing the key, so that keys starting at COUNTER_INIT_VAL (or having a smaller value) have a very high chance of being incremented on access.

The simulation starts with a power-law access pattern, and later converts into a flat access pattern in order to see how the algorithm adapts. Currently the decrement operation period is 1 minute; however note that it is not guaranteed that each key will be scanned once every minute, so the actual frequency can be lower. However, under high load we access 3/5 keys for every newly inserted key (because of how Redis eviction works). This is a work in progress at this point, to evaluate whether this works well.
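A sketch of the logarithmic increment described above (constants illustrative; the periodic halving/decrement driven by the 16-bit field is omitted):

    #include <stdint.h>
    #include <stdlib.h>

    #define COUNTER_INIT_VAL 5
    #define LOG_FACTOR      10

    uint8_t lfuLogIncr(uint8_t counter) {
        if (counter == 255) return 255;        /* saturated */
        double r = (double)rand() / RAND_MAX;
        double baseval = (double)counter - COUNTER_INIT_VAL;
        if (baseval < 0) baseval = 0;          /* below the initial value:
                                                  increment almost surely */
        double p = 1.0 / (baseval * LOG_FACTOR + 1);
        return (r < p) ? counter + 1 : counter;
    }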
-
- 13 Jul, 2016 1 commit
-
antirez authored
The LRU eviction code used to make local choices: for each DB visited it selected the best key to evict. This was repeated for each DB. However this means that there could be DBs with very frequently accessed keys that were targeted by the LRU algorithm, while other DBs had many better candidates to expire. This commit attempts to fix this problem for the LRU policy. However the TTL policy is still not fixed by this commit; the TTL policy will be fixed in a successive commit. This is an initial (partial, because of the TTL policy) fix for issue #2647.
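A sketch of the global choice (names hypothetical): candidates sampled from every DB go into one shared pool tagged with their DB id, and the single best entry across all DBs is evicted:

    #define POOL_SIZE 16

    struct poolEntry {
        unsigned long long idle;  /* estimated idle time of the key */
        int dbid;                 /* DB the candidate belongs to */
        int used;                 /* slot filled? */
    };

    /* Return the pool index of the best victim across all DBs, or -1. */
    int bestVictim(const struct poolEntry *pool) {
        int best = -1;
        for (int i = 0; i < POOL_SIZE; i++) {
            if (!pool[i].used) continue;
            if (best == -1 || pool[i].idle > pool[best].idle) best = i;
        }
        return best;  /* caller evicts that key from pool[best].dbid */
    }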
-
- 12 Jul, 2016 1 commit
-
antirez authored
Destroying and recreating the pool[].key element is slow, so we allocate in pool[].cached SDS strings that can hold keys of up to 255 chars and try to reuse them. This provides a solid 20% performance improvement in benchmarks resembling real-world workloads.
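A sketch of the reuse scheme, using plain C strings instead of sds for self-containment:

    #include <string.h>
    #include <stdlib.h>

    #define CACHED_SIZE 255

    struct evictionPoolEntry {
        char cached[CACHED_SIZE+1];  /* preallocated, reused buffer */
        char *key;                   /* points into cached, or heap copy */
    };

    void poolEntrySetKey(struct evictionPoolEntry *e, const char *key) {
        size_t klen = strlen(key);
        if (klen <= CACHED_SIZE) {
            memcpy(e->cached, key, klen+1);
            e->key = e->cached;      /* common fast path: no allocation */
        } else {
            e->key = strdup(key);    /* rare path for very long keys */
        }
    }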
-