- 11 Sep, 2012 1 commit
-
-
antirez authored
Unfortunately we still had the lame atoi() with no error checking in place, so "SELECT foo" would work as "SELECT 0". This was not a huge problem per se, but some people expected that DBs could be identified by strings and not just numbers, and with no error returned they got the feeling this worked, while the actual behavior was different. Now getLongFromObjectOrReply() is used, as almost everywhere else across the code, generating an error if the argument is not an integer or overflows the long type. Thanks to @mipearson for reporting this on Twitter.
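A minimal sketch of the difference (hypothetical helper name, not the actual Redis code): strict parsing with strtol() rejects "foo", where atoi() silently returns 0.

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch: parse a database index strictly, rejecting
 * non-numeric input and out-of-range values instead of silently
 * mapping "foo" to 0 the way atoi() does. */
static int parse_db_index(const char *s, long *out) {
    char *end;
    errno = 0;
    long v = strtol(s, &end, 10);
    if (errno == ERANGE || end == s || *end != '\0') return -1;
    *out = v;
    return 0;
}

int main(void) {
    long idx;
    printf("atoi(\"foo\") = %d\n", atoi("foo"));                   /* prints 0, no error */
    printf("strict parse: %s\n",
           parse_db_index("foo", &idx) == 0 ? "ok" : "error");     /* prints error */
    return 0;
}
```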
-
- 10 Sep, 2012 1 commit
-
-
antirez authored
-
- 05 Sep, 2012 2 commits
-
-
Haruto Otake authored
Remove an unsafe and unnecessary cast. Until now this cast could lead to a segmentation fault when end > UINT_MAX:

setbit foo 0 1
bitcount 0 4294967295 => ok
bitcount 0 4294967296 => segmentation fault

Note by @antirez: the commit was modified a bit to also change the string length type to long, since a string is guaranteed to be at most 512 MB in size, so we can work with the same type across the whole code path. A regression test was also added.
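A hedged sketch of the idea, with hypothetical names: keep the range and the string length in the same signed long type and clamp out-of-range values, instead of casting to a narrower unsigned type and truncating them.

```c
/* Hypothetical sketch of the range handling: start/end and the string
 * length all use the same signed long type (values are capped at 512 MB
 * anyway), and an end beyond the string is clamped, not cast. */
static void clamp_range(long strlen_bytes, long *start, long *end) {
    if (*start < 0) *start += strlen_bytes;        /* negative offsets count from the end */
    if (*end < 0) *end += strlen_bytes;
    if (*start < 0) *start = 0;
    if (*end >= strlen_bytes) *end = strlen_bytes - 1;  /* 4294967296 is clamped, not truncated */
}
```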
-
Saj Goonatilleke authored
REDIS_REPL_PING_SLAVE_PERIOD controls how often the master should transmit a heartbeat (PING) to its slaves. This period, which defaults to 10, is measured in seconds. Redis 2.4 masters used to ping their slaves every ten seconds, just like it says on the tin. The Redis 2.6 masters I have been experimenting with, on the other hand, ping their slaves *every second*. (master_last_io_seconds_ago never approaches 10.) I think the ping period was inadvertently slashed to one-tenth of its nominal value around the time REDIS_HZ was introduced. This commit reintroduces correct ping schedule behaviour.
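A rough sketch of how a period expressed in seconds interacts with a cron function invoked hz times per second; the names and the exact mechanism here are assumptions, not the actual Redis code. If the modulo is taken against the bare period instead of period * hz, the effective period shrinks by a factor of hz.

```c
/* Hypothetical serverCron-style tick, called `hz` times per second. */
#define PING_SLAVE_PERIOD 10   /* seconds, the documented default */

void cron_tick(long long *cronloops, int hz) {
    /* Correct: fires every PING_SLAVE_PERIOD seconds.
     * Using (*cronloops % PING_SLAVE_PERIOD) instead would fire every
     * PING_SLAVE_PERIOD / hz seconds, i.e. ten times too often at hz=10. */
    if ((*cronloops % ((long long)PING_SLAVE_PERIOD * hz)) == 0) {
        /* send PING to every connected slave */
    }
    (*cronloops)++;
}
```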
-
- 04 Sep, 2012 1 commit
-
-
antirez authored
SORT is able to return (faster than when ordering) unordered output if the "BY" clause is used with a constant value. However we try to play well with the scripting requirement of determinism, always providing sorted output when SORT (and other similar commands) are called by Lua scripts. So far we used the general mechanism in place in scripting to reorder SORT output: if the command has the "S" flag set, the Lua scripting engine takes an additional step when converting a multi bulk reply to a Lua value, calling a Lua sorting function. This is suboptimal, as we can do it faster inside SORT itself. It is also broken, as issue #545 shows us: basically when SORT is used with a constant BY, and additionally GET is also used, the Lua scripting engine was trying to order the output as a flat array, while it was actually a list of key-value pairs. What we do now is to recognize whether the caller of SORT is the Lua client (we can check this using the REDIS_LUA_CLIENT flag). If so, and if a "don't sort" condition is triggered by the BY option with a constant string, we force lexicographical sorting. This commit fixes the bug and improves the performance, and at the same time simplifies the implementation. This does not mean I'm smart today, it means I was stupid when I committed the original implementation ;)
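A sketch of the decision described above; the struct and field names are illustrative (only the REDIS_LUA_CLIENT flag is named in the commit itself).

```c
/* Hypothetical sketch: when the "don't sort" path would be taken (BY with
 * a constant pattern) but the caller is the Lua scripting client, sort
 * anyway, lexicographically, inside SORT itself. */
struct sort_opts {
    int dontsort;          /* 1 when BY uses a constant pattern           */
    int alpha;             /* 1 = lexicographical comparison              */
    const char *sortby;    /* BY pattern, or NULL to sort by the elements */
};

void adjust_for_lua_caller(struct sort_opts *o, int caller_is_lua_client) {
    if (o->dontsort && caller_is_lua_client) {
        o->dontsort = 0;   /* sort after all ...                          */
        o->alpha = 1;      /* ... lexicographically, for determinism      */
        o->sortby = NULL;  /* compare the elements themselves, not BY     */
    }
}
```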
-
- 03 Sep, 2012 1 commit
-
-
antirez authored
During the first synchronization step of the replication process, a Redis slave connects to the master in a non-blocking way. However, once the connection is established, the replication continues by sending the REPLCONF command, and sometimes the AUTH command if needed. Those commands are sent in a partially blocking way (blocking with a timeout in the order of seconds). Because it is common for a blocked master to accept connections even when it is actually not able to reply to the slave's requests, it was easy for a slave to block if the master had serious issues but was still able to accept connections on its listening socket. For this reason we now send an asynchronous PING request just after the non-blocking connection attempt succeeds, and wait for the reply before continuing with the replication process. It is very unlikely that a master able to reply to PING can't reply to the other commands. This solution was proposed by Didier Spezia (thanks!) so that we don't need to turn the whole replication process into a non-blocking affair, yet the probability of a slave getting blocked is minimal even in the event of a failing master. We also now use getsockopt(SO_ERROR) in order to check for errors ASAP in the event handler, instead of waiting for actual I/O to return an error. This commit fixes issue #632.
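For reference, the SO_ERROR check mentioned above looks roughly like this (a minimal sketch, not the actual Redis event handler):

```c
#include <sys/socket.h>

/* After a non-blocking connect() reports the socket as writable, ask the
 * kernel whether the connection actually succeeded before doing any I/O. */
static int socket_pending_error(int fd) {
    int err = 0;
    socklen_t len = sizeof(err);
    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) == -1)
        return -1;      /* getsockopt itself failed */
    return err;         /* 0 means the connect succeeded */
}
```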
-
- 31 Aug, 2012 3 commits
-
-
antirez authored
Lua scripting uses a fake client in order to run commands in the context of a client, accumulate the reply, and convert it into a Lua object to return to the caller. This client is reused again and again, and is referenced by the globally accessible server.lua_client pointer. However after every call to redis.call() or redis.pcall(), handled by the luaRedisGenericCommand() function, the reply_bytes field of the client was not reset to zero. This field is used to estimate the amount of memory currently used by the reply. Because of the missing reset, as script after script executed this value kept growing, and in the end on 32 bit systems it triggered the following assert: redisAssert(c->reply_bytes < ULONG_MAX-(1024*64)); On 64 bit systems this does not happen because it takes too much time to reach values near 2^64 for users to see the practical effect of the bug. Now in the cleanup stage of luaRedisGenericCommand() we reset the reply_bytes counter to zero, avoiding the issue. It is not practical to add a test for this bug, but the fix was manually tested using a debugger. This commit fixes issue #656.
-
antirez authored
A Redis slave can now be configured with a priority, an integer that is shown in the INFO output and can be read and set using the redis.conf file or the CONFIG GET/SET commands. This field is used by Sentinel during slave election: a slave with lower priority is preferred, and a slave with priority zero is never elected (it is considered impossible to elect even if it is the only slave available). A later commit will add support on the Sentinel side as well.
-
antirez authored
Redis used to crash with a call like the following:

EVAL "redis.call()" 0

Now an explicit check for at least one argument prevents the problem. This commit fixes issue #655.
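A minimal sketch of the argument-count guard, written as a hypothetical Lua C function stub rather than the actual luaRedisGenericCommand() code:

```c
#include <lua.h>
#include <lauxlib.h>

/* Hypothetical stub for redis.call()/redis.pcall(): refuse an invocation
 * with no arguments instead of proceeding and crashing later. */
static int redis_call_stub(lua_State *L) {
    int argc = lua_gettop(L);               /* number of arguments on the stack */
    if (argc == 0)
        return luaL_error(L, "Please specify at least one argument for redis.call()");
    /* ... build argv[] from the Lua stack and execute the command ... */
    return 1;
}
```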
-
- 28 Aug, 2012 1 commit
-
-
antirez authored
This fixes issue #539. Basically, if there is enough free memory the OS may buffer in memory the RDB file that the slave transfers to disk from the master. The file may then actually be flushed to disk all at once by the operating system when it gets closed by Redis, causing the close system call to block for a long time. This patch is a modified version of one provided by yoav-steinberg of @garantiadata (the original version was posted in the issue #539 comments), and tries to flush the OS buffers incrementally (every 8 MB of loaded data).
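A hedged sketch of incremental flushing with hypothetical names; the real code may use a different flushing primitive, fdatasync() is used here as the portable approximation.

```c
#include <unistd.h>

#define SYNC_EVERY_BYTES (8*1024*1024)

/* While writing the RDB stream received from the master to disk, flush the
 * OS buffers every 8 MB so the final close() does not have to flush
 * hundreds of megabytes at once. */
static ssize_t write_and_flush(int fd, const char *buf, size_t len,
                               size_t *since_last_sync) {
    ssize_t nwritten = write(fd, buf, len);
    if (nwritten <= 0) return nwritten;
    *since_last_sync += (size_t)nwritten;
    if (*since_last_sync >= SYNC_EVERY_BYTES) {
        fdatasync(fd);              /* push dirty pages to disk now */
        *since_last_sync = 0;
    }
    return nwritten;
}
```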
-
- 24 Aug, 2012 2 commits
-
-
antirez authored
-
antirez authored
The previous implementation of zmalloc.c was not able to handle an out-of-memory condition in an application-specific way: it just logged an error on standard error and aborted. The result was that in the case of an actual out of memory in Redis, where malloc returned NULL (on Linux this actually happens under specific overcommit policy settings and/or with little or no swap configured), the error was not properly logged in the Redis log. This commit fixes the problem, fixing issue #509. Now the out of memory condition is properly reported in the Redis log and a stack trace is generated. The approach used is to provide a configurable out-of-memory handler to zmalloc (otherwise the default one, logging the event on standard error, is used).
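A minimal sketch of the configurable-handler mechanism, with hypothetical function names rather than the real zmalloc API:

```c
#include <stdio.h>
#include <stdlib.h>

/* Default handler: log to standard error and abort. An application can
 * install its own handler (e.g. one writing to the Redis log and dumping
 * a stack trace) via the setter below. */
static void default_oom_handler(size_t size) {
    fprintf(stderr, "out of memory trying to allocate %zu bytes\n", size);
    abort();
}

static void (*oom_handler)(size_t) = default_oom_handler;

void set_oom_handler(void (*handler)(size_t)) { oom_handler = handler; }

void *xmalloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL) oom_handler(size);   /* handler is expected not to return */
    return p;
}
```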
-
- 22 Aug, 2012 3 commits
-
-
antirez authored
This new hiredis feature allows us to reuse a previous context reader buffer even if it is already very big, in order to maximize performance with big payloads (usually hiredis re-creates buffers when they are too big and unused, in order to save memory).
-
Pieter Noordhuis authored
-
Pieter Noordhuis authored
-
- 01 Aug, 2012 1 commit
-
-
antirez authored
-
- 31 Jul, 2012 3 commits
-
-
Michael Parker authored
Note by @antirez: this code was never compiled because utils.c lacked the float.h include, so we never noticed this variable was misspelled in the past. This should provide a noticeable speed boost when saving certain types of databases with many sorted sets inside.
-
Saj Goonatilleke authored
If Redis only manages to write out a partial buffer, the AOF file won't load back into Redis the next time it starts up. It is better to discard the short write than waste time running redis-check-aof.
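A minimal sketch of the idea, with hypothetical names: roll the file back to its previous length whenever the append comes up short, so the AOF never ends with a half-written command.

```c
#include <unistd.h>
#include <sys/types.h>

/* Append a fully formed command to the AOF; on a short write, truncate
 * the file back to its previous size and report the failure. */
static int aof_append(int fd, const char *buf, size_t len) {
    off_t old_size = lseek(fd, 0, SEEK_END);
    ssize_t nwritten = write(fd, buf, len);
    if (nwritten != (ssize_t)len) {
        if (nwritten > 0) ftruncate(fd, old_size);  /* discard the short write */
        return -1;
    }
    return 0;
}
```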
-
Saj Goonatilleke authored
Behaves like rdb_last_bgsave_status -- even down to reporting 'ok' when no rewrite has been done yet. (You might want to check that aof_last_rewrite_time_sec is not -1.)
-
- 22 Jul, 2012 2 commits
-
-
Steeve Lennmark authored
-
antirez authored
Redis loading data from disk, and a Redis slave disconnected from its master with serve-stale-data disabled, are two conditions where commands are normally refused by Redis, returning an error. However there is no reason to disable Pub/Sub commands as well, given that this layer does not interact with the dataset. To allow Pub/Sub in as many contexts as possible is especially interesting now that Redis Sentinel uses Pub/Sub of a Redis master as a communication channel between Sentinels. This commit allows Pub/Sub to be used in the above two contexts where it was previously denied.
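A hedged sketch of the dispatch-time exception; the real Redis check is done differently (through the command table), so the explicit name list below is purely illustrative.

```c
#include <strings.h>

/* While loading data from disk, or when a slave has stale data and
 * serve-stale-data is off, refuse dataset commands but let the Pub/Sub
 * family through, since it never touches the keyspace. */
static int command_allowed_while_loading(const char *name) {
    static const char *pubsub_cmds[] = {
        "subscribe", "unsubscribe", "psubscribe", "punsubscribe", "publish", NULL
    };
    for (int i = 0; pubsub_cmds[i]; i++)
        if (strcasecmp(name, pubsub_cmds[i]) == 0) return 1;
    return 0;
}
```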
-
- 18 Jul, 2012 1 commit
-
-
antirez authored
According to the C standard, char can be either signed or unsigned (it is up to the compiler), but Redis assumed it was signed in a few places. The practical effect of this patch is that Redis 2.6 will now run correctly on every system where char is unsigned, notably the Raspberry Pi and other ARM systems with GCC. Thanks to Georgi Marinov (@eesn on twitter) who reported the problem and allowed me to use his Raspberry Pi via SSH to trace and fix the issue!
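A small self-contained illustration of the pitfall:

```c
#include <stdio.h>

/* Whether plain `char` is signed is implementation-defined, so the same
 * comparison behaves differently on x86 (typically signed) and on ARM
 * with GCC (typically unsigned). */
int main(void) {
    char c = (char)0x80;    /* -128 if char is signed, 128 if unsigned */
    if (c < 0)
        printf("char is signed here\n");
    else
        printf("char is unsigned here\n");
    return 0;
}
```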
-
- 09 Jul, 2012 1 commit
-
-
jokea authored
-
- 07 Jul, 2012 2 commits
-
-
antirez authored
-
antirez authored
The REPLCONF command is an internal command (not designed to be used directly by normal clients) that allows a slave to set some replication-related state in the master before issuing SYNC to start the replication. The initial motivation for this command, and currently the only reason it is used by the implementation, is to let the slave communicate its listening port to the master, so that the master can show all the slaves with their listening ports in the "replication" section of the INFO output. This allows clients to auto-discover and query all the slaves attached to a master. Currently only a single option of the REPLCONF command is supported, called "listening-port", so the slave now starts the replication process with a chat like the following:

REPLCONF listening-port 6380
SYNC

Note that this works even if the master is an older version of Redis and does not understand REPLCONF, because the slave simply ignores the REPLCONF error. In the future REPLCONF can be used for partial replication and other replication-related features where there is a need to exchange information between master and slave. NOTE: this commit also fixes a bug: the INFO output already carried information about slaves, but the reported port was wrong, since it was obtained with getpeername(2) and was therefore just the ephemeral port used by the slave to connect to the master as a client.
-
- 21 Jun, 2012 2 commits
-
-
antirez authored
-
antirez authored
The way we compared the authentication password using strcmp() allowed an attacker to gain information about the password using a well-known class of attacks called "timing attacks". The bug appears to be practically not exploitable on most modern systems running Redis, since even using multiple bytes of difference in the input at a time instead of one, the difference in running time is in the order of 10 nanoseconds, making it hard to exploit even on a LAN. However attacks always get better, so we are providing a fix ASAP. The new implementation uses two fixed-length buffers and a constant-time comparison function, with the goal of: 1) completely avoiding leaking information about the content of the password, since the comparison is always performed between 512 characters and without conditionals; 2) partially avoiding leaking information about the length of the password. About "2", we still have a stage in the code where the real password and the user-provided password are copied into the static buffers, and we also run two strlen() operations against the two inputs, so the running time of the comparison is a fixed amount plus a time proportional to LENGTH(A)+LENGTH(B). This means the absolute time of the operation is still related to the length of the password in some way, but there is no way to change the input in order to get a difference in the comparison's execution time that is not simply proportional to the string provided by the user (because the password length is fixed). Thus in practical terms the attacker would have to discover LENGTH(PASSWORD) by looking at the whole execution time of the AUTH command and guessing a proportionality between the whole execution time and the password length: this appears to be mostly unfeasible in the real world. Also, protecting from this attack is not very useful in the case of Redis, as a brute force attack is feasible anyway if the password is too short, while with a long password it is hardly an issue that the attacker knows its length.
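A minimal sketch of such a constant-time comparison over fixed 512-byte buffers (hypothetical name, not necessarily the exact Redis implementation):

```c
#include <string.h>

/* Copy both passwords into fixed 512-byte buffers and compare every byte
 * unconditionally, so the running time does not depend on where the first
 * difference is. */
static int time_independent_equal(const char *a, const char *b) {
    char bufa[512] = {0}, bufb[512] = {0};
    int diff = 0;

    if (strlen(a) >= sizeof(bufa) || strlen(b) >= sizeof(bufb)) return 0;
    memcpy(bufa, a, strlen(a));
    memcpy(bufb, b, strlen(b));
    diff |= (strlen(a) != strlen(b));       /* lengths must match too      */
    for (size_t i = 0; i < sizeof(bufa); i++)
        diff |= (bufa[i] ^ bufb[i]);        /* no data-dependent branching */
    return diff == 0;
}
```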
-
- 15 Jun, 2012 3 commits
-
-
antirez authored
-
antirez authored
In order to implement the reply buffer limits introduced in 2.6, useful to close the connection under user-selected circumstances of big output buffers (for instance slow consumers in pub/sub, a blocked slave, and so forth), Redis keeps a counter with the amount of memory used by the objects inside the output list stored in c->reply. The computation was broken in the function setDeferredMultiBulkLength(), in the case where the object was glued with the next one. This caused the c->reply_bytes field to go out of sync, be decremented more than needed, and wrap back to values near ULONG_MAX. This commit fixes the bug and adds an assertion able to trap this class of problems. The problem was discovered by looking at the INFO output while investigating an unrelated issue (issue #547).
-
antirez authored
Because Redis 2.6 introduced new integer encodings, it is no longer true that if two entries have different encodings they are not equal. An old ziplist can be loaded from an RDB file generated with Redis 2.4; in this case, for instance, a small unsigned integer is encoded with a 16 bit encoding, while Redis 2.6 uses a more specific 8 bit encoding for the same value. Because of this bug, hashes ended up with duplicated entries, or field lookups failed, causing many bad behaviors. This in turn caused a crash while converting the ziplist-encoded hash into a real hash table, because an assertion was raised on duplicated elements. This commit fixes issue #547. Many thanks to Pinterest's Marty Weiner and colleagues for discovering the problem and helping us in the debugging process.
-
- 13 Jun, 2012 1 commit
-
-
Ted Nyman authored
Right now there is a mix of help entries ending with periods and without periods. This standardizes the entries to end without periods, which seems to be the general custom in most unix tools, at least.
-
- 11 Jun, 2012 1 commit
-
-
antirez authored
The ziplist -> hashtable conversion code is triggered every time a hash value must be promoted to a full hash table, because the number or size of its elements reached the threshold. If a problem in the ziplist causes the same field to be present multiple times, the assertion of successful addition of the element to the hash table fails, crashing the server with a failed assertion but providing little information about the problem. This commit adds a new logging function to perform a hex dump of binary data, and makes sure that the ziplist -> hashtable conversion code uses this new logging facility to dump the content of the ziplist when the assertion fails. This change was originally made in order to investigate issue #547.
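A minimal sketch of what such a hex-dump helper can look like (hypothetical name; the real function goes through the Redis logging facility and wraps its output into lines):

```c
#include <stdio.h>

/* Print binary data as hexadecimal pairs, so a corrupted ziplist can be
 * pasted into a bug report and inspected offline. */
static void log_hex_dump(const char *descr, const void *data, size_t len) {
    const unsigned char *p = data;
    printf("%s (hexdump of %zu bytes): ", descr, len);
    for (size_t i = 0; i < len; i++) printf("%02x", p[i]);
    printf("\n");
}
```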
-
- 07 Jun, 2012 1 commit
-
-
antirez authored
-
- 02 Jun, 2012 2 commits
-
-
Alex Mitrofanov authored
(Additional commit notes by antirez@gmail.com): the rdbIsObjectType() macro was not updated when the new RDB object type for ziplist-encoded hashes was added. As a result RESTORE, which uses rdbLoadObjectType(), failed when a ziplist-encoded hash was loaded. This did not affect normal RDB loading because in that case the lower-level function rdbLoadType() is used. The commit also adds a regression test.
-
antirez authored
Improved comments to make clear that rdbLoadType() just loads a general TYPE in the context of RDB, which can be an object type, an expire type, end-of-file, and so forth, while rdbLoadObjectType() enforces that the type is a valid object type, returning -1 otherwise.
-
- 31 May, 2012 1 commit
-
-
antirez authored
In issue #529 a user reported a bug that can be triggered with the following code:

flushdb
set a "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
bitop or x a b

The bug was introduced with the speed optimization in commit 8bbc0768 that specializes every BITOP operation loop up to the minimum length of the input strings. However the computation of the minimum length contained an error when a non-existing key was present in the input after a key of non-zero length. This commit fixes the bug and adds a regression test for it.
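A hedged sketch of the corrected minimum-length computation, with hypothetical names: a missing key must count as a zero-length string rather than being skipped.

```c
#include <stddef.h>

/* Compute the minimum length across all BITOP inputs, treating a
 * non-existing key as an empty (zero-length) string. */
size_t compute_minlen(const size_t *lens, const int *exists, int numkeys) {
    size_t minlen = 0;
    for (int i = 0; i < numkeys; i++) {
        size_t len = exists[i] ? lens[i] : 0;   /* missing key => length 0 */
        if (i == 0 || len < minlen) minlen = len;
    }
    return minlen;
}
```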
-
- 25 May, 2012 1 commit
-
-
antirez authored
The 'persistence' section of INFO output now contains four additional fields related to RDB and AOF persistence:

rdb_last_bgsave_time_sec      Duration of the latest BGSAVE in seconds.
rdb_current_bgsave_time_sec   Duration of the current BGSAVE in seconds.
aof_last_rewrite_time_sec     Duration of the latest AOF rewrite in seconds.
aof_current_rewrite_time_sec  Duration of the current AOF rewrite in seconds.

The 'current' fields are set to -1 if a BGSAVE / AOF rewrite is not in progress. The 'last' fields are set to -1 if no previous BGSAVE / AOF rewrite was performed. Additionally a few fields in the persistence section were renamed for consistency:

changes_since_last_save -> rdb_changes_since_last_save
bgsave_in_progress -> rdb_bgsave_in_progress
last_save_time -> rdb_last_save_time
last_bgsave_status -> rdb_last_bgsave_status
bgrewriteaof_in_progress -> aof_rewrite_in_progress
bgrewriteaof_scheduled -> aof_rewrite_scheduled

After the renaming, fields in the persistence section start with an rdb_ or aof_ prefix depending on the persistence method they describe. The 'loading' field and related fields are not prefixed because they apply to both persistence methods.
-
- 24 May, 2012 3 commits
-
-
antirez authored
This commit adds a fast path to BITOP that can be used for all the bytes from 0 to the minimum length of the input strings, and when there are at most 16 input keys. Often the intersected bitmaps are roughly the same size, so this optimization can provide a 10x speed boost to most real-world usages of the command. Bytes are processed four full words at a time, in loops specialized for the specific BITOP sub-command, without the need to check for length issues with the inputs (since we run this algorithm only as far as there is data from all the keys at the same time). The remaining part of the string is processed in the usual way using the slow but generic algorithm. It is possible to do better than this with inputs that are not roughly the same size: sorting the input keys by length, initializing the result string in a smarter way, and noticing that the final part of the output string, composed only of data from the longest string, does not need any processing, since AND, OR and XOR against an empty string do not alter the output (zero in the first case, the original string in the other two cases). More optimizations will likely be implemented later, but this should be enough to release Redis 2.6-RC4 with bitops merged in. Note: this commit also adds better testing for the BITOP NOT command, which is currently the fastest variant and hard to optimize further since it just flips the bits of a single input string.
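A minimal sketch of the word-at-a-time fast path for the AND case (hypothetical names, assuming suitably aligned buffers; the real loop is further unrolled and has one specialized variant per sub-command):

```c
#include <stddef.h>

/* When every input key has at least `minlen` bytes, the first minlen bytes
 * can be combined one machine word at a time with no per-byte length
 * checks. Shown for AND only. */
static void bitop_and_fast(unsigned char *dst,
                           unsigned char **src, int numkeys, size_t minlen) {
    size_t words = minlen / sizeof(unsigned long);
    unsigned long *d = (unsigned long *)dst;

    for (size_t w = 0; w < words; w++) {
        unsigned long acc = ((unsigned long *)src[0])[w];
        for (int k = 1; k < numkeys; k++)
            acc &= ((unsigned long *)src[k])[w];
        d[w] = acc;
    }
    /* Bytes from words*sizeof(long) up to the longest input are handled by
     * the generic byte-at-a-time loop. */
}
```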
-
antirez authored
A bug in the implementation caused BITOP to crash the server if at least one of the source objects was integer encoded. The new implementation takes an additional array of Redis object pointers and calls getDecodedObject() to obtain a reference to a string-encoded object, then uses decrRefCount() to release the object. Tests were modified to cover the regression and improve coverage.
-
antirez authored
At Redis's default optimization level the command is now much faster, always using a constant-time bit manipulation technique to count bits instead of GCC's builtin popcount, and unrolling the loop. The current implementation's performance is 1.5 GB/s on an MBA 11" (1.8 GHz i7) compiled with both GCC and clang. The algorithm used is described here: http://graphics.stanford.edu/~seander/bithacks.html
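For reference, the per-word counting technique from that page (the published bit-twiddling algorithm, shown here for a single 32-bit word; the actual Redis loop unrolls this over several words at a time):

```c
#include <stdint.h>

/* Count the set bits of a 32-bit word with a fixed sequence of shifts,
 * masks and one multiplication: no lookup table, no data-dependent branch. */
static uint32_t popcount32(uint32_t v) {
    v = v - ((v >> 1) & 0x55555555);                    /* 2-bit partial sums */
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333);     /* 4-bit partial sums */
    v = (v + (v >> 4)) & 0x0F0F0F0F;                    /* 8-bit partial sums */
    return (v * 0x01010101) >> 24;                      /* add the four bytes */
}
```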
-