- 13 Jan, 2016 1 commit
-
Jan-Erik Rediger authored
1 microsecond = 1000 nanoseconds; 1e3 = 1000, 10e3 = 10000.
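The arithmetic is easy to check against the kind of time helpers such a comment annotates. A minimal sketch in the style of Redis's server.c (simplified; treat the details as illustrative, not as the actual patch):

    #include <sys/time.h>

    /* Return the UNIX time in microseconds.
     * 1 second = 1e6 microseconds; 1 microsecond = 1e3 (1000) nanoseconds. */
    long long ustime(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return ((long long)tv.tv_sec) * 1000000 + tv.tv_usec;
    }

    /* Return the UNIX time in milliseconds: 1 millisecond = 1e3 microseconds. */
    long long mstime(void) {
        return ustime() / 1000;
    }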
-
- 02 Jan, 2016 2 commits
- 29 Dec, 2015 1 commit
-
antirez authored
-
- 22 Dec, 2015 2 commits
-
Salvatore Sanfilippo authored
Update pretty printing during debugging to generate valid Lua code for tables
-
Salvatore Sanfilippo authored
Update pretty printing in debugging to generate valid Lua code for userdata-like types.
-
- 18 Dec, 2015 3 commits
- 17 Dec, 2015 3 commits
-
antirez authored
It's a key invariant that, when AOF is enabled, after the cluster reshards, a crash-recovery event leaves all the keys intact with the expected logical content. This check is now part of unit 04.
-
antirez authored
In issue #2948 a crash was reported in processCommand(). Later Oran Agra (@oranagra) traced the bug (in private chat) to the following sequence of events:

1. Some maxmemory is set.
2. The slave is the currently active client and is executing PING or REPLCONF or whatever a slave can send to its master.
3. freeMemoryIfNeeded() is called since maxmemory is set.
4. flushSlavesOutputBuffers() is called by freeMemoryIfNeeded().
5. During the slave buffers flush, a write error can be encountered in writeToClient() or sendReplyToClient(), depending on the version of Redis. This triggers freeClient() against the currently active client, so a segmentation fault will likely happen in processCommand() immediately after the call to freeMemoryIfNeeded().

There are different possible fixes:

1. Add flags to writeToClient() (recent versions code base) so that we can ignore the write errors, and use this flag in flushSlavesOutputBuffers(). However this is not simple to do in older versions of Redis.
2. Use freeClientAsync() during write errors. This works but changes the current behavior of releasing clients ASAP when possible. Normally we write to clients during the normal event loop processing, in the writable client handler, where there is no active client, so no care must be taken.
3. The fix of this commit: detect that the current client is no longer valid. This fix is a bit "ad hoc", but it works across all the versions and has the advantage of not changing the rest of the behavior; it only alters what happens during this race condition.
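A minimal sketch of fix 3, based only on the description above (the field name server.current_client and the error constant are assumptions, not a verbatim copy of the patch):

    /* Fragment, inside processCommand(): the maxmemory handling. */
    if (server.maxmemory) {
        freeMemoryIfNeeded();
        /* freeMemoryIfNeeded() may flush the slave output buffers, and
         * that flush may end up freeing the currently active client.
         * Detect this and bail out instead of touching a freed client. */
        if (server.current_client == NULL) return REDIS_ERR;
    }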
-
antirez authored
-
- 16 Dec, 2015 6 commits
-
antirez authored
The old test, designed to perform an invertible transformation on the bits so as not to alter the original memory content, was not as effective as redis-server --test-memory: the former often reported OK while the latter was able to spot the error. So the test was replaced with one that may perform better; however, the new one must back up the memory being tested, so it tests memory in small pieces. This limits its effectiveness because of the CPU caches. Some attempt is made to trash the CPU cache between the fill and the check stages, but unfortunately not for the addressing test. We'll see if this test is able to find errors where the old one failed.
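A sketch of the backup-fill-check idea described above (names and sizes are made up for illustration; this is not the actual memtest.c code):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define PIECE_WORDS 4096 /* test memory in small pieces */

    /* Fill one piece with a pattern and verify it, restoring the original
     * content afterwards so the tested memory is left untouched. */
    int memtest_piece(uint64_t *region, uint64_t pattern, uint64_t *backup) {
        size_t i;
        int ok = 1;
        memcpy(backup, region, PIECE_WORDS*sizeof(uint64_t)); /* back up */
        for (i = 0; i < PIECE_WORDS; i++) region[i] = pattern; /* fill */
        /* ... the real test touches unrelated memory here, to trash the
         * CPU cache between the fill and the check stages ... */
        for (i = 0; i < PIECE_WORDS; i++) {                    /* check */
            if (region[i] != pattern) { ok = 0; break; }
        }
        memcpy(region, backup, PIECE_WORDS*sizeof(uint64_t)); /* restore */
        return ok;
    }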
-
antirez authored
-
antirez authored
-
antirez authored
-
Paul Kulchenko authored
-
Paul Kulchenko authored
-
- 15 Dec, 2015 4 commits
- 14 Dec, 2015 2 commits
-
antirez authored
-
Salvatore Sanfilippo authored
lua_struct.c/getnum: throw error if overflow happens
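The spirit of the fix, as a self-contained sketch (the real lua_struct.c raises a Lua error via luaL_error(); here a sentinel is returned instead to keep the example standalone):

    #include <ctype.h>
    #include <limits.h>

    /* Parse an optional decimal size field; return df if absent,
     * -1 if the value would overflow an int instead of silently wrapping. */
    static int getnum_checked(const char **fmt, int df) {
        if (!isdigit((unsigned char)**fmt)) return df;
        int a = 0;
        do {
            int d = *(*fmt)++ - '0';
            if (a > (INT_MAX - d) / 10) return -1; /* overflow */
            a = a*10 + d;
        } while (isdigit((unsigned char)**fmt));
        return a;
    }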
-
- 13 Dec, 2015 1 commit
-
Sun He authored
Fix issue #2855
-
- 11 Dec, 2015 10 commits
-
antirez authored
We use the new variadic/pipelined MIGRATE for faster migration. Testing is not easy, because seeing the time it takes for a slot to be migrated requires a very large data set; still, even with all the overhead of migrating multiple slots and setting them up properly, what used to take 4 seconds (1 million keys, 200 slots migrated) now takes 1.6, which is a good improvement. However the improvement can be a lot larger if:

1. We use large datasets where a single slot has many keys.
2. We move more than 10 keys per iteration, making this configurable, which is planned.

Close #2710 Close #2711
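For reference, the variadic form relies on the MIGRATE ... KEYS syntax, where the usual single-key argument is passed as an empty string and the keys to pipeline follow the KEYS option (host, port and key names below are placeholders):

    MIGRATE 192.168.1.100 6379 "" 0 5000 KEYS key:1 key:2 key:3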
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
We need to process replies after errors in order to delete keys that were successfully transferred. Also, argument rewriting was fixed, since it was broken in several ways: now a fresh argument vector is created and set if we get an acknowledgement for at least one key.
-
antirez authored
-
antirez authored
-
antirez authored
-
daniele authored
redis-trib.rb: --timeout XXXXX option added to the fix and reshard commands. Defaults to 15000 milliseconds.
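A hypothetical invocation, assuming the usual redis-trib calling convention (address and timeout value are placeholders):

    ./redis-trib.rb fix --timeout 60000 127.0.0.1:7000
    ./redis-trib.rb reshard --timeout 60000 127.0.0.1:7000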
-
antirez authored
We wait a fixed amount of time (currently 5 seconds), much greater than the usual Cluster node-to-node communication latency, before migrating. This way, when a failover occurs, before detecting the new master as a target for migration, we give its natural slaves (the slaves of the failed-over master) time to announce they switched to the new master, preventing a useless migration operation.
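The rule reduces to a simple time comparison. An illustrative sketch (the constant and names are assumptions based on the description, not the actual cluster.c code):

    typedef long long mstime_t;

    #define SLAVE_MIGRATION_DELAY 5000 /* ms, well above node-to-node latency */

    /* A master becomes a valid migration target only after it has been
     * orphaned for longer than the delay, giving the failed-over master's
     * own slaves time to announce they switched to the new master. */
    int can_migrate_to(mstime_t now, mstime_t orphaned_since) {
        return (now - orphaned_since) > SLAVE_MIGRATION_DELAY;
    }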
-
- 10 Dec, 2015 3 commits
-
antirez authored
The old version was modeled with two failovers; however, after the first one it is possible that another slave will migrate to the new master, since for some time the new master is not backed by any slave. Probably there should be some pause after a failover, before the migration. Anyway, the test is simpler this way and depends less on timing.
-
antirez authored
-
antirez authored
-
- 09 Dec, 2015 2 commits
-
antirez authored
-
antirez authored
Some time ago I broke replicas migration (reported in #2924). The idea was to prevent masters without replicas from getting replicas because of replica migration; I remember it creating issues with tests, but there is no clue in the commit message about why it was so undesirable. However, as a side effect, my patch totally ruined the concept of replicas migration, since we want it to work also for instances that, technically, never had slaves in the past: promoted slaves. So now the ability to be targeted by replicas migration is instead a new flag, "migrate-to". It only applies to masters, and is set in the following two cases:

1. When a master gets a slave, it is set.
2. When a slave turns into a master because of a failover, it is set.

This way replicas migration targets are only masters that used to have slaves, and slaves of masters (that used to have slaves, obviously) that get promoted. The new flag is only internal, and is never exposed in the output nor persisted in the nodes configuration, since all the information needed to handle it is implicit in the cluster configuration we already have.
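An illustrative sketch of the two cases that set the flag (the flag value, struct layout and helper names are assumptions made for the example; only the "migrate-to" concept comes from the message):

    #define NODE_FLAG_MIGRATE_TO (1<<8) /* internal only, never persisted */

    typedef struct clusterNode { int flags; /* ... */ } clusterNode;

    /* Case 1: a master gets a slave. */
    void onSlaveAdded(clusterNode *master) {
        master->flags |= NODE_FLAG_MIGRATE_TO;
    }

    /* Case 2: a slave turns into a master because of a failover. */
    void onFailoverPromotion(clusterNode *newMaster) {
        newMaster->flags |= NODE_FLAG_MIGRATE_TO;
    }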
-