- 30 Jul, 2019 1 commit
-
antirez authored
This was broken by a refactoring I performed recently.
-
- 29 Jul, 2019 1 commit
-
John Sully authored
-
- 24 Jul, 2019 2 commits
- 23 Jul, 2019 4 commits
-
antirez authored
-
antirez authored
-
zhaozhao.zz authored
-
Madelyn Olson authored
-
- 22 Jul, 2019 8 commits
-
Oran Agra authored
Other changes:
* Fix a memory leak in the error handling of RDB loading for the OBJ_MODULE type.
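This kind of leak follows a common pattern; here is a self-contained toy of the fix (all types and names are invented, not the actual rdb.c code): an error hit while a value is being built must free whatever was already allocated before returning.

```c
#include <stdlib.h>

typedef struct { char *data; } obj;   /* Stand-in for a loaded value. */

static obj *load_obj(int fail) {
    obj *o = malloc(sizeof(*o));
    o->data = malloc(64);
    if (fail) {             /* Error in the middle of loading... */
        free(o->data);      /* ...release what was built so far */
        free(o);            /* instead of just returning NULL. */
        return NULL;
    }
    return o;
}

int main(void) {
    obj *o = load_obj(1);   /* Error path: nothing leaks. */
    return o == NULL ? 0 : 1;
}
```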
-
antirez authored
-
antirez authored
This is extremely useful in order to simulate a high load of requests for different keys, forcing Redis to track a lot of information about several clients, so as to simulate real-world workloads.
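A minimal sketch of generating that kind of load from C with the hiredis client library (the key pattern, counts, and host are made up for illustration):

```c
/* Sketch: hammer the server with GETs over many distinct keys from many
 * connections, so the server has to track a large number of key/client
 * pairs. Compile with -lhiredis. */
#include <stdio.h>
#include <stdlib.h>
#include <hiredis/hiredis.h>

#define CLIENTS 16
#define REQUESTS 10000
#define KEYSPACE 100000

int main(void) {
    redisContext *c[CLIENTS];
    for (int j = 0; j < CLIENTS; j++) {
        c[j] = redisConnect("127.0.0.1", 6379);
        if (!c[j] || c[j]->err) { fprintf(stderr, "connect error\n"); return 1; }
    }
    for (int i = 0; i < REQUESTS; i++) {
        /* Pick a random client and a random key each time. */
        redisReply *r = redisCommand(c[rand() % CLIENTS],
                                     "GET key:%d", rand() % KEYSPACE);
        if (r) freeReplyObject(r);
    }
    for (int j = 0; j < CLIENTS; j++) redisFree(c[j]);
    return 0;
}
```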
-
antirez authored
Now that the call also invalidates client-side caching slots, it is important that after an internal flush operation we both send the notifications to the clients and, at the same time, are able to reclaim the memory of the tracking table. This may even fix a few edge cases related to MULTI/EXEC + WATCH during a resync; that is not certain, but in general it looks more correct.
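A self-contained toy of the notify-then-reclaim shape described above (the list-based table and every name here are invented, not the real tracking.c structures):

```c
/* Toy model: a tracking table records which client cached which key; on a
 * flush we first push an invalidation message to each client, then free
 * the whole table in one pass so its memory is reclaimed immediately. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct entry {
    char key[32];
    int client_id;          /* Client that cached this key. */
    struct entry *next;
} entry;

static entry *tracking_table = NULL;   /* NULL when tracking was never used. */

static void track(const char *key, int client_id) {
    entry *e = malloc(sizeof(*e));
    snprintf(e->key, sizeof(e->key), "%s", key);
    e->client_id = client_id;
    e->next = tracking_table;
    tracking_table = e;
}

/* Called on an internal flush: notify first, then reclaim the table. */
static void tracking_invalidate_on_flush(void) {
    if (tracking_table == NULL) return;
    for (entry *e = tracking_table; e; ) {
        printf("invalidate -> client %d (key %s)\n", e->client_id, e->key);
        entry *next = e->next;
        free(e);                        /* Memory reclaimed right away. */
        e = next;
    }
    tracking_table = NULL;
}

int main(void) {
    track("user:1", 7);
    track("user:2", 9);
    tracking_invalidate_on_flush();     /* Prints two invalidations. */
    return 0;
}
```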
-
antirez authored
-
antirez authored
Otherwise the tracking table would never get garbage collected once there are no longer any clients with tracking enabled. Now the invalidation function immediately checks whether any table is allocated at all, and returns ASAP if not, so the overhead when the feature is not used should be near zero.
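A toy of the allocate-lazily, free-on-last-client lifecycle plus the cheap guard in the invalidation path (again, every name is made up for illustration):

```c
/* The table is freed when the last tracking client goes away, and the
 * per-write invalidation hook bails out immediately when no table exists. */
#include <stdio.h>
#include <stdlib.h>

static int tracking_clients = 0;
static void *tracking_table = NULL;

static void client_enable_tracking(void) {
    if (tracking_table == NULL)
        tracking_table = malloc(1024);  /* Allocate lazily on first use. */
    tracking_clients++;
}

static void client_disable_tracking(void) {
    if (--tracking_clients == 0) {
        free(tracking_table);           /* Last user gone: reclaim now. */
        tracking_table = NULL;
    }
}

static void invalidate_key(const char *key) {
    if (tracking_table == NULL) return; /* Fast path: feature unused. */
    printf("invalidate %s\n", key);
}

int main(void) {
    invalidate_key("a");                /* No output: no table yet. */
    client_enable_tracking();
    invalidate_key("a");                /* Prints: table exists. */
    client_disable_tracking();
    invalidate_key("a");                /* No output again: table freed. */
    return 0;
}
```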
-
antirez authored
-
antirez authored
-
- 19 Jul, 2019 1 commit
-
antirez authored
Thanks to @JohnSully for noticing this problem.
-
- 18 Jul, 2019 4 commits
-
antirez authored
-
antirez authored
Without this change, diskless replicas loading RDB files from the socket would not abort when a broken RDB file is loaded. This is potentially unsafe, because right now Redis is not able to guarantee that encoding errors are safe from the point of view of memory corruption (for instance, the LZF library may not be safe against untrusted data?), so it is better to abort when the RDB file we are going to load is corrupted. I/O errors, instead, are still returned to the caller without aborting, so that in case of a short read the diskless replica can try again.
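A toy sketch of that error-handling split (the format and functions are invented; only the abort-on-corruption versus return-on-I/O-error policy mirrors the commit):

```c
#include <stdio.h>
#include <stdlib.h>

enum load_result { LOAD_OK, LOAD_IOERR };

static enum load_result load_payload(FILE *fp) {
    unsigned char buf[4];
    if (fread(buf, 1, sizeof(buf), fp) < sizeof(buf))
        return LOAD_IOERR;          /* Short read: let the caller retry. */
    if (buf[0] != 0xAB) {           /* Bad magic: corrupted payload. */
        fprintf(stderr, "corrupted payload, aborting\n");
        exit(1);                    /* Never act on data we cannot trust. */
    }
    return LOAD_OK;
}

int main(void) {
    FILE *fp = fopen("payload.bin", "rb");
    if (!fp) return 1;
    if (load_payload(fp) == LOAD_IOERR)
        fprintf(stderr, "I/O error, retrying the sync instead of exiting\n");
    fclose(fp);
    return 0;
}
```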
-
antirez authored
-
zhaozhao.zz authored
-
- 17 Jul, 2019 7 commits
-
antirez authored
-
antirez authored
-
antirez authored
-
antirez authored
-
Oran Agra authored
Now that the replica can read the RDB directly from the socket, it should avoid exiting on a short read and instead try to re-sync. This commit tries to have minimal effects on non-diskless RDB reading, and includes a test that tries to trigger this scenario for various read cases.
-
zhaozhao.zz authored
-
Oran Agra authored
For instance, detached thread-safe contexts, or various callbacks that don't provide a context.
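One of those cases in sketch form, assuming the standard modules API: a background thread obtains a detached thread-safe context by passing NULL instead of a blocked client (module boilerplate trimmed to the minimum).

```c
/* Sketch of a module using a detached thread-safe context from a thread,
 * one of the situations where no client context is available. */
#include <pthread.h>
#include "redismodule.h"

static void *worker(void *arg) {
    RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(NULL); /* Detached. */
    RedisModule_ThreadSafeContextLock(ctx);
    RedisModule_Log(ctx, "notice", "hello from a detached context");
    RedisModule_ThreadSafeContextUnlock(ctx);
    RedisModule_FreeThreadSafeContext(ctx);
    return NULL;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "detachedctx", 1, REDISMODULE_APIVER_1)
        == REDISMODULE_ERR) return REDISMODULE_ERR;
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    return REDISMODULE_OK;
}
```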
-
- 12 Jul, 2019 1 commit
-
antirez authored
-
- 10 Jul, 2019 6 commits
- 08 Jul, 2019 4 commits
-
antirez authored
-
antirez authored
-
Oran Agra authored
The implementation of diskless replication was so far diskless only on the master side: the slave side was still storing the received RDB file to disk before loading it back in and parsing it.

This commit adds two modes to load the RDB directly from the socket:
1) when-empty
2) using "swapdb"
The third mode, a diskless slave via flushdb, is risky and currently not included.

Other changes:
* Distinguish between the AOF configuration and its state, so that we re-enable AOF only when the sync eventually succeeds (and not when exiting from readSyncBulkPayload after a failed attempt); also, CONFIG GET and INFO during RDB loading would have lied.
* When loading the RDB from the network, don't kill the server on a short read (that can be a network error).
* Fix the RDB check when it is performed on a preamble AOF.
* Tests: run the replication tests for the diskless slave too; make the replication test a bit more aggressive; add a test for diskless load with swapdb.
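A toy model of the "swapdb" flavor described above (the "database" here is just a string, and all names are invented): keep the current dataset aside while the new one loads, discard the backup on success, restore it on failure.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *db = NULL;

static int load_new_db(int fail) {
    if (fail) return -1;                 /* e.g. socket error mid-transfer. */
    free(db);
    db = strdup("new dataset");
    return 0;
}

static void sync_with_swapdb(int fail) {
    char *backup = db;                   /* Keep the old dataset aside. */
    db = NULL;
    if (load_new_db(fail) == 0) {
        free(backup);                    /* Success: drop the backup. */
    } else {
        free(db);                        /* Failure: restore the backup, */
        db = backup;                     /* so we still serve the old data. */
    }
}

int main(void) {
    db = strdup("old dataset");
    sync_with_swapdb(1);                 /* Failed load: old data survives. */
    printf("after failed sync: %s\n", db);
    sync_with_swapdb(0);                 /* Successful load: new data. */
    printf("after good sync:   %s\n", db);
    free(db);
    return 0;
}
```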
-
Angus Pearson authored
-
- 07 Jul, 2019 1 commit
-
Guy Korland authored
Thanks to @rafie.
-