- 03 Sep, 2020 1 commit
-
-
Oran Agra authored
During long running scripts or when loading an RDB/AOF, we may need to do some defragging. Since processEventsWhileBlocked is called periodically at unknown intervals, and many cron jobs either depend on run_with_period (including active defrag) or rely on being called at the server.hz rate (i.e. active defrag knows how much time to run by looking at server.hz), whileBlockedCron may have to run a loop triggering the cron jobs in it (currently only active defrag) several times, as sketched below.
Other changes:
- Add a test for defrag during AOF loading.
- Change the key-load-delay config to take negative values, for fractions of a microsecond sleep.
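A minimal sketch of that catch-up loop (names and bookkeeping are illustrative, not the actual Redis code): because processEventsWhileBlocked runs at unknown intervals, one call may need to trigger several cron ticks to keep jobs that assume a server.hz rate on schedule.

    #include <stdint.h>

    typedef int64_t mstime_t;

    /* Hypothetical stand-in for the real cron job. */
    static void activeDefragCycleStub(void) { /* defrag a little */ }

    static mstime_t blocked_last_cron = 0;

    /* Run the blocked-mode cron jobs once per elapsed 1000/hz ms tick,
     * looping because a single call may have to catch up on several. */
    void whileBlockedCronSketch(mstime_t now_ms, int hz) {
        mstime_t interval = 1000 / hz;
        if (blocked_last_cron == 0) blocked_last_cron = now_ms;
        while (blocked_last_cron + interval <= now_ms) {
            blocked_last_cron += interval;
            activeDefragCycleStub();   /* currently the only such job */
        }
    }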
-
- 27 Aug, 2020 1 commit
-
-
Oran Agra authored
During a long AOF or RDB load, the memory stats were not updated, and INFO would return stale data, specifically about fragmentation and RSS. In the past some of these were sampled directly inside the INFO command, but were moved to cron as an optimization. This commit introduces the concept of a loadingCron, which takes over some of the responsibilities of serverCron. It attempts to limit its rate to approximately the server hz, but may not be very accurate. In order to avoid too many system calls, we use the cached ustime, and also make sure to update it in both AOF loading and RDB loading inside processEventsWhileBlocked (it seems AOF loading was missing it).
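A sketch of the cached-time idea, with illustrative names (the real field and function names may differ): gettimeofday() is called once per processEventsWhileBlocked iteration and cached, and loadingCron reads the cache to rate-limit itself to roughly the server hz.

    #include <stddef.h>
    #include <sys/time.h>

    static long long cached_ustime = 0;   /* cached microsecond clock */

    /* Refresh the cache; called from processEventsWhileBlocked during
     * both RDB and AOF loading. */
    static void updateCachedUstime(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        cached_ustime = (long long)tv.tv_sec * 1000000 + tv.tv_usec;
    }

    /* Return 1 when roughly 1/hz seconds have passed since the last
     * run, without issuing an extra system call. */
    int loadingCronDue(int hz) {
        static long long next_run = 0;
        updateCachedUstime();
        if (cached_ustime < next_run) return 0;
        next_run = cached_ustime + 1000000 / hz;
        return 1;
    }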
-
- 23 Jul, 2020 1 commit
-
-
Meir Shpilraien (Spielrein) authored
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Itamar Haber <itamar@redislabs.com>
-
- 21 Jul, 2020 1 commit
-
-
Wen Hui authored
Since the dynamic allocations in raxIterator are only used for deep walks, a memory leak due to a missing call to raxStop can only happen for a rax with key names longer than 32 bytes. Out of all the missing calls, the only ones that may lead to a leak are the rax iterators for consumer groups and consumers, and these were only in AOFRW and rdbSave, which normally happen only in a fork or at shutdown.
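For illustration, the pattern the missing calls break (standard rax API from rax.h): every raxStart must be paired with raxStop, because once a key grows past the iterator's static 32-byte buffer, raxNext heap-allocates and only raxStop frees it.

    #include "rax.h"

    void walkRax(rax *rt) {
        raxIterator ri;
        raxStart(&ri, rt);
        raxSeek(&ri, "^", NULL, 0);    /* seek to the smallest key */
        while (raxNext(&ri)) {
            /* ... use ri.key / ri.key_len / ri.data ... */
        }
        raxStop(&ri);  /* required: frees the iterator's dynamic buffer */
    }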
-
- 04 May, 2020 2 commits
-
-
Guy Benoish authored
The same goes for XGROUP DELCONSUMER (but in this case it doesn't have any visible effect).
-
Oran Agra authored
* Fix memory leaks with diskless replica short read.
* Fix a few timing issues with valgrind runs.
* Fix an issue with valgrind and the watchdog scheduled signal.
About the valgrind watchdog issue: the stack trace test in logging.tcl has issues with valgrind:
==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
==28808== too small or bad protection modes
It seems to be some valgrind bug with SA_ONSTACK. SA_ONSTACK seems unneeded since the watchdog is not recursive (SA_NODEFER was removed); also, it is not clear it is even valid without a call to sigaltstack().
-
- 02 May, 2020 1 commit
-
-
zhenwei pi authored
Currently, there are several types of threads/child processes of a redis server. Sometimes we need to deeply optimise the performance of redis, so we would like to isolate threads/processes. There was some discussion about cpu affinity cases in this issue: https://github.com/antirez/redis/issues/2863 So this patch implements cpu affinity setting via redis.conf: server_cpulist/bio_cpulist/aof_rewrite_cpulist/bgsave_cpulist can each be configured with a cpu list. Examples of cpulist in redis.conf:
server_cpulist 0-7:2 means cpu affinity 0,2,4,6
bio_cpulist 1,3 means cpu affinity 1,3
aof_rewrite_cpulist 8-11 means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11 means cpu affinity 1,10,11
Tested on linux/freebsd; both work fine.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
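On Linux, applying a parsed cpulist boils down to sched_setaffinity; a minimal sketch with the "0-7:2" example hard-coded (the real patch parses the string, of course):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <unistd.h>

    void setServerAffinityExample(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        /* server_cpulist 0-7:2 expands to cpus 0,2,4,6 */
        CPU_SET(0, &set); CPU_SET(2, &set);
        CPU_SET(4, &set); CPU_SET(6, &set);
        sched_setaffinity(getpid(), sizeof(set), &set);
    }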
-
- 01 May, 2020 1 commit
-
-
antirez authored
We could use uint64_t-specific macros, but after all it's simpler to just use an obvious equivalent type plus a cast: this is a no-op and is simpler than the fixed-size-type printf macros.
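For illustration (a contrived example, not from the commit), the two equivalent approaches:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        uint64_t value = 123456789012345ULL;
        printf("%" PRIu64 "\n", value);               /* macro approach */
        printf("%llu\n", (unsigned long long)value);  /* cast: a no-op */
        return 0;
    }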
-
- 25 Apr, 2020 1 commit
-
-
Madelyn Olson authored
-
- 09 Apr, 2020 4 commits
-
-
antirez authored
-
antirez authored
Related to #3243.
-
antirez authored
-
antirez authored
Reloading of the RDB generated by DEBUG POPULATE 5000000 SAVE is now 25% faster. This commit also prepares the ability to have more flexibility when loading stuff from the RDB, since we no longer use dbAdd() but can control exactly how things are added to the database.
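A hypothetical sketch of the idea (the helper name is illustrative, not the function the commit actually adds): dbAdd() pays for an existence check on every key, but while loading an RDB into an empty database the key is known to be new, so the loader can insert into the dict directly.

    #include "server.h"

    int dbAddFromRDBSketch(redisDb *db, sds key, robj *val) {
        /* dictAddRaw skips dictAdd's value plumbing and returns NULL
         * only if the key already exists (a duplicate in the RDB). */
        dictEntry *de = dictAddRaw(db->dict, key, NULL);
        if (de == NULL) return 0;
        dictSetVal(db->dict, de, val);
        return 1;
    }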
-
- 06 Apr, 2020 1 commit
-
-
qetu3790 authored
Fix comments about the RESIZE DB opcode in rdb.c.
-
- 18 Mar, 2020 1 commit
-
-
WuYunlong authored
Before this commit, when upgrading a replica, expired keys would not be loaded, causing the replica to have fewer keys in its db. Up to this point, the master's and replica's keys are logically consistent. However, before the keys on master and replica become physically consistent as well (that is, have the same dbsize), suppose the master has a problem and the replica gets promoted to become the new master of that partition: if the new master then updates a key that logically does not exist, but physically still exists on the old master (now a replica), the old master would refuse to update the key, causing master and replica data to become inconsistent. How could this happen? It is all because of a wrong judgement of roles while starting up the server. We cannot use server.masterhost to judge whether the server is a master or a replica, since that fails in cluster mode. When we start the server and load the rdb, we do want to load expired keys, and do not want the ability to actively expire keys, if it is a replica.
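A sketch (with stubbed fields, not the real server struct) of the kind of role check the message argues for, since server.masterhost alone misclassifies a cluster replica at startup:

    #include <stddef.h>

    /* Stubbed stand-in for the real server/cluster state. */
    struct {
        int cluster_enabled;
        char *masterhost;
        int myself_is_master;   /* stand-in for nodeIsMaster(myself) */
    } server;

    int iAmMasterSketch(void) {
        return (!server.cluster_enabled && server.masterhost == NULL) ||
               (server.cluster_enabled && server.myself_is_master);
    }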
-
- 16 Feb, 2020 1 commit
-
-
Oran Agra authored
-
- 05 Feb, 2020 1 commit
-
-
Oran Agra authored
-
- 30 Jan, 2020 1 commit
-
-
Guy Benoish authored
-
- 10 Nov, 2019 1 commit
-
-
Oran Agra authored
- The API name was odd; it is now separated into two APIs, one for LRU and one for LFU.
- The LRU idle time was in 1-second resolution, which might be ok for RDB and RESTORE, but I think modules may need higher resolution.
- Adds tests for LFU and for handling maxmemory policy mismatches.
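A hedged usage example of the split API (names as they appear in redismodule.h after this change; exact return-value semantics are best checked against your header):

    #include "redismodule.h"

    /* Copy the eviction clock from one key to another, whichever
     * maxmemory policy is active. */
    void copyIdleInfo(RedisModuleKey *src, RedisModuleKey *dst) {
        mstime_t idle = 0;
        long long freq = 0;
        if (RedisModule_GetLRU(src, &idle) == REDISMODULE_OK && idle >= 0)
            RedisModule_SetLRU(dst, idle);  /* idle time in milliseconds */
        if (RedisModule_GetLFU(src, &freq) == REDISMODULE_OK && freq >= 0)
            RedisModule_SetLFU(dst, freq);  /* LFU frequency counter */
    }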
-
- 05 Nov, 2019 1 commit
-
-
antirez authored
After the thread in #6537 and thanks to the suggestions received, this commit updates the original patch in order to: 1. Solve the problem of updating the time in multiple places by updating it in call(). 2. Avoid introducing a new field and use our cached time instead. This required some minor refactoring of the function updating the time, and the introduction of a new cached time in microseconds in order to use fewer gettimeofday() calls.
-
- 29 Oct, 2019 1 commit
-
-
Oran Agra authored
* Replication hooks: role change, master link status, replica online/offline.
* Persistence hooks: saving, loading, loading progress.
* Misc hooks: cron loop, shutdown, module loaded/unloaded.
* Change the way the hook tests work, and add tests for all of the above.
startLoading() now gets a flag indicating what is being loaded. stopLoading() now gets an indication of success or failure. Also adds startSaving() and stopSaving() with similar args and role.
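A hedged module-side example of subscribing to one of these hooks (API and constant names per redismodule.h; check your version):

    #include "redismodule.h"

    static void loadingCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                                uint64_t sub, void *data) {
        REDISMODULE_NOT_USED(e); REDISMODULE_NOT_USED(data);
        if (sub == REDISMODULE_SUBEVENT_LOADING_RDB_START)
            RedisModule_Log(ctx, "notice", "RDB loading started");
        else if (sub == REDISMODULE_SUBEVENT_LOADING_ENDED)
            RedisModule_Log(ctx, "notice", "loading finished");
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv,
                           int argc) {
        REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "hooksdemo", 1, REDISMODULE_APIVER_1)
            == REDISMODULE_ERR) return REDISMODULE_ERR;
        RedisModule_SubscribeToServerEvent(ctx,
            RedisModuleEvent_Loading, loadingCallback);
        return REDISMODULE_OK;
    }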
-
- 24 Oct, 2019 1 commit
-
-
Oran Agra authored
-
- 15 Oct, 2019 1 commit
-
-
Yossi Gottlieb authored
-
- 07 Oct, 2019 2 commits
-
-
Oran Agra authored
Misc:
- Handle SSL_has_pending by iterating through these connections in beforeSleep and passing a timeout of 0 to aeProcessEvents (see the sketch below).
- Fix an issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed).
- Add a key-load-delay config for testing.
- Trim connShutdown, which is no longer needed.
- rioFdsetWrite -> rioFdWrite: simplified since there is no longer a need to write to multiple FDs.
- Don't detect that the rdb child exited (don't call wait3) until we detect the pipe is closed.
- Clean up a bad optimization in rio.c, and add another one.
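A sketch of the first item (helpers stubbed, names illustrative): bytes already decrypted inside OpenSSL's buffer will not wake the event loop, so beforeSleep must keep polling with a zero timeout while any connection has such pending data.

    #include <stdbool.h>

    /* Stubs; the real code tracks connections where SSL_has_pending()
     * returned nonzero after the last read. */
    static bool anyTLSConnHasPendingData(void) { return false; }
    static void processEventsWithZeroTimeout(void) {
        /* aeProcessEvents(eventLoop, ... timeout 0 ...) */
    }

    void beforeSleepSketch(void) {
        while (anyTLSConnHasPendingData())
            processEventsWithZeroTimeout();
    }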
-
Yossi Gottlieb authored
* Introduce a connection abstraction layer for all socket operations and integrate it across the code base.
* Provide an optional TLS connection implementation based on OpenSSL.
* Pull a newer version of hiredis with TLS support.
* Tests and redis-cli updates for TLS support.
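A minimal sketch of such an abstraction layer (field and type names illustrative, not the exact ones in connection.h): a vtable of I/O operations, so callers never care whether the transport is a plain socket or TLS.

    #include <stddef.h>

    typedef struct connection connection;

    typedef struct ConnectionType {
        int  (*write)(connection *conn, const void *data, size_t len);
        int  (*read)(connection *conn, void *buf, size_t len);
        void (*close)(connection *conn);
    } ConnectionType;

    struct connection {
        ConnectionType *type;   /* e.g. plain-socket ops or TLS ops */
        int fd;
        void *private_data;     /* e.g. the SSL object for TLS */
    };

    /* Callers go through wrappers like this one. */
    static inline int connWrite(connection *c, const void *d, size_t n) {
        return c->type->write(c, d, n);
    }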
-
- 27 Sep, 2019 2 commits
- 06 Sep, 2019 1 commit
-
-
suntiawnen authored
-
- 05 Sep, 2019 1 commit
-
-
Oran Agra authored
When implementing the code that saves and loads these aux fields, we used the rdb format that was added for that in redis 5.0, but then we added the 'when' field, which meant that the old redis-check-rdb won't be able to skip these. This fix adds an opcode as if that 'when' were part of the module data.
-
- 22 Jul, 2019 1 commit
-
-
Oran Agra authored
Other changes: * Fix a memory leak in the error handling of rdb loading of type OBJ_MODULE.
-
- 19 Jul, 2019 1 commit
-
-
antirez authored
Thanks to @JohnSully for noticing this problem.
-
- 18 Jul, 2019 3 commits
-
-
antirez authored
-
antirez authored
Without this change, diskless replicas, when loading an RDB file from the socket, would not abort when a broken RDB file gets loaded. This is potentially unsafe, because right now Redis is not able to guarantee that encoding errors are safe from the point of view of memory corruption (for instance, the LZF library may not be safe against untrusted data?), so it is better to abort when the RDB file we are going to load is corrupted. I/O errors, instead, are still returned to the caller without aborting, so that in case of a short read the diskless replica can try again.
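A sketch of the resulting policy (names and structure illustrative, not the actual rdb.c error paths): corruption aborts, I/O errors return to the caller.

    #include <stdio.h>
    #include <stdlib.h>

    enum { RDB_LOAD_ERR_IO, RDB_LOAD_ERR_CORRUPT };

    void rdbReportErrorSketch(int type, const char *msg) {
        if (type == RDB_LOAD_ERR_CORRUPT) {
            /* Encoding errors may indicate memory-unsafe input (e.g.
             * via LZF), so don't keep running on it. */
            fprintf(stderr, "Corrupt RDB: %s\n", msg);
            exit(1);
        }
        /* I/O errors (e.g. a short read on the socket) are reported
         * to the caller, so a diskless replica can retry the sync. */
        fprintf(stderr, "RDB I/O error: %s\n", msg);
    }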
-
antirez authored
-
- 17 Jul, 2019 5 commits
-
-
antirez authored
-
antirez authored
-
antirez authored
-
Oran Agra authored
Now that the replica can read the rdb directly from the socket, it should avoid exiting on a short read and instead try to re-sync. This commit tries to have minimal effect on non-diskless rdb reading, and includes a test that tries to trigger this scenario on various read cases.
-
Oran Agra authored
* Create a module API for forking child processes.
* Refactor duplicate code around creating and tracking forks by AOF and RDB.
* Child processes listen to SIGUSR1 and die via exitFromChild in order to eliminate a valgrind warning about an unhandled signal.
* Note that the BGSAVE error reply has changed.
The valgrind error was: Process terminating with default action of signal 10 (SIGUSR1)
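A hedged example of the fork API introduced here (names per redismodule.h): like fork(), it returns the child pid to the parent and 0 in the child, and the child must leave via ExitFromChild().

    #include "redismodule.h"

    static void forkDone(int exitcode, int bysignal, void *user_data) {
        REDISMODULE_NOT_USED(user_data);
        /* Runs in the parent when the child terminates. */
        (void)exitcode; (void)bysignal;
    }

    int doBackgroundWork(RedisModuleCtx *ctx) {
        int pid = RedisModule_Fork(forkDone, NULL);
        if (pid == 0) {
            /* Child: do the work, then exit cleanly. */
            RedisModule_ExitFromChild(0);
        } else if (pid == -1) {
            RedisModule_Log(ctx, "warning", "fork failed");
        }
        return pid;
    }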
-
- 08 Jul, 2019 1 commit
-
-
Oran Agra authored
The implementation of diskless replication was so far diskless only on the master side. The slave side was still storing the received rdb file to disk before loading it back in and parsing it. This commit adds two modes to load an rdb directly from the socket: 1) when-empty 2) using "swapdb". The third mode, using a diskless slave by flushdb, is risky and currently not included.
Other changes:
- Distinguish between aof configuration and state, so that we can re-enable aof only when the sync eventually succeeds (and not when exiting from readSyncBulkPayload after a failed attempt); a CONFIG GET and INFO during rdb loading would also have lied.
- When loading the rdb from the network, don't kill the server on a short read (that can be a network error).
- Fix rdb check when performed on a preamble AOF.
Tests:
- Run the replication tests for a diskless slave too.
- Make the replication test a bit more aggressive.
- Add a test for diskless load swapdb.
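For reference, the knob this adds looks like the following in redis.conf (value names per this change; "disabled" keeps the old store-to-disk behaviour, and "on-empty-db" corresponds to the when-empty mode above):

    repl-diskless-load disabled
    # repl-diskless-load on-empty-db
    # repl-diskless-load swapdb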
-