1. 03 Sep, 2020 1 commit
    • Run active defrag while blocked / loading (#7726) · 9ef8d2f6
      Oran Agra authored
      During long running scripts or while loading an RDB/AOF file, we may need to do some
      defragging. Since processEventsWhileBlocked is called periodically at
      unknown intervals, and many cron jobs either depend on run_with_period
      (including active defrag) or rely on being called at server.hz rate
      (i.e. active defrag knows how much time to run by looking at server.hz),
      whileBlockedCron may have to run a loop that triggers the cron jobs in it
      (currently only active defrag) several times.
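
      A minimal sketch of that catch-up loop (illustrative only; it assumes an
      mstime() millisecond clock and a blocked_last_cron field recording the last
      run, and skips the real bookkeeping):

      void whileBlockedCron(void) {
          long long hz_ms = 1000 / server.hz;   /* duration of one cron tick */
          long long now = mstime();
          /* We don't know how long we were blocked, so run one iteration per
           * missed tick until we catch up with "now". */
          while (server.blocked_last_cron + hz_ms <= now) {
              activeDefragCycle();              /* currently the only job here */
              server.blocked_last_cron += hz_ms;
          }
      }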
      
      Other changes:
      - Adding a test for defrag during aof loading.
      - Changing the key-load-delay config to take negative values, for fractions
        of a microsecond sleep
      9ef8d2f6
  2. 27 Aug, 2020 1 commit
    • Update memory metrics for INFO during loading (#7690) · 8bdcbbb0
      Oran Agra authored
      During a long AOF or RDB loading, the memory stats were not updated, and
      INFO would return stale data, specifically about fragmentation and RSS.
      In the past some of these were sampled directly inside the INFO command,
      but were moved to cron as an optimization.
      
      This commit introduces the concept of a loadingCron which should take
      over some of the responsibilities of serverCron.
      It attempts to limit its rate to approximately the server hz, but may
      not be very accurate.

      In order to avoid too many system calls, we use the cached ustime, and
      also make sure to update it in both AOF loading and RDB loading inside
      processEventsWhileBlocked (it seems AOF loading was missing it).
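
      Roughly, the rate limiting can be pictured like this (a sketch: the last_run
      static and the cronUpdateMemoryStats() helper name are assumptions; the
      cached ustime is the one mentioned above):

      void loadingCron(void) {
          static long long last_run = 0;             /* microseconds */
          long long interval = 1000000 / server.hz;  /* one hz tick */
          long long now = server.ustime;             /* cached, no extra syscall */
          if (now - last_run < interval) return;     /* too soon, skip this call */
          last_run = now;
          /* sample RSS / fragmentation and the other stats INFO depends on */
          cronUpdateMemoryStats();
      }
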
      8bdcbbb0
  3. 23 Jul, 2020 1 commit
  4. 21 Jul, 2020 1 commit
    • Add missing calls to raxStop (#7532) · 4e8f2d68
      Wen Hui authored
      Since the dynamic allocations in raxIterator are only used for deep walks, a memory
      leak due to a missing call to raxStop can only happen for a rax with key names longer
      than 32 bytes.
      
      Out of all the missing calls, the only ones that may lead to a leak are the ones on
      the rax for consumer groups and consumers, and these were only in AOFRW and rdbSave,
      which normally only happen in a fork or at shutdown.
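
      The pattern the fix enforces looks like this (a sketch: the early-exit
      helper is hypothetical, the rax calls are the real iterator API):

      int iterateConsumersExample(streamCG *group) {
          raxIterator ri;
          raxStart(&ri, group->consumers);
          raxSeek(&ri, "^", NULL, 0);            /* seek to the smallest key */
          while (raxNext(&ri)) {
              streamConsumer *consumer = ri.data;
              if (consumerIsBroken(consumer)) {  /* hypothetical early-exit path */
                  raxStop(&ri);                  /* the call that was missing */
                  return C_ERR;
              }
          }
          raxStop(&ri);                          /* always pair with raxStart() */
          return C_OK;
      }
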
      4e8f2d68
  5. 04 May, 2020 2 commits
    • XPENDING should not update consumer's seen-time · bce3d08c
      Guy Benoish authored
      The same goes for XGROUP DELCONSUMER (but in this case it doesn't
      have any visible effect).
      bce3d08c
    • add daily github actions with libc malloc and valgrind · deee2c1e
      Oran Agra authored
      * fix memory leaks with diskless replica short read.
      * fix a few timing issues with valgrind runs
      * fix an issue with valgrind and the watchdog schedule signal
      
      about the valgrind WD issue:
      the stack trace test in logging.tcl has issues with valgrind:
      ==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
      ==28808==   too small or bad protection modes
      
      it seems to be some valgrind bug with SA_ONSTACK.
      SA_ONSTACK seems unneeded since the WD is not recursive (SA_NODEFER was removed);
      also, it's not clear it is even valid without a call to sigaltstack().
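
      The registration in question boils down to a sigaction setup along these
      lines, with SA_ONSTACK (and earlier SA_NODEFER) dropped; treat the details
      as an illustration rather than the exact patch:

      #include <signal.h>

      void watchdogSignalHandler(int sig, siginfo_t *info, void *secret);

      void setupWatchdogSignal(void) {
          struct sigaction act;
          sigemptyset(&act.sa_mask);
          act.sa_flags = SA_SIGINFO;          /* previously SA_SIGINFO|SA_ONSTACK */
          act.sa_sigaction = watchdogSignalHandler;
          sigaction(SIGALRM, &act, NULL);     /* the watchdog fires via SIGALRM */
      }
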
      deee2c1e
  6. 02 May, 2020 1 commit
    • Support setcpuaffinity on linux/bsd · 1a0deab2
      zhenwei pi authored
      Currently, there are several types of threads/child processes in a
      redis server. Sometimes we need to deeply optimise the performance of
      redis, so we would like to isolate these threads/processes.
      
      There was some discussion about cpu affinity cases in this issue:
      https://github.com/antirez/redis/issues/2863

      So this patch implements cpu affinity setting via redis.conf, so that
      we can configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
      bgsave_cpulist as cpu lists.
      
      Examples of cpulist in redis.conf:
      server_cpulist 0-7:2      means cpu affinity 0,2,4,6
      bio_cpulist 1,3           means cpu affinity 1,3
      aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
      bgsave_cpulist 1,10-11    means cpu affinity 1,10,11
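
      To illustrate the first example, mapping a cpulist like "0-7:2" to an actual
      affinity mask could look like this on Linux (a hand-rolled sketch for that
      one value, not the parser in the patch):

      #define _GNU_SOURCE
      #include <sched.h>

      static void applyExampleAffinity(void) {
          cpu_set_t mask;
          CPU_ZERO(&mask);
          for (int cpu = 0; cpu <= 7; cpu += 2)   /* "0-7:2": range 0-7, step 2 */
              CPU_SET(cpu, &mask);
          /* 0 == calling thread; the patch does this per thread/child process. */
          sched_setaffinity(0, sizeof(mask), &mask);
      }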
      
      Tested on linux/freebsd; both work fine.
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      1a0deab2
  7. 01 May, 2020 1 commit
    • Cast printf() argument to the format specifier. · a07a4ada
      antirez authored
      We could use the uint64_t-specific macros, but after all it's simpler to
      just use an obvious equivalent type plus a cast: this is a no-op
      and is simpler than the fixed-size type printf macros.
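
      The two options being weighed, side by side (an illustrative example, not
      the changed line itself):

      #include <stdio.h>
      #include <stdint.h>
      #include <inttypes.h>

      int main(void) {
          uint64_t counter = 42;
          printf("counter: %" PRIu64 "\n", counter);               /* macro approach */
          printf("counter: %llu\n", (unsigned long long)counter);  /* cast approach */
          return 0;
      }
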
      a07a4ada
  8. 25 Apr, 2020 1 commit
  9. 09 Apr, 2020 4 commits
  10. 06 Apr, 2020 1 commit
  11. 18 Mar, 2020 1 commit
    • Fix master replica inconsistency for upgrading scenario. · f6029fb9
      WuYunlong authored
      Before this commit, when upgrading a replica, expired keys would not
      be loaded, thus causing the replica to have fewer keys in its db. Up to
      this point, the master's and replica's keys are logically consistent.
      However, before the keys in master and replica become physically
      consistent (that is, before they have the same dbsize), if the master
      runs into a problem, the replica gets promoted and becomes the new
      master of that partition, and the new master updates a key which does
      not exist on it but physically exists on the old master (now a replica),
      the old master would refuse to update the key, thus leaving master and
      replica data inconsistent.
      
      How could this happen?
      It is all because of a wrong judgement of roles while starting up
      the server. We cannot use server.masterhost to judge whether the server
      is a master or a replica, since that fails in cluster mode.

      When we start the server and load the rdb, we do want to load expired
      keys, and we do not want to actively expire keys if the server is
      a replica.
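
      The role check this requires has to be cluster-aware instead of relying on
      server.masterhost alone; a sketch of that check (illustrative of the idea,
      not necessarily the exact code):

      int iAmMaster(void) {
          return ((!server.cluster_enabled && server.masterhost == NULL) ||
                  (server.cluster_enabled && nodeIsMaster(server.cluster->myself)));
      }

      With such a check, startup can always load expired keys from the rdb and
      leave active expiration to whoever is actually a master.
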
      f6029fb9
  12. 16 Feb, 2020 1 commit
  13. 05 Feb, 2020 1 commit
  14. 30 Jan, 2020 1 commit
  15. 10 Nov, 2019 1 commit
    • rename RM_SetLRUOrLFU -> RM_SetLRU and RM_SetLFU · 28c20b4e
      Oran Agra authored
      - the API name was odd; it is now separated into two APIs, one for LRU and one for LFU
      - the LRU idle time was in 1 second resolution, which might be ok for RDB
        and RESTORE, but I think modules may need higher resolution
      - adding tests for LFU and for handling maxmemory policy mismatch
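
      From a module's point of view, the split APIs can be used roughly like this
      (a sketch: it assumes RedisModule_SetLRU takes an idle time in milliseconds,
      RedisModule_SetLFU a frequency counter, and that both report a maxmemory
      policy mismatch via their return value; check module.c for the exact contract):

      #include "redismodule.h"

      int TouchIdle_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          if (argc != 2) return RedisModule_WrongArity(ctx);
          RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);
          /* Under an LRU maxmemory policy: mark the key as idle for 10 seconds. */
          if (RedisModule_SetLRU(key, 10000) != REDISMODULE_OK) {
              /* Policy mismatch (e.g. the server is configured for LFU). */
              RedisModule_SetLFU(key, 5);       /* bump the LFU counter instead */
          }
          RedisModule_CloseKey(key);
          return RedisModule_ReplyWithSimpleString(ctx, "OK");
      }
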
      28c20b4e
  16. 05 Nov, 2019 1 commit
    • Update PR #6537 patch for generality. · 824f5f0b
      antirez authored
      After the thread in #6537 and thanks to the suggestions received, this
      commit updates the original patch in order to:
      
      1. Solve the problem of updating the time in multiple places by updating
      it in call().
      2. Avoid introducing a new field and use our cached time instead.
      
      This required some minor refactoring of the function updating the time,
      and the introduction of a new cached time in microseconds, in order to
      issue fewer gettimeofday() calls.
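
      The cached-time idea can be sketched like this (field names follow Redis
      conventions, but treat it as an illustration rather than the exact patch):

      void updateCachedTime(void) {
          server.ustime = ustime();              /* the one gettimeofday() sample */
          server.mstime = server.ustime / 1000;
          server.unixtime = server.mstime / 1000;
      }

      call() then refreshes the cache once per command, instead of every caller
      that needs "now" hitting gettimeofday() on its own.
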
      824f5f0b
  17. 29 Oct, 2019 1 commit
    • Modules hooks: complete missing hooks for the initial set of hooks · 51c3ff8d
      Oran Agra authored
      * replication hooks: role change, master link status, replica online/offline
      * persistence hooks: saving, loading, loading progress
      * misc hooks: cron loop, shutdown, module loaded/unloaded
      * change the way the hooks tests work, and add tests for all of the above
      
      startLoading() now gets a flag indicating what is being loaded.
      stopLoading() now gets an indication of success or failure.
      Adding startSaving() and stopSaving() with similar args and role.
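
      From a module's point of view the persistence hooks can be consumed roughly
      like this (a sketch: the subevent constants are written from memory, so
      double-check redismodule.h before relying on them):

      #include "redismodule.h"

      static void loadingCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                                  uint64_t sub, void *data) {
          REDISMODULE_NOT_USED(e);
          REDISMODULE_NOT_USED(data);
          if (sub == REDISMODULE_SUBEVENT_LOADING_RDB_START)
              RedisModule_Log(ctx, "notice", "RDB loading started");
          else if (sub == REDISMODULE_SUBEVENT_LOADING_ENDED)
              RedisModule_Log(ctx, "notice", "loading finished successfully");
          else if (sub == REDISMODULE_SUBEVENT_LOADING_FAILED)
              RedisModule_Log(ctx, "warning", "loading failed");
      }

      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "loadhook", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Loading, loadingCallback);
          return REDISMODULE_OK;
      }
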
      51c3ff8d
  18. 24 Oct, 2019 1 commit
  19. 15 Oct, 2019 1 commit
  20. 07 Oct, 2019 2 commits
    • diskless replication rdb transfer uses pipe, and writes to sockets from the parent process. · 5a477946
      Oran Agra authored
      misc:
      - handle SSL_has_pending by iterating through these in beforeSleep, and setting a timeout of 0 for aeProcessEvents
      - fix an issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed)
      - add key-load-delay config for testing
      - trim connShutdown which is no longer needed
      - rioFdsetWrite -> rioFdWrite - simplified since there's no longer a need to write to multiple FDs
      - don't detect that the rdb child exited (don't call wait3) until we detect the pipe is closed
      - cleanup a bad optimization from rio.c, add another one
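
      A very rough sketch of the new data path in the title (illustrative, not the
      actual code): the fork writes the rdb into a pipe, and a read handler in the
      parent fans the bytes out to every replica socket.

      void rdbPipeReadHandler(struct aeEventLoop *el, int fd, void *clientData, int mask) {
          UNUSED(el); UNUSED(clientData); UNUSED(mask);
          char buf[16*1024];
          ssize_t nread = read(fd, buf, sizeof(buf));   /* fd: read end of the pipe */
          if (nread <= 0) {
              /* EOF/error: the child is done (or died); only now reap it
               * (the "don't call wait3 until the pipe is closed" item above). */
              return;
          }
          /* Write buf[0..nread) to each replica connection; buffer what doesn't
           * fit and install a write handler for the slow replicas. */
      }
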
      5a477946
    • TLS: Connections refactoring and TLS support. · b087dd1d
      Yossi Gottlieb authored
      * Introduce a connection abstraction layer for all socket operations and
      integrate it across the code base.
      * Provide an optional TLS connections implementation based on OpenSSL.
      * Pull a newer version of hiredis with TLS support.
      * Tests, redis-cli updates for TLS support.
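
      The abstraction layer boils down to a vtable of socket operations that a
      plain-TCP backend and a TLS backend both implement (a reduced sketch; the
      real struct in connection.h has more operations):

      typedef struct connection connection;

      typedef struct ConnectionType {
          int (*connect)(connection *conn, const char *addr, int port,
                         const char *src_addr, void (*connect_handler)(connection *));
          int (*write)(connection *conn, const void *data, size_t len);
          int (*read)(connection *conn, void *buf, size_t len);
          void (*close)(connection *conn);
          int (*set_read_handler)(connection *conn, void (*handler)(connection *));
      } ConnectionType;

      struct connection {
          ConnectionType *type;   /* e.g. plain sockets vs. the OpenSSL backend */
          int fd;
          void *private_data;     /* the SSL object for the TLS implementation */
      };
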
      b087dd1d
  21. 27 Sep, 2019 2 commits
  22. 06 Sep, 2019 1 commit
  23. 05 Sep, 2019 1 commit
  24. 22 Jul, 2019 1 commit
  25. 19 Jul, 2019 1 commit
  26. 18 Jul, 2019 3 commits
  27. 17 Jul, 2019 5 commits
  28. 08 Jul, 2019 1 commit
    • diskless replication on slave side (don't store rdb to file), plus some other related fixes · 2de544cf
      Oran Agra authored
      The implementation of diskless replication was so far diskless only on the master side.
      The slave side was still storing the received rdb file to disk before loading it back in and parsing it.
      
      This commit adds two modes to load the rdb directly from the socket:
      1) when-empty
      2) using "swapdb"
      A third mode, diskless slave via flushdb, is risky and currently not included.
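
      A sketch of how a replica might pick its loading strategy from such a
      setting (illustrative enum and helper, not the actual code):

      typedef enum {
          DISKLESS_LOAD_DISABLED,   /* store the rdb to disk first (old behavior) */
          DISKLESS_LOAD_WHEN_EMPTY, /* parse straight from the socket only if the db is empty */
          DISKLESS_LOAD_SWAPDB      /* keep the old db aside and restore it on failure */
      } disklessLoadMode;

      static int useDisklessLoad(disklessLoadMode mode, int db_is_empty) {
          if (mode == DISKLESS_LOAD_SWAPDB) return 1;
          if (mode == DISKLESS_LOAD_WHEN_EMPTY && db_is_empty) return 1;
          return 0;   /* fall back to writing the rdb to a temp file */
      }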
      
      other changes:
      --------------
      distinguish between aof configuration and state, so that we can re-enable aof only when sync eventually
      succeeds (and not when exiting from readSyncBulkPayload after a failed attempt);
      also, CONFIG GET and INFO during rdb loading would otherwise have lied
      
      When loading the rdb from the network, don't kill the server on a short read (that can be a network error)
      
      Fix the rdb check when performed on a preamble AOF
      
      tests:
      run replication tests for the diskless slave too
      make the replication tests a bit more aggressive
      Add a test for diskless load with swapdb
      2de544cf