1. 06 Sep, 2020 1 commit
    • if diskless repl child is killed, make sure to reap the pid (#7742) · 573246f7
      Oran Agra authored
      Starting with Redis 6.0 and the changes we made to the diskless master to be
      suitable for TLS, I made the master avoid reaping (wait3) the pid of the
      child until we know all replicas are done reading their rdb.
      
      I did that in order to avoid a state where the rdb_child_pid is -1 but
      we don't yet want to start another fork (still busy serving that data to
      replicas).
      
      It turns out that the solution used so far was problematic when the fork
      child was killed (e.g. by the kernel OOM killer): in that case there's a
      chance that we had disabled the read event on the rdb pipe, since we're
      waiting for a replica to become writable again, and in that scenario the
      master would never realize the child exited, and the replica would remain
      hung too.
      Note that there's no mechanism to detect a hung replica while it's in
      rdb transfer state.
      
      The solution here is to add another pipe which is used by the parent to
      tell the child it is safe to exit. This means that when the child exits,
      for whatever reason, it is safe to reap it.
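
      As an illustration of the mechanism (a minimal generic POSIX sketch with
      made-up names, not the actual Redis code): the child blocks on a dedicated
      "exit pipe" after finishing its work and only exits once the parent closes
      the write end, so the parent can reap the pid the moment the child is gone,
      for whatever reason:

      #include <stdio.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void) {
          int exit_pipe[2];                 /* parent writes/closes, child reads */
          if (pipe(exit_pipe) == -1) { perror("pipe"); return 1; }

          pid_t pid = fork();
          if (pid == 0) {                   /* child: the snapshot producer */
              close(exit_pipe[1]);          /* child only reads the exit pipe */
              /* ... write the snapshot to the data pipe here ... */
              char buf;
              read(exit_pipe[0], &buf, 1);  /* block until parent says it's safe to exit */
              _exit(0);
          }
          /* parent */
          close(exit_pipe[0]);
          /* ... serve the child's output to replicas; once all are done: ... */
          close(exit_pipe[1]);              /* EOF on the pipe releases the child */
          int status;
          waitpid(pid, &status, 0);         /* reaping is now always safe */
          return 0;
      }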
      
      Besides that, I'm re-introducing an adjustment to REPLCONF ACK which was
      part of #6271 (Accelerate diskless master connections) but was dropped
      when that PR was rebased after the TLS fork/pipe changes (5a477946).
      Now that RdbPipeCleanup no longer calls checkChildrenDone, and the ACK
      has a chance to detect that the child exited, it should be the one to call
      it so that we don't have to wait for cron (server.hz) to do that.
  2. 03 Sep, 2020 1 commit
    • Run active defrag while blocked / loading (#7726) · 9ef8d2f6
      Oran Agra authored
      During long running scripts or loading RDB/AOF, we may need to do some
      defragging. Since processEventsWhileBlocked is called periodically at
      unknown intervals, and many cron jobs either depend on run_with_period
      (including active defrag) or rely on being called at server.hz rate
      (i.e. active defrag knows how much time to run by looking at server.hz),
      the whileBlockedCron may have to run a loop triggering the cron jobs in it
      (currently only active defrag) several times.
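
      A rough sketch of that catch-up idea (generic C with hypothetical names,
      not the Redis implementation): run the blocked-time cron once per elapsed
      hz tick since the last run, so a job sized for server.hz still gets its
      time budget even when it is invoked at irregular intervals.

      #include <stdio.h>
      #include <stdint.h>
      #include <time.h>

      static uint64_t now_ms(void) {                 /* monotonic clock, milliseconds */
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
      }

      static void blocked_cron_job(void) {           /* stand-in for e.g. one defrag step */
          printf("cron tick\n");
      }

      /* Called from the events-while-blocked path at unknown intervals:
       * trigger the job once per missed hz tick to catch up. */
      static void while_blocked_cron(int hz, uint64_t *last_run_ms) {
          uint64_t period = 1000 / hz;
          uint64_t now = now_ms();
          while (now - *last_run_ms >= period) {
              blocked_cron_job();
              *last_run_ms += period;
          }
      }

      int main(void) {
          uint64_t last = now_ms();
          struct timespec nap = {0, 250 * 1000000}; /* pretend we were blocked ~250 ms */
          nanosleep(&nap, NULL);
          while_blocked_cron(10, &last);            /* hz = 10: runs the job a few times */
          return 0;
      }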
      
      Other changes:
      - Adding a test for defrag during aof loading.
      - Changing key-load-delay config to take negative values for fractions
        of a microsecond sleep
  3. 27 Aug, 2020 2 commits
    • Fix rejectCommand trims newline in shared error objects, hung clients (#7714) · 9fcd9e19
      Oran Agra authored
      65a3307b (released in 6.0.6) has a side effect: when processCommand
      rejects a command with a pre-made shared object error string, it trims the
      newlines from the end of the string. If that string is later used with
      addReply, the newline will be missing, breaking the protocol and
      leaving the client hung.
      
      It seems that the only scenario in which this happens is when replying with
      -LOADING to some command, and later using that reply from the CONFIG
      SET command (still during loading). This results in a hung client.
      
      Refactoring the code in order to avoid trimming these newlines from
      shared string objects, and do the newline trimming only in other cases
      where it's needed.
      Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
    • Update memory metrics for INFO during loading (#7690) · 8bdcbbb0
      Oran Agra authored
      During a long AOF or RDB loading, the memory stats were not updated, and
      INFO would return stale data, specifically about fragmentation and RSS.
      In the past some of these were sampled directly inside the INFO command,
      but were moved to cron as an optimization.
      
      This commit introduces a concept of loadingCron which should take
      some of the responsibilities of serverCron.
      It attempts to limit its rate to approximately the server Hz, but may
      not be very accurate.
      
      In order to avoid too many system calls, we use the cached ustime, and
      also make sure to update it in both AOF loading and RDB loading inside
      processEventsWhileBlocked (it seems AOF loading was missing it).
  4. 20 Aug, 2020 1 commit
    • Fix flock cluster config may cause failure to restart after kill -9 (#7674) · cbaf3c5b
      杨博东 authored
      
      
      After fork, the child process (redis-aof-rewrite) inherits the fd opened
      by the parent process (redis). When redis is killed by kill -9, it does not
      exit gracefully (prepareForShutdown() is not called), so the redis-aof-rewrite
      process may still be alive, the fd (lock) will still be held by it, and a
      redis restart will fail to acquire the lock, i.e. fail to start.
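
      For illustration, a small generic sketch (not the Redis patch itself) of
      why the lock outlives kill -9 and how closing the inherited fd in the
      child avoids it: an flock() belongs to the open file description, which
      the forked child shares until it closes its own copy of the fd.

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/file.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void) {
          int fd = open("/tmp/demo-cluster.conf", O_CREAT | O_RDWR, 0644);
          if (fd == -1 || flock(fd, LOCK_EX | LOCK_NB) == -1) {
              perror("lock");             /* someone else still holds the lock */
              return 1;
          }
          pid_t pid = fork();
          if (pid == 0) {                 /* child, think: the rewrite process */
              close(fd);                  /* drop the inherited lock fd right away;
                                           * otherwise, if the parent dies from kill -9,
                                           * the child keeps the flock alive and a newly
                                           * started parent cannot acquire it */
              sleep(5);                   /* child goes on doing its own work */
              _exit(0);
          }
          /* parent holds the lock; it is released when the last fd referring
           * to this open file description is closed */
          waitpid(pid, NULL, 0);
          return 0;
      }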
      
      This issue was causing failures in the cluster tests in GitHub Actions.
      Co-authored-by: Oran Agra <oran@redislabs.com>
  5. 12 Aug, 2020 1 commit
    • Add oom-score-adj configuration option to control Linux OOM killer. (#1690) · 2530dc0e
      Yossi Gottlieb authored
      Add Linux kernel OOM killer control option.
      
      This adds the ability to control the Linux OOM killer oom_score_adj
      parameter for all Redis processes, depending on the process role (i.e.
      master, replica, background child).
      
      An oom-score-adj global boolean flag controls this feature. In addition,
      specific values can be configured using oom-score-adj-values if
      additional tuning is required.
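
      On Linux the underlying knob is per-process: each role can write its
      adjustment to /proc/self/oom_score_adj (range -1000..1000). A minimal
      generic sketch of that mechanism, not the Redis code:

      #include <stdio.h>

      /* Write an OOM-killer adjustment for the calling process: -1000 makes it
       * essentially unkillable, 1000 makes it the preferred victim. */
      static int set_oom_score_adj(int value) {
          FILE *f = fopen("/proc/self/oom_score_adj", "w");
          if (!f) return -1;
          int rc = fprintf(f, "%d\n", value) > 0 ? 0 : -1;
          fclose(f);
          return rc;
      }

      int main(void) {
          /* e.g. a master could use a low value and a background child a higher one */
          if (set_oom_score_adj(200) != 0) perror("oom_score_adj");
          return 0;
      }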
  6. 11 Aug, 2020 3 commits
    • Implement SMISMEMBER key member [member ...] (#7615) · 6f11acbd
      Tyson Andre authored
      
      
      This is a rebased version of #3078 originally by shaharmor
      with the following patches by TysonAndre made after rebasing
      to work with the updated C API:
      
      1. Add 2 more unit tests
         (wrong argument count error message, integer over 64 bits)
      2. Use addReplyArrayLen instead of addReplyMultiBulkLen.
      3. Undo changes to src/help.h - for the ZMSCORE PR,
         I heard those should instead be automatically
         generated from the redis-doc repo if it gets updated
      
      Motivations:
      
      - Example use case: Client code to efficiently check if each element of a set
        of 1000 items is a member of a set of 10 million items.
        (Similar to reasons for working on #7593)
      - HMGET and ZMSCORE already exist. This may lead to developers deciding
        to implement functionality that's best suited to a regular set using a
        sorted set or hash map instead, just for the multi-get support.
      
      Currently, using multi commands or Lua scripting to call SISMEMBER multiple
      times would almost certainly be less efficient than a native SMISMEMBER,
      for the following reasons (see the sketch after this list):
      
      - Need to fetch the set from the string every time
        instead of reusing the C pointer.
      - Using pipelining or multi-commands would result in more bytes sent
        and received by the client for the repeated SISMEMBER KEY sections.
      - Need to specially encode the data and decode it from the client
        for lua-based solutions.
      - Proposed solutions using Lua or SADD/SDIFF could trigger writes to
        memory, which is undesirable on a redis replica server
        or when commands get replicated to replicas.
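
      For example, a client-side membership check of several elements in one
      round trip could look roughly like this (a sketch using the hiredis client;
      the key and member names are made up):

      #include <stdio.h>
      #include <hiredis/hiredis.h>

      int main(void) {
          redisContext *c = redisConnect("127.0.0.1", 6379);
          if (!c || c->err) { fprintf(stderr, "connect failed\n"); return 1; }

          /* one command, one reply: an array of 0/1 flags, one per member asked */
          redisReply *r = redisCommand(c, "SMISMEMBER myset a b c");
          if (r && r->type == REDIS_REPLY_ARRAY) {
              for (size_t i = 0; i < r->elements; i++)
                  printf("member %zu: %s\n", i,
                         r->element[i]->integer ? "present" : "absent");
          }
          freeReplyObject(r);
          redisFree(c);
          return 0;
      }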
      Co-Authored-By: Shahar Mor <shahar@peer5.com>
      Co-Authored-By: Tyson Andre <tysonandre775@hotmail.com>
    • Fix comment about ACLGetCommandPerm() · 59d437c7
      Rajat Pawar authored
    • Avoid redundant calls to signalKeyAsReady (#7625) · 229327ad
      杨博东 authored
      signalKeyAsReady has some overhead (namely dictFind), so we should
      only call it when there are clients blocked on the relevant type (BLOCKED_*).
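
      In other words, the call gains a cheap guard along these lines (a
      hypothetical self-contained sketch; the names are made up and do not
      match the actual Redis fields):

      #include <stdio.h>

      enum { BLOCKED_LIST, BLOCKED_ZSET, BLOCKED_STREAM, BLOCKED_NUM };

      /* hypothetical counter: how many clients are blocked per BLOCKED_* type */
      static int blocked_clients_by_type[BLOCKED_NUM];

      static void signal_key_as_ready(const char *key, int btype) {
          /* skip the dictFind-backed ready-keys bookkeeping entirely
           * when no client is blocked on this type of key */
          if (blocked_clients_by_type[btype] == 0) return;
          printf("queue %s for unblocking checks\n", key);
      }

      int main(void) {
          signal_key_as_ready("mylist", BLOCKED_LIST);   /* no-op: nobody blocked */
          blocked_clients_by_type[BLOCKED_LIST] = 1;
          signal_key_as_ready("mylist", BLOCKED_LIST);   /* now does the work */
          return 0;
      }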
  7. 08 Aug, 2020 1 commit
    • Fix applying zero offset to null pointer when creating moduleFreeContextReusedClient (#7323) · 1ef014ee
      Wang Yuan authored
      Before this fix we were attempting to select a DB before creating the DB; see #7323.
      
      This issue doesn't seem to have any implications, since the selected DB index is 0,
      the db pointer remains NULL, and will later be correctly set before using this dummy
      client for the first time.
      
      As we know, we call 'moduleInitModulesSystem()' before 'initServer()'. We allocate
      memory for server.db in 'initServer', but we call 'createClient()' (which calls 'selectDb()')
      in 'moduleInitModulesSystem()', before the databases were created. Instead, we should call
      'createClient()' for moduleFreeContextReusedClient after 'initServer()'.
  8. 06 Aug, 2020 2 commits
    • Accelerate diskless master connections, and general re-connections (#6271) · c17e597d
      Oran Agra authored
      Diskless master has some inherent latencies.
      1) fork starts with a delay from cron rather than immediately
      2) replica is put online only after an ACK, but the ACK
         was sent only once a second
      3) even if the ACK arrived immediately, it would not
         register in case cron didn't yet detect that the fork is done
      
      Besides that, when a replica disconnects, it doesn't immediately
      attempt to re-connect; it waits for replication cron (once per second).
      In case it was already online, it may be important to try to re-connect
      as soon as possible, so that the backlog at the master doesn't vanish.
      
      In case it disconnected during rdb transfer, one can argue that it's
      not very important to re-connect immediately, but this is needed for the
      "diskless loading short read" test to be able to run 100 iterations in 5
      seconds, rather than 3 (waiting for replication cron re-connection)
      
      changes in this commit:
      1) the sync command starts a fork immediately if no sync_delay is configured
      2) the replica sends REPLCONF ACK when done reading the rdb (rather than on the 1s cron)
      3) when a replica unexpectedly disconnects, it immediately tries to
         re-connect rather than waiting 1s
      4) when a child exits, if there is another replica waiting, we spawn a new
         one right away, instead of waiting for the 1s replicationCron
      5) added a call to connectWithMaster from replicationSetMaster, which is called
         from the REPLICAOF command but also in 3 places in cluster.c; in all of
         these the connection attempt will now be immediate instead of delayed by 1
         second
      
      side note:
      we can add a call to rdbPipeReadHandler in replconfCommand when getting
      a REPLCONF ACK from the replica, to solve a race where the replica got
      the entire rdb and EOF marker before we detected that the pipe was
      closed. In the test I did see this race happen in about one of some 300
      runs, but I concluded that this race is unlikely in real life (where the
      replica is on another host and we're more likely to first detect that the
      pipe was closed). The test runs 100 iterations in 3 seconds, so in some
      cases it'll take 4 seconds instead (waiting for another REPLCONF ACK).
      
      Removing unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave:
      now that checkChildrenDone is calling the new replicationStartPendingFork
      (extracted from serverCron) there's actually no need to call
      startBgsaveForReplication from updateSlavesWaitingForBgsave anymore,
      since as soon as updateSlavesWaitingForBgsave returns, checkChildrenDone
      calls replicationStartPendingFork, which handles that anyway.
      The code in updateSlavesWaitingForBgsave had a bug in which it ignored
      repl-diskless-sync-delay, but removing that code shows that this bug was
      hiding another bug, which is that max_idle should have used >= and
      not >; this one second delay has a big impact on my new test.
    • Assertion and panic, print crash log without generating SIGSEGV · 90b717e7
      Oran Agra authored
      This makes it possible to add tests that generate assertions, and run
      them with valgrind, making sure that there are no memory violations
      prior to the assertion.
      
      New config options:
      - crash-log-enabled - can be disabled for cleaner core dumps
      - crash-memcheck-enabled - useful for faster termination after a crash
      - use-exit-on-panic - to be used by the test suite so that valgrind can
        detect leaks and memory corruptions (see the sketch after this list)
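
      To illustrate what use-exit-on-panic buys (a generic sketch, not the actual
      Redis handler): exiting via exit() lets the process tear down normally,
      which is what the test suite under valgrind wants, while abort() raises
      SIGABRT, which is better for producing a core dump.

      #include <stdio.h>
      #include <stdlib.h>

      static int use_exit_on_panic = 1;   /* would come from config in a real server */

      static void panic(const char *msg) {
          fprintf(stderr, "PANIC: %s\n", msg);
          /* ... print the crash log here ... */
          if (use_exit_on_panic)
              exit(1);    /* clean termination, friendly to valgrind runs */
          else
              abort();    /* SIGABRT: produces a core dump if enabled */
      }

      int main(void) {
          panic("something impossible happened");
          return 0;
      }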
      
      Other changes:
      - Crash log is printed even on systems that don't HAVE_BACKTRACE, i.e. in
        both SIGSEGV and assert / panic
      - Assertion and panic won't print registers and code around EIP (which
        was useless), but will do a fast memory test (which may still indicate
        that the assertion was due to memory corruption)
      
      I had to reshuffle code in order to re-use it, so I extracted some code
      into functions without actually doing any changes to the code:
      - logServerInfo
      - logModulesInfo
      - doFastMemoryTest (with the exception of it being conditional)
      - dumpCodeAroundEIP
      
      changes to the crash report on segfault:
      - logRegisters is called right after the stack trace (before info), done
        just in order to have more re-usable code
      - stack trace skips the first two items on the stack (the crash log and
        signal handler functions)
  9. 04 Aug, 2020 1 commit
    • Add a ZMSCORE command returning an array of scores. (#7593) · f11f26cc
      Tyson Andre authored
      
      
      Syntax: `ZMSCORE KEY MEMBER [MEMBER ...]`
      
      This is an extension of #2359
      amended by Tyson Andre to work with the changed unstable API,
      add more tests, and consistently return an array.
      
      - It seemed as if it would be more likely to get reviewed
        after updating the implementation.
      
      Currently, using multi commands or Lua scripting to call ZSCORE multiple
      times would almost certainly be less efficient than a native ZMSCORE,
      for the following reasons (see the sketch after this list):
      
      - Need to fetch the set from the string every time instead of reusing the C
        pointer.
      - Using pipelining or multi-commands would result in more bytes sent by
        the client for the repeated `ZMSCORE KEY` sections.
      - Need to specially encode the data and decode it from the client
        for lua-based solutions.
      - The fastest solution I've seen for large sets (thousands or millions)
        involves Lua and a variadic ZADD, then a ZINTERSECT, then a ZRANGE 0 -1,
        then UNLINK of a temporary set (or doing it all in Lua). This is still
        inefficient.
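
      Sketch of reading the reply on the client side (hiredis again; the key and
      member names are made up): each position holds either the score as a bulk
      string or a nil for a member that is not in the sorted set.

      #include <stdio.h>
      #include <hiredis/hiredis.h>

      int main(void) {
          redisContext *c = redisConnect("127.0.0.1", 6379);
          if (!c || c->err) { fprintf(stderr, "connect failed\n"); return 1; }

          redisReply *r = redisCommand(c, "ZMSCORE myzset alice bob missing");
          if (r && r->type == REDIS_REPLY_ARRAY) {
              for (size_t i = 0; i < r->elements; i++) {
                  if (r->element[i]->type == REDIS_REPLY_NIL)
                      printf("member %zu: not in the zset\n", i);
                  else
                      printf("member %zu: score %s\n", i, r->element[i]->str);
              }
          }
          freeReplyObject(r);
          redisFree(c);
          return 0;
      }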
      Co-authored-by: Tyson Andre <tysonandre775@hotmail.com>
  10. 29 Jul, 2020 1 commit
  11. 28 Jul, 2020 1 commit
  12. 26 Jul, 2020 1 commit
  13. 23 Jul, 2020 1 commit
  14. 10 Jul, 2020 2 commits
  15. 23 Jun, 2020 1 commit
    • EXEC always fails with EXECABORT and multi-state is cleared · 65a3307b
      Oran Agra authored
      In order to support the use of multi-exec in pipeline, it is important that
      MULTI and EXEC are never rejected and it is easy for the client to know if the
      connection is still in multi state.
      
      It was easy to make sure MULTI and DISCARD never fail (done by previous
      commits) since these only change the client state and don't do any actual
      change in the server, but EXEC is a different story.
      
      Since in the past it was possible for clients to handle some EXEC errors and
      retry the EXEC, we now can't afford to return any error on EXEC other than
      EXECABORT, which now carries with it the real reason for the abort too.
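
      From the client's point of view that looks roughly like this (a hiredis
      sketch): if anything failed to queue, EXEC itself comes back as a single
      -EXECABORT error and the connection is no longer in multi state.

      #include <stdio.h>
      #include <hiredis/hiredis.h>

      int main(void) {
          redisContext *c = redisConnect("127.0.0.1", 6379);
          if (!c || c->err) { fprintf(stderr, "connect failed\n"); return 1; }

          redisReply *r = redisCommand(c, "MULTI");
          freeReplyObject(r);
          r = redisCommand(c, "INCR");              /* wrong arity: rejected at queue time */
          if (r->type == REDIS_REPLY_ERROR) printf("queue error: %s\n", r->str);
          freeReplyObject(r);
          r = redisCommand(c, "EXEC");              /* the only error to expect: -EXECABORT */
          if (r->type == REDIS_REPLY_ERROR) printf("exec error: %s\n", r->str);
          freeReplyObject(r);
          redisFree(c);
          return 0;
      }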
      
      Other fixes in this commit:
      - Some checks that were performed at the time of queuing need to be re-
        validated when EXEC runs; for instance, if the transaction contains write
        commands, it needs to be aborted. There was one check that was already done
        in execCommand (-READONLY), but other checks were missing: -OOM, -MISCONF,
        -NOREPLICAS, -MASTERDOWN
      - When a command is rejected by processCommand it was rejected with addReply,
        which was not recognized as an error in case the bad command came from the
        master. This will enable counting or MONITORing these errors in the future.
      - Make it easier for tests to create additional (non-deferred) clients.
      - Add tests for the fixes of this commit.
  16. 10 Jun, 2020 2 commits
  17. 28 May, 2020 1 commit
  18. 27 May, 2020 1 commit
    • Remove the meaningful offset feature. · 22472fe5
      antirez authored
      After a closer look, the Redis core developers all believe that this was
      too fragile, caused many bugs that we didn't expect and that were very
      hard to track. Better to find an alternative solution that is simpler.
  19. 22 May, 2020 1 commit
    • Make disconnectSlaves() synchronous in the base case. · adc5df1b
      antirez authored
      Otherwise we run into that:
      
      Backtrace:
      src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
      src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
      /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
      src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
      src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
      src/redis-server 127.0.0.1:21322[0x4552d4]
      src/redis-server 127.0.0.1:21322[0x4c5593]
      src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
      src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
      src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
      /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
      src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]
      
      This happens since we disconnect all the replicas and free the replication
      backlog in certain replication paths, and the code that frees the replication
      backlog expects that no replica is connected.
      
      However we still need to free the replicas asynchronously in certain
      cases, as documented in the top comment of disconnectSlaves().
  20. 15 May, 2020 1 commit
    • Cache master without checking of deferred close flags. · 1a7cd2c0
      antirez authored
      The context is issue #7205: since the introduction of threaded I/O we close
      clients asynchronously by default from readQueryFromClient(). So we
      should no longer prevent the caching of the master client, to later
      PSYNC incrementally, if such flags are set. However we also don't want
      the master client to be cached with such flags (would be closed
      immediately after being restored). And yet we want a way to understand
      if a master was closed because of a protocol error, and in that case
      prevent the caching.
  21. 14 May, 2020 1 commit
  22. 05 May, 2020 1 commit
  23. 02 May, 2020 2 commits
    • Support setcpuaffinity on linux/bsd · 1a0deab2
      zhenwei pi authored
      Currently, there are several types of threads/child processes of a
      redis server. Sometimes we need to deeply optimise the performance of
      redis, so we would like to isolate threads/processes.
      
      There was some discussion about cpu affinity cases in the issue:
      https://github.com/antirez/redis/issues/2863
      
      So this patch implements cpu affinity settings via redis.conf: we can
      configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
      bgsave_cpulist with a cpu list.
      
      Examples of cpulist in redis.conf:
      server_cpulist 0-7:2      means cpu affinity 0,2,4,6
      bio_cpulist 1,3           means cpu affinity 1,3
      aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
      bgsave_cpulist 1,10-11    means cpu affinity 1,10,11
      
      Tested on linux/freebsd, both work fine.
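
      On Linux this ultimately boils down to a call like the following (a generic
      sketch; FreeBSD uses cpuset_setaffinity() instead, and the real patch also
      parses the cpulist syntax shown above):

      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>

      int main(void) {
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(0, &set);             /* e.g. "bio_cpulist 0,2" would set cpus 0 and 2 */
          CPU_SET(2, &set);
          if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = calling process */
              perror("sched_setaffinity");
              return 1;
          }
          printf("pinned to cpus 0 and 2\n");
          return 0;
      }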
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
  24. 27 Apr, 2020 1 commit
    • Keep track of meaningful replication offset in replicas too · 4447ddc8
      Oran Agra authored
      Now both master and replicas keep track of the last replication offset
      that contains meaningful data (ignoring the trailing pings), and both
      trim that tail from the replication backlog and from the offset they
      try to use for psync.
      
      The implication is that if someone missed some pings, or even has
      excessive pings that the promoted replica has, it'll still be able to
      psync (avoid a full sync).
      
      The downside (which was already committed) is that replicas running old
      code may fail to psync, since the promoted replica trims pings from its
      backlog.
      
      This commit adds a test that reproduces several cases of promotions and
      demotions with stale and non-stale pings
      
      Background:
      The meaningful offset on the master was added recently to solve a problem where
      the master is left all alone, injecting PINGs into its backlog when no one is
      listening, and then gets demoted and tries to replicate from a replica that didn't
      have any of the PINGs (or at least not the last ones).
      
      However, consider this case:
      master A has two replicas (B and C) replicating directly from it.
      There's no traffic at all, and also no network issues, just many pings in the
      tail of the backlog. Now B gets promoted, A becomes a replica of B, and C
      remains a replica of A. When A gets demoted, it trims the pings from its
      backlog and successfully replicates from B. However, C is still aware of
      these PINGs; when it disconnects and re-connects to A, it'll ask for something
      that's not in the backlog anymore (since A trimmed the tail of its backlog),
      and be forced to do a full sync (something it didn't have to do before the
      meaningful offset fix).
      
      Besides that, the psync2 test was always failing randomly here and there; it
      turns out the reason was PINGs. Investigating it shows the following scenario:
      
      cycle 1: redis #1 is master, and all the rest are direct replicas of #1
      cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
      now we see that when #1 is demoted it prints:
      17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
      17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
      17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
      and when #3 connects to the demoted #2, #2 says:
      17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964
      
      So the issue here is that the meaningful offset feature saved the day for the
      demoted master (since it needs to sync from a replica that didn't get the last
      ping), but it didn't help one of the other replicas, which did get the last ping.
  25. 24 Apr, 2020 1 commit
    • LCS -> STRALGO LCS. · 8a7f255c
      antirez authored
      STRALGO should be a container for mostly read-only string
      algorithms in Redis. The algorithms should have two main
      characteristics:
      
      1. They should be non trivial to compute, and often not part of
      programming language standard libraries.
      2. They should be fast enough that it is a good idea to have optimized C
      implementations.
      
      Next thing I would love to see? A small strings compression algorithm.
  26. 21 Apr, 2020 1 commit
  27. 09 Apr, 2020 4 commits
  28. 07 Apr, 2020 1 commit
    • Speedup INFO by counting client memory incrementally. · f6987628
      antirez authored
      Related to #5145.
      
      Design note: clients may change type when they turn into replicas or are
      moved into the Pub/Sub category and so forth. Moreover the recomputation
      of the bytes used is problematic for obvious reasons: it changes
      continuously, so as a conservative way to avoid accumulating errors,
      each client remembers the contribution it gave to the sum, and removes
      it when it is freed or before updating it with the new memory usage.
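
      The bookkeeping pattern is simply the following (a sketch with made-up
      names, not the actual Redis structs): subtract the remembered contribution
      before adding the fresh one, so the per-category total never drifts.

      #include <stdio.h>
      #include <stddef.h>

      struct client_mem { size_t last_contribution; };

      /* Update one client's share of the per-category total without rescanning
       * every client: remove the stale contribution, add the fresh measurement. */
      static void update_client_mem(struct client_mem *c, size_t *category_total,
                                    size_t current_usage) {
          *category_total -= c->last_contribution;
          *category_total += current_usage;
          c->last_contribution = current_usage;  /* subtracted again when the client is freed */
      }

      int main(void) {
          size_t normal_clients_mem = 0;
          struct client_mem c1 = {0};
          update_client_mem(&c1, &normal_clients_mem, 16384);
          update_client_mem(&c1, &normal_clients_mem, 20480); /* grows by the delta only */
          printf("total: %zu\n", normal_clients_mem);         /* prints 20480 */
          return 0;
      }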
  29. 01 Apr, 2020 1 commit
  30. 31 Mar, 2020 1 commit