  1. 04 Aug, 2020 1 commit
    • Tyson Andre
      Add a ZMSCORE command returning an array of scores. (#7593) · f11f26cc
      Tyson Andre authored
      
      
      Syntax: `ZMSCORE KEY MEMBER [MEMBER ...]`
      
      This is an extension of #2359
      amended by Tyson Andre to work with the changed unstable API,
      add more tests, and consistently return an array.
      
      - It seemed more likely to get reviewed after the implementation was
        updated.
      
      Currently, using MULTI commands or Lua scripting to call ZSCORE multiple
      times would almost certainly be less efficient than a native ZMSCORE,
      for the following reasons:
      
      - The sorted set needs to be fetched by key lookup on every call instead
        of reusing the C pointer.
      - Using pipelining or multi-commands would result in more bytes sent by
        the client for the repeated `ZMSCORE KEY` sections.
      - Need to specially encode the data and decode it from the client
        for lua-based solutions.
      - The fastest solution I've seen for large sets (thousands or millions of
        members) involves Lua and a variadic ZADD, then a ZINTERSTORE, then a
        ZRANGE 0 -1, then an UNLINK of a temporary set (or Lua). This is still
        inefficient.
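      
      A minimal usage sketch (key and member names are illustrative; a missing
      member yields a nil entry in the reply array):
      
        > ZADD myzset 1 a 2 b
        (integer) 2
        > ZMSCORE myzset a b nosuchmember
        1) "1"
        2) "2"
        3) (nil)
      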
      Co-authored-by: Tyson Andre <tysonandre775@hotmail.com>
  2. 29 Jul, 2020 1 commit
  3. 28 Jul, 2020 1 commit
  4. 26 Jul, 2020 1 commit
  5. 23 Jul, 2020 1 commit
  6. 10 Jul, 2020 2 commits
  7. 23 Jun, 2020 1 commit
    • Oran Agra
      EXEC always fails with EXECABORT and multi-state is cleared · 65a3307b
      Oran Agra authored
      In order to support the use of MULTI/EXEC in pipelines, it is important
      that MULTI and EXEC are never rejected and that it is easy for the client
      to know if the connection is still in MULTI state.
      
      It was easy to make sure MULTI and DISCARD never fail (done by previous
      commits), since these only change the client state and don't make any
      actual change in the server, but EXEC is a different story.
      
      Since in the past it was possible for clients to handle some EXEC errors
      and retry the EXEC, we now can't afford to return any error on EXEC other
      than EXECABORT, which now also carries the real reason for the abort.
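      
      For illustration, a minimal redis-cli session showing the queue-time error
      case (the command name is deliberately invalid; exact error text may vary
      slightly between versions):
      
        > MULTI
        OK
        > NOSUCHCMD
        (error) ERR unknown command `NOSUCHCMD`
        > EXEC
        (error) EXECABORT Transaction discarded because of previous errors.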
      
      Other fixes in this commit:
      - Some checks that were performed at the time of queuing need to be
        re-validated when EXEC runs; for instance, if the transaction contains
        write commands, it needs to be aborted. There was one check that was
        already done in execCommand (-READONLY), but other checks were missing:
        -OOM, -MISCONF, -NOREPLICAS, -MASTERDOWN.
      - When a command was rejected by processCommand it was rejected with
        addReply, which was not recognized as an error in case the bad command
        came from the master. This will make it possible to count or MONITOR
        these errors in the future.
      - Make it easier for tests to create additional (non-deferred) clients.
      - Add tests for the fixes of this commit.
  8. 10 Jun, 2020 2 commits
  9. 28 May, 2020 1 commit
  10. 27 May, 2020 1 commit
    • antirez
      Remove the meaningful offset feature. · 22472fe5
      antirez authored
      After a closer look, the Redis core developers all believe that this was
      too fragile, caused many bugs that we didn't expect and that were very
      hard to track. Better to find an alternative solution that is simpler.
  11. 22 May, 2020 1 commit
    • antirez
      Make disconnectSlaves() synchronous in the base case. · adc5df1b
      antirez authored
      Otherwise we run into the following crash:
      
      Backtrace:
      src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
      src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
      /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
      src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
      src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
      src/redis-server 127.0.0.1:21322[0x4552d4]
      src/redis-server 127.0.0.1:21322[0x4c5593]
      src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
      src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
      src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
      /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
      src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]
      
      This happens because we disconnect all the replicas and free the
      replication backlog in certain replication paths, while the code that
      frees the replication backlog expects that no replica is connected.
      
      However we still need to free the replicas asynchronously in certain
      cases, as documented in the top comment of disconnectSlaves().
  12. 15 May, 2020 1 commit
    • antirez
      Cache master without checking deferred close flags. · 1a7cd2c0
      antirez authored
      The context is issue #7205: since the introduction of threaded I/O we
      close clients asynchronously by default from readQueryFromClient(). So we
      should no longer prevent the caching of the master client, for a later
      incremental PSYNC, if such flags are set. However we also don't want the
      master client to be cached with such flags (it would be closed
      immediately after being restored). And yet we want a way to understand
      if a master was closed because of a protocol error, and in that case
      prevent the caching.
  13. 14 May, 2020 1 commit
  14. 05 May, 2020 1 commit
  15. 02 May, 2020 2 commits
    • hwware
    • zhenwei pi
      Support setcpuaffinity on linux/bsd · 1a0deab2
      zhenwei pi authored
      Currently, there are several types of threads/child processes in a
      redis server. Sometimes we need to deeply optimise the performance of
      redis, so we would like to isolate these threads/processes.
      
      There was some discussion about CPU affinity cases in this issue:
      https://github.com/antirez/redis/issues/2863
      
      
      
      So this patch implements CPU affinity setting via redis.conf; we can now
      configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/bgsave_cpulist
      as CPU lists.
      
      Examples of cpulist in redis.conf:
      server_cpulist 0-7:2      means cpu affinity 0,2,4,6
      bio_cpulist 1,3           means cpu affinity 1,3
      aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
      bgsave_cpulist 1,10-11    means cpu affinity 1,10,11
      
      Tested on Linux/FreeBSD; both work fine.
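      
      One way to verify that an affinity setting took effect on Linux (PID and
      output are illustrative; taskset is part of util-linux):
      
        $ taskset -cp $(pidof redis-server)
        pid 12345's current affinity list: 0,2,4,6
      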
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
  16. 27 Apr, 2020 1 commit
    • Oran Agra
      Keep track of meaningful replication offset in replicas too · 4447ddc8
      Oran Agra authored
      Now both master and replicas keep track of the last replication offset
      that contains meaningful data (ignoring the trailing pings); both trim
      that tail from the replication backlog, and that is the offset they try
      to use for PSYNC.
      
      The implication is that if a replica missed some pings, or even has
      extra pings that the promoted replica doesn't have, it'll still be able
      to PSYNC (avoiding a full sync).
      
      The downside (which was already committed) is that replicas running old
      code may fail to PSYNC, since the promoted replica trims pings from its
      backlog.
      
      This commit adds a test that reproduces several cases of promotions and
      demotions with stale and non-stale pings.
      
      Background:
      The meaningful offset on the master was added recently to solve a problem
      where the master is left all alone, injecting PINGs into its backlog when
      no one is listening, and then gets demoted and tries to replicate from a
      replica that didn't have any of the PINGs (or at least not the last
      ones).
      
      However, consider this case:
      master A has two replicas (B and C) replicating directly from it.
      There's no traffic at all, and also no network issues, just many pings in
      the tail of the backlog. Now B gets promoted, A becomes a replica of B,
      and C remains a replica of A. When A gets demoted, it trims the pings
      from its backlog and successfully replicates from B. However, C is still
      aware of these PINGs; when it disconnects and re-connects to A, it'll ask
      for something that's not in the backlog anymore (since A trimmed the tail
      of its backlog), and be forced to do a full sync (something it didn't
      have to do before the meaningful offset fix).
      
      Besides that, the psync2 test was always failing randomly here and
      there; it turns out the reason was PINGs. Investigating it shows the
      following scenario:
      
      cycle 1: redis #1 is master, and all the rest are direct replicas of #1
      cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
      Now we see that when #1 is demoted it prints:
      17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
      17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
      17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
      and when #3 connects to the demoted #1, #1 says:
      17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964
      
      So the issue here is that the meaningful offset feature saved the day
      for the demoted master (since it needs to sync from a replica that didn't
      get the last ping), but it didn't help one of the other replicas, which
      did get the last ping.
  17. 24 Apr, 2020 1 commit
    • antirez
      LCS -> STRALGO LCS. · 8a7f255c
      antirez authored
      STRALGO should be a container for mostly read-only string
      algorithms in Redis. The algorithms should have two main
      characteristics:
      
      1. They should be non-trivial to compute, and often not part of
      programming language standard libraries.
      2. They should be fast enough that it is a good idea to have optimized C
      implementations.
      
      Next thing I would love to see? A small strings compression algorithm.
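      
      A brief usage sketch of the renamed command (inputs borrowed from the
      Redis documentation's LCS example; output as rendered by redis-cli 6.0):
      
        > STRALGO LCS STRINGS ohmytext mynewtext
        "mytext"
        > STRALGO LCS LEN STRINGS ohmytext mynewtext
        (integer) 6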
  18. 21 Apr, 2020 1 commit
  19. 09 Apr, 2020 4 commits
  20. 07 Apr, 2020 1 commit
    • antirez
      Speedup INFO by counting client memory incrementally. · f6987628
      antirez authored
      Related to #5145.
      
      Design note: clients may change type when they turn into replicas or are
      moved into the Pub/Sub category and so forth. Moreover the recomputation
      of the bytes used is problematic for obvious reasons: it changes
      continuously, so as a conservative way to avoid accumulating errors,
      each client remembers the contribution it gave to the sum, and removes
      it when it is freed or before updating it with the new memory usage.
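      
      These cached per-client contributions are what back the client memory
      totals reported by INFO; for instance (field names as exposed in the
      Redis 6.x INFO memory section; values illustrative):
      
        > INFO memory
        ...
        mem_clients_slaves:16986
        mem_clients_normal:49694
        ...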
  21. 01 Apr, 2020 1 commit
  22. 31 Mar, 2020 2 commits
    • Guy Benoish
      Modules: Test MULTI/EXEC replication of RM_Replicate · d6eb3afd
      Guy Benoish authored
      Make sure call() doesn't wrap replicated commands with
      a redundant MULTI/EXEC
      
      Other, unrelated changes:
      1. Fix a formatting compiler warning in INFO CLIENTS
      2. Use CLIENT_ID_AOF instead of UINT64_MAX
    • antirez
      Fix module commands propagation double MULTI bug. · 9dcf878f
      antirez authored
      37a10cef introduced automatic wrapping of MULTI/EXEC for the
      alsoPropagate API. However this collides with the built-in mechanism
      already present in module.c. To avoid complex changes near Redis 6 GA,
      this commit introduces the ability to exclude call() MULTI/EXEC wrapping
      for alsoPropagate() in order to continue to use the old code paths in
      module.c.
  23. 27 Mar, 2020 6 commits
  24. 25 Mar, 2020 1 commit
    • antirez
      PSYNC2: meaningful offset implemented. · 57fa355e
      antirez authored
      A very commonly reported operational problem with Redis master-replica
      setups is that, once the master becomes unavailable for some reason,
      especially because of network problems, many times it won't be able to
      perform a partial resynchronization with the new master once it rejoins
      the partition, for the following reason:
      
      1. The master becomes isolated, but it keeps sending PINGs to the
      replicas. Such PINGs will never be received since the link is actually
      already severed.
      2. On the other side, one of the replicas will turn into the new master,
      setting its secondary replication ID offset to the one of the last
      command received from the old master: this offset will not include the
      PINGs sent by the master once the link was already disconnected.
      3. When the master rejoins the partition and is turned into a replica, its
      offset will be too advanced because of the PINGs, so a PSYNC will fail,
      and a full synchronization will be required.
      
      Related to issue #7002 and other discussion we had in the past around
      this problem.
  25. 18 Mar, 2020 1 commit
    • WuYunlong
      Fix master replica inconsistency for upgrading scenario. · f6029fb9
      WuYunlong authored
      Before this commit, when upgrading a replica, expired keys were not
      loaded, causing the replica to have fewer keys in its db. Up to this
      point, master and replica keys are logically consistent. However, before
      the keys in master and replica become physically consistent, that is,
      before they have the same dbsize, if the master has a problem and the
      replica gets promoted, becoming the new master of that partition, and the
      new master updates a key which does not exist on it but physically exists
      on the old master (the new replica), the old master would refuse to
      update the key, leaving master and replica data inconsistent.
      
      How could this happen?
      That's all because of the wrong judgement of roles while starting up
      the server. We cannot use server.masterhost to judge whether the server
      is a master or a replica, since that fails in cluster mode.
      
      When we start the server, we load the RDB and do want to load expired
      keys, but we do not want the ability to actively expire keys if the
      server is a replica.
  26. 16 Mar, 2020 1 commit
  27. 04 Mar, 2020 2 commits