1. 08 Dec, 2020 2 commits
  2. 07 Dec, 2020 3 commits
  3. 06 Dec, 2020 9 commits
    • David CARLIER's avatar
      ec951cdc
    • Oran Agra's avatar
      Sanitize dump payload: performance optimizations and tuning · e288430c
      Oran Agra authored
      First, if the ziplist header is surely inside the ziplist, do fast-path
      decoding rather than the careful one.
      
      In that case, streamline the encoding if-else chain so it is executed only
      once, with the encoding's validity tested at the end.
      
      Encourage inlining.
      
      Add likely / unlikely hints to aid branch prediction.
      
      Assertions used _exit(1) to tell the compiler that the code after them is
      not reachable, and to get rid of warnings.
      
      But in some cases assertions are placed inside tight loops, where any
      piece of code can slow down execution (code cache and other reasons), so
      instead use either abort() or, better yet, the unreachable builtin.
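      
      A minimal illustrative sketch of these two techniques (hypothetical macro names,
      GCC/Clang builtins assumed; not the actual Redis macros or assertion code):
      
          #include <stdio.h>
          #include <stdlib.h>
          #include <stddef.h>
          
          #define likely(x)   __builtin_expect(!!(x), 1)
          #define unlikely(x) __builtin_expect(!!(x), 0)
          
          /* Failure path is cold and marked unreachable, keeping it out of hot loops. */
          #define my_assert(cond) do {                                   \
                  if (unlikely(!(cond))) {                               \
                      fprintf(stderr, "assertion failed: %s\n", #cond);  \
                      abort();                                           \
                      __builtin_unreachable();                           \
                  }                                                      \
              } while (0)
          
          /* Fast path taken when the first byte uses the common small encoding. */
          static int decode_byte(const unsigned char *p, size_t len) {
              my_assert(len >= 1);                 /* cheap check, cold failure path */
              if (likely(p[0] < 0x80)) return p[0];
              return -1;                           /* rare / unknown encoding */
          }
          
          int main(void) {
              unsigned char buf[1] = {42};
              printf("%d\n", decode_byte(buf, sizeof(buf)));
              return 0;
          }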
      e288430c
    • Oran Agra's avatar
      Sanitize dump payload: fail RESTORE if memory allocation fails · 7ca00d69
      Oran Agra authored
      When RDB input attempts to make a huge memory allocation that fails,
      RESTORE should fail gracefully rather than die with a panic.
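      
      A minimal sketch of the idea (hypothetical names, plain malloc rather than the
      Redis allocator): attempt the allocation and report failure to the caller so a
      corrupt length field can't crash the server.
      
          #include <stdio.h>
          #include <stdlib.h>
          #include <stdint.h>
          
          /* Returns NULL instead of aborting; the caller turns NULL into a
           * RESTORE error reply. */
          static void *try_alloc(size_t len) {
              void *p = malloc(len);
              if (p == NULL)
                  fprintf(stderr, "allocation of %zu bytes failed, rejecting payload\n", len);
              return p;
          }
          
          int main(void) {
              void *p = try_alloc(SIZE_MAX / 2);   /* absurd length from a corrupt payload */
              if (p == NULL) return 1;             /* fail gracefully, no panic */
              free(p);
              return 0;
          }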
      7ca00d69
    • Oran Agra's avatar
      Sanitize dump payload: validate no duplicate records in hash/zset/intset · 3716950c
      Oran Agra authored
      If RESTORE passes successfully with full sanitization, we can't afford
      to crash later on an assertion due to duplicate records in a hash when
      converting it from ziplist to dict.
      This means that when doing full sanitization, we must make sure there
      are no duplicate records in any of the collections.
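      
      A toy sketch of that rule (illustrative only, not the Redis validation code):
      reject the payload if the decoded field listing contains the same field twice.
      
          #include <string.h>
          #include <stddef.h>
          
          /* O(n^2) duplicate check over decoded hash fields; the caller
           * rejects the RESTORE when this returns 1. */
          static int has_duplicate_fields(const char **fields, size_t n) {
              for (size_t i = 0; i < n; i++)
                  for (size_t j = i + 1; j < n; j++)
                      if (strcmp(fields[i], fields[j]) == 0) return 1;
              return 0;
          }
          
          int main(void) {
              const char *f[] = {"a", "b", "a"};
              return has_duplicate_fields(f, 3);   /* returns 1: duplicate found */
          }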
      3716950c
    • Oran Agra's avatar
      Sanitize dump payload: fuzz tester and fixes for segfaults and leaks it exposed · c31055db
      Oran Agra authored
      The test creates keys with various encodings, DUMPs them, corrupts the
      payload, and RESTOREs it.
      It utilizes the recently added use-exit-on-panic config to distinguish
      between asserts and segfaults.
      If the restore succeeds, it runs random commands on the key to attempt to
      trigger a crash.
      
      It runs in two modes, one with deep sanitization enabled and one without.
      In the first one we don't expect any assertions or segfaults; in the second
      one we expect assertions, but no segfaults.
      We also check for leaks and invalid reads using valgrind, and if we find them
      we print the commands that lead to that issue.
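      
      The actual test is written in Tcl; purely as an illustration of the corruption
      step, here is a sketch in C of flipping a few random bits in a DUMP payload
      before feeding it back to RESTORE:
      
          #include <stdlib.h>
          #include <stddef.h>
          
          /* Flip nflips random bits somewhere in the payload buffer. */
          static void corrupt_payload(unsigned char *buf, size_t len, int nflips) {
              for (int i = 0; i < nflips && len > 0; i++) {
                  size_t pos = (size_t)rand() % len;                   /* random offset */
                  buf[pos] ^= (unsigned char)(1 << (rand() % 8));      /* random bit */
              }
          }
          
          int main(void) {
              unsigned char payload[] = {0x0b, 0x01, 0x02, 0x03};      /* stand-in bytes */
              corrupt_payload(payload, sizeof(payload), 2);
              return 0;
          }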
      
      Changes in the code (other than the test):
      - Replace a few NPD (null pointer dereference) flows and divisions by zero with
        assertions, so that they don't fail the test (since we set the server to use
        `exit` rather than `abort` on assertion).
      - Fix quite a lot of flows in rdb.c that could have led to memory leaks in the
        RESTORE command (since it now responds with an error rather than panicking).
      - Add a DEBUG flag for SET-SKIP-CHECKSUM-VALIDATION so that the test doesn't need
        to bother with faking a valid checksum.
      - Remove a pile of code in serverLogObjectDebugInfo which is actually unsafe to
        run in the crash report (see comments in the code).
      - Fix a missing boundary check in lzf_decompress.
      
      test suite infra improvements:
      - be able to run valgrind checks before the process terminates
      - rotate log files when restarting servers
      c31055db
    • Oran Agra's avatar
      Sanitize dump payload: ziplist, listpack, zipmap, intset, stream · ca1c1825
      Oran Agra authored
      When loading an encoded payload we will at least do a shallow validation to
      check that the size that's encoded in the payload matches the size of the
      allocation.
      This lets us later use this encoded size to make sure the various offsets
      inside the encoded payload don't reach outside the allocation; if they do,
      we'll assert/panic, but at least we won't segfault or smear memory.
      
      We can also do 'deep' validation which runs on all the records of the encoded
      payload and validates that they don't contain invalid offsets. This lets us
      detect corruptions early and reject a RESTORE command rather than accepting
      it and asserting (crashing) later when accessing that payload via some command.
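      
      A toy illustration of the "shallow" check (hypothetical fixed-size header, not
      the real ziplist/listpack layout): the total size the payload declares about
      itself must agree with the size of the buffer actually received, before any
      offset inside it is trusted.
      
          #include <stdint.h>
          #include <string.h>
          #include <stddef.h>
          
          /* Returns 1 if the header-declared size matches the allocation size. */
          static int shallow_validate(const unsigned char *buf, size_t alloc_len) {
              uint32_t declared;
              if (alloc_len < sizeof(declared)) return 0;   /* too short for a header */
              memcpy(&declared, buf, sizeof(declared));     /* size stored by the encoder */
              return (size_t)declared == alloc_len;         /* offsets can now be bounded */
          }
          
          int main(void) {
              unsigned char buf[8] = {0};
              uint32_t sz = sizeof(buf);
              memcpy(buf, &sz, sizeof(sz));                 /* encoder stores its own size */
              return shallow_validate(buf, sizeof(buf)) ? 0 : 1;
          }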
      
      configuration:
      - adding ACL flag skip-sanitize-payload
      - adding config sanitize-dump-payload [yes/no/clients]
      
      For now, we don't have a good way to ensure MIGRATE in cluster resharding isn't
      being slowed down by this sanitization, so I'm setting the default value to `no`,
      but later on it should be set to `clients` by default.
      
      changes:
      - changing rdbReportError not to `exit` in the RESTORE command
      - adding a new stat to be able to later check if cluster MIGRATE isn't being
        slowed down by sanitization.
      ca1c1825
    • Oran Agra's avatar
      prevent client tracking from causing feedback loop in performEvictions (#8100) · c4fdf09c
      Oran Agra authored
      When client tracking is enabled, signalModifiedKey can increase memory usage;
      this can cause the loop in performEvictions to keep running, since it was
      measuring the memory usage impact of signalModifiedKey.
      
      The section that measures the memory impact of the eviction should cover just
      dbDelete, excluding keyspace notifications, client tracking, and propagation to
      the AOF and replicas.
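      
      A self-contained sketch of that idea with stand-in functions (not the actual
      Redis code): only the delete itself sits inside the measured window, so side
      effects that may allocate can't cancel out the freed memory.
      
          #include <stdio.h>
          #include <stddef.h>
          
          static size_t fake_used_memory = 1000;          /* stand-in allocator counter */
          
          static size_t used_memory(void) { return fake_used_memory; }
          static void delete_key(const char *key) { (void)key; fake_used_memory -= 100; }
          static void notify_key_deleted(const char *key) {
              (void)key;
              fake_used_memory += 10;                     /* tracking may allocate */
          }
          
          /* Returns the memory credited as freed by evicting one key. */
          static long long evict_one(const char *key) {
              long long delta = (long long) used_memory();
              delete_key(key);                            /* measured */
              delta -= (long long) used_memory();
              notify_key_deleted(key);                    /* outside the measured window */
              return delta;
          }
          
          int main(void) {
              printf("freed: %lld\n", evict_one("somekey"));  /* 100, not 90 */
              return 0;
          }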
      
      This resolves part of the problem described in #8069
      p.s. fix took 1 minute, test took about 3 hours to write.
      c4fdf09c
    • guybe7's avatar
      Make sure we do not propagate nested MULTI/EXEC (#8097) · 1df5bb56
      guybe7 authored
      One way this was happening was when a module issued an RM_Call which would inject MULTI.
      If the module command that did that was itself issued by something else that had already
      added MULTI (e.g. another module, or a Lua script), it would have caused a nested MULTI.
      
      In fact, neither the MULTI state in the client nor the MULTI_EMITTED flag in the context
      is the right indication of whether we need to propagate MULTI, because on nested calls
      (possibly a module action called by a keyspace event of another module action), these
      flags aren't retained / reflected.
      
      Instead, there's now a global propagate_in_transaction flag for that.
      
      In addition to that, we now have global in_eval and in_exec flags to serve the flags
      of RM_GetContextFlags, since their dependence on the current client is wrong for the
      same reasons mentioned above.
      1df5bb56
    • Wang Yuan's avatar
      Limit the main db and expires dictionaries to expand (#7954) · 75f9dec6
      Wang Yuan authored
      As we know, Redis may reject users' requests or evict some keys if
      used memory is over maxmemory. Dictionary expansion may make things
      worse: some big dictionaries, such as the main db and expires dicts,
      may eat huge amounts of memory at once to allocate a new big hash
      table, and end up far over maxmemory after expanding.
      Related issues: #4213 #4583
      
      In more detail: when a dict expands in Redis, we allocate a new big
      ht[1] that is generally double the size of ht[0], so the size of ht[1]
      will be very big if ht[0] is already big. For the db dict, with more
      than 64 million keys, ht[1] alone costs 1GB when the dict expands.
      
      If the sum of used memory and the new hash table the dict needs exceeds
      maxmemory, we shouldn't allow the dict to expand. Even if key eviction
      is enabled, we still couldn't add many more keys after eviction and
      rehashing; what's worse, Redis will keep fewer keys when it spends the
      little memory that remains on the new hash table instead of users'
      data. Moreover, users can't write data to Redis at all if key eviction
      is disabled.
      
      What this commit changes:
      
      Add a new member function, expandAllowed, to the dict type; it provides
      a way for the caller to allow or disallow expansion. We expose two
      parameters to this function: the extra memory needed for expanding and
      the dict's current load factor, and users can implement a function that
      decides based on them.
      The main db dict and expires dict types may be very big and cost huge
      memory to expand, so we implement a judgement function for them: we
      provisionally stop the dict from expanding if used memory would exceed
      maxmemory after the expansion, but to guarantee the performance of
      Redis, we still allow the dict to expand if its load factor exceeds the
      safe load factor.
      Add test cases to verify we don't allow the main db to expand when the
      remaining memory is not enough, so that key eviction is avoided.
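      
      A sketch of the decision such a callback might make, with illustrative names
      and thresholds (not the actual Redis implementation): refuse to grow if the
      extra allocation would push usage past maxmemory, unless the load factor is
      already dangerously high.
      
          #include <stdbool.h>
          #include <stddef.h>
          #include <stdio.h>
          
          #define SAFE_LOAD_FACTOR 5.0   /* illustrative threshold */
          
          static bool expand_allowed(size_t mem_needed, double load_factor,
                                     size_t used_memory, size_t maxmemory) {
              if (maxmemory == 0) return true;                  /* no limit configured */
              if (load_factor >= SAFE_LOAD_FACTOR) return true; /* must expand for performance */
              return used_memory + mem_needed <= maxmemory;     /* stay under the cap */
          }
          
          int main(void) {
              /* A 1GB ht[1] would blow past a 1.5GB maxmemory when 1GB is already used. */
              printf("%d\n", expand_allowed(1UL << 30, 1.0, 1UL << 30, 3UL << 29));
              return 0;
          }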
      
      Other changes:
      
      New hash table size when expanding: before this commit, the size was
      double the dict's used count, later passed through _dictNextPower.
      Actually we aim to keep the dict load factor between 0.5 and 1.0. Now
      we replace *2 with +1; since the first check is that used >= size, the
      outcome will usually be the same as _dictNextPower(used+1). The only
      case where it differs is when dict_can_resize is false during fork, so
      that later _dictNextPower(used*2) would cause the dict to jump to *4
      (i.e. _dictNextPower(1025*2) will return 4096).
      Fix rehash test cases affected by the changed algorithm for the new
      hash table size when expanding.
      75f9dec6
  4. 03 Dec, 2020 3 commits
  5. 02 Dec, 2020 2 commits
    • Wang Yuan's avatar
      Backup keys to slots map and restore when fail to sync if diskless-load type... · b55a827e
      Wang Yuan authored
      
      Backup keys to slots map and restore when fail to sync if diskless-load type is swapdb in cluster mode (#8108)
      
      When the replica's diskless-load type is swapdb in cluster mode, we didn't
      back up the keys-to-slots map, so we would lose the keys-to-slots map if the
      sync failed. Now we back up the keys-to-slots map first, and restore it
      properly when the sync fails.
      
      This commit includes a refactoring/cleanup of the backup mechanism (moving it to db.c and re-structuring it a bit).
      Co-authored-by: Oran Agra <oran@redislabs.com>
      b55a827e
    • luhuachao's avatar
      Modify help msg PING_BULK to PING_MBULK in benchmark (#8109) · 7885faf1
      luhuachao authored
      As described in the redis-benchmark help message, 'The test names are the same as the ones produced as output.' However, in redis-benchmark output we can only see PING_BULK, while the command `redis-benchmark -t ping_bulk` is not supported; we have to run it with ping_mbulk instead, which is not user friendly.
      7885faf1
  6. 01 Dec, 2020 3 commits
    • Madelyn Olson's avatar
      Getset fix (#8118) · 69b7113b
      Madelyn Olson authored
      
      
      * Fixed SET GET executing on the wrong type
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      69b7113b
    • sundb's avatar
      Improve dbid range check for SELECT, MOVE, COPY (#8085) · 3ba2281f
      sundb authored
      SELECT used to read the index into a `long` variable, and then pass it to a function
      that takes an `int`, possibly causing an overflow before the range check.
      
      Now all these commands use a better and cleaner range check, which also results
      in a slight change of the error response in case of an invalid database index.
      
      SELECT:
      In the past it would have returned either `-ERR invalid DB index` (if not a number),
      or `-ERR DB index is out of range` (if not between 1..16 or alike).
      Now it'll return either `-ERR value is out of range` (if not a number), or
      `-ERR value is out of range, value must between -2147483648 and 2147483647`
      (if not in the range for an int), or `-ERR DB index is out of range`
      (if not between 0..16 or alike).
      
      
      MOVE:
      In the past it would only fail with `-ERR index out of range` no matter the reason.
      Now it returns the same errors as the new ones for SELECT mentioned above.
      (i.e. unlike for SELECT, even for a value like 17 we changed the error message)
      
      COPY:
      It doesn't really matter how it behaved in the past (it's a new command); the new
      behavior is like the above two.
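      
      A generic sketch of the tightened parsing (hypothetical helper, not the actual
      Redis function): read into a wide integer and confirm it fits an int before it
      is ever used as a DB index.
      
          #include <errno.h>
          #include <limits.h>
          #include <stdio.h>
          #include <stdlib.h>
          
          /* Returns 0 and stores the index on success, -1 on any parse/range error. */
          static int parse_db_index(const char *s, int *out) {
              char *end;
              errno = 0;
              long long v = strtoll(s, &end, 10);
              if (errno != 0 || end == s || *end != '\0') return -1;  /* not a number */
              if (v < INT_MIN || v > INT_MAX) return -1;              /* overflows an int */
              *out = (int) v;     /* the 0..(databases-1) check happens afterwards */
              return 0;
          }
          
          int main(void) {
              int idx;
              printf("%d\n", parse_db_index("99999999999999999999", &idx));  /* -1 */
              return 0;
          }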
      3ba2281f
    • Itamar Haber's avatar
      Adds pub/sub channel patterns to ACL (#7993) · c1b1e8c3
      Itamar Haber authored
      Fixes #7923.
      
      This PR appropriates the special `&` symbol (because `@` and `*` are taken),
      followed by a literal value or pattern for describing the Pub/Sub patterns that
      an ACL user can interact with. It is similar to the existing key patterns
      mechanism in function (additive) and implementation (copy-pasta). It also adds
      the allchannels and resetchannels ACL keywords, naturally.
      
      The default user is given allchannels permissions, whereas new users get
      whatever is defined by the acl-pubsub-default configuration directive. For
      backward compatibility in 6.2, the default of this directive is allchannels but
      this is likely to be changed to resetchannels in the next major version for
      stronger default security settings.
      
      Unless allchannels is set for the user, channel access permissions are checked
      as follows (see the sketch below):
      * Calls to both PUBLISH and SUBSCRIBE will fail unless a pattern matching the
        channel name(s) given as arguments exists for the user.
      * Calls to PSUBSCRIBE will fail unless the pattern(s) provided as an argument
        literally exist(s) in the user's list.
      
      Such failures are logged to the ACL log.
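      
      As an illustration of the two checks above (generic C using fnmatch as a
      stand-in for Redis's glob matcher; not the actual ACL code):
      
          #include <fnmatch.h>
          #include <string.h>
          #include <stdio.h>
          
          /* PUBLISH/SUBSCRIBE: the channel must glob-match one of the user's patterns. */
          static int can_access_channel(const char **patterns, int n, const char *channel) {
              for (int i = 0; i < n; i++)
                  if (fnmatch(patterns[i], channel, 0) == 0) return 1;
              return 0;
          }
          
          /* PSUBSCRIBE: the requested pattern must appear literally in the user's list. */
          static int can_psubscribe(const char **patterns, int n, const char *pattern) {
              for (int i = 0; i < n; i++)
                  if (strcmp(patterns[i], pattern) == 0) return 1;
              return 0;
          }
          
          int main(void) {
              const char *allowed[] = {"news.*", "chat.lobby"};
              printf("%d %d\n",
                     can_access_channel(allowed, 2, "news.sports"),  /* 1: glob match */
                     can_psubscribe(allowed, 2, "news.sp*"));        /* 0: not literal */
              return 0;
          }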
      
      Runtime changes to channel permissions for a user with existing subscribing
      clients cause said clients to disconnect unless the new permissions permit the
      connections to continue. Note, however, that PSUBSCRIBErs' patterns are matched
      literally, so given the change bar:* -> b*, pattern subscribers to bar:* will be
      disconnected.
      
      Notes/questions:
      * UNSUBSCRIBE, PUNSUBSCRIBE and PUBSUB remain unprotected due to lack of reasons
        for touching them.
      c1b1e8c3
  7. 30 Nov, 2020 2 commits
  8. 29 Nov, 2020 1 commit
    • guybe7's avatar
      XPENDING with IDLE (#7972) · ada2ac9a
      guybe7 authored
      Used to filter stream pending entries by their idle time; useful for
      XCLAIMing entries that have not been processed for some time.
      ada2ac9a
  9. 26 Nov, 2020 1 commit
  10. 25 Nov, 2020 4 commits
  11. 24 Nov, 2020 2 commits
  12. 23 Nov, 2020 1 commit
  13. 22 Nov, 2020 5 commits
    • xindoo's avatar
      0b1d89d7
    • Yossi Gottlieb's avatar
      Clean up building with USE_SYSTEMD. (#8073) · 08d3e929
      Yossi Gottlieb authored
      When USE_SYSTEMD=yes is specified, try to use pkg-config to determine
      libsystemd linker flags. If not found, silently fall back to simply
      using "-lsystemd".
      
      We now use a LIBSYSTEMD_LIBS variable so users can explicitly override
      it and specify their own library.
      
      If USE_SYSTEMD is unspecified, the old behavior of auto-enabling it when
      both pkg-config and libsystemd are available is retained.
      08d3e929
    • Wang Yuan's avatar
      Fix diskless replication failure when has non-rdb child process (#8070) · f207e168
      Wang Yuan authored
      If we enable diskless replication, set repl-diskless-sync-delay to 0,
      and the master has a non-RDB child process such as an AOF rewrite child,
      the master will try to start a new BGSAVE but fail immediately (before
      fork) when replicas ask for full synchronization, and the master will
      keep failing to start a new BGSAVE and disconnecting replicas until the
      non-RDB child process exits.
      
      This bug was introduced in #6271 (not yet released in 6.0.x).
      f207e168
    • Oran Agra's avatar
      Fix bug with module GIL being released prematurely (#8061) · e6fa4738
      Oran Agra authored
      This is hopefully usually harmless.
      The server.ready_keys list will usually be empty, so the code after releasing
      the GIL will soon be done.
      The only case where it'll actually process things is when a module
      releases a client (or module) blocked on a key, by triggering this NOT
      from within a command (e.g. from a timer event).
      
      This bug was introduced in Redis 6.0.9; see #7903.
      e6fa4738
    • Oran Agra's avatar
      Fix oom-score-adj-values range, abs options, and bug when used in config file (#8046) · 61954951
      Oran Agra authored
      Fix: when oom-score-adj-values is provided in the config file after
      oom-score-adj yes, it would take immediate action before the initial
      value was acquired by readOOMScoreAdj, resulting in an error (an
      out-of-range score due to an uninitialized value). Delay the reaction
      until the real call is made by main().
      
      Since the values are clamped to -1000..1000, and they're applied as an
      offset from the value at startup (which may be -1000), we need to allow
      the offsets to reach +2000 so that a value of +1000 is achievable in
      case the value at startup was -1000.
      
      Adding an option for absolute values rather than relative ones.
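      
      A sketch of the arithmetic described above (illustrative names only): apply the
      configured value either as an offset from the score at startup or as an absolute
      score, then clamp to the kernel's valid -1000..1000 range.
      
          #include <stdio.h>
          
          static int clamp(int v, int lo, int hi) {
              return v < lo ? lo : (v > hi ? hi : v);
          }
          
          /* relative != 0: treat 'configured' as an offset from the startup score. */
          static int effective_oom_score(int startup_score, int configured, int relative) {
              int score = relative ? startup_score + configured : configured;
              return clamp(score, -1000, 1000);
          }
          
          int main(void) {
              /* A startup score of -1000 plus a +2000 offset still lands on +1000. */
              printf("%d\n", effective_oom_score(-1000, 2000, 1));
              return 0;
          }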
      61954951
  14. 20 Nov, 2020 1 commit
  15. 18 Nov, 2020 1 commit
    • guybe7's avatar
      EXISTS should not alter LRU, OBJECT should not reveal expired keys on replica (#8016) · f8ae9917
      guybe7 authored
      The bug was introduced by #5021, which only attempted to prevent EXISTS on an
      already expired key from returning 1 on a replica.
      
      Before that commit, dbExists was used instead of
      lookupKeyRead (which has the undesired effect of "touching" the LRU/LFU).
      
      Other than that, this commit fixes OBJECT to also come back empty-handed on
      expired keys on a replica.
      
      And DEBUG DIGEST-VALUE now behaves like DEBUG OBJECT (it gets the data from
      the key regardless of its expired state).
      f8ae9917