1. 17 Jan, 2023 3 commits
    • Obuf limit, exit during loop in *RAND* commands · 4779ed5e
      Oran Agra authored
      Related to the hang reported in #11671
      Currently, redis can disconnect a client for reaching the output buffer limit,
      and it will also stop feeding that output buffer with more data, but it keeps
      running the loop in the command (despite the client already being marked for
      disconnection).
      
      This PR is an attempt to mitigate the problem for commands that are easy to
      abuse, specifically SRANDMEMBER.
      The RAND family of commands can take a negative COUNT argument (which is not
      bounded by the number of elements in the key), so it's enough to create a key
      with one field, and then these commands can be used to hang redis; a sketch of
      the abuse pattern follows below.
      
      NOTICE:
      In Redis 7.0 this fix handles KEYS as well, but in this branch
      it doesn't; details in #11676.
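      To make that concrete, here is a minimal hiredis sketch of the abuse pattern
      (the key name and COUNT value are illustrative only, not taken from the report):
      a set with a single member is enough, because a large negative COUNT asks the
      server to generate that many (possibly repeated) elements.
      ```
      /* Hedged sketch of the abuse pattern, using hiredis; the key name and the
       * COUNT value are illustrative only. */
      #include <hiredis/hiredis.h>

      int main(void) {
          redisContext *c = redisConnect("127.0.0.1", 6379);
          if (c == NULL || c->err) return 1;

          /* A single-element set is enough. */
          redisReply *add = redisCommand(c, "SADD s x");
          if (add) freeReplyObject(add);

          /* A large negative COUNT keeps the command loop running even after the
           * client has hit its output buffer limit. */
          redisReply *r = redisCommand(c, "SRANDMEMBER s -100000000");
          if (r) freeReplyObject(r);

          redisFree(c);
          return 0;
      }
      ```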
    • Clean Lua stack before parsing call reply to avoid crash on a call with many arguments (#9809) · a511af7c
      Meir Shpilraien (Spielrein) authored
      This commit 0f8b634c (CVE-2021-32626, released in 6.2.6, 6.0.16, 5.0.14)
      fixes an invalid memory write issue by using the `lua_checkstack` API to make
      sure the Lua stack does not overflow. The fix was added in 3 places:
      1. `luaReplyToRedisReply`
      2. `ldbRedis`
      3. `redisProtocolToLuaType`
      
      In the first 2 functions, a `lua_checkstack` failure is handled gracefully, while
      in the last one it is handled with an assert and a statement that this situation
      cannot happen (only with a misbehaving module):
      
      > the Redis reply might be deep enough to explode the LUA stack (notice
      that currently there is no such command in Redis that returns such a nested
      reply, but modules might do it)
      
      The issue that was discovered is that user arguments are also considered part
      of the stack, so the following script (for example) makes the assertion reachable:
      ```
      local a = {}
      for i=1,7999 do
          a[i] = 1
      end
      return redis.call("lpush", "l", unpack(a))
      ```
      
      This is a regression because such a script would have worked before, and now
      it crashes Redis. The solution is to clear the function arguments from the Lua
      stack, which makes the original assumption true and the assertion unreachable.
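      For context, `lua_checkstack` is the standard Lua C API call for growing the
      stack before pushing many values; the standalone sketch below (not Redis code)
      shows the graceful-handling pattern used in the first two functions.
      ```
      /* Minimal standalone sketch of the lua_checkstack pattern; not Redis code. */
      #include <stdio.h>
      #include <lua.h>
      #include <lauxlib.h>

      /* Push `n` integers onto the Lua stack, growing it first and failing
       * gracefully instead of overflowing it. */
      static int push_many(lua_State *L, int n) {
          if (!lua_checkstack(L, n)) {
              fprintf(stderr, "cannot grow Lua stack by %d slots\n", n);
              return -1;
          }
          for (int i = 0; i < n; i++) lua_pushinteger(L, i);
          return 0;
      }

      int main(void) {
          lua_State *L = luaL_newstate();
          if (push_many(L, 8000) == 0)
              printf("stack top: %d\n", lua_gettop(L));
          lua_close(L);
          return 0;
      }
      ```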
      
      (cherry picked from commit 6b0b04f1)
    • add check good slaves to write when execute script (#10249) · 2adbbbcd
      Vo Trong Phuc authored
      
      
      There was no check of the min-slaves-* configs when evaluating a Lua script.
      Add a check that there are enough good slaves before running write commands in scripts.
      Co-authored-by: Phuc. Vo Trong <phucvt@vng.com.vn>
      (cherry picked from commit 34505d26)
  2. 04 Oct, 2021 3 commits
    • Fix ziplist and listpack overflows and truncations (CVE-2021-32627, CVE-2021-32628) · f6a40570
      Oran Agra authored
      - Fix possible heap corruption in ziplist and listpack resulting from trying to
        allocate more than the maximum size of 4GB.
      - Prevent ziplist (hash and zset) from reaching a size above 1GB; it would be
        converted to HT encoding anyway, so that's not a useful size.
      - Prevent listpack (stream) from reaching a size above 1GB.
      - XADD will start a new listpack if the new record may cause the previous
        listpack to grow over 1GB.
      - XADD will respond with an error if a single stream record is over 1GB.
      - The List type (ziplist in quicklist) was truncating strings that were over 4GB;
        now it'll respond with an error.
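      The underlying defect is classic unsigned-integer overflow when computing a new
      buffer size; below is an illustrative overflow- and cap-checked size computation
      (a minimal sketch, not the actual ziplist/listpack code).
      ```
      /* Illustrative overflow/cap check; not the actual ziplist/listpack code. */
      #include <stdio.h>
      #include <stdint.h>

      #define SAFETY_CAP (1ULL << 30)   /* the 1GB cap described above */

      /* Stores cur + extra in *out and returns 1 only if the sum neither
       * overflows size_t nor exceeds the cap; returns 0 otherwise. */
      static int safe_grow(size_t cur, size_t extra, size_t *out) {
          if (extra > SIZE_MAX - cur) return 0;              /* would overflow */
          if ((uint64_t)cur + extra > SAFETY_CAP) return 0;  /* above the cap */
          *out = cur + extra;
          return 1;
      }

      int main(void) {
          size_t ns;
          printf("%d\n", safe_grow((size_t)900 * 1024 * 1024,
                                   (size_t)200 * 1024 * 1024, &ns)); /* 0: over cap */
          printf("%d\n", safe_grow(1024, 4096, &ns));                /* 1: fine */
          return 0;
      }
      ```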
    • Fix protocol parsing on 'ldbReplParseCommand' (CVE-2021-32672) · 6ac3c0b7
      meir@redislabs.com authored
      The protocol parsing in 'ldbReplParseCommand' (Lua debugging)
      assumed protocol correctness. This means that if the following
      is given:
      *1
      $100
      test
      the parser would try to read an additional 94 unallocated bytes after
      the end of the client buffer.
      This commit fixes the issue by validating that there are actually enough
      bytes to read. It also limits the amount of data that can be sent by
      the debugger client to 1M, so the client will not be able to explode
      the memory.
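      The general fix pattern is to check the declared bulk length against the bytes
      actually available before reading; a minimal standalone sketch of such a check
      (not the ldb parser itself) is shown below.
      ```
      /* Standalone sketch of a bounds-checked bulk read; not the ldb parser. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Parse "$<len>\r\n<payload>\r\n" from a buffer of `avail` bytes.
       * Returns the payload length, or -1 if the buffer is malformed or
       * declares more bytes than are actually present. */
      static long parse_bulk(const char *p, size_t avail, const char **payload) {
          if (!p || avail < 4 || p[0] != '$') return -1;
          const char *end = memchr(p, '\r', avail);
          if (!end) return -1;
          long len = strtol(p + 1, NULL, 10);
          if (len < 0) return -1;
          size_t header = (size_t)(end - p) + 2;                 /* "$<len>\r\n" */
          if (avail < header || avail - header < (size_t)len + 2)
              return -1;                                         /* not enough bytes */
          *payload = p + header;
          return len;
      }

      int main(void) {
          const char *buf = "$100\r\ntest\r\n";   /* claims 100 bytes, provides 4 */
          const char *payload;
          long n = parse_bulk(buf, strlen(buf), &payload);
          if (n < 0) printf("rejected malformed bulk\n");
          else printf("got %ld bytes\n", n);
          return 0;
      }
      ```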
    • Prevent unauthenticated client from easily consuming lots of memory (CVE-2021-32675) · 5674b005
      Oran Agra authored
      This change sets a low limit for multibulk and bulk length in the
      protocol for unauthenticated connections, so that they can't easily
      cause redis to allocate massive amounts of memory by sending just a few
      characters on the network.
      The new limits are 10 arguments of 16KB each (instead of 1M arguments of 512MB each).
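      A hedged sketch of the idea (the constant names and the `authenticated` flag are
      illustrative, not Redis's actual identifiers): the declared multibulk and bulk
      lengths are simply clamped much harder until the connection authenticates.
      ```
      /* Illustrative pre-authentication limit check; names are hypothetical and
       * not Redis's actual identifiers. */
      #include <stdio.h>
      #include <stdbool.h>

      #define PRE_AUTH_MAX_ARGS      10
      #define PRE_AUTH_MAX_BULK_LEN  (16L * 1024)              /* 16KB */
      #define MAX_ARGS               (1024L * 1024)            /* 1M */
      #define MAX_BULK_LEN           (512L * 1024 * 1024)      /* 512MB default */

      /* Returns true if the declared sizes are acceptable for this
       * connection's authentication state. */
      static bool sizes_allowed(bool authenticated, long multibulklen, long bulklen) {
          long max_args = authenticated ? MAX_ARGS : PRE_AUTH_MAX_ARGS;
          long max_bulk = authenticated ? MAX_BULK_LEN : PRE_AUTH_MAX_BULK_LEN;
          return multibulklen <= max_args && bulklen <= max_bulk;
      }

      int main(void) {
          printf("%d\n", sizes_allowed(false, 100, 1024));      /* 0: too many args */
          printf("%d\n", sizes_allowed(true, 100, 1024));       /* 1 */
          return 0;
      }
      ```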
  3. 21 Jul, 2021 11 commits
    • On 32 bit platform, the bit position of GETBIT/SETBIT/BITFIELD/BITCOUNT,BITPOS... · 5f49f4fb
      Huang Zhw authored
      On 32 bit platform, the bit position of GETBIT/SETBIT/BITFIELD/BITCOUNT,BITPOS may overflow (see CVE-2021-32761) (#9191)
      
      GETBIT, SETBIT may access wrong address because of wrap.
      BITCOUNT and BITPOS may return wrapped results.
      BITFIELD may access the wrong address but also allocate insufficient memory and segfault (see CVE-2021-32761).
      
      This commit uses `uint64_t` or `long long` instead of `size_t`.
      related https://github.com/redis/redis/pull/8096
      
      On a 32-bit platform:
      > setbit bit 4294967295 1
      (integer) 0
      > config set proto-max-bulk-len 536870913
      OK
      > append bit "\xFF"
      (integer) 536870913
      > getbit bit 4294967296
      (integer) 0
      
      When the bit index is larger than 4294967295, size_t can't hold the bit index. In the past, `proto-max-bulk-len` was limited to 536870912, so there was no problem.

      After this commit, the bit position is stored in `uint64_t` or `long long`, so even when `proto-max-bulk-len > 536870912`, 32-bit platforms still behave correctly.

      For 64-bit platforms, the problem theoretically still exists: the bit position is 8 times the byte position, so when proto-max-bulk-len is very large the bit position may overflow. But on 64-bit platforms we don't have strings that long, so this bug may never happen in practice. A small demo of the 32-bit wrap follows below.
      
      Additionally, this commit adds a test that costs `512MB` of memory and is tagged
      `large-memory`; the FreeBSD CI and Valgrind CI ignore this test.
      * The test is disabled in this version since bitops doesn't rely on
      proto-max-bulk-len, but some of the overflows can still occur, so we do want
      the fixes.
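      Here is a small self-contained demo of the wrap described above (using `uint32_t`
      to simulate a 32-bit `size_t`, so the effect is visible on any platform).
      ```
      /* Demo of the 32-bit size_t wrap; uint32_t stands in for a 32-bit size_t. */
      #include <stdio.h>
      #include <inttypes.h>

      int main(void) {
          unsigned long long bitpos = 4294967296ULL;     /* bit index just past 2^32-1 */

          uint32_t wrapped_byte = (uint32_t)bitpos >> 3; /* 32-bit size_t: wraps to 0 */
          uint64_t correct_byte = (uint64_t)bitpos >> 3; /* after the fix: 536870912 */

          printf("wrapped byte offset: %" PRIu32 "\n", wrapped_byte);
          printf("correct byte offset: %" PRIu64 "\n", correct_byte);
          return 0;
      }
      ```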
      
      (cherry picked from commit 71d45287)
    • SMOVE only notify dstset when the addition is successful. (#9244) · 7d9878e4
      Binbin authored
      In case the dest key already contains the member, the dest key isn't modified,
      so the command shouldn't invalidate WATCH.
      
      (cherry picked from commit 11dc4e59)
    • Test infra, handle RESP3 attributes and big-numbers and bools (#9235) · 62bc09d9
      Oran Agra authored
      - promote the code in DEBUG PROTOCOL to addReplyBigNum
      - DEBUG PROTOCOL ATTRIB skips the attribute when client is RESP2
      - networking.c addReply for push and attributes generates an assertion when
        called on a RESP2 client; anything else would produce a broken
        protocol that clients can't handle.
      
      (cherry picked from commit 6a5bac30)
      (cherry picked from commit 7f38aa8bc719f709acdcefc35a45a7aa6faa76fa)
    • Tests: add a way to read raw RESP protocol responses (#9193) · c5446aca
      Oran Agra authored
      This makes it possible to distinguish between a null response and an empty
      array (currently the test infra translates both to an empty string/list).
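      For reference, the difference is only visible at the raw protocol level; the
      tiny sketch below just prints the RESP2 byte sequences involved (RESP3 also adds
      a dedicated `_\r\n` null type).
      ```
      /* Raw RESP byte sequences that differ on the wire but used to be
       * translated to the same empty value by the test infra. */
      #include <stdio.h>

      int main(void) {
          const char *null_reply  = "*-1\r\n";   /* RESP2 null (nil) array */
          const char *empty_array = "*0\r\n";    /* RESP2 empty array */
          printf("null reply : %s", null_reply);
          printf("empty array: %s", empty_array);
          return 0;
      }
      ```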
      
      (cherry picked from commit 7103367a)
      (cherry picked from commit e04bce2f01a369f57893be2bd0109e21f14f037e)
    • Fix accidental deletion of sinterstore command when we meet wrong type error. (#9032) · 1655576e
      Binbin authored
      SINTERSTORE would have deleted the dest key right away,
      even when later on it is bound to fail on a (WRONGTYPE) error.

      With this change it first picks up all the input keys, and only later
      deletes the dest key if one of the inputs is empty.
      
      Also add more tests for some commands.
      Mainly focus on:
      - `wrong type error`:
        expand the test case (based on the sinter bug) in the non-store variant
        add tests for the store variant (although it exists in the non-store variant, I think it is better to have the same tests)
      - the dstkey result when we meet a `non-exist key (empty set)` in *store

      sdiff:
      - improve the test case about the wrong type error (the one we found in sinter, although it is safe in sdiff)
      - add a test about using a non-existent key (treated like an empty set)
      sdiffstore:
      - following the sdiff test cases, also add some tests about `wrong type error` and `non-exist key`
      - the difference is that in sdiffstore, we also consider the `dstkey` result

      sunion/sunionstore: add more tests (same as above)

      sinter/sinterstore: also same as above ...
      
      (cherry picked from commit b8a5da80)
      (cherry picked from commit f4702b8b7a7da6cc661ddb6744cb322bc92e3267)
    • Change return value type for ZPOPMAX/MIN in RESP3 (#8981) · 8c0f06c2
      Jason Elbaum authored
      When using RESP3, ZPOPMAX/ZPOPMIN should return nested arrays for consistency
      with other commands (e.g. ZRANGE).
      
      We do that only when the COUNT argument is present (similarly to how LPOP behaves);
      for the reasoning see https://github.com/redis/redis/issues/8824#issuecomment-855427955

      This is a breaking change only when RESP3 is used and the COUNT argument is present!
      
      (cherry picked from commit 7f342020)
      (cherry picked from commit caaad2d686b2af0d13fbeda414e2b70e57635b5c)
    • Fail EXEC command in case a watched key is expired (#9194) · 8df81c03
      perryitay authored
      
      
      There are two issues fixed in this commit:
      1. We want to fail the EXEC command in case there is a watched key that's logically
         expired but not yet deleted by active expire or lazy expire.
      2. We saw that currently the cached time is updated in every `call()` (including
         nested calls); this time is also used for the isKeyExpired comparison, and we want
         to update the cached time only in the first call (execCommand).
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit ac8b1df8)
    • Fix race in client side tracking (#9116) · d80c8711
      Oran Agra authored
      The `Tracking gets notification of expired keys` test in tracking.tcl
      used to hang in valgrind CI quite a lot.

      It turns out the reason is that with valgrind and a busy machine, the
      server cron active expire cycle could easily run in the same event loop
      as the command that created `mykey`, so that when the key got expired,
      there were two change events to broadcast, one that set the key and one
      that expired it, but since we used raxTryInsert, the client that was
      associated with the "last" change was the one that created the key, so
      the NOLOOP filtered that event.
      
      This commit adds a test that reproduces the problem by using lazy expire
      in a multi-exec which makes sure the key expires in the same event loop
      as the one that added it.
      
      (cherry picked from commit 9b564b52)
    • unregister AE_READABLE from the read pipe in backgroundSaveDoneHandlerSocket (#8991) · da127ed3
      YaacovHazan authored
      In diskless replication, we create a read pipe for the RDB, between the child and the parent.
      When we close this pipe (fd), the read handler also needs to be removed from the event loop (if it is still registered).
      Otherwise, the next time we use the same fd, the registration will fail (panic), because
      we would use EPOLL_CTL_MOD (as if the fd were still registered in the event loop) on an fd that was already removed from epoll.
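      The failure mode is generic epoll behavior; the Linux-only sketch below (not
      Redis code) shows it: closing an fd silently drops it from the epoll set, so a
      later EPOLL_CTL_MOD on a reused fd number fails.
      ```
      /* Linux-only demo: closing an fd drops it from the epoll set, so a later
       * EPOLL_CTL_MOD on a reused fd number fails with ENOENT. Not Redis code. */
      #include <stdio.h>
      #include <string.h>
      #include <errno.h>
      #include <unistd.h>
      #include <sys/epoll.h>

      int main(void) {
          int epfd = epoll_create1(0);
          int p[2];
          if (pipe(p) == -1) return 1;

          struct epoll_event ev = { .events = EPOLLIN, .data.fd = p[0] };
          epoll_ctl(epfd, EPOLL_CTL_ADD, p[0], &ev);   /* register the read end */

          int old = p[0];
          close(p[0]);                                 /* kernel drops it from epoll */
          close(p[1]);

          if (pipe(p) == -1) return 1;                 /* likely reuses the same fd number */
          printf("old fd %d, new fd %d\n", old, p[0]);

          /* An event-loop layer that still believes the fd is registered would
           * issue MOD instead of ADD, and that now fails: */
          if (epoll_ctl(epfd, EPOLL_CTL_MOD, p[0], &ev) == -1)
              printf("EPOLL_CTL_MOD failed: %s\n", strerror(errno));
          return 0;
      }
      ```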
      
      (cherry picked from commit 501d7755)
    • Add a timeout mechanism for replicas stuck in fullsync (#8762) · ea0a3764
      guybe7 authored
      Starting with Redis 6.0 (as part of the TLS feature), a diskless master uses a pipe from the fork
      child so that the parent is the one sending data to the replicas.
      This mechanism has an issue in which a hung replica will cause the master to wait
      for it to read the data sent to it forever, thus preventing the fork child from terminating
      and preventing the creation of any other forks.

      This PR adds a timeout mechanism, much like the ACK-based timeout:
      we disconnect replicas that aren't reading the RDB file fast enough.
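      A hedged sketch of the kind of check involved (the struct fields and constant are
      hypothetical, not the actual Redis identifiers): each replica records when it last
      consumed RDB bytes, and a periodic pass disconnects those idle for too long.
      ```
      /* Hypothetical sketch of the timeout idea; names are illustrative only. */
      #include <stdio.h>
      #include <time.h>

      #define REPL_TIMEOUT_SECS 60

      typedef struct replica {
          int id;
          time_t last_rdb_read;   /* last time this replica consumed RDB bytes */
      } replica;

      /* Returns 1 if the replica should be disconnected for being too slow. */
      static int replica_stuck(const replica *r, time_t now) {
          return now - r->last_rdb_read > REPL_TIMEOUT_SECS;
      }

      int main(void) {
          time_t now = time(NULL);
          replica slow = { .id = 1, .last_rdb_read = now - 120 };
          replica fast = { .id = 2, .last_rdb_read = now - 5 };
          printf("replica %d stuck: %d\n", slow.id, replica_stuck(&slow, now));
          printf("replica %d stuck: %d\n", fast.id, replica_stuck(&fast, now));
          return 0;
      }
      ```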
      
      (cherry picked from commit d63d0260)
    • if diskless repl child is killed, make sure to reap the pid (#7742) · 18429ce5
      Oran Agra authored
      Starting redis 6.0 and the changes we made to the diskless master to be
      suitable for TLS, I made the master avoid reaping (wait3) the pid of the
      child until we know all replicas are done reading their rdb.
      
      I did that in order to avoid a state where the rdb_child_pid is -1 but
      we don't yet want to start another fork (still busy serving that data to
      replicas).
      
      It turns out that the solution used so far was problematic in case the
      fork child was being killed (e.g. by the kernel OOM killer): in that
      case there's a chance that we had disabled the read event on the
      rdb pipe, since we were waiting for a replica to become writable again,
      and in that scenario the master would never have realized the child
      exited, and the replica would remain hung too.
      Note that there's no mechanism to detect a hung replica while it's in
      the rdb transfer state.

      The solution here is to add another pipe which is used by the parent to
      tell the child it is safe to exit. This means that when the child exits,
      for whatever reason, it is safe to reap it.

      Besides that, I'm re-introducing an adjustment to REPLCONF ACK which was
      part of #6271 (Accelerate diskless master connections) but was dropped
      when that PR was rebased after the TLS fork/pipe changes (5a477946).
      Now that RdbPipeCleanup no longer calls checkChildrenDone, and the ACK
      has a chance to detect that the child exited, it should be the one to call
      it so that we don't have to wait for cron (server.hz) to do that.
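      The 'exit pipe' idea is plain POSIX plumbing; here is a standalone sketch (not
      the Redis implementation) in which the child blocks on a dedicated pipe until
      the parent says it is safe to exit, so the parent can always reap it.
      ```
      /* Standalone sketch of the "safe to exit" pipe idea; not the Redis code. */
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/wait.h>

      int main(void) {
          int exit_pipe[2];
          if (pipe(exit_pipe) == -1) return 1;

          pid_t pid = fork();
          if (pid == 0) {                       /* child: would normally stream the RDB */
              close(exit_pipe[1]);
              char c;
              (void)read(exit_pipe[0], &c, 1);  /* block until parent says it's safe */
              _exit(0);
          }

          close(exit_pipe[0]);
          /* ... parent serves the RDB data to replicas here ... */
          (void)write(exit_pipe[1], "x", 1);    /* tell the child it may exit now */

          int status;
          waitpid(pid, &status, 0);             /* reap the child: no zombie left behind */
          printf("child reaped, exit status %d\n", WEXITSTATUS(status));
          return 0;
      }
      ```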
      
      (cherry picked from commit 573246f7)
  4. 02 Mar, 2021 2 commits
    • Fix failed tests on Linux Alpine and add a CI job. (#8532) · d0762100
      Yossi Gottlieb authored
      * Remove the linux/version.h dependency.

      It introduces unnecessary dependencies and is generally not a good idea,
      as the platform we build on may be different from the platform we run
      on.

      To determine whether sync_file_range exists we can simply rely on header
      file hints (see the sketch after this list).
      
      * Fix setproctitle() on libmusl.
      
      The previous ifdef checks were a bit too strict for no apparent
      reason.
      
      * Fix tests failure on Linux with no backtrace.
      
      * Add alpine daily CI job.
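      A minimal sketch of the 'header hint' approach (this mirrors the idea, not
      necessarily the exact ifdef used in the commit): the presence of the
      SYNC_FILE_RANGE_WAIT_BEFORE macro from fcntl.h indicates that sync_file_range()
      is available, with fdatasync() as the portable fallback.
      ```
      /* Sketch of the "header hint" feature test; mirrors the idea rather than
       * the exact ifdef used in the commit. */
      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <unistd.h>

      /* Ask the kernel to start writeback of a file region without blocking. */
      static int flush_range(int fd, off_t offset, off_t nbytes) {
      #if defined(__linux__) && defined(SYNC_FILE_RANGE_WAIT_BEFORE)
          /* The header defines the flag macro, so sync_file_range() exists. */
          return sync_file_range(fd, offset, nbytes, SYNC_FILE_RANGE_WRITE);
      #else
          (void)offset; (void)nbytes;
          return fdatasync(fd);                  /* portable fallback */
      #endif
      }

      int main(void) {
          int fd = open("/tmp/flush_range_demo", O_CREAT | O_WRONLY, 0600);
          if (fd == -1) return 1;
          (void)write(fd, "hello", 5);
          int rc = flush_range(fd, 0, 5);
          close(fd);
          return rc == 0 ? 0 : 1;
      }
      ```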
      
      (cherry picked from commit 95ea7454)
    • Fix timing error oom-score-adj test (#8513) · 40ec7a27
      sundb authored
      Fixes a timing issue: the fork didn't always get to set the oom score before the test verified it.
      
      (cherry picked from commit 46346e9e)
  5. 22 Feb, 2021 1 commit
    • RM_ZsetRem: Delete key if empty (#8453) · 17c3ac89
      Viktor Söderqvist authored
      Without this fix, RM_ZsetRem can leave empty sorted sets which are
      not allowed to exist.
      
      Removing from a sorted set while iterating seems to work (while
      inserting causes failed assertions). RM_ZsetRangeEndReached is
      modified to return 1 if the key doesn't exist, to terminate
      iteration when the last element has been removed.
      
      (cherry picked from commit aea6e71e)
  6. 12 Jan, 2021 12 commits
    • Improve stability of new CSC eviction test (#8160) · 5745b469
      Oran Agra authored
      c4fdf09c added a test that now fails with valgrind.
      It fails for two reasons:
      1) The test samples the used memory and then limits the maxmemory to
         that value, but it turns out this is not atomic, and on slow machines
         the background cron process that cleans out old query buffers reduces
         the memory so that the setting doesn't cause eviction.
      2) The dbsize was tested late, after reading some invalidation messages;
         by that time more and more keys got evicted, partially draining the
         db. This is not the focus of this fix (still a known limitation).
      
      (cherry picked from commit a102b21d)
    • fix race in cluster transactions test (#8312) · a11c842e
      Oran Agra authored
      we didn't wait for the commands executed on the master to reach the replica.
      
      (cherry picked from commit 4f8458d8)
    • Fix cluster diskless load swapdb test (#8308) · 615eb0db
      Oran Agra authored
      The test was trying to wait for the replica to start loading the rdb
      from the master before it kills the master, but it was actually waiting
      for ROLE to be in "sync" mode, which corresponds to REPL_STATE_TRANSFER
      that starts before the actual loading starts.
      Now instead it waits for the loading flag to be set.

      Besides, the test was dependent on the previous configuration of the
      servers, relying on the fact the replica is configured to persist
      (either RDB or AOF); now it is set explicitly.
      
      (cherry picked from commit 26495387)
    • Moved RMAPI_FUNC_SUPPORTED location such that it will be visible to modules (#8037) · fb86ecb8
      Meir Shpilraien (Spielrein) authored
      RMAPI_FUNC_SUPPORTED was defined in the wrong place in redismodule.h
      and was not visible to modules.
      
      (cherry picked from commit 97d647a1)
    • prevent client tracking from causing feedback loop in performEvictions (#8100) · f2f57eb4
      Oran Agra authored
      When client tracking is enabled signalModifiedKey can increase memory usage,
      this can cause the loop in performEvictions to keep running since it was measuring
      the memory usage impact of signalModifiedKey.
      
      The section that measures the memory impact of the eviction should be just on dbDelete,
      excluding keyspace notification, client tracking, and propagation to AOF and replicas.
      
      This resolves part of the problem described in #8069
      p.s. fix took 1 minute, test took about 3 hours to write.
      
      (cherry picked from commit c4fdf09c)
    • Further improved ACL algorithm for picking categories · e664f381
      Madelyn Olson authored
      (cherry picked from commit 411bcf1a)
    • Swapdb should make transaction fail if there is any client watching keys (#8239) · f464cf23
      Yang Bodong authored
      
      
      This PR not only fixes the problem that swapdb does not make the
      transaction fail, but also optimizes how the FLUSHALL and FLUSHDB commands
      set the CLIENT_DIRTY_CAS flag, to avoid unnecessary traversal of clients.

      FLUSHDB was changed to first iterate over all watched keys, and then over the
      clients watching each key,
      instead of iterating through all clients and, for each one, iterating over its watched keys.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit 10f94b0a)
    • Fix wrong order of key/value in Lua map response (#8266) · ec56906b
      Oran Agra authored
      When a Lua script returns a map to redis (a feature which was added in
      redis 6 together with RESP3), it would have returned the value first and
      the key second.
      
      If the client was using RESP2, it was getting them out of order, and if
      the client was in RESP3, it was getting a map of value => key.
      This was happening regardless of the Lua script using redis.setresp(3)
      or not.
      
      This also affects a case where the script was returning a map which it got
      from redis by doing something like: redis.setresp(3); return redis.call()
      
      This fix is a breaking change for redis 6.0 users who happened to rely
      on the wrong order (either ones that used redis.setresp(3), or ones that
      returned a map explicitly).
      
      This commit also includes other two changes in the tests:
      1. The test suite now handles RESP3 maps as dicts rather than nested
         lists
      2. Remove some redundant (duplicate) tests from tracking.tcl
      
      (cherry picked from commit 2017407b)
    • Handle output buffer limits for Module blocked clients (#8141) · ec74ae7e
      Oran Agra authored
      Module blocked clients cache the response in a temporary client,
      the reply list in this client would be affected by the recent fix
      in #7202, but when the reply is later copied into the real client,
      it would have bypassed all the checks for output buffer limit, which
      would have resulted in both: responding with a partial response to
      the client, and also not disconnecting it at all.
      
      (cherry picked from commit 48efc25f)
    • Backup keys to slots map and restore when fail to sync if diskless-load type... · a60ed4a5
      Wang Yuan authored
      
      Backup keys to slots map and restore when fail to sync if diskless-load type is swapdb in cluster mode (#8108)
      
      When the replica diskless-load type is swapdb in cluster mode, we didn't back up
      the keys-to-slots map, so we would lose the keys-to-slots map if the sync failed.
      Now we back up the keys-to-slots map first, and restore it properly when the sync fails.

      This commit includes a refactoring/cleanup of the backup mechanism (moving it to db.c and restructuring it a bit).
      Co-authored-by: Oran Agra <oran@redislabs.com>
      (cherry picked from commit b55a827e)
    • EXISTS should not alter LRU, OBJECT should not reveal expired keys on replica (#8016) · 4b37eb13
      guybe7 authored
      The bug was introduced by #5021, which only attempted to prevent EXISTS on an
      already expired key from returning 1 on a replica.

      Before that commit, dbExists was used instead of
      lookupKeyRead (which had the undesired effect of "touching" the LRU/LFU).

      Other than that, this commit fixes OBJECT to also come up empty handed on
      expired keys on a replica.

      And DEBUG DIGEST-VALUE now behaves like DEBUG OBJECT (it gets the data from
      the key regardless of its expired state).
      
      (cherry picked from commit f8ae9917)
    • Disable rehash when redis has child process (#8007) · 3a13c654
      Wang Yuan authored
      In redisFork(), we don't set the child pid, so updateDictResizePolicy()
      doesn't take effect; that isn't friendly for copy-on-write.

      The bug was introduced in redis 6.0: 56258c6b
      
      (cherry picked from commit 89c78a98)
  7. 27 Oct, 2020 8 commits