1. 28 Nov, 2023 2 commits
    • Fix resize hash tables stuck on the last non-empty slot (#12802) · a1c5171c
      zhaozhao.zz authored
      Introduced in #11695.
      
      The tryResizeHashTables function gets stuck on the last non-empty slot
      while iterating through dictionaries and does not restart from the
      beginning. The root cause is how dbIteratorNextDict is used:
      
      /* Returns next dictionary from the iterator, or NULL if iteration is complete. */
      dict *dbIteratorNextDict(dbIterator *dbit) {
          if (dbit->next_slot == -1) return NULL;
          dbit->slot = dbit->next_slot;
          dbit->next_slot = dbGetNextNonEmptySlot(dbit->db, dbit->slot, dbit->keyType);
          return dbGetDictFromIterator(dbit);
      }
      
      When iteration reaches the last non-empty slot, next_slot is set to -1,
      so the resize keeps getting stuck on that slot indefinitely instead of
      starting over. We need to modify the code so that after the last
      non-empty slot has been visited, iteration returns to the first
      non-empty slot.
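
      For illustration only, a minimal sketch of that wrap-around behaviour,
      reusing dbGetNextNonEmptySlot from the snippet above; the assumption
      that passing -1 as the slot yields the first non-empty slot is made for
      this sketch and is not taken from the actual code.

      /* Sketch: like dbGetNextNonEmptySlot, but wraps back to the first
       * non-empty slot once the last one has been visited. */
      static int dbGetNextNonEmptySlotCircular(redisDb *db, int slot, dbKeyType keyType) {
          int next = dbGetNextNonEmptySlot(db, slot, keyType);
          if (next == -1) next = dbGetNextNonEmptySlot(db, -1, keyType); /* wrap around */
          return next;
      }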
      
      BTW, tryResizeHashTables is really iterating over slots that contain
      keys, yet its implementation leverages the dbIterator (which is a key
      iterator) to obtain the slot and dictionary information. While this
      approach works, it is not very intuitive. This PR also improves
      readability by iterating directly over slots.
    • clarify the comment of findSlotByKeyIndex function (#12811) · 095d0578
      zhaozhao.zz authored
      The current comment for `findSlotByKeyIndex` is a bit ambiguous and can
      be misleading, since it may be misread as returning the next slot
      corresponding to the target.
  2. 27 Nov, 2023 3 commits
    • Un-register notification and server event when RedisModule_OnLoad fails (#12809) · d6f19539
      Binbin authored
      When a notification or server event is registered in RedisModule_OnLoad
      but RedisModule_OnLoad eventually fails, triggering that notification or
      server event will crash the server.

      If loading fails at a later stage of moduleLoad, we do call
      moduleUnload, which handles all the un-registration. But when it fails
      on the RedisModule_OnLoad call itself, we only un-register a few
      specific things, and these were missing:
      
      - moduleUnsubscribeNotifications
      - moduleUnregisterFilters
      - moduleUnsubscribeAllServerEvents
      
      Refactored the code to reuse the un-registration logic from moduleUnload.
      
      Fixes #12808.
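
      For context, a hedged sketch of the failure pattern this fixes: a module
      that subscribes to keyspace notifications in RedisModule_OnLoad and then
      fails to load. The module and callback names are illustrative; only the
      RedisModule_* calls are real API.

      /* Illustrative module: subscribes to notifications, then fails OnLoad.
       * Before this fix, the subscription was left registered, so a later
       * keyspace event could invoke a callback of an unloaded module. */
      #include "redismodule.h"

      static int NotifyCallback(RedisModuleCtx *ctx, int type, const char *event,
                                RedisModuleString *key) {
          REDISMODULE_NOT_USED(ctx); REDISMODULE_NOT_USED(type);
          REDISMODULE_NOT_USED(event); REDISMODULE_NOT_USED(key);
          return REDISMODULE_OK;
      }

      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "brokenmod", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_GENERIC, NotifyCallback);
          /* Simulate a later failure inside OnLoad. */
          return REDISMODULE_ERR;
      }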
    • Optimize the efficiency of active expiration when databases exceeds 16. (#12738) · 1bd0b549
      zhaozhao.zz authored
      Currently, when the number of databases exceeds 16,
      the efficiency of cleaning expired keys is relatively low.
      
      The reason is that by default only 16 databases are scanned when
      attempting to clean expired keys (CRON_DBS_PER_CALL is 16). However,
      users may configure more than 16 databases, such as 256, and that does
      not necessarily mean all 256 databases have expirations set. If only one
      database has expirations set, that database needs 16 activeExpireCycle
      rounds in order to be scanned once, and 15 of those rounds are wasted.
      
      To optimize expiration in such scenarios, we use dbs_per_call to control
      the number of databases with expired keys that get scanned.

      Additionally, a condition limits the maximum number of rounds to
      server.dbnum to prevent excessive spinning. This ensures that even if
      only one database has expired keys, it gets scanned within a single cron job.
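
      A hedged sketch of the round-limiting idea (not the actual
      activeExpireCycle code): current_db mirrors the cron's resume cursor,
      and expiresCount() is a hypothetical stand-in for the real accessor that
      returns how many keys in the db carry a TTL.

      /* Sketch: visit at most dbs_per_call databases that actually have TTLs,
       * but never loop over more than server.dbnum databases per invocation. */
      static void expireCycleSketch(int dbs_per_call) {
          static unsigned int current_db = 0; /* resume position across cron calls */
          int scanned = 0;
          for (int rounds = 0; rounds < server.dbnum && scanned < dbs_per_call; rounds++) {
              redisDb *db = server.db + (current_db % server.dbnum);
              current_db++;
              if (expiresCount(db) == 0) continue; /* no TTLs here, skip cheaply */
              scanned++;
              /* ... scan this db for expired keys ... */
          }
      }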
    • Call signalModifiedKey after the key modification is completed (#11144) · 56ec1ff1
      binfeng-xin authored
      
      
      Fix the order of `signalModifiedKey()`: call it after the key
      modification has completed, so that observers see the key's final,
      consistent state.
      
      When a key is modified, Redis calls `signalModifiedKey` to notify other
      systems, such as the watch system of transactions and the tracking
      system of client side caching. However, in some commands, the
      `signalModifiedKey` call happens during the key modification process
      instead of after the key modification is completed. This can potentially
      cause issues, as systems relying on `signalModifiedKey` may receive the
      "write in flight" status of the key rather than its final state.
      
      These commands include:
      1. PFADD
      2. LSET, LMOVE, LREM
      3. ZPOPMIN, ZPOPMAX, BZPOPMIN, BZPOPMAX, ZMPOP, BZMPOP
      
      Currently this causes no problem in Redis, but it is better to adjust
      the order of `signalModifiedKey()` to avoid issues in future development
      of Redis.
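
      A hedged sketch of the convention: the command shape and the mutateValue
      helper are placeholders, while signalModifiedKey and notifyKeyspaceEvent
      are the real hooks mentioned above.

      /* Sketch: finish modifying the key first, then signal observers. */
      void mycmdCommand(client *c) {
          robj *key = c->argv[1];
          robj *o = lookupKeyWrite(c->db, key);
          if (o == NULL) { addReply(c, shared.czero); return; }
          mutateValue(o);                   /* hypothetical in-place modification */
          /* Only once the value is in its final state: */
          signalModifiedKey(c, c->db, key);
          notifyKeyspaceEvent(NOTIFY_GENERIC, "mycmd", key, c->db->id);
          server.dirty++;
          addReply(c, shared.cone);
      }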
      
      ---------
      Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
  3. 23 Nov, 2023 6 commits
    • Fix async safety in signal handlers (#12658) · 2e854bcc
      meiravgri authored
      See the discussion that followed after
      https://github.com/redis/redis/pull/12453 was merged.
      ----
      This PR replaces signals that are not considered async-signal-safe
      (AS-safe) with safe calls.
      
      #### **1. serverLog() and serverLogFromHandler()**
      `serverLog` uses unsafe calls. It was decided that we will **avoid**
      `serverLog` calls from the signal handlers when:
      * The signal is not fatal, such as SIGALRM. In these cases, we prefer
      using `serverLogFromHandler`, the safe version of `serverLog`.
      Note they have different prompts:
      `serverLog`: `62220:M 26 Oct 2023 14:39:04.526 # <msg>`
      `serverLogFromHandler`: `62220:signal-handler (1698331136) <msg>`
      * The code was added recently. Calls to `serverLog` from the signal
      handler have existed for as long as Redis has, and they haven't caused
      problems so far; to avoid regressions they are kept, but from now on we
      should use `serverLogFromHandler`.
      
      #### **2. `snprintf` `fgets` and `strtoul`(base = 16) -------->
      `_safe_snprintf`, `fgets_async_signal_safe`, `string_to_hex`**
      The safe version of `snprintf` was taken from
      [here](https://github.com/twitter/twemcache/blob/8cfc4ca5e76ed936bd3786c8cc43ed47e7778c08/src/mc_util.c#L754)
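
      For illustration, a minimal async-signal-safe helper in the spirit of
      these replacements (no stdio, no malloc, no locale); this is a sketch,
      not the actual _safe_snprintf/string_to_hex code.

      /* Sketch: convert an unsigned long to hex using only stack memory, so it
       * can be used from a signal handler. Returns the number of characters
       * written, excluding the terminating NUL, or 0 if buf is too small. */
      static size_t ulong_to_hex(unsigned long v, char *buf, size_t buflen) {
          static const char digits[] = "0123456789abcdef";
          char tmp[sizeof(v) * 2];
          size_t n = 0;
          do {
              tmp[n++] = digits[v & 0xf];
              v >>= 4;
          } while (v);
          if (n + 1 > buflen) return 0;
          for (size_t i = 0; i < n; i++) buf[i] = tmp[n - 1 - i];
          buf[n] = '\0';
          return n;
      }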
      
      #### **3. fopen(), fgets(), fclose() --------> open(), read(), close()**
      
      #### **4. opendir(), readdir(), closedir() --------> open(),
      syscall(SYS_getdents64), close()**
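
      As an illustration of the substitutions in points 3 and 4, a sketch that
      reads a small file using only async-signal-safe calls (open, read,
      close); error handling and buffer sizing are simplified.

      #include <fcntl.h>
      #include <unistd.h>

      /* Sketch: read up to buflen-1 bytes of a file, NUL-terminate the result,
       * and return the number of bytes read, or -1 on error. */
      static ssize_t read_file_async_safe(const char *path, char *buf, size_t buflen) {
          int fd = open(path, O_RDONLY);
          if (fd == -1) return -1;
          ssize_t nread = read(fd, buf, buflen - 1);
          if (nread >= 0) buf[nread] = '\0';
          close(fd);
          return nread;
      }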
      
      #### **5. Threads_mngr sync mechanisms**
      * waiting for the thread to generate stack trace: semaphore -------->
      busy-wait
      * `globals_rw_lock` was removed: since we are no longer using malloc and
      the semaphore, we don't need to protect `ThreadsManager_cleanups`.
      
      #### **6. Stacktraces buffer**
      The initial problem was that we were not able to safely call malloc
      within the signal handler.
      To solve that we created a buffer on the stack of `writeStacktraces` and
      saved it in a global pointer, assuming that under normal circumstances,
      the function `writeStacktraces` would complete before any thread
      attempted to write to it. However, **if threads lag behind, they might
      access this global pointer after it no longer belongs to the
      `writeStacktraces` stack, potentially corrupting memory.**
      To address this, various solutions were discussed
      [here](https://github.com/redis/redis/pull/12658#discussion_r1390442896)
      Eventually, we decided to **create a pipe** at server startup that will
      remain valid as long as the process is alive.
      We chose this solution due to its minimal memory usage, and since
      `write()` and `read()` are atomic operations. It ensures that stack
      traces from different threads won't mix.
      
      **The stacktrace collection process is now as follows:**
      * Clean the pipe to eliminate writes from late threads of previous runs.
      * Each thread writes its stacktrace to the pipe.
      * Wait for all the threads to mark completion, or until a timeout (2
      sec) is reached.
      * Read from the pipe and print the stacktraces.
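
      A hedged sketch of the pipe-based hand-off described above; the global
      pipe descriptors and the framing are illustrative, not the exact
      implementation.

      #include <limits.h>
      #include <unistd.h>

      /* Created once at server startup with pipe(stacktrace_pipe) and kept
       * open for the lifetime of the process. */
      static int stacktrace_pipe[2];

      /* Sketch: each thread writes its (small) stack trace text to the pipe.
       * Writes of up to PIPE_BUF bytes are atomic, so traces from different
       * threads cannot interleave; the collecting thread later drains the
       * pipe and prints everything it reads. */
      static void writeThreadStacktrace(const char *trace, size_t len) {
          if (len > PIPE_BUF) len = PIPE_BUF; /* keep the write atomic */
          (void) write(stacktrace_pipe[1], trace, len);
      }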
      
      #### **7. Changes that were considered and eventually were dropped**
      * Replace the watchdog timer with a POSIX timer:
      according to the [setitimer man page](https://linux.die.net/man/2/setitimer)
      
      > POSIX.1-2008 marks getitimer() and setitimer() obsolete, recommending
      the use of the POSIX timers API
      ([timer_gettime](https://linux.die.net/man/2/timer_gettime)(2),
      [timer_settime](https://linux.die.net/man/2/timer_settime)(2), etc.)
      instead.
      
      However, although it is supposed to conform to the POSIX standard, the
      POSIX timers API is not supported on Mac.
      You can take a look at the Linux implementation here:
      https://github.com/redis/redis/commit/c7562ee13546e504977372fdf40d33c3f86775a5
      To avoid messing up the code, and given the uncertainty regarding
      compatibility, it was decided to drop it for now.
      
      * Avoid using sds (which uses malloc) in logConfigDebugInfo:
      it was considered to print the config info directly instead of using
      sds, but apparently `logConfigDebugInfo` does more than just print the
      sds, so it was decided this fix is out of the scope of this issue.
      
      #### **8. fix Signal mask check**
      The check `signum & sig_mask`, intended to indicate whether the signal
      is blocked by the thread, was incorrect: the bit position in the signal
      mask corresponds to the signal number. We fixed this by changing the
      condition to `sig_mask & (1L << (sig_num - 1))`.
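
      A minimal sketch of the corrected check, assuming the same "bit
      (sig_num - 1) corresponds to signal sig_num" mask layout:

      /* Sketch: non-zero when sig_num is set in a mask where bit (sig_num - 1)
       * represents signal number sig_num. */
      static int signalIsBlocked(unsigned long sig_mask, int sig_num) {
          return (sig_mask & (1UL << (sig_num - 1))) != 0;
      }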
      
      #### **9. Unrelated changes**
      Both `fork.tcl` and `util.tcl` implemented a function called
      `count_log_message`, each expecting different parameters. This caused
      confusion when trying to run daily tests with additional test parameters
      to run a specific test.
      The `count_log_message` in `fork.tcl` was removed and its calls were
      replaced with calls to the `count_log_message` located in `util.tcl`.
      
      ---------
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
    • Fix misleading error message in redis.log when loglevel is invalid (#12636) · 5e403099
      Binbin authored
      There is no "debug level" in Redis; the error message now refers to the
      log level instead.
    • rdb.c: Avoid potential file handle leak (Coverity 404720) (#12795) · c9aa586b
      Moshe Kaplan authored
      `open()` can return any non-negative integer on success, including zero.
      This change modifies the check on open()'s return value so that a return
      value of zero is also handled (e.g., if stdin were closed and then
      `open()` was called).
      
      Fixes Coverity 404720
      
      This can't happen in Redis; it's just a cleanup.
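
      A minimal illustration of the point, not the rdb.c code: only -1 signals
      failure from open(), and descriptor 0 is perfectly valid.

      #include <fcntl.h>
      #include <unistd.h>

      /* Sketch: open() may legitimately return 0 (e.g. when stdin was closed
       * earlier), so cleanup must not treat 0 as "nothing to close". */
      static int touchFile(const char *path) {
          int fd = open(path, O_RDONLY);
          if (fd == -1) return -1;  /* only -1 means failure */
          /* ... use fd ... */
          close(fd);                /* a check like "if (fd > 0)" would leak fd 0 */
          return 0;
      }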
    • redis-check-aof.c: Avoid leaking file handle if file is zero bytes (#12797) · ae09d4d3
      Moshe Kaplan authored
      If fopen() is successful but redis_fstat() determines the file is zero
      bytes, the file handle stored in fp will leak. This change closes the
      file handle stored in fp if the file is zero bytes.
      
      An FD leak on a tool like redis-check-aof isn't an issue (it'll exit soon anyway).
      This is just a cleanup.
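
      A hedged sketch of the fixed early-return path; the surrounding
      scaffolding is illustrative, and plain fstat() stands in for Redis's
      redis_fstat wrapper.

      #include <stdio.h>
      #include <sys/stat.h>

      /* Sketch: close fp before bailing out when the file turns out empty. */
      static int checkAofSketch(const char *filename) {
          struct stat sb;
          FILE *fp = fopen(filename, "r");
          if (fp == NULL) return 0;
          if (fstat(fileno(fp), &sb) == -1 || sb.st_size == 0) {
              fclose(fp); /* previously leaked on the zero-size path */
              return 0;
          }
          /* ... actual checking ... */
          fclose(fp);
          return 1;
      }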
    • config.c: Avoid leaking file handle if redis_fstat() fails (#12796) · 1c48d3da
      Moshe Kaplan authored
      If fopen() is successful but redis_fstat() fails, the file handle stored
      in fp will leak. This change closes the file handle stored in fp if
      redis_fstat() fails.
      
      Fixes Coverity 390029
    • util.c: Don't leak directory handle if recursive call to dirRemove fails (#12800) · 157e5d47
      Moshe Kaplan authored
      If a recursive call to dirRemove() returns -1, the directory handle
      stored in dir will leak. This change closes the directory handle stored
      in dir even if the recursive call to dirRemove() returns -1.

      Fixes Coverity 371073
  4. 22 Nov, 2023 16 commits
  5. 21 Nov, 2023 4 commits
  6. 20 Nov, 2023 1 commit
  7. 19 Nov, 2023 3 commits
    • Add an explanation for URI with -u in redis-cli --help (#12751) · a1f91ffa
      Hwang Si Yeon authored
      Add documentation of the URI format in the `--help` output of
      `redis-cli` and `redis-benchmark`.
      
      In particular, it's good for users to know that they need to specify
      "default" as the username when authenticating without a username. Other
      details of the URI format are described too, like the scheme and dbnum.

      It used to be possible to connect to Redis using a URL with an empty
      username, like `redis-cli -u redis://:PASSWORD@localhost:6379/0`. This
      was broken in 6.2 (#8048), and there was a discussion about it in #9186.
      Now users need to specify "default" as the username, so it's better to
      document it.
      
      Refer to #12746 for more details.
    • Adding missing SWAPDB related test cases. (#12769) · 5a1f4b9a
      Wen Hui authored
      We have some SWAPDB test cases involving WATCHed keys, but were missing
      separate basic SWAPDB test cases, the unhappy path, and FLUSHDB after
      SWAPDB. These test cases were added to keyspace.tcl.
    • Fix timing issue in CLUSTER SLAVE / REPLICAS consistent test (#12774) · 3d9c427f
      Binbin authored
      CI reports that this test failed; the reason is that during command
      processing the node handled PING/PONG, resulting in a ping_sent or
      pong_received mismatch.

      Changed the test to use MULTI to avoid the timing issue. The test was
      introduced in #12224.
  8. 17 Nov, 2023 2 commits
  9. 16 Nov, 2023 1 commit
  10. 15 Nov, 2023 1 commit
    • Empty rehashing list in emptyDbStructure (#12764) · 4366bbaa
      Binbin authored
      This is currently harmless: since we have already cleared the dict
      beforehand, its rehashidx is reset to -1, and incrementallyRehash calls
      dictIsRehashing to check for that.

      Still, it is nicer to empty the list to avoid meaningless rehash
      attempts, and it also unifies the code to reduce misunderstandings.
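
      A hedged sketch of the idea; the struct layout below is invented for the
      sketch, and only dictEmpty and listEmpty are real helpers.

      /* Sketch: emptying a db should also clear the list of dicts that are
       * queued for incremental rehashing, so later cron cycles don't make
       * meaningless attempts on them. */
      typedef struct dbSketch {
          dict **dicts;      /* per-slot dictionaries */
          list *rehashing;   /* dicts currently queued for incremental rehashing */
      } dbSketch;

      static void emptyDbSketch(dbSketch *db, int num_slots) {
          for (int i = 0; i < num_slots; i++) dictEmpty(db->dicts[i], NULL);
          listEmpty(db->rehashing); /* drop stale entries as well */
      }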
  11. 14 Nov, 2023 1 commit
    • Fix DB iterator not resetting pauserehash causing dict being unable to rehash (#12757) · fe363063
      Binbin authored
      When using the DB iterator, dictInitSafeIterator is used to initialize a
      safe dict iterator. When dbIteratorNext is used, it jumps to the next
      slot's dict once the current dict is done. During this process there are
      no calls to dictResumeRehashing, which causes the dict's pauserehash to
      always stay > 0.

      As a result, dictRehashMilliseconds returns immediately, leaving the
      slot dict in a state where rehashing can never complete.

      In the "expire scan should skip dictionaries with lot's of empty buckets"
      test, adding a `keys *` reproduces the problem reliably, since `keys *`
      calls dbIteratorNext to trigger a traversal of all slot dicts.

      Added dbReleaseIterator and dbIteratorInitNextSafeIterator methods to
      call dictResetIterator. The issue was introduced in #11695.
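
      A hedged sketch of the shape of the fix, reusing dictResetIterator
      (which resumes rehashing for safe iterators); the embedded di field and
      the zfree call are assumptions about the dbIterator layout.

      /* Sketch: releasing the db iterator resets the underlying safe dict
       * iterator, which decrements the dict's pauserehash counter and lets
       * dictRehashMilliseconds make progress again. */
      void dbReleaseIterator(dbIterator *dbit) {
          dict *d = dbGetDictFromIterator(dbit);
          if (d != NULL) dictResetIterator(&dbit->di); /* assumed embedded iterator */
          zfree(dbit);
      }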