1. 21 Nov, 2024 1 commit
    • Moti Cohen's avatar
      modules API: Support register unprefixed config parameters (#13656) · 15563450
      Moti Cohen authored
      PR #10285 introduced support for modules to register four types of
      configurations — Bool, Numeric, String, and Enum — accessible through
      the Redis config file and the CONFIG command.
      
      With this PR, it becomes possible to register configuration parameters
      without automatically prefixing the parameter names. This provides
      greater flexibility in configuration naming, enabling, for instance,
      either `bf-initial-size` or `initial-size` to be defined in the module
      without the automatic `<MODULE-NAME>.` prefix. In addition, it is also
      possible to create a single additional alias via the same API. This
      brings us another step closer to integrating modules into the Redis
      core.
      
      **Example:** Register a configuration parameter `bf-initial-size` with
      an alias `initial-size`, without the automatic module name prefix, set
      with the new `REDISMODULE_CONFIG_UNPREFIXED` flag:
      ```
      RedisModule_RegisterBoolConfig(ctx, "bf-initial-size|initial-size", default_val, optflags | REDISMODULE_CONFIG_UNPREFIXED, getfn, setfn, applyfn, privdata);
      ```
      # API changes
      Related functions that now support unprefixed configuration flag
      (`REDISMODULE_CONFIG_UNPREFIXED`) along with optional alias:
      ```
      RedisModule_RegisterBoolConfig
      RedisModule_RegisterEnumConfig
      RedisModule_RegisterNumericConfig
      RedisModule_RegisterStringConfig
      ```
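
      For illustration, here is a minimal, hedged `RedisModule_OnLoad` sketch built
      on the existing module-config callbacks (the module name `bf`, the backing
      variable, and the callback names are made up for the example; only the
      `REDISMODULE_CONFIG_UNPREFIXED` flag is new in this PR):
      ```
      #include "redismodule.h"

      static int bf_initial_size = 1; /* illustrative backing variable */

      static int getBoolCfg(const char *name, void *privdata) {
          REDISMODULE_NOT_USED(name);
          REDISMODULE_NOT_USED(privdata);
          return bf_initial_size;
      }

      static int setBoolCfg(const char *name, int val, void *privdata, RedisModuleString **err) {
          REDISMODULE_NOT_USED(name);
          REDISMODULE_NOT_USED(privdata);
          REDISMODULE_NOT_USED(err);
          bf_initial_size = val;
          return REDISMODULE_OK;
      }

      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
          REDISMODULE_NOT_USED(argv);
          REDISMODULE_NOT_USED(argc);
          if (RedisModule_Init(ctx, "bf", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          /* "bf-initial-size" with alias "initial-size"; the UNPREFIXED flag keeps
           * Redis from prepending "bf." to either name. */
          if (RedisModule_RegisterBoolConfig(ctx, "bf-initial-size|initial-size", 1,
                  REDISMODULE_CONFIG_DEFAULT | REDISMODULE_CONFIG_UNPREFIXED,
                  getBoolCfg, setBoolCfg, NULL, NULL) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          /* Pull in values supplied via the config file or loadmodule arguments. */
          if (RedisModule_LoadConfigs(ctx) == REDISMODULE_ERR)
              return REDISMODULE_ERR;
          return REDISMODULE_OK;
      }
      ```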
      
      # Implementation Details:
      `config.c`: When loading the server configuration,
      `loadServerConfigFromString()` collects all unknown configurations into
      the `module_configs_queue` dictionary. These may include valid module
      configurations or invalid ones; they are validated later by
      `loadModuleConfigs()` against the configurations declared by the loaded
      module(s).
      `module.c`: The `ModuleConfig` structure has been modified to now store:
      (1) the full configuration name, (2) the alias, and (3) the unprefixed
      flag status, ensuring that configurations retain their original
      registration format when triggered in notifications.
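
      An illustrative sketch of the extra state (the field names below are
      assumptions for readability, not the exact identifiers in module.c):
      ```
      typedef struct ModuleConfig {
          sds name;        /* full configuration name as registered */
          sds alias;       /* optional alias, or NULL if none was given */
          int unprefixed;  /* set when REDISMODULE_CONFIG_UNPREFIXED was used */
          /* ... getter/setter/apply callbacks and privdata, as before ... */
      } ModuleConfig;
      ```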
      
      Added error printout:
      This change introduces an error printout for unresolved configurations,
      detailing each unresolved parameter detected during startup. The last
      line in the output existed prior to this change and has been retained
      because existing systems rely on it:
      ```
      595011:M 18 Nov 2024 08:26:23.616 # Unresolved Configuration(s) Detected:
      595011:M 18 Nov 2024 08:26:23.616 #  >>> 'bf-initiel-size 8'
      595011:M 18 Nov 2024 08:26:23.616 #  >>> 'search-sizex 32'
      595011:M 18 Nov 2024 08:26:23.616 # Module Configuration detected without loadmodule directive or no ApplyConfig call: aborting
      ```
      
      # Backward Compatibility:
      Existing modules will function without modification, as the new
      functionality only applies if REDISMODULE_CONFIG_UNPREFIXED is
      explicitly set.
      
      # Module vs. Core API Conflict Behavior
      The new API allows a module load to duplicate the same configuration
      name or the same configuration alias, just as Redis core configuration
      does (i.e. the user sets the same config twice with different values).
      Unlike Redis core, given a name and its alias, it does not allow both to
      be present on load. Supporting that would require modifying the
      `module_configs_queue` data structure to reflect loading order and then,
      during `loadModuleConfigs()`, resolving pairs of names and aliases to
      decide which one is applied last. "Relaxing" this limitation can be
      deferred to a future update if necessary, but for now, we error in this
      case.
      15563450
  2. 03 Sep, 2024 1 commit
    • Ozan Tezcan's avatar
      Reply LOADING on replica while flushing the db (#13495) · a7afd1d2
      Ozan Tezcan authored
      On a full sync, the replica starts discarding the existing db. If the
      existing db is huge and the flush happens synchronously, the replica may
      become unresponsive.

      This PR adds a change to yield back to the event loop while flushing the
      db on a replica. The replica will reply -LOADING in this case. Note that
      while the replica is loading the new rdb, it may get an error and start
      flushing the partial db. This step may take a long time as well.
      Similarly, the replica will reply -LOADING in this case.
      
      To call processEventsWhileBlocked() and reply -LOADING, we need to:
      - Set connSetReadHandler() to NULL so no further data from the master is processed
      - Set the server.loading flag
      - Call blockingOperationStarts()
      
      rdbLoad() already does these steps and calls processEventsWhileBlocked()
      while loading the rdb. Added a new call, rdbLoadWithEmptyFunc(), which
      accepts a callback to flush the db before loading the rdb or when an
      error happens while loading.

      For diskless replication, we do something similar and call emptyData()
      after setting the required flags.
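
      A hedged sketch of that sequence (these helpers exist in the replication
      and server code, but the exact arguments and ordering here are an
      approximation, not the PR's code):
      ```
      /* Prerequisites for replying -LOADING while the flush is in progress. */
      connSetReadHandler(conn, NULL);            /* stop consuming master input */
      startLoading(0, RDBFLAGS_REPLICATION, 0);  /* raises server.loading */
      blockingOperationStarts();

      /* Flushing can now periodically call processEventsWhileBlocked(), so
       * incoming commands are answered with -LOADING instead of stalling. */
      emptyData(-1, EMPTYDB_NO_FLAGS, replicationEmptyDbCallback);

      blockingOperationEnds();
      stopLoading(1);
      ```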
      
      Additional changes:
      - Allow `appendonly` config change during loading.
       The config can be changed while loading data on startup or during
       replication when the replica is loading the RDB. We allow the config
       change command to update `server.aof_enabled` and then lazily apply the
       change after the loading operation completes.
       
       - Added a test for `replica-lazy-flush` config
      a7afd1d2
  3. 09 Jul, 2024 1 commit
    • debing.sun's avatar
      Hide user data from log (#13400) · 69b480cb
      debing.sun authored
      
      
      This PR is based on the commits from PR #11747.
      
      In the event of an assertion failure, hide command arguments from the
      operator.
      
      In some cases, private client information can be unintentionally exposed
      when a redis instance crashes due to an assertion failure.
      This commit prevents unintentional client info exposure.
      Operators can still access the hidden data, but they must actively
      request it.
      The client info commands themselves remain unchanged.
      
      ### Config
      Add a new config `hide-user-data-from-log` to turn this feature on and
      off, default off.
      
      ---------
      Co-authored-by: naglera <anagler123@gmail.com>
      Co-authored-by: naglera <58042354+naglera@users.noreply.github.com>
      69b480cb
  4. 20 Mar, 2024 1 commit
  5. 06 Feb, 2024 1 commit
    • Binbin's avatar
      Re-compute active_defrag_running after adjusting defrag configurations (#13020) · 13bd3643
      Binbin authored
      Currently, once active defrag starts, we can not adjust
      active_defrag_running downwards. This is because active_defrag_running
      is dynamically computed based on the fragmentation; we think we should
      not lower the effort when the fragmentation drops.

      However, we need to note that active_defrag_running is also dynamically
      computed based on configurations. In this case, we are not respecting
      cycle-min or cycle-max. Some people may realize halfway through that
      defrag consumes a lot and want to adjust it.

      Previously we could only turn off activedefrag and then turn it on again
      to adjust active_defrag_running downwards. So in this PR, when an active
      defrag configuration change is made, we re-compute it.
      
      These configuration items are:
      - active-defrag-cycle-min
      - active-defrag-cycle-max
      - active-defrag-threshold-upper
      13bd3643
  6. 29 Jan, 2024 1 commit
    • Binbin's avatar
      Fix maxmemory-samples stack overflow crash in evictionPoolPopulate, limit its... · acd96052
      Binbin authored
      Fix maxmemory-samples stack overflow crash in evictionPoolPopulate, limit its value to [1,64] (#13000)
      
      We have not limited the value of maxmemory-samples in the past, so it
      could be set very large. If it is set very large, we will get a stack
      overflow in evictionPoolPopulate when key eviction is triggered.
      
      There is no reason for this config to be set too high, so just limit its
      range to [1,64].
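
      The clamp amounts to bounding the config declaration in config.c; roughly
      (the exact macro arguments may differ between versions):
      ```
      /* config.c (approximate): lower and upper bounds are now 1 and 64. */
      createIntConfig("maxmemory-samples", NULL, MODIFIABLE_CONFIG, 1, 64,
                      server.maxmemory_samples, 5, INTEGER_CONFIG, NULL, NULL),
      ```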
      acd96052
  7. 02 Jan, 2024 1 commit
    • AshMosh's avatar
      Manage number of new connections per cycle (#12178) · c3f8b542
      AshMosh authored
      
      
      There are situations (especially with TLS) in which the engine gets too occupied managing a large number of new connections. Existing connections may time out while the server is processing the initial TLS handshakes of the new connections, which may cause yet more new connections to be established, perpetuating the problem. To better manage the tradeoff between the new connection rate and other workloads, this change adds a new config to manage the maximum number of new connections per event loop cycle, instead of using a predetermined number (currently 1000).

      This change introduces two new configurations, max-new-connections-per-cycle and max-new-tls-connections-per-cycle. The default for TCP connections is 10 per cycle and the default for TLS connections is 1 per cycle.
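
      Conceptually, the accept handler's per-invocation budget simply becomes
      configurable; a hedged sketch (the config field name is illustrative, and
      the surrounding code is simplified relative to socket.c):
      ```
      void acceptTcpHandler(aeEventLoop *el, int fd, void *privdata, int mask) {
          (void)el; (void)privdata; (void)mask;
          /* Previously a fixed MAX_ACCEPTS_PER_CALL (1000); now read from config. */
          int max = server.max_new_conns_per_cycle;   /* illustrative field name */
          char cip[NET_IP_STR_LEN];
          while (max--) {
              int cport;
              int cfd = anetTcpAccept(server.neterr, fd, cip, sizeof(cip), &cport);
              if (cfd == ANET_ERR) return;            /* no more pending connections */
              acceptCommonHandler(connCreateAcceptedSocket(cfd), 0, cip);
          }
      }
      ```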
      ---------
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      c3f8b542
  8. 27 Dec, 2023 1 commit
    • Moshe Kaplan's avatar
      config.c: Avoid leaking file handle if file is 0 bytes (#12828) · fa751f9b
      Moshe Kaplan authored
      If fopen() is successful and redis_fstat determines that the file is 0
      bytes, the file handle stored in fp will leak. This change closes the
      file handle stored in fp if the file is 0 bytes.
      
      Second attempt at fixing Coverity 390029
      
      This is a follow-up to #12796
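
      The fix boils down to closing the handle on the early-return path; a
      minimal sketch of the pattern:
      ```
      FILE *fp = fopen(filename, "r");
      if (fp == NULL) return;         /* nothing opened, nothing to leak */

      struct redis_stat sb;
      if (redis_fstat(fileno(fp), &sb) != -1 && sb.st_size == 0) {
          fclose(fp);                 /* previously leaked on this path */
          return;
      }
      ```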
      fa751f9b
  9. 13 Dec, 2023 1 commit
    • Binbin's avatar
      Redact ACL username information and mark *-key-file-pass configs as sensitive (#12860) · 3c0fd252
      Binbin authored
      In #11489, we considered the ACL username to be sensitive information,
      considered ACL GETUSER a sensitive command, and removed it
      from the redis-cli history file.

      This PR redacts username information in ACL GETUSER and ACL DELUSER
      in SLOWLOG, and also removes ACL DELUSER from the redis-cli history file.

      This PR also marks tls-key-file-pass and tls-client-key-file-pass
      as sensitive configs, redacts them in SLOWLOG, and also
      removes them from the redis-cli history file.
      3c0fd252
  10. 29 Nov, 2023 1 commit
    • zhaozhao.zz's avatar
      format cpu config as redis style (#7351) · 3431b1f1
      zhaozhao.zz authored
      The following four configurations are renamed to align with Redis style:
      
      1. server_cpulist renamed to server-cpulist
      2. bio_cpulist renamed to bio-cpulist
      3. aof_rewrite_cpulist renamed to aof-rewrite-cpulist
      4. bgsave_cpulist renamed to bgsave-cpulist
      
      The original names are retained as aliases to ensure compatibility with
      old configuration files. We recommend that users gradually transition to
      the new configuration names to maintain consistency in style.
      3431b1f1
  11. 23 Nov, 2023 1 commit
  12. 02 Oct, 2023 1 commit
  13. 20 Aug, 2023 1 commit
    • meiravgri's avatar
      Signal handler attributes (#12426) · fe47c202
      meiravgri authored
      The purpose of this PR is to make the crash report process thread safe.
      Main changes include:

      1. `setupSigSegvHandler()` is introduced to initialize the signal handler.
      This function first initializes the signal handler mutex (if not initialized yet)
      and then registers the process to the signal handler.

      2. **sigsegvHandler** flags:
      SA_NODEFER - don't add the signal to the process signal mask. We use this
      flag because we want to be able to handle a second call to the signal manually.
      Removed SA_RESETHAND: this flag resets the signal handler function upon the first
      entrance to the registered function. The reason to use this flag is to protect against
      recursively entering the signal handler from the same thread. But it also means
      that if a second thread crashes while handling a signal, the process will be
      terminated immediately and we won't get the crash report.
      In this PR we discard this flag. The purpose of the signal handler guard described
      below is to solve the above issues.

      3. Add a **signal handler lock** with ERRORCHECK attributes.
      The lock's purpose is to ensure that only one thread generates a crash report.
      Once a second thread enters the signal handler it will be blocked.
      We use the ERRORCHECK lock in order to protect from a possible deadlock in
      case the thread handling the crash gets a signal. In the latter scenario, we log
      what we have collected until the handler crashed.

      At the end of the crash report we reset the signal handler to SIG_DFL, with no flags, and
      rethrow the signal to generate a core dump (if enabled) and exit the process.
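
      A condensed, hedged sketch of the shape of this setup (an ERRORCHECK mutex
      guarding the report, SA_NODEFER on registration, and a reset-and-rethrow at
      the end); it is simplified relative to the actual debug.c code:
      ```
      #include <signal.h>
      #include <string.h>
      #include <pthread.h>

      static pthread_mutex_t signal_handler_lock;

      static void sigsegvHandler(int sig, siginfo_t *info, void *secret) {
          (void)info; (void)secret;
          /* ERRORCHECK: re-locking from the same thread returns an error instead
           * of deadlocking, so we can still log whatever was collected so far. */
          if (pthread_mutex_lock(&signal_handler_lock) != 0) {
              /* the crash-handling thread crashed again: flush the partial report */
          }

          /* ... generate the crash report ... */

          /* Reset to the default handler (no flags) and rethrow to produce a core
           * dump, if enabled, and terminate the process. */
          struct sigaction act;
          memset(&act, 0, sizeof(act));
          act.sa_handler = SIG_DFL;
          sigaction(sig, &act, NULL);
          raise(sig);
      }

      void setupSigSegvHandler(void) {
          pthread_mutexattr_t attr;
          pthread_mutexattr_init(&attr);
          pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
          pthread_mutex_init(&signal_handler_lock, &attr);

          struct sigaction act;
          memset(&act, 0, sizeof(act));
          sigemptyset(&act.sa_mask);
          /* SA_NODEFER: a second delivery of the signal is handled manually rather
           * than being blocked; SA_RESETHAND is intentionally no longer set. */
          act.sa_flags = SA_NODEFER | SA_SIGINFO;
          act.sa_sigaction = sigsegvHandler;
          sigaction(SIGSEGV, &act, NULL);
      }
      ```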
      
      During the work on this PR we wanted to understand the historical reasons for
      how crashes are handled.
      With respect to the choice of flags, we believe **SA_RESETHAND** was not
      added for any specific purpose.
      **SA_ONSTACK** which is removed here from bugReportEnd(), was originally also
      set in the initial registration to signal handler, but removed in 3ada43e7. In addition,
      it was removed from another location in deee2c1e with the following description,
      which is also relevant to why it should be removed from bugReportEnd:
      
      > it seems to be some valgrind bug with SA_ONSTACK.
      > SA_ONSTACK seems unneeded since WD is not recursive (SA_NODEFER was removed),
      > also, not sure if it's even valid without a call to sigaltstack()
      fe47c202
  14. 27 Jun, 2023 1 commit
    • Binbin's avatar
      Set HIDDEN_CONFIG flag on aof-disable-auto-gc (#12355) · f58fd9e6
      Binbin authored
      aof-disable-auto-gc was created for testing purposes,
      to check if certain AOF files were actually generated
      and if they were deleted correctly during testing.

      So we hide it; see #12249 for more discussion.
      f58fd9e6
  15. 20 Jun, 2023 1 commit
    • Wen Hui's avatar
      Sanitizer reported memory leak for '--invalid' option or port number is missed... · 813924b4
      Wen Hui authored
      
      Sanitizer reported memory leak for '--invalid' option or port number is missed cases to redis-server. (#12322)
      
      We observed that the sanitizer reported a memory leak because cleanup is not done
      before process termination in the following negative cases:
      
      **- when we passed '--invalid' as option to redis-server.**
      
      ```
       -vm:~/mem-leak-issue/redis$ ./src/redis-server --invalid
      
      *** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
      Reading the configuration file, at line 2
      >>> 'invalid'
      Bad directive or wrong number of arguments
      
      =================================================================
      ==865778==ERROR: LeakSanitizer: detected memory leaks
      
      Direct leak of 8 byte(s) in 1 object(s) allocated from:
          #0 0x7f0985f65867 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
          #1 0x558ec86686ec in ztrymalloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:117
          #2 0x558ec86686ec in ztrymalloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:135
          #3 0x558ec86686ec in ztryrealloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:276
          #4 0x558ec86686ec in zrealloc /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:327
          #5 0x558ec865dd7e in sdssplitargs /home/ubuntu/mem-leak-issue/redis/src/sds.c:1172
          #6 0x558ec87a1be7 in loadServerConfigFromString /home/ubuntu/mem-leak-issue/redis/src/config.c:472
          #7 0x558ec87a13b3 in loadServerConfig /home/ubuntu/mem-leak-issue/redis/src/config.c:718
          #8 0x558ec85e6f15 in main /home/ubuntu/mem-leak-issue/redis/src/server.c:7258
          #9 0x7f09856e5d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
      
      SUMMARY: AddressSanitizer: 8 byte(s) leaked in 1 allocation(s).
      
      ```
      
      **- when we pass '--port' as option and missed to add port number to redis-server.**
      
      ```
      vm:~/mem-leak-issue/redis$ ./src/redis-server --port
      
      *** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
      Reading the configuration file, at line 2
      >>> 'port'
      wrong number of arguments
      
      =================================================================
      ==865846==ERROR: LeakSanitizer: detected memory leaks
      
      Direct leak of 8 byte(s) in 1 object(s) allocated from:
          #0 0x7fdcdbb1f867 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
          #1 0x557e8b04f6ec in ztrymalloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:117
          #2 0x557e8b04f6ec in ztrymalloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:135
          #3 0x557e8b04f6ec in ztryrealloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:276
          #4 0x557e8b04f6ec in zrealloc /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:327
          #5 0x557e8b044d7e in sdssplitargs /home/ubuntu/mem-leak-issue/redis/src/sds.c:1172
          #6 0x557e8b188be7 in loadServerConfigFromString /home/ubuntu/mem-leak-issue/redis/src/config.c:472
          #7 0x557e8b1883b3 in loadServerConfig /home/ubuntu/mem-leak-issue/redis/src/config.c:718
          #8 0x557e8afcdf15 in main /home/ubuntu/mem-leak-issue/redis/src/server.c:7258
          #9 0x7fdcdb29fd8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
      
      Indirect leak of 10 byte(s) in 1 object(s) allocated from:
          #0 0x7fdcdbb1fc18 in __interceptor_realloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:164
          #1 0x557e8b04f9aa in ztryrealloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:287
          #2 0x557e8b04f9aa in ztryrealloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:317
          #3 0x557e8b04f9aa in zrealloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:342
          #4 0x557e8b033f90 in _sdsMakeRoomFor /home/ubuntu/mem-leak-issue/redis/src/sds.c:271
          #5 0x557e8b033f90 in sdsMakeRoomFor /home/ubuntu/mem-leak-issue/redis/src/sds.c:295
          #6 0x557e8b033f90 in sdscatlen /home/ubuntu/mem-leak-issue/redis/src/sds.c:486
          #7 0x557e8b044e1f in sdssplitargs /home/ubuntu/mem-leak-issue/redis/src/sds.c:1165
          #8 0x557e8b188be7 in loadServerConfigFromString /home/ubuntu/mem-leak-issue/redis/src/config.c:472
          #9 0x557e8b1883b3 in loadServerConfig /home/ubuntu/mem-leak-issue/redis/src/config.c:718
          #10 0x557e8afcdf15 in main /home/ubuntu/mem-leak-issue/redis/src/server.c:7258
          #11 0x7fdcdb29fd8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
      
      SUMMARY: AddressSanitizer: 18 byte(s) leaked in 2 allocation(s).
      
      ```
      
      Analysis found that sdsfreesplitres() is not called when these condition checks are hit.
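
      In other words, the error paths now release the token vector before bailing
      out; a minimal sketch of the pattern (the sds helpers are real, but the
      parse function below is simplified and illustrative):
      ```
      #include "sds.h"

      /* Sketch: a directive line is tokenized; on a validation error the token
       * vector must be freed before jumping to the error handler. */
      static const char *parseLine(sds line) {
          int argc;
          sds *argv = sdssplitargs(line, &argc);
          if (argv == NULL) return "Unbalanced quotes in configuration line";

          if (argc < 1) {
              sdsfreesplitres(argv, argc);   /* previously missing: tokens leaked */
              return "Bad directive or wrong number of arguments";
          }
          /* ... handle the directive ... */
          sdsfreesplitres(argv, argc);
          return NULL;
      }
      ```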
      
      Output after the fix:
      
      
      ```
      vm:~/mem-leak-issue/redis$ ./src/redis-server --invalid
      
      *** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
      Reading the configuration file, at line 2
      >>> 'invalid'
      Bad directive or wrong number of arguments
      vm:~/mem-leak-issue/redis$
      
      ===========================================
      vm:~/mem-leak-issue/redis$ ./src/redis-server --jdhg
      
      *** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
      Reading the configuration file, at line 2
      >>> 'jdhg'
      Bad directive or wrong number of arguments
      
      ---------------------------------------------------------------------------
      vm:~/mem-leak-issue/redis$ ./src/redis-server --port
      
      *** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
      Reading the configuration file, at line 2
      >>> 'port'
      wrong number of arguments
      ```
      Co-authored-by: Oran Agra <oran@redislabs.com>
      813924b4
  16. 18 Jun, 2023 1 commit
    • Wen Hui's avatar
      Cluster human readable nodename feature (#9564) · 070453ee
      Wen Hui authored
      
      
      This PR adds a human readable name to a node in clusters that are visible as part of error logs. This is useful so that admins and operators of Redis cluster have better visibility into failures without having to cross-reference the generated ID with some logical identifier (such as pod-ID or EC2 instance ID). This is mentioned in #8948. Specific nodenames can be set by using the variable cluster-announce-human-nodename. The nodename is gossiped using the clusterbus extension in #9530.
      Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
      070453ee
  17. 28 May, 2023 1 commit
  18. 23 May, 2023 1 commit
    • zhaozhao.zz's avatar
      add a new loglevel 'nothing' to disable logging (#12133) · 07ea2204
      zhaozhao.zz authored
      Users can record logs of different levels by setting the `loglevel`.
      However, sometimes there are many logs even at the warning level,
      which can affect the performance of Redis.
      
      For example, when a user accesses the tls-port using a non-encrypted link,
      Redis will log lots of "# Error accepting a client connection: ...".
      
      We can provide the ability to disable logging so that users can temporarily turn
      off logging and turn it back on after the problem is resolved.
      07ea2204
  19. 03 May, 2023 1 commit
    • Madelyn Olson's avatar
      Remove prototypes with empty declarations (#12020) · 5e3be1be
      Madelyn Olson authored
      Technically, declaring a prototype with an empty parameter list has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently introduce a breaking change if you use this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although not apparently a problem for any of the compilers we build on, it feels like the right thing is to properly adhere to the C standard and use (void).
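
      For example (illustrative):
      ```
      void foo();      /* deprecated: unspecified parameters, not "no parameters" */
      void foo(void);  /* conforming: explicitly takes no arguments */
      ```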
      5e3be1be
  20. 04 Apr, 2023 1 commit
    • Subhi Al Hasan's avatar
      check for known-slave in sentinel rewrite config (#11775) · 74b29985
      Subhi Al Hasan authored
      Fix the following config file error
      
      ```
      *** FATAL CONFIG FILE ERROR (Redis 6.2.7) ***
      Reading the configuration file, at line 152
      >>> 'sentinel known-replica XXXX 127.0.0.1 5001'
      Duplicate hostname and port for replica.
      ```
      
      
      that happens when a user uses the legacy key "known-slave" in
      the config file and a config rewrite occurs. The config rewrite logic won't
      replace the old line "sentinel known-slave XXXX 127.0.0.1 5001" and
      will add a new line "sentinel known-replica XXXX 127.0.0.1 5001",
      which results in the error above, "Duplicate hostname and port for replica."
      
      example:
      
      Current sentinel.conf:
      ```
      ...
      
      sentinel known-slave XXXX 127.0.0.1 5001
      sentinel example-random-option X
      ...
      ```
      after the config rewrite logic runs:
      ```
      ....
      sentinel known-slave XXXX 127.0.0.1 5001
      sentinel example-random-option X
      
      # Generated by CONFIG REWRITE
      sentinel known-replica XXXX 127.0.0.1 5001
      ```
      
      This bug only exists in Redis versions >=6.2 because prior to that it was hidden
      by the effects of this bug https://github.com/redis/redis/issues/5388 that was fixed
      in https://github.com/redis/redis/pull/8271 and was released in versions >=6.2
      74b29985
  21. 14 Mar, 2023 1 commit
    • Slava Koyfman's avatar
      Implementing the WAITAOF command (issue #10505) (#11713) · 9344f654
      Slava Koyfman authored
      
      
      Implementing the WAITAOF functionality, which allows the user to
      block until a specified number of Redis instances have fsynced all previous write
      commands to the AOF.

      Syntax: `WAITAOF <num_local> <num_replicas> <timeout>`
      Response: an array containing two elements: num_local, num_replicas.
      num_local is always either 0 or 1, representing the local AOF on the master.
      num_replicas is the number of replicas that acknowledged that the replication
      offset of the last write was fsynced to the AOF.

      Returns an error when called on replicas, or when called with a non-zero
      num_local on a master with AOF disabled; in all other cases the response
      just contains the number of fsync copies.
      
      Main changes:
      * Added code to keep track of replication offsets that are confirmed to have
        been fsynced to disk.
      * Keep advancing master_repl_offset even when replication is disabled (and
        there's no replication backlog, only if there's an AOF enabled).
        This way we can use this command and it's mechanisms even when replication
        is disabled.
      * Extend REPLCONF ACK to `REPLCONF ACK <ofs> FACK <ofs>`, the FACK
        will be appended only if there's an AOF on the replica, and already ignored on
        old masters (thus backwards compatible)
      * WAIT no longer waits for the replication offset after your last command, but
        rather the replication offset after your last write (or a read command that caused
        propagation, e.g. lazy expiry).
      
      Unrelated changes:
      * WAIT command respects CLIENT_DENY_BLOCKING (not just CLIENT_MULTI)
      
      Implementation details:
      * Add an atomic var named `fsynced_reploff_pending` that's updated
        (usually by the bio thread) and later copied to the main `fsynced_reploff`
        variable (only if the AOF base file exists).
        I.e. during the initial AOF rewrite it will not be used as the fsynced offset
        since the AOF base is still missing.
      * Replace the close+fsync bio job with a new BIO_CLOSE_AOF (AOF specific)
        job that will also update the fsync offset field.
      * Handle all AOF jobs (BIO_CLOSE_AOF, BIO_AOF_FSYNC) in the same bio
        worker thread, to impose ordering on their execution. This solves a
        race condition where a job could set `fsynced_reploff_pending` to a higher
        value than another pending fsync job, resulting in indicating an offset
        for which parts of the data have not yet actually been fsynced.
        Imposing an ordering on the jobs guarantees that fsync jobs are executed
        in increasing order of replication offset.
      * Drain bio jobs when switching `appendfsync` to "always"
        This should prevent a write race between updates to `fsynced_reploff_pending`
        in the main thread (`flushAppendOnlyFile` when set to ALWAYS fsync), and
        those done in the bio thread.
      * Drain the pending fsync when starting over a new AOF to avoid race conditions
        with the previous AOF offsets overriding the new one (e.g. after switching to
        replicate from a new master).
      * Make sure to update the fsynced offset at the end of the initial AOF rewrite.
        This is a must in case there are no additional writes that trigger a periodic fsync,
        specifically for a replica that does a full sync.
      
      Limitations:
      It is possible to write a module or a Lua script that propagates to the AOF but doesn't
      propagate to the replication stream; see REDISMODULE_ARGV_NO_REPLICAS and luaRedisSetReplCommand.
      These features are incompatible with the WAITAOF command, and can result
      in two bad cases. The scenario is that the user executes a command that only
      propagates to the AOF, and then immediately
      issues a WAITAOF, and there are no further writes on the replication stream after that.
      1. If the last thing that happened on the replication stream is a PING
        (which increased the replication offset but won't trigger an fsync on the replica),
        then the client will hang forever (waiting for a FACK that the replica will never
        send since it doesn't trigger any fsyncs).
      2. If the last thing that happened is a write command that got propagated properly,
        then WAITAOF will be released immediately, without waiting for an fsync (since
        the offset didn't change)
      
      Refactoring:
      * Plumbing to allow bio worker to handle multiple job types
        This introduces infrastructure necessary to allow BIO workers to
        not have a 1-1 mapping of worker to job-type. This allows in the
        future to assign multiple job types to a single worker, either as
        a performance/resource optimization, or as a way of enforcing
        ordering between specific classes of jobs.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      9344f654
  22. 11 Mar, 2023 1 commit
    • guybe7's avatar
      Add reply_schema to command json files (internal for now) (#10273) · 4ba47d2d
      guybe7 authored
      Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845
      Since ironing the details of the reply schema of each and every command can take a long time, we
      would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch.
      Meanwhile the changes of this PR are internal, they are part of the repo, but do not affect the produced build.
      
      ### Background
      In #9656 we add a lot of information about Redis commands, but we are missing information about the replies
      
      ### Motivation
      1. Documentation. This is the primary goal.
      2. It should be possible, based on the output of COMMAND, to be able to generate client code in typed
        languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
      3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
        testsuite, see the "Testing" section)
      
      ### Schema
      The idea is to supply some sort of schema for the various replies of each command.
      The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
      Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
      and without the `FULL` modifier)
      We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.
      
      Example for `BZPOPMIN`:
      ```
      "reply_schema": {
          "oneOf": [
              {
                  "description": "Timeout reached and no elements were popped.",
                  "type": "null"
              },
              {
                  "description": "The keyname, popped member, and its score.",
                  "type": "array",
                  "minItems": 3,
                  "maxItems": 3,
                  "items": [
                      {
                          "description": "Keyname",
                          "type": "string"
                      },
                      {
                          "description": "Member",
                          "type": "string"
                      },
                      {
                          "description": "Score",
                          "type": "number"
                      }
                  ]
              }
          ]
      }
      ```
      
      #### Notes
      1.  It is ok that some commands' reply structure depends on the arguments and it's the caller's responsibility
        to know which is the relevant one. this comes after looking at other request-reply systems like OpenAPI,
        where the reply schema can also be oneOf and the caller is responsible to know which schema is the relevant one.
      2. The reply schemas will describe RESP3 replies only. even though RESP3 is structured, we want to use reply
        schema for documentation (and possibly to create a fuzzer that validates the replies)
      3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
        including any relation to arguments. for example, for `ZRANGE`'s two schemas we will need to state that one
        is with `WITHSCORES` and the other is without.
      4. For documentation, there will be another optional field "notes" in which we will add a short description of
        the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
        array, for example)
      
      Given the above:
      1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
        (given that "description" and "notes" are comprehensive enough)
      2. We can generate a client in a strongly typed language (but the return type could be a conceptual
        `union` and the caller needs to know which schema is relevant). see the section below for RESP2 support.
      3. We can create a fuzzer for RESP3.
      
      ### Limitations (because we are using the standard json-schema)
      The problem is that Redis' replies are more diverse than what the json format allows. This means that,
      when we convert the reply to a json (in order to validate the schema against it), we lose information (see
      the "Testing" section below).
      The other option would have been to extend the standard json-schema (and json format) to include stuff
      like sets, bulk-strings, error-string, etc. but that would mean also extending the schema-validator - and that
      seemed like too much work, so we decided to compromise.
      
      Examples:
      1. We cannot tell the difference between an "array" and a "set"
      2. We cannot tell the difference between simple-string and bulk-string
      3. we cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
        case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
        compares (member,score) tuples and not just the member name. 
      
      ### Testing
      This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
      are indeed correct (i.e. describe the actual response of Redis).
      To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
      it executed and their replies.
      For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
      `--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
      `--log-req-res --force-resp3`)
      You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
      `.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
      These files are later on processed by `./utils/req-res-log-validator.py` which does:
      1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
      2. For each request-response pair, it validates the response against the request's reply_schema
        (obtained from the extended COMMAND DOCS)
      5. In order to get good coverage of the Redis commands, and all their different replies, we chose to use
        the existing redis test suite, rather than attempt to write a fuzzer.
      
      #### Notes about RESP2
      1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to
        accept RESP3 as the future RESP)
      2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3
        so that we can validate it, we will need to know how to convert the actual reply to the one expected.
         - number and boolean are always strings in RESP2 so the conversion is easy
         - objects (maps) are always a flat array in RESP2
         - others (nested array in RESP3's `ZRANGE` and others) will need some special per-command
           handling (so the client will not be totally auto-generated)
      
      Example for ZRANGE:
      ```
      "reply_schema": {
          "anyOf": [
              {
                  "description": "A list of member elements",
                  "type": "array",
                  "uniqueItems": true,
                  "items": {
                      "type": "string"
                  }
              },
              {
                  "description": "Members and their scores. Returned in case `WITHSCORES` was used.",
                  "notes": "In RESP2 this is returned as a flat array",
                  "type": "array",
                  "uniqueItems": true,
                  "items": {
                      "type": "array",
                      "minItems": 2,
                      "maxItems": 2,
                      "items": [
                          {
                              "description": "Member",
                              "type": "string"
                          },
                          {
                              "description": "Score",
                              "type": "number"
                          }
                      ]
                  }
              }
          ]
      }
      ```
      
      ### Other changes
      1. Some tests that behave differently depending on the RESP are now being tested for both RESP,
        regardless of the special log-req-res mode ("Pub/Sub PING" for example)
      2. Update the history field of CLIENT LIST
      3. Added basic tests for commands that were not covered at all by the testsuite
      
      ### TODO
      
      - [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g.
        when `SET` return NULL, the condition is `arguments.get||arguments.condition`, for `OK` the condition
        is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
      - [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
      - [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
      - [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output
        of the tests - https://github.com/redis/redis/issues/11897
      - [x] (probably a separate PR) add all missing schemas
      - [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
      - [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to
        fight with the tcl including mechanism a bit)
      - [x] issue: module API - https://github.com/redis/redis/issues/11898
      - [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899
      
      Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
      Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      Co-authored-by: Shaya Potter <shaya@redislabs.com>
      4ba47d2d
  23. 19 Feb, 2023 1 commit
  24. 11 Jan, 2023 1 commit
    • Viktor Söderqvist's avatar
      Make dictEntry opaque · c84248b5
      Viktor Söderqvist authored
      Use functions for all accesses to dictEntry (except in dict.c). Dict abuses,
      e.g. in defrag.c, have been replaced by support functions provided by dict.
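
      Call sites now go through the accessors exported by dict.h instead of
      touching dictEntry fields directly; a small illustrative example (the
      wrapper function below is made up):
      ```
      #include "dict.h"

      /* Before (direct field access, requires knowing dictEntry's layout):
       *     void *old = de->v.val;
       *     de->v.val = newval;
       * After (accessor functions): */
      void updateEntry(dict *d, dictEntry *de, void *newval) {
          void *key = dictGetKey(de);   /* still readable, just through a function */
          void *old = dictGetVal(de);
          dictSetVal(d, de, newval);
          (void)key; (void)old;
      }
      ```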
      c84248b5
  25. 05 Jan, 2023 1 commit
  26. 04 Jan, 2023 1 commit
  27. 07 Dec, 2022 1 commit
    • Harkrishn Patro's avatar
      Optimize client memory usage tracking operation while client eviction is disabled (#11348) · c0267b3f
      Harkrishn Patro authored
      
      
      ## Issue
      During the client input/output buffer processing, the memory usage is
      incrementally updated to keep track of clients going beyond a certain
      threshold `maxmemory-clients` to be evicted. However, this additional
      tracking activity leads to unnecessary CPU cycles wasted when no
      client-eviction is required. It is applicable in two cases.
      
      * `maxmemory-clients` is set to `0` which equates to no client eviction
        (applicable to all clients)
      * `CLIENT NO-EVICT` flag is set to `ON` which equates to a particular
        client not applicable for eviction.  
      
      ## Solution
      * Disable client memory usage tracking during the read/write flow when
        `maxmemory-clients` is set to `0` or `client no-evict` is `on`.
        The memory usage is tracked only during the `clientCron` i.e. it gets
        periodically updated.
      * Cleanup the clients from the memory usage bucket when client eviction
        is disabled.
      * When the maxmemory-clients config is enabled or disabled at runtime,
        we immediately update the memory usage buckets for all clients (tested
        scanning 80000 took some 20ms)
      
      Benchmarks showed that this can improve performance by about 5% in
      certain situations.
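
      A rough sketch of the gating described above (the flag and config field
      exist in Redis, but the function bodies here are simplified and
      illustrative):
      ```
      #include "server.h"

      /* Skip per-command memory accounting when eviction cannot apply. */
      static int clientEvictionAllowed(client *c) {
          if (server.maxmemory_clients == 0) return 0;   /* eviction disabled */
          if (c->flags & CLIENT_NO_EVICT) return 0;      /* client opted out */
          return 1;
      }

      void updateClientMemUsageAndBucket(client *c) {
          if (!clientEvictionAllowed(c)) return;  /* left to the periodic clientCron */
          /* ... recompute the client's memory usage and move it between buckets ... */
      }
      ```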
      Co-authored-by: Oran Agra <oran@redislabs.com>
      c0267b3f
  28. 26 Nov, 2022 1 commit
  29. 09 Nov, 2022 1 commit
    • Viktor Söderqvist's avatar
      Listpack encoding for sets (#11290) · 4e472a1a
      Viktor Söderqvist authored
      Small sets with not only integer elements are listpack encoded, by default
      up to 128 elements, max 64 bytes per element, new config `set-max-listpack-entries`
      and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable.
      
      Sets with only integers, even very small sets, are still intset encoded (up to 1G
      limit, etc.). Larger sets are hashtable encoded.
      
      This PR increments the RDB version, and has an effect on OBJECT ENCODING
      
      Possible conversions when elements are added:
      
          intset -> listpack
          listpack -> hashtable
          intset -> hashtable
      
      Note: No conversion happens when elements are deleted. If all elements are
      deleted and then added again, the set is deleted and recreated, thus implicitly
      converted to a smaller encoding.
      4e472a1a
  30. 02 Nov, 2022 1 commit
  31. 03 Oct, 2022 1 commit
    • Madelyn Olson's avatar
      Stabilize cluster hostnames tests (#11307) · 663fbd34
      Madelyn Olson authored
      This PR introduces a couple of changes to improve cluster test stability:
      1. Increase the cluster node timeout to 3 seconds, which is similar to the
         normal cluster tests, but introduce a new mechanism to increase the ping
         period so that the tests are still fast. This new config is a debug config.
      2. Set `cluster-replica-no-failover yes` on a wider array of tests which are
         sensitive to failovers. This was occurring on the ARM CI.
      663fbd34
  32. 22 Sep, 2022 1 commit
    • Shaya Potter's avatar
      Add RM_SetContextUser to support acl validation in RM_Call (and scripts) (#10966) · 6e993a5d
      Shaya Potter authored
      Adds a number of user management/ACL validation/command execution functions to improve a
      Redis module's ability to enforce ACLs correctly and easily.
      
      * RM_SetContextUser - sets a RedisModuleUser on the context, which RM_Call will use to both
        validate ACLs (if requested and set) as well as assign to the client so that scripts executed via
        RM_Call will have proper ACL validation.
      * RM_SetModuleUserACLString - Enables one to pass an entire ACL string, not just a single OP
        and have it applied to the user
      * RM_GetModuleUserACLString - returns a stringified version of the user's ACL (same format as dump
        and list).  Contains an optimization to cache the stringified version until the underlying ACL is modified.
      * Slightly re-purpose the "C" flag to RM_Call from just being about ACL check before calling the
        command, to actually running the command with the right user, so that it also affects commands
        inside EVAL scripts. see #11231
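
      A hedged usage sketch tying the three functions together (the user name,
      key pattern and error handling are illustrative, and the signatures are
      assumed to match the module API added here):
      ```
      #include "redismodule.h"

      /* Run a command on behalf of a restricted, module-managed user. */
      int getAsReader(RedisModuleCtx *ctx) {
          RedisModuleUser *user = RedisModule_CreateModuleUser("module_reader");
          RedisModuleString *err = NULL;
          if (RedisModule_SetModuleUserACLString(ctx, user, "on ~data:* +get", &err)
                  == REDISMODULE_ERR) {
              if (err) RedisModule_FreeString(ctx, err);
              RedisModule_FreeModuleUser(user);
              return REDISMODULE_ERR;
          }
          RedisModule_SetContextUser(ctx, user);   /* used by RM_Call for ACL checks */

          /* "C": check (and run with) the context user's ACLs, also inside scripts. */
          RedisModuleCallReply *reply = RedisModule_Call(ctx, "GET", "cC", "data:1");
          if (reply) RedisModule_FreeCallReply(reply);

          RedisModule_SetContextUser(ctx, NULL);   /* detach before freeing */
          RedisModule_FreeModuleUser(user);
          return REDISMODULE_OK;
      }
      ```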
      6e993a5d
  33. 22 Aug, 2022 4 commits
    • zhenwei pi's avatar
      Introduce .listen into connection type · 0b27cfe3
      zhenwei pi authored
      
      
      Introduce a listen method into the connection type; this avoids hard-coding
      the listen logic. Originally, we initialized the server during startup like
      this:
          if (server.port)
              listenToPort(server.port,&server.ipfd);
          if (server.tls_port)
              listenToPort(server.tls_port,&server.tlsfd);
          if (server.unixsocket)
              anetUnixServer(...server.unixsocket...);
      
          ...
          if (createSocketAcceptHandler(&server.ipfd, acceptTcpHandler) != C_OK)
          if (createSocketAcceptHandler(&server.tlsfd, acceptTcpHandler) != C_OK)
          if (createSocketAcceptHandler(&server.sofd, acceptTcpHandler) != C_OK)
          ...
      
      If a new connection type gets supported, we have to add more hard-coded
      logic to set up the listener.

      Introduce .listen and refactor the listener, and the Unix socket supports
      this too. This allows setting up listener arguments and creating the
      listeners in a loop.

      What's more, '.listen' is defined in connection.h, so we would have to include
      server.h to import 'struct socketFds', but server.h already includes
      'connection.h'. To avoid an include loop (and to keep the code reasonable),
      define 'struct connListener' in connection.h instead of using 'struct socketFds'
      from server.h. This is why this commit contains more changes.
      
      There are more fields in 'struct connListener', hence it's possible to
      simplify changeBindAddr & applyTLSPort() & updatePort() into a single
      logic: update the listener config from the server.xxx, and re-create
      the listener.
      
      Because of the new field 'priv' in struct connListener, we expect to pass
      this to the accept handler (even though it's not used currently); it may be
      used in the future.
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      0b27cfe3
    • zhenwei pi's avatar
      Use connection name of string · 45617385
      zhenwei pi authored
      
      
      Suggested by Oran, use an array to store all the connection types
      instead of a linked list, and use a string as the connection name. The index
      of a connection type is dynamically allocated.
      
      Currently we support a maximum of 8 connection types, including:
      - tcp
      - unix socket
      - tls
      
      and RDMA is in the plan, so we have room for another 4 types, which
      should be enough for a long time.
      
      Introduce 3 functions to get connection type by a fast path:
      - connectionTypeTcp()
      - connectionTypeTls()
      - connectionTypeUnix()
      
      Note that connectionByType() is designed to use only in unlikely code path.
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      45617385
    • zhenwei pi's avatar
      Abstract accept handler · 0ae02ce9
      zhenwei pi authored
      
      
      Abstract accept handler for socket&TLS, and add helper function
      'connAcceptHandler' to get accept handler by specified type.
      
      Also move acceptTcpHandler into socket.c, and move
      acceptTLSHandler into tls.c.
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      0ae02ce9
    • zhenwei pi's avatar
      Introduce connection layer framework · 8234a512
      zhenwei pi authored
      
      
      Use connTypeRegister() to register a connection type into redis, and
      query connection by connectionByType() via type.
      
      With this change, we can hide TLS specified methods into connection
      type:
      - void tlsInit(void);
      - void tlsCleanup(void);
      - int tlsConfigure(redisTLSContextConfig *ctx_config);
      - int isTlsConfigured(void);
      
      Merge isTlsConfigured & tlsConfigure, use an argument *reconfigure*
      to distinguish:
         tlsConfigure(&server.tls_ctx_config)
      -> connTypeConfigure(CONN_TYPE_TLS, &server.tls_ctx_config, 1)
      
         isTlsConfigured() && tlsConfigure(&server.tls_ctx_config)
      -> connTypeConfigure(CONN_TYPE_TLS, &server.tls_ctx_config, 0)
      
      Finally, we can remove USE_OPENSSL from config.c. If redis is built
      without TLS but is still run with TLS enabled, then redis reports:
       # Missing implement of connection type 1
       # Failed to configure TLS. Check logs for more info.
      
      The log can be optimized; let's leave that for the future. Maybe we can
      use the connection type as a string.
      
      Although uninitialized fields of a static struct are zero, we still
      set them to NULL explicitly in socket.c, to keep them clear to read & maintain:
          .init = NULL,
          .cleanup = NULL,
          .configure = NULL,
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      8234a512
  34. 21 Aug, 2022 1 commit
    • yourtree's avatar
      Support setlocale via CONFIG operation. (#11059) · ca6aeadf
      yourtree authored
      
      
      Until now, Redis officially supported tuning the locale via an environment variable, see #1074.
      But we had other requests to allow changing it at runtime, see #799 and #11041.
      
      Note that `strcoll()` is used as Lua comparison function and also for comparison of
      certain string objects in Redis, which leads to a problem that, in different regions,
      for some characters, the result may be different. Below is an example.
      ```
      127.0.0.1:6333> SORT test alpha
      1) "<"
      2) ">"
      3) ","
      4) "*"
      127.0.0.1:6333> CONFIG GET locale-collate
      1) "locale-collate"
      2) ""
      127.0.0.1:6333> CONFIG SET locale-collate 1
      (error) ERR CONFIG SET failed (possibly related to argument 'locale')
      127.0.0.1:6333> CONFIG SET locale-collate C
      OK
      127.0.0.1:6333> SORT test alpha
      1) "*"
      2) ","
      3) "<"
      4) ">"
      ```
      That can cause accidental compatibility issues for Lua scripts and some
      Redis commands. This commit creates a new config parameter to control the
      locale environment, affecting only the `Collate` category. The above shows how it
      affects the `SORT` command, and below shows the influence on Lua scripts.
      ```
      127.0.0.1:6333> CONFIG GET locale-collate
      1) " locale-collate"
      2) "C"
      127.0.0.1:6333> EVAL "return ',' < '*'" 0
      (nil)
      127.0.0.1:6333> CONFIG SET locale-collate ""
      OK
      127.0.0.1:6333> EVAL "return ',' < '*'" 0
      (integer) 1
      ```
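
      Internally the config maps onto the standard setlocale() call restricted
      to the collation category; roughly (the helper name is illustrative):
      ```
      #include <locale.h>

      /* Apply only LC_COLLATE: "" means "use the process environment", while "C"
       * forces plain byte-wise ordering. */
      int applyLocaleCollate(const char *value) {
          if (setlocale(LC_COLLATE, value) == NULL)
              return 0;   /* unknown locale: the CONFIG SET should be rejected */
          return 1;
      }
      ```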
      Co-authored-by: calvincjli <calvincjli@tencent.com>
      Co-authored-by: Oran Agra <oran@redislabs.com>
      ca6aeadf
  35. 18 Jul, 2022 1 commit
    • ranshid's avatar
      Avoid using unsafe C functions (#10932) · eacca729
      ranshid authored
      replace use of:
      sprintf --> snprintf
      strcpy/strncpy  --> redis_strlcpy
      strcat/strncat  --> redis_strlcat
      
      **why are we making this change?**
      Much of the code uses some unsafe variants or deprecated buffer handling
      functions.
      While most cases are probably not presenting any issue on the known paths,
      programming errors and unterminated strings might lead to potential
      buffer overflows which are not covered by tests.
      
      **As part of this PR we change**
      1. added implementation for redis_strlcpy and redis_strlcat based on the strl implementation: https://linux.die.net/man/3/strl
      2. change all occurrences of use of sprintf with use of snprintf
      3. change occurrences of use of  strcpy/strncpy with redis_strlcpy
      4. change occurrences of use of strcat/strncat with redis_strlcat
      5. change the behavior of ll2string/ull2string/ld2string so that it will always place null
        termination ('\0') on the output buffer in the first index. this was done in order to make
        the use of these functions more safe in cases were the user will not check the output
        returned by them (for example in rdbRemoveTempFile)
      6. we added a compiler directive to issue a deprecation error in case a use of
        sprintf/strcpy/strcat is found during compilation which will result in error during compile time.
        However keep in mind that since the deprecation attribute is not supported on all compilers,
        this is expected to fail during push workflows.
      
      
      **NOTE:** this is only an initial milestone. We might also consider
      using the *_s implementations provided by the C11 Extensions (however not
      yet widely supported). I would also suggest starting to
      look at static code analyzers to track unsafe use cases.
      For example LLVM clang checker supports security.insecureAPI.DeprecatedOrUnsafeBufferHandling
      which can help locate unsafe function usage.
      https://clang.llvm.org/docs/analyzer/checkers.html#security-insecureapi-deprecatedorunsafebufferhandling-c
      The main reason not to onboard it at this stage is that the alternative
      expected by clang is to use the C11 extensions, which are not always
      supported by stdlib.
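
      As a usage illustration of the replacement functions (redis_strlcpy follows
      strlcpy semantics: copy at most size-1 bytes, always NUL-terminate, and
      return strlen(src) so truncation can be detected; the wrapper below is made up):
      ```
      #include <stddef.h>

      size_t redis_strlcpy(char *dst, const char *src, size_t dsize);

      void copyName(char *out, size_t outsize, const char *src) {
          /* Before: strcpy(out, src);  -- no bound, no guaranteed termination. */
          if (redis_strlcpy(out, src, outsize) >= outsize) {
              /* the source didn't fit; handle or log the truncation */
          }
      }
      ```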
      eacca729
  36. 20 Jun, 2022 1 commit
    • Tian's avatar
      Fsync directory while persisting AOF manifest, RDB file, and config file (#10737) · 99a425d0
      Tian authored
      The current process to persist files is: `write` the data, `fsync`, and `rename` the file.
      But an underlying problem is that the rename may be lost after a sudden crash, like a
      power outage, if the directory hasn't been persisted.
      
      The article [Ensuring data reaches disk](https://lwn.net/Articles/457667/) mentions
      a safe way to update file should be:
      
      1. create a new temp file (on the same file system!)
      2. write data to the temp file
      3. fsync() the temp file
      4. rename the temp file to the appropriate name
      5. fsync() the containing directory
      
      This commit handles CONFIG REWRITE, the AOF manifest, and the RDB file (both for persistence,
      and the one the replica gets from the master).
      It doesn't handle (yet) ACL SAVE and Cluster configs, since these don't yet follow this pattern.
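
      A hedged sketch of the five-step pattern above in plain POSIX C (Redis wraps
      the directory fsync in its own helper; the function below is illustrative):
      ```
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Atomically replace "final_path" inside "dir" so the rename survives a crash. */
      int safe_replace(const char *dir, const char *tmp_path, const char *final_path,
                       const void *data, size_t len) {
          int fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
          if (fd == -1) return -1;
          if (write(fd, data, len) != (ssize_t)len || fsync(fd) == -1) {
              close(fd);
              return -1;
          }
          close(fd);
          if (rename(tmp_path, final_path) == -1) return -1;

          /* The extra step from this commit: persist the directory entry itself. */
          int dirfd = open(dir, O_RDONLY);
          if (dirfd == -1) return -1;
          int ret = fsync(dirfd);
          close(dirfd);
          return ret;
      }
      ```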
      99a425d0
  37. 02 Jun, 2022 1 commit
    • zhaozhao.zz's avatar
      rewrite alias config to original name (#10811) · a18c91d6
      zhaozhao.zz authored
      
      
      Redis 7 adds some new alias configs, like `hash-max-listpack-entries` aliasing `hash-max-ziplist-entries`.
      
      If a config file contains both real name and alias like this:
      ```
      hash-max-listpack-entries 20
      hash-max-ziplist-entries 20
      ```
      
      after setting `hash-max-listpack-entries` to 100 and running `config rewrite`, the config file becomes:
      ```
      hash-max-listpack-entries 100
      hash-max-ziplist-entries 20
      ```
      
      we can see that the alias config is not modified, and users will get the wrong config after a restart.
      
      6.0 and 6.2 don't have this bug, since they only have the `slave`-word aliases.
      Co-authored-by: Oran Agra <oran@redislabs.com>
      a18c91d6