1. 24 May, 2023 1 commit
    • judeng's avatar
postpone the initialization of object's lru&lfu until it is added to the db as... · d71478a8
      judeng authored
postpone the initialization of object's lru&lfu until it is added to the db as a value object (#11626)
      
      This PR brings two performance benefits:
      1. It skips redundant initialization when most robj objects are created.
      2. LRU_CLOCK is no longer called in I/O threads, so we can avoid the `atomicGet`.
      
      Another code optimization:
      the redundant check in dbSetValue was removed; whether in LFU or LRU mode, the lru field of the old
      robj is always the freshest (it is always updated in lookupKey), so there is no need to branch on LFU.
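      As a rough illustration of the idea (a sketch only; the helper name initObjectLRUOrLFU and its exact
      placement are assumptions based on this description, and the snippet relies on Redis' internal
      definitions from server.h), the lru/lfu field is filled only when the object becomes a value in the db:
      ```c
      /* Illustrative sketch: initialize the lru/lfu field when the object is added
       * to the db as a value, instead of inside createObject(). */
      void initObjectLRUOrLFU(robj *o) {
          if (o->refcount == OBJ_SHARED_REFCOUNT) return;   /* shared objects are skipped */
          if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
              o->lru = (LFUGetTimeInMinutes() << 8) | LFU_INIT_VAL;
          } else {
              o->lru = LRU_CLOCK();   /* now only reached from the main thread */
          }
      }
      ```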
      d71478a8
  2. 12 Apr, 2023 1 commit
    • Oran Agra's avatar
      Attempt to solve MacOS CI issues in GH Actions (#12013) · 997fa41e
      Oran Agra authored
      The MacOS CI in GitHub Actions often hangs without any logs. GH argues that
      it's due to resource utilization: running out of disk space, memory, or CPU
      starvation, and thus the runner is terminated.
      
      This PR contains multiple attempts to resolve this:
      1. Introduce pause_process instead of SIGSTOP; it waits for the process to actually
        stop before resuming the test, possibly resolving race conditions in some tests.
        This was a suspect since one test could end up in an infinite loop in that case;
        in practice it didn't help, but it is still a good idea to keep.
      2. Disable the `save` config in many tests that don't need it, specifically ones that use
        heavy writes and could create large files.
      3. Change the `populate` proc to use a short pipeline rather than an infinite one.
      4. Use `--clients 1` in the macOS CI so that we don't risk running multiple
        resource-demanding tests in parallel.
      5. Allow `--verbose` to be repeated to elevate verbosity and print more info to stdout
        when a test or a server starts.
      997fa41e
  3. 22 Nov, 2022 1 commit
    • Binbin's avatar
      Make assert_refcount skip the OBJECT REFCOUNT check with needs:debug tag (#11487) · 543e0daa
      Binbin authored
      This PR adds `assert_refcount_morethan` and modifies `assert_refcount` to skip
      the `OBJECT REFCOUNT` check when the `needs:debug` flag is set. They are used to replace
      all `OBJECT REFCOUNT` calls, and tests/README is updated to be more specific.
      
      The reasoning is that some of these tests verify something important and only add a
      refcount check along the way; it would be a shame to skip the whole test just because
      the refcount functionality is missing or blocked. This is much like the fact that some
      Redis variants may not support DEBUG: we still want to run the majority of the test
      for coverage and just skip the digest match.
      543e0daa
  4. 18 Aug, 2022 1 commit
    • Meir Shpilraien (Spielrein)'s avatar
      Fix replication inconsistency on modules that use key space notifications (#10969) · 508a1388
      Meir Shpilraien (Spielrein) authored
      Fix replication inconsistency on modules that use key space notifications.
      
      ### The Problem
      
      In general, key space notifications are invoked after the command logic has
      executed (this is not always the case; specific commands that do not follow this
      rule are discussed later). For example, `set x 1` triggers a `set` notification
      that is invoked after the `set` logic has been performed, so if the notification
      logic tries to fetch `x`, it sees the new data that was written.
      Now consider a scenario in which the notification logic performs some write
      commands, for example incrementing a counter, `incr x{counter}`, indicating how
      many times `x` was changed. The logical order of execution is as follows:
      
      ```
      set x 1
      incr x{counter}
      ```
      
      The issue is that the `set x 1` command is added to the replication buffer
      at the end of the command invocation (specifically after the key space
      notification logic has been invoked and performed the `incr` command).
      The replication stream/AOF sees the commands in the wrong order:
      
      ```
      incr x{counter}
      set x 1
      ```
      
      In this specific example the order matters little, but if, for example, the
      notification had deleted `x`, we would end up with primary-replica inconsistency.
      
      ### The Solution
      
      Put the command that caused the notification in its rightful place. In the
      above example, the `set x 1` command logic was executed before the
      notification logic, so it should be added to the replication buffer before
      the commands invoked by the notification logic. To achieve this
      without a major code refactoring, we save a placeholder in the replication
      buffer; when the command logic finishes we check whether the command
      needs to be replicated, and if it does, we use the placeholder to add it to the
      replication buffer instead of appending it to the end.
      
      To stay efficient and avoid allocating memory on each command just to save the
      placeholder, the replication buffer array was modified to reuse memory
      (instead of allocating it each time we want to replicate commands).
      Also, to avoid saving a placeholder when it is not needed, we do it only for
      WRITE or MAY_REPLICATE commands. A sketch of the idea follows.
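      A minimal, self-contained sketch of the placeholder approach (simplified names, not the
      actual server internals):
      ```c
      /* Illustrative sketch: a placeholder slot is reserved before the command runs;
       * notification-triggered commands append after it, and the slot is filled once the
       * command finishes, preserving the logical order in the replication buffer. */
      #include <stdio.h>

      #define MAX_OPS 16

      typedef struct { char cmd[64]; } op;
      typedef struct { op ops[MAX_OPS]; int numops; } opArray;

      static int reserveSlot(opArray *a) {          /* placeholder, memory is reused */
          a->ops[a->numops].cmd[0] = '\0';
          return a->numops++;
      }

      static void fillSlot(opArray *a, int slot, const char *cmd) {
          snprintf(a->ops[slot].cmd, sizeof(a->ops[slot].cmd), "%s", cmd);
      }

      static void append(opArray *a, const char *cmd) {
          fillSlot(a, reserveSlot(a), cmd);
      }

      int main(void) {
          opArray repl = {0};
          int slot = reserveSlot(&repl);      /* reserved before running SET x 1 */
          append(&repl, "INCR x{counter}");   /* written by the notification logic */
          fillSlot(&repl, slot, "SET x 1");   /* filled after the command logic ran */
          for (int i = 0; i < repl.numops; i++) puts(repl.ops[i].cmd);
          return 0;                           /* prints SET x 1, then INCR x{counter} */
      }
      ```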
      
      #### Additional Fixes
      
      * Expire and Eviction notifications:
        * The logical order for Expire/Eviction was to first perform the Expire/Eviction
          and then the notification logic. The replication buffer got this the other
          way around (first the notification effect and then the `del` command).
          The PR fixes this issue.
        * The notification effect and the `del` command were not wrapped with
          `multi-exec` (when needed). The PR fixes this as well.
      * SPOP command:
        * On SPOP, the `spop` notification was fired before the command logic
          was executed. The change in this PR would have changed the replication
          order (first the `spop` command and then the notification logic)
          although the logical order is first the notification logic and then the
          `spop` logic. The right fix would have been to move the notification to
          fire after the command has executed (like all the other commands),
          but this can be considered a breaking change. To overcome this, the PR
          keeps the current behavior and changes the `spop` code to keep the right
          logical order when pushing commands to the replication buffer. Another PR
          will follow to fix SPOP properly and match it to the other commands (we
          split it into 2 separate PRs so it will be easy to cherry-pick this PR to 7.0 if
          we choose to).
      
      #### Unhandled Known Limitations
      
      * key miss event:
        * On a key miss event, if a module performs some write command during the
          event (using `RM_Call`), the `dirty` counter increases and the read
          command that caused the key miss event is replicated to the replication
          stream and AOF. This problem can also happen with a write command that opens
          some keys but eventually decides not to perform any action. We decided
          not to handle this problem in this PR because the solution is complex
          and would add risk in case we want to cherry-pick this PR.
          We should decide whether we want to handle it in future PRs. For now, module
          writers are advised not to perform any write commands on a key miss event.
      
      #### Testing
      
      * We already have tests covering cases where a notification invokes write
        commands that are also added to the replication buffer; the tests were modified
        to verify that the replica gets the commands in the correct logical order.
      * A test was added to verify that `spop` behavior was kept unchanged.
      * A test was added to verify that the key miss event behaves as expected.
      * A test was added to verify the changes do not break lazy expiration.
      
      #### Additional Changes
      
      * The `propagateNow` function can accept a special dbid, -1, indicating not
        to replicate `select`. We use this to replicate `multi/exec` in the `propagatePendingCommands`
        function. The side effect of this change is that the `select` command now
        appears inside the `multi/exec` block on the replication stream (instead of
        outside it). Tests were modified to match this new behavior.
      508a1388
  5. 05 Jan, 2022 1 commit
    • filipe oliveira's avatar
      Added INFO LATENCYSTATS section: latency by percentile distribution/latency by... · 5dd15443
      filipe oliveira authored
      
      Added INFO LATENCYSTATS section: latency by percentile distribution/latency by cumulative distribution of latencies (#9462)
      
      # Short description
      
      The Redis extended latency stats track per-command latencies and enable:
      - exporting the per-command percentile distribution via the `INFO LATENCYSTATS` command.
        **( percentile distribution is not mergeable between cluster nodes ).**
      - exporting the per-command cumulative latency distributions via the `LATENCY HISTOGRAM` command.
        Using the cumulative distribution of latencies we can merge several stats from different cluster nodes
        to calculate aggregate metrics.
      
      By default, the extended latency monitoring is enabled since the overhead of keeping track of the
      command latency is very small.
       
      If you don't want to track extended latency metrics, you can easily disable it at runtime using the command:
       - `CONFIG SET latency-tracking no`
      
      By default, the exported latency percentiles are the p50, p99, and p999.
      You can alter them at runtime using the command:
      - `CONFIG SET latency-tracking-info-percentiles "0.0 50.0 100.0"`
      
      
      ## Some details:
      - The total size per histogram should sit around 40 KiB. We only allocate those 40 KiB when a command
        is called for the first time.
      - With regard to the WRITE overhead: as seen below, there is no measurable overhead on the achievable
        ops/sec or on the full latency spectrum on the client. The measured redis-benchmark results for
        unstable vs. this branch are also included.
      - We track from 1 nanosecond to 1 second (everything above 1 second is considered +Inf).
      
      ## `INFO LATENCYSTATS` exposition format
      
         - Format: `latency_percentiles_usec_<CMDNAME>:p0=XX,p50....` 
      
      ## `LATENCY HISTOGRAM [command ...]` exposition format
      
      Return a cumulative distribution of latencies in the format of a histogram for the specified command names.
      
      The histogram is composed of a map of time buckets:
      - Each representing a latency range, between 1 nanosecond and roughly 1 second.
      - Each bucket covers twice the previous bucket's range.
      - Empty buckets are not printed.
      - Everything above 1 sec is considered +Inf.
      - At max there will be log2(1000000000)=30 buckets
      
      We reply with a map for each command in the format:
      `<command name> : { `calls`: <total command calls> , `histogram` : { <bucket 1> : latency , < bucket 2> : latency, ...  } }`
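      As a small illustration of the bucket layout described above (a standalone sketch, not the actual
      implementation), the doubling buckets from 1 nanosecond up to roughly 1 second can be enumerated
      like this:
      ```c
      /* Illustrative sketch: print the doubling latency buckets, from 1 nanosecond
       * up to roughly 1 second; everything above is folded into +Inf. */
      #include <stdio.h>

      int main(void) {
          long long upper = 1;                 /* bucket upper bound, in nanoseconds */
          int buckets = 0;
          while (upper < 1000000000LL) {       /* stop once we pass ~1 second */
              printf("bucket %2d: <= %lld ns\n", ++buckets, upper);
              upper *= 2;                      /* each bucket covers twice the range */
          }
          printf("bucket %2d: +Inf (above ~1 sec)\n", ++buckets);
          return 0;                            /* roughly 30 finite buckets plus +Inf */
      }
      ```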
      Co-authored-by: Oran Agra <oran@redislabs.com>
      5dd15443
  6. 22 Dec, 2021 1 commit
    • guybe7's avatar
      Sort out mess around propagation and MULTI/EXEC (#9890) · 7ac21307
      guybe7 authored
      The mess:
      Some parts use alsoPropagate for late propagation, while others use an immediate one (propagate()),
      causing edge cases, ugly/hacky code, and a tendency for bugs.
      
      The basic idea is that all commands are propagated via alsoPropagate (i.e. added to a list) and the
      top-most call() is responsible for going over that list and actually propagating them (and wrapping
      them in MULTI/EXEC if there's more than one command). This is done in the new function,
      propagatePendingCommands.
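      A simplified sketch of that flow (the real function operates on server state rather than a
      parameter, and names like multiArgv/execArgv are placeholders for this sketch):
      ```c
      /* Illustrative sketch: the top-most call() drains the pending list and wraps it
       * in MULTI/EXEC only when more than one command accumulated. */
      void propagatePendingCommandsSketch(redisOpArray *also_propagate) {
          if (also_propagate->numops == 0) return;

          int wrap = also_propagate->numops > 1;   /* a single command needs no MULTI */
          if (wrap) propagateNow(-1, multiArgv, 1, PROPAGATE_AOF|PROPAGATE_REPL);

          for (int j = 0; j < also_propagate->numops; j++) {
              redisOp *op = &also_propagate->ops[j];
              propagateNow(op->dbid, op->argv, op->argc, op->target);
          }

          if (wrap) propagateNow(-1, execArgv, 1, PROPAGATE_AOF|PROPAGATE_REPL);
          also_propagate->numops = 0;              /* the array memory is reused */
      }
      ```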
      
      Callers to propagatePendingCommands:
      1. top-most call() (we want all nested call()s to add to the also_propagate array and just the top-most
         one to propagate them) - via `afterCommand`
      2. handleClientsBlockedOnKeys: it is out of call() context and it may propagate stuff - via `afterCommand`. 
      3. handleClientsBlockedOnKeys edge case: if the looked-up key is already expired, we will propagate the
         expire but will not unblock any client so `afterCommand` isn't called. in that case, we have to propagate
         the deletion explicitly.
      4. cron stuff: active-expire and eviction may also propagate stuff
      5. modules: the module API allows to propagate stuff from just about anywhere (timers, keyspace notifications,
         threads). I could have tried to catch all the out-of-call-context places but it seemed easier to handle it in one
         place: when we free the context. in the spirit of what was done in call(), only the top-most freeing of a module
         context may cause propagation.
      6. modules: when using a thread-safe ctx it's not clear when/if the ctx will be freed. we do know that the module
         must lock the GIL before calling RM_Replicate/RM_Call so we propagate the pending commands when
         releasing the GIL.
      
      A "known limitation", which were actually a bug, was fixed because of this commit (see propagate.tcl):
         When using a mix of RM_Call with `!` and RM_Replicate, the command would propagate out-of-order:
         first all the commands from RM_Call, and then the ones from RM_Replicate
      
      Another thing worth mentioning is that if, in the past, a client would issue a MULTI/EXEC with just one
      write command the server would blindly propagate the MULTI/EXEC too, even though it's redundant.
      not anymore.
      
      This commit renames propagate() to propagateNow() in order to cause conflicts in pending PRs.
      propagatePendingCommands is the only caller of propagateNow, which is now a static, internal helper function.
      
      Optimizations:
      1. alsoPropagate will not add anything to also_propagate if there is no AOF and there are no replicas
      2. alsoPropagate reallocates also_propagate exponentially, to save calls to memmove
      
      Bugfixes:
      1. CONFIG SET can create evictions, sending notifications which can cause dirty++ with modules.
         We need to prevent it from propagating to AOF/replicas.
      2. We need to set current_client in RM_Call. Buggy scenario:
         - CONFIG SET maxmemory, eviction notifications, module hook calls RM_Call
         - assertion in lookupKey crashes, because current_client has CONFIG SET, which isn't CMD_WRITE
      3. Minor: in eviction, call propagateDeletion after the notification, like active-expire and all commands
         (we always send a notification before propagating the command)
      7ac21307
  7. 25 Oct, 2021 1 commit
    • Wang Yuan's avatar
      Replication backlog and replicas use one global shared replication buffer (#9166) · c1718f9d
      Wang Yuan authored
      ## Background
      On a Redis master, each replica keeps its own copy of the replication buffer, which is a big waste
      of memory: the more replicas, the more waste, and allocating/freeing memory for every reply list
      is also costly. If we set client-output-buffer-limit small and write traffic is heavy, the master
      may disconnect replicas and never finish synchronizing them. If we set client-output-buffer-limit
      big, the master may go OOM when many replicas each hold a lot of memory.
      Because the replication buffers of different replica clients are identical, a simple idea is to
      have all replicas share one replication buffer, which effectively saves memory.
      
      Since the replication backlog content is the same as the replicas' output buffers, we can now
      discard the dedicated replication backlog memory and use the global shared replication buffer
      to implement the replication backlog mechanism.
      
      ## Implementation
      I create one global "replication buffer" which contains the content of the replication stream.
      The structure of the "replication buffer" is similar to the reply list that exists in every client,
      but the list node is `replBufBlock`, which has `id`, `repl_offset` and `refcount` fields.
      ```c
      /* Replication buffer blocks are a list of replBufBlock.
       *
       * +--------------+       +--------------+       +--------------+
       * | refcount = 1 |  ...  | refcount = 0 |  ...  | refcount = 2 |
       * +--------------+       +--------------+       +--------------+
       *      |                                            /       \
       *      |                                           /         \
       *      |                                          /           \
       *  Repl Backlog                               Replica_A    Replica_B
       * 
       * Each replica or the replication backlog increments only the refcount of
       * the 'ref_repl_buf_node' it points to. So when a replica walks to the next
       * node, it first increases the next node's refcount, and when we trim the
       * replication buffer nodes, we always remove nodes from the head whose
       * refcount is 0. If the refcount of the head node is not 0, we must stop
       * trimming and never iterate to the next node. */
      
      /* Similar to 'clientReplyBlock', it is used for buffers shared between
       * all replica clients and the replication backlog. */
      typedef struct replBufBlock {
          int refcount;           /* Number of replicas or repl backlog using. */
          long long id;           /* The unique incremental number. */
          long long repl_offset;  /* Start replication offset of the block. */
          size_t size, used;
          char buf[];
      } replBufBlock;
      ```
      So now, when we feed the replication stream into the replication backlog and all replicas, we only
      need to feed the stream into the replication buffer via `feedReplicationBuffer`. In this function,
      we set some fields of the replication backlog and replicas to references of the global replication
      buffer blocks. We also need to check the replicas' output buffer limits and free them if they exceed
      `client-output-buffer-limit`, and trim the replication backlog if it exceeds `repl-backlog-size`.
      
      When sending replies to replicas, we also need to iterate the replication buffer blocks and send
      their content; once one block has been fully sent to a replica, we decrement the current node's
      refcount and increment the next node's refcount, and then free, from the head of the replication
      buffer blocks, any block whose refcount is 0.
      
      Since we now use a linked list to manage the replication backlog, iterating all the list nodes to
      find the corresponding replication buffer node could take a long time. So we create a rax tree to
      index some of the nodes, but to avoid the rax tree occupying too much memory, we only record one
      node per 64 as an index entry.
      
      Currently, to make partial resynchronization possible as much as we can, we always keep the
      replication backlog as the last reference to the replication buffer blocks; the backlog size may
      exceed our setting if slow replicas reference a large number of replication buffer blocks, but this
      method doesn't increase memory usage since they share the replication buffer. To avoid freezing the
      server while freeing unreferenced replication buffer blocks when the backlog exceeds the configured
      size, we trim the backlog incrementally (currently freeing 64 blocks per call), and do it faster in
      `beforeSleep` (freeing 640 blocks), as sketched below.
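      A minimal, self-contained sketch of that incremental head trimming (simplified types, not the
      actual server code):
      ```c
      /* Illustrative sketch: trim unreferenced blocks from the head of the shared
       * replication buffer list, stopping at the first block still in use and freeing
       * at most 'limit' blocks per call to avoid freezing the server. */
      #include <stdlib.h>

      typedef struct block {
          int refcount;             /* number of replicas / backlog referencing it */
          struct block *next;
      } block;

      size_t incrementalTrim(block **head, size_t limit) {
          size_t freed = 0;
          while (*head && (*head)->refcount == 0 && freed < limit) {
              block *b = *head;
              *head = b->next;      /* advance the head of the list */
              free(b);
              freed++;
          }
          return freed;             /* e.g. limit=64 per call, 640 in beforeSleep */
      }
      ```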
      
      ### Other changes
      - `mem_total_replication_buffers`: we add this field to the INFO command; it is the total
        memory used by replication buffers.
      - `mem_clients_slaves`: this may now be 0 even when a replica is slow to replicate and its output
        buffer memory is not 0. Since the replication backlog and replicas share one global replication
        buffer, only when the replication buffer memory exceeds the configured repl backlog size do we
        count the excess as the replicas' memory; otherwise we attribute the replication buffer memory
        to the repl backlog.
      - Key eviction
        Since all replicas and the replication backlog share the global replication buffer, we count only
        the part exceeding the backlog size as the extra, separate consumption of replicas.
        Because we trim the backlog incrementally in the background, its size may exceed our setting
        if slow replicas referencing a large number of replication buffer blocks disconnect.
        To avoid a massive eviction loop, we don't count the delayed-freed replication backlog as
        used memory even if there are no replicas, i.e. we also regard this memory as the replicas' memory.
      - `client-output-buffer-limit` check for replica clients
        It doesn't make sense to set the replica clients' output buffer limit lower than the repl-backlog-size
        config (partial sync would succeed and then the replica would get disconnected). Such a configuration
        is ignored (the size of repl-backlog-size is used instead). This has no memory consumption
        implications since the replica client shares the backlog buffer memory.
      - Drop replication backlog after loading data if needed
        We always create a replication backlog if the server is a master; we need it because we put DELs in
        it when loading expired keys from the RDB. But if the RDB doesn't have replication info, or there is
        no RDB, partial resynchronization is not possible, so we drop the backlog to avoid the extra memory.
      - Multi I/O threads
        Since all replicas and the replication backlog use the global replication buffer, if I/O threads are
        enabled, we must let the main thread handle sending the output buffer to all replicas to guarantee
        thread-safe data access. Previously, other I/O threads could also send the replicas' output buffers.
      
      ## Other optimizations
      This solution resolve some other problem:
      - When replicas disconnect with master since of out of output buffer limit, releasing the output
        buffer of replicas may freeze server if we set big `client-output-buffer-limit` for replicas, but now,
        it doesn't cause freezing.
      - This implementation may mitigate reply list copy cost time(also freezes server) when one replication
        has huge reply buffer and another replica can copy buffer for full synchronization. now, we just copy
        reference info, it is very light.
      - If we set replication backlog size big, it also may cost much time to copy replication backlog into
        replica's output buffer. But this commit eliminates this problem.
      - Resizing replication backlog size doesn't empty current replication backlog content.
      c1718f9d
  8. 07 Oct, 2021 1 commit
    • yoav-steinberg's avatar
      obuf based eviction tests run until eviction occurs (#9611) · 834e8843
      yoav-steinberg authored
      obuf based eviction tests run until eviction occurs instead of assuming a certain
      amount of writes will fill the obuf enough for eviction to occur.
      This handles the case where the kernel buffers the written data and empties the obuf
      even though no one actually reads from it.
      
      The tests have a new timeout of 20 sec: if a test doesn't pass after 20 sec it'll fail.
      Hopefully this is enough for our slow CI targets.
      
      This also eliminates the need to skip some tests in TLS.
      834e8843
  9. 05 Oct, 2021 1 commit
  10. 29 Sep, 2021 1 commit
  11. 26 Sep, 2021 1 commit
    • yoav-steinberg's avatar
      Client eviction ci issues (#9549) · 66002530
      yoav-steinberg authored
      Fixing CI test issues introduced in #8687
      - valgrind warnings in readQueryFromClient when the client was freed by processInputBuffer
      - adding DEBUG pause-cron so tests are not time dependent
      - skipping a test that depends on socket buffers / events not compatible with TLS
      - making sure the client got subscribed by not using a deferring client
      66002530
  12. 23 Sep, 2021 1 commit
    • yoav-steinberg's avatar
      Client eviction (#8687) · 2753429c
      yoav-steinberg authored
      
      
      ### Description
      A mechanism for disconnecting clients when the total memory used by all connected clients is above a
      configured limit. This prevents eviction or OOM caused by accumulated memory used
      across all clients. It complements the `client-output-buffer-limit` mechanism: instead of looking
      at a single client and only its output buffers, it takes into account all memory used by all clients.
      
      #### Design
      The general design is as follows:
      * We track the memory usage of each client, taking into account all memory used by the
        client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date
        after reading from the socket, after processing commands and after writing to the socket.
      * Based on the used memory we sort all clients into buckets. Each bucket contains all
        clients using up to 2x the memory of the clients in the bucket below it. For example, clients
        using up to 1MB, up to 2MB, up to 4MB, ... (see the sketch after this list).
      * Before processing a command and before sleep we check if we're over the configured
        limit. If we are, we start disconnecting clients from larger buckets downwards until we're
        under the limit.
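      A small, self-contained sketch of how a client's memory usage could map to one of those doubling
      buckets (the constants and names here are assumptions for illustration, not the actual server code):
      ```c
      /* Illustrative sketch: map a client's memory usage to one of the doubling
       * buckets described above (<64K, 64K..128K, ..., 2G..4G, and a top bucket). */
      #include <stdio.h>
      #include <stddef.h>

      #define MIN_BUCKET_SIZE (64*1024)   /* smallest bucket upper bound */
      #define MAX_BUCKET      20          /* assumed cap: very large clients land here */

      int memUsageBucket(size_t mem) {
          int bucket = 0;
          size_t limit = MIN_BUCKET_SIZE;
          while (mem >= limit && bucket < MAX_BUCKET) {
              limit *= 2;                 /* each bucket covers twice the range */
              bucket++;
          }
          return bucket;
      }

      int main(void) {
          printf("%d\n", memUsageBucket(10*1024));     /* 0: under 64K  */
          printf("%d\n", memUsageBucket(100*1024));    /* 1: 64K..128K  */
          printf("%d\n", memUsageBucket(3*1024*1024)); /* 6: 2M..4M     */
          return 0;
      }
      ```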
      
      #### Config
      `maxmemory-clients` is the maximum memory all clients are allowed to consume; above this threshold
      we disconnect clients.
      This config can either be set to 0 (meaning no limit), a size in bytes (possibly with an MB/GB
      suffix), or as a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%`
      would mean 10% of `maxmemory`).
      
      #### Important code changes
      * During development I encountered yet more situations where our io-threads access
        global vars and needed to fix them. I also had to handle keeping the clients sorted into the
        memory buckets (which are global) while their memory usage changes in the io-thread.
        To achieve this I decided to simplify how we check if we're in an io-thread and make it
        much more explicit. I removed the `CLIENT_PENDING_READ` flag used for checking
        if the client is in an io-thread (it wasn't used for anything else) and just used the global
        `io_threads_op` variable the same way to check during writes.
      * I optimized the cleanup of the client from the `clients_pending_read` list on client freeing.
        We now store a pointer in the `client` struct to this list so we don't need to search in it
        (`pending_read_list_node`).
      * Added `evicted_clients` stat to `INFO` command.
      * Added `CLIENT NO-EVICT ON|OFF` sub command to exclude a specific client from the
        client eviction mechanism. Added a corresponding 'e' flag in the client info string.
      * Added `multi-mem` field in the client info string to show how much memory is used up
        by buffered multi commands.
      * Client `tot-mem` now accounts for buffered multi-commands, pubsub patterns and
        channels (partially), tracking prefixes (partially).
      * CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function so
        clients will be disconnected between processing different clients and not only before sleep.
        This new function can be used in the future for work we want to do outside the command
        processing loop but don't want to wait for all clients to be processed before we get to it.
        Specifically I wanted to handle output-buffer-limit related closing before we process client
        eviction in case the two race with each other.
      * Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction
        buckets.
      * Each client now holds a pointer to the client eviction memory usage bucket it belongs to
        and listNode to itself in that bucket for quick removal.
      * Global `io_threads_op` variable now can contain a `IO_THREADS_OP_IDLE` value
        indicating no io-threading is currently being executed.
      * In order to track memory used by each clients in real-time we can't rely on updating
        these stats in `clientsCron()` alone anymore. So now I call `updateClientMemUsage()`
        (used to be `clientsCronTrackClientsMemUsage()`) after command processing, after
        writing data to pubsub clients, after writing the output buffer and after reading from the
        socket (and maybe other places too). The function is written to be fast.
      * Clients are evicted if needed (with appropriate log line) in `beforeSleep()` and before
        processing a command (before performing oom-checks and key-eviction).
      * All clients memory usage buckets are grouped as follows:
        * All clients using less than 64k.
        * 64K..128K
        * 128K..256K
        * ...
        * 2G..4G
        * All clients using 4g and up.
      * Added client-eviction.tcl with a bunch of tests for the new mechanism.
      * Extended maxmemory.tcl to test the interaction between maxmemory and
        maxmemory-clients settings.
      * Added an option to flag a numeric configuration variable as a "percent", meaning that
        if we encounter a '%' after the number in the config file (or config set command) we
        consider it valid. Such a number is stored internally as a negative value. This way an
        integer value can be interpreted as either a percent (negative) or an absolute value (positive).
        This is useful for example if some numeric configuration can optionally be set to a percentage
        of something else.
      Co-authored-by: default avatarOran Agra <oran@redislabs.com>
      2753429c
  13. 15 Jun, 2021 1 commit
    • sundb's avatar
      Fix the wrong resize of querybuf (#9003) · e5d8a5eb
      sundb authored
      The initial memory of `querybuf` is `PROTO_IOBUF_LEN (1024*16) * 2` (due to sdsMakeRoomFor being greedy); under `jemalloc`, the allocated memory will be 40k.
      This will most likely result in the `querybuf` being resized when `clientsCronResizeQueryBuffer` is called, unless the client sends requests fast enough.
      
      Note that this bug existed even before #7875, since the resizing condition includes the sds headers (32k+6).
      
      ## Changes
      1. Use non-greedy sdsMakeRoomFor when allocating the initial query buffer (of 16k).
      2. Also use non-greedy allocation when working with BIG_ARG (we won't use that extra space anyway).
      3. In case we did use a greedy allocation, read as much as we can into the buffer we got (including internal frag), to reduce system calls.
      4. Introduce a dedicated constant for the shrinking (same value as before).
      5. Add a test for querybuf.
      6. Improve a maxmemory test by ignoring the effect of replica query buffers (they can accumulate many ACKs on a slow env).
      7. Improve a maxmemory test by disabling slowlog (it causes slight memory growth on a slow env).
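      As a rough, self-contained illustration of the greedy vs. non-greedy allocation difference (a sketch
      only; the real change uses the sds allocation functions, not these stand-ins):
      ```c
      /* Illustrative sketch: greedy growth doubles the requested capacity (which is
       * what inflated the initial querybuf), while non-greedy growth reserves exactly
       * what was asked for. */
      #include <stdio.h>

      #define PROTO_IOBUF_LEN (1024*16)

      static size_t greedyAlloc(size_t used, size_t addlen) {
          size_t newlen = used + addlen;
          return newlen * 2;             /* sdsMakeRoomFor-style over-allocation */
      }

      static size_t nonGreedyAlloc(size_t used, size_t addlen) {
          return used + addlen;          /* reserve only what is needed */
      }

      int main(void) {
          printf("greedy:     %zu bytes\n", greedyAlloc(0, PROTO_IOBUF_LEN));    /* 32k */
          printf("non-greedy: %zu bytes\n", nonGreedyAlloc(0, PROTO_IOBUF_LEN)); /* 16k */
          return 0;
      }
      ```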
      e5d8a5eb
  14. 09 Jun, 2021 1 commit
    • Yossi Gottlieb's avatar
      Improve test suite to handle external servers better. (#9033) · 8a86bca5
      Yossi Gottlieb authored
      This commit revives and improves the ability to run the test suite against
      external servers, instead of launching and managing `redis-server` processes as
      part of the test fixture.
      
      This capability existed in the past, using the `--host` and `--port` options.
      However, it was quite limited and mostly useful when running a specific test.
      Attempting to run larger chunks of the test suite experienced many issues:
      
      * Many tests depend on being able to start and control `redis-server` themselves,
      and there's no clear distinction between external-server-compatible tests and other
      tests.
      * Cluster mode is not supported (resulting in `CROSSSLOT` errors).
      
      This PR cleans up many things and makes it possible to run the entire test suite
      against an external server. It also provides more fine grained controls to
      handle cases where the external server supports a subset of the Redis commands,
      limited number of databases, cluster mode, etc.
      
      The tests directory now contains a `README.md` file that describes how this
      works.
      
      This commit also includes additional cleanups and fixes:
      
      * Tests can now be tagged.
      * Tag-based selection is now unified across `start_server`, `tags` and `test`.
      * More information is provided about skipped or ignored tests.
      * Repeated patterns in tests have been extracted to common procedures, both at a
        global level and on a per-test file basis.
      * Cleaned up some cases where test setup was based on a previous test executing
        (a major anti-pattern that repeats itself in many places).
      * Cleaned up some cases where test teardown was not part of a test (in the
        future we should have dedicated teardown code that executes even when tests
        fail).
      * Fixed some tests that were flaky running on external servers.
      8a86bca5
  15. 08 Mar, 2021 1 commit
    • Yossi Gottlieb's avatar
      Fix flaky unit/maxmemory test on MacOS/BSD. (#8619) · 7d81f392
      Yossi Gottlieb authored
      It seems like non-Linux sockets may be less greedy, resulting in more
      transient client output buffers.
      
      This hasn't been proven, but empirically, stressing this test on
      non-Linux tends to exhibit increased mem_clients_normal values.
      7d81f392
  16. 08 Dec, 2020 1 commit
    • Oran Agra's avatar
      Improve stability of new CSC eviction test (#8160) · a102b21d
      Oran Agra authored
      c4fdf09c added a test that now fails with valgrind.
      It fails for two reasons:
      1) The test samples the used memory and then limits maxmemory to
         that value, but it turns out this is not atomic, and on slow machines
         the background cron process that cleans out old query buffers reduces
         the memory so that the setting doesn't cause eviction.
      2) The dbsize was tested late, after reading some invalidation messages;
         by that time more and more keys got evicted, partially draining the
         db. This is not the focus of this fix (still a known limitation).
      a102b21d
  17. 06 Dec, 2020 2 commits
    • Oran Agra's avatar
      prevent client tracking from causing feedback loop in performEvictions (#8100) · c4fdf09c
      Oran Agra authored
      When client tracking is enabled, signalModifiedKey can increase memory usage;
      this can cause the loop in performEvictions to keep running since it was measuring
      the memory usage impact of signalModifiedKey.
      
      The section that measures the memory impact of the eviction should cover just dbDelete,
      excluding keyspace notifications, client tracking, and propagation to AOF and replicas.
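      A rough sketch of the intended measurement scope (simplified and not the exact server code; the
      ordering and surrounding loop are assumptions based on the description above):
      ```c
      /* Illustrative sketch: in the eviction loop, measure the memory delta around the
       * key deletion alone, so side effects don't feed back into the accounting. */
      long long delta = (long long) zmalloc_used_memory();
      if (server.lazyfree_lazy_eviction)
          dbAsyncDelete(db, keyobj);
      else
          dbSyncDelete(db, keyobj);
      delta -= (long long) zmalloc_used_memory();   /* memory reclaimed by the delete */
      mem_freed += delta;
      /* Keyspace notification, client tracking (signalModifiedKey) and propagation to
       * AOF/replicas happen after this point, outside the measured section. */
      signalModifiedKey(NULL, db, keyobj);
      notifyKeyspaceEvent(NOTIFY_EVICTED, "evicted", keyobj, db->id);
      ```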
      
      This resolves part of the problem described in #8069
      p.s. fix took 1 minute, test took about 3 hours to write.
      c4fdf09c
    • Wang Yuan's avatar
      Limit the main db and expires dictionaries to expand (#7954) · 75f9dec6
      Wang Yuan authored
      As we know, Redis may reject users' requests or evict some keys if
      used memory is over maxmemory. Dictionary expansion may make
      things worse: some big dictionaries, such as the main db and the expires dict,
      may eat huge amounts of memory at once when allocating a new big hash table, and
      end up far above maxmemory after expanding.
      There are related issues: #4213 #4583
      
      In more detail, when a dict expands in Redis, we allocate a new big
      ht[1] that is generally double the size of ht[0], so the size of ht[1] will be
      very big if ht[0] is already big. For the db dict, if we have more than
      64 million keys, ht[1] costs 1GB when the dict expands.
      
      If the sum of the used memory and the new hash table the dict needs exceeds
      maxmemory, we shouldn't allow the dict to expand. If keys
      eviction is enabled, we still couldn't add many more keys after
      eviction and rehashing; what's worse, Redis would keep fewer keys when it
      only has a little memory left for storing the new hash table instead
      of users' data. Moreover, users can't write data to Redis at all if keys eviction is disabled.
      
      What this commit changed:
      
      Add a new member function, expandAllowed, to the dict type; it provides a way
      for the caller to allow the expansion or not. We expose two parameters for this
      function: the additional memory needed for expanding and the dict's current load factor;
      users can implement a function that makes the decision based on them.
      The main db dict and expires dict may be very big and cost huge memory to expand,
      so we implement a judgement function: we provisionally stop the dict from expanding
      if used memory would be over maxmemory after the dict expands, but to guarantee the
      performance of Redis, we still allow the dict to expand if its load factor exceeds the
      safe load factor.
      Test cases were added to verify we don't allow the main db to expand when there is not
      enough memory left, so that keys eviction is avoided. A sketch of such a judgement
      function follows.
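      A minimal sketch of the judgement callback described above (constant values and helper names are
      illustrative stand-ins for this sketch, not necessarily the exact ones in the source):
      ```c
      /* Illustrative sketch: decide whether a dict may expand, given the extra memory
       * the new hash table needs and the dict's current load factor. */
      #include <stddef.h>

      #define SAFE_LOAD_FACTOR 1.618   /* assumed "safe" load factor for this sketch */

      /* Stub for illustration: would return 1 if allocating 'moreMem' bytes pushes
       * the server above maxmemory, 0 otherwise. */
      static int overMaxmemoryAfterAlloc(size_t moreMem) { (void)moreMem; return 0; }

      /* Return 1 to allow the expansion, 0 to postpone it. */
      int dictExpandAllowedSketch(size_t moreMem, double usedRatio) {
          if (usedRatio <= SAFE_LOAD_FACTOR) {
              /* Still healthy: postpone the expansion if it would exceed maxmemory. */
              return !overMaxmemoryAfterAlloc(moreMem);
          }
          /* Load factor is already past the safe threshold: expand anyway to
           * protect lookup performance. */
          return 1;
      }
      ```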
      
      Other changes:
      
      Regarding the new hash table size on expand: before this commit, the size was
      double the dict's used count, passed through _dictNextPower. We actually aim to
      keep the dict load factor between 0.5 and 1.0. Now we replace *2 with
      +1; since the first check is that used >= size, the outcome
      is usually the same as _dictNextPower(used+1). The only case where
      it differs is when dict_can_resize is false during fork, so that later
      _dictNextPower(used*2) would cause the dict to jump to *4 (e.g.
      _dictNextPower(1025*2) returns 4096).
      Rehash test cases were fixed due to the changed algorithm for the new hash table size
      on expand.
      75f9dec6
  18. 05 May, 2019 1 commit
    • Oran Agra's avatar
      make replication tests more stable on slow machines · ba809f26
      Oran Agra authored
      Solving a few race conditions in replication-related tests which fail on slow machines.
      
      Bugfix in the slave buffers test: since the test is executed twice, each time with
      a different command count, the threshold for the delta can't be a constant.
      ba809f26
  19. 11 Sep, 2018 1 commit
  20. 21 Aug, 2018 1 commit
    • Oran Agra's avatar
      Fix unstable tests on slow machines. · c8452ab0
      Oran Agra authored
      A few tests had borderline thresholds that were adjusted.
      
      The slave buffers test had two issues preventing the slave buffer from growing:
      1) The slave didn't necessarily go to sleep on time, or woke up too early;
         we now use SIGSTOP to make sure it goes to sleep exactly when we want.
      2) The master disconnected the slave on timeout.
      c8452ab0
  21. 24 Jul, 2018 1 commit
    • Oran Agra's avatar
      fix slave buffer test suite false positives · d4ae76d1
      Oran Agra authored
      It looks like on slow machines we're getting:
      [err]: slave buffer are counted correctly in tests/unit/maxmemory.tcl
      Expected condition '$slave_buf > 2*1024*1024' to be true (16914 > 2*1024*1024)
      
      This is a result of the slave waking up too early and consuming the
      slave buffer before the traffic and the test end.
      d4ae76d1
  22. 16 Jul, 2018 1 commit
    • Oran Agra's avatar
      slave buffers were wasteful and incorrectly counted causing eviction · bf680b6f
      Oran Agra authored
      A) Slave buffers didn't count internal fragmentation and sds unused space;
         this caused them to induce eviction although we didn't mean for it.
      
      B) Slave buffers were consuming about twice the memory they actually needed.
      - This was mainly due to sdsMakeRoomFor growing to twice as much as needed each time,
        while networking.c doesn't store more than 16k (partially fixed recently in 237a38737).
      - Besides, it wasn't able to store half of a new string in one buffer and the
        other half in the next (so the above-mentioned fix helped mainly for small items).
      - Lastly, the sds buffers had up to 30% internal fragmentation that was wasted:
        consumed but not used.
      
      C) Inefficient performance due to starting from a small string and reallocating many times.
      
      What I changed:
      - Create dedicated buffers for the reply list, counting their size with zmalloc_size.
      - When creating a new reply node, preallocate it to at least 16k.
      - When appending a new reply to the buffer, first fill all the unused space of the
        previous node before starting a new one (see the sketch below).
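      A minimal, self-contained sketch of that append strategy (simplified; the real code works on the
      client reply list with zmalloc-backed nodes):
      ```c
      /* Illustrative sketch: append a reply by first filling the tail node's unused
       * space, then placing the remainder in a new node of at least 16k. */
      #include <stdlib.h>
      #include <string.h>

      #define REPLY_NODE_MIN (16*1024)

      typedef struct replyNode {
          size_t size, used;
          struct replyNode *next;
          char buf[];
      } replyNode;

      static replyNode *newNode(size_t len) {
          size_t size = len > REPLY_NODE_MIN ? len : REPLY_NODE_MIN;
          replyNode *n = malloc(sizeof(replyNode) + size);
          n->size = size; n->used = 0; n->next = NULL;
          return n;
      }

      /* Returns the (possibly new) tail of the reply list. */
      replyNode *appendReply(replyNode *tail, const char *data, size_t len) {
          size_t avail = tail ? tail->size - tail->used : 0;
          size_t copy = len < avail ? len : avail;
          if (copy) {                               /* fill the previous node first */
              memcpy(tail->buf + tail->used, data, copy);
              tail->used += copy;
          }
          if (copy < len) {                         /* the remainder goes to a new node */
              replyNode *n = newNode(len - copy);
              memcpy(n->buf, data + copy, len - copy);
              n->used = len - copy;
              if (tail) tail->next = n;
              tail = n;
          }
          return tail;
      }
      ```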
      
      Other changes:
      - Expose the mem_not_counted_for_evict info field for the benefit of the test suite.
      - Add a test to make sure slave buffers are counted correctly and that they don't cause eviction.
      bf680b6f
  23. 15 Mar, 2017 1 commit
  24. 29 Sep, 2014 1 commit
  25. 18 Jul, 2014 1 commit
  26. 28 Jul, 2011 1 commit