1. 27 Oct, 2020 7 commits
  2. 24 Apr, 2020 1 commit
  3. 23 Apr, 2020 2 commits
    • optimize the output of cluster slots · 7a62eb96
      yanhui13 authored
    • Check OOM at script start to get stable lua OOM state. · 0efb93d0
      srzhao authored
      Checking OOM with `getMaxMemoryState` inside a script might give a different
      result than `freeMemoryIfNeededAndSafe` at script start, because the Lua stack
      and arguments also consume memory.
      
      This leads to a `borderline` condition when memory usage grows close to server.maxmemory:
      
      - `freeMemoryIfNeededAndSafe` at script start detects no OOM, no memory freed
      - `getMaxMemoryState` inside script detects OOM, script aborted
      
      We solve this 'borderline' issue by saving the OOM state at script start, so
      the script sees a stable Lua OOM state.
      
      Related to issues #6565 and #5250.
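      Below is a minimal, self-contained C sketch of the idea, not the actual Redis
      patch: `over_maxmemory`, `script_run` and `script_allow_write` are illustrative
      stand-ins for `getMaxMemoryState` and the script machinery. The point is that the
      OOM verdict is computed once at script start and reused for every write the script
      performs, so memory consumed afterwards by the Lua stack and arguments cannot flip
      the decision mid-script.
      
      ```c
      /* Sketch only: capture the OOM verdict once at script start and reuse it. */
      #include <stdio.h>
      #include <stdbool.h>
      
      static size_t used_memory = 90;      /* stand-ins for real memory accounting */
      static const size_t maxmemory = 100;
      
      /* Stand-in for the real maxmemory check: true when over the limit. */
      static bool over_maxmemory(void) { return used_memory >= maxmemory; }
      
      struct script_run {
          bool oom_at_start;               /* verdict captured before execution */
      };
      
      static void script_start(struct script_run *run) {
          run->oom_at_start = over_maxmemory();
      }
      
      /* Called for each write command issued from the script. */
      static bool script_allow_write(const struct script_run *run) {
          return !run->oom_at_start;       /* stable for the whole script */
      }
      
      int main(void) {
          struct script_run run;
          script_start(&run);
      
          /* Simulate the Lua stack/arguments pushing usage over the limit:
           * the verdict taken at script start still applies. */
          used_memory = 105;
          printf("write allowed: %s\n", script_allow_write(&run) ? "yes" : "no");
          return 0;
      }
      ```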
  4. 17 Apr, 2020 2 commits
  5. 08 Apr, 2020 1 commit
  6. 12 Mar, 2020 1 commit
  7. 11 Mar, 2020 1 commit
  8. 05 Mar, 2020 18 commits
  9. 12 Feb, 2020 1 commit
  10. 08 Jan, 2020 1 commit
  11. 19 Nov, 2019 5 commits
    • Redis 5.0.7. · 4891612b
      antirez authored
    • RED-31295 - redis: avoid race between dlopen and thread creation · 9f63fc98
      Oran Agra authored
      It seems that since I added the creation of the jemalloc thread, Redis
      sometimes fails to start with the following error:
      
      Inconsistency detected by ld.so: dl-tls.c: 493: _dl_allocate_tls_init: Assertion `listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed!
      
      This seems to be due to a race bug in ld.so, in which TLS creation on the new
      thread collides with dlopen.
      
      Move the creation of BIO and jemalloc threads to after modules are loaded.
      
      Plus a small bugfix when trying to disable the jemalloc thread at runtime.
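      A hedged sketch of the initialization-order idea follows; it assumes nothing
      about the exact Redis functions involved. All dlopen() calls for modules are
      finished before any background thread (BIO, jemalloc) is spawned, so the
      thread's TLS setup in ld.so can no longer race with dlopen. The module path
      is hypothetical. Build with `gcc race.c -lpthread -ldl`.
      
      ```c
      #include <dlfcn.h>
      #include <pthread.h>
      #include <stdio.h>
      
      static void *background_worker(void *arg) {
          (void)arg;
          /* BIO lazy-free / jemalloc purge style work would run here. */
          return NULL;
      }
      
      static void load_modules(const char **paths, int n) {
          for (int i = 0; i < n; i++) {
              void *h = dlopen(paths[i], RTLD_NOW | RTLD_LOCAL);
              if (h == NULL)
                  fprintf(stderr, "dlopen %s failed: %s\n", paths[i], dlerror());
          }
      }
      
      int main(void) {
          /* Hypothetical module path, for illustration only. */
          const char *modules[] = { "./mymodule.so" };
      
          /* 1. Load every module first: all dlopen() calls happen here,
           *    while the process is still single-threaded. */
          load_modules(modules, 1);
      
          /* 2. Only now create background threads (BIO, jemalloc, ...):
           *    their TLS initialization cannot race with dlopen anymore. */
          pthread_t tid;
          pthread_create(&tid, NULL, background_worker, NULL);
          pthread_join(tid, NULL);
          return 0;
      }
      ```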
    • Cluster: fix memory leak of cached master. · 1a9e70c1
      antirez authored
      This is what happened:
      
      1. The instance starts as a slave in the cluster configuration, but
      server.masterhost is not actually set, so technically the instance
      is acting like a master.
      
      2. loadDataFromDisk() calls replicationCacheMasterUsingMyself() even if
      the instance is a master, because it is logically a slave and cluster
      mode is enabled. So now we have a cached master even though the instance
      is practically configured as a master (from the point of view of the
      server.masterhost value and so forth).
      
      3. clusterCron() sees that the instance needs to replicate from its
      master, because logically it is a slave, so it calls
      replicationSetMaster(), which in turn calls
      replicationCacheMasterUsingMyself(): before this commit, this call would
      overwrite the old cached master, creating a memory leak.
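      A self-contained sketch of the leak and the fix; cache_master_using_myself,
      create_client and free_client are illustrative stand-ins, not the actual Redis
      functions. The fix amounts to releasing any previously cached master before
      installing a new one, instead of silently overwriting the pointer.
      
      ```c
      #include <stdlib.h>
      #include <stdio.h>
      
      typedef struct client { char *buf; } client;
      
      static client *cached_master = NULL;
      
      static client *create_client(void) {
          client *c = malloc(sizeof(*c));
          c->buf = malloc(1024);
          return c;
      }
      
      static void free_client(client *c) {
          free(c->buf);
          free(c);
      }
      
      /* Fixed version: release the old cached master instead of overwriting it. */
      static void cache_master_using_myself(void) {
          if (cached_master != NULL) {
              free_client(cached_master);   /* before the fix this was skipped -> leak */
              cached_master = NULL;
          }
          cached_master = create_client();
      }
      
      int main(void) {
          cache_master_using_myself();      /* first call, e.g. at load time */
          cache_master_using_myself();      /* second call, e.g. from the cluster path */
          printf("no leak: old cached master was freed before replacement\n");
          free_client(cached_master);
          return 0;
      }
      ```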
    • Fix usage of server.stream_node_max_* · 69b1b5be
      Guy Benoish authored
    • Update mkreleasehdr.sh · 1fd97ee7
      喜欢兰花山丘 authored
      Fix `date +%s` errata.