- 18 Nov, 2019 1 commit
antirez authored

- 15 Nov, 2019 1 commit
antirez authored

- 14 Nov, 2019 1 commit
antirez authored

- 06 Nov, 2019 1 commit
antirez authored
One problem with the solution proposed so far in #6537 is that key lookups outside of a command execution via call() still used a cached time. The cached time needed to be refreshed in multiple places, especially because of module callbacks from timers, the cluster bus, and thread safe contexts, which may use RM_Open(). To avoid this problem, this commit introduces the ability to detect whether we are inside call(): this way we can use the fixed time reference only in the context of a command execution or Lua script, while asynchronous lookups can still use mstime() to get a fresh time reference.
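
Sketched in C, the mechanism could look like the following; the names (a fixed_time_expire counter, a cached mstime field) are illustrative, not necessarily the actual identifiers:

```c
#include <sys/time.h>

typedef long long mstime_t;

/* Illustrative server state: a nesting counter plus a cached clock. */
static struct {
    int fixed_time_expire;   /* > 0 while inside call() or a Lua script. */
    mstime_t mstime;         /* Cached time, refreshed on entry to call(). */
} server;

/* Fresh wall-clock time in milliseconds. */
static mstime_t mstime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (mstime_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* call() brackets command execution: refresh the cached clock once,
 * then mark that lookups should use the fixed reference. */
static void enterCall(void) { server.mstime = mstime(); server.fixed_time_expire++; }
static void leaveCall(void) { server.fixed_time_expire--; }

/* Expiration checks pick their clock based on context: a fixed
 * reference inside call() or Lua, a fresh reading for asynchronous
 * lookups (module timers, cluster bus, thread safe contexts that
 * call RM_Open()). */
static mstime_t expireClock(void) {
    return server.fixed_time_expire > 0 ? server.mstime : mstime();
}
```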

- 05 Nov, 2019 2 commits
antirez authored
After the thread in #6537, and thanks to the suggestions received, this commit updates the original patch in order to: 1. Solve the problem of updating the time in multiple places by updating it in call(). 2. Avoid introducing a new field and instead use our cached time. This required some minor refactoring of the function that updates the time, and the introduction of a new cached time in microseconds, in order to make fewer gettimeofday() calls.
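
A minimal sketch of the idea, with assumed names (ustime/mstime caches and an updateCachedTime() helper): one gettimeofday() call feeds both caches.

```c
#include <sys/time.h>

typedef long long ustime_t;
typedef long long mstime_t;

/* Illustrative cached clocks: the new microsecond cache lets the
 * millisecond one be derived from it. */
static struct {
    ustime_t ustime;   /* Cached time in microseconds. */
    mstime_t mstime;   /* Cached time in milliseconds. */
} server;

/* One gettimeofday() call refreshes both caches. call() invokes this
 * once per command, so every lookup during the command sees the same
 * time without extra system calls. */
static void updateCachedTime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    server.ustime = (ustime_t)tv.tv_sec * 1000000 + tv.tv_usec;
    server.mstime = server.ustime / 1000;
}
```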

zhaozhao.zz authored
Calling lookupKey*() multiple times while searching for a key in one command may return different results. That's because lookupKey*() calls expireIfNeeded(), which deletes the key once its expire time is reached. So we can get an robj before the expire time, but NULL after it. The worst case is that this may lead to a Redis crash, for example with `RPOPLPUSH foo foo`: the first time we get a list from `foo` and hold the pointer, but when we look up `foo` again it's expired and deleted. Now we hold freed memory, and when rpoplpushHandlePush() executes, Redis crashes. To fix it, we can refactor the judgment about whether a key is expired to use the same basetime `server.cmd_start_mstime` instead of calling mstime() every time.
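
A sketch of the fixed-basetime check, using the field name mentioned above (server.cmd_start_mstime) with everything else illustrative:

```c
typedef long long mstime_t;

/* Illustrative: recorded once when the command starts executing. */
static struct { mstime_t cmd_start_mstime; } server;

/* All lookupKey*() calls made while one command runs now agree on
 * whether a key is expired, because they compare the expire time
 * against the same basetime instead of sampling mstime() anew. */
static int keyIsExpired(mstime_t when) {
    if (when < 0) return 0;                 /* No TTL on this key. */
    return server.cmd_start_mstime > when;  /* Stable for the whole command. */
}
```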

- 29 Oct, 2019 1 commit
Oran Agra authored
* Replication hooks: role change, master link status, replica online/offline.
* Persistence hooks: saving, loading, loading progress.
* Misc hooks: cron loop, shutdown, module loaded/unloaded.
* Change the way hook tests work, and add tests for all of the above.
startLoading() now gets a flag indicating what is being loaded. stopLoading() now gets an indication of success or failure. Adding startSaving() and stopSaving() with similar args and roles.
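
From a module's point of view, such hooks are consumed by subscribing a callback to a server event. A minimal sketch, assuming the RedisModule_SubscribeToServerEvent API and the replication role-change event; the module name and callback body are illustrative:

```c
#include "redismodule.h"

/* Called whenever this instance's replication role changes; the
 * subevent encodes the new role. */
static void roleChangedCallback(RedisModuleCtx *ctx, RedisModuleEvent e,
                                uint64_t sub, void *data) {
    REDISMODULE_NOT_USED(e);
    REDISMODULE_NOT_USED(data);
    RedisModule_Log(ctx, "notice", "role changed, subevent %llu",
                    (unsigned long long)sub);
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "hooks_demo", 1, REDISMODULE_APIVER_1)
        == REDISMODULE_ERR) return REDISMODULE_ERR;
    /* Subscribe to replication role changes. */
    if (RedisModule_SubscribeToServerEvent(ctx,
            RedisModuleEvent_ReplicationRoleChanged,
            roleChangedCallback) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```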

- 28 Oct, 2019 1 commit
zhaozhao.zz authored
As we know, if a module exports module-side data types, unloading it is not allowed. The same rule applies to blocked clients in a module: we use background threads to implement module blocked clients, and it's not safe to unload a module while background threads are still running. So it's necessary to check whether any blocked clients of a module are running before unloading it. Moreover, after that we can ensure that if no modules are loaded, there are no module blocked clients, even after a module is unloaded. So we can call moduleHandleBlockedClients() only when modules are installed.
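
A sketch of the unload-time check, with illustrative structure and field names (a per-module count of in-flight blocked clients):

```c
/* Illustrative: each module tracks how many of its blocked clients
 * are still being handled by background threads. */
struct RedisModule {
    char *name;
    int blocked_clients;   /* Unload is unsafe while this is nonzero. */
};

/* Refuse to unload while background threads may still touch the module. */
static int moduleUnload(struct RedisModule *module, const char **errmsg) {
    if (module->blocked_clients > 0) {
        *errmsg = "the module has blocked clients; cannot unload";
        return 0;   /* Failure: retry once the clients are unblocked. */
    }
    /* ... proceed with dlclose() and cleanup ... */
    return 1;
}
```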

- 24 Oct, 2019 1 commit
antirez authored

- 15 Oct, 2019 1 commit
Yossi Gottlieb authored

- 10 Oct, 2019 1 commit
antirez authored
This is what happened:
1. The instance starts, and is a slave in the cluster configuration, but server.masterhost is not actually set, so technically the instance is acting like a master.
2. loadDataFromDisk() calls replicationCacheMasterUsingMyself() even if the instance is a master, in the case where it is logically a slave and cluster mode is enabled. So now we have a cached master even though the instance is practically configured as a master (from the point of view of server.masterhost and so forth).
3. clusterCron() sees that the instance needs to replicate from its master, because logically it is a slave, so it calls replicationSetMaster(), which in turn calls replicationCacheMasterUsingMyself(): before this commit, that call would overwrite the old cached master, creating a memory leak.
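
A sketch of the resulting guard, under the assumption that the fix simply discards any previously cached master before caching a new one (state and helpers condensed):

```c
/* Illustrative sketch: never clobber an already cached master. */
typedef struct client client;

static struct { client *cached_master; } server;

static void replicationDiscardCachedMaster(void) {
    /* The real code frees the cached master client and related state. */
    server.cached_master = NULL;
}

void replicationCacheMasterUsingMyself(void) {
    /* Before the fix, a second call (e.g. from clusterCron() via
     * replicationSetMaster()) would overwrite the pointer set during
     * loadDataFromDisk(), leaking the old cached master. */
    if (server.cached_master != NULL) replicationDiscardCachedMaster();
    /* ... allocate and store the new cached master ... */
}
```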

- 09 Oct, 2019 1 commit
omg-by authored

- 07 Oct, 2019 5 commits
Yossi Gottlieb authored
Add configuration options for TLS protocol versions, cipher/cipher suite selection, and so forth.
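
For illustration, the kind of redis.conf directives involved; the values below are placeholders, and exact names and defaults should be checked against the TLS documentation:

```
tls-port 6379
tls-cert-file redis.crt
tls-key-file redis.key
tls-ca-cert-file ca.crt
tls-protocols "TLSv1.2 TLSv1.3"
tls-ciphers DEFAULT:!MEDIUM
tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256
```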

Oran Agra authored

Oran Agra authored
Misc:
- Handle SSL_has_pending by iterating through such connections in beforeSleep, and passing a timeout of 0 to aeProcessEvents.
- Fix an issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed).
- Add a key-load-delay config for testing.
- Trim connShutdown, which is no longer needed.
- rioFdsetWrite -> rioFdWrite: simplified, since there is no longer a need to write to multiple FDs.
- Don't detect that the rdb child exited (don't call wait3) until we detect that the pipe is closed.
- Clean up a bad optimization from rio.c, and add another one.

Yossi Gottlieb authored
* Introduce a connection abstraction layer for all socket operations and integrate it across the code base.
* Provide an optional TLS connection implementation based on OpenSSL.
* Pull a newer version of hiredis with TLS support.
* Tests and redis-cli updates for TLS support.
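
The core of such an abstraction is a per-transport dispatch table; a condensed sketch with illustrative fields, modeled on a typical connection.h layout rather than a verbatim copy:

```c
#include <stddef.h>

typedef struct connection connection;

/* Each transport (plain TCP, TLS) supplies its own implementation of
 * these operations; callers never touch the socket or SSL object
 * directly. */
typedef struct ConnectionType {
    int (*connect)(connection *conn, const char *addr, int port);
    int (*write)(connection *conn, const void *data, size_t len);
    int (*read)(connection *conn, void *buf, size_t len);
    void (*close)(connection *conn);
} ConnectionType;

struct connection {
    ConnectionType *type;   /* Dispatch table: TCP or TLS. */
    int fd;
    void *private_data;     /* e.g. the SSL object for TLS connections. */
};

/* Generic wrappers used across the code base. */
static inline int connWrite(connection *conn, const void *data, size_t len) {
    return conn->type->write(conn, data, len);
}
static inline int connRead(connection *conn, void *buf, size_t len) {
    return conn->type->read(conn, buf, len);
}
```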

Oran Agra authored
cluster.c - stack buffer memory alignment: the pointer 'buf' is cast to a more strictly aligned pointer type.
evict.c - lazyfree_lazy_eviction always called.
defrag.c - bug in dead code.
server.c - casting was missing parentheses.
rax.c - indentation / newline suggested an 'else if' was intended.

- 02 Oct, 2019 2 commits
Oran Agra authored
It seems that since I added the creation of the jemalloc thread, Redis sometimes fails to start with the following error: Inconsistency detected by ld.so: dl-tls.c: 493: _dl_allocate_tls_init: Assertion `listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed! This seems to be due to a race bug in ld.so, in which TLS creation on the thread collides with dlopen(). Move the creation of the BIO and jemalloc threads to after modules are loaded. Plus a small bugfix when trying to disable the jemalloc thread at runtime.

antirez authored

- 30 Sep, 2019 2 commits

- 27 Sep, 2019 3 commits
antirez authored
We don't want the API to be usable directly in an unsafe way, without checking whether there is an active child. Now the safety checks are moved directly into the functions performing the operations.

antirez authored

antirez authored
We can't expect SIGUSR1 to have any specific value range, so let's define an exit code that we can handle in a special way. This also fixes an #include <wait.h> that is not standard.
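
A sketch of the idea; the constant's name and value are illustrative:

```c
#include <unistd.h>
#include <sys/wait.h>   /* The portable header; plain <wait.h> is not standard. */

/* A reserved exit code the parent can recognize, since we cannot rely
 * on SIGUSR1 mapping to any particular exit status across platforms. */
#define SERVER_CHILD_NOERROR_RETVAL 255

void exitFromChild(int retcode) {
    /* _exit() skips atexit() handlers, which must not run in the child. */
    _exit(retcode);
}
```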

- 26 Sep, 2019 1 commit
Oran Agra authored
And add a test for that.

- 25 Sep, 2019 2 commits

- 23 Sep, 2019 1 commit
Mike A. Owens authored
SipHash expects a 128-bit key, and we were indeed generating 128 bits, but restricting them to the hex characters 0-9a-f, effectively giving us only 4 bits per byte of key material, and 64 bits overall. Now we skip the hex conversion and supply 128 bits of unfiltered random data.
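
A sketch of the difference; the helper signatures are assumptions, not verbatim code:

```c
#include <stddef.h>

/* Assumed helpers: a raw-bytes generator alongside the hex one. */
void getRandomBytes(unsigned char *p, size_t len);   /* Unfiltered random data. */
void getRandomHexChars(char *p, size_t len);         /* Only 0-9a-f: 4 bits per byte. */

static unsigned char siphash_key[16];   /* SipHash wants a full 128-bit key. */

void initHashSeed(void) {
    /* Before: filling the 16 bytes with hex characters meant only 16
     * possible values per byte, a 2^64 key space instead of 2^128. */
    getRandomBytes(siphash_key, sizeof(siphash_key));
}
```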

- 22 Sep, 2019 1 commit
valentino authored
The DISCARD command should not fail during OOM; otherwise the client's MULTI state will not be cleared.
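
A condensed, illustrative sketch of the exemption in the pre-execution OOM gate:

```c
/* Illustrative fragment of the OOM check run before command
 * execution: memory-growing commands are rejected, but DISCARD must
 * stay allowed so a client can always clear its MULTI state. */
static int rejectCommandOnOOM(int command_is_denyoom, int command_is_discard,
                              int out_of_memory) {
    if (out_of_memory && command_is_denyoom && !command_is_discard)
        return 1;   /* Reply -OOM and abort the command. */
    return 0;       /* DISCARD (and non-denyoom commands) proceed. */
}
```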

- 18 Sep, 2019 1 commit
antirez authored

- 02 Sep, 2019 1 commit
antirez authored

- 25 Aug, 2019 1 commit
Oran Agra authored

- 31 Jul, 2019 1 commit
antirez authored
This was recently introduced with PR #6266.

- 24 Jul, 2019 2 commits

- 23 Jul, 2019 3 commits
antirez authored

antirez authored

Madelyn Olson authored

- 22 Jul, 2019 1 commit
antirez authored

- 17 Jul, 2019 1 commit
Oran Agra authored
* Create a module API for forking child processes.
* Refactor duplicate code around creating and tracking forks by AOF and RDB.
* Child processes listen to SIGUSR1 and die via exitFromChild, in order to eliminate a valgrind warning about an unhandled signal.
* Note that the BGSAVE error reply has changed.
The valgrind error was: Process terminating with default action of signal 10 (SIGUSR1)
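
On the module side, usage would look roughly like this; a sketch assuming the RedisModule_Fork and RedisModule_ExitFromChild entry points, with illustrative callback contents:

```c
#include "redismodule.h"

/* Invoked in the parent when the forked child terminates. */
static void forkDone(int exitcode, int bysignal, void *user_data) {
    REDISMODULE_NOT_USED(user_data);
    /* Inspect exitcode/bysignal to learn whether the work completed. */
    (void)exitcode; (void)bysignal;
}

static void doBackgroundWork(RedisModuleCtx *ctx) {
    int child_pid = RedisModule_Fork(forkDone, NULL);
    if (child_pid == 0) {
        /* Child: do the heavy work off the main process, then exit
         * through the module API (not exit()) so the parent's fork
         * tracking stays consistent. */
        RedisModule_ExitFromChild(0);
    } else if (child_pid == -1) {
        RedisModule_Log(ctx, "warning", "fork failed");
    }
    /* Parent: keep serving; forkDone fires when the child exits. */
}
```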