Unverified commit 0270abda authored by Vitaly, committed by GitHub

Replace cluster metadata with slot specific dictionaries (#11695)

This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be marginal. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
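
Below is a minimal sketch in C of the resulting layout. The names (`redisDbSketch`, `dictForKey`) are illustrative, not the exact structures from the patch; `keyHashSlot` is the standard cluster key-to-slot mapping, sketched further down.

```c
#include <stddef.h>

#define CLUSTER_SLOTS 16384

typedef struct dict dict;                      /* stock Redis hash table */
int keyHashSlot(const char *key, int keylen);  /* CRC16-based, sketched below */

/* Illustrative shape of the per-database keyspace: one dict per slot
 * instead of one global dict plus a per-entry slot linked list. */
typedef struct redisDbSketch {
    dict *dicts[CLUSTER_SLOTS];    /* main keyspace, one dict per slot */
    dict *expires[CLUSTER_SLOTS];  /* expires, split the same way */
    unsigned long long key_count;  /* aggregate size, for O(1) DBSIZE */
} redisDbSketch;

/* Every slot-specific operation (iteration, flushing a slot, ...) now has
 * a natural target: the slot's own dict. */
static dict *dictForKey(redisDbSketch *db, const char *key, int keylen) {
    return db->dicts[keyHashSlot(key, keylen)];
}
```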

## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms (see the first sketch after this list).
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree (also known as a Fenwick tree). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time (see the binary index tree sketch after this list).
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index tree that is used for random key selection; this index allows us to find the slot holding a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree (covered by the same sketch after this list).
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSBs of the cursor so it can be passed around between the client and the server (packing sketched after this list). This has an interesting side effect: you'll now be able to start scanning a specific slot by simply providing the slot id as the cursor value. The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the slot id cached in the client during command execution (pattern sketched after this list). All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading (sketched after this list).
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). The same is done for an O(1) expires count as well (sketched after this list).
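
A hedged sketch of the rehashing queue described in the first bullet. `dictIsRehashing` mirrors the real dict API (a macro there); `dictRehashStep` is a hypothetical stand-in for one small timed rehash slice, and `ustime()` is Redis's microsecond clock helper.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct dict dict;
extern bool dictIsRehashing(dict *d); /* assumption: mirrors the real macro */
extern void dictRehashStep(dict *d);  /* hypothetical: one small rehash slice */
extern long long ustime(void);        /* Redis helper: wall clock in usecs */

/* FIFO of dicts with an in-progress rehash; with 16k dicts per database,
 * many can be rehashing at once, so they are tracked explicitly. */
#define RQ_CAP (16384 + 1)
static dict *rq[RQ_CAP];
static size_t rq_head, rq_tail;

void trackRehashing(dict *d) { rq[rq_tail++ % RQ_CAP] = d; }

/* Cron: instead of rehashing a single dict, keep working through the
 * queue until a ~1ms budget is spent or nothing is left to rehash. */
void rehashCron(void) {
    long long deadline = ustime() + 1000; /* 1 millisecond budget */
    while (rq_head != rq_tail && ustime() < deadline) {
        dict *d = rq[rq_head % RQ_CAP];
        dictRehashStep(d);
        if (!dictIsRehashing(d)) rq_head++; /* finished: pop it */
    }
}
```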
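
For the getRandomKey and iteration bullets, a self-contained sketch of the binary index (Fenwick) tree over per-slot key counts. `slotForKeyIndex` is the shared primitive: drawing a uniform index in `[0, total_keys)` gives fair random-slot selection, and asking for "the slot holding the Nth key" lets iteration jump over empty slots.

```c
#define SLOTS 16384

/* Fenwick tree over per-slot key counts (1-based internally). */
static long long fenwick[SLOTS + 1];
static long long total_keys;

/* Called on every add/delete in a slot: O(log SLOTS). */
void slotCountUpdate(int slot, int delta) {
    total_keys += delta;
    for (int i = slot + 1; i <= SLOTS; i += i & -i) fenwick[i] += delta;
}

/* Number of keys in slots [0, slot]: O(log SLOTS). */
static long long prefixCount(int slot) {
    long long sum = 0;
    for (int i = slot + 1; i > 0; i -= i & -i) sum += fenwick[i];
    return sum;
}

/* Smallest slot whose cumulative count exceeds idx (0-based, must be
 * < total_keys), i.e. the slot holding the key with global index idx.
 * Binary search over slots with O(log) prefix queries gives the
 * O(log^2(slot count)) bound from the description. With 10 keys in slot
 * 0, idx = 10 (the 11th key) lands in the next non-empty slot. */
int slotForKeyIndex(long long idx) {
    int lo = 0, hi = SLOTS - 1;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (prefixCount(mid) > idx) hi = mid; else lo = mid + 1;
    }
    return lo;
}
```

getRandomKey then reduces to: draw `idx` uniformly in `[0, total_keys)`, call `slotForKeyIndex(idx)`, and sample a random key from that slot's dict.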
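
For the scan API bullet, a sketch of carving the slot id out of the cursor's low bits (14 bits cover all 16384 slots); the exact packing in the patch may differ. This is also why the cursor now needs 64 bits even on 32-bit builds.

```c
#include <stdint.h>

#define SLOT_BITS 14                        /* 2^14 = 16384 slots */
#define SLOT_MASK ((1ULL << SLOT_BITS) - 1)

/* Slot id lives in the LSBs, the per-dict scan cursor above it. A bare
 * slot number is therefore a valid "start scanning at this slot" cursor
 * (the undocumented side effect mentioned above). */
static inline uint64_t buildScanCursor(uint64_t dict_cursor, int slot) {
    return (dict_cursor << SLOT_BITS) | ((uint64_t)slot & SLOT_MASK);
}
static inline int scanCursorSlot(uint64_t cursor) {
    return (int)(cursor & SLOT_MASK);
}
static inline uint64_t scanCursorOffset(uint64_t cursor) {
    return cursor >> SLOT_BITS;
}
```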
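
For the checksum bullet, a sketch of the caching pattern: the slot is resolved once per command execution and reused. The struct and accessor are illustrative, not the patch's exact code.

```c
typedef struct clientSketch {
    int slot; /* slot of the current command's keys, -1 when unknown */
} clientSketch;

int keyHashSlot(const char *key, int keylen); /* CRC16-based, sketched below */

int getKeySlot(clientSketch *c, const char *key, int keylen) {
    /* Fast path: command execution already resolved the slot (all keys of
     * a command live in one slot, cross-slot scripts/modules excepted). */
    if (c != NULL && c->slot != -1) return c->slot;
    int slot = keyHashSlot(key, keylen); /* random-access path: recompute */
    if (c != NULL) c->slot = slot;       /* cache for the rest of the call */
    return slot;
}
```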
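
For the RDB bullet, a hedged sketch of emitting the per-slot hint. `rdbSaveType` and `rdbSaveLen` are existing RDB helpers; the opcode name, its numeric value, and the field order here are assumptions about the new format rather than its exact definition.

```c
#include <stdint.h>
#include "rdb.h" /* rio, rdbSaveType(), rdbSaveLen() */

/* Assumption: a fresh opcode in the bumped RDB version. */
#define RDB_OPCODE_SLOT_INFO 244

/* Written ahead of a slot's keys so that loading can pre-size (expand)
 * the slot's main and expires dicts instead of guessing from the total. */
int rdbSaveSlotInfo(rio *rdb, uint64_t slot, uint64_t keys, uint64_t expires) {
    if (rdbSaveType(rdb, RDB_OPCODE_SLOT_INFO) == -1) return -1;
    if (rdbSaveLen(rdb, slot) == -1) return -1;
    if (rdbSaveLen(rdb, keys) == -1) return -1;
    if (rdbSaveLen(rdb, expires) == -1) return -1;
    return 0;
}
```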
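
For the DB size bullet, the bookkeeping is tiny; this sketch ties it to the illustrative structures above (`redisDbSketch`, `slotCountUpdate`).

```c
/* Maintained on every add/delete so DBSIZE never loops over 16k dicts;
 * an analogous counter keeps the number of expires O(1) as well. */
void dbKeyAdded(redisDbSketch *db, int slot) {
    db->key_count++;
    slotCountUpdate(slot, +1); /* keep the binary index tree in sync */
}

void dbKeyDeleted(redisDbSketch *db, int slot) {
    db->key_count--;
    slotCountUpdate(slot, -1);
}

unsigned long long dbSize(redisDbSketch *db) { return db->key_count; }
```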

## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict.

RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
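
The per-key work added to the load path is essentially the cluster key-to-slot mapping below; this follows the hash-tag rule from the cluster spec, with `crc16` being Redis's existing XMODEM CRC16.

```c
unsigned int crc16(const char *buf, int len); /* Redis's CRC16 (XMODEM) */

/* Map a key to its hash slot, honoring {hashtag} ranges: if the key
 * contains a non-empty {...} section, only that section is hashed. */
unsigned int keyHashSlot(const char *key, int keylen) {
    int s, e; /* start and end of a potential hash tag */
    for (s = 0; s < keylen; s++)
        if (key[s] == '{') break;
    if (s == keylen) return crc16(key, keylen) & 16383;  /* no '{' at all */
    for (e = s + 1; e < keylen; e++)
        if (key[e] == '}') break;
    if (e == keylen || e == s + 1)
        return crc16(key, keylen) & 16383;               /* no '}' or "{}" */
    return crc16(key + s + 1, e - s - 1) & 16383;        /* hash the tag only */
}
```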

## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* The SCAN API now requires 64 bits to store the cursor, even on 32-bit systems, as the slot information is stored in it.
* New RDB version to support the new op code for SLOT information. 

---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
parent f0c1c730
@@ -1671,7 +1671,7 @@ void sdiffstoreCommand(client *c) {
void sscanCommand(client *c) {
robj *set;
unsigned long cursor;
unsigned long long cursor;
if (parseScanCursorOrReply(c,c->argv[2],&cursor) == C_ERR) return;
if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.emptyscan)) == NULL ||
@@ -3901,7 +3901,7 @@ void zrevrankCommand(client *c) {
void zscanCommand(client *c) {
robj *o;
unsigned long cursor;
unsigned long long cursor;
if (parseScanCursorOrReply(c,c->argv[2],&cursor) == C_ERR) return;
if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.emptyscan)) == NULL ||
@@ -602,16 +602,24 @@ proc stop_bg_complex_data {handle} {
# Write num keys with the given key prefix and value size (in bytes). If idx is
# given, it's the index (AKA level) used with the srv procedure and it specifies
# to which Redis instance to write the keys. If expires is positive, each key is
# written with a TTL of that many seconds.
proc populate {num {prefix key:} {size 3} {idx 0} {prints false}} {
proc populate {num {prefix key:} {size 3} {idx 0} {prints false} {expires 0}} {
r $idx deferred 1
if {$num > 16} {set pipeline 16} else {set pipeline $num}
set val [string repeat A $size]
for {set j 0} {$j < $pipeline} {incr j} {
if {$expires > 0} {
r $idx set $prefix$j $val ex $expires
} else {
r $idx set $prefix$j $val
}
if {$prints} {puts $j}
}
for {} {$j < $num} {incr j} {
if {$expires > 0} {
r $idx set $prefix$j $val ex $expires
} else {
r $idx set $prefix$j $val
}
r $idx read
if {$prints} {puts $j}
}
@@ -30,6 +30,13 @@ start_cluster 1 0 {tags {external:skip cluster}} {
redis.call('set', 'foo', 'bar'); redis.call('set', 'bar', 'foo')
} 0
# Retrieve data from a different slot to verify the data has been stored in the
# correct dictionary in a cluster-enabled setup during the cross-slot operation
# from the above Lua script.
assert_equal "bar" [r 0 get foo]
assert_equal "foo" [r 0 get bar]
r 0 del foo
r 0 del bar
# Functions with allow-cross-slot-keys flag are allowed
r 0 function load REPLACE {#!lua name=crossslot
local function test_cross_slot(keys, args)
@@ -40,6 +47,11 @@ start_cluster 1 0 {tags {external:skip cluster}} {
redis.register_function{function_name='test_cross_slot', callback=test_cross_slot, flags={ 'allow-cross-slot-keys' }}}
r FCALL test_cross_slot 0
# Retrieve data from a different slot to verify the data has been stored in the
# correct dictionary in a cluster-enabled setup during the cross-slot operation
# from the above Lua function.
assert_equal "bar" [r 0 get foo]
assert_equal "foo" [r 0 get bar]
}
test {Cross slot commands are also blocked if they disagree with pre-declared keys} {
@@ -192,8 +192,8 @@ start_server {tags {"expire"}} {
# two seconds.
wait_for_condition 20 100 {
[r dbsize] eq 0
} fail {
"Keys did not actively expire."
} else {
fail "Keys did not actively expire."
}
}
@@ -378,8 +378,8 @@ start_server {tags {"expire"}} {
{set foo15 bar}
{pexpireat foo15 *}
{set foo16 bar}
{restore foo17 * {*} ABSTTL}
{restore foo18 * {*} absttl}
{restore foo17 * * ABSTTL}
{restore foo18 * * absttl}
}
# Remember the absolute TTLs of all the keys
@@ -507,8 +507,8 @@ start_server {tags {"expire"}} {
{pexpireat foo4 *}
{pexpireat foo4 *}
{set foo5 bar}
{restore foo6 * {*} ABSTTL}
{restore foo7 * {*} absttl}
{restore foo6 * * ABSTTL}
{restore foo7 * * absttl}
}
close_replication_stream $repl
} {} {needs:repl}
@@ -833,3 +833,75 @@ start_server {tags {"expire"}} {
assert_equal [r debug set-active-expire 1] {OK}
} {} {needs:debug}
}
start_cluster 1 0 {tags {"expire external:skip cluster"}} {
test "expire scan should skip dictionaries with lot's of empty buckets" {
# Collect two slots to help determine the expiry scan logic is able
# to go past certain slots which aren't valid for scanning at the given point of time.
# And the next non empyt slot after that still gets scanned and expiration happens.
# hashslot(alice) is 749
r psetex alice 500 val
# hashslot(foo) is 12182
# fill data across different slots with expiration
for {set j 1} {$j <= 100} {incr j} {
r psetex "{foo}$j" 500 a
}
# hashslot(key) is 12539
r psetex key 500 val
assert_equal 102 [r dbsize]
# disable resizing: a running BGSAVE child prevents dict resize
r config set rdb-key-save-delay 10000000
r bgsave
# delete data to create lots (99%) of empty buckets (slot 12182 should be skipped)
for {set j 1} {$j <= 99} {incr j} {
r del "{foo}$j"
}
# Verify {foo}100 still exists and the remaining keys got cleaned up
wait_for_condition 20 100 {
[r dbsize] eq 1
} else {
if {[r dbsize] eq 0} {
fail "scan didn't handle slot skipping logic."
} else {
fail "scan didn't process all valid slots."
}
}
# Enable resizing
r config set rdb-key-save-delay 0
catch {exec kill -9 [get_child_pid 0]}
wait_for_condition 1000 10 {
[s rdb_bgsave_in_progress] eq 0
} else {
fail "bgsave did not stop in time."
}
# Verify dict is under rehashing
set htstats [r debug HTSTATS 0]
assert_match {*rehashing target*} $htstats
# put some data into slot 12182 and trigger the resize
r psetex "{foo}0" 500 a
# Verify dict rehashing has completed
wait_for_condition 20 100 {
![string match {*rehashing target*} [r debug HTSTATS 0]]
} else {
fail "rehashing didn't complete"
}
# Verify all keys have expired
wait_for_condition 20 100 {
[r dbsize] eq 0
} else {
fail "Keys did not actively expire."
}
}
}
@@ -37,9 +37,9 @@ start_server {tags {"memefficiency external:skip"}} {
}
run_solo {defrag} {
start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-rewrite-percentage 0 save ""}} {
proc test_active_defrag {type} {
if {[string match {*jemalloc*} [s mem_allocator]] && [r debug mallctl arenas.page] <= 8192} {
test "Active defrag" {
test "Active defrag main dictionary: $type" {
r config set hz 100
r config set activedefrag no
r config set active-defrag-threshold-lower 5
@@ -50,7 +50,11 @@ start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-r
r config set maxmemory-policy allkeys-lru
populate 700000 asdf1 150
populate 100 asdf1 150 0 false 1000
populate 170000 asdf2 300
populate 100 asdf2 300 0 false 1000
assert {[scan [regexp -inline {expires\=([\d]*)} [r info keyspace]] expires=%d] > 0}
after 120 ;# serverCron only updates the info once in 100ms
set frag [s allocator_frag_ratio]
if {$::verbose} {
@@ -115,7 +119,7 @@ start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-r
r save ;# saving an rdb iterates over all the data / pointers
# if defrag is supported, test AOF loading too
if {[r config get activedefrag] eq "activedefrag yes"} {
if {[r config get activedefrag] eq "activedefrag yes" && $type eq "standalone"} {
test "Active defrag - AOF loading" {
# reset stats and load the AOF file
r config resetstat
@@ -160,7 +164,7 @@ start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-r
r config set appendonly no
r config set key-load-delay 0
test "Active defrag eval scripts" {
test "Active defrag eval scripts: $type" {
r flushdb
r script flush sync
r config resetstat
@@ -242,7 +246,7 @@ start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-r
r script flush sync
} {OK}
test "Active defrag big keys" {
test "Active defrag big keys: $type" {
r flushdb
r config resetstat
r config set hz 100
@@ -277,6 +281,14 @@ start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-r
$rd read ; # Discard replies
}
# create some small items in the big keys' slots (effective in a cluster-enabled setup)
r set "{bighash}smallitem" val
r set "{biglist}smallitem" val
r set "{bigzset}smallitem" val
r set "{bigset}smallitem" val
r set "{bigstream}smallitem" val
set expected_frag 1.7
if {$::accurate} {
# scale the hash to 1m fields in order to have a measurable latency
@@ -297,7 +309,7 @@ start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-r
for {set j 0} {$j < 500000} {incr j} {
$rd read ; # Discard replies
}
assert_equal [r dbsize] 500010
assert_equal [r dbsize] 500015
# create some fragmentation
for {set j 0} {$j < 500000} {incr j 2} {
@@ -306,7 +318,7 @@ start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-r
for {set j 0} {$j < 500000} {incr j 2} {
$rd read ; # Discard replies
}
assert_equal [r dbsize] 250010
assert_equal [r dbsize] 250015
# start defrag
after 120 ;# serverCron only updates the info once in 100ms
@@ -371,7 +383,7 @@ start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-r
r save ;# saving an rdb iterates over all the data / pointers
} {OK}
test "Active defrag big list" {
test "Active defrag big list: $type" {
r flushdb
r config resetstat
r config set hz 100
@@ -473,7 +485,7 @@ start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-r
r del biglist1 ;# coverage for quicklistBookmarksClear
} {1}
test "Active defrag edge case" {
test "Active defrag edge case: $type" {
# there was an edge case in defrag where all the slabs of a certain bin are exact the same
# % utilization, with the exception of the current slab from which new allocations are made
# if the current slab is lower in utilization the defragger would have ended up in stagnation,
@@ -576,5 +588,13 @@ start_server {tags {"defrag external:skip"} overrides {appendonly yes auto-aof-r
}
}
}
}
}
start_cluster 1 0 {tags {"defrag external:skip cluster"} overrides {appendonly yes auto-aof-rewrite-percentage 0 save ""}} {
test_active_defrag "cluster"
}
start_server {tags {"defrag external:skip standalone"} overrides {appendonly yes auto-aof-rewrite-percentage 0 save ""}} {
test_active_defrag "standalone"
}
} ;# run_solo
start_server {tags {"scan network"}} {
test "SCAN basic" {
proc test_scan {type} {
test "{$type} SCAN basic" {
r flushdb
populate 1000
@@ -17,7 +17,7 @@ start_server {tags {"scan network"}} {
assert_equal 1000 [llength $keys]
}
test "SCAN COUNT" {
test "{$type} SCAN COUNT" {
r flushdb
populate 1000
@@ -35,7 +35,7 @@ start_server {tags {"scan network"}} {
assert_equal 1000 [llength $keys]
}
test "SCAN MATCH" {
test "{$type} SCAN MATCH" {
r flushdb
populate 1000
@@ -53,7 +53,7 @@ start_server {tags {"scan network"}} {
assert_equal 100 [llength $keys]
}
test "SCAN TYPE" {
test "{$type} SCAN TYPE" {
r flushdb
# populate only creates strings
populate 1000
@@ -98,7 +98,7 @@ start_server {tags {"scan network"}} {
assert_equal 1000 [llength $keys]
}
test "SCAN unknown type" {
test "{$type} SCAN unknown type" {
r flushdb
# make sure that passive expiration is triggered by the scan
r debug set-active-expire 0
@@ -131,7 +131,7 @@ start_server {tags {"scan network"}} {
r debug set-active-expire 1
} {OK} {needs:debug}
test "SCAN with expired keys" {
test "{$type} SCAN with expired keys" {
r flushdb
# make sure that passive expiration is triggered by the scan
r debug set-active-expire 0
@@ -164,7 +164,7 @@ start_server {tags {"scan network"}} {
r debug set-active-expire 1
} {OK} {needs:debug}
test "SCAN with expired keys with TYPE filter" {
test "{$type} SCAN with expired keys with TYPE filter" {
r flushdb
# make sure that passive expiration is triggered by the scan
r debug set-active-expire 0
@@ -201,7 +201,7 @@ start_server {tags {"scan network"}} {
} {OK} {needs:debug}
foreach enc {intset listpack hashtable} {
test "SSCAN with encoding $enc" {
test "{$type} SSCAN with encoding $enc" {
# Create the Set
r del set
if {$enc eq {intset}} {
@@ -236,7 +236,7 @@ start_server {tags {"scan network"}} {
}
foreach enc {listpack hashtable} {
test "HSCAN with encoding $enc" {
test "{$type} HSCAN with encoding $enc" {
# Create the Hash
r del hash
if {$enc eq {listpack}} {
@@ -276,7 +276,7 @@ start_server {tags {"scan network"}} {
}
foreach enc {listpack skiplist} {
test "ZSCAN with encoding $enc" {
test "{$type} ZSCAN with encoding $enc" {
# Create the Sorted Set
r del zset
if {$enc eq {listpack}} {
@@ -315,7 +315,7 @@ start_server {tags {"scan network"}} {
}
}
test "SCAN guarantees check under write load" {
test "{$type} SCAN guarantees check under write load" {
r flushdb
populate 100
@@ -344,7 +344,7 @@ start_server {tags {"scan network"}} {
assert_equal 100 [llength $keys2]
}
test "SSCAN with integer encoded object (issue #1345)" {
test "{$type} SSCAN with integer encoded object (issue #1345)" {
set objects {1 a}
r del set
r sadd set {*}$objects
@@ -354,28 +354,28 @@ start_server {tags {"scan network"}} {
assert_equal [lsort -unique [lindex $res 1]] {1}
}
test "SSCAN with PATTERN" {
test "{$type} SSCAN with PATTERN" {
r del mykey
r sadd mykey foo fab fiz foobar 1 2 3 4
set res [r sscan mykey 0 MATCH foo* COUNT 10000]
lsort -unique [lindex $res 1]
} {foo foobar}
test "HSCAN with PATTERN" {
test "{$type} HSCAN with PATTERN" {
r del mykey
r hmset mykey foo 1 fab 2 fiz 3 foobar 10 1 a 2 b 3 c 4 d
set res [r hscan mykey 0 MATCH foo* COUNT 10000]
lsort -unique [lindex $res 1]
} {1 10 foo foobar}
test "ZSCAN with PATTERN" {
test "{$type} ZSCAN with PATTERN" {
r del mykey
r zadd mykey 1 foo 2 fab 3 fiz 10 foobar
set res [r zscan mykey 0 MATCH foo* COUNT 10000]
lsort -unique [lindex $res 1]
}
test "ZSCAN scores: regression test for issue #2175" {
test "{$type} ZSCAN scores: regression test for issue #2175" {
r del mykey
for {set j 0} {$j < 500} {incr j} {
r zadd mykey 9.8813129168249309e-323 $j
@@ -385,7 +385,7 @@ start_server {tags {"scan network"}} {
assert {$first_score != 0}
}
test "SCAN regression test for issue #4906" {
test "{$type} SCAN regression test for issue #4906" {
for {set k 0} {$k < 100} {incr k} {
r del set
r sadd set x; # Make sure it's not intset encoded
@@ -431,3 +431,11 @@ start_server {tags {"scan network"}} {
}
}
}
start_server {tags {"scan network standalone"}} {
test_scan "standalone"
}
start_cluster 1 0 {tags {"external:skip cluster scan"}} {
test_scan "cluster"
}