Commit 2ae7f491 authored by Oran Agra

Squash merging 125 typo/grammar/comment/doc PRs (#7773)

List of squashed commits or PRs
===============================

commit 66801ea
Author: hwware <wen.hui.ware@gmail.com>
Date:   Mon Jan 13 00:54:31 2020 -0500

    typo fix in acl.c

commit 46f55db
Author: Itamar Haber <itamar@redislabs.com>
Date:   Sun Sep 6 18:24:11 2020 +0300

    Updates a couple of comments

    Specifically:

    * RM_AutoMemory completed instead of pointing to docs
    * Updated link to custom type doc

commit 61a2aa0
Author: xindoo <xindoo@qq.com>
Date:   Tue Sep 1 19:24:59 2020 +0800

    Correct errors in code comments

commit a5871d1
Author: yz1509 <pro-756@qq.com>
Date:   Tue Sep 1 18:36:06 2020 +0800

    fix typos in module.c

commit 41eede7
Author: bookug <bookug@qq.com>
Date:   Sat Aug 15 01:11:33 2020 +0800

    docs: fix typos in comments

commit c303c84
Author: lazy-snail <ws.niu@outlook.com>
Date:   Fri Aug 7 11:15:44 2020 +0800

    fix spelling in redis.conf

commit 1eb76bf
Author: zhujian <zhujianxyz@gmail.com>
Date:   Thu Aug 6 15:22:10 2020 +0800

    add a missing 'n' in comment

commit 1530ec2
Author: Daniel Dai <764122422@qq.com>
Date:   Mon Jul 27 00:46:35 2020 -0400

    fix spelling in tracking.c

commit e517b31
Author: Hunter-Chen <huntcool001@gmail.com>
Date:   Fri Jul 17 22:33:32 2020 +0800

    Update redis.conf
Co-authored-by: Itamar Haber <itamar@redislabs.com>

commit c300eff
Author: Hunter-Chen <huntcool001@gmail.com>
Date:   Fri Jul 17 22:33:23 2020 +0800

    Update redis.conf
Co-authored-by: Itamar Haber <itamar@redislabs.com>

commit 4c058a8
Author: 陈浩鹏 <chenhaopeng@heytea.com>
Date:   Thu Jun 25 19:00:56 2020 +0800

    Grammar fix and clarification

commit 5fcaa81
Author: bodong.ybd <bodong.ybd@alibaba-inc.com>
Date:   Fri Jun 19 10:09:00 2020 +0800

    Fix typos

commit 4caca9a
Author: Pruthvi P <pruthvi@ixigo.com>
Date:   Fri May 22 00:33:22 2020 +0530

    Fix typo eviciton => eviction

commit b2a25f6
Author: Brad Dunbar <dunbarb2@gmail.com>
Date:   Sun May 17 12:39:59 2020 -0400

    Fix a typo.

commit 12842ae
Author: hwware <wen.hui.ware@gmail.com>
Date:   Sun May 3 17:16:59 2020 -0400

    fix spelling in redis conf

commit ddba07c
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date:   Sat May 2 23:25:34 2020 +0100

    Correct a "conflicts" spelling error.

commit 8fc7bf2
Author: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
Date:   Thu Apr 30 10:25:27 2020 +0900

    docs: fix EXPIRE_FAST_CYCLE_DURATION to ACTIVE_EXPIRE_CYCLE_FAST_DURATION

commit 9b2b67a
Author: Brad Dunbar <dunbarb2@gmail.com>
Date:   Fri Apr 24 11:46:22 2020 -0400

    Fix a typo.

commit 0746f10
Author: devilinrust <63737265+devilinrust@users.noreply.github.com>
Date:   Thu Apr 16 00:17:53 2020 +0200

    Fix typos in server.c

commit 92b588d
Author: benjessop12 <56115861+benjessop12@users.noreply.github.com>
Date:   Mon Apr 13 13:43:55 2020 +0100

    Fix spelling mistake in lazyfree.c

commit 1da37aa
Merge: 2d4ba28 af347a8c
Author: hwware <wen.hui.ware@gmail.com>
Date:   Thu Mar 5 22:41:31 2020 -0500

    Merge remote-tracking branch 'upstream/unstable' into expiretypofix

commit 2d4ba28
Author: hwware <wen.hui.ware@gmail.com>
Date:   Mon Mar 2 00:09:40 2020 -0500

    fix typo in expire.c

commit 1a746f7
Author: SennoYuki <minakami1yuki@gmail.com>
Date:   Thu Feb 27 16:54:32 2020 +0800

    fix typo

commit 8599b1a
Author: dongheejeong <donghee950403@gmail.com>
Date:   Sun Feb 16 20:31:43 2020 +0000

    Fix typo in server.c

commit f38d4e8
Author: hwware <wen.hui.ware@gmail.com>
Date:   Sun Feb 2 22:58:38 2020 -0500

    fix typo in evict.c

commit fe143fc
Author: Leo Murillo <leonardo.murillo@gmail.com>
Date:   Sun Feb 2 01:57:22 2020 -0600

    Fix a few typos in redis.conf

commit 1ab4d21
Author: viraja1 <anchan.viraj@gmail.com>
Date:   Fri Dec 27 17:15:58 2019 +0530

    Fix typo in Latency API docstring

commit ca1f70e
Author: gosth <danxuedexing@qq.com>
Date:   Wed Dec 18 15:18:02 2019 +0800

    fix typo in sort.c

commit a57c06b
Author: ZYunH <zyunhjob@163.com>
Date:   Mon Dec 16 22:28:46 2019 +0800

    fix-zset-typo

commit b8c92b5
Author: git-hulk <hulk.website@gmail.com>
Date:   Mon Dec 16 15:51:42 2019 +0800

    FIX: typo in cluster.c, onformation->information

commit 9dd981c
Author: wujm2007 <jim.wujm@gmail.com>
Date:   Mon Dec 16 09:37:52 2019 +0800

    Fix typo

commit e132d7a
Author: Sebastien Williams-Wynn <s.williamswynn.mail@gmail.com>
Date:   Fri Nov 15 00:14:07 2019 +0000

    Minor typo change

commit 47f44d5
Author: happynote3966 <01ssrmikururudevice01@gmail.com>
Date:   Mon Nov 11 22:08:48 2019 +0900

    fix comment typo in redis-cli.c

commit b8bdb0d
Author: fulei <fulei@kuaishou.com>
Date:   Wed Oct 16 18:00:17 2019 +0800

    Fix a spelling mistake of comments  in defragDictBucketCallback

commit 0def46a
Author: fulei <fulei@kuaishou.com>
Date:   Wed Oct 16 13:09:27 2019 +0800

    fix some spelling mistakes of comments in defrag.c

commit f3596fd
Author: Phil Rajchgot <tophil@outlook.com>
Date:   Sun Oct 13 02:02:32 2019 -0400

    Typo and grammar fixes

    Redis and its documentation are great -- just wanted to submit a few corrections in the spirit of Hacktoberfest. Thanks for all your work on this project. I use it all the time and it works beautifully.

commit 2b928cd
Author: KangZhiDong <worldkzd@gmail.com>
Date:   Sun Sep 1 07:03:11 2019 +0800

    fix typos

commit 33aea14
Author: Axlgrep <axlgrep@gmail.com>
Date:   Tue Aug 27 11:02:18 2019 +0800

    Fixed eviction spelling issues

commit e282a80
Author: Simen Flatby <simen@oms.no>
Date:   Tue Aug 20 15:25:51 2019 +0200

    Update comments to reflect prop name

    In the comments the prop is referenced as replica-validity-factor,
    but it is really named cluster-replica-validity-factor.

commit 74d1f9a
Author: Jim Green <jimgreen2013@qq.com>
Date:   Tue Aug 20 20:00:31 2019 +0800

    fix comment error, the code is ok

commit eea1407
Author: Liao Tonglang <liaotonglang@gmail.com>
Date:   Fri May 31 10:16:18 2019 +0800

    typo fix

    fix cna't to can't

commit 0da553c
Author: KAWACHI Takashi <tkawachi@gmail.com>
Date:   Wed Jul 17 00:38:16 2019 +0900

    Fix typo

commit 7fc8fb6
Author: Michael Prokop <mika@grml.org>
Date:   Tue May 28 17:58:42 2019 +0200

    Typo fixes

    s/familar/familiar/
    s/compatiblity/compatibility/
    s/ ot / to /
    s/itsef/itself/

commit 5f46c9d
Author: zhumoing <34539422+zhumoing@users.noreply.github.com>
Date:   Tue May 21 21:16:50 2019 +0800

    typo-fixes

    typo-fixes

commit 321dfe1
Author: wxisme <850885154@qq.com>
Date:   Sat Mar 16 15:10:55 2019 +0800

    typo fix

commit b4fb131
Merge: 267e0e6 3df1eb85
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date:   Fri Feb 8 22:55:45 2019 +0200

    Merge branch 'unstable' of antirez/redis into unstable

commit 267e0e6
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date:   Wed Jan 30 21:26:04 2019 +0200

    Minor typo fix

commit 30544e7
Author: inshal96 <39904558+inshal96@users.noreply.github.com>
Date:   Fri Jan 4 16:54:50 2019 +0500

    remove an extra 'a' in the comments

commit 337969d
Author: BrotherGao <yangdongheng11@gmail.com>
Date:   Sat Dec 29 12:37:29 2018 +0800

    fix typo in redis.conf

commit 9f4b121
Merge: 423a030 e504583b
Author: BrotherGao <yangdongheng@xiaomi.com>
Date:   Sat Dec 29 11:41:12 2018 +0800

    Merge branch 'unstable' of antirez/redis into unstable

commit 423a030
Merge: 42b02b7 46a51cdc
Author: 杨东衡 <yangdongheng@xiaomi.com>
Date:   Tue Dec 4 23:56:11 2018 +0800

    Merge branch 'unstable' of antirez/redis into unstable

commit 42b02b7
Merge: 68c0e6e3 b8febe60
Author: Dongheng Yang <yangdongheng11@gmail.com>
Date:   Sun Oct 28 15:54:23 2018 +0800

    Merge pull request #1 from antirez/unstable

    update local data

commit 714b589
Author: Christian <crifei93@gmail.com>
Date:   Fri Dec 28 01:17:26 2018 +0100

    fix typo "resulution"

commit e23259d
Author: garenchan <1412950785@qq.com>
Date:   Wed Dec 26 09:58:35 2018 +0800

    fix typo: segfauls -> segfault

commit a9359f8
Author: xjp <jianping_xie@aliyun.com>
Date:   Tue Dec 18 17:31:44 2018 +0800

    Fixed REDISMODULE_H spell bug

commit a12c3e4
Author: jdiaz <jrd.palacios@gmail.com>
Date:   Sat Dec 15 23:39:52 2018 -0600

    Fixes hyperloglog hash function comment block description

commit 770eb11
Author: 林上耀 <1210tom@163.com>
Date:   Sun Nov 25 17:16:10 2018 +0800

    fix typo

commit fd97fbb
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date:   Fri Nov 23 17:14:01 2018 +0100

    Correct "unsupported" typo.

commit a85522d
Author: Jungnam Lee <jungnam.lee@oracle.com>
Date:   Thu Nov 8 23:01:29 2018 +0900

    fix typo in test comments

commit ade8007
Author: Arun Kumar <palerdot@users.noreply.github.com>
Date:   Tue Oct 23 16:56:35 2018 +0530

    Fixed grammatical typo

    Fixed typo for word 'dictionary'

commit 869ee39
Author: Hamid Alaei <hamid.a85@gmail.com>
Date:   Sun Aug 12 16:40:02 2018 +0430

    fix documentations: (ThreadSafeContextStart/Stop -> ThreadSafeContextLock/Unlock), minor typo

commit f89d158
Author: Mayank Jain <mayankjain255@gmail.com>
Date:   Tue Jul 31 23:01:21 2018 +0530

    Updated README.md with some spelling corrections.

    Made correction in spelling of some misspelled words.

commit 892198e
Author: dsomeshwar <someshwar.dhayalan@gmail.com>
Date:   Sat Jul 21 23:23:04 2018 +0530

    typo fix

commit 8a4d780
Author: Itamar Haber <itamar@redislabs.com>
Date:   Mon Apr 30 02:06:52 2018 +0300

    Fixes some typos

commit e3acef6
Author: Noah Rosamilia <ivoahivoah@gmail.com>
Date:   Sat Mar 3 23:41:21 2018 -0500

    Fix typo in /deps/README.md

commit 04442fb
Author: WuYunlong <xzsyeb@126.com>
Date:   Sat Mar 3 10:32:42 2018 +0800

    Fix typo in readSyncBulkPayload() comment.

commit 9f36880
Author: WuYunlong <xzsyeb@126.com>
Date:   Sat Mar 3 10:20:37 2018 +0800

    replication.c comment: run_id -> replid.

commit f866b4a
Author: Francesco 'makevoid' Canessa <makevoid@gmail.com>
Date:   Thu Feb 22 22:01:56 2018 +0000

    fix comment typo in server.c

commit 0ebc69b
Author: 줍 <jubee0124@gmail.com>
Date:   Mon Feb 12 16:38:48 2018 +0900

    Fix typo in redis.conf

    Fix `five behaviors` to `eight behaviors` in [this sentence ](antirez/redis@unstable/redis.conf#L564)

commit b50a620
Author: martinbroadhurst <martinbroadhurst@users.noreply.github.com>
Date:   Thu Dec 28 12:07:30 2017 +0000

    Fix typo in valgrind.sup

commit 7d8f349
Author: Peter Boughton <peter@sorcerersisle.com>
Date:   Mon Nov 27 19:52:19 2017 +0000

    Update CONTRIBUTING; refer doc updates to redis-doc repo.

commit 02dec7e
Author: Klauswk <klauswk1@hotmail.com>
Date:   Tue Oct 24 16:18:38 2017 -0200

    Fix typo in comment

commit e1efbc8
Author: chenshi <baiwfg2@gmail.com>
Date:   Tue Oct 3 18:26:30 2017 +0800

    Correct two spelling errors of comments

commit 93327d8
Author: spacewander <spacewanderlzx@gmail.com>
Date:   Wed Sep 13 16:47:24 2017 +0800

    Update the comment for OBJ_ENCODING_EMBSTR_SIZE_LIMIT's value

    The value of OBJ_ENCODING_EMBSTR_SIZE_LIMIT is 44 now instead of 39.

commit 63d361f
Author: spacewander <spacewanderlzx@gmail.com>
Date:   Tue Sep 12 15:06:42 2017 +0800

    Fix <prevlen> related doc in ziplist.c

    According to the definition of ZIP_BIG_PREVLEN and other related code,
    the guard of single byte <prevlen> should be 254 instead of 255.

commit ebe228d
Author: hanael80 <hanael80@gmail.com>
Date:   Tue Aug 15 09:09:40 2017 +0900

    Fix typo

commit 6b696e6
Author: Matt Robenolt <matt@ydekproductions.com>
Date:   Mon Aug 14 14:50:47 2017 -0700

    Fix typo in LATENCY DOCTOR output

commit a2ec6ae
Author: caosiyang <caosiyang@qiyi.com>
Date:   Tue Aug 15 14:15:16 2017 +0800

    Fix a typo: form => from

commit 3ab7699
Author: caosiyang <caosiyang@qiyi.com>
Date:   Thu Aug 10 18:40:33 2017 +0800

    Fix a typo: replicationFeedSlavesFromMaster() => replicationFeedSlavesFromMasterStream()

commit 72d43ef
Author: caosiyang <caosiyang@qiyi.com>
Date:   Tue Aug 8 15:57:25 2017 +0800

    fix a typo: servewr => server

commit 707c958
Author: Bo Cai <charpty@gmail.com>
Date:   Wed Jul 26 21:49:42 2017 +0800

    redis-cli.c typo: conut -> count.
Signed-off-by: Bo Cai <charpty@gmail.com>

commit b9385b2
Author: JackDrogon <jack.xsuperman@gmail.com>
Date:   Fri Jun 30 14:22:31 2017 +0800

    Fix some spell problems

commit 20d9230
Author: akosel <aaronjkosel@gmail.com>
Date:   Sun Jun 4 19:35:13 2017 -0500

    Fix typo

commit b167bfc
Author: Krzysiek Witkowicz <krzysiekwitkowicz@gmail.com>
Date:   Mon May 22 21:32:27 2017 +0100

    Fix #4008 small typo in comment

commit 2b78ac8
Author: Jake Clarkson <jacobwclarkson@gmail.com>
Date:   Wed Apr 26 15:49:50 2017 +0100

    Correct typo in tests/unit/hyperloglog.tcl

commit b0f1cdb
Author: Qi Luo <qiluo-msft@users.noreply.github.com>
Date:   Wed Apr 19 14:25:18 2017 -0700

    Fix typo

commit a90b0f9
Author: charsyam <charsyam@naver.com>
Date:   Thu Mar 16 18:19:53 2017 +0900

    fix typos

    fix typos

    fix typos

commit 8430a79
Author: Richard Hart <richardhart92@gmail.com>
Date:   Mon Mar 13 22:17:41 2017 -0400

    Fixed log message typo in listenToPort.

commit 481a1c2
Author: Vinod Kumar <kumar003vinod@gmail.com>
Date:   Sun Jan 15 23:04:51 2017 +0530

    src/db.c: Correct "save" -> "safe" typo

commit 586b4d3
Author: wangshaonan <wshn13@gmail.com>
Date:   Wed Dec 21 20:28:27 2016 +0800

    Fix typo they->the in helloworld.c

commit c1c4b5e
Author: Jenner <hypxm@qq.com>
Date:   Mon Dec 19 16:39:46 2016 +0800

    typo error

commit 1ee1a3f
Author: tielei <43289893@qq.com>
Date:   Mon Jul 18 13:52:25 2016 +0800

    fix some comments

commit 11a41fb
Author: Otto Kekäläinen <otto@seravo.fi>
Date:   Sun Jul 3 10:23:55 2016 +0100

    Fix spelling in documentation and comments

commit 5fb5d82
Author: francischan <f1ancis621@gmail.com>
Date:   Tue Jun 28 00:19:33 2016 +0800

    Fix outdated comments about redis.c file.
    It should now refer to server.c file.

commit 6b254bc
Author: lmatt-bit <lmatt123n@gmail.com>
Date:   Thu Apr 21 21:45:58 2016 +0800

    Refine the comment of dictRehashMilliseconds func

SLAVECONF->REPLCONF in comment - by andyli029

commit ee9869f
Author: clark.kang <charsyam@naver.com>
Date:   Tue Mar 22 11:09:51 2016 +0900

    fix typos

commit f7b3b11
Author: Harisankar H <harisankarh@gmail.com>
Date:   Wed Mar 9 11:49:42 2016 +0530

    Typo correction: "faield" --> "failed"

    Typo correction: "faield" --> "failed"

commit 3fd40fc
Author: Itamar Haber <itamar@redislabs.com>
Date:   Thu Feb 25 10:31:51 2016 +0200

    Fixes a typo in comments

commit 621c160
Author: Prayag Verma <prayag.verma@gmail.com>
Date:   Mon Feb 1 12:36:20 2016 +0530

    Fix typo in Readme.md

    Spelling mistakes -
    `eviciton` > `eviction`
    `familar` > `familiar`

commit d7d07d6
Author: WonCheol Lee <toctoc21c@gmail.com>
Date:   Wed Dec 30 15:11:34 2015 +0900

    Typo fixed

commit a4dade7
Author: Felix Bünemann <buenemann@louis.info>
Date:   Mon Dec 28 11:02:55 2015 +0100

    [ci skip] Improve supervised upstart config docs

    This mentions that "expect stop" is required for supervised upstart
    to work correctly. See http://upstart.ubuntu.com/cookbook/#expect-stop
    for an explanation.

commit d9caba9
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:30:03 2015 +1100

    README: Remove trailing whitespace

commit 72d42e5
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:29:32 2015 +1100

    README: Fix typo. th => the

commit dd6e957
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:29:20 2015 +1100

    README: Fix typo. familar => familiar

commit 3a12b23
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:28:54 2015 +1100

    README: Fix typo. eviciton => eviction

commit 2d1d03b
Author: daurnimator <quae@daurnimator.com>
Date:   Mon Dec 21 18:21:45 2015 +1100

    README: Fix typo. sever => server

commit 3973b06
Author: Itamar Haber <itamar@garantiadata.com>
Date:   Sat Dec 19 17:01:20 2015 +0200

    Typo fix

commit 4f2e460
Author: Steve Gao <fu@2token.com>
Date:   Fri Dec 4 10:22:05 2015 +0800

    Update README - fix typos

commit b21667c
Author: binyan <binbin.yan@nokia.com>
Date:   Wed Dec 2 22:48:37 2015 +0800

    delete redundancy color judge in sdscatcolor

commit 88894c7
Author: binyan <binbin.yan@nokia.com>
Date:   Wed Dec 2 22:14:42 2015 +0800

    the example output shoule be HelloWorld

commit 2763470
Author: binyan <binbin.yan@nokia.com>
Date:   Wed Dec 2 17:41:39 2015 +0800

    modify error word keyevente
Signed-off-by: binyan <binbin.yan@nokia.com>

commit 0847b3d
Author: Bruno Martins <bscmartins@gmail.com>
Date:   Wed Nov 4 11:37:01 2015 +0000

    typo

commit bbb9e9e
Author: dawedawe <dawedawe@gmx.de>
Date:   Fri Mar 27 00:46:41 2015 +0100

    typo: zimap -> zipmap

commit 5ed297e
Author: Axel Advento <badwolf.bloodseeker.rev@gmail.com>
Date:   Tue Mar 3 15:58:29 2015 +0800

    Fix 'salve' typos to 'slave'

commit edec9d6
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date:   Wed Jun 12 14:12:47 2019 +0200

    Update README.md
Co-Authored-By: Qix <Qix-@users.noreply.github.com>

commit 692a7af
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date:   Tue May 28 14:32:04 2019 +0200

    grammar

commit d962b0a
Author: Nick Frost <nickfrostatx@gmail.com>
Date:   Wed Jul 20 15:17:12 2016 -0700

    Minor grammar fix

commit 24fff01aaccaf5956973ada8c50ceb1462e211c6 (typos)
Author: Chad Miller <chadm@squareup.com>
Date:   Tue Sep 8 13:46:11 2020 -0400

    Fix faulty comment about operation of unlink()

commit 3cd5c1f3326c52aa552ada7ec797c6bb16452355
Author: Kevin <kevin.xgr@gmail.com>
Date:   Wed Nov 20 00:13:50 2019 +0800

    Fix typo in server.c.

From a83af59 Mon Sep 17 00:00:00 2001
From: wuwo <wuwo@wacai.com>
Date: Fri, 17 Mar 2017 20:37:45 +0800
Subject: [PATCH] falure to failure

From c961896 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=B7=A6=E6=87=B6?= <veficos@gmail.com>
Date: Sat, 27 May 2017 15:33:04 +0800
Subject: [PATCH] fix typo

From e600ef2 Mon Sep 17 00:00:00 2001
From: "rui.zou" <rui.zou@yunify.com>
Date: Sat, 30 Sep 2017 12:38:15 +0800
Subject: [PATCH] fix a typo

From c7d07fa Mon Sep 17 00:00:00 2001
From: Alexandre Perrin <alex@kaworu.ch>
Date: Thu, 16 Aug 2018 10:35:31 +0200
Subject: [PATCH] deps README.md typo

From b25cb67 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 10:55:37 +0300
Subject: [PATCH 1/2] fix typos in header

From ad28ca6 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 11:02:36 +0300
Subject: [PATCH 2/2] fix typos

commit 34924cdedd8552466fc22c1168d49236cb7ee915
Author: Adrian Lynch <adi_ady_ade@hotmail.com>
Date:   Sat Apr 4 21:59:15 2015 +0100

    Typos fixed

commit fd2a1e7
Author: Jan <jsteemann@users.noreply.github.com>
Date:   Sat Oct 27 19:13:01 2018 +0200

    Fix typos

    Fix typos

commit e14e47c1a234b53b0e103c5f6a1c61481cbcbb02
Author: Andy Lester <andy@petdance.com>
Date:   Fri Aug 2 22:30:07 2019 -0500

    Fix multiple misspellings of "following"

commit 79b948ce2dac6b453fe80995abbcaac04c213d5a
Author: Andy Lester <andy@petdance.com>
Date:   Fri Aug 2 22:24:28 2019 -0500

    Fix misspelling of create-cluster

commit 1fffde52666dc99ab35efbd31071a4c008cb5a71
Author: Andy Lester <andy@petdance.com>
Date:   Wed Jul 31 17:57:56 2019 -0500

    Fix typos

commit 204c9ba9651e9e05fd73936b452b9a30be456cfe
Author: Xiaobo Zhu <xiaobo.zhu@shopee.com>
Date:   Tue Aug 13 22:19:25 2019 +0800

    fix typos

Squashed commit of the following:

commit 1d9aaf8
Author: danmedani <danmedani@gmail.com>
Date:   Sun Aug 2 11:40:26 2015 -0700

README typo fix.

Squashed commit of the following:

commit 32bfa7c
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date:   Mon Jul 6 21:15:08 2015 +0200

Fixed grammer

Squashed commit of the following:

commit b24f69c
Author: Sisir Koppaka <sisir.koppaka@gmail.com>
Date:   Mon Mar 2 22:38:45 2015 -0500

utils/hashtable/rehashing.c: Fix typos

Squashed commit of the following:

commit 4e04082
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date:   Mon Mar 23 08:22:21 2015 +0000

Small config file documentation improvements

Squashed commit of the following:

commit acb8773
Author: ctd1500 <ctd1500@gmail.com>
Date:   Fri May 8 01:52:48 2015 -0700

Typo and grammar fixes in readme

commit 2eb75b6
Author: ctd1500 <ctd1500@gmail.com>
Date:   Fri May 8 01:36:18 2015 -0700

fixed redis.conf comment

Squashed commit of the following:

commit a8249a2
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Date:   Fri Dec 11 11:39:52 2015 +0530

Revise correction of typos.

Squashed commit of the following:

commit 3c02028
Author: zhaojun11 <zhaojun11@jd.com>
Date:   Wed Jan 17 19:05:28 2018 +0800

Fix typos include two code typos in cluster.c and latency.c

Squashed commit of the following:

commit 9dba47c
Author: q191201771 <191201771@qq.com>
Date:   Sat Jan 4 11:31:04 2020 +0800

fix function listCreate comment in adlist.c

Update src/server.c

commit 2c7c2cb536e78dd211b1ac6f7bda00f0f54faaeb
Author: charpty <charpty@gmail.com>
Date:   Tue May 1 23:16:59 2018 +0800

    server.c typo: modules system dictionary type comment
Signed-off-by: charpty <charpty@gmail.com>

commit a8395323fb63cb59cb3591cb0f0c8edb7c29a680
Author: Itamar Haber <itamar@redislabs.com>
Date:   Sun May 6 00:25:18 2018 +0300

    Updates test_helper.tcl's help with undocumented options

    Specifically:

    * Host
    * Port
    * Client

commit bde6f9ced15755cd6407b4af7d601b030f36d60b
Author: wxisme <850885154@qq.com>
Date:   Wed Aug 8 15:19:19 2018 +0800

    fix comments in deps files

commit 3172474ba991532ab799ee1873439f3402412331
Author: wxisme <850885154@qq.com>
Date:   Wed Aug 8 14:33:49 2018 +0800

    fix some comments

commit 01b6f2b6858b5cf2ce4ad5092d2c746e755f53f0
Author: Thor Juhasz <thor@juhasz.pro>
Date:   Sun Nov 18 14:37:41 2018 +0100

    Minor fixes to comments

    Found some parts a little unclear on a first read, which prompted me to have a better look at the file and fix some minor things I noticed.
    Fixing minor typos and grammar. There are no changes to configuration options.
    These changes are only meant to help the user better understand the explanations to the various configuration options

(cherry picked from commit 1c710385)
parent 03b59cd5
@@ -20,6 +20,10 @@ each source file that you contribute.
 http://stackoverflow.com/questions/tagged/redis
+Issues and pull requests for documentation belong on the redis-doc repo:
+https://github.com/redis/redis-doc
 # How to provide a patch for a new feature
 1. If it is a major feature or a semantical change, please don't start coding
......
@@ -3,22 +3,22 @@ This README is just a fast *quick start* document. You can find more detailed do
 What is Redis?
 --------------
-Redis is often referred as a *data structures* server. What this means is that Redis provides access to mutable data structures via a set of commands, which are sent using a *server-client* model with TCP sockets and a simple protocol. So different processes can query and modify the same data structures in a shared way.
+Redis is often referred to as a *data structures* server. What this means is that Redis provides access to mutable data structures via a set of commands, which are sent using a *server-client* model with TCP sockets and a simple protocol. So different processes can query and modify the same data structures in a shared way.
 Data structures implemented into Redis have a few special properties:
-* Redis cares to store them on disk, even if they are always served and modified into the server memory. This means that Redis is fast, but that is also non-volatile.
+* Redis cares to store them on disk, even if they are always served and modified into the server memory. This means that Redis is fast, but that it is also non-volatile.
-* Implementation of data structures stress on memory efficiency, so data structures inside Redis will likely use less memory compared to the same data structure modeled using an high level programming language.
+* The implementation of data structures emphasizes memory efficiency, so data structures inside Redis will likely use less memory compared to the same data structure modelled using a high-level programming language.
-* Redis offers a number of features that are natural to find in a database, like replication, tunable levels of durability, cluster, high availability.
+* Redis offers a number of features that are natural to find in a database, like replication, tunable levels of durability, clustering, and high availability.
-Another good example is to think of Redis as a more complex version of memcached, where the operations are not just SETs and GETs, but operations to work with complex data types like Lists, Sets, ordered data structures, and so forth.
+Another good example is to think of Redis as a more complex version of memcached, where the operations are not just SETs and GETs, but operations that work with complex data types like Lists, Sets, ordered data structures, and so forth.
 If you want to know more, this is a list of selected starting points:
 * Introduction to Redis data types. http://redis.io/topics/data-types-intro
 * Try Redis directly inside your browser. http://try.redis.io
 * The full list of Redis commands. http://redis.io/commands
-* There is much more inside the Redis official documentation. http://redis.io/documentation
+* There is much more inside the official Redis documentation. http://redis.io/documentation
 Building Redis
 --------------
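The README hunk above notes that commands are sent to Redis over plain TCP sockets using a simple protocol (RESP). As a quick, hedged illustration of that sentence, here is a minimal self-contained C sketch that sends a `PING` and prints the reply; it is not part of the diff, and it assumes a server is already listening on the stock default address 127.0.0.1:6379.

```c
/* Minimal RESP "PING" client -- illustrative sketch only, assuming a Redis
 * server is listening on 127.0.0.1:6379 (the stock default). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(6379);                     /* default Redis port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* RESP encoding of the one-element command array ["PING"]. */
    const char *ping = "*1\r\n$4\r\nPING\r\n";
    if (write(fd, ping, strlen(ping)) < 0) { perror("write"); return 1; }

    char reply[64] = {0};
    ssize_t n = read(fd, reply, sizeof(reply) - 1);  /* expect "+PONG\r\n" */
    if (n > 0) printf("server replied: %s", reply);

    close(fd);
    return 0;
}
```

Built with any C compiler (for example `cc resp_ping.c`), it should print `+PONG` when a local server is running.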
@@ -29,7 +29,7 @@ and 64 bit systems.
 It may compile on Solaris derived systems (for instance SmartOS) but our
 support for this platform is *best effort* and Redis is not guaranteed to
-work as well as in Linux, OSX, and \*BSD there.
+work as well as in Linux, OSX, and \*BSD.
 It is as simple as:
@@ -63,7 +63,7 @@ installed):
 Fixing build problems with dependencies or cached build options
 ---------
-Redis has some dependencies which are included into the `deps` directory.
+Redis has some dependencies which are included in the `deps` directory.
 `make` does not automatically rebuild dependencies even if something in
 the source code of dependencies changes.
@@ -90,7 +90,7 @@ with a 64 bit target, or the other way around, you need to perform a
 In case of build errors when trying to build a 32 bit binary of Redis, try
 the following steps:
-* Install the packages libc6-dev-i386 (also try g++-multilib).
+* Install the package libc6-dev-i386 (also try g++-multilib).
 * Try using the following command line instead of `make 32bit`:
 `make CFLAGS="-m32 -march=native" LDFLAGS="-m32"`
@@ -114,15 +114,15 @@ To compile against jemalloc on Mac OS X systems, use:
 Verbose build
 -------------
-Redis will build with a user friendly colorized output by default.
+Redis will build with a user-friendly colorized output by default.
-If you want to see a more verbose output use the following:
+If you want to see a more verbose output, use the following:
 % make V=1
 Running Redis
 -------------
-To run Redis with the default configuration just type:
+To run Redis with the default configuration, just type:
 % cd src
 % ./redis-server
@@ -173,7 +173,7 @@ You can find the list of all the available commands at http://redis.io/commands.
 Installing Redis
 -----------------
-In order to install Redis binaries into /usr/local/bin just use:
+In order to install Redis binaries into /usr/local/bin, just use:
 % make install
@@ -182,8 +182,8 @@ different destination.
 Make install will just install binaries in your system, but will not configure
 init scripts and configuration files in the appropriate place. This is not
-needed if you want just to play a bit with Redis, but if you are installing
+needed if you just want to play a bit with Redis, but if you are installing
-it the proper way for a production system, we have a script doing this
+it the proper way for a production system, we have a script that does this
 for Ubuntu and Debian systems:
 % cd utils
@@ -201,7 +201,7 @@ You'll be able to stop and start Redis using the script named
 Code contributions
 -----------------
-Note: by contributing code to the Redis project in any form, including sending
+Note: By contributing code to the Redis project in any form, including sending
 a pull request via Github, a code fragment or patch via private email or
 public discussion groups, you agree to release your code under the terms
 of the BSD license that you can find in the [COPYING][1] file included in the Redis
@@ -251,7 +251,7 @@ of complexity incrementally.
 Note: lately Redis was refactored quite a bit. Function names and file
 names have been changed, so you may find that this documentation reflects the
-`unstable` branch more closely. For instance in Redis 3.0 the `server.c`
+`unstable` branch more closely. For instance, in Redis 3.0 the `server.c`
 and `server.h` files were named `redis.c` and `redis.h`. However the overall
 structure is the same. Keep in mind that all the new developments and pull
 requests should be performed against the `unstable` branch.
@@ -296,7 +296,7 @@ The client structure defines a *connected client*:
 * The `fd` field is the client socket file descriptor.
 * `argc` and `argv` are populated with the command the client is executing, so that functions implementing a given Redis command can read the arguments.
 * `querybuf` accumulates the requests from the client, which are parsed by the Redis server according to the Redis protocol and executed by calling the implementations of the commands the client is executing.
-* `reply` and `buf` are dynamic and static buffers that accumulate the replies the server sends to the client. These buffers are incrementally written to the socket as soon as the file descriptor is writable.
+* `reply` and `buf` are dynamic and static buffers that accumulate the replies the server sends to the client. These buffers are incrementally written to the socket as soon as the file descriptor is writeable.
 As you can see in the client structure above, arguments in a command
 are described as `robj` structures. The following is the full `robj`
@@ -329,13 +329,13 @@ This is the entry point of the Redis server, where the `main()` function
 is defined. The following are the most important steps in order to startup
 the Redis server.
-* `initServerConfig()` setups the default values of the `server` structure.
+* `initServerConfig()` sets up the default values of the `server` structure.
 * `initServer()` allocates the data structures needed to operate, setup the listening socket, and so forth.
 * `aeMain()` starts the event loop which listens for new connections.
 There are two special functions called periodically by the event loop:
-1. `serverCron()` is called periodically (according to `server.hz` frequency), and performs tasks that must be performed from time to time, like checking for timedout clients.
+1. `serverCron()` is called periodically (according to `server.hz` frequency), and performs tasks that must be performed from time to time, like checking for timed out clients.
 2. `beforeSleep()` is called every time the event loop fired, Redis served a few requests, and is returning back into the event loop.
 Inside server.c you can find code that handles other vital things of the Redis server:
@@ -352,16 +352,16 @@ This file defines all the I/O functions with clients, masters and replicas
 (which in Redis are just special clients):
 * `createClient()` allocates and initializes a new client.
-* the `addReply*()` family of functions are used by commands implementations in order to append data to the client structure, that will be transmitted to the client as a reply for a given command executed.
+* the `addReply*()` family of functions are used by command implementations in order to append data to the client structure, that will be transmitted to the client as a reply for a given command executed.
 * `writeToClient()` transmits the data pending in the output buffers to the client and is called by the *writable event handler* `sendReplyToClient()`.
-* `readQueryFromClient()` is the *readable event handler* and accumulates data from read from the client into the query buffer.
+* `readQueryFromClient()` is the *readable event handler* and accumulates data read from the client into the query buffer.
 * `processInputBuffer()` is the entry point in order to parse the client query buffer according to the Redis protocol. Once commands are ready to be processed, it calls `processCommand()` which is defined inside `server.c` in order to actually execute the command.
 * `freeClient()` deallocates, disconnects and removes a client.
 aof.c and rdb.c
 ---
-As you can guess from the names these files implement the RDB and AOF
+As you can guess from the names, these files implement the RDB and AOF
 persistence for Redis. Redis uses a persistence model based on the `fork()`
 system call in order to create a thread with the same (shared) memory
 content of the main Redis thread. This secondary thread dumps the content
@@ -373,13 +373,13 @@ The implementation inside `aof.c` has additional functions in order to
 implement an API that allows commands to append new commands into the AOF
 file as clients execute them.
-The `call()` function defined inside `server.c` is responsible to call
+The `call()` function defined inside `server.c` is responsible for calling
 the functions that in turn will write the commands into the AOF.
 db.c
 ---
-Certain Redis commands operate on specific data types, others are general.
+Certain Redis commands operate on specific data types; others are general.
 Examples of generic commands are `DEL` and `EXPIRE`. They operate on keys
 and not on their values specifically. All those generic commands are
 defined inside `db.c`.
@@ -387,7 +387,7 @@ defined inside `db.c`.
 Moreover `db.c` implements an API in order to perform certain operations
 on the Redis dataset without directly accessing the internal data structures.
-The most important functions inside `db.c` which are used in many commands
+The most important functions inside `db.c` which are used in many command
 implementations are the following:
 * `lookupKeyRead()` and `lookupKeyWrite()` are used in order to get a pointer to the value associated to a given key, or `NULL` if the key does not exist.
@@ -405,7 +405,7 @@ The `robj` structure defining Redis objects was already described. Inside
 a basic level, like functions to allocate new objects, handle the reference
 counting and so forth. Notable functions inside this file:
-* `incrRefcount()` and `decrRefCount()` are used in order to increment or decrement an object reference count. When it drops to 0 the object is finally freed.
+* `incrRefCount()` and `decrRefCount()` are used in order to increment or decrement an object reference count. When it drops to 0 the object is finally freed.
 * `createObject()` allocates a new object. There are also specialized functions to allocate string objects having a specific content, like `createStringObjectFromLongLong()` and similar functions.
 This file also implements the `OBJECT` command.
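The object.c hunk above describes the reference-counting life cycle behind `robj` (`incrRefCount()`, `decrRefCount()`, free when the count drops to zero). The following self-contained C sketch of that general pattern is mine, not Redis code: the `toyObject` type and `toy*` function names are hypothetical and only meant to make the described life cycle concrete.

```c
/* Generic reference-counting pattern, as described for robj above.
 * Hypothetical types and names for illustration; not the actual Redis code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct toyObject {
    int refcount;
    char *payload;
} toyObject;

toyObject *toyCreate(const char *s) {
    toyObject *o = malloc(sizeof(*o));
    o->refcount = 1;              /* the creator holds the first reference */
    o->payload = strdup(s);
    return o;
}

void toyIncrRefCount(toyObject *o) { o->refcount++; }

void toyDecrRefCount(toyObject *o) {
    if (--o->refcount == 0) {     /* last reference gone: free the object */
        free(o->payload);
        free(o);
    }
}

int main(void) {
    toyObject *o = toyCreate("shared value");
    toyIncrRefCount(o);           /* a second owner keeps the object alive */
    toyDecrRefCount(o);           /* first owner releases its reference */
    printf("still alive: %s (refcount=%d)\n", o->payload, o->refcount);
    toyDecrRefCount(o);           /* second owner releases: object is freed */
    return 0;
}
```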
@@ -429,12 +429,12 @@ replicas, or to continue the replication after a disconnection.
 Other C files
 ---
-* `t_hash.c`, `t_list.c`, `t_set.c`, `t_string.c`, `t_zset.c` and `t_stream.c` contains the implementation of the Redis data types. They implement both an API to access a given data type, and the client commands implementations for these data types.
+* `t_hash.c`, `t_list.c`, `t_set.c`, `t_string.c`, `t_zset.c` and `t_stream.c` contains the implementation of the Redis data types. They implement both an API to access a given data type, and the client command implementations for these data types.
 * `ae.c` implements the Redis event loop, it's a self contained library which is simple to read and understand.
 * `sds.c` is the Redis string library, check http://github.com/antirez/sds for more information.
 * `anet.c` is a library to use POSIX networking in a simpler way compared to the raw interface exposed by the kernel.
 * `dict.c` is an implementation of a non-blocking hash table which rehashes incrementally.
-* `scripting.c` implements Lua scripting. It is completely self contained from the rest of the Redis implementation and is simple enough to understand if you are familar with the Lua API.
+* `scripting.c` implements Lua scripting. It is completely self-contained and isolated from the rest of the Redis implementation and is simple enough to understand if you are familiar with the Lua API.
 * `cluster.c` implements the Redis Cluster. Probably a good read only after being very familiar with the rest of the Redis code base. If you want to read `cluster.c` make sure to read the [Redis Cluster specification][3].
 [3]: http://redis.io/topics/cluster-spec
@@ -460,12 +460,12 @@ top comment inside `server.c`.
 After the command operates in some way, it returns a reply to the client,
 usually using `addReply()` or a similar function defined inside `networking.c`.
-There are tons of commands implementations inside the Redis source code
+There are tons of command implementations inside the Redis source code
-that can serve as examples of actual commands implementations. To write
+that can serve as examples of actual commands implementations. Writing
-a few toy commands can be a good exercise to familiarize with the code base.
+a few toy commands can be a good exercise to get familiar with the code base.
 There are also many other files not described here, but it is useless to
-cover everything. We want to just help you with the first steps.
+cover everything. We just want to help you with the first steps.
 Eventually you'll find your way inside the Redis code base :-)
 Enjoy!
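The hunk above ends by suggesting that writing a few toy commands is a good way to get familiar with the code base. Below is a rough sketch of what such a toy command could look like, assuming the internal `client`/`addReply*()` API described earlier in this README. The command name is hypothetical, the sketch is not compilable on its own, and it would still need to be registered in the command table in `server.c` (whose exact fields vary by version).

```c
/* Hypothetical toy command sketch (not part of the Redis source). It leans on
 * the internals described above: a command handler receives the client,
 * inspects its arguments via argc/argv, and answers through addReply*(). */
#include "server.h"   /* internal header providing client and addReply*() */

/* HELLOTOY -- replies with a fixed bulk string greeting. */
void hellotoyCommand(client *c) {
    if (c->argc != 1) {
        addReplyError(c, "wrong number of arguments for 'hellotoy' command");
        return;
    }
    addReplyBulkCString(c, "Hello from a toy command!");
}
```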
@@ -21,7 +21,7 @@ just following tose steps:
 1. Remove the jemalloc directory.
 2. Substitute it with the new jemalloc source tree.
-3. Edit the Makefile localted in the same directory as the README you are
+3. Edit the Makefile located in the same directory as the README you are
 reading, and change the --with-version in the Jemalloc configure script
 options with the version you are using. This is required because otherwise
 Jemalloc configuration script is broken and will not work nested in another
@@ -33,7 +33,7 @@ If you want to upgrade Jemalloc while also providing support for
 active defragmentation, in addition to the above steps you need to perform
 the following additional steps:
-5. In Jemalloc three, file `include/jemalloc/jemalloc_macros.h.in`, make sure
+5. In Jemalloc tree, file `include/jemalloc/jemalloc_macros.h.in`, make sure
 to add `#define JEMALLOC_FRAG_HINT`.
 6. Implement the function `je_get_defrag_hint()` inside `src/jemalloc.c`. You
 can see how it is implemented in the current Jemalloc source tree shipped
@@ -49,7 +49,7 @@ Hiredis uses the SDS string library, that must be the same version used inside R
 1. Check with diff if hiredis API changed and what impact it could have in Redis.
 2. Make sure that the SDS library inside Hiredis and inside Redis are compatible.
 3. After the upgrade, run the Redis Sentinel test.
-4. Check manually that redis-cli and redis-benchmark behave as expecteed, since we have no tests for CLI utilities currently.
+4. Check manually that redis-cli and redis-benchmark behave as expected, since we have no tests for CLI utilities currently.
 Linenoise
 ---
@@ -77,6 +77,6 @@ and our version:
 1. Makefile is modified to allow a different compiler than GCC.
 2. We have the implementation source code, and directly link to the following external libraries: `lua_cjson.o`, `lua_struct.o`, `lua_cmsgpack.o` and `lua_bit.o`.
-3. There is a security fix in `ldo.c`, line 498: The check for `LUA_SIGNATURE[0]` is removed in order toa void direct bytecode execution.
+3. There is a security fix in `ldo.c`, line 498: The check for `LUA_SIGNATURE[0]` is removed in order to avoid direct bytecode execution.
@@ -625,7 +625,7 @@ static void refreshMultiLine(struct linenoiseState *l) {
 rpos2 = (plen+l->pos+l->cols)/l->cols; /* current cursor relative row. */
 lndebug("rpos2 %d", rpos2);
-/* Go up till we reach the expected positon. */
+/* Go up till we reach the expected position. */
 if (rows-rpos2 > 0) {
 lndebug("go-up %d", rows-rpos2);
 snprintf(seq,64,"\x1b[%dA", rows-rpos2);
@@ -767,7 +767,7 @@ void linenoiseEditBackspace(struct linenoiseState *l) {
 }
 }
-/* Delete the previosu word, maintaining the cursor at the start of the
+/* Delete the previous word, maintaining the cursor at the start of the
 * current word. */
 void linenoiseEditDeletePrevWord(struct linenoiseState *l) {
 size_t old_pos = l->pos;
......
@@ -24,7 +24,7 @@
 # to customize a few per-server settings. Include files can include
 # other files, so use this wisely.
 #
-# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
+# Note that option "include" won't be rewritten by command "CONFIG REWRITE"
 # from admin or Redis Sentinel. Since Redis always uses the last processed
 # line as value of a configuration directive, you'd better put includes
 # at the beginning of this file to avoid overwriting config change at runtime.
@@ -46,7 +46,7 @@
 ################################## NETWORK #####################################
 # By default, if no "bind" configuration directive is specified, Redis listens
-# for connections from all the network interfaces available on the server.
+# for connections from all available network interfaces on the host machine.
 # It is possible to listen to just one or multiple selected interfaces using
 # the "bind" configuration directive, followed by one or more IP addresses.
 #
@@ -58,13 +58,12 @@
 # ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
 # internet, binding to all the interfaces is dangerous and will expose the
 # instance to everybody on the internet. So by default we uncomment the
-# following bind directive, that will force Redis to listen only into
-# the IPv4 loopback interface address (this means Redis will be able to
-# accept connections only from clients running into the same computer it
-# is running).
+# following bind directive, that will force Redis to listen only on the
+# IPv4 loopback interface address (this means Redis will only be able to
+# accept client connections from the same host that it is running on).
 #
 # IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
-# JUST COMMENT THE FOLLOWING LINE.
+# JUST COMMENT OUT THE FOLLOWING LINE.
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 bind 127.0.0.1
@@ -93,8 +92,8 @@ port 6379
 # TCP listen() backlog.
 #
-# In high requests-per-second environments you need an high backlog in order
-# to avoid slow clients connections issues. Note that the Linux kernel
+# In high requests-per-second environments you need a high backlog in order
+# to avoid slow clients connection issues. Note that the Linux kernel
 # will silently truncate it to the value of /proc/sys/net/core/somaxconn so
 # make sure to raise both the value of somaxconn and tcp_max_syn_backlog
 # in order to get the desired effect.
@@ -118,8 +117,8 @@ timeout 0
 # of communication. This is useful for two reasons:
 #
 # 1) Detect dead peers.
-# 2) Take the connection alive from the point of view of network
-# equipment in the middle.
+# 2) Force network equipment in the middle to consider the connection to be
+# alive.
 #
 # On Linux, the specified value (in seconds) is the period used to send ACKs.
 # Note that to close the connection the double of the time is needed.
@@ -228,11 +227,12 @@ daemonize no
 # supervision tree. Options:
 # supervised no - no supervision interaction
 # supervised upstart - signal upstart by putting Redis into SIGSTOP mode
+# requires "expect stop" in your upstart job config
 # supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
 # supervised auto - detect upstart or systemd method based on
 # UPSTART_JOB or NOTIFY_SOCKET environment variables
 # Note: these supervision methods only signal "process is ready."
-# They do not enable continuous liveness pings back to your supervisor.
+# They do not enable continuous pings back to your supervisor.
 supervised no
 # If a pid file is specified, Redis writes it where specified at startup
@@ -291,7 +291,7 @@ always-show-logo yes
 # Will save the DB if both the given number of seconds and the given
 # number of write operations against the DB occurred.
 #
-# In the example below the behaviour will be to save:
+# In the example below the behavior will be to save:
 # after 900 sec (15 min) if at least 1 key changed
 # after 300 sec (5 min) if at least 10 keys changed
 # after 60 sec if at least 10000 keys changed
@@ -324,7 +324,7 @@ save 60 10000
 stop-writes-on-bgsave-error yes
 # Compress string objects using LZF when dump .rdb databases?
-# For default that's set to 'yes' as it's almost always a win.
+# By default compression is enabled as it's almost always a win.
 # If you want to save some CPU in the saving child set it to 'no' but
 # the dataset will likely be bigger if you have compressible values or keys.
 rdbcompression yes
@@ -412,11 +412,11 @@ dir ./
 # still reply to client requests, possibly with out of date data, or the
 # data set may just be empty if this is the first synchronization.
 #
-# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
-# an error "SYNC with master in progress" to all the kind of commands
-# but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
-# SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
-# COMMAND, POST, HOST: and LATENCY.
+# 2) If replica-serve-stale-data is set to 'no' the replica will reply with
+# an error "SYNC with master in progress" to all commands except:
+# INFO, REPLICAOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,
+# UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,
+# HOST and LATENCY.
 #
 replica-serve-stale-data yes
...@@ -487,7 +487,7 @@ repl-diskless-sync-delay 5 ...@@ -487,7 +487,7 @@ repl-diskless-sync-delay 5
# #
# Replica can load the RDB it reads from the replication link directly from the # Replica can load the RDB it reads from the replication link directly from the
# socket, or store the RDB to a file and read that file after it was completely # socket, or store the RDB to a file and read that file after it was completely
# recived from the master. # received from the master.
# #
# In many cases the disk is slower than the network, and storing and loading # In many cases the disk is slower than the network, and storing and loading
# the RDB file may increase replication time (and even increase the master's # the RDB file may increase replication time (and even increase the master's
...@@ -517,7 +517,8 @@ repl-diskless-load disabled ...@@ -517,7 +517,8 @@ repl-diskless-load disabled
# #
# It is important to make sure that this value is greater than the value # It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected # specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica. # every time there is low traffic between the master and the replica. The default
# value is 60 seconds.
# #
# repl-timeout 60 # repl-timeout 60
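As a concrete illustration of the constraint above (both values are the shipped defaults, not something this commit changes), repl-timeout simply has to stay above the ping period:

    repl-ping-replica-period 10
    repl-timeout 60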
@@ -542,21 +543,21 @@ repl-disable-tcp-nodelay no
 # partial resync is enough, just passing the portion of data the replica
 # missed while disconnected.
 #
-# The bigger the replication backlog, the longer the time the replica can be
-# disconnected and later be able to perform a partial resynchronization.
+# The bigger the replication backlog, the longer the replica can endure the
+# disconnect and later be able to perform a partial resynchronization.
 #
-# The backlog is only allocated once there is at least a replica connected.
+# The backlog is only allocated if there is at least one replica connected.
 #
 # repl-backlog-size 1mb
-# After a master has no longer connected replicas for some time, the backlog
-# will be freed. The following option configures the amount of seconds that
-# need to elapse, starting from the time the last replica disconnected, for
-# the backlog buffer to be freed.
+# After a master has no connected replicas for some time, the backlog will be
+# freed. The following option configures the amount of seconds that need to
+# elapse, starting from the time the last replica disconnected, for the backlog
+# buffer to be freed.
 #
 # Note that replicas never free the backlog for timeout, since they may be
 # promoted to masters later, and should be able to correctly "partially
-# resynchronize" with the replicas: hence they should always accumulate backlog.
+# resynchronize" with other replicas: hence they should always accumulate backlog.
 #
 # A value of 0 means to never release the backlog.
 #
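For illustration (usual defaults shown, not part of the diff): the size is repl-backlog-size, and the timed release described above is governed by repl-backlog-ttl:

    repl-backlog-size 1mb
    repl-backlog-ttl 3600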
@@ -606,8 +607,8 @@ replica-priority 100
 # Another place where this info is available is in the output of the
 # "ROLE" command of a master.
 #
-# The listed IP and address normally reported by a replica is obtained
-# in the following way:
+# The listed IP address and port normally reported by a replica is
+# obtained in the following way:
 #
 # IP: The address is auto detected by checking the peer address
 # of the socket used by the replica to connect with the master.
@@ -617,7 +618,7 @@ replica-priority 100
 # listen for connections.
 #
 # However when port forwarding or Network Address Translation (NAT) is
-# used, the replica may be actually reachable via different IP and port
+# used, the replica may actually be reachable via different IP and port
 # pairs. The following two options can be used by a replica in order to
 # report to its master a specific set of IP and port, so that both INFO
 # and ROLE will report those values.
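The two options referred to above are replica-announce-ip and replica-announce-port; a sketch with made-up NAT-side values:

    replica-announce-ip 5.5.5.5
    replica-announce-port 1234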
@@ -634,7 +635,7 @@ replica-priority 100
 # This is implemented using an invalidation table that remembers, using
 # 16 millions of slots, what clients may have certain subsets of keys. In turn
 # this is used in order to send invalidation messages to clients. Please
-# to understand more about the feature check this page:
+# check this page to understand more about the feature:
 #
 # https://redis.io/topics/client-side-caching
 #
@@ -666,7 +667,7 @@ replica-priority 100
 ################################## SECURITY ###################################
-# Warning: since Redis is pretty fast an outside user can try up to
+# Warning: since Redis is pretty fast, an outside user can try up to
 # 1 million passwords per second against a modern box. This means that you
 # should use very strong passwords, otherwise they will be very easy to break.
 # Note that because the password is really a shared secret between the client
@@ -690,7 +691,7 @@ replica-priority 100
 # AUTH (or the HELLO command AUTH option) in order to be authenticated and
 # start to work.
 #
-# The ACL rules that describe what an user can do are the following:
+# The ACL rules that describe what a user can do are the following:
 #
 # on Enable the user: it is possible to authenticate as this user.
 # off Disable the user: it's no longer possible to authenticate
@@ -718,7 +719,7 @@ replica-priority 100
 # It is possible to specify multiple patterns.
 # allkeys Alias for ~*
 # resetkeys Flush the list of allowed keys patterns.
-# ><password> Add this passowrd to the list of valid password for the user.
+# ><password> Add this password to the list of valid password for the user.
 # For example >mypass will add "mypass" to the list.
 # This directive clears the "nopass" flag (see later).
 # <<password> Remove this password from the list of valid passwords.
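Putting the rules above together, a hypothetical user definition (the name, key pattern and password are invented for illustration) could look like:

    user worker on >somepassword ~jobs:* +get +set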
@@ -772,7 +773,7 @@ acllog-max-len 128
 #
 # Instead of configuring users here in this file, it is possible to use
 # a stand-alone file just listing users. The two methods cannot be mixed:
-# if you configure users here and at the same time you activate the exteranl
+# if you configure users here and at the same time you activate the external
 # ACL file, the server will refuse to start.
 #
 # The format of the external ACL user file is exactly the same as the
@@ -780,7 +781,7 @@ acllog-max-len 128
 #
 # aclfile /etc/redis/users.acl
-# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatiblity
+# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility
 # layer on top of the new ACL system. The option effect will be just setting
 # the password for the default user. Clients will still authenticate using
 # AUTH <password> as usually, or more explicitly with AUTH default <password>
@@ -891,8 +892,8 @@ acllog-max-len 128
 # LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
 # algorithms (in order to save memory), so you can tune it for speed or
-# accuracy. For default Redis will check five keys and pick the one that was
-# used less recently, you can change the sample size using the following
+# accuracy. By default Redis will check five keys and pick the one that was
+# used least recently, you can change the sample size using the following
 # configuration directive.
 #
 # The default of 5 produces good enough results. 10 Approximates very closely
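The sample size discussed here is maxmemory-samples; the values below are illustrative, not something this commit changes:

    # Default, a good balance for most workloads:
    maxmemory-samples 5
    # Closer to true LRU, at the cost of more CPU:
    # maxmemory-samples 10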
@@ -932,8 +933,8 @@ acllog-max-len 128
 # it is possible to increase the expire "effort" that is normally set to
 # "1", to a greater value, up to the value "10". At its maximum value the
 # system will use more CPU, longer cycles (and technically may introduce
-# more latency), and will tollerate less already expired keys still present
-# in the system. It's a tradeoff betweeen memory, CPU and latecy.
+# more latency), and will tolerate less already expired keys still present
+# in the system. It's a tradeoff between memory, CPU and latency.
 #
 # active-expire-effort 1
@@ -1001,7 +1002,7 @@ lazyfree-lazy-user-del no
 #
 # Now it is also possible to handle Redis clients socket reads and writes
 # in different I/O threads. Since especially writing is so slow, normally
-# Redis users use pipelining in order to speedup the Redis performances per
+# Redis users use pipelining in order to speed up the Redis performances per
 # core, and spawn multiple instances in order to scale more. Using I/O
 # threads it is possible to easily speedup two times Redis without resorting
 # to pipelining nor sharding of the instance.
@@ -1019,7 +1020,7 @@ lazyfree-lazy-user-del no
 #
 # io-threads 4
 #
-# Setting io-threads to 1 will just use the main thread as usually.
+# Setting io-threads to 1 will just use the main thread as usual.
 # When I/O threads are enabled, we only use threads for writes, that is
 # to thread the write(2) syscall and transfer the client buffers to the
 # socket. However it is also possible to enable threading of reads and
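The read-side threading mentioned above is controlled by a separate switch; an illustrative (not prescriptive) combination would be:

    io-threads 4
    io-threads-do-reads yes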
@@ -1036,7 +1037,7 @@ lazyfree-lazy-user-del no
 #
 # NOTE 2: If you want to test the Redis speedup using redis-benchmark, make
 # sure you also run the benchmark itself in threaded mode, using the
-# --threads option to match the number of Redis theads, otherwise you'll not
+# --threads option to match the number of Redis threads, otherwise you'll not
 # be able to notice the improvements.
 ############################ KERNEL OOM CONTROL ##############################
@@ -1189,8 +1190,8 @@ aof-load-truncated yes
 #
 # [RDB file][AOF tail]
 #
-# When loading Redis recognizes that the AOF file starts with the "REDIS"
-# string and loads the prefixed RDB file, and continues loading the AOF
+# When loading, Redis recognizes that the AOF file starts with the "REDIS"
+# string and loads the prefixed RDB file, then continues loading the AOF
 # tail.
 aof-use-rdb-preamble yes
@@ -1204,7 +1205,7 @@ aof-use-rdb-preamble yes
 #
 # When a long running script exceeds the maximum execution time only the
 # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
-# used to stop a script that did not yet called write commands. The second
+# used to stop a script that did not yet call any write commands. The second
 # is the only way to shut down the server in the case a write command was
 # already issued by the script but the user doesn't want to wait for the natural
 # termination of the script.
@@ -1230,7 +1231,7 @@ lua-time-limit 5000
 # Cluster node timeout is the amount of milliseconds a node must be unreachable
 # for it to be considered in failure state.
-# Most other internal time limits are multiple of the node timeout.
+# Most other internal time limits are a multiple of the node timeout.
 #
 # cluster-node-timeout 15000
@@ -1257,18 +1258,18 @@ lua-time-limit 5000
 # the failover if, since the last interaction with the master, the time
 # elapsed is greater than:
 #
-# (node-timeout * replica-validity-factor) + repl-ping-replica-period
+# (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
 #
-# So for example if node-timeout is 30 seconds, and the replica-validity-factor
+# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor
 # is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
 # replica will not try to failover if it was not able to talk with the master
 # for longer than 310 seconds.
 #
-# A large replica-validity-factor may allow replicas with too old data to failover
+# A large cluster-replica-validity-factor may allow replicas with too old data to failover
 # a master, while a too small value may prevent the cluster from being able to
 # elect a replica at all.
 #
-# For maximum availability, it is possible to set the replica-validity-factor
+# For maximum availability, it is possible to set the cluster-replica-validity-factor
 # to a value of 0, which means, that replicas will always try to failover the
 # master regardless of the last time they interacted with the master.
 # (However they'll always try to apply a delay proportional to their
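Spelling out the example above with the actual directives (the values are illustrative; cluster-node-timeout is expressed in milliseconds):

    cluster-node-timeout 30000
    cluster-replica-validity-factor 10
    repl-ping-replica-period 10
    # data age tolerated before a replica gives up on failing over:
    # (30 s * 10) + 10 s = 310 s; with a factor of 0 the age check is skipped entirely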
@@ -1299,7 +1300,7 @@ lua-time-limit 5000
 # cluster-migration-barrier 1
 # By default Redis Cluster nodes stop accepting queries if they detect there
-# is at least an hash slot uncovered (no available node is serving it).
+# is at least a hash slot uncovered (no available node is serving it).
 # This way if the cluster is partially down (for example a range of hash slots
 # are no longer covered) all the cluster becomes, eventually, unavailable.
 # It automatically returns available as soon as all the slots are covered again.
@@ -1354,7 +1355,7 @@ lua-time-limit 5000
 # * cluster-announce-port
 # * cluster-announce-bus-port
 #
-# Each instruct the node about its address, client port, and cluster message
+# Each instructs the node about its address, client port, and cluster message
 # bus port. The information is then published in the header of the bus packets
 # so that other nodes will be able to correctly map the address of the node
 # publishing the information.
@@ -1365,7 +1366,7 @@ lua-time-limit 5000
 # Note that when remapped, the bus port may not be at the fixed offset of
 # clients port + 10000, so you can specify any port and bus-port depending
 # on how they get remapped. If the bus-port is not set, a fixed offset of
-# 10000 will be used as usually.
+# 10000 will be used as usual.
 #
 # Example:
 #
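The example announced above is cut off by the hunk boundary; a representative version (addresses and ports are hypothetical) would be:

    cluster-announce-ip 10.1.1.5
    cluster-announce-port 6379
    cluster-announce-bus-port 6380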
@@ -1494,7 +1495,7 @@ notify-keyspace-events ""
 # two kind of inline requests that were anyway illegal: an empty request
 # or any request that starts with "/" (there are no Redis commands starting
 # with such a slash). Normal RESP2/RESP3 requests are completely out of the
-# path of the Gopher protocol implementation and are served as usually as well.
+# path of the Gopher protocol implementation and are served as usual as well.
 #
 # If you open a connection to Redis when Gopher is enabled and send it
 # a string like "/foo", if there is a key named "/foo" it is served via the
@@ -1666,7 +1667,7 @@ client-output-buffer-limit pubsub 32mb 8mb 60
 # client-query-buffer-limit 1gb
 # In the Redis protocol, bulk requests, that are, elements representing single
-# strings, are normally limited ot 512 mb. However you can change this limit
+# strings, are normally limited to 512 mb. However you can change this limit
 # here, but must be 1mb or greater
 #
 # proto-max-bulk-len 512mb
@@ -1695,7 +1696,7 @@ hz 10
 #
 # Since the default HZ value by default is conservatively set to 10, Redis
 # offers, and enables by default, the ability to use an adaptive HZ value
-# which will temporary raise when there are many connected clients.
+# which will temporarily raise when there are many connected clients.
 #
 # When dynamic HZ is enabled, the actual configured HZ will be used
 # as a baseline, but multiples of the configured HZ value will be actually
@@ -1762,7 +1763,7 @@ rdb-save-incremental-fsync yes
 # for the key counter to be divided by two (or decremented if it has a value
 # less <= 10).
 #
-# The default value for the lfu-decay-time is 1. A Special value of 0 means to
+# The default value for the lfu-decay-time is 1. A special value of 0 means to
 # decay the counter every time it happens to be scanned.
 #
 # lfu-log-factor 10
@@ -1782,7 +1783,7 @@ rdb-save-incremental-fsync yes
 # restart is needed in order to lower the fragmentation, or at least to flush
 # away all the data and create it again. However thanks to this feature
 # implemented by Oran Agra for Redis 4.0 this process can happen at runtime
-# in an "hot" way, while the server is running.
+# in a "hot" way, while the server is running.
 #
 # Basically when the fragmentation is over a certain level (see the
 # configuration options below) Redis will start to create new copies of the
@@ -1859,3 +1860,4 @@ jemalloc-bg-thread yes
 #
 # Set bgsave child process to cpu affinity 1,10,11
 # bgsave_cpulist 1,10-11
@@ -259,6 +259,6 @@ sentinel deny-scripts-reconfig yes
 # SENTINEL SET can also be used in order to perform this configuration at runtime.
 #
 # In order to set a command back to its original name (undo the renaming), it
-# is possible to just rename a command to itsef:
+# is possible to just rename a command to itself:
 #
 # SENTINEL rename-command mymaster CONFIG CONFIG
@@ -289,7 +289,7 @@ void ACLFreeUserAndKillClients(user *u) {
 while ((ln = listNext(&li)) != NULL) {
 client *c = listNodeValue(ln);
 if (c->user == u) {
-/* We'll free the conenction asynchronously, so
+/* We'll free the connection asynchronously, so
 * in theory to set a different user is not needed.
 * However if there are bugs in Redis, soon or later
 * this may result in some security hole: it's much
@@ -34,8 +34,9 @@
 #include "zmalloc.h"
 /* Create a new list. The created list can be freed with
-* AlFreeList(), but private value of every node need to be freed
-* by the user before to call AlFreeList().
+* listRelease(), but private value of every node need to be freed
+* by the user before to call listRelease(), or by setting a free method using
+* listSetFreeMethod.
 *
 * On error, NULL is returned. Otherwise the pointer to the new list. */
 list *listCreate(void)
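Since the comment now points at listRelease() and listSetFreeMethod(), a minimal C usage sketch may help; it assumes the Redis zmalloc/zfree allocator as the value destructor and elides most error handling:

    list *l = listCreate();
    if (l == NULL) return;           /* allocation failed */
    listSetFreeMethod(l, zfree);     /* values are released together with the list */
    listAddNodeTail(l, zmalloc(16)); /* the list now owns this value */
    listRelease(l);                  /* frees every node and, via the method, its value */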
@@ -217,8 +218,8 @@ void listRewindTail(list *list, listIter *li) {
 * listDelNode(), but not to remove other elements.
 *
 * The function returns a pointer to the next element of the list,
-* or NULL if there are no more elements, so the classical usage patter
-* is:
+* or NULL if there are no more elements, so the classical usage
+* pattern is:
 *
 * iter = listGetIterator(list,<direction>);
 * while ((node = listNext(iter)) != NULL) {
@@ -457,7 +457,7 @@ int aeProcessEvents(aeEventLoop *eventLoop, int flags)
 int fired = 0; /* Number of events fired for current fd. */
 /* Normally we execute the readable event first, and the writable
-* event laster. This is useful as sometimes we may be able
+* event later. This is useful as sometimes we may be able
 * to serve the reply of a query immediately after processing the
 * query.
 *
@@ -465,7 +465,7 @@ int aeProcessEvents(aeEventLoop *eventLoop, int flags)
 * asking us to do the reverse: never fire the writable event
 * after the readable. In such a case, we invert the calls.
 * This is useful when, for instance, we want to do things
-* in the beforeSleep() hook, like fsynching a file to disk,
+* in the beforeSleep() hook, like fsyncing a file to disk,
 * before replying to a client. */
 int invert = fe->mask & AE_BARRIER;
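For context, the inverted ordering is something the caller requests when registering the event; a hedged C sketch, where 'el', 'fd', 'writeHandler' and 'privdata' are placeholders and only aeCreateFileEvent() and the AE_* flags come from the existing ae API:

    /* Ask ae never to fire the writable handler after the readable one
     * in the same iteration, as described in the comment above. */
    if (aeCreateFileEvent(el, fd, AE_WRITABLE|AE_BARRIER,
                          writeHandler, privdata) == AE_ERR) {
        /* registration failed; handle the error */
    }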
@@ -232,7 +232,7 @@ static void aeApiDelEvent(aeEventLoop *eventLoop, int fd, int mask) {
 /*
 * ENOMEM is a potentially transient condition, but the kernel won't
 * generally return it unless things are really bad. EAGAIN indicates
-* we've reached an resource limit, for which it doesn't make sense to
+* we've reached a resource limit, for which it doesn't make sense to
 * retry (counter-intuitively). All other errors indicate a bug. In any
 * of these cases, the best we can do is to abort.
 */
@@ -544,7 +544,7 @@ sds catAppendOnlyGenericCommand(sds dst, int argc, robj **argv) {
 return dst;
 }
-/* Create the sds representation of an PEXPIREAT command, using
+/* Create the sds representation of a PEXPIREAT command, using
 * 'seconds' as time to live and 'cmd' to understand what command
 * we are translating into a PEXPIREAT.
 *
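A concrete instance of the translation this function performs (the timestamp is invented for illustration): an EXPIRE received when the server clock reads 1600000000000 ms is persisted in its absolute form, so replaying it later does not shift the expire time:

    EXPIRE mykey 10   ->   PEXPIREAT mykey 1600000010000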
@@ -1818,7 +1818,7 @@ void backgroundRewriteDoneHandler(int exitcode, int bysignal) {
 "Background AOF rewrite terminated with error");
 } else {
 /* SIGUSR1 is whitelisted, so we have a way to kill a child without
-* tirggering an error condition. */
+* triggering an error condition. */
 if (bysignal != SIGUSR1)
 server.aof_lastbgrewrite_status = C_ERR;
@@ -21,7 +21,7 @@
 *
 * Never use return value from the macros, instead use the AtomicGetIncr()
 * if you need to get the current value and increment it atomically, like
-* in the followign example:
+* in the following example:
 *
 * long oldvalue;
 * atomicGetIncr(myvar,oldvalue,1);
@@ -36,7 +36,7 @@
 /* Count number of bits set in the binary array pointed by 's' and long
 * 'count' bytes. The implementation of this function is required to
-* work with a input string length up to 512 MB. */
+* work with an input string length up to 512 MB. */
 size_t redisPopcount(void *s, long count) {
 size_t bits = 0;
 unsigned char *p = s;
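As a worked example of the counting this function backs: the six bytes of the string "foobar" contain 26 set bits, which is exactly what the BITCOUNT command reports:

    SET mykey "foobar"
    BITCOUNT mykey    ->  (integer) 26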
@@ -107,7 +107,7 @@ long redisBitpos(void *s, unsigned long count, int bit) {
 int found;
 /* Process whole words first, seeking for first word that is not
-* all ones or all zeros respectively if we are lookig for zeros
+* all ones or all zeros respectively if we are looking for zeros
 * or ones. This is much faster with large strings having contiguous
 * blocks of 1 or 0 bits compared to the vanilla bit per bit processing.
 *
@@ -496,7 +496,7 @@ robj *lookupStringForBitCommand(client *c, size_t maxbit) {
 * in 'len'. The user is required to pass (likely stack allocated) buffer
 * 'llbuf' of at least LONG_STR_SIZE bytes. Such a buffer is used in the case
 * the object is integer encoded in order to provide the representation
-* without usign heap allocation.
+* without using heap allocation.
 *
 * The function returns the pointer to the object array of bytes representing
 * the string it contains, that may be a pointer to 'llbuf' or to the
@@ -53,7 +53,7 @@
 * to 0, no timeout is processed).
 * It usually just needs to send a reply to the client.
 *
-* When implementing a new type of blocking opeation, the implementation
+* When implementing a new type of blocking operation, the implementation
 * should modify unblockClient() and replyToBlockedClientTimedOut() in order
 * to handle the btype-specific behavior of this two functions.
 * If the blocking operation waits for certain keys to change state, the
@@ -118,7 +118,7 @@ void processUnblockedClients(void) {
 /* This function will schedule the client for reprocessing at a safe time.
 *
-* This is useful when a client was blocked for some reason (blocking opeation,
+* This is useful when a client was blocked for some reason (blocking operation,
 * CLIENT PAUSE, or whatever), because it may end with some accumulated query
 * buffer that needs to be processed ASAP:
 *
@@ -377,7 +377,7 @@ void clusterSaveConfigOrDie(int do_fsync) {
 }
 }
-/* Lock the cluster config using flock(), and leaks the file descritor used to
+/* Lock the cluster config using flock(), and leaks the file descriptor used to
 * acquire the lock so that the file will be locked forever.
 *
 * This works because we always update nodes.conf with a new version
@@ -544,13 +544,13 @@ void clusterInit(void) {
 /* Reset a node performing a soft or hard reset:
 *
-* 1) All other nodes are forget.
+* 1) All other nodes are forgotten.
 * 2) All the assigned / open slots are released.
 * 3) If the node is a slave, it turns into a master.
-* 5) Only for hard reset: a new Node ID is generated.
-* 6) Only for hard reset: currentEpoch and configEpoch are set to 0.
-* 7) The new configuration is saved and the cluster state updated.
-* 8) If the node was a slave, the whole data set is flushed away. */
+* 4) Only for hard reset: a new Node ID is generated.
+* 5) Only for hard reset: currentEpoch and configEpoch are set to 0.
+* 6) The new configuration is saved and the cluster state updated.
+* 7) If the node was a slave, the whole data set is flushed away. */
 void clusterReset(int hard) {
 dictIterator *di;
 dictEntry *de;
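For context (not part of the diff), this reset is what the CLUSTER RESET command triggers, for example:

    CLUSTER RESET SOFT
    CLUSTER RESET HARD   (additionally performs the hard-only steps 4 and 5 above)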
...@@ -646,7 +646,7 @@ static void clusterConnAcceptHandler(connection *conn) { ...@@ -646,7 +646,7 @@ static void clusterConnAcceptHandler(connection *conn) {
/* Create a link object we use to handle the connection. /* Create a link object we use to handle the connection.
* It gets passed to the readable handler when data is available. * It gets passed to the readable handler when data is available.
* Initiallly the link->node pointer is set to NULL as we don't know * Initially the link->node pointer is set to NULL as we don't know
* which node is, but the right node is references once we know the * which node is, but the right node is references once we know the
* node identity. */ * node identity. */
link = createClusterLink(NULL); link = createClusterLink(NULL);
...@@ -1060,7 +1060,7 @@ uint64_t clusterGetMaxEpoch(void) { ...@@ -1060,7 +1060,7 @@ uint64_t clusterGetMaxEpoch(void) {
* 3) Persist the configuration on disk before sending packets with the * 3) Persist the configuration on disk before sending packets with the
* new configuration. * new configuration.
* *
* If the new config epoch is generated and assigend, C_OK is returned, * If the new config epoch is generated and assigned, C_OK is returned,
* otherwise C_ERR is returned (since the node has already the greatest * otherwise C_ERR is returned (since the node has already the greatest
* configuration around) and no operation is performed. * configuration around) and no operation is performed.
* *
...@@ -1133,7 +1133,7 @@ int clusterBumpConfigEpochWithoutConsensus(void) { ...@@ -1133,7 +1133,7 @@ int clusterBumpConfigEpochWithoutConsensus(void) {
* *
* In general we want a system that eventually always ends with different * In general we want a system that eventually always ends with different
* masters having different configuration epochs whatever happened, since * masters having different configuration epochs whatever happened, since
* nothign is worse than a split-brain condition in a distributed system. * nothing is worse than a split-brain condition in a distributed system.
* *
* BEHAVIOR * BEHAVIOR
* *
...@@ -1192,7 +1192,7 @@ void clusterHandleConfigEpochCollision(clusterNode *sender) { ...@@ -1192,7 +1192,7 @@ void clusterHandleConfigEpochCollision(clusterNode *sender) {
* entries from the black list. This is an O(N) operation but it is not a * entries from the black list. This is an O(N) operation but it is not a
* problem since add / exists operations are called very infrequently and * problem since add / exists operations are called very infrequently and
* the hash table is supposed to contain very little elements at max. * the hash table is supposed to contain very little elements at max.
* However without the cleanup during long uptimes and with some automated * However without the cleanup during long uptime and with some automated
* node add/removal procedures, entries could accumulate. */ * node add/removal procedures, entries could accumulate. */
void clusterBlacklistCleanup(void) { void clusterBlacklistCleanup(void) {
dictIterator *di; dictIterator *di;
...@@ -1346,12 +1346,12 @@ int clusterHandshakeInProgress(char *ip, int port, int cport) { ...@@ -1346,12 +1346,12 @@ int clusterHandshakeInProgress(char *ip, int port, int cport) {
return de != NULL; return de != NULL;
} }
/* Start an handshake with the specified address if there is not one /* Start a handshake with the specified address if there is not one
* already in progress. Returns non-zero if the handshake was actually * already in progress. Returns non-zero if the handshake was actually
* started. On error zero is returned and errno is set to one of the * started. On error zero is returned and errno is set to one of the
* following values: * following values:
* *
* EAGAIN - There is already an handshake in progress for this address. * EAGAIN - There is already a handshake in progress for this address.
* EINVAL - IP or port are not valid. */ * EINVAL - IP or port are not valid. */
int clusterStartHandshake(char *ip, int port, int cport) { int clusterStartHandshake(char *ip, int port, int cport) {
clusterNode *n; clusterNode *n;
...@@ -1793,7 +1793,7 @@ int clusterProcessPacket(clusterLink *link) { ...@@ -1793,7 +1793,7 @@ int clusterProcessPacket(clusterLink *link) {
if (sender) sender->data_received = now; if (sender) sender->data_received = now;
if (sender && !nodeInHandshake(sender)) { if (sender && !nodeInHandshake(sender)) {
/* Update our curretEpoch if we see a newer epoch in the cluster. */ /* Update our currentEpoch if we see a newer epoch in the cluster. */
senderCurrentEpoch = ntohu64(hdr->currentEpoch); senderCurrentEpoch = ntohu64(hdr->currentEpoch);
senderConfigEpoch = ntohu64(hdr->configEpoch); senderConfigEpoch = ntohu64(hdr->configEpoch);
if (senderCurrentEpoch > server.cluster->currentEpoch) if (senderCurrentEpoch > server.cluster->currentEpoch)
...@@ -2480,7 +2480,7 @@ void clusterSetGossipEntry(clusterMsg *hdr, int i, clusterNode *n) { ...@@ -2480,7 +2480,7 @@ void clusterSetGossipEntry(clusterMsg *hdr, int i, clusterNode *n) {
} }
/* Send a PING or PONG packet to the specified node, making sure to add enough /* Send a PING or PONG packet to the specified node, making sure to add enough
* gossip informations. */ * gossip information. */
void clusterSendPing(clusterLink *link, int type) { void clusterSendPing(clusterLink *link, int type) {
unsigned char *buf; unsigned char *buf;
clusterMsg *hdr; clusterMsg *hdr;
...@@ -2500,7 +2500,7 @@ void clusterSendPing(clusterLink *link, int type) { ...@@ -2500,7 +2500,7 @@ void clusterSendPing(clusterLink *link, int type) {
* node_timeout we exchange with each other node at least 4 packets * node_timeout we exchange with each other node at least 4 packets
* (we ping in the worst case in node_timeout/2 time, and we also * (we ping in the worst case in node_timeout/2 time, and we also
* receive two pings from the host), we have a total of 8 packets * receive two pings from the host), we have a total of 8 packets
* in the node_timeout*2 falure reports validity time. So we have * in the node_timeout*2 failure reports validity time. So we have
* that, for a single PFAIL node, we can expect to receive the following * that, for a single PFAIL node, we can expect to receive the following
* number of failure reports (in the specified window of time): * number of failure reports (in the specified window of time):
* *
...@@ -2527,7 +2527,7 @@ void clusterSendPing(clusterLink *link, int type) { ...@@ -2527,7 +2527,7 @@ void clusterSendPing(clusterLink *link, int type) {
* faster to propagate to go from PFAIL to FAIL state. */ * faster to propagate to go from PFAIL to FAIL state. */
int pfail_wanted = server.cluster->stats_pfail_nodes; int pfail_wanted = server.cluster->stats_pfail_nodes;
/* Compute the maxium totlen to allocate our buffer. We'll fix the totlen /* Compute the maximum totlen to allocate our buffer. We'll fix the totlen
* later according to the number of gossip sections we really were able * later according to the number of gossip sections we really were able
* to put inside the packet. */ * to put inside the packet. */
totlen = sizeof(clusterMsg)-sizeof(union clusterMsgData); totlen = sizeof(clusterMsg)-sizeof(union clusterMsgData);
...@@ -2564,7 +2564,7 @@ void clusterSendPing(clusterLink *link, int type) { ...@@ -2564,7 +2564,7 @@ void clusterSendPing(clusterLink *link, int type) {
if (this->flags & (CLUSTER_NODE_HANDSHAKE|CLUSTER_NODE_NOADDR) || if (this->flags & (CLUSTER_NODE_HANDSHAKE|CLUSTER_NODE_NOADDR) ||
(this->link == NULL && this->numslots == 0)) (this->link == NULL && this->numslots == 0))
{ {
freshnodes--; /* Tecnically not correct, but saves CPU. */ freshnodes--; /* Technically not correct, but saves CPU. */
continue; continue;
} }
...@@ -3149,7 +3149,7 @@ void clusterHandleSlaveFailover(void) { ...@@ -3149,7 +3149,7 @@ void clusterHandleSlaveFailover(void) {
} }
} }
/* If the previous failover attempt timedout and the retry time has /* If the previous failover attempt timeout and the retry time has
* elapsed, we can setup a new one. */ * elapsed, we can setup a new one. */
if (auth_age > auth_retry_time) { if (auth_age > auth_retry_time) {
server.cluster->failover_auth_time = mstime() + server.cluster->failover_auth_time = mstime() +
...@@ -3255,7 +3255,7 @@ void clusterHandleSlaveFailover(void) { ...@@ -3255,7 +3255,7 @@ void clusterHandleSlaveFailover(void) {
* *
* Slave migration is the process that allows a slave of a master that is * Slave migration is the process that allows a slave of a master that is
* already covered by at least another slave, to "migrate" to a master that * already covered by at least another slave, to "migrate" to a master that
* is orpaned, that is, left with no working slaves. * is orphaned, that is, left with no working slaves.
* ------------------------------------------------------------------------- */ * ------------------------------------------------------------------------- */
/* This function is responsible to decide if this replica should be migrated /* This function is responsible to decide if this replica should be migrated
...@@ -3272,7 +3272,7 @@ void clusterHandleSlaveFailover(void) { ...@@ -3272,7 +3272,7 @@ void clusterHandleSlaveFailover(void) {
* the nodes anyway, so we spend time into clusterHandleSlaveMigration() * the nodes anyway, so we spend time into clusterHandleSlaveMigration()
* if definitely needed. * if definitely needed.
* *
* The fuction is called with a pre-computed max_slaves, that is the max * The function is called with a pre-computed max_slaves, that is the max
* number of working (not in FAIL state) slaves for a single master. * number of working (not in FAIL state) slaves for a single master.
* *
* Additional conditions for migration are examined inside the function. * Additional conditions for migration are examined inside the function.
...@@ -3391,7 +3391,7 @@ void clusterHandleSlaveMigration(int max_slaves) { ...@@ -3391,7 +3391,7 @@ void clusterHandleSlaveMigration(int max_slaves) {
* data loss due to the asynchronous master-slave replication. * data loss due to the asynchronous master-slave replication.
* -------------------------------------------------------------------------- */ * -------------------------------------------------------------------------- */
/* Reset the manual failover state. This works for both masters and slavesa /* Reset the manual failover state. This works for both masters and slaves
* as all the state about manual failover is cleared. * as all the state about manual failover is cleared.
* *
* The function can be used both to initialize the manual failover state at * The function can be used both to initialize the manual failover state at
...@@ -3683,7 +3683,7 @@ void clusterCron(void) { ...@@ -3683,7 +3683,7 @@ void clusterCron(void) {
replicationSetMaster(myself->slaveof->ip, myself->slaveof->port); replicationSetMaster(myself->slaveof->ip, myself->slaveof->port);
} }
/* Abourt a manual failover if the timeout is reached. */ /* Abort a manual failover if the timeout is reached. */
manualFailoverCheckTimeout(); manualFailoverCheckTimeout();
if (nodeIsSlave(myself)) { if (nodeIsSlave(myself)) {
...@@ -3788,12 +3788,12 @@ int clusterNodeSetSlotBit(clusterNode *n, int slot) { ...@@ -3788,12 +3788,12 @@ int clusterNodeSetSlotBit(clusterNode *n, int slot) {
* target for replicas migration, if and only if at least one of * target for replicas migration, if and only if at least one of
* the other masters has slaves right now. * the other masters has slaves right now.
* *
* Normally masters are valid targerts of replica migration if: * Normally masters are valid targets of replica migration if:
* 1. The used to have slaves (but no longer have). * 1. The used to have slaves (but no longer have).
* 2. They are slaves failing over a master that used to have slaves. * 2. They are slaves failing over a master that used to have slaves.
* *
* However new masters with slots assigned are considered valid * However new masters with slots assigned are considered valid
* migration tagets if the rest of the cluster is not a slave-less. * migration targets if the rest of the cluster is not a slave-less.
* *
* See https://github.com/antirez/redis/issues/3043 for more info. */ * See https://github.com/antirez/redis/issues/3043 for more info. */
if (n->numslots == 1 && clusterMastersHaveSlaves()) if (n->numslots == 1 && clusterMastersHaveSlaves())
...@@ -3977,7 +3977,7 @@ void clusterUpdateState(void) { ...@@ -3977,7 +3977,7 @@ void clusterUpdateState(void) {
* A) If no other node is in charge according to the current cluster * A) If no other node is in charge according to the current cluster
* configuration, we add these slots to our node. * configuration, we add these slots to our node.
* B) If according to our config other nodes are already in charge for * B) If according to our config other nodes are already in charge for
* this lots, we set the slots as IMPORTING from our point of view * this slots, we set the slots as IMPORTING from our point of view
* in order to justify we have those slots, and in order to make * in order to justify we have those slots, and in order to make
* redis-trib aware of the issue, so that it can try to fix it. * redis-trib aware of the issue, so that it can try to fix it.
* 2) If we find data in a DB different than DB0 we return C_ERR to * 2) If we find data in a DB different than DB0 we return C_ERR to
...@@ -4507,7 +4507,7 @@ NULL ...@@ -4507,7 +4507,7 @@ NULL
} }
/* If this slot is in migrating status but we have no keys /* If this slot is in migrating status but we have no keys
* for it assigning the slot to another node will clear * for it assigning the slot to another node will clear
* the migratig status. */ * the migrating status. */
if (countKeysInSlot(slot) == 0 && if (countKeysInSlot(slot) == 0 &&
server.cluster->migrating_slots_to[slot]) server.cluster->migrating_slots_to[slot])
server.cluster->migrating_slots_to[slot] = NULL; server.cluster->migrating_slots_to[slot] = NULL;
...@@ -4852,7 +4852,7 @@ NULL ...@@ -4852,7 +4852,7 @@ NULL
server.cluster->currentEpoch = epoch; server.cluster->currentEpoch = epoch;
/* No need to fsync the config here since in the unlucky event /* No need to fsync the config here since in the unlucky event
* of a failure to persist the config, the conflict resolution code * of a failure to persist the config, the conflict resolution code
* will assign an unique config to this node. */ * will assign a unique config to this node. */
clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE| clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|
CLUSTER_TODO_SAVE_CONFIG); CLUSTER_TODO_SAVE_CONFIG);
addReply(c,shared.ok); addReply(c,shared.ok);
...@@ -4900,7 +4900,7 @@ void createDumpPayload(rio *payload, robj *o, robj *key) { ...@@ -4900,7 +4900,7 @@ void createDumpPayload(rio *payload, robj *o, robj *key) {
unsigned char buf[2]; unsigned char buf[2];
uint64_t crc; uint64_t crc;
/* Serialize the object in a RDB-like format. It consist of an object type /* Serialize the object in an RDB-like format. It consist of an object type
* byte followed by the serialized object. This is understood by RESTORE. */ * byte followed by the serialized object. This is understood by RESTORE. */
rioInitWithBuffer(payload,sdsempty()); rioInitWithBuffer(payload,sdsempty());
serverAssert(rdbSaveObjectType(payload,o)); serverAssert(rdbSaveObjectType(payload,o));
...@@ -5567,7 +5567,7 @@ void readwriteCommand(client *c) { ...@@ -5567,7 +5567,7 @@ void readwriteCommand(client *c) {
* resharding in progress). * resharding in progress).
* *
* On success the function returns the node that is able to serve the request. * On success the function returns the node that is able to serve the request.
* If the node is not 'myself' a redirection must be perfomed. The kind of * If the node is not 'myself' a redirection must be performed. The kind of
* redirection is specified setting the integer passed by reference * redirection is specified setting the integer passed by reference
* 'error_code', which will be set to CLUSTER_REDIR_ASK or * 'error_code', which will be set to CLUSTER_REDIR_ASK or
* CLUSTER_REDIR_MOVED. * CLUSTER_REDIR_MOVED.
...@@ -5694,7 +5694,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in ...@@ -5694,7 +5694,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
} }
} }
/* Migarting / Improrting slot? Count keys we don't have. */ /* Migrating / Importing slot? Count keys we don't have. */
if ((migrating_slot || importing_slot) && if ((migrating_slot || importing_slot) &&
lookupKeyRead(&server.db[0],thiskey) == NULL) lookupKeyRead(&server.db[0],thiskey) == NULL)
{ {
...@@ -5763,7 +5763,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in ...@@ -5763,7 +5763,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
} }
/* Handle the read-only client case reading from a slave: if this /* Handle the read-only client case reading from a slave: if this
* node is a slave and the request is about an hash slot our master * node is a slave and the request is about a hash slot our master
* is serving, we can reply without redirection. */ * is serving, we can reply without redirection. */
int is_readonly_command = (c->cmd->flags & CMD_READONLY) || int is_readonly_command = (c->cmd->flags & CMD_READONLY) ||
(c->cmd->proc == execCommand && !(c->mstate.cmd_inv_flags & CMD_READONLY)); (c->cmd->proc == execCommand && !(c->mstate.cmd_inv_flags & CMD_READONLY));
...@@ -5777,7 +5777,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in ...@@ -5777,7 +5777,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
} }
/* Base case: just return the right node. However if this node is not /* Base case: just return the right node. However if this node is not
* myself, set error_code to MOVED since we need to issue a rediretion. */ * myself, set error_code to MOVED since we need to issue a redirection. */
if (n != myself && error_code) *error_code = CLUSTER_REDIR_MOVED; if (n != myself && error_code) *error_code = CLUSTER_REDIR_MOVED;
return n; return n;
} }
@@ -5823,7 +5823,7 @@ void clusterRedirectClient(client *c, clusterNode *n, int hashslot, int error_co
 * 3) The client may remain blocked forever (or up to the max timeout time)
 * waiting for a key change that will never happen.
 *
- * If the client is found to be blocked into an hash slot this node no
+ * If the client is found to be blocked into a hash slot this node no
 * longer handles, the client is sent a redirection error, and the function
 * returns 1. Otherwise 0 is returned and no operation is performed. */
 int clusterRedirectBlockedClientIfNeeded(client *c) {
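As the comment says, a return value of 1 means the redirection error has already been queued for the blocked client, so the caller's remaining job is just to take the client out of the blocked state. A sketch, assuming the usual unblockClient() helper:

    if (clusterRedirectBlockedClientIfNeeded(c)) {
        /* The -MOVED/-ASK reply was already sent; just unblock the client
         * so it can retry against the node that now owns the slot. */
        unblockClient(c);
    }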
@@ -51,8 +51,8 @@ typedef struct clusterLink {
 #define CLUSTER_NODE_HANDSHAKE 32 /* We have still to exchange the first ping */
 #define CLUSTER_NODE_NOADDR 64 /* We don't know the address of this node */
 #define CLUSTER_NODE_MEET 128 /* Send a MEET message to this node */
-#define CLUSTER_NODE_MIGRATE_TO 256 /* Master elegible for replica migration. */
+#define CLUSTER_NODE_MIGRATE_TO 256 /* Master eligible for replica migration. */
-#define CLUSTER_NODE_NOFAILOVER 512 /* Slave will not try to failver. */
+#define CLUSTER_NODE_NOFAILOVER 512 /* Slave will not try to failover. */
 #define CLUSTER_NODE_NULL_NAME "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"
 #define nodeIsMaster(n) ((n)->flags & CLUSTER_NODE_MASTER)
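These flags live in a single bitmask, so each one is tested the same way the nodeIsMaster() macro above does. A small illustration (the extra helper names are written here in the same style and are only for the example):

    /* Hypothetical helpers in the style of nodeIsMaster(): */
    #define nodeInHandshake(n)  ((n)->flags & CLUSTER_NODE_HANDSHAKE)
    #define nodeCantFailover(n) ((n)->flags & CLUSTER_NODE_NOFAILOVER)

    if (nodeIsMaster(n) && (n->flags & CLUSTER_NODE_MIGRATE_TO)) {
        /* This master is eligible to receive a migrated replica. */
    }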
@@ -164,10 +164,10 @@ typedef struct clusterState {
 clusterNode *mf_slave; /* Slave performing the manual failover. */
 /* Manual failover state of slave. */
 long long mf_master_offset; /* Master offset the slave needs to start MF
- or zero if stil not received. */
+ or zero if still not received. */
 int mf_can_start; /* If non-zero signal that the manual failover
 can start requesting masters vote. */
- /* The followign fields are used by masters to take state on elections. */
+ /* The following fields are used by masters to take state on elections. */
 uint64_t lastVoteEpoch; /* Epoch of the last vote granted. */
 int todo_before_sleep; /* Things to do in clusterBeforeSleep(). */
 /* Messages received and sent by type. */
@@ -1279,7 +1279,7 @@ void rewriteConfigNumericalOption(struct rewriteConfigState *state, const char *
 rewriteConfigRewriteLine(state,option,line,force);
 }
-/* Rewrite a octal option. */
+/* Rewrite an octal option. */
 void rewriteConfigOctalOption(struct rewriteConfigState *state, char *option, int value, int defvalue) {
 int force = value != defvalue;
 sds line = sdscatprintf(sdsempty(),"%s %o",option,value);
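A dedicated octal rewriter exists because the value is held as a plain C int but must round-trip through the config file in octal, hence the "%o" conversion above. A tiny illustration, using a hypothetical option name:

    /* 0660 (decimal 432) must be written back as "660", not "432". */
    sds line = sdscatprintf(sdsempty(), "%s %o", "unixsocketperm", 0660);
    /* line is now "unixsocketperm 660" */
    sdsfree(line);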
@@ -2097,7 +2097,7 @@ static int isValidAOFfilename(char *val, char **err) {
 static int updateHZ(long long val, long long prev, char **err) {
 UNUSED(prev);
 UNUSED(err);
- /* Hz is more an hint from the user, so we accept values out of range
+ /* Hz is more a hint from the user, so we accept values out of range
 * but cap them to reasonable values. */
 server.config_hz = val;
 if (server.config_hz < CONFIG_MIN_HZ) server.config_hz = CONFIG_MIN_HZ;
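The hunk cuts off after the lower bound; the matching upper cap sits just past it in config.c. For completeness, a sketch of the full clamp (CONFIG_MAX_HZ is assumed here, mirroring CONFIG_MIN_HZ):

    server.config_hz = val;
    if (server.config_hz < CONFIG_MIN_HZ) server.config_hz = CONFIG_MIN_HZ;
    if (server.config_hz > CONFIG_MAX_HZ) server.config_hz = CONFIG_MAX_HZ;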
@@ -2115,7 +2115,7 @@ static int updateJemallocBgThread(int val, int prev, char **err) {
 static int updateReplBacklogSize(long long val, long long prev, char **err) {
 /* resizeReplicationBacklog sets server.repl_backlog_size, and relies on
- * being able to tell when the size changes, so restore prev becore calling it. */
+ * being able to tell when the size changes, so restore prev before calling it. */
 UNUSED(err);
 server.repl_backlog_size = prev;
 resizeReplicationBacklog(val);
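Restoring prev makes sense if resizeReplicationBacklog() detects changes by comparing its argument with the current server.repl_backlog_size, roughly as in the sketch below (an assumption about its internals, not a quote of the real function):

    void resizeReplicationBacklog(long long newsize) {
        if (server.repl_backlog_size == newsize) return; /* nothing changed */
        server.repl_backlog_size = newsize;
        /* ... reallocate/trim the in-memory backlog buffer here ... */
    }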
@@ -166,7 +166,7 @@ void setproctitle(const char *fmt, ...);
 #endif /* BYTE_ORDER */
 /* Sometimes after including an OS-specific header that defines the
- * endianess we end with __BYTE_ORDER but not with BYTE_ORDER that is what
+ * endianness we end with __BYTE_ORDER but not with BYTE_ORDER that is what
 * the Redis code uses. In this case let's define everything without the
 * underscores. */
 #ifndef BYTE_ORDER
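"Define everything without the underscores" amounts to aliasing the GNU-style macros, roughly like this sketch:

    #ifndef BYTE_ORDER
    #define LITTLE_ENDIAN __LITTLE_ENDIAN
    #define BIG_ENDIAN    __BIG_ENDIAN
    #define BYTE_ORDER    __BYTE_ORDER
    #endif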
@@ -106,7 +106,7 @@ static inline int connAccept(connection *conn, ConnectionCallbackFunc accept_han
 }
 /* Establish a connection. The connect_handler will be called when the connection
- * is established, or if an error has occured.
+ * is established, or if an error has occurred.
 *
 * The connection handler will be responsible to set up any read/write handlers
 * as needed.
@@ -168,7 +168,7 @@ static inline int connSetReadHandler(connection *conn, ConnectionCallbackFunc fu
 /* Set a write handler, and possibly enable a write barrier, this flag is
 * cleared when write handler is changed or removed.
- * With barroer enabled, we never fire the event if the read handler already
+ * With barrier enabled, we never fire the event if the read handler already
 * fired in the same event loop iteration. Useful when you want to persist
 * things to disk before sending replies, and want to do that in a group fashion. */
 static inline int connSetWriteHandlerWithBarrier(connection *conn, ConnectionCallbackFunc func, int barrier) {
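A typical barrier use, per the comment above: install the reply writer with barrier set so that, when the read handler already ran in the same event loop iteration, the write handler is deferred and the disk write can complete before the reply goes out. The handler name is only illustrative:

    /* Illustrative only: 'sendReplyToClient' stands for whatever callback
     * flushes this client's output buffer. */
    connSetWriteHandlerWithBarrier(conn, sendReplyToClient, 1);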
@@ -116,7 +116,7 @@ robj *lookupKeyReadWithFlags(redisDb *db, robj *key, int flags) {
 * However, if the command caller is not the master, and as additional
 * safety measure, the command invoked is a read-only command, we can
 * safely return NULL here, and provide a more consistent behavior
- * to clients accessign expired values in a read-only fashion, that
+ * to clients accessing expired values in a read-only fashion, that
 * will say the key as non existing.
 *
 * Notably this covers GETs when slaves are used to scale reads. */
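In other words, on a replica a read-only lookup of a logically expired key behaves as if the key were already gone, even though only the master may actually delete it. A loose sketch of that guard (the helper and variable names here are invented for illustration):

    /* Sketch, not the real code. */
    if (keyIsLogicallyExpired(db, key) &&     /* TTL already passed        */
        server.masterhost != NULL &&          /* we are a replica          */
        is_read_only_lookup)                  /* read-only access          */
    {
        return NULL;   /* report the key as non existing */
    }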
@@ -374,7 +374,7 @@ robj *dbUnshareStringValue(redisDb *db, robj *key, robj *o) {
 * firing module events.
 * and the function to return ASAP.
 *
- * On success the fuction returns the number of keys removed from the
+ * On success the function returns the number of keys removed from the
 * database(s). Otherwise -1 is returned in the specific case the
 * DB number is out of range, and errno is set to EINVAL. */
 long long emptyDbGeneric(redisDb *dbarray, int dbnum, int flags, void(callback)(void*)) {
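A caller sketch of the contract described above (EMPTYDB_NO_FLAGS is assumed; errno only needs checking when -1 comes back):

    #include <errno.h>

    long long removed = emptyDbGeneric(server.db, dbnum, EMPTYDB_NO_FLAGS, NULL);
    if (removed == -1 && errno == EINVAL) {
        /* dbnum was out of range; nothing was flushed. */
    }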
@@ -866,7 +866,7 @@ void scanGenericCommand(client *c, robj *o, unsigned long cursor) {
 /* Filter element if it is an expired key. */
 if (!filter && o == NULL && expireIfNeeded(c->db, kobj)) filter = 1;
- /* Remove the element and its associted value if needed. */
+ /* Remove the element and its associated value if needed. */
 if (filter) {
 decrRefCount(kobj);
 listDelNode(keys, node);
@@ -1367,7 +1367,7 @@ int *getKeysUsingCommandTable(struct redisCommand *cmd,robj **argv, int argc, in
 /* Return all the arguments that are keys in the command passed via argc / argv.
 *
 * The command returns the positions of all the key arguments inside the array,
- * so the actual return value is an heap allocated array of integers. The
+ * so the actual return value is a heap allocated array of integers. The
 * length of the array is returned by reference into *numkeys.
 *
 * 'cmd' must be point to the corresponding entry into the redisCommand
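The hunk header truncates the signature; assuming the final parameter is int *numkeys, using the heap-allocated result looks roughly like this sketch:

    int numkeys;
    int *keyidx = getKeysUsingCommandTable(cmd, argv, argc, &numkeys);
    for (int j = 0; j < numkeys; j++) {
        /* argv[keyidx[j]] is the j-th key argument of this command. */
        serverLog(LL_VERBOSE, "key argument at position %d", keyidx[j]);
    }
    zfree(keyidx);   /* heap allocated, so the caller must free it */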