Setting Up a Distributed Redis Cluster

  1. Download and preparation

    # download and compile
    wget https://download.redis.io/releases/redis-4.0.10.tar.gz
    tar zxvf redis-4.0.10.tar.gz
    cd redis-4.0.10
    make
    cd ..
    # prepare environment: keep the shared binaries and the template config together
    mkdir redis-essential
    cp redis-4.0.10/src/redis-cli redis-essential/
    cp redis-4.0.10/src/redis-server redis-essential/
    cp redis-4.0.10/src/redis-trib.rb redis-essential/
    cp redis-4.0.10/redis.conf redis-essential/
    # create the first node directory; it only needs its own redis.conf,
    # since the binaries are shared from redis-essential
    cp -r redis-essential/ redis-6380
    cd redis-6380/
    rm redis-cli redis-server redis-trib.rb
    
  2. Configuration file

    Run as a daemon

    # By default Redis does not run as a daemon. Use 'yes' if you need it.
    # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
    daemonize yes
    

    Port

    # Accept connections on the specified port, default is 6379 (IANA #815344).
    # If port 0 is specified Redis will not listen on a TCP socket.
    port 6380
    

    Disable RDB persistence

    ################################ SNAPSHOTTING  ################################
    #
    # Save the DB on disk:
    #
    #   save <seconds> <changes>
    #
    #   Will save the DB if both the given number of seconds and the given
    #   number of write operations against the DB occurred.
    #
    #   In the example below the behaviour will be to save:
    #   after 900 sec (15 min) if at least 1 key changed
    #   after 300 sec (5 min) if at least 10 keys changed
    #   after 60 sec if at least 10000 keys changed
    #
    #   Note: you can disable saving completely by commenting out all "save" lines.
    #
    #   It is also possible to remove all the previously configured save
    #   points by adding a save directive with a single empty string argument
    #   like in the following example:
    #
    #   save ""
    
    #save 900 1
    #save 300 10
    #save 60 10000
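
    With all save lines commented out, the server keeps no RDB save points (as the comments above note, save "" achieves the same). Once a node is running (see section 3), a quick check, assuming a node listening on port 6381:

    # expect an empty value once all save points are removed
    redis-essential/redis-cli -p 6381 config get save
    1) "save"
    2) ""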
    

    Enable AOF persistence

    ############################## APPEND ONLY MODE ###############################
    
    # By default Redis asynchronously dumps the dataset on disk. This mode is
    # good enough in many applications, but an issue with the Redis process or
    # a power outage may result into a few minutes of writes lost (depending on
    # the configured save points).
    #
    # The Append Only File is an alternative persistence mode that provides
    # much better durability. For instance using the default data fsync policy
    # (see later in the config file) Redis can lose just one second of writes in a
    # dramatic event like a server power outage, or a single write if something
    # wrong with the Redis process itself happens, but the operating system is
    # still running correctly.
    #
    # AOF and RDB persistence can be enabled at the same time without problems.
    # If the AOF is enabled on startup Redis will load the AOF, that is the file
    # with the better durability guarantees.
    #
    # Please check http://redis.io/topics/persistence for more information.
    appendonly yes
    
    # The name of the append only file (default: "appendonly.aof")
    appendfilename "appendonly.aof"
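
    To confirm AOF is active on a running node (node on port 6381 and the step-1 layout assumed), check the directive and look for the file in the node's own directory:

    redis-essential/redis-cli -p 6381 config get appendonly
    ls -l /usr/local/redis_cluster/4.0.10/redis-6381/appendonly.aof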

    fsync

    # The fsync() call tells the Operating System to actually write data on disk
    # instead of waiting for more data in the output buffer. Some OS will really flush
    # data on disk, some other OS will just try to do it ASAP.
    #
    # Redis supports three different modes:
    #
    # no: don't fsync, just let the OS flush the data when it wants. Faster.
    # always: fsync after every write to the append only log. Slow, Safest.
    # everysec: fsync only one time every second. Compromise.
    #
    # The default is "everysec", as that's usually the right compromise between
    # speed and data safety. It's up to you to understand if you can relax this to
    # "no" that will let the operating system flush the output buffer when
    # it wants, for better performances (but if you can live with the idea of
    # some data loss consider the default persistence mode that's snapshotting),
    # or on the contrary, use "always" that's very slow but a bit safer than
    # everysec.
    #
    # More details please check the following article:
    # http://antirez.com/post/redis-persistence-demystified.html
    #
    # If unsure, use "everysec".
    
    # appendfsync always
    appendfsync everysec
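
    The policy can also be changed per node at runtime, without a restart (node on port 6381 assumed; the change is not written back to redis.conf unless you run config rewrite):

    # trade speed for durability temporarily...
    redis-essential/redis-cli -p 6381 config set appendfsync always
    # ...then return to the once-per-second compromise
    redis-essential/redis-cli -p 6381 config set appendfsync everysec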
    

    Working directory and log file

    # The working directory. 
    # 
    # The DB will be written inside this directory, with the filename specified 
    # above using the 'dbfilename' configuration directive. 
    # 
    # The Append Only File will also be created inside this directory. 
    # 
    # Note that you must specify a directory here, not a file name. 
    dir ./
    
    # Specify the server verbosity level.
    # This can be one of:
    # debug (a lot of information, useful for development/testing)
    # verbose (many rarely useful info, but not a mess like the debug level)
    # notice (moderately verbose, what you want in production probably)
    # warning (only very important / critical messages are logged)
    loglevel verbose
    
    # Specify the log file name. Also the empty string can be used to force
    # Redis to log on the standard output. Note that if you use standard
    # output for logging but daemonize, logs will be sent to /dev/null
    logfile "redis-log"
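
    Because dir is ./ and logfile is a relative name, each node's log ends up in the directory it was started from; to follow one node's log (layout from step 1 assumed):

    tail -f /usr/local/redis_cluster/4.0.10/redis-6381/redis-log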
    

    pidfile

    # If a pid file is specified, Redis writes it where specified at startup
    # and removes it at exit.
    #
    # When the server runs non daemonized, no pid file is created if none is
    # specified in the configuration. When the server is daemonized, the pid file
    # is used even if not specified, defaulting to "/var/run/redis.pid".
    #
    # Creating a pid file is best effort: if Redis is not able to create it
    # nothing bad happens, the server will start and run normally.
    pidfile redis.pid
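
    pidfile is a relative path as well, so each daemonized node writes redis.pid into its own directory; to list all six pids after startup (layout from step 1 assumed):

    for port in 6381 6382 6383 6384 6385 6386; do
        echo -n "redis-$port: "
        cat "/usr/local/redis_cluster/4.0.10/redis-$port/redis.pid"
    done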
    

    Timeout settings

    # Unix socket.
    #
    # Specify the path for the Unix socket that will be used to listen for
    # incoming connections. There is no default, so Redis will not listen
    # on a unix socket when not specified.
    #
    # unixsocket /tmp/redis.sock
    # unixsocketperm 700
    
    # Close the connection after a client is idle for N seconds (0 to disable)
    timeout 0
    # TCP keepalive.
    #
    # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
    # of communication. This is useful for two reasons:
    #
    # 1) Detect dead peers.
    # 2) Take the connection alive from the point of view of network
    #    equipment in the middle.
    #
    # On Linux, the specified value (in seconds) is the period used to send ACKs.
    # Note that to close the connection the double of the time is needed.
    # On other kernels the period depends on the kernel configuration.
    #
    # A reasonable value for this option is 300 seconds, which is the new
    # Redis default starting with Redis 3.2.1.
    tcp-keepalive 300
    

    Cluster settings

    ################################ REDIS CLUSTER  ###############################
    #
    # ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    # WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
    # in order to mark it as "mature" we need to wait for a non trivial percentage
    # of users to deploy it in production.
    # ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    #
    # Normal Redis instances can't be part of a Redis Cluster; only nodes that are
    # started as cluster nodes can. In order to start a Redis instance as a
    # cluster node enable the cluster support uncommenting the following:
    #
    cluster-enabled yes
    
    # Every cluster node has a cluster configuration file. This file is not
    # intended to be edited by hand. It is created and updated by Redis nodes.
    # Every Redis Cluster node requires a different cluster configuration file.
    # Make sure that instances running in the same system do not have
    # overlapping cluster configuration file names.
    #
    cluster-config-file nodes.conf
    
    # Cluster node timeout is the amount of milliseconds a node must be unreachable
    # for it to be considered in failure state.
    # Most other internal time limits are multiple of the node timeout.
    #
    cluster-node-timeout 15000
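
    Once a node is started with these settings, it creates and maintains nodes.conf on its own. You can confirm cluster mode is active on any running node (port 6381 assumed):

    # cluster_enabled:1 confirms the node runs as a cluster node; before the
    # cluster is created, cluster_state still reports "fail"
    redis-essential/redis-cli -p 6381 cluster info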
    
  3. Start the cluster

    Configure six nodes, each in its own directory (redis-6381 through redis-6386, following the redis-6380 example above), and change the port directive in each copy of redis.conf to match its directory; a scripted version is sketched below.
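
    A minimal sketch of that step, assuming the config edited in section 2 lives in /usr/local/redis_cluster/4.0.10/redis-6380/ and that only the port differs between nodes:

    #!/bin/bash
    BASE=/usr/local/redis_cluster/4.0.10
    for port in 6381 6382 6383 6384 6385 6386; do
        mkdir -p "$BASE/redis-$port"
        # reuse the edited template config and patch only the port
        cp "$BASE/redis-6380/redis.conf" "$BASE/redis-$port/"
        sed -i "s/^port 6380$/port $port/" "$BASE/redis-$port/redis.conf"
    done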

    Write a script to start all six nodes in one batch:

    #!/bin/bash
    # The cd into each node directory matters: dir, logfile, and pidfile
    # in redis.conf are all relative paths.
    echo "cluster starting......"
    for port in 6381 6382 6383 6384 6385 6386; do
        cd /usr/local/redis_cluster/4.0.10/redis-$port
        /usr/local/redis_cluster/4.0.10/redis-essential/redis-server redis.conf
    done
    echo "cluster started."
    ps -ef | grep redis
    
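    A matching shutdown sketch; shutdown nosave is safe here because persistence is AOF-based, so nothing needs an RDB save on exit:

    #!/bin/bash
    for port in 6381 6382 6383 6384 6385 6386; do
        /usr/local/redis_cluster/4.0.10/redis-essential/redis-cli -p $port shutdown nosave
    done
    echo "cluster stopped."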

    Create the cluster with redis-trib.rb

    # redis-trib.rb is a Ruby script, so install Ruby and the redis gem first
    sudo apt-get install ruby
    gem install redis
    redis-essential/redis-trib.rb create --replicas 1 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385 127.0.0.1:6386
    >>> Creating cluster
    >>> Performing hash slots allocation on 6 nodes...
    Using 3 masters:
    127.0.0.1:6381
    127.0.0.1:6382
    127.0.0.1:6383
    Adding replica 127.0.0.1:6385 to 127.0.0.1:6381
    Adding replica 127.0.0.1:6386 to 127.0.0.1:6382
    Adding replica 127.0.0.1:6384 to 127.0.0.1:6383
    >>> Trying to optimize slaves allocation for anti-affinity
    [WARNING] Some slaves are in the same host as their master
    M: aad64e40200a01ea5cdc98bb031485586dbb7565 127.0.0.1:6381
       slots:0-5460 (5461 slots) master
    M: e3649a691aa10b66e80e47ed002786532f6d0baa 127.0.0.1:6382
       slots:5461-10922 (5462 slots) master
    M: 5b6f61a1e2635a14d9c11e78e26be6ca776a459c 127.0.0.1:6383
       slots:10923-16383 (5461 slots) master
    S: 3ffaf31be68e127409e616316ecece8199760557 127.0.0.1:6384
       replicates aad64e40200a01ea5cdc98bb031485586dbb7565
    S: 750ce04a97cd9d33d033abb10cb847ffb9b73474 127.0.0.1:6385
       replicates e3649a691aa10b66e80e47ed002786532f6d0baa
    S: f0e17a4a37ea0c51210227ce8695769d38e158eb 127.0.0.1:6386
       replicates 5b6f61a1e2635a14d9c11e78e26be6ca776a459c
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join.....
    >>> Performing Cluster Check (using node 127.0.0.1:6381)
    M: aad64e40200a01ea5cdc98bb031485586dbb7565 127.0.0.1:6381
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    S: f0e17a4a37ea0c51210227ce8695769d38e158eb 127.0.0.1:6386
       slots: (0 slots) slave
       replicates 5b6f61a1e2635a14d9c11e78e26be6ca776a459c
    S: 3ffaf31be68e127409e616316ecece8199760557 127.0.0.1:6384
       slots: (0 slots) slave
       replicates aad64e40200a01ea5cdc98bb031485586dbb7565
    S: 750ce04a97cd9d33d033abb10cb847ffb9b73474 127.0.0.1:6385
       slots: (0 slots) slave
       replicates e3649a691aa10b66e80e47ed002786532f6d0baa
    M: 5b6f61a1e2635a14d9c11e78e26be6ca776a459c 127.0.0.1:6383
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    M: e3649a691aa10b66e80e47ed002786532f6d0baa 127.0.0.1:6382
       slots:5461-10922 (5462 slots) master
       1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    

    At this point the cluster is up and running: three masters, each with one replica.
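
    To verify the cluster from a client's point of view, connect with redis-cli in cluster mode (-c) so it follows slot redirections; which node answers depends on the key's hash slot, so the transcript below is illustrative:

    redis-essential/redis-cli -c -p 6381
    127.0.0.1:6381> set foo bar
    -> Redirected to slot [12182] located at 127.0.0.1:6383
    OK
    127.0.0.1:6383> get foo
    "bar"
    127.0.0.1:6383> cluster info
    # expect cluster_state:ok and cluster_known_nodes:6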


Reference: https://blog.csdn.net/wolf1105/article/details/91849354
