Big Data Processing Framework: Flume + Redis 4.0.11 Cluster

The previous article covered the Storm + Kafka + ZooKeeper cluster; this one adds Flume and a Redis cluster.

Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store.
Flume's use is not limited to log data aggregation. Since data sources are customizable, Flume can be used to transport massive quantities of event data, including but not limited to network traffic data, social-media-generated data, email messages, and nearly any possible data source.

I. Installation and Configuration:

(1) Prerequisites: the Kafka + ZooKeeper + Storm cluster environment is already installed.
(2) Flume: apache-flume-1.8.0-bin.tar.gz, downloadable from a mirror: wget http://mirror.bit.edu.cn/apache/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz (make sure the versions match; Flume 1.8.0 requires JDK 1.8 or later). You can find other versions at http://mirror.bit.edu.cn/apache/flume/.
(3) Redis: redis-4.0.11.tar.gz, via wget http://download.redis.io/releases/redis-4.0.11.tar.gz (browse http://download.redis.io/releases/ for other versions).
(4) Extract the archives and configure the environment variables: vi /etc/profile

# JAVA_HOME
export JAVA_HOME=/usr/local/java/jdk1.8.0_191
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export ZOOKEEPER_HOME=/usr/local/java/zookeeper-3.4.13
export PATH=$PATH:$ZOOKEEPER_HOME/bin/:$JAVA_HOME/bin
#KAFKA_HOME
export KAFKA_HOME=/usr/local/java/kafka_2.11-2.0.0
export PATH=$PATH:$KAFKA_HOME/bin
# STORM_HOME
export STORM_HOME=/usr/local/java/apache-storm-1.2.2
export PATH=.:${JAVA_HOME}/bin:${ZOOKEEPER_HOME}/bin:${STORM_HOME}/bin:$PATH

#FLUME_HOME
export FLUME_HOME=/usr/local/java/flume/apache-flume-1.8.0-bin
export PATH=$PATH:$FLUME_HOME/bin

Reload the profile for the environment variables to take effect: source /etc/profile

1. Configure flume-env.sh
[root@hadoop conf]# pwd
/usr/local/java/flume/apache-flume-1.8.0-bin/conf
[root@hadoop conf]# cp -r flume-env.sh.template flume-env.sh
[root@hadoop conf]# vi flume-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# If this file is placed at FLUME_CONF_DIR/flume-env.sh, it will be sourced
# during Flume startup.

# Enviroment variables can be set here.

# export JAVA_HOME=/usr/lib/jvm/java-8-oracle   (set this to your JAVA_HOME directory)
 export JAVA_HOME=/usr/local/java/jdk1.8.0_191
# Give Flume more memory and pre-allocate, enable remote monitoring via JMX
# export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"

# Let Flume write raw event data and configuration information to its log files for debugging
# purposes. Enabling these flags is not recommended in production,
# as it may result in logging sensitive user information or encryption secrets.
# export JAVA_OPTS="$JAVA_OPTS -Dorg.apache.flume.log.rawdata=true -Dorg.apache.flume.log.printconfig=true "
# Note that the Flume conf directory is always included in the classpath.
#FLUME_CLASSPATH=""
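
With FLUME_HOME set and the profile sourced, a quick sanity check of the install:

[root@hadoop conf]# flume-ng version

For a taste of what an agent configuration looks like (this is the canonical netcat-to-logger example from the Flume user guide; the cluster setup in this article does not itself require it), a single agent a1 wires a source to a sink through a channel:

# example.conf: a single-node Flume agent (sketch from the official user guide)
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source: netcat listening on localhost:44444
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# sink: write events to the log
a1.sinks.k1.type = logger

# channel: buffer events in memory between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

It can be started with:

[root@hadoop apache-flume-1.8.0-bin]# bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console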
2. Copy Flume to the other two virtual machines
[root@hadoop flume]# scp -r apache-flume-1.8.0-bin root@192.168.164.134:/usr/local/java/flume/
root@192.168.164.134's password: 

[root@hadoop flume]# scp -r apache-flume-1.8.0-bin root@192.168.164.135:/usr/local/java/flume/
root@192.168.164.135's password: 
3. Configure the environment variables on each machine, then reload with source /etc/profile
[root@hadoop flume]# vi /etc/profile
# append the same JAVA/ZooKeeper/Kafka/Storm entries as on the first machine, plus:

#FLUME_HOME
export FLUME_HOME=/usr/local/java/flume/apache-flume-1.8.0-bin
export PATH=$PATH:$FLUME_HOME/bin

II. A Redis cluster needs at least 3 master nodes to function (here: 3 masters and 3 replicas). Note that cluster mode is supported from Redis 3.0 onward.

(1) Download redis-4.0.11 from the official site https://redis.io/, then extract it: tar -zxvf redis-4.0.11.tar.gz
(2) Enter redis-4.0.11:

#on VM 192.168.164.133
[root@hadoop redis]# cd redis-4.0.11
#enter the extracted redis-4.0.11/src directory and run make install
[root@hadoop src]# pwd
/usr/local/java/redis/redis-4.0.11/src
[root@hadoop src]# make install
    CC Makefile.dep
Hint: It's a good idea to run 'make test' ;)

    INSTALL install
    INSTALL install
    INSTALL install
    INSTALL install
    INSTALL install
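
# make install also copies the binaries into /usr/local/bin (alongside src/),
# so a quick check that the build succeeded:
[root@hadoop src]# redis-server --version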

#create the redis node directories
[root@hadoop redis-cluster]# mkdir -p /usr/local/java/redis/redis-4.0.11/redis-cluster/7000
[root@hadoop redis-cluster]# mkdir -p /usr/local/java/redis/redis-4.0.11/redis-cluster/7001

[root@hadoop redis-cluster]# cp /usr/local/java/redis/redis-4.0.11/redis.conf /usr/local/java/redis/redis-4.0.11/redis-cluster/7000
[root@hadoop redis-cluster]# cp /usr/local/java/redis/redis-4.0.11/redis.conf /usr/local/java/redis/redis-4.0.11/redis-cluster/7001

(3) Enter redis-cluster/7000 and redis-cluster/7001 respectively and modify the redis.conf settings.

Do not set a password, or the nodes will fail to connect when the cluster starts.
[root@hadoop redis-4.0.11]# cd redis-cluster/7000/
[root@hadoop 7000]# vi redis.conf 
################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 lookback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1    //*************** change this to the machine's own IP (or hostname)
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
#    "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes

# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 7000   // set the port to match the directory: 7000, 7001, 7002, 7003, 7004, 7005


################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.
supervised no

# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_7000.pid   // pidfile matching the node: 7000, 7001, 7002, 7003, 7004, 7005

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.



################################ REDIS CLUSTER  ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
 cluster-enabled yes   // enable cluster mode (remove the leading # to uncomment)

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
 cluster-config-file nodes-7000.conf  // cluster config file, generated automatically on first startup; one per node: 7000, 7001, 7002, 7003, 7004, 7005

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
 cluster-node-timeout 15000      // node timeout, default 15 seconds; adjust as needed

# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
#    in order to try to give an advantage to the slave with the best
#    replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the slave will not try to failover
#    at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly yes    // enable AOF logging if you need it; it records a log entry for every write operation

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always

(4) Summary of the settings to change:

port  7000                                // port matching the directory: 7000, 7001, 7002, 7003, 7004, 7005
bind <local-ip>                           // the machine's own IP or hostname
daemonize    yes                          // run redis in the background
pidfile  /var/run/redis_7000.pid          // pidfile matching the node: 7000, 7001, 7002, 7003, 7004, 7005
cluster-enabled  yes                      // enable cluster mode
cluster-config-file  nodes-7000.conf      // cluster config file, generated automatically on first startup: 7000, 7001, 7002, 7003, 7004, 7005
cluster-node-timeout  15000               // node timeout, default 15 seconds
appendonly  yes                           // enable AOF logging if needed; records a log entry per write

(5) Copy the current redis-4.0.11 directory to the other two servers, then adjust them for 7002/7003 and 7004/7005.

# copy to the other two servers, entering the password when prompted
[root@hadoop redis]# scp -r redis-4.0.11 root@192.168.164.134:/usr/local/java/redis/
root@192.168.164.134's password: 
#-------------------------------
[root@hadoop redis]# scp -r redis-4.0.11 root@192.168.164.135:/usr/local/java/redis/
root@192.168.164.135's password: 

(6) On each of the other servers, enter redis-cluster/ and rename the directories (7002/7003 on the second machine, 7004/7005 on the third):

[root@hadoop redis-cluster]# mv 7000 7002
[root@hadoop redis-cluster]# mv 7001 7003
[root@hadoop redis-cluster]# ls
7002  7003

(7) Enter 7002 and 7003 and update the corresponding redis.conf (port, pidfile, cluster-config-file).
(8) Make the same changes on the remaining virtual machine (7004/7005); a scripted alternative is sketched just below.
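
Since only three values differ between the node configs (port, pidfile, cluster-config-file), a small shell loop can apply the edits instead of changing each file by hand. A sketch for the second machine, assuming the directory layout above (on the third machine use 7004 7005 and an offset of 4); the bind address still has to be changed to that machine's own IP separately:

cd /usr/local/java/redis/redis-4.0.11/redis-cluster
for p in 7002 7003; do
  old=$((p - 2))   # the port number the copied config still contains
  sed -i -e "s/^port ${old}/port ${p}/" \
         -e "s/redis_${old}\.pid/redis_${p}.pid/" \
         -e "s/nodes-${old}\.conf/nodes-${p}.conf/" ${p}/redis.conf
done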

#redis is built from source, so install gcc first, then make. Install the compilers with:
yum -y install gcc gcc-c++ libstdc++-devel
Install the Ruby environment: remove the old ruby 2.0.0 and install ruby 2.2.2 or later.
[root@hadoop ~]# yum install centos-release-scl-rh
[root@hadoop ~]# yum install rh-ruby23  -y    // a plain yum install works
[root@hadoop ~]# scl  enable  rh-ruby23 bash    // this step is required
[root@hadoop ~]# ruby -v    // check the installed version
Alternatively, update everything to avoid incompatibilities:
#run on all machines; this takes a while
yum -y update
Then run gem install redis:
[root@hadoop redis-4.0.11]# gem install redis
Successfully installed redis-4.0.3
Parsing documentation for redis-4.0.3
Done installing documentation for redis after 4 seconds
1 gem installed
[root@hadoop redis-4.0.11]# 
Start the nodes on each of the three machines (each machine starts its own pair of ports):
[root@hadoop redis-4.0.11]# pwd
/usr/local/java/redis/redis-4.0.11
[root@hadoop redis-4.0.11]# ./src/redis-server redis-cluster/7000/redis.conf
[root@hadoop redis-4.0.11]# ./src/redis-server redis-cluster/7001/redis.conf
[root@hadoop redis-4.0.11]# ./src/redis-server redis-cluster/7002/redis.conf
[root@hadoop redis-4.0.11]# ./src/redis-server redis-cluster/7003/redis.conf
[root@hadoop redis-4.0.11]# ./src/redis-server redis-cluster/7004/redis.conf
[root@hadoop redis-4.0.11]# ./src/redis-server redis-cluster/7005/redis.conf

#successful startup looks like this:
[root@hadoop redis-4.0.11]# ./src/redis-server redis-cluster/7003/redis.conf 
1293:C 27 Nov 18:51:15.564 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1293:C 27 Nov 18:51:15.564 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=1293, just started
1293:C 27 Nov 18:51:15.564 # Configuration loaded
[root@hadoop redis-4.0.11]# 
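
A quick way to confirm a node is up (using the machine's own bind IP, since 127.0.0.1 is no longer bound):

[root@hadoop redis-4.0.11]# ./src/redis-cli -h 192.168.164.133 -p 7000 ping
PONG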

Note: if the firewall is enabled on the VMs, the corresponding ports must be opened (on each machine, its own pair):
firewall-cmd --zone=public --add-port=7000/tcp --permanent
firewall-cmd --zone=public --add-port=7001/tcp --permanent

firewall-cmd --zone=public --add-port=7002/tcp --permanent
firewall-cmd --zone=public --add-port=7003/tcp --permanent

firewall-cmd --zone=public --add-port=7004/tcp --permanent
firewall-cmd --zone=public --add-port=7005/tcp --permanent

firewall-cmd --reload    # reload the firewall rules
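Redis Cluster nodes also communicate over a cluster bus port, which is always the data port plus 10000 (17000-17005 here). If the firewall stays on, those ports must be opened too, or the nodes will never manage to meet; e.g. on the first machine:

firewall-cmd --zone=public --add-port=17000/tcp --permanent
firewall-cmd --zone=public --add-port=17001/tcp --permanent
firewall-cmd --reload

and likewise 17002/17003 and 17004/17005 on the other two machines.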
Create the cluster
Note: run this on any ONE machine only, not on every machine; one is enough. From the redis-4.0.11 directory:
[root@hadoop redis-4.0.11]# ./src/redis-trib.rb create  --replicas 1 192.168.164.133:7000 192.168.164.133:7001 192.168.164.134:7002 192.168.164.134:7003 192.168.164.135:7004 192.168.164.135:7005 
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.164.133:7000
192.168.164.134:7002
192.168.164.135:7004
Adding replica 192.168.164.134:7003 to 192.168.164.133:7000
Adding replica 192.168.164.135:7005 to 192.168.164.134:7002
Adding replica 192.168.164.133:7001 to 192.168.164.135:7004
M: c0cd9247f7c71d034370106e554c3acdb15d98e9 192.168.164.133:7000
   slots:0-5460 (5461 slots) master
S: 4d1f6c35c278ad9d60f2694c24b8a701a73e5b89 192.168.164.133:7001
   replicates b1ec42c9bb72e5370af6f61afaa839e830e9af3d
M: 2eaf76a8f769210546261957a5a64711173eb372 192.168.164.134:7002
   slots:5461-10922 (5462 slots) master
S: 95cbe7d9dcdf54f3b01fac279d9401f691bc8ea2 192.168.164.134:7003
   replicates c0cd9247f7c71d034370106e554c3acdb15d98e9
M: b1ec42c9bb72e5370af6f61afaa839e830e9af3d 192.168.164.135:7004
   slots:10923-16383 (5461 slots) master
S: 39137ae2249b416e4941856b33d9bc9c3c8a7f73 192.168.164.135:7005
   replicates 2eaf76a8f769210546261957a5a64711173eb372
# type yes
Can I set the above configuration? (type 'yes' to accept): yes

*** Aborting...
# the first attempt aborted; rerunning the same command then succeeded:
[root@hadoop redis-4.0.11]# ./src/redis-trib.rb create  --replicas 1 192.168.164.133:7000 192.168.164.133:7001 192.168.164.134:7002 192.168.164.134:7003 192.168.164.135:7004 192.168.164.135:7005 
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.164.133:7000
192.168.164.134:7002
192.168.164.135:7004
Adding replica 192.168.164.134:7003 to 192.168.164.133:7000
Adding replica 192.168.164.135:7005 to 192.168.164.134:7002
Adding replica 192.168.164.133:7001 to 192.168.164.135:7004
M: c0cd9247f7c71d034370106e554c3acdb15d98e9 192.168.164.133:7000
   slots:0-5460 (5461 slots) master
S: 4d1f6c35c278ad9d60f2694c24b8a701a73e5b89 192.168.164.133:7001
   replicates b1ec42c9bb72e5370af6f61afaa839e830e9af3d
M: 2eaf76a8f769210546261957a5a64711173eb372 192.168.164.134:7002
   slots:5461-10922 (5462 slots) master
S: 95cbe7d9dcdf54f3b01fac279d9401f691bc8ea2 192.168.164.134:7003
   replicates c0cd9247f7c71d034370106e554c3acdb15d98e9
M: b1ec42c9bb72e5370af6f61afaa839e830e9af3d 192.168.164.135:7004
   slots:10923-16383 (5461 slots) master
S: 39137ae2249b416e4941856b33d9bc9c3c8a7f73 192.168.164.135:7005
   replicates 2eaf76a8f769210546261957a5a64711173eb372
Can I set the above configuration? (type 'yes' to accept): yes          
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 192.168.164.133:7000)
M: c0cd9247f7c71d034370106e554c3acdb15d98e9 192.168.164.133:7000
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 39137ae2249b416e4941856b33d9bc9c3c8a7f73 192.168.164.135:7005
   slots: (0 slots) slave
   replicates 2eaf76a8f769210546261957a5a64711173eb372
M: b1ec42c9bb72e5370af6f61afaa839e830e9af3d 192.168.164.135:7004
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 4d1f6c35c278ad9d60f2694c24b8a701a73e5b89 192.168.164.133:7001
   slots: (0 slots) slave
   replicates b1ec42c9bb72e5370af6f61afaa839e830e9af3d
M: 2eaf76a8f769210546261957a5a64711173eb372 192.168.164.134:7002
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 95cbe7d9dcdf54f3b01fac279d9401f691bc8ea2 192.168.164.134:7003
   slots: (0 slots) slave
   replicates c0cd9247f7c71d034370106e554c3acdb15d98e9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@hadoop redis-4.0.11]# 

Verifying the cluster

The -c flag makes redis-cli connect in cluster mode; because redis.conf binds to a specific IP, the -h flag cannot be omitted, and -p gives the port.
Set a key on the 7000 node (192.168.164.133):

[root@hadoop redis-4.0.11]# ./src/redis-cli -h 192.168.164.133 -c -p 7000
192.168.164.133:7000> set name www.baidu.com
-> Redirected to slot [5798] located at 192.168.164.134:7002
OK
192.168.164.134:7002> get name
"www.baidu.com"
192.168.164.134:7002> 

We can fetch this key from another machine:

[root@hadoop redis-4.0.11]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.164.135  netmask 255.255.255.0  broadcast 192.168.164.255
        inet6 fe80::2a3f:383c:75d9:52ff  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:3a:da:02  txqueuelen 1000  (Ethernet)
        RX packets 221642  bytes 301445640 (287.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 47111  bytes 18194804 (17.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1375  bytes 2317673 (2.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1375  bytes 2317673 (2.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@hadoop redis-4.0.11]# ./src/redis-cli -h 192.168.164.135 -c -p 7005
192.168.164.135:7005> get name
-> Redirected to slot [5798] located at 192.168.164.134:7002
"www.baidu.com"
192.168.164.134:7002> 

The get name was redirected to the 7002 node on 192.168.164.134, which shows the cluster is working.
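
The redirection target follows from the key's hash slot: Redis Cluster maps every key to slot CRC16(key) mod 16384, and name hashes to slot 5798, which falls in the 5461-10922 range served by the 7002 master. You can ask any node for a key's slot directly:

[root@hadoop redis-4.0.11]# ./src/redis-cli -h 192.168.164.133 -p 7000 cluster keyslot name
(integer) 5798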

Shutting down the cluster

pkill redis
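
pkill kills the processes outright; a gentler alternative is to ask each node to shut down cleanly (a sketch, run on each machine against its own ports and bind IP):

for p in 7000 7001; do ./src/redis-cli -h 192.168.164.133 -p $p shutdown; done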

Node management commands
cluster meet <ip> <port>: add the node at ip:port to the cluster, making it a member.
cluster forget <node_id>: remove the node identified by node_id from the cluster.
cluster replicate <node_id>: turn the current node into a replica of the node identified by node_id.
cluster saveconfig: save the node's cluster configuration file to disk.
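
All of these are issued through redis-cli against any node; two read-only checks that are useful day to day:

./src/redis-cli -h 192.168.164.133 -p 7000 cluster info     # overall state, slot coverage, known nodes
./src/redis-cli -h 192.168.164.133 -p 7000 cluster nodes    # the node table with IDs, roles, and slot ranges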

If startup fails, see: https://blog.csdn.net/liruizi/article/details/84572260
