keepalived and redis

This article walks through building high availability for nginx with keepalived, covering installation and configuration, an LVS-DR setup, and an nginx reverse-proxy configuration. It then covers building a three-node redis cluster, including sentinel mode and redis cluster deployment, for data high availability and scalability.

1. Nginx high availability with keepalived

1.1 Install keepalived

Install keepalived via apt:

apt install keepalived -y
dpkg -L keepalived   # list the files installed by the package (to locate its config files)

Copy the sample configuration into place:

cp /usr/share/doc/keepalived/samples/keepalived.conf.sample /etc/keepalived/keepalived.conf 

Configure logging; keepalived can be made to write to a dedicated log file.
On Ubuntu:

cat /etc/default/keepalived
DAEMON_ARGS="-D -S 6"             # -D: detailed log messages, -S 6: log to syslog facility local6
vi /etc/rsyslog.conf
local6.* /var/log/keepalived.log  # route facility local6 to a dedicated file

Restart the services so the logging changes take effect.
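For example, with systemd:

systemctl restart rsyslog
systemctl restart keepalived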

1.2 LVS-DR mode

Single-master LVS-DR topology:

192.168.1.103 keepalived
192.168.1.109 keepalived
192.168.1.80 web server (RS)
192.168.1.120 web server (RS)
192.168.1.188 VIP
192.168.1.153 client

The keepalived directors hold the VIP on eth0; the real servers bind the VIP on their lo interface. Set up the two real servers, 192.168.1.80 and 192.168.1.120.

Prepare the web servers and use a script to bind the VIP to the lo interface on each:

apt install apache2 -y
cat lvs_dr_rs.sh
#!/bin/bash
vip=192.168.1.188
mask='255.255.255.255'
dev=lo:1
case $1 in
  start)
	# suppress ARP for the VIP so only the director answers for it
	echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
	echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
	echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
	echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
	ifconfig $dev $vip netmask $mask
	echo "this RS server is Ready!"
	;;
  stop)
	ifconfig $dev down
	# restore default ARP behavior
	echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
	echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
	echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
	echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
	echo "this RS server is Canceled!"
	;;
esac

sh lvs_dr_rs.sh start
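Verify that the VIP is now bound to the loopback interface on each real server:

ip addr show dev lo
# expect an entry like: inet 192.168.1.188/32 ... lo:1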

keepalived configuration on 192.168.1.103:

cd /etc/keepalived/
 cat keepalived.conf 
! Configuration File for keepalived

global_defs {
    notification_email {
        2222@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id kv1
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_mcast_group4 224.0.0.18
}
include /etc/keepalived/conf.d/*.conf


root@ubuntu20:/etc/keepalived# cat conf.d/www.luo.org.conf 
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 222
    priority 100
    advert_int 1
    #preempt_delay 30
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.188 dev eth0 label eth0:1
    }
    notify_master "/etc/keepalived/conf.d/notify.sh master"
    notify_backup "/etc/keepalived/conf.d/notify.sh backup"
    notify_fault "/etc/keepalived/conf.d/notify.sh fault"
}

virtual_server 192.168.1.188 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.1.80 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.1.120 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
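All three notify_* hooks point at /etc/keepalived/conf.d/notify.sh, which the original does not list. A minimal sketch, assuming a mail command is available (the recipient address and log path are placeholders):

#!/bin/bash
# notify.sh - called by keepalived with the new VRRP state as $1
contact='root@localhost'   # placeholder recipient
notify() {
	msg="$(hostname) changed VRRP state to: $1 at $(date '+%F %T')"
	echo "$msg" >> /var/log/keepalived-notify.log
	echo "$msg" | mail -s "keepalived state: $1" "$contact"
}
case $1 in
	master|backup|fault)
		notify "$1"
		;;
	*)
		echo "Usage: $(basename "$0") {master|backup|fault}"
		exit 1
		;;
esac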

keepalived configuration on 192.168.1.109:

root@ubuntu20:/etc/keepalived# cat keepalived.conf 
! Configuration File for keepalived

global_defs {
    notification_email {
        2222@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id kv2
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_mcast_group4 224.0.0.18  # multicast group for VRRP advertisements
}
include /etc/keepalived/conf.d/*.conf


root@ubuntu20:/etc/keepalived# cat conf.d/www.luo.edu.conf
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 222
    priority 80
    advert_int 1   # interval between VRRP advertisements, in seconds
    #preempt_delay 30
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.188 dev eth0 label eth0:1  # the VIP
    }
#   unicast_src_ip 192.168.1.109  # optional: use unicast instead of multicast
#   unicast_peer {
#       192.168.1.103
#   }
    notify_master "/etc/keepalived/conf.d/notify.sh master"  # script run when this node becomes master
    notify_backup "/etc/keepalived/conf.d/notify.sh backup"  # script run when this node becomes backup
    notify_fault "/etc/keepalived/conf.d/notify.sh fault"    # script run when this node enters the fault state
}

virtual_server 192.168.1.188 80 {
    delay_loop 6   # interval between health checks of the real servers
    lb_algo rr     # scheduling algorithm
    lb_kind DR     # LVS forwarding mode: NAT|DR|TUN
    protocol TCP   # service protocol

    real_server 192.168.1.80 80 {  # real server IP and port
        weight 1                   # scheduling weight
        TCP_CHECK {
            connect_timeout 3      # connection timeout for the health check
            nb_get_retry 3         # number of retries before marking the RS down
            delay_before_retry 3   # delay between retries
            connect_port 80        # port to probe on this RS
        }
    }
    real_server 192.168.1.120 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Restart the keepalived service and verify the IPVS rules:

systemctl restart keepalived.service  
root@ubuntu20:/etc/keepalived# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.188:80 rr
  -> 192.168.1.80:80              Route   1      0          22        
  -> 192.168.1.120:80             Route   1      0          90 

Test by curling the VIP from the client:

while true ;do curl  192.168.1.188 ; sleep 1; done
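To verify failover, stop keepalived on the master and watch the VIP move to the backup:

# on 192.168.1.103 (current master)
systemctl stop keepalived
# on 192.168.1.109: the VIP should now be bound as eth0:1
ip addr show dev eth0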

1.3 High availability for an Nginx reverse proxy

192.168.1.103 keepalived + nginx
192.168.1.109 keepalived + nginx
192.168.1.80 web site
192.168.1.120 web site
192.168.1.188 VIP
192.168.1.153 client

Architecture: two keepalived + nginx nodes, with nginx reverse-proxying the two backend web servers; the VIP is configured on the keepalived nodes.

Configure nginx on 103 and 109 to proxy the backends 80 and 120:

root@ubuntu20:/etc/nginx/conf.d# cat www.luo.com.conf
upstream websrvs {
    server 192.168.1.80:80 weight=1;
    server 192.168.1.120:80 weight=1;
}
server {
    listen 80;
    location / {
        proxy_pass http://websrvs/;
    }
}
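Validate and reload nginx on both proxy nodes after editing:

nginx -t && systemctl reload nginx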

Modify the keepalived configuration on both nodes; router_id, priority, and state differ between the two nodes.

cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
    notification_email {
        222@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id kv2   # differs per node
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_mcast_group4 224.0.0.18
}
include /etc/keepalived/conf.d/*.conf

cat /etc/keepalived/conf.d/www.luo.org.conf
vrrp_script check_nginx {
   script "/etc/keepalived/conf.d/check_nginx.sh"
   interval 1    # run the check every second
   weight -30    # subtract 30 from priority while the check fails
   fall 3        # 3 consecutive failures -> considered down
   rise 2        # 2 consecutive successes -> considered up again
   timeout 2
}

vrrp_instance VI_1 {
    state BACKUP   # differs per node (MASTER on the other node)
    interface eth0
    virtual_router_id 222
    priority 100   # differs per node
    advert_int 1
    #nopreempt
    #preempt_delay 30
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.188 dev eth0 label eth0:1
    }
    notify_master "/etc/keepalived/conf.d/notify.sh master"
    notify_backup "/etc/keepalived/conf.d/notify.sh backup"
    notify_fault "/etc/keepalived/conf.d/notify.sh fault"
    track_script {
        check_nginx    # invoke the vrrp_script defined above
    }
}
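check_nginx.sh is referenced above but not listed in the original. A minimal sketch that reports nginx health via its exit code (keepalived applies the weight -30 penalty while it fails):

#!/bin/bash
# check_nginx.sh - exit 0 while an nginx process exists, non-zero otherwise
pgrep -x nginx > /dev/null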

When the nginx service on 192.168.1.103 fails, the check script drives a failover and the VIP moves to 192.168.1.109. Watch the VRRP traffic and service continuity:

tcpdump -i eth0 -nn host 224.0.0.18
while true; do curl 192.168.1.188; sleep 0.5; done

2. Build a three-node redis cluster

2.1 Redis sentinel mode

The sentinels monitor whether the master is alive; when the master fails, they hold a vote and fail over to one of the replicas.

192.168.1.121 redis redis-sentinel
192.168.1.122 redis redis-sentinel
192.168.1.123 redis redis-sentinel

Install redis:
apt install redis redis-sentinel -y
Configure replication:
Edit the configuration so that 192.168.1.121 is the master and 122/123 are replicas.
On all nodes:

vi /etc/redis/redis.conf
bind 0.0.0.0           # do NOT listen only on 127.0.0.1, or failover will not work
masterauth "123456"    # password for authenticating to the master
requirepass "123456"   # password required from clients

On the replica nodes, add replicaof 192.168.1.121 6379 to point them at the master.
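Verify replication with standard redis-cli commands:

redis-cli -a 123456 INFO replication
# on the master it should show role:master and connected_slaves:2;
# on a replica, role:slave and master_host:192.168.1.121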
Sentinel configuration:

cat /etc/redis/sentinel.conf
bind 0.0.0.0
port 26379
daemonize yes   # run as a daemon
pidfile "/var/run/sentinel/redis-sentinel.pid"
logfile "/var/log/redis/redis-sentinel.log"
dir "/var/lib/redis"

sentinel myid d6fca1380ff4beccdcec9cd2210f39a506c22c7b   # auto-generated; unique on every node

sentinel deny-scripts-reconfig yes   # forbid runtime reconfiguration of scripts
# monitor the master named "mymaster" at the given address and port; the
# trailing 2 is the quorum: how many sentinels must consider the master
# unreachable before a failover is triggered
sentinel monitor mymaster 192.168.1.123 6379 2
# milliseconds after which an unresponsive node is considered subjectively down (SDOWN)
sentinel down-after-milliseconds mymaster 3000
# timeout, in milliseconds, for the failover, including repointing all replicas to the new master
sentinel failover-timeout mymaster 18000

# password the sentinels use to authenticate to the redis nodes
sentinel auth-pass mymaster 123456

systemctl restart redis-sentinel.service

Start redis and sentinel. The master is currently 192.168.1.121; after redis on 121 is stopped, an election takes place and 192.168.1.122 becomes the new master.
tail -f /var/log/redis/redis-sentinel.log
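The current master can also be queried from any sentinel:

redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
# prints the IP and port of the master the sentinels currently agree on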

2.2 redis cluster

Redis Cluster is Redis's built-in distributed mode for scaling out and replicating data across multiple nodes. It provides high availability and horizontal scalability through sharding and automatic failover.

In Redis Cluster the key space is divided into 16384 hash slots, which are distributed across the nodes; each node owns a subset of the slots and the data that maps to them. A client may connect to any node; based on a key's slot, the node either serves the request or redirects the client to the node that owns that slot.

192.168.1.121
192.168.1.122
192.168.1.123
192.168.1.125
192.168.1.126
192.168.1.127

Install redis on every node, then edit the configuration to add the cluster settings and copy it to the other nodes:

vim /etc/redis/redis.conf
bind 0.0.0.0
masterauth "123456"
requirepass "123456"
cluster-enabled yes                  # enable cluster mode
cluster-config-file nodes-6379.conf  # cluster state file recording master/replica
                                     # relations and slot ranges; created and
                                     # maintained automatically by redis cluster
cluster-require-full-coverage no     # default is yes; no keeps the cluster serving
                                     # requests even if one node's slots are unavailable
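Restart redis on every node so the cluster settings take effect (with the apt packages on Ubuntu the unit is typically redis-server):

systemctl restart redis-server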


Create the cluster; --cluster-replicas 1 means each master gets one replica:

redis-cli -a 123456 --cluster create 192.168.1.121:6379 192.168.1.122:6379 192.168.1.123:6379 192.168.1.125:6379 192.168.1.126:6379 192.168.1.127:6379 --cluster-replicas 1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.1.126:6379 to 192.168.1.121:6379
Adding replica 192.168.1.127:6379 to 192.168.1.122:6379
Adding replica 192.168.1.125:6379 to 192.168.1.123:6379
M: af3730439ad038cc79767d56ba7dfefeeee0c107 192.168.1.121:6379
   slots:[0-5460] (5461 slots) master
M: 336b8ddd3b77c279487d6b05bf3e8a974a2d1349 192.168.1.122:6379
   slots:[5461-10922] (5462 slots) master
M: d99dc713a7b057f1e2069a94b5c2cd622e68b91e 192.168.1.123:6379
   slots:[10923-16383] (5461 slots) master
S: afba49851f3de2a917b349efe6e3a8c21584c1c6 192.168.1.125:6379
   replicates d99dc713a7b057f1e2069a94b5c2cd622e68b91e
S: 3754e0cb1b430255210932e4b04184c2747160fb 192.168.1.126:6379
   replicates af3730439ad038cc79767d56ba7dfefeeee0c107
S: b13a87d0a13874b48f4a0b2e89fdef11032be3c9 192.168.1.127:6379
   replicates 336b8ddd3b77c279487d6b05bf3e8a974a2d1349
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 192.168.1.121:6379)
M: af3730439ad038cc79767d56ba7dfefeeee0c107 192.168.1.121:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: afba49851f3de2a917b349efe6e3a8c21584c1c6 192.168.1.125:6379
   slots: (0 slots) slave
   replicates d99dc713a7b057f1e2069a94b5c2cd622e68b91e
M: d99dc713a7b057f1e2069a94b5c2cd622e68b91e 192.168.1.123:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: b13a87d0a13874b48f4a0b2e89fdef11032be3c9 192.168.1.127:6379
   slots: (0 slots) slave
   replicates 336b8ddd3b77c279487d6b05bf3e8a974a2d1349
M: 336b8ddd3b77c279487d6b05bf3e8a974a2d1349 192.168.1.122:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 3754e0cb1b430255210932e4b04184c2747160fb 192.168.1.126:6379
   slots: (0 slots) slave
   replicates af3730439ad038cc79767d56ba7dfefeeee0c107
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

redis-cli -a 123456 --cluster check 192.168.1.125:6379   # check the cluster state
redis-cli -c -a 123456   # -c enables cluster mode
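With -c, redis-cli transparently follows MOVED redirects: a command on a key whose slot lives on another node prints a "-> Redirected to slot ..." line and is retried on the owning node. For example:

redis-cli -c -a 123456 -h 192.168.1.121
# at the prompt, SET/GET any key; if its slot belongs to another master,
# the client is redirected there automatically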
To connect to a redis cluster from Python, install the redis-py-cluster module: pip3 install redis-py-cluster
