LVS + keepalived + memcached + Varnish + LNMP: Highly Available Load-Balancing Directors

Lab environment:

    web1: eth0 192.168.4.11        web2: eth0 192.168.4.12    (both web servers also need the VIP configured)

    Director lvs1: eth0 192.168.4.5    Director lvs2: eth0 192.168.4.6    (memcached is installed on each director)

    One Varnish web cache server: eth0 192.168.4.53    eth1 192.168.2.100

    The VIP shared by the web servers and the LVS directors is 192.168.4.20

    Client host used for access tests: eth1 192.168.2.5

    OS: RHEL 7
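
Putting the addresses above together, the request path in this lab is roughly:

    client 192.168.2.5
        --> Varnish proxy (eth1 192.168.2.100 / eth0 192.168.4.53)
            --> VIP 192.168.4.20 (held by lvs1 192.168.4.5, taken over by lvs2 192.168.4.6 on failure)
                --> web1 192.168.4.11 / web2 192.168.4.12 (LNMP, replying directly in DR mode)
    PHP sessions are stored in memcached on 192.168.4.5 and 192.168.4.6 (port 11211).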

Part 1: Deploy the LNMP environment on web1 and web2 and related configuration

1 The LNMP deployment itself is not repeated here; verify that the relevant services are listening: (eth0 192.168.4.11)

[root@web1 ~]# ss -antulp |grep 3306
tcp    LISTEN     0      50        *:3306                  *:*                   users:(("mysqld",pid=10216,fd=14))

[root@web1 ~]# ss -antulp | grep nginx
tcp    LISTEN     0      128       *:80                    *:*                   users:(("nginx",pid=32635,fd=6),("nginx",pid=22580,fd=6))
tcp    LISTEN     0      128       *:443                   *:*                   users:(("nginx",pid=32635,fd=7),("nginx",pid=22580,fd=7))

[root@web1 ~]# ss -antulp | grep 9000
tcp    LISTEN     0      128    127.0.0.1:9000                  *:*                   users:(("php-fpm",pid=3932,fd=0),("php-fpm",pid=2350,fd=0),("php-fpm",pid=2349,fd=0),("php-fpm",pid=2348,fd=0),("php-fpm",pid=2347,fd=0),("php-fpm",pid=2346,fd=0),("php-fpm",pid=2343,fd=6))

2 Configure the VIP on web1:

[root@web1 ~]# cd /etc/sysconfig/network-scripts/
[root@web1 network-scripts]# cp ifcfg-lo{,:0}
[root@web1 network-scripts]# vim ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.4.20
NETMASK=255.255.255.255
NETWORK=192.168.4.20
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=192.168.4.20
ONBOOT=yes
NAME=lo:0
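
For a quick, non-persistent test (an alternative to the ifcfg-lo:0 file above, not one of the original steps), the same VIP can be added with iproute2; it is lost on reboot:

[root@web1 ~]# ip addr add 192.168.4.20/32 dev lo label lo:0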

3 Edit /etc/sysctl.conf so the real servers do not answer ARP requests for the VIP (otherwise they would conflict with the director), apply the parameters, and restart the network service

[root@web1 network-scripts]# vim /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.lo.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
net.ipv4.conf.lo.arp_announce=2

[root@web1 ~]# sysctl -p
[root@web1 ~]# systemctl restart network

4 Check the VIP

[root@web1 ~]# ifconfig lo:0
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.4.20  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)

5 Deploy the Nginx test pages:

[root@web1 ~]# vim /usr/local/nginx/html/index.php
<!DOCTYPE html>
<html>
<head>

<title>Login Form in PHP with Session</title>
<link href="style.css" rel="stylesheet" type="text/css">

</head>
<body bgcolor="green">

<div id="main">
<h1>PHP Login Session Example</h1>
<div id="login">
<h2>Login Form</h2>

<form action="login.php" method="post">

  <label>UserName :</label>
  <input id="name" name="username" placeholder="username" type="text">
  <label>Password :</label>
  <input id="password" name="password" placeholder="**********" type="password">
  <input type ="submit">

</form>

</div>
</div>
</body>
</html>
[root@web1 ~]# vim /usr/local/nginx/html/home.php
<?php
session_start();

if(!isset($_SESSION['login_user'])) {
        header("location: index.php");
        exit();
}
?>

<html>
<head>
<title>Logged In</title>

</head>
<body bgcolor="red">

Welcome :  <?php echo $_SESSION['login_user'] ?>
<br>
Logged In :  <?php echo $_SESSION['logged_in'] ?>
<br>
Session ID:  <?php echo $_SESSION['id'] ?>

</body>
</html>
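
The login form in index.php posts to login.php, which the original article does not include. A minimal sketch of what such a page could look like for this test is shown below; the hard-coded credentials admin/123456 and the exact session fields are illustrative assumptions chosen to match what home.php reads, not the author's original file.

[root@web1 ~]# vim /usr/local/nginx/html/login.php
<?php
// Hypothetical test-only login handler: accept a fixed username/password,
// store the session (php-fpm writes it to memcached per www.conf), and
// send the user on to home.php.
session_start();

$user = isset($_POST['username']) ? $_POST['username'] : '';
$pass = isset($_POST['password']) ? $_POST['password'] : '';

if ($user === 'admin' && $pass === '123456') {
        $_SESSION['login_user'] = $user;                  // read back by home.php
        $_SESSION['logged_in']  = date('Y-m-d H:i:s');
        $_SESSION['id']         = session_id();
        header("location: home.php");
} else {
        header("location: index.php");
}
exit();
?>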

6 Edit the Nginx configuration file and reload

[root@web1 ~]# vim /usr/local/nginx/conf/nginx.conf
http {
    include       mime.types;
    default_type  application/octet-stream;
    ……………………
    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.php index.html index.htm;
        }

        location ~ \.php$ {
            root           html;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            include        fastcgi.conf;
        }
    }
}
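
Before reloading, the configuration syntax can be checked first (an optional extra step):

[root@web1 ~]# /usr/local/nginx/sbin/nginx -t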

[root@web1 ~]# /usr/local/nginx/sbin/nginx -s reload

7 Edit the PHP-FPM pool file so PHP sessions are stored in memcached. Both memcached servers go on a single comma-separated save_path line; repeating the key would leave only the last value in effect.

[root@web1 ~]# vim /etc/php-fpm.d/www.conf   ### edit the session lines at the end of the file
php_value[session.save_handler] = memcache
php_value[session.save_path] = "tcp://192.168.4.5:11211, tcp://192.168.4.6:11211"

[root@web1 ~]# systemctl restart php-fpm

8 Install the PHP extension used to talk to memcached (restart php-fpm again afterwards so the extension is loaded)

[root@web1 ~]# yum -y install  php-pecl-memcache
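
A quick way to confirm the extension is registered (assuming the CLI and php-fpm share /etc/php.d, as the stock RHEL packages do), then restart php-fpm so it is loaded:

[root@web1 ~]# php -m | grep memcache
[root@web1 ~]# systemctl restart php-fpm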

9 Repeat all of the above steps on web2 (eth0 192.168.4.12)

 

Part 2: Deploy the two LVS directors

1 Install the keepalived package (eth0 192.168.4.5)

[root@lvs1 ~]# yum -y install keepalived

2 Edit the configuration file

[root@lvs1 ~]# vim /etc/keepalived/keepalived.conf


global_defs {
   notification_email {
     admin@tarena.com.cn                 # alert recipient mailbox
   }
   notification_email_from ka@localhost  # sender address
   smtp_server 127.0.0.1                 # mail server
   smtp_connect_timeout 30
   router_id LVS1                        # router ID of this node
}

vrrp_instance VI_1 {
    state MASTER                  # this node is the MASTER
    interface eth0                # network interface to bind to
    virtual_router_id 51          # VRID must be identical on master and backup
    priority 100                  # server priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111            # password must be identical on master and backup
    }
    virtual_ipaddress {
        192.168.4.20              # the VIP
    }
}
virtual_server 192.168.4.20 80 {  # ipvsadm rules for the VIP
    delay_loop 6
    lb_algo wrr                   # LVS scheduling algorithm: weighted round robin
    lb_kind DR                    # LVS forwarding mode: direct routing
#    persistence_timeout 50
    protocol TCP

    real_server 192.168.4.11 80 { # real IP of a backend web server
        weight 1                  # weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.4.12 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

3 Install the ipvsadm command-line management tool

[root@lvs1 ~]# yum -y install ipvsadm

4 Start the keepalived service

[root@lvs1 ~]# systemctl start keepalived
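
Optionally (not shown in the original), enable the service so it comes back after a reboot:

[root@lvs1 ~]# systemctl enable keepalived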

5 View the LVS rule list and save the rules

[root@lvs1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.4.20:80 wrr
  -> 192.168.4.11:80              Route   1      0          0         
  -> 192.168.4.12:80              Route   2      0          0 

[root@lvs1 ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm

6 Install memcached and start the service

[root@lvs1 ~]# yum -y install memcached
[root@lvs1 ~]# systemctl restart memcached
[root@lvs1 ~]# ss -antulp | grep memcached
udp    UNCONN     0      0         *:11211                 *:*                   users:(("memcached",pid=5110,fd=28))
udp    UNCONN     0      0        :::11211                :::*                   users:(("memcached",pid=5110,fd=29))
tcp    LISTEN     0      128       *:11211                 *:*                   users:(("memcached",pid=5110,fd=26))
tcp    LISTEN     0      128      :::11211                :::*                   users:(("memcached",pid=5110,fd=27))
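
A quick end-to-end check from one of the web servers that memcached is reachable (assuming nc from the nmap-ncat package is installed on web1; the expected replies are STORED, the value hello, and END):

[root@web1 ~]# printf 'set testkey 0 120 5\r\nhello\r\nget testkey\r\nquit\r\n' | nc 192.168.4.5 11211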

7 Repeat all of the above on the lvs2 director; its keepalived configuration differs as follows: (eth0 192.168.4.6)

[root@lvs2 ~]# vim /etc/keepalived/keepalived.conf


global_defs {
   notification_email {
     admin@tarena.com.cn                 # alert recipient mailbox
   }
   notification_email_from ka@localhost  # sender address
   smtp_server 127.0.0.1                 # mail server
   smtp_connect_timeout 30
   router_id LVS2                        # router ID of this node
}

vrrp_instance VI_1 {
    state BACKUP                  # the standby node is BACKUP
    interface eth0                # network interface to bind to
    virtual_router_id 51          # VRID must be identical on master and backup
    priority 50                   # lower priority than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111            # password must be identical on master and backup
    }
    virtual_ipaddress {
        192.168.4.20              # the VIP
    }
}
virtual_server 192.168.4.20 80 {  # ipvsadm rules for the VIP
    delay_loop 6
    lb_algo wrr                   # LVS scheduling algorithm: weighted round robin
    lb_kind DR                    # LVS forwarding mode: direct routing
#    persistence_timeout 50
    protocol TCP

    real_server 192.168.4.11 80 { # real IP of a backend web server
        weight 1                  # weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.4.12 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

8 Start the keepalived service on the lvs2 director

[root@lvs2 ~]# systemctl restart keepalived

9 View the LVS rule list and save the rules

[root@lvs2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.4.20:80 wrr
  -> 192.168.4.11:80              Route   1      0          0         
  -> 192.168.4.12:80              Route   2      0          0 

[root@lvs2 ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm

10 Check the addresses on the lvs2 director: the VIP 192.168.4.20 does not appear here because lvs2 has the lower priority

[root@lvs2 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:73:86:d9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.6/24 brd 192.168.4.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ef7f:39e5:3bff:29ea/64 scope link 
       valid_lft forever preferred_lft forever

11 Check the addresses on the lvs1 director: the VIP 192.168.4.20 is present

[root@lvs1 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:53:98:c9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.5/24 brd 192.168.4.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.4.20/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::835f:6bfe:e3d3:82c9/64 scope link 
       valid_lft forever preferred_lft forever
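
To exercise the failover itself (a test the original leaves implicit), stop keepalived on lvs1, confirm the VIP has moved to lvs2, then start it again and the VIP returns to lvs1:

[root@lvs1 ~]# systemctl stop keepalived
[root@lvs2 ~]# ip a s eth0 | grep 192.168.4.20
[root@lvs1 ~]# systemctl start keepalived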

Part 3: Deploy the Varnish web cache server (eth0 192.168.4.53, eth1 192.168.2.100)

1 Compile and install the software

[root@proxy ~]# yum -y install gcc readline-devel    # build dependencies
[root@proxy ~]# yum -y install ncurses-devel         # build dependencies
[root@proxy ~]# yum -y install pcre-devel            # build dependencies
[root@proxy ~]# yum -y install \
python-docutils-0.11-0.2.20130715svn7687.el7.noarch.rpm         # build dependency (local RPM)
[root@proxy ~]# useradd -s /sbin/nologin varnish                # create the service account
[root@proxy ~]# tar -xf varnish-5.2.1.tar.gz
[root@proxy ~]# cd varnish-5.2.1
[root@proxy varnish-5.2.1]# ./configure
[root@proxy varnish-5.2.1]# make && make install

2 Copy the example VCL configuration file

[root@proxy varnish-5.2.1]# cp  etc/example.vcl   /usr/local/etc/default.vcl

3 Edit the proxy configuration file

[root@proxy ~]# vim  /usr/local/etc/default.vcl
backend default {
    .host = "192.168.4.20";
    .port = "80";
}

4 Start the service

[root@proxy ~]# varnishd -f /usr/local/etc/default.vcl
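
With no -a option given, varnishd listens on port 80 by default, so client requests to 192.168.2.100 are forwarded to the VIP. A quick local check that it is up (the Via and X-Varnish response headers come from Varnish):

[root@proxy ~]# ss -antulp | grep varnishd
[root@proxy ~]# curl -I http://127.0.0.1/index.php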

Part 4: Client access and testing (eth1 192.168.2.5)
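
A possible test sequence from the client side (prompts and page names are illustrative; adjust to the pages actually deployed):

[root@client ~]# curl -I http://192.168.2.100/index.php     # headers should include Via/X-Varnish from the cache
[root@client ~]# curl http://192.168.2.100/index.php        # repeat; cache misses go through the VIP to web1/web2
[root@lvs1 ~]# ipvsadm -Ln --stats                          # connection counters show the wrr distribution
[root@lvs1 ~]# systemctl stop keepalived                    # simulate a director failure; lvs2 takes over the VIP
[root@client ~]# curl http://192.168.2.100/index.php        # the site remains reachable through lvs2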

Test results: keepalived gives the two directors high availability, and LVS load-balances requests across the backend web server cluster, while keepalived's TCP checks remove failed real servers from the rules. memcached solves the problem of sessions not being shared within the cluster, so a login on web1 is still valid when the next request lands on web2. Varnish sits in front as a web cache (the role a CDN edge node plays), answering repeated requests from cache, which reduces backend load and network congestion and improves response times and hit rates.

 
