Nginx

What are forward and reverse proxy servers?

Forward proxy server

  • A server that sits between the client and the target server. The client sends the proxy a request that names the target server; the proxy forwards the request to the target, fetches the content, and returns it to the client. It acts on behalf of the client — when people say "proxy server" they usually mean a forward proxy

Reverse proxy server

  • A server that sits between clients and the backend servers. The client sends its request to the proxy as if it were the origin server; the proxy forwards it to one of the real backend servers, gets the content, and returns it to the client. It acts on behalf of the servers, hiding the real servers from clients

Installing nginx

  • Install the build dependencies
yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel
  • Unpack the source tarball
tar -zxvf nginx-1.18.0.tar.gz
  • Build and install
./configure --prefix=<install dir>  # without --prefix, nginx installs to /usr/local/nginx
make && make install
  • Start
    nginx listens on port 80 by default; a non-root user gets a permission error binding it.
    In that case change the listen port to an unprivileged one such as 8088, then forward port 80 to it:
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8088
Go to the sbin directory and start it
./nginx
  • Common commands
./nginx  # start with the default config file

./nginx -s reload  # reload the default config file without dropping connections

./nginx -c /usr/local/nginx/conf/nginx.conf  # start with a specific config file

./nginx -s stop  # stop

# killing the processes directly: nginx has a master process and worker processes; killing the master is enough
ps -ef | grep "nginx"
kill -9 PID

Configuration file walkthrough


#user  nobody;  # user (and group) the worker processes run as
worker_processes  1;  # number of worker processes (usually matches the CPU core count)

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
	# the use directive selects the I/O event model; nginx supports select, poll, kqueue, epoll, etc.
	# epoll is Linux-only while kqueue is for BSD; on Linux, epoll is the preferred mode
	use epoll;  # asynchronous, non-blocking
	# as a reverse proxy, the maximum concurrency is roughly worker_connections * worker_processes / 2,
	# because each proxied request holds two connections: one to the client and one to the backend
	worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on; # enable zero-copy file transfers
    #tcp_nopush     on; # send headers and data in fewer network packets

    #keepalive_timeout  0;
    keepalive_timeout  65; # how long an idle client connection is kept open before the server closes it

    #gzip  on;

    server {
        #listen       80;
        listen       8088;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html; # document root
            index  index.html index.htm; # default index files
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html; 
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}
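
The worker_connections note in the events block implies a simple capacity rule of thumb. Here is a back-of-the-envelope sketch (my own illustrative helper, not anything nginx ships; real limits also depend on file-descriptor and OS settings):

```python
def max_concurrency(worker_processes: int, worker_connections: int,
                    reverse_proxy: bool = True) -> int:
    """Rough upper bound on concurrent clients for one nginx instance.

    As a reverse proxy each request holds two connections (client<->nginx
    and nginx<->upstream), which halves the budget.
    """
    total = worker_processes * worker_connections
    return total // 2 if reverse_proxy else total

# with the defaults above: 1 worker x 1024 connections
print(max_concurrency(1, 1024))         # -> 512 proxied clients
print(max_concurrency(4, 1024, False))  # -> 4096 when only serving static files
```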

Load balancing

  • Example configuration
upstream lbs {
   server 192.1.1.101:8080;
   server 192.1.1.101:8081;
}
​
location /api/ {
    proxy_pass http://lbs;
    proxy_redirect default;
}

Common nginx load-balancing strategies

  • Round robin across nodes (the default)
upstream lbs {
   server 192.1.1.101:8080;
   server 192.1.1.101:8081;
}
  • weight: weighted distribution
upstream lbs {
   server 192.1.1.101:8080 weight=5;
   server 192.1.1.101:8081 weight=10;
}
  • ip_hash: requests from the same client IP always go to the same server
upstream lbs {
	ip_hash;
   server 192.1.1.101:8080 weight=5;
   server 192.1.1.101:8081 weight=10;
}
  • down: the marked server temporarily takes no traffic
upstream lbs {
   server 192.1.1.101:8080 weight=5 down;
   server 192.1.1.101:8081 weight=10;
}
  • backup: the server only receives requests once all non-backup servers are down, so it carries the least load and can be a lower-spec machine
upstream lbs {
   server 192.1.1.101:8080 weight=5 backup;
   server 192.1.1.101:8081 weight=10;
}
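
nginx spreads weighted picks out over time using a smooth weighted round-robin scheme. A simplified sketch of the idea (the algorithm is real, but this helper and the server names are illustrative):

```python
def smooth_wrr(servers, n):
    """servers: dict of name -> weight. Returns the first n picks,
    interleaving servers in proportion to their weights."""
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        # each round, every server gains its weight...
        for name, weight in servers.items():
            current[name] += weight
        # ...the current leader is picked and pays back the total
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# weight=5 vs weight=10: the heavier server gets two of every three requests,
# interleaved rather than in bursts
print(smooth_wrr({"192.1.1.101:8080": 5, "192.1.1.101:8081": 10}, 6))
```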

Cross-origin requests (CORS)

  • Enabling CORS in nginx
location / {
    add_header 'Access-Control-Allow-Origin' $http_origin; # echo the request's Origin header
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Headers' 'DNT,web-token,app-token,Authorization,Accept,Origin,Keep-Alive,User-Agent,X-Mx-ReqToken,X-Data-Type,X-Auth-Token,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    add_header Access-Control-Allow-Methods 'GET,POST,OPTIONS';
​
    # answer preflight requests directly instead of proxying them to the backend
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain; charset=utf-8';
        add_header 'Content-Length' 0;
        return 200;
    }
  
}
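
The control flow of that block — echo the request's Origin and answer OPTIONS preflights directly instead of proxying them — can be modelled in a few lines (a toy model for illustration, not nginx's actual implementation):

```python
def handle_cors(method, origin):
    """Returns (status, headers, forwarded), mimicking the nginx block above;
    forwarded is False when the proxy answers the request itself."""
    headers = {
        "Access-Control-Allow-Origin": origin,  # $http_origin
        "Access-Control-Allow-Credentials": "true",
        "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
    }
    if method == "OPTIONS":  # preflight: short-circuit, skip the backend
        headers["Access-Control-Max-Age"] = "1728000"
        return 200, headers, False
    return 200, headers, True  # normal request: pass it to the backend

status, headers, forwarded = handle_cors("OPTIONS", "https://app.example.com")
print(status, forwarded)  # 200 False: the preflight never reaches the backend
```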

Path matching: applying nginx location rules

  • Regex basics
^ anchors the start
$ anchors the end

^/api/user$
  • Exact match
location = /uri
  • Prefix match
location /uri
  • Match any URI beginning with /uri/ and stop searching (regex locations are skipped)
location ^~ /uri/
  • Catch-all match
location /
  • Regex match
case-sensitive match (~)
case-insensitive match (~*)
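
The precedence among these modifiers (exact beats ^~, ^~ beats regex, regex beats a plain prefix) can be sketched as follows — an illustrative model of the documented matching order, not nginx source:

```python
import re

def match_location(uri, locations):
    """locations: list of (modifier, pattern); modifier is '=', '^~', '~', '~*' or ''.
    Returns the pattern nginx would select for the URI."""
    # 1. an exact match wins immediately
    for mod, pat in locations:
        if mod == "=" and uri == pat:
            return pat
    # 2. find the longest matching prefix ('' or '^~')
    prefixes = [(mod, pat) for mod, pat in locations
                if mod in ("", "^~") and uri.startswith(pat)]
    best = max(prefixes, key=lambda mp: len(mp[1]), default=None)
    # 3. if the longest prefix used ^~, regexes are skipped
    if best and best[0] == "^~":
        return best[1]
    # 4. otherwise try regexes in file order; first match wins
    for mod, pat in locations:
        if mod == "~" and re.search(pat, uri):
            return pat
        if mod == "~*" and re.search(pat, uri, re.IGNORECASE):
            return pat
    # 5. fall back to the longest plain prefix (e.g. the catch-all '/')
    return best[1] if best else None

locs = [("", "/"), ("=", "/uri"), ("^~", "/static/"), ("~*", r"\.(jpg|png)$")]
print(match_location("/uri", locs))           # exact match
print(match_location("/static/a.png", locs))  # ^~ stops the regex search
print(match_location("/img/a.PNG", locs))     # case-insensitive regex
print(match_location("/anything", locs))      # catch-all prefix
```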

LVS + Keepalived

  • LVS (Linux Virtual Server) is a virtual server cluster system for Linux
  • A software load balancer solves two core problems: which server to pick, and how to forward to it
Three forwarding modes

NAT: traffic in both directions passes through LVS. The director handles both the clients' requests and the real servers' responses, forwarding the responses back to the clients, so it easily becomes the performance bottleneck of the whole cluster. (Works with any OS and supports port mapping.)
DR: direct routing, the most efficient mode. The director only handles the client's request and forwards it to a real server; the real server replies to the client directly, bypassing the director, so it outperforms LVS-NAT. LVS and the real-server cluster must be bound to the same VIP. (Supported by most systems; no port mapping.)
TUN: IP tunnelling. The director only handles the client's request and forwards it to a real server over a tunnel; the real server then responds to the client directly, bypassing the director. (Supported by few systems; no port mapping.)
  • What is keepalived?

It monitors and manages the state of the individual service nodes in an LVS cluster.

  • Install keepalived
yum install -y keepalived
  • Configure /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {

  router_id LVS_DEVEL # the LVS id; should be unique within a network
  enable_script_security # allow running external scripts
}

# vrrp_script: a health check plus the action taken when it fails
vrrp_script chk_real_server {
# health-check script; a non-zero exit status counts as a failure
   script "/usr/local/software/conf/chk_server.sh"
# check interval: every 2 seconds
   interval 2
# on failure, lower this vrrp_instance's priority by 5
   weight -5
# only after 3 consecutive failures is the check considered failed (and the priority adjusted)
   fall 3
# 2 consecutive successes mark it healthy again (without adjusting the priority)
   rise 2

   user root
}

# vrrp_instance: the externally facing VIP configuration

vrrp_instance VI_1 {

# whether this instance starts as MASTER or BACKUP; the actual role is decided by priority
   state MASTER

# the NIC this vrrp_instance binds to; the VIP is attached through this interface
   interface enp0s8

# the VRID, used to distinguish multicast groups; must be unique within the multicast domain
   virtual_router_id 51

# this node's priority; among machines sharing a VRID, the highest priority is elected MASTER
   priority 100

# advertisement interval, default 1s: the MASTER sends a packet every second to tell the group it is still alive
   advert_int 1

   authentication {
       auth_type PASS
       auth_pass 1111
   }

# virtual IP (VIP), here 192.1.1.155; several may be listed, one per line
   virtual_ipaddress {
       192.1.1.155  # the VIP clients access
   }

   # scripts tracked by this vrrp_instance; the name is the vrrp_script block defined above
   track_script {
       chk_real_server
   }
}

# the LVS VIP and port exposed to clients
virtual_server 192.1.1.155 8088 { # the VIP clients access
   # health-check interval, in seconds
   delay_loop 6

   # scheduling algorithm: rr (round robin)
   lb_algo rr

   # LVS forwarding mode: NAT, TUN, or DR
   lb_kind NAT

   # session persistence time
   persistence_timeout 50

   # forwarding protocol (TCP or UDP)
   protocol TCP

   # real server 1
   real_server 192.1.1.101 8088 { # the nginx address
       # node weight; higher numbers receive more traffic
       weight 1

       # health-check method
       TCP_CHECK {
           connect_timeout 10       # connection timeout
           retry 3                  # number of retries
           delay_before_retry 3     # delay between retries
           connect_port 80          # port to probe
       }

   }

}
  • Configuration notes
the custom ID after router_id should be the same across the nodes in one network

state must be MASTER or BACKUP in uppercase; otherwise the configuration silently fails to take effect

interface is the NIC name; check yours on your own machine with `ip a`

on the BACKUP node, keepalived.conf is almost identical to the master's: change state to BACKUP and lower the priority value

authentication is the auth between master and backup; PASS is usually sufficient. It must match on both nodes, and the password cannot exceed 8 characters
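
The MASTER/BACKUP election these notes describe comes down to comparing priorities among live nodes sharing a VRID. A toy model (it ignores preemption, timers, and the real VRRP protocol; node names and priorities are illustrative):

```python
def elect_master(priorities, alive):
    """priorities: dict of node -> priority for nodes sharing one VRID.
    The live node with the highest priority holds the VIP (becomes MASTER)."""
    candidates = {node: prio for node, prio in priorities.items() if node in alive}
    if not candidates:
        return None  # nobody left to hold the VIP
    return max(candidates, key=candidates.get)

nodes = {"101": 100, "102": 99}             # MASTER has the higher priority
print(elect_master(nodes, {"101", "102"}))  # -> 101 is MASTER
print(elect_master(nodes, {"102"}))         # -> 101 down: the VIP moves to 102
```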
  • Start keepalived
#start
systemctl start keepalived.service

#stop
systemctl stop keepalived.service

#check status
systemctl status keepalived.service

#restart
systemctl restart keepalived.service

Worked scenario

Prepare three VMs: 192.1.1.101, 192.1.1.102 and 192.1.1.103
Install nginx and keepalived on 101 and 102
Deploy two jars on 103 for nginx to proxy

  1. Start nginx on 101 and 102
    nginx config on 101

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
   worker_connections  1024;
}


http {
   include       mime.types;
   default_type  application/octet-stream;

   #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
   #                  '$status $body_bytes_sent "$http_referer" '
   #                  '"$http_user_agent" "$http_x_forwarded_for"';

   #access_log  logs/access.log  main;

   sendfile        on;
   #tcp_nopush     on;

   #keepalive_timeout  0;
   keepalive_timeout  65;

   #gzip  on;
   upstream lbs {
      server 192.1.1.103:8080;
      server 192.1.1.103:8081;
   }
   server {
       #listen       80;
       listen       8088;
       server_name  localhost;

       #charset koi8-r;

       #access_log  logs/host.access.log  main;

       location /api/ {
           #root   html;
           #index  index.html index.htm;
           proxy_pass http://lbs;
           proxy_redirect default;
       }

       #error_page  404              /404.html;

       # redirect server error pages to the static page /50x.html
       #
       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   html;
       }

       # proxy the PHP scripts to Apache listening on 127.0.0.1:80
       #
       #location ~ \.php$ {
       #    proxy_pass   http://127.0.0.1;
       #}

       # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
       #
       #location ~ \.php$ {
       #    root           html;
       #    fastcgi_pass   127.0.0.1:9000;
       #    fastcgi_index  index.php;
       #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
       #    include        fastcgi_params;
       #}

       # deny access to .htaccess files, if Apache's document root
       # concurs with nginx's one
       #
       #location ~ /\.ht {
       #    deny  all;
       #}
   }
   server {

       listen       8085;
       server_name  localhost;



       location / {
           root   html;
           index  indextest.html;
       }

     
       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   html;
       }

      
   }
  
   server {

       listen       8086;
       server_name  localhost;



       location /app/img {
           #root   html;
           #index  indextest.html;
           alias /usr/local/software/img/;
       }

     
       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   html;
       }

      
   }

   # another virtual host using mix of IP-, name-, and port-based configuration
   #
   #server {
   #    listen       8000;
   #    listen       somename:8080;
   #    server_name  somename  alias  another.alias;

   #    location / {
   #        root   html;
   #        index  index.html index.htm;
   #    }
   #}


   # HTTPS server
   #
   #server {
   #    listen       443 ssl;
   #    server_name  localhost;

   #    ssl_certificate      cert.pem;
   #    ssl_certificate_key  cert.key;

   #    ssl_session_cache    shared:SSL:1m;
   #    ssl_session_timeout  5m;

   #    ssl_ciphers  HIGH:!aNULL:!MD5;
   #    ssl_prefer_server_ciphers  on;

   #    location / {
   #        root   html;
   #        index  index.html index.htm;
   #    }
   #}

}

    nginx config on 102


#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
   worker_connections  1024;
}


http {
   include       mime.types;
   default_type  application/octet-stream;

   #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
   #                  '$status $body_bytes_sent "$http_referer" '
   #                  '"$http_user_agent" "$http_x_forwarded_for"';

   #access_log  logs/access.log  main;

   sendfile        on;
   #tcp_nopush     on;

   #keepalive_timeout  0;
   keepalive_timeout  65;

   #gzip  on;
   upstream lbs {
      server 192.1.1.103:8080;
      server 192.1.1.103:8081;
   }
   server {
      # listen       80;
       listen       8088;
       server_name  localhost;

       #charset koi8-r;

       #access_log  logs/host.access.log  main;

       location / {
           root   html;
           index  index.html index.htm;
       }
       location /api/ {
           proxy_pass http://lbs;
       }

       #error_page  404              /404.html;

       # redirect server error pages to the static page /50x.html
       #
       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   html;
       }

       # proxy the PHP scripts to Apache listening on 127.0.0.1:80
       #
       #location ~ \.php$ {
       #    proxy_pass   http://127.0.0.1;
       #}

       # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
       #
       #location ~ \.php$ {
       #    root           html;
       #    fastcgi_pass   127.0.0.1:9000;
       #    fastcgi_index  index.php;
       #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
       #    include        fastcgi_params;
       #}

       # deny access to .htaccess files, if Apache's document root
       # concurs with nginx's one
       #
       #location ~ /\.ht {
       #    deny  all;
       #}
   }


   # another virtual host using mix of IP-, name-, and port-based configuration
   #
   #server {
   #    listen       8000;
   #    listen       somename:8080;
   #    server_name  somename  alias  another.alias;

   #    location / {
   #        root   html;
   #        index  index.html index.htm;
   #    }
   #}


   # HTTPS server
   #
   #server {
   #    listen       443 ssl;
   #    server_name  localhost;

   #    ssl_certificate      cert.pem;
   #    ssl_certificate_key  cert.key;

   #    ssl_session_cache    shared:SSL:1m;
   #    ssl_session_timeout  5m;

   #    ssl_ciphers  HIGH:!aNULL:!MD5;
   #    ssl_prefer_server_ciphers  on;

   #    location / {
   #        root   html;
   #        index  index.html index.htm;
   #    }
   #}

}

  2. Deploy the jars on 103, on ports 8080 and 8081
nohup java -jar demo-1.jar --server.port=8080 >> /usr/local/app/logs/demo1/nohup.log &
nohup java -jar demo-2.jar --server.port=8081 >> /usr/local/app/logs/demo2/nohup.log &
  3. Check that the nginx on 101 and 102 can reach the app

  • App reached through 101's nginx (screenshot)
  • App reached through 102's nginx (screenshot)
  4. Configure keepalived on 101
    Main changes:
  • state MASTER (102 is configured as BACKUP)
  • interface enp0s8
  • virtual_ipaddress: the VIP clients access
  • virtual_server: the LVS load-balancing settings
  • real_server: the nginx address
vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER 
    interface enp0s8 
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress { # the VIP clients access
        192.1.1.155
    }
}

virtual_server 192.1.1.155 8089 { # the LVS load-balancing settings
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.1.1.101 8088 { # the nginx address
        weight 1
        SSL_GET {
            url {
              path /
              digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
              path /mrtg/
              digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.2 1358 {
    delay_loop 6
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    sorry_server 192.168.200.200 1358

    real_server 192.168.200.2 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.3 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.3 1358 {
    delay_loop 3
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.168.200.4 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.5 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
  5. Configure keepalived on 102
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface enp0s8 
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.1.1.155
    }
}

virtual_server 192.1.1.155 8089 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.1.1.102 8088 {
        weight 1
        SSL_GET {
            url {
              path /
              digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
              path /mrtg/
              digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.2 1358 {
    delay_loop 6
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    sorry_server 192.168.200.200 1358

    real_server 192.168.200.2 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.3 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.3 1358 {
    delay_loop 3
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.168.200.4 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.5 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

  6. Start keepalived on 101 and 102

  • Start keepalived on 101
systemctl start keepalived.service
  • Check its status
systemctl status keepalived.service
  • Inspect the addresses with ip addr
    The virtual IP configured in keepalived now shows up on the interface (screenshot)
  • Start keepalived on 102 and check its status the same way (screenshot)
  7. Test that the client-facing virtual IP reaches nginx
    Hitting the VIP now serves 101's nginx (screenshot).
    Stop keepalived on 101 and keep requesting:
    the VIP now serves 102's nginx instead (screenshot).
    This covers the case where the keepalived machine itself goes down.
    If keepalived stays up but nginx dies, there is a problem: the master's keepalived keeps forwarding traffic to the dead nginx and the service becomes unreachable. The fix is a shell script that keepalived runs periodically to check for the nginx process; if the process is gone, the script kills the local keepalived so the virtual IP fails over to the backup.
  • Write chk_server.sh
#!/bin/bash
counter=$(ps -C nginx --no-heading | wc -l)
if [ "${counter}" -eq "0" ]; then
   systemctl stop keepalived.service
   echo 'nginx server is down, stopping keepalived...'
fi

Then update 101's keepalived configuration to run the check script:

! Configuration File for keepalived
global_defs {

   router_id LVS_DEVEL # the LVS id; should be unique within a network
   enable_script_security # allow running external scripts
}
# vrrp_script: a health check plus the action taken when it fails
vrrp_script chk_real_server {
# health-check script; a non-zero exit status counts as a failure
    script "/usr/local/software/chk_server.sh"
# check interval: every 2 seconds
    interval 2
# on failure, lower this vrrp_instance's priority by 5
    weight -5
# only after 3 consecutive failures is the check considered failed (and the priority adjusted)
    fall 3
# 2 consecutive successes mark it healthy again (without adjusting the priority)
    rise 2

    user root
}
# vrrp_instance: the externally facing VIP configuration
vrrp_instance VI_1 {
# whether this instance starts as MASTER or BACKUP; the actual role is decided by priority
    state MASTER
# the NIC this vrrp_instance binds to; the VIP is attached through this interface
    interface enp0s8
# the VRID, used to distinguish multicast groups; must be unique within the multicast domain
    virtual_router_id 51
# this node's priority; among machines sharing a VRID, the highest priority is elected MASTER
    priority 100
# advertisement interval, default 1s: the MASTER sends a packet every second to tell the group it is still alive
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
# virtual IP (VIP), here 192.1.1.155; several may be listed, one per line
    virtual_ipaddress {
        192.1.1.155
    }
    # scripts tracked by this vrrp_instance; the name is the vrrp_script block defined above
    track_script {
        chk_real_server
    }
}
# the LVS VIP and port exposed to clients
virtual_server 192.1.1.155 8088 {
    # health-check interval, in seconds
    delay_loop 6
    # scheduling algorithm: rr (round robin)
    lb_algo rr
    # LVS forwarding mode: NAT, TUN, or DR
    lb_kind NAT
    # session persistence time
    persistence_timeout 50
    # forwarding protocol (TCP or UDP)
    protocol TCP
    # real server 1
    real_server 192.1.1.101 8088 {
        # node weight; higher numbers receive more traffic
        weight 1
        # health-check method
        TCP_CHECK {
            connect_timeout 10       # connection timeout
            retry 3                  # number of retries
            delay_before_retry 3     # delay between retries
            connect_port 80          # port to probe
        }
    }
}

Test
After stopping nginx on 101, requests to the client-facing virtual IP automatically switch over to the backup's nginx (screenshot).
