Nginx Dynamic Load Balancing
Nginx upstreams have traditionally been managed by hand, which is tedious and inelegant. This article presents an alternative: managing upstreams through dynamic load balancing.
Consul + Consul-template
The restart script:
➜ nginx-lua git:(master) cat nginx_with_consul_and_consul_template/restart.sh
#!/bin/sh
NGX_PATH="/usr/local/openresty/nginx/sbin"
ps -ef | grep nginx | grep -v grep > /dev/null
if [ $? -ne 0 ]; then
    sudo ${NGX_PATH}/nginx -p `pwd` -c conf/nginx.conf
    echo "nginx start"
else
    sudo ${NGX_PATH}/nginx -p `pwd` -c conf/nginx.conf -s reload
    echo "nginx reload"
fi
The upstream template file:
➜ nginx_with_consul_and_consul_template git:(master) cat item.account.tomcat.ctmpl
upstream ItemAccountTomcat {
    {{ range service "dev.account_tomcat@dc1" }}
    server {{ .Address }}:{{ .Port }} weight=1;
    {{ end }}
}
The nginx configuration:
➜ nginx_with_consul_and_consul_template git:(master) cat conf/conf.d/tomcat.conf
server {
    listen 8081;
    server_name _;
    access_log logs/tomcat.access.log;
    error_log logs/tomcat.error.log;
    charset UTF-8;
    location / {
        proxy_pass http://ItemAccountTomcat;
    }
}
A demo Spring Boot project. Its pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.2.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>spring-consul-demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>demo</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>com.orbitz.consul</groupId>
            <artifactId>consul-client</artifactId>
            <version>1.4.0</version>
            <!-- <version>0.12.8</version> -->
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
The source code:
package com.example.springconsuldemo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import com.orbitz.consul.AgentClient;
import com.orbitz.consul.Consul;
import com.google.common.net.HostAndPort;
import com.orbitz.consul.model.agent.ImmutableRegistration;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        // Start the embedded container (e.g. Tomcat)
        SpringApplication.run(DemoApplication.class, args);

        // Register this instance with Consul
        Consul consul = Consul.builder()
                .withHostAndPort(HostAndPort.fromString("11.11.1.100:8500"))
                .build();
        final AgentClient agentClient = consul.agentClient();

        String service = "account_tomcat";
        String address = "11.11.1.101";
        String tag = "dev";
        Integer port = 8080;
        final String serviceId = address + ":" + port;

        ImmutableRegistration.Builder builder = ImmutableRegistration.builder();
        builder.id(serviceId).name(service)
                .address(address).port(port).addTags(tag);
        agentClient.register(builder.build());

        // Deregister the service when the JVM shuts down
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                agentClient.deregister(serviceId);
            }
        });
    }
}
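Once the application is up, the registration can be sanity-checked against Consul's catalog HTTP API (the address below matches the Consul server used in the code above):

```shell
# all instances registered under the service name
curl http://11.11.1.100:8500/v1/catalog/service/account_tomcat
# only instances carrying the tag the template filters on
curl 'http://11.11.1.100:8500/v1/catalog/service/account_tomcat?tag=dev'
```

If the instance appears here, consul-template will pick it up and render it into the upstream on its next watch cycle.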
Consul + Upsync
This section uses Consul + Upsync to implement dynamic upstream load balancing for Nginx.
wget https://codeload.github.com/weibocom/nginx-upsync-module/zip/master
unzip master
mv nginx-upsync-module-master openresty-1.15.8.2/bundle/
cd openresty-1.15.8.2/bundle/
mv nginx-upsync-module-master nginx-upsync-module
sudo ./configure \
--prefix=/opt/openresty \
--add-module=/home/vagrant/github/openresty-1.15.8.2/bundle/nginx-upsync-module
sudo gmake
sudo gmake install
sudo /opt/openresty/nginx/sbin/nginx -V
nginx version: openresty/1.15.8.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/opt/openresty/nginx --with-cc-opt=-O2 --add-module=../ngx_devel_kit-0.3.1rc1 --add-module=../echo-nginx-module-0.61 --add-module=../xss-nginx-module-0.06 --add-module=../ngx_coolkit-0.2 --add-module=../set-misc-nginx-module-0.32 --add-module=../form-input-nginx-module-0.12 --add-module=../encrypted-session-nginx-module-0.08 --add-module=../srcache-nginx-module-0.31 --add-module=../ngx_lua-0.10.15 --add-module=../ngx_lua_upstream-0.07 --add-module=../headers-more-nginx-module-0.33 --add-module=../array-var-nginx-module-0.05 --add-module=../memc-nginx-module-0.19 --add-module=../redis2-nginx-module-0.15 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.15 --add-module=../rds-csv-nginx-module-0.09 --add-module=../ngx_stream_lua-0.0.7 --with-ld-opt=-Wl,-rpath,/opt/openresty/luajit/lib --add-module=/home/vagrant/github/openresty-1.15.8.2/bundle/nginx-upsync-module --with-stream --with-stream_ssl_module --with-stream_ssl_preread_module --with-http_ssl_module
Before starting, bring up the Consul cluster:
# start on the nginx machine (the server node)
consul agent -server \
-bootstrap-expect=1 \
-data-dir=/tmp/consul \
-node=agent-one \
-bind=11.11.1.100 \  # with multiple NICs, -bind=0.0.0.0 would cause an error; bind a specific address
-enable-script-checks=true \
-config-dir=/home/vagrant/consul.d \
-client=0.0.0.0 \
-ui  # enable the web UI
# start on web01
consul agent \
-data-dir=/tmp/consul \
-node=agent-two \
-bind=11.11.1.101 \
-enable-script-checks=true \
-config-dir=/home/vagrant/consul.d
# start on web02
consul agent \
-data-dir=/tmp/consul \
-node=agent-three \
-bind=11.11.1.102 \
-enable-script-checks=true \
-config-dir=/home/vagrant/consul.d
Now join each node to the cluster. Log in to the nginx machine:
[vagrant@nginx ~]$ consul members
Node Address Status Type Build Protocol DC Segment
agent-one 11.11.1.100:8301 alive server 1.1.0 2 dc1 <all>
[vagrant@nginx ~]$ consul join 11.11.1.101
Successfully joined cluster by contacting 1 nodes.
[vagrant@nginx ~]$ consul members
Node Address Status Type Build Protocol DC Segment
agent-one 11.11.1.100:8301 alive server 1.1.0 2 dc1 <all>
agent-two 11.11.1.101:8301 alive client 1.1.0 2 dc1 <default>
Then log in to web02 and join as well:
[vagrant@web02 ~]$ consul join 11.11.1.101
Successfully joined cluster by contacting 1 nodes.
[vagrant@web02 ~]$ consul members
Node Address Status Type Build Protocol DC Segment
agent-one 11.11.1.100:8301 alive server 1.1.0 2 dc1 <all>
agent-three 11.11.1.102:8301 alive client 1.1.0 2 dc1 <default>
agent-two 11.11.1.101:8301 alive client 1.1.0 2 dc1 <default>
The nginx configuration:
[vagrant@nginx myngx]$ pwd
/home/vagrant/myngx
[vagrant@nginx myngx]$ cat conf/conf.d/tomcat.conf
# the few lines below are all the key configuration needed
upstream ucenter {
    server 127.0.0.1:11111;
    upsync 127.0.0.1:8500/v1/kv/upstreams/app/ upsync_timeout=6m upsync_interval=500ms upsync_type=consul strong_dependency=off;
    upsync_dump_path /home/vagrant/myngx/conf/services/tomcat.conf;
    include /home/vagrant/myngx/conf/services/tomcat.conf;
}
server {
    listen 8081;
    server_name _;
    access_log logs/tomcat.access.log;
    error_log logs/tomcat.error.log;
    charset UTF-8;
    location / {
        # proxy_pass http://ItemAccountTomcat;
        proxy_pass http://ucenter;
    }
    location /upstream_show {
        upstream_show;
    }
    # location /upstream_status {
    #     check_status;
    #     access_log off;
    # }
}
[vagrant@nginx myngx]$ cat conf/services/tomcat.conf
server 11.11.1.101:8080 weight=1 max_fails=2 fail_timeout=10s;
server 11.11.1.102:8080 weight=1 max_fails=2 fail_timeout=10s;
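Unlike the consul-template setup, upsync pulls backends from the Consul KV store (the `/v1/kv/upstreams/app/` prefix in the `upsync` directive), not from the service catalog, and persists them to the dump file shown above. The two entries were presumably seeded with PUTs of this shape, one key per backend (the JSON body is optional; per the nginx-upsync-module conventions the defaults are weight=1, max_fails=2, fail_timeout=10):

```shell
# register each backend as a KV entry under the upsync prefix
curl -X PUT -d '{"weight":1, "max_fails":2, "fail_timeout":10}' \
  http://127.0.0.1:8500/v1/kv/upstreams/app/11.11.1.101:8080
curl -X PUT -d '{"weight":1, "max_fails":2, "fail_timeout":10}' \
  http://127.0.0.1:8500/v1/kv/upstreams/app/11.11.1.102:8080
```

Nginx picks up KV changes on each `upsync_interval` poll, with no reload required.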
Test round-robin through the proxy:
[vagrant@nginx myngx]$ curl localhost:8081/upstream_show
Upstream name: ItemAccountTomcat; Backend server count: 1
server 11.11.1.101:8080 weight=1 max_fails=1 fail_timeout=10s;
Upstream name: ucenter; Backend server count: 2
server 11.11.1.102:8080 weight=1 max_fails=2 fail_timeout=10s;
server 11.11.1.101:8080 weight=1 max_fails=2 fail_timeout=10s;
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat1.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat1.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat1.
Take one backend offline by marking it down in the KV store:
[vagrant@nginx myngx]$ curl -X PUT -d '{"weight":1, "max_fails":2, "fail_timeout":10, "down":1}' http://127.0.0.1:8500/v1/kv/upstreams/app/11.11.1.101:8080
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
Check the backends:
[vagrant@nginx myngx]$ curl localhost:8081/upstream_show
Upstream name: ItemAccountTomcat; Backend server count: 1
server 11.11.1.101:8080 weight=1 max_fails=1 fail_timeout=10s;
Upstream name: ucenter; Backend server count: 2
server 11.11.1.101:8080 weight=1 max_fails=2 fail_timeout=10s down;
server 11.11.1.102:8080 weight=1 max_fails=2 fail_timeout=10s;
Bring the server back online:
[vagrant@nginx myngx]$ curl -X PUT -d '{"weight":1, "max_fails":2, "fail_timeout":10, "down":0}' http://127.0.0.1:8500/v1/kv/upstreams/app/11.11.1.101:8080
true
[vagrant@nginx myngx]$ curl localhost:8081/upstream_show
Upstream name: ItemAccountTomcat; Backend server count: 1
server 11.11.1.101:8080 weight=1 max_fails=1 fail_timeout=10s;
Upstream name: ucenter; Backend server count: 2
server 11.11.1.101:8080 weight=1 max_fails=2 fail_timeout=10s;
server 11.11.1.102:8080 weight=1 max_fails=2 fail_timeout=10s;
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat1.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat1.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat2.
[vagrant@nginx myngx]$ curl localhost:8081
Hello tomcat1.