Recently, in a discussion, someone claimed that nginx cannot do load balancing at the TCP layer, only for protocols such as HTTP and WebSocket. I have never built an SLB myself, but my impression was that nginx can do layer-4 load balancing. After the meeting I checked the official documentation, which confirms it, so I am recording it here.
A reminder to myself: trust the official documentation, verify by practice, and never make claims off the top of my head.
Official documentation:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
1. nginx configuration
I downloaded nginx-1.17.6 for Windows from the official site; it ships with the stream module by default.
stream {
    upstream tcp_proxy {
        server 192.168.119.1:5000;
        server 192.168.119.1:5001;
    }
    server {
        listen 3000;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass tcp_proxy;
    }
}
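By default the stream module distributes connections round-robin. For reference, the module also documents other balancing methods and per-server weights; a variant of the upstream block above (same addresses, directives from the nginx stream upstream documentation):

```
upstream tcp_proxy {
    least_conn;                          # prefer the backend with the fewest active connections
    server 192.168.119.1:5000;
    server 192.168.119.1:5001 weight=2;  # optional per-server weight
}
```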
2. Test setup
The test setup is simple: two servers listen on ports 5000 and 5001, and nginx exposes port 3000. A client connecting to nginx on port 3000 is actually proxied through to port 5000 or 5001.
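nginx's default policy for spreading these connections over the two backends is round-robin. As a rough illustration of that policy in plain Java (a sketch only, not nginx's actual implementation; the class name is mine):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin selector, illustrating nginx's default
// upstream balancing policy (a sketch, not nginx's real code).
public class RoundRobin {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobin(List<String> backends) {
        this.backends = backends;
    }

    public String pick() {
        // Each new connection goes to the next backend in turn;
        // floorMod keeps the index valid even if the counter overflows.
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(
                List.of("192.168.119.1:5000", "192.168.119.1:5001"));
        for (int k = 0; k < 4; k++) {
            System.out.println(rr.pick());
        }
    }
}
```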
My socket example uses Netty, mainly LineBasedFrameDecoder and StringDecoder; in short, messages are strings framed one per line. As the code shows, every message I send ends with a line break. Anyone familiar with socket programming or Netty knows that a framing rule (how messages are packed and unpacked) must be defined.
The project depends mainly on Netty. The source code is here; forks and stars are welcome.
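The framing rule can be illustrated without Netty. The stdlib-only sketch below (LineFramer is a hypothetical name of mine) splits already-buffered bytes at '\n', the same idea LineBasedFrameDecoder applies to the TCP byte stream:

```java
import java.util.ArrayList;
import java.util.List;

// Stdlib-only sketch of line-based framing: TCP is a byte stream,
// so messages must be delimited; here '\n' marks the end of a frame.
// (Illustrative only -- Netty's LineBasedFrameDecoder additionally
// buffers partial frames across reads and enforces a max length.)
public class LineFramer {
    public static List<String> frames(String buffered) {
        List<String> out = new ArrayList<>();
        int start = 0;
        for (int i = 0; i < buffered.length(); i++) {
            if (buffered.charAt(i) == '\n') {
                String frame = buffered.substring(start, i);
                // Tolerate "\r\n" line endings, as the test messages use.
                if (frame.endsWith("\r")) {
                    frame = frame.substring(0, frame.length() - 1);
                }
                out.add(frame);
                start = i + 1;
            }
        }
        // Bytes after the last '\n' form an incomplete frame; a real
        // decoder keeps them buffered until more data arrives.
        return out;
    }

    public static void main(String[] args) {
        System.out.println(frames("0th\r\n1th\r\npartial"));
    }
}
```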
2.1 Server program
The server is the real backend behind the nginx proxy. It receives the client's messages and logs them. The server log shows that the source address of each connection is actually nginx's address, not the client's (in the test the client deliberately connects to 127.0.0.1 on nginx's proxied port 3000).
public class StringServer {
    public void bind(int port) throws Exception {
        // Configure the server-side NIO thread groups
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        ServerBootstrap b = new ServerBootstrap();
        b.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .option(ChannelOption.SO_BACKLOG, 100)
                .handler(new LoggingHandler(LogLevel.INFO))
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    public void initChannel(SocketChannel ch) {
                        ch.pipeline().addLast("encoder", new StringEncoder());
                        ch.pipeline().addLast("decoder1", new LineBasedFrameDecoder(1024));
                        ch.pipeline().addLast("decoder2", new StringDecoder());
                        ch.pipeline().addLast("handler", new ServerStringHandler());
                    }
                });
        // Bind the port and wait synchronously for success
        b.bind(port).sync();
        System.out.println("Netty server start ok : " + port);
    }

    public static void main(String[] args) throws Exception {
        new StringServer().bind(5000);
    }
}
ServerStringHandler.java
@Slf4j
@NoArgsConstructor
public class ServerStringHandler extends SimpleChannelInboundHandler<String> {
    @Override
    public void channelRead0(ChannelHandlerContext ctx, String msgStr) throws Exception {
        SocketAddress remoteAddress = ctx.channel().remoteAddress();
        String host = ((InetSocketAddress) remoteAddress).getHostString();
        int port = ((InetSocketAddress) remoteAddress).getPort();
        log.info("ip:port={}:{}, msg={}", host, port, msgStr);
        ctx.writeAndFlush("reply\r\n");
    }
}
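As noted above, an L4 proxy replaces the source address the backend sees. If the real client address is needed, nginx's stream module can prepend a PROXY protocol header, which the backend must then parse (Netty ships an HAProxyMessageDecoder for this). A sketch of the nginx side, based on the documented proxy_protocol directive:

```
server {
    listen 3000;
    proxy_pass tcp_proxy;
    proxy_protocol on;   # prepend a PROXY protocol header carrying the client address
}
```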
2.2 Client program
The client is straightforward: it connects to the given address and port, sends a message to the server at a fixed interval, and logs whatever it receives.
@Slf4j
public class StringClient {
    // Client-side NIO thread group
    private EventLoopGroup group = new NioEventLoopGroup();

    private void connect(String host, int port) throws Exception {
        try {
            Bootstrap b = new Bootstrap();
            b.group(group).channel(NioSocketChannel.class)
                    .option(ChannelOption.TCP_NODELAY, true)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast("decoder1", new LineBasedFrameDecoder(1024));
                            ch.pipeline().addLast("decoder2", new StringDecoder());
                            ch.pipeline().addLast("encoder", new StringEncoder());
                            ch.pipeline().addLast("handler", new ScheduleStringHandler());
                        }
                    });
            // Initiate the connection asynchronously, then block until the channel closes
            ChannelFuture future = b.connect(new InetSocketAddress(host, port)).sync();
            future.channel().closeFuture().sync();
        } finally {
            // Release the NIO threads on exit
            group.shutdownGracefully();
            log.info("done");
        }
    }

    public static void main(String[] args) throws Exception {
        new StringClient().connect("127.0.0.1", 3000);
    }
}
@Slf4j
public class ScheduleStringHandler extends SimpleChannelInboundHandler<String> {
    private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
    private int count = 0;
    private Future<?> future;

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // Send one line-delimited message per second
        future = executor.scheduleAtFixedRate(() -> {
            ctx.writeAndFlush(count + "th" + "\r\n");
            count++;
            SocketAddress remoteAddress = ctx.channel().remoteAddress();
            String host = ((InetSocketAddress) remoteAddress).getHostString();
            int port = ((InetSocketAddress) remoteAddress).getPort();
            log.info("send msg at fixed rate to ip:port={}:{}", host, port);
        }, 0, 1, TimeUnit.SECONDS);
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msgStr) {
        SocketAddress remoteAddress = ctx.channel().remoteAddress();
        String host = ((InetSocketAddress) remoteAddress).getHostString();
        int port = ((InetSocketAddress) remoteAddress).getPort();
        log.info("Client receive. ip:port={}:{}, msg={}", host, port, msgStr);
    }

    @Override
    public void channelInactive(final ChannelHandlerContext ctx) {
        log.info("ctx inactive.");
        ctx.close();
        if (future != null) {
            future.cancel(true);
        }
    }
}
3. Result screenshots
From the run logs we can clearly see nginx doing the work of proxying the TCP port.