An RPC Framework Based on Netty
Network transport moves from BIO to NIO; on the serialization side, the goal is to shorten the byte stream and speed up serialization and deserialization.
一、Netty Server and Client
1、Server
1.1 NettyServer
The server receives RpcRequest messages from clients. Its pipeline contains, in order: an encoder, a frame decoder, a decoder, and the server-side handler.
- Encoder (object -> byte array -> ByteBuf, custom protocol): outgoing RpcResponse objects are encoded by CommonEncoder into ByteBuf frames following the custom protocol
- Frame decoder: incoming ByteBuf data (the client's encoded RpcRequest) is split into complete frames by a length-field-based frame decoder
- Decoder (ByteBuf -> byte array -> object, custom protocol): each ByteBuf frame is decoded back into a POJO according to the custom protocol
- Server-side handler: NettyServerHandler receives the client's RpcRequest, invokes the requested service method, and returns an RpcResponse
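The custom protocol itself is not spelled out in this section. As a rough illustration of what the encoder/decoder pair and the length-field-based frame decoder work with, here is a minimal sketch of one plausible frame layout; the field names, widths, and magic value below are assumptions for illustration, not this project's actual CommonEncoder format:

```java
import java.nio.ByteBuffer;

// Hypothetical frame layout: magic(4) | packageType(4) | serializerCode(4) | dataLength(4) | data
public class ProtocolFrame {
    static final int MAGIC_NUMBER = 0xCAFEBABE; // assumed marker value

    // Encoder side: a payload already serialized to a byte array -> framed bytes
    static byte[] encode(int packageType, int serializerCode, byte[] data) {
        ByteBuffer buf = ByteBuffer.allocate(16 + data.length);
        buf.putInt(MAGIC_NUMBER);
        buf.putInt(packageType);
        buf.putInt(serializerCode);
        buf.putInt(data.length); // the length field the frame decoder keys on
        buf.put(data);
        return buf.array();
    }

    // Decoder side: read the header back and extract the payload
    static byte[] decode(byte[] frame) {
        ByteBuffer buf = ByteBuffer.wrap(frame);
        if (buf.getInt() != MAGIC_NUMBER) {
            throw new IllegalArgumentException("unknown protocol packet");
        }
        buf.getInt(); // packageType
        buf.getInt(); // serializerCode
        int length = buf.getInt();
        byte[] data = new byte[length];
        buf.get(data);
        return data;
    }
}
```

Under this assumed layout, Netty's LengthFieldBasedFrameDecoder would be configured with a length-field offset of 12 and a length-field width of 4 bytes.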
1.2 NettyServerHandler
Sits at the tail of the server-side pipeline. It receives the decoded RpcRequest, dispatches the call to the actual service method, and writes the result back as an RpcResponse.
/**
 * The handler that processes RpcRequest messages on the server side
 * @ClassName: NettyServerHandler
 * @Author: whc
 * @Date: 2021/05/29/21:49
 */
public class NettyServerHandler extends SimpleChannelInboundHandler<RpcRequest> {

    private static final Logger logger = LoggerFactory.getLogger(NettyServerHandler.class);
    private static RequestHandler requestHandler;
    private static ServiceRegistry serviceRegistry;

    static {
        requestHandler = new RequestHandler();
        serviceRegistry = new DefaultServiceRegistry();
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RpcRequest msg) throws Exception {
        try {
            logger.info("Server received message: {}", msg);
            String interfaceName = msg.getInterfaceName();
            Object service = serviceRegistry.getService(interfaceName);
            Object result = requestHandler.handle(msg, service);
            // Write the response back to the client. Note that ctx.writeAndFlush (unlike
            // channel.writeAndFlush) starts the outbound pass at the current handler and
            // searches backwards from there, not from the tail of the pipeline.
            ChannelFuture future = ctx.writeAndFlush(RpcResponse.success(result, msg.getRequestId()));
            // Close the connection once the response has been sent
            future.addListener(ChannelFutureListener.CLOSE);
        } finally {
            // Release the message to guard against leaks (a no-op for a plain POJO)
            ReferenceCountUtil.release(msg);
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        logger.error("Error during request handling", cause);
        ctx.close();
    }
}
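RequestHandler.handle above is where the actual service method gets invoked. Its implementation is not shown in this section; a minimal reflection-based sketch of that dispatch step might look as follows (the method-name/parameter-type/parameter fields assumed here follow common RpcRequest conventions and may not match this project's exact class):

```java
import java.lang.reflect.Method;

// Minimal sketch: dispatch a decoded request to a service instance via reflection
public class ReflectiveInvoker {

    // Invoke methodName(parameters...) on the given service object
    public static Object invoke(Object service, String methodName,
                                Class<?>[] paramTypes, Object[] parameters) throws Exception {
        Method method = service.getClass().getMethod(methodName, paramTypes);
        return method.invoke(service, parameters);
    }
}
```

The service instance looked up from the ServiceRegistry plays the role of `service` here, and the RpcRequest supplies the remaining arguments.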
2、Client
2.1 NettyClient
Issues sendRequest calls against the remote server. Its pipeline mirrors the server's: an encoder, a frame decoder, a decoder, and the client-side handler.
- Encoder (object -> byte array -> ByteBuf, custom protocol): outgoing RpcRequest objects are encoded by CommonEncoder into ByteBuf frames following the custom protocol
- Frame decoder: incoming ByteBuf data (the server's encoded RpcResponse) is split into complete frames by a length-field-based frame decoder
- Decoder (ByteBuf -> byte array -> object, custom protocol): each ByteBuf frame is decoded back into a POJO according to the custom protocol
- Client-side handler: NettyClientHandler receives the RpcResponse sent back by the server and records it for the pending request
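The last bullet glosses over how an asynchronous response is matched back to its caller. One common pattern (a sketch under assumed names, not this project's actual classes) keys a CompletableFuture by requestId: the client registers the future before sending, and the handler completes it when the matching RpcResponse arrives:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical holder for in-flight requests, keyed by requestId
public class UnprocessedRequests {
    private static final Map<String, CompletableFuture<Object>> PENDING = new ConcurrentHashMap<>();

    // Called by the client before writing the RpcRequest to the channel
    public static CompletableFuture<Object> register(String requestId) {
        CompletableFuture<Object> future = new CompletableFuture<>();
        PENDING.put(requestId, future);
        return future;
    }

    // Called by the client-side handler when an RpcResponse arrives
    public static void complete(String requestId, Object responseData) {
        CompletableFuture<Object> future = PENDING.remove(requestId);
        if (future != null) {
            future.complete(responseData);
        }
    }
}
```

The caller then blocks (or chains a callback) on the returned future until the handler completes it with the response data.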
/**
 * The consumer-side (client) class using NIO
 * @ClassName: NettyClient
 * @Author: whc
 * @Date: 2021/05/29/23:07
 */
public class NettyClient implements RpcClient {

    private static final Logger logger = LoggerFactory.getLogger(NettyClient.class);
    private static final Bootstrap bootstrap;
    private CommonSerializer serializer;

    static {
        EventLoopGroup group = new NioEventLoopGroup();
        bootstrap = new Bootstrap();
        // 1. Set the thread model
        bootstrap.group(group)
                // 2. Use NIO as the transport
                .channel(NioSocketChannel.class)
                // Enable TCP-level keep-alive
                .option(ChannelOption.SO_KEEPALIVE, true);
    }

    private String host;
    private int port;
public NettyClient(<