A Simple RPC Framework, Part 3: Introducing Netty for Network Communication

This chapter improves on the socket-based communication from the previous chapters by introducing Netty to handle the network traffic between client and server. The changes in this chapter are fairly large; the corresponding commit is cf7021b.

Netty is a non-blocking I/O client-server framework used mainly to develop Java network applications such as protocol servers and clients. As an asynchronous, event-driven networking framework and toolset, it simplifies network programming such as TCP and UDP socket servers, and it includes an implementation of the reactor pattern (adapted from Wikipedia). Since our framework now involves network programming, it is only natural to reach for the heavyweight of Java networking: Netty.

This article will not go into the details of how to use Netty. If you are not yet familiar with it, the Netty tutorial videos on Bilibili by 黑马 instructor 满一航 are a good reference.

Implementing the Netty Client and Server

The dependency we introduce is Netty 4. Netty 5 was another attempt at asynchronous, non-blocking communication, but it had significant problems and has since been abandoned:

        <dependency>
            <groupId>io.netty</groupId>
            <artifactId>netty-all</artifactId>
            <version>4.1.42.Final</version>
        </dependency>

Client

The client only needs a single event loop group to handle read and write events. To obtain the channel connected to the server, we can abstract a channel provider class and build a connection retry mechanism into it, implemented with a CountDownLatch:

public class ChannelProvider {
    private static EventLoopGroup eventLoopGroup;
    private static Bootstrap bootstrap = initializeBootstrap();
    private static final int MAX_RETRY_NUMBER = 5;
    private static Channel channel = null;
    private static final Logger logger = LoggerFactory.getLogger(ChannelProvider.class);

    private static Bootstrap initializeBootstrap(){
        eventLoopGroup = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap();
        bootstrap.group(eventLoopGroup)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS,5000)
                .option(ChannelOption.SO_KEEPALIVE,true)
                .option(ChannelOption.TCP_NODELAY,true);
        return bootstrap;
    }

    public static Channel get(InetSocketAddress inetSocketAddress, CommonSerializer serializer){
        bootstrap.handler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) throws Exception {
                ChannelPipeline pipeline = ch.pipeline();
                pipeline.addLast(new MyEncoder(serializer))
                        .addLast(new MyDecoder())
                        .addLast(new NettyClientHandler());
            }
        });
        CountDownLatch countDownLatch = new CountDownLatch(1);
        try {
            connect(bootstrap,inetSocketAddress,countDownLatch);
            countDownLatch.await();
        } catch (InterruptedException e) {
            logger.error("Error occurred while acquiring the channel", e);
        }
        return channel;
    }
    private static void connect(Bootstrap bootstrap, InetSocketAddress inetSocketAddress,CountDownLatch countDownLatch){
        connect(bootstrap,inetSocketAddress,MAX_RETRY_NUMBER,countDownLatch);
    }

    private static void connect(Bootstrap bootstrap, InetSocketAddress inetSocketAddress,int retry,CountDownLatch countDownLatch){
        bootstrap.connect(inetSocketAddress).addListener((ChannelFutureListener) future ->{
            if(future.isSuccess()){
                logger.info("Channel connected successfully to server {}:{}", inetSocketAddress.getHostName(), inetSocketAddress.getPort());
                channel = future.channel();
                countDownLatch.countDown();
                return;
            }
            if(retry == 0){
                logger.error("Maximum number of reconnect attempts reached, connection failed");
                countDownLatch.countDown();
                throw new RpcException(RpcError.CLIENT_CONNECT_SERVER_FAILURE);
            }
            int number = (MAX_RETRY_NUMBER - retry) + 1;
            logger.info("Reconnect attempt {}...", number);
            int delay = 1 << number; // exponential backoff: 2, 4, 8, 16, 32 seconds
            bootstrap.config().group().schedule(() -> connect(bootstrap, inetSocketAddress, retry - 1, countDownLatch), delay, TimeUnit.SECONDS);
        });

    }
}
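The CountDownLatch above is what lets the synchronous get() wait for the asynchronous connect listener: get() parks on await(), and the listener releases it with countDown() once the channel is ready. Stripped of Netty, the pattern looks like this (a JDK-only sketch; the names are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    private static volatile String channel; // stand-in for the connected Channel

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);

        // Simulates the connect listener firing on an event-loop thread.
        new Thread(() -> {
            channel = "connected-channel";
            latch.countDown(); // release the waiting caller
        }).start();

        latch.await(); // get() blocks here until the listener has run
        System.out.println(channel);
    }
}
```

The volatile write happens before countDown(), and await() establishes the happens-before edge, so the waiting thread is guaranteed to see the assigned value.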

Once the channel connected to the server has been obtained, the client can communicate with the server:

public class NettyClient implements CommonClient{
    private static final Logger logger = LoggerFactory.getLogger(NettyClient.class);
    private final CommonSerializer serializer;
    public NettyClient(CommonSerializer serializer){
        this.serializer = serializer;
    }


    @Override
    public Object sendRequest(RpcRequest rpcRequest, String host, int port) {
        Channel channel = ChannelProvider.get(new InetSocketAddress(host,port),serializer);
        try {
            channel.writeAndFlush(rpcRequest).addListener(future -> {
                if(future.isSuccess()){
                    logger.info("Request sent to server: {}", rpcRequest);
                }else{
                    logger.error("Failed to send the request to the server", future.cause());
                }
            });
            channel.closeFuture().sync();
            AttributeKey<RpcResponse> key = AttributeKey.valueOf(rpcRequest.getRequestId());
            RpcResponse rpcResponse = channel.attr(key).get();
            return rpcResponse;
        } catch (InterruptedException e) {
            logger.error("Error occurred while sending the request", e);
        }
        return null;
    }
}

Server

The server needs two event loop groups: a boss group that accepts incoming connections and a worker group that handles read and write events:

public class NettyServer implements CommonServer{

    private static final Logger logger = LoggerFactory.getLogger(NettyServer.class);
    private final ServerPublisher serverPublisher;
    private final CommonSerializer serializer;
    public NettyServer(ServerPublisher serverPublisher,CommonSerializer serializer){
        this.serverPublisher = serverPublisher;
        this.serializer = serializer;
    }
    @Override
    public void start(int port) {
        EventLoopGroup boss = new NioEventLoopGroup();
        EventLoopGroup worker = new NioEventLoopGroup();
        try{
            ServerBootstrap serverBootstrap = new ServerBootstrap();
            serverBootstrap.group(boss,worker)
                    .channel(NioServerSocketChannel.class)
                    .handler(new LoggingHandler(LogLevel.INFO))
                    .option(ChannelOption.SO_BACKLOG,256)
                    // SO_KEEPALIVE is a per-connection option: set it on the accepted
                    // child channels via childOption(). Setting it with option() on the
                    // server channel triggers the "Unknown channel option" warning
                    // visible in the test log.
                    .childOption(ChannelOption.SO_KEEPALIVE,true)
                    .childOption(ChannelOption.TCP_NODELAY,true)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ChannelPipeline pipeline = ch.pipeline();
                            pipeline.addLast(new MyEncoder(serializer))
                                    .addLast(new MyDecoder())
                                    .addLast(new NettyServerHandler(serverPublisher));
                        }
                    });
            ChannelFuture future = serverBootstrap.bind(port).sync();
            future.channel().closeFuture().sync();
        }catch (InterruptedException e){
            logger.error("Error occurred while starting the server", e);
        }
        finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}

Custom Protocol and Codecs

During transmission we can prepend some identification and validation data to the payload to keep the communication between the two sides reliable. Prepending this data is the encoder's job; parsing the payload back out of the received bytes is the decoder's job.

        +---------------+---------------+-----------------+-------------+
        |  Magic Number |  Package Type | Serializer Type | Data Length |
        |    4 bytes    |    4 bytes    |     4 bytes     |   4 bytes   |
        +---------------+---------------+-----------------+-------------+
        |                          Data Bytes                           |
        |                   Length: ${Data Length}                      |
        +---------------------------------------------------------------+

The custom protocol used in this article is shown above. It starts with a four-byte magic number, which both sides use to recognize a valid packet. Next come four bytes for the package type, telling the decoder what kind of packet to produce, followed by four bytes for the serializer type, so the correct deserializer can be selected. After that comes the byte length of the serialized payload, which guards against TCP sticky-packet and half-packet problems. Finally comes the payload itself.
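To make the layout concrete, here is a small JDK-only sketch, independent of the framework code, that writes and reads the 16-byte header with java.nio.ByteBuffer (the type codes of 0 are illustrative placeholders):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class HeaderDemo {
    static final int MAGIC_NUMBER = 0x19990430;

    public static void main(String[] args) {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);

        // Write the 16-byte header, then the payload, in protocol order.
        ByteBuffer buf = ByteBuffer.allocate(16 + payload.length);
        buf.putInt(MAGIC_NUMBER);   // magic number
        buf.putInt(0);              // package type (placeholder code)
        buf.putInt(0);              // serializer type (placeholder code)
        buf.putInt(payload.length); // data length
        buf.put(payload);           // data bytes
        buf.flip();

        // Read the fields back in the same order.
        if (buf.getInt() != MAGIC_NUMBER) {
            throw new IllegalStateException("Unrecognized protocol packet");
        }
        int packageType = buf.getInt();
        int serializerType = buf.getInt();
        byte[] data = new byte[buf.getInt()];
        buf.get(data);
        System.out.println(packageType + " " + serializerType + " "
                + new String(data, StandardCharsets.UTF_8));
    }
}
```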

Encoder

With the protocol settled, we can design the codecs. First the encoder: by extending Netty's MessageToByteEncoder it becomes an outbound handler, and its body simply writes out the fields defined by our protocol:

public class MyEncoder extends MessageToByteEncoder {
    private static final int MAGIC_NUMBER = 0x19990430;

    private final CommonSerializer serializer;

    public MyEncoder(CommonSerializer serializer){
        this.serializer = serializer;
    }
    @Override
    protected void encode(ChannelHandlerContext ctx, Object msg, ByteBuf out) throws Exception {
        out.writeInt(MAGIC_NUMBER);
        if(msg instanceof RpcRequest){
            out.writeInt(PackageType.REQUEST_PACKAGE.getCode());
        }else{
            out.writeInt(PackageType.RESPONSE_PACKAGE.getCode());
        }
        out.writeInt(serializer.getSerializerCode());
        byte[] data = serializer.serialize(msg);
        out.writeInt(data.length);
        out.writeBytes(data);
    }
}

Decoder

The decoder extends Netty's ReplayingDecoder, a subclass of ByteToMessageDecoder; for the differences between the two, see the blog post 《ReplayingDecoder<S>和ByteToMessageDecoder》.

public class MyDecoder extends ReplayingDecoder {
    private static final int MAGIC_NUMBER = 0x19990430;
    private static final Logger logger = LoggerFactory.getLogger(MyDecoder.class);
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
        int magic = in.readInt();
        if(MAGIC_NUMBER != magic){
            logger.error("Unrecognized protocol packet");
            throw new RpcException(RpcError.UNKNOWN_PROTOCOL);
        }
        int packageType = in.readInt();
        Class<?> packageClass;
        if(packageType == PackageType.REQUEST_PACKAGE.getCode()){
            packageClass = RpcRequest.class;
        }else if(packageType == PackageType.RESPONSE_PACKAGE.getCode()){
            packageClass = RpcResponse.class;
        }else{
            logger.error("Unrecognized package type");
            throw new RpcException(RpcError.UNKNOWN_PACKAGE_TYPE);
        }
        CommonSerializer serializer = CommonSerializer.getSerializerByCode(in.readInt());
        int length = in.readInt();
        byte[] data = new byte[length];
        in.readBytes(data);
        Object o = serializer.deSerialize(data, packageClass);
        out.add(o);
    }
}

After the received bytes are decoded, the resulting object is added to out and handed to the next inbound handler.

Serialization and Deserialization

During transmission, the objects we send must be serialized into a binary byte array, and after transmission the receiving side must deserialize the object back out of that byte array.
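The round trip itself is independent of any particular serializer. As a minimal JDK-only illustration of the idea (the framework below uses Protobuf rather than built-in Java serialization):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SerializeDemo {
    public static void main(String[] args) throws Exception {
        String original = "RpcRequest placeholder";

        // Serialize to a byte array, as would happen before writing to the wire.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(original);
        }
        byte[] wire = bos.toByteArray();

        // Deserialize on the receiving side.
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(wire))) {
            String restored = (String) ois.readObject();
            System.out.println(restored.equals(original));
        }
    }
}
```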

We can define our own serializers. First we need a top-level interface for serialization and deserialization:

public interface CommonSerializer {
    <T> byte[] serialize(T obj);

    <T> T deSerialize(byte[] bytes, Class<T> clazz);

    int getSerializerCode();

    static CommonSerializer getSerializerByCode(int code){
        switch (code){
            case 0:
                return new ProtobufSerializer();
            default:
                // Unknown code: callers must be prepared for null here
                // (throwing an exception would be a stricter alternative).
                return null;
        }
    }
}

The interface's two methods cover serialization before transmission and deserialization after it. We can then implement the interface with a popular serializer; this article uses Protobuf (via the protostuff runtime library, so no .proto files are needed). The figure below gives a brief overview:

[Figure: a brief introduction to Protobuf]

Introducing the Protobuf serialization mechanism, we build a Protobuf serializer:

public class ProtobufSerializer implements CommonSerializer{

    // LinkedBuffer is not thread-safe; give each thread its own instance,
    // since serialize() can be called from multiple event-loop threads.
    private static final ThreadLocal<LinkedBuffer> BUFFER =
            ThreadLocal.withInitial(() -> LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE));
    private static final Map<Class<?>, Schema<?>> schemaCache = new ConcurrentHashMap<>();

    @Override
    public <T> byte[] serialize(T obj) {
        @SuppressWarnings("unchecked")
        Class<T> clazz = (Class<T>) obj.getClass();
        Schema<T> schema = getSchema(clazz);
        LinkedBuffer buffer = BUFFER.get();
        try{
            return ProtostuffIOUtil.toByteArray(obj, schema, buffer);
        }finally {
            buffer.clear();
        }
    }

    @Override
    public <T>T deSerialize(byte[] bytes, Class<T> clazz) {
        Schema<T> schema = getSchema(clazz);
        T obj = schema.newMessage();
        ProtostuffIOUtil.mergeFrom(bytes,obj,schema);
        return obj;
    }

    @Override
    public int getSerializerCode() {
        return SerializerCode.valueOf("PROTOBUF").getCode();
    }

    private <T>Schema<T> getSchema(Class<T> clazz){
        Schema<T> schema = (Schema<T>) schemaCache.get(clazz);
        if(Objects.isNull(schema)){
            schema = RuntimeSchema.getSchema(clazz);
            if(Objects.nonNull(schema)){
                schemaCache.put(clazz,schema);
            }
        }
        return schema;
    }
}
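A side note on the cache in getSchema: the check-then-populate sequence can be expressed atomically with ConcurrentHashMap.computeIfAbsent, which runs the factory only on a miss. A JDK-only sketch of the idiom, with an illustrative stand-in for RuntimeSchema.getSchema:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class CacheDemo {
    private static final ConcurrentHashMap<Class<?>, String> CACHE = new ConcurrentHashMap<>();
    private static final AtomicInteger computations = new AtomicInteger();

    // Stand-in for an expensive factory such as RuntimeSchema.getSchema(clazz).
    private static String expensiveLookup(Class<?> clazz) {
        computations.incrementAndGet();
        return "schema-for-" + clazz.getSimpleName();
    }

    public static void main(String[] args) {
        // The factory runs only on the first (missing) lookup;
        // later calls reuse the cached value.
        for (int i = 0; i < 3; i++) {
            CACHE.computeIfAbsent(String.class, CacheDemo::expensiveLookup);
        }
        System.out.println(CACHE.get(String.class) + " computed " + computations.get() + " time(s)");
    }
}
```

Besides being shorter, computeIfAbsent guarantees the factory runs at most once per key even under concurrent misses, which a separate get-then-put sequence does not.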

Client and Server Inbound Handlers

The data produced by the decoder is passed to the next inbound handler, so we need custom handlers to process the received messages.

Server Handler

On the server we write a NettyServerHandler that extends Netty's SimpleChannelInboundHandler with RpcRequest as its type parameter. The overridden method invokes the reflection-based call from the previous chapter and then writes the resulting RpcResponse back to the client:

public class NettyServerHandler extends SimpleChannelInboundHandler<RpcRequest> {

    private static final Logger logger = LoggerFactory.getLogger(NettyServerHandler.class);
    private final RequestHandler requestHandler;
    private final ServerPublisher serverPublisher;

    public NettyServerHandler(ServerPublisher serverPublisher){
        this.serverPublisher = serverPublisher;
        requestHandler = new RequestHandler();
    }
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RpcRequest msg) throws Exception {
        try {
            logger.info("Server received request: {}", msg);
            Object service = serverPublisher.getService(msg.getInterfaceName());
            Object res = requestHandler.handle(msg, service);
            ChannelFuture future = ctx.writeAndFlush(RpcResponse.success(res, msg.getRequestId()));
            future.addListener(ChannelFutureListener.CLOSE);
        } finally {
            ReferenceCountUtil.release(msg);
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        logger.error("Error occurred while handling the request", cause);
        ctx.close();
    }
}

Client Handler

The client side is similar to the server, except that it processes RpcResponse messages:

public class NettyClientHandler extends SimpleChannelInboundHandler<RpcResponse> {

    private static final Logger logger = LoggerFactory.getLogger(NettyClientHandler.class);
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RpcResponse msg) throws Exception {
        try {
            logger.info("Client received response from the server: {}", msg);
            AttributeKey<RpcResponse> key = AttributeKey.valueOf(msg.getRequestId());
            ctx.channel().attr(key).set(msg);
            ctx.channel().close();
        } finally {
            ReferenceCountUtil.release(msg);
        }

    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        logger.error("Error occurred while receiving the server's response", cause);
        ctx.close();
    }
}

The response is received in the handler, that is, in channelRead0, so sendRequest() has to fetch the server's result from there, which it does through an AttributeKey. Note that before reading the result, the client must block on the channel until the exchange has completed; otherwise channel.attr(key).get() would return null every time, because sendRequest() does not otherwise wait for channelRead0 to have stored the data.
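Blocking on closeFuture() works, but it ties one request to one connection lifetime. A common alternative, not what this chapter's code does, is to register a CompletableFuture per request ID and complete it from the handler thread, which lets the channel stay open across requests. A JDK-only sketch of that pattern (all names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class PendingRequestsDemo {
    // requestId -> future completed by the "handler" thread.
    private static final Map<String, CompletableFuture<String>> PENDING = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        String requestId = "req-1";
        CompletableFuture<String> future = new CompletableFuture<>();
        PENDING.put(requestId, future);

        // Simulates channelRead0 running on an event-loop thread.
        new Thread(() -> PENDING.remove(requestId).complete("RpcResponse for " + requestId)).start();

        // sendRequest() side: wait for the response instead of closing the channel.
        System.out.println(future.get(5, TimeUnit.SECONDS));
    }
}
```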

Testing

The server-side test code:

public class NettyServerTest {

    public static void main(String[] args) {
        ServerPublisher serverPublisher = new DefaultServerPublisher();
        HelloService helloService = new HelloServiceImpl();
        ByeService byeService = new ByeServiceImpl();
        serverPublisher.publishService(helloService);
        serverPublisher.publishService(byeService);
        NettyServer server = new NettyServer(serverPublisher,new ProtobufSerializer());
        server.start(10000);
    }
}

The client-side test code:

public class NettyClientTest {
    public static void main(String[] args) {
        NettyClient client = new NettyClient(new ProtobufSerializer());
        ClientProxy proxy = new ClientProxy(client,"127.0.0.1",10000);
        HelloService helloService = (HelloService)proxy.getProxy(HelloService.class);
        ByeService byeService = (ByeService) proxy.getProxy(ByeService.class);
        RpcObject rpcObject = new RpcObject(2,"This is NettyClient!");
        String s = helloService.sayHello(rpcObject);
        System.out.println(s);
        String a = byeService.bye(rpcObject);
        System.out.println(a);
    }
}

The server-side test output:

[main] INFO cn.fzzfrjf.core.DefaultServerPublisher - Registering service cn.fzzfrjf.service.HelloServiceImpl for interface [interface cn.fzzfrjf.entity.HelloService]
[main] INFO cn.fzzfrjf.core.DefaultServerPublisher - Registering service cn.fzzfrjf.service.ByeServiceImpl for interface [interface cn.fzzfrjf.entity.ByeService]
[main] WARN io.netty.bootstrap.ServerBootstrap - Unknown channel option 'SO_KEEPALIVE' for channel '[id: 0xa792b98f]'
[nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xa792b98f] REGISTERED
[nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xa792b98f] BIND: 0.0.0.0/0.0.0.0:10000
[nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xa792b98f, L:/0:0:0:0:0:0:0:0:10000] ACTIVE
[nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xa792b98f, L:/0:0:0:0:0:0:0:0:10000] READ: [id: 0xfac7b904, L:/127.0.0.1:10000 - R:/127.0.0.1:55776]
[nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xa792b98f, L:/0:0:0:0:0:0:0:0:10000] READ COMPLETE
[nioEventLoopGroup-3-1] INFO cn.fzzfrjf.core.NettyServerHandler - Server received request: cn.fzzfrjf.entity.RpcRequest@73f95071
[nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xa792b98f, L:/0:0:0:0:0:0:0:0:10000] READ: [id: 0xb1bc4e91, L:/127.0.0.1:10000 - R:/127.0.0.1:55777]
[nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xa792b98f, L:/0:0:0:0:0:0:0:0:10000] READ COMPLETE
[nioEventLoopGroup-3-2] INFO cn.fzzfrjf.core.NettyServerHandler - Server received request: cn.fzzfrjf.entity.RpcRequest@103348e2

The client-side test output:

[nioEventLoopGroup-2-1] INFO cn.fzzfrjf.core.ChannelProvider - Channel connected successfully to server activate.navicat.com:10000
[nioEventLoopGroup-2-1] INFO cn.fzzfrjf.core.NettyClient - Request sent to server: cn.fzzfrjf.entity.RpcRequest@ec58f6b
[nioEventLoopGroup-2-1] INFO cn.fzzfrjf.core.NettyClientHandler - Client received response from the server: RpcResponse(code=200, requestId=98d12bf4-6a12-4e9c-8606-606d3ff5118e, data=Sent by id 2: This is NettyClient!)
[nioEventLoopGroup-2-2] INFO cn.fzzfrjf.core.ChannelProvider - Channel connected successfully to server activate.navicat.com:10000
Sent by id 2: This is NettyClient!
[nioEventLoopGroup-2-2] INFO cn.fzzfrjf.core.NettyClient - Request sent to server: cn.fzzfrjf.entity.RpcRequest@6489b48d
[nioEventLoopGroup-2-2] INFO cn.fzzfrjf.core.NettyClientHandler - Client received response from the server: RpcResponse(code=200, requestId=a85778b1-fbe3-4bf3-ba57-8e1bb36e290b, data=(This is NettyClient!),bye!)
(This is NettyClient!),bye!

Netty has been introduced successfully!
To be continued...
