SOFABolt Source Code Analysis: From Server Startup to Message Processing

Preface

This article roughly walks through how SOFABolt uses the user-registered Processor to handle messages. It's a long one.

Why analyze SOFABolt's source at all? After finishing Netty, yours truly wanted to study how Netty is applied in real projects to consolidate what I'd learned. Dubbo came to mind first, since Dubbo's transport is also built on Netty, but after pulling the Dubbo source I found its Netty layer wrapped extremely deeply, and a whole morning of debugging got me nowhere. So I went looking for other Netty-based frameworks from Alibaba and found SOFABolt, a base communication framework designed by Ant Financial in 2018. Many of Ant's home-grown middleware products build their communication on it. It is essentially a direct wrapper over Netty, and because it is a base communication framework, the low-level details are exposed very plainly, which makes it great study material. I spent a week of spare moments turning it inside out; this article sorts out the overall flow, and later posts will pick out a few components to analyze their design in detail, maybe even yielding a few issues to file.

1. Demo

I simply pulled the source from the official repo, which ships with demo examples.

Listing 1: RpcServerDemoByMain.java

public class RpcServerDemoByMain {
    static Logger             logger                    = LoggerFactory
                                                            .getLogger(RpcServerDemoByMain.class);

    BoltServer                server;

    int                       port                      = 8999;

    SimpleServerUserProcessor serverUserProcessor       = new SimpleServerUserProcessor();
    CONNECTEventProcessor     serverConnectProcessor    = new CONNECTEventProcessor();
    DISCONNECTEventProcessor  serverDisConnectProcessor = new DISCONNECTEventProcessor();

    public RpcServerDemoByMain() {
        // 1. create a Rpc server with port assigned
        server = new BoltServer(port);
        // 2. add processor for connect and close event if you need
        server.addConnectionEventProcessor(ConnectionEventType.CONNECT, serverConnectProcessor);
        server.addConnectionEventProcessor(ConnectionEventType.CLOSE, serverDisConnectProcessor);
        // 3. register user processor for client request
        server.registerUserProcessor(serverUserProcessor);
        // 4. server start
        if (server.start()) {
            System.out.println("server start ok!");
        } else {
            System.out.println("server start failed!");
        }
        // server.getRpcServer().stop();
    }

    public static void main(String[] args) {
        new RpcServerDemoByMain();
    }
}

Listing 2: RpcClientDemoByMain.java

public class RpcClientDemoByMain {
    static Logger             logger                    = LoggerFactory
                                                            .getLogger(RpcClientDemoByMain.class);

    static RpcClient          client;

    static String             addr                      = "127.0.0.1:8999";

    SimpleClientUserProcessor clientUserProcessor       = new SimpleClientUserProcessor();
    CONNECTEventProcessor     clientConnectProcessor    = new CONNECTEventProcessor();
    DISCONNECTEventProcessor  clientDisConnectProcessor = new DISCONNECTEventProcessor();

    public RpcClientDemoByMain() {
        // 1. create a rpc client
        client = new RpcClient();
        // 2. add processor for connect and close event if you need
        client.addConnectionEventProcessor(ConnectionEventType.CONNECT, clientConnectProcessor);
        client.addConnectionEventProcessor(ConnectionEventType.CLOSE, clientDisConnectProcessor);
        // 3. do init
        client.init();
    }

    public static void main(String[] args) {
        new RpcClientDemoByMain();
        RequestBody req = new RequestBody(2, "hello world sync");
        try {
            String res = (String) client.invokeSync(addr, req, 3000);
            System.out.println("invoke sync result = [" + res + "]");
        } catch (RemotingException e) {
            String errMsg = "RemotingException caught in oneway!";
            logger.error(errMsg, e);
            Assert.fail(errMsg);
        } catch (InterruptedException e) {
            logger.error("interrupted!");
        }
        client.shutdown();
    }
}
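
The demo above uses invokeSync only. RpcClient also supports the other three invocation models Bolt advertises (oneway, future, callback). The sketch below shows them against the same client and addr; the signatures are my reading of the Bolt API, so treat it as a sketch rather than gospel:

    // oneway: fire-and-forget, no response is expected
    client.oneway(addr, req);

    // future: send now, fetch the result later
    RpcResponseFuture future = client.invokeWithFuture(addr, req, 3000);
    Object futureResult = future.get(3000);

    // callback: the result is delivered asynchronously
    client.invokeWithCallback(addr, req, new InvokeCallback() {
        @Override
        public void onResponse(Object result) {
            System.out.println("callback result = [" + result + "]");
        }

        @Override
        public void onException(Throwable e) {
            e.printStackTrace();
        }

        @Override
        public Executor getExecutor() {
            return null; // null should fall back to Bolt's default callback executor
        }
    }, 3000);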

We'll mainly analyze Listing 1. You can see the server creates a BoltServer instance and registers a SimpleServerUserProcessor. So what is SimpleServerUserProcessor for? Short answer up front: in this example, it is the thing that handles client requests. Let's look at its structure:

2. UserProcessor

Here is SimpleServerUserProcessor's inheritance and implementation structure:

(Figure 1: class hierarchy of SimpleServerUserProcessor)

Our own processors, likewise, implement UserProcessor or extend the AbstractUserProcessor class. Next, let's see what methods UserProcessor declares:

(figure: the methods declared in UserProcessor)

The key ones are:

  1. BizContext preHandleRequest: pre-process the request, so business logic isn't exposed directly
  2. void handleRequest: handle the request asynchronously
  3. Object handleRequest: handle the request synchronously, returning the result straight to the client
  4. String interest(): the class name of the requests this processor is interested in

The remaining methods will come up later as we meet them in the analysis. A concrete sketch of a processor built on these methods follows below.
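
To make interest() concrete, here is a minimal sketch of a custom synchronous processor in the spirit of the demo's SimpleServerUserProcessor (the class name and method bodies are illustrative, not the demo's actual code):

    public class MyServerUserProcessor extends SyncUserProcessor<RequestBody> {

        @Override
        public Object handleRequest(BizContext bizCtx, RequestBody request) throws Exception {
            // business logic goes here; the return value is sent straight back to the client
            return "echo: " + request.getMsg();
        }

        @Override
        public String interest() {
            // only requests whose class name equals this string are dispatched here
            return RequestBody.class.getName();
        }
    }

Because interest() returns RequestBody's class name, the server routes every incoming RequestBody to this processor, as we'll see during registration and dispatch.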

We won't dig into the internals of the official SimpleServerUserProcessor here (see the official examples for its listing). What we care about is when UserProcessor's methods are invoked. Before it can take effect at all, it has to be registered with the BoltServer, so let's look at that:

3. Registering the UserProcessor

Back to this line in Listing 1:

server.registerUserProcessor(serverUserProcessor);

Step inside:

public class BoltServer {
    /** port */
    private int       port;

    /** rpc server */
    private RpcServer server;

    // ~~~ constructors
    public BoltServer(int port) {
        this.port = port;
        this.server = new RpcServer(this.port);
    }
// other unrelated methods omitted
    public void registerUserProcessor(UserProcessor<?> processor) {
        this.server.registerUserProcessor(processor);
    }
}

You can see it merely delegates: the UserProcessor is registered with the underlying RpcServer. Let's follow the call and see what happens during registration:

RpcServer.java

    /** user processors of rpc server */
    private ConcurrentHashMap<String, UserProcessor<?>> userProcessors = new ConcurrentHashMap<String, UserProcessor<?>>(4);
    @Override
    public void registerUserProcessor(UserProcessor<?> processor) {
        UserProcessorRegisterHelper.registerUserProcessor(processor, this.userProcessors);
        // startup the processor if it registered after component startup
        if (isStarted() && !processor.isStarted()) {
            processor.startup();
        }
    }

The first line of the method uses a helper class to register the given processor into a map owned by RpcServer: the processor's interest() return value is the key and the processor itself is the value, stored into userProcessors. Note that this map lives on RpcServer.

If the server is already started and the processor isn't, the freshly registered processor is switched to the started state right away.
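
UserProcessorRegisterHelper itself is tiny; its effect is roughly the following (a simplified sketch of my own: the real helper also handles MultiInterestUserProcessor, which registers one processor under several interest keys):

    // simplified sketch of UserProcessorRegisterHelper.registerUserProcessor
    static void registerUserProcessor(UserProcessor<?> processor,
                                      ConcurrentHashMap<String, UserProcessor<?>> userProcessors) {
        String interest = processor.interest();          // key: the request class name
        UserProcessor<?> prev = userProcessors.putIfAbsent(interest, processor);
        if (prev != null) {
            // registering two processors for the same request type is an error
            throw new RuntimeException("Processor for " + interest + " already registered!");
        }
    }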

Note that registerUserProcessor is actually declared on the RemotingServer interface:

(figure: the RemotingServer interface and its registerUserProcessor method)

Let's recap the UserProcessor registration flow:

  1. Call BoltServer's registerUserProcessor method;
  2. BoltServer's registerUserProcessor delegates to RpcServer's registerUserProcessor, which stores the UserProcessor in RpcServer's userProcessors map;
  3. If the server has already started, the newly registered processor is also switched to the started state.

With those details in hand, the next question is where the processor's request-handling methods, handleRequest and friends, actually get called.

That story begins with the server's startup flow, so let's walk through it.

4. Startup Process

Back to Listing 1 once more:

server.start()

This calls BoltServer's start method, which in turn calls RpcServer's start method:

    public boolean start() {
        this.server.start();
        return true;
    }

Following the call chain all the way down, we end up in the startup method of the template class AbstractRemotingServer:

    @Override
    public void startup() throws LifeCycleException {
        super.startup(); // mark the server as started

        try {
            doInit();

            logger.warn("Prepare to start server on port {} ", port);
            if (doStart()) {
                logger.warn("Server started on port {}", port);
            } else {
                logger.warn("Failed starting server on port {}", port);
                throw new LifeCycleException("Failed starting server on port: " + port);
            }
        } catch (Throwable t) {
            this.shutdown();// do stop to ensure close resources created during doInit()
            throw new IllegalStateException("ERROR: Failed to start the Server!", t);
        }
    }

As you can see, this method calls doInit and then doStart. Both are abstract methods that RpcServer, as a subclass of AbstractRemotingServer, has to implement, so let's look directly at what RpcServer's doInit and doStart do.

4.1 RpcServer

Code first:

    @Override
    protected void doInit() {
        // unrelated code omitted
        // a static block in RpcRemoting initializes the ProtocolManager here
        initRpcRemoting();

        this.bootstrap = new ServerBootstrap();
        this.bootstrap
            .group(bossGroup, workerGroup)
            .channel(NettyEventLoopUtil.getServerSocketChannelClass())
                // size of the backlog queue for pending connections
            .option(ChannelOption.SO_BACKLOG, ConfigManager.tcp_so_backlog())
                // allow reuse of local address and port
            .option(ChannelOption.SO_REUSEADDR, ConfigManager.tcp_so_reuseaddr())
            .childOption(ChannelOption.TCP_NODELAY, ConfigManager.tcp_nodelay())
            .childOption(ChannelOption.SO_KEEPALIVE, ConfigManager.tcp_so_keepalive())
            .childOption(ChannelOption.SO_SNDBUF,
                tcpSoSndBuf != null ? tcpSoSndBuf : ConfigManager.tcp_so_sndbuf())
            .childOption(ChannelOption.SO_RCVBUF,
                tcpSoRcvBuf != null ? tcpSoRcvBuf : ConfigManager.tcp_so_rcvbuf());


        // enable trigger mode for epoll if need
        NettyEventLoopUtil.enableTriggeredMode(bootstrap);
		
        final RpcHandler rpcHandler = new RpcHandler(true, this.userProcessors); // (1)
        this.bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {

            @Override
            protected void initChannel(SocketChannel channel) {
                ChannelPipeline pipeline = channel.pipeline();
                // unrelated code omitted
                pipeline.addLast("decoder", codec.newDecoder());//代码2
                pipeline.addLast("encoder", codec.newEncoder());
                if (idleSwitch) {
                    pipeline.addLast("idleStateHandler", new IdleStateHandler(0, 0, idleTime,
                        TimeUnit.MILLISECONDS));
                    pipeline.addLast("serverIdleHandler", serverIdleHandler);
                }
                pipeline.addLast("connectionEventHandler", connectionEventHandler);
                pipeline.addLast("handler", rpcHandler);
                createConnection(channel);
            }

            private void createConnection(SocketChannel channel) {
                Url url = addressParser.parse(RemotingUtil.parseRemoteAddress(channel));
                if (switches().isOn(GlobalSwitch.SERVER_MANAGE_CONNECTION_SWITCH)) {
                    connectionManager.add(new Connection(channel, url), url.getUniqueKey());
                } else {
                    new Connection(channel, url);
                }
                channel.pipeline().fireUserEventTriggered(ConnectionEventType.CONNECT);
            }
        });
    }

This is the standard Netty server bootstrap: create a ServerBootstrap and assign the boss/worker event loop groups; I won't dwell on the Netty details here. What matters is the line marked (1): it creates an RpcHandler, passes RpcServer's userProcessors map into it, and finally adds it at the tail of each channel's pipeline. In other words, when data arrives on a connection, it is handled at the last position of the pipeline (ignoring Netty's own built-in tail handler). A reference view of the resulting pipeline follows, and then we'll look at this rpcHandler itself:
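
Reconstructed from initChannel above (the idle handlers appear only when idleSwitch is on):

    decoder -> encoder -> [idleStateHandler -> serverIdleHandler] -> connectionEventHandler -> handler (RpcHandler)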

4.2 RpcHandler
@ChannelHandler.Sharable
public class RpcHandler extends ChannelInboundHandlerAdapter {
    private boolean                                     serverSide;

    private ConcurrentHashMap<String, UserProcessor<?>> userProcessors;

    public RpcHandler() {
        serverSide = false;
    }

    public RpcHandler(ConcurrentHashMap<String, UserProcessor<?>> userProcessors) {
        serverSide = false;
        this.userProcessors = userProcessors;
    }

    public RpcHandler(boolean serverSide, ConcurrentHashMap<String, UserProcessor<?>> userProcessors) {
        this.serverSide = serverSide;
        this.userProcessors = userProcessors;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ProtocolCode protocolCode = ctx.channel().attr(Connection.PROTOCOL).get(); // (3)
        Protocol protocol = ProtocolManager.getProtocol(protocolCode); // (4)
        protocol.getCommandHandler().handleCommand(
            new RemotingContext(ctx, new InvokeContext(), serverSide, userProcessors), msg);
    }
}

This is a sharable handler, with a serverSide flag marking whether it sits on the server side. As anyone who has used Netty knows, a ChannelHandler's channelRead fires whenever data reaches it in the pipeline, so when the server receives a request, RpcHandler's channelRead runs. What does it do?

Look first at the line marked (3): it pulls the protocolCode out of a channel attribute, and line (4) uses it to fetch the Protocol whose command handler will process the request. So when was this protocolCode stashed there? This deserves a closer look. The server is handling an inbound request, so a decoder must run first; that decoder was installed on the pipeline back when RpcServer initialized. Return to line (2) of RpcServer's doInit:

pipeline.addLast("decoder", codec.newDecoder());

This line installs a decoder on the pipeline. Codec is the interface dedicated to creating encoders and decoders; Bolt's RpcCodec implements it, and RpcServer instantiates an RpcCodec directly for its codec field:

private Codec codec = new RpcCodec();

So we only need to see what RpcCodec creates as its decoder:

public class RpcCodec implements Codec {

    @Override
    public ChannelHandler newEncoder() {
        return new ProtocolCodeBasedEncoder(ProtocolCode.fromBytes(RpcProtocolV2.PROTOCOL_CODE));
    }

    @Override
    public ChannelHandler newDecoder() {
        return new RpcProtocolDecoder(RpcProtocolManager.DEFAULT_PROTOCOL_CODE_LENGTH);
    }
}

It directly creates an RpcProtocolDecoder. Its inheritance chain:

(figure: inheritance hierarchy of RpcProtocolDecoder)

Clearly this is a decoder Bolt wrote itself, so it must have a decode method somewhere. To spare you the search: it's in ProtocolCodeBasedDecoder:

4.3 ProtocolCodeBasedDecoder
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
        // mark the current reader index
        in.markReaderIndex();
        // the protocol code is currently 1 byte long; its value is 1 or 2
        ProtocolCode protocolCode = decodeProtocolCode(in);
        if (null != protocolCode) {
            // read the protocol version
            byte protocolVersion = decodeProtocolVersion(in);
            if (ctx.channel().attr(Connection.PROTOCOL).get() == null) {
                ctx.channel().attr(Connection.PROTOCOL).set(protocolCode);
                if (DEFAULT_ILLEGAL_PROTOCOL_VERSION_LENGTH != protocolVersion) {
                    // store the protocol version on the connection
                    ctx.channel().attr(Connection.VERSION).set(protocolVersion);
                }
            }
            // look up the protocol by its code; there are two Protocol implementations
            Protocol protocol = ProtocolManager.getProtocol(protocolCode);
            if (null != protocol) {
                // reset readerIndex back to the mark set earlier
                in.resetReaderIndex();
                protocol.getDecoder().decode(ctx, in, out);
            } else {
                throw new CodecException("Unknown protocol code: [" + protocolCode
                                         + "] while decode in ProtocolDecoder.");
            }
        }
    }

The gist of this code:

  1. read the protocol code and version, and save them on the channel;
  2. then fetch the protocol of that version and let its decoder decode the incoming message.

The code above is annotated inline. The main points left to cover are how a protocol of a given version is obtained through its code, and what that protocol's decoder produces. The protocol comes from a manager, so let's see what's inside it.

4.4 ProtocolManager
public class ProtocolManager {

    private static final ConcurrentMap<ProtocolCode, Protocol> protocols = new ConcurrentHashMap<ProtocolCode, Protocol>();

    public static Protocol getProtocol(ProtocolCode protocolCode) {
        return protocols.get(protocolCode);
    }

    public static void registerProtocol(Protocol protocol, byte... protocolCodeBytes) {
        registerProtocol(protocol, ProtocolCode.fromBytes(protocolCodeBytes));
    }

    public static void registerProtocol(Protocol protocol, ProtocolCode protocolCode) {
        if (null == protocolCode || null == protocol) {
            throw new RuntimeException("Protocol: " + protocol + " and protocol code:"
                                       + protocolCode + " should not be null!");
        }
        Protocol exists = ProtocolManager.protocols.putIfAbsent(protocolCode, protocol);
        if (exists != null) {
            throw new RuntimeException("Protocol for code: " + protocolCode + " already exists!");
        }
    }

    public static Protocol unRegisterProtocol(byte protocolCode) {
        return ProtocolManager.protocols.remove(ProtocolCode.fromBytes(protocolCode));
    }
}

It's easy to see that this manager keeps the various Protocols in a map. At present Bolt ships only two Protocol implementations, RpcProtocol and RpcProtocolV2; interested readers can study the source, where the comments are detailed, including the wire layout. By now you probably share my question: when did these two protocols get put into the manager?
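Incidentally, the registerProtocol API above is all it would take to plug in a protocol of your own. A hypothetical sketch (MyProtocol is an imagined Protocol implementation, not something Bolt ships):

    // hypothetical: register a third protocol under an unused code
    ProtocolManager.registerProtocol(new MyProtocol(), (byte) 3);
    // registering the same code twice throws, thanks to the putIfAbsent above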

We have to go back to this line of RpcServer#doInit from section 4.1:

initRpcRemoting();

This method initializes the ProtocolManager and registers both protocol versions. Let's take a look:

    /**
     * init rpc remoting
     */
    protected void initRpcRemoting() {
        this.rpcRemoting = new RpcServerRemoting(new RpcCommandFactory(), this.addressParser,
            this.connectionManager);
    }

Initialization here creates an RpcServerRemoting. Never mind its role for now; chasing the hierarchy upward, its parent class RpcRemoting has a static block that initializes the ProtocolManager:


    static {
        RpcProtocolManager.initProtocols();
    }

RpcProtocolManager.java

public class RpcProtocolManager {
    public static final int DEFAULT_PROTOCOL_CODE_LENGTH = 1;

    public static void initProtocols() {
        ProtocolManager.registerProtocol(new RpcProtocol(), RpcProtocol.PROTOCOL_CODE);
        ProtocolManager.registerProtocol(new RpcProtocolV2(), RpcProtocolV2.PROTOCOL_CODE);
    }
}

4.4.1 Protocol

A Protocol mainly initializes its encoder, decoder, heartbeat trigger, command handler, and command factory:

public class RpcProtocol implements Protocol {
    public static final byte PROTOCOL_CODE       = (byte) 1;
    private static final int REQUEST_HEADER_LEN  = 22;
    private static final int RESPONSE_HEADER_LEN = 20;
    private CommandEncoder   encoder;
    private CommandDecoder   decoder;
    private HeartbeatTrigger heartbeatTrigger;
    private CommandHandler   commandHandler;
    private CommandFactory   commandFactory;

    public RpcProtocol() {
        this.encoder = new RpcCommandEncoder();
        this.decoder = new RpcCommandDecoder();
        this.commandFactory = new RpcCommandFactory();
        this.heartbeatTrigger = new RpcHeartbeatTrigger(this.commandFactory);
        this.commandHandler = new RpcCommandHandler(this.commandFactory);
    }
 }

For space reasons I won't pick apart the encoders and friends in this article; they'll get their own analysis later.

4.5 RemotingCommand

Section 4.3's ProtocolCodeBasedDecoder#decode ends by calling:

protocol.getDecoder().decode(ctx, in, out);

This fetches the protocol's decoder to decode the message. As the code in section 4.4.1 shows, that decoder was initialized as an RpcCommandDecoder when the protocol was constructed, so let's look at RpcCommandDecoder#decode:

    @Override
    public void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
        // the less length between response header and request header
        if (in.readableBytes() >= lessLen) {
            in.markReaderIndex();
            byte protocol = in.readByte();
            in.resetReaderIndex();
            if (protocol == RpcProtocol.PROTOCOL_CODE) {
                /*
                 * ver: version for protocol
                 * type: request/response/request oneway
                 * cmdcode: code for remoting command
                 * ver2:version for remoting command
                 * requestId: id of request
                 * codec: code for codec
                 * (req)timeout: request timeout
                 * (resp)respStatus: response status
                 * classLen: length of request or response class name
                 * headerLen: length of header
                 * contentLen: length of content
                 * className
                 * header
                 * content
                 */
                if (in.readableBytes() > 2) {
                    in.markReaderIndex();
                    in.readByte(); //version
                    byte type = in.readByte(); //type
                    if (type == RpcCommandType.REQUEST || type == RpcCommandType.REQUEST_ONEWAY) {
                        //decode request
                        if (in.readableBytes() >= RpcProtocol.getRequestHeaderLength() - 2) {
                            short cmdCode = in.readShort();
                            byte ver2 = in.readByte();
                            int requestId = in.readInt();
                            byte serializer = in.readByte();
                            int timeout = in.readInt();
                            short classLen = in.readShort();
                            short headerLen = in.readShort();
                            int contentLen = in.readInt();
                            byte[] clazz = null;
                            byte[] header = null;
                            byte[] content = null;
                            if (in.readableBytes() >= classLen + headerLen + contentLen) {
                                if (classLen > 0) {
                                    clazz = new byte[classLen];
                                    in.readBytes(clazz);
                                }
                                if (headerLen > 0) {
                                    header = new byte[headerLen];
                                    in.readBytes(header);
                                }
                                if (contentLen > 0) {
                                    content = new byte[contentLen];
                                    in.readBytes(content);
                                }
                            } else {// not enough data
                                in.resetReaderIndex();
                                return;
                            }
                            RequestCommand command;
                            if (cmdCode == CommandCode.HEARTBEAT_VALUE) {
                                command = new HeartbeatCommand();
                            } else {
                                command = createRequestCommand(cmdCode);
                            }
                            command.setType(type);
                            command.setVersion(ver2);
                            command.setId(requestId);
                            command.setSerializer(serializer);
                            command.setTimeout(timeout);
                            command.setClazz(clazz);
                            command.setHeader(header);
                            command.setContent(content);
                            out.add(command);

                        } else {
                            in.resetReaderIndex();
                        }
                    } else if (type == RpcCommandType.RESPONSE) {
                        //decode response
                        if (in.readableBytes() >= RpcProtocol.getResponseHeaderLength() - 2) {
                            short cmdCode = in.readShort();
                            byte ver2 = in.readByte();
                            int requestId = in.readInt();
                            byte serializer = in.readByte();
                            short status = in.readShort();
                            short classLen = in.readShort();
                            short headerLen = in.readShort();
                            int contentLen = in.readInt();
                            byte[] clazz = null;
                            byte[] header = null;
                            byte[] content = null;
                            if (in.readableBytes() >= classLen + headerLen + contentLen) {
                                if (classLen > 0) {
                                    clazz = new byte[classLen];
                                    in.readBytes(clazz);
                                }
                                if (headerLen > 0) {
                                    header = new byte[headerLen];
                                    in.readBytes(header);
                                }
                                if (contentLen > 0) {
                                    content = new byte[contentLen];
                                    in.readBytes(content);
                                }
                            } else {// not enough data
                                in.resetReaderIndex();
                                return;
                            }
                            ResponseCommand command;
                            if (cmdCode == CommandCode.HEARTBEAT_VALUE) {

                                command = new HeartbeatAckCommand();
                            } else {
                                command = createResponseCommand(cmdCode);
                            }
                            command.setType(type);
                            command.setVersion(ver2);
                            command.setId(requestId);
                            command.setSerializer(serializer);
                            command.setResponseStatus(ResponseStatus.valueOf(status));
                            command.setClazz(clazz);
                            command.setHeader(header);
                            command.setContent(content);
                            command.setResponseTimeMillis(System.currentTimeMillis());
                            command.setResponseHost((InetSocketAddress) ctx.channel()
                                .remoteAddress());
                            out.add(command);
                        } else {
                            in.resetReaderIndex();
                        }
                    } else {
                        String emsg = "Unknown command type: " + type;
                        logger.error(emsg);
                        throw new RuntimeException(emsg);
                    }
                }

            } else {
                String emsg = "Unknown protocol: " + protocol;
                logger.error(emsg);
                throw new RuntimeException(emsg);
            }

        }
    }

The end goal of this method is to decode the bytes into a RequestCommand or ResponseCommand according to the message type. Why these shapes? Messages are serialized to bytes and packed for network transmission. When Netty moves data, the server by default reads into a buffer, which improves throughput but also creates the sticky-packet problem: one read may deliver several packets (or only part of one), and the server can't tell where a packet ends, so messages would come out garbled or truncated.

The usual cure is for client and server to agree on a protocol whose packets carry their own lengths (or equivalent framing information); the server then knows exactly how much to read before handing a complete packet to the next handler. Bolt is designed the same way, as the reconstruction below shows:
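
Piecing together the reads in RpcCommandDecoder above, a V1 request frame looks like this (my own reconstruction from the decode code, not an official diagram; sizes in bytes):

    proto(1) | type(1) | cmdCode(2) | ver2(1) | requestId(4) | codec(1) | timeout(4)
    | classLen(2) | headerLen(2) | contentLen(4)      <- fixed 22-byte request header
    | className(classLen) | header(headerLen) | content(contentLen)

A response frame replaces timeout(4) with respStatus(2), giving the 20-byte response header. Whenever fewer than classLen + headerLen + contentLen bytes are readable, the decoder resets the reader index and simply waits for the next read; that is exactly how the sticky/partial packet problem is solved.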

Have a look at the abstract base class of these commands:

4.5.1 RpcCommand
public abstract class RpcCommand implements RemotingCommand {

    /** For serialization  */
    private static final long serialVersionUID = -3570261012462596503L;

    /**
     * Code which stands for the command.
     */
    private CommandCode       cmdCode;
    /* command version */
    private byte              version          = 0x1;
    private byte              type;
    /**
     * Serializer, see the Configs.SERIALIZER_DEFAULT for the default serializer.
     * Notice: this can not be changed after initialized at runtime.
     */
    private byte              serializer       = ConfigManager.serializer;
    /**
     * protocol switches
     */
    private ProtocolSwitch    protocolSwitch   = new ProtocolSwitch();
    private int               id;
    /** The length of clazz */
    private short             clazzLength      = 0;
    private short             headerLength     = 0;
    private int               contentLength    = 0;
    /** The class of content */
    private byte[]            clazz;
    /** Header is used for transparent transmission. */
    private byte[]            header;
    /** The bytes format of the content of the command. */
    private byte[]            content;
    /** invoke context of each rpc command. */
    private InvokeContext     invokeContext;
}

Combined with the framing explanation above, this class's role should now be clear: its fields mirror the fields of the wire frame one for one.

5. Message Processing

5.1 CommandHandler

Now back to this line in RpcHandler's channelRead from section 4.2:

 protocol.getCommandHandler().handleCommand(
            new RemotingContext(ctx, new InvokeContext(), serverSide, userProcessors), msg);

protocol.getCommandHandler() returns the RpcCommandHandler initialized earlier, the only implementation of CommandHandler. Its handleCommand method delegates to the private handle method below:

    /*
     * Handle the request(s).
     */
    private void handle(final RemotingContext ctx, final Object msg) {
        try {
            if (msg instanceof List) {
                final Runnable handleTask = new Runnable() {
                    @Override
                    public void run() {
                        if (logger.isDebugEnabled()) {
                            logger.debug("Batch message! size={}", ((List<?>) msg).size());
                        }
                        for (final Object m : (List<?>) msg) {
                            RpcCommandHandler.this.process(ctx, m);
                        }
                    }
                };
                if (RpcConfigManager.dispatch_msg_list_in_default_executor()) {
                    // If msg is list ,then the batch submission to biz threadpool can save io thread.
                    // See com.alipay.remoting.decoder.ProtocolDecoder
                    processorManager.getDefaultExecutor().execute(handleTask);
                } else {
                    handleTask.run();
                }
            } else {
                process(ctx, msg);
            }
        } catch (final Throwable t) {
            processException(ctx, msg, t);
        }
    }

    @SuppressWarnings({ "rawtypes", "unchecked" })
    private void process(RemotingContext ctx, Object msg) {
        try {
            final RpcCommand cmd = (RpcCommand) msg;
            final RemotingProcessor processor = processorManager.getProcessor(cmd.getCmdCode());
            processor.process(ctx, cmd, processorManager.getDefaultExecutor());
        } catch (final Throwable t) {
            processException(ctx, msg, t);
        }
    }

Bear with me, we're about to see the whole picture.
The handle method here ultimately calls RpcCommandHandler#process:

  1. In process, the message handed over by the previous handler/decoder is cast to RpcCommand. Since RpcHandler was added to the pipeline last, section 4.5 tells us the message was decoded into an RpcCommand long before it got here, so the cast is safe.
  2. A processor is then fetched from processorManager; these processors were registered when RpcCommandHandler was instantiated:
    public RpcCommandHandler(CommandFactory commandFactory) {
        this.commandFactory = commandFactory;
        this.processorManager = new ProcessorManager();
        //process request
        this.processorManager.registerProcessor(RpcCommandCode.RPC_REQUEST,
            new RpcRequestProcessor(this.commandFactory));
        //process response
        this.processorManager.registerProcessor(RpcCommandCode.RPC_RESPONSE,
            new RpcResponseProcessor());

        this.processorManager.registerProcessor(CommonCommandCode.HEARTBEAT,
            new RpcHeartBeatProcessor());

        this.processorManager
            .registerDefaultProcessor(new AbstractRemotingProcessor<RemotingCommand>() {
                @Override
                public void doProcess(RemotingContext ctx, RemotingCommand msg) throws Exception {
                    logger.error("No processor available for command code {}, msgId {}",
                        msg.getCmdCode(), msg.getId());
                }
            });
    }
 
  3. Assuming the message here is of the request type, the processor we get is RpcRequestProcessor, and its process method is called:
    RpcRequestProcessor#process
    public void process(RemotingContext ctx, RpcRequestCommand cmd, ExecutorService defaultExecutor)
                                                                                                    throws Exception {
        if (!deserializeRequestCommand(ctx, cmd, RpcDeserializeLevel.DESERIALIZE_CLAZZ)) {
            return;
        }
        UserProcessor userProcessor = ctx.getUserProcessor(cmd.getRequestClass()); // fetch the user-registered processor
        if (userProcessor == null) {
            String errMsg = "No user processor found for request: " + cmd.getRequestClass();
            logger.error(errMsg);
            sendResponseIfNecessary(ctx, cmd.getType(), this.getCommandFactory()
                .createExceptionResponse(cmd.getId(), errMsg));
            return;// must end process
        }

        // set timeout check state from user's processor
        ctx.setTimeoutDiscard(userProcessor.timeoutDiscard());

        // to check whether to process in io thread
        if (userProcessor.processInIOThread()) {
            if (!deserializeRequestCommand(ctx, cmd, RpcDeserializeLevel.DESERIALIZE_ALL)) {
                return;
            }
            // process in io thread
            new ProcessTask(ctx, cmd).run();
            return;// end
        }

        Executor executor;
        // to check whether get executor using executor selector
        if (null == userProcessor.getExecutorSelector()) {
            executor = userProcessor.getExecutor();
        } else {
            // in case haven't deserialized in io thread
            // it need to deserialize clazz and header before using executor dispath strategy
            if (!deserializeRequestCommand(ctx, cmd, RpcDeserializeLevel.DESERIALIZE_HEADER)) {
                return;
            }
            //try get executor with strategy
            executor = userProcessor.getExecutorSelector().select(cmd.getRequestClass(),
                cmd.getRequestHeader());
        }

        // Till now, if executor still null, then try default
        if (executor == null) {
            executor = (this.getExecutor() == null ? defaultExecutor : this.getExecutor());
        }

        // use the final executor dispatch process task
        executor.execute(new ProcessTask(ctx, cmd));
    }

This method looks up the processor the user registered. Remember SimpleServerUserProcessor from the demo? (Check the official example for its details.) The work is finally wrapped in a task and handed to an Executor.
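
Note the executor resolution order in the code above: the processor's ExecutorSelector first, then its getExecutor(), then the handler's or the default executor. So giving your UserProcessor its own thread pool is just an override away; extending the MyServerUserProcessor sketch from section 2 (again illustrative, not demo code):

    // added to the earlier MyServerUserProcessor sketch:
    // requests of our interest type now run on this pool instead of the default one
    private final ExecutorService bizPool = Executors.newFixedThreadPool(4);

    @Override
    public Executor getExecutor() {
        return bizPool;
    }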

5.2 RpcRequestProcessor

ProcessTask is an inner class of RpcRequestProcessor:

    class ProcessTask implements Runnable {

        RemotingContext   ctx;
        RpcRequestCommand msg;

        public ProcessTask(RemotingContext ctx, RpcRequestCommand msg) {
            this.ctx = ctx;
            this.msg = msg;
        }

        /**
         * @see java.lang.Runnable#run()
         */
        @Override
        public void run() {
            try {
                RpcRequestProcessor.this.doProcess(ctx, msg);
            } catch (Throwable e) {
                //protect the thread running this task
                String remotingAddress = RemotingUtil.parseRemoteAddress(ctx.getChannelContext()
                    .channel());
                logger
                    .error(
                        "Exception caught when process rpc request command in RpcRequestProcessor, Id="
                                + msg.getId() + "! Invoke source address is [" + remotingAddress
                                + "].", e);
            }
        }

    }

Inside the task, the actual work is again delegated to RpcRequestProcessor#doProcess:

    public void doProcess(final RemotingContext ctx, RpcRequestCommand cmd) throws Exception {
        long currentTimestamp = System.currentTimeMillis();

        preProcessRemotingContext(ctx, cmd, currentTimestamp); // pre-process the remoting context
        if (ctx.isTimeoutDiscard() && ctx.isRequestTimeout()) {
            timeoutLog(cmd, currentTimestamp, ctx);// do some log
            return;// then, discard this request
        }
        debugLog(ctx, cmd, currentTimestamp);
        // decode request all
        if (!deserializeRequestCommand(ctx, cmd, RpcDeserializeLevel.DESERIALIZE_ALL)) { // deserialize fully
            return;
        }
        dispatchToUserProcessor(ctx, cmd);
    }

I'll skip over the pre-processing: it mainly prepares the RemotingContext and seeds some useful state. The real handling happens in dispatchToUserProcessor; let's look:

 private void dispatchToUserProcessor(RemotingContext ctx, RpcRequestCommand cmd) {
        final int id = cmd.getId();
        final byte type = cmd.getType();
        // processor here must not be null, for it have been checked before
        UserProcessor processor = ctx.getUserProcessor(cmd.getRequestClass()); // fetch the user-registered processor

        ClassLoader classLoader = null;
        try {
            ClassLoader bizClassLoader = processor.getBizClassLoader();
            if (bizClassLoader != null) {
                classLoader = Thread.currentThread().getContextClassLoader();
                Thread.currentThread().setContextClassLoader(bizClassLoader);
            }

            if (processor instanceof AsyncUserProcessor
                || processor instanceof AsyncMultiInterestUserProcessor) {
                try {
                    processor.handleRequest(
                        processor.preHandleRequest(ctx, cmd.getRequestObject()),
                        new RpcAsyncContext(ctx, cmd, this), cmd.getRequestObject());
                } catch (RejectedExecutionException e) {
                    logger
                        .warn("RejectedExecutionException occurred when do ASYNC process in RpcRequestProcessor");
                    sendResponseIfNecessary(ctx, type, this.getCommandFactory()
                        .createExceptionResponse(id, ResponseStatus.SERVER_THREADPOOL_BUSY));
                } catch (Throwable t) {
                    String errMsg = "AYSNC process rpc request failed in RpcRequestProcessor, id="
                                    + id;
                    logger.error(errMsg, t);
                    sendResponseIfNecessary(ctx, type, this.getCommandFactory()
                        .createExceptionResponse(id, t, errMsg));
                }
            } else {
                try {
                    Object responseObject = processor.handleRequest(
                        processor.preHandleRequest(ctx, cmd.getRequestObject()),
                        cmd.getRequestObject());

                    sendResponseIfNecessary(ctx, type,
                        this.getCommandFactory().createResponse(responseObject, cmd));
                } catch (RejectedExecutionException e) {
                    logger
                        .warn("RejectedExecutionException occurred when do SYNC process in RpcRequestProcessor");
                    sendResponseIfNecessary(ctx, type, this.getCommandFactory()
                        .createExceptionResponse(id, ResponseStatus.SERVER_THREADPOOL_BUSY));
                } catch (Throwable t) {
                    String errMsg = "SYNC process rpc request failed in RpcRequestProcessor, id="
                                    + id;
                    logger.error(errMsg, t);
                    sendResponseIfNecessary(ctx, type, this.getCommandFactory()
                        .createExceptionResponse(id, t, errMsg));
                }
            }
        } finally {
            if (classLoader != null) {
                Thread.currentThread().setContextClassLoader(classLoader);
            }
        }
    }

The code is straightforward at this point. For an async processor (AsyncUserProcessor or AsyncMultiInterestUserProcessor), handleRequest is called with an RpcAsyncContext, and it is up to the user to send the response later; for a sync processor, the return value of handleRequest is wrapped by the CommandFactory and sent back immediately via sendResponseIfNecessary. In both branches preHandleRequest runs first, and a RejectedExecutionException or any other Throwable is converted into an exception response. And that, finally, is where the UserProcessor we registered back in Listing 1 gets invoked.
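
For completeness, the async branch above expects an AsyncUserProcessor, whose handleRequest receives an AsyncContext for sending the response later. A minimal sketch (the class name and threading choice are illustrative):

    public class MyAsyncServerUserProcessor extends AsyncUserProcessor<RequestBody> {

        @Override
        public void handleRequest(BizContext bizCtx, AsyncContext asyncCtx, RequestBody request) {
            // hand the work to another thread; respond whenever the result is ready
            new Thread(() -> asyncCtx.sendResponse("echo: " + request.getMsg())).start();
        }

        @Override
        public String interest() {
            return RequestBody.class.getName();
        }
    }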

To close, a quick recap of what SOFABolt is. SOFABolt is a network communication framework built on Netty by Ant Financial. Netty exists so that Java programmers can spend their energy on network-based business logic rather than on low-level NIO details and hard-to-debug network problems; SOFABolt exists so that middleware developers can spend their energy on product features rather than reinventing the communication-framework wheel over and over. The name Bolt comes from the Disney movie Bolt; it is a lightweight, easy-to-use, high-performance, and easily extensible communication framework distilled from Netty best practices. Over the years the team has solved many network communication problems in microservices and messaging middleware, and keeps folding those battle-tested solutions into SOFABolt so that any scenario involving network communication can benefit. It already powers many Ant middleware products: microservices (SOFARPC), the message center, distributed transactions, distributed switches, the configuration center, and more.

SOFABolt's basic feature set:

  1. Core communication (remoting-core)
    - efficient network IO and threading models based on Netty
    - connection management (lock-free connection setup, scheduled disconnect, automatic reconnect)
    - basic invocation models (oneway, sync, future, callback)
    - timeout control
    - batch unpacking and batch submission to processors
    - heartbeat and IDLE event handling
  2. Protocol skeleton (protocol-skeleton)
    - commands and command processors
    - encode/decode processors
    - heartbeat trigger
  3. Private protocol implementation: the RPC communication protocol (protocol-implementation)
    - design of the RPC communication protocol
    - flexible control over deserialization timing
    - fail-fast on request-processing timeout
    - user request processors (UserProcessor)
    - duplex communication