How the Dubbo provider receives an invocation request and executes the local method

The previous article briefly walked through how the provider-side protocol class (DubboProtocol) starts a server listening on a port. Here we look at how, once an invocation request arrives, the framework finds the local service to execute.

Invoker

In one sentence: the Invoker interface is the lowest-level abstraction a service is invoked through, and it has two methods:

public interface Invoker<T> extends Node {

    // get the interface exposed by this service
    Class<T> getInterface();

    // perform an invocation against this service
    Result invoke(Invocation invocation) throws RpcException;
}

Keep one thing in mind (otherwise the rest gets confusing):

On the provider side, the Invoker is a subclass of AbstractProxyInvoker (in fact an anonymous class, obtained through the method below):

com.alibaba.dubbo.rpc.ProxyFactory#getInvoker

On the consumer side, the Invoker is a DubboInvoker (taking the dubbo protocol as the example).
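
A hedged sketch of what that looks like, assuming the default JavassistProxyFactory (reproduced from memory, so details may differ between Dubbo versions):

public <T> Invoker<T> getInvoker(T proxy, Class<T> type, URL url) {
    // a Wrapper is generated for the service class so the call can be dispatched
    // without per-invocation reflection
    final Wrapper wrapper = Wrapper.getWrapper(
            proxy.getClass().getName().indexOf('$') < 0 ? proxy.getClass() : type);
    return new AbstractProxyInvoker<T>(proxy, type, url) {
        @Override
        protected Object doInvoke(T proxy, String methodName,
                                  Class<?>[] parameterTypes,
                                  Object[] arguments) throws Throwable {
            // delegate straight to the local service instance
            return wrapper.invokeMethod(proxy, methodName, parameterTypes, arguments);
        }
    };
}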

Code entry point

To see how the Dubbo provider handles an invocation request, I think the natural code entry point is the NettyHandler.

From the previous article we know the netty server is created in the export method of DubboProtocol:

public <T> Exporter<T> export(Invoker<T> invoker) throws RpcException {
        URL url = invoker.getUrl();

        // export service.
        String key = serviceKey(url);

        // create the DubboExporter
        DubboExporter<T> exporter = new DubboExporter<T>(invoker, key, exporterMap);

        // cache the exporter in exporterMap
        exporterMap.put(key, exporter);

        //export an stub service for dispatching event
        Boolean isStubSupportEvent = url.getParameter(Constants.STUB_EVENT_KEY, Constants.DEFAULT_STUB_EVENT);
        Boolean isCallbackservice = url.getParameter(Constants.IS_CALLBACK_SERVICE, false);
        if (isStubSupportEvent && !isCallbackservice) {
            String stubServiceMethods = url.getParameter(Constants.STUB_EVENT_METHODS_KEY);
            if (stubServiceMethods == null || stubServiceMethods.length() == 0) {
                if (logger.isWarnEnabled()) {
                    logger.warn(new IllegalStateException("consumer [" + url.getParameter(Constants.INTERFACE_KEY) +
                            "], has set stubproxy support event ,but no stub methods founded."));
                }
            } else {
                stubServiceMethodsMap.put(url.getServiceKey(), stubServiceMethods);
            }
        }
        // check whether a netty server is already running; if not, create and start one
        openServer(url);

        return exporter;
    }

The code that creates the server:

    private ExchangeServer createServer(URL url) {
        // send readonly event when server closes, it's enabled by default
        url = url.addParameterIfAbsent(Constants.CHANNEL_READONLYEVENT_SENT_KEY, Boolean.TRUE.toString());
        // enable heartbeat by default
        url = url.addParameterIfAbsent(Constants.HEARTBEAT_KEY, String.valueOf(Constants.DEFAULT_HEARTBEAT));
        String str = url.getParameter(Constants.SERVER_KEY, Constants.DEFAULT_REMOTING_SERVER);

        if (str != null && str.length() > 0 && !ExtensionLoader.getExtensionLoader(Transporter.class).hasExtension(str))
            throw new RpcException("Unsupported server type: " + str + ", url: " + url);

        url = url.addParameter(Constants.CODEC_KEY, DubboCodec.NAME);
        ExchangeServer server;
        try {

            // Note the requestHandler object passed in here: this is the handler the framework
            // registers with netty; the following code only wraps it further
            // (HeaderExchangeHandler -> DecodeHandler).
            // The wrapping happens in com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchanger#bind
            server = Exchangers.bind(url, requestHandler);
        } catch (RemotingException e) {
            throw new RpcException("Fail to start server(url: " + url + ") " + e.getMessage(), e);
        }
        str = url.getParameter(Constants.CLIENT_KEY);
        if (str != null && str.length() > 0) {
            Set<String> supportedTypes = ExtensionLoader.getExtensionLoader(Transporter.class).getSupportedExtensions();
            if (!supportedTypes.contains(str)) {
                throw new RpcException("Unsupported client type: " + str);
            }
        }
        return server;
    }
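
For reference, a sketch of that wrapping as I recall it from HeaderExchanger (treat it as an approximation rather than the exact source):

public class HeaderExchanger implements Exchanger {

    public static final String NAME = "header";

    public ExchangeServer bind(URL url, ExchangeHandler handler) throws RemotingException {
        // the ExchangeHandler is decorated before it reaches the transport layer:
        // HeaderExchangeHandler does request/response bookkeeping, DecodeHandler decodes
        // the message body, and Transporters.bind hands the result to the Transporter SPI
        return new HeaderExchangeServer(
                Transporters.bind(url, new DecodeHandler(new HeaderExchangeHandler(handler))));
    }
}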

The core class that handles invocations

At the lowest level, the invocation request is handled by an ExchangeHandler object, i.e. the requestHandler registered with netty in the code above (after it has gone through several layers of wrapping):

private ExchangeHandler requestHandler = new ExchangeHandlerAdapter() { ... };

The key method is com.alibaba.dubbo.remoting.exchange.support.ExchangeHandlerAdapter#reply, which this anonymous adapter overrides.

It first calls getInvoker to locate an Invoker:

Invoker<?> getInvoker(Channel channel, Invocation inv) throws RemotingException

The Invoker comes from a DubboExporter; from the export code above we know DubboProtocol already cached the DubboExporter in exporterMap when export ran.
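
A simplified sketch of that lookup, in the shape of DubboProtocol#getInvoker (stub/callback handling omitted; the serviceKey(...) helper comes from AbstractProtocol):

Invoker<?> getInvoker(Channel channel, Invocation inv) throws RemotingException {
    int port = channel.getLocalAddress().getPort();
    String path = inv.getAttachments().get(Constants.PATH_KEY);
    // rebuild the same service key that export() used when it filled exporterMap
    String serviceKey = serviceKey(port, path,
            inv.getAttachments().get(Constants.VERSION_KEY),
            inv.getAttachments().get(Constants.GROUP_KEY));
    DubboExporter<?> exporter = (DubboExporter<?>) exporterMap.get(serviceKey);
    if (exporter == null) {
        throw new RemotingException(channel,
                "Not found exported service: " + serviceKey + " in " + exporterMap.keySet());
    }
    return exporter.getInvoker();
}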

 

The call chain is shown in the diagram below.

From it we can see that the handler that ultimately receives messages from netty is of type com.alibaba.dubbo.remoting.transport.netty.NettyHandler. So how did the com.alibaba.dubbo.remoting.exchange.ExchangeHandler we registered turn into a NettyHandler? Because the framework wraps the ExchangeHandler in multiple layers (the code is quite convoluted).

In the previous article we saw that the code that finally starts the netty server (taking the netty3-based implementation as the example; there is a netty4 implementation as well) is

com.alibaba.dubbo.remoting.transport.netty.NettyServer#doOpen

I expected this to be revealing, and indeed it is:

@Override
    protected void doOpen() throws Throwable {
        NettyHelper.setNettyLoggerFactory();
        ExecutorService boss = Executors.newCachedThreadPool(new NamedThreadFactory("NettyServerBoss", true));
        ExecutorService worker = Executors.newCachedThreadPool(new NamedThreadFactory("NettyServerWorker", true));
        ChannelFactory channelFactory = new NioServerSocketChannelFactory(boss, worker, getUrl().getPositiveParameter(Constants.IO_THREADS_KEY, Constants.DEFAULT_IO_THREADS));
        bootstrap = new ServerBootstrap(channelFactory);

        // here the handler we passed in (already wrapped by the framework)
        // gets wrapped once more, into a NettyHandler
        final NettyHandler nettyHandler = new NettyHandler(getUrl(), this);
        channels = nettyHandler.getChannels();
        // https://issues.jboss.org/browse/NETTY-365
        // https://issues.jboss.org/browse/NETTY-379
        // final Timer timer = new HashedWheelTimer(new NamedThreadFactory("NettyIdleTimer", true));
        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                NettyCodecAdapter adapter = new NettyCodecAdapter(getCodec(), getUrl(), NettyServer.this);
                ChannelPipeline pipeline = Channels.pipeline();
                /*int idleTimeout = getIdleTimeout();
                if (idleTimeout > 10000) {
                    pipeline.addLast("timer", new IdleStateHandler(timer, idleTimeout / 1000, 0, 0));
                }*/
                pipeline.addLast("decoder", adapter.getDecoder());
                pipeline.addLast("encoder", adapter.getEncoder());
                pipeline.addLast("handler", nettyHandler);
                return pipeline;
            }
        });
        // bind
        channel = bootstrap.bind(getBindAddress());
    }

So our handler is indeed wrapped into a NettyHandler.

NettyHandler->AllChannelHandler

From the call chain above we can see that once NettyHandler receives the netty callback, it eventually delegates to AllChannelHandler.

AllChannelHandler handles the event by submitting a Runnable to a thread pool to process the Channel; this is the entry point of Dubbo's provider-side thread model:

public void received(Channel channel, Object message) throws RemotingException {
        ExecutorService cexecutor = getExecutorService();
        try {

            // the core step: submit the event to the business thread pool as a new task
            cexecutor.execute(new ChannelEventRunnable(channel, handler, ChannelState.RECEIVED, message));
        } catch (Throwable t) {
            //TODO A temporary solution to the problem that the exception information can not be sent to the opposite end after the thread pool is full. Need a refactoring
            //fix The thread pool is full, refuses to call, does not return, and causes the consumer to wait for time out
        	if(message instanceof Request && t instanceof RejectedExecutionException){
        		Request request = (Request)message;
        		if(request.isTwoWay()){
        			String msg = "Server side(" + url.getIp() + "," + url.getPort() + ") threadpool is exhausted ,detail msg:" + t.getMessage();
        			Response response = new Response(request.getId(), request.getVersion());
        			response.setStatus(Response.SERVER_THREADPOOL_EXHAUSTED_ERROR);
        			response.setErrorMessage(msg);
        			channel.send(response);
        			return;
        		}
        	}
            throw new ExecutionException(message, channel, getClass() + " error when process received event .", t);
        }
    }
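
For completeness, a rough, simplified stand-in for the submitted ChannelEventRunnable (only the RECEIVED branch shown; the real class also handles CONNECTED/DISCONNECTED/SENT/CAUGHT and logs failures):

import com.alibaba.dubbo.remoting.Channel;
import com.alibaba.dubbo.remoting.ChannelHandler;

// simplified stand-in for com.alibaba.dubbo.remoting.transport.dispatcher.ChannelEventRunnable
public class ChannelEventRunnableSketch implements Runnable {

    public enum ChannelState { CONNECTED, DISCONNECTED, SENT, RECEIVED, CAUGHT }

    private final Channel channel;
    private final ChannelHandler handler;
    private final ChannelState state;
    private final Object message;

    public ChannelEventRunnableSketch(Channel channel, ChannelHandler handler,
                                      ChannelState state, Object message) {
        this.channel = channel;
        this.handler = handler;
        this.state = state;
        this.message = message;
    }

    public void run() {
        if (state == ChannelState.RECEIVED) {
            try {
                // now on a business thread: continue down the wrapped handler chain
                // (DecodeHandler -> HeaderExchangeHandler -> our ExchangeHandler)
                handler.received(channel, message);
            } catch (Exception e) {
                // the real implementation logs a warning here instead of rethrowing
            }
        }
        // the other states are forwarded to handler.connected/disconnected/sent/caught
    }
}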

So when does our handler get wrapped into an AllChannelHandler? Searching for "new AllChannelHandler" gives the answer:

public class AllDispatcher implements Dispatcher {

    public static final String NAME = "all";

    public ChannelHandler dispatch(ChannelHandler handler, URL url) {
        return new AllChannelHandler(handler, url);
    }

}

Debugging the startup flow confirms that this wrapping does happen inside the openServer method.

Our handler is also wrapped into several other handlers; in design-pattern terms this is the decorator pattern, the point being to give our handler richer behavior. (The code is convoluted, so I won't debug every step here; the key code is in the constructor of NettyServer:

public NettyServer(URL url, ChannelHandler handler) throws RemotingException {
    super(url, ChannelHandlers.wrap(handler, ExecutorUtil.setThreadName(url, SERVER_THREAD_POOL_NAME)));//
}

ChannelHandlers.wrap is the core method here.)
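
As a hedged sketch of that core method (from memory; the decorator order is the interesting part):

// inside com.alibaba.dubbo.remoting.transport.dispatcher.ChannelHandlers
protected ChannelHandler wrapInternal(ChannelHandler handler, URL url) {
    // decorator chain, outermost first:
    // MultiMessageHandler -> HeartbeatHandler -> AllChannelHandler (via the Dispatcher SPI,
    // default "all") -> our handler
    return new MultiMessageHandler(
            new HeartbeatHandler(
                    ExtensionLoader.getExtensionLoader(Dispatcher.class)
                            .getAdaptiveExtension()
                            .dispatch(handler, url)));
}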

 

Knowing the entry point is a big convenience; it will help later when modifying the source or digging deeper into how it works.

Thread pool

Next, let's look at the design of the thread pool that runs the business logic. AllChannelHandler extends WrappedChannelHandler, and the executor is initialized in the constructor. Debugging shows that the openServer flow invokes the WrappedChannelHandler constructor, and that is where the thread pool gets created:

executor = (ExecutorService) ExtensionLoader.getExtensionLoader(ThreadPool.class).getAdaptiveExtension().getExecutor(url);

Notice that SPI is used again here. That makes things easy: open the ThreadPool interface, look at its implementations, and check the name on the @SPI annotation to see which implementation is the default.

Open the com.alibaba.dubbo.common.threadpool.ThreadPool SPI configuration file and check which implementation class "fixed" maps to.
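
Roughly, from memory (so treat the annotation and the file path/entries as approximations), it looks like this:

@SPI("fixed")
public interface ThreadPool {
    @Adaptive({Constants.THREADPOOL_KEY})
    Executor getExecutor(URL url);
}

# META-INF/dubbo/internal/com.alibaba.dubbo.common.threadpool.ThreadPool
fixed=com.alibaba.dubbo.common.threadpool.support.fixed.FixedThreadPool
cached=com.alibaba.dubbo.common.threadpool.support.cached.CachedThreadPool
limited=com.alibaba.dubbo.common.threadpool.support.limited.LimitedThreadPool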

OK, let's open com.alibaba.dubbo.common.threadpool.support.fixed.FixedThreadPool:

public class FixedThreadPool implements ThreadPool {

    public Executor getExecutor(URL url) {
        String name = url.getParameter(Constants.THREAD_NAME_KEY, Constants.DEFAULT_THREAD_NAME);
        int threads = url.getParameter(Constants.THREADS_KEY, Constants.DEFAULT_THREADS);
        int queues = url.getParameter(Constants.QUEUES_KEY, Constants.DEFAULT_QUEUES);
        return new ThreadPoolExecutor(threads, threads, 0, TimeUnit.MILLISECONDS,
                queues == 0 ? new SynchronousQueue<Runnable>() : (queues < 0 ? new LinkedBlockingQueue<Runnable>() : new LinkedBlockingQueue<Runnable>(queues)),
                new NamedThreadFactory(name, true), new AbortPolicyWithReport(name, url));
    }
}

This creates a thread pool whose core size equals its maximum size. The sizes can be set through configuration, for example:

    <dubbo:provider retries="0" threads="33" queues="11" />

threads is the (fixed) number of threads and queues is the capacity of the blocking queue. If nothing is configured, the defaults are 200 threads and queues=0, which (as the code above shows) means a SynchronousQueue with no buffering; a negative queues value gives an unbounded LinkedBlockingQueue, and a positive value a bounded one.
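
As a concrete illustration (hypothetical values, using the defaults threads=200 and queues=0; the thread-name prefix shown is an assumption, the real one comes from THREAD_NAME_KEY), the pool created above is equivalent to:

// hypothetical illustration of the default FixedThreadPool executor; `url` is the provider URL
ExecutorService executor = new ThreadPoolExecutor(
        200, 200,                           // core == max: a fixed-size pool
        0, TimeUnit.MILLISECONDS,
        new SynchronousQueue<Runnable>(),   // queues == 0: direct hand-off, no buffering
        new NamedThreadFactory("DubboServerHandler", true),
        new AbortPolicyWithReport("DubboServerHandler", url));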