Dubbo Study Notes (18) - Service Invocation [4] - The Service Consumer Starts the Netty Client; the Cluster Extension Point

The Service Consumer Starts the Netty Client

This happens during service referencing: while creating the DubboInvoker, the Netty client is started.
The relevant code:

public abstract class AbstractProtocol implements Protocol {
    @Override
    public <T> Invoker<T> refer(Class<T> type, URL url) throws RpcException {
        // Async-to-sync Invoker; type is the service interface, url the provider address
        // DubboInvoker is asynchronous; AsyncToSyncInvoker wraps it to behave synchronously
        return new AsyncToSyncInvoker<>(protocolBindingRefer(type, url));
    }

    protected abstract <T> Invoker<T> protocolBindingRefer(Class<T> type, URL url) throws RpcException;
}
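The async-to-sync wrapping can be illustrated with a small stand-alone sketch using plain CompletableFuture (not Dubbo's actual classes; class and method names here are invented): the async call returns a future, and the sync wrapper simply blocks on it before returning.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the async-to-sync idea behind AsyncToSyncInvoker.
public class AsyncToSyncSketch {

    // Hypothetical async "invoker": the result is produced on another thread.
    static CompletableFuture<String> asyncInvoke() {
        return CompletableFuture.supplyAsync(() -> "result");
    }

    // The sync wrapper blocks until the future completes (or times out),
    // which is the essence of wrapping a DubboInvoker in AsyncToSyncInvoker.
    static String syncInvoke() throws Exception {
        return asyncInvoke().get(1000, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(syncInvoke()); // prints "result"
    }
}
```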

The refer method is then invoked on DubboProtocol. DubboProtocol does not define refer itself; it inherits it from AbstractProtocol, whose refer delegates to the abstract protocolBindingRefer method, which the subclass DubboProtocol implements:

    @Override
    public <T> Invoker<T> protocolBindingRefer(Class<T> serviceType, URL url) throws RpcException {
        // When DubboInvoker sends a request, it round-robins over clients to send the data
        DubboInvoker<T> invoker = new DubboInvoker<T>(serviceType, url, getClients(url), invokers);
        invokers.add(invoker);

        return invoker;
    }

DubboProtocol#getClients

    private ExchangeClient[] getClients(URL url) {
        boolean useShareConnect = false;
        int connections = url.getParameter(CONNECTIONS_KEY, 0);

        List<ReferenceCountExchangeClient> shareClients = null;
        if (connections == 0) {
            useShareConnect = true;

            /*
             * XML configuration should have a higher priority than properties.
             */
            String shareConnectionsStr = url.getParameter(SHARE_CONNECTIONS_KEY, (String) null);
            connections = Integer.parseInt(StringUtils.isBlank(shareConnectionsStr) ? ConfigUtils.getProperty(SHARE_CONNECTIONS_KEY,
                    DEFAULT_SHARE_CONNECTIONS) : shareConnectionsStr);
            shareClients = getSharedClient(url, connections);
        }
        //... some code omitted
        return clients;
    }
  1. Read the connections parameter; when it is not set, it defaults to 0.
  2. Read the shareconnections parameter; when it is not set, shareConnectionsStr is null.
  3. If shareConnectionsStr is blank, connections falls back to the default of 1.
  4. Call getSharedClient, which creates the Netty client.
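The fallback order in steps 1-3 can be sketched without Dubbo (resolveConnections and its arguments are invented for illustration): URL parameter first, then the configuration property, then the default of 1.

```java
// Hypothetical re-creation of the fallback order in getClients():
// URL parameter -> configuration property -> default "1".
public class ShareConnectionsFallback {
    static final String DEFAULT_SHARE_CONNECTIONS = "1";

    static int resolveConnections(String urlParam, String configProperty) {
        // Mirrors: isBlank(shareConnectionsStr) ? getProperty(...) : shareConnectionsStr
        String s = (urlParam == null || urlParam.trim().isEmpty())
                ? ((configProperty == null || configProperty.trim().isEmpty())
                        ? DEFAULT_SHARE_CONNECTIONS : configProperty)
                : urlParam;
        return Integer.parseInt(s);
    }

    public static void main(String[] args) {
        System.out.println(resolveConnections(null, null)); // default -> 1
        System.out.println(resolveConnections(null, "4"));  // config property -> 4
        System.out.println(resolveConnections("2", "4"));   // URL parameter wins -> 2
    }
}
```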
    /**
     * Get shared connection
     *
     * @param url
     * @param connectNum connectNum must be greater than or equal to 1
     */
    private List<ReferenceCountExchangeClient> getSharedClient(URL url, int connectNum) {
        // This method returns shareable clients: either already created, or built here.
        // Existing clients are cached in referenceClientMap, keyed by the target service's ip:port
        String key = url.getAddress();
        List<ReferenceCountExchangeClient> clients = referenceClientMap.get(key);
        // Check whether clients already exist for the referenced service's ip:port
        if (checkClientCanUse(clients)) {
            // If every client is usable, increment each client's counter,
            // which records how many times these clients are referenced
            batchClientRefIncr(clients);
            return clients;
        }
        locks.putIfAbsent(key, new Object());
        synchronized (locks.get(key)) {
            clients = referenceClientMap.get(key);
			// some code omitted
            if (CollectionUtils.isEmpty(clients)) {
                // If clients is empty, build connectNum clients
                clients = buildReferenceCountExchangeClientList(url, connectNum);
                referenceClientMap.put(key, clients);
            } else {
            	// some code omitted
            }
            return clients;
        }
    }
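The caching-plus-per-key-locking pattern in getSharedClient can be sketched in isolation (the class and method names below are invented for illustration): one lock object per "ip:port" key so that clients for different addresses can be built in parallel, and the cache is re-checked inside the critical section so the clients for one address are only built once.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the per-address locking pattern used in getSharedClient().
public class PerKeyCacheSketch {
    private final ConcurrentMap<String, Object> locks = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, List<String>> cache = new ConcurrentHashMap<>();
    int buildCount; // counts how many times the "clients" were actually built

    List<String> get(String key) {
        List<String> v = cache.get(key);
        if (v != null) {
            return v; // fast path: already built
        }
        locks.putIfAbsent(key, new Object());
        synchronized (locks.get(key)) {
            v = cache.get(key); // re-check inside the lock
            if (v == null) {
                buildCount++;
                v = Collections.singletonList("client-for-" + key);
                cache.put(key, v);
            }
            return v;
        }
    }

    public static void main(String[] args) {
        PerKeyCacheSketch s = new PerKeyCacheSketch();
        s.get("10.0.0.1:20880");
        s.get("10.0.0.1:20880"); // second call reuses the cached clients
        System.out.println(s.buildCount); // prints 1
    }
}
```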

buildReferenceCountExchangeClientList is then called to create the clients.

DubboProtocol#buildReferenceCountExchangeClientList

    private List<ReferenceCountExchangeClient> buildReferenceCountExchangeClientList(URL url, int connectNum) {
        List<ReferenceCountExchangeClient> clients = new ArrayList<>();

        for (int i = 0; i < connectNum; i++) {
            clients.add(buildReferenceCountExchangeClient(url));
        }

        return clients;
    }

It loops connectNum times (1 by default), calling buildReferenceCountExchangeClient to create each client.

DubboProtocol#buildReferenceCountExchangeClient

  1. Call initClient to create the client;
  2. Wrap the client instance in a ReferenceCountExchangeClient.
    private ReferenceCountExchangeClient buildReferenceCountExchangeClient(URL url) {
        // Create an ExchangeClient
        ExchangeClient exchangeClient = initClient(url);

        // Wrap it in a ReferenceCountExchangeClient
        return new ReferenceCountExchangeClient(exchangeClient);
    }
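The reference-counting idea behind ReferenceCountExchangeClient can be sketched as follows (a simplified stand-in, not Dubbo's real class): each sharing referrer bumps a counter, and the underlying client is only closed when the last reference is released.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the idea behind ReferenceCountExchangeClient (names simplified):
// several referring services share one client; the underlying connection is
// only closed when the last reference releases it.
public class RefCountedClientSketch {

    static class CountingClient {
        private final AtomicInteger refs = new AtomicInteger(1);
        private boolean underlyingClosed;

        // Called when another service starts sharing this client.
        void retain() { refs.incrementAndGet(); }

        // Only close the real client when the last reference goes away.
        void close() {
            if (refs.decrementAndGet() <= 0) {
                underlyingClosed = true; // stands in for closing the real TCP client
            }
        }

        boolean isUnderlyingClosed() { return underlyingClosed; }
    }

    public static void main(String[] args) {
        CountingClient shared = new CountingClient();
        shared.retain(); // a second referrer shares the client
        shared.close();  // first release: connection stays open
        System.out.println(shared.isUnderlyingClosed()); // false
        shared.close();  // last release: now the real client is closed
        System.out.println(shared.isUnderlyingClosed()); // true
    }
}
```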

DubboProtocol#initClient

  1. Read the heartbeat, codec, client-type and related parameters;
  2. Call Exchangers.connect(url, requestHandler) to create the client.
    private ExchangeClient initClient(URL url) {

        // client type setting.
        // Read the configured client type, "netty" by default
        String str = url.getParameter(CLIENT_KEY, url.getParameter(SERVER_KEY, DEFAULT_REMOTING_CLIENT));
        // Codec
        url = url.addParameter(CODEC_KEY, DubboCodec.NAME);
        // enable heartbeat by default
        // Heartbeat, 60 * 1000 ms by default, i.e. one heartbeat every 60 seconds
        url = url.addParameterIfAbsent(HEARTBEAT_KEY, String.valueOf(DEFAULT_HEARTBEAT));
		
        ExchangeClient client;
        try {
            // connection should be lazy
            if (url.getParameter(LAZY_CONNECT_KEY, false)) {
                // With lazy connect configured, create a LazyConnectExchangeClient
                client = new LazyConnectExchangeClient(url, requestHandler);
            } else {
                // Connect now; this connection is reused later when methods are invoked
                client = Exchangers.connect(url, requestHandler);  // connect
            }

        } catch (RemotingException e) {
            // exception handling omitted
        }

        return client;
    }
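The lazy-connect branch can be illustrated with a minimal stand-in for LazyConnectExchangeClient (class and method names below are invented): the connection is established on the first request rather than at construction time.

```java
import java.util.function.Supplier;

// Sketch of the lazy-connect idea behind LazyConnectExchangeClient.
public class LazyConnectSketch {

    static class LazyClient {
        private final Supplier<String> connector; // stands in for the real connect
        private String connection;                // null until first use

        LazyClient(Supplier<String> connector) { this.connector = connector; }

        String request(String payload) {
            if (connection == null) {
                connection = connector.get(); // connect on first request only
            }
            return connection + ":" + payload;
        }

        boolean isConnected() { return connection != null; }
    }

    public static void main(String[] args) {
        LazyClient client = new LazyClient(() -> "conn");
        System.out.println(client.isConnected());    // false: not connected yet
        System.out.println(client.request("hello")); // prints "conn:hello"
        System.out.println(client.isConnected());    // true after the first request
    }
}
```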

Exchangers.connect(url, requestHandler)

  1. Call getExchanger(url) to obtain an Exchanger instance, a HeaderExchanger by default;
  2. Call HeaderExchanger#connect to connect to the provider.
    public static ExchangeClient connect(URL url, ExchangeHandler handler) throws RemotingException {

        url = url.addParameterIfAbsent(Constants.CODEC_KEY, "exchange");
        // Obtain a HeaderExchanger and connect with it
        return getExchanger(url).connect(url, handler);
    }

HeaderExchanger#connect

It returns an ExchangeClient instance: Transporters.connect produces a Client, which is then passed to the HeaderExchangeClient constructor.

public class HeaderExchanger implements Exchanger {

    public static final String NAME = "header";

    @Override
    public ExchangeClient connect(URL url, ExchangeHandler handler) throws RemotingException {
        // Connect via NettyTransporter
        // Both connect and bind install a DecodeHandler; on the consumer side it decodes the InputStream into an AppResponse object
        return new HeaderExchangeClient(Transporters.connect(url, new DecodeHandler(new HeaderExchangeHandler(handler))), true);
    }

    @Override
    public ExchangeServer bind(URL url, ExchangeHandler handler) throws RemotingException {
        // The handler is wrapped in two layers; each layer handles a different part of processing a request
        // Both connect and bind install a DecodeHandler; on the provider side it decodes the InputStream into an RpcInvocation object
        return new HeaderExchangeServer(Transporters.bind(url, new DecodeHandler(new HeaderExchangeHandler(handler))));
    }

}

Transporters#connect

  1. Call getTransporter, which uses the SPI mechanism to obtain the default Transporter, a NettyTransporter;
  2. Call NettyTransporter#connect to create the NettyClient.
    public static Client connect(URL url, ChannelHandler... handlers) throws RemotingException {
        // Collapse the handlers array into a single handler (a ChannelHandlerDispatcher when several are given)
        ChannelHandler handler = handlers.length == 1 ? handlers[0] : new ChannelHandlerDispatcher(handlers);
        // NettyTransporter
        return getTransporter().connect(url, handler);
    }

NettyTransporter#connect

It creates and returns a NettyClient instance:

public class NettyTransporter implements Transporter {

    public static final String NAME = "netty";

    @Override
    public Server bind(URL url, ChannelHandler listener) throws RemotingException {
        return new NettyServer(url, listener);
    }

    @Override
    public Client connect(URL url, ChannelHandler listener) throws RemotingException {
        return new NettyClient(url, listener);
    }

}

NettyClient

Its constructor delegates to the superclass constructor:

public class NettyClient extends AbstractClient {
    public NettyClient(final URL url, final ChannelHandler handler) throws RemotingException {
    	super(url, wrapChannelHandler(url, handler));
    }
}

AbstractClient

  1. Invoke the superclass constructor;
  2. Read the send.reconnect parameter;
  3. Call doOpen();
  4. Obtain the consumer-side thread pool.
public abstract class AbstractClient extends AbstractEndpoint implements Client {

    protected static final String CLIENT_THREAD_POOL_NAME = "DubboClientHandler";
    private final Lock connectLock = new ReentrantLock();
    private final boolean needReconnect;
    protected volatile ExecutorService executor;

    public AbstractClient(URL url, ChannelHandler handler) throws RemotingException {
        super(url, handler);

        needReconnect = url.getParameter(Constants.SEND_RECONNECT_KEY, false);

        try {
            doOpen();
        } catch (Throwable t) {
            // exception handling omitted
        }

        // Obtain the consumer-side thread pool
        executor = (ExecutorService) ExtensionLoader.getExtensionLoader(DataStore.class)
                .getDefaultExtension().get(CONSUMER_SIDE, Integer.toString(url.getPort()));

        ExtensionLoader.getExtensionLoader(DataStore.class)
                .getDefaultExtension().remove(CONSUMER_SIDE, Integer.toString(url.getPort()));
    }

AbstractClient#doOpen()

  1. Create a Bootstrap instance representing the client;
  2. Set the client options, e.g. keep-alive, and the worker thread pool nioEventLoopGroup;
  3. Install the handlers.
    @Override
    protected void doOpen() throws Throwable {
        final NettyClientHandler nettyClientHandler = new NettyClientHandler(getUrl(), this);
        bootstrap = new Bootstrap();
        bootstrap.group(nioEventLoopGroup)
                .option(ChannelOption.SO_KEEPALIVE, true)
                .option(ChannelOption.TCP_NODELAY, true)
                .option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
                //.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, getTimeout())
                .channel(NioSocketChannel.class);

        bootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, Math.max(3000, getConnectTimeout()));
        bootstrap.handler(new ChannelInitializer() {

            @Override
            protected void initChannel(Channel ch) throws Exception {
                int heartbeatInterval = UrlUtils.getHeartbeat(getUrl());
                NettyCodecAdapter adapter = new NettyCodecAdapter(getCodec(), getUrl(), NettyClient.this);
                ch.pipeline()//.addLast("logging",new LoggingHandler(LogLevel.INFO))//for debug
                        .addLast("decoder", adapter.getDecoder())
                        .addLast("encoder", adapter.getEncoder())
                        .addLast("client-idle-handler", new IdleStateHandler(heartbeatInterval, 0, 0, MILLISECONDS))
                        .addLast("handler", nettyClientHandler);
                String socksProxyHost = ConfigUtils.getProperty(SOCKS_PROXY_HOST);
                if(socksProxyHost != null) {
                    int socksProxyPort = Integer.parseInt(ConfigUtils.getProperty(SOCKS_PROXY_PORT, DEFAULT_SOCKS_PROXY_PORT));
                    Socks5ProxyHandler socks5ProxyHandler = new Socks5ProxyHandler(new InetSocketAddress(socksProxyHost, socksProxyPort));
                    ch.pipeline().addFirst(socks5ProxyHandler);
                }
            }
        });
    }

At this point the NettyClient has been created. It is wrapped in a HeaderExchangeClient instance, which is in turn wrapped in a ReferenceCountExchangeClient.
The handler wiring is exactly the same as on the provider side.

The Cluster Extension Point

  • When there are multiple providers, they are organized into a cluster that is presented to the consumer as a single provider.
  • Put simply, Cluster provides cluster fault tolerance.
@SPI(FailoverCluster.NAME)
public interface Cluster {

    /**
     * Merge the directory invokers to a virtual invoker.
     *
     * @param <T>
     * @param directory
     * @return cluster invoker
     * @throws RpcException
     */
    @Adaptive
    <T> Invoker<T> join(Directory<T> directory) throws RpcException;

}

The default extension implementation is FailoverCluster.

Cluster Extension Implementations

  • org.apache.dubbo.rpc.cluster.support.FailoverCluster
    Failover: on failure, automatically retry another server. Typically used for read operations, though retries add latency. The retry count (not counting the first call) can be set with retries="2"; the default is 2.
public class FailoverCluster implements Cluster {

    public final static String NAME = "failover";

    @Override
    public <T> Invoker<T> join(Directory<T> directory) throws RpcException {
        return new FailoverClusterInvoker<T>(directory);
    }

}
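The retry loop inside FailoverClusterInvoker can be sketched with plain suppliers standing in for the invoker list from the Directory (names and signatures here are illustrative, not Dubbo's API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

// Minimal sketch of the failover idea: try one provider, and on failure
// retry another, up to retries + 1 attempts in total.
public class FailoverSketch {

    static String invokeWithFailover(List<Supplier<String>> providers, int retries) {
        RuntimeException last = null;
        for (int i = 0; i <= retries && i < providers.size(); i++) {
            try {
                return providers.get(i).get();
            } catch (RuntimeException e) {
                last = e; // remember the failure and try the next provider
            }
        }
        throw last != null ? last : new RuntimeException("no provider available");
    }

    public static void main(String[] args) {
        List<Supplier<String>> providers = Arrays.asList(
                () -> { throw new RuntimeException("provider-1 down"); },
                () -> "ok-from-provider-2");
        // First provider fails, failover retries the second one
        System.out.println(invokeWithFailover(providers, 2)); // prints "ok-from-provider-2"
    }
}
```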

  • org.apache.dubbo.rpc.cluster.support.FailfastCluster
    Failfast: only one call is made, and failure raises an error immediately. Typically used for non-idempotent write operations, such as inserting records.
public class FailfastCluster implements Cluster {

    public final static String NAME = "failfast";

    @Override
    public <T> Invoker<T> join(Directory<T> directory) throws RpcException {
        return new FailfastClusterInvoker<T>(directory);
    }

}

  • org.apache.dubbo.rpc.cluster.support.FailsafeCluster
    Failsafe: exceptions are simply ignored. Typically used for operations such as writing audit logs.
public class FailsafeCluster implements Cluster {

    public final static String NAME = "failsafe";

    @Override
    public <T> Invoker<T> join(Directory<T> directory) throws RpcException {
        return new FailsafeClusterInvoker<T>(directory);
    }

}
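The failsafe behavior can be sketched similarly (an illustrative stand-in, not FailsafeClusterInvoker's actual code): the exception is swallowed and an empty result is returned instead of propagating to the caller.

```java
import java.util.function.Supplier;

// Sketch of the failsafe idea: run the call, and on failure ignore the error,
// returning an empty result instead of throwing.
public class FailsafeSketch {

    static String invokeFailsafe(Supplier<String> call) {
        try {
            return call.get();
        } catch (RuntimeException e) {
            // swallow: audit-log-style calls must never break the caller
            return ""; // empty result stands in for Dubbo's empty RPC result
        }
    }

    public static void main(String[] args) {
        System.out.println(invokeFailsafe(() -> "ok"));                                   // prints "ok"
        System.out.println(invokeFailsafe(() -> { throw new RuntimeException("boom"); })); // prints ""
    }
}
```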

  • org.apache.dubbo.rpc.cluster.support.FailbackCluster
    Failback: failed requests are recorded in the background and re-sent periodically. Typically used for message notification operations.
public class FailbackCluster implements Cluster {

    public final static String NAME = "failback";

    @Override
    public <T> Invoker<T> join(Directory<T> directory) throws RpcException {
        return new FailbackClusterInvoker<T>(directory);
    }

}

  • org.apache.dubbo.rpc.cluster.support.ForkingCluster
    Forking: call several servers in parallel and return as soon as one succeeds. Typically used for read operations with tight latency requirements, at the cost of extra server resources. The maximum parallelism can be set with forks="2".
public class ForkingCluster implements Cluster {

    public final static String NAME = "forking";

    @Override
    public <T> Invoker<T> join(Directory<T> directory) throws RpcException {
        return new ForkingClusterInvoker<T>(directory);
    }

}

  • org.apache.dubbo.rpc.cluster.support.BroadcastCluster

  • Broadcast: call every provider one by one; if any one of them fails, the whole call fails. Typically used to notify all providers to refresh local state such as caches or logs.

  • In broadcast calls, broadcast.fail.percent configures the tolerated proportion of failed nodes; once that proportion is reached, BroadcastClusterInvoker stops calling the remaining nodes and throws an exception.

  • broadcast.fail.percent takes a value between 0 and 100. By default, the exception is only thrown after all calls have failed.

  • broadcast.fail.percent only controls whether the remaining nodes are still called after failures; it does not change the result (any failing node still fails the call).

  • The broadcast.fail.percent parameter takes effect in Dubbo 2.7.10 and later.

  • broadcast.fail.percent=20 means that once 20% of the nodes have failed, an exception is thrown and no further nodes are called.
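The broadcast.fail.percent check described above can be sketched as simple integer arithmetic (shouldStop is an invented helper, not the actual BroadcastClusterInvoker code): stop calling further nodes once failed/total reaches percent/100.

```java
// Sketch of the broadcast.fail.percent threshold; the arithmetic here is an
// illustration of the documented behavior, not the exact Dubbo source.
public class BroadcastFailPercentSketch {

    // true once the proportion of failed nodes reaches failPercent%
    static boolean shouldStop(int failed, int total, int failPercent) {
        return failed * 100 >= total * failPercent;
    }

    public static void main(String[] args) {
        // 10 providers, broadcast.fail.percent = 20: stop after 2 failures
        System.out.println(shouldStop(1, 10, 20)); // false: keep calling
        System.out.println(shouldStop(2, 10, 20)); // true: 20% reached, stop
    }
}
```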

public class BroadcastCluster implements Cluster {

    @Override
    public <T> Invoker<T> join(Directory<T> directory) throws RpcException {
        return new BroadcastClusterInvoker<T>(directory);
    }

}
