Nacos Registry (Part 3): Service Subscription

Nacos Service Listening

Official SDK reference for service listening: https://nacos.io/zh-cn/docs/sdk.html

Client side

  • NacosNamingService.subscribe

The overloads below all funnel into the last method, which does the actual subscription work.

Note the listener registration on changeNotifier.

@Override
public void subscribe(String serviceName, EventListener listener) throws NacosException {
    subscribe(serviceName, new ArrayList<String>(), listener);
}

@Override
public void subscribe(String serviceName, String groupName, EventListener listener) throws NacosException {
    subscribe(serviceName, groupName, new ArrayList<String>(), listener);
}

@Override
public void subscribe(String serviceName, List<String> clusters, EventListener listener) throws NacosException {
    subscribe(serviceName, Constants.DEFAULT_GROUP, clusters, listener);
}
@Override
public void subscribe(String serviceName, String groupName, List<String> clusters, EventListener listener)
    throws NacosException {
    if (null == listener) {
        return;
    }
    String clusterString = StringUtils.join(clusters, ",");
    // Register a listener keyed by group name, service name, and cluster names
    changeNotifier.registerListener(groupName, serviceName, clusterString, listener);
    clientProxy.subscribe(serviceName, groupName, clusterString);
}    
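For reference, the key built from group, service, and clusters uses Nacos's "@@" separator. The sketch below is a hedged approximation; ListenerKey and its methods are illustrative names, not the actual NamingUtils/ServiceInfo code, whose details may vary across versions:

```java
// Hypothetical helper mimicking NamingUtils.getGroupedName and ServiceInfo.getKey.
public class ListenerKey {

    // groupName + "@@" + serviceName, e.g. "DEFAULT_GROUP@@demo"
    public static String groupedName(String serviceName, String groupName) {
        return groupName + "@@" + serviceName;
    }

    // Append the cluster list only when it is non-empty.
    public static String key(String groupedName, String clusters) {
        if (clusters == null || clusters.isEmpty()) {
            return groupedName;
        }
        return groupedName + "@@" + clusters;
    }
}
```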
  • InstancesChangeNotifier.registerListener

An event listener is registered here; whenever the corresponding event fires, it is delivered through this class's onEvent() method.

// Maps each key to the set of EventListeners registered for it
private final Map<String, ConcurrentHashSet<EventListener>> listenerMap = new ConcurrentHashMap<String, ConcurrentHashSet<EventListener>>();
    
private final Object lock = new Object();
public void registerListener(String groupName, String serviceName, String clusters, EventListener listener) {
    // Build the lookup key
    String key = ServiceInfo.getKey(NamingUtils.getGroupedName(serviceName, groupName), clusters);
    ConcurrentHashSet<EventListener> eventListeners = listenerMap.get(key);
    // Double-checked locking (DCL):
    // fetch the listener set; if absent, create and insert one
    if (eventListeners == null) {
        synchronized (lock) {
            eventListeners = listenerMap.get(key);
            if (eventListeners == null) {
                eventListeners = new ConcurrentHashSet<EventListener>();
                listenerMap.put(key, eventListeners);
            }
        }
    }
    eventListeners.add(listener);
}
@Override
public void onEvent(InstancesChangeEvent event) {
    String key = ServiceInfo
        .getKey(NamingUtils.getGroupedName(event.getServiceName(), event.getGroupName()), event.getClusters());
    ConcurrentHashSet<EventListener> eventListeners = listenerMap.get(key);
    if (CollectionUtils.isEmpty(eventListeners)) {
        return;
    }
    for (final EventListener listener : eventListeners) {
        final com.alibaba.nacos.api.naming.listener.Event namingEvent = transferToNamingEvent(event);
        if (listener instanceof AbstractEventListener && ((AbstractEventListener) listener).getExecutor() != null) {
            ((AbstractEventListener) listener).getExecutor().execute(() -> listener.onEvent(namingEvent));
        } else {
            listener.onEvent(namingEvent);
        }
    }
}
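As a design note, the double-checked locking in registerListener above could also be expressed with ConcurrentHashMap.computeIfAbsent, which creates the per-key set atomically on first access. A minimal sketch under that assumption; ListenerRegistry is an illustrative class, with Runnable standing in for EventListener:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch, not Nacos code: computeIfAbsent replaces the
// explicit double-checked locking for creating the per-key listener set.
public class ListenerRegistry {

    private final Map<String, Set<Runnable>> listenerMap = new ConcurrentHashMap<>();

    public void registerListener(String key, Runnable listener) {
        listenerMap.computeIfAbsent(key, k -> ConcurrentHashMap.newKeySet()).add(listener);
    }

    public int listenerCount(String key) {
        Set<Runnable> listeners = listenerMap.get(key);
        return listeners == null ? 0 : listeners.size();
    }
}
```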
  • NamingGrpcClientProxy.subscribe

A SubscribeServiceRequest is built here and handled on the server side by SubscribeServiceRequestHandler.

@Override
public ServiceInfo subscribe(String serviceName, String groupName, String clusters) throws NacosException {
    if (NAMING_LOGGER.isDebugEnabled()) {
        NAMING_LOGGER.debug("[GRPC-SUBSCRIBE] service:{}, group:{}, cluster:{} ", serviceName, groupName, clusters);
    }
    // Cache the subscription so it can be redone (retried) later
    redoService.cacheSubscriberForRedo(serviceName, groupName, clusters);
    return doSubscribe(serviceName, groupName, clusters);
}
public ServiceInfo doSubscribe(String serviceName, String groupName, String clusters) throws NacosException {
    SubscribeServiceRequest request = new SubscribeServiceRequest(namespaceId, groupName, serviceName, clusters,
                                                                  true);
    SubscribeServiceResponse response = requestToServer(request, SubscribeServiceResponse.class);
    redoService.subscriberRegistered(serviceName, groupName, clusters);
    return response.getServiceInfo();
}

Server side

  • SubscribeServiceRequestHandler.handle

It receives the subscription request from the client.

@Override
@Secured(action = ActionTypes.READ)
public SubscribeServiceResponse handle(SubscribeServiceRequest request, RequestMeta meta) throws NacosException {
    String namespaceId = request.getNamespace();
    String serviceName = request.getServiceName();
    String groupName = request.getGroupName();
    String app = request.getHeader("app", "unknown");
    String groupedServiceName = NamingUtils.getGroupedName(serviceName, groupName);
    // The three elements (namespace, group, service) identify a Service
    Service service = Service.newService(namespaceId, groupName, serviceName, true);
    // Build a Subscriber from the request meta: client IP, client version, and app name
    Subscriber subscriber = new Subscriber(meta.getClientIp(), meta.getClientVersion(), app, meta.getClientIp(),
                                           namespaceId, groupedServiceName, 0, request.getClusters());
    // From the service's instance list, select the healthy instances
    ServiceInfo serviceInfo = ServiceUtil.selectInstancesWithHealthyProtection(serviceStorage.getData(service),
            metadataManager.getServiceMetadata(service).orElse(null), subscriber.getCluster(), false,
            true, subscriber.getIp());
    // Subscribe
    if (request.isSubscribe()) {
        // Continue into the subscription flow (next step)
        clientOperationService.subscribeService(service, subscriber, meta.getConnectionId());
        NotifyCenter.publishEvent(new SubscribeServiceTraceEvent(System.currentTimeMillis(),
                                                                 meta.getClientIp(), service.getNamespace(), service.getGroup(), service.getName()));
    } else {
        // Unsubscribe
        clientOperationService.unsubscribeService(service, subscriber, meta.getConnectionId());
        NotifyCenter.publishEvent(new UnsubscribeServiceTraceEvent(System.currentTimeMillis(),
                                                                   meta.getClientIp(), service.getNamespace(), service.getGroup(), service.getName()));
    }
    return new SubscribeServiceResponse(ResponseCode.SUCCESS.getCode(), "success", serviceInfo);
}
  • EphemeralClientOperationServiceImpl.subscribeService
@Override
public void subscribeService(Service service, Subscriber subscriber, String clientId) {
    // Look up the singleton Service; if it does not exist yet, use the one from the request
    Service singleton = ServiceManager.getInstance().getSingletonIfExist(service).orElse(service);
    // Fetch the client
    Client client = clientManager.getClient(clientId);
    if (!clientIsLegal(client, clientId)) {
        return;
    }
    // Record the subscription relationship (next step)
    client.addServiceSubscriber(singleton, subscriber);
    client.setLastUpdatedTime();
    // After the subscription is recorded, publish an event
    NotifyCenter.publishEvent(new ClientOperationEvent.ClientSubscribeServiceEvent(singleton, clientId));
}
  • AbstractClient.addServiceSubscriber
protected final ConcurrentHashMap<Service, InstancePublishInfo> publishers = new ConcurrentHashMap<>(16, 0.75f, 1);
// Maps each Service to its Subscriber; Subscriber carries the subscriber's clientIp and version, i.e. who subscribed to this service.
protected final ConcurrentHashMap<Service, Subscriber> subscribers = new ConcurrentHashMap<>(16, 0.75f, 1);
    
@Override
public boolean addServiceSubscriber(Service service, Subscriber subscriber) {
    if (null == subscribers.put(service, subscriber)) {
        MetricsMonitor.incrementSubscribeCount();
    }
    return true;
}
  • ClientOperationEvent.ClientSubscribeServiceEvent

The previous step published an event: ClientSubscribeServiceEvent.

Searching the codebase for this event leads to its consumer.

  • ClientServiceIndexesManager.handleClientOperation

The handler below dispatches on the concrete event type.

private void handleClientOperation(ClientOperationEvent event) {
    Service service = event.getService();
    String clientId = event.getClientId();
    // The four main client operation events
    if (event instanceof ClientOperationEvent.ClientRegisterServiceEvent) {
        addPublisherIndexes(service, clientId);
    } else if (event instanceof ClientOperationEvent.ClientDeregisterServiceEvent) {
        removePublisherIndexes(service, clientId);
    } else if (event instanceof ClientOperationEvent.ClientSubscribeServiceEvent) {
        addSubscriberIndexes(service, clientId);
    } else if (event instanceof ClientOperationEvent.ClientUnsubscribeServiceEvent) {
        removeSubscriberIndexes(service, clientId);
    }
}
// Maps each Service to the set of subscriber clientIds
private final ConcurrentMap<Service, Set<String>> subscriberIndexes = new ConcurrentHashMap<>();
private void addSubscriberIndexes(Service service, String clientId) {
    // Record the service -> subscriber clientId relationship
    subscriberIndexes.computeIfAbsent(service, key -> new ConcurrentHashSet<>());
    // Fix #5404, Only first time add need notify event.
    if (subscriberIndexes.get(service).add(clientId)) {
        // Publish a follow-up ServiceEvent.ServiceSubscribedEvent
        NotifyCenter.publishEvent(new ServiceEvent.ServiceSubscribedEvent(service, clientId));
    }
}
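The comment referencing issue #5404 is the key detail: Set.add returns false for a duplicate clientId, so the subscribed event is published only the first time a client subscribes. A minimal sketch of that pattern; SubscriberIndex is an illustrative class, with String keys standing in for Service:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch, not Nacos code: notify only on the first add.
public class SubscriberIndex {

    private final Map<String, Set<String>> subscriberIndexes = new ConcurrentHashMap<>();

    // Returns true exactly once per (service, clientId) pair,
    // mirroring "Only first time add need notify event".
    public boolean addSubscriber(String service, String clientId) {
        return subscriberIndexes
                .computeIfAbsent(service, key -> ConcurrentHashMap.newKeySet())
                .add(clientId);
    }
}
```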
  • NamingSubscriberServiceV2Impl.onEvent

A delay-task execute engine appears here.

private final PushDelayTaskExecuteEngine delayTaskEngine;
@Override
public void onEvent(Event event) {
    if (event instanceof ServiceEvent.ServiceChangedEvent) {
        // If service changed, push to all subscribers.
        ServiceEvent.ServiceChangedEvent serviceChangedEvent = (ServiceEvent.ServiceChangedEvent) event;
        Service service = serviceChangedEvent.getService();
        delayTaskEngine.addTask(service, new PushDelayTask(service, PushConfig.getInstance().getPushTaskDelay()));
        MetricsMonitor.incrementServiceChangeCount(service.getNamespace(), service.getGroup(), service.getName());
    } else if (event instanceof ServiceEvent.ServiceSubscribedEvent) {
        // If service is subscribed by one client, only push this client.
        ServiceEvent.ServiceSubscribedEvent subscribedEvent = (ServiceEvent.ServiceSubscribedEvent) event;
        Service service = subscribedEvent.getService();
        delayTaskEngine.addTask(service, new PushDelayTask(service, PushConfig.getInstance().getPushTaskDelay(),
                                                           subscribedEvent.getClientId()));
    }
}
  • NacosDelayTaskExecuteEngine.addTask

A delay task is added to the engine, where a scheduled thread later picks it up and executes it.


// Reentrant lock guarding the tasks map
protected final ReentrantLock lock = new ReentrantLock();

private final ScheduledExecutorService processingExecutor;

public NacosDelayTaskExecuteEngine(String name, int initCapacity, Logger logger, long processInterval) {
    super(logger);
    tasks = new ConcurrentHashMap<>(initCapacity);
    // Initialize a single-threaded scheduler that runs the processor every processInterval ms (100ms by default)
    processingExecutor = ExecutorFactory.newSingleScheduledExecutorService(new NameThreadFactory(name));
    // ProcessRunnable is shown in the next snippet
    processingExecutor
        .scheduleWithFixedDelay(new ProcessRunnable(), processInterval, processInterval, TimeUnit.MILLISECONDS);
}
// The tasks map
protected final ConcurrentHashMap<Object, AbstractDelayTask> tasks;
@Override
public void addTask(Object key, AbstractDelayTask newTask) {
    lock.lock();
    try {
        AbstractDelayTask existTask = tasks.get(key);
        if (null != existTask) {
            newTask.merge(existTask);
        }
        tasks.put(key, newTask);
    } finally {
        lock.unlock();
    }
}
// Inner class: the runnable executed by the scheduler
private class ProcessRunnable implements Runnable {
        
    @Override
    public void run() {
        try {
            processTasks();
        } catch (Throwable e) {
            getEngineLog().error(e.toString(), e);
        }
    }
}

protected void processTasks() {
    Collection<Object> keys = getAllTaskKeys();
    // taskKey is the Service passed in earlier
    for (Object taskKey : keys) {
        AbstractDelayTask task = removeTask(taskKey);
        if (null == task) {
            continue;
        }
        // Look up the processor registered for this taskKey
        NacosTaskProcessor processor = getProcessor(taskKey);
        if (null == processor) {
            getEngineLog().error("processor not found for task, so discarded. " + task);
            continue;
        }
        try {
            // Re-add the task if processing failed
            // process() is the next step
            if (!processor.process(task)) {
                retryFailedTask(taskKey, task);
            }
        } catch (Throwable e) {
            getEngineLog().error("Nacos task execute error ", e);
            retryFailedTask(taskKey, task);
        }
    }
}
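The merge in addTask is what coalesces bursts of changes: if a task for the same service is already queued, the new task absorbs the old one's state instead of queueing a second task. A simplified sketch of that behavior; DelayTaskQueue and PushTask are hypothetical stand-ins for the engine and PushDelayTask, merging only target client sets:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch, not Nacos code: merge-on-add for delayed tasks.
public class DelayTaskQueue {

    public static class PushTask {
        public final Set<String> targetClients = new HashSet<>();

        public PushTask(String... clients) {
            for (String c : clients) {
                targetClients.add(c);
            }
        }

        // Absorb the state of a previously queued task for the same key.
        public void merge(PushTask old) {
            targetClients.addAll(old.targetClients);
        }
    }

    private final ConcurrentHashMap<Object, PushTask> tasks = new ConcurrentHashMap<>();
    private final ReentrantLock lock = new ReentrantLock();

    public void addTask(Object key, PushTask newTask) {
        lock.lock();
        try {
            PushTask exist = tasks.get(key);
            if (exist != null) {
                newTask.merge(exist);
            }
            tasks.put(key, newTask);
        } finally {
            lock.unlock();
        }
    }

    public PushTask get(Object key) {
        return tasks.get(key);
    }
}
```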
  • PushDelayTaskExecuteEngine.PushDelayTaskProcessor.process

Since the task we queued is a PushDelayTask, processing ends up in the PushDelayTaskProcessor registered on PushDelayTaskExecuteEngine.

private static class PushDelayTaskProcessor implements NacosTaskProcessor {
        
    private final PushDelayTaskExecuteEngine executeEngine;

    public PushDelayTaskProcessor(PushDelayTaskExecuteEngine executeEngine) {
        this.executeEngine = executeEngine;
    }

    @Override
    public boolean process(NacosTask task) {
        PushDelayTask pushDelayTask = (PushDelayTask) task;
        Service service = pushDelayTask.getService();
        // Dispatch the task
        NamingExecuteTaskDispatcher.getInstance()
            .dispatchAndExecuteTask(service, new PushExecuteTask(service, executeEngine, pushDelayTask));
        return true;
    }
}
  • NamingExecuteTaskDispatcher.dispatchAndExecuteTask

A classic singleton.

public class NamingExecuteTaskDispatcher {
    
    private static final NamingExecuteTaskDispatcher INSTANCE = new NamingExecuteTaskDispatcher();
    
    private final NacosExecuteTaskExecuteEngine executeEngine;
    
    private NamingExecuteTaskDispatcher() {
        executeEngine = new NacosExecuteTaskExecuteEngine(EnvUtil.FUNCTION_MODE_NAMING, Loggers.SRV_LOG);
    }
    
    public static NamingExecuteTaskDispatcher getInstance() {
        return INSTANCE;
    }
    
    public void dispatchAndExecuteTask(Object dispatchTag, AbstractExecuteTask task) {
        // Dispatch the task for execution
        executeEngine.addTask(dispatchTag, task);
    }
    
    public String workersStatus() {
        return executeEngine.workersStatus();
    }
    
    public void destroy() throws Exception {
        executeEngine.shutdown();
    }
}
  • NacosExecuteTaskExecuteEngine.addTask

@Override
public void addTask(Object tag, AbstractExecuteTask task) {
    // In the push flow no processor is registered for this tag, so getProcessor
    // returns null and the task falls through to a worker below
    NacosTaskProcessor processor = getProcessor(tag);
    if (null != processor) {
        processor.process(task);
        return;
    }
    
    TaskExecuteWorker worker = getWorker(tag);
    // Continues to the next step
    worker.process(task);
}

private TaskExecuteWorker getWorker(Object tag) {
    // Mask off the sign bit of the tag's hashCode, then take it modulo the worker count
    int idx = (tag.hashCode() & Integer.MAX_VALUE) % workersCount();
    return executeWorkers[idx];
}
public NacosExecuteTaskExecuteEngine(String name, Logger logger) {
    // Thread count: taken from configuration, or derived from Runtime.getRuntime().availableProcessors()
    this(name, logger, ThreadUtils.getSuitableThreadCount(1));
}
public NacosExecuteTaskExecuteEngine(String name, Logger logger, int dispatchWorkerCount) {
    super(logger);
    executeWorkers = new TaskExecuteWorker[dispatchWorkerCount];
    for (int mod = 0; mod < dispatchWorkerCount; ++mod) {
        executeWorkers[mod] = new TaskExecuteWorker(name, mod, dispatchWorkerCount, getEngineLog());
    }
}
private final TaskExecuteWorker[] executeWorkers;
private int workersCount() {
    return executeWorkers.length;
}
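The worker-selection formula guarantees that tasks carrying the same tag (service) always land on the same worker, which preserves per-service execution order. A sketch of just that computation; WorkerDispatch is an illustrative name:

```java
// Hash-based worker selection: mask the sign bit so the index stays
// non-negative even when hashCode() is negative, then mod by worker count.
public class WorkerDispatch {

    public static int workerIndex(Object tag, int workerCount) {
        return (tag.hashCode() & Integer.MAX_VALUE) % workerCount;
    }
}
```

Because the index depends only on the tag, repeated tasks for one service are serialized on a single worker thread rather than racing across the pool.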
  • TaskExecuteWorker.process

The worker puts the task into a blocking queue.

When a TaskExecuteWorker is constructed, it creates and starts an InnerWorker thread.

That InnerWorker then consumes tasks from the queue.

private final BlockingQueue<Runnable> queue;
public TaskExecuteWorker(final String name, final int mod, final int total, final Logger logger) {
    this.name = name + "_" + mod + "%" + total;
    this.queue = new ArrayBlockingQueue<>(QUEUE_CAPACITY);
    this.closed = new AtomicBoolean(false);
    this.log = null == logger ? LoggerFactory.getLogger(TaskExecuteWorker.class) : logger;
    realWorker = new InnerWorker(this.name);
    realWorker.start();
}
@Override
public boolean process(NacosTask task) {
    if (task instanceof AbstractExecuteTask) {
        putTask((Runnable) task);
    }
    return true;
}
private void putTask(Runnable task) {
    try {
        queue.put(task);
    } catch (InterruptedException ire) {
        log.error(ire.toString(), ire);
    }
}
  • InnerWorker.run

queue.take() blocks until a task is available, then task.run() executes it.

For this flow, the task is the PushExecuteTask created in the previous step via new PushExecuteTask(service, executeEngine, pushDelayTask), so run() lands in that class.

Let's look at that run method next.

private class InnerWorker extends Thread {
    
    InnerWorker(String name) {
        setDaemon(false);
        setName(name);
    }
    
    @Override
    public void run() {
        while (!closed.get()) {
            try {
                Runnable task = queue.take();
                long begin = System.currentTimeMillis();
                task.run();
                long duration = System.currentTimeMillis() - begin;
                if (duration > 1000L) {
                    log.warn("task {} takes {}ms", task, duration);
                }
            } catch (Throwable e) {
                log.error("[TASK-FAILED] " + e, e);
            }
        }
    }
}
  • PushExecuteTask.run

Here the ServiceInfo and metadata are wrapped into a PushDataWrapper; the task then iterates over the target clientIds and pushes the subscriber data to each one.

@Override
public void run() {
    try {
        PushDataWrapper wrapper = generatePushData();
        ClientManager clientManager = delayTaskEngine.getClientManager();
        for (String each : getTargetClientIds()) {
            Client client = clientManager.getClient(each);
            if (null == client) {
                // means this client has disconnect
                continue;
            }
            Subscriber subscriber = clientManager.getClient(each).getSubscriber(service);
            delayTaskEngine.getPushExecutor().doPushWithCallback(each, subscriber, wrapper,
                                                                 new ServicePushCallback(each, subscriber, wrapper.getOriginalData(), delayTask.isPushToAll()));
        }
    } catch (Exception e) {
        Loggers.PUSH.error("Push task for service" + service.getGroupedServiceName() + " execute failed ", e);
        delayTaskEngine.addTask(service, new PushDelayTask(service, 1000L));
    }
}
  • PushExecutorRpcImpl.doPushWithCallback

This is the gRPC layer that pushes the data.

private final RpcPushService pushService;
@Override
public void doPushWithCallback(String clientId, Subscriber subscriber, PushDataWrapper data,
        NamingPushCallback callBack) {
    ServiceInfo actualServiceInfo = getServiceInfo(data, subscriber);
    callBack.setActualServiceInfo(actualServiceInfo);
    pushService.pushWithCallback(clientId, NotifySubscriberRequest.buildNotifySubscriberRequest(actualServiceInfo),
            callBack, GlobalExecutor.getCallbackExecutor());
}
  • RpcPushService.pushWithCallback

The push here is asynchronous; with that, the server-side handling is complete.

public void pushWithCallback(String connectionId, ServerRequest request, PushCallBack requestCallBack,
        Executor executor) {
    Connection connection = connectionManager.getConnection(connectionId);
    if (connection != null) {
        try {
            connection.asyncRequest(request, new AbstractRequestCallBack(requestCallBack.getTimeout()) {
                
                @Override
                public Executor getExecutor() {
                    return executor;
                }
                
                @Override
                public void onResponse(Response response) {
                    if (response.isSuccess()) {
                        requestCallBack.onSuccess();
                    } else {
                        requestCallBack.onFail(new NacosException(response.getErrorCode(), response.getMessage()));
                    }
                }
                
                @Override
                public void onException(Throwable e) {
                    requestCallBack.onFail(e);
                }
            });
        } catch (ConnectionAlreadyClosedException e) {
            connectionManager.unregister(connectionId);
            requestCallBack.onSuccess();
        } catch (Exception e) {
            Loggers.REMOTE_DIGEST
                    .error("error to send push response to connectionId ={},push response={}", connectionId,
                            request, e);
            requestCallBack.onFail(e);
        }
    } else {
        requestCallBack.onSuccess();
    }
}

Back on the client side

  • NamingClientProxyDelegate.subscribe

This is where the result of the grpcClientProxy.subscribe call comes back.

It is then handed to ServiceInfoHolder.processServiceInfo.

@Override
public ServiceInfo subscribe(String serviceName, String groupName, String clusters) throws NacosException {
    NAMING_LOGGER.info("[SUBSCRIBE-SERVICE] service:{}, group:{}, clusters:{} ", serviceName, groupName, clusters);
    String serviceNameWithGroup = NamingUtils.getGroupedName(serviceName, groupName);
    String serviceKey = ServiceInfo.getKey(serviceNameWithGroup, clusters);
    serviceInfoUpdateService.scheduleUpdateIfAbsent(serviceName, groupName, clusters);
    ServiceInfo result = serviceInfoHolder.getServiceInfoMap().get(serviceKey);
    if (null == result || !isSubscribed(serviceName, groupName, clusters)) {
        result = grpcClientProxy.subscribe(serviceName, groupName, clusters);
    }
    serviceInfoHolder.processServiceInfo(result);
    return result;
}
  • ServiceInfoHolder.processServiceInfo
public ServiceInfo processServiceInfo(ServiceInfo serviceInfo) {
    String serviceKey = serviceInfo.getKey();
    if (serviceKey == null) {
        return null;
    }
    ServiceInfo oldService = serviceInfoMap.get(serviceInfo.getKey());
    if (isEmptyOrErrorPush(serviceInfo)) {
        //empty or error push, just ignore
        return oldService;
    }
    serviceInfoMap.put(serviceInfo.getKey(), serviceInfo);
    boolean changed = isChangedServiceInfo(oldService, serviceInfo);
    if (StringUtils.isBlank(serviceInfo.getJsonFromServer())) {
        serviceInfo.setJsonFromServer(JacksonUtils.toJson(serviceInfo));
    }
    MetricsMonitor.getServiceInfoMapSizeMonitor().set(serviceInfoMap.size());
    if (changed) {
        NAMING_LOGGER.info("current ips:({}) service: {} -> {}", serviceInfo.ipCount(), serviceInfo.getKey(),
                JacksonUtils.toJson(serviceInfo.getHosts()));
        // After the update, publish an InstancesChangeEvent
        NotifyCenter.publishEvent(new InstancesChangeEvent(serviceInfo.getName(), serviceInfo.getGroupName(),
                serviceInfo.getClusters(), serviceInfo.getHosts()));
        DiskCache.write(serviceInfo, cacheDir);
    }
    return serviceInfo;
}
  • NotifyCenter.publishEvent
public static boolean publishEvent(final Event event) {
    try {
        return publishEvent(event.getClass(), event);
    } catch (Throwable ex) {
        LOGGER.error("There was an exception to the message publishing : ", ex);
        return false;
    }
}
private static boolean publishEvent(final Class<? extends Event> eventType, final Event event) {
    if (ClassUtils.isAssignableFrom(SlowEvent.class, eventType)) {
        return INSTANCE.sharePublisher.publish(event);
    }

    final String topic = ClassUtils.getCanonicalName(eventType);

    EventPublisher publisher = INSTANCE.publisherMap.get(topic);
    if (publisher != null) {
        // Continues to the next step
        return publisher.publish(event);
    }
    LOGGER.warn("There are no [{}] publishers for this event, please register", topic);
    return false;
}
  • DefaultPublisher.publish

The event is offered to a blocking queue.

private BlockingQueue<Event> queue;
@Override
public boolean publish(Event event) {
    checkIsStart();
    boolean success = this.queue.offer(event);
    if (!success) {
        LOGGER.warn("Unable to plug in due to interruption, synchronize sending time, event : {}", event);
        receiveEvent(event);
        return true;
    }
    return true;
}
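The offer/fallback pair above is a deliberate back-pressure choice: publish never blocks and never drops an event; when the bounded queue is full, the event is processed synchronously on the caller's thread via receiveEvent. A minimal sketch of the pattern; FallbackPublisher is illustrative, with a list standing in for synchronous handling:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch, not Nacos code: on a full queue the event is
// handled synchronously instead of being dropped.
public class FallbackPublisher {

    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
    private final List<String> handledSynchronously = new ArrayList<>();

    public boolean publish(String event) {
        if (!queue.offer(event)) {
            // Stand-in for receiveEvent(event): process on the caller's thread.
            handledSynchronously.add(event);
        }
        return true;
    }

    public int syncHandledCount() {
        return handledSynchronously.size();
    }
}
```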

The publisher thread then takes events from the queue; its run() drives the processing:

@Override
public void run() {
    openEventHandler();
}

void openEventHandler() {
    try {
        
        // This variable is defined to resolve the problem which message overstock in the queue.
        int waitTimes = 60;
        // To ensure that messages are not lost, enable EventHandler when
        // waiting for the first Subscriber to register
        for (; ; ) {
            if (shutdown || hasSubscriber() || waitTimes <= 0) {
                break;
            }
            ThreadUtils.sleep(1000L);
            waitTimes--;
        }
        
        for (; ; ) {
            if (shutdown) {
                break;
            }
            final Event event = queue.take();
            receiveEvent(event);
            UPDATER.compareAndSet(this, lastEventSequence, Math.max(lastEventSequence, event.sequence()));
        }
    } catch (Throwable ex) {
        LOGGER.error("Event listener exception : ", ex);
    }
}


void receiveEvent(Event event) {
    final long currentEventSequence = event.sequence();

    if (!hasSubscriber()) {
        LOGGER.warn("[NotifyCenter] the {} is lost, because there is no subscriber.", event);
        return;
    }

    // Notification single event listener
    for (Subscriber subscriber : subscribers) {
        // Whether to ignore expiration events
        if (subscriber.ignoreExpireEvent() && lastEventSequence > currentEventSequence) {
            LOGGER.debug("[NotifyCenter] the {} is unacceptable to this subscriber, because had expire",
                         event.getClass());
            continue;
        }

        // Because unifying smartSubscriber and subscriber, so here need to think of compatibility.
        // Remove original judge part of codes.
        notifySubscriber(subscriber, event);
    }
}

receiveEvent iterates over the subscribers.

If a subscriber provides an executor, the callback runs on that executor; otherwise it runs inline via job.run().

@Override
public void notifySubscriber(final Subscriber subscriber, final Event event) {
    
    LOGGER.debug("[NotifyCenter] the {} will received by {}", event, subscriber);
    
    final Runnable job = () -> subscriber.onEvent(event);
    final Executor executor = subscriber.executor();
    
    if (executor != null) {
        executor.execute(job);
    } else {
        try {
            job.run();
        } catch (Throwable e) {
            LOGGER.error("Event callback exception: ", e);
        }
    }
}
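The dispatch rule in notifySubscriber (run on the subscriber's executor when one is provided, otherwise inline on the publisher thread) can be isolated as follows; Notifier is an illustrative name:

```java
import java.util.concurrent.Executor;

// Illustrative sketch, not Nacos code: executor-or-inline dispatch.
public class Notifier {

    public static void notify(Runnable job, Executor executor) {
        if (executor != null) {
            executor.execute(job);
        } else {
            try {
                job.run();
            } catch (Throwable e) {
                // DefaultPublisher logs the callback exception and continues
            }
        }
    }
}
```

Passing `Runnable::run` as the executor is the usual caller-runs idiom for testing this kind of dispatch.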

That callback is exactly the listener we registered at the very beginning:

NamingService naming = NamingFactory.createNamingService(System.getProperty("serveAddr"));
naming.subscribe("nacos.test.3", event -> {
    if (event instanceof NamingEvent) {
        System.out.println(((NamingEvent) event).getServiceName());
        System.out.println(((NamingEvent) event).getInstances());
    }
});

This completes the analysis of the service subscription flow.
