An Overview of the Distro Protocol in Nacos
This article walks through the Distro protocol. Distro is a proprietary protocol from Alibaba, and the open-source Nacos uses it. The protocol has the following key points:
- Distro is a protocol purpose-built for service registries;
- The client has two important interactions with the server: service registration and heartbeat sending;
- The client registers with the server at service granularity, and after registering sends a heartbeat at a fixed interval; the heartbeat carries the full information of the registered service. From the client's point of view all server nodes are equal, so it picks a node at random for each request;
- If a request fails, the client switches to another node and resends it;
- Every server node stores the full data set, but each node is responsible for only a subset of the services. On receiving a client "write" request (registration, heartbeat, deregistration, etc.), a server node checks whether it is responsible for the requested service; if so it handles the request itself, otherwise it hands the request over to the responsible node;
- Each server node actively sends health checks to the other nodes; nodes that respond are regarded as healthy by the checking node;
- When the server receives a heartbeat for a service that does not exist, it treats the heartbeat as a registration request;
- If the server does not receive a client heartbeat for a long time, it takes the service offline;
- After the responsible node receives a write request (registration, heartbeat, etc.), it writes the data locally and returns immediately, then asynchronously syncs the data to the other nodes in the background;
- On a read request, a node returns the data from its local store directly, whether or not that data is the latest.
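The client-side behavior described above (server nodes treated as peers, a random first pick, switch to another node on failure) can be sketched roughly as follows. This is an illustrative sketch, not Nacos client source; the `Sender` interface and the class name are assumptions made for the example:

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch (not Nacos client source) of the node-selection rule:
// all server nodes are peers, the first attempt goes to a random node, and a
// failed request falls through to the next node.
public class DistroClientSketch {

    // Hypothetical send operation; returns true when the request succeeded.
    public interface Sender {
        boolean send(String server);
    }

    // Try each server at most once, starting from a random offset;
    // returns the node that accepted the request, or null if all failed.
    public static String requestWithFailover(List<String> servers, Sender sender) {
        int start = ThreadLocalRandom.current().nextInt(servers.size());
        for (int i = 0; i < servers.size(); i++) {
            String server = servers.get((start + i) % servers.size());
            if (sender.send(server)) {
                return server;
            }
        }
        return null;
    }
}
```

Because the starting index is random, load spreads evenly across nodes even though every node can serve every request.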
Server Node Management
The Distro protocol discovers and manages server nodes through an addressing mechanism. Nacos offers three addressing modes:
- Standalone mode (StandaloneMemberLookup)
- File mode (FileConfigMemberLookup)
- Address-server mode (AddressServerMemberLookup)
Standalone Mode
```java
public static MemberLookup createLookUp(ServerMemberManager memberManager) throws NacosException {
    // Nacos server started in cluster mode
    if (!ApplicationUtils.getStandaloneMode()) {
        String lookupType = ApplicationUtils.getProperty(LOOKUP_MODE_TYPE);
        // resolve the LookupType from the configured addressing mode
        LookupType type = chooseLookup(lookupType);
        // pick the addressing implementation
        LOOK_UP = find(type);
        currentLookupType = type;
    } else {
        // Nacos server started in standalone mode
        LOOK_UP = new StandaloneMemberLookup();
    }
    LOOK_UP.injectMemberManager(memberManager);
    Loggers.CLUSTER.info("Current addressing mode selection : {}", LOOK_UP.getClass().getSimpleName());
    return LOOK_UP;
}
```
The find() method:
```java
private static MemberLookup find(LookupType type) {
    if (LookupType.FILE_CONFIG.equals(type)) {
        LOOK_UP = new FileConfigMemberLookup();
        return LOOK_UP;
    }
    if (LookupType.ADDRESS_SERVER.equals(type)) {
        LOOK_UP = new AddressServerMemberLookup();
        return LOOK_UP;
    }
    // unpossible to run here
    throw new IllegalArgumentException();
}
```
As you can see, when not started in standalone mode, there are two options: file mode and address-server mode.
File Mode
File mode manages the cluster members by watching the cluster.conf file for changes. The core code is as follows:
```java
public class FileConfigMemberLookup extends AbstractMemberLookup {

    private FileWatcher watcher = new FileWatcher() {
        @Override
        public void onChange(FileChangeEvent event) {
            readClusterConfFromDisk();
        }

        @Override
        public boolean interest(String context) {
            return StringUtils.contains(context, "cluster.conf");
        }
    };

    @Override
    public void start() throws NacosException {
        if (start.compareAndSet(false, true)) {
            readClusterConfFromDisk();

            // Use the inotify mechanism to monitor file changes and automatically
            // trigger the reading of cluster.conf
            try {
                WatchFileCenter.registerWatcher(ApplicationUtils.getConfFilePath(), watcher);
            } catch (Throwable e) {
                Loggers.CLUSTER.error("An exception occurred in the launch file monitor : {}", e.getMessage());
            }
        }
    }

    @Override
    public void destroy() throws NacosException {
        WatchFileCenter.deregisterWatcher(ApplicationUtils.getConfFilePath(), watcher);
    }

    private void readClusterConfFromDisk() {
        Collection<Member> tmpMembers = new ArrayList<>();
        try {
            List<String> tmp = ApplicationUtils.readClusterConf();
            tmpMembers = MemberUtils.readServerConf(tmp);
        } catch (Throwable e) {
            Loggers.CLUSTER
                    .error("nacos-XXXX [serverlist] failed to get serverlist from disk!, error : {}", e.getMessage());
        }

        afterLookup(tmpMembers);
    }
}
```
Address-Server Mode
Node information is stored on an address server, and each Nacos server pulls the list periodically to manage its members:
```java
public class AddressServerMemberLookup extends AbstractMemberLookup {

    // member fields omitted

    @Override
    public void start() throws NacosException {
        if (start.compareAndSet(false, true)) {
            this.maxFailCount = Integer.parseInt(ApplicationUtils.getProperty("maxHealthCheckFailCount", "12"));
            initAddressSys();
            run();
        }
    }

    // resolve the address server location
    private void initAddressSys() {
        String envDomainName = System.getenv("address_server_domain");
        if (StringUtils.isBlank(envDomainName)) {
            domainName = ApplicationUtils.getProperty("address.server.domain", "jmenv.tbsite.net");
        } else {
            domainName = envDomainName;
        }
        String envAddressPort = System.getenv("address_server_port");
        if (StringUtils.isBlank(envAddressPort)) {
            addressPort = ApplicationUtils.getProperty("address.server.port", "8080");
        } else {
            addressPort = envAddressPort;
        }
        String envAddressUrl = System.getenv("address_server_url");
        if (StringUtils.isBlank(envAddressUrl)) {
            addressUrl = ApplicationUtils.getProperty("address.server.url",
                    ApplicationUtils.getContextPath() + "/" + "serverlist");
        } else {
            addressUrl = envAddressUrl;
        }
        addressServerUrl = "http://" + domainName + ":" + addressPort + addressUrl;
        envIdUrl = "http://" + domainName + ":" + addressPort + "/env";

        Loggers.CORE.info("ServerListService address-server port:" + addressPort);
        Loggers.CORE.info("ADDRESS_SERVER_URL:" + addressServerUrl);
    }

    @SuppressWarnings("PMD.UndefineMagicConstantRule")
    private void run() throws NacosException {
        // With the address server, you need to perform a synchronous member node pull at startup.
        // Retry up to maxRetry times, breaking out on success.
        boolean success = false;
        Throwable ex = null;
        int maxRetry = ApplicationUtils.getProperty("nacos.core.address-server.retry", Integer.class, 5);
        for (int i = 0; i < maxRetry; i++) {
            try {
                syncFromAddressUrl(); // pull the cluster member list
                success = true;
                break;
            } catch (Throwable e) {
                ex = e;
                Loggers.CLUSTER.error("[serverlist] exception, error : {}", ExceptionUtil.getAllExceptionMsg(ex));
            }
        }
        if (!success) {
            throw new NacosException(NacosException.SERVER_ERROR, ex);
        }

        GlobalExecutor.scheduleByCommon(new AddressServerSyncTask(), 5_000L); // schedule the periodic task
    }

    @Override
    public void destroy() throws NacosException {
        shutdown = true;
    }

    @Override
    public Map<String, Object> info() {
        Map<String, Object> info = new HashMap<>(4);
        info.put("addressServerHealth", isAddressServerHealth);
        info.put("addressServerUrl", addressServerUrl);
        info.put("envIdUrl", envIdUrl);
        info.put("addressServerFailCount", addressServerFailCount);
        return info;
    }

    private void syncFromAddressUrl() throws Exception {
        RestResult<String> result = restTemplate
                .get(addressServerUrl, Header.EMPTY, Query.EMPTY, genericType.getType());
        if (result.ok()) {
            isAddressServerHealth = true;
            Reader reader = new StringReader(result.getData());
            try {
                afterLookup(MemberUtils.readServerConf(ApplicationUtils.analyzeClusterConf(reader)));
            } catch (Throwable e) {
                Loggers.CLUSTER.error("[serverlist] exception for analyzeClusterConf, error : {}",
                        ExceptionUtil.getAllExceptionMsg(e));
            }
            addressServerFailCount = 0;
        } else {
            addressServerFailCount++;
            if (addressServerFailCount >= maxFailCount) {
                isAddressServerHealth = false;
            }
            Loggers.CLUSTER.error("[serverlist] failed to get serverlist, error code {}", result.getCode());
        }
    }

    // periodic sync task
    class AddressServerSyncTask implements Runnable {

        @Override
        public void run() {
            if (shutdown) {
                return;
            }
            try {
                syncFromAddressUrl(); // pull the server list
            } catch (Throwable ex) {
                addressServerFailCount++;
                if (addressServerFailCount >= maxFailCount) {
                    isAddressServerHealth = false;
                }
                Loggers.CLUSTER.error("[serverlist] exception, error : {}", ExceptionUtil.getAllExceptionMsg(ex));
            } finally {
                GlobalExecutor.scheduleByCommon(this, 5_000L);
            }
        }
    }
}
```
Data Synchronization
Initial Data Sync
On startup, a Distro node performs a full synchronization of data from the other nodes. The rough flow is as follows.
Main steps:
- A scheduled task thread, DistroLoadDataTask, is started; it calls the load() method to load data;
- load() calls loadAllDataSnapshotFromRemote() to pull all data from a remote node;
- All data is fetched through the namingProxy proxy, and the byte payload is extracted from the returned result;
- processData() deserializes the datumMap from the raw bytes;
- The data is written into dataStore, i.e. the local cache dataMap;
- If no listener is registered for a key, an empty service is created and the listener is bound to it;
- After the listener executes successfully, dataStore is updated.
The main code is as follows:
DistroProtocol.java
```java
public DistroProtocol(ServerMemberManager memberManager, DistroComponentHolder distroComponentHolder,
        DistroTaskEngineHolder distroTaskEngineHolder, DistroConfig distroConfig) {
    this.memberManager = memberManager;
    this.distroComponentHolder = distroComponentHolder;
    this.distroTaskEngineHolder = distroTaskEngineHolder;
    this.distroConfig = distroConfig;
    startDistroTask();
}

private void startDistroTask() {
    if (ApplicationUtils.getStandaloneMode()) {
        isInitialized = true;
        return;
    }
    startVerifyTask();
    startLoadTask();
}

private void startLoadTask() {
    DistroCallback loadCallback = new DistroCallback() {
        @Override
        public void onSuccess() {
            isInitialized = true;
        }

        @Override
        public void onFailed(Throwable throwable) {
            isInitialized = false;
        }
    };
    GlobalExecutor.submitLoadDataTask(
            new DistroLoadDataTask(memberManager, distroComponentHolder, distroConfig, loadCallback));
}
```
DistroLoadDataTask.java
```java
@Override
public void run() {
    try {
        load();
        if (!checkCompleted()) {
            GlobalExecutor.submitLoadDataTask(this, distroConfig.getLoadDataRetryDelayMillis());
        } else {
            loadCallback.onSuccess();
            Loggers.DISTRO.info("[DISTRO-INIT] load snapshot data success");
        }
    } catch (Exception e) {
        loadCallback.onFailed(e);
        Loggers.DISTRO.error("[DISTRO-INIT] load snapshot data failed. ", e);
    }
}

private void load() throws Exception {
    while (memberManager.allMembersWithoutSelf().isEmpty()) {
        Loggers.DISTRO.info("[DISTRO-INIT] waiting server list init...");
        TimeUnit.SECONDS.sleep(1);
    }
    while (distroComponentHolder.getDataStorageTypes().isEmpty()) {
        Loggers.DISTRO.info("[DISTRO-INIT] waiting distro data storage register...");
        TimeUnit.SECONDS.sleep(1);
    }
    for (String each : distroComponentHolder.getDataStorageTypes()) {
        if (!loadCompletedMap.containsKey(each) || !loadCompletedMap.get(each)) {
            loadCompletedMap.put(each, loadAllDataSnapshotFromRemote(each));
        }
    }
}

// load a full snapshot of all service data from a peer
private boolean loadAllDataSnapshotFromRemote(String resourceType) {
    DistroTransportAgent transportAgent = distroComponentHolder.findTransportAgent(resourceType);
    DistroDataProcessor dataProcessor = distroComponentHolder.findDataProcessor(resourceType);
    if (null == transportAgent || null == dataProcessor) {
        Loggers.DISTRO.warn("[DISTRO-INIT] Can't find component for type {}, transportAgent: {}, dataProcessor: {}",
                resourceType, transportAgent, dataProcessor);
        return false;
    }

    for (Member each : memberManager.allMembersWithoutSelf()) {
        try {
            Loggers.DISTRO.info("[DISTRO-INIT] load snapshot {} from {}", resourceType, each.getAddress());
            DistroData distroData = transportAgent.getDatumSnapshot(each.getAddress());
            boolean result = dataProcessor.processSnapshot(distroData);
            Loggers.DISTRO
                    .info("[DISTRO-INIT] load snapshot {} from {} result: {}", resourceType, each.getAddress(),
                            result);
            if (result) {
                return true;
            }
        } catch (Exception e) {
            Loggers.DISTRO.error("[DISTRO-INIT] load snapshot {} from {} failed.", resourceType, each.getAddress(), e);
        }
    }
    return false;
}
```
DistroConsistencyServiceImpl.java
```java
// deserialize and apply the snapshot data
private boolean processData(byte[] data) throws Exception {
    if (data.length > 0) {
        Map<String, Datum<Instances>> datumMap = serializer.deserializeMap(data, Instances.class);

        for (Map.Entry<String, Datum<Instances>> entry : datumMap.entrySet()) {
            // store the datum in dataStore
            dataStore.put(entry.getKey(), entry.getValue());

            // no listener is registered for this service yet
            if (!listeners.containsKey(entry.getKey())) {
                // pretty sure the service not exist:
                if (switchDomain.isDefaultInstanceEphemeral()) {
                    // create empty service
                    Loggers.DISTRO.info("creating service {}", entry.getKey());
                    Service service = new Service();
                    String serviceName = KeyBuilder.getServiceName(entry.getKey());
                    String namespaceId = KeyBuilder.getNamespace(entry.getKey());
                    service.setName(serviceName);
                    service.setNamespaceId(namespaceId);
                    service.setGroupName(Constants.DEFAULT_GROUP);
                    // now validate the service. if failed, exception will be thrown
                    service.setLastModifiedMillis(System.currentTimeMillis());
                    service.recalculateChecksum();

                    // The Listener corresponding to the key value must not be empty
                    RecordListener listener = listeners.get(KeyBuilder.SERVICE_META_KEY_PREFIX).peek();
                    if (Objects.isNull(listener)) {
                        return false;
                    }
                    listener.onChange(KeyBuilder.buildServiceMetaKey(namespaceId, serviceName), service);
                }
            }
        }

        for (Map.Entry<String, Datum<Instances>> entry : datumMap.entrySet()) {
            if (!listeners.containsKey(entry.getKey())) {
                // Should not happen:
                Loggers.DISTRO.warn("listener of {} not found.", entry.getKey());
                continue;
            }

            try {
                for (RecordListener listener : listeners.get(entry.getKey())) {
                    listener.onChange(entry.getKey(), entry.getValue().value);
                }
            } catch (Exception e) {
                Loggers.DISTRO.error("[NACOS-DISTRO] error while execute listener of key: {}", entry.getKey(), e);
                continue;
            }

            // Update data store if listener executed successfully:
            dataStore.put(entry.getKey(), entry.getValue());
        }
    }
    return true;
}
```
Incremental Sync
First, look at the server-side service registration endpoint:
```java
@CanDistro
@PostMapping
@Secured(parser = NamingResourceParser.class, action = ActionTypes.WRITE)
public String register(HttpServletRequest request) throws Exception {
    final String namespaceId = WebUtils
            .optional(request, CommonParams.NAMESPACE_ID, Constants.DEFAULT_NAMESPACE_ID);
    final String serviceName = WebUtils.required(request, CommonParams.SERVICE_NAME);
    NamingUtils.checkServiceNameFormat(serviceName);
    final Instance instance = parseInstance(request);
    serviceManager.registerInstance(namespaceId, serviceName, instance);
    return "ok";
}
```
In InstanceController, note the @CanDistro annotation on the registration endpoint. In com.alibaba.nacos.naming.web.DistroFilter you can see that requests to controller methods carrying this annotation are forwarded to the responsible node.
```java
@Override
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
        throws IOException, ServletException {
    ReuseHttpRequest req = new ReuseHttpServletRequest((HttpServletRequest) servletRequest);
    HttpServletResponse resp = (HttpServletResponse) servletResponse;

    String urlString = req.getRequestURI();

    if (StringUtils.isNotBlank(req.getQueryString())) {
        urlString += "?" + req.getQueryString();
    }

    try {
        String path = new URI(req.getRequestURI()).getPath();
        String serviceName = req.getParameter(CommonParams.SERVICE_NAME);
        // For client under 0.8.0:
        if (StringUtils.isBlank(serviceName)) {
            serviceName = req.getParameter("dom");
        }

        if (StringUtils.isNotBlank(serviceName)) {
            serviceName = serviceName.trim();
        }
        Method method = controllerMethodsCache.getMethod(req);

        if (method == null) {
            throw new NoSuchMethodException(req.getMethod() + " " + path);
        }

        String groupName = req.getParameter(CommonParams.GROUP_NAME);
        if (StringUtils.isBlank(groupName)) {
            groupName = Constants.DEFAULT_GROUP;
        }

        // use groupName@@serviceName as new service name:
        String groupedServiceName = serviceName;
        if (StringUtils.isNotBlank(serviceName) && !serviceName.contains(Constants.SERVICE_INFO_SPLITER)) {
            groupedServiceName = groupName + Constants.SERVICE_INFO_SPLITER + serviceName;
        }

        // proxy request to other server if necessary:
        // the handler is annotated with @CanDistro and the current node is not responsible for this service
        if (method.isAnnotationPresent(CanDistro.class) && !distroMapper.responsible(groupedServiceName)) {
            // read the User-Agent header
            String userAgent = req.getHeader(HttpHeaderConsts.USER_AGENT_HEADER);
            // a request forwarded by another Nacos server must not be redirected again; answer 400
            if (StringUtils.isNotBlank(userAgent) && userAgent.contains(UtilsAndCommons.NACOS_SERVER_HEADER)) {
                // This request is sent from peer server, should not be redirected again:
                Loggers.SRV_LOG.error("receive invalid redirect request from peer {}", req.getRemoteAddr());
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST,
                        "receive invalid redirect request from peer " + req.getRemoteAddr());
                return;
            }
            // compute which server node is responsible for this service
            final String targetServer = distroMapper.mapSrv(groupedServiceName);
            // assemble the headers, parameters and body for the forwarded request
            List<String> headerList = new ArrayList<>(16);
            Enumeration<String> headers = req.getHeaderNames();
            while (headers.hasMoreElements()) {
                String headerName = headers.nextElement();
                headerList.add(headerName);
                headerList.add(req.getHeader(headerName));
            }

            final String body = IoUtils.toString(req.getInputStream(), Charsets.UTF_8.name());
            final Map<String, String> paramsValue = HttpClient.translateParameterMap(req.getParameterMap());
            // forward the HTTP request to the responsible node
            RestResult<String> result = HttpClient
                    .request("http://" + targetServer + req.getRequestURI(), headerList, paramsValue, body,
                            PROXY_CONNECT_TIMEOUT, PROXY_READ_TIMEOUT, Charsets.UTF_8.name(), req.getMethod());
            String data = result.ok() ? result.getData() : result.getMessage();
            try {
                WebUtils.response(resp, data, result.getCode());
            } catch (Exception ignore) {
                Loggers.SRV_LOG.warn("[DISTRO-FILTER] request failed: " + distroMapper.mapSrv(groupedServiceName)
                        + urlString);
            }
        } else {
            OverrideParameterRequestWrapper requestWrapper = OverrideParameterRequestWrapper.buildRequest(req);
            requestWrapper.addParameter(CommonParams.SERVICE_NAME, groupedServiceName);
            filterChain.doFilter(requestWrapper, resp);
        }
    } catch (AccessControlException e) {
        resp.sendError(HttpServletResponse.SC_FORBIDDEN, "access denied: " + ExceptionUtil.getAllExceptionMsg(e));
    } catch (NoSuchMethodException e) {
        resp.sendError(HttpServletResponse.SC_NOT_IMPLEMENTED,
                "no such api:" + req.getMethod() + ":" + req.getRequestURI());
    } catch (Exception e) {
        resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
                "Server failed," + ExceptionUtil.getAllExceptionMsg(e));
    }
}
```
The filter checks whether the requested method carries the @CanDistro annotation and whether the current node is responsible for the service. If the node is not responsible, it assembles a proxy request: final String targetServer = distroMapper.mapSrv(groupedServiceName) returns the server node responsible for this request, and the request is forwarded to that node, with its response relayed back to the client. Otherwise the filter chain proceeds normally, meaning the current node handles the request itself.
The responsible() and mapSrv() methods used above can be found in com.alibaba.nacos.naming.core.DistroMapper; the source is as follows:
```java
public boolean responsible(String serviceName) {
    final List<String> servers = healthyList;
    // if Distro is disabled or running in standalone mode, the local node is always responsible
    if (!switchDomain.isDistroEnabled() || ApplicationUtils.getStandaloneMode()) {
        return true;
    }

    if (CollectionUtils.isEmpty(servers)) {
        // means distro config is not ready yet
        return false;
    }
    // if the local node is not in the server list yet, treat it as responsible
    int index = servers.indexOf(ApplicationUtils.getLocalAddress());
    int lastIndex = servers.lastIndexOf(ApplicationUtils.getLocalAddress());
    if (lastIndex < 0 || index < 0) {
        return true;
    }
    // compute which server node should handle this service
    int target = distroHash(serviceName) % servers.size();
    return target >= index && target <= lastIndex;
}

public String mapSrv(String serviceName) {
    final List<String> servers = healthyList;
    // if the server list is empty or Distro is disabled, return the local node's address
    if (CollectionUtils.isEmpty(servers) || !switchDomain.isDistroEnabled()) {
        return ApplicationUtils.getLocalAddress();
    }

    try {
        // hash the service name, then mod by the server list size to get the owning node's index
        int index = distroHash(serviceName) % servers.size();
        return servers.get(index);
    } catch (Throwable e) {
        Loggers.SRV_LOG.warn("[NACOS-DISTRO] distro mapper failed, return localhost: " + ApplicationUtils
                .getLocalAddress(), e);
        return ApplicationUtils.getLocalAddress();
    }
}

private int distroHash(String serviceName) {
    // absolute value of the service name's hashCode, mod Integer.MAX_VALUE
    return Math.abs(serviceName.hashCode() % Integer.MAX_VALUE);
}
```
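The ownership rule above is easy to exercise in isolation. Below is an illustrative, self-contained sketch of the same hash-mod mapping (class and method names are mine, not Nacos source), simplified to assume each address appears only once in the healthy list:

```java
import java.util.List;

// Illustrative sketch of DistroMapper's ownership rule: the owning node
// index is distroHash(serviceName) % servers.size().
public class DistroMapperSketch {

    // same expression as DistroMapper#distroHash
    static int distroHash(String serviceName) {
        return Math.abs(serviceName.hashCode() % Integer.MAX_VALUE);
    }

    // which node owns this service
    public static String mapSrv(String serviceName, List<String> healthyServers) {
        return healthyServers.get(distroHash(serviceName) % healthyServers.size());
    }

    // is the given local node responsible for this service
    public static boolean responsible(String serviceName, List<String> healthyServers, String localAddress) {
        int index = healthyServers.indexOf(localAddress);
        if (index < 0) {
            // not in the list yet: treat as responsible, mirroring the original
            return true;
        }
        return distroHash(serviceName) % healthyServers.size() == index;
    }
}
```

Because String.hashCode() is deterministic, every node computes the same owner for a given service name, so exactly one healthy node considers itself responsible.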
From the instance registration source above, you can see that a successful registration triggers synchronization of the instance data. The code is as follows:
```java
public void addInstance(String namespaceId, String serviceName, boolean ephemeral, Instance... ips)
        throws NacosException {

    String key = KeyBuilder.buildInstanceListKey(namespaceId, serviceName, ephemeral);

    Service service = getService(namespaceId, serviceName);

    synchronized (service) {
        List<Instance> instanceList = addIpAddresses(service, ephemeral, ips);

        Instances instances = new Instances();
        instances.setInstanceList(instanceList);
        // delegates to the put method of DistroConsistencyServiceImpl
        consistencyService.put(key, instances);
    }
}
```
consistencyService.put(key, instances) ultimately invokes the put method of DistroConsistencyServiceImpl, which in turn calls distroProtocol.sync(). The incremental data sync flow is outlined below.
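The overall write path — apply the change locally, return, and fan asynchronous sync tasks out to every peer — can be condensed into a sketch. Names and structure here are illustrative, not Nacos source:

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

// Illustrative sketch of the Distro write path: a put first applies the
// change to the local store, then enqueues an async sync task per peer.
// The caller never waits for replication.
public class DistroWritePathSketch {

    private final ConcurrentHashMap<String, Object> localStore = new ConcurrentHashMap<>();
    private final List<String> otherMembers;
    private final BiConsumer<String, String> enqueueSync; // (targetServer, key)

    public DistroWritePathSketch(List<String> otherMembers, BiConsumer<String, String> enqueueSync) {
        this.otherMembers = otherMembers;
        this.enqueueSync = enqueueSync;
    }

    public void put(String key, Object value) {
        localStore.put(key, value);          // 1. apply locally, then return
        for (String member : otherMembers) { // 2. schedule one async sync task per peer
            enqueueSync.accept(member, key);
        }
    }

    public Object get(String key) {
        return localStore.get(key);          // reads are always served from the local store
    }
}
```

This mirrors the protocol's read/write asymmetry: writes return as soon as the local store is updated, while reads never consult peers.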
Main steps:
New data is propagated to peers via asynchronous broadcast:
- DistroProtocol receives the incremental data through its sync() method;
- sync() publishes a delayed task via distroTaskEngineHolder;
- DistroDelayTaskProcessor.process() dispatches the task, converting the delayed task into an asynchronous change task;
- The change task's DistroSyncChangeTask.run() method sends the data to the target node;
- It calls DistroHttpAgent.syncData() to transmit the data, which internally calls NamingProxy.syncData();
- Failed tasks are handled by handleFailedTask(); on failure, DistroHttpCombinedKeyTaskFailedHandler re-queues the failed task as a delayed task.
The main code is as follows:
DistroProtocol.java
```java
public void sync(DistroKey distroKey, DataOperation action, long delay) {
    for (Member each : memberManager.allMembersWithoutSelf()) {
        DistroKey distroKeyWithTarget = new DistroKey(distroKey.getResourceKey(), distroKey.getResourceType(),
                each.getAddress());
        DistroDelayTask distroDelayTask = new DistroDelayTask(distroKeyWithTarget, action, delay);
        distroTaskEngineHolder.getDelayTaskExecuteEngine().addTask(distroKeyWithTarget, distroDelayTask);
        if (Loggers.DISTRO.isDebugEnabled()) {
            Loggers.DISTRO.debug("[DISTRO-SCHEDULE] {} to {}", distroKey, each.getAddress());
        }
    }
}
```
NacosDelayTaskExecuteEngine.java
```java
@Override
public void addTask(Object key, AbstractDelayTask newTask) {
    lock.lock();
    try {
        AbstractDelayTask existTask = tasks.get(key);
        if (null != existTask) {
            newTask.merge(existTask);
        }
        tasks.put(key, newTask);
    } finally {
        lock.unlock();
    }
}

/**
 * process tasks in execute engine.
 */
protected void processTasks() {
    Collection<Object> keys = getAllTaskKeys();
    for (Object taskKey : keys) {
        AbstractDelayTask task = removeTask(taskKey);
        if (null == task) {
            continue;
        }
        NacosTaskProcessor processor = getProcessor(taskKey);
        if (null == processor) {
            getEngineLog().error("processor not found for task, so discarded. " + task);
            continue;
        }
        try {
            // ReAdd task if process failed
            if (!processor.process(task)) {
                retryFailedTask(taskKey, task);
            }
        } catch (Throwable e) {
            getEngineLog().error("Nacos task execute error : " + e.toString(), e);
            retryFailedTask(taskKey, task);
        }
    }
}
```
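The merge-on-add behavior of addTask() above is why repeated changes to the same key collapse into a single pending sync. A minimal illustrative sketch (not Nacos source; names are mine) of that semantics:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: tasks queued under the same key are merged into one,
// so N rapid changes to one service yield a single pending sync task.
public class DelayTaskQueueSketch {

    public static class DelayTask {
        public int mergedCount = 1;

        // fold an older task for the same key into this one
        public void merge(DelayTask older) {
            this.mergedCount += older.mergedCount;
        }
    }

    private final Map<Object, DelayTask> tasks = new HashMap<>();

    public synchronized void addTask(Object key, DelayTask newTask) {
        DelayTask exist = tasks.remove(key);
        if (exist != null) {
            newTask.merge(exist); // merge the existing task, keep the newest
        }
        tasks.put(key, newTask);
    }

    public synchronized int pendingCount() {
        return tasks.size();
    }

    public synchronized DelayTask peek(Object key) {
        return tasks.get(key);
    }
}
```

This merging is also what makes the failure path safe: a failed sync re-queued as a delayed task simply merges with any newer change for the same key.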
DistroDelayTaskProcessor.java
```java
@Override
public boolean process(NacosTask task) {
    if (!(task instanceof DistroDelayTask)) {
        return true;
    }
    DistroDelayTask distroDelayTask = (DistroDelayTask) task;
    DistroKey distroKey = distroDelayTask.getDistroKey();
    if (DataOperation.CHANGE.equals(distroDelayTask.getAction())) {
        DistroSyncChangeTask syncChangeTask = new DistroSyncChangeTask(distroKey, distroComponentHolder);
        distroTaskEngineHolder.getExecuteWorkersManager().addTask(distroKey, syncChangeTask);
        return true;
    }
    return false;
}
```
DistroSyncChangeTask.java
```java
@Override
public void run() {
    Loggers.DISTRO.info("[DISTRO-START] {}", toString());
    try {
        String type = getDistroKey().getResourceType();
        DistroData distroData = distroComponentHolder.findDataStorage(type).getDistroData(getDistroKey());
        distroData.setType(DataOperation.CHANGE);
        boolean result = distroComponentHolder.findTransportAgent(type)
                .syncData(distroData, getDistroKey().getTargetServer());
        if (!result) {
            handleFailedTask();
        }
        Loggers.DISTRO.info("[DISTRO-END] {} result: {}", toString(), result);
    } catch (Exception e) {
        Loggers.DISTRO.warn("[DISTRO] Sync data change failed.", e);
        handleFailedTask();
    }
}

private void handleFailedTask() {
    String type = getDistroKey().getResourceType();
    DistroFailedTaskHandler failedTaskHandler = distroComponentHolder.findFailedTaskHandler(type);
    if (null == failedTaskHandler) {
        Loggers.DISTRO.warn("[DISTRO] Can't find failed task for type {}, so discarded", type);
        return;
    }
    failedTaskHandler.retry(getDistroKey(), DataOperation.CHANGE);
}
```
To summarize this reading of the Distro protocol source in Nacos: Distro is an eventual-consistency protocol for ephemeral data. Such data does not need to be persisted to disk or a database, because it is tied to a session between client and server; as long as the session is alive, the data is not lost. Under Distro, a write must always eventually succeed: failed syncs keep being merged in the delay-task pool and retried against the target nodes, so the data across nodes is eventually consistent.