Learning the Nacos Config Center Source Code

Official SDK reference: https://nacos.io/zh-cn/docs/v2/guide/user/sdk.html

Config publish: client side

public boolean publishConfig(String dataId, String group, String content) throws NacosException;

@Since 1.4.1
public boolean publishConfig(String dataId, String group, String content, String type) throws NacosException;

The signatures above are copied from the official docs. Now let's walk through the source.

  • NacosConfigService.publishConfig

NacosConfigService initializes a ClientWorker in its constructor; the real work is delegated to this worker.

private final ClientWorker worker;

private String namespace;

private final ConfigFilterChainManager configFilterChainManager;

public NacosConfigService(Properties properties) throws NacosException {
    ValidatorUtils.checkInitParam(properties);
    
    initNamespace(properties);
    this.configFilterChainManager = new ConfigFilterChainManager(properties);
    ServerListManager serverListManager = new ServerListManager(properties);
    serverListManager.start();
    
    this.worker = new ClientWorker(this.configFilterChainManager, serverListManager, properties);
    // will be deleted in 2.0 later versions
    agent = new ServerHttpAgent(serverListManager);
    
}
@Override
public boolean publishConfig(String dataId, String group, String content, String type) throws NacosException {
    return publishConfigInner(namespace, dataId, group, null, null, null, content, type, null);
}
private boolean publishConfigInner(String tenant, String dataId, String group, String tag, String appName,
        String betaIps, String content, String type, String casMd5) throws NacosException {
    group = blank2defaultGroup(group);
    ParamUtils.checkParam(dataId, group, content);
    // build a ConfigRequest
    ConfigRequest cr = new ConfigRequest();
    cr.setDataId(dataId);
    cr.setTenant(tenant);
    cr.setGroup(group);
    cr.setContent(content);
    cr.setType(type);
    // the filter chain may encrypt the content, keyed by dataId
    configFilterChainManager.doFilter(cr, null);
    content = cr.getContent();
    String encryptedDataKey = cr.getEncryptedDataKey();
    
    return worker
            .publishConfig(dataId, group, tenant, appName, tag, betaIps, content, encryptedDataKey, casMd5, type);
}
  • ClientWorker.publishConfig

In the ClientWorker constructor, a scheduled thread pool is created from the given serverListManager and properties and then started.

Note agent = new ConfigRpcTransportClient(...): this agent is the RPC transport client used to talk to the server.

private ConfigTransportClient agent;
public boolean publishConfig(String dataId, String group, String tenant, String appName, String tag, String betaIps,
                             String content, String encryptedDataKey, String casMd5, String type) throws NacosException {
    return agent
        .publishConfig(dataId, group, tenant, appName, tag, betaIps, content, encryptedDataKey, casMd5, type);
}
// ClientWorker constructor; ConfigRpcTransportClient is an inner class of ClientWorker
public ClientWorker(final ConfigFilterChainManager configFilterChainManager, ServerListManager serverListManager,
                    final Properties properties) throws NacosException {
    this.configFilterChainManager = configFilterChainManager;

    init(properties);

    agent = new ConfigRpcTransportClient(properties, serverListManager);
    int count = ThreadUtils.getSuitableThreadCount(THREAD_MULTIPLE);
    ScheduledExecutorService executorService = Executors
        .newScheduledThreadPool(Math.max(count, MIN_THREAD_NUM), r -> {
            Thread t = new Thread(r);
            t.setName("com.alibaba.nacos.client.Worker");
            t.setDaemon(true);
            return t;
        });
    agent.setExecutor(executorService);
    agent.start();
}
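The worker pool above is worth a closer look: every thread gets the same fixed name and is marked as a daemon. Here is a self-contained sketch of that thread-factory pattern using only JDK APIs (the thread-count heuristic is simplified; `ThreadUtils.getSuitableThreadCount` in Nacos is also cores-based):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class WorkerPoolSketch {

    // Mirrors the factory passed to newScheduledThreadPool in ClientWorker:
    // fixed thread name, daemon threads.
    public static ScheduledExecutorService newWorkerPool(int minThreads) {
        int count = Math.max(Runtime.getRuntime().availableProcessors(), minThreads);
        return Executors.newScheduledThreadPool(count, r -> {
            Thread t = new Thread(r);
            t.setName("com.alibaba.nacos.client.Worker");
            // daemon threads do not keep the JVM alive: if main exits,
            // the long-polling worker exits with it
            t.setDaemon(true);
            return t;
        });
    }
}
```

The daemon flag is also why the official listener example later in this post has to keep the main thread alive.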

The publishConfig of the inner class ConfigRpcTransportClient

Its logic is simply to build a ConfigPublishRequest and send it to the server.

@Override
public boolean publishConfig(String dataId, String group, String tenant, String appName, String tag,
        String betaIps, String content, String encryptedDataKey, String casMd5, String type)
        throws NacosException {
    try {
        ConfigPublishRequest request = new ConfigPublishRequest(dataId, group, tenant, content);
        request.setCasMd5(casMd5);
        request.putAdditionalParam(TAG_PARAM, tag);
        request.putAdditionalParam(APP_NAME_PARAM, appName);
        request.putAdditionalParam(BETAIPS_PARAM, betaIps);
        request.putAdditionalParam(TYPE_PARAM, type);
        request.putAdditionalParam(ENCRYPTED_DATA_KEY_PARAM, encryptedDataKey == null ? "" : encryptedDataKey);
        ConfigPublishResponse response = (ConfigPublishResponse) requestProxy(getOneRunningClient(), request);
        if (!response.isSuccess()) {
            LOGGER.warn("[{}] [publish-single] fail, dataId={}, group={}, tenant={}, code={}, msg={}",
                    this.getName(), dataId, group, tenant, response.getErrorCode(), response.getMessage());
            return false;
        } else {
            LOGGER.info("[{}] [publish-single] ok, dataId={}, group={}, tenant={}, config={}", getName(),
                    dataId, group, tenant, ContentUtils.truncateContent(content));
            return true;
        }
    } catch (Exception e) {
        LOGGER.warn("[{}] [publish-single] error, dataId={}, group={}, tenant={}, code={}, msg={}",
                this.getName(), dataId, group, tenant, "unknown", e.getMessage());
        return false;
    }
}

We will not follow requestProxy further; that is the gRPC transport code.

Config publish: server side

Since the client built a ConfigPublishRequest, the server dispatches it to ConfigPublishRequestHandler.

You can think of these handlers as the controller layer in MVC, just implemented over gRPC.

  • ConfigPublishRequestHandler.handle

beta: gray release from the console; the config can be rolled out to a given set of client IPs only.

tag: a tag set on the console.

casMd5: the MD5 of the config version the client expects; used for compare-and-swap publishing.

@Override
@TpsControl(pointName = "ConfigPublish")
@Secured(action = ActionTypes.WRITE, signType = SignType.CONFIG)
public ConfigPublishResponse handle(ConfigPublishRequest request, RequestMeta meta) throws NacosException {
    
    try {
        String dataId = request.getDataId();
        String group = request.getGroup();
        String content = request.getContent();
        final String tenant = request.getTenant();
        
        final String srcIp = meta.getClientIp();
        final String requestIpApp = request.getAdditionParam("requestIpApp");
        final String tag = request.getAdditionParam("tag");
        final String appName = request.getAdditionParam("appName");
        final String type = request.getAdditionParam("type");
        final String srcUser = request.getAdditionParam("src_user");
        final String encryptedDataKey = request.getAdditionParam("encryptedDataKey");
        
        // check tenant
        ParamUtils.checkParam(dataId, group, "datumId", content);
        ParamUtils.checkParam(tag);
        Map<String, Object> configAdvanceInfo = new HashMap<>(10);
        MapUtil.putIfValNoNull(configAdvanceInfo, "config_tags", request.getAdditionParam("config_tags"));
        MapUtil.putIfValNoNull(configAdvanceInfo, "desc", request.getAdditionParam("desc"));
        MapUtil.putIfValNoNull(configAdvanceInfo, "use", request.getAdditionParam("use"));
        MapUtil.putIfValNoNull(configAdvanceInfo, "effect", request.getAdditionParam("effect"));
        MapUtil.putIfValNoNull(configAdvanceInfo, "type", type);
        MapUtil.putIfValNoNull(configAdvanceInfo, "schema", request.getAdditionParam("schema"));
        ParamUtils.checkParam(configAdvanceInfo);
        
        if (AggrWhitelist.isAggrDataId(dataId)) {
            Loggers.REMOTE_DIGEST
                    .warn("[aggr-conflict] {} attempt to publish single data, {}, {}", srcIp, dataId, group);
            throw new NacosException(NacosException.NO_RIGHT, "dataId:" + dataId + " is aggr");
        }
        
        final Timestamp time = TimeUtils.getCurrentTime();
        ConfigInfo configInfo = new ConfigInfo(dataId, group, tenant, appName, content);
        configInfo.setMd5(request.getCasMd5());
        configInfo.setType(type);
        configInfo.setEncryptedDataKey(encryptedDataKey);
        String betaIps = request.getAdditionParam("betaIps");
        if (StringUtils.isBlank(betaIps)) {
            if (StringUtils.isBlank(tag)) {
                if (StringUtils.isNotBlank(request.getCasMd5())) {
                    boolean casSuccess = configInfoPersistService
                            .insertOrUpdateCas(srcIp, srcUser, configInfo, time, configAdvanceInfo, false);
                    if (!casSuccess) {
                        return ConfigPublishResponse.buildFailResponse(ResponseCode.FAIL.getCode(),
                                "Cas publish fail,server md5 may have changed.");
                    }
                } else {
                    // with no tag and no beta IPs, a first-time publish lands here
                    configInfoPersistService.insertOrUpdate(srcIp, srcUser, configInfo, time, configAdvanceInfo, false);
                }
                ConfigChangePublisher.notifyConfigChange(
                        new ConfigDataChangeEvent(false, dataId, group, tenant, time.getTime()));
            } else {
                if (StringUtils.isNotBlank(request.getCasMd5())) {
                    boolean casSuccess = configInfoTagPersistService
                            .insertOrUpdateTagCas(configInfo, tag, srcIp, srcUser, time, false);
                    if (!casSuccess) {
                        return ConfigPublishResponse.buildFailResponse(ResponseCode.FAIL.getCode(),
                                "Cas publish tag config fail,server md5 may have changed.");
                    }
                } else {
                    configInfoTagPersistService.insertOrUpdateTag(configInfo, tag, srcIp, srcUser, time, false);
                    
                }
                ConfigChangePublisher.notifyConfigChange(
                        new ConfigDataChangeEvent(false, dataId, group, tenant, tag, time.getTime()));
            }
        } else {
            // beta publish
            if (StringUtils.isNotBlank(request.getCasMd5())) {
                boolean casSuccess = configInfoBetaPersistService
                        .insertOrUpdateBetaCas(configInfo, betaIps, srcIp, srcUser, time, false);
                if (!casSuccess) {
                    return ConfigPublishResponse.buildFailResponse(ResponseCode.FAIL.getCode(),
                            "Cas publish beta config fail,server md5 may have changed.");
                }
            } else {
                configInfoBetaPersistService.insertOrUpdateBeta(configInfo, betaIps, srcIp, srcUser, time, false);
                
            }
            ConfigChangePublisher
                    .notifyConfigChange(new ConfigDataChangeEvent(true, dataId, group, tenant, time.getTime()));
        }
        ConfigTraceService
                .logPersistenceEvent(dataId, group, tenant, requestIpApp, time.getTime(), InetUtils.getSelfIP(),
                        ConfigTraceService.PERSISTENCE_EVENT_PUB, content);
        return ConfigPublishResponse.buildSuccessResponse();
    } catch (Exception e) {
        Loggers.REMOTE_DIGEST.error("[ConfigPublishRequestHandler] publish config error ,request ={}", request, e);
        return ConfigPublishResponse.buildFailResponse(
                (e instanceof NacosException) ? ((NacosException) e).getErrCode() : ResponseCode.FAIL.getCode(),
                e.getMessage());
    }
}
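To see what the casMd5 branches guard against, here is a minimal in-memory sketch of compare-and-swap publishing. This is not Nacos code: the real persist services express the same check as a conditional SQL update, while this sketch uses a ConcurrentMap; the md5 helper is a plain JDK MessageDigest:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CasPublishSketch {

    private final ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

    static String md5(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // The write succeeds only if casMd5 matches the md5 of the content
    // currently stored under the key; otherwise another publisher won.
    public boolean publishCas(String key, String casMd5, String newContent) {
        String current = store.get(key);
        if (current == null) {
            // no existing config: only a plain publish (no expected md5) succeeds
            return casMd5 == null && store.putIfAbsent(key, newContent) == null;
        }
        if (!md5(current).equals(casMd5)) {
            return false; // "server md5 may have changed"
        }
        return store.replace(key, current, newContent);
    }
}
```

A stale casMd5 fails exactly like the "Cas publish fail, server md5 may have changed" response above.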
  • ExternalConfigInfoPersistServiceImpl.insertOrUpdate

insertOrUpdate has two implementations: EmbeddedConfigInfoPersistServiceImpl, backed by the embedded Derby database, and the external-database implementation for databases such as MySQL.

The walkthrough below follows the MySQL (external) path.

public void insertOrUpdate(String srcIp, String srcUser, ConfigInfo configInfo, Timestamp time,
                           Map<String, Object> configAdvanceInfo, boolean notify) {
    try {
        addConfigInfo(srcIp, srcUser, configInfo, time, configAdvanceInfo, notify);
    } catch (DataIntegrityViolationException ive) { // Unique constraint conflict
        updateConfigInfo(configInfo, srcIp, srcUser, time, configAdvanceInfo, notify);
    }
}
@Override
public void addConfigInfo(final String srcIp, final String srcUser, final ConfigInfo configInfo,
                          final Timestamp time, final Map<String, Object> configAdvanceInfo, final boolean notify) {
    boolean result = tjt.execute(status -> {
        try {
            long configId = addConfigInfoAtomic(-1, srcIp, srcUser, configInfo, time, configAdvanceInfo);
            String configTags = configAdvanceInfo == null ? null : (String) configAdvanceInfo.get("config_tags");
            addConfigTagsRelation(configId, configTags, configInfo.getDataId(), configInfo.getGroup(),
                                  configInfo.getTenant());

            historyConfigInfoPersistService.insertConfigHistoryAtomic(0, configInfo, srcIp, srcUser, time, "I");
        } catch (CannotGetJdbcConnectionException e) {
            LogUtil.FATAL_LOG.error("[db-error] " + e, e);
            throw e;
        }
        return Boolean.TRUE;
    });
}
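The shape of insertOrUpdate above is a classic "optimistic insert, fall back to update on a unique-constraint violation". A minimal in-memory sketch of that pattern (the DuplicateKeyException here is a stand-in for Spring's DataIntegrityViolationException, and the Map stands in for the config_info table):

```java
import java.util.HashMap;
import java.util.Map;

public class UpsertSketch {

    // stands in for Spring's DataIntegrityViolationException
    static class DuplicateKeyException extends RuntimeException {
    }

    private final Map<String, String> table = new HashMap<>();

    // mimics addConfigInfo: the insert fails when the unique key exists
    void insert(String key, String content) {
        if (table.containsKey(key)) {
            throw new DuplicateKeyException();
        }
        table.put(key, content);
    }

    // mimics updateConfigInfo
    void update(String key, String content) {
        table.put(key, content);
    }

    // same shape as insertOrUpdate: try the insert first, and let the
    // unique-constraint violation route the call to an update
    public void insertOrUpdate(String key, String content) {
        try {
            insert(key, content);
        } catch (DuplicateKeyException e) {
            update(key, content);
        }
    }

    public String get(String key) {
        return table.get(key);
    }
}
```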

Going further down is just the database insert and update statements, so we stop here; dig in yourself if you are interested.

Config fetch: client side

configService.getConfig(dataId, group, 5000);

Next, let's look at the code that fetches a config.

  • NacosConfigService.getConfig

getConfigInner first tries the local failover file; if that is empty it pulls from the server, and if the server call fails it falls back to the local snapshot.

@Override
public String getConfig(String dataId, String group, long timeoutMs) throws NacosException {
    return getConfigInner(namespace, dataId, group, timeoutMs);
}
private String getConfigInner(String tenant, String dataId, String group, long timeoutMs) throws NacosException {
    // fall back to the default group and trim whitespace
    group = blank2defaultGroup(group);
    // validate dataId and group
    ParamUtils.checkKeyParam(dataId, group);
    // build the response object
    ConfigResponse cr = new ConfigResponse();

    cr.setDataId(dataId);
    cr.setTenant(tenant);
    cr.setGroup(group);

    // use local config first
    // read the failover config from local disk
    String content = LocalConfigInfoProcessor.getFailover(worker.getAgentName(), dataId, group, tenant);
    if (content != null) {
        LOGGER.warn("[{}] [get-config] get failover ok, dataId={}, group={}, tenant={}, config={}",
                    worker.getAgentName(), dataId, group, tenant, ContentUtils.truncateContent(content));
        cr.setContent(content);
        String encryptedDataKey = LocalEncryptedDataKeyProcessor
            .getEncryptDataKeyFailover(agent.getName(), dataId, group, tenant);
        cr.setEncryptedDataKey(encryptedDataKey);
        configFilterChainManager.doFilter(null, cr);
        content = cr.getContent();
        return content;
    }

    try {
        ConfigResponse response = worker.getServerConfig(dataId, group, tenant, timeoutMs, false);
        cr.setContent(response.getContent());
        cr.setEncryptedDataKey(response.getEncryptedDataKey());
        configFilterChainManager.doFilter(null, cr);
        content = cr.getContent();

        return content;
    } catch (NacosException ioe) {
        if (NacosException.NO_RIGHT == ioe.getErrCode()) {
            throw ioe;
        }
        LOGGER.warn("[{}] [get-config] get from server error, dataId={}, group={}, tenant={}, msg={}",
                    worker.getAgentName(), dataId, group, tenant, ioe.toString());
    }

    LOGGER.warn("[{}] [get-config] get snapshot ok, dataId={}, group={}, tenant={}, config={}",
                worker.getAgentName(), dataId, group, tenant, ContentUtils.truncateContent(content));
    content = LocalConfigInfoProcessor.getSnapshot(worker.getAgentName(), dataId, group, tenant);
    cr.setContent(content);
    String encryptedDataKey = LocalEncryptedDataKeyProcessor
        .getEncryptDataKeyFailover(agent.getName(), dataId, group, tenant);
    cr.setEncryptedDataKey(encryptedDataKey);
    configFilterChainManager.doFilter(null, cr);
    content = cr.getContent();
    return content;
}
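The three-tier lookup order in getConfigInner (failover file, then server, then snapshot) can be condensed into a small sketch. The Supplier parameters are hypothetical stand-ins for LocalConfigInfoProcessor.getFailover, the worker's server query, and LocalConfigInfoProcessor.getSnapshot:

```java
import java.util.function.Supplier;

public class ConfigLookupSketch {

    // Order mirrors getConfigInner: the local failover file wins
    // unconditionally; the on-disk snapshot is used only when the
    // server call fails.
    public static String getConfig(Supplier<String> failover,
                                   Supplier<String> server,
                                   Supplier<String> snapshot) {
        String content = failover.get();
        if (content != null) {
            return content;
        }
        try {
            return server.get();
        } catch (RuntimeException serverError) {
            return snapshot.get(); // degrade to the last saved snapshot
        }
    }
}
```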
  • ClientWorker.getServerConfig

This ClientWorker is the same class we saw in the publish path; its agent field is the RPC client that talks to the server.

queryConfig below builds a ConfigQueryRequest, sends it to the server, and unpacks the response content.

After that, we move on to the server side.

private ConfigTransportClient agent;

public ConfigResponse getServerConfig(String dataId, String group, String tenant, long readTimeout, boolean notify)
    throws NacosException {
    if (StringUtils.isBlank(group)) {
        group = Constants.DEFAULT_GROUP;
    }
    return this.agent.queryConfig(dataId, group, tenant, readTimeout, notify);
}

@Override
public ConfigResponse queryConfig(String dataId, String group, String tenant, long readTimeouts, boolean notify)
    throws NacosException {
    ConfigQueryRequest request = ConfigQueryRequest.build(dataId, group, tenant);
    request.putHeader(NOTIFY_HEADER, String.valueOf(notify));
    RpcClient rpcClient = getOneRunningClient();
    if (notify) {
        CacheData cacheData = cacheMap.get().get(GroupKey.getKeyTenant(dataId, group, tenant));
        if (cacheData != null) {
            rpcClient = ensureRpcClient(String.valueOf(cacheData.getTaskId()));
        }
    }
    ConfigQueryResponse response = (ConfigQueryResponse) requestProxy(rpcClient, request, readTimeouts);

    ConfigResponse configResponse = new ConfigResponse();
    if (response.isSuccess()) {
        LocalConfigInfoProcessor.saveSnapshot(this.getName(), dataId, group, tenant, response.getContent());
        configResponse.setContent(response.getContent());
        String configType;
        if (StringUtils.isNotBlank(response.getContentType())) {
            configType = response.getContentType();
        } else {
            configType = ConfigType.TEXT.getType();
        }
        configResponse.setConfigType(configType);
        String encryptedDataKey = response.getEncryptedDataKey();
        LocalEncryptedDataKeyProcessor
            .saveEncryptDataKeySnapshot(agent.getName(), dataId, group, tenant, encryptedDataKey);
        configResponse.setEncryptedDataKey(encryptedDataKey);
        return configResponse;
    } else if (response.getErrorCode() == ConfigQueryResponse.CONFIG_NOT_FOUND) {
        LocalConfigInfoProcessor.saveSnapshot(this.getName(), dataId, group, tenant, null);
        LocalEncryptedDataKeyProcessor.saveEncryptDataKeySnapshot(agent.getName(), dataId, group, tenant, null);
        return configResponse;
    } else if (response.getErrorCode() == ConfigQueryResponse.CONFIG_QUERY_CONFLICT) {
        LOGGER.error(
            "[{}] [sub-server-error] get server config being modified concurrently, dataId={}, group={}, "
            + "tenant={}", this.getName(), dataId, group, tenant);
        throw new NacosException(NacosException.CONFLICT,
                                 "data being modified, dataId=" + dataId + ",group=" + group + ",tenant=" + tenant);
    } else {
        LOGGER.error("[{}] [sub-server-error]  dataId={}, group={}, tenant={}, code={}", this.getName(), dataId,
                     group, tenant, response);
        throw new NacosException(response.getErrorCode(),
                                 "http error, code=" + response.getErrorCode() + ",msg=" + response.getMessage() + ",dataId=" + dataId + ",group=" + group
                                 + ",tenant=" + tenant);

    }
}

Config fetch: server side

Since the client built a ConfigQueryRequest, the server dispatches it to the handle method of ConfigQueryHandler.

  • ConfigQueryHandler.handle

handle delegates to getContext, which is long (150+ lines). After acquiring the read lock, it applies the same beta/tag branching and then reads from the database or from disk. We will not trace every branch here.

@Override
@TpsControl(pointName = "ConfigQuery")
@Secured(action = ActionTypes.READ, signType = SignType.CONFIG)
public ConfigQueryResponse handle(ConfigQueryRequest request, RequestMeta meta) throws NacosException {
    try {
        return getContext(request, meta, request.isNotify());
    } catch (Exception e) {
        return ConfigQueryResponse.buildFailResponse(ResponseCode.FAIL.getCode(), e.getMessage());
    }   
}

private static int tryConfigReadLock(String groupKey) {
        
    // Lock failed by default.
    int lockResult = -1;

    // Try to get lock times, max value: 10;
    // try to acquire up to 10 times
    for (int i = TRY_GET_LOCK_TIMES; i >= 0; --i) {
        lockResult = ConfigCacheService.tryReadLock(groupKey);

        // The data is non-existent.
        if (0 == lockResult) {
            break;
        }

        // Success
        if (lockResult > 0) {
            break;
        }

        // Retry.
        if (i > 0) {
            try {
                // sleep 1ms to yield the CPU time slice
                Thread.sleep(1);
            } catch (Exception e) {
                LogUtil.PULL_CHECK_LOG.error("An Exception occurred while thread sleep", e);
            }
        }
    }

    return lockResult;
}
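The retry loop above can be modeled with a JDK ReentrantReadWriteLock. This is an analogy, not the real implementation (Nacos uses its own lightweight lock on the CacheItem, and the 0 "config no longer exists" return is omitted here), but the try-then-sleep shape is the same:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadLockRetrySketch {

    static final int TRY_GET_LOCK_TIMES = 9; // 10 attempts in total

    // Returns 1 on success and -1 when a writer held the lock through
    // all retries.
    public static int tryReadLock(ReentrantReadWriteLock lock) throws InterruptedException {
        for (int i = TRY_GET_LOCK_TIMES; i >= 0; --i) {
            if (lock.readLock().tryLock()) {
                return 1;
            }
            if (i > 0) {
                Thread.sleep(1); // yield the time slice, as the server loop does
            }
        }
        return -1;
    }
}
```

With ~1ms between attempts, a reader gives a concurrent dump/write about 10ms to finish before reporting the CONFIG_QUERY_CONFLICT-style failure.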

private ConfigQueryResponse getContext(ConfigQueryRequest configQueryRequest, RequestMeta meta, boolean notify)
            throws UnsupportedEncodingException {
    String dataId = configQueryRequest.getDataId();
    String group = configQueryRequest.getGroup();
    String tenant = configQueryRequest.getTenant();
    String clientIp = meta.getClientIp();
    String tag = configQueryRequest.getTag();
    ConfigQueryResponse response = new ConfigQueryResponse();

    final String groupKey = GroupKey2
        .getKey(configQueryRequest.getDataId(), configQueryRequest.getGroup(), configQueryRequest.getTenant());

    String autoTag = configQueryRequest.getHeader(com.alibaba.nacos.api.common.Constants.VIPSERVER_TAG);

    String requestIpApp = meta.getLabels().get(CLIENT_APPNAME_HEADER);
    // acquire the read lock first to avoid racing with writes
    int lockResult = tryConfigReadLock(groupKey);

    boolean isBeta = false;
    boolean isSli = false;
    // lock acquired successfully
    if (lockResult > 0) {
        //FileInputStream fis = null;
        try {
            String md5 = Constants.NULL;
            long lastModified = 0L;
            CacheItem cacheItem = ConfigCacheService.getContentCache(groupKey);
            if (cacheItem != null) {
                if (cacheItem.isBeta()) {
                    if (cacheItem.getIps4Beta().contains(clientIp)) {
                        isBeta = true;
                    }
                }
                String configType = cacheItem.getType();
                response.setContentType((null != configType) ? configType : "text");
            }
            File file = null;
            ConfigInfoBase configInfoBase = null;
            PrintWriter out = null;
            if (isBeta) {
                md5 = cacheItem.getMd54Beta();
                lastModified = cacheItem.getLastModifiedTs4Beta();
                if (PropertyUtil.isDirectRead()) {
                    configInfoBase = configInfoBetaPersistService.findConfigInfo4Beta(dataId, group, tenant);
                } else {
                    file = DiskUtil.targetBetaFile(dataId, group, tenant);
                }
                response.setBeta(true);
            } else {
                if (StringUtils.isBlank(tag)) {
                    if (isUseTag(cacheItem, autoTag)) {
                        if (cacheItem != null) {
                            if (cacheItem.tagMd5 != null) {
                                md5 = cacheItem.tagMd5.get(autoTag);
                            }
                            if (cacheItem.tagLastModifiedTs != null) {
                                lastModified = cacheItem.tagLastModifiedTs.get(autoTag);
                            }
                        }
                        // read from the database
                        if (PropertyUtil.isDirectRead()) {
                            configInfoBase = configInfoTagPersistService.findConfigInfo4Tag(dataId, group, tenant, autoTag);
                        } else {
                            // read from disk
                            file = DiskUtil.targetTagFile(dataId, group, tenant, autoTag);
                        }
                        response.setTag(URLEncoder.encode(autoTag, Constants.ENCODE));

                    } else {
                        md5 = cacheItem.getMd5();
                        lastModified = cacheItem.getLastModifiedTs();
                        if (PropertyUtil.isDirectRead()) {
                            configInfoBase = configInfoPersistService.findConfigInfo(dataId, group, tenant);
                        } else {
                            file = DiskUtil.targetFile(dataId, group, tenant);
                        }
                        if (configInfoBase == null && fileNotExist(file)) {
                            // FIXME CacheItem
                            // No longer exists. It is impossible to simply calculate the push delayed. Here, simply record it as - 1.
                            ConfigTraceService.logPullEvent(dataId, group, tenant, requestIpApp, -1,
                                                            ConfigTraceService.PULL_EVENT_NOTFOUND, -1, clientIp, false);

                            // pullLog.info("[client-get] clientIp={}, {},
                            // no data",
                            // new Object[]{clientIp, groupKey});

                            response.setErrorInfo(ConfigQueryResponse.CONFIG_NOT_FOUND, "config data not exist");
                            return response;
                        }
                    }
                } else {
                    if (cacheItem != null) {
                        if (cacheItem.tagMd5 != null) {
                            md5 = cacheItem.tagMd5.get(tag);
                        }
                        if (cacheItem.tagLastModifiedTs != null) {
                            Long lm = cacheItem.tagLastModifiedTs.get(tag);
                            if (lm != null) {
                                lastModified = lm;
                            }
                        }
                    }
                    if (PropertyUtil.isDirectRead()) {
                        configInfoBase = configInfoTagPersistService.findConfigInfo4Tag(dataId, group, tenant, tag);
                    } else {
                        file = DiskUtil.targetTagFile(dataId, group, tenant, tag);
                    }
                    if (configInfoBase == null && fileNotExist(file)) {
                        // FIXME CacheItem
                        // No longer exists. It is impossible to simply calculate the push delayed. Here, simply record it as - 1.
                        ConfigTraceService.logPullEvent(dataId, group, tenant, requestIpApp, -1,
                                                        ConfigTraceService.PULL_EVENT_NOTFOUND, -1, clientIp, false);

                        // pullLog.info("[client-get] clientIp={}, {},
                        // no data",
                        // new Object[]{clientIp, groupKey});

                        response.setErrorInfo(ConfigQueryResponse.CONFIG_NOT_FOUND, "config data not exist");
                        return response;

                    }
                }
            }

            response.setMd5(md5);

            if (PropertyUtil.isDirectRead()) {
                response.setLastModified(lastModified);
                response.setContent(configInfoBase.getContent());
                response.setEncryptedDataKey(configInfoBase.getEncryptedDataKey());
                response.setResultCode(ResponseCode.SUCCESS.getCode());

            } else {
                //read from file
                String content = null;
                try {
                    content = readFileContent(file);
                    response.setContent(content);
                    response.setLastModified(lastModified);
                    response.setResultCode(ResponseCode.SUCCESS.getCode());
                    if (isBeta) {
                        response.setEncryptedDataKey(cacheItem.getEncryptedDataKeyBeta());
                    } else {
                        response.setEncryptedDataKey(cacheItem.getEncryptedDataKey());
                    }
                } catch (IOException e) {
                    response.setErrorInfo(ResponseCode.FAIL.getCode(), e.getMessage());
                    return response;
                }

            }

            LogUtil.PULL_CHECK_LOG.warn("{}|{}|{}|{}", groupKey, clientIp, md5, TimeUtils.getCurrentTimeStr());

            final long delayed = System.currentTimeMillis() - lastModified;

            // TODO distinguish pull-get && push-get
            /*
                 Otherwise, delayed cannot be used as the basis of push delay directly,
                 because the delayed value of active get requests is very large.
                 */
            ConfigTraceService.logPullEvent(dataId, group, tenant, requestIpApp, lastModified,
                                            ConfigTraceService.PULL_EVENT_OK, notify ? delayed : -1, clientIp, notify);

        } finally {
            releaseConfigReadLock(groupKey);
        }
    } else if (lockResult == 0) {
        // lockResult == 0: the config no longer exists
        // FIXME CacheItem No longer exists. It is impossible to simply calculate the push delayed. Here, simply record it as - 1.
        ConfigTraceService
            .logPullEvent(dataId, group, tenant, requestIpApp, -1, ConfigTraceService.PULL_EVENT_NOTFOUND, -1,
                          clientIp, notify);
        response.setErrorInfo(ConfigQueryResponse.CONFIG_NOT_FOUND, "config data not exist");

    } else {
        PULL_LOG.info("[client-get] clientIp={}, {}, get data during dump", clientIp, groupKey);
        response.setErrorInfo(ConfigQueryResponse.CONFIG_QUERY_CONFLICT,
                              "requested file is being modified, please try later.");
    }
    return response;
}

Config listening (pull): client side

The config server and client interact in two ways: server push and client pull. The listening flow below is the client-pull path.

  • Listening to a config

(1) From the local cache (CacheData) list, collect the entries that have listeners registered. An entry is identified by dataId + group; together with the tenant from the client configuration, this forms a unique key.

(2) Shard these entries. taskId is the shard number: each taskId covers at most 3000 entries and gets its own gRPC connection. Each listener targets one dataId + group.

(3) Iterate over the shards and ask the server which entries have changed.

(4) The server replies with the list of changed entries.

(5) For each changed entry, fetch the latest content and store it in client memory and on disk, updating CacheData and the snapshot.
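The taskId assignment in step (2) boils down to integer division of the current cache size. A self-contained sketch of that bookkeeping (3000 matches the default of ParamUtil.getPerTaskConfigSize(); the Map stands in for ClientWorker's cacheMap):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TaskShardSketch {

    // matches the default of ParamUtil.getPerTaskConfigSize()
    static final int PER_TASK_CONFIG_SIZE = 3000;

    private final Map<String, Integer> cacheMap = new LinkedHashMap<>();

    // Mirrors ClientWorker's assignment: taskId = cache size / per-task
    // size, so the first 3000 listened keys share taskId 0, the next
    // 3000 get taskId 1, and so on.
    public int assign(String dataId, String group, String tenant) {
        String groupKey = dataId + "+" + group + "+" + tenant;
        Integer existing = cacheMap.get(groupKey);
        if (existing != null) {
            return existing; // an already-cached key keeps its taskId
        }
        int taskId = cacheMap.size() / PER_TASK_CONFIG_SIZE;
        cacheMap.put(groupKey, taskId);
        return taskId;
    }
}
```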

To have Nacos push config changes to you, use the dynamic listening API:

public void addListener(String dataId, String group, Listener listener) ;


String serverAddr = "{serverAddr}";
String dataId = "{dataId}";
String group = "{group}";
Properties properties = new Properties();
properties.put("serverAddr", serverAddr);
ConfigService configService = NacosFactory.createConfigService(properties);
String content = configService.getConfig(dataId, group, 5000);
System.out.println(content);
configService.addListener(dataId, group, new Listener() {
	@Override
	public void receiveConfigInfo(String configInfo) {
		System.out.println("receive: " + configInfo);
	}
	@Override
	public Executor getExecutor() {
		return null;
	}
});

// Demo only: keep the main thread alive. The subscription runs on daemon
// threads, which exit when the main thread exits. Not needed in real code.
while (true) {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

This is the example from the official docs.

  • NacosConfigService.addListener
@Override
public void addListener(String dataId, String group, Listener listener) throws NacosException {
    worker.addTenantListeners(dataId, group, Arrays.asList(listener));
}

  • ClientWorker.addTenantListeners

ClientWorker is the same class we have been reading. addTenantListeners first looks up (or creates) the CacheData entry, binds the listeners to it, and then calls agent.notifyListenConfig() to wake up the listening task.

public void addTenantListeners(String dataId, String group, List<? extends Listener> listeners)
            throws NacosException {
    group = blank2defaultGroup(group);
    String tenant = agent.getTenant();
    // look up or build the CacheData keyed by dataId, group, and tenant
    CacheData cache = addCacheDataIfAbsent(dataId, group, tenant);
    synchronized (cache) {
        // bind the listeners
        for (Listener listener : listeners) {
            cache.addListener(listener);
        }
        cache.setSyncWithServer(false);
        // wake up the listen task via the blocking-queue "bell"
        agent.notifyListenConfig();
    }

}
private final AtomicReference<Map<String, CacheData>> cacheMap = new AtomicReference<>(new HashMap<>());

public CacheData addCacheDataIfAbsent(String dataId, String group, String tenant) throws NacosException {
	// look up the cache by its unique key
    CacheData cache = getCache(dataId, group, tenant);
    if (null != cache) {
        return cache;
    }
    String key = GroupKey.getKeyTenant(dataId, group, tenant);
    synchronized (cacheMap) {
        // build a CacheData under the lock
        CacheData cacheFromMap = getCache(dataId, group, tenant);
        // multiple listeners on the same dataid+group and race condition,so
        // double check again
        // other listener thread beat me to set to cacheMap
        if (null != cacheFromMap) {
            cache = cacheFromMap;
            // reset so that server not hang this check
            cache.setInitializing(true);
        } else {
            cache = new CacheData(configFilterChainManager, agent.getName(), dataId, group, tenant);
            // To limit the payload of a single listen request, each task covers at most 3,000 entries; in practice an application rarely subscribes to that many configs.
            int taskId = cacheMap.get().size() / (int) ParamUtil.getPerTaskConfigSize();
            // assign a taskId (shard index)
            cache.setTaskId(taskId);
            // fix issue # 1317
            if (enableRemoteSyncConfig) {
                // if remote sync is enabled, fetch the current content from the server
                ConfigResponse response = getServerConfig(dataId, group, tenant, 3000L, false);
                cache.setContent(response.getContent());
            }
        }
		// publish the updated map (copy-on-write)
        Map<String, CacheData> copy = new HashMap<>(this.cacheMap.get());
        copy.put(key, cache);
        cacheMap.set(copy);
    }
    LOGGER.info("[{}] [subscribe] {}", agent.getName(), key);
	// metrics for the monitoring dashboard
    MetricsMonitor.getListenConfigCountMonitor().set(cacheMap.get().size());

    return cache;
}
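addCacheDataIfAbsent combines double-checked locking with copy-on-write publication of cacheMap: readers take lock-free snapshots while writers copy under a lock. A minimal sketch of that pattern, with illustrative names:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Double-checked, copy-on-write insertion, in the spirit of
// ClientWorker.addCacheDataIfAbsent: readers see a consistent snapshot via
// get() without locking; writers mutate a copy and publish it atomically.
public class CowCacheDemo {

    private final AtomicReference<Map<String, String>> cacheMap =
            new AtomicReference<>(new HashMap<>());

    public String addIfAbsent(String key, String value) {
        String existing = cacheMap.get().get(key);   // fast path, no lock
        if (existing != null) {
            return existing;
        }
        synchronized (cacheMap) {
            existing = cacheMap.get().get(key);      // double check under the lock
            if (existing != null) {
                return existing;
            }
            Map<String, String> copy = new HashMap<>(cacheMap.get());
            copy.put(key, value);                    // mutate the copy...
            cacheMap.set(copy);                      // ...then publish it atomically
            return value;
        }
    }

    public int size() {
        return cacheMap.get().size();
    }
}
```

The double check matters because two listener threads can race on the same dataId+group; only the first one inside the lock creates the entry.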
  • ClientWorker.notifyListenConfig

This wakes up the config-listen task through a blocking queue: offering a "bell" object into the capacity-1 queue is a simple way to wake the polling loop.

The agent is a ConfigRpcTransportClient, created in the ClientWorker constructor below, which also builds a scheduled thread pool, hands it to the agent, and calls agent.start().

private final BlockingQueue<Object> listenExecutebell = new ArrayBlockingQueue<Object>(1);
private Object bellItem = new Object();

@Override
public void notifyListenConfig() {
    listenExecutebell.offer(bellItem);
}

public ClientWorker(final ConfigFilterChainManager configFilterChainManager, ServerListManager serverListManager,
                    final Properties properties) throws NacosException {
    this.configFilterChainManager = configFilterChainManager;

    init(properties);

    agent = new ConfigRpcTransportClient(properties, serverListManager);
    int count = ThreadUtils.getSuitableThreadCount(THREAD_MULTIPLE);
    // initialize a scheduled thread pool
    ScheduledExecutorService executorService = Executors
        .newScheduledThreadPool(Math.max(count, MIN_THREAD_NUM), r -> {
            Thread t = new Thread(r);
            t.setName("com.alibaba.nacos.client.Worker");
            t.setDaemon(true);
            return t;
        });
    agent.setExecutor(executorService);
    agent.start();

}
  • ConfigTransportClient.start

Two scheduled tasks are started here. The first handles authentication: it logs in immediately and then refreshes the login every securityInfoRefreshIntervalMills milliseconds (5 seconds by default).

The second runs the config-listen loop: it starts immediately and blocks on listenExecutebell. If the bell is rung it executes right away; otherwise the poll times out after 5 seconds and the loop runs anyway.

public void start() throws NacosException {
    securityProxy.login(this.properties);
    this.executor.scheduleWithFixedDelay(() -> securityProxy.login(properties), 0,
            this.securityInfoRefreshIntervalMills, TimeUnit.MILLISECONDS);
    startInternal();
}

// The scheduled task below polls the bell queue: a bellItem triggers an immediate run; otherwise the poll times out after 5 seconds and the loop runs anyway.
public void startInternal() {
    executor.schedule(() -> {
        while (!executor.isShutdown() && !executor.isTerminated()) {
            try {
                listenExecutebell.poll(5L, TimeUnit.SECONDS);
                if (executor.isShutdown() || executor.isTerminated()) {
                    continue;
                }
                executeConfigListen();
            } catch (Exception e) {
                LOGGER.error("[ rpc listen execute ] [rpc listen] exception", e);
            }
        }
    }, 0L, TimeUnit.MILLISECONDS);

}
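The bell mechanism above can be reproduced in isolation: a capacity-1 ArrayBlockingQueue coalesces any burst of notifications into at most one pending wake-up, and poll with a timeout doubles as the periodic fallback. A self-contained sketch with illustrative names:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Wake-up "bell" as used by ClientWorker: notifyListenConfig() offers a token,
// and the worker loop polls with a timeout so it still runs periodically
// even when no bell is rung.
public class ListenBellDemo {

    private final BlockingQueue<Object> bell = new ArrayBlockingQueue<>(1);
    private final Object bellItem = new Object();

    // Returns false when a wake-up is already pending: bursts coalesce into one.
    public boolean ring() {
        return bell.offer(bellItem);
    }

    // Returns true if woken by the bell, false if the timeout elapsed.
    public boolean awaitWakeup(long timeout, TimeUnit unit) {
        try {
            return bell.poll(timeout, unit) != null;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

Note that startInternal deliberately ignores the poll result: rung or timed out, executeConfigListen runs either way; the bell only shortens the wait.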

The executeConfigListen method is long, so it is pulled out and shown on its own.

@Override
public void executeConfigListen() {
	// caches that need a listen relationship established
    Map<String, List<CacheData>> listenCachesMap = new HashMap<String, List<CacheData>>(16);
    // caches whose listen relationship should be removed
    Map<String, List<CacheData>> removeListenCachesMap = new HashMap<String, List<CacheData>>(16);
    long now = System.currentTimeMillis();
    // whether a full sync is due (every 5 minutes)
    boolean needAllSync = now - lastAllSyncTime >= ALL_SYNC_INTERNAL;
    for (CacheData cache : cacheMap.get().values()) {

        synchronized (cache) {

            // if the cache is already in sync with the server, no server check is needed
            //check local listeners consistent.
            if (cache.isSyncWithServer()) {
                // in sync with the server: just compare md5 against each listener
                cache.checkListenerMd5();
                if (!needAllSync) {
                    continue;
                }
            }
			// split the caches into two maps: those to listen on and those to stop listening on
            if (!CollectionUtils.isEmpty(cache.getListeners())) {
                //get listen  config
                if (!cache.isUseLocalConfigInfo()) {
                    List<CacheData> cacheDatas = listenCachesMap.get(String.valueOf(cache.getTaskId()));
                    if (cacheDatas == null) {
                        cacheDatas = new LinkedList<>();
                        listenCachesMap.put(String.valueOf(cache.getTaskId()), cacheDatas);
                    }
                    cacheDatas.add(cache);

                }
            } else if (CollectionUtils.isEmpty(cache.getListeners())) {

                if (!cache.isUseLocalConfigInfo()) {
                    List<CacheData> cacheDatas = removeListenCachesMap.get(String.valueOf(cache.getTaskId()));
                    if (cacheDatas == null) {
                        cacheDatas = new LinkedList<>();
                        removeListenCachesMap.put(String.valueOf(cache.getTaskId()), cacheDatas);
                    }
                    cacheDatas.add(cache);

                }
            }
        }

    }

    boolean hasChangedKeys = false;

    if (!listenCachesMap.isEmpty()) {
        for (Map.Entry<String, List<CacheData>> entry : listenCachesMap.entrySet()) {
            String taskId = entry.getKey();
            Map<String, Long> timestampMap = new HashMap<>(listenCachesMap.size() * 2);

            List<CacheData> listenCaches = entry.getValue();
            for (CacheData cacheData : listenCaches) {
                // record the last-modified timestamp of each groupKey; used further down
                timestampMap.put(GroupKey.getKeyTenant(cacheData.dataId, cacheData.group, cacheData.tenant),
                                 cacheData.getLastModifiedTs().longValue());
            }
			// build a batch listen request
            ConfigBatchListenRequest configChangeListenRequest = buildConfigRequest(listenCaches);
            configChangeListenRequest.setListen(true);
            try {
                // each taskId maps to one connection
                RpcClient rpcClient = ensureRpcClient(taskId);
                // call the server and get back the set of changed configs
                ConfigChangeBatchListenResponse configChangeBatchListenResponse = (ConfigChangeBatchListenResponse) requestProxy(rpcClient, configChangeListenRequest);
                if (configChangeBatchListenResponse != null && configChangeBatchListenResponse.isSuccess()) {

                    Set<String> changeKeys = new HashSet<String>();
                    //handle changed keys,notify listener
                    if (!CollectionUtils.isEmpty(configChangeBatchListenResponse.getChangedConfigs())) {
                        hasChangedKeys = true;
                        // iterate over the changed configs and assemble each changeKey
                        for (ConfigChangeBatchListenResponse.ConfigContext changeConfig : configChangeBatchListenResponse
                             .getChangedConfigs()) {
                            String changeKey = GroupKey
                                .getKeyTenant(changeConfig.getDataId(), changeConfig.getGroup(),
                                              changeConfig.getTenant());
                            changeKeys.add(changeKey);
                            boolean isInitializing = cacheMap.get().get(changeKey).isInitializing();
                            // refresh the content for each changeKey
                            refreshContentAndCheck(changeKey, !isInitializing);
                        }

                    }

                    //handler content configs
                    for (CacheData cacheData : listenCaches) {
                        String groupKey = GroupKey
                            .getKeyTenant(cacheData.dataId, cacheData.group, cacheData.getTenant());
                        if (!changeKeys.contains(groupKey)) {
                            //sync:cache data md5 = server md5 && cache data md5 = all listeners md5.
                            synchronized (cacheData) {
                                if (!cacheData.getListeners().isEmpty()) {
									// reset the cached timestamp, marking this entry as in sync with the server
                                    Long previousTimesStamp = timestampMap.get(groupKey);
                                    if (previousTimesStamp != null && !cacheData.getLastModifiedTs().compareAndSet(previousTimesStamp, System.currentTimeMillis())) {
                                        continue;
                                    }
                                    cacheData.setSyncWithServer(true);
                                }
                            }
                        }

                        cacheData.setInitializing(false);
                    }

                }
            } catch (Exception e) {

                LOGGER.error("Async listen config change error ", e);
                try {
                    Thread.sleep(50L);
                } catch (InterruptedException interruptedException) {
                    //ignore
                }
            }
        }
    }

    if (!removeListenCachesMap.isEmpty()) {
        // handle the caches whose listeners have all been removed
        for (Map.Entry<String, List<CacheData>> entry : removeListenCachesMap.entrySet()) {
            String taskId = entry.getKey();
            List<CacheData> removeListenCaches = entry.getValue();
            ConfigBatchListenRequest configChangeListenRequest = buildConfigRequest(removeListenCaches);
            configChangeListenRequest.setListen(false);
            try {
                RpcClient rpcClient = ensureRpcClient(taskId);
                boolean removeSuccess = unListenConfigChange(rpcClient, configChangeListenRequest);
                if (removeSuccess) {
                    // the server no longer listens on these keys; drop local caches that have no listeners left
                    for (CacheData cacheData : removeListenCaches) {
                        synchronized (cacheData) {
                            if (cacheData.getListeners().isEmpty()) {
                                ClientWorker.this
                                    .removeCache(cacheData.dataId, cacheData.group, cacheData.tenant);
                            }
                        }
                    }
                }

            } catch (Exception e) {
                LOGGER.error("async remove listen config change error ", e);
            }
            try {
                Thread.sleep(50L);
            } catch (InterruptedException interruptedException) {
                //ignore
            }
        }
    }

    if (needAllSync) {
        lastAllSyncTime = now;
    }
    //If has changed keys,notify re sync md5.
    if (hasChangedKeys) {
        notifyListenConfig();
    }
}
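The compareAndSet on lastModifiedTs above guards against a change notification landing between the timestampMap snapshot and the moment the cache is marked in sync. That guard in isolation, with illustrative names:

```java
import java.util.concurrent.atomic.AtomicLong;

// Mirrors the lastModifiedTs CAS in executeConfigListen: only mark the cache
// as in sync if nothing has modified it since we snapshotted its timestamp.
public class SyncMarkDemo {

    final AtomicLong lastModifiedTs = new AtomicLong();
    volatile boolean syncWithServer = false;

    // snapshotTs is the value recorded in timestampMap before the RPC call
    boolean tryMarkInSync(long snapshotTs, long now) {
        if (!lastModifiedTs.compareAndSet(snapshotTs, now)) {
            return false; // the cache changed in the meantime; skip marking
        }
        syncWithServer = true;
        return true;
    }
}
```

If the CAS fails, executeConfigListen simply leaves syncWithServer false, so the next loop iteration re-checks that cache against the server.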

refreshContentAndCheck fetches the latest content from the server and stores it in the client-side CacheData:

private void refreshContentAndCheck(String groupKey, boolean notify) {
    if (cacheMap.get() != null && cacheMap.get().containsKey(groupKey)) {
        CacheData cache = cacheMap.get().get(groupKey);
        refreshContentAndCheck(cache, notify);
    }
}
private void refreshContentAndCheck(CacheData cacheData, boolean notify) {
    try {
        ConfigResponse response = getServerConfig(cacheData.dataId, cacheData.group, cacheData.tenant, 3000L,
                notify);
        cacheData.setContent(response.getContent());
        cacheData.setEncryptedDataKey(response.getEncryptedDataKey());
        if (null != response.getConfigType()) {
            cacheData.setType(response.getConfigType());
        }
        if (notify) {
            LOGGER.info("[{}] [data-received] dataId={}, group={}, tenant={}, md5={}, content={}, type={}",
                    agent.getName(), cacheData.dataId, cacheData.group, cacheData.tenant, cacheData.getMd5(),
                    ContentUtils.truncateContent(response.getContent()), response.getConfigType());
        }
        cacheData.checkListenerMd5();
    } catch (Exception e) {
        LOGGER.error("refresh content and check md5 fail ,dataId={},group={},tenant={} ", cacheData.dataId,
                cacheData.group, cacheData.tenant, e);
    }
}
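Both sides of this protocol hinge on an md5 of the config content (MD5Utils.md5Hex on the client, ConfigCacheService.isUptodate on the server). A JDK-only equivalent, just to make the comparison concrete; the assumption here is that Nacos hex-encodes a standard MD5 digest:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hex-encoded MD5 of the config content, comparable to what CacheData stores
// and what the server checks against the client-reported md5.
public class ConfigMd5Demo {

    static String md5Hex(String content) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(content.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                sb.append(String.format("%02x", b)); // two hex digits per byte
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always present in the JDK
        }
    }

    // The change check ultimately boils down to an md5 comparison.
    static boolean isUpToDate(String clientMd5, String serverContent) {
        return clientMd5.equals(md5Hex(serverContent));
    }
}
```

When the md5s differ, the server reports the key as changed, and the client re-fetches the content as shown in refreshContentAndCheck.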

That completes the client-side logic; next we look at the server side.

Config listening on the server side (pull)

Earlier, the client built a ConfigBatchListenRequest to ask the server which configs have changed.

  • ConfigChangeBatchListenRequestHandler.handle

It does two things: establishes the listen relationship and returns the changed entries.

@Override
@TpsControl(pointName = "ConfigListen")
@Secured(action = ActionTypes.READ, signType = SignType.CONFIG)
public ConfigChangeBatchListenResponse handle(ConfigBatchListenRequest configChangeListenRequest, RequestMeta meta)
        throws NacosException {
    String connectionId = StringPool.get(meta.getConnectionId());
    String tag = configChangeListenRequest.getHeader(Constants.VIPSERVER_TAG);
    
    ConfigChangeBatchListenResponse configChangeBatchListenResponse = new ConfigChangeBatchListenResponse();
    // iterate over the list sent by the client
    for (ConfigBatchListenRequest.ConfigListenContext listenContext : configChangeListenRequest
            .getConfigListenContexts()) {
        String groupKey = GroupKey2
                .getKey(listenContext.getDataId(), listenContext.getGroup(), listenContext.getTenant());
        groupKey = StringPool.get(groupKey);
        
        String md5 = StringPool.get(listenContext.getMd5());
        
        if (configChangeListenRequest.isListen()) {
            // establish the listen relationship
            configChangeListenContext.addListen(groupKey, md5, connectionId);
            boolean isUptoDate = ConfigCacheService.isUptodate(groupKey, md5, meta.getClientIp(), tag);
            if (!isUptoDate) {
                // the client's copy is stale; report this config as changed
                configChangeBatchListenResponse.addChangeConfig(listenContext.getDataId(), listenContext.getGroup(),
                        listenContext.getTenant());
            }
        } else {
            // remove the listen relationship
            configChangeListenContext.removeListen(groupKey, connectionId);
        }
    }
    
    return configChangeBatchListenResponse;
    
}
  • ConfigChangeListenContext.addListen

Establishing the listen relationship.

groupKey is a unique key composed of dataId + group + tenant.
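A sketch of how such a key could be built and split back apart; note that the real GroupKey2 also percent-escapes '+' and '%' inside the parts, which is omitted here:

```java
// Illustrative groupKey composition: dataId+group(+tenant). The real GroupKey2
// additionally escapes '+' and '%' inside the parts; that is omitted here.
public class GroupKeyDemo {

    static String getKey(String dataId, String group, String tenant) {
        return tenant == null || tenant.isEmpty()
                ? dataId + "+" + group
                : dataId + "+" + group + "+" + tenant;
    }

    // Inverse operation, as used later by GroupKey2.parseKey in DumpProcessor.
    static String[] parseKey(String groupKey) {
        return groupKey.split("\\+");
    }
}
```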

 /**
  * groupKey-> connection set.
  */
private ConcurrentHashMap<String, HashSet<String>> groupKeyContext = new ConcurrentHashMap<>();

 /**
  * connectionId-> group key set.
  */
private ConcurrentHashMap<String, HashMap<String, String>> connectionIdContext = new ConcurrentHashMap<>();

public synchronized void addListen(String groupKey, String md5, String connectionId) {
    // 1.add groupKeyContext
    // get the set of client connections listening on this groupKey
    Set<String> listenClients = groupKeyContext.get(groupKey);
    if (listenClients == null) {
        groupKeyContext.putIfAbsent(groupKey, new HashSet<>());
        listenClients = groupKeyContext.get(groupKey);
    }
    // then add the current connectionId to the set
    listenClients.add(connectionId);
    
    // 2.add connectionIdContext
    // connectionIdContext maps each connectionId to its groupKey -> md5 pairs,
    // i.e. the latest md5 this connection has reported for each groupKey.
    HashMap<String, String> groupKeys = connectionIdContext.get(connectionId);
    if (groupKeys == null) {
        connectionIdContext.putIfAbsent(connectionId, new HashMap<>(16));
        groupKeys = connectionIdContext.get(connectionId);
    }
    groupKeys.put(groupKey, md5);
    
}
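The two maps form a bidirectional index: groupKey → connections is used to fan out a change to every listener, while connection → (groupKey → md5) supports cleanup when a connection drops. A stripped-down sketch with illustrative names:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Bidirectional listen index in the spirit of ConfigChangeListenContext.
public class ListenIndexDemo {

    private final Map<String, Set<String>> groupKeyContext = new HashMap<>();
    private final Map<String, Map<String, String>> connectionIdContext = new HashMap<>();

    public synchronized void addListen(String groupKey, String md5, String connectionId) {
        groupKeyContext.computeIfAbsent(groupKey, k -> new HashSet<>()).add(connectionId);
        connectionIdContext.computeIfAbsent(connectionId, k -> new HashMap<>()).put(groupKey, md5);
    }

    // Fan-out targets on a config change: every connection listening on the key.
    public synchronized Set<String> listenersOf(String groupKey) {
        return groupKeyContext.getOrDefault(groupKey, new HashSet<>());
    }

    // When a connection drops, the reverse index tells us which keys to clean up.
    public synchronized void removeConnection(String connectionId) {
        Map<String, String> keys = connectionIdContext.remove(connectionId);
        if (keys == null) {
            return;
        }
        for (String groupKey : keys.keySet()) {
            Set<String> conns = groupKeyContext.get(groupKey);
            if (conns != null) {
                conns.remove(connectionId);
                if (conns.isEmpty()) {
                    groupKeyContext.remove(groupKey);
                }
            }
        }
    }
}
```

Without the reverse map, removing a dead connection would require scanning every groupKey entry.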

Note that the server returns only the changed keys, not the content; on receiving them, the client loops over the keys and fetches the latest content itself. That code was shown above, so it is not repeated here.

Server-side push

ConfigPublishRequestHandler handles config publishing; once the publish completes, it fires an event:

ConfigChangePublisher.notifyConfigChange(
        new ConfigDataChangeEvent(false, dataId, group, tenant, time.getTime()));

Listening for config changes: a global search shows that AsyncNotifyService handles ConfigDataChangeEvent in its onEvent method.

On receiving the event it does four things: 1) cluster sync, 2) disk cache, 3) update the in-memory cache, 4) push to clients.

@Autowired
public AsyncNotifyService(ServerMemberManager memberManager) {
    this.memberManager = memberManager;

    // Register ConfigDataChangeEvent to NotifyCenter.
    NotifyCenter.registerToPublisher(ConfigDataChangeEvent.class, NotifyCenter.ringBufferSize);

    // Register A Subscriber to subscribe ConfigDataChangeEvent.
    NotifyCenter.registerSubscriber(new Subscriber() {

        @Override
        public void onEvent(Event event) {
            // Generate ConfigDataChangeEvent concurrently
            if (event instanceof ConfigDataChangeEvent) {
                ConfigDataChangeEvent evt = (ConfigDataChangeEvent) event;
                long dumpTs = evt.lastModifiedTs;
                String dataId = evt.dataId;
                String group = evt.group;
                String tenant = evt.tenant;
                String tag = evt.tag;

                MetricsMonitor.incrementConfigChangeCount(tenant, group, dataId);
				// get the list of all cluster members
                Collection<Member> ipList = memberManager.allMembers();

                // In fact, any type of queue here can be
                Queue<NotifySingleTask> httpQueue = new LinkedList<>();
                // the queue above is for HTTP members; the one below is for gRPC members
                Queue<NotifySingleRpcTask> rpcQueue = new LinkedList<>();
                for (Member member : ipList) {
                    if (!MemberUtil.isSupportedLongCon(member)) {
                        httpQueue.add(new NotifySingleTask(dataId, group, tenant, tag, dumpTs, member.getAddress(),
                                                           evt.isBeta));
                    } else {
                        rpcQueue.add(
                            new NotifySingleRpcTask(dataId, group, tenant, tag, dumpTs, evt.isBeta, member));
                    }
                }
                if (!httpQueue.isEmpty()) {
                    ConfigExecutor.executeAsyncNotify(new AsyncTask(nacosAsyncRestTemplate, httpQueue));
                }
                //  an AsyncRpcTask is executed here; we step into its run() method next
                if (!rpcQueue.isEmpty()) {
                    // build a task and submit it to the thread pool; execution enters run()
                    ConfigExecutor.executeAsyncNotify(new AsyncRpcTask(rpcQueue));
                }
				
            }
        }

        @Override
        public Class<? extends Event> subscribeType() {
            return ConfigDataChangeEvent.class;
        }
    });
}	
  • AsyncRpcTask.run
class AsyncRpcTask implements Runnable {
    
    private Queue<NotifySingleRpcTask> queue;
    
    public AsyncRpcTask(Queue<NotifySingleRpcTask> queue) {
        this.queue = queue;
    }
    
    @Override
    public void run() {
        while (!queue.isEmpty()) {
			// take a task from the queue
            NotifySingleRpcTask task = queue.poll();
            // build a cluster-sync request
            ConfigChangeClusterSyncRequest syncRequest = new ConfigChangeClusterSyncRequest();
            syncRequest.setDataId(task.getDataId());
            syncRequest.setGroup(task.getGroup());
            syncRequest.setBeta(task.isBeta);
            syncRequest.setLastModified(task.getLastModified());
            syncRequest.setTag(task.tag);
            syncRequest.setTenant(task.getTenant());
            Member member = task.member;
            // if the target member is this node, dump directly; a remote node receiving this request also dumps
            if (memberManager.getSelf().equals(member)) {
                if (syncRequest.isBeta()) {
                    dumpService.dump(syncRequest.getDataId(), syncRequest.getGroup(), syncRequest.getTenant(),
                            syncRequest.getLastModified(), NetUtils.localIP(), true);
                } else {
                    dumpService.dump(syncRequest.getDataId(), syncRequest.getGroup(), syncRequest.getTenant(),
                            syncRequest.getTag(), syncRequest.getLastModified(), NetUtils.localIP());
                }
                continue;
            }
            if (memberManager.hasMember(member.getAddress())) {
                // start the health check and there are ips that are not monitored, put them directly in the notification queue, otherwise notify
                boolean unHealthNeedDelay = memberManager.isUnHealth(member.getAddress());
                if (unHealthNeedDelay) {
                    // target ip is unhealthy, then put it in the notification list
                    ConfigTraceService.logNotifyEvent(task.getDataId(), task.getGroup(), task.getTenant(), null,
                            task.getLastModified(), InetUtils.getSelfIP(), ConfigTraceService.NOTIFY_EVENT_UNHEALTH,
                            0, member.getAddress());
                    // get delay time and set fail count to the task
                    asyncTaskExecute(task);
                } else {
				
                    if (!MemberUtil.isSupportedLongCon(member)) {
                        // members without long-connection support go through HTTP
                        asyncTaskExecute(
                                new NotifySingleTask(task.getDataId(), task.getGroup(), task.getTenant(), task.tag,
                                        task.getLastModified(), member.getAddress(), task.isBeta));
                    } else {
                        // members with long-connection support are synced over RPC
                        try {
                            configClusterRpcClientProxy
                                    .syncConfigChange(member, syncRequest, new AsyncRpcNotifyCallBack(task));
                        } catch (Exception e) {
                            MetricsMonitor.getConfigNotifyException().increment();
                            asyncTaskExecute(task);
                        }
                    }
                  
                }
            } else {
                //Do nothing if the member is offline.
            }
            
        }
    }
}
  • DumpService.dump

The dump is handed off to dumpTaskMgr.

private TaskManager dumpTaskMgr;

public DumpService(ConfigInfoPersistService configInfoPersistService, CommonPersistService commonPersistService,
                   HistoryConfigInfoPersistService historyConfigInfoPersistService,
                   ConfigInfoAggrPersistService configInfoAggrPersistService,
                   ConfigInfoBetaPersistService configInfoBetaPersistService,
                   ConfigInfoTagPersistService configInfoTagPersistService, ServerMemberManager memberManager) {
    this.configInfoPersistService = configInfoPersistService;
    this.commonPersistService = commonPersistService;
    this.historyConfigInfoPersistService = historyConfigInfoPersistService;
    this.configInfoAggrPersistService = configInfoAggrPersistService;
    this.configInfoBetaPersistService = configInfoBetaPersistService;
    this.configInfoTagPersistService = configInfoTagPersistService;
    this.memberManager = memberManager;
    this.processor = new DumpProcessor(this);
    this.dumpAllProcessor = new DumpAllProcessor(this);
    this.dumpAllBetaProcessor = new DumpAllBetaProcessor(this);
    this.dumpAllTagProcessor = new DumpAllTagProcessor(this);
    this.dumpTaskMgr = new TaskManager("com.alibaba.nacos.server.DumpTaskManager");
    this.dumpTaskMgr.setDefaultTaskProcessor(processor);

    this.dumpAllTaskMgr = new TaskManager("com.alibaba.nacos.server.DumpAllTaskManager");
    this.dumpAllTaskMgr.setDefaultTaskProcessor(dumpAllProcessor);

    this.dumpAllTaskMgr.addProcessor(DumpAllTask.TASK_ID, dumpAllProcessor);
    this.dumpAllTaskMgr.addProcessor(DumpAllBetaTask.TASK_ID, dumpAllBetaProcessor);
    this.dumpAllTaskMgr.addProcessor(DumpAllTagTask.TASK_ID, dumpAllTagProcessor);

    DynamicDataSource.getInstance().getDataSource();
}
public void dump(String dataId, String group, String tenant, String tag, long lastModified, String handleIp) {
    dump(dataId, group, tenant, tag, lastModified, handleIp, false);
}
public void dump(String dataId, String group, String tenant, long lastModified, String handleIp, boolean isBeta) {
    String groupKey = GroupKey2.getKey(dataId, group, tenant);
    String taskKey = String.join("+", dataId, group, tenant, String.valueOf(isBeta));
    dumpTaskMgr.addTask(taskKey, new DumpTask(groupKey, lastModified, handleIp, isBeta));
    DUMP_LOG.info("[dump-task] add task. groupKey={}, taskKey={}", groupKey, taskKey);
}

Following the TaskManager created in the constructor up through its parent classes leads to NacosDelayTaskExecuteEngine:

public NacosDelayTaskExecuteEngine(String name, int initCapacity, Logger logger, long processInterval) {
    super(logger);
    tasks = new ConcurrentHashMap<>(initCapacity);
    processingExecutor = ExecutorFactory.newSingleScheduledExecutorService(new NameThreadFactory(name));
    processingExecutor
            .scheduleWithFixedDelay(new ProcessRunnable(), processInterval, processInterval, TimeUnit.MILLISECONDS);
}

Next, look at ProcessRunnable:

private class ProcessRunnable implements Runnable {
    
    @Override
    public void run() {
        try {
            processTasks();
        } catch (Throwable e) {
            getEngineLog().error(e.toString(), e);
        }
    }
}

protected void processTasks() {
    Collection<Object> keys = getAllTaskKeys();
    for (Object taskKey : keys) {
        AbstractDelayTask task = removeTask(taskKey);
        if (null == task) {
            continue;
        }
        NacosTaskProcessor processor = getProcessor(taskKey);
        if (null == processor) {
            getEngineLog().error("processor not found for task, so discarded. " + task);
            continue;
        }
        try {
            // ReAdd task if process failed
            if (!processor.process(task)) {
                retryFailedTask(taskKey, task);
            }
        } catch (Throwable e) {
            getEngineLog().error("Nacos task execute error ", e);
            retryFailedTask(taskKey, task);
        }
    }
}
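The processTasks contract is simple: if a processor returns false or throws, the task is re-queued for the next scheduler tick. A deterministic, threadless sketch of that retry loop, with illustrative names:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Threadless sketch of NacosDelayTaskExecuteEngine.processTasks: drain the task
// map, run the processor on each task, and re-add a task whenever processing
// fails or throws. The real engine invokes this from scheduleWithFixedDelay.
public class DelayTaskEngineDemo {

    private final Map<String, String> tasks = new HashMap<>();
    private final Predicate<String> processor; // returns true on success

    public DelayTaskEngineDemo(Predicate<String> processor) {
        this.processor = processor;
    }

    public void addTask(String key, String task) {
        tasks.put(key, task); // a newer task for the same key replaces the old one
    }

    // One scheduler tick; returns how many tasks failed and were re-queued.
    public int processTasks() {
        int retried = 0;
        for (String key : new HashMap<>(tasks).keySet()) {
            String task = tasks.remove(key);
            if (task == null) {
                continue;
            }
            boolean ok;
            try {
                ok = processor.test(task);
            } catch (RuntimeException ex) {
                ok = false;
            }
            if (!ok) {
                tasks.put(key, task); // re-add so the next tick retries it
                retried++;
            }
        }
        return retried;
    }

    public int pending() {
        return tasks.size();
    }
}
```

Keying tasks by dataId+group+tenant also means rapid successive publishes of the same config collapse into a single pending dump task.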

NacosTaskProcessor is an interface with many implementations; since this is a dump task, execution enters DumpProcessor.

  • DumpProcessor.process
@Override
public boolean process(NacosTask task) {
    DumpTask dumpTask = (DumpTask) task;
    String[] pair = GroupKey2.parseKey(dumpTask.getGroupKey());
    String dataId = pair[0];
    String group = pair[1];
    String tenant = pair[2];
    long lastModified = dumpTask.getLastModified();
    String handleIp = dumpTask.getHandleIp();
    boolean isBeta = dumpTask.isBeta();
    String tag = dumpTask.getTag();
    
    ConfigDumpEvent.ConfigDumpEventBuilder build = ConfigDumpEvent.builder().namespaceId(tenant).dataId(dataId)
            .group(group).isBeta(isBeta).tag(tag).lastModifiedTs(lastModified).handleIp(handleIp);
    
    if (isBeta) {
        // if publish beta, then dump config, update beta cache
        ConfigInfo4Beta cf = configInfoBetaPersistService.findConfigInfo4Beta(dataId, group, tenant);
        
        build.remove(Objects.isNull(cf));
        build.betaIps(Objects.isNull(cf) ? null : cf.getBetaIps());
        build.content(Objects.isNull(cf) ? null : cf.getContent());
        build.encryptedDataKey(Objects.isNull(cf) ? null : cf.getEncryptedDataKey());
        
        return DumpConfigHandler.configDump(build.build());
    }
    if (StringUtils.isBlank(tag)) {
        ConfigInfo cf = configInfoPersistService.findConfigInfo(dataId, group, tenant);
        
        build.remove(Objects.isNull(cf));
        build.content(Objects.isNull(cf) ? null : cf.getContent());
        build.type(Objects.isNull(cf) ? null : cf.getType());
        build.encryptedDataKey(Objects.isNull(cf) ? null : cf.getEncryptedDataKey());
    } else {
        ConfigInfo4Tag cf = configInfoTagPersistService.findConfigInfo4Tag(dataId, group, tenant, tag);
        
        build.remove(Objects.isNull(cf));
        build.content(Objects.isNull(cf) ? null : cf.getContent());
        
    }
    return DumpConfigHandler.configDump(build.build());
}

A ConfigDumpEvent is built here, and execution eventually reaches configDump.

  • DumpConfigHandler.configDump
public static boolean configDump(ConfigDumpEvent event) {
    final String dataId = event.getDataId();
    final String group = event.getGroup();
    final String namespaceId = event.getNamespaceId();
    final String content = event.getContent();
    final String type = event.getType();
    final long lastModified = event.getLastModifiedTs();
    final String encryptedDataKey = event.getEncryptedDataKey();
    if (event.isBeta()) {
        // beta (gray release) configs take this branch
        boolean result;
        if (event.isRemove()) {
            result = ConfigCacheService.removeBeta(dataId, group, namespaceId);
            if (result) {
                ConfigTraceService.logDumpEvent(dataId, group, namespaceId, null, lastModified, event.getHandleIp(),
                        ConfigTraceService.DUMP_EVENT_REMOVE_OK, System.currentTimeMillis() - lastModified, 0);
            }
            return result;
        } else {
            result = ConfigCacheService
                    .dumpBeta(dataId, group, namespaceId, content, lastModified, event.getBetaIps(),
                            encryptedDataKey);
            if (result) {
                ConfigTraceService.logDumpEvent(dataId, group, namespaceId, null, lastModified, event.getHandleIp(),
                        ConfigTraceService.DUMP_EVENT_OK, System.currentTimeMillis() - lastModified,
                        content.length());
            }
        }
        
        return result;
    }
    if (StringUtils.isBlank(event.getTag())) {
         // when tag is blank, execution enters here
        if (dataId.equals(AggrWhitelist.AGGRIDS_METADATA)) {
            AggrWhitelist.load(content);
        }
        
        if (dataId.equals(ClientIpWhiteList.CLIENT_IP_WHITELIST_METADATA)) {
            ClientIpWhiteList.load(content);
        }
        
        if (dataId.equals(SwitchService.SWITCH_META_DATAID)) {
            SwitchService.load(content);
        }
        
        boolean result;
        if (!event.isRemove()) {
            // update or insert requests come through here
            result = ConfigCacheService
                    .dump(dataId, group, namespaceId, content, lastModified, type, encryptedDataKey);
            
            if (result) {
                ConfigTraceService.logDumpEvent(dataId, group, namespaceId, null, lastModified, event.getHandleIp(),
                        ConfigTraceService.DUMP_EVENT_OK, System.currentTimeMillis() - lastModified,
                        content.length());
            }
        } else {
            result = ConfigCacheService.remove(dataId, group, namespaceId);
            
            if (result) {
                ConfigTraceService.logDumpEvent(dataId, group, namespaceId, null, lastModified, event.getHandleIp(),
                        ConfigTraceService.DUMP_EVENT_REMOVE_OK, System.currentTimeMillis() - lastModified, 0);
            }
        }
        return result;
    } else {
        
        boolean result;
        if (!event.isRemove()) {
            result = ConfigCacheService
                    .dumpTag(dataId, group, namespaceId, event.getTag(), content, lastModified, encryptedDataKey);
            if (result) {
                ConfigTraceService.logDumpEvent(dataId, group, namespaceId, null, lastModified, event.getHandleIp(),
                        ConfigTraceService.DUMP_EVENT_OK, System.currentTimeMillis() - lastModified,
                        content.length());
            }
        } else {
            result = ConfigCacheService.removeTag(dataId, group, namespaceId, event.getTag());
            if (result) {
                ConfigTraceService.logDumpEvent(dataId, group, namespaceId, null, lastModified, event.getHandleIp(),
                        ConfigTraceService.DUMP_EVENT_REMOVE_OK, System.currentTimeMillis() - lastModified, 0);
            }
        }
        return result;
    }
    
}
  • ConfigCacheService.dump
public static boolean dump(String dataId, String group, String tenant, String content, long lastModifiedTs,
                           String type, String encryptedDataKey) {
    String groupKey = GroupKey2.getKey(dataId, group, tenant);
    CacheItem ci = makeSure(groupKey, encryptedDataKey, false);
    ci.setType(type);
    // Take the write lock while syncing the cache
    final int lockResult = tryWriteLock(groupKey);
    assert (lockResult != 0);

    if (lockResult < 0) {
        DUMP_LOG.warn("[dump-error] write lock failed. {}", groupKey);
        return false;
    }

    try {
        final String md5 = MD5Utils.md5Hex(content, Constants.ENCODE);
        // If the incoming lastModified is older than the one held in memory, the data is stale and no update is needed.
        if (lastModifiedTs < ConfigCacheService.getLastModifiedTs(groupKey)) {
            DUMP_LOG.warn("[dump-ignore] the content is old. groupKey={}, md5={}, lastModifiedOld={}, "
                          + "lastModifiedNew={}", groupKey, md5, ConfigCacheService.getLastModifiedTs(groupKey),
                          lastModifiedTs);
            return true;
        }
        // If the MD5 matches and the file already exists on disk, no update is needed.
        if (md5.equals(ConfigCacheService.getContentMd5(groupKey)) && DiskUtil.targetFile(dataId, group, tenant).exists()) {
            DUMP_LOG.warn("[dump-ignore] ignore to save cache file. groupKey={}, md5={}, lastModifiedOld={}, "
                          + "lastModifiedNew={}", groupKey, md5, ConfigCacheService.getLastModifiedTs(groupKey),
                          lastModifiedTs);
        } else if (!PropertyUtil.isDirectRead()) {
            // Write the latest content to the disk file
            DiskUtil.saveToDisk(dataId, group, tenant, content);
        }
        // Update the values held in the in-memory cache
        updateMd5(groupKey, md5, lastModifiedTs, encryptedDataKey);
        return true;
    } catch (IOException ioe) {
        DUMP_LOG.error("[dump-exception] save disk error. " + groupKey + ", " + ioe);
        if (ioe.getMessage() != null) {
            String errMsg = ioe.getMessage();
            if (NO_SPACE_CN.equals(errMsg) || NO_SPACE_EN.equals(errMsg) || errMsg.contains(DISK_QUATA_CN) || errMsg
                .contains(DISK_QUATA_EN)) {
                // Protect from disk full.
                FATAL_LOG.error("磁盘满自杀退出", ioe);
                System.exit(0);
            }
        }
        return false;
    } finally {
        releaseWriteLock(groupKey);
    }
}
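The two skip guards in `dump` (stale timestamp, unchanged MD5) can be looked at in isolation. Below is a minimal sketch for a single cached entry; `DumpGuard` and `shouldPersist` are hypothetical names, and plain `MessageDigest` stands in for Nacos' `MD5Utils`:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DumpGuard {

    // Simplified cache state for a single groupKey.
    static String cachedMd5 = null;
    static long cachedLastModifiedTs = 0L;

    static String md5Hex(String content) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(content.getBytes(StandardCharsets.UTF_8));
            return String.format("%032x", new BigInteger(1, digest));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Mirrors the two guards in ConfigCacheService.dump:
    // 1) an update older than the cached lastModifiedTs is ignored;
    // 2) an identical MD5 skips the disk write.
    static boolean shouldPersist(String content, long lastModifiedTs) {
        if (lastModifiedTs < cachedLastModifiedTs) {
            return false; // stale update, keep what we have
        }
        String md5 = md5Hex(content);
        if (md5.equals(cachedMd5)) {
            return false; // content unchanged, no disk write needed
        }
        cachedMd5 = md5;
        cachedLastModifiedTs = lastModifiedTs;
        return true;
    }
}
```

Note that in the real method the stale-timestamp case still returns `true` to the caller; the sketch only shows whether a write would happen.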

When the cache is updated, a LocalDataChangeEvent is also published.


public static void updateMd5(String groupKey, String md5, long lastModifiedTs, String encryptedDataKey) {
    CacheItem cache = makeSure(groupKey, encryptedDataKey, false);
    if (cache.md5 == null || !cache.md5.equals(md5)) {
        cache.md5 = md5;
        cache.lastModifiedTs = lastModifiedTs;
        NotifyCenter.publishEvent(new LocalDataChangeEvent(groupKey));
    }
}
  • RpcConfigChangeNotifier.onEvent

This subscriber receives the LocalDataChangeEvent published above.

@Override
public void onEvent(LocalDataChangeEvent event) {
    String groupKey = event.groupKey;
    boolean isBeta = event.isBeta;
    List<String> betaIps = event.betaIps;
    String[] strings = GroupKey.parseKey(groupKey);
    String dataId = strings[0];
    String group = strings[1];
    String tenant = strings.length > 2 ? strings[2] : "";
    String tag = event.tag;
    
    configDataChanged(groupKey, dataId, group, tenant, isBeta, betaIps, tag);
    
}

public void configDataChanged(String groupKey, String dataId, String group, String tenant, boolean isBeta,
                              List<String> betaIps, String tag) {
	// configChangeListenContext is the mapping built during listener registration; fetch every connectionId for this groupKey
    Set<String> listeners = configChangeListenContext.getListeners(groupKey);
    if (CollectionUtils.isEmpty(listeners)) {
        return;
    }
    int notifyClientCount = 0;
    for (final String client : listeners) {
        // Look up the RPC Connection by connectionId
        Connection connection = connectionManager.getConnection(client);
        if (connection == null) {
            continue;
        }

        ConnectionMeta metaInfo = connection.getMetaInfo();
        //beta ips check.
        String clientIp = metaInfo.getClientIp();
        String clientTag = metaInfo.getTag();
        if (isBeta && betaIps != null && !betaIps.contains(clientIp)) {
            continue;
        }
        //tag check
        if (StringUtils.isNotBlank(tag) && !tag.equals(clientTag)) {
            continue;
        }

        ConfigChangeNotifyRequest notifyRequest = ConfigChangeNotifyRequest.build(dataId, group, tenant);
		// Build an RPC push retry task
        RpcPushTask rpcPushRetryTask = new RpcPushTask(notifyRequest, 50, client, clientIp, metaInfo.getAppName());
        push(rpcPushRetryTask);
        notifyClientCount++;
    }
    Loggers.REMOTE_PUSH.info("push [{}] clients ,groupKey=[{}]", notifyClientCount, groupKey);
}
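The beta-IP and tag checks inside the loop above boil down to one predicate over the connection metadata. A minimal sketch (`PushFilter.shouldPush` is a hypothetical helper, not a Nacos API):

```java
import java.util.List;

public class PushFilter {

    // Mirrors the two skip conditions in configDataChanged:
    // a beta publish only reaches the listed IPs, and a tagged
    // config only reaches connections carrying the same tag.
    static boolean shouldPush(boolean isBeta, List<String> betaIps, String clientIp,
                              String tag, String clientTag) {
        if (isBeta && betaIps != null && !betaIps.contains(clientIp)) {
            return false; // beta publish, client IP not in the gray list
        }
        if (tag != null && !tag.isEmpty() && !tag.equals(clientTag)) {
            return false; // tagged config, client tag does not match
        }
        return true;
    }
}
```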
// Scheduled back-off retry task
private void push(RpcPushTask retryTask) {
    ConfigChangeNotifyRequest notifyRequest = retryTask.notifyRequest;
    if (retryTask.isOverTimes()) {
        Loggers.REMOTE_PUSH
            .warn("push callback retry fail over times .dataId={},group={},tenant={},clientId={},will unregister client.",
                  notifyRequest.getDataId(), notifyRequest.getGroup(), notifyRequest.getTenant(),
                  retryTask.connectionId);
        connectionManager.unregister(retryTask.connectionId);
    } else if (connectionManager.getConnection(retryTask.connectionId) != null) {
        // first time:delay 0s; second time:delay 2s; third time:delay 4s
        // back-off: the delay grows with tryTimes
        ConfigExecutor.getClientConfigNotifierServiceExecutor()
            .schedule(retryTask, retryTask.tryTimes * 2, TimeUnit.SECONDS);
    } else {
        // client is already offline, ignore task.
    }

}
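The linear back-off above (delay 0s, 2s, 4s for tryTimes 0, 1, 2) can be sketched with a plain ScheduledExecutorService. `BackoffPush` and its fields are hypothetical; the real task additionally checks TPS limits and connection liveness before each attempt:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BackoffPush {

    // Matches schedule(retryTask, retryTask.tryTimes * 2, TimeUnit.SECONDS):
    // the first run fires immediately, retry N waits N * 2 seconds.
    static long delaySeconds(int tryTimes) {
        return tryTimes * 2L;
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(1);
        int maxRetryTimes = 3;
        Runnable task = new Runnable() {
            int tryTimes = 0;

            @Override
            public void run() {
                tryTimes++;
                boolean pushed = false; // pretend every push fails
                if (!pushed && tryTimes < maxRetryTimes) {
                    // demo uses milliseconds so it finishes quickly;
                    // Nacos schedules the same multiples in seconds
                    executor.schedule(this, delaySeconds(tryTimes), TimeUnit.MILLISECONDS);
                } else {
                    done.countDown(); // give up: Nacos unregisters the connection here
                }
            }
        };
        executor.schedule(task, delaySeconds(0), TimeUnit.MILLISECONDS);
        done.await();
        executor.shutdown();
    }
}
```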
  • RpcPushTask.run

At this point the notifyRequest is pushed to the client.

ConfigChangeNotifyRequest notifyRequest;

public RpcPushTask(ConfigChangeNotifyRequest notifyRequest, int maxRetryTimes, String connectionId,
                String clientIp, String appName) {
    this.notifyRequest = notifyRequest;
    this.maxRetryTimes = maxRetryTimes;
    this.connectionId = connectionId;
    this.clientIp = clientIp;
    this.appName = appName;
}
@Override
public void run() {
    tryTimes++;
    TpsCheckRequest tpsCheckRequest = new TpsCheckRequest();
   
    tpsCheckRequest.setPointName(POINT_CONFIG_PUSH);
    if (!tpsControlManager.check(tpsCheckRequest).isSuccess()) {
        // TPS check failed: re-enqueue through the back-off retry path
        push(this);
    } else {
        // TPS check passed: push with a result callback
        rpcPushService.pushWithCallback(connectionId, notifyRequest, new AbstractPushCallBack(3000L) {
            @Override
            public void onSuccess() {
                TpsCheckRequest tpsCheckRequest = new TpsCheckRequest();
                
                tpsCheckRequest.setPointName(POINT_CONFIG_PUSH_SUCCESS);
                tpsControlManager.check(tpsCheckRequest);
            }
            
            @Override
            public void onFail(Throwable e) {
                TpsCheckRequest tpsCheckRequest = new TpsCheckRequest();
                
                tpsCheckRequest.setPointName(POINT_CONFIG_PUSH_FAIL);
                tpsControlManager.check(tpsCheckRequest);
                Loggers.REMOTE_PUSH.warn("Push fail", e);
                push(RpcPushTask.this);
            }
            
        }, ConfigExecutor.getClientConfigNotifierServiceExecutor());
        
    }
    
}

Receiving the push on the client

The client has a ClientWorker that is loaded and running from startup.

Since the request type is ConfigChangeNotifyRequest, execution enters the handler below.

The handler derives the groupKey from the request and looks the CacheData up in the cache.

It then calls notifyListenConfig(). Note that the server does not push the changed content itself; it pushes only the groupKey built from dataId, group and tenant, and the client then fetches the latest config from the server.

@Override
public void notifyListenConfig() {
    listenExecutebell.offer(bellItem);
}
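`listenExecutebell` is effectively a capacity-1 "bell" queue: offering to a full queue is a no-op, so any number of change notifications arriving before the worker wakes up collapse into a single sync round. A minimal sketch with hypothetical names:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class ListenBell {

    // Capacity 1: ringing an already-rung bell does nothing,
    // so N notifications coalesce into one wake-up.
    private final BlockingQueue<Object> bell = new ArrayBlockingQueue<>(1);
    private static final Object BELL_ITEM = new Object();

    public void ring() {
        bell.offer(BELL_ITEM); // non-blocking; returns false if already full
    }

    // The worker loop polls with a timeout so it also syncs periodically
    // even when nobody rings the bell.
    public boolean awaitRing(long timeoutMillis) throws InterruptedException {
        return bell.poll(timeoutMillis, TimeUnit.MILLISECONDS) != null;
    }
}
```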

At this point we are back in the listening logic covered earlier.

private void initRpcClientHandler(final RpcClient rpcClientInner) {
    /*
     * Register Config Change /Config ReSync Handler
     */
    rpcClientInner.registerServerRequestHandler((request) -> {
        if (request instanceof ConfigChangeNotifyRequest) {
            ConfigChangeNotifyRequest configChangeNotifyRequest = (ConfigChangeNotifyRequest) request;
            LOGGER.info("[{}] [server-push] config changed. dataId={}, group={},tenant={}",
                    rpcClientInner.getName(), configChangeNotifyRequest.getDataId(),
                    configChangeNotifyRequest.getGroup(), configChangeNotifyRequest.getTenant());
            String groupKey = GroupKey
                    .getKeyTenant(configChangeNotifyRequest.getDataId(), configChangeNotifyRequest.getGroup(),
                            configChangeNotifyRequest.getTenant());
            
            CacheData cacheData = cacheMap.get().get(groupKey);
            if (cacheData != null) {
                synchronized (cacheData) {
                    cacheData.getLastModifiedTs().set(System.currentTimeMillis());
                    cacheData.setSyncWithServer(false);
                    notifyListenConfig();
                }
                
            }
            return new ConfigChangeNotifyResponse();
        }
        return null;
    });
    
    rpcClientInner.registerServerRequestHandler((request) -> {
        if (request instanceof ClientConfigMetricRequest) {
            ClientConfigMetricResponse response = new ClientConfigMetricResponse();
            response.setMetrics(getMetrics(((ClientConfigMetricRequest) request).getMetricsKeys()));
            return response;
        }
        return null;
    });
    
    rpcClientInner.registerConnectionListener(new ConnectionEventListener() {
        
        @Override
        public void onConnected() {
            LOGGER.info("[{}] Connected,notify listen context...", rpcClientInner.getName());
            notifyListenConfig();
        }
        
        @Override
        public void onDisConnect() {
            String taskId = rpcClientInner.getLabels().get("taskId");
            LOGGER.info("[{}] DisConnected,clear listen context...", rpcClientInner.getName());
            Collection<CacheData> values = cacheMap.get().values();
            
            for (CacheData cacheData : values) {
                if (StringUtils.isNotBlank(taskId)) {
                    if (Integer.valueOf(taskId).equals(cacheData.getTaskId())) {
                        cacheData.setSyncWithServer(false);
                    }
                } else {
                    cacheData.setSyncWithServer(false);
                }
            }
        }
        
    });
    
    rpcClientInner.serverListFactory(new ServerListFactory() {
        @Override
        public String genNextServer() {
            return ConfigRpcTransportClient.super.serverListManager.getNextServerAddr();
            
        }
        
        @Override
        public String getCurrentServer() {
            return ConfigRpcTransportClient.super.serverListManager.getCurrentServerAddr();
            
        }
        
        @Override
        public List<String> getServerList() {
            return ConfigRpcTransportClient.super.serverListManager.serverUrls;
            
        }
    });
    
    NotifyCenter.registerSubscriber(new Subscriber<ServerlistChangeEvent>() {
        @Override
        public void onEvent(ServerlistChangeEvent event) {
            rpcClientInner.onServerListChange();
        }
        
        @Override
        public Class<? extends Event> subscribeType() {
            return ServerlistChangeEvent.class;
        }
    });
}

That completes the walkthrough of the Nacos config center.

There is plenty worth learning here:

  1. Event-based decoupling with an asynchronous execution engine
  2. Data sharding
  3. Local caching
  4. Multi-level in-memory caching with read/write separation
  5. Back-off retry (tryTimes * 2)