Apollo: A Worked Example of Application Integration

How an application integrates with Apollo depends on its development model; there are several options:

> a plain Java project

> a Spring Boot project

> a Spring Cloud project — since the data production systems are all microservice-based, this is the variant covered in detail here

The validation system (dcp-service-eccs) serves as the running example below.

1 Add the configuration files listed below

Configuration file overview

application.yaml: declares every property the application uses

application-local.yml: Apollo settings for the local environment

application-dev.yml: Apollo settings for the development (offline) environment

application-pre.yml: Apollo settings for the pre-release environment

application-prod.yml: Apollo settings for the production environment

bootstrap.yml: reads the active profile from the environment variables, so Apollo can load the configuration matching that environment

Note: the file name bootstrap must be spelled exactly; if it is misspelled, Spring will not load it.
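Taken together (the original screenshot is not reproduced here), the resources directory looks roughly like this:

```
src/main/resources
├── application.yaml
├── application-local.yml
├── application-dev.yml
├── application-pre.yml
├── application-prod.yml
└── bootstrap.yml
```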


Configuration file contents

bootstrap.yml

server:
    port: 18002
spring:
    application:
        name: dcp-service-eccs
    profiles:
        active: '@spring.profiles.active@'
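The `@spring.profiles.active@` token is not resolved by Spring itself; it is a Maven resource-filtering placeholder replaced at build time. A sketch of the Maven profiles this assumes (profile ids and the default choice are illustrative, not taken from the project's actual pom.xml):

```
<!-- pom.xml sketch: each profile sets the value substituted into
     bootstrap.yml; build with e.g. mvn package -P prod -->
<profiles>
    <profile>
        <id>dev</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
            <spring.profiles.active>dev</spring.profiles.active>
        </properties>
    </profile>
    <profile>
        <id>prod</id>
        <properties>
            <spring.profiles.active>prod</spring.profiles.active>
        </properties>
    </profile>
</profiles>
```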

application-dev.yml and application-local.yml

apollo:
    bootstrap:
        enabled: true
        namespaces: application,0003.eureka_ns,0003.feign.ns,0003.ribbon.ns,0003.datasource.ns,0003.redis.ns,0003.kafka.ns,0003.hubber.ns,0003.sso.ns,0003.rbac.ns,0003.mongodb.ns,0003.fireman.ns
    meta: http://10.15.255.61:8080
app:
    id: dcp-service-eccs

application-pre.yml

apollo:
    bootstrap:
        enabled: true
        namespaces: application,0003.eureka_ns,0003.feign.ns,0003.ribbon.ns,0003.datasource.ns,0003.redis.ns,0003.kafka.ns,0003.hubber.ns,0003.sso.ns,0003.rbac.ns,0003.mongodb.ns,0003.fireman.ns
    meta: http://config.analyst.ai:8081
app:
    id: dcp-service-eccs

 

application-prod.yml

apollo:
    bootstrap:
        enabled: true
        namespaces: application,0003.eureka_ns,0003.feign.ns,0003.ribbon.ns,0003.datasource.ns,0003.redis.ns,0003.kafka.ns,0003.hubber.ns,0003.sso.ns,0003.rbac.ns,0003.mongodb.ns,0003.fireman.ns
    meta: http://config.analyst.ai:8082
app:
    id: dcp-service-eccs

 

Across different systems, only two things in the files above need to change:

1 namespaces: replace them with your own system's namespaces when integrating

2 id: set app.id to your own system's application id

 

Add the Apollo startup annotation to the application
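A sketch of the application entry point (the class name and the namespace list are illustrative). Note that with `apollo.bootstrap.enabled: true` in bootstrap.yml, the apollo-spring-boot-starter can inject the configured namespaces without any annotation; `@EnableApolloConfig` is the explicit alternative:

```java
import com.ctrip.framework.apollo.spring.annotation.EnableApolloConfig;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @EnableApolloConfig pulls the listed Apollo namespaces into the
// Spring Environment before the application's own beans are created.
@SpringBootApplication
@EnableApolloConfig({"application", "0003.eureka_ns"})
public class EccsApplication {
    public static void main(String[] args) {
        SpringApplication.run(EccsApplication.class, args);
    }
}
```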

Configure the application's parameters

A system typically involves MySQL, MongoDB, Kafka, and so on; all of these can live in a single configuration file. Using the validation system as the example:

application.yaml

 

data:
    dict:
        auth:
            url: ${data.dict.auth.url}
        cache:
            expireTime: ${data.dict.cache.expireTime}
        hystrixCache:
            expireTime: ${data.dict.hystrixCache.expireTime}
        userId: ${data.dict.userId}
eccsExecutor:
    name: ${eccsExecutor.name}
endpoints:
    shutdown:
        enabled: ${endpoints.shutdown.enabled}
        sensitive: ${endpoints.shutdown.sensitive}
eureka:
    client:
        fetchRegistry: ${eureka.client.fetchRegistry}
        registerWithEureka: ${eureka.client.registerWithEureka}
        serviceUrl:
            defaultZone: ${eureka.client.serviceUrl.defaultZone}
feign:
    client:
        dispatcher:
            name: ${feign.client.dispatcher.name}
            url: ${feign.client.dispatcher.url}
    compression:
        request:
            enabled: ${feign.compression.request.enabled}
            mimeTypes: ${feign.compression.request.mimeTypes}
            minRequestSize: ${feign.compression.request.minRequestSize}
        response:
            enabled: ${feign.compression.response.enabled}
    hystrix:
        enabled: ${feign.hystrix.enabled}
    okhttp:
        enabled: ${feign.okhttp.enabled}
fireman:
    componentId: ${fireman.componentId}
    receives: ${fireman.receives}
hubber:
    job:
        accessToken: ${hubber.job.accessToken}
        admin:
            addresses: ${hubber.job.admin.addresses}
        executor:
            appname: ${hubber.job.executor.appname}
            ip: ${hubber.job.executor.ip}
            logpath: ${hubber.job.executor.logpath}
            logretentiondays: ${hubber.job.executor.logretentiondays}
            port: ${hubber.job.executor.port}
hystrix:
    command:
        default:
            execution:
                isolation:
                    thread:
                        timeoutInMilliseconds: ${hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds}
job:
    dictDbId:
        default: ${job.dictDbId.default}
kafkaTopic:
    eccsCheckPost: ${kafkaTopic.eccsCheckPost}
    eccsCheckPre: ${kafkaTopic.eccsCheckPre}
log:
    send:
        env: ${log.send.env}
logging:
    level:
        com:
            abcft:
                dcapi:
                    eccs:
                        dao: debug
management:
    security:
        enabled: ${management.security.enabled}
mongodb:
    oneLevelMarket:
        database: ${mongodb.oneLevelMarket.database}
        host: ${mongodb.oneLevelMarket.host}
        password: ${mongodb.oneLevelMarket.password}
        port: ${mongodb.oneLevelMarket.port}
        uri: ${mongodb.oneLevelMarket.uri}
        username: ${mongodb.oneLevelMarket.username}
rbac:
    accessIdListByUserId: ${rbac.accessIdListByUserId}
    app_secret: ${rbac.app_secret}
    client_key: ${rbac.client_key}
    judgeAuthority: ${rbac.judgeAuthority}
    modules: ${rbac.modules}
    serviceHost: ${rbac.serviceHost}
ribbon:
    ConnectTimeout: ${ribbon.ConnectTimeout}
    MaxAutoRetries: ${ribbon.MaxAutoRetries}
    MaxAutoRetriesNextServer: ${ribbon.MaxAutoRetriesNextServer}
    OkToRetryOnAllOperations: ${ribbon.OkToRetryOnAllOperations}
    ReadTimeout: ${ribbon.ReadTimeout}
    ServerListRefreshInterval: ${ribbon.ServerListRefreshInterval}
spring:
    cache:
        type: ${spring.cache.type}
    cloud:
        loadbalancer:
            retry:
                enabled: ${spring.cloud.loadbalancer.retry.enabled}
    datasource:
        datacenter:
            driver-class-name: ${spring.datasource.datacenter.driver-class-name}
            jdbc-url: ${spring.datasource.datacenter.jdbc-url}
            password: ${spring.datasource.datacenter.password}
            timeoutSeconds: ${spring.datasource.datacenter.timeoutSeconds}
            username: ${spring.datasource.datacenter.username}
        eccs:
            driver-class-name: ${spring.datasource.eccs.driver-class-name}
            jdbc-url: ${spring.datasource.eccs.jdbc-url}
            password: ${spring.datasource.eccs.password}
            timeoutSeconds: ${spring.datasource.eccs.timeoutSeconds}
            username: ${spring.datasource.eccs.username}
    kafka:
        bootstrap-servers: ${spring.kafka.bootstrap-servers}
        consumer:
            auto-offset-reset: ${spring.kafka.consumer.auto-offset-reset}
            enable-auto-commit: ${spring.kafka.consumer.enable-auto-commit}
            group-id: ${spring.kafka.consumer.group-id}
        producer:
            acks: ${spring.kafka.producer.acks}
            batch-size: ${spring.kafka.producer.batch-size}
            buffer-memory: ${spring.kafka.producer.buffer-memory}
            compression-type: ${spring.kafka.producer.compression-type}
            retries: ${spring.kafka.producer.retries}
    mvc:
        throwExceptionIfNoHandlerFound: ${spring.mvc.throwExceptionIfNoHandlerFound}
    redis:
        database: ${spring.redis.default.database}
        host: ${spring.redis.default.host}
        password: ${spring.redis.default.password}
        pool:
            max-active: ${spring.redis.pool.max-active}
            max-idle: ${spring.redis.pool.max-idle}
            max-wait: ${spring.redis.pool.max-wait}
        port: ${spring.redis.default.port}
        timeout: ${spring.redis.default.timeout}
    resources:
        addMappings: ${spring.resources.addMappings}
sso:
    api:
        getUserInfo: ${sso.api.getUserInfo}
        verifyToken: ${sso.api.verifyToken}
    url: ${sso.url}


Note: every value is resolved through a placeholder expression. It is recommended that the placeholder key simply reuse the property name (as above), which keeps the file readable.

 

Reading Apollo property values in application code

Used together with Spring

1 Via the @Value annotation, e.g. @Value("${check.topic_pre}")

2 Via an injected Config object:

@ApolloConfig
private Config config;


Using the plain API — this does not depend on the Spring framework and is the simplest approach

Config config = ConfigService.getAppConfig(); 
String someKey = "someKeyFromDefaultNamespace";
String someDefaultValue = "someDefaultValueForTheKey";
String value = config.getProperty(someKey, someDefaultValue);
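getAppConfig() reads the default "application" namespace. To read one of the non-default namespaces configured earlier, use ConfigService.getConfig with the namespace name; in this sketch the key and default value are illustrative:

```java
import com.ctrip.framework.apollo.Config;
import com.ctrip.framework.apollo.ConfigService;

// Sketch: reading a key from a specific namespace rather than the
// default "application" namespace.
public class NamespaceExample {
    public static void main(String[] args) {
        Config redisNs = ConfigService.getConfig("0003.redis.ns");
        String host = redisNs.getProperty("spring.redis.default.host", "localhost");
        System.out.println(host);
    }
}
```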

 

Listening for value changes in real time

1 Via the plain API

Config config = ConfigService.getAppConfig();
config.addChangeListener(new ConfigChangeListener() {
    @Override
    public void onChange(ConfigChangeEvent changeEvent) {
        System.out.println("Changes for namespace " + changeEvent.getNamespace());
        for (String key : changeEvent.changedKeys()) {
            ConfigChange change = changeEvent.getChange(key);
            System.out.println(String.format("Found change - key: %s, oldValue: %s, newValue: %s, changeType: %s", change.getPropertyName(), change.getOldValue(), change.getNewValue(), change.getChangeType()));
        }
    }
});

2 Via Spring together with @ApolloConfigChangeListener, as used in the validation system:

import com.ctrip.framework.apollo.Config;
import com.ctrip.framework.apollo.model.ConfigChangeEvent;
import com.ctrip.framework.apollo.spring.annotation.ApolloConfig;
import com.ctrip.framework.apollo.spring.annotation.ApolloConfigChangeListener;
import org.springframework.beans.BeansException;
import org.springframework.cloud.context.environment.EnvironmentChangeEvent;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Service;

import javax.annotation.PostConstruct;
import java.util.Set;

@Service
public class ConfigRefresher implements ApplicationContextAware {

    private ApplicationContext applicationContext;

    @ApolloConfig
    private Config config;

    @PostConstruct
    private void initialize() {
        refresher(config.getPropertyNames());
    }

    @ApolloConfigChangeListener
    private void onChange(ConfigChangeEvent changeEvent) {
        refresher(changeEvent.changedKeys());
    }

    private void refresher(Set<String> changedKeys) {
        for (String changedKey : changedKeys) {
            System.out.println("this key is changed:" + changedKey);
        }
        // Publishing EnvironmentChangeEvent lets Spring Cloud rebind
        // @ConfigurationProperties beans to the updated values.
        this.applicationContext.publishEvent(new EnvironmentChangeEvent(changedKeys));
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }
}

For typical systems, the material above covers everything needed. If it does not, the project documentation on GitHub is very thorough: https://github.com/ctripcorp/apollo

 
