Integrating Spring Boot with MyBatis, Druid, and Quartz

As the number of scheduled tasks in our project grew, so did the load on the system, so I decided to extract the scheduled tasks into a standalone service, built quickly on Spring Boot + MyBatis + Quartz.
The integration covers the following tasks:

  1. Integrate MyBatis (including the common Mapper and the pagination plugin)
  2. Integrate Quartz for dynamic scheduled-task management (supporting bean injection into Quartz Job classes)
  3. Integrate Spring AOP (so that AOP advice applies inside Quartz Job classes)
  4. Integrate Druid
  5. Integrate the H2 database into Spring unit tests

Integrating MyBatis

Add the dependencies

        <!--mybatis-->
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>${mybatis-spring-boot-starter.version}</version>
        </dependency>

        <!-- mysql -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>

Add application.properties configuration

The main additions are the database connection settings, the location of the MyBatis mapper XML files, and the entity package, plus a few other MyBatis options (see the MyBatis configuration reference for the full list):

# Database connection
spring.datasource.driver-class-name = com.mysql.jdbc.Driver
spring.datasource.url = jdbc:mysql://localhost:3306/taskmgr?useUnicode=true&characterEncoding=utf-8&useSSL=false
spring.datasource.username = root
spring.datasource.password = 123456

#mybatis
mybatis.type-aliases-package=com.pingan.wechat.app.entity
mybatis.mapper-locations=classpath:mapper/*.xml

#more configuration about mybatis : http://www.mybatis.org/mybatis-3/zh/configuration.html
mybatis.configuration.map-underscore-to-camel-case=true
mybatis.configuration.auto-mapping-unknown-column-behavior=warning
mybatis.configuration.use-generated-keys=true

Integrating the common Mapper and the pagination plugin

One drawback of plain MyBatis is that the generic CRUD operations for every entity have to be written by hand, either as XML mapper files or as @Select/@Insert annotations on the corresponding Mapper interfaces. This is tedious, repetitive work.
There are two ways around it:

  1. Use MyBatis Generator to generate the XML files automatically
  2. Integrate the common Mapper

I find the common Mapper simpler, so that is the approach taken here; the common Mapper's author compares the two approaches in the MyBatis common Mapper3 documentation.
To get convenient physical pagination as well, we also integrate the pagination plugin Mybatis_PageHelper.
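
To sketch how the pagination plugin is typically used once integrated (the QuartzTaskQueryService and QuartzTaskMapper names here are hypothetical; QuartzTaskEntity is the task entity used later in this post):

```java
import java.util.List;

import com.github.pagehelper.PageHelper;
import com.github.pagehelper.PageInfo;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// Sketch only: QuartzTaskMapper is assumed to extend the common Mapper shown below.
@Service
public class QuartzTaskQueryService {
    @Autowired
    private QuartzTaskMapper quartzTaskMapper;

    public PageInfo<QuartzTaskEntity> listTasks(int pageNum, int pageSize) {
        // startPage() applies physical pagination to the very next MyBatis query only
        PageHelper.startPage(pageNum, pageSize);
        List<QuartzTaskEntity> tasks = quartzTaskMapper.selectAll();
        // PageInfo wraps the page contents plus paging metadata (total rows, total pages, ...)
        return new PageInfo<>(tasks);
    }
}
```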

Add the dependencies

        <!--mapper-->
        <dependency>
            <groupId>tk.mybatis</groupId>
            <artifactId>mapper-spring-boot-starter</artifactId>
            <version>${mapper-spring-boot-starter.version}</version>
        </dependency>
        <!--pagehelper-->
        <dependency>
            <groupId>com.github.pagehelper</groupId>
            <artifactId>pagehelper-spring-boot-starter</artifactId>
            <version>${pagehelper-spring-boot-starter.version}</version>
        </dependency>

Write a CommonMapper interface

Write a base CommonMapper interface that extends the common Mapper interfaces Mapper<T>, MySqlMapper<T>, SelectByIdsMapper<T>, and DeleteByIdsMapper<T>. Some of these interfaces only work with particular databases, so adjust the list to your setup. Every business Mapper then extends this CommonMapper<T> interface.
Whether you also want a corresponding base Service interface and implementation is up to you.

import tk.mybatis.mapper.common.Mapper;
import tk.mybatis.mapper.common.MySqlMapper;
import tk.mybatis.mapper.common.ids.DeleteByIdsMapper;
import tk.mybatis.mapper.common.ids.SelectByIdsMapper;

/**
 * Common Mapper supporting single-table CRUD and (MySQL) batch operations
 * Created by Vio on 2017/11/6.
 */
public interface CommonMapper<T> extends Mapper<T>, MySqlMapper<T>, SelectByIdsMapper<T>, DeleteByIdsMapper<T> {

}
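
A business Mapper then only needs to extend CommonMapper to inherit all the generic CRUD methods. For example (the mapper name is illustrative; QuartzTaskEntity is the task entity used later in this post):

```java
import org.springframework.stereotype.Repository;

// Inherits selectAll/selectByPrimaryKey/insert/updateByPrimaryKey/delete... from CommonMapper —
// no XML and no @Select/@Insert annotations are needed for the generic operations.
@Repository
public interface QuartzTaskMapper extends CommonMapper<QuartzTaskEntity> {
    // Only hand-written, business-specific queries go here.
}
```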

Add application.properties configuration

# Point this at your own base CommonMapper
mapper.mappers=com.xx.xxx.app.mapper.CommonMapper
mapper.not-empty=false
mapper.identity=MYSQL

# PageHelper plugin configuration
pagehelper.helperDialect=mysql
pagehelper.reasonable=true
pagehelper.supportMethodsArguments=true
pagehelper.params=count=countSql

Integrating Quartz

The two main problems you will run into when integrating Quartz are:

  1. Spring-managed beans cannot be injected into Quartz task classes that implement the Job interface
  2. Spring AOP advice has no effect on Quartz task classes that implement the Job interface (quartz AOP does not apply)

Both problems share the same root cause: the creation and management of Job instances is not handed over to Spring. The integration below solves both:

Add the dependencies

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-aop</artifactId>
        </dependency>

        <!-- quartz -->
        <dependency>
            <groupId>org.quartz-scheduler</groupId>
            <artifactId>quartz</artifactId>
            <version>${quartz.version}</version>
        </dependency>
        <dependency>
            <groupId>org.quartz-scheduler</groupId>
            <artifactId>quartz-jobs</artifactId>
            <version>${quartz.version}</version>
        </dependency>

Write a custom configuration class and a custom JobFactory (important)

JobFactory is an interface provided by Quartz whose role is to create Job instances. Spring ships two implementations for Quartz: AdaptableJobFactory and SpringBeanJobFactory, where SpringBeanJobFactory extends AdaptableJobFactory. Let's look at the source:

public interface JobFactory {
    Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException;
}


public class AdaptableJobFactory implements JobFactory {

	@Override
	public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException {
		try {
		    // Create the Job instance
			Object jobObject = createJobInstance(bundle);
			return adaptJob(jobObject);
		}
		catch (Exception ex) {
			throw new SchedulerException("Job instantiation failed", ex);
		}
	}

    // Note: the Job instance is created via getJobClass().newInstance(), so no proxy class is ever
    // created — Jobs that should be advised by Spring AOP never get a proxy, and AOP does not apply.
	protected Object createJobInstance(TriggerFiredBundle bundle) throws Exception {
		return bundle.getJobDetail().getJobClass().newInstance();
	}


	protected Job adaptJob(Object jobObject) throws Exception {
		if (jobObject instanceof Job) {
			return (Job) jobObject;
		}
		else if (jobObject instanceof Runnable) {
			return new DelegatingJob((Runnable) jobObject);
		}
		else {
			throw new IllegalArgumentException("Unable to execute job class [" + jobObject.getClass().getName() +
					"]: only [org.quartz.Job] and [java.lang.Runnable] supported.");
		}
	}
}



public class SpringBeanJobFactory extends AdaptableJobFactory implements SchedulerContextAware {

    // ... part of the code omitted ...

   
	@Override
	protected Object createJobInstance(TriggerFiredBundle bundle) throws Exception {
		 // SpringBeanJobFactory delegates Job creation to its parent AdaptableJobFactory, so again no proxy is created
		Object job = super.createJobInstance(bundle);
		if (isEligibleForPropertyPopulation(job)) {
			BeanWrapper bw = PropertyAccessorFactory.forBeanPropertyAccess(job);
			MutablePropertyValues pvs = new MutablePropertyValues();
			if (this.schedulerContext != null) {
				pvs.addPropertyValues(this.schedulerContext);
			}
			pvs.addPropertyValues(bundle.getJobDetail().getJobDataMap());
			pvs.addPropertyValues(bundle.getTrigger().getJobDataMap());
			if (this.ignoredUnknownProperties != null) {
				for (String propName : this.ignoredUnknownProperties) {
					if (pvs.contains(propName) && !bw.isWritableProperty(propName)) {
						pvs.removePropertyValue(propName);
					}
				}
				bw.setPropertyValues(pvs);
			}
			else {
				bw.setPropertyValues(pvs, true);
			}
		}
		return job;
	}

	 // ... part of the code omitted ...

}

So neither of Spring's existing JobFactory implementations satisfies our AOP requirement, which means we need a custom JobFactory implementation, as follows.

  • Custom JobFactory implementation
/**
 * Custom JobFactory that hands Job instance creation over to Spring
 * Created by Vio on 2017/11/8.
 */
public class MySpringBeanJobFactory extends AdaptableJobFactory {
    @Autowired
    private AutowireCapableBeanFactory beanFactory;

    @Override
    protected Object createJobInstance(TriggerFiredBundle bundle) throws Exception {
        Object jobInstance;
        Class<? extends Job> jobClass = bundle.getJobDetail().getJobClass();

        // Hand the creation and management of the Job instance over to Spring: first try to fetch
        // the Job bean from Spring's beanFactory; if that fails, have the beanFactory create one,
        // autowiring by name and checking dependencies. Jobs can then use injection and Spring AOP.
        try {
            jobInstance = beanFactory.getBean(jobClass);
        } catch (Exception e) {
            jobInstance = beanFactory.createBean(jobClass, AutowireCapableBeanFactory.AUTOWIRE_BY_NAME, true);
        }
        return jobInstance;
    }
}
  • Custom Quartz configuration
/**
 * Quartz configuration class
 * Created by Vio on 2017/11/2.
 */
@Configuration
public class QuartzConfiguration {
    private static final String QUARTZ_CONFIG = "quartz.properties";

    @Bean
    public MySpringBeanJobFactory mySpringBeanJobFactory(){
        return new MySpringBeanJobFactory();
    }

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean() {
        SchedulerFactoryBean schedulerFactoryBean = new SchedulerFactoryBean();

        // Use the custom JobFactory
        schedulerFactoryBean.setJobFactory(mySpringBeanJobFactory());
        schedulerFactoryBean.setAutoStartup(true);
        // Set the quartz configuration file location
        schedulerFactoryBean.setConfigLocation(new ClassPathResource(QUARTZ_CONFIG));
        return schedulerFactoryBean;
    }

    @Bean
    public Scheduler scheduler() {
        return schedulerFactoryBean().getScheduler();
    }
}
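
With the custom JobFactory in place, a Job can be written as an ordinary Spring bean: dependencies inject normally and Spring AOP advice applies. A minimal sketch (DemoTask is a hypothetical name; QuartzTaskService is the service bean used later in this post):

```java
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class DemoTask implements Job {

    // Injected by Spring, because MySpringBeanJobFactory resolves the Job through the beanFactory
    @Autowired
    private QuartzTaskService quartzTaskService;

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Business logic goes here; this method can also be advised by Spring AOP aspects
        quartzTaskService.selectAllValidTask();
    }
}
```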

Dynamic scheduled tasks (driven by database configuration)

Implementing dynamic tasks is just a matter of reading the task data from the database and using the Scheduler API to (re)schedule the corresponding jobs. On startup, however, the tasks stored in the database must first be loaded into the scheduler (i.e. run a task once Spring has finished starting). The implementation:

/**
 * Scheduled-task starter:
 * starts all valid scheduled tasks when the application starts.
 * Created by Vio on 2017/11/7.
 */
@Component
public class QuartzTaskStarter implements ApplicationListener<ContextRefreshedEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(QuartzTaskStarter.class);

    @Autowired
    private TaskSchedule taskSchedule;
    @Autowired
    private QuartzTaskService quartzTaskService;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        try {
            List<QuartzTaskEntity> tasks = quartzTaskService.selectAllValidTask();
            for (QuartzTaskEntity task : tasks) {
                taskSchedule.scheduleTask(task);
            }
            taskSchedule.startSchedule();
        } catch (Exception e) {
            LOGGER.error("Start all valid task fail.", e);
        }
    }
}


/**
 * Scheduled-task scheduler
 * Created by Vio on 2017/11/2.
 */
@Component
public class TaskSchedule {
    private static final Logger LOGGER = LoggerFactory.getLogger(TaskSchedule.class);

    @Autowired
    private Scheduler scheduler;

    @Autowired
    private QuartzTaskService quartzTaskService;

    public ApiResult scheduleTask(QuartzTaskEntity task) {
        boolean scheduled = false;
        String msg = "Schedule Success!";
        try {
            Class<?> jobClass = Class.forName(task.getTaskClass());
            if (Job.class.isAssignableFrom(jobClass)) {
                JobDetail jobDetail = buildJobDetail(task, (Class<? extends Job>) jobClass);
                Trigger trigger = buildTrigger(task);

                scheduler.scheduleJob(jobDetail, trigger);
                scheduled = true;
                LOGGER.info(msg);
            }
        } catch (ClassNotFoundException e) {
            msg = "Schedule Fail! Class not found!";
            LOGGER.error(msg + "Task: " + task);
        } catch (SchedulerException e) {
            msg = "Schedule Fail! " + e.getMessage();
            LOGGER.error(msg + "Task: " + task, e);
        }
        return ApiResult.build(scheduled, msg, task);
    }

    // ... part of the code omitted ...

    public ApiResult rescheduleTask(QuartzTaskEntity task) {
        boolean flag = false;
        String msg = "Reschedule task Success!";
        try {
            CronTrigger oldTrigger = (CronTrigger) scheduler.getTrigger(getTriggerKey(task));
            // Guard against a missing trigger; only reschedule when the cron expression changed
            if (oldTrigger != null && !oldTrigger.getCronExpression().equalsIgnoreCase(task.getCron())) {
                Trigger newTrigger = buildTrigger(task);
                scheduler.rescheduleJob(getTriggerKey(task), newTrigger);
            }
            flag = true;
        } catch (SchedulerException e) {
            msg = "Reschedule task Fail! " + e.getMessage();
            LOGGER.error(msg + "Task: " + task, e);
        }
        return ApiResult.build(flag, msg, task);
    }

    public synchronized void startSchedule() {
        try {
            // Start the scheduler if it is not already running
            if (!scheduler.isStarted()) {
                scheduler.start();
            }
        } catch (SchedulerException e) {
            LOGGER.error("Start scheduler fail.", e);
        }
    }

    // ... part of the code omitted ...

    private JobDetail buildJobDetail(QuartzTaskEntity task, Class<? extends Job> clazz) {
        return JobBuilder.newJob(clazz).withIdentity(getTaskName(task), getTaskGroup(task)).build();
    }

    private Trigger buildTrigger(QuartzTaskEntity task) {
        return TriggerBuilder.newTrigger().withIdentity(getTaskName(task), getTaskGroup(task)).startNow().withSchedule(CronScheduleBuilder.cronSchedule(task.getCron())).build();
    }

    private JobKey getJobKey(QuartzTaskEntity task) {
        return JobKey.jobKey(getTaskName(task), getTaskGroup(task));
    }

    private TriggerKey getTriggerKey(QuartzTaskEntity task) {
        return TriggerKey.triggerKey(getTaskName(task), getTaskGroup(task));
    }

    private String getTaskName(QuartzTaskEntity task) {
        return StringUtils.isEmpty(task.getTaskName()) ? task.getTaskClass() : task.getTaskName();
    }

    private String getTaskGroup(QuartzTaskEntity task) {
        return StringUtils.isEmpty(task.getTaskGroup()) ? Scheduler.DEFAULT_GROUP : task.getTaskGroup();
    }
}

Add the quartz.properties configuration file

# Simple setup: keep job state in memory
# More configuration options: http://www.quartz-scheduler.org/documentation/quartz-2.2.x/configuration/ConfigMain.html
org.quartz.scheduler.instanceName = MyScheduler
org.quartz.threadPool.threadCount = 3
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore

Integrating Druid

Druid now ships an official starter, so the integration is very simple: just add the corresponding dependency and configuration.

Add the Druid dependency

        <!-- mysql -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>${druid-spring-boot-starter.version}</version>
        </dependency>

Add application.properties configuration

spring.datasource.type=com.alibaba.druid.pool.DruidDataSource

# druid
# see more config about druid: https://github.com/alibaba/druid/tree/master/druid-spring-boot-starter
spring.datasource.druid.initial-size=5
spring.datasource.druid.max-active=20
spring.datasource.druid.min-idle=5
spring.datasource.druid.pool-prepared-statements=true
spring.datasource.druid.max-pool-prepared-statement-per-connection-size=20

spring.datasource.druid.max-wait=60000
spring.datasource.druid.time-between-eviction-runs-millis=60000
spring.datasource.druid.min-evictable-idle-time-millis=300000

spring.datasource.druid.validation-query=SELECT 1 FROM DUAL
spring.datasource.druid.test-on-borrow=false
spring.datasource.druid.test-on-return=false
spring.datasource.druid.test-while-idle=true

spring.datasource.druid.filters= stat,wall,slf4j
spring.datasource.druid.filter.stat.slow-sql-millis= 5000
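
If you also want Druid's built-in monitoring page, the starter can expose it through a few extra properties (the path and credentials below are example values):

```
# Expose the Druid monitoring page at /druid (example values)
spring.datasource.druid.stat-view-servlet.enabled=true
spring.datasource.druid.stat-view-servlet.url-pattern=/druid/*
spring.datasource.druid.stat-view-servlet.login-username=admin
spring.datasource.druid.stat-view-servlet.login-password=admin
```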

Integrating the H2 database into Spring unit tests

DAO test cases operate on database data, and we usually don't want test runs to pollute the data in our development or shared test database. One option is to run the test cases against an embedded H2 database:

Add the dependencies

        <!-- test -->
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>

Add application.properties configuration

This application.properties is the configuration file under the test directory.

# Data source
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.url=jdbc:h2:mem:test
spring.datasource.username=root
spring.datasource.password=

# Schema-creation script executed when the tests start
spring.datasource.schema=classpath:db/schema.sql

# Data-insertion script executed when the tests start
spring.datasource.data=classpath:db/data.sql

You need to create the SQL script files and write the SQL yourself; the rest of the test setup follows the official Spring documentation.
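
As an illustration, the schema and seed-data scripts for the task table might look like this (the table and column names are guesses based on the QuartzTaskEntity fields used above, with map-underscore-to-camel-case enabled — adapt them to your actual entity):

```sql
-- db/schema.sql (H2-compatible): one table holding the dynamic task definitions
CREATE TABLE IF NOT EXISTS quartz_task (
    id          BIGINT AUTO_INCREMENT PRIMARY KEY,
    task_name   VARCHAR(128),
    task_group  VARCHAR(128),
    task_class  VARCHAR(256) NOT NULL,
    cron        VARCHAR(64),
    state       INT DEFAULT 0
);

-- db/data.sql: seed rows for the tests
INSERT INTO quartz_task (task_name, task_group, task_class, cron, state)
VALUES ('demoTask', 'DEFAULT', 'com.example.task.DemoTask', '0/30 * * * * ?', 1);
```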

Write a test case

/**
 * Test class for the scheduled-task entity service
 * Created by Vio on 2017/11/7.
 */
@RunWith(SpringRunner.class)
@SpringBootTest
public class QuartzServiceTest {
    @Autowired
    QuartzTaskService quartzTaskService;

    @Test
    public void testTaskExist() {
        QuartzTaskEntity quartzTaskEntity = new QuartzTaskEntity();
        quartzTaskEntity.setTaskClass("TestTask2");
        quartzTaskEntity.setState(0);
        int inserted = quartzTaskService.insert(quartzTaskEntity);
        Assert.assertEquals(1, inserted);

        Assert.assertTrue(quartzTaskService.taskExist(quartzTaskEntity));
    }
}

Done.
