Spring + Quartz Dynamic Scheduled Tasks: Step-by-Step Analysis

Learning links

【quartz官网配置教程】

分布式定时任务调度框架Quartz
Quartz入门看这一篇文章就够了
Quartz 快速入门案例,看这一篇就够了
Quartz定时任务框架使用教程详解
Quartz 基本使用
quartz使用及原理解析
quartz-深度解析
Spring Boot集成Quartz实现定时任务的动态创建、启动、暂停、恢复、删除
Spring Boot 集成 Quartz(任务调度框架)
SpringBoot整合Quartz

Quartz misfireThreshold用法和触发器处理策略详解

全网最好的一篇讲解Quartz的MisFire机制

Quartz任务调度:MisFire策略和源码分析

Quartz 在misfire模式[错失、补偿执行] 策略

Quartz 调度原理与源码分析
Quartz配置文件详解&生产配置

【开发经验】quartz框架集群执行原理(源码解析)

【任务调度】quartz 任务调度框架工作原理源码分析

quartz集群调度机制调研及源码分析—转载

Quartz使用文档,使用Quartz实现动态任务,Spring集成Quartz,Quartz集群部署,Quartz源码分析

Quartz架构篇 - 分布式任务调度Quartz

Quartz分布式任务调度原理

【Quartz】分布式定时任务

分布式定时任务调度框架【Quartz】学习与实战记录完整篇

Quartz框架(一)—Quartz的基本配置
Quartz框架(二)—jobstore数据库表字段详解
Quartz框架(三)—任务的并行/串行执行
Quartz框架(四)—misfire处理机制
Quartz框架(五)— 有状态的job和无状态job
Quartz框架(六)— Trigger状态转换
Quartz框架(七)— Quartz集群原理
Quartz框架(八)— Quartz实现异步通知
Quartz框架(九)— 动态操作Quartz定时任务
Quartz框架(十)监听

Quartz分布式调度在洞窝智能营销平台中的应用
springquartz的LocalDataSourceJobStore

源码分析 | Spring定时任务Quartz执行全过程源码解读

【quartz表结构及说明】
线程池核心线程是如何保持住的?
Quartz是如何到期触发定时任务的
Quartz Job是如何执行的
Quartz集群原理
spring集成Quartz
quartz job同表不同环境

quartz百科全书

1. quartz-demo

1. Import dependencies


For a Maven project, simply add the dependency below.

<dependency>
    <groupId>org.quartz-scheduler</groupId>
    <artifactId>quartz</artifactId>
    <version>2.3.2</version>
</dependency>

2. log4j.properties

### Settings ###
log4j.rootLogger=DEBUG,stdout
log4j.logger.io.netty=DEBUG
log4j.logger.com.mchange=DEBUG
### Output to the console ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%p %d{HH:mm:ss} [%c:%L] %m%n

3.quartz.properties

By default, Quartz looks for quartz.properties on the classpath; if it is not found, the default quartz.properties bundled inside the Quartz jar is used.

org.quartz.scheduler.instanceName = MyScheduler
# use three worker threads (inline "#" comments are not valid in .properties files, so keep this on its own line)
org.quartz.threadPool.threadCount = 3
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
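
If the default lookup is not enough, a differently named properties file can also be handed to the factory explicitly. A minimal sketch (not part of the original demo; "my-quartz.properties" is a made-up file name):

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class CustomPropertiesDemo {
    public static void main(String[] args) throws SchedulerException {
        // StdSchedulerFactory(String fileName) loads the named file from the working directory
        // or the classpath instead of relying on the default quartz.properties lookup described above
        StdSchedulerFactory factory = new StdSchedulerFactory("my-quartz.properties");
        Scheduler scheduler = factory.getScheduler();
        scheduler.start();
        // ... schedule jobs here ...
        scheduler.shutdown();
    }
}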

4. Listeners

1. JobListener

public class MyJobListener extends JobListenerSupport {
    @Override
    public String getName() {
        return "MyJobListener007";
    }

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
        System.out.println("-------jobWasExecuted-------");
    }

    @Override
    public void jobToBeExecuted(JobExecutionContext context) {
        System.out.println("-------jobToBeExecuted-------");

    }

    @Override
    public void jobExecutionVetoed(JobExecutionContext context) {
        System.out.println("-------jobExecutionVetoed-------");
    }
}

2. SchedulerListener

public class MySchedulerListener extends SchedulerListenerSupport {
    @Override
    public void jobAdded(JobDetail jobDetail) {
        System.out.println("---------jobAdded------------");
    }

    @Override
    public void jobScheduled(Trigger trigger) {
        System.out.println("---------jobScheduled------------");

    }

    @Override
    public void schedulerShutdown() {
        System.out.println("---------schedulerShutdown------------");

    }

    @Override
    public void schedulerShuttingdown() {
        System.out.println("---------schedulerShuttingdown------------");

    }

    @Override
    public void schedulerStarted() {
        System.out.println("---------schedulerStarted------------");

    }

    @Override
    public void schedulerStarting() {
        System.out.println("---------schedulerStarting------------");

    }
}
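
The listeners above still need to be registered with the scheduler's ListenerManager. A minimal sketch of the typical registration (assuming a job identified by "job1" / "group1", as in the test class further below):

import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.impl.matchers.KeyMatcher;

public class ListenerRegistrationDemo {
    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        // scheduler-level listener: notified of scheduler lifecycle events (started, shutdown, jobAdded, ...)
        scheduler.getListenerManager().addSchedulerListener(new MySchedulerListener());

        // job listener scoped to a single job via a KeyMatcher;
        // registering it without a matcher would make it fire for every job
        scheduler.getListenerManager().addJobListener(
                new MyJobListener(),
                KeyMatcher.keyEquals(JobKey.jobKey("job1", "group1")));
    }
}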

5. HelloJob implements the Job interface

/* The Job is wrapped in a JobDetail. The Scheduler fires the matching JobDetail whenever a Trigger's
   conditions are met; each firing creates a brand-new Job instance from the JobDetail, and the JobDetail
   passes its data to that Job in one of two ways:
       1. The JobDetail calls the Job's matching setter for each entry added via usingJobData().
       2. The Job reads the JobDetail from the JobExecutionContext passed to execute(),
          then obtains the JobDataMap from the JobDetail.

   HelloJob must have a public no-arg constructor.
*/

@PersistJobDataAfterExecution
@DisallowConcurrentExecution
public class HelloJob implements Job { 

    private Integer joe;

    private Integer mike;

    public HelloJob() {
        System.out.println("无参构造执行....");
    }

    public void setJoe(Integer joe) {
        this.joe = joe;
        System.out.println("执行setJoe方法....");
    }

    public void setMike(Integer mike) {
        this.mike = mike;
        System.out.println("执行setMike方法....");
    }

    @Override
    public void execute(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        // a different hash code is printed on each firing: a new context (and Job) is created every time
        System.out.println("Hello job...start: " + jobExecutionContext.hashCode());
        // obtain the JobDetail from the JobExecutionContext and read its JobDataMap
        JobDataMap jobDataMap = jobExecutionContext.getJobDetail().getJobDataMap();
        // the values could also be read from the JobDataMap directly:
        /*Integer joe = (Integer) jobDataMap.get("joe");
        Integer mike = (Integer) jobDataMap.get("mike");*/

        System.out.println("Current thread: " + Thread.currentThread());
        System.out.println("joe = " + (joe++) + " , mike = " + (mike++));
        // write the updated values back into the JobDataMap; only with @PersistJobDataAfterExecution
        // are these changes synchronized back to the JobDetail after the job completes
        jobDataMap.put("joe", joe);
        jobDataMap.put("mike", mike);

        try {
            // Concurrency check: the trigger fires every second, but each execution sleeps for 5 seconds.
            // With @DisallowConcurrentExecution, Quartz refuses to run two Job instances for the same
            // JobDetail concurrently, even though the thread pool still has 2 idle threads.
            //
            // Without @DisallowConcurrentExecution, a new execution starts every second until all
            // 3 pool threads are busy; further firings must then wait for a thread to be released
            // (after the 5-second sleep). The thread pool size is therefore an important setting,
            // otherwise it delays the scheduled fire times.
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println();
    }

}

6. QuartzTest.java (the key part)

public class QuartzTest {

    public static void main(String[] args) {

        try {
            // 1. Create the Scheduler and register a SchedulerListener on it
            // Grab the Scheduler instance from the Factory
            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler(); 
            scheduler.getListenerManager().addSchedulerListener(new MySchedulerListener());

            // 2. Start the scheduler
            // and start it off
            scheduler.start();

            // 3. Build a JobDetail from the Job implementation class
            // define the job and tie it to our HelloJob class
            JobDetail job1 = JobBuilder.newJob(HelloJob.class)
                    .withIdentity("job1", "group1")  // unique identity of the JobDetail - name and group
                    .usingJobData("joe", 1)          // entries are placed into the JobDataMap;
                    .usingJobData("mike", 2)         // the JobDetail will call the matching setters
                    .build();                        // (setJoe/setMike) to pass the data to the Job

            // 4. Create the Triggers and define the firing rules
            // Trigger the job to run now, and then repeat every 2 seconds
            Trigger trigger1 = TriggerBuilder.newTrigger()
                    .withIdentity("trigger1", "group1")  // unique identity of the trigger - name and group
                    .startNow()
                    .withSchedule(                       // firing rule: any ScheduleBuilder; cronSchedule is the most common
                            SimpleScheduleBuilder
                                .simpleSchedule()
                                .withIntervalInSeconds(2)
                                .repeatForever()
                    ).build();

            Trigger trigger2 = TriggerBuilder.newTrigger()
                    .withIdentity("trigger2", "group2")
                    .startAt(new Date(new Date().getTime() + 2000))  // start time
                    .endAt(new GregorianCalendar(2020, 6, 14, 23, 24, 0).getTime()) // end time (month 6 means July)
                    .withSchedule(
                            CronScheduleBuilder          // firing rule: cronSchedule
                            .cronSchedule("0/1 * * * * ?")
                            .withMisfireHandlingInstructionIgnoreMisfires()
                    ).build();

		
           // Register a JobListener on the scheduler (optionally scoped to one job via a KeyMatcher)
           // MyJobListener myJobListener = new MyJobListener();
           // scheduler.getListenerManager().addJobListener(myJobListener
                            /*, KeyMatcher.keyEquals(new JobKey("job1","group1")));*/

            // 5. Have the Scheduler schedule the JobDetail with the Trigger
            // Tell quartz to schedule the job using our trigger
            scheduler.scheduleJob(job1, trigger2);

            // after 60 seconds the main thread shuts the scheduler down
            Thread.sleep(60000);
            scheduler.shutdown();

        } catch (SchedulerException se) {
            se.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

2. Spring integration with Quartz

1. Import dependencies

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.zzhua</groupId>
    <artifactId>quartz-demo03</artifactId>
    <version>1.0-SNAPSHOT</version>

    <packaging>war</packaging>

    <properties>

        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <jsp.version>2.1</jsp.version>
        <servlet.version>3.1.0</servlet.version>
        <spring.version>5.0.2.RELEASE</spring.version>

    </properties>

    <dependencies>

        <dependency>
            <groupId>javax.servlet.jsp</groupId>
            <artifactId>jsp-api</artifactId>
            <version>${jsp.version}</version>
            <scope>provided</scope>
        </dependency>

        <!-- javax.servlet-api -->
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>javax.servlet-api</artifactId>
            <version>${servlet.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>5.0.2.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context-support</artifactId>
            <version>5.0.2.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.quartz-scheduler</groupId>
            <artifactId>quartz</artifactId>
            <version>2.3.2</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-tx</artifactId>
            <version>5.0.2.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-web</artifactId>
            <version>${spring.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>${spring.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>5.0.2.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.26</version>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.16.10</version>
        </dependency>

        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.68</version>
        </dependency>

    </dependencies>

    <build>

        <plugins>
            <!-- 配置tomcat的运行插件 -->
            <plugin>
                <groupId>org.apache.tomcat.maven</groupId>
                <artifactId>tomcat7-maven-plugin</artifactId>
                <version>2.2</version>
                <configuration>
                    <port>8080</port>
                    <path>/</path>
                </configuration>
            </plugin>

            <!-- 配置jdk的编译版本 -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.7.0</version>
                <configuration>
                    <!-- 指定source和target的版本 -->
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>


</project>

2.log4j.properties

log4j.rootLogger=info, stdout, R

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout

log4j.appender.stdout.layout.ConversionPattern=%5p - %m%n

log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=firestorm.log

log4j.appender.R.MaxFileSize=100KB
log4j.appender.R.MaxBackupIndex=1

log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%m%n

3.spring-quartz.xml

Two ways to configure a job

Option 1: implement the Job interface,
    hand the Job implementation class to a JobDetail (JobDetailFactoryBean),
    then let the Scheduler run the JobDetail according to its Trigger.

Option 2: do not implement the Job interface,
    hand the target bean (e.g. oceanStatusJob) and the name of the method to run to a MethodInvokingJobDetailFactoryBean,
    wire that MethodInvokingJobDetailFactoryBean into a CronTriggerFactoryBean together with a cron expression,
    and register the CronTriggerFactoryBean with the SchedulerFactoryBean.
    (A Java-config sketch of option 2 follows after the XML below.)

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="jobDetail" class="org.springframework.scheduling.quartz.JobDetailFactoryBean">
        <property name="jobClass" value="com.zzhua.job.MyJob"></property>
        <property name="applicationContextJobDataKey" value="springFactory"></property>
        <property name="group" value="jobGroup"></property>
        <property name="name" value="jobName"></property>
    </bean>

    <bean id="cronTrigger1" class="org.springframework.scheduling.quartz.CronTriggerFactoryBean">
        <property name="group" value="triggerGroup1"></property>
        <property name="name" value="triggerName1"></property>
        <property name="cronExpression" value="0/1 * * * * ?"></property>
        <property name="jobDetail" ref="jobDetail"></property>
    </bean>

    <bean id="cronTrigger2" class="org.springframework.scheduling.quartz.CronTriggerFactoryBean">
        <property name="group" value="triggerGroup2"></property>
        <property name="name" value="triggerName2"></property>
        <property name="cronExpression" value="0/2 * * * * ?"></property>
        <property name="jobDetail" ref="jobDetail"></property>
    </bean>

    <bean id="scheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
        <property name="triggers">
            <array>
                <!--此处可选择配置固定的系统任务-->
                <!--<ref bean="cronTrigger1"/>-->
                <!--<ref bean="cronTrigger2"/>-->
            </array>
        </property>
    </bean>

</beans>

4. MyJob (fixed system task)

public class MyJob extends QuartzJobBean {

    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
        System.out.println("---------系统任务开始----------:
                           " + new SimpleDateFormat("HH:mm:ss").format(new Date()));
        ApplicationContext applicationContext 
            = (ApplicationContext) context.getJobDetail().getJobDataMap().get("springFactory");
        System.out.println("bean工厂->" + applicationContext);
        JobDetail jobDetail = context.getJobDetail();
        System.out.println("jobDetail" + jobDetail);
        Trigger trigger = context.getTrigger();
        System.out.println("trigger" + trigger);
        System.out.println();
        System.out.println("---------系统任务结束---------- ");
        System.out.println();
    }
}
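
With the container stored under the "springFactory" key (the applicationContextJobDataKey configured above), a job can look up any Spring-managed bean. A minimal sketch, where SomeService is a hypothetical bean rather than part of the original example:

import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.springframework.context.ApplicationContext;
import org.springframework.scheduling.quartz.QuartzJobBean;

public class SpringAwareJob extends QuartzJobBean {

    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
        // the key must match the applicationContextJobDataKey set on JobDetailFactoryBean
        ApplicationContext ctx = (ApplicationContext) context.getJobDetail()
                .getJobDataMap()
                .get("springFactory");
        // any managed bean can now be fetched and used; SomeService is a hypothetical bean
        SomeService someService = ctx.getBean(SomeService.class);
        someService.doWork();
    }
}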

5. HelloJob (dynamic task)

@Slf4j
public class HelloJob extends QuartzJobBean {


    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {

        log.info("--------------------开始执行任务---------------------{}",
                 new SimpleDateFormat("HH:mm:ss").format(new Date()));
        log.info("jobDetail: {} , {}", 
                 context.getJobDetail().getKey().getName(), 
                 context.getJobDetail().getKey().getGroup());
        log.info("trigger:{} , {}", 
                 context.getTrigger().getKey().getName(), 
                 context.getTrigger().getKey().getGroup());
        System.out.println();

    }
}

6. Adding, pausing, updating and deleting dynamic tasks

@RestController
@RequestMapping("job")
public class JobController {

    @Autowired
    Scheduler scheduler;

    // add a job
    @RequestMapping(value = "addJob",produces = "application/json;charset=UTF-8")
    public String addJob(SysJob sysJob) throws ClassNotFoundException, SchedulerException {

        JobDetail jobDetail = JobBuilder
                .newJob((Class<? extends Job>) Class.forName(sysJob.getJobclassname()))
                .withIdentity(sysJob.getJobname(), sysJob.getJobgroup())
                .build();

        Trigger trigger = TriggerBuilder
                .newTrigger()
                .withIdentity(sysJob.getTriggername(), sysJob.getTriggergroup())
                .withSchedule(CronScheduleBuilder.cronSchedule(sysJob.getCronExpression()))
                .build();

        scheduler.scheduleJob(jobDetail, trigger);

        return "添加job成功";

    }

    // pause a job
    @RequestMapping(value = "pauseJob",produces = "application/json;charset=UTF-8")
    public String pause(SysJob sysJob) throws SchedulerException {
        // pause by JobKey; it can also be paused via its TriggerKey
        scheduler.pauseJob(JobKey.jobKey(sysJob.getJobname(), sysJob.getJobgroup()));
        return "job paused";
    }

    // resume a job
    @RequestMapping(value = "resumeJob",produces = "application/json;charset=UTF-8")
    public String resumeJob(SysJob sysJob) throws SchedulerException {
        // resume by JobKey; it can also be resumed via its TriggerKey
        scheduler.resumeJob(JobKey.jobKey(sysJob.getJobname(), sysJob.getJobgroup()));
        return "job resumed";
    }

    // update a job (reschedule its trigger)
    @RequestMapping(value = "updateJob",produces = "application/json;charset=UTF-8")
    public String updateJob(SysJob sysJob) throws SchedulerException {

        scheduler.rescheduleJob(
                TriggerKey.triggerKey(sysJob.getTriggername(), sysJob.getTriggergroup()),
                TriggerBuilder
                        .newTrigger()
                        .withIdentity(sysJob.getTriggername(), sysJob.getTriggergroup())
                 		.withSchedule(
                        		CronScheduleBuilder.cronSchedule(sysJob.getCronExpression())
                          )
                        .startNow() // start immediately after rescheduling
                        .build()
        );

        return "修改job成功";


    }

    // delete a job
    @RequestMapping(value = "deleteJob",produces = "application/json;charset=UTF-8")
    public String deleteJob(SysJob sysJob) throws SchedulerException {

        scheduler.pauseTrigger(
            TriggerKey.triggerKey(sysJob.getTriggername(), sysJob.getTriggergroup())
        );
        
        scheduler.unscheduleJob(
            TriggerKey.triggerKey(sysJob.getTriggername(), sysJob.getTriggergroup())
        );
        
        scheduler.deleteJob(JobKey.jobKey(sysJob.getJobname(), sysJob.getJobgroup()));
        
        return "删除job成功";
    }

    // check whether the scheduler contains a given JobKey or TriggerKey
    @RequestMapping(value = "checkJob",produces = "application/json;charset=UTF-8")
    public String checkJob(SysJob sysJob) throws SchedulerException {

        boolean b1 = scheduler.checkExists(
            JobKey.jobKey(sysJob.getJobname(), sysJob.getJobgroup())
        ); 
        System.out.println("jobKey: " + b1);

        boolean b2 = scheduler.checkExists(
            TriggerKey.triggerKey(sysJob.getTriggername(), sysJob.getTriggergroup())
        );
        
        System.out.println("triggerKey: " + b2);

        Map<String, Boolean> map = new HashMap<>();
        map.put("jobKey", b1);
        map.put("triggerKey", b2);
        return JSON.toJSONString(map);
    }

    // check the state of a job's trigger
    @RequestMapping(value = "checkState",produces = "application/json;charset=UTF-8")
    public String checkState(SysJob sysJob) throws SchedulerException {

        TriggerKey triggerKey = TriggerKey.triggerKey(
                                                    sysJob.getTriggername(),
                                                    sysJob.getTriggergroup()
        										);
        
        Trigger.TriggerState triggerState = scheduler.getTriggerState(triggerKey);
        
        System.out.println(triggerState.name());
        return JSON.toJSONString(triggerState);

    }

}
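
The SysJob parameter bound by the controller above is not listed in the original article; a minimal sketch of it, inferred from the getters the controller calls (the field names are assumptions):

import lombok.Data;

@Data
public class SysJob {
    private String jobname;        // JobDetail name
    private String jobgroup;       // JobDetail group
    private String jobclassname;   // fully qualified Job class name, e.g. com.zzhua.job.HelloJob
    private String triggername;    // Trigger name
    private String triggergroup;   // Trigger group
    private String cronExpression; // cron expression for the trigger
}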

7. Summary

The Scheduler identifies JobDetail and Trigger objects by their unique JobKey and TriggerKey.

A Trigger can be bound to only one JobDetail.

A JobDetail can be bound to (fired by) multiple Triggers.

A JobDetail wraps one implementation class of the Job interface.

A Trigger wraps one Schedule.

@DisallowConcurrentExecution on the Job implementation class prevents a new Job instance from being run for the same JobDetail while a previous execution has not yet finished.

@PersistJobDataAfterExecution on the Job implementation class causes changes the job makes to the JobDataMap to be written back to the JobDetail after execution.

Question 1: how does the Job implementation class get hold of the Spring container?


The analysis is as follows:
public class JobDetailFactoryBean implements 
    FactoryBean<JobDetail>, 
    BeanNameAware,
    ApplicationContextAware, 
    InitializingBean 
{...}

As shown above, JobDetailFactoryBean implements a series of Spring interfaces:

FactoryBean: a bean implementing this interface is treated specially by the container:
	when the bean is looked up by name, what is returned is not the bean itself
	but the object returned by its getObject() method (it also overrides a method telling Spring whether that object is a singleton).

InitializingBean: once the bean has been instantiated and its properties injected, its afterPropertiesSet() method is called.

ApplicationContextAware: after the bean has been instantiated and its properties injected, and after the built-in Aware callbacks (BeanNameAware, BeanClassLoaderAware, BeanFactoryAware) have run, Spring executes applyBeanPostProcessorsBeforeInitialization(wrappedBean, beanName). That method runs all registered BeanPostProcessors (the extension point); one of them, ApplicationContextAwareProcessor, checks whether the bean implements ApplicationContextAware and, if so, calls its setApplicationContext(applicationContext), handing it the Spring container. Only afterwards does invokeInitMethods(beanName, wrappedBean, mbd) run, which calls afterPropertiesSet() on beans implementing InitializingBean.

This raises a question: when and how is the ApplicationContextAwareProcessor post-processor registered?
Putting a breakpoint on AbstractBeanFactory's addBeanPostProcessor method shows that it is registered with the container inside refresh(), in the prepareBeanFactory(beanFactory) step.


Back to the original question: in afterPropertiesSet() (from InitializingBean), JobDetailFactoryBean executes getJobDataMap().put(this.applicationContextJobDataKey, this.applicationContext), storing the Spring container in the JobDataMap under the key configured via the applicationContextJobDataKey property. Since that property is set during property injection (in spring-quartz.xml above it is set to "springFactory"), the job can later read the Spring container out of the JobDataMap.


@Override
public void afterPropertiesSet() {
    Assert.notNull(this.jobClass, "Property 'jobClass' is required");

    if (this.name == null) {
        this.name = this.beanName;
    }
    if (this.group == null) {
        this.group = Scheduler.DEFAULT_GROUP;
    }
    if (this.applicationContextJobDataKey != null) {
        if (this.applicationContext == null) {
            throw new IllegalStateException(
                "JobDetailBean needs to be set up in an ApplicationContext " +
                "to be able to handle an 'applicationContextJobDataKey'");
        }
        // store the Spring container in the JobDataMap
        getJobDataMap().put(this.applicationContextJobDataKey, this.applicationContext);
    }

    JobDetailImpl jdi = new JobDetailImpl();
    jdi.setName(this.name != null ? this.name : toString());
    jdi.setGroup(this.group);
    jdi.setJobClass(this.jobClass);
    jdi.setJobDataMap(this.jobDataMap); // the JobDetail receives the factory bean's JobDataMap, and with it the Spring container
    jdi.setDurability(this.durability);
    jdi.setRequestsRecovery(this.requestsRecovery);
    jdi.setDescription(this.description);
    this.jobDetail = jdi;
}

@Override
@Nullable
public JobDetail getObject() {
    return this.jobDetail;  // the JobDetail is what gets exposed as the bean
}
@Override
public void refresh() throws BeansException, IllegalStateException {
   synchronized (this.startupShutdownMonitor) {
      // Prepare this context for refreshing.
      prepareRefresh();

      // Tell the subclass to refresh the internal bean factory.
      ConfigurableListableBeanFactory beanFactory = obtainFreshBeanFactory();

      // Prepare the bean factory for use in this context.
      prepareBeanFactory(beanFactory); // here Spring registers the ApplicationContextAwareProcessor itself

      try {
         // Allows post-processing of the bean factory in context subclasses.
         postProcessBeanFactory(beanFactory);

         // Invoke factory processors registered as beans in the context.
         invokeBeanFactoryPostProcessors(beanFactory);

         // Register bean processors that intercept bean creation.
         registerBeanPostProcessors(beanFactory);

         // Initialize message source for this context.
         initMessageSource();

         // Initialize event multicaster for this context.
         initApplicationEventMulticaster();

         // Initialize other special beans in specific context subclasses.
         onRefresh();

         // Check for listener beans and register them.
         registerListeners();

         // Instantiate all remaining (non-lazy-init) singletons.
         finishBeanFactoryInitialization(beanFactory);

         // Last step: publish corresponding event.
         finishRefresh();
      }

      catch (BeansException ex) {
         if (logger.isWarnEnabled()) {
            logger.warn("Exception encountered during context initialization - " +
                  "cancelling refresh attempt: " + ex);
         }

         // Destroy already created singletons to avoid dangling resources.
         destroyBeans();

         // Reset 'active' flag.
         cancelRefresh(ex);

         // Propagate exception to caller.
         throw ex;
      }

      finally {
         // Reset common introspection caches in Spring's core, since we
         // might not ever need metadata for singleton beans anymore...
         resetCommonCaches();
      }
   }
}

Question 2: why does scheduler.start() not need to be called manually?

// This is achieved through the DefaultLifecycleProcessor extension.

// Looking at SchedulerFactoryBean:
public class SchedulerFactoryBean extends SchedulerAccessor implements 
    FactoryBean<Scheduler>,
    BeanNameAware, 
    ApplicationContextAware, 
    InitializingBean, 
    DisposableBean,
    SmartLifecycle {/*...*/}
// It implements SmartLifecycle and therefore (indirectly) Lifecycle, overriding start().
public interface SmartLifecycle extends Lifecycle, Phased {
	boolean isAutoStartup();
	void stop(Runnable callback);
}

// What SchedulerFactoryBean's start() override does:
@Override
public void start() throws SchedulingException {
    if (this.scheduler != null) {
        try {                                                   // [next question: how is this.scheduler initialized?]
            startScheduler(this.scheduler, this.startupDelay);  // delegates to the method below
        }
        catch (SchedulerException ex) {
            throw new SchedulingException("Could not start Quartz Scheduler", ex);
        }
    }
}

// called by start() above
/**
* Start the Quartz Scheduler, respecting the "startupDelay" setting.
* @param scheduler the Scheduler to start
* @param startupDelay the number of seconds to wait before starting
* the Scheduler asynchronously
*/
protected void startScheduler(final Scheduler scheduler, final int startupDelay) {
    if (startupDelay <= 0) {
        scheduler.start(); // start the scheduler immediately
    }
    else {
        // Not using the Quartz startDelayed method since we explicitly want a daemon
        // thread here, not keeping the JVM alive in case of all other threads ending.
        Thread schedulerThread = new Thread() {
            @Override
            public void run() {
                try {
                    Thread.sleep(startupDelay * 1000);
                }
                catch (InterruptedException ex) {
                    // simply proceed
                }
                scheduler.start();  // start the scheduler after the delay
            }
        };
        schedulerThread.setName("Quartz Scheduler [" + scheduler.getSchedulerName() + "]");
        schedulerThread.setDaemon(true);
        schedulerThread.start();
    }
}
// So the scheduler's start() does get called. But how is the Lifecycle start() method itself invoked?


// Putting a breakpoint on start() and debugging shows that it is
// DefaultLifecycleProcessor#doStart(...) that calls SchedulerFactoryBean's start() method.

/**
* Start the specified bean as part of the given set of Lifecycle beans,
* making sure that any beans that it depends on are started first.
* @param lifecycleBeans Map with bean name as key and Lifecycle instance as value
* @param beanName the name of the bean to start
*/
private void doStart(
    Map<String, ? extends Lifecycle> lifecycleBeans,  // 是一个LinkedHashMap
    String beanName,                                  
    boolean autoStartupOnly
) 
{
    Lifecycle bean = lifecycleBeans.remove(beanName); // take the bean out of the LinkedHashMap passed in
    if (bean != null && !this.equals(bean)) {
        // obtain the BeanFactory (DefaultLifecycleProcessor also implements BeanFactoryAware)
        // and start the beans this bean depends on first
        String[] dependenciesForBean = getBeanFactory().getDependenciesForBean(beanName);
        for (String dependency : dependenciesForBean) {
            doStart(lifecycleBeans, dependency, autoStartupOnly);
        }
        
        if (
                !bean.isRunning() && // the bean is not already running, and
            (
                !autoStartupOnly ||                     // autoStartupOnly is true here, so this operand is false
                !(bean instanceof SmartLifecycle) ||    // the bean does not implement SmartLifecycle, or
                ((SmartLifecycle) bean).isAutoStartup() // its isAutoStartup() returns true
            )                                           // (SchedulerFactoryBean returns true by default)
            // so the whole condition evaluates to true for the SchedulerFactoryBean
           ) {
            try {
                bean.start(); // the actual start() call
            }
            catch (Throwable ex) {
                throw new ApplicationContextException(
                    "Failed to start bean '" + beanName + "'", ex);
            }
        }
    }
}   

// And how is DefaultLifecycleProcessor's doStart() itself invoked?
// It is called from start() of DefaultLifecycleProcessor's inner class LifecycleGroup.
private class LifecycleGroup {
    
    private final List<LifecycleGroupMember> members = new ArrayList<>();
    
    private final Map<String, ? extends Lifecycle> lifecycleBeans;
    
    private final boolean autoStartupOnly;

    private volatile int smartMemberCount;

    public LifecycleGroup(int phase, 
                          long timeout, 
                          Map<String, ? extends Lifecycle> lifecycleBeans, // 给lifecyleBeans赋值
                          boolean autoStartupOnly) {
        this.phase = phase;
        this.timeout = timeout;
        this.lifecycleBeans = lifecycleBeans;
        this.autoStartupOnly = autoStartupOnly;
    }
    
    public void add(String name, Lifecycle bean) {
        if (bean instanceof SmartLifecycle) {
            this.smartMemberCount++;
        }
        this.members.add(new LifecycleGroupMember(name, bean)); // wrap the bean in a LifecycleGroupMember
    }                                                           // and add it to members
    
    public void start() {
        if (this.members.isEmpty()) {
            return;
        }
        if (logger.isInfoEnabled()) {
            logger.info("Starting beans in phase " + this.phase);
        }
        Collections.sort(this.members);
        for (LifecycleGroupMember member : this.members) {
            // iterate over all members and check whether each
            // member's name is still present in lifecycleBeans
            if (this.lifecycleBeans.containsKey(member.name)) {
                doStart(this.lifecycleBeans, member.name, this.autoStartupOnly);
            }
        }
    } 
}
 
// The inner class's start() is in turn called from DefaultLifecycleProcessor#startBeans(boolean autoStartupOnly)
private void startBeans(boolean autoStartupOnly) { // called with true
    
    // the key step: using the BeanFactory obtained earlier, fetch every bean in the container that implements Lifecycle
    Map<String, Lifecycle> lifecycleBeans = getLifecycleBeans();
    Map<Integer, LifecycleGroup> phases = new HashMap<>();
    
    lifecycleBeans.forEach((beanName, bean) -> {
        if (!autoStartupOnly || 
            (bean instanceof SmartLifecycle && ((SmartLifecycle) bean).isAutoStartup())
           ) {
            // getPhase(bean) returns the value of getPhase() if the bean implements Phased, otherwise 0
            int phase = getPhase(bean);

            // look up the LifecycleGroup for this phase value
            LifecycleGroup group = phases.get(phase);

            // if there is no LifecycleGroup for this phase yet, create one
            if (group == null) {
                group = new LifecycleGroup(phase,    // the bean's phase value obtained above
                                           this.timeoutPerShutdownPhase,  // defaults to 30000
                                           lifecycleBeans, // all Lifecycle beans in the container
                                           autoStartupOnly); 
                phases.put(phase, group);
            }

            // put the bean into the LifecycleGroup; this is where the inner class's add() is called
            group.add(beanName, bean);
        }
    });
        }
    });
    if (!phases.isEmpty()) {
        List<Integer> keys = new ArrayList<>(phases.keySet());
        Collections.sort(keys);
        for (Integer key : keys) {
            phases.get(key).start(); // call start() on each group (and thus on each bean) in phase order
        }
    }
}    

// startBeans() is called from DefaultLifecycleProcessor's onRefresh() method;
// 		onRefresh() overrides the LifecycleProcessor interface, which itself extends Lifecycle.
@Override
public void onRefresh() {
    startBeans(true);
    this.running = true;
}
    
// onRefresh() is called from AbstractApplicationContext#finishRefresh()
protected void finishRefresh() {  // finishRefresh is the last step of refresh(): the context is done refreshing and the corresponding events are published
    // Clear context-level resource caches (such as ASM metadata from scanning).
    clearResourceCaches();

    // this is where the DefaultLifecycleProcessor gets registered
    // Initialize lifecycle processor for this context.
    initLifecycleProcessor();

    // Propagate refresh to lifecycle processor first.
    // here it is invoked: the lifecycleProcessor property is fetched and its onRefresh() is called
    // (so how does that lifecycleProcessor property get populated? see below)
    getLifecycleProcessor().onRefresh(); 

    // Publish the final event.
    publishEvent(new ContextRefreshedEvent(this));

    // Participate in LiveBeansView MBean, if active.
    LiveBeansView.registerApplicationContext(this);
}

// Putting a breakpoint on AbstractApplicationContext's lifecycleProcessor field leads to
// AbstractApplicationContext#initLifecycleProcessor:
    /**
	 * Initialize the LifecycleProcessor.
	 * Uses DefaultLifecycleProcessor if none defined in the context.
	 * @see org.springframework.context.support.DefaultLifecycleProcessor
	 */
    protected void initLifecycleProcessor() {
        // obtain the BeanFactory (the BeanFactory is composed into the ApplicationContext)
        ConfigurableListableBeanFactory beanFactory = getBeanFactory();
        // check whether the factory already contains a bean named "lifecycleProcessor"
        if (beanFactory.containsLocalBean(LIFECYCLE_PROCESSOR_BEAN_NAME)) {
            this.lifecycleProcessor = beanFactory.getBean(
                                            LIFECYCLE_PROCESSOR_BEAN_NAME, // the fixed bean name
                                            LifecycleProcessor.class       // the expected type
            );
        }
        else {
            // if not, create a default DefaultLifecycleProcessor
            DefaultLifecycleProcessor defaultProcessor = new DefaultLifecycleProcessor();
            defaultProcessor.setBeanFactory(beanFactory);
            this.lifecycleProcessor = defaultProcessor; // the lifecycleProcessor property is now populated
            beanFactory.registerSingleton(LIFECYCLE_PROCESSOR_BEAN_NAME, this.lifecycleProcessor);
        }
    }


    
    
    
// Application: define a bean implementing SmartLifecycle and hand it to Spring
@Component
public class MyLifecycle implements SmartLifecycle {
    @Override
    public void start() {
        System.out.println("myLifecycle starting...");
    }

    @Override
    public void stop() {
        System.out.println("myLifecycle stopped...");
    }

    @Override
    public boolean isRunning() {
        return false;
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }

    @Override
    public void stop(Runnable callback) {

    }

    @Override
    public int getPhase() {
        return 1;
    }
}
    
// The following output is printed:
INFO - Starting beans in phase 1
myLifecycle starting...

Question 3: how is this.scheduler inside SchedulerFactoryBean initialized?

// The analysis above showed how SchedulerFactoryBean's scheduler has its start() method invoked. But how does the scheduler field itself get assigned?
// Note that SchedulerFactoryBean implements InitializingBean, whose callback runs after the bean is instantiated and its properties are injected.
// Its afterPropertiesSet() looks like this:
@Override
public void afterPropertiesSet() throws Exception {
    if (this.dataSource == null && this.nonTransactionalDataSource != null) {
        this.dataSource = this.nonTransactionalDataSource;
    }

    if (this.applicationContext != null && this.resourceLoader == null) {
        this.resourceLoader = this.applicationContext;
    }

    // Create SchedulerFactory instance...
    SchedulerFactory schedulerFactory 
        = BeanUtils.instantiateClass(this.schedulerFactoryClass);
    // Load and/or apply Quartz properties to the given SchedulerFactory.
    initSchedulerFactory(schedulerFactory);

    if (this.resourceLoader != null) {
        // Make given ResourceLoader available for SchedulerFactory configuration.
        configTimeResourceLoaderHolder.set(this.resourceLoader);
    }
    if (this.taskExecutor != null) {
        // Make given TaskExecutor available for SchedulerFactory configuration.
        configTimeTaskExecutorHolder.set(this.taskExecutor);
    }
    if (this.dataSource != null) {
        // Make given DataSource available for SchedulerFactory configuration.
        configTimeDataSourceHolder.set(this.dataSource);
    }
    if (this.nonTransactionalDataSource != null) {
        // Make given non-transactional DataSource available 
        // for SchedulerFactory configuration.
        configTimeNonTransactionalDataSourceHolder.set(this.nonTransactionalDataSource);
    }

    // After the SchedulerFactory has been fully configured, the Scheduler is obtained from it
    // and assigned to SchedulerFactoryBean's scheduler field.
    // Get Scheduler instance from SchedulerFactory.
    try {

        this.scheduler = createScheduler(schedulerFactory, this.schedulerName);

        populateSchedulerContext();
        if (!this.jobFactorySet && !(this.scheduler instanceof RemoteScheduler)) {
            // Use AdaptableJobFactory as default for a local Scheduler, unless when
            // explicitly given a null value through the "jobFactory" bean property.
            this.jobFactory = new AdaptableJobFactory();
        }
        if (this.jobFactory != null) {
            if (this.jobFactory instanceof SchedulerContextAware) {
                ((SchedulerContextAware) this.jobFactory).setSchedulerContext(this.scheduler.getContext());
            }
            this.scheduler.setJobFactory(this.jobFactory); // hand the JobFactory to the scheduler
        }
    }
    }

    finally {
        if (this.resourceLoader != null) {
            configTimeResourceLoaderHolder.remove();
        }
        if (this.taskExecutor != null) {
            configTimeTaskExecutorHolder.remove();
        }
        if (this.dataSource != null) {
            configTimeDataSourceHolder.remove();
        }
        if (this.nonTransactionalDataSource != null) {
            configTimeNonTransactionalDataSourceHolder.remove();
        }
    }

    registerListeners();
    registerJobsAndTriggers();
}

// That is how the scheduler gets created. So how does this scheduler become a Spring-managed bean that can be injected?
// Note that SchedulerFactoryBean also implements FactoryBean; its getObject() override:
@Override
@Nullable
public Scheduler getObject() {
    return this.scheduler; // the scheduler itself is returned as the exposed bean
}
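
For completeness, a minimal sketch (not from the original article) of declaring the same SchedulerFactoryBean in Java config; the setters correspond to the properties analyzed above, and cronTrigger1 refers to the trigger bean defined in spring-quartz.xml:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.CronTriggerFactoryBean;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class SchedulerConfig {

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean(CronTriggerFactoryBean cronTrigger1) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setTriggers(cronTrigger1.getObject()); // same as the <property name="triggers"> array in XML
        factory.setStartupDelay(5);                    // startScheduler() then starts it on a daemon thread after 5s
        factory.setOverwriteExistingJobs(true);        // replace jobs already stored under the same key
        factory.setAutoStartup(true);                  // default; picked up by DefaultLifecycleProcessor as shown above
        return factory;
    }
}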

3. Bootstrapping Tomcat via the web SPI mechanism

With the spring-web and spring-webmvc dependencies on the classpath:

spring-web ships a file named javax.servlet.ServletContainerInitializer under META-INF/services.
That file contains the fully qualified name of a bootstrap class, org.springframework.web.SpringServletContainerInitializer, which implements the ServletContainerInitializer interface.

When Tomcat starts, it scans the classpath for META-INF/services files named after ServletContainerInitializer, reads the class name listed there, and loads that class, i.e. SpringServletContainerInitializer.

SpringServletContainerInitializer declares the types it is interested in via @HandlesTypes({WebApplicationInitializer.class}). Tomcat then finds every non-abstract class implementing WebApplicationInitializer (such as the class below) and calls initializer.onStartup(servletContext) on each one, passing in the ServletContext.

Through this step, our own class MyWebAppInitializer obtains the ServletContext during Tomcat startup
and can use it to register the three kinds of web components (servlets, filters, listeners).

Most of MyWebAppInitializer's work is already done by its parent classes: registering the ContextLoaderListener with Tomcat and loading the Spring (root) configuration class, and registering the DispatcherServlet and loading the Spring MVC configuration class. We only need to override the methods that point to the configuration classes. As an extension, other parent methods can also be overridden to register additional components during startup.

Note: this SPI-based bootstrap and the classic web.xml-based bootstrap can coexist in Tomcat.

1.MyWebAppInitializer

package com.zzhua;

import com.zzhua.config.AppConfig;
import com.zzhua.config.RootConfig;
import org.springframework.web.servlet.support
    .AbstractAnnotationConfigDispatcherServletInitializer;
public class MyWebAppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

   // configuration class of the root (parent) container - the Spring configuration
   @Override
   protected Class<?>[] getRootConfigClasses() {
      // TODO Auto-generated method stub
      return new Class<?>[]{RootConfig.class};
   }

   // configuration class of the web (child) container - the Spring MVC configuration
   @Override
   protected Class<?>[] getServletConfigClasses() {
      // TODO Auto-generated method stub
      return new Class<?>[]{AppConfig.class};
   }

   // mapping for the DispatcherServlet
   //  "/"  : intercepts all requests (including static resources such as xx.js, xx.png) but not *.jsp
   //  "/*" : intercepts everything, *.jsp included; JSP pages are normally handled by Tomcat's JSP engine
   @Override
   protected String[] getServletMappings() {
      // TODO Auto-generated method stub
      return new String[]{"/"};
   }
    
    /**
    // For example, an additional servlet can be registered here by overriding onStartup:
        @Override
        public void onStartup(ServletContext servletContext) throws ServletException {
              ServletRegistration.Dynamic test2Servlet 
                                    = servletContext.addServlet("test2", new TestServlet());
              test2Servlet.addMapping("/test2");
              super.onStartup(servletContext);

        }
   */

}

2.RootConfig

The root (parent) Spring container - the main Spring configuration class

@ComponentScan("com.zzhua.job")
@ImportResource(locations = "classpath:spring-quartz.xml")
public class RootConfig {

}

3.AppConfig

The Spring MVC (child) container - the Spring MVC configuration class

package com.zzhua.config;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.ComponentScan.Filter;
import org.springframework.context.annotation.FilterType;
import org.springframework.context.annotation.Import;
import org.springframework.context.annotation.ImportResource;
import org.springframework.stereotype.Controller;
import org.springframework.web.servlet.config.annotation.*;

//Spring MVC (the child container) only scans controllers
//useDefaultFilters=false would disable the default filter rules
@ComponentScan("com.zzhua.controller")
@EnableWebMvc
public class AppConfig extends WebMvcConfigurerAdapter  {


    // enable access to static resources
    @Override
    public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
        // TODO Auto-generated method stub
        configurer.enable();
    }

    // interceptors
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // TODO Auto-generated method stub
        //super.addInterceptors(registry);
//        registry.addInterceptor(new MyFirstInterceptor()).addPathPatterns("/**");
    }

}

4. Spring Boot integration with Quartz

1. Import dependencies

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.2.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <groupId>com.zzhua</groupId>
    <artifactId>springboot-quartz-demo02</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>springboot-quartz-demo02</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-quartz</artifactId>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>1.3.1</version>
        </dependency>
        <dependency>
            <groupId>com.mchange</groupId>
            <artifactId>c3p0</artifactId>
            <version>0.9.5.2</version>
        </dependency>
        <!--druid数据库连接池 -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>1.1.10</version>
        </dependency>
    </dependencies>


    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

2. Create the database tables

/*
Navicat MySQL Data Transfer

Source Server         : localhost_3306
Source Server Version : 50717
Source Host           : 127.0.0.1:3306
Source Database       : quartz-demo

Target Server Type    : MYSQL
Target Server Version : 50717
File Encoding         : 65001

Date: 2020-07-14 16:37:31
*/

SET FOREIGN_KEY_CHECKS=0;

-- ----------------------------
-- Table structure for qrtz_blob_triggers
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_blob_triggers`;
CREATE TABLE `qrtz_blob_triggers` (
  `sched_name` varchar(120) NOT NULL,
  `trigger_name` varchar(200) NOT NULL,
  `trigger_group` varchar(200) NOT NULL,
  `blob_data` blob,
  PRIMARY KEY (`sched_name`,`trigger_name`,`trigger_group`),
  CONSTRAINT `qrtz_blob_triggers_ibfk_1` FOREIGN KEY (`sched_name`, `trigger_name`, `trigger_group`) REFERENCES `qrtz_triggers` (`sched_name`, `trigger_name`, `trigger_group`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for qrtz_calendars
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_calendars`;
CREATE TABLE `qrtz_calendars` (
  `sched_name` varchar(120) NOT NULL,
  `calendar_name` varchar(200) NOT NULL,
  `calendar` blob NOT NULL,
  PRIMARY KEY (`sched_name`,`calendar_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for qrtz_cron_triggers
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_cron_triggers`;
CREATE TABLE `qrtz_cron_triggers` (
  `sched_name` varchar(120) NOT NULL,
  `trigger_name` varchar(200) NOT NULL,
  `trigger_group` varchar(200) NOT NULL,
  `cron_expression` varchar(120) NOT NULL,
  `time_zone_id` varchar(80) DEFAULT NULL,
  PRIMARY KEY (`sched_name`,`trigger_name`,`trigger_group`),
  CONSTRAINT `qrtz_cron_triggers_ibfk_1` FOREIGN KEY (`sched_name`, `trigger_name`, `trigger_group`) REFERENCES `qrtz_triggers` (`sched_name`, `trigger_name`, `trigger_group`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for qrtz_fired_triggers
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_fired_triggers`;
CREATE TABLE `qrtz_fired_triggers` (
  `sched_name` varchar(120) NOT NULL,
  `entry_id` varchar(95) NOT NULL,
  `trigger_name` varchar(200) NOT NULL,
  `trigger_group` varchar(200) NOT NULL,
  `instance_name` varchar(200) NOT NULL,
  `fired_time` bigint(20) NOT NULL,
  `sched_time` bigint(20) NOT NULL,
  `priority` int(11) NOT NULL,
  `state` varchar(16) NOT NULL,
  `job_name` varchar(200) DEFAULT NULL,
  `job_group` varchar(200) DEFAULT NULL,
  `is_nonconcurrent` varchar(5) DEFAULT NULL,
  `requests_recovery` varchar(5) DEFAULT NULL,
  PRIMARY KEY (`sched_name`,`entry_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for qrtz_job_details
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_job_details`;
CREATE TABLE `qrtz_job_details` (
  `sched_name` varchar(120) NOT NULL,
  `job_name` varchar(200) NOT NULL,
  `job_group` varchar(200) NOT NULL,
  `description` varchar(250) DEFAULT NULL,
  `job_class_name` varchar(250) NOT NULL,
  `is_durable` varchar(5) NOT NULL,
  `is_nonconcurrent` varchar(5) NOT NULL,
  `is_update_data` varchar(5) NOT NULL,
  `requests_recovery` varchar(5) NOT NULL,
  `job_data` blob,
  PRIMARY KEY (`sched_name`,`job_name`,`job_group`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for qrtz_locks
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_locks`;
CREATE TABLE `qrtz_locks` (
  `sched_name` varchar(120) NOT NULL,
  `lock_name` varchar(40) NOT NULL,
  PRIMARY KEY (`sched_name`,`lock_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for qrtz_paused_trigger_grps
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_paused_trigger_grps`;
CREATE TABLE `qrtz_paused_trigger_grps` (
  `sched_name` varchar(120) NOT NULL,
  `trigger_group` varchar(200) NOT NULL,
  PRIMARY KEY (`sched_name`,`trigger_group`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for qrtz_scheduler_state
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_scheduler_state`;
CREATE TABLE `qrtz_scheduler_state` (
  `sched_name` varchar(120) NOT NULL,
  `instance_name` varchar(200) NOT NULL,
  `last_checkin_time` bigint(20) NOT NULL,
  `checkin_interval` bigint(20) NOT NULL,
  PRIMARY KEY (`sched_name`,`instance_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for qrtz_simple_triggers
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_simple_triggers`;
CREATE TABLE `qrtz_simple_triggers` (
  `sched_name` varchar(120) NOT NULL,
  `trigger_name` varchar(200) NOT NULL,
  `trigger_group` varchar(200) NOT NULL,
  `repeat_count` bigint(20) NOT NULL,
  `repeat_interval` bigint(20) NOT NULL,
  `times_triggered` bigint(20) NOT NULL,
  PRIMARY KEY (`sched_name`,`trigger_name`,`trigger_group`),
  CONSTRAINT `qrtz_simple_triggers_ibfk_1` FOREIGN KEY (`sched_name`, `trigger_name`, `trigger_group`) REFERENCES `qrtz_triggers` (`sched_name`, `trigger_name`, `trigger_group`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for qrtz_simprop_triggers
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_simprop_triggers`;
CREATE TABLE `qrtz_simprop_triggers` (
  `sched_name` varchar(120) NOT NULL,
  `trigger_name` varchar(200) NOT NULL,
  `trigger_group` varchar(200) NOT NULL,
  `str_prop_1` varchar(512) DEFAULT NULL,
  `str_prop_2` varchar(512) DEFAULT NULL,
  `str_prop_3` varchar(512) DEFAULT NULL,
  `int_prop_1` int(11) DEFAULT NULL,
  `int_prop_2` int(11) DEFAULT NULL,
  `long_prop_1` bigint(20) DEFAULT NULL,
  `long_prop_2` bigint(20) DEFAULT NULL,
  `dec_prop_1` decimal(13,4) DEFAULT NULL,
  `dec_prop_2` decimal(13,4) DEFAULT NULL,
  `bool_prop_1` varchar(5) DEFAULT NULL,
  `bool_prop_2` varchar(5) DEFAULT NULL,
  PRIMARY KEY (`sched_name`,`trigger_name`,`trigger_group`),
  CONSTRAINT `qrtz_simprop_triggers_ibfk_1` FOREIGN KEY (`sched_name`, `trigger_name`, `trigger_group`) REFERENCES `qrtz_triggers` (`sched_name`, `trigger_name`, `trigger_group`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for qrtz_triggers
-- ----------------------------
DROP TABLE IF EXISTS `qrtz_triggers`;
CREATE TABLE `qrtz_triggers` (
  `sched_name` varchar(120) NOT NULL,
  `trigger_name` varchar(200) NOT NULL,
  `trigger_group` varchar(200) NOT NULL,
  `job_name` varchar(200) NOT NULL,
  `job_group` varchar(200) NOT NULL,
  `description` varchar(250) DEFAULT NULL,
  `next_fire_time` bigint(20) DEFAULT NULL,
  `prev_fire_time` bigint(20) DEFAULT NULL,
  `priority` int(11) DEFAULT NULL,
  `trigger_state` varchar(16) NOT NULL,
  `trigger_type` varchar(8) NOT NULL,
  `start_time` bigint(20) NOT NULL,
  `end_time` bigint(20) DEFAULT NULL,
  `calendar_name` varchar(200) DEFAULT NULL,
  `misfire_instr` smallint(6) DEFAULT NULL,
  `job_data` blob,
  PRIMARY KEY (`sched_name`,`trigger_name`,`trigger_group`),
  KEY `sched_name` (`sched_name`,`job_name`,`job_group`),
  CONSTRAINT `qrtz_triggers_ibfk_1` FOREIGN KEY (`sched_name`, `job_name`, `job_group`) REFERENCES `qrtz_job_details` (`sched_name`, `job_name`, `job_group`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

3. YAML configuration

spring:
  datasource:
    url: jdbc:mysql://127.0.0.1:3306/quartz-demo?serverTimezone=GMT&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true
    username: root
    password: root
    driver-class-name: com.mysql.jdbc.Driver
    type: com.alibaba.druid.pool.DruidDataSource
    druid:
      initialSize: 2
      minIdle: 2
      maxActive: 30
      #StatViewServlet:
      #loginUsername: admin
      #loginPassword: admin
  quartz:
    # Quartz properties
    properties:
      org:
        quartz:
          scheduler:
            instanceName: DefaultQuartzScheduler
            instanceId: AUTO
          jobStore:
            class: org.quartz.impl.jdbcjobstore.JobStoreTX
            driverDelegateClass: org.quartz.impl.jdbcjobstore.StdJDBCDelegate
            tablePrefix: QRTZ_
            isClustered: false
            clusterCheckinInterval: 10000
            useProperties: true
          threadPool:
            class: org.quartz.simpl.SimpleThreadPool
            threadCount: 10
            threadPriority: 5
            threadsInheritContextClassLoaderOfInitializingThread: true
          dataSource:
            default:
              URL: jdbc:mysql://127.0.0.1:3306/quartz-demo?characterEncoding=utf-8
              user: root
              password: root
              driver: com.mysql.jdbc.Driver

    # store jobs in the database (JDBC job store)
    job-store-type: jdbc
    # Enable the two lines below on the first startup and Quartz will create the tables it needs.
    # Remember to comment them out again afterwards, otherwise existing data is wiped and the
    # tables are recreated on every start.
#    jdbc:
#      initialize-schema: always
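
Besides the YAML properties, the auto-configured SchedulerFactoryBean can also be adjusted in code. A minimal sketch (not from the original article) using Spring Boot's SchedulerFactoryBeanCustomizer:

import org.springframework.boot.autoconfigure.quartz.SchedulerFactoryBeanCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class QuartzCustomizerConfig {

    @Bean
    public SchedulerFactoryBeanCustomizer schedulerFactoryBeanCustomizer() {
        return schedulerFactoryBean -> {
            schedulerFactoryBean.setOverwriteExistingJobs(true); // replace stored jobs with the code-defined ones
            schedulerFactoryBean.setStartupDelay(3);             // wait 3 seconds after startup before scheduling
        };
    }
}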

4. CRUD operations on dynamic scheduled tasks

JobDetail -> JobKey (name,group) -> Job

Trigger -> TriggerKey (name,group) -> Schedule

@Controller
public class IndexController {
    
    @Autowired
    private Scheduler scheduler;

    @RequestMapping(value = "hasExecuted",method = RequestMethod.GET)
    @ResponseBody
    public String hasExecuted(String jobName,String jobGroup){

        JobKey jobKey = JobKey.jobKey(jobName, jobGroup);
        JobDetail oldJobDetail = null;
        try {
            oldJobDetail = scheduler.getJobDetail(jobKey);
        } catch (SchedulerException e) {
            e.printStackTrace();
        }

        if (oldJobDetail == null) {
            return "not scheduled";
        } else {
            return "scheduled";
        }
    }

    @ResponseBody
    @RequestMapping(value = "/index", method = RequestMethod.GET)
    public String index() throws SchedulerException {

        TriggerKey triggerKey = TriggerKey.triggerKey("zzhua-MyTrigger", "MyTrigger");
        CronTrigger triggerOld = null;

        try {
            // look up the existing trigger in the Scheduler by its TriggerKey
            triggerOld = (CronTrigger) scheduler.getTrigger(triggerKey);
        } catch (SchedulerException e) {
            e.printStackTrace();
        }

        if (triggerOld == null) {

            // cron expression
            CronScheduleBuilder cronScheduleBuilder = CronScheduleBuilder.cronSchedule("0/6 * * * * ?");

            // wrap the job class in a JobDetail
            JobDetail jobDetail = JobBuilder.newJob(UploadTask.class).withIdentity("zzhua-MyJob", "MyJob").build();
            Trigger trigger = TriggerBuilder.newTrigger().withIdentity("zzhua-MyTrigger","MyTrigger").withSchedule(cronScheduleBuilder).build();

            // schedule the task
            scheduler.scheduleJob(jobDetail, trigger);
            return "task scheduled";
        } else {
            System.out.println("the job already exists--------------------------------------------");
            return "the task already exists";
        }
    }

    /**
     * Delete a job
     *
     * @param triggerName  trigger name
     * @param triggerGroup trigger group
     * @param jobName      job name
     * @param jobGroup     job group
     * @throws SchedulerException
     */
    public void deleteJob(String triggerName, 
                          String triggerGroup, 
                          String jobName, 
                          String jobGroup) throws SchedulerException {
        TriggerKey triggerKey = TriggerKey.triggerKey(triggerName, triggerGroup);
        scheduler.pauseTrigger(triggerKey);
        scheduler.unscheduleJob(triggerKey);
        JobKey jobKey = JobKey.jobKey(jobName, jobGroup);
        scheduler.deleteJob(jobKey);
    }

    /**
     * Update (reschedule) a scheduled task
     *
     * @param oldTriggerKey the TriggerKey (unique identifier) of the trigger to modify
     * @param cron          the new cron expression
     */
    public void updateJob(TriggerKey oldTriggerKey, String cron) {
        CronScheduleBuilder scheduleBuilder = CronScheduleBuilder.cronSchedule(cron);
        CronTrigger cronTrigger = TriggerBuilder.newTrigger()
                .withIdentity(oldTriggerKey).withSchedule(scheduleBuilder).build();
        try {
            scheduler.rescheduleJob(oldTriggerKey, cronTrigger);
        } catch (SchedulerException e) {
            e.printStackTrace();
        }
    }

    /**
     * Add a new job
     *
     * @param jobName          job name
     * @param jobGroupName     job group name
     * @param triggerName      trigger name
     * @param triggerGroupName trigger group name
     * @param jobClassName     fully qualified name of the Job class to execute
     * @param cron             cron expression
     * @throws SchedulerException
     */
    public void addJob(String jobName, String jobGroupName,
                       String triggerName, String triggerGroupName, 
                       String jobClassName, String cron) 
                                  throws SchedulerException, ClassNotFoundException {
        
        // load the Job class from its fully qualified name
        Class<Job> jobClass = (Class<Job>) Class.forName(jobClassName);
        
        JobDetail jobDetail = JobBuilder.newJob(jobClass)
                                        .withIdentity(jobName, jobGroupName).build();

        CronScheduleBuilder cronScheduleBuilder = CronScheduleBuilder.cronSchedule(cron);
        Trigger trigger = TriggerBuilder.newTrigger()
                                        .withIdentity(triggerName, triggerGroupName)
                                        .withSchedule(cronScheduleBuilder).build();
        
        scheduler.scheduleJob(jobDetail, trigger);
    }


}
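
UploadTask, referenced by the /index handler above, is not shown in the original article; a minimal sketch of what such a job class might look like (the class body is an assumption):

import java.text.SimpleDateFormat;
import java.util.Date;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.springframework.scheduling.quartz.QuartzJobBean;

public class UploadTask extends QuartzJobBean {

    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
        // placeholder business logic; a real task would perform the actual upload here
        System.out.println("UploadTask fired at "
                + new SimpleDateFormat("HH:mm:ss").format(new Date())
                + ", trigger = " + context.getTrigger().getKey());
    }
}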