Integrating xxl-job with Spring Boot 2.3.4 for Distributed Scheduled Tasks

Overview

xxl-job is a distributed task scheduling platform whose core design goals are rapid development, ease of learning, light weight, and easy extensibility. It is open source, already runs in the production lines of many well-known companies, and works out of the box.
For more details, see the xxl-job official site.
This article integrates xxl-job with Spring Boot 2.3.4 to implement distributed scheduled tasks.

Download the xxl-job source code

Gitee repository
GitHub repository
This article uses the latest stable release, 2.2.0, for the integration.
(screenshot: xxl-job source download)

Set up the environment

In the doc/db directory, find the SQL file tables_xxl_job.sql and run it against your database; the script creates the xxl_job database and its tables.
You can package the xxl-job-admin project as a jar and run it, or import it into IDEA and run it locally. For ease of integration and testing, this article imports it into IDEA, loads the dependencies, and updates the database connection settings in its configuration file.
(screenshot: xxl-job database connection configuration)
Run the XxlJobApplication startup class and open http://localhost:8094/xxl-job-admin in a browser.
Log in with the username/password admin/123456.
(screenshot: xxl-job login page)
(screenshot: xxl-job dashboard)

Integrating xxl-job with Spring Boot

Add the dependencies to pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.xuxueli</groupId>
    <artifactId>xxl-job-core</artifactId>
    <version>2.2.0</version>
</dependency>
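
The xxl-job-core version should match the version of the scheduling center you deployed; here both are 2.2.0.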

Add the configuration to application.yml:

server:
  port: 8093
logging:
  config: classpath:logback-spring.xml
xxl:
  job:
    admin:
      addresses: http://127.0.0.1:8094/xxl-job-admin # scheduling center address
    accessToken: # executor communication token
    executor:
      appname: springboot-xxl-job # executor name, used to group heartbeat registrations; auto-registration is disabled when empty
      address: # executor registration address
      ip: # executor IP
      port: 9999 # executor port
      logpath: /data/applogs/xxl-job/jobhandler # executor log file storage path
      logretentiondays: 30 # number of days to keep log files
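
The executor listens on its own port (9999 here) for trigger requests pushed from the scheduling center, so when several executor instances run on one machine each needs a distinct port. With address, ip, and accessToken left empty, the executor auto-detects its IP, registers by heartbeat under the configured appname, and communicates without token authentication.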

Create the xxl-job configuration class:

@Configuration
public class XxlJobConfig {

    private Logger logger = LoggerFactory.getLogger(XxlJobConfig.class);

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;

    @Value("${xxl.job.accessToken}")
    private String accessToken;

    @Value("${xxl.job.executor.appname}")
    private String appname;

    @Value("${xxl.job.executor.address}")
    private String address;

    @Value("${xxl.job.executor.ip}")
    private String ip;

    @Value("${xxl.job.executor.port}")
    private int port;

    @Value("${xxl.job.executor.logpath}")
    private String logPath;

    @Value("${xxl.job.executor.logretentiondays}")
    private int logRetentionDays;

    @Bean
    public XxlJobSpringExecutor xxlJobSpringExecutor() {
        logger.info(">>>>>>>>>>> xxl-job config init.");
        XxlJobSpringExecutor xxlJobSpringExecutor = new XxlJobSpringExecutor();
        xxlJobSpringExecutor.setAdminAddresses(adminAddresses);
        xxlJobSpringExecutor.setAppname(appname);
        xxlJobSpringExecutor.setAddress(address);
        xxlJobSpringExecutor.setIp(ip);
        xxlJobSpringExecutor.setPort(port);
        xxlJobSpringExecutor.setAccessToken(accessToken);
        xxlJobSpringExecutor.setLogPath(logPath);
        xxlJobSpringExecutor.setLogRetentionDays(logRetentionDays);
        return xxlJobSpringExecutor;
    }
}

Write the job handlers:

@Component
public class XxlJobHandler {

    private Logger logger = LoggerFactory.getLogger(XxlJobHandler.class);

    /**
     * Simple task (Bean mode)
     * @param param
     * @return
     * @throws Exception
     */
    @XxlJob("jobHandler")
    public ReturnT<String> jobHandler(String param) throws Exception {
        logger.info("XXL-JOB, Hello World");
        for (int i = 0; i < 5; i++) {
            logger.info("beat at:" + i);
            TimeUnit.SECONDS.sleep(2);
        }
        return ReturnT.SUCCESS;
    }
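
    // Usage note: to run the handler above, create a job in the admin console
    // with run mode BEAN and the JobHandler field set to "jobHandler" (the value
    // of the @XxlJob annotation), plus a CRON expression for the schedule.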

    /**
     * Sharding broadcast task
     * @param param
     * @return
     * @throws Exception
     */
    @XxlJob("shardingJobHandler")
    public ReturnT<String> shardingJobHandler(String param) throws Exception {
        ShardingUtil.ShardingVO shardingVO = ShardingUtil.getShardingVo();
        logger.info("分片参数:当前分片序号 = {},总分片数 = {}", shardingVO.getIndex(), shardingVO.getTotal());
        for (int i = 0; i < shardingVO.getTotal(); i++) {
            if (i == shardingVO.getIndex()) {
                logger.info("第{}片,命中分片开始处理!", i);
            } else {
                logger.info("第{}片,忽略!", i);
            }
        }
        return ReturnT.SUCCESS;
    }
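
    /**
     * A minimal sketch of how the shard parameters are used in practice: each
     * instance processes only its own slice of the data, so every id is handled
     * by exactly one executor. The id list is a hypothetical stand-in for a
     * database query (requires java.util.Arrays and java.util.List).
     */
    @XxlJob("shardingDataJobHandler")
    public ReturnT<String> shardingDataJobHandler(String param) throws Exception {
        ShardingUtil.ShardingVO shardingVO = ShardingUtil.getShardingVo();
        List<Long> ids = Arrays.asList(1L, 2L, 3L, 4L, 5L, 6L); // hypothetical data set
        for (Long id : ids) {
            // only handle the ids that map to this instance's shard index
            if (id % shardingVO.getTotal() == shardingVO.getIndex()) {
                logger.info("Shard {} processing id {}", shardingVO.getIndex(), id);
            }
        }
        return ReturnT.SUCCESS;
    }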

    /**
     * Command-line execution
     * @param param
     * @return
     * @throws Exception
     */
    @XxlJob("commandJobHandler")
    public ReturnT<String> commandJobHandler(String param) throws Exception {
        String command = param;
        int exitValue = -1;
        BufferedReader bufferedReader = null;
        try {
            Process process = Runtime.getRuntime().exec(command);
            BufferedInputStream bufferedInputStream = new BufferedInputStream(process.getInputStream());
            bufferedReader = new BufferedReader(new InputStreamReader(bufferedInputStream));
            String line;
            while ((line = bufferedReader.readLine()) != null) {
                logger.info(line);
            }
            process.waitFor();
            exitValue = process.exitValue();
        } catch (Exception e) {
            logger.info("出现异常!", e);
        } finally {
            if (bufferedReader != null) {
                bufferedReader.close();
            }
        }
        if (exitValue == 0) {
            return IJobHandler.SUCCESS;
        } else {
            return new ReturnT<String>(IJobHandler.FAIL.getCode(), "command exit value (" + exitValue + ") indicates failure");
        }
    }
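
    // Usage note: the command to run is supplied through the job's parameter
    // field in the admin console, e.g. "ls -l". Runtime.getRuntime().exec(String)
    // only tokenizes the string on whitespace, so shell features such as pipes
    // or redirection work only if a shell is invoked explicitly, e.g. "/bin/sh -c ...".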

    /**
     * Cross-platform HTTP task
     * @param param
     * @return
     * @throws Exception
     */
    @XxlJob("httpJobHandler")
    public ReturnT<String> httpJobHandler(String param) throws Exception {
        if (param == null || param.trim().length() == 0) {
            logger.info("param[" + param + "] invalid.");
            return ReturnT.FAIL;
        }
        String[] httpParams = param.split("\n");
        String url = null;
        String method = null;
        String data = null;
        for (String httpParam : httpParams) {
            if (httpParam.startsWith("url:")) {
                url = httpParam.substring(httpParam.indexOf("url:") + 4).trim();
            }
            if (httpParam.startsWith("method:")) {
                method = httpParam.substring(httpParam.indexOf("method:") + 7).trim().toUpperCase();
            }
            if (httpParam.startsWith("data:")) {
                data = httpParam.substring(httpParam.indexOf("data:") + 5).trim();
            }
        }
        if (url == null || url.trim().length() == 0) {
            logger.info("url[" + url + "] invalid.");
            return ReturnT.FAIL;
        }
        if (method == null || !Arrays.asList("GET", "POST").contains(method)) {
            logger.info("method[" + method + "] invalid.");
            return ReturnT.FAIL;
        }
        HttpURLConnection connection = null;
        BufferedReader bufferedReader = null;
        try {
            URL realUrl = new URL(url);
            connection = (HttpURLConnection) realUrl.openConnection();
            connection.setRequestMethod(method);
            connection.setDoOutput(true);
            connection.setDoInput(true);
            connection.setUseCaches(false);
            connection.setReadTimeout(5 * 1000);
            connection.setConnectTimeout(3 * 1000);
            connection.setRequestProperty("connection", "Keep-Alive");
            connection.setRequestProperty("Content-Type", "application/json;charset=UTF-8");
            connection.setRequestProperty("Accept-Charset", "application/json;charset=UTF-8");
            connection.connect();
            if (data != null && data.trim().length() > 0) {
                DataOutputStream dataOutputStream = new DataOutputStream(connection.getOutputStream());
                dataOutputStream.write(data.getBytes("UTF-8"));
                dataOutputStream.flush();
                dataOutputStream.close();
            }
            int statusCode = connection.getResponseCode();
            if (statusCode != 200) {
                throw new RuntimeException("Http Request StatusCode(" + statusCode + ") Invalid.");
            }
            bufferedReader = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
            StringBuilder result = new StringBuilder();
            String line;
            while ((line = bufferedReader.readLine()) != null) {
                result.append(line);
            }
            String responseMsg = result.toString();
            logger.info(responseMsg);
            return ReturnT.SUCCESS;
        } catch (Exception e) {
            logger.info("执行异常!", e);
            return ReturnT.FAIL;
        } finally {
            try {
                if (bufferedReader != null) {
                    bufferedReader.close();
                }
                if (connection != null) {
                    connection.disconnect();
                }
            } catch (Exception e2) {
                logger.info("Execution exception!", e2);
            }
        }
    }
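
    // Usage note: the handler above expects one key:value pair per line in the
    // job parameter field; data: is only needed for POST. For example, assuming
    // a locally running service with a hypothetical endpoint:
    //   url: http://localhost:8093/your-endpoint
    //   method: GET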

    /**
     * Lifecycle task example: custom logic can run at task initialization and destruction
     * @param param
     * @return
     * @throws Exception
     */
    @XxlJob(value = "jobHandler2", init = "init", destroy = "destroy")
    public ReturnT<String> jobHandler2(String param) throws Exception {
        logger.info("XXL-JOB, Hello World.");
        return ReturnT.SUCCESS;
    }

    public void init() {
        logger.info("init");
    }

    public void destroy() {
        logger.info("destory");
    }
}
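
A note on the lifecycle hooks: the method named in init runs before the handler's job thread starts executing, and the method named in destroy runs when that thread is stopped, which makes them a natural place to acquire and release resources shared across executions.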

Add the executor in the scheduling center

In the admin console, register a new executor whose AppName matches the xxl.job.executor.appname configured above (springboot-xxl-job).
(screenshot: adding the executor)

Create a scheduled job

Create a job under the executor, setting the JobHandler field to the name declared in the @XxlJob annotation (for example jobHandler) and choosing a CRON schedule.
(screenshot: creating the job)

Run the job

Trigger the job manually with "Run Once" or start its schedule.
(screenshot: running the job)

Check the execution logs

(screenshot: execution log)
xxl-job is very simple to use, and the official site provides detailed documentation; to learn more about xxl-job, see the xxl-job Chinese documentation site.
