Snowflake ID: Using MySQL to Generate and Manage workerId and datacenterId

Contents

Background

Common generation and management approaches

Frameworks used by this approach

How the approach works (brief)

Snowflake ID configuration changes

Project structure

Project startup flowchart

Scheduler flowchart

Source code

Pros and cons


Background

As is well known, in the Snowflake algorithm, if two instances share the same workerId and datacenterId, the ids they generate may collide under high concurrency.

This article briefly reviews the common ways to keep different instances from ending up with the same workerId and datacenterId,

and proposes a new approach that manages them with MySQL.

Common generation and management approaches

The approaches to generating or managing workerId and datacenterId that are commonly found online fall roughly into the following categories:

1. Use the Hutool utility library to generate workerId and datacenterId.

Hutool derives the datacenterId from the MAC address, and the workerId generation is tied to the process PID.

Reference: 唯一ID工具-IdUtil | Hutool (the IdUtil unique-ID utility page in the Hutool documentation). A minimal usage sketch follows this list.

2. Generate and manage them based on Redis.

3. Generate and manage them based on ZooKeeper.

4. Derive workerId and datacenterId from the hostname and IP address; see the link below.

https://blog.csdn.net/l848168/article/details/132697275

5. Other registry-center based implementations.
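
For orientation, here is a minimal sketch of approach 1 (assuming Hutool 5.x is on the classpath; the workerId and datacenterId are passed in explicitly here, while Hutool can also derive them from the MAC address and PID as noted above):

import cn.hutool.core.lang.Snowflake;
import cn.hutool.core.util.IdUtil;

// Minimal Hutool sketch (assumes Hutool 5.x); ids are supplied explicitly in this example.
public class HutoolSnowflakeSketch {
    public static void main(String[] args) {
        Snowflake snowflake = IdUtil.getSnowflake(1, 1);
        System.out.println(snowflake.nextId());
    }
}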

Frameworks used by this approach

Spring Boot

MySQL

Quartz

spring-boot-starter-data-jpa

How the approach works (brief)

1. One MySQL table is needed whose auto-increment primary key yields ordered, distinct values even under distributed concurrency. Taking this never-repeating value modulo the number of possible combinations produces a workerId and a datacenterId (hereafter the w_d combination), which solves the concurrency-safety problem of allocating w_d combinations (the arithmetic is sketched right after this list).

2. A second MySQL table, together with a scheduled task (hourly in this example), records every w_d combination in use, which further prevents services from running with the same w_d combination.
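
To make the mapping concrete, here is a small sketch of the arithmetic used later in InitSnowUtil.getIdMap (assuming the 7-bit widths configured below, i.e. 128 × 128 = 16,384 possible combinations; the key value is only an example):

// Sketch: how a MySQL auto-increment key is split into a w_d combination.
// Mirrors the arithmetic in InitSnowUtil.getIdMap; the key 121793 is only an example.
public class WdMappingSketch {
    public static void main(String[] args) {
        long maxWorkerId = 127;       // 7 bits -> 0..127
        long maxDatacenterId = 127;   // 7 bits -> 0..127
        long key = 121793;            // hypothetical auto-increment id returned by MySQL

        long modnum = key % ((maxWorkerId + 1) * (maxDatacenterId + 1)); // 121793 % 16384 = 7105
        long datacenterId = modnum % (maxDatacenterId + 1);              // 7105 % 128 = 65
        long workerId = modnum / (maxDatacenterId + 1);                  // 7105 / 128 = 55

        System.out.println("workerId=" + workerId + ", datacenterId=" + datacenterId);
    }
}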

Snowflake ID configuration changes

workerIdBits (the number of bits for the worker/machine id) is changed to 7
datacenterIdBits (the number of bits for the datacenter id) is changed to 7

Enlarging these two parameters raises the number of supported nodes to 128 × 128 = 16,384.

The goal is to lower the probability of id collisions; the cost is that the generated ids become larger. You can of course keep the default parameters instead.
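
As a quick check on these widths (a sketch only; the constants mirror those in SnowflakeIdWorker below), the following snippet prints the maximum ids, the number of supported node combinations, and how many bits remain for the timestamp in a positive 64-bit id:

// Sanity check of the 7/7/12-bit layout (sketch; values mirror SnowflakeIdWorker below).
public class BitLayoutCheck {
    public static void main(String[] args) {
        long workerIdBits = 7L, datacenterIdBits = 7L, sequenceBits = 12L;

        long maxWorkerId = ~(-1L << workerIdBits);              // 127
        long maxDatacenterId = ~(-1L << datacenterIdBits);      // 127
        long nodes = (maxWorkerId + 1) * (maxDatacenterId + 1); // 16384 combinations

        // Bits left for (timestamp - twepoch) before the sign bit of a long is reached.
        long timestampBits = 63 - (workerIdBits + datacenterIdBits + sequenceBits); // 37

        System.out.printf("maxWorkerId=%d, maxDatacenterId=%d, nodes=%d, timestampBits=%d%n",
                maxWorkerId, maxDatacenterId, nodes, timestampBits);
    }
}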

Project structure

InitSnowUtil.java: utility class that initializes the SnowflakeIdWorker, producing a workerId and datacenterId with an extremely small probability of duplication.

SnowflakeIdUtil.java: the Snowflake ID utility class.

SnowTask.java: scheduled tasks. Every hour this class registers the current workerId and datacenterId information in MySQL; if another service is found using the same workerId and datacenterId, a new workerId and datacenterId are requested.

Project startup flowchart

(In brief, following the code: when the bean is constructed, InitSnowUtil reads the datasource settings, opens a JDBC connection, queries the w_d combinations registered by other instances during the past hour, inserts a marker row into snow_w_d_creator and maps its auto-increment key to a w_d combination, repeating until the combination is unused, then registers the combination in w_d_register_data and initializes SnowflakeIdUtil.)

Scheduler flowchart

(In brief, following the code: every hour SnowTask re-registers the current w_d combination, waits 20 seconds so other instances can do the same, then queries the combinations registered by others during the past hour; if its own combination appears among them, it requests a new combination and re-initializes. A second daily task deletes registration rows from before the current day.)

Source code

SnowflakeIdUtil.java

package com.example.demo;

/**
 * @description: Snowflake ID utility class; holds the global SnowflakeIdWorker instance
 */

public class SnowflakeIdUtil {

    public static SnowflakeIdWorker idWorker = new SnowflakeIdWorker(0, 0);

    public static volatile Long workerId_now ;

    public static volatile Long datacenterId_now;

    public static void initWorker(long workerId, long datacenterId){
        System.out.println("new SnowflakeIdWorker, workerId:"+workerId+", datacenterId:"+datacenterId);
        idWorker = new SnowflakeIdWorker(workerId, datacenterId);
        SnowflakeIdUtil.workerId_now = workerId;
        SnowflakeIdUtil.datacenterId_now = datacenterId;
    }

    /**
     * Use this method to obtain an id
     */
    public static long nextId(){
        long id = idWorker.nextId();
        return id;
    }




    public static class SnowflakeIdWorker {

        // ==============================Fields===========================================
        /**
         * Start epoch (2015-01-01)
         */
        private final long twepoch = 1420041600000L;

        /**
         * Number of bits for the worker id
         */
        private final long workerIdBits = 7L;

        /**
         * Number of bits for the datacenter id
         */
        private final long datacenterIdBits = 7L;

        /**
         * Maximum supported worker id; with 7 bits this is 127 (this shift trick quickly computes the largest value representable with a given number of bits)
         */
        private final long maxWorkerId = -1L ^ (-1L << workerIdBits);

        /**
         * Maximum supported datacenter id; with 7 bits this is 127
         */
        private final long maxDatacenterId = -1L ^ (-1L << datacenterIdBits);

        /**
         * Number of bits the sequence occupies in the id
         */
        private final long sequenceBits = 12L;

        /**
         * The worker id is shifted left by 12 bits (sequenceBits)
         */
        private final long workerIdShift = sequenceBits;

        /**
         * The datacenter id is shifted left by 19 bits (12 + 7)
         */
        private final long datacenterIdShift = sequenceBits + workerIdBits;

        /**
         * The timestamp is shifted left by 26 bits (12 + 7 + 7)
         */
        private final long timestampLeftShift = sequenceBits + workerIdBits + datacenterIdBits;

        /**
         * Mask for the sequence, 4095 here (0b111111111111 = 0xfff = 4095)
         */
        private final long sequenceMask = -1L ^ (-1L << sequenceBits);

        /**
         * Worker id (0~127)
         */
        private long workerId;

        /**
         * Datacenter id (0~127)
         */
        private long datacenterId;

        /**
         * Sequence within the millisecond (0~4095)
         */
        private long sequence = 0L;

        /**
         * Timestamp of the last generated id
         */
        private long lastTimestamp = -1L;

        //==============================Constructors=====================================

        /**
         * Constructor
         *
         * @param workerId     worker id (0~127)
         * @param datacenterId datacenter id (0~127)
         */
        public SnowflakeIdWorker(long workerId, long datacenterId) {
            if (workerId > maxWorkerId || workerId < 0) {
                throw new IllegalArgumentException(String.format("worker Id can't be greater than %d or less than 0", maxWorkerId));
            }
            if (datacenterId > maxDatacenterId || datacenterId < 0) {
                throw new IllegalArgumentException(String.format("datacenter Id can't be greater than %d or less than 0", maxDatacenterId));
            }
            this.workerId = workerId;
            this.datacenterId = datacenterId;
        }

        // ==============================Methods==========================================

        /**
         * Get the next id (this method is thread-safe)
         *
         * @return a snowflake id
         */
        public synchronized long nextId() {
            long timestamp = timeGen();

            //If the current time is earlier than the timestamp of the last generated id, the system clock has moved backwards and an exception should be thrown
            if (timestamp < lastTimestamp) {
                throw new RuntimeException(
                        String.format("Clock moved backwards.  Refusing to generate id for %d milliseconds", lastTimestamp - timestamp));
            }

            //Generated within the same millisecond: advance the sequence within the millisecond
            if (lastTimestamp == timestamp) {
                sequence = (sequence + 1) & sequenceMask;
                //The sequence within the millisecond has overflowed
                if (sequence == 0) {
                    //Block until the next millisecond to obtain a new timestamp
                    timestamp = tilNextMillis(lastTimestamp);
                }
            }
            //The timestamp has changed: reset the sequence within the millisecond
            else {
                sequence = 0L;
            }

            //Remember the timestamp of the last generated id
            lastTimestamp = timestamp;

            //Shift the parts and OR them together into a 64-bit id
            return ((timestamp - twepoch) << timestampLeftShift) //
                    | (datacenterId << datacenterIdShift) //
                    | (workerId << workerIdShift) //
                    | sequence;
        }

        /**
         * Block until the next millisecond to obtain a new timestamp
         *
         * @param lastTimestamp timestamp of the last generated id
         * @return the current timestamp
         */
        protected long tilNextMillis(long lastTimestamp) {
            long timestamp = timeGen();
            while (timestamp <= lastTimestamp) {
                timestamp = timeGen();
            }
            return timestamp;
        }

        /**
         * Return the current time in milliseconds
         *
         * @return current time (milliseconds)
         */
        protected long timeGen() {
            return System.currentTimeMillis();
        }




    }


    /**
     * Test
     */
    //==============================Test=============================================
    public static void main(String[] args) {
        /*SnowflakeIdWorker idWorker = new SnowflakeIdWorker(0, 0);

        for (int i = 0; i < 100; i++) {
            long id = idWorker.nextId();
            System.out.println(id);
        }*/
        for (int i = 0; i < 100; i++) {
            long id = SnowflakeIdUtil.nextId();
            System.out.println(id);
        }
    }
}


InitSnowUtil.java
package com.example.demo;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;
import java.sql.*;
import java.text.SimpleDateFormat;
import java.util.*;
import java.util.Date;
import java.util.stream.Collectors;




/*
Table initialization DDL:
drop table snow_w_d_creator;
CREATE TABLE `snow_w_d_creator` (
  `id` int NOT NULL AUTO_INCREMENT,
  `createTime` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `markWord` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=121792 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;



*/


/**
 * Description: when the Snowflake ID utility is used, every service needs a different workerId and datacenterId.
 * This class generates a workerId and datacenterId with an extremely small probability of duplication, without requiring ZooKeeper.
 *
 * Principle: a row is inserted into a MySQL table; MySQL guarantees the auto-increment id is unique and increasing.
 * A modulo operation on the returned id yields the workerId and datacenterId.
 * When the boundary is reached, the mapping simply wraps around and new values are produced,
 * and if the combination is already used by another service, it is replaced automatically within about an hour.
 *
 * Prerequisite: MySQL must be reachable and the two tables must already be created.
 */
@Component
public class InitSnowUtil {
    /**
     * Number of bits for the worker (machine) id.
     * Be careful with the total of workerIdBits + datacenterIdBits: every extra bit halves the time before
     * (timestamp - twepoch) << timestampLeftShift exceeds the long range (9223372036854775807) and the generated ids turn negative.
     * With workerIdBits + datacenterIdBits = 14, the timestamp keeps 63 - 26 = 37 bits, about 4.3 years from the epoch, and 16,384 nodes are supported.
     * With the sum equal to 10 (the original Snowflake default), the timestamp keeps 41 bits, about 69 years, and 1,024 nodes are supported.
     * Must be changed together with SnowflakeIdWorker.workerIdBits.
     */
    public final static long workerIdBits = 7L;

    /**
     * Number of bits for the datacenter id.
     * See the note on workerIdBits above: with workerIdBits + datacenterIdBits = 14 the timestamp keeps 37 bits (about 4.3 years from the epoch);
     * with the original default sum of 10 it keeps 41 bits (about 69 years).
     * Must be changed together with SnowflakeIdWorker.datacenterIdBits.
     */
    public final static long datacenterIdBits = 7L;

    /**
     * Maximum supported worker id: 31 when workerIdBits is 5, 127 when workerIdBits is 7 (this shift trick quickly computes the largest value representable with a given number of bits)
     */
    public static final long maxWorkerId = -1L ^ (-1L << workerIdBits);

    /**
     * Maximum supported datacenter id: 31 when datacenterIdBits is 5, 127 when datacenterIdBits is 7
     */
    public static final long maxDatacenterId = -1L ^ (-1L << datacenterIdBits);

    /**
     * MySQL database URL
     */
    public static String jdbcUrl = "jdbc:mysql://localhost:3306/m0?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai";

    /**
     * MySQL database username
     */
    public static String username = "root";

    /**
     * MySQL database user password
     */
    public static String password = "1234";

    /**
     * Database name
     */
    public static final String dbname = "m0";

    /**
     * Name of the table whose primary-key column is used to generate workerId and datacenterId
     */
    public static final String tablename = "snow_w_d_creator";

    public static final String uuid = UUID.randomUUID().toString().replaceAll("-", "");

    private static Log log = LogFactory.getLog(InitSnowUtil.class);

    /**
     * Spring environment; currently used to read configuration so the class can be integrated into Spring Boot
     */
    @Autowired
    private Environment environment;


    /**
     * Description: after the bean is constructed, obtain a database connection and run the SnowflakeIdUtil initialization.
     * Used when integrating into Spring Boot.
     */
    @PostConstruct
    public void postConstruct() throws Exception {
        jdbcUrl = environment.getProperty("spring.datasource.url");
        username = environment.getProperty("spring.datasource.username");
        password = environment.getProperty("spring.datasource.password");
        log.info("Snowflake id init util: querying the database and creating the SnowflakeIdWorker");
        InitSnowUtil.initSnowflakeIdUtil();
    }


    /**
     * Initialize SnowflakeIdUtil
     */
    public static void initSnowflakeIdUtil() throws Exception {
        log.info("Snowflake id init util: opening a database connection");
        Connection connection = DriverManager.getConnection(jdbcUrl, username, password);

        log.info("Snowflake id init util: querying MySQL for id combinations registered in the past hour");
        Set<String> w_d_set = getRegisterWithOutOwn(connection);

        log.info("Snowflake id init util: checking the number of registered combinations");
        if (w_d_set.size() > 16000) {
            RuntimeException runtimeException = new RuntimeException("The workerId/datacenterId combinations are about to run out!");
            log.error("", runtimeException);
        }

        log.info("Snowflake id init util: performing the actual initialization");
        doInitSnowflakeIdUtil(connection, w_d_set);
    }


    /**
     * Perform the actual initialization work
     *
     * @param connection database connection
     * @param w_d_set    the id combinations present in MySQL during the past hour
     */
    public static void doInitSnowflakeIdUtil(Connection connection, Set<String> w_d_set) throws Exception {
        //If another instance already holds the same combination, request a new one to avoid duplication
        log.info("Snowflake id init util: uuid:" + uuid);
        log.info("Snowflake id init util: trying to obtain a new id combination");
        HashMap<String, Long> idMap = null;
        String merge = null;
        do {
            idMap = InitSnowUtil.getIdMap(connection, maxWorkerId, maxDatacenterId, dbname, tablename, uuid);
            merge = idMap.get("workerId") + "_" + idMap.get("datacenterId");
        }
        while (w_d_set.contains(merge));
        //Register the obtained id combination in MySQL
        registerMysql(idMap.get("workerId"), idMap.get("datacenterId"), connection);
        connection.close();
        //Initialize the worker
        SnowflakeIdUtil.initWorker(idMap.get("workerId"), idMap.get("datacenterId"));
        log.info("Snowflake id init util: SnowflakeIdUtil initialization complete");
    }


    /**
     * Obtain the next workerId and datacenterId combination
     *
     * @param connection      database connection
     * @param maxWorkerId     largest workerId usable by the Snowflake algorithm
     * @param maxDatacenterId largest datacenterId usable by the Snowflake algorithm
     * @param dbname          name of the database to connect to
     * @param tablename       table name
     * @param uuid            unique mark identifying this instance
     * @return workerId and datacenterId
     */
    public static HashMap<String, Long> getIdMap(Connection connection, long maxWorkerId, long maxDatacenterId, String dbname, String tablename, String uuid) throws Exception {
        Statement insert_statement = null;
        ResultSet generatedKeys = null;
        try {
            long datacenterId = 0;
            long workerId = 0;
            String sql = "INSERT INTO `" + dbname + "`.`" + tablename + "`(`markWord`) VALUES (\"" + uuid + "\");";
            log.info("Snowflake id init util: sql:" + sql);
            insert_statement = connection.createStatement();
            insert_statement.executeUpdate(sql, Statement.RETURN_GENERATED_KEYS);
            generatedKeys = insert_statement.getGeneratedKeys();
            generatedKeys.next();
            int key = generatedKeys.getInt(1);
            log.info("Snowflake id init util: obtained auto-increment id=" + key);
            long modnum = key % ((maxWorkerId + 1) * (maxDatacenterId + 1));
            long datacenterId_temp = (modnum) % (maxDatacenterId + 1);
            long workerId_temp = (modnum) / (maxDatacenterId + 1);
            workerId = workerId_temp;
            datacenterId = datacenterId_temp;
            log.info("Snowflake id init util: computed workerId:{" + workerId + "}, datacenterId:{" + datacenterId + "}");
            HashMap<String, Long> map = new HashMap<>();
            map.put("workerId", workerId);
            map.put("datacenterId", datacenterId);
            return map;
        }finally {
            if(insert_statement != null){
                insert_statement.close();
            }
            if(generatedKeys != null){
                generatedKeys.close();
            }
        }

    }


    /**
     * Register the workerId and datacenterId information in MySQL
     *
     * @param workerId
     * @param datacenterId
     * @param connection   database connection
     * @throws Exception
     */
    public static void registerMysql(Long workerId, Long datacenterId, Connection connection) throws Exception {
        //Register the workerId information in MySQL
        Statement statement = null;
        try {
            log.info("Registering the workerId information in MySQL");
            String workerid_datacenterId = workerId + "_" + datacenterId;
            String sql = "INSERT INTO `m0`.`w_d_register_data`( `workerid_datacenterId`, `uuidmark`) VALUES (\"" + workerid_datacenterId + "\",\"" + InitSnowUtil.uuid + "\") ";
            log.info("Registering the workerId information in MySQL, sql:" + sql);
            statement = connection.createStatement();
            statement.executeUpdate(sql, Statement.RETURN_GENERATED_KEYS);
        }finally {
            if(statement != null){
                statement.close();
            }
        }

    }


    /**
     * Get the workerId and datacenterId registrations from the past hour
     *
     * @param connection database connection
     */
    public static List<HashMap<String, String>> getRegister(Connection connection) throws Exception {
        LinkedList<HashMap<String, String>> list = new LinkedList<>();
        try (Statement select_statement = connection.createStatement()){
            Calendar calendar = Calendar.getInstance();
            calendar.setTime(new Date());
            calendar.set(Calendar.MINUTE, 0);
            calendar.set(Calendar.SECOND, 0);
            calendar.add(Calendar.HOUR_OF_DAY, -1);
            Date time = calendar.getTime();
            SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            String format = df.format(time);
            StringBuffer sql = new StringBuffer("SELECT * FROM `m0`.`w_d_register_data` ");
            sql.append("WHERE cretetime > \"" + format + "\"");
            log.info("Snowflake id init util: querying MySQL for the id combinations of the past hour, sql:" + sql);
            ResultSet resultSet = select_statement.executeQuery(sql.toString());
            //Collect every registered combination together with its uuid mark
            while (resultSet.next()) {
                HashMap<String, String> map = new HashMap<String, String>();
                String workerid_datacenterId = resultSet.getString(2);
                String uuidmark = resultSet.getString(3);
                map.put("workerid_datacenterId", workerid_datacenterId);
                map.put("uuidmark", uuidmark);
                list.add(map);
            }
            resultSet.close();
        }
        return list;
    }


    /**
     * Get the workerId and datacenterId registrations from the past hour, excluding this instance's own entry
     *
     * @param connection database connection
     */
    public static Set<String> getRegisterWithOutOwn(Connection connection) throws Exception {
        List<HashMap<String, String>> list = getRegister(connection);
        Set<String> w_d_set = list.stream()
                .filter(map -> !uuid.equals(map.get("uuidmark")))
                .map(map -> map.get("workerid_datacenterId"))
                .collect(Collectors.toSet());
        return w_d_set;
    }


    /**
     * Test entry point 1
     */
    public static void main(String[] args) throws Exception {
        InitSnowUtil.initSnowflakeIdUtil();
        for (int i = 0; i < 100; i++) {
            long id = SnowflakeIdUtil.nextId();
            System.out.println(id);
        }
    }


}
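
The demo assembles its SQL by string concatenation; markWord is a locally generated UUID, so this is harmless here, but the same insert can also be written with a PreparedStatement, which avoids manual quoting. A small sketch of the equivalent marker insert (same table and column names as in the DDL above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: the marker insert from getIdMap rewritten with a PreparedStatement.
public class MarkerInsertSketch {
    public static long insertMarker(Connection connection, String uuid) throws Exception {
        String sql = "INSERT INTO `m0`.`snow_w_d_creator`(`markWord`) VALUES (?)";
        try (PreparedStatement ps = connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, uuid);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                // The auto-increment id that getIdMap turns into a w_d combination.
                return keys.getLong(1);
            }
        }
    }
}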


SnowTask.java
package com.example.demo;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.Statement;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.Set;

/**
 * Table initialization DDL:
 * CREATE TABLE `w_d_register_data` (
 *   `id` bigint NOT NULL AUTO_INCREMENT,
 *   `workerid_datacenterId` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL,
 *   `uuidmark` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
 *   `cretetime` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
 *   PRIMARY KEY (`id`)
 * ) ENGINE=InnoDB AUTO_INCREMENT=401 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;
 *
 *
 */


/**
 * Description: this class periodically registers the workerId and datacenterId information in MySQL.
 * If another service is found using the same workerId and datacenterId, a new workerId and datacenterId are requested.
 */
@Component
public class SnowTask {

    private static Log log = LogFactory.getLog(SnowTask.class);


    @Autowired
    DataSource dataSource;


    /**
     * Runs every hour: checks whether another instance registered the same workerId/datacenterId combination during roughly the past hour; if so, a new combination is requested
     *
     * @throws Exception
     */
    @Scheduled(cron = "0 5 0/1 * * ?")//runs at minute 5 of every hour
//    @Scheduled(cron = "0/10 * * * * ?")//every 10 seconds (for testing)
    public void checkregister() throws Exception {
        try (Connection connection = dataSource.getConnection()) {

            log.info("Registering the current id combination in MySQL");
            InitSnowUtil.registerMysql(SnowflakeIdUtil.workerId_now, SnowflakeIdUtil.datacenterId_now, connection);

            log.info("Sleeping 20 seconds to let other instances finish registering");
            Thread.sleep(20000);

            log.info("Checking whether an identical id combination exists");
            String w_d_now = SnowflakeIdUtil.workerId_now + "_" + SnowflakeIdUtil.datacenterId_now;
            Set<String> w_d_set = InitSnowUtil.getRegisterWithOutOwn(connection);
            if (!w_d_set.contains(w_d_now)) {
                log.info("No identical id combination found in MySQL");
                return;
            }

            log.warn("An identical id combination was found in MySQL; requesting a new combination and re-initializing");
            InitSnowUtil.doInitSnowflakeIdUtil(connection, w_d_set);
        }
    }


    /**
     * Runs once a day: deletes rows in table w_d_register_data from yesterday and earlier
     *
     * @throws Exception
     */
    @Scheduled(cron = "0 0 1 * * ?")//runs once a day
//    @Scheduled(cron = "0/10 * * * * ?")//every 10 seconds (for testing)
    public void delete() throws Exception {
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();) {
            //Compute the start of today; rows older than this are deleted
            Calendar calendar = Calendar.getInstance();
            calendar.setTime(new Date());
            calendar.set(Calendar.HOUR_OF_DAY, 0);
            calendar.set(Calendar.MINUTE, 0);
            calendar.set(Calendar.SECOND, 0);
            Date time = calendar.getTime();
            SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            String format = df.format(time);
            StringBuffer sql = new StringBuffer("DELETE FROM `m0`.`w_d_register_data` ");
            sql.append("WHERE cretetime < \"" + format + "\"");
            log.info("Deleting rows of table w_d_register_data from yesterday and earlier, sql:" + sql);
            statement.executeUpdate(sql.toString());
        }
    }

}
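
One assumption worth making explicit: the article does not show the Spring Boot main class, and the @Scheduled methods above only fire when scheduling is enabled. A minimal sketch of such a class (the class name is hypothetical):

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;

// Hypothetical main class (not shown in the original article); @EnableScheduling is
// required for the @Scheduled methods in SnowTask to run.
@SpringBootApplication
@EnableScheduling
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}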

application.yml

server:
  port: 8752

spring:
  application:
    name: snowflakeIdDemo
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://localhost:3306/m0?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai
    username: root
    password: 1234

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.4.0</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>snowflakeIdDemo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>snowflakeIdDemo</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.25</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-quartz</artifactId>
        </dependency>

    </dependencies>

</project>

Pros and cons

Pros:

1. Apart from MySQL, no third-party middleware such as Redis or ZooKeeper needs to be introduced.

2. No need to care about IPs, MAC addresses, or other network information.

3. The allocated workerId and datacenterId combinations have an extremely small probability of colliding, and the hourly check replaces a combination as soon as another service is found using it.

Cons:

1. This code has not been run in production; thorough testing is recommended.
