Generating distributed unique IDs

As far as I know, there are roughly four approaches:

  1. Redis
  2. Flickr scheme
  3. Simple approach
  4. Snowflake algorithm

1. Redis as an ID dispenser

A common approach is to have Redis allocate the ids. Why?

Because Redis executes commands on a single thread, increments can never interleave and produce duplicate ids. So how is it structured?

Run a Redis cluster to avoid a single point of failure. Then, given for example 3 Redis nodes:

Node 1: 1, 4, 7, 10

Node 2: 2, 5, 8, 11

Node 3: 3, 6, 9, 12

Notice that nothing repeats: consecutive ids on a node differ by the number of nodes, so two nodes can never hand out the same id.

How is this implemented?

Each node increments its own counter by a fixed step using Redis's INCRBY command (or HINCRBY for the hash type).
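The stepped-increment scheme can be sketched in plain Java. The counters here are in-process `AtomicLong`s standing in for Redis keys (a real deployment would issue `INCRBY key <step>` against each node instead); the class name and constructor are my own illustration, not from any library:

```java
import java.util.concurrent.atomic.AtomicLong;

// Simulates the stepped-increment scheme: node i of N (0-based) hands out
// i+1, i+1+N, i+1+2N, ...  With real Redis, each node would run INCRBY.
public class SteppedIdGenerator {
    private final AtomicLong counter;
    private final long step;

    public SteppedIdGenerator(int nodeIndex, int nodeCount) {
        // start one step below the first id so the first addAndGet lands on i+1
        this.counter = new AtomicLong(nodeIndex + 1 - nodeCount);
        this.step = nodeCount;
    }

    public long nextId() {
        return counter.addAndGet(step);
    }

    public static void main(String[] args) {
        SteppedIdGenerator node1 = new SteppedIdGenerator(0, 3);
        SteppedIdGenerator node2 = new SteppedIdGenerator(1, 3);
        SteppedIdGenerator node3 = new SteppedIdGenerator(2, 3);
        for (int i = 0; i < 4; i++) {
            // node 1: 1,4,7,10  node 2: 2,5,8,11  node 3: 3,6,9,12
            System.out.println(node1.nextId() + " " + node2.nextId() + " " + node3.nextId());
        }
    }
}
```

The three simulated nodes reproduce the three-node table above: the sequences interleave without ever colliding.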

2. Flickr scheme

I use another approach: the Flickr scheme (auto_increment + replace into + MyISAM).

The MySQL table needs an auto-increment primary key, and uses REPLACE INTO with the MyISAM engine:

SET FOREIGN_KEY_CHECKS=0;

-- ----------------------------
-- Table structure for `test`
-- ----------------------------
DROP TABLE IF EXISTS `test`;
-- the unique key on `ip` is what makes REPLACE INTO update the row in place
-- instead of inserting a new row every time
CREATE TABLE `test` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `ip` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_ip` (`ip`)
) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=utf8;

This follows an approach I saw at Ctrip. An auto-increment primary key will eventually be exhausted, so the value cannot be used directly as the distributed id. Instead, treat it as a segment number: multiply it by 100, 1000, and so on, and each segment yields 100, 1000, etc. ids. This also addresses performance: if every id required a database round trip, throughput would be capped by the database.

 

An AtomicLong handles the concurrency here and guarantees atomic increments.

There are also two ConcurrentHashMaps: one holds the AtomicLong counter for each ip, the other holds the maximum id of each ip's current segment. For example, if my ip is 1 and a segment holds 100 ids, I multiply the segment number by 100 to get the first id: the segment starts at 100 and its maximum is 199. Once the counter passes 199, a REPLACE INTO is issued to request a new segment from the database.

Here are my two main classes.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class Count {
    // counter for each ip's current segment
    public static ConcurrentHashMap<String, AtomicLong> concurrentHashMap = new ConcurrentHashMap<>();

    // maximum id of each ip's current segment
    private static ConcurrentHashMap<String, Long> currentMaxId = new ConcurrentHashMap<>();

    public Count(String ip, long number) {
        /*
         * Multiply the segment number by 100 to get 100 ids.
         * For example, segment 1 covers ids 100 to 199.
         * Start one below the first id so incrementAndGet returns 100 first.
         */
        concurrentHashMap.put(ip, new AtomicLong(number * 100 - 1));
        currentMaxId.put(ip, number * 100 + 99);
    }

    public static long addCount(String ip) {
        AtomicLong counter = concurrentHashMap.get(ip);
        if (counter == null) {
            System.out.println("No segment has been fetched for this ip yet");
            return 0;
        }
        if (counter.get() >= currentMaxId.get(ip)) {
            System.out.println("The segment's 100 ids are used up; fetch a new segment");
            return 1;
        }
        return counter.incrementAndGet();
    }

    public static void putValue(String ip, long number) {
        new Count(ip, number);
    }
}

The class above is the utility class.

The business logic:

import com.example.demo.dao.NumberDao;
import com.example.demo.util.Count;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import static com.example.demo.util.Count.addCount;

@RestController
public class NumberController {

    @Autowired
    NumberDao numberDao;

    @RequestMapping("/number")
    public String getNumber(@RequestParam(name = "ip", required = true) String ip) {
        long id = addCount(ip);
        if (id == 0 || id == 1) {
            // no segment yet (0) or the segment is exhausted (1): fetch a new one
            try {
                numberDao.addId(ip);
            } catch (Exception e) {
                e.printStackTrace();
            }
            Count.putValue(ip, Long.parseLong(numberDao.getId()));
            return String.valueOf(addCount(ip));
        }
        return String.valueOf(id);
    }
}

The persistence layer uses Spring Boot with Spring Data JPA:


import com.example.demo.entity.Test;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public interface NumberDao extends JpaRepository<Test, Long> {

    @Query(value = "SELECT LAST_INSERT_ID()", nativeQuery = true)
    String getId();

    // native write queries need @Modifying and a transaction
    @Modifying
    @Transactional
    @Query(value = "REPLACE INTO test (ip) VALUES (:ip)", nativeQuery = true)
    void addId(@Param("ip") String ip);
}

 

Console output: (screenshot omitted)

 

The basic logic: first check whether the first ConcurrentHashMap already holds an id counter for this ip (here "ip" identifies the requesting application server; adapt the key to your own business, this is only a reference). If the result is 0 there is no counter yet, so request a segment from the database and save that segment's maximum id in the second ConcurrentHashMap; otherwise the next id is taken from the counter. If the result is 1, the ids in the map have passed the segment's preset limit, and a new segment must be requested.

Database screenshot: (omitted)

Tested with ip values 1, 2, and 3.

GitHub: https://github.com/dajitui/-id-

 

3. Simple approach

Timestamp + a business serial id (any scheme works as long as the resulting id is guaranteed unique).
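A minimal sketch of this idea (the format and field widths are my own assumptions, not from the article): concatenate the millisecond timestamp, a business code, and a rolling per-process sequence so ids generated within the same millisecond still differ. Note this is only unique within a single process; distributed use would also need a machine-specific component:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: id = millisecond timestamp + business code + 4-digit rolling sequence.
public class SimpleIdGenerator {
    private static final AtomicLong SEQUENCE = new AtomicLong();

    public static String nextId(String bizCode) {
        long seq = SEQUENCE.incrementAndGet() % 10000;     // rolls over at 9999
        return System.currentTimeMillis() + bizCode + String.format("%04d", seq);
    }

    public static void main(String[] args) {
        // e.g. a 13-digit timestamp, then "01", then a 4-digit sequence
        System.out.println(nextId("01"));
    }
}
```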

 

4. Snowflake algorithm

https://segmentfault.com/a/1190000011282426?utm_source=tag-newest

As the article shows, the parts that differentiate one id from another are the worker machine id and the trailing sequence number.

 

Snowflake was originally written in Scala; here is a demo written in Java:

import java.util.concurrent.ConcurrentHashMap;

/**
 * Created on 2019/4/4.
 *
 * @author dajitui
 */
public class SnowFlake {

    private long workerId;
    private long datacenterId;
    private long sequence;

    public SnowFlake(long workerId, long datacenterId, long sequence) {
        // sanity check for workerId
        if (workerId > maxWorkerId || workerId < 0) {
            throw new IllegalArgumentException(String.format("worker Id can't be greater than %d or less than 0", maxWorkerId));
        }
        if (datacenterId > maxDatacenterId || datacenterId < 0) {
            throw new IllegalArgumentException(String.format("datacenter Id can't be greater than %d or less than 0", maxDatacenterId));
        }
        System.out.printf("worker starting. timestamp left shift %d, datacenter id bits %d, worker id bits %d, sequence bits %d, workerid %d",
                timestampLeftShift, datacenterIdBits, workerIdBits, sequenceBits, workerId);

        this.workerId = workerId;
        this.datacenterId = datacenterId;
        this.sequence = sequence;
    }

    private long twepoch = 1288834974657L;

    private long workerIdBits = 5L;
    private long datacenterIdBits = 5L;
    private long maxWorkerId = -1L ^ (-1L << workerIdBits);
    private long maxDatacenterId = -1L ^ (-1L << datacenterIdBits);
    private long sequenceBits = 12L;

    private long workerIdShift = sequenceBits;
    private long datacenterIdShift = sequenceBits + workerIdBits;
    private long timestampLeftShift = sequenceBits + workerIdBits + datacenterIdBits;
    private long sequenceMask = -1L ^ (-1L << sequenceBits);

    private long lastTimestamp = -1L;

    public long getWorkerId() {
        return workerId;
    }

    public long getDatacenterId() {
        return datacenterId;
    }

    public long getTimestamp() {
        return System.currentTimeMillis();
    }

    public synchronized long nextId() {
        long timestamp = timeGen();

        if (timestamp < lastTimestamp) {
            System.err.printf("clock is moving backwards.  Rejecting requests until %d.", lastTimestamp);
            throw new RuntimeException(String.format("Clock moved backwards.  Refusing to generate id for %d milliseconds",
                    lastTimestamp - timestamp));
        }

        if (lastTimestamp == timestamp) {
            sequence = (sequence + 1) & sequenceMask;
            if (sequence == 0) {
                timestamp = tilNextMillis(lastTimestamp);
            }
        } else {
            sequence = 0;
        }

        lastTimestamp = timestamp;
        return ((timestamp - twepoch) << timestampLeftShift) |
                (datacenterId << datacenterIdShift) |
                (workerId << workerIdShift) |
                sequence;
    }

    private long tilNextMillis(long lastTimestamp) {
        long timestamp = timeGen();
        while (timestamp <= lastTimestamp) {
            timestamp = timeGen();
        }
        return timestamp;
    }

    private long timeGen() {
        return System.currentTimeMillis();
    }

    //--------------- test ---------------
    public static void main(String[] args) {
        SnowFlake worker = new SnowFlake(1, 1, 1);
        ConcurrentHashMap<Long, Integer> map = new ConcurrentHashMap<>(1000);
        for (int i = 0; i < 1000; i++) {
            // generate each id exactly once and reuse the variable; calling
            // nextId() again would check a different id than the one printed
            long id = worker.nextId();
            System.out.println(id);
            if (map.putIfAbsent(id, 1) != null) {
                System.out.println("duplicate id detected");
            }
        }
    }

}
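To make the bit layout concrete, an id produced by the demo above can be split back into its fields by shifting and masking. This decoder is my own sketch, assuming the same constants as the demo: 12 sequence bits, 5 worker-id bits, 5 datacenter-id bits, and the 1288834974657L epoch:

```java
public class SnowFlakeDecoder {
    private static final long TWEPOCH = 1288834974657L;

    // Splits a snowflake id back into timestamp, datacenter id, worker id, sequence.
    public static void decode(long id) {
        long sequence = id & 0xFFF;              // low 12 bits
        long workerId = (id >> 12) & 0x1F;       // next 5 bits
        long datacenterId = (id >> 17) & 0x1F;   // next 5 bits
        long timestamp = (id >> 22) + TWEPOCH;   // remaining bits, plus the epoch
        System.out.printf("timestamp=%d datacenter=%d worker=%d sequence=%d%n",
                timestamp, datacenterId, workerId, sequence);
    }

    public static void main(String[] args) {
        // example: timestamp delta 1, datacenter 1, worker 1, sequence 1
        long id = (1L << 22) | (1L << 17) | (1L << 12) | 1L;
        decode(id); // prints timestamp=1288834974658 datacenter=1 worker=1 sequence=1
    }
}
```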

 

 
