Notes on solving duplicate snowflake worker IDs / data-center IDs across multiple nodes in a container environment
Reading the environment variables from Java:
String workerId = System.getenv("MACHINE_ID");
String dataCenterId = System.getenv("DATA_CENTER_ID");
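`System.getenv` returns `null` when the variable is missing, and a snowflake worker id only has a limited bit width, so it is worth validating the values before use. A minimal sketch, assuming the common snowflake layout of 5 bits per field (valid range 0-31); the class and method names are illustrative:

```java
// Sketch: read and validate snowflake ids from environment variables.
// Assumes 5-bit worker and data-center ids (0-31); adjust if your layout differs.
public class SnowflakeEnv {

    static long parseId(String raw, String name) {
        if (raw == null) {
            throw new IllegalStateException("Environment variable " + name + " is not set");
        }
        long id = Long.parseLong(raw.trim());
        if (id < 0 || id > 31) {
            throw new IllegalArgumentException(name + " must be in [0, 31], got " + id);
        }
        return id;
    }

    public static void main(String[] args) {
        long workerId = parseId(System.getenv("MACHINE_ID"), "MACHINE_ID");
        long dataCenterId = parseId(System.getenv("DATA_CENTER_ID"), "DATA_CENTER_ID");
        System.out.println("workerId=" + workerId + ", dataCenterId=" + dataCenterId);
    }
}
```

Failing fast here is deliberate: a node that starts with a missing or out-of-range id would otherwise generate colliding snowflake ids.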
Set the environment variables when each node starts (making sure these two values differ on every node):
apiVersion: v1
kind: Pod
metadata:
  name: my-service-pod
  labels:
    app: my-service
spec:
  containers:
    - name: my-service-container
      image: my-service-image:latest
      ports:
        - containerPort: 8080
      env:
        - name: MACHINE_ID
          value: "123"
        - name: DATA_CENTER_ID
          value: "456"
This approach is not very friendly: every node in the cluster has to be configured by hand, which quickly becomes tedious.
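One way to avoid hand-configuring each node is to deploy the service as a Kubernetes StatefulSet, whose pods get stable names with an ordinal suffix (`my-service-0`, `my-service-1`, ...), and inject the pod name via the Downward API; the application can then parse the ordinal as its worker id. A sketch of the relevant `env` fragment (names are illustrative):

```yaml
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```

This only hands out ids within one StatefulSet, so the database-lock approach below is still more general.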
Using a database lock to guarantee globally unique worker ids across nodes in a distributed environment
A small example:
// Wrap in a transaction so the row lock is held until commit
@Transactional(rollbackFor = RuntimeException.class)
public long getWorkId(String appName) {
    try {
        RowMapper<SnowflakeInfo> snowflakeInfoRowMapper = (rs, rowNum) -> {
            SnowflakeInfo snowflakeInfo = new SnowflakeInfo();
            snowflakeInfo.setId(rs.getString(1));
            snowflakeInfo.setAppName(rs.getString(2));
            snowflakeInfo.setWorkId(rs.getLong(3));
            return snowflakeInfo;
        };
        // SELECT ... FOR UPDATE takes a row lock; other transactions block here
        String selectSql = "select id, app_name, work_id from t_snowflake_work_id where app_name = ? for update";
        List<SnowflakeInfo> snowflakeInfos = jdbcTemplate.query(selectSql, snowflakeInfoRowMapper, appName);
        Long workId;
        if (snowflakeInfos.isEmpty()) {
            // First startup for this app: insert a row, workId starts at 0
            String id = UUID.randomUUID().toString().replace("-", "");
            workId = 0L;
            jdbcTemplate.update("INSERT INTO t_snowflake_work_id (id, app_name, work_id) VALUES (?, ?, ?)", id, appName, workId);
        } else {
            SnowflakeInfo snowflakeInfo = snowflakeInfos.get(0);
            workId = snowflakeInfo.getWorkId();
        }
        if (workId >= 32) {
            // Only 5 bits are available for the worker id, so wrap back to 0 at 32
            workId = 0L;
        }
        jdbcTemplate.update("update t_snowflake_work_id set work_id = ? where app_name = ?", workId + 1, appName);
        return workId;
    } catch (Exception e) {
        logger.error("Failed to allocate snowflake work id", e);
        // Rethrow so the transaction rolls back instead of silently handing out id 0,
        // which would collide across nodes
        throw new RuntimeException(e);
    }
}
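The `SnowflakeInfo` entity and its backing table are not shown above. A minimal sketch consistent with the columns the RowMapper reads (`id`, `app_name`, `work_id`); the exact class shape is an assumption:

```java
// Hypothetical entity matching the columns of t_snowflake_work_id.
// Expected table: id VARCHAR(32) primary key, app_name VARCHAR, work_id BIGINT.
public class SnowflakeInfo {
    private String id;       // UUID primary key (dashes stripped)
    private String appName;  // application name; one row per app
    private long workId;     // next worker id to hand out

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getAppName() { return appName; }
    public void setAppName(String appName) { this.appName = appName; }
    public long getWorkId() { return workId; }
    public void setWorkId(long workId) { this.workId = workId; }
}
```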