ShardingSphere Study Notes
一、Vertical database splitting
A system has many functional modules, and each module has its own tables. Some of these tables, such as the user, order, and product tables, can grow very large, so we can split the system vertically by database: move the large tables into separate databases of their own. Note, however, that this scheme does not actually solve the problem of a single table holding too much data.
When should we consider splitting a table? Two situations: first, when the table holds too many rows, say more than 5 million records; second, when the table itself is too large on disk, say over 2 GB. Either situation is a signal to start splitting tables.
二、Vertical table splitting
Vertical table splitting targets the second situation: the table has a moderate number of rows but is too large on disk.
It moves the space-hungry columns, and the columns not needed by most queries, out into a separate table. The cost is that database operations now touch more tables. Also note that after the split, the original table and the split-off table must share the same ID, one to one, so that the two rows can be recognized as a single record.
三、Horizontal table splitting
When a single table holds too much data, say more than 5 million rows, we split it horizontally. Put simply, one range of rows goes into one table, the next range into another, and so on. The resulting tables can live in the same database or in different databases, depending on the workload; sometimes keeping them in one database already gives a good performance boost.
If they are spread across different databases, every data operation needs an extra routing step (for example a modulo computation) to decide which database holds the row in question.
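That routing step can be sketched as a plain modulo computation. This is an illustrative sketch only, not ShardingSphere code; the class and method names are made up:

```java
// Hypothetical router: pick one of N databases by taking the shard key
// modulo the database count. Even ids land in db 0, odd ids in db 1 (for N=2).
public class ModuloRouter {
    private final int dbCount;

    public ModuloRouter(int dbCount) {
        this.dbCount = dbCount;
    }

    // Returns the index of the database that should hold this row.
    public int route(long shardKey) {
        return (int) (shardKey % dbCount);
    }

    public static void main(String[] args) {
        ModuloRouter router = new ModuloRouter(2);
        System.out.println("id 7 -> db " + router.route(7L));
        System.out.println("id 8 -> db " + router.route(8L));
    }
}
```

The same idea generalizes to any deterministic function of the shard key, which is exactly what ShardingSphere's sharding algorithms encapsulate later in these notes.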
四、Two architectures for implementing sharding
The first approach extracts the sharding logic into a data-access layer that lives inside the application server; business code calls this layer directly for every database operation. ShardingSphere-JDBC implements this architecture.
Using ShardingSphere-JDBC feels exactly like using plain JDBC in everyday development. It works with any JDBC-based ORM framework: JPA, Hibernate, MyBatis, Spring JDBC Template, or raw JDBC.
The limitation of this in-process data-access layer is that it is Java-only; applications written in other languages cannot use it.
See the ShardingSphere-JDBC architecture diagram on the official site.
The second approach is database middleware: a standalone service, independent of the application servers. Applications connect to the middleware, and the middleware performs the actual database operations.
Common middleware implementations include Mycat and ShardingSphere-Proxy.
The middleware architecture supports heterogeneous languages, so the application no longer has to be written in Java. The proxy has its own limitation, though: it currently supports only MySQL and PostgreSQL.
See the ShardingSphere-Proxy architecture diagram on the official site.
ShardingSphere also now supports hybrid deployment of ShardingSphere-JDBC and ShardingSphere-Proxy, which makes sharding even more flexible: Java applications can use JDBC while applications in other languages use the proxy, and the sharding strategy can be configured centrally through a unified registry center.
See the hybrid deployment architecture diagram on the official site.
五、Setting up a MySQL primary-replica cluster with Docker: one primary, two replicas
A note up front: I am running all of this on an Alibaba Cloud server.
1. Start three MySQL instances with Docker. The commands are below; adjust the mounted paths to your own layout.
docker run -p 3306:3306 -v /opt/soft/mysql/conf:/etc/mysql/conf.d \
-v /opt/soft/mysql/data:/var/lib/mysql \
--name mysql-master -e MYSQL_ROOT_PASSWORD=123456 -d mysql:8.0.29
docker run -p 3307:3306 -v /opt/soft/mysql/slave1conf:/etc/mysql/conf.d \
-v /opt/soft/mysql/slave1data:/var/lib/mysql \
--name mysql-slave1 -e MYSQL_ROOT_PASSWORD=123456 -d mysql:8.0.29
docker run -p 3308:3306 -v /opt/soft/mysql/slave2conf:/etc/mysql/conf.d \
-v /opt/soft/mysql/slave2data:/var/lib/mysql \
--name mysql-slave2 -e MYSQL_ROOT_PASSWORD=123456 -d mysql:8.0.29
2. Configure the primary node's my.cnf
[mysqld]
# Unique server id; default 1
server-id=1
# Binary log format; default ROW
binlog_format=STATEMENT
# Binary log base name; default binlog
# log-bin=binlog
# Databases to replicate; by default all databases are replicated
#binlog-do-db=mytestdb
# Databases to exclude from replication
#binlog-ignore-db=mysql
#binlog-ignore-db=information_schema
3. On the primary, set up the replication account
-- Create the replication user
CREATE USER 'mysql-slave'@'%';
-- Set its password
ALTER USER 'mysql-slave'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
-- Grant replication privileges
GRANT REPLICATION SLAVE ON *.* TO 'mysql-slave'@'%';
-- Reload the grant tables
FLUSH PRIVILEGES;
4. On the primary, check its status and note the current binlog file name and position
SHOW MASTER STATUS;
5. Configure the replicas' my.cnf
[mysqld]
# Unique server id; every server must use a different id. When configuring the second replica, remember to change it.
server-id=2
# Relay log base name; defaults to <hostname>-relay-bin
#relay-log=relay-bin
6. On each replica, configure the replication source
For the primary's IP address, I recommend finding it like this:
run docker inspect <primary container name> and use the IPAddress field shown in the output; that is the primary's address to configure below.
CHANGE MASTER TO MASTER_HOST='172.17.0.5',
MASTER_USER='mysql-slave',MASTER_PASSWORD='123456', MASTER_PORT=3306,
MASTER_LOG_FILE='binlog.000003',MASTER_LOG_POS=1357;
7. Start replication on each replica
START SLAVE;
-- Check the status (note: no semicolon)
SHOW SLAVE STATUS\G
A closer look at two options in the primary's my.cnf:
binlog-do-db: which databases should be written to the binlog (and thus replicated)
binlog-ignore-db: which databases should be excluded
So how do these two options interact?
If neither is configured, every database we create is written to the binlog and replicated.
If only one of them is configured, it alone decides.
If both are configured, binlog-do-db takes precedence: with statement-based logging, only statements for databases listed in binlog-do-db are written to the binlog, and binlog-ignore-db is not consulted at all. The ignore list only takes effect when no binlog-do-db rule is set; in that case statements are logged unless their database matches binlog-ignore-db.
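The evaluation order above can be sketched as a small decision function. This is purely illustrative pseudologic for the statement-based case, not MySQL source code:

```java
import java.util.Set;

// Sketch of binlog filtering precedence: if any do-db rule exists it alone
// decides; otherwise the ignore-db rules apply; otherwise everything is logged.
public class BinlogFilter {
    private final Set<String> doDb;
    private final Set<String> ignoreDb;

    public BinlogFilter(Set<String> doDb, Set<String> ignoreDb) {
        this.doDb = doDb;
        this.ignoreDb = ignoreDb;
    }

    public boolean shouldLog(String db) {
        if (!doDb.isEmpty()) {
            return doDb.contains(db);      // do-db rules win when configured
        }
        if (!ignoreDb.isEmpty()) {
            return !ignoreDb.contains(db); // ignore-db applies only without do-db
        }
        return true;                       // neither configured: log everything
    }
}
```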
六、Hands-on with a Spring Boot project
一、
1. Create a Spring Boot project; I am using Boot version 2.3.7.RELEASE
2. Add the following dependencies to the pom
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>shardingsphere-jdbc-core-spring-boot-starter</artifactId>
<version>5.1.1</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-boot-starter</artifactId>
<version>3.3.1</version>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
3. Create a database named db_user
4. Create the table t_user(id bigint, name varchar(30))
5. Create the entity class:
// @TableName and @TableId come from the MyBatis-Plus integration
@TableName("t_user")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class User {
@TableId(type = IdType.AUTO)
private Long id;
private String name;
}
6. Create the mapper interface
@Mapper
public interface UserMapper extends BaseMapper<User> {
}
7. Basic configuration; the comments explain each line. You can follow the official docs here, but be sure to select the 5.1.1 version of the docs rather than the latest one, since the configuration format has changed since.
spring.application.name=sharingSphere-jdbc
# Development environment
spring.profiles.active=dev
# Mode configuration; the in-memory mode is fine for development
spring.shardingsphere.mode.type=Memory
# Enable ShardingSphere's SQL logging
spring.shardingsphere.props.sql-show=true
# Configure the actual data sources
spring.shardingsphere.datasource.names=master,slave1,slave2
# Data source 1
spring.shardingsphere.datasource.master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.master.jdbc-url=jdbc:mysql://112.126.80.104:3306/db_user
spring.shardingsphere.datasource.master.username=root
spring.shardingsphere.datasource.master.password=123456
# Data source 2
spring.shardingsphere.datasource.slave1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.slave1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.slave1.jdbc-url=jdbc:mysql://112.126.80.104:3307/db_user
spring.shardingsphere.datasource.slave1.username=root
spring.shardingsphere.datasource.slave1.password=123456
# Data source 3
spring.shardingsphere.datasource.slave2.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.slave2.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.slave2.jdbc-url=jdbc:mysql://112.126.80.104:3308/db_user
spring.shardingsphere.datasource.slave2.username=root
spring.shardingsphere.datasource.slave2.password=123456
# Read/write-splitting type: Static or Dynamic. Our data sources are configured statically, so Static is enough.
# <readwrite-splitting-data-source-name> wraps the read/write-split databases into one logical data source; give it a name (here: readwriteds).
spring.shardingsphere.rules.readwrite-splitting.data-sources.readwriteds.type=Static
# The properties below tell this logical data source which source handles writes and which handle reads
# Write data source name
spring.shardingsphere.rules.readwrite-splitting.data-sources.readwriteds.props.write-data-source-name=master
# Read data source names; separate multiple replicas with commas
spring.shardingsphere.rules.readwrite-splitting.data-sources.readwriteds.props.read-data-source-names=slave1,slave2
# Load-balancing algorithm name; pick any name you like
spring.shardingsphere.rules.readwrite-splitting.data-sources.readwriteds.load-balancer-name=polling_alg
# Load-balancing algorithm configuration
# Algorithm type; round-robin here
spring.shardingsphere.rules.readwrite-splitting.load-balancers.polling_alg.type=ROUND_ROBIN
Here is some extra detail on the load-balancing algorithms (as of version 5.1.1):
1. Round-robin
type ROUND_ROBIN — the one configured above; read queries are sent to the replicas in turn.
2. Random
type RANDOM — self-explanatory: a replica is picked at random.
3. Weight
type WEIGHT — requests are routed according to the weights you assign, favoring replicas with larger weights.
Note: round-robin and random take no configurable properties, but the weight algorithm requires one property per replica, namely its weight value:
# Load-balancing algorithm property configuration
spring.shardingsphere.rules.readwrite-splitting.load-balancers.<load-balance-algorithm-name>.props.<read-data_source-name>=weight value (a double)
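To make the weight algorithm concrete, here is an illustrative sketch of weight-proportional replica selection. It is not ShardingSphere's actual implementation; all names are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// Sketch: choose a replica with probability proportional to its weight.
public class WeightLoadBalancer {
    private final Map<String, Double> weights = new LinkedHashMap<>();
    private final Random random;

    public WeightLoadBalancer(Random random) {
        this.random = random;
    }

    public void addReplica(String name, double weight) {
        weights.put(name, weight);
    }

    public String choose() {
        double total = weights.values().stream().mapToDouble(Double::doubleValue).sum();
        double r = random.nextDouble() * total; // point in [0, total)
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            r -= e.getValue();                  // walk the weight intervals
            if (r <= 0) {
                return e.getKey();
            }
        }
        // Fallback for floating-point edge cases
        return weights.keySet().iterator().next();
    }
}
```

With weights slave1=2.0 and slave2=1.0, slave1 receives roughly two thirds of the reads over time.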
8. A simple insert test
@Test
void test01() {
User user = new User(null,"李四");
mapper.insert(user);
}
二、
Testing transactions with ShardingSphere-JDBC
First, a note: in primary-replica mode, to keep transactions consistent between the primary and the replicas and avoid distributed-transaction problems, ShardingSphere-JDBC routes both reads and writes to the primary whenever a transaction is open, i.e. whenever @Transactional is in effect. Without a transaction, the default behavior applies: writes go to the primary and reads go to the replicas.
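That routing rule can be sketched as follows. This is an illustrative model of the behavior described above, not ShardingSphere's actual routing code:

```java
// Sketch: inside a transaction everything goes to the write source; outside
// one, reads are round-robined across replicas and writes still hit the primary.
public class ReadWriteRouter {
    private final String writeSource;
    private final String[] readSources;
    private int next = 0; // round-robin cursor

    public ReadWriteRouter(String writeSource, String... readSources) {
        this.writeSource = writeSource;
        this.readSources = readSources;
    }

    public String route(boolean isQuery, boolean inTransaction) {
        if (inTransaction || !isQuery) {
            // Keep transactional reads consistent with their writes
            return writeSource;
        }
        String target = readSources[next];
        next = (next + 1) % readSources.length;
        return target;
    }
}
```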
三、
Vertical sharding
Vertical sharding covers both vertical table splitting and vertical database splitting. Splitting one table into several tables within a single database only adds some extra joins to our queries, and needs no special ShardingSphere setup. Splitting across databases, however, involves multiple data sources, so extra configuration is required.
1. Use Docker to start two MySQL servers: one stores the order data, the other the user data.
The Docker commands mirror the earlier ones:
docker run -p 3301:3306 -v /opt/soft/mysql/server-user/conf:/etc/mysql/conf.d \
-v /opt/soft/mysql/server-user/data:/var/lib/mysql \
--name server-user -e MYSQL_ROOT_PASSWORD=123456 -d mysql:8.0.29
docker run -p 3302:3306 -v /opt/soft/mysql/server-order/conf:/etc/mysql/conf.d \
-v /opt/soft/mysql/server-order/data:/var/lib/mysql \
--name server-order -e MYSQL_ROOT_PASSWORD=123456 -d mysql:8.0.29
2. On the server-user instance, create the database and table
CREATE DATABASE db_user;
USE db_user;
CREATE TABLE t_user (
id BIGINT AUTO_INCREMENT,
name VARCHAR(30),
PRIMARY KEY (id)
);
3. On the server-order instance, create the database and table
CREATE DATABASE db_order;
USE db_order;
CREATE TABLE t_order (
id BIGINT AUTO_INCREMENT,
order_no VARCHAR(30),
user_id BIGINT,
amount DECIMAL(10,2),
PRIMARY KEY(id)
);
4. Create the entity classes
@TableName("t_order")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Order {
@TableId(type = IdType.AUTO)
private Long id;
private String orderNo;
private Long userId;
private BigDecimal amount;
}
@TableName("t_user")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class User {
@TableId(type = IdType.AUTO)
private Long id;
private String name;
}
5. Create the mapper interfaces
@Mapper
public interface UserMapper extends BaseMapper<User> {
}
@Mapper
public interface OrderMapper extends BaseMapper<Order> {
}
6. Configuration
spring.application.name=sharingSphere-jdbc
# Development environment
spring.profiles.active=dev
# Mode configuration; the in-memory mode is fine for development
spring.shardingsphere.mode.type=Memory
# Enable ShardingSphere's SQL logging
spring.shardingsphere.props.sql-show=true
# Configure the actual data sources
spring.shardingsphere.datasource.names=server-user,server-order
# Data source 1
spring.shardingsphere.datasource.server-user.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.server-user.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.server-user.jdbc-url=jdbc:mysql://112.126.80.104:3301/db_user
spring.shardingsphere.datasource.server-user.username=root
spring.shardingsphere.datasource.server-user.password=123456
# Data source 2
spring.shardingsphere.datasource.server-order.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.server-order.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.server-order.jdbc-url=jdbc:mysql://112.126.80.104:3302/db_order
spring.shardingsphere.datasource.server-order.username=root
spring.shardingsphere.datasource.server-order.password=123456
# Standard sharding table configuration
# Each data node is data-source name + table name, separated by a dot. Multiple nodes are comma-separated; inline expressions are supported.
# If omitted, data nodes are generated from all known data sources and the logical table name; this default suits broadcast tables (the same table needed in every database for joins, typically dictionary tables) or the database-sharding-only case where every database has an identical table.
spring.shardingsphere.rules.sharding.tables.t_user.actual-data-nodes=server-user.t_user
spring.shardingsphere.rules.sharding.tables.t_order.actual-data-nodes=server-order.t_order
7. Test inserting data
@Test
void testInsertOrderAndUser() {
User user = new User(null,"高启强");
userMapper.insert(user);
Order order = new Order(null,"EyeDropLyq001",1L,new BigDecimal(100));
orderMapper.insert(order);
}
四、
Horizontal sharding
First, the design: the server-order database is split into two databases, server-order0 and server-order1, and each database contains two tables, t_order0 and t_order1. So this combines horizontal database sharding with horizontal table sharding, turning one database with one table into two databases with four tables in total.
1. As before, start the MySQL servers with Docker
docker run -p 3310:3306 -v /opt/soft/mysql/server-order0/conf:/etc/mysql/conf.d \
-v /opt/soft/mysql/server-order0/data:/var/lib/mysql \
--name server-order0 -e MYSQL_ROOT_PASSWORD=123456 -d mysql:8.0.29
docker run -p 3311:3306 -v /opt/soft/mysql/server-order1/conf:/etc/mysql/conf.d \
-v /opt/soft/mysql/server-order1/data:/var/lib/mysql \
--name server-order1 -e MYSQL_ROOT_PASSWORD=123456 -d mysql:8.0.29
2. Create the database and tables (on each of the two order servers)
CREATE DATABASE db_order;
USE db_order;
CREATE TABLE t_order0 (
id BIGINT,
order_no VARCHAR(30),
user_id BIGINT,
amount DECIMAL(10,2),
PRIMARY KEY(id)
);
CREATE TABLE t_order1 (
id BIGINT,
order_no VARCHAR(30),
user_id BIGINT,
amount DECIMAL(10,2),
PRIMARY KEY(id)
);
Careful! The id column is deliberately not AUTO_INCREMENT here. With multiple data sources involved, simple auto-increment is no longer safe; a more sophisticated key strategy is needed to prevent id collisions between tables in different data sources. This is covered below.
3. Configure the Spring Boot application. The configuration gets noticeably more involved here, so take it step by step.
#----------------------------------- Basic configuration
spring.application.name=sharingSphere-jdbc
# Development environment
spring.profiles.active=dev
# Mode configuration; the in-memory mode is fine for development
spring.shardingsphere.mode.type=Memory
# Enable ShardingSphere's SQL logging
spring.shardingsphere.props.sql-show=true
#---------------------------------- Data sources
# Configure the actual data sources
spring.shardingsphere.datasource.names=server-user,server-order0,server-order1
# Data source 1
spring.shardingsphere.datasource.server-user.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.server-user.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.server-user.jdbc-url=jdbc:mysql://112.126.80.104:3301/db_user
spring.shardingsphere.datasource.server-user.username=root
spring.shardingsphere.datasource.server-user.password=123456
# Data source 2
spring.shardingsphere.datasource.server-order0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.server-order0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.server-order0.jdbc-url=jdbc:mysql://112.126.80.104:3310/db_order
spring.shardingsphere.datasource.server-order0.username=root
spring.shardingsphere.datasource.server-order0.password=123456
# Data source 3
spring.shardingsphere.datasource.server-order1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.server-order1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.server-order1.jdbc-url=jdbc:mysql://112.126.80.104:3311/db_order
spring.shardingsphere.datasource.server-order1.username=root
spring.shardingsphere.datasource.server-order1.password=123456
# Standard sharding table configuration
# Each data node is data-source name + table name, separated by a dot. Multiple nodes are comma-separated; inline expressions are supported.
# If omitted, data nodes are generated from all known data sources and the logical table name; this default suits broadcast tables (the same table needed in every database for joins, typically dictionary tables) or the database-sharding-only case where every database has an identical table.
spring.shardingsphere.rules.sharding.tables.t_user.actual-data-nodes=server-user.t_user
# There are several order tables now, so use an inline expression
spring.shardingsphere.rules.sharding.tables.t_order.actual-data-nodes=server-order$->{0..1}.t_order$->{0..1}
# ------------------- Database sharding strategy; if omitted, the default strategy applies. Choose exactly one of the strategies below.
# Standard strategy for single-sharding-column scenarios
# Sharding column: the column the sharding algorithm computes on
# Note: use the logical table name t_order here
spring.shardingsphere.rules.sharding.tables.t_order.database-strategy.standard.sharding-column=user_id
# Sharding algorithm name
spring.shardingsphere.rules.sharding.tables.t_order.database-strategy.standard.sharding-algorithm-name=alg_mod
#------------------------------ Table sharding strategy, same shape as the database strategy
# Sharding column
spring.shardingsphere.rules.sharding.tables.t_order.table-strategy.standard.sharding-column=order_no
# Sharding algorithm name
spring.shardingsphere.rules.sharding.tables.t_order.table-strategy.standard.sharding-algorithm-name=alg_hash_mod
# --------------------------- Sharding algorithm configuration
# Sharding algorithm type
# The modulo algorithm, used here for database sharding: user_id mod 2 routes even values to server-order0 and odd values to server-order1
spring.shardingsphere.rules.sharding.sharding-algorithms.alg_mod.type=MOD
# Algorithm properties
spring.shardingsphere.rules.sharding.sharding-algorithms.alg_mod.props.sharding-count=2
# The hash-modulo algorithm, used here for table sharding; it is well suited to string sharding columns
spring.shardingsphere.rules.sharding.sharding-algorithms.alg_hash_mod.type=HASH_MOD
# Algorithm properties
spring.shardingsphere.rules.sharding.sharding-algorithms.alg_hash_mod.props.sharding-count=2
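To see how a row is routed under this configuration, here is a sketch: MOD on user_id picks the database, HASH_MOD on order_no picks the table. Note the hash function below is Java's hashCode, which is an assumption for illustration; the real HASH_MOD algorithm is ShardingSphere's own:

```java
// Sketch of the two-level routing: database index from user_id (MOD),
// table index from order_no (HASH_MOD-style), both with sharding-count=2.
public class OrderShardingDemo {
    static int dbIndex(long userId) {
        return (int) (userId % 2);               // MOD
    }

    static int tableIndex(String orderNo) {
        return Math.abs(orderNo.hashCode() % 2); // hash then mod (illustrative)
    }

    public static void main(String[] args) {
        long userId = 3L;
        String orderNo = "EyeDrop003";
        System.out.println("row lands in server-order" + dbIndex(userId)
                + ".t_order" + tableIndex(orderNo));
    }
}
```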
4. Test inserting data
@Test
public void testInsertOrderFenPian(){
for (int i = 0; i < 4; i++) {
Order order = new Order();
order.setUserId((long) i);
order.setOrderNo("EyeDrop00"+i);
order.setAmount(new BigDecimal(100));
orderMapper.insert(order);
}
}
五、
A follow-up on the problem left open above: primary-key generation in a distributed system. Since MyBatis-Plus is on the classpath, we could use its built-in distributed id strategy, IdType.ASSIGN_ID, which is also based on the snowflake algorithm. But if we were not using MyBatis-Plus, how would we keep primary keys unique across shards? The answer is ShardingSphere's distributed key-generation configuration.
# -------------------------------- Distributed key-generation strategy
# Key column name
spring.shardingsphere.rules.sharding.tables.t_user.key-generate-strategy.column=id
spring.shardingsphere.rules.sharding.tables.t_order.key-generate-strategy.column=id
# Key-generation algorithm name
spring.shardingsphere.rules.sharding.tables.t_user.key-generate-strategy.key-generator-name=alg_snowflake
spring.shardingsphere.rules.sharding.tables.t_order.key-generate-strategy.key-generator-name=alg_snowflake
# Key-generation algorithm configuration
# Algorithm type
spring.shardingsphere.rules.sharding.key-generators.alg_snowflake.type=SNOWFLAKE
# Algorithm properties
#spring.shardingsphere.rules.sharding.key-generators.alg_snowflake.props.xxx=
Note: once this key generator is configured, set the MyBatis-Plus key strategy on the entity to @TableId(type = IdType.AUTO), so that ShardingSphere rather than MyBatis-Plus assigns the id.
Test inserting data
@Test
public void testInsertOrderAndSnowFlake(){
for (int i = 20; i < 24; i++) {
Order order = new Order();
order.setUserId(1L);
order.setOrderNo("EyeDrop"+i);
order.setAmount(new BigDecimal(100));
orderMapper.insert(order);
}
}
At this point a short digression on distributed key generation seems worthwhile; it should broaden the picture a little.
In a traditional single-node database, unique auto-incrementing primary keys are a built-in feature: MySQL has auto-increment columns, Oracle has sequences. In a distributed database, though, once a table is split across multiple data sources, the shards cannot see each other's ids, so independently generated ids may collide. Classic workarounds exist, for example giving each shard its own starting value plus a shared step size. That avoids collisions, but it adds maintenance rules: designing good offsets and step sizes alone takes real effort, and the scheme lacks completeness and scalability.
To solve this, ShardingSphere currently offers the following approaches:
1. UUID
UUID.randomUUID() produces a unique id, but the values are unordered, which is a poor fit for storage engines that maintain a clustered primary-key index, such as MySQL's InnoDB.
2. SNOWFLAKE
The snowflake algorithm is a classic solution for distributed primary keys, originally open-sourced by Twitter. It guarantees that ids generated by different processes do not collide, and that ids generated within one process are ordered, which makes it well suited to primary keys, since primary keys usually serve as indexes.
How the algorithm works
First, look at the diagram on the official site.
A snowflake id is a 64-bit long integer.
Its parts are as follows:
First, 1 sign bit: ids are always positive, so this bit is always 0.
41 timestamp bits: 2^41 / (365 * 24 * 60 * 60 * 1000) ≈ 69.73 years of millisecond timestamps. With the epoch set at 2016, this lasts until roughly 2086 (yet another epoch pitfall).
10 worker-id bits: these distinguish processes, which is what keeps ids from different processes from colliding; each Java process gets its own unique worker id.
12 sequence bits: these keep ids unique within a single process, even within the same millisecond. If more than 4096 (2^12) ids are requested within one millisecond, generation waits for the next millisecond before continuing.
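The bit layout described above can be sketched by composing the three fields with shifts. The field widths follow the text; the real generator's epoch handling, clock-rollback protection, and sequence overflow logic are omitted:

```java
// Sketch of the snowflake layout: 1 sign bit (implicitly 0), 41 timestamp
// bits, 10 worker-id bits, 12 sequence bits, packed into one long.
public class SnowflakeLayout {
    static final long WORKER_BITS = 10;
    static final long SEQUENCE_BITS = 12;

    static long compose(long timestampMs, long workerId, long sequence) {
        return (timestampMs << (WORKER_BITS + SEQUENCE_BITS)) // bits 22..62
             | (workerId << SEQUENCE_BITS)                    // bits 12..21
             | sequence;                                      // bits 0..11
    }

    public static void main(String[] args) {
        // Two ids from the same millisecond and worker differ only in the
        // sequence bits, so they remain strictly ordered.
        long a = compose(1L, 1L, 0L);
        long b = compose(1L, 1L, 1L);
        System.out.println(a < b);
    }
}
```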
六、
Joins across sharded tables
1. Create the tables t_order_item0 and t_order_item1 in both the server-order0 and server-order1 data sources.
We want a given user's orders and order items to live in the same data source, avoiding cross-database joins, so t_order and t_order_item should share the same sharding strategy.
CREATE TABLE t_order_item0(
id BIGINT,
order_no VARCHAR(30),
user_id BIGINT,
price DECIMAL(10,2),
`count` INT,
PRIMARY KEY(id)
);
CREATE TABLE t_order_item1(
id BIGINT,
order_no VARCHAR(30),
user_id BIGINT,
price DECIMAL(10,2),
`count` INT,
PRIMARY KEY(id)
);
2. Create the order-item entity class
@Data
@AllArgsConstructor
@NoArgsConstructor
@TableName("t_order_item")
public class OrderItem {
@TableId(type = IdType.AUTO)
private Long id;
private String orderNo;
private Long userId;
private BigDecimal price;
private Integer count;
}
3. Create the mapper
@Mapper
public interface OrderItemMapper extends BaseMapper<OrderItem> {
}
4. Update the configuration
#----------------------------------- Basic configuration
spring.application.name=sharingSphere-jdbc
# Development environment
spring.profiles.active=dev
# Mode configuration; the in-memory mode is fine for development
spring.shardingsphere.mode.type=Memory
# Enable ShardingSphere's SQL logging
spring.shardingsphere.props.sql-show=true
#---------------------------------- Data sources
# Configure the actual data sources
spring.shardingsphere.datasource.names=server-user,server-order0,server-order1
# Data source 1
spring.shardingsphere.datasource.server-user.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.server-user.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.server-user.jdbc-url=jdbc:mysql://112.126.80.104:3301/db_user
spring.shardingsphere.datasource.server-user.username=root
spring.shardingsphere.datasource.server-user.password=123456
# Data source 2
spring.shardingsphere.datasource.server-order0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.server-order0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.server-order0.jdbc-url=jdbc:mysql://112.126.80.104:3310/db_order
spring.shardingsphere.datasource.server-order0.username=root
spring.shardingsphere.datasource.server-order0.password=123456
# Data source 3
spring.shardingsphere.datasource.server-order1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.server-order1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.server-order1.jdbc-url=jdbc:mysql://112.126.80.104:3311/db_order
spring.shardingsphere.datasource.server-order1.username=root
spring.shardingsphere.datasource.server-order1.password=123456
# Standard sharding table configuration
# Each data node is data-source name + table name, separated by a dot. Multiple nodes are comma-separated; inline expressions are supported.
# If omitted, data nodes are generated from all known data sources and the logical table name; this default suits broadcast tables (the same table needed in every database for joins, typically dictionary tables) or the database-sharding-only case where every database has an identical table.
spring.shardingsphere.rules.sharding.tables.t_user.actual-data-nodes=server-user.t_user
# There are several order tables now, so use inline expressions
spring.shardingsphere.rules.sharding.tables.t_order.actual-data-nodes=server-order$->{0..1}.t_order$->{0..1}
spring.shardingsphere.rules.sharding.tables.t_order_item.actual-data-nodes=server-order$->{0..1}.t_order_item$->{0..1}
# ------------------- Database sharding strategy; if omitted, the default strategy applies. Choose exactly one of the strategies below.
# Standard strategy for single-sharding-column scenarios
# Sharding column: the column the sharding algorithm computes on
# Note: use the logical table name t_order here
spring.shardingsphere.rules.sharding.tables.t_order.database-strategy.standard.sharding-column=user_id
spring.shardingsphere.rules.sharding.tables.t_order_item.database-strategy.standard.sharding-column=user_id
# Sharding algorithm name
spring.shardingsphere.rules.sharding.tables.t_order.database-strategy.standard.sharding-algorithm-name=alg_mod
spring.shardingsphere.rules.sharding.tables.t_order_item.database-strategy.standard.sharding-algorithm-name=alg_mod
#------------------------------ Table sharding strategy, same shape as the database strategy
# Sharding column
spring.shardingsphere.rules.sharding.tables.t_order.table-strategy.standard.sharding-column=order_no
spring.shardingsphere.rules.sharding.tables.t_order_item.table-strategy.standard.sharding-column=order_no
# Sharding algorithm name
spring.shardingsphere.rules.sharding.tables.t_order.table-strategy.standard.sharding-algorithm-name=alg_hash_mod
spring.shardingsphere.rules.sharding.tables.t_order_item.table-strategy.standard.sharding-algorithm-name=alg_hash_mod
# --------------------------- Sharding algorithm configuration
# Sharding algorithm type
# The modulo algorithm, used here for database sharding: user_id mod 2 routes even values to server-order0 and odd values to server-order1
spring.shardingsphere.rules.sharding.sharding-algorithms.alg_mod.type=MOD
# Algorithm properties
spring.shardingsphere.rules.sharding.sharding-algorithms.alg_mod.props.sharding-count=2
# The hash-modulo algorithm, used here for table sharding; it is well suited to string sharding columns
spring.shardingsphere.rules.sharding.sharding-algorithms.alg_hash_mod.type=HASH_MOD
# Algorithm properties
spring.shardingsphere.rules.sharding.sharding-algorithms.alg_hash_mod.props.sharding-count=2
# -------------------------------- Distributed key-generation strategy
# Key column name
spring.shardingsphere.rules.sharding.tables.t_user.key-generate-strategy.column=id
spring.shardingsphere.rules.sharding.tables.t_order.key-generate-strategy.column=id
spring.shardingsphere.rules.sharding.tables.t_order_item.key-generate-strategy.column=id
# Key-generation algorithm name
spring.shardingsphere.rules.sharding.tables.t_user.key-generate-strategy.key-generator-name=alg_snowflake
spring.shardingsphere.rules.sharding.tables.t_order.key-generate-strategy.key-generator-name=alg_snowflake
spring.shardingsphere.rules.sharding.tables.t_order_item.key-generate-strategy.key-generator-name=alg_snowflake
# Key-generation algorithm configuration
# Algorithm type
spring.shardingsphere.rules.sharding.key-generators.alg_snowflake.type=SNOWFLAKE
# Algorithm properties
#spring.shardingsphere.rules.sharding.key-generators.alg_snowflake.props.xxx=
5. Test inserting
@Test
public void testInsertOrderAndOrderItem(){
for (int i = 1; i <= 4; i++) {
Order order = new Order();
order.setUserId(1L);
order.setOrderNo("EyeDrop"+i);
for (int j = 1; j <= 3; j++) {
OrderItem orderItem = new OrderItem();
orderItem.setOrderNo("EyeDrop"+i);
orderItem.setCount(3);
orderItem.setUserId(1L);
orderItem.setPrice(new BigDecimal(3));
orderItemMapper.insert(orderItem);
}
orderMapper.insert(order);
}
}
After the test you can verify that the order_no values in order0 line up with those in order_item0, and that the order_no values in order1 line up with those in order_item1. Test complete.
Now let's test a query and see whether anything goes wrong.
1. First create a VO class
@Data
@AllArgsConstructor
@NoArgsConstructor
public class OrderAndItemVo {
private String orderNo;
private BigDecimal amount;
}
2. Define the query SQL in the mapper
@Mapper
public interface OrderItemMapper extends BaseMapper<OrderItem> {
@Select({"SELECT t1.order_no AS orderNo,",
"SUM(t2.count*t2.price) AS amount",
"FROM t_order t1",
"join t_order_item t2",
"on t1.order_no=t2.order_no",
"GROUP BY t2.order_no"})
List<OrderAndItemVo> getAllOrderAndItem();
}
3. Test code
@Test
public void testGetOrderAndItemVo(){
List<OrderAndItemVo> items = orderItemMapper.getAllOrderAndItem();
items.forEach(System.out::println);
}
With this configuration, however, ShardingSphere executes the join as a cartesian product under the hood: 8 actual queries. That is clearly unnecessary, because order0 corresponds to order_item0 and order1 to order_item1, so joining order0 against order_item1 is a textbook case of wasted work. To eliminate those combinations, configure the two tables as binding tables.
# ------------------------------------ Binding table rules
spring.shardingsphere.rules.sharding.binding-tables[0]=t_order,t_order_item
Run the query again and the 8 actual queries drop to 4.
Note: not just any two tables can be bound. The two tables must share the same sharding strategy, and their sharding columns must match as well.
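The query counts above follow from simple arithmetic, sketched here for illustration: without binding, every t_order node in a database is joined with every t_order_item node in that database; with binding, only same-suffix nodes are joined.

```java
// Illustrative count of real queries for a sharded join:
// unbound -> cartesian product of table nodes per database;
// bound   -> one join per matching table suffix per database.
public class BindingRouteCount {
    static int routes(int dbs, int tablesPerDb, boolean bound) {
        int joinsPerDb = bound ? tablesPerDb : tablesPerDb * tablesPerDb;
        return dbs * joinsPerDb;
    }

    public static void main(String[] args) {
        System.out.println(routes(2, 2, false)); // without binding: 8
        System.out.println(routes(2, 2, true));  // with binding: 4
    }
}
```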
七、
Broadcast tables
What is a broadcast table? It is a table that exists in every data source, holds little data, but is frequently joined against huge sharded tables; a dictionary table is the typical example.
Such a table can be declared a broadcast table: an insert is then applied to the copy in every data source, keeping them all consistent, while a query reads from just one data source.
A broadcast table can be joined with any other table.
1. Create the t_dict table in the server-user, server-order0, and server-order1 databases
CREATE TABLE t_dict(
id BIGINT,
dict_type VARCHAR(200),
PRIMARY KEY(id)
);
2. Create the entity class
@TableName("t_dict")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Dict {
@TableId(type = IdType.ASSIGN_ID)
private Long id;
private String dictType;
}
3. Create the mapper
@Mapper
public interface DictMapper extends BaseMapper<Dict> {
}
4. Configure the broadcast table
#---------------------------------- Broadcast table rules
spring.shardingsphere.rules.sharding.broadcast-tables[0]=t_dict
5. Test
@Test
public void testInsertDict(){
Dict dict = new Dict();
dict.setDictType("EyeDropType");
dictMapper.insert(dict);
}
@Test
public void testSelectDict(){
List<Dict> dicts = dictMapper.selectList(null);
dicts.forEach(System.out::println);
}
七、ShardingSphere-Proxy
ShardingSphere-Proxy runs as a separate service, so it is non-invasive to business code: the application talks to it exactly as it would talk to MySQL, and the proxy takes care of the actual sharded operations.
There are two common ways to run ShardingSphere-Proxy.
一、Install from the binary package
Note: if ShardingSphere-Proxy is to front MySQL, the MySQL driver jar must be placed in ext-lib/; the same applies when using Docker.
Then edit the server.yaml file in the conf directory:
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
######################################################################################################
#
# If you want to configure governance, authorization and proxy properties, please refer to this file.
#
######################################################################################################
#mode:
# type: Cluster
# repository:
# type: ZooKeeper
# props:
# namespace: governance_ds
# server-lists: localhost:2181
# retryIntervalMilliseconds: 500
# timeToLiveSeconds: 60
# maxRetries: 3
# operationTimeoutMilliseconds: 500
# overwrite: false
#
rules:
- !AUTHORITY
users:
- root@%:root
# - sharding@:sharding
provider:
type: ALL_PRIVILEGES_PERMITTED
# - !TRANSACTION
# defaultType: XA
# providerType: Atomikos
# # When the provider type is Narayana, the following properties can be configured or not
# props:
# recoveryStoreUrl: jdbc:mysql://127.0.0.1:3306/jbossts
# recoveryStoreDataSource: com.mysql.jdbc.jdbc2.optional.MysqlDataSource
# recoveryStoreUser: root
# recoveryStorePassword: 12345678
# - !SQL_PARSER
# sqlCommentParseEnabled: true
# sqlStatementCache:
# initialCapacity: 2000
# maximumSize: 65535
# concurrencyLevel: 4
# parseTreeCache:
# initialCapacity: 128
# maximumSize: 1024
# concurrencyLevel: 4
props:
# max-connections-size-per-query: 1
# kernel-executor-size: 16 # Infinite by default.
# proxy-frontend-flush-threshold: 128 # The default value is 128.
# proxy-hint-enabled: false
sql-show: true
# check-table-metadata-enabled: false
# show-process-list-enabled: false
# # Proxy backend query fetch size. A larger value may increase the memory usage of ShardingSphere Proxy.
# # The default value is -1, which means set the minimum value for different JDBC drivers.
# proxy-backend-query-fetch-size: -1
# check-duplicate-table-enabled: false
# proxy-frontend-executor-size: 0 # Proxy frontend executor size. The default value is 0, which means let Netty decide.
# # Available options of proxy backend executor suitable: OLAP(default), OLTP. The OLTP option may reduce time cost of writing packets to client, but it may increase the latency of SQL execution
# # and block other clients if client connections are more than `proxy-frontend-executor-size`, especially executing slow SQL.
# proxy-backend-executor-suitable: OLAP
# proxy-frontend-max-connections: 0 # Less than or equal to 0 means no limitation.
# sql-federation-enabled: false
# # Available proxy backend driver type: JDBC (default), ExperimentalVertx
# proxy-backend-driver-type: JDBC
Then just start the proxy.
二、Start with Docker
1. Start the container
docker run -d \
-v /opt/shardingsphere-proxy/conf:/opt/shardingsphere-proxy/conf \
-v /opt/shardingsphere-proxy/ext-lib:/opt/shardingsphere-proxy/ext-lib \
-e ES_JAVA_OPTS="-Xmx256m -Xms256m -Xmn128m" \
-p 3321:3307 \
--name server-proxy-a \
apache/shardingsphere-proxy:5.1.1
2. Put the MySQL driver jar into ext-lib
3. Configure the server.yaml file under conf
The server.yaml content is identical to the file shown above for the binary-package installation.
3. Configure read/write splitting
In the conf directory, create a config-readwrite-splitting.yaml file:
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
######################################################################################################
#
# Here you can configure the rules for the proxy.
# This example is configuration of replica-query rule.
#
######################################################################################################
#
#schemaName: readwrite_splitting_db
#
#dataSources:
# primary_ds:
# url: jdbc:postgresql://127.0.0.1:5432/demo_primary_ds
# username: postgres
# password: postgres
# connectionTimeoutMilliseconds: 30000
# idleTimeoutMilliseconds: 60000
# maxLifetimeMilliseconds: 1800000
# maxPoolSize: 50
# minPoolSize: 1
# replica_ds_0:
# url: jdbc:postgresql://127.0.0.1:5432/demo_replica_ds_0
# username: postgres
# password: postgres
# connectionTimeoutMilliseconds: 30000
# idleTimeoutMilliseconds: 60000
# maxLifetimeMilliseconds: 1800000
# maxPoolSize: 50
# minPoolSize: 1
# replica_ds_1:
# url: jdbc:postgresql://127.0.0.1:5432/demo_replica_ds_1
# username: postgres
# password: postgres
# connectionTimeoutMilliseconds: 30000
# idleTimeoutMilliseconds: 60000
# maxLifetimeMilliseconds: 1800000
# maxPoolSize: 50
# minPoolSize: 1
#
#rules:
#- !READWRITE_SPLITTING
# dataSources:
# readwrite_ds:
# type: Static
# props:
# write-data-source-name: primary_ds
# read-data-source-names: replica_ds_0,replica_ds_1
######################################################################################################
#
# If you want to connect to MySQL, you should manually copy MySQL driver to lib directory.
#
######################################################################################################
schemaName: readwrite_splitting_db

dataSources:
  write_ds:
    url: jdbc:mysql://172.31.51.249:3306/db_user?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_0:
    url: jdbc:mysql://172.31.51.249:3307/db_user?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_1:
    url: jdbc:mysql://172.31.51.249:3308/db_user?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !READWRITE_SPLITTING
  dataSources:
    readwrite_ds:
      type: Static
      props:
        write-data-source-name: write_ds
        read-data-source-names: read_ds_0,read_ds_1
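The Static rule above routes every write to write_ds and spreads reads over read_ds_0 and read_ds_1; with no load balancer configured, reads are typically balanced round-robin by default. The sketch below only illustrates that round-robin idea with the data-source names from the config; it is not ShardingSphere's actual implementation:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustration only: round-robin selection over the configured read data sources.
public class RoundRobinReadSelector {
    private final List<String> readDataSources;
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinReadSelector(List<String> readDataSources) {
        this.readDataSources = readDataSources;
    }

    // Every SELECT goes to the next replica in turn; all writes would go to write_ds.
    public String next() {
        int index = Math.floorMod(counter.getAndIncrement(), readDataSources.size());
        return readDataSources.get(index);
    }

    public static void main(String[] args) {
        RoundRobinReadSelector selector =
                new RoundRobinReadSelector(List.of("read_ds_0", "read_ds_1"));
        System.out.println(selector.next()); // read_ds_0
        System.out.println(selector.next()); // read_ds_1
        System.out.println(selector.next()); // read_ds_0
    }
}
```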
4. Create a Spring Boot project that uses ShardingSphere-Proxy
1. Create the Boot project.
2. The pom file is as follows:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>com.baomidou</groupId>
        <artifactId>mybatis-plus-boot-starter</artifactId>
        <version>3.3.1</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
3. Create the entity class
@TableName("t_user")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class User {
    @TableId(type = IdType.AUTO)
    private Long id;
    private String name;
}
4. Create the mapper
@Mapper
public interface UserMapper extends BaseMapper<User> {
}
5. Create the configuration file (application.properties)
# application name
spring.application.name=sharding-proxy-demo
# active profile
spring.profiles.active=dev
# MySQL connection (pointing at ShardingSphere-Proxy, not at MySQL directly)
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://112.126.80.104:3321/readwrite_splitting_db?serverTimezone=GMT%2B8&useSSL=false
spring.datasource.username=root
spring.datasource.password=root
# MyBatis SQL logging
mybatis-plus.configuration.log-impl=org.apache.ibatis.logging.stdout.StdOutImpl
6. Test. In a test class annotated with @SpringBootTest, with the UserMapper injected via @Autowired, run:
@Test
void testSelectUser() {
    List<User> userList = userMapper.selectList(null);
    userList.forEach(System.out::println);
}
5. Configure vertical sharding
1. In the conf directory, create a configuration file config-shadow.yaml:
######################################################################################################
#
# Here you can configure the rules for the proxy.
# This example is configuration of sharding rule.
#
######################################################################################################
#
#schemaName: sharding_db
#
#dataSources:
# ds_0:
# url: jdbc:postgresql://127.0.0.1:5432/demo_ds_0
# username: postgres
# password: postgres
# connectionTimeoutMilliseconds: 30000
# idleTimeoutMilliseconds: 60000
# maxLifetimeMilliseconds: 1800000
# maxPoolSize: 50
# minPoolSize: 1
# ds_1:
# url: jdbc:postgresql://127.0.0.1:5432/demo_ds_1
# username: postgres
# password: postgres
# connectionTimeoutMilliseconds: 30000
# idleTimeoutMilliseconds: 60000
# maxLifetimeMilliseconds: 1800000
# maxPoolSize: 50
# minPoolSize: 1
#
#rules:
#- !SHARDING
# tables:
# t_order:
# actualDataNodes: ds_${0..1}.t_order_${0..1}
# tableStrategy:
# standard:
# shardingColumn: order_id
# shardingAlgorithmName: t_order_inline
# keyGenerateStrategy:
# column: order_id
# keyGeneratorName: snowflake
# t_order_item:
# actualDataNodes: ds_${0..1}.t_order_item_${0..1}
# tableStrategy:
# standard:
# shardingColumn: order_id
# shardingAlgorithmName: t_order_item_inline
# keyGenerateStrategy:
# column: order_item_id
# keyGeneratorName: snowflake
# bindingTables:
# - t_order,t_order_item
# defaultDatabaseStrategy:
# standard:
# shardingColumn: user_id
# shardingAlgorithmName: database_inline
# defaultTableStrategy:
# none:
#
# shardingAlgorithms:
# database_inline:
# type: INLINE
# props:
# algorithm-expression: ds_${user_id % 2}
# t_order_inline:
# type: INLINE
# props:
# algorithm-expression: t_order_${order_id % 2}
# t_order_item_inline:
# type: INLINE
# props:
# algorithm-expression: t_order_item_${order_id % 2}
#
# keyGenerators:
# snowflake:
# type: SNOWFLAKE
#
# scalingName: default_scaling
# scaling:
# default_scaling:
# input:
# workerThread: 40
# batchSize: 1000
# output:
# workerThread: 40
# batchSize: 1000
# streamChannel:
# type: MEMORY
# props:
# block-queue-size: 10000
# completionDetector:
# type: IDLE
# props:
# incremental-task-idle-minute-threshold: 30
# dataConsistencyChecker:
# type: DATA_MATCH
# props:
# chunk-size: 1000
######################################################################################################
#
# If you want to connect to MySQL, you should manually copy MySQL driver to lib directory.
#
######################################################################################################
schemaName: sharding_db

dataSources:
  ds_0:
    url: jdbc:mysql://172.31.51.249:3301/db_user?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_1:
    url: jdbc:mysql://172.31.51.249:3302/db_order?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_1.t_order
    t_user:
      actualDataNodes: ds_0.t_user
# tableStrategy:
# standard:
# shardingColumn: order_id
# shardingAlgorithmName: t_order_inline
# keyGenerateStrategy:
# column: order_id
# keyGeneratorName: snowflake
# t_order_item:
# actualDataNodes: ds_${0..1}.t_order_item_${0..1}
# tableStrategy:
# standard:
# shardingColumn: order_id
# shardingAlgorithmName: t_order_item_inline
# keyGenerateStrategy:
# column: order_item_id
# keyGeneratorName: snowflake
# bindingTables:
# - t_order,t_order_item
# defaultDatabaseStrategy:
# standard:
# shardingColumn: user_id
# shardingAlgorithmName: database_inline
# defaultTableStrategy:
# none:
#
# shardingAlgorithms:
# database_inline:
# type: INLINE
# props:
# algorithm-expression: ds_${user_id % 2}
# t_order_inline:
# type: INLINE
# props:
# algorithm-expression: t_order_${order_id % 2}
# t_order_item_inline:
# type: INLINE
# props:
# algorithm-expression: t_order_item_${order_id % 2}
#
# keyGenerators:
# snowflake:
# type: SNOWFLAKE
#
# scalingName: default_scaling
# scaling:
# default_scaling:
# input:
# workerThread: 40
# batchSize: 1000
# output:
# workerThread: 40
# batchSize: 1000
# streamChannel:
# type: MEMORY
# props:
# block-queue-size: 10000
# completionDetector:
# type: IDLE
# props:
# incremental-task-idle-minute-threshold: 30
# dataConsistencyChecker:
# type: DATA_MATCH
# props:
# chunk-size: 1000
Note: delete the read/write-splitting configuration file first, then restart the Docker container.
2. Test
select * from t_order;
select * from t_user;
If you test with the Boot project, remember to change the database name in the MySQL connection URL.
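The vertical-sharding rule above simply pins each logical table to a single data source. A hypothetical sketch of that routing decision (table and data-source names taken from the config above; this is an illustration, not the proxy's actual code):

```java
import java.util.Map;

// Illustration of vertical sharding: each logical table maps to exactly one data source.
public class VerticalRouting {
    // Mirrors the actualDataNodes entries in the config above.
    static final Map<String, String> TABLE_TO_DATA_SOURCE = Map.of(
            "t_user", "ds_0",   // db_user on port 3301
            "t_order", "ds_1"); // db_order on port 3302

    public static String route(String logicalTable) {
        return TABLE_TO_DATA_SOURCE.get(logicalTable);
    }

    public static void main(String[] args) {
        System.out.println(route("t_user"));  // ds_0
        System.out.println(route("t_order")); // ds_1
    }
}
```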
6. Configure horizontal sharding
Create a config-sharding.yaml file in the conf directory:
schemaName: sharding_db

dataSources:
  ds_user:
    url: jdbc:mysql://172.31.51.249:3301/db_user?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_order0:
    url: jdbc:mysql://172.31.51.249:3310/db_order?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_order1:
    url: jdbc:mysql://172.31.51.249:3311/db_order?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_order${0..1}.t_order${0..1}
      databaseStrategy:
        standard:
          shardingColumn: user_id
          shardingAlgorithmName: alg_mod
      tableStrategy:
        standard:
          shardingColumn: order_no
          shardingAlgorithmName: alg_hash_mod
      keyGenerateStrategy:
        column: id
        keyGeneratorName: alg_snowflake
    t_order_item:
      actualDataNodes: ds_order${0..1}.t_order_item${0..1}
      databaseStrategy:
        standard:
          shardingColumn: user_id
          shardingAlgorithmName: alg_mod
      tableStrategy:
        standard:
          shardingColumn: order_no
          shardingAlgorithmName: alg_hash_mod
      keyGenerateStrategy:
        column: id
        keyGeneratorName: alg_snowflake
    t_user:
      actualDataNodes: ds_user.t_user
# tableStrategy:
# standard:
# shardingColumn: order_id
# shardingAlgorithmName: t_order_inline
# keyGenerateStrategy:
# column: order_id
# keyGeneratorName: snowflake
# t_order_item:
# actualDataNodes: ds_${0..1}.t_order_item_${0..1}
# tableStrategy:
# standard:
# shardingColumn: order_id
# shardingAlgorithmName: t_order_item_inline
# keyGenerateStrategy:
# column: order_item_id
# keyGeneratorName: snowflake
  bindingTables:
    - t_order,t_order_item
  broadcastTables:
    - t_dict
# defaultDatabaseStrategy:
# standard:
# shardingColumn: user_id
# shardingAlgorithmName: database_inline
# defaultTableStrategy:
# none:
#
  shardingAlgorithms:
    alg_mod:
      type: MOD
      props:
        sharding-count: 2
    alg_hash_mod:
      type: HASH_MOD
      props:
        sharding-count: 2
# t_order_item_inline:
# type: INLINE
# props:
# algorithm-expression: t_order_item_${order_id % 2}
#
  keyGenerators:
    alg_snowflake:
      type: SNOWFLAKE
#
# scalingName: default_scaling
# scaling:
# default_scaling:
# input:
# workerThread: 40
# batchSize: 1000
# output:
# workerThread: 40
# batchSize: 1000
# streamChannel:
# type: MEMORY
# props:
# block-queue-size: 10000
# completionDetector:
# type: IDLE
# props:
# incremental-task-idle-minute-threshold: 30
# dataConsistencyChecker:
# type: DATA_MATCH
# props:
# chunk-size: 1000
Test:
mysql> show databases;
mysql> use sharding_db;
mysql> show tables;
mysql> select * from t_order; -- test horizontal sharding
mysql> select * from t_dict; -- test the broadcast table
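With the rule above you can work out by hand where a row lands: the MOD algorithm picks the database from user_id % 2, and HASH_MOD picks the table from the hash of order_no modulo 2. A hypothetical sketch that mimics (but does not reproduce) those two algorithms:

```java
// Illustrative routing calculation for the sharding rule above.
// Mimics the ideas behind ShardingSphere's MOD and HASH_MOD; not the proxy's actual code.
public class HorizontalRouting {
    static final int SHARDING_COUNT = 2; // matches sharding-count: 2 in the config

    // MOD on user_id chooses the database: ds_order0 or ds_order1.
    public static String dataSource(long userId) {
        return "ds_order" + (userId % SHARDING_COUNT);
    }

    // HASH_MOD on order_no chooses the table: t_order0 or t_order1.
    public static String table(String orderNo) {
        long hash = Math.abs((long) orderNo.hashCode());
        return "t_order" + (hash % SHARDING_COUNT);
    }

    public static void main(String[] args) {
        System.out.println(dataSource(1)); // ds_order1
        System.out.println(dataSource(2)); // ds_order0
        System.out.println(table("SS0001"));
    }
}
```

Rows of t_order and t_order_item with the same user_id and order_no land in the same database and table suffix, which is why the two tables can be declared as bindingTables.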
References
This article draws on the ShardingSphere official documentation and the Atguigu ShardingSphere course on Bilibili.