ShardingSphere-Proxy: Sharding and Read/Write Splitting Basics (v5.4.1)

This article demonstrates ShardingSphere-Proxy through the following seven examples:

        1. Read/write splitting

        2. Table sharding without database sharding

        3. Database and table sharding (also demonstrating federation queries)

        4. Table sharding without database sharding + read/write splitting

        5. Database and table sharding + read/write splitting

        6. Broadcast tables

        7. Binding tables

Criteria for splitting a table

1. The table is larger than 2 GB;

2. The table has more than 10 million rows and is accessed in simple ways such as single-table lookups by primary key;

3. The table has more than 5 million rows and is accessed through small range queries (result sets under 100 rows);

4. The table has more than 2 million rows and is accessed through complex patterns such as multi-table joins, range queries, ORDER BY, GROUP BY, or high-frequency DML;

5. The data has a time-based expiration characteristic.

PS: If any one of the criteria above is met, the table should be split.

PS2: The rule from the Alibaba Java development manual:
    Splitting into multiple databases/tables is only recommended once a single table exceeds 5 million rows or 2 GB in size.
    Note: If the data volume is not expected to reach that level even three years out, do not shard at table-creation time.

Common sharding tools

- Client library: ShardingSphere-JDBC
- Middleware: MyCat, ShardingSphere-Proxy

Installing and configuring MySQL master-slave replication

        Read/write splitting depends on MySQL master-slave replication, and ShardingSphere does not synchronize data between the primary and its replicas itself, so you need to set up MySQL replication on your own.

        This section sets up a one-master, two-slave MySQL 5.7.32 cluster on Windows as an example (master port: 3308, slave1 port: 3309, slave2 port: 3310). Setting up replication on Linux is much the same, except the configuration file is my.cnf. The installation and configuration steps are as follows:

1. Unzip mysql-5.7.32-winx64.zip

2. Create and edit my.ini in each instance's root directory (mind the port number and the two paths below):

PS: The master needs the log-bin setting and a server-id; the slaves only need a server-id (server-ids must be unique!).

PS2: Once the master enables binlog as configured below, binlog files such as mysql-bin.000001 and mysql-bin.000002 will show up in its data directory.


--- MySQL master configuration:

[mysqld]
port = 3308
basedir=D:\mysql-5.7.32-winx64-master
datadir=D:\mysql-5.7.32-winx64-master\data
max_connections=200
character-set-server=utf8
default-storage-engine=INNODB
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

log-bin=mysql-bin  # enable binary logging
server-id=200      # set the server-id

# Once binlog is enabled, the master records operations on all databases and tables by default.
# You can restrict it to specific databases (or even specific tables) with the options below.

# Databases NOT to replicate (ignore the mysql system databases):
# binlog-ignore-db = mysql
# binlog-ignore-db = test
# binlog-ignore-db = information_schema

# Only replicate these databases; everything else is ignored:
# binlog-do-db=test

[mysql]
default-character-set=utf8


--- MySQL slave1 configuration:

[mysqld]
port = 3309
basedir=D:\mysql-5.7.32-winx64-slave1
datadir=D:\mysql-5.7.32-winx64-slave1\data
max_connections=200
character-set-server=utf8
default-storage-engine=INNODB
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

server-id=201      # set the server-id

[mysql]
default-character-set=utf8


--- MySQL slave2 configuration:
[mysqld]
port = 3310
basedir=D:\mysql-5.7.32-winx64-slave2
datadir=D:\mysql-5.7.32-winx64-slave2\data
max_connections=200
character-set-server=utf8
default-storage-engine=INNODB
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

server-id=202      # set the server-id

[mysql]
default-character-set=utf8


3. Register the MySQL services:

Open a command prompt as administrator, change into each MySQL bin directory, and run:

cd D:\mysql-5.7.32-winx64-master\bin

mysqld install mysql-master --defaults-file="D:\mysql-5.7.32-winx64-master\my.ini"


cd D:\mysql-5.7.32-winx64-slave1\bin

mysqld install mysql-slave1 --defaults-file="D:\mysql-5.7.32-winx64-slave1\my.ini"


cd D:\mysql-5.7.32-winx64-slave2\bin

mysqld install mysql-slave2 --defaults-file="D:\mysql-5.7.32-winx64-slave2\my.ini"


4. Initialize the databases:

Run the following in each of the three MySQL bin directories (if initialization does not pick up the right my.ini automatically, pass it explicitly with --defaults-file):

mysqld --initialize

5. Start the MySQL services:

net start mysql-master

net start mysql-slave1

net start mysql-slave2

6. Initialize the root passwords:

Find the temporary password in each instance's data/xxx.err file (search for "A temporary password is generated for"; the password is at the end of that line), then log in with it:

mysql -P3308 -u root -p

mysql -P3309 -u root -p

mysql -P3310 -u root -p


Change the root password:
set password for root@localhost=password('8888');


7. Log in to the MySQL master and create the account used for replication:

GRANT REPLICATION SLAVE,REPLICATION CLIENT ON *.* TO 'mysync'@'%' identified by '8888'; 

FLUSH PRIVILEGES;

8. On the MySQL master, check the master status and note the binlog file name and position:

show master status;

PS: You mainly need the values of the File and Position columns.
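For reference, the output looks something like the following (the file name and position shown here are only illustrative; use whatever your own master reports, since these are exactly the values plugged into CHANGE MASTER TO in the next step):

+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000002 |     1505 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+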

9. Log in to slave1 and slave2, configure each of them with the values from the previous step, and start replication:

change master to master_host='127.0.0.1',master_port=3308,master_user='mysync',master_password='8888',master_log_file='mysql-bin.000002',master_log_pos=1505;

start slave;


10. Check the replication status on the slaves:

show slave status\G

PS: Pay attention to the following fields; in particular, Slave_IO_Running and Slave_SQL_Running must both be Yes:

                Slave_IO_State: Waiting for master to send event
                   Master_Host: 127.0.0.1
                   Master_User: mysync
                   Master_Port: 3308
               Master_Log_File: mysql-bin.000002
           Read_Master_Log_Pos: 2027
              Slave_IO_Running: Yes
             Slave_SQL_Running: Yes


Note:
If Slave_IO_Running or Slave_SQL_Running shows No, run the following commands and then start replication again:

stop slave;

set GLOBAL SQL_SLAVE_SKIP_COUNTER=1;

start slave;


If the problem persists, stop replication, re-run CHANGE MASTER TO, and start replication again. Make sure the host, port, user, binlog file, and position match your own master (the values below are only placeholders; take the file and position from SHOW MASTER STATUS). Note that the STOP/START REPLICA syntax only exists in MySQL 8.0.22+; on MySQL 5.7 use the SLAVE form:

stop slave;

change master to master_host='127.0.0.1',master_port=3308,master_user='mysync',master_password='8888',master_log_file='mysql-bin.000004',master_log_pos=158;

start slave;

Introduction to ShardingSphere-Proxy

        Apache ShardingSphere consists of two products, ShardingSphere-JDBC and ShardingSphere-Proxy, which can be deployed independently or together in a hybrid deployment.
They both provide standardized, incremental capabilities on top of databases used as storage nodes, and they fit a wide range of scenarios such as Java-homogeneous applications, heterogeneous languages, and cloud-native deployments.

        ShardingSphere-JDBC is positioned as a lightweight Java framework that provides additional services at Java's JDBC layer.
The client connects directly to the database; it ships as a jar with no extra deployment or dependencies, and can be thought of as an enhanced JDBC driver, fully compatible with JDBC and all kinds of ORM frameworks.

        ShardingSphere-Proxy is positioned as a transparent database proxy. By implementing the database binary protocols, it supports heterogeneous languages. (ShardingSphere-Proxy is positioned similarly to MyCat.)
        It currently provides the MySQL and PostgreSQL protocols, makes database operations transparent, and is friendlier to DBAs:
        - Completely transparent to applications; it can be used directly as if it were MySQL/PostgreSQL;
        - Compatible with MariaDB and other MySQL-protocol databases, and with openGauss and other PostgreSQL-protocol databases;
        - Works with any client that speaks the MySQL/PostgreSQL protocol, e.g. MySQL Command Client, MySQL Workbench, Navicat, etc.

Note: ShardingSphere currently does not support CASE WHEN, HAVING, or UNION (ALL), and has only limited support for subqueries.

Official links

- Website: https://shardingsphere.apache.org/index_zh.html

- Documentation: https://shardingsphere.apache.org/document/current/cn/overview/

- Data sharding algorithms:
    https://shardingsphere.apache.org/document/current/cn/dev-manual/sharding/

- Load-balancing algorithms:
    https://shardingsphere.apache.org/document/current/cn/user-manual/common-config/builtin-algorithm/load-balance/

- Metadata persistence:
    https://shardingsphere.apache.org/document/current/cn/user-manual/common-config/builtin-algorithm/metadata-repository/

- Federation queries:
    When the tables in a join are spread across different database instances, enabling federation queries allows cross-database joins as well as subqueries.
    https://shardingsphere.apache.org/document/5.4.1/cn/user-manual/shardingsphere-proxy/yaml-config/rules/

ShardingSphere-Proxy installation

ShardingSphere-Proxy can currently be obtained in the following ways:
    - Binary distribution
    - Docker
    - Helm

This article uses the binary distribution (I installed the Proxy into CentOS 7 running in a local VM):

1. Unpack apache-shardingsphere-5.4.1-shardingsphere-proxy-bin.tar.gz into a directory of your choice:

        I put it under /home/apache-shardingsphere-5.4.1-shardingsphere-proxy-bin

2. Put the MySQL driver jar into the lib or ext-lib directory under the installation root:
    (If you use ext-lib, you have to create that directory yourself!)

3. Edit the configuration files:

        The Proxy's key configuration files all live in the conf directory.

        For read/write splitting and sharding, the relevant files are:

        server.yaml: global configuration (governance, authority, proxy properties, etc.)

        config-readwrite-splitting.yaml: read/write splitting configuration

        config-sharding.yaml: sharding configuration

For example, the server.yaml I use for these examples looks like this:

mode:
  type: Standalone
  repository:
    type: JDBC

authority:
  users:
    - user: root@%
      password: 8888
    - user: sharding
      password: sharding
  privilege:
    type: ALL_PERMITTED

props:
  system-log-level: INFO
  max-connections-size-per-query: 1
  kernel-executor-size: 16  # Infinite by default.
  proxy-frontend-flush-threshold: 128  # The default value is 128.
  sql-show: true

PS: These examples use a Standalone deployment; the Proxy also supports Cluster deployment and can be integrated with ZooKeeper.

4. Commands to start and stop the Proxy:

- Start:

/home/apache-shardingsphere-5.4.1-shardingsphere-proxy-bin/bin/start.sh 3306

(3306 is the port the Proxy listens on; change it as needed)

- Stop:

/home/apache-shardingsphere-5.4.1-shardingsphere-proxy-bin/bin/stop.sh

With the Proxy installed, we can move on to the examples.
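PS: Every example below starts with "switch to the Proxy's logical database". To do that, connect to the Proxy with any MySQL client exactly as you would to a normal MySQL server, using the port chosen at startup and an account from server.yaml, then USE the logical database of the example. A minimal sketch (replace <proxy_host> with the address of the machine running the Proxy, in my case the CentOS 7 VM):

mysql -h <proxy_host> -P3306 -u root -p

use sharding_db;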

Example 1: Read/write splitting


Test environment

- MySQL one-master, two-slave cluster (replication already configured):

master: 192.168.88.161:3311

slave1: 192.168.88.161:3312

slave2: 192.168.88.161:3313

- Database name: read-write-db

- SQL for the test table:

CREATE TABLE `xm`  (
  `id` bigint(20) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

- Structure of the read/write-splitting example: one Proxy logical database, with writes routed to the master and reads load-balanced across the two slaves.


ShardingSphere-Proxy configuration

Read/write splitting is configured in config-readwrite-splitting.yaml.

config-readwrite-splitting.yaml:

schemaName: readwrite_splitting_db

dataSources:
  write_ds:
    url: jdbc:mysql://192.168.88.161:3311/read-write-db?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_0:
    url: jdbc:mysql://192.168.88.161:3312/read-write-db?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_1:
    url: jdbc:mysql://192.168.88.161:3313/read-write-db?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !READWRITE_SPLITTING
  dataSources:
    readwrite_ds:
      writeDataSourceName: write_ds
      readDataSourceNames:
        - read_ds_0
        - read_ds_1
      loadBalancerName: random
  loadBalancers:
    random:
      type: RANDOM

- !SINGLE
  tables: 
    - "*.*"


Key configuration points:

- schemaName: the name of the Proxy logical database, here readwrite_splitting_db.

- dataSources: the data sources used; one master and two slaves are configured here, and write_ds, read_ds_0 and read_ds_1 are user-chosen data source names.

- rules: rule configuration:

        - !READWRITE_SPLITTING: declares a read/write-splitting rule.

        readwrite_ds: the user-chosen name of the read/write-splitting data source.

        writeDataSourceName: the data source used for writes.

        readDataSourceNames: the data sources used for reads.

        loadBalancerName: the user-chosen load-balancer name (referenced by the loadBalancers section below; yes, that lowercase "random").

        loadBalancers: the load-balancer definitions; type: RANDOM selects the random algorithm.

Note 1:

        In version 5.4.1, tables not covered by any rule are not automatically added to the ShardingSphere logical database, so you have to register some or all single tables into the logical database as described in the documentation below.

(If you skip this, you will get "table does not exist" errors when querying through the Proxy!)
    https://shardingsphere.apache.org/document/5.4.1/cn/user-manual/shardingsphere-jdbc/yaml-config/rules/single/

The easiest way is to add the following single-table rule to the configuration file:
- !SINGLE
  tables: 
    - "*.*"

Note 2:

        The load-balancing strategy used here is RANDOM; the built-in algorithms also include ROUND_ROBIN and WEIGHT, see:
https://shardingsphere.apache.org/document/5.4.1/cn/user-manual/common-config/builtin-algorithm/load-balance/

 For example, a read/write-splitting rule using ROUND_ROBIN looks like this:

rules:
- !READWRITE_SPLITTING
  dataSources:
    readwrite_ds:
      writeDataSourceName: write_ds
      readDataSourceNames:
        - read_ds_0
        - read_ds_1
      loadBalancerName: round_robin
  loadBalancers:
    round_robin:
      type: ROUND_ROBIN

And a read/write-splitting rule using WEIGHT looks like this:

rules:
- !READWRITE_SPLITTING
  dataSources:
    readwrite_ds:
      writeDataSourceName: write_ds
      readDataSourceNames:
        - read_ds_0
        - read_ds_1
      loadBalancerName: weight
      transactionalReadQueryStrategy: PRIMARY
  loadBalancers:
    weight:
      type: WEIGHT
      props:
        read_ds_0: 1
        read_ds_1: 10

Important:

        When testing read/write splitting, do not judge the load-balancing algorithm by running SELECT statements repeatedly from Navicat connected to the Proxy. I initially tested with Navicat and ran into the issue below; testing with a JDBC program connected to the Proxy did not show the problem.

I am not sure whether this is a bug. The official docs list three load-balancing strategies: RANDOM, WEIGHT and ROUND_ROBIN.
        In my tests, only ROUND_ROBIN failed to take effect for read/write splitting (with multiple read sources configured, only the first read source was ever used).

 

Testing the results

1. Switch to the ShardingSphere-Proxy logical database readwrite_splitting_db

2. Insert data:
insert into xm values(1,'项目1');
insert into xm values(2,'项目2');
insert into xm values(3,'项目3');
insert into xm values(4,'项目4');
insert into xm values(5,'项目5');

3. Query the data:
select * from xm;
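To double-check that read/write splitting and replication are working together, you can also query the three MySQL instances directly, bypassing the Proxy (hosts/ports as in the test environment above):

-- on the master (192.168.88.161:3311): the rows inserted through the Proxy land here
select * from `read-write-db`.xm;

-- on slave1 (3312) and slave2 (3313): the same rows should appear once replication catches up,
-- and these are the copies that SELECTs through the Proxy are actually served from
select * from `read-write-db`.xm;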

Example 2: Table sharding without database sharding


Test environment

MySQL:

192.168.88.161:3306

- Database name: sharding-test

- SQL for the test tables:

CREATE TABLE `xm_0`  (
  `id` bigint(20) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

CREATE TABLE `xm_1`  (
  `id` bigint(20) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

- Structure of this example: one logical table xm backed by the physical tables xm_0 and xm_1 in a single database.

ShardingSphere-Proxy configuration

Table sharding without database sharding is configured in config-sharding.yaml.

config-sharding.yaml:

databaseName: sharding_db

dataSources:
  ds_0:
    url: jdbc:mysql://192.168.88.161:3306/sharding-test?serverTimezone=UTC&useSSL=false
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !SHARDING
  tables:
    xm:
      actualDataNodes: ds_0.xm_${0..1}
      tableStrategy:
        standard:
          shardingColumn: id
          shardingAlgorithmName: xm_inline
      keyGenerateStrategy:
          column: id
          keyGeneratorName: snowflake
  defaultDatabaseStrategy:
    none:
  defaultTableStrategy:
    none:
 
  shardingAlgorithms:
    xm_inline:
      type: INLINE
      props:
        algorithm-expression: xm_${id % 2}
 
  keyGenerators:
    snowflake:
      type: SNOWFLAKE

Key configuration points:

- databaseName: the name of the Proxy logical database, here sharding_db.

- dataSources: the data sources used.

- rules: rule configuration:

        - !SHARDING: declares a sharding rule.

        tables: the tables to be sharded.

        xm: the logical table being sharded.

        actualDataNodes: the actual data nodes backing the logical table.

        tableStrategy: the table-sharding strategy.

        shardingColumn: the sharding column.

        shardingAlgorithmName: the sharding algorithm name (xm_inline refers to the entry defined under shardingAlgorithms below).

        keyGenerateStrategy: the key-generation strategy (column is the key column; keyGeneratorName refers to the entry defined under keyGenerators below, here snowflake).

        shardingAlgorithms: the sharding algorithm definitions.

        keyGenerators: the key generator definitions; the snowflake algorithm is used here.

Testing the results

1. Switch to the ShardingSphere-Proxy logical database sharding_db

2. Insert data:
insert into xm(name) values('项目1');
insert into xm(name) values('项目2');
insert into xm(name) values('项目3');
insert into xm(name) values('项目4');

3. Query the data:
select * from xm;
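The inline expression xm_${id % 2} routes each row by its snowflake-generated id: rows with an even id go to xm_0 and rows with an odd id go to xm_1. To see where the rows actually landed, query the physical tables directly on MySQL (192.168.88.161:3306) rather than through the Proxy:

select * from `sharding-test`.xm_0;
select * from `sharding-test`.xm_1;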

Example 3: Database and table sharding (with a federation query example)


Test environment

- MySQL:

192.168.88.161:3306

- Database names: sharding-test_0, sharding-test_1

- SQL for the test tables:

CREATE TABLE `student_0`  (
  `id` bigint(20) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `age` int(11) NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

CREATE TABLE `student_1`  (
  `id` bigint(20) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `age` int(11) NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

CREATE TABLE `xm`  (
  `id` bigint(20) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

CREATE TABLE `ht`  (
  `id` bigint(20) NOT NULL,
  `htname` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `price` decimal(10, 2) NULL DEFAULT NULL,
  `xm_id` bigint(20) NOT NULL,
  `fzr` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

ShardingSphere-Proxy configuration

Database/table sharding with federation queries involves two configuration files: server.yaml and config-sharding.yaml

server.yaml:

mode:
  type: Standalone
  repository:
    type: JDBC

authority:
  users:
    - user: root@%
      password: 8888
    - user: sharding
      password: sharding
  privilege:
    type: ALL_PERMITTED

sqlFederation:
  sqlFederationEnabled: true
  executionPlanCache:
    initialCapacity: 2000
    maximumSize: 65535

props:
  system-log-level: INFO
  max-connections-size-per-query: 1
  kernel-executor-size: 16  # Infinite by default.
  proxy-frontend-flush-threshold: 128  # The default value is 128.
  sql-show: true


Note: Cross-database joins (federation queries) require the following configuration:

sqlFederation:
  sqlFederationEnabled: true
  executionPlanCache:
    initialCapacity: 2000
    maximumSize: 65535

config-sharding.yaml:

databaseName: sharding_db

dataSources:
  ds_0:
    url: jdbc:mysql://192.168.88.161:3306/sharding-test_0?serverTimezone=UTC&useSSL=false
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_1:
    url: jdbc:mysql://192.168.88.161:3306/sharding-test_1?serverTimezone=UTC&useSSL=false
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !SHARDING
  tables:
    student:
      actualDataNodes: ds_${0..1}.student_${0..1}
      tableStrategy:
        standard:
          shardingColumn: age
          shardingAlgorithmName: student_inline
      keyGenerateStrategy:
          column: id
          keyGeneratorName: snowflake
  defaultDatabaseStrategy:
    standard:
      shardingColumn: id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    none:
 
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: ds_${id % 2}
    student_inline:
      type: INLINE
      props:
        algorithm-expression: student_${age % 2}
 
  keyGenerators:
    snowflake:
      type: SNOWFLAKE

- !SINGLE
  tables: 
    - "*.*"

Note: In the configuration above, the id column is used for database sharding and the age column for table sharding purely to demonstrate the effect; in practice the value of a sharding key should generally never be updated!

Testing the results

1. Switch to the ShardingSphere-Proxy logical database sharding_db

2. Insert data (for the sharding test):
insert into student(name,age) values('学生1',11);
insert into student(name,age) values('学生2',13);
insert into student(name,age) values('学生3',14);
insert into student(name,age) values('学生4',16);
insert into student(name,age) values('学生5',15);
insert into student(name,age) values('学生6',17);
insert into student(name,age) values('学生7',18);
insert into student(name,age) values('学生8',12);

3. Query the sharded data (see the verification queries after step 5 for a look at the physical tables):
select * from student

4. Insert data (for the cross-database join / federation query test):
INSERT INTO `xm` VALUES (1, '项目1');
INSERT INTO `xm` VALUES (2, '项目2');
INSERT INTO `xm` VALUES (3, '项目3');
INSERT INTO `xm` VALUES (4, '项目4');

INSERT INTO `ht` VALUES (1, '合同1-1', 10.00, 1, 'A');
INSERT INTO `ht` VALUES (2, '合同1-2', 20.00, 1, 'B');
INSERT INTO `ht` VALUES (3, '合同2-1', 10.00, 2, 'C');
INSERT INTO `ht` VALUES (4, '合同2-2', 20.00, 2, 'D');
INSERT INTO `ht` VALUES (5, '合同2-3', 30.00, 2, 'B');

5. Run the cross-database join (federation query):
select a.name,b.htname,b.price,b.fzr 
from xm a left join ht b 
on a.id = b.xm_id
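To sanity-check the routing from the steps above, you can query the physical databases directly on 192.168.88.161:3306, bypassing the Proxy. With the rules above, each student row goes to sharding-test_0 or sharding-test_1 according to id % 2 and to student_0 or student_1 according to age % 2; the single tables xm and ht are picked up by the !SINGLE rule, and when they sit in different physical databases the join above becomes a cross-database (federated) query, which is why sqlFederation must be enabled in server.yaml. A quick check of the student distribution:

select * from `sharding-test_0`.student_0;
select * from `sharding-test_0`.student_1;
select * from `sharding-test_1`.student_0;
select * from `sharding-test_1`.student_1;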

Example 4: Table sharding (single database) + read/write splitting


Test environment

MySQL one-master, two-slave cluster (replication already configured):

master: 192.168.88.161:3308

slave1: 192.168.88.161:3309

slave2: 192.168.88.161:3310

- Database name: sharding-test

- SQL for the test tables:

CREATE TABLE `xm_0`  (
  `id` bigint(20) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

CREATE TABLE `xm_1`  (
  `id` bigint(20) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

ShardingSphere-Proxy configuration

Table sharding combined with read/write splitting is configured in config-readwrite-splitting.yaml.

config-readwrite-splitting.yaml:


schemaName: readwrite_splitting_db

dataSources:
  write_ds:
    url: jdbc:mysql://192.168.88.161:3308/sharding-test?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_0:
    url: jdbc:mysql://192.168.88.161:3309/sharding-test?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_1:
    url: jdbc:mysql://192.168.88.161:3310/sharding-test?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !READWRITE_SPLITTING
  dataSources:
    readwrite_ds:
      writeDataSourceName: write_ds
      readDataSourceNames:
        - read_ds_0
        - read_ds_1
      loadBalancerName: random
  loadBalancers:
    random:
      type: RANDOM

- !SHARDING
  tables:
    xm:
      actualDataNodes: readwrite_ds.xm_${0..1}
      tableStrategy:
        standard:
          shardingColumn: id
          shardingAlgorithmName: xm_inline
      keyGenerateStrategy:
          column: id
          keyGeneratorName: snowflake
  defaultDatabaseStrategy:
    none:
  defaultTableStrategy:
    none:
 
  shardingAlgorithms:
    xm_inline:
      type: INLINE
      props:
        algorithm-expression: xm_${id % 2}
 
  keyGenerators:
    snowflake:
      type: SNOWFLAKE

- !SINGLE
  tables: 
    - "*.*"

Key configuration points:

       Read/write splitting and table sharding are configured together here. The key points are:

        the - !READWRITE_SPLITTING rule defines a read/write-splitting data source named readwrite_ds,

        and the actualDataNodes value in the - !SHARDING rule refers to that readwrite_ds data source.

Testing the results

1. Switch to the ShardingSphere-Proxy logical database readwrite_splitting_db

2. Insert data:
insert into xm(name) values('项目1');
insert into xm(name) values('项目2');
insert into xm(name) values('项目3');
insert into xm(name) values('项目4');
insert into xm(name) values('项目5');
insert into xm(name) values('项目6');
insert into xm(name) values('项目7');
insert into xm(name) values('项目8');
insert into xm(name) values('项目9');
insert into xm(name) values('项目10');

3. Query the data:
select * from xm;
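In this combined setup, every INSERT above goes to the master (3308) and is split between xm_0 and xm_1 by id % 2, while SELECTs through the Proxy are load-balanced across the two slaves. To verify both halves, query the physical tables directly:

-- on the master (192.168.88.161:3308): rows distributed by id % 2
select * from `sharding-test`.xm_0;
select * from `sharding-test`.xm_1;

-- on slave1 (3309) or slave2 (3310): the same rows after replication
select * from `sharding-test`.xm_0;
select * from `sharding-test`.xm_1;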

Example 5: Database and table sharding + read/write splitting


Test environment

- MySQL one-master, two-slave cluster (replication already configured):

master: 192.168.88.161:3311

slave1: 192.168.88.161:3312

slave2: 192.168.88.161:3313

- Database names: sharding-test_0, sharding-test_1

- SQL for the test tables:

CREATE TABLE `student_0`  (
  `id` bigint(20) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `age` int(11) NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

CREATE TABLE `student_1`  (
  `id` bigint(20) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `age` int(11) NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

ShardingSphere-Proxy configuration

Database/table sharding combined with read/write splitting is configured in config-readwrite-splitting.yaml.

config-readwrite-splitting.yaml:


schemaName: readwrite_splitting_db

dataSources:
  write_ds_0:
    url: jdbc:mysql://192.168.88.161:3311/sharding-test_0?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  write_ds_1:
    url: jdbc:mysql://192.168.88.161:3311/sharding-test_1?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_0:
    url: jdbc:mysql://192.168.88.161:3312/sharding-test_0?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_1:
    url: jdbc:mysql://192.168.88.161:3313/sharding-test_0?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_2:
    url: jdbc:mysql://192.168.88.161:3312/sharding-test_1?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_3:
    url: jdbc:mysql://192.168.88.161:3313/sharding-test_1?characterEncoding=UTF-8&useUnicode=true&useSSL=false&tinyInt1isBit=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !READWRITE_SPLITTING
  dataSources:
    readwrite_ds_0:
      writeDataSourceName: write_ds_0
      readDataSourceNames:
        - read_ds_0
        - read_ds_1
      loadBalancerName: random
    readwrite_ds_1:
      writeDataSourceName: write_ds_1
      readDataSourceNames:
        - read_ds_2
        - read_ds_3
      loadBalancerName: random
  loadBalancers:
    random:
      type: RANDOM

- !SHARDING
  tables:
    student:
      actualDataNodes: readwrite_ds_${0..1}.student_${0..1}
      tableStrategy:
        standard:
          shardingColumn: age
          shardingAlgorithmName: student_inline
      keyGenerateStrategy:
          column: id
          keyGeneratorName: snowflake
  defaultDatabaseStrategy:
    standard:
      shardingColumn: id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    none:
 
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: readwrite_ds_${id % 2}
    student_inline:
      type: INLINE
      props:
        algorithm-expression: student_${age % 2}
 
  keyGenerators:
    snowflake:
      type: SNOWFLAKE

- !SINGLE
  tables: 
    - "*.*"

Testing the results

1. Switch to the ShardingSphere-Proxy logical database readwrite_splitting_db

2. Insert data:
insert into student(name,age) values('学生1',11);
insert into student(name,age) values('学生2',13);
insert into student(name,age) values('学生3',14);
insert into student(name,age) values('学生4',16);
insert into student(name,age) values('学生5',15);
insert into student(name,age) values('学生6',17);
insert into student(name,age) values('学生7',18);
insert into student(name,age) values('学生8',12);

3. Query the data:
select * from student
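Here the sharding rule routes each student row to readwrite_ds_0 or readwrite_ds_1 by id % 2 (i.e. into sharding-test_0 or sharding-test_1, written via the master on 3311) and to student_0 or student_1 by age % 2, while reads are served by the corresponding slaves. To check the distribution, query the master directly:

-- on the master (192.168.88.161:3311)
select * from `sharding-test_0`.student_0;
select * from `sharding-test_0`.student_1;
select * from `sharding-test_1`.student_0;
select * from `sharding-test_1`.student_1;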

Example 6: Broadcast tables

        A broadcast table is a table that exists in every sharded data source, with an identical structure and identical data in every database. It suits tables that hold little data but need to be joined with large, sharded tables, e.g. dictionary tables.

Broadcast tables have the following characteristics:

(1) Inserts and updates are executed on all nodes in real time, keeping the copies consistent across shards

(2) Queries are served from a single node

(3) They can be joined with any table

Test environment

- MySQL:

192.168.88.161:3306

- Database names: sharding-test_0, sharding-test_1

- SQL for the broadcast test table:

CREATE TABLE t_dict(
    id BIGINT,
    dict_type VARCHAR(200),
    PRIMARY KEY(id)
);

ShardingSphere-Proxy configuration

Broadcast tables are configured in config-sharding.yaml.

config-sharding.yaml:

databaseName: sharding_db

dataSources:
  ds_0:
    url: jdbc:mysql://192.168.88.161:3306/sharding-test_0?serverTimezone=UTC&useSSL=false
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_1:
    url: jdbc:mysql://192.168.88.161:3306/sharding-test_1?serverTimezone=UTC&useSSL=false
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !SHARDING
  tables:
    student:
      actualDataNodes: ds_${0..1}.student_${0..1}
      tableStrategy:
        standard:
          shardingColumn: age
          shardingAlgorithmName: student_inline
      keyGenerateStrategy:
          column: id
          keyGeneratorName: snowflake
  defaultDatabaseStrategy:
    standard:
      shardingColumn: id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    none:
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: ds_${id % 2}
    student_inline:
      type: INLINE
      props:
        algorithm-expression: student_${age % 2}
 
  keyGenerators:
    snowflake:
      type: SNOWFLAKE

- !BROADCAST
  tables: 
    - t_dict

- !SINGLE
  tables: 
    - "*.*"

Configuration notes:

        In older versions, broadcast tables could only live in the data sources used by sharded tables. To fix the problem that a single table and a broadcast table could not be joined when they were not on the same storage node, newer versions let broadcast tables use all data sources under the logical database, so the broadcast configuration was moved out of the sharding rule.

In the new version, using broadcast tables only requires a standalone BROADCAST rule:

- !BROADCAST
  tables: 
    - t_dict

Testing the results

1. Switch to the ShardingSphere-Proxy logical database sharding_db

2. Insert data:

insert into t_dict values(1,'Dict_A');
insert into t_dict values(2,'Dict_B');
insert into t_dict values(3,'Dict_C');
insert into t_dict values(4,'Dict_D');

3. Query the data:
select * from t_dict
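Since t_dict is a broadcast table, each INSERT above should have been executed against both physical databases. You can confirm this by querying them directly on 192.168.88.161:3306; both queries should return the same four rows:

select * from `sharding-test_0`.t_dict;
select * from `sharding-test_1`.t_dict;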

Example 7: Binding tables

        
        Binding tables are a group of sharded tables that share the same sharding rules.
        When binding tables are joined, the join must use the sharding key; otherwise the query degenerates into a Cartesian-product or cross-database join, which hurts query performance.
        For example, if t_order and t_order_item are both sharded by order_id and are joined on order_id, they are binding tables of each other.
        Joins between binding tables avoid the Cartesian product, so join performance improves significantly.

Test environment

- MySQL:

192.168.88.161:3306

- Database names: sharding-test_0, sharding-test_1

- SQL for the test tables:


CREATE TABLE t_order_0 (
  id BIGINT,
  order_no VARCHAR(30),
  user_id VARCHAR(30),
  amount DECIMAL(10,2),
  PRIMARY KEY(id) 
);

CREATE TABLE t_order_1 (
  id BIGINT,
  order_no VARCHAR(30),
  user_id VARCHAR(30),
  amount DECIMAL(10,2),
  PRIMARY KEY(id) 
);

CREATE TABLE t_order_item_0(
    id BIGINT,
    order_no VARCHAR(30),
    user_id VARCHAR(30),
    goods_no VARCHAR(30),
    price DECIMAL(10,2),
    `count` INT,
    PRIMARY KEY(id)
);

CREATE TABLE t_order_item_1(
    id BIGINT,
    order_no VARCHAR(30),
    user_id VARCHAR(30),
    goods_no VARCHAR(30),
    price DECIMAL(10,2),
    `count` INT,
    PRIMARY KEY(id)
);

ShardingSphere-Proxy configuration

Binding tables are configured in config-sharding.yaml.

config-sharding.yaml:

databaseName: sharding_db

dataSources:
  ds_0:
    url: jdbc:mysql://192.168.88.161:3306/sharding-test_0?serverTimezone=UTC&useSSL=false
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_1:
    url: jdbc:mysql://192.168.88.161:3306/sharding-test_1?serverTimezone=UTC&useSSL=false
    username: root
    password: 8888
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..1}
      databaseStrategy:
        standard:
          shardingColumn: order_no
          shardingAlgorithmName: inline_hash_mod_order_no
      tableStrategy:
        standard:
          shardingColumn: user_id
          shardingAlgorithmName: inline_hash_mod_order_user_id
      keyGenerateStrategy:
          column: id
          keyGeneratorName: snowflake
    t_order_item:
      actualDataNodes: ds_${0..1}.t_order_item_${0..1}
      databaseStrategy:
        standard:
          shardingColumn: order_no
          shardingAlgorithmName: inline_hash_mod_order_no
      tableStrategy:
        standard:
          shardingColumn: user_id
          shardingAlgorithmName: inline_hash_mod_order_item_user_id
      keyGenerateStrategy:
          column: id
          keyGeneratorName: snowflake
  bindingTables:
    - t_order,t_order_item
  shardingAlgorithms:
    inline_hash_mod_order_no:
      type: INLINE
      props:
        algorithm-expression: ds_${Math.abs(order_no.hashCode()) % 2}
    inline_hash_mod_order_user_id:
      type: INLINE
      props:
        algorithm-expression: t_order_${Math.abs(user_id.hashCode()) % 2}
    inline_hash_mod_order_item_user_id:
      type: INLINE
      props:
        algorithm-expression: t_order_item_${Math.abs(user_id.hashCode()) % 2}

  keyGenerators:
    snowflake:
      type: SNOWFLAKE

- !SINGLE
  tables: 
    - "*.*"

Note 1:
        Binding tables simply add the following on top of the sharding configuration:
  bindingTables:
    - t_order,t_order_item

Note 2:
        If federation queries are enabled in server.yaml, joins are executed through the federation engine regardless of whether binding tables are configured!

Testing the results

1. Switch to the ShardingSphere-Proxy logical database sharding_db

2. Insert data:


-- insert orders and order items for user zhangsan
insert into t_order(order_no,user_id,amount) values('NO202404150001','zhangsan',100);
insert into t_order(order_no,user_id,amount) values('NO202404150002','zhangsan',50);
insert into t_order(order_no,user_id,amount) values('NO202404150003','zhangsan',150);
insert into t_order(order_no,user_id,amount) values('NO202404150004','zhangsan',200);

insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150001','zhangsan','GD001',10,3);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150001','zhangsan','GD002',20,1);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150001','zhangsan','GD003',50,1);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150002','zhangsan','GD003',50,1);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150003','zhangsan','GD004',150,1);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150004','zhangsan','GD003',50,1);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150004','zhangsan','GD004',150,1);

-- insert orders and order items for user lisi
insert into t_order(order_no,user_id,amount) values('NO202404150005','lisi',30);
insert into t_order(order_no,user_id,amount) values('NO202404150006','lisi',50);
insert into t_order(order_no,user_id,amount) values('NO202404150007','lisi',20);
insert into t_order(order_no,user_id,amount) values('NO202404150008','lisi',100);

insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150005','lisi','GD001',10,1);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150005','lisi','GD002',20,1);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150006','lisi','GD003',50,1);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150007','lisi','GD002',20,1);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150008','lisi','GD001',10,5);
insert into t_order_item(order_no,user_id,goods_no,price,count) values('NO202404150008','lisi','GD003',50,1);

3. Query the data:

select a.order_no,a.amount,b.user_id,b.goods_no,b.price,b.count 
from t_order a inner join t_order_item b 
on a.order_no = b.order_no

- Without binding tables, the join produces a Cartesian product;

- With binding tables configured, there is no Cartesian product;

- With federation queries enabled, the join also returns the correct result.
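Why binding helps: both t_order and t_order_item are routed to a database by the same hash of order_no, so the items of an order always sit in the same physical database as the order itself, and the bound join can be executed shard by shard instead of combining every order shard with every item shard. You can see the co-location by querying the physical tables directly (which of the two databases a given order_no hashes to may vary, so check both):

-- rows for the same order_no appear in the same physical database
select * from `sharding-test_0`.t_order_0;
select * from `sharding-test_0`.t_order_1;
select * from `sharding-test_0`.t_order_item_0;
select * from `sharding-test_0`.t_order_item_1;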
