Environment
This walkthrough uses Docker 19.03.5, MySQL 5.7.24, Java 1.8.0_151, ShardingSphere 4.1.1, and mysql-connector 5.1.32, following the Proxy and Scaling sections of the official manual.
For installing MySQL under Docker, see the earlier post "Docker安装Mysql并配置".
Important: enable the MySQL binlog in my.cnf, otherwise no data will be migrated:
server-id=20
log-bin=mysql-bin
binlog_format=row
expire_logs_days=7
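After restarting MySQL, it is worth confirming that the binlog really is enabled and in row format, since Scaling silently migrates nothing otherwise:

```sql
-- Values should be ON and ROW once the my.cnf change takes effect
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
```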
Scaling approach
A double-write approach is used here: the original service keeps running and continues writing to the old database, while the same writes also go to the new databases.
ShardingScaling connects to the source database, reads out the existing data, and writes it into ShardingProxy; because the Proxy is configured with the new sharding rules, the data lands in the target databases according to those rules.
The example assumes a single database with a single table, scaled out to two databases with two tables each.
Getting started
Prepare data in the source database:
mysql -uroot -proot
create database orderdb;
use orderdb;
CREATE TABLE `t_order` (
`order_id` bigint(20) NOT NULL,
`user_id` int(11) NOT NULL,
`status` varchar(50) DEFAULT NULL,
`order_no` varchar(64) DEFAULT NULL,
PRIMARY KEY (`order_id`)
) ENGINE=InnoDB;
INSERT INTO `t_order` VALUES (1,2,'1','1111111'),(2,2,'1','2222222'),(3,2,'1','3333333'),(4,3,'1','4444444'),(5,3,'1','5555555');
Downloading and configuring ShardingProxy and Scaling
On the official download page, follow "Find all releases in the Archive repository", pick version 4.1.1, and download the corresponding proxy and scaling packages, plus mysql-connector 5.1.47.
After unpacking the proxy and scaling archives, copy the connector jar into the lib directory of each.
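The unpack-and-copy step looks roughly like this. The archive and jar names below are assumptions based on the 4.1.1 release naming; adjust them to what you actually downloaded:

```shell
# Unpack both distributions side by side
tar -zxf apache-shardingsphere-4.1.1-sharding-proxy-bin.tar.gz
tar -zxf apache-shardingsphere-4.1.1-sharding-scaling-bin.tar.gz
# Each process needs the MySQL driver on its own classpath
cp mysql-connector-java-5.1.47.jar apache-shardingsphere-4.1.1-sharding-proxy-bin/lib/
cp mysql-connector-java-5.1.47.jar apache-shardingsphere-4.1.1-sharding-scaling-bin/lib/
```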
ShardingProxy configuration
Prepare the target databases:
mysql -uroot -proot
create database order_0;
use order_0;
CREATE TABLE `t_order_0` (
`order_id` bigint(20) NOT NULL,
`user_id` int(11) NOT NULL,
`status` varchar(50) DEFAULT NULL,
`order_no` varchar(64) DEFAULT NULL,
PRIMARY KEY (`order_id`)
) ENGINE=InnoDB;
CREATE TABLE `t_order_1` (
`order_id` bigint(20) NOT NULL,
`user_id` int(11) NOT NULL,
`status` varchar(50) DEFAULT NULL,
`order_no` varchar(64) DEFAULT NULL,
PRIMARY KEY (`order_id`)
) ENGINE=InnoDB;
create database order_1;
use order_1;
CREATE TABLE `t_order_0` (
`order_id` bigint(20) NOT NULL,
`user_id` int(11) NOT NULL,
`status` varchar(50) DEFAULT NULL,
`order_no` varchar(64) DEFAULT NULL,
PRIMARY KEY (`order_id`)
) ENGINE=InnoDB;
CREATE TABLE `t_order_1` (
`order_id` bigint(20) NOT NULL,
`user_id` int(11) NOT NULL,
`status` varchar(50) DEFAULT NULL,
`order_no` varchar(64) DEFAULT NULL,
PRIMARY KEY (`order_id`)
) ENGINE=InnoDB;
Editing the configuration
conf/server.yaml
Make sure it contains an account and password; Scaling will use these credentials to connect to the Proxy:
authentication:
  users:
    root:
      password: 123456
conf/config-sharding.yaml
Configure the new sharding rules here. The Proxy exposes the logical database order and logical table t_order; the rows actually live in t_order_0 and t_order_1 of order_0 and order_1.
schemaName: order

dataSources:
  ds_0:
    url: jdbc:mysql://127.0.0.1:3306/order_0?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
  ds_1:
    url: jdbc:mysql://127.0.0.1:3306/order_1?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50

shardingRule:
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..1}
      databaseStrategy:
        inline:
          shardingColumn: user_id
          algorithmExpression: ds_${user_id % 2}
      tableStrategy:
        inline:
          shardingColumn: order_id
          algorithmExpression: t_order_${order_id % 2}
      keyGenerator:
        column: order_id
        type: SNOWFLAKE
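As a quick sanity check of the inline expressions above, the target data node for any row can be computed by hand; this throwaway snippet mirrors the two modulo expressions for one sample row:

```shell
# Mirrors ds_${user_id % 2} and t_order_${order_id % 2} from the rules above
user_id=2
order_id=3
echo "ds_$((user_id % 2)).t_order_$((order_id % 2))"
# prints ds_0.t_order_1
```

So a row with user_id 2 and order_id 3 is stored in t_order_1 of database order_0.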
Testing the Proxy
From the bin directory:
./start.sh
Once it starts, connect to the Proxy:
mysql -uroot -p123456 -h127.0.0.1 -P3307
Insert some data and check that it comes back correctly; you can also look into the order_0 and order_1 databases to see how the rows were distributed. Then delete the data so the migration result is easy to verify later.
use order;
INSERT INTO `t_order` VALUES (1,2,'1','1111111'),(2,2,'1','2222222'),(3,2,'1','3333333'),(4,3,'1','4444444'),(5,3,'1','5555555');
select * from t_order;
delete from t_order;
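Connected directly to MySQL on port 3306 (not through the Proxy), the five sample rows should be distributed as follows under the rules above; a sketch to run before the DELETE:

```sql
-- user_id 2 -> order_0, user_id 3 -> order_1; even order_id -> t_order_0, odd -> t_order_1
SELECT order_id FROM order_0.t_order_0;  -- 2
SELECT order_id FROM order_0.t_order_1;  -- 1, 3
SELECT order_id FROM order_1.t_order_0;  -- 4
SELECT order_id FROM order_1.t_order_1;  -- 5
```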
ShardingScaling configuration
Confirm the mysql-connector jar is present in lib.
From the bin directory:
./start.sh
Then check that the service is up:
curl -X GET http://localhost:8888/shardingscaling/job/list
# expected response:
{"success":true,"errorCode":0,"errorMsg":null,"model":[]}
Once that works, submit a migration job to move the data:
curl -X POST \
  http://localhost:8888/shardingscaling/job/start \
  -H 'content-type: application/json' \
  -d '{
    "ruleConfiguration": {
      "sourceDatasource": "ds_0: !!org.apache.shardingsphere.orchestration.core.configuration.YamlDataSourceConfiguration\n dataSourceClassName: com.zaxxer.hikari.HikariDataSource\n properties:\n jdbcUrl: jdbc:mysql://127.0.0.1:3306/orderdb?serverTimezone=UTC&useSSL=false\n username: root\n password: '\''root'\''\n connectionTimeout: 30000\n idleTimeout: 60000\n maxLifetime: 1800000\n maxPoolSize: 50\n minPoolSize: 1\n maintenanceIntervalMilliseconds: 30000\n readOnly: false\n",
      "sourceRule": "defaultDatabaseStrategy:\n inline:\n algorithmExpression: ds_0\n shardingColumn: user_id\ntables:\n t_order:\n actualDataNodes: ds_0.t_order\n keyGenerator:\n column: order_id\n type: SNOWFLAKE\n logicTable: t_order\n tableStrategy:\n inline:\n algorithmExpression: t_order\n shardingColumn: order_id\n",
      "destinationDataSources": {
        "name": "dt_0",
        "password": "123456",
        "url": "jdbc:mysql://127.0.0.1:3307/order?serverTimezone=UTC&useSSL=false",
        "username": "root"
      }
    },
    "jobConfiguration": {
      "concurrency": 1
    }
  }'
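After submitting, the job can be monitored through the Scaling REST API. The progress path below follows the 4.1.1 API; the job id 1 is a placeholder, use the id returned by the list endpoint:

```shell
# List jobs to find the job id, then poll that job's progress
curl -X GET http://localhost:8888/shardingscaling/job/list
curl -X GET http://localhost:8888/shardingscaling/job/progress/1
```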
Notes:
If the submission returns errors such as:
Datasources check failed!
Unknown system variable 'transaction_isolation'
they usually mean a Connection could not be obtained. Check logs/stdout.log for the cause,
or set a breakpoint at line 35 of AbstractDataSourceChecker in the source code to verify (check out the 4.1.1 tag of the source as well).
The official mysql-connector 5.1.47 kept failing against my local MariaDB 10.2.12; switching to the older 5.1.32 worked fine.
RuntimeException: Decode binlog event failed, errorCode: 1236,
Slave can not handle replication events with the checksum that master is …
For this one, try changing the binlog checksum:
set global binlog_checksum='NONE';
Also add the following to my.cnf so the setting takes effect automatically on the next restart, without repeating the step above:
binlog_checksum=NONE
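Whether the change took effect can be confirmed the same way as the other binlog settings:

```sql
SHOW VARIABLES LIKE 'binlog_checksum';  -- should now be NONE
```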
Check whether the data has been migrated, either through the Proxy or directly in the physical databases.
For future scale-outs, configure the new sharding rules in the Proxy and then submit the migration jobs in Scaling one by one. (It is a pity Scaling cannot directly reuse the existing ShardingJdbc rules as its sourceDataSource; that would be even more convenient.)