Testing read-write splitting:
[root@anedbtest01 conf]# cat schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
  <schema name="testdb" checkSQLschema="true" sqlMaxLimit="100">
    <table name="travelrecord" dataNode="dn1" autoIncrement="true" primaryKey="ID" />
    <table name="t1" dataNode="dn1" autoIncrement="true" primaryKey="ID" />
  </schema>
  <dataNode name="dn1" dataHost="shard" database="db1" />
  <dataHost name="shard" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
    <heartbeat>select user()</heartbeat>
    <writeHost host="hostM1" url="127.0.0.1:3306" user="root" password="123">
      <readHost host="hostS2" url="127.0.0.1:3307" user="root" password="123" />
    </writeHost>
  </dataHost>
</mycat:schema>
balance="0": all reads are sent to the writeHost (read-write splitting disabled).
balance="1": reads are randomly distributed across the readHosts of the current writeHost and the standby writeHosts.
balance="2": reads are randomly distributed across all writeHosts and readHosts in the dataHost.
balance="3": reads are randomly distributed across the readHosts of the current writeHost; the writeHost serves writes only.
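The four balance modes above differ only in which backends are candidates for a read. A minimal Python sketch of that selection logic (an illustration of the documented semantics, not MyCat's actual code; host names follow the schema.xml above):

```python
import random

def read_candidates(balance, write_hosts, read_hosts):
    """Return the hosts eligible to serve a read under the given balance mode."""
    active_write = write_hosts[0]           # current (first) writeHost
    standby_writes = write_hosts[1:]        # remaining writeHosts, if any
    if balance == 0:
        return [active_write]               # no read/write splitting
    if balance == 1:
        return read_hosts + standby_writes  # readHosts + standby writeHosts
    if balance == 2:
        return write_hosts + read_hosts     # every host in the dataHost
    if balance == 3:
        return read_hosts                   # readHosts only
    raise ValueError("unknown balance mode: %r" % balance)

def route_read(balance, write_hosts, read_hosts):
    """Pick one candidate at random, as the balance modes describe."""
    return random.choice(read_candidates(balance, write_hosts, read_hosts))

if __name__ == "__main__":
    writes, reads = ["hostM1"], ["hostS2"]
    for b in (0, 1, 2, 3):
        print(b, read_candidates(b, writes, reads))
```

With the single writeHost/readHost pair configured above, modes 1 and 3 behave the same; they diverge only when standby writeHosts exist.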
mysql> show @@heartbeat;
+--------+-------+-----------+------+---------+-------+--------+---------+--------------+---------------------+-------+
| NAME   | TYPE  | HOST      | PORT | RS_CODE | RETRY | STATUS | TIMEOUT | EXECUTE_TIME | LAST_ACTIVE_TIME    | STOP  |
+--------+-------+-----------+------+---------+-------+--------+---------+--------------+---------------------+-------+
| hostM1 | mysql | 127.0.0.1 | 3306 |       1 |     0 | idle   |       0 | 1,1,1        | 2018-09-05 16:44:36 | false |
| hostS2 | mysql | 127.0.0.1 | 3307 |       1 |     0 | idle   |       0 | 0,0,0        | 2018-09-05 16:44:36 | false |
+--------+-------+-----------+------+---------+-------+--------+---------+--------------+---------------------+-------+
2 rows in set (0.00 sec)
create table t1 (id bigint not null primary key,user_id varchar(100),date DATE, fee decimal);
Insert data on the master (3306):
insert into t1(id,user_id,date,fee) values(2,@@hostname,20180901,100);
insert into t1(id,user_id,date,fee) values(5000002,@@hostname,20180905,100);
Insert data directly on the slave (3307):
insert into t1(id,user_id,date,fee) values(3,@@port,20180901,100);
insert into t1(id,user_id,date,fee) values(5000003,@@port,20180905,100);
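The point of tagging rows with @@hostname and @@port is that a later SELECT reveals which backend served it: rows inserted on the master replicate to the slave, while rows inserted directly on the slave stay local. A small Python model of the expected data sets (ids only):

```python
# Rows inserted on the master (3306) replicate to the slave;
# rows inserted directly on the slave (3307) do not flow back.
master_rows = {2, 5000002}                   # inserted on 3306, tagged @@hostname
slave_local_rows = {3, 5000003}              # inserted on 3307, tagged @@port
slave_rows = master_rows | slave_local_rows  # slave = replicated + local

# A read returning 2 rows hit the master; 4 rows means it hit the slave.
print(len(master_rows), len(slave_rows))     # prints "2 4"
```

This is why the row count alone is enough to tell the two backends apart in the queries below.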
Query the data on the master:
mysql> select * from t1;
+---------+-------------+------------+------+
| id      | user_id     | date       | fee  |
+---------+-------------+------------+------+
|       2 | anedbtest01 | 2018-09-01 |  100 |
| 5000002 | anedbtest01 | 2018-09-05 |  100 |
+---------+-------------+------------+------+
2 rows in set (0.00 sec)
Query the data on the slave:
mysql> select * from t1;
+---------+-------------+------------+------+
| id      | user_id     | date       | fee  |
+---------+-------------+------------+------+
|       2 | anedbtest01 | 2018-09-01 |  100 |
|       3 | 3307        | 2018-09-01 |  100 |
| 5000002 | anedbtest01 | 2018-09-05 |  100 |
| 5000003 | 3307        | 2018-09-05 |  100 |
+---------+-------------+------------+------+
4 rows in set (0.00 sec)
Querying through MyCat, the result is the slave's data, which shows that read-write splitting is in effect (this requires a balance mode that routes reads to the readHost, e.g. balance="3"; with the balance="0" shown above, reads would stay on the writeHost):
[root@anedbtest01 bin]# /mnt/mysql5641/bin/mysql -uroot -p123 -P8066 -h127.0.0.1
mysql> select * from t1;
+---------+-------------+------------+------+
| id      | user_id     | date       | fee  |
+---------+-------------+------------+------+
|       2 | anedbtest01 | 2018-09-01 |  100 |
|       3 | 3307        | 2018-09-01 |  100 |
| 5000002 | anedbtest01 | 2018-09-05 |  100 |
| 5000003 | 3307        | 2018-09-05 |  100 |
+---------+-------------+------------+------+
4 rows in set (0.00 sec)
After changing balance to "2" in schema.xml, queries issued through MyCat sometimes land on the master and sometimes on the slave:
[root@anedbtest01 bin]# /mnt/mysql5641/bin/mysql -uroot -p123 -P8066 -h127.0.0.1
mysql> select * from t1;
+---------+-------------+------------+------+
| id      | user_id     | date       | fee  |
+---------+-------------+------------+------+
|       2 | anedbtest01 | 2018-09-01 |  100 |
|       3 | 3307        | 2018-09-01 |  100 |
| 5000002 | anedbtest01 | 2018-09-05 |  100 |
| 5000003 | 3307        | 2018-09-05 |  100 |
+---------+-------------+------------+------+
4 rows in set (0.00 sec)
mysql> select * from t1;
+---------+-------------+------------+------+
| id      | user_id     | date       | fee  |
+---------+-------------+------------+------+
|       2 | anedbtest01 | 2018-09-01 |  100 |
| 5000002 | anedbtest01 | 2018-09-05 |  100 |
+---------+-------------+------------+------+
2 rows in set (0.01 sec)
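The alternating 4-row and 2-row result sets are consistent with balance="2" picking a random host in the dataHost for each read. A quick simulation of that behavior (assuming a uniform random choice, which is a simplification of MyCat's weighted selection; row counts taken from the outputs above):

```python
import random

# Row counts of t1 observed on each backend in the transcripts above.
rows_on = {"hostM1": 2, "hostS2": 4}

random.seed(7)  # fixed seed so the simulation is reproducible
counts = [rows_on[random.choice(sorted(rows_on))] for _ in range(50)]

# Over 50 simulated reads, both result-set sizes appear.
print(sorted(set(counts)))
```

This matches the transcript: the first SELECT happened to hit the slave (4 rows), the second hit the master (2 rows).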