Database cluster environment
Service | Port | Server | Container (role) |
MySql-pxc01 | 13306 | 192.168.142.130 | Node1 (master), shard 1 |
MySql-pxc02 | 13306 | 192.168.142.132 | Node2 (slave), shard 1 |
MySql-pxc03 | 13306 | 192.168.142.133 | Node1 (master), shard 2 |
MySql-pxc04 | 13306 | 192.168.142.134 | Node2 (slave), shard 2 |
MySql01 | 3306 | 192.168.142.130 | Master01 (master) |
MySql02 | 3306 | 192.168.142.133 | Slave01 (slave) |
Mycat01 | 8066, 9066 | 192.168.142.132 | mycat |
Mycat02 | 8066, 9066 | 192.168.142.134 | mycat |
HaProxy | 4001, 4002 | 192.168.142.130 | haproxy |
When inserting data into the PXC cluster through HAProxy (which forwards to MyCat), the following error is thrown:
java.lang.RuntimeException: org.springframework.jdbc.BadSqlGrammarException:
### Error updating database. Cause: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: bad insert sql (sharding column:ID not provided,INSERT INTO tb_house_resources (
In other words, the INSERT statement is rejected because it does not supply a value for the sharding column ID.
With sharding across databases and tables, each database's own auto-increment primary key can no longer guarantee global uniqueness. MyCat therefore provides a global sequence, with several implementations including a local-file mode and a database mode.
The fix is as follows:
1. In the physical database that MyCat connects to, create the MYCAT_SEQUENCE table and three functions: mycat_seq_currval, mycat_seq_nextval, and mycat_seq_setval.
The creation statements are:
DROP TABLE IF EXISTS MYCAT_SEQUENCE;
CREATE TABLE MYCAT_SEQUENCE (
NAME VARCHAR (50) NOT NULL,
current_value INT NOT NULL,
increment INT NOT NULL DEFAULT 100,
PRIMARY KEY (NAME)
) ENGINE = INNODB ;
DROP FUNCTION IF EXISTS `mycat_seq_currval`;
DELIMITER ;;
CREATE FUNCTION `mycat_seq_currval`(seq_name VARCHAR(50))
RETURNS VARCHAR(64) CHARSET utf8
DETERMINISTIC
BEGIN DECLARE retval VARCHAR(64);
SET retval="-999999999,null";
SELECT CONCAT(CAST(current_value AS CHAR),",",CAST(increment AS CHAR) ) INTO retval
FROM MYCAT_SEQUENCE WHERE NAME = seq_name;
RETURN retval ;
END
;;
DELIMITER ;
DROP FUNCTION IF EXISTS `mycat_seq_nextval`;
DELIMITER ;;
CREATE FUNCTION `mycat_seq_nextval`(seq_name VARCHAR(50)) RETURNS VARCHAR(64)
CHARSET utf8
DETERMINISTIC
BEGIN UPDATE MYCAT_SEQUENCE
SET current_value = current_value + increment
WHERE NAME = seq_name;
RETURN mycat_seq_currval(seq_name);
END
;;
DELIMITER ;
DROP FUNCTION IF EXISTS `mycat_seq_setval`;
DELIMITER ;;
CREATE FUNCTION `mycat_seq_setval`(seq_name VARCHAR(50), VALUE INTEGER)
RETURNS VARCHAR(64) CHARSET utf8
DETERMINISTIC
BEGIN UPDATE MYCAT_SEQUENCE
SET current_value = VALUE
WHERE NAME = seq_name;
RETURN mycat_seq_currval(seq_name);
END
;;
DELIMITER ;
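Once the table and functions are in place, they can be sanity-checked directly against MySQL. A quick check sketch (TEST_SEQ is a throwaway sequence name used only for this test):

```sql
-- Seed a throwaway sequence row
INSERT INTO MYCAT_SEQUENCE (name, current_value, increment)
VALUES ('TEST_SEQ', 0, 1);

-- currval returns the pair "current_value,increment" as a string
SELECT mycat_seq_currval('TEST_SEQ');      -- '0,1'

-- nextval bumps current_value by increment, then returns the new pair
SELECT mycat_seq_nextval('TEST_SEQ');      -- '1,1'

-- setval forces current_value to a given value
SELECT mycat_seq_setval('TEST_SEQ', 100);  -- '100,1'

-- For an unknown sequence name, currval returns the sentinel '-999999999,null'
SELECT mycat_seq_currval('NO_SUCH_SEQ');
```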
Insert a row into MYCAT_SEQUENCE:
insert into MYCAT_SEQUENCE (name,current_value,increment) values ('TB_HOUSE_RESOURCES',0,1);
Here name is the name of the table that needs auto-increment ids; current_value is 0, so the first inserted row gets id 1; increment is the step size, i.e. the value grows by 1 each time.
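You can confirm the row exists (and later watch current_value advance as MyCat allocates ids):

```sql
SELECT name, current_value, increment
FROM MYCAT_SEQUENCE
WHERE name = 'TB_HOUSE_RESOURCES';
```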
schema.xml (full configuration given at the end)
The key settings: checkSQLschema must be true, otherwise an error is thrown. In the first <table /> element, name is the table that needs auto-increment ids, dataNode lists the database nodes (two here, since the table is sharded), and primaryKey="ID" autoIncrement="true" declares id as the auto-increment primary key.
rule.xml (full configuration given at the end)
The key change: the <tableRule> named rule01 matches the rule attribute used in schema.xml.
Note: you may hit this error:
Caused by: org.xml.sax.SAXParseException; lineNumber: 150; columnNumber: 14; The content of element type "mycat:rule" must match "(tableRule*,function*)".
MyCat's rule.xml does not allow <tableRule> and <function> elements to be interleaved. They must appear in strict order: every <tableRule> in the file must come before every <function>.
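A valid layout therefore looks like this (skeleton only; rule and function bodies elided):

```xml
<mycat:rule xmlns:mycat="http://io.mycat/">
  <!-- all tableRule elements first ... -->
  <tableRule name="mod-long">...</tableRule>
  <tableRule name="rule01">...</tableRule>
  <!-- ... then all function elements -->
  <function name="mod-long" class="io.mycat.route.function.PartitionByMod">
    <property name="count">2</property>
  </function>
</mycat:rule>
```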
server.xml (full configuration given at the end)
Add:
<property name="sequnceHandlerType">1</property>
# set sequnceHandlerType to 1 (note: MyCat itself spells the property "sequnceHandlerType")
# 0 = local-file mode
# 1 = database mode, i.e. the auto-increment setup configured here
# 2 = timestamp mode
sequence_db_conf.properties
Configure as follows: TB_HOUSE_RESOURCES is the table being operated on, and dn1 is the node where MYCAT_SEQUENCE lives.
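A minimal sequence_db_conf.properties entry for this setup (format: SEQUENCE_NAME=dataNode; the GLOBAL line is the default entry MyCat ships with):

```properties
# sequence name -> dataNode that hosts the MYCAT_SEQUENCE table
GLOBAL=dn1
TB_HOUSE_RESOURCES=dn1
```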
That completes the configuration; restart MyCat and test.
If the table already contains data, set the starting id by updating current_value in the MYCAT_SEQUENCE row whose name matches the table.
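For example, if the table already holds rows up to id 5000 (5000 here is a placeholder for your actual maximum id):

```sql
UPDATE MYCAT_SEQUENCE
SET current_value = 5000
WHERE name = 'TB_HOUSE_RESOURCES';

-- or equivalently, via the helper function:
SELECT mycat_seq_setval('TB_HOUSE_RESOURCES', 5000);
```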
----------------------------------------------------------------------------------------------------------------
My complete configuration
schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<!-- Logical schema; the name haoke matches the schemas configured in server.xml -->
<schema name="haoke" checkSQLschema="true" sqlMaxLimit="100">
<!-- tb_house_resources is served by nodes dn1 and dn2 and sharded by rule01, defined in rule.xml -->
<table name="tb_house_resources" dataNode="dn1,dn2" primaryKey="ID" autoIncrement="true" rule="rule01" />
<table name="mycat_sequence" dataNode="dn1" primaryKey="name" />
<!-- tb_ad is served by dn3 -->
<table name="tb_ad" dataNode="dn3"/>
<table name="tb_estate" dataNode="dn3"/>
</schema>
<!-- A standalone database; it shows up as an extra schema when connecting to mycat -->
<schema name="mytest" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn4">
</schema>
<!-- Shard/node mapping -->
<!-- Nodes dn1, dn2, dn3 are referenced by the <schema> tables above; each maps to a dataHost (cluster1/2/3), whose <dataHost> element below holds the actual IPs; the database is haoke in each case -->
<dataNode name="dn1" dataHost="cluster1" database="haoke" />
<dataNode name="dn2" dataHost="cluster2" database="haoke" />
<dataNode name="dn3" dataHost="cluster3" database="haoke" />
<dataNode name="dn4" dataHost="cluster4" database="mytest" />
<!--
<dataNode name="dn4" dataHost="cluster4" database="mytest" />
<dataNode name="dn5" dataHost="cluster5" database="mytest" />
<dataNode name="dn6" dataHost="cluster6" database="mytest" />
-->
<!-- Connection details -->
<!-- PXC cluster -->
<dataHost name="cluster1" maxCon="1000" minCon="10" balance="2" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<!-- Write node: 192.168.142.130:13306 -->
<writeHost host="W1" url="192.168.142.130:13306" user="root" password="root">
<!-- Read node: 192.168.142.132:13306 -->
<readHost host="W1R1" url="192.168.142.132:13306" user="root" password="root" />
</writeHost>
</dataHost>
<dataHost name="cluster2" maxCon="1000" minCon="10" balance="2" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<!-- Write node: 192.168.142.133:13306 -->
<writeHost host="W2" url="192.168.142.133:13306" user="root" password="root">
<!-- Read node: 192.168.142.134:13306 -->
<readHost host="W2R1" url="192.168.142.134:13306" user="root" password="root" />
</writeHost>
</dataHost>
<!-- Regular master-slave replication database -->
<dataHost name="cluster3" maxCon="1000" minCon="10" balance="3" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<!-- Write node: 192.168.142.130:3306 -->
<writeHost host="W3" url="192.168.142.130:3306" user="root" password="root">
<!-- Read node: 192.168.142.133:3306 -->
<readHost host="W3R1" url="192.168.142.133:3306" user="root" password="root" />
</writeHost>
</dataHost>
<!-- Regular master-slave replication database -->
<dataHost name="cluster4" maxCon="1000" minCon="10" balance="3" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<!-- Write node: 192.168.142.130:3306 -->
<writeHost host="W3" url="192.168.142.130:3306" user="root" password="root">
<!-- Read node: 192.168.142.133:3306 -->
<readHost host="W3R1" url="192.168.142.133:3306" user="root" password="root" />
</writeHost>
</dataHost>
</mycat:schema>
rule.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- - - Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License. - You
may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0
- - Unless required by applicable law or agreed to in writing, software -
distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the
License for the specific language governing permissions and - limitations
under the License. -->
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
<tableRule name="rule1">
<rule>
<columns>id</columns>
<algorithm>func1</algorithm>
</rule>
</tableRule>
<tableRule name="rule2">
<rule>
<columns>user_id</columns>
<algorithm>func1</algorithm>
</rule>
</tableRule>
<tableRule name="sharding-by-intfile">
<rule>
<columns>sharding_id</columns>
<algorithm>hash-int</algorithm>
</rule>
</tableRule>
<tableRule name="auto-sharding-long">
<rule>
<columns>id</columns>
<algorithm>rang-long</algorithm>
</rule>
</tableRule>
<tableRule name="mod-long">
<rule>
<columns>id</columns>
<algorithm>mod-long</algorithm>
</rule>
</tableRule>
<tableRule name="sharding-by-murmur">
<rule>
<columns>id</columns>
<algorithm>murmur</algorithm>
</rule>
</tableRule>
<tableRule name="crc32slot">
<rule>
<columns>id</columns>
<algorithm>crc32slot</algorithm>
</rule>
</tableRule>
<tableRule name="sharding-by-month">
<rule>
<columns>create_time</columns>
<algorithm>partbymonth</algorithm>
</rule>
</tableRule>
<tableRule name="latest-month-calldate">
<rule>
<columns>calldate</columns>
<algorithm>latestMonth</algorithm>
</rule>
</tableRule>
<tableRule name="auto-sharding-rang-mod">
<rule>
<columns>id</columns>
<algorithm>rang-mod</algorithm>
</rule>
</tableRule>
<tableRule name="jch">
<rule>
<columns>id</columns>
<algorithm>jump-consistent-hash</algorithm>
</rule>
</tableRule>
<tableRule name="rule01" >
<rule>
<columns>id</columns>
<algorithm>mod-long</algorithm>
</rule>
</tableRule>
<function name="murmur"
class="io.mycat.route.function.PartitionByMurmurHash">
<property name="seed">0</property><!-- default is 0 -->
<property name="count">2</property><!-- number of database nodes to shard across; required, otherwise sharding is impossible -->
<property name="virtualBucketTimes">160</property><!-- each physical node is mapped to this many virtual nodes; default is 160, i.e. the virtual node count is 160x the physical node count -->
<!-- <property name="weightMapFile">weightMapFile</property> node weights; nodes without a weight default to 1. Properties-file format: key = node index (0 to count-1), value = weight. All weights must be positive integers, otherwise 1 is used instead -->
<!-- <property name="bucketMapPath">/etc/mycat/bucketMapPath</property>
used during testing to inspect the mapping between physical and virtual nodes; if set, the murmur hash of each virtual node and its physical node are written line by line to this file; there is no default, and nothing is written when unset -->
</function>
<function name="crc32slot"
class="io.mycat.route.function.PartitionByCRC32PreSlot">
</function>
<function name="hash-int"
class="io.mycat.route.function.PartitionByFileMap">
<property name="mapFile">partition-hash-int.txt</property>
</function>
<function name="rang-long"
class="io.mycat.route.function.AutoPartitionByLong">
<property name="mapFile">autopartition-long.txt</property>
</function>
<!-- Referenced by the rule in schema.xml; sharding rule for the PXC clusters -->
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
<!-- There are 2 PXC shards, so count is 2: ids are distributed modulo 2 -->
<property name="count">2</property>
</function>
<function name="func1" class="io.mycat.route.function.PartitionByLong">
<property name="partitionCount">8</property>
<property name="partitionLength">128</property>
</function>
<function name="latestMonth"
class="io.mycat.route.function.LatestMonthPartion">
<property name="splitOneDay">24</property>
</function>
<function name="partbymonth"
class="io.mycat.route.function.PartitionByMonth">
<property name="dateFormat">yyyy-MM-dd</property>
<property name="sBeginDate">2015-01-01</property>
</function>
<function name="rang-mod" class="io.mycat.route.function.PartitionByRangeMod">
<property name="mapFile">partition-range-mod.txt</property>
</function>
<function name="jump-consistent-hash" class="io.mycat.route.function.PartitionByJumpConsistentHash">
<property name="totalBuckets">3</property>
</function>
</mycat:rule>
server.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
<system>
<property name="nonePasswordLogin">0</property>
<property name="useHandshakeV10">1</property>
<property name="useSqlStat">0</property>
<property name="useGlobleTableCheck">0</property>
<property name="sequnceHandlerType">1</property>
<property name="subqueryRelationshipCheck">false</property>
<property name="processorBufferPoolType">0</property>
<property name="handleDistributedTransactions">0</property>
<property name="useOffHeapForMerge">1</property>
<property name="memoryPageSize">64k</property>
<property name="spillsFileBufferSize">1k</property>
<property name="useStreamOutput">0</property>
<property name="systemReserveMemorySize">384m</property>
<property name="useZKSwitch">false</property>
<!-- Service port and management port -->
<!--
In a single-machine setup the ports must be staggered to avoid clashes:
<property name="serverPort">18067</property>
<property name="managerPort">19067</property>
-->
</system>
<!-- User and virtual logical schemas -->
<!-- Login user itcast with password itcast123 -->
<user name="itcast" defaultAccount="true">
<property name="password">itcast123</property>
<!-- The schemas this user can access; these names must match the <schema> elements in schema.xml -->
<!-- i.e. the logical database names -->
<property name="schemas">haoke,mytest</property>
</user>
</mycat:server>